Category: Linux

Vendor and Cloud lock-in: Good? Bad? Indifferent?

Vendor lock-in, also known as proprietary lock-in or customer lock-in, is when a customer becomes dependent on a vendor for products and services. Thus, the customer is unable to use another vendor without substantial switching costs.

The evolving complexity of data center architectures makes migrating from one product to another difficult and painful regardless of the level of “lock-in.” As with applications, the more tightly an infrastructure solution is integrated with the architecture and business processes, the less likely it is to be replaced.

The expression “If it ain’t broke, don’t fix it” is commonplace in IT.

I have always touted an anti-vendor lock-in motto. Everything should be Open Source, and the End User should have the ability to participate, contribute, consume, and modify solutions to fit their specific needs. However, is this always the right solution?

Some companies are more limited when it comes to resources. Others are incredibly large and complex, making the adoption of Open Source (without support) complicated. Perhaps a customer requires a stable and validated platform to satisfy legal or compliance requirements. If the Vendor they select has a roadmap that matches the company’s, there might be synergy between the two, and thus Vendor lock-in might be avoided. However, what happens when a Company or Vendor suddenly changes their roadmap?

Most organizations cannot move rapidly between architectures, and platform investments (CAPEX) typically occur only every 3-5 years. If the roadmap deviates, there could be problems.

For instance, let’s again assume the customer needs a stable and validated platform to satisfy legal, government, or compliance requirements. Would Open Source be a good fit for them, or are they better off using a Closed Source solution? Do they have the necessary staff to support a truly Open Source solution internally without relying on a Vendor? Would it make sense for them to do this when CAPEX vs. OPEX is compared?

The recent trend is for Vendors to develop Open Source solutions, using this as a means to market their Company as “Open,” which has become a buzzword. Terms like Distributed, Cloud, Scale Out, and Pets vs. Cattle have also become commonplace in the IT industry.

If a Company or individual makes something Open Source but there is no community adoption or involvement, is it really an Open Source project? In my opinion, just posting source code to GitHub doesn’t truthfully translate into a community project. There must be adoption and contribution to add features, fixes, and evolve the solution.

In my experience, the Open Source model works for some and not for others. It all depends on what you are building, who the End User is, the regulatory compliance requirements, and setting expectations for what you are hoping to achieve. Without setting expectations, milestones, and goals, it is difficult to guarantee success.

Then comes the other major discussion surrounding Public Cloud and how some also consider it to be the next evolution of Vendor lock-in.

For example, if I deploy my infrastructure in Amazon and then choose to move to Google, Microsoft or Rackspace, is the incompatibility between different Public Clouds then considered lock-in? What about Hybrid Cloud? Where does that fit into this mix?

While some standards have been put in place, such as OVF formats, the fact is that getting locked into a Public Cloud provider can be just as bad or even worse than being locked into an on-premise or Hybrid Cloud architecture; it all depends on how the implementation is designed. Moving forward, as Public Cloud grows in adoption, I think we will see more companies distribute their applications across multiple Public Cloud endpoints and use common software to manage the various environments, providing a “single pane of glass” view into their infrastructure. Solutions like CloudForms are trying to help solve these current and frustrating limitations.

Recently, I spoke with someone who mentioned their Company selected OpenStack to prevent Vendor lock-in, as it’s truly an Open Source solution. While this is somewhat true, the reality is that moving from one OpenStack distribution to another is far from simple. While the API-level components and architecture are mostly the same across different distributions, the underlying infrastructure can be substantially different. Is that not a type of Vendor lock-in? I believe the term “Open Source solution lock-in” could apply.

The next time someone mentions lock-in, ask them what they truly mean and what they are honestly afraid of. Is it that they want to participate in the evolution of a solution or product, or that they are terrified to admit they have been locked in to a single Vendor for the foreseeable future?

The future is definitely headed towards Open Source solutions, and I think companies such as Red Hat and others will guide the way, providing support and validation for these Open Source solutions and helping to make them effortless to implement, maintain, and scale.

All one needs to do is look at the largest Software Company in the world, Microsoft, and see how aggressively they are adopting Open Source and Linux. This is a far cry from Microsoft v1.0, which solely invested in its own Operating System and neglected others such as Linux and Unix.

So, what do you think? Is Vendor lock-in, whether software related, hardware related, Private or Public Cloud, truly a bad thing for companies and End Users, or is it a case-by-case basis?

Linux + Microsoft + Developers = Microsoft 2.0

I am sure everyone has heard the news. Microsoft has brought the bash shell into Windows. What does this mean and why should you care?

Let us discuss the facts:

First, everyone should calm down and grab a soda or your favorite beverage. Take a deep breath. Exhale. Now we can begin…

This is not a virtual machine or container embedded into Windows, nor is it cross-compiled tools; it is native.

Third-party tools have enabled this sort of thing for years but a direct partnership between Microsoft and Canonical will bring flexibility and convenience for developers who prefer using these binaries and tools.

More importantly, it speaks to Microsoft’s evolving position on open source. A group of engineers at Microsoft has been working diligently, adapting Microsoft Research technology to perform real-time translation of Linux syscalls into Windows syscalls.

Linux nerds can think of it as a kind of inverted Wine: Ubuntu binaries running natively in Windows. Microsoft describes it as their “Windows Subsystem for Linux.”

How big of a deal is this?

While I have to admit it’s cool, I am not shocked or surprised this has occurred. Microsoft’s desktop market share has been pretty flat or declining for the past few years, while operating systems from Apple, Google, and the Linux community have been slowly gaining ground.

People now have an overwhelming choice of Operating Systems to select from, and sometimes run multiple at once for development purposes. You can run Windows on a Mac natively, within a virtual machine like Parallels, or dual boot into Windows or OSX. The same goes for Linux and Windows. Virtualization within an Operating System is nothing new, but having it integrated into the OS is extremely different.

It has always been suggested that users new to Linux tend to select Ubuntu over other distributions because it is considered by some to be more desktop friendly. I agree and disagree: this was the case in the past, but I would say Fedora is just as friendly, and there are many other distributions, such as Arch, that some users claim are better for new Linux users or more feature rich. It really depends on what your needs are, and then selecting the Linux distribution that matches those needs or your own personal preferences. Another thing to mention is that hardware was previously another factor users had to consider, as some Linux distributions had more driver diversity than others, but I think we have mostly cleared that hurdle.

So, Linux runs on Windows. That’s pretty cool. Will this somehow change my life?

No. The average person’s life is probably not going to change. Coffee isn’t going to taste better or worse. They aren’t getting a pay increase, and no one is going to shower them with candy and gold. However, if you are a developer, it might be a bigger deal to you than to others.

None of this is ready for “prime time” today, as it was just announced, but it shows that Microsoft is thinking outside the box. As someone who worked there in the past, this is clearly a new Microsoft. I wonder if they will break Microsoft apart like Google did with Alphabet. That seems to be a common theme, with Dell selling off a part of itself and HP separating business units.

Maybe that will be the next big announcement?

Red Hat Gluster Storage now available in Google Cloud Platform!

Today we announced the availability of Red Hat Gluster Storage in Google Cloud Platform as a fully supported offering. Red Hat Gluster Storage will give Google Cloud Platform users the ability to use a scale-out, POSIX-compatible, massively scalable, elastic file storage solution with a global namespace.

This offering will bring existing users of Red Hat Gluster Storage another supported public cloud environment in which they can run their POSIX-compatible file storage workloads. For their part, Google Cloud Platform users will have access to Red Hat Gluster Storage, which they can use for several cloud storage use cases, including active archives, rich media streaming, video rendering, web serving, data analytics, and content management. POSIX compatibility will give users the ability to move their existing on-premise applications to Google Cloud Platform without having to rewrite applications to a different interface.

Enterprises can also migrate their data from an on-premise environment to the Google Cloud Platform, easily leveraging the geo-replication capabilities of Red Hat Gluster Storage.

A Red Hat Gluster Storage node in Google Cloud Platform is created by attaching Google standard persistent disks (HDD) or persistent solid-state drives (SSD) to a Google Compute Engine (GCE) instance. Two or more such nodes make up the trusted storage pool. To help protect against unexpected failures, the Red Hat Gluster Storage nodes that constitute the trusted storage pool should be instantiated across Google Cloud Platform zones (within the same region) rather than within a single zone.

Gluster volumes are created by aggregating available capacity from Red Hat Gluster Storage instances. Capacity can be dynamically expanded or shrunk to meet your changing business demands. Additionally, Red Hat Gluster Storage provides geo-replication capabilities that enable data to be asynchronously replicated from one Google Cloud Platform region to another, thereby enabling disaster recovery for usage scenarios that need it in a master-slave configuration.
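To make that workflow concrete, here is a minimal, hypothetical sketch of forming a small trusted storage pool, creating a replicated volume, and enabling geo-replication with the standard gluster CLI, driven from Python. The node names, brick paths, and volume names are placeholders I made up; the Red Hat documentation remains the authoritative procedure.

```python
# Hypothetical sketch: a two-node trusted storage pool, a replicated volume,
# and geo-replication to another region, using the standard gluster CLI.
# Node names, brick paths, and volume names are illustrative only.
import subprocess

def gluster(*args):
    """Run a gluster CLI command and raise if it fails."""
    cmd = ["gluster"] + list(args)
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Form the trusted storage pool (run from gluster-node-1).
gluster("peer", "probe", "gluster-node-2")

# 2. Create and start a 2-way replicated volume from the attached
#    persistent disks (bricks) on each node.
gluster("volume", "create", "gvol0", "replica", "2",
        "gluster-node-1:/bricks/brick1/gvol0",
        "gluster-node-2:/bricks/brick1/gvol0")
gluster("volume", "start", "gvol0")

# 3. Asynchronously geo-replicate gvol0 to a volume in another region
#    for disaster recovery (master-slave).
gluster("volume", "geo-replication", "gvol0",
        "dr-node-1::gvol0-dr", "create", "push-pem")
gluster("volume", "geo-replication", "gvol0",
        "dr-node-1::gvol0-dr", "start")
```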

Anticipated roadmap features, such as file-based tiering in Red Hat Gluster Storage, include the capability to create volumes with a mix of SSD- and HDD-based persistent disks, providing transparent storage tiering (hierarchical storage management) in the cloud.

Red Hat Gluster Storage in Google Cloud Platform will be accessed using the highly performant Gluster native (FUSE-based) client from Red Hat Enterprise Linux 6, Red Hat Enterprise Linux 7, and other Linux-based clients. Users may also use NFS or SMB.
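On the client side, mounting the same hypothetical gvol0 volume might look like the sketch below, using either the native FUSE client or NFS; the hostname and mount points are placeholders, and the commands assume root privileges.

```python
# Hypothetical sketch: mounting the gvol0 volume on a Linux client,
# either with the native FUSE client or over NFS (run as root).
import subprocess

# Native (FUSE-based) Gluster client mount.
subprocess.run(["mount", "-t", "glusterfs",
                "gluster-node-1:/gvol0", "/mnt/gvol0"], check=True)

# Alternatively, mount the same volume over NFS (Gluster NFS is v3).
# subprocess.run(["mount", "-t", "nfs", "-o", "vers=3",
#                 "gluster-node-1:/gvol0", "/mnt/gvol0-nfs"], check=True)
```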

We are excited that users will be able to take advantage of all the Red Hat Gluster Storage features in Google Cloud Platform, including replication, snapshots, directory quotas, erasure coding, bit-rot scrubbing, and geo-replication, because they now have a compelling option for their scale-out file storage use cases in Google’s cloud.

Republished from: http://goo.gl/9dLkC8

OpenShift and Kubernetes!

Recently Red Hat launched OpenShift Enterprise 3.1, the company’s software for managing PaaS-based workloads. Google originally developed Kubernetes to manage large numbers of Containers within its own environment, and it was natural to integrate OpenShift Enterprise 3.1 with Kubernetes.

For those who are unsure what Kubernetes specifically is: Kubernetes is a web-scale tool created by Google that is also appropriate for the enterprise. While most large-scale enterprises may not have as many Containers as Google, Kubernetes-based technologies can nonetheless be used to manage the smaller, more diverse container workloads of the enterprise.

The theory behind Containers is eliminating waste and making infrastructure flexible and easier to deploy regardless of the footprint it is sitting on. A Container is unaware of, and doesn’t care, whether it’s running within a virtual machine or on bare metal, just as a virtual machine is not aware that it is operating as a fully independent operating system virtualized within a sandbox.

While virtualization has pushed technology forward, it has also led to sprawl and waste. If you consider a typical LAMP stack platform, there is tremendous waste involved in setting up each of the typical tiered components. Each virtual machine deployed has to be configured, patched for security, managed, updated, and treated as a physical system, which also includes monitoring, licensing, and application uptime. So really, all we have accomplished is decreasing our hardware footprint and allowing a higher density ratio of applications to hardware resources.

This advancement, though, has not eliminated the need to manage the endpoints, and with the recent string of security attacks, it’s easy to see that less is, in fact, more. The fewer endpoints you expose to the internet or between internal systems, the less risk you have of becoming compromised. I am not going to dive deep into security design, ethics, and such in this post, as that’s far too much to cover.

We also have not eliminated any of the “virtual waste” created by the transition from a physical world to a virtual world. Developers are notorious for requesting more resources than are needed, and administrators are still stuck building virtual servers, deploying applications, and maintaining the health of the entire stack, typically with less staff, because everything is virtual… it just runs. Right!?

By consolidating the number of endpoints that need to be managed, just as we did when moving from physical to virtual, Containers will increase density and efficiency. They will also lessen the burden on systems administrators and engineers, as there will be a substantial decrease in the number of endpoints that need to be managed. This will increase productivity and lead to more rapid software releases, as rolling deployments can be simplified.

The plain and simple truth is that Containers unlock the potential for a “build, test, run, and done” methodology. You can build a Container and run it in AWS, Google, Azure, in your own private data center, or on a laptop. Containers are lightweight and can be “spun up” in seconds, as opposed to the minutes or hours required for virtual machines. They use fewer resources, can be rightsized more appropriately, and make better use of the resources they are allocated. Having a smaller footprint also simplifies managing endpoints.

A great example of how a Container could be used would be a web-scale front-end store. An Enterprise could deploy web service Containers such as Apache in a stateless manner and scale up and out as needed, spinning up new Containers when traffic increases and then decommissioning them when traffic decreases. This is in stark contrast to a typical environment where you would build a virtual machine and deploy an application in advance of an expected increase in traffic. Then, once traffic decreases, you are left with nothing but waste as the application sits idle until the next burst.
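As a rough sketch of what that elastic, stateless front end could look like on Kubernetes, the snippet below uses the official kubernetes Python client to deploy Apache (httpd) Containers and then scale them with traffic. The names, namespace, image tag, and replica counts are purely illustrative assumptions, not something from the product documentation.

```python
# Hypothetical sketch: deploying stateless Apache (httpd) containers and
# scaling them with traffic, using the official `kubernetes` Python client.
from kubernetes import client, config

config.load_kube_config()          # or load_incluster_config() inside a pod
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web-frontend"),
    spec=client.V1DeploymentSpec(
        replicas=2,                                    # steady-state size
        selector=client.V1LabelSelector(match_labels={"app": "web-frontend"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web-frontend"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="httpd",
                    image="httpd:2.4",
                    ports=[client.V1ContainerPort(container_port=80)],
                )
            ]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Traffic spike: spin up more Containers in seconds...
apps.patch_namespaced_deployment_scale(
    name="web-frontend", namespace="default",
    body={"spec": {"replicas": 10}})

# ...and decommission them when traffic drops, instead of leaving idle VMs.
apps.patch_namespaced_deployment_scale(
    name="web-frontend", namespace="default",
    body={"spec": {"replicas": 2}})
```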

Talk about a waste of resources!

All that cooling, power, licensing, monitoring, managing and data center footprint for something to just sit there, idle. It’s entirely a waste of CAPEX and OPEX!

Anyway! Back to Kubernetes!

Kubernetes works in conjunction with OpenShift to help provide lifecycle management for Containers. Kubernetes then takes this to the next level by providing orchestration and the ability to manage clusters of any size! Another benefit of Kubernetes’ container orchestration is that it can manage and allocate resources on a host or cluster dynamically, with fault tolerance, to help guarantee workload reliability. It allows nodes to be tagged or labeled, letting developers or administrators select and control where defined workloads could and should run.
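Here is a small, hypothetical example of that tagging in action with the same Python client: a node is labeled, and a workload is pinned to nodes carrying those labels. The node name, labels, and image are made up for illustration.

```python
# Hypothetical sketch: labeling a node and pinning a workload to it.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Tag a node, e.g. to mark environment or hardware class (names are made up).
core.patch_node("worker-03",
                {"metadata": {"labels": {"env": "qa", "disk": "ssd"}}})

# A pod that must land on nodes carrying those labels.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="report-batch"),
    spec=client.V1PodSpec(
        node_selector={"env": "qa", "disk": "ssd"},
        containers=[client.V1Container(name="batch", image="httpd:2.4")],
    ),
)
core.create_namespaced_pod(namespace="default", body=pod)
```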

This is especially useful when moving Containers between development, test, quality assurance, operational readiness, and promotion to production. In some of these environments, there might be different hardware or service level agreements that must be met, and node labels help ensure workloads land where those requirements can be satisfied.

Apple vs Android? Fortune vs Future

It happens again, every year. Apple releases their new hardware and software updates, and like a moth drawn to a flame, I MUST see and experience what all the fuss is about.

I tried to love the iPhone when it was first released, but because MMS didn’t work, battery life was abysmal, and it was lacking in multiple other categories (specifically security and encryption), it failed to replace my Blackberry. A year or so went on, and with every new release and update more of my friends jumped on board the Apple train. Those with Sidekicks threw away their fun flippy keyboards, and Blackberry users ditched their tiny-screen, all-day-battery-life devices, embracing touch-screen Apple goodness sent from the Heavens. I assume many Windows Mobile users did the same, though I can’t say I know anyone who ever used Windows Mobile.

It was the second iteration of the iPhone 4 that once again tricked me into looking at the iPhone platform. The aggressive curves, glass body, smooth user interface, wonderful camera, and all the other goodness they packed behind that tiny little 4″ screen just called out to me. Watching Jonathan Ive, with his charismatic and soothing voice, talk about this magical device made me lust to try the iPhone once again. After buying and using it for a few days, I went back to my Blackberry. Once again, battery life was the primary problem, and I felt security was still not entirely addressed either.

Fast forward to 2015 as we move into 2016. The iPhone 6s is all grown up. The hardware is as pretty as ever, and iOS has all of the features I always felt the operating system was missing to be a true Enterprise phone, including full encryption. So why, with all of the issues (excluding battery) resolved, can I not bring myself to own such a device? Why can’t I just be like the other tens of millions of people? As I sat in the sterile white Apple Store showroom, staring wide-eyed at the silver iPhone 6s in my hand, I kept wondering, “What is wrong with me? Why can’t I be like everyone else?”

That is when reality whacked me in the side of the head! Well, not really. It was some woman who had just purchased a 27″ iMac and was attempting to navigate the horde of customers in the store, all bedazzled by the Apple gadgets and gizmos, who accidentally (or maybe purposefully) slammed her new toy into the side of my head. Regardless, it knocked something loose and I had an epiphany.

I cannot own an iPhone because it forces users to conform. The iPhone only allows you to interact with it in a specific manner and allows no customizability other than rearranging the little dancing application icons on your home screen; everything is inflexible! The operating system is right, you are wrong, accept it, move on. I am a USER (0444) and thus should be treated as such. Like a child whose parental unit knows better, let the iPhone decide what I should and should not be able to do.

The other problem I have with the platform is the lack of developer freedom and the refusal to adopt specific hardware. Android manufacturers have been using NFC for longer than I can remember, and wireless charging has become mainstream. In fact, many Starbucks locations in Seattle actually have units built into tables that can wirelessly charge your mobile device.

I love using enhanced features such as Samsung Pay and Android Pay. I use them at least once a day, if not more. It is so simple compared to rummaging through my wallet, selecting a credit card, swiping it (or inserting the chip side), waiting, and then having to sign (sometimes) and collect a receipt. How annoying is that? It’s soooooo 1990. Have we gone back in time? Because it sure feels like it.

Anyway, this was not meant to be a long post, and surely not one to spark a debate at 5:41 am on a Wednesday morning. Speaking of which, why the heck am I awake at this time anyway?

How to build a large scale multi-tenant cloud solution

It’s not terribly difficult to design and build a turnkey, integrated, pre-configured, ready-to-use SDDC solution. It is much harder to build one that completely abstracts the physical compute, storage, and network resources, provides multiple tenants a pool of logical resources along with all the necessary management, operational, and application-level services, and allows resources to scale through the seamless addition of new rack units.

The architecture should be a vendor-agnostic solution with limited software tie-in to vendor-specific hardware, but expandable to support various vendors’ hardware through a plug-and-play architecture.

Decisions should be made early about whether the solution will come in various form factors, from appliances to quarter, half, and full racks, providing different levels of capacity, performance, redundancy/HA, and SLAs. The architecture should be built from the ground up to expand to mega rack-scale in the future, with distributed infrastructure resources, without impacting the customer experience and usage.

The design should contain more than one physical rack, with each rack unit composed of:

· Compute servers with direct attached storage (software defined)
· Top of Rack and Management switches
· Data Plane, Control Plane, and Management Plane software
· Platform-level Operations, Management, and Monitoring software
· Application-centric workload services

Most companies have a solution based on a number of existing technologies, architectures, products, and processes that have been part of their legacy application hosting and IT operations environments. These environments can usually be repurposed for some of the scalable cloud components, which saves time and cost, and the result is a stable environment that operations can still manage and operate with existing processes and solutions.

In order to evolve the platform to provide not only stability and supportability but also additional features such as elasticity and improved time to market, companies should immediately initiate a project to investigate and redesign the underlying platform.

In scope for this effort are assessments of the network physical and logical architecture, the server physical and logical architecture, the storage physical and logical architecture, the server virtualization technology, and the platform-as-a-service technology.

The approach to this effort will include building a mini proof of concept based on a hypothesized preferred architecture and benchmarking it against alternative designs. This proof of concept should then be able to scale to a production-sized system.

Implement a scalable, elastic IaaS/PaaS leveraging self-service automation and orchestration that gives end users the ability to self-provision applications within the cloud itself.

Suggested phases of the project would be as follows:

  • Phase I: Implementation of POC platforms
  • Phase II: Implementation of logical resources
  • Phase III: Validation of physical and logical resources
  • Phase IV: Implementation of platform-as-a-service components
  • Phase V: Validation of platform-as-a-service components
  • Phase VI: Platform-as-a-service testing begins
  • Phase VII: Review, documentation, and complete knowledge transfer
  • Phase VIII: Present findings to executive management

Typically, there are four fundamental components to cloud design: infrastructure, platform, applications, and business process.

The infrastructure and platform as a service components are typically the ideal starting place to drive new revenue opportunities, whether by reselling or enabling greater agility within the business.

With industries embracing cloud design at a record pace and technology corporations focusing on automation, the benefits of moving towards a cloud data infrastructure design are clear.

A cloud data infrastructure provides the ability to deliver services, servers, storage, and networking on demand, at any time, with minimal limits, helping to create new opportunities and drive new revenue.

The “Elastic” pay-as-you-go data center infrastructure should provide a managed services platform allowing application owner groups the ability to operate individually while sharing a stable common platform.

Having a common platform and infrastructure model will allow applications to mature while minimizing code changes and revisions due to hardware, drivers, software dependencies and infrastructure lifecycle changes.

This will provide a stable scalable solution that can be deployed at any location regardless of geography.

Today’s data centers are migrating away from the client-server distributed model of the past towards the more virtualized model of the future.

· Storage: As business applications grow in complexity, the need for larger, more reliable storage becomes a data center imperative.
· Disaster Recovery / Business Continuity: Data centers must maintain business processes for the overall business to remain competitive.
· Cooling: Dense server racks make it very difficult to keep data centers cool and keep costs down.
· Cabling: Many of today’s data centers have evolved into a complex mass of interconnected cables that further increases rack density and further reduces data center ventilation.

These virtualization strategies introduce their own unique set of problems, such as security vulnerabilities, limited management capabilities, and many of the same proprietary limitations encountered with the previous generation of data center components.

When taken together, these limitations serve as barriers against the promise of application agility that the virtualized data center was intended to provide.

The fundamental building block of an elastic infrastructure is the workload. A workload should be thought of as the amount of work that a single server or “application gear/container/instance” can provide given the amount of resources allocated to it.

Those resources encompass compute (CPU and RAM), data (disk latency and throughput), and networking (latency and throughput). A workload is an application, part of an application, or a group of applications that work together. There are two general types of workload that most customers need to address: those running within a Platform-as-a-Service construct and those running on a hypervisor construct. In rare circumstances, bare metal should also be considered where applicable.
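To make that definition concrete, a workload and its allocated resources could be modeled roughly as below; the fields and numbers are illustrative assumptions, not a prescribed schema.

```python
# Hypothetical sketch: modeling a workload and its allocated resources.
from dataclasses import dataclass
from enum import Enum

class WorkloadType(Enum):
    PAAS = "platform-as-a-service"   # runs in a PaaS construct
    HYPERVISOR = "hypervisor"        # runs as a virtual machine
    BARE_METAL = "bare-metal"        # rare, where applicable

@dataclass
class Workload:
    name: str
    kind: WorkloadType
    vcpus: int                 # compute
    ram_gb: int
    disk_iops: int             # data: latency/throughput
    disk_throughput_mbps: int
    net_throughput_mbps: int   # networking

# An application, part of an application, or a group that works together.
web_tier = Workload("web-frontend", WorkloadType.PAAS,
                    vcpus=2, ram_gb=4, disk_iops=500,
                    disk_throughput_mbps=100, net_throughput_mbps=250)
```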

Much like database sharding, the design should be bounded by fundamental sizing limitations, allowing a subset of resources to be configured at a maximum size, hosting multiple copies of virtual machines and application groups distributed and load balanced across a cluster of hypervisors that share a common persistent storage back end.

This is similar to load balancing but not exactly the same, as a customer or specific application will only be placed in a particular ‘Cradle.’ A distribution system will be developed to determine where tenants will be placed upon login and direct them to the Cradle they were assigned.
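One simple way such a distribution system could work is a stable hash of the tenant identifier, so a tenant is always directed to the same Cradle on every login. This is only an illustrative sketch with made-up Cradle names; a real system would likely persist assignments so they survive adding or removing Cradles.

```python
# Hypothetical sketch: deterministically assigning tenants to "Cradles"
# so that a tenant always lands on the same subset of infrastructure.
import hashlib

CRADLES = ["cradle-01", "cradle-02", "cradle-03", "cradle-04"]

def assign_cradle(tenant_id: str) -> str:
    """Stable hash of the tenant ID -> Cradle; same answer on every login."""
    digest = hashlib.sha256(tenant_id.encode("utf-8")).hexdigest()
    return CRADLES[int(digest, 16) % len(CRADLES)]

# On login, the distribution layer looks up (or computes) the assignment
# and directs the tenant's requests to that Cradle.
print(assign_cradle("acme-corp"))     # always the same Cradle per tenant
print(assign_cradle("globex-inc"))
```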

In order to aggregate as many workloads as possible in each availability zone or region, a specific reference architecture design should be made to determine the ratio of virtual servers per physical server.

The size will be driven by a variety of factors, including oversubscription models, technology partners, and network limitations. The initial offering will result in a prototype and help determine scalability and capacity, and this design should scale in a linear, predictable fashion.
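As a back-of-the-envelope illustration of how oversubscription drives the virtual-to-physical ratio, consider the sketch below; every number in it is hypothetical.

```python
# Hypothetical sketch: estimating virtual servers per physical host
# under simple CPU and RAM oversubscription models.
def vms_per_host(host_cores, host_ram_gb,
                 vm_vcpus, vm_ram_gb,
                 cpu_oversub=4.0, ram_oversub=1.5):
    """Return how many VMs of this shape fit on one host."""
    by_cpu = (host_cores * cpu_oversub) // vm_vcpus
    by_ram = (host_ram_gb * ram_oversub) // vm_ram_gb
    return int(min(by_cpu, by_ram))   # the tighter constraint wins

# Example: a 32-core host with 512 GB RAM, 4 vCPU / 16 GB VMs,
# 4:1 CPU and 1.5:1 RAM oversubscription (illustrative numbers only).
count = vms_per_host(host_cores=32, host_ram_gb=512,
                     vm_vcpus=4, vm_ram_gb=16)
print(count, "VMs per host")          # -> 32 (CPU-bound in this example)
```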

The cloud control system and its associated implementations will be comprised of Regions and Availability Zones, similar in many ways to what Amazon AWS does currently.

The availability zone model isolates one fault domain from another. Each availability zone has isolation and redundancy in management, hardware, network, power, and facilities. If power is lost in a given availability zone, tenants in another availability zone are not impacted. Each availability zone resides in a single datacenter facility and is relatively independent. Availability zones are then aggregated into regions, and regions into the global resource pool.

The basic components would be as follows:

· Hypervisor and container management control plane
· Cloud orchestration
· Cloud blueprints/templates
· Automation
· Operating system and application provisioning
· Continuous application delivery
· Utilization monitoring, capacity planning, and reporting

Hardware considerations should be as follows:

· Compute scalability
· Compute performance
· Storage scalability
· Storage performance
· Network scalability
· Network performance
· Network architecture limitations
· Oversubscription rates & capacity planning
· Solid-state flash leveraged to increase performance and decrease deployment times

Business concerns would be:

· Cost-basis requirements
· Margins
· Calculating cost vs. profit to show ROI (chargeback/showback)
· Licensing costs

The extensibility of the solution dictates the ability to use third-party tools for authentication, monitoring, and legacy applications. The best cloud control systems allow legacy systems and software to be integrated with relative ease. It’s my own personal preference to lead with Open Source software, but that decision is left to the user.

Monitoring, capacity planning, and resource optimization should consider the following:

· Reactive – Break-Fix monitoring where systems and nodes are monitored for availability and service is manually restored
· Proactive – Collect metrics data to maintain availability, performance, and meet SLA requirements
· Forecasting – Use proactive metric data to perform capacity planning and optimize capital usage (see the sketch below)
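A toy example of the forecasting step: fit a linear trend to collected utilization metrics and estimate how much headroom remains before a planning threshold is crossed. The data points and threshold below are invented for illustration.

```python
# Hypothetical sketch: using collected utilization metrics to forecast
# when a resource pool will approach capacity (simple linear trend).
import numpy as np

# Weekly average CPU utilization of a pool, in percent (invented data).
weeks = np.arange(12)
cpu_util = np.array([41, 43, 44, 47, 49, 50, 53, 55, 58, 60, 61, 64])

slope, intercept = np.polyfit(weeks, cpu_util, 1)   # util ~ slope*week + b
capacity_threshold = 85.0                           # plan expansion before this

weeks_until_full = (capacity_threshold - intercept) / slope
print(f"Trend: +{slope:.1f}% per week; "
      f"~{weeks_until_full - weeks[-1]:.0f} weeks of headroom left")
```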

Because cloud computing is a fundamental paradigm shift in how Information Technology services are delivered, it will cause significant disruption inside most current organizations. Helping each of these organizations embrace the change will be key.

While the final impacts are currently impossible to measure, it’s clear that a self-service model is the future and is integral to delivering customer satisfaction, from both an internal and an external user perspective.

Some proof of concept initiatives would be as follows:

· Determine a go-forward architecture for the IaaS and PaaS offering inclusive of a software defined network
· Benchmark competing architecture options against one another from a price, performance, and manageability perspective
· Establish a “mini-cradle” that can be maintained and used for future infrastructure design initiatives and tests
· Determine how application deployment can be fully or partially automated
· Determine a cloud control system to facilitate provisioning of Operating Systems and multi-tiered applications
· Complete the delivery of FAC to generate metrics and provide statistics
· Show the value of self-service to internal organizations
· Measure the ROI based on cost of the cloud service delivery combined with the business value
· Don’t build complexity into the initial offering
· Avoid spending large amounts of capital expenses on the initial design

After implementing a proof of concept, testing encompassing the following (and more) should be done:

Proof of Functionality

  • The solution system runs in our datacenter, on our hardware
  • The solution system can be implemented with multi-network configuration
  • The solution system can be implemented with as few manual steps as possible (automated installation)
  • The solution systems have the ability to drive implementation via API
  • The solution system provides a single point of management for all components
  • The solution system enables dynamic application mobility by decoupling the definition of an application from the underlying hardware and software
  • The solution system can support FAC production operating systems
  • The solution system Hypervisor and guest OS are installed and fully functional
  • The solution systems support internal and external authentication against existing authentication infrastructure.
  • The solution system functions as designed and tested

Proof of Resiliency

  • The solution system components are designed for high availability
  • The solution system provides multi-zone (inter-DC, inter-region, etc.) management
  • The solution system provides multi Data Center management

Integration Testing

  • The solution system is compatible with legacy, current, and future systems integration

Complexity Testing

  • The solution system has the ability to manage both simple and complex configurations

Metric Creation

  • The solution systems have metrics that can be monitored