
Vendor and Cloud lock-in: Good? Bad? Indifferent?

Vendor lock-in, also known as proprietary lock-in or customer lock-in, is when a customer becomes dependent on a vendor for products and services. Thus, the customer is unable to use another vendor without substantial switching costs.

The evolving complexity of data center architectures makes migrating from one product to another difficult and painful regardless of the level of “lock-in.” As with applications, the more tightly an infrastructure solution is integrated with the surrounding architecture and business processes, the less likely it is to be replaced.

The expression “If it isn’t broke, don’t fix it” is commonplace in IT.

I have always touted the anti-vendor lock-in motto: everything should be Open Source, and the End User should have the ability to participate, contribute, consume and modify solutions to fit their specific needs. However, is this always the right solution?

Some companies are more limited when it comes to resources. Others are incredibly large and complex, making the adoption of Open Source (without support) complicated. Perhaps a customer requires a stable and validated platform to satisfy legal or compliance requirements. If the Vendor they select has a roadmap that matches the company’s, there might be synergy between the two and Vendor lock-in might thus be avoided. However, what happens when a Company or Vendor suddenly changes their roadmap?

Most organizations cannot move rapidly between architectures and platform investments (CAPEX), which typically occur only every 3-5 years. If the roadmap deviates, there could be problems.

For instance, again let’s assume the customer needs a stable and validated platform to satisfy legal, government or compliance requirements. Would Open Source be a good fit for them, or are they better off using a Closed Source solution? Do they have the necessary staff to support a truly Open Source solution internally without relying on a Vendor? Would it make sense for them to do this when CAPEX vs OPEX is compared?

The recent trend is for Vendors to develop Open Source solutions, using this as a means to market their Company as “Open,” which has become a buzzword. Terms like Distributed, Cloud, Scale Out, and Pets vs Cattle have also become commonplace in the IT industry.

If a Company or individual makes something Open Source but there is no community adoption or involvement, is it really an Open Source project? In my opinion, just because I post source code to GitHub doesn’t truthfully translate into a community project. There must be adoption and contribution to add features and fixes and to evolve the solution.

In my experience, the Open Source model works for some and not for others. It all depends on what you are building, who the End User is, the regulatory compliance requirements and the expectations you set for what you are hoping to achieve. Without setting expectations, milestones and goals, it is difficult to guarantee success.

Then comes the other major discussion surrounding Public Cloud and how some also consider it to be the next evolution of Vendor lock-in.

For example, if I deploy my infrastructure in Amazon and then choose to move to Google, Microsoft or Rackspace, is the incompatibility between different Public Clouds then considered lock-in? What about Hybrid Cloud? Where does that fit into this mix?

While there have been some standards put in place, such as OVF formats, the fact is that getting locked into a Public Cloud provider can be just as bad or even worse than being locked into an on-premise or Hybrid Cloud architecture; it all depends on how the implementation is designed. Moving forward, as Public Cloud grows in adoption, I think we will see more companies distribute their applications across multiple Public Cloud endpoints and use common software to manage the various environments, providing a “single pane of glass” view into their infrastructure. Solutions like CloudForms are trying to help solve these current and frustrating limitations.
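
As a rough illustration of what that “common software” layer might look like, here is a minimal Python sketch of a provider-agnostic interface. Every name in it (CloudProvider, AwsProvider, GoogleProvider, launch_instance) is invented for illustration and does not correspond to any real library or to how CloudForms actually works.

```python
# A minimal sketch of a provider-agnostic abstraction layer.
# All names (CloudProvider, AwsProvider, GoogleProvider, launch_instance)
# are hypothetical and only illustrate the "single pane of glass" idea.
from abc import ABC, abstractmethod


class CloudProvider(ABC):
    """Common interface a management tool could expose across Public Clouds."""

    @abstractmethod
    def launch_instance(self, image: str, size: str) -> str:
        """Start a compute instance and return its identifier."""


class AwsProvider(CloudProvider):
    def launch_instance(self, image: str, size: str) -> str:
        # A real tool would call the AWS API here (e.g. via boto3).
        return f"aws::{image}::{size}"


class GoogleProvider(CloudProvider):
    def launch_instance(self, image: str, size: str) -> str:
        # A real tool would call the Google Cloud API here.
        return f"gcp::{image}::{size}"


def deploy_everywhere(providers, image, size):
    """Deploy the same workload across multiple Public Cloud endpoints."""
    return [p.launch_instance(image, size) for p in providers]


if __name__ == "__main__":
    print(deploy_everywhere([AwsProvider(), GoogleProvider()], "rhel-7", "small"))
```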

Recently, I spoke with someone who mentioned their Company selected OpenStack to prevent Vendor lock-in as it’s truly an Open Source solution. While this is somewhat true, the reality is moving from one OpenStack distribution to another is far from simple. While the API level components and architecture are mostly the same across different distributions the underlying infrastructure can be substantially different. Is that not a type of Vendor lock-in? I believe the term could qualify as “Open Source solution lock-in.”

The next time someone mentions lock-in ask them what they truly mean and what they are honestly afraid of. Is it that they want to participate in the evolution of a solution or product or that they are terrified to admit they have been locked-in to a single Vendor for the foreseeable future?


The future is definitely headed towards Open Source solutions, and I think companies such as Red Hat and others will guide the way, providing support and validating these Open Source solutions to make them effortless to implement, maintain, and scale.

All one needs to do is look at the largest Software Company in the world, Microsoft, and see how they are aggressively adopting Open Source and Linux. This is a far cry from Microsoft v1.0, which invested solely in its own Operating System and neglected others such as Linux and Unix.

So, what do you think? Is Vendor lock-in, whether software related, hardware related, Private or Public Cloud, truly a bad thing for companies and End Users, or is it a case-by-case decision?


Cloud Wars – Starring Amazon, Microsoft, Google, Rackspace, Red Hat and OpenStack: The fate of the OS!?

Below is my opinion. Feel free to agree or disagree in the comments or shares but please be respectful to others.

There have been some discussions regarding the Cloud Wars and the current state of the Cloud. One thing I recently participated in was a discussion regarding Microsoft, Red Hat, and Linux distribution adoptions.

Since Microsoft announced the release of their software on Linux platforms and embraced Linux distributions and Linux-based software, many people are wondering what this brave new world will look like as we move into the future.

First, we should discuss the elephant in the room. Apple has grown considerably in the desktop market while other companies’ shares have shrunk. We cannot discount the fact that iOS/OS X are in fact Operating Systems. There are also other desktop/server Operating Systems such as Windows, Chrome OS, Fedora, CentOS, Ubuntu and other Linux distributions. My apologies for not calling out others, as there are far too many to mention. Please feel welcome to mention in the comments any I overlooked that you feel I should have included.

The recent partnership between Microsoft and Red Hat has been mutually beneficial, and we are seeing more companies that historically ignored Linux now forming alliances with Linux distributions as Linux has been widely adopted in the Enterprise. The “battlefield” is now more complex than ever.

Vendors must contend with customers moving to the Public Cloud and adopting “Cloud Centric” application design as they move to a Software as a Service model. In the Cloud, some Operating Systems will erode while others will flourish.

Let’s not forget there is a cost for using specific Operating Systems in the Cloud and other options can be less costly. There are ways to offset this by offering users the ability to bring their own licensing or selecting the de facto Operating System of choice for a Public Cloud. These can be viable options for some and deal breakers for others.

Public Clouds like Azure and Google are still young but they will both mature quickly. Many feel Google may mature faster than others and become a formidable opponent to the current Public Cloud leader which is Amazon.

Some have forgotten that Google was previously in a “Cloud War” of their own when they were competing with Yahoo, Microsoft, Ask, Cuil, Snap, Live Search, Excite and many others. The most recent statistics show Google holding at 67.7% of the search market, which is a considerable lead over everyone else. Google after all was born in the Cloud, lives in the Cloud and understands it better than anyone else. Many things they touch turn to gold, like Chrome, Gmail, Android and other web based applications.

https://www.netmarketshare.com/search-engine-market-share.aspx?qprid=4&qpcustomd=0

In the Private Cloud, Microsoft, VMware, Red Hat, Canonical, and Oracle are in contention with one another. Some are forming strategic alliances and partnerships for the greater good and pushing the evolution of software. Others are ignoring evolution and preferring to move forward, business as usual.

When market shares erode, companies sometimes rush and make poorly calculated decisions. One only needs to look at the fate of BlackBerry to see how a company can fall rapidly from the top to the bottom of the market. Last I checked, BlackBerry didn’t even hold 1% of the market in the Mobile arena.

As we move into the future of Cloud, whether Public or Private, we will see more strategic partnerships form and barriers collapse. With so many emerging technologies on the horizon and the Operating System becoming more of a platform for containerized applications, it is also becoming less relevant than before.

I have heard individuals predict that in the future we will write code directly to the Cloud, and I agree that this will eventually happen. Everything will be abstracted away from the developer or programmer and there will be “one ring to rule them all,” but the question to be answered is: which ring will that be, and who will be wearing it?

It’s doubtful we will ever only have a single Cloud, platform or programming language but I think we will see the birth of code and platform translators. I look at computers, technology and programming the same as spoken language. Some people learn a native language only, others learn a native tongue and branch out to other languages and may even one day become a philologist for example.

I am anxious to see how things evolve and am looking forward to seeing the development of the Cloud and internet applications. I hope I am able to witness things such as self-driving cars, self-piloting airplanes, and real-time data analysis.

Perhaps instead of Cloud we should use the term Galaxy.

OpenShift and Kubernetes!

Recently Red Hat launched OpenShift Enterprise 3.1, the company’s software for managing PaaS-based workloads. Google originally developed Kubernetes to manage large numbers of Containers within its own environment, so it was natural to integrate OpenShift Enterprise 3.1 with Kubernetes.

For those who are unsure what Kubernetes specifically is, it is a web-scale tool created by Google that is also appropriate for the enterprise. While even a large-scale enterprise may not run as many Containers as Google, Kubernetes-based technologies can nonetheless be used to manage the smaller, more diverse container workloads of the enterprise.

The theory behind Containers is eliminating waste and making infrastructure flexible and easier to deploy regardless of the footprint it is sitting on. A Container is unaware of, and doesn’t care, whether it’s running within a virtual machine or on bare metal, just as a virtual machine is unaware that it is operating as a fully independent operating system virtualized within a sandbox.

While virtualization has pushed technology forward, it has also led to sprawl and waste. If you consider a typical LAMP stack platform, there is tremendous waste involved in setting up each of the typical tiered components. Each virtual machine deployed has to be configured, patched for security, managed, updated and treated as a physical system, which also includes monitoring, licensing, and application uptime. So really, all we have accomplished is decreasing our hardware footprint and allowing a higher density ratio of applications to hardware resources.

This advancement though has not eliminated the need to manage the endpoints and with the recent string of security attacks, it’s easy to see that less is, in fact, more. The fewer endpoints you expose to the internet or between internal systems the less risk you have of becoming compromised. I am not going to dive deep into security design, ethics and such in this posting as that’s far too much to cover.

We also have not eliminated any of the “virtual waste” created by the transition from a physical world to a virtual world. Developers are notorious for requesting more resources than they need, and administrators are still stuck building virtual servers, deploying applications and maintaining the health of the entire stack, typically while being expected to do all of this with less staff because everything is virtual... It just runs. Right!?

By consolidating the endpoints that need to be managed, just as we did when moving from physical to virtual, Containers will increase density and efficiency. They will also lessen the burden on systems administrators and engineers, as there will be a substantial decrease in the number of endpoints to manage. This will increase productivity and lead to rapid software releases, as rolling deployments can be simplified.

The plain and simple truth is that Containers unlock the potential for a “build, test, run, and done” methodology. You can build a Container and run it in AWS, Google, Azure, in your own private data center or on a laptop. Containers are lightweight and can be “spun up” in seconds, as opposed to the minutes, hours or even days it can take with virtual machines. They use fewer resources, can be rightsized more appropriately and make better use of the resources they are allocated. Having a smaller footprint also simplifies managing endpoints.
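
As a small, hedged sketch of that portability, the snippet below uses the Docker SDK for Python to start a container image locally; the same image could be pushed to a registry and run unchanged on a laptop, in a private data center or in a Public Cloud. The image tag and port mapping are illustrative assumptions, not a recommendation.

```python
# Minimal sketch using the Docker SDK for Python (pip install docker).
# The image name and port mapping are illustrative; any OCI image behaves the same.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Start an Apache httpd container in the background, mapping port 80 to 8080.
container = client.containers.run(
    "httpd:2.4",
    detach=True,
    ports={"80/tcp": 8080},
)

print(f"Started container {container.short_id} in seconds, not minutes.")
```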

A great example of how a Container could be used would be a web-scale front-end store. An Enterprise could deploy web service Containers such as Apache in a stateless manner and scale up and out as needed, spinning up new Containers when traffic increases and then decommissioning them when traffic decreases. This is in stark contrast to a typical environment where you would build a virtual machine and deploy an application in advance of an expected increase in traffic. Then, once traffic decreases, you are left with nothing but waste as the application sits idle until the next burst.
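
As a rough sketch of that scale-up/scale-down pattern, the following uses the official Kubernetes Python client to resize a hypothetical stateless Apache Deployment. The Deployment name, namespace and traffic thresholds are assumptions for the example; a real environment would more likely rely on a HorizontalPodAutoscaler.

```python
# Sketch of scaling a stateless Apache Deployment with the Kubernetes Python client
# (pip install kubernetes). The Deployment name, namespace, and traffic thresholds
# are hypothetical.
from kubernetes import client, config


def scale_frontend(requests_per_second: float) -> None:
    config.load_kube_config()  # use local kubeconfig credentials
    apps = client.AppsV1Api()

    # Pick a replica count from current traffic (illustrative thresholds).
    if requests_per_second > 1000:
        replicas = 10
    elif requests_per_second > 100:
        replicas = 3
    else:
        replicas = 1  # decommission the extra Containers when traffic drops

    # Patch only the scale subresource of the Deployment.
    apps.patch_namespaced_deployment_scale(
        name="apache-frontend",
        namespace="web",
        body={"spec": {"replicas": replicas}},
    )


if __name__ == "__main__":
    scale_frontend(requests_per_second=250.0)
```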

Talk about a waste of resources!

All that cooling, power, licensing, monitoring, managing and data center footprint for something to just sit there, idle. It’s entirely a waste of CAPEX and OPEX!

Anyway! Back to Kubernetes!

Kubernetes works in conjunction with OpenShift to help provide lifecycle management for Containers. Kubernetes then takes this to the next level by providing orchestration and the ability to manage clusters of any size! Another benefit of Kubernetes container orchestration is that it can manage and allocate resources on a host or cluster dynamically, with fault tolerance, to guarantee workload reliability. It allows nodes to be tagged or labeled, allowing developers or administrators to select and control where defined workloads could and should be running.
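
To make the tagging idea concrete, here is a brief sketch with the Kubernetes Python client that labels a node and then defines a Pod restricted to nodes carrying that label. The node name, label key/value and image are hypothetical examples, not values from any real cluster.

```python
# Sketch of label-based workload placement with the Kubernetes Python client.
# The node name, label, namespace, and image are hypothetical examples.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Label a node so developers and administrators can target it, e.g. "tier=production".
core.patch_node("node-01", {"metadata": {"labels": {"tier": "production"}}})

# Define a Pod that may only be scheduled on nodes carrying that label.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="apache-prod"),
    spec=client.V1PodSpec(
        node_selector={"tier": "production"},
        containers=[client.V1Container(name="httpd", image="httpd:2.4")],
    ),
)

core.create_namespaced_pod(namespace="web", body=pod)
```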

This is especially useful when moving Containers between development, test, quality assurance, operational readiness and promotion to production. In some of these environments, there might be different hardware or service level agreements that must be met, and labeling helps ensure workloads land where those requirements can be satisfied.