Category: Docker/Containers

Red Hat Gluster Storage Leads The Charge on Persistent Storage for Containers

Offers choice of deployment configurations for containerized applications

By Irshad Raihan and Sayan Saha, Red Hat Storage

One of the key reasons that software-defined storage has risen to fame over the last decade is the many forms of agility it offers. As we move into the era of application-centric IT, microservices, and containers, agility isn't just a good idea; it can mean the difference between survival and extinction.

 

Agility in a container-centric data center

As we covered in a recent webinar, Red Hat Gluster Storage offers unique value to developers and administrators looking for a storage solution that is not only container-aware but also serves out storage for containerized applications natively.

One critical aspect of agility offered by Red Hat Storage is that the storage can be deployed in a number of configurations relative to the hardware where the containers reside. This allows architects to choose the configuration that makes the most sense for their particular situation, yet lets them transition to a different configuration with minimal impact to applications.

 

Dedicated scale-out storage for containerized applications

If you’re a storage admin looking to provide a stand-alone storage volume to applications running in containers, Red Hat Gluster Storage can expose a mount point so your applications have access to a durable, distributed storage cluster.


In this configuration, the Red Hat Gluster Storage installation runs in an independent cluster (either on premises or in one of the supported public clouds: Microsoft Azure, AWS, or Google Cloud Platform) and is accessed over the network from a platform like Red Hat OpenShift.

Red Hat OpenShift, which is optimized to run containerized applications and workloads, ships with the Gluster storage plugins necessary to make this configuration work out of the box.
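To make the configuration concrete, here is a minimal sketch (not the official setup procedure) of how a Kubernetes/OpenShift cluster could consume such a dedicated Gluster cluster: an Endpoints object that points at the Gluster servers, plus a PersistentVolume that references an existing Gluster volume. It uses the Python kubernetes client; the IP addresses, volume name, and capacity are hypothetical placeholders.

```python
# Minimal sketch: register an existing Gluster volume from a dedicated storage
# cluster as a Kubernetes/OpenShift PersistentVolume. IPs, names, and sizes
# are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config()
core = client.CoreV1Api()

# Endpoints object listing the Gluster servers the volume plugin mounts from.
core.create_namespaced_endpoints("default", {
    "apiVersion": "v1",
    "kind": "Endpoints",
    "metadata": {"name": "glusterfs-cluster"},
    "subsets": [{
        "addresses": [{"ip": "192.168.1.11"}, {"ip": "192.168.1.12"}],
        "ports": [{"port": 1}],    # a port entry is required; its value is unused
    }],
})

# PersistentVolume backed by an existing Gluster volume named "gv0".
core.create_persistent_volume({
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "gluster-pv"},
    "spec": {
        "capacity": {"storage": "100Gi"},
        "accessModes": ["ReadWriteMany"],
        "persistentVolumeReclaimPolicy": "Retain",
        "glusterfs": {"endpoints": "glusterfs-cluster", "path": "gv0"},
    },
})
```

Application pods then claim the volume with an ordinary PersistentVolumeClaim, and the Gluster mount follows them wherever the scheduler places them.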

 

Container Native Storage – Persistent storage for containers with containers!

In another deployment configuration, containerized Red Hat Gluster Storage runs inside Red Hat's OpenShift Container Platform. Red Hat Gluster Storage containers are orchestrated by Kubernetes, OpenShift's container orchestrator, like any other application container.

The storage container (a Kubernetes pod) pools local or direct-attached storage from hosts and serves it out to application containers for their persistent storage needs, offering Gluster's rich set of enterprise-class storage features, data services, and data protection capabilities to applications and microservices running in OpenShift.

Exactly one privileged Red Hat Gluster Storage container is instantiated per host as a Kubernetes pod. As a user, you benefit from being able to deploy enterprise-grade storage using a workflow consistent with your application orchestration, adopt a converged (compute + storage) deployment model, and choose storage-intensive nodes (hosts with local or direct-attached storage) within a cluster for deploying storage containers, optionally collocated with application containers.
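The supported install is driven by Red Hat's container-native storage tooling and templates, but conceptually the one-privileged-pod-per-host pattern maps onto a Kubernetes DaemonSet restricted to labeled storage nodes. The sketch below is purely illustrative of that pattern, not the product installer; the namespace, image name, node label, and host path are assumptions.

```python
# Illustrative only: the "one privileged storage pod per labeled host" pattern
# expressed as a DaemonSet. Namespace, image, labels, and host paths are
# assumptions; the supported deployment uses Red Hat's own templates.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

apps.create_namespaced_daemon_set("storage", {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "glusterfs"},
    "spec": {
        "selector": {"matchLabels": {"app": "glusterfs"}},
        "template": {
            "metadata": {"labels": {"app": "glusterfs"}},
            "spec": {
                "nodeSelector": {"storagenode": "glusterfs"},  # only storage-intensive hosts
                "hostNetwork": True,
                "containers": [{
                    "name": "glusterfs",
                    "image": "rhgs3/rhgs-server-rhel7",        # assumed image reference
                    "securityContext": {"privileged": True},   # needs host device access
                    "volumeMounts": [{"name": "glusterfs-state",
                                      "mountPath": "/var/lib/glusterd"}],
                }],
                "volumes": [{"name": "glusterfs-state",
                             "hostPath": {"path": "/var/lib/glusterd"}}],
            },
        },
    },
})
```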


This solution, known as container-native storage and currently generally available from Red Hat, leverages an open source project named Heketi, contributed by Luis Pabón (one of the speakers on the recent webinar). Heketi is a RESTful volume manager that allows for programmatic volume allocation and provides the glue necessary to manage multiple Gluster volumes across clusters, allowing Kubernetes to provision storage without being limited to a single Red Hat Gluster Storage cluster.

Heketi enhances the experience of dynamically managing storage, whether through its API or as a developer working in the OpenShift Container Platform. In the container-native storage solution, Heketi itself runs as a container inside OpenShift and provides a service endpoint for Gluster. As a storage administrator, you no longer need to manage or configure bricks, disks, or trusted storage pools; the Heketi service manages the hardware for you and allocates storage on demand. Any disks registered with Heketi must be provided in raw format, which Heketi then manages using LVM.
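To illustrate what this buys you, here is a hedged sketch of dynamic provisioning: a StorageClass that points the in-tree kubernetes.io/glusterfs provisioner at the Heketi service endpoint, and a PersistentVolumeClaim that triggers Heketi to carve out a new Gluster volume on demand. The Heketi URL, secret names, and sizes are placeholders, and the secret holding the Heketi admin key is assumed to already exist.

```python
# Sketch of dynamic provisioning via Heketi: a StorageClass pointing the
# in-tree GlusterFS provisioner at the Heketi REST endpoint, plus a claim that
# requests storage on demand. URL, secret, and size values are placeholders.
from kubernetes import client, config

config.load_kube_config()
storage = client.StorageV1Api()
core = client.CoreV1Api()

storage.create_storage_class({
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "glusterfs-dynamic"},
    "provisioner": "kubernetes.io/glusterfs",
    "parameters": {
        "resturl": "http://heketi-storage.example.com:8080",  # Heketi service endpoint
        "restuser": "admin",
        "secretName": "heketi-admin-secret",      # assumed to exist already
        "secretNamespace": "default",
    },
})

# The claim is all a developer needs to write; Heketi creates and binds the
# backing Gluster volume automatically.
core.create_namespaced_persistent_volume_claim("default", {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "storageClassName": "glusterfs-dynamic",
        "accessModes": ["ReadWriteMany"],
        "resources": {"requests": {"storage": "10Gi"}},
    },
})
```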


This is a key differentiator for Red Hat Gluster Storage. As far as we can tell, no other storage vendor is able to provide this flavor of container-native storage, and certainly not with the level of integration provided with OpenShift Container Platform. As a number of early adopters have told us, it's invaluable to have a single point of support all the way from the operating system layer up through orchestration, application development, and storage.

 

Stay tuned. We’re not done.

We are working hard to continue to innovate and make it a much more seamless experience for developers and administrators alike to manage storage in a containerized environment.

We’ve delivered a number of industry-first innovations over the past year and will continue to focus on enabling a seamless user experience for developers and administrators looking to adopt containers as the preferred deployment platform. Stay tuned.

 

Authors

This post was originally created by Irshad Raihan & Sayan Saha.

Vendor and Cloud lock-in: Good? Bad? Indifferent?

Vendor lock-in, also known as proprietary lock-in or customer lock-in, is when a customer becomes dependent on a vendor for products and services. Thus, the customer is unable to use another vendor without substantial switching costs.

The evolving complexity of data center architectures makes migrating from one product to another difficult and painful regardless of the level of “lock-in.” As with applications, the more integrated an infrastructure solution is with the architecture and business processes, the less likely it is to be replaced.

The expression “If it ain’t broke, don’t fix it” is commonplace in IT.

I have always touted the anti-vendor lock-in motto. Everything should be Open Source, and the End User should have the ability to participate, contribute, consume, and modify solutions to fit their specific needs. However, is this always the right solution?

Some companies are more limited when it comes to resources. Others are incredibly large and complex, making the adoption of Open Source (without support) complicated. Perhaps a customer requires a stable and validated platform to satisfy legal or compliance requirements. If the Vendor they select has a roadmap that matches the company’s, there might be synergy between the two, and thus Vendor lock-in might be avoided. However, what happens when a Company or Vendor suddenly changes their roadmap?

Most organizations cannot move rapidly between architectures, and platform investments (CAPEX) typically only occur every 3-5 years. If the roadmap deviates, there could be problems.

For instance, again let’s assume the customer needs a stable and validated platform to satisfy legal, government, or compliance requirements. Would Open Source be a good fit for them, or are they better off using a Closed Source solution? Do they have the necessary staff to support a truly Open Source solution internally without relying on a Vendor? Would it make sense for them to do this when CAPEX vs. OPEX is compared?

The recent trend is for Vendors to develop Open Source solutions, using this as a means to market their Company as “Open,” which has become a buzzword. Terms like Distributed, Cloud, Scale Out, and Pets vs. Cattle have also become commonplace in the IT industry.

If a Company or individual makes something Open Source but there is no community adoption or involvement, is it really an Open Source project? In my opinion, just because I post source code to GitHub doesn’t truly translate into a community project. There must be adoption and contribution to add features, fix bugs, and evolve the solution.

In my experience, the Open Source model works for some and not for others. It all depends on what you are building, who the End User is, the regulatory compliance requirements, and setting expectations for what you are hoping to achieve. Without setting expectations, milestones, and goals, it is difficult to guarantee success.

Then comes the other major discussion surrounding Public Cloud and how some also consider it to be the next evolution of Vendor lock-in.

For example, if I deploy my infrastructure in Amazon and then choose to move to Google, Microsoft or Rackspace, is the incompatibility between different Public Clouds then considered lock-in? What about Hybrid Cloud? Where does that fit into this mix?

While there have been some standards put in place, such as OVF formats, the fact is that getting locked into a Public Cloud provider can be just as bad or even worse than being locked into an on-premise or Hybrid Cloud architecture, but it all depends on how the implementation is designed. Moving forward, as Public Cloud grows in adoption, I think we will see more companies distribute their applications across multiple Public Cloud endpoints and use common software to manage the various environments, providing a “single pane of glass” view into their infrastructure. Solutions like CloudForms are trying to help solve these current and frustrating limitations.

Recently, I spoke with someone who mentioned their Company selected OpenStack to prevent Vendor lock-in, as it’s truly an Open Source solution. While this is somewhat true, the reality is that moving from one OpenStack distribution to another is far from simple. While the API-level components and architecture are mostly the same across different distributions, the underlying infrastructure can be substantially different. Is that not a type of Vendor lock-in? I believe the term could qualify as “Open Source solution lock-in.”

The next time someone mentions lock-in, ask them what they truly mean and what they are honestly afraid of. Is it that they want to participate in the evolution of a solution or product, or that they are terrified to admit they have been locked in to a single Vendor for the foreseeable future?


The future is definitely headed towards Open Source solutions, and I think companies such as Red Hat and others will guide the way, providing support and validation for these Open Source solutions and helping to make them effortless to implement, maintain, and scale.

All one needs to do is look at the largest Software Company in the world, Microsoft, and see how they are aggressively adopting Open Source and Linux. This is a far cry from Microsoft v1.0, which invested solely in its own Operating System and neglected others such as Linux and Unix.

So, what do you think? Is Vendor lock-in, whether software related, hardware related, Private or Public Cloud, truly a bad thing for companies and End Users, or is it a case-by-case basis?

Cloud Wars – Starring Amazon, Microsoft, Google, Rackspace, Red Hat and OpenStack: The fate of the OS!?

Below is my opinion. Feel free to agree or disagree in the comments or shares but please be respectful to others.

There have been some discussions regarding the Cloud Wars and the current state of the Cloud. One thing I recently participated in was a discussion regarding Microsoft, Red Hat, and Linux distribution adoption.

Since Microsoft announced the release of their software on Linux platforms and adopted Linux distributions and Linux-based software, many people are wondering what this brave new world will look like as we move into the future.

First, we should discuss the elephant in the room. Apple has grown considerably in the desktop market while other companies’ shares have shrunk. We cannot discount the fact that iOS/OS X are in fact Operating Systems. There are also other desktop/server Operating Systems such as Windows, Chrome OS, Fedora, CentOS, Ubuntu, and other Linux distributions. My apologies for not calling out others, as there are far too many to mention. Please feel welcome to mention in the comments any I overlooked that you feel I should have included.

The recent partnership between Microsoft and Red Hat has been mutually beneficial and we are seeing more companies that historically ignored Linux now forming alliances with distributions as it has been greatly adopted in the Enterprise. The “battlefield” is now more complex than ever.

Vendors must contend with customers moving to the Public Cloud and adopting “Cloud Centric” application design as they move to a Software as a Service model. In the Cloud, some Operating Systems will erode while others will flourish.

Let’s not forget there is a cost for using specific Operating Systems in the Cloud and other options can be less costly. There are ways to offset this by offering users the ability to bring their own licensing or selecting the de facto Operating System of choice for a Public Cloud. These can be viable options for some and deal breakers for others.

Public Clouds like Azure and Google are still young, but they will both mature quickly. Many feel Google may mature faster than others and become a formidable opponent to the current Public Cloud leader, Amazon.

Some have forgotten that Google was previously in a “Cloud War” of their own when they were competing with Yahoo, Microsoft, Ask, Cuil, Snap, Live Search, Excite, and many others. The most recent statistics show Google holding 67.7% of the search market, which is a considerable lead over everyone else. Google, after all, was born in the Cloud, lives in the Cloud, and understands it better than anyone else. Many things they touch turn to gold, like Chrome, Gmail, Android, and other web-based applications.

https://www.netmarketshare.com/search-engine-market-share.aspx?qprid=4&qpcustomd=0

In the Private Cloud, Microsoft, VMware, Red Hat, Canonical, and Oracle are in contention with one another. Some are forming strategic alliances and partnerships for the greater good and pushing the evolution of software. Others are ignoring evolution and preferring to move forward, business as usual.

When market share erodes, companies sometimes rush and make poor, miscalculated decisions. One only needs to look at the fate of Blackberry to see how a company can fall rapidly from the top to the bottom of the market. Last I checked, Blackberry didn’t even own 1% of the market in the Mobile arena.

As we move into the future of Cloud, whether Public or Private, we will see more strategic partnerships and barriers collapse. With so many emerging technologies on the horizon and the Operating System becoming more of a platform for containerized applications, the OS itself is also becoming less relevant than before.

I have heard individuals predict that in the future we will write code directly to the Cloud, and I agree that this will eventually happen. Everything will be abstracted from the developer or programmer and there will be “one ring to rule them all,” but the question to be answered is which ring will that be and who will be wearing it?

It’s doubtful we will ever have only a single Cloud, platform, or programming language, but I think we will see the birth of code and platform translators. I look at computers, technology, and programming the same as spoken language. Some people learn a native language only; others learn a native tongue and branch out to other languages, and may even one day become a philologist, for example.

I am anxious to see how things evolve and am looking forward to seeing the development of the Cloud and internet applications. I hope I am able to witness things such as self-driving cars, self-piloting airplanes, and real-time data analysis.

Perhaps instead of Cloud we should use the term Galaxy.

OpenShift and Kubernetes!

Recently Red Hat launched OpenShift Enterprise 3.1, the company’s software for managing PaaS-based workloads. Google originally developed Kubernetes to manage large numbers of Containers within its own environment, and it was natural to integrate OpenShift Enterprise 3.1 with Kubernetes.

For those who are unsure what Kubernetes specifically is: Kubernetes is a web-scale container orchestration tool created by Google that is also appropriate for the enterprise. While even a large-scale enterprise may not have as many Containers as Google, Kubernetes-based technologies can nonetheless be used to manage the smaller, more diverse container workloads of the enterprise.

The theory behind Containers is eliminating waste and making infrastructure flexible and easier to deploy regardless of the footprint it is sitting on. A Container neither knows nor cares whether it’s running within a virtual machine or on bare metal, just as a virtual machine is unaware that it is operating as a fully independent operating system virtualized within a sandbox.

While virtualization has pushed technology forward, it has also led to sprawl and waste. If you consider a typical LAMP stack platform, there is tremendous waste involved in setting up each of the typical tiered components. Each virtual machine deployed has to be configured, patched for security, managed, updated, and treated as a physical system, which also includes monitoring, licensing, and application uptime. So really, all we have accomplished is decreasing our hardware footprint and allowing a higher density ratio of applications to hardware resources.

This advancement, though, has not eliminated the need to manage the endpoints, and with the recent string of security attacks, it’s easy to see that less is, in fact, more. The fewer endpoints you expose to the internet or between internal systems, the less risk you have of becoming compromised. I am not going to dive deep into security design, ethics, and such in this post, as that’s far too much to cover.

We also have not eliminated any of the “virtual waste” created by the transition from a physical world to a virtual world. Developers are notorious for requesting more resources than they need, and administrators are still stuck building virtual servers, deploying applications, and maintaining the health of the entire stack, typically with less staff because everything is virtual... It just runs. Right!?

By consolidating the number of endpoints that need to be managed, just as we did when moving from physical to virtual, Containers will increase density and efficiency. They will also lessen the burden on systems administrators and engineers, as there will be a substantial decrease in the number of endpoints that need to be managed. This will increase productivity and lead to more rapid software releases, as rolling deployments can be simplified.

The plain and simple truth is that Containers unlock the potential for a “build, test, run, and done” methodology. You can build a Container and run it in AWS, Google, Azure, in your own private data center, or on a laptop. Containers are lightweight and can be “spun up” in seconds, as opposed to the minutes, days, or hours of virtual machines. They use fewer resources, can be rightsized more appropriately, and make better use of the resources they are allocated. A smaller footprint also simplifies endpoint management.

A great example of how a Container could be used is a web-scale front-end store. An Enterprise could deploy web service Containers such as Apache in a stateless manner and scale up and out as needed, spinning up new Containers when traffic increases and then decommissioning them when traffic decreases. This is in stark contrast to a typical environment, where you would build a virtual machine and deploy an application in advance of an expected increase in traffic. Then, once traffic decreases, you are left with nothing but waste as the application sits idle until the next burst.
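As a rough sketch of that elasticity with Kubernetes-style orchestration (deployment name, namespace, and replica counts are made up for illustration), scaling the stateless front end is just a matter of patching the desired replica count and letting the orchestrator add or remove containers:

```python
# Sketch: burst a hypothetical stateless Apache front end up for peak traffic
# and shrink it back afterwards. Names and counts are made up.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

def scale(deployment: str, namespace: str, replicas: int) -> None:
    """Patch only the replica count; the orchestrator handles the rest."""
    apps.patch_namespaced_deployment(
        name=deployment,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

scale("apache-frontend", "web", replicas=20)   # scale out for the traffic burst
# ... later, once traffic subsides ...
scale("apache-frontend", "web", replicas=3)    # scale back in, freeing resources
```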

Talk about a waste of resources!

All that cooling, power, licensing, monitoring, managing and data center footprint for something to just sit there, idle. It’s entirely a waste of CAPEX and OPEX!

Anyway! Back to Kubernetes!

Kubernetes works in conjunction with OpenShift to help provide lifecycle management for Containers. Kubernetes then takes this to the next level by providing orchestration and the ability to manage clusters of any size! Another benefit of Kubernetes container orchestration is that it can dynamically manage and allocate resources on a host or cluster, with fault tolerance to help guarantee workload reliability. It allows nodes to be tagged or labeled, letting developers or administrators select and control where defined workloads can and should run.

This is especially useful when moving Containers between development, test, quality assurance, operational readiness, and production. Some of these environments might have different hardware or service-level agreements that must be met, and labeling helps ensure workloads land where they belong.
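As a small, hedged example of that labeling approach (label keys, node name, namespace, and image are made up): tag the nodes that meet a given hardware or SLA profile, then pin the workload to them with a nodeSelector.

```python
# Sketch: label a node by environment/hardware profile and pin a workload to
# matching nodes with a nodeSelector. Label keys, node name, and image are
# made up for illustration.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Tag a node as belonging to the QA tier with SSD-backed storage.
core.patch_node("node-07", {
    "metadata": {"labels": {"tier": "qa", "disktype": "ssd"}},
})

# A pod that should only land on nodes carrying those labels.
core.create_namespaced_pod("qa", {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "report-runner"},
    "spec": {
        "nodeSelector": {"tier": "qa", "disktype": "ssd"},
        "containers": [{
            "name": "runner",
            "image": "registry.example.com/report-runner:latest",
        }],
    },
})
```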

Performing under pressure – OpenStack – Good, Bad and the Ugly

Having to perform under pressure.

I am sure we have all been there in one way or another, and some of us handle it better than others. It doesn’t matter if the pressure is work-related, personal, or a mix of both; it’s still difficult and sometimes insurmountable to perform while under a great deal of pressure.

Recently I was asked to field a copious amount of questions regarding OpenStack: how we built our infrastructure, what worked, what didn’t, how we implemented Ceph, how we dealt with security, and how we overcame the trials and tribulations of a data center transformation project. Not to mention that while all of this was happening, we were still required to provide support, capacity planning, security, compliance, and automation, all while continuing to stand up new infrastructure fault domains and provision workloads.

Now, I am sure some people look at this and say “Meh, I can do that. No big deal,” but for me, my team, and those we interface with, this was not such an easy task. You have a “changing of the guard” when it comes to deploying new technology. The Internet of Things, as people are calling it, or the software-defined data center (both terms I absolutely dislike) are not a promise of the future; they are here, now, today, all around you.

There are some that respond well to change; they accept it, adopt it, and love it. Those are the ones that excel. They are the first people to run into a burning building, defuse the situation logically, and then take action. There are others who call the fire department to come and put out the blaze and wait to see what happens. Lastly, there are those that sit around filming the building burn down without care and with disregard for those inside risking their lives by choice. I think most of us forget that firefighters choose to put their lives on the line every single day. I am by no means comparing information technology to firefighting, so before you flame me, I am just using it as an analogy.

I have always been a person who runs first into the fire. Sometimes this is a good thing and sometimes it’s a bad thing. Eventually, one of the times you run into a burning building, you are bound to get hurt, trapped, and/or need assistance from others around you to overcome overwhelming odds.

So, back to pressure, OpenStack and having to describe the good, bad and ugly pieces of it.

We run a fairly unique implementation of OpenStack compared to many others and do not segment storage from compute (we use cgroups to limit, account for, and isolate the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes for Nova and Ceph). Cisco UCS converged infrastructure was implemented as opposed to white-box hardware, and our Ceph implementation uses a combination of SSD and HDD for performance optimization (which is actually pretty typical).
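As a rough illustration of the cgroups side of that (a sketch against the cgroup v1 filesystem with made-up names, limits, and PIDs, not our production tooling, which would normally go through systemd or libcgroup): create a cgroup, set CPU and memory limits, and move a process into it.

```python
# Rough illustration of carving out resources with cgroup v1 by writing to the
# cgroup filesystem directly. Names, limits, and the PID are made up; real
# deployments would typically use systemd slices or libcgroup instead.
import os

CPU_ROOT = "/sys/fs/cgroup/cpu"
MEM_ROOT = "/sys/fs/cgroup/memory"

def make_cgroup(name: str, cpu_shares: int, mem_limit_bytes: int) -> None:
    """Create a cgroup under the cpu and memory controllers and set limits."""
    for root in (CPU_ROOT, MEM_ROOT):
        os.makedirs(os.path.join(root, name), exist_ok=True)
    with open(os.path.join(CPU_ROOT, name, "cpu.shares"), "w") as f:
        f.write(str(cpu_shares))
    with open(os.path.join(MEM_ROOT, name, "memory.limit_in_bytes"), "w") as f:
        f.write(str(mem_limit_bytes))

def add_pid(name: str, pid: int) -> None:
    """Move an existing process (e.g. a Ceph OSD or nova-compute) into the cgroup."""
    for root in (CPU_ROOT, MEM_ROOT):
        with open(os.path.join(root, name, "cgroup.procs"), "w") as f:
            f.write(str(pid))

# Example: cap a storage daemon at a modest CPU weight and 4 GiB of memory.
make_cgroup("ceph-osd", cpu_shares=512, mem_limit_bytes=4 * 1024**3)
add_pid("ceph-osd", pid=12345)  # made-up PID
```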

SolidFire is used on the backend for high-performance latency sensitive applications since it allows in-line deduplication, compression and offers out of the box replication. The diagram below is a good representation of our current implementation.

Data center transformations are an incredibly sensitive topic, much more so than most people would think, especially when you are moving to a new architecture that is still in its teenage years and growing faster than a weed. Getting people to comprehend OpenStack vs. VMware, Xen, or traditional KVM is already a tough task in some companies, and once you start telling everyone that the DevOps culture is the wave of the future and “GUIs are for sissies,” some people get pissed off.

Once again, here is where the line gets drawn in the sand. Some people LOVE GUIs, and they refuse to use the command line unless it’s absolutely necessary. I can tell you from my past consulting experience that I have seen more network engineers use Cisco Fabric Manager than I have seen use the command line. Have you ever used Fabric Manager?! It’s a nightmare!

You begin to introduce your VMware-centric people to the simple Horizon dashboard and suddenly people begin to sweat and start asking questions: What storage is being used? How do I know where the VM (instance! it’s called an instance, dang it!) lives? How would I troubleshoot a problem? Is there going to be a better GUI for me to use? What about something like the vCenter client, does OpenStack have that? Does OpenStack have HA? Does OpenStack have DRS (load balancing between hypervisors)? That’s when the situation gets ugly...

Then individuals start to say, “Who is going to support this? I’m not supporting this! There is nooo wayyyyyy I can support this. Are we sure this is the right move? I mean, VMware works well, right? Why would we change?” This is when people either perform under pressure or fall apart. I mean, imagine a room packed to the gills with IT staff, all with this look on their face like you just called their Ferrari ugly and now you are going to trade it in for what they view as a Pinto. My apologies to those that drive a Pinto. It’s a great car, no, really, it is...

That’s when you start talking about continuous integration, Containers, Jenkins, Git, repos, version control, security, deploying bare metal as a service, deploying Hadoop as a service, deploying Containers as a service... pretty much deploying ANYTHING as a service, up to and including disaster recovery.

Shouldn’t it be as simple as a product group logging into a portal or catalog like ServiceNow, checking off a list of boxes, magically getting an estimated operational price, and then getting an email a few hours later saying their project is ready to go? The fact is, yes! Amazon has been doing it. Google has been doing it. Microsoft has been doing it. Actually, if you think about it, all major hosting providers are doing just that. They write intelligent code and automation to provision software as a service.

I almost always lean towards software as a service over any other term. If you are providing a virtual instance, it’s running containers, it uses Ceph for the back end and Gluster for the shared file system, aren’t all of those software defined? What about deploying Hadoop on top of OpenStack via Sahara? Is that also not software defined?!

Companies of all shapes and sizes want the features of Amazon without the price tag and with the ability to manage resources within their own private data center. It’s about knowing where your data is, how data is being backed up and saved, clear and concise monitoring of workloads, monitoring data center specifications and statistics, validating compliance of customer data, and securing it... it’s all about CONTROL.

This is what OpenStack provides: a set of tools for companies of all sizes to deploy a series of services, and a standard set of APIs that allow developers, DevOps, and administrators to provision their own elastic, scalable infrastructure in the same manner as Amazon, and in some ways BETTER than Amazon.

If Amazon were the latest and greatest thing since sliced bread, everyone would be on the bandwagon; Facebook, Google, Yahoo, Microsoft, and all the others would just say, “Screw this. Let’s just deploy into Amazon, fire 70% of our staff, and we are good to go!” The fact is, Amazon has its own set of difficulties. Ever tried running containers in ECS? What about wanting to deploy in a region that’s not supported? Maybe governance requires data not to leave a country’s borders and Amazon doesn’t have anything in that area? What about risk and compliance?

The fact is, we have been through all of this before. Remember mainframes? I do. In fact, I would say many companies still use them, and they remain a huge part of their technology stacks. When x86 came strolling along and promised the ability to replace mainframes with cheaper and smaller hardware that required less overall investment, companies became hooked! Fast forward to 2015 and we are seeing a similar change, some at the physical layer but more at the logical layer.

Back to pressure. I almost forgot that was part of this post since I have been ranting.

With so much new technology being deployed at such an aggressive rate, it’s really hard to be an SME in all things. I can’t say I know everything about Ceph; there are a ton of moving parts. I can’t say I am an expert when it comes to OpenStack because there are multiple distributions and multiple projects within OpenStack itself. I cannot say I know every single specific detail of how software-defined networking works and what is best to implement, as it depends on the infrastructure, use case, and hardware. At some point, you have to sit back and trust members of your team to be the subject matter experts.

These people are the ones you trust the most. They take the pressure off when you need to make difficult, informed decisions. They are the ones that suggest solutions and how to implement them with the lowest amount of risk and the greatest return on investment. They are the experts, and they should have your full trust and the best interests of the company in mind.

I admit I am a very technical individual, especially at my current level, but I am by no means an expert in all things related to software as a service. If I were, I wouldn’t need a team and I could do it all myself. Need to program some stuff in Ruby? I got it. Need me to write some stuff in Python? No worries! Need me to write some bash? Simple as pie. Need me to develop Puppet modules to deploy your code with intelligent rollback capability? A piece of cake. I hope you understand I am being somewhat sarcastic, but if you can excel at all of those things then you are amazing! Want a job? No, seriously... do you?

A healthy job is likely to be one where the pressures on management and employees are appropriate in relation to their abilities and resources, to the amount of control they have over their work and to the support they receive. I do not believe health is the absence of disease or infirmity but a positive state of complete physical, mental and social well-being. In a healthy working environment, there is not only an absence of harmful conditions but an abundance of health-promoting ones.

Work-related stress is usually caused by poor organization (the way jobs and work systems are designed and how we manage them). For example, lack of control over work processes, poor management, and lack of support from colleagues and supervisors can all be contributors.

I for one am a workaholic. Yes, I admit it. I am in a 12 step program to try and get back to a normal life and detach myself from The Borg collective. It’s tough though with so many emerging and new technologies coming that I am excited about, but I am trying my best for my health, both physically and mentally. The technology isn’t going to disappear overnight so it’s better to learn to pace ourselves instead of trying to run a marathon as a sprint.

Find healthy ways to relieve the pressure. Have an open door policy with your staff, team, manager and other employees. I think one thing we overlook in our current era is talking. We instead hide behind chat, email, and other electronic forms of communication and are slowly forgetting to be human. I blame Google for all of this. Joking! As we enter the age of software as a service, let’s try and be more human. After all, I am sure some/most of us have seen Blade Runner so we know how the story goes.

The day the systems administrator was eliminated from the Earth… fact or fiction?

As software becomes more complex and demands the scalability of the cloud, IT’s mechanic of today, the systems administrator, will disappear. Tomorrow’s systems administrator will be entirely unlike anything we have today.

For as long as there have been computer systems, there has always been a group of individuals, known as system administrators, managing and monitoring them. These individuals have been the glue of data centers, responsible for provisioning and managing systems, from the monolithic platforms of old to today’s mixed bag of hardware, storage, operating systems, middleware, and software.

The typical System Administrator usually possessed superhuman diagnostic and repair skills to keep a complex mix of disparate systems humming along happily. The best system administrators have always been the “Full Stack” individuals armed with all the skills needed to keep systems up and running, but these individuals were few and far between.

Data centers have become more complex over the past decade as systems have been broken down, deconstructed into functional components, and segregated into groupings. Storage has been migrated to centralized blocks like SAN and NAS, inevitably forcing personnel to become specialized in specific tasks and skills.

Over the years, this same trend has happened with Systems Infrastructure Engineers/Administrators, Network Engineers/Administrators and Application Engineers/Administrators.

Everywhere you look, intelligence is being built directly into products. I was browsing the aisles at Lowe’s this past weekend and noted that clothes washers, dryers, and refrigerators are now being shipped equipped with WiFi and NFC to assist with troubleshooting problems, collecting error logs, and opening service tickets. No longer do we need to pore over those thousand-page manuals looking for error code EC2F to tell us that the water filter has failed; the software can do it for us! It has become immediately apparent that if tech such as this has made its way into low-level, basic consumer items, things must be changing even more rapidly at the top.

I obviously work in the tech industry and would like to think of myself as a technologist and someone who is very intrigued by emerging technologies. Electric cars, drones, remotely operated vehicles, smartphones, laptops that can last 12+ hours daily while fitting in your jeans pocket and the amazing ability to order items from around the globe and have them shipped to your door. These things astound me.

The modern car was invented in 1886, and in 1903 we invented the airplane. The first commercial air flight was not until 1914, but to see how far we have come in such a short time is astounding. It almost makes you think we were asleep for the century prior.

As technology has evolved, there has been a need for software to also evolve at a similarly rapid pace. In many ways, hardware engineering outpaced software over the last score of years, and now software is slowly catching up to and surpassing hardware engineering.

Calm down, I know I am rambling again. I will digress and get to the point.

The fact is, the Systems Administrator as we know it is a dying breed, like the dinosaur, the caveman, and the woolly mammoth. All of these were great at some things but never enough to stay alive, and thus they were wiped out.

So what happens next? Do we all lose our jobs? Does the stock market go into free fall and we all start drinking Brawndo, the Thirst Mutilator (if you haven’t seen Idiocracy, I feel for you)? The fact is, it’s going to be a long, slow, and painful death.

Companies are going to embrace cloud at a rapid rate and as this happens people will either adapt or cling to their current ways. Not every company is going to be “cloudy”.

Stop. Let me state something. I absolutely HATE the word Cloud. It sounds so stupid. Cloud. Cloud. Cloud. Just say it. How about we all instead embrace the term “shared-nothing scalable distributed computing”? That sounds better.

So, is this the end of the world? No, but it does mean “The Times They Are a Changin” to quote Mr. Dylan.

The fact is, change is inevitable. If things didn’t change, we would still be living in huts, hunting with our bare hands, and using horses as our primary method of transportation. We wouldn’t have indoor toilets, governments, rules, regulations, or protection from others, as there would be no law system.

Sometimes change is good and sometimes it’s bad. In this case, I see many good things coming down the road, but I think we all need to see the signs posted along the highway.

Burying one’s head in the sand like an ostrich is not going to protect you.

Docker buys SocketPlane

Why is this important?

Well, SocketPlane’s entire six-person staff is joining Docker and will be helping the container-centric startup develop a networking API that makes it possible to string together hundreds to thousands of containers, even when those containers reside in different data centers. This means enabling the extensibility to deploy containers in various public clouds. Talk about serious flexibility!

I foresee that this common API will eventually have integration with other SDN solutions, such as NSX, ACI, and PlumGrid. I guess time will tell.