Despite common perceptions, enterprises don’t need to reach the promised land of a full “devops transformation” to start using Docker. They don’t need to have a microservices model or a fleet of full-stack engineers. In fact, Docker is a good fit for enterprises that are in the thick of a multi-year IT transformation and can actually help big teams implement devops best practices more quickly.
Hybrid cloud is the goal of nearly half of enterprises, most of which are in the process of some kind of devops toolchain adoption. Both are messy processes. Enterprises are hiring cloud consultants, consolidating data centers, breaking down barriers between engineering teams, and migrating new applications to Amazon Web Services (AWS) or other public clouds.
Mastering the hybrid cloud
Despite the supposed flexibility benefits of hybrid clouds, it is quite an engineering feat to manage security and scalability across multiple complex systems. The vast majority of an enterprise’s applications are burdened by internal dependencies, network complications, and huge on-premises database clusters. The idea of moving an application from one cloud to another “seamlessly” is laughable. For most enterprises, cloud bursting is a pipe dream.
This is where Docker fills a critical gap. The top reason enterprises are using Docker is to help them deploy across multiple systems, migrate applications, and remove manual reconfiguration work. By building application dependencies into containers, Docker significantly reduces interoperability concerns. Docker works equally well on bare-metal servers, virtual machines, AWS instances, etc.
As a result, applications that run well in test environments built on a public cloud instance will run exactly the same in production environments in on-premises private clouds. Applications that run on bare-metal servers can also run in production on any public cloud platform.
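As a minimal sketch of how dependencies get baked into an image, consider a hypothetical Python service (the file names and base image tag here are illustrative, not from the article):

```dockerfile
# Hypothetical service; image tag and file names are illustrative
FROM python:3-slim

# All application dependencies are declared and installed at build time,
# so the resulting image carries them to any host that runs it
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .
CMD ["python", "app.py"]
```

Because everything the application needs is inside the image, the same artifact runs unchanged on a developer laptop, a bare-metal server, or a public cloud instance.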
Accelerating a devops culture
This is good news for enterprises looking to push a devops culture transition forward. The devops movement is really about moving faster and consuming fewer resources. Enabling developers to provision Docker containers, run tests against them, and deploy to production in minutes is cost-efficient and eliminates a developer’s worst enemy: manual system configuration work.
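That provision-test-deploy loop can be sketched as a few commands; the image and registry names below are hypothetical, and this assumes a test suite runs inside the container:

```
# Illustrative workflow; registry and image names are hypothetical
docker build -t registry.example.com/myapp:1.2.0 .       # bake dependencies into the image
docker run --rm registry.example.com/myapp:1.2.0 pytest  # run tests inside the container
docker push registry.example.com/myapp:1.2.0             # ship the exact tested artifact
```

The key point is that the artifact that passed the tests is byte-for-byte the artifact that goes to production, with no manual system configuration in between.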
Docker is also a good fit for evolving enterprises because they are usually the most skittish about vendor lock-in. Container standardization makes it that much easier to move across clouds operated by multiple vendors.
The Docker team is also pushing to make the software enterprise-ready. After acquiring SocketPlane six months ago, Docker announced major upgrades in networking that allow containers to communicate across hosts. The team is working to complete a set of networking APIs which would make Docker networking enterprise-grade and guarantee application portability throughout the application lifecycle.
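In the multi-host networking model Docker has been building toward, containers on different hosts join a shared overlay network and reach each other by name. A rough sketch, assuming the cluster plumbing (a key-value store and Swarm-style scheduling) is already in place and using illustrative container names:

```
# Sketch of multi-host networking with the overlay driver; names are illustrative
docker network create -d overlay app-net
docker run -d --net=app-net --name=db postgres
docker run -d --net=app-net --name=web mywebapp   # reaches "db" by name, even on another host
```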
Testing as security features mature
However, there are still some major hurdles to jump. Enterprises are rightly concerned about Docker security in hybrid environments. Containers may resemble virtualization, but they have vastly different implications for system segregation, log aggregation, and monitoring. Enterprise applications often have strict governance procedures that require extensive logging and monitoring. Quite simply, there is no mature orchestration tool that monitors security across multiple Docker clusters. Most monitoring tools on the market don't have a view of transient instances in public clouds, let alone sub-virtual-machine entities such as containers.
In the case of a security threat, Docker containers currently would require a lot of manual security patching. Docker lets you update a base image, but developers must then manually rebuild every image derived from it and redeploy every container running the old version. Some form of image inheritance is necessary for Docker to be ready for a mission-critical enterprise application.
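To illustrate the patching burden, here is what that manual cycle looks like today; the image and container names are hypothetical:

```
# Illustrative patch cycle; image and container names are hypothetical
docker pull ubuntu:14.04                # pick up the patched base image
docker build -t myapp:patched .         # rebuild each image whose Dockerfile says FROM ubuntu:14.04
docker stop myapp                       # redeploy every running container...
docker run -d --name myapp myapp:patched  # ...one by one, for each affected service
```

Nothing propagates the base-image fix automatically; each derived image and each running container has to be touched.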
For enterprises that require multitenancy to isolate multiple clients' environments, Docker is simply not an option: all containers on a host share the same kernel and kernel space, which is not equivalent to separate VMs under a hypervisor. Enterprises with sophisticated backup tools may also find that Docker containers add an extra layer of complexity to shipping data on time and to the right places.
Docker is quite possibly the answer to enterprises' challenges in hybrid cloud. But it is also a brand-new technology that still lacks many of the orchestration and security monitoring tools enterprises need to run it in production. Now is the time for enterprises to investigate Docker, try to get their app running in hybrid test environments, and get to know their pain points, but probably not the time to use Docker clusters in production.
Stephanie Tayengco leads cloud operations at Logicworks, a cloud strategy and management provider.