Published on 12 December 2013 by James Turnbull

Docker is an open source framework that automates the deployment of applications in lightweight and portable containers. The Docker framework is modelled on the concept of the standard shipping containers that are used to transport much of the world’s goods. As with shipping containers, you can build, fill, open and transport Docker containers. These containers can then be run in a wide variety of places: on your laptop, in the Cloud, on a virtual machine or even on physical hardware.

Docker has quickly become popular for:

  • Automation of application packaging and deployment
  • Creation of lightweight, private PaaS environments
  • Automated testing and continuous integration and deployment
  • Deployment and scaling of web apps, databases and backend services

Since Docker was announced, however, there have been a lot of discussions about where Docker fits with configuration management tools like Puppet.

I’ve spent a bit of time thinking about scenarios, images, and management tooling, and talking to people about how they use Docker, either with or without configuration management tools. I didn't come away with any startling insights, but I did conclude that, like most aspects of this domain, there is room for a variety of tools.

Docker is first and foremost an image building and management solution. One of the biggest objections to the "golden image" model is that you end up with image sprawl: large numbers of complex, deployed images in varying states of versioning. You create randomness and exacerbate entropy in your environment as your image use grows. Images also tend to be heavy and unwieldy. This often forces manual change, or layers of deviation and unmanaged configuration on top of images, because the underlying images lack appropriate flexibility.

Compared to traditional image models, Docker is a lot more lightweight: images are layered, and you can quickly iterate on them. There is a legitimate argument that these attributes alleviate many of the management problems traditional images present. It is not immediately clear, though, that this alone is enough to totally replace or supplant configuration management tools.
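To make the iteration point concrete, here is a minimal sketch of a rebuild cycle, assuming the docker CLI and a running daemon are available; the image name, base image and files are hypothetical.

```python
# A sketch of quick iteration on a layered image, assuming the `docker` CLI
# and a running Docker daemon. Image name, base image and files are hypothetical.
import pathlib
import subprocess

workdir = pathlib.Path("layer-demo")
workdir.mkdir(exist_ok=True)

# Each Dockerfile instruction produces its own layer; on a rebuild, only the
# layers whose inputs changed (and those that follow them) are recreated.
(workdir / "Dockerfile").write_text(
    "FROM ubuntu\n"
    "RUN apt-get update && apt-get install -y nginx\n"  # slow layer, cached between builds
    "COPY index.html /srv/index.html\n"                 # fast layer, changes on every iteration
)
(workdir / "index.html").write_text("<h1>version 1</h1>\n")
subprocess.run(["docker", "build", "-t", "layer-demo", str(workdir)], check=True)

# Change only the application file and rebuild: the apt-get layer is reused
# from the cache, so the second build completes in seconds.
(workdir / "index.html").write_text("<h1>version 2</h1>\n")
subprocess.run(["docker", "build", "-t", "layer-demo", str(workdir)], check=True)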

There is amazing power and control to be gained through the idempotence and introspection that configuration management tools can provide. And Docker itself still needs to be installed, managed and deployed on a host. That host also needs to be managed. In turn, Docker containers may need to be orchestrated, managed and deployed, often in conjunction with external services and tools. Configuration management tools excel at providing these capabilities.
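As a rough illustration of the idempotence point, here is a hand-rolled sketch of the kind of check-then-converge step a configuration management tool performs against a Docker host; the container and image names are hypothetical, and a real tool such as Puppet would express this declaratively as resources rather than as imperative code, while also handling the installation and upgrade of Docker on the host itself.

```python
# A hand-rolled sketch of an idempotent "ensure this container is running"
# check, the sort of convergence a configuration management tool provides.
# The container name ("web") and image ("nginx") are hypothetical.
import subprocess

def container_running(name: str) -> bool:
    """Return True if a container with this name is currently running."""
    out = subprocess.run(
        ["docker", "ps", "--format", "{{.Names}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return name in out.splitlines()

def ensure_container(name: str, image: str) -> None:
    """Converge on the desired state; do nothing if it already holds."""
    if container_running(name):
        return  # already in the desired state, nothing to change
    subprocess.run(["docker", "run", "-d", "--name", name, image], check=True)

ensure_container("web", "nginx")
```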

It is also apparent that Docker represents, or perhaps more accurately encourages, some different behaviors for hosts, applications and services: short-lived, disposable, and focused on providing a single service per container.

These behaviors do not resonate strongly with the need for configuration management tools. With these behaviors you are rarely concerned with the long-term management of state, entropy is less of a concern because containers rarely live long enough for it to accumulate, and the recreation of state may often be cheaper than the remediation of state.

The most commonly cited use case is testing. Docker containers are becoming a feature of fast, agile and disposable test environments wired into CI tools such as Jenkins. In these use cases, a Jenkins job creates a Docker container, runs the required tests inside it, and then shuts it down. Here, the limited lifespan of the testing host does not lend itself to running a configuration management tool; indeed, running that tool could well add overhead, complexity and time to a process where every second counts.
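A minimal sketch of that disposable-test-container pattern, as a script a Jenkins job might invoke from its checked-out workspace; the image name, test command and build context are hypothetical.

```python
# A sketch of the disposable test container pattern, as a script a Jenkins
# job might call from the checked-out workspace. The image name and test
# command are hypothetical.
import subprocess
import sys

IMAGE = "myapp-test"                    # hypothetical image with the app and its test dependencies
TEST_CMD = ["python", "-m", "pytest"]   # hypothetical test runner inside the container

# Rebuild the test image from the current workspace (cheap thanks to layer caching).
subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)

# Run the tests in a throwaway container; --rm deletes it the moment it exits,
# so nothing persists beyond the test run.
result = subprocess.run(["docker", "run", "--rm", IMAGE] + TEST_CMD)

# Propagate the test outcome to the CI job.
sys.exit(result.returncode)
```

Because the container is removed as soon as the tests exit, there is no long-lived state left behind for a configuration management tool to maintain.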

But I don't believe all infrastructure can be represented with these behaviors. Much of it can, and perhaps in the future more of it will be, but they're not exclusive and will likely exist alongside more traditional infrastructure deployment. The long-lived host, perhaps also the host that needs to run on physical hardware, still has a role in many organizations. I'm also starting to see Cloud and virtual machine consumers, especially some of those on Amazon, running long-lived instances whose uptime approaches that of the traditional physical hosts they used to operate.

As a result of these diverse management needs, combined with the need to manage Docker itself, I think we'll see both Docker and configuration management tools being deployed in the majority of organizations. Indeed I can see the potential for some incredibly powerful deployment tools that combine containers, configuration management, continuous integration, continuous delivery and service orchestration.

Disclosure: I was an employee of Puppet Labs and have a financial stake in the company. I currently work at Docker Inc and am also working on a book about Docker.

James Turnbull

About the author: A former IT executive in the banking industry and author of five technology books, James has been involved in IT Operations for 20 years and is an advocate of open source technology. He joined Puppet Labs in March 2010 as the VP of Operations, was VP of Engineering at Venmo and is currently VP of Services at Docker Inc. We highly recommend that you read his blog and follow him on Twitter.
