published on 24 April 2017

If you’ve ever wanted to get started with Puppet or Docker — or both — you’ve probably faced a bit of a chicken-or-egg conundrum. Should I use Puppet to deploy Docker on my nodes, and then use Puppet to define container images? Or should I use Docker containers to deploy Puppet so I can test dashboards and other modules without having to build out my infrastructure?

In reality, you can do both. In this blog post, I’ll show you how to use some of the Puppet tools to create working containerized environments. You may well try these in development, but I think you’ll quickly see how you can transfer what you learn to production applications.

Handy resources

A few handy repositories in the Puppet Forge and GitHub will help get you started. Keep in mind that some of these modules are classified as experimental, but they’re fully functional and based on sound Puppet and Docker principles.

Puppet in Docker

https://github.com/puppetlabs/puppet-in-docker - A series of Dockerfiles, and the associated build toolchain, for building Docker images containing Puppet and related software.

Puppet in Docker examples

https://github.com/puppetlabs/puppet-in-docker-examples - This repository builds on Puppet in Docker by providing various examples, from running Puppet on container-centric operating systems like CoreOS or Atomic to building a full Puppet stack on top of a container scheduler.

The image_build module

https://github.com/puppetlabs/puppetlabs-image_build - This module enables you to build various images, including Docker images, from Puppet code. Examples include NGINX and Apache containers.
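
For a taste of what image_build enables: after installation, the module adds a docker subcommand to Puppet that builds an image from a manifest. The invocation below is a rough sketch (the image name and manifest path are placeholders, and the exact flags may differ between module versions, so check the repository README):

$ puppet module install puppetlabs-image_build
$ puppet docker build --image-name puppet/nginx manifests/init.pp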

Getting started

Whether you’re just getting started with containers or you’ve been using them for a while, one of the easiest ways to deploy Docker is with the puppetlabs/docker_platform module, available at https://forge.puppet.com/puppetlabs/docker_platform. The fact that these modules have been downloaded nearly two million times from the Puppet Forge gives you an idea of their utility.

To deploy Docker on any node managed by your Puppet master, you can simply add the basic class:

include 'docker'

There are other options for the class, but that will get you started, particularly if you’re looking to do your work on a development virtual machine.
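
If you need more control, you can declare the class with parameters instead. Here’s a sketch; the parameter names follow the module’s documentation, but verify them (and the version string format for your platform) against the module version you install:

class { 'docker':
  version  => '17.03.0',
  tcp_bind => ['tcp://127.0.0.1:2375'],
}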

Once deployed to a node, Docker will run and behave just as it would if you installed it manually. Using Puppet to automate this basic step, though, is a great way to deploy as many instances as you want, the same way every time.
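
For instance, a minimal sketch of a site manifest that rolls Docker out identically to a pair of hypothetical nodes might look like this:

node 'web01.example.com', 'web02.example.com' {
  include docker
}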

With Docker installed, you can start running some simple tests, like pulling down the latest CentOS image and opening an interactive shell in it:

$ docker run -it centos:latest /bin/bash

Let’s take it a step further by having Docker create a new container with Puppet inside, which can apply any instruction you would otherwise pass in a manifest. For example:

$ docker run --name apply-test puppet/puppet-agent apply -e 'file { "/tmp/adhoc": content => "Written by Puppet" }'

This will pull the puppet/puppet-agent image from Docker Hub and use Puppet to apply a change, namely creating a file in /tmp/adhoc containing the words, “Written by Puppet.”
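
You aren’t limited to one-liners passed with -e, either. If you have a manifest file on the host, you can bind-mount it into the container and apply it the same way. A sketch, assuming a hypothetical site.pp sitting in your current directory:

$ docker run -v $(pwd):/manifests puppet/puppet-agent apply /manifests/site.pp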

If you now run a diff on that container, you’ll see what’s changed from the original image. In this case, upon running, the container created new folders and added content to them, including the /tmp/adhoc file:

root@node02:~# docker diff apply-test
C /etc
C /etc/puppetlabs
C /etc/puppetlabs/puppet
A /etc/puppetlabs/puppet/ssl
A /etc/puppetlabs/puppet/ssl/certificate_requests
A /etc/puppetlabs/puppet/ssl/certs
A /etc/puppetlabs/puppet/ssl/private
A /etc/puppetlabs/puppet/ssl/private_keys
A /etc/puppetlabs/puppet/ssl/public_keys
...
C /tmp
A /tmp/adhoc

This is a good way to experiment with and learn Puppet with very little infrastructure or overhead. Instead of building out a full virtual machine and setting it up as a Puppet node, you can use a straightforward Docker command to see how Puppet would make a change or apply some action. It’s fast and you can quickly see your workflows in action. If something breaks, you can just remove the containers and start over.
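
For example, to throw away the container from the diff example above and rerun it from a clean slate:

$ docker rm apply-test
$ docker run --name apply-test puppet/puppet-agent apply -e 'file { "/tmp/adhoc": content => "Written by Puppet" }'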

Build a Puppet environment with a single command

You can actually move well beyond launching a handful of containers, and run your Puppet infrastructure on top of a containers-as-a-service platform. This can be accomplished with the puppet/puppetserver image, which will deploy a fully functioning Puppet master:

$ docker run --net puppet --name puppet --hostname puppet puppet/puppetserver

In this example, the Puppet master is created in a container called puppet on a Docker network named "puppet." The only piece you really need is docker run puppet/puppetserver, but the other bits allow you to attach your master to a specific network, name the container for easy reuse, and set the hostname so container-based agent nodes can find it.
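
Note that the user-defined network has to exist before you can attach containers to it:

$ docker network create puppet

With the master up, you can then point a containerized agent at it over the same network. Here’s a sketch of a one-shot agent run; depending on the master’s autosigning configuration, you may also need to sign the agent’s certificate:

$ docker run --net puppet puppet/puppet-agent agent --server puppet --onetime --no-daemonize --verbose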

All the open-source components of a stand-alone Puppet infrastructure are available in this same fashion, including Puppet Server, PuppetDB and various dashboards. You can put them all back together as a stack with Docker Compose.

Compose is a tool, driven by a YAML file, that enables you to describe a series of containers in key-value pairs and define how they should be related and linked. You can install docker-compose manually, or install it using the docker::compose class in the docker module mentioned above.
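
The latter approach is a one-liner in your manifest. As a sketch (the ensure and version parameters follow the module’s README, but confirm them against the module version you’re using):

class { 'docker::compose':
  ensure  => present,
  version => '1.9.0',
}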

To test the power of Docker Compose, create a docker-compose.yml file or download the sample from the Puppet in Docker examples repository mentioned above.

The file describes several container images, including puppetserver, puppetdb-postgres, puppetboard and puppetexplorer. These last two are browser-based dashboard components that will become accessible when Docker Compose completes.
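
To give a sense of the file’s shape, here is a heavily trimmed sketch. The image names come from the puppet organization on Docker Hub; the real sample wires up considerably more, such as environment variables and dependencies between services:

version: '2'

services:
  puppet:
    image: puppet/puppetserver
    hostname: puppet
  puppetdb-postgres:
    image: puppet/puppetdb-postgres
  puppetdb:
    image: puppet/puppetdb
  puppetboard:
    image: puppet/puppetboard
    ports:
      - 8000
  puppetexplorer:
    image: puppet/puppetexplorer
    ports:
      - 80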

Everything will be pulled, built and booted from a single command executed from the directory where you’ve saved your docker-compose.yml file:

$ docker-compose up

Figure 1. Several Puppet nodes are deployed as containers using docker-compose.

This takes a couple of minutes to pull down and install everything into a series of containers, but when it’s done you’ll have a complete working environment. You can see all the bits and pieces and how they fit together in moments, instead of setting up a series of VMs.

If you run a simple docker ps command, you’ll see all the running containers and their ports, including Puppet Explorer, which in my case was running at port 32772. By pointing a browser to the container host node IP address and that port — http://hostname:32772 — you should see the dashboard:

Figure 2. The Caddy dashboard for Puppet Explorer, running in a container installed using docker-compose.
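
If the full docker ps listing is noisy, you can trim it to just the columns you care about:

$ docker ps --format "table {{.Names}}\t{{.Ports}}"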

At the same time, docker-compose installed PuppetDB and the browser-based dashboard that makes monitoring database activity a snap:

Figure 3. The PuppetDB dashboard, running in a container created by docker-compose.

Conclusion

You should now begin to see how the combination of Puppet and Docker gives you new and powerful ways to expand your development environment. Instead of deploying one Puppet node at a time, you can use simple Docker commands to do the work for you. That means you can spend less time building the platform, and more time developing and testing. And because these environments are up and running in minutes, you can deploy as many as you want almost anywhere you want — and easily start over with a clean install every time.

Gareth Rushgrove is a senior software engineer at Puppet.
