Published on 30 March 2016 by Gareth Rushgrove

CoreOS, the company behind the CoreOS operating system and the Tectonic Kubernetes distribution, has released a number of popular open source projects in the last few years for building distributed applications. These include the rkt container engine, the distributed configuration store etcd, the virtual networking component Flannel and the CoreOS operating system itself. In this post we’ll take a look at how each of these can work with Puppet.

etcd
etcd is a distributed key-value store that provides a reliable way to store data across a cluster of machines. It’s a common building block for modern distributed systems, from CloudFoundry to Kubernetes. The common use case for etcd is to store configuration items in its database, avoiding the need to synchronize files across many machines. The individual machines can instead subscribe to etcd and watch for any changes, updating themselves automatically when changes are required.
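The subscribe-and-watch pattern described above can be sketched in a few lines of Python. This is a hypothetical in-memory stand-in for illustration only, not the real etcd client API:

```python
# Minimal sketch of the watch pattern etcd provides: machines register
# callbacks for keys and are notified when a value actually changes.
# This is an in-memory stand-in, not the etcd client library.

class KeyValueStore:
    def __init__(self):
        self._data = {}
        self._watchers = {}  # key -> list of callbacks

    def watch(self, key, callback):
        """Register a callback to run whenever `key` changes."""
        self._watchers.setdefault(key, []).append(callback)

    def set(self, key, value):
        """Store a value and notify watchers only if the value changed."""
        changed = self._data.get(key) != value
        self._data[key] = value
        if changed:
            for callback in self._watchers.get(key, []):
                callback(key, value)

store = KeyValueStore()
seen = []
store.watch('/app/version', lambda k, v: seen.append(v))
store.set('/app/version', '1.2.0')  # watcher fires
store.set('/app/version', '1.2.0')  # unchanged, so no notification
```

The real etcd does this over HTTP with long-polling watches, but the shape is the same: subscribers react to changes rather than polling files on disk.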

The cristifalcas/etcd module provides the ability to install and manage etcd using Puppet; it supports various RHEL flavours as well as Debian. In the simplest case that means:

include etcd

That’s not very interesting, but the module also allows for much more fine-grained configuration of etcd. Here’s a more involved example seeding an initial three-node cluster and setting various endpoints for the cluster to listen on. Depending on your network environment, and the interfaces you want etcd to listen on, you might need to do more or less configuration.

class { 'etcd':
  listen_client_urls          => 'http://0.0.0.0:2379',
  advertise_client_urls       => "http://${::fqdn}:2379",
  listen_peer_urls            => 'http://0.0.0.0:2380',
  initial_advertise_peer_urls => "http://${::fqdn}:2380",
  initial_cluster             => [
    'etcd-1=http://etcd-1:2380',
    'etcd-2=http://etcd-2:2380',
    'etcd-3=http://etcd-3:2380',
  ],
}

Once etcd is up and running, you’ll need to enter data into the key/value store. Some of this information might be dynamic, and relevant only to the current state of the cluster, but some of it will likely be static, and therefore usefully managed in code. Luckily, the cristifalcas/etcd module provides a type for managing individual keys in etcd using Puppet. Here’s a quick example of setting a simple network configuration key.

etcd_key { '/coreos.com/network/config':
  value => '{ "Network": "10.1.0.0/16" }',
}

Remember, etcd is a simple key-value pair store, so the key and the value can be anything you require: the current version of an application to install or the setting of a particular feature flag, for instance.

confd
Not all applications are able to consume configuration directly from etcd; most applications today expect a configuration file on disk. Confd isn’t from CoreOS, but it does make using etcd for those use cases easier. Confd runs as a daemon process, watching etcd for changes to keys and updating configuration files (or running scripts) when they change. For instance, when a new host is booted and registers a new key in etcd, confd running on an nginx proxy can detect the change immediately and update the nginx configuration files with the new host. The ajcrowe/confd module makes it trivial to get up and running with confd:

class { 'confd':
  nodes    => [ 'etcd-1:4001', 'etcd-2:4001' ],
  interval => 10,
  prefix   => '/confd',
}

The module also contains a useful defined type for using confd. Here we’re watching the /nginx/upstream/01 key for any changes, and automatically updating the upstream config file and reloading nginx. We even make sure the new configuration is valid by running the check_cmd.

confd::resource { 'nginx_upstream_01':
  dest       => '/etc/nginx/conf.d/upstream_01.conf',
  src        => 'nginx_upstream.tmpl',
  keys       => [ '/nginx/upstream/01' ],
  group      => 'root',
  owner      => 'root',
  mode       => '0644',
  check_cmd  => '/usr/sbin/nginx -t',
  reload_cmd => '/usr/sbin/nginx -s reload',
}

This is powerful because it allows you to make your system configuration much more dynamic and reactive to changes in network topology, without requiring manual intervention to commit and deploy code changes.
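The workflow confd automates can be summarised as: read keys, render a template to a file, and only act when the content actually changes. The following Python is a simplified illustration of that render step, not confd itself; the template syntax and file paths are arbitrary examples:

```python
import os
import string
import tempfile

def render_config(keys, template, dest):
    """Render `template` with values from `keys` and write it to `dest`.
    Returns True if the file content actually changed (i.e. a reload
    would be warranted), False if the rendered output was identical."""
    new_content = string.Template(template).substitute(keys)
    try:
        with open(dest) as f:
            if f.read() == new_content:
                return False  # nothing changed, skip the reload
    except FileNotFoundError:
        pass  # first render: the file doesn't exist yet
    with open(dest, 'w') as f:
        f.write(new_content)
    return True

keys = {'upstream': '10.1.5.3:8080'}
template = 'upstream app { server $upstream; }\n'
dest = os.path.join(tempfile.mkdtemp(), 'upstream_01.conf')
changed = render_config(keys, template, dest)    # first run writes the file
unchanged = render_config(keys, template, dest)  # identical content, no-op
```

confd adds the pieces this sketch omits: watching etcd for key changes, running the `check_cmd` against the candidate file before installing it, and invoking `reload_cmd` only on a successful change.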

rkt
rkt is a container engine for Linux clusters, created by CoreOS, which focuses on security, simplicity, and composability. rkt has an interesting capability: it can run the same container with varying degrees of protection, from lightweight OS-level namespace and capability isolation up to heavier VM-level hardware virtualization. This blog post on getting started with rkt provides lots of examples for those not yet familiar with the tools.

CoreOS recently released version 1.0 of rkt, and to accompany that release we have a puppetlabs/rkt module to install it. The module currently helps with installing a specific version of the rkt command line tool, as well as providing a class for installing the acbuild container build tool.

include rkt
include rkt::acbuild

The module also provides the start of a native type for managing pods in rkt. At the moment, this just allows you to list pods (a collection of containers managed by rkt) on the system, but in the future should allow for launching new pods, managing the accompanying systemd services or building rkt images.

$ puppet resource rkt_pod
rkt_pod { 'c8ecd9ae':
  ensure     => 'exited',
  app        => 'hello',
  image_name => 'hello',
}

Please do open issues, and send pull requests if you’d like to see more features here.

Flannel
Flannel is a virtual network that gives a subnet to each host for use with container runtimes. Under the hood it uses etcd for coordination. By allowing for IP addresses to be assigned to individual containers in a consistent way, you can avoid the need to manage complex port mapping arrangements.
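Flannel's per-host subnetting can be illustrated with Python's standard ipaddress module: carve a large overlay network into one smaller subnet per host, so every container gets a routable address without port mapping. The CIDRs and hostnames here are arbitrary examples:

```python
import ipaddress

# Carve a /16 overlay network into /24 subnets, one per host,
# mimicking how flannel leases a subnet to each machine via etcd.
overlay = ipaddress.ip_network('10.1.0.0/16')
subnets = overlay.subnets(new_prefix=24)

hosts = ['core-1', 'core-2', 'core-3']
leases = {host: next(subnets) for host in hosts}
# Each host can now hand out addresses from its own /24 to containers.
```

In the real system flannel records these leases in etcd (which is why the module below can configure etcd for you), and routes traffic between subnets using VXLAN, host-gw, or another backend.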

The cristifalcas/flannel module supports using Puppet to install and configure Flannel on RHEL-based operating systems.

class { 'flannel':
  etcd_endpoints => "http://${::fqdn}:2379",
  etcd_prefix    => '/coreos.com/network',
  configure_etcd => true,
  network        => '10.1.0.0/16',
}

This works nicely with the above-mentioned etcd module to get you quickly set up with Flannel and its dependencies.

CoreOS
Last but not least is the CoreOS operating system. CoreOS is designed for security, consistency, and reliability. Instead of installing packages via yum or apt, CoreOS uses Linux containers to manage your services at a higher level of abstraction. It also comes with services like etcd built in, and provides a simple way of updating the entire operating system atomically rather than updating everything one package at a time.

CoreOS nodes are typically bootstrapped using cloud-init. But this doesn’t provide a mechanism to manage the new nodes over time, which tends to be Puppet’s killer feature. Running Puppet on a node also allows for collecting inventory data from Facter and storing it in PuppetDB for later analysis, which is especially important in an environment where you have a number of different operating systems and want all your information in one place.

The demo jumanjihouse/puppet-on-coreos provides an excellent proof of concept for running Puppet in CoreOS. It is a simple example of running Puppet in a Docker container, which is then run on each CoreOS host. The important part is, the container is run with a lot of volumes mounted from the underlying host, allowing Puppet to manage the host, and not just what runs inside the container. The reason for this method is that all software on CoreOS runs isolated inside containers, but by using volume mounts, it’s still possible to manage the underlying host. This not only allows for Puppet to manage aspects of the host; it also means you have all the inventory information available in PuppetDB for querying and audit purposes.

Here’s the relevant excerpt from the unit file:

ExecStart=/usr/bin/docker run \
  --name %n \
  --net=host \
  -v /media/staging:/opt/staging \
  -v /etc/systemd:/etc/systemd \
  -v /etc/puppet:/etc/puppet \
  -v /var/lib/puppet:/var/lib/puppet \
  -v /home/core:/home/core \
  -v /etc/os-release:/etc/os-release:ro \
  -v /etc/lsb-release:/etc/lsb-release:ro \
  -v /etc/coreos:/etc/coreos:rw \
  -v /run:/run:ro \
  -v /usr/bin/systemctl:/usr/bin/systemctl:ro \
  -v /lib64:/lib64:ro \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  jumanjiman/puppet:latest \
  agent --no-daemonize --logdest=console --server=puppet --environment=production

The repository comes with instructions for trying this out, plus scripts and tests for getting started. If you’re interested in running Puppet on CoreOS, please let us know in the comments to this post.


The nice folks at CoreOS continue to release great open source software for anyone building complex infrastructure. And the above examples show just how quickly the fantastic Puppet community adopts good software and makes it configurable using the Puppet language. Thanks to everyone who has created or contributed to any of the above projects. And if you know of any other great examples, please let us know in the comments.

Gareth Rushgrove is a senior software engineer at Puppet Labs.
