Published on 18 August 2014

*This post was originally written and published on the Server Density Blog.*

Server Density has been using Puppet Enterprise in production since 2010, deploying its monitoring product to over 100 servers in data centres all over the world. Puppet Enterprise helps in a number of ways, and although it is used for standard config management, that represents only 25% of the use case at Server Density.

The software relies on Puppet Enterprise in four distinct ways - infrastructure, config management, failover and deploys - each of which is outlined in this post by David Mytton, founder and CEO of Server Density.

How we use Puppet Enterprise - Infrastructure

We first started using Puppet Enterprise when we moved our environment to Softlayer, where we have a mixture of bare metal servers and public cloud instances, totalling around 75-100 nodes. When this was set up, we ordered the servers from Softlayer and then manually installed Puppet Enterprise before applying our manifests to get things configured.

Although we recently evaluated moving to running our own environment in colo data centres, we have made the decision to switch our environment from Softlayer to Google Cloud. My general view remains that colo is significantly cheaper in the long run, but there are some initial capital expenses which we don't want to incur. We also want to make use of some of the Google products like BigQuery - I'll be writing about this in more detail on the Server Density blog as we complete the move.

Using Google Cloud (specifically, Google Compute Engine), or indeed any of the other major cloud providers, means we can make use of Puppet Enterprise modules to define the resources within our code. Instead of having to manually order them through the control panels, we can define them in the Puppet manifests alongside the configuration. We're using the gce_compute module but there are also modules for Amazon and others.

For example, defining an instance plus a 200GB volume:

gce_instance { 'mms-app1':
  ensure       => present,
  machine_type => 'n1-highmem-2',
  zone         => 'us-central1-a',
  network      => 'private',
  tags         => ['mms-app', 'mongodb'],
  image        => 'projects/debian-cloud/global/images/backports-debian-7-wheezy-v20140605',
}

gce_disk { 'mms-app1-var-lib-mongodb':
  ensure      => present,
  description => 'mms-app1:/var/lib/mongodb',
  size_gb     => '200',
  zone        => 'us-central1-a',
}

The key here is that we can define instances in code, next to the relevant configuration for what's running on them, then let Puppet deal with creating them.

How we use Puppet Enterprise - Config management

This is the original use case for Puppet Enterprise - defining everything we have installed on our servers in a single location. It makes it easy to deploy new servers and keep everything consistent.

It also means any unusual changes, fixes or tweaks are fully version controlled and documented so we don't lose things over time (e.g. we have a range of fixes for MongoDB to work around issues and make optimisations which have been built up over time and through support requests, all of which are documented in Puppet Enterprise).

Server Density Puppet Manifests

We use the standard module layout as recommended by Puppet Labs, contained within a GitHub repo and checked with puppet-lint before each commit, so we have a nicely formatted, well structured library describing our setup. Changes go through our usual code review process and get deployed when the puppet master picks up the changes and rolls them out.

Previously, we wrote our own custom modules to describe everything but more recently where possible we use modules from the Puppet Forge. This is because they often support far more options and are more standardised than our own custom modules. For example, the MongoDB module allows us to install the server and client, set options and even configure replica sets:

include site::mongodb_org

class { '::mongodb::server':
  ensure  => present,
  bind_ip => '',
  replset => 'mms-app',
}

mount { '/var/lib/mongodb':
  ensure  => mounted,
  atboot  => true,
  device  => '/dev/sdb',
  fstype  => 'ext4',
  options => 'defaults,noatime',
  require => Class['::mongodb::server'],
}

mongodb_replset { 'mms-app':
  ensure  => present,
  members => ['mms-app1:27017', 'mms-app2:27017', 'mms-app3:27017'],
}

We pin specific versions of packages to ensure the same version always gets installed and we can control upgrades. This is particularly important to avoid sudden upgrades of critical packages, like databases!
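In a manifest, pinning looks something like this (the package name and version below are illustrative, not taken from our actual manifests):

```puppet
# Pin an exact version rather than 'latest' or 'installed', so the
# same version is always installed and upgrades are deliberate.
package { 'mongodb-org-server':
  ensure => '2.6.3',
}
```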

The Server Density monitoring agent is also available as a Puppet Forge module to automatically install the agent, register it and even define your alerts.
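A hedged sketch of what using that module might look like (the class and parameter names here are assumptions based on the Forge module's conventions, not taken from this post):

```puppet
# Hypothetical usage of the Server Density agent module from the
# Puppet Forge; the agent key would come from your account.
class { 'serverdensity_agent':
  agent_key => 'YOUR_AGENT_KEY',
}
```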

All combined, this means we have our MongoDB backups running on Google Compute Engine, deployed using Puppet Enterprise and monitored with Server Density.


How we use Puppet Enterprise - Failover

We use Nginx as a load balancer and use Puppet variables to list the members of the proxy pool. This is deployed using a Puppet Forge nginx module we contributed some improvements to.
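As a rough illustration, defining the pool with the Forge nginx module might look like this (the pool name and member list are illustrative, not our real configuration):

```puppet
# The proxy pool members live in a Puppet variable, so adding or
# removing a backend is a one-line manifest change.
$pool_members = ['web1.example.com:8080', 'web2.example.com:8080']

nginx::resource::upstream { 'app_pool':
  members => $pool_members,
}
```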

When we need to remove nodes from the load balancer rotation, we can do this using the Puppet web UI as a manual process, or by using the console rake API. The UI makes it easy to apply the changes so a human can do it with minimal chance of error. The API allows us to automate failover in particular conditions, such as if one of the nodes fails.

How we use Puppet Enterprise - Deploys

This is a more unusual way of using Puppet but has allowed us to concentrate on building a small portion of the deployment mechanism, taking advantage of the puppet agent which runs on all our servers already. It saves us having to use custom SSH commands or writing our own agent, and allows us to customise the deploy workflow to suit our requirements.

It works like this:

  1. Code is committed in Github into master (usually through merging a pull request, which is how we do our code reviews).
  2. A new build is triggered by Buildbot which runs our tests, then creates the build artefacts - the stripped down code that is actually copied to the production servers.
  3. Someone presses the deploy button in our internal control panel, choosing which servers to deploy to (branches can also be deployed), and the internal version number is updated to reflect what should be deployed.
  4. `/opt/puppet/bin/mco puppetd runonce -I` is triggered on the selected hosts and the puppet run notices that the deployed version is different from the requested version.
  5. The new build is copied onto the servers.
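The version check in steps 4 and 5 could be sketched roughly as follows (everything here is hypothetical - the post does not show the actual manifest, so the fact names, paths and fetch script are assumptions):

```puppet
# Hypothetical sketch: compare the centrally requested build version
# against a custom fact reporting what is currently deployed, and
# fetch the new build only when they differ.
$requested = $::requested_app_version  # set per deploy from the control panel
$deployed  = $::deployed_app_version   # custom fact read on each node

if $requested != $deployed {
  exec { 'fetch-build':
    command => "/usr/local/bin/fetch_build ${requested}",
    path    => ['/bin', '/usr/bin'],
  }
}
```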


Status messages are posted into HipChat throughout the process, and any one of our engineers can deploy code at any time, although we have a general rule not to deploy non-critical changes after 5pm on weekdays and after 3pm on Fridays.

There are some disadvantages to using Puppet for this. Firstly, the Puppet agent can be quite slow on low-spec hardware. Our remote monitoring nodes around the world are generally low-power machines, so the agent runs very slowly. Deploys are also eventually consistent, because they won't necessarily happen on every server at the same time, so you need to account for that in new code you deploy.

Puppet Enterprise is most of our documentation

These four use cases mean that a lot of how our infrastructure is set up and used is contained within text files. This has several advantages:

  • It's version controlled - everyone can see changes and they are part of our normal review process.
  • Everyone can see it - if you want to know how something works, you can read through the manifests and understand more, quickly.
  • Everything is consistent - it's a single source of truth, one place where everything is defined.

It's not all of our docs, but it certainly makes up a large proportion because it's actually being used, live. And everyone knows how much we all hate keeping docs up to date!

David Mytton is the founder and CEO of Server Density, creator of a server monitoring tool that offers advanced alerting for problems with your websites and infrastructure from locations around the world.
