
Introducing serverless Puppet with Bolt

Bolt was first introduced as a simple, agentless tool for running tasks on smaller infrastructures made up of a wide variety of remote hosts. Users told us they wanted one language to handle both one-off tasks and model-based automation. The idea was to allow you to run common commands or bring your own existing scripts to manage routine automation.

It quickly became apparent that Bolt could do much more — including handling mature Puppet code — while remaining easy to use. The latest release allows users to leverage existing content from the Puppet Forge from the comfort of their own workstations, introducing the ability to apply classes from modules and take advantage of built-in types such as file, service and package. These capabilities make Bolt a great way to start automating.

Can I use Bolt to get started with a serverless workflow from my workstation?

A lot of users want or need to get started from their workstation and don't want another server to manage. This mode of using Puppet has always existed, but using it on more than a single node means solving a number of problems yourself: you need to sync all the modules to each node, copy any other files over, and run 'puppet apply'. It also fundamentally changes the security model: where are secrets stored and accessed, and how do you restrict data to only what's needed for each node? The advantage of this approach is that it removes the need for a central server and is conceptually simpler to set up.

Bolt handles all that, making it easy to get started provisioning and managing a small set of nodes from your workstation, or to manage only certain aspects of your systems. It compiles catalogs on your workstation that contain only the input needed for each node, can pull in secrets as needed, and copies module plugin code to nodes when applying the catalog.

Starting with a short example, let's walk through setting up an nginx server (on Debian or Ubuntu, to keep it simple for the moment). This is a Bolt plan that ensures the 'nginx' package is installed, creates a file that serves our site content, and starts the 'nginx' web server. It uses the apply_prep function to install packages needed by apply on remote nodes.
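A plan matching that description might look like the following sketch. The structure — apply_prep followed by an apply block containing package, file, and service resources — is how Bolt plans apply Puppet code; the $site_content parameter and the Debian/Ubuntu document root are illustrative details:

```puppet
# profiles/plans/nginx_install.pp
plan profiles::nginx_install(
  TargetSpec $nodes,
  String     $site_content = 'hello!',
) {
  # Install the agent package on the targets and gather facts,
  # so that 'apply' can compile catalogs for them.
  apply_prep($nodes)

  apply($nodes) {
    package { 'nginx':
      ensure => present,
    }

    # Debian/Ubuntu serve /var/www/html by default.
    file { '/var/www/html/index.html':
      ensure  => file,
      content => $site_content,
    }

    service { 'nginx':
      ensure  => running,
      enable  => true,
      require => Package['nginx'],
    }
  }
}
```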

How can I set up an nginx server with Bolt?

  1. Install Bolt
  2. Go to ~/.puppetlabs/bolt/modules (create the directories if necessary)
  3. Create a new module using PDK with pdk new module profiles and add a plans directory (or create ~/.puppetlabs/bolt/modules/profiles/plans)
  4. Add the code above to the manifest profiles/plans/nginx_install.pp
  5. Set up an Ubuntu node to work with Docker (lab 2 of our Tasks Hands-on-lab walks through getting Ubuntu running with Docker or Vagrant)
  6. Run bolt plan run profiles::nginx_install --nodes <NODE NAME>
  7. From a web browser, navigate to <NODE NAME> and you should see a page saying 'hello!'

For a general intro to Bolt, our Tasks Hands-on-lab walks through learning many of its features step by step.

For more on what's happening behind the scenes and how Bolt handles more complex manifest code, see Bolt's docs on applying manifest blocks.

How can I do orchestration with Bolt?

The additional power that Bolt brings is in its ability to tie together and thread data through multiple instances of "puppet apply".

We can extend the original example of setting up several nginx servers to include configuring them behind a load balancer.

First, let's abstract the nginx setup by pulling it into a class. Note that we've also generalized it to work on RedHat systems.

  1. In the profiles module, run the command pdk new class profiles::server (or create ~/.puppetlabs/bolt/modules/profiles/manifests), and place the following in profiles/manifests/server.pp
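A sketch of what profiles/manifests/server.pp could contain — the same resources as the original plan, with the document root selected by os family so it also works on RedHat systems (paths reflect each platform's default nginx configuration):

```puppet
# profiles/manifests/server.pp
class profiles::server(
  String $site_content = 'hello!',
) {
  # The default document root differs between os families.
  $doc_root = $facts['os']['family'] ? {
    'RedHat' => '/usr/share/nginx/html',
    default  => '/var/www/html',
  }

  package { 'nginx':
    ensure => present,
  }

  file { "${doc_root}/index.html":
    ensure  => file,
    content => $site_content,
  }

  service { 'nginx':
    ensure  => running,
    enable  => true,
    require => Package['nginx'],
  }
}
```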

  2. Update our plan to use the class, and set up an HAProxy load balancer.
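A sketch of the updated plan, assuming the puppetlabs-haproxy Forge module (its haproxy::listen and haproxy::balancermember defined types) and the structured networking fact for each server's IP address; the $servers and $lb parameters correspond to the arguments passed on the command line below:

```puppet
# profiles/plans/nginx_install.pp
plan profiles::nginx_install(
  TargetSpec $servers,
  TargetSpec $lb,
) {
  apply_prep([$servers, $lb])

  # Configure each web server, embedding its hostname in the page
  # so we can see which backend served a request.
  apply($servers) {
    class { 'profiles::server':
      site_content => "hello! from ${facts['networking']['hostname']}",
    }
  }

  # Collect names and IP addresses gathered by apply_prep, and thread
  # them into the load balancer's configuration.
  $server_targets = get_targets($servers)
  $server_names   = $server_targets.map |$t| { $t.name }
  $server_ips     = $server_targets.map |$t| { facts($t)['networking']['ip'] }

  apply($lb) {
    class { 'haproxy': }

    haproxy::listen { 'nginx':
      collect_exported => false,
      ipaddress        => '0.0.0.0',
      ports            => '80',
    }

    haproxy::balancermember { 'nginx':
      listening_service => 'nginx',
      server_names      => $server_names,
      ipaddresses       => $server_ips,
      ports             => '80',
    }
  }
}
```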

  3. Install module dependencies:
  • Create ~/.puppetlabs/bolt/Puppetfile with the modules you want:

  • Install the modules with bolt puppetfile install
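For this example, the Puppetfile would declare the HAProxy module and its dependencies (these are real Forge modules; you may also want to pin each to a specific version):

```ruby
# ~/.puppetlabs/bolt/Puppetfile
mod 'puppetlabs-haproxy'

# Dependencies of the haproxy module
mod 'puppetlabs-stdlib'
mod 'puppetlabs-concat'
```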
  4. Run bolt plan run profiles::nginx_install servers=<nodea>,<nodeb> lb=<nodec>

You should now be able to go to your load balancer and have it respond with "hello! from NODE NAME", with NODE NAME corresponding to whichever nginx server handled your request.

Note that Vox Pupuli maintains an nginx module that you could swap in for our simple server class to manage more complex nginx configuration.

In this example we've done a couple of notable things:

  • We configured several nginx servers, then immediately passed important details — their name and IP address — to our load balancer config.
  • We used existing content from the Forge to quickly get our load balancer up and running.
  • We demonstrated using classes to structure our code, and made it reusable by others via modules or, down the road, in ongoing management with Puppet Enterprise.

This addition to Bolt makes it much easier to get started automating existing workflows thanks to how quickly you can leverage existing solutions from the Forge. We're excited to see what you do with it!

Michael Smith is a principal software engineer at Puppet.
