Tune Puppet for Performance with Profiler

As with most complex tools, Puppet can occasionally perform less quickly than you might expect. To help you figure out what might be slowing things down, Puppet has a built-in profiler that shows where Puppet spends its time during catalog compilation. In this post, I'll show you how to gather this profiling data, how to interpret it, and what you can do to help us continue to improve Puppet.

To gather some profile data, set profile = true in the [master] section of your puppet.conf file, or pass the --profile flag on a single Puppet run. You will also need to run Puppet with the --debug flag, since the profiling output is logged at the debug level.
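For example, a minimal sketch of both approaches (the agent invocation assumes a typical install):

```shell
# puppet.conf on the master -- enable profiling for every compilation:
#
#   [master]
#   profile = true

# Or profile just one run from the command line. --debug is required,
# since the PROFILE lines are emitted at the debug log level:
puppet agent --test --profile --debug
```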

Puppet will now track profile data for catalog compilations and print it to stdout. To save this profile information to a file instead, pass the --logdest=/absolute/path/to/log.file flag during a Puppet run. When configured properly, the profile data will look like this:

2014-10-12 20:53:10 -0700 Puppet (debug): PROFILE [apply] 1 Setup server facts for compiling: took 0.0001 seconds
2014-10-12 20:53:10 -0700 Puppet (debug): PROFILE [apply] 2.1 Compile: Set node parameters: took 0.0003 seconds
2014-10-12 20:53:10 -0700 Puppet (debug): PROFILE [apply] 2.2 Compile: Created settings scope: took 0.0060 seconds
2014-10-12 20:53:10 -0700 Puppet (debug): PROFILE [apply] 2.3 Compile: Evaluated main: took 0.1648 seconds

As you can see, every line of profile data contains the word PROFILE.

Let's make sense of what we are seeing. Each entry begins with the date and time the log line was created, followed by Puppet (debug): PROFILE, which tells us this is a profiling debug log entry.

Inside the square brackets you will see apply or, if you are running this on a Puppet master, a seemingly random number. This number is the compilation transaction ID; it matters because multiple catalogs can be compiled at the same time, and the ID gives you an easy way to follow a single catalog compilation through the logs. If you are running puppet apply, you will see only apply, since only one catalog is compiled for the node you are running on.
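On a busy master, lines from several compilations will interleave, so it helps to filter the log down to one transaction ID. A sketch (the ID 1234 and the log path are hypothetical; substitute an ID from your own output and the file you passed to --logdest):

```shell
# Follow a single catalog compilation through the profile log.
# 1234 is a hypothetical transaction ID; the path is wherever --logdest points.
grep 'PROFILE \[1234\]' /var/log/puppet/profile.log
```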

After the transaction ID, you see some numbers with dots between them. These numbers show you the path that Puppet has followed to get to a particular action, and how the various pieces of profiling output relate to each other. The numbers will almost always start with 1, and then numbers like 1.1 and 1.1.1 show that you’re looking deeper down in the tree. You may see this tree grow very large if you are calling the include() function often in a lot of places.

After those numbers comes the action the compiler performed. This could be any of a number of things: a call to the include function, a call to the hiera function, or the evaluation of a resource. Finally, at the very end of the line is the wall-clock time the step took (always in seconds).

Now you can profile each of your manifest compilations to see where things are slowing down, whether in template rendering, a custom function, or a built-in function, and decide where it makes sense to optimize.
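One quick way to find the expensive steps is to sort the PROFILE lines by their duration. Here is a minimal sketch using standard grep/sed/sort; the sample lines are the ones shown above, and profile.log stands in for whatever file you pointed --logdest at:

```shell
# Sample of the profile output shown earlier, written to a demo log file.
cat > profile.log <<'EOF'
2014-10-12 20:53:10 -0700 Puppet (debug): PROFILE [apply] 1 Setup server facts for compiling: took 0.0001 seconds
2014-10-12 20:53:10 -0700 Puppet (debug): PROFILE [apply] 2.1 Compile: Set node parameters: took 0.0003 seconds
2014-10-12 20:53:10 -0700 Puppet (debug): PROFILE [apply] 2.2 Compile: Created settings scope: took 0.0060 seconds
2014-10-12 20:53:10 -0700 Puppet (debug): PROFILE [apply] 2.3 Compile: Evaluated main: took 0.1648 seconds
EOF

# Pull out "<seconds> <id path> <action>" from each PROFILE line and
# sort numerically so the slowest steps come first.
grep 'PROFILE' profile.log \
  | sed -E 's/.*PROFILE \[[^]]*\] ([0-9.]+) (.*): took ([0-9.]+) seconds/\3 \1 \2/' \
  | sort -rn \
  | head -5
```

With the sample data, "Evaluated main" floats to the top, which is exactly the step you would want to dig into further.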

We at Puppet Labs are very interested in seeing real-world profile data to help us improve Puppet for you and other users. If you are interested in helping us out, you can run the script we wrote to gather profile data, generate a file, and send it to us at britt@puppetlabs.com. The script also gathers some hardware facts about the system from Facter, notes whether any external node classifiers are in use, and records roughly how many nodes the installation manages. Once we analyze the data we receive from our community, we will have a good idea of where to focus our efforts to make the best improvements for everyone using Puppet.

If you are interested in helping, check out the Puppet master profiler on GitHub. And if you would like to know more about profiling Puppet, we encourage you to join the discussion in #puppet and #puppet-dev on freenode.

Britt Gresham is an associate software engineer at Puppet Labs. Andy Parker contributed to this blog post.

Learn more

  • The new Puppet Server makes Puppet performance much faster. Here's an interesting read about the platform and architecture choices enabling that performance, and the more flexible metrics reporting that's built in.
  • Five things you can automate with Puppet Enterprise supported modules, in a free resource.
  • If you haven’t used Puppet Enterprise yet, go ahead and download it now. You can try it out on up to 10 nodes for free. And here's the Puppet Enterprise Quick Start Guide to make your entry a smooth one.
  • Check out the rich panoply of learning resources on our website, including training classes and free self-guided learning programs.