How To Maintain Your Puppet Enterprise Console

The Puppet Enterprise Console is the central place where you can manage and analyze elements of your infrastructure. Since the PE Console contains so much useful information about your infrastructure, there are a few things you should do to ensure it is well maintained and to prevent future issues with performance or disk space.

Following these steps is an easy way to prevent an unwanted outage in your Puppet Enterprise infrastructure.

1. How Do I Prevent the Console Database From Growing Indefinitely?

You will want to run the reports:prune rake task frequently to keep the size of your database down. I recommend running this task daily to keep the size of the database consistent on disk.

If you are running Puppet Enterprise 3.2, you can use the pe_console_prune class that ships with it. If you installed Puppet Enterprise 3.2 as a new installation, the pe_console_prune class will already be applied to your console node. However, if you upgraded from a prior version, the class will be available on your master, but will not yet have been applied to your console node.
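If you're not sure whether the class was applied after an upgrade, one quick check is to look for it in the list of classes from the console node's last puppet run. This is just a sketch, and it assumes PE's default agent vardir of /var/opt/lib/pe-puppet:

sudo grep pe_console_prune /var/opt/lib/pe-puppet/state/classes.txt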

If you are currently on a version of Puppet Enterprise prior to 3.2, you can still install your own cron job to run the reports:prune rake task on a daily basis.

Here is an example invocation of the rake task to prune all reports older than 30 days:

sudo /opt/puppet/bin/rake -f /opt/puppet/share/puppet-dashboard/Rakefile RAILS_ENV=production reports:prune upto=30 unit=day
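To run this daily from cron, an entry along these lines in root's crontab (edited with sudo crontab -e) would work. The 2 a.m. schedule is arbitrary, and the upto and unit values are simply the 30-day retention from the example above:

0 2 * * * /opt/puppet/bin/rake -f /opt/puppet/share/puppet-dashboard/Rakefile RAILS_ENV=production reports:prune upto=30 unit=day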

2. How Do I Reclaim Disk Space After Pruning My Console Reports?

If you have never run the reports:prune rake task before, your database is probably taking up a lot of disk space, and you may want to reclaim some of that space from your database files. The easiest way to do this is to run the db:raw:optimize rake task after running the reports:prune rake task.

Be warned, though: the optimization process rewrites the database files, so it needs free disk space roughly equal to the size of the data that remains after pruning. For example, if your database was 30 GB and you deleted 10 GB of data by running the reports:prune task, you will need about 20 GB of free disk space in order to run the db:raw:optimize rake task.
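Before you run the optimization, it's worth comparing the console database's current size against the free space on the database filesystem. Here's a minimal sketch, assuming the PE 3.x defaults of a console database named console, the pe-postgres service account, and the bundled psql:

sudo -u pe-postgres /opt/puppet/bin/psql -d console -c "SELECT pg_size_pretty(pg_database_size('console'));"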

Running this task with the vacuum flag is not recommended on a regular basis, but is necessary for reclaiming disk space in situations like this.

sudo /opt/puppet/bin/rake -f /opt/puppet/share/puppet-dashboard/Rakefile RAILS_ENV=production db:raw:optimize[reindex+vacuum] 

3. What Should I Do If I See Pending Tasks In the Console?

When Puppet Enterprise agents submit reports to the puppet master, the master passes them off to the console so they can be used in all of the wonderful reporting the console provides. If you are finding that your console is processing reports, but cannot keep up with the number of reports coming in, you may want to increase the number of delayed_job workers for your console.

The delayed_job workers process reports that the master passes off to the console. By default, there are two delayed_job workers, but if you have a large infrastructure you may want to increase the number of delayed_job workers up to the number of CPUs you have on the server. Having more delayed_job workers than CPUs would likely be counter-productive; I recommend adding one delayed_job worker process at a time until you’ve reached your desired performance.

Here are more details on how to add delayed_job worker processes.
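If you want a quick look at what's running before and after a change, here's a sketch. The service name below matches PE 3.x, but the exact file and setting that control the worker count vary by platform and version, so follow the linked documentation for your setup:

ps -ef | grep '[d]elayed_job' | wc -l               # number of worker processes currently running
sudo service pe-puppet-dashboard-workers restart    # restart the workers after changing their count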

4. How Do I Back Up All the Puppet Enterprise Databases?

We've recently added light documentation on one way you can back up your Puppet Enterprise databases: How To Backup the PE Databases

It is imperative that you back up your Puppet Enterprise databases. The console database holds all of the classification information for your nodes that you set up through the console web interface; if you were to lose it, you would lose the classification that defines your infrastructure. PuppetDB holds reporting information and, more importantly, stores exported resources if you are using them. Exported resources can always be re-exported if you lose your PuppetDB database, but your infrastructure could be in an unknown state while they are being re-exported into a fresh, empty PuppetDB.
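As a minimal sketch of one way to dump all of the PE PostgreSQL databases in a single pass (the linked documentation is the authoritative procedure; this assumes the PE 3.x pe-postgres service account and the bundled pg_dumpall, and you should copy the dump file off the box afterwards):

sudo -u pe-postgres /opt/puppet/bin/pg_dumpall -c -f /tmp/pe_databases_backup.sql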

5. How Much Disk Space Do I Need for My Puppet Enterprise Databases?

There's no exact method to answer this question. I am currently recommending between 100 GB and 200 GB for a several-hundred-node deployment.

The truth is, how much disk space you need completely depends on:

  • How many nodes you have in your deployment
  • How many resources you manage with Puppet
  • How often the puppet agent runs (the runinterval setting in puppet.conf)
  • How many days of report history you keep in the console (the number of days you maintain with the reports:prune task mentioned earlier)
  • The PuppetDB report-ttl setting

You can get a good estimate of how much disk space you need by measuring the growth of the database files over a one-day period. Here’s how:

  • Multiply the one-day growth of the PuppetDB database by the number of days in your report-ttl setting.
  • Multiply the one-day growth of the console database by the number of days of reports you choose to maintain.
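As a purely illustrative calculation (the growth figures here are invented; substitute your own one-day measurements): if the console database grows by 150 MB per day and you prune reports older than 30 days, budget roughly 150 MB × 30 ≈ 4.5 GB for it; if PuppetDB grows by 80 MB per day with a 14-day report-ttl, budget roughly 80 MB × 14 ≈ 1.1 GB; then add generous headroom for future growth and for the optimization step described earlier.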

You can also expect your database to continue growing even after you've reached the number of days of reports you choose to maintain. This is because you're likely to bring more resources and nodes under Puppet management over time, as the vast majority of our customers do.

Check Your Disk Space Now!

If you haven't checked the amount of free disk space available for your Puppet databases, I encourage you to check right now, to make sure the databases have adequate room for growth and that you've followed the steps above so they won't grow forever.
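For example, a quick check on the console node (the path assumes the PE 3.x default PostgreSQL data location under /opt/puppet/var/lib/pgsql; adjust it if you have relocated your database files):

df -h /opt/puppet/var/lib/pgsql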

Happy maintaining!
