Tuning infrastructure nodes

Use these guidelines to configure your installation to maximize its use of available system (CPU and RAM) resources.

PE is composed of multiple services on one or more infrastructure hosts. Each service has multiple settings that can be configured to maximize use of system resources and optimize performance. The default settings for each service are conservative, because the set of services sharing resources on each host varies depending on your infrastructure. Optimized settings vary depending on the complexity and scale of your infrastructure.

Configure settings after an install or upgrade, or after making changes to infrastructure hosts, such as changing the system resources of existing hosts or adding new hosts, including compilers.

Primary server tuning

These are the default and recommended tuning settings for your primary server or disaster recovery replica.

Compiler tuning

These are the default and recommended tuning settings for compilers running the PuppetDB service.

Puppet Server and PuppetDB settings by hardware profile:

4 cores, 8 GB RAM (Default)
  Puppet Server — JRuby max active instances: 3; Java heap: 1536 MB; Reserved code cache: 384 MB
  PuppetDB — Command processing threads: 1; Java heap: 819 MB; Read maximum pool size: 4; Write maximum pool size: 2
6 cores, 12 GB RAM (Default)
  Puppet Server — JRuby max active instances: 4; Java heap: 2048 MB; Reserved code cache: 512 MB
  PuppetDB — Command processing threads: 1; Java heap: 1228 MB; Read maximum pool size: 6; Write maximum pool size: 2
6 cores, 12 GB RAM (Recommended)
  Puppet Server — JRuby max active instances: 4; Java heap: 3072 MB; Reserved code cache: 512 MB
  PuppetDB — Command processing threads: 1; Java heap: 1228 MB; Read maximum pool size: 6; Write maximum pool size: 2
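As a sketch, the recommended settings above for a 6-core, 12 GB compiler could be expressed as Hiera data for that compiler's hierarchy level. The parameter names are the ones documented in the sections below; the pool-size parameters are omitted because they are not covered in this topic, and the exact Hiera layer to use depends on your installation.

```yaml
# Illustrative Hiera data for a 6-core, 12 GB compiler running PuppetDB
# (values from the "Recommended" row above).
puppet_enterprise::master::puppetserver::jruby_max_active_instances: 4
puppet_enterprise::profile::master::java_args:
  Xmx: '3072m'
  Xms: '3072m'
puppet_enterprise::master::puppetserver::reserved_code_cache: 512
puppet_enterprise::puppetdb::command_processing_threads: 1
puppet_enterprise::profile::puppetdb::java_args:
  Xmx: '1228m'
  Xms: '1228m'
```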

Legacy compiler tuning

These are the default and recommended tuning settings for legacy compilers without the PuppetDB service.

Puppet Server settings by hardware profile:

4 cores, 8 GB RAM (Default) — JRuby max active instances: 3; Java heap: 2048 MB; Reserved code cache: 512 MB
6 cores, 12 GB RAM (Default) — JRuby max active instances: 4; Java heap: 2048 MB; Reserved code cache: 512 MB
6 cores, 12 GB RAM (Recommended) — JRuby max active instances: 5; Java heap: 3840 MB; Reserved code cache: 480 MB

Using the puppet infrastructure tune command

The puppet infrastructure tune command outputs optimized settings for PE services based on recommended guidelines.

When you run puppet infrastructure tune, it queries PuppetDB to identify infrastructure hosts and their processor and memory facts, and outputs settings in YAML format for use in Hiera.

The command is compatible with most standard PE configurations, including those with compilers, a replica, or standalone PE-PostgreSQL. The command must be run on your primary server as root.

These are the options commonly used with the puppet infrastructure tune command:
  • --current outputs existing tuning settings from the console and Hiera, and identifies duplicate settings found in both places.
  • --memory_per_jruby <MB> outputs tuning recommendations based on specified memory allocated to each JRuby in Puppet Server. If you implement tuning recommendations using this option, specify the same value for puppetserver_ram_per_jruby.
  • --memory_reserved_for_os <MB> outputs tuning recommendations based on specified RAM reserved for the operating system.
  • --common outputs common settings — identical on several nodes — separately from node-specific settings.

For more information about the tune command, run puppet infrastructure tune --help.
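For example, running puppet infrastructure tune --memory_per_jruby 1024 prints settings as Hiera-ready YAML. The output below is a hypothetical sketch only; the parameters and values you see depend entirely on your infrastructure.

```yaml
# Hypothetical tune output for one infrastructure host.
# Actual parameter names and values vary by installation.
puppet_enterprise::puppetserver_ram_per_jruby: 1024
puppet_enterprise::master::puppetserver::jruby_max_active_instances: 4
puppet_enterprise::profile::master::java_args:
  Xmx: '4096m'
  Xms: '4096m'
```

Because --memory_per_jruby was specified, the recommendations assume 1024 MB per JRuby, and puppetserver_ram_per_jruby must be set to the same value if you apply them.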

Tuning parameters

Tuning parameters let you customize PE components for maximum performance and hardware resource utilization.

Specify tuning parameters using Hiera for the best scalability and consistency. If you must use the console, add the parameter to the appropriate infrastructure node group using the method suitable for the parameter type:
  • Specify puppet_enterprise::profile parameters, including java_args, shared_buffers, and work_mem, as parameters of their class.
  • Specify all other tuning parameters as configuration data.
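In Hiera, both parameter types are written the same way as plain key-value data; the distinction only matters when entering them in the console. A minimal sketch, using two parameters documented later in this topic (values are illustrative):

```yaml
# A profile class parameter (in the console: set as a class parameter):
puppet_enterprise::profile::database::shared_buffers: 4096

# Any other tuning parameter (in the console: set as configuration data):
puppet_enterprise::puppetdb::command_processing_threads: 2
```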

RAM per JRuby

The puppetserver_ram_per_jruby setting determines how much RAM is allocated to each JRuby instance in Puppet Server. In installations with compilers running the PuppetDB service, this setting is a good starting point for tuning your installation, because the value you specify is factored into several other parameters, including JRuby max active instances and heap allocation on compilers running PuppetDB.

Parameter
puppet_enterprise::puppetserver_ram_per_jruby
Default value
512 MB
Accepted values
Integer (MB)
How to calculate

If you have complex Hiera code, many environments or modules, or large reports, you might need to increase this setting. You can generally achieve good performance by allocating up to around 2 GB per JRuby. If 2 GB is inadequate, you might benefit from enabling environment caching.

Console node group
PE Master
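For instance, to raise the per-JRuby allocation from the 512 MB default to 1 GB (an illustrative value within the up-to-2-GB guideline above), the Hiera data would be:

```yaml
# Illustrative: allocate 1 GB of RAM to each JRuby instance.
puppet_enterprise::puppetserver_ram_per_jruby: 1024
```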

JRuby max active instances

The jruby_max_active_instances setting controls the maximum number of JRuby instances to allow on the Puppet Server, or how many plans can run concurrently in the orchestrator. Each plan uses one JRuby instance, and nested plans use their calling plan's JRuby.

Parameter
Puppet Server — puppet_enterprise::master::puppetserver::jruby_max_active_instances
Tip: This setting is referred to as max_active_instances in the pe-puppet-server.conf file and in open source Puppet. It's the same setting.
Orchestration services — puppet_enterprise::profile::orchestrator::jruby_max_active_instances
Default value
Primary server — Number of CPUs - 1, minimum 1, maximum 4
Compilers — Number of CPUs x 0.75, minimum 1, maximum 24
Orchestration services — Orchestrator heap size (java_args) / 1024, minimum 1
Accepted values
Integer
How to calculate
Puppet Server — As a conservative estimate, one JRuby process uses approximately 512 MB of RAM. Four JRuby instances work for most environments. Because increasing the maximum number of JRuby instances also increases the amount of RAM used by Puppet Server, make sure the Puppet Server heap size (java_args) is scaled proportionally. For example, if you set jruby_max_active_instances to 4, set Puppet Server java_args to at least 2 GB.
Orchestration services — Setting the orchestrator heap size (java_args) automatically sets the number of JRuby instances available inside orchestrator. For example, setting the orchestrator heap size to 5120 MB enables a maximum of five JRuby instances, or plans, to run concurrently. Enabling too many JRuby instances might reduce system performance, especially if the plans you're running use a lot of memory. Increase the orchestrator heap size if you notice poor performance while running plans.
Note: JRuby instances are used only by plans running in the orchestrator, and they are deallocated when a plan finishes running. Tasks do not use JRuby instances.
Console node group
Puppet Server — PE Master or, for compilers running the PuppetDB service, PE Compiler
Orchestration services — PE Orchestrator
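Following the Puppet Server example in the calculation above (four instances paired with at least a 2 GB heap), a minimal Hiera sketch might be:

```yaml
# Four JRuby instances with a proportionally scaled 2 GB Puppet Server heap.
puppet_enterprise::master::puppetserver::jruby_max_active_instances: 4
puppet_enterprise::profile::master::java_args:
  Xmx: '2048m'
  Xms: '2048m'
```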

JRuby max requests per instance

The jruby_max_requests_per_instance setting determines the maximum number of HTTP requests a JRuby handles before it's terminated. When a JRuby instance reaches this limit, it's flushed from memory and replaced with a fresh one.

Parameter
puppet_enterprise::master::puppetserver::jruby_max_requests_per_instance
Tip: This setting is referred to as max_requests_per_instance in the pe-puppet-server.conf file and in open source Puppet. It's the same setting.
Default value
100,000
Accepted values
Integer
How to calculate
More frequent JRuby flushing can help address memory leaks, because it prevents any one interpreter from consuming too much RAM. However, performance is reduced slightly each time a new JRuby instance loads. Ideally, set this parameter to get a new interpreter no more than every few hours. There are multiple interpreters running with requests balanced across them, so the lifespan of each interpreter varies.
Console node group
PE Master

Java heap

The java_args setting is used to specify heap size: the amount of memory that each Java process is allowed to request from the operating system. You can specify heap size for each PE service that uses Java, including Puppet Server, PuppetDB, and console and orchestration services.

Heap size is specified as Xmx and Xms, the maximum and minimum heap size, respectively. Typically, the maximum and minimum are set to the same value so that heap size is fixed, for example { 'Xmx' => '2048m', 'Xms' => '2048m' }.

Parameters
Puppet Server — puppet_enterprise::profile::master::java_args
Tip: This setting might be referred to as puppet_enterprise::master::java_args or puppet_enterprise::master::puppetserver::java_args. They are all the same thing: profile::master filters down to master, which filters down to master::puppetserver.
PuppetDB — puppet_enterprise::profile::puppetdb::java_args
Console services — puppet_enterprise::profile::console::java_args
Orchestration services — puppet_enterprise::profile::orchestrator::java_args
Default values
Puppet Server — 2 GB
PuppetDB — 256 MB
Console services — 256 MB
Orchestration services — 704 MB
Accepted values
JSON string
Console node group
Puppet Server — PE Master or PE Compiler
PuppetDB — PE PuppetDB or, for compilers running the PuppetDB service, PE Compiler
Console services — PE Console
Orchestration services — PE Orchestrator
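For example, mirroring the orchestrator scenario described earlier (a 5120 MB heap allowing up to five concurrent plans), the Hiera data might look like this, with Xmx and Xms set to the same value so the heap size is fixed:

```yaml
# Illustrative: fixed 5 GB orchestrator heap, enabling up to five
# concurrent plan runs (heap / 1024 = 5 JRuby instances).
puppet_enterprise::profile::orchestrator::java_args:
  Xmx: '5120m'
  Xms: '5120m'
```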

Puppet Server reserved code cache

The reserved_code_cache setting specifies the maximum space available to store the Puppet Server code cache during catalog compilation.

Parameter
puppet_enterprise::master::puppetserver::reserved_code_cache
Default value
Primary server — If total RAM is less than 2 GB, the Java default is used. Otherwise, 512 MB.
Compilers — Number of JRuby instances x 128 MB, minimum 128 MB, maximum 2048 MB
Accepted values
Integer (MB)
How to calculate
JRuby requires an estimated 128 MB of cache space per instance. To determine the minimum amount of space needed, multiply the number of JRuby instances by 128 MB.
Console node group
PE Master or, for compilers running the PuppetDB service, PE Compiler
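For example, a Puppet Server running four JRuby instances needs at least 4 x 128 MB = 512 MB of code cache. As Hiera data:

```yaml
# Minimum code cache for four JRuby instances: 4 x 128 MB = 512 MB.
puppet_enterprise::master::puppetserver::reserved_code_cache: 512
```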

PuppetDB command processing threads

The command_processing_threads setting specifies how many command processing threads PuppetDB uses to sort incoming data. Each thread can process a single command at a time.

Parameter
puppet_enterprise::puppetdb::command_processing_threads
Default value
Primary server — Number of CPUs x 0.5, minimum 1
Compilers — Number of CPUs x 0.25, minimum 1, maximum 3
Accepted values
Integer
How to calculate

If the PuppetDB queue is backing up and you have CPU cores to spare, increasing the number of threads can help process the backlog more rapidly. Avoid allocating all of your CPU cores for command processing, as doing so can starve other PuppetDB subsystems of resources and actually decrease throughput.

Console node group
PE PuppetDB or, for compilers running the PuppetDB service, PE Compiler

PostgreSQL shared buffers

The shared_buffers setting specifies the amount of memory the PE-PostgreSQL server uses for shared memory buffers.

Parameter
puppet_enterprise::profile::database::shared_buffers
Default value
Available RAM x 0.25, minimum 32 MB, maximum 4096 MB
Accepted values
Integer (MB)
How to calculate

The default of 25 percent of available RAM is suitable for most installations, but you might see improved console performance by increasing shared_buffers up to 40 percent of available RAM.

Console node group
PE Database

PostgreSQL working memory

The work_mem setting specifies the maximum amount of memory used for queries before writing to temporary files.

Parameter
puppet_enterprise::profile::database::work_mem
Default value
(Available RAM / 1024 / 8) + 0.5, minimum 4, maximum 16
Accepted values
Integer (MB)
Console node group
PE Database
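As a closing sketch, the two PE-PostgreSQL parameters above might be set together in Hiera for the PE Database node group. The values are illustrative, assuming a host with 16 GB of RAM: shared_buffers at the default 25 percent ratio (capped at 4096 MB), and work_mem at its computed minimum of 4 MB ((16384 / 1024 / 8) + 0.5 = 2.5, raised to the minimum of 4).

```yaml
# Illustrative values for a 16 GB PE-PostgreSQL host.
puppet_enterprise::profile::database::shared_buffers: 4096
puppet_enterprise::profile::database::work_mem: 4
```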