Tune infrastructure nodes
Use these guidelines to configure your Puppet Enterprise (PE) installation to maximize use of available system resources (CPU and RAM).
PE comprises multiple services running on one or more infrastructure hosts. Services running on the same host share the host's resources. You can configure each service's settings to maximize use of system resources and optimize performance.
Each service's default settings are conservative, and your optimal settings depend on the complexity and scale of your infrastructure.
Configure these settings after you install PE, upgrade PE, or make changes to infrastructure hosts (such as changing existing hosts' system resources, adding new hosts, or adding or changing compilers).
Primary server tuning
These are the default and recommended tuning settings for your primary server or disaster recovery replica.
| Hardware | Setting category | Puppet Server: JRuby max active instances | Puppet Server: Java heap (MB) | Puppet Server: Reserved code cache (MB) | PuppetDB: Command processing threads | PuppetDB: Java heap (MB) | Console: Java heap (MB) | Orchestrator: Java heap (MB) | Orchestrator: JRuby max active instances | PostgreSQL: Shared buffers (MB) | PostgreSQL: Work memory (MB) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 4 cores, 8 GB RAM | Default | 3 | 2048 | 512 | 2 | 256 | 256 | 704 | 1 | 976 | 4 |
| | Recommended | 2 | 1024 | 192 | 1 | 819 | 655 | 819 | 1 | 1638 | 4 |
| | With legacy compilers | 2 | 1024 | 192 | 2 | 1228 | 655 | 819 | 1 | 1638 | 4 |
| 6 cores, 10 GB RAM | Default | 4 | 2048 | 512 | 3 | 256 | 256 | 704 | 1 | 1488 | 4 |
| | Recommended | 3 | 2304 | 288 | 1 | 1024 | 819 | 1024 | 1 | 2048 | 4 |
| | With legacy compilers | 2 | 1536 | 192 | 3 | 1536 | 819 | 1024 | 1 | 2048 | 4 |
| 8 cores, 12 GB RAM | Default | 4 | 2048 | 512 | 4 | 256 | 256 | 704 | 1 | 2000 | 4 |
| | Recommended | 3 | 2304 | 288 | 2 | 1228 | 983 | 1228 | 1 | 2457 | 4 |
| | With legacy compilers | 3 | 2304 | 288 | 4 | 1843 | 983 | 1228 | 1 | 2457 | 4 |
| 10 cores, 16 GB RAM | Default | 4 | 2048 | 512 | 5 | 256 | 256 | 704 | 1 | 3024 | 4 |
| | Recommended | 5 | 3840 | 480 | 2 | 1638 | 1024 | 1638 | 2 | 3276 | 4 |
| | With legacy compilers | 4 | 3072 | 384 | 5 | 2457 | 1024 | 1638 | 2 | 3276 | 4 |
| 12 cores, 24 GB RAM | Default | 4 | 2048 | 512 | 6 | 256 | 256 | 704 | 1 | 4096 | 4 |
| | Recommended | 8 | 6144 | 768 | 3 | 2457 | 1024 | 2457 | 3 | 4915 | 4 |
| | With legacy compilers | 5 | 3840 | 480 | 6 | 3686 | 1024 | 2457 | 3 | 4915 | 4 |
| 16 cores, 32 GB RAM | Default | 4 | 2048 | 512 | 8 | 256 | 256 | 704 | 1 | 4096 | 4 |
| | Recommended | 9 | 9216 | 864 | 4 | 3276 | 1024 | 3276 | 3 | 6553 | 4 |
| | With legacy compilers | 7 | 7168 | 672 | 8 | 4915 | 1024 | 3276 | 3 | 6553 | 4 |
Compiler tuning
These are the default and recommended tuning settings for compilers running the PuppetDB service.
| Hardware | Setting category | Puppet Server: JRuby max active instances | Puppet Server: Java heap (MB) | Puppet Server: Reserved code cache (MB) | PuppetDB: Command processing threads | PuppetDB: Java heap (MB) | PuppetDB: Read maximum pool size | PuppetDB: Write maximum pool size |
|---|---|---|---|---|---|---|---|---|
| 4 cores, 8 GB RAM | Default | 3 | 1536 | 384 | 1 | 819 | 4 | 2 |
| 6 cores, 12 GB RAM | Default | 4 | 2048 | 512 | 1 | 1228 | 6 | 2 |
| | Recommended | 4 | 3072 | 512 | 1 | 1228 | 6 | 2 |
Legacy compiler tuning
These are the default and recommended tuning settings for legacy compilers without the PuppetDB service.
| Hardware | Setting category | Puppet Server: JRuby max active instances | Puppet Server: Java heap (MB) | Puppet Server: Reserved code cache (MB) |
|---|---|---|---|---|
| 4 cores, 8 GB RAM | Default | 3 | 2048 | 512 |
| | Recommended | 3 | 1536 | 288 |
| 6 cores, 12 GB RAM | Default | 4 | 2048 | 512 |
| | Recommended | 5 | 3840 | 480 |
The puppet infrastructure tune command

The `puppet infrastructure tune` command outputs optimized settings for Puppet Enterprise (PE) services based on recommended guidelines.
Running `puppet infrastructure tune` queries PuppetDB to identify processor and memory facts about your infrastructure hosts. The command outputs settings in YAML format for you to use in Hiera.
This command is compatible with most standard PE configurations, including those with compilers, a replica, or standalone PostgreSQL.
You must run this command on your primary server as root. Using `sudo` for elevated privileges is not sufficient. Instead, start a root session by running `sudo su -`, and then run the `puppet infrastructure` command.

These options can be used with the `puppet infrastructure tune` command:
- `--current` outputs existing tuning settings from the PE console and Hiera. This option also identifies duplicate settings declared in both the console and Hiera.
- `--memory_per_jruby <MB>` outputs tuning recommendations based on the specified memory allocated to each JRuby in Puppet Server. If you implement tuning recommendations using this option, specify the same value for `puppetserver_ram_per_jruby`.
- `--memory_reserved_for_os <MB>` outputs tuning recommendations based on the specified RAM reserved for the operating system.
- `--common` outputs common settings, which are identical on several nodes, separately from node-specific settings.

For more information about the tune command, run `puppet infrastructure tune --help`.
Note: The `puppet infrastructure tune` command fails if `environmentpath` (in your `puppet.conf` file) is set to multiple environments. Comment out this setting before running this command. For details about this setting, refer to `environmentpath` in the open source Puppet documentation.

Tuning parameters
Configure tuning parameters to customize your PE service settings for optimum performance and hardware resource utilization.
Specify tuning parameters in Hiera for the best scalability and consistency. You can learn about Hiera in the Puppet documentation.
- Specify `puppet_enterprise::profile` parameters (including `java_args`, `shared_buffers`, and `work_mem`) as parameters of their class.
- Specify all other tuning parameters as configuration data.
How to configure PE explains the different ways you can configure PE parameters.
RAM per JRuby
The `puppetserver_ram_per_jruby` setting determines how much RAM is allocated to each JRuby instance in Puppet Server.
You might need to change this setting if you have complex Hiera code, many environments or modules, or large reports.
- Console node group: PE Master
- Parameter: `puppet_enterprise::puppetserver_ram_per_jruby`
- Default value: 512 MB
- Accepted values: An integer representing a number of MB
- How to calculate: You can usually achieve good performance by allocating around 2 GB per JRuby.
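For example, allocating the roughly 2 GB per JRuby suggested above might look like this in Hiera (the value is illustrative, not a prescription):

```yaml
# Hiera data (for example, in the layer you use for PE settings).
# ~2 GB per JRuby instance, per the guidance above.
puppet_enterprise::puppetserver_ram_per_jruby: 2048
```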
JRuby max active instances
The `jruby_max_active_instances` setting can be set in multiple places. It controls the maximum number of JRuby instances allowed on the Puppet Server and how many plans can run concurrently in the orchestrator.
Puppet Server jruby_max_active_instances

- Console node group: If Puppet Server runs on the primary server: PE Master
- Parameter: `puppet_enterprise::master::puppetserver::jruby_max_active_instances`. Tip: This parameter is the same as the `max_active_instances` parameter in the pe-puppet-server.conf settings and in open source Puppet.
- Default value: If Puppet Server runs on the primary server, the default value is the number of CPUs minus 1, with a minimum of `1` and a maximum of `4`.
- Accepted values: An integer representing a number of JRuby instances
- How to calculate: As a conservative estimate, one JRuby process uses approximately 512 MB of RAM. For most installations, four JRuby instances are adequate. Important: Because increasing the maximum number of JRuby instances also increases the amount of RAM used by Puppet Server, make sure to proportionally scale the Puppet Server Java heap size (`java_args`). For example, if you set `jruby_max_active_instances` to 4, set Puppet Server's `java_args` to at least 2 GB.
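As a sketch, pairing four JRuby instances with a proportionally scaled heap could look like this in Hiera. The `jruby_max_active_instances` key is the parameter named above; the `java_args` key name is an assumption, so confirm it against your PE version's parameter reference:

```yaml
# Four JRuby instances at roughly 512 MB each, so the Puppet Server
# heap is scaled to at least 2 GB, per the note above.
puppet_enterprise::master::puppetserver::jruby_max_active_instances: 4
# Assumed parameter name for the Puppet Server heap; verify for your
# PE version before using.
puppet_enterprise::profile::master::java_args:
  Xmx: '2048m'
  Xms: '2048m'
```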
Orchestrator jruby_max_active_instances
Running a plan consumes one JRuby instance. If a plan calls other plans, the nested plans use the parent plan's JRuby instance. JRuby instances are deallocated once a plan finishes running, and tasks are not affected by JRuby availability.
- Console node group: PE Orchestrator
- Parameter: `puppet_enterprise::profile::orchestrator::jruby_max_active_instances`
- Default value: The orchestrator heap size (`java_args`) divided by 1024, with a minimum of `1`.
- Accepted values: An integer representing a number of JRuby instances
- How to calculate: Because the `jruby_max_active_instances` default value is derived from the orchestrator heap size (`java_args`), changing the orchestrator heap size automatically changes the number of JRuby instances available to the orchestrator. For example, setting the orchestrator heap size to 5120 MB allows up to five JRuby instances (or plans) to run concurrently.
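Because the default is derived from the heap size, sizing the orchestrator heap at 5120 MB implicitly allows five concurrent plans. A minimal sketch, assuming the orchestrator `java_args` key name shown (verify it for your PE version):

```yaml
# A 5120 MB orchestrator heap yields a default of 5120 / 1024 = 5
# JRuby instances, so up to five plans can run concurrently.
puppet_enterprise::profile::orchestrator::java_args:
  Xmx: '5120m'
  Xms: '5120m'
# Or set the instance count explicitly with the parameter above:
# puppet_enterprise::profile::orchestrator::jruby_max_active_instances: 5
```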
JRuby max requests per instance
The `jruby_max_requests_per_instance` setting determines the maximum number of HTTP requests a JRuby instance handles before it's terminated. When a JRuby instance reaches this limit, it's flushed from memory and replaced with a fresh one.
- Console node group: PE Master
- Parameter: `puppet_enterprise::master::puppetserver::jruby_max_requests_per_instance`. Tip: This parameter is the same as the `max_requests_per_instance` parameter in the pe-puppet-server.conf settings and in open source Puppet.
- Default value: `100000`
- Accepted values: An integer representing a number of HTTP requests
- How to calculate: More frequent JRuby flushing can help address memory leaks, because it prevents any one interpreter from consuming too much RAM. However, performance is reduced slightly each time a new JRuby instance loads. Therefore, set this parameter so that a new interpreter is created no more than once every few hours.
Java heap
The `java_args` settings specify heap size, which is the amount of memory that each Java process can request from the operating system. You can specify a heap size for each PE service that uses Java, including Puppet Server, PuppetDB, the console, and the orchestrator. Heap size settings include a maximum (`Xmx`) and minimum (`Xms`) value. Usually, the maximum and minimum are the same so that the heap size is fixed, for example: `{ 'Xmx' => '2048m', 'Xms' => '2048m' }`
- Puppet Server Java heap: PE Master or PE Compiler node group
- PuppetDB Java heap: PE Compiler node group if the PuppetDB service runs on compilers; otherwise, the PE PuppetDB node group
- Console services Java heap: PE Console node group
- Orchestrator Java heap: PE Orchestrator node group
Puppet Server reserved code cache
The `reserved_code_cache` setting specifies the maximum space available to store the Puppet Server code cache during catalog compilation.
- Console node group: If the PuppetDB service runs on compilers, set this parameter on the PE Compiler node group. Otherwise, set this parameter on the PE Master node group.
- Parameter: `puppet_enterprise::master::puppetserver::reserved_code_cache`
- Default value: If Puppet Server runs on your primary server: if total RAM is less than 2 GB, the Java default is used; otherwise, the default value is 512 MB.
- Accepted values: An integer representing a number of MB
- How to calculate: JRuby requires an estimated 128 MB of cache space for each instance. To determine the minimum amount of space needed, multiply the number of JRuby instances by 128 MB.
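Following that calculation, a server running four JRuby instances needs at least 4 x 128 MB = 512 MB of cache space, for example:

```yaml
# Minimum cache space: 4 JRuby instances x 128 MB each = 512 MB
puppet_enterprise::master::puppetserver::reserved_code_cache: 512
```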
PuppetDB command processing threads
The `command_processing_threads` setting specifies how many command processing threads PuppetDB uses to sort incoming data. Each thread can process one command at a time.
- Console node group: If the PuppetDB service runs on compilers, set this parameter on the PE Compiler node group. Otherwise, set this parameter on the PE PuppetDB node group.
- Parameter: `puppet_enterprise::puppetdb::command_processing_threads`
- Default value: If the PuppetDB service runs on compilers, the default value is the number of CPUs multiplied by 0.25, with a minimum of `1` and a maximum of `3`.
- Accepted values: An integer representing a number of threads
- How to calculate: If the PuppetDB queue is backing up and you have CPU cores to spare, increasing the number of threads can help process the backlog more rapidly.
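For example, on a host with CPU headroom and a backed-up queue, you might raise the thread count above the default cap of 3 (the value shown is illustrative):

```yaml
# Extra worker threads to drain a backed-up PuppetDB queue; only
# raise this when spare CPU cores are available.
puppet_enterprise::puppetdb::command_processing_threads: 4
```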
PostgreSQL max connections
The `max_connections` setting determines the maximum number of concurrent connections allowed to the PE-PostgreSQL server. Configure it to accommodate all infrastructure nodes running PuppetDB.
- Console node group: PE Database
- Parameter: `puppet_enterprise::profile::database::max_connections`
- Default value: `400`
- Accepted values: An integer representing the number of concurrent connections allowed. The minimum is `200`.
- How to calculate: Set the `max_connections` parameter to a number greater than the sum of read and write connections across all PuppetDB instances in your PE installation, including compilers and the primary server. The connection count from each instance should equal `(command processing threads * 2) + number of JRuby instances`. Rule out any underlying performance issues before adjusting `max_connections`.
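Applying that formula to a hypothetical installation with a primary server and two compilers, each running PuppetDB with 2 command processing threads and 4 JRuby instances:

```yaml
# Per PuppetDB instance: (2 threads * 2) + 4 JRubies = 8 connections
# Three instances (primary + two compilers): 3 * 8 = 24 connections
# Pick a value comfortably above that sum; the minimum allowed is 200.
puppet_enterprise::profile::database::max_connections: 400
```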
PostgreSQL shared buffers
The `shared_buffers` setting specifies the amount of memory the PE-PostgreSQL server uses for shared memory buffers.
- Console node group: PE Database
- Parameter: `puppet_enterprise::profile::database::shared_buffers`
- Default value: The available RAM multiplied by 0.25, with a minimum of 32 MB and a maximum of 4096 MB
- Accepted values: An integer representing a number of MB
- How to calculate: The default value is suitable for most installations, but console performance might improve if you increase `shared_buffers` up to 40% of available RAM.
PostgreSQL working memory
The `work_mem` setting specifies the maximum amount of memory used for queries before writing to temporary files.
- Console node group: PE Database
- Parameter: `puppet_enterprise::profile::database::work_mem`
- Default value: Based on the following calculation: `(Available RAM / 1024 / 8) + 0.5`
- Accepted values: An integer representing a number of MB
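As a worked example for a database host with 16384 MB of RAM, using the formulas in this section and the previous one (values illustrative):

```yaml
# Host with 16384 MB RAM:
#   default shared_buffers = 16384 * 0.25 = 4096 MB (also the cap)
#   default work_mem       = (16384 / 1024 / 8) + 0.5 = 2.5 MB
# Optionally raise shared_buffers toward 40% of RAM (6553 MB) if
# console performance lags; 4 MB matches the work memory value used
# in the tuning tables earlier on this page.
puppet_enterprise::profile::database::shared_buffers: 6553
puppet_enterprise::profile::database::work_mem: 4
```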
PostgreSQL WAL disk space
The `max_slot_wal_keep_size` setting specifies the maximum allocated WAL disk space for each replication slot. This prevents the `pg_wal` directory from growing infinitely.
If you have set up disaster recovery, this setting prevents an unreachable replica from consuming all of your primary server's disk space when the PE-PostgreSQL service on the primary server attempts to retain change logs that the replica hasn't acknowledged.
If your replica is offline long enough to reach the `max_slot_wal_keep_size` value, replication slots are dropped to allow the primary server to continue functioning normally. When the replica comes back online, you'll know replication slots were dropped if `puppet infra status` returns a message that replication is inactive for PostgreSQL's status. To restore PostgreSQL replication, run `puppet infra reinitialize replica` on your replica.
- Console node group: PE Database
- Parameter: `puppet_enterprise::profile::database::max_slot_wal_keep_size`
- Default value: 12288 MB (twice the size of the `max_wal_size` parameter). Important: If you don't have enough disk space for the default setting, you must adjust this value.
- Accepted values: An integer representing a number of MB
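A sketch for constraining WAL retention when disk space on the primary server is tight (the value shown is illustrative):

```yaml
# Cap WAL retained per replication slot at 8 GB instead of the default
# 12288 MB. If a disconnected replica exceeds this, its replication
# slot is dropped rather than exhausting the primary's disk.
puppet_enterprise::profile::database::max_slot_wal_keep_size: 8192
```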