What's new since PE 2019.8

This page describes the new features, enhancements, deprecations, and other notable changes since the previous LTS release (2019.8), specifically PE versions 2021.0 through 2021.7. The previous LTS release stream comprised PE versions 2019.8.0 through 2019.8.12.

This page does not include resolved issues because most bug fixes were applied to both the 2019.8.z and 2021.y streams at the time of resolution, except those that only impacted one stream or the other. For information about outstanding issues in 2021.7.z, refer to the PE known issues. For resolved issues included in the first release of the 2021.7 series, go to the PE 2021.7 release notes.

Some, but not all, features and changes described on this page applied to both the 2019.8.z and 2021.y streams.

This list does not specify the interim release number for each feature or change. You can find the original release notes for each interim release, including bug fixes resolved in 2021.0 through 2021.6 and all release notes for the 2019.8.z series, in the Documentation for other PE versions.

Important: Before upgrading to 2021.7:
  • Review the Upgrade cautions for important information that could impact your upgrade.
  • Get familiar with the latest System requirements including hardware requirements, supported operating systems, supported browsers, and network configurations.


PostgreSQL upgrade
PE version 2021.6 upgraded PostgreSQL to version 14. When you upgrade to 2021.7, your PostgreSQL instance is migrated from version 11 to version 14.
CAUTION: Review information about the PostgreSQL 14 upgrade in PE 2021.6 before upgrading.

If PE does not manage your PostgreSQL instance, you must Upgrade your unmanaged PostgreSQL installation before upgrading your primary server, compilers, and agents to 2021.7.

Lockless code deploys are no longer experimental
Since lockless code deploys debuted as an experimental feature, we've worked with customers to refine them over several releases. The feature has been stable since the last bug fix in version 2021.2, and we're confident lockless code deploys are ready for prime time.
Run plans without blocking code deployments
You can allow the orchestrator to run plans without blocking your code deployments, and you can deploy code without waiting for plans to finish. For instructions, refer to Running plans alongside code deployments.
Important: When you enable this feature, Puppet functions or plans that call other plans might behave unexpectedly if a code deployment occurs while the plan is running.
Force stop in-progress Puppet runs
By default, POST /command/stop prevents new runs from starting, but allows in-progress runs to finish. Now you can use the force option to block new runs and stop in-progress runs. This is useful, for example, if you need to stop a task that is hanging.
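As a sketch, a force-stop request might look like the following; the host, job ID, and token handling are illustrative, and the endpoint path assumes the standard orchestrator API layout:

```shell
# Hypothetical primary server and job ID; substitute your own values.
PE_HOST="${PE_HOST:-puppet.example.com}"
PAYLOAD='{"job": "1234", "force": true}'

# Uncomment to send the request with a valid RBAC token:
# curl -sk -X POST "https://${PE_HOST}:8143/orchestrator/v1/command/stop" \
#   -H "Content-Type: application/json" \
#   -H "X-Authentication: $(puppet-access show)" \
#   -d "${PAYLOAD}"
echo "${PAYLOAD}"
```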
Automatically sync LDAP user details and group membership
Prior to 2021.7, user details and group membership for LDAP-based users only refreshed when users logged in. Now, LDAP group bindings, user names, and descriptions update automatically every 30 minutes (by default) for every LDAP user in the system. If a user is no longer present in LDAP or has no group bindings, all user-group associations are removed from the user and all of the user's known tokens are revoked.
You can disable automatic refresh or change the refresh time by changing the puppet_enterprise::profile::console::ldap_sync_period parameter. Learn more about this parameter in Configure RBAC and token-based authentication settings.
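As a sketch, the parameter can be set through Hiera; the data file path and interval value below are illustrative (confirm the value's unit in the RBAC settings reference):

```shell
# Hypothetical Hiera data file; use your hierarchy's real common data file.
HIERA_FILE="${HIERA_FILE:-/tmp/common.yaml}"
cat <<'EOF' >> "${HIERA_FILE}"
# Refresh LDAP user details and group bindings on a custom interval:
puppet_enterprise::profile::console::ldap_sync_period: 3600
EOF
grep ldap_sync_period "${HIERA_FILE}"
```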
Stop LDAP users from logging in if they have no group membership
You can use the exclude-groupless-ldap-users setting to prevent LDAP users with no group memberships from logging in. This setting is off by default. To learn how to enable this setting, go to Require LDAP group membership to log in.
SAML support
SAML 2.0 support allows you to securely authenticate users with single sign-on (SSO) and/or multi-factor authentication (MFA) through your SAML identity provider. Go to SAML authentication to learn about configuring SAML connections in PE.
Add a custom disclaimer banner to the console
You can Create a custom login disclaimer for your PE console login page.
Disaster recovery support for FIPS platforms
Disaster recovery is now supported for FIPS 140-2 compliant Red Hat Enterprise Linux (RHEL) 7 and 8.

API changes

New Orchestrator scheduling API endpoints
These new endpoints replace five deprecated scheduling endpoints (described below in Deprecated endpoints). Existing scheduled jobs are automatically migrated to the new scheduling system.
GET /scheduled_jobs/environment_jobs
GET /scheduled_jobs/environment_jobs/<job-id>
POST /scheduled_jobs/environment_jobs
PUT /scheduled_jobs/environment_jobs/<job-id>
New RBAC API endpoints
Disclaimer endpoints
GET /v2/users
POST /command/roles/add-users
POST /command/roles/remove-users
POST /command/roles/add-user-groups
POST /command/roles/remove-groups
POST /command/roles/add-permissions
POST /command/roles/remove-permissions
POST /command/users/revoke
POST /command/users/reinstate
POST /command/users/add-roles
POST /command/users/remove-roles
Use Puppet Server API to update CRLs
Supply a list of CRL PEMs to the certificate_revocation_list endpoint to insert updated copies of the applicable CRLs into the trust chain. The CA updates matching CRLs saved on disk if the submitted ones have a higher CRL number than their counterparts. Use this endpoint if your CRLs require frequent updates. Do not use it to update the CRL associated with the Puppet CA signing certificate (only earlier ones in the certificate chain).
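A request sketch follows; the port, HTTP method, and certificate-based authentication shown are assumptions based on the standard Puppet Server CA API layout, and the CRL content is a placeholder:

```shell
# Hypothetical host and placeholder CRL; substitute a real PEM-encoded CRL.
PE_HOST="${PE_HOST:-puppet.example.com}"
CRL_PEM='-----BEGIN X509 CRL-----\n...\n-----END X509 CRL-----'
# The endpoint expects a JSON list of CRL PEM strings:
PAYLOAD="[\"${CRL_PEM}\"]"

# Uncomment to send, authenticating with an authorized client certificate:
# curl -sk -X PUT "https://${PE_HOST}:8140/puppet-ca/v1/certificate_revocation_list" \
#   -H "Content-Type: application/json" \
#   --cert /etc/puppetlabs/puppet/ssl/certs/$(hostname -f).pem \
#   --key /etc/puppetlabs/puppet/ssl/private_keys/$(hostname -f).pem \
#   -d "${PAYLOAD}"
printf '%s\n' "${PAYLOAD}"
```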
Request changes
Several endpoints have additional keys and/or values that you can use in your requests. Visit the page for each endpoint to learn about these additions.
GET /jobs requests allow min_finish_timestamp and max_finish_timestamp.
GET /plan_jobs requests allow min_finish_timestamp, max_finish_timestamp, order, and order_by.
GET /jobs/<job-id>/nodes requests allow state, order, and order_by.
GET /v1/events and GET /v2/events requests allow order.
GET /usage requests allow events.
POST /command/environment_plan_run requests allow type (within the params object) and userdata.
POST /command/deploy, POST /command/task, and POST /command/plan_run requests allow userdata.
POST /command/stop requests allow force, which blocks new runs and stops in-progress runs.
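For instance, the new GET /plan_jobs parameters can be combined in a single query; the host, timestamps, and order_by value below are illustrative:

```shell
PE_HOST="${PE_HOST:-puppet.example.com}"
# Filter plan jobs that finished within a window, newest first (values illustrative):
QUERY='min_finish_timestamp=2022-01-01T00:00:00Z&max_finish_timestamp=2022-06-30T23:59:59Z&order=desc&order_by=finish_timestamp'

# Uncomment to send with a valid RBAC token:
# curl -sk "https://${PE_HOST}:8143/orchestrator/v1/plan_jobs?${QUERY}" \
#   -H "X-Authentication: $(puppet-access show)"
echo "${QUERY}"
```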
Response changes
Responses from several endpoints have additional keys. Visit the page for each endpoint to learn about these additions.
GET /jobs and GET /plan_jobs responses include userdata, duration, created_timestamp, and finished_timestamp. Also, the pagination objects returned by these endpoints report "total": 0, instead of "total": null, when there are no jobs.
GET /jobs/<job-id> and GET /plan_jobs/<job-id> responses include userdata.
GET /v2/events responses containing information about orchestrator events (Puppet agent runs and task runs) include additional information about the job start time, end time, duration, and status.
Deprecated endpoints
Important: Tools that rely on the deprecated endpoints must be upgraded to use the new endpoints.
LDAP GET /v1/ds endpoint was deprecated in favor of the more secure v2 GET /ds endpoint.
GET /scheduled_jobs (deprecated) replaced by GET /scheduled_jobs/environment_jobs and GET /scheduled_jobs/environment_jobs/<job-id>
DELETE /scheduled_jobs/<job-id> (deprecated) replaced by PUT /scheduled_jobs/environment_jobs/<job-id>
POST /command/schedule_deploy (deprecated) replaced by POST /scheduled_jobs/environment_jobs
POST /command/schedule_plan (deprecated) replaced by POST /scheduled_jobs/environment_jobs
POST /command/schedule_task (deprecated) replaced by POST /scheduled_jobs/environment_jobs

Bundled module changes

pe_status_check module included in PE
The pe_status_check module helps keep your PE installation in an ideal state. Read About the pe_status_check module to learn how the module works and how to get the module's reports.
Important: If you have previously specified a version of this module in your code, whether from the Forge or another source, we recommend removing that version before upgrading so that the version bundled with PE takes precedence.
Puppet metrics collector module included in PE
The Puppet metrics collector module collects Puppet metrics by default, but system metrics collection is disabled. To enable the module to collect system metrics, change puppet_enterprise::enable_system_metrics_collection to true.
Important: If you have already downloaded the metrics collector module from the Forge, you must either uninstall your copy of the module or upgrade it to the version installed with PE.
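A minimal sketch of enabling system metrics through Hiera (the data file path is illustrative):

```shell
# Hypothetical Hiera data file; use your hierarchy's real common data file.
HIERA_FILE="${HIERA_FILE:-/tmp/common.yaml}"
cat <<'EOF' >> "${HIERA_FILE}"
# Collect system metrics in addition to Puppet metrics:
puppet_enterprise::enable_system_metrics_collection: true
EOF
grep enable_system_metrics_collection "${HIERA_FILE}"
```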
PE databases module included in PE
The pe_databases module is bundled with PE and enabled by default.
Important: If you have already downloaded the PE databases module from the Forge, we recommend you upgrade to the version installed with PE.
To disable this module, set puppet_enterprise::enable_database_maintenance to false.
Removed pe_java_ks module
The pe_java_ks module has been removed from PE packages. If you have any references to the packaged module in your code base, you must remove these references to avoid errors in catalog runs.

Certificate, access, and security-related changes

Upgraded Bouncy Castle
We are now shipping Bouncy Castle 1.70, which has improved support for TLSv1.3.
Updated PostgreSQL driver
We updated the PostgreSQL driver in some PE components to address CVE-2022-31197. The application was not vulnerable to exploit prior to this update.
Certificate, CA, CRL, and related changes
Disk usage is reduced when syncing certificate authority data between the primary server and replica.
You can use --force to bypass node verification failure and force certificate regeneration when your primary server certificates are damaged.
The puppetserver ca prune action runs during upgrades. On upgrade, the CA CRL is purged of duplicate entries, potentially making it a much smaller file. The Puppet CA also no longer adds duplicate entries to the CRL in the first place.
As part of the ongoing effort to remove harmful terminology, the command to regenerate primary server certificates has been renamed puppet infrastructure run regenerate_primary_certificate.
Use the crl_refresh_interval parameter to enable agents to re-download their CRLs on regular intervals.
The default CA directory has moved to /etc/puppetlabs/puppetserver/ca from its previous location at /etc/puppetlabs/puppet/ssl/ca. This change helps prevent unintentionally deleting your CA files in the process of regenerating certificates. If applicable, you're prompted with CLI instructions for migrating your CA directory after upgrade:
/opt/puppetlabs/bin/puppet resource service pe-puppetserver ensure=stopped 
/opt/puppetlabs/bin/puppetserver ca migrate 
/opt/puppetlabs/bin/puppet resource service pe-puppetserver ensure=running
/opt/puppetlabs/bin/puppet agent -t
Passwords and tokens
For improved security, the lookup password is no longer preserved when the LDAP configuration page is reloaded or revisited in the console. You must re-enter the lookup password each time you change the LDAP configuration, and the password is required whenever a lookup user is specified.
You can switch the algorithm PE uses to store passwords from the default SHA-256 to argon2id by configuring new password algorithm parameters. To configure the algorithm, see Configure the password algorithm.
Note: Argon2id is not compatible with FIPS-enabled PE installations.
There are configurable Password complexity parameters that local users see as requirements when creating a new password. For example, Usernames must be at least {8} characters long.
RBAC generates and accepts only cryptographic tokens, instead of JSON web tokens (jwt), for password resets.
Tokens can be generated, viewed, and revoked in the PE console. On the My account page, you can create tokens, revoke tokens, and view a list of your currently active tokens on the Tokens tab. Administrators can view and revoke another user's tokens on the User details page.
Prevent replay attacks in SAML
SAML can now handle replay attacks by storing message IDs with their timestamps and rejecting message IDs that have been recently used, which prevents a bad actor from replaying a previously valid message to gain access. Stored message IDs are purged every 30 minutes.
Encrypt backups
Use the puppet-backup create command with an optional --gpgkey to encrypt backups.
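For example (paths are illustrative, and the --dir output flag is an assumption; check puppet-backup create --help for your version's exact flags):

```shell
# Create a backup encrypted with the given GPG public key (paths illustrative):
puppet-backup create --dir=/var/puppetlabs/backups --gpgkey=/root/backup-pubkey.asc
```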
Return sensitive data from tasks
You can return sensitive data from tasks by using the _sensitive key in the output. The orchestrator redacts the key value so that it isn't printed to the console or stored in the database. Plans must include unwrap() to get the value. This feature is not supported when using the PCP transport in Bolt.
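For example, a shell-based task might emit its result like this sketch (the secret value and the key name nested under _sensitive are illustrative):

```shell
#!/bin/sh
# Task sketch: return a secret under the _sensitive key so the
# orchestrator redacts it from console output and the database.
secret="hunter2"  # placeholder; a real task would generate or fetch this
RESULT=$(printf '{"_sensitive": {"generated_password": "%s"}}' "$secret")
echo "$RESULT"
```

In a plan, the returned value stays wrapped; as noted above, call unwrap() on the result to read it.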
Use masked inputs for sensitive parameters
The console uses password inputs for sensitive parameters in tasks and plans to mitigate a potential "over the shoulder" attack vector.

Code Manager, r10k, and file sync changes

File sync client status output
Profiling metrics are reported for versioned deploys and basic deploys in the file-sync client's debug status output.
The status output from the file sync storage service (specifically at the debug level) no longer reports the staging directory's status. Removing this staging information reduces timeout errors in the logs, removes heavy disk usage created by the endpoint, and preserves memory if there are many long-running status checks in Puppet Server.
File sync always overwrites the contents of the live directory when syncing.
This reverts any local changes made in the live directory outside of Code Manager's workflow.
Configure module deployment scope
By default, Code Manager uses r10k's --incremental deploy feature for improved performance. Incremental deploys only sync modules whose definitions allow their version to "float" (such as Git branches) and modules whose definitions have been added or changed since the environment's last deployment. SVN modules are not supported. To disable this behavior (and deploy all module code regardless of change or float status), set Code Manager's full_deploy parameter to true, as described in Configuring module deployment scope.
Custom Forge server authentication
Code Manager now supports authentication to custom servers through the authorization_token in the forge_settings parameter when Configuring Forge settings for Code Manager.
Include r10k stacktrace in failed deployment output
Use the r10k_trace parameter in your Code Manager settings to include r10k stacktrace in the error output for failed deployments.
Performance improvements
Code Manager deploys are faster because unmanaged resources are more efficiently purged.
Previously, Code Manager deployed whole modules to disk, often including the spec directory. The spec directory is only used for testing and is not useful in a production environment. Now, Code Manager deletes the spec dirs from deployments to decrease disk size. You can disable this behavior for each module by setting exclude_spec to false for relevant module declarations in your Puppetfile.
When polling for new commits, if the file sync client doesn't receive data from the file sync storage service for 30 seconds, the file sync client times out.
The environment_timeout setting's default value is now 5m.
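The per-module exclude_spec override described above is set in the Puppetfile; as a sketch (the module name and version are illustrative, and the option syntax should be confirmed against the r10k Puppetfile reference):

```shell
# Append a module declaration that keeps its spec directory on deploy.
PUPPETFILE="${PUPPETFILE:-/tmp/Puppetfile}"
cat <<'EOF' >> "${PUPPETFILE}"
mod 'puppetlabs-stdlib', '8.1.0', exclude_spec: false
EOF
grep exclude_spec "${PUPPETFILE}"
```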
Removed settings
Removed environment_timeout_mode.
Replaced purge-whitelist with purge-allowlist. This change is not backwards compatible; you must update your Code Manager and file sync configurations to use purge-allowlist.

Patching changes

Run patches sequentially in the group_patching plan
The pe_patch::group_patching plan now has a parameter called sequential_patching, which defaults to false (disabled). When set to true, nodes in the specified patch group are patched, rebooted (if needed), and have their post-reboot script run (if specified) one at a time, rather than all at once.
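For example, from a host with PE client tools installed (the patch group name is illustrative; confirm parameter names with puppet plan show pe_patch::group_patching):

```shell
# Patch nodes in "group1" one at a time, rebooting between nodes as needed:
puppet plan run pe_patch::group_patching patch_group=group1 sequential_patching=true
```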
Avoid spam during patching
The patching task and plan now log fact generation rather than echoing "Uploading facts". This change reduces noise from servers with many facts.
Patch nodes with built-in health checks
The new group_patching plan patches nodes with pre- and post-patching health checks. The plan verifies that Puppet is configured and running correctly on target nodes, patches the nodes, waits for any reboots, and then runs Puppet on the nodes to verify that they're still operational.
Run a command after patching nodes
The post_patching_scriptpath parameter in the pe_patch class allows you to run an executable script or binary on a target node after patching is complete.
The pre_patching_command parameter has been renamed to pre_patching_scriptpath to more clearly indicate that you must provide the file path to a script, rather than an actual command.
Patch nodes despite certain read-only directory permissions
Patching files are moved to directories that are less likely to be read-only.
If you use patch-management, be aware of the following:
  • Before upgrading, you might want to back up existing patching log files, located on patch-managed nodes at /var/cache/pe_patch/run_history or C:\ProgramData\pe_patch. Existing log files are deleted when the patching directory is moved.
  • After upgrading, you must run Puppet on patch-managed nodes before running the patching task again, or the task fails.

Other changes

Upgraded JRuby
We upgraded the version of JRuby shipped with PE.
Optimized some PuppetDB queries
Improved the performance of queries that puppet infrastructure commands use to look up Puppet infrastructure node certnames.
Report compilation failure results for apply blocks
If catalog compilation fails for a node targeted by a plan's apply block, the console now displays error results on the Plan details page. The results are stored in the database and can be queried.
Run the puppet infra run command with WinRM
The command puppet infra run now supports a --use-winrm flag, which forces the run command to connect to nodes via WinRM and use Bolt instead of the orchestrator.
More options when running the support script
This version of PE includes version 3 of the PE support script, which offers more options for modifying the support script's behavior.
Export node data from task runs as CSV
In the console, on the Task details page, you can export the node data results from task runs to a CSV file by clicking Export data.
Differentiate backup and restore logs
Backup and restore log files are now appended with timestamps, and they aren't overwritten with each backup or restore action.
Clean up old PE versions with smarter defaults
When cleaning up old PE versions with puppet infrastructure run remove_old_pe_packages, you no longer need to specify pe_version=current to clean up versions prior to the current one. current is now the default.
Customize value report estimates
You can now customize the low, med, and high time-freed estimates provided by the PE value report by specifying any of the value_report_* parameters in the PE Console node group in the puppet_enterprise::profile::console class.
Install the Puppet agent despite issues in other YUM repositories
When installing the Puppet agent on a node, the installer's YUM operations are now limited to the PE repository, allowing the agent to be installed successfully even if other YUM repositories have issues.
Get better insight into replica sync status after upgrade
Replica upgrades issue warnings instead of errors if re-syncing PuppetDB between the primary and replica nodes takes longer than 15 minutes.
Fix replica enablement issues
When provisioning and enabling a replica (with puppet infra provision replica --enable), the command times out if there are issues syncing PuppetDB, and provides instructions for fixing any issues and separately provisioning the replica.
Use Hiera lookups outside of apply blocks in plans
You can look up static Hiera data in plans, outside of apply blocks, by adding the plan_hierarchy key to your Hiera configuration.
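As a sketch, plan_hierarchy mirrors the format of the regular hierarchy key; the file paths, level name, and data file are illustrative:

```shell
# Hypothetical hiera.yaml; use your environment's real configuration file.
HIERA_YAML="${HIERA_YAML:-/tmp/hiera.yaml}"
cat <<'EOF' >> "${HIERA_YAML}"
# Static data available to plans outside apply blocks:
plan_hierarchy:
  - name: "Static plan data"
    path: "plans.yaml"
    data_hash: yaml_data
EOF
grep -A 3 plan_hierarchy "${HIERA_YAML}"
```

A plan can then call lookup('some_key') outside an apply block to read values from the referenced data file.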
View the error location in plan error details
Puppet plan functions now provide the file and line number where an error occurred in the details key of the error response.
Configure how many times the orchestrator allows status request timeouts
Configure the allowed_pcp_status_requests parameter to define how many times an orchestrator job allows status requests to time out before the job fails.
Add custom parameters when installing agents in the console
In the console, on the Install agent on nodes page, you can click Advanced install and add custom parameters to the pe_bootstrap task to use during installation.
Update facts cache terminus to use JSON or YAML
The facts cache terminus is now JSON by default. You can configure the facts_cache_terminus parameter to switch from JSON to YAML.
Reduce query time when querying nodes with a fact filter
When the console queries PuppetDB to populate information on the Status page, the query uses the optimize_drop_unused_joins feature in PuppetDB to improve performance when filtering on facts. You can disable drop-joins by setting the environment variable PE_CONSOLE_DISABLE_DROP_JOINS=yes in /etc/sysconfig/pe-console-services and restarting the console service.
Renamed settings
These settings were renamed to remove harmful terminology. Backwards compatibility is not guaranteed; you must update your configurations and code to use the new settings as part of your upgrade to 2021.7.
master-conf-dir is now server-conf-dir
master-code-dir is now server-code-dir
master-var-dir is now server-var-dir
master-log-dir is now server-log-dir
master-run-dir is now server-run-dir
master_uris is now primary_uris

Platform support

PE 2021.0 through 2021.6 added support for these platforms:
Primary server platforms
AlmaLinux x86_64 for Enterprise Linux 8
Amazon Linux 2
Red Hat Enterprise Linux 8 FIPS x86_64
Rocky Linux x86_64 for Enterprise Linux 8
SUSE Linux Enterprise Server 15 x86_64
Ubuntu (General Availability kernels) 20.04 amd64
Agent platforms
AlmaLinux x86_64 for Enterprise Linux 8
Debian 11 (Bullseye) amd64
Fedora 32, 34
macOS 11 x86_64
macOS 12 x86_64, M1
Microsoft Windows 11 x64
Microsoft Windows Server 2022 x86_64
Red Hat Enterprise Linux 8 FIPS x86_64
Red Hat Enterprise Linux 8 ppc64le
Red Hat Enterprise Linux 9 x86_64
Rocky Linux x86_64 for Enterprise Linux 8
Ubuntu 18.04 aarch64
Ubuntu 20.04 aarch64
Ubuntu 22.04 x86_64
Client tools platforms
macOS 11
macOS 12 M1, M2
Ubuntu 22.04 x86_64
Patch management platforms
Amazon Linux 2
Microsoft Windows 11 x64
Ubuntu 22.04 x86_64

Platform deprecations and removals

Deprecated primary server platforms
CentOS 8
Deprecated agent platforms
CentOS 8
Debian 8
Enterprise Linux 5
Enterprise Linux 7 ppc64le
Fedora 30, 31
macOS 10.14
Microsoft Windows 7, 8.1
Microsoft Windows Server 2008, 2008 R2
Solaris 10
SUSE Linux Enterprise Server 11
SUSE Linux Enterprise Server 12 ppc64le
Ubuntu 16.04 (all architectures)
Removed agent platforms
Important: Before upgrading to this version, remove the pe_repo::platform class for the following operating systems from the PE Master node group in the console, and from your code and Hiera.
AIX 6.1
Enterprise Linux 4
Enterprise Linux 6 s390x
Enterprise Linux 7 s390x
Fedora 26, 27, 28, 29
Mac OS X 10.9, 10.12, 10.13
SUSE Linux Enterprise Server 11
SUSE Linux Enterprise Server 12 s390x