PE release notes
These are the new features, enhancements, resolved issues, and deprecations in this version of PE.
Security and vulnerability announcements are posted at https://puppet.com/docs/security-vulnerability-announcements.
PE 2019.8.12
Released August 2022
Enhancements
- Orchestrator API endpoints return "total": 0 if there are no jobs: Orchestrator API v1 endpoints that return pagination containing the total number of jobs (such as GET /jobs, GET /scheduled_jobs, and GET /plan_jobs) now return "total": 0, instead of "total": null, when there are no jobs. See the example request after this list.
- Addressed CVEs: We updated the PostgreSQL driver in some PE components to address CVE-2022-31197. The application was not vulnerable to exploit prior to this update.
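For example, a minimal request against the orchestrator API might look like the following sketch. The hostname, CA path, and token location are placeholders; adjust them for your installation:
# Query the v1 jobs endpoint; the pagination block reports "total": 0 when no jobs exist.
curl --cacert /etc/puppetlabs/puppet/ssl/certs/ca.pem \
  -H "X-Authentication: $(cat ~/.puppetlabs/token)" \
  "https://pe-primary.example.com:8143/orchestrator/v1/jobs?limit=5"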
Platform support
This version adds support for these platforms.
- Agent
  - macOS 12 M1
- Client tools
  - Ubuntu (General Availability kernels) 22.04 x86_64
- Patch Management
  - Ubuntu (General Availability kernels) 22.04 x86_64
Deprecations and removals
Ubuntu 16.04 is no longer a supported agent platform.
Resolved issues
- full-deploy didn't override --incremental: Code Manager's full-deploy option, used for configuring module deployment scope, now correctly overrides the default --incremental deploy behavior.
- Code Manager couldn't fetch code on FIPS platforms: On FIPS platforms running PE versions 2019.8.10 or 2019.8.11, Code Manager and r10k couldn't fetch code from your code repo due to libssh attempting to use algorithms that are not allowed on FIPS. In PE 2019.8.12, the disallowed algorithms are disabled in libssh, allowing Code Manager and r10k to successfully fetch code.
- Orchestrator ignored _noop when passed to run_task() through a plan: When a plan passes the _noop flag to the run_task() function, the PE orchestrator now correctly acknowledges the _noop flag.
- Orchestrator doesn't restart unexpectedly during the convert_legacy_compiler plan: Previously, when running the enterprise_tasks::convert_legacy_compiler plan, the hosts in the pcp-brokers array could change order. This caused the pe-orchestration-services service to restart (as a result of detecting a presumed configuration change) and, ultimately, caused the plan to fail.
- Orchestrator couldn't run tasks within modules named tasks or scripts: You can now successfully run tasks that are within modules named tasks or scripts.
- Incorrect run-time for splayed agent runs: In previous PE versions, when agent runs were splayed, the run-time reported in the PE console was incorrect.
- Sensitive parameters sometimes exposed in cleartext in job results: Sensitive plan parameters from Bolt plans that execute actions over PCP transport are no longer stored in the orchestrator database and, therefore, are properly masked in the job results.
PE 2019.8.11
Released May 2022
If you use the puppet_agent module to upgrade your agents, you must install version 4.11.0 of the puppet_agent module before upgrading PE to 2019.8.11. Upgrades can fail when using earlier versions of this module.
New features
- Lockless code deploys are no longer experimental: Since debuting as an experimental feature, we've worked with customers to enable lockless code deploys over previous releases. This feature has been stable since the last bug fix in version 2019.8.6, and we're confident lockless code deploys are ready for prime time.
- Run plans without blocking code deployments: You can allow the orchestrator to run plans without blocking your code deployments, and you can deploy code without waiting for plans to finish. For instructions, refer to Running plans alongside code deployments. Important: When you enable this feature, Puppet functions or plans that call other plans might behave unexpectedly if a code deployment occurs while the plan is running.
Enhancements
- Query jobs by timestamp minimums and maximums: The /jobs and /plan_jobs endpoints have two new query parameters (see the example request after this list):
  - min_finish_timestamp returns only the jobs finished at or after the given UTC timestamp.
  - max_finish_timestamp returns only the jobs finished at or before the given UTC timestamp.
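For instance, a request limited by finish time might look like this sketch (hostname, CA path, and token location are placeholders):
# Return only the plan jobs that finished at or after the given UTC timestamp.
curl --cacert /etc/puppetlabs/puppet/ssl/certs/ca.pem \
  -H "X-Authentication: $(cat ~/.puppetlabs/token)" \
  "https://pe-primary.example.com:8143/orchestrator/v1/plan_jobs?min_finish_timestamp=2022-05-01T00:00:00Z"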
- File sync client time out: When polling for new commits, if the file sync client doesn't receive data from the file sync storage service for 30 seconds, the file sync client times out.
- Optimized some PuppetDB queries: Improved performance of queries that puppet infrastructure commands use to look up Puppet infrastructure node certnames.
- Upgraded Bouncy Castle: We are now shipping Bouncy Castle 1.70, which has improved support for TLS 1.3.
- Report compilation failure results for apply blocks: If compilation for a targeted node fails while running a plan with an apply block, the console now displays error results on the Plan details page. The results are stored in the database and can be queried.
Platform support
- Primary server
  - FIPS 140-2 compliant Red Hat Enterprise Linux 8 x86_64
- Agent
  - macOS 12 x86_64
Resolved issues
- Code Manager didn't delete spec directories: Code Manager deploys failed to delete spec directories due to an issue with r10k. To fix this, we upgraded r10k to version 3.14.1.
- LDAP connection timeout setting was ignored: In the console, the LDAP connect_timeout field wasn't correctly passed to the connection attempt and, as a result, the value was not applied in the LDAP configuration.
- The plan that upgrades the secondary node sometimes failed: When running puppet infrastructure upgrade compiler or puppet infrastructure upgrade replica, the plan that upgrades the secondary node sometimes failed on the step where install.bash runs on the node to update the agent.
- Failed code compilations in apply blocks didn't report as failed: When a code compilation failed in a plan apply block, the nodes targeted in the apply block continued to report as in progress rather than reporting as failed.
- Subsequent code deployments didn't report duplicate errors: In previous PE versions, if a code deployment encountered an error on the initial deployment, and the same error occurred on a subsequent deployment, the subsequent deployment incorrectly reported a successful deployment. Now, subsequent code deployments consistently report errors that were originally encountered on earlier deployments (if the error is still present during the subsequent deployment).
PE 2019.8.10
Released February 2022
Enhancements
Bypass node verification failure when primary server certificates are damaged
You can use --force
to bypass node verification
failure and force certificate regeneration when your primary server certificates are
damaged.
Simplified login error message
The Puppet Enterprise (PE) console login error message instructs
the user to contact an administrator rather than consult the console-services
log.
Improved disk usage when syncing certificate authority data
Disk usage is better when syncing certificate authority data between the primary and replica.
Platform support
This version adds support for these platforms.
- Microsoft Windows Server 2022 x86_64
- Red Hat Enterprise Linux 9 x86_64
- Amazon Linux 2
Deprecations and removals
Platforms deprecated
Support for these platforms is deprecated in this release.
- CentOS 8
Resolved issues
Expired GPG key caused install failures
The GPG key bundled with PE versions prior to 2019.8.4 expired on 17 August 2021, which could cause a failure when PE packages are being added to the system. Customers upgrading to a newer version are no longer affected.
Packages were not marked as automatically installed by APT if you set security_only to true
When running the pe_patch::patch_server task with the security_only parameter set to true, packages were not marked as being automatically installed by APT. This caused problems if you relied on this marking for APT autoremove to remove old packages. The patch_server task now marks packages as automatically installed, regardless of the security_only parameter.
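As an illustration, you might run the task with this parameter from the command line. This is a sketch that assumes the puppet-task client tool is installed and you're authenticated with puppet-access; the node name is a placeholder:
# Apply security-only patches on one node through the orchestrator.
puppet task run pe_patch::patch_server security_only=true --nodes agent01.example.com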
r10k can recover from typos in the config or Puppetfile
r10k now updates its cache repos when the remote URL changes. This allows r10k to recover from typos in the config or Puppetfile.
Couldn't complete restoration if the r10k_remote
parameter wasn't set
The r10k_remote
parameter wasn't set when restoring a
backup with scope
set to certs
, code
, or puppetdb
. This prevented you from finishing restoration because the
commands necessary to finish restoring your primary server did not
print.
The fail_plan
function didn't show custom error
information
The fail_plan
function ignored the kind
and details
parameters, which are used to specify custom, machine-parseable information about an
error.
Scheduled jobs failed on FIPS installs
Scheduled jobs, including tasks and plans, couldn't run or be listed on FIPS installs of PE and resulted in the error javax.crypto.BadPaddingException.
PE 2019.8.9
Released November 2021
Enhancements
TLS v1.3 is enabled by default
PE is now compatible with TLSv1.2 and TLSv1.3 by default for both FIPS and non-FIPS installations. To update your protocol or ciphers, review the Configuring security settings docs. For a list of compatible ciphers, see the Ciphers reference.
Run patches sequentially in the group_patching plan
The pe_patch::group_patching plan now has a parameter called sequential_patching, which defaults to false (disabled). When set to true, nodes in the specified patch group are patched, rebooted (if needed), and the post-reboot script run (if specified) one at a time, rather than all at once.
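For example, a command-line invocation might look like the following sketch. The patch group name is a placeholder, and the patch_group parameter name is assumed from the pe_patch module rather than stated in this note:
# Patch the nodes in one patch group one at a time.
puppet plan run pe_patch::group_patching patch_group=week_one sequential_patching=true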
Run the puppet infra run command with WinRM
The puppet infra run command now supports a --use-winrm flag, which forces the run command to connect to nodes via WinRM and use Bolt instead of the orchestrator.
More options when running the support script
This version of PE includes version 3 of the PE support script, which offers more options for modifying the support script's behavior to meet your needs.
full-deploy setting in Code Manager
By default, Code Manager now uses the r10k --incremental deploys feature for improved performance. Incremental deploys only sync modules whose definitions allow their version to "float" (such as Git branches) and modules whose definitions have been added or changed since the environment's last deployment. SVN modules are not supported.
To disable this behavior (and deploy all module code regardless of change or float status), set Code Manager's full_deploy parameter to true.
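If you manage this setting in Hiera rather than the console, a minimal sketch might look like this; the exact class path is assumed to be the puppet_enterprise::master::code_manager class referenced elsewhere in these notes:
---
puppet_enterprise::master::code_manager::full_deploy: true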
Platform support
This version adds support for these platforms.
- AlmaLinux x86_64 for Enterprise Linux 8
- Rocky Linux x86_64 for Enterprise Linux 8
- Ubuntu 18.04 aarch64
- Debian 11 (Bullseye) amd64
- Red Hat Enterprise Linux 8 FIPS x86_64
Resolved issues
Client-side lockfiles were not deleted on startup
Server-side lockfiles were cleaned up on startup by Puppet Server, but client-side lockfiles were not. Now both client- and server-side lockfiles are deleted during the Puppet Server startup process.
r10k deleted files in environments pointed to by symlinks
In control repositories containing symlinks, r10k incorrectly interpreted the files in symlinked locations as duplicates and deleted these files.
Failed or in-progress reboots reported that they finished rebooting successfully
When rebooting a node using the pe_patch::group_patching
plan, the check to detect if a node rebooted
always detected that it finished rebooting successfully, even if the reboot failed
or was still in progress, due to a parsing error in the output. This behavior was
observed and tested on RHEL-based platform versions 6
and 7, and SLES version 12, but might have existed on other platforms as
well.
Windows agent installation failed if user name contained a space
The Windows agent install script failed if executed with a user name that included a space, like Max Spacey. You received the error Something went wrong with the installation along with exit code 1639. You can now use spaces in usernames without causing a failure.
Configuring environmentdir to be a relative path caused deploy failures
When deploying modules from a Puppetfile using r10k or Code Manager, the deploy failed if your environmentdir was configured to be a relative path instead of an absolute path (default).
The puppet code tool output informational data to stderr
A regression in the puppet code tool caused Code Manager to output information to stderr, whether it was successful or not. This was inconvenient if deploys were done through pipelines that were configured to register failures based on stderr output, because the behavior of puppet code always led to a failure notification. Now, the notification is printed to stdout instead of stderr.
The puppet plan subcommand segfaulted
When run without arguments, the puppet plan subcommand segfaulted. A check was added to ensure the command has arguments set when called.
PE 2019.8.8
Released September 2021
Enhancements
Code Manager support for Forge authentication
Code Manager now supports authentication to custom Forge servers. You can configure this authentication via hieradata by setting authorization_token within the forge_settings parameter:
---
puppet_enterprise::master::code_manager::forge_settings:
baseurl: "https://private-forge.mysite"
authorization_token: "Bearer mysupersecretauthtoken"
You must prepend the token with 'Bearer', particularly if you use Artifactory as your Forge server.
Puppet metrics collector module included in PE installation
The puppet_metrics_collector module is now included in PE installations and upgrades. The module is disabled by default, but can be enabled by setting these parameters to true:
puppet_enterprise::enable_metrics_collection: true
puppet_enterprise::enable_system_metrics_collection: true
If you have already downloaded the module from the Forge, you must either uninstall your copy of the module or upgrade it to the version installed with PE.
PE databases module included in PE installation
The pe_databases module is now included in PE installations and upgrades. The module is disabled by default, but can be enabled by setting this parameter to true:
puppet_enterprise::enable_database_maintenance: true
If you have already downloaded the module from the Forge, we recommend you upgrade to the version installed with PE.
Query by order and view timestamps in GET /plan_jobs endpoint
The GET /plan_jobs endpoint response now includes a timestamp field, and you can include the sorting parameters order and order_by in your request.
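A sketch of such a request follows; the hostname and token location are placeholders, and the field name passed to order_by is illustrative rather than an exhaustive list of accepted values:
# List plan jobs sorted oldest first.
curl --cacert /etc/puppetlabs/puppet/ssl/certs/ca.pem \
  -H "X-Authentication: $(cat ~/.puppetlabs/token)" \
  "https://pe-primary.example.com:8143/orchestrator/v1/plan_jobs?order=asc&order_by=timestamp"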
Faster Code Manager deploys
Code Manager deploys are now faster because unmanaged resources are more efficiently purged.
Platform support
This version adds support for these platforms.
- macOS 11
Deprecations and removals
Platforms deprecated
- Enterprise Linux 5
- Fedora 30, 31
- macOS 10.14
Resolved issues
r10k refactors erroneously passed flag into modules and broke impact analysis
Recent r10k refactors broke Continuous Delivery for PE's impact analysis in the 2019.8.7 and 2021.2.0 releases. These refactors passed a default_branch_override flag to r10k via Code Manager's API. r10k erroneously passed the flag to all modules created when the Puppetfile was parsed. This flag was not supported for Forge modules and displayed the following error:
ERROR -> Failed to evaluate /etc/puppetlabs/code-staging/environments/production_cdpe_ia_1624622874129/Puppetfile
Original exception:
R10K::Module::Forge cannot handle option 'default_branch_override'
This bug is now fixed.
Replica promotion could fail in air-gapped installations
If your primary server included AIX or Solaris pe_repo classes, replica promotion failed in air-gapped environments because the staged AIX and Solaris tarballs weren't copied to the replica.
r10k deployment purge level was unsafe when run with parallel deploys
Previously, Code Manager occasionally failed and returned an HTTP 500 error during environment deployments. This error occurred because of how Code Manager handled a bug/race condition when using pools of r10k caches. This bug also affected Continuous Delivery for PE users. Now, Continuous Delivery for PE users no longer encounter issues related to this race condition, and Code Manager's parallel deploys no longer conflict with each other.
PE 2019.8.7
Released June 2021
Enhancements
Update CRLs
You can now update your CRLs using the new certificate_revocation_list API endpoint. This new endpoint accepts a list of CRL PEMs as a body, inserting updated copies of the applicable CRLs into the trust chain. The CA updates the matching CRLs saved on disk if the submitted ones have a higher CRL number than their counterparts. You can use this endpoint if your CRLs require frequent updates. Do not use the endpoint to update the CRL associated with the Puppet CA signing certificate (only earlier ones in the certificate chain).
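A hedged sketch of such a request, assuming the endpoint lives under the CA API on port 8140 and that crl_bundle.pem contains the PEM-encoded CRLs; the hostname and file paths are placeholders:
# Submit updated CRL PEMs; CRLs with a higher CRL number replace their on-disk counterparts.
curl -X PUT \
  --cert /etc/puppetlabs/puppet/ssl/certs/$(puppet config print certname).pem \
  --key /etc/puppetlabs/puppet/ssl/private_keys/$(puppet config print certname).pem \
  --cacert /etc/puppetlabs/puppet/ssl/certs/ca.pem \
  -H "Content-Type: text/plain" \
  --data-binary @crl_bundle.pem \
  "https://pe-primary.example.com:8140/puppet-ca/v1/certificate_revocation_list"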
Filter by node state in jobs endpoint
You can filter nodes by their current state in the /jobs/:job-id/nodes endpoint when retrieving a list of nodes associated with a given job. The following node states are available to query (see the example request after this list):
- new
- ready
- running
- stopping
- stopped
- finished
- failed
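For example, a request for only the failed nodes of a job might look like this sketch; the job ID, hostname, and token location are placeholders, and the state query parameter name is an assumption made here for illustration:
# Return only nodes in the "failed" state for job 1234.
curl --cacert /etc/puppetlabs/puppet/ssl/certs/ca.pem \
  -H "X-Authentication: $(cat ~/.puppetlabs/token)" \
  "https://pe-primary.example.com:8143/orchestrator/v1/jobs/1234/nodes?state=failed"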
Sort activities by oldest to newest in events endpoint
In the activity service API, the /v1/events and /v2/events endpoints now allow you to sort activities from either oldest to newest (asc) or newest to oldest (desc).
Export node data from task runs to CSV
In the console, on the Task details page, you can now export the node data results from task runs to a CSV file by clicking Export data.
Disable force-sync mode
File sync now always overrides the contents of the live directory when syncing. This default override corrects any local changes made in the live directory outside of Code Manager's workflow. You can no longer disable file sync's force-sync mode to implement this enhancement.
Differentiate backup and restore logs
Backup and restore log files are now appended with timestamps and aren't overwritten with each backup or restore action. Previously, backup and restore logs were created as singular, statically named files, backup.log and restore.log, which were overwritten on each execution of the scripts.
Encrypt backups
You can now encrypt backups created with the puppet-backup create command by specifying an optional --gpgkey.
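A sketch of the command; the key identifier and backup directory are placeholders, and you can check puppet-backup create --help for the exact flag syntax on your version:
# Create a backup encrypted with a GPG key available to the root user.
puppet-backup create --dir=/var/puppetlabs/backups --gpgkey=admin@example.com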
Clean up old PE versions with smarter defaults
When cleaning up old PE versions with puppet infrastructure run remove_old_pe_packages, you no longer need to specify pe_version=current to clean up versions prior to the current one. current is now the default.
Platform support
This version adds support for these platforms.
- macOS 11
- Red Hat Enterprise Linux 8 ppc64le
- Ubuntu 20.04 aarch64
- Fedora 34
Deprecations and removals
Replace purge-whitelist with purge-allowlist
For Code Manager and file sync, the term purge-whitelist is deprecated and replaced with the new setting name purge-allowlist. The functionality and purpose of both setting names are identical.
Platforms deprecated
- Debian 8
Resolved issues
Windows agent installation failed with a manually transferred certificate
Performing a secure installation on Windows nodes by manually transferring the primary server CA certificate failed with the connection error: Could not establish trust relationship for the SSL/TLS secure channel.
Upgrading a replica failed after regenerating the master certificate
If you previously regenerated the certificate for your master, upgrading a replica from 2019.6 or earlier could fail due to permission issues with backed up directories.
The apply shim in pxp-agent didn't pick up changes
The ruby_apply_shim didn't update properly, which caused plans containing apply or apply_prep actions to fail when run through the orchestrator, and resulted in this error message:
Exited 1:\n/opt/puppetlabs/pxp-agent/tasks-cache/apply_ruby_shim/apply_ruby_shim.rb:39:in `<main>': undefined method `map' for nil:NilClass (NoMethodError)\n
Running client tool commands against a replica could produce errors
Running puppet-code, puppet-access, or puppet query against a replica produced an error if the replica certificate used the legacy common name field instead of the subject alt name. The error has been downgraded to a warning, which you can bypass with some minimal security risk using the flag --use-cn-verification or -k, for example puppet-access login -k. To permanently fix the issue, you must regenerate the replica certificate: puppet infrastructure run regenerate_replica_certificate target=<REPLICA_HOSTNAME>.
Generating a token using puppet-access on Windows resulted in a zero-byte token file error
Running puppet-access login to generate a token on Windows resulted in a zero-byte token file error. This is now fixed because the token file method was changed from os.chmod to file.chmod.
Invoking puppet-access when it wasn't configured resulted in unhelpful error
If you invoked puppet-access while it was missing a configuration file, it failed and returned unhelpful errors. Now, a useful message displays when puppet-access needs to be configured or if there is an unexpected answer from the server.
Enabling manage_delta_rpm caused agent run failures on CentOS and RHEL 8
Enabling the manage_delta_rpm parameter in the pe_patch class caused agent run failures on CentOS and RHEL 8 due to a package name change. The manage_delta_rpm parameter now appropriately installs the drpm package, resolving the agent run issue.
Editing a hash in configuration data caused parts of the hash to disappear
If you edited configuration data with hash values in the console, the parts of the hash that were not edited disappeared after committing changes, and then reappeared when the hash was edited again.
Null characters in task output caused errors
Tasks that print null bytes caused an orchestrator database error that prevented the result from being stored. This issue occurred most frequently for tasks on Windows that print output in UTF-16 rather than UTF-8.
Plans still ran after failure
When pe-orchestration-services exited unexpectedly, plan jobs sometimes continued running even though they failed. Now, jobs are correctly transitioned to failed status when pe-orchestration-services starts up again.
Plan apply activity logging contained malformed descriptions
In activity entries for plan apply actions, the description was incorrectly prepended with desc.
Patch task failed on Windows nodes with old logs
When patching Windows nodes, if an existing patching log file was 30 or more days old, the task failed trying to both write to and clean up the log file.
Errors when enabling and disabling versioned deploys
Previously, if you switched back and forth from enabling and disabling versioned deploys mode, file sync failed to correctly manage deleted control repository branches. This bug is now fixed.
Lockless code deployment led to failed removal of old code directories
Previously, turning on lockless code deployment led to full disk utilization because old code directories were not removed. To work around this issue, you had to manually delete the existing old directories. Going forward, the removal is automatic.
PE 2019.8.6
Released May 2021
Enhancements
Customize value report estimates
You can now customize the low, med, and high time-freed estimates provided by the PE value report by specifying any of the value_report_* parameters in the PE Console node group in the puppet_enterprise::profile::console class.
Re-download CRL on a regular interval
You can now configure the new crl_refresh_interval parameter to re-download the agent's CRL on a regular interval. Use the console to configure the interval in the PE Agent group, in the puppet_enterprise::profile::agent class, and enter a duration (for example, 60m) for Value.
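If you manage this setting in Hiera instead of the console, a minimal sketch might be (the 60m value is just an example duration):
---
puppet_enterprise::profile::agent::crl_refresh_interval: '60m'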
Remove staging directory status for memory, disk usage, and timeout error improvements
The status output of the file sync storage service (specifically at the debug level) no longer reports the staging directory's status. Removing this staging information reduces timeout errors in the logs, removes heavy disk usage created by the endpoint, and preserves memory if there are many long-running status checks in Puppet Server.
Exclude events from usage endpoint response
In the /usage endpoint, the new events parameter allows you to specify whether to include or exclude event activity information from the response. If set to exclude, the endpoint only returns information about node counts.
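A sketch of such a query, where only the query string is the point of the example; the base URL is a placeholder, so use the documented base path for the usage endpoint in your installation:
# Ask only for node counts, excluding event activity data.
curl --cacert /etc/puppetlabs/puppet/ssl/certs/ca.pem \
  -H "X-Authentication: $(cat ~/.puppetlabs/token)" \
  "https://<USAGE_ENDPOINT_BASE_URL>/usage?events=exclude"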
Return sensitive data from tasks
You can now return sensitive data from tasks by using the _sensitive key in the output. The orchestrator then redacts the key value so that it isn't printed to the console or stored in the database, and plans must include unwrap() to get the value. This feature is not supported when using the PCP transport in Bolt.
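As a rough illustration, a shell task might wrap its secret output like the sketch below; the orchestrator then redacts the wrapped value, and a plan calls unwrap() to read it. The key name db_password is illustrative:
#!/bin/bash
# Minimal task sketch: emit a value under the _sensitive key so job results redact it.
echo '{"_sensitive": {"db_password": "example-secret-value"}}'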
Avoid spam during patching
The patching task and plan now log fact generation, rather than echoing Uploading facts. This change reduces spam from servers with a large number of facts.
Parameter name updates
As part of the ongoing effort to remove harmful terminology, the parameter master_uris was renamed primary_uris.
Platform support
This version adds support for these platforms.
- Fedora 32
Resolved issues
Upgrade failed with cryptic errors if agent_version was configured for your infrastructure pe_repo class
If you configured the agent_version parameter for the pe_repo class that matches your infrastructure nodes, upgrade could fail with a timeout error when the installer attempted to download a non-default agent version. The installer now warns you to remove the agent_version parameter if applicable.
Upgrade with versioned deploys caused Puppet Server crash
If versioned_deploys was enabled when upgrading to version 2019.8.6 or 2021.1, then Puppet Server crashed.
Compiler upgrade failed with client certnames defined
Existing settings for client certnames could cause upgrade to fail on compilers, typically with the error Value does not match schema: {:client-certnames disallowed-key}.
Compiler upgrade failed with no-op configured
Upgrade failed on compilers running in no-op mode. Upgrade now proceeds on infrastructure nodes regardless of their no-op configuration.
Installing Windows agents with the .msi package failed with a non-default INSTALLDIR
When installing Windows agents with the .msi package, if you specified a non-default installation directory, agent files were nonetheless installed at the default location, and the installation command failed when attempting to locate files in the specified INSTALLDIR.
Installing agents failed with GPG key error on select platforms
When installing Puppet agent version 6.21.1 on Enterprise Linux 5 and SUSE Linux Enterprise Server 11 using the installer script, installation failed with an error about a bad GPG key.
Backup failed with an error about the stockpile directory
The puppet-backup create command failed under certain conditions with an error that the /opt/puppetlabs/server/data/puppetdb/stockpile directory was inaccessible. That directory is now excluded from backup.
Patching failed on Windows nodes with non-default agent location
On Windows nodes, if the Puppet agent was installed to a location other than the default C: drive, the patching task or plan failed with the error No such file or directory.
Patching failed on Windows nodes when run during a fact generation
The patching task and plan failed on Windows nodes if run during fact generation. Patching and fact generation processes, which share a lock file, now wait for each other to finish before proceeding.
File sync failed to terminate the pe-puppetserver Java process
The file sync client failed to terminate the pe-puppetserver Java process when it shut down because of a sync error.
File sync failed to copy symlinks if versioned deploys was enabled
If you enabled versioned deploys, then the file sync client failed to copy symlinks and incorrectly copied the symlinks' targets instead. This copy failure crashed the Puppet Server.
Injection attack vulnerability in csv exports
There was a vulnerability in the console where .csv files could contain malicious user input when exported. The values =, +, -, and @ are now prohibited at the beginning of cells to prevent an injection attack.
The License page in the console timed out
Some large queries run by the License page caused the page to have trouble loading and time out.
PE 2019.8.5
Released February 2021
Enhancements
Install the Puppet agent despite issues in other YUM repositories
When installing the Puppet agent on a node, the installer's YUM operations are now limited to the PE repository, allowing the agent to be installed successfully even if other YUM repositories have issues.
Clean up old packages after upgrade
A new command, puppet infrastructure run remove_old_pe_packages pe_version=current, cleans up old PE packages remaining at /opt/puppet/packages/public. For pe_version, you can specify a SHA, a version number, or current. All packages older than the specified version are removed.
Get better insight into replica sync status after upgrade
Improved error handling for replica upgrades now results in a warning instead of an error if re-syncing PuppetDB between the primary and replica nodes takes longer than 15 minutes.
Fix replica enablement issues
When provisioning and enabling a replica (puppet infra provision replica --enable), the command now times out if there are issues syncing PuppetDB, and provides instructions for fixing any issues and separately provisioning the replica.
Patch nodes with built-in health checks
The new group_patching plan patches nodes with pre- and post-patching health checks. The plan verifies that Puppet is configured and running correctly on target nodes, patches the nodes, waits for any reboots, and then runs Puppet on the nodes to verify that they're still operational.
Run a command after patching nodes
A new parameter in the pe_patch class, post_patching_scriptpath, enables you to run an executable script or binary on a target node after patching is complete. Additionally, the pre_patching_command parameter has been renamed pre_patching_scriptpath to more clearly indicate that you must provide the file path to a script, rather than an actual command.
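A minimal Hiera sketch of setting these parameters, assuming you classify nodes with the pe_patch class through data; the script paths are placeholders:
---
pe_patch::pre_patching_scriptpath: /usr/local/bin/pre_patch_checks.sh
pe_patch::post_patching_scriptpath: /usr/local/bin/post_patch_healthcheck.sh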
Patch nodes despite certain read-only directory permissions
Patching files have moved to more established directories that are less likely to be read-only: /opt/puppetlabs/pe_patch for *nix, and C:\ProgramData\PuppetLabs\pe_patch for Windows. Previously, patching files were located at /var/cache/pe_patch and /usr/local/bin for *nix and C:\ProgramData\pe_patch for Windows.
- Before upgrading, optionally back up existing patching log files, located on patch-managed nodes at /var/cache/pe_patch/run_history or C:\ProgramData\pe_patch. Existing log files are deleted when the patching directory is moved.
- After upgrading, you must run Puppet on patch-managed nodes before running the patching task again, or the task fails.
Use Hiera lookups outside of apply blocks in plans
You can look up static Hiera data in plans outside of apply blocks by adding the plan_hierarchy key to your Hiera configuration. See the sketch after this entry.
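A minimal hiera.yaml sketch, assuming a data file named plan_data.yaml holds the static values used by plans; the file names and the common hierarchy level are placeholders:
---
version: 5
hierarchy:
  - name: Common
    path: common.yaml
plan_hierarchy:
  - name: Static plan data
    path: plan_data.yaml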
See the duration of Puppet and plan runs
New duration, created_timestamp, and finished_timestamp keys allow you to see the duration of Puppet and plan runs in the GET /jobs and GET /plan_jobs endpoints.
View the error location in plan error details
Plan functions provide the file and line number where the error occurred in the details key of the error response.
Run plans on PuppetDB queries and node classifier group targets
The params key in the POST /command/environment_plan_run endpoint allows you to specify PuppetDB queries and node groups as targets during a plan run.
Use masked inputs for sensitive parameters
The console now uses password inputs for sensitive parameters in tasks and plans to mitigate a potential "over the shoulder" attack vector.
Configure how many times the orchestrator allows status request timeouts
Configure the new allowed_pcp_status_requests parameter to define how many times an orchestrator job allows status requests to time out before the job fails. The parameter defaults to "35" timeouts. You can use the console to configure it in the PE Orchestrator group, in the puppet_enterprise::profile::orchestrator class.
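If you prefer Hiera over the console, a sketch of the equivalent data might be (the value shown is simply the documented default):
---
puppet_enterprise::profile::orchestrator::allowed_pcp_status_requests: '35'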
Accept and store arbitrary data related to a job
The userdata key allows you to supply arbitrary key-value data to a task, plan, or Puppet run (a sketch request follows the list). The key was added to the following endpoints:
- POST /command/deploy
- POST /command/task
- POST /command/plan_run
- POST /command/environment_plan_run
- GET /jobs
- GET /jobs/:job-id
- GET /plan_jobs
- GET /plan_jobs/:job-id
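As a rough sketch, a task command could carry a userdata object alongside the usual fields; the hostname, token location, task, scope, and userdata contents are placeholders rather than a complete schema:
# Run a task and attach arbitrary key-value data to the resulting job.
curl -X POST --cacert /etc/puppetlabs/puppet/ssl/certs/ca.pem \
  -H "X-Authentication: $(cat ~/.puppetlabs/token)" \
  -H "Content-Type: application/json" \
  -d '{"environment": "production", "task": "package",
       "params": {"action": "status", "name": "openssl"},
       "scope": {"nodes": ["agent01.example.com"]},
       "userdata": {"ticket": "INC-12345", "initiated_by": "deploy-pipeline"}}' \
  "https://pe-primary.example.com:8143/orchestrator/v1/command/task"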
Sort and reorder nodes in node lists
New optional parameters are available in the GET /jobs/:job-id/nodes endpoint that allow you to sort and reorder node names in the node list from a job.
Name versioned directories with SHAs
The file sync client uses SHAs corresponding to the branches of the control repository to name versioned directories. You must deploy an environment to update the directory names.
Configure failed deployments to display r10k stacktrace in error output
Configure the new r10k_trace parameter to include the r10k stack trace in the error output of failed deployments. The parameter defaults to false. Use the console to configure the parameter in the PE Master group, in the puppet_enterprise::master::code_manager class, and enter true for Value.
Reduce query time when querying nodes with a fact filter
When you run a query in the console that populates information on the Status page, the query sent to PuppetDB uses the optimize_drop_unused_joins feature to increase performance when filtering on facts. You can disable drop-joins by setting the environment variable PE_CONSOLE_DISABLE_DROP_JOINS=yes in /etc/sysconfig/pe-console-services and restarting the console service.
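A sketch of disabling drop-joins on the console node, using the file and variable named in this note:
# Disable drop-joins for console fact queries, then restart the console service.
echo 'PE_CONSOLE_DISABLE_DROP_JOINS=yes' >> /etc/sysconfig/pe-console-services
systemctl restart pe-console-services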
Resolved issues
PuppetDB restarted continually after upgrade with deprecated parameters
After upgrade, if the deprecated parameters facts_blacklist or cert_whitelist_path remained, PuppetDB restarted after each Puppet run.
Tasks failed when specifying both as the input method
In task metadata, using both for the input method caused the task run to fail.
Patch task misreported success when it timed out on Windows nodes
If the pe_patch::patch_server task took longer than the timeout setting to apply patches on a Windows node, the debug output noted the timeout, but the task erroneously reported that it completed successfully. Now, the task fails with an error noting that the task timed out. Any updates in progress continue until they finish, but remaining patches aren't installed.
Orchestrator created an extra JRuby pool
During startup, the orchestrator created two JRuby pools: one for scheduled jobs and one for everything else. This happened because the JRuby pool was not yet available in the configuration passed to the post-migration-fa function, which created its own JRuby pool in response. These JRuby pools accumulated over time because the stop function didn't know about them.
Console install script installed non-FIPS agents on FIPS Windows nodes
The command provided in the console to install Windows nodes installed a non-FIPS agent regardless of the node's FIPS status.
Unfinished sync reported as finished when clients shared the same identifier
Because the orchestrator and puppetserver file-sync clients shared the same identifier, Code Manager reported an unfinished sync as "all-synced": true. Whichever client finished polling first notified the storage service that the sync was complete, regardless of the other client's sync status. This reported sync might have caused attempts to access tasks and plans before the newly deployed code was available.
Refused connection in orchestrator startup caused PuppetDB migration failure
A condition on startup failed to delete stale scheduled jobs and prevented the orchestrator service from starting.
Upgrade failed with Hiera data based on certificate extensions
If your Hiera hierarchy contained levels based on certificate extensions, like {{trusted.extensions.pp_role}}, upgrade could fail if that Hiera entry was vital to running services, such as {{java_args}}. The failure was due to the puppet infrastructure recover_configuration command, which runs during upgrade, failing to recognize the hierarchy level.
File sync issued an alert when a repository had no commits
When a repository had no commits, the file-sync status recognized this repository’s state as invalid and issued an alert. A repository without any commits is still a valid state, and the service is fully functional even when there are no commits.
Upgrade failed with infrastructure nodes classified based on trusted facts
If your infrastructure nodes were classified into an environment based on a trusted fact, the recover configuration command used during upgrade could choose an incorrect environment when gathering data about infrastructure nodes, causing upgrade to fail.
Backups failed if a Puppet run was in progress
The puppet-backup command failed if a Puppet run was in progress.
Default branch override did not deploy from the module's default branch
A default branch override did not deploy from the module’s default branch if the branch override specified by Impact Analysis did not exist.
Module-only environment updates did not deploy in Versioned Deploys
Module-only environment updates did not deploy if you tracked a module's branch and redeployed the same control repository SHA, which pulled in new versions of the modules.
PE 2019.8.4
Released November 2020
This version updates the PostgreSQL version to address critical security vulnerabilities.
PE 2019.8.3
Released November 2020
New features
Value report
A new Value report page in the Admin section of the console estimates the amount of time reclaimed by using PE automation. The report is configurable based on your environment. See Value report for more information.
Enhancements
Spend less time waiting on puppet infrastructure commands
The puppet infrastructure commands that use plans, for example for upgrading, provisioning compilers, and regenerating certificates, are now noticeably faster due to improvements in how target nodes are verified.
Provision a replica without manually pinning the target node
You're no longer required to manually pin the target replica node to the PE Infrastructure Agent group before running puppet infrastructure provision replica. This action, which ensures that the correct catalog and PXP settings are applied to the replica node in load balanced installations, is now handled automatically by the command.
Configure environment caching
Using new environment timeout settings, you can improve Puppet Server performance by caching long-lived environments and purging short-lived environments. For example, in the PE Master node group, in the puppet_enterprise::master class, set environment_timeout_mode = from_last_used and environment_timeout = 30m to clear short-lived environments 30 minutes from when they were last used. By default, when you enable Code Manager, environment_timeout is set to unlimited, which caches all environments.
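Expressed as Hiera data instead of console settings, the same example might look like this sketch; the parameter names come from this note and 30m is the example value above:
---
puppet_enterprise::master::environment_timeout_mode: from_last_used
puppet_enterprise::master::environment_timeout: 30m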
Configure the number of threads used to download modules
A new configuration parameter, download_pool_size, lets you specify the number of threads r10k uses to download modules. The default is 4, which improves deploy performance in most environments.
Configure PE-PostgreSQL autovacuum cost limit
The cost limit value used in PE-PostgreSQL autovacuum operations is now set at a more
reasonable default that scales with the number of CPUs and autovacuum workers
available. The setting is also now configurable using the
puppet_enterprise::profile::database::autovacuum_vacuum_cost_limit
parameter.
Previously, the setting was not configurable, and it used the PostgreSQL default, which could result in database tables and indexes growing continuously.
Rerun Puppet or tasks on failed nodes only
You can choose to rerun Puppet or a task only on the nodes that failed during the initial run by selecting Failed nodes on the Run again drop down.
Run plans only when required parameters are supplied
In the console, the option to commit a plan appears after you supply all required parameters. This is to prevent plan failures by accidentally running plans without required parameters.
Schedule plans in the console and API
You can use the console to schedule one-time or recurring plans and view currently scheduled plans. Additionally, you can use the schedule_plan command for scheduling one-time plan runs using the POST /command/schedule_plan endpoint.
Use sensitive parameters in plans
You can use the Sensitive type for parameters in plans. Parameters marked as Sensitive aren't stored in the orchestrator's database, aren't returned via API calls, and don't appear in the orchestrator's log. Sensitive parameters are also not visible from the console.
View more details about plans
The environment a plan was run from is now displayed in the console on the Job details page. The environment is returned from the /plan_jobs endpoint using the new environment key. See GET /plan_jobs for more information.
Additionally, the parameters supplied to tasks that are run as part of a plan are displayed in the console on the Plan details page. Sensitive parameters are masked and are never stored for a task run.
Differentiate between software and driver patch types for Windows
PE now ignores Driver update types in Windows Updates by default and only includes the Software type, cutting down on unnecessary patch updates. To change this default, configure the new windows_update_criteria parameter in the pe_patch class by removing or changing the Type argument. See Patch management parameters for more information about the parameter.
Serve patching module files more efficiently
Certain pe_patch module files are now delivered to target nodes using methods that improve scalability and result in fewer file metadata checks during Puppet runs.
Receive a notification when CA certs are about to expire
The console now notifies you if a CA certificate is expiring soon. The Certificates page in the sidebar displays a yellow ! badge if a certificate expires in less than 60 days, and a red ! badge if a certificate expires in less than 30 days. If there are certificates that need signing in addition to the certificates expiring, the number of certificates that need to be signed is displayed in the badge but the color stays the same.
View details about a particular code deployment
The Code Manager /deploys/status endpoint now includes the deployment ID in the "deploys-status" section for incomplete deploys so you can correlate status output to a particular deployment request.
Additionally, you can query the Code Manager /deploys/status endpoint with a deployment ID to see details about a particular deployment. The response contains information about both the Code Manager deploy and the sync to compilers for the resulting commit.
Troubleshoot code deployments
File sync now uses the public git SHA recorded in the signature field of the .r10k-deploy.json file instead of an internal SHA that file sync created. Additionally, versioned directories used for lockless deploys now use an underscore instead of a hyphen so that paths are valid environment names. With these changes, you can now map versioned directories directly to SHAs in your control repository.
When upgrading to 2019.3 or later with versioned deploys enabled, versioned directories are recreated with underscores. You can safely remove orphaned directories with hyphens located at /opt/puppetlabs/server/data/puppetserver/filesync/client/versioned-dirs.
Report on user activities
A new GET /v2/events API tracks more user activities, like date, time, remote IP address, user ID, and action. You can use the console to generate a report of activities on the User details page.
Platform support
This version adds support for these platforms.
- Red Hat Enterprise Linux 8 aarch64
Deprecations and removals
Master removed from docs
Documentation for this release replaces the term master with primary server. This change is part of a company-wide effort to remove harmful terminology from our products.
For the immediate future, you’ll continue to encounter master
within the product, for example in parameters, commands, and
preconfigured node groups. Where documentation references these codified product
elements, we’ve left the term as-is.
As a result of this update, if you’ve bookmarked or linked to specific sections of a docs page that include master in the URL, you’ll need to update your link.
Whitelist and blacklist deprecated
In the interest of removing racially insensitive terminology, the terms
whitelist and blacklist are deprecated in favor of
allowlist and blocklist. Classes, parameters, and file
names that use these terms continue to work, but we recommend updating your
classification, Hiera data, and pe.conf
files as soon as possible in preparation for
their removal in a future release.
These are the classes, parameters, task parameters, and file names that are affected.
- puppet_enterprise::pg::cert_whitelist_entry
- puppet_enterprise::certs::puppetdb_whitelist
- puppet_enterprise::certs::whitelist_entry
- puppet_enterprise::master::code_manager::purge_whitelist
- puppet_enterprise::master::file_sync::whitelisted_certnames
- puppet_enterprise::orchestrator::ruby_service::whitelist
- puppet_enterprise::profile::ace_server::whitelist
- puppet_enterprise::profile::bolt_server::whitelist
- puppet_enterprise::profile::certificate_authority::client_whitelist
- puppet_enterprise::profile::console::cache::cache_whitelist
- puppet_enterprise::profile::console::whitelisted_certnames
- puppet_enterprise::profile::puppetdb::sync_whitelist
- puppet_enterprise::profile::puppetdb::whitelisted_certnames
- puppet_enterprise::puppetdb::cert_whitelist_path
- puppet_enterprise::puppetdb::database_ini::facts_blacklist
- puppet_enterprise::puppetdb::database_ini::facts_blacklist_type
- puppet_enterprise::puppetdb::jetty_ini::cert_whitelist_path
- /etc/puppetlabs/console-services/rbac-certificate-whitelist
Split-to-mono migration removed
The puppet infrastructure run migrate_split_to_mono
command has been removed. The command migrated a split installation to a standard
installation with the console and PuppetDB on the
primary server. Upgrades to PE 2019.2 and later
required migrating as a prerequisite to upgrade, so this command is no longer used.
If you're upgrading from an earlier version of PE
with a split installation, see Migrate from a split to a standard
installation in the documentation for your current version.
Resolved issues
Upgrade and puppet infrastructure commands failed if your primary server was not in the production environment
Upgrades and puppet infrastructure commands, including replica upgrade and compiler provisioning, conversion, and upgrade, failed with a Bolt::RunFailure if your primary server was not in the production environment.
This release fixes both issues, and upgrades to this version are unaffected. If your primary server is in an environment other than production:
- Verify that you've specified your non-production infrastructure environment for these parameters: pe_install::install::classification::pe_node_group_environment and puppet_enterprise::master::recover_configuration::pe_environment
- Run puppet infra recover_configuration --pe-environment <PRIMARY_ENVIRONMENT>
- When upgrading, run the installer with the --pe_environment flag: sudo ./puppet-enterprise-installer -- --pe_environment <PRIMARY_ENVIRONMENT>
Upgrade failed if a PostgreSQL repack was in progress
If a PostgreSQL repack operation was in progress when you attempted to upgrade PE, the upgrade could fail with the error cannot drop extension pg_repack because other objects depend on it.
Upgrade failed with an unenabled replica
PE upgrade failed if you had a provisioned, but not enabled, replica.
Compiler provisioning failed if a single compiler was unresponsive
The puppet infrastructure provision compiler command failed if any compiler in your pool failed a pre-provisioning health check.
puppet infrastructure commands failed with an external node classifier
With an external node classifier, puppet infrastructure commands, such as puppet infrastructure compiler upgrade and puppet infrastructure provision compiler, failed.
Automated Puppet runs could fail after running compiler or certificate regeneration commands
After provisioning compilers, converting compilers, or regenerating certificates with puppet infrastructure commands, automated Puppet runs could fail because the Puppet service hadn't restarted.
puppet infrastructure recover_configuration misreported success if specified environment didn't exist
If you specified an invalid environment when running puppet infrastructure recover_configuration, the system erroneously reported that the environment's configuration was saved.
Runs, plans, and tasks failed after promoting a replica
After promoting a replica, infrastructure nodes couldn't connect to the newly promoted primary server because the master_uris value still pointed to the old primary server.
This release fixes the issue for newly provisioned replicas. However, if you have an enabled replica, then in both the PE Agent and PE Infrastructure Agent node groups, in the puppet_enterprise::profile::agent class, verify that the setting for master_uris matches the setting for server_list. Both values must include both your primary server and replica, for example ["PRIMARY.EXAMPLE.COM", "REPLICA.EXAMPLE.COM"]. Setting these values ensures that agents can continue to communicate with the promoted replica in the event of a failover.
Skipping agent configuration when enabling a replica deleted settings for the PE Agent group
If you used the --skip-agent-config flag with puppet infra enable replica or puppet infra provision replica --enable, any custom settings that you specified for server_list and pcp_broker_list in the PE Agent node group were deleted.
Replica commands could leave the Puppet service disabled
The reinitialize replica command as well as the provision replica command, which includes reinitializing, left the Puppet service disabled on the replica.
Provisioning a replica failed after regenerating the primary server certificate
If you previously regenerated the certificate for your primary server, provisioning a replica could fail due to permission issues with backed up directories.
Console was inaccessible with PE set to IPv6
If you specified IPv6, PE Java services still listened to the IPv4 localhost. This mismatch could prevent access to the console as Nginx proxied traffic to the wrong localhost.
Apply blocks failed to compile
Puppet Server might have failed to compile apply blocks for plans when there were more than eight variables, or when variables had names that conflict with key names for hashes or target settings in plans.
YAML plans displayed all parameters as optional in the console
YAML plans listed all parameters as having default values, regardless of whether a default value was set in the code. This caused all parameters to display defaults in orchestrator APIs and show as optional in the console. YAML plans no longer display all parameters as optional.
Running puppet query produced a cryptic error
Running puppet query with insufficient permissions produced an error similar to this:
ERROR - &{<nil> } (*models.Error) is not supported by the TextConsumer, can be resolved by supporting TextUnmarshaler interface
Primary server reported HTTP error after Qualys scan
When running a Qualys scan, the primary server no longer reports the error "HTTP Security Header Not Detected. Issue at Port 443".
Nodes CSV export failed with PQL query
The CSV export functionality no longer produces an error when you specify nodes using a PQL query.
The wait_until_available function didn't work with multiple transports
When a target included in the TargetSpec argument to the wait_until_available plan function used the ACE (remote) transport, the function failed immediately and wouldn't wait for any of the targets in the TargetSpec argument.
Unnecessary logs and failed connections in bolt-server and ace-server
When requests were made with unsupported ciphers, bolt-server and ace-server would log stack traces. Stack traces might lead to unusual growth in the logs for those services when, for example, they are scanned by security scanning products. The Puma Server library in those services has been updated to prevent emitting the stack traces into bolt-server.log and ace-server.log.
Patch task could misreport success for Windows nodes
When patching Windows nodes, running the pe_patch::patch_server task always reported success, even if there were problems installing one or more updates. With this fix, the task now fails with an error message about which updates couldn't be installed successfully.
The pe_patch fact didn't consider classifier environment group
When pe_patch scheduled scripts that uploaded facts, the facts didn't consider the current environment the node was compiling catalogs in. If the local agent environment didn't match the environment specified by the server, the facts endpoint included tags for an unexpected environment.
Reenabling lockless code deploys could fail
Reenabling lockless code deploys could fail due to the persistence of the versioned code directory. With this release, any existing versioned code directory is deleted and recreated when you reenable lockless code deploys.
File-sync client repo grew after frequent commits
The file-sync client repo no longer grows rapidly when there are frequent commits to it, for example, when syncing the CA directory for disaster recovery and many new certificates are signed or revoked quickly.
PE 2019.8.1
Released August 2020
Enhancements
Value reporting
A new values API reports details about automated changes that PE makes to nodes, and provides an estimate of time freed by each type of change based on intelligent defaults or values you provide. You can also specify an average hourly salary and see an estimate of cost savings for all automated changes.
Console navigation and workflow improvements
- The Classification page was renamed Node groups.
- The setup page was renamed Admin.
- There is a new Inventory section in the sidebar, which contains the Nodes, Node groups, and Packages pages.
- The Inventory page was removed. To add nodes to inventory, click Add nodes in the upper right corner of the Nodes page.
- There is a new Access control page, which contains tabs for Users, User roles, User groups, and External directory.
- The Configuration tab was broken out into two tabs: Classes and Configuration data. The Classes tab is for declaring classes and setting parameters while the Configuration data tab is for setting parameters without declaring classes.
- There is a New in 2019.8 page in the sidenav, which lists console-related release notes. It will be updated after each z release and is visible for the first two weeks after a release.
Compiler conversion runs in parallel
When you convert all compilers at one time with puppet infrastructure run convert_legacy_compiler all=true, the process is now noticeably faster due to streamlining in when Puppet runs occur on target hosts.
Console displays enum and boolean plan parameter values in select menu
You can select plan parameters that are boolean or enum types from a drop down menu in the Value field.
Updates to metrics endpoints
Access to endpoints under /metrics is now controlled by trapperkeeper-authorization and configured in the Puppet Server auth.conf file. The default rule allows remote access with a valid Puppet certificate.
Setting the v2 metrics endpoint to debug no longer displays debug messages from Jolokia. In order to see debugging messages, set a configuration value in addition to the usual logback changes.
Deprecations and removals
Application orchestration features in the Puppet language
- Keywords: site, application, consumes, and produces
- Metaparameters: export and consume
- Resource kinds: application, site, capability_mapping
- Puppet::Parser::EnvironmentCompiler
- Puppet::Parser::Compiler::CatalogValidator::SiteValidator
- Puppet::Parser::Compiler::CatalogValidator::EnvironmentRelationshipValidator
- Puppet::Type#is_capability?
- Puppet::Type#application?
- Environment catalog REST API
SUSE Linux Enterprise Server dependencies
SUSE Linux Enterprise Server nodes no longer have a dependency on libboost_* and libyamlcpp packages.
Resolved issues
Upgrading Windows agents using the puppet_agent module could restart non-Puppet services
If you're using a log aggregator, upgrading Windows agents using the puppet_agent module could cause non-Puppet services to restart.
Upgrading agents using the puppet_agent module could produce non-breaking errors
Upgrading agents from versions 6.14 or 6.15 using the `puppet_agent` module could produce errors about an unavailable file resource or unknown HTTP resource. These errors occurred only during the initial Puppet agent run, when the agent was still using version 6.14 or 6.15 with an updated primary server. The errors resolved after the puppet agent service restarted.
Pre-upgrade check produced inaccurate errors on standalone PE-PostgreSQL nodes
On standalone PE-PostgreSQL nodes, the pre-upgrade check produced output like the following: ## Pre-Upgrade Checks Warning: Puppet agent is not running. Error: No configuration file found at /etc/puppetlabs/client-tools/services.conf. This file is installed automatically on Puppet Server nodes. Make sure you are running the command on a primary master, primary master replica, or compile master. Error: Try 'puppet infrastructure help status' for usage
The errors occurred because the pre-upgrade check verified services that run on the primary server but are not present on standalone PE-PostgreSQL nodes.
Upgrade could fail with custom structured facts
If you used custom facts that rely on structured facts, upgrade could fail with an error related to your custom fact, for example: `undefined method '[]' for nil:NilClass (Puppet::Error)`.
Upgrade commands failed if PXP agents were configured to connect to load balancers
In installations with load balancers, the `puppet infrastructure upgrade` commands could fail if the PXP agent on infrastructure nodes connected to load balancers instead of to the primary server. The upgrade plan now verifies configuration and prompts you to fix any issues before continuing with the upgrade.
Compiler upgrade could fail to upgrade Puppet Server
The `puppet infrastructure upgrade compiler` command could fail to upgrade Puppet Server, depending on how the catalog was built for performing the upgrade.
Converting all legacy compilers failed in disaster recovery installations
With disaster recovery enabled, the command to convert legacy compilers with the `all=true` option failed.
Converting legacy compilers failed with autosigning enabled
Running `puppet infrastructure run convert_legacy_compiler` with autosigning enabled caused the conversion to fail during certificate regeneration.
Converting legacy compilers could fail with DNS alternative names
If `dns_alt_names` were specified in the `[agent]` section of `puppet.conf`, the `puppet infrastructure run convert_legacy_compiler` command failed because it didn't recognize the alternative names. As a temporary workaround, we recommended moving `dns_alt_names` to the `[main]` section of `puppet.conf` on the compilers to be converted; however, `[agent]` is the preferred section for this parameter. The compiler conversion command now recognizes DNS alternative names in either the `[agent]` or `[main]` section of `puppet.conf`.
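For instance, this minimal sketch sets `dns_alt_names` in the preferred `[agent]` section on a compiler before converting it; the names are placeholders:
```
# Add DNS alternative names to the [agent] section of puppet.conf on the compiler.
# compiler01.example.com and puppet.example.com are placeholder names.
puppet config set dns_alt_names "compiler01.example.com,puppet.example.com" --section agent
```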
Missing package dependencies for SUSE Linux Enterprise Server agent nodes
On agent nodes running SUSE Linux Enterprise Server 15, the `libyaml-cpp` package and operating system packages prefixed with `libboost_` were no longer bundled with the Puppet agent, and also might not have been included in the operating system.
Command to regenerate agent certificates didn't work with nodes behind a load balancer
In large and extra-large installations with load balancers, the `puppet infrastructure run regenerate_agent_certificate` command failed because compilers didn't have the tasks needed to run the command, and agent nodes don't communicate directly with the primary server.
With lockless code deploy enabled, deleted branches could increase disk use
If you deleted a branch from your control repository with lockless deploys enabled, some artifacts could remain on disk and increase your disk use.
With lockless code deploy enabled, deploying with `--wait` could produce an erroneous timeout
Deploying code from the command line or API with the `--wait` flag produced a timeout error, even though the code deploy completed.
The `blackout_windows` parameter in the `pe_patch` class couldn't handle time zones with a negative UTC offset
If you used a negative value to offset the time zone when setting the `blackout_windows` parameter for patching node groups, the `pe_patch` fact returned an error.
The `pe_patch` fact wouldn't generate if there was a parsing error
The `pe_patch` fact couldn't be generated if there was an error when parsing the latest cached catalog for the node. Additionally, if you did not have `puppetlabs-stdlib` installed, packages that were fixed to a particular version in the node's catalog were not recognized by `pe_patch`.
Node search input didn't respond to Enter key
The node name search bar on the Nodes page in the console didn't respond to the Enter key, so you had to click Submit manually to search for a node. You can now use Enter to search for nodes.
Console radiator bars had a width of zero
In the console, the colored bars in the radiator were broken and didn't show the correct layout. The radiator has been fixed.
PE 2019.8
Released June 2020
New features
Patch management
You can now manage patches on *nix and Windows nodes in the Patch Management section of the console. After setting up patching node groups, you can view the patch status for your nodes, filter available patches by type and operating system, and run a pre-filled task to apply patches to selected nodes from the Patches page. For information on configuring patch management and applying patches, see Managing patches.
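You can also apply patches from the PE command line. This is a minimal sketch that assumes the pre-filled task is `pe_patch::patch_server` and uses a placeholder node name:
```
# Run the patching task against a single node from the CLI instead of the console.
# agent01.example.com is a placeholder; pe_patch::patch_server is assumed here.
puppet task run pe_patch::patch_server --nodes agent01.example.com
```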
Lockless code deploys
Using Code Manager, you can now optionally deploy code to versioned code directories rather than the live code directory. This change enables Puppet Server to continue serving catalog requests even as you deploy code.
You can enable lockless code deploys by setting `puppet_enterprise::profile::master::versioned_deploys` to `true`. For more information about lockless code deploys, see Enable lockless code deploys.
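As a quick check, this minimal sketch confirms the resolved value on the primary server, assuming you set the parameter through Hiera:
```
# Confirm that the lockless code deploy setting resolves to true for the primary server.
puppet lookup puppet_enterprise::profile::master::versioned_deploys \
  --node "$(puppet config print certname)" --render-as s
```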
Enhancements
Improvements to `puppet infrastructure upgrade` commands
When you specify more than one compiler to upgrade, the `puppet infrastructure upgrade compiler` command now upgrades all compilers at the same time, rather than sequentially. Additionally, with both the compiler and replica upgrade commands, you can now specify the location of an authentication token other than the default. For example: `puppet infrastructure upgrade compiler --token-file=<PATH_TO_TOKEN>`.
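To illustrate, this minimal sketch upgrades two compilers in one run with a non-default token location; the hostnames and token path are placeholders:
```
# Upgrade two compilers at the same time, reading the RBAC token from a custom path.
puppet infrastructure upgrade compiler compiler01.example.com,compiler02.example.com \
  --token-file=/root/tokens/pe-admin.token
```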
More secure code deploys
Permissions for the Puppet code directory are now managed by file sync directly, instead of relying on symlinks. This change improves security during code deployment.
Logging for `puppet infrastructure` commands that use the orchestrator
A new log file located at /var/log/puppetlabs/installer/orchestrator_info.log contains run details about `puppet infrastructure` commands that use the orchestrator, including the commands to provision and upgrade compilers, convert legacy compilers, and regenerate agent and compiler certificates.
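For example, you can follow the log while one of these commands runs:
```
# Watch orchestrator-backed puppet infrastructure command activity as it happens.
tail -f /var/log/puppetlabs/installer/orchestrator_info.log
```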
Improved error handling for plans
Before running plans, the built-in check for node connectivity now provides more descriptive error messages, such as host key verification failures.
Unspecified default values for tasks and plans are supplied automatically
If you don't specify a value for a task or plan parameter that has a default value, the default is now supplied automatically, and such parameters are labeled optional in the console.
New scheduling options in the console
You can now specify scheduled tasks and Puppet jobs to run every two weeks or every four weeks.
Plan support for `apply()` on `pcp` transports
Plans now support using the `apply_prep()` function and blocks of Puppet code within calls to `apply()`. The feature is only available on targets connected to PE using the PCP transport and does not work on nodes connected over SSH or WinRM.
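As a minimal sketch, assuming a module named `mymodule` on the environment's modulepath and PCP-connected targets (both hypothetical), a plan using `apply_prep()` and `apply()` looks roughly like this, written as a shell heredoc for illustration; in practice you would deploy the plan through your control repo:
```
# Hypothetical plan file showing apply_prep() plus an apply() block.
cat <<'EOF' > mymodule/plans/manage_motd.pp
plan mymodule::manage_motd(TargetSpec $targets) {
  apply_prep($targets)            # ensure the targets can compile apply blocks
  $results = apply($targets) {    # Puppet code applied over the PCP transport
    file { '/etc/motd':
      ensure  => file,
      content => "Managed by a PE plan\n",
    }
  }
  return $results
}
EOF
```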
Support for new options in the command/deploy endpoint
The command/deploy endpoint now accepts these options (see the sketch after this list):
- `filetimeout`
- `http_connect_timeout`
- `http_keepalive_timeout`
- `http_read_timeout`
- `ordering`
- `skip_tags`
- `tags`
- `use_cached_catalog`
- `usecacheonfailure`
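Here is a minimal sketch of passing some of these options to the orchestrator's command/deploy endpoint, assuming they are accepted as top-level keys in the request body alongside existing keys such as `environment` and `scope`; the hostname and token path are placeholders:
```
# POST a deploy command that tags the run and bypasses the cached catalog.
# Treating each new option as a top-level body key is an assumption for illustration.
TOKEN=$(cat ~/.puppetlabs/token)
curl -s --cacert /etc/puppetlabs/puppet/ssl/certs/ca.pem \
  -X POST "https://pe-primary.example.com:8143/orchestrator/v1/command/deploy" \
  -H "Content-Type: application/json" \
  -H "X-Authentication: ${TOKEN}" \
  -d '{
        "environment": "production",
        "scope": { "nodes": ["agent01.example.com"] },
        "tags": ["webserver"],
        "use_cached_catalog": false
      }'
```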
Platform support
This version adds support for these platforms:
- macOS 10.15
Deprecations and removals
Razor removed
Razor has been removed from PE in this release. If you want to continue using Razor, you can use the open source version of the tool.
Support for `bolt.yaml` settings in plans removed
Settings from `bolt.yaml` are no longer read from the environment directory. The `modulepath` setting is only configurable from `environment.conf`.
Platforms removed
Support for these platforms is removed in this release:
- Enterprise Linux 6
- Ubuntu 16.04
Resolved issues
Upgrade removed custom classification rules from PE Master node group
Custom rules that you used to classify compilers in the PE Master node group were removed upon upgrade, or when you ran `puppet infrastructure configure`.
Upgrade failed with a Could not retrieve facts error
Upgrades could fail with an error like: Could not retrieve facts ... undefined method `split' for nil:NilClass (Puppet::Error) from /opt/puppetlabs/installer/lib/ruby/gems/2.5.0/gems/facter-4.0.20/lib/custom_facts/util/loader.rb:125:in `load'
Upgrading a replica could temporarily lock the agent on the primary server
If you tried to run Puppet on your primary server before the `puppet infrastructure upgrade replica` command completed, you could encounter an error that a Puppet run was already in progress.
FIPS installs didn't fully support cert chain validation
In FIPS environments, RBAC could not connect to LDAP using a `pem` or `jks` file.
Command to remove old PostgreSQL versions failed on Ubuntu
When run on Ubuntu nodes, the `puppet infrastructure run remove_old_postgresql_versions` command failed, erroneously reporting that PostgreSQL wasn't installed.
Enabling a replica could fail immediately after provisioning
When running `puppet infrastructure provision replica --enable`, the command could fail after the replica was provisioned but before it was enabled if services on the replica were still starting up. The command now waits for services to start and verifies that replication has completed before enabling the replica.
Ubuntu 20.04 couldn't be installed with PE package management
Ubuntu 20.04 wasn't available for installation as a `pe_repo` class, even though it was a supported agent platform.
Loading plan lists crashed console services
When plan run results were large, the console crashed due to high memory usage on the Plan details page. An optional `results` query parameter has been added to the `GET /plan_jobs` endpoint. This parameter prevents high memory usage in the console when loading results for large plan runs.
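For example, a minimal sketch of listing plan jobs without their result payloads; the hostname and token path are placeholders, and the accepted value for `results` is an assumption:
```
# List plan jobs while excluding large result payloads from the response.
TOKEN=$(cat ~/.puppetlabs/token)
curl -s --cacert /etc/puppetlabs/puppet/ssl/certs/ca.pem \
  -H "X-Authentication: ${TOKEN}" \
  "https://pe-primary.example.com:8143/orchestrator/v1/plan_jobs?results=false"
```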
Default value for tasks and plans dropped in middleware
When a task had a default value of `false` or `null`, the console metadata panel did not display the default value.
Event inspector displayed wrong table types
Browsing the event inspector sometimes created inconsistencies in tables and errors in table links.