Upgrading Puppet Enterprise
Upgrade your PE installation as new versions become available.
Upgrade paths
These are the valid upgrade paths for PE.
If you're on version... | Upgrade to... | Notes |
---|---|---|
2019.1.z | You're up to date! | |
2019.0.z | 2019.1 | |
2018.1.1 or later | 2019.1 | You must have version 2018.1.1 or later to complete the prerequisites for upgrading to 2019.1. For details, see Upgrade cautions. |
2018.1.0 | 2018.1.z | |
2017.3.z | 2018.1 | |
2017.2.z | 2018.1 | |
2017.1.z | 2018.1 | |
2016.5.z | 2018.1 | |
2016.4.10 or later | 2018.1 | |
2016.4.9 or earlier | latest 2016.4.z, then 2018.1 | To upgrade to 2018.1 from 2015.2.z through 2016.4.9, you must first upgrade to the latest 2016.4.z. |
2016.2.z | latest 2016.4.z, then 2018.1 | |
2016.1.z | latest 2016.4.z, then 2018.1 | |
2015.3.z | latest 2016.4.z, then 2018.1 | |
2015.2.z | latest 2016.4.z, then 2018.1 | |
3.8.x | latest 2016.4.z, then 2018.1 | To upgrade from 3.8.x, you must first migrate to the latest 2016.4.z. This upgrade requires a different process than upgrades from other versions. |
Upgrade cautions
These are the major updates to recent PE versions that you should be aware of when upgrading.
TLSv1 and v1.1 disabled in PE 2019.1
TLSv1 and TLSv1.1 are disabled in PE 2019.1. Agent platforms that don't support TLSv1.2, including the following, are affected:
- AIX
- CentOS 5
- RHEL 5
- SLES 11
- Solaris 10
- Windows Server 2008r2
Migration of PuppetDB resource_events table in PE 2019.1
Upgrading from PE versions prior to 2019.1 requires PuppetDB to perform a migration of the resource_events table. This migration can take a couple of hours to complete for installations with thousands of agents. The migration also produces a lot of disk writes, which can increase the performance overhead of VM snapshots. If your installation is large enough to use an external database node, consider a database backup as a rollback strategy instead of a VM snapshot.
Truncating the resource_events table to decrease migration time
Truncating the resource_events table can significantly reduce migration time and lessen your downtime, especially if the table is larger than a few gigabytes (GB). If you truncate the table, the Events page in the PE console is temporarily blank after the upgrade. The data is repopulated as regular Puppet runs resume.
To check the size of the resource_events table, run the following command on the node where PostgreSQL is running (the master node in a monolithic installation, the PuppetDB node in a split installation, or the standalone PE-PostgreSQL node):
su pe-postgres --shell /bin/bash --command "/opt/puppetlabs/server/bin/psql --dbname pe-puppetdb --command \"SELECT relname, pg_size_pretty(pg_relation_size(oid)) AS size FROM pg_class WHERE relname='resource_events';\""
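If you want to script the "larger than a few gigabytes" decision, a minimal sketch might look like the following. The `should_truncate` helper and the 2 GB cutoff are assumptions for illustration, not PE defaults; the byte count would come from `SELECT pg_relation_size('resource_events');` run through the psql command shown above.

```shell
#!/bin/sh
# Decide whether resource_events is large enough to be worth truncating
# before upgrade. Pass the table size in bytes as the first argument.
should_truncate() {
  size_bytes=$1
  threshold=$((2 * 1024 * 1024 * 1024))  # assumed 2 GB cutoff
  if [ "$size_bytes" -gt "$threshold" ]; then
    echo "truncate"
  else
    echo "keep"
  fi
}

should_truncate 3221225472   # 3 GB table -> truncate
should_truncate 524288000    # 500 MB table -> keep
```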
Truncate the resource_events table on a monolithic installation
To truncate the resource_events table on a monolithic installation, run the following command on your master:
su - pe-postgres --shell /bin/bash --command "/opt/puppetlabs/server/bin/psql --dbname pe-puppetdb --command 'TRUNCATE resource_events'"
Truncate the resource_events table on a high availability installation
To truncate the resource_events table on a high availability installation:
- Stop PuppetDB on your master and replica:
sudo puppet resource service puppetdb ensure=stopped
- Run the truncation command on the node where PostgreSQL is running (the master in a monolithic installation, or the standalone PE-PostgreSQL node):
su - pe-postgres --shell /bin/bash --command "/opt/puppetlabs/server/bin/psql --dbname pe-puppetdb --command 'TRUNCATE resource_events'"
- Restart PuppetDB on your master and replica:
sudo puppet resource service puppetdb ensure=running
Truncate the resource_events table on a split installation
To truncate the resource_events table on a split installation, run the following command on the node where PostgreSQL is running (generally the PuppetDB node in a split installation, or the standalone PE-PostgreSQL node):
su - pe-postgres --shell /bin/bash --command "/opt/puppetlabs/server/bin/psql --dbname pe-puppetdb --command 'TRUNCATE resource_events'"
Certificate architecture and handling in PE 2019.0
PE 2019.0 and later use an intermediate certificate authority architecture by default, courtesy of Puppet Server. When you upgrade to PE 2019.0 or later, you can optionally regenerate certificates to adopt the intermediate certificate architecture.
To adopt the new CA architecture, both your master and agents must be upgraded, and you must regenerate certificates. You can use pre-6.x agents with a Puppet 6.x or PE 2019.0 or later master, but this combination doesn't take advantage of the new intermediate certificate authority architecture. If you don't upgrade all of your nodes to 6.x, don't regenerate certificates, because pre-6.x agents won't work with the new CA architecture.
MCollective removal in PE 2019.0
If you're upgrading from a 2018.1 installation with MCollective enabled, you must take additional steps to ensure a successful upgrade.
Before upgrade
- Remove MCollective from nodes in your infrastructure. If any nodes are configured with MCollective or ActiveMQ profiles when you attempt to upgrade, the installer halts and prompts you to remove the profiles. For example, remove PE MCollective node group and any of the deprecated parameters:
- mcollective_middleware_hosts
- mcollective
- mcollective_middleware_port
- mcollective_middleware_user
- mcollective_middleware_password
Tip: If your PuppetDB includes outdated catalogs for nodes that aren't currently being managed, the installer might report that MCollective is active on those nodes. You can deactivate the nodes with puppet node deactivate, or use Puppet to update the records.
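If many stale nodes trigger this warning, the cleanup can be scripted. The sketch below only prints the `puppet node deactivate` commands it would run, so you can review them first; pipe the output to sh (or drop the echo) to actually deactivate. The certname list and file handling are assumptions for illustration.

```shell
#!/bin/sh
# Print a `puppet node deactivate` command for each certname read from
# standard input, one per line. Review the output, then pipe it to sh
# (or remove the echo) to actually deactivate the nodes.
print_deactivate_cmds() {
  while IFS= read -r certname; do
    [ -n "$certname" ] || continue   # skip blank lines
    echo "puppet node deactivate $certname"
  done
}

printf 'agent01.example.com\nagent02.example.com\n' | print_deactivate_cmds
```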
After upgrade
- Manually remove these node groups:
  - PE MCollective
  - PE ActiveMQ Broker
  - Any custom node group you created for ActiveMQ hubs
If you customized classification with references to MCollective or ActiveMQ profiles, remove the profiles from your classification. In this version of PE, nodes that include MCollective or ActiveMQ profiles trigger a warning during agent runs. Future versions of PE that remove the profiles completely can trigger failures in catalog compilation if you leave the profiles in place.
Removing MCollective
Remove MCollective and its related files from the nodes in your infrastructure. You must have PE version 2018.1.1 or later to complete this task.
The server components of MCollective, including pe-activemq and the peadmin user, are removed from the master and the MCollective service on agents is stopped. You must complete the upgrade to 2019.0 or later to completely remove MCollective from agents.
Test modules before upgrade
To ensure that your modules work with the newest version of PE, update and test them with Puppet Development Kit (PDK) before upgrading.
If you are already using PDK, your modules should pass validation and unit tests with your currently installed version of PDK.
Update PDK with each new release to ensure compatibility with new versions of PE.
After you've verified that your modules work with the new PE version, you can continue with your upgrade.
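The pre-upgrade module check can be scripted. The sketch below assumes your modules live side by side in one directory and prints the PDK commands (`pdk validate` and `pdk test unit`) rather than running them; drop the echo to execute them for real. The directory layout and helper name are assumptions.

```shell
#!/bin/sh
# For each module directory under the given path, print the PDK
# validation and unit-test commands you'd run before upgrading PE.
# Remove the echo (or pipe the output to sh) to execute them.
print_pdk_checks() {
  modules_dir=$1
  for mod in "$modules_dir"/*/; do
    [ -d "$mod" ] || continue
    echo "cd $mod && pdk validate && pdk test unit"
  done
}

# Demo with placeholder module directories.
mkdir -p /tmp/modules_demo/apache /tmp/modules_demo/ntp
print_pdk_checks /tmp/modules_demo
```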
Upgrade a monolithic installation
To upgrade, run the text-based PE installer on your master, and then upgrade any additional components. To upgrade with high availability enabled, you must also run the upgrade script on your replica.
Back up your PE installation.
If you encounter errors during upgrade, you can fix them and run the installer again.
Upgrade a split installation
To upgrade a split or large environment installation, run the text-based installer on each infrastructure node in your environment, and then upgrade any additional components.
Back up your PE installation.
If you encounter errors during upgrade, you can fix them and run the installer again.
Upgrade the master
Upgrading the master is the first step in upgrading a split or large environment installation.
Upgrade PuppetDB
In a split installation, after you upgrade the master, you're ready to upgrade PuppetDB.
Upgrade the console
In a split installation, after you upgrade the master and PuppetDB, you're ready to upgrade the console.
Run Puppet on infrastructure nodes
To complete a split upgrade, run Puppet on all infrastructure nodes in the order that they were upgraded.
- Run Puppet on the master node.
- Run Puppet on the PuppetDB node.
- Run Puppet on the console node.
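The ordering requirement above can be captured in a small wrapper. This sketch prints an agent run per host in the required order (master, then PuppetDB, then console); the hostnames are placeholders, and in practice you'd run `puppet agent -t` on each node directly or trigger the runs over SSH.

```shell
#!/bin/sh
# Print Puppet runs for split-installation infrastructure nodes in the
# order they must happen after upgrade: master, then PuppetDB, then
# console. Hosts are passed in the required order.
print_ordered_runs() {
  for host in "$@"; do
    echo "ssh $host 'puppet agent -t'"
  done
}

# Placeholder hostnames -- substitute your own infrastructure nodes.
print_ordered_runs master.example.com puppetdb.example.com console.example.com
```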
Upgrade remaining infrastructure nodes
After the main components of your infrastructure are upgraded, you must upgrade any additional infrastructure nodes, such as compilers, hubs, and spokes.
Migrate from a split to a monolithic installation
Split installations, where the master, console, and PuppetDB are installed on separate nodes, are deprecated. Migrate from an existing split installation to a monolithic installation—with or without compilers—and a standalone PE-PostgreSQL node.
The puppet infrastructure run command leverages built-in Bolt plans to automate certain management tasks. To use this command, you must be able to connect using SSH from your master to any nodes that the command modifies. You can establish an SSH connection using key forwarding, a local key file, or by specifying keys in .ssh/config on your master. For more information, see Bolt OpenSSH configuration options.
To view all available parameters, use the --help flag. The logs for all puppet infrastructure run Bolt plans are located at /var/log/puppetlabs/installer/bolt_info.log.
The migration involves editing pe.conf, unpinning and uninstalling packages from affected infrastructure nodes, and running Puppet multiple times. Treat this process as you would any major migration by thoroughly testing it in an environment that's as similar to your production environment as possible.
Upgrading PostgreSQL
If you use the default PE-PostgreSQL database installed alongside PuppetDB, you don't have to take special steps to upgrade PostgreSQL. However, if you have a standalone PE-PostgreSQL instance, or if you use a PostgreSQL instance not managed by PE, you must take extra steps to upgrade PostgreSQL.
You must upgrade a standalone PE-PostgreSQL instance each time you upgrade PE. To upgrade a standalone PE-PostgreSQL instance, run the installer on the PE-PostgreSQL node first, then proceed with upgrading the rest of your infrastructure.
If you use a PostgreSQL instance not managed by PE, upgrade it using one of these approaches:
- Back up the databases, wipe your old PostgreSQL installation, install the latest version of PostgreSQL, and restore the databases.
- Back up the databases, set up a new node with the latest version of PostgreSQL, restore the databases to the new node, and reconfigure PE to point to the new database_host.
- Run pg_upgrade to move from the older PostgreSQL version to the latest version.
Checking for updates
To see the version of PE you're currently using, run puppet --version
on the command line. Check the PE download site to find information about the latest
maintenance release.
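To script an "am I current?" check against the version reported by puppet --version, you can compare version strings with sort -V. This is a sketch; the helper name and the version numbers below are examples, not the actual latest release.

```shell
#!/bin/sh
# Compare two PE-style version strings. Prints "up to date" when the
# installed version is at least the latest, "upgrade available" otherwise.
# Get the installed version locally with: puppet --version
check_version() {
  installed=$1
  latest=$2
  newest=$(printf '%s\n%s\n' "$installed" "$latest" | sort -V | tail -n 1)
  if [ "$newest" = "$installed" ]; then
    echo "up to date"
  else
    echo "upgrade available"
  fi
}

check_version 2019.1.0 2019.1.0   # up to date
check_version 2018.1.4 2019.1.0   # upgrade available
```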
PE checks for updates whenever the pe-puppetserver service restarts. As part of the check, it passes some basic, anonymous information to Puppet servers. You can optionally disable update checking.