Upgrading Puppet Enterprise
Upgrade your PE installation as new versions become available.
Upgrade paths
These are the valid upgrade paths for PE.
If you're on version... | Upgrade to... | Notes |
---|---|---|
2019.7 (latest) | You're up to date! | |
2019.y | latest | |
2018.1.2 or later (2018.1.3 or later with disaster recovery) | latest | You must have version 2018.1.2 or later in order to complete prerequisites for upgrade to the latest version. With disaster recovery enabled, you must have version 2018.1.3 or later in order to upgrade to the latest version. Alternatively, you can forget and then recreate your replica after upgrade. |
2018.1.0 or 2018.1.1 | 2018.1.z | |
2017.3.z | 2018.1 | |
2017.2.z | 2018.1 | |
2017.1.z | 2018.1 | |
2016.5.z | 2018.1 | |
2016.4.10 or later | 2018.1 | |
2016.4.9 or earlier | 2016.4.z, then 2018.1 | To upgrade to 2018.1 from 2015.2.z through 2016.4.9, you must first upgrade to the latest 2016.4.z. |
2016.2.z | 2016.4.z, then 2018.1 | |
2016.1.z | 2016.4.z, then 2018.1 | |
2015.3.z | 2016.4.z, then 2018.1 | |
2015.2.z | 2016.4.z, then 2018.1 | |
3.8.x | 2016.4.z, then 2018.1 | To upgrade from 3.8.x, you must first migrate to the latest 2016.4.z. This upgrade requires a different process than upgrades from other versions. |
Upgrade cautions
These are the major updates to PE since the last long-term support release, 2018.1. Review these recommendations and plan accordingly before upgrading to this version.
PuppetDB migrations in PE 2019.1, 2019.3, and 2019.7
Upgrades to PE 2019.1, 2019.3, and 2019.7 involve database migrations that can slow upgrade significantly. Deleting PuppetDB reports and truncating the resource events table before you upgrade can reduce migration time and lessen your downtime.
- Stop the PuppetDB service:

  ```
  service pe-puppetdb stop
  ```

- On your PE-PostgreSQL server, create a file named `/tmp/delete-reports.sql` and set it to be owned by the `pe-postgres` user (`chown pe-postgres:pe-postgres /tmp/delete-reports.sql`).
- Add contents to the `.sql` file according to your PE version.

  For PE versions earlier than 2019.3:

  ```sql
  BEGIN TRANSACTION;
  ALTER TABLE certnames DROP CONSTRAINT IF EXISTS certnames_reports_id_fkey;
  UPDATE certnames SET latest_report_id = NULL;
  TRUNCATE TABLE reports CASCADE;
  ALTER TABLE certnames ADD CONSTRAINT certnames_reports_id_fkey
    FOREIGN KEY (latest_report_id) REFERENCES reports(id) ON DELETE SET NULL;
  COMMIT TRANSACTION;
  ```

  For PE 2019.3 through 2019.7:

  ```sql
  BEGIN TRANSACTION;
  ALTER TABLE certnames DROP CONSTRAINT IF EXISTS certnames_reports_id_fkey;
  UPDATE certnames SET latest_report_id = NULL;
  DO $$
  DECLARE r RECORD;
  BEGIN
    FOR r IN (SELECT tablename FROM pg_tables WHERE tablename LIKE 'resource_events_%') LOOP
      EXECUTE 'DROP TABLE ' || quote_ident(r.tablename);
    END LOOP;
  END $$;
  TRUNCATE TABLE reports CASCADE;
  ALTER TABLE certnames ADD CONSTRAINT certnames_reports_id_fkey
    FOREIGN KEY (latest_report_id) REFERENCES reports(id) ON DELETE SET NULL;
  COMMIT TRANSACTION;
  ```

- Run the command:

  ```
  su - pe-postgres -s /bin/bash -c "/opt/puppetlabs/server/bin/psql -d pe-puppetdb -f /tmp/delete-reports.sql"
  ```
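To gauge how much the cleanup will save before you run it, you can check how much disk the reports data occupies. A minimal sketch, assuming the default PE paths and database name used in the steps above; run it on your PE-PostgreSQL node:

```bash
# Total size of the reports table, including indexes and TOAST data.
# Larger values mean the pre-upgrade cleanup saves more migration time.
su - pe-postgres -s /bin/bash -c \
  "/opt/puppetlabs/server/bin/psql -d pe-puppetdb \
   -c \"SELECT pg_size_pretty(pg_total_relation_size('reports'));\""
```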
Java 11 upgrade in PE 2019.3
PE 2019.3 includes an upgrade from Java version 8 to version 11. If you've customized PE Java services, or use plug-ins that include Java code, test PE 2019.3 and later thoroughly in a non-production environment before upgrading.
Orchestrator memory use increase in PE 2019.2
Puppet orchestrator uses more memory in version 2019.2 than in previous versions due to the addition of a Java virtual machine (JVM), which enables new features and functionalities such as plans. If your memory use is near capacity when running PE 2019.1 or older versions, allocate additional memory before upgrading to PE 2019.2.
Additionally, take care when writing plans, as they can require more memory than is allocated to the orchestrator. To work around this issue, rewrite the plan or increase the memory allocated to the orchestrator.
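If you need to allocate more memory, one approach is to raise the orchestrator's JVM heap through the orchestrator profile's `java_args` parameter. This is a hedged sketch, assuming the default `pe.conf` location; verify the parameter name and current values against your PE version's configuration reference before applying it:

```bash
# Append a larger heap setting for the orchestration service, then apply it.
# The values shown are illustrative; size them for your workload.
cat >> /etc/puppetlabs/enterprise/conf.d/pe.conf <<'EOF'
"puppet_enterprise::profile::orchestrator::java_args": {"Xmx": "1024m", "Xms": "256m"}
EOF
puppet infrastructure configure
```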
PostgreSQL 11 upgrade in PE 2019.2
PE 2019.2.0 includes an upgrade from `pe-postgresql` version 9.6 to version 11. As with any major version bump of PostgreSQL, the datastore must be migrated to a format compatible with the new version of PostgreSQL. The PE installer performs this migration automatically using the PostgreSQL `pg_upgrade` utility. Because both the 9.6 datastore and the new 11 datastore remain present on disk, the partition used for PostgreSQL must have enough space for the migrated datastore (calculated with a margin as 110% of the current 9.6 datastore). The installer issues a warning and cancels the upgrade if there is insufficient space.
The datastore migration also increases the amount of time required to complete the upgrade. The time required varies depending on your installation's size and hardware setup, but broadly, expect between two and four minutes of additional time per 10GB of datastore size.
The `pe_postgresql_info` fact, included in the `pe-modules` package, provides information about the size of your PostgreSQL installation as well as the size and number of available bytes for the partition. To review this fact, run `facter -p pe_postgresql_info` on the PE node that runs the `pe-postgresql` service (either the master node or the standalone PostgreSQL node in extra-large installations).
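As a quick cross-check of the fact's output, you can inspect the datastore and its partition directly. A minimal sketch, assuming the default PE data directory:

```bash
# Size of the existing 9.6 datastore; the partition needs roughly 110% of
# this value free for the automatic pg_upgrade migration to proceed.
du -sh /opt/puppetlabs/server/data/postgresql/9.6
# Free space on the partition holding the datastore.
df -h /opt/puppetlabs/server/data/postgresql
```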
After upgrading, you can optionally remove packages and directories associated with older PostgreSQL versions with the command `puppet infrastructure run remove_old_postgresql_versions`. If applicable, the installer prompts you to complete this cleanup.
TLSv1 and v1.1 disabled in PE 2019.1
TLSv1 and TLSv1.1 are disabled by default in PE 2019.1. This change affects communication with agents on platforms that support only these older protocols:
- AIX
- CentOS 5
- RHEL 5
- SLES 11
- Solaris 10
- Windows Server 2008 R2
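To confirm that an agent platform can still reach your master once the older protocols are disabled, you can test a TLSv1.2 handshake from the agent node. A hedged sketch using stock OpenSSL (`MASTER` is a placeholder for your master's hostname; 8140 is the default Puppet port):

```bash
# Succeeds only if this host's OpenSSL can negotiate TLSv1.2 with the master.
# On very old platforms the -tls1_2 flag itself may be missing, which is
# equally diagnostic.
openssl s_client -connect MASTER:8140 -tls1_2 </dev/null
```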
Certificate architecture and handling in PE 2019.0
- To upgrade to 2019.0 or later and keep your existing CA, upgrade infrastructure nodes and agents as normal. You can continue to use pre-6.x agents with a Puppet 6.x or PE 2019.0 or later master as long as you don't regenerate certificates.
- To migrate to 2019.0 or later and keep your existing CA, install the new version and copy `/etc/puppetlabs/puppet/ssl` from your old master (see the sketch after this list). You can continue to use pre-6.x agents with a Puppet 6.x or PE 2019.0 or later master as long as you don't regenerate certificates.
- To adopt the new CA architecture, upgrade both your master and agents, and then regenerate certificates. If you don't upgrade all of your nodes to 6.x, don't regenerate certificates, because pre-6.x agents won't work with the new CA architecture.
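The copy step in the migration path above can be done with any file-transfer tool that preserves permissions. A minimal sketch, assuming root SSH access and a hypothetical `oldmaster` hostname:

```bash
# Copy the old master's ssldir to the new master, keeping ownership and modes.
rsync -a oldmaster:/etc/puppetlabs/puppet/ssl/ /etc/puppetlabs/puppet/ssl/
```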
MCollective removal in PE 2019.0
If you're upgrading from a 2018.1 installation with MCollective enabled, you must take additional steps to ensure a successful upgrade.
Before upgrade
- Remove MCollective from nodes in your infrastructure. If any nodes are configured with MCollective or ActiveMQ profiles when you attempt to upgrade, the installer halts and prompts you to remove the profiles. For example, remove the PE MCollective node group and any of the deprecated parameters:
  - `mcollective_middleware_hosts`
  - `mcollective`
  - `mcollective_middleware_port`
  - `mcollective_middleware_user`
  - `mcollective_middleware_password`

Tip: If your PuppetDB includes outdated catalogs for nodes that aren't currently being managed, the installer might report that MCollective is active on those nodes. You can deactivate the nodes with `puppet node deactivate` or use Puppet to update the records.
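To find nodes whose catalogs still reference MCollective before you attempt the upgrade, you can query PuppetDB. A hedged sketch using the PE client tools' PQL support (the class title casing follows PuppetDB conventions; adjust the title if your classification uses different profiles):

```bash
# List certnames whose most recent catalog includes the MCollective profile.
puppet query 'resources[certname] { type = "Class" and title = "Puppet_enterprise::Profile::Mcollective" }'
```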
After upgrade
- Manually remove these node groups:
  - PE MCollective
  - PE ActiveMQ Broker
  - Any custom node group you created for ActiveMQ hubs
- If you customized classification with references to MCollective or ActiveMQ profiles, remove the profiles from your classification. In this version of PE, nodes that include MCollective or ActiveMQ profiles trigger a warning during agent runs. Future versions of PE that remove the profiles completely can trigger failures in catalog compilation if you leave the profiles in place.
Removing MCollective
Remove MCollective and its related files from the nodes in your infrastructure. You must have PE version 2018.1.1 or later to complete this task.
The server components of MCollective, including `pe-activemq` and the `peadmin` user, are removed from the master, and the MCollective service on agents is stopped. You must complete the upgrade to 2019.0 or later to completely remove MCollective from agents.
Test modules before upgrade
To ensure that your modules work with the newest version of PE, update and test them with Puppet Development Kit (PDK) before upgrading.
If you are already using PDK, your modules should pass validation and unit tests with your currently installed version of PDK.
Update PDK with each new release to ensure compatibility with new versions of PE.
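A typical per-module check with PDK looks like this, run from the module's root directory:

```bash
pdk validate    # syntax, style, and metadata checks
pdk test unit   # run the module's unit tests
```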
After you've verified that your modules work with the new PE version, you can continue with your upgrade.
Upgrade a standard installation
To upgrade, run the PE installer on your master, and then upgrade any additional components.
Before you begin, back up your PE installation.
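PE 2019.x includes a `puppet-backup` utility for this. A minimal sketch, assuming a writable backup directory; check `puppet-backup create --help` to confirm the options available in your version:

```bash
# Create a backup of PE configuration and data before upgrading.
puppet-backup create --dir=/var/puppet-backups
```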
Migrate from a split to a standard installation
Split installations, where the master, console, and PuppetDB are installed on separate nodes, are no longer supported as of PE version 2019.2. Before upgrading to 2019.2 or later, migrate from an existing split installation to a standard (formerly called monolithic) installation—with or without compilers—and a standalone PE-PostgreSQL node.
You must be running a version of PE on all infrastructure nodes that includes the `puppet infrastructure run` command. To verify that this command is available on your systems, run `puppet infrastructure run --help`.
The `puppet infrastructure run` command leverages built-in Bolt plans to automate certain management tasks. To use this command, you must be able to connect using SSH from your master to any nodes that the command modifies. You can establish an SSH connection using key forwarding, a local key file, or by specifying keys in `.ssh/config` on your master. For more information, see Bolt OpenSSH configuration options.
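For example, a local key file can be declared for the nodes the migration touches. A hedged sketch with placeholder hostnames:

```bash
# Let the master's root user reach the old console and PuppetDB nodes over SSH.
cat >> /root/.ssh/config <<'EOF'
Host console.example.com puppetdb.example.com
  User root
  IdentityFile /root/.ssh/id_rsa
EOF
```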
Migration involves editing `pe.conf`, unpinning and uninstalling packages from affected infrastructure nodes, and running Puppet multiple times. Treat this process as you would any major migration by thoroughly testing it in an environment that's as similar to your production environment as possible.
Upgrading PostgreSQL
If you use the default PE-PostgreSQL database installed alongside PuppetDB, you don't have to take special steps to upgrade PostgreSQL. However, if you have a standalone PE-PostgreSQL instance, or if you use a PostgreSQL instance not managed by PE, you must take extra steps to upgrade PostgreSQL.
You must upgrade a standalone PE-PostgreSQL instance each time you upgrade PE. To upgrade a standalone PE-PostgreSQL instance, simply run the installer on the PE-PostgreSQL node first, then proceed with upgrading the rest of your infrastructure.
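In practice that ordering looks like this, assuming you've downloaded and unpacked the PE tarball on each node:

```bash
# 1. On the standalone PE-PostgreSQL node, from the unpacked tarball directory:
sudo ./puppet-enterprise-installer
# 2. Then repeat on the master and any remaining infrastructure nodes.
```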
If you use a PostgreSQL instance that isn't managed by PE, you can upgrade it in any of these ways (a sketch of the first method follows this list):
- Back up databases, wipe your old PostgreSQL installation, install the latest version of PostgreSQL, and restore the databases.
- Back up databases, set up a new node with the latest version of PostgreSQL, restore databases to the new node, and reconfigure PE to point to the new `database_host`.
- Run `pg_upgrade` to get from the older PostgreSQL version to the latest version.
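A hedged sketch of the first back-up-and-restore method, using stock PostgreSQL tools (the role and file names are illustrative):

```bash
# On the old instance: dump all databases, roles included.
pg_dumpall -U postgres -f /tmp/pe_databases.sql
# After installing the new PostgreSQL version (or on the new node):
psql -U postgres -f /tmp/pe_databases.sql
```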
Checking for updates
To see the version of PE you're currently using, run `puppet --version` on the command line. Check the PE download site to find information about the latest maintenance release.
PE checks for updates each time the `pe-puppetserver` service restarts. As part of the check, it passes some basic, anonymous information to Puppet servers. You can optionally disable update checking.
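If you prefer to opt out, update checking can be disabled through PE's configuration. A hedged sketch, assuming the `check_for_updates` parameter on the master profile; verify the parameter name in your version's configuration reference:

```bash
# Disable the update check in pe.conf, then apply the change.
cat >> /etc/puppetlabs/enterprise/conf.d/pe.conf <<'EOF'
"puppet_enterprise::profile::master::check_for_updates": false
EOF
puppet infrastructure configure
```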