Upgrading Puppet Enterprise
Sections
- Retired primary server platforms in 2019.8
- PuppetDB migrations in PE 2019.1, 2019.3, and 2019.7
- Java 11 upgrade in PE 2019.3
- Retired split installations in PE 2019.2
- Orchestrator memory use increase in PE 2019.2
- PostgreSQL 11 upgrade in PE 2019.2
- TLSv1 and v1.1 disabled in PE 2019.1
- Certificate architecture and handling in PE 2019.0
- MCollective removal in PE 2019.0
Upgrade your PE installation as new versions become available.
Upgrade paths
These are the valid upgrade paths for PE.
If you're on version... | Upgrade to... | Notes |
---|---|---|
2019.8.z | 2019.8.12 (overlap support) or 2021.7.z (LTS) | Important: 2019.8.z is EOL as of February 2023. Upgrade to 2019.8.12 before upgrading to the new LTS (2021.7.z). For important information about this upgrade, refer to Upgrading Puppet Enterprise in the 2021.7.z documentation. |
2019.y | 2019.8.z | |
2018.1.2 or later (2018.1.3 or later with disaster recovery) | 2019.8.z | You must have version 2018.1.2 or later to complete the prerequisites for upgrading to the latest version. With disaster recovery enabled, you must have version 2018.1.3 to upgrade to the latest version. Alternatively, you can forget and then recreate your replica after upgrading. |
2018.1.0 or 2018.1.1 | 2018.1.z | |
2017.3.z | 2018.1.z | |
2017.2.z | 2018.1.z | |
2017.1.z | 2018.1.z | |
2016.5.z | 2018.1.z | |
2016.4.10 or later | 2018.1.z | |
2016.4.9 or earlier | 2016.4.z, then 2018.1 | To upgrade to 2018.1 from 2015.2.z through 2016.4.9, you must first upgrade to the latest 2016.4.z. |
2016.2.z | 2016.4.z, then 2018.1 | |
2016.1.z | 2016.4.z, then 2018.1 | |
2015.3.z | 2016.4.z, then 2018.1 | |
2015.2.z | 2016.4.z, then 2018.1 | |
3.8.x | 2016.4.z, then 2018.1 | To upgrade from 3.8.x, you must first migrate to the latest 2016.4.z. This upgrade requires a different process than upgrades from other versions. |
Upgrade cautions
These are the major changes to PE since the last long-term support release, 2018.1. Review these recommendations and plan accordingly before upgrading to this version.
Retired primary server platforms in 2019.8
Support for Enterprise Linux 6 and Ubuntu 16.04 as a primary server platform was removed in 2019.8. If your primary server is installed on one of these platforms, you must update the operating system before you can upgrade to this version of PE.
Follow these steps to upgrade from an unsupported primary server platform.
- Configure a new node with a supported primary server platform, for example Enterprise Linux 7 or 8, or Ubuntu 18.04.
- Install your current PE version on the new node.
- Back up your existing installation (example backup and restore commands follow these steps).
- Restore your installation on the new primary server using the backup you created.
- Upgrade to the latest PE version.
- If you have compilers, reprovision them.
- If you have a replica, forget and then reprovision it.
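As a rough sketch of the backup and restore steps above, you can use PE's built-in puppet-backup command. The backup directory is a placeholder, and the archive name varies by version and timestamp; check the command's help output on your PE version before relying on specific flags.
# On the old primary server: create a backup archive (directory is a placeholder)
sudo puppet-backup create --dir=/var/puppet_backups
# Copy the resulting archive to the new primary server, then restore it there
sudo puppet-backup restore /var/puppet_backups/&lt;BACKUP_ARCHIVE&gt;.tgz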
PuppetDB migrations in PE 2019.1, 2019.3, and 2019.7
Deleting PuppetDB reports and truncating the resource events table before you upgrade can reduce migration time and lessen downtime, especially when the upgrade involves a significant database migration.
- On your primary server (and replica), stop the PuppetDB service by running:
  puppet resource service pe-puppetdb ensure=stopped
  Remember: If you have a replica, you must perform these steps on the primary server and the replica at the same time.
- The next step depends on the Puppet Enterprise (PE) version you're upgrading from.
  - If you're upgrading from PE 2019.7 or later, run the packaged delete-reports command on your primary server (and replica):
    /opt/puppetlabs/bin/puppetdb delete-reports
  - If you're upgrading from a PE version earlier than 2019.7:
    - On your PE-PostgreSQL server, create a file named /tmp/delete-reports.sql and set it to be owned by the pe-postgres user (chown pe-postgres:pe-postgres /tmp/delete-reports.sql).
    - Add contents to the .sql file according to your PE version.
      For PE versions 2019.3 through 2019.6, insert the following content:
      BEGIN TRANSACTION;
      ALTER TABLE certnames DROP CONSTRAINT IF EXISTS certnames_reports_id_fkey;
      UPDATE certnames SET latest_report_id = NULL;
      DO $$ DECLARE
        r RECORD;
      BEGIN
        FOR r IN (SELECT tablename FROM pg_tables WHERE tablename LIKE 'resource_events_%') LOOP
          EXECUTE 'DROP TABLE ' || quote_ident(r.tablename);
        END LOOP;
      END $$;
      TRUNCATE TABLE reports CASCADE;
      ALTER TABLE certnames ADD CONSTRAINT certnames_reports_id_fkey FOREIGN KEY (latest_report_id) REFERENCES reports(id) ON DELETE SET NULL;
      COMMIT TRANSACTION;
      For PE versions earlier than 2019.3, insert the following content:
      BEGIN TRANSACTION;
      ALTER TABLE certnames DROP CONSTRAINT IF EXISTS certnames_reports_id_fkey;
      UPDATE certnames SET latest_report_id = NULL;
      TRUNCATE TABLE reports CASCADE;
      ALTER TABLE certnames ADD CONSTRAINT certnames_reports_id_fkey FOREIGN KEY (latest_report_id) REFERENCES reports(id) ON DELETE SET NULL;
      COMMIT TRANSACTION;
    - Run this command on your primary server (and replica):
      su - pe-postgres -s /bin/bash -c "/opt/puppetlabs/server/bin/psql -d pe-puppetdb -f /tmp/delete-reports.sql"
- Wait for the deletion process to finish.
- Restart the PuppetDB service on your primary server (and replica) by running:
  puppet resource service pe-puppetdb ensure=running
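If you want to confirm that the cleanup finished before you upgrade, one option is to check that the reports table is empty. This is a sketch that reuses the pe-postgres user and pe-puppetdb database shown in the steps above:
# Count remaining rows in the reports table; expect 0 after the cleanup
su - pe-postgres -s /bin/bash -c "/opt/puppetlabs/server/bin/psql -d pe-puppetdb -c 'SELECT count(*) FROM reports;'"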
Java 11 upgrade in PE 2019.3
PE 2019.3 includes an upgrade from Java version 8 to version 11. If you've customized PE Java services, or use plug-ins that include Java code, test PE 2019.3 and later thoroughly in a non-production environment before upgrading.
Retired split installations in PE 2019.2
Split installations, where the primary server, console, and PuppetDB are installed on separate nodes, are no longer supported.
Before upgrading to 2019.2 or later, you must migrate from a split to a standard installation. For instructions, see Migrate from a split to a standard installation in the documentation for your current version.
Orchestrator memory use increase in PE 2019.2
Puppet orchestrator uses more memory in version 2019.2 than in previous versions due to the addition of a Java virtual machine (JVM), which enables new features and functionalities such as plans. If your memory use is near capacity when running PE 2019.1 or older versions, allocate additional memory before upgrading to PE 2019.2.
Additionally, take care when writing plans, as they can require more memory than is allocated to the orchestrator. To work around this issue, rewrite the plan or increase the memory allocated to the orchestrator.
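Before upgrading, you can get a rough sense of the current memory headroom on the node that runs the orchestrator. This sketch assumes an EL-family primary server where the orchestrator's JVM settings live in /etc/sysconfig/pe-orchestration-services; adjust the path for your platform (for example, Debian-family systems typically use /etc/default/):
# Check overall memory headroom on the node that runs the orchestrator
free -h
# Inspect the orchestrator's current JVM heap settings (path is platform-dependent)
grep JAVA_ARGS /etc/sysconfig/pe-orchestration-services
If you need more headroom, the orchestrator heap is typically raised through the puppet_enterprise::profile::orchestrator::java_args parameter in the PE Infrastructure node group.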
PostgreSQL 11 upgrade in PE 2019.2
PE 2019.2.0 includes an upgrade from pe-postgresql version 9.6 to version 11. This upgrade involves a datastore migration that requires extra disk space (110% of the current 9.6 datastore) and extra time to upgrade (roughly two to four minutes of additional time per 10 GB of datastore size). The installer issues a warning and cancels the upgrade if there is insufficient space.
To review the size of your PostgreSQL installation, as well as the size and number of available bytes for the partition, run facter -p pe_postgresql_info on the node that runs the pe-postgresql service.
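For example, a quick pre-upgrade check might pair that fact with a look at free space on the partition that holds the datastore. The data directory path below assumes the default PE layout; adjust it if you've relocated the datastore:
# Report PostgreSQL datastore size and partition details
facter -p pe_postgresql_info
# Confirm the partition has at least 110% of the current datastore size free
df -h /opt/puppetlabs/server/data/postgresql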
To speed the migration and optimize queries, clean up the PE-PostgreSQL database prior to upgrade by applying the pe_databases module to nodes running the pe-postgresql service. For best results, apply the module at least one week prior to upgrade to allow the module's maintenance schedule enough time to clean all databases.
After upgrading, you can optionally remove packages and directories associated with older PostgreSQL versions with the command puppet infrastructure run remove_old_postgresql_versions. If applicable, the installer prompts you to complete this cleanup.
TLSv1 and v1.1 disabled in PE 2019.1
TLSv1 and TLSv1.1 are disabled by default in PE 2019.1 and later. Agent platforms that don't support TLSv1.2 out of the box include:
- AIX
- CentOS 5
- RHEL 5
- SLES 11
- Solaris 10, 11
- Windows Server 2008 R2
Certificate architecture and handling in PE 2019.0
- To upgrade to 2019.0 or later and keep your existing CA, upgrade infrastructure nodes and agents as normal. You can continue to use pre-6.x agents with a Puppet 6.x or PE 2019.0 or later primary server as long as you don't regenerate certificates.
- To migrate to 2019.0 or later and keep your existing CA, install the new version and copy /etc/puppetlabs/puppet/ssl from your old primary server (an example command follows this list). You can continue to use pre-6.x agents with a Puppet 6.x or PE 2019.0 or later primary server as long as you don't regenerate certificates.
- To adopt the new CA architecture, upgrade both your primary server and agents, and then regenerate certificates. If you don't upgrade all of your nodes to 6.x, don't regenerate certificates, because pre-6.x agents won't work with the new CA architecture.
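As an illustration of the migration option above, the SSL directory can be copied with any file-transfer tool you trust. This sketch uses rsync over SSH, run from the old primary server, with the new server's hostname as a placeholder:
# Preserve ownership and permissions when copying the CA and certificates
rsync -a /etc/puppetlabs/puppet/ssl/ root@&lt;NEW_PRIMARY_HOSTNAME&gt;:/etc/puppetlabs/puppet/ssl/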
MCollective removal in PE 2019.0
If you're upgrading from a 2018.1 installation with MCollective enabled, you must take additional steps to ensure a successful upgrade. If any nodes are configured with MCollective or ActiveMQ profiles when you attempt to upgrade, the installer halts and prompts you to remove the profiles.
Before upgrading to 2019.x
- Remove MCollective from nodes in your infrastructure:
- In the console, click Node groups, and select the node group PE Infrastructure.
- On the Configuration tab, in the puppet_enterprise class, set the mcollective parameter to absent.
- Click Add parameter and commit the change, then run Puppet on infrastructure nodes.
- Remove any of these deprecated parameters:
- mcollective_middleware_hosts
- mcollective
- mcollective_middleware_port
- mcollective_middleware_user
- mcollective_middleware_password
Tip: If your PuppetDB includes outdated catalogs for nodes that aren't currently being managed, the installer might report that MCollective is active on those nodes. You can deactivate the nodes with puppet node deactivate or use Puppet to update the records.
After upgrading to 2019.x
- Manually remove these node groups:
  - PE MCollective
  - PE ActiveMQ Broker
  - Any custom node group you created for ActiveMQ hubs
- If you customized classification with references to MCollective or ActiveMQ profiles, remove the profiles from your classification. In this version of PE, nodes that include MCollective or ActiveMQ profiles trigger a warning during agent runs. Future versions of PE that remove the profiles completely can trigger failures in catalog compilation if you leave the profiles in place.
Test modules before upgrade
To ensure that your modules work with the newest version of PE, update and test them with Puppet Development Kit (PDK) before upgrading.
If you are already using PDK, your modules should pass validation and unit tests with your currently installed version of PDK.
Update PDK with each new release to ensure compatibility with new versions of PE.
After you've verified that your modules work with the new PE version, you can continue with your upgrade.
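For instance, a typical pre-upgrade check with PDK looks like the following; the module path is a placeholder, and the exact checks run depend on your PDK version:
# From the root of each module, run static validation and unit tests
cd /path/to/your/module   # placeholder path
pdk validate
pdk test unit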
Upgrade PE
Upgrade PE infrastructure components to get the latest features and fixes. Follow the upgrade instructions for your installation type to ensure you upgrade components in the correct order. Coordinate upgrades to ensure all infrastructure nodes are upgraded in a timely manner, because agent runs and replication fail on infrastructure nodes running a different agent version than the primary server.
Review the upgrade cautions for major changes to architecture and infrastructure components which might affect your upgrade.
Configure non-production environment for infrastructure nodes
If your infrastructure nodes are in an environment other than production, you must manually configure PE to use your chosen environment before you upgrade.
Upgrade a standard installation
To upgrade a standard installation, run the PE installer on your primary server, and then upgrade any additional components.
Back up your PE installation.
If you're upgrading a replica, ensure you have a valid admin RBAC token. If you're upgrading from 2018.1, the RBAC token must be generated by a user with Job orchestrator and Node group view permissions.
Remove from the console (in the PE Master node group), Hiera, or pe.conf any agent_version parameters that you've set in pe_repo classes that match your infrastructure nodes. Doing so ensures that the upgrade isn't blocked by attempting to download a non-default agent version for your infrastructure OS and architecture.
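As a sketch of the preparation and kick-off steps above, you might run something like the following on the primary server; the tarball directory name is a placeholder and the token lifetime is only an example:
# Generate an admin RBAC token before upgrading a replica
puppet access login --lifetime 4h
# From the unpacked PE installer tarball, run the upgrade
cd puppet-enterprise-&lt;VERSION&gt;-&lt;PLATFORM&gt;   # placeholder directory
sudo ./puppet-enterprise-installer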
Upgrade a large installation
To upgrade a large installation, run the PE installer on your primary server, and then upgrade compilers and any additional components.
Back up your PE installation.
Ensure you have a valid admin RBAC token in order to upgrade compilers or a replica. If you're upgrading from 2018.1, the RBAC token must be generated by a user with Job orchestrator and Node group view permissions.
Remove from the console (in the PE Master node group), Hiera, or pe.conf any agent_version parameters that you've set in pe_repo classes that match your infrastructure nodes. Doing so ensures that the upgrade isn't blocked by attempting to download a non-default agent version for your infrastructure OS and architecture.
Optionally convert legacy compilers to the new style compiler running the PuppetDB service.
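For the compiler-related steps, a rough sketch follows; hostnames are placeholders, both commands require the admin RBAC token mentioned above, and exact syntax can vary by PE version, so check the command help on your installation:
# Upgrade a compiler after the primary server has been upgraded
puppet infrastructure upgrade compiler &lt;COMPILER_FQDN&gt;
# Optionally convert a legacy compiler to the new style that runs PuppetDB
puppet infrastructure run convert_legacy_compiler compiler=&lt;COMPILER_FQDN&gt;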
Upgrade an extra-large installation
For help upgrading an extra-large installation, reach out to your technical account manager.
Upgrade a standalone PE-PostgreSQL installation
To upgrade a large installation with standalone PE-PostgreSQL, run the PE installer first on your PE-PostgreSQL node, then on your primary server, and then upgrade any additional components.
Back up your PE installation.
Ensure you have a valid admin RBAC token in order to upgrade compilers. If you're upgrading from 2018.1, the RBAC token must be generated by a user with Job orchestrator and Node group view permissions.
Remove from the console (in the PE Master node group), Hiera, or pe.conf any agent_version parameters that you've set in pe_repo classes that match your infrastructure nodes. Doing so ensures that the upgrade isn't blocked by attempting to download a non-default agent version for your infrastructure OS and architecture.
Optionally convert legacy compilers to the new style compiler running the PuppetDB service.
Upgrade an unmanaged PostgreSQL installation
To upgrade a PE installation that relies on an
unmanaged PostgreSQL database, you must first upgrade PostgreSQL to version 11, if necessary. Then, prepare your
Puppet Enterprise (PE) databases and pe.conf
file, and finally upgrade PE. Upgrade
steps vary slightly depending on whether you use password or SSL authentication.
Back up your PE installation.
Ensure you have a valid admin RBAC token in order to upgrade compilers.
Remove from the console (in the PE Master node group), Hiera, or pe.conf any agent_version parameters that you've set in pe_repo classes that match your infrastructure nodes. Doing so ensures that the upgrade isn't blocked by attempting to download a non-default agent version for your infrastructure OS and architecture.
These steps assume default values, such as the postgres user. Change these to match your installation as needed.
Optionally convert legacy compilers to the new style compiler running the PuppetDB service.
Migrate PE
As an alternative to upgrading, you can migrate your PE installation. Migrating results in little or no downtime, but it requires additional system resources because you must configure a new primary server.
Migrate a standard installation
Migrate a standard installation by standing up a new primary server, restoring it with your existing installation, upgrading it, and then pointing agents at the new primary.
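A minimal sketch of the final step, pointing an agent at the new primary server; the hostname is a placeholder, and you run this on each agent node:
# Update the agent's server setting, then trigger a run against the new primary
puppet config set server &lt;NEW_PRIMARY_FQDN&gt; --section main
puppet agent -t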
Review the upgrade cautions for major changes to architecture and infrastructure components which might affect your upgrade.