PE known issues

These are the known issues in PE 2023.0.

Installation and upgrade known issues

These are the known issues for installation and upgrade in this release.

Converting legacy compilers fails with an external certificate authority

If you use an external certificate authority (CA), the puppet infrastructure run convert_legacy_compiler command fails with an error during the certificate-signing step.
Agent_cert_regen: ERROR: Failed to regenerate agent certificate on node <compiler-node.domain.com>
Agent_cert_regen: bolt/run-failure:Plan aborted: run_task 'enterprise_tasks::sign' failed on 1 target
Agent_cert_regen: puppetlabs.sign/sign-cert-failed Could not sign request for host with certname <compiler-node.domain.com> using caserver <master-host.domain.com>
To work around this issue when it appears:
  1. Log on to the CA server and manually sign certificates for the compiler.
  2. On the compiler, run Puppet: puppet agent -t
  3. Unpin the compiler from the PE Master group, either from the console or from the CLI using the command: /opt/puppetlabs/bin/puppet resource pe_node_group "PE Master" unpinned="<COMPILER_FQDN>"
  4. On your primary server, in the pe.conf file, remove the entry puppet_enterprise::profile::database::private_temp_puppetdb_host
  5. If you have an external PE-PostgreSQL node, run Puppet on that node: puppet agent -t
  6. Run Puppet on your primary server: puppet agent -t
  7. Run Puppet on all compilers: puppet agent -t

Converted compilers can slow PuppetDB in multi-region installations

In configurations with high-latency connections between your primary server and compilers – for example, in multi-region installations – converted compilers running the PuppetDB service might experience significant slowdowns. If your primary server and compilers are distributed across multiple data centers connected by high-latency links or congested network segments, contact Support for guidance before converting legacy compilers.

Disaster recovery known issues

These are the known issues for disaster recovery in this release.

Certificates and keys cannot be backed up or restored by specifying the certs scope

In Puppet Enterprise (PE) 2023.0 and 2021.0 - 2021.7.2, if you run the puppet-backup create command and specify a scope of certs, the command fails to back up the certificate authority (CA) root key and certificates. In addition, if you run a full backup without specifying the scope, and then run the puppet-backup restore command with a scope of certs, the restore operation fails.

This issue occurs because the default directory (cadir) was updated starting with Puppet 7, but the update was not immediately implemented in the puppet-backup create and puppet-backup restore commands.

As a workaround, run the backup and restore commands without specifying a scope of certs. For more information about the directory change, see New CA directory location and the documentation for the cadir setting.

FIPS known issues

These are the known issues with FIPS-enabled PE in this release.

FIPS-enabled PE 2023.0 can't use the default system cert store

FIPS-compliant builds running PE 2023.0 can't use the default system cert store, which is used automatically with some reporting services. This setting is configured by the report_include_system_store Puppet parameter that ships with PE.

Removing the puppet-cacerts file (located at /opt/puppetlabs/puppet/ssl/puppet-cacerts) can allow a report processor that eagerly loads the system store to continue with a warning that the file is missing.

If HTTP clients require external certs, we recommend using a custom cert store containing only the necessary certs. You can create this cert store by concatenating existing pem files and configuring the ssl_trust_store Puppet parameter to point to the new cert store.
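As a sketch of the concatenation step, the following builds a combined trust store from individual PEM files (equivalent to `cat a.pem b.pem > custom-cacerts.pem`); the file names and certificate contents are illustrative placeholders, and the resulting file is what you would point the ssl_trust_store parameter at:

```python
# Sketch: assemble a custom trust store from individual PEM files.
# File names and contents are illustrative placeholders, not real certs.
from pathlib import Path

pem_files = ["corp-root-ca.pem", "corp-intermediate.pem"]

# Create stand-in certificate files so the example is self-contained:
for name, body in zip(pem_files, ["AAAA", "BBBB"]):
    Path(name).write_text(
        f"-----BEGIN CERTIFICATE-----\n{body}\n-----END CERTIFICATE-----\n"
    )

# Concatenate the PEM files into one store:
store = "".join(Path(name).read_text() for name in pem_files)
Path("custom-cacerts.pem").write_text(store)
print(store.count("BEGIN CERTIFICATE"))  # 2
```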

Puppet Server FIPS installations don’t support Ruby’s OpenSSL module

FIPS-enabled PE installations don't support extensions or modules that use the standard Ruby OpenSSL library, such as hiera-eyaml. As a workaround, you can use a non-FIPS-enabled primary server with FIPS-enabled agents, which limits the issue to situations where only the primary server uses the Ruby library. This limitation does not apply to versions 1.1.0 and later of the splunk_hec module, which supports FIPS-enabled servers. The FIPS Mode section of the module's Forge page explains the limitations of running this module in a FIPS environment.

Configuration and maintenance known issues

These are the known issues for configuration and maintenance in this release.

Task jobs that are scheduled without explicitly defined timeouts fail to run

In Puppet Enterprise (PE) 2023.0, any scheduled task that was not created with a timeout option fails to start at the scheduled time. This issue affects task jobs that were scheduled before an upgrade and new task jobs that were not created with an explicit timeout.

Determining whether scheduled jobs are affected

Scheduled jobs of type environment_task where timeout is null are affected by this issue. You can query for all scheduled task jobs and check for null timeout values with a query similar to the following example.

Example query:
curl -k -X GET -H "X-Authentication: $(puppet access show)" \
"https://$(hostname -f):8143/orchestrator/v1/scheduled_jobs/environment_jobs?type=task"
Example output:
{
  "items" : [ {
    "description" : "",
    "schedule" : {
      "start_time" : "2023-03-10T00:00:00.000Z",
      "interval" : null
    },
    "next_run" : {
      "time" : "2023-03-10T00:00:00.000Z"
    },
    "name" : "5",
    "type" : "environment_task",
    "last_run" : null,
    "id" : "https://slow-labyrinth.delivery.puppetlabs.net:8143/orchestrator/v1/scheduled_jobs/environment_jobs/5",
    "environment" : "production",
    "input" : {
      "name" : "package",
      "noop" : false,
      "scope" : {
        "query" : "inventory[certname] { facts.aio_agent_version ~ \"\\\\d+\" }"
      },
      "timeout" : null,
      "transport" : "pxp",
      "parameters" : {
        "name" : "nginx",
        "action" : "status"
      },
      "concurrency" : null,
      "sensitive_parameters" : [ ]
    },
    "owner" : {
      "email" : "",
      "user-agent" : "Apache-HttpAsyncClient/4.1.5 (Java/17.0.5-internal)",
      "is_revoked" : false,
      "last_login" : "2023-02-24T19:27:50.566Z",
      "is_remote" : false,
      "login" : "admin",
      "is_superuser" : true,
      "id" : "42bf351c-f9ec-40af-84ad-e976fec7f4bd",
      "role_ids" : [ 1 ],
      "display_name" : "Administrator",
      "is_group" : false,
      "ip-address" : "10.16.132.58, 127.0.0.1, 10.16.150.60"
    },
    "userdata" : { }
  } ],
  "pagination" : {
    "offset" : 0,
    "total" : 1
  }
}
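A response like the one above can be checked for affected jobs with a short script. This sketch filters each environment_task item for a null input.timeout; the response body here is trimmed to the relevant fields, and the host name in the job id is a placeholder:

```python
# Sketch: find scheduled environment_task jobs with no explicit timeout.
# `response` is a trimmed-down stand-in for the orchestrator output above.
import json

response = json.loads("""
{
  "items": [
    {
      "name": "5",
      "type": "environment_task",
      "id": "https://primary.example.com:8143/orchestrator/v1/scheduled_jobs/environment_jobs/5",
      "input": { "name": "package", "timeout": null }
    }
  ]
}
""")

# A job is affected if it is an environment_task and input.timeout is null:
affected = [
    job["id"]
    for job in response["items"]
    if job["type"] == "environment_task" and job["input"]["timeout"] is None
]
for job_id in affected:
    print(job_id)
```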

Workaround

For jobs of type environment_task that have a null value for the input.timeout option, you can implement the following workaround: in the PE console, view the scheduled tasks list, then delete and replace the affected jobs.

Specifying a timeout

The default timeout in PE 2023.0 is 2400 seconds (40 minutes). If you know how long the task takes to run on a particular node, you can specify a timeout that is appropriate for that task (typically, the average node execution time plus 10 to 20 percent).
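For instance, sizing a timeout from an observed average run time works out like this (the 1800-second average is illustrative):

```python
# Sketch: derive a task timeout from the average node execution time,
# adding 10-20% headroom. The 1800-second average is illustrative.
avg_runtime = 1800   # seconds a node typically takes to run the task
headroom = 0.20      # 10-20% margin; 20% used here
timeout = int(avg_runtime * (1 + headroom))
print(timeout)  # 2160
```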

Related issue: When a scheduled task is edited in the PE console, concurrency and timeout options are dropped

A related issue occurs when a scheduled task is edited in the PE console. Because the console cannot edit these options, a task edited there loses its originally specified concurrency and timeout values. Delete and replace these tasks in the PE console, as described in the workaround above.

puppet infrastructure tune fails with multi-environment environmentpath

The puppet infrastructure tune command fails if environmentpath (in your puppet.conf file) is set to multiple environments. To avoid the failure, comment out this setting before running this command. For details about the environmentpath setting, refer to environmentpath in the open source Puppet documentation.
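For example, a multi-environment environmentpath in puppet.conf can be temporarily disabled like this (the section and paths are illustrative; restore the setting after the command finishes):

```ini
[main]
# Commented out while running `puppet infrastructure tune`;
# restore this line afterward. Paths are illustrative.
# environmentpath = /etc/puppetlabs/code/environments:/opt/puppet/envs
```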

Restarting or running Puppet on infrastructure nodes can trigger an illegal reflective access operation warning

When restarting PE services or performing agent runs on infrastructure nodes, you might see this warning in the command-line output or logs: Illegal reflective access operation ... All illegal access operations will be denied in a future release

These warnings are internal to PE service components and have no impact on their functionality. You can safely disregard them.

Orchestration services known issues

There are no known issues for the orchestration services in this release.

Console and console services known issues

These are the known issues for the console and console services in this release.

For remote users, access rights cannot be revoked or reinstated from the console

In the Puppet Enterprise (PE) console, you cannot revoke or reinstate the access rights of remote users. As a workaround, you can use the role-based access control (RBAC) application programming interface (API) to manage access.

To revoke a user’s access to PE, form a request to the following endpoint:
POST /command/users/revoke
For instructions, see POST /command/users/revoke.
To reinstate a user’s access to PE, form a request to the following endpoint:
POST /command/users/reinstate
For instructions, see POST /command/users/reinstate.
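As a sketch, the following builds (but does not send) a revoke request. The endpoint path comes from the documentation above; the host name, the 4433 port (the standard RBAC service port), and the request body shape (a JSON list of user UUIDs) are assumptions to verify against the RBAC API reference:

```python
# Sketch: build, without sending, a revoke request against the RBAC v1 API.
# Host, port, token, and body shape are assumptions; check the RBAC API docs.
import json
import urllib.request

host = "pe-console.example.com"                      # hypothetical console host
token = "<TOKEN>"                                    # e.g. `puppet access show`
user_ids = ["42bf351c-f9ec-40af-84ad-e976fec7f4bd"]  # target user UUIDs

request = urllib.request.Request(
    url=f"https://{host}:4433/rbac-api/v1/command/users/revoke",
    data=json.dumps(user_ids).encode("utf-8"),
    headers={"X-Authentication": token, "Content-Type": "application/json"},
    method="POST",
)
print(request.get_method(), request.full_url)
```

Substituting reinstate for revoke in the URL gives the corresponding reinstate request.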

Patching known issues

These are the known issues for patching in this release.

Patching fails with excluded YUM packages

In the patching task or plan, using yum_params to pass the --exclude flag can cause the task or plan to fail if the only packages requiring updates are excluded. As a workaround, use the versionlock command (which requires installing the yum-plugin-versionlock package) to lock the packages you want to exclude at their current version. Alternatively, you can pin a package to a particular version by declaring that version in a package resource in a manifest applied to the nodes being patched.
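For the manifest-based alternative, a package resource that pins a version might look like this (the package name and version are placeholders):

```puppet
# Illustrative manifest fragment: pin a package at a specific version on the
# nodes being patched, so updates to it are not attempted.
package { 'nginx':
  ensure => '1.20.1-1.el8',
}
```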

Code management known issues

These are the known issues for Code Manager, r10k, and file sync in this release.

Changing a file type in a control repo produces a checkout conflict error

Changing a file type in a control repository – for example, deleting a file and replacing it with a directory of the same name – generates the error JGitInternalException: Checkout conflict with files accompanied by a stack trace in the Puppet Server log. As a workaround, deploy the control repo with the original file deleted, and then deploy again with the replacement file or directory.

Code Manager and r10k do not identify the default branch for module repositories

When you use Code Manager or r10k to deploy modules from a Git source, the default branch of the source repository is always assumed to be main. If the module repository uses a default branch that is not main, an error occurs. To work around this issue, specify the default branch with the ref: key in your Puppetfile.
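For example, a Puppetfile entry that declares the branch explicitly might look like this (the module name and repository URL are placeholders):

```ruby
# Illustrative Puppetfile entry: the ref: key tells Code Manager or r10k
# which branch to deploy when the repository's default branch is not main.
mod 'site_data',
  git: 'https://git.example.com/puppet/site_data.git',
  ref: 'master'
```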