Continuous Delivery for PE known issues

These are the known issues for the Continuous Delivery for PE 3.x release series.

Group permissions persist in the web UI until page refresh

If you edit the permissions for a user group and then move on to editing the permissions for a second user group, the permissions selected for the first user group are still shown in the web UI. To work around this issue, refresh the page.

Webhooks do not fire when custom Docker image names are included in a job

In Continuous Delivery for PE versions 3.1.x and 3.2.x, if a custom Docker image in the format <IMAGE>:<VERSION> is included in a job, webhooks for that job fail to fire. To work around this issue, you must include the repository name in the Docker image name. For example, instead of writing puppet-dev-tools:latest, you must write puppet/puppet-dev-tools:latest.

Rerun job control is unresponsive after two hours for Bitbucket Cloud users

This known issue applies only to Bitbucket Cloud users. If a pipeline run for a control repo or module completed more than two hours ago, clicking the Rerun Job button results in an Authentication failed error.

Purging unmanaged firewall rules with the puppetlabs-firewall module deletes required firewall settings

If your Continuous Delivery for PE node uses the puppetlabs-firewall module to manage its firewall settings, and a resources { 'firewall': purge => true } declaration applies to the node (set on the node itself or at a higher level), Puppet removes the unmanaged Docker firewall rules that Continuous Delivery for PE requires to run successfully. To work around this issue, disable unmanaged firewall rule purging for your Continuous Delivery for PE node by changing the declaration to resources { 'firewall': purge => false }.
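For example, if purging is enabled at a higher level, you might override it for the Continuous Delivery for PE node alone. This is a minimal sketch; the node name is a placeholder for your own node:

```puppet
# Scope purge => false to the Continuous Delivery for PE node only,
# so unmanaged Docker firewall rules on this node are preserved
# while purging remains enabled elsewhere.
node 'cd4pe.example.com' {
  resources { 'firewall':
    purge => false,
  }
}
```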

Deployments for module regex branches are not supported when managing pipelines as code in versions 3.0 and 3.1

In Continuous Delivery for PE versions prior to 3.2.0, deployments using the feature branch deployment policy cannot be included in a module regex branch pipeline that is managed with a .cd4pe.yaml file. To work around this issue, upgrade to version 3.2.0 or newer, or click Manage pipelines and select Manage in the web UI, then delete and recreate all deployments in the pipeline.

Module impact analysis tasks cannot be added to a pipeline after upgrading to version 3.0.0

If you added the credentials for the PE instance associated with a module pipeline's deployment tasks to Continuous Delivery for PE before you upgraded to version 3.0.0, you are unable to add impact analysis tasks to the pipeline. To work around this issue, delete and re-add the PE instance's credentials, giving the PE instance the same friendly name it had previously.

Modules page does not display latest deployment summaries

On the Modules page, information about the most recent deployment is not shown.

Custom deployment policies aren't initially shown for new control repos

When your first action in a newly created control repo is to add a deployment to a pipeline, any custom deployment policies stored in the control repo aren't shown as deployment policy options. To work around this issue, click Built-in deployment policies, then Custom deployment policies to refresh the list of available policies.

Automatic PE integration fails if the value of puppetdb_port is set as a string

If the puppetdb_port parameter's value in the puppet_enterprise class in the PE Infrastructure node group is set as a string, automatic integration of PE fails with an Automatic configuration failed error. To work around this issue, use the PE console to change the puppetdb_port parameter's value to an integer.
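For illustration, using 8081 (the default PuppetDB port), the difference in the parameter value entered in the console looks like this:

```puppet
# Fails automatic integration: the value is a string.
puppetdb_port => "8081"

# Succeeds: the value is an integer.
puppetdb_port => 8081
```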

Regex branch module deployments fail if the :control_branch pattern is used for multiple modules

Deploying a module from a regex branch pipeline fails if more than one module in your Puppetfile uses the :branch => :control_branch pattern. To work around this issue, make sure that the default_branch parameter is set in the Puppetfile for every Git-sourced module that uses the :branch => :control_branch pattern.
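The workaround above can be sketched in a Puppetfile as follows. The module names, Git URLs, and branch name are placeholders; the key point is that every Git-sourced module tracking :control_branch also sets :default_branch:

```ruby
# Each module that uses :branch => :control_branch also declares
# :default_branch as a fallback, which allows regex branch module
# deployments to resolve a branch for every module.
mod 'site_profiles',
  :git            => 'https://git.example.com/puppet/site_profiles.git',
  :branch         => :control_branch,
  :default_branch => 'main'

mod 'site_roles',
  :git            => 'https://git.example.com/puppet/site_roles.git',
  :branch         => :control_branch,
  :default_branch => 'main'
```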

Docker configuration changes to jobs are not immediately available

When you update the Docker configuration for a job, several minutes elapse before your changes take effect. To work around this issue, wait at least five minutes after making a Docker configuration change before attempting to run the job.

Users removed from all workspaces cannot add new workspaces

If you delete or are removed from all workspaces of which you are a member, you are directed to the Add New Workspace screen. If you log out or navigate away from this screen without creating a new workspace, you are unable to access any workspaces or get back to the Add New Workspace screen until invited to an existing workspace by another user. To work around this issue, create a new workspace when prompted, or request an invitation to an existing workspace.
