Structuring a .cd4pe.yaml file

When managing your pipelines with code, the pipeline definitions are expressed in a structured format and stored in a file named .cd4pe.yaml, which is kept in your control repo or module repo.

Your .cd4pe.yaml file must use this structure:
  • spec_version - The version of the pipelines-as-code file specification you've used to write this YAML file. Currently, only one specification version, v1, is available.
  • config - Pipeline configuration settings for the control repo or module repo as a whole.
  • pipelines - The names of the pipelines you're creating.
    • triggers - What source control activities the pipeline will listen for.
    • stages - The parts of the pipeline, and how the pipeline run will advance (or pause and wait for instructions) based on the results of each part.
      • steps - All the details about the exact work the pipeline will perform.
Here's an example of a complete .cd4pe.yaml file for a control repo. This file describes pipelines for two branches (a main branch and a regex branch).
spec_version: v1
config:
  enable_pull_requests_from_forks: false
  deployment_policy_branch: production
  enable_pe_plans: true
pipelines:
  /feature_.*/:
    triggers:
      - pull_request
      - commit
    stages:
      - name: "Lint/Parser validation"
        auto_promote: all_succeeded
        steps:
          - type: job
            name: control-repo-puppetfile-syntax-validate
          - type: job
            name: control-repo-template-syntax-validate
      - name: "Deploy feature environment"
        steps:
          - type: deployment
            pe_server: cdpe-delivery
            policy: feature_branch
  main:
    triggers:
      - pull_request
      - commit
    stages:
      - name: "Lint/Parser validation"
        auto_promote: all_succeeded
        steps:
          - type: job
            name: control-repo-puppetfile-syntax-validate
          - type: job
            name: control-repo-template-syntax-validate
          - type: job
            name: control-repo-hiera-syntax-validate
          - type: job
            name: control-repo-manifest-validate
      - name: "Impact Analysis"
        auto_promote: any_succeeded
        steps:
          - type: impact_analysis
            concurrent_compilations: 10
            deployments:
              - "Direct merge to Production"
              - "my custom deploy 1"
          - type: pull_request_gate
      - name: "Deploy to lower UAT"
        auto_promote: false
        steps:
          - type: deployment
            policy: direct_merge
            name: "Direct merge to Production"
            target:
              type: node_group
              node_group_id: fcda068f-e499-44ef-81f2-255fc487b2e2
            pe_server: cdpe-delivery
            parameters:
              stop_if_nodes_fail: 10
              noop: false
              timeout: 60
          - type: deployment
            name: "my custom deploy 1"
            policy: 
              source: control-repo
              name: deployments::canary
            target:
              type: node_group
              node_group_id: fcda068f-e499-44ef-81f2-255fc487b2e2
            pe_server: cdpe-delivery
            parameters:
              noop: true
              max_node_failure: 50
              canary_size: 4

Using a .cd4pe.yaml file with a module

A .cd4pe.yaml file for a module looks almost identical to one for a control repo, with a few important exceptions:
  1. If the module pipeline includes an impact analysis step, you must include a pointer to the control repo used in the deployment associated with the impact analysis task.
  2. If the .cd4pe.yaml file includes a regex branch pipeline that uses the feature branch deployment policy, your deployment step must include the name of the control repo associated with the module, and the control repo branch that Continuous Delivery for PE will use as the base for the feature branches it creates.
Here's an example of a complete .cd4pe.yaml file for a module. The control_repo pointer for the impact analysis task is included in the target: key section at the bottom of the staging pipeline.
spec_version: v1
config:
  enable_pull_requests_from_forks: false
pipelines:
  staging:
    triggers:
      - commit
    stages:
    - name: "Impact analysis"
      steps:
      - type: impact_analysis
        deployments:
        - "Deployment to staging-rustler-1"
        concurrent_compilations: 10
        all_deployments: false
      auto_promote: false
    - name: "Deployment to staging"
      steps:
      - type: deployment
        name: "Deployment to staging-rustler-1"
        policy:
          name: eventual_consistency
        timeout: 3600000
        concurrent_compilations: 0
        all_deployments: false
        pe_server: pe-github
        target:
          type: node_group
          node_group_id: 250fd263-8456-4114-b559-9c6d9fa27748
          control_repo: cdpe-code-rustler-app-2020
Here's a module's regex branch pipeline entry using the feature branch deployment policy. The control_repo and base_feature_branch parameters at the bottom of the type: deployment step are required for this special type of pipeline. These parameters specify which control repo is associated with this module, and which branch of that control repo should be used to create feature branches when this pipeline is triggered.
pipelines:
 /feature_.*/:
    triggers:
    - commit
    stages:
    - name: "Validation stage"
      steps:
      - type: job
        name: "module-pdk-validate"
      auto_promote: any_completed
    - name: "Feature branch deployment stage"
      steps:
      - type: deployment
        name: "Deployment to hot-dog-prod-2"
        policy:
          name: "cd4pe_deployments::feature_branch"
        all_deployments: false
        pe_server: "hot-dog-prod-2"
        target:
          type: node_group
        control_repo: "cc-hot-dog-app"
        base_feature_branch: "production"
      auto_promote: false

Validating your .cd4pe.yaml file

Every time you commit a change to your .cd4pe.yaml file in your control repo or module repo, Continuous Delivery for PE uses the YAML code to render the pipelines' definitions in the web UI. To ensure that you commit only well-formed YAML code, use the validation tool in the Continuous Delivery for PE web UI before committing.

  1. In the Continuous Delivery for PE web UI, navigate to the control repo or module whose pipelines you're managing with code.
  2. At the top of the Pipelines section, click Manage pipelines.
  3. Select Manage as code. Continuous Delivery for PE displays your current pipelines in YAML format.
  4. Update the code in the window with the changes you wish to make. Alternatively, you can delete the contents of the window and paste in code you've written elsewhere.
  5. To check the syntax of your code, click Validate.
    • If your syntax is invalid, an error message about the location of the issue appears.
    • If your syntax is valid, Copy to clipboard is activated.
  6. Once your changes are complete and successfully validated, copy them into the .cd4pe.yaml file in your source control or module and commit your change.

.cd4pe.yaml file syntax

Your .cd4pe.yaml file must be formatted according to the syntax in this section.

Spec version

Every .cd4pe.yaml file begins with a spec_version declaration expressed as v<VERSION NUMBER>. This sets the version of the Continuous Delivery for PE Pipelines as Code specification used when writing the file.

Presently, there is only one version of the Continuous Delivery for PE Pipelines as Code specification, so your file must begin as follows:
---
spec_version: v1

Config

The config section of your .cd4pe.yaml file provides global pipeline configuration settings for all the pipelines for the control repo or module where the YAML file is housed.

  • enable_pull_requests_from_forks - States whether pull requests from forks of the control repo or module can trigger its pipelines. Accepts a Boolean. Default: false.
  • deployment_policy_branch - Optional. The name of the branch where custom deployment policies are kept. If custom deployment policies are found on the specified branch, they can be used when building pipelines. Accepts a string. No default.
  • enable_pe_plans - Optional. States whether to allow the inclusion of Bolt tasks in custom deployment policies used in pipelines for this control repo or module. Accepts a Boolean. Default: true.
Here's a sample config section for a control repo that allows pull requests from forks and serves custom deployment policies from the production branch:
config:
  enable_pull_requests_from_forks: true
  deployment_policy_branch: production
  enable_pe_plans: true

Pipelines

The pipelines section names each pipeline you're creating for a control repo or module. Its value is a set of key/value pairs where each key corresponds to a pipeline. Each key must be either the name of a branch or a regular expression (regex).
Note: Only one regular expression entry is allowed per control repo or module.
To create two pipelines (one for a regex branch and one for the main branch), format the pipelines section as follows:
pipelines:
  /feature_.*/:
    triggers:
      - pull_request
      - commit
    stages:
      - name: "Lint/Parser validation"
        auto_promote: all_succeeded
        steps:
          - type: job
            name: control-repo-puppetfile-syntax-validate

  main:
    triggers:
      - pull_request
      - commit
    stages:
      - name: "Deploy to production"
        auto_promote: all_succeeded
        steps:
          - type: deployment
            policy: direct_merge
            name: "Direct merge to Production"
            target:
              type: node_group
              node_group_id: fcda068f-e499-44ef-81f2-255fc487b2e2
            pe_server: cdpe-delivery
            parameters:
              stop_if_nodes_fail: 10
              noop: false

Triggers

The triggers section must be set for each pipeline you create.

This section specifies the events in your source control system that cause the pipeline to start a run.

The only allowed values are pull_request and commit. You can set either or both of these values. If no value is set, the pipeline will not run except when triggered manually.
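For example, to start the pipeline on both pull requests and commits, declare both triggers:
triggers:
  - pull_request
  - commit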

Stages

At least one stages section must be set for each pipeline you create.

The stages section accepts an array where each item is a stage in the pipeline. Each stage has a name, instructions about when and whether to auto-promote to the next stage in the pipeline, and a list of steps to be executed in parallel.

  • name - The name you're giving the pipeline stage. Accepts a string in quotes. Default: Stage <STAGE NUMBER>.
  • auto_promote - Instructions on whether or not to automatically go on to the next pipeline stage based on the conditions specified. Default: all_succeeded. Accepts one of the following:
    • false - Only manual promotions allowed.
    • any_succeeded - Auto-promote if any step succeeds.
    • all_succeeded - Auto-promote only if all steps succeed.
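For example, this stage runs two validation jobs in parallel and promotes automatically only if both succeed (the job names are illustrative, taken from the control repo example earlier on this page):
stages:
  - name: "Lint/Parser validation"
    auto_promote: all_succeeded
    steps:
      - type: job
        name: control-repo-puppetfile-syntax-validate
      - type: job
        name: control-repo-template-syntax-validate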

Steps

A steps section containing at least one step must be set for each stage you create.

A step defines a particular job to be performed in a pipeline stage. All steps are run concurrently, so the order in which you list steps in a stage doesn't matter.

Every step is defined using a hash of key/value pairs. The type key is always required, and the other keys are dependent on the type of step selected. The type key accepts the following values:
  • job - A named test for Puppet code. The jobs available to your workspace are found by clicking Jobs in the navigation bar in the web UI.
  • impact_analysis - A report assessing the code change in the pipeline for potential impact to nodes and configurations in deployments defined later in the pipeline.
  • pull_request_gate - A conditional checkpoint that ends the pipeline execution at the pull request gate if the pipeline was triggered by a pull request. If the pipeline was triggered by a commit, execution continues beyond the pull request gate.
  • deployment - A Puppet code deployment.

Keys for type: job

  • name - The name of the job as listed on the Jobs page for the workspace. Required.
To add a job called control-repo-puppetfile-syntax-validate to a pipeline, format the type: job entry as follows:
steps:
  - type: job
    name: control-repo-puppetfile-syntax-validate

Keys for type: impact_analysis

Note: If you include an impact analysis step in a .cd4pe.yaml file for a module, see the instructions in the Keys for type: deployment section about setting control_repo on the deployment associated with the impact analysis task.
  • concurrent_compilations - How many catalog compilations to perform at the same time. Optional. Default: 10.
    Tip: If your compilers are hitting capacity when performing an impact analysis, lower this number. Lowering this number increases the impact analysis run time.
  • deployments - A list of the deployments to assess for potential impact. Accepts an array of strings where each item is the name of a deployment in your pipeline definition. You must set either deployments or all_deployments.
  • all_deployments - Whether to assess the impact to all deployments in the pipeline. Accepts a Boolean. Default: false. You must set either deployments or all_deployments.
  • puppetdb_connection_timeout_sec - The timeout period (in seconds) for requests to PuppetDB during an impact analysis task. Optional. Default: 120.
To add an impact analysis task that runs 10 compilations at a time on two specific deployments, plus a second impact analysis task that runs on all deployments in your pipeline and times out PuppetDB requests after three minutes, format the type: impact_analysis entries as follows:
steps:
  - type: impact_analysis
    concurrent_compilations: 10
    deployments:
      - "Direct merge to Production"
      - "My PCI deployment"
  - type: impact_analysis
    all_deployments: true
    puppetdb_connection_timeout_sec: 180

Keys for type: pull_request_gate

The pull_request_gate step type does not take any additional keys.

To add a pull request gate to your pipeline, format the type: pull_request_gate entry as follows:
steps:
  - type: pull_request_gate

Keys for type: deployment

  • policy - The deployment policy to be used. For built-in deployment policies, accepts the policy name as a string. For custom deployment policies, accepts key/value pairs defining the policy to be used. Required.
  • name - The name you're giving the deployment. Accepts a string in quotes. Required.
  • target - The infrastructure (node group) that the deployment will target. Accepts key/value pairs and requires a type: node_group key. If used for a module pipeline that contains an impact analysis task, also requires a control_repo key. Required, except for regex branches; the target key is not used for deployments from regex branches.
    Note: See Formatting the target key below.
  • pe_server - The name in Continuous Delivery for PE of the Puppet Enterprise instance the deployment will target. Accepts a string. Required.
    Tip: Find the name of your PE instance by going to Settings > Puppet Enterprise in the web UI.
  • parameters - Any relevant parameters defined by the deployment policy. To find out which parameters the policy takes, see Built-in deployment policies or refer to the custom deployment policy’s documentation. Accepts key/value pairs. Optional.
  • control_repo - The name of the control repo a module is associated with. Accepts a string. Required only for module regex branch pipelines using the feature branch deployment policy.
  • base_feature_branch - The branch in the control_repo that Continuous Delivery for PE will use to create feature branches. Accepts a string. Required only for module regex branch pipelines using the feature branch deployment policy.
To add a deployment using a built-in deployment policy to your pipeline, format the type: deployment entry as follows:
steps:
  - type: deployment
    policy: cd4pe_deployments::direct_merge
    name: "Direct merge to Production"
    target:
      type: node_group
      node_group_id: fcda068f-e499-44ef-81f2-255fc487b2e2
    pe_server: cdpe-delivery
    parameters:
      stop_if_nodes_fail: 10
      noop: false
      timeout: 60
To add a deployment using a custom deployment policy to your pipeline, format the type: deployment entry as follows:
steps:
  - type: deployment
    policy: 
      name: deployments::custom_policy1
      source: name-of-control-repo
    name: "Direct merge to Production"
    target:
      type: node_group
      node_group_id: fcda068f-e499-44ef-81f2-255fc487b2e2
    pe_server: cdpe-delivery
    parameters:
      policy_parameter1: 10
      policy_parameter2: false
CAUTION: Do not include sensitive parameters in your custom deployment policy if you use the custom deployment policy in a pipeline managed with code.
To add a deployment to a module for a regex branch pipeline using the feature branch deployment policy, format the type: deployment entry as follows:
steps:
  - type: deployment
    policy:
      name: "cd4pe_deployments::feature_branch"
    name: "Deployment to hot-dog-prod-2"
    target:
      type: node_group
    pe_server: "hot-dog-prod-2"
    control_repo: "cc-hot-dog-app"
    base_feature_branch: "production"
Formatting the target key
Note: The target key is not used for deployments from regex branches.
When used for a control repo pipeline or a module pipeline that does not include impact analysis, the target key must be formatted as follows:
    target:
      type: node_group
      node_group_id: fcda068f-e499-44ef-81f2-255fc487b2e2

Presently, node_group is the only available type.

To locate your target node group's ID:
  1. In the PE console, click Node groups (or Classification in PE versions prior to 2019.8.1).
  2. Locate your target node group and click its name. A page with information about the node group opens.
  3. In the URL for the page, locate and copy the alphanumeric string that follows /groups/. This is your node group ID.
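For example, in a node group page URL like the following (the hostname is illustrative, and the exact path varies between PE versions), the node group ID is the alphanumeric string after /groups/:
https://pe-console.example.com/#/node_groups/groups/fcda068f-e499-44ef-81f2-255fc487b2e2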

When used for a module pipeline that includes an impact analysis step, the target key must be formatted as follows:

    target:
      type: node_group
      node_group_id: fcda068f-e499-44ef-81f2-255fc487b2e2
      control_repo:  cdpe-2018-1-pe-master-1-control

The control_repo key's value is the name of the control repo used in the deployment associated with the impact analysis task.