.cd4pe.yaml file structure

When managing your pipelines with code, pipeline definitions are expressed in a structured format and stored in a `.cd4pe.yaml` file in your control repo or module repo.

The `.cd4pe.yaml` file must use this structure:
- `spec_version`: The version of the pipelines-as-code file specification you've used to write this YAML file. Currently, only one specification version, `v1`, is available.
- `config`: Pipeline configuration settings for the control repo or module repo as a whole.
- `pipelines`: The names of pipelines you're creating, either the branch name (for fixed-branch pipelines) or the naming convention (for regex branch pipelines). Nested under each named pipeline are:
  - `triggers`: What source control activities the pipeline listens for.
  - `stages`: Parts of the pipeline. Each stage includes:
    - `name`: The stage's name.
    - `auto_promote`: The promotion condition that determines if and how the pipeline proceeds to the next stage after completing steps in the current stage.
    - `steps`: Details about the exact work the pipeline performs in each stage, such as jobs and deployments. Each step must specify a `type`. Additional specifications depend on the type.
Here is an example `.cd4pe.yaml` file for a control repo. This file describes a regex branch pipeline and a main branch pipeline. Notice the regex branch pipeline is identified by the feature branch naming convention, `/feature_.*/`.
spec_version: v1
config:
  enable_pull_requests_from_forks: false
  deployment_policy_branch: production
  enable_pe_plans: true
pipelines:
  /feature_.*/:
    triggers:
      - pull_request
      - commit
    stages:
      - name: "Lint/Parser validation"
        auto_promote: all_succeeded
        steps:
          - type: job
            name: control-repo-puppetfile-syntax-validate
          - type: job
            name: control-repo-template-syntax-validate
      - name: "Deploy feature environment"
        steps:
          - type: deployment
            pe_server: cdpe-delivery
            policy: feature_branch
  main:
    triggers:
      - pull_request
      - commit
    stages:
      - name: "Lint/Parser validation"
        auto_promote: all_succeeded
        steps:
          - type: job
            name: control-repo-puppetfile-syntax-validate
          - type: job
            name: control-repo-template-syntax-validate
          - type: job
            name: control-repo-hiera-syntax-validate
          - type: job
            name: control-repo-manifest-validate
      - name: "Impact Analysis"
        auto_promote: any_succeeded
        steps:
          - type: impact_analysis
            concurrent_compilations: 10
            deployments:
              - "Direct merge to Production"
              - "my custom deploy 1"
          - type: pull_request_gate
      - name: "Deploy to lower UAT"
        auto_promote: false
        steps:
          - type: deployment
            policy: direct_merge
            name: "Direct merge to Production"
            target:
              type: node_group
              node_group_id: fcda068f-e499-44ef-81f2-255fc487b2e2
            pe_server: cdpe-delivery
            parameters:
              stop_if_nodes_fail: 10
              noop: false
              timeout: 60
          - type: deployment
            name: "my custom deploy 1"
            policy:
              source: control-repo
              name: deployments::canary
            target:
              type: node_group
              node_group_id: fcda068f-e499-44ef-81f2-255fc487b2e2
            pe_server: cdpe-delivery
            parameters:
              noop: true
              max_node_failure: 50
              canary_size: 4
Using a .cd4pe.yaml file with a module

A `.cd4pe.yaml` file for a module is almost identical to one for a control repo, with these exceptions:
- If the module pipeline includes an impact analysis step, you must include a pointer to the control repo used in the deployment associated with the impact analysis task.
- If the `.cd4pe.yaml` file includes a regex branch pipeline using the feature branch deployment policy, the deployment step must include the name of the control repo associated with the module and the name of the control repo branch on which to base the feature branches Continuous Delivery for PE creates.
Here is an example `.cd4pe.yaml` file for a module with an impact analysis step. Notice, at the end of the pipeline, the `target` parameter for the `type: deployment` step includes a `control_repo` pointer for the impact analysis task.

spec_version: v1
config:
  enable_pull_requests_from_forks: false
pipelines:
  staging:
    triggers:
      - commit
    stages:
      - name: "Impact analysis"
        auto_promote: false
        steps:
          - type: impact_analysis
            deployments:
              - "Deployment to staging-rustler-1"
            concurrent_compilations: 10
            all_deployments: false
      - name: "Deployment to staging"
        steps:
          - type: deployment
            name: "Deployment to staging-rustler-1"
            policy:
              name: eventual_consistency
            timeout: 3600000
            concurrent_compilations: 0
            all_deployments: false
            pe_server: pe-github
            target:
              type: node_group
              node_group_id: 250fd263-8456-4114-b559-9c6d9fa27748
              control_repo: cdpe-code-rustler-app-2020
Here is an example `.cd4pe.yaml` file for a module with a regex branch pipeline that uses the feature branch deployment policy. The `control_repo` and `base_feature_branch` parameters in the `type: deployment` step are required for this type of pipeline. These parameters specify which control repo is associated with this module and which branch of that control repo is used to create feature branches when this pipeline is triggered.
spec_version: v1
config:
  enable_pull_requests_from_forks: false
pipelines:
  /feature_.*/:
    triggers:
      - commit
    stages:
      - name: "Validation stage"
        auto_promote: any_completed
        steps:
          - type: job
            name: "module-pdk-validate"
      - name: "Feature branch deployment stage"
        auto_promote: false
        steps:
          - type: deployment
            name: "Deployment to hot-dog-prod-2"
            policy:
              name: "cd4pe_deployments::feature_branch"
            all_deployments: false
            pe_server: "hot-dog-prod-2"
            control_repo: "cc-hot-dog-app"
            base_feature_branch: "production"
.cd4pe.yaml file validation

Every time you commit a change to a `.cd4pe.yaml` file in your control repo or module repo, Continuous Delivery for PE uses the YAML code to render the pipelines' definitions in the web UI. Before committing changes to your repos, use the Continuous Delivery for PE validation tool to make sure your YAML code is well-formed.
.cd4pe.yaml file syntax

Your `.cd4pe.yaml` file must be formatted according to the rules described in the following sections.
Spec version

Every `.cd4pe.yaml` file begins with a `spec_version` declaration expressed as `v<VERSION_NUMBER>`. This sets the version of the Continuous Delivery for Puppet Enterprise (PE) pipelines-as-code specification used to write the file.
---
spec_version: v1
Config

The `config` section of the `.cd4pe.yaml` file defines global pipeline configuration settings for all pipelines for the control repo or module repo where the file is stored. You can use these keys in the `config` section:
| Config key | Description | Value | Default |
|---|---|---|---|
| `enable_pull_requests_from_forks` | Controls whether the repo's pipelines can be triggered by pull requests from forks. | Boolean | `false` |
| `deployment_policy_branch` | (Optional) Specifies a branch where custom deployment policies are kept. If custom deployment policies exist on the specified branch, you can use them when building pipelines. | A branch name, as a string | N/A |
| `enable_pe_plans` | (Optional) Controls whether Bolt tasks can be included in custom deployment policies used in this repo's pipelines. | Boolean | `true` |
This example `config` section is for a control repo. It allows pull requests from forks and serves custom deployment policies from the `production` branch:

config:
  enable_pull_requests_from_forks: true
  deployment_policy_branch: production
  enable_pe_plans: true
Pipelines

The `pipelines` section names each pipeline you're creating for a control repo or module. It requires key/value pairs where each key corresponds to a pipeline and the values are triggers, stages, and other pipeline contents. The key must use either a specific branch's name or a regular expression (for regex branch pipelines).

This example `pipelines` section has two pipelines. The first pipeline is for the repo's `main` branch, whereas the second is a regex branch pipeline that applies to any branches whose names start with `feature_`. Notice the regex branch pipeline's key is the full regular expression surrounded by forward slashes.

pipelines:
  main:
    triggers:
    stages:
  /feature_.*/:
    triggers:
    stages:
Here is the same `pipelines` section with `triggers`, `stages`, and `steps` in each stage (each of these components is explained below):

pipelines:
  main:
    triggers:
      - pull_request
      - commit
    stages:
      - name: "Deploy to production"
        auto_promote: all_succeeded
        steps:
          - type: deployment
            policy: cd4pe_deployments::direct
            name: "Direct merge to Production"
            target:
              type: node_group
              node_group_id: fcda068f-e499-44ef-81f2-255fc487b2e2
            pe_server: cdpe-delivery
            parameters:
              stop_if_nodes_fail: 10
              noop: false
  /feature_.*/:
    triggers:
      - pull_request
      - commit
    stages:
      - name: "Lint/Parser validation"
        auto_promote: all_succeeded
        steps:
          - type: job
            name: control-repo-puppetfile-syntax-validate
Triggers

Each pipeline must have a `triggers` section that specifies the events in your source control system that start the pipeline. The available values are `pull_request` and `commit`. You can set either or both of these values. If no value is set, the pipeline does not run unless triggered manually.

For example, this `main` branch pipeline is triggered by both pull requests and commits:

pipelines:
  main:
    triggers:
      - pull_request
      - commit

If you want your pipelines to be triggered by pull requests from forks, you must specify this in the `config` section.
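As a minimal sketch, that `config` setting (the same key shown in the Config section above) looks like this:

config:
  enable_pull_requests_from_forks: true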
Stages

Each pipeline must have a `stages` section. The `stages` section accepts an array where each item is a stage in the pipeline. Each stage has a name, instructions about when and whether to auto-promote to the next stage in the pipeline, and a list of steps to be executed in parallel.

The `name` key is the stage's name. It accepts a string in quotes, and the default value is `"Stage <STAGE_NUMBER>"`.

The `auto_promote` key provides instructions on when and if you want the pipeline to automatically proceed to the next stage. It accepts one of the following values (illustrated in the sketch after this list):
- `false`: Requires manual promotion. The pipeline does not auto-promote.
- `all_succeeded`: Auto-promote only if all steps succeed. This is the default value.
- `any_succeeded`: Auto-promote if any step succeeds.
- `all_completed`: Auto-promote only if all steps complete (succeed or fail) and no steps are canceled.
- `any_completed`: Auto-promote if any step completes (succeeds or fails). Does not promote if all steps are canceled.
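For instance, this minimal sketch (the stage and job names are hypothetical) shows a stage that auto-promotes when all of its steps succeed, followed by a stage that waits for manual promotion:

stages:
  - name: "Validation"              # hypothetical stage name
    auto_promote: all_succeeded     # promote automatically only if every step succeeds
    steps:
      - type: job
        name: my-syntax-check       # hypothetical job name
  - name: "Acceptance tests"
    auto_promote: false             # the pipeline pauses here until promoted manually
    steps:
      - type: job
        name: my-acceptance-test    # hypothetical job name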
Steps

Each stage must have at least one step. A step defines a particular job to be performed in a pipeline stage. The `type` key is always required. Requirements for other keys depend on the step's `type`. The `type` key accepts these values:
- `job`: Specify a job available in the workspace where this pipeline exists.
- `pull_request_gate`: A conditional checkpoint that forcefully pauses the pipeline if the pipeline was triggered by a pull request. If the pipeline was triggered by a commit, the pipeline continues beyond the pull request gate.
- `impact_analysis`: Reports the potential impact the code change might have on nodes and configurations in deployments defined later in the pipeline.
- `deployment`: A Puppet code deployment.
For `type: job` steps: You must specify the job `name` as listed on the workspace's Jobs page in the web UI. For example, this step uses a job called `control-repo-puppetfile-syntax-validate`:

steps:
  - type: job
    name: control-repo-puppetfile-syntax-validate
For `type: pull_request_gate` steps: No additional keys are needed or available. To add a pull request gate to your pipeline, add this code to the `steps` section:

steps:
  - type: pull_request_gate
For `type: impact_analysis` steps: There are three requirements. First, you must use one of these keys to specify the deployments for which you want to assess the potential impact:
- `all_deployments`: Assess the impact to all deployments in the pipeline. This key accepts a Boolean value, and the default value is `false`.
- `deployments`: List specific deployments you want to assess. Specify an array of strings where each item is the name of a deployment in your pipeline definition.
- `percentage_node_filter`: Run impact analysis on a percentage of nodes impacted by the incoming changes. This key accepts floating point values between 1 and 100, inclusive.
Second, the pipeline must include at least one `type: deployment` step. Third, you must set the `control_repo` key on the deployment associated with the impact analysis step. For more information, refer to the information about `type: deployment` and the `target` key at the end of this page.

Additionally, you can use these optional keys in a `type: impact_analysis` step:
- `concurrent_compilations`: How many catalog compilations to perform at the same time. The default value is `10`. Tip: If your compilers are hitting capacity when performing an impact analysis, lower this number. However, lowering this number increases the impact analysis run time.
- `puppetdb_connection_timeout_sec`: The timeout period (in seconds) for requests to PuppetDB during an impact analysis task. The default value is `120`.
For example:

steps:
  - type: impact_analysis
    concurrent_compilations: 10
    deployments:
      - "Direct merge to Production"
      - "My PCI deployment"
  - type: impact_analysis
    all_deployments: true
    puppetdb_connection_timeout_sec: 180
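The examples above don't use `percentage_node_filter`; a step restricted to a subset of impacted nodes would look like this sketch (the 25.0 value is illustrative):

steps:
  - type: impact_analysis
    percentage_node_filter: 25.0   # assess 25% of the nodes impacted by the changes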
For `type: deployment` steps: Specify these required keys:
- `policy`: The deployment policy used. For built-in deployment policies, provide one of the following policy key/value pairs:
  - `cd4pe_deployments::direct`
  - `cd4pe_deployments::eventual_consistency`
  - `cd4pe_deployments::feature_branch`
  - `cd4pe_deployments::rolling`
  For custom deployment policies, provide a key/value pair defining the policy to be used. CAUTION: Do not include sensitive parameters in custom deployment policies used in pipelines managed with code.
- `name`: The name, as a string in quotes, that you're giving to the deployment.
- `target`: The infrastructure (node group) the deployment targets. Requires the `type: node_group` key and one or more node group IDs as key/value pairs. In a module pipeline with an impact analysis task, this also requires a `control_repo` key. More information about this key, including how to find node group IDs, is provided below. Restriction: Do not use this key for deployments from regex branch pipelines.
- `pe_server`: The name, as a string, of the PE instance the deployment targets. You must use the name shown in the Continuous Delivery for PE web UI.
The `parameters` key is optional. It accepts key/value pairs specifying relevant parameters defined by the deployment policy. To find out which parameters the policy accepts, go to Built-in deployment policies or refer to the custom deployment policy's documentation.

For module regex branch pipelines using the feature branch deployment policy, you must also specify the `control_repo` (the name, as a string, of the control repo a module is associated with) and the `base_feature_branch` (the name, as a string, of the control repo branch you want Continuous Delivery for PE to use to create feature branches).
This example `deployment` step uses a built-in deployment policy:

steps:
  - type: deployment
    policy: cd4pe_deployments::direct
    name: "Direct merge to Production"
    target:
      type: node_group
      node_group_id: fcda068f-e499-44ef-81f2-255fc487b2e2
    pe_server: cdpe-delivery
    parameters:
      stop_if_nodes_fail: 10
      noop: false
      timeout: 60
This example `deployment` step uses a custom deployment policy:

steps:
  - type: deployment
    policy:
      name: deployments::custom_policy1
      source: name-of-control-repo
    name: "Direct merge to Production"
    target:
      type: node_group
      node_group_id: fcda068f-e499-44ef-81f2-255fc487b2e2
    pe_server: cdpe-delivery
    parameters:
      policy_parameter1: 10
      policy_parameter2: false
This example `deployment` step belongs to a regex branch pipeline and uses the feature branch deployment policy. Notice there is no `target` key, because the `target` key is never used for deployments from regex branch pipelines:

steps:
  - type: deployment
    policy:
      name: "cd4pe_deployments::feature_branch"
    name: "Deployment to hot-dog-prod-2"
    pe_server: "hot-dog-prod-2"
    control_repo: "cc-hot-dog-app"
    base_feature_branch: "production"
The `target` key format depends on other characteristics of the pipeline; however, the `target` key is never used for deployments from regex branch pipelines.

Ordinarily, the `target` key must include the `type` and `node_group_id`, such as:

target:
  type: node_group
  node_group_id: fcda068f-e499-44ef-81f2-255fc487b2e2
`node_group` is the only available `type`. To locate the `node_group_id` (see the example URL after these steps):
1. In the PE console, click Node groups (or Classification in PE versions prior to 2019.8.1).
2. Locate your target node group and click its name. A page with information about the node group opens.
3. In the page URL, locate and copy the alphanumeric string that follows `/groups/`. This is your node group ID.
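For instance, in a console URL like this one (the hostname is hypothetical and the exact path varies by PE version, but the ID always follows `/groups/`), the node group ID is `fcda068f-e499-44ef-81f2-255fc487b2e2`:

https://console.example.com/#/node_groups/groups/fcda068f-e499-44ef-81f2-255fc487b2e2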
When used for a module pipeline that includes an impact analysis step, the `target` key must also include the `control_repo` (in addition to the `type` and `node_group_id`), such as:

target:
  type: node_group
  node_group_id: fcda068f-e499-44ef-81f2-255fc487b2e2
  control_repo: cdpe-2018-1-pe-master-1-control

The `control_repo` value is the name of the control repo used in the deployment associated with the impact analysis task.