Backing up PAM using snapshots

Snapshots are point-in-time backups of your Puppet Application Manager (PAM) deployment, which can be used to roll back to a previous state or restore your installation into a new cluster for disaster recovery.

Full and partial snapshots

There are two options available when creating a snapshot of your Puppet Application Manager (PAM) deployment: full snapshots (also known as instance snapshots) and partial (or application) snapshots. For full disaster recovery, make sure you've configured and scheduled regular full snapshots stored on a remote storage solution such as an S3 bucket or NFS share.

Full snapshots offer a comprehensive backup of your PAM deployment, because they include the core PAM application together with the Puppet applications installed in your PAM deployment. You can use a full snapshot to restore your PAM deployment and all of your installed Puppet applications to a previous backup. For example, you could use a full snapshot to revert an undesired configuration change or a failed upgrade, or to migrate your PAM deployment to another Puppet-supported cluster.

Partial snapshots are available from the PAM console, but are limited in their usefulness. Because the option to restore a partial snapshot is only accessible from the Snapshots section of the PAM admin console, you must already have an installed and functioning version of PAM before you can restore from one.

Partial snapshots only back up the Puppet application you specified when you configured the snapshot, for example, Continuous Delivery for Puppet Enterprise, or Puppet Comply. They do not back up the underlying PAM deployment. Partial snapshots are sometimes useful if you want to roll back to a previous version of a specific Puppet application that you've installed on your PAM deployment, but are far less versatile than full snapshots. To make sure that you have all disaster recovery options available to you, use a full snapshot wherever possible.

Configure snapshots

Before using snapshots, select a storage location, set a snapshot retention period, and indicate whether snapshots are created manually or on a set schedule.

Important: Disaster recovery requires that the storage backend used for backups is accessible from the new cluster. When setting up snapshots in an offline cluster, make sure to record the registry service IP address with the following command:
kubectl -n kurl get svc registry -o jsonpath='{.spec.clusterIP}'

Record the value returned by this command; you'll need it when creating a new cluster to restore to as part of disaster recovery with PAM.
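
For example, a minimal shell sketch of capturing that value to a file (the filename registry-ip.txt is an arbitrary choice, not one PAM requires; the command assumes kubectl access to the kURL cluster):

```shell
# Record the in-cluster registry IP so it can be supplied when building the
# restore cluster. Falls back to "unavailable" if kubectl cannot reach a cluster.
REGISTRY_IP="$(kubectl -n kurl get svc registry -o jsonpath='{.spec.clusterIP}' 2>/dev/null || true)"
echo "${REGISTRY_IP:-unavailable}" | tee registry-ip.txt
```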

  1. In the upper navigation bar of the Puppet Application Manager UI, click Snapshots > Settings & Schedule.
  2. The snapshots feature uses Velero (https://velero.io), an open source backup and restore tool. Click Check for Velero to determine whether Velero is present on your cluster, and to install it if needed.
  3. Select a destination for your snapshot storage and provide the required configuration information. You can choose to set up snapshot storage in the PAM UI or on the command line. Supported destinations are listed below. We recommend using an external service or NFS, depending on what is available to you:
    • Internal storage (default)
    • Amazon S3
    • Azure Blob Storage
    • Google Cloud Storage
    • Other S3-compatible storage
    • Network file system (NFS)
    • Host path
    Amazon S3

    If using the PAM UI, provide the following information:

    Bucket: The name of the AWS bucket where snapshots are stored.
    Region: The AWS region the bucket is available in.
    Path: Optional. The path within the bucket where all snapshots are stored.
    Use IAM instance role?: If selected, an IAM instance role is used instead of an access key ID and secret.
    Access key ID: Required only if not using an IAM instance role. The AWS IAM access key ID that can read from and write to the bucket.
    Access key secret: Required only if not using an IAM instance role. The AWS IAM secret access key that is associated with the access key ID.

    If using the command line, run the appropriate command:

    Not using an IAM instance role:

    kubectl kots velero configure-aws-s3 access-key --access-key-id <string> --bucket <string> --path <string> --region <string> --secret-access-key <string>
    Using an IAM instance role:
    kubectl kots velero configure-aws-s3 instance-role --bucket <string> --path <string> --region <string>

    Azure Blob Storage

    If using the PAM UI, provide the following information:

    Note: Only connections via service principals are currently supported.

    Bucket: The name of the Azure Blob Storage container where snapshots are stored.
    Path: Optional. The path within the container where all snapshots are stored.
    Subscription ID: Required only for access via service principal or AAD Pod Identity. The subscription ID associated with the target container.
    Tenant ID: Required only for access via service principal. The tenant ID associated with the Azure account of the target container.
    Client ID: Required only for access via service principal. The client ID of a service principal with access to the target container.
    Client secret: Required only for access via service principal. The client secret of a service principal with access to the target container.
    Cloud name: The Azure cloud for the target storage. Options: AzurePublicCloud, AzureUSGovernmentCloud, AzureChinaCloud, AzureGermanCloud.
    Resource group: The resource group name of the target container.
    Storage account: The storage account name of the target container.
    If using the command line, run the following:
    kubectl kots velero configure-azure service-principle --client-id <string> --client-secret <string> --cloud-name <string> --container <string> --path <string> --resource-group <string> --storage-account <string> --subscription-id <string> --tenant-id <string>

    Google Cloud Storage

    If using the PAM UI, provide the following information:

    Bucket: The name of the GCS bucket where snapshots are stored.
    Path: Optional. The path within the bucket where all snapshots are stored.
    Service account: The GCP IAM service account JSON file that has permissions to read from and write to the storage location.

    If using the command line, run the appropriate command:

    For service account authentication:
    kubectl kots velero configure-gcp service-account --bucket <string> --path <string> --json-file <string>
    For Workload Identity authentication:
    kubectl kots velero configure-gcp workload-identity --bucket <string> --path <string> --json-file <string>

    Other S3-compatible storage

    If using the PAM UI, provide the following information:

    Bucket: The name of the bucket where snapshots are stored.
    Path: Optional. The path within the bucket where all snapshots are stored.
    Access key ID: The access key ID that can read from and write to the bucket.
    Access key secret: The secret access key that is associated with the access key ID.
    Endpoint: The endpoint to use to connect to the bucket.
    Region: The region the bucket is available in.
    If using the command line, run the following:
    kubectl kots velero configure-other-s3 --namespace default --bucket <string> --path <string>  --access-key-id <string> --secret-access-key <string> --endpoint <string> --region <string>

    Network file system (NFS)

    Take note of these important steps before you begin configuration:
    • Make sure that you have the NFS server set up and configured to allow access from all the nodes in the cluster.
    • Make sure all the nodes in the cluster have the necessary NFS client packages installed to be able to communicate with the NFS server.
    • Make sure that any firewalls are properly configured to allow traffic between the NFS server and nodes in the cluster.

    If using the PAM UI, provide the following information:

    Server: The hostname or IP address of the NFS server.
    Path: The path that is exported by the NFS server.
    If using the command line, run the following:
    kubectl kots velero configure-nfs --namespace default --nfs-path <string> --nfs-server <string>

    Host path

    Note that the configured path must be fully accessible by user/group 1001 on your cluster nodes. Host path works best when backed by a shared network file system.

    On the command line, run the following:
    kubectl kots velero configure-hostpath --namespace default --hostpath <string>
  4. Click Update storage settings to save your storage destination information.
    Depending on your chosen storage provider, saving and configuring it might take several minutes.
  5. Optional: To automatically create new snapshots on a schedule, select Enable automatic scheduled snapshots on the Full snapshots (instance) tab. (If desired, you can also set up a schedule for capturing partial (application-only) snapshots.)
    You can schedule a new snapshot creation for every hour, day, or week, or you can create a custom schedule by entering a cron expression.
  6. Set the retention schedule for your snapshots by selecting the time period after which old snapshots are automatically deleted. The default retention period is one month.
    Note: A snapshot's retention period cannot be changed once the snapshot is created. If you update the retention schedule, the new retention period applies only to snapshots created after the update is made.
  7. Click Update schedule to save your changes.
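
The custom schedules in step 5 are standard five-field cron expressions (minute, hour, day of month, month, day of week). As an illustrative sketch of how such an expression is evaluated (this is not PAM's or Velero's own scheduler, and ranges/steps are omitted for brevity):

```python
from datetime import datetime

def field_matches(field: str, value: int) -> bool:
    """Match one cron field: '*' matches anything; otherwise a number
    or comma-separated list of numbers (ranges/steps omitted)."""
    if field == "*":
        return True
    return value in {int(part) for part in field.split(",")}

def cron_matches(expr: str, when: datetime) -> bool:
    """Return True if a five-field cron expression fires at `when`."""
    minute, hour, day, month, weekday = expr.split()
    return (field_matches(minute, when.minute)
            and field_matches(hour, when.hour)
            and field_matches(day, when.day)
            and field_matches(month, when.month)
            # cron counts Sunday as 0; datetime.weekday() counts Monday as 0
            and field_matches(weekday, (when.weekday() + 1) % 7))

# "0 3 * * 0": every Sunday at 03:00
print(cron_matches("0 3 * * 0", datetime(2024, 1, 7, 3, 0)))  # True (a Sunday)
```

For instance, a schedule of 0 3 * * 0 creates a snapshot every Sunday at 03:00.
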
Results
Snapshots are automatically created according to your specified schedule and saved to the storage location you selected. You can also create an unscheduled snapshot at any time by clicking Start a snapshot on the Dashboard or on the Snapshots page.
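
The retention behavior described in step 6, where each snapshot keeps the retention period that was in effect when it was created, can be sketched as follows (an illustrative model only, not PAM's implementation):

```python
from datetime import datetime, timedelta

def expired(created: datetime, retention: timedelta, now: datetime) -> bool:
    """A snapshot expires once its own retention window has elapsed;
    later changes to the retention schedule do not affect it."""
    return now - created > retention

# Snapshot taken under a 30-day policy, before the policy changed to 90 days:
old_snap = (datetime(2024, 1, 1), timedelta(days=30))
# Snapshot taken after the change:
new_snap = (datetime(2024, 2, 1), timedelta(days=90))
now = datetime(2024, 2, 15)
print(expired(*old_snap, now), expired(*new_snap, now))  # True False
```

The older snapshot is pruned under its original 30-day window even though the schedule now retains snapshots for 90 days.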

Roll back changes using a snapshot

When necessary, you can use a snapshot to roll back to a previous version of your Puppet Application Manager setup without changing the underlying cluster infrastructure.

To roll back changes:

  1. In the console menu of the Puppet Application Manager UI, click Snapshots > Full Snapshots (Instance).
  2. From the list of available snapshots, select the one you wish to roll back to and click the Restore from this backup icon.
  3. Follow the instructions to complete either a partial restore or a full restore.
    A full restore is useful if you need to stay on an earlier version of an application and want to disable automatic version updates. Otherwise, a partial restore is the quicker option.