Automate PAM and Puppet application offline installations

During a fresh offline installation of Puppet Application Manager (PAM) and a Puppet application, you have the option to configure the software automatically rather than completing the installation script interview.

Before you begin

Ensure that your system meets the PAM system requirements.

Automate PAM and Puppet application offline installations on Puppet-supported clusters

  1. Install Puppet Application Manager. For detailed instructions, see PAM HA offline installation.
  2. Define the configuration values for your Puppet application installation, using Kubernetes YAML format.
    apiVersion: kots.io/v1beta1
    kind: ConfigValues
    metadata: 
      name: app-config
    spec: 
      values: 
        accept_eula: 
          value: has_accepted_eula
        annotations: 
          value: "ingress.kubernetes.io/force-ssl-redirect: 'false'"
        hostname: 
          value: "<HOSTNAME>"
        root_password: 
          value: "<ROOT ACCOUNT PASSWORD>"
    Tip: View the keyword names for all settings by clicking View files > upstream > config.yaml in Puppet Application Manager.
    Replace the values indicated:
    • Replace <HOSTNAME> with the hostname you want to use to configure an Ingress and to tell job hardware agents and webhooks how to connect to it. You might need to configure your DNS to resolve the hostname to your Kubernetes hosts.
    • Replace <ROOT ACCOUNT PASSWORD> with your chosen password for the application root account. The root account is used to administer your application and has full access to all resources and application-wide settings. This account must NOT be used for testing and deploying control repositories or modules.
    • Optional. These configuration values disable HTTP-to-HTTPS redirection, so that SSL can be terminated at the load balancer. If you want to run the application over SSL only, change the force-ssl-redirect annotation to true.
    • Optional. If your load balancer requires HTTP health checks, you can enable Ingress settings that do not require Server Name Indication (SNI) for /status. To enable this setting, add the following to the config values statement:
      enable_lb_healthcheck:
        value: "1"
    Note: The automated installation automatically accepts the Puppet application end user license agreement (EULA). Unless Puppet has otherwise agreed in writing, all software is subject to the terms and conditions of the Puppet Master License Agreement located at https://puppet.com/legal.
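If you enable the optional health-check setting, the enable_lb_healthcheck key sits at the same level as the other entries under spec.values. A sketch of the placement (the other keys are abbreviated here):

```yaml
spec:
  values:
    # ...other configuration values as shown above...
    enable_lb_healthcheck:
      value: "1"
```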
  3. Write your license file and the configuration values generated in the previous step to the following locations:
    • Write your license file to ./replicated_license.yaml
    • Write your configuration values to ./replicated_config.yaml
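One way to create the configuration values file is with a heredoc; the hostname and password below are example values, not defaults:

```shell
# Write the config values from step 2 to the expected location.
# "pam.example.com" and the root password are hypothetical examples.
cat > ./replicated_config.yaml <<'EOF'
apiVersion: kots.io/v1beta1
kind: ConfigValues
metadata:
  name: app-config
spec:
  values:
    accept_eula:
      value: has_accepted_eula
    annotations:
      value: "ingress.kubernetes.io/force-ssl-redirect: 'false'"
    hostname:
      value: "pam.example.com"
    root_password:
      value: "a-strong-root-password"
EOF
# The license file comes from Puppet; save your copy alongside it
# as ./replicated_license.yaml.
```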
  4. Download the application bundle:
    curl -L <APPLICATION BUNDLE URL> -o <APPLICATION BUNDLE FILE>
  5. Copy the application bundle to your primary and secondary nodes and unpack it:
    tar xzf ./<APPLICATION BUNDLE FILE>
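The copy-and-unpack step can be sketched end to end. The scp loop in the comments uses hypothetical hostnames; the runnable part substitutes a dummy archive for the real bundle to demonstrate the unpack:

```shell
# Hypothetical distribution loop (hostnames and bundle name are examples):
#   for node in primary.example.com secondary.example.com; do
#     scp ./app-bundle.tar.gz "$node":~/
#     ssh "$node" 'tar xzf ~/app-bundle.tar.gz'
#   done
# Demonstrated locally with a dummy archive standing in for the real bundle:
mkdir -p demo-bundle && echo "manifest" > demo-bundle/manifest.txt
tar czf app-bundle.tar.gz demo-bundle
rm -rf demo-bundle
tar xzf app-bundle.tar.gz      # the same command you run on each node
cat demo-bundle/manifest.txt
```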
  6. Run the application install command on your primary node. Replace the <YOUR CHOSEN PASSWORD>, <APPLICATION NAME>, and <APPLICATION BUNDLE FILE> values in the example below with your own values:
    KOTS_PASSWORD=<YOUR CHOSEN PASSWORD>
    kubectl kots install <APPLICATION NAME> --namespace default --shared-password $KOTS_PASSWORD --license-file ./replicated_license.yaml --config-values ./replicated_config.yaml --airgap-bundle ./<APPLICATION BUNDLE FILE> --port-forward=false
    # Wait several minutes for the application to deploy. If it doesn't appear, a failed preflight check or another error might have occurred.
    Note: If you want to install a specific version of the application, include the --app-version-label=<VERSION> flag in the install command.

Automate PAM and Puppet application offline installations on customer-supported clusters

Before you begin
  1. If you haven’t already done so, install kubectl.
  2. Puppet Application Manager is expected to work on any certified Kubernetes distribution that meets the following requirements. We have validated and support:
    • Google Kubernetes Engine
    • Amazon Elastic Kubernetes Service (EKS)
    If you use a different distribution, contact Puppet Support for more information on compatibility with PAM.

  3. Make sure your Kubernetes cluster meets the minimum requirements:
    • Kubernetes version 1.19-1.23.
    • A default storage class that can be used for relocatable storage.
    • A standard Ingress controller that supports websockets (we have tested with Project Contour and NGINX).
    • We currently test and support Google Kubernetes Engine (GKE) clusters.
    Note: If you’re using self-signed certificates on your Ingress controller, you must ensure that your job hardware nodes trust the certificates. Additionally, all nodes that use Continuous Delivery for PE webhooks must trust the certificates, or SSL checking must be disabled on these nodes.
    Important: If you are installing Puppet Comply on Puppet Application Manager, the Ingress controller must be configured to allow request payloads of up to 32 MB. Ingress controllers used by Amazon EKS commonly default to a 1 MB maximum, which causes all report submissions to fail.

    The Ingress must also have a generous limit for total connection time. We recommend setting the connection timeout to infinity and using an idle timeout in conjunction with it.
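For example, on an NGINX Ingress controller these limits can be raised with annotations like the following; the annotation names are NGINX-specific, and other controllers use different mechanisms:

```yaml
metadata:
  annotations:
    # Allow Comply report payloads up to 32 MB (NGINX defaults to 1 MB).
    nginx.ingress.kubernetes.io/proxy-body-size: "32m"
    # Generous read/send timeouts so long-lived connections aren't cut off.
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
```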

  4. If you are setting up Puppet Application Manager behind a proxy server, the installer supports proxies configured via HTTP_PROXY/HTTPS_PROXY/NO_PROXY environment variables.
    Restriction: Using a proxy to connect to external version control systems is currently not supported.
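For example, the variables can be exported in the shell before running the installer; the proxy endpoint below is hypothetical, and the exclusion list should be adjusted for your network:

```shell
# Hypothetical proxy endpoint; adjust host, port, and exclusions as needed.
export HTTP_PROXY="http://proxy.example.com:3128"
export HTTPS_PROXY="http://proxy.example.com:3128"
# Traffic to local and cluster-internal addresses should bypass the proxy.
export NO_PROXY="localhost,127.0.0.1,.cluster.local"
```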
  1. Define the configuration values for your Puppet application installation, using Kubernetes YAML format.
    apiVersion: kots.io/v1beta1
    kind: ConfigValues
    metadata: 
      name: app-config
    spec: 
      values: 
        accept_eula: 
          value: has_accepted_eula
        annotations: 
          value: "ingress.kubernetes.io/force-ssl-redirect: 'false'"
        hostname: 
          value: "<HOSTNAME>"
        root_password: 
          value: "<ROOT ACCOUNT PASSWORD>"
    Tip: View the keyword names for all settings by clicking View files > upstream > config.yaml in Puppet Application Manager.
    Replace the values indicated:
    • Replace <HOSTNAME> with the hostname you want to use to configure an Ingress and to tell job hardware agents and webhooks how to connect to it. You might need to configure your DNS to resolve the hostname to your Kubernetes hosts.
    • Replace <ROOT ACCOUNT PASSWORD> with your chosen password for the application root account. The root account is used to administer your application and has full access to all resources and application-wide settings. This account must NOT be used for testing and deploying control repositories or modules.
    • Optional. These configuration values disable HTTP-to-HTTPS redirection, so that SSL can be terminated at the load balancer. If you want to run the application over SSL only, change the force-ssl-redirect annotation to true.
    • Optional. If your load balancer requires HTTP health checks, you can enable Ingress settings that do not require Server Name Indication (SNI) for /status. To enable this setting, add the following to the config values statement:
      enable_lb_healthcheck:
        value: "1"
    Note: The automated installation automatically accepts the Puppet application end user license agreement (EULA). Unless Puppet has otherwise agreed in writing, all software is subject to the terms and conditions of the Puppet Master License Agreement located at https://puppet.com/legal.
  2. Write your license file and the configuration values generated in the previous step to the following locations:
    • Write your license file to ./replicated_license.yaml
    • Write your configuration values to ./replicated_config.yaml
  3. Download the application bundle:
    curl -L <APPLICATION BUNDLE URL> -o <APPLICATION BUNDLE FILE>
  4. Create and run the following script, supplying values specific to your installation for the variables:
    #!/bin/bash
    REGISTRY=<YOUR_CONTAINER_REGISTRY>
    APP_K8S_NAMESPACE=<DESIRED_NAMESPACE_IN_TARGET_CLUSTER>
    APP_BUNDLE=<PATH_TO_AIRGAP_BUNDLE_FROM_STEP_3>
    PAM_PASSWORD=<DESIRED_PAM_CONSOLE_PASSWORD>
    LICENSE_FILE=<PATH_TO_LICENSE_FILE_FROM_STEP_2>
    CONFIG_FILE=<PATH_TO_CONFIG_FILE_FROM_STEP_2>
    
    curl https://kots.io/install | bash
    curl -LO https://github.com/replicatedhq/kots/releases/download/v$(kubectl kots version | head -n1 | cut -d' ' -f3)/kotsadm.tar.gz
    
    kubectl kots admin-console push-images ./kotsadm.tar.gz ${REGISTRY}
    kubectl kots admin-console push-images ${APP_BUNDLE} ${REGISTRY}
    kubectl kots install puppet-application-manager --namespace ${APP_K8S_NAMESPACE} --shared-password ${PAM_PASSWORD} --license-file ${LICENSE_FILE} --config-values ${CONFIG_FILE} --airgap-bundle ${APP_BUNDLE} --disable-image-push --kotsadm-registry ${REGISTRY} --port-forward=false --skip-preflights
    Tip: If the script fails, it might be because:
    • The push-images commands require that the local machine where the script is running has push access to the registry.
    • The install command requires read access to the registry from the target cluster.
    • Offline HA installs on GKE can't run preflights; therefore, --skip-preflights must be included.
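As a quick sanity check before re-running the script, you can probe the registry's API endpoint. The hostname below is hypothetical; /v2/ is the standard Docker/OCI registry API base path, and even an authentication error (HTTP 401) confirms the registry is reachable:

```shell
# Hypothetical registry host; set REGISTRY to your actual registry.
REGISTRY="${REGISTRY:-registry.example.com}"
# -k skips TLS verification for registries with self-signed certificates.
# A connection failure suggests a network or DNS problem rather than auth.
if curl -sk --max-time 5 "https://${REGISTRY}/v2/" >/dev/null; then
  REGISTRY_STATUS="responded"
else
  REGISTRY_STATUS="unreachable"
fi
echo "registry ${REGISTRY_STATUS}"
```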