PAM HA offline install

Use these instructions to install Puppet Application Manager (PAM) in an air-gapped or offline environment where the Puppet Application Manager host server does not have direct access to the internet.

Before you begin
  1. Review the Puppet Application Manager system requirements.

    For HA implementations, each server must meet the following minimum requirements. Add secondary nodes only after you have set up three primary nodes:

    Node type: Primary

    Memory: 7 GB + application requirements

    Storage:

    At least 50 GB on an unformatted storage device (such as a partition or raw device) + additional application-specific storage

    At least 100 GB for /var/lib. This is primarily divided among:

    • 2 GB for /var/lib/etcd
    • 4 GB for /var/lib/rook (plus buffer)
    • 32 GB for /var/lib/kubelet
    • 40 GB for /var/lib/containerd

    Note: The Ceph storage backend expects the file system containing /var/lib/rook to remain below 70% utilization.

    SSDs (or similarly low-latency storage) are recommended for /var/lib/etcd and /var/lib/rook.

    CPUs: 4 + application requirements

    Open ports:

    TCP: 443, 2379, 2380, 6443, 6783, 8000, 8800, 9001 (offline only), and 10250
    UDP: 6783, 6784

    Node type: Secondary

    Memory: 1.5 GB + application requirements

    Storage: At least 80 GB for /var/lib. This is primarily divided among:

    • 32 GB for /var/lib/kubelet
    • 40 GB for /var/lib/containerd

    CPUs: 1 + application requirements
    Note: Swap, Firewalld, and SELinux are not supported for use with this version of Puppet Application Manager. The installation script attempts to disable these services if they are present.

    If you want to keep SELinux enabled, append the -s preserve-selinux-config switch to the Puppet Application Manager install command.

  2. Ensure that the IP address ranges used by the cluster are locally accessible. See Resolve IP address range conflicts for instructions.
    Note: The minimum sizes for CIDR blocks used by Puppet Application Manager are:
    • Standalone - /24 for pod and service CIDRs
    • HA - /23 for pod and service CIDRs
    • Default of /22 is recommended to support future expansion
  3. Ensure that the nodes can resolve their own hostnames, through either local host mapping or a reachable DNS server.
  4. Set all nodes used in your HA implementation to the UTC timezone.
  5. If you use the puppetlabs/firewall module to manage your cluster's firewall rules with Puppet, be advised that purging unknown rules from chains breaks Kubernetes communication. To avoid this, apply the puppetlabs/pam_firewall module before installing Puppet Application Manager.

This installation process results in a basic Puppet Application Manager instance that is configured for optional high availability. Installation takes several (mostly hands-off) minutes to complete.
  1. Install and configure a load balancer (or two if you want to segment internal and external traffic - for more information, see Architecture overview). Round-robin load balancing is sufficient. For an HA cluster, the following is required:
    • A network (L4, TCP) load balancer for port 6443 across primary nodes. This is required for Kubernetes components to continue operating in the event that a node fails. The port is only accessed by the Kubernetes nodes and any admins using kubectl.

    • A network (L4, TCP) or application (L7, HTTP/S) load balancer for ports 80 and 443 across all primaries and secondaries. This maintains access to applications in the event of a node failure. Include port 8800 if you want external access to the Puppet Application Manager UI.

      Note: Include port 8000 for webhook callbacks if you are installing Continuous Delivery for PE.
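    One way to satisfy these requirements is an HAProxy instance in TCP mode with round-robin balancing. The sketch below is illustrative only and assumes three primaries and one secondary with placeholder hostnames; adapt it to your own load balancer and node names:

```
# Illustrative HAProxy L4 (TCP) round-robin configuration.
# Hostnames are placeholders; HAProxy is one possible choice, not a requirement.
defaults
    mode tcp
    timeout connect 5s
    timeout client  1h
    timeout server  1h

# Kubernetes API: primaries only (required for node-failure tolerance).
frontend k8s_api
    bind *:6443
    default_backend k8s_primaries
backend k8s_primaries
    balance roundrobin
    server primary1 primary1.example.com:6443 check
    server primary2 primary2.example.com:6443 check
    server primary3 primary3.example.com:6443 check

# Application HTTPS traffic: all primaries and secondaries.
# Add similar sections for port 80, port 8800 (PAM UI), and port 8000
# (Continuous Delivery for PE webhooks) if you need them externally.
frontend apps_https
    bind *:443
    default_backend app_nodes
backend app_nodes
    balance roundrobin
    server primary1   primary1.example.com:443 check
    server primary2   primary2.example.com:443 check
    server primary3   primary3.example.com:443 check
    server secondary1 secondary1.example.com:443 check
```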
  2. From a workstation with internet access, download the cluster installation bundle (note that this bundle is ~4 GB).
  3. Copy the installation bundle to your primary and secondary nodes and unpack it:
    tar xzf puppet-application-manager.tar.gz
  4. Run the installation command:
    cat | sudo bash -s airgap 
    Note: An unformatted storage device is required. This can be either a partition or raw storage device.

    By default this installation automatically uses devices (under /dev) matching the pattern vd[b-z]. Attach a device to each host. Only devices that match the pattern, and are unformatted, are used.

    If necessary, you can override this pattern by providing a patch during installation; append -s installer-spec-file=patch.yaml to the installation command.

    kind: Installer
    metadata:
      name: patch
    spec:
      rook:
        blockDeviceFilter: sd[b-z] # for standard SCSI disks
    1. When prompted for a load balancer address, enter the address of the DNS entry for your load balancer.
    2. The installation script prints the address and password (only shown once, so make careful note of it) for Puppet Application Manager:
      Login with password (will not be shown again): <PASSWORD>
      Note: If you lose this password or wish to change it, see Reset the Puppet Application Manager password for instructions.
  5. Follow the instructions printed after the following line in the install script output:
    To add MASTER nodes to this installation, copy and unpack this bundle on your other nodes, and run the following: 
    cat ./ | sudo bash -s airgap 
  6. Add the two new nodes to your load balancer.
  7. Navigate to the Puppet Application Manager UI using the address provided by the installation script (http://<PUPPET APPLICATION MANAGER ADDRESS>:8800) and follow the prompts.
    The Puppet Application Manager UI is where you manage Puppet applications. You’ll be guided through the process of setting up SSL certificates, uploading a license, and checking to make sure your infrastructure meets application system requirements.
What to do next

Follow the instructions for installing your Puppet applications on Puppet Application Manager.

For more information on installing Continuous Delivery for PE offline, see Install Continuous Delivery for PE in an offline environment.

For more information on installing Comply offline, see Install Comply offline.