PAM HA offline install

Use these instructions to install Puppet Application Manager (PAM) in an air-gapped or offline environment where the Puppet Application Manager host server does not have direct access to the internet.

Before you begin
  1. Review the Puppet Application Manager system requirements.

    For HA implementations, each server must meet the following minimum requirements. Add secondaries only after setting up three primaries.

    Primary nodes:

    • Memory: 7 GB + application requirements
    • CPUs: 4 + application requirements
    • Storage: at least 50 GB on an unformatted storage device (such as a partition or raw device) + additional application-specific storage, plus at least 100 GB for /var/lib. This is primarily divided among:
      • 2 GB for /var/lib/etcd
      • 4 GB for /var/lib/rook (plus buffer)
      • 32 GB for /var/lib/kubelet
      • 40 GB for /var/lib/containerd
      SSDs (or similarly low-latency storage) are recommended for /var/lib/etcd and /var/lib/rook.
      Note: The Ceph storage back end works best when the file system containing /var/lib/rook stays below 70% utilization.
    • Open ports: TCP 443, 2379, 2380, 6443, 6783, 8000, 8800, 9001 (offline only), and 10250; UDP 6783 and 6784

    Secondary nodes:

    • Memory: 1.5 GB + application requirements
    • CPUs: 1 + application requirements
    • Storage: at least 80 GB for /var/lib. This is primarily divided among:
      • 32 GB for /var/lib/kubelet
      • 40 GB for /var/lib/containerd

    Note: Swap is not supported for use with this version of Puppet Application Manager (PAM). The installation script attempts to disable swap if it is enabled.
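Before running the installer, you can spot-check the storage and swap requirements with a short script. This is an illustrative sketch, not part of PAM; the 100 GB figure is the primary-node /var/lib requirement above, and GiB is used as an approximation of GB.

```shell
#!/usr/bin/env bash
# Pre-install spot check for a primary node (sketch; not part of the PAM installer).
set -u

# Pure helper: does available space in KB meet a requirement in GB (approximated as GiB)?
meets_gb() {
  [ "$1" -ge $(($2 * 1024 * 1024)) ]
}

# Primary nodes need at least 100 GB for /var/lib.
avail_kb=$(df --output=avail /var/lib | tail -n 1 | tr -d ' ')
if meets_gb "$avail_kb" 100; then
  echo "/var/lib: OK"
else
  echo "/var/lib: less than 100 GB available" >&2
fi

# Swap must be off; the installer attempts to disable it, but check anyway.
if [ -n "$(swapon --show 2>/dev/null)" ]; then
  echo "swap is enabled: run 'sudo swapoff -a' and remove swap from /etc/fstab" >&2
fi
```

Run this on each node before installing; it reports problems but does not change anything.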
  2. (Optional) Prepare additional steps related to SELinux and Firewalld:

    The PAM installation script disables SELinux and Firewalld by default. If you want to keep SELinux enabled, append the -s preserve-selinux-config switch to the PAM install command. This may require additional configuration to adapt the SELinux policy to the installation.

    If you want to keep Firewalld enabled:

    1. Make sure Firewalld is installed on your system.

    2. To prevent the installation from disabling Firewalld, provide a patch file to the PAM install command using -s installer-spec-file=patch.yaml, where patch.yaml is the name of your patch file. For reference, here's an example patch file that enables Firewalld during installation, starts the service if it isn't running, and adds rules to open relevant ports:
      apiVersion: cluster.kurl.sh/v1beta1
      kind: Installer
      metadata:
        name: patch
      spec:
        firewalldConfig:
          firewalld: enabled
          command: ["/bin/bash", "-c"]
          args: ["echo 'net.ipv4.ip_forward = 1' | tee -a /etc/sysctl.conf && sysctl -p"]
          firewalldCmds:
            - ["--permanent", "--zone=trusted", "--add-interface=weave"]
            - ["--zone=external", "--add-masquerade"]
            # SSH port
            - ["--permanent", "--zone=public", "--add-port=22/tcp"]
            # HTTPS port
            - ["--permanent", "--zone=public", "--add-port=443/tcp"]
            # Kubernetes etcd port
            - ["--permanent", "--zone=public", "--add-port=2379-2830/tcp"]
            # Kubernetes API port
            - ["--permanent", "--zone=public", "--add-port=6443/tcp"]
            # Weave Net port
            - ["--permanent", "--zone=public", "--add-port=6783/udp"]
            # Weave Net port
            - ["--permanent", "--zone=public", "--add-port=6783-6874/tcp"]
            # CD4PE Webhook callback port (uncomment line below if needed)
            # - ["--permanent", "--zone=public", "--add-port=8000/tcp"]
            # KOTS UI port
            - ["--permanent", "--zone=public", "--add-port=8800/tcp"]
            # CD4PE Local registry port (offline only, uncomment line below if needed)
            # - ["--permanent", "--zone=public", "--add-port=9001/tcp"]
            # Kubernetes component ports (kubelet, kube-scheduler, kube-controller)
            - ["--permanent", "--zone=public", "--add-port=10250-10252/tcp"]
            # Reload firewall rules
            - ["--reload"]
          bypassFirewalldWarning: true
          disableFirewalld: false
          hardFailOnFirewalld: false
          preserveConfig: false
  3. Ensure that IP address ranges 10.96.0.0/22 and 10.32.0.0/22 are locally accessible. See Resolve IP address range conflicts for instructions.
    Note: The minimum sizes for the CIDR blocks used by Puppet Application Manager are:
    • Standalone - /24 for pod and service CIDRs
    • HA - /23 for pod and service CIDRs
    • The default of /22 is recommended to support future expansion
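If the default ranges conflict with your network, see the linked topic for full instructions. As a sketch only, a kURL Installer patch (passed via -s installer-spec-file=patch.yaml, as described above) can select different ranges; the field names are assumed from the kURL Installer spec, and the 172.x ranges below are placeholders:

```yaml
apiVersion: cluster.kurl.sh/v1beta1
kind: Installer
metadata:
  name: patch
spec:
  kubernetes:
    serviceCIDR: 172.16.0.0/22   # placeholder service CIDR
  weave:
    podCIDR: 172.20.0.0/22       # placeholder pod CIDR
```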
  4. Ensure that the nodes can resolve their own hostnames, through either local host mapping or a reachable DNS server.
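A quick way to verify that a node resolves its own hostname (assuming getent is available):

```shell
# Check that this node resolves its own hostname (via /etc/hosts or DNS).
name=$(hostname)
if getent hosts "$name" >/dev/null; then
  echo "$name resolves"
else
  echo "$name does not resolve; add it to /etc/hosts or your DNS server" >&2
fi
```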
  5. Set all nodes used in your HA implementation to the UTC timezone.
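To check a node's current timezone (the timedatectl command in the message assumes a systemd host):

```shell
# Report the current timezone; PAM HA nodes should be set to UTC.
if [ "$(date +%Z)" = "UTC" ]; then
  echo "timezone is UTC"
else
  echo "not UTC; on systemd hosts run: sudo timedatectl set-timezone UTC" >&2
fi
```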
  6. If you use the puppetlabs/firewall module to manage your cluster's firewall rules with Puppet, be advised that purging unknown rules breaks Kubernetes communication. To avoid this, apply the puppetlabs/pam_firewall module before installing Puppet Application Manager.

This installation process results in a basic Puppet Application Manager instance that is configured for optional high availability. Installation takes several (mostly hands-off) minutes to complete.
  1. Install and configure a load balancer (or two if you want to segment internal and external traffic - for more information, see Architecture overview). Round-robin load balancing is sufficient. For an HA cluster, the following is required:
    • A network (L4, TCP) load balancer for port 6443 across primary nodes. This is required for Kubernetes components to continue operating in the event that a node fails. The port is only accessed by the Kubernetes nodes and any admins using kubectl.

    • A network (L4, TCP) or application (L7, HTTP/S) load balancer for ports 80 and 443 across all primaries and secondaries. This maintains access to applications in the event of a node failure. Include 8800 if you want external access to the Puppet Application Manager UI.

      Note: Include port 8000 for webhook callbacks if you are installing Continuous Delivery for PE.
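As an illustration only (not from the PAM installer), both requirements above could be met with an L4 load balancer such as HAProxy. All hostnames, IP addresses, and timeouts below are placeholders; repeat the application section for port 80, and for 8800 and 8000 if you expose them:

```
# Hypothetical HAProxy config for an HA PAM cluster (addresses are examples).
defaults
    mode tcp
    timeout connect 5s
    timeout client  60s
    timeout server  60s

# Kubernetes API across primary nodes only
frontend k8s-api
    bind *:6443
    default_backend k8s-api-primaries

backend k8s-api-primaries
    balance roundrobin
    server primary1 10.0.0.11:6443 check
    server primary2 10.0.0.12:6443 check
    server primary3 10.0.0.13:6443 check

# Application traffic across all primaries and secondaries
frontend apps-https
    bind *:443
    default_backend apps-https-nodes

backend apps-https-nodes
    balance roundrobin
    server node1 10.0.0.11:443 check
    server node2 10.0.0.12:443 check
    server node3 10.0.0.13:443 check
```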
  2. From a workstation with internet access, download the cluster installation bundle (note that this bundle is ~4 GB):
    https://k8s.kurl.sh/bundle/puppet-application-manager.tar.gz
  3. Copy the installation bundle to your primary and secondary nodes and unpack it:
    tar xzf puppet-application-manager.tar.gz
  4. Run the installation command:
    cat install.sh | sudo bash -s airgap 
    Note: An unformatted storage device is required. This can be either a partition or raw storage device.

    By default this installation automatically uses devices (under /dev) matching the pattern vd[b-z]. Attach a device to each host. Only devices that match the pattern, and are unformatted, are used.

    If necessary, you can override this pattern by providing a patch file during installation: append -s installer-spec-file=patch.yaml to the installation command. For example, the following patch matches standard SCSI disks instead:

    apiVersion: cluster.kurl.sh/v1beta1
    kind: Installer
    metadata:
      name: patch
    spec:
      rook:
        blockDeviceFilter: sd[b-z] # for standard SCSI disks
    1. When prompted for a load balancer address, enter the address of the DNS entry for your load balancer.
    2. The installation script prints the address and password (only shown once, so make careful note of it) for Puppet Application Manager:
      ---
      Kotsadm: http://<PUPPET APPLICATION MANAGER ADDRESS>:8800
      Login with password (will not be shown again): <PASSWORD>
      ---
      Note: If you lose this password or wish to change it, see Reset the Puppet Application Manager password for instructions.
  5. Follow the instructions that the install script prints after the following line:
    To add MASTER nodes to this installation, copy and unpack this bundle on your other nodes, and run the following: 
    cat ./join.sh | sudo bash -s airgap kubernetes-master-address=...
  6. Add the two new nodes to your load balancer.
  7. Navigate to the Puppet Application Manager UI using the address provided by the installation script (http://<PUPPET APPLICATION MANAGER ADDRESS>:8800) and follow the prompts.
    The Puppet Application Manager UI is where you manage Puppet applications. You’ll be guided through the process of setting up SSL certificates, uploading a license, and checking to make sure your infrastructure meets application system requirements.
What to do next

Follow the instructions for installing your Puppet applications on Puppet Application Manager.

For more information on installing Continuous Delivery for PE offline, see Install Continuous Delivery for PE in an offline environment.

For more information on installing Comply offline, see Install Comply offline.