PAM HA offline installation

Use these instructions to install Puppet Application Manager (PAM) in an air-gapped or offline environment where the Puppet Application Manager host server does not have direct access to the internet.

Before you begin
  1. Review the Puppet Application Manager system requirements.
  2. Note that Swap is not supported for use with this version of Puppet Application Manager (PAM). The installation script attempts to disable Swap if it is enabled.
  3. (Optional) If necessary, prepare additional steps related to SELinux and Firewalld:

    The PAM installation script disables SELinux and Firewalld by default. If you want to keep SELinux enabled, append the -s preserve-selinux-config switch to the PAM install command. This may require additional configuration to adapt SELinux policy to the installation.

    If you want to keep Firewalld enabled:

    1. Make sure Firewalld is installed on your system.

    2. To prevent the installation from disabling Firewalld, provide a patch file to the PAM install command using -s installer-spec-file=patch.yaml, where patch.yaml is the name of your patch file. For reference, here's an example patch file that enables Firewalld during installation, starts the service if it isn't running, and adds rules to open relevant ports:
      apiVersion: cluster.kurl.sh/v1beta1
      kind: Installer
      metadata:
        name: patch
      spec:
        firewalldConfig:
          firewalld: enabled
          command: ["/bin/bash", "-c"]
          args: ["echo 'net.ipv4.ip_forward = 1' | tee -a /etc/sysctl.conf && sysctl -p"]
          firewalldCmds:
            - ["--permanent", "--zone=trusted", "--add-interface=flannel.1"]
            - ["--zone=external", "--add-masquerade"]
            # SSH port
            - ["--permanent", "--zone=public", "--add-port=22/tcp"]
            # HTTPS port
            - ["--permanent", "--zone=public", "--add-port=443/tcp"]
            # Kubernetes etcd port
            - ["--permanent", "--zone=public", "--add-port=2379-2380/tcp"]
            # Kubernetes API port
            - ["--permanent", "--zone=public", "--add-port=6443/tcp"]
            # Flannel Net port
            - ["--permanent", "--zone=public", "--add-port=8472/udp"]
            # CD4PE Webhook callback port (uncomment line below if needed)
            # - ["--permanent", "--zone=public", "--add-port=8000/tcp"]
            # KOTS UI port
            - ["--permanent", "--zone=public", "--add-port=8800/tcp"]
            # CD4PE Local registry port (offline only, uncomment line below if needed)
            # - ["--permanent", "--zone=public", "--add-port=9001/tcp"]
            # Kubernetes component ports (kubelet, kube-scheduler, kube-controller)
            - ["--permanent", "--zone=public", "--add-port=10250-10252/tcp"]
            # Reload firewall rules
            - ["--reload"]
          bypassFirewalldWarning: true
          disableFirewalld: false
          hardFailOnFirewalld: false
          preserveConfig: false
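Once installation finishes with Firewalld preserved, you can spot-check that the patch's rules took effect. A minimal sketch (the check_firewalld_ports helper is our own name; it degrades gracefully on hosts where Firewalld is absent or stopped):

```shell
#!/usr/bin/env bash
# Print the ports opened in the public zone, or a warning when
# Firewalld is not installed or not running on this host.
check_firewalld_ports() {
  if command -v firewall-cmd >/dev/null 2>&1 \
      && firewall-cmd --state >/dev/null 2>&1; then
    # --list-ports shows ports added with --add-port
    firewall-cmd --zone=public --list-ports
  else
    echo "WARN: Firewalld is not installed or not running"
  fi
}

check_firewalld_ports
```

Compare the output against the ports listed in the patch file (22, 443, 6443, and so on).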
  4. Ensure that IP address ranges 10.96.0.0/22 and 10.32.0.0/22 are locally accessible. See Resolve IP address range conflicts for instructions.
    Note: The minimum sizes for CIDR blocks used by Puppet Application Manager are:
    • Standalone - /24 for pod and service CIDRs
    • HA - /23 for pod and service CIDRs
    • The default of /22 is recommended to support future expansion
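A quick way to sanity-check this step is to scan the host's routing table for routes inside the default ranges. A rough sketch of our own (it matches only the second octets covered by each /22 rather than doing full CIDR arithmetic):

```shell
#!/usr/bin/env bash
# Warn if any existing route overlaps the default PAM CIDRs.
# 10.96.0.0/22 spans 10.96.x-10.99.x; 10.32.0.0/22 spans 10.32.x-10.35.x.
check_cidr_conflicts() {
  local routes
  routes=$(ip route 2>/dev/null || true)
  if echo "$routes" | grep -qE '^10\.(9[6-9]|3[2-5])\.'; then
    echo "CONFLICT: existing routes overlap the default PAM CIDRs"
  else
    echo "OK: no overlapping routes found"
  fi
}

check_cidr_conflicts
```

If a conflict is reported, follow the Resolve IP address range conflicts instructions before installing.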
  5. Ensure that the nodes can resolve their own hostnames, through either local host mapping or a reachable DNS server.
  6. Set all nodes used in your HA implementation to the UTC timezone.
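The two checks above (hostname resolution and UTC timezone) lend themselves to a short preflight script on each node. A sketch assuming getent and date are available; the preflight_node helper is our own name:

```shell
#!/usr/bin/env bash
# Preflight: confirm the node resolves its own hostname and
# is set to the UTC timezone.
preflight_node() {
  local host tz
  host=$(hostname)
  if getent hosts "$host" >/dev/null 2>&1; then
    echo "OK: $host resolves"
  else
    echo "WARN: $host does not resolve; add a hosts entry or DNS record"
  fi

  tz=$(date +%Z)
  if [ "$tz" = "UTC" ]; then
    echo "OK: timezone is UTC"
  else
    echo "WARN: timezone is $tz; run 'timedatectl set-timezone UTC'"
  fi
}

preflight_node
```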
  7. If you use the puppetlabs/firewall module to manage your cluster's firewall rules with Puppet, be advised that purging unknown rules breaks Kubernetes communication. To avoid this, apply the puppetlabs/pam_firewall module before installing Puppet Application Manager.

  8. If you're restoring a backup from a previous cluster, make sure you include the kurl-registry-ip=<YOUR_IP_ADDRESS> installation option. For more information, see Migrating PAM data to a new system.

This installation process results in a basic Puppet Application Manager instance that is configured for optional high availability. Installation takes several minutes (mostly hands-off) to complete.

For more context about HA components and structure, refer to the HA architecture section of the Architecture overview.

  1. Install and configure a load balancer (or two if you want to segment internal and external traffic - for more information, see Architecture overview). Round-robin load balancing is sufficient. For an HA cluster, the following is required:
    • A network (L4, TCP) load balancer for port 6443 across primary nodes. This is required for Kubernetes components to continue operating in the event that a node fails. The port is only accessed by the Kubernetes nodes and any admins using kubectl.

    • A network (L4, TCP) or application (L7, HTTP/S) load balancer for ports 80 and 443 across all primaries and secondaries. This maintains access to applications in the event of a node failure. Include port 8800 if you want external access to the Puppet Application Manager UI.

      Note: Include port 8000 for webhook callbacks if you are installing Continuous Delivery for PE.
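As a concrete illustration of this step, here is a minimal HAProxy sketch of the two load balancers, assuming three hypothetical primary nodes at 10.0.10.11-13. Any load balancer with round-robin support works equally well; all names and addresses below are placeholders:

```text
# Network (L4) load balancer for the Kubernetes API across primary nodes
frontend k8s_api
    bind *:6443
    mode tcp
    default_backend k8s_api_primaries

backend k8s_api_primaries
    mode tcp
    balance roundrobin
    server primary1 10.0.10.11:6443 check
    server primary2 10.0.10.12:6443 check
    server primary3 10.0.10.13:6443 check

# Network (L4) load balancer for application traffic across all nodes;
# repeat for port 80, and for 8800 if you want external UI access
frontend apps_https
    bind *:443
    mode tcp
    default_backend apps_https_nodes

backend apps_https_nodes
    mode tcp
    balance roundrobin
    server primary1 10.0.10.11:443 check
    server primary2 10.0.10.12:443 check
    server primary3 10.0.10.13:443 check
```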
  2. From a workstation with internet access, download the cluster installation bundle (note that this bundle is ~4GB):
    https://k8s.kurl.sh/bundle/puppet-application-manager.tar.gz
  3. Copy the installation bundle to your primary and secondary nodes and unpack it:
    tar xzf puppet-application-manager.tar.gz
  4. Run the installation command:
    cat install.sh | sudo bash -s airgap 
    Note: An unformatted, unpartitioned storage device is required.

    By default, the installation automatically uses devices (under /dev) matching the pattern vd[b-z]. Attach a device to each host. Only devices that match the pattern and are unformatted are used.

    If necessary, you can override this pattern by providing a patch during installation: append -s installer-spec-file=patch.yaml to the installation command, where patch.yaml is a file like the following, which matches sd[b-z] devices instead:

    apiVersion: cluster.kurl.sh/v1beta1
    kind: Installer
    metadata: 
      name: patch
    spec: 
      rook: 
        blockDeviceFilter: "sd[b-z]"
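To see which devices on a host would satisfy the default filter, you can list unformatted block devices whose names match the pattern. A sketch of our own using lsblk (adjust the regex if you patched the filter):

```shell
#!/usr/bin/env bash
# List block devices matching vd[b-z] that carry no filesystem,
# i.e. candidates for the installer's storage layer.
list_candidate_devices() {
  if command -v lsblk >/dev/null 2>&1; then
    # -d: whole devices only, -n: no header, -o: NAME and FSTYPE columns
    lsblk -dno NAME,FSTYPE 2>/dev/null | awk '$1 ~ /^vd[b-z]$/ && $2 == ""' || true
  else
    echo "lsblk not available"
  fi
}

list_candidate_devices
```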
    1. When prompted for a load balancer address, enter the address of the DNS entry for your load balancer.
    2. The installation script prints the address and password (only shown once, so make careful note of it) for Puppet Application Manager:
      ---
      Kotsadm: http://<PUPPET APPLICATION MANAGER ADDRESS>:8800
      Login with password (will not be shown again): <PASSWORD>
      ---
      Note: If you lose this password or wish to change it, see Reset the PAM password for instructions.
  5. Add two additional primary nodes to your offline installation using the instructions provided in the install script:
    To add MASTER nodes to this installation, copy and unpack this bundle on your other nodes, and run the following:
    cat ./join.sh | sudo bash -s airgap kubernetes-master-address=...
  6. Add the two new nodes to your load balancer.
  7. Navigate to the Puppet Application Manager UI using the address provided by the installation script (http://<PUPPET APPLICATION MANAGER ADDRESS>:8800) and follow the prompts.
    The Puppet Application Manager UI is where you manage Puppet applications. You’ll be guided through the process of setting up SSL certificates, uploading a license, and checking to make sure your infrastructure meets application system requirements.
What to do next

Follow the instructions for installing your Puppet applications on Puppet Application Manager. For more information, see Install applications via the PAM UI.

For more information on installing Continuous Delivery for PE offline, see Install Continuous Delivery for PE in an offline environment.

For more information on installing Comply offline, see Install Comply offline.