PAM HA offline install
Use these instructions to install Puppet Application Manager (PAM) in an air-gapped or offline environment where the Puppet Application Manager host server does not have direct access to the internet.
- Review the Puppet Application Manager system requirements.
For HA implementations, each server must meet the following minimum requirements. Add secondaries only after setting up three primaries:
Primary:
- Memory: 7 GB + application requirements
- Storage:
  - At least 50 GB on an unformatted storage device (such as a partition or raw device) + additional application-specific storage
  - At least 100 GB for /var/lib. This is primarily divided among:
    - 2 GB for
    - 4 GB for
    - 32 GB for
    - 40 GB for
  - Note: The Ceph storage back-end prefers the file system containing /var/lib/rook to remain below 70% utilization. SSDs (or similarly low-latency storage) are recommended for
- CPUs: 4 + application requirements
- Open ports:
  - TCP: 443, 2379, 2380, 6443, 6783, 8000, 8800, 9001 (offline only), and 10250
  - UDP: 6783, 6784

Secondary:
- Memory: 1.5 GB + application requirements
- Storage: At least 80 GB for /var/lib. This is primarily divided among:
  - 2 GB for
  - 32 GB for
  - 40 GB for
- CPUs: 1 + application requirements

Note: Swap is not supported for use with this version of Puppet Application Manager (PAM). The installation script attempts to disable swap if it is enabled.
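As a quick sanity check before installing, a shell sketch like the following (my own example, not part of the official tooling) lists block devices matching the installer's default vd[b-z] pattern and flags any that already carry a filesystem, since Ceph needs an unformatted partition or raw device:

```shell
#!/bin/bash
# Hypothetical helper: report whether a device is usable for Ceph storage.
# An empty FSTYPE from lsblk means the device is unformatted.
classify_device() {  # usage: classify_device <name> <fstype>
  if [ -z "$2" ]; then
    echo "OK: /dev/$1 is unformatted and available for Ceph"
  else
    echo "SKIP: /dev/$1 already has a $2 filesystem"
  fi
}

# Check every block device matching the installer's default vd[b-z] pattern.
for dev in /dev/vd[b-z]; do
  [ -b "$dev" ] || continue
  classify_device "$(basename "$dev")" "$(lsblk -ndo FSTYPE "$dev")"
done
```

Run this on each node after attaching the storage device; any "SKIP" line means the device will be ignored by the installer.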
- (Optional) If necessary, prepare additional steps related to SELinux and Firewalld.
The PAM installation script disables SELinux and Firewalld by default. If you want to keep SELinux enabled, append the -s preserve-selinux-config switch to the PAM install command. This may require additional configuration to adapt the SELinux policy to the installation.
If you want to keep Firewalld enabled:
Make sure Firewalld is installed on your system.
To prevent the installation from disabling Firewalld, provide a patch file to the PAM install command using -s installer-spec-file=patch.yaml, where patch.yaml is the name of your patch file. For reference, here's an example patch file that enables Firewalld during installation, starts the service if it isn't running, and adds rules to open the relevant ports:
```yaml
apiVersion: cluster.kurl.sh/v1beta1
kind: Installer
metadata:
  name: patch
spec:
  firewalldConfig:
    firewalld: enabled
    command: ["/bin/bash", "-c"]
    args: ["echo 'net.ipv4.ip_forward = 1' | tee -a /etc/sysctl.conf && sysctl -p"]
    firewalldCmds:
      - ["--permanent", "--zone=trusted", "--add-interface=weave"]
      - ["--zone=external", "--add-masquerade"]
      # SSH port
      - ["--permanent", "--zone=public", "--add-port=22/tcp"]
      # HTTPS port
      - ["--permanent", "--zone=public", "--add-port=443/tcp"]
      # Kubernetes etcd ports
      - ["--permanent", "--zone=public", "--add-port=2379-2380/tcp"]
      # Kubernetes API port
      - ["--permanent", "--zone=public", "--add-port=6443/tcp"]
      # Weave Net port
      - ["--permanent", "--zone=public", "--add-port=6783/udp"]
      # Weave Net ports
      - ["--permanent", "--zone=public", "--add-port=6783-6784/tcp"]
      # CD4PE Webhook callback port (uncomment line below if needed)
      # - ["--permanent", "--zone=public", "--add-port=8000/tcp"]
      # KOTS UI port
      - ["--permanent", "--zone=public", "--add-port=8800/tcp"]
      # CD4PE Local registry port (offline only, uncomment line below if needed)
      # - ["--permanent", "--zone=public", "--add-port=9001/tcp"]
      # Kubernetes component ports (kubelet, kube-scheduler, kube-controller)
      - ["--permanent", "--zone=public", "--add-port=10250-10252/tcp"]
      # Reload firewall rules
      - ["--reload"]
    bypassFirewalldWarning: true
    disableFirewalld: false
    hardFailOnFirewalld: false
    preserveConfig: false
```
Ensure that the IP address range 10.32.0.0/22 is locally accessible. See Resolve IP address range conflicts for instructions.
Note: The minimum sizes for CIDR blocks used by Puppet Application Manager are:
- Standalone - /24 for pod and service CIDRs
- HA - /23 for pod and service CIDRs
- The default of /22 is recommended to support future expansion
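If you are unsure whether an address your infrastructure relies on falls inside that range, a small shell sketch (my own illustration, not part of the PAM tooling) can test CIDR membership with integer arithmetic:

```shell
#!/bin/bash
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  set -- $(printf '%s' "$1" | tr '.' ' ')
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# Succeed if an address lies inside a network.
in_cidr() {  # usage: in_cidr <ip> <network> <prefix-len>
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "$2")
  mask=$(( 0xFFFFFFFF << (32 - $3) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

# Example: flag a hypothetical internal host as a potential conflict.
if in_cidr 10.32.1.5 10.32.0.0 22; then
  echo "10.32.1.5 is inside 10.32.0.0/22 - potential conflict"
fi
```

Run `in_cidr <address> 10.32.0.0 22` for any address you need reachable; a success (exit 0) indicates an overlap you must resolve before installing.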
- Ensure that the nodes can resolve their own hostnames, through either local host mapping or a reachable DNS server.
- Set all nodes used in your HA implementation to the UTC timezone.
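The two prerequisites above can be checked on each node with a short sketch like this (my own example; it assumes getent is available and uses timedatectl where systemd is present):

```shell
#!/bin/bash
# 1. The node must resolve its own hostname (local hosts mapping or DNS).
if getent hosts "$(hostname)" >/dev/null; then
  echo "hostname $(hostname) resolves"
else
  echo "WARNING: $(hostname) does not resolve; add it to /etc/hosts or DNS" >&2
fi

# 2. The node should use the UTC timezone. Set it (as root) with:
#      timedatectl set-timezone UTC
# Report the current timezone, falling back to `date` where systemd is absent:
tz=$(timedatectl show --property=Timezone --value 2>/dev/null || date +%Z)
echo "current timezone: $tz"
```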
If you use the puppetlabs/firewall module to manage your cluster's firewall rules with Puppet, be advised that purging unknown rules breaks Kubernetes communication. To avoid this, apply the puppetlabs/pam_firewall module before installing Puppet Application Manager.
Install and configure a load balancer (or two if you want to segment internal and external traffic - for more information, see Architecture overview). Round-robin load balancing is sufficient. For an HA cluster, the following is required:
- A network (L4, TCP) load balancer for port 6443 across primary nodes. This is required for Kubernetes components to continue operating in the event that a node fails. The port is only accessed by the Kubernetes nodes and any admins using it.
- A network (L4, TCP) or application (L7, HTTP/S) load balancer for ports 80 and 443 across all primaries and secondaries. This maintains access to applications in the event of a node failure. Include 8800 if you want external access to the Puppet Application Manager UI.
Note: Include port 8000 for webhook callbacks if you are installing Continuous Delivery for PE.
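The documentation does not prescribe a particular load balancer. As one illustrative sketch, an HAProxy configuration fragment covering the two required balancers could look like the following (HAProxy itself, the server names, and the placeholder 192.0.2.x addresses are my assumptions, not from this guide):

```
# /etc/haproxy/haproxy.cfg (fragment) - adjust addresses to your nodes
frontend kube_api
    bind *:6443
    mode tcp
    default_backend kube_api_primaries

backend kube_api_primaries
    mode tcp
    balance roundrobin
    server primary1 192.0.2.11:6443 check
    server primary2 192.0.2.12:6443 check
    server primary3 192.0.2.13:6443 check

frontend https_apps
    bind *:443
    mode tcp
    default_backend app_nodes

backend app_nodes
    mode tcp
    balance roundrobin
    server node1 192.0.2.11:443 check
    server node2 192.0.2.12:443 check
    server node3 192.0.2.13:443 check
```

Add equivalent frontend/backend pairs for ports 80, 8800, and (for Continuous Delivery for PE) 8000 as needed.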
From a workstation with internet access, download the cluster installation bundle (note that this bundle is ~4 GB).
Copy the installation bundle to your primary and secondary nodes and unpack it:

```
tar xzf puppet-application-manager.tar.gz
```
Run the installation command:

```
cat install.sh | sudo bash -s airgap
```

Note: An unformatted storage device is required. This can be either a partition or a raw storage device. By default, the installation automatically uses devices (under /dev) matching the pattern vd[b-z]. Attach a device to each host. Only devices that match the pattern and are unformatted are used.
If necessary, you can override this pattern by providing a patch during installation; append -s installer-spec-file=patch.yaml to the installation command. For example:

```yaml
apiVersion: cluster.kurl.sh/v1beta1
kind: Installer
metadata:
  name: patch
spec:
  rook:
    blockDeviceFilter: sd[b-z] # for standard SCSI disks
```
- When prompted for a load balancer address, enter the address of the DNS entry for your load balancer.
The installation script prints the address and password (only shown once, so make careful note of it) for Puppet Application Manager:

```
---
Kotsadm: http://<PUPPET APPLICATION MANAGER ADDRESS>:8800
Login with password (will not be shown again): <PASSWORD>
---
```

Note: If you lose this password or wish to change it, see Reset the Puppet Application Manager password for instructions.
Follow the instructions outlined after the following line in the install output:

```
To add MASTER nodes to this installation, copy and unpack this bundle on your other nodes, and run the following:
cat ./join.sh | sudo bash -s airgap kubernetes-master-address=...
```
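Once the other nodes have joined, you can confirm the cluster sees all three primaries. A small sketch (mine, not from this guide; it assumes kubectl is configured on the node) counts nodes reporting Ready:

```shell
#!/bin/bash
# Count lines reporting " Ready " from `kubectl get nodes --no-headers` output.
count_ready() {
  grep -c ' Ready '
}

# Usage on a live cluster (assumes kubectl is configured):
#   ready=$(kubectl get nodes --no-headers | count_ready)
#   [ "$ready" -eq 3 ] && echo "all three primaries are Ready"
```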
- Add the two new nodes to your load balancer.
Navigate to the Puppet Application Manager UI using the address provided by the installation script (http://<PUPPET APPLICATION MANAGER ADDRESS>:8800) and follow the prompts.
The Puppet Application Manager UI is where you manage Puppet applications. You'll be guided through the process of setting up SSL certificates, uploading a license, and checking to make sure your infrastructure meets application system requirements.
Follow the instructions for installing your Puppet applications on Puppet Application Manager.
For more information on installing Continuous Delivery for PE offline, see Install Continuous Delivery for PE in an offline environment.
For more information on installing Comply offline, see Install Comply offline.