PAM system requirements

You can install Puppet Application Manager (PAM) on a Puppet-supported cluster or add PAM to a customer-supported cluster. Before installing PAM, ensure that your system meets these requirements.

Customer-supported cluster hardware requirements

The following Kubernetes distributions are supported:
  • Google Kubernetes Engine

  • Amazon Elastic Kubernetes Service (EKS)

If you use a different distribution, contact Puppet Support for more information on compatibility with PAM.

Application requirements:
  • Continuous Delivery for Puppet Enterprise (PE): 3 CPU, 8 GB memory, 280 GB storage. Ports: Ingress, NodePort 8000 (the NodePort is configurable).
  • Puppet Comply®: 7 CPU, 7 GB memory, 35 GB storage. Ports: Ingress, NodePort 30303 (the NodePort is configurable).
Make sure that your Kubernetes cluster meets the minimum requirements:
  • Kubernetes version 1.24-1.26.
  • A default storage class that can be used for relocatable storage.
  • A standard Ingress controller that supports websockets (we have tested with Project Contour and NGINX).
  • We currently test and support Google Kubernetes Engine (GKE) clusters.
Cluster ports: In addition to the NodePorts used by your Puppet applications, make sure that TCP port 443 is open for your ingress controller.
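
To confirm a cluster meets these requirements before installing, you can run a few quick checks. This is a minimal sketch, assuming kubectl is configured against the target cluster; the grep pattern for the ingress controller is illustrative and depends on how yours is deployed:

# Confirm the cluster is running Kubernetes 1.24-1.26
kubectl version

# Confirm a default storage class exists (look for "(default)" in the output)
kubectl get storageclass

# Look for a running ingress controller (Project Contour or NGINX, for example)
kubectl get pods --all-namespaces | grep -Ei 'contour|ingress-nginx'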

Puppet-supported HA cluster hardware requirements

A high availability (HA) configuration uses multiple servers to provide availability in the event of a server failure. A majority of servers must be available to preserve service availability; for example, a three-server cluster remains available if one server fails, but not if two fail. Below are suggested configurations for each application.

Continuous Delivery for Puppet Enterprise (PE)

Three servers (referred to as primaries during installation) with the following minimum requirements:
CPU: 6 CPU

Memory: 10 GB

Storage:

100 GB on an unformatted storage device.

1 GB for /var/log/apiserver for Kubernetes audit logs.

An additional 140 GB for /var/lib. You can use separate filesystems if necessary, but it is not a requirement to do so. For your reference, the usage is roughly divided as follows:
  • 2 GB for /var/lib/etcd
  • 10 GB for /var/lib/rook (plus buffer)
  • 32 GB for /var/lib/kubelet
  • 80 GB for /var/lib/containerd
Note: The storage backend performs best when the filesystem containing /var/lib/rook remains below 70% utilization.

SSDs (or similarly low-latency storage) are recommended for /var/lib/etcd and /var/lib/rook.

Open ports:

TCP: 80, 443, 2379, 2380, 6443, 8000, 8800, and 10250

UDP: 8472
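
If your servers run firewalld, opening these ports might look like the following sketch. This is an illustration only, not the required method; adapt it to your firewall tooling, and note that the puppetlabs/pam_firewall module described later on this page can manage these rules for you:

# Open the required ports on each server, then reload the firewall
for port in 80 443 2379 2380 6443 8000 8800 10250; do
    firewall-cmd --permanent --add-port=${port}/tcp
done
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --reload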

Puppet Comply

Three servers (referred to as primaries during installation) with the following minimum requirements:
CPU: 7 CPU

Memory: 10 GB

Storage:

100 GB on an unformatted storage device.

1 GB for /var/log/apiserver for Kubernetes audit logs.

An additional 140 GB for /var/lib. You can use separate filesystems if necessary, but it is not a requirement to do so. For your reference, the usage is roughly divided as follows:
  • 2 GB for /var/lib/etcd
  • 10 GB for /var/lib/rook (plus buffer)
  • 32 GB for /var/lib/kubelet
  • 80 GB for /var/lib/containerd
Note: The storage backend performs best when the filesystem containing /var/lib/rook remains below 70% utilization.

SSDs (or similarly low-latency storage) are recommended for /var/lib/etcd and /var/lib/rook.

Open ports:

TCP: 80, 443, 2379, 2380, 6443, 8800, 10250, and 30303

UDP: 8472

Continuous Delivery for Puppet Enterprise (PE) and Puppet Comply

Three servers (referred to as primaries during installation) with the following minimum requirements:
CPU: 8 CPU

Memory: 13 GB

Storage:

150 GB on an unformatted storage device.

1 GB for /var/log/apiserver for Kubernetes audit logs.

An additional 140 GB for /var/lib. You can use separate filesystems if necessary, but it is not a requirement to do so. For your reference, the usage is roughly divided as follows:
  • 2 GB for /var/lib/etcd
  • 10 GB for /var/lib/rook (plus buffer)
  • 32 GB for /var/lib/kubelet
  • 80 GB for /var/lib/containerd
Note: The storage backend performs best when the filesystem containing /var/lib/rook remains below 70% utilization.

SSDs (or similarly low-latency storage) are recommended for /var/lib/etcd and /var/lib/rook.

Open ports:

TCP: 80, 443, 2379, 2380, 6443, 8000, 8800, 10250, and 30303

UDP: 8472

For a detailed example of an HA configuration running Continuous Delivery for PE and Puppet Comply, see Example of an HA cluster capable of running Continuous Delivery for PE and Comply.

Networking requirements

Gigabit Ethernet (1GbE) and a latency of less than 10 milliseconds (ms) between cluster members is sufficient for most deployments. For more information on networking for specific Puppet Application Manager components, see the documentation for Ceph and etcd.
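
To verify latency between cluster members, a simple round-trip check is usually enough. A minimal sketch (replace the placeholder with another cluster member's hostname or IP address):

# Average round-trip time should be well under 10 ms
ping -c 10 <other-cluster-member>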

Cluster port requirements

Puppet Application Manager (PAM) uses the following ports in an HA cluster architecture:

Puppet application ports:
  • 443 (TCP): Web UI. Relies on Server Name Indication to route requests to the application. Source: browser.
Continuous Delivery for Puppet Enterprise (PE) ports:
  • 8000 (TCP): Webhook service. Source: source control.
Puppet Comply ports:
  • 30303 (TCP): Communication with Puppet Enterprise (PE). Source: PE instance.
Platform ports:
  • 2379, 2380 (TCP): High availability (HA) communication. Only needs to be open between the cluster's primary nodes. Source: etcd on the Kubernetes host.
  • 6443 (TCP): Kubernetes API. Might be useful to expose to workstations. Source: admin workstation.
  • 8472 (UDP): Kubernetes networking (Flannel). Source: Kubernetes host.
  • 8800 (TCP): PAM UI. Source: admin browser.
  • 9001 (TCP): Internal registry, offline installs only. Requires configuring an Ingress to use this port. Source: Kubernetes host.
  • 9090 (TCP): Rook CSI RBD plugin metrics. Source: Kubernetes host.
  • 10250 (TCP): Kubernetes cluster management. Only communicates in one direction, from a primary to other primaries and secondaries. Source: Kubernetes host.

Additionally, these ports are configured by default: 30900 (Prometheus UI), 30902 (Grafana UI), and 30903 (Alertmanager UI).

For Kubernetes-specific information, refer to Networking Requirements in the kURL documentation.
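
To spot-check that key platform ports are reachable, you can probe them from the relevant source machine. A sketch using curl and nc (replace <primary-ip> with one of your primary nodes; an HTTP error response such as 401 or 403 still confirms the port is reachable):

# From an admin workstation: Kubernetes API and PAM admin console
curl -k --connect-timeout 5 https://<primary-ip>:6443/healthz
curl -k --connect-timeout 5 -o /dev/null -w '%{http_code}\n' https://<primary-ip>:8800

# From another cluster node: etcd HA communication ports
nc -zv <primary-ip> 2379 2380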

IP address range requirements

Important: Puppet Application Manager must be installed on nodes with static IP assignments because IP addresses cannot be changed after installation.

Ensure that IP address ranges 10.96.0.0/22 and 10.32.0.0/22 are locally accessible. See Resolve IP address range conflicts for instructions.

Note: The minimum sizes for the CIDR blocks used by PAM are:
  • /23 for pod and service CIDRs

  • Default of /22 is recommended to support future expansion
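
One way to spot an obvious conflict on a node is to look for existing routes that overlap the default ranges. A minimal, approximate sketch (it matches any 10.96.x.x or 10.32.x.x route, which is slightly broader than the /22 ranges):

# No output means no obviously conflicting routes on this host
ip route show | grep -E '10\.(96|32)\.'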

Antivirus and antimalware considerations

Antivirus and antimalware software can impact PAM and its applications or prevent them from functioning properly.

To avoid issues, exclude the following directories from antivirus and antimalware tools that scan disk write operations:
  • /var/lib/rook
  • /var/lib/kubelet
  • /var/lib/containerd
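
How you express these exclusions depends on your scanner; for on-access scanning, consult your vendor's documentation. As an illustration only, a one-off recursive scan with ClamAV's clamscan might exclude the directories like this:

# Scan the filesystem while skipping the PAM data directories
clamscan -r \
  --exclude-dir='^/var/lib/rook' \
  --exclude-dir='^/var/lib/kubelet' \
  --exclude-dir='^/var/lib/containerd' \
  /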

Firewall modules

If you use the puppetlabs/firewall module to manage your cluster's firewall rules with Puppet, be advised that purging unknown rules from chains breaks Kubernetes communication. To avoid this, apply the puppetlabs/pam_firewall module before installing Puppet Application Manager.

If you've already installed PAM, apply the pam_firewall module, then restart kubelet and recreate the kube-proxy and flannel pods so that their iptables rules are rebuilt, by running the following on a primary:
systemctl restart kubelet
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
kubectl -n kube-flannel delete pod -l app=flannel

For more information, see the PAM firewall module.

Supported operating systems

Puppet Application Manager and the applications it supports can be installed on these operating systems:

  • Amazon Linux: 2
  • CentOS: 7.4, 7.5, 7.6, 7.7, 7.8, 7.9; 8.0, 8.1, 8.2, 8.3, 8.4
  • Oracle Linux: 7.4, 7.5, 7.6, 7.7, 7.8, 7.9; 8.0, 8.1, 8.2, 8.3, 8.4, 8.5, 8.6, 8.7, 8.8
  • Red Hat Enterprise Linux (RHEL): 7.4, 7.5, 7.6, 7.7, 7.8, 7.9; 8.0, 8.1, 8.2, 8.3, 8.4, 8.5, 8.6, 8.7, 8.8; 9.0, 9.1, 9.2
  • Rocky Linux: 9.0, 9.1, 9.2
  • Ubuntu (General availability kernels): 18.04, 20.04, 22.04

Puppet-supported standalone hardware requirements

Here are the suggested configurations for standalone installations.

Continuous Delivery for Puppet Enterprise (PE)

CPU: 4 CPU

Memory: 8 GB

Storage: 220 GB for /var/lib and /var/openebs. This is primarily divided among:
  • 2 GB for /var/lib/etcd
  • 32 GB for /var/lib/kubelet
  • 80 GB for /var/lib/containerd
  • 100 GB for /var/openebs

Open ports:

TCP: 80, 443, 2379, 2380, 6443, 8000, 8800, and 10250

UDP: 8472

Puppet Comply

CPU: 7 CPU

Memory: 7 GB

Storage: 220 GB for /var/lib and /var/openebs. This is primarily divided among:
  • 2 GB for /var/lib/etcd
  • 32 GB for /var/lib/kubelet
  • 80 GB for /var/lib/containerd
  • 100 GB for /var/openebs

Open ports:

TCP: 80, 443, 2379, 2380, 6443, 8800, 10250, and 30303

UDP: 8472

Cluster port requirements

Puppet Application Manager (PAM) uses the following ports in a standalone architecture:

Puppet application ports:
  • 443 (TCP): Web UI. Relies on Server Name Indication to route requests to the application. Source: browser.
Continuous Delivery for Puppet Enterprise (PE) ports:
  • 8000 (TCP): Webhook service. Source: source control.
Puppet Comply ports:
  • 30303 (TCP): Communication with Puppet Enterprise (PE). Source: PE instance.
Platform ports:
  • 6443 (TCP): Kubernetes API. Might be useful to expose to workstations. Source: admin workstation.
  • 8472 (UDP): Kubernetes networking (Flannel). Source: Kubernetes host.
  • 8800 (TCP): PAM UI. Source: admin browser.
  • 9001 (TCP): Internal registry, offline installs only. Requires configuring an Ingress to use this port. Source: Kubernetes host.
  • 10250 (TCP): Kubernetes cluster management. Only communicates in one direction, from a primary to other primaries and secondaries. Source: Kubernetes host.

Additionally, these ports are configured by default: 30900 (Prometheus UI), 30902 (Grafana UI), and 30903 (Alertmanager UI).

For Kubernetes-specific information, refer to Networking Requirements in the kURL documentation.

IP address range requirements

Important: Puppet Application Manager must be installed on nodes with static IP assignments because IP addresses cannot be changed after installation.

Ensure that IP address ranges 10.96.0.0/22 and 10.32.0.0/22 are locally accessible. See Resolve IP address range conflicts for instructions.

Note: The minimum sizes for the CIDR blocks used by PAM are:
  • /24 for pod and service CIDRs

  • Default of /22 is recommended to support future expansion

Antivirus and antimalware considerations

Antivirus and antimalware software can impact PAM and its applications or prevent them from functioning properly.

To avoid issues, exclude the following directories from antivirus and antimalware tools that scan disk write operations:
  • /var/openebs
  • /var/lib/kubelet
  • /var/lib/containerd

Firewall modules

If you use the puppetlabs/firewall module to manage your cluster's firewall rules with Puppet, be advised that purging unknown rules from chains breaks Kubernetes communication. To avoid this, apply the puppetlabs/pam_firewall module before installing Puppet Application Manager.

If you've already installed PAM, apply the pam_firewall module, then restart kubelet and recreate the kube-proxy and flannel pods so that their iptables rules are rebuilt, by running the following on a primary:
systemctl restart kubelet
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
kubectl -n kube-flannel delete pod -l app=flannel

For more information, see the PAM firewall module.

Detailed hardware requirements

For additional compute capacity, you can horizontally scale HA and standalone architectures by adding secondary nodes. During installation, only add secondaries after setting up all primaries.

You can add secondaries to HA and standalone architectures; however, in standalone architectures, secondaries do not increase the availability of the application, and data storage services are pinned to the host they start on and cannot be moved.

Here are the baseline requirements to run cluster services on primaries and secondaries. Any Puppet applications require additional resources on top of these requirements.
Primary nodes:

CPU: 4 CPU

Memory: 7 GB

Storage:

At least 50 GB on an unformatted storage device for the Ceph storage backend, in addition to the application-specific storage listed below. This can be satisfied by multiple devices if more storage is needed later, but should be balanced across primaries.

1 GB for /var/log/apiserver for Kubernetes audit logs.

An additional 140 GB for /var/lib. You can use separate filesystems if necessary, but it is not a requirement to do so. For your reference, the usage is roughly divided as follows:
  • 2 GB for /var/lib/etcd
  • 10 GB for /var/lib/rook (plus buffer)
  • 32 GB for /var/lib/kubelet
  • 80 GB for /var/lib/containerd

Note: The Ceph storage backend performs best when the filesystem containing /var/lib/rook remains below 70% utilization; a quick way to check is shown after this table.

SSDs (or similarly low-latency storage) are recommended for /var/lib/etcd and /var/lib/rook.

Open ports:

TCP: 80, 443, 2379, 2380, 6443, 8800, and 10250

UDP: 8472

Secondary nodes:

CPU: 1 CPU

Memory: 1.5 GB

Storage:

1 GB for /var/log/apiserver for Kubernetes audit logs.

120 GB for /var/lib. You can use separate filesystems if necessary, but it is not a requirement to do so. For your reference, the usage is roughly divided as follows:
  • 32 GB for /var/lib/kubelet
  • 80 GB for /var/lib/containerd
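
A quick way to check utilization of these filesystems (if the directories are not separate mounts, df reports the parent filesystem that contains them):

# Keep the filesystem containing /var/lib/rook below 70% utilization
df -h /var/lib/etcd /var/lib/rook /var/lib/kubelet /var/lib/containerd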

Applications are composed of multiple smaller services, so you can divide CPU and memory requirements across multiple servers. The listed ports can be accessed from all primaries and secondaries, but only need to be exposed on nodes you include in your load balancer. Apply application-specific storage to all primary nodes.

Application-specific requirements:
  • Continuous Delivery for Puppet Enterprise (PE): 3 CPU, 8 GB memory, 50 GB storage. Ports: 80, 443, 8000.
  • Puppet Comply: 7 CPU, 7 GB memory, 50 GB storage. Ports: 80, 443, 30303.

The minimum recommended size for a secondary node is 4 CPU and 8 GB of memory to allow some scheduling flexibility for individual services.

Example of an HA cluster capable of running Continuous Delivery for PE and Comply

An HA cluster capable of running both Continuous Delivery for Puppet Enterprise (PE) and Puppet Comply requires 10 CPU and 15 GB of application-specific memory in addition to per-node baselines. You can create a cluster from 4 CPU, 8 GB nodes. Each primary uses all of its CPU and 7 GB of memory for cluster services, leaving 0 CPU and 1 GB of memory for application workloads; each secondary uses 1 CPU and 1.5 GB of memory for cluster services, leaving 3 CPU and 6.5 GB of memory for application workloads. Create the cluster as follows:

  • Three primaries provide an excess of 3 GB of memory for application workloads. Each primary must have 150 GB of storage in an unformatted, unpartitioned storage device for Ceph and 140 GB of storage for /var/lib.

  • Three secondaries provide an excess of 9 CPU and 19.5 GB of memory for application workloads. Each secondary must have 120 GB of storage for /var/lib.

This diagram illustrates the suggested configuration for a cluster capable of running Continuous Delivery for Puppet Enterprise (PE) and Puppet Comply:

[Diagram: PAM HA cluster example with Comply and Continuous Delivery for Puppet Enterprise]

Web URL and port requirements for firewalls

Puppet Application Manager interacts with external web URLs for a variety of installation, configuration, upgrade, and deployment tasks. Puppet Application Manager uses the following web URLs for internal and outbound network traffic.
Puppet Application Manager and platform:
  • get.replicated.com
  • registry.replicated.com
  • proxy.replicated.com
  • api.replicated.com
  • k8s.kurl.sh
  • kurl-sh.s3.amazonaws.com
  • replicated.app
  • registry-data.replicated.com
Container registries:
  • gcr.io
  • docker.io
  • index.docker.io
  • registry-1.docker.io
  • auth.docker.io
  • production.cloudflare.docker.com
  • quay.io
Puppet Enterprise:
  • pup.pt
  • forgeapi.puppet.com
  • pm.puppetlabs.com
  • amazonaws.com
  • s3.amazonaws.com
  • rubygems.org

For information about containers and firewalls, refer to Networking Requirements in the kURL documentation.
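
To confirm outbound access from behind a firewall, you can run a reachability sketch like the following. It only verifies that an HTTPS connection can be established; some hosts return redirects or errors at the root path, and the host list here is a representative subset, so extend it as needed:

# Report hosts that cannot be reached over HTTPS
for host in get.replicated.com registry.replicated.com proxy.replicated.com \
        api.replicated.com k8s.kurl.sh replicated.app gcr.io docker.io \
        quay.io forgeapi.puppet.com rubygems.org; do
    if curl --connect-timeout 5 -sS -o /dev/null "https://${host}"; then
        echo "ok: ${host}"
    else
        echo "FAILED: ${host}"
    fi
done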

Firewall modules

If you use the puppetlabs/firewall module to manage your cluster's firewall rules with Puppet, be advised that purging unknown rules from changes breaks Kubernetes communication. To avoid this, apply the puppetlabs/pam_firewall module before installing Puppet Application Manager.

If you've already installed PAM, apply the pam_firewall module and then restart the kube-proxy service to recreate its iptables rules by running the following on a primary:
systemctl restart kubelet
                    kubectl -n kube-system delete pod -l k8s-app=kube-proxy
                    kubectl -n kube-flannel delete pod -l app=flannel

For more information, see the PAM firewall module.

Supported browsers

The following browsers are supported for use with the Puppet Application Manager UI:

  • Google Chrome: current version as of release
  • Mozilla Firefox: current version as of release
  • Microsoft Edge: current version as of release
  • Apple Safari: current version as of release