Architecture overview

Puppet Application Manager (PAM) runs on Kubernetes. We provide several supported configurations for different use cases.

PAM can run on Puppet-supported or customer-supported Kubernetes clusters. Due to potential variations in the architecture of customer-supported clusters, the architecture overview provided on this page assumes PAM is running on Puppet-supported clusters. For more information on installing on a customer-supported Kubernetes cluster, see Install Puppet applications using PAM on a customer-supported Kubernetes cluster.

Terminology

Throughout this documentation, we use a few terms to describe different roles nodes can take:

  • Primary - A primary node runs core Kubernetes components (referred to as the Kubernetes control plane) as well as application workloads. At least three primaries are required to support high availability for Puppet Application Manager. These are also sometimes referred to as masters.
  • Secondary - A secondary node runs application workloads. These are also sometimes referred to as workers.

Puppet Application Manager is built on the KOTS (Kubernetes Off-The-Shelf) project, and we occasionally use CLI tools such as kubectl and kots to manage the installation.

Standalone architecture

Standalone is optimized for limited resources, storing data directly on disk. If you need to remove optional components like Prometheus and Grafana to decrease resource utilization, see Optional components. While additional compute capacity can be added through secondary nodes, this does not provide increased resilience as data is only stored on the node where a component service runs.

For information on migrating data from standalone to HA deployments, see Migrating data between two systems with different architectures.

HA architecture

A high availability (HA) architecture keeps application services schedulable when a node fails and uses Ceph for distributed storage so that data survives the loss of a node. Individual applications may still experience some loss of availability (up to 10 minutes) if individual services do not have replicas and need to be rescheduled. For more information, see Reduce recovery time when a node fails. An HA implementation requires a cluster of three primary nodes. Additional compute capacity can be added through secondary nodes.

The HA architecture installs Prometheus and Alertmanager, which provide system monitoring in the Puppet Application Manager UI. Prometheus and Alertmanager are unauthenticated on ports 30900 and 30903, so we recommend controlling access to these ports with firewall rules. For information on how to remove Prometheus and Alertmanager, see Optional components.
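If you keep Prometheus and Alertmanager installed, one way to restrict access to their unauthenticated ports is with host firewall rules. The following firewalld commands are a sketch only, assuming firewalld is in use and using 203.0.113.0/24 as a placeholder for your trusted admin network; adapt them to your own firewall tooling and policy:

```shell
# Allow only the trusted admin network (placeholder CIDR) to reach
# Prometheus (30900) and Alertmanager (30903); reject everything else.
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="203.0.113.0/24" port port="30900" protocol="tcp" accept'
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="203.0.113.0/24" port port="30903" protocol="tcp" accept'
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" port port="30900" protocol="tcp" reject'
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" port port="30903" protocol="tcp" reject'
firewall-cmd --reload
```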

Puppet Application Manager architectures

The following diagram and lists outline some of the core components involved in standalone and HA architectures and how they communicate. For a detailed list of ports used by Puppet Application Manager, refer to the Cluster port requirements sections of the PAM system requirements. For firewall information, refer to Web URL and port requirements for firewalls.

Cluster node architecture and port diagram.

Standalone architecture
Puppet Application Manager
Lives on a cluster within a Linux host.
The PAM application includes the admin console, application services, and PostgreSQL.
PAM communicates out of the Linux host to fetch updates.
UI ports
The application UI communicates on 80/443 to the Linux host.
The admin console HTTPS UI communicates on 8800 to the Linux host.
Backplane and internal ports
Backplane ports include 8472 (UDP, used by Flannel) and 10250 (TCP, used by the kubelet).
On a standalone installation, these ports are used within the single host for inter-process communication.
Additional default ports
30900: Prometheus UI
30902: Grafana UI
30903: Alertmanager UI
HA cluster architecture
Control plane (primaries)
Multiple primaries that can also run application workloads.
Structured as clusters within Linux hosts with a device or partition for Ceph.
Each primary hosts PAM and can run application services, as well as components such as PostgreSQL or the admin console.
Workers (secondaries)
Can be added later to add capacity for running application workloads.
Structured as clusters within Linux hosts.
Network or Application Balancer
The balancer communicates out to the control plane (primaries) and workers (secondaries).
Receives admin console HTTPS UI communication over 8800.
Receives application UI communication over 80/443.
Network load balancer internal APIs communicate with primaries and secondaries over 6443.
To learn about setting up health checks for your load balancer, go to Load balancer health checks.
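To illustrate the load balancer role described above, here is a minimal TCP pass-through sketch in HAProxy syntax. It is not an official configuration: the addresses 10.0.0.11-13 are hypothetical placeholders for your three primaries, and production health checks should follow the Load balancer health checks documentation. Ports 80 and 8800 would follow the same pattern as 443:

```
# Hypothetical primaries: 10.0.0.11, 10.0.0.12, 10.0.0.13
frontend kube_api
    bind *:6443
    mode tcp
    default_backend primaries_6443

backend primaries_6443
    mode tcp
    balance roundrobin
    server primary1 10.0.0.11:6443 check
    server primary2 10.0.0.12:6443 check
    server primary3 10.0.0.13:6443 check

frontend app_ui
    bind *:443
    mode tcp
    default_backend primaries_443

backend primaries_443
    mode tcp
    balance roundrobin
    server primary1 10.0.0.11:443 check
    server primary2 10.0.0.12:443 check
    server primary3 10.0.0.13:443 check
```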
Backplane and internal ports
Backplane ports include 2379/2380 (TCP, used by etcd), 8472 (UDP, used by Flannel), and 10250 (TCP, used by the kubelet).
These ports are used between cluster nodes and can also be used within a single host for inter-process communication.
Additional default ports
30900: Prometheus UI
30902: Grafana UI
30903: Alertmanager UI
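Before opening a support case about connectivity, you can sanity-check which of the TCP ports listed above answer on a given node. This is a generic bash sketch, not a PAM tool; the host argument is a placeholder (127.0.0.1 below only demonstrates the output format), and note that 8472 is UDP and cannot be probed this way:

```shell
#!/usr/bin/env bash
# Probe the TCP ports from the lists above against a target host.
check_ports() {
  local host="${1:-127.0.0.1}"
  local port
  for port in 6443 2379 2380 10250 30900 30902 30903; do
    # /dev/tcp is a bash pseudo-device; timeout avoids long hangs on filtered ports.
    if timeout 1 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
      echo "${port} open"
    else
      echo "${port} closed"
    fi
  done
}

check_ports 127.0.0.1
```

A port reported closed from a peer node but open locally usually points at a firewall rule between the nodes rather than at the service itself.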

UNSUPPORTED: Legacy architecture

Note: The legacy architecture utilizes Rook 1.0, which is incompatible with Kubernetes version 1.20 and newer versions. Kubernetes version 1.19 is no longer receiving security updates. The legacy architecture reached the end of its support lifecycle on 30 June 2022, and Puppet no longer updates legacy architecture components.

The Puppet Application Manager legacy architecture reflects an older configuration that used Rook 1.0, which hosted data directly on the file system. Installing the legacy architecture is no longer supported.

For information on upgrading to a newer version of the legacy architecture, see PAM legacy upgrades and PAM offline legacy upgrades.

For information on migrating data from a legacy architecture to a standalone or HA architecture, go to our Support Knowledge Base instructions: