Troubleshooting PAM
Use this guide to troubleshoot issues with your Puppet Application Manager installation.
How to look up your Puppet Application Manager architecture
kubectl get installer --sort-by=.metadata.creationTimestamp -o jsonpath='{.items[-1:].metadata.name}' ; echo
The output indicates your architecture:
- HA architecture: puppet-application-manager
- Standalone architecture: puppet-application-manager-standalone
- Legacy architecture: Any other value, for example, puppet-application-manager-legacy, cd4pe, or comply
Resolve IP address range conflicts
When installing Puppet Application Manager, the IP address ranges 10.96.0.0/22 and 10.32.0.0/22 must not be in use by other nodes on the local network. The minimum CIDR sizes required are:
- Standalone - /24 for pod and service CIDRs
- HA - /23 for pod and service CIDRs
- Default of /22 is recommended to support future expansion
If these ranges conflict with your local network, specify different ranges in a patch.yaml file and add the installer-spec-file=patch.yaml argument when running the installation script (see below).
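A sketch of what such a patch.yaml might look like: the kubernetes serviceCIDR and flannel podCIDR settings come from the kURL installer spec, and the 10.200.x.x and 10.204.x.x addresses are placeholders, so confirm the exact fields and ranges supported by your PAM version against the kURL add-on documentation before using this.
apiVersion: cluster.kurl.sh/v1beta1
kind: Installer
metadata:
  name: patch
spec:
  kubernetes:
    serviceCIDR: 10.200.0.0/22
  flannel:
    podCIDR: 10.204.0.0/22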
Reset the PAM password
As part of the installation process, Puppet Application Manager (PAM) generates a password for you. You can update this password to one of your choosing after installation.
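As a sketch, assuming PAM is installed in the default namespace (adjust if yours differs), the KOTS CLI provides a password reset command that prompts you for a new password:
kubectl kots reset-password default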
Update the PAM TLS certificate
A self-signed TLS certificate secures the connection between your browser and Puppet Application Manager (PAM). Once the initial Puppet Application Manager setup process is complete, you can upload new certificates by enabling changes to the installation's Kubernetes secrets.
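As a sketch of the first step, assuming the standard KOTS-managed kotsadm-tls secret in the default namespace (verify both for your installation), you can allow a one-time certificate upload by annotating the secret, then upload the new certificate and key through the PAM console on port 8800:
kubectl -n default annotate secret kotsadm-tls acceptAnonymousUploads=1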
Reduce recovery time when a node fails
If a node running a non-replicated service like PostgreSQL fails, expect some service downtime.
How much downtime depends on the following factors:
- Timeout for communication between Kubernetes services (at least one minute to mark the node as unreachable).
- Timeout for the ekco service to determine that pods need to be rescheduled. The default is five minutes after the node is marked unreachable.
- Time to restart services (at least two minutes, possibly up to five minutes, if there are complex dependencies).
The ekco service can be configured to reschedule pods more quickly by configuring the installation with a patch.yaml similar to the following:
apiVersion: cluster.kurl.sh/v1beta1
kind: Installer
metadata:
  name: patch
spec:
  ekco:
    nodeUnreachableToleration: 1m
Apply the patch during an install or upgrade by including installer-spec-file=patch.yaml
as an install option.
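For reference, with the standard kURL-based installation script this looks roughly like the following; the installer name is the value returned by the architecture lookup at the top of this guide, and the exact URL should be taken from your PAM installation instructions:
curl -sSL https://kurl.sh/<your PAM installer name> | sudo bash -s installer-spec-file=patch.yaml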
PAM components
Puppet Application Manager (PAM) uses a range of mandatory and optional components.
Support services
- Database: PostgreSQL (single instance) - https://www.postgresql.org/
- Object storage: previously MinIO - https://min.io, now Ceph - https://ceph.io
- TLS certificate management: tlser - https://github.com/puppetlabs/tlser
- HTTPS proxying outside the Ingress (ports besides 80/443): kurl_proxy - https://github.com/replicatedhq/kots/tree/v1.36.1/kurl_proxy
Kubernetes components
- Networking (CNI): Flannel - https://github.com/flannel-io/flannel
- Storage (CSI): Rook - https://rook.io, Ceph - https://ceph.io
- Ingress: Project Contour - https://projectcontour.io
- Kubernetes Cluster: kURL - https://kurl.sh
- Embedded kURL Cluster Operator: ekco - https://github.com/replicatedhq/ekco
- Admin Console: KOTS - https://kots.io
- Snapshots: Velero - https://velero.io, Restic - https://restic.net
- Monitoring: Prometheus - https://prometheus.io
- Registry: Docker Registry - https://docs.docker.com/registry/
Optional components
Prometheus (+Grafana) and Velero (+Restic) are optional components:
- Prometheus+Grafana uses 112m/node + 600m CPU, 200MiB/node + 1750MiB RAM
- Velero+Restic uses 500m/node + 500m CPU, 512MiB/node + 128MiB RAM
If you do not need these optional components, they can be omitted from the initial install and further upgrades with a patch similar to the following:
apiVersion: cluster.kurl.sh/v1beta1
kind: Installer
metadata:
  name: patch
spec:
  prometheus:
    version: ''
  velero:
    version: ''
If you want to remove optional components that are already installed, use the following command:
kubectl delete ns/monitoring ns/velero
Generate a support bundle
When seeking support, you might be asked to generate and provide a support bundle. This bundle collects a large amount of logs, system information and application diagnostics.
To create a support bundle:
- In the Puppet Application Manager UI, click Troubleshoot > Generate a support bundle.
- Select a method for generating the support bundle:
  - Generate the bundle automatically. Click Analyze <APPLICATION NAME> (<APPLICATION NAME> is replaced in the UI by the name of the Puppet application you have installed), and Puppet Application Manager generates the bundle for you and uploads it to the Troubleshoot page.
  - Generate the bundle manually. Click the prompt to generate a custom command for your installation, then run the command on your cluster. Follow the prompts to upload the bundle to Puppet Application Manager.
- Review the collected data before forwarding it to Puppet, as it may contain sensitive information that you wish to redact.
- Return to the Troubleshoot page, download the newly created support bundle, and send it to your Puppet Support contact.
Create a support bundle from the command line
If installation of Puppet Application Manager on an embedded kURL cluster fails, or an app upload fails, it may not be possible to access the UI to generate a support bundle.
You can generate a support bundle by using the default kots.io spec. To do this, run the following command:
kubectl support-bundle https://kots.io
On an offline server, you can copy the default kots.io spec by using the following command:
curl -o spec.yaml https://kots.io -H 'User-agent:Replicated_Troubleshoot/v1beta1'
The spec can then be uploaded to the server. Use the local spec by running:
kubectl support-bundle /path/to/spec.yaml
If the Puppet Application Manager UI is working and the app is installed, you can use:
kubectl support-bundle http://<server-address>:8800/api/v1/troubleshoot/<app-slug>
If the app is not installed but the Puppet Application Manager UI is running:
kubectl support-bundle http://<server-address>:8800/api/v1/troubleshoot
If you do not already have the support-bundle kubectl plugin installed, install it by using the command below:
curl https://krew.sh/support-bundle | bash
Or by installing krew and running:
kubectl krew install support-bundle
Using sudo behind a proxy server
Many of the commands you run to install or configure Puppet Application Manager (PAM) require root access. In the PAM documentation, commands that require root access use sudo to elevate privileges. If you're running PAM behind a proxy, sudo might not work correctly. If you're having trouble running commands with sudo and you're behind a proxy, try switching to the root user and running the command without sudo.
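For example, assuming you have root credentials on the node, switch to a root shell and rerun the failing command directly:
su -
<command that failed with sudo>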
kURL can only be upgraded two minor versions at a time
Because kURL does not support upgrading more than two Kubernetes minor release versions at once, if you're upgrading from an older version of PAM, you might need to follow a specific upgrade path to avoid failures. For example, PAM version 1.80.0 uses Kubernetes version 1.21.x, so you can upgrade up to PAM 1.91.3 (Kubernetes version 1.23.x), but not to PAM 1.94.0 (Kubernetes version 1.24.x). To determine the specific upgrade path for your installation, please check the table of Kubernetes versions for each version of PAM.
Attempting to upgrade too far at once returns the following error message: The currently installed kubernetes version is <CURRENT VERSION>. The requested version to upgrade to is <INVALID_TARGET_VERSION>. Kurl can only be upgraded two minor versions at time. Please install <VALID_TARGET_VERSION> first.
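To confirm which Kubernetes version your cluster is currently running before planning an upgrade path, you can list the nodes; the VERSION column shows the Kubernetes version on each node:
kubectl get nodes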