Troubleshooting PAM

Use this guide to troubleshoot issues with your Puppet Application Manager installation.

How to look up your Puppet Application Manager architecture

If you're running PAM on a Puppet-supported cluster, you can use the following command to determine your PAM architecture version:
kubectl get installer --sort-by=.metadata.creationTimestamp -o jsonpath='{.items[-1:]}' ; echo
Depending on which architecture you used when installing, the command returns one of these values:
  • HA architecture: puppet-application-manager
  • Standalone architecture: puppet-application-manager-standalone
  • Legacy architecture: Any other value, for example, puppet-application-manager-legacy, cd4pe, or comply
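As an illustration, the value returned by the command above can be mapped to an architecture with a simple shell case statement. The hard-coded sample value below is a stand-in for live cluster output:

```shell
# Classify the installer name returned by "kubectl get installer" above.
# INSTALLER_NAME is a hard-coded sample; on a live cluster you would
# capture the kubectl command's output instead.
INSTALLER_NAME="puppet-application-manager-standalone"

case "$INSTALLER_NAME" in
  puppet-application-manager)            ARCH="HA" ;;
  puppet-application-manager-standalone) ARCH="Standalone" ;;
  *)                                     ARCH="Legacy" ;;
esac

echo "$ARCH"
```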

Resolve IP address range conflicts

When installing Puppet Application Manager, the IP address ranges it reserves for pods and services must not be in use by other nodes on the local network.

Note: The minimum sizes for the CIDR blocks used by Puppet Application Manager are:
  • Standalone - /24 for pod and service CIDRs
  • HA - /23 for pod and service CIDRs
  • The default of /22 is recommended to support future expansion
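For reference, the number of IP addresses each of these CIDR sizes provides follows directly from the prefix length:

```shell
# Address count for a CIDR prefix: 2^(32 - prefix_bits).
for bits in 22 23 24; do
  echo "/$bits provides $((2 ** (32 - bits))) addresses"
done
# -> /22 provides 1024 addresses
# -> /23 provides 512 addresses
# -> /24 provides 256 addresses
```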
To resolve IP address range conflicts, create a patch.yaml file and add the installer-spec-file=patch.yaml argument when running the installation script:
  1. If the pod network range conflicts with IP addresses you use internally, add the following to your patch.yaml file (a "/23" range is used here as an example):
    apiVersion: cluster.kurl.sh/v1beta1
    kind: Installer
    metadata:
      name: patch
    spec:
      weave:
        podCidrRange: "/23"
  2. If the service network range conflicts with IP addresses you use internally, add the following to your patch.yaml file (again, "/23" is only an example):
    spec:
      kubernetes:
        serviceCidrRange: "/23"
    CAUTION: The pod CIDR and service CIDR ranges must not overlap.
  3. Once your patch.yaml file is set up, add the installer-spec-file=patch.yaml argument when you run the installation script:
    cat <INSTALLATION SCRIPT> | sudo bash -s airgap installer-spec-file=patch.yaml
    Remember: Add the installer-spec-file=patch.yaml argument any time you re-run the installation script, such as when reinstalling to upgrade to a new version.
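Putting the steps above together, a single patch.yaml can override both ranges at once. This is a sketch only: the weave and kubernetes add-on names and the "/23" values are assumptions to adapt to your installer and network.

```shell
# Write a combined patch.yaml overriding both CIDR ranges.
# The add-on names (kubernetes, weave) and the "/23" values are assumptions.
cat > patch.yaml <<'EOF'
apiVersion: cluster.kurl.sh/v1beta1
kind: Installer
metadata:
  name: patch
spec:
  kubernetes:
    serviceCidrRange: "/23"
  weave:
    podCidrRange: "/23"
EOF

# Sanity check: both overrides are present in the file.
grep -c 'CidrRange' patch.yaml
# -> 2
```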

Reset the PAM password

As part of the installation process, Puppet Application Manager (PAM) generates a password for you. You can update this password to one of your choosing after installation.

  1. To reset the Puppet Application Manager password, run the following as the root user:
kubectl kots reset-password default
    The system prompts you to enter a new password of your choosing.
  2. If the command fails with an unknown command "kots" for "kubectl" error, it's because /usr/local/bin is not in the path. To address this error, either update the path to include /usr/local/bin, or run:
    /usr/local/bin/kubectl-kots reset-password default
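One way to address the PATH issue for the current shell session is to append /usr/local/bin only when it is missing, so kubectl can find the kubectl-kots plugin binary:

```shell
# Append /usr/local/bin to PATH when absent, so kubectl can locate the
# kubectl-kots plugin binary installed there.
case ":$PATH:" in
  *:/usr/local/bin:*) : ;;                   # already on PATH, do nothing
  *) export PATH="$PATH:/usr/local/bin" ;;   # append it
esac
```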

Update the PAM TLS certificate

A self-signed TLS certificate secures the connection between your browser and Puppet Application Manager (PAM). Once the initial Puppet Application Manager setup process is complete, you can upload new certificates by enabling changes to the installation's Kubernetes secrets.

Use this process if you chose not to add a TLS certificate when installing Puppet Application Manager, or if you need to update your existing TLS certificate.
  1. Enable changes to your installation's kotsadm-tls Kubernetes secret by running:
    kubectl -n default annotate secret kotsadm-tls acceptAnonymousUploads=1
  2. Restart the kurl-proxy pod to deploy the change by running:
    kubectl delete pods $(kubectl get pods -A | grep kurl-proxy | awk '{print $2}')
  3. Once the kurl-proxy pod restarts and is back up and running, navigate to https://<HOSTNAME>:8800/tls and upload your new TLS certificate.
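Before uploading, it can be worth confirming that the certificate and private key you plan to upload actually match. The following is a hedged sketch using openssl; a throwaway self-signed pair is generated only so the example is self-contained — in practice, point the checks at your real files.

```shell
# Generate a throwaway self-signed pair purely for illustration; in practice
# run the modulus checks below against your real tls.crt and tls.key.
openssl req -x509 -newkey rsa:2048 -nodes -keyout tls.key -out tls.crt \
  -days 1 -subj "/CN=example.test" 2>/dev/null

# An RSA certificate and key match when their public moduli are identical.
CRT_MOD=$(openssl x509 -noout -modulus -in tls.crt | openssl md5)
KEY_MOD=$(openssl rsa -noout -modulus -in tls.key | openssl md5)
[ "$CRT_MOD" = "$KEY_MOD" ] && echo "certificate and key match"
```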

Reduce recovery time when a node fails

If a node running a non-replicated service like PostgreSQL fails, expect some service downtime.

How much downtime depends on the following factors:

  • Timeout for communication between Kubernetes services (at least one minute to mark the node as unreachable).
  • Timeout for the ekco service to determine that pods need to be rescheduled. The default is five minutes after the node is marked unreachable.
  • Time to restart services (at least two minutes, possibly up to five minutes, if there are complex dependencies).

The ekco service can be configured to reschedule pods more quickly by configuring the installation with a patch.yaml similar to the following:

apiVersion: cluster.kurl.sh/v1beta1
kind: Installer
metadata:
  name: patch
spec:
  ekco:
    nodeUnreachableToleration: 1m

Apply the patch during an install or upgrade by including installer-spec-file=patch.yaml as an install option.

Important: This patch needs to be included during all future upgrades to avoid resetting the option.

PAM components

Puppet Application Manager (PAM) uses a range of mandatory and optional components.

Support services

Kubernetes components

Optional components

Prometheus (+Grafana) and Velero (+Restic) are optional components:

  • Prometheus+Grafana uses 112m/node + 600m CPU, 200MiB/node + 1750MiB RAM
  • Velero+Restic uses 500m/node + 500m CPU, 512MiB/node + 128MiB RAM
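As a worked example, on a hypothetical three-node cluster the per-node and fixed figures above add up as follows:

```shell
NODES=3   # assumed cluster size for this example

PROM_CPU=$((112 * NODES + 600))    # millicores
PROM_RAM=$((200 * NODES + 1750))   # MiB
VELERO_CPU=$((500 * NODES + 500))  # millicores
VELERO_RAM=$((512 * NODES + 128))  # MiB

echo "Prometheus+Grafana: ${PROM_CPU}m CPU, ${PROM_RAM}MiB RAM"
echo "Velero+Restic:      ${VELERO_CPU}m CPU, ${VELERO_RAM}MiB RAM"
# -> Prometheus+Grafana: 936m CPU, 2350MiB RAM
# -> Velero+Restic:      2000m CPU, 1664MiB RAM
```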

If you do not need these optional components, omit them from the initial install and subsequent upgrades with a patch similar to the following:

apiVersion: cluster.kurl.sh/v1beta1
kind: Installer
metadata:
  name: patch
spec:
  prometheus:
    version: ''
  velero:
    version: ''

Important: This patch needs to be included during upgrades to avoid adding the components later.

If you want to remove optional components that are already installed, use the following command:

kubectl delete ns/monitoring ns/velero

Load balancing

An HA install requires the following load balancers:

  • A network (L4, TCP) load balancer for port 6443 across primary nodes. This is required for Kubernetes components to continue operating in the event that a node fails. The port is only accessed by the Kubernetes nodes and any admins using kubectl.

  • A network (L4, TCP) or application (L7, HTTP/S) load balancer for ports 80 and 443 across all primaries and secondaries. This maintains access to applications in the event of a node failure. Include 8800 if you want external access to the Puppet Application Manager UI.

    Note: Include port 8000 for webhook callbacks if you are installing Continuous Delivery for PE.
Important: If you are using application load balancing, be aware that Ingress items use Server Name Indication (SNI) to route requests, which may require additional configuration with your load balancer. If your load balancer does not support SNI for health checks, enable the Enable load balancer HTTP health check option on the Puppet Application Manager UI Config page.
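To illustrate the first requirement, a minimal L4 configuration in HAProxy syntax might look like the following. HAProxy itself, the backend name, and the node addresses are all assumptions; any TCP load balancer with equivalent settings works.

```
frontend k8s-api
    bind *:6443
    mode tcp
    default_backend k8s-api-primaries

backend k8s-api-primaries
    mode tcp
    # Assumed primary node addresses; replace with your own.
    server primary1 10.0.0.11:6443 check
    server primary2 10.0.0.12:6443 check
    server primary3 10.0.0.13:6443 check
```

An analogous frontend/backend pair for ports 80 and 443 (and optionally 8800) across all nodes covers the second requirement.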

Generate a support bundle

When seeking support, you might be asked to generate and provide a support bundle. This bundle collects a large amount of logs, system information and application diagnostics.

To create a support bundle:

  1. In Puppet Application Manager UI, click Troubleshoot > Generate a support bundle.
  2. Select a method for generating the support bundle:
    • Generate the bundle automatically. Click Analyze <APPLICATION NAME> (<APPLICATION NAME> is replaced in the UI by the name of the Puppet application you have installed), and Puppet Application Manager generates the bundle for you and uploads it to the Troubleshoot page.
    • Generate the bundle manually. Click the prompt to generate a custom command for your installation, then run the command on your cluster. Follow the prompts to upload the bundle to Puppet Application Manager.
  3. Review the collected data before forwarding it to Puppet, as it may contain sensitive information that you wish to redact.
  4. Return to the Troubleshoot page, download the newly created support bundle, and send it to your Puppet Support contact.

Create a support bundle from the command line

If installation of Puppet Application Manager on an embedded kURL cluster fails, or an app upload fails, you might not be able to access the UI to generate a support bundle.

You can generate a support bundle by using the default spec. To do this, run the following command:

kubectl support-bundle <DEFAULT SPEC URL>

On an offline server, you can copy the default spec by using the following command:

curl -o spec.yaml -H 'User-agent:Replicated_Troubleshoot/v1beta1' <DEFAULT SPEC URL>

The spec can then be uploaded to the server. Use the local spec by running:

kubectl support-bundle /path/to/spec.yaml

If the Puppet Application Manager UI is working and the app is installed, you can use:

kubectl support-bundle http://<server-address>:8800/api/v1/troubleshoot/<app-slug>

If the app is not installed but the Puppet Application Manager UI is running:

kubectl support-bundle http://<server-address>:8800/api/v1/troubleshoot

If you do not already have the support-bundle kubectl plugin installed, install it by using the command below:

curl <INSTALL SCRIPT URL> | bash

Or by installing krew and running:

kubectl krew install support-bundle

Using sudo behind a proxy server

Many of the commands you run to install or configure Puppet Application Manager (PAM) require root access. In the PAM documentation, commands that require root access use sudo to elevate privileges. If you're running PAM behind a proxy, sudo might not work correctly. If you're having trouble running commands with sudo, and you're behind a proxy, try switching to the root user and running the command without sudo.

kURL can only be upgraded two minor versions at a time

Because kURL does not support upgrading more than two Kubernetes versions at once, if you're upgrading from an older version of PAM, you might need to follow a specific upgrade path to avoid failures.
  • If you're on PAM version 1.56.0 or earlier, you must upgrade to PAM 1.80.0 before upgrading to PAM 1.81.1 or later.

Attempting to upgrade too far at once returns the following error message: The currently installed kubernetes version is <CURRENT VERSION>. The requested version to upgrade to is <INVALID_TARGET_VERSION>. Kurl can only be upgraded two minor versions at time. Please install <VALID_TARGET_VERSION> first.
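The two-minor-version limit can be checked with simple arithmetic before attempting an upgrade. The version numbers below are illustrative, not live data:

```shell
# Compare Kubernetes minor versions (illustrative values, not live data).
current_minor=19   # e.g. the cluster is running Kubernetes 1.19
target_minor=24    # e.g. the requested PAM release ships Kubernetes 1.24

if [ $((target_minor - current_minor)) -gt 2 ]; then
  MSG="upgrade to an intermediate PAM version first"
else
  MSG="direct upgrade is within the two-minor-version limit"
fi
echo "$MSG"
```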