PE Metrics in Splunk: Puppet Report Viewer 3.1

A number of improvements have been made to the Puppet Report Viewer add-on for Splunk since it was initially released. The most notable change in version 3.1 of the add-on is the new set of tracked metrics, which allows for better troubleshooting of performance issues in your Puppet installation.

What’s new in v3.1?

Version 3.1 of the Puppet Report Viewer app replaces the default dashboards available in the Metrics tab with all new dashboards, measuring a number of useful metrics for Puppet Server, PuppetDB, and Orchestrator. Below is a list of the specific metrics that are tracked in the new dashboards.

Dashboard: Puppet Server

Performance Metrics

  • Memory Usage
    • Amount of memory allocated to Puppet Server, including the amount of memory currently being utilized.
  • Non-Heap Memory Usage
    • Amount of memory being utilized that is not part of the configured heap size.
  • CPU Usage
    • Percentage of CPU usage by Puppet Server.
  • Average Requested JRubies
    • The average number of JRuby instances requested from the pool.
  • Average Free JRubies
    • The number of free JRubies in the pool.
  • Average JRuby Borrow / Compile Time
    • The amount of time that Puppet Server "holds" a JRuby as a resource for a request / amount of time it takes Puppet Server to compile a catalog.
  • Average Wait Time
    • The amount of time Puppet Server waits for a JRuby instance to become available.

Workload Metrics

  • JRuby Borrow Timers (Avg)
    • The time spent with a borrowed JRuby.
  • JRuby Borrow Timers (Rate)
    • Number of operations per second.
  • HTTP Endpoint (Mean)
    • The amount of time spent serving requests by endpoint.
  • Function Times (Mean)
    • The amount of time during catalog compilation spent in function calls.

Dashboard: PuppetDB

Performance Metrics

  • Memory Usage
    • Amount of memory allocated to PuppetDB, including the amount of memory currently being utilized.
  • Non-Heap Memory Usage
    • Amount of memory being utilized that is not part of the configured heap size.
  • CPU Usage
    • Percentage of CPU usage by PuppetDB.
  • Commands Per Second
    • Meter measuring commands successfully processed.
  • Command Processing Time
    • Timing statistics for the processing of previously enqueued commands.
  • Queue Depth
    • Number of currently enqueued commands.
  • Replace Catalog Time
    • Amount of time spent replacing catalogs.
  • Replace Facts Time
    • Amount of time spent replacing facts.
  • Store Report Time
    • Amount of time spent storing the report.
  • GC CPU Usage
    • Percentage of CPU usage during garbage collection.
  • GC Stats
    • Count of garbage collection, including duration.

Workload Metrics

  • Command Persistence Time (Avg)
    • Amount of time for PDB commands to successfully complete.
  • Read Duration (Avg)
    • Amount of time spent reading data.
  • Peak Read Pool Wait
    • Amount of time waiting for a connection to the read pool.
  • Read Pool Pending Connections
    • Number of connections waiting on the read pool.
  • Write Duration (Avg)
    • Amount of time spent writing data.
  • Peak Write Pool Wait
    • Amount of time waiting for a connection to the write pool.
  • Write Pool Pending Connections
    • Number of connections waiting on the write pool.
  • Global Discards
    • Meter measuring commands discarded as invalid after 5 attempts to process.
  • Global Fatals
    • Meter measuring fatal processing errors.

Dashboard: Orchestrator (PE Only)

Performance Metrics

  • Memory Usage
    • Amount of memory allocated to Orchestrator, including the amount of memory currently being utilized.
  • Non-Heap Memory Usage
    • Amount of memory being utilized that is not part of the configured heap size.
  • GC CPU Usage
    • Percentage of CPU usage during garbage collection.
  • GC Stats
    • Count of garbage collection, including duration.
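All of the dashboards above are driven by metric events indexed in Splunk, so you can also query the underlying data directly when a panel raises questions. A search along these lines should surface the raw metric events (a sketch: `puppet:metrics` is the sourcetype the splunk_hec module uses for metrics data, and the index name will depend on your configuration):

```
index=main sourcetype="puppet:metrics"
| head 10
```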

Configuration

To begin taking advantage of these changes, ensure you have the latest versions of the following components installed: the Puppet Report Viewer add-on, the splunk_hec module, and the puppet_metrics_collector module.

While the data used to generate these new dashboards is sent to Splunk via the splunk_hec module, the dashboards rely on data generated by the puppet_metrics_collector module. As of PE 2019.8.7, the metrics collector module is installed by default. On versions prior to 2019.8.7, installing the splunk_hec module will automatically install the metrics collector module.

In PE 2019.8.7+, with splunk_hec and the Puppet Report Viewer properly configured, you only need to set the following parameters within the puppet_enterprise class in the PE Infrastructure node group:

  • puppet_enterprise::enable_metrics_collection: true
  • puppet_enterprise::enable_system_metrics_collection: true

Note: System metrics include PostgreSQL-related metrics. While the dashboards do not rely on this data currently, future versions of the Puppet Report Viewer will include dashboards populated by it.
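If you manage these settings in hiera rather than through the console, the equivalent data would look like this (a sketch: the file path is illustrative, but the keys match the class parameters above):

```yaml
# data/common.yaml (illustrative path)
puppet_enterprise::enable_metrics_collection: true
puppet_enterprise::enable_system_metrics_collection: true
```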

In your hiera data you will then want to configure the following parameter:

  • puppet_metrics_collector::metrics_server_type: 'splunk_hec'
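In context, that hiera entry would look like this (the file path is illustrative; place the key at whatever layer of your hierarchy resolves for the Primary Server):

```yaml
# data/nodes/primary.example.com.yaml (illustrative path)
puppet_metrics_collector::metrics_server_type: 'splunk_hec'
```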

Prior to PE 2019.8.7, you will want to classify your Primary Server with the puppet_metrics_collector class, with the metrics_server_type parameter set to splunk_hec.
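For those older versions, the classification can be sketched as a resource-like class declaration in a manifest applied to the Primary Server (class and parameter names as used above):

```puppet
# Classify the Primary Server (PE versions prior to 2019.8.7)
class { 'puppet_metrics_collector':
  metrics_server_type => 'splunk_hec',
}
```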

Additional specifics on configuring these integration components can be found in the documentation linked below.

Example Troubleshooting Scenario

Corey ben Efrayim is a Sr. Support Engineer and Puppet Integrations SME at Puppet.

Learn more

Take full advantage of the Puppet x Splunk integration by also utilizing PE Event Forwarding, as well as the Puppet Alert Actions add-on. Event Forwarding provides a comprehensive audit trail of activities in Puppet Enterprise, while the Alert Actions add-on allows users to take actions based on those activities. One example would be automatically revoking the RBAC token of a user who made changes to a specific node group in the PE console.