Puppet Enterprise 2018.1
Although Puppet Application Orchestration can help you manage any distributed set of infrastructure, it's primarily designed to configure an application stack. The simple application stack used in the following extended example comprises a database server on one machine and a web server that connects to the database on another machine.
With previous Puppet coding techniques, you'd write classes and defined types to define the configuration for these services, and you'd pass in class parameter data to tell the web server class how to connect to the database. With application orchestration, you write Puppet code so that this information is exchanged automatically, and when you run Puppet, the services are configured in the correct order rather than repeatedly until they converge.
Application orchestration workflow
This workflow walks through the major steps in application orchestration, from authoring your application to configuring it with the orchestrator.
Prior hands-on experience writing Puppet code is required to author applications for use with application orchestration. You should also be familiar with modules.
Details about the code for this LAMP application module are available at puppetlabs/appmgmt-module-lamp.
- Create the service resources and application components. In the applications you compose, application components share information with each other by exporting and consuming environment-wide service resources.
An application component is an independent bit of Puppet code that can be used alongside one or more other components to create an application. Components are often defined types that consist of traditional Puppet resources that describe the configuration of the component (file, package, and service, for example).
- Create the Sql service resource in lamp/lib/puppet/type/sql.rb.

```
Puppet::Type.newtype :sql, :is_capability => true do
  newparam :name, :namevar => true
  newparam :user
  newparam :password
  newparam :port
  newparam :host
end
```
- Define the database application component in lamp/manifests/db.pp.

```
# Creates and manages a database
define lamp::db (
  $db_user,
  $db_password,
  $host     = $::fqdn,
  $port     = 3306,
  $database = $name,
) {
  include mysql::bindings::php

  mysql::db { $name:
    user     => $db_user,
    password => $db_password,
  }
}
```

- In lamp/manifests/db.pp, write the produces statement, which expresses that the Lamp::Db component produces the Sql service resource. The produces statement is included outside of the defined type.

```
Lamp::Db produces Sql {
  user     => $db_user,
  password => $db_password,
  host     => $host,
  database => $name,
  port     => $port,
}
```

- Create the web component by defining it in lamp/manifests/web.pp. The produces statement after the defined type expresses that the Lamp::Web component produces the Http service resource.

```
define lamp::web (
  $port    = '80',
  $docroot = '/var/www/html',
) {
  class { 'apache':
    default_mods  => false,
    default_vhost => false,
  }

  apache::vhost { $name:
    port    => $port,
    docroot => $docroot,
  }
}

Lamp::Web produces Http {
  ip   => $::ipaddress,
  port => $port,
  host => $::fqdn,
}
```

- Define the application server component in lamp/manifests/app.pp.

```
# Creates and manages an app server
define lamp::app (
  $docroot,
  $db_name,
  $db_port,
  $db_user,
  $db_host,
  $db_password,
  $host = $::fqdn,
) {
  notify { "Hello! This is the ${name}'s rgbank::web component": }
  # ...
}
```
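The lamp::app parameters (db_user, db_host, and so on) don't share names with the Sql resource's attributes, so the component needs a mapping from the consumed resource onto its parameters. The following consumes statement is a hedged sketch of one way to wire that up in lamp/manifests/app.pp; the published puppetlabs/appmgmt-module-lamp code may do this differently.

```
# Assumed mapping (not shown in the text above): fill lamp::app's db_*
# parameters from the attributes of the consumed Sql resource. Like produces,
# this statement sits outside the defined type.
Lamp::App consumes Sql {
  db_name     => $database,
  db_user     => $user,
  db_password => $password,
  db_host     => $host,
  db_port     => $port,
}
```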
- Create the application definition. The application definition (or model) is where you connect all the pieces together. It describes the relationship between the application components and the exchanged service resources. Since the application definition shares the name of the module, you put it in lamp/manifests/init.pp.

```
application lamp (
  $db_user,
  $db_password,
  $docroot = '/var/www/html',
) {
  lamp::web { $name:
    docroot => $docroot,
    export  => Http["lamp-${name}"],
  }

  lamp::app { $name:
    docroot => $docroot,
    consume => Sql["lamp-${name}"],
  }

  lamp::db { $name:
    db_user     => $db_user,
    db_password => $db_password,
    export      => Sql["lamp-${name}"],
  }
}
```
- Instantiate the application. In the application instance, create a unique version of your application and specify which nodes to use for each component.
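For example, you can declare an instance in the site block of the environment's site.pp. This is a minimal sketch; the instance name, node names, and credentials are placeholders, not values taken from the module.

```
# Hypothetical application instance: each component is mapped to the node
# that should host it.
site {
  lamp { 'app1':
    db_user     => 'lamp_user',     # placeholder credentials
    db_password => 'change_me',
    nodes       => {
      Node['db.example.com']  => Lamp::Db['app1'],
      Node['app.example.com'] => Lamp::App['app1'],
      Node['web.example.com'] => Lamp::Web['app1'],
    },
  }
}
```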
- Use the orchestrator commands to run Puppet and configure the application.
- Run puppet app show to see the details of your application instance.
- Run puppet job run to run Puppet across all the nodes in the order specified in your application instance.
- Run puppet job list to show running and completed orchestrator jobs.

At the start of a job run, the orchestrator prints a job plan that shows what's included in the run and the expected node run priority. The nodes are grouped by depth. Nodes in level 0 have no dependencies and will run first. Nodes in the levels below are dependent on nodes in higher levels.
As your job progresses, the orchestrator will print results of each node run after it completes. A "Success!" message prints when all jobs complete.
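A hypothetical command sequence for the Lamp['app1'] instance sketched above; the flag names for targeting an application instance are assumptions from this era of PE, so verify them with puppet job run --help on your installation.

```
# Inspect the application instances the orchestrator knows about
puppet app show

# Run Puppet on every node in the instance, in dependency order
# (--application and --environment are assumed flag names; check --help)
puppet job run --application 'Lamp[app1]' --environment production

# Review running and completed orchestration jobs
puppet job list
```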