The VMware vRealize Orchestrator (vRO) Puppet plug-in v2.0 provides tools and out-of-the-box components that make it easy to create, provision, and manage application stacks on virtual servers.
Note: This user guide walks you through setting up a reference implementation of the plug-in, vRO, and vRA, and isn’t designed for out-of-the-box use in production. Once you’ve completed this guide, you should have a working environment and examples with which you can develop your own Puppet code, vRO workflows, and vRA blueprints.
With a single click, Puppet, vRO, and vRA can automatically create a VM, install the Puppet agent, autosign its certificate, add Puppet roles and profiles, install the required Puppet modules and the software they configure, and set up the server for immediate use.
If you’re new to Puppet and vRO, you can use vRO and vRA to set up a live, functional Puppet-managed system with much less effort than building one manually.
If you’re experienced with vRO, vRealize Automation (vRA), and Puppet, you can also use this plug-in to model common Puppet workflows as vRO workflows and vRA blueprints, then deploy them just as easily as other VMs while maintaining the advantages of a Puppet-managed infrastructure.
Note: This version of the plug-in was built with new and more robust features, including compatibility with new features introduced in Puppet 4. This plug-in doesn’t support in-place upgrades from the previous vRO Puppet plug-in version (1.0); if you’re using version 1.0 of the plug-in, you must remove it before installing version 2.0.
The vRO Puppet plug-in 2.0 requires:

- Agent nodes being managed by Puppet must run an operating system supported by the Puppet agent.
Note: The 32-bit version of Microsoft Windows Puppet agent is not compatible with vRO plug-in management. You must use the 64-bit (x64) agent.
The following instructions guide you through installing and configuring a reference implementation of the vRO Puppet plug-in using Puppet Enterprise 2016.4.2 and vRO/vRA 7.1. This implementation is designed to create a development environment with vRO, vRA, and Puppet running as quickly as possible in order to help you learn how these tools work together.
The reference implementation isn’t designed with production deployments in mind. Once you’re familiar with how the plug-in works, you can install it into your production vRO/vRA infrastructure and build compatible workflows and blueprints.
The plug-in works with many implementations of Puppet, vRO, and vRA, including vRO/vRA 6 and open source Puppet 4.6.2 and newer. While you can use these instructions to set up the plug-in with other versions of Puppet and vRO/vRA, we recommend using this reference implementation for development.
Note: If you’re already experienced with Puppet, vRO, vRA, and the vRO Puppet plug-in, see part 3 for a quick reference of properties and usage.
Install Puppet Enterprise on a VM or server. This will serve as the Puppet master server, and must be accessible over the network from the vRO appliance or server.
Note: For this reference implementation of the vRO plug-in, this must be a new, clean installation of Puppet Enterprise with Code Manager disabled.
Add the vRO Puppet plug-in starter pack content by cloning its GitHub repository into an environment.
On a clean installation, run the vRA node classification setup script with root privileges from within the cloned repository:

```
cd puppet-vro-starter_content
sudo bash scripts/vra_nc_setup.sh
```
Note: This script doesn’t randomize the UUID for the classification group it creates, so running it more than once creates or replaces the same group rather than creating duplicate groups.
The starter content repository provides reference implementations of Puppet roles and profiles for Linux and Windows web server stacks, utility scripts to prepare the master server for vRO, and a templated autosigning script. Once you understand how Puppet, vRO, and vRA work together, you can use these reference implementations to help build your own Puppetized vRO/vRA implementations.
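The roles-and-profiles pattern that these reference implementations follow can be sketched in Puppet code as below. The class names and the `default_vhost` parameter here are illustrative, not the exact code shipped in the repository:

```puppet
# Hypothetical role for a Linux web server stack. A role describes a node's
# business purpose by composing profiles, and nothing else.
class role::linux_webserver {
  include profile::base    # baseline configuration shared by all nodes
  include profile::apache  # the web server piece of the stack
}

# Hypothetical profile wrapping a component module with site-specific data.
class profile::apache {
  class { 'apache':
    default_vhost => false,
  }
}
```

In vRA blueprints, the fully qualified role class (here, `role::linux_webserver`) is the kind of value the `puppetRoleClass` property points at.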
If you’re already experienced with Puppet, vRO, and vRA, you can replace this reference implementation with your own code or control repository.
Ensure that the Puppet master has a valid DNS hostname and NTP configured. If you don’t have or use a DNS server, provide a valid hostname for the server’s IP address in the master server’s hosts file (typically `/etc/hosts`).
Note: Make sure that a hostname is properly configured on the machines you’re installing PE on. All nodes must know their own hostnames. You can accomplish this by properly configuring reverse DNS on your local DNS server, or by setting the hostname explicitly. Setting the hostname usually involves the `hostname` command and one or more configuration files; the exact method varies by platform.

Additionally, all nodes must be able to reach each other by name. You can accomplish this with a local DNS server, or by editing the hosts file on each node (such as `/etc/hosts` on a Linux node) to point to the proper IP addresses.
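For example, hosts file entries on each node might look like the following; the IP addresses and hostnames are placeholders for your own values:

```
# Example /etc/hosts entries (substitute your own addresses and names)
192.0.2.10   puppet.example.com   puppet
192.0.2.20   vro.example.com      vro
```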
Initiate a Puppet run on the master server:
```
sudo puppet agent -t
```
The vRO starter content creates a user account on the Puppet master (`vro-plugin-user`, default password `puppetlabs`) and adds rules to the sudoers file allowing it to run commands with elevated privileges, as required by the plug-in.
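As an illustration of what such access looks like, a sudoers rule of this general shape grants it. This is not the literal rule the starter content installs; inspect the sudoers file after setup to see the actual entries:

```
# Illustrative only; the starter content installs its own rules.
vro-plugin-user ALL=(ALL) NOPASSWD: ALL
```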
It also adds the following settings to the master’s SSH daemon configuration (`/etc/ssh/sshd_config`):

```
PermitRootLogin yes
PasswordAuthentication yes
ChallengeResponseAuthentication no
```
Note: If you do not allow a sudo-capable user to run commands for vRO (for instance, if you remove the `vro-plugin-user` account or revoke its sudoers privileges), you must provide vRO with remote access to a user account on the master with those capabilities, or to the master’s root user, which is insecure.
Note: If you haven’t yet installed vRO, refer to the vRO documentation.
Download the Puppet plug-in’s .vmoapp package from the VMware Solution Exchange.
Log in to the vRO server’s control center.
Click the Plugins tab.
Click the Install plug-in button.
Read and accept the EULA, then click Install.
Install the vRO Puppet plug-in’s .vmoapp package downloaded from the VMware Solution Exchange.
After the installation confirmation message appears, click the Startup Options link in the message reminding you to restart the Orchestrator server.
On the Startup Options page, click the Restart button under the Current Status heading.
From the main vRO web interface page, click the Start Orchestrator Client link to download the Java vRO client.
Open the client from the location where you downloaded it.
You can confirm that the Puppet plug-in content is available by opening the Library/Puppet folder in the workflows tab of the client’s left pane.
vRO must be made aware of a Puppet master server by adding it to the inventory.
In the vRO client, open the workflow tab.
Navigate to Library/Puppet/Configuration
Click the Add a Puppet Master workflow.
Click the Run button in the right pane to start the workflow.
A window appears with common parameters required for vRO to access the master, such as the master’s display name, hostname or IP address, SSH port, and SSH/RBAC credentials.
Fill out the fields, then click Save.
Note: If you do not provide a root user’s credentials, choose Yes for the “Use sudo for shell commands on this master” option. The vRO plug-in must be able to run elevated shell commands on the master to perform most tasks.
After clicking Save, the workflow begins to run.
Once the workflow has finished, you can view the master server in the inventory tab of the vRO client’s left pane, under Puppet.
You can confirm that the process successfully finished by selecting the master server and confirming that the detected Puppet product and version facts appear.
Tip: To troubleshoot unexpected results, open the workflow’s results in the workflow tab, and then consult the Logs tab in the right pane.
Before you can run workflows in vRO, you need to set an active Puppet master.
In the vRO client’s workflows tab, open the EventBroker Install PE agent workflow.
In the right pane, click the Pencil icon to edit the workflow.
In the workflow’s General tab, edit the value of the activePuppetMaster attribute.
A window appears containing valid items from your inventory.
Select the Puppet master you previously added from the inventory, then click Select.
Save and close the workflow.
Repeat the above process for the EventBroker Purge PE Agent Node workflow.
Note: If you haven’t yet installed vRA, refer to the vRA documentation.
Once vRO and the Puppet plug-in are configured, you can use vRealize Automation (vRA) to request servers using blueprints.
Out of the box, the vRO Puppet plug-in provides reference blueprints (PE Linux Webserver and PE Windows Webserver) that deploy a basic web server stack in a vSphere VM.
By requesting a VM with one of these blueprints, Puppet, vRO, and vRA together automatically create the VM, install the Puppet agent, autosign its certificate, apply the assigned Puppet role, install the required Puppet modules and the software they configure, and set up the server for immediate use.
To build a VM via one of these blueprints:
From the Catalog tab in the vRA web interface, request a PE Linux Webserver or PE Windows Webserver.
Review the VM’s general traits by clicking on the VM (for instance, CentOS_7_vSphere_VM for the PE Linux Webserver). Note that in addition to the CPU, memory, and storage settings, the Puppet plug-in adds puppetCodeEnvironment and puppetRoleClass traits with values provided by the blueprint.
vRA will begin provisioning the server. You can track its progress in the Infrastructure tab, where the Status column will update with each step of the provisioning process.
You can also view logs of the run in the Logs tab of the workflow’s result in the vRO client.
Note: For detailed information about designing vRA blueprints, consult the vRA documentation.
The provided blueprints and workflows serve as reference implementations of the vRO Puppet plug-in’s capabilities. The starter pack content contains sample Linux and Windows web server blueprints, as well as the Puppet modules, roles, and profiles that power those workflows. Consult or modify these blueprints and starter content when designing your own vRO workflows and vRA blueprints.
Puppet plug-in features are available to blueprint designers as Properties (such as “Linux Puppet VM”) in the vRA blueprint designer.
After adding a Puppet plug-in property to a blueprint, new merged properties will be available. For instance, you can set the autosigning shared secret, SSH password, and SSH username as properties of a Linux Puppet VM.
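For instance, the Puppet-related custom properties on such a blueprint take the general form below. The property names come from the reference table later in this guide; the values shown are examples, not requirements:

```
# Example vRA custom property values on a Linux Puppet VM (illustrative)
Puppet.RoleClass              = role::linux_webserver
Puppet.CodeEnvironment        = production
Puppet.Autosign.SharedSecret  = <your-shared-secret>
Puppet.SSH.Username           = vro-ssh-user
```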
The vRO Puppet plug-in’s workflows are also available to the workflow subscription designer, allowing you to associate events and conditions with workflows as part of a blueprint.
The vRO Puppet plug-in uses the following properties for blueprint and workflow development:
| vRO property | vRA property name | Type | Description |
| --- | --- | --- | --- |
| `puppetRoleClass` | `Puppet.RoleClass` | string | The fully qualified Puppet class that implements the node’s role. |
| `puppetCodeEnvironment` | `Puppet.CodeEnvironment` | string | The environment on the Puppet master in which vRO should look for Puppet code. |
| `puppetNodeCertname` | `Puppet.Node.Certname` | string | The Puppet agent sets this based on the node’s `certname` setting. |
| `puppetAutosignSharedSecret` | `Puppet.Autosign.SharedSecret` | secureString | The shared secret that nodes should provide to the Puppet master in order to autosign certificate requests. |
| `sshUsername` | `Puppet.SSH.Username` | string | Username used to connect to a node via SSH. |
| `sshPassword` | `Puppet.SSH.Password` | secureString | Password used to connect to a node via SSH. |
| `winRMUsername` | `Puppet.WinRM.Username` | string | Username used to connect to a node via WinRM. |
| `winRMPassword` | `Puppet.WinRM.Password` | secureString | Password used to connect to a node via WinRM. |
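On the agent side, Puppet’s policy-based autosigning conventionally carries a shared secret in the certificate request as a `challengePassword` attribute, supplied through the agent’s `csr_attributes.yaml` file before the first Puppet run. The secret below is a placeholder:

```yaml
# /etc/puppetlabs/puppet/csr_attributes.yaml on the agent (placeholder value)
custom_attributes:
  1.2.840.113549.1.9.7: "your-shared-secret-here"
```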
Securing passwords used in the manifest is beyond the scope of this reference implementation. As a starting point, many Puppet deployments use Hiera, a key/value lookup tool for configuration, with eyaml, or encrypted YAML, to solve this problem. This solution not only provides secure storage for the password value, but also provides parameterization to support reuse, opening the door to easy password rotation policies across an entire network of nodes.
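As a sketch of that approach, assuming the hiera-eyaml backend is configured, a password stored in an eyaml data file looks like the following. The key name and file path are examples, and the ENC blob is a truncated placeholder produced by `eyaml encrypt`:

```yaml
# data/common.eyaml (example path); value produced by `eyaml encrypt`
profile::webapp::db_password: >
  ENC[PKCS7,MIIBeQYJKoZIhvcNAQcDoIIBajCCAWYC...]
```

Puppet code then reads the value with an ordinary `lookup()` call, so rotating the password means re-encrypting one value rather than editing manifests.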
Workflows that purge nodes, such as the EventBroker Purge PE Agent Node workflow, use the `puppet node purge` command. Purged nodes are not added to the Puppet master’s Certificate Revocation List (CRL), which means they might still be able to connect to the master after being purged.
After purging a node, you must run the following on the master to update the CRL:
```
puppet cert clean -y <NODE-HOSTNAME>
pgrep -f puppet-server | xargs kill -HUP
```
For known issues and details about specific plug-in releases, see the release notes.