
Extending Enterprise Hybrid Cloud automation capabilities with Puppet Enterprise

If you read Curt Stalhood's A guide to Puppet integration with VMware vRealize Automation, you saw one possible way of integrating Puppet Enterprise with vRealize Automation (vRA) as a part of EMC’s Enterprise Hybrid Cloud (EHC) solution. The integration relies on the vRA Guest Agent. One of the caveats is that the vRA Guest Agent is not available for all the operating systems that Puppet Enterprise supports.

Andreas gave a short presentation of the integration at an internal EMC event in Paris, with step-by-step instructions on how to set it up. Karol was present at the meeting, and after Andreas' session, told him about another option for integrating Puppet Enterprise with vRA: the vRealize Orchestrator (vRO) Puppet plugin from VMware. This option has some additional benefits:

  • It is a more efficient integration of Puppet. The plugin can be used to automatically install the Puppet agent, sign the certificate on the Puppet master server and configure the newly provisioned node using manifests or Hiera classification.
  • The plugin works for both Linux and Windows virtual machines (VMs) and can be used with a broader range of operating systems than the vRealize Automation guest agent.
  • Integration with vRO extensibility is the recommended approach going forward for integrating vRealize Automation with external management tools, and this approach is leveraged in EMC’s Enterprise Hybrid Cloud solution.

We (Karol and Andreas) have developed and tested this integration in a virtual hands-on lab environment for the EHC solution version 3.5, built using the following components:

  • VMware vRealize Automation 6.2.3
  • VMware vRealize Orchestrator 6.0.2
  • Puppet Enterprise 2015.3.3 or 2016.1.1 (we started with 2015.3.3)

Enterprise Hybrid Cloud and Puppet Enterprise

The Enterprise Hybrid Cloud is an engineered solution targeted to enterprise customers who want to execute on a hybrid cloud vision. The solution combines hardware, software and services from EMC, VCE, VMware, Pivotal and certified partners for rapid implementation. These tight integrations allow customers to accelerate time to value, while reducing cost and risk.

A lack of in-house resources and expertise is the top challenge many organizations face when considering cloud adoption, according to various studies, including the RightScale 2016 State of the Cloud Report. Our solution aims to address these challenges by removing the complexity for enterprises. Rather than attempting to build the cloud on their own, IT teams can focus on aligning a proven cloud technology with their business. Thousands of engineering hours were spent designing, integrating and testing this solution in EMC’s labs. You can learn more about this solution and its benefits by visiting this website.

Enterprise Hybrid Cloud and Puppet Enterprise are a powerful combination. Enterprise Hybrid Cloud makes it easy to build a private or hybrid cloud, define a service catalog and expose it through a self-service portal with the appropriate governance measures for enterprise customers. It delivers out-of-the-box infrastructure-as-a-service (IaaS) services with differentiated service levels in terms of availability and performance, and provides cost transparency for them.

Puppet Enterprise, available as one of the solution integrations, extends the automation capabilities of the solution, with linkages to thousands of modules available at Puppet Forge and maintained by Puppet and the community. These can be used to automate provisioning of business applications, middleware components and services that are used by enterprises. PE significantly improves compliance and eliminates configuration drift, while allowing you to update applications and services automatically over time.

Customers already using Puppet Enterprise or the open source Puppet distribution can bring Puppet's value to their Enterprise Hybrid Cloud environment. The ability to manage configuration of hundreds, thousands, even tens of thousands of nodes from the same portal makes it much easier for IT operators to do their job.

The integration of Puppet with vRealize Automation through the vRO plugin simplifies orchestration by defining a workflow for automating:

  • Provisioning of VMs.
  • Installation of the Puppet agent on the VMs (nodes).
  • Signing of the node certificate at the Puppet master server.
  • Classifying the nodes.
  • Any needed remediation.

When a VM is de-provisioned, it can be automatically removed from the nodes managed by the master server for easy cleanup.

vRealize Automation, a key component of the Enterprise Hybrid Cloud solution, provides a secure self-service portal where your users can request new IT services and manage specific cloud and IT resources, while ensuring compliance with business policies. Requests for IT services, including infrastructure, applications, desktops, and many others, are processed through a common service catalog to provide a consistent user experience.

Integrating Puppet Enterprise with vRealize Automation

For those of you who would like to try it, the step-by-step details of how to configure the integration are described below. Please note that these descriptions outline the steps for configuring the integration with Puppet Enterprise; the steps for open source Puppet are slightly different.

Set up Puppet Enterprise 2015.3.3 or 2016.1.1

We start by setting up the Puppet Enterprise master server. From an architecture perspective, it can be located externally to Enterprise Hybrid Cloud or share the same infrastructure, but we need to enable network communication on three paths: between vRO and the Puppet master (over SSH), between vRO and the tenant VMs (over SSH for Linux; for Windows, you need to enable WinRM from the PowerShell host — more details later), and between the master and the tenant VMs (TCP port 8140 on the master).
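As a quick pre-flight sanity check, you can probe these TCP paths from the vRO host before going further. This is only an illustrative sketch — the hostnames below are placeholders for your own master and a tenant VM:

```shell
# Pre-flight connectivity checks for the integration, run from the vRO host.
# Hostnames are placeholders -- substitute your own master and a tenant VM.
for target in "puppet-master.example.com 22" \
              "puppet-master.example.com 8140" \
              "linux-node.example.com 22"; do
  host=${target% *}   # everything before the last space
  port=${target#* }   # everything after the first space
  if nc -z -w 3 "$host" "$port" 2>/dev/null; then
    echo "OK   $host:$port"
  else
    echo "FAIL $host:$port"
  fi
done
```

Any FAIL line points at a firewall, routing or name-resolution problem to fix before continuing.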

The tenant VMs will be managed by the Puppet agent, installed automatically from the master server at provisioning time. In a production Enterprise Hybrid Cloud implementation, the master server or cluster (depending on whether it is a monolithic or distributed deployment) would be hosted on a separate management cluster. As this step is pretty standard with no special requirements, we will not cover it in detail, but refer you to the official installation guide instead: Installing Puppet Enterprise Overview

For this integration demo we use the monolithic installation of Puppet Enterprise. Please check the system requirements for the different installation types before you start to install Puppet Enterprise.

Install and configure vRealize Orchestrator Puppet plugin

Next, we need to install the official Puppet integration plugin for vRealize Orchestrator. vRO is a powerful orchestration engine from VMware, included with the vCenter Server Standard license. It allows for visual creation of automation workflows, which can coordinate complex IT processes and integrate with external systems. It’s also distributed together with vRA, which is an important component of the Enterprise Hybrid Cloud solution, extending its integration and automation capabilities. The vRO Puppet plugin is available free of charge at the VMware Solution Exchange, together with its documentation.

The vRO Puppet plugin enables vRA and Puppet to work together, providing Puppet-based configuration management and application deployment capabilities to the catalog of services offered and managed in the vRA-based self-service portal. It supports both open source Puppet and Puppet Enterprise, and provides a collection of vRO workflows that allow for automated installation and configuration of the Puppet agent (on both Linux and Windows), as well as signing node certificates and classifying nodes using Hiera or manifest files.

Please note: The current version of the vRO Puppet plugin is officially compatible only with Puppet Enterprise 3.7.0 or 3.3 and open source Puppet 3.7.1 or 3.6.2. It uses Hiera or manifests for classification, and does not yet officially support Node Management, which was introduced with Puppet Enterprise 3.7. Puppet is now working with VMware to release a new version of the plugin that will also support Node Management, as well as the most recent releases of open source Puppet and Puppet Enterprise. However, as the plugin mostly uses SSH-based connections from vRO to the Puppet master, most of the workflows also work with the latest open source and Puppet Enterprise releases. By following the instructions described below, you can start using it within your test and development environment, without official support from Puppet or VMware.

vRA 6.2.3 and vRO 6.0.2 are included in the base configuration of Enterprise Hybrid Cloud 3.5 and should already be installed in your solution environment. We expect our instructions to work with a custom vRA deployment as well. If you need help, please follow the instructions at vRealize Automation - Installation and Configuration.

To install and activate the Puppet plugin, we need to open the vRO configuration interface (on TCP port 8283), choose the Plug-ins option from the menu, upload the downloaded file o11nplugin-puppet-1.0.0.vmoapp, and restart the vRO services. After a successful installation, the vRO configuration screen will look like this:

VRO Configuration

Now let’s move to the vRO Client interface to continue the configuration. The workflows installed with the plugin are grouped in the Puppet folder of the workflow library, as shown below:

VRO Puppet Folder

To pair up the Puppet master server with vRO, we need to run the Add a Puppet master workflow and verify the configuration with Validate a Puppet master. If both execute successfully, that means the basic integration is done and vRO is able to talk with the Puppet master server. If the workflow fails, please check your name resolution, time synchronization and also firewall configuration, to ensure vRO can connect to the master with SSH over TCP port 22.

Configure dynamic node classification in Puppet Enterprise console

We can now move to the next step and start configuring two sample service roles. We chose NGINX and Apache to be able to show two different catalog items within vRA.

We start with the configuration on the Puppet Enterprise side by using the node classifier and creating a dynamic classification that will simply get a role applied. With that, we follow the best practice of assigning configuration data using the roles and profiles concept. Profiles configure technology stacks in a site-appropriate way; a given node may have many profile classes included in its role. Role classes combine multiple profiles and the occasional lone resource to create complete node descriptions. A given node will have only one role class assigned to it.

For this integration, we create two simple roles. The first role is nginxwebserver, and we give it three profiles:

  • profile::nginx for installation of all NGINX necessary components
  • profile::security with security configurations like ssh and firewall
  • profile::baseconfiguration with basic operating system configurations.

Here you will see the configuration of the nginxwebserver role:

NGINX Webserver role

We did the same for our Apache webserver role:

Apache Webserver role

Here you can see the major benefit when doing the classification of your nodes with roles and profiles: Both roles can use the same profile and configuration for security (profile::security) and base configuration (profile::baseconfiguration). Both roles are saved within role manifest files inside our production directory environment.

You will find more details about the role and profile concept for classification in Assigning configuration data with role and profile modules.
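To make the pattern concrete (the actual role manifests only appear as screenshots above, so treat this as an illustrative sketch — the class and profile names follow the article, their contents are assumed), the nginxwebserver role could look like this:

```puppet
# Hypothetical sketch of the role manifest described above; the profile
# classes are the ones listed earlier, their implementations are assumed.
class role::nginxwebserver {
  include profile::baseconfiguration
  include profile::security
  include profile::nginx
}
```

The apachewebserver role would look the same, with profile::nginx swapped for an Apache profile — which is exactly why both roles can share the security and base-configuration profiles.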

For the next step, we need to install a module that distributes a custom fact via pluginsync; this fact helps us identify the role that should be applied to the system. It simply generates the custom fact myrole, containing the role that should be applied to the node.

Later you will see that we fill in this information by using a vRO workflow that tags the node with its role by dropping a file. For that, we use a module from the Puppet Forge called ehc_role, which is compatible with both Linux and Windows nodes. You can simply install it on your monolithic installation of Puppet Enterprise by running puppet module install andulla-ehc_role on your Puppet master. For production environments, we recommend installing and managing the module by using Puppet Enterprise Code Management.
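To illustrate the mechanism (the real implementation ships inside the andulla-ehc_role Forge module, so this is only an assumption-laden sketch), a fact like myrole on a Linux node can be as simple as a script that reads the file dropped by the vRO workflow:

```shell
# Sketch (assumption) of an external-fact-style script that reports the
# role dropped by the vRO workflow. On a real node ROLE_FILE would be
# /etc/myrole; when the file is absent we fall back to a demo value.
ROLE_FILE="/etc/myrole"
if [ -r "$ROLE_FILE" ]; then
  echo "myrole=$(cat "$ROLE_FILE")"
else
  echo "myrole=unclassified"
fi
```

On Windows, the same idea applies with %windir%\myrole.cmd, as described later.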

After installation and a first Puppet run for newly provisioned nodes with vRA (described later), you will see a new custom fact, myrole with information about the role that should be applied to the system. For example, apache_web_server:

Custom fact myrole

After the module is installed, we can start using the Puppet Enterprise node classifier to create two new dynamic groups. For the new Apache web server group, within the rules section we simply filter by the new custom fact myrole if the value of it is apache_web_server.

Apache Web Server Rules

Inside the Classes section, we simply added the new role role::apachewebserver.

Apache Web Server Classes

We did the same for the new NGINX web server group — within the rules section, we simply filtered for the new custom fact myrole if the value of it is nginx_web_server.

NGINX Web Server Rules

Inside the Classes section, we added the new role role::nginxwebserver.

NGINX Web Server Classes

With that, we created two dynamic groups. That means every new VM (Puppet node) provisioned with vRA will automatically have all configurations within the role applied, based on the custom fact myrole. You will find more about using the Puppet Enterprise node classifier and dynamic groups in Assigning configuration data with the PE console.

For simplification, we will not show all the configurations that will be applied within the different profiles. If you plan to manage Apache and NGINX web server instances, we recommend using the following Puppet Forge modules, which you can install by using the command puppet module install or by using Puppet Enterprise Code Management:

Tuning the integration vRO workflows to make them compatible with Puppet Enterprise 2016.1.1

In our approach, we leverage the vRO extensibility feature of vRA. That requires creating two vRO workflows, both of which will be executed automatically by the vRA portal at VM lifecycle state changes, after provisioning a VM and at disposal time. In the interest of time, we will not provide exact instructions on configuring these workflows, but rather describe them on a high level, and point you to the Github repository package with the fully configured workflows, which can be imported into vRO.

First we need to create a workflow that will configure a newly provisioned VM as a Puppet node. If you look into the Samples folder of the workflows library installed with the Puppet plugin, you’ll notice the following sample: Single Class - Install, Configure, Sign, Classify and Remediate Node. We will leverage this sample to do the heavy lifting. It automatically installs a Puppet agent within a guest OS of the provisioned VM (it doesn’t need to be preinstalled in a template), sends the certificate signing request to the master, signs the certificate at the master, classifies the node and remediates the node. Remediating the node means it will periodically run Puppet on the node to bring it into our desired configuration state.

If you look at the schema of this workflow, you’ll notice that it integrates several smaller workflows supplied with the plugin into an "uber" workflow, which automates the whole configuration process and includes error handling.

Modification for Linux nodes

One of the smaller workflows used as part of this uber workflow is Install Linux Agent with SSH, which, as the name suggests, installs the Puppet agent on a new node. If you try to use this workflow with the latest Puppet Enterprise release (2016.1.1), you’ll notice that it fails on the agent installation. We'll modify it slightly to correct this. Let’s duplicate the agent installation workflow with a new name — EHC Install Puppet Enterprise Linux Agent with SSH — and change the actual agent installation script by replacing the original block of code defining the install command with a much simpler one that uses curl to install the agent. Here's the original block of code:

var command = "puppet_repo_baseurl=" + System.getModule("com.vmware.o11n.plugin.puppet").escapeShellArgument(installerBaseUrl) +
  " puppet_hostname=" + System.getModule("com.vmware.o11n.plugin.puppet").escapeShellArgument(masterHostname) +
  " puppet_version=" + System.getModule("com.vmware.o11n.plugin.puppet").escapeShellArgument(masterVersion) +
  " bash " + installScriptPath;

var command

Below, you'll see the simpler code, employing curl for installation of the agent. This is a best practice, and you'll find more details about it in The OS/architecture of the Puppet master and the agent node are the same:

var masterHostname = master.host;
var command = "curl -k https://" + masterHostname + ":8140/packages/current/install.bash | bash";

Modification for Windows nodes

Configuration for Windows nodes requires special consideration. First of all, management of Windows nodes by the Puppet vRO plugin is accomplished by leveraging PowerShell and WinRM, instead of SSH. The node you would like to manage with the plugin should have PowerShell installed, and additional configuration should be done in the base template. For simplification, we will provide instructions for basic authentication, while in the production environment, Kerberos authentication should be used.

In the base Windows template for nodes, open an elevated PowerShell command line, and execute the following commands to enable WinRM:

winrm quickconfig
winrm set winrm/config/service/auth '@{Basic="true"}'
winrm set winrm/config/service '@{AllowUnencrypted="true"}'
winrm set winrm/config/winrs '@{MaxMemoryPerShellMB="2048"}'

Next you need to download the appropriate Puppet agent installer package for the managed nodes. You can get it from the Puppet Windows Agents download page. Upload the agent installer to a repository located on the master server, to the /opt/puppetlabs/server/data/packages/public/current/ directory. For the latest version of Puppet Enterprise (2016.1.1) and x64 version of the Windows operating system, the agent installation file is named puppet-agent-1.4.1-x64.msi. You can read more on installing agent packages in Installing Puppet Enterprise agents.

If you try the Windows agent installation workflow included in the vRO Puppet plugin, it will fail, attempting to download an old version of the Windows agent from the internet. Instead, let’s make a slight modification to correct this. First duplicate Install Windows Agent with PowerShell and save it with a new name — for example, EHC Install Puppet Enterprise Windows Agent with PowerShell. We'll comment out these lines defining the remote agent installation command:

var script = '$puppetEnterprise = ' + System.getModule("com.vmware.o11n.plugin.puppet.node").escapePowerShellValue(isEnterprise) + '\n' +
  '$puppetRepoBaseUrl = ' + System.getModule("com.vmware.o11n.plugin.puppet.node").escapePowerShellValue(installerBaseUrl) + '\n' +
  '$puppetVersion = ' + System.getModule("com.vmware.o11n.plugin.puppet.node").escapePowerShellValue(puppetVersion) + '\n' +

var script

Now we'll add these lines with a new script:

var masterHostname = master.host;
var script = '$webc = New-Object System.Net.WebClient \n' +
  '[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true} \n' +
  '$webc.DownloadFile("https://' + masterHostname + ':8140/packages/current/puppet-agent-1.4.1-x64.msi","C:\\Windows\\Temp\\pa.msi") \n' +
  'cmd /c msiexec /i C:\\Windows\\Temp\\pa.msi PUPPET_AGENT_STARTUP_MODE=Manual';

masterHostname

It’s a simple PowerShell script that leverages WebClient to download the agent installation file from the master over SSL (ignoring certificate errors), saves it to C:\Windows\Temp, and executes it in silent mode.

This modified version can be embedded into a modified version of the uber workflow automating the whole node classification process. Another thing we did to get this working in our nested virtualization lab environment was a minor modification of the timeout setting in the default Puppet plugin workflow, Remediate Windows Node with PowerShell. The script attribute of the workflow includes a PowerShell script. You will find it within the Attributes section of the workflow. The following two lines close to the end of the included script are changed. Here's the original:

# Sleep for 2 seconds to make sure all status/report files are readable
Start-Sleep -s 2

Here's the changed script:

# Sleep for 5 seconds to make sure all status/report files are readable
Start-Sleep -s 5

Extending the sample workflow to support Puppet Enterprise node classifier

Next, let’s make a duplicate of the Single Class - Install, Configure, Sign, Classify and Remediate Node workflow and give it a new name — for example, EHC PO and PE - Install, Configure, Sign, Classify and Remediate Node. We make this workflow compatible with the roles and profiles concept, and also with manifest and Hiera classification, so you can use it with both open source Puppet and Puppet Enterprise.

We create an action, EHC Get Puppet Version, that identifies which Puppet flavor you are using (open source or Enterprise) by checking which service account exists on the system (pe-puppet or puppet). We then add a check around the Linux agent installation so that the appropriate installation workflow is used for each flavor: for Puppet Enterprise, we replace the Install Linux Agent with SSH workflow with EHC Install Puppet Enterprise Linux Agent with SSH (the change workflow option is available in the schema editor with a right-click). We also remove the old Rake API classification capabilities (these were only used with PE 3.7 and PE 3.8) and replace them with two workflows that copy the role information of the newly provisioned node to the node itself: EHC Save Role to Linux Node for Linux nodes, and EHC Save Role To Windows Node for Windows nodes. The Linux workflow saves the role information in a file called /etc/myrole, and the Windows workflow in a cmd script, %windir%/myrole.cmd. Both are picked up by the custom fact myrole we installed in the last step. Finally, we change the presentation view of the workflow so you can specify the role of the node.
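The idea behind the version check can be reduced to a simple account test. Here's a hedged shell equivalent of what EHC Get Puppet Version does (the real action runs over SSH via the plugin; this sketch only illustrates the decision logic):

```shell
# Sketch of the idea behind the EHC Get Puppet Version action: Puppet
# Enterprise creates a pe-puppet service account on the master, while the
# open source packages create a puppet account.
if id pe-puppet >/dev/null 2>&1; then
  echo "enterprise"
elif id puppet >/dev/null 2>&1; then
  echo "opensource"
else
  echo "unknown"
fi
```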

You can test this modified integration workflow by executing it manually on a sample provisioned VM in an environment, before integrating with vRA. In our case, we have successfully tested it with the following input parameters:

  • Puppet master: should be the FQDN or IP of your Puppet master.

  • Environment: should be the Puppet environment of the node.

  • Node classification type: Here we select the PE node classifier type of classification. You can also use manifest- or Hiera-based classification if you don’t plan to use the dynamic group classification we set up with the Puppet Enterprise node classifier. Please note that if you are using Code Management, you should not use the manifest or Hiera classification types, as their files will be overwritten with each change inside your control repo.

  • Node role within Puppet Enterprise: This should be the value of the role that the node classifier will use to identify the role to apply — for example, apache_web_server.

Puppet Master Information

Within the node information section, you have to specify the following:

  • Puppet agent setup action: You can select between the following options:
    • Install, configure and sign
    • Configure and sign
    • Sign

As we want to test the whole process, we will select the option Install, configure and sign.

  • Invoke a single Puppet run: We choose Yes to trigger a Puppet run after the installation, configuration and signing of the certificate.
  • Machine type: We specify Linux. You can also select Windows if you plan to test this workflow with a Windows node.
  • Hostname: The hostname to connect to the new Puppet node. Please make sure your DNS is configured correctly.
  • Username and Password: The username and password necessary to connect via SSH or WinRM (if Windows) to the node.
  • Installer base URL override: This parameter is not necessary, as we already changed the workflow to install the Puppet Enterprise Linux agent so it will always use the Puppet master FQDN. If you plan to use it with open source Puppet, you have to specify the base URL here.
  • Register Puppet master address in hosts file: We specify yes to make sure the FQDN of the master can be resolved from the new Puppet agent.

Node Information

Within the classification information section, we specify the following:

  • Replace assigned classes: No, in order not to override already-classified manifest files for the node (if they exist). This setting will not be used, as we are classifying the host with a role rather than with classes.
  • Class name: Here we specify the class name of the class we used within our site.pp file of the environment. We don’t need to configure this, as we classify the nodes with roles.
  • Class parameters: Here you could specify parameters for the class(es) you specified within the class name parameter. We don’t have parameters in our role, so no need to specify anything.

Classification Information

Creating a workflow for removing a node from Puppet Enterprise management

We also need to configure a workflow that will remove the node from the Puppet Enterprise management console when it ends its lifecycle and is destroyed. The current vRO Puppet plugin doesn’t contain a workflow that purges a node, but we can easily create one by duplicating and modifying the Clean Node Certificate workflow from the Node Management folder, and saving it as, for instance, EHC Puppet Purge Node.

Slightly modify these two lines of code in the workflow:

var result = System.getModule("com.vmware.o11n.plugin.puppet").executeCommand(master, "puppet", ["cert", "clean", nodeName, "--color=false"]);

cert clean command

Here's what the lines should be changed to:

var result = System.getModule("com.vmware.o11n.plugin.puppet").executeCommand(master, "puppet", ["node", "purge", nodeName]);

node purge command

And this code should be changed:

throw "Failed to clean cert. nodeName=" + nodeName + " exitCode=" + result.exitCode + " output=" + result.output + " error=" + result.error;

Failed to clean cert

Here's what it should be changed to:

throw "Failed to purge node. nodeName=" + nodeName + " exitCode=" + result.exitCode + " output=" + result.output + " error=" + result.error;

Failed to purge node

Again, you can test this modified integration workflow by executing it manually on the same sample provisioned VM configured as a Puppet node, before integrating with vRA. Please note that with this workflow, we only trigger the puppet node purge command on the Puppet master, as the VM will be deleted by vRA. We are not restarting the pe-puppetserver service, which is currently required when you remove a node (to update the CRL). If you don’t use vRA, you can find additional information and steps on how to deactivate a node in Puppet Enterprise in Removing Nodes.

Install vRO extensibility for vRealize Automation

Next we need to install vCO customization in vRA, in order to be able to execute vRO workflows at different stages of the lifecycle of the VM that is managed by vRA. This capability is offered by the vRA plugin for vRO. You'll find more information in Machine Extensibility.

This plugin is preinstalled on an integrated vRO instance inside the vRealize Automation appliance. In Enterprise Hybrid Cloud, we’re leveraging an external vRO instance, and this plugin is installed and partially configured during solution implementation. (vRA and IaaS hosts are added — you can find more details on Page 14 of the linked Machine Extensibility PDF). What is not done by default is the installation of the vRO customization. We can do that by selecting Library > vRealize Automation > Infrastructure > Extensibility (in the previous release: Library > vCloud Automation Center > Infrastructure Administration > Extensibility) and running the Install vCO customization workflow.

Create VM lifecycle extensibility workflows for Puppet Enterprise integration

In order to successfully leverage the Puppet integration workflows we configured in the previous steps, we need to execute them with the right parameters passed to vRO from the vRA portal. Probably the easiest way to do that (rather than modifying them) is to encapsulate them in new workflows, which take the input properties from the portal, parse and, where needed, transform them into the correct format, and then execute the integration workflows with the right parameters — all without any modification.

The vRA plugin for vRO comes with a sample extensibility workflow called Workflow template in the extensibility folder (vCloud Automation Center > Infrastructure Administration > Extensibility). It demonstrates how we can parse the properties passed from the vRA portal in vRO, and it can be used as a starting point for developing any workflow for VM lifecycle extensibility.

Let’s duplicate this sample with a more meaningful name — for example, EHC Puppet Integration - Provisioning Linux — then add our modified version of the EHC PO and PE - Install, Configure, Sign, Classify and Remediate Node workflow before the end, and add the following few lines of code at the end of the Display Inputs scriptable task workflow item:

hostname = vCACVmProperties.get("VirtualMachine.Network0.Address");
nodeName = vCACVm.virtualMachineName;
puppetRole = vCACVmProperties.get("Puppet.Role");

These lines take three properties passed from the vRA portal and set up the right input parameters for our integration workflow:

  • hostname used for establishing SSH connectivity is set to the VM IP address
  • nodeName is set to the VM name
  • puppetRole is set to the name of the role we would like to classify the node with (we will manually configure this property in the vRA service blueprint later).

The complete “EHC Puppet Integration - Provisioning Linux” workflow schema may look like this:

Provisioning Linux

In this example, we’ve added two additional items that wait for VMware Tools to start in the newly provisioned VM before running the actual Puppet Enterprise integration workflow.

Similarly we can create the EHC Puppet Integration - Deprovisioning workflow, which doesn’t need to include any additional waiting task:

Puppet Purge Node

Configure the sample vRealize Automation service blueprints

Now it’s time to configure the blueprints for our sample Apache and NGINX Web Server services. At this time, you need to have a preconfigured Linux VM template with VMware Tools and customization specification already defined within your VMware vSphere environment.

Let’s log in to vRA as a user with Service Architect privileges, configure a new vSphere IaaS blueprint with a configuration similar to the one shown in the screenshots below, and publish it. We assume that the reader has at least basic vSphere and vRealize Automation administration skills:

Edit Blueprint Information

Edit Blueprint Build Information

Now we need to associate our two workflows with the appropriate lifecycle events for this blueprint. We can do that by switching back to the vRO client and executing the Assign a state change workflow to a blueprint and its virtual machines workflow from the Extensibility folder.

Let’s assign the integration workflow to the MachineProvisioned state by providing workflow input parameters similar to the ones below:

vCloud Automation Center

When asked for the blueprint mapping, expand the tree from the registered vRA IaaS host and add the right blueprint from the Blueprints folder:

Blueprint mapping

Finally, choose the workflow to be executed at the change state event:

vCenter Orchestrator workflow

Similarly, we assign the deprovisioning workflow to the MachineDisposing state by running the same workflow again with the appropriate input parameters.

After running these workflows, let’s go back to vRealize Automation, edit the service blueprint again and open the Properties tab. You’ll notice two blueprint properties that were created automatically and associate our integration workflows with the blueprint. The values of the ExternalWFStubs.MachineDisposing and ExternalWFStubs.MachineProvisioned properties are simply the unique IDs of our integration workflows, which you can verify in the vRO client.
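Since those property values are plain vRO workflow IDs (standard UUIDs), a quick sanity check when provisioning doesn’t trigger the workflow is to confirm the property value at least has the right shape before hunting for it in the vRO client. A hypothetical helper for that check might look like this:

```javascript
// Sketch only: check that a blueprint property value looks like a
// vRO workflow ID (a standard 8-4-4-4-12 hexadecimal UUID).
function looksLikeWorkflowId(value) {
    return /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i.test(value || "");
}
```

A value that fails this check usually means the property was edited by hand or pasted with extra whitespace.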

What we need to do now is to manually add one more property — Puppet.Role — to pass the role name myrole to the integration workflow:

Edit Blueprint vSphere

Create the self-service catalog item and test the service

All required configuration is done. Now we can save the modified blueprint, create the catalog item and entitle users to consume it:

Self Service Portal

We can test whether the integration was successful by ordering the item from the service catalog, opening a web browser to the IP address of the provisioned item, and verifying in the Puppet Enterprise web console that the node has been registered and successfully remediated.

The following video demonstrates the configured integration from our lab environment:

<iframe width="420" height="315" src="https://www.youtube.com/embed/LWhJliKH1wg" frameborder="0" allowfullscreen></iframe>

Those of you who will be at EMC World 2016 next week may want to try the integration between Enterprise Hybrid Cloud and Puppet Enterprise in the hands-on lab.

Quick note on vRealize Automation 7 compatibility

In December 2015, VMware released a new generation of the vRA product, numbered 7.0. This new version includes enhanced Lifecycle Extensibility through the use of an event broker. This new capability can also be leveraged to integrate with Puppet Enterprise, but we couldn’t test that in our lab environment. The integration documented here should be beneficial for customers using both vRA 6.x (included in Enterprise Hybrid Cloud 3.5) and vRA 7 (which will be adopted as a part of the solution later this year).


Enterprise Hybrid Cloud and Puppet Enterprise make a powerful combination, allowing our customers to quickly benefit from the fully automated delivery of dozens of business applications, middleware software and services, all exposed in a self-service catalog.

By extending the solution with Puppet Enterprise, customers can quickly get access to thousands of Puppet-supported and community-supported modules published on the Puppet Forge. Puppet Enterprise simplifies not only provisioning of applications and services, but also Day 2 operations, automating updates as well as compliance by eliminating configuration drift.

This translates into greater business value. You get agility, standardization and reduced operational costs, plus more flexibility and choice. The integrations described in this article might be beneficial to other vRealize Automation and Puppet customers, including those leveraging the open source version of Puppet.


Karol Boguniewicz is a technical marketing manager focused on the Enterprise Hybrid Cloud solution at EMC, and is also a VMware vExpert. You can find him on Twitter as @cl0udguide and visit his personal blog at cl0udguide.wordpress.com.

Andreas Wilke is a technical solutions engineer at Puppet. You can find him on Twitter as @AutomateCloud or on Google+.
