Published on 23 November 2015 by Stephanie Stouck

As part of our Application Orchestration Webinar Series, we recently hosted a webinar on how a DevOps approach offers enterprises a way to accelerate application delivery. The replay is available below. A lot of great questions came up during the Q&A, so we decided to share the most common ones here and provide some added detail.

With Puppet Application Orchestration do I have to re-architect everything? Do I have to get rid of my roles and profiles? Do I have to get rid of the modules I have already written or have pulled down from the Forge?

The really short answer is “no.” You can reuse your profiles and modules very easily. You don’t need to re-architect or rewrite your Puppet code. You just add a few new words to your language.

There are more than 3,700 modules on the Forge that provide code you can start using right away with Puppet Application Orchestration. If you’ve already got modules in-house, just plug them into this new framework to run Puppet in a defined order and exchange information between the different parts of your application.

Something that used to be tricky, such as making sure database connection information is accessible to your application servers, is now just part of an export-and-consume relationship: the database exports a capability that the app servers consume, and because of that relationship, Puppet knows exactly what order to run things in.
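To make that concrete, here is a minimal sketch of what an application definition with an export-and-consume relationship can look like. The myapp name, the component defined types (myapp::db, myapp::web) and the Sql capability resource are illustrative assumptions rather than code from the webinar; check the Puppet Application Orchestration documentation for the exact syntax in your release.

    # A sketch of an application definition. The myapp::db and myapp::web
    # defined types and the Sql capability resource are assumed to be
    # defined elsewhere; only the export/consume wiring is shown here.
    application myapp (
      $db_user,
      $db_password,
    ) {
      # The database component exports an Sql capability carrying the
      # connection information other components need.
      myapp::db { $name:
        user     => $db_user,
        password => $db_password,
        export   => Sql[$name],
      }

      # The web component consumes that capability, so Puppet knows the
      # database must be configured first and can hand the connection
      # details to the app servers.
      myapp::web { $name:
        consume => Sql[$name],
      }
    }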

You take the same graph-based capability we have been applying within a node for a decade and bring it out across nodes to describe a whole application, whether it runs on one development node or is scaled out to 10, 50 or however many nodes make up your full production application.
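As a rough illustration of that scaling, the same hypothetical myapp model from the sketch above could be instantiated in the site block on a single development node or spread across separate production nodes; the node names and credentials here are made up.

    # Mapping the same application model onto nodes in the site block.
    site {
      # Everything on a single development node ...
      myapp { 'dev':
        db_user     => 'app',
        db_password => 'devsecret',
        nodes       => {
          Node['dev01.example.com'] => [Myapp::Db['dev'], Myapp::Web['dev']],
        },
      }

      # ... or the same model spread across separate database and web nodes.
      myapp { 'prod':
        db_user     => 'app',
        db_password => 'prodsecret',
        nodes       => {
          Node['db01.example.com']  => Myapp::Db['prod'],
          Node['web01.example.com'] => Myapp::Web['prod'],
        },
      }
    }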

How can I run Puppet Application Orchestration using client tools from another machine? Can I be on a different agent machine and run Puppet Application Orchestration or is this something I have to do from the Puppet master?

In Puppet Enterprise, the job runner and orchestration components are set up through our role-based access control (RBAC) system. You can set up a user account with permissions to get a token and run the Puppet orchestrator. To log in, you use the puppet access tool and enter your username and password; it then saves the token used for these commands. From there you can see your job list, see your apps, and run the commands.

This can be done from any machine in your environment. You just need the Puppet Enterprise agent and these client tools installed on it, as well as a corresponding account in Puppet Enterprise RBAC; that can be a user derived from LDAP or Active Directory. We use a token-based system: each token has a lifetime that determines how long the user is authorized to interact with the orchestrator, running jobs, listing jobs, and seeing the status of jobs.
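A rough sketch of that workflow from an agent machine with the client tools installed follows; the application instance name is hypothetical, and exact arguments may vary by release, so check each command’s --help output.

    # Request and cache a token (prompts for your RBAC username and password).
    puppet access login

    # With a valid token, inspect and drive the orchestrator from any
    # machine that has the client tools installed.
    puppet app show                    # list the applications modeled in the environment
    puppet job list                    # list orchestration jobs and their status
    puppet job run My_app['staging']   # start a deployment run (hypothetical instance name)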

In our first release of Puppet Application Orchestration, you will have the ability to give a user orchestration access. This will become even more fine-grained as we enhance it over time, providing the ability to restrict users down to a particular set of applications. Good things to come!

How does Puppet Application Orchestration fit into the processes I already have in place?

If you are already doing builds of Java apps, for example, and producing WAR files, you may already know that Puppet can manage the result: you might have a WAR file you deploy onto Apache Tomcat that Puppet manages for you. Is Puppet Application Orchestration the right solution to actually build that WAR file? Probably not.

You won’t need to change your traditional build processes for your app components. You might be building Java archives, system packages and so on; those can still be built with your traditional pipelines. What application orchestration adds is the ability to take those outputs, together with the Puppet code you’ve been using to configure them on systems, and string them together into a full multi-tier application instead of focusing on a single node or just the app or database tier.
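As one hedged example, a profile might hand a pipeline-built WAR file to Tomcat using the puppetlabs-tomcat module from the Forge; the artifact URL and paths below are hypothetical, and the Tomcat installation itself is assumed to be managed elsewhere in the profile.

    # Deploy a pre-built WAR produced by an existing CI pipeline,
    # assuming the puppetlabs-tomcat Forge module is available.
    class profile::myapp_web {
      tomcat::war { 'myapp.war':
        catalina_base => '/opt/tomcat/myapp',
        war_source    => 'http://artifacts.example.com/myapp/myapp.war',
      }
    }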

Now you can think about it as a complete application with all of the pieces brought together and truly orchestrated. In the past, users may have solved this problem by running Puppet alongside another tool, then parsing that information to determine the next thing to do. By having this within Puppet itself, you have one tool and one system that can manage everything from the core OS packages all the way up to the application level. You get information about each individual node, and you also get the model that ties everything together.

This gives you in-depth insight into how your application is built: what operational resources it consumes, how those operational resources are themselves provisioned, and the model that ties them together. Through that model you can figure out the right way to deploy changes.

In fact, we represent that model as a node graph, showing all the components, including packages, users, services and so on, that go into your node’s configuration. For the past 10 years, Puppet has done this by building a graph and then ensuring the node matches exactly its modeled, or declared, state. You can check everything along this graph and simulate what the changes would be, or Puppet can enforce them.
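That simulate-or-enforce choice maps to Puppet’s noop mode, for example:

    # Simulate: build the graph and report what would change, without changing anything.
    puppet agent --test --noop

    # Enforce: apply the catalog and bring the node to its declared state.
    puppet agent --test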

We are taking this capability and bringing it from within one node out across nodes. If you start thinking of one node in the graph as a database server and another as an application server, Puppet is managing all of this complexity within each of them, and you can see the dependencies and understand how they relate to each other. When one thing changes in a way that affects another, Puppet already knows, because we’ve got a model for that.

What does this have to do with the Hiera data configurations I already have?

Puppet Application Orchestration is additive to the Puppet language. The things you use and depend on in Puppet today, such as Hiera, still work. If you keep data sectioned off by environment, data center, node, or node role in Hiera, you can keep using it as your data layer with application orchestration as well. It is a purely additive capability that you get with Puppet Enterprise 2015.3.
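For reference, a data layer sectioned off that way might look like the following Hiera 3-style hiera.yaml (the format shipped with Puppet at the time); the ::role and ::datacenter facts are assumed custom facts and the paths are illustrative.

    # A sketch of a Hiera 3 hierarchy sectioned by node, role, data center
    # and environment; ::role and ::datacenter are assumed custom facts.
    :backends:
      - yaml
    :hierarchy:
      - "nodes/%{::clientcert}"
      - "roles/%{::role}"
      - "datacenters/%{::datacenter}"
      - "%{::environment}"
      - common
    :yaml:
      :datadir: "/etc/puppetlabs/code/environments/%{::environment}/hieradata"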

Is this a Puppet Enterprise-only feature? How does this relate to Open Source Puppet?

Ultimately, everyone (open source and Puppet Enterprise users alike) will be able to use the new extensions to the Puppet DSL to model distributed applications. The language, the environment graph, the service resources -- those are all deliberately part of the Puppet platform. We expect great content on the Forge from every member of our community, commercial and open source, including full application models, just like users contribute and download modules today.

We’re taking advantage of the Puppet Enterprise platform to roll out the first wave of application orchestration capabilities, with a focus on what teams need to orchestrate application deployments across the infrastructure they manage.

In the future we will release tooling around the core language, including the command line that works with jobs, to make it straightforward for Open Source Puppet users to orchestrate the deployment and ongoing management of applications and the underlying infrastructure.

In Summary

There are many other fantastic questions covered in the webinar recording. Be sure to check out the on-demand webinar to see for yourself how to easily model your application infrastructure to make installations, upgrades and ongoing management repeatable and reliable.

I hope you try the new application orchestration capabilities that will be available soon in Puppet Enterprise 2015.3 (even if it’s just with a free trial) and share your feedback with us; it will inevitably make Puppet better.

Stephanie Stouck is the Principal Product Marketing Manager at Puppet Labs.


Thanks for the webinar. We are new to Puppet and finding these very informative. The link to the recorded version is valuable to those of us in time zones other than the US.

Great to hear the webinars are helpful for you. We do have another webinar coming up in mid-December that speaks to all of the new capabilities available in Puppet Enterprise 2015.3. The webinar will be delivered across multiple time zones (US, Europe, Asia) and is an excellent opportunity to check out a demo and ask any questions you may have. Be sure to check out What's New in Puppet Enterprise 2015.3: https://puppetlabs.com/resources/webinars. As always, we will have the recording available shortly after the webinar.

Also, since you're new to Puppet, I'd like to share a few resources to help get you started as well.
- The Learning Puppet VM provides you with a safe, convenient virtual environment. - https://puppetlabs.com/download-learning-vm
- All of our docs are available for easy reference. - http://docs.puppetlabs.com/
- Finally, Mike Stahnke, Director of Engineering at Puppet Labs, delivered a great talk on Getting Started with Puppet. - https://puppetlabs.com/presentations/getting-started-puppet-michael-sta…

Welcome to Puppet and good luck!
