Published on 16 August 2013

Besides all the speakers, hands-on sessions, community get-togethers and parties, PuppetConf is great for demos. This year, we're offering three that cover Puppet technologies, from the cutting edge of cloud automation to a sneak peek at what's coming next for the Puppet Enterprise console.

You can visit the PuppetConf schedule to learn more, and here's a quick rundown of what you'll see.

"This is what's happening in your infrastructure tonight."

Demo: Puppet Enterprise's event inspector
Presenter: Puppet Labs UX designer Joe Wagner
Location: The Cirque Room

The Puppet Enterprise console offers visibility into your infrastructure, allows you to browse resources, and provides insight into the state of everything you're managing. With Puppet Enterprise 3.1, coming in Q4, the console will gain a powerful new event inspector that offers even more robust tools to help you quickly assess the state of your infrastructure.

Puppet Labs UX designer Joe Wagner will demo the new event inspector. He says the driving principle behind its design was simple: Busy administrators don't want to spend a lot of time staring at screen after screen of reports.

"Ideally," he says, "you want to see there's nothing to look at and move on. If there's an issue, you need to get as much information as you can as quickly as possible."

The event inspector offers a quick summary of your entire infrastructure in three simple views that show the configuration state of Puppet classes, managed nodes and managed resources.

Cloud application environments require us to think differently about managing infrastructure: the focus shifts from individual compute nodes to the resources our applications depend on, wherever they're hosted. The new Puppet Enterprise event inspector offers three perspectives to help with this shift in thinking:

By Class: Classes represent related collections of packages, files and services your applications depend on. The event inspector offers a way to quickly spot issues with classes in use across your infrastructure, then allows you to drill down and investigate either nodes or resources assigned to that class to discover the source of an error and achieve quick resolution.

By Node: System administrators often understand the severity and impact of an outage or error most quickly when they know which specific nodes are involved. It's not a resource-centric approach, but many organizations still name infrastructure for its function, and knowing the name of a distressed node can be a help. The event inspector provides easy access to individual compute, storage or network nodes experiencing issues.

By Resource: Complex application environments often require the same package, file or service to be installed or running across multiple nodes. Catching a failure in a single Puppet-defined resource early can make the difference between a small problem and widespread downtime. The new event inspector features a view focused solely on the state of resources: you'll be able to zero in on the resources with issues without having to worry about which specific nodes host them.
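
To make the class and resource views concrete, here's a minimal, illustrative sketch (our own example, not code from the demo) of the kind of class those views roll up: a simple ntp class grouping a package, a config file and a service, which might be assigned to hundreds of nodes.

    # Illustrative example only -- not taken from the demo.
    # The class view summarizes events for every node this class is assigned to;
    # the resource view rolls up a single resource, such as Service['ntp'],
    # across all of those nodes.
    class ntp {
      package { 'ntp':
        ensure => installed,
      }

      file { '/etc/ntp.conf':
        ensure  => file,
        source  => 'puppet:///modules/ntp/ntp.conf',
        require => Package['ntp'],
      }

      service { 'ntp':
        ensure    => running,
        enable    => true,
        subscribe => File['/etc/ntp.conf'],
      }
    }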

Besides offering a quick means to get to critical reports, the event inspector was designed with ease of navigation in mind.

"We aren't forcing users into an alley they have to back out of," says Joe, noting that all the reports and screens provided by the event inspector offer plenty of context and quick access to higher levels of reporting.

The event inspector is also aware of the change history of your infrastructure: error reports not only tell you what the problem is, they tell you which change to your versioned infrastructure caused it. If you're using continuous delivery practices to manage changes to your infrastructure, having that information makes it easier to quickly revert to a working configuration if errors in your Puppet code make it into production.

Puppet Technologies in the Continuous Delivery Toolchain

Demo: Using Puppet Technologies for Continuous Delivery
Presenter: Puppet Labs methodologies lead Eric Shamow
Location: The Cirque Room

Lots of IT organizations are embracing continuous delivery to ensure faster, more stable releases and better recovery times when something goes wrong. The goal of a continuous delivery toolchain is to promote code between environments in such a way that it can be tested and validated before moving on to production.

Puppet Labs methodologies lead Eric Shamow will demo work he's done to bring more continuous delivery awareness to Puppet technologies.

At the heart of Eric's demonstration is a Git-aware addition to the puppet command line tool that makes it possible to identify a set of files as a "package," then permits the developer to move those files between environments such as "dev," "staging" or "production" in a pre-defined pipeline.
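
Eric's command line additions are what the demo itself will cover; for context, the environments his pipeline promotes between are ordinary Puppet environments, which in Puppet 3 can be defined directly in puppet.conf on the master. A minimal sketch, with illustrative paths:

    # puppet.conf on the master -- paths are illustrative
    [dev]
        modulepath = /etc/puppet/environments/dev/modules
        manifest   = /etc/puppet/environments/dev/manifests/site.pp

    [staging]
        modulepath = /etc/puppet/environments/staging/modules
        manifest   = /etc/puppet/environments/staging/manifests/site.pp

    [production]
        modulepath = /etc/puppet/environments/production/modules
        manifest   = /etc/puppet/environments/production/manifests/site.pp

A node or a test run can then be pointed at a particular stage with puppet agent --environment staging, which is what makes it possible to validate a change before it reaches production.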

If you stop to check out Eric's work, make sure to find time for Joe Wagner's Puppet Enterprise event inspector demo: The changes tracked by Eric's continuous delivery demo show up in an event inspector report when something goes wrong with a package moving between environments.

Even in the cloud, SSH in a for loop still isn't a solution.

Demo: Cloud Automation
Presenter: Puppet Labs engineer Zach Leslie
Location: The Cirque Room

Businesses are excited about the cloud, but it also presents lots of new challenges. With Puppet Enterprise 3.0, we addressed a lot of scale, performance and orchestration needs. We also highlighted recent developments that bring networking and storage resources into the software-defined infrastructure picture. But IT organizations still struggle with the creation of cloud compute resources.

Puppet Labs engineer Zach Leslie will demonstrate Puppet code he's written to address a problem familiar to system administrators from the days before the wide adoption of IT automation.

"We only have an ad hoc way to deploy instances," says Zach. In other words, for many admins spinning up cloud infrastructure still involves writing a lot of one-off scripts that don't integrate with existing IT automation tools.

Worse, he says, "you're required to have special knowledge for each provider." A script that handles the creation of an AWS node, for instance, won't do you any good with the Rackspace cloud. A script written to address the ins and outs of Azure won't work with anything else.

There's a proof-of-concept module on the Puppet Forge that offers types and providers for cloud resources on Google Compute Engine, but Zach saw an opportunity to do better.

"We can build a type or a module with all the types and providers for a single cloud provider, but that doesn't allow us to switch infrastructure."

His demo will feature an abstraction layer that works with several cloud providers, showing how to create cloud instances using the Puppet DSL, and avoiding the drawbacks of ad hoc scripts that work only with a single provider. Much as the Puppet DSL can be used to abstract away differences between operating systems or networking devices, Zach will show how it can abstract away the differences between clouds.
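
The type and parameter names below are hypothetical — they're our illustration, not Zach's actual code — but declaring an instance through such an abstraction layer might look something like this:

    # Hypothetical resource type and parameter names -- for illustration only;
    # the names in Zach's demo may differ.
    cloud_instance { 'web-01':
      ensure   => present,
      provider => 'ec2',          # could just as easily be 'rackspace' or 'gce'
      image    => 'ubuntu-12.04',
      size     => 'm1.small',
    }

The point of the abstraction is that switching clouds means changing the provider and its provider-specific details, not rewriting your deployment logic.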
