published on 25 September 2014

Ready for the PuppetConf Day 2 wrap-up? Oh, you haven’t read Day 1 yet? Never mind, you can read this first — it’s not a serial novel, and there are no spoilers.

As usual, there were far too many great sessions to attend, and this post is just a small taste of what was on offer. (You can see any or all of the 87 PuppetConf 2014 talks and presentations by signing up here.) The day kicked off with a keynote from Dan Spurling, vice president of tech services at Getty Images. His talk about creating a culture of Puppet adoption at his company proved very popular. (Dan’s GSD shirt was noticed, too.)

Dan said the Getty Images team was doing “technology for technology’s sake” when he arrived. To sharpen the focus on the business, the teams were reorganized around the services and systems delivered to customers. That makes it sound easy, Dan said, but it’s actually difficult. First, you have to find an executive sponsor who can help remove any barriers. Then you have to set clear expectations, and be realistic about how long it takes for change to happen — because change relies on people, and people need time.

That’s where Puppet came in: as a vehicle for change. (Getty Images started with open source Puppet, shifting to Puppet Enterprise about a year ago.) Dan described how the team has been integrating Puppet not only into greenfields — new initiatives — but also into brownfields, or existing businesses. If the team limited itself to the greenfields, puppetizing would just look like a science experiment — it wouldn’t seem real. If the team could integrate Puppet into established processes, then everyone in the organization would see that it was capable of delivering on real-world promises.

Getty Images doesn’t have a DevOps team, per se, but the approach is clearly DevOps-ey: Encourage communication and collaboration between different groups, get people talking to each other, and align everyone around the same business goals.

The overall message Dan conveyed: You can’t wait for things to be optimal to initiate a change. You must puppetize even when and where it doesn’t seem optimal, to demonstrate to the wider company that it can be done.

Next up was Alan Green, a senior Unix systems engineer at Sony Computer Entertainment America. Alan argued that while people tend to think it’s important to centralize infrastructure so all groups are aligned, it’s actually more important to be able to respond quickly to customers, internal or external. Each studio (the sound team, the motion capture team, etc.) should have the technologies it needs to do that.

To make this diversity work, it helps to offer people options. Be honest about the tradeoffs: Yes, we can give you the setup you want, but it will take a lot of time; on the other hand, we could do it another way, and get it to you quicker.

You also have to embrace the idea that diversity is the right way to go, and then provide a way for the different groups to connect their work. Puppet is a natural way to facilitate that. It’s also important to make code as available as possible to everyone, and of course, get ready to scale. Sony reached the point where it needed to switch from open source Puppet to Puppet Enterprise about two years ago, a move that can be worrying, because it costs money. But ROI shouldn’t be measured only in dollars, Alan said: Diversity can enable innovation, and that’s important for an entertainment company.

Encouraging failure is a path to learning, Alan emphasized, because failure can teach you to do new, exciting things that will make a bigger difference and help you move forward.

Q&A with Luke

Puppet Labs founder and CEO Luke Kanies responded on stage to questions from PuppetConf attendees — most about Puppet and its future, but not all.

There was a question about Puppet Apps, which Luke discussed in his Day 1 keynote and his blog post. “I would be surprised if we release more than one per quarter,” Luke said. “I’d rather put out four than 20, with five releases for each app. We are a small company, and we have to try not to get overextended to the point where we can’t evolve the apps. They have to be evolved to be successful.”

About the future of open source Puppet and Puppet Enterprise, Luke said his goal is to keep the two products complementary, recognizing that each is used for different reasons. “The engine is a critical part of the car,” he said of open source and its relationship to the commercial product.

Open source is also an important part of driving innovation. “We’re trying to change how the market works and thinks,” Luke said, “and this is done better with software that’s absolutely everywhere.”

The rise of containers such as Docker came up in the next question: “Where does Puppet fit into environments that don’t require convergence, where instead of adjusting the container you just reprovision?” Luke pointed out that containers are a result of 10 to 15 years of investment in virtualization, so it’s easy to switch from the virtualization world to the containers world — but a container can’t do everything. (James Turnbull’s talk on Docker is covered below.)

People also ask what will happen to Puppet when no one owns servers anymore, Luke said. “The world is only going to have more and more servers,” he said, whether the same players own them or not. “I’m not afraid that people won’t need to manage their infrastructure. You still need something to make the Docker container look like it should. Even in that world, you are still using Puppet, though in a different way.”

In response to a question about integrating remote orchestration, Luke said the team is now looking at what to do with MCollective, which is based on a powerful but complicated platform. “It’s an area we are investing heavily in, and I’m personally investing heavily in.” He also offered some idea of the form future development could take: “I’m a big fan of small independent tools that do one job and do it correctly, rather than big huge tools that do a lot. I want to make our orchestration better, not by adding to Puppet, but by adding tools. I don’t want to add more functionality to Puppet, but add functionality to the Puppet ecosystem.”

One question was about the process for getting a module on the Forge designated “Puppet Approved.” That process, and the criteria, are still being defined, Luke said, instructing people to hassle Ryan Coleman, product owner of the Forge.

Luke discussed a few more things, so if you’re interested in hearing the whole session, sign up to be notified when the PuppetConf talk videos are available.

Continuous delivery, DevOps and Puppet

One of the most popular talks on continuous delivery and continuous integration was delivered by Gareth Rushgrove, a longtime Puppet user and former U.K. government employee who’s just joined Puppet Labs. Gareth talked about continuous integration for infrastructure as code, notably about moving the outcomes of policy meetings directly into the code itself.

Gareth pointed out that while policies can be discussed and agreed on in meetings, months later you often have no idea whether a policy is still being actively followed. How to check on that? Write tests against the policies! For example, let’s say the organization wants rules against launching too many nodes, or against retaining stopped nodes, because idle capacity is expensive. You can write tests that verify those policies are being followed. “Everything you worry about, you can turn into a test,” Gareth said.

Testing was a big theme running through Gareth’s talk. He recommended Packer as a declarative approach to building images, and Serverspec for testing images, for example. (You may be interested in reading a short post Gareth wrote about testing Packer-created images with Serverspec.)

Gareth said every infrastructure should have an API, and you can test against that. He pointed out that PuppetDB is chock-full of data about your infrastructure, and its API lets you test anything it holds data about — for example, to make sure your policy of having security-enforcing packages everywhere in your infrastructure is actually being carried out. Once you think of your infrastructure as data, the possibilities for testing against it become virtually endless. Just as important, you can share the results with the CFO, the security-compliance people, and others in your organization who need to be confident that policies are being followed.

Gareth shared a lot more, and at a pretty fast pace. If this has whetted your appetite, you’ll want to watch the video as soon as it's available.

Sam Bashton’s talk, “Dev to Delivery with Puppet,” focused on the process his consulting firm uses to help clients — from a 20-machine shop to companies with a 2,000-machine infrastructure — get their dev and ops teams working together to deliver better code more reliably. Sam’s a funny guy; he claims his term for this — “opsdevelopment” — got stomped on by the Belgians, and so we all call it DevOps now. Whatever you call it, you can sum it up in a single phrase: use Puppet everywhere.

Sam talked about how important it is to use Puppet everywhere, configuring all environments to match live production as closely as possible. It starts with the dev environment, of course, and if that matches production, then developers can feel confident their code will run as intended. An added benefit to ops: By configuring the dev environment with Puppet, your devs become testers of how well you’re writing your Puppet manifests.

Sam went so far as to say that all your Puppet manifests should be deployable to every environment without any modification. “If you don’t do that, your environments will never really be in sync,” he said.

How is that possible? Hiera is a good tool for the job: it keeps environment-specific data out of your manifests, so the code itself can be identical everywhere, with each environment’s values looked up from Hiera. The Hiera hierarchy can even be keyed off facts, including custom facts you write.
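
To make that concrete, here’s a minimal sketch in Puppet 3-era syntax. The class name, Hiera key and file path are invented for illustration; the point is that the manifest never names an environment.

```puppet
# Hypothetical example: this manifest is identical in every environment;
# only the Hiera data differs. Class and key names are invented.
class profile::webapp {
  # hiera() (the Puppet 3-era lookup function) resolves this key from the
  # first matching level of the hierarchy; the dev and production data
  # files can each supply their own db_host.
  $db_host = hiera('profile::webapp::db_host')

  file { '/etc/webapp/database.conf':
    ensure  => file,
    content => "db_host = ${db_host}\n",
  }
}

# hiera.yaml can key the hierarchy on a fact (including a custom fact you
# write), so each environment resolves to its own data file, for example:
#
#   :hierarchy:
#     - "tier/%{::tier}"
#     - common
```

With a layout like this, the code promoted to production is byte-for-byte the same code that already ran in dev.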

Sam also covered the benefits of using Vagrant for building virtual machines for testing, and Docker for building container environments that look like production. You can take a look at the Docker images Sam’s firm has built to be used with Vagrant: https://github.com/BashtonLtd/docker-vagrant-images

There was a lot more, including information about logs and metrics, which you can hear in the video. One parting thought: Sam recommended sharing metrics dashboards on large-screen displays around the office, so everyone can see what’s happening in the business.

Microsoft Azure and Puppet

Ross Gardler is president of the Apache Software Foundation. He’s also a senior technology evangelist at Microsoft Open Technologies, Inc., and he spent some time during his session explaining why Microsoft is working with open source technologies: to make sure that Microsoft technology is interoperable with the non-Microsoft technologies its customers want to use.

Interoperability matters because people want to swap out technologies to save money, or because the company is choosing a different path. And heterogeneity is the norm for business these days; data centers are filled with hardware and software from multiple vendors and open source projects. Puppet is a natural way to manage that heterogeneity, knitting an infrastructure together with one configuration management solution.

Microsoft’s partnership with Puppet Labs is part of that, enabling customers of the Azure cloud service to use Puppet for managing their cloud infrastructure. Ross demonstrated how to create a Puppet master on Azure, and how to configure virtual machines and node requests. “The focus is on making it as easy as possible,” he said. You can also use Vagrant and Puppet together in Azure, and Puppet is integrated with Microsoft’s Visual Studio, Microsoft’s app-building software.

The Puppet Forge has a number of Puppet Supported modules for Windows, and a couple of the new Puppet Approved modules are for Windows, too.
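
Even without Forge modules, Puppet’s core resource types (file, package, service, user and so on) work on Windows nodes. Here’s a tiny illustrative manifest; the resources chosen are arbitrary examples, not taken from Ross’s demo.

```puppet
# Arbitrary illustration of managing a Windows node with core Puppet types.
service { 'wuauserv':    # the Windows Update service
  ensure => running,
  enable => true,
}

file { 'C:/Temp/puppet-managed.txt':
  ensure  => file,
  content => 'This node is managed by Puppet.',
}
```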

Docker and Puppet

James Turnbull is well known to PuppetConf attendees and the wider Puppet community. He was at Puppet Labs for three and a half years, moved on to take a leadership role at Docker, and has just joined Kickstarter as vice president of engineering. James’ deep roots in Puppet, and his other career as a technical book author, make him uniquely suited to talk about using Docker and Puppet together.

James advised using Puppet to set up your hardware, install Docker and run Docker containers, and using Dockerfiles to install packages, deploy code and run your services.

Not only can you manage your Docker containers with Puppet, you can even use a Puppet module written by Gareth Rushgrove to do it.
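
Going by that module’s documented usage at the time (treat the exact parameter names as illustrative, since they may have shifted between releases), the manifest side looks roughly like this:

```puppet
# A sketch following the garethr/docker module's README of the era;
# parameters shown here are illustrative rather than authoritative.
include 'docker'             # installs and configures the Docker engine

docker::image { 'ubuntu':    # pull an image onto the host
  image_tag => 'trusty',
}

docker::run { 'helloworld':  # run a container as a managed service
  image   => 'ubuntu',
  command => '/bin/sh -c "while true; do echo hello world; sleep 1; done"',
}
```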

James dived quickly into testing with Docker, using Puppet to build containers, and a lot more. He also got a little recursive towards the end of his talk.

You can read James’ post about building Puppet-based applications inside Docker containers, and also David Lutterkort’s post about using Docker and Puppet together for application management, if you’re interested in diving deeper.

Next year in Portland!

We work hard to put on a great PuppetConf for you all, and we are so happy to see all the excited conversations, fun and learning going on all around us. Thanks for coming, and thank you for the compliments, too.

Didn’t make it this year? Sign up to get notified when the PuppetConf talk videos are available, and make sure you join us next year in Portland, the land of interesting food, great coffee, plus more varied alcohol than you can shake a stick at. You’ll be glad you did.

Aliza Earnshaw is managing editor at Puppet Labs. Senior copywriter Molly Niendorf contributed to this post.
