Puppet 3.2.1 landed today. Though it’s a “patch” release, it’s the first public release of the Puppet 3.2 series, and it includes a taste of the Puppet DSL’s future in the form of an experimental parser that introduces some new features you’d expect to find in traditional programming languages.
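For a taste of what the experimental parser enables (you opt in with `parser = future` in puppet.conf), here is a minimal sketch of iteration with a lambda, one of the constructs borrowed from traditional programming languages; the package names are illustrative assumptions, not from the release notes:

```puppet
# Sketch using the Puppet 3.2 experimental ("future") parser's iteration.
# Requires parser = future in puppet.conf; package names are illustrative.
$packages = ['httpd', 'git', 'vim-enhanced']

each($packages) |$pkg| {
  package { $pkg:
    ensure => installed,
  }
}
```

With the stock 3.2 parser, the same effect requires passing the whole array as a resource title or writing a defined type, so iteration like this is a notable quality-of-life change.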
When you’re responsible for keeping other people’s enterprise websites up and running, you never want to say you’re sorry they’re down.
That’s why Justin Seabrook-Rocha and Patrick Adair both use Puppet technology in their work for Hurricane Electric, an internet services company whose transit backbone connects to more than 2,100 IP networks. Hurricane Electric is located in Fremont, California, and Justin and Patrick both work in the same building that was once the manufacturing facility for NeXT Computer, Steve Jobs’ gig before his 1997 return to the helm of Apple.
Both Justin, a network engineer, and Patrick, a network technician, are registered for PuppetConf in August. They’re expecting to get tips and advice from other attendees and speakers on ways to make their own Puppet infrastructure better, and the latest updates on what’s new with Puppet.
Before I started at Puppet Labs, I was a tech writer at a large corporation (I won’t name them, but their initials are HP). The approach to tech writers there conformed to the traditional “huck it over the cube wall” model I’ve seen at other large enterprises. Anyone who has worked in tech for any time at all has encountered this model, which presents a new product to the writer as a fait accompli and which imagines tech writing as an after-the-fact act of taxonomy: “Here is a thing. The thing has five things stuck to it. Three of those things are red, one of them is made of feathers.” And, we’re done.
More often than not, the huck-it-over-the-wall method results in tech writing nobody reads because it does nothing useful (“I can plainly see that thing is made of feathers, what possible good is this manual going to do me? I’m going to put it back on top of the toilet tank and continue to ignore it.”).
Release management best practices have evolved over time as software tools that manage and automate parts of the process have appeared. As a result, established structures are ever changing. An example of this is a 2007 piece on Buildmeister about best practices inspired by ITIL, the IT Infrastructure Library framework.
There is a fascinating article in a recent issue of Ars Technica on why Facebook creates its own hardware and how it avoids virtualization on its servers. Facebook just unveiled its first data center built entirely on its own custom hardware, designed per the Facebook-founded Open Compute Project. Facebook answers the “What is virtualization?” question by saying, “Something we here at Facebook don’t need.”
Second of two parts. Written by Max Martin. Originally published on Linux.com, republished with permission.
In the first part of this tutorial, we showed how to use Vagrant to automate and manage local virtual machines for a software development environment. We defined a simple Vagrantfile to specify certain attributes for a VM to run a simple web app, and got it running using Vagrant’s command line tools. In this part of the tutorial, we’ll be using Puppet to define and automate the configuration details for our VM. This way, whenever we start up the dev environment with vagrant up, it will be set up to run our web application without any additional manual configuration.
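The hand-off from Vagrant to Puppet can be sketched as a Vagrantfile that hooks Vagrant’s Puppet provisioner into the boot sequence; the box name and manifest paths below are illustrative assumptions, not the tutorial’s exact files:

```ruby
# Hypothetical Vagrantfile sketch: wire in the Puppet provisioner so that
# `vagrant up` applies manifests/default.pp with no manual configuration.
Vagrant.configure("2") do |config|
  config.vm.box = "precise64"              # illustrative base box name

  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "manifests"    # directory containing the manifest
    puppet.manifest_file  = "default.pp"   # entry point Puppet will apply
  end
end
```

Because the manifest lives in the project directory alongside the Vagrantfile, every developer who runs `vagrant up` gets the same configuration, which is the point of the exercise.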
I found Damon Edwards and Anthony Shortland’s video presentation on DevOps a refreshing change. They see DevOps as a larger, more comprehensive service delivery platform and view the DevOps toolchain as the practical way to make that service delivery platform work. Their excellent diagram divides a service delivery platform for DevOps into four quadrants, with Infrastructure and Applications on the Y axis and Build and Deploy on the X axis.
First of two parts. Written by Max Martin. Originally published on Linux.com, republished with permission.
Setting up a development environment for a web application can seem simple—just use SQLite and WEBrick or a similar development server—but taking shortcuts can quickly lead to problems. What happens when you need to onboard new team members? What if your team members are geographically distributed? How do you prevent bugs from creeping in when the production environment’s configuration drifts away from the development environment? Even if you’ve managed to set up a picture-perfect development environment, what happens when a developer inevitably breaks its configuration?
In CIO Magazine, Mike Sutton and Tym Moore explained how they systematically improved software release management practices at a large telecom company by focusing on key factors affecting the release process, infrastructure, and automation. The themes of the advice were transparency, automation, and communication. The case study looked at an emergency situation for a large business in severe trouble, but the themes are universal. This article, published in 2008, is a classic; it’s practical and pragmatic and still has plenty to say about release management practices today.
Puppet Labs released Facter 1.7 this week, introducing a number of under-the-hood enhancements and a new feature called “external facts” that’s been waiting in the wings for about a year now.
Facter is a cross-platform library for gathering information about nodes managed by Puppet, including domain names, IP addresses, operating systems, Linux distributions, and more. External facts provide a simple way for a puppet agent to provide custom facts without having to write Ruby. Eric Sorenson, Puppet’s open source product owner, told me, “they’re probably the easiest way for people to get an entry into Puppet for extending Puppet or customizing Puppet for their own site.”
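As a minimal sketch of the idea (the fact names and values here are assumptions for illustration): an external fact can be any executable dropped into Facter’s facts.d directory that prints key=value pairs to stdout, with no Ruby involved.

```shell
# Hypothetical external fact script, e.g. /etc/facter/facts.d/role.sh.
# Facter 1.7 turns each key=value line on stdout into a fact, so these
# would surface in manifests as $::role and $::datacenter.
emit_facts() {
  echo "role=webserver"      # fact name and value chosen for illustration
  echo "datacenter=fremont"
}
emit_facts
```

Plain-text files of key=value lines in the same directory work too, which is what makes external facts an easy on-ramp for teams that don’t write Ruby.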
Jeremy Schulman, Global Solutions Architect at Juniper Networks, is responsible for developing the Puppet for Junos OS netdev module. This post originally appeared on his blog on the Juniper Networks website on April 2, 2013. It has been reprinted with permission.
The role of Junos technology is to address the problems of today’s networks in a way that is aligned with broader challenges facing IT infrastructure automation as a whole. We all know that managing networks is complex, hard, costly, and requires highly trained engineers. This post is going to talk about managing networks in a whole new way. The concepts in it will change your life. They changed mine.
There is no doubt that something big is happening. Our industry is going through a paradigm shift. Everyone is excited about the idea of “programming” the network. People want to build network solutions independent of hardware vendors; to use open APIs and open software, to collaborate, and to innovate. But most importantly, they need to deliver a network focused on the needs of the consumer of the network. A similar paradigm shift happened a while ago for the IT system administrators (sysadmins) and DevOps – you know, the guys in the data center deploying all those servers or virtual machines driving the need for more networking. As we look forward to how the networking industry may evolve, let’s take a quick look back at the history of the sysadmins.
At one point, sysadmins were manually deploying servers, configuring services, and managing the installation of applications – applications that ultimately drive their business. These sysadmins may have had some simple Bash- or Perl-based scripting tools they created themselves, but it was largely ad hoc. Fast forward to today: sysadmins now use sophisticated configuration management products like Puppet or Chef to fully automate large-scale data center deployments. They write programs to “glue” these tools together with APIs from other vendors like VMware, Amazon, and Google, or from other software they download from the open source community. These sysadmins, who were not formally trained software engineers, picked up new programming skills and began focusing on automation as a key business driver, and as a personal asset. They use open APIs and open software. They collaborate. They innovate. They are driving the success of their business. They can (and will) become key influencers in deciding which vendor is deployed in the network.
A few weeks ago, I had the honor of co-presenting at the Bay Area Juniper Users Group (BAJUG), which meets once a quarter in Sunnyvale, CA. Jeremy Schulman from Juniper Networks invited me to co-present with him on the Puppet for Junos OS solution, which became available in February. Haven’t heard about this networking automation solution before? I’ll explain more, but first, I want to briefly summarize my experience at BAJUG.
I really loved the format of the user group. It kicked off with a one-hour keynote presentation from Jeremy on the Puppet for Junos OS solution, which was followed by a series of 10-minute lightning talks. Those talks were given by network engineers from Facebook, IETF, Zynga, and more. The event ended with a free-form social. It was a sold-out event, attended by about 250 people. Here’s a photograph I took right before my talk.