published on 20 December 2013

In the latest Puppet Podcast, Community team member Kara Sowles is joined by long-time Puppet developer Jeff McCune, along with Cody Herriges and Matt Kirby from the technical operations team.

The engineering staff at Puppet Labs has been growing quickly, which has meant that a lot of development and testing infrastructure has had to adapt. It used to be feasible for each developer to test Puppet on their own laptop or within a single shared testing infrastructure. But with Puppet supporting more and more operating systems, including AIX and Windows, it's no longer feasible for all the tests to happen locally. Having one QA environment that any developer can render useless also doesn't work too well at scale.

So in this podcast, we explore how the development and operations teams at Puppet Labs are collaborating on a project to create more isolated virtual and physical test environments for each developer. This is enabling developers to be a bit more experimental and free to break things in their environment without worrying about slowing anyone else down.

They're also coming up with a solution to eliminate the time-consuming steps needed to get that environment, by creating a variety of self-service interfaces to different parts of their infrastructure.

Their goal is to give new developers the ability to write a patch to Puppet code on their first day of work, then run it through their own QA environment that's optimized for continuous delivery. Jeff talks about how he was able to interact with VMware's vSphere API using a combination of fog and the rbvmomi Ruby library.
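If you want a feel for what talking to vSphere from Ruby looks like, here's a minimal sketch using rbvmomi that lists virtual machines and their power states. The hostname, credentials and datacenter name below are hypothetical placeholders, not details from the podcast:

```ruby
require 'rbvmomi'

# Connect to vCenter. All connection details here are placeholders --
# substitute values from your own environment.
vim = RbVmomi::VIM.connect(
  host:     'vcenter.example.com',
  user:     'dev',
  password: 'secret',
  insecure: true # skips TLS verification; fine for a lab, not for production
)

# Look up a datacenter by name, then print each VM and its power state.
dc = vim.serviceInstance.find_datacenter('dc1') or abort 'datacenter not found'
dc.vmFolder.childEntity.grep(RbVmomi::VIM::VirtualMachine).each do |vm|
  puts "#{vm.name}: #{vm.runtime.powerState}"
end
```

From a starting point like this, it's easy to imagine the self-service tooling described above: wrap calls like these in a small service and developers can clone or power-cycle their own test VMs without filing a ticket.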

Jeff, Cody and Matt also talk about some of their work building out software-defined infrastructure with a network virtualization layer. Tools like Razor, a bare-metal provisioning tool, are sometimes difficult to test consistently on a laptop, and are much better tested within the VMware infrastructure maintained by operations. Other challenges with Razor include changing DHCP or PXE boot behavior without interfering with production environments or the primary wireless network. The group talks about implementing OpenStack's Neutron networking interface (formerly Quantum) and VMware's NSX network virtualization platform in order to achieve this.

Cody also talks about some specific considerations when building an automated cloud infrastructure with dynamic storage capabilities. These include creating a distributed storage architecture that eliminates the possibility of a single point of failure taking out the entire storage infrastructure for other virtual machines. There are also different optimizations for read-heavy versus write-heavy workloads, as well as for hosting within a private or public cloud.

One specific challenge involves gathering enough metrics to know both how and when to expand capacity. Among the tools being explored is Ceilometer, which collects measurements within OpenStack environments.

Finally, the group talks about the process of designing a boundless infrastructure that is built for scalability, reliability and elasticity; abstracted enough to plug in new capabilities in the future, yet stable enough that the ops team can sleep through the night.

You can also check out our many other recent podcasts by visiting our podcast page or subscribing in your favorite podcast tool.
