Good testing of Puppet code involves different approaches and techniques that can be complex and time consuming. You can test your manifests’ syntax with puppet parser validate, verify their code style with puppet-lint, test modules logic and behaviour at catalog level with rspec-puppet, and check the actual effect of a Puppet run on dispensable virtual […]
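As a quick illustration, the first three of those checks can be run from the command line against a module checkout (the paths here are examples of a standard module layout):

```
# Check manifest syntax
puppet parser validate manifests/init.pp

# Check code style against the Puppet style guide
puppet-lint manifests/init.pp

# Run rspec-puppet catalog-level tests (assumes specs under spec/classes)
rspec spec/classes
```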
Today, we are launching the beta version of a new question and answer site where you can ask any Puppet questions or provide answers for questions from other Puppet users and developers. We have an active online community of open source and Puppet Enterprise users, and ask.puppetlabs.com is another great forum for answering Puppet questions, […]
In the wake of recent events on the East Coast of the United States, disaster recovery (DR) planning has reared its head again. Of course, it’s a bad time to think about disaster recovery right after an event with such a large impact. However, it’s even worse to never think about it.
Prior to working at Puppet Labs, I spent a lot of time on disaster recovery. For nearly two years, I led a team designing multi-site replication, creating reference architectures for availability and recovery, and selling our business partners on disaster recovery investments. This was for one of the top performing business units at a Fortune 100 company with seven and eight figure budgets for DR.
Disaster recovery is a huge proposition. It’s costly, time consuming, difficult to test correctly and often the first thing cut when doing budget reviews. DR planning is also never complete. You evolve. You change. Your plans need to as well.
The starting points for DR planning can be difficult to find. Infrastructure engineers often jump straight to technical solutions. Before you figure out the newest whiz-bang storage replication and failover technology, take a step back.
The International Securities Exchange (ISE), a leading US options exchange, has just completed the first phase of its Puppet Enterprise deployment. For a company whose trading volumes exceed 2.5 million contracts per day, the decision to ‘puppetize’ its infrastructure was not made at random; it was careful and deliberate. Trevor Pott, reporter from The Register, […]
Purpose: Helps you automate the management of VMware Tools.
Module: razorsedge/vmwaretools (v4.1.1 tested)
Puppet Version: Tested on 2.7+ (Puppet Enterprise 2.0+)
Platforms: RHEL, CentOS, SUSE, OEL (post written with CentOS)
In a previous MOTW, I covered what problem this module solves and addressed a very simple workflow for using the module to manage VMware Tools.
This time, I’m going to dive into how the module is structured and explore some of the more advanced things you can do with it.
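To give a flavor of that advanced usage before diving in, here is a minimal sketch of declaring the class with parameters; the parameter names follow the module's README at the time of writing and should be treated as illustrative:

```puppet
# Track a specific VMware Tools release and let Puppet handle upgrades.
class { 'vmwaretools':
  tools_version => '4.1latest',  # which tools repository branch to follow
  autoupgrade   => true,         # upgrade the tools packages when newer ones appear
}
```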
Hi, I’m Eric Sorenson (eric0 on #puppet IRC), and in June 2012 I moved from being a community member and Puppet administrator in the field, to working at Puppet Labs as the Product Owner for our open source projects. At the time, my first goal was to help get a great release of the next major version of Puppet (code-named “Telly”) shipped to the world, which launched late last month. Now with the release of Puppet 3.0.1 — which addressed and fixed the biggest issues that our awesome community of early-adopters found in the 3.0.0 release — it seemed like a good time to blog from the rooftops.
I’m new to Puppet Labs, but I have been running Puppet in large-scale production operations since 2009 and, somewhat naïvely, felt like I had a good idea of what Puppet 3 was supposed to look like. There had been, after all, a few dozen bugs in Redmine over the past couple of years in which I’d seen James or Nigel set the “Target Release” field to “Telly” … it was going to fix all the bad behavior we’d all reported over the years, right? Well, not exactly.
The tough thing about major releases of popular products is that the burden of expectations becomes so great, there’s no way reality can measure up. In some cases (The Phantom Menace, Guns n’ Roses “Chinese Democracy”) when the release does come, it’s universally panned on its own merits; other times, the release might have been fantastic if it had come when it was promised, but the timing was such that it had already gone stale (Duke Nukem Forever).
A long time ago (well, June of this year) the Puppet Forge was running without a leader. In my role as community manager, I saw the Forge as having this awesome potential to be the resource for user-generated content surrounding the Puppet community. I knew it was getting more attention, but that was mostly anecdotal. My next step was to find some data that could tell a good story.
Puppet Modules are often the first way people learn and start using Puppet. We’ve had our Puppet Forge for a while, but I didn’t feel like I knew a lot about it. When we were getting ready to interview Product Owners for the Puppet Forge and Modules, I decided I wanted to know more to help me prepare for the interview, and maybe give me some insight into usage patterns that I hadn’t thought about.
Like any geek, I love data. I knew we had all sorts of data in our module download logs, but we had not ever really taken the time to transform that data into awesome information. I started with simple awk/sed/grep to find basic information, like what modules were popular. This worked for a time, but then I wanted to know modules by name, find popular authors, and do things like ignore version number changes.
Purpose: Fetch and update file data from an S3 bucket
Platforms: All, but see ‘Advanced Usage’ for non-Linux
Puppet and Puppet Enterprise come with a basic file server, allowing agents to fetch files from the master. This capability is suitable for small files, but when used with large binaries it can cause performance issues on the master.
S3file provides a simple Puppet type to fetch and update files stored in an Amazon S3 bucket or in your private OpenStack Swift storage environment. This allows you to store large files outside of Puppet, while still keeping the resource model provided by the existing Puppet file types.
S3file is written to be compatible with the old Puppet 2.7 as well as the latest Puppet 3.0, making it easy to integrate with any Puppet deployment.
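A minimal sketch of the type in use, with a placeholder bucket and object name:

```puppet
# Fetch a large binary from S3 rather than from the master's file server.
s3file { '/opt/apps/bigapp.tar.gz':
  source => 'mybucket/releases/bigapp-1.0.tar.gz',  # bucket/key, placeholder values
  ensure => 'latest',  # re-download whenever the object changes in the bucket
}
```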
Puppet is an IT automation language that has traditionally been used to configure individual nodes. Puppet’s declarative language and dependency model are also well suited to describing entire application stacks on top of public cloud offerings.
This post will explain how Puppet can be used to model resources through Google Compute Engine’s API in order to describe application stacks as reusable and composable configuration files.
Google Compute Engine (GCE) is a service offering from Google that allows users to provision virtual machine instances that run on Google’s infrastructure. The one thing that really stands out about this service compared to similar offerings is how fast it is. Machine instances generally take seconds, not minutes, to spin up.
The GCE API allows users to create all of the resources needed to dynamically model application stacks, including: virtual machine instances, networks, firewalls, and persistent disks. It also allows you to specify a lot of the characteristics of a virtual machine instance like the image that should be used, and how much memory and CPU to allocate to that instance.
What this API can’t do is tell a machine how it should be configured. There is no way to say: “Use this image as a starting place, and then configure yourself to be a mysql database.” This is where Puppet comes in. It can be used with GCE in order to configure the roles that should be assigned to created instances. Puppet can also be used to perform ongoing management of those instances.
This blog will take the concept one step further, explaining not only how Puppet can be used to assign roles to compute instances, but also how Puppet can be used to model the management of all of the compute objects in GCE that are used to create an application stack.
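As a hedged sketch of what that model looks like, the resources below describe a tiny stack; the type and attribute names follow the gce_compute module of the time and are illustrative rather than definitive:

```puppet
# A network, a firewall rule, and an instance, modeled as ordinary resources.
gce_network { 'app-net':
  ensure      => present,
  description => 'network for the application stack',
}

gce_firewall { 'allow-http':
  ensure  => present,
  network => 'app-net',
  allowed => 'tcp:80',
}

gce_instance { 'web-1':
  ensure       => present,
  zone         => 'us-central1-a',
  machine_type => 'n1-standard-1',
  network      => 'app-net',
  require      => Gce_network['app-net'],
}
```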
This week’s Module of the Week is a guest post from Carlos Sanchez from MaestroDev.
Purpose: Manage Apache Maven installation and download artifacts from Maven repositories
The maven module allows Puppet users to install and configure Apache Maven, the build and project management tool, as well as easily use dependencies from Maven repositories.
If you use Maven repositories to store the artifacts resulting from your development process, whether you use Maven, Ivy, Gradle or any other tool capable of pushing builds to Maven repositories, this module defines a new maven type that will let you deploy those artifacts into any Puppet managed server. For instance, you can deploy WAR files directly from your Maven repository by just using their groupId, artifactId and version, bridging development and provisioning without any extra steps or packaging like RPMs or debs.
The maven type allows you to easily provision servers during development by using SNAPSHOT versions—using the latest build for provisioning. Together with a CI tool, this enables you to always keep your development servers up to date.
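For instance, here is a sketch of deploying a WAR with the maven type; the coordinates and target path are placeholders for your own artifact:

```puppet
# Deploy a WAR straight from a Maven repository into a servlet container.
maven { '/var/lib/tomcat6/webapps/myapp.war':
  groupid    => 'com.example',
  artifactid => 'myapp',
  version    => '1.0-SNAPSHOT',  # SNAPSHOT resolves to the latest CI build
  packaging  => 'war',
}
```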
Puppet Enterprise can be used to do anything from automating repetitive tasks to managing large, complex infrastructure, but it can also solve real problems in small doses. We’ve compiled a list of awesome things that you can do—or do better, faster, stronger—with 10 nodes or less of Puppet Enterprise.
Purpose: Deploy, configure, and manage multiple instances of MediaWiki.
Platforms: CentOS 6, Debian 6, and Ubuntu 12.04
A wiki is a very popular way to share information within an organization as well as with the general public. Organizations and individual users who share information using a wiki often need to separate unrelated topics into distinct domains. Although most wiki software, MediaWiki included, does not support this separation into isolated spaces by design, it can be achieved via multitenancy. In the context of MediaWiki deployment, multitenancy means configuring multiple distinct wiki instances to use the same wiki installation. The objective of this module is to automate the MediaWiki installation process and allow system administrators to get multiple instances of MediaWiki up and running quickly and easily.
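A sketch of what multitenancy looks like with the module; the class and instance parameters below follow the module's documented interface at the time, and the names and passwords are examples only:

```puppet
# One shared MediaWiki installation...
class { 'mediawiki':
  server_name      => 'wiki.example.com',
  admin_email      => 'admin@example.com',
  db_root_password => 'rootpassword',  # example only
}

# ...serving two isolated wiki instances.
mediawiki::instance { 'engineering':
  db_password => 'password1',
}

mediawiki::instance { 'marketing':
  db_password => 'password2',
}
```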
We’re always looking to add awesome Puppet modules to the Puppet Forge. We often get requests for modules for stuff we aren’t working on, but we know there are community members with tons of experience, or private modules covering the material. We’re launching a contest to get these modules polished and published on the Puppet Forge. In short, we’re hoping to generate interest in a few of our favorite tools.
Purpose: Manage PostgreSQL servers, databases, and users
Module: Previously inkling/postgresql, now puppetlabs/postgresql
Puppet Version: 2.7+ & PE 2.0+
Platforms: Tested on RHEL5, RHEL6, Debian6, Ubuntu 10.04
PostgreSQL is a powerful, high-performance, free, open-source relational database server. It hasn’t always enjoyed quite as much popularity as its cousin MySQL, which is enormously popular, as evidenced by its place in the ubiquitous LAMP (Linux-Apache-MySQL-PHP) web development stack. These days, though, PostgreSQL is gaining momentum in many circles. At Puppet Labs, we are starting to use it more heavily; in fact, it’s a prerequisite for our new PuppetDB product.
With that in mind, it seemed important for us to make sure that there was a Puppet module out there that made PostgreSQL as easy to manage with Puppet as MySQL is. We searched around on the Puppet Forge to see if anyone had undertaken this yet, and found several useful Postgres modules—but it was important to us that the module API would be familiar to users of the puppetlabs/mysql module.
We were particularly impressed with the functionality offered by the inkling/puppet-postgresql module, developed by Kenn Knowles of Inkling Systems, so we reached out to Kenn to see if he’d be amenable to us helping to refactor the module to leverage his existing functionality with an API similar to the puppetlabs/mysql module. He was, so we did!
So here’s why you should check out the new 0.2.0 release of the inkling/postgresql module:
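As one example of the new API, a minimal sketch in the style of puppetlabs/mysql (the database name, user, and password are placeholders):

```puppet
# Install the server, then create a database with its own user.
include postgresql::server

postgresql::db { 'appdb':
  user     => 'appuser',
  password => 'apppassword',
}
```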
So, we must admit: we’re a bit behind schedule. We had more talks than we expected, and filtering through all the fantastic submissions for PuppetConf proved more difficult than we originally thought. While we’re still piecing together the last bits of the schedule and finalizing our keynotes, we can’t help but give you a teaser of what’s coming. Tickets are on sale and a few certification seats are still available. Early bird pricing will end on August 27th, so get your tickets while they are still an absolute steal at $500. A full schedule with dates and times will be released as soon as we’ve built it. In the meantime, we hope this whets your appetite.
I was at O’Reilly’s Velocity conference back in June, giving a talk on hacking Puppet, and Puppet’s configuration language came up a lot. Most people love the language and find it the simplest way to express their configurations, but some are frustrated by how simple it is and wish it were a full, Turing-complete language like Ruby. I thought it would be worthwhile to discuss why Puppet has a custom language, and to dive into some of the benefits and costs.