published on 11 August 2017

Hello! The Forge Modules team at Puppet has dedicated time to updating and automating our processes over the past year. We’ve touched everything from simple module release checks to release job automation to generating graphs for pull-request information about Puppet Supported modules. We’ve also added a lot of automation and scripting around our ChatOps system, HipChat.

In this blog post, I am going to share our improvements, hoping to encourage you to do something similar.

Rake release checks

I think it’s fair to say that everyone is a fan of low-hanging fruit, and for our efficiency update that is exactly where we started: defining and picking off the low-hanging fruit in automating module releases.

Our team of five is in charge of 44 Puppet Supported modules and their repositories. We release modules as often as we can, or as often as is needed. On average, we release 10 supported modules per month, more if we count the release of unsupported modules (which we're also responsible for).

Our process for a module release was entirely manual when we started. As you can imagine, there were lots of small tasks within the module release process, as well as several larger, more critical tasks. There were also several signoffs that needed to happen alongside all the manual steps. With all this, it took about five days, on average, to release a module.

Small tasks, though they may not appear to have much value, can contribute massively towards keeping our modules up to the standard that our users expect. Without these basic checks in place, we would risk the quality of our output. These small tasks include simple things such as checking that no files listed in .gitignore have been committed, or ensuring that no symlinks have been committed.

We already have a series of Rake tasks within our modules, implemented through our spec_helper library, so we decided to build upon this to include tasks to do these simple manual checks. This allowed us to build a parent task that's run against a module when we're coming up to a release.

We outlined these small tasks and automated each one as its own Rake task. We then combined them with some of the Rake tasks that were already in place to build the parent task we use for sanity-checking a module prior to its release. Below is the parent Rake task, ‘release_checks’, and its child tasks.


Image of module release efficiency


The Rake tasks under the ‘check:’ namespace are the ones we created to put this automation in place. The other task invocations already existed, but were previously run by hand, or not run at all at this point in the process. Here's the location of this code: https://github.com/puppetlabs/puppetlabs_spec_helper/blob/master/lib/puppetlabs_spec_helper/rake_tasks.rb#L531

An outline of the release check tasks and what each of them does

:release_checks

  • :validate - Runs syntax validation against any Ruby files in the module, plus similar checks on metadata.json.
  • :spec - Runs the spec (unit) tests on a module. Although we have this built into our Jenkins release pipeline, we include it here so that the task can be used by community members to prepare their modules for release.
  • :check:symlinks - Fails if symlinks are present in the module's directory.
  • :check:test_file - Fails if any .pp files are present in the module's tests folder.
  • :check:dot_underscore - Fails if any ._* files are present in the module's directory.
  • :check:git_ignore - Fails if anything listed in the module's .gitignore has been committed.
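
For anyone who'd like to wire up something similar, here is a minimal sketch of what a parent task of this shape can look like in a module's Rakefile. It is not the exact puppetlabs_spec_helper code (that lives at the link above), and it assumes the :validate and :spec tasks are already defined, as they are when you use our spec_helper:

    # Rakefile -- simplified sketch of a release_checks-style parent task.
    # Assumes :validate and :spec already exist (puppetlabs_spec_helper defines them).

    namespace :check do
      desc 'Fail if any symlinks have been committed to the module'
      task :symlinks do
        symlinks = Dir.glob('**/*').select { |f| File.symlink?(f) }
        raise "Symlinks found: #{symlinks.join(', ')}" unless symlinks.empty?
      end

      desc 'Fail if any ._* files are present in the module'
      task :dot_underscore do
        dot_files = Dir.glob('**/._*')
        raise "._ files found: #{dot_files.join(', ')}" unless dot_files.empty?
      end
    end

    desc 'Sanity checks to run before cutting a module release'
    task release_checks: [:validate, :spec, 'check:symlinks', 'check:dot_underscore']

A raise in any child task stops the run, so 'rake release_checks' fails fast on the first problem it finds, which is exactly the behaviour you want just before a release.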

It may seem like a very simple piece of work, and that’s because it is! But the addition of this Rake task made a big difference. Now, when a developer goes to release a module, they no longer have to trawl through the repo trying to find a slip-up. No one wants to have to do monkey work, and with this in place we no longer have to.

Those annoying small tasks you gotta do all the time? Automate them. It’s worth it.

Release-to-Forge Jenkins job

If you love something, set it free — and we most certainly love releasing our modules. Whether it’s a bug fix, feature release or a massive update with new language features, we like to make sure we keep a good amount of momentum when it comes to pushing modules out the door to Puppet community members.

Back when I started at Puppet in September 2015, our releases were all done manually. This included several "heavy lifting" steps, as well as the small steps we outlined and automated above. Once we had our release checks automated and in place, the next aim was to automate the actual release process itself. This included several things:

  • Checking out the release SHA locally.
  • Running the release_checks Rake task against that SHA.
  • Manually tagging the SHA after QA and Docs have signed off.
  • Building the package using puppet module build.
  • Uploading the module to the Forge.
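
In command form, those steps amount to roughly the following. Here is a hedged sketch of them as a small Ruby script; the repo URL, SHA and tag are placeholders, and the Forge upload is left as a comment because it happened as its own step (and is the part the Jenkins job described below now handles for us):

    #!/usr/bin/env ruby
    # Sketch of the (previously manual) release steps. REPO, RELEASE_SHA and TAG
    # are placeholders; substitute the module you are releasing.

    REPO        = 'git@github.com:puppetlabs/puppetlabs-apache.git'
    RELEASE_SHA = 'abc1234'   # the SHA QA and Docs signed off on
    TAG         = '1.2.3'

    def run(cmd)
      puts "==> #{cmd}"
      system(cmd) || abort("Command failed: #{cmd}")
    end

    run "git clone #{REPO} module-release"
    Dir.chdir('module-release') do
      run "git checkout #{RELEASE_SHA}"
      run 'bundle exec rake release_checks'   # the checks described earlier
      run "git tag -a #{TAG} -m '#{TAG}' && git push origin #{TAG}"
      run 'puppet module build'               # builds the tarball under pkg/
      # The pkg/*.tar.gz then gets uploaded to the Forge as the final step.
    end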


Image of module release efficiency


As you can imagine, this process took rather a long time (including getting the SHA signed off by both QA and Docs): sometimes a week, or even more if something was found during the signoff process.

Then came the fateful day we decided to automate this using Jenkins. As we already had Jenkins pipelines in place for running module tests against all our supported OSes, we figured this was something we could utilize to take the sting out of releases.

One of the first things we did was create a new branching strategy for our modules. Some of you may even remember the old style of branches, which included making one branch per release. Now, we have a consistent branching strategy across supported modules: a release branch that contains the prep work for a release, and a master branch for active development, like so:


Image of module release efficiency


We then set up our Jenkins pipelines appropriately: one for master to run against a set of supported OSes and one to run against the release branch, with additional tests to ensure that the module is sound and green before we release.

Our release engineering (RE) team worked closely with the quality engineering (QE) team, providing information on the checks RE does to sign off a module for release. The work to automate this was a collaboration between these two teams and the modules team.

QE set up a Jenkins job for building and pushing a module to the Forge. RE then worked on scripting for HipChat to be able to tag an SHA in GitHub, then kick off the push-to-Forge job with that SHA.

This means we can now tag and release a module with one command in HipChat once the module is ready for release, as seen below:


Image of module release efficiency
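
RE's actual HipChat scripting is their own, but the operations underneath are small API calls. As a hedged illustration only (the repo, SHA, tag and Jenkins job URL are placeholders, and Jenkins authentication is omitted), the tag-and-trigger step boils down to something like this in Ruby:

    require 'octokit'
    require 'net/http'
    require 'uri'

    # Placeholders -- these are illustrative values, not our real job or repo.
    REPO        = 'puppetlabs/puppetlabs-apache'
    RELEASE_SHA = 'abc1234def5678'
    TAG         = '1.2.3'
    JENKINS_JOB = 'https://jenkins.example.com/job/push-to-forge/buildWithParameters'

    # 1. Tag the signed-off SHA in GitHub.
    client = Octokit::Client.new(access_token: ENV['GITHUB_TOKEN'])
    client.create_ref(REPO, "tags/#{TAG}", RELEASE_SHA)

    # 2. Kick off the push-to-Forge Jenkins job, passing the SHA as a parameter.
    uri = URI(JENKINS_JOB)
    req = Net::HTTP::Post.new(uri)
    req.set_form_data('RELEASE_SHA' => RELEASE_SHA, 'TAG' => TAG)
    # Jenkins user/API token and CSRF crumb handling omitted for brevity.
    res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
    puts "Jenkins responded with HTTP #{res.code}"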


These additions have streamlined our process, and significantly cut the time it takes for us to release a module. The only manual steps now left in this process are the QA and Docs signoffs. Another member of our team, Paula McMaw, is working towards automating signoffs on the QA side right now.

The QA release signoff consists of multiple stages, the majority being the verification of information that's all pretty easy to find on Jenkins and GitHub. We have introduced ChatOps to interact with the Jenkins and GitHub APIs, using simple HTTP requests. Once the information we need is returned, we have added logic to determine whether the module is okay to release.
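
The Hubot plumbing itself is written against those APIs, but the logic is easy to reproduce. Here is a hedged sketch in Ruby of the kind of check we run (the Jenkins job URL, repo and branch are placeholders): pull the latest Jenkins build result and the combined GitHub commit status, and only give the green light if both are passing.

    require 'net/http'
    require 'json'
    require 'uri'

    # Placeholders -- substitute your own Jenkins job, repo and token.
    JENKINS_URL  = 'https://jenkins.example.com/job/module-release-tests/lastBuild/api/json'
    GITHUB_URL   = 'https://api.github.com/repos/puppetlabs/puppetlabs-apache/commits/master/status'
    GITHUB_TOKEN = ENV['GITHUB_TOKEN']

    def get_json(url, headers = {})
      uri = URI(url)
      req = Net::HTTP::Get.new(uri)
      headers.each { |key, value| req[key] = value }
      res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
      JSON.parse(res.body)
    end

    jenkins_result = get_json(JENKINS_URL)['result']   # 'SUCCESS' on a green build
    github_state   = get_json(GITHUB_URL, 'Authorization' => "token #{GITHUB_TOKEN}")['state']

    if jenkins_result == 'SUCCESS' && github_state == 'success'
      puts 'All checks are green. OK to release.'
    else
      puts "Not ready: Jenkins=#{jenkins_result}, GitHub status=#{github_state}"
    end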

Our signoff now consists of dropping a command into any room where our instance of Hubot is present, and the checks will take place. The chat will then provide output that states whether we can go ahead with the release or not:


Image of module release efficiency


Obviously when it comes to releases, a human is always going to need to be involved for sanity checking. But with our new automation in place, we have significantly reduced our points of failure, and we can release a module within a day if needed. A day!

More context, less switching

We’ve discussed how we’ve automated some previously manual tasks and the time saved. What I haven't covered is how much context switching these automated tasks have saved us from. Often these manual steps involved back-and-forth communication between team members in different departments, time zones, and so on. We would have to wait for another team member to sign off, and if they found anything, they would then have to wait for our response. By automating away these tasks, we greatly shortened the feedback loop and consequently reduced context switching, which, as everyone knows, can be terribly distracting and spoil concentration. Productivity for the win!

Modules graphs: Know your numbers

We have a biweekly PR triage rotation in which one person looks after keeping pull requests moving on the modules that we support. Because of that, we always knew and appreciated that a lot of the work on modules came from outside Puppet, but we never had any actual data on how much value it provided.

It was also considerably difficult to keep track of the work. We suffered a lot from cookie licking, especially when triage duty changed hands between sprints. We had to ensure that any information gained by the triager was passed on to the next person, and we had to try to keep track of those PRs where a community member had pinged Puppet for help.

Instead of blindly continuing to struggle against these maintenance problems, we decided to implement a couple of scripts to shed some light on the whole thing. The scripts check GitHub using a helper called octokit_utils, which builds on Octokit, the Ruby toolkit for accessing GitHub’s API.

In total, we made three scripts for generating graphs or tables:

PR work done

https://github.com/underscorgan/community_management/blob/master/pr_work_done.rb

This script exists to show us what work has been done on PRs for supported modules. It covers the number of PRs closed, number merged, and number commented on, all in a tidy graph format.
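
The real script is at the link above and does rather more, but the heart of it is easy to sketch with Octokit directly. In this hedged example, the repo and the two-week window are placeholders:

    require 'octokit'

    # Placeholders: pick your own repo and reporting window.
    REPO  = 'puppetlabs/puppetlabs-apache'
    SINCE = Time.now - (14 * 24 * 60 * 60)   # roughly one triage rotation

    client = Octokit::Client.new(access_token: ENV['GITHUB_TOKEN'])
    client.auto_paginate = true

    recently_closed = client.pull_requests(REPO, state: 'closed')
                            .select { |pr| pr.closed_at && pr.closed_at > SINCE }
    merged, closed_unmerged = recently_closed.partition { |pr| pr.merged_at }

    puts "PRs merged in the window:      #{merged.size}"
    puts "PRs closed without merging:    #{closed_unmerged.size}"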


Image of module release efficiency


Now that we have this in place, we can really see the value of our work when it comes to PRs. The most impressive thing I took from this graph is the sheer difference between the number of PRs we close and the number we merge. We get significant value from community PRs, and this graph shows it.

Vox Pupuli uses the community_management tooling for gathering their statistics, and they also contribute to it. Pretty much anyone can use these scripts to perform analysis on their own GitHub repositories.

PR review list

https://github.com/underscorgan/community_management/blob/master/pr_review_list.rb

This script's aim is to gather all the PRs and display information about them in table form. PRs can be filtered by date created, who the last comment was from, and more.
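
Again, the full script is linked above. A cut-down sketch of the same kind of table using Octokit (the repo is a placeholder, and this version only lists open PRs with their author and last commenter) could look like:

    require 'octokit'

    REPO   = 'puppetlabs/puppetlabs-apache'   # placeholder
    client = Octokit::Client.new(access_token: ENV['GITHUB_TOKEN'])
    client.auto_paginate = true

    puts format('%-7s %-12s %-15s %-15s %s', 'PR', 'Created', 'Author', 'Last comment', 'Title')
    client.pull_requests(REPO, state: 'open').each do |pr|
      last_comment = client.issue_comments(REPO, pr.number).last
      puts format('%-7s %-12s %-15s %-15s %s',
                  "##{pr.number}",
                  pr.created_at.strftime('%Y-%m-%d'),
                  pr.user.login,
                  last_comment ? last_comment.user.login : '-',
                  pr.title)
    end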

Image of module release efficiency


The benefits of this script became evident as soon as we implemented it. We now have a solid, steady means of monitoring the PRs that come in from both the community and Puppet employees. We can also use the filters to ensure that any comments pinging a member of the Puppet team for help aren't lost in the fray of GitHub emails we all receive.

Keeping on top of PRs has always been important to us, and having a reliable way to track them was a real milestone. Making sure that we reply to those contributors in need of assistance or review, and keeping our community members happy, makes us happy. After all, as the work-done graph shows, we really do get a lot of value from our community.

Release planning

https://github.com/underscorgan/community_management/blob/master/release_planning.rb

The last script we implemented helps us keep track of all supported modules in need of a release. For this, we set our own criteria for when a module is due for a release, including the number of commits and the amount of time that has passed since the last release.

Image of module release efficiency
As you can see, it’s a pretty simple HTML table. However, we again find ourselves in a position where the value it brings is substantial. Before we had any of this information, we would cut releases whenever we deemed it necessary, which made it easy to misjudge whether a module actually needed a release.

Now we can load up this HTML page and quickly check which modules are due for a release.
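
Our exact thresholds are encoded in the script above, but the core question it answers can be sketched with Octokit in a few lines. In this hedged example, the repo, the commit count and the age threshold are all placeholders, and it assumes the newest tag is the first one the API returns:

    require 'octokit'

    REPO             = 'puppetlabs/puppetlabs-apache'   # placeholder
    COMMIT_THRESHOLD = 25                               # placeholder criteria
    AGE_THRESHOLD    = 90 * 24 * 60 * 60                # ~3 months, in seconds

    client   = Octokit::Client.new(access_token: ENV['GITHUB_TOKEN'])
    last_tag = client.tags(REPO).first                  # assumes newest tag is listed first
    tag_date = client.commit(REPO, last_tag.commit.sha).commit.author.date
    ahead_by = client.compare(REPO, last_tag.name, 'master').ahead_by

    if ahead_by >= COMMIT_THRESHOLD || (Time.now - tag_date) >= AGE_THRESHOLD
      puts "#{REPO} is due a release: #{ahead_by} commits since #{last_tag.name} (tagged #{tag_date})"
    else
      puts "#{REPO} does not look due a release yet."
    end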

Finishing notes and thanks

First I’d like to thank a lot of people from Puppet who worked to help all this happen. It was truly a team venture, and everyone on the modules team has contributed at some point. A huge thank you to Morgan Rhodes in site reliability engineering for letting us build on her original modules scripts, and for all the work she and her team did to automate the release signoff. Without the efforts of Morgan and her team, we would still be in a sorry state indeed.

Another big thank you to Craig Gomes. Without his enablement of these projects, we'd still be fairly stuck in the mud. Having a manager who believes emphatically in our team's self-improvement is a blessing.

I’d really encourage you to look at your own boring, tedious processes that are ripe for automation. It’s very easy to slip into the bed of routine and get stuck there, but spending time on upgrading or automating your processes can pay off tenfold. It has for us!

Helen Campbell is a software engineer at Puppet.
