
Behind the scenes of the Puppet Forge user experience

Background and context

The Puppet Forge started life in 2012 as a central repository for the modules that both Puppet itself and the wider Puppet community were beginning to produce.

The official Puppet Forge makes its first appearance on archive.org in June 2013, with a homepage promoting the latest ntp module release, plus supporting information on installing and sharing modules. This incarnation, hosted at a dedicated URL, puppetforge.com, provided a simple text search to access the 1,707 modules available at the time.


A search for “apt” yielded 30 results, spread over 3 pages. By contrast, the same search on the Forge earlier this year returned over 300 results, requiring 31 pages to display them.

And this has been the story of the Forge in recent years — that of massive growth both in terms of visitors and numbers of modules uploaded. We had over 1 million unique visitors on the Forge during 2017. At time of writing, the Forge is a repository for 5,500 modules and growing exponentially.

These are numbers that certainly delight us, but success brings challenges. To take the “apt” example above: choosing the right module from 30 results is a manageable task, but choosing from more than 300? This puts a huge burden on users, who must sift through large numbers of search results, often with only subtle differences between them.

Integration of Puppet Tasks

Changes to the Forge were already underway last year as we introduced Puppet Tasks to Puppet Enterprise and modules.

Updates made to the Forge at that point were to assist those who came to the Forge specifically to look for tasks, and to promote the hard work of module authors by visibly surfacing this work. Changes included a new search filter, a new homepage category, and inclusion of tasks in the module’s page summary panel, individual search results and individual module page information.

However, even a cursory glance at the Forge of 2013 and the 2018 model told another story, one of creeping complexity. For every piece of extended functionality, for every new module, for every new user tool, there’s a price that is paid in presenting increased complexity to the user.

In short, the Forge faced a number of new challenges. We knew the time had come to make the Forge work harder for our users.

A new approach

Our first step in this process was to better understand user behavior. Thankfully, this is something we don’t need to guess at.

At its simplest, the Forge is a repository stored in a database. Users perform a search against that database and a result set is returned. Users then review the results, and assess one or more modules by looking at those individual detail pages, before downloading a tarball or navigating to that module’s repository.
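At its simplest, that flow is a keyword lookup against a module store. A toy sketch (the module data and sort-by-downloads ordering here are illustrative assumptions, not the Forge's actual backend):

```python
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    summary: str
    downloads: int

# A tiny stand-in for the Forge's module database.
MODULES = [
    Module("puppetlabs-apt", "APT package management", 12_000_000),
    Module("puppetlabs-ntp", "NTP client and server", 9_000_000),
    Module("example-apt-mirror", "Manage local apt mirrors", 40_000),
]

def search(query: str) -> list[Module]:
    """Return modules whose name or summary contains the query text."""
    q = query.lower()
    hits = [m for m in MODULES
            if q in m.name.lower() or q in m.summary.lower()]
    # Most-downloaded first, a crude stand-in for popularity ranking.
    return sorted(hits, key=lambda m: m.downloads, reverse=True)
```

The user then reviews the returned set and drills into individual module pages, which is exactly where the pain of a 300-item result set shows up.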


The challenge this simplified behavior flow presents is that users understand it and expect it to stay simple. It’s just search and find, right? So anything that adds friction or obstacles against that expectation makes for a negative user experience.

Our first stop was Google Analytics, to compare our hypotheses and assumptions against hard data that could tell us:

  • How much time do people spend on each page, and on which pages?
  • How do people arrive at the Forge homepage? At an individual module page?
  • What does first-time use vs. repeat use look like?

From this, we learned surprising details, such as:

  • Only 28% of visits to the Forge begin with the homepage
  • More than 90% of searches are based on a single keyword
  • Fewer than 8% of users look beyond the first page of results

So analytics tell us what people are doing, but we also want to know more about why and how. We achieve this by cross-referencing the quantitative data from analytics with qualitative data gathered directly from users.

Our user experience (UX) team is constantly trying to find new ways to connect with Puppet users to hear what they have to say and how they say it. We are particularly interested in how they articulate their tasks and challenges. We ran a number of studies and surveys in the last two quarters:

  • At PuppetConf 2017, we ran usability studies at our Puppet Test Pilots stand, trying to understand more about how people interacted with the current Forge. We examined behavior around reviewing search results and evaluating modules. From these videos we were able to observe an occasional disconnect between filtering options and the controls which applied them.
  • We ran a Forge improvement study, asking users about their expectations, what was important to them, their motivations, stated ease of use, primary and secondary goals, and perceived experience. One of the most sobering findings was that more than 23% of participants did not view the Forge as an effective tool.
  • Another study, looking specifically at behavior during module evaluation, was carried out at the end of 2017. With many individual module pages running into thousands of words of content, we needed to know how users interacted with this content, and to what degree. One behavior we witnessed repeatedly was a concurrent use of Google search for related information even as users scanned a module’s detail page.

As an additional step, we reviewed prevalent patterns across the web for search initiation and search result presentation. We know Forge users have a natural technical bent; but when using the web we all like to use familiar tools that work in consistent ways, and not have to guess how something works from website to website.

We put a lot of thought into how we could influence behavior, not in a coercive manner (in UX we call that a dark pattern), but by asking how we might nudge users to interact in a way that made the Forge work better for them and improved their experience. A simple example is the pattern of single-word search terms highlighted above. What if we encouraged more descriptive, multi-word strings that better described users’ needs? Could we then return increasingly relevant result sets for their searches? Armed with designs intended to solve the problems we had discovered in our research, we decided to find out.
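Why do multi-word queries help? A toy relevance score makes the intuition concrete: rank results by how many distinct query terms they match, and a richer query naturally pushes the best match to the top of a smaller set. This is an illustrative sketch of the general technique, not the Forge’s actual ranking algorithm:

```python
def relevance(query: str, text: str) -> int:
    """Count how many distinct query terms appear in the text
    (naive substring matching, purely for illustration)."""
    terms = set(query.lower().split())
    haystack = text.lower()
    return sum(1 for t in terms if t in haystack)

def rank(query: str, summaries: list[str]) -> list[str]:
    """Order summaries by descending relevance, dropping non-matches."""
    scored = [(relevance(query, s), s) for s in summaries]
    return [s for score, s in sorted(scored, reverse=True) if score > 0]
```

With a hypothetical catalog, the single-word query "apt" leaves two equally plausible candidates, while "apt pinning" immediately ranks the pinning module first.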

Before going live with the changes, we carried out tests with users to make sure we were offering an effective experience both for new users, and for those people who visit the Forge regularly. If we had broken the experience for long-time users, we would have taken a major step backwards. Happily, all tests suggested that this was not the case.

Results and next steps

The fruits of this work have recently been pushed live to the Forge. We have a new Forge homepage, a more effective search tool, and more informative module summaries in search results. Our aim is to connect users to smaller result sets of more relevant modules, making assessment and evaluation decisions easier, and delivering the content that people need faster.


Key improvements include:

  • More relevant results more quickly: Where some filters had been available only after a search was performed, the most-used filters can now be applied up-front at search initiation.
  • Top modules prioritized: With certain modules in far greater demand than others, users previously still had to work just as hard to locate them. Key modules are now promoted from the homepage with no need to search.
  • Variable number of results per page: Previously, search results were limited to 10 per page; now users can review more results on a single page, and more module authors get their modules onto page one of search results.
  • More effective summary results: Key module details such as date updated and total downloads were laid out irregularly on search tiles; these are now more prominent and aligned to make them easier to scan.
  • Matching users’ mental model: No one likes to have to re-learn common tasks for different web sites or resources. Our search mechanism, filtering and sorting patterns are now more directly relatable to common web patterns.
  • Badging as an integral part of the design: As module categories had increased, we struggled to accommodate them into the design. Now, category badges have a more considered position in the page layouts.

For more detail on the changes we made, read this great blog post from Nicole Anderson.

We should emphasize that our work is ongoing. We have noted a change in certain behaviors, and we’ll be monitoring them closely, responding to emerging patterns of use and continuing to hone the Forge’s performance.

Next, our focus shifts to individual module pages, as we look to better facilitate the kinds of page-scanning behaviors we observed during usability studies. Work is already underway on this and we continue to work to improve the Forge experience as another key component of the Puppet ecosystem.

We always want to hear from Puppet users. Let us know how we’re doing and help us to make your job easier and your work better.

Rick Monro is a principal UX architect at Puppet.