published on 27 July 2015

We have just released a new version of our Learning VM, the virtual environment that helps you learn Puppet through a series of fun and interactive quests. The updated Learning VM now runs with Puppet Enterprise 3.8.1, and includes numerous improvements to both the content and underlying technology. Instead of sticking to a rundown of features and bug-fixes, however, I want to use the occasion to play with a question I've been stewing on since I started working on this project in early 2014.

What if we approach a new user’s experience learning Puppet in the same way Puppet itself tackles the task of configuration management? What if I could write some Puppet code, apply it to myself, and be done with learning? How about something like this:

class skills::resources {
  skill { 'puppet-resource-tool':
    ensure => mastered,
  }
  skill { 'puppet-resource-syntax':
    ensure => 'got-the-hang-of',
  }
  skill { 'puppet-resource-types':
    ensure => 'know-pretty-well',
  }
}

class skills::essentials {
  include skills::resources
  include skills::classes
  include skills::modules
}

node 'user.kevinhenner.human' {
  include skills::essentials
}

The more I thought about this, the more these parallels started to seem worthy of something more than passing humor.

Think for a moment about how Puppet works. The relationship between the Puppet agent and master is a closed feedback loop that ensures your infrastructure remains in a desired state. First, the agent sends a list of facts to the master. These facts describe the operating system and hardware the agent is running on. The master then uses these facts to compile a catalog that defines all the details of how the agent node should be configured. When the Puppet agent receives this catalog, it compares the state of the node to the desired state described in the catalog, and implements any changes needed to bring the node’s configuration into line. Finally, the agent sends a report of those changes back to the master.
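
To make the role of facts concrete, here's a minimal sketch (my own illustration, not something from the Learning VM) of a manifest that uses the osfamily fact to change what ends up in the compiled catalog:

# The agent reports its facts; the master uses them while compiling the
# catalog. Here the osfamily fact decides which package gets managed.
if $::osfamily == 'RedHat' {
  package { 'httpd':
    ensure => installed,
  }
} else {
  package { 'apache2':
    ensure => installed,
  }
}

The same manifest yields a different catalog for a RedHat node than it does for a Debian node; the master tailors the desired state to the facts the agent reports.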

What if we could set up a similar process for a Puppet user who wanted to acquire a specific set of skills? What if we could use some facts about a student to generate a catalog that would specify all the skills he or she needs, develop a learning tool that would check the status of those skills, and provide lessons and exercises designed to close the gap between actual and desired abilities? The model Puppet uses to “teach” a node how to fulfill its role in your infrastructure can help us think about “configuring” a user to fulfill his or her role in an IT organization.
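
To stretch the earlier joke a little further, you could imagine a completely hypothetical "student fact" steering which skill classes end up in the catalog (none of these classes or facts exist, of course):

node 'user.kevinhenner.human' {
  # A made-up fact about prior configuration management experience
  # decides which imaginary skill classes get included.
  if $::has_config_mgmt_experience {
    include skills::advanced
  } else {
    include skills::essentials
  }
}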

In its current form, our Learning VM falls a bit short of this cybernetic ideal for puppetized learning, but the parallels offer a framework for thinking through the full cycle of a learning tool’s goals and function.

To start with, we need a rigorous and testable way to define our desired state. Just as the definition of a node’s desired configuration depends on its role in your infrastructure, we need to define the Learning VM’s role in the larger context of someone's experience of learning Puppet.

The Learning VM’s scope, which complements our other educational offerings, boils down to two principles. First, the Learning VM should guide the student — someone who might have prior experience with configuration management, but then again, might not — from zero Puppet knowledge up to that magical point where the value gained from Puppet outweighs the time invested in learning it. The Learning VM can’t possibly teach you everything there is to know about Puppet, but it can get you over that first bump in the learning curve as quickly and painlessly as possible.

Second, the Learning VM should give the student a foundation sturdy enough to support the weight of the more advanced skills they will go on to develop. This means a trade-off between the quick bang-for-your-buck of the first principle, and the best practices and adaptable design principles that have longer-term payoff when you take your puppetized infrastructure to an enterprise scale.

Balancing these two ideas gives us a big-picture goal for the Learning VM: The fastest path to value without bad shortcuts.

Our current list of quests on the Learning VM is based on the set of skills that fulfill this goal, though research and user feedback, of course, help us refine it. We start by showing a user how to use the Puppet Enterprise console’s graphical interface to efficiently install and configure an application with many components and complex dependencies — in this case, the reporting tool Graphite. After this big-picture example, we move on to cover basic topics like resources, manifests, classes, and modules, before getting into more complex concepts such as resource ordering and conditional statements.

To make these skills testable, we break each quest into a series of specific tasks that guide the student through a realistic project related to the quest’s topic. As the user works through a quest, the Learning VM’s quest tool runs a series of Serverspec tests to track completion of each task.

For example, the third task in the resources quest asks the user to create a new user named galatea with the Puppet resource tool. The test for that task looks like this:

describe "Task 3:" do
  it 'Creates the user galatea' do
    file('/etc/passwd').should contain "galatea"
  end
end

Testing that this task is complete helps us track the student’s progress through various aspects of the skill we want to teach, but it doesn’t verify that the user fully understands or retains that skill. Comparing this to our puppetized learning model, we see that we haven’t quite closed the feedback loop. What would it take to close that gap — to create a testing model that would evaluate a student’s mastery of a skill in a way that could feed back into the lesson?

We’ve got a few ideas about this, but for now, I’ll leave the matter of reliable automated skill testing as an open question.

It’s exciting that we live in an era where the technologies around us — tools like Puppet — have evolved to the level where their complexity and precision can provide helpful models for thinking through human problems like learning. I know we could take this path further than I have in my musings here, and I expect that would lead us in some exciting directions. For now, though, I’m just satisfied that my jokes about puppetized learning have some substance to them!

Kevin Henner is a training solutions engineer at Puppet Labs.

Learn More

  • Grab the Learning VM to get started with Puppet.
  • If you or your team want to take your learning a little deeper — or if you just prefer the more personal approach — check out our classroom and online instructor-led trainings at learn.puppetlabs.com.
Comments

Finally, someone sees this pattern as a viable, or at least interesting, idea for teaching and dynamically testing knowledge. I envision something like a system that learns from the student's answers and presents questions or scenarios that challenge the user, and not only that, asks "out of content" questions as well.

A first model could be based on a predefined structure of questions organized by knowledge area and difficulty. If, for example, a student fails 3 of 10 questions about an area during the exam, the system reacts and presents questions from another area, or from the same area but at a lower difficulty. That way we are at least "forcing" the student to really know an area, rather than only knowing the answers to one or two questions about it.

Of course, the ideal for me would be to use some machine learning to track the student's behaviour. In the case of the Puppet VM, that means not only testing the final static state with Serverspec, but also observing how the student arrives at that task or conclusion, so the system can detect antipatterns or notice when the student reaches for the man pages, for example, and the system presenting the exam actually learns from the student. In this way we could find the best approach to questions and evaluation methods for each student.

Yes, those are some interesting ideas! I really like the system they use for mathematics at khanacademy.org, which has a few of the characteristics you've described. You might have a look there if you're interested in this kind of thing and haven't seen their materials before. I like the approach of testing mastery of a topic by assessing the user's ability to consistently provide correct answers to a series of variations on a problem.

Of course, applying the same model to something like Puppet would be a little more difficult, but still seems feasible for simple things like syntactical patterns in Puppet code. It wouldn't be very difficult, for example, to prompt a student to write a resource declaration of a certain type and with certain attributes, then check that the student's input would parse correctly and yield the expected result.
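
Purely as a sketch of what such an exercise might look like (the attribute values here are only illustrative), the system could prompt for a user resource named galatea with a particular home directory, and an acceptable answer would be something like:

# Hypothetical prompt: "Declare a user named galatea who is present on
# the system and whose home directory is /home/galatea."
user { 'galatea':
  ensure => present,
  home   => '/home/galatea',
}

Checking the syntax with puppet parser validate and confirming the outcome with a Serverspec test much like the one above would cover both halves of that check.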
