
Verifying Puppet: Checking Syntax and Writing Automated Tests

One of the issues that crops up when working with Puppet is ensuring that your manifests do what you expect. Errors are bound to happen. A missed brace can keep a manifest from compiling, and forgetting to include a module or set a variable can mean that running Puppet on the host fails to enforce the expected state. All in all, it would help to have some tools to make sure we’re writing valid code, that it does what we expect, and that if it doesn’t, we catch the problem as soon as possible.

Syntax Checking

At the lowest level of checking, you can use the Puppet parser to do syntax validation. Typos and errors are bound to creep into code, and catching them at the end of a long day can go a long way toward improving the quality of your life. There are a couple of places where you can insert syntax validation. One method is manually running `puppet parser validate selinux.pp` to make sure the manifest can be parsed before you commit your changes or deploy them to a live environment. If I left out a curly brace in a manifest and then ran `puppet parser validate selinux.pp`, I would get:
    % puppet parser validate selinux.pp
    err: Could not parse for environment production: Syntax error at '{'; expected '}' at /Users/adrien/puppetlabs-mrepo/manifests/selinux.pp:252
    err: Try 'puppet help parser validate' for usage
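To automate this check, you can run the same command from a Git pre-commit hook so that unparsable manifests never make it into the repository. Here is a minimal sketch, assuming `puppet` is on your `PATH` and your manifests end in `.pp`:

```shell
#!/bin/sh
# .git/hooks/pre-commit: refuse to commit manifests that fail to parse.
rc=0
for manifest in $(git diff --cached --name-only --diff-filter=ACM | grep '\.pp$'); do
    puppet parser validate "$manifest" || rc=1
done
exit $rc
```

If any staged manifest fails validation, the hook exits non-zero and Git aborts the commit, so the fix happens before the broken code is shared.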
Puppet parser tells me what went wrong, and which line contains the error.

In addition, you can integrate syntax checking into your editor. Vim has built-in code compilation functionality that can be used to run error checking, so you can quickly validate your code and jump to the sections with syntax errors. Plugins like Syntastic go further and check continuously, so you’re alerted the moment a syntax error is made.

Lastly, there’s the puppet-lint tool developed by GitHub’s Tim Sharpe, which analyzes your manifests and looks for deviations from the Puppet style guide. It’s a quick and easy way to ensure that everybody follows a common set of conventions, so as your module collection grows, you’ll have a consistent set of modules instead of sections with cobwebs. Running puppet-lint against a manifest could produce something like the following:
    % puppet-lint init.pp
    WARNING: top-scope variable being used without an explicit namespace on line 79
    WARNING: top-scope variable being used without an explicit namespace on line 81
    WARNING: define defined inside a class on line 59
    ERROR: single quoted string containing a variable found on line 124
    WARNING: string containing only a variable on line 81
    WARNING: => on line isn't properly aligned for resource on line 71
    ERROR: two-space soft tabs not used on line 50
    WARNING: line has more than 80 characters on line 83
    WARNING: line has more than 80 characters on line 84
    ERROR: trailing whitespace found on line 163
    WARNING: mode should be represented as a 4 digit octal value on line 55
Be warned that these steps only validate syntax. If a variable is misspelled or holds a bad value, your code will still be completely valid, and it still won’t do what you want. That’s why we have additional tools available to make sure your code actually works as intended.
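As a concrete illustration, the following manifest passes `puppet parser validate` without complaint (the class and variable names here are made up for the example), yet a typo in the variable reference means the file is silently written with empty content:

```puppet
class motd {
  $message = 'This host is managed by Puppet'

  file { '/etc/motd':
    ensure  => file,
    # Typo: $mesage was never assigned, but this is still valid syntax,
    # so /etc/motd ends up empty instead of validation failing.
    content => $mesage,
  }
}
```

Nothing short of testing the catalog or the resulting system will catch a mistake like this.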

Writing Automated Tests

Automated testing is one of the key ways to ensure that your libraries and manifests are meeting your expectations. Out of all the ways that you could test your manifests, I’ll highlight two: testing modules and their catalogs, and testing entire systems.

Testing Modules

As mentioned in a previous post by our Release team, you can add rspec and cucumber tests to ensure that your modules are creating resources as you expect. For example, you can write tests that ensure that when including a module to install Apache, the package Apache is installed and the service is started. If you were to further develop on that Apache module, you could move forward knowing that the tests would always ensure those basic behaviors would still exist, and if something changed by accident you would definitively know it had changed. The puppetlabs-apt module has been a focus of a lot of testing, and is a great example of how you can test your modules. Given rspec tests looking like this:
    it { should create_exec('apt_update') }
would produce output like this:
    Apt class with no parameters, basic test
      should create Class["apt"]
      should create Exec["apt_update"]
    Finished in 0.36027 seconds
    2 examples, 0 failures
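A fuller spec for that behavior might look something like this sketch using the rspec-puppet matchers; the file path and context description are assumptions on my part, while the class and resource names follow the puppetlabs-apt example:

```ruby
# spec/classes/apt_spec.rb
require 'spec_helper'

describe 'apt' do
  context 'with no parameters' do
    it { should create_class('apt') }
    it { should create_exec('apt_update') }
  end
end
```

Running rspec against the spec directory then compiles the catalog and checks it for the expected resources, producing output like the run shown above.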
Writing tests is a good way to verify that your modules are functional and reusable. Testing also serves as an indicator of quality, demonstrating that you have taken the time to ensure the module does what you want it to do. Additionally, when modules on the Puppet Forge ship with tests, you can verify them and check them for correctness more easily.

Testing Systems

Unit testing individual modules is a great step to take, but at the end of the day you want to know that running Puppet on a host will build the host the way you want, and that it will have the behavior you expect. Being able to programmatically verify that services like SSH, Postgres, and nginx are running and serving resources is powerful stuff. You can use Cucumber in a standalone manner to ensure that if you run Puppet on a host, you get the host you asked for when all is said and done. You can couple unit tests with system tests, and run everything in a testing environment before your changes go to production systems. Martin Englund has blogged about his experiences with "Behavior Driven Infrastructure" (a play on Behavior Driven Development, or BDD) with Cucumber, and did an excellent PuppetConf presentation about his experiences with Puppet and Cucumber.
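A system-level Cucumber feature might read like the following sketch; the wording is hypothetical, and the step definitions behind each line are assumed to shell out to the host to perform the actual checks:

```gherkin
Feature: SSH service
  In order to administer hosts remotely
  As an operations team
  I want sshd running after Puppet has applied the catalog

  Scenario: sshd is listening
    Given Puppet has been applied to the host
    When I connect to port 22
    Then I should receive an SSH protocol banner
```

Because the feature describes observable behavior rather than Puppet internals, the same scenario keeps working even if you later restructure the modules that configure SSH.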

Benefits of Testing Puppet Code

Everybody wants to have as smooth and seamless a workflow as possible. Deploying changes to your Puppet manifests only to discover that you forgot a comma or a brace, or writing a manifest that can’t actually run successfully, can eat up debugging time that would be better spent elsewhere. Adding a few proactive tools will prevent errors from propagating out, and being able to automatically verify your systems means that you can deploy changes fearlessly and become more agile in your day-to-day operations.

Learn More