How to automate Windows patching with Puppet
UPDATE: Since the writing of this blog post, Kevin has published the patching_as_code module on the Puppet Forge, which contains an evolution of the approach described below while being much easier to adopt and implement. If you'd like to implement automated patching through Puppet code, please take a look at that module!
Something I get asked frequently is: "Can Puppet do patch management?"
And my response to that is: "Absolutely! But, what does patch management look like to you?"
Puppet does not prescribe one specific, absolute way that you should do patch management. Instead, Puppet helps you orchestrate the patch management process in a way that works for your organization, which means it will probably look slightly different across different users and companies.
Let’s get specific with real examples
That's both great and... not so great. Because if you're like me, you're probably not thrilled about the idea of a you-can-do-anything-you-set-your-mind-to blank canvas... I'd much rather see a real example of what you can do first, and then modify it to fit my specific needs.
So I thought I'd share the automated patch management solution for Windows that I've built for my own environment. It's a useful example of how to go about automating the patching process, using the tools that Puppet and a number of modules on the Forge provide.
But, you might ask, why not just use Windows Server Update Services (WSUS)? Like most patch management solutions for Windows, the solution described in this blog post leverages WSUS for better control over which updates are detected as necessary on systems. Compared to simply scanning against the public Windows Update site, WSUS provides control over the products, languages and categories of updates that are scanned for.
But when it comes to enforcing updates, WSUS is similar to Active Directory GPOs in that it only provides a 'spray gun' method of enforcement. You simply approve updates centrally on the WSUS server, and then wait (or hope) for systems to autonomously download the updates, install them at the correct times and reboot afterwards. No record is created of which updates were installed when, or when reboots happened. You only notice that the pie chart in the WSUS console looks different than before.
On to our Windows patch management solution
With Puppet, we have more control over both enforcement and reporting, and thus we can build a better solution on top of WSUS.
Let's first take a look at the functional steps of the patching process that I wanted to automate:
- Figure out which patches are needed on all my nodes
- Have a way to control which patches are allowed to be installed
- Limit the time of installation to specific patch windows, which can be different across nodes
- Automatically reboot nodes when necessary, but only within the patch window
- Have clear records of which patches were installed and when
Great, now let's dive a little deeper into each step and see how we can automate that step with Puppet.
Step 1: Figure out which patches are needed
Obviously, it all starts with knowing what patches our systems need. There are a number of modules on the Puppet Forge that can scan servers for updates and report the information back in the form of a structured fact. Structure is useful, as it makes further automation possible with the fact as the source. I originally started out with the jpi-updatereporting_win module from Joey Piccola, which provides a fact that includes a very useful `missing_update_kbs` section that lists all the needed KB updates in a neat array. There were one or two small issues with the module, for which I've posted a PR, but at the time of writing it hasn't yet been merged into a new version.
A colleague pointed out that there has been a broader effort underway to provide patch management capabilities for all operating systems (not just Windows) in the os_patching module by Tony Green. Since then we have collaborated on a module update to provide that same `missing_update_kbs` section in his fact, which was recently published to the Forge. A nice benefit of the os_patching module is that it also includes the os_patching::patch_server task, which can be used to manually install the updates that are reported as needed. I personally use this task on Linux. For Windows, I wanted to go a step further: fully automate the process and see the patches as native managed resources in Puppet.
To enable the reporting, we simply need to install the module and add 1 line to our Puppet code:
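With the module installed from the Forge, that single line is the module's standard entry point:

```puppet
include os_patching
```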
Let's build this into some Puppet code, so that we can classify it to our nodes later:
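A minimal sketch of such a profile class, using the `profile::patch_mgmt_win` class name that the rest of this post builds on:

```puppet
# Profile class wrapping the os_patching module, so we can
# classify nodes with profile::patch_mgmt_win later on
class profile::patch_mgmt_win {
  include os_patching
}
```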
If you run that code against a node, you'll see a new `os_patching` fact get reported. Here's an example:
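An abridged, illustrative version of the fact as reported on a Windows node (all values below are hypothetical placeholders; the real fact contains more detail):

```
os_patching => {
  package_updates => [
    "2018-09 Cumulative Update for Windows Server 2016 (KB4457131)"
  ],
  missing_update_kbs => [
    "KB4457131"
  ],
  patch_window => "Week3_Sunday_0100-0400",
  reboots => {
    reboot_required => true
  }
}
```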
We can see a number of useful things here:
- The `package_updates` section provides a description of the patches that are needed. On Linux, this lists all the packages in need of an update.
- The `missing_update_kbs` section lists just the KB numbers, making it easy to use that array for patching later.
- The `patch_window` section allows us to set an arbitrary value, which we can use to place nodes into actual patch/maintenance windows later (see step 3).
- The `reboots` section provides information on whether or not our Windows nodes are in need of a reboot. We can use this later to automatically reboot nodes when necessary.
Step 2: Have a way to control which patches are allowed to be installed
Now that we have a way to report needed patches, we need a way to control which of those patches we actually approve of. This is often a layered approach, as there are multiple places where you could assert such controls.
Whenever you need a layered approach, Hiera is essential. Hiera is a powerful way to store parameter data outside of your Puppet code, in a hierarchical structure that minimizes duplication. That way, your Puppet code contains the logic, while your Hiera data contains the parameter values for all specific scenarios.
In my setup, I have 3 defined layers:
- Classification in WSUS, combined with an automatic approval group
- A patch blacklist in Hiera
- A patch whitelist in Hiera
Starting with WSUS: as explained above, I use this to have better control over which updates are reported as needed by systems, and whether or not an update can then be retrieved for installation. In my case, I wanted all needed patches in specific categories to be automatically approved for installation. First, I used the wsusserver module from the Forge to automatically set up and configure the WSUS server itself:
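A sketch of what that configuration can look like. The class and defined-type names below follow the wsusserver module as I recall its API; exact parameter names may differ between module versions, so treat this as an assumption and verify against the module's README:

```puppet
# Install and configure the WSUS server itself (wsusserver module)
class { 'wsusserver':
  targeting_mode         => 'Client',
  products               => ['Windows Server 2016', 'Windows Server 2019'],
  update_classifications => ['Critical Updates', 'Security Updates', 'Updates'],
  update_languages       => ['en'],
}

# Computer group that the automatic approval rule targets
wsusserver::computergroup { 'AutoApproval': }

# Automatically approve updates in the chosen classifications
# for members of the AutoApproval group
wsusserver::approvalrule { 'Auto approval rule':
  enabled         => true,
  classifications => ['Critical Updates', 'Security Updates', 'Updates'],
  computer_groups => ['AutoApproval'],
}
```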
This ensures WSUS is configured with the right products, classifications and an automatic approval group called 'AutoApproval'. All I need now is to configure the managed nodes to use this WSUS server and to place themselves in the AutoApproval group. We can do that by adding the wsus_client module and defining its class in our Puppet code:
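Something along these lines, using the puppetlabs/wsus_client module. The server URL is a placeholder, and the parameter names are from the module's documentation as I recall it, so verify before use:

```puppet
# Point managed nodes at our WSUS server and the AutoApproval group
class { 'wsus_client':
  server_url           => 'http://wsus.example.local:8530', # placeholder URL
  enable_status_server => true,
  auto_update_option   => 'NotifyOnly', # let Puppet control installation timing
  target_group         => 'AutoApproval',
}
```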
Next, I wanted control over the reported set of needed updates, and be able to selectively blacklist or whitelist updates for specific groups of nodes. This allows me to have WSUS auto-approve all updates, but then have specific updates be prevented from being installed via Puppet if I deemed it necessary.
Getting the list of reported updates from the `os_patching` fact is straightforward:
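That lookup is a one-liner:

```puppet
# All KB updates that WSUS reports as needed on this node
$available_updates = $facts['os_patching']['missing_update_kbs']
```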
Now to filter out updates through a whitelist or blacklist, we can use a fairly simple evaluation like this:
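A sketch of that evaluation, using the `$whitelist` and `$blacklist` parameter names this post works with:

```puppet
if $whitelist {
  # Whitelist wins: only install updates explicitly on the whitelist
  $updates = $available_updates.filter |$kb| { $kb in $whitelist }
} elsif $blacklist {
  # Otherwise, install everything that is not blacklisted
  $updates = $available_updates.filter |$kb| { !($kb in $blacklist) }
} else {
  $updates = $available_updates
}
```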
This ensures that:
- ...if a whitelist is defined at all for this node, we immediately filter the available updates to those that are on the whitelist.
- ...if a whitelist is not defined, but a blacklist is defined, we filter the available updates to those that are not on the blacklist.
After this, we have an `$updates` variable containing an array of updates, upon which we can iterate later for automated installation.
Let's add this logic to our code, with the `$whitelist` and `$blacklist` variables defined as optional parameters to the `profile::patch_mgmt_win` class, so that we can set the values for those parameters via Hiera when needed:
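Putting that together, the class so far could look like this (a sketch, not the author's exact Gist code):

```puppet
class profile::patch_mgmt_win (
  Optional[Array[String]] $whitelist = undef,
  Optional[Array[String]] $blacklist = undef,
) {
  include os_patching

  # All KB updates reported as needed on this node
  $available_updates = $facts['os_patching']['missing_update_kbs']

  if $whitelist {
    $updates = $available_updates.filter |$kb| { $kb in $whitelist }
  } elsif $blacklist {
    $updates = $available_updates.filter |$kb| { !($kb in $blacklist) }
  } else {
    $updates = $available_updates
  }
}
```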
To optionally define, for example, a blacklist, you simply create a `profile::patch_mgmt_win::blacklist` value in Hiera for any node(s) that need it. Here's what I'm using today:
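For example (the KB number below is a hypothetical placeholder, not the author's actual blacklist):

```yaml
# Hiera data for nodes that should never receive this update
profile::patch_mgmt_win::blacklist:
  - 'KB4462928'
```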
Step 3: Limit the time of installation to a specific patch window
Now that we have our list of patches to be installed, we want to make sure they only get installed during a patching maintenance window. For this I've opted to use the built-in `schedule` resource, which allows you to ensure that specific resources can only be enforced within a specific window. Outside of the window, the resource gets reported as "skipped". That's also useful, because it tells us what will happen later to that node (when it reaches its window).
Here's an example of a `schedule` defining a patch window:
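For example, matching the Sunday 1-4AM window used as the default later in this post:

```puppet
schedule { 'patch_window':
  period  => weekly,
  weekday => 'Sunday',
  range   => '1:00 - 4:00',
  repeat  => 3,
}
```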
This creates a schedule resource named 'patch_window', which we can tie to other resources by adding `schedule => 'patch_window'` to their resource declaration. Once added, that resource will only be allowed to be enforced on Sundays between 1AM and 4AM.
One caveat: I'm using Puppet's generic scheduling feature here, which only controls during what time resources can start to be enforced. So in the case above, if a Puppet run starts just before 4AM, it will be able to start the installation of an update up until 3:59AM. The completion of the patch installation will then run, for the most part, outside of the defined patch window. As Microsoft's monthly Cumulative Updates can take up to an hour or so to install, you may want to take this into consideration, and end the patch window defined in the Puppet schedule resource an hour earlier than the actual patch window agreed with the business.
You may wonder: why repeat only 3 times? Well, this is because, in my case, the maximum I want to happen during a patch window is this:
- On the first Puppet run: if the node was already in need of a reboot, reboot the node first
- On the second Puppet run: apply any patches needed by this node
- On the third Puppet run: reboot the node if applied patches have caused a reboot to be needed
You could also create separate schedule resources for patching vs. reboots, to have even more control over what happens when. In some circumstances, for example if a patch has difficulty getting installed successfully, you may end up in a situation where the patch is finally installed on the third run and a reboot is still needed but no longer allowed to happen. Having a separate reboot window before and after the patch window would prevent this. For me though, the combined patch & reboot window is good enough at the moment. And I can always check the value of `reboot_required` in the `os_patching` fact to determine if there are any systems that still need a reboot.
I want the actual patch window to be different across nodes, so I'll again leverage Hiera for that. In our `profile::patch_mgmt_win` class we will simply expect a `$patch_window` variable coming from Hiera, or define a default one (Sunday 1-4AM) for when that doesn't happen:
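A sketch of that parameter with its default; the hash keys mirror the attributes of the `schedule` resource, and the splat operator expands them into the resource declaration:

```puppet
class profile::patch_mgmt_win (
  # Default patch window if Hiera doesn't supply one: Sunday 1-4AM
  Hash $patch_window = {
    'period'  => 'weekly',
    'weekday' => 'Sunday',
    'range'   => '1:00 - 4:00',
    'repeat'  => 3,
  },
) {
  # Create the schedule from the (Hiera-supplied or default) hash
  schedule { 'patch_window':
    * => $patch_window,
  }
}
```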
In Hiera, we need a `profile::patch_mgmt_win::patch_window` variable defined for each node, for example:
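For example, a node that should patch on Saturday evenings (hypothetical values):

```yaml
profile::patch_mgmt_win::patch_window:
  period: 'weekly'
  weekday: 'Saturday'
  range: '22:00 - 24:00'
  repeat: 3
```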
To easily control this with a single variable, I've added this extra layer to my Hiera hierarchy:
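A sketch of what that layer can look like in hiera.yaml (the path is an assumption based on the directory layout described below):

```yaml
# Extra hierarchy layer, keyed off the node's patch window value
- name: 'Maintenance windows'
  path: "maintenance/%{facts.os_patching.patch_window}/common.yaml"
```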
...and created several different variants of the `profile::patch_mgmt_win::patch_window` variable in each `common.yaml` file under patch window directories below a `maintenance` directory in Hiera:
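The resulting data directory layout looks along these lines (the window names are hypothetical; each common.yaml contains one variant of the patch_window hash):

```
data/maintenance/
  Week1_Sunday_0100-0400/common.yaml
  Week3_Saturday_2200-2400/common.yaml
```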
All we need to do now is to make sure the `patch_window` section in the `os_patching` fact contains the value for the patch window that the node should be in. The most flexible way to do this is by setting the `os_patching::patch_window` variable for each node through Hiera, for example:
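For example, in that node's Hiera data (the window name is a hypothetical placeholder that must match one of the maintenance directories):

```yaml
os_patching::patch_window: 'Week3_Saturday_2200-2400'
```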
Alternatively this could also be set via the node classifier in Puppet Enterprise, which may give you easier ways of grouping nodes together and then setting the patch window parameter that way.
Step 4: Automatically reboot nodes when necessary
To achieve this, we will enforce a `reboot` resource (from the reboot module) whenever the `reboot_required` value in the `os_patching` fact is `true`. The code looks like this:
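A sketch of that code. The `when => 'pending'` trigger is my assumption for how to fire the reboot resource from puppetlabs/reboot; the resource title matches the one referenced later in this post:

```puppet
if $facts['os_patching']['reboots']['reboot_required'] == true {
  # Reboot the node, but only within the patch window
  reboot { 'patch_window_reboot':
    when     => 'pending',
    schedule => 'patch_window',
  }
}
```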
Adding that to our `profile::patch_mgmt_win` class, we get:
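As a sketch (again, not the author's exact code), the class now covers reporting, filtering, scheduling and rebooting:

```puppet
class profile::patch_mgmt_win (
  Optional[Array[String]] $whitelist    = undef,
  Optional[Array[String]] $blacklist    = undef,
  Hash                    $patch_window = {
    'period'  => 'weekly',
    'weekday' => 'Sunday',
    'range'   => '1:00 - 4:00',
    'repeat'  => 3,
  },
) {
  include os_patching

  schedule { 'patch_window':
    * => $patch_window,
  }

  $available_updates = $facts['os_patching']['missing_update_kbs']

  if $whitelist {
    $updates = $available_updates.filter |$kb| { $kb in $whitelist }
  } elsif $blacklist {
    $updates = $available_updates.filter |$kb| { !($kb in $blacklist) }
  } else {
    $updates = $available_updates
  }

  if $facts['os_patching']['reboots']['reboot_required'] == true {
    reboot { 'patch_window_reboot':
      when     => 'pending',
      schedule => 'patch_window',
    }
  }
}
```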
Step 5: Have clear records of which patches were installed and when
Finally, let’s create the code to actually install the updates, so that we also create a record (of enforced Puppet resources) to show which patches were installed and when this happened.
I've been collaborating with Alexander Tsirel on his windows_updates module, for which the new version is now also available on the Forge. This module makes it a breeze to install Windows KB patches using Puppet code. It's as simple as:
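For example (the KB number is a placeholder):

```puppet
windows_updates::kb { 'KB4457131':
  ensure => 'present',
}
```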
Let's integrate that into our `profile::patch_mgmt_win` class to get the final solution:
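A hedged reconstruction of that final class (for the author's exact code, use the Gist linked below):

```puppet
class profile::patch_mgmt_win (
  Optional[Array[String]] $whitelist    = undef,
  Optional[Array[String]] $blacklist    = undef,
  Hash                    $patch_window = {
    'period'  => 'weekly',
    'weekday' => 'Sunday',
    'range'   => '1:00 - 4:00',
    'repeat'  => 3,
  },
) {
  include os_patching

  schedule { 'patch_window':
    * => $patch_window,
  }

  $available_updates = $facts['os_patching']['missing_update_kbs']

  if $whitelist {
    $updates = $available_updates.filter |$kb| { $kb in $whitelist }
  } elsif $blacklist {
    $updates = $available_updates.filter |$kb| { !($kb in $blacklist) }
  } else {
    $updates = $available_updates
  }

  if $facts['os_patching']['reboots']['reboot_required'] == true {
    reboot { 'patch_window_reboot':
      when     => 'pending',
      schedule => 'patch_window',
    }
    # A pending reboot must complete before any updates are installed
    Reboot['patch_window_reboot'] -> Windows_updates::Kb <| |>
  }

  # Install each approved update, tied to the patch window
  $updates.each |$kb| {
    windows_updates::kb { $kb:
      ensure   => 'present',
      schedule => 'patch_window',
    }
  }
}
```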
I've uploaded the above code to a GitHub Gist; you can grab it here.
Here’s how the new code was integrated into the class:
In the section `if $facts['os_patching']['reboots']['reboot_required'] == true` I added:
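Likely a chaining arrow onto a resource collector, along these lines:

```puppet
# Any pending reboot must complete before updates are installed
Reboot['patch_window_reboot'] -> Windows_updates::Kb <| |>
```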
...which will make the `Reboot['patch_window_reboot']` resource a requirement for all `windows_updates::kb` resources. This ensures that if there is a reboot already pending, the reboot will happen before any updates are installed.
At the very end of the class, I created an iteration loop:
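A sketch of that loop:

```puppet
$updates.each |$kb| {
  windows_updates::kb { $kb:
    ensure   => 'present',
    schedule => 'patch_window',
  }
}
```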
This creates a `windows_updates::kb` resource for each patch in the `$updates` array, and ties it to our desired patch window. The `windows_updates` module will suppress reboots while installing patches, so another Puppet run is needed afterwards to detect that a reboot is needed (via the `os_patching` fact) and then automatically reboot the node.
With this solution in place, I haven't had to put any more effort into updating my Windows servers; it now all just happens automatically!
- Check out the `os_patching` module on the Puppet Forge
- Dig into the `windows_updates` module on the Puppet Forge
- Catch up on Hiera
- Visit our Puppet + Windows use case page
- See more Windows modules on the Puppet Forge
Kevin Reeuwijk is a principal sales engineer at Puppet.