
Using Puppet with Kubernetes and OpenShift

OpenShift is Red Hat's container application platform. It allows developers to quickly develop, host, and scale applications in a cloud environment, providing an integrated set of tools for managing your container-based applications: everything from deployment, to container repositories, to access control, to built-in metrics and monitoring services. OpenShift is also available in an open source distribution called OpenShift Origin.

With the launch of Version 3, OpenShift is now built around Kubernetes, the cluster manager released by Google — not surprising, since Red Hat has been one of the main contributors to the open source Kubernetes project from its initial release. As regular readers of this blog will know, we recently released a module for managing Kubernetes resources (like Pods, Replication Controllers and Services) using Puppet. In this blog post, I'll look at how you can use that module to power your OpenShift-based PaaS.

Running OpenShift

There are several ways of getting an OpenShift cluster up and running, depending on your requirements. You can opt for one of the managed services from Red Hat (running in either the public cloud or your own data center), or run OpenShift Origin yourself. The OpenShift getting started documentation contains lots of helpful advice for administrators.

Assuming you don’t already have access to an installation of OpenShift v3, the fastest route I’ve found to trying it out is using the local Vagrant VM.
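At the time of writing, the flow looked roughly like the following. The repository URL and layout are assumptions on my part, so check the current OpenShift Origin documentation for the canonical instructions:

```shell
# Sketch only: clone the OpenShift Origin repository, which ships a
# Vagrantfile, then bring up the local VM.
git clone https://github.com/openshift/origin.git
cd origin
vagrant up
```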

Note that the Vagrant provisioning above will download the very latest version of OpenShift, and it will take a little time. You can check that it worked by hitting the console URL in your browser.

With OpenShift running, it’s useful to install the oc CLI tool locally so we can interact with it. You can download the relevant package for your operating system from the GitHub release page.

In order to use the CLI, you need to authenticate. For this we can use the oc login command mentioned in the vagrant up output.
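As a sketch, the login step looks like this; the server address below is hypothetical, so substitute the URL printed by vagrant up:

```shell
# Hypothetical cluster address; use the one from your vagrant up output
oc login https://10.2.2.2:8443
```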

This will prompt you for a username and password. For the purposes of this demo, we require an admin user, so use the username admin and the password admin. With all that set up, you should be able to use the oc tool to interact with your OpenShift cluster.
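For example, you should now be able to query the cluster:

```shell
# List the projects you have access to, then the pods in the current one
oc get projects
oc get pods
```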

If you're familiar with Kubernetes, you’ll probably recognize the last command as the output from kubectl. Note that oc acts as a proxy for kubectl, so the commands you expect to work — like get rc or delete pods — should all be present and correct.

Using Puppet with OpenShift

With OpenShift set up, let's look at using the Kubernetes module to create and manage an application. We’ll use the canonical guestbook Kubernetes example for this. For a more detailed look at the Puppet code for this example, you can see the detailed walkthrough we published earlier.

First install the Kubernetes Puppet module as per the instructions in the module's README. Once that’s done, remember to copy the configuration file generated by oc login above into the Puppet config directory. The exact directory will vary depending on your installation of Puppet, but you can find the correct directory with the following command:
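On most installations you can ask Puppet itself where its configuration directory lives:

```shell
# Print Puppet's configuration directory
puppet config print confdir
```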

You’ll most likely be running one of the following two commands to copy the configuration file into the right place.
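A sketch of those two variants, assuming oc login wrote its credentials to ~/.kube/config; the destination file name is an assumption here, so check the module's README for the name it expects:

```shell
# Open source Puppet, running as root:
sudo cp ~/.kube/config /etc/puppetlabs/puppet/kubernetes.conf

# Or, running Puppet as your own user:
cp ~/.kube/config ~/.puppetlabs/etc/puppet/kubernetes.conf
```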

With that out of the way, let’s download the Puppet code for the example.
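One way to grab it, assuming the examples still live alongside the module source on GitHub (the repository path is an assumption):

```shell
# Fetch the module source, which includes the example manifests
git clone https://github.com/garethr/garethr-kubernetes.git
cd garethr-kubernetes/examples
```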

Take a look at the guestbook.pp file. You should see the Puppet Kubernetes types used to describe the various pods, services and controllers which make up the guestbook application. With that in place, let’s use Puppet to run the examples. For the purposes of this demo we’ll use apply, but you could also use agent here to ensure that changes over time are managed.
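As an abridged sketch (the field values here are illustrative, not a copy of the real manifest), the resources declared in guestbook.pp look something like this:

```puppet
kubernetes_service { 'frontend':
  metadata => { namespace => 'default' },
  spec     => {
    ports    => [{'port' => 80, 'targetPort' => 80}],
    selector => {'app' => 'guestbook', 'tier' => 'frontend'},
  },
}

kubernetes_replication_controller { 'frontend':
  metadata => { namespace => 'default' },
  spec     => {
    replicas => 3,
    template => {
      metadata => { labels => {'app' => 'guestbook', 'tier' => 'frontend'} },
      spec     => {
        containers => [{
          name  => 'php-redis',
          image => 'gcr.io/google_samples/gb-frontend:v3',
        }],
      },
    },
  },
}
```

Running `puppet apply guestbook.pp` then creates these resources.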

According to the Puppet output, the various pods, services and controllers have been created. We can use the oc tool to take a closer look.
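For instance:

```shell
# Inspect the replication controllers, pods and services created by Puppet
oc get rc
oc get pods
oc get services
```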

Exposing the service using the OpenShift console

The guestbook is now running on OpenShift, but how do we access it externally? For that, let's take a look at the rather nice OpenShift console. If you're using the Vagrant setup above, the console should be available at the address printed in the vagrant up output. Log in again with the username admin and the password admin. Select the default project, and you should be presented with an overview showing the various resources.


Find the frontend service in the overview list. It should indicate that it is associated with the frontend controller and has three pods running. Look for the Create Route link to the right of the service name. Click on Create Route and follow the instructions. You'll see there are defaults set — leave all of these as they are, and hit Create. This should return you to the overview page.


The frontend service should now have a URL associated with it: http://frontend-default.apps. (xip.io is a magic domain name that provides a wildcard DNS for any IP address. This address is pointing back to your OpenShift cluster.)


Accessing that URL should load the guestbook application we launched using Puppet.


At the excellent KubeCon conference a few weeks ago in London, I spoke about the potential for higher level interfaces for Kubernetes. A big part of that potential comes from compatibility between different Kubernetes-based services, and from these platforms building higher-level interfaces without limiting access to lower-level ones. OpenShift is a great example of this in practice. As a result of that design, you can use Puppet to manage your Kubernetes resources running in OpenShift.

Gareth Rushgrove is a senior software engineer at Puppet.

Learn more