Published on 16 December 2015 by Gareth Rushgrove

In the first blog post about Puppet and Kubernetes, I introduced the new Puppet Kubernetes module and talked at a high level about configuration management and modern cluster-aware distributed systems. In this post, I will dive into more detail by way of an example.

The Hello World example for Kubernetes is the guestbook application. It creates a Redis master/slave setup and a load-balanced web application, all using Kubernetes replication controllers and services. So what better way of demonstrating the new Puppet Kubernetes module than with that example?

If you would like to try this out yourself, you’ll need a working Kubernetes cluster. The official documentation covers lots of ways of getting started; the simplest is to use the excellent Google Container Engine service.

Setting Up

The Kubernetes module uses the kubeclient library to communicate with the Kubernetes API. So first we’ll need to install that. This will vary a little depending on how you installed Puppet, but for the latest version of puppet-agent you should run:

/opt/puppetlabs/puppet/bin/gem install kubeclient --no-ri --no-rdoc

With that installed we can install the module itself:

puppet module install garethr-kubernetes

Finally you’ll need to provide Puppet with a working kubectl configuration file. This assumes you have a working Kubernetes setup from above.

cp ~/.kube/config ~/.puppetlabs/etc/puppet/kubernetes.conf

With that we can get to the Puppet code.

Describing Kubernetes Resources in Puppet

For the following, it is worth being familiar with the canonical guestbook example, and in particular the YAML files and use of kubectl to create the resources. Here's an example:

apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
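
With plain Kubernetes, files like this are fed to kubectl to create the resources; for example (the filename here is illustrative):

kubectl create -f redis-master-service.yaml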

In the following example, you’ll see the equivalent Puppet code. The first thing that should become apparent is that the Puppet code follows the exact same structure. This provides quite a low-level interface, but it should be immediately recognisable to anyone already familiar with Kubernetes. Puppet also provides several ways of building your own abstractions on top of these primitives.

First let’s create the redis-master replication controller. This will in turn create a single pod running Redis.

kubernetes_replication_controller { 'redis-master':
  ensure   => 'present',
  metadata => {
    'labels' => {'app' => 'redis', 'role' => 'master', 'tier' => 'backend'},
    'namespace' => 'default',
  },
  spec     => {
    'replicas' => 1,
    'template' => {
      'metadata' => {
        'labels' => {'app' => 'redis', 'role' => 'master', 'tier' => 'backend'},
      },
      'spec' => {
        'containers' => [
          {
            'image' => 'redis',
            'name' => 'master',
            'ports' => [
              {'containerPort' => 6379, 'protocol' => 'TCP'}
            ],
            'resources' => {
              'requests' => {
                'cpu' => '100m',
                'memory' => '100Mi',
              }
            }
          }
        ]
      }
    }
  }
}

We’ll also describe a matching service for redis-master, so that other resources can refer to the Redis master by name.

kubernetes_service { 'redis-master':
  ensure   => 'present',
  metadata => {
    'labels' => {'app' => 'redis', 'role' => 'master', 'tier' => 'backend'},
    'namespace' => 'default',
  },
  spec     => {
    'ports' => [
      {'port' => 6379, 'protocol' => 'TCP', 'targetPort' => 6379}
    ],
    'selector' => {
      'app' => 'redis',
      'role' => 'master',
      'tier' => 'backend',
    }
  }
}

Next up is a replication controller, with two replicas, for the Redis slaves. Again, this maps very closely to the YAML format.

kubernetes_replication_controller { 'redis-slave':
  ensure   => 'present',
  metadata => {
    'labels' => {'app' => 'redis', 'role' => 'slave', 'tier' => 'backend'},
    'namespace' => 'default',
  },
  spec     => {
    'replicas' => 2,
    'template' => {
      'metadata' => {
        'labels' => {'app' => 'redis', 'role' => 'slave', 'tier' => 'backend'}
      },
      'spec' => {
        'containers' => [
          {
            'env' => [{'name' => 'GET_HOSTS_FROM', 'value' => 'dns'}],
            'image' => 'gcr.io/google_samples/gb-redisslave:v1',
            'name' => 'slave',
            'ports' => [
              {'containerPort' => 6379, 'protocol' => 'TCP'}
            ],
            'resources' => {'requests' => {'cpu' => '100m', 'memory' => '100Mi'}},
          }
        ]
      }
    }
  }
}

And a matching service for referring to the resulting pods.

kubernetes_service { 'redis-slave':
  ensure   => 'present',
  metadata => {
    'labels' => {'app' => 'redis', 'role' => 'slave', 'tier' => 'backend'},
    'namespace' => 'default',
  },
  spec     => {
    'ports' => [
      {'port' => 6379, 'protocol' => 'TCP'}
    ],
    'selector' => {
      'app' => 'redis',
      'role' => 'slave',
      'tier' => 'backend',
    }
  }
}

Next we’ll describe the frontend controller, with three pods serving our web application.

kubernetes_replication_controller { 'frontend':
  ensure   => 'present',
  metadata => {
    'labels' => {'app' => 'guestbook', 'tier' => 'frontend'},
    'namespace' => 'default',
  },
  spec     => {
    'replicas' => 3,
    'template' => {
      'metadata' => {
        'labels' => {'app' => 'guestbook', 'tier' => 'frontend'}
      },
      'spec' => {
        'containers' => [
          {
            'env' => [{'name' => 'GET_HOSTS_FROM', 'value' => 'dns'}],
            'image' => 'gcr.io/google_samples/gb-frontend:v3',
            'name' => 'php-redis',
            'ports' => [
              {'containerPort' => 80, 'protocol' => 'TCP'}
            ],
            'resources' => {'requests' => {'cpu' => '100m', 'memory' => '100Mi'}},
          }
        ]
      }
    }
  }
}

And finally we’ll create a load-balanced service to front our web application pods.

kubernetes_service { 'frontend':
  ensure   => 'present',
  metadata => {
    'labels' => {'app' => 'guestbook', 'tier' => 'frontend'},
    'namespace' => 'default',
  },
  spec     => {
    'type' => 'LoadBalancer',
    'ports' => [
      {'port' => 80, 'protocol' => 'TCP'}
    ],
    'selector' => {
      'app' => 'guestbook',
      'tier' => 'frontend',
    }
  }
}

Running the Example

The above snippets map to the six steps of the official tutorial. With Puppet, we can create all of these resources from one manifest, using one command. The full source code for the above can be found in the examples directory of the module: https://github.com/garethr/garethr-kubernetes/blob/master/examples/guestbook.pp

Running the example locally using puppet apply looks like this:

$ puppet apply examples/guestbook.pp --test
Info: Loading facts
Info: Loading facts
Notice: Compiled catalog for pro.local in environment production in 0.33 seconds
Info: Applying configuration version '1448445589'
Info: Checking if frontend exists
Info: Creating kubernetes_service frontend
Notice: /Stage[main]/Main/Kubernetes_service[frontend]/ensure: created
Info: Checking if frontend exists
Info: Creating kubernetes_replication_controller frontend
Notice: /Stage[main]/Main/Kubernetes_replication_controller[frontend]/ensure: created
Info: Checking if redis-master exists
Info: Creating kubernetes_service redis-master
Notice: /Stage[main]/Main/Kubernetes_service[redis-master]/ensure: created
Info: Checking if redis-master exists
Info: Creating kubernetes_replication_controller redis-master
Notice: /Stage[main]/Main/Kubernetes_replication_controller[redis-master]/ensure: created
Info: Checking if redis-slave exists
Info: Creating kubernetes_service redis-slave
Notice: /Stage[main]/Main/Kubernetes_service[redis-slave]/ensure: created
Info: Checking if redis-slave exists
Info: Creating kubernetes_replication_controller redis-slave
Notice: /Stage[main]/Main/Kubernetes_replication_controller[redis-slave]/ensure: created
Notice: Finished catalog run in 2.61 seconds

This should create a fully working guestbook in about 60 seconds, once the images have downloaded and the pods have started up. You can use the kubectl command to find the IP address of the load-balanced service:

$ kubectl get services frontend
NAME           LABELS                                    SELECTOR            IP(S)            PORT(S)
frontend       name=frontend                             name=frontend       10.191.253.158   80/TCP
                                                                             104.197.92.229
$ kubectl describe services frontend | grep "LoadBalancer Ingress"
LoadBalancer Ingress: 104.197.92.229

Tidying Up

Puppet’s declarative model means we can do other interesting things, like ensuring that certain resources are no longer present. In this case, we can use a simple manifest to clean up the guestbook application. For a single resource, it looks like this:

kubernetes_replication_controller { 'redis-master':
  ensure => absent,
}
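
Extending that to the whole application just means declaring each of the six resources absent. Since Puppet accepts an array of titles, a minimal sketch (the module’s own delete manifest may differ) looks like this:

kubernetes_replication_controller { ['redis-master', 'redis-slave', 'frontend']:
  ensure => absent,
}

kubernetes_service { ['redis-master', 'redis-slave', 'frontend']:
  ensure => absent,
}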

The examples folder for the module has a full manifest for deleting the entire guestbook application. This can be used locally like so:

$ puppet apply examples/guestbook-delete.pp --test
Info: Loading facts
Info: Loading facts
Notice: Compiled catalog for pro.local in environment production in 0.27 seconds
Info: Applying configuration version '1448445744'
Info: Checking if frontend exists
Info: Deleting kubernetes_service frontend
Notice: /Stage[main]/Main/Kubernetes_service[frontend]/ensure: removed
Info: Checking if frontend exists
Info: Deleting kubernetes_replication_controller frontend
Notice: /Stage[main]/Main/Kubernetes_replication_controller[frontend]/ensure: removed
Info: Checking if redis-master exists
Info: Deleting kubernetes_service redis-master
Notice: /Stage[main]/Main/Kubernetes_service[redis-master]/ensure: removed
Info: Checking if redis-master exists
Info: Deleting kubernetes_replication_controller redis-master
Notice: /Stage[main]/Main/Kubernetes_replication_controller[redis-master]/ensure: removed
Info: Checking if redis-slave exists
Info: Deleting kubernetes_service redis-slave
Notice: /Stage[main]/Main/Kubernetes_service[redis-slave]/ensure: removed
Info: Checking if redis-slave exists
Info: Deleting kubernetes_replication_controller redis-slave
Notice: /Stage[main]/Main/Kubernetes_replication_controller[redis-slave]/ensure: removed
Notice: Finished catalog run in 2.56 seconds

Conclusions

The above really just demonstrates basic feature parity between the standard Kubernetes YAML format and the Puppet code. We’ve also looked only at Services and Replication Controllers, but the module also has some support for Secrets, Volumes, Quotas and more.
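
For example, a Secret could plausibly be managed in the same style. The snippet below is a hedged sketch: the kubernetes_secret type name and its properties are assumptions based on the naming pattern above, so check the module documentation before relying on it.

# A sketch only: assumes the module exposes a kubernetes_secret type
# following the same naming pattern as the types used above.
kubernetes_secret { 'redis-password':
  ensure   => 'present',
  metadata => {
    'namespace' => 'default',
  },
  # Kubernetes expects secret values to be base64-encoded;
  # 'cGFzc3dvcmQ=' is simply 'password' encoded.
  data     => {
    'password' => 'cGFzc3dvcmQ=',
  },
}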

The real value of Puppet for managing Kubernetes, however, is in creating abstractions and in management over time. For instance:

  • In the example manifests above, we can change values and simply re-apply the manifest (or have the Puppet agent do it for us); for instance, to change the number of replicas in the frontend controller.
  • Even without changing the manifest, we can rerun Puppet. In this case, Puppet won’t change anything, but it will tell us that everything is exactly as we declared it.
  • You’ll also note that the individual manifests have lots of repetition. Using a simple defined type (see the sketch after this list), we could remove that, and pass the important variables in as parameters to a guestbook type.
  • The Puppet language also has tools for validating code and writing unit tests, as well as for distributing and sharing reusable modules.
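
As a sketch of that last idea, here is a hypothetical guestbook::frontend defined type. It is not part of the module, and the name and parameters are illustrative; it simply wraps the frontend controller and service from above and exposes the replica count as a parameter:

# Hypothetical defined type, e.g. in guestbook/manifests/frontend.pp
define guestbook::frontend (
  $replicas  = 3,
  $namespace = 'default',
) {
  kubernetes_replication_controller { $name:
    ensure   => 'present',
    metadata => {
      'labels'    => {'app' => 'guestbook', 'tier' => 'frontend'},
      'namespace' => $namespace,
    },
    spec     => {
      'replicas' => $replicas,
      'template' => {
        'metadata' => {
          'labels' => {'app' => 'guestbook', 'tier' => 'frontend'},
        },
        'spec' => {
          'containers' => [
            {
              'env'   => [{'name' => 'GET_HOSTS_FROM', 'value' => 'dns'}],
              'image' => 'gcr.io/google_samples/gb-frontend:v3',
              'name'  => 'php-redis',
              'ports' => [{'containerPort' => 80, 'protocol' => 'TCP'}],
            }
          ]
        }
      }
    }
  }

  kubernetes_service { $name:
    ensure   => 'present',
    metadata => {
      'labels'    => {'app' => 'guestbook', 'tier' => 'frontend'},
      'namespace' => $namespace,
    },
    spec     => {
      'type'     => 'LoadBalancer',
      'ports'    => [{'port' => 80, 'protocol' => 'TCP'}],
      'selector' => {'app' => 'guestbook', 'tier' => 'frontend'},
    }
  }
}

# Scaling the frontend then becomes a one-line change:
guestbook::frontend { 'frontend':
  replicas => 5,
}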

Whether you’re new to Kubernetes or someone already in the ecosystem, do let us know what you think about the potential for using Puppet to manage your Kubernetes resources.

Gareth Rushgrove is a senior software engineer at Puppet Labs.
