Announcing Puppet support for Google Cloud Platform
Today we are thrilled to announce, alongside our friends at Google, a new project between Google and Puppet to make it easy to manage Google Cloud Platform (GCP) services with Puppet. To learn more, register for the webinar with me and Nelson Araujo from Google on 13 September 2017.
- Google Container Engine: install / docs | source
- Google Compute Engine: install / docs | source
- Google Cloud SQL: install / docs | source
- Google Cloud DNS: install / docs | source
- Google Cloud Storage: install / docs | source
There is also an additional helper module, available here, that provides a single unified authentication mechanism for all of the modules; it is a dependency of all the other published modules.
We are looking forward to releasing more modules over the next several months. These modules are dynamically built using a code-generation tool developed by Google to generate Puppet types and providers from API specifications. The modules released today allow GCP users to integrate management of their cloud resources into the same infrastructure-as-code workflows and practices they use when managing applications deployed to these resources.
Getting started with Puppet for GCP
The simplest and fastest way to get a feel for what the new modules can accomplish is by using Puppet’s standalone mode, or “puppet apply.” Here are the steps you should follow to set up some GCP infrastructure:
- Install the GCP modules from the Puppet Forge.
- Get a service account with privileges on the GCP resources you want to manage, and generate/download a Key ID.
- Ensure you have enabled the GCP APIs for the services you intend to use.
- Describe your GCP infrastructure in Puppet:
- Define a gauth_credential resource.
- Define your GCP resources.
- Apply your manifest.
Here are more details on each of these steps:
1. Install the GCP modules
All the new GCP modules are available on the Puppet Forge and GitHub, or you can use the Google-provided meta module to install them all at once. Since we are getting started using “puppet apply,” you won’t need administrative privileges once all the system prerequisites are installed.
There are two Ruby gems the modules need to function: “googleauth” and “google-api-client.” These can be installed into your Puppet installation using the “puppet resource” command, which provides a CLI-driven interface to Puppet’s resource management.
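Assuming the gems should live in Puppet’s own Ruby (the puppet_gem package provider), the installation looks like this:

```shell
puppet resource package googleauth ensure=present provider=puppet_gem
puppet resource package google-api-client ensure=present provider=puppet_gem
```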
Now, to install the actual modules, you can use the meta module by running:
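Assuming the meta module is published on the Forge under Google’s namespace as google-cloud, a single command pulls everything in:

```shell
puppet module install google-cloud
```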
Or you can install the five published modules individually:
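The module names below follow the google- prefix convention on the Forge; the google-gauth helper module is pulled in automatically as a dependency of each:

```shell
puppet module install google-gcompute
puppet module install google-gcontainer
puppet module install google-gsql
puppet module install google-gdns
puppet module install google-gstorage
```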
2. Get a service account and enable APIs
To enable a high level of flexibility and portability, and to remove the need to store your personal credentials somewhere, all the authentication and authorization to GCP services can be done through service account credentials. A service account comes with the ability to enable only the minimal number of permissions required to get the appropriate amount of work done, thereby limiting the risk associated with unauthorized action.
Go here to learn more about service accounts, and how to create and enable them. Then look at how to assign the appropriate roles to the account and create/download keys to be used for authentication to GCP by Puppet.
Also make sure you have enabled the APIs for each of the GCP services you intend to use.
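If you prefer the command line to the Cloud Console, the gcloud CLI can handle all of these prerequisites. The commands below are a sketch: the service account name, key path, role, and project ID are placeholders, and the exact subcommands may vary with your gcloud version.

```shell
# Create a service account (name and project are placeholders)
gcloud iam service-accounts create puppet --display-name "Puppet"

# Grant only the role it needs (here, Compute Engine admin)
gcloud projects add-iam-policy-binding my-gcp-project \
  --member serviceAccount:puppet@my-gcp-project.iam.gserviceaccount.com \
  --role roles/compute.admin

# Generate and download a key for Puppet to authenticate with
gcloud iam service-accounts keys create ~/engine-only.json \
  --iam-account puppet@my-gcp-project.iam.gserviceaccount.com

# Enable the API for each service you intend to manage
gcloud services enable compute.googleapis.com
```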
3. Describe your GCP infrastructure
For the purposes of this getting-started guide, all code examples from here on out belong in the same init.pp file.
3a. Define the authentication mechanism to GCP
This is the first resource you must define, and it directly leverages the service account you set up in the previous section.
In this example I downloaded the key file I created in the previous section to my home directory and renamed it “engine-only.json,” purely for my own reference, because I assigned this service account permissions for Google Compute Engine only. The resource title is the more important part, since resources defined later will reference that title to look up this resource and use the authentication information it defines. In case you were wondering, you can define multiple of these with different names and credential pairs, so that different resources or app teams can safely authenticate to the appropriate projects and services.
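A minimal credential definition, following the gauth module’s serviceaccount pattern, looks like the following. The title, key path, and scope here are my own choices; point the path at your downloaded key and pick the scopes your resources need:

```puppet
gauth_credential { 'mycred':
  provider => serviceaccount,
  path     => '/home/nelson/engine-only.json',
  scopes   => ['https://www.googleapis.com/auth/compute'],
}
```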
3b. Define your GCP cloud resources
At this point all the prerequisites are out of the way so we can get to defining some actual GCP infrastructure. All of the available resources can be viewed here in aggregate. We’ll go ahead and do something everyone is familiar with for the blog post today: launching a number of Ubuntu virtual machines upon some persistent disks. You can find the entire example in an easy-to-download Gist on GitHub.
Before defining the actual virtual machines, make Puppet aware of the default VPC, region, and zone you want to operate in by defining gcompute_network, gcompute_region, gcompute_machine_type, and gcompute_zone resources.
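A sketch of those reference resources follows. The project ID is a placeholder, 'mycred' is the credential title defined earlier, and the exact property names should be checked against each module’s documentation:

```puppet
gcompute_network { 'default':
  ensure     => present,
  project    => 'my-gcp-project',
  credential => 'mycred',
}

gcompute_region { 'us-west1':
  project    => 'my-gcp-project',
  credential => 'mycred',
}

gcompute_zone { 'us-west1-c':
  project    => 'my-gcp-project',
  credential => 'mycred',
}

gcompute_machine_type { 'n1-standard-1':
  zone       => 'us-west1-c',
  project    => 'my-gcp-project',
  credential => 'mycred',
}
```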
Now five virtual machines are just a gcompute_disk resource, a gcompute_instance resource, and a loop away.
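A sketch of that loop is below. The project, zone, and resource titles are placeholders, and the disk and network property names are approximations of the module’s API; consult the gcompute module documentation for the authoritative shapes:

```puppet
[1, 2, 3, 4, 5].each |$id| {
  # One 50GB persistent disk per instance, seeded from the Ubuntu LTS image
  gcompute_disk { "instance-disk-${id}":
    ensure       => present,
    size_gb      => 50,
    source_image =>
      'projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts',
    zone         => 'us-west1-c',
    project      => 'my-gcp-project',
    credential   => 'mycred',
  }

  # An instance booting from that disk, attached to the default VPC
  gcompute_instance { "instance-${id}":
    ensure             => present,
    machine_type       => 'n1-standard-1',
    disks              => [
      { source => "instance-disk-${id}", boot => true, auto_delete => true },
    ],
    network_interfaces => [
      { network => 'default' },
    ],
    zone               => 'us-west1-c',
    project            => 'my-gcp-project',
    credential         => 'mycred',
  }
}
```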
The two resources above show the creation of five 50GB disks in us-west1-c, derived from the Ubuntu LTS image; each disk is then attached to a corresponding instance that leverages the default VPC for connectivity.
4. Run Puppet to apply the manifest
To bring your defined infrastructure online, you simply need to tell Puppet the name of the file that contains the code from earlier. Puppet will enforce the state you’ve described and make sure the five instances and disks are brought online. This is accomplished by running “puppet apply” and will produce log output similar to the following:
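With init.pp in your current directory, that is a single command:

```shell
puppet apply init.pp
```

Puppet’s standard output includes a Notice line as each resource is created (for example, an ensure: created line per disk and instance) and a final line reporting how long the catalog took to apply; the exact text and timings will vary with your environment.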
If you wish to have your defined infrastructure continually enforced by Puppet, you can also add this code to a production Puppet Enterprise installation and have an agent validate, periodically or on demand, that all named instances have been created.
Now that you’ve given it a try, you’re ready to dive head first into integrating the management and definition of your cloud resources into your organization’s infrastructure-as-code practices. These modules are aimed squarely at reducing the friction associated with building fluid, portable infrastructure so that everyone can reap the rewards of migrating their applications to and across the cloud. We here at Puppet are looking forward to continuing to work with Google on this project and excited to see the continued progress as we march forward with broader GCP coverage. I personally am also looking forward to the imminent release of the technology Google developed that generates these modules from the GCP API specifications.
If you have questions about this effort, please visit the Puppet on GCP Discussions forum.