Puppet, Your Operating System Installer, And You

With all the recent discussion about the Cloud, including winning the Judges' Choice award at Under the Radar, the work around Puppet in Ubuntu's Cloud, and mCloud's announcement, you may be asking how Puppet fits into your deployment environments, and whether Puppet is more or less a Cloud technology. Puppet has the advantage of working in both cloud and non-cloud environments. In the basic sense, a cloud is just a large number of virtual machines, abstracted so that you no longer have to think about virtual machine placement or the differences between pieces of physical hardware. Puppet can be used to manage your datacenter whether you are running on a public cloud (like EC2), a private cloud (like Eucalyptus), or managing physical and virtual machines in traditional ways (such as with Cobbler).

There are two fundamentally different ways to look at OS deployment. The scripted approach revolves around technologies like Kickstart or Preseed. These techniques have the advantage of, like the Puppet language, being text based: easy to modify, cheap to exchange over the network, and easy to keep in source control. In a scripted deployment, you generally want to do as little configuration as possible in the main answer file, installing just the bare minimum OS and the Puppet packages. You can see how easy this is for Debian/Ubuntu and Fedora/Red Hat/CentOS systems here. This makes for an excellent option if you want a detailed audit history of what happens inside your operating systems. Scripted installs can also pull in the latest versions of packages at install time, which is great if you are concerned about security updates.

Puppet also works with OS imaging technology. Images may be easier to set up initially, but they don't offer the same version control abilities that a scripted system has.
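As a sketch of the minimal-answer-file idea, a Kickstart %post section might do little more than install the Puppet agent and point it at a master. The server hostname, package name, and file paths below are assumptions and will vary by distribution and Puppet version:

```shell
%post
# Install only the Puppet agent; everything else is managed by Puppet
yum -y install puppet

# Tell the agent where the puppetmaster lives (hostname is hypothetical)
cat >> /etc/puppet/puppet.conf <<'EOF'
[agent]
server = puppet.example.com
EOF

# Start the agent on boot so it checks in immediately after install
chkconfig puppet on
%end
```

A Preseed late_command on Debian/Ubuntu can accomplish the same thing with apt-get in place of yum.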
If you are doing image deployment, there are various ways you could use Puppet. In a very large heterogeneous network, you could use Puppet to build the images themselves and then clone them out to the nodes. An example of this is Red Hat's thincrust. While images offer speedy deployment, this practice doesn't take advantage of Puppet's ability to act on configurable variables, such as those obtained from Facter. The solution, then, is to use the image build for the core of the system, but keep the system Puppet-managed (in real time) after deployment. The OS image should contain a startup script that, upon boot, registers the node with the puppetmaster and performs the rest of the OS configuration via Puppet. This allows for the flexibility of non-image-based management within the context of a provisioning system that prefers to rely on images.

Puppet has the benefit of "just understanding computers", which means you can run it everywhere, regardless of your deployment choices. If you move from an internal cloud to EC2 tomorrow, Puppet can move with you. As the number of systems you manage grows, Puppet can easily automate in a cloud context the same actions you were performing locally. Isolating the deployment framework from the automation framework leaves your deployment options fully open, and the same content you develop for your internal deployments can be used tomorrow in the cloud. Through the use of node classification and variables, it's also easy to write content that behaves as desired in both contexts simultaneously.
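As a sketch of how node classification and variables let one manifest serve both contexts, you can branch on a Facter fact that is only present in the cloud. The class names here are hypothetical, and the specific fact available will depend on your Facter version and environment:

```puppet
node default {
  # On EC2, Facter exposes EC2-specific facts such as $ec2_instance_id;
  # its presence distinguishes cloud nodes from internal ones.
  if $ec2_instance_id {
    include base::cloud     # hypothetical class for EC2 instances
  } else {
    include base::internal  # hypothetical class for internal machines
  }
}
```

The rest of your content stays identical in both environments; only the classification logic knows where the node is running.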