Disclaimer: EMC is a proud member of the Puppet Supported Program. These are my thoughts and not necessarily those of my employer.
I work at EMC, which is a federation of well-known brands: VMware, RSA, Pivotal and EMC II. They all share the same goal: the Software-Defined Data Center. The SDDC has become a real buzzword over the last two years, with seemingly everyone in the IT industry embracing it: our competitors, our partners and our joint customers. I'd like to explain my take on it. I see cloud as the operational function of being agile with data center resources, and the SDDC as the technical implementation that makes sure you can actually deliver on the promises of that operational model.
To do this, we need tools that can define infrastructure, applications, and systems as layers of abstraction. Every element of the SDDC becomes a layer, and you can stack layers to do whatever you want: say, deploy a scalable application tied to a NoSQL database, automatically open a network flow between them and from the load balancer to the users, all on scalable converged infrastructure that is abstracted away from the deployers and the users. Doing that manually would take a lot of time, and time is not only money nowadays; time to deliver can make or break the development of new products and services.
In my line of work I see automated deployment and configuration management as the right tools to achieve this faster and with fewer errors, which is why many of my colleagues and I have been using Puppet to solve deployment and configuration issues both internally and externally.
Externally, our customers achieve competitive advantage by automating their deployments, because not only is it quicker to get a system up and running, it also maintains correct configurations across the lifespan of the application or system. Other benefits they see are faster time to market and greater satisfaction for their own customers (internal and external), who experience less downtime and fewer mistakes made during deployment. Making sure that the setup of a system stays consistent throughout its lifecycle is hard, and that's where Puppet Enterprise really shines: it automatically corrects anything that drifts from the desired configuration.
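As a minimal sketch of that self-correcting behavior (the NTP resources below are illustrative, not taken from any EMC product), a Puppet manifest declares the desired state, and the agent restores it on every run if someone changes things by hand:

```puppet
# Hypothetical example: keep an NTP service configured and running.
# If anyone edits the file or stops the service, the next Puppet run
# puts it back the way the manifest declares.
package { 'ntp':
  ensure => installed,
}

file { '/etc/ntp.conf':
  ensure  => file,
  owner   => 'root',
  group   => 'root',
  mode    => '0644',
  source  => 'puppet:///modules/ntp/ntp.conf',
  require => Package['ntp'],
}

service { 'ntpd':
  ensure    => running,
  enable    => true,
  subscribe => File['/etc/ntp.conf'],  # restart if the config changes
}
```

The point is that the manifest describes an end state rather than a sequence of steps, so enforcement is idempotent: running it again on a correctly configured node changes nothing.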
We see a huge demand globally to be able to use automation tools for deployments and configuration management of our solutions, and you’ll read more about one such solution in an upcoming blog post by my colleague Eoghan Kelleher.
This is not only for our external customers, though. We also use Puppet internally to enable us to do more innovative things in less time than ever before. In our labs we have bare-metal servers where we can deploy whatever we want, and with tools like Puppet we can deploy converged infrastructure on a bare-metal server because we can treat the pieces as layers, like any other application. This goes for operating systems, networking and storage, which combined become parts of something larger. With tools like Puppet, we can create and manage all the pieces we need to build out the new, modern software-defined data center infrastructure. And that's just the start.
Enabling your DevOps team to have an automated setup from bare-metal servers to fully deployed applications on top of a converged infrastructure is now not only possible, it's fairly easy. Looking at tools like Puppet Labs Razor for bare-metal deployments, networking configuration from members of the Puppet Labs Supported Program, storage deployments using Puppet modules for scalable storage solutions such as EMC ScaleIO, and application deployment from all the great modules over at the Puppet Labs Forge, it's getting easier and easier to manage larger parts of your SDDC.
Repeatable, executable documentation
One of the things I really find fascinating about tools like Puppet is that we can finally have what I and many others like to call repeatable, executable documentation. This is essentially what a Puppet manifest is. And it’s awesome.
With Puppet manifests we could standardize the deployment process not just for our customers, but also for us. If we were to ship out a rack to be configured full of services, and another customer buys a similar rack, we could, in principle, use the same manifest with only minor changes to deploy another full rack with configured services. This is extremely powerful not just for us and our delivery teams, but also for our customers who would be up and running faster than ever before.
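To make the idea concrete, the "same manifest with only minor changes" pattern usually means pushing the rack-specific details into class parameters. The class and parameter names below are hypothetical, a sketch of the approach rather than an actual EMC delivery manifest:

```puppet
# Hypothetical sketch: rack-specific details become class parameters,
# so the same manifest can deploy a similar rack for another customer
# with only the data changed.
class rack_services (
  String $customer   = 'example-corp',
  String $ntp_server = 'ntp.example.com',
) {
  file { '/etc/motd':
    ensure  => file,
    content => "Rack managed by Puppet for ${customer}\n",
  }
  # ... the storage, networking and application classes would be
  # included here, each parameterized the same way.
}

# Deploying the next customer's rack is the same code, different data:
class { 'rack_services':
  customer   => 'other-corp',
  ntp_server => 'ntp.other-corp.example.com',
}
```

The manifest itself then doubles as the deployment documentation: reading it tells you exactly what a delivered rack looks like, and running it makes that description true.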
Some might argue that if we automate everything, we're taking away the responsibilities of the administrators. I would say that admins can become more strategic than before. If they don't look at what can be automated, they risk making mistakes during deployment or later maintenance, and they may bury themselves in a time sink of configuration and troubleshooting. Managing manually costs time and money, which some IT organizations still have little of. If they look at what can be automated, automation can help them save time, be more efficient, and make the best use of the people already on staff and those they might now have the budget to hire, which in turn makes them even more valuable.
I see that automation is critical not only for the future of IT but also for current environments to be able to thrive and continue to innovate. You could say automation is vital for everyone to continue to be competitive. And I am happy to be a small part of the Puppet Supported Program.
- Sign up for our upcoming webinar with F5, Managing Load-Balanced Applications with F5 and Puppet
- Puppet Labs CIO Nigel Kersten talks about the Puppet Supported Partner program and how it will enable automation of the entire data center.
- Download the 2014 State of DevOps Report