Why Automation? Predictability, Consistency & the Confidence to Innovate
The following content appears in a slightly different version in the white paper, 9 (+1) Ways IT Automation Will Make Your Company More Successful (and Your Job More Interesting). Download the complete paper to learn more about the benefits of automation to the business, and to the technical people who work in it.
Every sysadmin does things a little differently; it's just human. When your team sets up servers manually, entirely from scratch, no two will be identical. Each one reflects the habits of whoever built it. In fact, you've probably noticed that you can often tell who set up a server just from certain characteristics. Even when one sysadmin sets up all the servers in a cluster, you are likely to find variations that person isn't even aware of.
To avoid exactly these variations, people provision new servers with shell scripts or gold master images. But subsequent manual changes will create configuration drift, and that drift will compound over time until it's manually remediated.
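The way configuration management tools counter that drift is to describe desired state and converge systems toward it, rather than replaying imperative setup steps. Here's a minimal sketch of that idea in Python; the `ensure_line` helper is hypothetical, for illustration only, and is not Puppet's actual API:

```python
# Sketch of desired-state convergence: declare the state a config file
# must have, and change it only when it has drifted from that state.
from pathlib import Path

def ensure_line(path: Path, line: str) -> bool:
    """Guarantee that `line` appears in `path`; return True if a change was made."""
    lines = path.read_text().splitlines() if path.exists() else []
    if line in lines:
        return False          # already converged: running again is a no-op
    lines.append(line)
    path.write_text("\n".join(lines) + "\n")
    return True

cfg = Path("sshd_config.sample")
print(ensure_line(cfg, "PermitRootLogin no"))  # first run: drift fixed -> True
print(ensure_line(cfg, "PermitRootLogin no"))  # second run: no-op -> False
```

Because the check is idempotent, you can run it on every server on a schedule: machines that already match the desired state are left alone, and machines that have drifted are pulled back.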
The inconsistency that results from scripting, master images and manual change makes it very difficult to troubleshoot operational issues. As you’ve probably observed for yourself, it’s common to have one server in a group of seemingly identical servers produce errors when the others are working just fine, due to variations you can't see at first glance. Inconsistency between systems also makes it hard for one admin to help another, or to back up someone who’s out sick or on vacation.
Inconsistent configuration can also make automation itself ineffective — even downright dangerous. If you are running enough (unintentionally) unique systems, you can’t predict what an automated change will actually do, so in some sense, when you apply automation scripts, you are flying blind.
When it comes to developing software, the variations caused by manual changes make it practically impossible to have testing and staging environments that truly match production. The resulting flawed development process wastes time, and the delivered code will have errors, because you couldn't identify them before deploying to production. People's time gets spent fixing bugs instead of developing the next feature, or tuning an existing feature based on user feedback.
Computers execute the same tasks the same way, every time. They don’t get inspired to do something in a more elegant or efficient way, and they don’t get bored and inattentive. So using automation makes it much easier to establish standard methods and protocols for doing IT work. You can rely with confidence on the outcome of your processes, because they’re predictable: When you update a license key, for example, you know it won’t cause a half-day outage in a system you rely on. And if an update does break something, you can roll back to the last known good state — at least, you can if you’re using a version control system in conjunction with your configuration management software.
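That rollback works because your configuration code lives in version control alongside its history. Here's a hedged sketch of the mechanism using Git and a throwaway repository; the file name and settings are made up for illustration:

```shell
# Sketch: configuration kept under version control can be rolled back
# to the last known good state. Uses a disposable repo for illustration.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com && git config user.name demo
echo "max_connections = 100" > app.conf
git add app.conf && git commit -qm "known good config"
echo "max_connections = 1" > app.conf        # a bad change slips in
git commit -aqm "bad tuning change"
git revert --no-edit HEAD                    # undo it as a new commit
cat app.conf                                 # prints: max_connections = 100
```

Reverting creates a new commit rather than rewriting history, so the record of the bad change (and its fix) stays in the log; on the next configuration run, every node converges back to the known good state.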
Consistency is extremely important when you're scaling your infrastructure. Scaling doesn't just add more machines, it also adds more complexity. Any inconsistencies in your environment multiply as that complexity grows, causing more problems and more work fixing those problems.
Consistent performance is something management teams value highly, and no wonder — it makes it possible to plan with confidence. When ops provides consistency to the organization's IT, you're providing the stable ground that allows new things to be created.
Aliza Earnshaw is managing editor at Puppet Labs.