When beginning a Puppet deployment, there are many decisions and trade-offs to make around code promotion, source code control, workflows, and resource modeling. These decisions will have a long-term impact on the viability of your Puppet deployment. In this session Michael will walk through some of those decisions along with how to begin your Puppet rollout.
Michael Stahnke: Welcome to PuppetConf. How many of you have been to a PuppetConf before? A couple of people, okay. I guess this is one of my favorite times of the year, because we get to talk about automation non-stop for two days straight, and that's one of my favorite topics. So this discussion is going to be largely about automation. What you are not going to see is a ton of source code up on slides. This is much more about thought-provoking points, I guess: the things I wish I would've known before I started trying to automate infrastructure. So this is actually how I do slides. You can give me a PowerPoint template all day long and it's just not going to happen. So that's just me. Welcome to PuppetConf. You can follow me on Twitter, if you care to. I say stupid things on the Internet, just like everyone else.
This is me. This is the bio that I use over and over and over again. I wrote a book called Pro OpenSSH in 2005. I encourage you to buy it and not read it; that way I get royalties and you don't get dumber. EPEL: I was one of the guys who founded the EPEL repository. If you are CentOS or RHEL users you probably know what EPEL is. There were seven of us, I think, in 2005 who started complaining about not having the right packages there, and I jumped in on that. So I am really, really passionate about packaging and software delivery and deployment, and that's really what got me into automation as well. And of course I work at Puppet Labs, and have for a couple of years.
Getting started with Puppet. What I want to talk about are patterns for a successful Puppet deployment. And I use patterns; I don't use the word best practice. Best implies I know all possible options and all of your variables, and I don't. Patterns are generally good ideas; sometimes they won't work for you. You may be in a situation where one of these patterns fails miserably. You may be in a position where somebody else declares something a best practice and it's the worst idea you have ever heard. So I talk about patterns in automation way more often than I talk about best practices. So why do we automate? This is really where it comes down to, where you start thinking about the problems. And everybody thinks speed right away for automation: the reason I automate is speed. And that's great; that's absolutely a reason you should be automating things.
When I was running a team at a very large company, consistency was the main reason I wanted to automate. I had SysAdmins that were swapping in and out of roles. I had new infrastructure people all the time. I wanted consistency in the infrastructure way more than I cared about how fast that infrastructure was deploying. So consistency to me was the most important thing. And the last thing is, if I can automate away the problems that I keep having, I can move on to the problems I want to be solving. The more fun problems: the problems that are either more business related, or just more fun technologically, or just interesting. Or I can get to the bar on time. Whatever it is that you want to do, that's why we automate.
So the first pattern is picking the right things to automate. And I know during the keynotes this morning, it was don't do it by hand if you can automate it. That's a great mantra. If you are starting from nothing, that also sounds like an impossible hill to climb. So pick the right things to start with.
Most people think, I want to dive in for high value, and this is where a lot of projects get funded in corporations. They think, I want to work on automating SAP end to end. And I am like, so do I, totally, so do I. But it's very difficult. So you think, I want DB clusters, I want my big ERP solutions, I want my middleware, and these are the absolute worst things to start with on your automation project. They are very complicated. They are very complex. They require crazy versions of libmotif to be installed just to run the installer.
So you want to graph everything on your cost and value chart. And this is, you know, a very advanced chart that I spent lots of time with our designer on. Cost: the cost is your time, and how complex it is. People can say OpEx is free, or headcount time is free. It's not. You have an hourly rate. You are costing your company money. You're costing yourself your time. That is not free. The complexity again: ERP systems, super, super complex. Lots of interdependencies. Lots of problems. So then value. Value comes in at "How often am I running into this issue?" Those are the first things I want to automate. If I hit them every day and they are a pain in my butt, I want them automated.
And then variability. Can I solve this one thing the same way every time? How many if statements do I need? How much flow control do I need around what I'm automating? If I don't need a lot, it's probably the perfect thing to start with. I will get into real examples in a bit, too.
So using my highly advanced graphing technique, I was able to plot where things land roughly in terms of cost versus value. In the upper left-hand corner, you have the highest cost and the lowest value. The bottom left is lowest cost, lowest value. Like automating the way my account is set up on every system: valuable to me, not valuable to a lot of other people. But it's pretty easy to do, so that's why it's in the bottom left-hand corner. In the upper right you see the very complicated things: middleware, databases, your ERP is somewhere over there. High value, but really high cost. So where you really want to start is definitely on the bottom half of the chart, bottom left or right. It really doesn't make a huge difference if you are just learning, but push the value toward the right more.
Things that are the same on all systems are a great place to start. Syslog is one of my absolute favorite places to start with automation, for two reasons. One, it's usually pretty simple to gather all your logs in one spot. And two, it's usually pretty easy to automate. And three -- I think I said I had two reasons, but I am going to come up with a third right now -- the configuration is the same everywhere, and when you have all the logs in one place you're actually a better Sys Admin. So already you are delivering value by automating one thing. I can look at the logs in one place. I can see problems in my environment.
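A starting point like that boils down to the classic package/file/service trio in Puppet. The sketch below is illustrative, not from the talk: the 'rsyslog' package name and the module file path are assumptions you would adjust for your platform.

```puppet
# Minimal "same everywhere" syslog management, as a sketch.
# 'rsyslog' and the source path are illustrative; adjust for your OS.
class syslog {
  package { 'rsyslog':
    ensure => installed,
  }

  # One config file everywhere, forwarding logs to a central collector.
  file { '/etc/rsyslog.conf':
    ensure  => file,
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
    source  => 'puppet:///modules/syslog/rsyslog.conf',
    require => Package['rsyslog'],
    notify  => Service['rsyslog'],
  }

  service { 'rsyslog':
    ensure => running,
    enable => true,
  }
}
```

The `require` and `notify` relationships mean the package lands before the config, and the service restarts whenever the config changes, which is most of what a fleet-wide syslog rollout needs.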
So low cost. The configuration items you know are the things you should start with for automation, and low variability, which I think I hit on. But sometimes things are different. Like if you have 85 syslog servers that are collecting, and hostnames that start with the letter Q route over here, and hostnames that start with the letter Z route over there, even if it's not the right thing to do. Don't start there. Pick something that's kind of the same everywhere.
So optimize for frequency and optimize for fleetwide impact. When I say fleet, I basically mean everything you are managing. If it's the same on every single endpoint, that’s where you should start with your automation.
So I usually go breadth: it's the same everywhere. Syslog, like I said, is one of my favorite places to start. Host keys, you know, so that when you SSH into the next box, it doesn't say, "Do you trust this key?" and everybody just types "yes" anyway. You can get all those on the systems. Managing accounts, especially if you have local accounts for all of your system administrators and maybe an external authentication source for everybody else that's not a sys administrator, or whatever.
You know, I also like accounts in LDAP. I am a huge LDAP fan. Boy, we can talk about that in the hallway anytime you want.
And then monitoring configuration is another thing, because most hosts have at least a set of core monitoring that is the same. I usually care about disk space monitoring. I usually care about "Can this thing talk to the network?" If those are happening and I can get those automated, I've already provided value by doing simple things lots of times.
Look, I learned how to use Keynote for one slide. Pick the right things to automate: that was the first pattern.
Pattern two: don't learn two things at once. This is basically saying don't try to automate something while you're learning it. This happens all the time. I talk to people every day that are trying to automate, and they are like, "Oh, I want to play with Riak. It's the coolest thing since sliced bread, I think." I am like, okay, right on, I mean, Riak is pretty cool. And so, "I need a Puppet module to automate my installation of Riak." And I am like, have you found a use case for Riak anywhere? Like, do you know what you are going to do with it? Do you know why you need it? Are you good at it? Can you tell if it's working? If you can't, it's probably not a good time to start with the automation side of it.
The other thing that I see people do all the freaking time is they try to automate their automation framework as the very first thing they do. This is dumb; don't do it. This has cost friends of mine years of their life. Because they go, like, Puppet sucks because it can't deploy itself and blah, blah, blah, it can't bootstrap itself. And I am like, yeah, it's kind of a hard problem. You know, I wrote part of the way Puppet Enterprise bootstraps itself, and it's a hard problem. Automating your automation framework with the automation framework that you are trying to deploy at the same time is not simple. And best-case scenario, all you have got now is an automation framework; you still haven't provided any value to anybody else using any services on your network. So really, if you take one thing away from this entire presentation, that's it. Do not automate your automation framework as your first project.
And the reason is, when things go wrong, you don't know why. Something goes wrong, and back to my Riak example: is it because I don't know what the hell I am doing with Riak? Or is it because I'm not good at Puppet, or I am not good at my automation tool? And the answer is, I don't know. I have two variables. I need one variable for the scientific method.
So, the way that I work with automation: I talked to several other people at Puppet Labs that have done successful deployments of Puppet and I said, you know, what are the things that you wish you had learned? And three-pass automation came up from Eric Shamow, and he is incredible. He is giving some talks on continuous delivery and stuff later this week; you should definitely check him out.
The first pass is actually figure it out. So with Riak: I like to play with Riak. I don't know how it works. I know I have to install some Erlang. I know I have to get some keys set up and some buckets made, or whatever. And I start studying, and I learn about that thing. I didn't mention Puppet once in there, and that was absolutely by design. I am trying to learn about the technology before I automate it.
The second pass is refine your setup. Okay, well, I can go through my shell history and see that I installed this package, and then I removed this one, and then I needed the devel headers for this other thing, and then I had to rm something and cd into this directory and build some stuff. And I got lost, oh, and then I decided I was going to check the Internet, Twitter, for a while, and I got sidetracked. So basically, refine the setup. Get it down to the steps that it actually takes to do it.
And now your third pass is really getting into declarative state. Once you have figured out the technology, you have figured out exactly the steps to get it going. Now start thinking about your automation solution. Because once you know it, it's actually very easy to automate. Learning and automating at the same time? Not ideal. So: we are going to pick the right things to automate, and we are not going to try to learn two things at once.
Once you have Puppet on your boxes, on your systems, you have a wealth of information available to you. And this is before you are automating really anything. You just have Puppet running. You have a Puppet agent out there. It's checking into a master. It's checking into a console.
Inventory service is one of the first things that can help you make good decisions about your infrastructure. Cool, it came up pretty nicely. So inventory service shows you just data from when Facter runs. Facter is a fact collection agent that runs on all your systems before a Puppet run. It tells you things about how much RAM you have, what your IP address is, what your MAC address is, what version of Puppet and Ruby you are on. Just basic, true information about your system. Facts, if you will.
And you can write custom ones on top of that, and we will talk about that in a minute. But just having this reporting in: I know a lot of Windows administrators that, once they got this stuff reporting in on their Windows systems, were able to see things that they had never seen before. And the same thing happens in the Linux realm all the time. I can see how many processors and cores I have in every box; just basic hardware information can help you make good decisions.
Here is another example with just some different stuff pulled out.
Custom facts. You can write custom facts fairly easily, and there are two ways to do this. Well, there are really more than two, but I will talk about two.
Ruby is the best way, and the one we talked about for years. Ruby is scary to some people. That's okay. A fact is actually one of the easiest little Ruby snippets you can write, and we have some good documentation on doing it. But I have an even easier way for you to get facts into that report, and that's a thing we have called facts.d, where you basically just drop in key-value pairs, like this variable equals this. So I can say, who do I contact about this box? Who is the technical approver for changes? Like, I say technical approver equals Michael Stahnke. And that would be a fact that shows up on Facter runs, and it will show up in this inventory service screen. And right away my NOC picks up something and they say, oh, there is something going on and I need to make a change. Who do I contact? Michael Stahnke is on there. Because it was a custom fact that somebody wrote.
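As a sketch of the facts.d idea: an external fact really is just a key=value line in a file. The directory Facter reads on a real agent varies by version and install (something like /etc/facter/facts.d, or /etc/puppetlabs/facter/facts.d in newer Puppet Enterprise); /tmp/facts.d stands in here so the sketch runs anywhere, and the value is the talk's own example.

```shell
# External facts: plain key=value pairs dropped into a facts.d directory.
# No Ruby needed. The real directory is version-dependent (check your
# install); /tmp/facts.d is only a stand-in for this demonstration.
mkdir -p /tmp/facts.d
cat > /tmp/facts.d/ownership.txt <<'EOF'
technical_approver=Michael Stahnke
EOF

# Facter picks files like this up on its next run and reports the
# key as a fact alongside the built-in ones.
cat /tmp/facts.d/ownership.txt
```
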
On this screen, the PE version, the PE major, minor, and patch versions, the OS major version, and the Postgres default version are all custom facts that are being pulled in, in this example. So you can't even tell what's custom and what's not. They're just first-class citizens.
Live management. Live management is kind of the graphical front end for MCollective and our message bus architecture in Puppet Enterprise. You can learn a lot about your infrastructure this way, and learning about your infrastructure will help you automate it more betterly, obviously, in better ways.
So this is just showing us that there are differences across our systems. I have a couple of systems selected on the left-hand side. The password for root is not the same on all the nodes, and it's highlighting that. Okay, so I have learned something. I like to have the same password everywhere, or I don't, depending on what my policies are. But now I know.

The RAL is one of the most powerful features of Puppet. Again, when I was talking to everybody at Puppet Labs who had run major Puppet deployments, asking what do you wish you had known early, this came up from every single person. RAL stands for "Resource Abstraction Layer." It is the thing that made Puppet interesting to me over CFEngine or ISconf or the other tools that were available in 2006.
I can say package, ensure present. I don't have to know if that was a yum call, if that was an apt-get call, if that was calling out to ports. It's abstracted. Resources are abstracted at the appropriate level, and the RAL does that. I think this is something we don't tout enough about Puppet, so I am going to do it. The RAL is awesome.
So puppet resource is the command-line way to interact with the RAL. We used to have a thing called ralsh, which was just the RAL shell, but it was merged into core Puppet. So it's puppet resource. Cool, it looks good as well.
So I can say puppet resource package elfutils, which is just an example I contrived for this presentation. Elfutils comes out and it gives me the version. I can copy and paste that code right into my Puppet manifests and it will work perfectly, and I like that. And almost all resources work that well; there are a couple that have some caveats.
Now, are there other parameters that I could be passing to package? Sure. I could say, I just want to make sure it's installed, I don't care what version it is. I could say, make sure it's never installed, and ensure it's absent. I mean, there are other things you can do. But you can learn about your system: if you just run puppet resource package and don't give it that last argument, it will output that for every single package on your system.
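Those variations look like this in a manifest. This is a sketch; the version string and the package names other than elfutils are made up for illustration, not copied from the talk's slides.

```puppet
# Pin an exact version: the string you would copy out of `puppet resource`.
# '0.152-1.el6' is a made-up example, not a real pin to reuse.
package { 'elfutils':
  ensure => '0.152-1.el6',
}

# Just make sure it's installed, any version.
package { 'zsh':
  ensure => installed,
}

# Make sure it's never installed.
package { 'telnet-server':
  ensure => absent,
}
```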
Another cool thing you can do: this is the same example with users. And as you can see, it gives us way more information about the user root than it did about a package. A package is pretty much, is it there or is it not. A user has a lot of attributes. My user is stahnke, I am present. I have a GECOS field: this is Michael Stahnke. I have a GID, I have groups, I have a home directory and password ageing and all that stuff. And so here I am stepping into automation, I really am not super familiar with Puppet, but I can ask Puppet: tell me what this object looks like. I can copy that into a manifest and now I can apply that manifest on a second system.
So now I at least have two systems that look the same. I have the same UID, the same GID, I am in the same groups. That's very powerful, and you can learn a lot about your system this way.
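What `puppet resource user stahnke` hands back, pasted into a manifest, looks roughly like this. Every attribute value below is invented for illustration; on a real box you would copy the values Puppet reports.

```puppet
# Roughly the shape of `puppet resource user stahnke` output, ready to
# paste into a manifest and apply on a second system.
# All values here are illustrative, not real.
user { 'stahnke':
  ensure  => present,
  comment => 'Michael Stahnke',
  uid     => '500',
  gid     => '500',
  groups  => ['wheel'],
  home    => '/home/stahnke',
  shell   => '/bin/bash',
}
```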
puppet describe is another thing that I actually just learned about while preparing for this talk. I had either already known it and forgotten about it completely, or I just learned about it; I don't know which one it was. But puppet describe is kind of amazing. I realize that looks pretty small. I typed puppet describe cron up there, and it was outputting everything about the way that Puppet interacts with cron. It tells me what it's doing and why it's doing it. It gives you example code, and it just keeps scrolling; there is no way I could have fit all this on one slide. But we do this for every resource type that's in Puppet. So you can say puppet describe user, puppet describe group, puppet describe your SELinux settings if you want to. It's just really, really nice. It gives you tons of information. I don't need to remember the URL, I don't need to search Google or find things. It's right inline there. I think it's great, and the examples obviously make it very, very usable.
No-op. So no-op was kind of an instruction we had in CPU architectures for a long time, but it was never really part of configuration management in the early designs. If you look at the early config patterns, they didn't have this: tell me what you would've done, without doing it. But Puppet does have this. So the way Puppet runs, you can say: hey, if I am out of compliance, if you said you want this package installed and it's not installed, don't actually install it. Just tell me, hey, this thing isn't installed and I would've installed it if you had told me to. That's no-op, and it is one of the most powerful features of Puppet as well. The RAL and no-op are both very, very exciting to me.
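A sketch of what a no-op run looks like with puppet apply; the package name is an arbitrary example and the notice text shown in the comment is approximate, since the exact wording varies by Puppet version.

```puppet
# noop_demo.pp -- run with:  puppet apply --noop noop_demo.pp
#
# With --noop, Puppet reports roughly:
#   Notice: /Package[htop]/ensure: current_value absent, should be present (noop)
# ...without actually installing anything. 'htop' is just an example.
package { 'htop':
  ensure => installed,
}
```

No-op can also be scoped to a single resource with the `noop => true` metaparameter, so one risky resource can stay report-only while the rest of the catalog enforces normally.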
The other thing is, if you're not super familiar with different operating systems, because of the way the RAL works with no-op you can learn about the operating system. I ran RHEL for years. I am a RHEL guy, I am a CentOS guy. Like I said, I did stuff with EPEL. I really, really know how RPM and yum work.
When I first took a job at Puppet Labs, we ran our infrastructure on Debian. Well, fuck. I don't know Debian very well. So what do I do? What I actually ended up doing, and I'm not lying: I would just run the Puppet commands with debug mode turned on and see what commands it was running, to figure out how to do things. Because I didn't remember how dpkg worked. I didn't remember how apt-get worked. It had been like seven years since I had touched a Debian box. Yeah, I think it was potato the last time I was playing with it, which is like, what, two releases in Debian years? You know, so right here you can see that it's actually telling me: I am running this dpkg-query command, I am running it with this show format. I just think that's super powerful. I still use this today if I have to manage anything on a Mac. I cannot figure out how to add users on the command line on a Mac, ever. So I run Puppet, I look to see what commands it outputs, and then, if I need to do other things with it, I can. You can learn about different operating systems by tracing the other direction through the RAL, and I think this is super awesome.
There is also some tooling available. Geppetto came in through our Cloudsmith acquisition. We had been talking to Cloudsmith for a long time about the way they saw developer tooling working with Puppet, and they were way ahead of anything we were going to come out with; obviously, we just bought them. Geppetto is an Eclipse-based IDE for writing Puppet modules, and it's really, really nice, in that it does syntax completion; you can mouse over something and it gives you the documentation. You can say I'm running Puppet 2.6 or 2.7 or 3.0 and it will validate syntax based on which version you are writing for; it's just a selectable parameter in there. It will autocomplete, and it will say, you know, you probably spelled this class name wrong in your require, because I don't see this class anywhere else in your module path. It is really, really helpful if you are just trying to get started.
You know, it's Eclipse-based; some people love that, some people don't like that so much, and that's okay. I think it's a very powerful tool. Some of the other interaction stuff that you can do with it, like running tests kind of inline with Eclipse, is pretty cool.
The other side of that is vim. I suppose you could use Emacs; I don't, so I am not going to talk about it. In vim, syntax highlighting has been there forever. We now have macros that will line up your arrows, so when you type something and hit space, it just lines up all the arrows. The stuff is neater to read, it's cleaner, it's easier to share with others.
So there is some good tooling here. We have IDEs. We have Live Management. We have the inventory service. We have custom facts. We have the RAL. We have no-op. So these are all tools that I'm using to learn about my infrastructure and learn about what's going on and I haven’t even automated anything yet, this is just learning.
And then you also have a community. Obviously we have a giant, giant, giant Puppet community: the largest channel on automation on freenode by far. I think we are the 10th largest channel overall on freenode, which is kind of crazy. But IRC, mailing lists, ask.puppetlabs.com, the Forge: there are a lot of places to get good information about this stuff. And we of course have puppetlabs.com/learn, where you can sign up for classes and stuff like that.
So we are going to pick the right things to automate based on cost and value. We are going to learn about what we want to automate before we try to automate it. And you can use some of the Puppet tooling to make you more productive in both your automation quest and in learning about your infrastructure.
This is the most controversial pattern, I will tell you that right now. Starting simple and staying simple is not simple. I work at Puppet Labs. It turns out Puppet Labs has some people that are really freaking good at Puppet. Sometimes being really good at Puppet means they get really complicated really quickly. You know, you talk about, well, I just want to install two web servers and have a load balancer in front of them, and I am thinking, you know, 4 to 5 modules maybe, a couple of hundred lines of automation code. They come back with this 10,000-line thing that has error handling up through everything.
And you know, it's also scalable out to 100,000 nodes, in case you want to do this n times, and it's just like: this is great, it's awesome, and it is really complicated to work on. It's really complicated for me. Everybody is guilty of this. You think you need to go up to the next level of complexity. Stay at the level you are at as long as you can. It will be easier for you to learn. It will be easier for you to teach your coworkers. It will be easier for you to get the simple things done. And getting lots of simple things done is extremely valuable versus getting a few complicated things done, depending on how often they are happening.
So the other thing to think about when you're automating and you want to stay simple: if what you are automating is a bad idea to start with, you just get crap, but faster and on a more regular cadence. So that's really not what you're looking for. Think about what you are automating and why you are automating it, and deliver on a good process. And the main thing it really boiled down to in my discussions with the guys at Puppet Labs was small, single-purpose modules. A lot of people start out with a monolithic module: I am going to have the WebSphere module, and it's going to deploy WebSphere and Java, and it's going to get the IBM SDK and the motif crap into /opt, and all this stuff going on, right?
That's a horrible idea. It will not work, and the reason is the next time you need something else that requires Java, you are going to have a conflict. Well, I needed Java: that should have been abstracted, that should have been a separate module. Or I needed a web server, and that should have been separate. Or I needed user accounts that look like A, B and C, and that should have been separate. Having lots and lots of small modules is a really, really good pattern.
Modules should do one thing. You learn "do one thing and do it well" in, like, your first week as a UNIX administrator. Modules are the same way.
My most popular module on the Forge, and I don't have a ton of them; I do a lot with the Puppet Labs modules, but stahnma has a few. It's an EPEL module, it's called just stahnma/epel. All it does is set up the EPEL repository, that's it. It is included as a dependency in hundreds of other modules. Why? Not because it's amazing, but because it does one thing. One thing exactly, and it does it well. And that's kind of the reuse pattern. It can be reused easily, because it doesn't make assumptions of, well, when you wanted EPEL, you actually also wanted RPM Forge and you wanted to install MP3 codecs and you wanted all of those. No, I didn't, actually. I just wanted my single package repository.
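The heart of a do-one-thing repo module like that is tiny. This sketch uses Puppet's built-in yumrepo type; the mirrorlist URL is illustrative, and the real stahnma/epel module also handles GPG key import and per-OS-version logic that is omitted here.

```puppet
# Sketch of a single-purpose repo module: declare the repository, nothing
# else. The mirrorlist URL is illustrative; the real stahnma/epel module
# also imports the GPG key and handles OS version differences.
class epel {
  yumrepo { 'epel':
    descr      => 'Extra Packages for Enterprise Linux',
    mirrorlist => 'https://mirrors.fedoraproject.org/mirrorlist?repo=epel-$releasever&arch=$basearch',
    enabled    => 1,
    gpgcheck   => 1,
  }
}
```

Because it declares nothing but the repository, any other module can safely `include epel` as a dependency without dragging in packages it didn't ask for.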
So take time measurements to figure out what you want to automate. What I did with my team: literally once a month I would go through a report of every single incident that came into our help desk. And I would say, what do we have? How many of them do we have? And I would find, okay, we got 600 tickets. We resolved 580 of them. 240 of them were setting up printers, largely on AIX boxes around the world, for printing contracts.
Okay. What if I didn't have to do that anymore? How much time does that take? We figured it all out: okay, it's about three-fourths of a person, in terms of time. It's not a ton of time, but it's not none. And it's happening 240 times. How many of those happen more than once because we messed it up? Oh, about another 40. Okay, so it's really happening 280 times. So now we are up to almost a full person. If I can save a full person through automation, that's pretty damn good. So we took some time measurements. We would spend a month on it. I would be like, this month we are automating print queue setups on AIX; that was all I worked on. And at the end of that month, I had saved a person. And that's how I selected what I wanted to automate. The other way you can do this is: what are the first things you do whenever a system comes online?
And this is the first thing that I do whenever a system comes online. I actually did this the other night. I was just sitting there at home, I had spun up a VM, and I was like, what do I actually do? I did it all manually and I wrote down the steps. There were a lot, and a lot of them are really, really simple. Installing vim with color and syntax highlighting: I literally do it every time I spin up a box. It's three lines in Puppet; actually, it's one line if you want to do curly braces. But it's really simple to do. And so, what are all these other things that I do? Those are great places to start with automation.
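The three-lines-or-one vim example looks like this as a sketch. The package name is an assumption that varies by distro: 'vim-enhanced' on RHEL-family systems, plain 'vim' on Debian-family.

```puppet
# Three lines...
package { 'vim-enhanced':
  ensure => installed,
}

# ...or one, with the curly braces on a single line.
# ('vim-enhanced' is the RHEL-family name; Debian-family uses 'vim'.)
package { 'vim': ensure => installed }
```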
At least make it so your SysAdmins have a better life before you try making everybody else have a better life. If you aren't getting value out of your automation suite and out of Puppet, your customers aren't either. If you don't believe in it, if it's not helping you, they won't believe in it and it won't help them.
So then it was delivering units of value. I have done the first 10, 20 things that happen when I build up a system, I have automated those, I really believe in Puppet. How do I get my management team to believe in Puppet and Puppet Labs and all the stuff they're doing?
And the first thing I did was contact my database people. So: start simple, stay simple, now push the value. Iteration. This is kind of a misnamed pattern, but I couldn't think of a better name, because, you know, there are two hard problems in computer science: naming things, cache invalidation, and off-by-one errors. Iteration: get better, start over, work on it again. I have a module, okay, it only handles CentOS; now I need it to handle Fedora as well; now I need it to handle Debian. Add it, iterate on it, version control it.
This chart was something that I wrote for an article two, two and a half years ago, and I still like it a lot. And it's not really because I want to pimp my ideas; it's because I still think it's how people should approach configuration management. It just starts very simply, and these slides will be available, so if you can't read it all I am not going to read it to you. But basically it says you want to go through a journey in how you do configuration management. At the bottom you have "my systems are handcrafted by artisans," and I live in Portland, so we are really into free-range bits.
Then you move on to centralized, where all my configurations are centralized in some way, but they are labeled with, you know, .bak or .mike or .date. I don't have them in version control; they are just kind of sitting in a pile.
Level 3, you are normalized, and you get into version control. Congratulations, you are now sane.
Level 4, you start templating, you start extracting data, you start moving toward reusability, but it's really about my-site reusability. I can share my work with my co-workers and they are cool with it and they get it. We can speak the same language, we can talk together and both accomplish goals.
And then level 5 is all about reusability. I have abstracted my data out of my modules, I have abstracted things using Hiera or other data injection technologies. Now I can share that module on the Forge, it's usable for other people, they can get value out of the work I have done. There wasn't any secret sauce in there.
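[Editor's note: a minimal sketch of that level-5 separation, with hypothetical class and key names. The module declares parameters with defaults, and site-specific values live in Hiera data rather than in the code, so the module itself can be shared.]

```puppet
# Module code: no site-specific data baked in.
class ntp (
  String        $package = 'ntp',
  Array[String] $servers = ['0.pool.ntp.org', '1.pool.ntp.org'],
) {
  package { $package:
    ensure => installed,
  }

  # The template renders whatever server list the site supplies.
  file { '/etc/ntp.conf':
    ensure  => file,
    content => epp('ntp/ntp.conf.epp', { 'servers' => $servers }),
    require => Package[$package],
  }
}

# Site data lives in Hiera, e.g. in hieradata/common.yaml:
#
#   ntp::servers:
#     - 'time0.example.com'
#     - 'time1.example.com'
#
# Puppet's automatic parameter lookup binds ntp::servers to $servers,
# so the module on the Forge never contains your secret sauce.
```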
So that's kind of the journey that you want to iterate on. And I have seen organizations that were all the way at five go back to one because they couldn't figure out how to automate something.
And I have seen people go from 1 to 5 very quickly; most people get stuck between 3 and 4. And it turns out, you don't need to be a 5, you don't need to be at a place where you can share every piece of infrastructure automation with another company. There are certain cases where that's extremely valuable, and there are certain cases where it's really stupid. So again, it's patterns and not best practices.
Learn from developers, also in the iteration paradigm. Take what you've learned watching developers work and apply it to Puppet. That's versioning your code; like I said, at level 3 you reach sanity.
If you are not versioning your Puppet code in some way, please do, unless you are the only person who works on it. If you are the only person who works on it, I really don't care if you version it. If you work with even one other person, you should be versioning your code.
Have a lab. I learned Puppet in a lab environment that I blew up literally every day for a month. Either I would blow it up because I wrote really, really bad Puppet code, or at night, at midnight, it just reset itself. Because I decided that I should be able to rebuild anything I am doing using Puppet.
It turns out once you start, you know, just repaving your machines every day, you figure out what you're doing. It was a little painful, but it really helped. And people say, well, how did you have time to do that when you are fighting all these fires, when you are working on incident tickets? And I was, you know, working 16-hour days, which isn't awesome. That's what I was doing.
But then you have to improve your workflows with that. You know, you are working with a lab, you are working with partners, you are working with other teams, like I was just about to tell a story about this database team.
I was trying to sell Puppet to my manager and it was like, this is awesome. You need to give us more time to work on this. It will save us time in the long run. This is an investment rather than a cost. And what finally tipped the scale was when I added a checkbox on our server provisioning application that just installed the Oracle client. You know, I am not a huge fan of having to install Oracle clients. It turns out our DBAs weren't either. They would much rather spend time on RAC and, you know, making sure that we are tuning properly and all of this. So I added a button, and all it did was eventually include an Oracle class. It got configured right away.
And then the database thing happened, and he kind of was like, what else can you do? Suddenly we had budget. So that's a great way to solve problems.
And of course refactoring. That Oracle module went through at least four major rewrites. The first time we were just going to install the package; the next time we were going to install the package and, you know, configure TNS names and get all those things right. And the third time it was, you know, we are going to give you options about which databases you want to be able to talk to and which ones you don't, and things like that.
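[Editor's note: to make those rewrites concrete, here is a rough, entirely hypothetical sketch of two of the iterations. None of these class, package, or template names come from the talk; the point is just the shape of the progression from "install the package" to "expose options."]

```puppet
# Iteration 1: just get the client installed.
class oracle::client {
  package { 'oracle-instantclient':
    ensure => installed,
  }
}

# A later iteration: same package, but now callers choose which
# databases the client is configured to talk to.
class oracle::client_v3 (
  Array[String] $allowed_databases = [],
) {
  package { 'oracle-instantclient':
    ensure => installed,
  }

  # Render the client's name-resolution config from the caller's list.
  file { '/etc/oracle/tnsnames.ora':
    ensure  => file,
    content => epp('oracle/tnsnames.ora.epp',
      { 'databases' => $allowed_databases }),
    require => Package['oracle-instantclient'],
  }
}
```

Each rewrite keeps the old behavior as the default while adding the new knobs, which is what lets a module survive four major versions.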
One of the other things you will hear from Puppet Labs employees especially is that execs are horrible. Exec is a resource type in Puppet that basically says: run this command, or run this script. The reason it's horrible, or labeled horrible, is that it breaks the model. I can't tell what's happening once you run an exec. If you exec out to a Perl script that, you know, sets up something, all I can tell is what the exit code was on that script, if it was even written with good exit codes. If the script just exits zero, it reports success every time it runs, no matter what.
And so I can't tell what happened, and I can't really use no-op on it very well; all it can say is, I would have run the script. But I don't know what the result would have been, because I just would have run it. So exec is a little weird, and you can move over to a defined type, which is kind of a collection of things going on. It might be: well, I am really setting up a file, I am setting up permissions, I am setting up a user, and you can do all those things. That's a defined type, and that's a little better, because the model can handle that a little more. And then you get to fully modeled. And so this is a journey.
A lot of people will say don't ever start with exec. You know what, absolutely start with exec. I don't have a problem with it, if it's delivering value, if it's automating. You are getting consistent results, you are getting them faster, and you're having time to work on more important problems. Exec is solving problems.
This is where I say it's a pattern and other people would say it's the worst practice. I am okay with that. You can make your own choices on this. There are trade-offs: exec is against the model, but exec does get things done. Sometimes getting things done is way more important than being philosophically pure.
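[Editor's note: a small illustration of that trade-off, with made-up script and user names. The exec version gets the job done but is opaque to Puppet; the modeled version describes desired state, so no-op runs and reports are meaningful.]

```puppet
# Stage 1: an exec. It works, but Puppet only sees an exit code.
# 'creates' at least makes it idempotent: the script is skipped
# once the marker file exists.
exec { 'setup-app-user':
  command => '/usr/local/bin/setup_app_user.sh',
  creates => '/home/appuser/.configured',
}

# Later stage: fully modeled. Puppet now knows the desired state of
# each piece, can report exactly what differs, and can show in no-op
# mode precisely which changes it would make.
user { 'appuser':
  ensure     => present,
  home       => '/home/appuser',
  managehome => true,
}

file { '/home/appuser/.bashrc':
  ensure => file,
  owner  => 'appuser',
  mode   => '0644',
}
```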
So back to this. You have configured. You're automating. You are iterating. You are trying to work your way up the journey to being a fully modeled, fully reusable infrastructure: I could walk into anywhere and manage it using these reusable building blocks. It's cool, you're working your way up.
And those are the kinds of thought processes you really want to have when you are automating. You want to be better; if you're not trying to continuously improve both your automation and the skill level of your team, you're not going to succeed. My favorite thing about Puppet has nothing to do with technology. My absolute favorite thing about Puppet was in 2007, when I was starting to play with this: everybody I talked to about it, I learned something from, every single person on the mailing lists.
I learned about new ways to do monitoring, new ways to do backups, new ways to do, you know, offline and hot database backups, it was just crazy. Sometimes we didn't talk about Puppet for weeks, but I learned from the best sysadmins in the world at the time. And I still feel like that's what Puppet is, it's a collection of amazing system administrators. Get help from the community, ask questions, learn things; that is my favorite part of Puppet and Puppet Labs. It will make you better at automating, which will make you better at your job.
So before you get started with Puppet, or as you are getting started with Puppet, what are the right things to do? Pick the right things to automate. Don't learn two things at the same time; emphasis on don't try to automate your automation tool as your first project. Use the Puppet tooling to make you better. Start simple and stay simple for as long as possible, and then iterate. And in iteration, again, I don't love the name on that bullet, learn from developers: take their practices, take their version control ideas, take their deployment ideas, take their tooling, take their IDEs. They have good ideas. They have been doing this for a very long time, and infrastructure people can do it as well.
These are just pictures of cats, because every presentation needs them. So I am done talking about what I was going to talk about. If you have questions, there is a microphone in the center aisle. I am happy to answer nearly any question. And I am also available in the hallways; I will talk about Puppet and automation literally in my sleep, ask my wife. But I'm so happy to be here, I am so happy you guys are at PuppetConf. I hope you learned a little bit about thought process and approach to Puppet.
There are several other talks going on about what to do in terms of getting started with syntax, and how to build great modules and reusable modules. And I think Ryan Coleman is giving one later this afternoon on a lot of Forge content. His will be great. So, you know, I really, really, really hope you enjoy the conference. But I'm willing to take any questions you guys have as well. Can you step up to the mic just so the recording can hear you? It's not that I don't think you can yell, I do.
Audience: What's the difference between the audit switch versus the No-op? Is there a big difference?
Michael Stahnke: Yes, it's an interesting question. There is a switch in Puppet called audit, and basically what it does is turn the state of a resource into a report. It will not tell you if it would have changed it. So it will tell you, this file exists and these are its permissions, like if you are auditing a file. That's all it will tell you, and that will come in the report every time.
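[Editor's note: for reference, audit is set per resource as a metaparameter naming the attributes to record. A minimal sketch, with a made-up path:]

```puppet
# Record the current state of these attributes in every report,
# without ever changing them.
file { '/etc/sudoers':
  audit => ['mode', 'owner', 'content'],
}
```

Each run, the agent records the audited values and reports when they change from the previously recorded state, but it never enforces anything.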
With no-op, it could be: I want this file at 644, and it's actually sitting at 777. And no-op would be like, I would have made this change, because it's not what you told me to have it at. So those are kind of the differences. There is more under the hood in terms of differences, but that's kind of the high level: one will tell you exactly what's going on, and one will tell you, this is what's going on and it's wrong. Any other questions? All right, if not, I hope you guys have a spectacular time at PuppetConf. Thank you so much for being here.