Sometimes it can feel like the world of information technology is just moving too fast. No sooner do you get a handle on one kind of technology than you begin to hear of another, better, faster, more powerful technology. You start to feel that if you don't get up to speed with it right now, you'll be left in the dust.
So how do you adopt new technologies: cluster schedulers such as Kubernetes and Mesos, smaller operating systems such as CoreOS, Atomic or Photon, and containers? After all, you aren't going to move entirely to a new technology; you'll still have legacy systems to manage.
The Puppet Podcast comes to the rescue with an episode about Project Blueshift, Puppet's name for projects around newer IT technologies. In this podcast, Carl Caum, Gareth Rushgrove and Kara Sowles discuss the why of new technologies, and how Puppet helps you adopt them and keep them predictable, secure and safe, using the same capabilities Puppet is known for, including abstraction, modeling and comprehensive visibility.
People adopt new technologies for new capabilities, speed and scale. Containers, for example, offer consistency from one environment to the next, helping to overcome the dreaded "well it worked on my laptop" when you deploy to production.
That's great, but for large companies with real risks — for example, a powerful need to guard transactions, customer credit card numbers, and other sensitive information — there's a real need to understand what's inside each container. Because, as Gareth says in this podcast, "ultimately, anything inside of it could be the reason why they're compromised, or the reason why something goes wrong. They know that's the problem they've been solving with configuration management tools, and they value that insight, that situational awareness of, 'I know everything about my infrastructure.'"
There's another value that Puppet brings to working with technology that raises the level of abstraction, like Kubernetes. Puppet does more than configure your infrastructure: It actually models your infrastructure. As Gareth points out, there are known good practices for refactoring Puppet code, "which is really refactoring your models like the way you think of the world." That comes in very useful when you're adopting new technologies: Puppet lets you evolve your model for your technology, and allows your model to include the succeeding generations of technologies you adopt.
This podcast has a lot more interesting ideas to offer around adopting new technology. Go ahead and enjoy it (and our other podcasts) on iTunes. And let us know what you think!
Aliza Earnshaw is the editorial director at Puppet.
Transcript of podcast
Kara Sowles: Hey, everyone. Welcome back to the Puppet Podcast. We're pretty excited to be talking about Project Blueshift today. I'm Kara. I'm from the community department here at Puppet, and I'm joined by. . .
Carl Caum: I am Carl Caum, technical marketing manager at Puppet.
Gareth Rushgrove: I'm Gareth Rushgrove. I'm one of the senior software engineers here at Puppet.
Kara Sowles: Let's just jump right into it, because I know not everyone has heard of Project Blueshift. Gareth, what is it?
Gareth Rushgrove: Project Blueshift is basically a banner under which we're taking a lot of the work we're doing with future-facing infrastructure -- things like containers, cluster managers, new-style operating systems -- and just grouping it together so we can talk about it coherently.
I think some people come across some of these individual bits, but actually you often end up using many of those things. And knowing that Puppet supports all of them is really useful.
Kara Sowles: So, is Project Blueshift a project in itself, or do people download it?
Gareth Rushgrove: No, it's not something that we're shipping. It really is just that banner. Where you see us talking about Project Blueshift, you know you're going to find content, both in terms of Puppet content, code, new tools, and also discussions like this about the future of infrastructure.
Carl Caum: It's more about the function than it is about anything else. We're in the world of IT, where things are changing so rapidly that we need this practice of constantly seeing what's coming over the horizon, and being able to understand it, and then give our customers some path to it. And that all falls under Project Blueshift.
Kara Sowles: Great. You're talking about future infrastructure. Is Project Blueshift focused on containers, or is it more than that?
Gareth Rushgrove: It's definitely more than that, and I think that's where having the concept of "Project Blueshift" helps, rather than just simply saying, "Oh, Puppet is doing something with containers." Well, of course we're doing something with containers, in the same way we've done things with previous generations of technology. We're doing things with Kubernetes. We're doing things with Mesos.
But we'll be doing things with what comes next as well. Maybe that's serverless things or AI [artificial intelligence], but it doesn't feel like that's where it ends. We're in a place where innovation in technology is happening faster and faster, and I don't see that ending because containers fix all the problems or serverless fixes all the problems. The future of infrastructure looks a lot more like change than it does coming to some sort of plateau.
Kara Sowles: We're never going to solve all the problems?
Gareth Rushgrove: We'll give it a try! [Laughter]
Carl Caum: I think if you look at the new world, the containers and container orchestrators and things like that are really enabling faster change in and of themselves. So change becomes the constant, like Gareth is saying. And who knows what that's going to be. But going back to this idea of having that function -- constantly building that into our DNA, the way we think about working and the way we think about how other people work -- is really going to help us stay on top of that change, and really incorporate it into the way we think about products and projects.
Gareth Rushgrove: I'd say containers are an implementation detail, not an end state.
Carl Caum: Yeah, definitely.
Kara Sowles: You were talking a lot about where things are headed. If folks are interested in what you're doing right now under that banner of Project Blueshift, what kind of stuff are you doing?
Gareth Rushgrove: So some of the work we're doing is inside Puppet, and we have teams working on features that are related to containers, cluster managers, and other bits and pieces. But one of the things as well is we want to highlight the work that's being done by the community.
A great example of that is Mesos. Puppet actually has really good support for Mesos, a popular open-source project that's been around for a number of years. And the Puppet community has been using it for a long time. Even though Puppet itself hasn't done a huge amount of work there, the community has provided incredible content. There are third-party modules on the Forge that will get you up and running with Mesos much more easily than anything else, really.
But internally, and more recently, we've been doing work around packaging Puppet software up as Docker images on Docker Hub. And we were a launch partner for the Docker Store, so there's sort of a tangent there.
We've been releasing modules that interact with cluster schedulers. So the Kubernetes module, for example, is auto-generated from the Kubernetes API. And it allows you to control Kubernetes resources -- replication controllers, deployments -- all from within the Puppet language.
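(For readers following along, here's a minimal sketch of what driving a Kubernetes resource from the Puppet language looks like with that module. The resource name, metadata and image below are illustrative, and the module's interface may have evolved since this episode, so check its documentation for exact parameters.)

```puppet
# Declare a Kubernetes pod as a Puppet resource, via the
# auto-generated Kubernetes module. The pod name, namespace and
# container image here are illustrative examples.
kubernetes_pod { 'sample-pod':
  ensure   => present,
  metadata => {
    namespace => 'default',
  },
  spec     => {
    containers => [{
      name  => 'sample',
      image => 'nginx:stable',
    }],
  },
}
```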
And just at PuppetConf as well, we released tools for building images from Puppet code. So you can go straight from your existing Puppet code that you already have -- that already describes your services, your applications -- and run a single command, and you have an image. So building tooling around both where the Puppet ecosystem touches these new worlds. . .
Kara Sowles: That's fantastic. Could you pick one use case that you see, and we can dive into it a little bit more?
Gareth Rushgrove: I guess the one that comes to mind is probably the image build work, which we just shipped at PuppetConf. It's definitely in my head at the moment.
Carl Caum: One of the things I really love about these tools and the work that's going on here is it highlights the real nature of configuration management. For so long, we've been talking about configuration management as, "It's about managing packages, and files, and users." And then people saw the world of containers, and they were like, "Well, does configuration management really play a role there?"
But that's not really the point of configuration management. The reason that configuration management was great at managing packages, and files, and services is because it was great at managing configurations. And configurations don't go away when you have containers, and cluster schedulers, and things like that.
In fact, they get even more complex. The scale of them becomes very hard to keep track of. And that's something where, if you have a source of truth that can span both Kubernetes, that is managing the images and where they're all going, but also managing the content of the images themselves, and also managing the non-ephemeral services, like databases, and message queues, and things like that in your infrastructure, and be able to have one language and one thing, of one source of truth of how all of these things are supposed to piece together -- well, now it becomes much easier to wrap your head around what's actually going on in your infrastructure.
Gareth Rushgrove: If you're coming from the point of view of a brand new organization, or even an organization that's very technically mature, it's easy to say, "Oh, we have one platform." If you look across most organizations of a certain size, and a certain scale, and a certain vintage, the reality is you can go through the different strata of technology decisions over the years.
I think technology as a whole, it's much easier to add new things than it is to switch them off. So after 40 years of doing that, you have a lot of things, and they cross generations. And I think with Puppet, what we're trying to do -- and I think what we're successfully doing in lots of cases -- is demonstrating that having one tool that crosses those generations to manage things allows you to adopt new things faster in messy, heterogeneous, low-trust, risky environments.
Carl Caum: And we showed this at PuppetConf with the Docker demo and the keynotes, where we were able to take an existing application and isolate just one of its services, and then use the Puppet code that was managing that in VMs to build out an image, and then using Puppet code to manage that image, and getting it deployed out. So giving us a very clear path of, "We need to isolate this part of this application, and upgrade it to newer technology."
Gareth Rushgrove: The keynote demos were, I think, one of everyone's favorite parts of PuppetConf, absolutely. And videos of the keynote will be available soon.
Kara Sowles: You can tell we're all freshly back from PuppetConf. We're so excited right now. There's nothing as intoxicating as a few days of talking to Puppet users all day long, nonstop. And so by the time this podcast goes live, I think those videos will be up. This will be in the past.
Carl Caum: Hello from the past!
Kara Sowles: Exactly. . . So, when you meet with customers to discuss this kind of work, what do you see them struggle with? What are the issues they're trying to solve?
Gareth Rushgrove: It varies from customer to customer, because containers themselves are quite a low-level primitive. So it's the problems you're trying to solve that matter, and different customers are trying to solve different problems. Whether that's faster application deployment or utilization problems, those have different facets. They just both happen to use containers.
I think one of the ones that comes up a number of times is the black-box nature of containers. That's one of the things that gives it that power. It's like, "Well, it's just a container. It contains things." And from the point of view of the scheduler, it doesn't need to know what it contains. It just needs to know some metadata and it can get on with it.
From the point of view of a large financial institution, or a big retailer, or a bank, or something, that's fine from the point of view of running it; but they still want to know what's inside it, because, ultimately, anything inside of it could be the reason why they're compromised, or the reason why something goes wrong. They know that's the problem they've been solving with configuration management tools, and they value that insight, that situational awareness of, "I know everything about my infrastructure."
The concept of adding a bunch of black boxes to that scares them, because they don't want to give up that capability. And that's certainly one of the things that we provide today with other types of infrastructure, and we've always been about adding more types of infrastructure that we provide that insight capability for.
For example, the network device support that's been growing over the past number of years just adds more and more things you can get that awareness of from Puppet.
Carl Caum: One of the reasons that Puppet makes that great is. . . So you can manage network devices along with containers and Kubernetes — big deal. The real power there is the abstractions that you build with Puppet, and being able to have the model that says, "This is how we're going to talk about our infrastructure in a way that we think of it, and have our inputs of how we define that."
And underneath it? Well, who cares what the technology is, right? It may be one network switch one day, and then we may switch to virtual networks for the underlying implementation. The underlying implementation becomes that detail that we can introspect and swap out, but the model on top is how we think of things. And that's the piece that becomes really easy with Puppet.
Gareth Rushgrove: I think, actually, over time that model does change. New technologies aren't just the same but better. They're different as well. But Puppet is a programming language with toolings put around it, and programmers change programs all the time. It's called "refactoring," when it's done well.
And I think there are patterns and practices around refactoring Puppet code, which is really refactoring your models like the way you think of the world. So we have both of those things. We have the tools to move your model, and your model can span multiple generations.
And it sounds all a bit abstract, but it's the problems you can solve when you think like that that are interesting.
Carl Caum: Well said.
Kara Sowles: What part do you see Blueshift play in helping people adopt newer technologies in the real world?
Gareth Rushgrove: Coming back to that, "What's inside the black box?" is a really interesting problem. And partly, it's that most people using containers are embedding entire operating systems inside them. And how many packages are in there? Are those packages up to date? Can you ask questions like, "What versions of OpenSSL are available on all of my infrastructure if some of them are inside black boxes and some of them aren't?"
You might find some tools that will do one or the other, but if someone has just come through the door saying, "There's definitely a problem -- answer me this question," do you want to be looking at multiple tools for different generations of your infrastructure? Do you want to be saying, "Well, I can tell you about it for this bit, but for this bit, we don't know," or, "We know what it used to be," or something akin to that? So the inventory problem is really interesting.
But I think, as well, there's also: How do people adopt? How do you bring these tools into your existing infrastructure? A lot of things like to assume that everything exists in isolation, and I think Puppet has always been a good tool for installing and managing anything.
Kara Sowles: There's a lot of interest in new smaller operating systems. What are we doing around systems like that, stuff like CoreOS?
Gareth Rushgrove: Good question. They're just operating systems. They have things that need managing. They just have fewer things. And someone was saying to me, "Why do I need Puppet if I have these smaller operating systems?" And the resource count isn't the thing you get value out of. Puppet isn't more valuable the more resources you're managing.
You want to manage the thing. And, actually, with the smaller operating systems, it's much easier to manage everything. They tend to have simpler interfaces, so you're not dealing with so many packages, and so many files, and so many services.
But they do have certain things you want to manage and make consistent across them. And they're all subtly different as well. The implementations of things like CoreOS, and Atomic, and Photon, for example, are different. I think Nano is just out with Windows Server 2016, and, again, it's different.
Some of them are very much like, "Well, it's a single file system." But they still have services. Photon has, basically, its own minimal package system, so there are actually some packages you want to manage. And, again, you want to manage the services and the unit files for systemd.
And the nice thing here is there's a lot of consistency in these new operating systems. Managing them with Puppet is actually a simpler endeavor than managing a large, general-purpose operating system. And you get the value quicker, I think. . .
The interesting thing about how a lot of these work is that software is either the operating system or it's a container. We did some work around packaging up Puppet itself as those images that are up on Docker Hub, and you simply run Puppet as a container to manage the operating system.
It's nearly the opposite of some of the things we've talked about before, where you're using Puppet to manage your containers. Here, we're leveraging container technology to help you manage other things. So there's a nice synergy there, too.
Kara Sowles: Excellent. What about instances where people are using some of this tech that you're talking about with Puppet? Are they using it to make Puppet better? Does it go along with that well?
Gareth Rushgrove: Yeah. I mean, that example there was. . . I guess using the container technology with Puppet to manage something else is a good example of that. But, also, there are some people running -- and we're doing a bunch of work around this, to run Puppet and the Puppet Enterprise product, for example, on top of the cluster schedulers.
There are a lot of interesting things there. For example, Kubernetes and Docker Swarm are introducing these primitives for rolling updates, or high availability, or better utilization. I think casting them as container managers is missing the interesting bit. The interesting bit is they're giving us high-level primitives for building really robust distributed systems.
Historically, we've had to do all of that ourselves -- not just Puppet, but literally anyone building any sort of software. And it's something we can go put in a layer of abstraction, and have the benefits of a bunch of those features without having to do all the science.
And I think that the idea of running a Puppet cluster and Puppet manager, and scaling that out with compile masters automatically on top of the schedulers is a really interesting space we're going to be seeing a lot more of.
Carl Caum: I think our customers are going to really feel that, too, when they start running it that way. If you look at Puppet Enterprise now and our open-source projects, scaling those out right now -- you really have to know what you're doing. Having these high-level primitives makes it really nice to just describe this one service that automatically scales up and down the way you need it to for your infrastructure.
Kara Sowles: That's fantastic. Let's dive into one. I know what everyone listening is thinking. . . You haven't talked about Docker the entire time. You've talked about other stuff. For those that aren't tired of it, let's dig in a little bit to Puppet and Docker.
Gareth Rushgrove: The Puppet and Docker story is interesting to me personally. I originally wrote the Puppet-Docker module well before I worked here.
Kara Sowles: When was that?
Gareth Rushgrove: Let's think. . . Docker shipped around four years ago -- just a little bit less. My memory is a little hazy. The Puppet module was probably around a couple of weeks after that. I was a super early adopter, messing around with it. And it's really interesting. From that point, where I just wrote something, to now, I think there have been 130, 140 contributors to that module, and over 1,000 commits -- and many major version releases.
It's been a really good community success story. And then, obviously, coming to Puppet to work on some of those things has been -- my hobby became my job, in some ways, and then people from work became my collaborators on one of the things I'd done from outside. It's a really good example of open source working really well.
And we regularly hear from users that that's how they started with Docker. If you want to use Docker, you want to use it on your infrastructure. And if you're using Puppet, it's as simple as just putting "include docker" in your manifests.
I think a lot of people are sometimes put off by new software -- I mean, this is not really about anything specific. But they're put off by the complexity of getting started. The story around Docker was so simple for Puppet users that it was just, "Oh, I just put 'include Docker' and I have Docker."
And later, people have been able to -- well, the module now has a whole bunch of parameters for all the different things you can configure. And you can build quite robust, highly customized installations and configurations, all just through the simple Puppet interface.
The module itself is all about installing Docker and managing the configuration of it -- some of that file, package and service stuff that's still required. You want to run containers on top, but there's still an engine. There's still a daemon. There's still a thing you need to manage.
But it also supports primitives for managing Docker networks, for example. You can also have it actually managing specific containers if you want to have Puppet launch statically scheduled containers on your hosts. And there's support for Docker Compose. If you're using Docker Compose files, Puppet can take over the management of that and ensure that state over time.
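(A condensed sketch of the pieces described here, based on the Docker module's interface. The version pin, network name, container name and port mapping are illustrative, so consult the module's documentation on the Forge for exact parameter names.)

```puppet
# Install the Docker engine and manage its configuration
class { 'docker':
  version => '1.12.1',   # illustrative version pin
}

# Manage a Docker network as a Puppet resource
docker_network { 'backend':
  ensure => present,
}

# Statically schedule a container on this host
docker::run { 'webapp':
  image => 'nginx:stable',
  ports => ['8080:80'],
}
```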
So there's a whole bunch of bits and pieces in the Docker module, but, also, that's not the end of the Docker and Puppet story. There are a couple of other bits and pieces. There's a module for Universal Control Plane, which is part of the Docker datacenter toolset. Getting started with that with Puppet is, again, "Just include the module."
There's also a module for Docker Swarm. There's a whole bunch of interesting things on top of Swarm -- not just actually setting it up, but allowing you to manage the containers on top of your swarm using Puppet. And that was done by a community member, by Scott Coulton.
[Unintelligible], and he talked about Puppet and Docker at the PuppetConf keynote. But, also, he's the author of the Puppet and containerization book, which was pretty interesting from my point of view, in that the company Scott was working for and the people he was working with took the module which allows you to do all these things, and they did it in production in a high-security environment.
So a lot of the best practices that Docker has published, and that other people have published about configuring Docker, they were able to encode in Puppet and have the reassurance that those weren't going to change. The Docker cluster grew, and maintained the security posture it needed -- the best of both worlds.
Carl Caum: You can hear Gareth hinting at something really key here. I participate in the local Docker MeetUp here in Portland, and one of the things that I consistently talk to people about is they feel overwhelmed. When they look at getting started with this stuff, they're like, "Do I choose Kubernetes? Do I choose Swarm? How am I actually going to get these things out? Should I have a local registry?"
It's like, "Whoa! Calm down. Let's start a little bit smaller here. How about we just get Docker running?" Like, step one, right? "Now let's try and get a container out. . ." And Puppet really helps you do that, but the challenge there is, once you take that first step, how do you know when to take the next step, and where?
So you've deployed the Docker engine in your infrastructure. Well, where is it? And how many do you now need to upgrade to the next thing, whether that's Compose or Swarm, or whatever else?
But with Puppet, you know. Just look at the code. Where is it? It's right there. . . Where is it deployed? Right here. . . Okay. Now we just need to change a couple of lines out into the next thing that we want to try, the next step.
As you're diving into these things, as you're diving into Docker, as you're diving into Kubernetes, and all these things, don't go head first. Don't try and just burn the world and start over with these new technologies. Absolutely not. . . I've never seen it work. It actually ends up taking longer, being more costly. It's not good.
Treat it as an evolutionary step. Treat it as something where we want to change out one thing at a time. We want to learn from it, and then we want to take away what we've learned to take the next step, and then take a better next step and end up in a better place. It'll get there faster and have better expertise, because we learned along the way.
And Puppet really helps you with that, and the Docker module is a great example of that.
Gareth Rushgrove: I mean, that was going back to one of the first Docker and Puppet integrations. The most recent one was the work around image_build, which we shipped at PuppetConf. And this is the ability to. . . Most Docker images are built with a Dockerfile: you take a Dockerfile, a text document that's just a line-oriented procedural script, and you run docker build.
And the beauty is in the simplicity. It's a simple way of doing something. And what we wanted to do was take that for Puppet code, so we shipped a tool that allows you to just take Puppet code -- and this is any Puppet code -- and simply run puppet docker build against it.
And you can provide metadata. You can provide configurable options depending on what you're trying to do. You might want to use a different operating system or a different version of Puppet. And that will build you an image. It's a very similar user experience with a different interface.
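(To make that concrete, here's roughly what the workflow looks like, starting from an ordinary Puppet class. The class, package and image names are illustrative, and the build subcommand and flag shown in the comment are an assumption about the image_build tooling's interface, so check its documentation for the exact invocation.)

```puppet
# Existing Puppet code that already describes a service -- the same
# code you might apply to a VM today. Names here are illustrative.
class sample_app {
  package { 'nginx':
    ensure => installed,
  }
  service { 'nginx':
    ensure => running,
    enable => true,
  }
}

# From the shell, a single command turns that code into a Docker
# image (subcommand and flag names are illustrative):
#   puppet docker build --image-name my-org/sample-app
```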
The first time we talked about that was at PuppetConf, and we had some testing going on. And I was talking to one of my colleagues about that today. And to hit Carl's point about some people feeling overwhelmed, but knowing that they need to know these things, a number of them were like, "I've never built an image before, and I just built one using something I understand. I've been told that I need to get into these containers."
But it can feel overwhelming. And Puppet has that bridge -- well, you already know it -- how can we provide better tools to get you there, where there isn't really a destination as much as a jumping-off point for learning something else? I think that's where, over generations, Puppet can become that bridge, that language.
Kara Sowles: That's really good advice. And that's great advice, Carl, to be gentle with yourself, to take things slowly and not get overwhelmed there. Any other advice to folks as we wrap up for people who are out there thinking through all of this?
Gareth Rushgrove: "Advice" might be the wrong word, but definitely try out a bunch of the things we've been talking about. We'd love feedback. The more we hear, the more we iterate and improve on these things, and the more they'll solve other problems that are adjacent as well.
And as we talked about, technology is constantly changing. Puppet and everything else is also constantly changing. And those things go hand-in-hand. We want to help you manage anything you're doing.
And, yeah, here's the shameless pitch: check out all the videos and talks from PuppetConf. There was a lot there -- the keynote, some of the demos, an entire track on modern infrastructure, and a number of talks scattered about. There's some great content for people who are maybe more visual than auditory.
Kara Sowles: Yeah. Shout out to folks who remember the last time we did a podcast on Docker. It was a little more than two years ago, and we had a couple of folks from Docker in the office, and a local user, and so on. It's suddenly coming back: "Oh, yeah. We started having these conversations even here on the podcast." So shout out to anyone who's been listening long enough to remember that from 2014.
So as people go home and think about this, watch those PuppetConf videos. Anything else for those that are not into videos or were already at PuppetConf that they should be checking out?
Gareth Rushgrove: It's always worth checking out the Puppet blog. And there's a number of blog posts, and there will definitely be a number more, around some of the things we shipped at PuppetConf. And it's always a good place to watch anyway, whether about Blueshift, or containers, or anything else we're up to.
Carl Caum: There's also the Blueshift page on our website, which links out to all the projects. And the reason I say that is that all of our projects have examples in them. Thank you, Gareth. So you can really just get started and try it out.
Kara Sowles: That's great. If you like listening to Gareth. . . If you Google "Gareth," you'll find about 10,000 talks that are somehow all really good, and then about 100 blog posts. So there's no shortage out there.
Gareth Rushgrove: I'm sure they all contradict each other as well. . . [Laughs]
Kara Sowles: Thank you both for joining us. We really appreciate it.
Carl Caum: Yeah.
Gareth Rushgrove: Thanks a lot.
Kara Sowles: Thank you to everyone that's listening in. We look forward to talking to you next time on the Puppet Podcast. Bye.