
Lyra brings Puppet concepts and more to cloud-native

Editor's note: We invite you to join us at Puppetize PDX on 9-10 October 2019 to learn more about what's new at Puppet.

Introducing Lyra

As of today, Lyra, a new open source project sponsored by Puppet, has hit its general availability milestone. In this blog post, I want to take the time to explain why we are working on Lyra, the problems Lyra addresses, and its approach to solving them. If you've talked to us about it at DockerCon, seen a demo at KubeCon, or read through the README already, you can jump straight to the download instructions.

The "elevator pitch" for Lyra goes like this: it's an open source workflow engine for orchestrating cloud-native deployments. In a single Lyra workflow, you can provision infrastructure, deploy your application and its external dependencies, and trigger actions like sending ChatOps notifications or GitOps callbacks at any stage. Plus (OK, assume it's a really long elevator ride), Lyra workflows are composable pieces of code, meaning you can call workflows from other workflows to increase reusability, eliminate copy-pasted config, and provide app deployment building blocks for large teams.

How did we get here? How do I work this?

Puppet (the original open source project) was created in 2005. Luke Kanies wrote Puppet on his quest to build better tools to eliminate the soul-crushing parts of his job as a systems administrator. Puppet codified the nascent principles of the configuration management discipline — infrastructure as code, declarative model-driven state management, autonomous agents — and helped usher in the era of what ultimately became known as DevOps.

This idea and execution ended up being successful, taking root and evolving at the thousands of sites that have used Puppet technology over the last decade and a half. But the IT landscape that informed many of Puppet's core design decisions has changed dramatically over time. The emergence of virtualized, ephemeral, and immutable infrastructure has brought the industry into a new era, commonly referred to as "cloud-native." Cloud-native infrastructure is (according to the excellent description in Kris Nova and Justin Garrison's book of the same name): meant to run on a platform; operated through APIs; and designed for resiliency, agility, operability, and observability.

I talked about the evolution of the space in my Configuration Management Camp 2019 keynote talk and have been thinking a lot about how Puppet the company — and, more importantly, Puppet the practitioner and development community — can bring forward the valuable lessons we've learned into this paradigmatically different world. Most of the Puppet community are no longer racking and stacking servers in the data center, grappling with PXEBOOT, kickstart, or in-place OS upgrades. (But where they are, they're probably doing it at cloud-provider scale!)

They may be part of an effort to migrate workloads to the cloud or, in some cases, are shifting into a new role where they're helping re-architect services to take best advantage of those desirable characteristics: resiliency, operability, and so on. These users have questions about how they can translate their Puppet skills to the cloud. Do they have to relearn everything for cloud-native?

(Spoiler alert: The answer is "no.")

The expansion pack: problems for Kubernetes app developers, application service teams, maybe you?

At the same time, early adopters of technology like Docker, Kubernetes, and public cloud services have found the many devils hiding in the details. Far from finding infrastructure operations eliminated as a role or discipline, practitioners are now tasked with provisioning complex infrastructure through unstable APIs; understanding how to troubleshoot end-user requests that may flow through a dozen microservices; and managing distributed systems infrastructure through user interfaces that even their creators acknowledge aren't meant for human authoring and consumption.

In his KubeCon 2019 keynote, VMware senior staff engineer and KubeCon program chair Bryan Liles pointed out that configuration management for Kubernetes is an area where "we're not hitting the bar yet." The state of the art is akin to RPM or Apt package management on systems, and "nobody has yet built the equivalent of cfengine for Kubernetes."

It sure seems like this is something we should be able to help out with.

Managing and operating cloud-native infrastructure requires capabilities that haven't fully settled in today's ecosystem. Users need to:

  • Provision resources that may span cloud platforms from a single interface; for example, creating a Kubernetes load balancer and deployment for front-end traffic while provisioning an RDS database instance on AWS.
  • Intermingle those declarative steps with imperative, non-stateful tasks like manipulating a ticket queue or PagerDuty downtime interval over a REST API.
  • Describe this deployment in a reusable, accessible artifact that can be consumed by other teams and incorporated into the broader toolchain, such as the CI/CD pipeline and GitOps workflows.
  • Provide sophisticated data injection capabilities for deployments, to manage the values that should be interpolated into Helm chart variables, and secrets like keys and database credentials, without introducing an explosion of YAML.
  • Integrate with the huge ecosystem of existing open source projects that work in this space to save repetitive work and leverage the collective intelligence of the thriving cloud-native community.

Enter stage left: capabilities of Lyra

Lyra helps teams simplify the way they orchestrate cloud-native infrastructure and application deployments, much as Puppet did for node-centric automation. It brings forward concepts anyone who's used Puppet will appreciate (Hiera, graph-based dependency resolution, the rich Puppet data type system), but it is a completely new implementation in the Go language. Here's a breakdown of its capabilities and how they address the problems outlined above. I'm obliged to say that this contains what we call forward-looking statements, describing capabilities that exist in various stages of doneness, from "tested and working" to "hacky prototype" to "gleams in a developer's (or product manager's) eye."

Orchestration: Lyra provides the ability to orchestrate a collection of steps in a workflow. Steps can be declarative, meaning they query the existing state of the outside world before changing anything to determine whether changes are necessary; or they can be imperative actions that don't depend on external state. Lyra's engine figures out the dependencies between steps from each step's inputs and outputs (the concept of the Directed Acyclic Graph is near and dear to our hearts, after all) and executes them in an order that satisfies those dependencies. For example, a subnet_group_name key returned from an aws_db_group step and referenced in a later aws_db step establishes the dependency order between the two steps.
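A sketch of what that looks like in a YAML workflow. The step names and the subnet_group_name key follow the description above; the remaining keys and values are illustrative rather than exact Lyra schema:

```yaml
steps:
  aws_db_group:
    returns: subnet_group_name          # output other steps can reference
    state:
      subnet_group_name: production-db-subnets
      subnet_ids: [subnet-0aaa, subnet-0bbb]
  aws_db:
    state:
      # Referencing the output of aws_db_group creates the dependency edge,
      # so the engine runs aws_db_group first.
      db_subnet_group_name: $subnet_group_name
      engine: mariadb
      instance_class: db.m3.medium
```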

Workflow Language: Lyra was designed to be a polyglot (multiple language) system from the start. While we began with YAML and Puppet language interfaces, the changes to the Puppet syntax needed to meet the use cases created a beast that was "neither fish nor fowl": far enough from classic Puppet to require experienced practitioners to relearn it, but not used outside our own ecosystem. We currently support simple workflows in YAML, plus both Go and TypeScript for users who want sophisticated control flow or custom error handling. We expect that Go will prove the best suited, since that is Lyra's native language, but we are open to exploring additional avenues if there's community interest.

Reusability: Regardless of language choice, Lyra workflows are namespaced and referenceable from other workflows, which enables a compositional approach. For example, a user I spoke with at KubeCon explained that she has an app which sometimes needs Terraform to create infrastructure, but where every deployment consists of pulling a new container artifact and deploying it. Lyra's approach would break the "deployment" piece into its own workflow, which could be called directly for the latter case and invoked by a larger provision-then-deploy workflow for the former. The ultimate goal is to enable "blueprint" workflows so application service teams can provide a catalog of standard, blessed deployment scenarios for the developers they work with, increasing standardization and reducing the sprawl that's becoming a nightmare for early adopters of Kubernetes in the enterprise. A call key in a step refers to a different workflow, which is invoked with the parameters provided in that step.
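For instance (an illustrative sketch; the call and parameters keys match the description above, the rest of the schema is approximate):

```yaml
steps:
  deploy_frontend:
    call: deploy                      # invoke the separate "deploy" workflow
    parameters:
      image: myapp/frontend:v2.3.1    # values passed into the called workflow
      replicas: 3
```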

Data Injection: Lyra provides a Go implementation of the popular Hiera tool to inject external data and secrets into workflows to support code reuse and code/data separation. Go-Hiera is (mostly) compatible with Puppet's Ruby-Hiera version 5, enabling sites with existing business logic encoded in their Hiera databases to reuse it for their cloud-native projects. Hiera features a pluggable backend for integration with other existing key-value stores, such as HashiCorp's popular Vault and Kubernetes' own etcd. Hiera is available as a Go project distinct from Lyra, and our hope is to encourage integrations with other adjacent tools that have data-injection needs, like Helm and Konfigure. A typical configuration tells Hiera to look first for a file in the environments subdirectory whose name matches the provided environment and, if no suitable overrides are found there, to fall back on the values in defaults.yaml.
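In Hiera version 5 terms, which Go-Hiera (mostly) follows, that configuration looks like this:

```yaml
# hiera.yaml
version: 5
defaults:
  datadir: data          # data files live under ./data
  data_hash: yaml_data   # read them as YAML
hierarchy:
  - name: "Per-environment overrides"
    path: "environments/%{environment}.yaml"   # checked first
  - name: "Common defaults"
    path: "defaults.yaml"                       # fallback values
```

Hiera walks the hierarchy top to bottom, so a key found in the environment-specific file wins over the same key in defaults.yaml.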

Providers & Resources: Lyra is built on top of a provider system responsible for exposing manageable resources in a standard way. The provider system makes use of the Puppet data types, enabling rich input validation and data passing between workflows. For example, the type system can ensure that a user trying to instantiate an RDS database is restricted to an approved set of availability zones. Currently most of Lyra's provider content comes via Terraform, so we can address any of the cloud implementations that are "bridged" into the Terraform content library. We expect to quickly provide deeper integrations with tools like Helm, Terraform modules and plans (rather than individual resources), Kubernetes api-server-compatible YAML, and Bolt plans, and it's easy to add additional providers to the system as the community develops them.

In conclusion

Lyra is a young project, but it's moving quickly, and we'd love to work with people who are interested in building workflows to meet their deployment scenarios, hacking on the project itself, or combining forces with tools that overlap with its goals and capabilities. The project is developed in the open via GitHub Project boards, issues, and conversation on Slack and Twitter.


Eric Sorenson is a technical product manager at Puppet.