
Intro to Canary and Blue-Green Deployments with Dashboards.

This article is a continuation of the “What is Knative” series. For part 1, please follow this link. In part 2 of this series, we will review canary and blue-green deployments and dashboards while using no YAML!

If we raise the load on our service, Knative will autoscale it even further. Exactly how Knative should autoscale can be configured globally or per service.

To put load on our helloworld service, we are first going to modify its configuration. By default, a service is configured to handle 100 concurrent requests. We will reduce it to 10 to be able to show the effects of autoscaling.
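With the kn CLI, that change looks roughly like this (helloworld is the service we deployed in part 1):

    # Limit each pod to at most 10 concurrent requests
    $ kn service update helloworld --concurrency-limit 10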


Configurations, Routes, Revisions

To understand what happens now, we need to take a look at other Knative objects:
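One way to list them, assuming kubectl access to the cluster from part 1:

    # List the API resources provided by Knative Serving
    $ kubectl api-resources --api-group=serving.knative.dev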


As we can see, there are 4 Custom Resource Definitions inside our K8S cluster which are specific to Knative Serving: configurations, revisions, routes and services. Because Kubernetes has a native resource type “service”, the Knative one is called “kservice” or “ksvc”.

[Diagram: Knative custom resource definitions (configurations, revisions, routes, and services)]

A Knative Service represents the microservice, the app we just deployed. The service has a route, which defines under which URL it is accessible. It also has a configuration — the combination of code and settings — which can be versioned in revisions.
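We can see all of these pieces for our service by describing it with kn (revision names in your cluster will differ):

    # Show the service, its URL, and the traffic split across revisions
    $ kn service describe helloworld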


We see under Revisions that 100% of the traffic goes to the revision helloworld-pflmj-2 a.k.a. @latest, and we also see which image version is used in that revision.

Let’s check out our revisions:
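A quick way to do that with the kn CLI:

    # List all revisions, including their traffic share and tags
    $ kn revision list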


We see that we actually have 2 revisions and the second revision gets 100% of the traffic. How come we have 2 revisions? Well, remember we changed the concurrency limit on our service? That’s when a new revision was created:
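We can inspect that newest revision in detail (substitute the revision name reported by kn revision list in your cluster):

    # Describe the second revision, which carries the new concurrency setting
    $ kn revision describe helloworld-pflmj-2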


We clearly see the concurrency limit set to 10 in this revision.

Autoscaling a Knative service

Now we are ready to generate load on the service. We will use a tool called hey, but hey, you are welcome to use a tool of your choice:
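Assuming hey is installed, something like this generates the load (kn service describe -o url prints just the service URL):

    # 50 concurrent workers hitting the service for 30 seconds
    $ hey -c 50 -z 30s "$(kn service describe helloworld -o url)"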


This will generate 50 concurrent requests to our service for 30 seconds. Since our concurrency limit is set to 10, we expect 5 pods to be started to handle all the traffic.

Our watch confirms our theory:
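Running this in a second terminal while hey is working shows the scale-out:

    # Watch pods being created as the load arrives
    $ kubectl get pods --watch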


Again, once they become idle, these pods will be terminated.

Routes

But what if we want to deploy a new version of our app without moving it into production yet?

Our helloworld app supports an environment variable TARGET — if we set it to a message, that message will be returned to us in the response. So let’s use that to simulate releasing a new “testing” version of our app.

Obviously, simply doing kn service update helloworld --env TARGET=testing doesn’t work, because that would route all traffic to the new version, which is exactly what we want to prevent.

To make this work, we first need to specify that the traffic should remain on the current version. We will use the feature called ‘tags’:
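Sketched in two steps with kn (the tag name ‘production’ is our own choice):

    # Tag the currently serving (latest) revision as 'production' ...
    $ kn service update helloworld --tag @latest=production

    # ... and pin all of the traffic to that tag
    $ kn service update helloworld --traffic production=100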


We defined a tag ‘production’, assigned it to the current version and specified that it should get 100% of the traffic. Now we can deploy a new testing version and tag it as testing:
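Roughly like this, in a single update (traffic stays pinned to ‘production’, and the new latest revision gets the ‘testing’ tag):

    # Deploy the new version and tag it 'testing'; no traffic moves yet
    $ kn service update helloworld --env TARGET=testing --tag @latest=testing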


We have now tagged our new version as ‘testing’. 100% of the traffic is still sent to production, as we see in the revision list. It turns out that tagging automatically creates a new route so we can access our testing version as follows:
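The tag is prefixed to the service hostname, so with Knative’s default example.com domain the tagged route looks roughly like this (substitute your own domain):

    # Tags get their own sub-route: <tag>-<service>.<namespace>.<domain>
    $ curl http://testing-helloworld.default.example.com
    # -> "Hello testing!"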


We can now test our new version in isolation.

Blue and green canaries

After testing, we are now ready to move our testing version to production. Since production is the only really representative testing environment, instead of replacing the production version immediately, we would like to send a percentage of the traffic to the new version — a process called ‘canary testing’ or ‘canary deployment’.
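A 90/10 split between the two tags can be set like this:

    # Send 90% of traffic to 'production' and 10% to 'testing'
    $ kn service update helloworld --traffic production=90 --traffic testing=10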


We now see the intended traffic distribution. If we now do
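something along the lines of

    # Fire a batch of requests and eyeball the mix of responses
    $ URL=$(kn service describe helloworld -o url)
    $ for i in $(seq 1 20); do curl -s "$URL"; done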


we will get about 10% of “Hello testing!”s and 90% of “Hello World!”s. After we are satisfied that our testing revision is performing properly, we can tag our testing version as production and send 100% of the traffic to it using the mechanisms explained above.

A different approach is a so-called “blue-green” deployment. In that scenario, we imagine that our current production environment is tagged ‘blue’. We tag the new production version ‘green’ and switch 100% of the traffic to it. If drama happens, we quickly switch traffic back to ‘blue’ and start solving bugs in ‘green’.

Let’s start from scratch. First, let’s delete our service:
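    # Remove the service, along with all of its revisions and routes
    $ kn service delete helloworld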


Let’s create our blue version:
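A sketch of that create, assuming the same sample image as in part 1 (gcr.io/knative-samples/helloworld-go):

    # Create the service with an explicit revision name 'helloworld-blue'
    $ kn service create helloworld \
        --image gcr.io/knative-samples/helloworld-go \
        --env TARGET=blue \
        --revision-name helloworld-blue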


Here, we used the --revision-name option to specify the revision name ourselves instead of letting Knative generate one. This means we can refer to the revision by name and omit the tagging. In practice, tagging is more flexible, because tags are independent of revision names and moving a tag is easier than renaming revisions.

Next we will pin 100% traffic to the blue version so traffic will stick to it when we deploy a new revision:
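    # Pin all traffic to the 'blue' revision by name
    $ kn service update helloworld --traffic helloworld-blue=100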


We see that the blue version is now live. Let’s now create our green version:
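Two updates: create the green revision, then flip the traffic over to it:

    # Create the green revision; traffic stays pinned to blue for now
    $ kn service update helloworld --env TARGET=green --revision-name helloworld-green

    # Switch 100% of the traffic over to green
    $ kn service update helloworld --traffic helloworld-green=100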


We successfully switched to the green version. We can switch back anytime:
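    # Roll back: point all traffic at blue again
    $ kn service update helloworld --traffic helloworld-blue=100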


That was easy — we just implemented a blue-green deployment!

Please note that the real world is often trickier, especially if you have a storage backend with a schema that changes across service versions. Knative is still a big help, since it removes a lot of the burden of deploying the web services themselves.

Knative Dashboards

Knative comes with pre-configured monitoring components. In this example we have installed Grafana and Prometheus, which enable us to view nice dashboards of our services.

This command will forward local port 3000 to the Grafana service in our Kubernetes cluster:
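A sketch of such a port-forward, assuming the monitoring bundle was installed into the knative-monitoring namespace and Grafana carries the label app=grafana:

    # Forward local port 3000 to the Grafana pod
    $ kubectl port-forward --namespace knative-monitoring \
        $(kubectl get pods --namespace knative-monitoring \
          --selector=app=grafana --output=jsonpath="{.items..metadata.name}") \
        3000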


Now, we can access the dashboards via "http://localhost:3000" in our local browser.

[Screenshot: Grafana dashboard for Knative]

Epilogue

Summarizing, we have explored:

  • Installing Knative

  • Deploying and (auto)scaling a service

  • Canary and blue-green deployments

  • Knative Dashboards

I hope this overview has provided you with enough information and got you excited to start exploring Knative for yourself.

References:

If you are interested in moving your CI/CD pipeline to Kubernetes, check out the Tekton blog by Eric Sorenson. Fun fact: Tekton originated from a third component of Knative, “Build”, which has since moved out of Knative and into the Tekton project.

This educational content is brought to you by Relay. Relay is an event-driven automation platform that pulls together all of the tools and technologies you need to effectively manage your DevOps environment.