Canary deployment is a method of rolling out new software versions in a controlled manner: a new version is first tested on a small fraction of users to ensure it works well before being rolled out to all users.
Canary deployments take their name from the practice of miners carrying a canary into a coal mine to detect any dangerous gases. If the canary showed signs of distress, miners knew that the environment was unsafe. Similarly, in the context of software deployment, a small portion of traffic is routed to the new version or ‘canary.’ If issues arise, they affect only this small subset of users, not the entire customer base.
When running containerized applications in Kubernetes, the platform’s inherent flexibility and scalability make it well-suited for canary deployment strategies. For example, if an application is distributed across ten Kubernetes pods, you can designate one pod as the canary, deploy the new version only on that pod, and if all is well, deploy it to the remaining nine pods.
It’s important to note that canary deployments are not available by default in Kubernetes—they are not one of the deployment strategies in the Deployment object. Therefore, to carry out canary deployments in Kubernetes you will need some customization or the use of additional tools. We’ll show how to carry out canary deployments easily with Argo Rollouts, an open source deployment tool.
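To see what such customization involves, consider a common do-it-yourself approach: run two Deployments whose pods share a label selected by one Service, so traffic splits roughly in proportion to replica counts. The names and image tags below are illustrative, not from any real application:

```yaml
# Stable version: 9 replicas receive ~90% of traffic
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: myapp
      track: stable
  template:
    metadata:
      labels:
        app: myapp
        track: stable
    spec:
      containers:
      - name: myapp
        image: myapp:1.0
---
# Canary version: 1 replica receives ~10% of traffic
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      track: canary
  template:
    metadata:
      labels:
        app: myapp
        track: canary
    spec:
      containers:
      - name: myapp
        image: myapp:2.0
---
# The Service selects only the shared "app" label, so it load-balances
# across both Deployments in proportion to their replica counts
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
```

Shifting more traffic to the canary means manually rescaling both Deployments and cleaning up afterward, which is exactly the bookkeeping that tools like Argo Rollouts automate.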
Benefits of Canary Deployment in Kubernetes
Here are a few reasons you should consider using the canary deployment strategy:
- Risk mitigation: Introduces updates to a small user group to contain potential issues, enabling straightforward rollback and reducing system-wide failure risks.
- Real-time user feedback: Enables live monitoring to identify problems and gain insights into user acceptance, guiding release decisions.
- Controlled traffic exposure: Allows teams to control the flow of traffic during the release. For example, deploying the new version gradually to more and more users, or switching to the new version all at once.
- Cost efficiency: Uses the existing production environment for testing, saving on the costs of dedicated test or staging environments.
Use Cases of Canary Deployment in Kubernetes
Here are some common use cases of canary deployments in a Kubernetes environment:
- Feature rollouts: Deploy new features incrementally to a subset of pods, using Kubernetes’s ability to manage multiple versions of a container simultaneously. This minimizes the risk when introducing new features by initially exposing them only to a limited user base.
- Configuration changes: Canary deployments can be utilized for applying and verifying configuration changes. Before updating the entire fleet of pods, configuration changes can be applied to a single pod or a small subset. This is particularly useful in microservices, where a change in one service may have a ripple effect on others.
- User-specific deployments: In multi-tenant applications, canary deployments can target specific user segments. For instance, updates can be released to a canary group composed of internal users or a particular customer demographic, allowing for targeted testing and feedback.
- Multi-region deployments: In distributed systems spanning multiple regions, canary deployments can assess the performance and reliability of a new release in a specific region before a wider rollout, accounting for region-specific variations in traffic and usage patterns.
- Third-party service updates: When updating services that rely on external APIs or services, a canary release can validate compatibility and ensure that changes in the external service don’t negatively impact the application.
What Is Argo Rollouts?
Argo Rollouts is a Kubernetes controller and set of CRDs (Custom Resource Definitions) that provides advanced deployment capabilities such as blue-green, canary, and more in Kubernetes environments. It uses native Kubernetes concepts and extends them with powerful features that support complex deployment strategies. With Argo Rollouts, you can achieve much more granular control over the rollout of new versions of your applications, ensuring that a new release doesn’t negatively affect your users.
Argo Rollouts supports canary deployments out of the box, as part of the Rollout object. It enables you to gradually roll out new versions to a small subset of users and monitor the application’s health. You can then, either manually or automatically, direct more traffic to the canary version or roll back to the stable version.
Quick Tutorial: Canary Deployment on Kubernetes with Argo Rollouts
Installing Argo Rollouts
You’ll need to install the Argo Rollouts controller in your Kubernetes cluster. You can do this by running the following commands in your terminal:
kubectl create namespace argo-rollouts
kubectl apply -n argo-rollouts -f https://raw.githubusercontent.com/argoproj/argo-rollouts/stable/manifests/install.yaml
This will create a new namespace called argo-rollouts and install Argo Rollouts within it.
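You will also want the Argo Rollouts kubectl plugin, which later commands in this tutorial (such as promote) rely on. Per the Argo Rollouts documentation, it can be installed on Linux as follows (adjust the artifact name for other platforms):

```shell
# Download the kubectl-argo-rollouts plugin binary (Linux/amd64 shown;
# other platforms use a different artifact suffix)
curl -LO https://github.com/argoproj/argo-rollouts/releases/latest/download/kubectl-argo-rollouts-linux-amd64
chmod +x kubectl-argo-rollouts-linux-amd64
sudo mv kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts

# Verify the plugin is installed
kubectl argo rollouts version
```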
Create a Rollout Object with Canary Deployment
The first thing you will need to do is define a Rollout object. This object will specify the deployment details, including the image to be deployed, the number of replicas, and the update strategy.
Here is an example, shared in the Argo Rollouts documentation, of a Rollout manifest that defines a canary deployment:
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: example-rollout
spec:
  replicas: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
  minReadySeconds: 30
  revisionHistoryLimit: 3
  strategy:
    canary: # Indicates that the rollout should use the canary strategy
      maxSurge: "25%"
      maxUnavailable: 0
      steps:
      - setWeight: 10
      - pause:
          duration: 1h # 1 hour
      - setWeight: 20
      - pause: {} # pause indefinitely
This manifest defines the following settings for the canary deployment:
- maxSurge: A 25% surge is allowed above the desired number of pods during the update process.
- maxUnavailable: Ensures zero downtime by keeping all pods available during the deployment.
- steps: Defines the steps of the canary deployment.
- setWeight: This step shifts a specified percentage (10% initially) of traffic to the new version.
- pause: After setting the initial weight, the rollout pauses for 1 hour, giving time to monitor and verify the new version before proceeding.
- The next setWeight step increases the traffic weight to 20%, followed by an indefinite pause, which means the rollout waits for manual judgment before proceeding.
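The steps list can be as gradual as you need. For example, a finer-grained schedule (the weights and durations below are illustrative, not from the manifest above) might look like:

```yaml
strategy:
  canary:
    steps:
    - setWeight: 5       # start by sending 5% of traffic to the canary
    - pause:
        duration: 10m
    - setWeight: 25
    - pause:
        duration: 30m
    - setWeight: 50
    - pause:
        duration: 1h
    - setWeight: 100     # full rollout
```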
Save this file as rollout.yaml and then apply it with kubectl apply -f rollout.yaml.
Note: At this point, nothing special happens. The rollout behaves as a standard Kubernetes deployment, because there is no “previous” version yet.
Update the Rollout
To update the rollout, you need to modify the image in the Rollout object and apply the change. Suppose you’ve made some updates to your application, and you’ve built a new image called YOUR_NEW_IMAGE. This is the canary version. You then need to update the image field in your Rollout object:
spec:
  template:
    spec:
      containers:
      - name: nginx
        image: YOUR_NEW_IMAGE
Save the changes and apply them with kubectl apply -f rollout.yaml. Argo Rollouts will now gradually update the pods with the new image based on the parameters specified in the Rollout object.
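While the update progresses, you can watch the rollout advance through its canary steps with the Argo Rollouts kubectl plugin (assuming it is installed):

```shell
# Watch the rollout's status, current canary weight, and step position
kubectl argo rollouts get rollout example-rollout --watch
```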
Promote the Rollout
In the Rollout template we showed above, the final step is to pause the deployment indefinitely and wait for user input. Once you have tested the canary deployment and are sure you want to deploy it to all users, run this command to continue the deployment:
kubectl argo rollouts promote <rollout>
Note: Manually promoting a rollout is optional and gives you more control over the canary deployment. You can also set the rollout to be promoted automatically based on a metric you specify.
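Automatic promotion is driven by an AnalysisTemplate attached to the canary strategy. The sketch below is modeled on the Argo Rollouts documentation; the Prometheus address, metric query, and service name are placeholders you would adapt to your environment:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
spec:
  args:
  - name: service-name
  metrics:
  - name: success-rate
    interval: 5m
    # Keep promoting only while at least 95% of requests succeed
    successCondition: result[0] >= 0.95
    failureLimit: 3
    provider:
      prometheus:
        address: http://prometheus.example.com:9090
        query: |
          sum(rate(http_requests_total{service="{{args.service-name}}",status=~"2.."}[5m])) /
          sum(rate(http_requests_total{service="{{args.service-name}}"}[5m]))
```

The Rollout then references the template from its canary strategy, so the analysis runs alongside the rollout and aborts it automatically if the metric fails:

```yaml
strategy:
  canary:
    analysis:
      templates:
      - templateName: success-rate
      args:
      - name: service-name
        value: example-rollout
```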
Advanced Progressive Delivery in Kubernetes with Argo Rollouts and Codefresh
The Codefresh Software Delivery Platform, powered by Argo, offers advanced progressive delivery methods by leveraging Argo Rollouts, a project specifically designed for gradual deployments to Kubernetes.
Through Argo Rollouts, the Codefresh platform can perform advanced canary deployments that support:
- Declarative configuration – all aspects of the canary deployment are defined in code and checked into a Git repository, supporting a GitOps process.
- Pausing and resuming – pausing a deployment and resuming it after user-defined tests have succeeded.
- Advanced traffic switching – leveraging methods that take advantage of service meshes available on the Kubernetes cluster.
- Verifying new version – creating a preview service that can be used to verify the new release (i.e., smoke testing before the traffic switch takes place).
- Improved utilization – leveraging anti-affinity rules for better cluster utilization to avoid wasted resources in a canary deployment.
- Easy management of the rollout – view status and manage the deployment via the new Applications Dashboard.
Try it for yourself by signing up for a free Codefresh account.