Continuous Delivery and Continuous Deployment for Kubernetes (part 2)


In the previous post, I covered Docker CI and talked about Continuous Deployment and Continuous Delivery.

This time, I’m going to share our point of view on building an effective CD pipeline (both CD types) for a microservice-based application running on a Kubernetes cluster.

Kubernetes Continuous Delivery (CD)

Building a Docker image on git push is the very first step you need to automate, but …

Docker Continuous Integration is not Kubernetes Continuous Deployment/Delivery

After CI completes, you just have a new build artifact – a Docker image.

Now, somehow you need to deploy it to the desired environment (Kubernetes cluster) and maybe also need to modify other Kubernetes resources, like configurations, secrets, volumes, policies, and others. Or maybe you do not have a “pure” microservice architecture and some of your services still have some kind of inter-dependency and have to be released together. I know, this is not “by the book”, but this is a very common use case: people are not perfect and not all architectures out there are perfect either. Usually, you start from an already existing project and try to move it to a new ideal architecture step by step.

So, on one side, you have one or more freshly baked Docker images.

On the other side, there are one or more environments where you want to deploy these images with related configuration changes. And most likely, you would like to reduce the required manual effort to the bare minimum or eliminate it completely, if possible.

Continuous Delivery is the next step we are taking.

Most of the CD tasks should be automated, while there still may be a few tasks that should be done manually. The reasons for keeping manual tasks can differ: you cannot achieve full automation, you want to keep a feeling of control (deciding when to release by pressing some “Release” button), or some manual effort is genuinely required (bringing up a new server and switching it on).

For our Kubernetes Continuous Delivery pipeline, we manually update the Codefresh application Helm chart with the appropriate image tags, and sometimes we also update various Kubernetes YAML template files (defining a new PVC or environment variable, for example). Once changes to our application chart are pushed into the git repository, an automated Continuous Delivery pipeline execution is triggered.

Codefresh includes some helper steps that make building a Kubernetes CD pipeline easier. First, there is a built-in Helm update step that can install or update a Helm chart on a specified Kubernetes cluster and namespace, using a Kubernetes context defined in your Codefresh account.

Codefresh also provides a nice view of what is running in your Kubernetes cluster, where it comes from (release, build) and what it contains: images, image metadata (quality, security, etc.), code commits.

We use our own service (Codefresh) to build an effective Kubernetes Continuous Delivery pipeline for deploying Codefresh itself. We also constantly add new features and useful functionality that simplify our lives (as developers) and hopefully help our customers too.

Typical Kubernetes Continuous Delivery flow

  1. Set up a Docker CI for the application microservices
  2. Update microservice code and chart template files, if needed (adding ports, env variables, volumes, etc.)
  3. Wait till Docker CI completes and you have a new Docker image for each updated microservice
  4. Manage the application Helm chart code in a separate git repository; use the same git branch methodology as for the microservices
  5. Manually update imageTags for the updated microservices
  6. Manually update the application Helm chart version
  7. Trigger the CD pipeline on a git push event for the application Helm chart git repository (a sketch of these steps follows the list):
  • validate Helm chart syntax with the helm lint command
  • render the Helm chart into Kubernetes template files (with the helm template plugin) and validate them with the kubeval tool
  • package the application Helm chart and push it to the Helm chart repository
  • Tip: create a few chart repositories; I suggest having a chart repository per environment: production, staging, develop
  8. Manually (or automatically) execute helm upgrade --install from the corresponding chart repository
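
To make step 7 above more concrete, here is a minimal bash sketch of the validation and packaging stage. The chart path and repository URL are hypothetical placeholders (not our actual Codefresh setup), and the upload command depends on which backend serves your chart repository:

  # Validate, render, package and publish the application Helm chart (a sketch)
  set -e
  CHART_DIR=./app-chart                      # hypothetical path to the application chart
  CHART_REPO=https://charts.example.com      # hypothetical chart repository URL

  helm lint "$CHART_DIR"                     # validate chart syntax

  helm template "$CHART_DIR" > rendered.yaml # render the chart to plain Kubernetes manifests
  kubeval rendered.yaml                      # validate the rendered manifests

  mkdir -p ./packages
  helm package "$CHART_DIR" -d ./packages    # package the chart as a versioned tar archive
  # how you push the package depends on the repository backend (S3, plain web server, etc.)
  curl --fail -T ./packages/*.tgz "$CHART_REPO/"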

After CD completes, we have a new artifact – an updated Helm chart package (tar archive) of our Kubernetes application with a new version number.

Now, we can run the helm upgrade --install command, creating a new revision for the application release. If something goes wrong, we can always roll back the failed release to the previous revision. For the sake of safety, I suggest first running helm diff (using the helm diff plugin), or at least using the --dry-run flag on the first run, and inspecting the difference between the new release version and the already installed revision. If you are OK with the upcoming changes, accept them and run the helm upgrade --install command without the --dry-run flag.
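
As a rough illustration (the release and chart names below are placeholders), this safe upgrade flow could look like this:

  RELEASE=my-app
  CHART=myrepo/app-chart                     # hypothetical chart reference

  helm diff upgrade "$RELEASE" "$CHART"      # show the diff (requires the helm-diff plugin)
  helm upgrade --install "$RELEASE" "$CHART" --dry-run   # or render without applying

  # if the changes look good, apply them for real
  helm upgrade --install "$RELEASE" "$CHART"

  # and if the new revision misbehaves, roll back to the previous one
  helm rollback "$RELEASE" 0                 # revision 0 means "previous" in recent Helm versions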

Kubernetes Continuous Deployment (CD)

Based on the above definition, to achieve Continuous Deployment we should try to avoid all manual steps besides the git push for code and configuration changes. All actions running after the git push should be 100% automated and deliver all changes to the corresponding runtime environment.

Let’s take a look at the manual steps from the “Continuous Delivery” pipeline and think about how we can automate them.


Automate: Update microservice imageTag after successful docker push

After a new Docker image for some microservice is pushed to a Docker registry, we would like to update the microservice Helm chart with the new Docker image tag. There are (at least) two options to do this.

  1. Add a Docker registry WebHook handler (for example, using AWS Lambda). Take the new image tag from the DockerHub push event payload and update the corresponding imageTag in the application Helm chart. For GitHub, we can use the GitHub API to update a single file, or bash scripting with a mixture of sed and git commands.
  2. Add an additional step to every microservice CI pipeline, after the docker push step, that updates the corresponding imageTag in the microservice Helm chart (a rough sketch follows this list)
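
A rough sketch of the second option, using the sed-and-git approach mentioned above; the chart git repository, file name, and imageTag key are hypothetical and depend on how your application chart is laid out:

  # CI step (after docker push): bump the microservice image tag in the application chart repo
  set -e
  NEW_TAG="$1"                                         # the image tag that was just pushed
  CHART_GIT_REPO=git@github.com:example/app-chart.git  # hypothetical chart git repository

  git clone "$CHART_GIT_REPO" app-chart
  cd app-chart
  # assumes a simple flat "imageTag:" key in values.yaml (GNU sed); adjust for your chart layout
  sed -i "s/^\(\s*imageTag:\s*\).*/\1${NEW_TAG}/" values.yaml
  git commit -am "Bump imageTag to ${NEW_TAG}"
  git push origin master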

Automate: Deploy Application Helm chart

After a new chart version is uploaded to a chart repository, we would like to deploy it automatically to the “linked” runtime environment and roll back on failure.

A Helm chart repository is not a real server that is aware of deployed charts. Any web server that can serve static files can be used as a Helm chart repository. In general, I like simplicity, but sometimes it leads to a naive design and a lack of basic functionality, and that is the case with the Helm chart repository.
Therefore, I recommend using a web server that exposes a decent API and can send notifications about content changes without a polling loop. Amazon S3 can be a good choice for a Helm chart repository.
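
For example (the bucket name and URL are hypothetical), a simple S3-backed chart repository can be maintained with standard Helm and AWS CLI commands, and an S3 event notification on the bucket can then trigger the deployment pipeline:

  BUCKET=s3://example-helm-charts/production               # hypothetical bucket/prefix, one per environment
  REPO_URL=https://example-helm-charts.s3.amazonaws.com/production

  mkdir -p ./repo
  helm package ./app-chart -d ./repo                       # package the chart locally
  # regenerate index.yaml (for an existing repo, merge with the previous index using --merge)
  helm repo index ./repo --url "$REPO_URL"
  aws s3 sync ./repo "$BUCKET"                             # upload the package and the index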

Once you have a chart repository up and running and can get notifications about content updates (as a WebHook or with a polling loop), you can take the next steps towards Kubernetes Continuous Deployment.

  1. Get updates from the Helm chart repository: a new chart version
  2. Run the helm upgrade --install command to update/install the new application version on the “linked” runtime environment
  3. Run post-install and in-cluster integration tests
  4. Roll back to the previous application revision on any “failure” (a sketch of steps 2–4 follows this list)
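
A simplified sketch of such an automated deploy step (release and chart names are placeholders) could look like this:

  RELEASE=my-app
  CHART=myrepo/app-chart

  # upgrade (or install), wait for resources to become ready, then run in-cluster tests
  if helm upgrade --install "$RELEASE" "$CHART" --wait && helm test "$RELEASE"; then
    echo "Deployment of $RELEASE succeeded"
  else
    echo "Deployment or tests failed, rolling back"
    helm rollback "$RELEASE" 0                             # 0 = previous revision
  fi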

Summary

This post describes the Kubernetes Continuous Delivery pipeline we have managed to set up so far. There are still things we need to improve and change in order to achieve fully automated Continuous Deployment.

We constantly change Codefresh to be the product that helps us and our customers build and maintain effective Kubernetes CD pipelines. Give it a try and let us know how we can improve it.


Hope you find this post useful. I look forward to your comments and any questions you have.

New to Codefresh? Create Your Free Account Today!


5 thoughts on “Continuous Delivery and Continuous Deployment for Kubernetes (part 2)”

  1. Are you saying you inject all env-specific values into the packaged chart?
    How do you reuse the same snapshot across distinct environments?

    1. Default values are in the chart, but we override most values and decrypt secret values per environment (production, staging, testing) when deploying a chart.
      It’s indeed a headache to maintain a set of variables per environment. For example, adding a new variable requires adding it to each environment.

      We are working to simplify management of multiple variable files, environments, and secrets.

  2. Great article.

    Would you reuse Helm charts in order to avoid duplication? I have seen many projects with tens of similar services. Would you consider the idea of running similar kinds of services from the same chart?

    1. In a situation where you have many services that are all deployed the same way, you can potentially have “one chart to rule them all”, where you are simply passing in different Docker image refs every time you deploy.

      However, with the approach described above, it becomes difficult to see what’s really running in your cluster using a “helm list” (all chart names are the same).

      What might be a better approach is to develop a “common” chart that is included by all of your other services’ charts. Here is an example of 3 different charts that all depend on a chart called “simplepod”: https://github.com/codefresh-io/helm-chart-examples/tree/master/chart-of-charts/charts

  3. We are starting to go down the K8s and Helm path and these articles have been very helpful. I am working on planning our CD pipeline and a challenge I am still facing is how to deal with SemVer for Helm charts while doing Continuous Delivery.

    In my research on the Internet I see many folks mention that SemVer has to be handled manually, as part of the code commit. This makes it difficult for many developers working on the same codebase and to use CD.

    I was curious if you had any advice for dealing with SemVer in something like the Kubernetes Continuous Deployment scenario you outline above? Any considerations when your Helm chart includes multiple containers, each of which is versioned independently?

    Thanks in advance!
