Kubernetes CD

What is Continuous Delivery?

Continuous Delivery means that your software is always ready to deploy, at the push of a button. But how does that work? How does CD work with platforms like Kubernetes?

A typical continuous delivery process

A typical continuous delivery process would include the following:

  1. Source code of the application is checked out
  2. Unit tests are run
  3. Source code is packaged/compiled
  4. Integration/component/end-to-end tests are run
  5. Security scanning is performed
  6. Additional checks such as linting are performed
  7. An artifact is created (e.g. a Docker image or JAR file)
  8. The artifact is stored and can be easily pushed to production

The whole process can be modelled as a pipeline of multiple steps. The main point of Continuous Delivery is that every artifact is a potential candidate for production: it is complete at the end of the pipeline and doesn’t need any additional testing or polishing before it reaches production.
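As an illustration, here is a minimal sketch of how such a pipeline could be expressed in a declarative YAML definition. The step names, repository, and commands are illustrative assumptions; the exact syntax depends on the CI/CD platform you use:

```yaml
# Hypothetical pipeline definition; the exact syntax depends on your CI/CD tool.
pipeline:
  - checkout:            # 1. check out the source code
      repo: git@example.com:my-org/my-app.git        # hypothetical repository
  - unit_tests:          # 2. run unit tests
      command: make test
  - package:             # 3. package/compile the source code
      command: make build
  - integration_tests:   # 4. integration/component/end-to-end tests
      command: make e2e
  - security_scan:       # 5. security scanning
      command: make scan
  - lint:                # 6. additional checks such as linting
      command: make lint
  - build_artifact:      # 7. create the artifact (e.g. a Docker image)
      command: docker build -t registry.example.com/my-app:${GIT_SHA} .
  - store_artifact:      # 8. store the artifact so it can be pushed to production
      command: docker push registry.example.com/my-app:${GIT_SHA}
```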

As a last step, a human can approve (send to production) or reject (keep in storage) the artifact. This approval can take multiple forms. It can be the merging of a pull request into the mainline (i.e. trunk-based development) or simply the continuation of the pipeline to a set of steps that deploy the artifact to production.

Continuous Delivery assumes the presence of Continuous Integration, meaning that the artifact is created after all developers have merged/integrated their unit of work into the same Git branch.

Additionally, if all release candidates (that satisfy quality requirements) are sent to production immediately without any human intervention, then the process takes the form of Continuous Deployment. A continuous deployment pipeline does not contain any manual approval steps: it starts from source code and finishes with a production deployment.

Continuous Delivery with Kubernetes

In traditional applications based on virtual machines, deploying an artifact means copying it to the target host and initializing it. Kubernetes clusters work in a different way: they pull their deployment artifact on their own from a Docker registry.

Therefore, a deployment to a Kubernetes cluster takes place simply by instructing the cluster on what kind of resources it should contain, in the form of application manifests. The manifests describe the artifact location and other associated resources (configuration settings, load balancers, etc.). The cluster is then responsible for fetching the artifact and initializing it as a cluster resource.
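For example, a minimal Deployment manifest describing the desired state might look like the following. The names, namespace, and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # placeholder application name
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.3   # artifact location in the registry
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: my-app-config                  # associated configuration settings
```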

This means that sending a deployment message to a Kubernetes cluster doesn’t actually deploy an artifact. Instead, it starts a reconciliation process between the desired cluster state (described in the manifests) and the actual cluster state.

Kubernetes is powered by a well-defined API that can be used for resource creation and deployment. In the most typical scenario, the associated `kubectl` command can be used to perform deployments in an imperative manner, by sending specific application manifests to the cluster and asking it to “apply” them.

Coming back to the Continuous Delivery topic, the most basic way to deploy inside a pipeline is by simply wrapping the `kubectl` command inside a pipeline step. This approach can work for simple scenarios but has several shortcomings with regard to traceability and auditing.
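As a sketch of this approach, the step below simply runs `kubectl` from inside a pipeline (in Codefresh this would be a freestyle step). The image, context name, and manifest path are assumptions, and valid cluster credentials are assumed to be available to the step:

```yaml
deploy_with_kubectl:
  title: Deploy by applying manifests
  image: bitnami/kubectl:latest                # any image that contains kubectl
  commands:
    # Assumes a kubeconfig/context is already available inside the pipeline
    - kubectl config use-context my-cluster    # hypothetical context name
    - kubectl apply -f ./manifests/deployment.yaml
    - kubectl rollout status deployment/my-app -n production
```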

A better way to deploy to Kubernetes is by using the cluster itself as the deployment mechanism. Instead of sending a direct message to the cluster API when a deployment takes place, we can simply use Git as a central record of all deployments and commit the application manifest to a Git repository that is also accessible to the cluster.

An application running inside the cluster, such as Argo CD, can then monitor this Git repository and automatically apply any resource manifests contained in it. This process is called GitOps and has several advantages compared to the imperative method of using kubectl commands.
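A minimal Argo CD Application resource that points the cluster at such a repository could look like the following. The repository URL, path, and namespaces are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/my-org/my-app-manifests.git   # Git repo holding the manifests
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc    # the cluster Argo CD runs in
    namespace: production
  syncPolicy:
    automated:           # automatically apply any changes committed to Git
      prune: true
      selfHeal: true
```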

Deploy to Kubernetes with Codefresh

Codefresh has full support for both imperative and declarative deployments to Kubernetes. You can choose among multiple options depending on your organization’s needs.

In the simplest scenario, if you want to deploy an application right away, Codefresh will even auto-generate the application manifests for you.

Thus, the quickest way to deploy to Kubernetes with Codefresh is by using the built-in deploy step inside a pipeline. This step is available to all Codefresh pipelines and requires only a Docker image. A manifest for a deployment and a service will be automatically generated for you. The image is assumed to be in a connected registry, and the cluster must also be known to Codefresh.
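A minimal sketch of such a deploy step is shown below. The cluster, namespace, registry, and image values are placeholders, and the exact field names should be checked against the Codefresh pipeline documentation:

```yaml
deploy_to_cluster:
  title: Deploy to Kubernetes
  type: deploy
  kind: kubernetes
  cluster: my-cluster          # a cluster already connected to Codefresh
  namespace: default
  service: my-app              # name used for the generated deployment/service
  candidate:
    image: '${{build_image}}'  # image built earlier in the pipeline
    registry: dockerhub        # a registry integration connected to Codefresh
```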

If you already have your own manifests, Codefresh offers additional deployment mechanisms that even support some basic templating capabilities.

For GitOps deployments, Codefresh comes with a pipeline step that can instruct Argo CD to perform a sync with the Git state.

A similar step also exists for Argo Rollouts if you want to deploy to Kubernetes using Canaries or Blue/green deployments.
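The underlying resource in that case is an Argo Rollouts Rollout, which replaces a standard Deployment and describes the progressive delivery strategy. A minimal canary sketch (image, replica count, and step weights are illustrative) looks like this:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-app
spec:
  replicas: 5
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.4   # new version to roll out
  strategy:
    canary:
      steps:
        - setWeight: 20          # send 20% of traffic to the new version
        - pause: {}              # wait for manual promotion (or automated analysis)
        - setWeight: 50
        - pause: {duration: 10m}
```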

You can easily insert an Argo Sync step in the middle of a pipeline, allowing you to perform pre-sync and post-sync actions:
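The sketch below shows one way such a pipeline segment might look. The `argocd-sync` step comes from the Codefresh step marketplace; the argument names shown here (`context`, `app_name`, `wait_healthy`) and the surrounding steps are assumptions to be verified against the step’s documentation:

```yaml
pre_sync_checks:
  title: Pre-sync actions
  image: alpine:3.19
  commands:
    - echo "Running smoke checks before syncing..."        # placeholder action
sync_app:
  title: Sync Argo CD application
  type: argocd-sync               # marketplace step; argument names are assumptions
  arguments:
    context: my-argocd            # Argo CD integration configured in Codefresh
    app_name: my-app
    wait_healthy: true
post_sync_tests:
  title: Post-sync actions
  image: alpine:3.19
  commands:
    - echo "Running verification tests after the sync..."  # placeholder action
```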

Specifically for GitOps, Codefresh also has pipeline steps that can perform Git commits inside a pipeline and even open Pull Requests.

This means that you can even create a Continuous Deployment pipeline that runs in a fully automated manner by using both Argo CD and Pull requests inside a single pipeline.
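As a sketch of the commit part of such a pipeline, a freestyle step can update the manifest in the GitOps repository and push the change, leaving Argo CD to apply it. The repository, file path, and credential variable below are hypothetical:

```yaml
update_gitops_repo:
  title: Commit new image tag to GitOps repo
  image: alpine/git:latest
  commands:
    # Hypothetical repository and file; GIT_TOKEN is assumed to be a pipeline variable
    - git clone https://${GIT_TOKEN}@github.com/my-org/my-app-manifests.git
    - cd my-app-manifests
    - 'sed -i "s|image:.*|image: registry.example.com/my-app:${{CF_SHORT_REVISION}}|" k8s/deployment.yaml'
    - git config user.email "ci@example.com"
    - git config user.name "CI Bot"
    - git commit -am "Deploy ${{CF_SHORT_REVISION}}"
    - git push origin main       # or push a branch and open a pull request instead
```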