Editor's note: We did a webinar on this very topic. Scroll to the bottom of this post or click here to view the webinar.
If you’re looking to deploy your services to Kubernetes, Helm works great. However, once you start deploying to multiple environments, developing code as a team, or automating in a CI/CD pipeline, you start to run into limitations with Helm.
Codefresh Pipelines using Helmfile have the power and flexibility to address these issues and many others. Helmfile is also one of the best ways to organize your Helm code and values.
The nice thing is that Helmfile just manages Helm, and it works with both Helm 2 and 3. You are not locked in – you can still run plain old Helm, with some powerful Go/Sprig Templating exposed.
Helmfile also streamlines the workflow, allowing teams to develop Helm Charts together using clean and reusable Codefresh Pipelines.
In this tutorial, we're going to walk through a deploy starting from a completely clean GKE install. For our example project, I'll be using a monolithic git repository; however, Helmfile works great with a repository per Helm Chart as well.
Getting Started
Before you can complete the steps to run and apply a Codefresh Helmfile, you will need the following:
- GKE Cluster (I used 1.15.11-gke.11)
- IAM
- Kubectl, Helm, Helmfile
It's worth noting that some basic knowledge of K8s and Helm is a prerequisite. If you aren't familiar with them already, there are a lot of great resources out there, including some Codefresh Webinars. Windows is going to run into some issues around the plugins if you're using PowerShell or native Windows shells; I'd recommend using Windows Subsystem for Linux (WSL), which should get you past any Windows issues. If you encounter problems, post below in the comments.
What is Helmfile?
Helm is great in that (1) it gives us the power of dependency management, i.e. our Chart can depend on other Charts, and (2) it allows templating of yaml files. Helmfile adds dependencies between separate Helm installs: we can install multiple Helm charts with one command, while under the hood Helmfile simply uses Helm to install each Chart.
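For example, here's a minimal sketch (the release names are just illustrative) of how a release can declare a dependency on another release with `needs`, so Helmfile installs them in order:

```yaml
# helmfile.yaml (sketch): myapp is only installed after mytraefik
releases:
  - name: mytraefik
    namespace: default
    chart: stable/traefik
  - name: myapp
    namespace: mynamespace
    chart: ./charts/myapp
    needs:
      - default/mytraefik   # "namespace/release" that must be installed first
```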
Helmfile also allows us to template values.yaml, which standard Helm Charts don't allow. Helm wants the yaml file to be declarative, and declarative languages work great for ensuring that a system gets to the final state we want without worrying about imperative coding issues. But maintaining a bunch of static values.yaml files is messy and not DRY, which is why declarative tools and languages often end up with imperative programming in front of them.
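As a quick illustration, here is a minimal sketch of templated values: a values file with a .gotmpl extension is rendered with Go/Sprig templating before being handed to Helm (the variable names here are just examples):

```yaml
# helmfile.yaml (sketch)
releases:
  - name: myapp
    chart: ./charts/myapp
    values:
      - values.yaml.gotmpl   # .gotmpl files are templated; plain .yaml files are not
```

```yaml
# values.yaml.gotmpl (sketch)
replicaCount: {{ env "REPLICAS" | default "2" }}
image:
  tag: {{ requiredEnv "IMAGE_TAG" }}   # fails fast if IMAGE_TAG is unset
```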
Helmfile allows you to manage any number of Helm charts. Here’s an example of some of the things Helmfile can help you accomplish:
- Set up Helm repos
- Automate commands to run before/after Helm via Hooks
- Preview changes before deployment, similar to Terraform Plan
- Install dependencies as a separate Helm deploy with one command
- Manage the order of your Helm chart dependencies
- Template values.yaml and Helm parameters with Golang/Sprig
- Manage secrets as part of a Helm deploy
- Run a script or acquire data from another source
- Use environment variables in templating
Most importantly, Helmfile prevents vendor lock-in and allows for Golang Templating free from restrictions.
Step 1: Create the GKE Cluster
First, I logged into my Google Cloud account.
```
$ gcloud auth login
Your browser has been opened to visit:

    https://accounts.google.com/o/oauth2/auth?code_challenge=V4w93Tg2vSgt3dgrppcoXk7TtNpHHiiOH3vNZobkpHk&prompt=select_account&code_challenge_method=S256&access_type=offline&redirect_uri=http%3A%2F%2Flocalhost%3A8085%2F&response_type=code&client_id=32555940559.apps.googleusercontent.com&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fappengine.admin+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcompute+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Faccounts.reauth

WARNING: `gcloud auth login` no longer writes application default credentials.
If you need to use ADC, see: gcloud auth application-default --help

You are now logged in as [[email protected]].
Your current project is [site-impact-infra].  You can change this setting by running:
  $ gcloud config set project PROJECT_ID

$ gcloud container clusters get-credentials cluster-1 --zone us-central1-c --project rlt-sandbox
Fetching cluster endpoint and auth data.
kubeconfig entry generated for cluster-1.
```
To create your GKE cluster, use the following commands as an example. If you want more detail on this step, you can follow Google Cloud's Quickstart page here.
```
$ gcloud container clusters create demo-cluster --cluster-version 1.15.11-gke.13
WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using `--no-enable-ip-alias` flag. Use `--[no-]enable-ip-alias` flag to suppress this warning.
WARNING: Newly created clusters and node-pools will have node auto-upgrade enabled by default. This can be disabled using the `--no-enable-autoupgrade` flag.
WARNING: Starting in 1.12, default node pools in new clusters will have their legacy Compute Engine instance metadata endpoints disabled by default. To create a cluster with legacy instance metadata endpoints disabled in the default node pool, run `clusters create` with the flag `--metadata disable-legacy-endpoints=true`.
WARNING: Your Pod address range (`--cluster-ipv4-cidr`) can accommodate at most 1008 node(s).
This will enable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
Creating cluster demo-cluster in europe-west1-b... Cluster is being health-checked (master is healthy)...done.
Created [https://container.googleapis.com/v1/projects/site-impact-infra/zones/europe-west1-b/clusters/demo-cluster].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/europe-west1-b/demo-cluster?project=site-impact-infra
kubeconfig entry generated for demo-cluster.
NAME          LOCATION        MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION    NUM_NODES  STATUS
demo-cluster  europe-west1-b  1.15.11-gke.13  35.195.222.178  n1-standard-1  1.15.11-gke.13  3          RUNNING

Updates are available for some Cloud SDK components. To install them, please run:
  $ gcloud components update

To take a quick anonymous survey, run:
  $ gcloud survey

$ gcloud container clusters get-credentials demo-cluster
Fetching cluster endpoint and auth data.
kubeconfig entry generated for demo-cluster.
```
Example Application:
In order to demonstrate a real-world example, we will be deploying a simple Hello World application to Kubernetes. The Hello World application is simply an Nginx site; however, we want Hello World exposed to the public internet via Traefik Ingress. Along with installing an Ingress, we also want to create one namespace for Hello World and one for Traefik. We will deploy our application to multiple environments with the option to configure DNS & SSL.
In Part 1, we will cover deploying Hello World, but we won't get to Ingress, DNS, SSL, or multiple-environment support. Later parts of the blog will continue to build until we have a more complete real-world example deploying Hello World to a completely new Kubernetes cluster.
Step 2: Set Up Helm Repos
As of Helm 3, no Chart Repositories come pre-configured. Helm Chart Repositories are expected to be set up before running Helm, but Helm doesn't give you a way to manage them; it becomes something that either lives in documentation or gets automated with a custom script. With Helmfile you can declaratively set up your Helm Repos, and Helmfile will run the commands to manage those Repositories before trying to install the Helm Chart.
In this step, we will set up the "stable" Helm Repository, which contains the Traefik chart we are going to use for Ingress. With the "stable" repository set up, we can successfully install the Traefik Chart.
As you can see, attempting to install "mytraefik" right now fails because the repository is missing:
```
$ helm upgrade --install mytraefik stable/traefik
Error: failed to download "stable/traefik" (hint: running `helm repo update` may help)
```
Helmfile allows setting up all different types of Helm Repositories, including public and private repositories with credentials, using GitHub as a Chart Repository, and so on. Here we will just set up the public stable chart repository.
```
cat << EOF > helmfile.yaml
repositories:
  - name: stable
    url: https://kubernetes-charts.storage.googleapis.com

releases:
  - name: mytraefik
    namespace: default
    chart: stable/traefik
    labels:
      name: traefik-public
    version: 1.86.2
EOF
```
Then apply Helmfile.
```
$ helmfile apply
Adding repo stable https://kubernetes-charts.storage.googleapis.com
"stable" has been added to your repositories

Updating repo
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

<REDACTED>

Upgrading release=mytraefik, chart=stable/traefik
Release "mytraefik" does not exist. Installing it now.
NAME: mytraefik
LAST DEPLOYED: Thu May 7 11:38:01 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get Traefik's load balancer IP/hostname:

     NOTE: It may take a few minutes for this to become available.

     You can watch the status by running:

         $ kubectl get svc mytraefik --namespace default -w

     Once 'EXTERNAL-IP' is no longer '<pending>':

         $ kubectl describe svc mytraefik --namespace default | grep Ingress | awk '{print $3}'

2. Configure DNS records corresponding to Kubernetes ingress resources
   to point to the load balancer IP/hostname found in step 1

Listing releases matching ^mytraefik$
mytraefik   default   1   2020-05-07 11:38:01.914973 -0500 CDT   deployed   traefik-1.86.2   1.7.20

UPDATED RELEASES:
NAME        CHART            VERSION
mytraefik   stable/traefik   1.86.2
```
Now that Helmfile has installed the proper charts, it's time to look at Helmfile Hooks.
Step 3: Run Helmfile Hooks
Another common issue is wanting to run a command before or after Helm. For example, maybe you want to run Terraform before Helm, or, as in this example, create a missing namespace before running Helm. Helmfile Hooks can be triggered by the following events:
- Prepare: events are triggered after each release in your Helmfile is loaded from YAML, before execution.
- Presync: events are triggered before each release is applied to the remote cluster. This is the ideal event to execute any commands that may mutate the cluster state as it will not be run for read-only operations like lint, diff, or template.
- Postsync: events are triggered after each release is applied to the remote cluster. Like presync, this hook only runs for operations that mutate cluster state, not for read-only operations like lint, diff, or template.
- Cleanup: events are triggered after each release is processed.
You can find more details about Helmfile Hooks on GitHub. These aren't to be confused with Helm Chart Hooks, which are similar but are actually part of a Helm chart and limited to working with K8s resources. Helm Chart Hooks don't allow you to do things like run Terraform or a custom script you've created, so Helmfile Hooks and Helm Chart Hooks are complementary and do not overlap. A minimal presync sketch follows below.
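For instance, here is a minimal presync sketch that runs a script (the script name is hypothetical) right before a release is applied:

```yaml
releases:
  - name: myapp
    chart: ../../charts/myapp
    hooks:
      - events: ["presync"]
        showlogs: true
        command: "../../scripts/pre_deploy_check.sh"   # hypothetical script
        args: ["myapp"]
```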
Alright, let's use the "prepare" event to execute a script that will create a namespace if it's missing. Automatically creating a missing namespace was a feature of Helm 2, but it has been removed from Helm 3.
First, use Helm to create a Helm Chart for the Hello World service; we'll name the chart "myapp".
```
$ mkdir -p helm/charts
$ cd helm/charts/
$ helm create myapp
Creating myapp
$ tree .
.
└── myapp
    ├── Chart.yaml
    ├── charts
    ├── templates
    │   ├── NOTES.txt
    │   ├── _helpers.tpl
    │   ├── deployment.yaml
    │   ├── ingress.yaml
    │   ├── service.yaml
    │   ├── serviceaccount.yaml
    │   └── tests
    │       └── test-connection.yaml
    └── values.yaml

4 directories, 9 files
```
```
$ helm upgrade --install myapp ./myapp --namespace mynamespace
Release "myapp" does not exist. Installing it now.
Error: create: failed to create: namespaces "mynamespace" not found
```
You'll notice that the namespace doesn't exist, and as a result, the command failed. So our next step is to use a Helmfile hook to make sure the namespace is created before we proceed with the install.
```
$ cd ..
$ mkdir -p helmfile/myapp
$ cd helmfile/myapp/
$ cat <<EOF > helmfile.yaml
releases:
  - name: myapp
    namespace: mynamespace
    chart: ../../charts/myapp
    version: 0.1.0
    hooks:
      - events: ["prepare"]
        showlogs: true
        command: "../../scripts/create_namespace.sh"
        args: ["mynamespace"]
EOF
```
```
$ cd ../..
$ mkdir scripts
$ cat <<'EOF' > scripts/create_namespace.sh
#!/bin/bash
NS=$1

# Check whether the namespace already exists
kubectl get namespace $NS > /dev/null 2>&1
exit_status=$?

if [ $exit_status -eq 0 ]; then
  echo "Namespace $NS already exists"
else
  echo "Missing Namespace $NS creating now"
  kubectl create namespace $NS
fi
EOF
$ chmod +x scripts/create_namespace.sh
```

Note that the heredoc delimiter is quoted ('EOF') so the shell writes $NS, $1, and $? into the script literally instead of expanding them while creating the file.
At this point, you should have the following layout:
```
$ tree .
.
├── charts
│   └── myapp
│       ├── Chart.yaml
│       ├── charts
│       ├── templates
│       │   ├── NOTES.txt
│       │   ├── _helpers.tpl
│       │   ├── deployment.yaml
│       │   ├── ingress.yaml
│       │   ├── service.yaml
│       │   ├── serviceaccount.yaml
│       │   └── tests
│       │       └── test-connection.yaml
│       └── values.yaml
├── helmfile
│   └── myapp
│       └── helmfile.yaml
└── scripts
    └── create_namespace.sh
```
Finally, we can apply Helmfile.
```
$ cd helmfile/myapp/
$ helmfile apply
Building dependency release=myapp, chart=../../charts/myapp
helmfile.yaml: basePath=.
hook[prepare] logs | Missing Namespace mynamespace creating now
hook[prepare] logs | namespace/mynamespace created
hook[prepare] logs |

<REDACTED>

Upgrading release=myapp, chart=../../charts/myapp
Release "myapp" does not exist. Installing it now.
NAME: myapp
LAST DEPLOYED: Thu May 7 12:30:51 2020
NAMESPACE: mynamespace
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace mynamespace -l "app.kubernetes.io/name=myapp,app.kubernetes.io/instance=myapp" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace mynamespace port-forward $POD_NAME 8080:80

Listing releases matching ^myapp$
myapp   mynamespace   1   2020-05-07 12:30:51.919429 -0500 CDT   deployed   myapp-0.1.0   1.16.0

UPDATED RELEASES:
NAME    CHART                VERSION
myapp   ../../charts/myapp   0.1.0
```
Near the top, we can see our logs confirming that we successfully created the namespace. Additionally, we were able to install our Helm chart into the newly created namespace.
Helmfile Diff
Helm is great for applying changes and knowing that things will eventually be consistent. One issue, though, is that it's a black box: like Terraform Plan, it would be great to see what changes a Helm chart install will make to the system before they happen. Luckily, Helmfile leverages the Helm Diff plugin to make this happen. If you've been following along, you've been seeing REDACTED sections; these contained the diff output.

To use it on the command line, simply run "helmfile diff". A diff is also run when "helmfile apply" is run, but unlike a Terraform apply, it will not ask you if you want to proceed; it simply proceeds with the apply. So be careful when running "helmfile apply".

Because we want to ensure that helmfile diff is always run and reviewed before running helmfile apply, and because we want others to be able to run deploys and get insight into them, let's also set up a Codefresh Pipeline now. First, we need to get our K8s cluster and Helmfile set up to run in Codefresh.
Note: currently there is an open issue regarding the 3-way diff supported in Helm 3. Unfortunately, the Helm Diff plugin has not been updated to support a 3-way diff. This means that if manual changes have been made to your cluster, the diff will not see those changes.
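As a quick reference, the basic command-line workflow looks like this (assuming the Helm Diff plugin is installed):

```
# Preview what would change, without touching the cluster
$ helmfile diff

# Runs a diff first, then immediately proceeds with the deploy (no prompt)
$ helmfile apply
```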
Step 4: Configure Codefresh to See GKE Cluster
We need to add the GKE cluster we previously set up into Codefresh. This allows us to create a pipeline that will deploy to that integrated cluster.
To begin, click “Kubernetes” on the left panel. Then click “ADD CLUSTER” in the upper right.
Add a cluster to “Custom Providers.”
If you need more assistance during this part of the process, check out the Codefresh tutorial here to add your cluster.
Unless you want to manually update your codefresh.yaml, make sure to name your cluster "codefresh-helmfile-demo". This is because Codefresh automatically sets up the kube context for you by injecting a kube config file, and your kube context is based on your cluster name.
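For reference, this is roughly what the pipeline later does with the injected kube config (the context name matches the cluster name you chose):

```
$ kubectl config use-context codefresh-helmfile-demo
```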
Click “TEST CONNECTION,” and if successful, click “SAVE.”
Step 5: Create a Codefresh Pipeline
Now that we have our Kubernetes cluster setup and integrated with Codefresh, and we have a working Helmfile to deploy Hello World, let’s build a Codefresh CI/CD pipeline. Our Pipeline will:
- Clone Git Repository
- Run Helmfile Diff to show what a deploy of Hello World will change
- Pause for User Confirmation before deploying
- Run Helmfile Apply to deploy Hello World
When we are done, we will have a Pipeline like the one shown below.
The next step is to create a new Codefresh Pipeline.
To load the pipeline from your Git repo, point the pipeline definition at the workflow YAML in the repository, in this case "./code-01/codefresh.yml". See below for a copy of the codefresh.yml.
```
$ cat code-01/codefresh.yml
# More examples of Codefresh YAML can be found at
# https://codefresh.io/docs/docs/yaml-examples/examples/
version: "1.0"
# Stages can help you organize your steps in stages
stages:
  - "clone"
  - "dryrun"
  - "approve"
  - "deploy"

steps:
  clone:
    title: "Cloning repository"
    type: "git-clone"
    repo: "rootleveltech/codefresh-helmfile-webinar"
    # CF_BRANCH value is auto set when pipeline is triggered
    # Learn more at codefresh.io/docs/docs/codefresh-yaml/variables/
    revision: "${{CF_BRANCH}}"
    git: "github-1"
    stage: "clone"

  dryrun:
    title: "Dry Run"
    type: freestyle
    image: bradenwright/cfstep-helmfile:0.111.0-customized
    working_directory: "${{clone}}"
    environment:
      - COMMANDS=diff
    commands:
      - kubectl config use-context $KUBE_CONTEXT
      - cd /codefresh/volume/codefresh-helmfile-webinar/$CODE_DIR/helm/helmfile/$SERVICE
      - python3 /helmfile.py
    stage: "dryrun"

  ask_for_permission:
    title: Deploy?
    type: pending-approval
    stage: "approve"

  deploy:
    title: "Deploy"
    type: freestyle
    image: bradenwright/cfstep-helmfile:0.111.0-customized
    working_directory: "${{clone}}"
    environment:
      - COMMANDS=apply
    commands:
      - kubectl config use-context $KUBE_CONTEXT
      - cd /codefresh/volume/codefresh-helmfile-webinar/$CODE_DIR/helm/helmfile/$SERVICE
      - python3 /helmfile.py
    stage: "deploy"
    when:
      steps:
        - name: ask_for_permission
          on:
            - approved
```
The next step is to add a few variables; how many you need depends on your build. We will cover the details of the Docker image bradenwright/cfstep-helmfile in later parts of the blog. For the purposes of this step, the cfstep-helmfile image that Codefresh makes available would work as well; since the customizations aren't relevant to this post, I'll cover them in detail later.
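For this tutorial's pipeline, the variables referenced in codefresh.yml would look something like this (the values below match this walkthrough; adjust them to your own setup):

```
KUBE_CONTEXT=codefresh-helmfile-demo   # cluster/context name from Step 4
CODE_DIR=code-01                       # repo directory containing this example
SERVICE=myapp                          # helmfile directory of the service to deploy
```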
Now you will be able to run your build. Simply click “Run” using the following defaults:
Within Codefresh, you should see that the pipeline stops to wait for approval. Meanwhile, the output from the Dry Run step will show what’s going to change if/when the Pipeline is “APPROVED” and deployed.
In this example, you can see the Deployment for Myapp is going to change if approved.
Everything looks great!
Congratulations! You’ve successfully run and applied your Codefresh Helmfile.
We also recorded a webinar on this very topic. Watch it now: