How to Kustomize your Codefresh/Argo Runtime


The Codefresh Software Delivery Platform (CSDP) brings together the complete open source Argo toolset (Workflows, Events, CD, and Rollouts) into a single platform for enhanced efficiency and visibility of software deployments at massive scale. If you’re a new CSDP user, one of the first things you’ll do is install the CSDP runtime in one of your Kubernetes clusters. In fact, we’ve recently talked about how to securely scale your Argo CD instances with CSDP, so you may even be installing multiple runtimes.

The runtime is the CSDP component that houses the enterprise distribution of the Argo services. After installing a runtime, a common day-2 activity is to tweak the configuration of one of its Argo services for your specific needs. In this article we’ll dive into how to update their configuration. Hint: we do it GitOps-style!

One of the biggest advantages of GitOps is having repeatable declarative configuration for all of your changes. We’ll be covering 2 example changes in this article, which will illustrate how this works, as well as give you a process that you can repeat for all of your other CSDP runtime service configuration changes.

Our example changes are:

  1. Set Argo CD’s Git polling interval
  2. Configure long-term log storage for Argo Workflows (your Delivery Pipelines)

GitOps Organization

During runtime installation, CSDP creates a Git repository with all of your runtime’s Kubernetes manifests. This approach is a GitOps-style deployment of the Argo services, and provides the full host of GitOps benefits. The runtime installation starts by applying the manifests for Argo CD. After Argo CD is up and running, then Argo CD syncs (installs) the rest of the manifests in the repository. Going forward, any version upgrades or configuration changes to your runtime are made to this Git repository first. Argo CD then syncs those changes for you – including changes to Argo CD itself!

Before we start configuring your Argo services, let’s take a quick look at how this repository is organized. At the root of your repository, you should have 3 directories: apps, bootstrap, and projects. For configuration, we’ll be focused on the apps and bootstrap directories. Let’s go a level deeper, and identify the key subdirectories for configuration – each one corresponds to one of the 4 Argo services.

  • bootstrap/
    • argo-cd/
  • apps/
    • events/
    • rollouts/
    • workflows/

Each of these subdirectories contains Kustomize manifests for the deployment and configuration of its respective service in Kubernetes. If you’re not already familiar with how Kustomize works, don’t worry – I’ll provide some brief explanation as we go through each example.

Example 1: Set Argo CD’s Git polling interval

Usually when I tell new Codefresh users that Argo CD automatically polls their Git repos for changes, their next question is: how often does it do that? Well, default open source Argo CD polls every 180 seconds, while the Codefresh runtime shortens the default interval to 20 seconds. If you find yourself waiting in anticipation for your deployments to start, this faster polling can shave off a few moments.

That said, if you have a runtime with several hundred Argo CD applications and your Git provider limits API requests, then you might not want it polling for changes quite so frequently. Adjusting this setting lets you fine-tune a frequency that makes sense for your organization’s needs.

All of the Argo services are configured in Kubernetes via ConfigMaps and Secrets. You can find a list of all of Argo CD’s ConfigMaps and Secrets in the documentation. For our Git polling setting, we’re going to update the timeout.reconciliation key within the argocd-cm ConfigMap, as documented here: https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/argocd-cm.yaml

First, let’s check the live value of this key within our runtime. In the command below, replace MY_RUNTIME_NAMESPACE with the name of your runtime. Also, note the backslash before the period in timeout.reconciliation.

kubectl -n MY_RUNTIME_NAMESPACE get configmap/argocd-cm \
-o jsonpath='{.data.timeout\.reconciliation}'

My timeout.reconciliation was set to 20s. Let’s say we want to change this value to 60s.

Now that we know the exact value to change within the argocd-cm ConfigMap, let’s figure out how to put this change in our runtime’s Git repo. Since we’re going to be configuring Argo CD in this example, our work will be in this directory path: bootstrap/argo-cd/

Right now, you should see 1 file in that directory:

bootstrap/argo-cd/kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
configMapGenerator:
- behavior: merge
  literals:
  - |
    repository.credentials=- passwordSecret:
        key: git_token
        name: autopilot-secret
      url: https://github.com/
      usernameSecret:
        key: git_username
        name: autopilot-secret
  name: argocd-cm
kind: Kustomization
namespace: MY_RUNTIME_NAMESPACE
resources:
- github.com/codefresh-io/cli-v2/manifests/argo-cd?ref=v0.0.244

In case you’re not familiar with Kustomize, I’ll explain what’s going on. In the resources section, we’re grabbing a manifest which contains Codefresh’s full enterprise deployment of Argo CD for the version of the CSDP runtime that I have installed (0.0.244 at the time of this writing). Before applying this manifest to Kubernetes, Kustomize is going to apply any patches that are specified in this kustomization.yaml file.

There are multiple ways to define patches in Kustomize, but in this case, we have a configMapGenerator section, with behavior: merge (merge = patch). On line 13, you can see that the patch is being applied to the ConfigMap we need, name: argocd-cm. Currently, it is patching the repository.credentials key. We just need to add an additional key for our timeout.reconciliation setting.

I inserted this key at line 13.

apiVersion: kustomize.config.k8s.io/v1beta1
configMapGenerator:
- behavior: merge
  literals:
  - |
    repository.credentials=- passwordSecret:
        key: git_token
        name: autopilot-secret
      url: https://github.com/
      usernameSecret:
        key: git_username
        name: autopilot-secret
  - timeout.reconciliation=60s
  name: argocd-cm
kind: Kustomization
namespace: MY_RUNTIME_NAMESPACE
resources:
- github.com/codefresh-io/cli-v2/manifests/argo-cd?ref=v0.0.244

Now we could commit and push this change to our runtime’s Git repo. However, in order for Argo CD to see the change, we would also need to cycle the argocd-repo-server Deployment, like this:

kubectl -n MY_RUNTIME_NAMESPACE rollout restart deploy/argocd-repo-server

But let’s not do that. Manually cycling a Deployment like this goes against one of the core rules of GitOps, which says that Git is the single source of truth that drives all changes. Instead of making a manual change, what we really need is a mechanism that lets the Deployment know when the ConfigMap has changed. One common Kubernetes technique for this is to add an annotation to the Deployment’s pod template that records a hash of its dependent ConfigMap’s YAML. Since a hash is deterministic, it only changes when the YAML of the ConfigMap changes. When the annotation changes to reflect a change in the ConfigMap YAML, Kubernetes automatically rolls out new pods with the updated annotation.
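To see why this technique is reliable, here’s a tiny self-contained sketch you can run anywhere (the one-line ConfigMap snippets are made up for illustration, not the real argocd-cm):

```shell
# Two versions of a (made-up) ConfigMap data snippet -- only the value differs
cm_before='data: {timeout.reconciliation: 20s}'
cm_after='data: {timeout.reconciliation: 60s}'

# Hash each one (swap in 'shasum -a 256' for 'sha256sum' on Mac)
hash_before=$(printf '%s' "$cm_before" | sha256sum | awk '{print $1}')
hash_before_again=$(printf '%s' "$cm_before" | sha256sum | awk '{print $1}')
hash_after=$(printf '%s' "$cm_after" | sha256sum | awk '{print $1}')

# Identical input -> identical hash; changed input -> new hash. That is what
# makes the annotation a reliable trigger for rolling out new pods.
[ "$hash_before" = "$hash_before_again" ] && echo "unchanged YAML, unchanged hash"
[ "$hash_before" != "$hash_after" ] && echo "changed YAML, changed hash"
```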

To that end, we’ll be adding an annotation to the Deployment that contains a sha256 hash of our argocd-cm ConfigMap. We can use the commands below to get the sha256 hash. Be sure to install yq v4.X on your workstation if you haven’t already, and note the slightly different hash command for Linux vs Mac.

# From a clone of the runtime's Git repo, change to the Argo CD directory
cd bootstrap/argo-cd

# Get a sha256 hash of the argocd-cm ConfigMap
# Use 'sha256sum' on Linux, or 'shasum -a 256' on Mac
kubectl kustomize | yq e '. | select(.kind == "ConfigMap") | select(.metadata.name == "argocd-cm")' - | shasum -a 256 | awk '{print $1}'

Since this is a long command, I’ll explain what it’s doing. It starts with kubectl kustomize to render all of the Argo CD manifests that are defined in Kustomize. Next, yq isolates just the YAML for the argocd-cm ConfigMap. Then shasum or sha256sum gets the sha256 hash of that YAML. Finally, awk strips away any extra characters after the sha256 hash.

My argocd-cm ConfigMap had a sha256 hash of 350826b2b65bba9e7ccfa5f7fa2a747f9e9876d866e681a46c10c00470e95aad. To add the corresponding annotation to our argocd-repo-server Deployment, we’ll add another patch to our kustomization.yaml file. This time, we’ll define our patch within the kustomization.yaml by adding a patches section.

bootstrap/argo-cd/kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
configMapGenerator:
- behavior: merge
  literals:
  - |
    repository.credentials=- passwordSecret:
        key: git_token
        name: autopilot-secret
      url: https://github.com/
      usernameSecret:
        key: git_username
        name: autopilot-secret
  - timeout.reconciliation=60s
  name: argocd-cm
kind: Kustomization
namespace: MY_RUNTIME_NAMESPACE
resources:
- github.com/codefresh-io/cli-v2/manifests/argo-cd?ref=v0.0.244
patches:
  - patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: argocd-repo-server
      spec:
        template:
          metadata:
            annotations:
              argocd-cm-hash: "350826b2b65bba9e7ccfa5f7fa2a747f9e9876d866e681a46c10c00470e95aad"

We’re finally ready to commit and push this change to your runtime’s Git repo. Argo CD will sync the change, and after a few seconds you can verify that the new setting is live in your cluster.
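If you’d like the Git step spelled out, here’s a sketch against a scratch repository so it’s safe to run anywhere (in your real clone of the runtime repo, only the add/commit/push lines are needed; the commit message is just an example):

```shell
# Set up a throwaway repo that mimics the runtime repo's layout
repo=$(mktemp -d)
git -C "$repo" init -q
mkdir -p "$repo/bootstrap/argo-cd"
printf 'kind: Kustomization\n' > "$repo/bootstrap/argo-cd/kustomization.yaml"

# Stage and commit the edited kustomization.yaml
git -C "$repo" add bootstrap/argo-cd/kustomization.yaml
git -C "$repo" -c user.name=example -c user.email=example@example.com \
  commit -q -m 'Set Argo CD timeout.reconciliation to 60s'

# In your real clone: push so Argo CD can pick up and sync the change
#   git push
```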

First, let’s verify that the ConfigMap has been updated with the new 60s setting. After that, let’s verify that the addition of our annotation has caused the argocd-repo-server Deployment to cycle.

$ kubectl -n MY_RUNTIME_NAMESPACE get configmap/argocd-cm \
-o jsonpath='{.data.timeout\.reconciliation}'
60s

$ kubectl -n MY_RUNTIME_NAMESPACE get pod | grep argocd-repo-server
argocd-repo-server-6d6c9bb8bc-2ljr7   0/1   Terminating   0   12d
argocd-repo-server-7b8948bc76-hq542   1/1   Running       0   18s

Looking good!

Example 2: Configure logging for Argo Workflows

The default runtime configuration for Argo Workflows (“Delivery Pipelines” in the Codefresh UI) is to keep your workflow logs for 24 hours. After a workflow is 24 hours old, Argo Workflows will clean up its completed pods, which include the workflow’s logs. The common best practice is to keep workflow logs after their pods are cleaned up by configuring an artifact repository for long-term storage.

Artifact repositories are defined via a ConfigMap. When defining this artifact repository, there are lots of options for its backend storage, such as S3, GCS, Minio, Artifactory, etc. You can check out this summary in the documentation, and perhaps we’ll do a deeper-dive blog article on this topic in the future. For this article, however, I chose to use an S3 bucket for my artifact repository.

I started by following the documented steps for creating my S3 bucket, and associated IAM policy, IAM role, and access key. Your access key should look something like this:

access-key.json

{
    "AccessKey": {
        "UserName": "Bob",
        "Status": "Active",
        "CreateDate": "2022-02-09T18:39:23.411Z",
        "SecretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYzEXAMPLEKEY",
        "AccessKeyId": "AKIAIOSFODNN7EXAMPLE"
    }
}

Next, we’ll need to create a secret in our runtime with the SecretAccessKey and AccessKeyId fields. For testing purposes in your lab, you can manually create this secret in your cluster, and then later on you can create a secure Kubernetes manifest to put in the runtime’s Git repository (more on that in a moment).

Here is the manual creation command. Again, replace MY_RUNTIME_NAMESPACE with the name of your runtime.

kubectl -n MY_RUNTIME_NAMESPACE create secret generic s3-cred \
--from-literal=SecretAccessKey="wJalrXUtnFEMI/K7MDENG/bPxRfiCYzEXAMPLEKEY" \
--from-literal=AccessKeyId="AKIAIOSFODNN7EXAMPLE"

For testing purposes in your personal lab, it’s not critical to add this secret to your runtime’s git repo immediately. But since the principles of GitOps dictate that Git should be our source of truth, before you put this into a real environment you’ll want to place this secret into a secure manifest file using a tool like Bitnami Sealed Secrets or the External Secrets Operator.
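As a sketch of what that looks like with Bitnami Sealed Secrets: first render the plain Secret manifest locally, then encrypt it into a SealedSecret that’s safe to commit. The values below are the example credentials from above, and kubeseal itself needs the Sealed Secrets controller running in your cluster, so that step is shown as a comment.

```shell
# Kubernetes Secret manifests carry their data fields base64-encoded
access_key_id=$(printf '%s' 'AKIAIOSFODNN7EXAMPLE' | base64)
secret_access_key=$(printf '%s' 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYzEXAMPLEKEY' | base64)

# Render the plain Secret manifest locally -- do NOT commit this file as-is
cat > secret-s3-cred.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: s3-cred
  namespace: MY_RUNTIME_NAMESPACE
type: Opaque
data:
  AccessKeyId: ${access_key_id}
  SecretAccessKey: ${secret_access_key}
EOF

# Encrypt it into a SealedSecret manifest that IS safe to commit to Git.
# Requires the Sealed Secrets controller, so kubeseal can fetch its public key:
#   kubeseal --format yaml < secret-s3-cred.yaml > sealedsecret-s3-creds.yaml
```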

Now we can create our ConfigMap, which will tell Argo Workflows to send its workflow logs to our S3 bucket using the credentials from the secret. To construct my ConfigMap, I used this example of the artifact-repositories ConfigMap, as well as this documentation of the S3 fields (skip down to the “artifactRepository” section). And yes, for fans of using IAM roles instead of an Access Key, this documentation includes the S3 fields needed for that, too.

configmap-artifact-repositories.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: artifact-repositories
  annotations:
    workflows.argoproj.io/default-artifact-repository: default-v1
data:
  default-v1: |
    archiveLogs: true
    s3:
      endpoint: s3.amazonaws.com
      bucket: my-bucket
      region: us-east-2
      accessKeySecret:
        name: s3-cred
        key: AccessKeyId
      secretKeySecret:
        name: s3-cred
        key: SecretAccessKey

While I happen to know that this ConfigMap isn’t included in the default configuration of the runtime’s Git repository, it’s always a good idea to double-check. If it turns out the ConfigMap already exists, we would just patch it with the needed fields rather than define it in its entirety.

$ kubectl -n MY_RUNTIME_NAMESPACE get configmap/artifact-repositories
Error from server (NotFound): configmaps "artifact-repositories" not found

OK, good – we can proceed with adding the whole configmap-artifact-repositories.yaml file to our Kustomize configuration for Argo Workflows. But where exactly in our runtime Git repository should we place this file? Since this is for Argo Workflows, let’s start by looking in apps/workflows. You should see these subdirectories (MY_RUNTIME will be the actual name of your CSDP runtime).

  • apps/workflows/
    • base/
    • overlays/
      • MY_RUNTIME/

I’ll provide a bit more background on how Kustomize streamlines these configurations. Kustomize has the concept of a base layer and overlay layers. The idea is that you define a base layer for a complete, working deployment, and then create overlay layers which patch and add to the base layer to create variations.

In our case, we will be adding our configmap-artifact-repositories.yaml to the overlay layer that is specific to our runtime: apps/workflows/overlays/MY_RUNTIME. When you create a secure manifest for your s3-cred secret, you can place it in this directory too.

Just like in example #1, we’re going to need to cycle a Deployment in order for its pods to see the ConfigMap. In this case we need to cycle the workflow-controller Deployment, so let’s grab the hash of the configmap-artifact-repositories.yaml file we just created.

shasum -a 256 configmap-artifact-repositories.yaml | awk '{print $1}'

My hash was 04253bf475ade629b6dc2baae36c3fe68ce9c8871aea826b954b0f4a4a04ef8f.

Finally, we need to tell Kustomize about the manifests for our ConfigMap and (optional) secured secret. I added them as the last 2 lines of the resources section, below. I also added another patch that applies the hash annotation to the workflow-controller Deployment.

apps/workflows/overlays/MY_RUNTIME/kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: MY_RUNTIME_NAMESPACE
patches:
- path: ingress-patch.json
  target:
    group: apps
    kind: Deployment
    name: argo-server
    version: v1
- patch: |-
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: workflow-controller
    spec:
      template:
        metadata:
          annotations:
            artifact-repositories-hash: "04253bf475ade629b6dc2baae36c3fe68ce9c8871aea826b954b0f4a4a04ef8f"
resources:
- ../../base
- ingress.yaml
# Next 2 lines are for logging Workflows to S3
- configmap-artifact-repositories.yaml
- sealedsecret-s3-creds.yaml  # Omit this if you haven't created it yet

The resources field in kustomization.yaml is where we’re adding our files this time because they are complete manifest files, as opposed to patches. If you do a git status, you should see that the following files have been modified/created:

  • apps/workflows/overlays/MY_RUNTIME/kustomization.yaml
  • apps/workflows/overlays/MY_RUNTIME/configmap-artifact-repositories.yaml
  • apps/workflows/overlays/MY_RUNTIME/sealedsecret-s3-creds.yaml (optional)

Commit this change and push it to your runtime’s Git repository. Give it a few seconds to sync, and then you can verify the changes:

$ kubectl -n MY_RUNTIME_NAMESPACE get configmap/artifact-repositories
NAME                    DATA   AGE
artifact-repositories   1      11s

$ kubectl -n MY_RUNTIME_NAMESPACE get pod | grep workflow-controller
workflow-controller-676f77b769-p2h8f   1/1   Running   0   1m8s

The next time a Delivery Pipeline runs in Codefresh, you should see that a new log directory named after its workflow has been created in your S3 bucket. Woohoo!

Summary

Congratulations on making your first configurations to Argo CD and Argo Workflows – GitOps style!

In this article, we saw how one can read about configurations in the Argo documentation and apply them via the CSDP runtime. First, we examined the directory structure of the runtime’s Git repository so we know the correct place to make changes. Then we made Kustomize changes that included patching an existing ConfigMap, as well as creating a new ConfigMap and Secret. With this technique, you’re now well-equipped to configure the Argo services in your CSDP runtime!
