How to Install and Manage Argo CD through Continuous Delivery Pipelines Using Codefresh with GKE/AWS/EKS/AKS

We are about to install and manage Argo CD through a CD pipeline.

“Why would we do that? We can just as well accomplish that through a command like kubectl apply or helm upgrade --install.”

I’m glad you asked.

The primary objective of Argo CD is to help us apply GitOps processes when deploying applications. It is directing us towards the world in which everything is defined as code, and all code is stored in Git. Once we set it up, we will be able to manage all our applications without ever executing any command from a terminal. We could completely remove any access to the control plane while still being able to deploy as frequently as possible. With GitOps, Git becomes the barrier between humans and machines. We push changes to Git, and machines are applying those changes. We are defining our desires, and machines are converging the actual into the desired state.

If you’re not already familiar with GitOps, please watch What Is GitOps And Why Do We Want It? for a brief overview. Similarly, if you are new to Argo CD, you should get a quick hands-on introduction through the Argo CD: Applying GitOps Principles To Manage Production Environment In Kubernetes video.

The problem is that we cannot use Argo CD to apply GitOps-style deployments without deploying Argo CD itself. It’s a chicken-and-egg type of problem. Without Argo CD, we have to deploy applications through commands like kubectl apply and helm install. Yet, if we are to adhere to the GitOps principles that Argo CD promotes, we shouldn’t be running such commands manually from a terminal. All that means that we can use Argo CD for all our deployments, except for the installation and management of Argo CD itself. So, we have to resort to a different tool to manage the Argo CD definitions stored in Git.

What we can do is define a CD pipeline that will deploy and manage Argo CD. Nevertheless, that would result in the same “chicken and egg” problem. Who is going to deploy the CD solution in a way that follows GitOps principles? The short answer is “nobody”. We’ll use Codefresh, which happens to be a SaaS offering, even though you could run it in self-managed mode. It’s already running, and all we have to do is notify it that there is a Git repository with a pipeline that will deploy and manage Argo CD.

We’ll combine Codefresh, Terraform, kubectl, helm, and a bit of custom scripting, and wrap all that into a Codefresh pipeline. If we are successful, this might be the last time you, your colleagues, your pipelines, or any other human or machine executes kubectl, helm, or similar commands. As a matter of fact, when we are finished, you should be able to remove any ingress traffic to the control plane (to the Kube API). You will be able to prevent both people and other applications from accessing it in any form or way except, maybe, in read-only mode. Isn’t that a worthy goal?

Let’s get going.

Setting Up The Scene

We need to set up a few requirements.

To begin with, we’ll need Codefresh CLI. If you are already using codefresh.io, you might be used to doing everything through the UI. Not today. We’ll use the CLI for the few operations we’ll need to do in Codefresh.

Please follow the instructions from the Codefresh CLI Installation page. Once the CLI is installed, you should authenticate it. You’ll need an API key for that. If you do not have it already, go to the User Settings page and click the GENERATE button inside the API Keys section. Type devops-catalog as the KEY NAME, select the SCOPES checkbox, and click the CREATE button. Make sure to copy it by clicking the Copy token to clipboard link below the API KEY field.

All the commands are available in the deploy-argo-cf.sh Gist. Feel free to use it if you’re too lazy to type. There’s no shame in copy & paste.

Execute the command that follows once you have the token.

Please replace [...] with the token you just copied to the clipboard.

codefresh auth \
    create-context devops-catalog \
    --api-key [...]

If you are a Windows user, I will assume that you are running the commands from a Bourne Again Shell (Bash) or a Z Shell (Zsh) and not PowerShell. That should not be a problem if you followed the instructions on setting up Windows Subsystem for Linux (WSL) explained in the Installing Windows Subsystem For Linux (WSL) YouTube video. If you do not like WSL, a Bash emulator like GitBash should do. If none of those is an acceptable option, you might need to modify some of the commands in the examples that follow.

Next, we’ll need a Kubernetes cluster. However, it cannot be just any cluster, since the examples assume that you followed the instructions presented in my previous articles. Please make sure that you went through them, or be prepared to tweak the examples. You can find the instructions in the EKS, AKS, and GKE articles.

If you destroyed the cluster after you finished reading that article (as you should have), I prepared a Gist with the instructions on how to recreate it. Please follow the Create A Cluster section of the Gist related to your favorite Kubernetes distribution.

Now we should have a Kubernetes cluster up and running, and all we did was change a single value in Terraform and let Codefresh take care of executing the steps required to make the magic happen.

If that was a “real world” situation, we would have created a pull request with the proposed changes, reviewed it, tested it, and merged it to master. But this is a demo, so shortcuts are allowed.

We might want to be able to access the newly created cluster from our laptop. Typically, you wouldn’t need that once you start trusting your GitOps processes and automation. Nevertheless, being able to connect to it will be useful for our exercises, so we’ll need to retrieve the Kube config of the new cluster. At the same time, that will give us an insight into one of the steps we’ll need to add to our pipeline. Later on, you’ll see why that extension is necessary, but, for now, let’s focus on how to retrieve the config locally.

I already created a script that will do just that, and it is located in the same repo we’re using to create and manage the cluster. Let’s go there.

Please replace [...] in the commands that follow with your Kubernetes platform. Use gke, eks, or aks as the value.

# Replace `[...]` with the k8s platform (e.g., `gke`, `eks`, or `aks`)
export K8S_PLATFORM=[...]

cd cf-terraform-$K8S_PLATFORM

To connect to the cluster, we’ll need to retrieve some information like, for example, the name of the cluster. Since Terraform created it, we can use its output values to get the info we need. Given that the Terraform state is stored in the remote storage, the first step is to initialize the project so that the local copy becomes aware of it, and download the plugins it might need.

terraform init

The commands to retrieve Kube config differ from one provider to another. So, instead of going through all the variations, I prepared a script that will execute the required commands to create kubeconfig.yaml. Let’s take a look at it.

cat get-kubeconfig.sh

As I already mentioned, the content of that script differs from one provider to another. In the interest of brevity, I’ll skip explaining the differences in the way we retrieve the config for EKS, AKS, and GKE. I’m sure you can explore that script on your own. Go ahead, explore it. I’ll wait.
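If you’d rather have a hint before opening it, here’s a minimal sketch of the kind of vendor commands such a script typically wraps. These are not the exact contents of the script in the repo, and the argument handling is omitted, so treat them as an illustration only.

# EKS (assuming $1 is the cluster name and $2 the region)
aws eks update-kubeconfig --name $1 --region $2 --kubeconfig kubeconfig.yaml

# AKS (assuming $1 is the cluster name and $2 the resource group)
az aks get-credentials --name $1 --resource-group $2 --file kubeconfig.yaml

# GKE (assuming $1 is the cluster name, $2 the region, and $3 the project ID)
KUBECONFIG=kubeconfig.yaml gcloud container clusters get-credentials $1 --region $2 --project $3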

Next, we need to make sure that the script is executable.

chmod +x get-kubeconfig.sh

The way we’ll execute the script differs from one vendor to another. All require the name of the cluster. However, EKS needs to know the region, AKS needs the resource group, and GKE needs both the region and the project ID. So, the arguments of the script are different depending on which Kubernetes platform you are using.

Please execute the command that follows if you are using EKS.

./get-kubeconfig.sh \
    $(terraform output cluster_name) \
    $(terraform output region)

Please execute the command that follows if you are using AKS.

./get-kubeconfig.sh \
    $(terraform output cluster_name) \
    $(terraform output resource_group)

Please execute the commands that follow if you are using GKE.

./get-kubeconfig.sh \
    $(terraform output cluster_name) \
    $(terraform output region) \
    $(terraform output project_id)

export GOOGLE_APPLICATION_CREDENTIALS=$PWD/account.json

The Kube config was created. The only thing missing for us to access the cluster is to define the environment variable KUBECONFIG so that kubectl knows where to find the connection info.

export KUBECONFIG=kubeconfig.yaml

To be on the safe side, let’s output the nodes to confirm that the cluster was indeed created and that we can access it.

kubectl get nodes

The output, in my case, is as follows.

NAME                       STATUS ROLES  AGE   VERSION
ip-10-0-0-250.ec2.internal Ready  <none> 9m47s v1.17.9-eks-4c6976
ip-10-0-1-66.ec2.internal  Ready  <none> 10m   v1.17.9-eks-4c6976
ip-10-0-2-235.ec2.internal Ready  <none> 10m   v1.17.9-eks-4c6976

Now we can move into the main subject and figure out how to install Argo CD.

Installing And Managing Argo CD Using Continuous Delivery Pipelines

We already have a pipeline that creates and manages a Kubernetes cluster using Codefresh with Terraform steps. I will not go through that pipeline since we already covered it in the EKS, AKS, and GKE articles. You must go through those first if you haven’t already, since we will continue where they left off.

Apart from ensuring that the cluster is up-and-running, we also need to ensure that Argo CD is running. That might be the last Kubernetes application we will ever install using kubectl apply, helm upgrade --install, and similar commands. Once Argo CD is operational, it will take care of all other deployments, updates, and deletions.

We could set up Argo CD with commands executed from a terminal, but that would be silly. Given that we already have a pipeline that manages the cluster, it makes perfect sense to extend it with Argo CD setup.

At first glance, we could just as well extend the pipeline with a simple helm upgrade --install command. But things are not that easy. We also need an Ingress controller so that the Argo CD UI is accessible through a domain. There’s more, though. Installing anything in Kubernetes means that we need a valid Kube config, so we’ll need to generate it before setting up the Ingress controller and Argo CD.

You already saw that I prepared get-kubeconfig.sh that will take care of creating Kube config, but, for it to work, we need a few values like, for example, the name of the cluster. We can get the info we need through terraform output commands.

All in all, we need to do the following steps after ensuring that the cluster is up-and-running.

  • Retrieve the info about the cluster
  • Generate Kube config
  • Make sure that the Ingress controller is up-and-running
  • Make sure that Argo CD is up-and-running

Luckily for you, I already created a pipeline that does all that. It’s in the orig directory, so let’s use it to replace the current codefresh.yml pipeline.

Some of the files are stored in the orig directory even though we need them in the root. That might sound strange, but there is a good reason behind it. I might be experimenting with that repo. The files in the root might be configured with my info. To avoid any potential issues, I stored the “golden” version of some of the files inside that directory.

cp orig/codefresh-argocd.yml \
    codefresh.yml

cat codefresh.yml

We’ll comment only on the new parts of that pipeline. To be more precise, only on those that were added to the previous version of the pipeline.

The output, limited to the relevant parts, is as follows.

Do not get confused if the output on your screen is different from mine. I’ll show parts of my output based on EKS. If you’re using AKS or GKE, some of the steps will be different. Nevertheless, the logic and the flow of the steps are the same, and you should not have any trouble matching your output with my explanation.

version: "1.0"
stages:
  ...
  - apps
steps:
  ...
  apply:
    image: hashicorp/terraform:0.13.0
    title: Applying Terraform
    stage: apply
    commands:
      - terraform apply -auto-approve 
      - export CLUSTER_NAME=$(terraform output cluster_name)
      - export REGION=$(terraform output region)
      - export DESTROY=$(terraform output destroy)
      - cf_export CLUSTER_NAME REGION DESTROY
    ...
  apply_app:
    image: vfarcic/aws-helm-kubectl:2.0.47
    title: Applying apps
    stage: apps
    commands:
      - chmod +x get-kubeconfig.sh && ./get-kubeconfig.sh $CLUSTER_NAME $REGION
      - export KUBECONFIG=kubeconfig.yaml
      - kubectl apply --filename https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.35.0/deploy/static/provider/aws/deploy.yaml
      - kubectl wait --namespace ingress-nginx --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=120s
      - helm upgrade --install argocd argo-cd --repo https://argoproj.github.io/argo-helm --namespace argocd --create-namespace --version 1.6.2 --values argocd-values.yaml --wait
    when:
      condition:
        all:
          notDestroy: '"${{DESTROY}}" == "false"'
      branch:
        only:
          - master

To begin with, we added apps to the list of stages. That should provide a clear separation between the steps involved in managing the applications and those from the rest of the pipeline.

Next, we extended the apply step by adding the commands that export the variables we need. We are storing CLUSTER_NAME. In the case of AKS, we are also exporting the RESOURCE_GROUP. EKS needs the REGION, while in the case of GKE, we are retrieving both the PROJECT_ID and the REGION.

If you remember from the previous article, we are using the Terraform destroy variable to let it know that it should obliterate the cluster. But, this time, we’ll need it as an environment variable as well. I’ll explain its purpose soon. For now, please note that we’re storing it as DESTROY.

Shell export commands retain values only during the current session. Since each step is running in a different container, the session is different in each, so whatever we export in one is not available in the others. Given that we need those values in the next step, we are using the cf_export command, which makes environment variables persistent across sessions (containers, steps).
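As a quick illustration of the difference, a hypothetical pair of commands inside a single step might look like this (the variable value is made up for the example).

# Visible only within this step's container
export REGION=us-east-1

# Re-exported with cf_export so the steps that follow can read it as well
cf_export REGION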

The apply_app step belongs to the apps stage where the “real” action is happening, at least as far as Argo CD is concerned.

We need a few tools in the apply_app step.

To begin with, we have to have kubectl and helm, since we’ll use those to install the Ingress controller and Argo CD. On top of those, we need a vendor-specific CLI to retrieve the Kube config. That would be the aws, az, or gcloud CLI, depending on our vendor of choice. To simplify everything and save you a few minutes, I already created images with those tools. You can see that the image field of that step is set to vfarcic/aws-helm-kubectl, vfarcic/az-helm-kubectl, or vfarcic/gke-helm-kubectl.

In the case of AKS and GKE, the first command of that step is to log in. That is not needed for EKS since it uses environment variables for authentication, and we already have them defined in the pipeline.
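For reference, that kind of non-interactive login looks roughly like the commands that follow. The environment variable names are assumptions made for this illustration, not necessarily the ones used in the pipeline.

# AKS (assuming service principal credentials are available as variables)
az login --service-principal --username $CLIENT_ID --password $CLIENT_SECRET --tenant $TENANT_ID

# GKE (assuming the service account key is stored in account.json)
gcloud auth activate-service-account --key-file account.json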

The first command, ignoring the one that logs us in, makes get-kubeconfig.sh executable and runs it. As we already saw when we ran it locally, it creates the kubeconfig.yaml file. Further on, we’re defining the KUBECONFIG variable that tells kubectl to use a non-default path for its config.

The rest of the commands should be straightforward. We are applying NGINX Ingress, waiting until it is ready, and executing helm upgrade --install to ensure that Argo CD is running and configured through the values specified in argocd-values.yaml. If we ever need to change any aspect of it or upgrade it, all we’ll have to do is change the values in that file and push them to Git.
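As a hypothetical illustration (not one of the steps of this exercise), a future upgrade could be as simple as bumping the chart version in the pipeline and pushing the change. The version number below is made up, so don’t copy it blindly.

cat codefresh.yml \
    | sed -e "s@--version 1.6.2@--version 1.6.3@g" \
    | tee codefresh.yml

git add .

git commit -m "Upgrading Argo CD"

git push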

Finally, that step contains a when conditional. It will be executed only when the DESTROY variable is set to false, and the build was triggered by a change in the master branch.

That’s it as far as the parts of the pipeline related to the setup and maintenance of Argo CD are concerned. We are almost ready to let Codefresh take care of everything. The only thing missing is to make sure that Argo CD uses the correct address for accessing the UI. As you already saw from the helm upgrade commands used in the pipeline, the values are stored in argocd-values.yaml, so let’s copy the “golden” version of it and take a quick look.

cp orig/argocd-values.yaml \
    argocd-values.yaml

cat argocd-values.yaml

The output is as follows.

server:
  ingress:
    enabled: true
    hosts:
    - acme.com
  extraArgs:
    insecure: true
installCRDs: false

Those values should be self-explanatory. We’re enabling Ingress, allowing insecure communication since we do not have TLS for our examples, and disabling the installation of the CRDs.

Helm 3 removed the crd-install hook, so CRDs need to be installed as if they were “normal” Kubernetes resources. Think of installCRDs set to false as a workaround.

The problematic part with those values is the host set to acme.com. I could not know in advance whether you have a “real” domain with DNS entries pointing to the external load balancer sitting in front of your cluster. So I had to hard-code a value which we are about to change to a xip.io domain.

We’ll use xip.io since I could not assume that you have a “real” domain that you can use for the exercises or, if you do, that you configured its DNS to point to the cluster.

To generate a xip.io domain, we need to retrieve the IP of the external load balancer created during the installation of the Ingress controller. Unfortunately, the way to do that differs from one provider to another. So, we’ll need to split the commands into those for GKE and AKS on the one hand, and EKS on the other.

Please execute the command that follows if you are using GKE or AKS.

export INGRESS_HOST=$(kubectl \
    --namespace ingress-nginx \
    get svc ingress-nginx-controller \
    --output jsonpath="{.status.loadBalancer.ingress[0].ip}")

Please execute the commands that follow if you are using EKS.

export INGRESS_HOSTNAME=$(kubectl \
    --namespace ingress-nginx \
    get svc ingress-nginx-controller \
    --output jsonpath="{.status.loadBalancer.ingress[0].hostname}")

export INGRESS_HOST=$(
    dig +short $INGRESS_HOSTNAME)

Now you should have the IP of the external load balancer stored in the environment variable INGRESS_HOST. Let’s confirm that.

echo $INGRESS_HOST

The output, in my case, is as follows.

52.71.238.18

If you are using AWS and the output contains more than one IP, wait for a while longer, and repeat the export commands. If the output still contains more than one IP, choose one of them and execute export INGRESS_HOST=[...] with [...] being the selected IP.
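Alternatively, one way to avoid picking it by hand is to keep only the first IP returned by dig.

export INGRESS_HOST=$(
    dig +short $INGRESS_HOSTNAME | head -n 1)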

Now that we have the IP, let’s define the address through which we want to access Argo CD UI.

export ARGO_ADDR=argocd.$INGRESS_HOST.xip.io

All that’s left is to replace the hard-coded value acme.com with the newly generated address and push the changes to the repository. Codefresh should pick it up and run a pipeline build that will execute the steps that manage the cluster, the Ingress controller, and Argo CD.

cat argocd-values.yaml \
    | sed -e "s@acme.com@$ARGO_ADDR@g" \
    | tee argocd-values.yaml

git add .

git commit -m "Adding Argo CD"

git push

If you are already using Codefresh, your reaction might be to go to its UI to see the status of the newly executed build. We will not do that. Instead, we’ll use codefresh CLI to retrieve and follow the logs of the build. But, to do that, we need to find out what its ID is. We can do that by retrieving all the builds associated with the cf-terraform-* pipeline.

codefresh get builds \
    --pipeline-name cf-terraform-$K8S_PLATFORM

Please copy the ID of the newest build located at the top of the output, and paste it instead of [...] in the command that follows.

export BUILD_ID=[...]
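If you’d rather not copy the ID by hand, you could extract it with a bit of scripting. The command below assumes that the newest build is the first row of the output and that the ID is in the first column, so double-check your output before relying on it.

export BUILD_ID=$(codefresh get builds \
    --pipeline-name cf-terraform-$K8S_PLATFORM \
    | awk 'NR==2 {print $1}')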

Now we can retrieve and follow the logs.

codefresh logs $BUILD_ID -f

Once the build is finished, the last lines of the log output should show the typical Helm output, including the NOTES with the next steps. We’ll ignore them for now.

Let’s have a quick look at whether all the components of Argo CD are indeed running now.

kubectl --namespace argocd get pods

The output is as follows.

NAME                              READY STATUS  RESTARTS AGE
argocd-application-controller-... 1/1   Running 0        15m
argocd-dex-server-...             1/1   Running 0        15m
argocd-redis-...                  1/1   Running 0        15m
argocd-repo-server-...            1/1   Running 0        15m
argocd-server-...                 1/1   Running 0        15m

Similarly, we should confirm that Argo CD Ingress is indeed set to the correct host.

kubectl --namespace argocd get ingresses

The output is as follows.

NAME          HOSTS                      ADDRESS PORTS AGE
argocd-server argocd.52.71.238.18.xip.io ...     80    25m

There’s one last step that we might need to do before we conclude that everything is working as expected.

When installed for the first time, Argo CD uses a password that happens to be the same as the name of the Pod in which it is running. Let’s retrieve it.

export PASS=$(kubectl --namespace argocd \
    get pods \
    --selector app.kubernetes.io/name=argocd-server \
    --output name \
    | cut -d'/' -f 2)

We stored the password in the environment variable PASS, and now we can use it to authenticate with the argocd CLI (this assumes you have the argocd CLI installed). Without logging in, we wouldn’t be able to use it.

argocd login \
    --insecure \
    --username admin \
    --password $PASS \
    --grpc-web \
    argocd.$INGRESS_HOST.xip.io

Let’s see what that password is.

echo $PASS

The output, in my case, is as follows.

argocd-server-745949fb6d-jv4b5

You probably would not be able to remember that password. Even if you could, I do not want you to waste your brain capacity on such trivial information. So let’s change it to something that will be easier to remember.

argocd account update-password

You will be asked to enter the current password. Copy and paste the output of echo $PASS. Further on, you’ll be asked to enter a new password twice. Do it. If you’re uninspired, use admin123 or something similar. After all, it’s a demo cluster, and there’s no need to be creative with special characters, rarely used words, and similar things that make passwords harder to guess.

All that’s left is to open Argo CD in a browser and confirm that it works.

If you are a Linux or a WSL user, I will assume that you created the alias open and set it to the xdg-open command. If that’s not the case, you will find instructions on doing that in the Setting Up A Local Development Environment chapter. If you do not have the open command (or the alias), you should replace open with echo and copy and paste the output into your favorite browser.
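If you do not have the alias yet, creating it is a one-liner.

alias open=xdg-open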

open http://$ARGO_ADDR

You should be presented with the sign-in screen.

Feel free to explore the UI. But, before you do, be warned that you will not see much. We did not yet deploy a single application with Argo CD, so there’s not much excitement in the UI.

What Should We Do Next?

From here on, it’s up to you to “play” with the desired state of the cluster and the applications running inside. Change any of the declarative files in the repo, push the changes, wait until the pipeline converges your desires into reality, and observe the outcomes.

Before you leave, remember to destroy everything. I do not want to be blamed for the expenses incurred from Cloud resources left floating after you’re finished.

Fortunately, you already know how to destroy everything we created. You saw it in the EKS, AKS, and GKE articles. All you have to do is open the variables.tf file, set the value of the destroy variable to true, and push the change to Git. Codefresh will take care of the rest. But, before you do that, let’s take another look at the pipeline.

cat codefresh.yml

The output, limited to the when condition of the apply_app step, is as follows.

...
steps:
  ...
  apply_app:
    ...
    when:
      condition:
        all:
          notDestroy: '"${{DESTROY}}" == "false"'
      branch:
        only:
          - master

That step installs the apps. But, if the cluster is set for destruction, it will be gone by the time the build reaches that step, so we make sure that the step is not executed. To be more precise, the when.condition.all.notDestroy conditional evaluates to true only when the DESTROY value is set to false. In other words, that step will run only when the cluster is not set for destruction. There is no point in trying to deploy applications into a non-existing cluster, is there?

Now we can destroy the cluster. Go back to the Gist you used to create it in the first place, and follow the instructions from the Destroy The Cluster section.
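If you prefer to flip that variable from a terminal instead of an editor, a sketch of the change described above might look as follows. The sed expression assumes the variable’s default is written exactly as default = false, which might not match your variables.tf, so check the file before running it.

cat variables.tf \
    | sed -e "s@default = false@default = true@g" \
    | tee variables.tf

git add .

git commit -m "Destroying everything"

git push

Once the build triggered by that push finishes, the Cloud resources should be gone.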
