Creating Temporary Preview Environments Based On Pull Requests With Argo CD And Codefresh

Creating preview environments as a result of making pull requests is one of those practices that have vast potential yet are largely overlooked. There is a strong chance that you are not using them, even though they can drastically increase productivity.

I will not explain what preview environments are, besides stating that they are temporary environments created when pull requests are made and destroyed when PRs are closed.

We will not debate what preview environments are, what Argo CD is, and why Codefresh is likely the best choice for defining and running continuous delivery pipelines. Instead, we will jump straight into defining the requirements that we might need to keep in mind and, after that, directly into the practical hands-on exploration of the concept.

I might even throw in a diagram or two for those of you too lazy to do the hands-on exercises and just want to pretend to understand how it all works. If that’s what you’re looking for, I will not make it easy for you by putting them all at once at the top. Instead, the diagrams will be spread around the examples. You will have to, at least, do some scrolling. Think of that as me forcing you to put in some effort, even if it is limited to exercising your finger while scrolling.

So, the focus is not on theory but on the practical implementation of preview environments. We’ll use Argo CD for deploying applications and Codefresh for pipelines that will orchestrate all the steps required when working with pull requests.

If you are not proficient with Argo CD, I strongly recommend watching the Argo CD: Applying GitOps Principles To Manage Production Environment In Kubernetes video first. After that, if you would like to see a manual version of what we are about to fully automate, please go through Environments Based On Pull Requests (PRs): Using Argo CD To Apply GitOps Principles On Previews.

With the pleasantries out of the way, we will jump straight into the mission by quickly exploring the expectations and the requirements.

Discussing The Expectations And The Requirements

There are usually many ways to accomplish an objective, and which one will be taken often depends on the expectations, which might shape the requirements.

Let’s start with the expectations I set in front of me when designing the solution we are about to explore. There are only three, so it won’t take much time to go through them.

Every time a pull request (PR) is made against a repository of an application, a temporary environment should be created with the build of that PR and, potentially, the dependencies. That way, we should be able to evaluate the quality of the PR by running automated or manual tests. By using unique and temporary environments, we should not be limited in the number of release candidates we are evaluating. Unlike static environments, like staging and production, that force us to queue deployments, temporary preview environments should pose no such restrictions.

Every time a pull request is closed, the temporary environment should be removed. That way, we should save on costs. There is no need to have a temporary environment based on a PR running after that PR is closed or merged.

Finally, there is no need for anyone or anything to have direct access to the cluster. One of the most powerful features of Argo CD is to sync automatically with the desired state stored in Git. Today, with the tools we have at our disposal, there is no justifiable reason to lower the security requirements. We shouldn’t allow people, or even other tools, to interact with the cluster, as long as that does not impact productivity and does not introduce unnecessary delays.

Please read GitOps Patterns – Auto-Sync Vs. Manual Sync for an overview of the reasons behind enabling the auto-sync feature.

That’s it. Those are all the expectations we are striving to fulfill.

Now that the expectations are clear, or, at least, not wholly obscured, let’s define the requirements.

We’ll need a Kubernetes cluster with Argo CD installed. You can create a cluster any way you like, anywhere you want. Similarly, it does not matter much how you install Argo CD, as long as you do, and as long as Ingress is enabled so that we can access it. If you need inspiration, I published a series of articles on how to create and manage Kubernetes clusters in Google Cloud (GKE), AWS (EKS), and Azure (AKS), as well as instructions on how to automate the installation and management of Argo CD itself.

You do not have to follow any of those instructions if you do not want to. As long as you have a Kubernetes cluster and you installed Argo CD, it does not matter how you did it.

There are a few other requirements, though. I will assume that you created an environment variable INGRESS_HOST with the IP of your Ingress controller, through which we can access the applications inside the cluster.
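If you're unsure how to get that IP, the snippet that follows might help. It is a minimal sketch, assuming that the NGINX Ingress controller runs in the ingress-nginx Namespace and is exposed through a LoadBalancer Service called ingress-nginx-controller. Adjust the names if your setup differs.

export INGRESS_HOST=$(kubectl \
    --namespace ingress-nginx \
    get service ingress-nginx-controller \
    --output jsonpath="{.status.loadBalancer.ingress[0].ip}")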

Finally, I will assume that you installed Codefresh CLI and authenticated it against your Codefresh account. If you haven't, I encourage you to watch Using CLI To Configure Codefresh And Create And Manage Kubernetes Pipelines, which provides a quick introduction to the codefresh CLI through practical hands-on examples.

Now we’re ready to start defining everything we need for creating pull requests.

Creating The Project And App Of Apps

Before we dive into pull requests and preview environments, we’ll need to create a few things that will define the framework of everything else we’ll do.

All the commands are available in the previews.sh Gist. Feel free to use it if you’re too lazy to type. There’s no shame in copy & paste.

To begin with, we need a Git repository where we will define preview environments. Luckily for you, I already created such a repository, so all we have to do is fork it.

If you are a Linux or a WSL user, the open command might not work. If that is the case, you should replace open with echo and copy and paste the output into your favorite browser.

open https://github.com/vfarcic/argocd-previews

Next, you will need to fork the repo. We’ll soon make some changes to the code, and you wouldn’t be able to push them to my repo. So, it needs to be yours.

If you do not know how to fork a GitHub repo, the only thing I can say is “shame on you”. Google how to do that. I will not spend time explaining that.
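That said, if you already have GitHub CLI (gh) installed (we will need it later anyway), one way to fork without leaving the terminal might be the command that follows. The --clone=false argument skips cloning, given that we will clone the fork ourselves in a moment.

gh repo fork vfarcic/argocd-previews \
    --clone=false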

Next, we’ll clone the newly forked repository.

Please replace [...] with your GitHub organization in the command that follows. If you forked the repo into your personal account, then the organization is your GitHub username.

export GH_ORG=[...]

git clone https://github.com/$GH_ORG/argocd-previews.git

cd argocd-previews

If you already forked that repository before, while going through some other exercises of mine, you might want to merge with the upstream. That should ensure that you have the latest changes I might have added.

Please execute the commands that follow only if you already forked the repository earlier.

git remote add upstream \
    https://github.com/vfarcic/argocd-previews

git fetch upstream

git merge upstream/master

Now we can explore a few critical files. The first one is the Argo CD project that will group all the preview environments.

cat project.yaml

The output is as follows.

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: previews
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  description: Previews
  sourceRepos:
  - '*'
  destinations:
  - namespace: previews
    server: https://kubernetes.default.svc
  - namespace: "pr-*"
    server: https://kubernetes.default.svc
  - namespace: argocd
    server: https://kubernetes.default.svc
  clusterResourceWhitelist:
  - group: ''
    kind: Namespace
  namespaceResourceWhitelist:
  - group: "*"
    kind: "*"

The only “special” thing about that project is that it allows pr-* as the namespace where we might deploy applications that belong to that project. As you can probably guess, * is the wildcard character, so that entry means that any Namespace whose name starts with pr- is allowed.

We are whitelisting (clusterResourceWhitelist) the Namespace as a resource that can be created on the cluster level. That should allow Argo CD to create the Namespaces associated with preview environments.

I’m sure you can figure out the meaning of the rest of that definition yourself, so let’s move on and apply it.

kubectl apply --filename project.yaml

Normally, we would not create any resource manually from a terminal. In this specific case, we should probably trigger a pipeline build on changes to that repo. The pipeline would be executing the same kubectl apply command. We're not doing that right now for brevity reasons. I want us to get to the subject at hand as fast as possible.

Next, we’ll create an app of apps. It will be an Argo CD application that serves mostly as a reference, telling Argo CD where to look for the applications related to preview environments.

Just as before, I already created the definition we can use, so let’s take a quick look at it.

cat apps.yaml

The output is as follows.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: previews
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: previews
  source:
    repoURL: https://github.com/vfarcic/argocd-previews.git
    targetRevision: HEAD
    path: helm
  destination:
    server: https://kubernetes.default.svc
    namespace: previews

That YAML defines an Argo CD Application that will monitor a specific repoURL and path. Whatever is defined in there will be considered the desired state, and Argo CD will ensure that the actual state is converged into it.

The definition uses the previews project that we created a few moments ago.

Before we proceed, we need to modify the repoURL. It is currently pointing to my repository, so we should change it to use the fork you created earlier. We’ll do that with a bit of “sed magic”.

cat apps.yaml \
    | sed -e "s@vfarcic@$GH_ORG@g" \
    | tee apps.yaml

Let’s push that change to the Git repo. It’s of no use to keep it local.

git add .

git commit -m "Initial commit"

git push

We’re almost finished with the resources in the previews repo. All that’s left is to apply the definition in the apps.yaml.

kubectl apply --filename apps.yaml

There is one more thing we might want to observe.

As you already saw, the Argo CD Application we just created will monitor the helm directory inside that repo. As you can probably guess from the name, it is a Helm chart. That's where we'll be adding, modifying, and removing files to match the desired state of our preview environments. Typically, it should be empty at the start. We did not yet create any PR, so there shouldn't be any environments. However, Helm does not allow us to deploy a chart without any definition. So, there must be something, even though, right now, we want nothing. As a workaround, I created a dummy Helm template whose only purpose is to get around Helm's inability to deploy a chart without a single resource.

Let’s take a look at the dummy.

ls -1 helm/templates

The output is a single namespace.yaml file. Inside is a definition of a Namespace previews, and nothing else. You can ignore it. I mentioned it only because I do not want there to be any secrets between us.
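For completeness, a minimal sketch of what that dummy likely looks like, assuming there is nothing in it beyond the Namespace we just described, would be as follows.

apiVersion: v1
kind: Namespace
metadata:
  name: previews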

We will not need to interact with the previews repo anymore, at least not directly, so let’s get out of the local copy.

cd ..

Now we are ready to dive into pull requests themselves.

Creating The Pipeline

The actions we might want to perform when creating pull requests are the same as when syncing and reopening them.

That would probably have sounded strange if I said it ten years ago, when most of us treated applications as mutable entities. Back then, we would create an environment and deploy a temporary release whenever a pull request was made. We would probably update that deployment after syncing it (pushing changes). I'm not even sure what we would do as a reaction to reopening a pull request. Would we recreate the preview environment? In any case, back then, the actions performed on opening PRs would be different from those performed when syncing or reopening them.

Today, we can tie all those three types of events into one set of actions, thanks to the immutability behind container images and the idempotency of Kubernetes resources. Whenever a pull request is created, synced, or reopened, we can tell Kubernetes that we want specific resources to be running inside the cluster and let it handle the rest.

However, since we are trying to apply GitOps principles, we will not tell Kubernetes anything. Instead, we’ll change the definitions in a dedicated Git repository and let Argo CD figure out what to do to comply with our desires.

The ideal situation would be to use one of your applications for the exercises. But that would also increase the number of potential permutations of the things I would need to explain. I could not guess in advance how you build your binaries or which actions are specific to your situation. Instead, we'll use one of my demo apps with the assumption that you should have no problems translating the lessons learned. If you read my other posts or watched some of my videos, you can probably guess which application it is. It's okay if you can't. The app is as simple as it can get. It's so basic that it is not worth even explaining what it does. What matters is the process rather than the architecture of the app anyway.

Let’s open the repository of the app.

open https://github.com/vfarcic/devops-toolkit

You know what’s coming next. Fork the repo first and execute the commands that follow to clone it.

git clone https://github.com/$GH_ORG/devops-toolkit.git

cd devops-toolkit

If you already forked that repository before while going through some other exercises of mine, you might want to merge with upstream. That should ensure that you have the latest changes I might have added.

Please execute the commands that follow only if you already forked the repository earlier.

git remote add upstream \
    https://github.com/vfarcic/devops-toolkit

git fetch upstream

git merge upstream/master

If we are to create preview environments based on pull requests as Argo CD applications, we need to have a template we can use. It cannot be a generic Argo CD application since the way apps are deployed might differ from one application to another. Some applications might require extra dependencies, others might have different ways to define tags, and so on and so forth. If we are using Helm, most of those differences can be described as Helm values. In other words, which values will be overwritten for preview environments might differ from one application to another. With that in mind, we might need a template (of sorts) dedicated to each app.

Given that I believe that a repository of an app should contain everything that app needs when running in isolation, the logical place for the template of the Argo CD app that describes it is in the repo of the application. If we combine that with my love of naming conventions, we get to the idea that each repo of applications could contain a file called preview.yaml. That file can be the template we need for deploying apps in preview environments.

Let’s look at the one I already prepared.

cat preview.yaml

The output is as follows.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: "{{.APP_ID}}"
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: previews
  source:
    path: helm
    repoURL: https://github.com/vfarcic/devops-toolkit.git
    targetRevision: HEAD
    helm:
      values: |
        image:
          repository: vfarcic/devops-toolkit
          tag: {{.IMAGE_TAG}}
        ingress:
          host: {{.APP_ID}}.devopstoolkitseries.com
      version: v3
  destination:
    namespace: "{{.APP_ID}}"
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
    syncOptions:
    - CreateNamespace=true

To begin with, that file is not a typical YAML file. I already mentioned that it should be a template, and not the file that contains the final definition.

We will be using kyml, given that it might be the simplest way to convert a template into the final file.

The preview.yaml contains a couple of entries specific to kyml. Those are the ones surrounded by double curly braces ({{ and }}). To be more precise, the templated values are those that are likely going to change from one preview (one PR) to another.

Each Application name needs to be unique. So, we use {{.APP_ID}} as the value of metadata.name.

Further on, we are overwriting a few Helm values. Specifically, the image.tag needs to be the one that we will build through the pipelines. Also, ingress.host should be unique so that each PR preview can be accessed independently of others.

Finally, the spec.destination.namespace should also be unique, so it contains the same {{.APP_ID}} we are using for the metadata.name.
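If you would like to see how kyml resolves those placeholders before we wire it into a pipeline, you can perform a local dry run. The commands that follow are a sketch, assuming that you installed kyml locally; the APP_ID and IMAGE_TAG values are made up for illustration. The -e arguments tell kyml tmpl to read those variables from the environment.

export APP_ID=pr-devops-toolkit-1

export IMAGE_TAG=abc1234

cat preview.yaml | kyml tmpl -e APP_ID -e IMAGE_TAG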

Before we proceed, there are values that will always be the same, but they are currently set to what works for me, not for you. Given that they will not change from one PR to another, it would be pointless to have them as templated values. They can stay hard-coded, but set to values that match your situation. Specifically, the repoURL should be pointing to your GitHub organization, the image.repository should use your Docker Hub account, and the domain should be whatever domain points to your cluster.

Given that I could not be sure that you have a domain at your disposal, we’ll use xip.io to “simulate” it. That’s why I said that the environment variable INGRESS_HOST is one of the requirements. Let’s confirm that you did follow my instructions and set it up.

echo $INGRESS_HOST

The output should be the IP through which you can access Ingress. If it is empty, you forgot to declare it or, more likely, you did not follow my instructions from the beginning of the article.

You should already have the GitHub organization stored inside the environment variable GH_ORG, so the only value missing is your Docker Hub user. Please make sure that you are registered, and replace [...] with the username in the command that follows.

export DH_USER=[...]

The only thing left, as far as the preview.yaml template is concerned, is to replace those hard-coded values.

cat preview.yaml \
    | sed -e "s@github.com/vfarcic@github.com/$GH_ORG@g" \
    | sed -e "s@repository: vfarcic@repository: $DH_USER@g" \
    | sed -e "s@devopstoolkitseries.com@$INGRESS_HOST.xip.io@g" \
    | tee preview.yaml

Finally, we will need a Codefresh pipeline that will make sure that all the steps are executed whenever we create a pull request. I already created one we can use. Given that I might be experimenting with that repo quite a lot, I stored the “golden” copy of the pipeline we will use in the codefresh directory. Let’s copy it to the root of the local copy of the repo and see what’s inside.

cp codefresh/codefresh-pr-open.yml \
    codefresh-pr-open.yml

cat codefresh-pr-open.yml

The output is as follows.

version: "1.0"
kind: pipeline
metadata:
  name: devops-toolkit-pr-open
  description: Triggered when a PR is opened or synced
spec:
  triggers:
  - type: git
    provider: github
    context: github
    name: pr-open
    repo: vfarcic/devops-toolkit
    events:
    - pullrequest.opened
    - pullrequest.reopened
    - pullrequest.synchronize
    pullRequestAllowForkEvents: true
    pullRequestTargetBranchRegex: /master/gi
    verified: true
  contexts: []
  stages:
    - release
    - deploy
  steps:
    main_clone:
      title: Cloning repository
      type: git-clone
      arguments:
        repo: "${{CF_REPO_OWNER}}/${{CF_REPO_NAME}}"
        git: github
        revision: "${{CF_BRANCH}}"
      stage: release
    build_app:
      title: Building Hugo
      image: klakegg/hugo:0.75.1-ext-alpine
      commands:
      - ./build.sh
      - cf_export REPO_PATH=$PWD
      - cf_export APP_ID=pr-$CF_REPO_NAME-$CF_PULL_REQUEST_NUMBER
      stage: release
    build_image:
      title: Building container image
      type: build
      arguments:
        image_name: vfarcic/devops-toolkit
        tags:
        - ${{CF_SHORT_REVISION}}
        registry: docker-hub
      stage: release
    clone_env_repo:
      title: Cloning preview env. repo
      type: git-clone
      arguments:
        repo: vfarcic/argocd-previews
        git: github
      stage: deploy
    define_preview:
      image: vfarcic/argocd-pipeline:1.0.ee76b7a
      title: Defining preview environment app
      working_directory: "${{clone_env_repo}}" 
      commands:
      - export IMAGE_TAG=$CF_SHORT_REVISION
      - cat $REPO_PATH/preview.yaml | kyml tmpl -e APP_ID -e IMAGE_TAG | tee helm/templates/$APP_ID.yaml
      - git add .
      stage: deploy
    push_env_repo:
      title: Pushing preview env. changes to the repo
      type: git-commit
      arguments:
        repo: vfarcic/argocd-previews
        git: github
        commit_message: "Adding PR ${{CF_PULL_REQUEST_NUMBER}} from ${{CF_REPO_NAME}}"
        git_user_name: "${{CF_COMMIT_AUTHOR}}"
        working_directory: "/codefresh/volume/argocd-previews"
      stage: deploy

If you are already using Codefresh (as I hope you do), you might not be used to creating pipelines with codefresh CLI and from definitions stored in Git. If you're not, remember that the Using CLI To Configure Codefresh And Create And Manage Kubernetes Pipelines video guides you through the first steps towards moving away from the UI. I will not go deep into the pipeline definition in front of us, assuming that you already watched that video. If you prefer written material, please follow the link to the more in-depth article.

Inside the spec.triggers, we are defining a single entry that defines which events will trigger builds. Those are opened, reopened, and synchronize events associated with PRs.

The “real” action is happening in the steps.

We are cloning the repository of the application (main_clone), building the app (build_app), and building the container image and pushing it to the registry (build_image). Those steps can be considered common to any type of pipeline that creates releases. The rest is specific to preview environments.

Further on, we are cloning the argocd-previews repository (clone_env_repo) that should contain all the preview environments, not only those related to this application. The next step (define_preview) is the key. It takes the preview.yaml file we explored earlier, passes it through kyml, which replaces the APP_ID and IMAGE_TAG placeholders, and stores the result inside the helm/templates directory as a file with a unique name. The APP_ID variable (defined in the build_app step) is a combination of the pr- prefix, the repository name, and the PR number. That makes it not only unique but also easy to find. If we know the repo and the PR, we should have no trouble figuring out where it is defined and running inside the cluster.
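As an illustration, assuming that the repository is devops-toolkit and that the PR number is 1, the variables Codefresh injects into the build would resolve as follows, producing the file helm/templates/pr-devops-toolkit-1.yaml.

export CF_REPO_NAME=devops-toolkit # Provided by Codefresh in a real build

export CF_PULL_REQUEST_NUMBER=1 # Provided by Codefresh in a real build

echo "pr-$CF_REPO_NAME-$CF_PULL_REQUEST_NUMBER"

The output is pr-devops-toolkit-1.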

Finally, the last step (push_env_repo) is pushing the changes we made to the previews repo.

Please note that a real-world pipeline would have quite a few other steps. Normally, we would run tests, do security scanning, and so on. But, for simplicity reasons, we are exploring only those related to the deployment of preview environments.

There is one more crucial thing left to note. But, this time, it is not about what we have, but what is missing.

We are not using kubectl apply, helm upgrade, or any similar command. We are not communicating with the cluster in any way. For all we know, neither Codefresh nor we have access to the cluster. We might not even know where that cluster is. The pipeline is only building artifacts and pushing some changes to the repository that defines the desired state of preview environments. Argo CD is the one that will do the work of converging the actual state into our desires. It is already monitoring that repository and making sure that the changes to the helm directory are applied. As a result, no person or process needs to have access to the cluster.

The problem with that pipeline is that it works for me, but not necessarily for you. We might need to change a few things to make it yours. For example, the Git repo organization (owner) is set to vfarcic, it assumes that the Git context defined in Codefresh is called github, and so on. We already have most of the information we need in environment variables, except for the Codefresh Git context and registry with container images. The good news is that we can easily find out what those are.

codefresh get contexts

The output, in my case, is as follows.

NAME            TYPE
...
github-2        git.github

I, for example, have the context named github-2. Yours might be different. What matters is that you do have a context of type git.github. If you do not, please create one. If you're confused about how to do it, you did not take my advice and watch the Using CLI To Configure Codefresh And Create And Manage Kubernetes Pipelines video. Maybe you did, but it wasn't as helpful as I thought it would be. In that case, consult the official documentation or, simply, execute the codefresh create context git github --help command to find out how to create a github context.

Whether you already had the github context or you created a new one just now, please replace [...] with the name.

export CF_GIT_CONTEXT=[...]

Similarly to the github context, we’ll need the registry’s name, where we’ll push preview images. Let’s see whether you have one.

codefresh get registry

The output, in my case, is as follows.

ID      PROVIDER  NAME       KIND     BEHINDFIREWALL DEFAULT
5f84... dockerhub docker-hub standard false          true   

If you already have it, you’re my hero. If you don’t, that’s okay as well. Create it. Just do not tell me that you do not know how, since that would mean that you ignored my repeated attempts at forcing you to watch the Using CLI To Configure Codefresh And Create And Manage Kubernetes Pipelines video. That would hurt my feelings, and I tend to be very mean to those who break my heart.

Anyway, I will assume that you have the name of the registry you want to use, so please replace [...] in the command that follows with whatever the name is.

export CF_REGISTRY=[...]

Let’s resort to “sed magic” one more time, and push the changes to the repo.

cat codefresh-pr-open.yml \
    | sed -e "s@repo: vfarcic@repo: $GH_ORG@g" \
    | sed -e "s@image_name: vfarcic@image_name: $DH_USER@g" \
    | sed -e "s@IMAGE: vfarcic@IMAGE: $DH_USER/devops-toolkit@g" \
    | sed -e "s@context: github@context: $CF_GIT_CONTEXT@g" \
    | sed -e "s@git: github@git: $CF_GIT_CONTEXT@g" \
    | sed -e "s@GIT_PROVIDER_NAME: github@GIT_PROVIDER_NAME: $CF_GIT_CONTEXT@g" \
    | sed -e "s@registry: docker-hub@registry: $CF_REGISTRY@g" \
    | tee codefresh-pr-open.yml

git add .

git commit -m "Corrections"

git push

All that’s left before we start creating PRs and enjoying the liberating feeling of full automation is to create the pipeline we defined.

codefresh create pipeline \
    -f codefresh-pr-open.yml

Now we are ready to see the effect of orchestrating the creation of preview environments through pipelines and Argo CD.

Creating, Syncing, And Reopening Pull Requests

We can finally reap the fruits of our labor by pretending to work on a new feature that will result in a pull request.

Let’s check out a branch, make a silly change as a way to simulate that we worked on a “real” feature, and push the changes.

git checkout -b pr-1

echo "A silly change" | tee README.md

git add .

git commit -m "A silly change"

git push --set-upstream origin pr-1

I am obsessed with being able to do everything from a terminal using CLIs. Assuming that you have no say in how we do the exercises, I will continue passing that obsession to you by creating a PR from the terminal session. We’ll need GitHub CLI (gh) for that. If you do not have it already, please visit the Installation page and follow the instructions.

gh pr create \
    --repo $GH_ORG/devops-toolkit \
    --title "A silly change" \
    --body "A silly change indeed"

We’ll need the PR number soon, so we’ll store it in yet another environment variable.

Please replace [...] with the PR number available in the output of the gh pr create command. It is probably 1.

export PR_NUMBER=[...]

This will be the first true test of whether the pipeline we created works and whether we configured the triggers correctly. If everything is going according to the plan, a new pipeline build should be running. GitHub should have fired a webhook request to Codefresh, notifying it that a new PR was created. In turn, Codefresh should have spun up an instance of the pipeline. We can confirm that by outputting devops-toolkit-pr-open builds.

codefresh get builds \
    --pipeline-name devops-toolkit-pr-open

There should be a single build of that pipeline. We’ll need it if we want to look at the logs, so let’s store it inside an environment variable.

Please replace [...] in the command that follows with the ID of the build.

export BUILD_ID=[...]

Unless you’re a “freak” like me, you might be tired of looking only at the terminal. If that’s not the case and you’d like to stay in the monochrome world, you can follow the build logs through codefresh logs $BUILD_ID -f. Otherwise, let’s open the build in a browser.

open https://g.codefresh.io/build/$BUILD_ID

All that’s left is to wait for a few moments until the build is finished.

Once the build is finished, Argo CD will detect that a change was made to the previews repo and, soon afterward, it will deploy the preview based on the PR we created earlier. We can observe the status of Argo CD synchronization through CLI but, given that we already switched to the browser, we might do that through the Argo CD UI. But, before we do that, we need to “discover” the address.

ARGOCD_ADDR=$(kubectl \
    --namespace argocd \
    get ingress argocd-server \
    --output jsonpath="{.spec.rules[0].host}")

echo $ARGOCD_ADDR

If the first command threw an error or the value of echo is empty, you probably forgot to enable Argo CD Ingress. Shame on you. You should have followed my instructions on how to set it up. Now you’re on your own. Go and figure out how to enable Ingress for Argo CD through the official docs.
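If you would rather postpone the UI for a bit longer, the argocd CLI can show the same information. The commands that follow are a sketch, assuming that you installed the argocd CLI and that the admin password is stored in the environment variable ARGOCD_PASSWORD (both depend on how you installed Argo CD).

argocd login $ARGOCD_ADDR \
    --insecure \
    --username admin \
    --password $ARGOCD_PASSWORD

argocd app list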

Let’s open the UI and see what we’ll get.

open http://$ARGOCD_ADDR

We can see that there are two applications. The previews application is the app of apps. We can think of it as a group of applications stored in the referenced repo. The pr-devops-toolkit-1 is the preview environment created as a result of us making the PR. If that’s confusing, it will hopefully become more apparent if we open the previews app.

Click somewhere on the previews box.

We can see that the previews Application contains two resources. There is the previews Namespace, which, if you remember, is the dummy resource we have in the Helm chart. We’re using it to avoid potential problems with charts without any resources.

The pr-devops-toolkit-1 Application is the one we just created. We can see that it is considered as part of the previews app.

Click the open application icon in the pr-devops-toolkit-1 Application, and you should see all the resources it contains. There is a Deployment, an Ingress, a Service, and so on.

Even though I promised that I would put my need to do everything from a terminal on hold and allow you to see some colors through UIs, this was as much as I could take. Prolonged exposure to UIs hurts my eyes, so we’ll go back to the terminal.

Let’s take a look at what’s going on with the Namespaces.

kubectl get namespaces

The output, limited to the relevant parts, is as follows.

NAME                  STATUS   AGE
...
pr-devops-toolkit-1   Active   7m22s
...

We can see that a new Namespace (pr-devops-toolkit-1) was created. It is unique and reserved for the preview environment related to the PR we opened earlier.

All that might be great, but the real test of the process is whether the application is indeed up and running and can be accessed through a unique subdomain. We could easily “guess” what that subdomain is given that it is based on the name of the repo and the PR number. Nevertheless, why would we “stretch” our brain if we can retrieve it by querying Kubernetes?

Ideally, we should have instructed the pipeline to write a comment to the PR with the address through which the app is accessible. It could have been something like “A preview environment was created and is accessible through __________.” But we didn’t do that since that is out of the scope of this article. I had to draw the line somewhere. Otherwise, this text was on its way to becoming just as big as War and Peace.
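If you feel like extending the pipeline yourself, a hypothetical command appended to the commands of the define_preview step could post such a comment. The sketch that follows assumes that a GITHUB_TOKEN variable is configured in Codefresh and that the Ingress host is available to the build (e.g., as a pipeline variable). The CF_REPO_OWNER, CF_REPO_NAME, and CF_PULL_REQUEST_NUMBER variables are provided by Codefresh, and APP_ID is exported in the build_app step.

curl \
    --request POST \
    --header "Authorization: token $GITHUB_TOKEN" \
    --data "{\"body\": \"The preview environment is available at http://$APP_ID.$INGRESS_HOST.xip.io\"}" \
    "https://api.github.com/repos/$CF_REPO_OWNER/$CF_REPO_NAME/issues/$CF_PULL_REQUEST_NUMBER/comments"

For now, though, let's retrieve the address by querying the cluster.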

export APP_ADDR=$(kubectl \
    --namespace pr-devops-toolkit-$PR_NUMBER \
    get ingresses \
    --output jsonpath="{.items[0].spec.rules[0].host}")

echo $APP_ADDR

The output, in my case, is as follows.

pr-devops-toolkit-1.54.88.253.179.xip.io

Let’s open the app in a browser and pretend that we are testing it manually while dreaming that the tests are being executed by pipeline builds.

open http://$APP_ADDR

That’s it. We created a PR, which triggered a pipeline which built the binaries, pushed them to registries, and modified the previews repo. Argo CD detected those changes and interpreted them as a new desired state. It converged the actual state into the new desired state. As a result, the preview environment is up-and-running inside the cluster. Neither we nor pipeline builds instructed Kubernetes to change the state. All that changed is the desired state stored in Git.

Let’s keep things clean and check out the master branch.

git checkout master

We’re halfway through, as far as PRs are concerned. We created a process that is executed whenever a PR is created, synced, or reopened. We’ll soon move into the second part of the story and define what should happen when PRs are closed. But, before we do, I have a task for you.

Create a few more PRs and observe the results. Think of the “process” as a new toy you might want to play with. Get familiar with that part of the process before we move on.
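In case you need a starting point, one possible iteration follows. It mirrors what we just did; the branch name, the file change, and the messages are arbitrary.

git checkout -b pr-2

echo "Another silly change" | tee README.md

git add .

git commit -m "Another silly change"

git push --set-upstream origin pr-2

gh pr create \
    --repo $GH_ORG/devops-toolkit \
    --title "Another silly change" \
    --body "Another silly change indeed"

git checkout master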

Closing Pull Requests

We saw how we can create as many environments as there are open pull requests. As a result, no one should ever need to wait until their pull request is reviewed and tested. Or, at least, if there is some waiting time, that will be due to people’s inability to do their part of the work, and not because a release candidate is waiting to be deployed and tested.

Still, this approach has a potentially huge issue that needs to be resolved. It might be too expensive. We can solve that problem in two ways.

To begin with, we can make preview environments run only when someone is using them. We could accomplish that through, let’s say, Knative, which would scale the application to zero replicas when not in use and back up when machines are running tests or when we are validating it manually. But that is not the subject today, so we’ll skip Knative (for now).

The other important thing we can do to lower the cost is to remove preview environments as soon as PRs are closed. That is indeed within the scope of this article, so let’s get going and do that.

Just as we created a pipeline that is triggered whenever a pull request is opened, synced, or reopened, we’ll make another one that will be used when PRs are closed. As you can probably guess, I already prepared one we can use. It’s stored in the codefresh directory, so let’s copy it to the root and see what’s inside.

cp codefresh/codefresh-pr-close.yml \
    codefresh-pr-close.yml

cat codefresh-pr-close.yml

The output is as follows.

version: "1.0"
kind: pipeline
metadata:
  name: devops-toolkit-pr-close
  description: Triggered when a PR is closed
spec:
  triggers:
  - type: git
    provider: github
    context: github
    name: pr-close
    repo: vfarcic/devops-toolkit
    events:
    - pullrequest.closed
    pullRequestAllowForkEvents: true
    pullRequestTargetBranchRegex: /master/gi
    verified: true
  contexts: []
  stages:
    - deploy
  steps:
    clone_env_repo:
      title: Cloning preview env. repo
      type: git-clone
      arguments:
        git: github
        repo: vfarcic/argocd-previews
      stage: deploy
    remove_preview:
      image: vfarcic/argocd-pipeline:1.0.ee76b7a
      title: Removing preview environment app
      working_directory: "${{clone_env_repo}}" 
      commands:
      - export APP_ID=pr-$CF_REPO_NAME-$CF_PULL_REQUEST_NUMBER
      - rm -f helm/templates/$APP_ID.yaml
      - git add .
      stage: deploy
    push_env_repo:
      title: Push preview env. changes to the repo
      type: git-commit
      arguments:
        git: github
        repo: vfarcic/argocd-previews
        commit_message: "Removing PR ${{CF_PULL_REQUEST_NUMBER}} from ${{CF_REPO_NAME}}"
        git_user_name: "${{CF_COMMIT_AUTHOR}}"
        working_directory: "/codefresh/volume/argocd-previews"
      stage: deploy

This pipeline is even simpler than the previous one. We do not need to clone the application’s repo, build binaries, store them in registries, nor perform any other action we might normally do when PRs are opened. The only job of that pipeline is to remove the Argo CD application from the previews repository.

We can see that the spec.triggers entry has a single event pullrequest.closed.

Further on, inside the steps section, we are cloning the environment repository (clone_env_repo), removing the preview app associated with the PR (remove_preview), and pushing changes back to the repo (push_env_repo).

That’s it. That’s all the pipeline does. Simplicity is a good thing, isn’t it?

Just as before, we’ll need to replace a few hard-coded values specific to my setup and push the changes back to the repo.

cat codefresh-pr-close.yml \
    | sed -e "s@repo: vfarcic@repo: $GH_ORG@g" \
    | sed -e "s@context: github@context: $CF_GIT_CONTEXT@g" \
    | sed -e "s@git: github@git: $CF_GIT_CONTEXT@g" \
    | tee codefresh-pr-close.yml

git add .

git commit -m "Corrections"

git push

All that’s left before we see it in action is to create the pipeline.

codefresh create pipeline \
    -f codefresh-pr-close.yml

Let’s open the PR and see what we have so far.

open https://github.com/$GH_ORG/devops-toolkit/pull/$PR_NUMBER

Please click the Show all checks link, and you’ll see that the build initiated as a result of creating the pull request passed.

We could close the PR in two ways. One possible action would be to merge it into the mainline. That would trigger two events, merge and close. However, since we might still want to check whether the previous pipeline works on the reopen event, we’ll choose the other option and close the PR instead of merging it. From the perspective of that pipeline, both merge and close actions are the same since both fire the closed event.

Please click the Close pull request button.

Let’s take a quick look at whether a new build of the devops-toolkit-pr-close pipeline was created and is running.

codefresh get builds \
    --pipeline-name devops-toolkit-pr-close

There should be a single build. Let’s see what it’s doing.

Please replace [...] in the command that follows with the ID of the build.

export BUILD_ID=[...] # Replace `[...]` with the ID of the last build

open https://g.codefresh.io/build/$BUILD_ID

We should see the three steps that constitute the pipeline. One of them might still be running. If that’s the case, please wait until the whole pipeline build is finished. Feel free to entertain yourself by observing the logs of one of the steps.

Let’s take a look at what happened with the Namespace of the preview environment associated with the PR we just closed.

kubectl get namespaces

The output, limited to the relevant parts, is as follows.

NAME                STATUS AGE
...
pr-devops-toolkit-1 Active 47m
...

The Namespace is still there. That’s disappointing, isn’t it? The preview environment was supposed to be removed, but the Namespace is still there. But, if you take a closer look at the pipeline, seeing that the Namespace is still there should come as no surprise. We removed the application from the previews repo. We changed the desired state by saying that we do not want the preview application anymore. We did not say that we do not want the Namespace that was created automatically. If everything went as planned, Argo CD should have removed the app while leaving the Namespace intact. Let’s confirm that.

kubectl --namespace pr-devops-toolkit-$PR_NUMBER \
    get pods

The output should show that No resources were found in pr-devops-toolkit-1 namespace.

If, in your case, the Pods are still there, you probably did not give Argo CD enough time to synchronize. Wait for a few moments, and retrieve the Pods again.

The application is indeed gone. As a result, we are not wasting resources on a PR that does not exist anymore. The empty Namespace left behind is not using any CPU or memory. It’s just annoying that it is there. We could have solved that as well, but not through the pipeline. One of the requirements is to NOT allow access to the cluster to anyone or anything, including pipelines. I’ll leave that to you as “special homework”. There are a few ways to solve that, and I’m curious which one you will come up with. Please let me know how you did it.

Now, let’s go back to the previous pipeline and confirm that the trigger associated with reopening PRs works as well. We’ll imagine that we closed the pull request by error, or that we changed our mind and want it after all.

open https://github.com/$GH_ORG/devops-toolkit/pull/$PR_NUMBER

Please click the Reopen pull request button.

You should already know what happens next and what we will do to observe the outcome.

Reopening the pull request triggered an event that was sent to Codefresh. Given that the devops-toolkit-pr-open pipeline has a trigger that corresponds with that event, a new build was created. We can confirm that by retrieving all the builds of that pipeline.

codefresh get builds \
    --pipeline-name devops-toolkit-pr-open

Please copy the latest build’s ID, and paste it instead of [...] in the commands that follow.

export BUILD_ID=[...] # Replace `[...]` with the ID of the last build

open https://g.codefresh.io/build/$BUILD_ID

Next, we should wait until the build is finished. Once it’s done, the definition of the preview app associated with that PR should be pushed to the previews repo. Argo CD, on the other hand, is monitoring that same repo, and, soon afterward, it should initiate the convergence of the actual into the desired state.

kubectl --namespace pr-devops-toolkit-$PR_NUMBER \
    get pods

What we can see in front of us might vary. There might be no Pods if Argo CD synchronization did not yet execute. There might be one Pod if Argo CD did synchronize, but HorizontalPodAutoscaler (HPA) did not yet do its job. Or there might already be two Pods. In any case, sooner or later, the actual state should stabilize, and there should be a minimum of two Pods given the lower limit set in the HPA.
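If you are curious about the limits and the current state of the HPA itself, you can inspect it as well. The command assumes that the chart created an HPA in the preview Namespace.

kubectl --namespace pr-devops-toolkit-$PR_NUMBER \
    get hpa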

That’s it. We saw how to fully automate the creation, management, and destruction of preview environments based on events created through pull requests.

We are victorious!

Feel free to “play” with what we created. We’ll destroy everything once you’re done.

Destroying The Evidence

I prefer leaving no trace of my exercises once I’m finished. That follows my philosophy that we should always be ready to create everything we need and destroy the things that are of no use. Think of it as a “leave no trace, I was never there” type of approach. So, let’s destroy everything we created.

Let’s start by getting out of the app repo.

cd ..

Next, we’ll delete the pipelines we created.

codefresh delete pipeline \
    devops-toolkit-pr-open

codefresh delete pipeline \
    devops-toolkit-pr-close

We’ll also remove the repositories you forked.

open https://github.com/$GH_ORG/argocd-previews/settings

# Click the *Delete this repository* button and follow the instructions

open https://github.com/$GH_ORG/devops-toolkit/settings

# Click the *Delete this repository* button and follow the instructions

Next, there is no need to keep the local copies of the repos.

rm -rf \
    devops-toolkit \
    argocd-previews

Finally, destroy the cluster itself, unless you are using it for other purposes. If you insist on keeping it, delete the Namespaces we created (e.g., argocd, previews, etc.).
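A sketch of that cleanup follows, assuming that PR_NUMBER is still set and that you did not create additional PRs (if you did, delete their pr-* Namespaces as well).

kubectl delete namespace \
    argocd \
    previews \
    pr-devops-toolkit-$PR_NUMBER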

That’s it. We started from nothing, and we ended with nothing. It’s like we haven’t done anything. If there is no evidence, there is no trial.
