
Stop Using Branches for Deploying to Different GitOps Environments


In our big guide on GitOps problems, we briefly explained (see points 3 and 4) how the current crop of GitOps tools don’t really cover promotion between different environments, or even how to model multi-cluster setups.

GitOps promotion

The question of “How do I promote a release to the next environment?” is becoming increasingly popular among organizations that want to adopt GitOps. And even though there are several possible answers, in this particular article I want to focus on what you should NOT do.

You should NOT use Git branches for modeling different environments. If the Git repository holding your configuration (manifests/templates in the case of Kubernetes) has branches named “staging”, “QA”, “Production” and so on, then you have fallen into a trap.

Branch Per environment

Let me repeat that. Using Git branches for modeling different environments is an anti-pattern. Don’t do it!

We will explore the following points on why this practice is an anti-pattern:

  1. Using different Git branches for deployment environments is a relic of the past.
  2. Pull requests and merges between different branches are problematic.
  3. People are tempted to include environment specific code and create configuration drift.
  4. As soon as you have a large number of environments, maintaining all of them quickly gets out of hand.
  5. The branch-per-environment model goes against the existing Kubernetes ecosystem.

Using branches for different environments should only be applied to legacy applications.

When I ask people why they chose to use Git branches for modeling different environments, the answer is almost always a variation of “we’ve always done it that way,” “it feels natural,” “this is what our developers know,” and so on.

And that is true. Most people are familiar with using branches for different environments. This practice was heavily popularized by the venerable Git-Flow model. But since the introduction of this model, things have changed a lot. Even the original author has placed a huge warning at the top advising people against adopting this model without understanding the repercussions.

The fact is that the Git-flow model:

  • Is focused on application source code and not environment configuration (let alone Kubernetes manifests).
  • Is best used when you need to support multiple versions of your application in production. This happens, but is not usually the case.

I am not going to talk too much about Git-flow and its disadvantages here, because the present article is about GitOps environments and not application source code. In summary: you should follow trunk-based development, and use feature flags if you need to support different features in different environments.

In the context of GitOps, the application source code and your configuration should also be in different Git repositories (one repository with just application code and one repository with Kubernetes manifests/templates). This means that your choice of branching for the application source code should not affect how branches are used in the environment repository that defines your environments.
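As a sketch, the split could look like this (the repository names are illustrative):

```text
my-app/            # application source code; any branching model the developers prefer
my-app-config/     # Kubernetes manifests/templates; watched by the GitOps controller,
                   # and NOT organized as one branch per environment
```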

Use different repositories

When you adopt GitOps for your next project, you should start with a clean slate. Application developers can choose whatever branching strategy they want for the application source code (and even use Git-flow), but the configuration Git repository (that has all the Kubernetes manifests/templates) should NOT follow the branch-per-environment model.

Promotion is never a simple Git merge

Now that we know the history of using a branch-per-environment approach for deployments, we can talk about the actual disadvantages.

The main advantage of this approach is the argument that “Promotion is a simple git merge.” In theory, if you want to promote a release from QA to staging, you simply merge your QA branch into the staging branch. And when you are ready for production, you again merge the staging branch into the production branch, and you can be certain that all changes from staging have reached production.

Do you want to see what is different between production and staging? Just do a standard git diff between the two branches. Do you want to backport a configuration change from staging to QA? Again, a simple Git merge from the staging branch to qa will do the trick.

And if you want to place extra restrictions on promotions, you can use Pull Requests. So even though anybody could merge from qa to staging, if you want to merge something into the production branch, you can use a Pull Request and demand manual approval from all critical stakeholders.

This all sounds great in theory, and some trivial scenarios can actually work like this. But in practice, this is never the case. Promoting a release via a Git merge can suffer from merge conflicts, unwanted changes, and even the wrong order of changes.

As a simple example, let’s take this Kubernetes deployment that is currently sitting in the staging branch:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 15
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: backend
        image: my-app:2.2
        ports:
        - containerPort: 80

Your QA team has informed you that version 2.3 (which is in the QA branch) looks good and is ready to be moved to staging. You merge the QA branch into the staging branch, promoting the application, and think that everything is good.

What you didn’t know is that somebody also changed the number of replicas in the QA branch to 2 because of some resource limitations. With your Git merge, you not only deployed 2.3 to staging, but also scaled the replicas down to 2 (instead of 15), which is probably something you don’t want.
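To make the problem concrete, the combined diff that such a merge brings into staging would look something like this (an illustration, not output from a real repository): the intended image bump arrives together with the unwanted replica change.

```diff
 spec:
-  replicas: 15
+  replicas: 2
 ...
       - name: backend
-        image: my-app:2.2
+        image: my-app:2.3
```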

You might argue that it would be easy to check the replica count before merging, but remember that in a real scenario you have a large number of applications with a large number of manifests that are almost always templated (via Helm or Kustomize). Understanding which changes you want to bring over and which to leave behind is not a trivial task.

And even if you do find changes that should not be promoted, you need to manually choose the “good” parts using git cherry-pick or other non-standard methods, which are a far cry from the original “simple” Git merge.

But even if you are aware of all the changes that can be promoted, there are several cases where the order of promotion is not the same as the order of committing. As an example, the following 4 changes happen in the QA environment:

  1. The ingress of the application is updated with an extra hostname.
  2. Release 2.5 is promoted to the QA branch and all QA people start testing.
  3. A problem is found with 2.5 and a Kubernetes configmap is fixed.
  4. Resource limits are fine-tuned and committed to QA.

It is then decided that the ingress setting and the resource limits should move to the next environment (staging). But the QA team has not finished testing with the 2.5 release.

If you blindly merge the QA branch to the staging branch, you will get all 4 changes at once, including the promotion of 2.5.

To resolve this, again you need to use git cherry-pick or other manual methods.
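Here is a minimal sketch of what that manual selection looks like with git cherry-pick, using a throwaway repository (every file name, version, and commit message is invented for illustration):

```shell
# Build a tiny repo with the four QA changes, then cherry-pick only the
# ingress and resource-limit commits onto staging.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email ci@example.com
git config user.name ci

echo "image: my-app:2.4" > deployment.yaml
git add . && git commit -qm "shared starting point"
git branch staging                          # staging starts from the common state

git checkout -q -b qa
echo "host: extra.example.com" > ingress.yaml              # change 1
git add . && git commit -qm "ingress: extra hostname"
ingress=$(git rev-parse HEAD)

echo "image: my-app:2.5" > deployment.yaml                 # change 2 (still under test)
git add . && git commit -qm "promote 2.5 to QA"

echo "fixed: true" > configmap.yaml                        # change 3 (part of 2.5)
git add . && git commit -qm "configmap fix for 2.5"

echo "cpu: 500m" > limits.yaml                             # change 4
git add . && git commit -qm "fine-tune resource limits"
limits=$(git rev-parse HEAD)

# A plain merge would bring all four changes; instead, hand-pick 1 and 4:
git checkout -q staging
git cherry-pick "$ingress" "$limits"

grep image deployment.yaml   # still my-app:2.4 -- release 2.5 stayed behind
```

Note how fragile this already is: someone has to know which commit hashes are safe to pick, and the configmap fix (change 3) silently stays behind because it belongs to the untested 2.5 release.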

There are even more complicated cases where the commits have dependencies between them, so even cherry-pick will not work.

Commit dependencies

In the example above, release 1.24 must be promoted to production. The problem is that one of the commits (the hotfix) contains a multitude of changes, some of which depend on another commit (the ingress config change) that itself cannot be moved to production (as it applies only to staging). So even with cherry-picks, it is impossible to bring only the required changes from staging to production.

The end result is that promotion is never a simple Git merge. Most organizations also have a large number of applications that run on a large number of clusters and are composed of a large number of manifests. Manually choosing commits is a losing battle.

Configuration drift can be easily created by environment-specific changes

In theory, configuration drift should not be an issue with Git merges. If you make a change in staging and then merge that branch to production, then all your changes should transfer to the new environment.

In practice, however, things are different, because most organizations merge in only one direction, and team members are easily tempted to change upstream environments and never back-port the changes to downstream environments.

In the classic example with 3 environments for QA, Staging, and Production, Git merges go in only one direction. People merge the qa branch to staging and the staging branch to production. This means that changes only flow upwards.

QA -> Staging -> Production.

The classic scenario is that a quick configuration change is needed in production (a hotfix), and somebody applies the fix there. In the case of Kubernetes, this hotfix can be anything such as a change in an existing manifest or even a brand new manifest.

Now Production has a completely different configuration than Staging. The next time a release is promoted from Staging to Production, Git will only notify you of what you are bringing from Staging. The ad hoc change on Production will never appear in the Pull Request.

One direction only

This means that all subsequent deployments can fail, as production now has an undocumented change that will never be detected by any subsequent promotions.

In theory, you could backport such changes and periodically merge all commits from production to staging (and staging to QA). In practice, this never happens, for the reasons outlined in the previous point.

You can imagine that a large number of environments (and not just 3) further increases the problem.

In summary, promoting releases with Git merges does not solve configuration drift; in fact, it makes it even more problematic, as teams are tempted to make ad hoc changes that are never back-ported to the other environments.

Managing different Git branches for a large number of environments is a losing battle

In all the previous examples, I only used 3 environments (qa -> staging -> production) to illustrate the disadvantages of branch-based environment promotion.

Depending on the size of your organization, you will have many more environments. If you factor in other dimensions such as geographical location, the number of environments can quickly skyrocket.

For example, let’s take a company that has 5 environments:

  1. Load Testing
  2. Integration testing
  3. QA
  4. Staging
  5. Production

Then let’s assume that the last 3 environments are also deployed to EU, US, and Asia, while the first 2 also have GPU and Non-GPU variations. This means that the company has a total of 13 environments (3 environments × 3 regions + 2 environments × 2 variations). And this is for a single application.

If you follow a branch-based approach for your environments:

  • You need to have 13 long living Git branches at all times.
  • You need 13 pull requests for promoting a single change across all environments.
  • You have a two-dimensional promotion matrix with 5 steps upwards and 2-3 steps outwards.
  • The possibility of wrong merges, configuration drift, and ad-hoc changes is now non-trivial across all environment combinations.

In the context of this example organization, all previous issues are now more prevalent.

The branch-per-environment model goes against Helm/Kustomize

Two of the most popular Kubernetes tools for describing applications are Helm and Kustomize. Let’s see how these two tools recommend modeling different environments.

For Helm, you need to create a generic chart that itself accepts parameters in the form of a values.yaml file. If you want to have different environments, you need multiple values files.

Helm environments
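As a sketch of what this looks like (the chart layout, file names, and values here are illustrative, not from the article), each environment gets its own values file that overrides only what differs, and an environment is rendered with something like `helm template my-app ./chart -f values-staging.yaml`:

```yaml
# values-staging.yaml (illustrative)
replicaCount: 5
image:
  tag: "2.2"
ingress:
  host: staging.example.com
---
# values-production.yaml (illustrative)
replicaCount: 15
image:
  tag: "2.2"
ingress:
  host: example.com
```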

For Kustomize, you need to create a “base” configuration, and then each environment is modeled as an overlay that has its own folder:

Kustomize environments
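A minimal sketch of that layout (folder names are illustrative): every environment folder references the shared base and patches only its own differences, so that e.g. `kubectl kustomize overlays/staging` renders the staging environment.

```text
├── base
│   ├── deployment.yaml
│   └── kustomization.yaml
└── overlays
    ├── qa
    │   └── kustomization.yaml      # resources: [../../base] plus a replicas patch (e.g. 2)
    ├── staging
    │   └── kustomization.yaml
    └── production
        └── kustomization.yaml      # resources: [../../base] plus a replicas patch (e.g. 15)
```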

In both cases, different environments are modeled with different folders/files. Helm and Kustomize know nothing about Git branches or Git merges or Pull Requests. They use just plain files.

Let me repeat that again: Both Helm and Kustomize use plain files for different environments and not Git branches. This should be a good hint on how to model different Kubernetes configurations using either of these tools.

If you introduce Git branches in the mix, you not only introduce extra complexity, but you also go against your own tooling.

The recommended way to promote releases in GitOps environments

Modeling different Kubernetes environments and promoting a release between them is a very common issue for all teams that adopt GitOps. Even though a very popular method is to use Git branches for each environment and assume that a promotion is a “simple” Git merge, we have seen in this article that this is an anti-pattern.

In the next article, we will see a better approach for modeling your different environments and promoting releases between your Kubernetes clusters. The last point of the article (regarding Helm/Kustomize) should already give you a hint on how this approach works.

Stay tuned!

Kostis Kapelonis

Kostis is a software engineer/technical-writer dual class character. He lives and breathes automation, good testing practices and stress-free deployments with GitOps.

30 responses to “Stop Using Branches for Deploying to Different GitOps Environments”

  1. Surender Aireddy says:

    Great detailed information. Thank you Kostis Kapelonis.

  2. Samuel Almeida says:

    Very good article, but let me get this straight.

    Are you suggesting that we use 2 repositories, a repository where my application code will be and another repository where the code (yaml) of my environment configuration will be?

    In this case, a hypothetical example would be:

    I have a guestbook application that was compiled and sent to the registry as follows: my.registry.com/guestbook:v1 and it is working in my 3 environments (dev, stage, prod), days later another update was made that ended up changing the image tag to v2.

    In that case, how do I deal with updating this application in my environments? Would I have to somehow change my kubernetes deployment manifest by changing the image version manually? How could ArgoCD help me with this?

    Congratulations for the article, very well written, I am a beginner in the field and I have many questions regarding best practices.

    1. Kostis Kapelonis says:
      Are you suggesting that we use 2 repositories, a repository where my application code will be and another repository where the code (yaml) of my environment configuration will be?

      Yes. That is exactly how it works.

      Would I have to somehow change my kubernetes deployment manifest by changing the image version manually?

      Yes you need to update the manifest. No, you don’t have to do it manually.

      1. Samuel Almeida says:

        Thank you very much for the reply, I’m looking forward to the next article.

      2. Do you have examples of tools that are best suited for automatically updating manifests? I know of Kustomize’s transformers, for example.

        1. Kostis Kapelonis says:

          Yes, Kustomize can do this. If you really want to go low-level, you can use yq/jq. I have also seen some other tools in the wild with one-off names (ktmpl, kyml, etc.).

          And of course there are people who still use sed/awk and it works just fine for them.

          I don’t think the tool is that important as long as you define what an “environment” means to you.

  3. Congrats, very good article. When should we expect the next one with the better approach? I’m very anxious for it.

    1. Kostis Kapelonis says:

      Thank you for the feedback. If you cannot wait, the correct answer is “environment-per-folder”. Stay tuned!

      1. danis imamovic says:

        with the “environment per folder” approach, are you suggesting that we should only use 1 branch for our YAML repository? and just “promote” our changes across environments by simply copying over the config from one environment folder to the next?

          1. danis imamovic says:

            interesting approach, but how do you modify the base file without affecting all environments simultaneously? could each environment folder have its own base file?

          2. Kostis Kapelonis says:

            See my other answer to shii.

            I realize that we have never published an article that goes into depth about how to (ab)use Kustomize. I am adding it in the backlog.

          3. I have actually the same question as danis, how can I make changes in “base” that I want to test first in staging without affecting production?

            Do I have to duplicate base for each env?

          4. Kostis Kapelonis says:

            Hello. It is a bit difficult to cover all possible use cases of all people (Helm/Kustomize/plain manifests) in a single article.

            I don’t think there is a magic answer there. Either you put your change in the overlay first and move it to base after it works, or you accept that changes to base really should affect all environments and take on that risk. Alternatively, you can have a hierarchy of “base” configurations that span multiple environments but not your production ones.

            I wanted this article to be generic and not focus on a specific tool. Of course many organizations have their own custom scripts or other processes to handle everything.

  4. Herman Banken, Q42 says:

    We are successfully using a branched model for our GitOps. We have 4 environment levels (dev/test/staging/production), and 2 clusters in each “level”. We do not have a branch per cluster, but only per level.

    We have a certain setup, which involves some automation. For example, each microservice change happens from a branch that is auto-generated and always based on production. We then automatically manage merges from this (microservice) release branch to develop/testing/staging.

    For production however, we only automatically create a PR. This PR needs to be approved by at least 2 people. This is not possible with tags, and is the primary reason we use branches.

    It CAN be done. Due to our automation we have zero conflicts, and no stale changes in develop/testing/staging (we never need to merge from those environments to production).

    I really wonder how it would be to manage 30 microservices with a trunk-based system if testing (which happens before production can merge) can block a release. This would cause all of the teams to be blocked for a long time! For us that is unacceptable.

    Maybe I should write an article about our setup; if so, I’ll link it here.

    1. Kostis Kapelonis says:
        > This PR needs to be approved by at least 2 people
      

      That is not Continuous Deployment, it is plain Continuous Delivery.

      ...happens from a branch that is auto-generated and always based on production. 

      What happens if you want to make a change to a specific environment that is not production? For example, how do you decrease the number of replicas in staging while leaving the number of replicas in production as is?

      Yes, please write the article. It would be very interesting to read.

      However, some of the problems I mention (i.e. the order of commits) cannot be solved with any amount of automation. So if your answer to those problems is “we have a human that goes and fixes stuff”, then I challenge your claim that “it CAN be done”. Or maybe your setup is really simple and you never hit the problems I mention.

      1. There are always environment-specific settings, such as which database to connect to, which external APIs to use, etc.

        For *these* files you can simply use a file-per-environment approach.

        If a dev team wants a different number of replicas in development, they simply add the number of replicas to this environment-specific file. (Having a different number of replicas in staging is a bit of an anti-pattern in and of itself; staging should be as similar to production as possible, to catch as many issues as possible).

        With Helm’s ability to use multiple values files, this is quite simple and you only need to set/override the settings that are different. That way, you can roll out a change to the number of replicas by setting it in the common values, promote that to staging to see if it works, and then promote that to prod, knowing that the setting has been tested in staging.

        1. Kostis Kapelonis says:

          Having a different number of replicas in staging is a bit of an anti-pattern in and of itself

          This is not always true. For some organizations it would be impossible, simply for cost reasons, to run the same number of replicas in staging and prod. It depends on the org, the environment (on-prem, cloud), security constraints, auto-scaling setup, etc.

          The article is written to be as generic as possible, trying to cover all possible edge cases.

  5. Samuel Terburg says:

    Thanks for writing this article, I really appreciate the effort.
    I would like to remark:
    The pain that you feel from merge conflicts is on purpose: it shows you a potential configuration problem in production even before you deploy (shift-left).

    The pain from deploying multiple commits into QA that need to be promoted to Prod as a whole, we resolved by using (multiple) “Release” branches that clearly define the bundle of commits that are tightly coupled.
    (even nicer is the fact when you can dynamically spin up Kubernetes Namespaces for Test & QA and thereby have multiple QA environments so that you can test individual commits)

    Kustomize can refer to Git URLs, including specific branches, so it’s not that Kustomize’s model is solely directory-based.

    Your example that 13 different environments equals 13 branches is also not fair:
    You should use “service discovery” (convention over configuration).
    Application branches are Load-Test, Integration-Test, QA, Staging, Prod, but when deployed to Prod the pipeline would deploy to both EU & US. The GPU & Non-GPU variants should be a configuration within the Kubernetes cluster that can be discovered; based on that, the application can behave differently.
    Application configuration should be feature flags (not complete branches), like:
    * Region (EU, US, Asia) redundancy: yes/no
    * GPU: yes/no

    1. Kostis Kapelonis says:

      Thank you for your feedback. If you liked this you should also look at https://codefresh.io/kubernetes-tutorial/kubernetes-antipatterns-1/ and https://codefresh.io/devops/enterprise-ci-cd-best-practices-part-1/ because they mention some of the things you are talking about.

      you can dynamically spin up Kubernetes Namespaces for Test & QA and thereby have multiple QA environments so that you can test individual commits
      

      I am a big fan of preview test environments and have even mentioned them in both the blog posts I linked (see point 8 in the Kubernetes anti-patterns one). However, they only work with isolated features. You know that feature A works on its own and that feature B works on its own, but you don’t know if A and B work together. So you merge feature A in production and it works just fine, but then you merge feature B and the deployment fails because the two features were never tested together. Of course, depending on your scenario, this might be an edge case for you and you don’t care. I accept that.

      Kustomize can refer to git url’s including specific branches
      

      True. But kustomize does not understand that each git branch represents an environment and the implications of this pattern. Simply mentioning a Git URL doesn’t mean that you understand what is stored there.

      You should use “service discovery” (convention over configuration)
      

      Again this is clearly explained and promoted in my Kubernetes anti-patterns series (see points 2 and 3). However if you follow this pattern you don’t really need branches at all.

      You say that with service discovery you don’t need special branches for EU and US. But then why do you need special branches for the other environments? If all your configuration comes dynamically, why do you need branches in the first place? You can use a single branch (which is what I have been suggesting all along) for everything and fetch all configuration at runtime.

  6. Martin Hollerweger says:

    We are using environment branches together with a folder per environment.
    This way you can manage all environments in one code base and promote them easily with Git merges.
    It also allows testing bigger base changes without affecting the production or QA environments.

    I would like to know how you can handle bigger changes with a single branch for all environments.
    Also, for auditing, security, and stability, I see it as a huge risk to use only a single branch where everyone merges (with PRs) directly.

    It looks to me like you at least need a separate branch for production to avoid any auto-promotion of untested changes.

    1. Kostis Kapelonis says:
      We are using environment branches together with per environment per folder.

      Maybe you can write an article about this approach and its advantages and disadvantages?

      I would like to know how you can handle bigger changes with a single branch for all environments.

      There is nothing strange about it. You just copy files between folders (manually or with automation). I don’t see why you need multiple branches for this.

      Also for auditing, security and stability I see it a huge risk to only use a single branch where all are merging(with PRs) directly.

      If you want to be bulletproof, you can still use different repos (one for prod stuff and one for non-prod stuff). But each repo should still have a single branch and follow the “environment-per-folder” pattern.
      I personally think that this is overkill.

      It looks for me that you at least need a separate branch for production to avoid any auto promotion of not tested changes.

      You can simply disable auto-sync in prod environments if you wish. Or have manual approvals. But this is unrelated to how the environments are represented on the repository. You don’t need branches for manual approvals. You can still have manual approvals even with the environment-per-folder approach.

  7. Great article Kostis, I’m really happy that this topic is talked about and can’t wait until we at the community have some guidelines for specific scenarios.

    My biggest pain point is that I actually use the branch per environment in combination with directory per environment with Kustomize (https://i.imgur.com/7CjHUzw.png).

    While this works in my rather simple scenario, I’m interested to know how your suggestion of using just a directory per environment would work, as any change to the base would result in a simultaneous and immediate change to all environments, which I guess nobody really wants.

    How would you solve that problem?

    I think Kustomize is part of the problem but just not sure how to avoid propagating untested changes to production environment. Any ideas?

    Thanks in advance!

    1. Kostis Kapelonis says:

      I am still writing the next article and I would like to make it generic instead of focusing specifically on Kustomize. I mean, if I write about Kustomize, then people that use Helm will ask the same question, and people that use Jsonnet will complain that I have not covered their use case.

      Anyway, off the top of my head, I would only move something to “base” either when I am certain that it needs to go to all environments at once, OR after it has passed through each environment individually. So first you add the change to the QA overlay, then to the Staging overlay, then to the prod overlay. Finally, you remove it from all overlays and put it in base.

      1. This makes a lot of sense, actually.

        It seems that, the way you suggested, we just make a tradeoff between the convenience of working on one environment at a time (and not worrying about affecting other environments) and spending a bit more attention and time testing a new resource/patch in every environment before applying it globally in base.

        But what if change is not a new resource or patch, rather new image tag?

        Does that mean that we give up on automation and should modify image tags by hand each time in each environment?

        I’d be grateful if you could elaborate a bit more on this, at least for Kustomize and Helm, in the new article, as these are two major GitOps tools on the market and I’m sure that will cover at least 95% of all users interested in the topic. Please! 🙂

  8. Luis Crespo says:

    Hello Kostis,

    Thank you very much for the article. I am convinced about not using branches, and at the end of the article you say “In the next article…”.

    So now I am very interested to know about more details on the correct approach for dealing with environments. Do you have a link to the next article you mention?
