Things to consider before containerizing your app

Containerizing your app, which means converting it to run inside a container on Docker or another container system, offers a number of advantages. But before jumping into containerization, you should understand how the different aspects of your app affect the way it will run in a container.

Below, I’ll cover the main considerations, with a focus on application design, the CI layer, and environment config. This might sound like ground that has been covered before, but I promise there are some twists.

Should you containerize?

Before we even talk about the road to containerization, though, we should consider whether you should containerize at all. Containerization does offer a lot of advantages. But the main question to ask yourself before committing to it is whether your app is stateless.

If it is, that’s awesome! Then containerization is almost certainly right for you.

If it isn’t stateless, that doesn’t necessarily mean you can’t containerize, but containerizing a stateful app is more complicated. One of containerization’s big advantages is that it reduces the complexity of scaling elastically, and that ability is diminished when your application is stateful, because every new instance needs access to the same state.

To discuss this topic (and how Codefresh can help simplify the containerization of stateful apps) at length would require a separate post. But I did want to note it briefly before jumping into the details of containerization.

Application Design

Now let’s get to the main part of the article, which is how your application’s design affects your ability to containerize.

For starters, you need to understand that your app is going to reside on a containerization platform of some kind, and what that means for you.

Containerized apps can be designed to be platform agnostic. With good design, you shouldn’t need to worry about where your app is going to run or about the underlying systems. This might be over-simplifying things, but let’s keep it simple so we don’t end up down a rabbit hole.

An app being containerized shouldn’t contain any environment config. If it does, it can’t be deployed cleanly between environments, and you end up with messy else-if or switch-case statements keyed on environment names. There are three alternatives: the environment defines itself, environment variables are injected into the app at deploy time, or a combination of both. The principle is that you have a single build artifact you can deploy to any environment.

So what does this give us? A separation of concerns between our application and our environments, allowing us to build/configure and deploy each independently.
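As a minimal sketch of the app-side of that separation, here’s what a settings loader for a hypothetical app might look like. All of the config keys and defaults are my own assumptions, not a prescription; the point is that the same artifact reads its configuration from the environment at runtime, with no environment names or else-if blocks in the code itself.

```python
import os

# Hypothetical settings loader: one build artifact, configured entirely
# from the environment at runtime. Keys and defaults are illustrative.
def load_config(env=None):
    env = os.environ if env is None else env
    flags = env.get("FEATURE_FLAGS", "")
    return {
        "database_url": env.get("DATABASE_URL", "sqlite:///local.db"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
        "feature_flags": flags.split(",") if flags else [],
    }

# The deploy tooling, not the app, decides the values:
prod_like = load_config({"DATABASE_URL": "postgres://db:5432/app",
                         "LOG_LEVEL": "WARN"})
local = load_config({})  # falls back to local-friendly defaults
```

Notice that the code never asks "which environment am I in?"; it only asks "what values was I given?", which is exactly what keeps a single build deployable anywhere.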

Now let’s start thinking about containers. Containers are meant to house a single process. For example, you wouldn’t want to include your app tier and your web tier in the same container, as this would mean you’d need to configure the container for both and deploy both at the same time, which is messy. Those are quite distant elements, though, so what about Apache and PHP-FPM? These are two closely linked yet distinct processes. Here, Apache services HTTP requests and can serve static files, while delegating execution of PHP to a separate container (process) whose sole responsibility is running PHP-FPM.
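As a sketch, that Apache/PHP-FPM split might look like this in a docker-compose file. The image names, ports, and paths are illustrative assumptions, and in practice the httpd image would also need a small config change to proxy `.php` requests to the `php` service over FastCGI:

```yaml
# Illustrative sketch; all names, ports, and paths are assumptions.
services:
  web:
    image: httpd:2.4        # Apache: serves static files, proxies PHP
    ports:
      - "8080:80"
    depends_on:
      - php
  php:
    image: php:8-fpm        # PHP-FPM: a single, focused process
    volumes:
      - ./src:/var/www/html
```

Each service maps to one process, so each can be configured, scaled, and redeployed independently of the other.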

CI Layer

This separation is worth taking note of, as there’s another question to consider at this point. What does your dev environment look like? How do you simultaneously deploy to your CI layer, UAT, stage, pre-prod, and prod?

The long and the short of it is that you’ll need to roll your own code and config to do this based on the needs of your application. There’s no single right answer; there are simply too many CI solutions, and too many languages you might write your application in, to cover them all here.

What else do we need to cover in the CI layer, then? While talking about application design earlier, we briefly discussed environment config and why it shouldn’t live within the app. We all know hardcoding values into an app sucks, so how can we get around this one? A simple solution is to inject environment config as parameters when your application is deployed to an environment. (Note that this happens at deploy time and not build time. You should be aiming to promote a single build through environments instead of generating a build per environment.)
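Here’s a hedged sketch of the deploy side of that idea. In a real pipeline the injection would be done by the container runtime (for instance, `-e` flags on `docker run`); here a child process stands in for the container, so the sketch is runnable anywhere. The "app" and the variable names are hypothetical:

```python
import os
import subprocess
import sys

# A hypothetical one-line "app" standing in for the real build artifact.
# It knows nothing about environments; it only reads what it is given.
APP = 'import os; print("db =", os.environ["DATABASE_URL"])'

def deploy(env_overrides):
    # Stand-in for the container runtime injecting env vars at deploy
    # time: same unchanged artifact, different injected configuration.
    result = subprocess.run(
        [sys.executable, "-c", APP],
        env={**os.environ, **env_overrides},
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

staging = deploy({"DATABASE_URL": "postgres://staging-db/app"})
prod = deploy({"DATABASE_URL": "postgres://prod-db/app"})
```

The artifact is promoted through environments unchanged; only the injected parameters differ per deploy.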

Environment Config

This takes us into environment configuration. We’re looking for an app to be built once and deployed many times in many places.

An elegant way for an application to read environment config is for the environment to expose parts of its configuration as variables that the application can consume. This does away with hard-coded values, switch cases, and deploy-time injected parameters, and it makes your app easily portable to any other containerization platform, provided it has access to all your backing services.

Conclusion and further reading

We’ve covered only the basic essential topics related to containerization here. There are many more that are worth considering, but the ones I outlined above represent the major challenges that you need to overcome architecturally in order to achieve your goals.

If you haven’t come across it before, have a thorough read of the Twelve-Factor App, a site that details principles by which to build an app that is to be run as a service. It’s a bit romantic, perhaps, but it’s been a beacon of guidance for me while engineering apps in the past, and I’m sure there are many engineers who can still benefit from it.

Last but not least, it’s also worth checking out the Codefresh documentation if you’re thinking about containerizing. These docs will help to familiarize you with the process of containerizing an app using an automation tool like Codefresh. They explain which parts of the containerization workflow are simple, and what you’ll need to have ready before you containerize. And they just might convince you that containerization is not necessarily as complicated as I have made it out to be here, especially when you use a tool like Codefresh to do the dirty work for you.


Matthew Oaks works as a Lead DevOps Engineer at a major media broadcaster in the UK. His background is mostly in software engineering, but he also enjoys config, monitoring and graphs so much that he decided to give DevOps a shot. In his spare time he rides mountain bikes and tries to control and entertain his Labrador pup.

