Dockerize Your Java Application



What is Docker?

I’m sure you’ve heard that containers are all the rage these days. If not, you can think of containers as a way to package and ship your existing applications. If you’re familiar with containers, you’ve almost certainly heard of Docker, which provides the de facto toolset to build, manage, and deploy containers. In this article, we’ll provide a brief introduction to Docker and go over what it takes to containerize a Java application.

Dockerize Your Java Application

The first step in building a Docker container for your Java application is to ensure that you have the Docker tool suite installed on your development machine. If you need to install Docker, you can find the appropriate download for your system on the official Docker website.

Once Docker is installed, you’ll need to find a reputable base image to use as an application baseline. Ideally, a base image should contain only the bare essentials for the task at hand. You’ll also want to keep image sizes to a minimum to reduce build times and the time it takes to transfer images over the network. While there are several options available, many of the teams I’ve worked with prefer Alpine Linux, primarily because of its size (around 5MB). One thing to keep in mind, however, is that Alpine does not use glibc; instead, it uses musl libc, which can be problematic, but solutions are available.

Begin by looking through the Dockerfile in the following git repository.

$ git clone https://github.com/n3integration/dockerize-java.git
$ cd dockerize-java && cat Dockerfile

The first line in the file should contain the base image name. As mentioned above, there are several publicly available base images that come with the JDK pre-installed. For additional images, a quick search on Docker Hub turns up multiple choices. Choose the best available image based on your needs, or build a custom image from scratch. For the sake of simplicity, in this post, I’ve chosen to use the default openjdk base image, which is significantly larger than the Alpine image but ensures better compatibility.

FROM openjdk:8-jdk

Although you can certainly install a web server such as Tomcat within a container, it is much more practical to run a self-contained, runnable jar file. There are several frameworks available, including Spark, Spring Boot, and Restlet. I chose Spark for this post because I find it to be one of the simplest frameworks for getting up and running quickly. The compiled binary is copied to a common directory in the container using a common filename.

COPY build/libs/*.jar /app/service.jar

Finally, the runnable jar file is executed. It is important to point out that each argument must be declared as an individual string.

CMD ["java", "-jar", "/app/service.jar"]
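To make the shape of service.jar concrete, here is a minimal sketch of a comparable service written against only the JDK’s built-in com.sun.net.httpserver (the repository itself uses Spark, so its actual code will look different; the class name App and the greeting string are illustrative assumptions):

```java
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class App {
    // Build a tiny HTTP server that answers every request with a greeting.
    static HttpServer createServer(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            byte[] body = "Hello, world".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        return server;
    }

    public static void main(String[] args) throws IOException {
        // 4567 matches the port published in the docker run examples later in the post
        createServer(4567).start();
    }
}
```

Packaged as a runnable jar, a main method like this is all the CMD instruction above needs to start the service.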

Refer to the official Dockerfile reference guide for a complete set of available instructions. To build our container, we must first ensure that the project is compiled.

$ ./gradlew clean build
$ docker build -t n3integration/dockerize:latest .

The -t flag specifies the image tag, in the format organization/project:revision. If the revision is omitted, it defaults to latest.

After selecting a base image, there are a few additional factors to consider when containerizing your application and deploying it to a Docker environment:

  1. Logging
  2. Configuration

Application Logging

When writing log messages from your application, it is common to use a logging framework (e.g. slf4j, jcl, or log4j2). It is also common for traditional applications to write logs to the filesystem. Within containers, however, it is generally easier to write log messages to standard output, which is an option regardless of the logging framework used. This enables developers to leverage available tooling, such as the docker logs command, to troubleshoot containers at runtime.

$ docker ps | grep n3integration/dockerize | awk '{print $1}' | xargs docker logs
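On the application side, routing log records to standard output can be done without any third-party framework. The following is a sketch using java.util.logging (slf4j and log4j2 achieve the same effect through their console appender configuration); the helper name stdoutLogger is an illustrative assumption:

```java
import java.util.logging.LogRecord;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;
import java.util.logging.StreamHandler;

public class StdoutLogging {
    // Configure a logger that writes to stdout rather than a file,
    // so that `docker logs` can pick the messages up.
    static Logger stdoutLogger(String name) {
        Logger logger = Logger.getLogger(name);
        logger.setUseParentHandlers(false); // drop the default stderr handler
        StreamHandler handler = new StreamHandler(System.out, new SimpleFormatter()) {
            @Override
            public synchronized void publish(LogRecord record) {
                super.publish(record);
                flush(); // flush per record so messages appear promptly
            }
        };
        logger.addHandler(handler);
        return logger;
    }

    public static void main(String[] args) {
        stdoutLogger("service").info("service started");
    }
}
```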

Application Configuration

One caveat of working with containers is dealing with an application’s configuration. Since containers are immutable, traditional configuration files are less common, unless you’re providing a default configuration out of the box with settings that work regardless of environment. Don’t be discouraged, though: there are several options for managing your application’s configuration.

Environment Variables

This is the least intrusive option and adheres to the twelve-factor app methodology. However, care should be taken not to expose secrets as unencrypted environment variables. In the following example, the environment variable USER is passed to the container.

$ docker run -p4567:4567 -d -eUSER="n3integration" n3integration/dockerize
$ curl localhost:4567
$ docker ps | grep n3integration/dockerize | awk '{print $1}' | xargs docker rm -f
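Inside the application, an environment variable read should tolerate the variable being unset, for example when the service runs outside Docker. A minimal sketch (the helper name userOrDefault and the fallback value are illustrative assumptions):

```java
public class Config {
    // Return the configured value, falling back to a default when the
    // environment variable is unset or empty.
    static String userOrDefault(String value, String fallback) {
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    public static void main(String[] args) {
        String user = userOrDefault(System.getenv("USER"), "anonymous");
        System.out.println("Hello, " + user);
    }
}
```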

Volumes

Docker provides volumes as a way to mount external state into a container at runtime, which comes in handy for configuration files. If, for example, you have multiple containers running on a single host, a local directory with environment-specific configuration files can be loaded into each container at runtime by mounting the directory as a volume. This simplifies deployments and reduces the number of runtime dependencies. In the following example, the local data directory is mounted as the /data directory of the container. The output from the curl command should differ from the previous example.

$ docker run -p4567:4567 -d -v $(pwd)/data:/data n3integration/dockerize
$ curl localhost:4567
$ docker ps | grep n3integration/dockerize | awk '{print $1}' | xargs docker rm -f
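A volume-mounted configuration file should likewise be optional, so the container still starts with sane defaults when no volume is attached. A sketch using java.util.Properties (the file name /data/app.properties and the greeting key are illustrative assumptions, not the repository’s actual layout):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Properties;

public class VolumeConfig {
    // Load settings from a mounted volume, falling back to built-in
    // defaults when the file is absent (e.g. no volume was mounted).
    static Properties load(Path file, Properties defaults) {
        Properties props = new Properties(defaults);
        if (Files.isReadable(file)) {
            try (InputStream in = Files.newInputStream(file)) {
                props.load(in);
            } catch (IOException e) {
                // fall through to the defaults on a read error
            }
        }
        return props;
    }

    public static void main(String[] args) {
        Properties defaults = new Properties();
        defaults.setProperty("greeting", "Hello, world");
        Properties config = load(Paths.get("/data/app.properties"), defaults);
        System.out.println(config.getProperty("greeting"));
    }
}
```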

Configuration Store

For more complex installations, a configuration key/value store such as Consul, etcd, or ZooKeeper (commonly accessed through Curator) can be used to host and query configuration settings. The most common use case that benefits from a key/value store is service discovery. Service discovery is especially important when running multiple instances of the same service on a single host, since they would normally bind to the same port. With Docker containers, specifying a host port is optional and should be avoided unless necessary, for example when at most one instance of a service should run on a single host. Service discovery is outside the scope of this post; more can be read about the topic in the link provided above.
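As a flavor of what querying such a store involves: Consul’s HTTP KV API returns the stored value base64-encoded inside a JSON body. The sketch below decodes that field with only the JDK; the regex-based extraction and the class name KvClient are illustrative assumptions, and a real service would use a JSON library or an official client instead:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class KvClient {
    private static final Pattern VALUE = Pattern.compile("\"Value\"\\s*:\\s*\"([^\"]*)\"");

    // Extract and decode the base64-encoded "Value" field from a
    // Consul KV JSON response body; returns null if no value is present.
    static String decodeValue(String json) {
        Matcher m = VALUE.matcher(json);
        if (!m.find()) {
            return null;
        }
        return new String(Base64.getDecoder().decode(m.group(1)), StandardCharsets.UTF_8);
    }
}
```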

Ready to Get Started?
  • Safer deployments
  • More frequent deployments
  • More resilient deployments