
Crafting perfect Java Docker build flow

Docker Tutorial | March 22, 2017
TL;DR

What is the bare minimum you need to build, test, and run your Java application in a Docker container?

The recipe: Create a separate Docker image for each step and optimize the way you are running it.

Introduction

I started working with Java in 1998, and for a long time, it was my main programming language. It was a long love–hate relationship.

During my work career, I wrote a lot of code in Java. Despite that fact, I don’t think Java is usually the right choice for microservices.

But, sometimes you have to work with Java. Maybe Java is your favorite language and you do not want to learn a new one, or you have a legacy code that you need to maintain, or your company decided on Java and you have no other option.

Whatever reason, if you have to marry Java with Docker, you better do it properly.

In this post, I will show you how to create an effective Java-Docker build pipeline to consistently produce small, efficient, and secure Docker images.

Spoiler alert: it’s going to be a long article. To follow along, please create a free Codefresh account by adding one of your repos, and start building, testing, and deploying Docker images instantly.

Be careful

There are plenty of “Docker for Java developers” tutorials out there that unintentionally encourage some Docker bad practices.

Following tutorials like these, you will get huge Docker images and long build times.

For the current demo project, the first two such tutorials took around 15 minutes to build (first build) and produced images of 1.3GB each.

Do yourself a favor and do not follow these tutorials!

What should you know about Docker?

Developers new to Docker are often tempted to think of it as just another VM. Instead, think of Docker as a “child process”. The files and packages needed for an entire VM are very different from those needed by just another process running on a dev machine. Docker is even better than a child process because it allows better isolation and environmental control.

If you’re new to Docker, I suggest reading the Understanding Docker article. Docker isn’t so complex that any developer can’t understand how it works.

Dockerizing Java application

What files need to be included in a Java Application’s Docker image?

Since Docker containers are just isolated processes, your Java Docker image should only contain the files required to run your application.

What are these files?

It starts with a Java Runtime Environment (JRE). The JRE is a software package that has everything required to run a Java program. It includes an implementation of the Java Virtual Machine (JVM) together with an implementation of the Java Class Library.

I recommend using OpenJDK JRE. OpenJDK is licensed under GPL with Classpath Exception. The Classpath Exception part is important. This license allows using OpenJDK with any software license, not just the GPL. In particular, you can use OpenJDK in proprietary software without disclosing your code.

Before using Oracle’s JDK/JRE, please read the following post: “Running Java on Docker? You’re Breaking the Law”.

Since it’s rare for Java applications to be developed using only the standard library, you most likely need to also add 3rd party Java libraries. Then add the application compiled bytecode as plain Java Class files or packaged into JAR archives. And, if you are using native code, you will need to add corresponding native libraries/packages too.

Choosing a base Docker image for Java Application

In order to choose the base Docker image, you need to answer the following questions:

  • What native packages do you need for your Java application?
  • Should you choose Ubuntu or Debian as your base image?
  • What is your strategy for patching security holes, including packages you are not using at all?
  • Do you mind paying extra (money and time) for network traffic and storage of unused files?

Some might say: “but, if all your images share the same Docker layers, you only download them once, right?”

That’s true in theory, but reality is often very different.

Usually, you have lots of different images: some built recently, others a long time ago, others pulled from DockerHub. These images do not all share the same base image or version. You would need to invest a lot of time to align them to a single, shared base image and then keep all of them up-to-date (for no good reason).

Some might say: “but, who cares about image size? we download them just once and run forever”.

Docker image size is actually very important.

The size has an impact on …

  • network latency – need to transfer Docker image over the web
  • storage – need to store all these bits somewhere
  • service availability and elasticity – when using a Docker scheduler, like Kubernetes, Swarm, DC/OS or other (scheduler can move containers between hosts)
  • security – do you really, I mean really need the libpng package with all its CVE vulnerabilities for your Java application?
  • development agility – small Docker images == faster build time and faster deployment

Without being careful, Java Docker images tend to grow to enormous sizes. I’ve seen 3GB Java images where the required code and JAR libraries only take 150MBs.

Consider using the Alpine Linux image, which is only about 5MB, as a base Docker image. Lots of “official Docker images” have an Alpine-based flavor.

Note: many, but not all, Linux packages have versions compiled against the musl C runtime library. Sometimes you need a package compiled against glibc (the GNU C runtime library). The frolvlad/alpine-glibc image, based on Alpine Linux, adds glibc to support proprietary projects compiled against it (e.g. OracleJDK, Anaconda).

Choosing the right Java Application server

Frequently, you also need to expose some kind of interface to reach your Java application running in a Docker container.

When you deploy Java applications with Docker containers, the default Java deployment model changes.

Originally, Java server-side deployment assumed you had a pre-configured Java Application server (Tomcat, WebLogic, JBoss, or other), and that you deployed your application to it as a WAR (Web Archive) package, running it alongside other applications on the same server.

Lots of tools are developed around this concept, allowing you to update running applications without stopping the Java Application server, route traffic to the new application, resolve possible class loading conflicts and more.

With Docker-based deployments, you do not need these tools anymore, you don’t even need the fat enterprise-ready Java Application servers. The only thing you need is a stable and scalable network server that can serve your API over HTTP/TCP or other protocol of your choice. Search Google for “embedded Java server” and take one that you like most.

For this demo, I forked Spring Boot’s REST example and modified it a bit. The demo uses Spring Boot with an embedded Tomcat server. Here’s my fork on GitHub (blog branch).

Building a Java Application Docker image

In order to run this demo, I need to create a Docker image with JRE, the compiled and packaged Java application, and all 3rd party libraries.

Here’s the Dockerfile I used to build my image. This demo Docker image is based on slim Alpine Linux with OpenJDK JRE and contains the application WAR file with all dependencies embedded into it. It’s just the bare minimum required to run the demo application.
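A minimal sketch of such a Dockerfile is shown below; the base image matches the layers listed further down, but the WAR file name is an assumption:

```Dockerfile
# Slim base: Alpine Linux with the OpenJDK 8 JRE only (no JDK, no build tools)
FROM openjdk:8-jre-alpine

# Copy the packaged application; the WAR file name is an assumption for this sketch
COPY target/demo.war /app.war

# Spring Boot's embedded Tomcat listens on 8080 by default
EXPOSE 8080

CMD ["java", "-jar", "/app.war"]
```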

To build the Docker image, run the following command:
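Assuming the Dockerfile sits in the project root, this is a standard docker build invocation (the image name sbdemo/run:latest matches the summary at the end of this post):

```sh
docker build -t sbdemo/run:latest .
```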

Running the docker history command on the created Docker image will let you see all layers that make up this image:

  • 4.8MB Alpine Linux Layer
  • 103MB OpenJDK JRE Layer
  • 61.8MB Application WAR file

Running the Java Application Docker container

In order to run the demo application, run the following command:
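A typical invocation publishes the embedded server’s port to the host (port 8080 is an assumption based on Spring Boot defaults):

```sh
docker run -d --name sbdemo -p 8080:8080 sbdemo/run:latest
```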

Let’s check that the application is up and running (I’m using the httpie tool here):
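For example, assuming the container’s port is published to localhost:8080 (the /greeting path comes from the Spring Boot REST example this demo is forked from):

```sh
http localhost:8080/greeting
```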

Setting Docker container memory constraints

One thing you need to know about Java process memory allocation is that in reality it consumes more physical memory than specified with the -Xmx JVM option. The -Xmx option specifies only the maximum Java heap size. But the Java process is a regular Linux process, and what is interesting is how much actual physical memory this process consumes.

Or in other words – what is the Resident Set Size (RSS) value for running a Java process?

Theoretically, in the case of a Java application, the required RSS size can be calculated by:

RSS = Heap size (-Xmx) + MetaSpace + OffHeap

where OffHeap consists of thread stacks, direct buffers, mapped files (libraries and JARs), and the JVM code itself.

There is a very good post on this topic: Analyzing java memory usage in a Docker container by Mikhail Krestjaninoff.

When using the --memory option in docker run, make sure the limit is larger than what you specify for -Xmx.
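As a rule of thumb, leave the container limit well above the heap limit; the exact margin depends on your app’s off-heap usage. For example (the WAR path is an assumption for this sketch):

```sh
# 256MB max heap, 512MB container limit: the difference covers
# metaspace, thread stacks, direct buffers, and JVM code
docker run -d --memory 512m sbdemo/run:latest \
    java -Xmx256m -jar /app.war
```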

Offtopic: Using OOM Killer instead of GC

There is an interesting JDK Enhancement Proposal (JEP) by Aleksey Shipilev: Epsilon GC. This JEP proposes a GC that only handles memory allocation, but does not implement any actual memory reclamation mechanism.

This GC, combined with --restart (Docker restart policy) should theoretically allow supporting “Extremely short lived jobs” implemented in Java.

For ultra-performance-sensitive applications, whose developers are conscious about memory allocations or want to create completely garbage-free applications, a GC cycle may be considered an implementation bug that wastes cycles for no good reason. In such use cases, it could be better to let the OOM (Out of Memory) Killer kill the process and use Docker’s restart policy to restart it.

Anyway, Epsilon GC is not available yet, so for the moment it’s just an interesting theoretical use case.

Building Java applications with Builder container

As you may have noticed, in the previous step I did not explain how I created the application WAR file.

Of course, there is a Maven project file, pom.xml, which every Java developer should be familiar with. But in order to actually build the application, you need to install the same Java build tools (JDK and Maven) on every machine where you build it. You need the same versions, the same repositories, and the same configurations. While that’s possible, managing different projects that rely on different tools, versions, configurations, and development environments can quickly become a nightmare.

What if you want to run a build on a clean machine that does not have Java or Maven installed? What should you do?

Java Builder Container

Docker can help here too. With Docker, you can create and share portable development and build environments. The idea is to create a special Builder Docker image, that contains all tools you need to properly build your Java application, e.g.: JDK, Ant, Maven, Gradle, SBT or others.

To create a really useful Builder Docker image, you need to know how your Java Build tools work and how docker build invalidates build cache. Without proper design, you will end up with ineffective and slow builds.

Running Maven in Docker

Java development is hard to imagine without extra build tools. There are multiple Java build tools out there, but most of them share similar concepts and serve the same goals: resolving cumbersome package dependencies and running different build tasks, such as compile, lint, test, package, and deploy.

While most of these tools were created nearly a generation ago, they are still very popular and widely used by Java developers.

In this post, I will use Maven, but the same approach can be applied to Gradle, SBT, and other less popular Java Build tools.

It’s important to learn how your Java build tool works and how it’s tuned. Apply this knowledge when creating a Builder Docker image and when deciding how to run a Builder Docker container.

Maven uses the project-level pom.xml file to resolve project dependencies. It downloads missing JAR files from private and public Maven repositories, and caches these files for future builds. Thus, the next time you run your build, it won’t download anything if your dependencies have not changed.

Official Maven Docker image: should you use it?

The Maven team provides an official Docker image. There are multiple images (under different tags) that allow you to select an image that can answer your needs. Take a deeper look at the Dockerfile files and mvn-entrypoint.sh shell scripts when selecting Maven image to use.

There are two flavors of official Maven Docker images: regular images (JDK version, Maven version, and Linux distro) and onbuild images.

What is the official Maven image good for?

The official Maven image does a good job containerizing the Maven tool itself. Using such an image, you can run a Maven build on any machine without installing a JDK and Maven.

Example: running mvn clean install on local folder
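A command along these lines runs the build entirely inside the official Maven container, mounting the project from the host (the tag is an assumption; pick the one matching your JDK):

```sh
docker run -it --rm \
    -v "$PWD":/usr/src/app \
    -w /usr/src/app \
    maven:3-jdk-8 \
    mvn clean install
```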

By default, the Maven local repository in the official Maven images is placed inside a Docker data volume. That means all downloaded dependencies are not part of the image and will disappear once the Maven container is destroyed. If you do not want to download dependencies on every build, mount Maven’s repository Docker volume onto some persistent storage (at least a local folder on the Docker host). When setting up your builds on Codefresh, it’s a simple matter of overriding the MAVEN_CONFIG environment variable to store the cache in the persistent volume, for example /codefresh/volume/.m2.

Example: running mvn clean install on local folder with properly mounted Maven local repository
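The same build, but with the host’s ~/.m2 folder mounted over the container’s Maven local repository, so downloaded dependencies persist between builds:

```sh
docker run -it --rm \
    -v "$PWD":/usr/src/app \
    -v "$HOME/.m2":/root/.m2 \
    -w /usr/src/app \
    maven:3-jdk-8 \
    mvn clean install
```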

Now, let’s take a look at onbuild Maven Docker images.

What is Maven onbuild image?

Maven onbuild Docker images exist to “simplify” a developer’s life by letting them skip writing a Dockerfile. Actually, a developer should still write a Dockerfile, but with an onbuild image it’s usually enough to have a single line in it:
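That single line is just the FROM instruction (the exact tag varies by JDK and Maven version):

```Dockerfile
FROM maven:3-jdk-8-onbuild
```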

Looking into the onbuild Dockerfile on the GitHub repository
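The relevant instructions look roughly like this (a paraphrase, not an exact copy of the upstream file):

```Dockerfile
ONBUILD ADD . /usr/src/app
ONBUILD RUN mvn install
```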

… you can see several Dockerfile commands with the ONBUILD prefix. The ONBUILD tells Docker to postpone the execution of these build commands until building a new image that inherits from the current image.

In our example, two build commands will be executed when you build an application Dockerfile created FROM maven:<version>-onbuild:

  1. Add current folder (all files, if you are not using .dockerignore) to the new Docker image
  2. Run mvn install target

The onbuild Maven Docker image is not as useful as the previous image.

First of all, it copies everything from the current directory, so do not use it without a properly configured .dockerignore file.

Then, think: what kind of image are you trying to build?

The new image, created from the onbuild Maven Docker image, includes the JDK, Maven, the application code (and potentially all files from the current directory), and all files produced by the Maven install phase (the compiled, tested, and packaged app, plus lots of build junk files you do not really need).

So, this Docker image contains everything but, for some strange reason, does not contain a local Maven repository. I have no idea why the Maven team created this image.

Recommendation: Do not use Maven onbuild images!

I will show you how to create a proper Builder image later in this post.

Where to keep the Maven cache?

Official Maven Docker images keep Maven’s cache folder outside of the container, exposing it as a Docker data volume using the VOLUME /root/.m2 command in the Dockerfile. A Docker data volume is a directory within one or more containers that bypasses the Docker Union File System; in simple words, it’s not part of the Docker image.

What you should know about Docker data volumes:

  • Volumes are initialized when a container is created.
  • Data volumes can be shared and reused among containers.
  • Changes to a data volume are made directly to the mounted endpoint (usually some directory on host, but can be some storage device too)
  • Changes to a data volume will not be included when you update an image or persist Docker container.
  • Data volumes persist even if the container itself is deleted.

So, in order to reuse Maven cache between different builds, mount a Maven cache data volume to some persistent storage (for example, a local directory on the Docker host).
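A command along these lines matches the description that follows (the mvn install goal is an assumption; any goal that resolves dependencies will populate the cache):

```sh
docker run -it --rm \
    -v "$PWD"/pom.xml:/usr/src/app/pom.xml \
    -v "$HOME/.m2":/root/.m2 \
    -w /usr/src/app \
    maven:3-jdk-8 \
    mvn install
```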

The command above runs the official Maven Docker image (Maven 3 and OpenJDK 8), mounts the project’s pom.xml file into the working directory, and mounts the "$HOME"/.m2 folder as the Maven cache data volume. Maven running inside this Docker container will download all required JAR files into the host’s local $HOME/.m2 folder. The next time you create a new Maven Docker container for the same pom.xml file with the same cache mount, Maven will reuse the cache and download only missing or updated JAR files.

Maven Builder Docker image

First, let’s try to formulate what a Builder Docker image is and what it should contain.

A Builder is a Docker image that contains everything you need to create a reproducible build on any machine and at any point in time.

So, what should it contain?

  • Linux shell and some tools – I prefer Alpine Linux
  • JDK (version) – for the javac compiler
  • Maven (version) – Java build tool
  • Application source code and pom.xml file/s – it’s the application code SNAPSHOT at specific point of time; just code, no need to include a .git repository or other files
  • Project dependencies (Maven local repository) – all pom and JAR files you need to build and test Java applications, at any time, even offline, even if library disappears from the web

The Builder image captures code, dependencies, and tools at a specific point of time and stores them inside a Docker image. The Builder container can be used to create the application “binaries” on any machine, at any time and even without internet connection (or with poor connection).

Here is the sample Dockerfile for my demo Builder:
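The Dockerfile below is a sketch consistent with the command-by-command breakdown that follows; the Maven version and download URL are assumptions:

```Dockerfile
# Select and freeze the JDK version: OpenJDK 8 on Alpine Linux
FROM openjdk:8-jdk-alpine

# Allow overriding Maven version and local repository location at build time
ARG MAVEN_VERSION=3.5.0
ARG USER_HOME_DIR="/root"

# Speed up the Maven JVM a bit
ENV MAVEN_OPTS="-XX:+TieredCompilation -XX:TieredStopAtLevel=1"

# Download and install (untar and ln -s) Apache Maven; the mirror URL is an assumption
RUN apk add --no-cache curl tar \
  && mkdir -p /usr/share/maven \
  && curl -fsSL https://archive.apache.org/dist/maven/maven-3/${MAVEN_VERSION}/binaries/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
     | tar -xzC /usr/share/maven --strip-components=1 \
  && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn

ENV MAVEN_HOME /usr/share/maven
ENV MAVEN_CONFIG "${USER_HOME_DIR}/.m2"

WORKDIR /usr/src/app

# Copy only pom.xml first: this layer is rebuilt only when pom.xml changes
COPY pom.xml /usr/src/app/

# Download project dependencies into the image, then drop build artifacts
RUN mvn -T 1C install && rm -rf target

# Copy project source files (source, tests, and resources)
COPY src /usr/src/app/src
```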

Let’s go over this Dockerfile and the reasoning behind each command.

  • FROM: openjdk:8-jdk-alpine – select and freeze JDK version: OpenJDK 8 and Linux Alpine
  • Install Maven
    • Speed up Maven JVM a bit: MAVEN_OPTS="-XX:+TieredCompilation -XX:TieredStopAtLevel=1", read the following post
    • RUN mkdir -p ... curl ... tar ... – download and install (untar and ln -s) Apache Maven
    • ARG ... – Use build arguments to allow overriding Maven version and local repository location (MAVEN_VERSION and USER_HOME_DIR) with docker build --build-arg ...
  • RUN mvn -T 1C install && rm -rf target – download project dependencies:
    • Copy project pom.xml file, run mvn install command, and remove build artifacts (as far as I know, there is no Maven command that will let you download without installing)
    • This Docker image layer will be rebuilt only when project’s pom.xml file changes
  • COPY src /usr/src/app/src – copy project source files (source, tests, and resources)

Note: if you are using Maven Surefire plugin and want to have all dependencies for the offline build, make sure to lock down Surefire test provider.

When you build a new Builder version, I suggest you use the --cache-from option, passing the previous Builder image to it. This will allow you to reuse any unmodified Docker layers and avoid obsolete downloads most of the time (if pom.xml did not change, or you did not decide to upgrade Maven or the JDK).

Use Builder container to run tests
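With all code and dependencies already inside the Builder image, running tests is a single docker run; the image name sbdemo/builder:mvn matches the summary below, and the target mount exposes surefire reports to the host:

```sh
docker run -it --rm \
    -v "$PWD"/target:/usr/src/app/target \
    sbdemo/builder:mvn \
    mvn -T 1C test
```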

Use Builder container to create application WAR
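Packaging works the same way; the WAR file lands in the mounted target folder on the host:

```sh
docker run -it --rm \
    -v "$PWD"/target:/usr/src/app/target \
    sbdemo/builder:mvn \
    mvn -T 1C package
```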


Summary

Take a look at the images below:

  • sbdemo/run:latest  – Docker image for demo runtime: Alpine, OpenJDK JRE only, demo WAR
  • sbdemo/builder:mvn  – Builder Docker image: Alpine, OpenJDK 8, Maven 3, code, dependencies
  • sbdemo/tutorial:1  – Docker image created following first tutorial (just for reference)
  • sbdemo/tutorial:2  – Docker image created following second tutorial (just for reference)

Bonus: Build flow automation

In this section, I will show how to use Docker build flow automation service to automate and orchestrate all steps from this post.

Build Pipeline Steps

Here is the list of steps you need to complete:

  1. Create Maven Builder Docker image
  2. Run tests and store test results
  3. Compile the application code and assemble the application WAR file
  4. Build the application Docker image
  5. Push the application Docker image to a Docker Registry

It’s possible to execute these steps manually. But it’s better to automate them and avoid typing long commands. You can use a Bash script, Makefile, or some other tool. In this post, I will show how to use Codefresh Docker CI/CD service (the company I work for) to automate the Java-Docker build pipeline for this demo.

Java Docker build pipeline automation with Codefresh

Using Codefresh, you can define automated CI/CD pipelines for your Docker images.

The Codefresh YAML syntax is pretty straightforward:

  • it contains an ordered list of steps
  • each step has a type:
    • build – for docker build command
    • push – for docker push
    • composition – for creating test or run environment, specified with docker-compose
    • freestyle (default if not specified) – for docker run command
  • the /codefresh/volume/ data volume (git clone and files generated by steps) is mounted into each step
  • current working directory for each step is set to /codefresh/volume/ by default (can be changed)

For a more detailed description and other examples, take a look at the build steps documentation.

For my demo flow, I’ve created the following automation steps:

  1. mvn_builder – create the Maven Builder Docker image
  2. mv_test – execute tests in the Builder container, placing test results into the /codefresh/volume/target/surefire-reports/ data volume folder
  3. mv_package – create the application WAR file, placing the created file into the /codefresh/volume/target/ data volume folder
  4. build_image – build the application Docker image with the JRE and application WAR file
  5. push_image – tag and push the application Docker image to DockerHub

Here is the full Codefresh YAML:
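A pipeline along these lines matches the steps above; the step names follow the list, but the image names, Dockerfile names, and other details are assumptions:

```yaml
version: '1.0'
steps:
  mvn_builder:
    type: build
    description: create Maven Builder Docker image
    dockerfile: Dockerfile.build
    image_name: sbdemo/builder
    tag: mvn
  mv_test:
    description: execute tests in Builder container
    image: ${{mvn_builder}}
    commands:
      - mvn -T 1C test
      - cp -r target/surefire-reports /codefresh/volume/target/surefire-reports
  mv_package:
    description: create application WAR file
    image: ${{mvn_builder}}
    commands:
      - mvn -T 1C package
      - cp target/*.war /codefresh/volume/target/
  build_image:
    type: build
    description: build application Docker image with JRE and WAR
    dockerfile: Dockerfile
    image_name: sbdemo/run
  push_image:
    type: push
    description: tag and push the application Docker image to DockerHub
    candidate: ${{build_image}}
    tag: latest
```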


Hope you find this post useful. I look forward to your comments and any questions you have. New to Codefresh? Schedule a free onboarding and start building, testing, and deploying Docker images faster than ever.

 

About Alexei Ledenev

Alexei is an experienced software architect and HPE distinguished technologist. He currently works at Codefresh as the Chief Researcher, focusing lately on #docker, #golang and #aws. In his spare time, Alexei maintains a couple of Docker-centric open-source projects, writes tech blog posts, and enjoys traveling and playing with his kids. https://github.com/gaia-adm/pumba


Comments

  1. This is a good start, but far from perfect 🙂 just some points out of my head:

    We separate strictly between build images and deploy images, in build we have maven, curl, bash etc, everything you need to debug and stop/restart processes in containers while running. We have a bash shell wrapper that runs as a daemon on process 0, so we can start and stop the java app from inside the container. This is mainly for debug and tuning purposes.

    In prod we have tight images with only a minimized jre and minimum alpine packages needed to run the code. no maven. We use the nexus2 rest api directly. Saves around 50MB of base installation image before pushing the artifacts onto it.

    use su-exec to run as a non-root user that only can execute its own jars/wars and write to logdir. Without password and with root disabled. Means that any security issue can only affect this container and the running program. impossible to log into the container once it has started, you can only shut it down, it becomes immutable.

    I like to use shaded jars to keep everything into one large jar, speeds up running and prevents us from packaging multiple versions of the same jar in large projects.
    It is also easy to debug on the command line.

    Docker syslogs should go to an external log volume or be managed by a log-aggregator. We use Loggly, ELK and Splunk. There are lots of ways to configure this. I like to log to an external volume and let the host take care of the log-aggregating. Makes the image smaller this way.

    And a repository manager like Nexus2(maven)/Nexus3 (Docker, npm, pypi, gems) is a must. Unless you pay for a private dockerhub registry it is all public.

  2. Great article, I am using docker and maven myself for a while now and this was exactly what I was missing all the time.

  3. One thing to add.
    I tried the approach and when making the builder image itself for one of our companies artifacts I had to add both files settings.xml (as simple as COPY settings.xml $MAVEN_CONFIG/settings.xml) and resolv.conf the latter which I copied to /etc/resolv.conf (ADD resolv.conf /etc/resolv.conf).
    Background: Some of the dependencies are on an artifactory repository requiring authentication hence I needed to add dns informations search and nameserver in resolv.conf accordingly plus the according sections to settings.xml for authentication.
    Probably that setup needed revision in case one pushed it to a public repository due to the authentication strings.

    • You are right. In this post, I assume that all Maven artifacts are publically available. In case of using private repositories, you might need to include additional configuration steps, like you’ve mentioned.

      Thank you.

  4. For the build image wouldn’t it make sense to also make the Source a volume as well? Mount this on the Docker Host so that external IDEs could modify the code. You also wouldn’t have to pay the price of rebuilding your docker image on each and every change/compile/test.

    • Mounting source as a VOLUME is one of the common practices and I’m also doing this for some projects. Here, in this post, I’ve tried to present a different approach. Embedding source code into the Builder image, allows you to achieve self-contained and 100% repeatable build environment (even without source control). The core idea is that you capture build tools, build configuration and even source code at a specific point of time into a Docker container.
      Now you can run this Builder container on any machine to compile a specific version of your application. And this machine does not need any tools installed, besides Docker, it does not need even access to source repository in order to build.
      So, there are two approaches (and maybe more): capture build tools only or keep source code too. Each approach has its own use case and benefits, and drawbacks too.

  5. Thanks for the post!
    Did you try “mvn dependency:resolve” for downloading all the dependencies? If you do that you could include only the pom.xml file with the dependencies and not the source code.

    • Thanks for the tip. You are right, it’s enough to include pom.xml file.
      With this post, I wanted to encourage people thinking about Java-Docker build optimization; and I’m sure that proposed flow can be optimized even more.

  6. Thanks for sharing a wonderful article, it is very helpful for java developers.

  7. Hello Alexei, Great post! I’m certainly a newbie Java developer… Can you explain exactly what you mean

    Under the “Maven Builder Docker image” section, you mentioned after the sample Dockerfile for “my demo Builder”,

    Are you suggesting the Maven Builder Docker image? I modified my POM to contain the Surefire plugin (surefire-junit4) and I’d like to execute the tests in off-line mode. Running tests works on my target platform in on-line mode but fails off-line. Do I need to have a “RUN mvn install” call directly in the builder Dockerfile? It’s the only way I could see to do this off-line. Can you suggest a better or correct method? Thanks.

    mvn install:install-file -DgroupId=org.apache.maven.surefire -DartifactId=surefire-junit4 -Dversion=2.18.1 -Dpackaging=jar
    -Dfile=surefire-junit4-2.18.1.jar

    -Graeme

    • Hi, Graeme! Thank you for your feedback!

      Answering your question: the basic idea is simple, you need to keep ALL tools and libraries you need to build and test your project inside Builder image. The Builder image should not change a lot, only when you are upgrading dependency or tool/lib version. This image can be huge. The main effort is to put into it things that do not change frequently and this will help you to enjoy Docker cache, making Builder build pretty fast. With app image, take minimalistic approach: store there only JRE, compiled java classes and all files that your app requires at runtime.

      As for Surefire plugin, make sure to install it (with locked config) inside Builder image and avoid redownloading it for every build. I do not remember exact command to do so, the one you’ve posted looks OK (or google for the right command)

      Good luck

  8. Really impressive !! Docker is even better than a child process because it allows better isolation and environmental control. many thanks for sharing this.

Hi, thank you for your tutorial.

    I have a Maven-based application I want to dockerize.

    The build process fails on this line:

    RUN mvn -T 1C install && rm -rf target

  10. Nice work! A few points.

    Don’t encourage the use of Maven. The front line of the Java ecosphere has mostly moved to Gradle – and none too soon.
    Modern build system plugins obviate the need to get involved with Dockerfiles. E.g. for Gradle, try https://github.com/Transmode/gradle-docker. Configuration can be as minimal as:

    docker {
      baseImage "openjdk:alpine"
    }

    and voila, the build pushes a ready-made Java runnable into the repo.

  11. [INFO] Packaging webapp
    [INFO] Assembling webapp [webapp] in [/elimu/target/literacyapp-SNAPSHOT]
    [INFO] Processing war project
    [INFO] Webapp assembled in [667 msecs]
    [INFO] Building war: /elimu/target/literacyapp-SNAPSHOT.war
    [INFO] ————————————————————————
    [INFO] BUILD FAILURE
    [INFO] ————————————————————————
    [INFO] Total time: 01:18 min
    [INFO] Finished at: 2017-11-23T12:10:18Z
    [INFO] Final Memory: 22M/234M
    [INFO] ————————————————————————
    [ERROR] Failed to execute goal org.apache.maven.plugins:maven-war-plugin:2.2:war (default-war) on project webapp: Error assembling WAR: webxml attribute is required (or pre-existing WEB-INF/web.xml if executing in update mode) -> [Help 1]

  12. Any Idea how to extend to multi module maven projects?

  13. Hi Alexei,
    great post !!! I’m pretty new to docker and found this a great help.
    One question – you mentioned earlier in the article about having to choose the right application server including WebLogic and recommended using embedded app server instead. Most of our services can run as spring-boot in embedded tomcat so no worries there but we have one or two legacy apps that we have to maintain for the near future that can only run on WebLogic and if you had any recommendation for creating an similar image to the one you outlined but with additionally a weblogic server running ?
    Thanks

    • Thank you.
      Regarding WebLogic server, I had no experience working with it. So cannot advise here, sorry.
      But in general, you should be able to embed any Java App Server into Docker container and auto-deploy single app into it, when building this Docker image.

  14. Really good article!! I’m learning to work with docker and this article was really helpful.

    One question about the performance from the build image, when I build a war file in the container it takes aprox. 14 seconds in my computer and when I do the same locally on the host it only takes aprox. 6 seconds. Is such a difference on performance correct?
