
Create lean Node.js image with Docker multi-stage build



Starting with Docker 17.05, you can create a single Dockerfile that builds multiple helper images with compilers, tools, and tests, and uses files from those images to produce the final Docker image. Read this short tutorial and create a free Codefresh account to build, test, and deploy images instantly.

The “core principle” of Dockerfile

Docker can build images by reading the instructions from a Dockerfile. A Dockerfile is a text file that contains a list of all the commands needed to build a new Docker image. The Dockerfile syntax is pretty simple, and the Docker team tries to keep it stable between Docker engine releases.

The core principle is very simple: 1 Dockerfile -> 1 Docker Image.

This principle works just fine for basic use cases, where you just need to demonstrate Docker capabilities or put some “static” content into a Docker image.
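In its simplest form, this looks like a single Dockerfile producing a single image. A minimal sketch (the base image tag and file paths are illustrative, not from the original post):

```Dockerfile
# one Dockerfile -> one Docker image: serve some "static" content
FROM nginx:stable-alpine
COPY ./static /usr/share/nginx/html
EXPOSE 80
```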

Once you advance with Docker and would like to create secure and lean Docker images, a single Dockerfile is not enough.

People who insist on following the above principle find themselves with slow Docker builds, huge Docker images (several gigabytes in size), slow deployment times, and lots of CVE violations embedded in those images.

The Docker Build Container pattern


The basic idea behind the Build Container pattern is simple:

Create additional Docker images with the required tools (compilers, linters, testing tools) and use these images to produce a lean, secure, and production-ready Docker image.

An example of the Build Container pattern for a typical Node.js application:

  1. Derive FROM a Node base image (for example, node:6.10-alpine) with node and npm installed
  2. Add package.json
  3. Install all node modules from dependencies and devDependencies
  4. Copy the application code
  5. Run compilers, code coverage, linters, code analysis, and testing tools
  6. Create the production Docker image; derive FROM the same or another Node base image
  7. Install only the node modules required at runtime (npm install --only=production)
  8. Expose the PORT and define a default CMD (the command to run your application)
  9. Push the production image to a Docker registry

This flow assumes that you are using two or more Dockerfiles and a shell script or flow tool to orchestrate all steps above.


I use a fork of the Let’s Chat Node.js application.

Builder Docker image with eslint, mocha and gulp

Production Docker image with ‘production’ node modules only
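With two Dockerfiles, the orchestration typically looks something like the shell sketch below. The file names, image tags, and test command here are assumptions for illustration, not taken from the original post:

```shell
#!/bin/sh
set -e

# 1. build the builder image (compilers, eslint, mocha, gulp)
docker build -t letschat:builder -f Dockerfile.build .

# 2. run linters and tests inside the builder container
docker run --rm letschat:builder npm test

# 3. copy production node_modules out of the builder container
docker create --name builder letschat:builder
docker cp builder:/usr/src/app/node_modules ./node_modules
docker rm builder

# 4. build the lean production image and push it
docker build -t myrepo/letschat:latest -f Dockerfile .
docker push myrepo/letschat:latest
```

Keeping this script, two Dockerfiles, and their implicit ordering in sync is exactly the overhead that multi-stage builds remove.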

What is Docker multi-stage build?

Docker 17.05 extends the Dockerfile syntax to support the new multi-stage build by extending two commands: FROM and COPY.

A multi-stage build allows multiple FROM commands in the same Dockerfile. The last FROM command produces the final Docker image; all other stages are intermediate images (no final image is produced from them, but all their layers are cached).

The FROM syntax also supports the AS keyword. Use AS to give the current stage a logical name and reference it later by that name.

To copy files from intermediate images, use COPY --from=<stage_AS_name|stage_number>, where the stage number starts from 0 (but it is better to use a logical name via the AS keyword).
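In a Dockerfile, the two forms look like this (a minimal illustration of the syntax; the stage name and paths are arbitrary):

```Dockerfile
# stage 0, given the logical name "builder"
FROM node:6.10-alpine AS builder
WORKDIR /app
COPY package.json .
RUN npm install

# the last FROM produces the final image
FROM node:6.10-alpine
WORKDIR /app
# copy from the intermediate stage by its logical name
COPY --from=builder /app/node_modules ./node_modules
# equivalently, by stage number: COPY --from=0 /app/node_modules ./node_modules
```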

Creating a multi-stage Dockerfile for Node.js application

The Dockerfile below makes the Build Container pattern obsolete, achieving the same result with a single file.
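The original embedded Dockerfile did not survive extraction; here is a sketch that follows the four stages described next. The package names, port, and npm script names are assumptions for illustration:

```Dockerfile
FROM alpine:3.5 AS base
# base Node image: node, npm, tini (init app) and package.json
RUN apk add --no-cache nodejs tini
WORKDIR /usr/src/app
COPY package.json ./

FROM base AS dependencies
# install production node modules and keep a copy aside for the release image
RUN npm install --only=production && cp -R node_modules prod_node_modules
# install all node modules, including devDependencies
RUN npm install

FROM dependencies AS test
COPY . .
# if linting or tests fail, the build stops and no release image is produced
RUN npm run lint && npm test

FROM base AS release
COPY --from=dependencies /usr/src/app/prod_node_modules ./node_modules
COPY . .
EXPOSE 5000
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["node", "app.js"]
```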


The above Dockerfile creates three intermediate Docker images and a single release Docker image (the final FROM).

  1. The first image, FROM alpine:3.5 AS base – a base Node image with node, npm, tini (init app) and package.json
  2. The second image, FROM base AS dependencies – contains all node modules from dependencies and devDependencies, with an additional copy of the dependencies required for the final image only
  3. The third image, FROM dependencies AS test – runs linters, setup and tests (with mocha); if this RUN command fails, no final image is produced
  4. The final image, FROM base AS release – a base Node image with the application code and all node modules from dependencies

Try Docker multi-stage build today

In order to try the Docker multi-stage build, you need to get Docker 17.05, which is going to be released in May and is currently available on the beta channel.

So, you have two options:

  1. Use beta channel to get Docker 17.05
  2. Run dind container (docker-in-docker)

Running Docker-in-Docker 17.05 (beta)

Running Docker 17.05 (beta) in docker container (--privileged is required):
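The command itself is missing from the extracted page; a sketch consistent with the port mentioned below (the image tag, published port mapping, and storage driver are assumptions):

```shell
# run the Docker 17.05 daemon inside a privileged container,
# publishing its API on host port 23751
docker run -d --rm --privileged -p 23751:2375 \
  --name dind docker:17.05.0-ce-dind --storage-driver overlay2
```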

Try the multi-stage build. Add --host=:23751 to every Docker command, or set the DOCKER_HOST environment variable.


With the Docker multi-stage build feature, it’s possible to implement an advanced Docker image build pipeline using a single Dockerfile.

Kudos to Docker team for such a useful feature!

Hope you find this post useful. I look forward to your comments and any questions you have.

P.S. Codefresh just added multi-stage build support. Go ahead and create a free Codefresh account to try it out.




Alexei Ledenev


Alexei is an experienced software architect and HPE distinguished technologist. He currently works at Codefresh as the Chief Researcher, focusing lately on #docker, #golang and #aws. In his spare time, Alexei maintains a couple of Docker-centric open-source projects, writes tech blog posts, and enjoys traveling and playing with his kids.

29 responses to “Create lean Node.js image with Docker multi-stage build”

  1. Simon Wydooghe says:

    Cheers, good explanation and clean Dockerfile!

  2. Thx for the blog. However, the build process failed…

    npm ERR! Linux 3.13.0-91-generic
    npm ERR! argv “/usr/bin/node” “/usr/bin/npm” “run” “lint”
    npm ERR! node v6.9.2
    npm ERR! npm v3.10.9

    npm ERR! missing script: lint
    npm ERR!
    npm ERR! If you need help, you may report this error at:
    npm ERR!

    npm ERR! Please include the following file with any support request:
    npm ERR! /root/chat/npm-debug.log

  3. I was thinking there was a way to build only one stage, but it looks like Docker will go through all stages, and only the last stage in the Dockerfile is what will be assigned as the image?

    This doesn’t help if I want to end on an earlier stage. For example, if I have a dev stage, is there any way to start the container in that stage?

    1. You can always create a new Docker container from any LAYER. Just run the docker history command to see all image layers. Then select some layer, for example b3616e272dc1, and run it as a container:

      $ docker run -it --rm b3616e272dc1 sh

      If you’d like to keep a specific layer for future use, tag it:

      $ docker tag b3616e272dc1 myrepo/myimage:master

  4. Shane lee says:

    Interesting article. Would I be correct in saying this mirrors a CI build pipeline?

  5. Jay O'Conor says:

    Correction: The docker version required for this is 17.05+, not 17.0.5+. That erroneous extra decimal point makes a difference

    1. Dan Garfield says:

      Thanks Jay, fixed!

  6. What is the difference between executing npm install in an intermediate container and then copying it into the final one, vs. just executing npm install in the final container?

    1. Speed.
      Why download npm packages twice? For a small project it makes no difference, but for a real project it can take minutes (depending on network latency).

    1. The basic idea is to have 2 folders in the base image: one with production dependencies and the other with dev dependencies.
      Then the test intermediate image can use (copy) the dev dependencies from base, and the release image copies only the production dependencies.
      If the test intermediate image fails a test or lint rule, the final release image won’t be built.

  7. Cool feature!

    If the Dockerfile step for, say, linting fails, would it stop it from progressing to the next stage?

  8. Btw:

    RUN npm install --only=production

    copy production node_modules aside

    RUN cp -R node_modules prod_node_modules

    That’s smart!

  9. Hello,

    Is it possible to launch only a specific stage in the docker-compose file ?

    I want to run unit tests separately when I want to.

    For example:

    I want to report tests in a Jenkinsfile to a specific file, such as report.xml, and I want to launch

    junit report.xml


    1. Enrico Stahn says:

      A couple of observations:

      I haven’t seen any solution that isn’t running sequentially. We run npm lint, npm tests, image build, etc. all in parallel to maximise speed.
      How does caching of npm modules across multiple CI runs work?

      1. For caching between CI builds, we (at Codefresh) use high-IOPS network volumes mounted into the builder container, so subsequent builds, even if running on different machines, will reuse the same volume (or its clone, depending on load and git branching).

  10. Very helpful write-up. Thanks Alexei. I went from only knowing a few basics of a regular dockerfile to having one that reduced my image size from 224MB to 127MB by simply using your pattern of copying folders from a “dependencies” stage. My dockerfile is also easier to follow now.

    One side note: you might consider adding something about using the --target arg with docker build. In my dockerfile I added a stage for creating the image for use locally (vs in a deployed env). If I want the prod version, I simply use: “docker build -t name:tag .” If I want the local version I use: “docker build --target local -t name:tag .” Here’s the dockerfile for reference:


    FROM node:lts-alpine as base
    WORKDIR /home/node/app

    FROM base as dependencies
    COPY . .
    RUN npm install -g typescript &&\
    npm install --only=production &&\
    cp -R node_modules prod_node_modules &&\
    npm run build

    FROM base as release
    COPY --from=dependencies --chown=node:node /home/node/app/prod_node_modules node_modules
    COPY --from=dependencies --chown=node:node /home/node/app/dist dist
    COPY --from=dependencies --chown=node:node /home/node/app/config config
    USER node:node
    EXPOSE 4000
    ENV NODE_ENV production
    ENTRYPOINT ["node", "dist/index.js"]

    FROM release as local
    COPY --from=dependencies --chown=node:node /home/node/app/some-config-file-that-kubernetes-makes-available-through-a-mount /config/needed-config-file

    FROM release


  11. This article is very useful for me because I am a Node.js developer. Do you have any post regarding how to install and configure docker?

  12. Vikas Rathore says:

    I am having trouble optimising my Docker build step. Below is my use case:

    In my Jenkinsfile I am building 3 Docker images (1 from “docker/test/Dockerfile” and 2 from “docker/dev/Dockerfile”):

    stage('Build') {
    steps {
    sh 'docker build -t Test -f docker/test/Dockerfile .'
    sh 'set +x && eval $(/usr/local/bin/aws-login/ $AWS_ACCOUNT jenkins eu-west-2) \
    && docker build -t DEV --build-arg \
    --build-arg CONFIG_S3_BUCKET_URI=s3://bucket \
    -f docker/dev/Dockerfile .'
    sh 'set +x && eval $(/usr/local/bin/aws-login/ $AWS_ACCOUNT jenkins eu-west-2) \
    && docker build -t QA --build-arg \
    --build-arg CONFIG_S3_BUCKET_URI=s3://bucket \
    -f docker/dev/Dockerfile .'
    stage('Test') {
    steps {
    sh 'docker run --rm TEST npm run test'

    Below are my two Dockerfiles:


    FROM node:lts

    RUN mkdir /usr/src/app
    WORKDIR /usr/src/app

    ENV PATH /usr/src/app/node_modules/.bin:$PATH

    RUN wget -q -O - | apt-key add -
    RUN sh -c 'echo "deb [arch=amd64] stable main" >> /etc/apt/sources.list.d/google.list'
    RUN apt-key update && apt-get update && apt-get install -y google-chrome-stable

    COPY . /usr/src/app

    RUN npm install

    CMD sh ./docker/test/

    FROM node:lts as dev-builder

    RUN apt-get update
    RUN apt-get install python3-dev -y
    RUN curl -O
    RUN python3

    RUN pip3 install awscli --upgrade

    RUN mkdir /app
    WORKDIR /app
    COPY . .

    RUN aws s3 cp "$CONFIG_S3_BUCKET_URI/$S3_FILE_NAME" src/environments/
    RUN cat src/environments/

    RUN npm install
    RUN npm run build-dev

    FROM nginx:stable
    COPY nginx.conf /etc/nginx/nginx.conf
    COPY --from=dev-builder /app/dist/ /usr/share/nginx/html/
    Every time it takes 20-25 mins to build the images. Is there any way I can optimise the Dockerfiles for a better build process? Suggestions are welcome. RUN npm run build-dev uses package.json to install the dependencies, which is one of the reasons it installs all dependencies for every build.


  13. Miley Cryus says:

    Thanks for the article, it was very helpful.

  14. Hi ,
    I have an application with 13 sub Node applications that are interlinked, one depending on another, along with a lot of 3rd-party libraries. They all install properly on a Mac. But when I try to dockerise this and install in the same hierarchical order as locally, it goes well up to the 11th project; the 11th module, which depends only on its sub-modules and no 3rd-party libs, gives a number of warnings like the following and stops at the end with a max stacktrace error:

    npm WARN tar ENOENT: no such file or directory, open '/app/mod11/node_modules/.staging/es5-ext-cefe45e3/error/#/throw.js'
    npm WARN tar ENOENT: no such file or directory, open '/app/mod12/node_modules/.staging/type-217172ab/'

    Following is my Dockerfile:

    FROM node:10.16.0-alpine

    RUN apk --no-cache add \
    bash \
    g++ \
    ca-certificates \
    lz4-dev \
    musl-dev \
    cyrus-sasl-dev \
    openssl-dev \
    make

    RUN apk add --no-cache --virtual .build-deps gcc zlib-dev libc-dev bsd-compat-headers py-setuptools bash

    WORKDIR /app/app13

    COPY ./app1 /app/app1
    COPY ./app2 /app/app2
    COPY ./app3 /app/app3
    COPY ./app4 /app/app4
    COPY ./app5 /app/app5
    COPY ./app6 /app/app6
    COPY ./app7 /app/app7
    COPY ./app8 /app/app8
    COPY ./app9 /app/app9
    COPY ./app10 /app/app10
    COPY ./app11 /app/app11
    COPY ./app12 /app/app12
    COPY ./Repository /app/Repository

    COPY ./app13 /app/app13

    RUN npm install -g [email protected]
    RUN npm -version && node --version
    ENV config ../Repository/dkronline.json
    RUN cd ../app1 && npm install --no-package-lock
    RUN cd ../app2 && npm install --no-package-lock
    RUN cd ../app3 && npm install --no-package-lock
    RUN cd ../app4 && npm install --no-package-lock
    RUN cd ../app5 && npm install --no-package-lock
    RUN cd ../app6 && npm install --no-package-lock
    RUN cd ../app7 && npm install --no-package-lock
    RUN cd ../app8 && npm install --no-package-lock
    RUN cd ../app9 && npm install --no-package-lock
    RUN cd ../app10 && npm install --no-package-lock
    RUN cd ../app11 && npm install --no-package-lock
    RUN cd ../app12 && npm install --no-package-lock

    RUN npm install --no-package-lock

    CMD [ "node", "app.js", "../Repository/dkronline.json" ]

    EXPOSE 3000 4321

  15. What does the docker compose file look like?
