TL;DR
Starting with Docker 17.05+, you can create a single Dockerfile that builds multiple helper images with compilers, tools, and tests, and uses files from those images to produce the final Docker image. Read this simple tutorial and create a free Codefresh account to build, test, and deploy images instantly.
The "core principle" of Dockerfile
Docker can build images by reading the instructions from a Dockerfile. A Dockerfile is a text file that contains a list of all the commands needed to build a new Docker image. The Dockerfile syntax is pretty simple, and the Docker team tries to keep it stable between Docker engine releases.
The core principle is very simple: 1 Dockerfile -> 1 Docker Image.
This principle works just fine for basic use cases, where you just need to demonstrate Docker capabilities or put some "static" content into a Docker image.
Once you advance with Docker and want to create secure and lean Docker images, a single Dockerfile is not enough.
People who insist on following the above principle find themselves with slow Docker builds, huge Docker images (several GB in size), slow deployment times, and lots of CVE violations embedded in those images.
The Docker Build Container pattern
Docker Pattern: The Build Container
The basic idea behind the Build Container pattern is simple:
Create additional Docker images with required tools (compilers, linters, testing tools) and use these images to produce a lean, secure, and production-ready Docker image.
An example of the Build Container pattern for a typical Node.js application:
- Derive FROM a Node base image (for example node:6.10-alpine) with node and npm installed (Dockerfile.build)
- Add package.json
- Install all node modules from dependencies and devDependencies
- Copy application code
- Run compilers, code coverage, linters, code analysis, and testing tools
- Create the production Docker image: derive FROM the same or another Node base image
- Install only the node modules required at runtime (npm install --only=production)
- Expose the PORT and define a default CMD (command to run your application)
- Push the production image to a Docker registry
This flow assumes that you are using two or more Dockerfiles and a shell script or flow tool to orchestrate all of the steps above (a sketch of such a script appears after the two Dockerfiles below).
Example
I use a fork of the Let's Chat Node.js application. Here is the link to our fork.
Builder Docker image with eslint, mocha and gulp
FROM alpine:3.5

# install node
RUN apk add --no-cache nodejs

# set working directory
WORKDIR /root/chat

# copy project file
COPY package.json .

# install node packages
RUN npm set progress=false && \
    npm config set depth 0 && \
    npm install

# copy app files
COPY . .

# run linter, setup and tests
CMD npm run lint && npm run setup && npm run test
Production Docker image with "production" node modules only
FROM alpine:3.5

# install node
RUN apk add --no-cache nodejs tini

# set working directory
WORKDIR /root/chat

# copy project file
COPY package.json .

# install node packages
RUN npm set progress=false && \
    npm config set depth 0 && \
    npm install --only=production && \
    npm cache clean

# copy app files
COPY . .

# Set tini as entrypoint
ENTRYPOINT ["/sbin/tini", "--"]

# application server port
EXPOSE 5000

# default run command
CMD npm run start
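For completeness, here is a minimal sketch of a shell script that could orchestrate this flow with the two Dockerfiles above (the image names and registry here are assumptions, not part of the original project):

#!/bin/sh
set -e

# build the builder image (Dockerfile.build) and run linter, setup and tests inside it
docker build -t local/chat:build -f Dockerfile.build .
docker run --rm local/chat:build

# if the tests pass, build the lean production image and push it
docker build -t myregistry/chat:latest .
docker push myregistry/chat:latest

Any CI flow tool could replace this script; the point is that some external orchestration is required to chain the builder and production builds.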
What is Docker multi-stage build?
Docker 17.05 extends the Dockerfile syntax to support the new multi-stage build by enhancing two commands: FROM and COPY.
The multi-stage build allows using multiple FROM commands in the same Dockerfile. The last FROM command produces the final Docker image; all the other stages produce intermediate images (no final Docker image is produced from them, but all their layers are cached).
The FROM syntax also supports an AS keyword. Use the AS keyword to give the current stage a logical name and reference it later by that name.
To copy files from an intermediate image, use COPY --from=<image_AS_name|image_number>, where the number starts from 0 (but it is better to use a logical name via the AS keyword).
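For illustration, here is a minimal (hypothetical) two-stage Dockerfile using this syntax; the stage name builder and the file path are assumptions:

# first stage, given the logical name "builder"
FROM alpine:3.5 AS builder
RUN echo "built artifact" > /artifact.txt

# final stage: copy the file produced by the "builder" stage
FROM alpine:3.5
COPY --from=builder /artifact.txt /artifact.txt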
Creating a multi-stage Dockerfile for Node.js application
The Dockerfile below makes the Build Container pattern obsolete, allowing you to achieve the same result with a single file.
#
# ---- Base Node ----
FROM alpine:3.5 AS base
# install node
RUN apk add --no-cache nodejs-current tini
# set working directory
WORKDIR /root/chat
# Set tini as entrypoint
ENTRYPOINT ["/sbin/tini", "--"]
# copy project file
COPY package.json .

#
# ---- Dependencies ----
FROM base AS dependencies
# install node packages
RUN npm set progress=false && npm config set depth 0
RUN npm install --only=production
# copy production node_modules aside
RUN cp -R node_modules prod_node_modules
# install ALL node_modules, including 'devDependencies'
RUN npm install

#
# ---- Test ----
# run linters, setup and tests
FROM dependencies AS test
COPY . .
RUN npm run lint && npm run setup && npm run test

#
# ---- Release ----
FROM base AS release
# copy production node_modules
COPY --from=dependencies /root/chat/prod_node_modules ./node_modules
# copy app sources
COPY . .
# expose port and define CMD
EXPOSE 5000
CMD npm run start
The above Dockerfile creates 3 intermediate Docker images and a single release Docker image (the final FROM).
- First image FROM alpine:3.5 AS base – a base Node image with node, npm, tini (init app) and package.json
- Second image FROM base AS dependencies – contains all node modules from dependencies and devDependencies, plus a separate copy of the production dependencies required for the final image only
- Third image FROM dependencies AS test – runs linters, setup and tests (with mocha); if any of these commands fails, no final image is produced
- The final image FROM base AS release – a base Node image with the application code and all node modules from dependencies
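Note that newer Docker versions also let you build only up to a specific stage with the --target flag of docker build; for example, to build and tag just the test stage (the tag name here is an assumption):

$ docker build --target test -t local/chat:test .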
Try Docker multi-stage build today
In order to try the Docker multi-stage build, you need to get Docker 17.05, which is going to be released in May and is currently available on the beta channel.
So, you have two options:
- Use beta channel to get Docker 17.05
- Run a dind (Docker-in-Docker) container
Running Docker-in-Docker 17.05 (beta)
Running Docker 17.05 (beta) in a Docker container (--privileged is required):
$ docker run -d --rm --privileged -p 23751:2375 --name dind \
    docker:17.05.0-ce-dind --storage-driver overlay2
Try the multi-stage build. Add --host=:23751 to every Docker command, or set the DOCKER_HOST environment variable.
$ # using --host
$ docker --host=:23751 build -t local/chat:multi-stage .

$ # OR: setting DOCKER_HOST
$ export DOCKER_HOST=localhost:23751
$ docker build -t local/chat:multi-stage .
Summary
With the Docker multi-stage build feature, it's possible to implement an advanced Docker image build pipeline using a single Dockerfile.
Kudos to Docker team for such a useful feature!
I hope you find this post useful. I look forward to your comments and any questions you have.
PS: Codefresh just added multi-stage build support. Go ahead and create a free Codefresh account to try this out.
New to Codefresh? Schedule a FREE onboarding and start building, testing, and deploying Docker images faster than ever.
Cheers, good explanation and clean Dockerfile!
Thx for the blog. However, the build process failed…
npm ERR! Linux 3.13.0-91-generic
npm ERR! argv “/usr/bin/node” “/usr/bin/npm” “run” “lint”
npm ERR! node v6.9.2
npm ERR! npm v3.10.9
npm ERR! missing script: lint
npm ERR!
npm ERR! If you need help, you may report this error at:
npm ERR!
npm ERR! Please include the following file with any support request:
npm ERR! /root/chat/npm-debug.log
I was thinking there was a way to build only one stage. But it looks like Docker will go through all stages, and only the last stage in the Dockerfile is what will be assigned as the image?
This doesn't help if I want to end on an earlier stage. Such as if I have a dev stage, is there any way to start the container in that stage?
You can always create a new Docker container from any LAYER. Just run the docker history command to see all image layers. Then select some layer, for example b3616e272dc1, and run it as a container:

$ docker run -it --rm b3616e272dc1 sh

If you'd like to keep a specific layer for future use, tag it:

$ docker tag b3616e272dc1 myrepo/myimage:master
This is great! Now I would like to mix this with pkg (https://github.com/zeit/pkg).
I think that would be the ultimate Node app deployment setup 🙂
Interesting article. Would I be correct in saying this mirrors a CI build pipeline?
Correction: The Docker version required for this is 17.05+, not 17.0.5+. That erroneous extra decimal point makes a difference!
Thanks Jay, fixed!
What is the difference between executing npm install in an intermediate container and then copying the result into the final one, vs. just executing npm install in the final container?
Speed.

Why download npm packages twice? For a small project it makes no difference, but for a real project it can take minutes (depending on network latency).

I'd love to see an extended example of this with a compose file – any chance?
Thanks!
This will work only if you configure your lint, setup and test as production dependencies, right?
The basic idea is to have 2 folders in the base image: one with production dependencies and the other with dev dependencies.

Then the test intermediate image can use (copy) the dev dependencies from base, and the release image will copy only the production dependencies.

If the test intermediate image fails some test or lint rule, the final release image won't be built.

Fantastic post! Thank you so much for sharing this kind of wonderful post!
Cool feature!
If a Dockerfile step, say linting, fails, would it stop the build from progressing to the next stage in the chain?
Any failed command (one that exits with a non-zero code) will stop the Docker build.
Btw:

RUN npm install --only=production
# copy production node_modules aside
RUN cp -R node_modules prod_node_modules

That's smart!
Your article is very nice… thanks for sharing your information….
Hello,
Is it possible to launch only a specific stage in the docker-compose file?
I want to run unit tests separately when I want to,
for example:
I want to report the test results in a Jenkinsfile to a specific file, such as report.xml, and I want to launch
thx
A couple of observations:
I haven't seen any solution that isn't running sequentially. We run NPM lint, NPM tests, image build, … all in parallel to maximise speed.
Caching of NPM modules across multiple CI runs – how does this work?
For caching between CI builds, we (at Codefresh) are using high-IOPS network volumes mounted into the builder container, so subsequent builds, even if running on different machines, will reuse the same volume (or its clone, depending on load and git branching).

Very helpful write-up. Thanks Alexei. I went from only knowing a few basics of a regular Dockerfile to having one that reduced my image size from 224MB to 127MB by simply using your pattern of copying folders from a "dependencies" stage. My Dockerfile is also easier to follow now.
One side note: you might consider adding something about using the --target arg with docker build. In my Dockerfile I added a stage for creating the image for use locally (vs in a deployed env). If I want the prod version, I simply use: "docker build -t name:tag ." If I want the local version I use: "docker build --target local -t name:tag ." Here's the Dockerfile for reference:
BEGIN
FROM node:lts-alpine as base
WORKDIR /home/node/app

FROM base as dependencies
COPY . .
RUN npm install -g typescript && \
    npm install --only=production && \
    cp -R node_modules prod_node_modules && \
    npm run build

FROM base as release
COPY --from=dependencies --chown=node:node /home/node/app/prod_node_modules node_modules
COPY --from=dependencies --chown=node:node /home/node/app/dist dist
COPY --from=dependencies --chown=node:node /home/node/app/config config
USER node:node
EXPOSE 4000
ENV NODE_ENV production
ENTRYPOINT ["node", "dist/index.js"]

FROM release as local
COPY --from=dependencies --chown=node:node /home/node/app/some-config-file-that-kubernetes-makes-available-through-a-mount /config/needed-config-file

FROM release
END
This article is very useful for me because I am a Node.js developer. Do you have any post regarding how to install and configure docker?
I am having trouble optimising my Docker build step. Below is my use case:
In my Jenkinsfile I am building 3 Docker images (1 from "docker/test/Dockerfile" and 2 from "docker/dev/Dockerfile").
stage('Build') {
    steps {
        sh 'docker build -t Test -f docker/test/Dockerfile .'
        sh 'set +x && eval $(/usr/local/bin/aws-login/aws-login.sh $AWS_ACCOUNT jenkins eu-west-2) \
            && docker build -t DEV --build-arg S3_FILE_NAME=environment.dev.ts \
            --build-arg CONFIG_S3_BUCKET_URI=s3://bucket \
            --build-arg AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN \
            --build-arg AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION \
            --build-arg AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
            --build-arg AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
            -f docker/dev/Dockerfile .'
        sh 'set +x && eval $(/usr/local/bin/aws-login/aws-login.sh $AWS_ACCOUNT jenkins eu-west-2) \
            && docker build -t QA --build-arg S3_FILE_NAME=environment.qa.ts \
            --build-arg CONFIG_S3_BUCKET_URI=s3://bucket \
            --build-arg AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN \
            --build-arg AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION \
            --build-arg AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
            --build-arg AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
            -f docker/dev/Dockerfile .'
    }
}
stage('Test') {
    steps {
        sh 'docker run --rm TEST npm run test'
    }
}
Below are my two Dockerfiles:
docker/test/Dockerfile:
FROM node:lts
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
RUN apt-key update && apt-get update && apt-get install -y google-chrome-stable
COPY . /usr/src/app
RUN npm install
CMD sh ./docker/test/docker-entrypoint.sh
docker/dev/Dockerfile:
FROM node:lts as dev-builder
ARG CONFIG_S3_BUCKET_URI
ARG S3_FILE_NAME
ARG AWS_SESSION_TOKEN
ARG AWS_DEFAULT_REGION
ARG AWS_SECRET_ACCESS_KEY
ARG AWS_ACCESS_KEY_ID
RUN apt-get update
RUN apt-get install python3-dev -y
RUN curl -O https://bootstrap.pypa.io/get-pip.py
RUN python3 get-pip.py
RUN pip3 install awscli --upgrade
RUN mkdir /app
WORKDIR /app
COPY . .
RUN aws s3 cp “$CONFIG_S3_BUCKET_URI/$S3_FILE_NAME” src/environments/environment.dev.ts
RUN cat src/environments/environment.dev.ts
RUN npm install
RUN npm run build-dev
FROM nginx:stable
COPY nginx.conf /etc/nginx/nginx.conf
COPY –from=dev-builder /app/dist/ /usr/share/nginx/html/
Every time it takes 20-25 mins to build the images. Is there any way I can optimise the Dockerfiles for a better build process? Suggestions are welcome. RUN npm run build-dev uses package.json to install the dependencies, which is one of the reasons it installs all dependencies for every build.
Thanks
This is exactly why we have implemented distributed docker layer caching in Codefresh! https://codefresh.io/docs/docs/configure-ci-cd-pipeline/pipeline-caching/
Thanks for the article, it was very helpful.
Hi,
I have an application with 13 sub Node applications that are interlinked (one linking to another), along with a lot of 3rd-party libraries. They all install properly on Mac. But when I tried to dockerise this and install in the same hierarchical way as locally, it goes well up to the 11th project; the 11th module, which only depends on its sub-modules and has no 3rd-party libs, produces a number of warnings like the following and stops at the end with a max stacktrace error.
npm WARN tar ENOENT: no such file or directory, open ‘/app/mod11/node_modules/.staging/es5-ext-cefe45e3/error/#/throw.js’
npm WARN tar ENOENT: no such file or directory, open ‘/app/mod12/node_modules/.staging/type-217172ab/CHANGELOG.md’
Following is my Dockerfile:
FROM node:10.16.0-alpine
RUN apk --no-cache add \
bash \
g++ \
ca-certificates \
lz4-dev \
musl-dev \
cyrus-sasl-dev \
openssl-dev \
make \
python
RUN apk add --no-cache --virtual .build-deps gcc zlib-dev libc-dev bsd-compat-headers py-setuptools bash
WORKDIR /app/app13
COPY ./app1 /app/app1
COPY ./app2 /app/app2
COPY ./app3 /app/app3
COPY ./app4 /app/app4
COPY ./app5 /app/app5
COPY ./app6 /app/app6
COPY ./app7 /app/app7
COPY ./app8 /app/app8
COPY ./app9 /app/app9
COPY ./app10 /app/app10
COPY ./app11 /app/app11
COPY ./app12 /app/app12
COPY ./Repository /app/Repository
COPY ./app13 /app/app13
RUN npm install -g [email protected]
RUN npm --version && node --version
ENV config ../Repository/dkronline.json
RUN cd ../app1 && npm install --no-package-lock
RUN cd ../app2 && npm install --no-package-lock
RUN cd ../app3 && npm install --no-package-lock
RUN cd ../app4 && npm install --no-package-lock
RUN cd ../app5 && npm install --no-package-lock
RUN cd ../app6 && npm install --no-package-lock
RUN cd ../app7 && npm install --no-package-lock
RUN cd ../app8 && npm install --no-package-lock
RUN cd ../app9 && npm install --no-package-lock
RUN cd ../app10 && npm install --no-package-lock
RUN cd ../app11 && npm install --no-package-lock
RUN cd ../app12 && npm install --no-package-lock
RUN npm install --no-package-lock
CMD [ "node", "app.js", "../Repository/dkronline.json" ]
EXPOSE 3000 4321
What does the docker compose file look like?
It is in the Git Repo https://github.com/codefreshdemo/demochat/blob/master/docker-compose.v3.yml
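For reference, a minimal sketch of what such a compose file might look like (the service names, image tag, and MongoDB dependency shown here are assumptions; the real file is at the link above):

version: '3'
services:
  chat:
    # image produced by the multi-stage build from this article
    image: local/chat:multi-stage
    ports:
      - "5000:5000"
    depends_on:
      - mongo
  mongo:
    # Let's Chat stores its data in MongoDB
    image: mongo:3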