How to launch additional services in Codefresh pipelines
Simple unit tests that rely only on the source code of the application are very easy to execute in Codefresh using only freestyle steps. For integration tests, however, you usually need to launch either the application itself or one or more external services (such as a database).
Codefresh offers two ways of launching sidecar containers (similar to Docker Compose) within the pipeline:
- Compositions: the older (but still supported) way
- Service containers: the newer and more flexible way
For brand new pipelines we suggest you use service containers. They are much more flexible than compositions in four major areas:
- You can guarantee the launch order of services and their dependencies (a feature not even offered by vanilla docker-compose)
- You can use a special Docker image to preload data into a database or otherwise initialize a service before tests are run
- Service containers can be attached to the whole pipeline instead of individual steps
- The Codefresh shared volume is auto-mounted on freestyle steps (unlike compositions), making file access very easy (e.g. so that you can execute your tests from the Git repository that was cloned)
Notice that this page explains how to run additional services that are automatically discarded once the pipeline is finished. If you are interested in temporary test environments, see the preview environments page.
How integration tests work in Codefresh
Service containers work similarly to docker-compose. A set of containers is launched on the same network with configurable hostnames and ports. Once they are up, you run whatever you need from a freestyle step that is part of the same network. In the most typical pipeline you use your existing test framework (regardless of programming language) in the same manner as you would run your tests locally.
A best practice is to make sure that the hostnames your integration tests use to access external services are not hard-coded. Even though Codefresh lets you choose the hostnames used in the pipeline (e.g. the hostname of a MySQL or Redis instance), in the long run it is better to be able to choose this information freely, without being limited by what is hard-coded in the source code.
Also make sure that your tests do NOT use localhost for an API endpoint. This technique does not work with docker-compose and will not work with Codefresh either. Instead, use the hostname defined in the docker-compose/codefresh.yml file. For example, if you launch a MySQL service at hostname my_db, then your tests should use my_db:3306 as a target. Even better, make the hostname completely configurable with an environment variable (so that you can change it within the Codefresh pipeline at will). Basically, make sure that your integration tests work fine with docker-compose locally on your workstation before converting them to a Codefresh pipeline.
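As an illustration, a freestyle step could pass the service hostname to the tests via environment variables, so the tests never hard-code localhost (the step name, image, variable names, and test command below are hypothetical):

```yaml
run_integration_tests:
  image: node:16                 # hypothetical image containing your test framework
  environment:
    - DB_HOST=my_db              # matches the hostname of the MySQL service container
    - DB_PORT=3306
  commands:
    - npm run integration-test   # the tests read DB_HOST/DB_PORT instead of using localhost
```

The same environment variables can point to a local docker-compose setup on your workstation, which keeps the tests portable between local runs and Codefresh pipelines.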
Notice that the services you launch in a Codefresh pipeline consume resources (memory/CPU) from the pipeline's runtime environment. The more services you launch, the fewer resources you have for the actual pipeline. We also suggest that you do NOT use service containers for load testing or performance testing.
Running integration tests directly from source code
The simplest way to run integration tests is to check out the source code of your tests and launch the services that they need.
This is a very popular way of running integration tests, but not the most flexible one. It works only when your tests have very simple requirements for their testing environment. It also doesn't make a clear distinction between source code that gets shipped to production and source code that is used only for testing. Make sure that you don't fall into the common trap of shipping testing tools inside the Docker container you use in production.
Here is the respective pipeline:
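A minimal sketch of such a pipeline, assuming a hypothetical repository, test image, and MySQL dependency (all names and credentials below are illustrative):

```yaml
version: "1.0"
steps:
  clone:
    title: Clone repository
    type: git-clone
    repo: my-user/my-app          # hypothetical repository
    revision: master
  run_integration_tests:
    title: Run integration tests from source
    image: maven:3-openjdk-11     # hypothetical image with your language toolchain
    working_directory: '${{clone}}'
    commands:
      - mvn verify                # tests connect to my_db:3306, not localhost
    services:
      composition:
        my_db:
          image: mysql:8
          ports:
            - 3306
          environment:
            MYSQL_ROOT_PASSWORD: admin
            MYSQL_DATABASE: test
```

The service container `my_db` shares a network with the freestyle step, so the tests reach the database at the hostname `my_db`.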
We suggest using this technique only if your application is not dockerized yet (i.e. you don’t deploy it with a Docker image to production).
Running tests after launching the application
A better approach (that mimics what happens in reality) is to launch your application as a Docker image and then run tests against it. This approach is only possible if you have adopted containers as deployment artifacts:
This technique is limited only by your pipeline resources. If you have not adopted microservices, it might be difficult to launch a huge monolith as part of a Codefresh pipeline (remember that service containers share resources with the pipeline). But for simple applications this approach ensures that your tests actually hit the running application.
Here is the respective pipeline:
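A sketch of this approach, assuming a hypothetical web application that listens on port 3000 and a Node.js test suite (step names, repository, and commands are illustrative):

```yaml
version: "1.0"
steps:
  clone:
    title: Clone repository
    type: git-clone
    repo: my-user/my-app              # hypothetical repository
    revision: master
  build_app_image:
    title: Build application image
    type: build
    image_name: my-app
    working_directory: '${{clone}}'
  run_integration_tests:
    title: Run tests against the running app
    image: node:16                    # hypothetical test runner image
    working_directory: '${{clone}}'
    commands:
      - npm install
      - npm run integration-test      # tests hit http://app:3000 (hypothetical endpoint)
    services:
      composition:
        app:
          image: '${{build_app_image}}'   # the image built earlier in this pipeline
          ports:
            - 3000
```

Notice that the service container reuses the Docker image built in the same pipeline, so the tests run against exactly the artifact that would be deployed.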
Using a custom test image
In all the previous examples the tests were running in a public Docker Hub image that contains the programming language/framework your tests require. In more complex cases, you might need to create your own Docker image that contains exactly the tools you wish.
In this case you can create a special Docker image that will be used just for testing and nothing else.
It is very easy to create a test image as part of a pipeline and then reference it for integration tests. Here is the pipeline:
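A sketch of such a pipeline, assuming a hypothetical `Dockerfile.test` that packages only the testing tools (all names below are illustrative):

```yaml
version: "1.0"
steps:
  clone:
    title: Clone repository
    type: git-clone
    repo: my-user/my-app            # hypothetical repository
    revision: master
  build_test_image:
    title: Build dedicated test image
    type: build
    image_name: my-app-tests
    working_directory: '${{clone}}'
    dockerfile: Dockerfile.test     # hypothetical Dockerfile with test tools only
  run_integration_tests:
    title: Run integration tests
    image: '${{build_test_image}}'  # the test image built in the previous step
    working_directory: '${{clone}}'
    commands:
      - ./run-tests.sh              # hypothetical test entry point
    services:
      composition:
        my_db:
          image: postgres:13
          ports:
            - 5432
          environment:
            POSTGRES_PASSWORD: admin
```

The test image is built and used only inside the pipeline; the production Dockerfile stays free of testing tools.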
This is the recommended way to run integration tests in Codefresh. It creates a clear distinction between the source code that gets shipped to production and source code that is needed only for tests. It also allows you to define what the test environment will look like (maybe you need multiple or exotic testing tools that are not available on Docker Hub).
Integration tests for microservices
If you have enough pipeline resources, you can keep adding service containers to form a complex running environment. Service containers support launch dependency order as well as post-launch phases, making possible any complex infrastructure you have in mind.
Here is the pipeline:
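A sketch of a multi-service setup that also uses the readiness and setup phases mentioned above (service names, images, credentials, and commands are hypothetical; adjust the readiness check and seed script to your own services):

```yaml
run_integration_tests:
  title: Run integration tests
  image: '${{build_test_image}}'    # hypothetical test image built earlier
  commands:
    - ./run-tests.sh
  services:
    composition:
      my_db:
        image: mysql:8
        ports:
          - 3306
        environment:
          MYSQL_ROOT_PASSWORD: admin
      my_queue:
        image: rabbitmq:3
        ports:
          - 5672
    readiness:                      # block until the database accepts connections
      timeoutSeconds: 30
      periodSeconds: 15
      image: mysql:8
      commands:
        - "mysql --host=my_db --user=root --password=admin --execute='SELECT 1'"
    setup:                          # preload test data before the tests run
      image: mysql:8
      commands:
        - "mysql --host=my_db --user=root --password=admin < fixtures/seed.sql"
```

The readiness phase guarantees the launch order (tests start only after the database responds), and the setup phase is the place to initialize services with data.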
Keep in mind that extra services use memory from the pipeline itself, so if you follow this route make sure that the pipeline runs on an appropriate runtime environment.
Running service containers for the whole pipeline
In most cases service containers should only be attached to the pipeline step that uses them.
This not only conserves pipeline resources (service containers are discarded when they are no longer needed) but also allows you to mix and match different service containers for different steps.
Here is an example pipeline:
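A sketch of mixing and matching services per step: each step below launches only the service it needs, and that service is discarded when the step finishes (images and test commands are hypothetical):

```yaml
version: "1.0"
steps:
  test_against_mysql:
    title: Tests that need MySQL
    image: node:16                # hypothetical test runner
    commands:
      - npm run test:mysql
    services:
      composition:
        my_db:
          image: mysql:8
          ports:
            - 3306
          environment:
            MYSQL_ROOT_PASSWORD: admin
  test_against_redis:
    title: Tests that need Redis
    image: node:16
    commands:
      - npm run test:redis
    services:
      composition:
        my_cache:
          image: redis:6
          ports:
            - 6379
```

The MySQL container exists only during the first step; the Redis container only during the second.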
In some cases, however, you may want to execute multiple steps with integration tests that share the same environment. In that case you can launch service containers at the beginning of the pipeline, making them available to all pipeline steps, by putting the service container definition at the root of the pipeline instead of inside a specific step.
Here is an example that follows this technique:
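A sketch with the service definition at the root of the pipeline, launching Redis and PostgreSQL for the whole run (the service name, images, and test commands are hypothetical):

```yaml
version: "1.0"
services:                          # defined at the root, not inside a step
  name: test_services
  composition:
    my_redis:
      image: redis:6
      ports:
        - 6379
    my_postgresql_db:
      image: postgres:13
      ports:
        - 5432
      environment:
        POSTGRES_PASSWORD: admin
steps:
  clone:
    type: git-clone
    repo: my-user/my-app           # hypothetical repository
    revision: master
  run_cache_tests:
    image: node:16
    working_directory: '${{clone}}'
    commands:
      - npm run test:redis         # my_redis is reachable here
  run_db_tests:
    image: node:16
    working_directory: '${{clone}}'
    commands:
      - npm run test:postgres      # ...and my_postgresql_db here as well
```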
The Redis and PostgreSQL instances are now available to all pipeline steps.
Creating test reports
All the techniques shown above are also applicable for Test reports.
Read all about test results and graphs in the test reports page.