Continuous Delivery Pipeline: The 5 Stages Explained

What Is a Continuous Delivery Pipeline?

Continuous delivery (CD) is a software development method that makes it possible to automatically build, test, and deploy new versions of an application. Continuous delivery is based on continuous integration (CI) practices (together they are called CI/CD), but adds the ability to fully automate software deployment to staging and production environments.

A continuous delivery pipeline is a structured, automated process that typically starts with a developer who commits new code to a repository. This code triggers a CI build process, which might be integrated with container registries or binary repositories. The new build is subjected to automated tests, might be deployed to a staging environment for additional testing, and can then be deployed to production with the push of a button.

Related content: Read our guide to continuous delivery vs. continuous deployment

Stages of the Continuous Delivery Pipeline

A continuous delivery pipeline consists of five main phases—build/develop, commit, test, stage, and deploy.

Build/Develop

A build/develop process performs the following:

  1. Pulls source code from a public or private repository.
  2. Establishes links to relevant modules, dependencies, and libraries. 
  3. Builds (compiles) all components into a binary artifact. 

Depending on the programming language and the integrated development environment (IDE), the build process can involve various tools. The IDE may offer built-in build capabilities or require integration with a separate build tool. Additional tooling includes build scripts and a virtual machine (VM) or Docker container that provides a consistent build environment.
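
As a concrete illustration, here is a minimal sketch of a build stage scripted in Python. The repository URL is hypothetical, and the example assumes a Python project with a requirements.txt and the build package available:

```python
import subprocess
import sys

def run(cmd, cwd=None):
    """Run one build step and stop the build on a non-zero exit code."""
    print("+ " + " ".join(cmd))
    if subprocess.run(cmd, cwd=cwd).returncode != 0:
        sys.exit(f"Build step failed: {' '.join(cmd)}")

# 1. Pull source code from the repository (hypothetical URL).
run(["git", "clone", "https://example.com/acme/app.git", "app"])

# 2. Resolve the project's dependencies and libraries.
run(["python", "-m", "pip", "install", "-r", "requirements.txt"], cwd="app")

# 3. Compile/package all components into a binary artifact (a wheel here).
run(["python", "-m", "build", "--wheel"], cwd="app")
```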

Commit

The commit phase checks the latest source code changes into the repository. Every check-in triggers a new instance of the deployment pipeline. Once this first stage passes, a release candidate is created. The goal is to eliminate any builds unsuitable for production and quickly inform developers of broken builds.

Commit tasks typically run as a set of jobs, including: 

  • Compile the source code
  • Run the relevant commit tests
  • Create binaries for later phases
  • Perform static code analysis to verify code health
  • Prepare artifacts like test databases for later phases

These jobs run on a build grid, a facility provided by continuous integration (CI) servers. The grid helps ensure that the commit stage completes quickly: ideally in less than five minutes, and no longer than ten.
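
A hedged sketch of how a commit stage might run these jobs and enforce that time budget; the job commands (pytest, flake8) are stand-ins for whatever tooling your project uses:

```python
import subprocess
import time

# Commit-stage jobs; the commands are placeholders for your real tooling.
JOBS = {
    "compile":       ["python", "-m", "compileall", "src"],
    "commit_tests":  ["pytest", "tests/unit", "-q"],
    "code_analysis": ["flake8", "src"],
}

TIME_BUDGET_SECONDS = 10 * 60  # fail the stage if it exceeds ten minutes

start = time.monotonic()
for name, cmd in JOBS.items():
    result = subprocess.run(cmd, capture_output=True, text=True)
    elapsed = time.monotonic() - start
    if result.returncode != 0:
        # Inform developers immediately that the build is unsuitable.
        print(f"FAILED {name}:\n{result.stdout}{result.stderr}")
        raise SystemExit(1)
    if elapsed > TIME_BUDGET_SECONDS:
        raise SystemExit(f"Commit stage exceeded {TIME_BUDGET_SECONDS}s budget")
    print(f"ok {name} ({elapsed:.0f}s elapsed)")
print("Commit stage passed: release candidate created")
```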

Test

During the test phase, the completed build undergoes comprehensive dynamic testing, which occurs after the source code has already undergone static testing. Dynamic tests commonly include:

  • Unit or functional testing—helps verify new features and functions work as intended.
  • Regression testing—helps ensure new additions and changes do not break previously working features. 

Additionally, the build may undergo a battery of user acceptance, performance, and integration tests. When testing identifies errors, the results loop back to developers for analysis and remediation in subsequent builds.

Since each build undergoes numerous tests and test cases, an efficient CI/CD pipeline employs automation. Automated testing speeds up the process and frees up time for developers. It also helps catch errors that manual testing might miss and ensures objective, reliable results.
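
To make the distinction between unit and regression tests concrete, here is a minimal sketch using Python's built-in unittest; apply_discount is a hypothetical feature under test:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical new feature under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class UnitTests(unittest.TestCase):
    """Verify the new feature works as intended."""
    def test_discount_applied(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

class RegressionTests(unittest.TestCase):
    """Ensure the change does not break previously working behavior."""
    def test_zero_discount_preserves_price(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

if __name__ == "__main__":
    unittest.main()  # the CI server runs this and reports failures back
```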

Stage

The staging phase extensively tests all code changes in a staging environment, a replica of the production (live) environment, to verify they work as intended. It is the last phase before changes are deployed to the live environment.

The staging environment mimics the real production setting, including hardware, software, configuration, architecture, and scale. You can spin up a staging environment as part of the release cycle and tear it down after deploying to production.

The goal is to verify all assumptions made before development and ensure the success of your deployment. Staging also reduces the risk of errors reaching end users, allowing you to fix bugs, integration problems, and data quality or coding issues before going live.
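
A minimal smoke-test sketch that a pipeline might run against staging before sign-off; the URL and endpoints are hypothetical:

```python
from urllib.request import urlopen
from urllib.error import URLError

# Hypothetical staging URL; in practice this comes from pipeline config.
STAGING_URL = "https://staging.example.com"

CHECKS = ["/healthz", "/api/v1/status"]  # illustrative endpoints

failures = []
for path in CHECKS:
    try:
        with urlopen(STAGING_URL + path, timeout=10) as resp:
            if resp.status != 200:
                failures.append(f"{path}: HTTP {resp.status}")
    except URLError as exc:
        failures.append(f"{path}: {exc.reason}")

if failures:
    raise SystemExit("Staging verification failed:\n" + "\n".join(failures))
print("Staging checks passed; build is ready for production approval")
```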

Deploy

The deployment phase occurs after the build passes all testing and becomes a candidate for deployment in production. A continuous delivery pipeline sends the candidate to human teams for approval and deployment. A continuous deployment pipeline deploys the build automatically after it passes testing. 

Deployment involves creating a deployment environment and moving the build to a deployment target. Developers typically automate these steps with scripts or workflows in automation tools. Deployment also requires connecting error-reporting and ticketing tools, which identify unexpected errors post-deployment, alert developers, and allow users to submit bug tickets.

In most cases, developers do not deploy a candidate to all users at once. Instead, they employ precautions and live testing so they can curtail or roll back unexpected issues. Common deployment strategies include beta tests, blue/green deployments, A/B tests, and other techniques that provide a crossover period.
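
As an illustration, here is a simplified sketch of a blue/green cutover; the Router class stands in for a real load balancer or service mesh rule, and the health check is a placeholder:

```python
# A simplified blue/green cutover: two identical environments, one live.
# Environment names and the Router class are hypothetical.

class Router:
    """Stands in for a load balancer or service mesh routing rule."""
    def __init__(self):
        self.live = "blue"

    def switch_to(self, color: str):
        print(f"Routing traffic: {self.live} -> {color}")
        self.live = color

def healthy(color: str) -> bool:
    """Placeholder for real post-deployment health checks."""
    return True

router = Router()
idle = "green" if router.live == "blue" else "blue"

# 1. Deploy the release candidate to the idle environment.
print(f"Deploying candidate to {idle}")

# 2. Verify it, then cut traffic over; the old environment stays warm
#    so traffic can be switched back instantly if problems appear.
if healthy(idle):
    router.switch_to(idle)
else:
    print("Candidate unhealthy; traffic stays on", router.live)
```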

Best Practices for Continuous Delivery Pipelines

The following best practices can help you implement effective continuous delivery pipelines.

Establish Your Service-Level Objectives (SLOs)

A service-level objective (SLO) is a measurable target that a software product must meet to satisfy stakeholder demands. SLOs are typically derived from service-level agreements (SLAs) and expressed in terms of service-level indicators (SLIs), the metrics that measure service health. Establishing SLOs and testing them continuously throughout the software development lifecycle allows you to ensure the quality of your releases.

The first step is to establish an environment encompassing the multiple pipeline stages. This allows you to design built-in quality gates based on your SLOs, orchestrate workflows, and integrate various testing tools (e.g., performance tests, chaos tests). Knowing that your code meets your SLO requirements and stands up to quality evaluations allows you to deploy confidently.
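
To make SLOs concrete and testable, you can express them as machine-readable records. A minimal sketch, with illustrative indicators and thresholds:

```python
from dataclasses import dataclass

@dataclass
class SLO:
    """A machine-readable service-level objective."""
    indicator: str    # the SLI being measured
    comparison: str   # "<=" or ">="
    threshold: float  # target the release must meet

# Illustrative SLOs; real values come from your SLAs and stakeholders.
SLOS = [
    SLO("p95_latency_ms", "<=", 300.0),
    SLO("error_rate_percent", "<=", 1.0),
    SLO("availability_percent", ">=", 99.9),
]
```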

Use Quality Gates to Evaluate SLOs

Once you’ve established your SLOs, you can use them as a basis for automated test evaluation. One way to implement this is to design quality gates: thresholds that define the specific criteria software must meet at each step of the delivery pipeline before proceeding to the next.

Quality gates ingest data from various testing tools, including observability data, performance tests, and integration tests. They evaluate this data against criteria determined by the SLOs, enabling a consistent, repeatable process that you can easily tune. You can leverage AI to quickly identify the reasons for failed tests and how to fix them.
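
A hedged sketch of such a gate; the indicator names and measurements are illustrative, and a real gate would ingest this data from your testing and observability tools:

```python
# Illustrative SLO criteria: (indicator, comparison, threshold).
SLOS = [
    ("p95_latency_ms", "<=", 300.0),
    ("error_rate_percent", "<=", 1.0),
    ("availability_percent", ">=", 99.9),
]

def evaluate_quality_gate(slos, measurements):
    """Return (passed, failures) so the pipeline can block promotion."""
    failures = []
    for indicator, op, threshold in slos:
        value = measurements.get(indicator)
        if value is None:
            failures.append(f"{indicator}: no data collected")
        elif not (value <= threshold if op == "<=" else value >= threshold):
            failures.append(f"{indicator}: {value} violates {op} {threshold}")
    return not failures, failures

# Illustrative measurements; here the error rate breaches its SLO,
# so the gate blocks promotion and reports exactly which criterion failed.
passed, failures = evaluate_quality_gate(SLOS, {
    "p95_latency_ms": 240.0,
    "error_rate_percent": 2.3,
    "availability_percent": 99.95,
})
if not passed:
    raise SystemExit("Quality gate failed:\n" + "\n".join(failures))
print("Quality gate passed; promoting to the next pipeline step")
```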

Don’t Mix Production and Non-Production Environments

Separating your different environments is important for deploying releases safely. You will ideally have a separate cluster for each of these environments:

  • Development – this is where developers deploy the applications for experiments and tests. You must integrate these deployments with other parts of your system or application (e.g., the database). Development environment clusters usually have a limited number of quality gates, giving developers more control over cluster configurations.
  • Pre-production – this is where developers and testers perform various large-scale tests, such as load, regression, performance, and integration tests. There may be different pre-production environments depending on the pipeline, but all CD pipelines must at least have a testing or staging environment. This environment is ideally identical to the production environment. 
  • Production – this is where you run the production workloads and any user-facing services or applications.

Ensure Your Pre-Production and Production Environments Are Similar

While the ideal pre-production environment is identical to the production environment, this is not always possible. For example, you might run pre-production clusters as scaled-down replicas of your production clusters to reduce costs.

Keeping your clusters similar ensures that all tests performed in the testing environment reflect similar (or identical) conditions in the production environment. It also reduces the likelihood of an unexpected failure during deployment to production due to cluster differences.

You can use GitOps and declarative infrastructure to achieve closer parity between your pre-production and production environments by simply duplicating the configurations of the underlying clusters.
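
A minimal sketch of this idea: one shared base configuration with small per-environment overlays, so pre-production differs from production only where it must. The values are illustrative; in practice these would live as declarative files in a Git repository:

```python
# Declarative parity: one shared base config, minimal per-environment
# overrides. All names and values here are illustrative.

BASE = {
    "image": "registry.example.com/app:1.4.2",
    "replicas": 6,
    "resources": {"cpu": "500m", "memory": "512Mi"},
}

OVERLAYS = {
    "pre-production": {"replicas": 2},  # scaled down to reduce costs
    "production": {},                   # identical to the base
}

def render(env: str) -> dict:
    """Merge the base with the environment's overlay (overlay wins)."""
    config = dict(BASE)
    config.update(OVERLAYS[env])
    return config

for env in OVERLAYS:
    print(env, render(env))
```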

Design for Failure in the Production Environment

Even the most thorough testing cannot guarantee that an application will behave correctly in the production environment. Failures occur for various reasons you might not address in the staging environment, such as unusual or unexpected access patterns (i.e., edge cases) that you didn’t consider in the testing data. 

You must build your pipeline with the expectation of some failures. Monitoring applications in production is essential to enable fast rollback and bug fixes. You can leverage an automated rollback tool to save time. The idea is to ensure your deployment strategy accommodates unexpected faults and operates smoothly despite the issues, minimizing the impact on end-users. 
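
A simplified sketch of an automated rollback guard; error_rate and rollback are placeholders for calls into your monitoring and deployment tooling:

```python
import time

def error_rate() -> float:
    """Placeholder: query your monitoring system for the live error rate."""
    return 0.2

def rollback(version: str):
    """Placeholder: redeploy the last known-good version."""
    print(f"Rolling back to {version}")

LAST_GOOD_VERSION = "1.4.1"   # recorded by the pipeline before deploying
ERROR_BUDGET_PERCENT = 1.0    # illustrative failure threshold
WATCH_WINDOW_SECONDS = 300    # watch the new release for five minutes

deadline = time.monotonic() + WATCH_WINDOW_SECONDS
while time.monotonic() < deadline:
    if error_rate() > ERROR_BUDGET_PERCENT:
        rollback(LAST_GOOD_VERSION)  # fail fast, minimize impact on users
        break
    time.sleep(15)
else:
    print("Release looks healthy; rollback window closed")
```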

Implement Continuous Monitoring to Maintain Observability

Maintaining end-to-end observability for your dynamic continuous delivery pipelines is essential to allow DevOps teams to deliver successful applications. Monitoring allows you to ensure that your software continues to meet the criteria specified in your SLOs. 

Developers, testers, and analysts must have access to reliable telemetry to effectively analyze the root cause of issues and minimize blind spots throughout your pipeline. This telemetry should include logs, metrics, traces, user experience information, and rich context for various processes. Capturing detailed data at the code level lets you troubleshoot and debug. 

As software services and applications become increasingly distributed and reliant on open source components, the relevant telemetry may come from multiple disparate sources with different instrumentation requirements. You should implement automatic, continuous monitoring of these sources, with the flexibility to accommodate continuous updates.
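
As a minimal illustration using only the Python standard library, here is one way to emit structured, correlatable telemetry; the event and field names are illustrative, and production systems would typically use a framework such as OpenTelemetry:

```python
import json
import logging
import time
import uuid

logger = logging.getLogger("checkout-service")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def emit(event: str, **context):
    """Emit one JSON log line carrying trace context for correlation."""
    record = {
        "timestamp": time.time(),
        "event": event,
        "trace_id": context.pop("trace_id", uuid.uuid4().hex),
        **context,
    }
    logger.info(json.dumps(record))

# One trace_id ties the log lines of a single request together,
# so root-cause analysis can follow the request across processes.
trace_id = uuid.uuid4().hex
emit("request.received", trace_id=trace_id, route="/api/checkout")
emit("db.query", trace_id=trace_id, duration_ms=42)
emit("request.completed", trace_id=trace_id, status=200, duration_ms=87)
```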

How Will GitOps Affect the Continuous Delivery Pipeline?

GitOps is a way to continuously deliver cloud-native applications. It allows developers to easily automate complex environments, using tools they are already familiar with.

The shift to declarative configuration

The core idea of GitOps is to have a central Git repository containing a declarative configuration that states which infrastructure and applications are needed for a production environment. With declarative configuration, developers simply describe the desired state of their environment, and an automated process deploys the necessary resources to match that state (unlike imperative scripting, which specifies how the deployment should happen, step by step).

In the GitOps process, developers deploy new applications or make changes to their environment by updating declarative configurations and committing them to the Git repository. Once the configuration is updated, an automated process takes care of everything else. The same holds in reverse: a GitOps agent monitors the live environment and adjusts it whenever it is out of sync with the desired configuration.

Pull-based deployments

One of the principles of GitOps is that deployment should be pull-based. A traditional deployment process is push-based, meaning that developers create a new version and directly deploy it to the live environment. In a pull-based deployment, developers push new code to a repository; the GitOps agent detects this, compares the new version to the current application state, and triggers a deployment if there is a difference.

In Kubernetes, pull-based deployment is done through a GitOps controller that detects discrepancies between the actual state and the desired state. If there are differences, it immediately updates the infrastructure to match the environment repository. It can also check an image registry to see if a new version of an image is available to deploy.
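
A simplified sketch of that reconciliation loop; the three functions are placeholders for what a real controller such as Argo CD does against Git, the cluster, and the registry:

```python
import time

def desired_state() -> dict:
    """Placeholder: read the declarative config at the repo's HEAD."""
    return {"app": {"image": "registry.example.com/app:1.4.2", "replicas": 3}}

def live_state() -> dict:
    """Placeholder: query the cluster for what is actually running."""
    return {"app": {"image": "registry.example.com/app:1.4.1", "replicas": 3}}

def apply(diff: dict):
    """Placeholder: update the cluster to match the desired state."""
    print(f"Syncing: {diff}")

# The agent pulls: it compares desired vs. actual state and syncs on drift.
while True:
    desired, live = desired_state(), live_state()
    diff = {k: v for k, v in desired.items() if live.get(k) != v}
    if diff:
        apply(diff)   # deployment happens because the repository changed,
                      # not because a developer pushed to the cluster
    time.sleep(30)    # poll interval; real controllers also use webhooks
```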

Avoiding configuration drift

Pull-based deployments have a major advantage over push-based deployments: they make it very easy to undo changes to production environments and eliminate configuration drift. The central Git repository keeps track of all changes in Git logs. Whenever configuration drift occurs, the GitOps controller automatically restores the application to the desired state. If a new deployment causes a problem, it is easy to see which change caused it and revert to the last working configuration.

Continuous Delivery Pipeline Automation with Codefresh

Delivering new software is the single most important function of businesses trying to compete today. Many companies get stuck with flaky scripting, manual interventions, complex processes, and large unreliable tool stacks across diverse infrastructure. Software teams are left scrambling to understand their software supply chain and discover the root cause of failures. It’s time for a new approach.

Codefresh helps you meet the continuous delivery challenge. The Codefresh platform is a complete software supply chain to build, test, deliver, and manage software with integrations so teams can pick best-of-breed tools to support that supply chain. 

Built on Argo, the world’s most popular and fastest-growing open source software delivery toolchain, the Codefresh Software Delivery Platform unlocks the full enterprise potential of Argo Workflows, Argo CD, Argo Events, and Argo Rollouts and provides a control-plane for managing them at scale.

