11 CI/CD Best Practices for DevOps Success

Continuous Integration and Continuous Delivery/Deployment (CI/CD) has been around for over 20 years, yet many teams still struggle to implement it properly. There is a huge incentive to get CI/CD right: teams that implement it well develop software more quickly, deploy more frequently, and experience fewer failures.

Continuous Integration is the practice of continually validating changes before merging them into a central source code repository. The goal is to validate changes quickly and give software engineers rapid feedback so they can fix issues before the code is merged.

Continuous Delivery and Continuous Deployment are practices for rapidly deploying changes, typically validating those changes in test and staging environments along the way. Continuous Delivery requires a manual approval or trigger to deploy changes, while Continuous Deployment is fully automated. For this article, we’ll focus on best practices for using CI and CD together and on how they complement each other.

Following CI/CD best practices is critical for software engineering and operations teams because it impacts every part of the process: improving quality, making engineers more productive, shortening the time to deliver critical features, and much more. Let’s dive into how to get the most out of your engineering organization by following CI/CD best practices.

This is part of a series of articles about CI/CD Pipelines.

1. Commit Early, Commit Often

Committing early and often lets CI/CD operate continuously and give developers rapid feedback, so they can quickly finalize changes for deployment. It is our first best practice because frequent commits enable teams to get the most out of CI/CD: atomic commits are small changes made rapidly, so they are easier to test, easier to validate, easier to roll back, and provide the best basis for fast iteration.

In older styles of software development, teams or individuals commonly built very large feature sets before integrating them, because CI/CD wasn’t available and the source code management tools of the time made atomic commits more difficult. Advances in these technologies have removed any reason for teams not to commit more frequently.

Frequent commits also ensure changes won’t be lost in extreme cases such as machine loss or developer negligence. This may seem like a small risk, but a surprising number of engineering teams still fail to check in their changes at all, leading to situations where no one actually knows what is deployed or what code was used to build it.

2. Build Only Once 

Poorly implemented CI/CD often rebuilds the same artifact several times. Once changes are expected to be final, the artifact should be built only once and then promoted through the rest of the process; this prevents different builds from producing different test results. When working with container images, promotion can be as simple as adding new tags to the existing image.

For example, if the deployment process involves a test environment, a staging environment, and a production environment, the automation will often rebuild assets at each stage. This means the binaries deployed to production are not actually the ones that were deployed to staging and may not be identical. When this happens, every test has to be run again at each stage, or we cannot have confidence that the changes will work in production.

Failure to manage this process leads to running tests needlessly, which wastes both time and resources. Building once and promoting the artifact allows the process to run more smoothly, quickly, and reliably.
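As a concrete illustration, here is a minimal sketch of promotion by re-tagging: the image that already passed testing is pulled and pushed under a new tag instead of being rebuilt. It assumes a Docker-compatible CLI is available; the registry, image name, and tags are placeholders.

```python
# Minimal sketch of "build once, promote many times": re-tag the exact image
# that passed testing instead of rebuilding it. Assumes a Docker-compatible CLI
# is installed; the registry, image name, and tags are placeholders.
import subprocess

def promote(image: str, source_tag: str, target_tag: str) -> None:
    """Pull a known-good image and push it again under a promotion tag."""
    source = f"{image}:{source_tag}"   # e.g. the commit-SHA tag created by the build step
    target = f"{image}:{target_tag}"   # e.g. "staging" or "prod"
    subprocess.run(["docker", "pull", source], check=True)
    subprocess.run(["docker", "tag", source, target], check=True)
    subprocess.run(["docker", "push", target], check=True)

# Example: promote the artifact that passed the test stage to staging.
# promote("registry.example.com/myapp", "3f2c1ab", "staging")
```

Because only the tag changes, the binary running in production is the very artifact that was tested.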

3. Use Shared Pipelines (DRY)

Managing pipelines is a core component of the CI/CD process, and using a different pipeline for each application or microservice leads to needless duplication and management complexity. Instead, use shared pipelines so you “do not repeat yourself” (DRY). Shared pipelines, like the ones in Codefresh, rely on event triggers to set the context and “hydrate” a pipeline that can be reused across many applications and microservices.

The traditional approach to building CI/CD pipelines is that each service or application has its own repository, and each repository has its own pipeline. In a world of microservices, this leads to an explosion of pipelines, most of which do more or less the same thing: check out code, build an artifact, test it, promote it for deployment, and so on.

Not only is this duplication of effort costly, it is also difficult to maintain. Instead of having shared expertise in how a single pipeline works, teams need to carefully monitor every pipeline along with its specific differences and bespoke peculiarities.

Shared pipelines eliminate much of this overhead when operating microservices and applications at scale.
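To make the idea concrete, here is a rough sketch, in Python pseudocode rather than any vendor’s pipeline format, of a single parameterized pipeline hydrated by an event trigger. The trigger fields and step names are illustrative assumptions.

```python
# Illustrative sketch of a shared pipeline: one definition reused by every
# service, hydrated with context from the event trigger. The trigger fields and
# step names below are assumptions, not a specific vendor's format.
from dataclasses import dataclass

@dataclass
class TriggerContext:
    repo_url: str       # repository that fired the event
    service_name: str   # microservice to build
    git_ref: str        # branch or tag that changed

def shared_pipeline(ctx: TriggerContext) -> list[str]:
    """Return the same sequence of steps for any service, filled in from the trigger."""
    image = f"registry.example.com/{ctx.service_name}:{ctx.git_ref}"
    return [
        f"clone {ctx.repo_url} at {ctx.git_ref}",
        f"build {image}",
        f"test {image}",
        f"promote {image}",
    ]

# The same definition serves every microservice; only the trigger context differs.
print(shared_pipeline(TriggerContext("https://example.com/payments.git", "payments", "main")))
```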

4. Take a Security-First Approach

CI/CD provides a great mechanism for automating regular security checks. This includes scanning not just the binaries but also Kubernetes manifests and infrastructure files such as Terraform for best-practice violations. Pipelines should automatically and regularly check for security issues so engineers can fix them before changes are handed off to operations.

The biggest security threats to software are carefully cataloged and well understood; they become dangerous when bad actors monitor them more closely than software development teams do. Most often, fixing a security issue is as simple as updating a library or dependency, which can easily be built into a CI/CD flow.

Beyond simple scanning for known vulnerabilities, automated pipelines also provide the opportunity to apply more costly techniques such as fuzz testing to identify vulnerabilities triggered by malformed inputs.
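As one possible shape for such a gate, the sketch below fails the pipeline when high-severity findings are detected. It assumes the open-source Trivy scanner is installed; the image and directory names are placeholders, and the flags reflect Trivy’s CLI at the time of writing.

```python
# A minimal sketch of a pipeline security gate: scan the candidate image and the
# infrastructure files, and fail the step on high-severity findings. Assumes the
# open-source Trivy scanner is installed; names and paths are placeholders.
import subprocess
import sys

def scan_image(image: str) -> None:
    # --exit-code 1 makes the scanner return non-zero when findings match the
    # severity filter, which check=True turns into a failed pipeline step.
    subprocess.run(
        ["trivy", "image", "--severity", "HIGH,CRITICAL", "--exit-code", "1", image],
        check=True,
    )

def scan_configs(path: str) -> None:
    # Scan Kubernetes manifests and Terraform files for misconfigurations.
    subprocess.run(
        ["trivy", "config", "--severity", "HIGH,CRITICAL", "--exit-code", "1", path],
        check=True,
    )

if __name__ == "__main__":
    try:
        scan_image("registry.example.com/myapp:candidate")
        scan_configs("./deploy")
    except subprocess.CalledProcessError:
        sys.exit("Security scan failed: fix the findings before promoting this build.")
```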

The security-first approach stands in stark contrast to the “specialized security” approach. A security-first approach makes the security of software everyone’s responsibility, while specialized approaches put security in the hands of a few trusted professionals. Security experts are valuable for supporting software engineering teams, but every engineer must take responsibility. When security is a shared responsibility, teams ship more secure software while deploying more frequently.

5. Automate Tests

Automated quality tests break down into three categories: unit, integration, and functional. Unit tests check individual functions for bugs. Integration tests ensure that multiple components operate properly together and often surface unusual issues. Finally, functional tests check the finished behavior against expected outcomes. Setting a proper strategy for developing these tests ensures the right feedback for developers and the right quality gates for operations.

Having these tests automated and tied to the CI/CD process means they run every time, reliably. Software engineers notice aberrations quickly and can correct issues. Likewise, developers must write these automated tests as they go, so the behavior of functions does not change in unexpected ways.
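For the unit-test layer, the fast checks that run on every commit can be as small as the sketch below; the discount function is a made-up example and pytest is assumed as the test runner.

```python
# A minimal sketch of the unit-test layer: small, fast checks that run on every
# commit. The discount logic is a made-up example; pytest is assumed as the runner.
import pytest

def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_basic():
    assert apply_discount(100.0, 20) == 80.0

def test_apply_discount_rejects_bad_input():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

Integration and functional tests follow the same pattern but exercise several components together and full user-facing flows, so they typically run later in the pipeline and act as the final quality gate.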

Learn more in our detailed guide to the CI/CD process.

6. Keep Builds Fast

Speedy builds are key. Ideally, developers are able to get meaningful feedback in 5-10 minutes so they can move on to the next task rather than spinning their wheels waiting to find out if changes are functional. 

Referring back to the first best practice, “Commit Early, Commit Often”: frequent changes mean frequent builds. A delay as small as one minute adds up rapidly when multiplied across developers, teams, and larger organizations making many changes. The purpose of CI/CD is to make software changes easier and faster, so it is critical that pipelines operate quickly and do not become a drag on productivity.

To keep builds fast, start with caching dependencies. Because commits are frequent and small, the vast majority of code and dependencies stay the same between builds, so caching dependencies can dramatically cut CI/CD execution time.
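One common way to implement this is to key the cache on a hash of the dependency lockfile, as in the sketch below; the paths and cache location are placeholder assumptions.

```python
# A minimal sketch of dependency caching keyed on the lockfile: if the lockfile
# hash is unchanged, restore the cached dependency directory instead of
# reinstalling. The paths and cache location are placeholders.
import hashlib
import shutil
from pathlib import Path

CACHE_ROOT = Path("/tmp/ci-cache")

def cache_key(lockfile: Path) -> str:
    return hashlib.sha256(lockfile.read_bytes()).hexdigest()

def install_dependencies(deps_dir: Path) -> None:
    ...  # run your package manager here

def restore_or_install(lockfile: Path, deps_dir: Path) -> None:
    cached = CACHE_ROOT / cache_key(lockfile)
    if cached.exists():
        shutil.copytree(cached, deps_dir, dirs_exist_ok=True)  # cache hit: reuse
    else:
        install_dependencies(deps_dir)                         # cache miss: install fresh
        shutil.copytree(deps_dir, cached)                      # save for the next build

# restore_or_install(Path("requirements.lock"), Path(".venv"))
```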

Likewise, a careful CI/CD testing strategy can ensure that only the relevant tests run for a given change, further reducing the test overhead of each build.

7. Create Test Environments on Demand

Bespoke and specialized environments are the enemy of stable software development. Using environments on demand ensures the portability of software while simultaneously reducing costs. 

In the past, systems operators were quite proud of machine “uptime”, showing off how long a server could run without rebooting. Unfortunately, this mentality led to fragile environments that were difficult to recreate. Using test environments on demand serves three main purposes. First, it ensures that software can reliably start in new environments. Second, it greatly reduces the cost of testing by keeping environments only as long as they are needed. Lastly, it provides concurrent testing capabilities that would not otherwise exist, meaning multiple engineers can test changes independently at the same time.

The reliability aspect is critical because it leads to reliable deployments across environments, better disaster recovery, and portability. Leadership is easily convinced when they see how costs can be reduced. To decrease costs further, short-lived environments can take advantage of low-cost temporary infrastructure such as AWS Spot Instances or GCP preemptible nodes. These VMs run for pennies on the dollar because they can be reclaimed at any time, which isn’t really a problem for short-lived environments.
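On Kubernetes, one simple way to sketch this is an isolated namespace per pipeline run that is deleted as soon as the tests finish. The example below assumes kubectl is installed and configured; the deploy path and the test entry point are placeholders.

```python
# A minimal sketch of an on-demand test environment: create a uniquely named
# Kubernetes namespace per pipeline run and tear it down afterwards. Assumes
# kubectl is installed and configured; the deploy path and test entry point
# are placeholders.
import subprocess
import uuid
from contextlib import contextmanager

@contextmanager
def ephemeral_namespace():
    name = f"test-{uuid.uuid4().hex[:8]}"  # unique name so concurrent runs don't collide
    subprocess.run(["kubectl", "create", "namespace", name], check=True)
    try:
        yield name
    finally:
        # Deleting the namespace removes everything that was deployed into it.
        subprocess.run(["kubectl", "delete", "namespace", name], check=True)

# with ephemeral_namespace() as ns:
#     subprocess.run(["kubectl", "-n", ns, "apply", "-f", "deploy/"], check=True)
#     run_end_to_end_tests(namespace=ns)  # hypothetical test entry point
```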

8. Choose Tools that Support Your Priorities

When speed is a priority, features like parallelization matter. Likewise, building reliable and repeatable pipelines can be difficult, so debugging capabilities such as setting breakpoints may also be critical. Don’t settle for old, backwards automation that is not going to solve the problems you need to solve.

The most common reason teams struggle to select the right tools is that they’re locked into a specific ecosystem. Many dev tool vendors offer a variety of tools to try to cover the entire process, but oftentimes these tools are not a priority for the vendor and so are not aligned with your priorities. Instead, pick the right tool for each job; this helps you avoid sub-par results from sub-par tools.

9. Monitor and Measure Your Pipeline

A pipeline taking a few seconds longer than normal may not stand out day to day, but over time the accumulation of tests, changes, or even a malfunctioning caching mechanism can create real degradation. It’s critical to monitor pipelines holistically to make sure they do not become a drag on productivity; the whole point of automation is, after all, to improve productivity.

Taking the long view and reviewing pipelines regularly, typically monthly, shows the trend of pipeline performance and reveals which tests are most likely to cause failures. Understanding the most common causes of failure also helps identify the kinds of changes that need additional attention to avoid unnecessary risk.
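A review like this does not need heavyweight tooling; even a short script that aggregates build records into per-step averages and failure counts will surface the trend. The record format below is a made-up assumption; in practice the data would come from your CI system’s API or logs.

```python
# A minimal sketch of a monthly pipeline review: aggregate build records into
# average duration and failure counts per step. The record format is an
# assumption; real data would come from your CI system's API or logs.
from collections import defaultdict
from statistics import mean

builds = [  # (pipeline step, duration in seconds, succeeded?)
    ("unit-tests", 180, True),
    ("unit-tests", 210, True),
    ("integration-tests", 540, False),
    ("integration-tests", 520, True),
]

durations = defaultdict(list)
failures = defaultdict(int)
for step, seconds, ok in builds:
    durations[step].append(seconds)
    if not ok:
        failures[step] += 1

for step, values in durations.items():
    print(f"{step}: avg {mean(values):.0f}s, {failures[step]} failure(s)")
```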

10. Involve the Whole Team in CI/CD Implementation

It’s tempting to make CI/CD the specialized domain of a single individual, but over-specialization creates an unhealthy dependency on that person and a tendency for engineers to externalize their problems. When CI/CD is a shared responsibility and teams understand how it operates, they deal with feedback more quickly and understand the nature of the errors their commits may have created.

Calling back to the best practice of using shared pipelines, involving the whole team is easier when the implementation is shared and common to all. CI/CD’s value is ultimately delivered at the personal level: pipelines provide feedback to developers as they submit their work. If developers do not understand how the pipelines operate, the errors the pipelines throw may be surprising and lead to wasted time or a misunderstanding of how their changes caused them.

Just like security, CI/CD implementation should be a shared responsibility.

11. Prepare for Progressive Delivery Strategies (Canary and Blue Green Deployment)

Progressive delivery offers a way to de-risk changes. In a canary release, software is deployed and exposed to a subset of users; if issues arise, the change can be pulled back so it does not impact the larger user base. Software works well with progressive delivery when it is designed as independent microservices, meaning a change in one microservice should not impact the others. Progressive delivery can be integrated with CI/CD to provide additional layers of feedback to developers and operations.

The best way to prepare for modern deployment strategies is to follow the twelve-factor app pattern. If these principles are followed, microservices should have the isolation required to run multiple versions side by side. In many cases, implementing a canary is as simple as installing Argo Rollouts and setting up a Rollout. But in some cases, consideration must be given to user sessions and how they behave across multiple versions.

In most canary setups, a subset of traffic is routed to the new version of a service, but affinity is not maintained. Session affinity means that once a user is exposed to a specific version, they stay on that version for the rest of their session. This matters when the changes being tested would fail or disrupt users if affinity were not maintained.
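The sketch below shows one generic way affinity can be kept during a canary: hash a stable user identifier so each user is deterministically assigned to either the stable or the canary version. This illustrates the idea only; it is not how any particular traffic router (Argo Rollouts included) implements it.

```python
# A minimal sketch of session affinity during a canary rollout: hash a stable
# user identifier so each user is deterministically pinned to one version.
# Generic illustration only; not a specific traffic router's implementation.
import hashlib

CANARY_WEIGHT = 10  # percent of users routed to the canary

def version_for(user_id: str) -> str:
    # The same user always hashes to the same bucket, so they never flip
    # between versions mid-session.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < CANARY_WEIGHT else "stable"

print(version_for("user-42"))  # deterministic: the same user always gets the same answer
```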

Conclusion

Following these best practices can greatly improve your software delivery and tighten the feedback cycle so engineers can effectively make changes, collect feedback, and issue fixes. With proper automation, software engineers can focus on developing software while operations teams focus on keeping it running properly. Automation is key to unlocking the full potential of engineering teams and organizations.

Learn more in our detailed guide to CI/CD and Agile.

Continuous Integration and Continuous Delivery with Codefresh

Delivering new software is the most critical function of businesses trying to compete today. Many companies get stuck with flaky scripting, manual interventions, complex processes, and sizeable unreliable tool stacks across diverse infrastructure. Software teams are left scrambling to understand their software supply chain and discover the root cause of failures. It’s time for a new approach.

Codefresh helps you meet the continuous delivery challenge. Codefresh is a complete software supply chain platform to build, test, deliver, and manage software, with integrations that let teams pick best-of-breed tools to support that supply chain.

Built on Argo, the world’s most popular and fastest-growing open-source software delivery toolchain, Codefresh unlocks the full enterprise potential of Argo Workflows, Argo CD, Argo Events, and Argo Rollouts and provides a control-plane for managing them at scale.
