What Is OpenShift Pipelines?
OpenShift Pipelines, part of the OpenShift Container Platform, is a cloud-native continuous integration and continuous delivery (CI/CD) solution for Kubernetes and OpenShift environments. It uses the Tekton project to provide a standardized approach to building, testing, and deploying applications.
OpenShift Pipelines enables developers to define workflows using YAML-based pipeline definitions, offering flexibility in orchestrating various CI/CD tasks. The architecture supports distributing workloads across multi-cluster environments, ensuring high availability and scalability for enterprise applications.
The solution integrates with Kubernetes-native resources, ensuring easy scaling and management of pipeline runs. Its declarative pipeline specifications make it easier to version control and automate CI/CD processes. It provides built-in secret management and the ability to link tasks across namespaces.
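As a rough illustration, a Tekton pipeline on OpenShift is itself just a Kubernetes resource. The sketch below is a minimal, hypothetical example: the run-unit-tests task name is an assumption, and git-clone refers to the Tekton catalog Task, which is assumed to be installed in the namespace.

apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: build-and-test               # hypothetical pipeline name
spec:
  params:
    - name: git-url
      type: string
  workspaces:
    - name: shared-workspace         # volume shared between tasks
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone              # catalog Task, assumed installed
      params:
        - name: url
          value: $(params.git-url)
      workspaces:
        - name: output
          workspace: shared-workspace
    - name: run-tests
      runAfter:
        - fetch-source
      taskRef:
        name: run-unit-tests         # hypothetical custom Task
      workspaces:
        - name: source
          workspace: shared-workspace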
What Is Argo CD?
Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes that deploys applications automatically from source code repositories. It serves as a bridge between development and operational aspects of DevOps by focusing on synchronizing application state to its declared configuration, as stored in Git repositories.
This approach promotes consistency and repeatability in application deployment, with improved rollback and history tracking capabilities. Argo CD’s web-based UI offers a visual insight into the application status, providing convenience for monitoring and management.
Argo CD supports multi-cluster deployments, enabling centralized management of distributed environments. It automates the deployment and lifecycle management of Kubernetes applications. By leveraging Git as a single source of truth, Argo CD simplifies tracing and auditing changes while promoting version control integration.
Benefits of Using Argo CD with OpenShift Pipelines
There are several reasons to combine Argo CD with OpenShift Pipelines.
GitOps Workflow Implementation
This integration ensures that infrastructure and application changes are automatically deployed and synchronized from Git repositories, minimizing manual interventions. The GitOps paradigm supports change tracking and accountability, as configurations and state changes are logged and versioned in Git.
With Argo CD and OpenShift Pipelines, operations teams can handle rollout strategies, such as blue-green deployments and canary releases, directly from code repositories. This enables continuous monitoring and automatic rollbacks if discrepancies between desired and current states arise.
Separation of CI and CD
OpenShift Pipelines focuses on CI tasks such as code building and testing, while Argo CD handles the CD tasks of deployment and environment management. This separation of concerns simplifies each phase of the software development lifecycle and improves reliability.
This division also allows teams to address different integration and deployment challenges independently, without impacting the other areas. For example, CI workflows can be refined and improved without influencing the deployment strategies. Operations teams can focus on deploying and monitoring applications using Argo CD, while developers improve code and test cycles with OpenShift Pipelines.
Consistency Across Environments
Argo CD combined with OpenShift Pipelines ensures consistency across multiple environments, such as development, testing, and production. Using Git as the single source of truth allows teams to replicate infrastructure and application states, ensuring uniformity regardless of the environment.
This consistency minimizes discrepancies and configuration drift between environments, often a challenge in large-scale deployments. By maintaining environment consistency, organizations can predict and manage deployments more effectively, identifying potential issues early in the software development lifecycle.
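A common way to achieve this, and the layout assumed later in this tutorial (the k8s/overlays/dev, uat, and prod paths), is a shared base of manifests plus a thin Kustomize overlay per environment. A hypothetical dev overlay might look like the following; the base directory and image name are assumptions:

# k8s/overlays/dev/kustomization.yaml (hypothetical)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ccop-ref-dev        # target namespace for the dev environment
resources:
  - ../../base                 # shared manifests reused by every environment
images:
  - name: quarkus-app          # placeholder image name
    newTag: "1.0.0-dev"        # only the tag differs between environments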
Improved Security and Compliance
Both tools provide access controls restricting who can make changes to applications and deployments. Argo CD ensures that changes are tracked, and any deviations from the desired state are noticed immediately, offering audit trails and change logs for compliance requirements. Maintaining the desired state through automated rollbacks deters unauthorized changes.
Compliance is further supported by using Git to manage change requests and approvals, allowing organizations to implement policies that meet regulatory standards. Automated pipelines reduce the risk associated with manual interventions, which often lead to human errors and security vulnerabilities.
How Does Argo CD Work with OpenShift Pipelines?
Argo CD and OpenShift Pipelines work together by integrating continuous integration (CI) with continuous delivery (CD), creating a seamless CI/CD pipeline. OpenShift Pipelines handles the CI portion—building, testing, and packaging applications—while Argo CD manages the CD phase, deploying applications to Kubernetes clusters based on Git-driven configurations.
Here’s an overview of the workflow:
- Pipeline triggering: When code changes are pushed to a Git repository, OpenShift Pipelines initiates a pipeline run. This pipeline might include tasks such as building container images, running unit tests, and generating artifacts. Pipeline steps are defined declaratively using Tekton, and pipeline runs are automatically triggered on code commits or changes to the repository.
- Artifact storage and deployment: Once the pipeline successfully completes the build and testing processes, artifacts such as container images are stored in a registry. Argo CD continuously monitors the Git repository for changes in application manifests, Helm charts, or Kubernetes YAML configurations. Once the new image version or application configuration is committed, Argo CD synchronizes the application state to match the declared state in Git (a sketch of the task that performs this commit appears after this list).
- GitOps synchronization: Argo CD leverages GitOps principles, which means it always pulls the latest application state from the Git repository. When a change is detected, such as an updated image tag or a modified configuration file, Argo CD automatically updates the Kubernetes environment to match the desired state. This happens across all clusters that Argo CD manages, ensuring that the correct version of the application is deployed in each environment.
- Automated rollbacks and monitoring: During deployment, if discrepancies arise between the actual state in the cluster and the desired state as defined in Git, Argo CD can automatically trigger a rollback to the previous stable version. OpenShift Pipelines and Argo CD provide continuous monitoring of both pipeline runs and application states, ensuring real-time feedback on the success or failure of deployments. Argo CD’s UI shows the status of each environment, making it easy for teams to monitor deployments and ensure consistency.
- Multi-cluster deployment support: Both tools can manage multi-cluster environments. OpenShift Pipelines can orchestrate the CI process across clusters, and Argo CD can deploy applications to multiple clusters simultaneously. This is especially beneficial for large-scale, distributed environments, as teams can manage deployments centrally while ensuring that different clusters have consistent and up-to-date application versions.
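The handoff between the two tools is usually a commit: a final pipeline task writes the newly built image tag into the GitOps repository, and Argo CD picks the change up from there. Below is a hedged sketch of such a Tekton Task; the repository URL placeholder, file path, and commit identity are assumptions, and in practice you would mount a Git credential secret rather than pushing anonymously.

apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: update-gitops-repo             # hypothetical task name
spec:
  params:
    - name: image-tag
      type: string
  steps:
    - name: bump-image-tag
      image: alpine/git                # minimal image that ships git
      script: |
        #!/bin/sh
        set -e
        git clone -b dev <gitops-repo-url> repo     # placeholder for the manifest repository
        cd repo
        git config user.email "pipeline@example.com"   # placeholder commit identity
        git config user.name "openshift-pipeline"
        # rewrite the image tag in the dev overlay; file path is an assumption
        sed -i "s|newTag:.*|newTag: $(params.image-tag)|" k8s/overlays/dev/kustomization.yaml
        git commit -am "ci: promote image $(params.image-tag) to dev"
        git push origin dev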
TIPS FROM THE EXPERT
In my experience, here are tips that can help you better integrate Argo CD with OpenShift Pipelines:
- Set up namespace-based access controls in Argo CD: Leverage Argo CD’s AppProjects and OpenShift’s Role-Based Access Control (RBAC) to create strict, namespace-scoped permissions. This allows each team to manage its environment independently without risking cross-environment interference.
- Optimize resource allocation with Tekton custom tasks: For resource-heavy tasks like integration tests or multi-cluster deployments, create custom Tekton tasks tailored for each stage of your pipeline. This can optimize resource usage in OpenShift while keeping your Argo CD deployments lean.
- Implement Argo CD sync waves to control the order of application deployment: Sync waves help ensure that critical Kubernetes resources, like ConfigMaps or Secrets, are updated before the application pods are rolled out (a minimal annotation example follows this list).
- Use OpenShift Pipelines for parallel testing environments: Tekton allows for parallel pipeline execution, enabling you to spin up multiple isolated test environments. This is ideal when working with Argo CD-managed multi-cluster setups, enabling comprehensive testing before production.
- Track pipeline metrics with OpenShift and Argo CD: Use OpenShift’s Prometheus/Grafana integration to monitor pipeline performance alongside Argo CD’s application metrics. These insights can help optimize CI/CD times and identify bottlenecks.
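For reference, a sync wave is set with an annotation on each resource, and lower-numbered waves sync first. The hypothetical ConfigMap below would therefore be applied before resources in the default wave 0:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                       # hypothetical ConfigMap
  annotations:
    argocd.argoproj.io/sync-wave: "-1"   # negative waves sync before the default wave 0
data:
  LOG_LEVEL: info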
Tutorial: Using Argo CD to Set Up a CD Pipeline with OpenShift Pipelines
In this tutorial, we will walk through the process of setting up a continuous delivery (CD) pipeline using Argo CD integrated with OpenShift Pipelines. This involves setting up a tenant Argo CD instance, configuring it to manage multiple environments, and deploying applications across these environments using GitOps practices.
Instructions are adapted from the official OpenShift documentation.
Step 1: Set Up a Tenant Argo CD Instance
First, you’ll need to create a dedicated namespace in OpenShift for the Argo CD instance. This namespace will be used to manage the Argo CD resources. Additionally, for the purposes of this tutorial, create the namespaces for your development (dev), user acceptance testing (uat), and production (prod) environments. Of course, in a production environment you would not use the same cluster for production and non-production resources.
1. Create the namespaces and label them so that they are managed by the Argo CD instance:
for namespace in ccop-ref-dev ccop-ref-uat ccop-ref-prod
do
  oc new-project $namespace
  oc label namespace $namespace argocd.argoproj.io/managed-by=<argocd-instance-namespace>
done
2. Next, create a secret in the Argo CD instance namespace to configure cluster access:
apiVersion: v1
kind: Secret
metadata:
  name: in-cluster
  annotations:
    managed-by: argocd.argoproj.io
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  config: '{"tlsClientConfig":{"insecure":false}}'
  name: in-cluster
  namespaces: ccop-ref-dev,ccop-ref-uat,ccop-ref-prod
  server: https://kubernetes.default.svc
A few important points about this configuration:
- The managed-by annotation ensures that Argo CD manages cluster resources, allowing the tool to control deployments within the specified namespaces.
- stringData configures secure access to the cluster, ensuring the connection is encrypted using TLS.
- The namespaces field specifies the list of namespaces (e.g., dev, uat, prod) that Argo CD can manage, allowing centralized control across multiple environments.
3. Finally, deploy the Argo CD instance using the following configuration:
apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: argocd
spec:
  controller:
    resources:
      limits:
        cpu: 500m
        memory: 1024Mi
      requests:
        cpu: 50m
        memory: 256Mi
  redis:
    resources:
      limits:
        cpu: 500m
        memory: 512Mi
      requests:
        cpu: 50m
        memory: 256Mi
  server:
    resources:
      limits:
        cpu: 500m
        memory: 512Mi
      requests:
        cpu: 50m
        memory: 256Mi
    route:
      enabled: true
  repo:
    resources:
      limits:
        cpu: 500m
        memory: 512Mi
      requests:
        cpu: 50m
        memory: 256Mi
  sso:
    provider: keycloak
    verifyTLS: false
    resources:
      limits:
        cpu: 1000m
        memory: 1024Mi
      requests:
        cpu: 500m
        memory: 512Mi
A few important points about this configuration:
- CPU and memory resources are defined for each component (controller, server, redis) to ensure stable performance and avoid resource starvation.
- The route.enabled setting creates an OpenShift route for the Argo CD server, making it accessible via a URL.
- The Keycloak SSO provider is used for secure user authentication, with resource limits defined to optimize performance.
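Once the instance reports ready, you can retrieve its URL and initial admin password. The commands below assume the instance is named argocd as above and follow the operator's usual naming convention (a <instance-name>-server route and a <instance-name>-cluster secret); verify the actual names in your cluster.

# route created because route.enabled is true
oc get route argocd-server -n <argocd-instance-namespace> -o jsonpath='{.spec.host}'

# initial admin password generated by the operator
oc get secret argocd-cluster -n <argocd-instance-namespace> -o jsonpath='{.data.admin\.password}' | base64 -d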
Step 2: Configure Single Sign-On (SSO) with Red Hat SSO
Argo CD can use single sign-on (SSO) for secure user authentication. In this tutorial, Red Hat SSO (RHSSO) is used by embedding Keycloak within the Argo CD instance.
1. Patch your Argo CD custom resource to enable the embedded RHSSO and confirm that the Keycloak pods are up and running:
sso:
  provider: keycloak
  verifyTLS: false
  resources:
    limits:
      cpu: 1000m
      memory: 1024Mi
    requests:
      cpu: 500m
      memory: 512Mi
A few important points about this configuration:
- The provider: keycloak setting enables Keycloak as the single sign-on (SSO) provider, ensuring secure user authentication through RHSSO.
- verifyTLS: false disables strict TLS verification, which may be useful in development environments but should be enabled in production for secure communication.
- Resource limits and requests are defined to allocate appropriate CPU and memory for the Keycloak pods, ensuring reliable performance and preventing resource contention in the cluster.
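If you prefer patching to editing the full custom resource, a command along these lines should apply the same change (instance name and namespace carried over from Step 1):

oc patch argocd argocd -n <argocd-instance-namespace> --type=merge \
  -p '{"spec":{"sso":{"provider":"keycloak","verifyTLS":false}}}'

# confirm the Keycloak pods are running
oc get pods -n <argocd-instance-namespace> | grep keycloak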
2. Integrate Keycloak with OpenShift OAuth by executing commands within the Keycloak pod:
$ oc exec -it dc/keycloak -n <argocd-instance-namespace> -- /bin/bash
$ /opt/eap/bin/jboss-cli.sh
$ embed-server --server-config=standalone-openshift.xml
$ /subsystem=keycloak-server/spi=connectionsHttpClient/provider=default:write-attribute(name=properties.proxy-mappings,value=["oauth-openshift.apps.xx;http://xxxweb.int.xxx.ca:xxxx"])
$ quit
$ /opt/eap/bin/jboss-cli.sh --connect --command=:reload
$ exit
A few important points about these commands:
- The proxy-mappings setting configures Keycloak to interact with OpenShift OAuth, allowing single sign-on integration between the two platforms.
- The jboss-cli.sh commands modify Keycloak’s configuration and reload the server to apply the new settings.
- The use of a proxy ensures that OAuth requests are routed through the appropriate channels, maintaining security during authentication.
Step 3: Deploy an Application in the Development Environment
With your Argo CD instance set up, you can deploy applications in the development environment.
1. Create an AppProject resource in Argo CD for the dev environment:
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: dev-project
  namespace: <argocd-instance-namespace>
spec:
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
  destinations:
    - namespace: ccop-ref-dev
      server: 'https://kubernetes.default.svc'
  sourceRepos:
    - 'https://gitlab.xxx.corp.xxx.ca/xx/xxxxx/tekton-pipeline.git'
A few important points about this configuration:
- The clusterResourceWhitelist allows Argo CD to manage all Kubernetes resource types in the specified cluster and namespace.
- sourceRepos restricts the project to the Git repository that contains the application’s configuration, enabling Argo CD to pull updates for deployment.
- destinations define where the application will be deployed, in this case, the dev namespace.
2. Create an application resource in the dev environment:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: quarkus-app-dev
spec:
  destination:
    namespace: ccop-ref-dev
    server: 'https://kubernetes.default.svc'
  source:
    path: k8s/overlays/dev
    repoURL: 'https://gitlab.xxx.corp.xxx.ca/xx/xxxxx/tekton-pipeline.git'
    targetRevision: dev
  project: dev-project
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
A few important points about this configuration:
- path specifies the location of the Kubernetes manifests or Helm charts within the repository, targeting the dev environment.
- syncPolicy automates the deployment process by enabling pruning of unused resources and self-healing, ensuring the application remains in the desired state.
- targetRevision is set to dev, ensuring that only the development branch is deployed in the dev environment.
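After the AppProject and Application are applied, you can confirm that Argo CD has picked them up and synced the dev environment. For example (assuming the Application resource lives in the Argo CD instance namespace, as is standard for this setup):

# sync and health status reported by Argo CD
oc get application quarkus-app-dev -n <argocd-instance-namespace> \
  -o jsonpath='{.status.sync.status}{"  "}{.status.health.status}{"\n"}'

# workloads created in the dev namespace
oc get pods -n ccop-ref-dev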
Step 4: Deploy in the UAT and Production Environments
For the user acceptance testing (UAT) and production environments, follow a similar process as the dev environment.
1. Create the respective namespaces, projects, and application resources in Argo CD. To deploy in the UAT environment:
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: uat-project
  namespace: <argocd-instance-namespace>
spec:
  destinations:
    - namespace: ccop-ref-uat
      server: 'https://kubernetes.default.svc'
  sourceRepos:
    - 'https://gitlab.xxx.corp.xxx.ca/xx/xxxxx/tekton-pipeline.git'
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: quarkus-app-uat
spec:
  destination:
    namespace: ccop-ref-uat
    server: 'https://kubernetes.default.svc'
  source:
    path: k8s/overlays/uat
    repoURL: 'https://gitlab.xxx.corp.xxx.ca/xx/xxxxx/tekton-pipeline.git'
    targetRevision: uat
  project: uat-project
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
A few important points about this configuration:
- targetRevision is set to uat, which deploys the UAT version of the application, keeping the environment isolated from dev and prod.
- syncPolicy automates deployments and rollbacks, ensuring consistency between the Git repository and the UAT environment.
2. To deploy in the production environment:
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: prod-project
  namespace: <argocd-instance-namespace>
spec:
  destinations:
    - namespace: ccop-ref-prod
      server: 'https://kubernetes.default.svc'
  sourceRepos:
    - 'https://gitlab.xxx.corp.xxx.ca/xx/xxxxx/tekton-pipeline.git'
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: quarkus-app-prod
spec:
  destination:
    namespace: ccop-ref-prod
    server: 'https://kubernetes.default.svc'
  source:
    path: k8s/overlays/prod
    repoURL: 'https://gitlab.xxx.corp.xxx.ca/xx/xxxxx/tekton-pipeline.git'
    targetRevision: master
  project: prod-project
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
A few important points about this configuration:
- targetRevision is set to master, signifying the production-ready version of the application.
- The same GitOps principles of automated syncing and self-healing apply, ensuring that the production environment always reflects the state defined in the master branch.
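Because each environment tracks its own branch (dev, uat, master), promoting a release becomes a Git operation rather than a manual deployment. A hedged sketch of promoting from UAT to production follows; branch names are taken from this tutorial, and your review process may require a merge request instead of a direct push:

git checkout master
git merge --no-ff uat -m "promote: UAT release to production"
git push origin master
# Argo CD detects the new commit on master and syncs quarkus-app-prod automatically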
This setup, combining Argo CD with OpenShift Pipelines, ensures a simpler and more secure CI/CD process, promoting efficient application delivery across all environments.
Related content: Read our guide to Argo support
Codefresh: A Modern, Argo-Based CI/CD Platform
The Codefresh platform, powered by Argo, combines the best of open source with an enterprise-grade runtime, allowing you to fully tap the power of Argo Workflows, Events, CD, and Rollouts. It provides teams with a unified GitOps experience to build, test, deploy, and scale their applications.
You can use Codefresh for your platform engineering initiative either as a developer portal or as the machinery that takes care of everything that happens in the developer portal. Your choice depends on how far your organization has adopted Kubernetes and microservices.
Codefresh is a next-generation CI/CD platform designed for cloud-native applications, offering dynamic builds, progressive delivery, and much more.
Deploy more and fail less with Codefresh and Argo