If you have familiarized yourself with the different installation options, here’s a deep dive into the architecture and components of the different options.

Runner architecture

The most important components are the following:

Codefresh VPC: All internal Codefresh services run in the VPC (analyzed in the next section). Codefresh uses MongoDB and PostgreSQL to store user and authentication information.

Pipeline execution environment: The Codefresh engine component is responsible for taking pipeline definitions and running them in managed Kubernetes clusters by automatically launching the Docker containers that each pipeline needs for its steps.

External actors: Codefresh offers a public API that is consumed both by the Web user interface and the Codefresh CLI. The API is also available for any custom integration with external tools or services.

Runner topology

If we zoom into Hybrid Runner services, we will see the following:

Topology diagram

Runner core components

| Category | Component | Function |
|----------|-----------|----------|
| Core | pipeline-manager | Manages all CRUD operations for CI pipelines. |
|  | cfsign | Signs server TLS certificates for Docker daemons, and generates client TLS certificates for hybrid pipelines. |
|  | cf-api | Central back-end component that functions as an API gateway for other services, and handles authentication/authorization. |
|  | context-manager | Manages the authentications/configurations used by Codefresh CI/CD and by the Codefresh engine. |
|  | runtime-environment-manager | Manages the different runtime environments for CI pipelines. The runtime environment for CI/CD SaaS is fully managed by Codefresh. For CI/CD Hybrid, customers can add their own runtime environments using private Kubernetes clusters. |
| Trigger | hermes | Controls CI pipeline trigger management. See triggers. |
|  | nomios | Enables triggers from Docker Hub when a new image/tag is pushed. See Triggers from Docker Hub. |
|  | cronus | Enables defining Cron triggers for CI pipelines. See Cron triggers. |
| Log | cf-broadcaster | Stores build logs from CI pipelines. The UI and CLI stream logs by accessing the cf-broadcaster through a web socket. |
| Kubernetes | cluster-providers | Provides an interface to define cluster contexts to connect Kubernetes clusters in CI/CD installation environments. |
|  | helm-repo-manager | Manages the Helm charts for CI/CD installation environments through the Helm repository admin API and ChartMuseum proxy. See Helm charts in Codefresh. |
|  | k8s-monitor | The agent installed on every Kubernetes cluster, providing information for the Kubernetes dashboards. See Kubernetes dashboards. |
|  | charts-manager | Models the Helm chart view in Codefresh. See Helm chart view. |
|  | kube-integration | Provides an interface to retrieve required information from a Kubernetes cluster; can run either as an HTTP server or an NPM module. |
|  | tasker-kubernetes | Provides cache storage for Kubernetes dashboards. See Kubernetes dashboards. |

GitOps architecture

The diagram shows a high-level view of the GitOps environment and its core components: the Codefresh Control Plane, the Codefresh Runtime, and the Codefresh Clients.

Codefresh GitOps platform architecture

GitOps Control Plane

The Codefresh Control Plane is the SaaS component in the platform. External to the enterprise firewall, it does not have direct communication with the Codefresh Runtime, Codefresh Clients, or the customer’s organizational systems. The Codefresh Runtime and the Codefresh Clients communicate with the Codefresh Control Plane to retrieve the required information.

GitOps Runtime

The GitOps Runtime is installed on a Kubernetes cluster, and houses the enterprise distribution of the Codefresh Application Proxy and the Argo Project.
Depending on the type of GitOps installation, the GitOps Runtime is installed either in the Codefresh platform (Hosted GitOps), or in the customer environment (Hybrid GitOps). Read more in Codefresh GitOps Runtime architecture.

GitOps Clients

GitOps Clients include the UI and the GitOps CLI.
The UI provides a unified, enterprise-wide view of deployments (runtimes and clusters), and CI/CD operations (Delivery Pipelines, workflows, and deployments) in the same location.
The Codefresh CLI includes commands to install hybrid runtimes, add external clusters, and manage runtimes and clusters.

GitOps Runtime architecture

The sections that follow show detailed views of the GitOps Runtime architecture for the different installation options, and descriptions of the GitOps Runtime components.

Hosted GitOps runtime architecture

In the hosted environment, the GitOps Runtime is installed on a Kubernetes cluster managed by Codefresh.

Hosted GitOps Runtime architecture

Tunnel-based Hybrid GitOps runtime architecture

Tunnel-based Hybrid GitOps runtimes use tunneling instead of ingress controllers to control communication between the GitOps Runtime in the customer cluster and the Codefresh GitOps Platform. Tunnel-based runtimes are optimal when the cluster with the GitOps Runtime is not exposed to the internet.

Tunnel-based Hybrid GitOps Runtime architecture

Ingress-based Hybrid GitOps runtime architecture

Ingress-based runtimes use ingress controllers to control communication between the GitOps Runtime in the customer cluster and the Codefresh GitOps Platform. Ingress-based runtimes are optimal when the cluster with the GitOps Runtime is exposed to the internet.

Ingress-based Hybrid GitOps runtime architecture
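The traffic rules an ingress-based Runtime relies on can be illustrated with a generic Kubernetes Ingress manifest. This is a sketch only: the host, paths, and service names below are assumptions for illustration, not the manifest the GitOps Runtime actually installs.

```yaml
# Illustrative only: hypothetical host, path, and service names.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitops-runtime-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: runtime.example.com        # assumed external hostname
      http:
        paths:
          - path: /app-proxy           # assumed path prefix
            pathType: Prefix
            backend:
              service:
                name: internal-router  # Request Routing Service (assumed name)
                port:
                  number: 80
```

With rules of this shape, the ingress controller terminates incoming traffic at the cluster edge and hands it to the Request Routing Service, which then dispatches it as described in the sections below.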

Application Proxy

The GitOps Application Proxy (App-Proxy) functions as the Codefresh agent, and is deployed as a service in the GitOps Runtime.

For tunnel-based Hybrid GitOps Runtimes, the Tunnel Client forwards incoming traffic from the Tunnel Server, through the Request Routing Service, to the GitOps App-Proxy. For ingress-based Hybrid GitOps Runtimes, the App-Proxy is the single point of contact between the GitOps Runtime and the GitOps Clients, the GitOps Platform, and any organizational systems in the customer environment.

The GitOps App-Proxy:

  • Accepts and serves requests from GitOps Clients, via either the UI or the CLI.
  • Retrieves a list of Git repositories for visualization in the Client interfaces.
  • Retrieves permissions from the GitOps Control Plane to authenticate and authorize users for the required operations.
  • Implements commits for GitOps-controlled entities, such as Delivery Pipelines and other CI resources.
  • Implements state-change operations for non-GitOps-controlled entities, such as terminating Argo Workflows.
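The request flow above can be sketched as follows. All names here (`get_permissions`, `commit`, `apply`, the action fields) are hypothetical stand-ins for illustration; the real App-Proxy API differs.

```python
# Hypothetical sketch of the App-Proxy request flow: fetch the user's
# permissions from the Control Plane, authorize, then either commit to
# Git (GitOps-controlled entities) or act on the cluster directly
# (non-GitOps-controlled entities). All names are illustrative.
def handle_request(user, action, control_plane, git, cluster):
    # Authorization is delegated to the GitOps Control Plane
    permissions = control_plane.get_permissions(user)
    if action["name"] not in permissions:
        raise PermissionError(f"{user} is not allowed to {action['name']}")

    if action.get("gitops_controlled", True):
        # State changes to GitOps-controlled entities become Git commits,
        # which Argo CD then reconciles into the cluster.
        return git.commit(action["payload"])

    # Non-GitOps-controlled operations (e.g. terminating an Argo
    # Workflow) are applied to the cluster directly.
    return cluster.apply(action["payload"])
```

The key design point this sketch captures is that the App-Proxy never trusts the client: every operation is checked against permissions retrieved from the Control Plane before any commit or cluster call is made.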

Argo Project

The Argo Project includes:

  • Argo CD for declarative continuous deployment
  • Argo Rollouts for progressive delivery
  • Argo Workflows as the workflow engine
  • Argo Events for event-driven workflow automation

Codefresh users rely on our platform to deliver software reliably and predictably, without interruption.
To maintain that high standard, we add several weeks of testing and bug fixes to new versions of Argo before making them available within Codefresh.
Typically, new versions of Argo are available within 30 days of release in Argo.

Request Routing Service

The Request Routing Service is installed on the same cluster as the GitOps Runtime in the customer environment.
It receives requests from the Tunnel Client (tunnel-based) or the ingress controller (ingress-based), forwards request URLs to the Application Proxy, and forwards webhooks directly to the Event Sources.

The Request Routing Service is available from runtime version 0.0.543 and higher.
Older runtime versions are not affected as there is complete backward compatibility, and the ingress controller continues to route incoming requests.
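The dispatch rule described above amounts to path-based routing. A minimal sketch, assuming a `/webhooks/` path prefix and illustrative target names (the actual prefixes and service names in the Runtime may differ):

```python
# Illustrative sketch of the Request Routing Service's dispatch rule:
# webhook deliveries go straight to the Event Sources, everything else
# is forwarded to the App-Proxy. The "/webhooks/" prefix and the target
# names are assumptions for illustration.
def route(path: str) -> str:
    if path.startswith("/webhooks/"):
        return "event-source"  # Git/registry events bypass the App-Proxy
    return "app-proxy"         # UI/CLI/API requests go to the App-Proxy

print(route("/webhooks/github/push"))  # -> event-source
print(route("/api/applications"))      # -> app-proxy
```

Routing webhooks directly to the Event Sources keeps high-volume event traffic off the App-Proxy, which only has to handle authenticated client and platform requests.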

Tunnel Server

Applies only to tunnel-based Hybrid GitOps Runtimes.
The Codefresh Tunnel Server is installed in the Codefresh platform. It communicates with the enterprise cluster located behind a NAT or firewall.

The Tunnel Server:

  • Forwards traffic from Codefresh Clients to the client (customer) cluster.
  • Manages the lifecycle of the Tunnel Client.
  • Authenticates requests from the Tunnel Client to open tunneling connections.

Tunnel Client

Applies only to tunnel-based Hybrid GitOps Runtimes.

Installed on the same cluster as the Hybrid GitOps Runtime, the Tunnel Client establishes the tunneling connection to the Tunnel Server via the WebSocket Secure (WSS) protocol.
Each Hybrid GitOps Runtime has a single Tunnel Client.

The Tunnel Client:

  • Initiates the connection with the Tunnel Server.
  • Forwards the incoming traffic from the Tunnel Server through the Request Routing Service to App-Proxy, and other services.
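The dial-out-and-forward pattern above can be sketched with standard-library sockets. This is a deliberately simplified model, assuming plain TCP instead of WSS, a single bridged connection, and no authentication:

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the source side closes."""
    while data := src.recv(4096):
        dst.sendall(data)

def tunnel_server(client_ln: socket.socket, external_ln: socket.socket) -> None:
    """Tunnel Server (Codefresh platform side): bridge one external
    connection over the long-lived connection the Tunnel Client opened."""
    client_conn, _ = client_ln.accept()      # Tunnel Client dialed out to us
    external_conn, _ = external_ln.accept()  # traffic from GitOps Clients
    threading.Thread(target=pipe, args=(external_conn, client_conn),
                     daemon=True).start()
    pipe(client_conn, external_conn)

def tunnel_client(server_addr, local_addr) -> None:
    """Tunnel Client (customer cluster side): dial out through the
    NAT/firewall, then bridge the tunnel to a local in-cluster service
    (in the real Runtime, the Request Routing Service)."""
    server_conn = socket.create_connection(server_addr)  # outbound only
    local_conn = socket.create_connection(local_addr)
    threading.Thread(target=pipe, args=(server_conn, local_conn),
                     daemon=True).start()
    pipe(local_conn, server_conn)
```

Because the client initiates the only connection, nothing in the customer cluster needs to be reachable from the internet; the server merely reuses the already-open tunnel to push traffic inward.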

Customer environment

The customer environment that communicates with the GitOps Runtime and the Codefresh platform generally includes:

  • Ingress controller for ingress-based Hybrid runtimes
    The ingress controller is configured on the same Kubernetes cluster as the GitOps Runtime, and implements the ingress traffic rules for the GitOps Runtime. See Ingress controller requirements.
  • Managed clusters
    Managed clusters are external clusters registered to provisioned Hosted or Hybrid GitOps runtimes for application deployment.
    Hosted GitOps requires you to connect at least one external K8s cluster as part of setting up the Hosted GitOps environment.
    Hybrid GitOps allows you to add external clusters after provisioning the runtimes.
    See Add external clusters to runtimes.
  • Organizational systems
    Organizational Systems include the customer’s tracking, monitoring, notification, container registries, Git providers, and other systems. They can be entirely on-premises or in the public cloud.
    Either the ingress controller (ingress hybrid environments), or the Tunnel Client (tunnel-based hybrid environments), forwards incoming events to the GitOps Application Proxy.

## Related articles
Codefresh pricing
Codefresh features