GitOps Runtime architecture

View components of GitOps Runtimes

See detailed views of GitOps Runtime architecture for the different installation modes, and descriptions of the GitOps Runtime components.

Hosted GitOps Runtime architecture

In the hosted environment, the GitOps Runtime is installed on a K8s cluster managed by Codefresh.

Hosted GitOps Runtime architecture

Tunnel-based Hybrid GitOps Runtime architecture

Tunnel-based Hybrid GitOps Runtimes use tunneling instead of ingress controllers to control communication between the GitOps Runtime in the customer cluster and the Codefresh GitOps Platform. Tunnel-based runtimes are optimal when the cluster with the GitOps Runtime is not exposed to the internet.

NOTE
Tunnel-based access mode is not supported for GitOps on-premises installations.

Tunnel-based Hybrid GitOps Runtime architecture

Ingress-based Hybrid GitOps Runtime architecture

Ingress-based Runtimes use ingress controllers to control communication between the GitOps Runtime in the customer cluster and the Codefresh GitOps Platform. Ingress-based Runtimes are optimal when the cluster with the GitOps Runtime is exposed to the internet.

Ingress-based Hybrid GitOps Runtime architecture

Application Proxy

The GitOps Application Proxy (App-Proxy) functions as the Codefresh agent, and is deployed as a service in the GitOps Runtime.

In tunnel-based Hybrid GitOps Runtimes, the Tunnel Client forwards incoming traffic from the Tunnel Server through the Request Routing Service to the GitOps App-Proxy. In ingress-based Hybrid GitOps Runtimes, the App-Proxy is the single point of contact between the GitOps Runtime and the GitOps Clients, the GitOps Platform, and any organizational systems in the customer environment.

The GitOps App-Proxy:

  • Accepts and serves requests from GitOps Clients, through either the UI or the CLI
  • Retrieves a list of Git repositories for visualization in the Client interfaces
  • Retrieves permissions from the GitOps Control Plane to authenticate and authorize users for the required operations
  • Implements commits for GitOps-controlled entities, such as Delivery Pipelines and other CI resources
  • Implements state-change operations for non-GitOps-controlled entities, such as terminating Argo Workflows

Argo Project

The Argo Project includes:

  • Argo CD for declarative continuous deployment
  • Argo Rollouts for progressive delivery
  • Argo Workflows as the workflow engine
  • Argo Events for event-driven workflow automation

NOTE
Codefresh users rely on our platform to deliver software reliably and predictably, without interruption.
To maintain that high standard, we add several weeks of testing and bug fixes to new versions of Argo before making them available within Codefresh.
Typically, new versions of Argo CD are available in the Codefresh Runtime within 30 days of their official release.

Event Reporters

Event Reporters monitor changes to resources deployed on the cluster and report the changes back to the Codefresh platform.

Codefresh has two types of Event Reporters:

  • Resource Event Reporter
  • Application Event Reporter

Resource Event Reporter

The Resource Event Reporter monitors specific types of resources on the cluster and tracks changes in their live-states. It sends the live-state manifests with the changes to Codefresh without preprocessing.

The Resource Event Reporter monitors changes to these resource types:

  • Rollouts (Argo Rollouts)
  • ReplicaSets and Workflows (Argo Workflows)

Resource Event Reporters leverage Argo Events components: Event Sources to monitor changes to the live-state manifests, and Sensors to send the live-state manifests to Codefresh. For setup information on these components, see the Argo Events documentation on Event Source and Sensor.
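
As a rough illustration of this pattern (not the manifests shipped with the Runtime), an EventSource could watch Rollout resources and a Sensor could relay the change events. All names, the namespace, and the reporting endpoint below are hypothetical:

```yaml
# Minimal sketch only: names, namespace, and the reporting URL are hypothetical.
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: rollout-live-state            # hypothetical name
spec:
  resource:
    rollout-change:                   # emits an event on any Rollout change
      namespace: my-app-namespace     # hypothetical namespace
      group: argoproj.io
      version: v1alpha1
      resource: rollouts
      eventTypes:
        - ADD
        - UPDATE
        - DELETE
---
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: rollout-reporter              # hypothetical name
spec:
  dependencies:
    - name: rollout-dep
      eventSourceName: rollout-live-state
      eventName: rollout-change
  triggers:
    - template:
        name: send-live-state
        http:                          # forwards the live-state manifest without preprocessing
          url: https://example-platform-endpoint/events   # placeholder endpoint
          method: POST
          # A real Sensor would also define a payload mapping from the event data;
          # omitted here for brevity.
```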

Application Event Reporter

The Application Event Reporter specializes in monitoring changes to Argo CD applications deployed on the cluster.

In contrast to the Resource Event Reporter, which utilizes Argo Events, the Application Event Reporter employs a proprietary implementation that includes an event queue to process application change-events, and sharding for a robust and scalable setup. Another significant difference is that the Application Event Reporter retrieves both the live-state manifest of the application and the Git manifests for all the application’s managed resources.

Application Event Reporter data flow

The diagram below illustrates the data flow for the Application Event Reporter (identified on the cluster as event-reporter):

Application Event Reporter flow
  1. The user makes changes to the application manifest or its managed resources and commits them to the Git repository (a sample Application manifest follows these steps).

  2. The Argo CD Application Controller monitors the Git repository for changes, synchronizes the updates with the cluster, and forwards the changes to the Kubernetes API.

  3. The Application Event Reporter subscribes to the Kubernetes API to receive application-change events.

    • If there are multiple instances of the Application Event Reporter, each instance subscribes to a set of specific applications determined through a hash function on the application name.
    • The application-change event is added to the Event Queue of the appropriate Application Reporter instance for processing based on the shard to which it belongs. Each instance of the Application Reporter can queue up to 1,000 events at a time.

NOTE
The number of Application Event Reporter instances is equal to the configured number of replicas. By default, there are five replicas, but you can customize the number through the argo-cd.eventReporter.replicas parameter in your Helm values file, values.yaml.
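
As a minimal sketch, assuming the documented argo-cd.eventReporter.replicas parameter and a standard Helm values layout, the excerpt below shows how the replica count could be set:

```yaml
# values.yaml (excerpt)
# Sets the number of Application Event Reporter instances (shards).
argo-cd:
  eventReporter:
    replicas: 5   # default; adjust as needed for the number of applications the Runtime manages
```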

  4. The Application Event Reporter requests both the application’s live-state manifest and the Git manifests for all the application’s managed resources from the Argo CD server.

  5. The Argo CD server retrieves these manifests from the Argo CD repo-server and forwards them to the Application Event Reporter.

  6. The Application Event Reporter reports the application-change events to the Codefresh platform.
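
To make step 1 concrete, the following is a generic Argo CD Application manifest of the kind a user might edit and commit. The repository URL, paths, and names are illustrative placeholders and not specific to Codefresh:

```yaml
# Illustrative Argo CD Application manifest; repoURL, path, and names are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app                      # hypothetical application name
  namespace: argocd                   # namespace where Argo CD runs
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/demo-gitops.git   # placeholder Git repository
    targetRevision: main
    path: apps/demo
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

A commit that changes this manifest, or any manifest under the referenced path, triggers the flow described in steps 2 through 6.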

Request Routing Service

The Request Routing Service is installed on the same cluster as the GitOps Runtime in the customer environment.
It receives requests from the Tunnel Client (tunnel-based) or the ingress controller (ingress-based), forwards request URLs to the Application Proxy, and forwards webhooks directly to the Event Sources.

IMPORTANT
The Request Routing Service is available from Runtime version 0.0.543 and higher.
Older Runtime versions are not affected, as the ingress controller continues to route incoming requests, ensuring full backward compatibility.

Tunnel Server

Applies only to tunnel-based Hybrid GitOps Runtimes.
The Codefresh Tunnel Server is installed in the Codefresh platform. It communicates with the enterprise cluster located behind a NAT or firewall.

The Tunnel Server:

  • Forwards traffic from Codefresh Clients to the client (customer) cluster.
  • Manages the lifecycle of the Tunnel Client.
  • Authenticates requests from the Tunnel Client to open tunneling connections.

Tunnel Client

Applies only to tunnel-based Hybrid GitOps Runtimes.

Installed on the same cluster as the Hybrid GitOps Runtime, the Tunnel Client establishes the tunneling connection to the Tunnel Server via the WebSocket Secure (WSS) protocol.
Each Hybrid GitOps Runtime has a single Tunnel Client.

The Tunnel Client:

  • Initiates the connection with the Tunnel Server.
  • Forwards the incoming traffic from the Tunnel Server through the Request Routing Service to App-Proxy, and other services.

Customer environment

The customer environment that communicates with the GitOps Runtime and Codefresh generally includes:

  • Ingress controller for ingress-based Hybrid GitOps Runtimes
    The ingress controller is configured on the same Kubernetes cluster as the GitOps Runtime, and implements the ingress traffic rules for the GitOps Runtime, as illustrated in the sketch after this list. See Ingress controller requirements.
  • Managed clusters
    Managed clusters are external clusters registered to provisioned Hosted or Hybrid GitOps Runtimes for application deployment.
    Hosted GitOps requires you to connect at least one external K8s cluster as part of setting up the Hosted GitOps environment.
    Hybrid GitOps allows you to add external clusters after provisioning the Runtimes.
    See Managing external clusters in GitOps Runtimes.
  • Organizational systems
    Organizational systems include the customer’s tracking, monitoring, and notification tools, container registries, Git providers, and other systems. They can be entirely on-premises or in the public cloud.
    Either the ingress controller (ingress-based hybrid environments) or the Tunnel Client (tunnel-based hybrid environments) forwards incoming events to the GitOps Application Proxy.
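
As a sketch of what the ingress side could look like, the hypothetical Ingress below routes all Runtime traffic to a single routing service. The hostname, namespace, service name, port, and ingress class are placeholders; the actual values and required paths are defined by the ingress controller requirements for your Runtime version:

```yaml
# Hypothetical Ingress for an ingress-based Hybrid GitOps Runtime.
# Hostname, namespace, service name, port, and ingress class are placeholders;
# refer to the ingress controller requirements for the values your Runtime expects.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitops-runtime-ingress        # placeholder name
  namespace: codefresh                # placeholder Runtime namespace
spec:
  ingressClassName: nginx             # placeholder ingress class
  rules:
    - host: gitops.example.com        # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: request-routing-service   # placeholder for the Request Routing Service
                port:
                  number: 80
```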

Related articles
  • Hosted GitOps Runtime installation
  • Hybrid GitOps Runtime installation
  • On-premises GitOps Runtime installation