Requirements

The requirements listed are the minimum requirements for the Codefresh platform runtimes.

In the documentation, Kubernetes and K8s are used interchangeably.

Kubernetes cluster requirements

This section lists cluster requirements.

Cluster version

Kubernetes cluster, server version 1.18 and higher, without Argo Project components.

Tip:
To check the server version, run kubectl version --short.
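The minimum-version check can also be scripted. This is a minimal sketch in which the server version string is hard-coded as a sample value; in practice it would be captured from `kubectl version --short`:

```shell
# Sample value for illustration; in practice, capture it from the cluster:
#   SERVER_VERSION=$(kubectl version --short | awk '/Server Version/ {print $3}')
SERVER_VERSION="v1.21.3"

# Extract the minor version ("21" from "v1.21.3") and compare to the 1.18 minimum
MINOR=$(echo "$SERVER_VERSION" | cut -d. -f2)
if [ "$MINOR" -ge 18 ]; then
  echo "cluster version OK: $SERVER_VERSION"
else
  echo "cluster version too old: $SERVER_VERSION (need 1.18 or higher)" >&2
fi
```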

Ingress controller

Configure your Kubernetes cluster with an ingress controller component that is exposed from the cluster.

Supported ingress controllers

  • Ambassador: Ambassador ingress controller documentation
  • ALB (AWS Application Load Balancer): AWS ALB ingress controller documentation
  • NGINX Enterprise (nginx.org/ingress-controller): NGINX Ingress Controller documentation
  • NGINX Community (k8s.io/ingress-nginx): Provider-specific configuration in this article
  • Istio: Istio Kubernetes ingress documentation
  • Traefik: Traefik Kubernetes ingress documentation

Ingress controller requirements

  • Valid external IP address
    Run kubectl get svc -A to get a list of services and verify that the EXTERNAL-IP column for your ingress controller shows a valid hostname.

  • Valid SSL certificate
    For secure runtime installation, the ingress controller must have a valid SSL certificate from an authorized CA (Certificate Authority).

  • AWS ALB
    In the ingress resource file, verify that spec.controller is configured as ingress.k8s.aws/alb.

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
spec:
  controller: ingress.k8s.aws/alb

  • Report status
    The ingress controller must be configured to report its status. Otherwise, Argo’s health check reports the health status as “progressing”, resulting in a timeout error during installation.

    By default, NGINX Enterprise and Traefik ingress are not configured to report status. For details on configuration settings, see the following sections in this article:
    NGINX Enterprise ingress configuration
    Traefik ingress configuration
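The external-address requirement above can be checked with a short script. The sketch below parses a sample line of `kubectl get svc -A` output (hard-coded here; the namespace, service name, and hostname are illustrative assumptions) and flags an address that is still pending:

```shell
# Sample line for illustration; in practice:
#   SVC_LINE=$(kubectl get svc -A | grep ingress-nginx-controller)
SVC_LINE="ingress-nginx  ingress-nginx-controller  LoadBalancer  10.0.1.5  a1b2c3.elb.amazonaws.com  80:31234/TCP"

# EXTERNAL-IP is the fifth column of `kubectl get svc -A` output
EXTERNAL_IP=$(echo "$SVC_LINE" | awk '{print $5}')
case "$EXTERNAL_IP" in
  ""|"<pending>"|"<none>")
    echo "ingress controller has no valid external address yet" >&2
    ;;
  *)
    echo "external address: $EXTERNAL_IP"
    ;;
esac
```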

NGINX Enterprise version ingress configuration

The Enterprise version of NGINX (nginx.org/ingress-controller), both with and without the Ingress Operator, must be configured to report the status of the ingress controller.

Installation with NGINX Ingress

  • Pass the -report-ingress-status flag to the deployment:

      spec:
        containers:
        - args:
          - -report-ingress-status
    
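To confirm the flag took effect, you can inspect the container args on the deployment. A sketch, with the jsonpath output simulated as a sample string; the deployment and namespace names in the comment are assumptions:

```shell
# In practice (names are illustrative):
#   ARGS=$(kubectl get deploy nginx-ingress -n nginx-ingress \
#     -o jsonpath='{.spec.template.spec.containers[0].args}')
ARGS='["-nginx-configmaps=nginx-ingress/nginx-config","-report-ingress-status"]'

# The leading `--` stops grep from treating the pattern as an option
if echo "$ARGS" | grep -q -- '-report-ingress-status'; then
  echo "status reporting enabled"
else
  echo "status reporting NOT enabled" >&2
fi
```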

Installation with NGINX Ingress Operator

  1. Add the following to the NginxIngressController resource file:

    ...
    spec:
      reportIngressStatus:
        enable: true
    ...
    
  2. Make sure you have a certificate secret in the same namespace as the runtime. Copy an existing secret if you don’t have one.
    You will need to add this to the ingress-master when you have completed runtime installation.
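Copying an existing certificate secret into the runtime namespace amounts to rewriting the namespace field on an exported manifest. A minimal sketch; the secret name and both namespaces are assumptions, and the kubectl export is simulated with a sample manifest:

```shell
# In practice, export the secret with:
#   kubectl get secret my-cert -n default -o yaml
SECRET_YAML='apiVersion: v1
kind: Secret
metadata:
  name: my-cert
  namespace: default
type: kubernetes.io/tls'

RUNTIME_NS="codefresh"   # assumed runtime namespace

# Rewrite the namespace; in practice, pipe the result into: kubectl apply -f -
COPIED=$(printf '%s\n' "$SECRET_YAML" | sed "s/namespace: default/namespace: $RUNTIME_NS/")
printf '%s\n' "$COPIED" | grep 'namespace:'
```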

NGINX Community version provider-specific ingress configuration

Codefresh has been tested with, and is supported on, the major providers. For your convenience, here are provider-specific configuration instructions, for both supported and untested providers.

The instructions are valid for k8s.io/ingress-nginx, the community version of NGINX.

AWS
  1. Apply:
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/aws/deploy.yaml
  2. Verify a valid external address exists:
    kubectl get svc ingress-nginx-controller -n ingress-nginx
For additional configuration options, see ingress-nginx documentation for AWS.
Azure (AKS)
  1. Apply:
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml
  2. Verify a valid external address exists:
    kubectl get svc ingress-nginx-controller -n ingress-nginx
For additional configuration options, see ingress-nginx documentation for AKS.
Bare Metal Clusters
  1. Apply:
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/baremetal/deploy.yaml
  2. Verify a valid external address exists:
    kubectl get svc ingress-nginx-controller -n ingress-nginx
Bare-metal clusters often have additional considerations. See Bare-metal ingress-nginx considerations.
Digital Ocean
  1. Apply:
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/do/deploy.yaml
  2. Verify a valid external address exists:
    kubectl get svc ingress-nginx-controller -n ingress-nginx
For additional configuration options, see ingress-nginx documentation for Digital Ocean.
Docker Desktop
  1. Apply:
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml
  2. Verify a valid external address exists:
    kubectl get svc ingress-nginx-controller -n ingress-nginx
For additional configuration options, see ingress-nginx documentation for Docker Desktop.
Note: By default, Docker Desktop provisions services with localhost as their external address. Triggers in delivery pipelines cannot reach this instance unless they originate from the same machine running Docker Desktop.
Exoscale
  1. Apply:
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml
  2. Verify a valid external address exists:
    kubectl get svc ingress-nginx-controller -n ingress-nginx
For additional configuration options, see ingress-nginx documentation for Exoscale.
Google (GKE)
Add firewall rules
GKE by default limits outbound requests from nodes. For the runtime to communicate with the control-plane in Codefresh, add a firewall-specific rule.
  1. Find your cluster's network:
    gcloud container clusters describe [CLUSTER_NAME] --format='get(network)'
  2. Get the Cluster IPV4 CIDR:
    gcloud container clusters describe [CLUSTER_NAME] --format='get(clusterIpv4Cidr)'
  3. Replace `[CLUSTER_NAME]`, `[NETWORK]`, and `[CLUSTER_IPV4_CIDR]` with the values from the previous steps, and create the firewall rule:
    gcloud compute firewall-rules create "[CLUSTER_NAME]-to-all-vms-on-network" \
    --network="[NETWORK]" \
    --source-ranges="[CLUSTER_IPV4_CIDR]" \
    --allow=tcp,udp,icmp,esp,ah,sctp
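Putting the three steps together, the sketch below substitutes sample values (all three are illustrative placeholders) into the firewall-rule command and prints it rather than running it, so it can be reviewed first:

```shell
# Sample values; in practice these come from the gcloud describe commands above
CLUSTER_NAME="my-cluster"
NETWORK="default"
CLUSTER_IPV4_CIDR="10.4.0.0/14"

# Assemble the command as a string instead of executing it
CMD="gcloud compute firewall-rules create ${CLUSTER_NAME}-to-all-vms-on-network \
  --network=${NETWORK} \
  --source-ranges=${CLUSTER_IPV4_CIDR} \
  --allow=tcp,udp,icmp,esp,ah,sctp"
echo "$CMD"
```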

Use ingress-nginx
  1. Create a `cluster-admin` role binding:
    kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole cluster-admin \
    --user $(gcloud config get-value account)
  2. Apply:
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml
  3. Verify a valid external address exists:
    kubectl get svc ingress-nginx-controller -n ingress-nginx
We recommend reviewing the provider-specific documentation for GKE.
MicroK8s
  1. Install using the MicroK8s addon system:
    microk8s enable ingress
  2. Verify a valid external address exists:
    kubectl get svc ingress-nginx-controller -n ingress-nginx
MicroK8s has not been tested with Codefresh, and may require additional configuration. For details, see Ingress addon documentation.
MiniKube
  1. Install using the MiniKube addon system:
    minikube addons enable ingress
  2. Verify a valid external address exists:
    kubectl get svc ingress-nginx-controller -n ingress-nginx
MiniKube has not been tested with Codefresh, and may require additional configuration. For details, see Ingress addon documentation.
Oracle Cloud Infrastructure
  1. Apply:
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml
  2. Verify a valid external address exists:
    kubectl get svc ingress-nginx-controller -n ingress-nginx
For additional configuration options, see ingress-nginx documentation for Oracle Cloud.
Scaleway
  1. Apply:
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/scw/deploy.yaml
  2. Verify a valid external address exists:
    kubectl get svc ingress-nginx-controller -n ingress-nginx
For additional configuration options, see ingress-nginx documentation for Scaleway.


Traefik ingress configuration

To enable the Traefik ingress controller to report its status, add publishedService to providers.kubernetesIngress.ingressEndpoint.

The value must be in the format "<namespace>/<service-name>", where <service-name> is the Traefik service from which to copy the status.

   ...
   providers:
    kubernetesIngress:
      ingressEndpoint:
        publishedService: "<namespace>/<traefik-service>"  # for example, "codefresh/traefik-default"
   ...

Node requirements

  • Memory: 5000 MB
  • CPU: 2

Runtime namespace permissions for resources

Resource                 Permissions Required
ServiceAccount           Create, Delete
ConfigMap                Create, Update, Delete
Service                  Create, Update, Delete
Role                     In group rbac.authorization.k8s.io: Create, Update, Delete
RoleBinding              In group rbac.authorization.k8s.io: Create, Update, Delete
PersistentVolumeClaim    Create, Update, Delete
Pod                      Create, Update, Delete
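The permissions above map onto a namespaced Role. A sketch of such a manifest, assuming the runtime namespace is named codefresh (the Role name and namespace here are illustrative):

```yaml
# Sketch: a Role granting the permissions listed above; names are assumptions.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: codefresh-runtime
  namespace: codefresh   # the runtime namespace
rules:
- apiGroups: [""]
  resources: ["serviceaccounts"]
  verbs: ["create", "delete"]
- apiGroups: [""]
  resources: ["configmaps", "services", "persistentvolumeclaims", "pods"]
  verbs: ["create", "update", "delete"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles", "rolebindings"]
  verbs: ["create", "update", "delete"]
```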

Git repository requirements

This section lists the requirements for Git installation repositories.

Git installation repo

If you are using an existing repo, make sure it is empty.

Git access tokens

Codefresh requires two access tokens: a runtime token for runtime installation, and a personal user token for each user to authenticate Git-based actions in Codefresh.

Git runtime token

The Git runtime token is mandatory for runtime installation.

The token must have valid:

  • Expiration date: Default is 30 days
  • Scopes: repo and admin:repo_hook
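For GitHub classic tokens, the granted scopes are returned in the X-OAuth-Scopes response header, which makes the scope requirement easy to verify. A sketch with the header simulated as a sample string:

```shell
# In practice, fetch the header from the GitHub API:
#   SCOPES=$(curl -sI -H "Authorization: token <TOKEN>" https://api.github.com/user \
#     | grep -i '^x-oauth-scopes')
SCOPES="x-oauth-scopes: repo, admin:repo_hook"

# Check each required scope against the header
MISSING=0
for required in "repo" "admin:repo_hook"; do
  if ! echo "$SCOPES" | grep -q "$required"; then
    echo "missing scope: $required" >&2
    MISSING=1
  fi
done
[ "$MISSING" -eq 0 ] && echo "all required scopes present"
```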

Scopes for Git runtime token

Git user token for Git-based actions

The Git user token is the user’s personal token and is unique to every user. It is used to authenticate every Git-based action of the user in Codefresh. You can add the Git user token at any time from the UI.

The token must have valid:

  • Expiration date: Default is 30 days
  • Scope: repo

Scope for Git personal user token


For detailed information on GitHub tokens, see Creating a personal access token.
