Codefresh On-Premises Upgrade

Use the Kubernetes Codefresh Installer to upgrade the Codefresh On-Premises platform

Upgrade the Codefresh on-premises platform to the latest version:

  • Prepare for the upgrade: Complete the tasks required for the version you are upgrading to
  • Upgrade the platform
  • Complete the post-upgrade configuration: If needed, complete the tasks required for the version you are upgrading to

Upgrade to 1.1.1

Prepare for the upgrade to v1.1.1 by performing the tasks listed below.

Maintain backward compatibility for infrastructure services

If you have Codefresh version 1.0.202 or lower installed and are upgrading to v1.1.1, update the config.yaml for kcfi to retain the existing images for the following services:

  • cf-mongodb
  • cf-redis
  • cf-rabbitmq
  • cf-postgresql
  • cf-nats
  • cf-consul

In config.yaml, pin the images as in the example below. If needed, replace the bitnami prefix with that of your private repository.

...

global:
  ### Codefresh App domain name. appUrl is a mandatory parameter
  appUrl: onprem.mydomain.com
  appProtocol: https

  mongodbImage: bitnami/mongodb:3.6.13-r0 # (default `mongodbImage: bitnami/mongodb:4.2`)

mongodb:
  image: bitnami/mongodb:3.6.13-r0 # (default `image: bitnami/mongodb:4.2`)
  podSecurityContext:
    enabled: true
    runAsUser: 0
    fsGroup: 0
  containerSecurityContext:
    enabled: false    

redis:
  image: bitnami/redis:3.2.9-r2 # (default `image: bitnami/redis:6.0.16`)
  podSecurityContext:
    enabled: false
  containerSecurityContext:
    enabled: false  

postgresql:
  imageTag: 9.6.2 # (default `imageTag:13`)

nats:
  imageTag: 0.9.4  # (default `imageTag:2.7`)

consul:
  imageTag: 1.0.0 # (default `imageTag:1.11`)
...
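
To see which image versions your current installation runs (and therefore which values to pin), you can list the images in use in the Codefresh namespace. For example, assuming the default codefresh namespace:

# List the unique images used by pods in the Codefresh namespace,
# filtered to the infrastructure services listed above
kubectl get pods -n codefresh -o jsonpath="{.items[*].spec.containers[*].image}" \
  | tr ' ' '\n' | sort -u \
  | grep -E 'mongodb|redis|rabbitmq|postgres|nats|consul'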

Upgrade to 1.2.0 and higher

This major release deprecates the following Codefresh managed charts:

  • Ingress
  • Rabbitmq
  • Redis

See the instructions below for each of the affected charts.

Before the upgrade, remove any seed jobs left over from the previous release:
kubectl delete job --namespace ${CF_NAMESPACE} -l release=cf

Before the upgrade, remove the PodDisruptionBudgets (PDBs) for Redis and RabbitMQ left over from the previous release:
kubectl delete pdb cf-rabbitmq --namespace ${CF_NAMESPACE}
kubectl delete pdb cf-redis --namespace ${CF_NAMESPACE}
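
To confirm that nothing is left before you continue, you can re-check for seed jobs and PDBs (the commands below assume the same ${CF_NAMESPACE}):

# Both commands should show that the deleted objects are gone
kubectl get job --namespace ${CF_NAMESPACE} -l release=cf
kubectl get pdb --namespace ${CF_NAMESPACE}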

Update configuration for Ingress chart

From version 1.2.0 and higher, we have deprecated support for the Codefresh-managed ingress chart.
The Kubernetes community ingress-nginx chart replaces the Codefresh-managed ingress chart. For more information on ingress-nginx, see kubernetes/ingress-nginx.

Parameter locations have changed because the ingress chart name was changed from ingress to ingress-nginx:

  • NGINX controller parameters are now defined under ingress-nginx
  • Ingress object parameters are now defined under ingress

You must update config.yaml if you are using:

  • External ingress controllers, including ALB (Application Load Balancer)
  • The Codefresh-managed ingress controller with custom values

Update configuration for external ingress controllers

For external ingress controllers, including ALB (Application Load Balancer), update the relevant sections in config.yaml to align with the new name for the ingress chart:

  • Replace ingress with ingress-nginx

v1.1.1 or lower

ingress: #disables creation of both Nginx controller deployment and Ingress objects
  enabled: false

v1.2.2 or higher

ingress-nginx: #disables creation of Nginx controller deployment
  enabled: false

ingress: #disables creation of Ingress objects (assuming you've manually created the ingress resource before)
  enabled: false

  • Replace the deprecated kubernetes.io/ingress.class annotation with ingressClassName

v1.1.1 or lower

ingress:
  annotations: 
    kubernetes.io/ingress.class: my-non-codefresh-nginx

v1.2.2 or higher

ingress-nginx:
  enabled: false

ingress:
  ingressClassName: my-non-codefresh-nginx
###  `kubernetes.io/ingress.class` annotation is deprecated from kubernetes v1.22+.
#  annotations: 
#    kubernetes.io/ingress.class: my-non-codefresh-nginx  
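
Before relying on ingressClassName, you can verify that the external ingress class is registered in the cluster. For example, with the class name used above:

# The IngressClass referenced by ingressClassName must exist in the cluster
kubectl get ingressclass
kubectl get ingressclass my-non-codefresh-nginx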

Update configuration for Codefresh-managed ingress with custom values

If you were running the Codefresh-managed ingress controller with custom values, refer to values.yaml in the official ingress-nginx repo and, if needed, update the ingress-nginx section in config.yaml. The example below shows the default values (already provided in the Codefresh chart) for ingress-nginx:

ingress-nginx:
  enabled: true
  controller:
    ## This section refers to the creation of the IngressClass resource
    ## IngressClass resources are supported since k8s >= 1.18 and required since k8s >= 1.19
    ingressClassResource:
      # -- Is this ingressClass enabled or not
      enabled: true
      # -- Is this the default ingressClass for the cluster
      default: false
      # -- Controller-value of the controller that is processing this ingressClass
      controllerValue: "k8s.io/ingress-nginx-codefresh"
      # -- Name of the ingressClass
      name: nginx-codefresh
    # -- For backwards compatibility with ingress.class annotation.
    # Algorithm is as follows, first ingressClassName is considered, if not present, controller looks for ingress.class annotation
    ingressClass: nginx-codefresh
    # -- Process IngressClass per name (additionally as per spec.controller).
    ingressClassByName: true
    # Limit the scope of the controller to a specific namespace
    scope:
      # -- Enable 'scope' or not
      enabled: true
    admissionWebhooks:
      enabled: false

The new ingress-nginx subchart creates a new cf-ingress-nginx-controller service (type: LoadBalancer) instead of the old cf-ingress-controller service, so make sure to update the DNS record for global.appUrl to point to the new external load balancer IP.
You can get the external load balancer IP with:
kubectl get svc cf-ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
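
Once the DNS record is updated, you can confirm that the appUrl resolves to the new load balancer address. For example, with the domain used earlier in this guide:

# Compare the DNS answer with the IP reported by the LoadBalancer service
dig +short onprem.mydomain.com
kubectl get svc cf-ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'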

Update configuration for RabbitMQ chart

From version 1.2.2 and higher, we have deprecated support for the Codefresh-managed RabbitMQ chart. The public bitnami/rabbitmq chart has replaced it. For more information, see bitnami/rabbitmq.

Configuration updates are not required if you are running an external RabbitMQ service.

Because the RabbitMQ chart was replaced, the values structure differs for some parameters. For the complete list of values, see values.yaml.

existingPvc changed to existingClaim, and is now defined under persistence

v1.1.1 or lower

rabbitmq:
  existingPvc: my-rabbitmq-pvc
  nodeSelector:
    foo: bar
  resources:
    limits:
      cpu: 2000m
      memory: 2Gi
    requests:
      cpu: 500m
      memory: 1Gi
  tolerations:
  - effect: NoSchedule
    key: <key>
    operator: Equal
    value: <value>

v1.2.2 or higher

rabbitmq:
  volumePermissions: ## Enable init container that changes the owner and group of the persistent volume from existing claim
    enabled: true
  persistence:
    existingClaim: my-rabbitmq-pvc
  nodeSelector:
    foo: bar
  resources:
    limits:
      cpu: 2000m
      memory: 2Gi
    requests:
      cpu: 500m
      memory: 1Gi
  tolerations:
  - effect: NoSchedule
    key: <key>
    operator: Equal
    value: <value>

storageClass and size are now defined under persistence

v1.1.1 or lower

rabbitmq:
  storageClass: my-storage-class
  storageSize: 32Gi

v1.2.2 or higher

rabbitmq:
  persistence:
    storageClass: my-storage-class
    size: 32Gi
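
If you reference an existing claim, you can confirm that it is present in the Codefresh namespace before the upgrade. For example, with the claim name used above:

# The claim referenced by persistence.existingClaim must already exist
kubectl get pvc my-rabbitmq-pvc -n codefresh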

Update configuration for Redis chart

From version 1.2.2 and higher, we have deprecated support for the Codefresh-managed Redis chart. The public bitnami/redis chart has replaced it. For more information, see bitnami/redis.

Redis storage contains CRON and Registry typed triggers, so you must migrate the existing data from the old deployment to the new StatefulSet. This is done by backing up the existing data before the upgrade, and then restoring the backed-up data after the upgrade.

Configuration updates are not required:

  • When running an external Redis service.
  • If CRON and Registry triggers have not been configured.

Verify existing Redis data for CRON and Registry triggers

Check if you have CRON and Registry triggers configured in Redis.

  • Run codefresh get triggers
    OR
    Check directly in the K8s cluster where Codefresh is installed:
NAMESPACE=codefresh
REDIS_PASSWORD=$(kubectl get secret --namespace $NAMESPACE cf-redis -o jsonpath="{.data.redis-password}" | base64 --decode)
  
kubectl exec -it deploy/cf-redis -- env REDIS_PASSWORD=$REDIS_PASSWORD bash
#once inside cf-redis pod
REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli
info keyspace # list db 
select 15 # select db 15
keys * #show keys
  • If there are results, continue with Back up existing Redis data.

Back up existing Redis data

Back up the existing data before the upgrade:

  • Export the AOF data from the old cf-redis-* pod by copying it locally:
NAMESPACE=codefresh
REDIS_PASSWORD=$(kubectl get secret --namespace $NAMESPACE cf-redis -o jsonpath="{.data.redis-password}" | base64 --decode)
REDIS_POD=$(kubectl get pods -l app=cf-redis -o custom-columns=:metadata.name --no-headers=true)
kubectl cp $REDIS_POD:/bitnami/redis/data/appendonly.aof appendonly.aof -c cf-redis
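
To confirm that the export succeeded, check that the file exists locally and is not empty:

# The copied AOF file should be present and non-empty
ls -lh appendonly.aof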

Restore backed-up Redis data

Restore the data after the upgrade:

  • Copy appendonly.aof to the new cf-redis-master-0 pod:

    kubectl cp appendonly.aof cf-redis-master-0:/data/appendonly.aof
    
  • Restart cf-redis-master-0 and cf-api pods:

    kubectl delete pod cf-redis-master-0  
        
    kubectl scale deployment cf-cfapi-base --replicas=0 -n codefresh  
    kubectl scale deployment cf-cfapi-base --replicas=2 -n codefresh 
    

Because the Redis chart was replaced, the values structure differs for some parameters. For the complete list of values, see values.yaml.

existingPvc changed to existingClaim, and is now defined under master.persistence

v1.1.1 or lower

redis:
  existingPvc: my-redis-pvc
  nodeSelector:
    foo: bar
  resources:
    limits:
      cpu: 1000m
      memory: 1Gi
    requests:
      cpu: 500m
      memory: 500Mi
  tolerations:
  - effect: NoSchedule
    key: <key>
    operator: Equal
    value: <value>

v1.2.2 or higher

redis:
  volumePermissions: ## Enable init container that changes the owner and group of the persistent volume from existing claim
    enabled: true
  master:
    persistence:
      existingClaim: my-redis-pvc
    nodeSelector:
      foo: bar
    resources:
      limits:
        cpu: 1000m
        memory: 1Gi
      requests:
        cpu: 500m
        memory: 500Mi
    tolerations:
    - effect: NoSchedule
      key: <key>
      operator: Equal
      value: <value>

storageClass and size are now defined under master.persistence

v1.1.1 or lower

redis:
  storageClass: my-storage-class
  storageSize: 32Gi

v1.2.2 or higher

redis:
  master:
    persistence:
      storageClass: my-storage-class
      size: 32Gi

If you run the upgrade without the Redis backup and restore procedure, the Helm Releases dashboard page might be empty for a few minutes after the upgrade.

Upgrade the Codefresh Platform with kcfi

  1. Locate the config.yaml file you used in the initial installation.
  2. Change the chart version number in it:
     metadata:
       kind: codefresh
       installer:
         type: helm
         helm:
           chart: codefresh
           repoUrl: https://chartmuseum.codefresh.io/codefresh
           version: 1.2.12
    
  3. Perform a dry run and verify that there are no errors:
    kcfi deploy --dry-run --debug -c codefresh/config.yaml
  4. Run the actual upgrade:
    kcfi deploy --debug -c codefresh/config.yaml
  5. Verify that all the pods are in Running state:
    kubectl -n codefresh get pods --watch
  6. Log in to the Codefresh UI, and check the new version.
  7. If needed, enable/disable new feature flags.
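
You can also cross-check the deployed chart version against the one you set in config.yaml. For example, assuming Helm 3 and the cf release in the codefresh namespace:

# The CHART column should show the codefresh chart at the new version
helm list -n codefresh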

Codefresh with Private Registry

If you install/upgrade Codefresh in an air-gapped environment (without access to public registries or the Codefresh Enterprise registry), you have to copy the images to your organization's container registry.

Obtain image list for specific release

Push images to private docker registry

There are 3 types of images. In the examples below, localhost:5000 is your <private-registry-addr:[port]>.

  • non-Codefresh like:
    bitnami/mongodb:4.2
    k8s.gcr.io/ingress-nginx/controller:v1.2.0
    postgres:13
    

    convert to:

    localhost:5000/bitnami/mongodb:4.2
    localhost:5000/ingress-nginx/controller:v1.2.0
    localhost:5000/postgres:13
    
  • Codefresh public images like:
    quay.io/codefresh/dind:20.10.13-1.25.2
    quay.io/codefresh/engine:1.147.8
    quay.io/codefresh/cf-docker-builder:1.1.14
    

    convert to:

    localhost:5000/codefresh/dind:20.10.13-1.25.2
    localhost:5000/codefresh/engine:1.147.8
    localhost:5000/codefresh/cf-docker-builder:1.1.14
    
  • Codefresh private images like:
    gcr.io/codefresh-enterprise/codefresh/cf-api:21.153.6
    gcr.io/codefresh-enterprise/codefresh/cf-ui:14.69.38
    gcr.io/codefresh-enterprise/codefresh/pipeline-manager:3.121.7
    

    convert to:

    localhost:5000/codefresh/cf-api:21.153.6
    localhost:5000/codefresh/cf-ui:14.69.38
    localhost:5000/codefresh/pipeline-manager:3.121.7
    

    The delimiters for this conversion are codefresh OR codefresh-io

  • To push images via kcfi (ver. 0.5.15 is required) use:

kcfi images push --help

Prerequisites: sa.json to access Codefresh Enterprise GCR

kcfi images push --codefresh-registry-secret sa.json --images-list images-list-v1.2.12 --registry localhost:5000 --user "root" --password "root"

Or run the push script with your registry address and the release version:

./push-to-registry.sh localhost:5000 v1.2.12
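
If you want to mirror a single image manually (for example, to verify access to your private registry before the bulk push), the standard docker pull/tag/push sequence follows the same naming conversion. A sketch using one of the public images listed above:

# Pull from the public registry, retag with the private registry prefix, and push
docker pull postgres:13
docker tag postgres:13 localhost:5000/postgres:13
docker push localhost:5000/postgres:13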

Install/Upgrade Codefresh with private docker registry config

Set usePrivateRegistry: true, and set the privateRegistry address, username, and password in config.yaml:

config.yaml

global:
  ...

images:
  codefreshRegistrySa: sa.json
  usePrivateRegistry: true
  privateRegistry:
    address: myartifactory.com
    username: 
    password:
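
With the private registry configured, run the same kcfi deploy flow described above so that all components pull from your registry:

# Dry run first, then the actual deployment with the updated config.yaml
kcfi deploy --dry-run --debug -c codefresh/config.yaml
kcfi deploy --debug -c codefresh/config.yaml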