Migrating to a new cluster

How to migrate the CKI microservices to a new Kubernetes cluster

Several services need to be migrated when moving to a different cluster. The following sections describe the manual tasks involved.

Move deployments

In general, services are migrated by adjusting the Kubernetes context in cee/deployment-all. After all the pods are up, data can be migrated.

  1. Reconfigure .gitlab-ci.yml to point to the new cluster by adjusting the appropriate PROJECT_CONTEXT variable.

  2. Run the CI job and wait for the deployment to settle.
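Step 1 usually amounts to a one-line change in the CI configuration. A hypothetical excerpt of .gitlab-ci.yml (only the PROJECT_CONTEXT variable name comes from the steps above; the context names are made up):

```yaml
# Hypothetical .gitlab-ci.yml excerpt; context names are placeholders.
variables:
  # Point deployments at the new cluster's Kubernetes context.
  PROJECT_CONTEXT: new-cluster  # was: old-cluster
```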

Move persistent volumes

To move the contents of a persistent volume via an external S3 bucket:

  1. Obtain the bucket name, endpoint and credentials from cee/deployment-all, e.g. from the BUCKET_DH_DW_BACKUPS variable.

  2. Scale the deployment in both clusters to zero via

    oc --context CONTEXT scale \
        --replicas=0 \
        dc/deployment-name
    
  3. Create a debug pod in both clusters via

    oc --context CONTEXT debug \
        --keep-init-containers=false \
        --image=quay.io/cki/cki-tools:production \
        dc/deployment-name
    
  4. In the original cluster, copy the PVC contents to the S3 bucket via

    tar -C /path/to/pvc -cf - . | \
        AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=... \
        aws --endpoint-url https://url-for-s3-endpoint/ \
        s3 cp - s3://bucket/path/tarball.tar
    
  5. In the new cluster, extract the tarball and delete it from the bucket via

    AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=... \
        aws --endpoint-url https://url-for-s3-endpoint/ \
        s3 cp s3://bucket/path/tarball.tar - | \
        tar -xf - -C /path/to/pvc
    AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=... \
        aws --endpoint-url https://url-for-s3-endpoint/ \
        s3 rm s3://bucket/path/tarball.tar
    
  6. Scale the deployment in both clusters back up via

    oc --context CONTEXT scale \
        --replicas=1 \
        dc/deployment-name
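The tar streaming used in steps 4 and 5 can be sanity-checked locally before running it against a real PVC; a minimal sketch with the S3 hop replaced by a plain pipe (no aws or oc required, paths are temporary directories standing in for the PVC mount points):

```shell
set -eu

# Stand-ins for the PVC mount points in the old and new cluster.
src=$(mktemp -d)
dst=$(mktemp -d)
printf 'pvc payload\n' > "$src/data.txt"

# Step 4 creates the tarball on stdout; step 5 extracts it from stdin.
# Using -C with "." keeps the archive paths relative, so the contents
# land directly in the target directory instead of a nested path.
tar -C "$src" -cf - . | tar -xf - -C "$dst"

cat "$dst/data.txt"
```

The same relative-path layout is what makes the extraction in step 5 drop the files directly into the new PVC's mount point.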
    
Last modified October 14, 2022: Retrieve container images from quay.io (0d94653)