# Migrating to a new cluster

There are multiple services to migrate when moving to a different cluster. The sections below cover some of the manual tasks involved.

## Move deployments

In general, services are migrated by adjusting the Kubernetes context in
`cee/deployment-all`. After all the pods are up, the data can be migrated.

- Reconfigure `.gitlab-ci.yml` to point to the new cluster by adjusting the
  appropriate `PROJECT_CONTEXT` variable.
- Run the CI job and wait for the deployment to settle.
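
Before migrating any data, confirm that the pods actually came up in the new
cluster. A minimal check, assuming `CONTEXT` is the new cluster's context name
as configured via `PROJECT_CONTEXT`:

```shell
# All pods of the migrated services should be Running or Completed
oc --context CONTEXT get pods
```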

## Move persistent volumes

To move the contents of a persistent volume via an external S3 bucket:

- Obtain the bucket, endpoint, and credentials from `cee/deployment-all`,
  e.g. from the `BUCKET_DH_DW_BACKUPS` variable.
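
  Before scaling anything down, the credentials and endpoint can be checked
  with a quick listing; a sketch using the same placeholder values as the
  steps below:

  ```shell
  # Fails fast if the credentials, endpoint, or bucket path are wrong
  AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=... \
      aws --endpoint-url https://url-for-s3-endpoint/ \
      s3 ls s3://bucket/path/
  ```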

- Scale the deployment in both clusters to zero via

  ```shell
  oc --context CONTEXT scale \
      --replicas=0 \
      dc/deployment-name
  ```
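
  `oc scale` returns as soon as the change is recorded; make sure the pods
  are actually gone in both clusters before copying any data, for example via

  ```shell
  # DESIRED and CURRENT should both show 0 in both clusters
  oc --context CONTEXT get dc/deployment-name
  ```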

- Create a debug pod in both clusters via

  ```shell
  oc --context CONTEXT debug \
      --keep-init-containers=false \
      --image=quay.io/cki/cki-tools:production \
      dc/deployment-name
  ```
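
  The debug pod drops you into an interactive shell with the deployment's
  volumes mounted. To confirm the PVC is mounted where expected (the path is
  a placeholder, as in the steps below), run inside the pod:

  ```shell
  # Confirm the volume mount and note the size for later comparison
  df -h /path/to/pvc
  du -sh /path/to/pvc
  ```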

- In the original cluster, copy the PVC contents to the S3 bucket via

  ```shell
  # Archive relative to the mount point so the extraction step below
  # restores the files directly under /path/to/pvc
  tar -cf - -C /path/to/pvc . | \
      AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=... \
      aws --endpoint-url https://url-for-s3-endpoint/ \
      s3 cp - s3://bucket/path/tarball.tar
  ```
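
  Before touching anything in the new cluster, it can be verified that the
  tarball arrived intact, e.g. via

  ```shell
  # The listed object size should roughly match the PVC usage
  AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=... \
      aws --endpoint-url https://url-for-s3-endpoint/ \
      s3 ls s3://bucket/path/tarball.tar
  ```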

- In the new cluster, extract the tarball and delete it from the bucket via

  ```shell
  AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=... \
      aws --endpoint-url https://url-for-s3-endpoint/ \
      s3 cp s3://bucket/path/tarball.tar - | \
      tar -xf - -C /path/to/pvc
  AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=... \
      aws --endpoint-url https://url-for-s3-endpoint/ \
      s3 rm s3://bucket/path/tarball.tar
  ```
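
  A quick sanity check in the new cluster's debug pod before relying on the
  data (run the same commands in the original cluster and compare):

  ```shell
  # File count and total size should match the original PVC
  find /path/to/pvc -type f | wc -l
  du -sh /path/to/pvc
  ```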

- Scale the deployment in both clusters back up via

  ```shell
  oc --context CONTEXT scale \
      --replicas=1 \
      dc/deployment-name
  ```
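
  To confirm the services are healthy again, the rollout can be watched, e.g.
  via

  ```shell
  # Blocks until the deployment has finished rolling out
  oc --context CONTEXT rollout status dc/deployment-name
  ```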