Container images

How the CKI team creates, builds, tests and deploys container images used for services, cron jobs and the pipeline

The CKI Project uses container images for most of its services and cron jobs, as well as in the kernel build and test pipeline.

Repositories

Container images are built in two places:

The container image repository contains the container image build files (i.e. Dockerfiles) for

  • some basic container images that are used all across the CKI project (base, buildah)
  • the (builder) images used in the pipeline (python, builder-*)

For the micro services and cron jobs, the container image build files are distributed alongside the source code.

General guidelines

  • All containers other than the builder images should be directly or indirectly based on the latest Fedora release
  • Outside of the container image repository, all container images for services and cron jobs should derive from the base container image built in the container image repository
  • For nearly all GitLab CI purposes, the cki-tools container image built in the cki-tools repository should be used
  • Container images should never contain any sensitive information

Creating container images

Container image build files (i.e. Dockerfiles) should be placed in builds/image-name.in and will be preprocessed with the C preprocessor cpp by buildah. This makes it possible to include the following building blocks with #include:

  • setup-from-base: FROM command for a generic Python application based on the base container image with support for customizing image name and tag
  • python-requirements: commands for a generic Python application that contains setup.cfg or requirements.txt for dependencies and a top-level run.sh as the entry point
  • cleanup: removes the dnf and pip caches; should always come at the end of your image build file

As an example, an image build file for a Python application in builds/image-name.in could look like

#include "setup-from-base"
/* any steps here that should happen before pip install */
#include "python-requirements"
/* any steps here that should happen after pip install */
#include "cleanup"

As the preprocessor treats anything starting with a # as a preprocessor directive, comments need to be written between C-style delimiters like /* comment */.

Building container images

A helper script cki_build_image.sh that simplifies building and pushing images is provided in the container image repository and included in the buildah image.

The script uses buildah to build a container image from an image build file in builds/, selected via the IMAGE_NAME environment variable. IMAGE_NAME should contain the base file name of the image build file under builds/, without the extension, e.g. IMAGE_NAME=image-name for builds/image-name.in.

When run inside a GitLab pipeline, the script will push the container images to the project container image registry with a tag of p-<pipeline-id>.

Building for foreign architectures

When the IMAGE_ARCH environment variable is set to a non-empty value, images can be built for non-native architectures via qemu. The resulting images are tagged with -<arch> appended. Currently, amd64, arm64, ppc64le and s390x are verified to be compatible with quay.io.

When the IMAGE_ARCHES environment variable is set to a space-separated list of architectures, a multi-arch manifest can be built and uploaded. The source images are downloaded from the registry before creating the manifest to work around some peculiarities of quay.io.
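For illustration, inside the buildah container the two variables could be used as follows; image-name and the chosen architectures are placeholders, and the exact invocation is a sketch based on the variable semantics described above:

```shell
# build an arm64 image via qemu; the result is tagged with -arm64 appended
IMAGE_NAME=image-name IMAGE_ARCH=arm64 cki_build_image.sh

# after the per-architecture images have been pushed, combine them
# into a multi-arch manifest
IMAGE_NAME=image-name IMAGE_ARCHES="amd64 arm64" cki_build_image.sh
```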

Building locally

To build a container image locally, run the helper script in the buildah container with the correct IMAGE_NAME environment variable via

podman run \
    --rm \
    --pull=newer \
    -e IMAGE_NAME=image-name \
    --privileged \
    -w /code \
    -v .:/code \
    -v ~/.local/share/containers:/var/lib/containers \
    quay.io/cki/buildah:production \
    cki_build_image.sh

This will mount the current directory as /code inside the container, and call cki_build_image.sh with the correct IMAGE_NAME environment variable. Additionally, it will share the container storage of your local user in ~/.local/share/containers with the container. In this way, the resulting container image will be available on the host as well.

If you cannot see the container images generated inside the container on your host system, check store.GraphDriverName as returned by buildah info inside and outside the buildah container. If your host system uses a different graph driver (e.g. vfs), you can force buildah inside the container to use the same driver by additionally passing -e "STORAGE_DRIVER=vfs" to the podman command line above.
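Assuming buildah's Go-template support for the info output, the two graph drivers can be compared like this:

```shell
# storage driver on the host
buildah info --format '{{.store.GraphDriverName}}'

# storage driver inside the buildah container
podman run --rm --privileged quay.io/cki/buildah:production \
    buildah info --format '{{.store.GraphDriverName}}'
```

If the two commands print different drivers, pass the host driver via STORAGE_DRIVER as described above.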

Building via GitLab

The cki-lib repository contains common GitLab CI building blocks for building container images via the cki-common.yml include file.

It can be included in .gitlab-ci.yml with something like

include:
  - project: cki-project/cki-lib
    file: .gitlab/ci_templates/cki-common.yml

It provides the following job templates for container image building:

  • .publish: publish a container image named after the project (CI_PROJECT_NAME)
  • .publish_job: publish a container image named after the job (CI_JOB_NAME)
  • .tag: tag a container image with mr-<123>, <tag> and latest as appropriate
  • .deploy_production: manual job to tag a container image with production and deploy into the production environment
  • .deploy_production_image: as above, but for custom image names
  • .deploy_production_tag: manual job to tag the current git commit as production, this needs a GITLAB_JOB_DEPLOY_PRODUCTION_ACCESS_TOKEN deployment token with repo write access
  • .deploy_mr: manual job to tag a container image with mr-<123> and deploy into a testing environment
  • .deploy_mr_image: as above, but for custom image names
  • .stop_mr: stop job for the testing environments

Tags and environments

Depending on the kind of GitLab pipeline where container images are built, the images are tagged with various tags:

| tag / pipeline | default branch | tag        | merge request |
|----------------|----------------|------------|---------------|
| p-123456       | always         | always     | always        |
| g-123456       | on success     | on success | on success    |
| latest         | on success     |            |               |
| tag            |                | on success |               |
| mr-123         |                |            | on success    |

For all pipelines, the p-123456 tag will always be available, independent of whether testing passes. All other tags are only pushed after testing has passed successfully.

Additionally, pipeline jobs that deploy into an environment can be added to a project.

For deployments to the production environment, container images are tagged with production. This job will run automatically on the default branch, but is also available for manual execution on other pipelines.

For deployments into dynamic review environments per merge request, container images are tagged with mr-123. This job is only available for manual execution in merge request pipelines.

All deployment jobs only tag the container images. The actual deployment of these images needs to happen elsewhere.

Single container image per repository

To publish and tag a container image named after the project, and (optionally) create production and review environments, add the following jobs to .gitlab-ci.yml:

publish:
  extends: .publish

tag:
  extends: .tag

deploy-production:
  extends: .deploy_production

deploy-mr:
  extends: .deploy_mr
  environment: {on_stop: stop-mr}

stop-mr:
  extends: [.deploy_mr, .stop_mr]

Multiple container images per repository

If multiple container images should be built, add multiple container image build files in builds/. As more jobs are needed, the .gitlab-ci.yml jobs get slightly more complicated 🙈.

As an example, to publish and tag two images backend and frontend and (optionally) create production and review environments, the following jobs have to be added:

.images:
  parallel:
    matrix:
      - IMAGE_NAME: backend
        CHANGES: src/{core,backend}  # "/*" is automatically appended
      - IMAGE_NAME: frontend
        CHANGES: src/{core,frontend}  # "/*" is automatically appended

publish:
  extends: [.publish, .images]

tag:
  extends: [.tag, .images]

deploy-production:
  extends: [.deploy_production_image, .images]
  environment:
    name: production/$IMAGE_NAME

# MR environment per image, on_stop cannot be specified in a matrix

deploy-mr-backend:
  extends: .deploy_mr_image
  variables: {IMAGE_NAME: backend}
  environment: {on_stop: stop-mr-backend}

stop-mr-backend:
  extends: [deploy-mr-backend, .stop_mr]

deploy-mr-frontend:
  extends: .deploy_mr_image
  variables: {IMAGE_NAME: frontend}
  environment: {on_stop: stop-mr-frontend}

stop-mr-frontend:
  extends: [deploy-mr-frontend, .stop_mr]

Customizing parameters

Take a look at the publish job template to see which variables can be overridden. The BASE_IMAGE_TAG and buildah_image_tag variables allow running image builds with newer versions of the base and buildah container images.
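As a sketch, overriding these variables in .gitlab-ci.yml could look like the following; the p-123456 tag values are placeholders for real pipeline tags:

```yaml
publish:
  extends: .publish
  variables:
    # placeholder tags; substitute p-<pipeline-id> tags from the
    # container image repository pipelines
    BASE_IMAGE_TAG: p-123456
    buildah_image_tag: p-123456
```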

Building multi-arch images

To build multi-arch container images, one job per architecture plus one job for the multi-arch manifest need to be defined:

backend:
  extends: .publish_job
  variables:
    IMAGE_NAME: backend
  parallel:
    matrix:
      - IMAGE_ARCH: [amd64, arm64, ppc64le, s390x]

backend-multi:
  extends: .publish_job
  stage: 🎁
  variables:
    IMAGE_ARCHES: "amd64 arm64 ppc64le s390x"
    IMAGE_NAME: backend

Testing container images

For all merge requests in projects that use container images, updated container images with the code in the merge request will be published to the GitLab container registry of the project with tags like mr-1234, where 1234 corresponds to the merge request ID.
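Such an image can then be pulled for local testing; the project path below is purely illustrative and needs to be replaced with the actual registry path and merge request ID:

```shell
# hypothetical project path; substitute your project's registry path
podman pull registry.gitlab.com/cki-project/example-project:mr-1234
```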

Testing locally

To run container images locally, use a temporary container via

podman run \
    --rm \
    --env ENV_NAME="value" \
    --env ... \
    --workdir /code \
    quay.io/cki/image

The --env parameters allow setting environment variables inside the container. For local development, adding the --volume .:/code parameter will overlay the current directory on top of /code inside the container. This means that any changes outside the container are immediately visible inside the container, and that it is not necessary to rebuild the container image for each code change; a restart of the container is good enough.
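Putting this together, a local development loop could look like the following, with quay.io/cki/image standing in for the actual image name:

```shell
podman run \
    --rm \
    --env ENV_NAME="value" \
    --volume .:/code \
    --workdir /code \
    quay.io/cki/image
```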

Testing OpenShift services

The deployment-all repository contains information on how to create non-production deployments for services.

Testing OpenShift cron jobs

Clone the cron job configuration into a new pod via oc debug while overriding the container image and the CKI_DEPLOYMENT_ENVIRONMENT environment variable like

oc debug \
    cronjob/acme-update-cluster-routes-daily \
    --image quay.io/cki/cki-tools:mr-73 \
    CKI_DEPLOYMENT_ENVIRONMENT=staging

Testing in the pipeline

In the container image repository, updated images can be tested by the bot:

  • add a comment to the merge request that contains: @cki-ci-bot test
  • wait for the pipelines to finish
  • verify everything is correct

Deploying container images

Deploying OpenShift services

All services are deployed via container images with the :production tags. When the container images are tagged with :production in a production GitLab environment, the deployment-bot will trigger a corresponding pipeline in the deployment-all repository.

The precise configuration for production deployments of services can be found in deployment-all/openshift.

Deploying OpenShift cron jobs

Cron jobs run via container images with the :production tags as well. As this tag is built automatically from the default branch for all repositories, any update to the default branch of a repository results in the immediate use of the new code the next time the cron job runs. This is implemented via the imagePullPolicy=Always setting.
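As a sketch, the relevant part of such a cron job spec could look like this; the name, schedule and image are illustrative:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cron-job  # hypothetical name
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: example
              image: quay.io/cki/cki-tools:production
              # always pull, so a newly tagged :production image
              # is picked up on the next run
              imagePullPolicy: Always
```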

The precise configuration for production deployments of cron jobs can be found in deployment-all/schedules.

Deploying in the pipeline

See the documentation on updating pipeline images.

Container image registries

Container images should be public by default, i.e. they are free for anyone to use. Container images are considered private if building them required access to internal resources, e.g. internal RHEL repositories.

quay.io

The cki organization on quay.io contains the following container images:

  • public container images
  • internal container images
  • mirror repositories for external container images

Access control

The cki organization contains three teams:

  • owners: admin access, contains only CKI project members
  • repocreators: allows creating new repositories, contains the push_account robot account
  • readers: read-only access, contains external users and the pull_account robot account

If you receive an error about not being able to access container images, talk to another team member to get your permissions adjusted.

registry.gitlab.com

The various container registries in the cki-project group on gitlab.com contain mirrors of a selection of the tags of the public container images. As PSI OpenShift might have transient problems pulling unauthenticated images from there, a group deploy token with read_registry scope is used for all pulls.

Docker Hub

Docker Hub has introduced pull rate limits that make automated use of images hosted there very unreliable. For that reason, container images consumed from Docker Hub that are not available elsewhere are mirrored into repositories on quay.io/cki.

As an example, the postgres:alpine image is used for CI/CD in the datawarehouse repository. The image is hosted on Docker Hub and is normally referenced either via the short image name postgres:alpine or the full image name docker.io/postgres:alpine. To improve reliability, the image is mirrored in deployment-all. To use the mirrored image, the image name has been replaced by quay.io/cki/mirror_postgres:alpine.
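In a .gitlab-ci.yml, using the mirrored image instead of the Docker Hub one could look like the following sketch; the job name and script are hypothetical:

```yaml
test:
  services:
    # mirrored from docker.io/postgres:alpine to avoid
    # Docker Hub pull rate limits
    - quay.io/cki/mirror_postgres:alpine
  script:
    - ./run-tests.sh  # hypothetical test entry point
```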