# Reproducing and debugging kernel builds
CKI uses containers to build the kernels, making it easy to reproduce the
builds in any local environment. Before starting, install `podman` to be able
to pull and run the containers:

```shell
dnf install -y podman
```
## Finding the correct job
There are multiple places in the pipeline where the builds are run. Most
likely, you will want to open the failed builds, which are marked with a red
cross. However, if you need to reproduce a successful build, you first need to
find the correct job in the pipeline:

- Base kernel build: Open the desired architecture and config job in the
  `build` stage.
- Kernel tools build for non-`x86_64` architectures: Open the desired
  architecture in the `build-tools` stage.
## Download the builder container image locally
Information about which container image and tag were used to build the kernel
is provided towards the top of the logs of the job found in the previous step.

Use `podman` to download the container image:

```shell
podman image pull <IMAGE_NAME_AND_TAG>
```

For example, the command for the image from the screenshot example would be:

```shell
podman image pull registry.gitlab.com/cki-project/containers/builder-stream9:latest
```
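After the pull completes, you can confirm the image is available locally:

```shell
# List locally stored images; the builder image and tag you pulled
# should appear in the output
podman image ls
```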
## Accessing private container images
RHEL container images are not publicly accessible. If pulling the image fails with permission issues, contact the CKI team to get access. The RHEL CKI containers are stored in Quay, and you need to connect your Red Hat account to Quay.
Once you are granted permissions, log in to Quay:

```shell
podman login quay.io
```

After logging in, you can use the original `podman image pull` command.
## Reproducing the build
### Start the container

```shell
podman run -it <IMAGE_NAME_AND_TAG> /bin/bash
```
Get the kernel rebuild artifacts
If you are rebuilding a tarball or source RPM, you will need to clone or copy
the git repository into the container. If you are reproducing the RPM build or
tools, you will need to retrieve the built source RPM from the artifacts of the
merge
job.
You can either run the commands (e.g. `curl`) directly in the container, or,
if you already have the artifacts present locally, use `podman` to copy the
artifacts over. Run the command from outside the container:

```shell
podman cp <LOCAL_FILE_PATH> <CONTAINER_ID>:<RESULT_FILE_PATH>
```

You can retrieve the `<CONTAINER_ID>` from:

```shell
podman container list
```
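If you fetch the artifacts from inside the container instead, one possible route is the GitLab job artifacts API; this is a sketch, and `<PROJECT_ID>` and `<JOB_ID>` are placeholders you take from the pipeline page:

```shell
# Hypothetical example: download and unpack a job's artifact archive;
# replace <PROJECT_ID> and <JOB_ID> with the values from the pipeline
curl --location --output artifacts.zip \
  "https://gitlab.com/api/v4/projects/<PROJECT_ID>/jobs/<JOB_ID>/artifacts"
unzip artifacts.zip
```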
You can also first rebuild the source RPM from git and then reproduce the actual build, if you wish.
### Get the kernel configuration

For kernels built as tarballs, the config file used to build the kernel is
available in the artifacts of the `build` job. Retrieve the configuration the
same way as the rebuild artifacts in the previous step and save it as
`.config` in the kernel repository.
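As a minimal sketch of that last step, assuming the downloaded artifact is named `kernel-config` and the kernel repository is checked out at `linux/` (both names are placeholders; use the real ones from your job):

```shell
# Stand-ins for the cloned kernel repository and the downloaded config artifact
mkdir -p linux
echo 'CONFIG_EXPERT=y' > kernel-config
# The kernel build system expects the configuration as .config in the tree
cp kernel-config linux/.config
```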
For RPM-built kernels, the default configuration for the given architecture is used. The configuration is part of the built source RPM, so no steps are needed here.
### Export required variables

For `build` and `build-tools` jobs, some environment variables are needed to
properly reproduce the build. These are printed in the job output.

Export the same variables in the container. If the `CROSS_COMPILE` variable is
not exported, the kernel build ran on a native architecture. In that case, you
need to run the container on the same architecture to properly reproduce the
build.
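As an illustration, the exported variables for a cross-compiled aarch64 build might look like the following; these exact values are assumptions, so copy the lines from your own job output instead:

```shell
# Hypothetical values for an aarch64 cross-build; take the real ones
# from the job log
export ARCH=arm64
export CROSS_COMPILE=aarch64-linux-gnu-
```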
### Run the actual build command

The command used to build the kernel is also printed in the job logs; examples
include the tarball build and `rpmbuild` commands.

Copy and run the command. For the `rpmbuild` commands, you'll have to append
the source RPM path to the end.
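For instance, a sketch of an `rpmbuild` invocation with the source RPM path appended; the path is a placeholder and any extra flags must come from the job log:

```shell
# Hypothetical example: rebuild the kernel from the source RPM that was
# copied into the container earlier
rpmbuild --rebuild /root/kernel.src.rpm
```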
## Local reproducer does not work
If you fail to reproduce the builds locally and need to access the pipeline run directly to debug what is going on, please reach out to the CKI team. The process to get access is described in the debugging a pipeline job documentation.
Last modified December 2, 2021: Add reproducer docs for builds (5ffe38d)