08 Apr 2019, 00:00

CKI pipeline under the hood part 1: Figuring out what to test

So we want to test the kernel, great idea! But what does it mean? Manually watch a random git tree, build it and submit a test job? Git is a great tool but it’s not exactly known for sending you notifications on updates. And what about testing completed builds from build systems like koji or COPR? Or if the developers come and want you to test their patches from Patchwork? It’s easy to get lost, and we haven’t even started talking about the actual test pipeline. So, let’s take a look at pipeline triggers, our solution to taming the chaos.

Basic git triggers

Let’s start easy with triggering the pipeline on git trees. We specify the git URL and branch, check if we already executed a pipeline for the top commit and, if not, test it. Now you ask: how do you know the git tree was updated? You don’t. For some fancy systems such as GitHub or GitLab you can set up notification receivers, but most kernel trees don’t use these. The solution is embarrassingly simple – just set up a cron job to check for new commits every once in a while! Because we check whether a pipeline for a given commit already exists, we don’t have to fear running the cron every few minutes for almost real-time testing.
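The cron-driven check boils down to two small steps: find the branch tip and skip it if it was already tested. Here is a minimal sketch of that logic; the function names are made up for illustration, and the input is assumed to be the output of `git ls-remote <url>`:

```python
# Sketch of the git trigger's idempotence check (names are illustrative).

def tip_of_branch(ls_remote_output, branch):
    """Parse `git ls-remote` output and return the SHA of the branch tip."""
    ref = "refs/heads/" + branch
    for line in ls_remote_output.splitlines():
        sha, _, name = line.partition("\t")
        if name == ref:
            return sha
    return None  # branch not found on the remote

def needs_pipeline(commit, already_tested):
    """True if no pipeline was executed for this commit yet."""
    return commit is not None and commit not in already_tested
```

Because `needs_pipeline` makes the whole operation idempotent, the cron interval only affects latency, not correctness.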

Patchwork

The kernel’s development model is to send email patches and, let’s be honest, it’s hard to follow what exactly is happening there (especially on busier lists). Because of this, Patchwork instances are set up to track the lists and provide a nice UI for maintainers, casuals and automation. Patchwork v2, currently in use for most upstream work, has a REST API which can be used to query new patches for a given project (the mailing list we are interested in). We retrieve a list of new patch series and all the additional information we need and trigger new pipelines. Since we use the date of the last tested patch in the API query, we don’t need to check if the patch was already tested or not, we know it wasn’t.
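A date-filtered query like the one described above might be built as follows. This is only a hedged sketch: the exact endpoint and parameter names vary between Patchwork instances, so consult the instance’s API documentation rather than treating these as canonical:

```python
from urllib.parse import urlencode

def series_query_url(base_url, project, since):
    """Build a query for series submitted after the last tested patch date.

    Endpoint and parameter names are illustrative; check the target
    Patchwork instance's REST API docs for the real schema.
    """
    params = {"project": project, "since": since, "ordering": "date"}
    return base_url + "/api/series/?" + urlencode(params)
```

Using `since` with the last tested date is what makes the duplicate check unnecessary: everything the query returns is new by construction.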

We also offer a trigger for legacy Patchwork v1 instances, which are a lot of fun because they don’t recognize patch series, so we need to reconstruct them from standalone entries.
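Reconstructing a series from standalone patches usually means parsing the "n/m" counter out of each subject line. A small sketch of that, assuming the common `[PATCH vX n/m]` subject convention (this is not the actual CKI code):

```python
import re

# Matches the "n/m" counter inside a subject tag like "[PATCH v2 3/5]".
PART_RE = re.compile(r"\[.*?(\d+)/(\d+).*?\]")

def series_position(subject):
    """Return (index, total) for a series patch, or None for a standalone one."""
    m = PART_RE.search(subject)
    return (int(m.group(1)), int(m.group(2))) if m else None
```

Once every patch yields an `(index, total)` pair, grouping and ordering them into a series is a matter of sorting by index and checking that all `total` parts arrived.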

We are currently only running this kind of testing on internal Red Hat kernels but would like to expand to upstream lists in the future.

Stable queue

Stable queue is a git tree containing a file with a list of patches for quilt and those patches themselves. These patches are planned to be released as part of the next stable kernel release. It is a different way of tracking patches and their order of application and thus needs to be treated specially.

Since we are again dealing with a git tree, we periodically check for changes. Only this time it is not enough to check the top commit – we need to inspect the contents of the series file for the release we are interested in. We retrieve the content of the file, check whether we have already seen it and, if not, grab the links to the raw patch files, in the order they are mentioned in the file, and trigger the pipeline.
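Turning a quilt `series` file into an ordered list of patch links is simple enough to sketch. The URL layout below is an assumption for illustration; the real raw-file URLs depend on how the stable-queue tree is hosted:

```python
def patch_urls(series_text, base_url):
    """Map a quilt `series` file to raw patch URLs, preserving order.

    Blank lines and `#` comments are skipped, as quilt itself does.
    The base_url scheme is hypothetical.
    """
    urls = []
    for line in series_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        urls.append(base_url + "/" + line)
    return urls
```

Hashing the cleaned-up file contents is one easy way to implement the "have we already seen this?" check before triggering.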

RPM builds

We care about Fedora and Red Hat kernels, and those are distributed as RPMs instead of tarballs. It makes sense that we test the end result that’s going out to users since the way the kernel is built has an effect on its functionality too.

Because of this, we have a receiver for Koji and COPR. Both are build systems for the Fedora community and send notification messages for completed builds via fedmsg. We simply trigger pipelines for any completed builds we are interested in. This allows us to test Fedora kernels (very close to mainline) before they are added to repositories. We use the same mechanism for Brew (the downstream version of Koji) to test internal kernels.

Since anyone can use these build systems to build their kernels just for fun, we implemented filtering mechanisms, not only on package name and release but on users and COPR repositories too.
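The filtering step can be sketched as a predicate over the incoming message. The field names here are illustrative placeholders, not the actual fedmsg schema, which differs between Koji and COPR topics:

```python
def interesting_build(msg, allowed_packages, allowed_owners):
    """Decide whether a completed-build message should trigger a pipeline.

    Field names ("status", "package", "owner") are hypothetical stand-ins
    for whatever the real message schema provides.
    """
    return (
        msg.get("status") == "completed"
        and msg.get("package") in allowed_packages
        and msg.get("owner") in allowed_owners
    )
```

Anything the predicate rejects is simply dropped, so random hobby builds never reach the pipeline.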

Putting it all together

Now that we have gotten through all of this, you may think that’s a lot of complicated work to maintain four pipelines, but it’s really not. Omitting a lot of details, we have a single pipeline with different stages that can be left out based on the configuration.

Both the stable queue and Patchwork need to have patches applied, and then be built and tested. The git trees only need to be built and tested. And the RPM builds, no matter where they come from, only need to be tested. So really, we have one pipeline that skips the patch application or build steps based on the passed data.
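The stage-selection logic described above fits in a few lines. This is a sketch under assumed trigger and stage names, not the real pipeline configuration:

```python
def stages_for(trigger):
    """Pick pipeline stages based on the trigger type (names are assumed)."""
    if trigger in ("patchwork", "stable-queue"):
        return ["apply_patches", "build", "test"]  # patches -> build -> test
    if trigger == "git":
        return ["build", "test"]  # tree is ready, just build and test
    if trigger == "rpm":
        return ["test"]  # the build system already produced the kernel
    raise ValueError("unknown trigger: " + trigger)
```

One function, four triggers – which is exactly why maintaining "four pipelines" is really maintaining one.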

Stay tuned for the next post explaining all about the GitLab interactions in the triggers!