Frequently Asked Questions

Have a specific question and no time to read the detailed docs?
  1. Where do I reach out about any problems?
  2. I got an email about a failed pipeline, how do I find the failure?
  3. What is realtime_check and what should I do when it fails?
  4. How do I send built kernels to partners?
  5. A test failed and I don’t understand why!
  6. How do I retry a job?
  7. How do I retry a pipeline?
  8. What steps should developers follow to get a green check mark?
  9. How to customize test runs?
  10. I need to keep the artifacts for longer
  11. I need to regenerate the artifacts

I got an email about a failed pipeline, how do I find the failure?

Failed email example

Click on the pipeline ID in the email (in the “Pipeline #276757970 triggered” part in the example). This will bring you to the generic pipeline view:

Multi-project pipeline view

Click on the right arrow to expand the pipeline:

Full multi-project pipeline

Note that you may have to scroll to the side to see the full pipeline visualisation and find the failures!

Any failed jobs will be marked with a red cross mark. Click on them and follow the output to find out what happened. If the job output is not sufficient, complete logs are available in the job artifacts, which you can browse in the web UI or download for local use:

Links to retrieve artifacts

For more information, see the detailed debugging guide.
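
If you prefer the command line, the same information can be pulled through the GitLab REST API. The calls below are standard GitLab API endpoints; the project ID, pipeline ID, and access token are placeholders to fill in, and for multi-project pipelines you may have to repeat the query against the downstream pipeline. A minimal sketch in Python:

```python
import requests

GITLAB = "https://gitlab.com/api/v4"
HEADERS = {"PRIVATE-TOKEN": "<your-access-token>"}  # placeholder
PROJECT_ID = "<project-id>"  # placeholder
PIPELINE_ID = 276757970  # the pipeline number from the notification email

# List only the failed jobs of the pipeline.
jobs = requests.get(
    f"{GITLAB}/projects/{PROJECT_ID}/pipelines/{PIPELINE_ID}/jobs",
    headers=HEADERS,
    params={"scope[]": "failed"},
).json()

for job in jobs:
    print(f"{job['name']}: {job['web_url']}")
    # Fetch the console log ("trace") of the failed job.
    trace = requests.get(
        f"{GITLAB}/projects/{PROJECT_ID}/jobs/{job['id']}/trace",
        headers=HEADERS,
    )
    # The tail of the log usually contains the actual error.
    print(trace.text[-2000:])
```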

What is realtime_check and what should I do when it fails?

These checks give the real-time kernel team a heads-up about conflicts with the real-time kernel branches. No action from developers is required at this stage and failures are not blocking. The real-time kernel team will contact you if they need to follow up.

How do I send built kernels to partners?

Open the publish job of your desired architecture. On the right side, there will be an area with the job summary and artifacts, similar to this example from a build job:

Links to retrieve artifacts

Click on the Download button at the bottom to download the artifacts locally. Extract the zipped archive and pick either the directory with the dnf/yum kernel repository, or specific binaries from it. You can forward these to partners the same way you forwarded builds before.
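
If you need to script this, the artifacts archive can also be fetched through the GitLab REST API. A minimal sketch, with project ID, job ID, and token as placeholders:

```python
import io
import zipfile

import requests

GITLAB = "https://gitlab.com/api/v4"
HEADERS = {"PRIVATE-TOKEN": "<your-access-token>"}  # placeholder
PROJECT_ID = "<project-id>"  # placeholder
JOB_ID = 123456  # ID of the publish job for your architecture

response = requests.get(
    f"{GITLAB}/projects/{PROJECT_ID}/jobs/{JOB_ID}/artifacts",
    headers=HEADERS,
)
response.raise_for_status()

# The artifacts come as a single zip archive; unpack it next to the script.
zipfile.ZipFile(io.BytesIO(response.content)).extractall("artifacts")
print("Artifacts extracted to ./artifacts")
```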

A test failed and I don’t understand why!

If result checking is enabled (the last job of the pipeline is check-kernel-results), follow the DataWarehouse link in the job console output. All test logs are available there, as well as contact information for the test maintainers:

Test details in DW

If result checking is not enabled (e.g. during the CVE process), links to test logs and maintainer information will be printed in the test job:

Waived failed test information in the job logs

You can contact the test maintainers if you can’t figure out the failure reason from the logs yourself.

How do I retry a job?

Jobs can be retried from both the pipeline overview and specific job views. In the pipeline overview, you can click on the circled double arrows to retry specific jobs:

Full multi-project pipeline

If you already have a specific job open, click on the Retry button on the right side:

Job details
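
Retrying also works through the API. A minimal sketch, with placeholders as in the examples above:

```python
import requests

GITLAB = "https://gitlab.com/api/v4"
HEADERS = {"PRIVATE-TOKEN": "<your-access-token>"}  # placeholder
PROJECT_ID = "<project-id>"  # placeholder
JOB_ID = 123456  # the job you want to retry

# Retrying creates a new job; the response describes it.
new_job = requests.post(
    f"{GITLAB}/projects/{PROJECT_ID}/jobs/{JOB_ID}/retry",
    headers=HEADERS,
).json()
print(f"Retried as job {new_job['id']}: {new_job['web_url']}")
```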

How do I retry a pipeline?

Go to the Pipelines tab on your merge request and click on the green Run pipeline button:

Run pipeline
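
The API equivalent of the Run pipeline button is creating a new merge request pipeline. A minimal sketch, with the merge request IID and the usual placeholders to fill in:

```python
import requests

GITLAB = "https://gitlab.com/api/v4"
HEADERS = {"PRIVATE-TOKEN": "<your-access-token>"}  # placeholder
PROJECT_ID = "<project-id>"  # placeholder
MR_IID = 42  # merge request number as shown in its URL

# Create a fresh pipeline for the merge request.
pipeline = requests.post(
    f"{GITLAB}/projects/{PROJECT_ID}/merge_requests/{MR_IID}/pipelines",
    headers=HEADERS,
).json()
print(f"Started pipeline {pipeline['id']}: {pipeline['web_url']}")
```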

What steps should developers follow to get a green check mark?

  1. Look into the failure as outlined in the questions above.
  2. If the failure is caused by your changes, push a fixed version of the code.
  3. If the failure reason turns out to be unrelated to your changes, the failure needs to be waived:
    1. If you are not working on a CVE, submit the failure (or ask the test maintainer to do so) as a new known issue in DataWarehouse. Afterwards, restart the kernel-result stage at the end of the pipeline to force result reevaluation (see the sketch after this list). There is no need to rerun the testing or even the complete pipeline!
    2. If you are working on a CVE, explain the situation in a comment on your merge request. You can still look into DataWarehouse for known issues yourself, however the automated detection is disabled due to security concerns.
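
To restart only the result evaluation from a script, you can retry the check-kernel-results job by name (the job name is taken from the question about failed tests above). A minimal sketch, with the usual placeholders:

```python
import requests

GITLAB = "https://gitlab.com/api/v4"
HEADERS = {"PRIVATE-TOKEN": "<your-access-token>"}  # placeholder
PROJECT_ID = "<project-id>"  # placeholder
PIPELINE_ID = 276757970  # pipeline containing the waived failure

jobs = requests.get(
    f"{GITLAB}/projects/{PROJECT_ID}/pipelines/{PIPELINE_ID}/jobs",
    headers=HEADERS,
    params={"per_page": 100},  # avoid missing jobs on later pages
).json()

for job in jobs:
    if job["name"] == "check-kernel-results":
        # Retry just this job to force result reevaluation.
        retried = requests.post(
            f"{GITLAB}/projects/{PROJECT_ID}/jobs/{job['id']}/retry",
            headers=HEADERS,
        ).json()
        print(f"Re-running result check: {retried['web_url']}")
```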

How to customize test runs?

Modify the configuration in .gitlab-ci.yml (or .gitlab-ci-private.yml in case of CVE work) in the top directory of the kernel repository. The list of all supported configuration options (“pipeline variables”) is available in the configuration documentation.

Don’t forget to revert your customizations when marking the MR as ready!
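
For a one-off run, pipeline variables can also be passed when triggering a pipeline through the API instead of editing the file. Whether a given option can be overridden this way depends on the pipeline setup, so treat this as an assumption and take real option names from the configuration documentation; the variable name below is made up for illustration:

```python
import requests

GITLAB = "https://gitlab.com/api/v4"
HEADERS = {"PRIVATE-TOKEN": "<your-access-token>"}  # placeholder
PROJECT_ID = "<project-id>"  # placeholder

pipeline = requests.post(
    f"{GITLAB}/projects/{PROJECT_ID}/pipeline",
    headers=HEADERS,
    json={
        "ref": "my-feature-branch",  # branch of your merge request
        "variables": [
            # Hypothetical option name; see the configuration documentation.
            {"key": "some_pipeline_option", "value": "true"},
        ],
    },
).json()
print(f"Started pipeline {pipeline['id']}: {pipeline['web_url']}")
```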

I need to keep the artifacts for longer

If you know the default 6 weeks is not enough for your case, you can proactively prolong the lifetime of the artifacts. Before the artifacts disappear, retry the publish jobs in the pipeline. This keeps the build artifacts for another 6 weeks from the retry. This is only possible once - the retried job depends on the previous pipeline jobs, and those have the same lifetime.
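
To see how much time is left, the job API exposes an artifacts_expire_at field. A minimal sketch that checks the publish jobs and retries them; matching publish jobs by name is an assumption, so check the actual job names in your pipeline:

```python
import requests

GITLAB = "https://gitlab.com/api/v4"
HEADERS = {"PRIVATE-TOKEN": "<your-access-token>"}  # placeholder
PROJECT_ID = "<project-id>"  # placeholder
PIPELINE_ID = 276757970  # pipeline whose artifacts you want to keep

jobs = requests.get(
    f"{GITLAB}/projects/{PROJECT_ID}/pipelines/{PIPELINE_ID}/jobs",
    headers=HEADERS,
    params={"per_page": 100},
).json()

for job in jobs:
    if "publish" not in job["name"]:  # assumed naming convention
        continue
    # artifacts_expire_at is null for artifacts that never expire.
    print(f"{job['name']} artifacts expire at {job.get('artifacts_expire_at')}")
    # Retrying before expiry keeps the artifacts for another retention period.
    requests.post(
        f"{GITLAB}/projects/{PROJECT_ID}/jobs/{job['id']}/retry",
        headers=HEADERS,
    )
```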

See the section below if the artifacts already disappeared and you need to regenerate them.

I need to regenerate the artifacts

There are two ways to regenerate the artifacts:

  1. Submit a new pipeline run, either by pushing to your branch or by clicking the Run pipeline button on your MR. This method executes the full pipeline, which means testing will run again and the MR will be blocked until it’s finished.

  2. Sequentially retry all pipeline jobs up to (and including) the publish stage. This can be done on a per-stage basis, i.e. retrying all prepare or build jobs at once. Stages are shown as columns in the pipeline view; see the sketch after this list.
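
A minimal sketch of the second option, with assumed stage names (check the columns of your pipeline view for the real ones):

```python
import requests

GITLAB = "https://gitlab.com/api/v4"
HEADERS = {"PRIVATE-TOKEN": "<your-access-token>"}  # placeholder
PROJECT_ID = "<project-id>"  # placeholder
PIPELINE_ID = 276757970  # pipeline whose artifacts expired
STAGES = ["prepare", "build", "publish"]  # assumed stage order

jobs = requests.get(
    f"{GITLAB}/projects/{PROJECT_ID}/pipelines/{PIPELINE_ID}/jobs",
    headers=HEADERS,
    params={"per_page": 100},
).json()

for stage in STAGES:
    for job in jobs:
        if job["stage"] == stage:
            requests.post(
                f"{GITLAB}/projects/{PROJECT_ID}/jobs/{job['id']}/retry",
                headers=HEADERS,
            )
    # In a real script you would wait for the stage to finish before
    # retrying the next one, since later jobs depend on earlier artifacts.
```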
