Frequently Asked Questions

Have a specific question and no time to read the detailed docs?
  1. I got an email about a failed pipeline, how do I find the failure?
  2. What is {realtime,automotive}_check and what should I do when it fails?
  3. How do I send built kernels to partners?
  4. A test failed and I don’t understand why!
  5. Why is my test/pipeline taking so long?
  6. How do I retry a job?
  7. How do I retry a pipeline?
  8. Steps for developers to follow to get a green check mark?
  9. How to customize test runs?
  10. I need to keep the artifacts for longer
  11. I need to regenerate the artifacts
  12. DataWarehouse links return 404!
  13. Where do I reach out about any problems?

I got an email about a failed pipeline, how do I find the failure?

Failed email example

Click on the pipeline ID in the email (in the “Pipeline #276751415 triggered” part in the example). This will bring you to the generic pipeline view:

Multi-project pipeline view

Click on the right arrow to unpack the pipeline:

Full multi-project pipeline

Note that you may have to scroll to the side to see the full pipeline visualisation and find the failures!

Any failed jobs will be marked with a red cross mark. Click on them and follow the output to find out what happened. If the job output is not sufficient, complete logs are available in the job artifacts, which you can browse in the web UI or download for local use:

Links to retrieve artifacts

For more information, see the detailed debugging guide.

What is {realtime,automotive}_check and what should I do when it fails?

These checks are in place to give the real-time and automotive kernel teams a heads-up about conflicts with their kernel branches. No action from developers is required at this stage, and failures are not blocking. The real-time and automotive teams will contact you if they need to follow up.

How do I send built kernels to partners?

Open the publish job of your desired architecture.

On the right side, there will be an area with the job summary and GitLab artifacts, similar to this example from a build job:

Links to retrieve artifacts

Click on the Browse button to see the artifact listing.

If the pipeline uses GitLab artifacts: The kernel artifacts are available in the artifacts directory. If this directory is empty, the pipeline is using S3. Follow the steps below.

If the pipeline uses S3 artifacts: The artifacts-meta.json file present in the GitLab artifacts contains a link to the kernel artifacts under the s3_browse_url key.

The same link is available at the bottom of the GitLab job logs:

Links to retrieve S3 artifacts

A static index page is available at the location linked by the s3_index_url key.

The s3_* keys and the artifact link in the job output are not available if the pipeline uses GitLab artifacts.
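If you need to extract these links in a script rather than by clicking through the UI, the metadata file can be parsed with any JSON library. A minimal sketch: the key names (`s3_browse_url`, `s3_index_url`) come from the description above, but the sample content and URLs are invented for illustration.

```python
import json

# Sample artifacts-meta.json content; the URLs are made up, only the
# key names come from the FAQ above.
sample = """
{
    "s3_browse_url": "https://s3.example.com/browse/pipeline-123/",
    "s3_index_url": "https://s3.example.com/index/pipeline-123/index.html"
}
"""

meta = json.loads(sample)
# Pipelines that use GitLab artifacts do not have the s3_* keys at all,
# so use .get() to fall back to None instead of raising a KeyError.
browse_url = meta.get("s3_browse_url")
index_url = meta.get("s3_index_url")
print(browse_url)
```

Using `.get()` keeps the same script working for both GitLab-artifact and S3-artifact pipelines: a `None` result simply means the artifacts live in the GitLab `artifacts` directory instead.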

Once you have the correct location, click through the artifacts structure to find your desired files. For kernel builds, you’ll most likely want to go to artifacts/repo/<KERNEL_VERSION>. Download the packages you’re interested in.

Once you have retrieved the packages, forward them to partners using the same process you used before.

A test failed and I don’t understand why!

If result checking is enabled (the last job of the pipeline is check-kernel-results), follow the DataWarehouse link in the job console output. All test logs are available there, as well as contact information for the test maintainers:

Test details in DW

If result checking is not enabled (e.g. during CVE process), links to test logs and maintainer information will be printed in the test job:

Waived failed test information in the job logs

You can contact the test maintainers if you can’t figure out the failure reason from the logs yourself.

Why is my test/pipeline taking so long?

CKI runs tests across a large pool of dedicated systems. While systems are constantly added to this pool, there are occasionally times when the number of queued jobs and pipelines exceeds its capacity (for example, at the end of a release cycle). CKI updates and infrastructure failures may also affect completion times. As a result, you may occasionally experience delays in job or pipeline completion.

If your job or pipeline is waiting for resources, please be patient while it works through the queue. If you are concerned that it is no longer responding, you can contact the CKI team for help.

How do I retry a job?

Jobs can be retried from both the pipeline overview and specific job views. In the pipeline overview, you can click on the circled double arrows to retry specific jobs:

Full multi-project pipeline

If you already have a specific job open, click on the Retry button on the right side:

Job details
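Jobs can also be retried without the web UI through the GitLab REST API (`POST /projects/:id/jobs/:job_id/retry`). The sketch below only composes the request so the endpoint shape is visible; the project and job IDs are placeholders, and the actual call (with a personal access token) is left commented out.

```python
from urllib.request import Request

# Placeholders -- substitute your own instance URL, project and job IDs.
GITLAB_URL = "https://gitlab.com"
PROJECT_ID = 12345
JOB_ID = 67890

# The job retry endpoint of the GitLab REST API is
# POST /projects/:id/jobs/:job_id/retry, authenticated via PRIVATE-TOKEN.
req = Request(
    f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/jobs/{JOB_ID}/retry",
    headers={"Private-Token": "<your_access_token>"},
    method="POST",
)

# urllib.request.urlopen(req) would perform the actual retry;
# it is deliberately not called here.
print(req.get_method(), req.full_url)
```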

How do I retry a pipeline?

Go into the Pipelines tab on your merge request and click on the green Run pipeline button:

Run pipeline

Steps for developers to follow to get a green check mark?

  1. Look into the failure as outlined in the first step.
  2. If the failure is caused by your changes, push a fixed version of the code.
  3. If the failure reason turns out to be unrelated to your changes, the failure needs to be waived:
    1. If you are not working on a CVE, submit the failure (or ask the test maintainer to do so) as a new known issue in DataWarehouse. Afterwards, restart the kernel-result stage at the end of the pipeline to force result reevaluation. There is no need to rerun the testing or even the complete pipeline!
    2. If you are working on a CVE, explain the situation in a comment on your merge request. You can still look into DataWarehouse for known issues yourself, however the automated detection is disabled due to security concerns.

How to customize test runs?

Modify the configuration in .gitlab-ci.yml (or .gitlab-ci-private.yml in case of CVE work) in the top directory of the kernel repository. The list of all supported configuration options (“pipeline variables”) is available in the configuration documentation.
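As an illustration, such a customization could look like the fragment below. The variable name is a placeholder only; the actual supported pipeline variables are listed in the configuration documentation.

```yaml
# .gitlab-ci.yml (or .gitlab-ci-private.yml for CVE work)
# "some_pipeline_variable" is a hypothetical name -- consult the
# configuration documentation for the real options.
variables:
  some_pipeline_variable: "value"
```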

Don’t forget to revert your customizations when marking the MR as ready!

I need to keep the artifacts for longer

If you know the default 6 weeks is not enough for your case, you can proactively prolong the lifetime of the artifacts. Before the artifacts disappear, retry the publish jobs in the pipeline. This keeps the build artifacts for another 6 weeks from the time of the retry. This is only possible once: the publish job depends on the previous pipeline jobs, and those have the same lifetime.
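The retention arithmetic described above can be sketched with a quick calculation; the dates below are invented examples, and only the 6-week retention period comes from the text.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(weeks=6)  # default artifact lifetime from the FAQ

# Example dates, chosen for illustration only.
pipeline_finished = datetime(2022, 1, 1)
original_expiry = pipeline_finished + RETENTION  # 2022-02-12

# Retrying the publish job shortly before expiry restarts the clock once:
publish_retried = datetime(2022, 2, 10)
new_expiry = publish_retried + RETENTION
print(new_expiry.date())
```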

See the section below if the artifacts already disappeared and you need to regenerate them.

I need to regenerate the artifacts

There are two ways to regenerate the artifacts:

  1. Submit a new pipeline run, either by pushing into your branch or by clicking the Run pipeline button on your MR. This method will execute the full pipeline, which means testing will run again and the MR will be blocked until it’s finished.

  2. Sequentially retry all pipeline jobs up to (and including) the publish stage. This can be done on a per-stage basis, i.e. retrying all prepare or build jobs at once. Stages are shown as columns in the pipeline view.

DataWarehouse links return 404!

See: DataWarehouse error 404

Last modified August 25, 2022: Update docs to include automotive checks (dd0a4e8)