Frequently Asked Questions
- Where do I reach out about any problems?
- I got an email about a failed pipeline, how do I find the failure?
- What is realtime_check and what should I do when it fails?
- How do I send built kernels to partners?
- A test failed and I don’t understand why!
- How do I retry a job?
- How do I retry a pipeline?
- Steps for developers to follow to get a green check mark?
- How to customize test runs?
- I need to keep the artifacts for longer
- I need to regenerate the artifacts
I got an email about a failed pipeline, how do I find the failure?
Click on the pipeline ID in the email (in the “Pipeline #276757970 triggered” part in the example). This will bring you to the generic pipeline view:
Click on the right arrow to unpack the pipeline:
Note that you may have to scroll to the side to see the full pipeline visualisation and find the failures!
Any failed jobs will be marked with a red cross mark. Click on them and follow the output to find out what happened. If the job output is not sufficient, complete logs are available in the job artifacts, which you can browse in the web UI or download for local use:
For more information, see the detailed debugging guide.
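If you prefer the command line, the job log can also be pulled through GitLab's jobs API instead of the web UI. A minimal sketch, assuming the instance is gitlab.com, a personal access token in the GITLAB_TOKEN environment variable, and made-up project/job IDs:

```shell
# Build the GitLab API v4 URL for a job's console log ("trace").
# The project ID and job ID are placeholders for illustration.
job_trace_url() {
  echo "https://gitlab.com/api/v4/projects/$1/jobs/$2/trace"
}

# With a valid token in $GITLAB_TOKEN, this dumps the log and highlights
# likely failure lines (uncomment to run against a real job):
# curl -s -H "PRIVATE-TOKEN: $GITLAB_TOKEN" "$(job_trace_url 12345 67890)" \
#   | grep -i -C2 'error\|fail'
```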
What is realtime_check and what should I do when it fails?
The checks are in place to give the real time kernel team a heads up about conflicts with the real time kernel branches. No action from the developers is required at this stage and failures are not blocking. The real time kernel team will contact you if they need to follow up.
How do I send built kernels to partners?
Open the publish job of your desired architecture. On the right side, there will be an area with the job summary and artifacts, similar to this example:
Click on the Download button at the bottom to download the artifacts locally. Extract the zipped archive and pick either the directory with the kernel repository, or specific binaries from it. You can forward these to partners the same way you used to forward builds before.
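The same artifacts archive is also reachable over the GitLab API, which can be handy for scripting the handoff. A sketch with placeholder project and job IDs, assuming gitlab.com and a token in GITLAB_TOKEN:

```shell
# Build the GitLab API v4 URL for downloading a job's artifacts archive (zip).
job_artifacts_url() {
  echo "https://gitlab.com/api/v4/projects/$1/jobs/$2/artifacts"
}

# Download and extract the archive (uncomment to run against a real job):
# curl -sL -H "PRIVATE-TOKEN: $GITLAB_TOKEN" \
#   "$(job_artifacts_url 12345 67890)" --output artifacts.zip
# unzip artifacts.zip -d artifacts/
```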
A test failed and I don’t understand why!
If result checking is enabled (the last job of the pipeline is kernel-results), follow the DataWarehouse link in the job console output. You can find all test logs available there, as well as contact information of the test maintainers:
If result checking is not enabled (e.g. during CVE process), links to test logs and maintainer information will be printed in the test job:
You can contact the test maintainers if you can’t figure out the failure reason from the logs yourself.
How do I retry a job?
Jobs can be retried from both the pipeline overview and specific job views. In the pipeline overview, you can click on the circled double arrows to retry specific jobs:
If you already have a specific job open, click on the Retry button on the job page:
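Retrying a single job also works without the web UI, through the jobs API. A sketch using placeholder IDs, assuming gitlab.com and a token in GITLAB_TOKEN:

```shell
# Build the GitLab API v4 URL for retrying one job (it's a POST endpoint).
job_retry_url() {
  echo "https://gitlab.com/api/v4/projects/$1/jobs/$2/retry"
}

# Uncomment to actually retry a real job:
# curl -s -X POST -H "PRIVATE-TOKEN: $GITLAB_TOKEN" "$(job_retry_url 12345 67890)"
```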
How do I retry a pipeline?
Go into the Pipelines tab on your merge request and click on the green Run pipeline button:
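The equivalent of the Run pipeline button in the API is creating a new pipeline on your branch. A sketch with a placeholder project ID and branch name, assuming gitlab.com and a token in GITLAB_TOKEN:

```shell
# Build the GitLab API v4 URL for creating a new pipeline on a given ref.
new_pipeline_url() {
  echo "https://gitlab.com/api/v4/projects/$1/pipeline?ref=$2"
}

# Uncomment to trigger a real pipeline on your branch:
# curl -s -X POST -H "PRIVATE-TOKEN: $GITLAB_TOKEN" \
#   "$(new_pipeline_url 12345 my-branch)"
```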
Steps for developers to follow to get a green check mark?
- Look into the failure as outlined in the first step.
- If the failure is caused by your changes, push a fixed version of the code.
- If the failure reason turns out to be unrelated to your changes, the failure
needs to be waived:
- If you are not working on a CVE, submit the failure (or ask the test maintainer to do so) as a new known issue in DataWarehouse. Afterwards, restart the kernel-results stage at the end of the pipeline to force result reevaluation. There is no need to rerun the testing or even the complete pipeline!
- If you are working on a CVE, explain the situation in a comment on your merge request. You can still look into DataWarehouse for known issues yourself, however the automated detection is disabled due to security concerns.
How to customize test runs?
Modify the configuration file (or its CVE counterpart in case of CVE work) in the top directory of the kernel repository. The list of all supported configuration options (“pipeline variables”) is available in the pipeline documentation.
Don’t forget to revert your customizations when marking the MR as ready!
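Pipeline variables can also be passed when triggering a pipeline through the API, which avoids committing temporary customizations. A sketch; the variable name MY_OPTION, the project ID, and the branch are placeholders, and the token is assumed to be in GITLAB_TOKEN:

```shell
# Emit the curl form arguments that set one pipeline variable (key/value).
pipeline_var_args() {
  printf '%s\n' "--form variables[][key]=$1 --form variables[][value]=$2"
}

# Trigger a pipeline with a custom variable (uncomment to run for real):
# curl -s -X POST -H "PRIVATE-TOKEN: $GITLAB_TOKEN" \
#   $(pipeline_var_args MY_OPTION on) \
#   "https://gitlab.com/api/v4/projects/12345/pipeline?ref=my-branch"
```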
I need to keep the artifacts for longer
If you know the default 6 weeks is not enough for your case, you can proactively prolong the lifetime of the artifacts. Before the artifacts disappear, retry the publish jobs in the pipeline. This will keep the build artifacts for another 6 weeks since the retry. This is only possible to do once - the job depends on the previous pipeline jobs and those have the same lifetime.
See the section below if the artifacts already disappeared and you need to regenerate them.
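To check how long a job's artifacts will still be around, the single-job API endpoint returns an artifacts_expire_at timestamp in its JSON response. A sketch with placeholder IDs, assuming gitlab.com and a token in GITLAB_TOKEN:

```shell
# Build the GitLab API v4 URL for a single job; the JSON response includes
# an "artifacts_expire_at" field with the artifact expiry time.
job_info_url() {
  echo "https://gitlab.com/api/v4/projects/$1/jobs/$2"
}

# Uncomment to query a real job and pull out the expiry timestamp:
# curl -s -H "PRIVATE-TOKEN: $GITLAB_TOKEN" "$(job_info_url 12345 67890)" \
#   | grep -o '"artifacts_expire_at":"[^"]*"'
```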
I need to regenerate the artifacts
There are two ways to regenerate the artifacts:
Submit a new pipeline run, either by pushing into your branch or by clicking the Run pipeline button on your MR. This method will execute the full pipeline, which means testing will run again and the MR will be blocked until it’s finished.
Sequentially retry all pipeline jobs up to (and including) the publish stage. This can be done on a per-stage basis, i.e. retrying all build jobs at once. Stages are visually defined as columns in the pipeline view.
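The per-stage retry can be scripted as well: list the pipeline's jobs via the API, filter them by stage, and retry each one. A sketch assuming gitlab.com, jq being installed, a token in GITLAB_TOKEN, and a placeholder project ID (the pipeline ID reuses the example from above):

```shell
# Build the GitLab API v4 URL listing a pipeline's jobs; each entry in the
# JSON response carries "id" and "stage" fields.
pipeline_jobs_url() {
  echo "https://gitlab.com/api/v4/projects/$1/pipelines/$2/jobs"
}

# Retry every job of the build stage in one go (uncomment to run for real):
# for job in $(curl -s -H "PRIVATE-TOKEN: $GITLAB_TOKEN" \
#     "$(pipeline_jobs_url 12345 276757970)" \
#     | jq -r '.[] | select(.stage == "build") | .id'); do
#   curl -s -X POST -H "PRIVATE-TOKEN: $GITLAB_TOKEN" \
#     "https://gitlab.com/api/v4/projects/12345/jobs/$job/retry"
# done
```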