Contributing Tests

How to contribute a test to CKI

There are various ways to contribute tests to CKI. LTP and kselftests are among the many test suites included in CKI; for a full list, visit our public tests repo. Check the test stages overview to determine how and when you want to trigger your testing.



Standalone Tests

  • If you want to contribute a standalone test to CKI, there is an example that can be used as a template.
  • Please file a PR to add the test to our public tests repo


Q: Which test should I onboard?

  • Tier0/Tier1 level tests (unit testing/basic smoke testing)
  • Tests should be stable and pass reliably (known failures are masked)
  • All functional areas and related features for each Kernel SST should have matching test case(s)

Q: It’s impossible to include matching tests for every kernel patch; how can we work towards this goal? For example, some features/bug fixes are SanityOnly tested for several reasons: there may be no functional way to test the patch, no hardware that implements the feature, a dependency on another patch, etc.

  • Specify all functional areas and features for your kernel subsystem; a future test case may become possible later, e.g. when hardware becomes available in beaker. Otherwise, identify the gaps so they can be revisited later; non-OtherQA feature patches are the priority.
  • If possible, include a high level ‘generic’ smoke test which may not target a new feature but will test for regressions.
  • We have several test cases which trigger on all patches, e.g. ltp-lite, so at a bare minimum all patches will still be tested in the pipeline, even without targeted testing.

Q: How do I limit my test to a specific arch/tree?

  • The CKI pipeline is multi-arch and tests across multiple trees (rhel7/rhel8/ark/upstream); if a test is not suitable for a given arch/tree, we can exclude it in kpet-db.

Q: How can I ensure my test is only run on supported hardware in CKI?

  • The hardware can be filtered using the hostRequires XML element in the beaker job, e.g. <device description="%ipmi%" op="like"/>. If the machine can be filtered in beaker, then we can filter it in kpet-db.

  • We can also target a group of hard-coded machines grouped by <or/> statements; beaker will select the first available machine.
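For example, such an <or/> grouping might look like this in the job XML (a sketch of Beaker's hostRequires syntax; the hostnames are placeholders):

```xml
<hostRequires>
  <!-- beaker selects the first available machine from this group -->
  <or>
    <hostname op="=" value="host1.example.com"/>
    <hostname op="=" value="host2.example.com"/>
  </or>
</hostRequires>
```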

Q: My test will only run on specific hardware which enables a feature and is not included in the general beaker pool; how can I run my test case in CKI? There are a few options:

  • If only one machine is available, it can be dedicated to the CKI team to avoid long queue times. Please update the machine ACL to give our keytab account (beaker/cki-team-automation) permission to provision the machine. Change the system type from ‘Machine’ to either ‘Resource’ or ‘Prototype’; this limits the machine to testing only when it is explicitly requested via hostRequires for a specific test case.
  • If more than one machine is available, we can create a pool of machines in beaker; the ACL will need to be updated to give our keytab access to the beaker pool. When a test triggers, the first available machine will be selected.
  • On-boarding a CI system which has access to special hardware: test CKI kernels or patches in your own CI environment (e.g. Platform-CI) by triggering off a UMB message for patches/builds; CKI can listen and report results back.

Q: My test is only supported in a multi-host environment

  • Multi-host support is now enabled in kpet if hardware is readily available in the public beaker pool
  • If multi-host testing requires specific hardware (e.g. specific NIC card) which is either limited or not available in the public beaker pool, please see On-boarding a CI system statement above.

Q: I have no idea what beaker is or how to get my bash script to run in beaker

  • Step-by-step instructions are listed below to beakerize your script, or reach out to the #kernelci IRC channel

Q: Is it possible to test a feature which is only available in an upstream kernel?

  • You can create a condition in your script that only runs the test when the running kernel is > $ker_ver; there are many examples in the CKI GitLab repo
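For instance, such a version guard might be sketched with sort -V (the version and variable values here are illustrative, not from a specific CKI test):

```shell
# Skip the test unless the running kernel is at least $ker_ver.
ker_ver="5.10"
running=$(uname -r | cut -d- -f1)   # strip the release suffix, e.g. 5.14.0-284 -> 5.14.0
lowest=$(printf '%s\n%s\n' "$ker_ver" "$running" | sort -V | head -n1)
if [ "$lowest" != "$ker_ver" ]; then
    echo "SKIP: kernel $running is older than $ker_ver"
    exit 0
fi
echo "kernel $running is new enough; running the test"
```

The sort -V comparison treats version strings numerically per component, so 5.10 correctly sorts before 5.14.0.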

Steps to onboard a test to CKI

Step 1: File an MR against cki-project/kernel-tests

(Skip to step 2 if the test is blocked from open sourcing or the test case already exists)

This enables us to test against upstream kernels and provide feedback in a public forum. If the test cannot be open sourced, we can fetch it internally within Red Hat and this step can be skipped.

  1. Fork kernel-tests

  2. Copy the kernel-tests example and adjust the files as necessary:

    • Main test script. Make sure you source the beaker environment as described in the [beakerlib documentation] if using beakerlib libraries.
    • metadata: Contains the test metadata, including GPL license header and test dependencies/repos.
    • Includes steps to manually run the test
  3. Follow upstream guidelines to ensure you’re not exposing sensitive/private data. See the cpu die test suite as an example of how to integrate existing bash scripts into the repository.

  4. When copying existing tests from a private repo, you have to remove the time limit. For older-style metadata tests, remove the line containing TestTime from the Makefile; for newer-style metadata tests, remove the line containing max_time from the metadata file. Note the value, though, because you’ll need it in Step 2.
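For example, the time limit could be stripped like this (hypothetical file locations; adjust the paths to where the copied test lives):

```shell
# Older-style metadata: drop the TestTime line from the Makefile.
sed -i '/TestTime/d' Makefile

# Newer-style metadata: drop the max_time line from the metadata file.
sed -i '/max_time/d' metadata
```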

  5. Before filing a PR in kernel-tests, you can test it in Beaker by fetching the test source from your fork/branch; substitute your values for $test_name, $user, $branch, and $test_location:

    <task name="$test_name" role="STANDALONE">
      <fetch url="https://gitlab.com/$user/kernel-tests/-/archive/$branch/kernel-tests-$branch.tar.gz#$test_location"/>
    </task>
  6. Once passing reliably, file a PR in the GitLab kernel-tests repo. If you are new to GitLab, see the section below.

Step 2: Enable test in kpet-db

Enable your test in kpet-db; this will tell the pipeline when and how to trigger your testing:

  1. Fork kpet-db

  2. Then cd to the kpet-db directory and create a branch:

    cd kpet-db
    git checkout -b $onboard_test
  3. Copy the [acpitable] test as an example. For a kernel area, create the new directory under kpet-db/cases/$kernelsubsystem/$testcase; for a userspace package, create the directory under kpet-db/cases/packages/$rpm/$testcase.

    Adjust the kpet parameters in index.yaml. Some common variables are listed below; please see kpet-db/index.yaml for examples and documentation.

    • url_suffix: directory path used to auto-generate the URL fetched from GitLab if the test was ported successfully; otherwise, list the beaker task name in the name field. If the test is run internally, url_suffix is omitted; please see nr-diff for an example
    • description: description of the test case
    • or, and, not: These keywords can be used to limit where the test should run:
      • trees: e.g. upstream, rhel79-z, rhel8
      • arches: e.g. ppc, ppc64le, s390x, x86_64, aarch64
    • sources: kernel patch patterns which should trigger the testing
    • waived: This is required for new tests; it onboards them in a waived state, meaning failures are ignored and not sent to kernel maintainers. Add ‘waived: True’ under the case name parameter. This state will be removed once the test has proven stable in the pipeline.
    • max_duration_seconds: the maximum time the test may run. If it runs longer, the test is terminated and an infrastructure error is recorded. This should be copied from the test’s original time limit if the test was copied from a private repository.
    • set: How to categorize the test, for example if both kt0 and stor are added, it will run for official builds (gating) and also when the storage test set is specified, e.g. for storage git trees or in Brew build NVRs.
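    Put together, a hypothetical case entry might combine these parameters as follows (all names and values here are illustrative only; consult kpet-db/index.yaml for the authoritative schema):

```yaml
# Hypothetical kpet-db case entry; values are illustrative only.
my-smoke-test:
    description: Basic smoke test for an example subsystem
    url_suffix: example/my-smoke-test
    waived: True                  # new tests start waived; failures are not reported
    max_duration_seconds: 3600    # copied from the original TestTime/max_time value
    set: stor
    trees:
        not: rhel7                # skip trees where the test is unsupported
    arches:
        or: [x86_64, aarch64]     # limit to supported architectures
    sources:
        or:
            - drivers/example/.*  # kernel patch paths that trigger this test
```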
  4. The final step is to add the main test to kpet-db/index.yaml

  5. File a merge request from your fork with the proposed changes. Please ask a project member with privileges to trigger the bot for testing; a few minutes after filing, you will see the bot attach a comment to the MR explaining how to test out your changes.

  6. Once merged, you can join the #kernelci IRC channel for questions

GitLab Pull Request Workflow for kernel-tests repo

  1. Create a fork from kernel-tests:

    1. create a GitLab user account and add your ssh keys in your settings

    2. click on ‘fork’ button (upper right) on kernel-tests

    3. clone your forked repo via ssh:

      git clone git@gitlab.com:$user/kernel-tests.git
  2. File a PR:

    1. change to the forked repo checkout:

      cd into/cloned/fork-repo
    2. if you already have a fork, sync it with upstream to avoid conflicts (see section below)

    3. best practice is to file the PR from a branch

      git checkout -b $branch_name
    4. make your changes

    5. add the changed files

      git add <file(s)>
    6. commit your changes

      git commit -m "summary of changes"
    7. push your changes

      git push origin $branch_name
    8. File your Merge Request. You should see an option to file a Merge Request in the Web UI. Compare your change from your fork’s branch to upstream’s main branch and file the PR.

  3. Sync fork with upstream:

    1. Set the remote URL for the upstream repo if it is not defined already

      git remote add upstream https://gitlab.com/cki-project/kernel-tests.git
    2. Check out your main branch in your fork

      git checkout main
    3. Fetch the latest changes from upstream (cki-project)

      git fetch upstream
    4. Merge upstream changes into fork

      git merge upstream/main
    5. Push changes back to fork

      git push
Last modified May 7, 2021: Add explicit docs about customizing CI runs for MRs (d2a28c8)