There are various ways to contribute tests to CKI. LTP and kselftests are among the many test suites included in CKI; for a full list of test suites, visit our public tests repo. Check the test stages overview to determine how and when you want to trigger your testing.
- LTP Test source lives at the Linux Test Project
- Contact: Send questions to the mailing list firstname.lastname@example.org. For full contact information, including IRC, see the LTP wiki
- Proposing patches: You can submit Pull Requests to the LTP project on GitHub.
- Documentation: The LTP wiki has plenty of documentation to get you started: a step-by-step tutorial for creating a new C test, guidelines, and other documentation.
- The kernel contains a set of “self tests” under the tools/testing/selftests/ directory
- See Contributing to kselftests for instructions to add tests
- You can find more details at the kselftests wiki
- Contact: Send questions to the mailing list email@example.com or join the #linux-kselftest IRC channel on freenode
- If you want to contribute a standalone test to CKI, there is an example that can be used as a template.
- Please file a PR to add the test to our public tests repo
Q: Which test should I onboard?
- Tier0/Tier1 level tests (unit testing/basic smoke testing)
- Tests should be stable and passing reliably (known failures are masked)
- All functional areas and related features for each Kernel SST should have matching test case(s)
Q: It’s impossible to include matching tests for every kernel patch; how can we work towards this goal? For example, some features/bug fixes are SanityOnly tested for several reasons: there may be no functional way to test the patch, hardware implementing the feature may be unavailable, the patch may depend on another patch, etc.
- Specify all functional areas and features for your kernel subsystem; a test case may become possible at a later point, e.g. when hardware becomes available in beaker. Otherwise, identify the gaps so they can be revisited later; non-OtherQA feature patches are the priority.
- If possible, include a high level ‘generic’ smoke test which may not target a new feature but will test for regressions.
- We have several test cases which trigger on all patches, e.g. ltp-lite. So at a bare minimum, all patches will still be tested in the pipeline, even without targeted testing.
Q: How do I limit my test to a specific arch/tree?
- The CKI pipeline is multi-arch and tests across multiple trees (rhel7/rhel8/ark/upstream); if a test is not suitable for a given arch/tree, we can exclude it in kpet-db.
Q: How can I ensure my test is only run on supported hardware in CKI?
The hardware can be filtered using the hostRequires XML element in the beaker job, e.g.
<device description="%ipmi%" op="like"/>
If the machine can be filtered in beaker, then we can filter it in kpet-db.
We can also target a group of hard-coded machines grouped by <or/> statements; beaker will select the first available machine.
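For illustration, the two filtering styles might look like this inside a beaker job's hostRequires element (the hostnames are hypothetical placeholders; the device filter matches the example above):

```xml
<!-- Style 1: filter by device, selecting any machine whose device
     description matches the pattern -->
<hostRequires>
  <device description="%ipmi%" op="like"/>
</hostRequires>

<!-- Style 2: pin to a hard-coded group of machines; beaker picks the
     first one that is available -->
<hostRequires>
  <or>
    <hostname op="=" value="machine1.example.com"/>
    <hostname op="=" value="machine2.example.com"/>
  </or>
</hostRequires>
```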
Q: My test will only run on specific hardware which enables a feature and is not included in the general beaker pool; how can I run my test case in CKI? There are a few options:
- If only one machine is available, it can be dedicated to the CKI team to avoid long queue times. Please update the machine ACL to give our keytab account (beaker/cki-team-automation) permission to provision the machine. Change the system type from ‘Machine’ to either ‘Resource’ or ‘Prototype’; this will limit the machine to testing only when it’s dedicated in hostRequires for a specific test case.
- If more than one machine is available, we can create a pool of machines in beaker, the ACL will need to be updated to give our keytab access to the beaker pool. If a test triggers, the first available machine will be selected.
- On-boarding a CI system which has access to special hardware: Test CKI kernels or patches in your own CI environment (e.g. Platform-CI) by triggering off a UMB message for patches/builds; CKI can listen and report results back.
Q: My test is only supported in a multi-host environment
- Multi-host support is now enabled in kpet if hardware is readily available in the public beaker pool
- If multi-host testing requires specific hardware (e.g. specific NIC card) which is either limited or not available in the public beaker pool, please see On-boarding a CI system statement above.
Q: I have no idea what beaker is or how to get my bash script to run in beaker
- Step-by-step instructions to beakerize your script are listed below, or reach out to the CKI team for help
Q: Is it possible to test a feature which is only available in an upstream kernel?
- You can create a condition in your script to check whether the running kernel is newer than $ker_ver; there are many examples in the CKI GitLab repo
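A minimal sketch of such a version gate, assuming GNU coreutils sort -V is available (the kver_ge helper name and the 5.10 threshold are made up for illustration):

```shell
#!/bin/bash
# kver_ge MIN CURRENT -> succeeds when CURRENT is at least MIN.
# sort -V orders version strings; if MIN sorts first (or is equal),
# the current kernel is new enough.
kver_ge() {
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# Hypothetical threshold: the first kernel that carries the feature under test.
MIN_KVER="5.10"
# Strip the distro suffix, e.g. "6.5.7-200.fc38.x86_64" -> "6.5.7"
running=$(uname -r | cut -d- -f1)

if kver_ge "$MIN_KVER" "$running"; then
    echo "kernel $running >= $MIN_KVER: feature test would run"
else
    echo "kernel $running < $MIN_KVER: feature test would be skipped"
fi
```

In a real test you would replace the echo branches with the feature test and a skip/exit, respectively.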
Steps to onboard a test to CKI
Step 1: File an MR cki-project/kernel-tests
(Skip to step 2 if the test is blocked from open sourcing or the test case already exists)
This enables us to test against upstream kernels and provide feedback in a public forum. If the test cannot be open sourced, we can fetch it internally within Red Hat and this section can be skipped.
Copy the kernel-tests example and adjust the files as necessary:
runtest.sh: Main test script. Make sure you source the beaker environment as described in the beakerlib documentation if using BeakerLib libraries.
metadata: Contains the test metadata, including GPL license header and test dependencies/repos.
README.md: Includes steps to manually run the test
Follow upstream guidelines to ensure you’re not exposing sensitive/private data. See the cpu die test suite as an example of how to integrate existing bash scripts into runtest.sh.
When copying existing tests from a private repo, you have to remove the time limit. For older-style tests, remove the time-limit line from the Makefile; for newer-style tests, remove it from the metadata file. Note the value before removing it, because you’ll need it for Step 2.
Before filing a PR in kernel-tests, you can test it in Beaker by fetching the test source from your fork/branch; substitute your values for $test_name, $user, $branch, and $test_location:
<task name="$test_name" role="STANDALONE">
  <fetch url="https://gitlab.com/$user/kernel-tests/-/archive/$branch/kernel-tests-$branch.zip#$test_location"/>
</task>
Once passing reliably, file a PR in the GitLab kernel-tests repo. If you are new to GitLab, see the section below.
Step 2: Enable test in kpet-db
Enable your test in kpet-db; this will tell the pipeline when and how to trigger your testing:
Then cd to the kpet-db directory and create a branch:
cd kpet-db
git checkout -b $onboard_test
Copy the acpitable test as an example. For a kernel area, create the new directory under
kpet-db/cases/$kernelsubsystem/$testcase; for a userspace package, create the directory under
Adjust the kpet parameters in index.yaml. Some common variables are listed below; please see kpet-db/index.yaml for examples and documentation.
url_suffix: directory path used to auto-generate the URL fetched from GitLab if ported successfully; otherwise list the beaker task name in the name field. If the test is run internally, url_suffix is omitted; please see nr-diff for an example
description: description of the test case
or, and, not: These keywords can be used to limit where the test should run:
- trees: e.g. upstream, rhel79-z, rhel8
- arches: e.g. ppc, ppc64le, s390x, x86_64, aarch64
sources: kernel patch patterns which should trigger the testing
waived: This is required for new tests; it onboards them in a waived state, meaning failures are ignored and will not be sent to kernel maintainers. Add ‘waived: True’ under the case name parameter. This state will be removed once we have proven the test is stable in the pipeline.
max_duration_seconds: maximum time the test should run. If it runs longer, the test will be terminated and an infrastructure error is recorded. This should be copied from the time limit in the test, if the test was copied from a private repository.
set: How to categorize the test. For example, if both the gating and stor sets are added, the test will run for official builds (gating) and also when the storage test set is specified, e.g. for storage git trees or in Brew build NVRs.
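Putting the variables above together, a case entry might look roughly like this. This is only a sketch: the case name, paths, and patterns are hypothetical, and the exact nesting may differ, so check kpet-db/index.yaml for the authoritative schema.

```yaml
# Illustrative kpet-db case entry; all values are placeholders.
mytest:                        # hypothetical case name
  description: Smoke test for the example subsystem
  url_suffix: example/mytest   # directory path in kernel-tests
  max_duration_seconds: 3600   # copied from the test's old time limit
  waived: True                 # required for new tests until proven stable
  trees:
    not: rhel79-z              # skip this tree
  arches:
    not:
      or: [ppc, s390x]         # skip these arches
  sources:
    or:
      - drivers/example/.*     # patch patterns that trigger the test
```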
The final step is to add the main test to kpet-db/index.yaml.
File a merge request from your fork with the proposed changes. Please ask a project member with privileges to trigger the bot for testing; a few minutes after filing, you will see the bot attach a comment to the MR explaining how to test your changes.
Once merged, you can join #kernelci irc channel for questions
GitLab Pull Request Workflow for kernel-tests repo
Create a fork from kernel-tests:
create a GitLab user account and add your ssh keys in your settings
click on ‘fork’ button (upper right) on kernel-tests
clone your forked repo via ssh:
git clone git@gitlab.com:$user/kernel-tests.git
File a PR:
change to the forked repo checkout:
cd kernel-tests
if you already have a fork, sync it with upstream to avoid conflicts (see section below)
best practice is to file the PR from a branch
git checkout -b $branch_name
make your changes
add the changed files
git add <file(s)>
commit your changes
git commit -m "summary of changes"
push your changes
git push origin $branch_name
File your Merge Request. You should see an option to file a Merge Request in the web UI. Compare your change from your fork’s branch to upstream’s main branch and file the PR.
Sync fork with upstream:
Set remote url for upstream repo if not defined already
git remote add upstream https://gitlab.com/cki-project/kernel-tests.git
Check out your main branch in your fork
git checkout main
Fetch the latest changes from upstream (cki-project)
git fetch upstream
Merge upstream changes into fork
git merge upstream/main
Push changes back to fork
git push origin main