# GitLab Pipeline

## Overview
CKI GitLab pipelines roughly come in two flavors: pipelines that build kernels from source, and pipelines that download prebuilt kernels from Koji/Copr.
## Building kernels from source
The following figure provides a slightly simplified overview of a CKI GitLab pipeline for a CentOS Stream 9 merge request where kernels are built from the sources in the merge request:
Different colors represent different architectures of the underlying machines. The machines are either provided by AWS EC2 (“AWS” with single outline on the diagram) or by an on-premise data center (“DC” with double outline on the diagram).
## Using prebuilt kernels
For comparison, this is a pipeline for a prebuilt kernel from Koji:
Again, machines are either provided by AWS EC2 (“AWS” with single outline on the diagram) or by an on-premise data center (“DC” with double outline on the diagram).
## Pipeline jobs
While the GitLab pipelines run in the various branches of the CKI pipeline projects, the actual pipeline code comes from the pipeline-definition repository.
Depending on various factors, the jobs in a given pipeline will be different:
- kernel built from source or already prebuilt via Koji
- supported architectures, native compilation or native tools
- retriggered pipelines for CI testing using artifacts from previous pipelines
## Retriggered pipelines
Depending on the pipeline type (`internal`, `public`, `ofa`), retriggered pipelines share most of their infrastructure with production pipelines:

| | internal | public | ofa |
|---|---|---|---|
| DC GitLab runner configurations | shared | shared | shared |
| AWS GitLab runner configurations | split | split | split |
| launch templates | split | shared | shared |
| GitLab runner machines | split | shared | shared |
| VPC subnets | shared | shared | shared |
| S3 buckets | shared | shared | shared |
Infrastructure has been split where necessary to allow for the testing of launch template changes via retriggered pipelines.
### DC GitLab runner configurations
Currently, all Docker-based GitLab runner configurations hosted on static machines in the data center are shared between retriggered and production pipelines. In practice, this means that e.g. the `pipeline-test-runner` and `staging-pipeline-test-runner` tags are served by the same GitLab runner configuration.
These configurations could be split to e.g. allow experimentation with the Docker configuration. This would require additional changes to the `gitlab-runner-config` script in `deployment-all` to allow separate deployment of staging and production configurations.
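As an illustration, a shared Docker-based runner configuration might look roughly like the following `config.toml` fragment (all names, URLs, and the image are placeholders, not the actual CKI deployment; note that runner tags are assigned at registration time and do not appear in `config.toml`):

```toml
concurrent = 10

[[runners]]
  name = "dc-docker-runner"     # placeholder name
  url = "https://gitlab.com/"   # placeholder GitLab instance
  token = "REDACTED"
  executor = "docker"
  [runners.docker]
    image = "quay.io/example/builder:latest"  # placeholder default image
```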
### AWS GitLab runner configurations
All docker-machine-based GitLab runner configurations hosted on AWS EC2 machines are split between retriggered and production pipelines. In practice, this means that e.g. the `pipeline-createrepo-runner` and `staging-pipeline-createrepo-runner` tags are served by different GitLab runner configurations.
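For comparison, a docker-machine-based runner configuration might look roughly like this hedged sketch (all values are placeholders, not the actual CKI deployment); splitting retriggered and production pipelines means maintaining two such `[[runners]]` entries, each registered under its own tag:

```toml
[[runners]]
  name = "aws-createrepo-runner"  # placeholder name
  url = "https://gitlab.com/"     # placeholder GitLab instance
  token = "REDACTED"
  executor = "docker+machine"
  [runners.machine]
    MachineDriver = "amazonec2"
    MachineName = "worker-%s"
    MachineOptions = [
      "amazonec2-region=us-east-1",        # placeholder region
      "amazonec2-instance-type=m5.large",  # placeholder instance type
    ]
    IdleCount = 0
```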
### Launch templates
The properties of the workers launched by the docker-machine-based GitLab runner configurations are determined by the associated launch templates. For internal pipelines, separate launch templates are used for retriggered and production pipelines. In practice, this means that e.g. the `pipeline-createrepo-runner` tag will spawn workers based on the `arr-cki.prod.lt.internal-general-worker` launch template, while the `staging-pipeline-test-runner` tag will use the `arr-cki.staging.lt.internal-general-worker` launch template.
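The tag-to-template mapping above follows a simple naming convention. The following Python sketch illustrates it (the prefix-based convention is an assumption derived from the two examples above, not the actual deployment code):

```python
def launch_template(tag: str, pipeline_type: str = "internal") -> str:
    """Map a runner tag to a launch template name (illustrative only)."""
    # Tags with a "staging-" prefix belong to retriggered pipelines.
    env = "staging" if tag.startswith("staging-") else "prod"
    return f"arr-cki.{env}.lt.{pipeline_type}-general-worker"

print(launch_template("pipeline-createrepo-runner"))
# arr-cki.prod.lt.internal-general-worker
print(launch_template("staging-pipeline-test-runner"))
# arr-cki.staging.lt.internal-general-worker
```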
The current setup allows testing changes to the launch templates by retriggering internal pipelines. Most of these changes should apply equally well to the other pipeline types. Nevertheless, the launch templates could also be split for the other pipeline types.
### GitLab runner machines
The AWS EC2 machines hosting the docker-machine-based GitLab runner configurations are only split for internal pipelines. In practice, this means that the `pipeline-createrepo-runner` and `staging-pipeline-createrepo-runner` tags are handled by GitLab runners on different AWS EC2 machines.
The current setup allows testing changes to the EC2 machine setup by retriggering internal pipelines. Most of these changes should apply equally well to the machines for the other pipeline types.
Nevertheless, the AWS EC2 machines hosting the docker-machine-based GitLab runners could also be split for the other pipeline types. For `ofa` pipelines, this would require two additional service accounts for the VPN connections, as these cannot be shared across machines.
### VPC subnets
Currently, the same VPC subnets are used for the dynamically spawned workers of retriggered and production pipelines. In practice, this means that e.g. the `pipeline-createrepo-runner` and `staging-pipeline-createrepo-runner` tags result in workers that share the same VPC subnets.
The subnets could also be split to further separate the workers for production pipelines from the workers for retriggered pipelines. This would avoid interference e.g. in the case of subnets running out of IP addresses.
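To make the IP exhaustion concern concrete, the usable worker capacity of a subnet can be estimated with a short Python sketch (the CIDR below is hypothetical; AWS reserves five addresses in every subnet):

```python
import ipaddress

AWS_RESERVED_PER_SUBNET = 5  # network, router, DNS, future use, broadcast

def usable_ips(cidr: str) -> int:
    """Number of addresses available for workers in an AWS subnet."""
    return ipaddress.ip_network(cidr).num_addresses - AWS_RESERVED_PER_SUBNET

print(usable_ips("10.0.0.0/24"))  # 251 workers at most per /24 subnet
```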
### S3 buckets
Currently, the same S3 buckets are used for retriggered and production pipelines. In practice, this means that e.g. retriggered pipelines share their `ccache` with the production pipelines.
The S3 buckets could also be split to further separate production pipelines from retriggered pipelines. This might require bot or pipeline changes to keep short pipelines (`tests_only=true`) working.