*: bring our own sandbox root
This change removes the build container and replaces it with a
Bazel-built Fedora 37 sysroot which is bind-mounted into the Bazel
sandbox using --sandbox_add_mount_pair. The tools/bazel wrapper script
automatically (re-)generates the sysroot when needed.
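For illustration only (the real paths and the generated file differ),
the mount boils down to rc lines of this shape, one pair per directory:

    # sketch: bind directories of the generated sysroot over the usual
    # host paths inside the sandbox (the flag format is source:target)
    build --sandbox_add_mount_pair=/path/to/sysroot/usr:/usr
    build --sandbox_add_mount_pair=/path/to/sysroot/lib64:/lib64
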
Both Bazelisk and Bazel's native wrapper automatically run the
tools/bazel script, which means that our build should now work without
extra steps on any machine with a working Bazelisk setup and
unprivileged user namespaces.
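A minimal sketch of that hand-off (not the actual script): Bazelisk
invokes tools/bazel and points $BAZEL_REAL at the downloaded binary,
so the wrapper can do its setup work and then exec it:

    #!/usr/bin/env bash
    # sketch only: (re-)generate the sysroot if missing or stale, then
    # hand off to the real Bazel binary provided by Bazelisk.
    set -euo pipefail
    # ... sysroot (re-)generation would go here ...
    exec "${BAZEL_REAL}" "$@"
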
This fixes all kinds of weirdness caused by the previous podman setup
("bazel run"/container pushes, log access, assorted podman bugs,
the IDE plugin breaking for any non-Monogon workspaces...).
Using the sandbox hash as an action var also ensures that the cache
is invalidated whenever the ambient environment changes. Previously,
Bazel did not invalidate build steps when any host dependency changed.
To my knowledge, this was the only remaining cause for stale builds.
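Schematically (the variable name here is made up, not the one we use):

    # sketch: any change to the sysroot contents changes this value,
    # and with it the cache key of every action
    build --action_env=SYSROOT_HASH=<hash over the sysroot inputs>
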
It also means we cannot depend on the host toolchain since it
won't be accessible in the sandbox, and anything that inspects the
host during the analysis stage will fail. This currently means that
running on a non-Fedora host won't work - we fix this next.
All RPMs are pinned and the sysroot is fully reproducible.
Once we upgrade to Bazel 5.x, we can take it further by enabling
--experimental_use_hermetic_linux_sandbox and removing the remaining
host paths from the sandbox for full hermeticity.
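Roughly, as a sketch of that future setup (untested here):

    # Bazel 5.x: actions start from an empty root plus our mounts, so
    # nothing from the host leaks in anymore
    build --experimental_use_hermetic_linux_sandbox
    build --sandbox_add_mount_pair=/path/to/sysroot/usr:/usr
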
In a follow-up, we can clean up the CI image to only contain the
minimum dependencies needed for Bazelisk and the agent.
Existing IntelliJ users need to remove the -Dbazel.bep.path flag
from their VM options.
Handbook/Rust rules are disabled temporarily to keep CI green
(requires a more recent rules_rust version).
Change-Id: I1f17d57d985ff9d749bf3359f259d8ef52247c18
Reviewed-on: https://review.monogon.dev/c/monogon/+/1033
Tested-by: Jenkins CI
Reviewed-by: Lorenz Brun <lorenz@monogon.tech>
diff --git a/build/ci/README.md b/build/ci/README.md
index 45dcd3f..7eda3d6 100644
--- a/build/ci/README.md
+++ b/build/ci/README.md
@@ -3,7 +3,7 @@
Monogon has a work-in-progress continuous integration / testing pipeline.
Because of historical reasons, some parts of this pipeline are defined in a
-separate non-public repository that is managed by Monogon Labs.
+separate non-public repository that is managed by Monogon SE.
In the long term, the entire infrastructure code relating to this will become
public and part of the Monogon repository. In the meantime, this document
@@ -15,26 +15,16 @@
`//build/ci/Dockerfile` describes a 'builder image'. This image contains a
stable, Fedora-based build environment in which all Monogon components should
-be built. It has currently two uses:
+be built. The Jenkins-based CI uses the Builder image as a base to run Jenkins agents.
-1. The build scripts at
- `//scripts/{create_container.sh,destroy_container.sh,/bin/bazel}`. These are
- used by developers to run Bazel against a controlled environment to develop
- Monogon code. The `create_container.sh` script builds the Builder image and
- starts a Builder container. The `bin/bazel` wrapper script launches Bazel in
- it. The `destroy_container.sh` script cleans everything up.
+A Monogon SE developer runs `//build/ci/build_ci_image`, which builds the
+Builder Image and pushes it to a container registry. Then, in another
+repository, that image is used as a base to overlay a Jenkins agent on top,
+and then used to run all Jenkins actions.
-2. The Jenkins based CI uses the Builder image as a base to run Jenkins agents.
- A Monogon Labs developer runs `//build/ci/build_ci_image`, which builds the
- Builder Image and pushes it to a container registry. Then, in another
- repository, that image is used as a base to overlay a Jenkins agent on top,
- and then used to run all Jenkins actions.
-
-As Monogon evolves and gets better build hermeticity using Bazel toolchains,
-the need for a Builder image should subdue. Meanwhile, using the same image
-ensures that we have the maximum possible reproducibility of builds across
-development and CI machines, and gets us a base level of build hermeticity and
-reproducibility.
+The Builder image contains only the basic dependencies required to bootstrap
+the sandbox sysroot and run the CI agents. All other build-time dependencies
+are managed by Bazel via [third_party/sandboxroot](../../third_party/sandboxroot).
CI usage
--------
@@ -45,7 +35,7 @@
attempt to take over the CI system, or change the CI scripts themselves to skip
tests.
-Currently, all Monogon Labs employees (thus, the core Monogon development team)
+Currently, all Monogon SE employees (thus, the core Monogon development team)
are marked as 'trusted users'. There is no formal process for community
contributors to become part of this group, but we are more than happy to
formalize such a process when needed, or appoint active community contributors
@@ -61,4 +51,4 @@
`//build/ci/jenkins-presubmit.groovy` script on them.
Currently, the Jenkins instance is not publicly available, and thus CI logs are
-not publicly available either. This will be fixed very soon.
+not publicly available either. This will be fixed soon.
diff --git a/build/ci/jenkins-presubmit.groovy b/build/ci/jenkins-presubmit.groovy
index ccb75d0..54d89be 100644
--- a/build/ci/jenkins-presubmit.groovy
+++ b/build/ci/jenkins-presubmit.groovy
@@ -2,6 +2,9 @@
// executed by Jenkins for presubmit checks, ie. checks that run against an
// open Gerrit change request.
+// TODO(leo): remove once CI image has been updated.
+def bazelisk_build = "bazel --noworkspace_rc run @go_sdk//:bin/go -- install github.com/bazelbuild/bazelisk@v1.15.0"
+
pipeline {
agent none
options {
@@ -21,8 +24,9 @@
gerritCheck checks: ['jenkins:test': 'RUNNING'], message: "Running on ${env.NODE_NAME}"
echo "Gerrit change: ${GERRIT_CHANGE_URL}"
sh "git clean -fdx -e '/bazel-*'"
- sh "JENKINS_NODE_COOKIE=dontKillMe bazel test //..."
- sh "JENKINS_NODE_COOKIE=dontKillMe bazel test -c dbg //..."
+ sh bazelisk_build
+ sh "JENKINS_NODE_COOKIE=dontKillMe ~/go/bin/bazelisk test //..."
+ sh "JENKINS_NODE_COOKIE=dontKillMe ~/go/bin/bazelisk test -c dbg //..."
}
post {
success {
@@ -45,8 +49,9 @@
gerritCheck checks: ['jenkins:gazelle': 'RUNNING'], message: "Running on ${env.NODE_NAME}"
echo "Gerrit change: ${GERRIT_CHANGE_URL}"
sh "git clean -fdx -e '/bazel-*'"
- sh "JENKINS_NODE_COOKIE=dontKillMe bazel run //:gazelle-update-repos"
- sh "JENKINS_NODE_COOKIE=dontKillMe bazel run //:gazelle -- update"
+ sh bazelisk_build
+ sh "JENKINS_NODE_COOKIE=dontKillMe ~/go/bin/bazelisk run //:gazelle-update-repos"
+ sh "JENKINS_NODE_COOKIE=dontKillMe ~/go/bin/bazelisk run //:gazelle -- update"
script {
def diff = sh script: "git status --porcelain", returnStdout: true
diff --git a/build/toolchain/cc_toolchain_config.bzl b/build/toolchain/cc_toolchain_config.bzl
index 7647021..cebf18f 100644
--- a/build/toolchain/cc_toolchain_config.bzl
+++ b/build/toolchain/cc_toolchain_config.bzl
@@ -195,7 +195,7 @@
"is_glibc": attr.bool(default = True),
"host_includes": attr.string_list(
default = [
- "/usr/lib/gcc/x86_64-redhat-linux/11/include/",
+ "/usr/lib/gcc/x86_64-redhat-linux/12/include/",
"/usr/include",
],
),
diff --git a/build/toolchain/musl-host-gcc/sysroot/tarball.bzl b/build/toolchain/musl-host-gcc/sysroot/tarball.bzl
index c0631f8..c24c3f2 100644
--- a/build/toolchain/musl-host-gcc/sysroot/tarball.bzl
+++ b/build/toolchain/musl-host-gcc/sysroot/tarball.bzl
@@ -34,7 +34,7 @@
linux_headers = ctx.file.linux_headers
linux_headers_path = linux_headers.path
- compiler_headers_path = "lib/gcc/x86_64-redhat-linux/11/include"
+ compiler_headers_path = "lib/gcc/x86_64-redhat-linux/12/include"
musl_root = detect_root(ctx.attr.musl)
musl_files = ctx.files.musl