commit fe3d8fd61a81c09a1544d8cf1e13326c179c5972
author: Serge Bazanski <serge@monogon.tech>  Tue May 30 20:50:09 2023 +0200
committer: Serge Bazanski <serge@monogon.tech>  Tue Jun 06 12:11:51 2023 +0000
tree: d1803d9e307e5aea697c02fcb17e82e2c4fb605d
parent: 90a70a0e1cab83ba1601d355d07f285dff0d4d55
m/n/core/roleserve: rework cluster membership, reuse control plane connections

This reworks roleserver internals to simplify the handling of cluster membership state. The end goal is to allow reusing control plane gRPC connections across different components in a node, but the refactor goes a bit beyond that.

Ever since the introduction of the rpc resolver, the control plane startup problem has effectively been simplified: the resolver allows the rest of the system to dynamically switch between different gRPC endpoints for the control plane. This means that some of the existing complexity in the roleserver (which predates the resolver) can be thrown away.

Notably, we remove the ClusterMembership structure and replace it with two significantly simpler structures that represent two separate facts about the local node:

1. localControlPlane carries information about whether this node has a locally running control plane. It is only used by the statuspusher (to report whether the control plane is running) and by the Kubernetes control plane.

2. curatorConnection carries the credentials, resolver, and an open gRPC connection to the control plane, and is now the only roleserver EventValue used by the vast majority of roleserver runnables.

The resulting code, especially inside the control plane roleserver runnable, is now less complex, at the cost of a somewhat ugly refactor.

Change-Id: Idbe1ff2ac3bfb2d570bed040a2f78ccabb66caba
Reviewed-on: https://review.monogon.dev/c/monogon/+/1749
Tested-by: Jenkins CI
Reviewed-by: Lorenz Brun <lorenz@monogon.tech>
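To make the new shape concrete, here is a minimal Go sketch of the two structures the message describes. Only the type names (localControlPlane, curatorConnection) and their stated responsibilities come from the commit message; every field name and type below is an illustrative assumption, not the actual Metropolis code.

```go
// Illustrative sketch only; field names and types are assumptions,
// not the real Metropolis roleserver source.
package roleserve

import "google.golang.org/grpc"

// localControlPlane describes whether this node runs a control plane
// locally. Per the commit message, it is consumed only by the
// statuspusher and the Kubernetes control plane.
type localControlPlane struct {
	// running is a hypothetical field: true when a local control
	// plane instance is up on this node.
	running bool
}

// curatorConnection carries everything a runnable needs to talk to
// the control plane; most roleserver runnables watch this one value.
type curatorConnection struct {
	// credentials and resolver are hypothetical placeholders for the
	// node's credentials and the rpc resolver mentioned above.
	credentials any
	resolver    any
	// conn is the shared, already-open gRPC client connection, reused
	// across components instead of each runnable dialing its own.
	conn *grpc.ClientConn
}
```

Splitting the facts this way means a runnable that only needs to reach the control plane watches curatorConnection alone, and never has to care whether the control plane happens to run on the local node.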
This is the main repository containing the source code for the Monogon Platform.
This is pre-release software - take a look, and check back later!
Our build environment is self-contained and requires only minimal host dependencies:

- /dev/kvm (if you want to run tests)

Our docs assume that Bazelisk is available as bazel on your PATH.
Refer to SETUP.md for detailed instructions.
Build CLI and node image:
bazel build //metropolis/cli/dbg //:launch -c dbg
Launch an ephemeral test node:
bazel test //:launch -c dbg --test_output=streamed
Run a kubectl command while the test is running:
bazel-bin/metropolis/cli/dbg/dbg_/dbg kubectl describe node
Run full test suite:
bazel test -c dbg //...