| commit | 2d91aa323b5cb576b3c7749eedb91058be1f8f57 | |
|---|---|---|
| author | Serge Bazanski <serge@monogon.tech> | Mon Apr 25 13:29:31 2022 +0200 |
| committer | Sergiusz Bazanski <serge@monogon.tech> | Tue May 24 15:31:27 2022 +0000 |
| tree | 1d4a2a79f2f63cca28614df7f37fd044fd0844da | |
| parent | b354453656f82d0a38b3f2ed0d1ebf843c14d922 | |
curator: remove dispatch system
This vastly simplifies the curator by removing the dispatch and per-RPC
switch logic, replacing it with a gRPC server that is stopped outright
whenever the curator's leadership status changes.
The downside is the technical possibility of some 'stale' RPC handling,
where a leader/follower accepts a new RPC even though it's in the
process of switching its leadership.
The rationale for this change is:
1. Leadership-exclusive actions are guarded by the etcd leadership
   lock being held, so there's no chance that a long-pending RPC to a
   leader that just stepped down will cause split-brain scenarios.
2. We're moving away from follower proxying, and followers will
   instead just serve a 'who's the leader' RPC. It's okay for these to
   serve stale data (i.e., when the contacted follower should have
   already switched to being a leader, or to being a follower of
   another leader), as during leadership failover we expect clients to
   perform retry loops until a new leadership connection is
   established.
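A minimal Go sketch of the resulting shape (all names here, such as
Status, leadershipServe and the register helpers, are hypothetical
stand-ins for illustration, not the actual curator code):

```go
package curator

import (
	"context"
	"net"

	"google.golang.org/grpc"
)

// Status is a hypothetical stand-in for the curator's leadership status.
type Status struct{ Leader bool }

// Hypothetical registration helpers; the real curator would register its
// actual gRPC services here.
func registerLeaderAPI(srv *grpc.Server)   {} // full API
func registerFollowerAPI(srv *grpc.Server) {} // reduced 'who is the leader' API

// leadershipServe runs one gRPC server per leadership status: it only
// starts listening once a status is known, and stops the whole server
// whenever the status changes, instead of dispatching per RPC.
func leadershipServe(ctx context.Context, addr string, updates <-chan Status) error {
	st, ok := <-updates
	for ok {
		// The listener only exists once we know what we can serve, so a
		// successfully established connection is guaranteed a gRPC
		// response rather than a hang.
		lis, err := net.Listen("tcp", addr)
		if err != nil {
			return err
		}
		srv := grpc.NewServer()
		if st.Leader {
			registerLeaderAPI(srv) // leader-only actions remain guarded by the etcd lock
		} else {
			registerFollowerAPI(srv)
		}
		go srv.Serve(lis)

		// Block until leadership flips (or we're shutting down). In-flight
		// RPCs may briefly run under stale leadership, which the rationale
		// above argues is safe.
		select {
		case st, ok = <-updates:
		case <-ctx.Done():
			srv.Stop()
			return ctx.Err()
		}
		srv.Stop() // Stop also closes the listener.
	}
	return nil
}
```

The point of the shape is that correctness no longer depends on per-RPC
checks: the server's very existence encodes the leadership status, and
the etcd lock remains the real guard for leader-exclusive writes.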
Another downside (or perhaps upside) is that we don't start the listener
until we're ready to serve data, either the full API as a leader or a
reduced API as a follower. The downside is that clients will have to
retry connections until the leader is running, and that it might be
difficult to tell apart a node which isn't yet running the curator from
a broken node, or one that will not run the curator at all. On the other
hand, successfully establishing a connection means that we are sure to
get a gRPC response instead of a hang because the curator isn't yet ready
to serve.
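On the client side, that retry behavior might look like the following
sketch; dialCurator, the backoff interval and the per-attempt timeout
are assumptions for illustration, not actual Metropolis client code:

```go
package curator

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// dialCurator retries until a connection to a serving curator is
// established. Since the listener only appears once the curator is
// ready, a successful dial implies we will get real gRPC responses.
func dialCurator(ctx context.Context, addr string) (*grpc.ClientConn, error) {
	for {
		attemptCtx, cancel := context.WithTimeout(ctx, 2*time.Second)
		conn, err := grpc.DialContext(attemptCtx, addr,
			grpc.WithTransportCredentials(insecure.NewCredentials()), // sketch only; real nodes authenticate
			grpc.WithBlock(), // block until the connection is actually up, or the attempt times out
		)
		cancel()
		if err == nil {
			return conn, nil
		}
		// Nothing listening yet (node still starting, or failing over):
		// back off and retry, as the commit message expects clients to do.
		select {
		case <-time.After(time.Second):
		case <-ctx.Done():
			return nil, ctx.Err()
		}
	}
}
```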
Change-Id: I2ec35f00bce72f0f337e8e25e8c71f5265a7d8bb
Reviewed-on: https://review.monogon.dev/c/monogon/+/685
Reviewed-by: Lorenz Brun <lorenz@monogon.tech>
This is the main repository containing the source code for the Monogon Project.
This is pre-release software - feel free to look around, and check back later for our first release!
Our build environment requires a working Podman binary (your distribution should have one).
- Spinning up: `scripts/create_container.sh`
- Spinning down: `scripts/destroy_container.sh`
- Running commands: `scripts/run_in_container.sh <...>`
- Using Bazel via a wrapper script: `scripts/bin/bazel <...>` (add `scripts/bin` to your local `$PATH` for convenience)
This repository is compatible with the IntelliJ Bazel plugin, which enables full autocompletion for external dependencies and generated code. All commands run inside the container, and necessary paths are mapped into the container.
The following steps are necessary:
1. Install Google's Bazel plugin in IntelliJ. On IntelliJ 2020.3 or later, you need to install a beta release of the plugin.
2. Add the absolute path to your ~/.cache/bazel-monogon folder to your idea64.vmoptions (Help → Edit Custom VM Options), replacing /home/leopold with your own home directory, and restart IntelliJ:
   `-Dbazel.bep.path=/home/leopold/.cache/bazel-monogon`
3. Set "Bazel Binary Location" in Other Settings → Bazel Settings to the absolute path of scripts/bin/bazel. This is a wrapper that will execute Bazel inside the container.
4. Use File → Import Bazel project... to create a new project from .bazelproject.
After running the first sync, everything should now resolve in the IDE, including generated code.
- Launch the node: `scripts/bin/bazel run //:launch`
- Run a kubectl command: `scripts/bin/bazel run //metropolis/cli/dbg -- kubectl describe`
- Run tests: `scripts/bin/bazel test //...`