m/node: implement container networking ourselves

This change gets rid of the CNI mechanism for configuring container
networking in favour of a split approach: the network service is
extended with a gRPC workload network service which handles all of the
actual work, and a small library exposes just enough of go-cni's
interface to be a drop-in replacement in containerd, which then talks
to the workload network service.
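
To give a sense of scale, the go-cni surface such a library has to
cover looks roughly like the sketch below. The option and result types
here are simplified placeholders, not go-cni's real ones:

    // Approximate shape of the go-cni surface containerd's CRI plugin
    // consumes; all types are simplified placeholders.
    package cniproxy

    import "context"

    // PortMapping carries the hostPort/containerPort tuples containerd
    // passes through for hostPort support.
    type PortMapping struct {
        HostPort      int32
        ContainerPort int32
        Protocol      string
    }

    // NamespaceOpt attaches per-pod extras (port mappings, bandwidth,
    // ...) to a Setup or Remove call.
    type NamespaceOpt func(*attachment)

    type attachment struct {
        portMappings []PortMapping
    }

    // Result reports the addresses configured on the pod's interfaces.
    type Result struct {
        IPs []string
    }

    // CNI is the small interface a drop-in replacement has to provide.
    type CNI interface {
        // Setup configures networking for sandbox id whose network
        // namespace is mounted at path.
        Setup(ctx context.Context, id, path string, opts ...NamespaceOpt) (*Result, error)
        // Remove tears the sandbox networking down again.
        Remove(ctx context.Context, id, path string, opts ...NamespaceOpt) error
        // Status reports whether the network is ready for workloads.
        Status() error
    }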

This is a rather unconventional approach to doing things, as CNI
itself is already a pluggable interface. The reason for doing it this
way is that CNI's binary-execution interface has a huge spec which is
horrible to convert into decent Go types, and calling out to binaries
has inherent lifecycle, complexity and image size disadvantages. The
part of CNI that containerd actually uses is tiny, and its arguments
are well-specified and map to decent Go types. This approach also
avoids the whole CNI caching mechanism, which adds further unnecessary
complexity.
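
With that in mind, the library side mostly reduces to forwarding those
typed arguments to the workload network service over gRPC. A minimal
sketch follows, assuming a hypothetical Attach/Detach RPC pair with
made-up message types and simplified signatures; the actual service,
RPC and message names in the tree differ:

    // Sketch of the forwarding path from containerd into the workload
    // network service; the service, RPCs and messages are hypothetical.
    package cniproxy

    import (
        "context"

        "google.golang.org/grpc"
    )

    // AttachRequest and friends stand in for the proto messages
    // exchanged with the workload network service.
    type AttachRequest struct {
        SandboxId string // pod sandbox ID as passed by containerd
        NetnsPath string // bind-mounted network namespace of the sandbox
    }
    type AttachResponse struct {
        Ips []string // addresses assigned to the pod interface
    }
    type DetachRequest struct {
        SandboxId string
    }
    type DetachResponse struct{}

    // workloadNetworkClient stands in for the generated gRPC client.
    type workloadNetworkClient interface {
        Attach(ctx context.Context, in *AttachRequest, opts ...grpc.CallOption) (*AttachResponse, error)
        Detach(ctx context.Context, in *DetachRequest, opts ...grpc.CallOption) (*DetachResponse, error)
    }

    // Service is the go-cni-shaped shim handed to containerd.
    type Service struct {
        client workloadNetworkClient
    }

    // Setup delegates pod network setup to the workload network service
    // instead of exec'ing CNI plugin binaries.
    func (s *Service) Setup(ctx context.Context, id, netnsPath string) ([]string, error) {
        res, err := s.client.Attach(ctx, &AttachRequest{SandboxId: id, NetnsPath: netnsPath})
        if err != nil {
            return nil, err
        }
        return res.Ips, nil
    }

    // Remove tears the pod networking down again via the same service.
    func (s *Service) Remove(ctx context.Context, id string) error {
        _, err := s.client.Detach(ctx, &DetachRequest{SandboxId: id})
        return err
    }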

The reason for the split service model, instead of implementing
everything in cniproxy, is to allow for more complex logic and for
interfacing with the Monogon control plane from the workload network
service. It also makes it possible to offload the actual service to
things like DPUs.

Right now some ugliness is left in to keep this change self-contained.
Two obvious examples are the piping through of the pod network event
value and the exclusion of the first (non-network) IP from the IP
allocator. These will eventually go away, but are necessary to make
this work as a standalone change.

Change-Id: I46c604b7dfd58da9e6ddd0a29241680d25a2a745
Reviewed-on: https://review.monogon.dev/c/monogon/+/4496
Reviewed-by: Jan Schär <jan@monogon.tech>
Tested-by: Jenkins CI
diff --git a/metropolis/node/core/roleserve/roleserve.go b/metropolis/node/core/roleserve/roleserve.go
index 1f9604c..4c1f610 100644
--- a/metropolis/node/core/roleserve/roleserve.go
+++ b/metropolis/node/core/roleserve/roleserve.go
@@ -66,6 +66,8 @@
 	// Network is a handle to the network service, used by workloads.
 	Network *network.Service
 
+	PodNetwork *memory.Value[*clusternet.Prefixes]
+
 	// resolver is the main, long-lived, authenticated cluster resolver that is used
 	// for all subsequent gRPC calls by the subordinates of the roleserver. It is
 	// created early in the roleserver lifecycle, and is seeded with node
@@ -86,7 +88,6 @@
 	KubernetesStatus      memory.Value[*KubernetesStatus]
 	bootstrapData         memory.Value[*BootstrapData]
 	LocalRoles            memory.Value[*cpb.NodeRoles]
-	podNetwork            memory.Value[*clusternet.Prefixes]
 	clusterDirectorySaved memory.Value[bool]
 	localControlPlane     memory.Value[*localControlPlane]
 	CuratorConnection     memory.Value[*CuratorConnection]
@@ -141,7 +142,7 @@
 		curatorConnection: &s.CuratorConnection,
 
 		kubernetesStatus: &s.KubernetesStatus,
-		podNetwork:       &s.podNetwork,
+		podNetwork:       s.Config.PodNetwork,
 	}
 
 	s.rolefetch = &workerRoleFetch{
@@ -161,7 +162,7 @@
 		storageRoot: s.StorageRoot,
 
 		curatorConnection: &s.CuratorConnection,
-		podNetwork:        &s.podNetwork,
+		podNetwork:        s.Config.PodNetwork,
 		network:           s.Network,
 	}