metropolis: first-pass API for reconfiguring cluster

This implements management.ConfigureCluster. The API is built around
Protobuf FieldMasks, which are new to the Metropolis codebase (node
config mutation, by contrast, is performed via optional fields).
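
For illustration, field masks can be constructed and validated against
the message descriptor up front with fieldmaskpb.New (a client-side
sketch, not code from this change):

    // Build a mask selecting the Kubernetes label rules. New returns an
    // error if a path does not exist in the given message's descriptor.
    mask, err := fieldmaskpb.New(&cpb.ClusterConfiguration{},
        "kubernetes_config.node_labels_to_synchronize")
    if err != nil {
        return fmt.Errorf("building field mask: %w", err)
    }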

Whether this is the right approach is still up for discussion.
Alternatives considered are:

1. Always submit a full new config, providing the old one as a base.
   The downside is the conflicts that will spring up the moment
   multiple systems regularly mutate independent parts of the config.
   Additionally, this might lead to odd behaviour with clients that
   don't support newer versions of the config proto.
2. Use optional fields, as in the Node role code. However, this
   duplicates protos (one for the config state, one for the mutation
   request), and protobuf optionals are still somewhat unusual.
3. Provide individual requests for mutating each field (as with Node
   labels). This also results in a lot of boilerplate code.
4. Something akin to JSON Patch, but for protobufs; no such mechanism
   seems to exist.
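
For context, here is a rough sketch of how a server could apply such a
masked update via proto reflection. This is hypothetical code, not the
implementation in this change; for brevity it handles only top-level
paths, while nested paths such as
kubernetes_config.node_labels_to_synchronize would require recursing
into sub-messages:

    import (
        "fmt"

        "google.golang.org/protobuf/proto"
        "google.golang.org/protobuf/reflect/protoreflect"
        "google.golang.org/protobuf/types/known/fieldmaskpb"
    )

    // applyMask copies the fields named in mask from next onto cur,
    // leaving all other fields of cur untouched.
    func applyMask(cur, next proto.Message, mask *fieldmaskpb.FieldMask) error {
        // IsValid checks all paths (including nested ones) against the
        // message descriptor of cur.
        if !mask.IsValid(cur) {
            return fmt.Errorf("invalid mask paths: %v", mask.Paths)
        }
        fields := cur.ProtoReflect().Descriptor().Fields()
        for _, path := range mask.Paths {
            fd := fields.ByName(protoreflect.Name(path))
            if fd == nil {
                // A dotted path addresses a nested field; a real
                // implementation would recurse here.
                return fmt.Errorf("nested path %q not handled in this sketch", path)
            }
            if next.ProtoReflect().Has(fd) {
                cur.ProtoReflect().Set(fd, next.ProtoReflect().Get(fd))
            } else {
                // Unset in the new config: clear it in the current one.
                cur.ProtoReflect().Clear(fd)
            }
        }
        return nil
    }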

Change-Id: I42e5eabd42076e947f4bc8399b843e0e1fd48548
Reviewed-on: https://review.monogon.dev/c/monogon/+/3591
Tested-by: Jenkins CI
Reviewed-by: Tim Windelschmidt <tim@monogon.tech>
diff --git a/metropolis/test/e2e/suites/kubernetes/run_test.go b/metropolis/test/e2e/suites/kubernetes/run_test.go
index cf33126..80a8292 100644
--- a/metropolis/test/e2e/suites/kubernetes/run_test.go
+++ b/metropolis/test/e2e/suites/kubernetes/run_test.go
@@ -17,6 +17,7 @@
 	"time"
 
 	"github.com/bazelbuild/rules_go/go/runfiles"
+	"google.golang.org/protobuf/types/known/fieldmaskpb"
 	corev1 "k8s.io/api/core/v1"
 	kerrors "k8s.io/apimachinery/pkg/api/errors"
 	"k8s.io/apimachinery/pkg/api/resource"
@@ -206,10 +207,41 @@
 			"test.monogon.dev/foo":                     "bar",
 		}
 		if labels := getLabelsForNode(cluster.NodeIDs[1]); !want.Equals(labels) {
-			return fmt.Errorf("Node %s should have labels %s, has %s", cluster.NodeIDs[1], want, labels)
+			return fmt.Errorf("node %s should have labels %s, has %s", cluster.NodeIDs[1], want, labels)
 		}
 		return nil
 	})
+
+	// Reconfigure node label rules.
+	_, err = mgmt.ConfigureCluster(ctx, &apb.ConfigureClusterRequest{
+		BaseConfig: &cpb.ClusterConfiguration{
+			KubernetesConfig: &cpb.ClusterConfiguration_KubernetesConfig{
+				NodeLabelsToSynchronize: []*cpb.ClusterConfiguration_KubernetesConfig_NodeLabelsToSynchronize{
+					{Regexp: `^test\.monogon\.dev/`},
+				},
+			},
+		},
+		NewConfig: &cpb.ClusterConfiguration{
+			KubernetesConfig: &cpb.ClusterConfiguration_KubernetesConfig{},
+		},
+		UpdateMask: &fieldmaskpb.FieldMask{
+			Paths: []string{"kubernetes_config.node_labels_to_synchronize"},
+		},
+	})
+	if err != nil {
+		t.Fatalf("Could not update cluster configuration: %v", err)
+	}
+
+	ci, err := mgmt.GetClusterInfo(ctx, &apb.GetClusterInfoRequest{})
+	if err != nil {
+		t.Fatalf("Could not get cluster info")
+	}
+	// See if the config changed.
+	if rules := ci.ClusterConfiguration.KubernetesConfig.NodeLabelsToSynchronize; len(rules) != 0 {
+		t.Fatalf("Wanted 0 label rules in config after reconfiguration, have %d: %v", len(rules), rules)
+	}
+	// TODO: ensure new rules get applied, but that will require watching the cluster
+	// config for changes in the labelmaker.
 }
 
 // TestE2EKubernetes exercises the Kubernetes functionality of Metropolis.