commit aa6b7346a87a5512fbdd5b39db766000c0e10415
author:    Lorenz Brun <lorenz@nexantic.com>  Thu Dec 12 02:55:02 2019 +0100
committer: Lorenz Brun <lorenz@nexantic.com>  Thu Dec 12 02:55:02 2019 +0100
tree       8b7665934b854d4d2ee18e90a289752f8cd85942
parent     5e0bd2d43ab72cf4091e7689d02f95e07b1c1010
Attestation & Identity & Global Unlock & Enrolment

This changes the node startup sequence significantly. The following three startup procedures now replace the old setup/join mechanism:

* If no enrolment config is present, automatically bootstrap a new cluster and become master for it.
* If an enrolment config with an enrolment token is present, register with the NodeManagementService.
* If an enrolment config without an enrolment token is present, attempt a normal cluster unlock.

It also completely revamps the GRPC management services:

* NodeManagementService is a master-only service that deals with other nodes and has a cluster-wide identity.
* NodeService is only available in the unlocked state and is keyed with the node identity.
* ClusterManagement is now a master-only service that has been spun out of the main NMS, since the two have very different authentication models; it also deals with EnrolmentConfigs.

The TPM support library has also been extended with:

* Numerous integrity attestation and verification functions
* Built-in AK management
* Advanced policy-based authentication functionality

Also contains various enhancements to the network service to make everything work correctly in a proper multi-node environment. Lots of old code has been removed.

Test Plan: Passed a full manual test of all three startup modes (bootstrap, enrolment and normal unlock), including automated EnrolmentConfig generation and consumption, in a dual-node configuration on swtpm / OVMF.

Bug: T499
X-Origin-Diff: phab/D291
GitOrigin-RevId: d53755c828218b1df83a1d7ad252c7b3231abca8
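The three startup procedures described in the commit message amount to a simple decision on the enrolment config. A minimal Go sketch of that decision follows; the type and field names (`EnrolmentConfig`, `EnrolmentToken`, `selectStartupMode`) are hypothetical illustrations, not the actual identifiers used in the codebase:

```go
package main

import "fmt"

// EnrolmentConfig is a hypothetical stand-in for the on-disk enrolment
// configuration; the field name is an assumption.
type EnrolmentConfig struct {
	EnrolmentToken string // empty if no enrolment token was issued
}

// StartupMode mirrors the three startup procedures from the commit message.
type StartupMode int

const (
	ModeBootstrap StartupMode = iota // no enrolment config: bootstrap a new cluster, become master
	ModeEnrol                        // config with token: register with the NodeManagementService
	ModeUnlock                       // config without token: attempt a normal cluster unlock
)

// selectStartupMode picks the startup procedure based on whether an
// enrolment config exists and whether it carries a token.
func selectStartupMode(cfg *EnrolmentConfig) StartupMode {
	switch {
	case cfg == nil:
		return ModeBootstrap
	case cfg.EnrolmentToken != "":
		return ModeEnrol
	default:
		return ModeUnlock
	}
}

func main() {
	fmt.Println(selectStartupMode(nil))                                   // 0 (bootstrap)
	fmt.Println(selectStartupMode(&EnrolmentConfig{EnrolmentToken: "t"})) // 1 (enrol)
	fmt.Println(selectStartupMode(&EnrolmentConfig{}))                    // 2 (unlock)
}
```

This keeps the mode selection in one place, so each startup path can be implemented and tested independently.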
This is the monorepo storing all of nexantic's internal projects and libraries.
We assume a Fedora host system provisioned using rW, and IntelliJ as the IDE.
For better reproducibility, all builds are executed in containers.
Spinning up: scripts/create_container.sh
Spinning down: scripts/destroy_container.sh
Running commands: scripts/run_in_container.sh <...>
Running Bazel via a wrapper script: scripts/bin/bazel <...>
(add to your local $PATH for convenience)
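The wrapper's only job is to forward a Bazel invocation into the build container. A minimal sketch of what scripts/bin/bazel might look like follows; the container runtime (podman), the container name (nxt-build), and the NXT_CONTAINER variable are all assumptions for illustration, not the real script's contents:

```shell
#!/usr/bin/env sh
# Hypothetical sketch of scripts/bin/bazel: build the command line that
# runs bazel inside the build container, then execute it.

container_cmd() {
  # Assemble the containerized invocation; NXT_CONTAINER and the
  # "nxt-build" default are assumptions.
  echo "podman exec -it ${NXT_CONTAINER:-nxt-build} bazel $*"
}

# The real wrapper would exec this instead of printing it, e.g.:
#   exec podman exec -it "$NXT_CONTAINER" bazel "$@"
container_cmd build //...
```

Because the wrapper forwards all arguments verbatim, tools like the IntelliJ Bazel plugin can treat it as a drop-in Bazel binary.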
This repository is compatible with the IntelliJ Bazel plugin. All commands run inside the container, and necessary paths are mapped into the container.
We check the entire .ijwb project directory into the repository, which requires everyone to use the latest version of both IntelliJ and the Bazel plugin, but eliminates manual setup steps.
The following steps are necessary:
Install Google's official Bazel plugin in IntelliJ.
Add the absolute path to your ~/.cache/bazel-nxt folder to your idea64.vmoptions (Help → Edit Custom VM Options) and restart IntelliJ:
-Dbazel.bep.path=/home/leopold/.cache/bazel-nxt
(replace /home/leopold with your own home directory)
Set "Bazel Binary Location" in Other Settings → Bazel Settings to the absolute path of scripts/bin/bazel. This is a wrapper that will execute Bazel inside the container.
Open the .ijwb folder as an IntelliJ project.
Disable Vgo support for the project.
Run a non-incremental sync in IntelliJ.
The plugin will automatically resolve paths for generated files.
If you do not use IntelliJ, use the scripts/bazel_copy_generated_for_ide.sh script to copy generated files into your local checkout.