Add mesh VPN support to the CDH #763
A few comments on the first half of the PR
#[arg(short, long)]
pod_name: String,
#[arg(short, long)]
lighthouse_pub_ip: String,
I think this could maybe come from the CDH config file (with init-data coming soon) rather than be passed from the caller.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I like this idea. Here's an example from the repo. So you're thinking a new top-level key, e.g. "overlay-network"?
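For illustration, here is a minimal sketch of what an "overlay-network" section of the CDH config could deserialize into. The field names, the `from_lines` helper, and the key/value format are all invented for this sketch (the real CDH config would presumably go through its existing TOML/serde path); it only shows the shape of moving `pod_name` and `lighthouse_pub_ip` out of the CLI and into config:

```rust
use std::net::Ipv4Addr;

/// Hypothetical shape of an `overlay-network` config section.
#[derive(Debug)]
struct OverlayNetworkConfig {
    pod_name: String,
    // None => no lighthouse address was configured for this pod.
    lighthouse_pub_ip: Option<Ipv4Addr>,
}

impl OverlayNetworkConfig {
    /// Parse simple "key = value" lines; a stand-in for whatever
    /// config format CDH already uses.
    fn from_lines(text: &str) -> Self {
        let mut cfg = OverlayNetworkConfig {
            pod_name: String::new(),
            lighthouse_pub_ip: None,
        };
        for line in text.lines() {
            if let Some((k, v)) = line.split_once('=') {
                let (k, v) = (k.trim(), v.trim().trim_matches('"'));
                match k {
                    "pod_name" => cfg.pod_name = v.to_string(),
                    "lighthouse_pub_ip" => cfg.lighthouse_pub_ip = v.parse().ok(),
                    _ => {} // ignore keys this sketch doesn't know about
                }
            }
        }
        cfg
    }
}

fn main() {
    let cfg = OverlayNetworkConfig::from_lines(
        "pod_name = \"pod-a\"\nlighthouse_pub_ip = \"203.0.113.7\"",
    );
    println!("{cfg:?}");
}
```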
// FIXME These should be configurable
const LIGHTHOUSE_IP: Ipv4Addr = Ipv4Addr::new(192, 168, 100, 100);
const LIGHTHOUSE_MASK: Ipv4Addr = Ipv4Addr::new(255, 255, 255, 0);
Yeah, try to get these from the CDH config file. Otherwise we have sort of a single-use feature.
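One way to do that, sketched with made-up config accessor names: keep the current constants only as fallbacks and prefer values read from the config file. Everything here except `Ipv4Addr` and the two default addresses is an assumption for illustration:

```rust
use std::net::Ipv4Addr;

/// Resolve the lighthouse address and netmask, preferring configured
/// string values and falling back to the current hard-coded defaults.
/// The Option<&str> inputs stand in for looked-up CDH config entries.
fn lighthouse_addrs(ip: Option<&str>, mask: Option<&str>) -> (Ipv4Addr, Ipv4Addr) {
    let ip = ip
        .and_then(|s| s.parse().ok())
        .unwrap_or(Ipv4Addr::new(192, 168, 100, 100));
    let mask = mask
        .and_then(|s| s.parse().ok())
        .unwrap_or(Ipv4Addr::new(255, 255, 255, 0));
    (ip, mask)
}

fn main() {
    // A configured value wins; a missing or invalid entry falls back.
    let (ip, mask) = lighthouse_addrs(Some("10.99.0.1"), None);
    println!("{ip} {mask}");
}
```

Whether an invalid configured value should fall back silently (as here) or be a hard error is a policy question for the PR.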
}

// FIXME: kbs hard-coded to localhost is wrong. This should be based on
// ResourceUri? Where does it come from?
config file
/// - Ask trustee for its nebula credentials
/// - Start the nebula daemon.
pub async fn init(&self) -> Result<()> {
    let is_lighthouse: bool = self.lighthouse_ip.is_empty();
so we're going to use one of the pods as the lighthouse?
... yea. Thoughts? We looked at two designs: (1) the lighthouse is a coco pod (we went with this for now); (2) the lighthouse runs and lives alongside trustee.
I would do #2. Trustee should already be at some static ip known to all pods and it won't move or go down during the lifecycle of the workload. This would also allow you to more easily have two completely different workloads sharing the same overlay network.
I like the merits of #2. Maybe the thing to do is strip the ability to launch the lighthouse here (which will simplify things); then create a new issue and PR for lighthouse support in trustee. Thoughts?
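Under design #2 the "am I the lighthouse?" branch disappears: every pod is a plain node and a missing lighthouse address becomes a configuration error rather than a role signal. A small sketch of that simplification (the `MeshRole` type and `role_from_config` name are invented here, not part of the PR):

```rust
use std::net::Ipv4Addr;

/// With the lighthouse living alongside trustee, CDH only ever
/// configures ordinary nodes pointed at a known static address.
#[derive(Debug, PartialEq)]
enum MeshRole {
    Node { lighthouse_ip: Ipv4Addr },
}

fn role_from_config(lighthouse_ip: &str) -> Result<MeshRole, String> {
    // Contrast with the current `self.lighthouse_ip.is_empty()` check:
    // an empty or invalid address is now an error, not "I am the lighthouse".
    lighthouse_ip
        .parse()
        .map(|ip| MeshRole::Node { lighthouse_ip: ip })
        .map_err(|e| format!("lighthouse ip required: {e}"))
}

fn main() {
    println!("{:?}", role_from_config("198.51.100.2"));
}
```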
it could be cool to make these configs more object oriented. rather than just storing a string config, you could have a struct that stores a string and knows how to write itself to a certain place and such. not a requirement but could be nifty
I think keeping the config as one big string (as we have here, and as I think you're still suggesting) is good: it stays readable and easy to update, whether to change a default value or to adjust something after a nebula update. But wrapping it in a struct that manages how and where it gets written out (rather than burying those details in the NebulaMesh::init function) would be an improvement over how it is now.
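Roughly this shape, as a sketch: the struct name, constructor, and the tiny stand-in template below are all hypothetical, and the real template would be the full Nebula YAML config already in the PR:

```rust
use std::io::Write;
use std::path::Path;

/// Keep the readable template-as-string, but let a struct own the
/// rendering and write-out details instead of NebulaMesh::init.
struct NebulaConfig {
    rendered: String, // the full YAML config text, still a plain string
}

impl NebulaConfig {
    fn new(lighthouse_ip: &str) -> Self {
        // Tiny stand-in for the real, much larger config template.
        Self {
            rendered: format!("static_host_map:\n  lighthouse: {lighthouse_ip}\n"),
        }
    }

    /// The config object, not the caller, knows how to persist itself.
    fn write_to(&self, path: &Path) -> std::io::Result<()> {
        let mut f = std::fs::File::create(path)?;
        f.write_all(self.rendered.as_bytes())
    }
}

fn main() -> std::io::Result<()> {
    let cfg = NebulaConfig::new("192.0.2.10");
    let path = std::env::temp_dir().join("nebula-sketch.yml");
    cfg.write_to(&path)?;
    println!("wrote {}", path.display());
    Ok(())
}
```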
RFC issue is here
This PR is meant to hold the main guest functionality for overlay network support with Nebula. The main additions are in confidential-data-hub/overlay-network.
Related items not included in this PR: