UDN, IPAM: Use v1.multus-cni.io/default-network #69

Closed
oshoval wants to merge 1 commit

Conversation

@oshoval (Collaborator) commented Sep 17, 2024

What this PR does / why we need it:
In order to specify ipam-claim-reference for the primary network,
use v1.multus-cni.io/default-network instead of
k8s.ovn.org/primary-udn-ipamclaim.

The webhook adds:
v1.multus-cni.io/default-network: '[{"namespace":"default","name":"ovn-kubernetes","ipam-claim-reference":"vm-a.passtnet"}]'
(the ipam-claim-reference value is dynamic, of course).
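For illustration only, a minimal sketch of how a mutating webhook could build this annotation value. The struct below is a simplified stand-in for the multus network selection element (only the fields used here), and the helper name is an assumption, not the actual ipam-extensions code:

package main

import (
	"encoding/json"
	"fmt"
)

// networkSelectionElement is a simplified stand-in for the multus network
// selection element; only the fields relevant to this PR are modeled.
type networkSelectionElement struct {
	Namespace          string `json:"namespace,omitempty"`
	Name               string `json:"name"`
	IPAMClaimReference string `json:"ipam-claim-reference,omitempty"`
}

// buildDefaultNetworkAnnotation renders the value the webhook would set under
// the v1.multus-cni.io/default-network key for a given IPAMClaim name.
func buildDefaultNetworkAnnotation(ipamClaimName string) (string, error) {
	nses := []networkSelectionElement{{
		Namespace:          "default",
		Name:               "ovn-kubernetes",
		IPAMClaimReference: ipamClaimName,
	}}
	raw, err := json.Marshal(nses)
	if err != nil {
		return "", err
	}
	return string(raw), nil
}

func main() {
	value, err := buildDefaultNetworkAnnotation("vm-a.passtnet")
	if err != nil {
		panic(err)
	}
	fmt.Println(value)
	// [{"namespace":"default","name":"ovn-kubernetes","ipam-claim-reference":"vm-a.passtnet"}]
}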

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #

Special notes for your reviewer:
OVN-Kubernetes PR: ovn-org/ovn-kubernetes#4732
Both PRs need to be merged together, otherwise the tests (which currently exist only on the OVN side) will break.
We should not merge this one until the above PR is a merge candidate.

We need to create this NAD (because default-network points to a NAD), based on /etc/cni/net.d/10-ovn-kubernetes.conf:

cat <<EOF | oc apply -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ovn-kubernetes
spec:
  config: '{
    "cniVersion": "0.4.0",
    "name": "ovn-kubernetes",
    "type": "ovn-k8s-cni-overlay",
    "ipam": {},
    "dns": {},
    "runtimeConfig": {}
  }'
EOF

Release note:


@kubevirt-bot kubevirt-bot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. dco-signoff: yes Indicates the PR's author has DCO signed all their commits. labels Sep 17, 2024
@oshoval oshoval marked this pull request as draft September 17, 2024 08:49
@kubevirt-bot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign qinqon for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@oshoval (Collaborator, Author) commented Sep 17, 2024

Before the OVN changes I get this error:

  Normal   AddedInterface          4m33s                   multus   Add eth0 [10.244.0.14/24] from default/ovn-kubernetes
  Normal   AddedInterface          4m33s                   multus   Add net1 [] from default/primary-udn-kubevirt-binding
  Warning  FailedCreatePodSandBox  4m32s (x534 over 137m)  kubelet  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "ce24320a85970a3bafda49c9898482ea549633f9c6c2f4f224545f44ed51dc1d": plugin type="multus" name="multus-cni-network" failed (add): [blue-ns/virt-launcher-vm-a-2czxk/f5987ff9-c76d-4110-9363-1c2c45742e90:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[blue-ns/virt-launcher-vm-a-2czxk ce24320a85970a3bafda49c9898482ea549633f9c6c2f4f224545f44ed51dc1d network default NAD default] [blue-ns/virt-launcher-vm-a-2czxk ce24320a85970a3bafda49c9898482ea549633f9c6c2f4f224545f44ed51dc1d network default NAD default] failed to configure pod interface: container veth name provided (net2) already exists

Should I add "type":"noop" to the new NSE entry so it won't be created by multus?
(it treats it as a 3rd interface)
(The OVN changes won't help here, IIUC, because multus creates the interface before delegating additional tasks to the OVN-Kubernetes CNI.)

An additional question that might be important:
I also don't understand how it would help to add the sticky IP to the default interface, as done in this PR with
{"name":"ovn-kubernetes","namespace":"default","ipam-claim-reference":"vm-a.passtnet"},
when we bind the primary ovn-udn1 to the VM,
which is the interface that needs to have the sticky IP (unless passt will move the IP there).

@maiqueb (Collaborator) commented Sep 17, 2024

Before the OVN changes I get this error:

[error events quoted above]

Is this the first error it shows? I'm trying to ensure it didn't fail for something else first and then also failed to clean up the pod's sandbox, leaving a net2 interface behind, which causes follow-up CNI ADDs to also fail.

Should I add "type":"noop" to the new NSE entry so it won't be created by multus? (it treats it as a 3rd interface) (The OVN changes won't help here, IIUC, because multus creates the interface before delegating additional tasks to the OVN-Kubernetes CNI.)

An additional question that might be important: I also don't understand how it would help to add the sticky IP to the default interface, as done in this PR with {"name":"ovn-kubernetes","namespace":"default","ipam-claim-reference":"vm-a.passtnet"}, when we bind the primary ovn-udn1 to the VM, which is the interface that needs to have the sticky IP (unless passt will move the IP there).

Your NSE plumbs the default network interface into the pod, right? And multus will not invoke the cluster default network attachment.

The secondary network controller for the UDN should still kick in, compute the synthetic network selection element for the UDN, and append that to the list of NSEs that will be processed. Here is where you need to adjust the code: the IPAMClaim reference should be set in the synthetic network selection element, not in the default cluster network NSE.
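For illustration, a tiny sketch of the destination side described here, assuming the same simplified stand-in struct for the multus network selection element; the helper and the network names are hypothetical, not OVN-Kubernetes code:

package main

import "fmt"

// networkSelectionElement is a simplified stand-in for the multus network
// selection element; only the fields relevant to this discussion are modeled.
type networkSelectionElement struct {
	Name               string `json:"name"`
	IPAMClaimReference string `json:"ipam-claim-reference,omitempty"`
}

// setIPAMClaimOnSyntheticNSE attaches the claim reference to the synthetic
// NSE computed for the primary UDN, leaving the cluster default NSE untouched.
func setIPAMClaimOnSyntheticNSE(syntheticNSE *networkSelectionElement, claimRef string) {
	if claimRef != "" {
		syntheticNSE.IPAMClaimReference = claimRef
	}
}

func main() {
	defaultNSE := &networkSelectionElement{Name: "ovn-kubernetes"}     // cluster default network
	syntheticUDN := &networkSelectionElement{Name: "blue-ns/passtnet"} // hypothetical primary UDN NAD name

	setIPAMClaimOnSyntheticNSE(syntheticUDN, "vm-a.passtnet")

	fmt.Println(defaultNSE.IPAMClaimReference)   // "" - default NSE stays untouched
	fmt.Println(syntheticUDN.IPAMClaimReference) // vm-a.passtnet
}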

@oshoval (Collaborator, Author) commented Sep 17, 2024

Is this the first error it shows? I'm trying to ensure it didn't fail for something else first and then also failed to clean up the pod's sandbox, leaving a net2 interface behind, which causes follow-up CNI ADDs to also fail.

According to describe on the pod, yes.
Once I understand what you meant about the NSE, it might solve this, because I won't have this additional problematic NSE (when I didn't have it, the pod did spin up).

Events:
  Type     Reason                  Age   From               Message
  ----     ------                  ----  ----               -------
  Normal   Scheduled               10s   default-scheduler  Successfully assigned blue-ns/virt-launcher-vm-a-jwc88 to ovn-worker2
  Normal   AddedInterface          9s    multus             Add eth0 [10.244.2.11/24] from default/ovn-kubernetes
  Normal   AddedInterface          9s    multus             Add net1 [] from default/primary-udn-kubevirt-binding
  Warning  FailedCreatePodSandBox  8s    kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "118fff0589973422b5c9f921a8012840e896ddb2df81903bde3e6fe337d5d80f": plugin type="multus" name="multus-cni-network" failed (add): [blue-ns/virt-launcher-vm-a-jwc88/4ab0c4f8-2622-4965-83b7-8c205e66c2de:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[blue-ns/virt-launcher-vm-a-jwc88 118fff0589973422b5c9f921a8012840e896ddb2df81903bde3e6fe337d5d80f network default NAD default] [blue-ns/virt-launcher-vm-a-jwc88 118fff0589973422b5c9f921a8012840e896ddb2df81903bde3e6fe337d5d80f network default NAD default] failed to configure pod interface: container veth name provided (net2) already exists
'
  Normal   AddedInterface          6s  multus   Add eth0 [10.244.2.11/24] from default/ovn-kubernetes
  Normal   AddedInterface          6s  multus   Add net1 [] from default/primary-udn-kubevirt-binding
  Warning  FailedCreatePodSandBox  6s  kubelet  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d572bf702f8507b4e54b302fc45310e2885eab034c794c6c2e3a4316b3ab1c1b": plugin type="multus" name="multus-cni-network" failed (add): [blue-ns/virt-launcher-vm-a-jwc88/4ab0c4f8-2622-4965-83b7-8c205e66c2de:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[blue-ns/virt-launcher-vm-a-jwc88 d572bf702f8507b4e54b302fc45310e2885eab034c794c6c2e3a4316b3ab1c1b network default NAD default] [blue-ns/virt-launcher-vm-a-jwc88 d572bf702f8507b4e54b302fc45310e2885eab034c794c6c2e3a4316b3ab1c1b network default NAD default] failed to configure pod interface: container veth name provided (net2) already exists

@oshoval (Collaborator, Author) commented Sep 17, 2024

If I understood correctly (if I am wrong we can continue offline as we spoke, thanks):

ipam-ext should add only this:
v1.multus-cni.io/default-network: '[{ "name": "default/ovn-kubernetes" , "ipam-claim-reference":"vm-a.passtnet" }]'

OVN should take "ipam-claim-reference":"vm-a.passtnet" from it and copy it to the NSE of the primary interface that GetPodNADToNetworkMapping returns.
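A minimal sketch of that read side, under the same simplified NSE struct as above: parse the v1.multus-cni.io/default-network value (the JSON-list form used in this PR) and extract the ipam-claim-reference, which would then be copied onto the synthetic NSE of the primary UDN interface. The helper name and the exact hook point inside OVN-Kubernetes are assumptions for illustration:

package main

import (
	"encoding/json"
	"fmt"
)

// networkSelectionElement is a simplified stand-in for the multus network
// selection element; only the fields relevant to this PR are modeled.
type networkSelectionElement struct {
	Namespace          string `json:"namespace,omitempty"`
	Name               string `json:"name"`
	IPAMClaimReference string `json:"ipam-claim-reference,omitempty"`
}

// ipamClaimFromDefaultNetwork parses the v1.multus-cni.io/default-network
// annotation (JSON-list form) and returns the ipam-claim-reference carried by
// its first element, if any.
func ipamClaimFromDefaultNetwork(annotations map[string]string) (string, error) {
	value, ok := annotations["v1.multus-cni.io/default-network"]
	if !ok || value == "" {
		return "", nil
	}
	var nses []networkSelectionElement
	if err := json.Unmarshal([]byte(value), &nses); err != nil {
		return "", fmt.Errorf("failed to parse default-network annotation: %w", err)
	}
	if len(nses) == 0 {
		return "", nil
	}
	return nses[0].IPAMClaimReference, nil
}

func main() {
	podAnnotations := map[string]string{
		"v1.multus-cni.io/default-network": `[{ "name": "default/ovn-kubernetes" , "ipam-claim-reference":"vm-a.passtnet" }]`,
	}
	claim, err := ipamClaimFromDefaultNetwork(podAnnotations)
	if err != nil {
		panic(err)
	}
	// This value would then be set on the synthetic NSE computed for the
	// primary UDN interface (not on the cluster default network NSE).
	fmt.Println(claim) // vm-a.passtnet
}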

@maiqueb (Collaborator) commented Sep 17, 2024

If I understood correctly (if I am wrong we can continue offline as we spoke, thanks)

ipam-ext should add only this: v1.multus-cni.io/default-network: '[{ "name": "default/ovn-kubernetes" , "ipam-claim-reference":"vm-a.passtnet" }]'

OVN should take "ipam-claim-reference":"vm-a.passtnet" from it and copy it to the NSE of the primary interface that GetPodNADToNetworkMapping returns.

Actually, to the primary interface that GetPodNADToNetworkMappingWithActiveNetwork returns. Right?

@maiqueb (Collaborator) commented Sep 17, 2024

[quoted the previous exchange and pod events above]

You'll need to understand what on earth is creating net2. AFAIU, that shouldn't happen; you should have instead:

  • eth0
  • ovn-udn1

@oshoval (Collaborator, Author) commented Sep 17, 2024


You'll need to understand what on earth is creating net2. AFAIU, that shouldn't happen; you should have instead:

  • eth0
  • ovn-udn1

The NSE that I added, I think (we have 2 here, primary-udn-kubevirt-binding and the new one I added wrongly):
k8s.v1.cni.cncf.io/networks: '[{"name":"primary-udn-kubevirt-binding","namespace":"default","cni-args":{"logicNetworkName":"passtnet"}},{"name":"ovn-kubernetes","namespace":"default","ipam-claim-reference":"vm-a.passtnet"}]'
(since we also have v1.multus-cni.io/default-network: default/ovn-kubernetes, which just replaces the default file
but creates an interface as well)

But IIUC it will be dropped due to the discussion above, so it won't matter.
Thanks

@oshoval (Collaborator, Author) commented Sep 17, 2024

If I understood correctly (if I am wrong we can continue offline as we spoke, thanks)
ipam-ext should add only this: v1.multus-cni.io/default-network: '[{ "name": "default/ovn-kubernetes" , "ipam-claim-reference":"vm-a.passtnet" }]'
OVN should take "ipam-claim-reference":"vm-a.passtnet" from it and copy it to the NSE of the primary interface that GetPodNADToNetworkMapping returns

Actually, to the primary interface that GetPodNADToNetworkMappingWithActiveNetwork returns. Right?

Possibly, thanks.
GetPodNADToNetworkMappingWithActiveNetwork calls GetPodNADToNetworkMapping,
but it might indeed be possible to update the logic just at GetPodNADToNetworkMappingWithActiveNetwork
(so only that one is affected along the way).

@oshoval (Collaborator, Author) commented Sep 18, 2024

WIP - OVN part ovn-org/ovn-kubernetes#4732
(not tested yet, so not ready for review, thanks)

@oshoval (Collaborator, Author) commented Sep 18, 2024

WIP - OVN part ovn-org/ovn-kubernetes#4732 (not tested yet, so not ready for review, thanks)

Basically it seems to work.

I will soon raise some small points to discuss once we decide to implement it,
but all in all it seems feasible.

I will inject it into OVN to run all the tests so we can see whether any problems exist
(besides the tests that need to be updated, unit tests, etc.).

@maiqueb (Collaborator) commented Sep 18, 2024

Two questions:

  • you're passing the following multus default network annotation: '[{"namespace":"default","name":"ovn-kubernetes","cni-args":{"ipam-claim-reference":"vm-a.passtnet"}}]'. Could this be a NSE instead? If so, you could directly use the ipam-claim-reference attribute.
  • could you instead use the multus default network annotation to just pass the default network name, and have that default network configured via a NSE in the multus networks annotation? (as you had before)

Thanks for the effort.

@oshoval (Collaborator, Author) commented Sep 19, 2024

Two questions:

  • you're passing the following multus default network annotation: '[{"namespace":"default","name":"ovn-kubernetes","cni-args":{"ipam-claim-reference":"vm-a.passtnet"}}]'. Could this be a NSE instead? If so, you could directly use the ipam-claim-reference attribute.
  • could you instead use the multus default network annotation to just pass the default network name, and have that default network configured via a NSE in the multus networks annotation? (as you had before)

Both suggestions involve adding a NSE, and that NSE must not overlap with an interface that we are already creating,
either ovn-udn1 or the default-network one (which is created either by the default-network annotation or, otherwise, by the default /etc/cni/net.d file),
because otherwise the NSE tells multus to create an interface, but since we already created an interface belonging to that network, it fails as seen above.
So whatever solution we take must not create a new interface.
Maybe we can try adding "type": "noop" to a NSE? (so multus won't create an interface and it will just be a message-passing method)

Note: I still didn't manage to make all tests pass with the current method
(but I don't know yet whether it is just noise or real, as some of them pass locally, but I want them to pass on CI).

@oshoval (Collaborator, Author) commented Sep 19, 2024

Another note: if we decide to implement this, we need to consider adding the NAD that appears in the PR description to CNAO (which creates the other NAD).

@oshoval (Collaborator, Author) commented Sep 19, 2024

Ah, you are right; I forgot the NSE has json:"ipam-claim-reference,omitempty", so I just wanted to use the current Unmarshal with minimal changes. Will update, thanks.

Please make sure that it works then :)

Updated the PRs and injected this PR into ovn-org/ovn-kubernetes#4732;
let's see everything pass.

@oshoval (Collaborator, Author) commented Sep 20, 2024

All tests passed (the failures are just known flakes; I rebased the referenced PR because a fix that should solve them was merged).

Please let me know if there is a green light to implement it.
Thanks

@maiqueb (Collaborator) commented Sep 20, 2024

All tests passed (the failures are just known flakes; I rebased the referenced PR because a fix that should solve them was merged).

Cool, really good.

Please let me know if there is a green light to implement it. Thanks

Yeah, if you don't mind, start working on that.

There is a slight chance of it not landing in the product though - I will discuss this next week with the maintainers.

@oshoval oshoval changed the title WIP, POC: Use v1.multus-cni.io/default-network and NSE for default network UDN, IPAM: Use v1.multus-cni.io/default-network Sep 22, 2024
oshoval added a commit to oshoval/ovn-kubernetes that referenced this pull request Sep 22, 2024
In order to specify ipam-claim-reference for the primary network,
use v1.multus-cni.io/default-network instead of
k8s.ovn.org/primary-udn-ipamclaim.

See kubevirt/ipam-extensions#69

Signed-off-by: Or Shoval <[email protected]>
oshoval added a commit to oshoval/ovn-kubernetes that referenced this pull request Sep 22, 2024
In order to specify ipam-claim-reference for the primary network,
use v1.multus-cni.io/default-network instead of
k8s.ovn.org/primary-udn-ipamclaim.

See kubevirt/ipam-extensions#69

Signed-off-by: Or Shoval <[email protected]>
@oshoval (Collaborator, Author) commented Sep 22, 2024

Side-quest idea, if desired:

Allow a make flash-sync target that builds just the image (which will be available in the registry)
and updates the manifest so it is ready for deployment.
It can be used locally when we don't need to build all the other stuff (i.e. the passt binary image),
and also on PRs like this one:
https://github.com/ovn-org/ovn-kubernetes/pull/4732/files#diff-be802a960021d736802f43387ec5845a88789ed10a7c0ed7aba8ac9c78f0c15aR392
(but of course, before that it would need to clone the code and inject this PR in this specific case; this can also be wrapped nicely in a script if desired).

@oshoval (Collaborator, Author) commented Sep 22, 2024

The failure now is the one fixed at #70

@oshoval (Collaborator, Author) commented Sep 22, 2024

Both PRs are basically ready for review, please (as product, not PoC).
Thanks

@oshoval oshoval marked this pull request as ready for review September 22, 2024 14:09
@kubevirt-bot kubevirt-bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Sep 22, 2024
@oshoval (Collaborator, Author) commented Sep 22, 2024

/hold

until ovn-org/ovn-kubernetes#4732 has lgtm and is ready to be merged,
as they both need to be merged together

@kubevirt-bot kubevirt-bot added do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. labels Sep 22, 2024
In order to specify ipam-claim-reference for the primary network,
use v1.multus-cni.io/default-network instead of
k8s.ovn.org/primary-udn-ipamclaim.

Signed-off-by: Or Shoval <[email protected]>
@kubevirt-bot kubevirt-bot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 26, 2024
@oshoval (Collaborator, Author) commented Sep 26, 2024

Rebased.

Quique raised some concerns offline that we should discuss (i.e. about who creates the NAD, and that it should be done dynamically according to the default config file).
EDIT - also, if the file can be changed manually on the fly while there are pods using that NAD / file, we won't be able to mirror the changes to the NAD because of the live payloads, plus we will need a controller for that
(but on the other hand, it means that if the file is changed while there are pods that use it, that shouldn't be allowed either, regardless of the NAD).

We also discussed a potential alternative, if desired: default-network would carry some indication
that tells multus to take the original file, and not a NAD.
This way the annotation would allow us to convey NSE data without the need to create an additional NAD
that just reflects the default file anyway.

@maiqueb (Collaborator) commented Sep 27, 2024

Rebased.

Quique raised some concerns offline that we should discuss (i.e. about who creates the NAD, and that it should be done dynamically according to the default config file).

We also discussed a potential alternative, if desired: default-network would carry some indication that tells multus to take the original file, and not a NAD. This way the annotation would allow us to convey NSE data without the need to create an additional NAD that just reflects the default file anyway.

Yes, having to create the NAD is one clear downside of this approach. You raise a good point: who should be responsible for creating this.

Let's wait to see what the maintainers think.

@oshoval (Collaborator, Author) commented Oct 1, 2024

Yes, having to create the NAD is one clear downside of this approach. You raise a good point: who should be responsible for creating this.

Let's wait to see what the maintainers think.

Should we raise it on the OVN PR, or ask the OVN maintainers offline, please?
This one is the ipam-ext repo, so I'm not sure they will see it (unless it was already raised offline).

@maiqueb (Collaborator) commented Oct 1, 2024

Yes, having to create the NAD is one clear downside of this approach. You raise a good point: who should be responsible for creating this.
Let's wait to see what the maintainers think.

Should we raise it on the OVN PR, or ask the OVN maintainers offline, please? This one is the ipam-ext repo, so I'm not sure they will see it (unless it was already raised offline).

Point Jaime to the OVN-K PR. I'll add this to the weekly sync we have on Thursday.

@oshoval (Collaborator, Author) commented Oct 1, 2024

Point Jaime to the OVN-K PR. I'll add this to the weekly sync we have on Thursday.

Done here ovn-org/ovn-kubernetes#4732 (comment)

@oshoval (Collaborator, Author) commented Oct 1, 2024

@maiqueb
If you like the idea in the last paragraph here [1],
please raise it at the meeting if possible.
[1] #69 (comment)

@oshoval (Collaborator, Author) commented Oct 9, 2024

For now this is on hold due to the reasons above.

@oshoval oshoval closed this Oct 9, 2024