Networks:

For network engineers to define new networks for NetworkProfiles, a higher-level CRD describing the implementation details of each specific network is necessary. A Network should wire up the plumbing mechanisms behind the scenes and be referenced only by name in NetworkProfiles, working much like a CNI JSON configuration file. Concentrating the low-level details in this CRD should open the door to pluggable data planes and routing and switching mechanisms, and, if possible, even allow legacy CNI plugins to be executed outside the container runtime context.
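As a rough sketch of that by-name relationship (the NetworkProfile fields and names shown here are hypothetical, and it is only an assumption that NetworkProfile lives in the same API group; the network name s-plane comes from the example further down), a profile might reference a Network like this:

  apiVersion: networking.fennec-project.io/v1alpha1
  kind: NetworkProfile
  metadata:
    name: example-profile      # hypothetical profile name
  spec:
    networks:                  # hypothetical field: Networks are referenced by name only
      - name: s-plane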
Some ideas on how it could be implemented:
apiVersion: networking.fennec-project.io/v1alpha1
kind: Network
metadata:
  name: s-plane
spec:
  interfaceNamePrefix:
  description:
  masterOnHost:          # name or index - may be a bridge, a vSwitch, a third-party router or anything else that works with it
  linkType:              # veth or memif
  mtu:
  hardwareAddressBase:   # optional
  vlan:
    tag:
    description:
  ipv4:
    dhcp: true | false
    # multiple options here...
  ipv6:
    dhcpv6: true | false
    acceptRA: true | false
    # many other options
  srv6:
    srPolicies:
      segmentList: map[string]string   # address -> function
    # many other options
  tunneling:
    gre:
    # many other options
Many other options are possible, including delegating this configuration to a third-party application called by the operator, or possibly to something plugin-like.
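As a purely illustrative instance of the sketch above (all values are made up; only the field names come from the sketch), a filled-in Network might look like:

  apiVersion: networking.fennec-project.io/v1alpha1
  kind: Network
  metadata:
    name: s-plane
  spec:
    interfaceNamePrefix: spl
    description: signalling plane network
    masterOnHost: br-splane      # an existing bridge on the host (example only)
    linkType: veth
    mtu: 9000
    vlan:
      tag: 100
      description: signalling VLAN
    ipv4:
      dhcp: false
    ipv6:
      dhcpv6: false
      acceptRA: true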