Due to the challenges described below, IPv6 (including SRv6) deployments of Contiv-VPP use the iptables policy renderer instead of the ACL policy renderer used for IPv4 deployments. The iptables rules are programmed in the individual network namespaces of all applicable pods.
Thanks to the iptables policy renderer, k8s network policies work properly for IPv6, but policy programming can be quite slow. It also means that packet processing happens both in VPP and in Linux (inside the network namespaces of individual pods), which is not ideal design-wise or performance-wise.
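To make the per-pod approach concrete, the following is a hypothetical sketch of the kind of ip6tables rules a policy renderer might program inside a pod's network namespace for a simple ingress policy. The chain name, port, and source prefix are illustrative, not Contiv-VPP's actual output:

```
# Dedicated chain for rendered ingress policy rules (illustrative name)
ip6tables -N POLICY-INGRESS
ip6tables -A INPUT -j POLICY-INGRESS
# Allow TCP 8080 from pods in an allowed prefix (example values)
ip6tables -A POLICY-INGRESS -p tcp --dport 8080 -s 2001:db8:1::/64 -j ACCEPT
# Default-deny everything else covered by the policy
ip6tables -A POLICY-INGRESS -j DROP
```

Because these rules live in each pod's own namespace, every policy change means re-programming rules pod by pod, which is one reason the programming can be slow compared to a single ACL table in VPP.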
More info on policy implementation can be found in the development docs.
Challenges of using ACL policy renderer with IPv6
In general, the ACL policy renderer works fine with IPv6 until k8s services are involved. Due to the no-NAT implementation of k8s services on IPv6, all traffic passing to a service backend keeps the virtual service IP as its destination IP address.
The issues:
K8s network policies operate below services, in the sense that they are meant to be applied against real pod IP addresses, not against virtual service IP addresses. This works fine if traffic to a virtual service IP is NAT-ed on VPP before ACL processing (IPv4), but does not work properly in all cases if the traffic is not NAT-ed on VPP (IPv6).
Another issue occurs when the k8s service port differs from the actual application port. In that case, the destination port in the TCP/UDP header needs to be rewritten before the ACL is processed. There is no way to implement this port change on VPP for IPv6, since VPP's nat66 implementation does not support it.
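The port-mismatch case above corresponds to a standard k8s Service where `port` and `targetPort` differ. A minimal illustrative manifest (names, VIP, and ports are made up for the example) shows the mismatch; without NAT66, a packet arriving for `[fd00::100]:80` cannot have its destination port rewritten to 8080 on VPP before the ACL sees it:

```
apiVersion: v1
kind: Service
metadata:
  name: web            # illustrative service name
spec:
  selector:
    app: web
  clusterIP: fd00::100 # illustrative IPv6 virtual service IP
  ports:
  - port: 80           # virtual service port seen by clients
    targetPort: 8080   # actual application port in the pod
```

With the IPv4 data path, NAT44 on VPP translates both the destination address and port before ACL matching, so a policy written against pod IP and port 8080 matches; with the IPv6 no-NAT data path, the packet still carries the service IP and port 80 at ACL time.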