This example shows that vL3 NSEs can be created on the fly in response to NSC requests, which allows endpoints to scale effectively. The requested endpoint is automatically spawned on the same node as the NSC, giving the best connectivity performance.
Deploy the NSC and the NSE supplier:
```bash
kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/examples/features/vl3-scale-from-zero?ref=c91be29099fab1f8376d9ff90c858efd829de35e
```
Wait for applications to be ready:
```bash
kubectl wait -n ns-vl3-scale-from-zero --for=condition=ready --timeout=1m pod -l app=nse-supplier-k8s
kubectl wait -n ns-vl3-scale-from-zero --for=condition=ready --timeout=1m pod -l app=alpine
kubectl wait -n ns-vl3-scale-from-zero --for=condition=ready --timeout=1m pod -l app=nse-vl3-vpp
```
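To observe the on-demand spawning described above, you can optionally compare pod placement; `kubectl get pods -o wide` includes the node each pod is scheduled on. This check is illustrative only and not part of the original example:
```bash
# Optional check: each spawned vL3 NSE should land on the same node as a client
kubectl get pods -n ns-vl3-scale-from-zero -o wide
```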
Find all NSCs:
```bash
nscs=$(kubectl get pods -l app=alpine -o go-template --template="{{range .items}}{{.metadata.name}} {{end}}" -n ns-vl3-scale-from-zero)
[[ ! -z $nscs ]]
```
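As an optional sanity check (not part of the original example), you can print the discovered clients before starting the ping loops:
```bash
# Optional: list the discovered NSC pod names and their count
echo "Found $(echo $nscs | wc -w) NSCs: $nscs"
```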
Ping each client from every client:
```bash
(
for nsc in $nscs
do
    ipAddr=$(kubectl exec -n ns-vl3-scale-from-zero $nsc -- ifconfig nsm-1) || exit
    ipAddr=$(echo $ipAddr | grep -Eo 'inet addr:[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' | cut -c 11-)
    for pinger in $nscs
    do
        echo $pinger pings $ipAddr
        kubectl exec $pinger -n ns-vl3-scale-from-zero -- ping -c2 -i 0.5 $ipAddr || exit
    done
done
)
```
Ping each vL3 NSE from each client.

Note: by default the IPAM prefix is `172.16.0.0/16` and the client prefix length is `24`. This example runs two vL3 NSEs, so we expect two vL3 addresses, `172.16.0.0` and `172.16.1.0`, both of which should be reachable from every client.
```bash
(
for nsc in $nscs
do
    echo $nsc pings nses
    kubectl exec -n ns-vl3-scale-from-zero $nsc -- ping 172.16.0.0 -c2 -i 0.5 || exit
    kubectl exec -n ns-vl3-scale-from-zero $nsc -- ping 172.16.1.0 -c2 -i 0.5 || exit
done
)
```
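To relate these pings to the IPAM note above, you can optionally print the `nsm-1` interface of one client (the first loop already reads it, but does not display it); its address should fall inside `172.16.0.0/16`. This is an illustrative check only:
```bash
# Optional: show the vL3 address assigned to the first client
first_nsc=$(echo $nscs | cut -d' ' -f1)
kubectl exec -n ns-vl3-scale-from-zero $first_nsc -- ifconfig nsm-1
```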
Delete the namespace:
```bash
kubectl delete ns ns-vl3-scale-from-zero
```
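Optionally, confirm the cleanup finished; the namespace may stay in `Terminating` briefly before the command reports that it is not found. This check is not part of the original example:
```bash
# Optional: verify cleanup; this should eventually report that the namespace is not found
kubectl get ns ns-vl3-scale-from-zero
```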