Measure performance of NSM cmd-forwarder-vpp and provide results #123
Comments
This is a good task for 'component performance'... but we will also want 'systemic' performance measurements as well :)
Did some local testing and collected results.

Setup: N clients, 1 endpoint; each client makes 1 request and checks the connection.

Intermediate results: I measured the maximum number of connections for the KernelToKernel and MemifToMemif cases. Adding some results I've collected and also a few charts.

Question: @edwarnicke what do you think about testing this on some cloud cluster instead of the local environment? And if you agree with that, which cloud environment would you like me to start working with?
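For context, a minimal sketch of how such a ramp-up measurement could be scripted in Go. The `requestConnection` function here is a hypothetical stand-in for whatever the real NSM client request/connectivity check is (it is not part of cmd-forwarder-vpp), and the client counts are arbitrary placeholders:

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// requestConnection is a hypothetical stand-in for a single NSM client:
// request a connection through the forwarder and verify it (e.g. with a ping).
func requestConnection(ctx context.Context, id int) error {
	// ... perform the client Request() and check connectivity here ...
	return nil
}

// measure starts n clients in parallel and reports how many connections succeeded.
func measure(ctx context.Context, n int) int {
	var wg sync.WaitGroup
	var mu sync.Mutex
	ok := 0
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			if err := requestConnection(ctx, id); err == nil {
				mu.Lock()
				ok++
				mu.Unlock()
			}
		}(i)
	}
	wg.Wait()
	return ok
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()

	// Ramp up the client count until some connections start failing;
	// the last fully successful step approximates the maximum.
	for n := 10; n <= 1000; n += 10 {
		ok := measure(ctx, n)
		fmt.Printf("clients=%d succeeded=%d\n", n, ok)
		if ok < n {
			break
		}
	}
}
```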
@Mixaster995 Please unzip all charts and attach them to the issue.
@Mixaster995 Do you have any investigation results related to the kernel interface instability?
@edwarnicke I think this testing would be more useful if we move to system-level testing. I also think that we need to test performance on all our public clusters, starting with GKE. WDYT?
I'd be curious to see some other combinations like memif to memif :) |
Prepared a testing stand for local testing in
Description
We need to know the performance of NSM cmd-forwarder-vpp.
Steps
1. Measure the maximum number of simultaneous connections, N.
2. Measure performance for N connections, where N is the number from step 1. Mechanisms: memif2memif, kernel2kernel, memif2vxlan2memif, kernel2vxlan2kernel, kernel2wireguard2kernel, memif2wireguard2memif.
3. Measure performance for N connections, where N is the number from step 1. Mechanisms: memif2kernel, kernel2memif, memif2vxlan2kernel, kernel2vxlan2memif, memif2wireguard2kernel, kernel2wireguard2memif (see the sketch below for one way to enumerate these pairs).

Raw estimation
8d
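A rough sketch of how the mechanism combinations from steps 2 and 3 could be covered by a table-driven Go test. `runPerfTest` is a hypothetical helper (it would deploy a client/endpoint pair with the given mechanism chain and collect measurements), and `n = 100` is only a placeholder for the N found in step 1:

```go
package perf

import "testing"

// Mechanism pairs from step 2 (same mechanism on both sides) and step 3 (mixed).
var cases = []string{
	// step 2
	"memif2memif", "kernel2kernel",
	"memif2vxlan2memif", "kernel2vxlan2kernel",
	"kernel2wireguard2kernel", "memif2wireguard2memif",
	// step 3
	"memif2kernel", "kernel2memif",
	"memif2vxlan2kernel", "kernel2vxlan2memif",
	"memif2wireguard2kernel", "kernel2wireguard2memif",
}

// runPerfTest is a hypothetical helper: deploy client and endpoint with the
// given mechanism chain, bring up n connections, and record the results.
func runPerfTest(t *testing.T, mechanism string, n int) {
	t.Logf("measuring %s with %d connections", mechanism, n)
	// ... deploy, measure, collect results ...
}

func TestForwarderPerformance(t *testing.T) {
	const n = 100 // placeholder for N from step 1
	for _, mech := range cases {
		mech := mech
		t.Run(mech, func(t *testing.T) {
			runPerfTest(t, mech, n)
		})
	}
}
```

Running each combination as a subtest keeps the results separated per mechanism pair, which makes it easier to attach per-case numbers and charts to the issue.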