Optimize counter polling interval by making it more accurate #3391
Conversation
Depends on sonic-net/sonic-swss-common#950
/azp run
Azure Pipelines successfully started running 1 pipeline(s).
/azp run
Commenter does not have sufficient privileges for PR 3391 in repo sonic-net/sonic-swss
/azp run
Add unit test to meet coverage.
Azure Pipelines successfully started running 1 pipeline(s).
SRv6 test cases are not stable: they sometimes pass and sometimes fail with the same code.
/azpw run
/AzurePipelines run
Azure Pipelines successfully started running 1 pipeline(s).
@prsunny @dgsudharsan could you review and approve the PR? I have added unit tests since the last approval to meet the coverage requirement. Thanks.
Cherry-pick PR to 202411: #3500
Define bulk chunk size and bulk chunk size per counter ID.
This resolves the VS test failure in sonic-net#1457, which is caused by a loop dependency. In PR sonic-net#1457, the new fields `bulk_chunk_size` and `bulk_chunk_size_per_prefix` were introduced to `sai_redis_flex_counter_group_parameter_t`, whose instances are initialized by orchagent. However, orchagent is still compiled against the old sairedis header, which leaves both new fields uninitialized and in turn fails the VS test.
We had to split that PR into two:
1. sonic-net#1519, which updates the header sairedis.h only. The motivation is to compile swss (orchagent) with both new fields initialized.
2. sonic-net#1457, which contains all the rest of the code.
The order to merge:
1. sonic-net#1519
2. sonic-net/sonic-swss#3391
3. sonic-net#1457
What I did
Optimize the counter-polling performance in terms of polling interval accuracy
Enable bulk counter-polling to run at a smaller chunk size
There is one counter-polling thread per counter group. All such threads compete for critical sections at the vendor SAI level: a counter-polling thread must wait if another thread already holds the critical section, which introduces latency for the waiting counter group.
An example is the competition between the PFC watchdog and the port counter groups.
The port counter group contains many counters and is polled in a bulk mode which takes a relatively longer time. The PFC watchdog counter group contains only a few counters but is polled quickly. Sometimes, PFC watchdog counters must wait before polling, which makes the polling interval inaccurate and prevents the PFC storm from being detected in time.
To resolve this issue, we can reduce the chunk size of the port counter group. By default, the port counter group polls the counters of all ports in a single bulk operation. With a smaller chunk size, it polls the counters in several bulk operations, each covering a subset of ports whose size equals the chunk size. Furthermore, we support setting the chunk size on a per-counter-ID basis. This way, the port counter group stays in the critical section for a shorter time, and the PFC watchdog is more likely to be scheduled in time to poll counters and detect a PFC storm.
Collect the timestamp immediately after the vendor SAI API returns.
Currently, many counter groups require a Lua plugin that executes on each polling interval to calculate rates, detect certain events, etc. E.g. the PFC watchdog counter group uses one to detect a PFC storm. In this case, the polling interval is calculated from the difference between the timestamps of the current and the last poll, to avoid deviation due to scheduling latency. However, the timestamp is collected in the Lua plugin, which runs several steps after the SAI API returns and in a different context (redis-server). Both introduce even larger deviations. To overcome this, we collect the timestamp immediately after the SAI API returns.
Depends on sonic-net/sonic-swss-common#950.
Why I did it
How I verified it
Ran the regression test and observed counter-polling performance.
A comparison test shows very good results with any or all of the above optimizations applied.
Details if related
For the second optimization (timestamp collection): each counter group contains more than one counter context, depending on the type of objects; a counter context is keyed by (group, object type). However, counters fetched by different counter groups are pushed into the same entry for the same object.
E.g. the PFC_WD group contains counters of ports and queues, the PORT group contains counters of ports, and the QUEUE_STAT group contains counters of queues.
Both the PFC_WD and PORT groups push counter data into the item representing a port, but each group has its own polling interval, which means counter IDs polled by different counter groups can carry different timestamps.
We use the name of a counter group to identify that group's timestamp.
E.g. in a port counter entry, PORT_timestamp represents the last time the port counter group polled the counters, and PFC_WD_timestamp represents the last time the PFC watchdog counter group polled them.