-
This was also reported in #4786. We did have some recent changes that involved additional serialization-deserialization overhead (#4801), and we also added resource version validation (for optimistic locking) in #4740. However, none of these have been released yet. If the difference in performance is between 6.4.x and 6.5-SNAPSHOT, then these are good candidates. If the performance degradation already appears in 6.4, it can only be related to the changes on the OkHttp client side.
As for the new race condition: if it only appears in 6.5, it can be related to #4740. In any case, a few more hints would help in resolving these issues.
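For context, "resource version validation" means the mock server behaves more like a real API server for concurrent writes: an update carrying a stale resourceVersion is rejected with 409 Conflict. A rough conceptual sketch of that check (names are made up for illustration; this is not the actual #4740 implementation):

```java
// Conceptual sketch of resourceVersion-based optimistic locking.
// Illustrative only; the class and method names are invented.
class ResourceVersionCheckSketch {
    static void validate(String storedVersion, String incomingVersion) {
        // A real API server only enforces this when the client sends a version.
        if (incomingVersion != null && !incomingVersion.equals(storedVersion)) {
            // The mock server would answer 409 Conflict in this case.
            throw new IllegalStateException("409 Conflict: resourceVersion mismatch");
        }
    }
}
```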
-
A possible root cause is inconsistent synchronization around the map of resources held by the mock server. The KubernetesCrudDispatcher.processEvent method temporarily removes the resource and then adds it back if the operation is not a delete, so there is a possible race condition for patch and put operations. Create, delete, get, and watch operations appear to have appropriate locking when they are processed. That doesn't seem like a new problem, though: the logic has been this way at least since 6.0.
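A simplified sketch of that remove-then-re-add pattern (illustrative only, not the actual KubernetesCrudDispatcher code; the map and merge step are stand-ins) shows why a concurrent get() can briefly observe the resource as missing:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

class CrudStoreSketch {
    private final Map<String, String> resources = new ConcurrentHashMap<>();

    // PUT/PATCH path: remove, transform, then re-add.
    void processUpdate(String key, String newValue) {
        String old = resources.remove(key);   // window opens here
        String merged = merge(old, newValue); // e.g. apply a patch
        resources.put(key, merged);           // window closes here
    }

    // GET path: a reader running between remove() and put()
    // sees no entry, and the dispatcher would answer 404/null.
    Optional<String> get(String key) {
        return Optional.ofNullable(resources.get(key));
    }

    private String merge(String old, String next) {
        return next; // placeholder for the real merge/patch logic
    }
}
```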
-
In Strimzi, we use the Kubernetes Mock Server for some of our tests, and it worked reasonably fine for us until 6.3.1. But from 6.4.0, we seem to have two issues in many places: a performance degradation, and a race condition around status updates. When we call replaceStatus() or patchStatus() on one of our custom resources, the mock server for some time returns null / 404 to a get() call on the given resource that is done in parallel with the status update call. This seems to happen with both 6.4.0 and 6.4.1, but also with the latest 6.5-SNAPSHOT builds.
Is this a known issue? Has anyone else experienced something similar? If not, I will try to put together a reproducer.
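A starting point for such a reproducer might look roughly like the sketch below. This is a hedged sketch, not a confirmed reproducer: it uses a ConfigMap instead of a Strimzi custom resource for simplicity (the dispatcher's code path is resource-agnostic), assumes the JUnit 5 @EnableKubernetesMockClient extension from kubernetes-server-mock in CRUD mode, and all test names are illustrative.

```java
import io.fabric8.kubernetes.api.model.ConfigMap;
import io.fabric8.kubernetes.api.model.ConfigMapBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.server.mock.EnableKubernetesMockClient;
import org.junit.jupiter.api.Test;

import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

@EnableKubernetesMockClient(crud = true)
class MockServerRaceSketchTest {

    KubernetesClient client; // injected by the mock client extension

    @Test
    void getShouldNotReturnNullDuringConcurrentUpdates() throws Exception {
        ConfigMap cm = new ConfigMapBuilder()
                .withNewMetadata().withName("race").withNamespace("test").endMetadata()
                .build();
        client.configMaps().resource(cm).create();

        AtomicInteger misses = new AtomicInteger();
        AtomicBoolean done = new AtomicBoolean(false);

        // Reader thread: polls get() while updates happen in parallel.
        Thread reader = new Thread(() -> {
            while (!done.get()) {
                if (client.configMaps().inNamespace("test").withName("race").get() == null) {
                    misses.incrementAndGet(); // resource "disappeared" mid-update
                }
            }
        });
        reader.start();

        // Writer: repeatedly replaces the same resource (a PUT, like replaceStatus()).
        for (int i = 0; i < 1_000; i++) {
            ConfigMap current = client.configMaps().inNamespace("test").withName("race").get();
            if (current != null) {
                client.configMaps().resource(current).replace();
            }
        }
        done.set(true);
        reader.join();

        // On an affected version this would be expected to be > 0 at least sometimes.
        System.out.println("null/404 get() responses observed: " + misses.get());
    }
}
```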