So far our unit tests have mocked all of the non-deterministic resource-usage-gathering functions, such as memory_info() or time(). We could write additional tests that run a computation known to use a certain amount of memory and time, such as allocating a list of 100,000 integers, and check that the collected memory measurement exceeds a threshold. We should also do the same with spawned child processes (parameterizing include_children as True and False), run a similar computation, and verify that the reported memory usage makes sense. There will be some non-determinism, so tests may occasionally need to be re-run. We want to find a balance: the threshold should be close enough to the expected average that the test is meaningful, yet low enough that we don't hit failures from random chance too often.
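As a rough sketch of such an unmocked test: allocate a known quantity of memory and assert that the real measurement from psutil's memory_info() grows by at least a loose lower bound. The threshold below is an illustrative guess chosen well under the expected growth to limit flaky failures, not a calibrated value.

```python
# Integration-style test sketch: no mocking, real RSS measurement.
# Assumes psutil is installed; the 100 KB threshold is illustrative.
import psutil

def test_real_allocation_memory():
    process = psutil.Process()
    rss_before = process.memory_info().rss
    data = list(range(100_000))  # roughly 3-4 MB of int objects plus the list
    rss_after = process.memory_info().rss
    # Loose lower bound, far below the expected ~3 MB growth, so that
    # allocator reuse or measurement noise rarely causes a false failure.
    assert rss_after - rss_before > 100_000
    return rss_after - rss_before

delta = test_real_allocation_memory()
```

A child-process variant would run the same allocation in a spawned process and compare measurements with include_children on and off.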
erikhuck changed the title from "Test with actual calculation" to "Test with actual computation" on Mar 28, 2024.
Yet another useful test would be running a series of jobs requiring increasing execution time and memory, then checking that the relative usage metrics match the expected increasing pattern.
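For example, something like the following could check the increasing pattern for execution time (job sizes are illustrative; order-of-magnitude gaps keep timing noise from breaking the expected ordering, though an occasional re-run may still be needed):

```python
# Sketch: run jobs of increasing size and assert the measured durations
# increase correspondingly. A usage tracker's time metrics should show
# the same monotone pattern.
import time

def job(n):
    start = time.perf_counter()
    total = sum(i * i for i in range(n))  # CPU-bound work scaling with n
    return time.perf_counter() - start

durations = [job(n) for n in (10_000, 1_000_000, 10_000_000)]
assert durations == sorted(durations)  # expected increasing pattern
```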
Other ideas include using multiprocessing to test child-process tracking, and having some processes sleep so that we can compare the CPU utilization of a sleeping process against a non-sleeping one, ensuring the sleeping process shows lower CPU utilization. We can also assert that active processes have greater than 0 CPU utilization.
We could also assert that the max CPU/core percent is greater than or equal to the mean CPU/core percent, and that the max/mean core percent is greater than the max/mean CPU percent.
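Those ordering assertions could look something like this, against a hypothetical summary object (the field names below are placeholders; the real tracker's attribute names may differ):

```python
# Sketch of sanity assertions over collected CPU statistics.
from collections import namedtuple

# Hypothetical summary of sampled CPU stats; not the tracker's real API.
CpuStats = namedtuple("CpuStats", "max_cpu mean_cpu max_core mean_core")

def check_cpu_stats(s):
    # The max over samples can never fall below the mean over samples.
    assert s.max_cpu >= s.mean_cpu
    assert s.max_core >= s.mean_core
    # Core percent aggregates over cores, so it bounds the CPU percent.
    assert s.max_core >= s.max_cpu
    assert s.mean_core >= s.mean_cpu
    return True

ok = check_cpu_stats(CpuStats(max_cpu=80.0, mean_cpu=55.0,
                              max_core=240.0, mean_core=150.0))
```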