We have performed a weak scaling analysis using chunk_size = 64, 128, 256, and 512 MB, starting from 1 node.
We used chunk_per_worker = 10 for every chunk size.
This gives each run a non-homogeneous data size per worker (640 MB for the 64 MB chunks up to 5120 MB for the 512 MB chunks).
What about setting chunk_per_worker to 80, 40, 20, and 10 for chunk_size = 64, 128, 256, and 512 MB respectively?
Then the total data size per worker is the same (5120 MB) for every chunk size, so we should really be able to see the effect of chunk_size alone on computation time.
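As a quick sanity check of the arithmetic, here is a minimal Python sketch (the variable names are illustrative, not taken from the benchmark code) showing that the proposed pairing keeps the per-worker data size constant:

```python
# Hypothetical sketch: compare per-worker data size under the current
# setup (fixed chunk_per_worker) vs. the proposed inverse scaling.
chunk_sizes_mb = [64, 128, 256, 512]

# Current setup: 10 chunks per worker for every chunk size.
current = {size: size * 10 for size in chunk_sizes_mb}
print(current)   # {64: 640, 128: 1280, 256: 2560, 512: 5120} -> non-homogeneous

# Proposed setup: chunk_per_worker = 80, 40, 20, 10 respectively.
proposed_chunks = {64: 80, 128: 40, 256: 20, 512: 10}
proposed = {size: size * n for size, n in proposed_chunks.items()}
print(proposed)  # {64: 5120, 128: 5120, 256: 5120, 512: 5120} -> homogeneous
```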