
Creating better datasets for weak scaling analysis. #36

Open
tinaok opened this issue Oct 11, 2019 · 0 comments
tinaok commented Oct 11, 2019

We have performed a weak scaling analysis with chunk_size = 64, 128, 256, and 512 MB, starting from 1 node.
We used chunk_per_worker = 10 for every chunk size.
This gives each analysis a different total data size.
What about changing chunk_per_worker to 80, 40, 20, and 10 for chunk_size = 64, 128, 256, and 512 MB respectively?
The computational size would then be the same for every chunk size, so we should really be able to see the effect of chunk_size on computation time.
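
A minimal sketch of the proposed setup, assuming the benchmark builds dask arrays (the dataset-creation call here is hypothetical; only the chunk_size and chunk_per_worker values come from this issue). Fixing the per-worker data volume at the existing largest run (512 MB × 10 = 5120 MB) reproduces the chunk_per_worker values proposed above:

```python
import dask.array as da

# Keep the total data per worker constant (assumption: 5120 MB,
# i.e. the existing 512 MB x 10 configuration) so that only the
# chunk size changes between runs.
TOTAL_MB_PER_WORKER = 512 * 10
N_WORKERS = 1  # starting from 1 node, as in the current benchmark

for chunk_size_mb in (64, 128, 256, 512):
    chunk_per_worker = TOTAL_MB_PER_WORKER // chunk_size_mb  # 80, 40, 20, 10
    # float64 elements per chunk for the requested chunk size
    chunk_elems = chunk_size_mb * 2**20 // 8
    total_elems = chunk_elems * chunk_per_worker * N_WORKERS
    # Hypothetical dataset: a 1-D random array with the desired chunking.
    data = da.random.random(total_elems, chunks=chunk_elems)
    print(f"chunk_size = {chunk_size_mb:3d} MB -> "
          f"chunk_per_worker = {chunk_per_worker:2d}, "
          f"total = {data.nbytes / 2**20:.0f} MB")
```

With this scheme every configuration processes the same 5120 MB per worker, so differences in runtime should reflect the chunk size rather than the amount of data.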
