
docs: Improve documentation for collective operations #1084

Closed · wants to merge 4 commits into from

Conversation

@swaptr (Contributor) commented on Jan 30, 2023

Description

The current documentation for communication.Alltoall and communication.Alltoallv does not reflect the different construction of sendbuf and recvbuf in the two operations. The documentation of the corresponding non-blocking methods is updated as well.

Issue/s resolved: #1072
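
For context, the core difference between the two collectives is that Alltoall exchanges equally sized chunks passed as plain buffers, while Alltoallv passes each buffer together with per-rank counts and displacements. A minimal sketch of that difference using mpi4py directly (not Heat's wrappers; the buffer names and toy counts are illustrative assumptions only):

```python
# Sketch of the Alltoall vs. Alltoallv calling conventions in plain mpi4py;
# Heat's communication module wraps this pattern for DNDarrays.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Alltoall: every rank sends one equally sized chunk to every other rank,
# so plain send/recv buffers suffice.
sendbuf = np.full(size, rank, dtype="d")
recvbuf = np.empty(size, dtype="d")
comm.Alltoall(sendbuf, recvbuf)

# Alltoallv: chunk sizes differ per destination, so each buffer is passed
# together with per-rank counts and displacements. Here rank r sends j + 1
# elements to rank j, so every rank receives rank + 1 elements per peer.
send_counts = np.arange(1, size + 1)
send_displs = np.insert(np.cumsum(send_counts[:-1]), 0, 0)
recv_counts = np.full(size, rank + 1)
recv_displs = np.insert(np.cumsum(recv_counts[:-1]), 0, 0)
vsend = np.full(send_counts.sum(), rank, dtype="d")
vrecv = np.empty(recv_counts.sum(), dtype="d")
comm.Alltoallv([vsend, send_counts, send_displs, MPI.DOUBLE],
               [vrecv, recv_counts, recv_displs, MPI.DOUBLE])
```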

Changes proposed:

  • Improve documentation of collective operations.

Type of change

  • Documentation update

Memory requirements

NA

Performance

NA

Due Diligence

  • All split configurations tested
  • Multiple dtypes tested in relevant functions
  • Documentation updated (if needed)
  • Title of PR is suitable for corresponding CHANGELOG entry

Does this change modify the behaviour of other functions? If so, which?

No.

@swaptr (Contributor, Author) commented on Jan 30, 2023

Force-push for commit cleanup.

@ClaudiaComito (Contributor) left a comment

Thanks a lot @swaptr, this looks so much better!

In the Alltoallv function, will you add the description of sendbuf and recvbuf as tuples of buffer, counts and displacements (type hints and docstrings)? Thanks!

@swaptr (Contributor, Author) commented on Jan 30, 2023

Thanks for reviewing this, @ClaudiaComito. Can you check if it's correct now?

Comment on lines +1405 to +1406

```python
sendbuf: Tuple[DNDarray, torch.Tensor, Any],
recvbuf: Tuple[DNDarray, torch.Tensor, Any],
```
@ClaudiaComito (Contributor) commented on Feb 6, 2023

Thanks @swaptr, you're actually validating my impression that the documentation is unclear, so thanks a lot for tackling this.

For Alltoallv, sendbuf and recvbuf must each be a tuple of data object, counts, and displacements. Check out the official Open MPI documentation. Here's another explanation I come back to often, as it's easier for me to understand.
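
Schematically, the tuple layout might look like this (a sketch only; the variable names and the exact container types Heat accepts for counts and displacements are assumptions here):

```python
# Hypothetical illustration of the (data, counts, displacements) tuple
# layout described above; not Heat's actual API surface.
import torch

send_data = torch.arange(6, dtype=torch.float64)  # local send buffer
send_counts = (2, 1, 3)   # elements destined for ranks 0, 1, 2
send_displs = (0, 2, 3)   # offset of each rank's chunk within send_data

sendbuf = (send_data, send_counts, send_displs)
# recvbuf is built the same way from the matching receive counts.
```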

Here's an example in our ht.reshape() function
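
As a rough illustration of how such counts and displacements can be derived for a reshape-style row redistribution (a simplified sketch under assumed row partitions, not Heat's actual ht.reshape internals):

```python
# Simplified sketch: derive Alltoallv send counts/displacements (in rows)
# for redistributing a tensor split along axis 0 from an old row
# partition to a new one.
import numpy as np

def send_plan(old_offsets, new_offsets, rank, size):
    """old_offsets/new_offsets: length size+1 arrays of global row bounds."""
    counts = np.zeros(size, dtype=np.int64)
    lo, hi = old_offsets[rank], old_offsets[rank + 1]  # rows held locally
    for peer in range(size):
        # rows this rank holds that fall into peer's target slice
        overlap = min(hi, new_offsets[peer + 1]) - max(lo, new_offsets[peer])
        counts[peer] = max(0, overlap)
    displs = np.insert(np.cumsum(counts[:-1]), 0, 0)
    return counts, displs
```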

@ClaudiaComito added this to the Repo Clean-Up milestone on Jul 31, 2023
github-actions bot commented on Aug 7, 2023

Thank you for the PR!

@swaptr closed this on Aug 7, 2023
@ClaudiaComito (Contributor) commented on Aug 7, 2023

Hi @swaptr, you can leave this PR open, but will you consider allowing us to make changes? We're looking forward to merging your PR. Thanks!

Development

Successfully merging this pull request may close these issues.

Improve Alltoallv documentation of sendbuf, recvbuf