Add 16 and maybe 8 bit FP datatypes #559

lstewart opened this issue Nov 7, 2024 · 0 comments

Problem Statement

CPUs and GPUs have added support for 16-bit and 8-bit floating-point datatypes. SHMEM should support them.

Proposed Changes

Propose adding support for the important new floating-point datatypes. BF16 and FP16 are the primary candidates; FP8 is a less likely possibility.

Support is straightforward for the data movement APIs: Put, Get, Collect, Broadcast, and Alltoall (a portable emulation of a BF16 put is sketched below).
Support for computational atomics is straightforward where there is hardware support.
Support for reductions is a good question for discussion. It might be useful to have a new kind of reduction in which the target buffer is a different datatype than the source; for example, a sum-reduce from BF16 into FP32 would retain more precision (see the second sketch below).
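
A minimal sketch of the data movement case as it can already be expressed today, pending dedicated typed interfaces: the existing untyped shmem_putmem moves BF16 elements as raw 2-byte values. The put_bf16 wrapper name and the uint16_t representation of BF16 are illustrative assumptions, not part of the OpenSHMEM specification.

```c
/* Illustrative only: a BF16 put emulated with the existing untyped
 * shmem_putmem. BF16 values are carried as uint16_t bit patterns. */
#include <shmem.h>
#include <stdint.h>
#include <stddef.h>

void put_bf16(uint16_t *dest, const uint16_t *source, size_t nelems, int pe)
{
    /* nelems BF16 elements == nelems * 2 bytes; dest must be symmetric. */
    shmem_putmem(dest, source, nelems * sizeof(uint16_t), pe);
}
```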
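And a minimal sketch of the proposed mixed-precision reduction semantics, emulated with the existing OpenSHMEM 1.5 API: each PE widens its BF16 contributions to FP32 locally and calls shmem_float_sum_reduce, so the accumulation happens in FP32 as suggested above. The bf16_to_float helper and the example values are assumptions for illustration, not library functionality.

```c
/* Emulation of a BF16 -> FP32 sum reduction using today's API:
 * widen locally, then reduce in FP32 with shmem_float_sum_reduce. */
#include <shmem.h>
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define N 4

/* BF16 is the high 16 bits of an IEEE-754 binary32 value. */
static float bf16_to_float(uint16_t h)
{
    uint32_t bits = (uint32_t)h << 16;
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

int main(void)
{
    shmem_init();

    /* Symmetric buffers for the widened source and the FP32 result. */
    float *src32 = shmem_malloc(N * sizeof(float));
    float *dst32 = shmem_malloc(N * sizeof(float));

    /* Each PE's private BF16 data; 0x3F80 is 1.0 in BF16. */
    uint16_t local_bf16[N] = { 0x3F80, 0x3F80, 0x3F80, 0x3F80 };
    for (int i = 0; i < N; i++)
        src32[i] = bf16_to_float(local_bf16[i]);

    /* Accumulate across all PEs in FP32 to retain precision. */
    shmem_float_sum_reduce(SHMEM_TEAM_WORLD, dst32, src32, N);

    if (shmem_my_pe() == 0)
        printf("dst32[0] = %f (expect %d.0)\n", dst32[0], shmem_n_pes());

    shmem_free(src32);
    shmem_free(dst32);
    shmem_finalize();
    return 0;
}
```

A native mixed-type reduction would avoid the local widening copy and let implementations pick the accumulation width internally.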

Impact on Implementations

Impact on Users

Adds new capabilities; there is no impact on existing applications.

References and Pull Requests

The Great 8 Bit Debate of Artificial Intelligence
BF16 vs FP16

davidozog added this to the OpenSHMEM 1.7 milestone on Nov 7, 2024