Enable torch.autocast with ZeRO #6993

Draft · wants to merge 6 commits into master
Conversation

tohtana (Contributor) commented Feb 3, 2025

This PR adds support for torch.autocast with ZeRO. To use it, enable torch.autocast in the DeepSpeed config:

"torch_autocast": {
  "enabled": true,
  "dtype": "bfloat16"
}

You don't need to call torch.autocast explicitly in your code. The grad scaler is also applied in the DeepSpeed optimizer.
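For illustration, here is a minimal training-loop sketch (the model, dataloader, batch size, and optimizer settings below are placeholders, not part of this PR):

import torch
import deepspeed

# Minimal sketch, assuming a user-defined MyModel and dataloader
# (both placeholders, not part of this PR).
ds_config = {
    "train_batch_size": 8,
    "zero_optimization": {"stage": 2},
    "torch_autocast": {
        "enabled": True,
        "dtype": "bfloat16"
    },
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-4}},
}

model = MyModel()
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

for inputs, labels in dataloader:
    # No explicit torch.autocast context is needed; the engine applies
    # autocast (and grad scaling, when applicable) internally.
    # Assumes the placeholder model's forward returns the loss.
    loss = engine(inputs, labels)
    engine.backward(loss)
    engine.step()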

All parameters are maintained in FP32, but certain operators and layers are computed in the specified dtype. With ZeRO enabled, the communication (allreduce/reduce_scatter) for those layers is also performed in the specified dtype (see the list of modules).
You cannot enable DeepSpeed's fp16 or bf16 modes together with torch.autocast.
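As a rough sanity check of this behavior (continuing the sketch above; not part of the PR), you can confirm that the parameters remain FP32 after initialization:

# Sketch: parameters stay in FP32 even though autocast-eligible ops and
# their ZeRO communication use bfloat16 (engine from the example above).
for name, param in engine.module.named_parameters():
    assert param.dtype == torch.float32, f"{name} is not kept in FP32"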

(ZeRO-3 support is currently work in progress.)
