Support FSDP in JAX workloads #797

Open
priyakasimbeg opened this issue Oct 17, 2024 · 2 comments

@priyakasimbeg (Contributor)
It is useful to shard optimizer state across devices (to save significant memory). This reflects current practice. We want to support it.

  • We want to switch from no sharding to naive model parameter sharding in both frameworks (a minimal sketch follows this list).
  • Forbid (in the rules) any hacks that change the model parallelization strategy.
  • Have workload-default sharding, and allow submitters to opt out of it on a per-workload basis.
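
A minimal sketch of what "naive model parameter sharding" could look like in JAX, assuming a hypothetical 1-D device mesh and a plain params pytree (none of the names below are part of the workload API): each parameter is sharded along its leading axis across all devices, falling back to replication when that axis does not divide evenly.

```python
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Hypothetical 1-D mesh over all available devices.
mesh = Mesh(jax.devices(), axis_names=("devices",))

def shard_or_replicate(param):
  # Shard the leading axis when it divides evenly across devices,
  # otherwise fall back to full replication.
  if param.ndim > 0 and param.shape[0] % jax.device_count() == 0:
    spec = P("devices", *([None] * (param.ndim - 1)))
  else:
    spec = P()  # fully replicated
  return jax.device_put(param, NamedSharding(mesh, spec))

# Toy parameters standing in for a workload's model state.
params = {"w": jnp.ones((1024, 256)), "b": jnp.zeros((256,))}
sharded_params = jax.tree_util.tree_map(shard_or_replicate, params)
```
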
@priyakasimbeg (Contributor, Author)

From 9/5 meeting notes:

Possible with pmap, but easier with jit.

  • How parameters are replicated is currently not under the submitters' control.
  • Changing this would introduce a breaking change to existing submissions.
  • This could be solved by creating a no-op.
  • We currently have the workloads call jax_utils.replicate (from flax); see the sketch after this list.
  • The DeepSpeech code would be simplified by jit, but that would introduce breaking changes.
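
For context, a rough comparison of the two paths discussed above, using toy params rather than the actual workload code: flax's jax_utils.replicate stacks a leading device axis for pmap, whereas under jit a single global array carries its sharding with it.

```python
import jax
import jax.numpy as jnp
from flax import jax_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

params = {"w": jnp.ones((128, 64))}

# pmap path: replicate() adds a leading axis of size
# jax.local_device_count(), so everything downstream must handle
# per-device arrays.
pmap_params = jax_utils.replicate(params)

# jit path: one global array per parameter with a replicated sharding;
# shapes are unchanged and the layout is carried by the array itself.
mesh = Mesh(jax.devices(), axis_names=("devices",))
jit_params = jax.device_put(params, NamedSharding(mesh, P()))
```
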

Sourabh’s Suggestion:

  • Workloads shouldn't do any cross-device all-gathers.
  • Give submissions control over how parameters are replicated (via a new submission function); a hypothetical sketch follows this list.
  • Switch to global arrays and update the workloads accordingly.
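
One way such a submission function could look (the name, signature, and default behavior below are purely hypothetical, not part of the current spec): the submission, rather than the workload, decides how parameters are laid out across devices.

```python
import jax
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

def shard_model_params(model_params, mesh: Mesh):
  """Hypothetical submission hook: choose the device layout of the params.

  The default shown here simply replicates every parameter; an FSDP-style
  submission could instead return parameters sharded along some axis.
  """
  return jax.device_put(model_params, NamedSharding(mesh, P()))
```
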

@priyakasimbeg (Contributor, Author) commented Nov 7, 2024

From 9/12 meeting notes:
Recap: It is useful to shard optimizer state across devices (to save significant memory). This reflects current practice. We want to support it.
We don’t want to support arbitrary model parallelism.

  • Sourabh: We could allow model-agnostic model parameter sharding.
  • Michael: We still want to ensure that the frameworks are comparable.

Proposal:
Switch from no sharding to naive model parameter sharding: switch from pmap to jit in JAX and allow optimizer state sharding (following the model parameter sharding) in both frameworks; a sketch of optimizer state sharding that follows the parameter sharding appears below.
Forbid (in the rules) any hacks that change the model parallelization strategy.
Have workload-default sharding, and allow submitters to opt out of it on a per-workload basis.
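
A sketch, assuming optax and toy parameters, of optimizer state sharding that follows the model parameter sharding (the layout choice and helper names are illustrative, not the benchmark's API): the Adam moments are placed on the same shardings as the parameters they track.

```python
import jax
import jax.numpy as jnp
import optax
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

mesh = Mesh(jax.devices(), axis_names=("devices",))
params = {"w": jax.device_put(jnp.ones((1024, 256)),
                              NamedSharding(mesh, P("devices", None)))}

optimizer = optax.adamw(learning_rate=1e-3)
opt_state = optimizer.init(params)

# Mirror each parameter's sharding onto its first/second moments.
# For optax.adamw, the first element of the state tuple is
# ScaleByAdamState(count, mu, nu).
param_shardings = jax.tree_util.tree_map(lambda p: p.sharding, params)
place = lambda tree: jax.tree_util.tree_map(jax.device_put, tree, param_shardings)
adam_state = opt_state[0]._replace(mu=place(opt_state[0].mu),
                                   nu=place(opt_state[0].nu))
opt_state = (adam_state,) + opt_state[1:]
```
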
