fairscale

fairscale is a PyTorch extension library for high-performance and large-scale training.

fairscale supports:

  • pipeline parallelism (fairscale.nn.Pipe)
  • optimizer state sharding (fairscale.optim.oss); see the sketch after the Pipe example below

Examples

Run a 4-layer model on 2 GPUs. The first two layers run on cuda:0 and the next two layers run on cuda:1.

import torch
import torch.nn as nn

import fairscale

# Four example layers; the sizes are illustrative.
a, b, c, d = (nn.Linear(10, 10) for _ in range(4))

model = nn.Sequential(a, b, c, d)
# balance=[2, 2] places two layers on each device; chunks=8 splits each
# input batch into 8 micro-batches for pipelined execution.
model = fairscale.nn.Pipe(model, balance=[2, 2], devices=[0, 1], chunks=8)
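
Once wrapped, training looks like an ordinary PyTorch loop; the only wrinkle is that inputs must live on the first pipeline device and outputs come back on the last. A minimal sketch under the 2-GPU setup above (batch size, target, and loss are illustrative, not part of the original example):

x = torch.randn(32, 10, device="cuda:0")       # input on the first device
target = torch.randn(32, 10, device="cuda:1")  # Pipe returns output on the last device
output = model(x)
loss = torch.nn.functional.mse_loss(output, target)
loss.backward()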
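
Optimizer state sharding is exposed through fairscale.optim.oss, which wraps a standard torch.optim optimizer and partitions its state across the ranks of a torch.distributed process group. A minimal sketch, assuming a process group has already been initialized; the exact constructor arguments may differ between versions:

import torch
from fairscale.optim.oss import OSS

# Assumes torch.distributed.init_process_group(...) has already been
# called, e.g. by a launcher such as torch.distributed.launch.
model = torch.nn.Linear(10, 10)
# OSS takes the base optimizer class plus its keyword arguments; each
# rank keeps optimizer state only for its own shard of the parameters.
optimizer = OSS(model.parameters(), optim=torch.optim.SGD, lr=0.01)

# From here on, use it like any other optimizer:
#   loss.backward(); optimizer.step()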

Requirements

  • PyTorch >= 1.4

Installation

Normal installation (from the repository root):

pip install .

Development mode:

pip install -e .

Contributors

See the CONTRIBUTING file for how to help out.

License

fairscale is licensed under the BSD-3-Clause License.
