Sharded distributed sampler for cached dataloading in DDP #195
base: main
Conversation
Example output:
GPU available: True (cuda), used: False
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
/hpc/mydata/ziwen.liu/anaconda/2022.05/x86_64/envs/viscy/lib/python3.11/site-packages/lightning/pytorch/trainer/setup.py:177: GPU available but not used. You can set it by doing `Trainer(accelerator='gpu')`.
Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/3
Initializing distributed: GLOBAL_RANK: 2, MEMBER: 3/3
Initializing distributed: GLOBAL_RANK: 1, MEMBER: 2/3
----------------------------------------------------------------------------------------------------
distributed_backend=gloo
All distributed processes registered. Starting with 3 processes
----------------------------------------------------------------------------------------------------
* update torch >2.4.1
* black
* ruff
This reverts commit 8c13f49.
persistent_workers=bool(self.num_workers),
pin_memory=True,
shuffle=False,
timeout=self.timeout,
@edyoshikun why is this needed?
At the beginning I had to add this timeout because caching could take a long time. I don't think we need it anymore; in fact, with timeout=0 it works fine.
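For context, PyTorch's `DataLoader` uses `timeout` as the number of seconds to wait when collecting a batch from workers, and 0 (the default) disables the timeout. A minimal sketch mirroring the keyword arguments in the diff above (the dataset, batch size, and worker count are made up, not taken from this PR):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

num_workers = 2  # made-up value; in the PR this comes from the datamodule config
dataset = TensorDataset(torch.arange(16).float())  # placeholder dataset

loader = DataLoader(
    dataset,
    batch_size=4,
    num_workers=num_workers,
    persistent_workers=bool(num_workers),  # keep workers (and their caches) alive across epochs
    pin_memory=True,
    shuffle=False,
    timeout=0,  # 0 disables the batch-collection timeout entirely
)
```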
Add a distributed sampler that only permutes indices within ranks, improving the cache hit rate in DDP.
See viscy/scripts/shared_dict.py for usage.
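A hypothetical, simplified sketch of the idea (this is not the implementation in this PR; see the script above for the real usage): each rank owns a fixed contiguous shard of the dataset, and only the order within that shard is reshuffled per epoch, so a rank keeps requesting the same indices and its cache stays warm.

```python
import torch
from torch.utils.data import Dataset, Sampler


class ShardedDistributedSampler(Sampler):
    """Sketch: permute indices only within each rank's contiguous shard."""

    def __init__(self, dataset: Dataset, num_replicas: int, rank: int, seed: int = 0):
        # Drop the remainder for simplicity; a real sampler would pad or use drop_last.
        self.num_samples = len(dataset) // num_replicas
        self.rank = rank
        self.seed = seed
        self.epoch = 0

    def set_epoch(self, epoch: int) -> None:
        # Called once per epoch (e.g. by the DDP training loop) to vary the shuffle.
        self.epoch = epoch

    def __iter__(self):
        g = torch.Generator()
        g.manual_seed(self.seed + self.epoch)
        start = self.rank * self.num_samples
        # Shuffle only inside [start, start + num_samples): indices never cross ranks.
        perm = torch.randperm(self.num_samples, generator=g) + start
        return iter(perm.tolist())

    def __len__(self) -> int:
        return self.num_samples
```

Each rank would then pass `sampler=ShardedDistributedSampler(dataset, world_size, rank)` to its DataLoader instead of the stock `DistributedSampler`, whose global shuffle moves indices across ranks between epochs and defeats per-rank caching.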