DDP specifics: Single node function execution, test loss sync, num_workers, sync_batchnorm, precision #4387
Asked by ManiadisG in DDP / multi-GPU / multi-node. Answered by s-rog.
❓ Questions and Help

What is your question?

This is about DDP specifics and about the handling of functions in a script that we only want executed once (not once per GPU). I think these points are missing from the docs and I couldn't find answers elsewhere; I apologize if they have been covered already.

Questions 1-4 are summarized in the following script, and my question is whether it works/can work as written.
Answered by s-rog (Oct 29, 2020)

Replies: 1 comment
I'll close this for now, if you have more questions please use our forums! https://forums.pytorchlightning.ai/
Answer selected by Borda
The accepted answer's pattern, for code that should run only once:

if self.global_rank == 0:
    # do stuff
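A minimal, framework-free sketch of the rank-0 guard the answer describes. It assumes the rank is available via the RANK environment variable (which torch.distributed launchers set per process; inside a LightningModule you would read self.global_rank instead). The `run_once` decorator name is hypothetical, invented here for illustration; PyTorch Lightning also ships a built-in `rank_zero_only` decorator in `pytorch_lightning.utilities` that serves the same purpose.

```python
import os


def global_rank() -> int:
    # In a real DDP job, the launcher sets RANK per process; Lightning
    # exposes the same value as self.global_rank. Default to 0 so the
    # code also works in a plain single-process run.
    return int(os.environ.get("RANK", 0))


def run_once(fn):
    """Hypothetical decorator: run fn only on the main process (rank 0).

    Non-zero ranks skip the call and get None back.
    """
    def wrapper(*args, **kwargs):
        if global_rank() == 0:
            return fn(*args, **kwargs)
        return None
    return wrapper


@run_once
def download_dataset():
    # Example of work that must not be duplicated across GPUs,
    # e.g. downloading data or writing a summary file.
    return "downloaded"
```

Note that this guard only prevents duplicate execution; if the other ranks need the result (e.g. the downloaded files), you still have to synchronize afterwards, for instance with a `torch.distributed.barrier()`.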