@SangbumChoi definitely open to trying out something more efficient! Best-case scenario, we have a flag to use all_gather instead and default to this new method in the function. Would you like to take a stab at a PR?
System Info
Information
Tasks
One of the scripts in the examples/ folder of Accelerate or an officially supported no_trainer script in the examples folder of the transformers repo (such as run_no_trainer_glue.py)

Reproduction
Related issue
huggingface/transformers#15466
https://github.com/huggingface/transformers/pull/28769/files
Expected behavior
accelerate/src/accelerate/utils/operations.py, line 353 (commit 55136b8)
All of Accelerate's gather functions are restricted to all_gather. However, there is also the option of using gather to collect results only on the main process for evaluation. If we use all_gather for evaluation and move the result to CPU, it costs n times the memory (where n is the number of processes), even though we only need the gathered variables in one place to compute the metrics. What do you think about this?
https://github.com/facebookresearch/detectron2/blob/ebe8b45437f86395352ab13402ba45b75b4d1ddb/detectron2/utils/comm.py#L188
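A minimal, framework-free sketch of the memory argument above (pure Python, simulating n ranks; the `torch.distributed` names in the comments are the real collectives, but the simulation itself is only illustrative — it does not use torch):

```python
# Simulate the aggregate memory cost of all_gather vs. gather-to-one-rank.
# With n processes, all_gather materializes the full result on every rank
# (n full copies in aggregate); gather materializes it only on the
# destination rank (1 copy), which is all that metric computation needs.

def all_gather_sim(shards):
    # Like torch.distributed.all_gather: every rank ends up with every shard.
    return [list(shards) for _ in shards]  # one full copy per rank

def gather_sim(shards, dst=0):
    # Like torch.distributed.gather(tensor, gather_list, dst): only `dst`
    # receives all shards; the other ranks hold nothing extra.
    return [list(shards) if rank == dst else None
            for rank in range(len(shards))]

def aggregate_copies(per_rank):
    # Count how many full copies of the gathered result exist across ranks.
    return sum(1 for buf in per_rank if buf is not None)

shards = [[0, 1], [2, 3], [4, 5], [6, 7]]  # n = 4 ranks, one shard each
assert aggregate_copies(all_gather_sim(shards)) == 4  # n copies
assert aggregate_copies(gather_sim(shards)) == 1      # single copy on dst
```

The linked detectron2 `comm.gather` helper follows the same idea: every rank contributes its data, but only the destination rank allocates buffers for the full result, so evaluation on the main process avoids the n-fold blow-up.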