When performing distributed continual learning tasks, it is common to expand the model's parameters as tasks increase. For example, I have defined an `expand_classifier()` method that grows the classifier with randomly initialized parameters.
How can I ensure that the newly added parameters are initialized identically on each GPU's copy of the model?
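One possible approach, sketched below under the assumption that `expand_classifier()` draws its random weights from torch's global RNG, is to seed every rank with the same value immediately before the expansion, so no communication is needed at all (`expand_with_shared_seed` is a hypothetical helper name):

```python
import torch

def expand_with_shared_seed(model, seed: int = 1234):
    # Seed torch's CPU and CUDA RNGs identically on every rank right before
    # the expansion, so the randomly initialized classifier weights come out
    # the same on each replica without any extra communication.
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # expand_classifier() is the user-defined method from this issue; `model`
    # is assumed to be the DDP-wrapped model, hence `.module`.
    model.module.prompt.expand_classifier()
```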
If I do

```python
if self.accelerator.is_main_process:
    self.model.module.prompt.expand_classifier()
```

how can I sync the classifier across all distributed model replicas?
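A common alternative to expanding only on the main process is to run the expansion on every rank and then broadcast the parameters from rank 0, so all replicas end up with rank 0's random initialization. The sketch below uses plain `torch.distributed` rather than an Accelerate-specific helper, and `expand_and_broadcast` is a hypothetical name:

```python
import torch
import torch.distributed as dist

def expand_and_broadcast(model):
    # Run the expansion on every rank so the module structure (and parameter
    # shapes) stays consistent across processes.
    model.module.prompt.expand_classifier()

    # Broadcast all parameters and buffers from rank 0. Tensors that were
    # already in sync are unchanged; the freshly initialized classifier
    # weights take rank 0's values on every other rank.
    for param in model.module.parameters():
        dist.broadcast(param.data, src=0)
    for buf in model.module.buffers():
        dist.broadcast(buf.data, src=0)
```

Note that after the parameter set changes, the optimizer and the DDP wrapper typically need to be rebuilt or re-prepared, since gradient buckets and optimizer state are registered against the original parameters.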