Hi,
Thanks for the contribution and the updated code on Supervised Contrastive Learning.
My question is related to this part of the loss:
https://github.com/google-research/syn-rep-learn/blob/main/StableRep/models/losses.py#L79
local_batch_size = feats.size(0)
...
# Create the label matrix. In our specific case, the
# label matrix inside each batch is the same, so we
# can create it once and reuse it. For other cases,
# users need to compute it for each batch.
if local_batch_size != self.last_local_batch_size:
    ...
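For reference, here is a minimal, self-contained sketch of the pattern I am asking about (my own reconstruction, not the verbatim repository code: the class name is hypothetical, the all_gather calls are replaced with single-process placeholders, and the loss tail is simplified):

import torch
import torch.nn.functional as F

class CachedMaskLoss(torch.nn.Module):
    def __init__(self, temperature=0.1):
        super().__init__()
        self.temperature = temperature
        self.last_local_batch_size = None
        self.mask = None

    def forward(self, feats, labels):
        feats = F.normalize(feats, dim=-1, p=2)
        local_batch_size = feats.size(0)

        # In a real distributed run, feats/labels would be all_gather'ed;
        # every rank gathers the same all_labels, so the mask built below
        # is identical on every GPU for a given batch.
        all_feats = feats      # placeholder for gathered features
        all_labels = labels    # placeholder for gathered labels

        # Rebuild the mask only when the local batch size changes, e.g.
        # for the smaller final batch when drop_last=False.
        if local_batch_size != self.last_local_batch_size:
            self.mask = torch.eq(all_labels.view(-1, 1),
                                 all_labels.contiguous().view(1, -1)).float()
            self.last_local_batch_size = local_batch_size

        # Simplified multi-positive contrastive objective over the mask.
        logits = feats @ all_feats.t() / self.temperature
        log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
        return -(self.mask * log_prob).sum(1).div(self.mask.sum(1)).mean()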
My understanding is that, for a given batch in a distributed setting, the label tensor (after all_gather) will be identical across all the GPUs, so there is no need to compute it multiple times. Once per batch is enough.
My question is then about the condition local_batch_size != self.last_local_batch_size: why is the check done on the batch size and not on the tensor values? Isn't the batch size pretty much the same during training?
Thank you!
Generally I think you are right! It's just that in the setting of the StableRep paper, the label map is pretty much decided by the batch size. The batch size may change if the drop_last flag is false in the data loader.
If this loss is repurposed for supervised contrastive learning, I think we should comment out the if check.
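For illustration, a minimal sketch of that change, assuming class-id labels as in the supervised case (supcon_mask is a hypothetical helper, not code from this repo):

import torch

def supcon_mask(labels: torch.Tensor) -> torch.Tensor:
    # Positive-pair mask from class labels. With class labels, two batches
    # of the same size generally carry different label patterns, so the
    # batch-size check is no longer a safe proxy for "mask unchanged" and
    # the mask should be rebuilt every batch.
    labels = labels.contiguous().view(-1, 1)
    return torch.eq(labels, labels.t()).float()

# Inside forward(), after gathering all_labels, the cached branch becomes:
#     mask = supcon_mask(all_labels)   # no local_batch_size != ... guard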