
Question on MultiPosConLoss and local_batch_size != self.last_local_batch_size #3

Open
ArdalanM opened this issue Jan 12, 2024 · 1 comment

Comments

@ArdalanM

Hi,

Thanks for the contribution and updated code on Supervised Contrastive Learning.
My question is related to this part of the loss:
https://github.com/google-research/syn-rep-learn/blob/main/StableRep/models/losses.py#L79

```python
local_batch_size = feats.size(0)
...
# Create label matrix. Since in our specific case the
# label matrix inside each batch is the same, we can
# just create it once and reuse it. For other cases,
# users need to compute it for each batch.

if local_batch_size != self.last_local_batch_size:
    ...
```

My understanding is that, for a given batch in a distributed setting, the label tensor (after all_gather) will be identical across all GPUs, so there is no need to compute it multiple times; once per batch is enough.

My question is then about the condition local_batch_size != self.last_local_batch_size: why is the check done on the batch size rather than on the tensor values? Isn't the batch size essentially constant during training?
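To make the discussion concrete, here is a minimal sketch of the caching pattern being asked about. It is not the repo's code: the `LabelMaskCache` class, the `n_views` parameter, and the positional-label construction are all illustrative assumptions, and it runs in a single process (no all_gather).

```python
import torch

class LabelMaskCache:
    """Sketch (not the repo's code) of the caching pattern in question.

    If the labels inside a batch are positional (e.g. an image index
    repeated once per view/caption), the positive-pair mask is fully
    determined by the batch size, so it only needs rebuilding when
    that size changes.
    """

    def __init__(self, n_views=2):
        self.n_views = n_views
        self.last_local_batch_size = None
        self.mask = None
        self.rebuilds = 0  # counter for illustration only

    def get_mask(self, local_batch_size):
        if local_batch_size != self.last_local_batch_size:
            # Positional labels: [0, 0, 1, 1, ...] for n_views=2
            labels = torch.arange(local_batch_size // self.n_views)
            labels = labels.repeat_interleave(self.n_views)
            # mask[i, j] = 1 iff samples i and j share a label
            self.mask = (labels.view(-1, 1) == labels.view(1, -1)).float()
            self.last_local_batch_size = local_batch_size
            self.rebuilds += 1
        return self.mask

cache = LabelMaskCache()
m1 = cache.get_mask(4)
m2 = cache.get_mask(4)  # same batch size: the cached mask is reused
```

With this construction, checking the batch size is equivalent to checking the label values, which is the crux of the question.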

Thank you !

@HobbitLong (Collaborator)

Hi, @ArdalanM,

Thanks for the question!

Generally, I think you are right! It's just that in the StableRep setting, the label map is fully determined by the batch size. The batch size may change when the drop_last flag is False in the data loader (the final batch can be smaller).
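The drop_last point is easy to demonstrate: with 10 samples and a batch size of 4, the loader's final batch shrinks to 2 unless incomplete batches are dropped. The dataset below is made up for illustration.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

ds = TensorDataset(torch.randn(10, 3))  # 10 samples, batch size 4

sizes = [x.size(0) for (x,) in DataLoader(ds, batch_size=4, drop_last=False)]
# → [4, 4, 2]: the smaller final batch changes local_batch_size,
# which is exactly what triggers the cache rebuild

sizes_dropped = [x.size(0) for (x,) in DataLoader(ds, batch_size=4, drop_last=True)]
# → [4, 4]: with drop_last=True every batch has the same size
```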

If this loss is repurposed for supervised contrastive learning, I think we should comment out the if check.
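For the repurposed supervised contrastive case, one way to handle it (a sketch, not the repo's code) is to rebuild the positive mask from the class labels on every batch, since the labels now vary from batch to batch even when the batch size does not:

```python
import torch

def positive_mask(labels):
    # General supervised contrastive case: labels are class labels that
    # differ from batch to batch, so the mask must be recomputed for
    # every batch rather than cached on the batch size alone.
    return (labels.view(-1, 1) == labels.view(1, -1)).float()

mask = positive_mask(torch.tensor([0, 1, 0]))
# samples 0 and 2 share a class, so they are positives for each other;
# sample 1 is only its own positive
```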
