Question about RandAugment Implementation #65
When we say in our paper that we apply a "weak augmentation", this means (among other things): "flip, with 50% probability, the image across the vertical axis". When we say "strong augmentation", this means (for the case of RA): "apply the RA function 100% of the time". I'm not sure what any other repository is doing, but the 50% number in our paper means only that flips are done 50% of the time; RA is treated as a black box that can do whatever it wants to augment. Similarly, when we say we use CTA, this means we use the CTA function to augment 100% of the time and let it do whatever it wants.
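The distinction can be sketched in a few lines (a minimal illustration, not code from this repository; `hflip`, `weak_augment`, and `strong_augment` are hypothetical names, and the paper's weak augmentation also includes a random shift omitted here):

```python
import random

def hflip(image):
    # Flip a list-of-rows image across the vertical axis.
    return [row[::-1] for row in image]

def weak_augment(image, rng=random):
    # "Weak" augmentation: the 50% in the paper refers only to this flip.
    return hflip(image) if rng.random() < 0.5 else image

def strong_augment(image, randaugment):
    # "Strong" augmentation: call the RA function 100% of the time and let it
    # decide internally what to do; RA is a black box here.
    return randaugment(image)
```

The key point is that `weak_augment` contains the only 50% coin flip; `strong_augment` applies its augmenter unconditionally.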
Thank you for this fast response; I believed it was 100% but had to double-check.
Hi @carlini, I want to get the weak and strong augmented images of the labeled images, like the unlabeled ones, in each iteration, but after I change Line 49 in d4985a1 and Line 88 in d4985a1
Sorry, I don't quite understand what you want. Could you expand?
Hi @carlini, I mean: how do I get the strongly augmented labeled images? Are they 'x_in'? (Line 89 in d4985a1)
What do you want to do with the augmentations?
Hi @carlini, thank you for your reply. (Lines 87 to 88 in d4985a1)
The labeled images get passed as part of the unlabeled set as well, and that's how the labeled images are strongly augmented. I'm still confused about what your overall objective is, though. If you just want to weakly and strongly augment a particular set of images, it would be easier to call directly into the augmentation libraries. Do you want something else?
Hi @carlini, I just want the strongly augmented images for only the labeled images in a batch. For example, in each batch I have 64 labeled images L and 64*7*2 unlabeled images U, and I want the corresponding 64 strongly augmented images of L. I know that labeled images get passed as part of the unlabeled set, but I want just the strongly augmented labeled images in each batch. I want to know what 'x_in' is, because I think it is the strongly augmented version of 'xt_in'. Am I right? If not, which augmentation libraries should I call to get the strongly augmented images of L in each iteration? Thank you so much.
FixMatch does not strongly augment each of the labeled images in the batch on every iteration; it only weakly augments those images. The only time the labeled images become strongly augmented is when they are passed as unlabeled images. (See Algorithm 1, line 2.) So if you want strongly augmented labeled images, you will need to collect them from the unlabeled dataset when they happen to appear there.
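That collection step can be sketched as follows (a hypothetical snippet, assuming you can tag each unlabeled example with its dataset index; none of these names come from the FixMatch codebase):

```python
def collect_strong_labeled(unlabeled_batches, labeled_ids):
    # Gather the strongly augmented versions of labeled examples as they
    # happen to appear in the unlabeled stream.
    collected = {}
    for batch in unlabeled_batches:
        for idx, weak_img, strong_img in batch:
            if idx in labeled_ids:
                collected[idx] = strong_img
    return collected

# Example: indices 0 and 1 are labeled; only their strong views are kept.
batches = [[(0, "w0", "s0"), (5, "w5", "s5")], [(1, "w1", "s1")]]
print(collect_strong_labeled(batches, {0, 1}))  # {0: 's0', 1: 's1'}
```

Note that with this approach you only see a given labeled image's strong view on the iterations where it is sampled into the unlabeled batch, not on every step.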
Hi @carlini, thank you for your reply. (Lines 87 to 88 in d4985a1) I found some explanation in #4 (comment), but I still don't understand how x_in is generated. Thank you so much.
Lines 137 to 138 in d4985a1
Hi @carlini, the code in https://github.com/google-research/fixmatch/blob/master/fixmatch.py#L38 seems to apply augmentation to the training labeled images and generate x_in. The code is a little complicated for me, and I hope you don't mind my bothering you.
Yeah, the code is a bit convoluted; it evolved from three projects. If you'd like a simpler implementation, David wrote one here. However, for your question: no, those augmentations aren't going into x_in, and I'm not quite sure why you think they are. If you look at the code, x_in (Line 88 in d4985a1) is only ever used here, exported as x (Lines 136 to 138 in d4985a1), and that x gets fed here as the prediction op (Lines 228 to 233 in d4985a1).
Thank you so much, and thank you for the simplified version, @carlini. (Lines 63 to 69 in d4985a1) Are x['probe'] here the evaluation images, or the training labeled images? I am still not very clear about that... sorry
Sorry for the late reply. If you haven't figured it out already, probe here is used to check how accurate the model is on these augmented images.
In the paper you mention that flips are applied with 50 percent probability. Is it also the case that each RandAugment op is applied with 50 percent probability, or are two RandAugment choices always applied to each unlabelled image?
The reason I ask is that in the main PyTorch reimplementation of your work there is a 50 percent chance for each RandAugment policy to be applied, but I didn't see this in your paper.
Many thanks
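The two readings of the question can be contrasted in a short sketch (hypothetical code, not from either repository; `ops` stands in for the RandAugment transform pool):

```python
import random

def apply_randaugment(image, ops, n=2, p_op=1.0, rng=random):
    # p_op=1.0: every chosen op is applied (the always-on, black-box reading
    #           the authors describe above for the official implementation).
    # p_op=0.5: each chosen op is applied with only 50% probability (the
    #           behavior observed in the PyTorch reimplementation).
    for op in rng.choices(ops, k=n):
        if rng.random() < p_op:
            image = op(image)
    return image
```

With `p_op=1.0` and a pool containing a single increment op, both chosen ops always fire, so `apply_randaugment(0, [lambda x: x + 1], n=2, p_op=1.0)` returns 2.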