Background
Currently, the progressive learning network transformer is learned as a byproduct of optimizing a softmax cross-entropy loss for classification accuracy.
Contrastive loss (reference 1, reference 2) learns a transformer explicitly, pulling samples of the same class together and penalizing samples of different classes that lie close to one another (see also margin loss). This may be better suited for the kNN used later, and contrastive methods have shown state-of-the-art accuracy.
See official implementation here.
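For concreteness, below is a minimal sketch of the classic pairwise form of contrastive loss in plain NumPy. The function name, margin default, and pairing convention are illustrative assumptions, not the API of the linked implementation; supervised and margin variants change the individual terms but share the same pull-together / push-apart structure.

```python
import numpy as np

def contrastive_loss(z1, z2, same_class, margin=1.0):
    """Pairwise contrastive loss sketch (illustrative, not the official API).

    z1, z2     : (n, d) arrays of embedded sample pairs
    same_class : (n,) boolean array, True where a pair shares a label
    margin     : different-class pairs are penalized only while their
                 distance is still below this margin
    """
    d = np.linalg.norm(z1 - z2, axis=1)  # Euclidean distance per pair
    pos = same_class * d ** 2  # pull same-class pairs together
    neg = (~same_class) * np.maximum(margin - d, 0.0) ** 2  # push others apart
    return float(np.mean(pos + neg))
```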
Proposed feature: implement contrastive loss.
Validate by comparing classification accuracy, then compare transfer efficiency. Determine which form of contrastive loss works best.
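As a rough sketch of the first validation step, cross-validated kNN accuracy on the embedded data gives a direct comparison between the softmax-trained and contrastive-trained transformers before measuring transfer efficiency. The embedding variable names below are hypothetical.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def knn_accuracy(embeddings, labels, k=5):
    """Cross-validated kNN accuracy on fixed embeddings: a simple proxy
    for how well a learned transformer separates the classes."""
    knn = KNeighborsClassifier(n_neighbors=k)
    return cross_val_score(knn, embeddings, labels, cv=5).mean()

# Hypothetical usage, where z_softmax and z_contrastive are embeddings
# produced by the two transformers under comparison:
#   knn_accuracy(z_softmax, y) vs. knn_accuracy(z_contrastive, y)
```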
Prior experiments
I had fiddled with this a bit. See the attempted contrastive learning implementation here and preliminary transfer efficiency benchmarks for variations on the contrastive loss layers.