Hello there!
I have fine-tuned your model with 2 output classes (skin and background). As a backbone I took your pretrained (19-class) model in order to speed up the training process.
So I just reused the context path, ffm, conv, conv16, and conv32 weights. Thus, only three convolutional layers (feat_out, feat_out16, and feat_out32, respectively) are trained from scratch.
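For reference, this is roughly how I initialize the 2-class model from your 19-class checkpoint. It is only a minimal sketch: the import path, the checkpoint filename, and the shape-matching trick are my own simplifications, not your repo's exact API.

```python
import torch
from model import BiSeNet  # assumed import path for the BiSeNet implementation

# Build the 2-class model and load the 19-class checkpoint, keeping only the
# tensors whose names and shapes still match. The three output heads
# (feat_out, feat_out16, feat_out32) no longer match because of the new class
# count, so they stay randomly initialized and are trained from scratch.
net = BiSeNet(n_classes=2)
pretrained = torch.load('pretrained_19_classes.pth', map_location='cpu')  # placeholder filename

own_state = net.state_dict()
matched = {k: v for k, v in pretrained.items()
           if k in own_state and v.shape == own_state[k].shape}
own_state.update(matched)
net.load_state_dict(own_state)
print(f'reused {len(matched)} of {len(own_state)} tensors from the checkpoint')
```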
I have used all of your hyperparameters and training approaches to fit the model. But after about 250 steps the training loss increases from 1.58 to 1.98 and afterwards stays there with small fluctuations.
What could be the problem? Any other ideas?
I tried using a scheduler after 180-200 steps (initial lr=0.01, gamma=0.1) and gradient clipping. Either way, it is always the same picture: the loss increases.
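In case it matters, this is approximately what the training loop looks like with the scheduler and clipping. It is a simplified sketch: the momentum value, the clipping norm, and the single-output call to the network are placeholders, and the auxiliary heads are omitted.

```python
import torch
from torch.optim.lr_scheduler import StepLR

# `net`, `criterion` (OhemCELoss) and `loader` come from the usual setup.
optimizer = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)  # momentum is a guess
scheduler = StepLR(optimizer, step_size=200, gamma=0.1)  # stepped per iteration here

for step, (images, labels) in enumerate(loader):
    optimizer.zero_grad()
    out = net(images)              # auxiliary outputs omitted for brevity
    loss = criterion(out, labels)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(net.parameters(), max_norm=1.0)  # gradient clipping
    optimizer.step()
    scheduler.step()               # lr is multiplied by 0.1 every 200 iterations
```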
Thanks in advance for any ideas!
Batch size = 64
Initial learning rate = 0.01
Optimizer: SGD
Loss: OhemCELoss
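For context, my understanding of the OHEM cross-entropy is roughly the following. This is only a sketch of the idea, not your exact OhemCELoss class, and the threshold and minimum-pixel fraction are illustrative guesses.

```python
import torch
import torch.nn.functional as F

def ohem_ce_loss(logits, labels, thresh=0.7, ignore_index=255):
    """Keep only the hardest pixels: everything whose loss exceeds -log(thresh),
    but at least 1/16 of all pixels. Values here are illustrative guesses."""
    pixel_loss = F.cross_entropy(logits, labels,
                                 ignore_index=ignore_index,
                                 reduction='none').view(-1)
    n_min = pixel_loss.numel() // 16
    loss_thresh = -torch.log(torch.tensor(thresh, device=logits.device))
    sorted_loss, _ = torch.sort(pixel_loss, descending=True)
    if sorted_loss[n_min] > loss_thresh:
        kept = sorted_loss[sorted_loss > loss_thresh]   # enough hard pixels above the threshold
    else:
        kept = sorted_loss[:n_min]                      # fall back to the n_min hardest pixels
    return kept.mean()
```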
P.S. I have waited for 2500 steps (nearly 7 epochs) - there is not a single hint of a subsequent decrease.