
Loss error #157

Open
amkonshin opened this issue Nov 17, 2021 · 1 comment

@amkonshin
I have some issues with training in PyTorch. I rewrote the loss function so it is the same as yours.
I trained as you recommended and got bad results. After investigating, I noticed that the network predicts awful probabilities in the channel 0 and channel 1 outputs. They are all above 1, so you can't select good detections using a threshold of 0.5 or similar.
Then I reran training while printing the L1, obj, and no-obj losses, and it turns out the obj and no-obj losses always become 0 after a few iterations.
Because your loss function logloss:

def logloss(Ptrue, Pred, szs, eps=10e-10):
    b, h, w, ch = szs
    Pred = tf.clip_by_value(Pred, eps, 1.)
    Pred = -tf.log(Pred)
    Pred = Pred * Ptrue
    Pred = tf.reshape(Pred, (b, h*w*ch))
    Pred = tf.reduce_sum(Pred, 1)
    return Pred
equals 0 when the predicted values are at or above 1 (ln 1 = 0). So the network learns to predict big values, and that is not how it is supposed to work, because later we need these probabilities to estimate good predictions, run NMS, and so on.
So what is the point of using this loss function, or am I wrong somewhere?

@amkonshin (Author)

By the way, I've found an error in my code: I used ReLU instead of Sigmoid as the last-layer activation.
But I still can't understand why we need 2 filters for obj and no-obj. Why can't we use 1 filter with the object probability and use BCELoss?
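For what it's worth, the single-filter alternative the question describes can be sketched as follows (function names and the per-pixel formulation are illustrative assumptions, not the repo's code). When the no-obj channel is the complement of the obj channel, the two-channel log-loss with a one-hot target reduces to exactly binary cross-entropy on one channel:

```python
import numpy as np

def bce(p_obj, target, eps=1e-7):
    # Hypothetical single-filter head: one sigmoid output p_obj,
    # with p_noobj implicitly equal to 1 - p_obj.
    p = np.clip(p_obj, eps, 1.0 - eps)
    return float(-(target * np.log(p) + (1.0 - target) * np.log(1.0 - p)))

def two_channel_logloss(p_obj, p_noobj, target, eps=1e-10):
    # Per-pixel version of the two-channel clipped log-loss, with a
    # one-hot target: channel 0 weighted by target, channel 1 by 1 - target.
    loss_obj = -np.log(np.clip(p_obj, eps, 1.0)) * target
    loss_noobj = -np.log(np.clip(p_noobj, eps, 1.0)) * (1.0 - target)
    return float(loss_obj + loss_noobj)

# When the two channels form a proper distribution (p_noobj = 1 - p_obj),
# both formulations give the same number:
p = 0.8
print(bce(p, 1.0), two_channel_logloss(p, 1.0 - p, 1.0))
```

So mathematically the two are equivalent as long as the two output channels are forced to sum to 1 (e.g. by a softmax over the two channels); with two independent channels they are not.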
