
Late in training, loss becomes 0.1 and an, ap become 0 #46

Open

Johere opened this issue Jul 10, 2017 · 6 comments

Comments


Johere commented Jul 10, 2017

@luhaofang Hello, I am using your code with CASIA-WebFace as the training set: 10,572 classes and more than 400,000 images. Because the dataset is large, I skipped softmax pretraining and trained from scratch with triplet loss directly. Late in training every loss becomes 0.1 with an = 0 and ap = 0, and when I test with an intermediate caffemodel, the 128-D feature vectors of all images are identical. Why does this happen? Thanks!
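
(This is exactly what the triplet loss predicts under embedding collapse: if the network maps every image to the same vector, both distances are zero and the hinge saturates at the margin. A minimal numpy sketch, with placeholder embeddings rather than real network outputs:)

```python
import numpy as np

margin = 0.1  # the alpha in L = max(0, ||a-p||^2 - ||a-n||^2 + alpha)

# Collapsed network: every image maps to the same 128-D embedding.
anchor = positive = negative = np.zeros(128)

ap = np.sum((anchor - positive) ** 2)  # anchor-positive distance -> 0
an = np.sum((anchor - negative) ** 2)  # anchor-negative distance -> 0
loss = max(0.0, ap - an + margin)      # 0 - 0 + 0.1 = 0.1

print(ap, an, loss)  # 0.0 0.0 0.1 -- the values reported above
```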

luhaofang (Owner) commented Jul 10, 2017

Hi, triplet loss is an embedding method for metric optimization, so it needs a well-trained classification model first.
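
(A rough sketch of the two-stage recipe suggested here, written in PyTorch with a hypothetical toy network rather than this repo's Caffe model: first pretrain the embedding with a softmax classifier over the identities, then discard the classifier and fine-tune the embedding with triplet loss.)

```python
import torch
import torch.nn as nn

# Hypothetical tiny embedding net standing in for the real face model.
embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 128))
classifier = nn.Linear(128, 10572)  # one logit per CASIA-WebFace identity

# Stage 1: pretrain embedding + classifier with a softmax (cross-entropy) loss.
opt = torch.optim.SGD(
    list(embed.parameters()) + list(classifier.parameters()), lr=0.01)
images = torch.randn(32, 3, 112, 112)   # dummy batch
labels = torch.randint(0, 10572, (32,))
opt.zero_grad()
nn.CrossEntropyLoss()(classifier(embed(images)), labels).backward()
opt.step()

# Stage 2: discard the classifier; fine-tune the embedding with triplet loss.
triplet = nn.TripletMarginLoss(margin=0.1)
a, p, n = embed(images[:10]), embed(images[10:20]), embed(images[20:30])
loss = triplet(a, p, n)  # anchors, positives, negatives from the batch
loss.backward()
```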

Johere (Author) commented Jul 10, 2017

OK, thanks for your answer. I will try pretraining first and see how it works.

@ZongxianLee

@Johere Hello, would it be convenient for you to share your email? I am also using triplet loss for fine-grained classification, and I am new to it. Hoping to communicate with you! Thanks!

@zhangxiaopang88

Hello, how did the final result turn out after adding the pretrained model? I ran into the same problem and don't know how to solve it. @luhaofang @Johere

@xiaomingdaren123

@zhangxiaopang88 I ran into the same problem too. Could we discuss it?


libarry commented May 12, 2019

I think this happens when the batch is too small; the original paper needs more than 1,600 samples per batch. Given Caffe's GPU-memory efficiency, it seems simply impossible to reach such a batch_size.
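
(The original FaceNet paper mined triplets online inside very large mini-batches, which is why a small batch tends to stall training: few negatives fall inside the semi-hard window. A rough numpy sketch of that semi-hard mining rule, applied to a hypothetical batch of embeddings:)

```python
import numpy as np

def semi_hard_negatives(emb, labels, margin=0.1):
    """For each (anchor, positive) pair, pick a negative n with
    d(a,p) < d(a,n) < d(a,p) + margin (the semi-hard rule)."""
    bsz = len(emb)
    d = np.sum((emb[:, None] - emb[None, :]) ** 2, axis=-1)  # pairwise sq. dists
    triplets = []
    for a in range(bsz):
        positives = np.flatnonzero((labels == labels[a]) & (np.arange(bsz) != a))
        for p in positives:
            ok = (labels != labels[a]) & (d[a] > d[a, p]) & (d[a] < d[a, p] + margin)
            negs = np.flatnonzero(ok)
            if negs.size:
                triplets.append((a, p, negs[np.argmin(d[a, negs])]))
    return triplets

emb = np.random.randn(64, 128)                # hypothetical batch of embeddings
labels = np.random.randint(0, 8, size=64)     # 8 identities in the batch
print(len(semi_hard_negatives(emb, labels)))  # small batch -> few or no triplets
```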
