This repository has been archived by the owner on Jan 12, 2024. It is now read-only.

Test problem #97

Open
qianxiao111 opened this issue Mar 17, 2023 · 2 comments

Comments

@qianxiao111

qianxiao111 commented Mar 17, 2023

Why does testing the same abnormal image with a trained model give different results (AUC) each time? How can I solve this problem? Also, when a normal sample is put into the abnormal folder for testing, the test results are bad as well.

@caiyu6666

I found that in the testing code (lib/model.py, line 175), the network is not switched into evaluation mode (the calls self.netg.eval() and self.netd.eval() are missing). As a result, the model keeps updating the running mean and variance in its BatchNorm2d layers during testing.

I think that is why the test results change on every run.

Additionally, I found that omitting net.eval() yields higher reported performance, since the model effectively learns something from the test set. So this bug can lead to unconvincing results.
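To see why a missing .eval() makes repeated test runs non-deterministic, here is a minimal pure-Python sketch of a BatchNorm-style layer with running statistics. The class, its momentum value, and the toy batch are all illustrative assumptions, not code from this repository; the point is only that a layer left in training mode mutates its running mean/variance on every forward pass, so the state it carries (and hence any eval-mode output) depends on how many test batches it has already seen.

```python
# Illustrative sketch (NOT the repository's code): a toy 1-D BatchNorm-like
# layer showing that forward passes in training mode mutate running statistics,
# which is why test results drift when .eval() is never called.

class ToyBatchNorm:
    def __init__(self, momentum=0.1, eps=1e-5):
        self.momentum = momentum      # illustrative value
        self.eps = eps
        self.running_mean = 0.0
        self.running_var = 1.0
        self.training = True          # analogous to the missing .eval() call

    def eval(self):
        self.training = False

    def forward(self, xs):
        if self.training:
            # Training mode: normalize with batch statistics AND update the
            # running statistics -- so internal state changes on every pass.
            n = len(xs)
            mean = sum(xs) / n
            var = sum((x - mean) ** 2 for x in xs) / n
            m = self.momentum
            self.running_mean = (1 - m) * self.running_mean + m * mean
            self.running_var = (1 - m) * self.running_var + m * var
        else:
            # Eval mode: use the frozen running statistics; no state change.
            mean, var = self.running_mean, self.running_var
        return [(x - mean) / (var + self.eps) ** 0.5 for x in xs]


batch = [1.0, 2.0, 3.0, 4.0]

# One "test" pass while incorrectly in training mode, then freeze.
bn = ToyBatchNorm()
bn.forward(batch)
bn.eval()
out_after_one_pass = bn.forward(batch)

# Two "test" passes while in training mode, then freeze.
bn2 = ToyBatchNorm()
bn2.forward(batch)
bn2.forward(batch)
bn2.eval()
out_after_two_passes = bn2.forward(batch)

# The SAME batch is normalized differently depending on how many passes the
# layer saw while in training mode -- the source of run-to-run variation.
print(out_after_one_pass != out_after_two_passes)  # True
```

In PyTorch the remedy is exactly what the comment above describes: call .eval() on each network before inference so BatchNorm (and Dropout) layers stop updating state, typically alongside a torch.no_grad() context.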

@tobybreckon

See the discussion under issue #83.
