Hi,
I tried to convert a sat2map model to ONNX and found a significant decrease in accuracy.
Does anyone have advice or suggestions, or has anyone experienced something similar?
I have run many tests with different parameters for torch.onnx.export(), but with no improvement.
During training I specified --norm instance, so the batch normalization problem reported elsewhere should be excluded.
The generator network is unet_256.
Has anyone run into something similar?
Thanks,
Bea
The accuracy loss comes from the batch normalization layers: BN behaves differently at inference time than during training.
The test.py script does not run a true inference pass unless you set the --eval parameter on the command line. In inference mode the PyTorch method model.eval() is called (a step that is also required before exporting to ONNX), and it changes the behaviour of BN. If your batch size at inference is 1, retrain your model with instance normalization if possible; that resolved the difference in outputs for me.