Currently our FID/256 implementation relies on the torchmetrics FID, which interpolates images before feeding them to the Inception model. For 256×256 images this doesn't make a huge difference, but many standard methods compute FID without any interpolation.
So, we should probably add an FID metric that skips interpolation and passes the images straight to the Inception model.
In my past experiments, this caused a difference of roughly 0.01–0.1 in the FID score on CLIC2020.
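For reference, a minimal sketch of the underlying computation: once features are extracted (from Inception run on the raw, un-resized images in the proposed variant), FID is the Fréchet distance between Gaussian fits of the real and generated feature sets. The random feature arrays below stand in for Inception activations, purely to keep the example self-contained; this is not the torchmetrics implementation.

```python
import numpy as np

def frechet_distance(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    """Fréchet distance between Gaussians fit to two feature sets of shape (N, d)."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    diff = mu1 - mu2
    # tr(sqrtm(S1 @ S2)) equals the sum of square roots of the eigenvalues
    # of S1 @ S2 (real and non-negative when both covariances are PSD).
    eigvals = np.linalg.eigvals(s1 @ s2)
    trace_covmean = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return float(diff @ diff + np.trace(s1) + np.trace(s2) - 2.0 * trace_covmean)

# Stand-in features (hypothetical; real use would extract Inception activations).
rng = np.random.default_rng(0)
real = rng.normal(size=(512, 8))
fake = rng.normal(loc=0.5, size=(512, 8))
print(frechet_distance(real, real))  # near zero: identical distributions
print(frechet_distance(real, fake))  # positive: shifted distribution
```

The no-interpolation variant would only change the feature-extraction step (feeding 256×256 images directly to Inception, whose adaptive pooling accepts that resolution); the distance computation itself is unchanged.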