🐛 Bug
I am facing a bit of a bizarre problem: I have a bunch of different-sized images that I am trying to train and run inference on, and I have the following example transform code:
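(The code block did not survive the page extraction. Based on the description, it was a `Compose` of a custom `PadSquare` followed by `Resize` with a single int; the sketch below is a plausible reconstruction, and the `PadSquare` body in particular is an assumption, since the original implementation is not shown.)

```python
import torchvision.transforms.functional as TF
from torchvision import transforms

class PadSquare:
    """Pad the shorter side so the image becomes (roughly) square.

    Note: this reconstruction splits the padding with integer division
    on both sides, which leaves the image one pixel short of square
    whenever the width/height difference is odd -- the kind of
    implementation that could produce the error below.
    """
    def __call__(self, img):
        w, h = img.size                  # PIL size is (width, height)
        side = max(w, h)
        pad_w = (side - w) // 2
        pad_h = (side - h) // 2
        # [left, top, right, bottom]; an odd difference loses one pixel here
        return TF.pad(img, [pad_w, pad_h, pad_w, pad_h])

transform = transforms.Compose([
    PadSquare(),
    transforms.Resize(384),  # single int: scales the *smaller* edge only
    transforms.ToTensor(),
])
```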
When I use a batch size > 1, I get thrown this:

```
RuntimeError: stack expects each tensor to be equal size, but got [3, 384, 384] at entry 0 and [3, 385, 384] at entry 3
```
I find this really bizarre: after PadSquare, resizing with a single int should give me a square image back, but it seems it does not. Why is this? Is this a bug? It almost looks like some round-off error ([3, 384, 384] at entry 0 vs. [3, 385, 384] at entry 3).
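For what it's worth, the off-by-one is consistent with `Resize` receiving a not-quite-square input: with a single int, `Resize` scales the *smaller* edge to that size and keeps the aspect ratio, so any mismatch between the two sides survives the resize. A minimal sketch (the 770×768 input is a made-up size chosen to show the effect):

```python
from PIL import Image
from torchvision import transforms

img = Image.new("RGB", (770, 768))   # PIL size is (width, height)

out = transforms.Resize(384)(img)    # single int: smaller edge -> 384
print(out.size)                      # (385, 384): the longer edge scales with the aspect ratio
```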
However, if I do this:
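(This snippet was also lost in the extraction. Given the symptom, it presumably forces an exact output size by passing a (height, width) pair rather than a single int; the following is a guess at the shape of the fix, not a verbatim copy of the original:)

```python
transform = transforms.Compose([
    PadSquare(),
    transforms.Resize((384, 384)),  # explicit (h, w): output is always exactly 384x384
    transforms.ToTensor(),
])
```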
it works fine...
What is the reason behind this? I am perplexed! When I try out sample images in, say, Colab, they seem to have the same size...
Unfortunately, I am loading some 150k images using ImageFolder, so I am not able to inspect the offending images directly.
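One way around that (a sketch, not from the original post; the dataset root is a hypothetical path and `transform` is the pipeline from above) is to run the transform over the whole dataset once and log every sample whose tensor has an unexpected shape:

```python
from torchvision import datasets

dataset = datasets.ImageFolder("data/train", transform=transform)  # hypothetical root

# Scan all samples and report the file path of any image whose
# transformed tensor is not the expected 3x384x384.
for idx in range(len(dataset)):
    img, _ = dataset[idx]
    if tuple(img.shape) != (3, 384, 384):
        path, _ = dataset.samples[idx]
        print(idx, path, tuple(img.shape))
```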