Are you already cropping your input images? If not, cropping would let you train on smaller volumes, which means you might get away with less downsampling. Which cropping transform fits best depends on your application.
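For example, if the structures you care about sit near the centre of the volume, a centre crop is enough. Here is a minimal NumPy sketch of the idea (in a real MONAI pipeline you would use the dictionary transforms `CenterSpatialCropd`, or `RandCropByPosNegLabeld` to sample patches around labelled foreground, rather than hand-rolling this); the shapes below are just illustrative:

```python
import numpy as np

def center_crop(vol: np.ndarray, roi: tuple) -> np.ndarray:
    """Crop a 3D volume to `roi` voxels around its centre.

    Crude stand-in for MONAI's CenterSpatialCrop; assumes the
    volume is at least `roi` voxels along every dimension.
    """
    starts = [(s - r) // 2 for s, r in zip(vol.shape, roi)]
    slices = tuple(slice(st, st + r) for st, r in zip(starts, roi))
    return vol[slices]

vol = np.zeros((240, 240, 155), dtype=np.float32)  # hypothetical input volume
patch = center_crop(vol, (128, 128, 128))
print(patch.shape)  # (128, 128, 128)
```

A 128³ crop of a 240×240×155 volume holds roughly a quarter of the voxels, so memory per sample drops accordingly.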
-
Hello,
I use Spacingd with pixdim to standardise incoming image dimensions. When I choose higher pixdim values I get smaller image volumes, and therefore shorter run times. It turns out that I don't actually have sufficient computational power to use the pixdim resolution I ideally want: I run out of memory.
I have trained the model to a satisfactory level at a certain pixdim, e.g. 3 mm. Out of interest, I then evaluated it on datasets at a smaller pixdim, e.g. 1 mm. As you would expect, the model does a poor job: I can't really blame it! I wondered, though, whether there is anything I could do (besides getting access to better computers!) to make this kind of technique work. I was considering whether I could add another layer, or upscale the model in some way?
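To make the mismatch concrete: a model trained at 3 mm spacing sees anatomy at roughly a third of the linear scale of a 1 mm volume, so without resampling, every learned filter is effectively looking at the wrong scale. One thing I could do at evaluation time is resample the 1 mm volumes back to the 3 mm training spacing (in MONAI this would just be `Spacingd(pixdim=(3.0, 3.0, 3.0))` in the eval pipeline). A crude NumPy-only sketch, assuming isotropic spacing and using nearest-neighbour indexing rather than proper interpolation:

```python
import numpy as np

def resample_nearest(vol: np.ndarray, in_mm: float, out_mm: float) -> np.ndarray:
    """Nearest-neighbour resample of an isotropic volume from
    `in_mm` to `out_mm` voxel spacing.

    Crude stand-in for MONAI's Spacingd(pixdim=...); a real
    pipeline should interpolate instead of picking nearest voxels.
    """
    scale = in_mm / out_mm  # output voxels per input voxel
    out_shape = [max(1, int(round(s * scale))) for s in vol.shape]
    idx = np.ix_(*[
        np.clip((np.arange(n) / scale).round().astype(int), 0, s - 1)
        for n, s in zip(out_shape, vol.shape)
    ])
    return vol[idx]

vol_1mm = np.random.rand(192, 192, 192).astype(np.float32)  # hypothetical 1 mm volume
vol_3mm = resample_nearest(vol_1mm, in_mm=1.0, out_mm=3.0)
print(vol_3mm.shape)  # (64, 64, 64)
```

Going from 1 mm to 3 mm spacing shrinks the voxel count by 3³ = 27×, which is also why the memory cost of the finer resolution blows up so quickly in the other direction.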
It's a long shot, but I wanted to ask!
Thanks.