You can try turning IO off, or using fewer iterations for the IO stage, say about 10. Without IO, the runtime will probably be under one second, excluding the time to load the model. Inference (without IO) could be made faster still by running it in batch, but that requires development work on the command-line interface so it can accept a list of input files. I will add this to my todo list, but I cannot promise when the new interface will be released.
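The batching suggestion above can be sketched generically: load the model once and reuse it across all pairs, so the (expensive) load cost is paid once instead of per registration. This is only an illustration of the amortization; `load_model`, `register_pair`, and the sleep-based timings below are placeholders, not uniGradICON's actual API or measured costs.

```python
import time

def load_model():
    """Stand-in for loading network weights (the expensive one-time step)."""
    time.sleep(0.05)  # placeholder cost, not a real measurement
    return object()

def register_pair(model, fixed, moving):
    """Stand-in for a single inference-only registration."""
    time.sleep(0.01)  # placeholder cost
    return (fixed, moving)

pairs = [(f"fixed_{i}.nii.gz", f"moving_{i}.nii.gz") for i in range(5)]

# Naive: reload the model for every pair (what one CLI call per pair does).
t0 = time.perf_counter()
naive = [register_pair(load_model(), f, m) for f, m in pairs]
naive_s = time.perf_counter() - t0

# Batched: load once, reuse the same model for every pair.
t0 = time.perf_counter()
model = load_model()
batched = [register_pair(model, f, m) for f, m in pairs]
batched_s = time.perf_counter() - t0

print(f"naive: {naive_s:.2f}s  batched: {batched_s:.2f}s")
```

With 50+ pairs per patient, the per-pair load cost dominates, which is why a list-of-files interface would help even before any GPU-level batching.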
Hello @HastingsGreer and others,
When I use uniGradICON to register two images, the default runtime (on my end) is around 60 to 90 seconds. While this is relatively quick, I need to perform many registrations in a batch (e.g. 50+ registrations per patient), which results in a total computation time of roughly one hour per patient. Do you (or anyone) have any tips on how to improve computation time?
Thanks ahead of time!