Hello,
I would like to ask whether anyone has encountered the following issue:
I am training a MONAI UNet with multiple output channels (a multiclass model for heart, left lung, and right lung). The model trains well and produces good segmentation results, except at the borders of the organs, which are misclassified. As pre-processing transforms I use ScaleIntensityRanged and Spacingd, and as post-processing I apply softmax and argmax (Activationsd(keys=["pred"], softmax=True), AsDiscreted(keys=["pred"], argmax=True)). After the argmax, each class index corresponds to a channel: 0 = background, 1 = channel 1, 2 = channel 2, and so on.
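For reference, the softmax-then-argmax post-processing collapses the channel dimension into a single label map. A minimal NumPy sketch of that step, with made-up logits on a 2x2 slice (in the real pipeline this is done by Activationsd and AsDiscreted):

```python
import numpy as np

# Toy logits for a 4-class model (0 = background, 1 = heart,
# 2 = left lung, 3 = right lung); shape (channels, H, W).
# Values are hypothetical, purely for illustration.
logits = np.array([
    [[5.0, 0.1], [0.1, 0.1]],   # background
    [[0.1, 4.0], [0.2, 0.3]],   # heart
    [[0.2, 0.3], [6.0, 0.1]],   # left lung
    [[0.1, 0.2], [0.1, 3.0]],   # right lung
])

# Softmax over the channel axis (what Activationsd(softmax=True) does) ...
exp = np.exp(logits - logits.max(axis=0, keepdims=True))
probs = exp / exp.sum(axis=0, keepdims=True)

# ... then argmax to collapse channels into one label map
# (what AsDiscreted(argmax=True) does).
labels = probs.argmax(axis=0)
print(labels)  # each voxel now holds its class index
```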
The problem is the following:
Around each organ there is a border belonging to the next channel (channel 1 in the image is supposed to be the heart, but a left-lung border appears around it; the same goes for channel 2, etc.). If I add another organ, say the spine, then channel 3 shows a border around the spine (again the following channel).
Logits:
I have tested multiple scenarios and found that the issue is highly dependent on the spacing along the z axis: the model does not predict these wraps (borders) when the sample has a z spacing of 2 mm or 3 mm, but once the input sample has a 5 mm z spacing, the wraps occur. That led me to try different interpolation methods, and when nearest is used instead of bilinear, the problem is resolved. Model trained with bilinear interpolation, same sample as above tested with nearest interpolation:
Logits:
However, that is not a solution (I would say it is more of a coincidence), because when I train the model with nearest interpolation instead of bilinear, the problem also occurs. Adding more samples with 5 mm spacing to the training set also fixes the issue, but I cannot know in advance which spacing will be fed to the model, and the Spacingd transform with its interpolation should standardize this. So my logits depend on the input spacing along the z axis and on the interpolation method used.
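To illustrate what I suspect bilinear resampling does at an organ boundary, here is a toy 1-D profile along z with made-up HU values (lung-like vs. soft-tissue-like): linear interpolation blends the two tissues and creates intermediate intensities that never occur in the raw data, while nearest-neighbour keeps only original values.

```python
import numpy as np

# 1-D intensity profile along z crossing an organ boundary:
# lung-like HU (-700) on one side, soft-tissue HU (40) on the other.
profile = np.array([-700.0, -700.0, -700.0, 40.0, 40.0, 40.0])

# Resample from 5 mm to 2 mm spacing along z.
z_in = np.arange(len(profile)) * 5.0           # original positions (mm)
z_out = np.arange(0.0, z_in[-1] + 1e-9, 2.0)   # target positions (mm)

# Linear interpolation (analogous to mode="bilinear") blends the tissues
# right at the boundary, producing in-between HU values.
linear = np.interp(z_out, z_in, profile)
blended = [v for v in linear if -700.0 < v < 40.0]
print(blended)  # new intermediate HU values introduced by interpolation

# Nearest-neighbour (analogous to mode="nearest") snaps to original voxels.
idx = np.clip(np.round(z_out / 5.0).astype(int), 0, len(profile) - 1)
nearest = profile[idx]
print(set(nearest) <= set(profile))  # no new values are introduced
```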
I also compared the HU values of the wraps alone against the structure itself (the heart in this case), before and after interpolation:
The difference between the values is significant (t test) both before and after interpolation, so the model should be able to clearly distinguish those structures. What I find most confusing is that the wraps (mostly) take on the values of the following channel, while the last channel is always free of wraps.
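The significance check I mention is an ordinary two-sample t test. A sketch with synthetic HU samples (the real numbers come from my volumes; these are made up) using Welch's t statistic for unequal variances:

```python
import numpy as np

# Hypothetical HU samples: voxels from the "wrap" border region vs. the
# organ interior (heart). Purely illustrative distributions.
rng = np.random.default_rng(0)
wrap_hu = rng.normal(loc=-300.0, scale=80.0, size=200)   # blended border voxels
heart_hu = rng.normal(loc=40.0, scale=30.0, size=200)    # soft-tissue interior

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

t = welch_t(wrap_hu, heart_hu)
print(abs(t))  # a large |t| means clearly separated intensity distributions
```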
Spacing transformation setup:
```python
Spacingd(
    keys=["image"],
    pixdim=(config.spacing_x, config.spacing_y, config.spacing_z),
    mode="bilinear",
    allow_missing_keys=True,
),
Spacingd(
    keys=["label"],
    pixdim=(config.spacing_x, config.spacing_y, config.spacing_z),
    mode="nearest",
    allow_missing_keys=True,
),
Spacingd(
    keys=["pred"],
    pixdim=(config.spacing_x, config.spacing_y, config.spacing_z),
    mode="nearest",
    allow_missing_keys=True,
),
```
Is there some fundamental concept I am missing, or what else could be causing this issue?
Thank you, and have a good day.