#271 Add segmentation evaluation #299
Conversation
sys.path.insert(0, os.path.abspath(os.path.join(__file__, '..', '..', '..', '..')))
why is this necessary?
I kept getting SystemError: Parent module '' not loaded, cannot perform relative import, so I went with this SO answer. It adds the absolute path of the prediction folder to sys.path, which makes it possible to import the desired methods properly.
Hmmm, I did notice that your other PR had a __main__ section, which fits with that SO question about running scripts. I didn't see a similar script runner in this one though. Personally, I've had issues when switching between running modules inside vs. outside of Docker. That's what led me to add the try/except code around the config imports.
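(For context, a minimal sketch of the kind of import guard described here; the module paths are placeholders, not the actual ones from this repository.)

# try the package-style import first (e.g. when run as a module inside Docker),
# then fall back to a plain import when the script is executed directly
try:
    from src import config  # placeholder path
except ImportError:
    import config  # placeholder path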
I removed the __main__ section because developers can also choose which evaluation method to run, e.g. via
python -c "from prediction.src.algorithms.evaluation.evaluation import evaluate_segmentation; evaluate_segmentation()"
By adapting the test you provided in #290, one can also import and run the evaluation method using Docker :)
This does look really wrong. :p (I mean, at the very least it should use dirname!)
@lamby Adapted. Thank you!
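(For reference, a dirname-based variant along the lines suggested above could look roughly like this; it is a sketch, not necessarily the exact change that was committed, and PROJECT_ROOT is an illustrative name.)

import os
import sys

# climb four directory levels up from this file with dirname instead of
# joining '..' segments, then put that directory on sys.path
PROJECT_ROOT = os.path.dirname(
    os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
sys.path.insert(0, PROJECT_ROOT)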
@@ -19,7 +19,7 @@ def hausdorff_distance(ground_true, predicted):
     return hd


-def sensitivity(ground_true, predicted):
+def sensitivity(ground_true, predicted, smooth=1e-10):
since this value is used a lot, it'd probably be better to create a top-level constant
Done. Good point, thanks!
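(A sketch of what the suggested change could look like for binary masks, using the standard true-positive/true-negative rate formulas; SMOOTH and the exact expressions are illustrative rather than a verbatim copy of the PR code.)

import numpy as np

SMOOTH = 1e-10  # illustrative top-level constant, as suggested above


def sensitivity(ground_true, predicted, smooth=SMOOTH):
    # true positive rate: TP / (TP + FN); smooth avoids dividing by zero
    # when the ground-truth mask is empty
    true_positive = np.sum(np.logical_and(ground_true > 0, predicted > 0))
    return (true_positive + smooth) / (np.sum(ground_true > 0) + smooth)


def specificity(ground_true, predicted, smooth=SMOOTH):
    # true negative rate: TN / (TN + FP)
    true_negative = np.sum(np.logical_and(ground_true == 0, predicted == 0))
    return (true_negative + smooth) / (np.sum(ground_true == 0) + smooth)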
Force-pushed from 0bd5ead to e1475cf
This adds the ability to benchmark a segmentation model on the LIDC dataset by calculating the average Dice coefficient, Hausdorff distance, sensitivity and specificity.
I also fixed some flaws in the sensitivity and specificity calculations, which would otherwise divide by zero in certain cases.
Description
Given a segmentation model, the method predicts each LIDC nodule segmentation mask and compares that with the ground truth mask using the aforementioned metrics.
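(A rough sketch of what such an evaluation loop could look like; the function and variable names are hypothetical, and the metric functions are assumed to be the ones defined in this module, so this is not the actual API added by the PR.)

import numpy as np


def evaluate_segmentation(model, nodules):
    # nodules: iterable of (scan, ground_truth_mask) pairs from the LIDC dataset
    scores = {'dice': [], 'hausdorff': [], 'sensitivity': [], 'specificity': []}
    for scan, ground_truth in nodules:
        predicted = model.predict(scan)
        scores['dice'].append(dice_coefficient(ground_truth, predicted))
        scores['hausdorff'].append(hausdorff_distance(ground_truth, predicted))
        scores['sensitivity'].append(sensitivity(ground_truth, predicted))
        scores['specificity'].append(specificity(ground_truth, predicted))
    # report the average of each metric over all nodules
    return {name: float(np.mean(values)) for name, values in scores.items()}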
Reference to official issue
This addresses #271.
Motivation and Context
We want to know how well the implemented models perform in order to improve them in a targeted way.
How Has This Been Tested?
I'd like to test it in the next few days using a model that supports 3D convolutions.
CLA