
#271 Add segmentation evaluation #299

Merged: 1 commit merged into drivendataorg:master on Jan 26, 2018

Conversation

@WGierke (Contributor) commented Jan 24, 2018

This should add the possibility to benchmark a segmentation model on the LIDC dataset by calculating the average Dice coefficient, Hausdorff distance, sensitivity and specificity.
I also fixed some flaws in the sensitivity and specificity calculations, since they would otherwise divide by zero in certain cases.

Description

Given a segmentation model, the method predicts the segmentation mask for each LIDC nodule and compares it with the ground-truth mask using the aforementioned metrics.
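For illustration, here is a minimal sketch of how smoothed versions of these overlap metrics can be computed (the function names, the numpy usage and the exact placement of the smoothing term are assumptions for this sketch, not necessarily the code merged here; the Hausdorff distance is omitted since it needs a separate distance routine):

```python
import numpy as np


def dice_coefficient(ground_true, predicted, smooth=1e-10):
    """Dice overlap between two binary masks; `smooth` guards against empty masks."""
    ground_true = np.asarray(ground_true, dtype=bool)
    predicted = np.asarray(predicted, dtype=bool)
    intersection = np.logical_and(ground_true, predicted).sum()
    return (2.0 * intersection + smooth) / (ground_true.sum() + predicted.sum() + smooth)


def sensitivity(ground_true, predicted, smooth=1e-10):
    """True positive rate TP / (TP + FN); `smooth` keeps an empty ground truth from dividing by zero."""
    ground_true = np.asarray(ground_true, dtype=bool)
    predicted = np.asarray(predicted, dtype=bool)
    true_positives = np.logical_and(ground_true, predicted).sum()
    false_negatives = np.logical_and(ground_true, ~predicted).sum()
    return (true_positives + smooth) / (true_positives + false_negatives + smooth)


def specificity(ground_true, predicted, smooth=1e-10):
    """True negative rate TN / (TN + FP); `smooth` keeps an all-positive ground truth from dividing by zero."""
    ground_true = np.asarray(ground_true, dtype=bool)
    predicted = np.asarray(predicted, dtype=bool)
    true_negatives = np.logical_and(~ground_true, ~predicted).sum()
    false_positives = np.logical_and(~ground_true, predicted).sum()
    return (true_negatives + smooth) / (true_negatives + false_positives + smooth)
```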

Reference to official issue

This addresses #271.

Motivation and Context

We want to know how well the implemented models perform in order to improve them in a targeted way.

How Has This Been Tested?

I'd like to test it in the coming days using a model that supports 3D convolutions.

CLA

  • I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well


sys.path.insert(0, os.path.abspath(os.path.join(__file__, '..', '..', '..', '..')))
Contributor

why is this necessary?

Contributor Author

I kept getting SystemError: Parent module '' not loaded, cannot perform relative import, so I stuck with this SO answer. It adds the absolute path of the prediction folder to the path, which makes it possible to import the desired methods properly.

Contributor

Hmmm, I did notice that your other PR had a __main__ section, which fits with that SO question about running scripts. I didn't see a similar script runner in this one, though. Personally, I've had issues when switching between running modules inside vs. outside of Docker. That's what led me to add the try/except code around the config imports.

Contributor Author

I removed the __main__ section because developers can still choose which evaluation method to run, e.g. via
python -c "from prediction.src.algorithms.evaluation.evaluation import evaluate_segmentation;evaluate_segmentation()".
By adapting the test you provided in #290, one can also import and run the evaluation method using Docker :)

Contributor

This does look really wrong. :p (I mean, at the very least it should use dirname!)
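For reference, a dirname-based variant along the lines of that suggestion could look like the following sketch (the number of '..' components is inferred from the original line and may differ from what was actually committed):

```python
import os
import sys

# os.path.join(__file__, '..', '..', '..', '..') resolves to three directories
# above the file's own directory; starting from os.path.dirname(__file__) reaches
# the same place with one fewer '..' and reads less surprisingly.
sys.path.insert(
    0,
    os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..', '..'))
)
```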

Contributor Author

@lamby Adapted. Thank you!

```diff
@@ -19,7 +19,7 @@ def hausdorff_distance(ground_true, predicted):
     return hd


-def sensitivity(ground_true, predicted):
+def sensitivity(ground_true, predicted, smooth=1e-10):
```
Contributor

Since this value is used a lot, it'd probably be better to create a top-level constant.
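A top-level constant version might look roughly like this (the constant name SMOOTH is illustrative; the merged code may use a different name):

```python
# Shared smoothing term for the overlap metrics, so empty masks don't cause division by zero.
SMOOTH = 1e-10


def sensitivity(ground_true, predicted, smooth=SMOOTH):
    ...  # body unchanged, as in the diff above


def specificity(ground_true, predicted, smooth=SMOOTH):
    ...  # body unchanged
```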

Contributor Author

Done. Good point, thanks!

@reubano reubano merged commit abd30dd into drivendataorg:master Jan 26, 2018