What is the best way to extract recall and precision for a specific Intersection-over-Union threshold? #2152
-
Hi all, thanks for this amazing work. I maintain a Python package for airborne object detection and I'd like to migrate from our custom IoU evaluation to this library. I see the mAP and IoU metrics in the detection module. Our current metric is the proportion of detections that overlap with targets at IoU > 0.4. Within a pytorch-lightning module I can define a metric and call it during the validation step.
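Roughly like this (a minimal sketch, not our actual code; the module name, the `predict_boxes` helper and the way the threshold is passed are only illustrative):

```python
from pytorch_lightning import LightningModule
from torchmetrics.detection import IntersectionOverUnion


class AirborneDetector(LightningModule):
    def __init__(self):
        super().__init__()
        # Current custom criterion: a detection counts if it overlaps a target at IoU > 0.4
        self.iou_metric = IntersectionOverUnion(iou_threshold=0.4)

    def validation_step(self, batch, batch_idx):
        # preds/targets are lists of dicts in the torchmetrics detection format:
        # [{"boxes": Tensor[N, 4], "labels": Tensor[N], ...}, ...]
        preds, targets = self.predict_boxes(batch)  # hypothetical helper
        self.iou_metric.update(preds, targets)

    def on_validation_epoch_end(self):
        self.log_dict(self.iou_metric.compute())
        self.iou_metric.reset()
```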
The docs are not clear to me, and I need a push in the right direction.
This discussion (#520) hints that there is an argument for this, but it does not seem to exist anymore: there is no `reduction` argument in `torchmetrics.detection.IntersectionOverUnion`.
This would allow me to calculate the recall and precision for the batch of detections.
The data to create the precision-recall curve is already in there.
Replies: 1 comment
-
Hello, I am not sure I fully understand your first question, but I can answer the second one.
There is a boolean argument `extended_summary` (default `False`) of the class `MeanAveragePrecision` which makes it return precision and recall when `compute()` is called:

```python
metric = MeanAveragePrecision(iou_type="bbox", extended_summary=True)
validation_metrics = metric.compute()
precision = validation_metrics['precision']
recall = validation_metrics['recall']
```
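To come back to the title question of extracting values at one specific IoU threshold: this is not part of the reply above, just a sketch based on the `(TxRxKxAxM)` layout quoted from the documentation below, with illustrative class, area and max-detection indices. You can slice those tensors along the IoU-threshold axis, and you can also pass `iou_thresholds=[0.4]` to `MeanAveragePrecision` so the grid contains exactly the threshold you care about.

```python
import torch

# Index of the desired IoU threshold in the metric's grid
# (the default grid is 0.50, 0.55, ..., 0.95).
thresholds = torch.tensor(metric.iou_thresholds)
t = int(torch.argmin(torch.abs(thresholds - 0.5)))

precision = validation_metrics["precision"]  # shape (T, R, K, A, M)
recall = validation_metrics["recall"]        # shape (T, K, A, M), no recall-threshold axis

# Precision-recall curve for class index 0, area range "all" (index 0) and the
# largest max-detections setting (last index), at the chosen IoU threshold:
pr_curve = precision[t, :, 0, 0, -1]
# Recall for the same setting is a single value per class:
recall_at_iou = recall[t, 0, 0, -1]
```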
As you can read in the documentation:
`precision`: a tensor of shape `(TxRxKxAxM)` containing the precision values. Here `T` is the number of IoU thresholds, `R` is the number of recall thresholds, `K` is the number of classes, `A` is the number of areas and