
Issues about mAP #3

Open
AXINLETTER opened this issue Dec 28, 2019 · 3 comments

Comments

@AXINLETTER

Hello, an annotation entry in the test set looks like this:
{"license": 5, "file_name": "COCO_val2014_000000445200.jpg", "coco_url": "http://mscoco.org/images/445200", "height": 427, "width": 640, "date_captured": "2013-11-20 04:27:57", "flickr_url": "http://farm2.staticflickr.com/1147/5103880651_f3c1e2a721_z.jpg", "id": 445200}
and running the model gives a detection result like this:
{"image_id": 445200, "category_id": 1, "score": 0.6065034866333008, "bbox": [380.34, 45.83, 86.32, 129.75], "COCO_category_id": 47}
I cannot tell which field in the test-set annotations indicates that this instance can complete the task. The model gives me a score for whether an instance can complete a task, but how do I compute the accuracy from that? Thx~

@yassersouri
Owner

I see that you have closed the issue. Is it resolved?

@AXINLETTER AXINLETTER reopened this Sep 22, 2020
@AXINLETTER
Author

I am sorry it took me so long to get back to this problem.
I saw this in the code:
cocoEval = COCOeval(gtCOCO, dtCOCO, "bbox")
cocoEval.params.catIds = [1]
cocoEval.evaluate()
cocoEval.accumulate()
cocoEval.summarize()
This gave me the accuracy for each task on the test set. What I failed to figure out is how that accuracy is computed from the detection results. I hope you can answer, thank you!
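If each of the 14 tasks is stored as its own category (an assumption based on the catIds = [1] line above), the per-task APs can be collected by looping the same evaluation over every category id; after summarize(), cocoEval.stats[0] holds the AP averaged over IoU=0.50:0.95:

# A sketch assuming task ids 1..14 map to category ids in the annotations.
per_task_ap = {}
for cat_id in range(1, 15):
    cocoEval = COCOeval(gtCOCO, dtCOCO, "bbox")
    cocoEval.params.catIds = [cat_id]         # evaluate one task at a time
    cocoEval.evaluate()
    cocoEval.accumulate()
    cocoEval.summarize()
    per_task_ap[cat_id] = cocoEval.stats[0]   # AP @ IoU=0.50:0.95
print(per_task_ap)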

@AXINLETTER AXINLETTER changed the title Annotation problems in test set. Issues about mAP Sep 22, 2020
@AXINLETTER
Author

For an object in an image, is the task with the highest score among the 14 tasks considered the one the object can complete? Or should a threshold be set on the scores in the detection results, with objects scoring above the threshold considered able to complete the task? Or should the top-n highest-scoring objects be taken as the prediction of which objects can complete the task? Or is there some other evaluation method?

If it is the first case: the same images appear in the test sets of different tasks, and an object in one image is also labeled as able to complete multiple tasks, so the first option is clearly not feasible. In the second case, I do not see a specific score threshold anywhere, which confuses me. I hope you can answer my question. Thx~
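For what it is worth, COCO-style AP itself needs no fixed threshold: it ranks detections by score and integrates precision over all cutoffs. A fixed rule is only needed to turn scores into hard per-object decisions. The two rules discussed above could look like this on a list of detection dicts in the format shown earlier; the threshold and n are illustrative, not values from this repository:

def select_by_threshold(detections, thresh=0.5):
    # Rule 2: keep every detection scoring at least the threshold.
    return [d for d in detections if d["score"] >= thresh]

def select_top_n(detections, n=1):
    # Rule 3: keep the n highest-scoring detections.
    return sorted(detections, key=lambda d: d["score"], reverse=True)[:n]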
