Training on text lines never converges #157

Open
BoiseBound opened this issue Jan 2, 2025 · 0 comments
Labels
question Further information is requested

Comments

@BoiseBound

Before Asking

  • I have read the README carefully.

  • I want to train my custom dataset, I have carefully read the "finetune on your data" tutorial, and I have organized my dataset in the correct directory structure.

  • I have pulled the latest code from the main branch and run it again, and the problem still exists.

Search before asking

  • I have searched the DAMO-YOLO issues and found no similar questions.

Question

I have been able to successfully train DAMO-YOLO on COCO. Thank you for your efforts and for releasing your code on GitHub!

I am now trying to train on my own dataset, which is a set of text lines, text regions, and graphics from document images. DAMO-YOLO training never achieves a mAP above 0.006. I believe that my COCO categories/images/annotations are set up correctly.
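
For concreteness, a minimal check along these lines (the annotation path is a placeholder) is what I mean by verifying that the categories, images, and boxes load as expected:

```python
# Minimal sanity check for a custom COCO-format annotation file.
# "annotations/instances_train.json" is a placeholder path; adjust to your dataset.
from pycocotools.coco import COCO

coco = COCO("annotations/instances_train.json")

print("categories:", [c["name"] for c in coco.loadCats(coco.getCatIds())])
print("images:", len(coco.getImgIds()), "annotations:", len(coco.getAnnIds()))

# Spot-check a few annotations: every bbox should be [x, y, w, h] with w > 0 and h > 0,
# and category_id must match one of the declared categories.
cat_ids = set(coco.getCatIds())
for ann in coco.loadAnns(coco.getAnnIds()[:20]):
    x, y, w, h = ann["bbox"]
    assert w > 0 and h > 0, f"degenerate bbox in annotation {ann['id']}"
    assert ann["category_id"] in cat_ids, f"unknown category in annotation {ann['id']}"
```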

I have tried training from scratch on only my images, as well as fine-tuning from COCO-pretrained models. Text-line detection has properties that are rare in COCO: (a) very wide/short or very tall/narrow bboxes, (b) potential bbox overlaps, (c) potentially many bboxes of the same class per image, and (d) exclusively hollow objects (since text is not filled in). Given that COCO models can detect the wheels of bicycles, I think (d) should be OK.
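
To quantify point (a), a quick look at the bbox aspect-ratio distribution (same placeholder annotation path as above) would be something like:

```python
# Rough histogram of bbox width/height ratios, to see how extreme the
# text-line shapes are compared to typical COCO objects.
# "annotations/instances_train.json" is a placeholder path.
from collections import Counter
from pycocotools.coco import COCO

coco = COCO("annotations/instances_train.json")
buckets = Counter()
for ann in coco.loadAnns(coco.getAnnIds()):
    x, y, w, h = ann["bbox"]
    if w <= 0 or h <= 0:
        continue
    r = w / h
    buckets["<0.2" if r < 0.2 else "0.2-5" if r < 5 else "5-20" if r < 20 else ">=20"] += 1
print(buckets)
```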

Since this is also document image detection, my images are very large (2500x2500 on average), so I have tried training at 640x640, 1280x1280, and 1920x1920 input sizes, and none of these has proven successful. I am thinking about decomposing the images into overlapping 640x640 subpieces.
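
The decomposition I am considering would look roughly like the sketch below (tile size, overlap, and file names are placeholders; the ground-truth boxes would need the same offsets applied, and boxes at tile borders would need to be clipped or dropped):

```python
# Minimal sketch: slice a large document image into overlapping 640x640 tiles.
# Tile size, overlap, and file names are placeholders.
import os
from PIL import Image

def tile_image(path, tile=640, overlap=128):
    img = Image.open(path)
    w, h = img.size
    step = tile - overlap
    tiles = []
    for top in range(0, max(h - overlap, 1), step):
        for left in range(0, max(w - overlap, 1), step):
            # Edge tiles may be smaller than tile x tile; they could be padded instead.
            box = (left, top, min(left + tile, w), min(top + tile, h))
            tiles.append((box, img.crop(box)))
    return tiles

os.makedirs("tiles", exist_ok=True)
for (left, top, right, bottom), crop in tile_image("page_001.png"):
    crop.save(f"tiles/page_001_{left}_{top}.png")
```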

I wondered if you might have recommendations for parameters I should set or other things I should try.
Thank you.

Additional

No response

@BoiseBound BoiseBound added the question Further information is requested label Jan 2, 2025