Data required to have good results #361
Comments
I think 5000 is a good start; I have seen decent results with that, but using 20k is probably better.
@TontonTremblay I am trying to estimate the pose of a single object, following an approach similar to yours. But instead of computing the belief maps, I am trying to make the network regress the 2D projected vertices (in pixel coordinates) directly, and then use PnP to compute the pose. Right now I am using 6000 images for training, and the model converges on the training data. I am also getting decent results on the test set, but not all the predictions are accurate. Do you think increasing the number of training samples will improve the performance of the model?
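For context, the PnP step described above is typically done with OpenCV. Here is a minimal sketch (function and variable names are mine, not from the thread); it assumes the object's 3D model vertices and the camera intrinsics are known, and that the network outputs one 2D point per vertex:

```python
# Sketch of the described pipeline: the network regresses 2D pixel coordinates
# of the object's known 3D model vertices, and PnP recovers the pose.
import cv2
import numpy as np

def pose_from_keypoints(vertices_3d, keypoints_2d, camera_matrix, dist_coeffs=None):
    """Recover the object pose (R, t) from 2D-3D correspondences via RANSAC PnP."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)  # assume an undistorted camera
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(vertices_3d, dtype=np.float64),   # (N, 3) model-space vertices
        np.asarray(keypoints_2d, dtype=np.float64),  # (N, 2) predicted pixel coords
        camera_matrix,                               # 3x3 intrinsics matrix
        dist_coeffs,
    )
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)  # axis-angle vector -> 3x3 rotation matrix
    return R, tvec              # pose of the object in the camera frame
```

Using the RANSAC variant rather than plain solvePnP makes the estimate robust to a few badly regressed vertices, which could account for some of the inaccurate test predictions.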
Are you doing something like image -> (x, y) in normalized space? I would think that more data cannot go wrong.
And how many different background images?
I have ~2000 HDRI backgrounds to draw from, and I used all of them. You can download them with this link: https://drive.google.com/file/d/1lp36MgTlS4OFaH0vdsTFhyGFJpQDY2YX/view?usp=drive_link
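If it helps, one way to fetch a shared Google Drive file like this from a script is the gdown package (my suggestion, not part of the original thread; the output filename is a guess):

```python
# Sketch: download the shared HDRI archive from Google Drive with gdown
# (pip install gdown). The output filename is hypothetical.
import gdown

url = "https://drive.google.com/file/d/1lp36MgTlS4OFaH0vdsTFhyGFJpQDY2YX/view?usp=drive_link"
gdown.download(url, output="dome_hdri_haven.zip", fuzzy=True)  # fuzzy=True parses the /file/d/ URL
```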
Hey.
In order to train a model, I need to generate synthetic data.
You mention running the blenderproc script 5 times, each time generating 1000 frames, with each frame containing five copies of the object and ten randomly chosen distractor objects:
./run_blenderproc_datagen.py --nb_runs 1 --nb_frames 10 --path_single_obj ../models/Ketchup/google_16k/textured.obj --nb_objects 5 --distractors_folder ~/data/google_scanned_models/ --nb_distractors 10 --backgrounds_folder ../dome_hdri_haven/
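(For the numbers described above, I presume the same command scales up via the same flags, i.e. something like:
./run_blenderproc_datagen.py --nb_runs 5 --nb_frames 1000 --path_single_obj ../models/Ketchup/google_16k/textured.obj --nb_objects 5 --distractors_folder ~/data/google_scanned_models/ --nb_distractors 10 --backgrounds_folder ../dome_hdri_haven/ )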
Roughly how many frames are needed to train a good model for a specific object?
Thanks,
Joan