resolution ratio of input image #23
Hi, the results look a little blurry when I visualize them with your gui_human.py. Could the resolution ratio (input_ratio in the yaml) be causing this? Would the results look much sharper if the parameter were set to 1.0 for both training and inference? Thank you!
Although I haven't run that particular experiment yet, my experience with other datasets suggests that training with full views (21 views for ZJU-MoCap) and an input ratio of 1.0 gives the best rendering results.
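For intuition, here is a minimal sketch of what a parameter like input_ratio typically does, assuming it simply downsamples the input images and scales the camera intrinsics to match (the function and argument names below are illustrative, not the repository's actual API). This is also why a ratio below 1.0 makes the rendered results look blurry: the model never sees full-resolution detail.

```python
# A minimal sketch, assuming input_ratio downsamples images and rescales intrinsics.
import cv2
import numpy as np

def apply_input_ratio(img, K, ratio=0.5):
    """Resize an image by `ratio` and scale the 3x3 intrinsics K to match."""
    h, w = img.shape[:2]
    img = cv2.resize(img, (int(w * ratio), int(h * ratio)),
                     interpolation=cv2.INTER_AREA)
    K = K.copy()
    K[:2] *= ratio  # fx, cx (row 0) and fy, cy (row 1) scale with the image size
    return img, K
```

With ratio=1.0 the images (and hence the supervision signal) stay at full resolution, at the cost of slower training and higher memory use.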
About the outdoor dataset: what resolution do your cameras record the videos at? Do you resize the images to 1024×1024 right after recording, before extracting the SMPL keypoints?
The ZJU-MoCap dataset is captured with 21 industrial cameras at 2048×2048, and we resize the images to 1024×1024. The outdoor dataset is captured with 18 GoPro cameras at 1920×1080, and we keep the original resolution.
About the outdoor dataset: I found that the vhull dir contains the 3D bounding-box information, but I wonder how background.ply is obtained. Is it generated from the 18 background images? Also, I noticed the outdoor dataset no longer needs the SMPL points; it just needs the human images, the 3D human mask (generated from the 2D masks and lifted to 3D using the camera intrinsics and extrinsics), and the background information. Is that right?
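For reference, here is a minimal space-carving sketch of how 2D masks can be lifted to a 3D visual hull with the camera intrinsics and extrinsics, under the assumption that a voxel belongs to the hull only if it projects inside every view's mask. All names are illustrative, not the repository's actual API.

```python
# A minimal space-carving sketch: lift per-view 2D masks to a 3D point cloud.
import numpy as np

def carve_visual_hull(masks, Ks, Rs, Ts, bounds, res=128):
    """masks: list of HxW bool arrays; Ks/Rs/Ts: per-view intrinsics,
    rotations, translations (world-to-camera); bounds: (3, 2) min/max box,
    e.g. the 3D bbox stored in the vhull dir."""
    # Build a voxel grid inside the 3D bounding box.
    axes = [np.linspace(lo, hi, res) for lo, hi in bounds]
    grid = np.stack(np.meshgrid(*axes, indexing='ij'), -1).reshape(-1, 3)
    keep = np.ones(len(grid), dtype=bool)
    for mask, K, R, T in zip(masks, Ks, Rs, Ts):
        cam = grid @ R.T + T                  # world -> camera coordinates
        in_front = cam[:, 2] > 1e-6           # discard points behind the camera
        uv = cam @ K.T
        uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-6, None)  # perspective divide
        u = uv[:, 0].round().astype(int)
        v = uv[:, 1].round().astype(int)
        h, w = mask.shape
        inside = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(grid), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]
        keep &= hit                           # a hull voxel must fall in every mask
    return grid[keep]                         # point cloud of the carved hull
```

The resulting points could then be written out as a .ply with a library such as trimesh or open3d.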