Question for Nerfies experiment #42
Hi, to run Nerfies on the cat videos, we use COLMAP to get camera poses, which are more accurate than the PoseNet predictions when the object does not move much. The COLMAP pre-processing code is modified from this script and is available here. To preprocess, you want to store data at
The rest should be the same as Nerfies.
Thanks for your quick reply!
For hands, eagle, and AMA, we convert camera poses to the Nerfies format with the notebook here. I don't have cycles to clean this up, but I hope it can at least provide some guidance.
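The core of such a conversion can be sketched as follows. This is a hedged illustration, not the notebook's actual code: it assumes poses in COLMAP's convention (a world-to-camera quaternion `qw qx qy qz` and translation `tx ty tz`, as in COLMAP's `images.txt` export) and recovers the quantities a Nerfies-style camera stores, namely the world-to-camera rotation and the camera position `C = -Rᵀt`.

```python
import numpy as np

def quat_to_rotmat(q):
    """Convert a COLMAP quaternion (qw, qx, qy, qz) to a 3x3 rotation matrix."""
    w, x, y, z = np.asarray(q, dtype=float) / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def colmap_to_camera(qvec, tvec):
    """COLMAP stores world-to-camera (R, t). Return that rotation together
    with the camera center in world coordinates, C = -R^T t, which is what
    a Nerfies-style camera JSON records as `orientation` and `position`."""
    R = quat_to_rotmat(qvec)
    t = np.asarray(tvec, dtype=float)
    position = -R.T @ t
    return R, position

# Identity quaternion; COLMAP translation (0, 0, -1) puts the camera
# one unit along +z in world coordinates.
R, C = colmap_to_camera([1, 0, 0, 0], [0, 0, -1])
```

Radial distortion, focal length, and principal point would be copied over from COLMAP's camera model separately; the notebook linked above is the authoritative reference for the exact fields.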
Thanks a lot! It really helps!
And by the way, how can I get this file in your notebook for the AMA dataset?
For Nerfies, the goal of setting the center and scale parameters is to facilitate training, i.e., moving the scene center to the coordinate origin and properly scaling the input to the MLPs. We set center to (0,0,0) because the goal is to reconstruct objects in their own coordinate system. For scale=0.05, we tried a few values and found 0.05 works best for Nerfies on the eagle video. The calibration file can be downloaded here and should be included if you follow the instructions.
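Concretely, the effect of the center and scale parameters is just a normalization of positions before they reach the MLPs. A minimal sketch under that assumption, using center=(0,0,0) and scale=0.05 from the reply above on a hypothetical point array (the exact place Nerfies applies this in its loaders may differ):

```python
import numpy as np

def normalize_points(points, center=(0.0, 0.0, 0.0), scale=0.05):
    """Shift the scene so `center` maps to the origin, then scale it so
    coordinates fed to the MLPs stay in a small, well-conditioned range."""
    points = np.asarray(points, dtype=float)
    return (points - np.asarray(center, dtype=float)) * scale

# With center at the origin, this is just a uniform rescale by 0.05.
pts = np.array([[10.0, 0.0, -10.0],
                [0.0, 20.0, 0.0]])
out = normalize_points(pts)
```

Because center is (0,0,0) here, only the scale matters; the 0.05 value was tuned empirically per the reply, so other videos may need a different setting.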
Thanks for your great work!
I have a question about the Nerfies experiment mentioned in your paper.
Nerfies uses COLMAP for camera registration and scene-related calculations (scene scale and scene center), but BANMo doesn't use COLMAP.
I also want to run the experiment mentioned in your paper. Has the related code been released? Or is there any other guidance?