Q1: For simple usage of our recorded data, you can just download the rosbag file from the data link and extract the fisheye images from the rosbag. If you want to record the data yourself, you can follow the instructions from LGSVL. However, the LGSVL simulator stopped being maintained last year. Our dataset was set up and recorded on the previous online platform, without attempting an offline solution; a plan for running the simulator offline has been announced in its issue tracker. After learning this news, in order to make the data more widely usable in the future, we released the entire scenario build, the sensor reference configuration, and other files, which are also available via the data link.
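For reference, here is a minimal Python sketch of that extraction step. It assumes the fisheye images are stored as `sensor_msgs/Image` messages on a topic such as `/fisheye/image_raw`; the bag path and topic name below are placeholders, so check `rosbag info <file>.bag` for the actual names.

```python
import os
import cv2
import rosbag
from cv_bridge import CvBridge

BAG_PATH = "recorded_scene.bag"      # path to the downloaded rosbag (placeholder name)
IMAGE_TOPIC = "/fisheye/image_raw"   # replace with the real topic from `rosbag info`
OUT_DIR = "fisheye_images"

os.makedirs(OUT_DIR, exist_ok=True)
bridge = CvBridge()

with rosbag.Bag(BAG_PATH, "r") as bag:
    for topic, msg, t in bag.read_messages(topics=[IMAGE_TOPIC]):
        # Convert the sensor_msgs/Image message to an OpenCV BGR array
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        # Name each frame by its recording timestamp (nanoseconds)
        cv2.imwrite(os.path.join(OUT_DIR, f"{t.to_nsec()}.png"), frame)
```

If the images were recorded as `sensor_msgs/CompressedImage` instead, swap in `bridge.compressed_imgmsg_to_cv2(msg)`.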
Q2: As can be seen in our paper, some categories of objects have already been annotated, such as cars, pillars, and obstacles. However, we did not include pedestrians in our simulated scene. For further study, you can try to add them yourself following the LGSVL tutorial, for example via the PythonAPI as sketched below.
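A minimal sketch of that, using the public LGSVL PythonAPI; the host/port, scene name, and pedestrian name ("Bob") are placeholders and should be adjusted to your own setup:

```python
import lgsvl

# Connect to a running LGSVL simulator instance (placeholder address/port)
sim = lgsvl.Simulator(address="localhost", port=8181)
if sim.current_scene != "YourSceneName":   # placeholder scene name
    sim.load("YourSceneName")

# Use a predefined spawn point as the pedestrian's starting pose
spawn = sim.get_spawn()[0]
state = lgsvl.AgentState()
state.transform = spawn

# Spawn a pedestrian agent and let it wander around the scene
pedestrian = sim.add_agent("Bob", lgsvl.AgentType.PEDESTRIAN, state)
pedestrian.walk_randomly(True)

# Advance the simulation for 10 seconds
sim.run(10)
```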
Hi,
thanks for your work. I have two questions:
Q1: May I ask how to get the original fisheye images? I am new to Unity and LGSVL. Here is my simple train of thought:
Are the above steps correct? Thanks!
Q2: Will you annotate 3D obstacles, such as cars and pedestrians, in the future? Thank you.