
Mesh Output looks "Spiky" #18

Open
satyajitghana opened this issue Jan 25, 2024 · 6 comments
Comments

@satyajitghana

I tried running ImMesh on data collected with a MID360, but the mesh output is full of faces that form a bunch of spikes where there should be a smooth wall surface. Is there a smoothing parameter?

[screenshots of the spiky mesh]
@ziv-lin
Member

ziv-lin commented Jan 26, 2024

Can you turn on Rviz (in the launch file, it is set to false by default) and check the quality of the point cloud acquisition in Rviz?
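For reference, turning Rviz on is usually a one-line change in the launch file. A sketch, assuming a typical ROS launch layout; the argument name `rviz` is an assumption, so check the actual launch file for the exact name:

```xml
<!-- Sketch of the ImMesh launch file; the argument name "rviz" is an
     assumption -- check the actual file for the exact name. -->
<arg name="rviz" default="true" />  <!-- default is "false"; set "true" to view -->
```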

@chengwei0427

Hi,
Nice work!
Is there a recommended environment configuration? For example, Ubuntu 18.04 or 20.04? Which Eigen and Ceres versions, and so on?

@satyajitghana
Author

I saved the .ply and .pcd files, and here's how they look.

The point cloud seems okay, but it's a little noisy.

[screenshots]

See, the wall is thick:

[screenshots]

Some walls are really thin:

[screenshot]

But the mesh looks awful:

[screenshot]

Really awful:

[screenshots]

If I were to use OpenMVS to create the mesh after SLAM, the result would definitely look better than this, even though the point cloud is noisy.

@satyajitghana
Author

@chengwei0427 I had a FAST-LIO2 environment set up and used the same stack, on Ubuntu 20.04. If you can build FAST-LIO/R3LIVE, you should be good to go to build this.

@ziv-lin
Member

ziv-lin commented Jan 27, 2024

  1. From the point cloud you show, if two walls are close to each other (i.e. they lie inside the same voxel), ImMesh will treat them as one wall, so the reconstruction will definitely look spiky. You can try reducing the "meshing/distance_scale" value in the launch file to see if the quality improves.

  2. ImMesh meshes the raw input point cloud directly, so its results are affected by the noise of the raw input LiDAR points. The reconstruction can therefore be worse with LiDARs that have larger measurement noise.
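To illustrate the first point, here is a minimal sketch (not ImMesh code; the wall positions and voxel sizes are illustrative assumptions) of why two walls that fall inside the same voxel get merged into one surface, and why a smaller effective voxel size, roughly what lowering `meshing/distance_scale` influences, keeps them separate:

```python
def voxel_index(x, voxel_size):
    """Map a 1-D coordinate to the index of the voxel containing it."""
    return int(x // voxel_size)

# Two parallel walls 0.2 m apart along the x-axis (illustrative values).
wall_a, wall_b = 1.00, 1.20

# Coarse voxels: both walls land in the same cell, so a voxel-based mesher
# treats them as a single surface, and the fitted faces jump back and forth
# between the two point sets -- the "spiky" look.
print(voxel_index(wall_a, 0.5) == voxel_index(wall_b, 0.5))  # True

# Finer voxels: the walls occupy different cells and stay separate.
print(voxel_index(wall_a, 0.1) == voxel_index(wall_b, 0.1))  # False
```

The same collapse happens in 3-D per voxel cell; noise and odometry drift widen each wall's point band, which is why noisier input needs either finer voxels or less drift to avoid merging.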

@FPSychotic

FPSychotic commented Feb 25, 2024

I have a MID-360 on a rover. Everything worked just fine, as I already had good setups for FAST-LIO, Point-LIO, etc.
To set expectations: the MID-360 is quite noisy and will mesh badly compared with the Avia, and SLAM point clouds already mesh very badly in general. So real-time SLAM with a MID-360 is not a good starting point. The Avia, on the other hand, is on the good side: maybe not the best, but quite high in potential meshing quality.
I just made a quick test on the rover. I found:
-Ground mesh is quite good, much better than any vertical part.
-Vertical parts don't look spiky on the first pass.
This means the MID-360's base noise, while not exceptional, can be used to generate a flat mesh of general shapes, which can be useful for some things, such as taking its normals for point clouds without normals.

  • The mesh turns spiky as you add more noise, mainly by staying in or passing through the same place many times. The worse your odometry, the faster it turns spiky, for example because drift creates a double wall. If you move linearly and reasonably fast and don't come back to the same point, your mesh will look much better.
  • You need very good odometry here. Mine is very good; I could spend about 3 minutes moving the rover around the same area without the mesh getting spiky.
  • I manually added an IMU delay parameter to the YAML file (I hope it is actually used). IMU data is important here, as is hardware sync.
  • I would consider using the offline meshing tool; great that they provided it.
  • Performance is better than I thought, but it won't make a revolution.
  • Sadly there is no colour. It would be great to add textures offline from pictures; it isn't needed online, so please add an offline tool.
  • The odometry is very good, or at least looks quite good; for sure very close to FAST-LIO.
  • It would be nice to be able to get the depth maps so textures can be added with software such as Metashape, and to take geo-referenced images from the camera.
  • I haven't used the depth maps or enhanced point clouds yet; it's difficult for me to understand what they do.

Cool stuff, thanks to the devs. If you cannot add the colour system, please allow extracting images and depth maps so it can be done in external software: just provide synchronised pictures and depth maps based on timestamps or poses. It's great that it has a GUI.
Smart and good people, best wishes.
