Robot jumps sometimes | slam #14
Good afternoon,

The ICP registration algorithms try to find the least-squares minimization of the distances between corresponding points [sensor data -> map]. To mitigate this issue, I developed the localization pipeline to run with a tracking configuration and a recovery configuration. The drawback of setting [max_correspondence_distance] to a low value is that you need reliable and accurate odometry. If the odometry is unreliable only sporadically (wheel slippage, for example), then you can set [max_correspondence_distance] to a lower value for normal operation and rely on the tracking_recovery_matchers for recovering from these cases (because a higher [max_correspondence_distance] is used).

Here are some examples:

If the robot is jumping very, very far away (several meters), then it is likely that the transformation_validators along with the tracking configurations that you are using detected that the robot was lost and tried to activate the recovery algorithms. If you do not want the robot to jump very far and don't mind providing its initial pose using rviz when the system starts, then I recommend disabling the initial pose estimation algorithms (just comment out the associated yaml file) and confirming that it is not being loaded into the parameter server. Otherwise, you can tune the transformation_validators to accept point cloud registrations with more outliers and also tune the tracking and recovery configurations to only activate the initial pose estimation algorithms after a longer time and after more attempts have been made to find the robot pose.

If you need dynamic map updates just for navigation in the form of an occupancy grid, you can use octomap_server.
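The effect of [max_correspondence_distance] described above can be sketched in plain Python. This is illustrative only, not drl's actual code: the helper name `find_correspondences` is hypothetical, and a kd-tree stands in for whatever correspondence structure the pipeline uses.

```python
import numpy as np
from scipy.spatial import cKDTree

def find_correspondences(sensor_points, map_tree, max_correspondence_distance):
    """Match each sensor point to its nearest map point, discarding
    pairs farther apart than the threshold (treated as outliers)."""
    distances, indices = map_tree.query(sensor_points)
    valid = distances <= max_correspondence_distance
    return indices[valid], valid

map_points = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
map_tree = cKDTree(map_points)
sensor_points = np.array([[0.1, 0.05],   # close to a map point -> kept
                          [5.0, 5.0]])   # far away -> rejected as outlier

# Tracking configuration: tight threshold, needs good odometry.
idx_tight, valid_tight = find_correspondences(sensor_points, map_tree, 0.5)
# Recovery configuration: looser threshold, tolerates a larger pose error.
idx_loose, valid_loose = find_correspondences(sensor_points, map_tree, 10.0)
```

With the tight (tracking) threshold only the nearby point survives; the loose (recovery) threshold also accepts the distant one, which is what lets the recovery matchers re-find a lost robot.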
Hi. Thank you. I recently started using drl on an Intel Atom x7 z8700 and I am getting very high CPU consumption. Do you have an example config file for such CPUs? Also, regarding octomap_server: does it save the dynamically changed map for the next iteration? I could not find that.
Good afternoon, I tested the guardian_config configurations on a ML300 Rend Lake NUC, equipped with a 2.80 GHz Intel Core i5-3427U, which had better performance when compared with the Intel Atom x7 z8700. Overall, for reducing the CPU usage I would recommend tuning the drl configurations depending on your localization accuracy requirements, which will be a trade-off between accuracy and computational requirements:
In guardian_localization.launch, dynamic_robot_localization_system.launch and octo_map.launch you can see how I integrated my fork of octomap_mapping for using octomap with drl. I also added the option of updating just the navigation map. You can also try the official version of octomap.
Have a nice day :)
I am also a bit puzzled by the dynamic map navigation. So let's say I have created an initial map and DRL localizes against that map with incoming lidar measurements. Then I turn on the octomap_server, which basically recreates the map with additional new obstacles (or removes them), and now DRL localizes against the new map created by octomap; so far so good.

Now, how can DRL localize incoming LiDAR data with the newly made map from octomap? Does it mean that DRL updates its hash table (correspondence table) right after the octomap server updates the map? If so, does it mean that the localization during the map update should be very precise, since otherwise over some period of time the covariance will be too large? And if so, how does this affect the initial pose estimation if the map created by octomap is much smaller than the initial map (given the screenshot above)? Because I had a look at your fork of the

EDIT:
Hello :)

Both the initial occupancy grid and the octomap octree have the origin of the map in the same place, so the maps for localization and navigation should / must overlap, but they can have different range and resolution (for example, the localization map could have the entire view of a warehouse, while the navigation map could / should have only the zones in which the robot can move -> giving better performance to path planning algorithms).

For drl, the map update requires rebuilding the lookup table (or kd-tree), but it does not require initial pose estimation, since both the new and old maps have the same origin, and as such, the pose of the robot should be in the same place in both the old and new maps (with minor offsets due to robot motion). If you really need to update the map used for localization, then you should change the argument use_slam in guardian_localization.launch.

For the navigation map update to work as expected, octomap should receive the occupancy grid loaded by the map_server for building the initial map.

Have a nice day :)
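A minimal sketch of the point made above, assuming nothing about drl's internals: on a map update only the nearest-neighbor structure is rebuilt, while the robot pose stays valid because the old and new maps share the same origin. The class and names here are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

class LocalizationMap:
    """Toy stand-in for the localization map: points plus a kd-tree."""
    def __init__(self, points):
        self.rebuild(points)

    def rebuild(self, points):
        # Rebuilding the kd-tree (or lookup table) is the only cost of
        # a map update; no initial pose estimation is required.
        self.points = np.asarray(points)
        self.tree = cKDTree(self.points)

loc_map = LocalizationMap([[0.0, 0.0], [1.0, 0.0]])
robot_pose = np.array([0.4, 0.2])    # expressed in the shared map frame

# Octomap publishes an updated map (a new obstacle appeared); the new
# map uses the same origin, so the stored pose needs no re-initialization.
loc_map.rebuild([[0.0, 0.0], [1.0, 0.0], [0.5, 0.5]])

dist, idx = loc_map.tree.query(robot_pose)  # still valid in the new map
```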
Hi,
In slam mode drl starts with no map and builds it over time as new sensor data arrives. You just need to be careful how you configure drl for registering the occupancy grid subscriber:

The saving of the map is currently manual, but there are already tools for that:
We used drl in dynamic environments, but ones in which the outliers were usually below 75%. I would recommend having all your robot hardware connected using ethernet. You can adjust the transformation_validators to let drl tolerate higher offsets when tracking the robot pose:
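Conceptually, a transformation validator accepts or rejects a registration result based on limits such as the outlier ratio and the size of the pose correction. The sketch below is a hypothetical illustration of that idea (the function name and default limits are assumptions, not drl's API); loosening the limits is what lets drl tolerate more dynamic scenes.

```python
def validate_registration(outlier_ratio, pose_jump_m,
                          max_outlier_ratio=0.75, max_pose_jump_m=0.5):
    """Accept a registration only if the outlier ratio and the pose
    jump stay below the configured limits (illustrative values)."""
    return outlier_ratio <= max_outlier_ratio and pose_jump_m <= max_pose_jump_m

# Typical tracking step: few outliers, small correction -> accepted.
ok = validate_registration(outlier_ratio=0.30, pose_jump_m=0.05)

# Crowded dynamic scene: rejected with the default limits...
crowded = validate_registration(outlier_ratio=0.85, pose_jump_m=0.05)

# ...but accepted once the validator limits are loosened.
crowded_loose = validate_registration(outlier_ratio=0.85, pose_jump_m=0.05,
                                      max_outlier_ratio=0.90)
```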
Thanks, and what does this parameter do? So if I understood correctly, if I want to reuse the map and update it, I need to:
And:
So that the DRL will provide the

Also, I have noticed that if I use

And DRL launches
For using an initial map and having the occupancy grid subscriber created, you need to:
For octomap to perform accurate ray tracing, it needs to process the sensor data in its original frame.
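The reason the sensor frame matters is that octomap-style ray casting traces from the sensor origin to each measured endpoint, marking the cells along the ray as free and the endpoint as occupied. A small 2D numpy sketch of the frames involved (illustrative only; in ROS the transform would come from tf):

```python
import numpy as np

def transform_points(points, yaw, translation):
    """Map 2D points from the sensor frame into the map frame
    using a rotation (yaw) and a translation (sensor position)."""
    c, s = np.cos(yaw), np.sin(yaw)
    rotation = np.array([[c, -s], [s, c]])
    return points @ rotation.T + translation

sensor_origin_map = np.array([2.0, 1.0])        # sensor pose in map frame
hits_sensor_frame = np.array([[1.0, 0.0],       # lidar returns, sensor frame
                              [0.0, 2.0]])
hits_map_frame = transform_points(hits_sensor_frame, np.pi / 2,
                                  sensor_origin_map)

# Rays are cast from the sensor origin to each hit: without knowing the
# origin, the free space along each ray cannot be carved correctly.
ray_vectors = hits_map_frame - sensor_origin_map
```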
Ok, I understood, but in drl you also used:

and:
I guess I need to activate them, as I plan to SLAM and localize at the same time?
Yes, when

This is more related to our motion controller implementation, which was using the tf base_link -> map. Since we had +- good odometry, I could configure pose_to_tf_publisher to republish the last odom -> map correction with an updated timestamp, so that the lookup transform within the motion controller would not block until a new pose estimation was given after processing a new lidar scan. By having pose_to_tf_publisher republish the odom -> map with timestamp ros::Time::now(), the motion controller would be getting the base_link -> map at a high frame rate, at the expense of higher error (because it was relying on the odometry estimation for the time stamps in between the lidar time stamps).

Depending on your use case, you can set

The map server is necessary to read the pgm / png file and publish the occupancy grid msg.
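The idea of composing the high-rate odometry with the last low-rate scan-matcher correction can be sketched in plain numpy (this is a conceptual 2D illustration, not the pose_to_tf_publisher code):

```python
import numpy as np

def compose(a_to_b, b_to_c):
    """Compose two 2D rigid transforms given as (x, y, yaw) tuples."""
    x, y, yaw = a_to_b
    cx, cy, cyaw = b_to_c
    c, s = np.cos(cyaw), np.sin(cyaw)
    return (cx + c * x - s * y, cy + s * x + c * y, yaw + cyaw)

# Low rate: last odom -> map correction from the lidar scan matcher.
odom_to_map = (0.1, -0.2, 0.0)
# High rate: base_link -> odom estimate from wheel odometry.
base_to_odom = (3.0, 1.0, 0.0)

# Republishing the *last* correction with a fresh timestamp lets a
# consumer compose an up-to-date base_link -> map at odometry rate,
# at the cost of whatever drift accumulated since the last scan match.
base_to_map = compose(base_to_odom, odom_to_map)
```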
If you are planning on using slam, then you should probably use kd-trees instead of lookup tables, since a kd-tree is usually faster to initialize.
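A rough sketch of why the kd-tree initializes faster, under the assumption that the lookup table works by precomputing the nearest-neighbor distance for every cell of a dense grid covering the map (a common design; the details of drl's implementation may differ):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
map_points = rng.random((1000, 2)) * 50.0   # 1000 map points in a 50x50 m area

# kd-tree: built directly from the n map points -> cheap to (re)build,
# which matters in slam where the map changes frequently.
tree = cKDTree(map_points)

# Lookup table (sketch): precompute a nearest-neighbor distance for every
# cell of a dense grid -> expensive to build, but constant-time queries.
resolution = 0.5
grid_x, grid_y = np.meshgrid(np.arange(0, 50, resolution),
                             np.arange(0, 50, resolution))
cells = np.column_stack([grid_x.ravel(), grid_y.ravel()])
lookup_table, _ = tree.query(cells)         # one query per cell, up front

# The grid has far more cells than the map has points, which is why
# rebuilding the lookup table on every map update is the costly part.
```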
OK, got it. Should I keep including the tracking, initial_estimate and recovery yaml files too? As I still need to localize.
Yes, you should just adjust the part related to the correspondence_estimation_approach:
Hi Carlos,
Sometimes during localization the robot jumps very far away. Is that because I need to restrict the box search further to avoid that?
Also, is it possible to update the map during navigation? Because I have a very dynamic obstacle and I think I need to refresh my map continuously.