-
Hello Paulo,

With enough of them, you could indeed do some lidar scan matching. They have a 45 deg horizontal FOV, so with 4 you can get 180 deg coverage with 32 points, or 360 deg coverage with four 45 deg blind spots. One way to combine those scans for icp_odometry would be to use the rtabmap_util/point_cloud_aggregator node, ideally using wheel odometry/visual odometry as the fixed frame when assembling them, then using that odometry frame as the guess for icp_odometry.

I am not sure if only 32 points on the horizontal (with a limited range of <4 meters) would be enough to robustly do scan matching. In cluttered spaces it may work, but it probably won't in large empty spaces. For example, the RPLIDAR also has a 4 meter range, with 360 deg coverage at a low resolution of points; it works in a house or an apartment, but in large spaces the scan matching may fail (not enough correspondences between consecutive frames).

cheers,
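For reference, here is a minimal sketch of what that aggregation pipeline could look like, assuming ROS 2 with the rtabmap_util and rtabmap_odom packages; the `/tof_*/points` topic names are hypothetical placeholders for whatever the sensor drivers publish:

```python
# Hedged ROS 2 launch sketch: merge four sparse ToF point clouds into one
# scan for icp_odometry, using the wheel/visual odometry frame while assembling.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='rtabmap_util',
            executable='point_cloud_aggregator',
            parameters=[{
                'count': 4,                # number of input clouds (cloud1..cloud4)
                'fixed_frame_id': 'odom',  # odometry frame used to compensate motion between clouds
                'frame_id': 'base_link',   # frame of the output combined cloud
            }],
            remappings=[
                ('cloud1', '/tof_front_left/points'),   # hypothetical driver topics
                ('cloud2', '/tof_front_right/points'),
                ('cloud3', '/tof_rear_left/points'),
                ('cloud4', '/tof_rear_right/points'),
            ]),
        Node(
            package='rtabmap_odom',
            executable='icp_odometry',
            parameters=[{
                'guess_frame_id': 'odom',  # wheel/visual odometry as motion prior
            }],
            remappings=[('scan_cloud', 'combined_cloud')]),
    ])
```

The `fixed_frame_id` lets the aggregator account for robot motion between the four clouds before the merged scan is fed to icp_odometry.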
-
Hello @matlabbe,
I have added four ToF (time-of-flight) sensors to our robot: two in front and two in the rear. I am using them for additional collision avoidance, but I also want to use their information to improve the SLAM performance.
These are amazing little sensors from ST (https://www.st.com/en/imaging-and-photonics-solutions/vl53l8cx.html), with an 8x8 multi-zone ranging array.
I was wondering how to integrate that information into rtabmap. Can I use your standard LiDAR implementation and feed in their information with some parameter modifications? I see them as four separate sparse three-dimensional LiDARs; do you agree?
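To illustrate the "sparse 3D LiDAR" view, here is a hedged sketch that projects one 8x8 frame of zone distances into 3D points. The uniform angular grid over the 45 deg FOV is an assumption for illustration, not taken from ST's datasheet, and no invalid-zone filtering is shown:

```python
# Sketch: convert one VL53L8CX 8x8 zone distance frame (mm) into a sparse
# 3D point cloud, treating each zone as a ray within the 45 deg FOV.
import numpy as np

FOV_DEG = 45.0
GRID = 8

def zones_to_points(dist_mm: np.ndarray) -> np.ndarray:
    """dist_mm: (8, 8) array of zone distances in mm; returns (64, 3) points in m."""
    half = np.radians(FOV_DEG) / 2.0
    # Zone centers on a uniform angular grid across the FOV (assumption).
    edges = np.linspace(-half, half, GRID + 1)
    centers = (edges[:-1] + edges[1:]) / 2.0
    az, el = np.meshgrid(centers, centers)   # azimuth, elevation per zone
    r = dist_mm.astype(float) / 1000.0       # mm -> m
    x = r * np.cos(el) * np.cos(az)          # forward
    y = r * np.cos(el) * np.sin(az)          # left
    z = r * np.sin(el)                       # up
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```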
Otherwise, I am considering running a 'scan matching' algorithm at each node, sending that 'odometry' over our CAN bus, and fusing it (via an EKF) with the visual and wheel odometry. I implemented scan matching in grad school and found it very elegant, simple, and powerful.
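As a refresher on the idea, here is a minimal point-to-point ICP sketch in 2D (a teaching example, not rtabmap's implementation; it assumes NumPy and SciPy are available):

```python
# Minimal point-to-point scan matching (ICP): iteratively match nearest
# neighbors, then solve the best rigid transform with the Kabsch method.
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(src: np.ndarray, dst: np.ndarray, iters: int = 20):
    """Align src (N,2) to dst (M,2); returns rotation R (2,2) and translation t (2,)."""
    R, t = np.eye(2), np.zeros(2)
    tree = cKDTree(dst)
    for _ in range(iters):
        moved = src @ R.T + t
        _, idx = tree.query(moved)                     # nearest-neighbor correspondences
        p = moved - moved.mean(0)                      # centered source points
        q = dst[idx] - dst[idx].mean(0)                # centered matched targets
        U, _, Vt = np.linalg.svd(p.T @ q)              # Kabsch rotation from cross-covariance
        if np.linalg.det(Vt.T @ U.T) < 0:              # guard against reflections
            Vt[-1] *= -1
        dR = Vt.T @ U.T
        dt = dst[idx].mean(0) - moved.mean(0) @ dR.T
        R, t = dR @ R, dR @ t + dt                     # compose incremental transform
    return R, t
```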
Do you have any recommendations?
Thank you in advance,
Paulo