"Has anyone ever done camera-radar fusion using their own devices? After calibrating the Azure Kinect DK with the RSLidar, the bounding boxes in the point cloud often appear behind the correct point cloud, and the red point cloud indicating successful clustering does not appear. What aspects can be optimized in this situation?"
Additional
No response
> Has anyone ever done camera-radar fusion using their own devices?
Radar isn't just a different kind of LiDAR: radar provides more measurements than just the point cloud or the bounding boxes of objects. I suggest https://github.com/TUMFTM/CameraRadarFusionNet, as it takes the radar cross section (RCS) into consideration; there are other algorithms, such as CenterFusion, that do something similar.
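For intuition, "taking RCS into consideration" mostly means carrying it along as an extra per-detection feature when radar points are projected into the image plane. Below is a minimal sketch of that idea only; it is not CRFNet's actual code, and the function name, the (x, y, z, rcs) detection layout, and the two-channel output are all assumptions:

```python
import numpy as np

def radar_to_image_channels(detections, K, image_shape):
    """Rasterize radar detections into extra image channels.

    detections: Nx4 array of (x, y, z, rcs) in the camera frame (hypothetical layout).
    K:          3x3 camera intrinsic matrix.
    Returns an (H, W, 2) float array: channel 0 = inverse depth, channel 1 = RCS.
    """
    h, w = image_shape
    channels = np.zeros((h, w, 2), dtype=np.float32)
    for x, y, z, rcs in detections:
        if z <= 0:  # detection behind the camera, skip
            continue
        u = int(K[0, 0] * x / z + K[0, 2])  # pinhole projection, no distortion
        v = int(K[1, 1] * y / z + K[1, 2])
        if 0 <= u < w and 0 <= v < h:
            channels[v, u, 0] = 1.0 / z  # inverse depth
            channels[v, u, 1] = rcs      # radar cross section as a feature
    return channels
```

A network can then concatenate these channels with the RGB image, which is roughly how RCS ends up influencing the fusion.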
> After calibrating the Azure Kinect DK with the RSLidar, the bounding boxes in the point cloud often appear behind the correct point cloud, and the red point cloud indicating successful clustering does not appear. What aspects can be optimized in this situation?
Please clarify what you mean. You first ask about camera-radar fusion, but then you say you're using a Kinect with an RSLidar? Those are a camera and a LiDAR; there is no radar in that setup.
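As for the boxes appearing behind the correct point cloud: if the displacement is systematic along the depth axis, the camera-to-LiDAR extrinsic is the first thing to verify. Here is a minimal diagnostic sketch, assuming you have the extrinsic rotation R, translation t, and camera intrinsics K from your calibration; all numeric values below are placeholders, not from this repo:

```python
import numpy as np
import cv2

# Placeholder calibration results -- substitute your own values.
R = np.eye(3)                         # LiDAR -> camera rotation (3x3)
t = np.zeros((3, 1))                  # LiDAR -> camera translation (meters)
K = np.array([[910.0,   0.0, 640.0],  # intrinsics, Azure Kinect-like values
              [  0.0, 910.0, 360.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)                    # distortion coefficients

def project_lidar_to_image(points_lidar):
    """Project an Nx3 float array of LiDAR points into pixel coordinates."""
    rvec, _ = cv2.Rodrigues(R)        # rotation matrix -> rotation vector
    pixels, _ = cv2.projectPoints(points_lidar, rvec, t, K, dist)
    return pixels.reshape(-1, 2)
```

Overlay the projected points on the RGB frame: a constant shift suggests an extrinsic error, while a shift that grows with motion suggests a time-sync problem between the Kinect and the LiDAR (in ROS, message_filters.ApproximateTimeSynchronizer can help with the latter).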