object detection criteria?! #132
To detect in an environment with varying lighting, find_object may not be the most robust option even with SIFT/SURF features. Consider YOLO object detection.
Hi matlabbe, I don't want to work with machine learning approaches. Thanks a lot
The 2D detections are published by find_object_2d. You may use the rtabmap_ros/pointcloud_to_depthimage node to generate a depth image for your camera from the lidar point cloud. With the 2D positions of the objects and the corresponding depth image, you can find the distance of each object from the camera.
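To make that last step concrete, here is a minimal sketch (plain Python, no ROS dependencies) of how the distance could be computed once you have a detection's center pixel and the registered depth image. The intrinsics below are made-up placeholders; read the real ones from the camera_info topic.

```python
import math

def pixel_to_point(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth depth_m (meters) into the
    camera frame using the pinhole camera model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

def distance_to_camera(u, v, depth_m, fx, fy, cx, cy):
    """Euclidean distance from the camera optical center to the 3D point."""
    x, y, z = pixel_to_point(u, v, depth_m, fx, fy, cx, cy)
    return math.sqrt(x * x + y * y + z * z)

# Example with placeholder intrinsics (fx, fy, cx, cy come from camera_info):
d = distance_to_camera(u=320, v=240, depth_m=2.0,
                       fx=600.0, fy=600.0, cx=320.0, cy=240.0)
# At the principal point the distance equals the depth: 2.0 m
```

In practice the depth value at the detection center may be invalid (holes in the reprojected lidar depth), so sampling a small window around the center and taking the median is a common workaround.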
Thanks a lot for the comment. And one more question: in which file do I need to set fixed_frame_id to "odom"?
My camera is an RGB-D camera (D435i). I changed the following lines in the demo_husky.launch file:
Is this setting correct?
My issue related to the fixed_frame_id error is solved (_fixed_frame_id:=odom). Now, after launching, what should I expect in the output? My first question, about the distance of the detected object from the camera, still remains.
I tried a minimal example, based on the husky launch files:

```
roslaunch husky_gazebo husky_playpen.launch realsense_enabled:=true

# optional
roslaunch husky_viz view_robot.launch

roslaunch rtabmap_ros demo_husky.launch \
   lidar3d:=true \
   slam2d:=true \
   camera:=true \
   icp_odometry:=true \
   depth_from_lidar:=true \
   rtabmapviz:=true

rosrun rtabmap_ros pointcloud_to_depthimage \
   _fixed_frame_id:=odom \
   cloud:=/velodyne_points \
   camera_info:=/realsense/color/camera_info \
   _decimation:=4 \
   _fill_holes_size:=3 \
   _fill_holes_error:=0.3 \
   image_raw:=/gen_depth \
   _upscale:=true _upscale_depth_error_ratio:=0.3

roslaunch find_object_2d find_object_3d.launch \
   depth_topic:=/gen_depth \
   approx_sync:=true \
   camera_info_topic:=/realsense/color/camera_info \
   rgb_topic:=/realsense/color/image_raw
```

In the GIF, you can see the laser scan on the left with the detected object's TF, the generated depth image from the lidar, and the find_object detections on the right. When the TF of the object can be computed (i.e., there is valid depth on the 2D detection), you can easily get the distance from the camera.
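For each detection, find_object_2d provides the homography that maps the object image into the camera image. As a hedged sketch (plain Python, independent of the exact ROS message layout, which you should verify against the find_object_2d documentation), here is how that homography could be turned into a detection center pixel, which you can then look up in the generated depth image:

```python
def apply_homography(H, x, y):
    """Map point (x, y) through a 3x3 homography H (row-major nested lists)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def detection_center(H, obj_w, obj_h):
    """Project the four corners of the object image (of size obj_w x obj_h)
    into the scene image and return the centroid of the projected quad."""
    corners = [(0, 0), (obj_w, 0), (obj_w, obj_h), (0, obj_h)]
    pts = [apply_homography(H, x, y) for x, y in corners]
    cx = sum(p[0] for p in pts) / 4.0
    cy = sum(p[1] for p in pts) / 4.0
    return cx, cy
```

With the center pixel and the depth image, the distance follows from the pinhole back-projection described earlier in the thread.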
Thanks a lot matlabbe, Kind Regards |
Yeah, if you can share it. |
Hi matlabbe, I have shared the bag file with you through Google Drive. I want to detect the track-side objects (which you will see in the bag file). https://drive.google.com/file/d/1hGr_kNnRdsJC_l-wIv0qnDFtiay8UwFt/view?usp=sharing Kind Regards
I have two more queries as well:
Kind Regards |
Hi,
For the rosbag, my approach above won't work, because your lidar doesn't have many rays, so a depth image based on only one Velodyne scan would be too sparse. You may want to use a node like rtabmap_ros/point_cloud_assembler to aggregate multiple scans over X seconds, then reproject that assembled cloud into the color camera for a dense depth image. Here is a quick test with
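A rough sketch of that pipeline, following the command style above. The point_cloud_assembler parameter names here are assumptions from memory, not verified against the rtabmap_ros documentation, and the topic names are the ones used earlier in this thread:

```shell
# Aggregate several lidar scans into one denser cloud
# (_assembling_time is an assumed parameter name; check rtabmap_ros docs)
rosrun rtabmap_ros point_cloud_assembler \
   cloud:=/velodyne_points \
   _fixed_frame_id:=odom \
   _assembling_time:=1.0

# Reproject the assembled cloud into the color camera
rosrun rtabmap_ros pointcloud_to_depthimage \
   cloud:=assembled_cloud \
   camera_info:=/realsense/color/camera_info \
   _fixed_frame_id:=odom \
   image_raw:=/gen_depth
```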
Hi,
I want to do object detection using an RGB-D camera, but the problem is that when the weather is a bit cloudy, find_object_2d fails and is not able to detect the objects. I have defined that if, for instance, 2 objects are detected, the vehicle should stop; but as I mentioned, because of the weather it fails. Are there any other criteria to consider for detecting the objects besides the number of detected objects?
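One simple, non-learning criterion is temporal consistency: only treat the stop condition as confirmed when the required object count has held for several consecutive frames, which filters out single frames where lighting momentarily breaks the feature matching. A minimal sketch (plain Python; the thresholds are placeholders to tune, not values from find_object_2d):

```python
class DetectionFilter:
    """Confirm a stop condition only after the object-count threshold
    has been met for several consecutive frames (simple hysteresis)."""

    def __init__(self, min_objects=2, min_consecutive_frames=5):
        self.min_objects = min_objects
        self.min_consecutive_frames = min_consecutive_frames
        self.streak = 0  # consecutive frames meeting the threshold

    def update(self, num_detected):
        """Feed the detection count for one frame; return True once the
        condition has held long enough."""
        if num_detected >= self.min_objects:
            self.streak += 1
        else:
            self.streak = 0  # any miss resets the streak
        return self.streak >= self.min_consecutive_frames
```

This would be called once per frame with the number of detections published by find_object_2d; the same idea can be combined with other cues, such as requiring a valid depth/TF on each detection.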