
object detection criteria?! #132

Open
Masoumehrahimi opened this issue Oct 4, 2022 · 12 comments

Comments

@Masoumehrahimi

Hi,

I want to do object detection using an RGB-D camera, but the problem is that when the weather is a bit cloudy, find_object_2d fails and is not able to detect objects. I have defined that, for instance, if 2 objects are detected the vehicle should stop, but as I mentioned, it fails because of the weather. Are there any other criteria to consider for detecting objects, besides the number of detected objects?

@Masoumehrahimi changed the title from "object detectio criteria?!" to "object detection criteria?!" Oct 4, 2022
@matlabbe
Member

matlabbe commented Nov 6, 2022

For detection in environments with varying lighting, find_object may not be the most robust option, even with SIFT/SURF features. Consider YOLO object detection instead.

@Masoumehrahimi
Author

Hi matlabbe,
Actually, I want to use the find_object_2d package only for object detection, and then use a 3D lidar to estimate the 3D pose of the detected object. How can I do that? What is the procedure, and what points do I need to consider?
Would you please guide me in this regard?

*** I don't want to work with machine learning approaches.

Thanks a lot

@matlabbe
Member

matlabbe commented Nov 7, 2022

2D detections are published on the objects topic: http://wiki.ros.org/find_object_2d

You may use the rtabmap_ros/pointcloud_to_depthimage node to generate a depth image for your camera from the lidar point cloud. With the 2D positions of the objects and the corresponding depth image, you could find the distance of the object from the camera.
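A minimal sketch of that last step (my own, untested; not from the thread): subscribe to find_object_2d's objects topic, whose format per the wiki is 12 floats per detection (id, width, height, then a 3x3 homography), project the object's center through the homography into the camera image, and read the depth at that pixel from the lidar-generated depth image. The gen_depth topic name is a placeholder for however you remap pointcloud_to_depthimage's output.

    #!/usr/bin/env python
    # Sketch (untested): object distance from find_object_2d detections
    # plus a lidar-generated depth image registered to the same camera.
    import rospy
    import numpy as np
    from std_msgs.msg import Float32MultiArray
    from sensor_msgs.msg import Image
    from cv_bridge import CvBridge

    bridge = CvBridge()
    latest_depth = None

    def depth_cb(msg):
        global latest_depth
        # Depth may be 32FC1 (meters) or 16UC1 (millimeters) depending on setup.
        latest_depth = bridge.imgmsg_to_cv2(msg)

    def objects_cb(msg):
        if latest_depth is None:
            return
        d = msg.data  # [id, width, height, h11..h33] repeated per object
        for i in range(0, len(d), 12):
            obj_id, w, h = d[i], d[i + 1], d[i + 2]
            # The homography is stored in Qt's row-vector order; transpose to
            # the usual column-vector convention (verify on your own data).
            H = np.array(d[i + 3:i + 12]).reshape(3, 3).T
            c = H.dot(np.array([w / 2.0, h / 2.0, 1.0]))  # object center -> image
            u, v = int(c[0] / c[2]), int(c[1] / c[2])
            if 0 <= v < latest_depth.shape[0] and 0 <= u < latest_depth.shape[1]:
                depth = float(latest_depth[v, u])
                if depth > 0:
                    rospy.loginfo("object %d: depth %.2f at pixel (%d, %d)",
                                  int(obj_id), depth, u, v)

    rospy.init_node("object_distance")
    rospy.Subscriber("objects", Float32MultiArray, objects_cb)
    rospy.Subscriber("gen_depth", Image, depth_cb)
    rospy.spin()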

@Masoumehrahimi
Author

Thanks a lot for the comment,
I am also working with RTAB-Map, and I have activated the lidar and camera options (https://github.com/introlab/rtabmap_ros/blob/master/launch/demo/demo_husky.launch). I don't know how to use the option you mentioned. How can I sync the depth image coming from the lidar with the 2D image coming from find_object_2d?
Would you please explain a bit more, or is there an example I can check to see how it works?

Thanks in advance

@Masoumehrahimi
Author

Masoumehrahimi commented Nov 7, 2022

And one more question: in which file do I need to set fixed_frame_id to "odom"?

  1. I have changed the frame id from base_link to odom in https://github.com/introlab/rtabmap_ros/blob/master/launch/demo/demo_husky.launch,
     but when I run rosrun rtabmap_ros pointcloud_to_depthimage, it still shows the error: fixed_frame_id should be set!
     (I am running the rtabmap launch file before pointcloud_to_depthimage.)

  2. As I searched the rtabmap forum, I found that I need to use RGB from the camera and depth from the lidar (if I am not mistaken).
     I activated depth_from_lidar in the demo_husky.launch file, but in the lines under camera, we have:

<arg name="depth"             value="$(eval camera and not depth_from_lidar)" />

My camera is an RGB-D camera (D435i).

I changed the following lines in the demo_husky.launch file:

    <!-- If camera is used -->
    <arg name="depth"             value="$(arg depth_from_lidar)" />
    <arg name="subscribe_rgb"     value="$(eval camera)" />
    <arg name="rgbd_sync"         value="$(arg depth_from_lidar)" />
    <arg name="rgb_topic"         value="/camera/color/image_raw" />       
    <arg name="camera_info_topic" value="/camera/color/camera_info" />         
    <arg name="depth_topic"       value="/camera/depth/image_rect_raw" />     
    <arg name="approx_rgbd_sync"  value="false" />

Is this setting correct?

@Masoumehrahimi
Author

Masoumehrahimi commented Nov 7, 2022

My issue with the fixed_frame_id error is solved (_fixed_frame_id:=odom).
Now, would you please explain a bit more about this: "With the 2D positions of the objects and the corresponding depth image, you could find the distance of the object from the camera."

Now, after launching

  1. rosrun rtabmap_ros pointcloud_to_depthimage _fixed_frame_id:=odom cloud:=/velodyne_points camera_info:=/camera/color/camera_info
  2. https://github.com/introlab/rtabmap_ros/blob/master/launch/demo/demo_husky.launch
     with the following changes:


> <arg name="depth"             value="$(arg depth_from_lidar)" />
>     <arg name="subscribe_rgb"     value="$(eval camera)" />
>     <arg name="rgbd_sync"         value="$(arg depth_from_lidar)" />
>     <arg name="rgb_topic"         value="/camera/color/image_raw" />    
>     <arg name="camera_info_topic" value="/camera/color/camera_info" />        
>     <arg name="depth_topic"       value="/camera/depth/image_rect_raw" />   
>     <arg name="approx_rgbd_sync"  value="false" />

Then, what should I expect in the output? My first question, about the distance of the detected object from the camera, still remains!

@matlabbe
Member

matlabbe commented Nov 27, 2022

I tried a minimal example, based on the Husky launch files:

roslaunch husky_gazebo husky_playpen.launch realsense_enabled:=true
roslaunch husky_viz view_robot.launch

# optional
roslaunch rtabmap_ros demo_husky.launch \
    lidar3d:=true \
    slam2d:=true \
    camera:=true \
    icp_odometry:=true \
    depth_from_lidar:=true \
    rtabmapviz:=true

rosrun rtabmap_ros pointcloud_to_depthimage \
    _fixed_frame_id:=odom \
    cloud:=/velodyne_points \
    camera_info:=/realsense/color/camera_info \
    _decimation:=4 \
    _fill_holes_size:=3 \
    _fill_holes_error:=0.3 \
    image_raw:=/gen_depth \
    _upscale:=true _upscale_depth_error_ratio:=0.3

roslaunch find_object_2d find_object_3d.launch \
    depth_topic:=/gen_depth \
    approx_sync:=true \
    camera_info_topic:=/realsense/color/camera_info \
    rgb_topic:=/realsense/color/image_raw

Note that as find_object requires the depth image to be exactly the same size as the RGB image, I added a new option (see this commit) to pointcloud_to_depthimage to upscale the depth image back when the decimation parameter is used. If you are going to start rtabmap like above (RGB + lidar, no depth), I also fixed a bug in the latest code in this commit.
(GIF: Peek 2022-11-26 18-00)

In the GIF, you can see the laser scan on the left with the detected object's TF, the generated depth image from the lidar, and the find_object detections on the right. When the TF of the object can be computed (valid depth at the 2D detection), you can get the distance from the camera easily.
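Since find_object publishes one TF frame per detected object (object_1, object_2, ...), a simple way to read that distance is a TF lookup. A sketch (untested; the camera frame name is an assumption that depends on your URDF):

    #!/usr/bin/env python
    # Sketch (untested): distance of a detected object from the camera via
    # the object_<id> TF frames published by find_object.
    import math
    import rospy
    import tf2_ros

    rospy.init_node("object_distance_tf")
    buf = tf2_ros.Buffer()
    listener = tf2_ros.TransformListener(buf)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        try:
            t = buf.lookup_transform("realsense_color_optical_frame",  # assumed
                                     "object_1", rospy.Time(0),
                                     rospy.Duration(0.2))
            p = t.transform.translation
            dist = math.sqrt(p.x ** 2 + p.y ** 2 + p.z ** 2)
            rospy.loginfo("object_1 is %.2f m from the camera", dist)
        except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
                tf2_ros.ExtrapolationException):
            pass  # object not currently detected
        rate.sleep()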

@Masoumehrahimi
Author

Thanks a lot matlabbe,
I made the changes to the files and tried a lot, but in rviz I get a lot of jumps in the output. May I send you a short bag file, so you can try with that one?

Kind Regards

@matlabbe
Member

matlabbe commented Dec 1, 2022

Yeah, if you can share it.

@Masoumehrahimi
Author

Masoumehrahimi commented Dec 1, 2022

Hi Matlabbe,

I have shared the bag file with you through Google Drive. I want to detect the track-side objects (which you will see in the bag file).
I would be grateful if you could share the launch files and the changes you made in them.

https://drive.google.com/file/d/1hGr_kNnRdsJC_l-wIv0qnDFtiay8UwFt/view?usp=sharing

Kind Regards

@Masoumehrahimi
Author

Masoumehrahimi commented Dec 1, 2022

I have two more queries as well:

  1. In https://github.com/introlab/rtabmap_ros/blob/master/launch/demo/demo_husky.launch, it is mentioned that if the camera is used, the depth should only be evaluated by the camera (I have put those lines here as well). I think these lines should be modified when we want to use the camera's RGB image only for object detection and the depth image from the lidar for distance estimation. Would you please comment on this?
<!-- If camera is used -->
    <arg name="depth"             value="$(eval camera and not depth_from_lidar)" />
    <arg name="subscribe_rgb"     value="$(eval camera)" />
    <arg name="rgbd_sync"         value="$(eval camera and not depth_from_lidar)" />
    <arg name="rgb_topic"         value="/realsense/color/image_raw" />
    <arg name="camera_info_topic" value="/realsense/color/camera_info" />
    <arg name="depth_topic"       value="/realsense/depth/image_rect_raw" />
    <arg name="approx_rgbd_sync"  value="false" />
  2. Do we need to do lidar-camera calibration? I want to know how you considered the calibration between these two sensors.

Kind Regards

@matlabbe
Member

matlabbe commented Dec 4, 2022

Hi,

  1. Those parameters are for rtabmap; find_object would be configured independently with the sensors you want.
  2. Yes. For some projects, just for quick testing, I did it by hand (measuring with a tape) and adjusted the rotation using the Camera display in rviz, manually tuning a static transform publisher between the camera and the lidar until the overlap looked good (a minimal sketch follows this list). However, if you need accuracy, search for lidar-camera calibration packages on GitHub/ROS.
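For illustration, a hand-tuned extrinsic like the one described in point 2 could be published as a static transform (a sketch only; the frame names and offsets are hypothetical and must be measured and tuned on your own robot):

    #!/usr/bin/env python
    # Sketch: hand-measured lidar extrinsics as a static transform.
    # Tune until the lidar points overlap the image in rviz's Camera display.
    import rospy
    import tf2_ros
    from geometry_msgs.msg import TransformStamped

    rospy.init_node("lidar_camera_static_tf")
    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = "camera_link"   # assumed camera frame
    t.child_frame_id = "velodyne"       # assumed lidar frame
    t.transform.translation.x = 0.10    # tape-measured offsets, in meters
    t.transform.translation.y = 0.00
    t.transform.translation.z = 0.20
    t.transform.rotation.w = 1.0        # start from identity, adjust visually
    br = tf2_ros.StaticTransformBroadcaster()
    br.sendTransform(t)
    rospy.spin()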

For the rosbag, my approach above won't work, because you don't have a lot of rays with the lidar, so a depth image based on only one velodyne scan would be too sparse:
(Screenshot from 2022-12-03 17-21-41)

You may want to use a node like rtabmap_ros/point_cloud_assembler to aggregate multiple scans over X seconds and then reproject that cloud into the color camera for a dense depth image (see the command sketch below the screenshot). Here is a quick test using the Decay Time option in rviz to keep the scans for a longer period of time. In the Camera display on the left, we see a denser depth image than previously, so the 3D position of objects on the track that are below the velodyne FOV but seen at the bottom of the camera image could be estimated.
(Screenshot from 2022-12-03 17-27-13)
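A hedged sketch of that pipeline (the parameter names are from memory; check the rtabmap_ros documentation for the exact options of point_cloud_assembler):

    # Assemble ~1 s of velodyne scans, then reproject the assembled cloud
    # instead of the raw scans to get a denser depth image.
    rosrun rtabmap_ros point_cloud_assembler \
        _fixed_frame_id:=odom \
        _assembling_time:=1.0 \
        cloud:=/velodyne_points

    rosrun rtabmap_ros pointcloud_to_depthimage \
        _fixed_frame_id:=odom \
        cloud:=/assembled_cloud \
        camera_info:=/camera/color/camera_info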
