Thank you for sharing this awesome dataset!

I noticed that the paper Realtime Time Synchronized Event-based Stereo uses disparity ground truth from MVSEC, so I am wondering how disparity is computed from the depth ground truth for this dataset. I know that disparity = focal_length * baseline / depth. What are the values of the focal length (in pixels) and the baseline?
You should be able to obtain the focal length and baseline from the provided camera calibration. In general, if you have rectified images, the focal length and baseline can be read from the projection matrix, defined here: http://docs.ros.org/en/melodic/api/sensor_msgs/html/msg/CameraInfo.html
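For reference, here is a minimal sketch of the depth-to-disparity conversion. The projection-matrix values below are placeholders, not the actual MVSEC calibration; read the real P matrices for the rectified left/right cameras from the dataset's calibration (or CameraInfo messages) and substitute them in.

```python
import numpy as np

# Placeholder projection matrices for a rectified stereo pair -- NOT the
# actual MVSEC calibration. For a rectified pair, the right camera's P is
#   [fx 0 cx Tx; 0 fy cy 0; 0 0 1 0]  with  Tx = -fx * baseline,
# so the baseline (in meters) is -Tx / fx.
P_left = np.array([[200.0, 0.0, 170.0,   0.0],
                   [0.0, 200.0, 130.0,   0.0],
                   [0.0,   0.0,   1.0,   0.0]])
P_right = np.array([[200.0, 0.0, 170.0, -20.0],
                    [0.0, 200.0, 130.0,   0.0],
                    [0.0,   0.0,   1.0,   0.0]])

fx = P_left[0, 0]                          # focal length in pixels
baseline = -P_right[0, 3] / P_right[0, 0]  # baseline in meters


def depth_to_disparity(depth_m):
    """Convert a metric depth map (meters) to disparity (pixels).

    Invalid depths (zero, negative, or NaN) are mapped to NaN.
    """
    depth_m = np.asarray(depth_m, dtype=np.float64)
    disparity = np.full_like(depth_m, np.nan)
    valid = np.isfinite(depth_m) & (depth_m > 0)
    disparity[valid] = fx * baseline / depth_m[valid]
    return disparity
```

With the real calibration plugged in, `depth_to_disparity(depth_map)` applied to the ground-truth depth images gives disparity in pixels via the same disparity = fx * baseline / depth relation mentioned above.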