Question about the time offset between Imu and Camera #2
What do you mean by "hardware synchronization"? Do you mean treating the IMU and camera as one entity and using a single processor chip to drive both synchronously?

About your problem: perhaps using a parallel computing unit such as a GPU or another processor would solve it. You could also check your processor and make sure you have enough cores for std::thread.

Another solution is to design an "electrical control system" that captures both the camera and IMU GPIO signals, and to write your own interface to communicate with ROS. This is what I did back then. However, you then also have to deal with the data transmission overhead between the main processor (which executes your "downstream task") and the "electrical control system".
Thanks for your timely reply! By "hardware synchronization" I mean using the IMU's timestamps as a cue (GPIO signals) to hardware-trigger the camera's capture, which is equivalent to the "electrical control system" you mentioned. However, in my case that hardware trigger is not supported by my camera (OAK-D) at the hardware level.

As you proposed, I did use multithreading: sending the IMU messages (in an IMU worker thread) and the image messages (in a camera worker thread), as well as sending the interpolated IMU stream (in the main thread). This turns out to be satisfying in terms of the time offset between the instant an image (visual cue) is sent and the instant the corresponding interpolated IMU sample is sent. However, the unsolved problem is that the instants when the first image and IMU messages of a sequence are published have a visible temporal offset, as I mentioned before. So I am still stuck with this problem. Do you have any idea about that?
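(For reference, a minimal sketch of the linear-interpolation step described above. The struct and field names are illustrative, not the OAK-D driver's actual types; a real IMU sample would carry full 3-axis gyro/accel vectors.)

```cpp
#include <cstdint>

// Hypothetical minimal IMU sample: device timestamp plus one axis
// each of gyro and accel; illustrative only.
struct ImuSample {
    int64_t t_ns;    // device timestamp in nanoseconds
    double  gyro_x;  // rad/s
    double  accel_x; // m/s^2
};

// Linearly interpolate between the two IMU samples a and b that
// bracket the camera frame timestamp t_cam, so the published IMU
// message lines up with the image message.
ImuSample interpolate(const ImuSample& a, const ImuSample& b, int64_t t_cam) {
    const double w = static_cast<double>(t_cam - a.t_ns) /
                     static_cast<double>(b.t_ns - a.t_ns);  // in [0, 1]
    return ImuSample{
        t_cam,
        a.gyro_x  + w * (b.gyro_x  - a.gyro_x),
        a.accel_x + w * (b.accel_x - a.accel_x),
    };
}
```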
Have you tried visualizing the GPIO signals with an oscilloscope to check whether ROS is causing any overhead? Use an analog oscilloscope if you want more precision.

I had the same problem back then. The camera I used also did not support hardware triggering (I think there was a reason for it): there was no way to control the camera with an electrical pulse from an outside source. The ROS timestamp is recorded at the time the signal is received, not at the time it was sent, I believe. The camera manufacturer usually does some work before sending the timestamped message (for example, adding info to confirm the message was successfully sent); that is usually documented in their technical sheet or their interface library.

The 'visible temporal offset' you mentioned is, I believe:

offset = message packaging time + data transmission time + message unpacking time

The solution really depends on your camera and IMU manufacturers. Ask them whether they implemented some kind of timestamping on their own microcontroller, so that you don't have to deal with the packaging + transmission + unpacking overhead. Otherwise, you could subtract the offset as a way to 'calibrate' the camera timestamp.
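(A minimal sketch of that last suggestion, i.e. subtracting a measured constant offset from the camera stamp in a ROS node. The 40 ms value and the node wiring are assumptions; only the sensor_msgs/Image header layout and ros::Time arithmetic are standard ROS.)

```cpp
#include <ros/ros.h>
#include <sensor_msgs/Image.h>

// Hypothetical fixed transport latency (packaging + transmission +
// unpacking), measured once for this specific setup, e.g. with an
// oscilloscope; 40 ms is an illustrative value.
static const ros::Duration kTimeOffset(0.040);

ros::Publisher corrected_pub;  // advertised in main(); sketch only

void imageCallback(const sensor_msgs::Image::ConstPtr& msg) {
    sensor_msgs::Image corrected = *msg;
    // Shift the stamp back by the calibrated delay so it better
    // approximates the true exposure instant.
    corrected.header.stamp = msg->header.stamp - kTimeOffset;
    corrected_pub.publish(corrected);
}
```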
Sorry for the late response.
What data structure are you proposing? I believe the reason ROS returns the same timestamp value twice in a row is that the buffer is being read much faster than it is written. Back then, my instructor required a real-time solution, so the 'delayed interpolation' method (doing the interpolation in another thread) was off the table (I still don't know why my instructor did not accept that solution).
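(A sketch of how that duplicate-timestamp symptom could be detected and skipped at read time, assuming a polling loop; readLatestSample and process are hypothetical stand-ins for the driver and consumer calls.)

```cpp
#include <cstdint>

struct ImuSample { int64_t t_ns; /* ... sensor fields ... */ };

ImuSample readLatestSample();      // hypothetical driver call
void process(const ImuSample& s);  // hypothetical consumer

// Reader loop: skip samples whose stamp has not advanced, which is
// what happens when the buffer is polled faster than the IMU fills
// it (the "same timestamp twice in a row" symptom above).
void readerLoop(volatile bool& running) {
    int64_t last_stamp_ns = -1;
    while (running) {
        const ImuSample s = readLatestSample();
        if (s.t_ns == last_stamp_ns) continue;  // stale duplicate
        last_stamp_ns = s.t_ns;
        process(s);
    }
}
```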
From my observation, in my case that phenomenon is caused by both the IMU thread and the main thread writing to the IMU message without either knowing whether the other's write has finished, i.e. a data race. It typically happens when the IMU thread writes to the IMU message preemptively, close to the instant the interpolation finishes.
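(A minimal illustration of removing such a race by guarding the shared IMU message with a mutex, so the IMU thread's write and the main thread's read for interpolation can never interleave. The structure is a simplification, not the actual node.)

```cpp
#include <mutex>

// Plain data portion of the shared IMU message (simplified).
struct ImuData {
    double stamp = 0.0;
    double gyro[3] = {0.0, 0.0, 0.0};
};

// Shared state: the data plus the mutex that guards it.
struct SharedImu {
    std::mutex mtx;
    ImuData data;
};

// IMU worker thread: store a fresh sample under the lock.
void writeSample(SharedImu& shared, const ImuData& sample) {
    std::lock_guard<std::mutex> lock(shared.mtx);
    shared.data = sample;
}

// Main (interpolation) thread: take a consistent snapshot under the
// same lock, so a concurrent write cannot tear the copy midway.
ImuData readSample(SharedImu& shared) {
    std::lock_guard<std::mutex> lock(shared.mtx);
    return shared.data;
}
```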
In that case, I think you could also try using raw pointers instead of std::string (to avoid heap allocation). https://godbolt.org/ is a good place to test performance.
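(A toy example of this suggestion: formatting into a fixed, caller-provided buffer instead of building a std::string avoids a heap allocation on the hot path. Function names are illustrative.)

```cpp
#include <cstdio>
#include <cstddef>
#include <string>

// Heap version: std::to_string and operator+ may allocate per call.
std::string format_heap(double stamp) {
    return "stamp: " + std::to_string(stamp);
}

// Allocation-free version: write into a caller-provided stack buffer.
void format_stack(double stamp, char* buf, std::size_t len) {
    std::snprintf(buf, len, "stamp: %.9f", stamp);
}

int main() {
    char buf[64];
    format_stack(123.456789, buf, sizeof buf);
    std::puts(buf);  // prints "stamp: 123.456789000"
    std::puts(format_heap(123.456789).c_str());
}
```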
I'd never thought about that. Worth a try, thanks!
I'd like to ask another question not directly related to this topic. Have you ever used a visual-inertial sensor whose synchronization is implemented via linear interpolation for the calibration of the VI sensor, e.g. with the Kalibr tool (https://github.com/ethz-asl/kalibr)?
Referring to your idea of a software implementation: I found that the time offset (delay) between the IMU and image message streams is significant compared to hardware synchronization. For example, when I use rosbag record to capture both streams, the time offset at the beginning can approach 40 ms. That must affect downstream tasks, e.g. the calibration of the visual-inertial sensor. Did you also run into this issue in your project? If so, do you have any idea how to overcome it?