Read .aedat file #1
The file you are looking for is 'Generate_DHP19.m' under the 'generate_DHP19' folder.
Hi, and thanks for your interest.
Please have a look at the main script Generate_DHP19.m for details about the parameters.
The source parameter is set from the header of the imported aedat. We used DAVIS346b for our recordings, with 346x260 pixel resolution.
Yes, this is correct.
Correct, the column order is x, y, z.
Negative values are due to the relative positions of the Vicon origin and the subject. The Vicon origin is centered on the treadmill.
We get the Vicon positions that are closest in time to the start and stop events of the current accumulated frame. You can find the related piece of code in the ExtractEventsToFramesAndMeanLabels.m script, variable k.
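For illustration, a minimal sketch of this matching step, assuming eventTimes and viconTimes are timestamp vectors expressed on the same clock and viconXYZ is an N-by-3 position array (all names here are hypothetical, not the ones used in the repository):
tStart = eventTimes(startIndex);              % first event of the accumulated frame
tStop  = eventTimes(stopIndex);               % last event of the accumulated frame
[~, kStart] = min(abs(viconTimes - tStart));  % Vicon sample nearest to the start event
[~, kStop]  = min(abs(viconTimes - tStop));   % Vicon sample nearest to the stop event
label3D = mean(viconXYZ(kStart:kStop, :), 1); % mean 3D position over the frame window (cf. 'MeanLabels')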
Thanks for your kind reply. I still have some questions about the use of this dataset. (1) In the extract_from_aedat.m file, why do you need to do the following conversion: (2) As to matching the Vicon data to events, is my understanding below correct? Sorry about the above questions; hopefully I described them clearly.
No problem about the questions. I have uploaded a notebook with an example showing how to create the heatmaps from the 3D positions; it should clarify some issues. To answer the questions: the camera positions are needed after you make 2D heatmap predictions and want to triangulate from two or more camera views into 3D space. (2)
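For reference, a minimal sketch of such a triangulation (linear/DLT, two views), assuming P1 and P2 are the 3x4 projection matrices of two cameras and (u1, v1), (u2, v2) are the heatmap peaks of the same joint; these names are illustrative and not part of the DHP19 code:
A = [u1 * P1(3,:) - P1(1,:);
     v1 * P1(3,:) - P1(2,:);
     u2 * P2(3,:) - P2(1,:);
     v2 * P2(3,:) - P2(2,:)];
[~, ~, V] = svd(A);     % least-squares solution of A*X = 0
Xh = V(:, end);         % homogeneous 3D point (4x1)
X  = Xh(1:3) / Xh(4);   % Euclidean 3D joint position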
@enrico-c @tobidelbruck Hi both, I still have questions about aligning the camera recordings with the Vicon data. I used the previously discussed method to match Vicon data to events, but I had no luck matching them correctly when visualizing them. The precondition for using that method is that all DVS cameras and the Vicon start recording at the same time. Can you confirm this precondition? After loading the events and the XYZPOS data, I noticed that almost all event recordings are longer, about 2.5 seconds longer than the Vicon data. Is this because you start the DVS cameras and the Vicon at the same time, and after finishing the recording you first stop the Vicon and then the DVS cameras? So we can ignore the events that exceed the maximum time of the Vicon data (I noticed you do this in your ExtractEventsToFramesAndMeanLabels.m file)? Is it possible that there is a time shift between the events and the Vicon data? But you have already done experiments and shown promising results in your paper using the previously discussed method to match Vicon data to events. Can you confirm whether your training images and labeled heatmaps match correctly by checking and visualizing more samples? Sorry about the questions, and thanks.
I believe the timing is synchronized by 2 special events recorded in the DVS stream that come from the Vicon: one at the start and one at the end of the recording. The recordings definitely don't start and end at exactly the same time, but by using these special events you can synchronize them. I don't know the details of these special events; they are encoded by particular bit patterns that should be documented (perhaps they are not yet, except in code).
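As a rough sketch of how such sync events could be used once their timestamps are known (tSyncStart and tSyncStop here are hypothetical DVS-clock timestamps of the two special events; the actual bit patterns are not described in this thread, and viconXYZ / eventTimes are illustrative names):
nViconSamples = size(viconXYZ, 1);                            % number of Vicon samples
viconTimes = linspace(tSyncStart, tSyncStop, nViconSamples);  % Vicon samples mapped onto the DVS clock
valid = eventTimes >= tSyncStart & eventTimes <= tSyncStop;   % keep only events inside the synchronized interval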
Hi, thanks for such a great dataset; it can definitely advance research in the community.
Can you please provide some description of the parameters, and how to set them, in the following function?
function [startIndex, stopIndex, ...
pol_tmp3, X_tmp3, y_tmp3, cam_tmp3, timeStamp_tmp3] = ...
extract_from_aedat(...
aedat, events, ...
startTime, stopTime, sx, sy, nbcam, ...
thrEventHotPixel, dt, ...
xmin_mask1, xmax_mask1, ymin_mask1, ymax_mask1, ...
xmin_mask2, xmax_mask2, ymin_mask2, ymax_mask2)
What is the meaning of the input parameters (aedat, events, sy, sx, ...)? How should these parameters be set? And what is the meaning of the outputs (cam_tmp3, ...)? It would be better if you could provide an example file showing how to read a '*.aedat' file.
I turned to the files in the 'read_aedat' folder and used the following code to read a file:
output = ImportAedat('', 'mov1.aedat');
Is that correct? I did not set the source parameter since I noticed it is optional, but the default source is 'Dvs128'. Did you use a DVS128 for recording? In your paper I noticed that you used a DAVIS camera, but I could not find the exact model: davis240c, davis208, or something else? So I am confused about the x, y resolution.
As to the output, is 'output.data.polarity.cam' the camera ID?
I loaded a '*.mat' Vicon recording. Do the 1st, 2nd, and 3rd columns correspond to x, y, z? Why are there negative values? Where did you set position (0,0,0)? How do I match the N timesteps to the timestamps of the events?