
Suggestion on converting AEDAT to ES #44

Open
fanpeng-kong opened this issue Sep 10, 2019 · 2 comments

Comments

@fanpeng-kong
Contributor

Hi,

I have written an aedat_to_es parser (see this commit on my aedat branch) which can extract the DVS events recorded in jAER from a DAVIS240C camera and store them in the ES format.

However, to make the conversion more general (i.e. to support different AEDAT versions, DAVIS cameras and event types), more work needs to be done. At least I have the following questions in mind:

  1. Which header information needs to be extracted from the AEDAT file? The AEDAT header is much more complicated than that of the ATIS dat files. I think we need to extract at least the AEDAT version and the device information in order to determine the width and height.

  2. How to [handle the returned type](https://github.com/fanpeng-kong/command_line_tools/blob/ae936199f39931a5f36f4da5ee027b192d4eb17e/source/aedat.hpp#L157) while reading bytes from an .aedat file, since the event type is only determined at run time instead of compile time?

  3. In case we want to store other DAVIS events, such as IMU events or APS frames, in the ES file, what would be the better strategy: using generic events or extending the ES specification to include these new event types?

Any suggestion or discussion is appreciated.

Cheers,
Fanpeng

@aMarcireau
Member

Hi,

@biphasic has developed a Python/C++ library (https://github.com/neuromorphic-paris/loris) to read, convert and write event files, so he can probably provide some insight on the matter. Here are my comments on your questions:

  1. Most algorithms need the width and height of the sensor. I think both need to be read from the header, or guessed based on the file version.

  2. Dynamic types are hard to handle given the approach chosen in sepia and tarsier. Some workarounds are possible, but generally involve boilerplate code (see https://github.com/neuromorphic-paris/command_line_tools/blob/master/source/cut.cpp for example). If a reasonable number of event types are supported, I would use a switch statement and dedicated (compile-time) writers for each type.

  3. The "generic" event type of the ES specification is meant as a patch for algorithm pipelines where an intermediary stream of events must be saved to avoid recomputation. Though it can be used to store multiple event types in a single file, I think we need a better long-term solution. The ES specification does not support multiple event types per file for non-generic types: it is designed to be used within a container format which would hold high-level information (date, description, media owner...) and bundle multiple stream types (events, frames, audio...). This approach makes it possible to use existing compression formats, notably for frames (HEVC, for example). To sum up, here are the steps I think we should follow in the long term:

  • add an IMU event type to the ES specification
  • create a container format that can deal with at least ES and HEVC (audio support would be nice to have). I think MP4 is a good basis for such a format.
  • write a code layer on top of sepia to read the container format, and use the sepia implementation and existing HEVC decoders to generate a hybrid stream of synchronised frames and events.
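
Regarding the switch-based dispatch mentioned in point 2, a minimal sketch could look like the following (the type tags and writer specialisations are hypothetical, not sepia's actual API): the event type is only known at run time, so a switch maps each tag to a dedicated compile-time handler, keeping the per-event code statically typed.

```cpp
#include <cstdint>
#include <stdexcept>

// Hypothetical type tags for the event kinds found in a DAVIS AEDAT file.
enum class event_kind : uint8_t { dvs = 0, imu = 1, aps = 2 };

// One compile-time writer per event type; the bodies are placeholders.
template <event_kind kind>
void write_event(const uint8_t* payload);
template <>
void write_event<event_kind::dvs>(const uint8_t*) { /* decode x, y, polarity, t */ }
template <>
void write_event<event_kind::imu>(const uint8_t*) { /* decode accelerometer / gyroscope sample */ }
template <>
void write_event<event_kind::aps>(const uint8_t*) { /* decode frame readout */ }

// Run-time tag to compile-time writer dispatch.
inline void dispatch(event_kind kind, const uint8_t* payload) {
    switch (kind) {
        case event_kind::dvs:
            write_event<event_kind::dvs>(payload);
            break;
        case event_kind::imu:
            write_event<event_kind::imu>(payload);
            break;
        case event_kind::aps:
            write_event<event_kind::aps>(payload);
            break;
        default:
            throw std::runtime_error("unknown event type");
    }
}
```

The switch is the only place where the run-time type appears; everything downstream of it stays compile-time typed.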

In the short term, using a single file of generic events is probably the easiest solution. Generating two files (one with DVS events and the other with generic events representing IMU events and APS frames) is another option.
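
For the single-file option, a minimal sketch of packing an IMU sample into the byte payload of a generic event might look like this (the struct and the payload layout are assumptions for illustration, not part of the ES specification):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical IMU sample as produced by a DAVIS camera.
struct imu_sample {
    uint64_t t; // timestamp in microseconds, carried by the generic event itself
    float accelerometer[3];
    float gyroscope[3];
};

// Serialise the six floats into a byte payload (host byte order) suitable
// for the variable-length bytes of a generic ES event.
inline std::vector<uint8_t> pack_imu_payload(const imu_sample& sample) {
    std::vector<uint8_t> payload(6 * sizeof(float));
    std::memcpy(payload.data(), sample.accelerometer, 3 * sizeof(float));
    std::memcpy(payload.data() + 3 * sizeof(float), sample.gyroscope, 3 * sizeof(float));
    return payload;
}
```

A reader would need an out-of-band convention (or a marker byte) to tell IMU payloads from APS payloads if both end up in the same generic-event file.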

I would greatly appreciate your comments on these ideas, which are obviously open to discussion.

Cheers,
Alex

@biphasic
Member

biphasic commented Oct 9, 2019

Hello,
inilabs really just looks at the file version in the header, see here.
A full list of what can be found in an AEDAT header is here. As you say, the sensor size can be derived from the source ID.
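
A minimal header-reading sketch along those lines (the function and field names are illustrative, not from any of the repositories): each header line starts with '#', the first one carries the format version (for example "#!AER-DAT2.0"), and jAER also writes an "# AEChip:" line naming the device, from which the sensor size can be inferred.

```cpp
#include <istream>
#include <sstream>
#include <string>

struct aedat_header {
    float version = 1.0f; // assumed when no "#!AER-DAT" line is present
    std::string chip;     // e.g. "eu.seebetter.ini.chip.davis.DAVIS240C"
};

// Consume the ASCII header lines and stop at the first binary byte.
inline aedat_header read_aedat_header(std::istream& input) {
    aedat_header header;
    while (input.peek() == '#') {
        std::string line;
        std::getline(input, line);
        if (!line.empty() && line.back() == '\r') {
            line.pop_back(); // jAER writes CRLF line endings
        }
        if (line.rfind("#!AER-DAT", 0) == 0) {
            header.version = std::stof(line.substr(9));
        } else if (line.rfind("# AEChip:", 0) == 0) {
            auto value = line.substr(9);
            value.erase(0, value.find_first_not_of(" \t"));
            header.chip = value;
        }
    }
    return header;
}
```

After the loop the stream is positioned at the first event, so the binary reader can take over directly.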

@aMarcireau I support the idea of adding IMU events and then adding frames within a container format. I am just wondering whether it will be straightforward to synchronise those?
