How to organize ephys+opto-projected images #78
We are running slice-based e-phys with multi-electrode arrays that are simultaneously stimulated optogenetically. Using a projector through a set of lenses, we stimulate various areas with time-varying intensity. I'm not sure how to store the resulting data in NWB, or how to think about it. The kind of data we have is a set of shapes on the screen (e.g. polygon/circle/ellipse) and a set of functions that determine the intensity of each shape for every projected frame (at 120 Hz); a rough sketch of the data layout is at the end of this post. Ideally, we'd have an image series where you can provide a series of masks (ordered in the z-axis) along with a 2D array for each mask, indicating the time-varying RGB value of the mask area. Maybe I can create an …

We also have an alignment: for every projected frame we have an associated index into the e-phys data marking when that frame was projected. We have custom hardware digital IO passed between the projection and e-phys systems for this. I saw alignment code somewhere, but can't find it anymore. For the alignment indexes and other associated logged digital IO, can I just create a bunch of …?

Side question: I couldn't figure out from the docs what the difference is between an epoch and a trial. And again, do I use whichever I like and not worry about what it may mean to other people or tools around NWB, as long as people in our lab understand?

Similarly, we have lightsheet microscope data from cleared whole brains. We have (or will have) an alignment of the brain to the Allen atlas, and then a bunch of regions in the 3D space along with cell counts for each region. We could simply link to the whole-brain TIFF file from the NWB file, but I don't see structures for the region definitions and the per-region cell counts. Are there suggestions on how to approach this? If we made an NDX extension for the projected (and cleared-brain) data, how would it work with tools that e.g. know how to read normal …?
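To make the projection data concrete, here's roughly its shape (array names and sizes below are just illustrative, not actual variable names from our code):

```python
import numpy as np

n_frames = 12_000           # e.g. 100 s of projection at 120 Hz
height, width = 1080, 1920  # projector resolution (illustrative)
n_shapes = 3                # e.g. one polygon, one circle, one ellipse

# One boolean mask per projected shape, ordered along the z-axis
shape_masks = np.zeros((n_shapes, height, width), dtype=bool)

# Per-frame RGB intensity of each shape: (n_shapes, n_frames, 3)
shape_rgb = np.zeros((n_shapes, n_frames, 3), dtype=np.float32)

# Alignment: index into the e-phys samples for every projected frame
frame_to_ephys_index = np.zeros(n_frames, dtype=np.int64)
```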
Wow, that's a lot of questions! I'll let others handle the broad philosophy points and tackle some of the technical ones.
Sounds like standard ophys types are suitable here (`Device` -> `ImagingPlane` -> `OnePhotonSeries`) - yes, we currently refer to lightsheet as a `OnePhotonSeries`, but we're aware it's not identical to laser-scanning approaches, so just ignore the laser-based metadata like the scan line rate.
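A minimal PyNWB sketch of that chain, assuming a reasonably recent PyNWB (one that has `OnePhotonSeries`) and assuming you link to the existing whole-brain TIFF rather than copying the pixel data; file names, wavelengths, and rates below are placeholders:

```python
from datetime import datetime, timezone
from uuid import uuid4

from pynwb import NWBFile
from pynwb.ophys import OnePhotonSeries, OpticalChannel

nwbfile = NWBFile(
    session_description="cleared whole-brain lightsheet imaging",
    identifier=str(uuid4()),
    session_start_time=datetime(2024, 1, 1, tzinfo=timezone.utc),
)

device = nwbfile.create_device(
    name="LightsheetMicroscope",
    description="lightsheet microscope used for cleared whole-brain imaging",
)
optical_channel = OpticalChannel(
    name="OpticalChannel",
    description="green emission channel",
    emission_lambda=520.0,  # placeholder, nm
)
imaging_plane = nwbfile.create_imaging_plane(
    name="ImagingPlane",
    optical_channel=optical_channel,
    description="cleared whole brain",
    device=device,
    excitation_lambda=488.0,  # placeholder, nm
    imaging_rate=1.0,         # placeholder plane rate, Hz
    indicator="unknown",
    location="whole brain",
)

# Link to the existing TIFF instead of copying the pixel data into the NWB file
lightsheet = OnePhotonSeries(
    name="LightsheetVolume",
    imaging_plane=imaging_plane,
    external_file=["whole_brain.tiff"],  # hypothetical relative path
    format="external",
    starting_frame=[0],
    rate=1.0,       # placeholder frame rate, Hz
    unit="n.a.",
)
nwbfile.add_acquisition(lightsheet)
```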
Those are not hard-coded areas of the schema; instead, they would be free-form columns added to your …
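If that free-form column approach is also what you end up using for the per-region cell counts from the cleared brains, a minimal sketch (continuing with the `nwbfile` from the sketch above; table name, column names, and example values are just suggestions) could be:

```python
from hdmf.common import DynamicTable, VectorData

# Hypothetical table of Allen-atlas regions with per-region cell counts
region_counts = DynamicTable(
    name="region_cell_counts",
    description="cell counts per Allen CCF region from the cleared whole-brain data",
    columns=[
        VectorData(
            name="region_acronym",
            description="Allen atlas region acronym",
            data=["VISp", "CA1"],  # illustrative values
        ),
        VectorData(
            name="cell_count",
            description="number of detected cells in the region",
            data=[1523, 842],  # illustrative values
        ),
    ],
)

# Store it alongside other derived results, e.g. in a processing module
module = nwbfile.create_processing_module(
    name="ophys", description="lightsheet segmentation and cell-counting results"
)
module.add(region_counts)
```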
Keep in mind that if you're planning to submit to DANDI, you should include subject metadata (…
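For example, a minimal `Subject` entry might look like this (IDs and values are placeholders; check the DANDI validation docs for the exact required fields):

```python
from pynwb.file import Subject

# Attach subject metadata to the same nwbfile as above
nwbfile.subject = Subject(
    subject_id="mouse-001",      # placeholder ID
    species="Mus musculus",
    sex="M",
    age="P90D",                  # ISO 8601 duration
    description="example subject entry",
)
```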
This sounds rather like holographic stimulation, is that correct? In that case, there is an extension that has been made for this, which will be entering official review by the Technical Advisory Board soon: https://github.com/catalystneuro/ndx-patterned-ogen. Check it out and let us know if it fits your needs; the odd thing I see is the bit about 'RGB' values.
All data in the NWB file should be aligned to a common timebase. Whether or not you include the original TTL pulses and whatnot in some capacity is up to you; sometimes it depends on how complicated the rig is, such that the act of aligning might be considered part of replicating the findings of the study. I'd usually add such alignment signals as …
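For example, a bare-bones sketch of keeping the frame-sync digital IO around after converting it to seconds on the common timebase (names and numbers are placeholders):

```python
import numpy as np
from pynwb import TimeSeries

# Hypothetical arrays: one entry per projected frame, already converted from
# e-phys sample indices to seconds on the session's common clock
frame_times = np.arange(12_000) / 120.0      # placeholder: 120 Hz projection
ttl_state = np.ones(12_000, dtype=np.uint8)  # placeholder digital IO values

frame_sync = TimeSeries(
    name="projector_frame_sync",
    data=ttl_state,
    unit="n.a.",
    timestamps=frame_times,
    description=(
        "digital IO marking when each projected frame was displayed, "
        "aligned to the e-phys clock"
    ),
)
nwbfile.add_acquisition(frame_sync)
```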
In general, epochs occur over a longer period of time and represent qualitative shifts in the structure of the experiment. For in vivo examples, this can include a mouse being moved to a different type of maze, or being presented different categories of visual stimuli (this example from the Allen Institute Visual Coding dataset has a switch between natural movies and drifting gratings, among others). It's somewhat uncommon to see an epochs table with more than, say, a dozen rows. Trials are shorter and usually repeated much more frequently, and they typically have a bunch of parameters that change with each trial. For example, this file from the International Brain Lab had quite a lot of values that changed over each individual trial.
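As a rough sketch of how that difference plays out in PyNWB (column names and values below are made up for your projection experiment):

```python
# Epochs: a few long, qualitatively different blocks of the session
nwbfile.add_epoch(start_time=0.0, stop_time=600.0, tags=["baseline"])
nwbfile.add_epoch(start_time=600.0, stop_time=1800.0, tags=["patterned_stimulation"])

# Trials: many short repeats, with per-trial parameters as free-form columns
nwbfile.add_trial_column(name="shape", description="projected shape for this trial")
nwbfile.add_trial_column(name="peak_intensity", description="peak projected intensity (a.u.)")
nwbfile.add_trial(start_time=600.0, stop_time=602.5, shape="ellipse", peak_intensity=0.8)
nwbfile.add_trial(start_time=603.0, stop_time=605.5, shape="polygon", peak_intensity=0.4)
```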
Referring to the NIH Data Sharing Policy in particular, this section:

"""
Scientific data are defined as the recorded factual material commonly accepted in the scientific community as of sufficient quality to validate and replicate research findings, ...
"""

It is therefore quite important to give consideration to how you represent your data so that others are able to understand it in order to validate and replicate the findings, especially reviewers of the eventual publication associated with the dataset.