Read multiple fields of view on a grid, for HCS dataset #200

Open · tcompa opened this issue May 19, 2022 · 16 comments
@tcompa (Contributor) commented May 19, 2022

Hi there, and thanks for your support.

We (me and @mfranzon) encountered some issues while trying to reproduce the behavior of #57 (comment), that is, to visualize (in napari) a zarr file which includes a single well with several fields of view.

Our zarr file contains an HCS dataset, structured as in #57 (comment). There is only one well, which includes four fields of view. Upon loading the file in napari, the logs show that

11:58:12 DEBUG creating lazy_reader. row:0 col:0
11:58:12 DEBUG creating lazy_reader. row:0 col:1
11:58:12 DEBUG creating lazy_reader. row:1 col:0
11:58:12 DEBUG creating lazy_reader. row:1 col:1

which is the expected behavior (placing the four fields on a 2x2 grid, as in https://github.com/ome/ome-zarr-py/blob/master/ome_zarr/reader.py#L407-L410). However, the get_tile_path function in reader.py is defined as

    def get_tile_path(self, level: int, row: int, col: int) -> str:
        return (
            f"{self.row_names[row]}/"
            f"{self.col_names[col]}/{self.first_field}/{level}"
        )

where the field is strictly equal to self.first_field, which is equal to "0".

We must be missing something here, because it seems that the reader will never load any field other than the first one.
Is there a way to show the four fields together, on a 2x2 grid (as in #57 (comment))?
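
For illustration, here is a hedged sketch of the behaviour we expected, i.e. a tile path that indexes the fields of one well on the 2x2 grid from the logs above (get_field_tile_path and the hard-coded grid width are hypothetical, not part of ome-zarr-py):

    def get_field_tile_path(self, level: int, row: int, col: int) -> str:
        # Map the (row, col) grid position to a field index: on a 2x2 grid,
        # row 1 / col 0 would correspond to field 2. The well itself is fixed.
        field = row * 2 + col
        return f"{self.row_names[0]}/{self.col_names[0]}/{field}/{level}"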

@jluethi commented May 19, 2022

I'm also very interested in this (and am working with @tcompa & @mfranzon on it).

If there are ways to display multiple sites, will there also be ways to determine the arrangement of the multiple fields of view? At the moment, it seems to choose the arrangement closest to a square, but the data may have been acquired in a different shape (e.g. a 1x4 instead of a 2x2 arrangement for the example on top).

The code that generates the row & column information in the log above does a square calculation:

        column_count = math.ceil(math.sqrt(field_count))
        row_count = math.ceil(field_count / column_count)
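
As a minimal worked example of this heuristic (only the two math.ceil calls from the snippet above):

    import math

    field_count = 4
    column_count = math.ceil(math.sqrt(field_count))   # ceil(2.0) -> 2
    row_count = math.ceil(field_count / column_count)  # ceil(2.0) -> 2
    # A 1x4 acquisition would therefore still be laid out as a 2x2 grid.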

Could this information instead be read from the multiscales translation parameters, allowing for arbitrary positioning of the fields of view within a well? Or could the grid dimensions be read from some other part of the metadata? Are there already plans to support things like this?

@will-moore (Member)

Hi, get_tile_path(self, level: int, row: int, col: int) is a method of the Plate class and is used for loading each Well as a 'tile' when viewing a Plate. So, when viewing a Plate, you only see the first Field of each Well:

$ napari --plugin napari-ome-zarr https://uk1s3.embassy.ebi.ac.uk/idr/zarr/v0.3/idr0094A/7751.zarr/

However, you see all the fields when viewing a single Well (add the row/column to the path):

$ napari --plugin napari-ome-zarr https://uk1s3.embassy.ebi.ac.uk/idr/zarr/v0.3/idr0094A/7751.zarr/A/1/

And you can open individual images with:

$ napari --plugin napari-ome-zarr https://uk1s3.embassy.ebi.ac.uk/idr/zarr/v0.3/idr0094A/7751.zarr/A/1/0/

If you add translation info into the multiscales metadata of each image, and open each image individually in napari, then you should find that the translation is respected.
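
For illustration, such a per-image translation looks roughly like the following .zattrs fragment in OME-NGFF 0.4 (the axes and the numeric values here are made up; only the structure follows the 0.4 multiscales spec):

    {
      "multiscales": [{
        "version": "0.4",
        "axes": [
          {"name": "c", "type": "channel"},
          {"name": "y", "type": "space", "unit": "micrometer"},
          {"name": "x", "type": "space", "unit": "micrometer"}
        ],
        "datasets": [{
          "path": "0",
          "coordinateTransformations": [
            {"type": "scale", "scale": [1.0, 0.65, 0.65]},
            {"type": "translation", "translation": [0.0, 0.0, 1664.0]}
          ]
        }]
      }]
    }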

It's a nice idea to use these translations when stitching fields into a single Well, but it's a bit more work, since the translations can be in multiple dimensions, so it's not as simple as the dask.array.concatenate that is currently being used:

    lazy_rows.append(da.concatenate(lazy_row, axis=x_index))

@jluethi commented May 19, 2022

Thanks a lot for the super-fast response @will-moore!

Hi, get_tile_path(self, level: int, row: int, col: int) is a method of the Plate class and is used for loading each Well as a 'tile' when viewing a Plate. So, when viewing a Plate, you only see the first Field of each Well.

Is it a design choice to only show the first field of view per well in the plate view, or just something that hasn't been implemented yet?
At least for 2D data (maximum intensity projections) rescaled to 8-bit (instead of the 16-bit that we now try to use in napari), browsing through remotely hosted plates with all the FOVs for dozens of wells worked very nicely (using a custom tool we built around it):

(video attachment: Screen.Recording.2022-05-19.at.14.23.14.mov)

We're now trying to build towards the same experience using OME-Zarr & napari. We'd be very happy to contribute to making this happen if you think it's something the ome-zarr-py library could support!
We've been able to hack something together in this direction by fusing all the wells into single FOVs and saving them to OME-Zarr, which makes the same data browsable in napari, but performance is currently still much slower...

(video attachment: Screen.Recording.2022-05-19.at.14.43.49.mov)

However, you see all the fields when viewing a single Well (add the row/column to the path)

Also, interesting thought on accessing specific wells or specific images on their own.
Loading single wells in the way you describe above doesn't load the pyramids anymore, right?
I get a warning when I load some wells:
UserWarning: data shape (19440, 20480) exceeds GL_MAX_TEXTURE_SIZE 16384 in at least one axis and will be downsampled. Rendering is currently in 2D mode.

It then loads the full-resolution image data first & downsamples it. Is this the expected behavior? It would be very useful to be able to lazily load the necessary pyramid levels within the FOVs of a single well.
Or did we have some errors in our metadata? (It worked to load things in the plate setting without those issues, where we put all FOVs into a single site so that they would be displayed.)

It's a nice idea to use these translations when stitching fields into a single Well, but a bit more work since the translations can be in multiple dimensions so it's not simple as the dask.array.concatenate that is currently being used.

True, that could get more complicated. Are there already thoughts on how one could handle well arrangements? I would love to join that discussion (or start it in a separate issue so this one doesn't get too confusing) :)

@sbesson (Member) commented May 19, 2022

Jotting down a few thoughts on possible options for storing the well position metadata:

  • in addition to the dataset-level coordinateTransformations, the OME-NGFF 0.4 specification also introduced the concept of coordinateTransformations at the level of the multiscales. The original PR included several discussions around the semantics and usage of this transformation, with for instance a possible use case for registration. While working on the HCS implementations of the OME-NGFF 0.4 specification, we also briefly discussed internally whether this transformation level could be a potential candidate for storing the absolute position of each field of view, rather than building a grid ad hoc as is currently done.
  • as of today, our implementations of the OME-NGFF specification have limited support for reading and writing coordinateTransformations defined at the level of the multiscales dictionary. As always, a real-world example is the best driver for the addition of these APIs.
  • alongside the ongoing metadata specification work engaged by the OME team, a possible second location for such metadata would be the OME metadata itself. The XSD schemas define the WellSample.PositionX and WellSample.PositionY properties to that effect, and this metadata is currently used by the OMERO web viewers to display the field positions when selecting a well, e.g. in http://idr.openmicroscopy.org/webclient/?show=well-2104073
  • finally, the ongoing proposal on spaces and transforms aims at generalising the transformation concepts introduced in OME-NGFF 0.4 and defines a set of named coordinate systems into which the array data can be mapped via transformations

@will-moore (Member)

Your custom plate viewer looks very nice. Is that a browser (JavaScript) client or python?

Currently the viewing of a single Well loads all of the Images at full resolution.
In all our example data, we only had a maximum of 9 fields per Well, so I can see that in your case this creates a layer that is too big for napari.

I have a PR to fix the loading of a pyramids for Plate (#195) prompted by @mfranzon, so we should be able to apply the same approach for loading Wells.
It could also work to show all the Fields within each Well in the Plate layout (as you've done), although it would be slower than 1 field per Well.

We had a long discussion on NGFF "Collections" of images, ome/ngff#31, partly thinking about replacing the current HCS spec with something more generic. Some of that discussion was about the layout of Wells within a Plate. There's a trade-off: if everything is very generic, you could in theory use the translation of each resolution of each Image's pyramid to arrange all the images into a Plate. Then you could open all the images in napari, each as a separate layer with its own translation, and napari (or any other viewer that supports the spec) should be able to lay them out correctly (without needing "HCS support"). However, the number of layers would be prohibitive and performance would suffer.

As I mentioned, you can already store fov translations in the multiscales metadata. Let's assume that when loading HCS data, we could use those translations to stitch fields into Wells (and even stitch those Wells into a Plate).
A minor issue here is that this behaviour isn't defined in the ngff spec. Other clients may ignore the translations or decide that they should apply at the Plate level (instead of the Well).

So, I would suggest:

  • Create some sample data where the Fields of a Well have correct translations, so that opening the images individually in napari shows the correct layout.
  • Have a go at updating the ome-zarr-py Well class, to respect the translations when stitching Fields into a Well.
  • If this seems viable, then open a PR (or a more specific issue) to propose defining that behaviour in the ngff spec.
  • Bonus: investigate whether you can also stitch those multi-fov Wells into a Plate

@jluethi commented May 19, 2022

Thanks @sbesson for the overview and the links! Will have to read through them in a bit more detail now! Certainly happy to provide real-world example data to test this!

a possible second location could be to store such metadata as OME metadata

That's also a very interesting idea. It may be a good way to approach grid-based layouts, while the spaces & transforms part looks very promising for settings where images aren't placed on a perfect grid.

Your custom plate viewer looks very nice. Is that a browser (JavaScript) client or python?

It's a browser-based viewer using the AngularJS framework and WebGL for the viewer, but it's unfortunately already fairly old (server code in Python 2, using AngularJS, which is also past official support). That's why we were looking for a new solution to host such data and expand the utility beyond what we covered before.

Currently the viewing of a single Well loads all of the Images at full resolution.

Ah, we often hit use cases where we have 10x10 images per well or even more, thus >100 images and thus layers that aren't reasonable for napari anymore. We'd certainly be very interested in treating wells similarly to the plate, e.g. loading only the relevant pyramid level.

It could also work to show all the Fields within each Well in the Plate layout (as you've done), although it would be slower than 1 field per Well.

Do you mean showing more fields is slower as in “when we show larger areas of acquired data, that’s slower”, or as in “it’s faster to show the same area fused into fewer fields”?

However, the number of layers would be prohibitive and performance would suffer.

Yes, I would also be very concerned about scaling. We easily have >1000 fields of view on a plate, >100 FOVs in a well. If those were all separate layers, the current napari setup would not handle that gracefully. Would your suggestion with the translations result in something with many layers (1 per FOV), or do you think that stitching per well can also be done by applying translations while still resulting in a single layer (similar to how wells are stitched into a plate as a single layer)?

Very curious to try this out now! Thanks a lot for taking the time for those thoughtful explanations and the links to relevant conversations @will-moore & @sbesson

@will-moore (Member)

I would expect showing all the Fields to be slower simply because you'd probably be loading more pixel data to the client. E.g. 100 Fields per Well is going to load more than 1 Field per Well. Maybe if the Plate isn't very big, there aren't many Fields per Well, and the Images have small-enough resolution levels, then you might not load a lot more pixel data. But you'll still be making a lot more requests, which could be limiting.

If you're interested in a browser-based client, are you aware of https://github.com/hms-dbmi/vizarr? e.g.
https://hms-dbmi.github.io/vizarr/?source=https://uk1s3.embassy.ebi.ac.uk/idr/zarr/v0.3/idr0094A/7751.zarr

I'm still on a learning curve with dask, but I would hope you could do the equivalent of what you'd do in numpy:

  • Start with a single Well (leave Plate for later), looking at get_lazy_well() https://github.com/ome/ome-zarr-py/blob/master/ome_zarr/reader.py#L439
  • create a dask array of zeros representing the Well, loop through fields (instead of row/column) loading the data for each - you'll also need to load the .zattrs for each field to get the translation. I guess this should be done in __init__.
  • translate the data and add it to the zeros array (instead of concatenating)
  • also, do this for a pyramid of levels (as is done for Plate). Could maybe try this step first.
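
Something along those lines, perhaps (a rough, untested sketch, not the ome-zarr-py implementation; stitch_well, the field arrays and the pixel offsets are hypothetical, and the slice assignment needs a reasonably recent dask):

    import dask.array as da
    import numpy as np

    def stitch_well(fields, offsets_px, well_shape, dtype=np.uint16):
        """fields: 2D dask arrays, one per field of view.
        offsets_px: (y, x) pixel offsets derived from each field's translation."""
        # A zeros canvas representing the whole Well, instead of concatenating rows.
        well = da.zeros(well_shape, dtype=dtype, chunks=fields[0].chunksize)
        for tile, (y, x) in zip(fields, offsets_px):
            h, w = tile.shape
            well[y:y + h, x:x + w] = tile  # place each field at its offset
        return well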

@jluethi commented Jun 27, 2022

It took us (@mfranzon, @tcompa & me) quite a while to get to this, but we've focused on this issue over the last week. When loading low-res pyramid levels, multi field-of-view setups perform much worse than the same data combined beforehand and saved as a single field of view. This gives us major concerns about scalability & interactive viewing when saving HCS datasets according to the current OME-NGFF spec, which is organized by field of view.

First, we have established pyramidal lazy-loading of wells. We have started to play around with the translation parameters and can place single fields of view correctly based on them. We have started looking into combining multiple Multiscale image nodes into a well, but could not find a way to combine them into a single napari channel.

Conclusion: The metadata approach seems reasonable, but we’d need to implement it from “scratch” for the well approach.

Second, we implemented pyramidal lazy-loading of wells so we could judge the performance, keeping the "guestimated" grid placements for the moment. Here is a PR that allows for this and thus lets us look at single wells even if they have dimensions like (3, 1, 19440, 20480).

#209

Before making this approach more complex by parsing all the metadata to position the FOVs, we evaluated its performance. We had the same image data in two different OME-Zarr files: we either combined all fields of view beforehand and saved them as a single field of view (FOV), or saved them as multiple FOVs.

See the file trees here as an explanation.

Single FOV Tree
SingleFOV.zarr
└── B
    └── 03
        └── 0          <= The single site
            ├── 0       <= Pyramid level 0
            │   ├── 0
            │   ├── 1
            │   └── 2
            ├── 1
            │   ├── 0
            │   ├── 1
            │   └── 2
            ├── 2
            │   ├── 0
            │   ├── 1
            │   └── 2
            ├── 3
            │   ├── 0
            │   ├── 1
            │   └── 2
            └── 4       <= Pyramid level 4
                ├── 0
                ├── 1
                └── 2
Multi FOV Tree
20200812-CardiomyocyteDifferentiation14-Cycle1.zarr
└── B
    └── 03
        ├── 0          <= The first site
        │   ├── 0       <= Pyramid level 0
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   ├── 1
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   ├── 2
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   ├── 3
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   └── 4       <= Pyramid level 4
        │       ├── 0
        │       ├── 1
        │       └── 2
        ├── 1          <= The second site
        │   ├── 0
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   ├── 1
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   ├── 2
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   ├── 3
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   └── 4
        │       ├── 0
        │       ├── 1
        │       └── 2
        ├── 2
        │   ├── 0
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   ├── 1
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   ├── 2
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   ├── 3
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   └── 4
        │       ├── 0
        │       ├── 1
        │       └── 2
        ├── 3
        │   ├── 0
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   ├── 1
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   ├── 2
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   ├── 3
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   └── 4
        │       ├── 0
        │       ├── 1
        │       └── 2
        ├── 4
        │   ├── 0
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   ├── 1
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   ├── 2
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   ├── 3
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   └── 4
        │       ├── 0
        │       ├── 1
        │       └── 2
        ├── 5
        │   ├── 0
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   ├── 1
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   ├── 2
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   ├── 3
        │   │   ├── 0
        │   │   ├── 1
        │   │   └── 2
        │   └── 4
        │       ├── 0
        │       ├── 1
        │       └── 2
        .
        .
        .
(and so on)

In the end, both Zarr files contain the same data, but their organization is slightly different. At the highest resolution (pyramid level 0), both zarr structures have the same number of chunks and the same file size (both have 216 chunks over the 3 channels & 72 original fields of view). The higher we go in the pyramid, the more striking the difference becomes: e.g. at pyramid level 3 (with coarsening factor 2), the single-FOV pyramid consists of 6 files (we kept the chunk size somewhat constant), while the multi-FOV pyramid still consists of 216 files (because every field-of-view folder has its own pyramid, and each pyramid has a chunk file at every level).

Performance when looking at the highest resolution is quite similar, but we run into strong scalability issues when loading the higher pyramid levels. This often happens when we browse a dataset zoomed out or when we initially load a dataset. For example, loading a single well with 9x8 sites & 3 channels takes about 0.5 s in napari in the single-FOV case, but about 4 s in the multi-FOV case (the same amount of image data is loaded and it looks exactly the same, but the on-disk structure differs).

This is quite worrying, as we were planning to use the multi-FOV architecture to save data with dozens to hundreds of wells, each having dozens to ~100 fields of view. In that case, performance for loading low-res pyramid levels would scale terribly, as we'd be loading 1000s or 10,000s of tiny pyramid files (e.g. 72 pyramid files per well instead of 3-6 pyramid files per well).

We already have a test case with 23 wells of 72 FOVs each (1656 fields in total), and in single-FOV mode (when the FOVs are combined into a single image before putting them into an OME-Zarr file) this performs nicely. See the updated video here:

(video attachment: napari_23well_compressed.mp4)

(performance is not amazing, but with NAPARI_ASYNC on, it’s quite usable)

Given the scale we're aiming for, multi-FOV use cases in OME-Zarr don't look like something that would scale, unless there is a trick we have not considered yet. According to our tests, the limiting factor really is loading the many low-resolution pyramid files. Our single-FOV workaround contains the same amount of actual image data, but the chunks remain large at the higher pyramid levels because we combined FOVs.

Are there thoughts on ways to use the OME-NGFF spec with such large datasets? Are there ideas on how we could handle lower-resolution pyramid levels when 100s to 1000s of FOVs exist? Combining FOVs before putting them into OME-Zarr works for visualization, but isn't what the spec describes.

@sbesson (Member) commented Jun 29, 2022

@jluethi thanks for the follow-up and the detailed report. I think your investigation raises extremely important questions about data structure and layout that are becoming increasingly common.

First of all, I do not think these questions are specific to OME-NGFF. We have received similar IDR submissions, i.e. acquisitions using HCS instrumentation where each well effectively scans a wide area (like a whole-slide scanner would do) and the data is stored on disk as 100-1000 fields of view. Based on our interactions with the submitters, we established that the most valuable representation for some of these datasets was to stitch the fields of view into a large pyramidal image that could then be annotated with ROIs and tabular data - see https://idr.openmicroscopy.org/webclient/img_detail/12532806/?dataset=13651 for an example. Conceptually, this seems to be largely along the same lines as the single-FOV tree layout that you described above.

From the client perspective, I think the amount of binary data that is fetched for display is not that different between the two representations. Instead, the main divergence likely arises from the number of extra calls needed in the multiple-FOV layout to navigate between the different fields of view, which are effectively tiles/chunks in the single-FOV representation.

It would be interesting to increase the logging verbosity in your commands and profile which calls account for the extra time in the multiple-FOV scenario. I would not rule out a performance issue due to the succession of calls made in the ome-zarr library and/or napari.

That being said, at the end of the day a consequence of keeping fields of view separate in their data representation is that the client is ultimately responsible for the computational cost of fetching & assembling data from multiple sources, and this is expected to be less scalable than making that computation upfront. The biggest question is whether having the fields of view resolvable individually is a requirement of your application. If so, the multi-FOV tree preserves access to individual tiles. If not, the single-FOV tree offers clear advantages, especially when it comes to interactivity across the whole well.

Combining FOVs before putting them into OME-Zarr works for visualization, but isn't what the spec describes.

Thanks for this feedback. I assume one of the culprit sentences is https://ngff.openmicroscopy.org/latest/#hcs-layout, and especially "All images contained in a well are fields of view of the same well"?
As mentioned above, I don't think there is a requirement that data acquired using an HCS instrument MUST be stored exactly in its original form. Fields of view can be preprocessed, including correction, stitching... and in some cases, like the IDR example above, the HCS structure might not even be relevant so the plate metadata could be dropped and stitched wells could simply be stored as multiscale images.
Any suggestions on how to rephrase ambiguous sections are very welcome.

@jluethi commented Jun 30, 2022

Thanks for the reply @sbesson, and for pointing to this other example!

It would be interesting to increase the logging verbosity in your commands and profile which calls account for the extra time in the multiple-FOV scenario. I would not rule out a performance issue due to the succession of calls made in the ome-zarr library and/or napari.

Yes, we were also looking into this. Profiling the lazy loading is a bit tricky, so we profiled the simpler case of loading the data into an array outside of napari and measured the speed. This is a lower bound for performance, as viewer overheads are not yet included. In these tests, we saw the following when loading different levels from disk:

Loading time per level (in seconds):

    level | single-FOV | multi-FOV
    ------+------------+----------
    0     |    0.46    |   0.42
    1     |    0.11    |   0.21
    2     |    0.04    |   0.17
    3     |    0.02    |   0.16

Thus, performance when loading level 0 is comparable (because in the single-FOV case we also chunk the data with chunk sizes corresponding to the original images), but the higher up we go in the levels, the worse the multi-FOV case performs. Already at level 3, it is 5-8x slower, and it looks like it gets worse as we scale further.
The main reason is that at higher pyramid levels, the single-FOV pyramid can be saved in a few (at some point a single) chunks per channel, while the multi-FOV case contains at least one file per FOV.* Here's an example for the well we profiled above, containing 72 FOVs & 3 channels:

    Level | Files (single-FOV) | Files (multi-FOV, across all FOVs)
    ------+--------------------+-----------------------------------
    0     |        216         |        216
    1     |         48         |        216
    2     |         12         |        216
    3     |          6         |        216

Thus, if the goal is to load high pyramid levels (i.e. low-res data), performance takes a significant hit the smaller the individual FOV structures are. Especially when larger plates & remote access are involved, this turns loading from slightly laggy to completely unusable for our cases. And one of the great benefits of an OME-Zarr plate file is the ability to browse a whole plate of images lazily.
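
For reference, a minimal sketch of the kind of measurement behind these numbers (profile_level and the on-disk layout are illustrative; for the multi-FOV case this would be looped over every field folder):

    import time
    from pathlib import Path
    import dask.array as da

    def profile_level(image_path: str, level: int):
        # Time reading one pyramid level fully into memory...
        t0 = time.perf_counter()
        data = da.from_zarr(image_path, component=str(level)).compute()
        elapsed = time.perf_counter() - t0
        # ...and count the on-disk chunk files backing that level.
        n_files = sum(1 for f in Path(image_path, str(level)).rglob("*") if f.is_file())
        return data.shape, elapsed, n_files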

The biggest question is whether having the fields of view resolvable individually is a requirement of your application.

For our use case, there are parts of the processing where we need to be aware of the original field of view. For example, shading / illumination correction needs to be performed per FOV. And we often parallelize segmentation and measurements by running them per FOV. So conceptually, the multi-FOV layout, with every field being a separate Zarr folder, was attractive to us.
Given the viewing performance of this approach and the fundamental slowness of reading many low-resolution chunks, I don't think we can pursue the multi-FOV approach, though. If there are good ideas on how to handle this, I'd be very interested!
My current thinking is that we chunk by original FOV to make access to single FOVs fast, and use something like the new OME-NGFF tables spec to define regions of interest that store the original FOV metadata (see the sketch below): https://forum.image.sc/t/proposal-for-ome-ngff-table-specification/68908
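
As a hedged illustration of that idea (the column names are made up for this sketch, not taken from the tables proposal):

    import pandas as pd

    # One row per original microscope FOV within the fused well image, so
    # single FOVs stay addressable without being stored as separate images.
    fov_rois = pd.DataFrame({
        "fov": [0, 1, 2, 3],
        "x_start_px": [0, 2560, 0, 2560],
        "y_start_px": [0, 0, 2160, 2160],
        "len_x_px": [2560] * 4,
        "len_y_px": [2160] * 4,
    })
    # well_image[..., y : y + len_y, x : x + len_x] recovers a single FOV.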

the HCS structure might not even be relevant so the plate metadata could be dropped and stitched wells could simply be stored as multiscale images.

My current thinking is that the plate layout is actually quite useful to us, as we often want to look at individual wells in a multi-well plate. Just the way of storing FOVs within a well is where we'd differ. As you said, "Fields of views can be preprocessed including correction, stitching" => we are currently pursuing a layout where all FOVs are preprocessed into a single FOV per well. I imagine any HCS user who wants to visualize large plates will run into these same performance limitations. Could we expand the spec to reflect this knowledge and recommend saving wells as a single field of view when visualization is a goal?
I'll have to think of a good alternative, but I agree, the phrasing "All images contained in a well are fields of view of the same well" is probably what I would suggest changing. The potentially confusing part is that a field of view is also a unit at the microscope. I'd prefer the spec to say that a well can consist of one or multiple OME-NGFF image structures, thus removing the field-of-view aspect from the spec definition. We could then have a section explaining that fields of view can either be saved as those image units or combined into a single image unit.
Is that direction something you'd support? Happy to come up with a concrete proposal along these lines.

Or are there broader discussions on the HCS spec we could join? I saw some older discussion threads, but wasn't sure if anything was still being pursued.

*(small side note: the slower loading doesn't seem to be primarily a question of file sizes. While multi-FOV leads to slightly worse compression at low resolution because the chunks are fairly small, the difference at level 3 is 10-20%, nowhere near enough to explain the performance difference.)

@jluethi commented Aug 26, 2022

@sbesson I'd be curious to continue this discussion on saving single images per well in OME-Zarr HCS datasets. I've thought a bit more about suggested changes to the spec wording to support this use case. I would take away the explicit definition of the images in a plate as fields of view, as this is typically also what individual image regions are called at the microscope, and they may no longer map onto the representation in the OME-Zarr file.

the group above the images defines the well and MUST implement the well specification. All images contained in a well are fields of view of the same well

This could become:
the group above the images defines the well and MUST implement the well specification. A well is a collection of 1 to m images.

The trade-off is the following (not sure where in the spec such a thing would be specified):
Saving wells as a single image has performance benefits for visualization, while saving all fields of view as separate images offers maximal flexibility.

And I'd then change the plate layout description. e.g.

First field of view of well A1

=> First image of well A1
(and some other mentions of fields of view)

Is there interest in discussing this further? Who would be the right people to talk to about changing this spec? If there is interest in incorporating something like this, what is the process? I can e.g. make a PR suggesting these changes so the debate on actual wording can happen there.


Also, on the topic of saving fields of view, we've made good progress with using the proposed OME-NGFF tables to store region-of-interest (ROI) information for each field of view from the microscope (see e.g. fractal-analytics-platform/fractal-tasks-core#24). This approach gives us most of the flexibility of having one image per FOV (e.g. we can load individual fields of view again). It does move away, though, from the idea of defining transformations within a well and doing things like blending overlaps. But I don't think such approaches can scale to 1000s of fields of view (which is the main use case of the HCS specification for us).

@will-moore (Member)

I would just say "Yes", go ahead and open an NGFF spec PR with your proposed changes. 👍
Although the spec is maybe not the place to document implementation details, I wonder if it's possible to give readers some hints of the considerations discussed above in a single sentence?

@jluethi commented Aug 26, 2022

Ok, I'll work on a PR then.

I agree, the spec isn't really the place to have this discussion. And on some level, the spec should be able to support both a collection of images per well and a single image per well.

But I do think it's important to convey that if we save HCS data in OME-Zarr with the intention of visualizing whole plates, and we want to scale to 100s-1000s of microscope fields of view, then saving every field of view as its own image does not scale.

If we are at a smaller scale (10s to maybe 100 or so images), or we never want to visualize whole plates (only individual fields of view at a time), then saving fields of view as separate images does work and allows great flexibility.

@sbesson (Member) commented Aug 29, 2022

@jluethi at least from my perspective, there are no objections to loosening the terms of the spec and indicating that a well is a container of many multiscale images (in a plate grid layout). This also matches the JSON specification for this node, which uses the images key.

I also think that expressing trade-offs & recommendations in terms of layouts is very useful. The main difficulty is not confusing the reader, but a small paragraph in the section about the HCS layout might be a good compromise for now.

@jluethi commented Aug 29, 2022

Thanks @sbesson @will-moore for the input. I opened a PR for this with a suggested wording, open for feedback & discussion! :)
ome/ngff#137

@imagesc-bot

This issue has been mentioned on Image.sc Forum. There might be relevant details there:

https://forum.image.sc/t/best-approach-for-appending-to-ome-ngff-datasets/89070/2
