Read multiple fields of view on a grid, for HCS dataset #200
I'm also very interested in this (and working with @tcompa & @mfranzon on it). If there are ways to display multiple sites, will there also be ways to determine the arrangement of the multiple fields of view? At the moment, the reader seems to choose the arrangement closest to a square, but the data may have been acquired in a different shape (e.g. a 1x4 instead of a 2x2 arrangement for the example on top). The code that generates the row & column information in the log above is doing a square calculation:
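A hedged paraphrase of that calculation (not the verbatim library code; `field_count` is assumed to come from the well metadata):

```python
# Guess a near-square grid from the number of fields alone, ignoring
# how the fields were actually laid out during acquisition.
import math

field_count = 4  # e.g. four fields of view in the well
column_count = math.ceil(math.sqrt(field_count))
row_count = math.ceil(field_count / column_count)
print(row_count, column_count)  # -> 2 2, even for a 1x4 acquisition
```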
Could this information instead be read from the metadata?
Hi, currently the Plate view only loads the first field of view of each Well.
However, you see all the fields when viewing a single Well (add the row/column to the path), and you can open individual images by additionally appending the field index; see the sketch below:
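A minimal sketch of both (the plate path is a hypothetical example; this assumes the napari-ome-zarr plugin is installed):

```python
# Hedged sketch: open a single Well, then a single image, in napari.
import napari

viewer = napari.Viewer()

# One Well: the plate path plus row/column. All fields of the well are shown.
viewer.open("path/to/plate.zarr/A/1", plugin="napari-ome-zarr")

# One individual image: additionally append the field index.
viewer.open("path/to/plate.zarr/A/1/0", plugin="napari-ome-zarr")

napari.run()
```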
If you add translations to each field's multiscales metadata, it's a nice idea to use them when stitching fields into a single Well. But that's a bit more work, since the translations can be in multiple dimensions, so it's not as simple as the grid calculation at ome-zarr-py/ome_zarr/reader.py, line 453 (commit bb62c8a).
Thanks a lot for the super-fast response @will-moore!
Is it a design choice to only show the first field of view per plate, or just something that hasn't been implemented yet (showing multiple FOVs per well in the plate view)?

Screen.Recording.2022-05-19.at.14.23.14.mov

We're now trying to build towards the same experience using OME-Zarr & napari. We would be very happy to contribute to make this happen if you think that's something the ome-zarr-py library could support!

Screen.Recording.2022-05-19.at.14.43.49.mov
Also, an interesting thought about accessing specific wells or specific images on their own. It then loads the full-resolution image data first & downsamples it. Is this the expected behavior? It would be very useful to be able to lazily load only the necessary pyramid levels within the FOVs of a single well.
True, that could get more complicated. Are there already thoughts on how one could handle well arrangements? I would love to join that discussion (or start it in a separate issue so this doesn't get too confusing) :)
Jotting down a few thoughts on possible options for storing the well position metadata:
Your custom plate viewer looks very nice. Is that a browser (JavaScript) client or Python?

Currently, viewing a single Well loads all of the Images at full resolution. I have a PR to fix the loading of pyramids for a Plate (#195), prompted by @mfranzon, so we should be able to apply the same approach for loading Wells.

We had a long discussion on NGFF "Collections" of images, ome/ngff#31, partly thinking about replacing the current HCS spec with something more generic. Some of that discussion was about the layout of Wells within a Plate. There's a trade-off: if everything is very generic, you could in theory use the

As I mentioned, you can already store FOV translations in the multiscales metadata. Let's assume that when loading HCS data, we could use those translations to stitch fields into Wells (and even stitch those Wells into a Plate). So, I would suggest:
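As a concrete illustration of storing FOV translations, here is a hedged sketch of per-dataset `coordinateTransformations`, following the NGFF v0.4 multiscales structure (all numbers and axis choices are invented for illustration):

```python
# Hedged sketch: "multiscales" metadata for one field of view, carrying
# a translation transform that positions it within the well.
multiscales = [{
    "axes": [
        {"name": "c", "type": "channel"},
        {"name": "y", "type": "space", "unit": "micrometer"},
        {"name": "x", "type": "space", "unit": "micrometer"},
    ],
    "datasets": [{
        "path": "0",  # full-resolution level
        "coordinateTransformations": [
            {"type": "scale", "scale": [1.0, 0.65, 0.65]},
            # Place this field one FOV-width to the right of the origin
            # (invented value):
            {"type": "translation", "translation": [0.0, 0.0, 1081.6]},
        ],
    }],
}]
```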
Thanks @sbesson for the overview and the links! Will have to read through them in a bit more detail now! Certainly happy to provide real-world example data to test this!
That’s also a very interesting idea. It may be a good way to approach grid-based layouts, while the spaces & transforms part looks very promising for settings where images aren’t placed on a perfect grid.
It’s a browser-based viewer using the AngularJS framework and WebGL for the viewer, but unfortunately it’s already fairly old (server code in Python 2, and AngularJS is also past official support). That is why we were looking for a new solution to host such data and expand the utility beyond what we covered before.
Ah, we often hit use cases where we have 10x10 images per well or even more, i.e. > 100 images, and thus more layers than napari can reasonably handle. We’d certainly be very interested in treating wells similarly to the plate, e.g. loading only the relevant pyramid level.
Do you mean showing more fields is slower as in “when we show larger areas of acquired data, that’s slower”, or as in “it’s faster to show the same area fused into fewer fields”?
Yes, I would also be very concerned about scaling. We easily have >1000 fields of view on a plate and >100 FOVs in a well. If those were all separate layers, the current napari setup would not handle that gracefully. Would your suggestion with the translations result in something with many layers (one per FOV), or do you think stitching per well can also be done by applying translations while still resulting in a single layer (similar to how wells are stitched into a plate as a single layer)? Very curious to try this out now! Thanks a lot for taking the time for those thoughtful explanations and the links to relevant conversations, @will-moore & @sbesson
I would expect showing all the Fields to be slower simply because you'd probably be loading more pixel data to the client: e.g. 100 Fields per Well is going to load more than 1 Field per Well. Maybe if the Plate isn't very big, there aren't many Fields per Well, and the Images have small-enough resolution levels, then you might not load much more pixel data. But you'll still be making a lot more requests, which could be limiting. If you're interested in a browser-based client, are you aware of https://github.com/hms-dbmi/vizarr? I'm still on a learning curve with dask, but I would hope you could do the equivalent of numpy:
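A minimal sketch of that idea, assuming four equally-shaped fields loaded lazily from hypothetical paths:

```python
# Stitch lazily-loaded fields into one array with dask, the way you
# would with np.block on in-memory arrays. Nothing is read from disk
# until the result is actually computed or rendered.
import dask.array as da

fields = [da.from_zarr(f"plate.zarr/A/1/{i}/0") for i in range(4)]

# Arrange the four (c, y, x) fields on a 2x2 grid: the inner lists
# concatenate along x, the outer list along y, mirroring np.block.
well = da.block([[fields[0], fields[1]],
                 [fields[2], fields[3]]])
```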
It took us (@mfranzon, @tcompa & me) quite a while to get to this, but we’ve now focused on this issue in the last week. Loading low-res pyramid levels is much slower in multi field-of-view setups than when all the data is combined beforehand and saved as a single field of view. This gives us major concerns about scalability & interactive viewing when saving HCS datasets according to the current OME-NGFF spec, which is organized by field of view.

First, we established pyramidal lazy-loading of wells. We started to play around with the translation parameters and can place single fields of view correctly based on them. We also started looking into combining multiple Multiscale image nodes into a well, but could not find a way to combine them into a single napari channel. Conclusion: the metadata approach seems reasonable, but we’d need to implement it from “scratch” for the well approach.

Second, we decided to do pyramidal lazy-loading of wells so we could judge the performance, keeping the “guesstimated” grid placements for the moment. Here is a PR that allows for this and thus lets us look at single wells even if they have dimensions like

Before making this approach more complex by parsing all the metadata to position the FOVs, we evaluated its performance. We had the same image data in two different OME-Zarr files: we either combined all fields of view beforehand and saved them as a single field of view (FOV), or saved them as multiple FOVs. See the file trees here as an explanation.

Single FOV Tree
Multi FOV Tree
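(The original trees were not captured here; as a hedged reconstruction from the description above, with made-up plate/well/level names, the two layouts differ roughly like this:)

```
# Single FOV: one fused image "0" per well, one pyramid per well
plate.zarr/B/03/0/{0,1,2,...}/...

# Multi FOV: 72 images per well, each with its own full pyramid
plate.zarr/B/03/{0,...,71}/{0,1,2,...}/...
```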
In the end, both Zarr files contain the same data, but their organization is slightly different. At the highest resolution (pyramid level 0), both Zarr structures have the same number of chunks and the same file size (both have 216 chunks over the 3 channels & 72 original fields of view). The higher we go in the pyramid, the more striking the difference becomes: e.g. at pyramid level 3 (with coarsening factor 2), the single FOV consists of 6 files (we kept the chunk size roughly constant), while the multi FOV still consists of 216 files (because every field-of-view folder has its own pyramid, and each pyramid has a chunk file at every level).

Performance when looking at the highest resolution is quite similar. But we run into strong scalability issues when loading the higher pyramid levels, which often happens when browsing a dataset zoomed out or when initially loading a dataset. For example, if we load the single well with 9x8 sites & 3 channels, it loads in napari within 0.5 s in the single FOV case, but takes about 4 s in the multi FOV case (the same amount of image data is loaded and it looks exactly the same, but the on-disk structure is different).

This is quite worrying, as we were thinking about using the multi-FOV architecture to save data with dozens to hundreds of wells, each having dozens to ~100 fields of view. In that case, performance for loading low-res pyramid levels would scale terribly, as we’d be loading 1000s or 10,000s of tiny pyramid files (e.g. 72 pyramid files per well instead of 3-6 per well). We already have a test case with 23 wells of 72 FOVs each (1656 fields in total), and in single FOV mode (when the FOVs are combined into a single image before putting them into an OME-Zarr file), this performs nicely. See the updated video here:

napari_23well_compressed.mp4

(Performance is not amazing, but with NAPARI_ASYNC on, it’s quite usable.)

Given the scale we’re aiming for, multi-FOV use cases in OME-Zarr don’t look like something that would scale, unless there is a trick we have not considered yet. According to our tests, the limiting factor really is loading the many low-resolution pyramid files. Our single FOV workaround contains the same amount of actual image data, but the chunks remain large for higher pyramid levels because we combined FOVs.

Are there thoughts on ways to use the OME-NGFF spec with such large datasets? Are there ideas for how we could handle lower-resolution pyramid levels when 100s to 1000s of FOVs exist? Combining FOVs before putting them into OME-Zarr works for visualization, but isn’t the spec that is described.
@jluethi thanks for the follow-up and the detailed report. I think your investigation raises extremely important questions about data structure and layout that are becoming increasingly common.

First of all, I do not think these questions are specific to OME-NGFF. We received similar IDR submissions, i.e. acquisitions using HCS instrumentation where each well effectively scans a wide area (like a whole-slide scanner would do) and the data is stored on disk as 100-1000 fields of view. Based on our interactions with the submitters, we established that the most valuable representation for some of these datasets was to stitch the fields of view into a large pyramidal image that could then be annotated with ROIs and tabular data; see https://idr.openmicroscopy.org/webclient/img_detail/12532806/?dataset=13651 for an example. Conceptually, this seems to be largely along the same lines as the single FOV tree layout you described above.

From the client perspective, I think the amount of binary data fetched for display is not that different between the two representations. Instead, the main divergence likely arises from the number of extra calls the multiple FOV layout needs to navigate between the different fields of view, which are effectively tiles/chunks in the single FOV representation. It would be interesting to increase the logging verbosity in your commands and profile which calls account for the extra time in the multiple FOV scenario. I would not rule out a performance issue due to the succession of calls made in the reader code.

That being said, at the end of the day, a consequence of keeping fields of view separate in their data representation is that the client is ultimately responsible for the computational cost of fetching & assembling data from multiple sources, and this is expected to be less scalable than making this computation upfront. The biggest question is whether having the fields of view resolvable individually is a requirement of your application. If so, the multi FOV tree preserves access to individual tiles. If not, the single FOV tree offers clear advantages, especially when it comes to interactivity across the whole well.
Thanks for this feedback. I assume one of the culprit sentences is https://ngff.openmicroscopy.org/latest/#hcs-layout and especially "All images contained in a well are fields of view of the same well"?
Thanks for the reply @sbesson, and for pointing to this other example!
Yes, we were also looking into this. Profiling the lazy loading is a bit tricky, so we profiled simpler cases: loading the data into an array outside of napari and measuring the speeds. That gives a best-case baseline (a lower bound on load times), as viewer overheads are not included yet. In these tests, we saw the following when loading different levels from disk:
Thus, performance loading level 0 is comparable (because in the single FOV case we also chunk the data with chunk sizes corresponding to the original images), but the higher we go in the levels, the worse the multi-FOV case performs. Already at level 3, it is 5-8x slower, and it looks like it gets worse as we scale further.
Thus, if the goal is to load high pyramid levels (i.e. low-res data), performance takes a significant hit the smaller the individual FOV structures are. Especially when larger plates & remote access are involved, this turns loading performance from slightly laggy to completely unusable for our cases. And one of the great benefits of an OME-Zarr plate file is the ability to browse a whole plate of images lazily.
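For reference, a minimal sketch of the kind of out-of-viewer timing described above (the plate path and the number of pyramid levels are hypothetical):

```python
# Time how long it takes to materialise each pyramid level of one well
# image as an in-memory array, without any viewer overhead.
import time
import dask.array as da

WELL_IMAGE = "plate.zarr/B/03/0"  # assumed single-image well path

for level in range(5):  # assumed number of pyramid levels
    arr = da.from_zarr(f"{WELL_IMAGE}/{level}")
    t0 = time.perf_counter()
    arr.compute()  # forces all chunk reads for this level
    print(f"level {level}: shape={arr.shape}, {time.perf_counter() - t0:.2f}s")
```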
For our use case, there are parts of the processing where we need to be aware of the original field of view. For example, shading / illumination correction needs to be performed per FOV, and we often parallelize segmentation and measurements by running them per FOV. So conceptually, the multi-FOV layout, with every field being a separate Zarr folder, was attractive to use.
My current thinking is that the plate layout is actually quite useful to us, as we often want to look at individual wells in a multi-well plate. Just the way of storing FOVs within a well is where we'd differ. As you said, "Fields of views can be preprocessed including correction, stitching" => we are currently looking at pursuing a layout where the FOVs are always preprocessed into a single FOV per well.

I imagine any HCS user who wants to visualize large plates will run into these same performance limitations. Could we expand the spec to reflect this knowledge and recommend single field-of-view saving of wells when visualization is an interest? Or are there broader discussions on the HCS spec we could join? I saw some older discussion threads, but wasn't sure if anything was still being pursued.

(Small side note: the slower loading performance doesn't seem to primarily be a question of file sizes. While multi-FOV leads to slightly worse compression at low resolution because the chunks are fairly small, the difference at level 3 is 10-20%, which doesn't come close to explaining the performance difference.)
@sbesson I'd be curious to continue this discussion about saving single images per well in OME-Zarr HCS datasets. I've thought a bit more about suggested changes to the spec wording to support this use case. I would take away the explicit definition of the images in a plate as fields of view, as that is typically what individual image regions are called at the microscope, and it may no longer map onto the representation in the OME-Zarr file.
This could become:

The trade-off is the following (not sure where in the spec such a thing would be specified):

And I'd then change the plate layout description, e.g.
=> First image of well A1

Is there interest in discussing this further? Who would be good people to talk to about changing this spec? If there is interest in incorporating something like this into the spec, what is the process? I can e.g. make a PR suggesting these changes so we can debate the actual wording there.

Also, on the topic of saving fields of view: we've made good progress with using the proposed OME-NGFF tables to store region of interest (ROI) information for each field of view from the microscope (see e.g. here: fractal-analytics-platform/fractal-tasks-core#24). This approach gives us most of the flexibility of having images per FOV (e.g. we can load individual fields of view again). It does move away, though, from the idea of defining transformations within a well and doing things like blending overlaps. But I don't think such approaches can scale to 1000s of fields of view (which is the main use case of the HCS specification for us).
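A hedged sketch of that ROI-table idea (the column names, the use of AnnData, and the table path are assumptions loosely modeled on the linked fractal-tasks-core discussion, not a spec):

```python
# One row per original microscope FOV, recording where it sits inside
# the fused single-image well, so FOVs stay individually addressable
# even after stitching. All names and numbers are illustrative.
import anndata as ad
import pandas as pd

fov_table = pd.DataFrame(
    {
        "x_micrometer": [0.0, 1664.0, 0.0, 1664.0],  # FOV origin, x
        "y_micrometer": [0.0, 0.0, 1404.0, 1404.0],  # FOV origin, y
        "len_x_micrometer": [1664.0] * 4,            # FOV width
        "len_y_micrometer": [1404.0] * 4,            # FOV height
    },
    index=["FOV_1", "FOV_2", "FOV_3", "FOV_4"],
)

roi_table = ad.AnnData(X=fov_table.to_numpy())
roi_table.obs_names = fov_table.index
roi_table.var_names = fov_table.columns
roi_table.write_zarr("plate.zarr/B/03/0/tables/FOV_ROI_table")  # hypothetical path
```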
I would just say "Yes", go ahead and open an NGFF spec PR with your proposed changes. 👍
Ok, I'll work on a PR then. I agree, the spec isn't really the place to have this discussion, and on some level the spec should be able to support both a collection of images per well and single images per well. But I do think it's important to convey that if we save HCS data in OME-Zarr with the intention of visualizing whole plates, and we want to scale to 100s-1000s of microscope fields of view, an approach of saving every field of view as its own image does not scale. If we are at a smaller scale (10s to maybe 100 or so images), or we never want to visualize whole plates (only individual fields of view at a time), then saving fields of view as separate images does work and allows great flexibility.
@jluethi at least from my perspective, no objections to loosening the terms of the spec and indicating that a well is a container of many multiscale images (in a plate grid layout). This also matches the JSON specification for this node, which uses the `images` key.

I also think that expressing trade-offs & recommendations in terms of layouts is very useful. The main difficulty is not confusing the reader, but a small paragraph in the section about the HCS layout might be a good compromise for now.
Thanks @sbesson @will-moore for the inputs. I opened a PR for this here with a suggestion for the wording, open for feedback & discussion! :)
This issue has been mentioned on the Image.sc Forum. There might be relevant details there: https://forum.image.sc/t/best-approach-for-appending-to-ome-ngff-datasets/89070/2
Hi there, and thanks for your support.
We (me and @mfranzon) encountered some issues while trying to reproduce the behavior of #57 (comment), that is, to visualize (in napari) a zarr file which includes a single well with several fields of view.
Our zarr file contains an HCS dataset, structured as in #57 (comment). There is only one well, which includes four fields of view. Upon loading the file in napari, the logs show that
which is the expected behavior (placing the four fields on a 2x2 grid, as in https://github.com/ome/ome-zarr-py/blob/master/ome_zarr/reader.py#L407-L410). However, the `tile_path` function in `reader.py` is defined with the field strictly equal to `self.first_field`, which is equal to `"0"`. We are missing something here, because it seems that the reader will never load any field which is not the first one.
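For reference, a hedged paraphrase of the behavior being described (not the verbatim library code; the exact signature is an assumption):

```python
# The path built for every grid position hard-codes self.first_field,
# so only field "0" is ever read from disk.
def tile_path(self, level: int, row: int, col: int) -> str:
    return f"{self.first_field}/{level}"
```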
Is there a way to show the four fields together, on a 2x2 grid (as in #57 (comment))?