admin/4.0-release-prep-and-benchmark-upgrades #244
Conversation
Codecov Report
@@           Coverage Diff           @@
##             main     #244   +/-   ##
=======================================
  Coverage   95.43%   95.43%
=======================================
  Files          32       32
  Lines        2190     2190
=======================================
  Hits         2090     2090
  Misses        100      100

Continue to review the full report at Codecov.
Super minor typo, otherwise looks good!
benchmarks/benchmark_chunk_sizes.py
# has on the duration of various reads.
# We would expect that processing speed can be optimized based off of the
# dimensions of the file and what the user is trying to do with said file.
# i.e. If the user wants to nrmalize each channel and make a max projection
Suggested change:
- # i.e. If the user wants to nrmalize each channel and make a max projection
+ # i.e. If the user wants to normalize each channel and make a max projection
projs.compute()

def setup(self, img_path, chunk_dims):
    random.seed(42)
🌌
Description
Makes GitHub ignore Jupyter Notebooks in language detection because... this repo isn't Jupyter; only a single file is.
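For context, GitHub's language statistics come from Linguist, and overrides like this usually live in a `.gitattributes` entry. A minimal sketch, assuming a notebook glob (this PR's actual entry may differ):

```
# Exclude Jupyter notebooks from GitHub's language statistics.
# (Hypothetical glob; the actual .gitattributes entry in this PR may differ.)
*.ipynb linguist-documentation
```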
More importantly, it cleans up and expands benchmarking: a whole new suite for benchmarking how `chunk_dims` affects both IO and processing performance, plus a `LibCompareSuite` for comparing aicsimageio against other "just-in-time" image reading libs (rough sketches below). In many cases I reduced the number of files benchmarked because our benchmarks currently take ~36 minutes to run, mostly because they run a lot of TIFF-like image benchmarks. We don't need to run every file.
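As a rough illustration (not this PR's actual code), here is a minimal asv-style sketch of a chunk-size benchmark, using a synthetic dask array in place of a real file read; the class name, chunk shapes, and workload are assumptions:

```python
import dask.array as da


class ChunkSuite:
    # asv-style parameterization: one synthetic CZYX image, chunked
    # several ways. These chunk shapes are hypothetical examples.
    params = ([(1, 1, 64, 64), (1, 16, 256, 256), (4, 64, 512, 512)],)
    param_names = ["chunks"]

    def setup(self, chunks):
        # Synthetic 4D image standing in for a real file read.
        self.data = da.random.random((4, 64, 512, 512), chunks=chunks)

    def time_normalize_and_max_project(self, chunks):
        # Normalize each channel independently, then max-project along Z,
        # mirroring the workload described in the benchmark comments.
        cmin = self.data.min(axis=(1, 2, 3), keepdims=True)
        cmax = self.data.max(axis=(1, 2, 3), keepdims=True)
        ((self.data - cmin) / (cmax - cmin)).max(axis=1).compute()
```

And a similarly hedged sketch of what a library-comparison suite could look like, timing a few "just-in-time" readers on the same file; the placeholder path and the exact method list are assumptions:

```python
class LibCompareSuite:
    # Placeholder path; the real suite presumably parameterizes over
    # the repo's test resource files.
    params = (["example.ome.tiff"],)
    param_names = ["img_path"]

    def time_aicsimageio(self, img_path):
        from aicsimageio import AICSImage

        AICSImage(img_path).dask_data.compute()

    def time_dask_image(self, img_path):
        from dask_image.imread import imread

        imread(img_path).compute()

    def time_tifffile(self, img_path):
        import tifffile

        tifffile.imread(img_path)
```

Both follow asv's convention that `time_*` methods are timed once per parameter combination.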
Pull request recommendations:
Resolves #242
Related: dask/dask-image#181
Related: dask/dask-image#229
Thanks for contributing!