If you are new to Scivision, start with the website.
The Scivision project is building:
- A community of computer vision practitioners in the sciences and humanities (join the community on Slack)
- A catalog of community-curated computer vision models and datasets from the sciences and humanities
- A software ecosystem of interoperable tools and utilities for working with computer vision models and data, including:
  - Scivision.Py, a Python package for conveniently downloading and using the computer vision models and datasets from Python (Scivision on PyPI)
  - Pixelflow, a tool for extracting information about the characteristics of objects in images
Example use cases for these tools can be found in the gallery of notebooks using Scivision models and datasets.
The Scivision project was founded by the Alan Turing Institute.
This main project repository on GitHub hosts:

- development of the Python package (in the root directory)
- development of the website (in `frontend`)
- the documentation sources (in `docs`)
Submit a bug or feature request here.
If you would like a model or datasource to be listed in the catalog, such a contribution would be gratefully received. Entries can be submitted through the Scivision website. See the Contributing Guide for more details on how to format your model or data.
Pull requests for code changes are also welcome.
A quick overview of using the Scivision.Py Python package.

Install it with pip:

```sh
pip install scivision
```
```python
from scivision import load_pretrained_model

resnet18 = load_pretrained_model(
    # The model URL
    "https://github.com/alan-turing-institute/scivision_classifier",
    # A Scivision model can contain several variants -- below we select the one to use
    model_selection='resnet18',
    # Allow the model and its dependencies to be installed if they are not already
    # (including tensorflow in this example)
    allow_install=True,
)
```
We can give an image as input to the model. Any image data compatible with numpy (an 'Array_like') is accepted. We can obtain some image data by loading a Scivision datasource.
```python
from scivision import load_dataset

dataset = load_dataset('https://github.com/alan-turing-institute/scivision-test-data')

# 'dataset' provides several named arrays. This datasource provides one named 'test_image':
# the keys can be looked up with `list(dataset)` (or by consulting the datasource documentation)
test_image = dataset['test_image'].read()
```
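The 'Array_like' requirement mentioned above just means anything numpy can interpret as an array: a nested list, an array produced by another imaging library, and so on. A minimal illustration of that conversion (the pixel values and shape here are made up for the example):

```python
import numpy as np

# A 2x2 RGB image written as nested lists -- numpy accepts this as 'Array_like'
pixels = [[[255, 0, 0], [0, 255, 0]],
          [[0, 0, 255], [255, 255, 255]]]

# Convert to a numpy array of unsigned 8-bit pixel values
image = np.asarray(pixels, dtype=np.uint8)
print(image.shape)  # (2, 2, 3): height, width, colour channels
```

Any object that converts this way can be passed to a Scivision model in place of an image loaded from a datasource.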
Optionally, inspect the image (with matplotlib, for example):
```python
import matplotlib.pyplot as plt

plt.imshow(test_image)
```

Run the model on the image:

```python
resnet18.predict(test_image)
```

Output: `koala : 99.78%`
```python
from scivision import default_catalog

# The datasource catalog as a Pandas dataframe
default_catalog.datasources.to_dataframe()

# Similarly for the model catalog
default_catalog.models.to_dataframe()
```
Output:

|   | name | description | tasks | url | pkg_url | format | scivision_usable | pretrained | labels_required | institution | tags |
|---|------|-------------|-------|-----|---------|--------|------------------|------------|-----------------|-------------|------|
| 0 | stardist | Single class object detection and segmentation of star-convex polygons | (<TaskEnum.object_detection: 'object-detection'>, <TaskEnum.segmentation: 'segmentation'>) | https://github.com/stardist/stardist | git+https://github.com/stardist/stardist.git@master | image | False | True | True | ('epfl',) | ('2D', '3D', 'optical-microscopy', 'xray', 'microtomography', 'cell-counting', 'plant-phenotyping', 'climate-change-and-agriculture') |
| 1 | PlantCV | Open-source image analysis software package targeted for plant phenotyping | (<TaskEnum.segmentation: 'segmentation'>, <TaskEnum.thresholding: 'thresholding'>, <TaskEnum.object_detection: 'object-detection'>) | https://github.com/danforthcenter/plantcv | git+https://github.com/danforthcenter/plantcv@main | image | False | True | True | ('danforthcenter',) | ('2D', 'hyperspectral', 'multispectral', 'near-infrared', 'infrared', 'plant-phenotyping', 'climate-change-and-agriculture') |
| ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ |
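Since `to_dataframe()` returns an ordinary pandas DataFrame, the catalog can be narrowed down with standard pandas operations. A sketch, using a small stand-in DataFrame with a few of the columns shown above (the real catalog returned by `default_catalog.models.to_dataframe()` is larger):

```python
import pandas as pd

# Illustrative stand-in for the model catalog dataframe,
# using columns from the table above
models = pd.DataFrame({
    "name": ["stardist", "PlantCV"],
    "tasks": [("object-detection", "segmentation"),
              ("segmentation", "thresholding", "object-detection")],
    "pretrained": [True, True],
    "institution": [("epfl",), ("danforthcenter",)],
})

# Keep only the models that advertise a segmentation task
segmentation_models = models[
    models["tasks"].apply(lambda tasks: "segmentation" in tasks)
]
print(segmentation_models["name"].tolist())  # ['stardist', 'PlantCV']
```

The same pattern works for any column, e.g. filtering by `institution` or by an entry in `tags`.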
The catalogs are also browsable online via the Scivision website.
Thanks goes to these wonderful people (emoji key):
This project follows the all-contributors specification. Contributions of any kind welcome!