Python scripts for performing stereo depth estimation using the MobileStereoNet model in ONNX
Stereo depth estimation on the cones images from the Middlebury dataset (https://vision.middlebury.edu/stereo/data/scenes2003/)
- Check the requirements.txt file. Additionally, pafy and youtube-dl are required for YouTube video inference.
- DrivingStereo dataset, ONLY for the driving_sereo_test.py script. Link: https://drivingstereo-dataset.github.io/
pip install -r requirements.txt
pip install pafy youtube-dl
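If you plan to run video inference on a YouTube stream, the sketch below shows one way pafy (with the youtube-dl backend) can feed frames into OpenCV; the URL is only a placeholder, and the depth estimation itself is omitted.

```python
import cv2
import pafy

# Placeholder URL; substitute the stereo video you want to process.
video_url = "https://youtu.be/xxxxxxxxxxx"

# pafy uses youtube-dl to resolve a direct stream URL that OpenCV can open.
stream = pafy.new(video_url).getbest(preftype="mp4")
cap = cv2.VideoCapture(stream.url)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # The depth estimation scripts would process `frame` here.
    cv2.imshow("YouTube frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```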
Download the ONNX model from Google Drive and save it into the models folder.
The PyTorch pretrained model was taken from the original repository.
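After placing the model in the models folder, you can sanity-check it with onnxruntime. The file name below is an assumption; use the name of the file you downloaded.

```python
import onnxruntime

# Hypothetical file name; replace it with the model you downloaded.
MODEL_PATH = "models/model_float32.onnx"

session = onnxruntime.InferenceSession(MODEL_PATH)

# Print the expected input/output names and shapes of the stereo network.
for inp in session.get_inputs():
    print("input:", inp.name, inp.shape, inp.type)
for out in session.get_outputs():
    print("output:", out.name, out.shape, out.type)
```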
- Image inference (a minimal sketch of the underlying steps is shown after this list):
python image_depth_estimation.py
- Video inference:
python video_depth_estimation.py
- DrivingStereo dataset inference:
python driving_sereo_test.py
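For reference, here is a rough sketch of what the image inference looks like with onnxruntime and OpenCV. The model file name, input size, normalization, input order, and image paths are assumptions; adapt them to the model you downloaded and to image_depth_estimation.py.

```python
import cv2
import numpy as np
import onnxruntime

# Assumptions: model file name and input size; check session.get_inputs()
# for the shape the downloaded model actually expects.
MODEL_PATH = "models/model_float32.onnx"
INPUT_WIDTH, INPUT_HEIGHT = 640, 480

session = onnxruntime.InferenceSession(MODEL_PATH)
input_names = [inp.name for inp in session.get_inputs()]

def preprocess(image):
    # Resize, convert BGR -> RGB, scale to [0, 1] and arrange as NCHW float32.
    # The real script may also apply ImageNet mean/std normalization.
    img = cv2.resize(image, (INPUT_WIDTH, INPUT_HEIGHT))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    return img.transpose(2, 0, 1)[np.newaxis]

# Placeholder file names for a rectified stereo pair.
left = preprocess(cv2.imread("left.png"))
right = preprocess(cv2.imread("right.png"))

# Run the network on both views; the output is a disparity map.
disparity = session.run(None, {input_names[0]: left, input_names[1]: right})[0]

# Normalize the disparity for visualization and save it as a color map.
disp_vis = cv2.normalize(disparity.squeeze(), None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("disparity.png", cv2.applyColorMap(disp_vis.astype(np.uint8), cv2.COLORMAP_MAGMA))
```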
- MobileStereoNet model: https://github.com/cogsys-tuebingen/mobilestereonet
- PINTO0309's model zoo: https://github.com/PINTO0309/PINTO_model_zoo
- PINTO0309's model conversion tool: https://github.com/PINTO0309/openvino2tensorflow
- DrivingStereo dataset: https://drivingstereo-dataset.github.io/
- Original paper: https://arxiv.org/pdf/2108.09770.pdf