Python scripts performing stereo depth estimation using the Fast-ACVNet model in ONNX.
Stereo depth estimation on the cones image pair from the Middlebury dataset (https://vision.middlebury.edu/stereo/data/scenes2003/).
- Check the requirements.txt file.
- For ONNX, if you have an NVIDIA GPU, install the onnxruntime-gpu library; otherwise, use onnxruntime.
git clone https://github.com/ibaiGorordo/ONNX-FastACVNet-Depth-Estimation.git
cd ONNX-FastACVNet-Depth-Estimation
pip install -r requirements.txt
For NVIDIA GPU computers:
pip install onnxruntime-gpu
Otherwise:
pip install onnxruntime
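After installing, you can verify which execution providers your onnxruntime build exposes; a minimal check (with onnxruntime-gpu you should see CUDAExecutionProvider in the list):

```python
import onnxruntime as ort

# With onnxruntime-gpu installed this list should include "CUDAExecutionProvider";
# the CPU-only package reports "CPUExecutionProvider" only.
print(ort.get_available_providers())
```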
The models were converted from the PyTorch implementation below by PINTO0309. Download the models with the download script in his repository and save them into the models folder.
- The license of the models is the MIT License: https://github.com/gangweiX/Fast-ACVNet/blob/main/LICENSE.md
The original PyTorch model can be found in this repository: https://github.com/gangweiX/Fast-ACVNet
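As a quick sanity check that a downloaded model loads, here is a minimal sketch; the file name is a placeholder for whichever variant you saved into the models folder:

```python
import onnxruntime as ort

# Placeholder path: substitute the ONNX file you downloaded into models/
model_path = "models/fast_acvnet.onnx"

# get_available_providers() lists CUDA first when onnxruntime-gpu is installed
session = ort.InferenceSession(model_path, providers=ort.get_available_providers())

# Fast-ACVNet is a stereo model, so expect two image inputs (left and right views)
for inp in session.get_inputs():
    print(inp.name, inp.shape, inp.type)
```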
- Image inference:
python image_depth_estimation.py
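The script wraps model loading and pre/post-processing in a helper class; the sketch below reproduces the rough flow with onnxruntime directly. The model path, input resolution, normalization, and image file names here are assumptions, so check the script for the exact preprocessing:

```python
import cv2
import numpy as np
import onnxruntime as ort

# Assumed path and input size; adjust to the model variant you downloaded
MODEL_PATH = "models/fast_acvnet.onnx"
WIDTH, HEIGHT = 640, 480

session = ort.InferenceSession(MODEL_PATH, providers=ort.get_available_providers())
left_name, right_name = (i.name for i in session.get_inputs())  # assumes two inputs

def preprocess(path):
    # Resize to the model input size and convert to a normalized NCHW float tensor
    img = cv2.imread(path)
    img = cv2.resize(img, (WIDTH, HEIGHT))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    return img.transpose(2, 0, 1)[np.newaxis]

# Assumed file layout for the Middlebury cones pair: im2 = left view, im6 = right view
left = preprocess("cones/im2.png")
right = preprocess("cones/im6.png")

disparity = np.squeeze(session.run(None, {left_name: left, right_name: right})[0])

# Color-map the disparity for display
vis = cv2.applyColorMap(
    cv2.convertScaleAbs(disparity, alpha=255.0 / disparity.max()),
    cv2.COLORMAP_MAGMA,
)
cv2.imwrite("disparity.png", vis)
```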
- Video inference:
python video_depth_estimation.py
Original video: Málaga Stereo and Laser Urban dataset (reference below).
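video_depth_estimation.py applies the same model frame by frame. A minimal sketch of such a loop, assuming a side-by-side stereo video in which each frame holds the left and right views next to each other (the file name is a placeholder, and the preprocessing is the same assumption as in the image sketch above):

```python
import cv2
import numpy as np
import onnxruntime as ort

MODEL_PATH = "models/fast_acvnet.onnx"  # placeholder, as above
WIDTH, HEIGHT = 640, 480                # assumed model input size

session = ort.InferenceSession(MODEL_PATH, providers=ort.get_available_providers())
left_name, right_name = (i.name for i in session.get_inputs())

def to_tensor(img):
    # Same assumed preprocessing as in the image sketch above
    img = cv2.resize(img, (WIDTH, HEIGHT))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    return img.transpose(2, 0, 1)[np.newaxis]

cap = cv2.VideoCapture("stereo_video.mp4")  # placeholder file name
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    # Assumption: each frame stores the left and right views side by side
    half = frame.shape[1] // 2
    disparity = np.squeeze(session.run(None, {
        left_name: to_tensor(frame[:, :half]),
        right_name: to_tensor(frame[:, half:]),
    })[0])

    # Color-map the disparity for display; press q to quit
    vis = cv2.applyColorMap(
        cv2.convertScaleAbs(disparity, alpha=255.0 / disparity.max()),
        cv2.COLORMAP_MAGMA,
    )
    cv2.imshow("Estimated disparity", vis)
    if cv2.waitKey(1) == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```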
- Driving Stereo dataset inference: https://youtu.be/az4Z3dp72Zw
python driving_stereo_test.py
Original video: Driving Stereo dataset (reference below).
- Fast-ACVNet model: https://github.com/gangweiX/Fast-ACVNet
- PINTO0309's model zoo: https://github.com/PINTO0309/PINTO_model_zoo
- PINTO0309's model conversion tool: https://github.com/PINTO0309/openvino2tensorflow
- Driving Stereo dataset: https://drivingstereo-dataset.github.io/
- Málaga Stereo and Laser Urban dataset: https://www.mrpt.org/MalagaUrbanDataset
- Original paper: https://arxiv.org/abs/2209.12699