Lukas Radl*,
Michael Steiner*,
Mathias Parger,
Alexander Weinrauch,
Bernhard Kerbl,
Markus Steinberger
* denotes equal contribution
| Webpage | Full Paper | Video |
This repository contains the official authors' implementation of the popping detection method associated with the paper "StopThePop: Sorted Gaussian Splatting for View-Consistent Real-time Rendering", which can be found here.
```bibtex
@article{radl2024stopthepop,
  author    = {Radl, Lukas and Steiner, Michael and Parger, Mathias and Weinrauch, Alexander and Kerbl, Bernhard and Steinberger, Markus},
  title     = {{StopThePop: Sorted Gaussian Splatting for View-Consistent Real-time Rendering}},
  journal   = {ACM Transactions on Graphics},
  number    = {4},
  volume    = {43},
  articleno = {64},
  year      = {2024},
}
```
This repository takes a sequence of video frames or a video and outputs detailed view-consistency metrics.
Specifically, we target popping artefacts, which are characteristic of 3D Gaussian Splatting (3DGS).
This repository is built on RAFT and Fast Blind Video Consistency, which are publicly available.
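At its core, the metric follows the flow-warping idea from Fast Blind Video Consistency: frame *t* is warped towards frame *t + step* using optical flow (RAFT in this repository), and the remaining difference to the actually rendered frame *t + step* is measured. The sketch below illustrates the principle with plain NumPy/OpenCV; it is **not** the repository's implementation and omits the occlusion masking and the FLIP metric used here.

```python
import numpy as np
import cv2

def warp_with_flow(frame: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Backward-warp `frame` with a dense flow field of shape (H, W, 2).

    `flow` is assumed to map each pixel of frame t+step back to its
    position in frame t (e.g. estimated from the pair (t+step, t)).
    """
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)

def consistency_mse(frame_t: np.ndarray, frame_t_step: np.ndarray, flow: np.ndarray) -> float:
    """MSE between frame t warped towards t+step and the actual frame t+step."""
    warped = warp_with_flow(frame_t, flow).astype(np.float32) / 255.0
    target = frame_t_step.astype(np.float32) / 255.0
    return float(np.mean((warped - target) ** 2))
```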
This project is subject to the MIT License, with the exception of:
- core/: BSD-3 License
- popping_utils/occlusion_utils.py: MIT License
- popping_utils/flip.py: BSD-3 License
Clone the repository via
```shell
# HTTPS
git clone https://github.com/r4dl/PoppingDetection
```
Our default, provided install method is based on Conda package and environment management:
```shell
conda env create --file environment.yml
conda activate poppingdetection
```
As in StopThePop, this process assumes that you have CUDA SDK 11 installed, not 12.
Pretrained models can be downloaded by running
```shell
bash download_models.sh
```
or downloaded from Google Drive directly.
Our popping detection method supports:
- Videos
- A sequence of frames
To run the popping detection, simply use:
```shell
# Videos
python detect_popping.py -m <path to Model> -f <path to video1> <path to video2> --output_dir <output_path> --step <INT>

# Frame Sequences
python detect_popping.py -m <path to Model> -f <path to directory1> <path to directory2> --output_dir <output_path> --step <INT>
```
Command Line Arguments for `detect_popping.py`
- `-m`: Path to the model used for optical flow prediction; we used `models/raft-sintel.pth`
- `--step`: Frame offset during optical flow prediction; we used 1 and 7 to evaluate short-range and long-range consistency, respectively
- `-f`: Directories of frames (or video files) to test
- Add this flag to output all MSE/FLIP predictions
- Add this flag to output all warped images
- `--output_dir`: Path where the outputs should be stored (`output/<random>` by default)
By default, `detect_popping.py` produces three outputs:
- Full Results
- Per View Results
- A Per-Frame Plot
The full, averaged results are contained in `<output_dir>/results.json` and look like the following:
```json
{
    "MSE": {
        "ours": 0.0004816666263067259,
        "3dgs": 0.0005388583755252707
    },
    "FLIP": {
        "ours": 0.006288248952531622,
        "3dgs": 0.011204817680069102
    }
}
```
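Once a run has finished, `results.json` can also be consumed programmatically, for example to report which method has the lowest (best) score per metric. The output path below is a placeholder for whatever you passed via `--output_dir`:

```python
import json

# Hypothetical output path; adjust to your --output_dir
with open("output/my_run/results.json") as f:
    results = json.load(f)

# Lower MSE/FLIP means fewer popping artefacts
for metric, scores in results.items():
    best = min(scores, key=scores.get)
    print(f"{metric}: best method is {best} ({scores[best]:.6f})")
```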
Similarly, the per-view results are contained in `<output_dir>/per_view.json` and look like:
```json
{
    "MSE": {
        "ours": {
            "00000.png": 0.0005637490773475146,
            "00001.png": 0.0006141854811676444,
            ...
        },
        "3dgs": {
            "00000.png": 0.0005698252213756839,
            "00001.png": 0.0006286655123918763,
            ...
        }
    },
    "FLIP": {
        "ours": {
            "00000.png": 0.006910023213958658,
            "00001.png": 0.0069372149966621605,
            ...
        },
        "3dgs": {
            "00000.png": 0.00812406283546771,
            "00001.png": 0.009072611476942128,
            ...
        }
    }
}
```
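The per-view file is convenient for locating the frames where popping is strongest, e.g. to inspect them visually. A small example (output path and method name are placeholders):

```python
import json

# Hypothetical output path; adjust to your --output_dir
with open("output/my_run/per_view.json") as f:
    per_view = json.load(f)

# Frames with the largest FLIP score for one method, worst first
flip_scores = per_view["FLIP"]["3dgs"]
worst = sorted(flip_scores.items(), key=lambda kv: kv[1], reverse=True)[:5]
for frame, score in worst:
    print(f"{frame}: FLIP = {score:.6f}")
```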
In addition to the previous results, we also produce a per-frame plot of the computed metrics.
- Method names are automatically determined from either the video name or the frame directory name
- By default, when two or more frame sequences/videos are compared, the per-pixel minimum FLIP score is subtracted for more stable metrics; this can be disabled by setting `ENABLE_FLIP_MIN = False` in detect_popping.py
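The per-pixel minimum subtraction can be pictured as follows (a sketch of the assumed behaviour, not the exact implementation): for every pixel, the smallest FLIP error across all compared methods serves as a baseline, so each method is only penalized for the error it adds on top of the best method at that pixel.

```python
import numpy as np

def flip_scores_with_min_subtraction(flip_maps: dict) -> dict:
    """flip_maps: method name -> per-pixel FLIP error map of shape (H, W)."""
    # Element-wise minimum across all methods acts as the per-pixel baseline
    per_pixel_min = np.minimum.reduce(list(flip_maps.values()))
    return {name: float(np.mean(fmap - per_pixel_min)) for name, fmap in flip_maps.items()}
```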