Commit

Update README.md
Add details on saving and loading the quantized model
klnavaneet authored Jan 4, 2024
1 parent 34484b6 commit 98c3ba0
Showing 1 changed file with 5 additions and 1 deletion.
README.md (6 changes: 5 additions & 1 deletion)
@@ -8,6 +8,9 @@

Compact3D is a method to reduce the storage requirements of [3D Gaussian Splatting](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/) models. 3D Gaussian Splatting is a new technique for novel view synthesis in which the properties of 3D Gaussians (location, shape, color) are optimized to model a 3D scene. The method performs better than SOTA NeRF approaches, is extremely fast to train, and can be rendered in real time during inference. However, since a typical scene requires millions of Gaussians to model, the memory requirements can be an order of magnitude higher than those of many NeRF approaches. Here, we reduce the size of trained 3D Gaussian Splat models by 10-20x by vector quantizing the Gaussian parameters. An overview of our method is shown below. We perform K-Means quantization on the covariance and color parameters of all Gaussians and replace the values of each Gaussian with the corresponding codebook entry (i.e., the cluster center). This is done in conjunction with the training of the parameter values, as in the non-quantized version of Gaussian Splatting. We observe that the models can be compressed 20x without a significant drop in performance.
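As a rough sketch of the idea (not the code in this repository), the snippet below vector quantizes a matrix of per-Gaussian parameters with K-Means: each Gaussian keeps only an integer index into a shared codebook of cluster centers. All names and sizes (`params`, `num_clusters`, etc.) are illustrative.

```python
# Illustrative sketch only -- not the repository's implementation.
import numpy as np
from sklearn.cluster import KMeans

num_gaussians, dim, num_clusters = 10_000, 6, 256   # e.g. 6-D covariance parameters
params = np.random.randn(num_gaussians, dim).astype(np.float32)  # stand-in data

kmeans = KMeans(n_clusters=num_clusters, n_init=10).fit(params)
codebook = kmeans.cluster_centers_                   # (num_clusters, dim)
indices = kmeans.labels_.astype(np.uint16)           # one small integer per Gaussian

# Each Gaussian's parameter vector is replaced by its cluster center.
quantized_params = codebook[indices]                 # (num_gaussians, dim)

# Storage: num_gaussians indices + num_clusters * dim floats,
# instead of num_gaussians * dim floats.
```

In the actual method, the cluster assignments and centers are updated in conjunction with the standard Gaussian Splatting training rather than applied once after training.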

## Updates
**[4 January 2024]:** We have updated the code to store and load the quantized parameters using the indices and codebook. Only the compressed version of the model is saved.
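As a loose illustration of what this means for storage (the repository's actual file format may differ), saving the compressed model amounts to writing the integer indices and the codebook instead of the full per-Gaussian parameters, and loading reverses the lookup. The helpers below (`save_quantized`, `load_quantized`) are hypothetical, not functions from this codebase.

```python
# Hypothetical helpers sketching the save/load idea; not the repo's actual format.
import numpy as np

def save_quantized(path, indices, codebook):
    # Persist only the per-Gaussian indices and the shared codebook.
    np.savez_compressed(path, indices=indices, codebook=codebook)

def load_quantized(path):
    data = np.load(path)
    # Rebuild the full parameter matrix by codebook lookup.
    return data["codebook"][data["indices"]]
```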

## Getting Started

Our code is based on the excellent official repo for [3D Gaussian Splatting](https://github.com/graphdeco-inria/gaussian-splatting/tree/main). First, clone our repository.
@@ -32,9 +35,10 @@ bash run.sh

Once the model is trained, the rendering and evaluation process is exactly the same as in 3D Gaussian Splatting. Following their instructions,
```shell
-python render.py -m <path to trained model> # Generate renderings
+python render.py -m <path to trained model> --skip_train --load_quant # Generate renderings
python metrics.py -m <path to trained model> # Compute error metrics on renderings
```
Use the ```skip_train``` argument to skip rendering images from the train-set camera viewpoints, and the ```load_quant``` argument when loading our quantized model. Remove the ```load_quant``` argument when evaluating the original, non-quantized model.

## Citation

