Fix bug of multiple pre-processing during segmentation inference (PyTorch) (#645)
Segmentation inference is very slow (#531, #234). The cause is that the dataloader re-applies data preprocessing on every access when `self.cache_convert` is `None`:

https://github.com/isl-org/Open3D-ML/blob/fcf97c07bf7a113a47d0fcf63760b245c2a2784e/ml3d/torch/dataloaders/torch_dataloader.py#L77-L83

When the `run_inference` method is called, the dataloader's `cache_convert` is `None`:

https://github.com/isl-org/Open3D-ML/blob/fcf97c07bf7a113a47d0fcf63760b245c2a2784e/ml3d/torch/pipelines/semantic_segmentation.py#L143-L147

As a result, inference is extremely slow. This commit adds a `get_cache` method that provides a cache, avoiding the repeated preprocessing during inference.

I tested it on a GV100 GPU with RandLA-Net on the Toronto3D dataset. Inference time for a single scene is now only 2 minutes 37 seconds, considerably faster than before:

```bash
After:
test 0/1: 100%|██████████████████████████████████████████████████████| 4990714/4990714 [02:37<00:00, 31769.86it/s]
Before:
test 0/1:   4%|██                                                    | 187127/4990714 [05:12<2:19:39, 573.27it/s]
```
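For reference, the idea behind the fix is plain memoization: preprocess each sample once and serve the cached result on subsequent accesses. The sketch below illustrates that technique in isolation; `Cacher`, `slow_preprocess`, and keying on `attr['name']` are illustrative assumptions, not the actual `get_cache` implementation added in this commit.

```python
class Cacher:
    """Memoizes preprocessed samples, keyed by the sample's unique name.

    Passing an object like this as the dataloader's cache_convert skips
    the `cache_convert is None` branch that re-runs preprocessing on
    every access.
    """

    def __init__(self, preprocess):
        self.preprocess = preprocess  # the expensive preprocessing fn
        self._cache = {}

    def __call__(self, data, attr):
        key = attr['name']  # assumes each sample has a unique name
        if key not in self._cache:
            # Run the expensive preprocessing only once per sample.
            self._cache[key] = self.preprocess(data, attr)
        return self._cache[key]


if __name__ == "__main__":
    calls = []

    def slow_preprocess(data, attr):
        # Stand-in for a costly step such as subsampling / KNN search.
        calls.append(attr['name'])
        return {'points': data}

    cacher = Cacher(slow_preprocess)
    sample = {'xyz': [0.0, 1.0, 2.0]}
    cacher(sample, {'name': 'scene_0'})
    cacher(sample, {'name': 'scene_0'})  # served from the cache
    assert calls == ['scene_0']          # preprocessing ran only once
```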