Detecting reliable geometric features is key to successful point-cloud registration. Extracting such features is difficult due to the irregular structure of point clouds and the fact that most of them are corrupted by noise. In this work, we propose a new architecture, D3GATTEN, to address this challenge: it extracts strong features that can later be used for point-cloud registration, object reconstruction, and tracking. The key to our architecture is a self-attention module that extracts powerful features. Extensive experiments on the 3DMatch dataset show that our architecture achieves results competitive with the most recent methods and outperforms the existing state of the art. We also demonstrate that obtaining the best features is the essence of point-cloud alignment.
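To illustrate the idea behind the self-attention module, here is a minimal scaled dot-product self-attention over per-point features in plain NumPy. This is an illustrative sketch only, not D3GATTEN's actual module: a real layer would use learned query/key/value projections rather than the raw features.

```python
import numpy as np

def self_attention(F):
    """Scaled dot-product self-attention over N per-point feature vectors.

    F: (N, d) feature matrix. For simplicity we use Q = K = V = F
    (a real attention layer learns separate linear projections).
    """
    d_k = F.shape[1]
    scores = F @ F.T / np.sqrt(d_k)              # (N, N) pairwise affinities
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)            # row-wise softmax
    return A @ F                                 # attention-weighted features

feats = np.random.rand(128, 32)
out = self_attention(feats)
```

Each output feature is a convex combination of all input features, so every point's descriptor is refined with global context.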
Our proposed network architecture
- Prerequisites
- Data Preparation
- Training & Evaluation
- Pretrained Model
- Results
- Demo
- SmartCam demo - ADI
- Create the environment and install the required libraries:
conda env create -f environment.yml
- Compile the customized Tensorflow operators located in tf_custom_ops:
sh compile_op.sh
- Compile the C++ extension module for python located in cpp_wrappers:
sh compile_wrappers.sh
The training set of 3DMatch can be downloaded from here. It is generated by datasets/cal_overlap.py,
which selects all point-cloud fragment pairs having more than 30% overlap.
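The overlap criterion can be sketched as follows: a point of one fragment counts as overlapping if it has a neighbour in the other fragment within some distance threshold. This is an illustrative NumPy sketch, not the actual cal_overlap.py implementation; the threshold tau is a hypothetical value.

```python
import numpy as np

def overlap_ratio(src, tgt, tau=0.1):
    """Fraction of points in src with a neighbour in tgt closer than tau."""
    # Brute-force nearest-neighbour distances (fine for small clouds;
    # a KD-tree would be used for real fragments).
    d2 = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)  # (N, M)
    nn = np.sqrt(d2.min(axis=1))                             # (N,)
    return float((nn < tau).mean())

a = np.random.rand(200, 3)
b = a + 0.01       # near-identical copy: full overlap
c = a + 10.0       # far-away copy: no overlap
```

A fragment pair would be kept for training only if this ratio exceeds 0.3.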
- The training on 3DMatch dataset can be done by running:
python training_3DMatch.py
- The testing can be done by running:
python test.py
We provide the pre-trained model for 3DMatch in results/.
Example results on the 3DMatch dataset. The first two columns, (a) and (b), show the point clouds to be registered. The third column, (c), shows the standard deviation of the two point clouds, and the last column, (d), shows the result after applying the estimated transformation.
Correct correspondences are highlighted in green; erroneous correspondences are shown in red.
We provide a small demo that extracts dense features and detection scores for two point clouds and registers them using RANSAC. To try the demo, please run:
python demo.py
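At its core, feature-based RANSAC registration repeatedly samples a few putative correspondences, fits a rigid transform (here with the Kabsch algorithm), and keeps the transform with the most inliers. The NumPy sketch below is illustrative only, not the demo's actual implementation; it assumes the correspondences have already been established by feature matching, so src[i] pairs with tgt[i].

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def ransac_register(src, tgt, iters=200, tau=0.05, rng=None):
    """RANSAC over putative correspondences src[i] <-> tgt[i]."""
    rng = rng or np.random.default_rng(0)
    best = (np.eye(3), np.zeros(3), -1)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)  # minimal sample
        R, t = kabsch(src[idx], tgt[idx])
        err = np.linalg.norm(src @ R.T + t - tgt, axis=1)
        n = int((err < tau).sum())                    # count inliers
        if n > best[2]:
            best = (R, t, n)
    return best[0], best[1]

# Synthetic check: recover a known rotation + translation.
rng = np.random.default_rng(0)
src = rng.random((50, 3))
theta = 0.5
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.2, -0.1, 0.3])
tgt = src @ R_true.T + t_true
R_est, t_est = ransac_register(src, tgt)
```

With noiseless correspondences any non-degenerate 3-point sample recovers the transform exactly; the inlier count only matters once outliers are present.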
- To use the camera, please follow the instructions here.
- Capture point clouds from the ADI smart camera:
rosrun pcl_ros pointcloud_to_pcd input:=/topic/name
- Convert the captured point clouds to .ply format:
pcl_pcd2ply [-format 0|1] [-use_camera 0|1] input.pcd output.ply
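If the PCL command-line tools are not available, a minimal ASCII .ply file can also be written in pure Python. This is an illustrative sketch of the format only; the file name cloud.ply and the point list are made up for the example.

```python
def write_ply(path, points):
    """Write an iterable of (x, y, z) tuples as a minimal ASCII .ply file."""
    points = list(points)
    with open(path, "w") as f:
        f.write("ply\n")
        f.write("format ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\n")
        f.write("property float y\n")
        f.write("property float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

write_ply("cloud.ply", [(0.0, 0.0, 0.0), (1.0, 2.0, 3.0)])
```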
- Run the demo:
python demo.py --generate_features