
D3GATTEN: Dense 3D Geometric Features Extraction Using Self-Attention

Abstract

Detecting reliable geometric features is the key to successful point-cloud registration. Extracting geometric features from point clouds can be difficult, due to their invariance and the fact that most of them are corrupted by noise. In this work, we propose a new architecture, D3GATTEN, to address this challenge; it extracts strong features that can later be used for point-cloud registration, object reconstruction, and tracking. The key to our architecture is the use of a self-attention module to extract powerful features. Compared with the most recent methods, our architecture achieves competitive results. Thorough tests were performed on the 3DMatch dataset, where it outperforms the existing state of the art. We also demonstrate that obtaining the best features is the essence of point-cloud alignment.

Overview

Our proposed network architecture

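At the heart of the architecture is a self-attention module operating on per-point features. The following is a minimal, hypothetical sketch of such a block in tf.keras, given only as an illustration of the idea; the class name, layer sizes, and projection scheme are assumptions, not the repository's actual implementation.

import tensorflow as tf

class PointSelfAttention(tf.keras.layers.Layer):
    """Illustrative scaled dot-product self-attention over per-point features.

    Input:  (batch, num_points, in_dim) feature tensor.
    Output: (batch, num_points, out_dim) attended features.
    Sketch only; D3GATTEN's actual module may differ.
    """

    def __init__(self, out_dim=64):
        super().__init__()
        self.out_dim = out_dim
        # Linear projections for queries, keys and values.
        self.q_proj = tf.keras.layers.Dense(out_dim)
        self.k_proj = tf.keras.layers.Dense(out_dim)
        self.v_proj = tf.keras.layers.Dense(out_dim)

    def call(self, feats):
        q = self.q_proj(feats)                                    # (B, N, D)
        k = self.k_proj(feats)                                    # (B, N, D)
        v = self.v_proj(feats)                                    # (B, N, D)
        scale = tf.math.sqrt(tf.cast(self.out_dim, feats.dtype))
        attn = tf.nn.softmax(tf.matmul(q, k, transpose_b=True) / scale, axis=-1)
        return tf.matmul(attn, v)                                 # (B, N, D)

if __name__ == "__main__":
    # Toy example: 1 cloud, 1024 points, 32-dimensional input features.
    x = tf.random.normal((1, 1024, 32))
    print(PointSelfAttention(out_dim=64)(x).shape)                # (1, 1024, 64)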


Prerequisites

  1. Create the environment and install the required libraries:
conda env create -f environment.yml
  2. Compile the customized TensorFlow operators located in tf_custom_ops:
sh compile_op.sh
  3. Compile the C++ extension module for Python located in cpp_wrappers:
sh compile_wrappers.sh

Data Preparation

The training set of 3DMatch can be downloaded from here. It is generated by datasets/cal_overlap.py, which selects all point-cloud fragment pairs having more than 30% overlap.
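As a rough illustration of that criterion (not the actual datasets/cal_overlap.py code), the overlap between two fragments that are already in a common frame can be approximated as the fraction of points in one fragment that have a neighbour in the other within a small radius; the radius value and function name below are assumptions.

import numpy as np
from scipy.spatial import cKDTree

def overlap_ratio(points_a, points_b, radius=0.0375):
    """Fraction of points in `points_a` with a neighbour in `points_b`
    closer than `radius` (fragments assumed already in a common frame).
    Illustrative only; the repository's cal_overlap.py may use different
    parameters and logic."""
    tree = cKDTree(points_b)
    dists, _ = tree.query(points_a, k=1)
    return float(np.mean(dists < radius))

# A fragment pair would be kept for training when overlap_ratio(...) > 0.30.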

Training & Evaluation

  1. Training on the 3DMatch dataset can be done by running:
python training_3DMatch.py
  2. Testing can be done by running:
python test.py

Pretrained Model

We provide the pre-trained model for 3DMatch in results/.

Results

Example results on the 3DMatch dataset. The point clouds to be registered are shown in the first two columns, (a) and (b). The standard deviation of the two point clouds is shown in the third column (c), and the result obtained after applying the estimated transformation is shown in the last column (d).


Correct correspondences are highlighted in green; erroneous correspondences are shown in red.


Demo with own data

We provide a small demo that extracts dense features and detection scores for two point clouds and registers them using RANSAC. To try the demo, please run:

python demo.py

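To give a rough idea of what such a pipeline looks like, the sketch below registers two point clouds with RANSAC using Open3D. It substitutes FPFH descriptors for the learned D3GATTEN features, so the feature-extraction step, file names, voxel size, and thresholds are placeholders rather than the demo's actual code.

import open3d as o3d

def preprocess(path, voxel=0.05):
    # Load, downsample, estimate normals, and compute stand-in FPFH features.
    pcd = o3d.io.read_point_cloud(path)
    pcd = pcd.voxel_down_sample(voxel)
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    feat = o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
    return pcd, feat

src, src_feat = preprocess("cloud_bin_0.ply")   # placeholder file names
tgt, tgt_feat = preprocess("cloud_bin_1.ply")

result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    src, tgt, src_feat, tgt_feat,
    mutual_filter=True,
    max_correspondence_distance=0.075,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
    ransac_n=3,
    criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

print("Estimated transformation:\n", result.transformation)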

Smartcam demo - ADI

  1. To use the camera, please follow the instructions here.
  2. Capture point clouds from the ADI smart camera:
rosrun pcl_ros pointcloud_to_pcd input:=/topic/name
  3. Convert the captured .pcd files to .ply format (a Python alternative is sketched after this list):
pcl_pcd2ply [-format 0|1] [-use_camera 0|1] input.pcd output.ply
  4. Run the demo:
python demo.py --generate_features
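If pcl_pcd2ply is not available, the conversion in step 3 can also be done from Python with Open3D; the file names below are placeholders.

import open3d as o3d

# Convert a captured .pcd file to .ply (equivalent to step 3 above;
# input/output names are placeholders).
pcd = o3d.io.read_point_cloud("input.pcd")
o3d.io.write_point_cloud("output.ply", pcd, write_ascii=True)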
