# PointMamba

A Simple State Space Model for Point Cloud Analysis

Dingkang Liang¹\*, Xin Zhou¹\*, Wei Xu¹, Xingkui Zhu¹, Zhikang Zou², Xiaoqing Ye², Xiao Tan², and Xiang Bai¹†

¹ Huazhong University of Science & Technology, ² Baidu Inc.

(\*) Equal contribution. (†) Corresponding author.


## 📣 News

- [11/Oct/2024] 🚀 Check out our latest work PointGST, which achieves 99.48%, 97.76%, and 96.18% overall accuracy on the ScanObjectNN OBJ_BG, OBJ_ONLY, and PB_T50_RS variants, respectively.
- [26/Sept/2024] PointMamba has been accepted by NeurIPS 2024! 🥳🥳🥳
- [30/May/2024] Update! We have updated the architecture and performance. Please check our latest paper and compare it with the new results. Code and weights will be updated soon.
- [01/Apr/2024] ScanObjectNN with further data augmentation is now available, check it out!
- [16/Mar/2024] The configurations and checkpoints for ModelNet40 are now accessible, check them out!
- [05/Mar/2024] Our paper DAPT (github) has been accepted by CVPR 2024! 🥳🥳🥳 Check it out and give it a star 🌟!
- [16/Feb/2024] Released the paper.

## Abstract

Transformers have become one of the foundational architectures in point cloud analysis tasks due to their excellent global modeling ability. However, the attention mechanism has quadratic complexity, making the design of a linear complexity method with global modeling appealing. In this paper, we propose PointMamba, transferring the success of Mamba, a recent representative state space model (SSM), from NLP to point cloud analysis tasks. Unlike traditional Transformers, PointMamba employs a linear complexity algorithm, presenting global modeling capacity while significantly reducing computational costs. Specifically, our method leverages space-filling curves for effective point tokenization and adopts an extremely simple, non-hierarchical Mamba encoder as the backbone. Comprehensive evaluations demonstrate that PointMamba achieves superior performance across multiple datasets while significantly reducing GPU memory usage and FLOPs. This work underscores the potential of SSMs in 3D vision-related tasks and presents a simple yet effective Mamba-based baseline for future research.
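To make the tokenization step concrete, the snippet below is a minimal, hypothetical sketch (not the repository's code): it orders a point cloud along a Z-order (Morton) curve, one example of a space-filling curve, so that points close in 3D tend to land close together in the 1D sequence a model like Mamba scans. The quantization resolution `bits` is an arbitrary choice for the example.

```python
import numpy as np

def morton_order(points: np.ndarray, bits: int = 10) -> np.ndarray:
    """Indices that sort an (N, 3) point cloud along a Z-order (Morton) curve."""
    one = np.uint64(1)
    # Quantize each coordinate into an integer grid of 2**bits cells.
    mins, maxs = points.min(axis=0), points.max(axis=0)
    grid = ((points - mins) / (maxs - mins + 1e-9) * (2**bits - 1)).astype(np.uint64)

    # Interleave the bits of x, y, z into a single Morton code per point, so
    # spatial neighbors tend to receive nearby codes.
    codes = np.zeros(len(points), dtype=np.uint64)
    for b in range(bits):
        for axis in range(3):
            bit = (grid[:, axis] >> np.uint64(b)) & one
            codes |= bit << np.uint64(3 * b + axis)
    return np.argsort(codes)

pts = np.random.rand(1024, 3).astype(np.float32)
serialized = pts[morton_order(pts)]  # points reordered for sequential scanning
```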

## Overview
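The pipeline is short: serialize the points with a space-filling curve, embed them as tokens, and run a plain, non-hierarchical stack of Mamba blocks. The sketch below is illustrative only, assuming the `mamba_ssm` package (`pip install mamba-ssm`, CUDA required); the width, depth, and raw-xyz embedding are placeholder choices, not the paper's actual architecture or hyperparameters.

```python
import torch
import torch.nn as nn
from mamba_ssm import Mamba  # linear-time sequence mixer

class PointMambaSketch(nn.Module):
    """Toy non-hierarchical Mamba encoder over serialized point tokens."""

    def __init__(self, dim: int = 384, depth: int = 12):
        super().__init__()
        # Embed raw xyz as tokens (the real model embeds local point patches).
        self.embed = nn.Linear(3, dim)
        self.norms = nn.ModuleList(nn.LayerNorm(dim) for _ in range(depth))
        self.mixers = nn.ModuleList(Mamba(d_model=dim) for _ in range(depth))
        self.out_norm = nn.LayerNorm(dim)

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 3), already ordered by a space-filling curve.
        x = self.embed(xyz)
        for norm, mixer in zip(self.norms, self.mixers):
            x = x + mixer(norm(x))  # pre-norm residual; linear in sequence length
        return self.out_norm(x)     # (B, N, dim) per-point features
```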

## Main Results

- The table below will be updated once the code for the latest version is released.
| Task | Dataset | Config | Acc. (Scratch) | Download (Scratch) | Acc. (Pretrain) | Download (Finetune) |
| --- | --- | --- | --- | --- | --- | --- |
| Pre-training | ShapeNet | pretrain.yaml | N.A. | \ | \ | here |
| Classification | ModelNet40 | finetune_modelnet.yaml | 92.4% | here | 93.6% | here |
| Classification | ScanObjectNN | finetune_scan_objbg.yaml | 88.30% | here | 90.71% | here |
| Classification\* | ScanObjectNN | finetune_scan_objbg.yaml | \ | \ | 93.29% | here |
| Classification | ScanObjectNN | finetune_scan_objonly.yaml | 87.78% | here | 88.47% | here |
| Classification\* | ScanObjectNN | finetune_scan_objonly.yaml | \ | \ | 91.91% | here |
| Classification | ScanObjectNN | finetune_scan_hardest.yaml | 82.48% | here | 84.87% | here |
| Classification\* | ScanObjectNN | finetune_scan_hardest.yaml | \ | \ | 88.17% | here |
| Part Segmentation | ShapeNetPart | part segmentation | 85.8% mIoU | here | 86.0% mIoU | here |

\* indicates additionally using simple rotational augmentation during training.

## Getting Started

### Datasets

See DATASET.md for details.

### Usage

See USAGE.md for details.

## To Do

- Release code.
- Release checkpoints.
- ModelNet40.
- Update the code.

## Acknowledgement

This project is based on Point-BERT (paper, code), Point-MAE (paper, code), Mamba (paper, code), and Causal-Conv1d (code). Thanks for their wonderful work.

## Citation

If you find this repository useful in your research, please consider giving it a star ⭐ and a citation:

```bibtex
@inproceedings{liang2024pointmamba,
  title={PointMamba: A Simple State Space Model for Point Cloud Analysis},
  author={Liang, Dingkang and Zhou, Xin and Xu, Wei and Zhu, Xingkui and Zou, Zhikang and Ye, Xiaoqing and Tan, Xiao and Bai, Xiang},
  booktitle={Advances in Neural Information Processing Systems},
  year={2024}
}
```