Lianghui Zhu1\*, Bencheng Liao1\*, Qian Zhang2, Xinlong Wang3, Wenyu Liu1, Xinggang Wang1 📧
1 Huazhong University of Science and Technology, 2 Horizon Robotics, 3 Beijing Academy of Artificial Intelligence
(*) equal contribution, (📧) corresponding author.
arXiv Preprint (arXiv:2401.09417)
Jan. 18th, 2024: We released our paper on arXiv. Code and models are coming soon. Please stay tuned! ☕️
Recently, state space models (SSMs) with efficient hardware-aware designs, i.e., Mamba, have shown great potential for long sequence modeling. Building efficient and generic vision backbones purely upon SSMs is an appealing direction. However, representing visual data is challenging for SSMs due to the position-sensitivity of visual data and the requirement of global context for visual understanding. In this paper, we show that the reliance of visual representation learning on self-attention is not necessary and propose a new generic vision backbone with bidirectional Mamba blocks (Vim), which marks the image sequences with position embeddings and compresses the visual representation with bidirectional state space models. On ImageNet classification, COCO object detection, and ADE20k semantic segmentation tasks, Vim achieves higher performance compared to well-established vision transformers like DeiT, while also demonstrating significantly improved computation & memory efficiency. For example, Vim is 2.8× faster than DeiT and saves 86.8% GPU memory when performing batch inference on images with a resolution of 1248×1248. The results demonstrate that Vim is capable of overcoming the computation & memory constraints on performing Transformer-style understanding for high-resolution images and it has great potential to become the next-generation backbone for vision foundation models.
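For intuition, here is a minimal sketch of how a Vim-style encoder can be put together: images are split into patches, marked with position embeddings, and processed by blocks that run a state-space mixer over the token sequence in both directions. This is a simplified illustration rather than the released model: it assumes the upstream `mamba_ssm` package (installed below), places a plain class token at the front of the sequence instead of the paper's middle placement, and the width/depth defaults are placeholders.

```python
# Conceptual sketch only -- NOT the official Vim implementation.
# Assumes the `mamba_ssm` package; Mamba's selective-scan kernels require a CUDA GPU.
import torch
import torch.nn as nn
from mamba_ssm import Mamba


class BidirectionalMambaBlock(nn.Module):
    """Mixes tokens with one forward and one backward Mamba pass, plus a residual."""

    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.fwd = Mamba(d_model=dim)   # forward-direction SSM
        self.bwd = Mamba(d_model=dim)   # backward-direction SSM (run on the flipped sequence)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, L, D)
        h = self.norm(x)
        return x + self.fwd(h) + self.bwd(h.flip(dims=[1])).flip(dims=[1])


class VimSketch(nn.Module):
    """Patch embedding + class token + position embeddings + bidirectional SSM blocks."""

    def __init__(self, img_size=224, patch_size=16, dim=192, depth=4, num_classes=1000):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))  # position marking
        self.blocks = nn.Sequential(*[BidirectionalMambaBlock(dim) for _ in range(depth)])
        self.head = nn.Linear(dim, num_classes)

    def forward(self, images: torch.Tensor) -> torch.Tensor:  # images: (B, 3, H, W)
        x = self.patch_embed(images).flatten(2).transpose(1, 2)   # (B, N, D) patch tokens
        cls = self.cls_token.expand(x.shape[0], -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed           # add position embeddings
        x = self.blocks(x)
        return self.head(x[:, 0])                                 # classify from the class token


if __name__ == "__main__" and torch.cuda.is_available():
    model = VimSketch().cuda()
    logits = model(torch.randn(2, 3, 224, 224, device="cuda"))
    print(logits.shape)  # torch.Size([2, 1000])
```

Scanning the token sequence in both directions is what lets every patch aggregate global context without quadratic self-attention, which is the property the abstract highlights.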
- Python 3.10.13
  - `conda create -n your_env_name python=3.10.13`
- torch 2.1.1 + cu118
  - `pip install torch==2.1.1 torchvision==0.16.1 torchaudio==2.1.1 --index-url https://download.pytorch.org/whl/cu118`
- Requirements: vim_requirements.txt
  - `pip install -r vim/vim_requirements.txt`
- Install `causal_conv1d` and `mamba`
  - `pip install -e causal_conv1d`
  - `pip install -e mamba`
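As an optional sanity check (a suggestion, not part of the official setup), the snippet below verifies that the CUDA build of PyTorch is active and that the two locally installed packages import under their usual module names (`causal_conv1d` and `mamba_ssm`):

```python
# Optional environment check for the setup above.
import torch
import causal_conv1d  # installed via `pip install -e causal_conv1d`
import mamba_ssm      # installed via `pip install -e mamba`

print("torch:", torch.__version__, "| CUDA build:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())  # the Mamba kernels need a GPU
```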
To train Vim-tiny on ImageNet-1K, run: `bash vim/scripts/pt-vim-t.sh`
| Model | #param. | Top-1 Acc. | Top-5 Acc. | Hugging Face Repo |
|---|---|---|---|---|
| Vim-tiny | 7M | 73.1 | 91.1 | https://huggingface.co/hustvl/Vim-tiny |
To evaluate Vim-Ti on ImageNet-1K, run:

`python main.py --eval --resume /path/to/ckpt --model vim_tiny_patch16_224_bimambav2_final_pool_mean_abs_pos_embed_rope_also_residual_with_cls_token --data-path /path/to/imagenet`
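The `/path/to/ckpt` argument can point to the checkpoint hosted in the Hugging Face repo listed above. One way to fetch it with `huggingface_hub` is sketched below; the checkpoint filename is a placeholder, so check the repo's file list for the real name:

```python
# Hypothetical download helper; "vim_tiny.pth" is a placeholder filename --
# see https://huggingface.co/hustvl/Vim-tiny for the actual file.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(repo_id="hustvl/Vim-tiny", filename="vim_tiny.pth")
print(ckpt_path)  # pass this path to --resume in the evaluation command above
```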
This project is based on Mamba (paper, code), Causal-Conv1d (code), and DeiT (paper, code). Thanks for their wonderful work.
If you find Vim useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry.
@article{vim,
  title={Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model},
  author={Lianghui Zhu and Bencheng Liao and Qian Zhang and Xinlong Wang and Wenyu Liu and Xinggang Wang},
  journal={arXiv preprint arXiv:2401.09417},
  year={2024}
}