To comply with the duplicate-rate check at the time of article submission, the abstract has been removed; it will be restored later.
```
git clone https://github.com/yuanfuture/MCCA-MOT
```
Please download the KITTI object tracking dataset from the official KITTI website.
The final dataset organization should be like this:
```
MCCA-MOT
├── data
│   ├── kitti
│   │   ├── training
│   │   │   ├── calib & velodyne & label_02 & image_02 & depth_2 & (optional: planes)
│   │   ├── testing
│   │   │   ├── calib & velodyne & image_02
```
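Before running the code, it can help to confirm that the folders above are actually in place. The following is a minimal sketch (not part of the MCCA-MOT codebase) that checks the layout described in the tree; the `REQUIRED` mapping is taken directly from the tree above, and the root path is an assumption you should adapt to your checkout.

```python
from pathlib import Path

# Sub-directories expected under data/kitti, taken from the tree above.
REQUIRED = {
    "training": ("calib", "velodyne", "label_02", "image_02", "depth_2"),
    "testing": ("calib", "velodyne", "image_02"),
}

def missing_dirs(root: Path) -> list[str]:
    """Return the expected sub-directories that are absent under `root`."""
    return [
        f"{split}/{sub}"
        for split, subs in REQUIRED.items()
        for sub in subs
        if not (root / split / sub).is_dir()
    ]

# Example: missing_dirs(Path("MCCA-MOT/data/kitti")) should return []
# once the dataset is fully organized.
```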
```
cd your_path/MCCA-MOT
pip install -r requirements.txt
python main.py
```
To evaluate the tracking results with the evaluation tool used by the KITTI benchmark, download the evaluation code from https://github.com/JonathonLuiten/TrackEval and follow its setup instructions. The following results will be obtained:
| Class | HOTA(↑) | AssA(↑) | LocA(↑) | MOTA(↑) | MOTP(↑) | FP(↓) | FN(↓) | IDSW(↓) |
|---|---|---|---|---|---|---|---|---|
| Car | 79.31% | 83.49% | 88.60% | 86.71% | 87.51% | 3992 | 513 | 66 |
| Pedestrian | 51.79% | 56.95% | 78.52% | 60.36% | 74.50% | 7687 | 1317 | 173 |
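As a reminder of how the MOTA column relates to the error counts, the CLEAR-MOT formula can be sketched as below. Note that the ground-truth object count `num_gt` is not listed in the table, so the values in the usage comment are illustrative only, not taken from the results above.

```python
def mota(fp: int, fn: int, idsw: int, num_gt: int) -> float:
    """CLEAR-MOT accuracy: MOTA = 1 - (FP + FN + IDSW) / num_gt."""
    return 1.0 - (fp + fn + idsw) / num_gt

# Illustrative usage with made-up counts:
# mota(fp=10, fn=20, idsw=5, num_gt=100) -> 0.65
```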
Image feature lifting follows CADDN, and pseudo point cloud generation follows SFD and TWISE. Many thanks for their wonderful work!
If you find this work useful, please consider citing our paper:
```
@ARTICLE{liu2024mccamot,
  author={Hengyuan Liu},
  title={MCCA-MOT: Multimodal Collaboration-Guided Cascade Association Network for 3D Multi-Object Tracking},
  journal={},
  year={2024},
  volume={},
  number={},
  pages={},
  doi={}
}
```