Merge pull request #279 from LLaVA-VL/yhzhang/llava_video_dev
Update LLaVA-Video paper link
Luodian authored Oct 4, 2024
2 parents a4c9bce + 44bb013 commit 333d6fc
Showing 1 changed file with 2 additions and 2 deletions.
README.md: 2 additions & 2 deletions
@@ -3,7 +3,7 @@
 </p>
 
 # LLaVA-NeXT: Open Large Multimodal Models
-[![Static Badge](https://img.shields.io/badge/llava_video-paper-green)](http://arxiv.org/abs/2410.0271)
+[![Static Badge](https://img.shields.io/badge/llava_video-paper-green)](http://arxiv.org/abs/2410.02713)
 [![Static Badge](https://img.shields.io/badge/llava_onevision-paper-green)](https://arxiv.org/abs/2408.03326)
 [![llava_next-blog](https://img.shields.io/badge/llava_next-blog-green)](https://llava-vl.github.io/blog/)
@@ -30,7 +30,7 @@
 📄 **Explore more**:
 - [LLaVA-Video-178K Dataset](https://huggingface.co/datasets/lmms-lab/LLaVA-Video-178K): Download the dataset.
 - [LLaVA-Video Models](https://huggingface.co/collections/lmms-lab/llava-video-661e86f5e8dabc3ff793c944): Access model checkpoints.
-- [Paper](http://arxiv.org/abs/2410.0271): Detailed information about LLaVA-Video.
+- [Paper](http://arxiv.org/abs/2410.02713): Detailed information about LLaVA-Video.
 - [LLaVA-Video Documentation](https://github.com/LLaVA-VL/LLaVA-NeXT/blob/main/docs/LLaVA_Video_1003.md): Guidance on training, inference and evaluation.
 
 - [2024/09/13] 🔥 **🚀 [LLaVA-OneVision-Chat](docs/LLaVA_OneVision_Chat.md)**. The new LLaVA-OV-Chat (7B/72B) significantly improves the chat experience of LLaVA-OV. 📄
