
OpenGVLab

General Vision Team of Shanghai AI Laboratory


Welcome to OpenGVLab! 👋

We are a research group from Shanghai AI Laboratory focused on vision-centric AI research. The "GV" in OpenGVLab stands for general vision: a general understanding of vision, so that little effort is needed to adapt to new vision-based tasks.

We develop model architectures and release pre-trained foundation models to the community to motivate further research in this area. We have made promising progress in general vision AI, with 109 SOTA results🚀. In 2022, our open-sourced foundation model achieved 65.5 mAP on the COCO object detection benchmark and 91.1% Top-1 accuracy on Kinetics-400, landmarks for AI vision👀 tasks in image🖼️ and video📹 understanding. In 2023, we created VideoChat🦜, LLaMA-Adapter🦙, the 3D foundation model PonderV2🧊, and many more works! At CVPR 2023, our vision foundation model InternImage was listed as one of the most influential papers, and together with our partner OpenDriveLab we won the Best Paper Award🎉.

In 2024, we released our best open-source VLM, InternVL, and the video understanding foundation model InternVideo2, which won 7 championships in the EgoVis challenges 🥇. To date, our team has open-sourced more than 70 works; please find them here😃

Building on these solid vision foundations, we have expanded into multi-modality models. We aim to empower individuals and businesses by offering a higher starting point for developing vision-based AI products and lessening the burden of building AI models from scratch.

Branches: Alpha (explore the latest advances in vision+language research), uni-medical (focus on medical AI), Vchitect (generative AI)

Follow us: Twitter/X · 🤗 Hugging Face · Medium · WeChat · Zhihu

Pinned

  1. InternVL

    [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model with performance approaching GPT-4o.

    Python · 5.8k stars · 456 forks

  2. InternVideo

    [ECCV2024] Video Foundation Models & Data for Multimodal Understanding

    Python · 1.4k stars · 85 forks

  3. Ask-Anything

    [CVPR2024 Highlight][VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as miniGPT4, StableLM, and MOSS.

    Python · 3k stars · 250 forks

  4. VideoMamba

    [ECCV2024] VideoMamba: State Space Model for Efficient Video Understanding

    Python · 825 stars · 60 forks

  5. OmniQuant

    [ICLR2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs.

    Python · 708 stars · 54 forks

  6. LLaMA-Adapter

    [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters

    Python · 5.7k stars · 374 forks

Repositories

Showing 10 of 67 repositories
  • InternVL

    [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model with performance approaching GPT-4o.

    Python · 5,831 stars · MIT license · 456 forks · 130 issues · 5 PRs · Updated Oct 26, 2024
  • OmniCorpus

    OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text

    Python · 264 stars · 5 forks · 2 issues · 0 PRs · Updated Oct 25, 2024
  • InternVL-MMDetSeg

    Train InternViT-6B in MMSegmentation and MMDetection with DeepSpeed

    Jupyter Notebook · 52 stars · 4 forks · 1 issue · 0 PRs · Updated Oct 25, 2024
  • PhyGenBench

    The code and data for the paper: Towards World Simulator: Crafting Physical Commonsense-Based Benchmark for Video Generation

    Python · 60 stars · 1 fork · 3 issues · 0 PRs · Updated Oct 25, 2024
  • MM-NIAH

    [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of existing MLLMs to comprehend long multimodal documents.

    Python · 90 stars · 5 forks · 1 issue · 0 PRs · Updated Oct 22, 2024
  • VisionLLM

    VisionLLM Series

    Python · 879 stars · Apache-2.0 license · 25 forks · 12 issues · 0 PRs · Updated Oct 18, 2024
  • VideoMAEv2

    [CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking

    Python · 509 stars · MIT license · 58 forks · 13 issues · 0 PRs · Updated Oct 8, 2024
  • EfficientQAT

    EfficientQAT: Efficient Quantization-Aware Training for Large Language Models

    Python · 213 stars · 15 forks · 4 issues · 0 PRs · Updated Oct 8, 2024
  • OmniQuant

    [ICLR2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs.

    Python · 708 stars · MIT license · 54 forks · 23 issues · 1 PR · Updated Oct 8, 2024
  • STM-Evaluation

    Python · 69 stars · MIT license · 6 forks · 1 issue · 0 PRs · Updated Oct 6, 2024