DINO-X: A Unified Vision Model for Open-World Object Detection and Understanding

The World's Top-Performing Vision Model for Open-World Object Detection

This project provides examples of using the DINO-X API, which is hosted on DeepDataSpace.

IDEA Research

Highlights

Building on Grounding DINO 1.5, DINO-X introduces several improvements, taking a step toward a more general, object-centric vision model. The highlights of DINO-X are as follows:

The Strongest Open-Set Detection Performance: DINO-X Pro sets new SOTA results on zero-shot transfer detection benchmarks: 56.0 AP on COCO, 59.8 AP on LVIS-minival, and 52.4 AP on LVIS-val. Notably, it scores 63.3 AP and 56.5 AP on the rare classes of the LVIS-minival and LVIS-val benchmarks, improving on the previous SOTA performance by 5.8 box AP and 5.0 box AP, respectively. Such a result underscores its significantly enhanced capacity for recognizing long-tailed objects.

🔥 Diverse Input Prompt and Multi-level Output Semantic Representations: DINO-X can accept text prompts, visual prompts, and customized prompts as input, and it outputs representations at various semantic levels, including bounding boxes, segmentation masks, pose keypoints, and object captions, with multiple perception heads.

🍉 Rich and Practical Capabilities: DINO-X can simultaneously support lots of highly practical tasks, including Open-Set Object Detection and Segmentation, Phrase Grounding, Visual-Prompt Counting, Pose Estimation, and Region Captioning. We further develop a universal object prompt to achieve Prompt-Free Anything Detection and Recognition.

Latest News

  • 2024/11/25: Released the DINO-X API for open-world detection.
  • 2024/11/22: Launched the DINO-X project and initial documentation.

Model Framework

DINO-X can accept text prompts, visual prompts, and customized prompts as input, and it can generate representations at various semantic levels, including bounding boxes, segmentation masks, pose keypoints, and object captions.
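These multi-level outputs can be pictured with a minimal container class. This is an illustrative sketch only — the field names and layouts below are assumptions for explanation, not the API's actual response schema:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DetectedObject:
    """One detected object with DINO-X-style multi-level outputs (illustrative)."""
    category: str                             # matched prompt phrase, e.g. "helmet"
    score: float                              # detection confidence in [0, 1]
    bbox: List[float]                         # [x_min, y_min, x_max, y_max] in pixels
    mask: Optional[List[List[float]]] = None  # polygon(s) from the segmentation head
    keypoints: Optional[List[float]] = None   # flattened (x, y, visibility) triples
    caption: Optional[str] = None             # region caption, if requested

obj = DetectedObject(category="helmet", score=0.87, bbox=[10.0, 20.0, 110.0, 140.0])
```

Depending on the task requested, only some of the optional fields would be populated — a detection-only call yields boxes, while segmentation, pose, and captioning heads fill the others.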

Performance

Side-by-Side Performance Comparison with Previous Best Methods

Zero-Shot Performance on Object Detection Benchmarks

| Model | COCO (AP box) | LVIS-minival (AP all) | LVIS-minival (AP rare) | LVIS-val (AP all) | LVIS-val (AP rare) |
|---|---|---|---|---|---|
| Other Best Open-Set Model | 53.4 (OmDet-Turbo) | 47.6 (T-Rex2 visual) | 45.4 (T-Rex2 visual) | 45.3 (T-Rex2 visual) | 43.8 (T-Rex2 visual) |
| DetCLIPv3 | - | 48.8 | 49.9 | 41.4 | 41.4 |
| Grounding DINO | 52.5 | 27.4 | 18.1 | - | - |
| T-Rex2 (text) | 52.2 | 54.9 | 49.2 | 45.8 | 42.7 |
| Grounding DINO 1.5 Pro | 54.3 | 55.7 | 56.1 | 47.6 | 44.6 |
| Grounding DINO 1.6 Pro | 55.4 | 57.7 | 57.5 | 51.1 | 51.5 |
| DINO-X Pro | 56.0 | 59.7 | 63.3 | 52.5 | 56.5 |
  • DINO-X Pro achieves SOTA performance on the COCO, LVIS-minival, and LVIS-val zero-shot object detection benchmarks.
  • DINO-X Pro significantly improves performance on LVIS rare classes, surpassing the previous SOTA Grounding DINO 1.6 Pro model by 5.8 AP on LVIS-minival and 5.0 AP on LVIS-val, demonstrating DINO-X's exceptional capability in long-tailed object detection scenarios.
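As a quick sanity check, the rare-class margins over Grounding DINO 1.6 Pro can be recomputed directly from the AP-rare columns of the table:

```python
# AP-rare numbers copied from the benchmark table above.
dino_x_pro = {"LVIS-minival rare": 63.3, "LVIS-val rare": 56.5}
gdino_16_pro = {"LVIS-minival rare": 57.5, "LVIS-val rare": 51.5}

# Improvement of DINO-X Pro over Grounding DINO 1.6 Pro on each benchmark.
margins = {k: round(dino_x_pro[k] - gdino_16_pro[k], 1) for k in dino_x_pro}
print(margins)  # {'LVIS-minival rare': 5.8, 'LVIS-val rare': 5.0}
```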

API Usage

Installation

  • Install the required packages
pip install -r requirements.txt

Note: If you encounter errors when calling the API, please install the latest version of dds-cloudapi-sdk:

pip install dds-cloudapi-sdk --upgrade

Register on the Official Website to Get an API Token

  • First-Time Application: If you are interested in our project and wish to try the algorithm, you will need to apply for an API token through our token request website before your first use.

  • Request Additional Token Quotas: If you find our project helpful and need more API token quota, you can request additional tokens by filling out this form. Our team will review your request and allocate more tokens within one or two days. You can also apply for more tokens by emailing us.

Run the Local API Demo

  • Set your API token in demo.py and run the local demo
python demo.py
  • After running the demo, you will find the annotated image at ./annotated_demo_image.jpg
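If you want to post-process the raw detections yourself before visualizing, a typical step is to filter by confidence and clamp boxes to the image bounds. The flat dict layout (`category`/`score`/`bbox`) used here is an assumption for illustration, not the SDK's actual result schema:

```python
def filter_and_clamp(detections, score_thr=0.3, img_w=640, img_h=480):
    """Drop low-confidence detections and clamp boxes to the image bounds."""
    kept = []
    for det in detections:
        if det["score"] < score_thr:
            continue
        x0, y0, x1, y1 = det["bbox"]
        kept.append({
            "category": det["category"],
            "score": det["score"],
            "bbox": [max(0.0, x0), max(0.0, y0),
                     min(float(img_w), x1), min(float(img_h), y1)],
        })
    return kept

sample = [
    {"category": "helmet", "score": 0.87, "bbox": [-5.0, 10.0, 700.0, 200.0]},
    {"category": "wheel", "score": 0.12, "bbox": [0.0, 0.0, 50.0, 50.0]},
]
kept = filter_and_clamp(sample)
# kept: only the helmet detection, with its box clamped to [0.0, 10.0, 640.0, 200.0]
```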
Demo Image Visualization

With the text prompt "wheel . eye . helmet . mouse . mouth . vehicle . steering wheel . ear . nose", we will get the prediction results as follows:
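The prompt above follows the text-prompt convention used in the demo: category phrases joined by " . " separators. A small helper pair (hypothetical, not part of the SDK) shows how such prompts can be built and parsed:

```python
def build_text_prompt(categories):
    """Join category phrases with the ' . ' separator used in DINO-X text prompts."""
    return " . ".join(c.strip() for c in categories)

def parse_text_prompt(prompt):
    """Split a prompt string back into its category phrases."""
    return [c.strip() for c in prompt.split(".") if c.strip()]

prompt = build_text_prompt(["wheel", "eye", "helmet", "steering wheel"])
# prompt == "wheel . eye . helmet . steering wheel"
```

Note that multi-word phrases such as "steering wheel" are fine: only the period acts as a separator, not whitespace.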

Related Work

LICENSE

DINO-X API License

DINO-X is released under the Apache 2.0 license. Please see the LICENSE file for more information.

Copyright (c) IDEA. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use these files except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

BibTeX

If you find our work helpful for your research, please consider citing the following BibTeX entries.

@misc{ren2024dinoxunifiedvisionmodel,
      title={DINO-X: A Unified Vision Model for Open-World Object Detection and Understanding}, 
      author={Tianhe Ren and Yihao Chen and Qing Jiang and Zhaoyang Zeng and Yuda Xiong and Wenlong Liu and Zhengyu Ma and Junyi Shen and Yuan Gao and Xiaoke Jiang and Xingyu Chen and Zhuheng Song and Yuhong Zhang and Hongjie Huang and Han Gao and Shilong Liu and Hao Zhang and Feng Li and Kent Yu and Lei Zhang},
      year={2024},
      eprint={2411.14347},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2411.14347}, 
}
@misc{ren2024grounding,
      title={Grounding DINO 1.5: Advance the "Edge" of Open-Set Object Detection}, 
      author={Tianhe Ren and Qing Jiang and Shilong Liu and Zhaoyang Zeng and Wenlong Liu and Han Gao and Hongjie Huang and Zhengyu Ma and Xiaoke Jiang and Yihao Chen and Yuda Xiong and Hao Zhang and Feng Li and Peijun Tang and Kent Yu and Lei Zhang},
      year={2024},
      eprint={2405.10300},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2405.10300}, 
}
@misc{jiang2024trex2genericobjectdetection,
      title={T-Rex2: Towards Generic Object Detection via Text-Visual Prompt Synergy}, 
      author={Qing Jiang and Feng Li and Zhaoyang Zeng and Tianhe Ren and Shilong Liu and Lei Zhang},
      year={2024},
      eprint={2403.14610},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2403.14610}, 
}
@misc{liu2024groundingdinomarryingdino,
      title={Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection}, 
      author={Shilong Liu and Zhaoyang Zeng and Tianhe Ren and Feng Li and Hao Zhang and Jie Yang and Qing Jiang and Chunyuan Li and Jianwei Yang and Hang Su and Jun Zhu and Lei Zhang},
      year={2024},
      eprint={2303.05499},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2303.05499}, 
}
