[CVPR 2024 Highlight] GLEE: General Object Foundation Model for Images and Videos at Scale
 
 
 
 

GLEE: General Object Foundation Model for Images and Videos at Scale

Junfeng Wu*, Yi Jiang*, Qihao Liu, Zehuan Yuan, Xiang Bai, and Song Bai

* Equal Contribution, Correspondence

[Project Page](https://glee-vision.github.io/)


*(figure: data_demo)*

Highlight:

  • GLEE was accepted by CVPR 2024 as a Highlight!
  • GLEE is a general object foundation model jointly trained on over ten million images from various benchmarks with diverse levels of supervision.
  • GLEE is capable of addressing a wide range of object-centric tasks simultaneously while maintaining SOTA performance.
  • GLEE demonstrates remarkable versatility and robust zero-shot transferability across a spectrum of object-level image and video tasks, and is able to serve as a foundational component for enhancing other architectures or models.

We will release the following for GLEE:

  • Demo Code

  • Model Zoo

  • Comprehensive User Guide

  • Training Code and Scripts

  • Detailed Evaluation Code and Scripts

  • Tutorial for Zero-shot Testing or Fine-tuning GLEE on New Datasets

Getting started

  1. Installation: Please refer to INSTALL.md for more details.
  2. Data preparation: Please refer to DATA.md for more details.
  3. Training: Please refer to TRAIN.md for more details.
  4. Testing: Please refer to TEST.md for more details.
  5. Model zoo: Please refer to MODEL_ZOO.md for more details.

Run the demo app

Try our online demo app on [HuggingFace Demo] or run it locally:

git clone https://github.com/FoundationVision/GLEE
cd GLEE
# supports both CPU and GPU
python app.py

Introduction

GLEE has been trained on over ten million images from 16 datasets, fully harnessing both existing annotated data and cost-effective automatically labeled data to construct a diverse training set. This extensive training regime endows GLEE with formidable generalization capabilities.

*(figure: data_demo)*

GLEE consists of an image encoder, a text encoder, a visual prompter, and an object decoder, as illustrated in the figure below. The text encoder processes arbitrary task-related descriptions, including 1) object category lists, 2) object names in any form, 3) captions about objects, and 4) referring expressions. The visual prompter encodes user inputs during interactive segmentation, such as 1) points, 2) bounding boxes, and 3) scribbles, into corresponding visual representations of target objects. These are then integrated into a detector that extracts objects from images according to the textual and visual input.

*(figure: pipeline)*
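
A minimal sketch of how these four components could be composed in a forward pass is shown below. The class and argument names (GLEESketch, text_prompts, visual_prompts, etc.) are hypothetical and do not mirror the actual implementation in this repository.

import torch.nn as nn

class GLEESketch(nn.Module):
    """Illustrative composition of GLEE's four components (not the real implementation)."""

    def __init__(self, image_encoder, text_encoder, visual_prompter, object_decoder):
        super().__init__()
        self.image_encoder = image_encoder      # backbone producing multi-scale visual features
        self.text_encoder = text_encoder        # embeds category lists, object names, captions, expressions
        self.visual_prompter = visual_prompter  # embeds points / boxes / scribbles from interactive input
        self.object_decoder = object_decoder    # predicts boxes, masks, and per-object embeddings

    def forward(self, images, text_prompts=None, visual_prompts=None):
        feats = self.image_encoder(images)  # 1) encode the image into visual features
        # 2) encode arbitrary task text, if given
        text_emb = self.text_encoder(text_prompts) if text_prompts is not None else None
        # 3) encode interactive inputs (points, boxes, scribbles), if given
        prompt_emb = self.visual_prompter(visual_prompts) if visual_prompts is not None else None
        # 4) decode objects conditioned on both kinds of prompts
        return self.object_decoder(feats, text_emb=text_emb, prompt_emb=prompt_emb)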

Based on the above designs, GLEE seamlessly unifies a wide range of object perception tasks in images and videos, including object detection, instance segmentation, grounding, multi-object tracking (MOT), video instance segmentation (VIS), video object segmentation (VOS), and interactive segmentation and tracking, and it supports open-world/large-vocabulary image and video detection and segmentation tasks.
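
As a rough illustration, under this design each task reduces to a choice of prompt fed to the same model. The calls below are hypothetical usage built on the GLEESketch interface above (model and images are assumed to be an instance of that sketch and a preprocessed image batch); they are not GLEE's actual inference API.

# Detection / instance segmentation: prompt with a category list.
detections = model(images, text_prompts=["person", "car", "dog"])

# Referring expression comprehension / segmentation: prompt with free-form text.
referred = model(images, text_prompts=["the man in the red jacket holding a cup"])

# Interactive segmentation: prompt with a user-drawn box (x1, y1, x2, y2).
interactive = model(images, visual_prompts={"boxes": [[120, 60, 340, 400]]})

# Video tasks (MOT / VIS / VOS): run per frame and associate detections across
# frames by matching the per-object embeddings returned by the decoder.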

Results

Image-level tasks

*(figure: image-level task results)*

*(figure: ODinW results)*

Video-level tasks

*(figure: video-level task results)*

*(figure: VIS / VOS / R-VOS results)*

Citing GLEE

@misc{wu2023GLEE,
  author={Junfeng Wu and Yi Jiang and Qihao Liu and Zehuan Yuan and Xiang Bai and Song Bai},
  title={General Object Foundation Model for Images and Videos at Scale},
  year={2023},
  eprint={2312.09158},
  archivePrefix={arXiv}
}

Acknowledgments

  • Thanks to UNINEXT for the implementation of multi-dataset training and data processing.

  • Thanks to VNext for its work on Video Instance Segmentation (VIS).

  • Thanks to SEEM for the implementation of the visual prompter.

  • Thanks to MaskDINO for providing a powerful detector and segmenter.