GLEE MODEL ZOO
Introduction
GLEE maintains state-of-the-art (SOTA) performance across multiple tasks while remaining versatile and open, demonstrating strong generalization capabilities. Here we provide model weights for all three training stages of GLEE: '-pretrain', '-joint', and '-scaleup'. The '-pretrain' weights are pretrained on Objects365 and OpenImages, giving an effective initialization from more than three million detection samples. The '-joint' weights come from joint training on 15 datasets, where the model achieves its best performance. The '-scaleup' weights additionally incorporate automatically annotated SA1B and GRIT data, which improves zero-shot performance and supports richer semantic understanding. We also offer weights fine-tuned on VOS data for interactive video tracking applications.
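After downloading a checkpoint, it can be opened directly with PyTorch as a quick sanity check. The snippet below is a minimal sketch: the file name `GLEE_Lite_joint.pth` is a placeholder, and the possibility that the weights are wrapped under a `"model"` key is an assumption based on common detectron2-style checkpoints, not something guaranteed by this repository.

```python
import torch

# Minimal sketch: inspect a downloaded GLEE checkpoint.
# The path below is a placeholder; the "model" wrapper key is an assumption
# based on typical detectron2-style checkpoint layouts.
ckpt_path = "GLEE_Lite_joint.pth"
ckpt = torch.load(ckpt_path, map_location="cpu")

# Unwrap the state dict if the weights are nested under a "model" key.
state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt

# Report how many tensors the checkpoint holds and the total parameter count.
num_tensors = len(state_dict)
num_params = sum(v.numel() for v in state_dict.values() if torch.is_tensor(v))
print(f"{num_tensors} tensors, {num_params / 1e6:.1f}M parameters")

# Peek at a few parameter names to confirm the expected backbone/head structure.
for name, value in list(state_dict.items())[:10]:
    shape = tuple(value.shape) if torch.is_tensor(value) else type(value).__name__
    print(name, shape)
```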
Stage 1: Pretraining
| Name | Config | Weight |
| --- | --- | --- |
| GLEE-Lite-pretrain | Stage1_pretrain_openimage_obj365_CLIPfrozen_R50.yaml | Model |
| GLEE-Plus-pretrain | Stage1_pretrain_openimage_obj365_CLIPfrozen_SwinL.yaml | Model |
| GLEE-Pro-pretrain | Stage1_pretrain_openimage_obj365_CLIPfrozen_EVA02L_LSJ1536.yaml | Model |
Stage 2: Image-level Joint Training
| Name | Config | Weight |
| --- | --- | --- |
| GLEE-Lite-joint | Stage2_joint_training_CLIPteacher_R50.yaml | Model |
| GLEE-Plus-joint | Stage2_joint_training_CLIPteacher_SwinL.yaml | Model |
| GLEE-Pro-joint | Stage2_joint_training_CLIPteacher_EVA02L.yaml | Model |
Stage 3: Scale-up Training
| Name | Config | Weight |
| --- | --- | --- |
| GLEE-Lite-scaleup | Stage3_scaleup_CLIPteacher_R50.yaml | Model |
| GLEE-Plus-scaleup | Stage3_scaleup_CLIPteacher_SwinL.yaml | Model |
| GLEE-Pro-scaleup | Stage3_scaleup_CLIPteacher_EVA02L.yaml | Model |
Single Tasks
We also provide a model fine-tuned on the VOS task with a ResNet-50 backbone:
| Name | Config | Weight |
| --- | --- | --- |
| GLEE-Lite-vos | VOS_joint_finetune_R50.yaml | Model |
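To start a further fine-tuning experiment from any of these weights (for example, from the VOS checkpoint for interactive video tracking), the downloaded state dict can be loaded non-strictly into your own model instance. This is a generic PyTorch sketch under the same assumptions as above (placeholder file name, optional `"model"` wrapper key); it is not the repository's official loading path, which goes through the training configs listed in the tables.

```python
import torch
from torch import nn


def load_glee_weights(model: nn.Module, ckpt_path: str) -> None:
    """Load GLEE weights into `model`, ignoring keys that do not match.

    Sketch only: assumes the checkpoint is either a plain state dict or
    wraps one under a "model" key (typical for detectron2-style checkpoints).
    """
    ckpt = torch.load(ckpt_path, map_location="cpu")
    state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")


# Hypothetical usage with your own GLEE model instance:
# model = build_my_glee_model()                   # placeholder constructor
# load_glee_weights(model, "GLEE_Lite_vos.pth")   # placeholder checkpoint path
```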