diff --git a/README.md b/README.md
index e40cad0..033e810 100644
--- a/README.md
+++ b/README.md
@@ -11,11 +11,17 @@ We introduce **SEEM** that can **S**egment **E**verything **E**verywhere with **

-:fire: **Other awsome projects you may want to follow:**
+:fire: **Related projects:**
+
+* [X-Decoder](https://github.com/microsoft/X-Decoder) : Generic decoder that can do multiple tasks with one model only; **We built SEEM based on the X-Decoder codebase**.
+* [FocalNet](https://github.com/microsoft/FocalNet) : Focal Modulation Networks; **We used FocalNet as the vision backbone**.
+* [UniCL](https://github.com/microsoft/UniCL) : Unified Contrastive Learning; **We used this technique for image-text contrastive learning**.
+
+:fire: **Other projects you may find interesting:**