Mirror of https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once.git, synced 2025-06-03 14:50:11 +08:00
Update README.md
This commit is contained in:
parent: 970f77ae37
commit: 5542785475
@@ -1,10 +1,7 @@
# 👀*SEEM:* Segment Everything Everywhere All at Once
\[[ArXiv](https://arxiv.org/pdf/2212.11270.pdf)\] \[[Demo Route 1](https://ab79f1361bb060f6.gradio.app)\] \[[Demo Route 3](https://28d88f3bc59955d5.gradio.app)\] \[[Demo Route 4](https://ddbd9f45c9f9af07.gradio.app)\]
We introduce **SEEM**, which can **S**egment **E**verything **E**verywhere with **M**ulti-modal prompts all at once. SEEM lets users segment an image with prompts of different types, including visual prompts (points, marks, boxes, scribbles, and image segments) and language prompts (text and audio). It also works with any combination of prompts and can generalize to custom prompts!
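To make the "any combination of prompts" idea concrete, here is a minimal sketch of how several prompt types could be packaged into a single segmentation request. All names below (`build_prompt_request`, the prompt-type keys) are illustrative assumptions, not this repository's actual API:

```python
# Hypothetical sketch: collecting SEEM-style multi-modal prompts into one
# request. Names here are illustrative, not the repository's real interface.

def build_prompt_request(image_path, **prompts):
    """Bundle any combination of supported prompt types into one request."""
    supported = {"points", "boxes", "scribbles", "marks", "text", "audio", "ref_image"}
    unknown = set(prompts) - supported
    if unknown:
        raise ValueError(f"unsupported prompt types: {sorted(unknown)}")
    return {"image": image_path, "prompts": prompts}

# Any mix of visual and language prompts is valid, echoing "all at once".
request = build_prompt_request(
    "dog.jpg",
    points=[(120, 80)],          # a visual click prompt
    text="the dog on the left",  # a language prompt
)
print(sorted(request["prompts"]))  # ['points', 'text']
```

The design point this sketch reflects is that prompt types are interchangeable inputs to one model, rather than separate task-specific entry points.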
The paper is available [here]()!
The demo is available [here]()!
## :bulb: Highlights
We emphasize **four** important features of **SEEM**.
1. **Versatility**: works with various types of prompts, for example, clicks, boxes, polygons, scribbles, text, and referring images;