diff --git a/README.md b/README.md
index af2a2c0..85123a5 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,8 @@
 # Multi-modal Queried Object Detection in the Wild (NeurIPS 2023)
-
+
+[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/multi-modal-queried-object-detection-in-the/zero-shot-object-detection-on-lvis-v1-0)](https://paperswithcode.com/sota/zero-shot-object-detection-on-lvis-v1-0?p=multi-modal-queried-object-detection-in-the)
+[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/multi-modal-queried-object-detection-in-the/zero-shot-object-detection-on-lvis-v1-0-val)](https://paperswithcode.com/sota/zero-shot-object-detection-on-lvis-v1-0-val?p=multi-modal-queried-object-detection-in-the)
+[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/multi-modal-queried-object-detection-in-the/zero-shot-object-detection-on-odinw)](https://paperswithcode.com/sota/zero-shot-object-detection-on-odinw?p=multi-modal-queried-object-detection-in-the)
 Official PyTorch implementation of "[Multi-modal Queried Object Detection in the Wild](https://arxiv.org/abs/2305.18980)": the first multi-modal queried open-set object detector.
@@ -264,4 +267,4 @@ python tools/eval_odinw.py --config_file configs/pretrain/mq-glip-t.yaml \
     --setting finetuning-free \
     --add_name tiny \
     --log_path 'OUTPUT/odinw_vision_log/'
-```
\ No newline at end of file
+```