angiecao
|
608e319eb6
|
[Feature] Support Side Adapter Network (#3232)
## Motivation
Support SAN for Open-Vocabulary Semantic Segmentation
Paper: [Side Adapter Network for Open-Vocabulary Semantic
Segmentation](https://arxiv.org/abs/2302.12242)
Official code: [SAN](https://github.com/MendelXu/SAN)
## Modification
- Added the ViT backbone parameters needed to implement the CLIP image
encoder.
- Added the text encoder code.
- Added the multimodal encoder-decoder segmentor code for open-vocabulary
semantic segmentation (see the config sketch after this list).
- Added the SideAdapterNetwork decode head code.
- Added config files for training and inference.
- Added tools for converting pretrained models.
- Added the loss implementation for mask classification models such as SAN
and MaskFormer, and removed the dependency on MMDetection.
- Added unit tests for the text encoder, multimodal encoder-decoder, SAN
decode head, and Hungarian assigner.
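A minimal sketch of how these pieces could be wired together in a model
config; the class names (`MultimodalEncoderDecoder`, `CLIPTextEncoder`,
`SideAdapterCLIPHead`) and fields below are illustrative assumptions, not the
exact registered schema from this PR:
```python
# Sketch of a SAN model config under the assumptions above; every name here
# is illustrative rather than the authoritative config added by this PR.
model = dict(
    type='MultimodalEncoderDecoder',   # segmentor pairing image and text encoders
    pretrained='san_vit_b_16_mmseg.pth',           # converted checkpoint (placeholder)
    image_encoder=dict(type='VisionTransformer'),  # CLIP ViT image encoder
    text_encoder=dict(type='CLIPTextEncoder'),     # CLIP text encoder for class names
    decode_head=dict(type='SideAdapterCLIPHead'),  # SAN decode head
)
```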
## Use cases
### Convert Models
**Pretrained SAN model**
The official pretrained models can be downloaded from
[san_clip_vit_b_16.pth](https://huggingface.co/Mendel192/san/blob/main/san_vit_b_16.pth)
and
[san_clip_vit_large_14.pth](https://huggingface.co/Mendel192/san/blob/main/san_vit_large_14.pth).
Use `tools/model_converters/san2mmseg.py` to convert the official model into
MMSeg style:
`python tools/model_converters/san2mmseg.py <MODEL_PATH> <OUTPUT_PATH>`
**Pretrained CLIP model**
Use the CLIP models provided by OpenAI to train SAN. They can be downloaded
from
[ViT-B-16.pt](https://openaipublic.azureedge.net/clip/models/5806e77cd80f8b59890b7e101eabd078d9fb84e6937f9e85e4ecb61988df416f/ViT-B-16.pt)
and
[ViT-L-14-336px.pt](https://openaipublic.azureedge.net/clip/models/3035c92b350959924f9f00213499208652fc7ea050643e8b385c2dac08641f02/ViT-L-14-336px.pt).
Use `tools/model_converters/clip2mmseg.py` to convert a model into MMSeg
style:
`python tools/model_converters/clip2mmseg.py <MODEL_PATH> <OUTPUT_PATH>`
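Both converter scripts follow the usual checkpoint-conversion pattern: load
the official weights, remap parameter names, and save in MMSeg layout. A
minimal sketch of that pattern; the key mapping shown is hypothetical, and
the real scripts define the actual correspondence:
```python
import torch

def convert_checkpoint(src_path: str, dst_path: str) -> None:
    """Remap checkpoint keys from the official layout to MMSeg style."""
    ckpt = torch.load(src_path, map_location='cpu')
    state_dict = ckpt.get('state_dict', ckpt)  # some checkpoints nest weights
    converted = {}
    for key, value in state_dict.items():
        # Hypothetical prefix remapping; the real scripts encode the actual
        # layer-name correspondence between the two codebases.
        converted[key.replace('visual.', 'image_encoder.')] = value
    torch.save(dict(state_dict=converted, meta=dict()), dst_path)

convert_checkpoint('san_vit_b_16.pth', 'san_vit_b_16_mmseg.pth')
```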
### Inference
Test the san_vit-base-16 model on the COCO-Stuff164k dataset:
`python tools/test.py
./configs/san/san-vit-b16_coco-stuff164k-640x640.py
<TRAINED_MODEL_PATH>`
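For single-image inference, the model can also be driven through the
high-level MMSeg 1.x Python API; a minimal sketch, with placeholder
checkpoint and image paths:
```python
from mmseg.apis import inference_model, init_model, show_result_pyplot

# Placeholder paths for the config, trained checkpoint, and a test image.
config = './configs/san/san-vit-b16_coco-stuff164k-640x640.py'
checkpoint = 'san-vit-b16_coco-stuff164k.pth'  # hypothetical file name

model = init_model(config, checkpoint, device='cuda:0')
result = inference_model(model, 'demo.png')
show_result_pyplot(model, 'demo.png', result, show=False, out_file='pred.png')
```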
### Train
Train the san_vit-base-16 model on the COCO-Stuff164k dataset:
`python tools/train.py
./configs/san/san-vit-b16_coco-stuff164k-640x640.py --cfg-options
model.pretrained=<PRETRAINED_MODEL_PATH>`
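The `--cfg-options` override can equivalently be baked into a config file via
MMEngine; a minimal sketch, with a placeholder pretrained checkpoint path:
```python
from mmengine.config import Config

cfg = Config.fromfile('./configs/san/san-vit-b16_coco-stuff164k-640x640.py')
# Equivalent to passing `model.pretrained=...` via --cfg-options.
cfg.model.pretrained = 'vit_b_16_mmseg.pth'  # placeholder checkpoint path
cfg.dump('san-vit-b16_custom.py')            # feed this file to tools/train.py
```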
## Comparison Results
### Train on COCO-Stuff164k
| Model | Implementation | mIoU | mAcc | pAcc |
| --------------- | -------------- | ----- | ----- | ----- |
| san-vit-base16 | official | 41.93 | 56.73 | 67.69 |
| | mmseg | 41.93 | 56.84 | 67.84 |
| san-vit-large14 | official | 45.57 | 59.52 | 69.76 |
| | mmseg | 45.78 | 59.61 | 69.21 |
### Evaluate on Pascal Context
| Model | Implementation | mIoU | mAcc | pAcc |
| --------------- | -------------- | ----- | ----- | ----- |
| san-vit-base16 | official | 54.05 | 72.96 | 77.77 |
| | mmseg | 54.04 | 73.74 | 77.71 |
| san-vit-large14 | official | 57.53 | 77.56 | 78.89 |
| | mmseg | 56.89 | 76.96 | 78.74 |
### Evaluate on Voc12Aug
| Model | Implementation | mIoU | mAcc | pAcc |
| --------------- | -------------- | ----- | ----- | ----- |
| san-vit-base16 | official | 93.86 | 96.61 | 97.11 |
| | mmseg | 94.58 | 97.01 | 97.38 |
| san-vit-large14 | official | 95.17 | 97.61 | 97.63 |
| | mmseg | 95.58 | 97.75 | 97.79 |
---------
Co-authored-by: CastleDream <35064479+CastleDream@users.noreply.github.com>
Co-authored-by: yeedrag <46050186+yeedrag@users.noreply.github.com>
Co-authored-by: Yang-ChangHui <71805205+Yang-Changhui@users.noreply.github.com>
Co-authored-by: Xu CAO <49406546+SheffieldCao@users.noreply.github.com>
Co-authored-by: xiexinch <xiexinch@outlook.com>
Co-authored-by: 小飞猪 <106524776+ooooo-create@users.noreply.github.com>
|
2023-09-20 21:20:26 +08:00 |
|
CastleDream
|
057155d3ab
|
[Feature] add bdd100K datasets (#3158)
## Motivation
Integrate the [BDD100K](https://paperswithcode.com/dataset/bdd100k) dataset.
It shares the same classes as Cityscapes and is commonly used for
evaluating segmentation/detection tasks in driving scenes, for example in
[RobustNet](https://arxiv.org/abs/2103.15597) and
[WildNet](https://github.com/suhyeonlee/WildNet).
This is an enhancement of "Add BDD100K Dataset" (#2808).
---------
Co-authored-by: xiexinch <xiexinch@outlook.com>
|
2023-07-14 10:09:16 +08:00 |
|
Tianlong Ai
|
8c89ff3dd1
|
[Datasets] Add Mapillary Vistas Datasets to MMSeg Core Package. (#2576)
## Motivation
Add the Mapillary Vistas dataset to the core package.
Old PR: #2484
## Modification
- Add the Mapillary Vistas dataset to the core package (a dataset-config
sketch follows this list).
- Delete `tools/datasets_convert/mapillary.py`; the dataset doesn't need
converting.
- Add the `schedule_240k.py` config.
- Add config files:
```none
deeplabv3plus_r101-d8_4xb2-240k_mapillay_v1-512x1024.py
deeplabv3plus_r101-d8_4xb2-240k_mapillay_v2-512x1024.py
maskformer_swin-s_4xb2-240k_mapillary_v1-512x1024.py
maskformer_swin-s_4xb2-240k_mapillary_v2-512x1024.py
maskformer_r101-d8_4xb2-240k_mapillary_v1-512x1024.py
maskformer_r101-d8_4xb2-240k_mapillary_v2-512x1024.py
pspnet_r101-d8_4xb2-240k_mapillay_v1-512x1024.py
pspnet_r101-d8_4xb2-240k_mapillay_v2-512x1024.py
```
- Synchronize changes to `projects/mapillary_datasets`.
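A minimal sketch of pointing a train dataloader at the new dataset; the class
name `MapillaryDataset_v1` and the directory layout below are assumptions
about how the dataset is registered, not the authoritative config:
```python
# Dataset-config sketch; `MapillaryDataset_v1` and the paths below are
# assumptions, not necessarily the exact registered name or layout.
train_dataloader = dict(
    batch_size=2,
    num_workers=4,
    dataset=dict(
        type='MapillaryDataset_v1',
        data_root='data/mapillary',
        data_prefix=dict(
            img_path='training/images',
            seg_map_path='training/v1.2/labels')))
```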
---------
Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>
Co-authored-by: xiexinch <xiexinch@outlook.com>
|
2023-03-15 14:44:38 +08:00 |
|
王永韬
|
2d67e51db3
|
CodeCamp #140 [New] [Feature] Add synapse dataset and data augmentation in dev-1.x. (#2432)
## Motivation
Add the Synapse dataset to MMSegmentation.
Old PR: https://github.com/open-mmlab/mmsegmentation/pull/2372.
|
2023-01-06 16:14:54 +08:00 |
|
Miao Zheng
|
b21df463d4
|
[Feature] LIP dataset (#2187)
* [WIP] LIP dataset
* wip
* keep473
* lip dataset prepare
* add ut and test data
|
2022-10-31 20:47:52 +08:00 |
|
Miao Zheng
|
50546da85c
|
[Fix] Remove modules from mmcv.runner and mmcv.utils (#1966)
* [WIP] mmcv-clean
* [WIP]Remove modules from mmcv.runner and mmcv.utils
* wip
* fix import mmengine
* remove ut
* loadcheckpoint in mae
|
2022-08-25 15:15:21 +08:00 |
|
zhengmiao
|
4b76f277a6
|
[Refactor] MMSegmentation Content
|
2022-07-15 15:47:29 +00:00 |
|