# [Feature] Support Side Adapter Network (#3232)
## Motivation
Support SAN for Open-Vocabulary Semantic Segmentation.
Paper: [Side Adapter Network for Open-Vocabulary Semantic
Segmentation](https://arxiv.org/abs/2302.12242)
Official code: [SAN](https://github.com/MendelXu/SAN)

## Modification
- Added the backbone ViT parameters needed to implement the CLIP image
encoder.
- Added the text encoder code.
- Added the multimodal encoder-decoder segmentor code for
open-vocabulary semantic segmentation.
- Added the SideAdapterNetwork decode head code.
- Added config files for training and inference (see the sketch after
this list).
- Added tools for converting pretrained models.
- Added the loss implementation for mask classification models such as
SAN and MaskFormer, and removed the dependency on mmdetection.
- Added unit tests for the text encoder, multimodal encoder-decoder,
SAN decode head and hungarian_assigner.
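
To show how these pieces fit together, here is a minimal config sketch. The class names follow this PR, but the composition is simplified and all omitted fields (losses, data preprocessor, etc.) are assumptions; the shipped files under configs/san/ are authoritative.

```python
# A minimal, illustrative sketch of how the new components compose in a
# config; omitted fields are assumptions and are required in practice.
model = dict(
    type='MultimodalEncoderDecoder',               # new open-vocabulary segmentor
    image_encoder=dict(type='VisionTransformer'),  # CLIP image encoder (ViT backbone)
    text_encoder=dict(type='CLIPTextEncoder'),     # new text encoder
    decode_head=dict(type='SideAdapterCLIPHead'),  # SAN decode head
)
```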

## Use cases
### Convert Models
**Pretrained SAN model**
The official pretrained models can be downloaded from
[san_clip_vit_b_16.pth](https://huggingface.co/Mendel192/san/blob/main/san_vit_b_16.pth)
and
[san_clip_vit_large_14.pth](https://huggingface.co/Mendel192/san/blob/main/san_vit_large_14.pth).
Use tools/model_converters/san2mmseg.py to convert an official model into
mmseg style:
`python tools/model_converters/san2mmseg.py <MODEL_PATH> <OUTPUT_PATH>`

**Pretrained CLIP model**
Use the CLIP models provided by OpenAI to train SAN. They can be
downloaded from
[ViT-B-16.pt](https://openaipublic.azureedge.net/clip/models/5806e77cd80f8b59890b7e101eabd078d9fb84e6937f9e85e4ecb61988df416f/ViT-B-16.pt)
and
[ViT-L-14-336px.pt](https://openaipublic.azureedge.net/clip/models/3035c92b350959924f9f00213499208652fc7ea050643e8b385c2dac08641f02/ViT-L-14-336px.pt).
Use tools/model_converters/clip2mmseg.py to convert a model into mmseg
style:
`python tools/model_converters/clip2mmseg.py <MODEL_PATH> <OUTPUT_PATH>`
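
After conversion, a quick sanity check is to load the checkpoint and inspect its key prefixes; the converter (shown in full further below) writes keys under `image_encoder`, `text_encoder` and `decode_head`. The file path here is illustrative:

```python
import torch

# Load a converted checkpoint (illustrative path) and list its top-level
# key prefixes.
ckpt = torch.load('pretrained/clip_vit_b16_mmseg.pth', map_location='cpu')
print({k.split('.')[0] for k in ckpt})
# expected: {'image_encoder', 'text_encoder', 'decode_head'}
```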

### Inference
Test the san_vit-base-16 model on the coco-stuff164k dataset:
`python tools/test.py
./configs/san/san-vit-b16_coco-stuff164k-640x640.py
<TRAINED_MODEL_PATH>`
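
For single-image inference, a minimal Python sketch using the MMSegmentation 1.x API (the checkpoint and image paths are illustrative):

```python
from mmseg.apis import inference_model, init_model

config = './configs/san/san-vit-b16_coco-stuff164k-640x640.py'
checkpoint = 'work_dirs/san-vit-b16/iter_60000.pth'  # illustrative path
# Build the model and load weights, then run inference on one image.
model = init_model(config, checkpoint, device='cuda:0')
result = inference_model(model, 'demo/demo.png')  # returns a SegDataSample
```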

### Train
Train the san_vit-base-16 model on the coco-stuff164k dataset:
`python tools/train.py
./configs/san/san-vit-b16_coco-stuff164k-640x640.py --cfg-options
model.pretrained=<PRETRAINED_MODEL_PATH>`
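
The same `model.pretrained` override can be set from Python via mmengine's `Config` and `Runner`; a minimal sketch (paths are illustrative):

```python
from mmengine.config import Config
from mmengine.runner import Runner

cfg = Config.fromfile('./configs/san/san-vit-b16_coco-stuff164k-640x640.py')
cfg.model.pretrained = 'pretrained/clip_vit_b16_mmseg.pth'  # illustrative
cfg.work_dir = 'work_dirs/san-vit-b16'  # Runner requires a work_dir
Runner.from_cfg(cfg).train()
```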

## Comparison Results
### Train on COCO-Stuff164k
| Model           | Source   | mIoU  | mAcc  | pAcc  |
| --------------- | -------- | ----- | ----- | ----- |
| san-vit-base16  | official | 41.93 | 56.73 | 67.69 |
|                 | mmseg    | 41.93 | 56.84 | 67.84 |
| san-vit-large14 | official | 45.57 | 59.52 | 69.76 |
|                 | mmseg    | 45.78 | 59.61 | 69.21 |

### Evaluate on Pascal Context
| Model           | Source   | mIoU  | mAcc  | pAcc  |
| --------------- | -------- | ----- | ----- | ----- |
| san-vit-base16  | official | 54.05 | 72.96 | 77.77 |
|                 | mmseg    | 54.04 | 73.74 | 77.71 |
| san-vit-large14 | official | 57.53 | 77.56 | 78.89 |
|                 | mmseg    | 56.89 | 76.96 | 78.74 |

### Evaluate on Voc12Aug
| Model           | Source   | mIoU  | mAcc  | pAcc  |
| --------------- | -------- | ----- | ----- | ----- |
| san-vit-base16  | official | 93.86 | 96.61 | 97.11 |
|                 | mmseg    | 94.58 | 97.01 | 97.38 |
| san-vit-large14 | official | 95.17 | 97.61 | 97.63 |
|                 | mmseg    | 95.58 | 97.75 | 97.79 |

---------

Co-authored-by: CastleDream <35064479+CastleDream@users.noreply.github.com>
Co-authored-by: yeedrag <46050186+yeedrag@users.noreply.github.com>
Co-authored-by: Yang-ChangHui <71805205+Yang-Changhui@users.noreply.github.com>
Co-authored-by: Xu CAO <49406546+SheffieldCao@users.noreply.github.com>
Co-authored-by: xiexinch <xiexinch@outlook.com>
Co-authored-by: 小飞猪 <106524776+ooooo-create@users.noreply.github.com>

**tools/model_converters/clip2mmseg.py**

# Copyright (c) OpenMMLab. All rights reserved.
import argparse
import os.path as osp
from collections import OrderedDict

import mmengine
import torch
from mmengine.runner import CheckpointLoader


def convert_vitlayer(paras):
    """Rename a parameter path inside one CLIP visual transformer block to
    the MMSegmentation ViT layer naming."""
    new_para_name = ''
    if paras[0] == 'ln_1':
        new_para_name = '.'.join(['ln1'] + paras[1:])
    elif paras[0] == 'attn':
        new_para_name = '.'.join(['attn.attn'] + paras[1:])
    elif paras[0] == 'ln_2':
        new_para_name = '.'.join(['ln2'] + paras[1:])
    elif paras[0] == 'mlp':
        if paras[1] == 'c_fc':
            new_para_name = '.'.join(['ffn.layers.0.0'] + paras[-1:])
        else:
            new_para_name = '.'.join(['ffn.layers.1'] + paras[-1:])
    else:
        print(f'Wrong for {paras}')
    return new_para_name
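
# Example mapping (illustrative): ['attn', 'in_proj_weight'] ->
# 'attn.attn.in_proj_weight', and ['mlp', 'c_fc', 'bias'] ->
# 'ffn.layers.0.0.bias'.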


def convert_translayer(paras):
    """Rename a parameter path inside one CLIP transformer block to the
    MMSegmentation naming used by the text encoder and the decode head."""
    new_para_name = ''
    if paras[0] == 'attn':
        new_para_name = '.'.join(['attentions.0.attn'] + paras[1:])
    elif paras[0] == 'ln_1':
        new_para_name = '.'.join(['norms.0'] + paras[1:])
    elif paras[0] == 'ln_2':
        new_para_name = '.'.join(['norms.1'] + paras[1:])
    elif paras[0] == 'mlp':
        if paras[1] == 'c_fc':
            new_para_name = '.'.join(['ffns.0.layers.0.0'] + paras[2:])
        elif paras[1] == 'c_proj':
            new_para_name = '.'.join(['ffns.0.layers.1'] + paras[2:])
        else:
            print(f'Wrong for {paras}')
    else:
        print(f'Wrong for {paras}')
    return new_para_name
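
# Example mapping (illustrative): ['ln_1', 'weight'] -> 'norms.0.weight',
# and ['mlp', 'c_proj', 'bias'] -> 'ffns.0.layers.1.bias'.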


def convert_key_name(ckpt, visual_split):
    new_ckpt = OrderedDict()
    for k, v in ckpt.items():
        key_list = k.split('.')
        if key_list[0] == 'visual':
            new_transform_name = 'image_encoder'
            if key_list[1] == 'class_embedding':
                new_name = '.'.join([new_transform_name, 'cls_token'])
            elif key_list[1] == 'positional_embedding':
                new_name = '.'.join([new_transform_name, 'pos_embed'])
            elif key_list[1] == 'conv1':
                new_name = '.'.join([
                    new_transform_name, 'patch_embed.projection', key_list[2]
                ])
            elif key_list[1] == 'ln_pre':
                new_name = '.'.join(
                    [new_transform_name, key_list[1], key_list[2]])
            elif key_list[1] == 'transformer':
                new_layer_name = 'layers'
                layer_index = key_list[3]
                paras = key_list[4:]
                if int(layer_index) < visual_split:
                    new_para_name = convert_vitlayer(paras)
                    new_name = '.'.join([
                        new_transform_name, new_layer_name, layer_index,
                        new_para_name
                    ])
                else:
                    new_para_name = convert_translayer(paras)
                    new_transform_name = 'decode_head.rec_with_attnbias'
                    new_layer_name = 'layers'
                    layer_index = str(int(layer_index) - visual_split)
                    new_name = '.'.join([
                        new_transform_name, new_layer_name, layer_index,
                        new_para_name
                    ])
            elif key_list[1] == 'proj':
                new_name = 'decode_head.rec_with_attnbias.proj.weight'
            elif key_list[1] == 'ln_post':
                new_name = k.replace('visual', 'decode_head.rec_with_attnbias')
            else:
                print(f'pop parameter: {k}')
                continue
        else:
            text_encoder_name = 'text_encoder'
            if key_list[0] == 'transformer':
                layer_name = 'transformer'
                layer_index = key_list[2]
                paras = key_list[3:]
                new_para_name = convert_translayer(paras)
                new_name = '.'.join([
                    text_encoder_name, layer_name, layer_index, new_para_name
                ])
            elif key_list[0] in [
                    'positional_embedding', 'text_projection', 'bg_embed',
                    'attn_mask', 'logit_scale', 'token_embedding', 'ln_final'
            ]:
                new_name = 'text_encoder.' + k
            else:
                print(f'pop parameter: {k}')
                continue
        new_ckpt[new_name] = v

    return new_ckpt
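
# Note: the CLIP visual tower is split at `visual_split`: blocks below the
# split become image_encoder.layers.*, while the remaining blocks are
# re-indexed from 0 as decode_head.rec_with_attnbias.layers.*.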


def convert_tensor(ckpt):
    """Adjust tensor shapes to MMSegmentation conventions: cls_token and
    pos_embed gain leading batch/token dims, and the visual projection is
    transposed so it can be loaded as a Linear weight."""
    cls_token = ckpt['image_encoder.cls_token']
    new_cls_token = cls_token.unsqueeze(0).unsqueeze(0)
    ckpt['image_encoder.cls_token'] = new_cls_token
    pos_embed = ckpt['image_encoder.pos_embed']
    new_pos_embed = pos_embed.unsqueeze(0)
    ckpt['image_encoder.pos_embed'] = new_pos_embed
    proj_weight = ckpt['decode_head.rec_with_attnbias.proj.weight']
    new_proj_weight = proj_weight.transpose(1, 0)
    ckpt['decode_head.rec_with_attnbias.proj.weight'] = new_proj_weight
    return ckpt


def main():
    parser = argparse.ArgumentParser(
        description='Convert keys in pretrained CLIP ViT models to '
        'MMSegmentation style.')
    parser.add_argument('src', help='src model path or url')
    # The dst path must be a full path of the new checkpoint.
    parser.add_argument('dst', help='save path')
    args = parser.parse_args()

    # The CLIP visual transformer is split: ViT-B/16 keeps the first 9
    # blocks in the image encoder, ViT-L/14 the first 18.
    if any([s in args.src for s in ['B-16', 'b16', 'base_patch16']]):
        visual_split = 9
    elif any([s in args.src for s in ['L-14', 'l14', 'large_patch14']]):
        visual_split = 18
    else:
        print('Make sure the clip model is ViT-B/16 or ViT-L/14!')
        visual_split = -1
    checkpoint = CheckpointLoader.load_checkpoint(
        args.src, map_location='cpu')
    if isinstance(checkpoint, torch.jit.RecursiveScriptModule):
        state_dict = checkpoint.state_dict()
    else:
        if 'state_dict' in checkpoint:
            # timm checkpoint
            state_dict = checkpoint['state_dict']
        elif 'model' in checkpoint:
            # deit checkpoint
            state_dict = checkpoint['model']
        else:
            state_dict = checkpoint
    weight = convert_key_name(state_dict, visual_split)
    weight = convert_tensor(weight)
    mmengine.mkdir_or_exist(osp.dirname(args.dst))
    torch.save(weight, args.dst)


if __name__ == '__main__':
    main()