mmsegmentation/mmseg/models/utils/timm_convert.py
Last commit 8f8abe373f by sennnnn, 2021-06-17: [Refactor] Using mmcv transformer bricks to refactor vit. (#571)
* [Refactor] Using mmcv bricks to refactor vit

* Follow the vit code structure from mmclassification

* Add MMCV install into CI system.

* Add  to 'Install MMCV' CI item

* Add 'Install MMCV_CPU' and 'Install MMCV_GPU' CI items

* Fix & Add

1. Fix low code coverage of vit.py;

2. Remove HybridEmbed;

3. Fix the docstring of VisionTransformer;

* Add helpers unit test.

* Add a converter for ViT pretrained weights from timm style to mmcls style.

* Clean some redundant code and refactor init

1. Use timm style init_weights;

2. Remove to_xtuple and trunc_norm_;

* Add comments for VisionTransformer.init_weights()

* Add a pretrain_style arg to choose between timm and mmcls ViT pretrained weights (see the config sketch below).
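The last bullet adds a pretrain_style switch to the VisionTransformer backbone. A minimal sketch of how that option could appear in a config follows; only type='VisionTransformer' and pretrain_style are confirmed by this commit, and all other backbone arguments are deliberately omitted.

# A minimal sketch, not copied from the repo's configs: only 'type' and
# 'pretrain_style' are confirmed by the commit message above; the remaining
# VisionTransformer arguments (image size, embed dims, ...) are omitted here.
backbone = dict(
    type='VisionTransformer',
    pretrain_style='timm')  # or 'mmcls' for mmclassification-style weights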


from collections import OrderedDict


def vit_convert(timm_dict):
    """Convert a timm-style ViT state dict to the mmcls/mmseg key naming."""
    mmseg_dict = OrderedDict()
    for k, v in timm_dict.items():
        if k.startswith('head'):
            # The classification head is not needed for segmentation.
            continue
        if k.startswith('norm'):
            # Final LayerNorm: norm.* -> ln1.*
            new_k = k.replace('norm.', 'ln1.')
        elif k.startswith('patch_embed'):
            # Patch embedding: proj -> projection
            if 'proj' in k:
                new_k = k.replace('proj', 'projection')
            else:
                # Keep the original name so a stale new_k is never reused.
                new_k = k
        elif k.startswith('blocks'):
            # Transformer blocks: blocks.i -> layers.i, then rename submodules
            # to match the mmcv bricks (LayerNorm, FFN, MultiheadAttention).
            new_k = k.replace('blocks.', 'layers.')
            if 'norm' in new_k:
                new_k = new_k.replace('norm', 'ln')
            elif 'mlp.fc1' in new_k:
                new_k = new_k.replace('mlp.fc1', 'ffn.layers.0.0')
            elif 'mlp.fc2' in new_k:
                new_k = new_k.replace('mlp.fc2', 'ffn.layers.1')
            elif 'attn.qkv' in new_k:
                new_k = new_k.replace('attn.qkv.', 'attn.attn.in_proj_')
            elif 'attn.proj' in new_k:
                new_k = new_k.replace('attn.proj', 'attn.attn.out_proj')
        else:
            # cls_token, pos_embed, etc. keep their original names.
            new_k = k
        # Prefix every key so the dict loads into a full segmentor checkpoint.
        new_k = f'backbone.{new_k}'
        mmseg_dict[new_k] = v
    return mmseg_dict
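
A short usage sketch follows. The file names are hypothetical, and it assumes the downloaded timm checkpoint is a plain state dict; some checkpoints nest the weights under a 'model' or 'state_dict' key, which the sketch unwraps defensively. Because vit_convert prefixes every key with 'backbone.', the result is intended to be loaded into a full segmentor checkpoint rather than into the backbone module alone.

import torch

from mmseg.models.utils.timm_convert import vit_convert

# Hypothetical file names; point these at the checkpoint you actually use.
ckpt = torch.load('vit_base_patch16_224_timm.pth', map_location='cpu')
# Unwrap nested checkpoints, e.g. {'model': ...} or {'state_dict': ...}.
state_dict = ckpt.get('model', ckpt.get('state_dict', ckpt))

converted = vit_convert(state_dict)
torch.save(converted, 'vit_base_patch16_224_mmseg.pth')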