[Fix] Fix the bug that SETR cannot load pretrained weights (#1293)

* [Fix] Fix the bug that SETR cannot load pretrained weights

* Delete new pretrain
Rockey 2022-02-17 16:25:17 +08:00 committed by GitHub
parent 2056caa790
commit 9522b4fc97
7 changed files with 29 additions and 6 deletions


@@ -36,6 +36,23 @@ This head has two version head.
 }
 ```
 
+## Usage
+
+You can download the pretrained model from [here](https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_p16_384-b3be5167.pth). Then you can convert its keys with the script `vit2mmseg.py` in the tools directory:
+
+```shell
+python tools/model_converters/vit2mmseg.py ${PRETRAIN_PATH} ${STORE_PATH}
+```
+
+E.g.
+
+```shell
+python tools/model_converters/vit2mmseg.py \
+  jx_vit_large_p16_384-b3be5167.pth pretrain/vit_large_p16.pth
+```
+
+This script converts the model from `PRETRAIN_PATH` and stores the converted model in `STORE_PATH`.
+
 ## Results and models
 
 ### ADE20K
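For a sense of what the conversion step does without opening the tool, the sketch below is a minimal, hypothetical illustration of remapping timm-style ViT parameter names to mmseg-style ones. The specific key pairs shown (`proj` → `projection`, `blocks` → `layers`) are assumptions for illustration; the real `tools/model_converters/vit2mmseg.py` covers many more parameter names and edge cases.

```python
# Hypothetical sketch of a timm-to-mmseg ViT key conversion; treat the
# mappings below as illustrative assumptions, not the converter's full logic.
import sys

import torch


def convert_keys(state_dict):
    """Rename a few timm ViT parameter names to assumed mmseg equivalents."""
    new_state = {}
    for key, value in state_dict.items():
        new_key = key
        if key.startswith('patch_embed.proj'):
            new_key = key.replace('proj', 'projection')  # assumed mapping
        elif key.startswith('blocks.'):
            new_key = key.replace('blocks.', 'layers.')  # assumed mapping
        new_state[new_key] = value
    return new_state


if __name__ == '__main__':
    src, dst = sys.argv[1], sys.argv[2]
    checkpoint = torch.load(src, map_location='cpu')
    # timm checkpoints sometimes nest the weights under 'state_dict' or 'model'
    if isinstance(checkpoint, dict) and 'state_dict' in checkpoint:
        checkpoint = checkpoint['state_dict']
    elif isinstance(checkpoint, dict) and 'model' in checkpoint:
        checkpoint = checkpoint['model']
    torch.save(convert_keys(checkpoint), dst)
```

It would be invoked the same way as the real converter, e.g. `python convert_sketch.py jx_vit_large_p16_384-b3be5167.pth pretrain/vit_large_p16.pth`.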


@@ -8,7 +8,8 @@ model = dict(
     backbone=dict(
         img_size=(512, 512),
         drop_rate=0.,
-        init_cfg=dict(type='Pretrained', checkpoint='mmcls://vit_large_p16')),
+        init_cfg=dict(
+            type='Pretrained', checkpoint='pretrain/vit_large_p16.pth')),
     decode_head=dict(num_classes=150),
     auxiliary_head=[
         dict(
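The change itself is small but worth spelling out: `checkpoint='mmcls://vit_large_p16'` asks MMCV's checkpoint loader to resolve the name against the MMClassification model zoo, which is what failed here, while a plain path such as `pretrain/vit_large_p16.pth` is read straight from disk. A minimal sketch of the local load the new `init_cfg` implies, assuming mmcv 1.x and that the converted file from the README step exists (the stand-in module is hypothetical):

```python
# Minimal sketch: with a plain path, Pretrained init reduces to an ordinary
# local checkpoint load instead of an 'mmcls://' model-zoo lookup.
import torch.nn as nn
from mmcv.runner import load_checkpoint

model = nn.Linear(4, 4)  # hypothetical stand-in for the real SETR backbone
# strict=False tolerates the (expected) key mismatch with the stand-in module.
load_checkpoint(model, 'pretrain/vit_large_p16.pth',
                map_location='cpu', strict=False)
```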


@@ -8,7 +8,8 @@ model = dict(
     backbone=dict(
         img_size=(512, 512),
         drop_rate=0.,
-        init_cfg=dict(type='Pretrained', checkpoint='mmcls://vit_large_p16')),
+        init_cfg=dict(
+            type='Pretrained', checkpoint='pretrain/vit_large_p16.pth')),
     decode_head=dict(num_classes=150),
     auxiliary_head=[
         dict(


@@ -8,7 +8,8 @@ model = dict(
     backbone=dict(
         img_size=(512, 512),
         drop_rate=0.,
-        init_cfg=dict(type='Pretrained', checkpoint='mmcls://vit_large_p16')),
+        init_cfg=dict(
+            type='Pretrained', checkpoint='pretrain/vit_large_p16.pth')),
     decode_head=dict(num_classes=150),
     auxiliary_head=[
         dict(


@@ -6,7 +6,8 @@ model = dict(
     pretrained=None,
     backbone=dict(
         drop_rate=0,
-        init_cfg=dict(type='Pretrained', checkpoint='mmcls://vit_large_p16')),
+        init_cfg=dict(
+            type='Pretrained', checkpoint='pretrain/vit_large_p16.pth')),
     test_cfg=dict(mode='slide', crop_size=(768, 768), stride=(512, 512)))
 optimizer = dict(
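Unrelated to the fix but visible in the context lines, `test_cfg=dict(mode='slide', crop_size=(768, 768), stride=(512, 512))` selects sliding-window inference: overlapping 768×768 crops are taken every 512 pixels and the last window in each direction is clamped to the image border. A small sketch of that window arithmetic, written from scratch here as an illustration rather than copied from mmseg:

```python
# Hypothetical sketch of the window grid behind mode='slide': overlapping
# crop_size windows every stride pixels, clamped to the image border.
import math


def slide_windows(img_h, img_w, crop=768, stride=512):
    n_h = max(math.ceil((img_h - crop) / stride) + 1, 1)
    n_w = max(math.ceil((img_w - crop) / stride) + 1, 1)
    boxes = []
    for i in range(n_h):
        for j in range(n_w):
            y1 = min(i * stride, max(img_h - crop, 0))
            x1 = min(j * stride, max(img_w - crop, 0))
            boxes.append((y1, x1, y1 + crop, x1 + crop))
    return boxes


# A 1024x2048 Cityscapes image yields 8 overlapping 768x768 crops (2 x 4).
print(len(slide_windows(1024, 2048)))
```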


@@ -7,7 +7,8 @@ model = dict(
     pretrained=None,
     backbone=dict(
         drop_rate=0.,
-        init_cfg=dict(type='Pretrained', checkpoint='mmcls://vit_large_p16')),
+        init_cfg=dict(
+            type='Pretrained', checkpoint='pretrain/vit_large_p16.pth')),
     test_cfg=dict(mode='slide', crop_size=(768, 768), stride=(512, 512)))
 optimizer = dict(


@@ -9,7 +9,8 @@ model = dict(
     pretrained=None,
     backbone=dict(
         drop_rate=0.,
-        init_cfg=dict(type='Pretrained', checkpoint='mmcls://vit_large_p16')),
+        init_cfg=dict(
+            type='Pretrained', checkpoint='pretrain/vit_large_p16.pth')),
     auxiliary_head=[
         dict(
             type='SETRUPHead',