[Docs] Update metafile and Readme (#435)
* Update metafile format.
* Update accuracy of checkpoints.
* Add metafile and readme for tnt.
* Add converted ckpts in swin-transformer.
* Fix tnt ckpt link.
* Update swin_transformer metafile.
parent cb09ed54e5
commit 8f68779cc6
@@ -17,4 +17,4 @@
 | Model | Params(M) | Flops(G) | Mem (GB) | Top-1 (%) | Top-5 (%) | Config | Download |
 |:---------------------:|:---------:|:--------:|:---------:|:---------:|:---------:|:---------:|:--------:|
-| ResNet-50 | 25.56 | 4.12 | 1.9 | 76.32 | 93.04 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/fp16/resnet50_b32x8_fp16_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/fp16/resnet50_batch256_fp16_imagenet_20210320-b3964210.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/fp16/resnet50_batch256_fp16_imagenet_20210320-b3964210.log.json) |
+| ResNet-50 | 25.56 | 4.12 | 1.9 | 76.30 | 93.07 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/fp16/resnet50_b32x8_fp16_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/fp16/resnet50_batch256_fp16_imagenet_20210320-b3964210.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/fp16/resnet50_batch256_fp16_imagenet_20210320-b3964210.log.json) |
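Rows like the ones above are plain Markdown table cells, so accuracy values can be checked programmatically against the metafiles. A minimal sketch, assuming the column order of the header above; `parse_zoo_row` is a hypothetical helper, not part of mmclassification (note the Download cell uses an escaped `\|` between the model and log links):

```python
def parse_zoo_row(row: str, columns: list) -> dict:
    """Split one Markdown table row into a column-name -> cell mapping.

    Escaped pipes inside a cell are protected before splitting.
    """
    protected = row.strip().strip("|").replace("\\|", "\x00")
    cells = [c.replace("\x00", "|").strip() for c in protected.split("|")]
    return dict(zip(columns, cells))

# Columns taken from the table header above (Config/Download omitted for brevity).
columns = ["Model", "Params(M)", "Flops(G)", "Mem (GB)", "Top-1 (%)", "Top-5 (%)"]
row = "| ResNet-50 | 25.56 | 4.12 | 1.9 | 76.30 | 93.07 |"
info = parse_zoo_row(row, columns)
```

A consistency checker built on this could flag rows whose Top-1/Top-5 cells drift from the metafile values, which is exactly the class of mismatch this commit fixes.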
@@ -7,8 +7,13 @@ Collections:
       - Weight Decay
       - Mixed Precision Training
     Training Resources: 8x V100 GPUs
-  Paper: https://arxiv.org/abs/1710.03740
+  Paper:
+    URL: https://arxiv.org/abs/1710.03740
+    Title: Mixed Precision Training
+  README: configs/fp16/README.md
+  Code:
+    URL: https://github.com/open-mmlab/mmclassification/blob/a41cb2fa938d957101cc446e271486206188bf5b/mmcls/core/fp16/hooks.py#L13
+    Version: v0.15.0
 
 Models:
   - Name: resnet50_b32x8_fp16_dynamic_imagenet
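The hunk above shows the new metafile format: `Paper` and `Code` become nested mappings with `URL`/`Title` and `URL`/`Version`, plus a `README` path. A minimal sketch of a schema check over one parsed collection entry (plain dicts stand in for the YAML; `validate_collection` is a hypothetical checker, not an mmclassification API):

```python
# Required nested keys in the updated metafile format.
REQUIRED = {
    "Paper": ("URL", "Title"),
    "Code": ("URL", "Version"),
}

def validate_collection(coll: dict) -> list:
    """Return the list of missing keys; empty means the entry is well-formed."""
    missing = []
    if "README" not in coll:
        missing.append("README")
    for section, keys in REQUIRED.items():
        sub = coll.get(section)
        if not isinstance(sub, dict):
            missing.append(section)  # old flat format (or absent) fails here
            continue
        missing.extend(f"{section}.{k}" for k in keys if k not in sub)
    return missing

# The FP16 collection from the hunk above, as it looks after this commit.
fp16 = {
    "Paper": {"URL": "https://arxiv.org/abs/1710.03740",
              "Title": "Mixed Precision Training"},
    "README": "configs/fp16/README.md",
    "Code": {"URL": "https://github.com/open-mmlab/mmclassification/blob/a41cb2fa938d957101cc446e271486206188bf5b/mmcls/core/fp16/hooks.py#L13",
             "Version": "v0.15.0"},
}
```

The old flat form (`Paper: <url>`) fails this check, since the value is a string rather than a mapping.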
@@ -24,7 +29,7 @@ Models:
       - Task: Image Classification
         Dataset: ImageNet-1k
         Metrics:
-          Top 1 Accuracy: 76.32
-          Top 5 Accuracy: 93.04
+          Top 1 Accuracy: 76.30
+          Top 5 Accuracy: 93.07
     Weights: https://download.openmmlab.com/mmclassification/v0/fp16/resnet50_batch256_fp16_imagenet_20210320-b3964210.pth
     Config: configs/fp16/resnet50_b32x8_fp16_dynamic_imagenet.py
@@ -10,8 +10,13 @@ Collections:
     Batch Size: 256
     Architecture:
       - MobileNet V2
-  Paper: https://arxiv.org/abs/1801.04381
+  Paper:
+    URL: https://arxiv.org/abs/1801.04381
+    Title: "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
+  README: configs/mobilenet_v2/README.md
+  Code:
+    URL: https://github.com/open-mmlab/mmclassification/blob/v0.15.0/mmcls/models/backbones/mobilenet_v2.py#L101
+    Version: v0.15.0
 
 Models:
   - Name: mobilenet_v2_b32x8_imagenet
@@ -30,17 +30,17 @@
 | Model | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download |
 |:---------------------:|:---------:|:--------:|:---------:|:---------:|:---------:|:--------:|
-| ResNet-50-b16x8 | 23.71 | 1.31 | 79.9 | 95.19 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnet50_b16x8_cifar100.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_b16x8_cifar100_20210528-67b58a1b.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_b16x8_cifar100_20210528-67b58a1b.log.json) |
+| ResNet-50-b16x8 | 23.71 | 1.31 | 79.90 | 95.19 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnet50_b16x8_cifar100.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_b16x8_cifar100_20210528-67b58a1b.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_b16x8_cifar100_20210528-67b58a1b.log.json) |
 
 ### ImageNet
 
 | Model | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download |
 |:---------------------:|:---------:|:--------:|:---------:|:---------:|:---------:|:--------:|
-| ResNet-18 | 11.69 | 1.82 | 70.07 | 89.44 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnet18_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet18_batch256_imagenet_20200708-34ab8f90.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet18_batch256_imagenet_20200708-34ab8f90.log.json) |
-| ResNet-34 | 21.8 | 3.68 | 73.85 | 91.53 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnet34_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet34_batch256_imagenet_20200708-32ffb4f7.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet34_batch256_imagenet_20200708-32ffb4f7.log.json) |
-| ResNet-50 | 25.56 | 4.12 | 76.55 | 93.15 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnet50_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_batch256_imagenet_20200708-cfb998bf.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_batch256_imagenet_20200708-cfb998bf.log.json) |
-| ResNet-101 | 44.55 | 7.85 | 78.18 | 94.03 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnet101_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet101_batch256_imagenet_20200708-753f3608.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet101_batch256_imagenet_20200708-753f3608.log.json) |
-| ResNet-152 | 60.19 | 11.58 | 78.63 | 94.16 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnet152_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet152_batch256_imagenet_20200708-ec25b1f9.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet152_batch256_imagenet_20200708-ec25b1f9.log.json) |
+| ResNet-18 | 11.69 | 1.82 | 69.90 | 89.43 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnet18_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet18_8xb32_in1k_20210831-fbbb1da6.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet18_8xb32_in1k_20210831-fbbb1da6.log.json) |
+| ResNet-34 | 21.8 | 3.68 | 73.62 | 91.59 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnet34_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet34_8xb32_in1k_20210831-f257d4e6.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet34_8xb32_in1k_20210831-f257d4e6.log.json) |
+| ResNet-50 | 25.56 | 4.12 | 76.55 | 93.06 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnet50_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_8xb32_in1k_20210831-ea4938fc.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_8xb32_in1k_20210831-ea4938fc.log.json) |
+| ResNet-101 | 44.55 | 7.85 | 77.97 | 94.06 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnet101_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet101_8xb32_in1k_20210831-539c63f8.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet101_8xb32_in1k_20210831-539c63f8.log.json) |
+| ResNet-152 | 60.19 | 11.58 | 78.48 | 94.13 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnet152_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet152_8xb32_in1k_20210901-4d7582fa.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet152_8xb32_in1k_20210901-4d7582fa.log.json) |
 | ResNetV1D-50 | 25.58 | 4.36 | 77.54 | 93.57 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnetv1d50_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d50_b32x8_imagenet_20210531-db14775a.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d50_b32x8_imagenet_20210531-db14775a.log.json) |
 | ResNetV1D-101 | 44.57 | 8.09 | 78.93 | 94.48 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnetv1d101_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d101_b32x8_imagenet_20210531-6e13bcd3.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d101_b32x8_imagenet_20210531-6e13bcd3.log.json) |
-| ResNetV1D-152 | 60.21 | 11.82 | 79.41 | 94.7 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnetv1d152_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d152_b32x8_imagenet_20210531-278cf22a.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d152_b32x8_imagenet_20210531-278cf22a.log.json) |
+| ResNetV1D-152 | 60.21 | 11.82 | 79.41 | 94.70 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnetv1d152_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d152_b32x8_imagenet_20210531-278cf22a.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d152_b32x8_imagenet_20210531-278cf22a.log.json) |
@@ -10,106 +10,106 @@ Collections:
       Batch Size: 256
       Architecture:
         - ResNet
-    Paper: https://openaccess.thecvf.com/content_cvpr_2016/html/He_Deep_Residual_Learning_CVPR_2016_paper.html
+    Paper:
+      URL: https://openaccess.thecvf.com/content_cvpr_2016/html/He_Deep_Residual_Learning_CVPR_2016_paper.html
+      Title: "Deep Residual Learning for Image Recognition"
+    README: configs/resnet/README.md
+    Code:
+      URL: https://github.com/open-mmlab/mmclassification/blob/v0.15.0/mmcls/models/backbones/resnet.py#L383
+      Version: v0.15.0
+  - Name: ResNet-CIFAR
+    Metadata:
+      Training Data: CIFAR-10
+      Training Techniques:
+        - SGD with Momentum
+        - Weight Decay
+      Training Resources: 8x 1080 GPUs
+      Epochs: 200
+      Batch Size: 128
+      Architecture:
+        - ResNet
+    Paper:
+      URL: https://openaccess.thecvf.com/content_cvpr_2016/html/He_Deep_Residual_Learning_CVPR_2016_paper.html
+      Title: "Deep Residual Learning for Image Recognition"
+    README: configs/resnet/README.md
+    Code:
+      URL: https://github.com/open-mmlab/mmclassification/blob/v0.15.0/mmcls/models/backbones/resnet_cifar.py#L10
+      Version: v0.15.0
 
 Models:
   - Name: resnet18_b16x8_cifar10
     Metadata:
       FLOPs: 560000000
       Parameters: 11170000
       Training Data: CIFAR-10
       Training Resources: 8x 1080 GPUs
       Epochs: 200
       Batch Size: 128
-    In Collection: ResNet
+    In Collection: ResNet-CIFAR
     Results:
       - Dataset: CIFAR-10
         Metrics:
-          Top 1 Accuracy: 94.72
+          Top 1 Accuracy: 94.82
         Task: Image Classification
-    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnet18_b16x8_cifar10_20200823-f906fa4e.pth
+    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnet18_b16x8_cifar10_20210528-bd6371c8.pth
     Config: configs/resnet/resnet18_b16x8_cifar10.py
   - Name: resnet34_b16x8_cifar10
     Metadata:
       FLOPs: 1160000000
       Parameters: 21280000
       Training Data: CIFAR-10
       Training Resources: 8x 1080 GPUs
       Epochs: 200
       Batch Size: 128
-    In Collection: ResNet
+    In Collection: ResNet-CIFAR
     Results:
       - Dataset: CIFAR-10
         Metrics:
           Top 1 Accuracy: 95.34
         Task: Image Classification
-    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnet34_b16x8_cifar10_20200823-52d5d832.pth
+    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnet34_b16x8_cifar10_20210528-a8aa36a6.pth
     Config: configs/resnet/resnet34_b16x8_cifar10.py
   - Name: resnet50_b16x8_cifar10
     Metadata:
       FLOPs: 1310000000
       Parameters: 23520000
       Training Data: CIFAR-10
       Training Resources: 8x 1080 GPUs
       Epochs: 200
       Batch Size: 128
-    In Collection: ResNet
+    In Collection: ResNet-CIFAR
    Results:
       - Dataset: CIFAR-10
         Metrics:
-          Top 1 Accuracy: 95.36
+          Top 1 Accuracy: 95.55
         Task: Image Classification
-    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_b16x8_cifar10_20200823-882aa7b1.pth
+    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_b16x8_cifar10_20210528-f54bfad9.pth
     Config: configs/resnet/resnet50_b16x8_cifar10.py
   - Name: resnet101_b16x8_cifar10
     Metadata:
       FLOPs: 2520000000
       Parameters: 42510000
       Training Data: CIFAR-10
       Training Resources: 8x 1080 GPUs
       Epochs: 200
       Batch Size: 128
-    In Collection: ResNet
+    In Collection: ResNet-CIFAR
     Results:
       - Dataset: CIFAR-10
         Metrics:
-          Top 1 Accuracy: 95.66
+          Top 1 Accuracy: 95.58
         Task: Image Classification
-    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnet101_b16x8_cifar10_20200823-d9501bbc.pth
+    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnet101_b16x8_cifar10_20210528-2d29e936.pth
     Config: configs/resnet/resnet101_b16x8_cifar10.py
   - Name: resnet152_b16x8_cifar10
     Metadata:
       FLOPs: 3740000000
       Parameters: 58160000
       Training Data: CIFAR-10
       Training Resources: 8x 1080 GPUs
       Epochs: 200
       Batch Size: 128
-    In Collection: ResNet
+    In Collection: ResNet-CIFAR
     Results:
       - Dataset: CIFAR-10
         Metrics:
-          Top 1 Accuracy: 95.96
+          Top 1 Accuracy: 95.76
         Task: Image Classification
-    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnet152_b16x8_cifar10_20200823-ad4d5d0c.pth
+    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnet152_b16x8_cifar10_20210528-3e8e9178.pth
     Config: configs/resnet/resnet152_b16x8_cifar10.py
   - Name: resnet50_b16x8_cifar100
     Metadata:
       FLOPs: 1310000000
       Parameters: 23710000
       Training Data: CIFAR-100
       Training Resources: 8x 1080 GPUs
       Epochs: 200
       Batch Size: 128
-    In Collection: ResNet
+    In Collection: ResNet-CIFAR
     Results:
       - Dataset: CIFAR-100
         Metrics:
-          Top 1 Accuracy: 80.51
-          Top 5 Accuracy: 95.27
+          Top 1 Accuracy: 79.90
+          Top 5 Accuracy: 95.19
         Task: Image Classification
-    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_batch256_cifar100_20210410-37f13c16.pth
+    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_b16x8_cifar100_20210528-67b58a1b.pth
     Config: configs/resnet/resnet50_b16x8_cifar100.py
   - Name: resnet18_b32x8_imagenet
     Metadata:
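The hunk above repoints CIFAR models from `In Collection: ResNet` to the newly added `ResNet-CIFAR` collection, so every `In Collection` reference must resolve to a defined collection name. A minimal sketch of that consistency check over parsed metafile data (plain dicts stand in for the YAML; `dangling_collections` is a hypothetical helper, not an mmclassification API):

```python
def dangling_collections(metafile: dict) -> set:
    """Collection names referenced by models but not defined under Collections."""
    defined = {c["Name"] for c in metafile["Collections"]}
    used = {m["In Collection"] for m in metafile["Models"]}
    return used - defined

# Toy excerpt mirroring the structure of configs/resnet/metafile.yml.
meta = {
    "Collections": [{"Name": "ResNet"}, {"Name": "ResNet-CIFAR"}],
    "Models": [
        {"Name": "resnet18_b16x8_cifar10", "In Collection": "ResNet-CIFAR"},
        {"Name": "resnet18_b32x8_imagenet", "In Collection": "ResNet"},
    ],
}
```

Running the check before `ResNet-CIFAR` is declared would report it as dangling, which is the failure mode this hunk prevents.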
@@ -119,10 +119,10 @@ Models:
     Results:
       - Dataset: ImageNet-1k
         Metrics:
-          Top 1 Accuracy: 70.07
-          Top 5 Accuracy: 89.44
+          Top 1 Accuracy: 69.90
+          Top 5 Accuracy: 89.43
         Task: Image Classification
-    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnet18_batch256_imagenet_20200708-34ab8f90.pth
+    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnet18_8xb32_in1k_20210831-fbbb1da6.pth
     Config: configs/resnet/resnet18_b32x8_imagenet.py
   - Name: resnet34_b32x8_imagenet
     Metadata:
@@ -132,10 +132,10 @@ Models:
     Results:
       - Dataset: ImageNet-1k
         Metrics:
-          Top 1 Accuracy: 73.85
-          Top 5 Accuracy: 91.53
+          Top 1 Accuracy: 73.62
+          Top 5 Accuracy: 91.59
         Task: Image Classification
-    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnet34_batch256_imagenet_20200708-32ffb4f7.pth
+    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnet34_8xb32_in1k_20210831-f257d4e6.pth
     Config: configs/resnet/resnet34_b32x8_imagenet.py
   - Name: resnet50_b32x8_imagenet
     Metadata:
@@ -146,9 +146,9 @@ Models:
       - Dataset: ImageNet-1k
         Metrics:
           Top 1 Accuracy: 76.55
-          Top 5 Accuracy: 93.15
+          Top 5 Accuracy: 93.06
         Task: Image Classification
-    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_batch256_imagenet_20200708-cfb998bf.pth
+    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_8xb32_in1k_20210831-ea4938fc.pth
     Config: configs/resnet/resnet50_b32x8_imagenet.py
   - Name: resnet101_b32x8_imagenet
     Metadata:
@@ -158,10 +158,10 @@ Models:
     Results:
       - Dataset: ImageNet-1k
         Metrics:
-          Top 1 Accuracy: 78.18
-          Top 5 Accuracy: 94.03
+          Top 1 Accuracy: 77.97
+          Top 5 Accuracy: 94.06
         Task: Image Classification
-    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnet101_batch256_imagenet_20200708-753f3608.pth
+    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnet101_8xb32_in1k_20210831-539c63f8.pth
     Config: configs/resnet/resnet101_b32x8_imagenet.py
   - Name: resnet152_b32x8_imagenet
     Metadata:
@@ -171,10 +171,10 @@ Models:
     Results:
       - Dataset: ImageNet-1k
         Metrics:
-          Top 1 Accuracy: 78.63
-          Top 5 Accuracy: 94.16
+          Top 1 Accuracy: 78.48
+          Top 5 Accuracy: 94.13
         Task: Image Classification
-    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnet152_batch256_imagenet_20200708-ec25b1f9.pth
+    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnet152_8xb32_in1k_20210901-4d7582fa.pth
     Config: configs/resnet/resnet152_b32x8_imagenet.py
   - Name: resnetv1d50_b32x8_imagenet
     Metadata:
@@ -184,10 +184,10 @@ Models:
     Results:
       - Dataset: ImageNet-1k
         Metrics:
-          Top 1 Accuracy: 77.4
-          Top 5 Accuracy: 93.66
+          Top 1 Accuracy: 77.54
+          Top 5 Accuracy: 93.57
         Task: Image Classification
-    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d50_batch256_imagenet_20200708-1ad0ce94.pth
+    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d50_b32x8_imagenet_20210531-db14775a.pth
     Config: configs/resnet/resnetv1d50_b32x8_imagenet.py
   - Name: resnetv1d101_b32x8_imagenet
     Metadata:
@@ -197,10 +197,10 @@ Models:
     Results:
       - Dataset: ImageNet-1k
         Metrics:
-          Top 1 Accuracy: 78.85
-          Top 5 Accuracy: 94.38
+          Top 1 Accuracy: 78.93
+          Top 5 Accuracy: 94.48
         Task: Image Classification
-    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d101_batch256_imagenet_20200708-9cb302ef.pth
+    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d101_b32x8_imagenet_20210531-6e13bcd3.pth
     Config: configs/resnet/resnetv1d101_b32x8_imagenet.py
   - Name: resnetv1d152_b32x8_imagenet
     Metadata:
@@ -210,8 +210,8 @@ Models:
     Results:
       - Dataset: ImageNet-1k
         Metrics:
-          Top 1 Accuracy: 79.35
-          Top 5 Accuracy: 94.61
+          Top 1 Accuracy: 79.41
+          Top 5 Accuracy: 94.70
         Task: Image Classification
-    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d152_batch256_imagenet_20200708-e79cb6a2.pth
+    Weights: https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d152_b32x8_imagenet_20210531-278cf22a.pth
     Config: configs/resnet/resnetv1d152_b32x8_imagenet.py
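Metafiles store raw counts (`FLOPs: 11820000000`, `Parameters: 60210000`) while the README tables show `Flops(G)` and `Params(M)`. A small formatting sketch for keeping the two in sync (function name is illustrative, not an mmclassification API):

```python
def to_table_units(flops: int, params: int) -> tuple:
    """Convert raw metafile counts to the README's Flops(G) / Params(M) units."""
    return round(flops / 1e9, 2), round(params / 1e6, 2)

# ResNetV1D-152 from the metafile above.
flops_g, params_m = to_table_units(11820000000, 60210000)
```

These match the README row values (11.82 GFLOPs, 60.21 M parameters), so a generator could emit the table columns directly from the metafile instead of maintaining both by hand.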
@@ -21,6 +21,6 @@
 | Model | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download |
 |:---------------------:|:---------:|:--------:|:---------:|:---------:|:---------:|:--------:|
 | ResNeXt-32x4d-50 | 25.03 | 4.27 | 77.90 | 93.66 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnext/resnext50_32x4d_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnext/resnext50_32x4d_b32x8_imagenet_20210429-56066e27.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnext/resnext50_32x4d_b32x8_imagenet_20210429-56066e27.log.json) |
-| ResNeXt-32x4d-101 | 44.18 | 8.03 | 78.71 | 94.12 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnext/resnext101_32x4d_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnext/resnext101_32x4d_b32x8_imagenet_20210506-e0fa3dd5.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnext/resnext101_32x4d_b32x8_imagenet_20210506-e0fa3dd5.log.json) |
-| ResNeXt-32x8d-101 | 88.79 | 16.5 | 79.23 | 94.58 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnext/resnext101_32x8d_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnext/resnext101_32x8d_b32x8_imagenet_20210506-23a247d5.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnext/resnext101_32x8d_b32x8_imagenet_20210506-23a247d5.log.json) |
-| ResNeXt-32x4d-152 | 59.95 | 11.8 | 78.93 | 94.41 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnext/resnext152_32x4d_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnext/resnext152_32x4d_b32x8_imagenet_20210524-927787be.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnext/resnext152_32x4d_b32x8_imagenet_20210524-927787be.log.json) |
+| ResNeXt-32x4d-101 | 44.18 | 8.03 | 78.61 | 94.17 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnext/resnext101_32x4d_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnext/resnext101_32x4d_b32x8_imagenet_20210506-e0fa3dd5.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnext/resnext101_32x4d_b32x8_imagenet_20210506-e0fa3dd5.log.json) |
+| ResNeXt-32x8d-101 | 88.79 | 16.5 | 79.27 | 94.58 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnext/resnext101_32x8d_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnext/resnext101_32x8d_b32x8_imagenet_20210506-23a247d5.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnext/resnext101_32x8d_b32x8_imagenet_20210506-23a247d5.log.json) |
+| ResNeXt-32x4d-152 | 59.95 | 11.8 | 78.88 | 94.33 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnext/resnext152_32x4d_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnext/resnext152_32x4d_b32x8_imagenet_20210524-927787be.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnext/resnext152_32x4d_b32x8_imagenet_20210524-927787be.log.json) |
@@ -10,8 +10,13 @@ Collections:
     Batch Size: 256
     Architecture:
       - ResNeXt
-  Paper: https://openaccess.thecvf.com/content_cvpr_2017/html/Xie_Aggregated_Residual_Transformations_CVPR_2017_paper.html
+  Paper:
+    URL: https://openaccess.thecvf.com/content_cvpr_2017/html/Xie_Aggregated_Residual_Transformations_CVPR_2017_paper.html
+    Title: "Aggregated Residual Transformations for Deep Neural Networks"
+  README: configs/resnext/README.md
+  Code:
+    URL: https://github.com/open-mmlab/mmclassification/blob/v0.15.0/mmcls/models/backbones/resnext.py#L90
+    Version: v0.15.0
 
 Models:
   - Name: resnext50_32x4d_b32x8_imagenet
@@ -22,10 +27,10 @@ Models:
     Results:
       - Dataset: ImageNet-1k
         Metrics:
-          Top 1 Accuracy: 77.92
-          Top 5 Accuracy: 93.74
+          Top 1 Accuracy: 77.90
+          Top 5 Accuracy: 93.66
         Task: Image Classification
-    Weights: https://download.openmmlab.com/mmclassification/v0/resnext/resnext50_32x4d_batch256_imagenet_20200708-c07adbb7.pth
+    Weights: https://download.openmmlab.com/mmclassification/v0/resnext/resnext50_32x4d_b32x8_imagenet_20210429-56066e27.pth
     Config: configs/resnext/resnext50_32x4d_b32x8_imagenet.py
   - Name: resnext101_32x4d_b32x8_imagenet
     Metadata:
@@ -35,10 +40,10 @@ Models:
     Results:
       - Dataset: ImageNet-1k
         Metrics:
-          Top 1 Accuracy: 78.7
-          Top 5 Accuracy: 94.34
+          Top 1 Accuracy: 78.61
+          Top 5 Accuracy: 94.17
         Task: Image Classification
-    Weights: https://download.openmmlab.com/mmclassification/v0/resnext/resnext101_32x4d_batch256_imagenet_20200708-87f2d1c9.pth
+    Weights: https://download.openmmlab.com/mmclassification/v0/resnext/resnext101_32x4d_b32x8_imagenet_20210506-e0fa3dd5.pth
     Config: configs/resnext/resnext101_32x4d_b32x8_imagenet.py
   - Name: resnext101_32x8d_b32x8_imagenet
     Metadata:
@@ -48,10 +53,10 @@ Models:
     Results:
       - Dataset: ImageNet-1k
         Metrics:
-          Top 1 Accuracy: 79.22
-          Top 5 Accuracy: 94.52
+          Top 1 Accuracy: 79.27
+          Top 5 Accuracy: 94.58
         Task: Image Classification
-    Weights: https://download.openmmlab.com/mmclassification/v0/resnext/resnext101_32x8d_batch256_imagenet_20200708-1ec34aa7.pth
+    Weights: https://download.openmmlab.com/mmclassification/v0/resnext/resnext101_32x8d_b32x8_imagenet_20210506-23a247d5.pth
     Config: configs/resnext/resnext101_32x8d_b32x8_imagenet.py
   - Name: resnext152_32x4d_b32x8_imagenet
     Metadata:
@@ -61,8 +66,8 @@ Models:
     Results:
       - Dataset: ImageNet-1k
         Metrics:
-          Top 1 Accuracy: 79.06
-          Top 5 Accuracy: 94.47
+          Top 1 Accuracy: 78.88
+          Top 5 Accuracy: 94.33
         Task: Image Classification
-    Weights: https://download.openmmlab.com/mmclassification/v0/resnext/resnext152_32x4d_batch256_imagenet_20200708-aab5034c.pth
+    Weights: https://download.openmmlab.com/mmclassification/v0/resnext/resnext152_32x4d_b32x8_imagenet_20210524-927787be.pth
     Config: configs/resnext/resnext152_32x4d_b32x8_imagenet.py
@@ -10,8 +10,13 @@ Collections:
     Batch Size: 256
     Architecture:
       - ResNet
-  Paper: https://openaccess.thecvf.com/content_cvpr_2018/html/Hu_Squeeze-and-Excitation_Networks_CVPR_2018_paper.html
+  Paper:
+    URL: https://openaccess.thecvf.com/content_cvpr_2018/html/Hu_Squeeze-and-Excitation_Networks_CVPR_2018_paper.html
+    Title: "Squeeze-and-Excitation Networks"
+  README: configs/seresnet/README.md
+  Code:
+    URL: https://github.com/open-mmlab/mmclassification/blob/v0.15.0/mmcls/models/backbones/seresnet.py#L58
+    Version: v0.15.0
 
 Models:
   - Name: seresnet50_b32x8_imagenet
@@ -11,8 +11,13 @@ Collections:
     Batch Size: 1024
     Architecture:
       - Shufflenet V1
-  Paper: https://openaccess.thecvf.com/content_cvpr_2018/html/Zhang_ShuffleNet_An_Extremely_CVPR_2018_paper.html
+  Paper:
+    URL: https://openaccess.thecvf.com/content_cvpr_2018/html/Zhang_ShuffleNet_An_Extremely_CVPR_2018_paper.html
+    Title: "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices"
+  README: configs/shufflenet_v1/README.md
+  Code:
+    URL: https://github.com/open-mmlab/mmclassification/blob/v0.15.0/mmcls/models/backbones/shufflenet_v1.py#L152
+    Version: v0.15.0
 
 Models:
   - Name: shufflenet_v1_1x_b64x16_linearlr_bn_nowd_imagenet
@@ -11,8 +11,13 @@ Collections:
     Batch Size: 1024
     Architecture:
       - Shufflenet V2
-  Paper: https://openaccess.thecvf.com/content_ECCV_2018/papers/Ningning_Light-weight_CNN_Architecture_ECCV_2018_paper.pdf
+  Paper:
+    URL: https://openaccess.thecvf.com/content_ECCV_2018/papers/Ningning_Light-weight_CNN_Architecture_ECCV_2018_paper.pdf
+    Title: "ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design"
+  README: configs/shufflenet_v2/README.md
+  Code:
+    URL: https://github.com/open-mmlab/mmclassification/blob/v0.15.0/mmcls/models/backbones/shufflenet_v2.py#L134
+    Version: v0.15.0
 
 Models:
   - Name: shufflenet_v2_1x_b64x16_linearlr_bn_nowd_imagenet
@@ -1,7 +1,7 @@
 Collections:
   - Name: Swin-Transformer
     Metadata:
-      Training Data: ImageNet
+      Training Data: ImageNet-1k
       Training Techniques:
         - AdamW
         - Weight Decay
@@ -10,58 +10,179 @@ Collections:
       Batch Size: 1024
       Architecture:
         - Shift Window Multihead Self Attention
-    Paper: https://arxiv.org/pdf/2103.14030.pdf
+    Paper:
+      URL: https://arxiv.org/pdf/2103.14030.pdf
+      Title: "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows"
+    README: configs/swin_transformer/README.md
+    Code:
+      URL: https://github.com/open-mmlab/mmclassification/blob/v0.15.0/mmcls/models/backbones/swin_transformer.py#L176
+      Version: v0.15.0
 
 Models:
-  - Config: configs/swin_transformer/swin_tiny_224_b16x64_300e_imagenet.py
-    In Collection: Swin-Transformer
-    Metadata:
-      FLOPs: 4360000000
-      Parameters: 28290000
-      Training Data: ImageNet
-      Training Resources: 16x 1080 GPUs
-      Epochs: 300
-      Batch Size: 1024
-    Name: swin_tiny_224_imagenet
-    Results:
-      - Dataset: ImageNet
-        Metrics:
-          Top 1 Accuracy: 81.18
-          Top 5 Accuracy: 95.61
-        Task: Image Classification
-    Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_tiny_224_b16x64_300e_imagenet_20210616_090925-66df6be6.pth
-  - Config: configs/swin_transformer/swin_small_224_b16x64_300e_imagenet.py
-    In Collection: Swin-Transformer
-    Metadata:
-      FLOPs: 8520000000
-      Parameters: 48610000
-      Training Data: ImageNet
-      Training Resources: 16x 1080 GPUs
-      Epochs: 300
-      Batch Size: 1024
-    Name: swin_small_224_imagenet
-    Results:
-      - Dataset: ImageNet
-        Metrics:
-          Top 1 Accuracy: 83.02
-          Top 5 Accuracy: 96.29
-        Task: Image Classification
-    Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_small_224_b16x64_300e_imagenet_20210615_110219-7f9d988b.pth
-  - Config: configs/swin_transformer/swin_base_224_b16x64_300e_imagenet.py
-    In Collection: Swin-Transformer
-    Metadata:
-      FLOPs: 15140000000
-      Parameters: 87770000
-      Training Data: ImageNet
-      Training Resources: 16x 1080 GPUs
-      Epochs: 300
-      Batch Size: 1024
-    Name: swin_base_224_imagenet
-    Results:
-      - Dataset: ImageNet
-        Metrics:
-          Top 1 Accuracy: 83.36
-          Top 5 Accuracy: 96.44
-        Task: Image Classification
-    Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_base_224_b16x64_300e_imagenet_20210616_190742-93230b0d.pth
+  - Name: swin-tiny_64xb16_in1k
+    Metadata:
+      FLOPs: 4360000000
+      Parameters: 28290000
+    In Collection: Swin-Transformer
+    Results:
+      - Dataset: ImageNet-1k
+        Metrics:
+          Top 1 Accuracy: 81.18
+          Top 5 Accuracy: 95.61
+        Task: Image Classification
+    Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_tiny_224_b16x64_300e_imagenet_20210616_090925-66df6be6.pth
+    Config: configs/swin_transformer/swin_tiny_224_b16x64_300e_imagenet.py
+  - Name: swin-small_64xb16_in1k
+    Metadata:
+      FLOPs: 8520000000
+      Parameters: 49610000
+    In Collection: Swin-Transformer
+    Results:
+      - Dataset: ImageNet-1k
+        Metrics:
+          Top 1 Accuracy: 83.02
+          Top 5 Accuracy: 96.29
+        Task: Image Classification
+    Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_small_224_b16x64_300e_imagenet_20210615_110219-7f9d988b.pth
+    Config: configs/swin_transformer/swin_small_224_b16x64_300e_imagenet.py
+  - Name: swin-base_64xb16_in1k
+    Metadata:
+      FLOPs: 15140000000
+      Parameters: 87770000
+    In Collection: Swin-Transformer
+    Results:
+      - Dataset: ImageNet-1k
+        Metrics:
+          Top 1 Accuracy: 83.36
+          Top 5 Accuracy: 96.44
+        Task: Image Classification
+    Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_base_224_b16x64_300e_imagenet_20210616_190742-93230b0d.pth
+    Config: configs/swin_transformer/swin_base_224_b16x64_300e_imagenet.py
+  - Name: swin-tiny_3rdparty_in1k
+    Metadata:
+      FLOPs: 4360000000
+      Parameters: 28290000
+    In Collection: Swin-Transformer
+    Results:
+      - Dataset: ImageNet-1k
+        Metrics:
+          Top 1 Accuracy: 81.18
+          Top 5 Accuracy: 95.52
+        Task: Image Classification
+    Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_tiny_patch4_window7_224-160bb0a5.pth
+    Converted From:
+      Weights: https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_tiny_patch4_window7_224.pth
+      Code: https://github.com/microsoft/Swin-Transformer/blob/777f6c66604bb5579086c4447efe3620344d95a9/models/swin_transformer.py#L458
+    Config: configs/swin_transformer/swin_tiny_224_b16x64_300e_imagenet.py
+  - Name: swin-small_3rdparty_in1k
+    Metadata:
+      FLOPs: 8520000000
|
||||
Parameters: 49610000
|
||||
In Collection: Swin-Transformer
|
||||
Results:
|
||||
- Dataset: ImageNet-1k
|
||||
Metrics:
|
||||
Top 1 Accuracy: 83.21
|
||||
Top 5 Accuracy: 96.25
|
||||
Task: Image Classification
|
||||
Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_small_patch4_window7_224-cc7a01c9.pth
|
||||
Converted From:
|
||||
Weights: https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_small_patch4_window7_224.pth
|
||||
Code: https://github.com/microsoft/Swin-Transformer/blob/777f6c66604bb5579086c4447efe3620344d95a9/models/swin_transformer.py#L458
|
||||
Config: configs/swin_transformer/swin_small_224_b16x64_300e_imagenet.py
|
||||
- Name: swin-base_3rdparty_in1k
|
||||
Metadata:
|
||||
FLOPs: 15140000000
|
||||
Parameters: 87770000
|
||||
In Collection: Swin-Transformer
|
||||
Results:
|
||||
- Dataset: ImageNet-1k
|
||||
Metrics:
|
||||
Top 1 Accuracy: 83.42
|
||||
Top 5 Accuracy: 96.44
|
||||
Task: Image Classification
|
||||
Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_base_patch4_window7_224-4670dd19.pth
|
||||
Converted From:
|
||||
Weights: https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window7_224.pth
|
||||
Code: https://github.com/microsoft/Swin-Transformer/blob/777f6c66604bb5579086c4447efe3620344d95a9/models/swin_transformer.py#L458
|
||||
Config: configs/swin_transformer/swin_base_224_b16x64_300e_imagenet.py
|
||||
- Name: swin-base_3rdparty_in1k-384
|
||||
Metadata:
|
||||
FLOPs: 44490000000
|
||||
Parameters: 87900000
|
||||
In Collection: Swin-Transformer
|
||||
Results:
|
||||
- Dataset: ImageNet-1k
|
||||
Metrics:
|
||||
Top 1 Accuracy: 84.49
|
||||
Top 5 Accuracy: 96.95
|
||||
Task: Image Classification
|
||||
Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_base_patch4_window12_384-02c598a4.pth
|
||||
Converted From:
|
||||
Weights: https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window12_384.pth
|
||||
Code: https://github.com/microsoft/Swin-Transformer/blob/777f6c66604bb5579086c4447efe3620344d95a9/models/swin_transformer.py#L458
|
||||
Config: configs/swin_transformer/swin_base_384_evalonly_imagenet.py
|
||||
- Name: swin-base_in21k-pre-3rdparty_in1k
|
||||
Metadata:
|
||||
FLOPs: 15140000000
|
||||
Parameters: 87770000
|
||||
In Collection: Swin-Transformer
|
||||
Results:
|
||||
- Dataset: ImageNet-1k
|
||||
Metrics:
|
||||
Top 1 Accuracy: 85.16
|
||||
Top 5 Accuracy: 97.50
|
||||
Task: Image Classification
|
||||
Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_base_patch4_window7_224_22kto1k-f967f799.pth
|
||||
Converted From:
|
||||
Weights: https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window7_224_22kto1k.pth
|
||||
Code: https://github.com/microsoft/Swin-Transformer/blob/777f6c66604bb5579086c4447efe3620344d95a9/models/swin_transformer.py#L458
|
||||
Config: configs/swin_transformer/swin_base_224_b16x64_300e_imagenet.py
|
||||
- Name: swin-base_in21k-pre-3rdparty_in1k-384
|
||||
Metadata:
|
||||
FLOPs: 44490000000
|
||||
Parameters: 87900000
|
||||
In Collection: Swin-Transformer
|
||||
Results:
|
||||
- Dataset: ImageNet-1k
|
||||
Metrics:
|
||||
Top 1 Accuracy: 86.44
|
||||
Top 5 Accuracy: 98.05
|
||||
Task: Image Classification
|
||||
Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_base_patch4_window12_384_22kto1k-d59b0d1d.pth
|
||||
Converted From:
|
||||
Weights: https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window12_384_22kto1k.pth
|
||||
Code: https://github.com/microsoft/Swin-Transformer/blob/777f6c66604bb5579086c4447efe3620344d95a9/models/swin_transformer.py#L458
|
||||
Config: configs/swin_transformer/swin_base_384_evalonly_imagenet.py
|
||||
- Name: swin-large_in21k-pre-3rdparty_in1k
|
||||
Metadata:
|
||||
FLOPs: 34040000000
|
||||
Parameters: 196530000
|
||||
In Collection: Swin-Transformer
|
||||
Results:
|
||||
- Dataset: ImageNet-1k
|
||||
Metrics:
|
||||
Top 1 Accuracy: 86.24
|
||||
Top 5 Accuracy: 97.88
|
||||
Task: Image Classification
|
||||
Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_large_patch4_window7_224_22kto1k-5f0996db.pth
|
||||
Converted From:
|
||||
Weights: https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_large_patch4_window7_224_22kto1k.pth
|
||||
Code: https://github.com/microsoft/Swin-Transformer/blob/777f6c66604bb5579086c4447efe3620344d95a9/models/swin_transformer.py#L458
|
||||
Config: configs/swin_transformer/swin_large_224_evalonly_imagenet.py
|
||||
- Name: swin-large_in21k-pre-3rdparty_in1k-384
|
||||
Metadata:
|
||||
FLOPs: 100040000000
|
||||
Parameters: 196740000
|
||||
In Collection: Swin-Transformer
|
||||
Results:
|
||||
- Dataset: ImageNet-1k
|
||||
Metrics:
|
||||
Top 1 Accuracy: 87.25
|
||||
Top 5 Accuracy: 98.25
|
||||
Task: Image Classification
|
||||
Weights: https://download.openmmlab.com/mmclassification/v0/swin-transformer/convert/swin_large_patch4_window12_384_22kto1k-0a40944b.pth
|
||||
Converted From:
|
||||
Weights: https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_large_patch4_window12_384_22kto1k.pth
|
||||
Code: https://github.com/microsoft/Swin-Transformer/blob/777f6c66604bb5579086c4447efe3620344d95a9/models/swin_transformer.py#L458
|
||||
Config: configs/swin_transformer/swin_large_384_evalonly_imagenet.py
|
||||
|
|
|
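The metafile entries store FLOPs and parameter counts as raw integers, while the README tables quote the same numbers in GFLOPs and millions of parameters. A minimal sketch of the conversion (`to_readable` is a hypothetical helper, not part of the mmcls tooling):

```python
def to_readable(flops: int, params: int) -> tuple:
    """Convert raw metafile counts into the Flops(G) / Params(M)
    figures displayed in the README tables."""
    return round(flops / 1e9, 2), round(params / 1e6, 2)

# Swin-T metafile entry: FLOPs 4360000000, Parameters 28290000
print(to_readable(4360000000, 28290000))  # (4.36, 28.29)
```

This matches the `4.36` / `28.29` cells of the Swin-Transformer tiny row in the model zoo table.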
@@ -14,3 +14,19 @@
      primaryClass={cs.CV}
}
```

## Pretrain model

The pre-trained models are converted from [timm](https://github.com/rwightman/pytorch-image-models/).

### ImageNet

| Model | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download |
|:---------------------:|:---------:|:--------:|:---------:|:---------:|:------:|:--------:|
| Transformer in Transformer small\* | 23.76 | 3.36 | 81.52 | 95.73 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/tnt/tnt_s_patch16_224_evalonly_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/tnt/tnt-small-p16_3rdparty_in1k_20210903-c56ee7df.pth) |

*Models with \* are converted from other repos.*

## Results and models

Results will be added soon.
@@ -0,0 +1,29 @@
Collections:
  - Name: Transformer in Transformer
    Metadata:
      Training Data: ImageNet-1k
    Paper:
      URL: https://arxiv.org/abs/2103.00112
      Title: "Transformer in Transformer"
    README: configs/tnt/README.md
    Code:
      URL: https://github.com/open-mmlab/mmclassification/blob/v0.15.0/mmcls/models/backbones/tnt.py#L203
      Version: v0.15.0

Models:
  - Name: tnt-small-p16_3rdparty_in1k
    Metadata:
      FLOPs: 3360000000
      Parameters: 23760000
    In Collection: Transformer in Transformer
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 81.52
          Top 5 Accuracy: 95.73
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/tnt/tnt-small-p16_3rdparty_in1k_20210903-c56ee7df.pth
    Config: configs/tnt/tnt_s_patch16_224_evalonly_imagenet.py
    Converted From:
      Weights: https://github.com/contrastive/pytorch-image-models/releases/download/TNT/tnt_s_patch16_224.pth.tar
      Code: https://github.com/contrastive/pytorch-image-models/blob/809271b0f3e5d9be4e11c0c5cec1dbba8b5e2c60/timm/models/tnt.py#L144
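The download links in these metafiles appear to follow a `<model-name>_<YYYYMMDD>-<short-hash>.pth` naming convention (an observation from the URLs in this diff, not a documented rule). A quick sketch with a hypothetical checker:

```python
import re
from pathlib import PurePosixPath
from urllib.parse import urlparse


def follows_naming_convention(name: str, weights_url: str) -> bool:
    """Heuristic check that a checkpoint file is named
    <model-name>_<YYYYMMDD>-<hex-hash>.pth (assumed convention)."""
    fname = PurePosixPath(urlparse(weights_url).path).name
    pattern = re.escape(name) + r"_\d{8}-[0-9a-f]+\.pth"
    return re.fullmatch(pattern, fname) is not None


url = ("https://download.openmmlab.com/mmclassification/v0/tnt/"
       "tnt-small-p16_3rdparty_in1k_20210903-c56ee7df.pth")
print(follows_naming_convention("tnt-small-p16_3rdparty_in1k", url))  # True
```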
@@ -24,7 +24,7 @@
| VGG-13 | 133.05 | 11.34 | 70.02 | 89.46 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/vgg/vgg13_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/vgg/vgg13_batch256_imagenet_20210208-4d1d6080.pth) | [log](https://download.openmmlab.com/mmclassification/v0/vgg/vgg13_batch256_imagenet_20210208-4d1d6080.log.json) |
| VGG-16 | 138.36 | 15.5 | 71.62 | 90.49 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/vgg/vgg16_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/vgg/vgg16_batch256_imagenet_20210208-db26f1a5.pth) | [log](https://download.openmmlab.com/mmclassification/v0/vgg/vgg16_batch256_imagenet_20210208-db26f1a5.log.json) |
| VGG-19 | 143.67 | 19.67 | 72.41 | 90.80 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/vgg/vgg19_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/vgg/vgg19_batch256_imagenet_20210208-e6920e4a.pth) | [log](https://download.openmmlab.com/mmclassification/v0/vgg/vgg19_batch256_imagenet_20210208-e6920e4a.log.json)|
| VGG-11-BN | 132.87 | 7.64 | 70.75 | 90.12 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/vgg/vgg11bn_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/vgg/vgg11_bn_batch256_imagenet_20210207-f244902c.pth) | [log](https://download.openmmlab.com/mmclassification/v0/vgg/vgg11_bn_batch256_imagenet_20210207-f244902c.log.json) |
| VGG-13-BN | 133.05 | 11.36 | 72.15 | 90.71 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/vgg/vgg13bn_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/vgg/vgg13_bn_batch256_imagenet_20210207-1a8b7864.pth) | [log](https://download.openmmlab.com/mmclassification/v0/vgg/vgg13_bn_batch256_imagenet_20210207-1a8b7864.log.json) |
| VGG-16-BN | 138.37 | 15.53 | 73.72 | 91.68 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/vgg/vgg16_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/vgg/vgg16_bn_batch256_imagenet_20210208-7e55cd29.pth) | [log](https://download.openmmlab.com/mmclassification/v0/vgg/vgg16_bn_batch256_imagenet_20210208-7e55cd29.log.json) |
| VGG-19-BN | 143.68 | 19.7 | 74.70 | 92.24 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/vgg/vgg19bn_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/vgg/vgg19_bn_batch256_imagenet_20210208-da620c4f.pth) | [log](https://download.openmmlab.com/mmclassification/v0/vgg/vgg19_bn_batch256_imagenet_20210208-da620c4f.log.json)|
| VGG-11-BN | 132.87 | 7.64 | 70.67 | 90.16 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/vgg/vgg11bn_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/vgg/vgg11_bn_batch256_imagenet_20210207-f244902c.pth) | [log](https://download.openmmlab.com/mmclassification/v0/vgg/vgg11_bn_batch256_imagenet_20210207-f244902c.log.json) |
| VGG-13-BN | 133.05 | 11.36 | 72.12 | 90.66 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/vgg/vgg13bn_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/vgg/vgg13_bn_batch256_imagenet_20210207-1a8b7864.pth) | [log](https://download.openmmlab.com/mmclassification/v0/vgg/vgg13_bn_batch256_imagenet_20210207-1a8b7864.log.json) |
| VGG-16-BN | 138.37 | 15.53 | 73.74 | 91.66 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/vgg/vgg16_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/vgg/vgg16_bn_batch256_imagenet_20210208-7e55cd29.pth) | [log](https://download.openmmlab.com/mmclassification/v0/vgg/vgg16_bn_batch256_imagenet_20210208-7e55cd29.log.json) |
| VGG-19-BN | 143.68 | 19.7 | 74.68 | 92.27 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/vgg/vgg19bn_b32x8_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/vgg/vgg19_bn_batch256_imagenet_20210208-da620c4f.pth) | [log](https://download.openmmlab.com/mmclassification/v0/vgg/vgg19_bn_batch256_imagenet_20210208-da620c4f.log.json)|
@@ -10,8 +10,13 @@ Collections:
    Batch Size: 256
    Architecture:
      - VGG
    Paper: https://arxiv.org/abs/1409.1556
    Paper:
      URL: https://arxiv.org/abs/1409.1556
      Title: "Very Deep Convolutional Networks for Large-Scale Image Recognition"
    README: configs/vgg/README.md
    Code:
      URL: https://github.com/open-mmlab/mmclassification/blob/v0.15.0/mmcls/models/backbones/vgg.py#L39
      Version: v0.15.0

Models:
  - Name: vgg11_b32x8_imagenet
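This hunk shows the schema change the commit applies everywhere: the old metafiles stored `Paper` as a bare URL string, while the new format uses a mapping with `URL` and `Title` keys. A sketch of the upgrade, using a hypothetical `normalize_paper` helper (not part of any real tooling):

```python
def normalize_paper(paper, title=""):
    """Upgrade the old metafile field (Paper: <url> string) to the new
    mapping form with URL and Title keys."""
    if isinstance(paper, str):
        return {"URL": paper, "Title": title}
    return paper  # already in the new mapping form


old_style = "https://arxiv.org/abs/1409.1556"
new_style = normalize_paper(
    old_style, "Very Deep Convolutional Networks for Large-Scale Image Recognition")
print(new_style["URL"])  # https://arxiv.org/abs/1409.1556
```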
@@ -74,8 +79,8 @@ Models:
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 70.75
          Top 5 Accuracy: 90.12
          Top 1 Accuracy: 70.67
          Top 5 Accuracy: 90.16
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/vgg/vgg11_bn_batch256_imagenet_20210207-f244902c.pth
    Config: configs/vgg/vgg11bn_b32x8_imagenet.py
@@ -87,12 +92,12 @@ Models:
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 72.15
          Top 5 Accuracy: 90.71
          Top 1 Accuracy: 72.12
          Top 5 Accuracy: 90.66
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/vgg/vgg13_bn_batch256_imagenet_20210207-1a8b7864.pth
    Config: configs/vgg/vgg13bn_b32x8_imagenet.py
  - Name: vgg16_b32x8_imagenet
  - Name: vgg16bn_b32x8_imagenet
    Metadata:
      FLOPs: 15530000000
      Parameters: 138370000
@@ -100,11 +105,11 @@ Models:
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 73.72
          Top 5 Accuracy: 91.68
          Top 1 Accuracy: 73.74
          Top 5 Accuracy: 91.66
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/vgg/vgg16_bn_batch256_imagenet_20210208-7e55cd29.pth
    Config: configs/vgg/vgg16_b32x8_imagenet.py
    Config: configs/vgg/vgg16bn_b32x8_imagenet.py
  - Name: vgg19bn_b32x8_imagenet
    Metadata:
      FLOPs: 19700000000
@@ -113,8 +118,8 @@ Models:
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 74.7
          Top 5 Accuracy: 92.24
          Top 1 Accuracy: 74.68
          Top 5 Accuracy: 92.27
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/vgg/vgg19_bn_batch256_imagenet_20210208-da620c4f.pth
    Config: configs/vgg/vgg19bn_b32x8_imagenet.py
@@ -43,7 +43,7 @@ The ResNet family models below are trained by standard data augmentations, i.e.,
| Swin-Transformer tiny | 28.29 | 4.36 | 81.18 | 95.61 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/swin_transformer/swin_tiny_224_b16x64_300e_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_tiny_224_b16x64_300e_imagenet_20210616_090925-66df6be6.pth) | [log](https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_tiny_224_b16x64_300e_imagenet_20210616_090925.log.json)|
| Swin-Transformer small| 49.61 | 8.52 | 83.02 | 96.29 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/swin_transformer/swin_small_224_b16x64_300e_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_small_224_b16x64_300e_imagenet_20210615_110219-7f9d988b.pth) | [log](https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_small_224_b16x64_300e_imagenet_20210615_110219.log.json)|
| Swin-Transformer base | 87.77 | 15.14 | 83.36 | 96.44 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/swin_transformer/swin_base_224_b16x64_300e_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_base_224_b16x64_300e_imagenet_20210616_190742-93230b0d.pth) | [log](https://download.openmmlab.com/mmclassification/v0/swin-transformer/swin_base_224_b16x64_300e_imagenet_20210616_190742.log.json)|
| Transformer in Transformer small* | 23.76 | 3.36 | 81.52 | 95.73 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/tnt/tnt_s_patch16_224_evalonly_imagenet) | [model](http://download.openmmlab.com/mmclassification/v0/transformer-in-transformer/convert/tnt_s_patch16_224_evalonly_imagenet.pth) | [log]()|
| Transformer in Transformer small\* | 23.76 | 3.36 | 81.52 | 95.73 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/tnt/tnt_s_patch16_224_evalonly_imagenet.py) | [model](https://download.openmmlab.com/mmclassification/v0/tnt/tnt-small-p16_3rdparty_in1k_20210903-c56ee7df.pth) | [log]()|

Models with \* are converted from other repos; the others are trained by ourselves.
@@ -8,3 +8,4 @@ Import:
  - configs/shufflenet_v2/metafile.yml
  - configs/swin_transformer/metafile.yml
  - configs/vgg/metafile.yml
  - configs/tnt/metafile.yml
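The top-level `model-index.yml` above only lists the per-config metafiles; a consumer is expected to merge them into one index. A minimal sketch (hypothetical, not the real model-index tooling), with two entries from this diff inlined as plain dicts:

```python
# Per-config metafiles, abbreviated to the fields needed for a name lookup.
metafiles = [
    {"Models": [{"Name": "swin-tiny_64xb16_in1k",
                 "In Collection": "Swin-Transformer"}]},
    {"Models": [{"Name": "tnt-small-p16_3rdparty_in1k",
                 "In Collection": "Transformer in Transformer"}]},
]


def build_index(files):
    """Merge Models sections into a model-name -> collection-name map."""
    return {m["Name"]: m["In Collection"]
            for mf in files for m in mf.get("Models", [])}


print(build_index(metafiles)["tnt-small-p16_3rdparty_in1k"])
# Transformer in Transformer
```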