[Docs] Improve ViT and MobileViT model pages. (#1155)
* [Docs] Improve the ViT model page
* [Docs] Improve the MobileViT model page
* fix
parent 63b124e2d7
commit 6203fd6cc9
@@ -4,14 +4,89 @@
<!-- [ALGORITHM] -->
## Introduction

**MobileViT** introduces a light-weight network that takes advantage of both ViTs and CNNs. It combines the `InvertedResidual` blocks from [MobileNetV2](../mobilenet_v2/README.md) with `MobileViTBlock`s, which are built on the transformer blocks of [ViT](../vision_transformer/README.md), to form a standard 5-stage model structure.

The `MobileViTBlock` treats transformers as convolutions to compute a global representation and combines it with ordinary convolution layers that provide a local representation, producing a block with a global receptive field. This differs from ViT, which adds an extra class token and position embeddings to learn the relative relationships between patches. Since it uses no position embeddings, MobileViT can benefit from multi-scale inputs during training.
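
The core idea can be illustrated with a short, self-contained PyTorch sketch. This is only an illustration, not the mmcls implementation: for brevity it flattens the whole feature map into a single token sequence instead of performing the paper's patch unfold/fold, but it keeps the key ingredients (local convolutions, a position-embedding-free transformer for global mixing, and a convolutional fusion step).

```python
import torch
from torch import nn


class MobileViTStyleBlock(nn.Module):
    """Simplified MobileViT-style block: local convs + global transformer."""

    def __init__(self, channels: int = 64, dim: int = 96, num_heads: int = 4):
        super().__init__()
        # Local representation: 3x3 conv followed by a 1x1 projection to `dim`.
        self.local_rep = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, dim, 1),
        )
        # Global representation: a plain transformer encoder over the feature
        # map treated as a sequence, with no class token and no position embeddings.
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, dim_feedforward=2 * dim, batch_first=True)
        self.global_rep = nn.TransformerEncoder(layer, num_layers=2)
        self.proj = nn.Conv2d(dim, channels, 1)
        # Fusion: merge the input with the globally processed features.
        self.fusion = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        y = self.local_rep(x)                   # (B, dim, H, W)
        seq = y.flatten(2).transpose(1, 2)      # (B, H*W, dim), no position embedding
        seq = self.global_rep(seq)              # global self-attention
        y = seq.transpose(1, 2).reshape(b, -1, h, w)
        y = self.proj(y)                        # back to `channels`
        return self.fusion(torch.cat([x, y], dim=1))


# Because nothing depends on a fixed sequence length, the block accepts
# different input resolutions, which is what makes multi-scale training easy.
block = MobileViTStyleBlock()
print(block(torch.rand(1, 64, 32, 32)).shape)   # torch.Size([1, 64, 32, 32])
print(block(torch.rand(1, 64, 48, 48)).shape)   # torch.Size([1, 64, 48, 48])
```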
The paper also proposes a multi-scale training strategy that dynamically adjusts the batch size according to the image size, which improves both training efficiency and final performance.
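
For instance, a batch-size scheduler in this spirit could keep the number of pixels per batch roughly constant across the sampled resolutions. The helper below is hypothetical and not part of mmcls; it only makes the trade-off concrete.

```python
def adaptive_batch_size(base_batch_size: int, base_size: int, img_size: int) -> int:
    """Scale the batch size so that batch_size * H * W stays roughly constant."""
    scale = (base_size / img_size) ** 2
    return max(1, int(base_batch_size * scale))


for img_size in (160, 192, 224, 256, 320):
    print(img_size, adaptive_batch_size(128, 224, img_size))
# 160 -> 250, 192 -> 174, 224 -> 128, 256 -> 98, 320 -> 62: smaller images get
# larger batches, so GPU memory stays roughly constant while throughput improves.
```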
MobileViT is also shown to be effective on downstream tasks such as object detection and segmentation.
<div align=center>
<img src="https://user-images.githubusercontent.com/42952108/193229983-822bf025-89a6-4d95-b6be-76b7f1a62f2c.png" width="70%"/>
</div>

## Abstract

<details>

<summary>Show the paper's abstract</summary>

<br>

Light-weight convolutional neural networks (CNNs) are the de-facto for mobile vision tasks. Their spatial inductive biases allow them to learn representations with fewer parameters across different vision tasks. However, these networks are spatially local. To learn global representations, self-attention-based vision transformers (ViTs) have been adopted. Unlike CNNs, ViTs are heavy-weight. In this paper, we ask the following question: is it possible to combine the strengths of CNNs and ViTs to build a light-weight and low latency network for mobile vision tasks? Towards this end, we introduce MobileViT, a light-weight and general-purpose vision transformer for mobile devices. MobileViT presents a different perspective for the global processing of information with transformers, i.e., transformers as convolutions. Our results show that MobileViT significantly outperforms CNN- and ViT-based networks across different tasks and datasets. On the ImageNet-1k dataset, MobileViT achieves top-1 accuracy of 78.4% with about 6 million parameters, which is 3.2% and 6.2% more accurate than MobileNetv3 (CNN-based) and DeiT (ViT-based) for a similar number of parameters. On the MS-COCO object detection task, MobileViT is 5.7% more accurate than MobileNetv3 for a similar number of parameters.
</br>

</details>

## How to use it?

<!-- [TABS-BEGIN] -->

**Predict image**
```python
>>> import torch
>>> from mmcls.apis import init_model, inference_model
>>>
>>> model = init_model('configs/mobilevit/mobilevit-small_8xb128_in1k.py', 'https://download.openmmlab.com/mmclassification/v0/mobilevit/mobilevit-small_3rdparty_in1k_20221018-cb4f741c.pth')
>>> predict = inference_model(model, 'demo/demo.JPEG')
>>> print(predict['pred_class'])
sea snake
>>> print(predict['pred_score'])
0.9839211702346802
```
**Use the model**
```python
>>> import torch
>>> from mmcls.apis import init_model
>>>
>>> model = init_model('configs/mobilevit/mobilevit-small_8xb128_in1k.py', 'https://download.openmmlab.com/mmclassification/v0/mobilevit/mobilevit-small_3rdparty_in1k_20221018-cb4f741c.pth')
>>> inputs = torch.rand(1, 3, 224, 224).to(model.data_preprocessor.device)
>>> # To get classification scores.
>>> out = model(inputs)
>>> print(out.shape)
torch.Size([1, 1000])
>>> # To extract features.
>>> outs = model.extract_feat(inputs)
>>> print(outs[0].shape)
torch.Size([1, 640])
```
**Train/Test Command**

Place the ImageNet dataset in the `data/imagenet/` directory, or prepare other datasets according to the [docs](https://mmclassification.readthedocs.io/en/1.x/user_guides/dataset_prepare.html#prepare-dataset).

Train:

```shell
python tools/train.py configs/mobilevit/mobilevit-small_8xb128_in1k.py
```
Test:

```shell
python tools/test.py configs/mobilevit/mobilevit-small_8xb128_in1k.py https://download.openmmlab.com/mmclassification/v0/mobilevit/mobilevit-small_3rdparty_in1k_20221018-cb4f741c.pth
```
<!-- [TABS-END] -->

For more configurable parameters, please refer to the [API](https://mmclassification.readthedocs.io/en/1.x/api/generated/mmcls.models.backbones.MobileViT.html#mmcls.models.backbones.MobileViT).
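
The backbone can also be built on its own from a config dict. The snippet below is only a sketch: it assumes the MMEngine-style registry interface of mmcls 1.x and that `arch='small'` selects MobileViT-S; check the API page above for the exact arguments.

```python
import torch
from mmcls.registry import MODELS

# Assumed config for the MobileViT-S backbone (see the API link for details).
backbone = MODELS.build(dict(type='MobileViT', arch='small'))
backbone.eval()

with torch.no_grad():
    feats = backbone(torch.rand(1, 3, 224, 224))
# The backbone returns a tuple of feature maps; the last stage of MobileViT-S
# has 640 channels, matching the 640-dimensional feature shown above.
print([tuple(f.shape) for f in feats])
```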
## Results and models

### ImageNet-1k
@@ -4,14 +4,89 @@
<!-- [ALGORITHM] -->
## Introduction

**Vision Transformer**, known as **ViT**, succeeded in using a pure transformer to outperform previous works based on convolutional networks in the vision field. ViT splits an image into patches and feeds them to multi-head attention, concatenates a learnable class token for the final prediction, and adds learnable position embeddings to encode the relative positions between patches. Based on these three attention-centric techniques, ViT provides a brand-new pattern for building backbone structures in the vision field.

The strategy works even better when coupled with pre-training on large datasets. Because of its simplicity and effectiveness, many follow-up works in the classification field originate from ViT, and ViT-based methods still play an important role in the recent multi-modality field.
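
The patch-embedding part of this design can be summarized in a short PyTorch sketch. It is a simplified illustration rather than the mmcls `VisionTransformer` implementation, and it omits the transformer encoder itself.

```python
import torch
from torch import nn


class ViTPatchEmbedding(nn.Module):
    """Patch embedding + class token + position embedding (ViT-B/16 sizes)."""

    def __init__(self, img_size=224, patch_size=16, in_channels=3, embed_dim=768):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2           # 14 * 14 = 196
        # Split the image into patches and linearly project each patch.
        self.patch_embed = nn.Conv2d(
            in_channels, embed_dim, kernel_size=patch_size, stride=patch_size)
        # Learnable class token, prepended to the patch tokens.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        # Learnable position embeddings for the class token and every patch.
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, 196, 768)
        cls = self.cls_token.expand(x.shape[0], -1, -1)          # (B, 1, 768)
        return torch.cat([cls, tokens], dim=1) + self.pos_embed  # (B, 197, 768)


# The resulting 197-token sequence is what the transformer encoder consumes.
print(ViTPatchEmbedding()(torch.rand(1, 3, 224, 224)).shape)     # (1, 197, 768)
```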
<div align=center>
<img src="https://user-images.githubusercontent.com/26739999/142579081-b5718032-6581-472b-8037-ea66aaa9e278.png" width="70%"/>
</div>

## Abstract

<details>

<summary>Show the paper's abstract</summary>

<br>

While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
</br>

</details>

## How to use it?

<!-- [TABS-BEGIN] -->

**Predict image**
```python
>>> import torch
>>> from mmcls.apis import init_model, inference_model
>>>
>>> model = init_model('configs/vision_transformer/vit-base-p16_pt-32xb128-mae_in1k-224.py', 'https://download.openmmlab.com/mmclassification/v0/vit/vit-base-p16_pt-32xb128-mae_in1k_20220623-4c544545.pth')
>>> predict = inference_model(model, 'demo/demo.JPEG')
>>> print(predict['pred_class'])
sea snake
>>> print(predict['pred_score'])
0.9184340238571167
```
**Use the model**
```python
>>> import torch
>>> from mmcls.apis import init_model
>>>
>>> model = init_model('configs/vision_transformer/vit-base-p16_pt-32xb128-mae_in1k-224.py', 'https://download.openmmlab.com/mmclassification/v0/vit/vit-base-p16_pt-32xb128-mae_in1k_20220623-4c544545.pth')
>>> inputs = torch.rand(1, 3, 224, 224).to(model.data_preprocessor.device)
>>> # To get classification scores.
>>> out = model(inputs)
>>> print(out.shape)
torch.Size([1, 1000])
>>> # To extract features.
>>> outs = model.extract_feat(inputs)
>>> # The patch token features
>>> print(outs[0][0].shape)
torch.Size([1, 768, 14, 14])
>>> # The cls token features
>>> print(outs[0][1].shape)
torch.Size([1, 768])
```
**Train/Test Command**

Place the ImageNet dataset in the `data/imagenet/` directory, or prepare other datasets according to the [docs](https://mmclassification.readthedocs.io/en/1.x/user_guides/dataset_prepare.html#prepare-dataset).

Train:

```shell
python tools/train.py configs/vision_transformer/vit-base-p16_pt-32xb128-mae_in1k-224.py
```
Test:

```shell
python tools/test.py configs/vision_transformer/vit-base-p16_pt-32xb128-mae_in1k-224.py https://download.openmmlab.com/mmclassification/v0/vit/vit-base-p16_pt-32xb128-mae_in1k_20220623-4c544545.pth
```
<!-- [TABS-END] -->

For more configurable parameters, please refer to the [API](https://mmclassification.readthedocs.io/en/1.x/api/generated/mmcls.models.backbones.VisionTransformer.html#mmcls.models.backbones.VisionTransformer).
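
As with the feature-extraction example above, the backbone itself can be built from a config dict. The snippet below is a sketch that assumes the mmcls 1.x registry interface and ViT-B/16 keyword arguments (`arch`, `img_size`, `patch_size`); the API page lists the authoritative parameters.

```python
import torch
from mmcls.registry import MODELS

# Assumed config for a ViT-B/16 backbone; see the API link for exact arguments.
backbone = MODELS.build(
    dict(type='VisionTransformer', arch='base', img_size=224, patch_size=16))
backbone.eval()

with torch.no_grad():
    outs = backbone(torch.rand(1, 3, 224, 224))
# As in the example above, each output stage contains the patch tokens and the
# class token, e.g. (1, 768, 14, 14) and (1, 768).
print([tuple(t.shape) for t in outs[0]])
```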
## Results and models
The training of Vision Transformers is divided into two steps. The first