
EVA

EVA: Exploring the Limits of Masked Visual Representation Learning at Scale

Abstract

We launch EVA, a vision-centric foundation model to explore the limits of visual representation at scale using only publicly accessible data. EVA is a vanilla ViT pre-trained to reconstruct the masked-out, image-text-aligned vision features conditioned on visible image patches. Via this pretext task, we can efficiently scale up EVA to one billion parameters, and it sets new records on a broad range of representative vision downstream tasks, such as image recognition, video action recognition, object detection, instance segmentation and semantic segmentation, without heavy supervised training. Moreover, we observe that quantitative changes in scaling EVA result in qualitative changes in transfer learning performance that are not present in other models. For instance, EVA takes a great leap in the challenging large-vocabulary instance segmentation task: our model achieves almost the same state-of-the-art performance on the LVISv1.0 dataset, with over a thousand categories, as on the COCO dataset, with only eighty categories. Beyond a pure vision encoder, EVA can also serve as a vision-centric, multi-modal pivot to connect images and text. We find that initializing the vision tower of a giant CLIP from EVA can greatly stabilize the training and outperform its training-from-scratch counterpart with far fewer samples and less compute, providing a new direction for scaling up and accelerating the costly training of multi-modal foundation models.
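
The pretext task in a nutshell: a ViT student predicts the vision features of a frozen CLIP teacher for the patches that were masked out. The snippet below is a minimal, illustrative sketch of such a masked feature-regression objective, not the mmpretrain implementation; the cosine form of the loss, the function name and all tensor shapes are assumptions made for clarity.

import torch
import torch.nn.functional as F

def masked_feature_regression_loss(student_feats, clip_feats, mask):
    """Illustrative EVA-style pretext loss (sketch only).

    student_feats: (B, N, C) patch features predicted by the ViT student.
    clip_feats:    (B, N, C) vision features from a frozen CLIP teacher.
    mask:          (B, N) boolean tensor, True where a patch was masked out.
    """
    # Regress the normalized teacher features of the masked patches only.
    pred = F.normalize(student_feats[mask], dim=-1)
    target = F.normalize(clip_feats[mask], dim=-1)
    return (1 - (pred * target).sum(dim=-1)).mean()

# Toy example: 2 images, 196 patches, 768-dim features, ~40% of patches masked.
student = torch.randn(2, 196, 768)
teacher = torch.randn(2, 196, 768)
mask = torch.rand(2, 196) < 0.4
print(masked_feature_regression_loss(student, teacher, mask))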

How to use it?

Predict image

from mmpretrain import inference_model

predict = inference_model('beit-g-p14_eva-30m-in21k-pre_3rdparty_in1k-336px', 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
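
To classify several images at once, the inferencer API can be built once and reused across calls. The lines below are only a sketch; the second image path is a placeholder.

from mmpretrain import ImageClassificationInferencer

# Build the inferencer once, then feed it a list of image paths.
inferencer = ImageClassificationInferencer('beit-g-p14_eva-30m-in21k-pre_3rdparty_in1k-336px')
results = inferencer(['demo/bird.JPEG', 'path/to/your/image.jpg'], batch_size=2)
for result in results:
    print(result['pred_class'], result['pred_score'])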

Use the model

import torch
from mmpretrain import get_model

model = get_model('beit-g-p14_3rdparty-eva_30m', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))
# To extract features.
feats = model.extract_feat(inputs)
print(type(feats))
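
extract_feat returns the backbone outputs, which usually arrive as a tuple with one tensor per selected output stage. A quick way to inspect them (the printed shapes depend on the model's output type and neck, so treat them as illustrative):

# `feats` is typically a tuple of tensors; print each stage's shape.
for i, feat in enumerate(feats):
    print(f'stage {i}: {tuple(feat.shape)}')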

Train/Test Command

Prepare your dataset according to the docs.

Train:

python tools/train.py configs/eva/eva-mae-style_vit-base-p16_16xb256-coslr-400e_in1k.py
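
For multi-GPU training, the standard launcher script shipped in tools/ can be used as well; the example below assumes 8 GPUs on a single machine.

bash tools/dist_train.sh configs/eva/eva-mae-style_vit-base-p16_16xb256-coslr-400e_in1k.py 8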

Test:

python tools/test.py configs/eva/eva-g-p14_8xb16_in1k-336px.py https://download.openmmlab.com/mmclassification/v0/eva/eva-g-p14_30m-in21k-pre_3rdparty_in1k-336px_20221213-210f9071.pth
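
Likewise, evaluation can be distributed across GPUs with the companion launcher script, again assuming 8 GPUs.

bash tools/dist_test.sh configs/eva/eva-g-p14_8xb16_in1k-336px.py https://download.openmmlab.com/mmclassification/v0/eva/eva-g-p14_30m-in21k-pre_3rdparty_in1k-336px_20221213-210f9071.pth 8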

Models and results

Pretrained models

Model | Params (M) | Flops (G) | Config | Download
beit-g-p14_3rdparty-eva_30m* | 1011.60 | 267.17 | config | model
beit-g-p16_3rdparty-eva_30m* | 1011.32 | 203.52 | config | model
beit-g-p14_eva-30m-pre_3rdparty_in21k* | 1011.60 | 267.17 | config | model
beit-l-p14_3rdparty-eva_in21k* | 303.18 | 81.08 | config | model
beit-l-p14_eva-pre_3rdparty_in21k* | 303.18 | 81.08 | config | model
eva-mae-style_vit-base-p16_16xb256-coslr-400e_in1k | N/A | N/A | config | model / log

Models with * are converted from the official repo. The config files of these models are only for inference. We haven't reproduced the training results.

Image Classification on ImageNet-1k

Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Top-5 (%) | Config | Download
beit-g-p14_eva-30m-in21k-pre_3rdparty_in1k-336px* | EVA merged-30M ImageNet-21k | 1013.01 | 620.64 | 89.61 | 98.93 | config | model
beit-g-p14_eva-30m-in21k-pre_3rdparty_in1k-560px* | EVA merged-30M ImageNet-21k | 1014.45 | 1906.76 | 89.71 | 98.96 | config | model
beit-l-p14_eva-pre_3rdparty_in1k-336px* | EVA | 304.53 | 191.10 | 88.66 | 98.75 | config | model
beit-l-p14_eva-in21k-pre_3rdparty_in1k-336px* | EVA ImageNet-21k | 304.53 | 191.10 | 89.17 | 98.86 | config | model
beit-l-p14_eva-pre_3rdparty_in1k-196px* | EVA | 304.14 | 61.57 | 87.94 | 98.50 | config | model
beit-l-p14_eva-in21k-pre_3rdparty_in1k-196px* | EVA ImageNet-21k | 304.14 | 61.57 | 88.58 | 98.65 | config | model
vit-base-p16_eva-mae-style-pre_8xb128-coslr-100e_in1k | EVA mae-style | N/A | N/A | 83.70 | N/A | config | model / log
vit-base-p16_eva-mae-style-pre_8xb2048-linear-coslr-100e_in1k | EVA mae-style | N/A | N/A | 69.00 | N/A | config | model / log

Models with * are converted from the official repo. The config files of these models are only for inference. We haven't reproduced the training results.

Citation

@article{EVA,
  title={EVA: Exploring the Limits of Masked Visual Representation Learning at Scale},
  author={Fang, Yuxin and Wang, Wen and Xie, Binhui and Sun, Quan and Wu, Ledell and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue},
  journal={arXiv preprint arXiv:2211.07636},
  year={2022}
}