MAE
Abstract
This paper shows that masked autoencoders (MAE) are scalable self-supervised learners for computer vision. Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels. It is based on two core designs. First, we develop an asymmetric encoder-decoder architecture, with an encoder that operates only on the visible subset of patches (without mask tokens), along with a lightweight decoder that reconstructs the original image from the latent representation and mask tokens. Second, we find that masking a high proportion of the input image, e.g., 75%, yields a nontrivial and meaningful self-supervisory task. Coupling these two designs enables us to train large models efficiently and effectively: we accelerate training (by 3× or more) and improve accuracy. Our scalable approach allows for learning high-capacity models that generalize well: e.g., a vanilla ViT-Huge model achieves the best accuracy (87.8%) among methods that use only ImageNet-1K data. Transfer performance in downstream tasks outperforms supervised pretraining and shows promising scaling behavior.
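The core mechanism described above, per-sample random masking that hides 75% of the patches so the encoder only ever sees the visible 25%, can be sketched in a few lines of PyTorch. The snippet below is an illustrative re-implementation for intuition, not the mmpretrain code; the function name random_masking and the tensor shapes are assumptions chosen for the example.

import torch

def random_masking(patches: torch.Tensor, mask_ratio: float = 0.75):
    """Illustrative per-sample random masking, as described in the MAE abstract.

    patches: (B, N, D) patch embeddings. Returns the visible subset, a binary
    mask over the original patch order (1 = masked), and the indices needed to
    restore the original order for the decoder.
    """
    B, N, D = patches.shape
    num_keep = int(N * (1 - mask_ratio))

    # Independent random permutation of the patch indices for every sample.
    noise = torch.rand(B, N, device=patches.device)
    ids_shuffle = torch.argsort(noise, dim=1)
    ids_restore = torch.argsort(ids_shuffle, dim=1)

    # Keep only the first `num_keep` shuffled patches -- the visible 25%.
    ids_keep = ids_shuffle[:, :num_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))

    # Binary mask in the original patch order (0 = kept, 1 = masked).
    mask = torch.ones(B, N, device=patches.device)
    mask[:, :num_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)
    return visible, mask, ids_restore

# ViT-B/16 on a 224x224 input: 14 x 14 = 196 patches of dimension 768.
x = torch.rand(2, 196, 768)
visible, mask, ids_restore = random_masking(x)
print(visible.shape)  # torch.Size([2, 49, 768]): only 25% of patches reach the encoder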

How to use it?
Predict image
from mmpretrain import inference_model
predict = inference_model('vit-base-p16_mae-300e-pre_8xb2048-linear-coslr-90e_in1k', 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
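The model name used here, vit-base-p16_mae-300e-pre_8xb2048-linear-coslr-90e_in1k, is the linear-probing classifier from the ImageNet-1k benchmark table below; any of the fine-tuned classification models listed there can be passed to inference_model in the same way.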
Use the model
import torch
from mmpretrain import get_model
model = get_model('mae_vit-base-p16_8xb512-amp-coslr-300e_in1k', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))
# To extract features.
feats = model.extract_feat(inputs)
print(type(feats))
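Note that mae_vit-base-p16_8xb512-amp-coslr-300e_in1k is the self-supervised pretraining model itself rather than a classifier, so extract_feat, which returns the encoder's features, is usually the relevant entry point when reusing the pretrained backbone for downstream tasks.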
Train/Test Command
Prepare your dataset according to the docs.
Train:
python tools/train.py configs/mae/mae_vit-base-p16_8xb512-amp-coslr-300e_in1k.py
Test:
python tools/test.py configs/mae/benchmarks/vit-base-p16_8xb2048-linear-coslr-90e_in1k.py None
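The commands above launch a single process, which is convenient for debugging. The config names encode the intended distributed setting (8xb512 means 8 GPUs with 512 samples per GPU), so in practice the standard OpenMMLab distributed launchers are typically used instead; a sketch, assuming an 8-GPU machine and a locally downloaded checkpoint path:

bash tools/dist_train.sh configs/mae/mae_vit-base-p16_8xb512-amp-coslr-300e_in1k.py 8
bash tools/dist_test.sh configs/mae/benchmarks/vit-base-p16_8xb2048-linear-coslr-90e_in1k.py /path/to/checkpoint.pth 8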
Models and results
Pretrained models
Model | Params (M) | Flops (G) | Config | Download |
---|---|---|---|---|
mae_vit-base-p16_8xb512-amp-coslr-300e_in1k | N/A | N/A | config | model \| log |
mae_vit-base-p16_8xb512-amp-coslr-400e_in1k | N/A | N/A | config | model \| log |
mae_vit-base-p16_8xb512-amp-coslr-800e_in1k | N/A | N/A | config | model \| log |
mae_vit-base-p16_8xb512-amp-coslr-1600e_in1k | N/A | N/A | config | model \| log |
mae_vit-large-p16_8xb512-amp-coslr-400e_in1k | N/A | N/A | config | model \| log |
mae_vit-large-p16_8xb512-amp-coslr-800e_in1k | N/A | N/A | config | model \| log |
mae_vit-large-p16_8xb512-amp-coslr-1600e_in1k | N/A | N/A | config | model \| log |
mae_vit-huge-p14_8xb512-amp-coslr-1600e_in1k | N/A | N/A | config | model \| log |
Image Classification on ImageNet-1k
Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Config | Download |
---|---|---|---|---|---|---|
vit-base-p16_mae-300e-pre_8xb2048-linear-coslr-90e_in1k | MAE 300-Epochs | N/A | N/A | 60.80 | config | N/A |
vit-base-p16_mae-300e-pre_8xb128-coslr-100e_in1k | MAE 300-Epochs | N/A | N/A | 83.10 | config | N/A |
vit-base-p16_mae-400e-pre_8xb2048-linear-coslr-90e_in1k | MAE 400-Epochs | N/A | N/A | 62.50 | config | N/A |
vit-base-p16_mae-400e-pre_8xb128-coslr-100e_in1k | MAE 400-Epochs | N/A | N/A | 83.30 | config | N/A |
vit-base-p16_mae-800e-pre_8xb2048-linear-coslr-90e_in1k | MAE 800-Epochs | N/A | N/A | 65.10 | config | N/A |
vit-base-p16_mae-800e-pre_8xb128-coslr-100e_in1k | MAE 800-Epochs | N/A | N/A | 83.30 | config | N/A |
vit-base-p16_mae-1600e-pre_8xb2048-linear-coslr-90e_in1k | MAE 1600-Epochs | N/A | N/A | 67.10 | config | N/A |
vit-base-p16_mae-1600e-pre_8xb128-coslr-100e_in1k | MAE 1600-Epochs | N/A | N/A | 83.50 | config | model \| log |
vit-large-p16_mae-400e-pre_8xb2048-linear-coslr-90e_in1k | MAE 400-Epochs | N/A | N/A | 70.70 | config | N/A |
vit-large-p16_mae-400e-pre_8xb128-coslr-50e_in1k | MAE 400-Epochs | N/A | N/A | 85.20 | config | N/A |
vit-large-p16_mae-800e-pre_8xb2048-linear-coslr-90e_in1k | MAE 800-Epochs | N/A | N/A | 73.70 | config | N/A |
vit-large-p16_mae-800e-pre_8xb128-coslr-50e_in1k | MAE 800-Epochs | N/A | N/A | 85.40 | config | N/A |
vit-large-p16_mae-1600e-pre_8xb2048-linear-coslr-90e_in1k | MAE 1600-Epochs | N/A | N/A | 75.50 | config | N/A |
vit-large-p16_mae-1600e-pre_8xb128-coslr-50e_in1k | MAE 1600-Epochs | N/A | N/A | 85.70 | config | N/A |
vit-huge-p14_mae-1600e-pre_8xb128-coslr-50e_in1k | MAE 1600-Epochs | N/A | N/A | 86.90 | config | model \| log |
vit-huge-p14_mae-1600e-pre_32xb8-coslr-50e_in1k-448px | MAE 1600-Epochs | N/A | N/A | 87.30 | config | model \| log |
Citation
@article{He2021MaskedAA,
  title={Masked Autoencoders Are Scalable Vision Learners},
  author={Kaiming He and Xinlei Chen and Saining Xie and Yanghao Li and Piotr Doll{\'a}r and Ross B. Girshick},
  journal={arXiv},
  year={2021}
}