
CAE

Context Autoencoder for Self-Supervised Representation Learning

Abstract

We present a novel masked image modeling (MIM) approach, context autoencoder (CAE), for self-supervised learning. We randomly partition the image into two sets: visible patches and masked patches. The CAE architecture consists of: (i) an encoder that takes visible patches as input and outputs their latent representations, (ii) a latent context regressor that predicts the masked patch representations from the visible patch representations, which are not updated inside this regressor, (iii) a decoder that takes the estimated masked patch representations as input and makes predictions for the masked patches, and (iv) an alignment module that aligns the estimated masked patch representations with the masked patch representations computed from the encoder. In comparison to previous MIM methods that couple the encoding and decoding roles, e.g., using a single module in BEiT, our approach attempts to separate the encoding role (content understanding) from the decoding role (making predictions for masked patches) using different modules, improving the content understanding capability. In addition, our approach makes predictions from the visible patches to the masked patches in the latent representation space, which is expected to carry semantics. We also explain why contrastive pretraining and supervised pretraining perform similarly and why MIM potentially performs better. We demonstrate the effectiveness of CAE through superior transfer performance on downstream tasks: semantic segmentation, object detection, and instance segmentation.
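
The four components above can be illustrated with a toy PyTorch-style sketch. This is only a hedged illustration: the module names, shapes, use of nn.TransformerEncoder, and the mean-squared-error alignment loss are assumptions made for readability, not the MMSelfSup implementation.

```python
# Toy PyTorch-style sketch of the four CAE components described above.
# All names, shapes, and layer choices are illustrative assumptions,
# not the MMSelfSup implementation.
import torch
import torch.nn as nn


def transformer_blocks(dim, nhead, depth):
    layer = nn.TransformerEncoderLayer(dim, nhead, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=depth)


class CAESketch(nn.Module):
    def __init__(self, dim=768, patch_pixels=16 * 16 * 3, vocab=8192, depth=2):
        super().__init__()
        self.patch_embed = nn.Linear(patch_pixels, dim)      # flattened RGB patches -> latents
        self.encoder = transformer_blocks(dim, 8, depth)     # (i) encodes visible patches
        self.regressor = transformer_blocks(dim, 8, depth)   # (ii) predicts masked-patch latents
        self.decoder = transformer_blocks(dim, 8, depth)     # (iii) predicts masked-patch targets
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.head = nn.Linear(dim, vocab)                    # e.g. logits over visual tokens

    def forward(self, visible, masked):
        # visible: (B, N_v, patch_pixels), masked: (B, N_m, patch_pixels)
        z_v = self.encoder(self.patch_embed(visible))

        # (ii) Regress masked-patch latents from visible latents; the visible
        # latents are detached so they are not updated inside the regressor.
        queries = self.mask_token.expand(masked.size(0), masked.size(1), -1)
        z_all = self.regressor(torch.cat([z_v.detach(), queries], dim=1))
        z_m_pred = z_all[:, z_v.size(1):]

        # (iv) Alignment target: masked-patch latents from the encoder itself.
        with torch.no_grad():
            z_m_target = self.encoder(self.patch_embed(masked))
        align_loss = nn.functional.mse_loss(z_m_pred, z_m_target)

        # (iii) Decode the estimated latents into predictions for masked patches.
        logits = self.head(self.decoder(z_m_pred))
        return logits, align_loss
```

Note that the sketch only mirrors the data flow: in the paper, the latent context regressor uses cross-attention from mask queries to the visible-patch latents, and the decoder predicts discrete visual tokens produced by the DALL-E tokenizer.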

Prerequisite

Create a new folder named cae_ckpt under the root directory and download the weights of the dalle encoder into that folder.
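
A minimal sketch of this setup step is shown below. The weight file name and the download URL are placeholders to be replaced with the link given in the MMSelfSup documentation, not the real address.

```python
# Hypothetical setup sketch: creates cae_ckpt/ and downloads the dalle
# encoder weights into it. WEIGHT_URL and the file name are placeholders,
# not the official ones.
import os
import urllib.request

WEIGHT_URL = "https://example.com/dalle_encoder.pth"  # placeholder URL
ckpt_dir = "cae_ckpt"

os.makedirs(ckpt_dir, exist_ok=True)
urllib.request.urlretrieve(WEIGHT_URL, os.path.join(ckpt_dir, "dalle_encoder.pth"))
```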

Models and Benchmarks

Here, we report the results of the model pre-trained on ImageNet-1k for 300 epochs. The details are listed below:

| Backbone | Pre-train epochs | Fine-tuning Top-1 (%) | Pre-train Config | Fine-tuning Config | Download |
| :------: | :--------------: | :-------------------: | :--------------: | :----------------: | :----------: |
| ViT-B/16 | 300 | 83.2 | config | config | model \| log |

Citation

@article{CAE,
  title={Context Autoencoder for Self-Supervised Representation Learning},
  author={Xiaokang Chen and Mingyu Ding and Xiaodi Wang and Ying Xin and Shentong Mo and
  Yunhao Wang and Shumin Han and Ping Luo and Gang Zeng and Jingdong Wang},
  journal={ArXiv},
  year={2022}
}