📘Documentation | 🛠️Installation | 👀Model Zoo | 🆕Update News | 🤔Reporting Issues
Introduction
English | 简体中文
MMSelfSup is an open source self-supervised representation learning toolbox based on PyTorch. It is a part of the OpenMMLab project.
The master branch works with PyTorch 1.5 or higher.
Major features
- Methods All in One

  MMSelfSup provides state-of-the-art methods in self-supervised learning. For a fair comparison across all benchmarks, most pre-training methods are run under the same settings.

- Modular Design

  MMSelfSup follows the modular design of other OpenMMLab projects, which makes it flexible and convenient for users to build their own algorithms (a minimal config sketch follows this list).

- Standardized Benchmarks

  MMSelfSup standardizes benchmarks including logistic regression, SVM / low-shot SVM on linearly probed features, semi-supervised classification, object detection and semantic segmentation.

- Compatibility

  Since MMSelfSup adopts the same module and interface design as other OpenMMLab projects, it supports smooth evaluation on downstream tasks such as object detection and segmentation with those projects.
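As a hedged illustration of the modular design, the sketch below shows the shape of the model section in an mmcv-style config, which assembles an algorithm from a registered backbone, neck and head. The module names and parameter values here (e.g. NonLinearNeck, ContrastiveHead, temperature) are plausible examples rather than an official config; the maintained settings live under configs/.

```python
# Illustrative model config in the mmcv style (a sketch, not an official
# config file; see configs/ for the maintained settings of each algorithm).
model = dict(
    type='SimCLR',               # algorithm class registered in mmselfsup.models
    backbone=dict(
        type='ResNet',           # feature extractor
        depth=50,
        norm_cfg=dict(type='SyncBN')),
    neck=dict(
        type='NonLinearNeck',    # projection MLP
        in_channels=2048,
        hid_channels=2048,
        out_channels=128,
        with_avg_pool=True),
    head=dict(
        type='ContrastiveHead',  # InfoNCE-style contrastive loss
        temperature=0.1))
```

Swapping any of these entries for another registered module changes the algorithm without touching the training code, which is what the modular design is meant to enable.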
License
This project is released under the Apache 2.0 license.
ChangeLog
MMSelfSup v0.7.0 was released on 03/03/2022.
Highlights of the new version:
- Support MAE
- Add Places205 benchmarks
- Add Windows testing in workflows
Please refer to changelog.md for details and release history.
Differences between MMSelfSup and OpenSelfSup codebases can be found in compatibility.md.
Model Zoo and Benchmark
Model Zoo
Please refer to model_zoo.md for a comprehensive set of pre-trained models and benchmarks.
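As a hedged sketch of reusing a pre-trained model downstream, the snippet below builds a backbone and loads a checkpoint with mmcv's load_checkpoint. It assumes mmcv and mmselfsup are installed and that the checkpoint holds backbone-only weights; the checkpoint path is a placeholder, not a real model-zoo link.

```python
# Sketch: load pre-trained backbone weights for downstream use.
# 'path/to/backbone_weights.pth' is a placeholder; use a checkpoint from
# model_zoo.md (extracted to backbone-only weights if necessary).
import torch
from mmcv.runner import load_checkpoint
from mmselfsup.models import build_backbone

backbone = build_backbone(dict(type='ResNet', depth=50))
load_checkpoint(backbone, 'path/to/backbone_weights.pth', map_location='cpu')
backbone.eval()
with torch.no_grad():
    feats = backbone(torch.randn(1, 3, 224, 224))  # tuple of feature maps
```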
Supported algorithms:
- Relative Location (ICCV'2015)
- Rotation Prediction (ICLR'2018)
- DeepCluster (ECCV'2018)
- NPID (CVPR'2018)
- ODC (CVPR'2020)
- MoCo v1 (CVPR'2020)
- SimCLR (ICML'2020)
- MoCo v2 (ArXiv'2020)
- BYOL (NeurIPS'2020)
- SwAV (NeurIPS'2020)
- DenseCL (CVPR'2021)
- SimSiam (CVPR'2021)
- MoCo v3 (ICCV'2021)
- MAE
More algorithms are in our plan.
Benchmark
Benchmarks | Setting |
---|---|
ImageNet Linear Classification (Multi-head) | Goyal2019 |
ImageNet Linear Classification (Last) | |
ImageNet Semi-Sup Classification | |
Places205 Linear Classification (Multi-head) | Goyal2019 |
iNaturalist2018 Linear Classification (Multi-head) | Goyal2019 |
PASCAL VOC07 SVM | Goyal2019 |
PASCAL VOC07 Low-shot SVM | Goyal2019 |
PASCAL VOC07+12 Object Detection | MoCo |
COCO17 Object Detection | MoCo |
Cityscapes Segmentation | MMSeg |
PASCAL VOC12 Aug Segmentation | MMSeg |
Installation
Please refer to install.md for installation and prepare_data.md for dataset preparation.
Get Started
Please see getting_started.md for the basic usage of MMSelfSup.
We also provide tutorials for more details (a minimal usage sketch in Python follows this list):
- config
- add new dataset
- data pipeline
- add new module
- customize schedules
- customize runtime
- benchmarks
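As a minimal usage sketch (assuming mmcv and mmselfsup are installed; the config path below is one example and the exact filename may differ between releases):

```python
# Sketch: build an algorithm from a config file shipped in configs/.
from mmcv import Config
from mmselfsup.models import build_algorithm

# Example path; pick any config under configs/selfsup/.
cfg = Config.fromfile(
    'configs/selfsup/simclr/simclr_resnet50_8xb32-coslr-200e_in1k.py')
model = build_algorithm(cfg.model)
print(type(model).__name__)
```

For actual training and evaluation, use the scripts under tools/ as described in getting_started.md.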
Citation
If you use this toolbox or benchmark in your research, please cite this project.
@misc{mmselfsup2021,
title={{MMSelfSup}: OpenMMLab Self-Supervised Learning Toolbox and Benchmark},
author={MMSelfSup Contributors},
howpublished={\url{https://github.com/open-mmlab/mmselfsup}},
year={2021}
}
Contributing
We appreciate all contributions to improve MMSelfSup. Please refer to CONTRIBUTING.md for the contributing guideline.
Acknowledgement
Remarks:
- MMSelfSup originates from OpenSelfSup, and we appreciate all early contributions made to OpenSelfSup. A few contributors are listed here: Xiaohang Zhan, Jiahao Xie, Enze Xie, Xiangxiang Chu, Zijian He.
- The implementation of MoCo and the detection benchmark borrow the code from the official MoCo repository.
- The implementation of SwAV borrows the code from SwAV.
- The SVM benchmark borrows the code from fair_self_supervision_benchmark.
- mmselfsup/utils/clustering.py is borrowed from deepcluster.
Projects in OpenMMLab
- MMCV: OpenMMLab foundational library for computer vision.
- MIM: MIM installs OpenMMLab packages.
- MMClassification: OpenMMLab image classification toolbox and benchmark.
- MMDetection: OpenMMLab detection toolbox and benchmark.
- MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
- MMRotate: OpenMMLab rotated object detection toolbox and benchmark.
- MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
- MMOCR: OpenMMLab text detection, recognition, and understanding toolbox.
- MMPose: OpenMMLab pose estimation toolbox and benchmark.
- MMHuman3D: OpenMMLab 3D human parametric model toolbox and benchmark.
- MMSelfSup: OpenMMLab self-supervised learning toolbox and benchmark.
- MMRazor: OpenMMLab model compression toolbox and benchmark.
- MMFewShot: OpenMMLab fewshot learning toolbox and benchmark.
- MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
- MMTracking: OpenMMLab video perception toolbox and benchmark.
- MMFlow: OpenMMLab optical flow toolbox and benchmark.
- MMEditing: OpenMMLab image and video editing toolbox.
- MMGeneration: OpenMMLab image and video generative models toolbox.
- MMDeploy: OpenMMLab model deployment framework.