
DARTS

DARTS: Differentiable Architecture Search

Abstract

This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques. Our implementation has been made publicly available to facilitate further research on efficient architecture search algorithms.
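
The core idea is to make the categorical choice of operations continuous. In the paper's standard notation (restated here for context, independent of this implementation), each edge (i, j) outputs a softmax-weighted mixture of the candidate operations in O, and search becomes a bilevel optimization over architecture parameters α and network weights w:

\bar{o}^{(i,j)}(x) = \sum_{o \in \mathcal{O}} \frac{\exp(\alpha_o^{(i,j)})}{\sum_{o' \in \mathcal{O}} \exp(\alpha_{o'}^{(i,j)})} \, o(x)

\min_{\alpha} \ \mathcal{L}_{val}(w^{*}(\alpha), \alpha) \quad \text{s.t.} \quad w^{*}(\alpha) = \operatorname*{arg\,min}_{w} \ \mathcal{L}_{train}(w, \alpha)

After search, each mixed operation is discretized to its strongest candidate, o^{(i,j)} = \operatorname*{arg\,max}_{o \in \mathcal{O}} \alpha_o^{(i,j)}, which yields the subnet that is retrained below.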

[Figure: DARTS pipeline]

Get Started
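
The commands below share a few shell variables. The values here are placeholders, not paths shipped with the repo; point them at your own locations before running:

# Placeholder values -- adjust to your environment.
WORK_DIR=./work_dirs/darts            # where logs and checkpoints are written
STEP1_CKPT=path/to/supernet_ckpt.pth  # supernet checkpoint saved in Step 1
STEP2_CKPT=path/to/subnet_ckpt.pth    # retrained subnet checkpoint saved in Step 2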

Step 1: Supernet training on CIFAR-10

CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 ./tools/dist_train.sh \
  configs/nas/mmcls/darts/darts_supernet_unroll_1xb96_cifar10.py 4 \
  --work-dir $WORK_DIR
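
tools/dist_train.sh is the usual OpenMMLab wrapper around tools/train.py, so a single-GPU run of the same config should look like the following (a sketch, assuming the standard tools/train.py entry point):

python tools/train.py \
  configs/nas/mmcls/darts/darts_supernet_unroll_1xb96_cifar10.py \
  --work-dir $WORK_DIR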

Step 2: Subnet retraining on CIFAR-10

The subnet weights are initialized from the supernet checkpoint saved in Step 1:

CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 ./tools/dist_train.sh \
  configs/nas/mmcls/darts/darts_subnet_1xb96_cifar10_2.0.py 4 \
  --work-dir $WORK_DIR \
  --cfg-options model.init_cfg.checkpoint=$STEP1_CKPT

Step 3: Subnet inference on CIFAR-10

The positional checkpoint argument is the literal none; the retrained weights from Step 2 are loaded through the model.init_cfg.checkpoint override instead.

CUDA_VISIBLE_DEVICES=0 PORT=29500 ./tools/dist_test.sh \
  configs/nas/mmcls/darts/darts_subnet_1xb96_cifar10_2.0.py \
  none 1 --work-dir $WORK_DIR \
  --cfg-options model.init_cfg.checkpoint=$STEP2_CKPT

Results and models

Supernet

Dataset    Unroll   Config   Download
CIFAR-10   True     config   model | log
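
Unroll refers to the second-order variant of DARTS (the unroll option in the supernet config above): instead of solving the inner problem exactly, w*(α) is approximated by a single training step, and the architecture gradient becomes

\nabla_{\alpha} \mathcal{L}_{val}\big(w - \xi \nabla_{w} \mathcal{L}_{train}(w, \alpha), \ \alpha\big),

where ξ is the inner-loop learning rate. Setting Unroll to False would reduce this to the cheaper first-order approximation (ξ = 0) described in the paper.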

Subnet

Dataset    Params (M)   FLOPs (G)   Top-1 Acc (%)   Top-5 Acc (%)   Subnet    Config   Download      Remarks
CIFAR-10   3.42         0.48        97.32           99.94           mutable   config   model | log   MMRazor searched
CIFAR-10   3.83         0.55        97.27           99.98           mutable   config   model | log   official

Citation

@inproceedings{liu2018darts,
  title={DARTS: Differentiable Architecture Search},
  author={Liu, Hanxiao and Simonyan, Karen and Yang, Yiming},
  booktitle={International Conference on Learning Representations},
  year={2019}
}