
DetNAS

DetNAS: Backbone Search for Object Detection

Abstract

Object detectors are usually equipped with backbone networks designed for image classification. This might be sub-optimal because of the gap between the tasks of image classification and object detection. In this work, we present DetNAS, which uses Neural Architecture Search (NAS) to design better backbones for object detection. This is non-trivial because detection training typically needs ImageNet pre-training, while NAS systems require accuracies on the target detection task as supervisory signals. Based on the technique of a one-shot supernet, which contains all possible networks in the search space, we propose a framework for backbone search on object detection. We train the supernet under the typical detector training schedule: ImageNet pre-training and detection fine-tuning. Then, the architecture search is performed on the trained supernet, using the detection task as the guidance. This framework makes NAS on backbones very efficient. In experiments, we show the effectiveness of DetNAS on various detectors, for instance, the one-stage RetinaNet and the two-stage FPN. We empirically find that networks searched on object detection show consistent superiority compared to those searched on ImageNet classification. The resulting architecture achieves superior performance to hand-crafted networks on COCO with much less FLOPs complexity.
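The one-shot supernet idea above (inherited from SPOS) can be sketched in a few lines: at every training iteration a single path through the supernet is sampled uniformly and only that path is updated, so all candidate subnets share, and jointly train, the supernet weights. This is an illustrative sketch, not MMRazor's actual API; the layer/choice counts and function names are assumptions.

```python
# Hedged sketch of one-shot supernet training with uniform single-path
# sampling (SPOS-style, which DetNAS builds on). All names are illustrative.
import random

NUM_LAYERS = 20   # assumed number of searchable ShuffleNetV2 blocks
NUM_CHOICES = 4   # assumed number of candidate blocks per layer


def sample_subnet():
    """Uniformly sample one candidate block per searchable layer."""
    return [random.randrange(NUM_CHOICES) for _ in range(NUM_LAYERS)]


def train_supernet(num_iters, train_step):
    """Each iteration activates one random path and trains only it, so the
    supernet weights are shared by every candidate subnet."""
    for _ in range(num_iters):
        path = sample_subnet()
        train_step(path)  # forward/backward through the chosen path only
```

In the real pipeline, `train_step` would run a detector forward/backward pass through the sampled path; here it is left as a callback so the sampling logic stands alone.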

(Figure: DetNAS pipeline)

Introduction

Step 1: Supernet pre-training on ImageNet

python ./tools/mmcls/train_mmcls.py \
  configs/nas/detnas/detnas_supernet_shufflenetv2_8xb128_in1k.py \
  --work-dir $WORK_DIR

Step 2: Supernet fine-tuning on COCO

python ./tools/mmdet/train_mmdet.py \
  configs/nas/detnas/detnas_supernet_frcnn_shufflenetv2_fpn_1x_coco.py \
  --work-dir $WORK_DIR \
  --cfg-options load_from=$STEP1_CKPT

Step 3: Search for subnet on the trained supernet

python ./tools/mmdet/search_mmdet.py \
  configs/nas/detnas/detnas_evolution_search_frcnn_shufflenetv2_fpn_coco.py \
  $STEP2_CKPT \
  --work-dir $WORK_DIR

Step 4: Subnet retraining on ImageNet

python ./tools/mmcls/train_mmcls.py \
  configs/nas/detnas/detnas_subnet_shufflenetv2_8xb128_in1k.py \
  --work-dir $WORK_DIR \
  --cfg-options algorithm.mutable_cfg=$STEP3_SUBNET_YAML  # or modify the config directly

Step 5: Subnet fine-tuning on COCO

python ./tools/mmdet/train_mmdet.py \
  configs/nas/detnas/detnas_subnet_frcnn_shufflenetv2_fpn_1x_coco.py \
  --work-dir $WORK_DIR \
  --cfg-options algorithm.mutable_cfg=$STEP3_SUBNET_YAML load_from=$STEP4_CKPT  # or modify the config directly

Results and models

| Dataset | Supernet | Subnet | Params (M) | FLOPs (G) | mAP | Config | Download | Remarks |
| ------- | -------- | ------ | ---------- | --------- | --- | ------ | -------- | ------- |
| COCO | FRCNN-ShuffleNetV2 | mutable | 3.35 (backbone) | 0.34 (backbone) | 37.5 | config | pretrain \| model \| log | MMRazor searched |

Note:

  1. The experiment settings of DetNAS are similar to those of SPOS, but our training dataset is COCO2017 rather than COCO2014.
  2. We also retrained the official subnet with the same experiment settings; its final mAP is 36.9.

Citation

@article{chen2019detnas,
  title={Detnas: Backbone search for object detection},
  author={Chen, Yukang and Yang, Tong and Zhang, Xiangyu and Meng, Gaofeng and Xiao, Xinyu and Sun, Jian},
  journal={Advances in Neural Information Processing Systems},
  volume={32},
  pages={6642--6652},
  year={2019}
}