* [tmp] Update Dsnas
* [tmp] refactor arch_loss & flops_loss
* Update Dsnas & MMRAZOR_EVALUATOR:
1. finalized compute_loss & handle_grads in algorithm;
2. add MMRAZOR_EVALUATOR;
3. fix bugs.
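The arch_loss/flops_loss refactor above can be pictured with a small sketch. This is illustrative only: the names `arch_loss`, `flops_loss`, and `flops_loss_coef` mirror the commit messages, but the penalty formula is an assumption, not MMRazor's actual `compute_loss` implementation.

```python
def compute_total_arch_loss(arch_loss: float,
                            current_flops: float,
                            flops_budget: float,
                            flops_loss_coef: float = 0.01) -> float:
    """Hypothetical: fold a FLOPs penalty into a DSNAS-style arch loss.

    Architectures under budget pay no penalty; over-budget ones are
    penalized proportionally to how far they exceed the budget.
    """
    flops_loss = max(0.0, current_flops - flops_budget) / flops_budget
    return arch_loss + flops_loss_coef * flops_loss
```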
* Update lr scheduler & fix a bug:
1. update param_scheduler & lr_scheduler for dsnas;
2. fix a bug of switching to finetune stage.
* remove old evaluators
* update param_scheduler config
* merge dev-1.x into gy/estimator
* add flops_loss in Dsnas using ResourcesEstimator
* get resources before mutator.prepare_from_supernet
* delete unnecessary broadcast api from gml
* broadcast spec_modules_resources when estimating
* update early fix mechanism for Dsnas
* fix merge
* update units in estimator
* minor change
* fix data_preprocessor api
* add flops_loss_coef
* remove DsnasOptimWrapper
* fix bn eps and data_preprocessor
* fix bn weight decay bug
* add betas for mutator optimizer
* set diff_rank_seed=True for dsnas
* fix start_factor of lr during warmup
* remove .module in non-ddp mode
* add GlobalAveragePoolingWithDropout
* add UT for dsnas
* remove unnecessary channel adjustment for shufflenetv2
* update supernet configs
* delete unnecessary dropout
* delete unnecessary parts with minor changes on dsnas
* minor change on the flag of search stage
* update README and subnet configs
* add UT for OneHotMutableOP
* update benchmark test
* fix circle ci gpu config
* move delivery, recorder, tracer from structures to task modules
* move ops from models to models.architectures
* rename dynamic_op to dynamic_ops
* fix configs and metafiles
* remove some github ci
* fix configs / readme / metafile
Co-authored-by: gaojianfei <gaojianfei@sensetime.com>
* 1.Add FBKD
* 1.Add torch_connector and its ut. 2.Revise readme and fbkd config.
* 1.Revise UT for torch_connectors
* 1.Revise NonLocalBlock into a subclass of NonLocal2d in mmcv.cnn
* 1.Add ZSKT algorithm with zskt_generator, at_loss. 2.Add teacher_detach in kl_divergence.
* 1.Amend readme. 2.Revise UT bugs of test_graph and test_distill.
* 1.Amend docstring of zskt_generator
* 1.Add torch version judgment in test_distillation_loss.
* 1.Revise default batch_size to 1 in generators. 2.Revise mmcls.data to mmcls.structures
* 1.Rename function "at" to "calc_attention_matrix".
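The renamed `calc_attention_matrix` behind the AT loss can be sketched in miniature. Shapes and normalization here are assumptions (the real implementation works on 4-D torch tensors); the idea is the standard attention-transfer map: sum squared activations over channels, then L2-normalize.

```python
import math

def calc_attention_matrix(feature):
    """Illustrative: feature is a nested list [C][H][W].

    Returns the L2-normalized flat spatial attention map obtained by
    summing squared activations over the channel dimension.
    """
    channels = len(feature)
    h, w = len(feature[0]), len(feature[0][0])
    # Sum of squared activations over channels, flattened row-major.
    attn = [sum(feature[c][i][j] ** 2 for c in range(channels))
            for i in range(h) for j in range(w)]
    norm = math.sqrt(sum(a * a for a in attn)) or 1.0
    return [a / norm for a in attn]
```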
* Refactor ModelEstimator:
1. add EvaluatorLoop in engine.runners;
2. add estimator for structures (both subnet & supernet);
3. add layer_counter for each op.
* fix lint
* update estimator:
1. add ResourceEstimator based on BaseEstimator;
2. add notes & examples for ResourceEstimator & EvaluatorLoop usage;
3. fix a bug of latency test;
4. minor changes according to comments.
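The per-op counting a resource estimator performs can be sketched with a single counter. The function name and formula below are illustrative; MMRazor's ResourceEstimator registers such counters per layer type (the op_spec_counters mentioned below) and accumulates the totals over the model.

```python
def conv2d_flops_params(in_ch: int, out_ch: int, kernel: int,
                        out_h: int, out_w: int, bias: bool = True):
    """Hypothetical counter: (flops, params) for a plain Conv2d layer."""
    params = out_ch * in_ch * kernel * kernel + (out_ch if bias else 0)
    # One multiply-accumulate per kernel element per output position.
    flops = out_ch * in_ch * kernel * kernel * out_h * out_w
    return flops, params
```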
* add UT & fix a bug caused by UT
* add docstrings & remove old estimator
* update docstrings for op_spec_counters
* rename resource_evaluator_val_loop
* support adding resource attrs of each submodule in a measured model
* fix lint
* refactor estimator file structures
* support estimating resources for spec modules
* rm old UT
* update new estimator UT cases
* fix traversal range of the model
* cancel unit convert in accumulate_sub_module_flops_params
* use estimator_cfg to build ResourceEstimator
* fix a broadcast bug
* delete fixed input_shape
* add assertion and string-format-return when measuring spec_modules
* add UT for estimating spec_modules
* 1.Add DAFL, including config, DAFLLoss and readme. 2.Add DataFreeDistillation. 3.Add Generator, including base_generator and dafl_generator. 4.Add get_module_device and set_requires_grad functions in utils.
* 1.Amend the files that report errors in the mypy test under py37, including gather_tensors, datafree_distillation, base_generator. 2.Revise other lint errors.
* 1.Revise some docstrings.
* 1.Add UT for datafreedistillation. 2.Add all typing hints.
* 1.Add UT for generators and gather_tensors.
* 1.Add assert of batch_size in base_generator
* 1.Isort
Co-authored-by: zhangzhongyu.vendor <zhangzhongyu.vendor@sensetime.com>
* add dynamic bricks
* add dynamic conv2d test
* add tests for dynamic linear and dynamic norm
* add docstring for dynamic conv2d
* add docstring for dynamic linear
* add docstring for dynamic batchnorm
* Refactor the dynamic op (put more logic into the mixin)
* fix UT
* Fix UT (fileio was moved to mmengine)
* derived mutable adds choices property
* Unify the register interface of mutable in dynamic op
* Unify the getter interface of mutable in dynamic op
Co-authored-by: gaojianfei <gaojianfei@sensetime.com>
Co-authored-by: pppppM <gjf_mail@126.com>
* fix lint
* complement unittest for derived mutable
* add docstring for derived mutable
* add unittest for mutable value
* fix logger error
* fix according to comments
* not dump derived mutable when export
* add warning in `export_fix_subnet`
* fix __mul__ in mutable value
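The `__mul__` fix above concerns derived mutables. A simplified sketch of the mechanism: multiplying a mutable value returns a *derived* mutable whose choice is computed lazily from its source, so it tracks later changes. Class names and signatures here are illustrative, not MMRazor's actual API.

```python
class MutableValue:
    """Illustrative mutable holding a current choice from a choice list."""

    def __init__(self, choices, current=None):
        self.choices = list(choices)
        self.current_choice = current if current is not None else choices[0]

    def __mul__(self, factor: int):
        # Derive a new mutable; its choices and current choice are
        # recomputed from the source on every access.
        return DerivedMutable(
            choice_fn=lambda: self.current_choice * factor,
            choices_fn=lambda: [c * factor for c in self.choices])


class DerivedMutable:
    """Read-only view whose value is derived from another mutable."""

    def __init__(self, choice_fn, choices_fn):
        self._choice_fn, self._choices_fn = choice_fn, choices_fn

    @property
    def current_choice(self):
        return self._choice_fn()

    @property
    def choices(self):
        return self._choices_fn()
```

This lazy design is also why derived mutables need not be dumped on export, as the next commit notes: their values are fully determined by their sources.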
* move build_arch_param from mutable to mutator
* fix UT of diff mutable and mutator
* modify based on shiguang's comments
* remove mutator from the unittest of mutable
* 1.Add ABLoss and its config, readme and pipeline image. 2.Merge all connectors in general_connector into convconnector.
* 1.Improve ConvConnector to ConvModuleConnector, which aligns with mmcv. 2.Revise UT of test_connector. 3.Revise configs of fitnet and abloss. 4.Revise mmcls import of darts_subnet_head to align with the newest mmcls dev-1.x.
* 1.Simplify ConvModuleConnector with ConvModule.
Co-authored-by: zhangzhongyu.vendor <zhangzhongyu.vendor@sensetime.com>
* Fix spelling mistakes
* 1.Rename general connectors. 2.Replace nn.Conv2d with build_conv_layer and nn.BatchNorm2d with build_norm_layer.
* 1. Rename function init_parameters to init_weights in SingleConvConnector to enable automatic invocation.
* 1. Add norm_cfg in config and general_connector
* 1.Move calculate_student_loss to distillation algorithm. 2.Move mmrazor.models.connector to mmrazor.models.architectures. 3.Merge stu_connectors and tea_connectors into connectors, and call connectors by their connector_name.
* 1.Replace connector_name with connector in record_info. 2.Add assert that each connector must be in connectors.
Co-authored-by: zhangzhongyu.vendor <zhangzhongyu.vendor@sensetime.com>
* [Enhance] Add extra dataloader settings in configs (#141)
* [Docs] fix md link failure in docs (#142)
* [Docs] update Cream readme
* delete 'readme.md' in model_zoo.md
* fix md link failure in docs
* [Docs] add myst_parser to extensions in conf.py
* [Docs] delete the deprecated recommonmark
* [Docs] delete recommandmark from conf.py
* [Docs] fix md link failure and lint failure
* [Fix] Fix seed error in mmseg/train_seg.py and typos in train.md (#152)
* [Docs] update Cream readme
* delete 'readme.md' in model_zoo.md
* fix cwd docs and fix seed in #151
* delete readme of cream
* [Enhancement] Support broadcast_object_list on multiple machines & support Searcher running on a single GPU (#153)
* support broadcast_object_list on multiple machines
* add UserWarning
* [Fix] Fix configs (#149)
* fix configs
* fix spos configs
* fix readme
* replace the official mutable_cfg with the mutable_cfg searched by ourselves
* update https prefix
Co-authored-by: pppppM <gjf_mail@126.com>
* [Bug] Support pruning models containing GroupNorm or InstanceNorm (#144)
* support GN and IN
* test pruner
* limit pytorch version
* fix pytest
* throw an error when tracing groupnorm with torch version under 1.6.0
Co-authored-by: caoweihan <caoweihan@sensetime.com>
* Bump version to 0.3.1
Co-authored-by: qiufeng <44188071+wutongshenqiu@users.noreply.github.com>
Co-authored-by: PJDong <1115957667@qq.com>
Co-authored-by: humu789 <88702197+humu789@users.noreply.github.com>
Co-authored-by: whcao <41630003+HIT-cwh@users.noreply.github.com>
Co-authored-by: caoweihan <caoweihan@sensetime.com>