* improve `digit_version` & use it for version checking
* more testing for digit_version
* setuptools >= 50 is needed
* fix CI
* add debugging log
* change `>=` to `==`
* fix lint
* remove debugging log
* add failure case
* replace
* fix
* consider TORCH_VERSION == 'parrots'
* add unittest
* `digit_version` does not handle the case where 'parrots' appears in the version name (see the sketch below).
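A minimal sketch of the idea, assuming a helper that converts a version string into a comparable tuple; the guard for the 'parrots' build and the exact parsing rules are illustrative, not mmcv's verbatim code.

```python
from typing import Tuple

TORCH_VERSION = '1.8.0'  # would normally come from torch.__version__


def digit_version(version_str: str, length: int = 4) -> Tuple[int, ...]:
    """Convert a version string into a fixed-length tuple of ints."""
    digits = []
    for x in version_str.split('.'):
        if x.isdigit():
            digits.append(int(x))
        elif 'rc' in x:  # e.g. '0rc1' -> (-1, 1), so rc sorts before release
            major, rc = x.split('rc')
            digits.extend([int(major) - 1, int(rc)])
    digits += [0] * (length - len(digits))
    return tuple(digits)


# 'parrots' is not a numeric version string, so guard before parsing.
if TORCH_VERSION != 'parrots' and \
        digit_version(TORCH_VERSION) >= digit_version('1.8.0'):
    print('new-style API available')
```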
* add flat cosine lr updater
* add test
* add doc
* update doc
* reformat
* update unittest
* update flat cosine LR test
* remove momentum hook test
* update test
* change assert to ValueError
* fix unittest
* add by_epoch=True unittest
* change to `start_percent` (see the sketch below)
* change to start_percent in test
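A rough sketch of the flat-cosine schedule these entries describe, assuming `start_percent` is the fraction of training during which the LR stays flat before cosine annealing kicks in (names and defaults are illustrative):

```python
import math


def flat_cosine_lr(base_lr, progress, total, start_percent=0.75, min_lr=0.0):
    """Keep base_lr for the first start_percent of training, then
    cosine-anneal down to min_lr over the remainder."""
    flat_steps = total * start_percent
    if progress < flat_steps:
        return base_lr
    frac = (progress - flat_steps) / (total - flat_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * frac))


print(flat_cosine_lr(0.1, progress=50, total=100))   # still flat: 0.1
print(flat_cosine_lr(0.1, progress=100, total=100))  # fully annealed: 0.0
```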
* port mmcv to HIP
* add nvcc
* fix format
* fix format
* fix bug in CARAFE
* fix test_utils because ROCm torch does not allow setting `torch.backends.cudnn.benchmark` to False
* add `LooseVersion`
* fix format
* fix format of version
* fix code format
* add test for yaml
* fix bug in CI test
* fix bug in how `torch.__version__` is obtained in setup.py (see the sketch below)
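The version-handling entries above boil down to comparing `torch.__version__` numerically instead of as a plain string; a hedged sketch of the pattern (the exact guard in setup.py differs):

```python
from distutils.version import LooseVersion

import torch

# Plain string comparison would rank '1.10.0' below '1.3.0';
# LooseVersion compares the numeric components instead.
if LooseVersion(torch.__version__) >= LooseVersion('1.3.0'):
    print('version-gated build path enabled')
```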
* support printing the hooks in use before running.
* Support printing hook trigger stages.
* Print stage-wise hook info, and make `stages` a class attribute of `Hook`.
* Add util function `is_method_overridden` and use it in `Hook.get_trigger_stages` (see the sketch below).
* Add unit tests.
* Move `is_method_overridden` to `mmcv/utils/misc.py`
* Improve hook info text.
* Add base_class argument type assertion, and fix some typos.
* Rename `get_trigger_stages` to `get_triggered_stages`
* Use f-string.
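mmcv's `is_method_overridden` (now in `mmcv/utils/misc.py`) checks whether a subclass actually overrides a base-class method, which is how the runner knows which hook stages are worth printing. Roughly:

```python
def is_method_overridden(method: str, base_class: type, derived_class) -> bool:
    """Check whether derived_class overrides `method` of base_class."""
    assert isinstance(base_class, type), \
        "base_class doesn't accept instance, please pass a class instead."
    if not isinstance(derived_class, type):
        derived_class = derived_class.__class__
    # identical function objects mean the method was inherited, not overridden
    return getattr(derived_class, method) != getattr(base_class, method)


class Hook:
    def before_run(self, runner):
        pass


class MyHook(Hook):
    def before_run(self, runner):  # overridden, so the check returns True
        print('starting')


print(is_method_overridden('before_run', Hook, MyHook))  # True
```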
* Refine the priority ranks of default hooks and custom hooks.
* Add unit tests for custom hooks with string priority.
* Use priorities `ABOVE_NORMAL` and `BELOW_NORMAL` instead of `HIGHER` and `LOWER`, and add unit tests for custom hooks with the same priority as default hooks.
* Assign different priorities to default hooks, and add custom hook registration in the base runner.
* Add custom hook registration in the example train file
* Add unit test for custom hooks
* Code format
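For context, hook priorities are named ranks where a lower value runs earlier; `ABOVE_NORMAL` and `BELOW_NORMAL` sit on either side of `NORMAL`. A sketch mirroring mmcv's `Priority` enum (treat the exact numbers as illustrative):

```python
from enum import Enum


class Priority(Enum):
    """Named hook priorities; lower values are executed earlier."""
    HIGHEST = 0
    VERY_HIGH = 10
    HIGH = 30
    ABOVE_NORMAL = 40
    NORMAL = 50
    BELOW_NORMAL = 60
    LOW = 70
    VERY_LOW = 90
    LOWEST = 100


# Custom hooks can be registered with a string priority, e.g.
# custom_hooks = [dict(type='MyHook', priority='ABOVE_NORMAL')]
print(Priority['ABOVE_NORMAL'].value < Priority['NORMAL'].value)  # True
```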
* support clipping min_lr in StepLrUpdaterHook
* add docstring for StepLrUpdaterHook
* fix small bugs
* add unit test for StepLrUpdaterHook
* fix linting error
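The `min_lr` option amounts to clamping the stepped LR from below; a small illustrative sketch (the real `StepLrUpdaterHook` also accepts a list of step milestones):

```python
def step_lr(base_lr, progress, step, gamma=0.1, min_lr=None):
    """Decay base_lr by gamma every `step` epochs/iters, clipped at min_lr."""
    lr = base_lr * gamma ** (progress // step)
    if min_lr is not None:
        lr = max(lr, min_lr)  # never decay below min_lr
    return lr


print(step_lr(0.1, progress=35, step=10))               # 1e-4
print(step_lr(0.1, progress=35, step=10, min_lr=1e-3))  # clipped to 1e-3
```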
* [Fix] OneCycleLrUpdaterHook interface
* revise according to comments
* revise according to comments
* add test
* fix lint
* revise according to comments
* minors
* add pytest param
* fix lint
* ci
* add initializers and BaseModule for unified parameter initialization
* fix circular import
* bug fix
* add is_init flag in BaseModule
* fix docstring
* sort import and fix doc format
* fix bug
* fix doc format and use double-quoted strings
* fix import sort
* import sort
* sort import
* revise according to comments
* fix doc format
* revise according to comments
* revise import and fix typo
* polish code
* revise minors
* revise minors
* revise apply function
* revise bias initialization with probability
* add type test for bias_prob
* revise minors
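A stripped-down sketch of the `BaseModule` idea from this group: an `nn.Module` that carries an `init_cfg` and an `is_init` flag so `init_weights` runs at most once. The real class does more (it dispatches `init_cfg` to registered initializers); this is just the shape of it:

```python
import torch.nn as nn


class BaseModule(nn.Module):
    """Simplified sketch: nn.Module plus config-driven weight init."""

    def __init__(self, init_cfg=None):
        super().__init__()
        self.init_cfg = init_cfg
        self.is_init = False  # guards against double initialization

    def init_weights(self):
        if self.is_init:
            return
        # the real implementation builds initializers from init_cfg here;
        # children that define init_weights are initialized recursively
        for m in self.children():
            if hasattr(m, 'init_weights'):
                m.init_weights()
        self.is_init = True
```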
* Refactor _load_checkpoint fn
* Update _load_checkpoint fn
* Update docstring and add unit test
* Fix unit test
* Fix lint
* Add comment and optimize function
* Fix docstring
* Update load_ckpt and fix docstring
* Update docstring and add sorting unit test
* Update and fix unit test
* Fix unit test
* Update and add unit test
* Fix openmmlab prefix error
* Update lr_updater.py
Since epochs/iterations in the runner start at 0, progress on a period boundary should not be assigned to the earlier period (for example, the 12th epoch, with the first period equal to 12, belongs to the second period; see the worked sketch below).
* Update lr_updater.py
* Update test_hooks.py
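A worked sketch of the boundary fix: with 0-based progress and periods of length 12, progress 12 must land in the second period, which a strict `<` against cumulative period ends gives (this mirrors mmcv's `get_position_from_periods` helper):

```python
def get_position_from_periods(iteration, cumulative_periods):
    """Return the index of the period that 0-based `iteration` falls in.

    With cumulative_periods = [12, 24], iteration 11 is in period 0 and
    iteration 12 is in period 1, because iterations start at 0.
    """
    for i, period_end in enumerate(cumulative_periods):
        if iteration < period_end:  # strict '<': boundary goes to the next period
            return i
    raise ValueError(f'Current iteration {iteration} exceeds '
                     f'cumulative_periods {cumulative_periods}')


print(get_position_from_periods(11, [12, 24]))  # 0
print(get_position_from_periods(12, [12, 24]))  # 1
```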
* Support specifying the LR of DCN's conv_offset
* Resolve comments & add unit test
* Resolve formatting
* Fix CI for DCN
* Mock DCN when cpu only
* Use mock for cpu testing
* Fix docstring and support ModulatedDCN
* set offset_lr_mult as DCN's argument, link CU-49u01p
* fix lr bug
* fall back to setting LR in the constructor
* resolve comments
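In config terms, the feature lets the optimizer constructor scale the LR of DCN's `conv_offset` parameters; a hedged example of what such a config can look like (values illustrative):

```python
# Base optimizer settings.
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=1e-4)

# Scale the LR of DCN's conv_offset parameters to 0.1x the base LR;
# mmcv's DefaultOptimizerConstructor reads this paramwise_cfg key.
paramwise_cfg = dict(dcn_offset_lr_mult=0.1)
```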
* Add build_runner
* Parametrize test_runner
* Add imports to runner __init__
* Refactor max_iters and max_epochs from run to init
* Add assertion error messages
* Add test_builder
* Make the change backward-compatible
* Raise ValueError if both max_epochs and max_iters are set
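With `build_runner`, the runner type and its `max_epochs`/`max_iters` come from config instead of being hardcoded or passed to `run()`; a usage sketch with placeholder model and optimizer:

```python
import logging

import torch.nn as nn
from torch.optim import SGD
from mmcv.runner import build_runner


class ToyModel(nn.Module):
    """Placeholder model; runners expect a train_step method."""

    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(2, 2)

    def train_step(self, data, optimizer):
        return dict(loss=self.fc(data).sum())


model = ToyModel()
runner = build_runner(
    # max_epochs/max_iters now belong to the runner config, not run()
    dict(type='EpochBasedRunner', max_epochs=12),
    default_args=dict(
        model=model,
        optimizer=SGD(model.parameters(), lr=0.1),
        work_dir='./work_dir',
        logger=logging.getLogger()))
```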
* add ema hook
* add ema hook resume
* add ema hook test
* fix typo
* fix according to comment
* delete logger
* fix according to comment
* fix unittest
* fix typo
* fix according to comment
* change to resume_from
* typo
* fix isort
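The EMA hook keeps shadow copies of parameters, updated after each iteration with the recurrence mmcv documents; sketched here as a plain function (the hook itself stores EMA values as buffers and can resume them from a checkpoint):

```python
def ema_update(ema_param, param, momentum=0.0002):
    """One EMA step: ema <- (1 - momentum) * ema + momentum * current."""
    return (1 - momentum) * ema_param + momentum * param


# enabled via config, e.g. custom_hooks = [dict(type='EMAHook', momentum=0.0002)]
print(ema_update(1.0, 0.0))  # 0.9998
```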
* fix: remove all module wrapper when saving checkpoint
* refactor: move the position of the if statement
* docs: add docstring
* refactor: add _save_to_state_dict from official torch
* docs: modify docstring of _save_to_state_dict
* docs: modify docstring
* feat: add unittest
* feat: add DataParallel to unittest
* fix: a bug when model has batchnorm
* docs: update docstring
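The gist of the fix: unwrap `DataParallel`-style wrappers at every level of the module tree before saving, so no `module.` prefixes leak into checkpoint keys. A condensed sketch close in spirit to mmcv's `get_state_dict` (the real one also re-implements `_save_to_state_dict` to handle batchnorm buffers):

```python
from collections import OrderedDict

import torch.nn as nn
from torch.nn.parallel import DataParallel, DistributedDataParallel


def is_module_wrapper(module):
    return isinstance(module, (DataParallel, DistributedDataParallel))


def get_state_dict(module, destination=None, prefix=''):
    """Recursively collect a state dict, unwrapping wrappers anywhere
    in the tree so no 'module.' prefix ends up in the checkpoint."""
    if is_module_wrapper(module):
        module = module.module
    if destination is None:
        destination = OrderedDict()
    module._save_to_state_dict(destination, prefix, keep_vars=False)
    for name, child in module._modules.items():
        if child is not None:
            get_state_dict(child, destination, prefix + name + '.')
    return destination


model = DataParallel(nn.Sequential(nn.Linear(2, 2), nn.BatchNorm1d(2)))
print(list(get_state_dict(model)))  # keys without the 'module.' prefix
```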
* feat: support reading the port from os.environ for slurm training (see the sketch below)
* fix: port data type
* feat: add flawed unittest
* feat: add flawed unittest
* docs: add comments
* fix: unittest
* fix: unittest
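The port-resolution order being tested: an explicit `port` argument wins, then a pre-set `MASTER_PORT` in the environment, then a default. A sketch of that logic (the default port is illustrative):

```python
import os


def resolve_master_port(port=None, default='29500'):
    """Decide MASTER_PORT for slurm training: explicit arg > existing
    env var > default. The env value must stay a string; the follow-up
    fix above was about exactly this data type."""
    if port is not None:
        os.environ['MASTER_PORT'] = str(port)
    elif 'MASTER_PORT' not in os.environ:
        # keep a pre-set MASTER_PORT untouched; otherwise fall back
        os.environ['MASTER_PORT'] = default
    return os.environ['MASTER_PORT']


print(resolve_master_port(29501))  # '29501'
```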
* feat: add CosineRestartLrUpdaterHook
* style: rename period to periods
* fix: bug in period 0
* feat: rename eta_min to min_lr and add min_lr_ratio
* docs: fix docstring of restart lr updater
* refactor: use annealing_cos
* docs: add docstring to annealing_cos
* feat: cosine restart lr update hook
* refactor: modify code order for unittest
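`annealing_cos` is the shared helper the refactor introduces: cosine interpolation from a start value to an end value as `factor` goes from 0 to 1. It is essentially:

```python
from math import cos, pi


def annealing_cos(start, end, factor, weight=1):
    """Cosine-anneal from `start` to `end` as factor goes 0 -> 1."""
    cos_out = cos(pi * factor) + 1
    return end + 0.5 * weight * (start - end) * cos_out


# CosineRestartLrUpdaterHook applies this within each restart period:
base_lr, min_lr = 0.1, 0.001
for factor in (0.0, 0.5, 1.0):
    print(round(annealing_cos(base_lr, min_lr, factor), 4))  # 0.1, 0.0505, 0.001
```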