* add onnx support to roi_align and roi_pool
* add softnms ort support
* fix for lint
* format cpp code with clang-format:google
* add new empty line to the end of header files in onnxruntime
* update to pytorch1.7
* add test of softnms to onnxruntime
* fix for lint
* remove print in ops/info.py
* change import order, fix for flake8
* fix include
* add assert torch>=1.7.0
* [doc]: add document for onnxruntime custom operator
* update onnxruntime version to v1.5.1 for softnms
* remove doc menu
* Resolve lint for markdown
* resolve naming style in onnxruntime_op.md
* Use old cpp apis, optimize test_onnx.py
* Fixing strings in tests/test_ops/test_onnx.py
* code format with yapf
* fix soft_nms for parrots
* add import in onnxruntime setup, avoid conflict
* fix doc and add assert
* change cpp guard
Co-authored-by: maningsheng <maningsheng@sensetime.com>
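These ONNX Runtime custom ops (soft_nms, roi_align, roi_pool) are loaded into a session through a compiled library. A minimal sketch, assuming a hypothetical library path and a placeholder exported model/input name:

```python
import numpy as np
import onnxruntime as ort

# hypothetical path to the compiled mmcv custom-op library
ort_custom_op_path = '/path/to/libmmcv_onnxruntime_ops.so'

session_options = ort.SessionOptions()
# make soft_nms / roi_align resolvable by the inference session
session_options.register_custom_ops_library(ort_custom_op_path)

# 'model.onnx' and the input name/shape are placeholders for an exported model
sess = ort.InferenceSession('model.onnx', session_options)
outputs = sess.run(None, {'input': np.random.rand(1, 3, 224, 224).astype(np.float32)})
```

Note that onnxruntime>=1.5.1 is required for the soft_nms op, per the commits above.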
* Allow to replace nested tuple and list via options
* Add comments
* Fix single nested items
* Simplify the code
* Simplify the code
* Simplify the code
* Simplify the code
* Update docstring
* Update docstring
* Support quotation mark
* modify docstring
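The replaced nested tuples/lists ultimately flow through `Config.merge_from_dict`; a small sketch with illustrative key names:

```python
from mmcv import Config

cfg = Config(dict(model=dict(backbone=dict(out_indices=(0, 1, 2, 3)))))
# replace the nested tuple wholesale via a dotted option key
cfg.merge_from_dict({'model.backbone.out_indices': (0, 3)})
print(cfg.model.backbone.out_indices)  # (0, 3)
```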
* add jit decorator
* add parrots_jit.py
* modify test_parrots_jit.py
* modify for lint
* fix isort
* skip test_parrots_jit.py when build without pytorch
* try ci
* rm log
* fix double quote
* modify for comments and use partial_shape instead of full_shape
* fix for lint
* small modify for parrots 0.9.0rc0
* directly skip when elena is unavailable
* add clamp without unittest
* add clamp-act with unit test
* fix name bug
* use logical and
* fix logical_and
* fix linting
* rename ClampLayer to Clamp
* rename ClampLayer to Clamp
Co-authored-by: nbei <631557085@qq.com>
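A minimal sketch of the renamed `Clamp` activation, assuming it is registered in the activation registry and accepts `min`/`max` bounds:

```python
import torch
from mmcv.cnn import build_activation_layer

clamp = build_activation_layer(dict(type='Clamp', min=-1.0, max=1.0))
x = torch.randn(1, 3, 8, 8)
y = clamp(x)  # values clipped to [-1, 1]
```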
* Update lr_updater.py
since epoch/iteration in the runner starts from 0, we shouldn't assign the later iteration to the former period (the 12th epoch, for example, with the first period equal to 12).
* Update lr_updater.py
* Update test_hooks.py
* add unittest for onnx convert
* build onnx and onnxruntime in CI
* skip onnx op unit test while using CUDA
* fix offset==0 case in NMS
* remove tmp file used in test
* delete tmp file before assert so that we can remove the tmp file anyway
* import_modules_from_strings when loading cfg from file
* add unittest to tell whether the feature is enabled as expected
* minor
* set an environment variable instead of writing a file
* use 'shutil' instead of 'os.system'
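A small sketch of `import_modules_from_strings`, which the config loader can call so that custom modules get imported (and their registries populated) when a cfg file is read:

```python
from mmcv.utils import import_modules_from_strings

# returns the imported module objects; a failed import raises unless allowed
osp, sys = import_modules_from_strings(['os.path', 'sys'])
```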
* add version check in wrappers
* fix assertion
* use digital version for version comparison
* fix unit tests
* reformat
* fall back to comparing the first two version numbers
* fix unittest
* fix unittest
* fix unit test
* clean unnecessary change
* Support to specify LR of DCN's conv_offset
* Resolve comments & add unit test
* Resolve formats
* Fix CI for DCN
* Mock DCN when cpu only
* Use mock for cpu testing
* Fix docstring and support ModulatedDCN
* set offset_lr_mult as dcn's arguments, link CU-49u01p
* fix lr bug
* fall back to set LR in constructor
* resolve comments
* Add build_runner
* Parametrize test_runner
* Add imports to runner __init__
* Refactor max_iters and max_epochs from run to init
* Add assertion error messages
* Add test_builder
* Make change retro-compatible
* Raise ValueError if both max_epochs and max_iters are set
* Add test case for type defined using default_args
* Refactor build_from_cfg
* Update exception of missing type
* pre-commit
* Fix default_args is None
* pre-commit
* Bring back test
* Update exception raising
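With `build_runner`, the runner type and `max_epochs`/`max_iters` live in the runner config instead of being passed to `run()`. A hedged sketch with a placeholder model exposing the `train_step` the runner expects:

```python
import logging

import torch
import torch.nn as nn
from mmcv.runner import build_runner


class ToyModel(nn.Module):
    """Placeholder model exposing the train_step expected by EpochBasedRunner."""

    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(2, 2)

    def train_step(self, data, optimizer):
        loss = self.linear(data).sum()
        return dict(loss=loss, log_vars=dict(loss=loss.item()), num_samples=1)


model = ToyModel()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
logger = logging.getLogger(__name__)

runner = build_runner(
    dict(type='EpochBasedRunner', max_epochs=12),
    default_args=dict(
        model=model,
        optimizer=optimizer,
        work_dir='./work_dir',
        logger=logger))
```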
* add brightness and contrast augmentation
* remove unnecessary
* reformat
* relax the precision constraint for adjust_brightness aug
* fix precision assertion error in unit test
* remove toy
* rename alpha as factor
* use np.testing.assert_allclose in place of np.less_equal
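A minimal sketch of the brightness/contrast augmentations; `factor=1` keeps the image unchanged, while `factor=0` gives the fully degenerated (black / mean-gray) image:

```python
import numpy as np
import mmcv

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
brighter = mmcv.adjust_brightness(img, factor=1.5)
lower_contrast = mmcv.adjust_contrast(img, factor=0.5)
```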
* Mv wrappers into bricks and use wrappers in registry
* resolve import issues
* fix import issues
* set nn op forward to torch 1.6.1
* fix CI bug and add warning
* Fix CI by using patch mock
* mv warnings inside deprecated module's initialization
* add equalize augmentation in mmcv
* delete unnecessary
* reformat
* remove clip in implementing equalize, and add unit test with case step=0
* remove clip in implementing equalize, and add unit test with case step=0
* add comments for unit test
* rename function name as imequalize
* add Color augmentation
* reformat
* reformat
* reformat docstring
* reformat docstring
* add more unit test
* add more unit test
* add clip value and unit test for image with type float
* rename function name
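A small sketch of the two photometric ops added above: histogram equalization and the Color augmentation (which blends the image with its grayscale version):

```python
import numpy as np
import mmcv

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
equalized = mmcv.imequalize(img)
colored = mmcv.adjust_color(img, alpha=0.5)
```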
* Support to split batched_nms when box number is too large
* mv data from gpu to cpu
* Set split_thr through nms_cfg
* clean code
* Update motivation in docstring
* fix typos
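A hedged sketch of `batched_nms` with `split_thr`: when the number of boxes exceeds the threshold, NMS is applied class by class to bound GPU memory. The key names (`iou_threshold`, `split_thr`) follow mmcv.ops conventions and are assumptions here:

```python
import torch
from mmcv.ops import batched_nms

xy = torch.rand(1000, 2) * 100
wh = torch.rand(1000, 2) * 20
boxes = torch.cat([xy, xy + wh], dim=1)     # (x1, y1, x2, y2)
scores = torch.rand(1000)
idxs = torch.randint(0, 10, (1000,))        # class indices

dets, keep = batched_nms(
    boxes, scores, idxs,
    nms_cfg=dict(type='nms', iou_threshold=0.5, split_thr=500))
```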
* add ema hook
* add ema hook resume
* add ema hook test
* fix typo
* fix according to comment
* delete logger
* fix according to comment
* fix unittest
* fix typo
* fix according to comment
* change to resume_from
* typo
* fix isort
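A hedged sketch of the EMA hook; the argument names (`momentum`, `resume_from`) follow the commits above:

```python
from mmcv.runner import EMAHook

# exponential moving average of model weights, updated every iteration;
# resume_from lets a resumed run restore the EMA buffers from a checkpoint
ema_hook = EMAHook(momentum=0.0002, resume_from=None)
# runner.register_hook(ema_hook)  # attach to an existing runner instance
```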
* add pairwise function for 'gaussian' and 'concatenation' mode
* rename test function
* decrease the complexity of nonlocal unittest
* fix typo and make unittest more complete
* add unittest when zero_init is False
* minor fix
* pack theta and phi
Co-authored-by: Jiarui XU <xvjiarui0826@gmail.com>
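A minimal sketch of the non-local block with the newly supported pairwise modes ('gaussian' and 'concatenation'), in addition to 'embedded_gaussian' and 'dot_product':

```python
import torch
from mmcv.cnn import NonLocal2d

block = NonLocal2d(in_channels=16, reduction=2, mode='concatenation')
x = torch.randn(2, 16, 20, 20)
out = block(x)  # same shape as the input
```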
* fix: remove all module wrapper when saving checkpoint
* refactor: move position of if
* docs: add docstring
* refactor: add _save_to_state_dict from official torch
* docs: modify docstring of _save_to_state_dict
* docs: modify docstring
* feat: add unittest
* feat: add DataParallel to unittest
* fix: a bug when model has batchnorm
* docs: update docstring
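A small sketch of the effect: checkpoints saved from a wrapped model (e.g. `DataParallel`) no longer keep the wrapper, so they load into a bare model:

```python
import torch.nn as nn
from mmcv.runner import load_checkpoint, save_checkpoint

model = nn.DataParallel(nn.Linear(2, 2))
save_checkpoint(model, 'tmp.pth')            # stored without the 'module.' prefix
load_checkpoint(nn.Linear(2, 2), 'tmp.pth')  # loads into an unwrapped model
```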
* update config with predefined variables
* rm redundant code
Signed-off-by: lixuanyi <lixuanyi@sensetime.com>
* add test for config
Signed-off-by: lixuanyi <lixuanyi@sensetime.com>
* support all types
Signed-off-by: lixuanyi <lixuanyi@sensetime.com>
* newline at the end
Signed-off-by: lixuanyi <lixuanyi@sensetime.com>
* update
Signed-off-by: lixuanyi <lixuanyi@sensetime.com>
* extract code into a function and add docs
Signed-off-by: lixuanyi <lixuanyi@sensetime.com>
* fix and add tests
Signed-off-by: lixuanyi <lixuanyi@sensetime.com>
* add unit tests and fix
* fix
* fix minor
* fix test
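The predefined-variable support lets a config file reference its own location. A hedged sketch of a config file; the variable names are assumed from the mmcv documentation and are substituted with the file's path components on load:

```python
# my_config.py -- hypothetical config file using predefined variables
work_dir = './work_dirs/{{ fileBasenameNoExtension }}'
data_root = '{{ fileDirname }}/data'
```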
* migrate op
* migrate unittest
* update build no torch
* add back use_torch_vision for roi align
* fix type and unit test
* ignore test logging when no torch
* fix no torch ci test
* skip test registry
* remove coverage report when no torch
* fix mac ci order
* install latest pillow when no torch
* mv convws to bricks
* update action
* remove 1.3 cuda
* reorder
* add no torch
* fixed version
* make py3.8 on pt1.5
* make py3.8 on pt1.5
* remove torch 1.3
* disable py3.8
* pip
* merge master with cuda compile fix
* add cpu roi align
* fixed test
* fixed no torch
* add CUDA_ARGS
* use one line
* gencode=61
* separate jobs
* update lint
* use parametrize test
* format and rename
* unit test for all
* add ext ops, support parrots
* fix lint
* fix lint
* update op from mmdetection
* support non-pytorch env
* fix import bug
* test not importing mmcv.op
* rename mmcv.op to mmcv.ops
* fix compile warning
* 1. fix syncbn warning in pytorch 1.5
2. support cpu-only compile
3. add point_sample from mmdet
* fix text bug
* update docstrings
* fix line endings
* minor updates
* remove non_local from ops
* bug fix for nonlocal2d
* rename ops_ext to _ext and _ext to _flow_warp_ext
* update the doc
* try clang-format github action
* fix github action
* add ops to api.rst
* fix cpp format
* fix clang format issues
* remove .clang-format
Co-authored-by: Kai Chen <chenkaidev@gmail.com>
* Add utils to calculate model complexity info
* remove _InstanceNorm in unittest
* add docstring and increase unittest coverage
* fix deconv_flops_counter_hook to accept different data shape
* test when model is not a common instance
* put flops_counter.py and weight_init.py into mmcv/cnn/utils folder
* fix import name
* reformat some docstrings
* update the scripts with latest one and remove redundant codes
* directly represent a model without string and eval()
* reformat code
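A minimal sketch of the model complexity utility added here, using a toy model:

```python
import torch.nn as nn
from mmcv.cnn import get_model_complexity_info

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 8, 3, padding=1))
flops, params = get_model_complexity_info(model, (3, 224, 224), as_strings=True)
print(f'FLOPs: {flops}, Params: {params}')
```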
* feat: support for os.environ port for slurm training
* fix: port data type
* feat: add flawed unittest
* feat: add flawed unittest
* docs: add comments
* fix: unittest
* fix: unittest
* add non_local module
* rewrite non local module comments
* improve docstring and adjust init function
* not to init norm layer
* Correct initialization when there is a norm
* set normal method for both embedded_gaussian and dot_product
* feat: add custom_group to DefaultOptimizerConstructor
* refactor: move custom_groups validate to _validate_cfg
* docs: add doc to explain custom_groups
* feat: add unittest for non_exist_key
* refactor: one param per group
* fix: small fix
* fix: name
* docs: docstring
* refactor: change to mult for lr and wd customization only
* docs: docstring
* docs: more explanation
* feat: sort custom key
* docs: add docstring
* refactor: use reverse arg of sorted
* docs: fix comment
* docs: fix comment
* refactor: small modification
* refactor: small modification
* refactor: small modification
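A hedged sketch of the `custom_keys` mechanism: parameters whose names contain a custom key get their lr / weight decay scaled by the given multipliers (the placeholder model and key names are illustrative):

```python
import torch.nn as nn
from mmcv.runner import build_optimizer


class ToyDet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(4, 4)
        self.head = nn.Linear(4, 2)


optimizer_cfg = dict(
    type='SGD', lr=0.01, momentum=0.9, weight_decay=1e-4,
    paramwise_cfg=dict(
        custom_keys={
            'backbone': dict(lr_mult=0.1),
            'head': dict(lr_mult=1.0, decay_mult=0.0),
        }))
optimizer = build_optimizer(ToyDet(), optimizer_cfg)
```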
* feat: add CosineRestartLrUpdaterHook
* style: rename period to periods
* fix: bug in period 0
* feat: rename eta_min to min_lr and add min_lr_ratio
* docs: fix docstring of restart lr updater
* refactor: use annealing_cos
* docs: add docstring to annealing_cos
* feat: cosine restart lr update hook
* refactor: modify code order for unittest
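A hedged sketch of the cosine restart schedule as it would appear in a training config (consumed by the runner's LR-hook registration); argument names follow the commits above:

```python
lr_config = dict(
    policy='CosineRestart',
    periods=[10, 20, 30],              # lengths of the successive periods
    restart_weights=[1.0, 0.5, 0.25],  # LR scale applied at each restart
    min_lr=1e-5)                       # exactly one of min_lr / min_lr_ratio is given
```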
* add a BaseRunner and rename Runner to EpochBasedRunner
* fix the train/val step
* bug fix
* update unit tests
* fix unit tests
* raise an error if both batch_processor and train_step are set
* add a unit test
* Support path as a key in dict of config
* reformat test case
* update pre-commit version and fix format
* fix bug
* clean code
* reformat
* fix missing parts
* add building bricks of cnn
* add unit tests
* use registry for building bricks
* minor updates
* add scale layer
* add test for scale
* add doc string
Co-authored-by: Jiarui XU <xvjiarui0826@gmail.com>
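A tiny sketch of the new `Scale` brick, a single learnable scalar multiplier:

```python
import torch
from mmcv.cnn import Scale

scale = Scale(scale=1.0)
out = scale(torch.randn(2, 3))  # elementwise multiply by the learnable scalar
```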
* track progress of iter&enum
* restore
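A tiny sketch of tracking progress over an iterable:

```python
import mmcv

results = []
for item in mmcv.track_iter_progress(list(range(10))):
    results.append(item * 2)
```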
* add momentum scheduler
* fix small bug
* cyclic scheduler
* fix bug
* fix second phase's bug
* reformat
* feature (cosine lr): use relative ratio for more flexible scheduler
* Fix (runner): fix bugs in runner
* Refactor (hook): refactor cosine/cyclic LR/momentum hook with unittest
* Clean unnecessary files and reformat
* Fix memory key error when GPU is not available
* Resolve comments
* Do not print momentum in text log
* Change hook register order
* Refactor max_iter
* Fix max_iter bugs in runner
* Enforce target_ratio to be either tuple or float
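A hedged sketch of the cyclic LR / momentum schedules as training-config dicts; `target_ratio` is the tuple (or float) of relative ratios enforced above:

```python
lr_config = dict(
    policy='cyclic',
    target_ratio=(10, 1e-4),   # (peak_lr / base_lr, final_lr / base_lr)
    cyclic_times=1,
    step_ratio_up=0.4)
momentum_config = dict(
    policy='cyclic',
    target_ratio=(0.85 / 0.95, 1),
    cyclic_times=1,
    step_ratio_up=0.4)
```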
* add base for config
* fixed format
* rm terminal width
* support multiple & recursive base
* add test case
* fix format
* add test construct
* minor fix
* add more test, rewrite merge from opt
* avoid duplicate keys
* delete imported config as module
* rename merge_from_dict
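A small sketch of config inheritance: `_base_` points at one or more base config files whose fields are merged recursively, with the child overriding (file names are illustrative):

```python
# base.py
model = dict(type='ResNet', depth=50)
lr = 0.01

# child.py
_base_ = './base.py'
model = dict(depth=101)   # only overrides the 'depth' field of the base model
```

`Config.fromfile('child.py')` then yields the merged configuration.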