Thanks for your contribution; we appreciate it a lot. The following
instructions will help keep your pull request in good shape and make it
easier to get feedback. If you do not understand some of the items, don't
worry, just open the pull request and ask the maintainers for help.
## Motivation
Please describe the motivation for this PR and the goal you want to
achieve with it.
## Modification
Please briefly describe the modifications made in this PR.
1. add `NYUDataset` class
2. add script to process NYU dataset
3. add transforms for loading depth map (see the sketch after this list)
4. add docs & unittest
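To make item 3 concrete, here is a minimal, purely illustrative sketch of a depth-map loading transform. The class name `LoadDepthMap`, the `depth_path`/`gt_depth_map` keys, and the 1000-units-per-meter scale are assumptions for illustration, not the exact implementation added in this PR.

```python
import numpy as np
from PIL import Image


class LoadDepthMap:
    """Illustrative transform: read a 16-bit depth PNG and convert it to meters.

    The dict keys and the default depth scale are assumptions for this sketch,
    not necessarily the convention used by the real transform in this PR.
    """

    def __init__(self, depth_scale=1000.0):
        self.depth_scale = depth_scale

    def __call__(self, results):
        # NYU depth maps are commonly stored as 16-bit PNGs in millimeters;
        # rescale them to floating-point meters for training.
        depth = np.asarray(Image.open(results['depth_path']), dtype=np.float32)
        results['gt_depth_map'] = depth / self.depth_scale
        return results
```

In a data pipeline, such a transform would be chained after the usual image-loading step and before the augmentations.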
## BC-breaking (Optional)
Does the modification introduce changes that break the backward
compatibility of downstream repos?
If so, please describe how it breaks the compatibility and how downstream
projects should modify their code to stay compatible with this PR.
## Use cases (Optional)
If this PR introduces a new feature, it is better to list some use cases
here and update the documentation accordingly.
## Checklist
1. Pre-commit or other linting tools are used to fix potential lint
issues.
2. The modification is covered by complete unit tests. If not, please
add more unit tests to ensure correctness.
3. If the modification has potential influence on downstream projects,
this PR should be tested with downstream projects, like MMDet or
MMDet3D.
4. The documentation has been modified accordingly, e.g. docstrings or
example tutorials.
* [Feature] Add Decathlon dataset
* fix test data
* add file
* remove order
* revise default value for prefix
* modify example
* revise based on comments
* add comments for ut
* support iSAID aerial dataset
* Update and rename docs/dataset_prepare.md to 博士/dataset_prepare.md
* Update dataset_prepare.md
* fix typo
* fix typo
* fix typo
* remove imgviz
* fix wrong order in annotation name
* upload models&logs
* upload models&logs
* add load_annotations
* fix unittest coverage
* fix unittest coverage
* use correct crop size in config
* fix iSAID unit test
* fix iSAID unit test
* fix typos
* fix wrong crop size in readme
* use smaller figure as test data
* add smaller dataset in test data
* add blank in docs
* use 0-byte pseudo data
* add footnote and comments for crop size
* change iSAID to isaid and add default value in it
* change iSAID to isaid in _base_
Co-authored-by: MengzhangLI <mcmong@pku.edu.cn>
* update LoveDA dataset api (see the registration sketch after this commit group)
* revised lint errors in dataset_prepare.md
* revised lint errors in loveda.py
* revised lint errors in loveda.py
* revised lint errors in dataset_prepare.md
* revised lint errors in dataset_prepare.md
* checked with isort and yapf
* checked with isort and yapf
* checked with isort and yapf
* Revert "checked with isort and yapf"
This reverts commit 686a51d9
* Revert "checked with isort and yapf"
This reverts commit b877e121bb2935ceefc503c09675019489829feb.
* Revert "revised lint errors in dataset_prepare.md"
This reverts commit 2289e27c
* Revert "checked with isort and yapf"
This reverts commit 159db2f8
* Revert "checked with isort and yapf"
This reverts commit 159db2f8
* add configs & fix bugs
* update new branch
* upload models&logs and add format-only
* change pretrained model path of HRNet
* fix the errors in dataset_prepare.md
* fix the errors in dataset_prepare.md and configs in loveda.py
* change the description in docs_zh-CN/dataset_prepare.md
* use init_cfg
* fix test coverage
* adding pseudo loveda dataset
* adding pseudo loveda dataset
* adding pseudo loveda dataset
* adding pseudo loveda dataset
* adding pseudo loveda dataset
* adding pseudo loveda dataset
* Update docs/dataset_prepare.md
Co-authored-by: Junjun2016 <hejunjun@sjtu.edu.cn>
* Update docs_zh-CN/dataset_prepare.md
Co-authored-by: Junjun2016 <hejunjun@sjtu.edu.cn>
* Update docs_zh-CN/dataset_prepare.md
Co-authored-by: Junjun2016 <hejunjun@sjtu.edu.cn>
* Delete unused lines of unittest and add docs
* add convert .py file
* add downloading links from zenodo
* move place of LoveDA and Cityscapes in doc
* move place of LoveDA and Cityscapes in doc
Co-authored-by: MengzhangLI <mcmong@pku.edu.cn>
Co-authored-by: Junjun2016 <hejunjun@sjtu.edu.cn>
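The LoveDA commits above add a new dataset API to MMSegmentation; the sketch below shows how such a dataset is typically registered against the 0.x `CustomDataset` interface. The class list, palette, and file suffixes are illustrative placeholders rather than the exact values merged here.

```python
from mmseg.datasets import DATASETS, CustomDataset


@DATASETS.register_module()
class LoveDADataset(CustomDataset):
    """Illustrative registration of a LoveDA-style dataset (values are placeholders)."""

    CLASSES = ('background', 'building', 'road', 'water', 'barren', 'forest',
               'agricultural')
    PALETTE = [[255, 255, 255], [255, 0, 0], [255, 255, 0], [0, 0, 255],
               [159, 129, 183], [0, 255, 0], [255, 195, 128]]

    def __init__(self, **kwargs):
        # Images and annotation masks are assumed to be stored as .png files.
        super().__init__(img_suffix='.png', seg_map_suffix='.png', **kwargs)
```

A matching config would then refer to the dataset by `type='LoveDADataset'` and point `data_root` at the converted data.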
* Support progressive test with lower memory cost (see the pre_eval sketch at the end of this list).
* Temp code
* Using processor to refactor evaluation workflow.
* refactor eval hook.
* Fix progress bar.
* Fix middle save argument.
* Modify some variable names of the dataset evaluate api.
* Modify some variable names of the eval hook.
* Fix some priority bugs of eval hook.
* Deprecated efficient_test.
* Fix training progress blocked by eval hook.
* Deprecated old test api.
* Fix test api error.
* Modify outer api.
* Build a sampler test api.
* TODO: Refactor format_results.
* Modify variable names.
* Fix num_classes bug.
* Fix sampler index bug.
* Fix grammar bug.
* Support batch sampler.
* More readable test api.
* Remove some command arg and fix eval hook bug.
* Support format-only arg.
* Modify format_results of datasets.
* Modify tool which use test apis.
* support cityscapes eval
* fixed cityscapes
* 1. Add comments for batch_sampler;
2. Keep eval hook api same and add deprecated warning;
3. Add doc string for dataset.pre_eval;
* Add efficient_test doc string.
* Modify test tool to be compatible with the old version.
* Modify eval hook to be compatible with the old version.
* Modify test api to be compatible with the old version api.
* Sampler explanation.
* update warning
* Modify deploy_test.py
* compatible with old output, add efficient test back
* clear logic of exclusive
* Warning about efficient_test.
* Modify format_results save folder.
* Fix bugs of format_results.
* Modify deploy_test.py.
* Update doc
* Fix deploy test bugs.
* Fix custom dataset unit tests.
* Fix dataset unit tests.
* Fix eval hook unit tests.
* Fix some incompatibilities.
* Add pre_eval argument for eval hooks.
* Update eval hook doc string.
* Make pre_eval False by default.
* Add unit tests for dataset format_results.
* Fix some comments and bc-breaking bug.
* Fix pre_eval set cfg field.
* Remove redundant codes.
Co-authored-by: Jiarui XU <xvjiarui0826@gmail.com>
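For context on the pre_eval commits above, here is a minimal sketch of the progressive evaluation loop they describe, assuming the MMSegmentation 0.x conventions where `dataset.pre_eval(preds, indices)` returns small per-image statistics and `dataset.evaluate` aggregates them; the helper name `progressive_eval` and the one-image-per-batch indexing are assumptions for illustration.

```python
import torch


def progressive_eval(model, data_loader, dataset, metric='mIoU'):
    """Sketch of progressive evaluation: keep per-image stats, not full maps."""
    model.eval()
    results = []
    for batch_idx, data in enumerate(data_loader):
        with torch.no_grad():
            preds = model(return_loss=False, **data)
        # pre_eval reduces each prediction to intersect/union counts, so memory
        # no longer grows with the number of full-resolution prediction maps.
        # (Assumes a sequential sampler feeding one image per batch.)
        results.extend(dataset.pre_eval(preds, indices=[batch_idx]))
    # evaluate() turns the collected per-image statistics into the final metrics.
    return dataset.evaluate(results, metric=metric)
```

In the eval hook the same behavior is switched on by the new `pre_eval=True` argument, which the commits above keep off by default for backward compatibility.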