mmclassification/mmcls/core/evaluation
LXXXXR 1df10beaa1
Add evaluation metrics for multilabel task (#123)
* add mean_ap

* add a difficult_examples option in mAP to support datasets without difficult examples

* fix docstring

* add CP, CR, CF1, OP, OR, OF1 as multilabel metrics

* fix docstring

* temporary solution for CI until a new version of mmcv is available (#127)

* Swap -1 and 0 for labels

* Revised according to comments

* Revised according to comments

* Revised according to comments

* Revert "Revised according to comments"
It is suggested that we should not include paper from arxiv.
This reverts commit 48a781cd6a.

* Revert "Revert "Revised according to comments""

This reverts commit 6d3b0f1a7b.

* Revert "Revised according to comments"
It is suggested we should not cite paper from arxiv.
This reverts commit 120ecda884.

* Revised according to comments

* Revised according to comments

* Revised according to comments
2021-01-04 12:25:33 +08:00
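The mAP added in these commits is the standard multi-label mean average precision: a per-class average precision over the ranked predictions, averaged across classes, with difficult examples excluded when the dataset marks them. A minimal NumPy sketch follows, assuming the convention suggested by the "Swap -1 and 0 for labels" commit that -1 marks a difficult example and 0 a negative; the helper names `average_precision` and `mean_ap` are illustrative, not quoted from mean_ap.py.

```python
import numpy as np

def average_precision(pred, target):
    # Rank predictions for one class from highest to lowest score.
    sorted_target = target[np.argsort(-pred)]
    # Assumption: -1 marks difficult examples, which are skipped entirely.
    sorted_target = sorted_target[sorted_target != -1]
    tp = sorted_target == 1
    # Precision after each prediction in the ranked list.
    precision = np.cumsum(tp) / (np.arange(len(sorted_target)) + 1)
    num_pos = max(int(tp.sum()), 1)  # guard classes with no positives
    # Average precision = mean of the precision values at true-positive ranks.
    return float((precision * tp).sum() / num_pos)

def mean_ap(preds, targets):
    # preds, targets: (num_samples, num_classes) score and label matrices.
    aps = [average_precision(preds[:, k], targets[:, k])
           for k in range(preds.shape[1])]
    return float(np.mean(aps))
```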
__init__.py Add evaluation metrics for multilabel task (#123) 2021-01-04 12:25:33 +08:00
eval_hooks.py Use build_runner (#54) 2020-10-15 21:12:50 +08:00
mean_ap.py Add evaluation metrics for multilabel task (#123) 2021-01-04 12:25:33 +08:00
multilabel_eval_metrics.py Add evaluation metrics for multilabel task (#123) 2021-01-04 12:25:33 +08:00
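For context on the CP/CR/CF1 and OP/OR/OF1 metrics added alongside mAP: the "C" variants average precision, recall and F1 per class, while the "O" variants pool true positives over all (sample, class) pairs. Below is a minimal sketch under those standard definitions, assuming predictions are binarised with a fixed score threshold; the function name and the threshold argument are illustrative, not taken from multilabel_eval_metrics.py.

```python
import numpy as np

def class_and_overall_metrics(pred_scores, targets, thr=0.5):
    # Binarise scores with a fixed threshold (assumed for illustration).
    pred = (pred_scores >= thr).astype(np.float64)
    gt = (targets == 1).astype(np.float64)  # difficult (-1) treated as negative here
    eps = np.finfo(np.float64).eps

    tp = pred * gt
    # Class-wise metrics: precision/recall per class, then averaged over classes.
    cp = float((tp.sum(0) / np.maximum(pred.sum(0), eps)).mean())
    cr = float((tp.sum(0) / np.maximum(gt.sum(0), eps)).mean())
    cf1 = 2 * cp * cr / max(cp + cr, eps)
    # Overall metrics: precision/recall pooled over every (sample, class) pair.
    op = float(tp.sum() / max(pred.sum(), eps))
    orec = float(tp.sum() / max(gt.sum(), eps))
    of1 = 2 * op * orec / max(op + orec, eps)
    return dict(CP=cp, CR=cr, CF1=cf1, OP=op, OR=orec, OF1=of1)
```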