3. Calculate mAP (mean Average Precision), CP (Class-wise mean Precision), CR (Class-wise mean Recall), CF1
(Class-wise mean F1-score), OP (Overall mean Precision), OR (Overall mean Recall) and OF1 (Overall mean
F1-score).
```python
val_evaluator = [
    dict(type='AveragePrecision'),
    dict(type='MultiLabelMetric', average='macro'),  # class-wise mean
    dict(type='MultiLabelMetric', average='micro'),  # overall mean
]
test_evaluator = val_evaluator
```
## Add new metrics
MMClassification also supports implementing custom evaluation metrics when the built-in ones do not cover your needs.
Create a new file under `mmcls/evaluation/metrics`, for example `mmcls/evaluation/metrics/my_metric.py`, and implement the new metric there as a class `MyMetric` that inherits from [`BaseMetric` in MMEngine](mmengine.evaluator.BaseMetric).
Override the data processing method `process` and the metric calculation method `compute_metrics`, and add the class to the `METRICS` registry so it can be referenced from configs.
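Below is a minimal sketch of what `mmcls/evaluation/metrics/my_metric.py` could look like, using a toy top-1 accuracy metric to illustrate the interface. The `pred_score` and `gt_label` field names in `process` are assumptions for this example; adapt them to the fields your model actually writes into its data samples.

```python
from typing import Sequence

import torch
from mmengine.evaluator import BaseMetric

from mmcls.registry import METRICS


@METRICS.register_module()
class MyMetric(BaseMetric):
    """A toy top-1 accuracy metric, shown only to illustrate the interface."""

    default_prefix = 'my'  # prefix prepended to the returned metric names

    def process(self, data_batch, data_samples: Sequence[dict]):
        """Called after every iteration.

        Stash whatever `compute_metrics` will need into `self.results`.
        """
        for data_sample in data_samples:
            self.results.append({
                # 'pred_score' and 'gt_label' are assumed field names here.
                'pred': torch.argmax(data_sample['pred_score']).item(),
                'gt': int(data_sample['gt_label']),
            })

    def compute_metrics(self, results: list) -> dict:
        """Called once at the end of evaluation.

        Aggregate the stored per-sample results into a dict mapping
        metric names to values.
        """
        correct = sum(r['pred'] == r['gt'] for r in results)
        return {'accuracy': 100.0 * correct / len(results)}
```

After the new module is added (and exposed, e.g. by importing it in `mmcls/evaluation/metrics/__init__.py`), the metric can be used in a config just like the built-in ones, for example `val_evaluator = dict(type='MyMetric')`.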