- `model_cfg`: The config of the model in OpenMMLab codebases.
- `--model`: The backend model file. For example, if we convert a model to TensorRT, we need to pass the model file with the ".engine" suffix.
- `--out`: The path to save output results in pickle format. (The results will be saved only if this argument is given)
- `--format-only`: Whether to format the output results without evaluation. It is useful when you want to format the results into a specific format and submit them to a test server.
- `--metrics`: The metrics used to evaluate the model, as defined in the OpenMMLab codebases, e.g. "segm" and "proposal" for COCO in mmdet, or "precision", "recall", "f1_score" and "support" for single-label datasets in mmcls.
- `--show`: Whether to show the evaluation result on the screen.
- `--show-dir`: The directory to save the evaluation result. (The results will be saved only if this argument is given)
- `--show-score-thr`: The score threshold for showing detection bounding boxes.
- `--device`: The device that the model runs on. Note that some backends restrict the device. For example, TensorRT must run on CUDA.
- `--cfg-options`: Extra or overridden settings that will be merged into the current deploy config.
- `--metric-options`: Custom options for evaluation. The key-value pairs in xxx=yyy format will be passed as kwargs to the dataset.evaluate() function.
- `--log2file`: Log evaluation results (and speed) to a file.
- `--batch-size`: The batch size for inference, which overrides `samples_per_gpu` in the data config. Defaults to `1`. Note that not all models support `batch_size > 1`. An example invocation using these arguments is shown below.
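
For reference, an invocation might look like the following. This is only a sketch, assuming the arguments above belong to the test script `tools/test.py` with the deploy config and model config passed as positional arguments; `${DEPLOY_CFG_PATH}`, `${MODEL_CFG_PATH}` and `${BACKEND_MODEL_FILES}` are placeholders for your own files.

```shell
# Hypothetical example: evaluate a converted TensorRT model on COCO bbox metric.
# Replace the placeholder variables with real paths from your own setup.
python tools/test.py \
    ${DEPLOY_CFG_PATH} \
    ${MODEL_CFG_PATH} \
    --model ${BACKEND_MODEL_FILES} \
    --metrics bbox \
    --out results.pkl \
    --device cuda:0 \
    --batch-size 1 \
    --log2file eval.log
```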