To download the dataset, please refer to [prepare_data.md](../prepare_data.md).
The data is used to train and test the compression model, so its format should be the same as the training data format.
### COCO format
To evaluate a model with COCO format data, refer to [configs/detection/yolox/yolox_s_8xb16_300e_coco.py](../../configs/detection/yolox/yolox_s_8xb16_300e_coco.py) for configuration details.
### PAI-Itag detection format
To evaluate a model with PAI-Itag detection format data, refer to [configs/detection/yolox/yolox_s_8xb16_300e_coco_pai.py](../../configs/detection/yolox/yolox_s_8xb16_300e_coco_pai.py) for configuration details.
## Local & PAI-DSW
To use COCO format data, use the config file `configs/detection/yolox/yolox_s_8xb16_300e_coco.py`.

To use PAI-Itag format data, use the config file `configs/detection/yolox/yolox_s_8xb16_300e_coco_pai.py`.
### Compression
**Quantize:**
This command quantizes a YOLOX model; the quantized model will be saved in `work_dir`.
```shell
python tools/quantize.py \
${CONFIG_PATH} \
${MODEL_PATH} \
--work_dir ${WORK_DIR} \
--device ${DEVICE} \
--backend ${BACKEND}
```
<details>
<summary>Arguments</summary>

- `CONFIG_PATH`: the config file path of a detection method
- `MODEL_PATH`: the model to be quantized
- `WORK_DIR`: your path to save quantized models and logs
- `DEVICE`: the device the quantized model uses (cpu/arm)
- `BACKEND`: the quantized model's framework (PyTorch/MNN)
</details>
**Examples:**
Edit the `data_root` path in `${CONFIG_PATH}` to your own data path.
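A concrete invocation might look like the following (a sketch: the checkpoint filename `yolox_s.pth` and the work directory are placeholders you should replace with your own paths):

```shell
# Quantize a YOLOX-S model trained on COCO format data,
# targeting CPU inference with the PyTorch backend.
# Paths below are examples; substitute your own checkpoint and output dir.
python tools/quantize.py \
    configs/detection/yolox/yolox_s_8xb16_300e_coco.py \
    work_dirs/yolox_s/yolox_s.pth \
    --work_dir work_dirs/yolox_s_quantize \
    --device cpu \
    --backend PyTorch
```

After the run completes, look for the quantized checkpoint and logs under the directory passed to `--work_dir`.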