`torch.jit` is used to save a TorchScript model. A TorchScript model can run independently of Python, is convenient to deploy in various environments, has few hardware dependencies, and can also reduce inference time. For more details, refer to the official tutorial: https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html
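As a quick illustration, a minimal TorchScript export with `torch.jit` (a sketch only; `resnet18` is just a stand-in model, not part of EasyCV):

```python
import torch
import torchvision

# Trace an eval-mode model into TorchScript; torch.jit.script also works
# for models with data-dependent control flow.
model = torchvision.models.resnet18(pretrained=True).eval()
example_input = torch.rand(1, 3, 224, 224)

with torch.no_grad():
    scripted = torch.jit.trace(model, example_input)

# The saved archive can later be loaded without the original Python class,
# from Python (torch.jit.load) or from C++ (libtorch).
scripted.save('resnet18.pt.jit')
reloaded = torch.jit.load('resnet18.pt.jit')
```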
**Blade**
The Blade model greatly accelerates inference by combining computational graph optimization, TensorRT/oneDNN, AI compiler optimization, and other techniques. For more details, refer to the official documentation: https://help.aliyun.com/document_detail/205129.html
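For reference, a minimal sketch of Blade optimization, assuming the open-source BladeDISC `torch_blade` frontend (the PAI-Blade enterprise API may differ; `resnet18` is again only a stand-in model):

```python
import torch
import torchvision
import torch_blade  # BladeDISC's PyTorch frontend (assumed available)

model = torchvision.models.resnet18(pretrained=True).eval()
example_input = torch.rand(1, 3, 224, 224)

with torch.no_grad():
    # torch_blade.optimize returns an optimized TorchScript module, so the
    # result is saved and loaded like any other jit model.
    blade_model = torch_blade.optimize(
        model, allow_tracing=True, model_inputs=(example_input, ))

torch.jit.save(blade_model, 'resnet18.pt.blade')
```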
Users can export the model with different combinations of `export_type`, `preprocess_jit`, and `use_trt_efficientnms`, as shown in the table below.
### Inference
Take the jit script model as an example and set the config file as below:
```python
export = dict(export_type='jit',
              preprocess_jit=True,
              static_opt=True,
              batch_size=1,
              use_trt_efficientnms=False)
```
Then, you can obtain the following exported files:
```shell
yolox_s.pt.jit
yolox_s.pt.jit.config.json
yolox_s.pt.preprocess  (exists only when preprocess_jit=True)
```
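These files are plain TorchScript artifacts, so outside of EasyCV they can be loaded directly with `torch.jit.load`. A minimal sketch (the 640x640 input shape is an assumption; the actual export settings are recorded in `yolox_s.pt.jit.config.json`):

```python
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = torch.jit.load('yolox_s.pt.jit', map_location=device)

with torch.no_grad():
    # Dummy input for a smoke test; take the real input contract (size,
    # dtype, normalization) from yolox_s.pt.jit.config.json.
    dummy = torch.rand(1, 3, 640, 640, device=device)
    outputs = model(dummy)
```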
For day-to-day inference on the exported model, you can simply use the EasyCV predictor. We highly recommend using it for all of the export types below. Taking YOLOX-s as an example, we test the end-to-end inference time of the different exported models on a single NVIDIA Tesla V100.
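A minimal sketch of predictor-based inference, assuming the `TorchYoloXPredictor` interface from the EasyCV YOLOX tutorial (the exact class name and output keys may differ across EasyCV versions; file paths are placeholders):

```python
import cv2
from easycv.predictors import TorchYoloXPredictor

# Path to the exported model; placeholder name.
predictor = TorchYoloXPredictor('yolox_s.pt.jit')

img = cv2.imread('test.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # the predictor expects RGB input

results = predictor.predict([img])
# Detection outputs are typically per-image dicts of boxes/scores/classes.
print(results[0])
```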
Note that only the TorchScript model can be exported end-to-end. For the exported Blade model, NMS cannot be wrapped into the model, so you should follow [postprocess.py](https://github.com/alibaba/EasyCV/tree/master/easycv/models/detection/utils/postprocess.py) to add the postprocessing step yourself.
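The authoritative implementation is in postprocess.py above; purely as an illustration of what this step looks like for YOLOX-style raw outputs (score filtering followed by class-aware NMS, here via `torchvision.ops.batched_nms`; this is not the actual EasyCV code):

```python
import torch
from torchvision.ops import batched_nms

def yolox_postprocess(prediction, conf_thres=0.25, iou_thres=0.45):
    """Illustrative postprocess for one image.

    prediction: (num_boxes, 5 + num_classes) tensor laid out as
    (cx, cy, w, h, objectness, class scores...).
    Returns (boxes_xyxy, scores, class_ids) after NMS.
    """
    # Convert (cx, cy, w, h) to (x1, y1, x2, y2).
    boxes = torch.empty_like(prediction[:, :4])
    boxes[:, 0] = prediction[:, 0] - prediction[:, 2] / 2
    boxes[:, 1] = prediction[:, 1] - prediction[:, 3] / 2
    boxes[:, 2] = prediction[:, 0] + prediction[:, 2] / 2
    boxes[:, 3] = prediction[:, 1] + prediction[:, 3] / 2

    cls_scores, cls_ids = prediction[:, 5:].max(dim=1)
    scores = prediction[:, 4] * cls_scores  # objectness * class confidence

    keep = scores > conf_thres
    boxes, scores, cls_ids = boxes[keep], scores[keep], cls_ids[keep]

    # Class-aware NMS: boxes of different classes never suppress each other.
    keep = batched_nms(boxes, scores, cls_ids, iou_thres)
    return boxes[keep], scores[keep], cls_ids[keep]
```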