mirror of https://github.com/open-mmlab/mmocr.git
update demo docs for batch inference (#181)
parent
7e0d48df11
commit
05aedf11d6
@ -16,6 +16,7 @@ python demo/ocr_image_demo.py demo/demo_text_det.jpg demo/output.jpg
- The predicted result will be saved as `demo/output.jpg`.
- To use other text detection and recognition algorithms, please set the arguments `--det-config`, `--det-ckpt`, `--recog-config`, and `--recog-ckpt`.
- To run text recognition in batch mode, please set the arguments `--batch-mode` and `--batch-size`.
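
Putting the bullet points above together, an end-to-end invocation might look like the following sketch. The config paths are borrowed from the PANet and SAR examples elsewhere in these docs, and the `--batch-size` value is illustrative; the checkpoint arguments are omitted here on the assumption that defaults are resolved when they are not given.

```shell
# Sketch: end-to-end OCR demo with batch-mode recognition.
# Config paths and batch size are illustrative, not prescriptive.
python demo/ocr_image_demo.py demo/demo_text_det.jpg demo/output.jpg \
    --det-config configs/textdet/panet/panet_r18_fpem_ffm_600e_icdar2015.py \
    --recog-config configs/textrecog/sar/sar_r31_parallel_decoder_academic.py \
    --batch-mode --batch-size 4
```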
### Remarks
@ -5,8 +5,8 @@
</div>
### Text Detection Image Demo
### Text Detection Single Image Demo
We provide a demo script to test a [single image](/demo/demo_text_det.jpg) for text detection with a single GPU.
@ -26,6 +26,28 @@ python demo/image_demo.py demo/demo_text_det.jpg configs/textdet/panet/panet_r18
The predicted result will be saved as `demo/demo_text_det_pred.jpg`.
### Text Detection Multiple Image Demo
We provide a demo script to test multiple images in batch mode for text detection with a single GPU.
*Text Detection Model Preparation:*
The pre-trained text detection model can be downloaded from [model zoo](https://mmocr.readthedocs.io/en/latest/modelzoo.html).
Take [PANet](/configs/textdet/panet/panet_r18_fpem_ffm_600e_icdar2015.py) as an example:
```shell
python demo/batch_image_demo.py ${CONFIG_FILE} ${CHECKPOINT_FILE} ${SAVE_PATH} --images ${IMAGE1} ${IMAGE2} [--imshow] [--device ${GPU_ID}]
```
Example:
```shell
python demo/batch_image_demo.py configs/textdet/panet/panet_r18_fpem_ffm_600e_icdar2015.py https://download.openmmlab.com/mmocr/textdet/panet/panet_r18_fpem_ffm_sbn_600e_icdar2015_20210219-42dbe46a.pth save_results --images demo/demo_text_det.jpg demo/demo_text_det.jpg
```
The predicted results will be saved in the folder `save_results`.
### Text Detection Webcam Demo
We also provide live demos from a webcam as in [mmdetection](https://github.com/open-mmlab/mmdetection/blob/a616886bf1e8de325e6906b8c76b6a4924ef5520/docs/1_exist_data_model.md).
@ -5,8 +5,7 @@
</div>
### Text Recognition Image Demo
### Text Recognition Single Image Demo
We provide a demo script to test a [single demo image](/demo/demo_text_recog.jpg) for text recognition with a single GPU.
@ -26,6 +25,28 @@ python demo/image_demo.py demo/demo_text_recog.jpg configs/textrecog/sar/sar_r31
The predicted result will be saved as `demo/demo_text_recog_pred.jpg`.
### Text Recognition Multiple Image Demo
We provide a demo script to test multiple images in batch mode for text recognition with a single GPU.
*Text Recognition Model Preparation:*
The pre-trained text recognition model can be downloaded from [model zoo](https://mmocr.readthedocs.io/en/latest/modelzoo.html).
Take [SAR](/configs/textrecog/sar/sar_r31_parallel_decoder_academic.py) as an example:
```shell
python demo/batch_image_demo.py ${CONFIG_FILE} ${CHECKPOINT_FILE} ${SAVE_PATH} --images ${IMAGE1} ${IMAGE2} [--imshow] [--device ${GPU_ID}]
```
Example:
```shell
python demo/batch_image_demo.py configs/textrecog/sar/sar_r31_parallel_decoder_academic.py https://download.openmmlab.com/mmocr/textrecog/sar/sar_r31_parallel_decoder_academic-dba3a4a3.pth save_results --images demo/demo_text_recog.jpg demo/demo_text_recog.jpg
```
The predicted results will be saved in the folder `save_results`.
### Text Recognition Webcam Demo
We also provide live demos from a webcam as in [mmdetection](https://github.com/open-mmlab/mmdetection/blob/a616886bf1e8de325e6906b8c76b6a4924ef5520/docs/1_exist_data_model.md).