[doc] fix doc (#8935)

* support min_area_rect crop

* add check_install

* fix requirement.txt

* fix check_install

* add lanms-neo for drrg

* fix

* fix doc

* fix

* support set gpu_id when inference

* fix #8855

* fix #8855

* opt slim doc

* fix doc bug
Double_V 2023-02-01 09:53:47 +08:00 committed by GitHub
parent d8c0dbdaae
commit 6cbd7d1ece
2 changed files with 5 additions and 2 deletions


@@ -54,4 +54,7 @@ python deploy/slim/quantization/export_model.py -c configs/det/ch_PP-OCRv3/ch_PP
### 5. Quantized Model Deployment
The parameters of the quantized model exported in the steps above are still stored as FP32, but their values are restricted to the int8 range; the exported model can then be converted with PaddleLite's opt model conversion tool.
For quantized model deployment, refer to [mobile-side model deployment](../../lite/readme.md)
For deploying the quantized model on mobile, refer to [mobile-side model deployment](../../lite/readme.md)
Note: after quantization-aware training the model parameters are still float32. When the model is converted to an inference model for prediction, there is no speedup compared with the unquantized model, because quantize/dequantize operators are inserted between the layers of the quantized model. To deploy the quantized model with acceleration, it is recommended to use TensorRT and set precision to INT8 to speed up prediction of the quantized model.
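As a companion to the note above, here is a minimal sketch of running the quantized model through the paddleocr wheel with TensorRT INT8 enabled; the `use_tensorrt` and `precision` keyword names mirror the flags in `tools/infer/utility.py` and are an assumption about the wheel's interface, so they may differ between PaddleOCR versions.

```python
# Hedged sketch: ask Paddle Inference to run the quantized model on the
# TensorRT backend in INT8 precision, which is where the speedup comes from.
from paddleocr import PaddleOCR

ocr = PaddleOCR(
    use_angle_cls=True,
    lang="ch",
    use_tensorrt=True,   # enable the TensorRT sub-graph engine (assumed kwarg)
    precision="int8",    # run quantized ops in INT8 instead of FP32 (assumed kwarg)
)
result = ocr.ocr("PaddleOCR/doc/imgs/11.jpg", cls=True)
```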

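For the PaddleLite conversion mentioned at the top of this hunk, a minimal sketch using Paddle-Lite's Python opt API might look like the following; the input/output paths and the `arm` target are illustrative, and the method names follow Paddle-Lite's documented opt interface, so treat them as assumptions that may vary by version.

```python
# Hedged sketch: convert the exported quantized inference model with
# Paddle-Lite's opt tool (Python API). Paths and the target platform
# below are placeholders, not taken from this repository.
from paddlelite.lite import Opt

opt = Opt()
opt.set_model_file("./inference/det_quant/inference.pdmodel")    # exported model structure
opt.set_param_file("./inference/det_quant/inference.pdiparams")  # exported (int8-range) weights
opt.set_valid_places("arm")                                      # deploy target, e.g. ARM CPU
opt.set_optimize_out("./inference/det_quant_opt")                # output model prefix
opt.run()
```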

@@ -335,7 +335,7 @@ ocr = PaddleOCR(use_angle_cls=True, lang="ch") # need to run only once to downlo
img_path = 'PaddleOCR/doc/imgs/11.jpg'
img = cv2.imread(img_path)
# img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # uncomment this line if your own trained model supports grayscale images
result = ocr.ocr(img_path, cls=True)
result = ocr.ocr(img, cls=True)
for idx in range(len(result)):
res = result[idx]
for line in res:
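
For context, a self-contained version of the snippet this hunk touches might look like the following; the loop body and the result layout comment are illustrative additions that only restore what the excerpt cuts off.

```python
# Hedged, self-contained version of the usage shown in the diff above:
# read the image with OpenCV and pass the ndarray (not the path) to ocr.ocr,
# so any preprocessing such as a grayscale conversion actually takes effect.
import cv2
from paddleocr import PaddleOCR

ocr = PaddleOCR(use_angle_cls=True, lang="ch")  # models are downloaded on first run

img_path = 'PaddleOCR/doc/imgs/11.jpg'
img = cv2.imread(img_path)
# img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # uncomment if your model was trained on grayscale images

result = ocr.ocr(img, cls=True)
for idx in range(len(result)):
    res = result[idx]
    for line in res:
        print(line)  # each entry holds the text box and the (text, confidence) pair
```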