---
comments: true
---
# Formula Recognition Module Tutorial
## I. Overview
The formula recognition module is a key component of an OCR (Optical Character Recognition) system, responsible for converting mathematical formulas in images into editable text or computer-readable formats. The performance of this module directly affects the accuracy and efficiency of the entire OCR system. The formula recognition module typically outputs LaTeX or MathML code of the mathematical formulas, which will be passed as input to the text understanding module for further processing.
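For example, an image containing the quadratic formula would typically be recognized as a LaTeX string along the lines of:

```latex
x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}
```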
## II. Supported Model List
| Model | Model Download Link | En-BLEU (%) | Zh-BLEU (%) | GPU Inference Time (ms)<br/>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br/>[Normal Mode / High-Performance Mode] | Model Storage Size | Introduction |
| --- | --- | --- | --- | --- | --- | --- | --- |
| UniMERNet | Inference Model/Training Model | 85.91 | 43.50 | 2266.96/- | -/- | 1.53 G | UniMERNet is a formula recognition model developed by Shanghai AI Lab. It uses Donut Swin as the encoder and MBartDecoder as the decoder. Trained on a dataset of one million samples covering simple, complex, scanned, and handwritten formulas, it significantly improves the recognition accuracy of real-world formulas. |
| PP-FormulaNet-S | Inference Model/Training Model | 87.00 | 45.71 | 202.25/- | -/- | 224 M | PP-FormulaNet is an advanced formula recognition model developed by the Baidu PaddlePaddle Vision Team. PP-FormulaNet-S uses PP-HGNetV2-B4 as its backbone network; through parallel masking and model distillation, it significantly improves inference speed while maintaining high recognition accuracy, making it suitable for applications requiring fast inference. |
| PP-FormulaNet-L | Inference Model/Training Model | 90.36 | 45.78 | 1976.52/- | -/- | 695 M | PP-FormulaNet-L uses Vary_VIT_B as its backbone network and is trained on a large-scale formula dataset, showing significant improvements in recognizing complex formulas compared to PP-FormulaNet-S. |
| PP-FormulaNet_plus-S | Inference Model/Training Model | 88.71 | 53.32 | 191.69/- | -/- | 248 M | PP-FormulaNet_plus is an enhanced version of PP-FormulaNet developed by the Baidu PaddlePaddle Vision Team. It is trained on a more diverse formula dataset than the original, including Chinese dissertations, professional books, textbooks, exam papers, and mathematics journals, which significantly improves its recognition capability. PP-FormulaNet_plus-S focuses on improving the recognition of English formulas, and the PP-FormulaNet_plus series as a whole performs exceptionally well on complex and diverse formula recognition tasks. |
| PP-FormulaNet_plus-M | Inference Model/Training Model | 91.45 | 89.76 | 1301.56/- | -/- | 592 M | PP-FormulaNet_plus-M adds support for Chinese formulas and raises the maximum number of predicted tokens from 1,024 to 2,560, greatly enhancing the recognition of complex formulas. |
| PP-FormulaNet_plus-L | Inference Model/Training Model | 92.22 | 90.64 | 1745.25/- | -/- | 698 M | Like PP-FormulaNet_plus-M, PP-FormulaNet_plus-L supports Chinese formulas and the increased 2,560-token prediction limit, further strengthening the recognition of complex formulas. |
| LaTeX_OCR_rec | Inference Model/Training Model | 74.55 | 39.96 | 1244.61/- | -/- | 99 M | LaTeX-OCR is a formula recognition algorithm based on an autoregressive large model. It uses Hybrid ViT as the backbone network and a transformer as the decoder, significantly improving the accuracy of formula recognition. |
The parameters related to prediction are described below:

| Parameter | Description | Type | Options | Default |
| --- | --- | --- | --- | --- |
| `input` | Input data to be predicted; supports multiple input types | `Python Var` / `str` / `list` | <ul><li><b>Python variable</b>, such as image data represented by <code>numpy.ndarray</code></li><li><b>File path</b>, such as the local path of an image file: <code>/root/data/img.jpg</code></li><li><b>URL link</b>, such as a URL to an image file</li><li><b>Local directory</b> containing the files to be predicted, such as <code>/root/data/</code></li><li><b>List</b> whose elements are any of the above, e.g. <code>[numpy.ndarray, numpy.ndarray]</code>, <code>["/root/data/img1.jpg", "/root/data/img2.jpg"]</code>, <code>["/root/data1", "/root/data2"]</code></li></ul> | `None` |
| `batch_size` | Batch size | `int` | Any integer | `1` |
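As a reference for how these prediction parameters are used, here is a minimal sketch of a module-level prediction call. It assumes the PaddleX `create_model()` entry point, and the model name and image file name are placeholders; substitute any model from the table above and your own input.

```python
from paddlex import create_model  # assumes the PaddleX module-level API

# Instantiate a formula recognition model by name (any model from the table above).
model = create_model(model_name="PP-FormulaNet_plus-M")

# `input` accepts a numpy.ndarray, a file path, a URL, a directory, or a list of these;
# "general_formula_rec_001.png" is a placeholder path.
output = model.predict(input="general_formula_rec_001.png", batch_size=1)

for res in output:
    res.print()                               # print the recognition result to the terminal
    res.save_to_img(save_path="./output/")    # save the visualized result image
    res.save_to_json(save_path="./output/")   # save the recognition result as JSON
```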
* The prediction results can be processed. Each result corresponds to a `Result` object, which supports printing, saving as an image, and saving as a `json` file:

| Method | Description | Parameter | Type | Details | Default |
| --- | --- | --- | --- | --- | --- |
| `print()` | Print the result to the terminal | `format_json` | `bool` | Whether to format the output with `JSON` indentation | `True` |
|  |  | `indent` | `int` | Indentation level used to beautify the `JSON` output; only effective when `format_json` is `True` | `4` |
|  |  | `ensure_ascii` | `bool` | Whether non-ASCII characters are escaped to Unicode. If `True`, all non-ASCII characters are escaped; if `False`, the original characters are kept. Only effective when `format_json` is `True` | `False` |
| `save_to_json()` | Save the result as a `json`-formatted file | `save_path` | `str` | Path to save the file. If it is a directory, the saved file name will match the input file type | `None` |
|  |  | `indent` | `int` | Indentation level used to beautify the `JSON` output; only effective when `format_json` is `True` | `4` |
|  |  | `ensure_ascii` | `bool` | Whether non-ASCII characters are escaped to Unicode. If `True`, all non-ASCII characters are escaped; if `False`, the original characters are kept. Only effective when `format_json` is `True` | `False` |
| `save_to_img()` | Save the result as an image file | `save_path` | `str` | Path to save the file. If it is a directory, the saved file name will match the input file type | `None` |
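Continuing the sketch above, the per-result options described in this table might be used as follows; the output directory is illustrative.

```python
for res in output:
    # Pretty-print with 4-space JSON indentation and keep non-ASCII characters as-is.
    res.print(format_json=True, indent=4, ensure_ascii=False)

    # If save_path is a directory, the saved file name follows the input file.
    res.save_to_json(save_path="./output/", indent=4, ensure_ascii=False)
    res.save_to_img(save_path="./output/")
```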
* In addition, you can also access the visualized image and prediction result via attributes, as follows:
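As an illustrative sketch only: in comparable PaddleX modules, the result object exposes a `json` attribute holding the prediction result as a dict and an `img` attribute holding the visualized image(s) as a dict. These attribute names are assumptions here and should be verified against the upstream documentation.

```python
for res in output:
    prediction = res.json   # assumed attribute: prediction result as a dict
    images = res.img        # assumed attribute: visualization image(s) as a dict
    print(prediction)
```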