[![build](https://github.com/open-mmlab/mmocr/workflows/build/badge.svg)](https://github.com/open-mmlab/mmocr/actions) [![docs](https://readthedocs.org/projects/mmocr/badge/?version=latest)](https://mmocr.readthedocs.io/en/latest/?badge=latest) [![codecov](https://codecov.io/gh/open-mmlab/mmocr/branch/main/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmocr) [![license](https://img.shields.io/github/license/open-mmlab/mmocr.svg)](https://github.com/open-mmlab/mmocr/blob/main/LICENSE) [![PyPI](https://badge.fury.io/py/mmocr.svg)](https://pypi.org/project/mmocr/) [![Average time to resolve an issue](https://isitmaintained.com/badge/resolution/open-mmlab/mmocr.svg)](https://github.com/open-mmlab/mmocr/issues) [![Percentage of issues still open](https://isitmaintained.com/badge/open/open-mmlab/mmocr.svg)](https://github.com/open-mmlab/mmocr/issues)

[📘Documentation](https://mmocr.readthedocs.io/) | [🛠️Installation](https://mmocr.readthedocs.io/en/latest/install.html) | [👀Model Zoo](https://mmocr.readthedocs.io/en/latest/modelzoo.html) | [🆕Update News](https://mmocr.readthedocs.io/en/latest/changelog.html) | [🤔Reporting Issues](https://github.com/open-mmlab/mmocr/issues/new/choose)
English | [简体中文](README_zh-CN.md)
## Introduction

MMOCR is an open-source toolbox based on PyTorch and MMDetection for text detection, text recognition, and the corresponding downstream tasks, including key information extraction. It is part of the [OpenMMLab](https://openmmlab.com/) project.

The main branch works with **PyTorch 1.6+**.
### Major Features

- **Comprehensive Pipeline**

  The toolbox supports not only text detection and text recognition, but also their downstream tasks such as key information extraction.

- **Multiple Models**

  The toolbox supports a wide variety of state-of-the-art models for text detection, text recognition and key information extraction.

- **Modular Design**

  The modular design of MMOCR enables users to define their own optimizers, data preprocessors, and model components such as backbones, necks and heads, as well as losses. Please refer to [Getting Started](https://mmocr.readthedocs.io/en/latest/getting_started.html) for how to construct a customized model (an illustrative config sketch is also shown further below).

- **Numerous Utilities**

  The toolbox provides a comprehensive set of utilities for assessing model performance. It includes visualizers for images, ground truths and predicted bounding boxes, and a validation tool for evaluating checkpoints during training. It also includes data converters that demonstrate how to convert your own data into the annotation formats the toolbox supports.

## What's New

The stable version (0.6.2) and the preview version (1.0.0) are currently maintained in parallel, and the former will be deprecated by the end of 2022. We therefore recommend that users upgrade to [MMOCR 1.0](https://github.com/open-mmlab/mmocr/tree/1.x) to enjoy the fruitful new features and better performance brought by the new architecture. Check out our [maintenance plan](https://mmocr.readthedocs.io/en/dev-1.x/migration/overview.html) for how we will maintain them in the future.

### 💎 Stable version

v0.6.2 was released on 2022-10-14.

1. It is now possible to train and test models through the Python interface.
2. `ResizeOCR` now fully supports all the parameters of `mmcv.impad`.

Read the [Changelog](https://mmocr.readthedocs.io/en/latest/changelog.html) for more details!

### 🌟 Preview of 1.x version

A brand new version of **MMOCR v1.0.0rc2** was released on 2022-10-14:

1. **New engines**. MMOCR 1.x is based on [MMEngine](https://github.com/open-mmlab/mmengine), which provides a general and powerful runner that allows more flexible customizations and significantly simplifies the entry points of high-level interfaces.
2. **Unified interfaces**. As part of the OpenMMLab 2.0 projects, MMOCR 1.x unifies and refactors the interfaces and internal logic of training, testing, datasets, models, evaluation, and visualization. All OpenMMLab 2.0 projects share the same design in these interfaces and logic to allow the emergence of multi-task/modality algorithms.
3. **Cross-project calling**. Benefiting from the unified design, you can use models implemented in other OpenMMLab projects, such as MMDet. We provide an example of how to use MMDetection's Mask R-CNN through `MMDetWrapper`. Check our documentation for more details. More wrappers will be released in the future.
4. **Stronger visualization**. We provide a series of useful tools based on brand-new visualizers, making it more convenient for users to explore models and datasets.
5. **More documentation and tutorials**. We have added a set of documentation and tutorials to help users get started more smoothly. Read them [here](https://mmocr.readthedocs.io/en/dev-1.x/).

Find more new features in the [1.x branch](https://github.com/open-mmlab/mmocr/tree/1.x). Issues and PRs are welcome!
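To give a rough feel for the modular, config-driven design described under Major Features, here is a minimal sketch in the Python config style used across OpenMMLab projects. All component names and parameter values below are illustrative placeholders rather than a real MMOCR config; see the files under `configs/` and the Getting Started guide for working configurations.

```python
# Illustrative sketch only: the component types and parameters below are
# placeholders, not a real MMOCR config. Working configs live under configs/.
model = dict(
    type='ExampleTextDetector',                      # hypothetical detector
    backbone=dict(type='ResNet', depth=18),          # swap in your own backbone
    neck=dict(type='ExampleFPN', out_channels=256),  # hypothetical neck
    bbox_head=dict(
        type='ExampleHead',                          # hypothetical head
        loss=dict(type='ExampleLoss'),               # hypothetical loss
    ),
)
# Optimizers are configured the same way and can be swapped independently.
optimizer = dict(type='SGD', lr=0.007, momentum=0.9, weight_decay=0.0001)
```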
## Installation

MMOCR depends on [PyTorch](https://pytorch.org/), [MMCV](https://github.com/open-mmlab/mmcv) and [MMDetection](https://github.com/open-mmlab/mmdetection). Below are quick steps for installation. Please refer to the [Install Guide](https://mmocr.readthedocs.io/en/latest/install.html) for more detailed instructions.

```shell
conda create -n open-mmlab python=3.8 pytorch=1.10 cudatoolkit=11.3 torchvision -c pytorch -y
conda activate open-mmlab
pip3 install openmim
mim install mmcv-full
mim install mmdet
git clone https://github.com/open-mmlab/mmocr.git
cd mmocr
pip3 install -e .
```

## Get Started

Please see [Getting Started](https://mmocr.readthedocs.io/en/latest/getting_started.html) for the basic usage of MMOCR.
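For a quick taste of the end-to-end pipeline, the snippet below sketches text detection plus recognition through the high-level Python interface mentioned in the release notes above. It assumes the `MMOCR` convenience class in `mmocr.utils.ocr` and the `DB_r18`/`CRNN` model aliases; exact import paths, aliases and arguments may vary between releases, so treat this as a sketch and follow the Getting Started guide for the authoritative usage.

```python
# Minimal end-to-end OCR sketch (detection + recognition).
# Assumes the MMOCR convenience class in mmocr.utils.ocr; import path,
# model aliases and arguments may differ between releases.
from mmocr.utils.ocr import MMOCR

# Build a combined detector + recognizer (pretrained weights are fetched on demand).
ocr = MMOCR(det='DB_r18', recog='CRNN')

# Run the full pipeline on a demo image and save a visualization.
results = ocr.readtext('demo/demo_text_det.jpg', output='outputs/', print_result=True)
```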
## [Model Zoo](https://mmocr.readthedocs.io/en/latest/modelzoo.html)

Supported algorithms:

**Text Detection**

- [x] [DBNet](configs/textdet/dbnet/README.md) (AAAI'2020) / [DBNet++](configs/textdet/dbnetpp/README.md) (TPAMI'2022)
- [x] [Mask R-CNN](configs/textdet/maskrcnn/README.md) (ICCV'2017)
- [x] [PANet](configs/textdet/panet/README.md) (ICCV'2019)
- [x] [PSENet](configs/textdet/psenet/README.md) (CVPR'2019)
- [x] [TextSnake](configs/textdet/textsnake/README.md) (ECCV'2018)
- [x] [DRRG](configs/textdet/drrg/README.md) (CVPR'2020)
- [x] [FCENet](configs/textdet/fcenet/README.md) (CVPR'2021)
**Text Recognition**

- [x] [ABINet](configs/textrecog/abinet/README.md) (CVPR'2021)
- [x] [CRNN](configs/textrecog/crnn/README.md) (TPAMI'2016)
- [x] [MASTER](configs/textrecog/master/README.md) (PR'2021)
- [x] [NRTR](configs/textrecog/nrtr/README.md) (ICDAR'2019)
- [x] [RobustScanner](configs/textrecog/robust_scanner/README.md) (ECCV'2020)
- [x] [SAR](configs/textrecog/sar/README.md) (AAAI'2019)
- [x] [SATRN](configs/textrecog/satrn/README.md) (CVPR'2020 Workshop on Text and Documents in the Deep Learning Era)
- [x] [SegOCR](configs/textrecog/seg/README.md) (Manuscript'2021)
**Key Information Extraction**

- [x] [SDMG-R](configs/kie/sdmgr/README.md) (ArXiv'2021)
**Named Entity Recognition**

- [x] [Bert-Softmax](configs/ner/bert_softmax/README.md) (NAACL'2019)
Please refer to [model_zoo](https://mmocr.readthedocs.io/en/latest/modelzoo.html) for more details.

## Contributing

We appreciate all contributions to improve MMOCR. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guidelines.

## Acknowledgement

MMOCR is an open-source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback. We hope the toolbox and benchmark can serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new OCR methods.

## Citation

If you find this project useful in your research, please consider citing:

```bibtex
@article{mmocr2021,
    title={MMOCR: A Comprehensive Toolbox for Text Detection, Recognition and Understanding},
    author={Kuang, Zhanghui and Sun, Hongbin and Li, Zhizhong and Yue, Xiaoyu and Lin, Tsui Hin and Chen, Jianyong and Wei, Huaqiang and Zhu, Yiqin and Gao, Tong and Zhang, Wenwei and Chen, Kai and Zhang, Wayne and Lin, Dahua},
    journal={arXiv preprint arXiv:2108.06543},
    year={2021}
}
```

## License

This project is released under the [Apache 2.0 license](LICENSE).

## Projects in OpenMMLab

- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
- [MIM](https://github.com/open-mmlab/mim): MIM installs OpenMMLab packages.
- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark.
- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.
- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.
- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.
- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.
- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.
- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.
- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.
- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab few-shot learning toolbox and benchmark.
- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.
- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox.
- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.