hanrui1sensetime 9e227b228b
[UnitTest] mmocr unittest (#130)
* WIP test_mmocr 8 out of 20

* test_mmocr_export

* test mmocr apis

* add test data

* add mmocr model unittest 5 passed 1 failed

* finish mmocr unittest

* fix lint

* fix yapf

* fix isort

* fix flake8

* fix docformatter

* fix docformatter

* try to fix unittest after merge master

* Change test.py for backend.DEFAULT

* fix flake8

* fix ut

* fix yapf

* fix ut build

* fix yapf

* fix mmocr_export ut

* fix mmocr_apis ort not cuda

* remove explicit .forward

* remove backendwrapper

* simplify the crnn and dbnet config

* simplify instance_test.json

* add another case of decoder

* increase coverage of test_mmocr_models base_recognizer

* improve coverage

* improve encode_decoder coverage

* reply for grimoire codereview

* what if not check cuda?

* remove image data

* reply to runningleon code review

* fix fpnc

* fix lint

* try to fix CI UT error

* fix fpnc with and wo custom ops

* fix yapf

* skip fpnc when cuda is not ready in ci

* reply for code review

* reply for code review

* fix yapf

* reply for code review

* fix yapf

* fix conflict

* remove unmatched data path

* remove unnecessary comments
2021-10-25 10:15:57 +08:00

# Simplified DBNet config used by the mmocr unit tests: the backbone keeps
# the torchvision ResNet-18 initialization, while the neck/head channel widths
# and test image scales are reduced to keep the tests lightweight.
model = dict(
    type='DBNet',
    backbone=dict(
        type='mmdet.ResNet',
        depth=18,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=-1,
        norm_cfg=dict(type='BN', requires_grad=True),
        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet18'),
        norm_eval=False,
        style='caffe'),
    neck=dict(type='FPNC', in_channels=[2, 4, 8, 16], lateral_channels=8),
    bbox_head=dict(
        type='DBHead',
        text_repr_type='quad',
        in_channels=8,
        loss=dict(type='DBLoss', alpha=5.0, beta=10.0, bbce_loss=True)),
    train_cfg=None,
    test_cfg=None)
dataset_type = 'IcdarDataset'
data_root = 'data/icdar2015'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
# Test-time pipeline: a single small scale, no flipping.
test_pipeline = [
    dict(type='LoadImageFromFile', color_type='color_ignore_orientation'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(128, 64),
        flip=False,
        transforms=[
            dict(type='Resize', img_scale=(256, 128), keep_ratio=True),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
data = dict(
    samples_per_gpu=16,
    test_dataloader=dict(samples_per_gpu=1),
    test=dict(
        type=dataset_type,
        ann_file=data_root + '/instances_test.json',
        img_prefix=data_root + '/imgs',
        pipeline=test_pipeline))
evaluation = dict(interval=100, metric='hmean-iou')
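
For reference, a minimal sketch of how a config like this is loaded in a unit test, assuming mmcv's Config loader; the file path below is illustrative, not the actual location in the test suite:

from mmcv import Config

# Hypothetical path to the simplified DBNet config shipped with the tests.
cfg = Config.fromfile('tests/test_mmocr/data/dbnet.py')

# The reduced channel widths and small image scales keep model construction
# and a forward pass cheap enough for CI.
assert cfg.model.type == 'DBNet'
assert cfg.model.neck.in_channels == [2, 4, 8, 16]
assert cfg.data.test_dataloader.samples_per_gpu == 1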