mirror of https://github.com/open-mmlab/mmocr.git
[Fix] Fix TotalText Anno version issue (#945)
* fix tt converter version issue; fix typos in docs
* remove incorrect descriptions
* fix docstring & incorrect file name
* fix docstring indentation

pull/971/head
parent e51d8533ea
commit 06b73cf71a
@@ -255,7 +255,7 @@ inconsistency results in false examples in the training set. Therefore, users sh
 ## Totaltext

 - Step0: Read [Important Note](#important-note)
-- Step1: Download `totaltext.zip` from [github dataset](https://github.com/cs-chan/Total-Text-Dataset/tree/master/Dataset) and `groundtruth_text.zip` from [github Groundtruth](https://github.com/cs-chan/Total-Text-Dataset/tree/master/Groundtruth/Text) (Our totaltext_converter.py supports groundtruth with both .mat and .txt format).
+- Step1: Download `totaltext.zip` from [github dataset](https://github.com/cs-chan/Total-Text-Dataset/tree/master/Dataset) and `groundtruth_text.zip` or `TT_new_train_GT.zip` (if you prefer to use the latest version of training annotations) from [github Groundtruth](https://github.com/cs-chan/Total-Text-Dataset/tree/master/Groundtruth/Text) (Our totaltext_converter.py supports groundtruth with both .mat and .txt format).

 ```bash
 mkdir totaltext && cd totaltext
@@ -267,17 +267,21 @@ inconsistency results in false examples in the training set. Therefore, users sh
 mv Images/Train imgs/training
 mv Images/Test imgs/test

-# For annotations
+# For legacy training and test annotations
 unzip groundtruth_text.zip
-cd Groundtruth
-mv Polygon/Train ../annotations/training
-mv Polygon/Test ../annotations/test
+mv Groundtruth/Polygon/Train annotations/training
+mv Groundtruth/Polygon/Test annotations/test
+
+# Using the latest training annotations
+# WARNING: Delete legacy train annotations before running the following command.
+unzip TT_new_train_GT.zip
+mv Train annotations/training
 ```

 - Step2: Generate `instances_training.json` and `instances_test.json` with the following command:

 ```bash
-python tools/data/textdet/totaltext_converter.py /path/to/totaltext -o /path/to/totaltext --split-list training test
+python tools/data/textdet/totaltext_converter.py /path/to/totaltext
 ```

 - The resulting directory structure looks like the following:
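
If you are unsure which release a downloaded `.mat` file belongs to, note that the two versions store their polygons under different keys; this is exactly what the updated converter checks (see the `get_contours_mat` hunks below). A minimal sketch, assuming `scipy` is installed and with an illustrative helper name:

```python
import scipy.io as scio

def detect_anno_version(gt_path):
    """Report which TotalText .mat annotation release a file comes from."""
    data = scio.loadmat(gt_path)
    if 'gt' in data:      # key used by the latest release (TT_new_train_GT)
        return 'latest'
    if 'polygt' in data:  # key used by the legacy release
        return 'legacy'
    raise ValueError(f'unrecognized annotation format: {gt_path}')
```
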
@@ -507,7 +511,7 @@ inconsistency results in false examples in the training set. Therefore, users sh
 │   └── instances_val.json
 ```

-### LSVT
+## LSVT

 - Step1: Download [train_full_images_0.tar.gz](https://dataset-bj.cdn.bcebos.com/lsvt/train_full_images_0.tar.gz), [train_full_images_1.tar.gz](https://dataset-bj.cdn.bcebos.com/lsvt/train_full_images_1.tar.gz), and [train_full_labels.json](https://dataset-bj.cdn.bcebos.com/lsvt/train_full_labels.json) to `lsvt/`.
@@ -705,8 +709,6 @@ inconsistency results in false examples in the training set. Therefore, users sh
 ```bash
 # Annotations of ReCTS test split is not publicly available, split a validation
 # set by adding --val-ratio 0.2
-# Add --preserve-vertical to preserve vertical texts for training, otherwise
-# vertical images will be filtered and stored in PATH/TO/rects/ignores
 python tools/data/textdet/rects_converter.py PATH/TO/rects --nproc 4 --val-ratio 0.2
 ```
@@ -853,11 +855,10 @@ inconsistency results in false examples in the training set. Therefore, users sh

 - Step1: Download `train_images.zip.001`, `train_images.zip.002`, and `train_gts.zip` from the [homepage](https://rctw.vlrlab.net/dataset.html), extract the zips to `rctw/imgs` and `rctw/annotations`, respectively.

-- Step2: Generate `instances_training.json` and `instances_val.json` (optional). Since the original dataset doesn't have a validation set, you may specify `--val-ratio` to split the dataset. E.g., if val-ratio is 0.2, then 20% of the data are left out as the validation set in this example.
+- Step2: Generate `instances_training.json` and `instances_val.json` (optional). Since the test annotations are not publicly available, you may specify `--val-ratio` to split the dataset. E.g., if val-ratio is 0.2, then 20% of the data are left out as the validation set in this example.

 ```bash
 # Annotations of RCTW test split is not publicly available, split a validation set by adding --val-ratio 0.2
-# Add --preserve-vertical to preserve vertical texts for training, otherwise vertical images will be filtered and stored in PATH/TO/rctw/ignores
 python tools/data/textdet/rctw_converter.py PATH/TO/rctw --nproc 4
 ```
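
The `--val-ratio` behavior described in Step2 is a plain fractional holdout; a minimal sketch of the semantics, assuming a simple tail split (the converter's actual sampling may differ):

```python
def split_train_val(samples, val_ratio=0.2):
    """Hold out the last val_ratio fraction of samples as a validation set."""
    n_val = int(len(samples) * val_ratio)
    return samples[:len(samples) - n_val], samples[len(samples) - n_val:]
```
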
@@ -392,7 +392,7 @@ should be as follows:
 ## Totaltext

-- Step1: Download `totaltext.zip` from [github dataset](https://github.com/cs-chan/Total-Text-Dataset/tree/master/Dataset) and `groundtruth_text.zip` from [github Groundtruth](https://github.com/cs-chan/Total-Text-Dataset/tree/master/Groundtruth/Text) (Our totaltext_converter.py supports groundtruth with both .mat and .txt format).
+- Step1: Download `totaltext.zip` from [github dataset](https://github.com/cs-chan/Total-Text-Dataset/tree/master/Dataset) and `groundtruth_text.zip` or `TT_new_train_GT.zip` (if you prefer to use the latest version of training annotations) from [github Groundtruth](https://github.com/cs-chan/Total-Text-Dataset/tree/master/Groundtruth/Text) (Our totaltext_converter.py supports groundtruth with both .mat and .txt format).

 ```bash
 mkdir totaltext && cd totaltext
@@ -404,27 +404,28 @@ should be as follows:
 mv Images/Train imgs/training
 mv Images/Test imgs/test

-# For annotations
+# For legacy training and test annotations
 unzip groundtruth_text.zip
-cd Groundtruth
-mv Polygon/Train ../annotations/training
-mv Polygon/Test ../annotations/test
+mv Groundtruth/Polygon/Train annotations/training
+mv Groundtruth/Polygon/Test annotations/test
+
+# Using the latest training annotations
+# WARNING: Delete legacy train annotations before running the following command.
+unzip TT_new_train_GT.zip
+mv Train annotations/training
 ```

 - Step2: Generate cropped images, `train_label.txt` and `test_label.txt` with the following command (the cropped images will be saved to `data/totaltext/dst_imgs/`):

 ```bash
-python tools/data/textrecog/totaltext_converter.py /path/to/totaltext -o /path/to/totaltext --split-list training test
+python tools/data/textrecog/totaltext_converter.py /path/to/totaltext
 ```

-- After running the above codes, the directory structure
-should be as follows:
+- After running the above codes, the directory structure should be as follows:

 ```text
-├── Totaltext
-│   ├── imgs
-│   ├── annotations
+├── totaltext
+│   ├── dst_imgs
 │   ├── train_label.txt
 │   └── test_label.txt
 ```
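
For orientation, the recognition labels written in Step2 use mmocr 0.x's plain-text format: one cropped-image path and its transcription per line, separated by a space. Roughly (file names and words made up; the exact path prefix depends on the converter):

```text
training/img11_0.png SHOP
training/img11_1.png OPEN
```
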
@@ -635,7 +636,7 @@ The LV dataset has already provided cropped images and the corresponding annotat
 │   └── test_label.jsonl
 ```

-### LSVT
+## LSVT

 - Step1: Download [train_full_images_0.tar.gz](https://dataset-bj.cdn.bcebos.com/lsvt/train_full_images_0.tar.gz), [train_full_images_1.tar.gz](https://dataset-bj.cdn.bcebos.com/lsvt/train_full_images_1.tar.gz), and [train_full_labels.json](https://dataset-bj.cdn.bcebos.com/lsvt/train_full_labels.json) to `lsvt/`.
@@ -655,7 +656,7 @@ The LV dataset has already provided cropped images and the corresponding annotat
 rm train_full_images_0.tar.gz && rm train_full_images_1.tar.gz && rm -rf train_full_images_1
 ```

-- Step2: Generate `train_label.jsonl` and `val_label.jsonl` (optional) with the following command:
+- Step2: Generate `train_label.jsonl` and `val_label.jsonl` (optional) with the following command:

 ```bash
 # Annotations of LSVT test split is not publicly available, split a validation
@@ -672,7 +673,7 @@ The LV dataset has already provided cropped images and the corresponding annotat
 │   ├── crops
 │   ├── ignores
 │   ├── train_label.jsonl
-│   ├── val_label.jsonl (optional)
+│   └── val_label.jsonl (optional)
 ```

 ## FUNSD

@@ -15,15 +15,15 @@ from shapely.geometry import Polygon
 from mmocr.utils import convert_annotations


-def collect_files(img_dir, gt_dir, split):
+def collect_files(img_dir, gt_dir):
     """Collect all images and their corresponding groundtruth files.

     Args:
-        img_dir(str): The image directory
-        gt_dir(str): The groundtruth directory
-        split(str): The split of dataset. Namely: training or test
+        img_dir (str): The image directory
+        gt_dir (str): The groundtruth directory
+
     Returns:
-        files(list): The list of tuples (img_file, groundtruth_file)
+        files (list): The list of tuples (img_file, groundtruth_file)
     """
     assert isinstance(img_dir, str)
     assert img_dir
@@ -54,10 +54,11 @@ def collect_annotations(files, nproc=1):
     """Collect the annotation information.

     Args:
-        files(list): The list of tuples (image_file, groundtruth_file)
-        nproc(int): The number of process to collect annotations
+        files (list): The list of tuples (image_file, groundtruth_file)
+        nproc (int): The number of process to collect annotations
+
     Returns:
-        images(list): The list of image information dicts
+        images (list): The list of image information dicts
     """
     assert isinstance(files, list)
     assert isinstance(nproc, int)
@@ -75,12 +76,13 @@ def get_contours_mat(gt_path):
     """Get the contours and words for each ground_truth mat file.

     Args:
-        gt_path(str): The relative path of the ground_truth mat file
+        gt_path (str): The relative path of the ground_truth mat file
+
     Returns:
-        contours(list[lists]): A list of lists of contours
-            for the text instances
-        words(list[list]): A list of lists of words (string)
-            for the text instances
+        contours (list[lists]): A list of lists of contours
+            for the text instances
+        words (list[list]): A list of lists of words (string)
+            for the text instances
     """
     assert isinstance(gt_path, str)
@@ -88,7 +90,13 @@ def get_contours_mat(gt_path):
     words = []
     data = scio.loadmat(gt_path)
-    data_polygt = data.get('polygt', data['gt'])
+    # 'gt' for the latest version; 'polygt' for the legacy version
+    keys = data.keys()
+    if 'gt' in keys:
+        data_polygt = data.get('gt')
+    elif 'polygt' in keys:
+        data_polygt = data.get('polygt')
+    else:
+        raise NotImplementedError

     for i, lines in enumerate(data_polygt):
         X = np.array(lines[1])
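
One step this hunk takes for granted: Python evaluates a `dict.get` default eagerly, so the removed one-liner raised `KeyError: 'gt'` on legacy files that contain only `polygt`; presumably this is the version issue named in the commit title, and the explicit branching avoids it. A minimal repro:

```python
# Legacy-style contents: only 'polygt' is present
data = {'polygt': ['annotation rows']}
data.get('polygt', data['gt'])  # raises KeyError: 'gt'; the default is evaluated first
```
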
@@ -96,15 +104,11 @@ def get_contours_mat(gt_path):

         point_num = len(X[0])
         word = lines[4]
-        if len(word) == 0:
-            word = '???'
+        if len(word) == 0 or word == '#':
+            word = '###'
         else:
             word = word[0]

-        if word == '#':
-            word = '###'
-            continue
-
         words.append(word)

         arr = np.concatenate([X, Y]).T
@@ -121,9 +125,10 @@ def load_mat_info(img_info, gt_file):
     """Load the information of one ground truth in .mat format.

     Args:
-        img_info(dict): The dict of only the image information
-        gt_file(str): The relative path of the ground_truth mat
-            file for one image
+        img_info (dict): The dict of only the image information
+        gt_file (str): The relative path of the ground_truth mat
+            file for one image
+
     Returns:
         img_info(dict): The dict of the img and annotation information
     """
@@ -138,7 +143,7 @@ def load_mat_info(img_info, gt_file):
         category_id = 1
         coordinates = np.array(contour).reshape(-1, 2)
         polygon = Polygon(coordinates)
-        iscrowd = 0
+        iscrowd = 1 if text == '###' else 0

         area = polygon.area
         # convert to COCO style XYWH format
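
For context on the `iscrowd` switch here and in `load_txt_info` below: COCO-style evaluation ignores regions flagged `iscrowd=1` instead of counting them as misses, which is the right treatment for illegible `###` instances. A sketch of the record being assembled, with the field set assumed from the surrounding hunks and the helper name illustrative:

```python
def make_coco_anno(text, bbox_xywh, area):
    """Build a COCO-style instance record for one text region."""
    return dict(
        iscrowd=1 if text == '###' else 0,  # '###' marks illegible text
        category_id=1,
        bbox=list(bbox_xywh),  # COCO-style XYWH, derived from the polygon
        area=area,
    )
```
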
@@ -165,14 +170,15 @@ def process_line(line, contours, words):
     Args:
         line(str): The line in gt file containing annotation info
         contours(list[lists]): A list of lists of contours
-        for the text instances
+            for the text instances
         words(list[list]): A list of lists of words (string)
-        for the text instances
+            for the text instances
+
     Returns:
-        contours(list[lists]): A list of lists of contours
-            for the text instances
-        words(list[list]): A list of lists of words (string)
-            for the text instances
+        contours (list[lists]): A list of lists of contours
+            for the text instances
+        words (list[list]): A list of lists of words (string)
+            for the text instances
     """

     line = '{' + line.replace('[[', '[').replace(']]', ']') + '}'
@@ -186,7 +192,7 @@ def process_line(line, contours, words):
     Y = np.array([ann_dict['y']])

     if len(ann_dict['transcriptions']) == 0:
-        word = '???'
+        word = '###'
     else:
         word = ann_dict['transcriptions'][0]
         if len(ann_dict['transcriptions']) > 1:
@@ -211,12 +217,13 @@ def get_contours_txt(gt_path):
     """Get the contours and words for each ground_truth txt file.

     Args:
-        gt_path(str): The relative path of the ground_truth mat file
+        gt_path (str): The relative path of the ground_truth mat file
+
     Returns:
-        contours(list[lists]): A list of lists of contours
-            for the text instances
-        words(list[list]): A list of lists of words (string)
-            for the text instances
+        contours (list[lists]): A list of lists of contours
+            for the text instances
+        words (list[list]): A list of lists of words (string)
+            for the text instances
     """
     assert isinstance(gt_path, str)
@@ -250,9 +257,10 @@ def load_txt_info(gt_file, img_info):
     """Load the information of one ground truth in .txt format.

     Args:
-        img_info(dict): The dict of only the image information
-        gt_file(str): The relative path of the ground_truth mat
-            file for one image
+        img_info (dict): The dict of only the image information
+        gt_file (str): The relative path of the ground_truth mat
+            file for one image
+
     Returns:
         img_info(dict): The dict of the img and annotation information
     """
@@ -265,7 +273,7 @@ def load_txt_info(gt_file, img_info):
         category_id = 1
         coordinates = np.array(contour).reshape(-1, 2)
         polygon = Polygon(coordinates)
-        iscrowd = 0
+        iscrowd = 1 if text == '###' else 0

         area = polygon.area
         # convert to COCO style XYWH format
@@ -290,10 +298,11 @@ def load_png_info(gt_file, img_info):
     """Load the information of one ground truth in .png format.

     Args:
-        gt_file(str): The relative path of the ground_truth file for one image
-        img_info(dict): The dict of only the image information
+        gt_file (str): The relative path of the ground_truth file for one image
+        img_info (dict): The dict of only the image information
+
     Returns:
-        img_info(dict): The dict of the img and annotation information
+        img_info (dict): The dict of the img and annotation information
     """
     assert isinstance(gt_file, str)
     assert isinstance(img_info, dict)
@@ -334,14 +343,15 @@ def load_img_info(files):
     """Load the information of one image.

     Args:
-        files(tuple): The tuple of (img_file, groundtruth_file)
+        files (tuple): The tuple of (img_file, groundtruth_file)
+
     Returns:
-        img_info(dict): The dict of the img and annotation information
+        img_info (dict): The dict of the img and annotation information
     """
     assert isinstance(files, tuple)

     img_file, gt_file = files
-    # read imgs with ignoring orientations
+    # read imgs while ignoring orientations
     img = mmcv.imread(img_file, 'unchanged')

     split_name = osp.basename(osp.dirname(img_file))
@@ -366,15 +376,9 @@ def load_img_info(files):
 def parse_args():
     parser = argparse.ArgumentParser(
         description='Convert totaltext annotations to COCO format')
-    parser.add_argument('root_path', help='totaltext root path')
-    parser.add_argument('-o', '--out-dir', help='output path')
-    parser.add_argument(
-        '--split-list',
-        nargs='+',
-        help='a list of splits. e.g., "--split_list training test"')
-
+    parser.add_argument('root_path', help='Totaltext root path')
     parser.add_argument(
-        '--nproc', default=1, type=int, help='number of process')
+        '--nproc', default=1, type=int, help='Number of process')
     args = parser.parse_args()
     return args
@@ -382,14 +386,11 @@ def parse_args():
 def main():
     args = parse_args()
     root_path = args.root_path
-    out_dir = args.out_dir if args.out_dir else root_path
-    mmcv.mkdir_or_exist(out_dir)
-
     img_dir = osp.join(root_path, 'imgs')
     gt_dir = osp.join(root_path, 'annotations')

     set_name = {}
-    for split in args.split_list:
+    for split in ['training', 'test']:
         set_name.update({split: 'instances_' + split + '.json'})
         assert osp.exists(osp.join(img_dir, split))
@@ -398,9 +399,9 @@ def main():
         with mmcv.Timer(
                 print_tmpl='It takes {}s to convert totaltext annotation'):
             files = collect_files(
-                osp.join(img_dir, split), osp.join(gt_dir, split), split)
+                osp.join(img_dir, split), osp.join(gt_dir, split))
             image_infos = collect_annotations(files, nproc=args.nproc)
-            convert_annotations(image_infos, osp.join(out_dir, json_name))
+            convert_annotations(image_infos, osp.join(root_path, json_name))


 if __name__ == '__main__':
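
With `-o/--out-dir` and `--split-list` removed, the detection converter now walks both splits on its own and writes the JSON files back into the root path; the invocation from the docs above reduces to something like (path illustrative):

```bash
python tools/data/textdet/totaltext_converter.py data/totaltext --nproc 4
```
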
@@ -15,13 +15,13 @@ from mmocr.datasets.pipelines.crop import crop_img
 from mmocr.utils.fileio import list_to_file


-def collect_files(img_dir, gt_dir, split):
+def collect_files(img_dir, gt_dir):
     """Collect all images and their corresponding groundtruth files.

     Args:
-        img_dir(str): The image directory
-        gt_dir(str): The groundtruth directory
-        split(str): The split of dataset. Namely: training or test
+        img_dir (str): The image directory
+        gt_dir (str): The groundtruth directory
+
     Returns:
         files(list): The list of tuples (img_file, groundtruth_file)
     """
@@ -55,10 +55,11 @@ def collect_annotations(files, nproc=1):
     """Collect the annotation information.

     Args:
-        files(list): The list of tuples (image_file, groundtruth_file)
-        nproc(int): The number of process to collect annotations
+        files (list): The list of tuples (image_file, groundtruth_file)
+        nproc (int): The number of process to collect annotations
+
     Returns:
-        images(list): The list of image information dicts
+        images (list): The list of image information dicts
     """
     assert isinstance(files, list)
     assert isinstance(nproc, int)
@@ -76,19 +77,25 @@ def get_contours_mat(gt_path):
     """Get the contours and words for each ground_truth mat file.

     Args:
-        gt_path(str): The relative path of the ground_truth mat file
+        gt_path (str): The relative path of the ground_truth mat file
+
     Returns:
-        contours(list[lists]): A list of lists of contours
-            for the text instances
-        words(list[list]): A list of lists of words (string)
-            for the text instances
+        contours (list[lists]): A list of lists of contours
+            for the text instances
+        words (list[list]): A list of lists of words (string)
+            for the text instances
     """
     assert isinstance(gt_path, str)

     contours = []
     words = []
     data = scio.loadmat(gt_path)
-    data_polygt = data['polygt']
+    # 'gt' for the latest version; 'polygt' for the legacy version
+    keys = data.keys()
+    if 'gt' in keys:
+        data_polygt = data.get('gt')
+    elif 'polygt' in keys:
+        data_polygt = data.get('polygt')

     for i, lines in enumerate(data_polygt):
         X = np.array(lines[1])
@@ -96,15 +103,11 @@ def get_contours_mat(gt_path):

         point_num = len(X[0])
         word = lines[4]
-        if len(word) == 0:
-            word = '???'
+        if len(word) == 0 or word == '#':
+            word = '###'
         else:
             word = word[0]

-        if word == '#':
-            word = '###'
-            continue
-
         words.append(word)

         arr = np.concatenate([X, Y]).T
@@ -121,9 +124,10 @@ def load_mat_info(img_info, gt_file):
     """Load the information of one ground truth in .mat format.

     Args:
-        img_info(dict): The dict of only the image information
-        gt_file(str): The relative path of the ground_truth mat
-            file for one image
+        img_info (dict): The dict of only the image information
+        gt_file (str): The relative path of the ground_truth mat
+            file for one image
+
     Returns:
         img_info(dict): The dict of the img and annotation information
     """
@@ -133,7 +137,7 @@ def load_mat_info(img_info, gt_file):
     contours, words = get_contours_mat(gt_file)
     anno_info = []
     for contour, word in zip(contours, words):
-        if contour.shape[0] == 2:
+        if contour.shape[0] == 2 or word == '###':
             continue
         coordinates = np.array(contour).reshape(-1, 2)
         polygon = Polygon(coordinates)
@@ -152,16 +156,17 @@ def process_line(line, contours, words):
     """Get the contours and words by processing each line in the gt file.

     Args:
-        line(str): The line in gt file containing annotation info
-        contours(list[lists]): A list of lists of contours
-        for the text instances
-        words(list[list]): A list of lists of words (string)
-        for the text instances
+        line (str): The line in gt file containing annotation info
+        contours (list[lists]): A list of lists of contours
+            for the text instances
+        words (list[list]): A list of lists of words (string)
+            for the text instances
+
     Returns:
-        contours(list[lists]): A list of lists of contours
-            for the text instances
-        words(list[list]): A list of lists of words (string)
-            for the text instances
+        contours (list[lists]): A list of lists of contours
+            for the text instances
+        words (list[list]): A list of lists of words (string)
+            for the text instances
     """

     line = '{' + line.replace('[[', '[').replace(']]', ']') + '}'
@@ -175,7 +180,7 @@ def process_line(line, contours, words):
     Y = np.array([ann_dict['y']])

     if len(ann_dict['transcriptions']) == 0:
-        word = '???'
+        word = '###'
     else:
         word = ann_dict['transcriptions'][0]
         if len(ann_dict['transcriptions']) > 1:
@@ -200,12 +205,13 @@ def get_contours_txt(gt_path):
     """Get the contours and words for each ground_truth txt file.

     Args:
-        gt_path(str): The relative path of the ground_truth mat file
+        gt_path (str): The relative path of the ground_truth mat file
+
     Returns:
-        contours(list[lists]): A list of lists of contours
-            for the text instances
-        words(list[list]): A list of lists of words (string)
-            for the text instances
+        contours (list[lists]): A list of lists of contours
+            for the text instances
+        words (list[list]): A list of lists of words (string)
+            for the text instances
     """
     assert isinstance(gt_path, str)
@@ -231,10 +237,8 @@ def get_contours_txt(gt_path):
             contours, words = process_line(tmp_line, contours, words)

     for word in words:
-
         if word == '#':
             word = '###'
-            continue

     return contours, words
|
|||
"""Load the information of one ground truth in .txt format.
|
||||
|
||||
Args:
|
||||
img_info(dict): The dict of only the image information
|
||||
gt_file(str): The relative path of the ground_truth mat
|
||||
file for one image
|
||||
img_info (dict): The dict of only the image information
|
||||
gt_file (str): The relative path of the ground_truth mat
|
||||
file for one image
|
||||
|
||||
Returns:
|
||||
img_info(dict): The dict of the img and annotation information
|
||||
img_info (dict): The dict of the img and annotation information
|
||||
"""
|
||||
|
||||
contours, words = get_contours_txt(gt_file)
|
||||
anno_info = []
|
||||
for contour, word in zip(contours, words):
|
||||
if contour.shape[0] == 2:
|
||||
if contour.shape[0] == 2 or word == '###':
|
||||
continue
|
||||
coordinates = np.array(contour).reshape(-1, 2)
|
||||
polygon = Polygon(coordinates)
|
||||
|
@@ -272,10 +277,10 @@ def generate_ann(root_path, split, image_infos):
     """Generate cropped annotations and label txt file.

     Args:
-        root_path(str): The relative path of the totaltext file
-        split(str): The split of dataset. Namely: training or test
-        image_infos(list[dict]): A list of dicts of the img and
-            annotation information
+        root_path (str): The relative path of the totaltext file
+        split (str): The split of dataset. Namely: training or test
+        image_infos (list[dict]): A list of dicts of the img and
+            annotation information
     """

     dst_image_root = osp.join(root_path, 'dst_imgs', split)
@@ -297,7 +302,7 @@ def generate_ann(root_path, split, image_infos):
             dst_img = crop_img(image, anno['bbox'])

             # Skip invalid annotations
-            if min(dst_img.shape) == 0:
+            if min(dst_img.shape) == 0 or word == '###':
                 continue

             dst_img_name = f'{src_img_root}_{index}.png'
@@ -313,9 +318,10 @@ def load_img_info(files):
     """Load the information of one image.

     Args:
-        files(tuple): The tuple of (img_file, groundtruth_file)
+        files (tuple): The tuple of (img_file, groundtruth_file)
+
     Returns:
-        img_info(dict): The dict of the img and annotation information
+        img_info (dict): The dict of the img and annotation information
     """
     assert isinstance(files, tuple)
@@ -345,15 +351,9 @@ def load_img_info(files):
 def parse_args():
     parser = argparse.ArgumentParser(
         description='Convert totaltext annotations to COCO format')
-    parser.add_argument('root_path', help='totaltext root path')
-    parser.add_argument('-o', '--out-dir', help='output path')
-    parser.add_argument(
-        '--split-list',
-        nargs='+',
-        help='a list of splits. e.g., "--split_list training test"')
-
+    parser.add_argument('root_path', help='Totaltext root path')
     parser.add_argument(
-        '--nproc', default=1, type=int, help='number of process')
+        '--nproc', default=1, type=int, help='Number of process')
     args = parser.parse_args()
     return args
@@ -361,23 +361,20 @@ def parse_args():
 def main():
     args = parse_args()
     root_path = args.root_path
-    out_dir = args.out_dir if args.out_dir else root_path
-    mmcv.mkdir_or_exist(out_dir)
-
     img_dir = osp.join(root_path, 'imgs')
     gt_dir = osp.join(root_path, 'annotations')

     set_name = {}
-    for split in args.split_list:
-        set_name.update({split: 'instances_' + split + '.json'})
+    for split in ['training', 'test']:
+        set_name.update({split: split + '_label' + '.txt'})
         assert osp.exists(osp.join(img_dir, split))

-    for split, json_name in set_name.items():
-        print(f'Converting {split} into {json_name}')
+    for split, ann_name in set_name.items():
+        print(f'Converting {split} into {ann_name}')
         with mmcv.Timer(
                 print_tmpl='It takes {}s to convert totaltext annotation'):
             files = collect_files(
-                osp.join(img_dir, split), osp.join(gt_dir, split), split)
+                osp.join(img_dir, split), osp.join(gt_dir, split))
             image_infos = collect_annotations(files, nproc=args.nproc)
             generate_ann(root_path, split, image_infos)


 if __name__ == '__main__':
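
The recognition converter's `main()` gets the same simplification, emitting one `<split>_label.txt` per split under the root path. Invocation mirrors the detection side (path illustrative):

```bash
python tools/data/textrecog/totaltext_converter.py data/totaltext --nproc 4
```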