EasyCV/docs/source/data_hub.md


# DataHub

EasyCV summarizes various datasets in different fields. At present we support a subset of them, and we will gradually support the remaining ones.

For the datasets we already support, please refer to prepare_data.md.

## Self-Supervised Learning

| Name | Field | Description | Download | Dataset API support |
| ---- | ---- | ----------- | -------- | ------------------- |
| ImageNet 1k (url) | Common | ImageNet is an image database organized according to the WordNet hierarchy (currently only the nouns). It is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and is a benchmark for image classification. | refer to prepare_data.md | |
| ImageNet-1k TFRecords (url) | Common | The original ImageNet raw images packed in TFRecord format. | refer to prepare_data.md | |
| ImageNet 21k (url) | Common | The ImageNet-21K dataset, which is bigger and more diverse, is used less frequently for pretraining, mainly due to its complexity, low accessibility, and underestimation of its added value. | refer to Alibaba-MIIL/ImageNet21K | |
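
As a quick sanity check after downloading the TFRecord packing, a single shard can be inspected with TensorFlow's `tf.data` API. This is only a minimal sketch: the shard name `train-00000-of-01024` and the feature keys `image/encoded` / `image/class/label` follow the common TF-Slim ImageNet layout and may differ from the actual packing; see prepare_data.md for the supported workflow.

```python
import tensorflow as tf

# Hypothetical shard name; the real file names depend on how the TFRecords were packed.
dataset = tf.data.TFRecordDataset("train-00000-of-01024")

# Feature keys assume the common TF-Slim ImageNet schema; adjust to the actual layout.
feature_spec = {
    "image/encoded": tf.io.FixedLenFeature([], tf.string),
    "image/class/label": tf.io.FixedLenFeature([], tf.int64),
}

for record in dataset.take(2):
    example = tf.io.parse_single_example(record, feature_spec)
    image = tf.io.decode_jpeg(example["image/encoded"], channels=3)
    print(image.shape, int(example["image/class/label"]))
```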

## Classification data

| Name | Field | Description | Download | Dataset API support |
| ---- | ---- | ----------- | -------- | ------------------- |
| Cifar10 (url) | Common | CIFAR-10 is a labeled subset of the 80 Million Tiny Images dataset. It consists of 60,000 32x32 colour images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images. | cifar-10-python.tar.gz (163MB) | |
| Cifar100 (url) | Common | CIFAR-100 is a labeled subset of the 80 Million Tiny Images dataset. It is just like CIFAR-10, except it has 100 classes containing 600 images each. There are 500 training images and 100 testing images per class. | cifar-100-python.tar.gz (161MB) | |
| ImageNet 1k (url) | Common | ImageNet is an image database organized according to the WordNet hierarchy (currently only the nouns). It is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and is a benchmark for image classification. | refer to prepare_data.md | |
| ImageNet-1k TFRecords (url) | Common | The original ImageNet raw images packed in TFRecord format. | refer to prepare_data.md | |
| ImageNet 21k (url) | Common | The ImageNet-21K dataset, which is bigger and more diverse, is used less frequently for pretraining, mainly due to its complexity, low accessibility, and underestimation of its added value. | refer to Alibaba-MIIL/ImageNet21K | |
| MNIST (url) | Handwritten digits | The MNIST database of handwritten digits has a training set of 60,000 examples and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image. | train-images-idx3-ubyte.gz (9.5MB)<br>train-labels-idx1-ubyte.gz<br>t10k-images-idx3-ubyte.gz (1.5MB)<br>t10k-labels-idx1-ubyte.gz | |
| Fashion-MNIST (url) | Clothing | Fashion-MNIST is a clothing dataset of Zalando's article images, consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. | train-images-idx3-ubyte.gz (26MB)<br>train-labels-idx1-ubyte.gz (29KB)<br>t10k-images-idx3-ubyte.gz (4.3MB)<br>t10k-labels-idx1-ubyte.gz (5.1KB) | |
| Flower102 (url) | Flowers | Flower102 consists of 102 flower categories. The flowers were chosen to be flowers commonly occurring in the United Kingdom. Each class consists of between 40 and 258 images. | 102flowers.tgz (329MB)<br>imagelabels.mat<br>setid.mat | |
| Caltech 101 (url) | Common | Pictures of objects belonging to 101 categories, with about 40 to 800 images per category. Most categories have about 50 images. The size of each image is roughly 300 x 200 pixels. | caltech-101.zip (137.4MB) | |
| Caltech 256 (url) | Common | Caltech-256 is a challenging set of 256 object categories containing a total of 30,607 images. Compared to Caltech-101, Caltech-256 has the following improvements: a) the number of categories is more than doubled, b) the minimum number of images in any category is increased from 31 to 80, c) artifacts due to image rotation are avoided, and d) a new and larger clutter category is introduced for testing background rejection. | 256_ObjectCategories.tar (1.2GB) | |
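
For the smaller classification sets listed above (CIFAR, MNIST, Fashion-MNIST), the archives can also be fetched and unpacked automatically with torchvision, which is convenient for quick experiments. This is a generic sketch rather than the EasyCV data pipeline; the local root directory `./data` is an arbitrary choice.

```python
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

# Downloads and extracts cifar-10-python.tar.gz / the MNIST idx files into ./data if missing.
cifar_train = datasets.CIFAR10(root="./data", train=True, download=True, transform=to_tensor)
mnist_test = datasets.MNIST(root="./data", train=False, download=True, transform=to_tensor)

print(len(cifar_train), cifar_train.classes[cifar_train[0][1]])
print(len(mnist_test))
```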

## Object Detection

| Name | Field | Description | Download | Dataset API support |
| ---- | ---- | ----------- | -------- | ------------------- |
| COCO2017 (url) | Common | The COCO dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images. It has been updated over several editions, and COCO2017 is the most widely used. In 2017, the training/validation split was 118K/5K, and the test set is a subset of 41K images of the 2015 test set. | train2017.zip (18G)<br>val2017.zip (1G)<br>annotations_trainval2017.zip (241MB) | |
| VOC2007 (url) | Common | PASCAL VOC 2007 is a dataset for image recognition consisting of 20 object categories. Each image in this dataset has pixel-level segmentation annotations, bounding box annotations, and object class annotations. | VOCtrainval_06-Nov-2007.tar (439MB) | |
| VOC2012 (url) | Common | From 2009 to 2011 the dataset kept growing on the basis of the previous year's data; from 2011 to 2012 the amount of data for the classification, detection, and person layout tasks did not change, while the data subsets and label information were improved, mainly for segmentation and action recognition. | VOCtrainval_11-May-2012.tar (2G) | |
| Cityscapes (url) | Street scenes | Cityscapes contains a diverse set of stereo video sequences recorded in street scenes from 50 different cities, with high-quality pixel-level annotations of 5,000 frames in addition to a larger set of 20,000 weakly annotated frames. The dataset is thus an order of magnitude larger than similar previous attempts. | leftImg8bit_trainvaltest.zip (11GB) | |
| Openimages (url) | Common | Open Images is a dataset of ~9 million URLs to images that have been annotated with image-level labels and bounding boxes spanning thousands of classes. | refer to cvdfoundation/open-images-dataset | |
| WIDER FACE (url) | Face | The WIDER FACE dataset contains 32,203 images and labels 393,703 faces with a high degree of variability in scale, pose, and occlusion. The database is split into training (40%), validation (10%), and testing (50%) sets. In addition, the images are divided into three levels (Easy ⊆ Medium ⊆ Hard) according to the difficulty of detection. | WIDER Face Training Images [Google Drive] [Tencent Drive] (1.36GB)<br>WIDER Face Validation Images [Google Drive] [Tencent Drive] (345.95MB)<br>WIDER Face Testing Images [Google Drive] [Tencent Drive] (1.72GB)<br>Face annotations (3.6MB) | |
| DeepFashion (url) | Clothing | DeepFashion is a large-scale clothes database. First, it contains over 800,000 diverse fashion images ranging from well-posed shop images to unconstrained consumer photos. Second, DeepFashion is annotated with rich information on clothing items: each image is labeled with 50 categories, 1,000 descriptive attributes, bounding boxes, and clothing landmarks. Third, DeepFashion contains over 300,000 cross-pose/cross-domain image pairs. | Category and Attribute Prediction Benchmark: [Download Page]<br>In-shop Clothes Retrieval Benchmark: [Download Page]<br>Consumer-to-shop Clothes Retrieval Benchmark: [Download Page]<br>Fashion Landmark Detection Benchmark: [Download Page] | |
| Fruit Images (url) | Fruit | Labelled fruit images for training object detection systems: 240 images in the train folder and 60 images in the test folder. It contains only 3 different fruits: apple, banana, and orange. | archive.zip (30MB) | |
| Oxford-IIIT Pet (url) | Animal | The Oxford-IIIT Pet Dataset is a 37-category pet dataset with roughly 100 images for each class, created by the Visual Geometry Group at Oxford. The images have large variations in scale, pose, and lighting. All images have an associated ground-truth annotation of the breed, head ROI, and pixel-level trimap segmentation. | archive.zip (818MB) | |
| Arthropod Taxonomy Orders (url) | Animal | The ArTaxOr dataset covers arthropods, which include insects, spiders, crustaceans, centipedes, millipedes, etc. More than 1.3 million species of arthropods have been described. The dataset consists of images of arthropods in JPEG format and object bounding boxes in JSON format. There are between one and 50 objects per image. | archive.zip (12GB) | |
| African Wildlife (url) | Animal | Four animal classes commonly found in nature reserves in South Africa are represented in this dataset: buffalo, elephant, rhino, and zebra. The dataset contains at least 376 images for each animal. Each example consists of a JPG image and a TXT label file. The images have differing aspect ratios and contain at least one example of the specified animal class. The TXT file lists the detectable instances of the class, one per line, in the YOLOv3 labeling format. | archive.zip (469MB) | |
| AI-TOD (url) | Aerial (small objects) | AI-TOD contains 700,621 objects across 8 categories in 28,036 aerial images. Compared with existing object detection datasets in aerial images, the average size of objects in AI-TOD is about 12.8 pixels, which is much smaller than in other datasets. | download url (22.95GB) | |
| TinyPerson (url) | Person (small objects) | There are 1,610 labeled and 759 unlabeled images in TinyPerson (both mostly from the same video set), for a total of 72,651 annotations. | download url (1.6GB) | |
| WiderPerson (url) | Person (dense pedestrian detection) | The WiderPerson dataset is a benchmark dataset for pedestrian detection in the wild, with images selected from a wide range of scenes, no longer limited to traffic scenes. It contains 13,382 selected images with about 400K annotations covering various kinds of occlusion. | download url (969.72MB) | |
| Caltech Pedestrian Dataset (url) | Person | The Caltech Pedestrian dataset consists of about 10 hours of 640x480 30Hz video taken from vehicles driving through regular traffic in an urban environment. About 250,000 frames (in 137 roughly minute-long clips) were annotated, for a total of 350,000 bounding boxes and 2,300 unique pedestrians. Annotations include temporal correspondence between bounding boxes and detailed occlusion labels. | download url (1.98GB) | |
| DOTA (url) | Aerial | DOTA is a large-scale dataset for object detection in aerial images. It can be used to develop and evaluate object detectors in aerial images. The images are collected from different sensors and platforms. Image sizes range from 800 × 800 to 20,000 × 20,000 pixels, and the images contain objects exhibiting a wide variety of scales, orientations, and shapes. | download url (156.33GB) | |
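
COCO2017 (and several of the other detection sets after conversion) uses COCO-style JSON annotations, which can be read with pycocotools. Below is a minimal sketch, assuming annotations_trainval2017.zip has been extracted into a local `coco2017/` directory (a hypothetical path).

```python
from pycocotools.coco import COCO

# Path assumes annotations_trainval2017.zip was extracted into ./coco2017.
coco = COCO("coco2017/annotations/instances_val2017.json")

# Look up all bounding boxes for one image that contains the "person" category.
person_id = coco.getCatIds(catNms=["person"])[0]
img_ids = coco.getImgIds(catIds=[person_id])
img_info = coco.loadImgs(img_ids[0])[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_info["id"], catIds=[person_id]))
print(img_info["file_name"], [ann["bbox"] for ann in anns])  # bbox is [x, y, width, height]
```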

## Image Segmentation

| Name | Field | Description | Download | Dataset API support |
| ---- | ---- | ----------- | -------- | ------------------- |
| VOC2007 (url) | Common | PASCAL VOC 2007 is a dataset for image recognition consisting of 20 object categories. Each image in this dataset has pixel-level segmentation annotations, bounding box annotations, and object class annotations. | VOCtrainval_06-Nov-2007.tar (439MB) | |
| VOC2012 (url) | Common | From 2009 to 2011 the dataset kept growing on the basis of the previous year's data; from 2011 to 2012 the amount of data for the classification, detection, and person layout tasks did not change, while the data subsets and label information were improved, mainly for segmentation and action recognition. | VOCtrainval_11-May-2012.tar (2G) | |
| Pascal Context (url) | Common | This dataset is a set of additional annotations for PASCAL VOC 2010. It goes beyond the original PASCAL semantic segmentation task by providing annotations for the whole scene. The statistics section has a full list of 400+ labels. | voc2010/VOCtrainval_03-May-2010.tar (1.3GB)<br>VOC2010test.tar<br>trainval_merged.json (590MB) | |
| COCO-Stuff 10K (url) | Common | COCO-Stuff augments the popular COCO dataset with pixel-level stuff annotations. These annotations can be used for scene understanding tasks like semantic segmentation, object detection, and image captioning. | cocostuff-10k-v1.1.zip (2.0GB) | |
| Cityscapes (url) | Street scenes | Cityscapes contains a diverse set of stereo video sequences recorded in street scenes from 50 different cities, with high-quality pixel-level annotations of 5,000 frames in addition to a larger set of 20,000 weakly annotated frames. The dataset is thus an order of magnitude larger than similar previous attempts. | leftImg8bit_trainvaltest.zip (11GB) | |
| ADE20K (url) | Scene | The ADE20K dataset is released by MIT and can be used for scene perception, parsing, segmentation, multi-object recognition, and semantic understanding. The annotated images cover the scene categories from the SUN and Places databases. It contains 25,574 training images and 2,000 validation images. | ADEChallengeData2016.zip (923MB)<br>release_test.zip (202MB) | |
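
The segmentation benchmarks above store labels as per-pixel mask images (for example the PNG annotations inside ADEChallengeData2016.zip). A minimal sketch for inspecting one mask with Pillow and NumPy follows; the file path is a placeholder for whichever annotation image you extracted.

```python
import numpy as np
from PIL import Image

# Placeholder path; point this at any extracted annotation PNG, e.g. from ADE20K or Cityscapes.
mask = np.array(Image.open("ADEChallengeData2016/annotations/training/ADE_train_00000001.png"))

# Each pixel stores a class index; in ADE20K, 0 is the unlabeled/ignore value.
print(mask.shape, np.unique(mask))
```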

## Pose

| Name | Field | Description | Download | Dataset API support |
| ---- | ---- | ----------- | -------- | ------------------- |
| COCO2017 (url) | Person | The COCO dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images. It has been updated over several editions, and COCO2017 is the most widely used. In 2017, the training/validation split was 118K/5K, and the test set is a subset of 41K images of the 2015 test set. | train2017.zip (18G)<br>val2017.zip (1G)<br>annotations_trainval2017.zip (241MB)<br>person_detection_results.zip from OneDrive or GoogleDrive (26.2MB) | |
| MPII (url) | Person | The MPII Human Pose dataset is a state-of-the-art benchmark for the evaluation of articulated human pose estimation. The dataset includes around 25K images containing over 40K people with annotated body joints. The images were systematically collected using an established taxonomy of everyday human activities. Overall the dataset covers 410 human activities, and each image is provided with an activity label. Each image was extracted from a YouTube video and is provided with preceding and following unannotated frames. In addition, the test set has richer annotations, including body part occlusions and 3D torso and head orientations. | mpii_human_pose_v1.tar.gz (12.9GB)<br>mpii_human_pose_v1_u12_2.zip (12.5MB) | |
| CrowdPose (url) | Person | Multi-person pose estimation is fundamental to many computer vision tasks and has made significant progress in recent years. However, few previous methods explored the problem of pose estimation in crowded scenes, although it remains challenging and unavoidable in many scenarios. Moreover, current benchmarks cannot provide an appropriate evaluation for such cases. In "CrowdPose: Efficient Crowded Scenes Pose Estimation and A New Benchmark", the authors propose a novel and efficient method to tackle the problem of pose estimation in the crowd, along with a new dataset to better evaluate algorithms. | images.zip (2.2G)<br>Annotations | |
| OCHuman (url) | Person | This dataset focuses on heavily occluded humans with comprehensive annotations including bounding boxes, human pose, and instance masks. The dataset contains 13,360 elaborately annotated human instances within 5,081 images. With an average MaxIoU of 0.573 per person, OCHuman is the most complex and challenging dataset related to humans. | Images (667MB) & Annotations | |
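
COCO2017 pose annotations live in the person_keypoints_*.json files inside annotations_trainval2017.zip and can be read with the same pycocotools interface used for detection. The sketch below assumes the archive was extracted into a hypothetical `coco2017/` directory; each person annotation stores 17 keypoints as flat (x, y, visibility) triplets.

```python
from pycocotools.coco import COCO

# Path assumes annotations_trainval2017.zip was extracted into ./coco2017.
coco = COCO("coco2017/annotations/person_keypoints_val2017.json")

ann_ids = coco.getAnnIds(catIds=coco.getCatIds(catNms=["person"]))
ann = coco.loadAnns(ann_ids[:1])[0]

# "keypoints" is a flat list of 17 * 3 values: x, y, visibility (0=unlabeled, 1=occluded, 2=visible).
kpts = ann["keypoints"]
triplets = [kpts[i:i + 3] for i in range(0, len(kpts), 3)]
print(ann["num_keypoints"], triplets)
```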