mmclassification/mmpretrain/datasets
Yixiao Fang aac398a83f
[Feature] Support new configs. (#1639)
* [Feature] Support new configs (#1638)

* add new config of mae and simclr

* update

* update setup.cfg

* update eva

* update

* update new config

* Add new config

* remove __init__.py

* 1. remove ; 2. remove mmpretrain/configs/_base_/models/convnext

* remove model_wrapper_cfg and add out type

* Add comment for setting default_scope to None

* update if '_base_' order

* update

* revert changes

---------

Co-authored-by: fangyixiao18 <fangyx18@hotmail.com>

* Add warn at the head of new config files

---------

Co-authored-by: Mashiro <57566630+HAOCHENYE@users.noreply.github.com>
Co-authored-by: mzr1996 <mzr1996@163.com>
2023-06-16 16:54:45 +08:00
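The commit above adds "new config" (pure-Python) variants of existing configs. As a rough illustration only, such a config typically imports its `_base_` settings with mmengine's `read_base()` and overrides fields with plain Python assignments; the base-module paths and override values below are assumptions, not the exact files added by this commit:

    # A minimal sketch of the pure-Python "new config" style, assuming
    # mmengine's read_base(). Paths and values are illustrative only.
    from mmengine.config import read_base

    with read_base():
        # Base configs are imported as ordinary Python modules instead of
        # being listed in a `_base_` string field.
        from .._base_.models.mae_vit_base_p16 import *  # noqa: F401,F403
        from .._base_.datasets.imagenet_bs512_mae import *  # noqa: F401,F403
        from .._base_.default_runtime import *  # noqa: F401,F403

    # Plain assignments override the inherited settings.
    train_dataloader = dict(batch_size=256)

In this style the registry modules are imported directly, which is why the commit notes a comment explaining that default_scope is set to None.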
samplers [Feature] Support multiple multi-modal algorithms and inferencers. (#1561) 2023-05-19 16:50:04 +08:00
transforms [Feature] Support new configs. (#1639) 2023-06-16 16:54:45 +08:00
__init__.py [Feature] Add support for vsr dataset (#1634) 2023-06-15 19:17:02 +08:00
base_dataset.py
builder.py
caltech101.py
categories.py [Feature] Support Chinese CLIP. (#1576) 2023-05-22 15:46:13 +08:00
cifar.py
coco_caption.py [Feature] Support multiple multi-modal algorithms and inferencers. (#1561) 2023-05-19 16:50:04 +08:00
coco_retrieval.py [Feature] Support multiple multi-modal algorithms and inferencers. (#1561) 2023-05-19 16:50:04 +08:00
coco_vqa.py [Feature] Support multiple multi-modal algorithms and inferencers. (#1561) 2023-05-19 16:50:04 +08:00
cub.py
custom.py
dataset_wrappers.py
dtd.py
fgvcaircraft.py
flamingo.py [Feature] Support multiple multi-modal algorithms and inferencers. (#1561) 2023-05-19 16:50:04 +08:00
flowers102.py
food101.py
gqa_dataset.py [Feature] Add GQA dataset. (#1585) 2023-05-23 11:25:42 +08:00
imagenet.py [Fix] Fix bug loading IN1k dataset. (#1641) 2023-06-16 15:35:27 +08:00
inshop.py
mnist.py [Refactor] Support to use "split" to specify training set/validation set in the ImageNet dataset (#1535) 2023-06-02 11:03:18 +08:00
multi_label.py [Feature] Support multiple multi-modal algorithms and inferencers. (#1561) 2023-05-19 16:50:04 +08:00
multi_task.py
nlvr2.py [Feature] Support multiple multi-modal algorithms and inferencers. (#1561) 2023-05-19 16:50:04 +08:00
nocaps.py [Feature] Support NoCap dataset based on BLIP. (#1582) 2023-05-23 18:06:43 +08:00
ocr_vqa.py [Feature] Support OCR-VQA dataset (#1621) 2023-06-13 10:28:45 +08:00
oxfordiiitpet.py
places205.py
refcoco.py [Feature] Support multiple multi-modal algorithms and inferencers. (#1561) 2023-05-19 16:50:04 +08:00
scienceqa.py [Feature]: Add image_only param (#1613) 2023-06-06 12:50:42 +08:00
stanfordcars.py
sun397.py [Refactor] Support to use "split" to specify training set/validation set in the ImageNet dataset (#1535) 2023-06-02 11:03:18 +08:00
textvqa.py [Feature] support TextVQA dataset (#1596) 2023-06-02 11:50:38 +08:00
utils.py
vg_vqa.py [Feature] Support multiple multi-modal algorithms and inferencers. (#1561) 2023-05-19 16:50:04 +08:00
visual_genome.py [Feature] Support multiple multi-modal algorithms and inferencers. (#1561) 2023-05-19 16:50:04 +08:00
voc.py [Refactor] Support to use "split" to specify training set/validation set in the ImageNet dataset (#1535) 2023-06-02 11:03:18 +08:00
vsr.py [Feature] Add support for vsr dataset (#1634) 2023-06-15 19:17:02 +08:00
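Several entries above (mnist.py, sun397.py, voc.py) reference the refactor in #1535, which lets a single dataset class select its training or validation subset through a split argument. A hypothetical usage sketch, assuming an ImageNet layout under data/imagenet; check the dataset docstrings for the exact keywords:

    # Hypothetical usage of the `split` argument from #1535; paths are assumptions.
    from mmpretrain.datasets import ImageNet

    train_set = ImageNet(data_root='data/imagenet', split='train', pipeline=[])
    val_set = ImageNet(data_root='data/imagenet', split='val', pipeline=[])
    print(len(train_set), len(val_set))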