| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `samplers/` | [Feature] Support multiple multi-modal algorithms and inferencers. (#1561) | 2023-05-19 16:50:04 +08:00 |
| `transforms/` | [Fix] Update torchvision transform wrapper (#1595) | 2023-05-26 17:56:09 +08:00 |
| `__init__.py` | [Feature] support TextVQA dataset (#1596) | 2023-06-02 11:50:38 +08:00 |
| `base_dataset.py` | [Feature] Support some downstream classification datasets. (#1467) | 2023-05-05 14:43:14 +08:00 |
| `builder.py` | … | |
| `caltech101.py` | [Feature] Support some downstream classification datasets. (#1467) | 2023-05-05 14:43:14 +08:00 |
| `categories.py` | [Feature] Support Chinese CLIP. (#1576) | 2023-05-22 15:46:13 +08:00 |
| `cifar.py` | [Feature] Support some downstream classification datasets. (#1467) | 2023-05-05 14:43:14 +08:00 |
| `coco_caption.py` | [Feature] Support multiple multi-modal algorithms and inferencers. (#1561) | 2023-05-19 16:50:04 +08:00 |
| `coco_retrieval.py` | [Feature] Support multiple multi-modal algorithms and inferencers. (#1561) | 2023-05-19 16:50:04 +08:00 |
| `coco_vqa.py` | [Feature] Support multiple multi-modal algorithms and inferencers. (#1561) | 2023-05-19 16:50:04 +08:00 |
| `cub.py` | [Feature] Support some downstream classification datasets. (#1467) | 2023-05-05 14:43:14 +08:00 |
| `custom.py` | [Refactor] Update datasets (#1375) | 2023-02-27 15:42:22 +08:00 |
| `dataset_wrappers.py` | … | |
| `dtd.py` | [Feature] Support some downstream classification datasets. (#1467) | 2023-05-05 14:43:14 +08:00 |
| `fgvcaircraft.py` | [Feature] Support some downstream classification datasets. (#1467) | 2023-05-05 14:43:14 +08:00 |
| `flamingo.py` | [Feature] Support multiple multi-modal algorithms and inferencers. (#1561) | 2023-05-19 16:50:04 +08:00 |
| `flowers102.py` | [Feature] Support some downstream classification datasets. (#1467) | 2023-05-05 14:43:14 +08:00 |
| `food101.py` | [Feature] Support some downstream classification datasets. (#1467) | 2023-05-05 14:43:14 +08:00 |
| `gqa_dataset.py` | [Feature] Add GQA dataset. (#1585) | 2023-05-23 11:25:42 +08:00 |
| `imagenet.py` | [Refactor] Support to use "split" to specify training set/validation set in the ImageNet dataset (#1535) | 2023-06-02 11:03:18 +08:00 |
| `inshop.py` | [Refactor] Add selfsup algorithms. (#1389) | 2023-03-06 16:53:15 +08:00 |
| `mnist.py` | [Refactor] Support to use "split" to specify training set/validation set in the ImageNet dataset (#1535) | 2023-06-02 11:03:18 +08:00 |
| `multi_label.py` | [Feature] Support multiple multi-modal algorithms and inferencers. (#1561) | 2023-05-19 16:50:04 +08:00 |
| `multi_task.py` | [Docs] Update user guides docs and tools for MMPretrain. (#1429) | 2023-03-27 14:32:26 +08:00 |
| `nlvr2.py` | [Feature] Support multiple multi-modal algorithms and inferencers. (#1561) | 2023-05-19 16:50:04 +08:00 |
| `nocaps.py` | [Feature] Support NoCap dataset based on BLIP. (#1582) | 2023-05-23 18:06:43 +08:00 |
| `oxfordiiitpet.py` | [Feature] Support some downstream classification datasets. (#1467) | 2023-05-05 14:43:14 +08:00 |
| `places205.py` | [Refactor] Update datasets (#1375) | 2023-02-27 15:42:22 +08:00 |
| `refcoco.py` | [Feature] Support multiple multi-modal algorithms and inferencers. (#1561) | 2023-05-19 16:50:04 +08:00 |
| `scienceqa.py` | [Feature]: Add image_only param (#1613) | 2023-06-06 12:50:42 +08:00 |
| `stanfordcars.py` | [Feature] Support some downstream classification datasets. (#1467) | 2023-05-05 14:43:14 +08:00 |
| `sun397.py` | [Refactor] Support to use "split" to specify training set/validation set in the ImageNet dataset (#1535) | 2023-06-02 11:03:18 +08:00 |
| `textvqa.py` | [Feature] support TextVQA dataset (#1596) | 2023-06-02 11:50:38 +08:00 |
| `utils.py` | … | |
| `vg_vqa.py` | [Feature] Support multiple multi-modal algorithms and inferencers. (#1561) | 2023-05-19 16:50:04 +08:00 |
| `visual_genome.py` | [Feature] Support multiple multi-modal algorithms and inferencers. (#1561) | 2023-05-19 16:50:04 +08:00 |
| `voc.py` | [Refactor] Support to use "split" to specify training set/validation set in the ImageNet dataset (#1535) | 2023-06-02 11:03:18 +08:00 |