# Data-Efficient architectures and training for Image classification
This repository contains PyTorch evaluation code, training code, and pretrained models for the following papers:
<details>
<summary>
<a href="README_deit.md">DeiT</a> (Data-Efficient Image Transformers), ICML 2021 [<b>bib</b>]
</summary>
```
@InProceedings{pmlr-v139-touvron21a,
  title = {Training data-efficient image transformers \& distillation through attention},
  author = {Touvron, Hugo and Cord, Matthieu and Douze, Matthijs and Massa, Francisco and Sablayrolles, Alexandre and Jegou, Herve},
  booktitle = {International Conference on Machine Learning},
  pages = {10347--10357},
  year = {2021},
  volume = {139},
  month = {July}
}
```
</details>
<details>
<summary>
<a href="README_cait.md">CaiT</a> (Going deeper with Image Transformers), ICCV 2021 [<b>bib</b>]
</summary>
```
@InProceedings{Touvron_2021_ICCV,
  author = {Touvron, Hugo and Cord, Matthieu and Sablayrolles, Alexandre and Synnaeve, Gabriel and J\'egou, Herv\'e},
  title = {Going Deeper With Image Transformers},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month = {October},
  year = {2021},
  pages = {32-42}
}
```
</details>
<details>
<summary>
<a href="README_resmlp.md">ResMLP</a> (ResMLP: Feedforward networks for image classification with data-efficient training), TPAMI 2022 [<b>bib</b>]
</summary>
```
@article{touvron2021resmlp,
  title={ResMLP: Feedforward networks for image classification with data-efficient training},
  author={Hugo Touvron and Piotr Bojanowski and Mathilde Caron and Matthieu Cord and Alaaeldin El-Nouby and Edouard Grave and Gautier Izacard and Armand Joulin and Gabriel Synnaeve and Jakob Verbeek and Herv\'e J\'egou},
  journal={arXiv preprint arXiv:2105.03404},
  year={2021},
}
```
</details>
<details>
<summary>
<a href="README_patchconvnet.md">PatchConvnet</a> (Augmenting Convolutional networks with attention-based aggregation) [<b>bib</b>]
</summary>
```
@article{touvron2021patchconvnet,
  title={Augmenting Convolutional networks with attention-based aggregation},
  author={Hugo Touvron and Matthieu Cord and Alaaeldin El-Nouby and Piotr Bojanowski and Armand Joulin and Gabriel Synnaeve and Jakob Verbeek and Herve Jegou},
  journal={arXiv preprint arXiv:2112.13692},
  year={2021},
}
```
</details>
<details>
<summary>
<a href="README_3things.md">3Things</a> (Three things everyone should know about Vision Transformers), ECCV 2022 [<b>bib</b>]
</summary>
```
@article{Touvron2022ThreeTE,
  title={Three things everyone should know about Vision Transformers},
  author={Hugo Touvron and Matthieu Cord and Alaaeldin El-Nouby and Jakob Verbeek and Herve Jegou},
  journal={arXiv preprint arXiv:2203.09795},
  year={2022},
}
```
</details>
<details>
<summary>
<a href="README_revenge.md">DeiT III</a> (DeiT III: Revenge of the ViT), ECCV 2022 [<b>bib</b>]
</summary>
```
@article{Touvron2022DeiTIR,
  title={DeiT III: Revenge of the ViT},
  author={Hugo Touvron and Matthieu Cord and Herve Jegou},
  journal={arXiv preprint arXiv:2204.07118},
  year={2022},
}
```
</details>
<details>
<summary>
<a href="README_cosub.md">Cosub</a> (Co-training 2L Submodels for Visual Recognition), CVPR 2023 [<b>bib</b>]
</summary>

```
@article{Touvron2022Cotraining2S,
  title={Co-training 2L Submodels for Visual Recognition},
  author={Hugo Touvron and Matthieu Cord and Maxime Oquab and Piotr Bojanowski and Jakob Verbeek and Herv\'e J\'egou},
  journal={arXiv preprint arXiv:2212.04884},
  year={2022},
}
```
</details>

If you find this repository useful, please consider giving it a star ⭐ and citing the relevant papers.

# License
This repository is released under the Apache 2.0 license as found in the [LICENSE](LICENSE) file.

# Contributing
We actively welcome your pull requests! Please see [CONTRIBUTING.md](.github/CONTRIBUTING.md) and [CODE_OF_CONDUCT.md](.github/CODE_OF_CONDUCT.md) for more info.