mmpretrain/configs/resnext
- README.md
- metafile.yml
- resnext50-32x4d_8xb32_in1k.py
- resnext50_32x4d_b32x8_imagenet.py
- resnext101-32x4d_8xb32_in1k.py
- resnext101-32x8d_8xb32_in1k.py
- resnext101_32x4d_b32x8_imagenet.py
- resnext101_32x8d_b32x8_imagenet.py
- resnext152-32x4d_8xb32_in1k.py
- resnext152_32x4d_b32x8_imagenet.py
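
The `*_8xb32_in1k.py` names follow the config name standard from #508 (8 GPUs × batch size 32 per GPU, ImageNet-1k); the `*_b32x8_imagenet.py` files appear to be the pre-rename aliases kept around after the renaming. A minimal sketch of loading one of these configs for inference with the MMClassification Python API; the checkpoint path is an assumption to be filled in from the model zoo:

```python
# Sketch only: assumes an MMClassification checkout with mmcls installed.
from mmcls.apis import init_model, inference_model

config_file = 'configs/resnext/resnext50-32x4d_8xb32_in1k.py'
checkpoint_file = None  # substitute a downloaded model-zoo .pth for trained weights

# Build the model from the config (randomly initialized if checkpoint_file is None).
model = init_model(config_file, checkpoint_file, device='cpu')
result = inference_model(model, 'demo/demo.JPEG')  # demo image shipped with the repo
print(result['pred_class'], result['pred_score'])
```

Training uses the same config file with the repository's standard `tools/train.py` entry point.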

README.md

# Aggregated Residual Transformations for Deep Neural Networks

## Abstract

We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call "cardinality" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.
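
Concretely, because all branches of the aggregated transformation share the same topology, the whole set collapses into a single grouped 3×3 convolution whose `groups` equals the cardinality. A minimal PyTorch sketch of such a bottleneck (illustrative only, not the `mmcls` implementation; the class name `ResNeXtBottleneck` is made up here):

```python
import torch
import torch.nn as nn

class ResNeXtBottleneck(nn.Module):
    """ResNeXt bottleneck: 1x1 reduce -> grouped 3x3 -> 1x1 expand."""

    def __init__(self, in_ch, out_ch, cardinality=32, base_width=4, stride=1):
        super().__init__()
        width = cardinality * base_width  # e.g. 32 * 4 = 128 in the first stage
        self.conv1 = nn.Conv2d(in_ch, width, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(width)
        # groups=cardinality realizes the parallel same-topology transformations.
        self.conv2 = nn.Conv2d(width, width, 3, stride=stride, padding=1,
                               groups=cardinality, bias=False)
        self.bn2 = nn.BatchNorm2d(width)
        self.conv3 = nn.Conv2d(width, out_ch, 1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = None
        if stride != 1 or in_ch != out_ch:
            self.downsample = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch))

    def forward(self, x):
        identity = x if self.downsample is None else self.downsample(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        return self.relu(out + identity)

block = ResNeXtBottleneck(256, 256)              # stage-1 block of ResNeXt-50 32x4d
print(block(torch.randn(1, 256, 56, 56)).shape)  # torch.Size([1, 256, 56, 56])
```

Holding complexity fixed, raising `cardinality` while shrinking `base_width` trades per-branch width for more branches, which is the dimension the paper shows to matter.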

## Citation

@inproceedings{xie2017aggregated,
  title={Aggregated residual transformations for deep neural networks},
  author={Xie, Saining and Girshick, Ross and Doll{\'a}r, Piotr and Tu, Zhuowen and He, Kaiming},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  pages={1492--1500},
  year={2017}
}

## Results and models

### ImageNet-1k

| Model             | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download     |
| :---------------- | :-------: | :------: | :-------: | :-------: | :----: | :----------: |
| ResNeXt-32x4d-50  |   25.03   |   4.27   |   77.90   |   93.66   | config | model \| log |
| ResNeXt-32x4d-101 |   44.18   |   8.03   |   78.61   |   94.17   | config | model \| log |
| ResNeXt-32x8d-101 |   88.79   |   16.5   |   79.27   |   94.58   | config | model \| log |
| ResNeXt-32x4d-152 |   59.95   |   11.8   |   78.88   |   94.33   | config | model \| log |
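
The Params(M) column can be sanity-checked against an equivalent off-the-shelf definition of the same architecture; a sketch using torchvision's ResNeXt-50 32x4d, assuming it matches this config's backbone plus linear classifier head:

```python
import torchvision.models as models

# Count parameters of an architecturally equivalent model.
model = models.resnext50_32x4d()
n_params = sum(p.numel() for p in model.parameters())
print(f'{n_params / 1e6:.2f} M')  # ~25.03, matching the first row above
```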