mmpretrain/configs/densenet
Files in this directory:

README.md
densenet121_4xb256_in1k.py
densenet161_4xb256_in1k.py
densenet169_4xb256_in1k.py
densenet201_4xb256_in1k.py
metafile.yml
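
The config file names encode the training setting (`4xb256` = 4 GPUs × 256 images per GPU, `in1k` = ImageNet-1k). As a rough orientation, the sketch below shows how such a config is typically composed through mmpretrain's `_base_` inheritance; the base file paths and values are illustrative assumptions, not the actual file contents.

```python
# Hypothetical sketch of a config such as densenet121_4xb256_in1k.py.
# The _base_ paths below are placeholders for whatever base configs the
# repository actually provides; only the inheritance mechanism is the point.
_base_ = [
    '../_base_/models/densenet/densenet121.py',  # model architecture
    '../_base_/datasets/imagenet_bs64.py',       # ImageNet-1k data pipeline
    '../_base_/schedules/imagenet_bs256.py',     # optimizer and LR schedule
    '../_base_/default_runtime.py',              # logging, checkpoints, hooks
]

# auto_scale_lr lets the runner rescale the learning rate when the effective
# batch size differs from base_batch_size (here 4 GPUs x 256 images = 1024).
auto_scale_lr = dict(base_batch_size=1024)
```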

README.md

DenseNet

Densely Connected Convolutional Networks

Abstract

Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance.
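
To make the connectivity pattern concrete, here is a minimal PyTorch sketch of a dense block (illustrative only, not the mmpretrain implementation; it omits the 1×1 bottleneck and the transition layers of the full DenseNet-BC design). Each layer receives the concatenation of the block input and all preceding layers' outputs, which is where the L(L+1)/2 direct connections come from.

```python
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    """Simplified dense block: every layer sees all preceding feature maps."""

    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            # Layer i takes the block input plus the outputs of layers 0..i-1.
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Concatenate all preceding feature maps along the channel dim.
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)


# Example: a 4-layer block growing a 64-channel input by 32 channels per layer.
block = DenseBlock(in_channels=64, growth_rate=32, num_layers=4)
y = block(torch.randn(1, 64, 56, 56))  # -> shape (1, 64 + 4 * 32, 56, 56)
```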

Results and models

ImageNet-1k

| Model         | Params (M) | Flops (G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :-----------: | :--------: | :-------: | :-------: | :-------: | :----: | :------: |
| DenseNet121*  | 7.98       | 2.88      | 74.96     | 92.21     | config | model    |
| DenseNet169*  | 14.15      | 3.42      | 76.08     | 93.11     | config | model    |
| DenseNet201*  | 20.01      | 4.37      | 77.32     | 93.64     | config | model    |
| DenseNet161*  | 28.68      | 7.82      | 77.61     | 93.83     | config | model    |

Models marked with * are converted from PyTorch weights, following the original repository. The config files for these models are provided for inference only; we do not guarantee their training accuracy, and contributions of reproduction results are welcome.
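
Since these configs are intended for inference with the converted weights, the snippet below is a minimal usage sketch, assuming the mmpretrain 1.x Python API (`inference_model`) and that the checkpoint can be resolved from the model name; the image path is a placeholder.

```python
# Minimal inference sketch (assumes mmpretrain 1.x is installed and that the
# pretrained DenseNet-121 weights can be fetched by model name).
from mmpretrain import inference_model

result = inference_model('densenet121_4xb256_in1k', 'path/to/image.jpg')
# For classification the result dict typically includes the predicted class
# and its score; key names may vary across versions.
print(result['pred_class'], result['pred_score'])
```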

Citation

@misc{https://doi.org/10.48550/arxiv.1608.06993,
      doi = {10.48550/ARXIV.1608.06993},
      url = {https://arxiv.org/abs/1608.06993},
      author = {Huang, Gao and Liu, Zhuang and van der Maaten, Laurens and Weinberger, Kilian Q.},
      keywords = {Computer Vision and Pattern Recognition (cs.CV), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
      title = {Densely Connected Convolutional Networks},
      publisher = {arXiv},
      year = {2016},
      copyright = {arXiv.org perpetual, non-exclusive license}
}