Mirror of https://github.com/huggingface/pytorch-image-models.git, synced 2025-06-03 15:01:08 +08:00

Update README.md

This commit is contained in:
parent a8d103e18b
commit 6dcbaf211a
```diff
@@ -267,10 +267,12 @@ All model architecture families include variants with pretrained weights. There
 A full version of the list below with source links can be found in the [documentation](https://rwightman.github.io/pytorch-image-models/models/).
 
 * Aggregating Nested Transformers - https://arxiv.org/abs/2105.12723
+* BEiT - https://arxiv.org/abs/2106.08254
 * Big Transfer ResNetV2 (BiT) - https://arxiv.org/abs/1912.11370
 * Bottleneck Transformers - https://arxiv.org/abs/2101.11605
 * CaiT (Class-Attention in Image Transformers) - https://arxiv.org/abs/2103.17239
 * CoaT (Co-Scale Conv-Attentional Image Transformers) - https://arxiv.org/abs/2104.06399
+* ConvNeXt - https://arxiv.org/abs/2201.03545
 * ConViT (Soft Convolutional Inductive Biases Vision Transformers)- https://arxiv.org/abs/2103.10697
 * CspNet (Cross-Stage Partial Networks) - https://arxiv.org/abs/1911.11929
 * DeiT (Vision Transformer) - https://arxiv.org/abs/2012.12877
@@ -288,11 +290,11 @@ A full version of the list below with source links can be found in the [document
 * MNASNet B1, A1 (Squeeze-Excite), and Small - https://arxiv.org/abs/1807.11626
 * MobileNet-V2 - https://arxiv.org/abs/1801.04381
 * Single-Path NAS - https://arxiv.org/abs/1904.02877
+* TinyNet - https://arxiv.org/abs/2010.14819
 * GhostNet - https://arxiv.org/abs/1911.11907
 * gMLP - https://arxiv.org/abs/2105.08050
 * GPU-Efficient Networks - https://arxiv.org/abs/2006.14090
 * Halo Nets - https://arxiv.org/abs/2103.12731
-* HardCoRe-NAS - https://arxiv.org/abs/2102.11646
 * HRNet - https://arxiv.org/abs/1908.07919
 * Inception-V3 - https://arxiv.org/abs/1512.00567
 * Inception-ResNet-V2 and Inception-V4 - https://arxiv.org/abs/1602.07261
@@ -300,7 +302,11 @@ A full version of the list below with source links can be found in the [document
 * LeViT (Vision Transformer in ConvNet's Clothing) - https://arxiv.org/abs/2104.01136
 * MLP-Mixer - https://arxiv.org/abs/2105.01601
 * MobileNet-V3 (MBConvNet w/ Efficient Head) - https://arxiv.org/abs/1905.02244
+* FBNet-V3 - https://arxiv.org/abs/2006.02049
+* HardCoRe-NAS - https://arxiv.org/abs/2102.11646
+* LCNet - https://arxiv.org/abs/2109.15099
 * NASNet-A - https://arxiv.org/abs/1707.07012
+* NesT - https://arxiv.org/abs/2105.12723
 * NFNet-F - https://arxiv.org/abs/2102.06171
 * NF-RegNet / NF-ResNet - https://arxiv.org/abs/2101.08692
 * PNasNet - https://arxiv.org/abs/1712.00559
@@ -326,6 +332,7 @@ A full version of the list below with source links can be found in the [document
 * Transformer-iN-Transformer (TNT) - https://arxiv.org/abs/2103.00112
 * TResNet - https://arxiv.org/abs/2003.13630
 * Twins (Spatial Attention in Vision Transformers) - https://arxiv.org/pdf/2104.13840.pdf
+* Visformer - https://arxiv.org/abs/2104.12533
 * Vision Transformer - https://arxiv.org/abs/2010.11929
 * VovNet V2 and V1 - https://arxiv.org/abs/1911.06667
 * Xception - https://arxiv.org/abs/1610.02357
```