Ross Wightman
27c42f0830
Fix torchscript use for official Swin-V2, add support for non-square window/shift to WindowAttn/Block
2022-05-13 09:29:33 -07:00
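
A minimal sketch of the non-square window partitioning referenced in the commit above, assuming the standard Swin-style helper generalized to an (Hw, Ww) tuple (illustrative, not the exact timm function):

```python
import torch

def window_partition(x: torch.Tensor, window_size):
    """Split (B, H, W, C) into (num_windows * B, Hw, Ww, C); Hw and Ww may differ."""
    B, H, W, C = x.shape
    hw, ww = window_size
    x = x.view(B, H // hw, hw, W // ww, ww, C)
    return x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, hw, ww, C)

# e.g. a (B, 8, 16, C) feature map with window_size (8, 16) yields B windows of shape (8, 16, C)
```
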
Ross Wightman
c0211b0bf7
Swin-V2 test fixes, typo
2022-05-12 22:31:55 -07:00
Ross Wightman
9a86b900fa
Official SwinV2 models
2022-05-12 15:05:10 -07:00
Ross Wightman
d07d015173
Merge pull request #1249 from okojoalg/sequencer
Add Sequencer
2022-05-09 20:42:43 -07:00
Ross Wightman
39b725e1c9
Fix tests for rank-4 output where feature channels dim is -1 (3) and not 1
2022-05-09 15:20:24 -07:00
Ross Wightman
78a32655fa
Fix poolformer group_matcher to merge proj downsample with previous block, support coarse
2022-05-09 12:20:04 -07:00
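
The group_matcher fixes in this and the following commits refer to the per-model group_matcher(coarse=...) hook, which maps parameter names to groups via regex for layer-wise LR decay and parameter grouping. A hedged, illustrative sketch; the patterns below are examples, not the actual poolformer or regnet matchers:

```python
# Illustrative only: example regex patterns, not the actual timm matchers.
def group_matcher(self, coarse: bool = False):
    if coarse:
        # coarse grouping: one group per stage
        return dict(stem=r'^stem', blocks=r'^stages\.(\d+)')
    # fine grouping: one group per block within each stage
    return dict(stem=r'^stem', blocks=r'^stages\.(\d+)\.blocks\.(\d+)')
```
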
Ross Wightman
d79f3d9d1e
Fix torchscript use for sequencer, add group_matcher, forward_head support, minor formatting
2022-05-09 12:09:39 -07:00
Ross Wightman
37b6920df3
Fix group_matcher regex for regnet.py
2022-05-09 10:40:40 -07:00
okojoalg
93a79a3dd9
Fix num_features in Sequencer
2022-05-06 23:16:32 +09:00
okojoalg
578d52e752
Add Sequencer
2022-05-06 00:36:01 +09:00
Ross Wightman
f5ca4141f7
Adjust arg order for recent vit model args, add a few comments
2022-05-02 22:41:38 -07:00
Ross Wightman
41dc49a337
Vision Transformer refactoring and Rel Pos impl
2022-05-02 15:37:39 -07:00
Ross Wightman
b7cb8d0337
Add Swin-V2 Small-NS weights (83.5 @ 224). Add layer scale like 'init_values' via post-norm LN weight scaling
2022-04-26 17:32:49 -07:00
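
A hedged sketch of the "layer scale like 'init_values'" idea above for post-norm blocks: initialize the post-norm LayerNorm weight (gamma) to a small constant so each residual branch starts near identity. This is an assumed illustration of the mechanism, not the exact timm code:

```python
import torch.nn as nn

def init_post_norm(norm: nn.LayerNorm, init_values: float = 1e-5) -> nn.LayerNorm:
    # with x = x + norm(branch(x)), a small gamma keeps the block near identity at init
    nn.init.constant_(norm.weight, init_values)
    return norm
```
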
jjsjann123
f88c606fcf
fixing channels_last on cond_conv2d; update nvfuser debug env variable
2022-04-25 12:41:46 -07:00
Li Dong
09e9f3defb
migrate azure blob for beit checkpoints
## Motivation
We are going to use a new blob account to store the checkpoints.
## Modification
Modify the azure blob storage URLs for BEiT checkpoints.
2022-04-23 13:02:29 +08:00
Ross Wightman
52ac881402
Missed first_conv in latest seresnext 'D' default_cfgs
2022-04-22 20:55:52 -07:00
Ross Wightman
7629d8264d
Add two new SE-ResNeXt101-D 32x8d weights, one anti-aliased and one not. Reshuffle default_cfgs vs model entrypoints for resnet.py so they are better aligned.
2022-04-22 16:54:53 -07:00
SeeFun
8f0bc0591e
fix convnext args
2022-04-05 20:00:57 +08:00
Ross Wightman
c5a8e929fb
Add initial swinv2 tiny / small weights
2022-04-03 15:22:55 -07:00
Ross Wightman
f670d98cb8
Make a few more layers symbolically traceable (remove from FX leaf modules)
* remove dtype kwarg from .to() calls in EvoNorm as it messed up script + trace combo
* BatchNormAct2d always uses custom forward (cut & paste from original) instead of super().forward. Fixes #1176
* BlurPool groups==channels, no need to use input.dim[1]
2022-03-24 21:43:56 -07:00
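
A minimal sketch of the BlurPool point above: when the channel count is fixed at construction time, the anti-aliasing filter can be applied as a depthwise conv with groups == channels, so forward never reads input.shape[1] at runtime (friendlier to FX tracing and scripting). Class and argument names are illustrative, not the exact timm layer:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    def __init__(self, channels: int, stride: int = 2):
        super().__init__()
        self.channels = channels
        self.stride = stride
        coeffs = torch.tensor([1., 2., 1.])
        blur = coeffs[:, None] * coeffs[None, :]
        blur = blur / blur.sum()
        # one identical 3x3 kernel per channel -> depthwise filter bank
        self.register_buffer('filt', blur[None, None].repeat(channels, 1, 1, 1))

    def forward(self, x):
        x = F.pad(x, (1, 1, 1, 1), mode='reflect')
        return F.conv2d(x, self.filt, stride=self.stride, groups=self.channels)
```
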
SeeFun
ec4e9aa5a0
Add ConvNeXt tiny and small pretrain in22k
Add ConvNeXt tiny and small pretrain in22k from ConvNeXt repo:
06f7b05f92
2022-03-24 15:18:08 +08:00
Ross Wightman
575924ed60
Update test crop for new RegNet-V weights to match Y
2022-03-23 21:40:53 -07:00
Ross Wightman
1618527098
Add layer scale and parallel blocks to vision_transformer
2022-03-23 16:09:07 -07:00
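
A minimal sketch of the "layer scale" pattern added in the commit above: a learnable per-channel scale on each residual branch, initialized to a small value so blocks start near identity. Names follow the common LayerScale formulation and are illustrative rather than the exact timm module:

```python
import torch
import torch.nn as nn

class LayerScale(nn.Module):
    def __init__(self, dim: int, init_values: float = 1e-5):
        super().__init__()
        self.gamma = nn.Parameter(init_values * torch.ones(dim))

    def forward(self, x):
        return x * self.gamma

# inside a transformer block: x = x + layer_scale(attn(norm(x)))
```
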
Ross Wightman
c42be74621
Add attrib / comments about Swin-S3 (AutoFormerV2) weights
2022-03-23 16:07:09 -07:00
Ross Wightman
474ac906a2
Add 'head norm first' convnext_tiny_hnf weights
2022-03-23 16:06:00 -07:00
Ross Wightman
dc51334cdc
Fix pruned adapt for EfficientNet models that are now using BatchNormAct layers
2022-03-22 20:33:01 -07:00
Ross Wightman
024fc4d9ab
version 0.6.1 for master
2022-03-21 22:03:13 -07:00
Ross Wightman
e1e037ba52
Fix bad tuple typing fix that was on the XLA branch but missed on the master merge
2022-03-21 22:00:33 -07:00
Ross Wightman
fe457c1996
Update SwinTransformerV2Cr post-merge, update with grad checkpointing / group matcher
* weight compat break, activate norm3 for final block of final stage (equivalent to pre-head norm, but while still in BLC shape)
* remove fold/unfold for TPU compat, add commented out roll code for TPU
* add option for end of stage norm in all stages
* allow weight_init to be selected between pytorch default inits and xavier / moco style vit variant
2022-03-21 14:50:28 -07:00
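
A hedged usage sketch of the gradient-checkpointing support mentioned above, assuming a model that implements set_grad_checkpointing (the model name is just an example):

```python
import torch
import timm

model = timm.create_model('swinv2_cr_small_224', pretrained=False)
model.set_grad_checkpointing(True)   # recompute activations in backward to save memory
out = model(torch.randn(2, 3, 224, 224))
```
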
Ross Wightman
b049a5c5c6
Merge remote-tracking branch 'origin/master' into norm_norm_norm
2022-03-21 13:41:43 -07:00
Ross Wightman
9440a50c95
Merge branch 'mrT23-master'
2022-03-21 12:30:02 -07:00
Ross Wightman
d98aa47d12
Revert ml-decoder changes to model factory and train script
2022-03-21 12:29:02 -07:00
Ross Wightman
b20665d379
Merge pull request #1007 from qwertyforce/patch-1
update arxiv link
2022-03-21 12:12:58 -07:00
Ross Wightman
61d3493f87
Fix hf-hub handling when hf-hub is config source
2022-03-21 11:12:55 -07:00
Ross Wightman
5f47518f27
Fix pit implementation to be closer to deit/levit re: distillation head handling
2022-03-21 11:12:14 -07:00
Ross Wightman
0862e6ebae
Fix correctness of some group matching regex (no impact on result), some formatting, missed forward_head for resnet
2022-03-19 14:58:54 -07:00
Ross Wightman
94bcdebd73
Add latest weights trained on TPU-v3 VM instances
2022-03-18 21:35:41 -07:00
Ross Wightman
0557c8257d
Fix bug introduced in non-layer_decay weight_decay application. Remove debug print, fix arg description.
2022-02-28 17:06:32 -08:00
Ross Wightman
372ad5fa0d
Significant model refactor and additions:
* All models updated with revised forward_features / forward_head interface
* Vision transformer and MLP based models consistently output sequence from forward_features (pooling or token selection considered part of 'head')
* WIP param grouping interface to allow consistent grouping of parameters for layer-wise decay across all model types
* Add gradient checkpointing support to a significant % of models, especially popular architectures
* Formatting and interface consistency improvements across models
* layer-wise LR decay impl part of optimizer factory w/ scale support in scheduler
* Poolformer and Volo architectures added
2022-02-28 13:56:23 -08:00
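
A short sketch of the revised forward_features / forward_head interface described above: forward_features returns unpooled features (the full token sequence for ViT / MLP models), and forward_head applies pooling plus the classifier. The model name is just an example:

```python
import torch
import timm

model = timm.create_model('resnet50', pretrained=False, num_classes=10).eval()
x = torch.randn(1, 3, 224, 224)
feats = model.forward_features(x)    # unpooled features, e.g. (1, 2048, 7, 7)
logits = model.forward_head(feats)   # global pool + classifier -> (1, 10)
assert torch.allclose(logits, model(x))
```
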
Ross Wightman
1420c118df
Missed committing outstanding changes to default_cfg keys and test exclusions for swin v2
2022-02-23 19:50:26 -08:00
Ross Wightman
c6e4b7895a
Swin V2 CR impl refactor.
* reformat and change some naming so closer to existing timm vision transformers
* remove typing that wasn't adding clarity (or was causing torchscript issues)
* support non-square windows
* auto window size adjust from image size
* post-norm + main-branch no
2022-02-23 17:28:52 -08:00
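
A minimal sketch of the "auto window size adjust from image size" idea above, assuming the simple rule of shrinking the window along any axis where the feature map is smaller than the requested window (the actual implementation may differ):

```python
def adjust_window_size(window_size, feat_size):
    # e.g. adjust_window_size((8, 8), (7, 14)) -> (7, 8)
    return tuple(min(w, f) for w, f in zip(window_size, feat_size))
```
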
Christoph Reich
67d140446b
Fix bug in classification head
2022-02-20 22:28:05 +01:00
Christoph Reich
29add820ac
Refactor (back to relative imports)
2022-02-20 00:46:48 +01:00
Christoph Reich
74a04e0016
Add parameter to change normalization type
2022-02-20 00:46:00 +01:00
Christoph Reich
2a4f6c13dd
Create model functions
2022-02-20 00:40:22 +01:00
Christoph Reich
87b4d7a29a
Add get and reset classifier method
2022-02-19 22:47:02 +01:00
Christoph Reich
ff5f6bcd6c
Check input resolution
2022-02-19 22:42:02 +01:00
Christoph Reich
81bf0b4033
Change parameter names to match Swin V1
2022-02-19 22:37:22 +01:00
Christoph Reich
f227b88831
Add initials (CR) to model and file
2022-02-19 22:14:38 +01:00
Christoph Reich
90dc74c450
Add code from https://github.com/ChristophReich1996/Swin-Transformer-V2 and change docstring style to match timm
2022-02-19 22:12:11 +01:00