Ross Wightman
7c7ecd2492
Add --use-train-size flag to force use of train input_size (over test input size) for validation. Default test-time pooling to use train input size (fixes issues).
2022-07-07 22:01:24 -07:00
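For context, a minimal sketch of how such a flag might interact with a pretrained config at validation time; the `--use-train-size` name comes from the commit, but the config dict and resolution logic below are simplified assumptions, not timm's actual `resolve_data_config`.
```
# Hypothetical sketch: prefer test_input_size unless --use-train-size forces the train size.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--use-train-size', action='store_true',
                    help='force use of train input_size (over test_input_size) for validation')
args = parser.parse_args([])  # pass ['--use-train-size'] to enable

pretrained_cfg = {'input_size': (3, 224, 224), 'test_input_size': (3, 288, 288)}  # made-up values

if args.use_train_size or 'test_input_size' not in pretrained_cfg:
    input_size = pretrained_cfg['input_size']
else:
    input_size = pretrained_cfg['test_input_size']
print(input_size)  # (3, 288, 288) by default, (3, 224, 224) with the flag
```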
Ross Wightman
ce65a7b29f
Update vit_relpos w/ some additional weights, some cleanup to match recent vit updates, more MLP log coord experiments.
2022-07-07 21:33:25 -07:00
Ross Wightman
58621723bd
Add CrossStage3 DarkNet (cs3) weights
2022-07-07 17:43:38 -07:00
Ross Wightman
9be0c84715
Change set -> dict w/ None keys for dataset split synonym search, so the result is always consistent if more than one exists. Fix #1224
2022-07-07 15:33:53 -07:00
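The set -> dict change works because Python dicts keep insertion order while set iteration order is not guaranteed, so a dict whose values are all None acts as an ordered set and the first matching synonym is always the same one. A minimal sketch of the idiom; the synonym list and helper below are illustrative, not timm's actual code.
```
import os

# dict.fromkeys(...) builds an "ordered set": values are all None, keys keep insertion order.
_EVAL_SYNONYMS = dict.fromkeys(['val', 'valid', 'validation', 'eval'])

def find_split_dir(root, synonyms=_EVAL_SYNONYMS):
    # Iteration order is deterministic, so if several synonym dirs exist the same one wins every run.
    for name in synonyms:
        candidate = os.path.join(root, name)
        if os.path.isdir(candidate):
            return candidate
    return None
```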
Ross Wightman
db0cee9910
Refactor cspnet configuration using dataclasses, update feature extraction for new cs3 variants.
2022-07-07 14:43:27 -07:00
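A sketch of the dataclass-configuration pattern the refactor points at; the class and field names below are illustrative stand-ins, not the actual cspnet config classes.
```
from dataclasses import dataclass, field
from typing import Tuple

@dataclass
class StageCfg:
    depth: int = 1
    out_chs: int = 64
    stride: int = 2
    cross_stage: bool = True  # CSP-style split/merge vs. a plain dark stage

@dataclass
class NetCfg:
    stem_chs: int = 32
    stages: Tuple[StageCfg, ...] = field(default_factory=tuple)

# Variants become small typed objects (easy to diff / swap fields) instead of nested dicts.
cs3_like = NetCfg(stages=tuple(
    StageCfg(depth=d, out_chs=c) for d, c in [(1, 64), (2, 128), (8, 256), (8, 512), (4, 1024)]))
```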
Ross Wightman
eca09b8642
Add MobileVitV2 support. Fix #1332. Move GroupNorm1 to common layers (used in poolformer + mobilevitv2). Keep old custom ConvNeXt LayerNorm2d impl as LayerNormExp2d for reference.
2022-07-07 14:41:01 -07:00
Ross Wightman
06307b8b41
Remove experimental in-block downsample support in ConvNeXt; needs further experimentation before keeping it in.
2022-07-07 14:37:58 -07:00
Ross Wightman
bfc0dccb0e
Improve image extension handling, add methods to modify / get defaults. Fix #1335, fix #1274.
2022-07-07 14:23:20 -07:00
Ross Wightman
7d4b3807d5
Support DeiT-3 (Revenge of the ViT) checkpoints. Add non-overlapping (w/ class token) pos-embed support to vit.
2022-07-04 22:25:22 -07:00
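The "non-overlapping (w/ class token)" pos-embed means the position table covers patch tokens only and the class token is prepended afterwards (the DeiT-3 layout), versus the original ViT layout where the table also spans the class token. A hedged sketch of the two orderings with made-up shapes:
```
import torch

B, N, D = 2, 196, 384                     # batch, patch tokens, embed dim (illustrative)
patches = torch.randn(B, N, D)
cls_token = torch.zeros(B, 1, D)

# Original ViT: pos_embed spans class token + patches together.
pos_embed_full = torch.randn(1, N + 1, D)
x_overlap = torch.cat([cls_token, patches], dim=1) + pos_embed_full

# DeiT-3 style non-overlapping variant: pos_embed covers patch tokens only,
# the class token is concatenated after position embedding is applied.
pos_embed_patches = torch.randn(1, N, D)
x_no_overlap = torch.cat([cls_token, patches + pos_embed_patches], dim=1)
```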
Ross Wightman
d0c5bd5722
Rename cs2->cs3 for darknets. Fix features_only for cs3 darknets.
2022-07-03 08:32:41 -07:00
Ross Wightman
d765305821
Remove first_conv for resnetaa50 def
2022-07-02 15:56:17 -07:00
Ross Wightman
dd9b8f57c4
Add feature_info to edgenext for features_only support, hopefully fix some fx / test errors
2022-07-02 15:20:45 -07:00
Ross Wightman
377e9bfa21
Add TPU-trained darknet53 weights. Add missing pretrain_cfg for some csp/darknet models.
2022-07-02 15:18:52 -07:00
Ross Wightman
c170ba3173
Add weights for resnet10t, resnet14t, and resnetaa50 models. Fix #1314
2022-07-02 15:18:06 -07:00
Ross Wightman
188c194b0f
Left some experimental stem code in convnext by mistake
2022-07-02 15:17:28 -07:00
Ross Wightman
70d6d2c484
Support test_crop_size in data config resolve
2022-07-02 15:17:05 -07:00
Ross Wightman
6064d16a2d
Add initial EdgeNeXt import. Significant cleanup / reorg (like ConvNeXt). Fix #1320
* edgenext refactored for torchscript compat, stage base organization
* slight refactor of ConvNeXt to match some EdgeNeXt additions
* remove use of funky LayerNorm layer in ConvNeXt and just use nn.LayerNorm and LayerNorm2d (permute)
2022-07-01 15:18:42 -07:00
Ross Wightman
7a9c6811c9
Add eps arg to LayerNorm2d, add 'tf' (tensorflow) variant of trunc_normal_ that applies scale/shift after sampling (instead of needing to move a/b)
2022-07-01 15:15:39 -07:00
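A sketch of what the 'tf' variant means in practice, assuming it mirrors TensorFlow's truncated_normal initializer: the bounds apply to a unit normal and the scale/shift happen after sampling, so a/b never need to be recomputed for a new mean/std. The helper name below is a stand-in, not the exact timm function.
```
import torch
from torch.nn.init import trunc_normal_

def trunc_normal_tf_sketch(tensor, mean=0., std=1., a=-2., b=2.):
    # Sample a standard truncated normal in [a, b], then scale/shift afterwards,
    # matching the TF convention (bounds are in units of the unit normal).
    with torch.no_grad():
        trunc_normal_(tensor, 0., 1., a, b)
        tensor.mul_(std).add_(mean)
    return tensor

w = trunc_normal_tf_sketch(torch.empty(256, 256), std=0.02)
```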
Ross Wightman
82c311d082
Add more experimental darknet and 'cs2' darknet variants (different cross stage setup, closer to newer YOLO backbones) for train trials.
2022-07-01 15:14:01 -07:00
Ross Wightman
a050fde5cd
Add resnet10t (basic block) and resnet14t (bottleneck) with 1,1,1,1 repeats
2022-07-01 15:03:28 -07:00
Ross Wightman
e6d7df40ec
No longer any point in using kwargs for pretrain_cfg resolve, just pass an explicit arg
2022-06-24 21:36:23 -07:00
Ross Wightman
07d0c4ae96
Improve repr for DropPath module
2022-06-24 14:58:15 -07:00
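Improving a module's printed repr typically just means overriding `extra_repr`; a minimal sketch of the pattern (the forward here is an identity stand-in, not the real stochastic-depth logic):
```
import torch.nn as nn

class DropPathSketch(nn.Module):
    def __init__(self, drop_prob=0.0, scale_by_keep=True):
        super().__init__()
        self.drop_prob = drop_prob
        self.scale_by_keep = scale_by_keep

    def forward(self, x):
        return x  # identity stand-in; real DropPath randomly drops residual paths in training

    def extra_repr(self):
        # Printed model summaries now show e.g. DropPathSketch(drop_prob=0.100)
        return f'drop_prob={round(self.drop_prob, 3):0.3f}'

print(DropPathSketch(0.1))
```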
Ross Wightman
e27c16b8a0
Remove unnecessary code for syncbn guard
2022-06-24 14:57:42 -07:00
Ross Wightman
0da3c9ebbf
Remove SiLU layer in default args that breaks import on very old PyTorch
2022-06-24 14:56:58 -07:00
Ross Wightman
7d657d2ef4
Improve resolve_pretrained_cfg behaviour when no cfg exists, warn instead of crash. Improve usability (see #1311)
2022-06-24 14:55:25 -07:00
Ross Wightman
879df47c0a
Support BatchNormAct2d for sync-bn use. Fix #1254
2022-06-24 14:51:26 -07:00
Ross Wightman
7cedc8d474
Follow-up to #1256, fix interpolation warning in auto_augment as well
2022-06-21 14:56:53 -07:00
Jakub Kaczmarzyk
db64393c0d
use `Image.Resampling` namespace for PIL mapping ( #1256 )
* use `Image.Resampling` namespace for PIL mapping
PIL shows a deprecation warning when accessing resampling constants via the `Image` namespace. The suggested namespace is `Image.Resampling`. This commit updates `_pil_interpolation_to_str` to use the `Image.Resampling` namespace.
```
/tmp/ipykernel_11959/698124036.py:2: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
Image.NEAREST: 'nearest',
/tmp/ipykernel_11959/698124036.py:3: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
Image.BILINEAR: 'bilinear',
/tmp/ipykernel_11959/698124036.py:4: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
Image.BICUBIC: 'bicubic',
/tmp/ipykernel_11959/698124036.py:5: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
Image.BOX: 'box',
/tmp/ipykernel_11959/698124036.py:6: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
Image.HAMMING: 'hamming',
/tmp/ipykernel_11959/698124036.py:7: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
Image.LANCZOS: 'lanczos',
```
* use new pillow resampling enum only if it exists (see the compatibility sketch after this entry)
2022-06-12 22:30:57 -07:00
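The follow-up bullet amounts to a guarded lookup so older Pillow releases keep working; a sketch of the compatibility check (the mapping name mirrors `_pil_interpolation_to_str` from the commit, but this is a simplified stand-in):
```
from PIL import Image

# Pillow >= 9.1 exposes filters under Image.Resampling; older releases only have the
# module-level constants that now emit DeprecationWarning.
_RESAMPLE = Image.Resampling if hasattr(Image, 'Resampling') else Image

_pil_interpolation_to_str_sketch = {
    _RESAMPLE.NEAREST: 'nearest',
    _RESAMPLE.BILINEAR: 'bilinear',
    _RESAMPLE.BICUBIC: 'bicubic',
    _RESAMPLE.BOX: 'box',
    _RESAMPLE.HAMMING: 'hamming',
    _RESAMPLE.LANCZOS: 'lanczos',
}
```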
Ross Wightman
20a1fa63f8
Make dev version 0.6.2.dev0 for pypi pre
2022-05-15 14:29:57 -07:00
Ross Wightman
347308faad
Update README.md, version to 0.6.2
2022-05-13 13:54:41 -07:00
Ross Wightman
4b30bae67b
Add updated vit_relpos weights, and impl w/ support for official swin-v2 differences for relpos. Add bias control support for MLP layers
2022-05-13 13:53:57 -07:00
Ross Wightman
d4c0588012
Remove persistent buffers from Swin-V2. Change SwinV2Cr cos attn + tau/logit_scale to match official, add ckpt convert, init_value zeros resid LN weight by default
2022-05-13 10:50:59 -07:00
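Making a buffer non-persistent is a one-argument change to `register_buffer`; a minimal sketch of the idea (the index tensor here is a placeholder, not the actual Swin-V2 relative-position table):
```
import torch
import torch.nn as nn

class AttnSketch(nn.Module):
    def __init__(self, num_pos=49):
        super().__init__()
        index = torch.arange(num_pos)  # derived table, always recomputable from window size
        # persistent=False keeps it out of state_dict, so checkpoints shrink and loading
        # weights for a different window size no longer hits buffer shape mismatches.
        self.register_buffer('relative_position_index', index, persistent=False)

m = AttnSketch()
assert 'relative_position_index' not in m.state_dict()
```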
Ross Wightman
27c42f0830
Fix torchscript use for official Swin-V2, add support for non-square window/shift to WindowAttn/Block
2022-05-13 09:29:33 -07:00
Ross Wightman
2f2b22d8c7
Disable nvfuser fma / opt level overrides per #1244
2022-05-13 09:27:13 -07:00
Ross Wightman
c0211b0bf7
Swin-V2 test fixes, typo
2022-05-12 22:31:55 -07:00
Ross Wightman
9a86b900fa
Official SwinV2 models
2022-05-12 15:05:10 -07:00
Ross Wightman
d07d015173
Merge pull request #1249 from okojoalg/sequencer
Add Sequencer
2022-05-09 20:42:43 -07:00
Ross Wightman
d30685c283
Merge pull request #1251 from hankyul2/fix-multistep-scheduler
fix: multistep lr decay epoch bugs
2022-05-09 16:07:46 -07:00
han
a16171335b
fix: change milestones to decay-milestones
- change argparser option `milestone` to `decay-milestone`
2022-05-10 07:57:19 +09:00
Ross Wightman
39b725e1c9
Fix tests for rank-4 output where feature channels dim is -1 (3) and not 1
2022-05-09 15:20:24 -07:00
Ross Wightman
78a32655fa
Fix poolformer group_matcher to merge proj downsample with previous block, support coarse
2022-05-09 12:20:04 -07:00
Ross Wightman
d79f3d9d1e
Fix torchscript use for sequencer, add group_matcher, forward_head support, minor formatting
2022-05-09 12:09:39 -07:00
Ross Wightman
37b6920df3
Fix group_matcher regex for regnet.py
2022-05-09 10:40:40 -07:00
okojoalg
93a79a3dd9
Fix num_features in Sequencer
2022-05-06 23:16:32 +09:00
han
57a988df30
fix: multistep lr decay epoch bugs
- add milestones arguments
- change decay_epochs to milestones variable
2022-05-06 13:14:43 +09:00
okojoalg
578d52e752
Add Sequencer
2022-05-06 00:36:01 +09:00
Ross Wightman
f5ca4141f7
Adjust arg order for recent vit model args, add a few comments
2022-05-02 22:41:38 -07:00
Ross Wightman
41dc49a337
Vision Transformer refactoring and Rel Pos impl
2022-05-02 15:37:39 -07:00
Ross Wightman
b7cb8d0337
Add Swin-V2 Small-NS weights (83.5 @ 224). Add layer scale like 'init_values' via post-norm LN weight scaling
2022-04-26 17:32:49 -07:00
jjsjann123
f88c606fcf
fixing channels_last on cond_conv2d; update nvfuser debug env variable
2022-04-25 12:41:46 -07:00
Li Dong
09e9f3defb
migrate azure blob for beit checkpoints
## Motivation
We are going to use a new blob account to store the checkpoints.
## Modification
Modify the azure blob storage URLs for BEiT checkpoints.
2022-04-23 13:02:29 +08:00
Ross Wightman
52ac881402
Missed first_conv in latest seresnext 'D' default_cfgs
2022-04-22 20:55:52 -07:00
Ross Wightman
7629d8264d
Add two new SE-ResNeXt101-D 32x8d weights, one anti-aliased and one not. Reshuffle default_cfgs vs model entrypoints for resnet.py so they are better aligned.
2022-04-22 16:54:53 -07:00
SeeFun
8f0bc0591e
fix convnext args
2022-04-05 20:00:57 +08:00
Ross Wightman
c5a8e929fb
Add initial swinv2 tiny / small weights
2022-04-03 15:22:55 -07:00
Ross Wightman
f670d98cb8
Make a few more layers symbolically traceable (remove from FX leaf modules)
* remove dtype kwarg from .to() calls in EvoNorm as it messed up script + trace combo
* BatchNormAct2d always uses custom forward (cut & paste from original) instead of super().forward. Fixes #1176
* BlurPool groups==channels, no need to use input.dim[1]
2022-03-24 21:43:56 -07:00
SeeFun
ec4e9aa5a0
Add ConvNeXt tiny and small pretrain in22k
Add ConvNeXt tiny and small pretrain in22k from ConvNeXt repo:
06f7b05f92
2022-03-24 15:18:08 +08:00
Ross Wightman
575924ed60
Update test crop for new RegNet-V weights to match Y
2022-03-23 21:40:53 -07:00
Ross Wightman
1618527098
Add layer scale and parallel blocks to vision_transformer
2022-03-23 16:09:07 -07:00
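Layer scale is a learnable per-channel gain on each residual branch, initialised to a small value so deep blocks start close to identity (the CaiT-style formulation); a hedged sketch, not necessarily the exact timm module:
```
import torch
import torch.nn as nn

class LayerScaleSketch(nn.Module):
    def __init__(self, dim, init_values=1e-5):
        super().__init__()
        self.gamma = nn.Parameter(init_values * torch.ones(dim))  # per-channel gain

    def forward(self, x):
        return x * self.gamma

# Typical use inside a block: x = x + drop_path(ls(attn(norm(x))))
tokens = torch.randn(2, 197, 384)
print(LayerScaleSketch(384)(tokens).shape)  # torch.Size([2, 197, 384])
```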
Ross Wightman
c42be74621
Add attrib / comments about Swin-S3 (AutoFormerV2) weights
2022-03-23 16:07:09 -07:00
Ross Wightman
474ac906a2
Add 'head norm first' convnext_tiny_hnf weights
2022-03-23 16:06:00 -07:00
Ross Wightman
dc51334cdc
Fix pruned adapt for EfficientNet models that are now using BatchNormAct layers
2022-03-22 20:33:01 -07:00
Ross Wightman
024fc4d9ab
version 0.6.1 for master
2022-03-21 22:03:13 -07:00
Ross Wightman
e1e037ba52
Fix bad tuple typing fix that was on XLA branch but missed on master merge
2022-03-21 22:00:33 -07:00
Ross Wightman
341b464a5a
Remove redundant noise attr from Plateau scheduler (use parent)
2022-03-21 22:00:03 -07:00
Ross Wightman
fe457c1996
Update SwinTransformerV2Cr post-merge, update with grad checkpointing / group matcher
* weight compat break, activate norm3 for final block of final stage (equivalent to pre-head norm, but while still in BLC shape)
* remove fold/unfold for TPU compat, add commented out roll code for TPU
* add option for end of stage norm in all stages
* allow weight_init to be selected between pytorch default inits and xavier / moco style vit variant
2022-03-21 14:50:28 -07:00
Ross Wightman
b049a5c5c6
Merge remote-tracking branch 'origin/master' into norm_norm_norm
2022-03-21 13:41:43 -07:00
Ross Wightman
7cdd164d77
Fix #1184, scheduler noise bug during merge madness
2022-03-21 13:35:45 -07:00
Ross Wightman
9440a50c95
Merge branch 'mrT23-master'
2022-03-21 12:30:02 -07:00
Ross Wightman
d98aa47d12
Revert ml-decoder changes to model factory and train script
2022-03-21 12:29:02 -07:00
Ross Wightman
b20665d379
Merge pull request #1007 from qwertyforce/patch-1
update arxiv link
2022-03-21 12:12:58 -07:00
Ross Wightman
7a0994f581
Merge pull request #1150 from ChristophReich1996/master
Swin Transformer V2
2022-03-21 11:56:57 -07:00
Ross Wightman
61d3493f87
Fix hf-hub handling when hf-hub is config source
2022-03-21 11:12:55 -07:00
Ross Wightman
5f47518f27
Fix pit implementation to be closer to deit/levit re distillation head handling
2022-03-21 11:12:14 -07:00
Ross Wightman
0862e6ebae
Fix correctness of some group matching regex (no impact on result), some formatting, missed forward_head for resnet
2022-03-19 14:58:54 -07:00
Ross Wightman
94bcdebd73
Add latest weights trained on TPU-v3 VM instances
2022-03-18 21:35:41 -07:00
Ross Wightman
0557c8257d
Fix bug introduced in non layer_decay weight_decay application. Remove debug print, fix arg desc.
2022-02-28 17:06:32 -08:00
Ross Wightman
372ad5fa0d
Significant model refactor and additions:
* All models updated with revised forward_features / forward_head interface (usage sketch after this entry)
* Vision transformer and MLP based models consistently output sequence from forward_features (pooling or token selection considered part of 'head')
* WIP param grouping interface to allow consistent grouping of parameters for layer-wise decay across all model types
* Add gradient checkpointing support to a significant % of models, especially popular architectures
* Formatting and interface consistency improvements across models
* layer-wise LR decay impl part of optimizer factory w/ scale support in scheduler
* Poolformer and Volo architectures added
2022-02-28 13:56:23 -08:00
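From a user's point of view, the revised interface (assuming timm >= 0.6) splits inference into an unpooled feature stage and a head stage; a sketch with an arbitrary model:
```
import timm
import torch

model = timm.create_model('resnet50', pretrained=False).eval()
x = torch.randn(1, 3, 224, 224)

feats = model.forward_features(x)        # unpooled features: NCHW for convnets,
                                         # a token sequence for ViT / MLP models
logits = model.forward_head(feats)       # pooling / token selection + classifier
assert torch.allclose(logits, model(x))  # forward() composes the two stages
```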
Ross Wightman
1420c118df
Missed committing outstanding changes to default_cfg keys and test exclusions for swin v2
2022-02-23 19:50:26 -08:00
Ross Wightman
c6e4b7895a
Swin V2 CR impl refactor.
* reformat and change some naming so closer to existing timm vision transformers
* remove typing that wasn't adding clarity (or causing torchscript issues)
* support non-square windows
* auto window size adjust from image size
* post-norm + main-branch no
2022-02-23 17:28:52 -08:00
Christoph Reich
67d140446b
Fix bug in classification head
2022-02-20 22:28:05 +01:00
Christoph Reich
29add820ac
Refactor (back to relative imports)
2022-02-20 00:46:48 +01:00
Christoph Reich
74a04e0016
Add parameter to change normalization type
2022-02-20 00:46:00 +01:00
Christoph Reich
2a4f6c13dd
Create model functions
2022-02-20 00:40:22 +01:00
Christoph Reich
87b4d7a29a
Add get and reset classifier method
2022-02-19 22:47:02 +01:00
Christoph Reich
ff5f6bcd6c
Check input resolution
2022-02-19 22:42:02 +01:00
Christoph Reich
81bf0b4033
Change parameter names to match Swin V1
2022-02-19 22:37:22 +01:00
Christoph Reich
f227b88831
Add initials (CR) to model and file
2022-02-19 22:14:38 +01:00
Christoph Reich
90dc74c450
Add code from https://github.com/ChristophReich1996/Swin-Transformer-V2 and change docstring style to match timm
2022-02-19 22:12:11 +01:00
Ross Wightman
2c3870e107
semobilevit_s for good measure
2022-01-31 22:36:09 -08:00
Ross Wightman
bcaeb91b03
Version to 0.6.0, possible interface incompatibilities vs 0.5.x
2022-01-31 15:42:14 -08:00
Ross Wightman
58ba49c8ef
Add MobileViT models (w/ ByobNet base). Close #1038.
2022-01-31 15:39:34 -08:00
Ross Wightman
5f81d4de23
Move DeiT to own file, vit getting crowded. Working towards fixing #1029 , make pooling interface for transformers and mlp closer to convnets. Still working through some details...
2022-01-26 22:53:57 -08:00
ayasyrev
cf57695938
Remove duplicated sched noise code
2022-01-26 11:53:08 +03:00
Ross Wightman
95cfc9b3e8
Merge remote-tracking branch 'origin/master' into norm_norm_norm
2022-01-25 22:20:45 -08:00
Ross Wightman
abc9ba2544
Transitioning default_cfg -> pretrained_cfg. Improving handling of pretrained_cfg source (HF-Hub, files, timm config, etc). Checkpoint handling tweaks.
2022-01-25 21:54:13 -08:00
Ross Wightman
07379c6d5d
Add vit_base2_patch32_256 for a model between base_patch16 and patch32 with a slightly larger img size and width
2022-01-24 14:46:47 -08:00
Ross Wightman
447677616f
version 0.5.5
2022-01-20 21:18:30 -08:00
Ross Wightman
83b40c5a58
Last batch of small model weights (for now). mobilenetv3_small 050/075/100 and updated mnasnet_small with lambc/lamb optimizer.
2022-01-19 10:02:02 -08:00
Mi-Peng
cdcd0a92ca
fix lars
2022-01-19 17:49:43 +08:00