Commit Graph

1197 Commits (9bb4c80d2a61812f4ca2a0d665f6147978feed39)

Author SHA1 Message Date
Alexander Soare 0cb8ea432c wip 2021-10-02 15:55:08 +01:00
Ross Wightman d9abfa48df Make broadcast_buffers disable its own flag for now (needs more testing on interaction with dist_bn) 2021-10-01 13:43:55 -07:00
Ross Wightman b1c2e3eb92 Match rel_pos_indices attr rename in conv branch 2021-09-30 23:19:05 -07:00
Ross Wightman b49630a138 Add relative pos embed option to LambdaLayer, fix last transpose/reshape. 2021-09-30 22:45:09 -07:00
Ross Wightman d657e2cc0b Remove dead code line from efficientnet 2021-09-30 21:54:42 -07:00
Ross Wightman 0ca687f224 Make 'regnetz' model experiments closer to actual RegNetZ, bottleneck expansion, expand from in_chs, no shortcut on stride 2, tweak model sizes 2021-09-30 21:49:38 -07:00
Ross Wightman b5bf4dce98 Merge pull request #898 from leondgarse/master: Remove a duplicate layer creation in byobnet.py 2021-09-30 13:32:15 -07:00
leondgarse 51eaf9360d Remove a duplicate layer creation in byobnet.py (`self.conv2_kxk` is repeated in `byobnet.py`; remove the duplicate code) 2021-09-30 18:30:48 +08:00
Ross Wightman b81e79aae9 Fix bottleneck attn transpose typo, hopefully these train better now.. 2021-09-28 16:38:41 -07:00
Ross Wightman 80075b0b8a Add worker_seeding arg to allow selecting old vs updated data loader worker seed for (old) experiment repeatability 2021-09-28 16:37:45 -07:00
Ross Wightman 6478bcd02c Fix regnetz_d conv layer name, use inception mean/std 2021-09-26 14:54:17 -07:00
Ross Wightman 3f9959cdd2 Merge pull request #882 from ShoufaChen/master: fix `use_amp` 2021-09-25 21:37:44 -07:00
Shoufa Chen 908563d060 fix `use_amp` (fixes https://github.com/rwightman/pytorch-image-models/issues/881) 2021-09-26 12:32:22 +08:00
Ross Wightman 0387e6057e Update binary cross ent impl to use thresholding as an option (convert soft targets from mixup/cutmix to 0, 1) 2021-09-23 15:45:39 -07:00
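The thresholding option in the commit above converts the soft targets produced by mixup/cutmix back into hard 0/1 labels before the BCE loss. A minimal sketch of that idea (the function name and the 0.2 threshold are illustrative, not timm's actual API):

```python
import numpy as np

def threshold_targets(soft_targets, threshold=0.2):
    """Convert soft (mixup/cutmix) targets to hard 0/1 BCE targets.

    Any class whose soft probability exceeds `threshold` becomes a
    positive (1.0) target; everything else becomes 0.0.
    """
    return (np.asarray(soft_targets) > threshold).astype(np.float32)

# Example: a mixup target blending class 0 (weight 0.7) and class 2 (weight 0.3)
hard = threshold_targets([0.7, 0.0, 0.3], threshold=0.2)
```

With a low threshold, both mixed classes become positives, so BCE trains the model to detect each mixed-in class rather than regress their mixing weights.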
Ross Wightman 5d6983c462 Batch validate a list of files if model is a text file with model per line 2021-09-23 15:45:17 -07:00
Ross Wightman f8a63a3b71 Add worker_init_fn to loader for numpy seed per worker 2021-09-23 15:44:38 -07:00
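The commit above seeds NumPy separately in each DataLoader worker; otherwise forked workers can inherit identical NumPy state and emit duplicate augmentations. A minimal sketch of the per-worker seeding idea (the `base_seed` constant is illustrative; in a real loader the base would come from torch's per-worker seed inside the worker process):

```python
import numpy as np

def worker_init_fn(worker_id, base_seed=42):
    """Give each DataLoader worker its own deterministic NumPy RNG stream.

    Derives a distinct seed per worker; passed to DataLoader(worker_init_fn=...).
    """
    np.random.seed((base_seed + worker_id) % (2 ** 32))

# simulate two workers: their streams differ, and re-seeding is reproducible
worker_init_fn(0)
a = np.random.rand()
worker_init_fn(1)
b = np.random.rand()
worker_init_fn(0)
a_again = np.random.rand()
```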
Ross Wightman 515121cca1 Use reshape instead of view in std_conv; view was causing issues in recent PyTorch with channels_last 2021-09-23 15:43:48 -07:00
Ross Wightman da06cc61d4 ResNetV2 seems to work best without zero_init residual 2021-09-23 15:43:22 -07:00
Ross Wightman 8e11da0ce3 Add experimental RegNetZ(ish) models for training / perf trials. 2021-09-23 15:42:57 -07:00
Ross Wightman 3d9c23af87 Merge pull request #875 from alexander-soare/effnets-norm-layer: make it possible to provide norm_layer via create_model 2021-09-21 07:17:52 -07:00
Alexander Soare 6bbc50beb4 make it possible to provide norm_layer via create_model 2021-09-21 10:19:04 +01:00
Ross Wightman a6e8598aaf Merge pull request #821 from rwightman/attn_update: Update attention / self-attn based models from a series of experiments 2021-09-13 17:49:34 -07:00
Ross Wightman cf5ac2800c BotNet models were still off, remove weights for bad configs. Add good SE-HaloNet33-TS weights. 2021-09-13 17:18:59 -07:00
Ross Wightman 24720abe3b Merge branch 'master' into attn_update 2021-09-13 16:51:10 -07:00
Ross Wightman 1c9284c640 Add BeiT 'finetuned' 1k weights and pretrained 22k weights, pretraining specific (masked) model excluded for now 2021-09-13 16:38:23 -07:00
Ross Wightman f8a215cfe6 A few more crossvit tweaks, fix training w/ no_weight_decay names, add crop option for scaling, adjust default crop_pct for large img size to 1.0 for better results 2021-09-13 14:17:34 -07:00
Ross Wightman 7ab2491ab7 Better handling of crossvit for tests / forward_features, fix torchscript regression in my changes 2021-09-13 13:01:05 -07:00
Ross Wightman 702982d8af Merge branch 'chunfuchen-feature/crossvit' 2021-09-13 11:50:58 -07:00
Ross Wightman f1808e0970 Post crossvit merge cleanup, change model names to reflect input size, cleanup img size vs scale handling, fix tests 2021-09-13 11:49:54 -07:00
Ross Wightman a897e0ebcc Merge branch 'feature/crossvit' of https://github.com/chunfuchen/pytorch-image-models into chunfuchen-feature/crossvit 2021-09-10 17:38:37 -07:00
Ross Wightman 4027412757 Add resnet33ts weights, update resnext26ts baseline weights 2021-09-09 14:46:41 -07:00
Richard Chen 9fe5798bee fix bug for reset classifier and fix for validating the dimension 2021-09-08 21:58:17 -04:00
Richard Chen 3718c5a5bd fix loading pretrained model 2021-09-08 11:53:05 -04:00
Richard Chen bb50b69a57 fix for torch script 2021-09-08 11:20:59 -04:00
Ross Wightman 5bd04714e4 Cleanup weight init for byob/byoanet and related 2021-09-05 15:34:05 -07:00
Ross Wightman 8642401e88 Swap botnet 26/50 weights/models after realizing a mistake in arch def, now figuring out why they were so low... 2021-09-05 15:17:19 -07:00
Ross Wightman 5f12de4875 Add initial AttentionPool2d that's being trialed. Fix comment and still trying to improve reliability of sgd test. 2021-09-05 12:41:14 -07:00
Ross Wightman 76881d207b Add baseline resnet26t @ 256x256 weights. Add 33ts variant of halonet with at least one halo in stage 2,3,4 2021-09-04 14:52:54 -07:00
Ross Wightman 54e90e82a5 Another attempt at sgd momentum test passing... 2021-09-03 20:50:26 -07:00
Ross Wightman 484e61648d Adding the attn series weights, tweaking model names, comments... 2021-09-03 18:09:42 -07:00
Ross Wightman 0639d9a591 Fix updated validation_batch_size fallback 2021-09-02 14:44:53 -07:00
Ross Wightman 5db057dca0 Fix misnamed arg, tweak other train script args for better defaults. 2021-09-02 14:15:49 -07:00
Ross Wightman fb94350896 Update training script and loader factory to allow use of scheduler updates, repeat augment, and bce loss 2021-09-01 17:46:40 -07:00
Ross Wightman f262137ff2 Add RepeatAugSampler as per DeiT RASampler impl, showing promise for current (distributed) training experiments. 2021-09-01 17:40:53 -07:00
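Repeated augmentation, as in the DeiT RASampler referenced above, repeats each sample index several times per epoch and shards the repeated list across distributed processes, so the same image is seen with different augmentations on different GPUs. A simplified sketch of the index selection only (function name and the no-shuffle simplification are mine; a real sampler shuffles with an epoch-seeded generator and subsamples per epoch):

```python
def repeat_aug_indices(dataset_len, num_replicas, rank, num_repeats=3):
    """Sketch of repeated-augmentation index sharding (RASampler idea)."""
    # repeat each index num_repeats times: [0,0,0,1,1,1,...]
    indices = [i for i in range(dataset_len) for _ in range(num_repeats)]
    # pad so the list divides evenly across replicas
    total = ((len(indices) + num_replicas - 1) // num_replicas) * num_replicas
    indices += indices[: total - len(indices)]
    # each rank takes an interleaved slice
    return indices[rank::num_replicas]

# 4 samples, 3 repeats, 2 GPUs: every image appears on both ranks each epoch
shard0 = repeat_aug_indices(4, num_replicas=2, rank=0)
shard1 = repeat_aug_indices(4, num_replicas=2, rank=1)
```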
Ross Wightman ba9c1108a1 Add a BCE loss impl that converts dense targets to sparse w/ smoothing as an alternative to CE w/ smoothing. For training experiments. 2021-09-01 17:39:28 -07:00
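The core of that BCE variant is expanding a dense class index into a smoothed one-hot vector, then applying per-class binary cross entropy. A minimal NumPy sketch of the target construction (function names and the probability-space `bce` helper are illustrative; the real loss works on logits):

```python
import numpy as np

def smoothed_onehot(target_idx, num_classes, smoothing=0.1):
    """Expand a dense class index into a smoothed one-hot BCE target.

    Off-target classes get smoothing/num_classes; the target class gets
    the remainder, mirroring label smoothing for a per-class BCE loss.
    """
    off = smoothing / num_classes
    on = 1.0 - smoothing + off
    t = np.full(num_classes, off, dtype=np.float32)
    t[target_idx] = on
    return t

def bce(probs, targets, eps=1e-7):
    """Elementwise binary cross entropy, averaged over classes."""
    p = np.clip(probs, eps, 1 - eps)
    return float(-np.mean(targets * np.log(p) + (1 - targets) * np.log(1 - p)))

t = smoothed_onehot(1, 4, smoothing=0.1)  # target class 1 of 4
```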
Ross Wightman 29a37e23ee LR scheduler update:
* add polynomial decay 'poly'
* cleanup cycle specific args for cosine, poly, and tanh sched, t_mul -> cycle_mul, decay -> cycle_decay, default cycle_limit to 1 in each opt
* add k-decay for cosine and poly sched as per https://arxiv.org/abs/2004.05909
* change default tanh ub/lb to push inflection to later epochs
2021-09-01 17:33:11 -07:00
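The 'poly' schedule plus the k-decay option from arXiv:2004.05909 described in the commit body can be sketched as a single closed-form LR function (a simplified single-cycle sketch; `cycle_mul`/`cycle_decay`/warmup handling from the commit are omitted, and the function name is mine):

```python
def poly_lr(t, t_initial, lr_base, lr_end=0.0, power=1.0, k_decay=1.0):
    """Polynomial ('poly') LR decay with optional k-decay.

    k_decay > 1 holds the LR higher early in training and decays faster
    late; k_decay = 1 recovers the plain polynomial schedule, and
    power = 1 with k_decay = 1 is a simple linear decay.
    """
    frac = (t ** k_decay) / (t_initial ** k_decay)
    return lr_end + (lr_base - lr_end) * (1.0 - frac) ** power
```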
Ross Wightman 492c0a4e20 Update HaloAttn comment 2021-09-01 17:14:31 -07:00
Richard Chen 7ab9d4555c add crossvit 2021-09-01 17:13:12 -04:00
Ross Wightman 3b9032ea48 Use Tensor.unfold().unfold() for HaloAttn, as fast as as_strided but with more clarity 2021-08-27 12:45:53 -07:00
Ross Wightman fc894c375c Another attempt at sgd momentum test passing... 2021-08-27 10:39:31 -07:00