| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | Update attention / self-attn based models from a series of experiments: | 2021-08-20 16:13:11 -07:00 |
| `activations.py` | … | |
| `activations_jit.py` | … | |
| `activations_me.py` | … | |
| `adaptive_avgmax_pool.py` | Refactoring, cleanup, improved test coverage. | 2021-06-12 16:40:02 -07:00 |
| `attention_pool2d.py` | Add initial AttentionPool2d that's being trialed. Fix comment, still trying to improve reliability of sgd test. | 2021-09-05 12:41:14 -07:00 |
| `blur_pool.py` | … | |
| `bottleneck_attn.py` | Add option to include relative pos embedding in the attention scaling as per references. See discussion #912 | 2021-10-12 15:37:01 -07:00 |
| `cbam.py` | Add Gather-Excite and Global Context attn modules. Refactor existing SE-like attn for consistency and refactor byob/byoanet for less redundancy. | 2021-05-27 18:03:29 -07:00 |
| `classifier.py` | Refactoring, cleanup, improved test coverage. | 2021-06-12 16:40:02 -07:00 |
| `cond_conv2d.py` | … | |
| `config.py` | … | |
| `conv2d_same.py` | … | |
| `conv_bn_act.py` | … | |
| `create_act.py` | Fix #661, move hardswish out of default args for LeViT. Enable native torch support for hardswish, hardsigmoid, mish if present. | 2021-05-26 15:28:42 -07:00 |
| `create_attn.py` | Update attention / self-attn based models from a series of experiments: | 2021-08-20 16:13:11 -07:00 |
| `create_conv2d.py` | … | |
| `create_norm_act.py` | … | |
| `drop.py` | … | |
| `eca.py` | Add non-local and BAT attention. Merge attn and self-attn factories into one. Add attention references to README. Add mlp 'mode' to ECA. | 2021-05-31 13:18:11 -07:00 |
| `evo_norm.py` | … | |
| `gather_excite.py` | Add Gather-Excite and Global Context attn modules. Refactor existing SE-like attn for consistency and refactor byob/byoanet for less redundancy. | 2021-05-27 18:03:29 -07:00 |
| `global_context.py` | Add gc_efficientnetv2_rw_t weights (global context instead of SE attn). Add TF XL weights even though the fine-tuned ones don't validate that well. Change default arg for GlobalContext to use scale (mul) mode. | 2021-08-07 16:45:29 -07:00 |
| `halo_attn.py` | Add option to include relative pos embedding in the attention scaling as per references. See discussion #912 | 2021-10-12 15:37:01 -07:00 |
| `helpers.py` | Throw in some FBNetV3 code I had lying around, some refactoring of SE reduction channel calcs for all EffNet archs. | 2021-05-28 20:47:24 -07:00 |
| `inplace_abn.py` | … | |
| `lambda_layer.py` | Halo, bottleneck attn, lambda layer additions and cleanup along w/ experimental model defs | 2021-10-06 16:32:48 -07:00 |
| `linear.py` | … | |
| `median_pool.py` | … | |
| `mixed_conv2d.py` | … | |
| `mlp.py` | Refactoring, cleanup, improved test coverage. | 2021-06-12 16:40:02 -07:00 |
| `non_local_attn.py` | Fix torchscript issue in BAT | 2021-06-08 23:19:51 -07:00 |
| `norm.py` | Realized LayerNorm2d won't work in all cases as is, fixed. | 2021-07-05 18:21:34 -07:00 |
| `norm_act.py` | … | |
| `padding.py` | … | |
| `patch_embed.py` | … | |
| `pool2d_same.py` | Support npz custom load for vision transformer hybrid models. Add posembed rescale for npz load. | 2021-06-14 12:31:44 -07:00 |
| `selective_kernel.py` | Remove min channels for SelectiveKernel, divisor should cover cases well enough. | 2021-05-31 15:38:56 -07:00 |
| `separable_conv.py` | … | |
| `space_to_depth.py` | … | |
| `split_attn.py` | Add non-local and BAT attention. Merge attn and self-attn factories into one. Add attention references to README. Add mlp 'mode' to ECA. | 2021-05-31 13:18:11 -07:00 |
| `split_batchnorm.py` | … | |
| `squeeze_excite.py` | Add non-local and BAT attention. Merge attn and self-attn factories into one. Add attention references to README. Add mlp 'mode' to ECA. | 2021-05-31 13:18:11 -07:00 |
| `std_conv.py` | Use reshape instead of view in std_conv; view was causing issues in recent PyTorch with channels_last | 2021-09-23 15:43:48 -07:00 |
| `test_time_pool.py` | … | |
| `weight_init.py` | … | |
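Several entries above reference the merged attention factory in `create_attn.py` and the MLP / drop-path building blocks in `mlp.py` and `drop.py`. As a rough illustration of how these layer modules are consumed, here is a minimal sketch; it assumes a 2021-era `timm` install where `create_attn`, `Mlp`, and `DropPath` are exported from `timm.models.layers` and the `'se'` / `'eca'` type strings follow the factory's naming.

```python
import torch
from timm.models.layers import DropPath, Mlp, create_attn

x = torch.randn(2, 64, 56, 56)

# create_attn.py: after the 2021-05-31 "merge attn and self-attn factories"
# commit, one factory builds the SE-like channel-attn modules by name.
se = create_attn('se', 64)    # SE module from squeeze_excite.py
eca = create_attn('eca', 64)  # ECA module from eca.py
print(se(x).shape, eca(x).shape)  # attn modules preserve the input shape

# mlp.py / drop.py: ViT-style blocks combine an MLP with stochastic
# depth (DropPath) on the residual branch.
tokens = torch.randn(2, 197, 384)
mlp = Mlp(in_features=384, hidden_features=4 * 384)
drop_path = DropPath(drop_prob=0.1)
out = tokens + drop_path(mlp(tokens))
print(out.shape)  # torch.Size([2, 197, 384])
```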
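The `std_conv.py` entry is terse but points at a real PyTorch gotcha: `Tensor.view` requires the flattened dimensions to be contiguous in memory, which a `channels_last` tensor is not, while `Tensor.reshape` falls back to a copy when needed. A small standalone demonstration of that behavior (not the `std_conv.py` code itself):

```python
import torch

# A channels_last NCHW tensor keeps NHWC memory layout, so flattening
# C*H*W with .view() is not stride-compatible and raises, while
# .reshape() silently copies when it has to.
x = torch.randn(2, 8, 4, 4).to(memory_format=torch.channels_last)

try:
    x.view(2, -1)  # incompatible with channels_last strides
except RuntimeError as e:
    print('view failed:', e)

print(x.reshape(2, -1).shape)  # torch.Size([2, 128])
```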