pytorch-image-models (mirror of https://github.com/huggingface/pytorch-image-models.git)

timm/layers at commit a2f539f055

Latest commit: 962958723c by Ross Wightman (2024-08-16 11:10:04 -07:00)
More Hiera updates. Add forward_intermediates to hieradet/sam2 impl. Make both use same classifier module. Add coarse bool to intermediates.
Files (… = last commit details not shown):

__init__.py - Rename global pos embed for Hiera abswin, factor out commonly used vit weight init fns to layers. Add a channels-last ver of normmlp head. (2024-08-15 17:46:36 -07:00)
activations.py - …
activations_me.py - …
adaptive_avgmax_pool.py - …
attention2d.py - Add xavier_uniform init of MNVC hybrid attention modules. Small improvement in training stability. (2024-07-26 17:03:40 -07:00)
attention_pool.py - …
attention_pool2d.py - Fix rotary embed version of attn pool. Bit of cleanup/naming. (2024-06-11 23:49:17 -07:00)
blur_pool.py - …
bottleneck_attn.py - …
cbam.py - …
classifier.py - More Hiera updates. Add forward_intermediates to hieradet/sam2 impl. Make both use same classifier module. Add coarse bool to intermediates. [forward_intermediates sketch below] (2024-08-16 11:10:04 -07:00)
cond_conv2d.py - …
config.py - …
conv2d_same.py - …
conv_bn_act.py - Remove separate ConvNormActAa class, merge with ConvNormAct. [ConvNormAct sketch below] (2024-06-10 12:05:35 -07:00)
create_act.py - Add WIP HieraDet impl (SAM2 backbone support). (2024-08-15 17:58:15 -07:00)
create_attn.py - …
create_conv2d.py - …
create_norm.py - …
create_norm_act.py - …
drop.py - …
eca.py - …
evo_norm.py - …
fast_norm.py - …
filter_response_norm.py - …
format.py - …
gather_excite.py - …
global_context.py - …
grid.py - …
grn.py - …
halo_attn.py - …
helpers.py - …
hybrid_embed.py - set_input_size initial impl for vit & swin v1. Move HybridEmbed to own location in timm/layers. (2024-07-17 15:25:48 -07:00)
inplace_abn.py - …
interpolate.py - …
lambda_layer.py - …
layer_scale.py - Fix hiera init with num_classes=0, fix weight tag names for sbb2 hiera/vit weights, add LayerScale/LayerScale2d to layers. [LayerScale sketch below] (2024-08-15 11:14:38 -07:00)
linear.py - …
median_pool.py - …
mixed_conv2d.py - …
ml_decoder.py - …
mlp.py - …
non_local_attn.py - …
norm.py - …
norm_act.py - …
padding.py - Padding helpers work if tuples/lists passed. [get_padding sketch below] (2024-07-19 14:28:03 -07:00)
patch_dropout.py - …
patch_embed.py - set_input_size initial impl for vit & swin v1. Move HybridEmbed to own location in timm/layers. (2024-07-17 15:25:48 -07:00)
pool2d_same.py - …
pos_embed.py - Adding pos embed resize fns to FX autowrap exceptions. [resample_abs_pos_embed sketch below] (2024-06-10 12:06:47 -07:00)
pos_embed_rel.py - …
pos_embed_sincos.py - …
selective_kernel.py - Remove separate ConvNormActAa class, merge with ConvNormAct. (2024-06-10 12:05:35 -07:00)
separable_conv.py - …
space_to_depth.py - …
split_attn.py - …
split_batchnorm.py - …
squeeze_excite.py - …
std_conv.py - …
test_time_pool.py - …
trace_utils.py - …
typing.py - …
weight_init.py - Rename global pos embed for Hiera abswin, factor out commonly used vit weight init fns to layers. Add a channels-last ver of normmlp head. [trunc_normal_ sketch below] (2024-08-15 17:46:36 -07:00)
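
Usage sketches for the APIs called out in the commit messages above follow. First, forward_intermediates: a minimal sketch of how the API is typically called on a timm model, assuming the indices/intermediates_only arguments used by the ViT implementation; the hieradet/sam2 version may differ (e.g. the coarse flag the commit mentions), and vit_small_patch16_224 is used purely for illustration.

```python
# Sketch only: assumes the forward_intermediates() signature used by timm's ViT
# models; hieradet/sam2 may differ (e.g. the `coarse` flag added in this commit).
import torch
import timm

model = timm.create_model('vit_small_patch16_224', pretrained=False).eval()
x = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    # intermediates_only=True returns just the intermediate feature maps;
    # an int for indices selects the last n blocks.
    feats = model.forward_intermediates(x, indices=3, intermediates_only=True)

for f in feats:
    print(f.shape)  # NCHW feature maps, e.g. torch.Size([1, 384, 14, 14])
```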
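
layer_scale.py adds LayerScale/LayerScale2d. Below is a minimal sketch of the usual layer-scale formulation, a learnable per-channel gain initialised near zero; timm's classes may differ in argument names and defaults.

```python
import torch
import torch.nn as nn

class LayerScale(nn.Module):
    """Learnable per-channel scaling for channels-last tensors, e.g. (B, N, C) tokens."""
    def __init__(self, dim: int, init_values: float = 1e-5, inplace: bool = False):
        super().__init__()
        self.inplace = inplace
        self.gamma = nn.Parameter(init_values * torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x.mul_(self.gamma) if self.inplace else x * self.gamma

ls = LayerScale(384)
print(ls(torch.randn(2, 196, 384)).shape)  # torch.Size([2, 196, 384])
```

A LayerScale2d variant applies the same idea to NCHW tensors by broadcasting the gain over the spatial dimensions.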
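
conv_bn_act.py and selective_kernel.py record the merge of ConvNormActAa into ConvNormAct. A hedged usage sketch; the positional (in_channels, out_channels) order and the kernel_size/stride keywords are assumptions about the merged class's signature.

```python
# Assumption: ConvNormAct(in_channels, out_channels, kernel_size, ...) fuses
# conv + norm + activation, and after the merge also carries the optional
# anti-aliased downsampling that previously lived in ConvNormActAa.
import torch
from timm.layers import ConvNormAct

block = ConvNormAct(16, 32, kernel_size=3, stride=2)
print(block(torch.randn(1, 16, 56, 56)).shape)  # expected: torch.Size([1, 32, 28, 28])
```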
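
padding.py notes that the padding helpers now accept tuples/lists. A sketch, assuming get_padding keeps its (kernel_size, stride, dilation) signature and is importable from the padding.py module listed above.

```python
# Assumption: get_padding computes symmetric 'same'-style padding,
# ((stride - 1) + dilation * (kernel_size - 1)) // 2, applied per dimension
# when tuples/lists are passed.
from timm.layers.padding import get_padding

print(get_padding(3))                      # 1 for kernel_size=3, stride=1
print(get_padding((3, 5), stride=(1, 2)))  # per-dimension padding, e.g. (1, 2)
```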
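
pos_embed.py adds the position-embedding resize functions to the FX autowrap exceptions, so torch.fx symbolic tracing treats them as leaf calls rather than tracing through the interpolation. Below is a sketch of resizing an absolute position embedding, assuming resample_abs_pos_embed keeps its (posemb, new_size, num_prefix_tokens=...) signature.

```python
# Assumption: resample_abs_pos_embed interpolates the spatial grid of an absolute
# position embedding while leaving the prefix (class) tokens untouched.
import torch
from timm.layers import resample_abs_pos_embed

posemb = torch.randn(1, 1 + 14 * 14, 384)  # class token + 14x14 patch grid
resized = resample_abs_pos_embed(posemb, new_size=(16, 16), num_prefix_tokens=1)
print(resized.shape)  # expected: torch.Size([1, 257, 384]) for a 16x16 grid
```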
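
weight_init.py and __init__.py mention factoring the commonly used ViT weight-init functions out to layers. A sketch using trunc_normal_ from weight_init.py; the std=0.02 value is the conventional ViT default, not something stated in this listing.

```python
import torch.nn as nn
from timm.layers import trunc_normal_  # truncated-normal init from weight_init.py

linear = nn.Linear(384, 384)
trunc_normal_(linear.weight, std=0.02)  # ViT-style truncated-normal init
nn.init.zeros_(linear.bias)
```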