* add test that model supports forward_head(x, pre_logits=True)
* add head_hidden_size attr to all models, set differently from the num_features attr when the head has hidden layers
* test that forward_features() feat dim == model.num_features and pre_logits feat dim == model.head_hidden_size (see the sketch after this block)
* more consistency in reset_classifier signature, add typing
* add asserts in some heads where pooling cannot be disabled
Fix #2194
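A minimal sketch of the head consistency checks described in the bullets above, assuming a model created via timm.create_model ('resnet50' is just an example name):

```python
import timm
import torch

# Sketch only: verify forward_head(pre_logits=True) and the new head_hidden_size attr.
model = timm.create_model('resnet50', num_classes=10)
model.eval()

x = torch.randn(2, 3, 224, 224)
feats = model.forward_features(x)                         # unpooled features
pre_logits = model.forward_head(feats, pre_logits=True)   # pooled, pre-logit features

# forward_features() channel dim should match num_features ...
assert feats.shape[1] == model.num_features
# ... while the pre_logits width matches head_hidden_size, which only differs from
# num_features when the head has hidden layers (e.g. a pre-logit MLP).
assert pre_logits.shape[-1] == model.head_hidden_size
```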
* add convnext, resnet, efficientformer, levit support
* remove keyword-only args from the fn so that torchscript isn't broken for all :(
* use reset_classifier() consistently in prune (sketch after this block)
* include relpos vit
* refactor reduction / size calcs so hybrid vits work and dynamic_img_size works
* fix negative feature indices when pruning
* fix mvitv2 w/ class token
* refine naming
* add tests
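As referenced above, a hedged sketch of pruning the head via reset_classifier(), assuming the usual timm pattern where num_classes=0 removes the classifier ('convnext_tiny' is just an example name):

```python
import timm
import torch

# Sketch only: prune the classifier head via reset_classifier() rather than
# touching head modules directly.
model = timm.create_model('convnext_tiny')
model.reset_classifier(num_classes=0)  # drop the classifier, keep pooling

x = torch.randn(1, 3, 224, 224)
out = model(x)
# With the classifier removed, the output is the pooled pre-logit features.
print(out.shape)
```

Passing global_pool='' additionally disables pooling where the head supports it; heads that cannot disable pooling now assert instead of silently ignoring the request.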
* Update optim test to remove Variable/.data and fix _state_dict optim test
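A hedged sketch of the modernized optim-test pattern: plain tensors with requires_grad instead of the deprecated Variable/.data idiom, plus a state_dict round trip. The optimizer choice and shapes are illustrative:

```python
import torch

# A leaf tensor with requires_grad replaces the old Variable(...) wrapper.
weight = torch.randn(4, 3, requires_grad=True)
optimizer = torch.optim.SGD([weight], lr=0.1, momentum=0.9)

# Take a step so the optimizer has state worth serializing.
(weight ** 2).sum().backward()
optimizer.step()

# Round-trip the state through state_dict()/load_state_dict() and compare.
state = optimizer.state_dict()
reloaded = torch.optim.SGD([weight], lr=0.1, momentum=0.9)
reloaded.load_state_dict(state)
assert reloaded.state_dict()['param_groups'] == state['param_groups']
```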
* Attempt to run Python 3.11 w/ 2.1
* Try factoring out testmarker to common var
* More fiddling
* Abandon attempt to reduce redundancy
* Another try
* update ClassifierHead to allow different input format
* add output format support to patch embed
* fix some flatten issues for a few conv head models
* add Format enum and helpers for tensor format (layout) choices
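A minimal sketch of the tensor format (layout) idea in the bullets above; the enum values mirror common layouts, but the helper name here is illustrative rather than the exact timm API:

```python
from enum import Enum

import torch


class Format(str, Enum):
    # Layout choices for 4D image tensors and 3D sequence tensors.
    NCHW = 'NCHW'
    NHWC = 'NHWC'
    NCL = 'NCL'
    NLC = 'NLC'


def nchw_to(x: torch.Tensor, fmt: Format) -> torch.Tensor:
    # Convert an NCHW tensor to the requested layout.
    if fmt == Format.NHWC:
        return x.permute(0, 2, 3, 1)
    if fmt == Format.NLC:
        return x.flatten(2).transpose(1, 2)  # (N, H*W, C), typical patch embed output
    if fmt == Format.NCL:
        return x.flatten(2)                  # (N, C, H*W)
    return x  # already NCHW


x = torch.randn(1, 96, 14, 14)
print(nchw_to(x, Format.NLC).shape)  # torch.Size([1, 196, 96])
```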
* Split CI tests to run them in parallel
The idea of this PR is to split tests into multiple sets that can be run
in parallel by GH. For this, all tests in test_models.py that would run
on GH get a pytest marker. The GH workflow matrix is factored so that each
job runs only a single marker. That way, only a subset of tests runs per
worker, leading to quicker results.
There is also a worker that runs all the tests that are not inside
test_models.py (see the marker sketch below).
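A hedged sketch of the marker approach; the marker name 'base' and the model name are illustrative, not necessarily the exact ones used in the workflow:

```python
import pytest
import timm
import torch


# Tests that should run on GH carry a marker; each CI matrix job then selects
# a single marker, e.g.:  pytest -m base tests/test_models.py
@pytest.mark.base
@pytest.mark.parametrize('model_name', ['resnet18'])
def test_model_forward(model_name):
    model = timm.create_model(model_name, num_classes=10)
    model.eval()
    out = model(torch.randn(1, 3, 224, 224))
    assert out.shape == (1, 10)
```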
* [skip ci] empty commit to abort ci
* Fix typo in marker name
* Split fx into forward and backward
* Comment out test coverage for now
Checking whether it's responsible for the regression in CI runtime.
* Remove pytest cov completely from requirements
* Remove cov call in pyproject.toml
Missed that one.