Update README.md

Remove references to github.io docs that died with repo move

branch: convnext_xxlarge_eps
parent 10627bbd49
commit 450b74affb

README.md — 18 changed lines
@@ -439,9 +439,7 @@ The work of many others is present here. I've tried to make sure all source mate
 ## Models
 
-All model architecture families include variants with pretrained weights. There are specific model variants without any weights, it is NOT a bug. Help training new or better weights is always appreciated. Here are some example [training hparams](https://rwightman.github.io/pytorch-image-models/training_hparam_examples) to get you started.
-
-A full version of the list below with source links can be found in the [documentation](https://rwightman.github.io/pytorch-image-models/models/).
+All model architecture families include variants with pretrained weights. There are specific model variants without any weights, it is NOT a bug. Help training new or better weights is always appreciated.
 
 * Aggregating Nested Transformers - https://arxiv.org/abs/2105.12723
 * BEiT - https://arxiv.org/abs/2106.08254
@@ -542,15 +540,15 @@ Several (less common) features that I often utilize in my projects are included.
 * All models have a common default configuration interface and API for
   * accessing/changing the classifier - `get_classifier` and `reset_classifier`
-  * doing a forward pass on just the features - `forward_features` (see [documentation](https://rwightman.github.io/pytorch-image-models/feature_extraction/))
+  * doing a forward pass on just the features - `forward_features` (see [documentation](https://huggingface.co/docs/timm/feature_extraction))
   * these make it easy to write consistent network wrappers that work with any of the models
-* All models support multi-scale feature map extraction (feature pyramids) via create_model (see [documentation](https://rwightman.github.io/pytorch-image-models/feature_extraction/))
+* All models support multi-scale feature map extraction (feature pyramids) via create_model (see [documentation](https://huggingface.co/docs/timm/feature_extraction))
   * `create_model(name, features_only=True, out_indices=..., output_stride=...)`
   * `out_indices` creation arg specifies which feature maps to return, these indices are 0 based and generally correspond to the `C(i + 1)` feature level.
   * `output_stride` creation arg controls output stride of the network by using dilated convolutions. Most networks are stride 32 by default. Not all networks support this.
   * feature map channel counts, reduction level (stride) can be queried AFTER model creation via the `.feature_info` member
 * All models have a consistent pretrained weight loader that adapts last linear if necessary, and from 3 to 1 channel input if desired
-* High performance [reference training, validation, and inference scripts](https://rwightman.github.io/pytorch-image-models/scripts/) that work in several process/GPU modes:
+* High performance [reference training, validation, and inference scripts](https://huggingface.co/docs/timm/training_script) that work in several process/GPU modes:
   * NVIDIA DDP w/ a single GPU per process, multiple processes with APEX present (AMP mixed-precision optional)
   * PyTorch DistributedDataParallel w/ multi-gpu, single process (AMP disabled as it crashes when enabled)
   * PyTorch w/ single GPU single process (AMP optional)
@@ -604,19 +602,17 @@ Model validation results can be found in the [results tables](results/README.md)
 ## Getting Started (Documentation)
 
-My current [documentation](https://rwightman.github.io/pytorch-image-models/) for `timm` covers the basics.
-
-Hugging Face [`timm` docs](https://huggingface.co/docs/hub/timm) will be the documentation focus going forward and will eventually replace the `github.io` docs above.
+The official documentation can be found at https://huggingface.co/docs/hub/timm. Documentation contributions are welcome.
 
 [Getting Started with PyTorch Image Models (timm): A Practitioner’s Guide](https://towardsdatascience.com/getting-started-with-pytorch-image-models-timm-a-practitioners-guide-4e77b4bf9055) by [Chris Hughes](https://github.com/Chris-hughes10) is an extensive blog post covering many aspects of `timm` in detail.
 
-[timmdocs](http://timm.fast.ai/) is quickly becoming a much more comprehensive set of documentation for `timm`. A big thanks to [Aman Arora](https://github.com/amaarora) for his efforts creating timmdocs.
+[timmdocs](http://timm.fast.ai/) is an alternate set of documentation for `timm`. A big thanks to [Aman Arora](https://github.com/amaarora) for his efforts creating timmdocs.
 
 [paperswithcode](https://paperswithcode.com/lib/timm) is a good resource for browsing the models within `timm`.
 
 ## Train, Validation, Inference Scripts
 
-The root folder of the repository contains reference train, validation, and inference scripts that work with the included models and other features of this repository. They are adaptable for other datasets and use cases with a little hacking. See [documentation](https://rwightman.github.io/pytorch-image-models/scripts/) for some basics and [training hparams](https://rwightman.github.io/pytorch-image-models/training_hparam_examples) for some train examples that produce SOTA ImageNet results.
+The root folder of the repository contains reference train, validation, and inference scripts that work with the included models and other features of this repository. They are adaptable for other datasets and use cases with a little hacking. See [documentation](https://huggingface.co/docs/timm/training_script).
 
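For context (not part of the diff), the reference scripts mentioned above live in the repository root and are typically invoked as shown below. The dataset path, model choice, and hyper-parameters are illustrative assumptions, not prescribed values:

```shell
# Single-GPU training run with train.py from the repo root
python train.py /data/imagenet --model resnet50 --batch-size 128 --amp

# Multi-GPU training via the distributed_train.sh wrapper (first arg = GPU count)
./distributed_train.sh 4 /data/imagenet --model resnet50 --batch-size 128 --amp

# Validation with a trained checkpoint (path is hypothetical)
python validate.py /data/imagenet --model resnet50 --checkpoint path/to/model_best.pth.tar
```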
## Awesome PyTorch Resources