Commit Graph

132 Commits (a2f539f0552a9958a1960c8f5079a8b3782eb803)

Author SHA1 Message Date
Ross Wightman cec70b6779
Merge pull request #2225 from huggingface/small_things
Small things
2024-07-25 20:29:13 -07:00
Ross Wightman 34c9fee554 Fix pass through of input / target keys to ImageDataset readers so args work with hfds instead of just hfids (iterable) 2024-07-17 10:11:46 -07:00
Ross Wightman 83c2c2f0c5 Add 'Maybe' PIL / image tensor conversions in case image already in tensor format 2024-07-08 13:43:51 -07:00
Ross Wightman 3bfd036b58 Add normalize flag to transforms factory, allow return of non-normalized native dtype torch.Tensors 2024-05-13 15:23:25 -07:00
Ross Wightman 286d941923 Add teddy-bear class back to first 1000 classes of imagenet22k_ms_synsets (index 851) 2024-04-09 09:33:08 -07:00
Ross Wightman 7d121ac2ef Small tweak of timm ToTensor for clarity 2024-02-10 14:57:40 -08:00
Ross Wightman 809a9e14e2 Pass train-crop-mode to create_loader/transforms from train.py args 2024-01-24 16:19:02 -08:00
Ross Wightman 2eac2f6955 Fiddling with iterator wrapping for HF ds streaming 2024-01-09 12:41:54 -08:00
Ross Wightman be0944edae Significant transforms, dataset, dataloading enhancements. 2024-01-08 09:38:42 -08:00
Ross Wightman 40d55ab4bc Add `in_chans` to data config helper. Fix #2021 2023-11-23 12:44:59 -08:00
Ruslan Baikulov 158bf129c4 Replace deprecated NumPy aliases of builtin types 2023-07-03 22:24:25 +03:00
Ross Wightman 8ce9a2c00a
Merge pull request #1222 from Leoooo333/master
Fix mixup/one_hot device problem
2023-05-10 08:59:15 -07:00
Ross Wightman fd592ec86c Fix an issue with FastCollateMixup still using device 2023-05-10 08:55:38 -07:00
Ross Wightman f4825a09ef
Merge pull request #212 from bryant1410/patch-1
Fix MultiEpochsDataLoader when there's no batching
2023-04-20 07:09:27 -07:00
Ross Wightman 9fcfb8bcc1 Add Microsoft FocalNet specific ('ms') ImageNet-22k classifier layout 2023-03-18 14:57:34 -07:00
Ross Wightman c30a160d3e Merge remote-tracking branch 'origin/main' into focalnet_and_swin_refactor 2023-03-15 15:58:39 -07:00
Ross Wightman 3a636eee71 Fix #1713 missed assignment in 3-aug level fn, fix few other minor lint complaints in auto_augment.py 2023-03-11 14:32:23 -08:00
Ross Wightman 7266c5c716 Merge branch 'main' into focalnet_and_swin_refactor 2023-02-17 09:20:14 -08:00
Ross Wightman 9c14654a0d Improve support for custom dataset label name/description through HF hub export, via pretrained_cfg 2023-02-08 08:29:20 -08:00
Ross Wightman 0f2803de7a Move ImageNet metadata (aka info) files to timm/data/_info. Add helper classes to make info available for labelling. Update inference.py for first use. 2023-02-06 17:45:03 -08:00
Ross Wightman e9f1376cde Cleanup resolve data config fns, add 'model' variant that takes model as first arg, make 'args' arg optional in original fn 2023-01-20 14:47:55 -08:00
Ross Wightman c061d5e401 Allow using class_map functionality w/ IterableDataset (TFDS/WDS) to remap class labels 2023-01-09 16:28:47 -08:00
Ross Wightman d5aa17e415 Remove print from auto_augment 2022-12-28 17:11:35 -08:00
Ross Wightman d1bfa9a000 Support HF datasets and TFDS w/ a sub-path by fixing split, fix #1598 ... add class mapping support to HF datasets in case class label isn't in info. 2022-12-22 21:34:13 -08:00
Ross Wightman e7da205345 Fix aa min_max level clamp 2022-12-10 16:43:28 -08:00
Ross Wightman e3b2f5be0a Add 3-Augment support to auto_augment.py, clean up weighted choice handling, and allow adjust per op prob via arg string 2022-12-10 16:25:50 -08:00
Ross Wightman 927f031293 Major module / path restructure, timm.models.layers -> timm.layers, add _ prefix to all non model modules in timm.models 2022-12-06 15:00:06 -08:00
Ross Wightman 3db4e346e0 Switch TFDS dataset to use INTEGER_ACCURATE jpeg decode by default 2022-12-05 10:21:34 -08:00
Ross Wightman 9da7e3a799 Add crop_mode for pretrained config / image transforms. Add support for dynamo compilation to benchmark/train/validate 2022-12-05 10:21:34 -08:00
Ross Wightman 0dadb4a6e9 Initial multi-weight support, handled so old pretrained config handling co-exists with new tags. 2022-12-05 10:21:34 -08:00
hongxin xiang 653bdc7105 Fix comment: https://github.com/rwightman/pytorch-image-models/pull/1564#issuecomment-1326743424 2022-11-25 09:52:52 +08:00
hongxin xiang bdc9fad638 Fix compatibility bug: QMNIST and ImageNet datasets do not exist in torchvision 0.10.1. 2022-11-24 14:37:44 +08:00
Ross Wightman 475ecdfa3d cast env var args for dataset readers to int 2022-10-17 14:40:11 -07:00
Ross Wightman 66f4af7090 Merge remote-tracking branch 'origin/master' into script_cleanup 2022-10-14 15:54:00 -07:00
Ross Wightman d3961536c9 comment some debug logs for WDS dataset 2022-10-14 15:39:00 -07:00
Ross Wightman e9dccc918c Rename dataset/parsers -> dataset/readers, create_parser to create_reader, etc 2022-10-14 15:14:38 -07:00
Ross Wightman f67a7ee8bd Set num_workers in Iterable WDS/TFDS datasets early so sample estimate is correct 2022-10-11 15:11:18 -07:00
Ross Wightman 4f18d6dc5f Fix logs in WDS parser 2022-10-07 10:06:17 -07:00
Ross Wightman b8c8550841 Data improvements. Improve train support for in_chans != 3. Add wds dataset support from bits_and_tpu branch w/ fixes and tweaks. TFDS tweaks. 2022-09-29 16:42:58 -07:00
Alex Fafard 7327792f39 update to support pickle based dictionaries 2022-09-27 11:13:48 -04:00
Ross Wightman 87939e6fab Refactor device handling in scripts, distributed init to be less 'cuda' centric. More device args passed through where needed. 2022-09-23 16:08:59 -07:00
Ross Wightman c88947ad3d Add initial Hugging Face Datasets parser impl. 2022-09-23 16:08:19 -07:00
Ross Wightman e069249a2d Add hf hub entries for laion2b clip models, add huggingface_hub dependency, update some setup/reqs, torch >= 1.7 2022-09-16 21:39:05 -07:00
Ross Wightman 9be0c84715 Change set -> dict w/ None keys for dataset split synonym search, so always consistent if more than 1 exists. Fix #1224 2022-07-07 15:33:53 -07:00
Ross Wightman bfc0dccb0e Improve image extension handling, add methods to modify / get defaults. Fix #1335 fix #1274. 2022-07-07 14:23:20 -07:00
Ross Wightman 70d6d2c484 support test_crop_size in data config resolve 2022-07-02 15:17:05 -07:00
Ross Wightman 7cedc8d474 Follow up to #1256, fix interpolation warning in auto_augment as well 2022-06-21 14:56:53 -07:00
Jakub Kaczmarzyk db64393c0d
use `Image.Resampling` namespace for PIL mapping (#1256)
* use `Image.Resampling` namespace for PIL mapping

PIL shows a deprecation warning when accessing resampling constants via the `Image` namespace. The suggested namespace is `Image.Resampling`. This commit updates `_pil_interpolation_to_str` to use the `Image.Resampling` namespace.

```
/tmp/ipykernel_11959/698124036.py:2: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
  Image.NEAREST: 'nearest',
/tmp/ipykernel_11959/698124036.py:3: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
  Image.BILINEAR: 'bilinear',
/tmp/ipykernel_11959/698124036.py:4: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
  Image.BICUBIC: 'bicubic',
/tmp/ipykernel_11959/698124036.py:5: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
  Image.BOX: 'box',
/tmp/ipykernel_11959/698124036.py:6: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
  Image.HAMMING: 'hamming',
/tmp/ipykernel_11959/698124036.py:7: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
  Image.LANCZOS: 'lanczos',
```

* use new pillow resampling enum only if it exists
2022-06-12 22:30:57 -07:00
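
A minimal sketch of the fallback pattern described in the entry above, assuming only that newer Pillow exposes the `Image.Resampling` enum while older versions do not; the exact mapping in timm's `_pil_interpolation_to_str` may differ:

```python
from PIL import Image

# Prefer the Image.Resampling enum (Pillow >= 9.1); fall back to the legacy
# module-level constants on older Pillow versions where the enum is absent.
_RESAMPLE = Image.Resampling if hasattr(Image, 'Resampling') else Image

_pil_interpolation_to_str = {
    _RESAMPLE.NEAREST: 'nearest',
    _RESAMPLE.BILINEAR: 'bilinear',
    _RESAMPLE.BICUBIC: 'bicubic',
    _RESAMPLE.BOX: 'box',
    _RESAMPLE.HAMMING: 'hamming',
    _RESAMPLE.LANCZOS: 'lanczos',
}
```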
Junming Chen 569d114ef7
Fix device problem
Before, one_hot could only run on device='cuda'. Now it runs on the input's device automatically.
2022-04-19 11:53:18 +08:00
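
A minimal sketch of the device-agnostic pattern this fix describes (the helper below is illustrative, not timm's exact implementation):

```python
import torch

def one_hot(x, num_classes, on_value=1., off_value=0.):
    # Build the one-hot tensor on the same device as the input labels rather
    # than hard-coding device='cuda', so CPU (or MPS) runs work as well.
    x = x.long().view(-1, 1)
    out = torch.full((x.size(0), num_classes), off_value, device=x.device)
    return out.scatter_(1, x, on_value)
```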
Ross Wightman 372ad5fa0d Significant model refactor and additions:
* All models updated with revised forward_features / forward_head interface
* Vision transformer and MLP based models consistently output sequence from forward_features (pooling or token selection considered part of 'head')
* WIP param grouping interface to allow consistent grouping of parameters for layer-wise decay across all model types
* Add gradient checkpointing support to a significant % of models, especially popular architectures
* Formatting and interface consistency improvements across models
* layer-wise LR decay impl part of optimizer factory w/ scale support in scheduler
* Poolformer and Volo architectures added
2022-02-28 13:56:23 -08:00
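
For reference, a brief usage sketch of the split forward_features / forward_head interface described above (model name and shapes are illustrative assumptions):

```python
import timm
import torch

# forward_features returns the backbone output without pooling/classification;
# forward_head applies the pooling + classifier 'head' on top of it.
model = timm.create_model('resnet50', pretrained=False)
x = torch.randn(1, 3, 224, 224)

features = model.forward_features(x)                    # unpooled feature map
logits = model.forward_head(features)                   # classifier output
pooled = model.forward_head(features, pre_logits=True)  # pooled, pre-classifier features
```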