6 Commits

Author SHA1 Message Date
Francisco Massa
8eae3269da
Add Knowledge-Distillation (#42)
* Add knowledge distillation

* Bugfix

* Bugfix

* Make names more readable and use single torch.cat call

* Remove criterion.train() in engine

The teacher should stay in eval mode

* Change default argument for teacher-model

* Return the average of classifiers during inference

* Cleanup unused code

* Add docstring for DistillationLoss

* Remove warnings from newer PyTorch

Also uses a more numerically stable variant: instead of applying softmax followed by log, use log_softmax directly (a sketch follows this entry)
2021-01-13 14:19:23 +01:00
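
A rough illustration of the approach this commit describes: the sketch below combines a base criterion with a soft distillation term, keeps the teacher out of the gradient path, and uses log_softmax directly rather than softmax followed by log. The class name, arguments, and defaults are illustrative assumptions, not the repository's actual DistillationLoss API.

import torch
import torch.nn.functional as F


class SoftDistillationLoss(torch.nn.Module):
    """Illustrative soft knowledge-distillation loss (not the repo's exact API)."""

    def __init__(self, base_criterion, teacher_model, alpha=0.5, tau=1.0):
        super().__init__()
        self.base_criterion = base_criterion
        self.teacher_model = teacher_model  # caller keeps the teacher in eval mode
        self.alpha = alpha
        self.tau = tau

    def forward(self, inputs, student_logits, labels):
        # Teacher predictions are computed without gradients; the teacher
        # itself should stay in eval mode (see the commit note above).
        with torch.no_grad():
            teacher_logits = self.teacher_model(inputs)

        T = self.tau
        # log_softmax is used directly for numerical stability instead of
        # softmax followed by log; log_target=True tells kl_div that the
        # target distribution is already in log space.
        distill_loss = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.log_softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
            log_target=True,
        ) * (T * T)

        base_loss = self.base_criterion(student_logits, labels)
        return (1.0 - self.alpha) * base_loss + self.alpha * distill_loss

The "average of classifiers during inference" bullet suggests that at eval time the model returns the mean of its two head outputs, for example (x + x_dist) / 2; the exact head names in the repository may differ.
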
Changlin Li
b06fcba6bf
Support parallelized evaluation (#24)
* support parallelized evaluation

* remove the shuffle arg from the val loader, add a val sampler in the non-dist branch

* replace timm eval sampler with torch sampler

* add logger synchronization to support parallelized evaluation

* add command-line argument dist-eval and a warning (a sampler sketch follows this entry)
2021-01-08 11:05:39 +01:00
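
This commit replaces the timm eval sampler with a torch sampler and adds a dist-eval flag plus a warning. Below is a minimal sketch of that sampler choice under standard torch.distributed assumptions; the function name, argument names, and warning wording are assumptions, not the repository's exact code.

import warnings
from torch.utils.data import DataLoader, DistributedSampler, SequentialSampler


def build_val_loader(dataset_val, batch_size, num_workers,
                     distributed, dist_eval, num_tasks, global_rank):
    """Illustrative sketch of the sampler choice behind a dist-eval flag."""
    if distributed and dist_eval:
        if len(dataset_val) % num_tasks != 0:
            warnings.warn(
                "Distributed evaluation with a val set not divisible by the "
                "number of processes adds duplicate samples, which can "
                "slightly change the reported accuracy."
            )
        # One shard of the validation set per process, no shuffling.
        sampler_val = DistributedSampler(
            dataset_val, num_replicas=num_tasks, rank=global_rank, shuffle=False
        )
    else:
        # Non-distributed branch: a plain sequential (torch) sampler.
        sampler_val = SequentialSampler(dataset_val)

    return DataLoader(
        dataset_val,
        sampler=sampler_val,
        batch_size=batch_size,
        num_workers=num_workers,
        drop_last=False,
    )

With such a loader, each process evaluates only its own shard, and the per-process metrics are then synchronized across processes (the "logger synchronizing" bullet) before being reported.
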
Francisco Massa
0c4b8f60bd
Change LICENSE to Apache 2.0
2021-01-08 10:51:58 +01:00
Francisco Massa
dcd888df20
Update default data-path (#25)
Points to the new valid location
2021-01-07 22:18:54 +01:00
Zhiyuan Chen
0282d2a175
Remove drop block args (#23)
timm's Vision Transformer does not support drop block
2021-01-07 17:28:41 +01:00
Francisco Massa
1d38fa4c37
Initial commit
2020-12-23 10:47:58 -08:00