* 1. Add ZSKT algorithm with `zskt_generator` and `at_loss`. 2. Add `teacher_detach` in `kl_divergence`.
* 1. Amend README. 2. Fix UT bugs in `test_graph` and `test_distill`.
* 1. Amend docstring of `zskt_generator`.
* 1. Add torch version check in `test_distillation_loss`.
* 1. Change default `batch_size` to 1 in generators. 2. Rename `mmcls.data` to `mmcls.structures`.
* 1. Rename function `at` to `calc_attention_matrix`.
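The `calc_attention_matrix` / `at_loss` pair named above follows the usual attention-transfer recipe: collapse each feature map to a normalized spatial attention map, then penalize the squared difference between student and teacher maps. Below is a minimal NumPy sketch of that recipe; the function names match the changelog, but the exact shapes, normalization, and reduction are assumptions, not the repository's implementation.

```python
import numpy as np

def calc_attention_matrix(feat):
    """Collapse an (N, C, H, W) feature map into an (N, H*W) spatial
    attention map: mean of squared activations over channels,
    L2-normalized per sample (a common attention-transfer choice)."""
    n = feat.shape[0]
    att = (feat ** 2).mean(axis=1).reshape(n, -1)
    norm = np.linalg.norm(att, axis=1, keepdims=True)
    return att / (norm + 1e-12)  # epsilon guards against all-zero maps

def at_loss(student_feat, teacher_feat):
    """Mean squared difference between student and teacher attention maps."""
    s = calc_attention_matrix(student_feat)
    t = calc_attention_matrix(teacher_feat)
    return ((s - t) ** 2).mean()

rng = np.random.default_rng(0)
s = rng.normal(size=(2, 4, 8, 8))
t = rng.normal(size=(2, 4, 8, 8))
print(at_loss(s, s))  # identical features give exactly zero loss
print(at_loss(s, t))  # differing features give a positive loss
```

In the zero-shot setting the generator's images are fed to both networks and this loss (plus a KL term, with the teacher's logits detached) drives the student toward the teacher's attention patterns.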
test_classical_models
__init__.py
test_channel_mutator.py
test_diff_mutator.py
test_one_shot_mutator.py
utils.py