fast-reid/fastreid
liaoxingyu 3d1bae9f13 fix triplet loss backward propagation on multi-gpu training (#82)
Summary: `torch.distributed.all_gather` does not propagate gradients to the gathered tensors, so the triplet loss computed on features collected across GPUs received no gradient from the other ranks; the raw `all_gather` call is replaced with `GatherLayer`, which preserves gradients in the backward pass.
2020-09-28 17:16:51 +08:00
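Below is a minimal sketch of the usual `GatherLayer` pattern referenced in the commit summary, as popularized in distributed contrastive-learning codebases; the exact class in `fastreid` may differ in module location and details. The idea: wrap `all_gather` in a `torch.autograd.Function` whose backward routes each rank's gradient slice back to the rank that produced it.

```python
import torch
import torch.distributed as dist


class GatherLayer(torch.autograd.Function):
    """All-gather with a backward pass.

    Plain dist.all_gather returns tensors detached from the autograd
    graph, so gradients contributed by other ranks would be lost.
    This Function gathers activations in forward and, in backward,
    sums the incoming gradients across ranks and returns the slice
    belonging to this rank's input.
    """

    @staticmethod
    def forward(ctx, input):
        # Gather a copy of `input` from every rank.
        output = [torch.zeros_like(input) for _ in range(dist.get_world_size())]
        dist.all_gather(output, input)
        return tuple(output)

    @staticmethod
    def backward(ctx, *grads):
        # Each rank holds the gradient w.r.t. every gathered tensor;
        # reduce them so every rank sees the full gradient, then keep
        # the slice corresponding to this rank's own input.
        all_gradients = torch.stack(grads)
        dist.all_reduce(all_gradients)
        return all_gradients[dist.get_rank()]
```

In a triplet loss, the per-GPU features would then be gathered with something like `all_feats = torch.cat(GatherLayer.apply(features), dim=0)` before computing pairwise distances, so each rank's loss sees the whole global batch while gradients still flow back to the rank that produced each feature.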
| Directory / File | Last commit | Date |
| --- | --- | --- |
| config | refactor reid head | 2020-09-10 10:57:37 +08:00 |
| data | support finetuning from trained models | 2020-09-28 17:10:10 +08:00 |
| engine | support finetuning from trained models | 2020-09-28 17:10:10 +08:00 |
| evaluation | refactor evaluation code | 2020-09-23 19:32:40 +08:00 |
| layers | fix triplet loss backward propagation on multi-gpu training (#82) | 2020-09-28 17:16:51 +08:00 |
| modeling | fix triplet loss backward propagation on multi-gpu training (#82) | 2020-09-28 17:16:51 +08:00 |
| solver | refactor evaluation code | 2020-09-23 19:32:40 +08:00 |
| utils | fix typo | 2020-09-10 11:04:59 +08:00 |
| __init__.py | feat: add SyncBN and GroupNorm support | 2020-05-14 11:36:28 +08:00 |