Commit Graph

1 commit (d1f8dbb30893d406b19ecc1e45f63c037bd5ec72)

Author: liaoxingyu
SHA1: 3d1bae9f13
Date: 2020-09-28 17:16:51 +08:00
Message: fix triplet loss backward propagation on multi-gpu training (#82)
Summary: `torch.distributed.all_gather` does not propagate gradients through the gathered tensors, which broke the triplet loss backward pass on multi-GPU training; replace it with `GatherLayer`, a differentiable all-gather.
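The `GatherLayer` referenced by the commit is a common pattern: a custom `torch.autograd.Function` whose forward calls `all_gather` and whose backward scatters the incoming gradients back to each rank. The actual implementation in the repository may differ; the following is a minimal sketch of the technique, assuming PyTorch's `torch.distributed` API:

```python
import torch
import torch.distributed as dist


class GatherLayer(torch.autograd.Function):
    """All-gather that keeps the autograd graph intact.

    torch.distributed.all_gather returns plain tensors with no grad history,
    so gradients from other ranks never flow back to the local input. This
    wrapper restores that flow: backward all-reduces the per-rank gradients
    and hands each rank the slice that belongs to its own input.
    """

    @staticmethod
    def forward(ctx, x):
        # Gather a copy of `x` from every rank; outputs carry no grad_fn,
        # which is exactly why a custom backward is needed.
        out = [torch.zeros_like(x) for _ in range(dist.get_world_size())]
        dist.all_gather(out, x)
        return tuple(out)

    @staticmethod
    def backward(ctx, *grads):
        # One incoming gradient per gathered output; sum them across ranks
        # and return the piece corresponding to this rank's input tensor.
        all_grads = torch.stack(grads)
        dist.all_reduce(all_grads)
        return all_grads[dist.get_rank()]
```

A typical call site would be `features = torch.cat(GatherLayer.apply(local_features))` before computing the triplet loss, so the loss sees embeddings from all GPUs while gradients still reach the local encoder.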