Commit Graph

1 Commit (41c3d6ff4df7c5e0ba2ecfa2dbfa58575c2856c1)

Author: liaoxingyu
SHA1: 3d1bae9f13
Message: fix triplet loss backward propagation on multi-gpu training (#82)
Summary: `torch.distributed.all_gather` has no gradient: the tensors it gathers are detached from the autograd graph, so gradients do not flow back through the gather during backward propagation. The fix performs the gather through `GatherLayer` instead, which preserves the gradient path.
Date: 2020-09-28 17:16:51 +08:00
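
For context, the usual way to make `all_gather` differentiable is a custom `torch.autograd.Function` that gathers tensors in the forward pass and sums the incoming gradients across processes in the backward pass, returning each rank the slice that belongs to it. Below is a minimal sketch of that pattern; the class name `GatherLayer` comes from the commit message, but the body and the usage line are assumptions about the common implementation, not the repository's exact code.

```python
import torch
import torch.distributed as dist


class GatherLayer(torch.autograd.Function):
    """All-gather with a gradient path (sketch).

    torch.distributed.all_gather returns tensors detached from the
    autograd graph, so a loss computed on the gathered tensors sends
    no gradient back to the other processes' inputs. This Function
    restores that gradient path.
    """

    @staticmethod
    def forward(ctx, inp):
        # Collect a copy of `inp` from every process.
        gathered = [torch.zeros_like(inp) for _ in range(dist.get_world_size())]
        dist.all_gather(gathered, inp)
        return tuple(gathered)

    @staticmethod
    def backward(ctx, *grads):
        # Each rank receives gradients for all gathered slices; sum them
        # across ranks, then hand back the slice owned by this rank.
        all_grads = torch.stack(grads)
        dist.all_reduce(all_grads)  # defaults to SUM
        return all_grads[dist.get_rank()]


# Hypothetical usage inside a triplet loss: build the global feature
# matrix while keeping gradients flowing to the local `features`.
# all_features = torch.cat(GatherLayer.apply(features), dim=0)
```

With this in place, the triplet loss can mine pairs across the whole multi-GPU batch while each GPU still receives gradients for its own features, which is what the plain `all_gather` call broke.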