Commit Graph

88 Commits (fdaa4b1a84135c9ab447d19406dd8f86e320338f)

Author SHA1 Message Date
liaoxingyu 7e83d3175f update README.md
Summary: add information about fastreid V1.0
2021-01-18 11:44:55 +08:00
liaoxingyu 15e1729a27 update fastreid V1.0 2021-01-18 11:36:38 +08:00
liaoxingyu c49414bb9f fix SE layer of basicblock in resnet (#375)
Summary: the SE layer was defined in `__init__` but never used in `forward`

close #375
2021-01-04 10:46:54 +08:00
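The bug behind #375 can be illustrated with a minimal sketch (a hypothetical block, not fastreid's actual resnet code): an SE branch that is only built in `__init__` contributes nothing unless `forward` actually applies it.

```python
import torch
import torch.nn as nn

class SEBasicBlock(nn.Module):
    """Illustrative residual block with a squeeze-and-excitation branch.
    Names and layer choices are assumptions for the sketch."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        # SE branch: global pool -> bottleneck -> channel-wise gates
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x
        out = self.bn(self.conv(x))
        # the fix: actually apply the SE gates in forward(),
        # instead of only constructing self.se in __init__()
        out = out * self.se(out)
        return self.relu(out + identity)
```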
liaoxingyu 766f309d03 feat: update pairwise cosface and pairwise circle loss 2020-12-22 15:51:49 +08:00
liaoxingyu a327a70f0d v0.3 update
Summary:
1. change DDP training to the apex implementation;
2. make the warmup scheduler step by iter and the lr scheduler step by epoch;
3. replace random erasing with the torchvision implementation;
4. naming modifications in the config file
2020-12-07 14:19:20 +08:00
liaoxingyu f496193f17 change cross_entropy_loss input name
Summary: rename `pred_class_logits` to `pred_class_outputs` to avoid confusion. (#318)

close #318
2020-11-06 14:16:31 +08:00
liaoxingyu2 a00e50d37f fix triplet ddp training
Summary: fix precision alignment when computing triplet loss with ddp
2020-11-06 11:01:10 +08:00
liaoxingyu2 42cadaeebc update backbone and config
Summary: adapt the resnet backbone for caffe export; modify the effnet loading keyword
2020-11-06 10:58:38 +08:00
liaoxingyu 3d1bae9f13 fix triplet loss backward propagation on multi-gpu training (#82)
Summary: `torch.distributed.all_gather` provides no gradient for the gathered tensors, so use `GatherLayer` instead.
2020-09-28 17:16:51 +08:00
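A minimal `GatherLayer` along the lines this commit describes could look like the following sketch (details are illustrative, not fastreid's exact code): `torch.distributed.all_gather` is not differentiable, so the backward pass hands each rank back its own slice of the incoming gradient.

```python
import torch
import torch.distributed as dist

class GatherLayer(torch.autograd.Function):
    """Sketch of an all_gather that keeps gradients flowing.

    Forward gathers x from every rank; backward returns the gradient
    slice belonging to the local rank, which all_gather alone drops.
    """

    @staticmethod
    def forward(ctx, x):
        out = [torch.zeros_like(x) for _ in range(dist.get_world_size())]
        dist.all_gather(out, x)
        return tuple(out)

    @staticmethod
    def backward(ctx, *grads):
        # each rank keeps only the gradient of its own input tensor
        return grads[dist.get_rank()].contiguous()
```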
liaoxingyu ea06ead9e3 fix problem when converting to caffe model (#261)
Summary: caffe cannot support normalization (subtracting the mean and dividing by the std) inside the network, so apply the normalization operators in-place as preprocessing, which are not emitted when converting to caffe.

close #261
2020-09-25 11:26:20 +08:00
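The workaround can be sketched as follows (a hypothetical helper, assuming the mean/std tensors are broadcastable over the batch): normalization runs in-place on the input batch before the network, so no sub/div ops appear in the exported graph.

```python
import torch

def normalize_inplace(batch, pixel_mean, pixel_std):
    """Sketch of the #261 workaround: normalize as an in-place
    preprocessing step outside the traced network, leaving no
    normalization ops behind for the caffe converter to choke on."""
    batch.sub_(pixel_mean).div_(pixel_std)
    return batch
```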
liaoxingyu 5dfe537515 update attribute project 2020-09-23 19:45:13 +08:00
liaoxingyu 2f29228086 fix regnet cfg problem 2020-09-23 14:41:44 +08:00
liaoxingyu df823da09a fix typo in bce loss
close #273
2020-09-18 10:39:10 +08:00
liaoxingyu d9a63a959f fix bug in mgn #272
Summary: fix get_norm bug in mgn
2020-09-17 18:17:53 +08:00
liaoxingyu 648198e6e5 add efficientnet support 2020-09-10 11:04:52 +08:00
liaoxingyu 1b84348619 remove `num_splits` in batchnorm
Summary: `num_splits` works for GhostBN, but it's very uncommon
2020-09-10 11:01:07 +08:00
liaoxingyu 4d573b8107 refactor reid head
Summary: merge BNneckHead, LinearHead and ReductionHead into EmbeddingHead,
because they are highly similar; this also prepares for ClsHead
2020-09-10 10:57:37 +08:00
liaoxingyu aa5c422606 fix pair-wise circle loss
fix #252
2020-09-09 15:28:52 +08:00
liaoxingyu 53fed7451d feat: support amp training
Summary: support automatic mixed precision training #217
2020-09-02 18:03:12 +08:00
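A generic AMP training step of the kind this commit enables (#217) might look like the sketch below, using `torch.cuda.amp`; the function and argument names are illustrative, not fastreid's API.

```python
import torch

def amp_train_step(model, optimizer, scaler, images, targets, loss_fn, enabled=True):
    """One mixed-precision training step (illustrative sketch).

    autocast runs the forward pass in reduced precision where safe;
    GradScaler scales the loss to avoid fp16 gradient underflow, then
    unscales before the optimizer step and skips the step on inf/nan.
    """
    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=enabled):
        loss = loss_fn(model(images), targets)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()
```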
liaoxingyu d00ce8fc3c refactor model arch 2020-09-01 16:14:45 +08:00
liaoxingyu f4305b0964 fix bnneck 2020-08-20 16:21:14 +08:00
liaoxingyu ac8409a7da updating for pytorch1.6 2020-08-20 15:51:41 +08:00
liaoxingyu 9c667d1a0f add pretrain_path support in backbone 2020-08-14 14:00:26 +08:00
liaoxingyu db6b42da4f update resnest url 2020-08-10 14:18:00 +08:00
liaoxingyu d1c20cbe50 fix pre-train model bugs
fix lock bugs when downloading the pre-trained model
2020-08-04 15:56:36 +08:00
liaoxingyu 35794621cc remove addmm warning 2020-07-31 16:32:10 +08:00
liaoxingyu 2430b8ed75 pretrain model bugfix
Fix pre-trained model download bugs and multiprocess testing bugs
2020-07-31 10:42:38 +08:00
liaoxingyu 65169b40bd support ddp testing 2020-07-30 20:15:28 +08:00
liaoxingyu 16655448c2 onnx/trt support
Summary: change model pretrain mode and support onnx/TensorRT export
2020-07-29 17:43:39 +08:00
liaoxingyu ee634df290 rm `__all__` in resnet 2020-07-17 19:40:25 +08:00
liaoxingyu 5ad22d5d36 update regnet init 2020-07-17 19:35:40 +08:00
liaoxingyu f9539be683 add regnet config file 2020-07-17 19:32:36 +08:00
liaoxingyu 3b57dea49f support regnet backbone 2020-07-17 19:13:45 +08:00
liaoxingyu 3f35eb449d minor update 2020-07-14 11:58:06 +08:00
liaoxingyu f8d468647c resnet expansion add 2020-07-10 22:40:07 +08:00
liaoxingyu ea8a3cc534 fix typo 2020-07-10 16:26:35 +08:00
liaoxingyu fec7abc461 finish v0.2 ddp training 2020-07-06 16:57:43 +08:00
liaoxingyu 5ae2cff47e fix circle/arcface pred_logits
fix #136
2020-07-06 16:57:03 +08:00
liaoxingyu ecc2b1a790 update naive sampler
Summary: update naive sampler which will introduce unbalanced sampling
2020-06-15 20:50:25 +08:00
liaoxingyu 56a1ab4a5d update fast global avgpool
Summary: update fast pool according to https://arxiv.org/pdf/2003.13630.pdf
2020-06-12 16:34:03 +08:00
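The fast pooling trick from the linked paper can be sketched as below (an illustrative module, not necessarily fastreid's exact implementation): flattening the spatial dims and taking a mean is typically faster on GPU than `AdaptiveAvgPool2d`, while producing the same result.

```python
import torch
import torch.nn as nn

class FastGlobalAvgPool(nn.Module):
    """Sketch of fast global average pooling (arXiv:2003.13630):
    view (N, C, H, W) as (N, C, H*W) and mean over the last dim."""

    def __init__(self, flatten=False):
        super().__init__()
        self.flatten = flatten

    def forward(self, x):
        pooled = x.view(x.size(0), x.size(1), -1).mean(-1)
        if self.flatten:
            return pooled                                   # (N, C)
        return pooled.view(x.size(0), x.size(1), 1, 1)      # (N, C, 1, 1)
```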
liaoxingyu cbdc01a1c3 update pairwise circle loss
Summary: add param of pairwise circle loss to config, and update pairwise circle loss version
2020-06-10 19:07:29 +08:00
liaoxingyu 3732f94405 update osnet 2020-06-09 14:38:49 +08:00
liaoxingyu 25a7f82df7 change style in baseline 2020-06-05 11:23:11 +08:00
liaoxingyu bc221cb05f fix mgn multi-gpu training problem
Summary: norm_type in pool_reduce was not changed when using syncBN
2020-06-05 11:11:50 +08:00
liaoxingyu 94d85fe11c fix convert caffe model problem 2020-06-04 16:39:12 +08:00
liaoxingyu e7156e1cfa fix mgn not registered problem 2020-06-03 11:46:28 +08:00
liaoxingyu c036ac5bdd update reduction head 2020-05-30 16:50:02 +08:00
liaoxingyu 5528d17ace refactor code
Summary: change code style and refactor code, add avgmax pooling layer in gem_pool
2020-05-28 13:49:39 +08:00
liaoxingyu a1cb123cfa fix R101 bottleneck missing problem
Summary: add key 101 in block dict to support R101
2020-05-26 14:48:32 +08:00
liaoxingyu d4b71de3aa switch between soft and hard margin when inf
Summary: add a mechanism to automatically switch the triplet loss from soft margin to hard margin when the loss becomes inf.
2020-05-26 14:36:33 +08:00
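The switching mechanism might be sketched as follows (an illustrative function; fastreid's actual signature may differ): the soft-margin form `log(1 + exp(d_ap - d_an))` overflows to inf when `d_ap - d_an` is large, so in that case the loss falls back to the hard-margin ranking form.

```python
import torch
import torch.nn.functional as F

def triplet_margin(dist_ap, dist_an, margin=0.3, use_soft=True):
    """Sketch: soft-margin triplet loss with a hard-margin fallback.

    dist_ap / dist_an are anchor-positive / anchor-negative distances.
    soft_margin_loss(dist_an - dist_ap, 1) = log(1 + exp(dist_ap - dist_an)),
    which can overflow to inf; margin_ranking_loss is the bounded fallback.
    """
    y = torch.ones_like(dist_an)
    if use_soft:
        loss = F.soft_margin_loss(dist_an - dist_ap, y)
        if torch.isinf(loss):
            # automatic switch: soft margin blew up, use hard margin
            loss = F.margin_ranking_loss(dist_an, dist_ap, y, margin=margin)
    else:
        loss = F.margin_ranking_loss(dist_an, dist_ap, y, margin=margin)
    return loss
```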