mirror of https://github.com/JDAI-CV/fast-reid.git synced 2025-06-03 14:50:47 +08:00

27 Commits

Author SHA1 Message Date
liaoxingyu
2b65882447 change way of layer freezing
Remove `find_unused_parameters` in DDP and add a new step function in the optimizer for freezing the backbone, which speeds up training.
2021-05-25 15:57:09 +08:00
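A minimal sketch of the idea above (the class name and `freeze` flag are illustrative, not FastReID's actual code): the backbone still runs forward and backward, so DDP sees gradients for every parameter and no longer needs `find_unused_parameters=True`; the optimizer's `step()` simply discards the frozen group's gradients.

```python
import torch

class FreezeAwareSGD(torch.optim.SGD):
    """SGD whose step() skips param groups flagged with a custom 'freeze' key."""

    def step(self, closure=None):
        for group in self.param_groups:
            if group.get("freeze", False):   # hypothetical per-group flag
                for p in group["params"]:
                    p.grad = None            # SGD skips params with no grad
        return super().step(closure)

backbone, head = torch.nn.Linear(8, 8), torch.nn.Linear(8, 2)
opt = FreezeAwareSGD(
    [{"params": backbone.parameters(), "freeze": True},
     {"params": head.parameters()}],
    lr=0.01,
)
```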
Sherlock Liao
ff8a958fff
bugfix for plain_train_net.py and lr scheduler step () 2021-05-11 15:46:17 +08:00
liaoxingyu
44cee30dfc update fastreid v1.2
Summary:
1. refactor dataloader and heads
2. bugfix in fastattr, fastclas, fastface and partialreid
3. partial-fc supported in fastface
2021-04-02 21:33:13 +08:00
Xingyu Liao
890224f25c
support classification in fastreid ()
Summary: support classification and refactor build_dataloader to support explicit parameter passing
2021-03-26 20:17:39 +08:00
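A hypothetical signature showing what "explicit parameter passing" means here: the loader's knobs become ordinary keyword arguments rather than being read only from a global config (the real FastReID signature may differ).

```python
from torch.utils.data import DataLoader, Dataset

def build_train_loader(dataset: Dataset, *, batch_size: int = 64,
                       num_workers: int = 4, sampler=None) -> DataLoader:
    # every knob is an explicit argument, so callers can override the config
    return DataLoader(dataset, batch_size=batch_size, num_workers=num_workers,
                      sampler=sampler, shuffle=sampler is None, pin_memory=True)
```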
Xingyu Liao
15c556c43a
remove apex dependency ()
Summary: use PyTorch 1.6 (or above) built-in AMP training
2021-03-23 12:12:35 +08:00
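The built-in AMP pattern that replaces apex (a generic PyTorch >= 1.6 example, not the repo's exact training loop; requires a CUDA device):

```python
import torch

model = torch.nn.Linear(128, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()

x = torch.randn(32, 128, device="cuda")
target = torch.randint(0, 10, (32,), device="cuda")

optimizer.zero_grad()
with torch.cuda.amp.autocast():      # forward in mixed precision
    loss = torch.nn.functional.cross_entropy(model(x), target)
scaler.scale(loss).backward()        # scale the loss to avoid fp16 underflow
scaler.step(optimizer)               # unscales grads, then steps
scaler.update()
```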
liaoxingyu
f57c5764e3 support multi-node training 2021-03-09 20:07:28 +08:00
liaoxingyu
a53fd17874 update docs 2021-01-23 15:25:58 +08:00
liaoxingyu
e26182e6ec make lr warmup by iter
Summary: make warmup iter-based instead of epoch-based, which is more flexible when training for a small number of epochs
2021-01-22 11:17:21 +08:00
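A sketch of iter-based linear warmup using a stock `LambdaLR` (FastReID's own scheduler differs; this only shows the per-iteration idea):

```python
import torch

def warmup_factor(iter_idx: int, warmup_iters: int = 1000) -> float:
    if iter_idx >= warmup_iters:
        return 1.0
    return (iter_idx + 1) / warmup_iters   # linear ramp toward the base lr

param = torch.zeros(1, requires_grad=True)   # dummy parameter
optimizer = torch.optim.SGD([param], lr=0.1)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup_factor)
# call scheduler.step() once per training iteration, not once per epoch
```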
liaoxingyu
15e1729a27 update fastreid V1.0 2021-01-18 11:36:38 +08:00
liaoxingyu
2c17847980 feat: freeze FC
Summary: freeze the FC layer during the last stages of training
2020-12-28 14:46:28 +08:00
liaoxingyu
5469e8ce76 feat: add save best model mechanism 2020-12-22 15:49:46 +08:00
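The mechanism amounts to checkpointing only when the validation metric improves; a minimal sketch (metric choice and file name are illustrative):

```python
import torch

class BestCheckpointer:
    """Saves a checkpoint whenever the tracked metric (e.g. Rank-1) improves."""

    def __init__(self, path: str = "model_best.pth"):
        self.path, self.best = path, float("-inf")

    def step(self, metric: float, model: torch.nn.Module) -> None:
        if metric > self.best:
            self.best = metric
            torch.save(model.state_dict(), self.path)
```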
liaoxingyu
a327a70f0d v0.3 update
Summary:
1. change DDP training to the apex style;
2. make the warmup scheduler iter-based and the lr scheduler epoch-based;
3. replace random erasing with the torchvision implementation (example after this entry);
4. rename options in the config file
2020-12-07 14:19:20 +08:00
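Item 3 refers to `torchvision.transforms.RandomErasing`; note that it operates on tensors, so it must come after `ToTensor()` (the sizes and probabilities below are just typical ReID values):

```python
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize((256, 128)),        # common person-ReID input size
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),                # RandomErasing expects a tensor
    transforms.RandomErasing(p=0.5, scale=(0.02, 0.33), value=0),
])
```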
liaoxingyu
4d573b8107 refactor reid head
Summary: merge BNneckHead, LinearHead and ReductionHead into EmbeddingHead,
since they are highly similar; this also prepares for ClsHead
2020-09-10 10:57:37 +08:00
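A rough sketch of what such a merged head can look like, with configuration reproducing the three old variants (dimensions, defaults and the 751-class example are illustrative, not FastReID's exact code):

```python
import torch
from torch import nn

class EmbeddingHead(nn.Module):
    def __init__(self, in_dim=2048, embed_dim=0, with_bnneck=True, num_classes=751):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        layers = []
        if embed_dim > 0:                        # ReductionHead-style projection
            layers.append(nn.Conv2d(in_dim, embed_dim, 1, bias=False))
            in_dim = embed_dim
        if with_bnneck:                          # BNneckHead-style BN neck
            layers.append(nn.BatchNorm2d(in_dim))
        self.bottleneck = nn.Sequential(*layers) # empty -> plain LinearHead
        self.classifier = nn.Linear(in_dim, num_classes, bias=False)

    def forward(self, features):                 # features: (N, C, H, W)
        feat = self.bottleneck(self.pool(features)).flatten(1)
        return self.classifier(feat) if self.training else feat
```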
liaoxingyu
d00ce8fc3c refactor model arch 2020-09-01 16:14:45 +08:00
liaoxingyu
ea8a3cc534 fix typo 2020-07-10 16:26:35 +08:00
liaoxingyu
fec7abc461 finish v0.2 ddp training 2020-07-06 16:57:43 +08:00
liaoxingyu
84c733fa85 fix: remove prefetcher, put normalizer in model
1. remove the messy data prefetcher, which caused confusion
2. put the normalizer in the model to accelerate training via GPU computation
2020-05-25 23:39:11 +08:00
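A sketch of "normalizer in the model": keep the pixel mean/std as buffers and normalize on the GPU in `forward`, instead of normalizing each image on the CPU in the dataloader (the values are the usual ImageNet statistics):

```python
import torch
from torch import nn

class NormalizedModel(nn.Module):
    def __init__(self, net: nn.Module,
                 mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)):
        super().__init__()
        self.net = net
        # buffers move with .to(device) and are saved in the state dict
        self.register_buffer("pixel_mean", torch.tensor(mean).view(1, 3, 1, 1))
        self.register_buffer("pixel_std", torch.tensor(std).view(1, 3, 1, 1))

    def forward(self, images):   # raw float batch on the model's device
        return self.net((images - self.pixel_mean) / self.pixel_std)
```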
liaoxingyu
e990cf3e34 style: fix some typos 2020-05-21 15:55:51 +08:00
liaoxingyu
bf18479541 fix: revise syncBN bug 2020-05-14 14:52:37 +08:00
liaoxingyu
0b15ac4e03 feat(hooks&optim): update stochastic weight averaging hooks
Update the SWA method, which runs after regular training when the option is enabled.
2020-05-08 12:20:04 +08:00
liaoxingyu
948af64fd1 feat: add swa algorithm
Add SWA and related config options; if enabled, the model runs SWA after regular training.
2020-05-06 10:17:44 +08:00
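Both SWA commits predate `torch.optim.swa_utils`, but the averaging scheme they implement with hooks is the same; an illustrative loop with today's built-ins (toy model and data):

```python
import torch
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
swa_model = AveragedModel(model)
swa_scheduler = SWALR(optimizer, swa_lr=0.01)

loader = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(5)]
for epoch in range(3):                   # the "regular training" phase is omitted
    for x, y in loader:
        optimizer.zero_grad()
        torch.nn.functional.cross_entropy(model(x), y).backward()
        optimizer.step()
    swa_model.update_parameters(model)   # accumulate the running weight average
    swa_scheduler.step()
update_bn(loader, swa_model)             # refresh BN stats for the averaged weights
```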
liaoxingyu
a2dcd7b4ab feat(layers/norm): add ghost batchnorm
add a get_norm function to easily switch normalization between batch norm, ghost BN and group BN
2020-05-01 09:02:46 +08:00
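A sketch of a `get_norm`-style factory plus a simple ghost batch norm, which normalizes fixed-size virtual sub-batches independently (the split count and supported names are illustrative):

```python
import torch
from torch import nn

class GhostBatchNorm(nn.BatchNorm2d):
    def __init__(self, num_features, num_splits=4, **kwargs):
        super().__init__(num_features, **kwargs)
        self.num_splits = num_splits

    def forward(self, x):
        if self.training:
            chunks = x.chunk(self.num_splits, dim=0)   # virtual sub-batches
            return torch.cat(
                [super(GhostBatchNorm, self).forward(c) for c in chunks], dim=0)
        return super().forward(x)

def get_norm(name: str, out_channels: int) -> nn.Module:
    return {
        "BN": lambda c: nn.BatchNorm2d(c),
        "GhostBN": lambda c: GhostBatchNorm(c),
        "GN": lambda c: nn.GroupNorm(32, c),   # channels must divide by 32 here
    }[name](out_channels)
```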
liaoxingyu
4d2fa28dbb update freeze layer
update preciseBN
update circle loss with both metric-learning and cross-entropy loss forms (sketch after this entry)
update loss call methods
2020-04-06 23:34:27 +08:00
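For reference, the pair-wise (metric-learning) form of Circle Loss looks roughly like this; the masks assume the batch contains at least one positive and one negative pair, and the constants follow the paper's defaults:

```python
import torch
import torch.nn.functional as F

def circle_loss(embeddings, labels, m=0.25, gamma=128.0):
    sim = F.normalize(embeddings) @ F.normalize(embeddings).t()
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    sp = sim[same.triu(1)]                           # unique positive pairs
    sn = sim[(~same).triu(1)]                        # unique negative pairs
    ap = torch.clamp_min(1 + m - sp.detach(), 0.0)   # adaptive positive weight
    an = torch.clamp_min(sn.detach() + m, 0.0)       # adaptive negative weight
    logit_p = -gamma * ap * (sp - (1 - m))
    logit_n = gamma * an * (sn - m)
    return F.softplus(torch.logsumexp(logit_p, 0) + torch.logsumexp(logit_n, 0))

loss = circle_loss(torch.randn(16, 128), torch.randint(0, 4, (16,)))
```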
liaoxingyu
6a8961ce48 1. upload circle loss and arcface
2. finish freeze training
3. update augmix data augmentation
2020-04-05 23:54:26 +08:00
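A minimal ArcFace-style margin on the classification logits (a generic sketch of the technique, not FastReID's exact implementation; 751 is just the Market-1501 class count):

```python
import torch
import torch.nn.functional as F

def arcface_logits(features, weight, labels, scale=64.0, margin=0.5):
    # cosine between L2-normalized features and class weights
    cos = F.linear(F.normalize(features), F.normalize(weight))
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    target = F.one_hot(labels, weight.size(0)).bool()
    # add the angular margin only on the ground-truth class
    cos_m = torch.where(target, torch.cos(theta + margin), cos)
    return scale * cos_m                 # feed these to cross_entropy

weight = torch.randn(751, 512, requires_grad=True)
features, labels = torch.randn(8, 512), torch.randint(0, 751, (8,))
loss = F.cross_entropy(arcface_logits(features, weight, labels), labels)
```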
L1aoXingyu
12957f66aa Change architecture:
1. delete redundant preprocessing
2. add a data prefetcher to accelerate data loading
3. fix a minor bug in the triplet sampler when an identity has only one image (sketch after this entry)
2020-02-18 21:01:23 +08:00
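Item 3's fix boils down to sampling with replacement when an identity has fewer images than the required instances per id; a sketch (the function name and K=4 are illustrative):

```python
import numpy as np

def pick_instances(image_indices, num_instances=4):
    # an id with fewer images than num_instances is sampled with replacement,
    # so the K-instances-per-identity batch layout is preserved
    replace = len(image_indices) < num_instances
    return np.random.choice(image_indices, size=num_instances, replace=replace)

print(pick_instances([17], num_instances=4))   # a single-image id just repeats
```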
L1aoXingyu
a2f69d0537 Update StrongBaseline results for market1501 and dukemtmc 2020-02-11 22:38:40 +08:00
L1aoXingyu
db6ed12b14 Update sampler code 2020-02-10 07:38:56 +08:00