Commit Graph

37 Commits (100830e5efc06f259b347d55edda90f3e75c14f1)

Author SHA1 Message Date
ViokingTung 2cac19ce31
fix num_vis type error bug in visualizer.py (#566) 2021-09-02 14:30:18 +08:00
liaoxingyu 7e652fea2a feat: Add contiguous parameters support
Support contiguous parameters for faster training. Parameters are split into different contiguous groups by freeze_layer, lr, and weight decay.
2021-07-05 11:10:37 +08:00
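
The sketch below illustrates the idea from the commit above: group trainable parameters by (lr, weight decay) so each group can later be packed into one contiguous buffer. The grouping heuristic and hyperparameter values are assumptions for illustration, not fast-reid's actual implementation.

```python
import torch
from collections import defaultdict

def build_param_groups(model, base_lr=3.5e-4, weight_decay=5e-4):
    """Split trainable parameters into optimizer groups keyed by (lr, weight decay);
    each group could then be flattened into a single contiguous buffer."""
    groups = defaultdict(list)
    for name, param in model.named_parameters():
        if not param.requires_grad:            # frozen layers are skipped entirely
            continue
        wd = 0.0 if name.endswith(".bias") else weight_decay  # assumption: no decay on biases
        groups[(base_lr, wd)].append(param)
    return [{"params": params, "lr": lr, "weight_decay": wd}
            for (lr, wd), params in groups.items()]

model = torch.nn.Sequential(torch.nn.Linear(16, 8), torch.nn.BatchNorm1d(8))
optimizer = torch.optim.SGD(build_param_groups(model), lr=3.5e-4, momentum=0.9)
```
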
liaoxingyu 2cabc3428a Support vision transformer backbone 2021-05-31 17:08:57 +08:00
Xingyu Liao 15c556c43a
remove apex dependency (#442)
Summary: use the PyTorch 1.6 (or above) built-in AMP for training
2021-03-23 12:12:35 +08:00
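
A minimal sketch of the PyTorch (>=1.6) native AMP training loop that replaces apex here; the model and the dummy data loader are placeholders.

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(128, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()            # scales the loss to avoid fp16 underflow

# dummy data standing in for a real DataLoader
loader = [(torch.randn(32, 128), torch.randint(0, 10, (32,))) for _ in range(10)]

for images, targets in loader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():             # run the forward pass in mixed precision
        loss = F.cross_entropy(model(images.cuda()), targets.cuda())
    scaler.scale(loss).backward()               # backward on the scaled loss
    scaler.step(optimizer)                      # unscales gradients, then steps the optimizer
    scaler.update()                             # adjust the loss scale for the next iteration
```
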
liaoxingyu ef6ebf451b refactor apex import 2021-01-23 15:35:48 +08:00
liaoxingyu a53fd17874 update docs 2021-01-23 15:25:58 +08:00
liaoxingyu e26182e6ec make lr warmup by iter
Summary: perform warmup by iteration rather than by epoch, which is more flexible when training for a small number of epochs
2021-01-22 11:17:21 +08:00
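
A minimal sketch of linear warmup computed per iteration rather than per epoch, as the commit above describes; the function name and default values are illustrative.

```python
def warmup_factor(iteration, warmup_iters=1000, warmup_start=0.01):
    """Return an lr multiplier that ramps linearly from warmup_start to 1.0
    over warmup_iters iterations, independent of epoch length."""
    if iteration >= warmup_iters:
        return 1.0
    alpha = iteration / warmup_iters
    return warmup_start * (1.0 - alpha) + alpha

# applied every training iteration, e.g.:
#   for group in optimizer.param_groups:
#       group["lr"] = base_lr * warmup_factor(it)
```
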
liaoxingyu 15e1729a27 update fastreid V1.0 2021-01-18 11:36:38 +08:00
liaoxingyu 0e1b91f74a fix checkpoint bug in ddp training
Summary: change PyTorch DDP to apex DDP in checkpointing
2020-12-28 14:35:02 +08:00
liaoxingyu bb7a00e615 feat: add save best model checkpoint 2020-12-22 15:50:23 +08:00
liaoxingyu a327a70f0d v0.3 update
Summary:
1. change DDP training to the apex way;
2. make warmup scheduler by iter and lr scheduler by epoch;
3. replace random erasing with the torchvision implementation;
4. naming modifications in config files
2020-12-07 14:19:20 +08:00
liaoxingyu 7e9a4775da fixup finetune problem
Summary: support finetuning from another model with a different number of classes, and simplify the calling convention (#325)

close #325
2020-11-06 15:58:22 +08:00
liaoxingyu2 a00e50d37f fix triplet ddp training
Summary: fix precision alignment when using triplet loss with DDP
2020-11-06 11:01:10 +08:00
liaoxingyu2 3bd2fad9a5 support faiss-based rerank
Summary: accelerate re-ranking with faiss-gpu
2020-11-06 10:59:53 +08:00
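
A minimal sketch of the faiss-gpu nearest-neighbour search that re-ranking can be built on; the feature dimension, k, and random data are placeholders, and this is not the repo's rerank code.

```python
import faiss
import numpy as np

def knn_faiss_gpu(query_feat, gallery_feat, k=20):
    """Exact L2 k-NN on the GPU with faiss; features are float32 numpy arrays."""
    res = faiss.StandardGpuResources()
    index = faiss.GpuIndexFlatL2(res, gallery_feat.shape[1])  # brute-force L2 index on GPU
    index.add(gallery_feat)
    distances, indices = index.search(query_feat, k)
    return distances, indices

query = np.random.rand(8, 256).astype("float32")
gallery = np.random.rand(1000, 256).astype("float32")
dist, idx = knn_faiss_gpu(query, gallery)   # idx[i] holds the k nearest gallery ids for query i
```
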
liaoxingyu 74a6938289 fix typo 2020-09-10 11:04:59 +08:00
liaoxingyu d00ce8fc3c refactor model arch 2020-09-01 16:14:45 +08:00
liaoxingyu 16655448c2 onnx/trt support
Summary: change the model pretraining mode and support ONNX/TensorRT export
2020-07-29 17:43:39 +08:00
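
A minimal sketch of exporting a model to ONNX with PyTorch's built-in exporter; the toy model, input shape (a typical reid crop), and file name are assumptions. The resulting .onnx file can then be handed to a TensorRT converter such as trtexec.

```python
import torch

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3), torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten()
)
model.eval()

dummy = torch.randn(1, 3, 256, 128)             # assumed reid input size (H=256, W=128)
torch.onnx.export(
    model, dummy, "reid_model.onnx",
    input_names=["images"], output_names=["features"],
    opset_version=11,
    dynamic_axes={"images": {0: "batch"}, "features": {0: "batch"}},  # variable batch size
)
```
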
liaoxingyu ea8a3cc534 fix typo 2020-07-10 16:26:35 +08:00
liaoxingyu fec7abc461 finish v0.2 ddp training 2020-07-06 16:57:43 +08:00
liaoxingyu 3840f3f79a fix arcface NaN problem
Summary: fix a classifier init bug that left classifier weights uninitialized when using arcface or circle loss,
which led to NaN loss.
2020-06-18 12:05:44 +08:00
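
A minimal sketch of the fix's idea: explicitly initialize the classifier weight that margin-based losses such as arcface/circle operate on, so the cosine logits stay finite. The class and init scheme are assumptions, not the repo's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    """Cosine-similarity classifier of the kind arcface / circle loss builds on."""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, feat_dim))
        nn.init.normal_(self.weight, std=0.01)  # leaving this weight uninitialized is what
                                                # produced the NaN loss described above

    def forward(self, features):
        # cosine logits in [-1, 1]; the arcface/circle margin and scale are applied afterwards
        return F.linear(F.normalize(features), F.normalize(self.weight))
```
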
liaoxingyu 8879db3fba update training instruction
Summary: update dataset configuration and training instructions
2020-06-16 19:43:36 +08:00
liaoxingyu 5528d17ace refactor code
Summary: change code style and refactor code, add an avgmax pooling layer in gem_pool
2020-05-28 13:49:39 +08:00
liaoxingyu 84c733fa85 fix: remove prefetcher, put normalizer in model
1. remove the messy data prefetcher, which caused confusion
2. put the normalizer in the model to accelerate training via GPU computation
2020-05-25 23:39:11 +08:00
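
A minimal sketch of moving pixel normalization into the model so it runs on the GPU together with the forward pass; the class name and the mean/std values (ImageNet statistics scaled to 0-255) are assumptions.

```python
import torch
import torch.nn as nn

class NormalizedModel(nn.Module):
    """Wraps a backbone and normalizes raw 0-255 images inside forward()."""
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone
        # buffers move to the GPU together with the model
        self.register_buffer("pixel_mean",
                             torch.tensor([123.675, 116.28, 103.53]).view(1, 3, 1, 1))
        self.register_buffer("pixel_std",
                             torch.tensor([58.395, 57.12, 57.375]).view(1, 3, 1, 1))

    def forward(self, images):
        images = (images - self.pixel_mean) / self.pixel_std  # on GPU, no CPU-side transform
        return self.backbone(images)
```
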
liaoxingyu e990cf3e34 style: fix some typos 2020-05-21 15:55:51 +08:00
liaoxingyu 2ac55a7601 feat: update roc curve and TPR@FPR metric
support plotting multiple ROC curves for different models
2020-05-20 14:29:33 +08:00
liaoxingyu e344eae1cc feat: support plotting roc curve and computing auc score
The ROC curve and AUC score are helpful for choosing thresholds
2020-05-19 20:45:26 +08:00
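
A minimal sketch of computing the ROC curve, AUC, and a TPR@FPR value from pairwise verification scores with scikit-learn, in the spirit of the two ROC commits above; the score/label arrays are random placeholders.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# similarity scores for query-gallery pairs and 1/0 same-identity labels (placeholders)
scores = np.random.rand(1000)
labels = np.random.randint(0, 2, size=1000)

fpr, tpr, thresholds = roc_curve(labels, scores)
roc_auc = auc(fpr, tpr)

# TPR at a fixed FPR (e.g. 1e-2) is useful for picking an operating threshold
target_fpr = 1e-2
tpr_at_fpr = tpr[np.searchsorted(fpr, target_fpr)]
print(f"AUC={roc_auc:.3f}  TPR@FPR={target_fpr}: {tpr_at_fpr:.3f}")
```
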
liaoxingyu 320010f2ae feat: support re-rank in test phase 2020-05-13 11:47:52 +08:00
liaoxingyu 9addfb0ae2 feat: support visualizing label list
add features to support label list visualization, which can be used
for label correction or checking the hardest samples
2020-05-12 21:35:33 +08:00
liaoxingyu 9b6fda3830 style: remove title in visualization 2020-05-11 14:12:29 +08:00
liaoxingyu 13bb03eb07 feat: add rank result visualization tools
Update visualization tools to save the rank list with AP metrics from high to low, or vice versa.
To compute AP quickly in the visualizer, modify rank_cylib to return all_AP instead of mAP,
so the results can be computed with Cython.
2020-05-10 23:17:10 +08:00
liaoxingyu a2dcd7b4ab feat(layers/norm): add ghost batchnorm
add a get_norm function to easily switch normalization between batchnorm, ghost BN, and group BN
2020-05-01 09:02:46 +08:00
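
A minimal sketch of a get_norm-style dispatcher and a ghost batchnorm layer that computes statistics over small sub-batches; the names, split count, and group count are assumptions, not the repo's implementation.

```python
import torch
import torch.nn as nn

class GhostBatchNorm(nn.BatchNorm2d):
    """BatchNorm whose statistics are computed over small 'ghost' sub-batches."""
    def __init__(self, num_features, num_splits=4, **kwargs):
        super().__init__(num_features, **kwargs)
        self.num_splits = num_splits

    def forward(self, x):
        if self.training:
            chunks = x.chunk(self.num_splits, dim=0)   # split the batch into ghost batches
            return torch.cat([super(GhostBatchNorm, self).forward(c) for c in chunks], dim=0)
        return super().forward(x)                      # use running stats at eval time

def get_norm(norm_type, num_channels):
    """Pick a normalization layer by name (illustrative dispatcher)."""
    return {
        "BN": lambda: nn.BatchNorm2d(num_channels),
        "GhostBN": lambda: GhostBatchNorm(num_channels, num_splits=4),
        "GN": lambda: nn.GroupNorm(32, num_channels),
    }[norm_type]()
```
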
liaoxingyu 9cf222e093 refactor bn_no_bias 2020-04-08 21:04:09 +08:00
liaoxingyu 6a8961ce48 1. upload circle loss and arcface
2. finish freeze training
3. update augmix data augmentation
2020-04-05 23:54:26 +08:00
L1aoXingyu 12957f66aa Change architecture:
1. delete redundant preprocessing
2. add a data prefetcher to accelerate data loading
3. fix a minor bug in the triplet sampler when an id has only one image
2020-02-18 21:01:23 +08:00
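
A minimal sketch of the usual fix for item 3 above: when an identity has fewer images than the requested number of instances, sample with replacement instead of failing; the names and defaults are illustrative.

```python
import numpy as np

def sample_instances(image_indices, num_instances=4):
    """Pick num_instances image indices for one identity; ids with a single image
    (or fewer than num_instances) are sampled with replacement."""
    replace = len(image_indices) < num_instances
    return np.random.choice(image_indices, size=num_instances, replace=replace).tolist()

print(sample_instances([17]))   # a one-image identity no longer breaks the sampler
```
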
L1aoXingyu 8a9c0ccfad Finish first version for fastreid 2020-02-10 22:13:04 +08:00
L1aoXingyu db6ed12b14 Update sampler code 2020-02-10 07:38:56 +08:00
liaoxingyu b761b656f3 Finish basic training loop and evaluation results 2020-01-20 21:33:37 +08:00