Commit Graph

56 Commits (270520c864ef4c659962a92cfbe4df52f6f512a6)

Author SHA1 Message Date
zuchen.wang 270520c864 change MultiStepLr intervals 2021-11-17 17:57:55 +08:00
zuchen.wang dfd7e5f61e Add binary cross entropy loss and binary focal loss 2021-11-10 16:55:40 +08:00
zuchen.wang 5a075c1fe8 Merge branch 'develop' into pcb_online 2021-10-29 14:29:29 +08:00
zuchen.wang 99b124304f change input size to (0,0) 2021-10-29 11:36:55 +08:00
zuchen.wang ad31a2cd16 modify config 2021-10-27 16:15:58 +08:00
zuchen.wang 6687df06e0 update default config 2021-10-27 15:24:25 +08:00
zuchen.wang d4e2ac32d8 add pcb 2021-10-27 15:09:57 +08:00
zuchen.wang 8b309b0f4e add contrastive loss 2021-10-13 20:05:07 +08:00
liaoxingyu 7e652fea2a feat: Add contiguous parameters support
Support contiguous parameters for faster training. Parameters can be split into different contiguous groups by freeze_layer, lr and weight decay.
2021-07-05 11:10:37 +08:00
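For context, a minimal sketch of the parameter-grouping step the commit above describes, bucketing trainable parameters by (lr, weight_decay); `base_lr` and `freeze_prefixes` are illustrative names, and flattening each bucket into one contiguous buffer is omitted here:

```python
# Bucket trainable parameters by (lr, weight_decay); each bucket could then be
# copied into one contiguous buffer. Names below are illustrative, not
# fast-reid's actual config keys.
from collections import defaultdict

import torch.nn as nn


def group_parameters(model: nn.Module, base_lr=3.5e-4, weight_decay=5e-4,
                     freeze_prefixes=()):
    buckets = defaultdict(list)
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        lr = 0.0 if any(name.startswith(p) for p in freeze_prefixes) else base_lr
        wd = 0.0 if name.endswith(".bias") else weight_decay
        buckets[(lr, wd)].append(param)
    return [{"params": params, "lr": lr, "weight_decay": wd}
            for (lr, wd), params in buckets.items()]
```

The returned list can be passed directly to a torch.optim optimizer constructor.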
liaoxingyu 2cabc3428a Support vision transformer backbone 2021-05-31 17:08:57 +08:00
liaoxingyu 55300730e1 update fastreid v1.2 readme and changelog 2021-04-06 20:09:13 +08:00
liaoxingyu 44cee30dfc update fastreid v1.2
Summary:
1. refactor dataloader and heads
2. bugfix in fastattr, fastclas, fastface and partialreid
3. partial-fc supported in fastface
2021-04-02 21:33:13 +08:00
Xingyu Liao fb36b23678
bugfix for attribute project (#450)
Summary: refactor sample weight in attribute recognition;
change all options to False in defaults.py and modify yaml files
2021-03-31 17:07:19 +08:00
Xingyu Liao be0a089e1f
bugfix & merge classification transforms (#448)
Summary: change heads definition in project and config file, merge classification transforms into default transforms
2021-03-30 15:47:14 +08:00
Xingyu Liao 890224f25c
support classification in fastreid (#443)
Summary: support classification and refactor build_dataloader which can support explicit parameters passing
2021-03-26 20:17:39 +08:00
liaoxingyu 77a91b1204 feat: support multi-teacher kd
Summary: support multi-teacher kd with logits and overhaul distillation
2021-01-29 17:25:31 +08:00
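A hedged sketch of logit-based distillation with multiple teachers, averaging the teachers' temperature-softened probabilities as the target; this is one common formulation, not necessarily fast-reid's exact loss:

```python
# Distill the student towards the mean of the teachers' softened
# probabilities with a KL loss; T is the usual KD temperature.
import torch
import torch.nn.functional as F


def multi_teacher_kd_loss(student_logits, teacher_logits_list, T: float = 4.0):
    with torch.no_grad():
        teacher_prob = torch.stack(
            [F.softmax(t / T, dim=1) for t in teacher_logits_list]).mean(dim=0)
    log_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_student, teacher_prob, reduction="batchmean") * T * T
```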
liaoxingyu e26182e6ec make lr warmup by iter
Summary: change warmup to be iteration-based instead of epoch-based, which makes it more flexible when training for a small number of epochs
2021-01-22 11:17:21 +08:00
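A minimal sketch of an iteration-based linear warmup factor as described above; `warmup_iters` and `warmup_factor` are assumed names:

```python
# Linear warmup factor computed from the current iteration, independent of
# epoch length; the base lr is multiplied by this factor until warmup ends.
def warmup_lr_factor(cur_iter: int, warmup_iters: int = 1000,
                     warmup_factor: float = 0.01) -> float:
    if cur_iter >= warmup_iters:
        return 1.0
    alpha = cur_iter / warmup_iters
    return warmup_factor * (1.0 - alpha) + alpha
```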
liaoxingyu 15e1729a27 update fastreid V1.0 2021-01-18 11:36:38 +08:00
liaoxingyu f56ca8345e fix keywords error
Summary: add `freeze_fc` and `flip_test` keywords
2020-12-28 14:34:18 +08:00
liaoxingyu a327a70f0d v0.3 update
Summary:
1. change DDP training to the apex way;
2. make warmup scheduler by iter and lr scheduler by epoch;
3. replace random erasing with torchvision implementation;
4. naming modification in config file
2020-12-07 14:19:20 +08:00
liaoxingyu2 42cadaeebc update backbone and config
Summary: update resnet backbone to adapt it for caffe export; modify effnet loading keyword
2020-11-06 10:58:38 +08:00
liaoxingyu 4d573b8107 refactor reid head
Summary: merge BNneckHead, LinearHead and ReductionHead into EmbeddingHead
because they are highly similar, and to prepare for ClsHead
2020-09-10 10:57:37 +08:00
liaoxingyu 53fed7451d feat: support amp training
Summary: support automatic mixed precision training #217
2020-09-02 18:03:12 +08:00
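Generic torch.cuda.amp usage illustrating automatic mixed precision training as referenced in #217; this is standard PyTorch, not the repo's trainer code:

```python
# One training step under AMP: forward/loss inside autocast, gradients
# scaled to avoid fp16 underflow.
import torch

scaler = torch.cuda.amp.GradScaler()


def amp_train_step(model, optimizer, images, targets, loss_fn):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = loss_fn(model(images), targets)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.detach()
```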
liaoxingyu ae7c9288cf support faiss retrieval and cython roc evaluation 2020-08-12 16:27:57 +08:00
liaoxingyu d1c20cbe50 fix pre-train model bugs
fix lock bugs when downloading the pre-trained model
2020-08-04 15:56:36 +08:00
liaoxingyu 16655448c2 onnx/trt support
Summary: change model pretrain mode and support onnx/TensorRT export
2020-07-29 17:43:39 +08:00
liaoxingyu 3b57dea49f support regnet backbone 2020-07-17 19:13:45 +08:00
liaoxingyu 3f35eb449d minor update 2020-07-14 11:58:06 +08:00
liaoxingyu fec7abc461 finish v0.2 ddp training 2020-07-06 16:57:43 +08:00
liaoxingyu ecc2b1a790 update naive sampler
Summary: update naive sampler which will introduce unbalanced sampling
2020-06-15 20:50:25 +08:00
liaoxingyu 56a1ab4a5d update fast global avgpool
Summary: update fast pool according to https://arxiv.org/pdf/2003.13630.pdf
2020-06-12 16:34:03 +08:00
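A sketch of the fast global average pooling idea from arXiv:2003.13630, flattening the spatial dimensions and taking a mean instead of nn.AdaptiveAvgPool2d(1); the class name is illustrative:

```python
# view + mean over the flattened spatial dims is typically faster on GPU
# than nn.AdaptiveAvgPool2d(1).
import torch.nn as nn


class FastGlobalAvgPool2d(nn.Module):
    def __init__(self, flatten: bool = False):
        super().__init__()
        self.flatten = flatten

    def forward(self, x):
        n, c = x.size(0), x.size(1)
        out = x.view(n, c, -1).mean(dim=-1)
        return out if self.flatten else out.view(n, c, 1, 1)
```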
liaoxingyu cbdc01a1c3 update pairwise circle loss
Summary: add the pairwise circle loss parameters to the config, and update the pairwise circle loss version
2020-06-10 19:07:29 +08:00
liaoxingyu 84c733fa85 fix: remove prefetcher, put normalizer in model
1. remove the messy data prefetcher, which caused confusion
2. put the normalizer in the model to accelerate training via GPU computing
2020-05-25 23:39:11 +08:00
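A sketch of moving input normalization into the model's forward pass so it runs on the GPU; the mean/std values shown are ImageNet defaults in the 0-255 pixel range and only an assumption:

```python
# Normalize raw 0-255 images inside the model instead of in a CPU-side
# prefetcher; mean/std are registered as buffers so they follow the device.
import torch
import torch.nn as nn


class NormalizedModel(nn.Module):
    def __init__(self, backbone,
                 pixel_mean=(123.675, 116.28, 103.53),
                 pixel_std=(58.395, 57.12, 57.375)):
        super().__init__()
        self.backbone = backbone
        self.register_buffer("pixel_mean", torch.tensor(pixel_mean).view(1, 3, 1, 1))
        self.register_buffer("pixel_std", torch.tensor(pixel_std).view(1, 3, 1, 1))

    def forward(self, images):
        x = (images.float() - self.pixel_mean) / self.pixel_std
        return self.backbone(x)
```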
liaoxingyu fd90555e19 feat: add multi-dataset joint training
New feature that supports multi-dataset joint training. Also found a bug in the combine_all function of datasets/bases.py, which assumes person ids in each dataset have been relabeled from 0 to num_class.
Another bug appears in msmt17, where trainset and testset person ids both begin from 0; the testset ids should be offset by the trainset's num_class.
2020-05-18 20:06:04 +08:00
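An illustrative sketch of the pid-offset fix described above when combining several train sets; the (img, pid, camid) tuple layout and function name are assumptions, not the repo's exact combine_all:

```python
# Relabel each dataset's person ids from 0 and shift them by a running
# offset so ids stay unique across the combined datasets.
def combine_train_sets(datasets):
    combined, pid_offset = [], 0
    for data in datasets:
        pids = sorted({pid for _, pid, _ in data})
        pid2label = {pid: i for i, pid in enumerate(pids)}
        combined += [(img, pid2label[pid] + pid_offset, camid)
                     for img, pid, camid in data]
        pid_offset += len(pids)
    return combined
```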
liaoxingyu bf18479541 fix: revise syncBN bug 2020-05-14 14:52:37 +08:00
liaoxingyu 5ae3d4fecf feat: add aqe support in test phase
Query expansion combines the retrieved top-k nearest neighbors with the original query feature;
it improves mAP by a large margin.
2020-05-13 16:27:22 +08:00
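A hedged sketch of average query expansion: each L2-normalized query feature is averaged with its top-k retrieved gallery features before the final ranking; `qe_k` is an illustrative parameter name:

```python
# Average each query feature with its top-k most similar gallery features.
import torch
import torch.nn.functional as F


def average_query_expansion(query_feats, gallery_feats, qe_k: int = 5):
    q = F.normalize(query_feats, dim=1)
    g = F.normalize(gallery_feats, dim=1)
    topk = (q @ g.t()).topk(qe_k, dim=1).indices   # top-k gallery ids per query
    expanded = torch.cat([q.unsqueeze(1), g[topk]], dim=1).mean(dim=1)
    return F.normalize(expanded, dim=1)
```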
liaoxingyu 320010f2ae feat: support re-rank in test phase 2020-05-13 11:47:52 +08:00
liaoxingyu 0b15ac4e03 feat(hooks&optim): update stochastic weight averaging hooks
Update the SWA method, which runs after regular training
if the option is enabled.
2020-05-08 12:20:04 +08:00
liaoxingyu 948af64fd1 feat: add swa algorithm
Add SWA and related config options;
if enabled, the model runs SWA after regular training
2020-05-06 10:17:44 +08:00
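A minimal sketch of running SWA after regular training with torch.optim.swa_utils (PyTorch >= 1.6); generic usage rather than the repo's hook:

```python
# Keep training for a few extra epochs while averaging weights, then
# recompute BN statistics for the averaged model.
import torch
from torch.optim.swa_utils import SWALR, AveragedModel, update_bn


def run_swa(model, optimizer, train_loader, loss_fn,
            swa_epochs=10, swa_lr=0.05, device="cuda"):
    swa_model = AveragedModel(model)
    swa_scheduler = SWALR(optimizer, swa_lr=swa_lr)
    for _ in range(swa_epochs):
        for images, targets in train_loader:
            images, targets = images.to(device), targets.to(device)
            optimizer.zero_grad()
            loss_fn(model(images), targets).backward()
            optimizer.step()
        swa_model.update_parameters(model)
        swa_scheduler.step()
    update_bn(train_loader, swa_model, device=device)
    return swa_model
```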
liaoxingyu a2dcd7b4ab feat(layers/norm): add ghost batchnorm
add a get_norm function to easily switch normalization between batchnorm, ghost bn and group bn
2020-05-01 09:02:46 +08:00
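A hedged sketch of such a get_norm factory plus a simple ghost batch norm that computes statistics over virtual sub-batches; not the repo's exact implementation:

```python
# Factory that switches between normalization layers; GhostBatchNorm splits
# the batch into virtual sub-batches during training.
import torch
import torch.nn as nn


class GhostBatchNorm(nn.BatchNorm2d):
    def __init__(self, num_features, num_splits=4, **kwargs):
        super().__init__(num_features, **kwargs)
        self.num_splits = num_splits

    def forward(self, x):
        if self.training:
            chunks = x.chunk(self.num_splits, dim=0)
            return torch.cat(
                [super(GhostBatchNorm, self).forward(c) for c in chunks], dim=0)
        return super().forward(x)


def get_norm(norm: str, out_channels: int) -> nn.Module:
    if norm == "BN":
        return nn.BatchNorm2d(out_channels)
    if norm == "GhostBN":
        return GhostBatchNorm(out_channels)
    if norm == "GN":
        return nn.GroupNorm(32, out_channels)
    raise ValueError(f"Unknown norm type: {norm}")
```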
liaoxingyu a6bd0371e2 feat($data): add autoaugment
add auto augmentation support with ImageNet and CIFAR10 policies.
modify transforms and config code to adapt to this augmentation.
2020-04-27 11:41:12 +08:00
liaoxingyu 3984f0c91d refactor($modeling/meta): refactor heads output
remove intermediate variables generated by reid heads to make it more flexible
2020-04-24 12:16:18 +08:00
liaoxingyu 95a3c62ad2 refactor(fastreid)
refactor architecture
2020-04-20 10:59:29 +08:00
liaoxingyu 9684500a57 change arch
1. change dataset show to display trainset and testset separately
2. add cls layer to easily plug in circle loss and arcface
2020-04-19 12:54:01 +08:00
liaoxingyu be9faa5605 update focal loss
update dataset info display
update separate lr
update adaptive label smooth regularization
2020-04-17 13:46:10 +08:00
liaoxingyu 4d2fa28dbb update freeze layer
update preciseBN
update circle loss with metric learning and cross entropy loss form
update loss call methods
2020-04-06 23:34:27 +08:00
liaoxingyu 6a8961ce48 1. upload circle loss and arcface
2. finish freeze training
3. update augmix data augmentation
2020-04-05 23:54:26 +08:00
liaoxingyu eacee874aa fix merge 2020-03-25 11:06:39 +08:00
liaoxingyu 23bedfce12 update version0.2 code 2020-03-25 10:58:26 +08:00
L1aoXingyu 12957f66aa Change architecture:
1. delete redundant preprocess
2. add data prefetcher to accelerate data loading
3. fix a minor bug in the triplet sampler when an id has only one image
2020-02-18 21:01:23 +08:00