zuchen.wang
8b309b0f4e
add contrastive loss
2021-10-13 20:05:07 +08:00
Xingyu Liao
1dce15efad
faster dataloader with pre-fetch and cuda stream ( #456 )
...
Summary: add a background thread that pre-fetches batches from a generator, and create a new CUDA stream to copy tensors from CPU to GPU in parallel.
Reviewed by: l1aoxingyu
2021-04-12 15:03:35 +08:00
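A minimal sketch of the idea in this commit, assuming a dict-of-tensors batch and a DataLoader created with `pin_memory=True` (the class name `CudaPrefetcher` and its details are illustrative, not the actual fast-reid code): a background thread keeps pulling batches while a dedicated CUDA stream overlaps the host-to-device copy with compute on the default stream.

```python
import queue
import threading

import torch


class CudaPrefetcher:
    """Illustrative prefetcher: background thread + side CUDA stream for H2D copies."""

    def __init__(self, loader, device="cuda", num_prefetch=2):
        self.loader = loader
        self.device = device
        self.queue = queue.Queue(maxsize=num_prefetch)
        self.copy_stream = torch.cuda.Stream()
        threading.Thread(target=self._produce, daemon=True).start()

    def _produce(self):
        for batch in self.loader:
            with torch.cuda.stream(self.copy_stream):
                # non_blocking copies are only truly asynchronous with pinned host memory
                batch = {k: (v.to(self.device, non_blocking=True) if torch.is_tensor(v) else v)
                         for k, v in batch.items()}
            self.queue.put(batch)
        self.queue.put(None)  # sentinel: underlying loader exhausted

    def __iter__(self):
        while True:
            batch = self.queue.get()
            if batch is None:
                break
            # make the default stream wait for the side-stream copy to finish
            torch.cuda.current_stream().wait_stream(self.copy_stream)
            yield batch
```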
Xingyu Liao
883fd4aede
add configurable decorator & linear loss decouple ( #441 )
...
Summary: Add a configurable decorator so `Baseline` can be called as `Baseline(cfg)` or `Baseline(cfg, heads=heads, ...)`.
Decouple the linear layer from the loss computation for partial-fc support.
Reviewed By: l1aoxingyu
2021-03-23 12:10:06 +08:00
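A simplified sketch of how such a decorator can work (hypothetical implementation, not the exact fast-reid `configurable`): if the first positional argument looks like a config node, it is expanded into constructor kwargs via a `from_config` classmethod, and any explicit kwargs override the config values.

```python
import functools
from types import SimpleNamespace


def configurable(init_func):
    @functools.wraps(init_func)
    def wrapped(self, *args, **kwargs):
        # crude heuristic for "is this a cfg node?" -- good enough for a sketch
        if args and hasattr(args[0], "MODEL"):
            cfg, *rest = args
            explicit = type(self).from_config(cfg)  # cfg -> dict of constructor kwargs
            explicit.update(kwargs)                 # explicit kwargs win over cfg values
            init_func(self, *rest, **explicit)
        else:
            init_func(self, *args, **kwargs)
    return wrapped


class Baseline:
    @configurable
    def __init__(self, backbone, heads):
        self.backbone = backbone
        self.heads = heads

    @classmethod
    def from_config(cls, cfg):
        return {"backbone": cfg.MODEL.BACKBONE, "heads": cfg.MODEL.HEADS}


cfg = SimpleNamespace(MODEL=SimpleNamespace(BACKBONE="resnet50", HEADS="embedding"))
m1 = Baseline(cfg)                    # everything taken from cfg
m2 = Baseline(cfg, heads="my_heads")  # cfg plus an explicit override
```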
liaoxingyu
6b4b935ce4
fix augmix warning
...
Summary: add array.clone to avoid the warning about the numpy array not being writable
2021-01-29 11:50:33 +08:00
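One common way the "NumPy array is not writable" warning gets avoided (an illustration of the kind of fix described above, not necessarily the exact change in this commit): copy the array before handing it to `torch.from_numpy`, so the resulting tensor owns writable memory.

```python
import numpy as np
import torch

arr = np.arange(12, dtype=np.float32)
arr.setflags(write=False)              # simulate a read-only array coming out of augmentation

tensor = torch.from_numpy(arr.copy())  # copying first avoids the "not writable" warning
# Note: from_numpy(arr).clone() also gives the tensor its own writable storage,
# but the warning is emitted by from_numpy itself, so copying up front is cleaner.
```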
liaoxingyu
15e1729a27
update fastreid V1.0
2021-01-18 11:36:38 +08:00
liaoxingyu
766f309d03
feat: update pairwise cosface and pairwise circle loss
2020-12-22 15:51:49 +08:00
liaoxingyu
a327a70f0d
v0.3 update
...
Summary:
1. change DDP training to the apex way;
2. make the warmup scheduler step by iter and the lr scheduler step by epoch (see the sketch below);
3. replace random erasing with the torchvision implementation;
4. rename options in the config file
2020-12-07 14:19:20 +08:00
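A minimal sketch of item 2 above, with assumed values and wiring (not the actual fast-reid trainer): a linear warmup factor is stepped once per iteration, and the regular multi-step schedule is stepped once per epoch after warmup has finished.

```python
import torch
from torch.optim.lr_scheduler import LambdaLR, MultiStepLR

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

warmup_iters = 1000
warmup = LambdaLR(optimizer, lambda it: min(1.0, (it + 1) / warmup_iters))
by_epoch = MultiStepLR(optimizer, milestones=[30, 50], gamma=0.1)

iters_per_epoch, max_epoch = 500, 60
global_iter = 0
for epoch in range(max_epoch):
    for _ in range(iters_per_epoch):
        optimizer.step()          # forward/backward omitted in this sketch
        if global_iter < warmup_iters:
            warmup.step()         # by-iter: only while warming up
        global_iter += 1
    if global_iter >= warmup_iters:
        by_epoch.step()           # by-epoch: once warmup is done
```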
liaoxingyu
ea06ead9e3
fix problem when converting to caffe model ( #261 )
...
Summary: caffe cannot support normalization (subtract mean and divide by std) inside the network, so make the normalization operators in-place so they are not triggered when converting to caffe.
close #261
2020-09-25 11:26:20 +08:00
liaoxingyu
d9a63a959f
fix bug in mgn #272
...
Summary: fix get_norm bug in mgn
2020-09-17 18:17:53 +08:00
liaoxingyu
4d573b8107
refactor reid head
...
Summary: merge BNneckHead, LinearHead and ReductionHead into EmbeddingHead,
because they are highly similar; this also prepares for ClsHead
2020-09-10 10:57:37 +08:00
liaoxingyu
d00ce8fc3c
refactor model arch
2020-09-01 16:14:45 +08:00
liaoxingyu
16655448c2
onnx/trt support
...
Summary: change model pretrain mode and support onnx/TensorRT export
2020-07-29 17:43:39 +08:00
liaoxingyu
3f35eb449d
minor update
2020-07-14 11:58:06 +08:00
liaoxingyu
ea8a3cc534
fix typo
2020-07-10 16:26:35 +08:00
liaoxingyu
fec7abc461
finish v0.2 ddp training
2020-07-06 16:57:43 +08:00
liaoxingyu
56a1ab4a5d
update fast global avgpool
...
Summary: update fast pool according to https://arxiv.org/pdf/2003.13630.pdf
2020-06-12 16:34:03 +08:00
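The trick from the paper, as a short sketch: flatten the spatial dimensions and take a mean, which is typically faster on GPU than `nn.AdaptiveAvgPool2d((1, 1))` while producing the same result.

```python
import torch
from torch import nn


class FastGlobalAvgPool2d(nn.Module):
    def forward(self, x):
        n, c = x.size(0), x.size(1)
        # (N, C, H, W) -> (N, C, H*W) -> mean over the flattened spatial dim
        return x.view(n, c, -1).mean(dim=-1).view(n, c, 1, 1)
```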
liaoxingyu
25a7f82df7
change style in baseline
2020-06-05 11:23:11 +08:00
liaoxingyu
94d85fe11c
fix convert caffe model problem
2020-06-04 16:39:12 +08:00
liaoxingyu
5528d17ace
refactor code
...
Summary: change code style, refactor code, and add an avgmax pooling layer in gem_pool
2020-05-28 13:49:39 +08:00
liaoxingyu
84c733fa85
fix: remove prefetcher, put normalizer in model
...
1. remove the messy data prefetcher, which caused confusion
2. put the normalizer in the model to accelerate training via GPU computation (see the sketch below)
2020-05-25 23:39:11 +08:00
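A minimal sketch of item 2 above, with assumed names and example ImageNet statistics in the 0-255 range: the mean/std live as buffers inside the model, so the subtract/divide runs on the GPU together with the forward pass instead of on the CPU side of the dataloader.

```python
import torch
from torch import nn


class NormalizedModel(nn.Module):
    def __init__(self, backbone,
                 pixel_mean=(123.675, 116.28, 103.53),
                 pixel_std=(58.395, 57.12, 57.375)):
        super().__init__()
        self.backbone = backbone
        # buffers move with the model (.to(device)) and are saved in checkpoints
        self.register_buffer("pixel_mean", torch.tensor(pixel_mean).view(1, 3, 1, 1))
        self.register_buffer("pixel_std", torch.tensor(pixel_std).view(1, 3, 1, 1))

    def forward(self, images):
        images = (images - self.pixel_mean) / self.pixel_std
        return self.backbone(images)
```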
liaoxingyu
9fae467adf
feat(engine/defaults): add DefaultPredictor to get image reid features
...
Add a new predictor interface, and modify demo code to predict image features.
2020-05-08 19:24:27 +08:00
liaoxingyu
a2dcd7b4ab
feat(layers/norm): add ghost batchnorm
...
add a get_norm function to easily switch normalization between batchnorm, ghost BN and group norm
2020-05-01 09:02:46 +08:00
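A sketch of what such a factory can look like (hypothetical signature; Ghost BatchNorm itself is omitted for brevity): pick the normalization layer by a config string so models can switch between variants without code changes.

```python
from torch import nn


def get_norm(norm: str, out_channels: int) -> nn.Module:
    if norm == "BN":
        return nn.BatchNorm2d(out_channels)
    if norm == "GN":
        return nn.GroupNorm(32, out_channels)   # 32 groups is a common default
    if norm == "syncBN":
        return nn.SyncBatchNorm(out_channels)
    raise KeyError(f"Unknown norm type: {norm}")
```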
liaoxingyu
329764bb60
refactor(heads): move num_classes out from heads
...
set the num_classes parameter in meta_arch to make it easy to modify the fc layer of different heads
2020-04-29 21:29:48 +08:00
liaoxingyu
4d3e5fd378
refactor(evaluation): add feature l2 norm in evaluation
...
move the L2 normalization from the module's inference function into reid evaluation,
because sometimes we need the original features generated by the model rather than the normalized ones.
2020-04-27 14:51:39 +08:00
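A small sketch of the evaluation-side idea (hypothetical helper name): keep the raw embeddings produced by the model, and apply L2 normalization only when building the distance matrix for ranking.

```python
import torch
import torch.nn.functional as F


def cosine_distance(query_feats: torch.Tensor, gallery_feats: torch.Tensor) -> torch.Tensor:
    # normalize here, at evaluation time, so the original features stay untouched
    q = F.normalize(query_feats, dim=1)
    g = F.normalize(gallery_feats, dim=1)
    return 1 - q @ g.t()
```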
liaoxingyu
3984f0c91d
refactor($modeling/meta): refactor heads output
...
remove intermediate variables generated by the reid heads to make them more flexible
2020-04-24 12:16:18 +08:00
liaoxingyu
95a3c62ad2
refactor(fastreid)
...
refactor architecture
2020-04-20 10:59:29 +08:00
liaoxingyu
9684500a57
change arch
...
1. change dataset show into separate trainset show and testset show
2. add cls layer to easily plug in circle loss and arcface
2020-04-19 12:54:01 +08:00
liaoxingyu
23bedfce12
update version0.2 code
2020-03-25 10:58:26 +08:00
L1aoXingyu
12957f66aa
Change architecture:
...
1. delete redundant preprocessing
2. add a data prefetcher to accelerate data loading
3. fix a minor bug in the triplet sampler when there is only one image for an id
2020-02-18 21:01:23 +08:00
L1aoXingyu
e01d9b241f
Update AGW baseline result
2020-02-13 20:37:08 +08:00
L1aoXingyu
8a9c0ccfad
Finish first version for fastreid
2020-02-10 22:13:04 +08:00
L1aoXingyu
db6ed12b14
Update sampler code
2020-02-10 07:38:56 +08:00
liaoxingyu
71950d2c09
1. Fix evaluation code
...
2. Finish multi-dataset evaluation
3. Decouple image preprocessing and output postprocessing from the model forward for DataParallel training
4. Finish build backbone registry
5. Fix dataset sampler
2020-01-21 20:24:26 +08:00
liaoxingyu
b761b656f3
Finish basic training loop and evaluation results
2020-01-20 21:33:37 +08:00