liaoxingyu
7ed6240e2c
fix ReidEvaluation excessive memory cost
...
Move the `matches` matrix computation into each iteration to reduce the extra memory cost
#420 #404
2021-06-08 15:41:43 +08:00
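The memory fix above lends itself to a short illustration. Below is a simplified, hypothetical sketch (not the actual fastreid evaluator, which also handles camera filtering and mAP) of building the `matches` row per query inside the loop instead of materializing the full (num_query, num_gallery) boolean matrix up front.

```python
import numpy as np

def rank_cmc(distmat, q_pids, g_pids, max_rank=50):
    """Simplified CMC sketch: compute the per-query `matches` row on the fly
    instead of precomputing the whole (num_q, num_g) boolean matrix."""
    num_q, num_g = distmat.shape
    indices = np.argsort(distmat, axis=1)          # gallery sorted by distance per query
    all_cmc = []
    for q_idx in range(num_q):
        order = indices[q_idx]
        # computed inside the loop -> only one row of `matches` lives in memory
        matches = (g_pids[order] == q_pids[q_idx]).astype(np.int32)
        cmc = matches.cumsum()
        cmc[cmc > 1] = 1
        all_cmc.append(cmc[:max_rank])
    return np.asarray(all_cmc, dtype=np.float32).mean(axis=0)

distmat = np.random.rand(4, 100)
q_pids = np.array([1, 2, 3, 4])
g_pids = np.random.randint(1, 5, size=100)
print(rank_cmc(distmat, q_pids, g_pids, max_rank=10))
```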
liaoxingyu
0572765085
fix for lint_python
2021-05-31 17:40:33 +08:00
liaoxingyu
91ff631184
Minor changes
...
Some minor changes, such as renaming classes, removing extra blank lines, etc.
2021-05-31 17:27:14 +08:00
liaoxingyu
2cabc3428a
Support vision transformer backbone
2021-05-31 17:08:57 +08:00
liaoxingyu
44cee30dfc
update fastreid v1.2
...
Summary:
1. refactor dataloader and heads
2. bugfix in fastattr, fastclas, fastface and partialreid
3. partial-fc supported in fastface
2021-04-02 21:33:13 +08:00
Xingyu Liao
15c556c43a
remove apex dependency (#442)
...
Summary: Use PyTorch 1.6 (or above) built-in AMP training
2021-03-23 12:12:35 +08:00
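For context, the commit above replaces apex with the AMP utilities built into PyTorch 1.6+. A minimal sketch of that training pattern (toy model and data, not fastreid's actual trainer; assumes a CUDA device):

```python
import torch
from torch import nn

model = nn.Linear(128, 10).cuda()                      # toy stand-in for the ReID model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()                   # replaces apex.amp loss scaling

for _ in range(10):
    images = torch.randn(32, 128, device="cuda")
    targets = torch.randint(0, 10, (32,), device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():                    # mixed-precision forward pass
        loss = criterion(model(images), targets)
    scaler.scale(loss).backward()                      # scaled backward to avoid fp16 underflow
    scaler.step(optimizer)
    scaler.update()
```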
Xingyu Liao
883fd4aede
add configurable decorator & decouple linear and loss (#441)
...
Summary: Add a configurable decorator so `Baseline` can be called as `Baseline(cfg)` or `Baseline(cfg, heads=heads, ...)`
Decouple linear and loss computation for partial-fc support.
Reviewed By: l1aoxingyu
2021-03-23 12:10:06 +08:00
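A minimal sketch of what such a configurable decorator can look like. The `from_config` hook and the cfg-detection heuristic are assumptions for illustration; only the `Baseline(cfg)` / `Baseline(cfg, heads=...)` calling convention comes from the commit message.

```python
import functools
from types import SimpleNamespace

def configurable(init_fn):
    """If the first positional argument looks like a config, translate it to
    kwargs via the class's `from_config`; otherwise pass kwargs straight through."""
    @functools.wraps(init_fn)
    def wrapped(self, *args, **kwargs):
        if args and hasattr(args[0], "MODEL"):          # crude "is this a cfg?" check
            explicit = dict(type(self).from_config(args[0]), **kwargs)
            init_fn(self, **explicit)
        else:
            init_fn(self, *args, **kwargs)
    return wrapped

class Baseline:
    @configurable
    def __init__(self, *, backbone, heads):
        self.backbone, self.heads = backbone, heads

    @classmethod
    def from_config(cls, cfg):
        return {"backbone": cfg.MODEL.BACKBONE, "heads": cfg.MODEL.HEADS}

cfg = SimpleNamespace(MODEL=SimpleNamespace(BACKBONE="resnet50", HEADS="embedding_head"))
m1 = Baseline(cfg)                                      # built entirely from cfg
m2 = Baseline(cfg, heads="arcface_head")                # explicit kwargs override cfg
m3 = Baseline(backbone="resnet50", heads="arcface_head")
```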
liaoxingyu
15e1729a27
update fastreid V1.0
2021-01-18 11:36:38 +08:00
liaoxingyu
fe2e46d40e
fix arcSoftmax fp16 training problem
...
Summary: fix fp16 training when using arcSoftmax by aligning the data types
2020-12-28 14:45:26 +08:00
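The dtype-alignment idea can be sketched as follows; the helper name and shapes are hypothetical, the point is simply to cast the fp32 weight to the fp16 feature dtype before the cosine computation so the matmul does not mix precisions.

```python
import torch
import torch.nn.functional as F

def cosine_logits(features, weight):
    # align dtypes: under fp16 training the features arrive as half while the
    # classifier weight is still a float32 parameter
    weight = weight.to(features.dtype)
    return F.linear(F.normalize(features), F.normalize(weight))

if torch.cuda.is_available():
    feats = torch.randn(8, 512, device="cuda").half()    # fp16 features
    w = torch.randn(751, 512, device="cuda")              # fp32 weight
    print(cosine_logits(feats, w).dtype)                   # torch.float16
```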
liaoxingyu
a327a70f0d
v0.3 update
...
Summary:
1. switch DDP training to the apex style;
2. step the warmup scheduler by iteration and the lr scheduler by epoch (sketched below);
3. replace random erasing with the torchvision implementation;
4. naming changes in the config files
2020-12-07 14:19:20 +08:00
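Points 2 and 3 above can be illustrated with a short, hypothetical training-loop skeleton: the warmup is stepped per iteration by writing the learning rate directly, the main schedule is stepped per epoch, and random erasing comes from torchvision. The numbers and milestones are made up.

```python
import torch
from torch import nn, optim
from torchvision import transforms

model = nn.Linear(10, 2)
base_lr = 0.1
optimizer = optim.SGD(model.parameters(), lr=base_lr)
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[40, 70], gamma=0.1)

# random erasing taken from torchvision instead of a custom implementation
erase = transforms.RandomErasing(p=0.5)
img = erase(torch.rand(3, 256, 128))                     # works directly on tensor images

iters_per_epoch, warmup_iters = 100, 500                 # illustrative numbers
for epoch in range(90):
    for it in range(iters_per_epoch):
        step = epoch * iters_per_epoch + it
        if step < warmup_iters:
            # linear warmup, advanced every iteration
            for group in optimizer.param_groups:
                group["lr"] = base_lr * (step + 1) / warmup_iters
        # ... forward / backward / optimizer.step() would go here ...
    scheduler.step()                                      # main schedule advances per epoch
```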
liaoxingyu
bd395917a8
fix splat layer problem (#297)
...
Summary: fix get_norm problem in splat.py
2020-10-09 11:21:36 +08:00
liaoxingyu
3d1bae9f13
fix triplet loss backward propagation in multi-GPU training (#82)
...
Summary: torch.distributed.all_gather does not propagate gradients to the provided tensors, so use `GatherLayer` instead.
2020-09-28 17:16:51 +08:00
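The `GatherLayer` trick is a common pattern; a sketch of it is below (assumes an initialized `torch.distributed` process group, and may differ in detail from fastreid's version): forward performs the all_gather, and backward hands each rank the gradient slice belonging to its local tensor.

```python
import torch
import torch.distributed as dist

class GatherLayer(torch.autograd.Function):
    """all_gather with a backward pass: plain dist.all_gather returns tensors
    that are detached from the autograd graph."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        gathered = [torch.zeros_like(x) for _ in range(dist.get_world_size())]
        dist.all_gather(gathered, x)
        return tuple(gathered)

    @staticmethod
    def backward(ctx, *grads):
        (x,) = ctx.saved_tensors
        grad_x = torch.zeros_like(x)
        grad_x[:] = grads[dist.get_rank()]       # keep only the local slice's gradient
        return grad_x

# usage inside a loss: features from all GPUs, with gradients flowing back
# all_features = torch.cat(GatherLayer.apply(local_features), dim=0)
```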
liaoxingyu
c9824be7e1
fix non-local inter_channels
2020-09-10 11:40:13 +08:00
liaoxingyu
296fbea989
extend gem pool
2020-09-10 11:01:22 +08:00
liaoxingyu
1b84348619
remove `num_splits` in batchnorm
...
Summary: `num_splits` works for GhostBN, but it is rarely used
2020-09-10 11:01:07 +08:00
liaoxingyu
d00ce8fc3c
refactor model arch
2020-09-01 16:14:45 +08:00
liaoxingyu
ac8409a7da
update for PyTorch 1.6
2020-08-20 15:51:41 +08:00
liaoxingyu
16655448c2
onnx/trt support
...
Summary: change the model pretraining mode and support ONNX/TensorRT export
2020-07-29 17:43:39 +08:00
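As a generic illustration of the ONNX side (not fastreid's actual export tool), exporting a toy model with a dynamic batch dimension looks roughly like this; the resulting .onnx file can then be consumed by TensorRT:

```python
import torch
from torch import nn

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU()).eval()   # toy stand-in for the backbone
dummy = torch.randn(1, 3, 256, 128)                            # typical ReID input size

torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["images"], output_names=["features"],
    dynamic_axes={"images": {0: "batch"}, "features": {0: "batch"}},
    opset_version=11,
)
```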
liaoxingyu
3b57dea49f
support regnet backbone
2020-07-17 19:13:45 +08:00
liaoxingyu
fec7abc461
finish v0.2 ddp training
2020-07-06 16:57:43 +08:00
liaoxingyu
f10ce253f1
refactor arcface and circle loss
...
#111
2020-06-22 11:56:39 +08:00
liaoxingyu
56a1ab4a5d
update fast global avgpool
...
Summary: update fast pool according to https://arxiv.org/pdf/2003.13630.pdf
2020-06-12 16:34:03 +08:00
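The "fast" global average pooling from that paper flattens the spatial dimensions and takes a mean instead of calling adaptive average pooling; a sketch (the class name is illustrative):

```python
import torch
from torch import nn

class FastGlobalAvgPool(nn.Module):
    """Global average pooling via flatten + mean, which is typically faster
    than F.adaptive_avg_pool2d for a 1x1 output (see arXiv:2003.13630)."""

    def __init__(self, flatten=False):
        super().__init__()
        self.flatten = flatten

    def forward(self, x):
        pooled = x.view(x.size(0), x.size(1), -1).mean(dim=-1)
        return pooled if self.flatten else pooled.view(x.size(0), x.size(1), 1, 1)

x = torch.randn(8, 2048, 16, 8)
assert FastGlobalAvgPool()(x).shape == (8, 2048, 1, 1)
```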
liaoxingyu
85672b1e75
add circle & arcface layer info
...
Summary: show num_features and num_classes in the circle & arcface layers, like nn.Linear does
2020-05-31 15:50:56 +08:00
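Showing num_features and num_classes "like nn.Linear" boils down to overriding `extra_repr`; a hedged sketch with a hypothetical head class:

```python
import torch
from torch import nn

class ArcfaceHead(nn.Module):                  # hypothetical name, for illustration only
    def __init__(self, num_features, num_classes):
        super().__init__()
        self.num_features, self.num_classes = num_features, num_classes
        self.weight = nn.Parameter(torch.empty(num_classes, num_features))
        nn.init.xavier_uniform_(self.weight)

    def extra_repr(self):
        # printed inside repr(), like nn.Linear's "in_features=..., out_features=..."
        return f"num_features={self.num_features}, num_classes={self.num_classes}"

print(ArcfaceHead(512, 751))                   # ArcfaceHead(num_features=512, num_classes=751)
```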
liaoxingyu
5528d17ace
refactor code
...
Summary: change code style, refactor code, and add an avgmax pooling layer to gem_pool
2020-05-28 13:49:39 +08:00
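For reference, GeM pooling and an avg+max variant can be sketched as below; this is a generic illustration, not necessarily the exact layer added to gem_pool.

```python
import torch
from torch import nn
import torch.nn.functional as F

class GeneralizedMeanPooling(nn.Module):
    """GeM pooling: ((1/|X|) * sum x^p)^(1/p); p=1 is average pooling,
    large p approaches max pooling. Here p is a learnable parameter."""

    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))
        self.eps = eps

    def forward(self, x):
        x = x.clamp(min=self.eps).pow(self.p)
        return F.adaptive_avg_pool2d(x, 1).pow(1.0 / self.p)

class AvgMaxPool(nn.Module):
    """Sum of global average pooling and global max pooling."""

    def forward(self, x):
        return F.adaptive_avg_pool2d(x, 1) + F.adaptive_max_pool2d(x, 1)

x = torch.rand(2, 512, 16, 8)
print(GeneralizedMeanPooling()(x).shape, AvgMaxPool()(x).shape)   # both (2, 512, 1, 1)
```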
liaoxingyu
b28c0032e8
fix: add monkey-patching to enable syncBN
...
add a trigger to make syncBN work
2020-05-15 13:33:33 +08:00
liaoxingyu
bf18479541
fix: revise syncBN bug
2020-05-14 14:52:37 +08:00
liaoxingyu
0872a32621
feat: add syncBN support
2020-05-14 13:15:09 +08:00
liaoxingyu
0356ef8c5c
feat: add SyncBN and GroupNorm support
2020-05-14 11:36:28 +08:00
liaoxingyu
a2dcd7b4ab
feat(layers/norm): add ghost batchnorm
...
add a get_norm function to easily switch normalization between batchnorm, ghost bn and group norm
2020-05-01 09:02:46 +08:00
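A minimal sketch of what such a get_norm factory can look like (GhostBN needs a custom module and is omitted here; the string keys are assumptions):

```python
from torch import nn

def get_norm(norm, out_channels):
    """Map a config string to a normalization layer so models can switch
    between plain BN, SyncBN and GroupNorm from the config."""
    if isinstance(norm, str):
        norm = {
            "BN": nn.BatchNorm2d,
            "syncBN": nn.SyncBatchNorm,
            "GN": lambda c: nn.GroupNorm(32, c),
        }[norm]
    return norm(out_channels)

bn = get_norm("BN", 64)
gn = get_norm("GN", 64)
```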
liaoxingyu
ec19bcc1d3
style(configs): put all config files together
...
put all config files in one place for easier control,
and add a tools directory for train_net.py, which is
almost the same across different projects
2020-04-29 16:18:54 +08:00
liaoxingyu
8abd3bab03
feat($layers): add new act func
...
add support for mish and gelu
2020-04-24 12:17:00 +08:00
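GELU ships with PyTorch; Mish is small enough to sketch directly as x * tanh(softplus(x)).

```python
import torch
from torch import nn
import torch.nn.functional as F

class Mish(nn.Module):
    """Mish activation: x * tanh(softplus(x))."""

    def forward(self, x):
        return x * torch.tanh(F.softplus(x))

x = torch.randn(4)
print(Mish()(x))
print(nn.GELU()(x))        # GELU is built into torch.nn
```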
liaoxingyu
95a3c62ad2
refactor(fastreid)
...
refactor architecture
2020-04-20 10:59:29 +08:00
liaoxingyu
9684500a57
change arch
...
1. change dataset show to show the trainset and testset separately
2. add cls layer to easily plug in circle loss and arcface
2020-04-19 12:54:01 +08:00
liaoxingyu
9cf222e093
refactor bn_no_bias
2020-04-08 21:04:09 +08:00
liaoxingyu
23bedfce12
update version 0.2 code
2020-03-25 10:58:26 +08:00
L1aoXingyu
8a9c0ccfad
Finish first version for fastreid
2020-02-10 22:13:04 +08:00
L1aoXingyu
db6ed12b14
Update sampler code
2020-02-10 07:38:56 +08:00
liaoxingyu
b761b656f3
Finish basic training loop and evaluation results
2020-01-20 21:33:37 +08:00