liaoxingyu
bf18479541
fix: revise syncBN bug
2020-05-14 14:52:37 +08:00
liaoxingyu
0872a32621
feat: add syncBN support
2020-05-14 13:15:09 +08:00
liaoxingyu
cd7a4e9be7
add projects folder
2020-05-14 12:52:50 +08:00
liaoxingyu
0356ef8c5c
feat: add SyncBN and GroupNorm support
2020-05-14 11:36:28 +08:00
liaoxingyu
5ae3d4fecf
feat: add aqe support in test phase
...
Query expansion combines the retrieved top-k nearest neighbors with the original query feature;
it improves mAP by a large margin.
2020-05-13 16:27:22 +08:00
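For readers unfamiliar with AQE, a minimal sketch of average query expansion, assuming L2-normalized query and gallery feature matrices; the repo's implementation and its config options may differ.

```python
import torch
import torch.nn.functional as F

def average_query_expansion(query_feats, gallery_feats, qe_k=5):
    """Replace each query feature with the mean of itself and its top-k
    nearest gallery neighbors, then re-normalize.

    query_feats: (num_query, dim), gallery_feats: (num_gallery, dim),
    both assumed to be L2-normalized already.
    """
    sim = query_feats @ gallery_feats.t()                 # cosine similarity
    topk_idx = sim.topk(qe_k, dim=1).indices              # (num_query, qe_k)
    topk_feats = gallery_feats[topk_idx]                  # (num_query, qe_k, dim)
    expanded = torch.cat([query_feats.unsqueeze(1), topk_feats], dim=1).mean(dim=1)
    return F.normalize(expanded, dim=1)
```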
liaoxingyu
320010f2ae
feat: support re-rank in test phase
2020-05-13 11:47:52 +08:00
liaoxingyu
e502fadba9
docs: update README
2020-05-12 23:00:15 +08:00
Xingyu Liao
01d940bfdd
update vehicle dataset
...
Summary: Pull Request resolved: #49
Reviewed By: Xingyu Liao
2020-05-12 21:44:05 +08:00
liaoxingyu
9addfb0ae2
feat: support visualizing label list
...
add features to support label list visualization, which can be used
for label correction or checking the hardest samples
2020-05-12 21:35:33 +08:00
Jinkai Zheng
640c9bfc97
Add small, medium and large vehicle test datasets
2020-05-11 01:34:19 -05:00
Jinkai Zheng
e059b751a6
Add modified vehicle datasets with small, medium and large test datasets
2020-05-11 01:30:56 -05:00
liaoxingyu
9b6fda3830
style: remove title in visualization
2020-05-11 14:12:29 +08:00
liaoxingyu
13bb03eb07
feat: add rank result visualization tools
...
Update the visualization tools to save rank lists with the AP metric sorted from high to low, or vice versa.
To compute AP quickly in the visualizer, modify rank_cylib to return all per-query APs (all_AP) instead of only mAP.
In this way, the results can be computed with Cython.
2020-05-10 23:17:10 +08:00
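For context, a pure-Python sketch of the per-query AP that rank_cylib computes in Cython (camera-based filtering of junk matches is omitted here); mAP is simply all_AP.mean(), so returning the full vector is what lets the visualizer sort rank lists by AP.

```python
import numpy as np

def compute_all_AP(distmat, q_pids, g_pids):
    """Return the average precision of every query; mAP = all_AP.mean()."""
    all_AP = []
    for dist, q_pid in zip(distmat, q_pids):
        order = np.argsort(dist)                          # gallery sorted by distance
        matches = (g_pids[order] == q_pid).astype(np.float32)
        if matches.sum() == 0:                            # query without any gallery match
            all_AP.append(0.0)
            continue
        cum_hits = np.cumsum(matches)
        precision = cum_hits / (np.arange(len(matches)) + 1.0)
        all_AP.append(float((precision * matches).sum() / matches.sum()))
    return np.asarray(all_AP)
```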
liaoxingyu
651e6ba9c4
feat: support multiprocess predictor
...
add AsyncPredictor to support multiprocess feature extraction with a dataloader
2020-05-09 18:23:36 +08:00
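A rough sketch of the idea, assuming a CPU model for simplicity: worker processes pull batches from a queue and run feature extraction in parallel with data loading. This is illustrative only; the repo's AsyncPredictor may be organized differently.

```python
import torch
import torch.multiprocessing as mp

def _worker(model, task_queue, result_queue):
    """Each worker holds a copy of the model and extracts features for queued batches."""
    model.eval()
    with torch.no_grad():
        while True:
            task = task_queue.get()
            if task is None:                              # poison pill: shut down
                break
            idx, batch = task
            result_queue.put((idx, model(batch).cpu()))

def async_extract(model, batches, num_workers=2):
    """Feed batches to the workers and collect features in submission order."""
    task_q, result_q = mp.Queue(), mp.Queue()
    workers = [mp.Process(target=_worker, args=(model, task_q, result_q))
               for _ in range(num_workers)]
    for w in workers:
        w.start()
    for idx, batch in enumerate(batches):
        task_q.put((idx, batch))
    for _ in workers:
        task_q.put(None)
    results = [result_q.get() for _ in range(len(batches))]
    for w in workers:
        w.join()
    return [feat for _, feat in sorted(results, key=lambda r: r[0])]
```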
liaoxingyu
4be4cacb73
fix: add a simple way to reset the data prefetcher when resuming training
...
use the data prefetcher's built-in reset function to reload it rather than
defining a new data prefetcher; otherwise it introduces other
problems in eval-only mode.
2020-05-09 11:58:27 +08:00
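A toy illustration of the design choice (names are illustrative, not the repo's): the prefetcher exposes a reset() that rebuilds its iterator and preloads again, so resuming does not require constructing a new prefetcher object.

```python
class DataPrefetcher:
    """Wraps a dataloader and always keeps the next batch preloaded."""

    def __init__(self, loader):
        self.loader = loader
        self.reset()

    def reset(self):
        # Rebuild the iterator and preload the first batch; called on resume
        # instead of constructing a brand-new prefetcher.
        self._iter = iter(self.loader)
        self._preload()

    def _preload(self):
        try:
            self.next_batch = next(self._iter)
        except StopIteration:
            self.next_batch = None

    def next(self):
        batch = self.next_batch
        self._preload()
        return batch
```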
liaoxingyu
9fae467adf
feat(engine/defaults): add DefaultPredictor to get image reid features
...
Add a new predictor interface and modify the demo code to extract image features.
2020-05-08 19:24:27 +08:00
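A hedged usage sketch of the new interface; the import path, config file and exact call signature are assumptions based on the commit message, so check fastreid/engine/defaults.py and the demo code for the real API.

```python
import cv2
from fastreid.config import get_cfg
from fastreid.engine import DefaultPredictor   # assumed export location

cfg = get_cfg()
cfg.merge_from_file("configs/Market1501/bagtricks_R50.yml")  # example config path
cfg.MODEL.WEIGHTS = "model_final.pth"                        # example checkpoint

predictor = DefaultPredictor(cfg)
image = cv2.imread("query.jpg")     # BGR image, as in the demo
feature = predictor(image)          # reid feature for this image
print(feature.shape)
```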
liaoxingyu
8ab0bc2455
style(backbone): make parameter-loading logging more elegant
2020-05-08 12:22:06 +08:00
liaoxingyu
0b15ac4e03
feat(hooks&optim): update stochastic weight averaging hooks
...
Update the SWA method, which runs after regular training if
the option is enabled.
2020-05-08 12:20:04 +08:00
liaoxingyu
afac8aad5d
fix(engine): fix preciseBN dataloader bugs
...
preciseBN needs a data prefetcher, but a DataLoader is currently passed
2020-05-06 14:26:34 +08:00
liaoxingyu
948af64fd1
feat: add swa algorithm
...
Add SWA and related config options;
if enabled, the model runs SWA after regular training
2020-05-06 10:17:44 +08:00
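For intuition only, a minimal SWA loop written with PyTorch's later torch.optim.swa_utils (an assumption; this repo ships its own SWA hook and optimizer): after regular training, keep training a few more epochs while averaging weights, then recompute BN statistics.

```python
import torch
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

def run_swa(model, optimizer, train_loader, swa_epochs=10, swa_lr=0.05):
    """Run SWA after regular training: average weights, then fix BN running stats."""
    swa_model = AveragedModel(model)
    swa_scheduler = SWALR(optimizer, swa_lr=swa_lr)
    for _ in range(swa_epochs):
        for images, targets in train_loader:
            loss = model(images, targets)      # assumes the model returns a loss in training mode
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        swa_model.update_parameters(model)     # accumulate the running weight average
        swa_scheduler.step()
    update_bn(train_loader, swa_model)         # recompute BN statistics for the averaged model
    return swa_model
```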
liaoxingyu
9d9a1f4f2d
update model zoo results
2020-05-06 09:58:49 +08:00
liaoxingyu
6d96529d4c
fix(data): fix resume training bug
...
fix the dataset pid dictionary loading bug when resuming training:
the data prefetcher pre-loads a batch of data, which leads to
a misalignment between the old pid dict and the updated pid dict.
We address this by redefining the prefetcher in resume_or_load
2020-05-05 23:20:42 +08:00
L1aoXingyu
35076d5cf5
update model zoo
2020-05-04 14:36:16 +08:00
liaoxingyu
fcc1e04f5c
style(model_zoo): add bot, agw, sbs as built-in models
2020-05-01 09:56:33 +08:00
liaoxingyu
46228ce946
chore(configs): update all training configs
2020-05-01 09:04:51 +08:00
liaoxingyu
a2dcd7b4ab
feat(layers/norm): add ghost batchnorm
...
add a get_norm function to easily switch normalization between batch norm, ghost BN and group norm
2020-05-01 09:02:46 +08:00
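A minimal sketch of a get_norm factory in the spirit of this commit; the repo's actual get_norm and GhostBatchNorm may differ in details (per-split running statistics, sync BN, frozen BN, etc.).

```python
import torch
from torch import nn

class GhostBatchNorm(nn.BatchNorm2d):
    """Compute BN statistics over small virtual (ghost) batches instead of the full batch."""

    def __init__(self, num_features, num_splits=4, **kwargs):
        super().__init__(num_features, **kwargs)
        self.num_splits = num_splits

    def forward(self, x):
        if self.training:
            chunks = x.chunk(self.num_splits, dim=0)
            return torch.cat([super(GhostBatchNorm, self).forward(c) for c in chunks], dim=0)
        return super().forward(x)

def get_norm(norm, out_channels):
    """Map a config string to a normalization layer."""
    if norm == "BN":
        return nn.BatchNorm2d(out_channels)
    if norm == "GhostBN":
        return GhostBatchNorm(out_channels)
    if norm == "GN":
        return nn.GroupNorm(32, out_channels)   # assumes out_channels divisible by 32
    raise KeyError(f"Unknown norm type: {norm}")
```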
liaoxingyu
329764bb60
refactor(heads): move num_classes out from heads
...
set the num_classes parameter in meta_arch to easily configure the fc layer of different heads
2020-04-29 21:29:48 +08:00
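A toy sketch of the refactor's intent (class and argument names are illustrative): the meta architecture owns num_classes, typically taken from the dataset, and passes it to whichever head it builds, so heads no longer read it themselves.

```python
from torch import nn

class EmbeddingHead(nn.Module):
    """The head just receives num_classes instead of looking it up itself."""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_classes, bias=False)

    def forward(self, features):
        return self.classifier(features)

class Baseline(nn.Module):
    """The meta architecture decides num_classes once and hands it to the head."""
    def __init__(self, backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone               # assumed to output pooled feature vectors
        self.heads = EmbeddingHead(feat_dim, num_classes)

    def forward(self, images):
        return self.heads(self.backbone(images))
```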
liaoxingyu
907798c8c9
Merge branch 'master' of github.com:L1aoXingyu/fast-reid
2020-04-29 21:07:57 +08:00
liaoxingyu
2327a5565f
chore(configs): update Market1501 training config
2020-04-29 21:06:57 +08:00
liaoxingyu
d27729c5bb
refactor(preciseBN): show preciseBN datasets
2020-04-29 21:05:53 +08:00
Xingyu Liao
8256f8f37e
Merge pull request #45 from JinkaiZheng/patch-2
...
update vehicle reid results
2020-04-29 18:40:07 +08:00
Jinkai Zheng
8fcec375f9
Update MODEL_ZOO.md
2020-04-29 18:33:11 +08:00
liaoxingyu
fdaf82cd62
chore: update model zoo
2020-04-29 18:15:27 +08:00
liaoxingyu
6cee977748
chore(model_zoo): fix typo
2020-04-29 16:22:02 +08:00
liaoxingyu
ec19bcc1d3
style(configs): put all config files together
...
put all config files in one place for easier control,
and add a tools folder for train_net.py, which is almost the same across
different projects
2020-04-29 16:18:54 +08:00
liaoxingyu
e38a799b63
fix(engine/defaults): fix precise bn bug
...
fix a problem in precise BN, which did not use the precise BN datasets and threw some errors
2020-04-29 16:16:54 +08:00
liaoxingyu
18fd7faff7
docs(stronger baseline): update stronger baseline results
...
update stronger baseline results on Market1501, DukeMTMC and MSMT17
2020-04-28 21:02:21 +08:00
liaoxingyu
09e8d6afbb
Merge branch 'master' of github.com:L1aoXingyu/fast-reid
2020-04-28 11:55:31 +08:00
liaoxingyu
10567bd026
feat(stronger baseline): add solid tricks for a stronger baseline
...
add 7 tricks on top of BagofTricks; see projects/StrongerBaseline/README.md
2020-04-28 11:53:26 +08:00
Xingyu Liao
8352ce97d3
Merge pull request #44 from zjk15068083791/jinkai
...
add vehicle reid dataset
2020-04-27 16:35:44 +08:00
zjk15068083791
2f3f6e3267
Add files via upload
2020-04-27 16:27:44 +08:00
liaoxingyu
5daf322ac6
fix(data/samplers): fix bug in triplet sampler
...
use drop indices to avoid two groups with the same id,
but the case drop indices = 0 was not considered, which leads to
identities[:0] and a non-terminating loop
2020-04-27 15:25:29 +08:00
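A small illustration of the failure mode and the fix (variable names are illustrative): when the number of indices to drop is 0, slicing with [:-0] yields an empty list and the sampling loop never terminates, so the zero case must be guarded.

```python
def drop_tail(identities, drop_indices):
    """Drop the last `drop_indices` identities to avoid two groups with the same id.

    identities[:-0] evaluates to an empty list, which previously made the
    sampler loop forever; only slice when there is actually something to drop.
    """
    if drop_indices == 0:
        return identities
    return identities[:-drop_indices]
```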
liaoxingyu
325d9abb76
feat($solver): change scheduler call methods
...
use the name of the lr scheduler in the config to call it
2020-04-27 15:12:01 +08:00
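A minimal sketch of the pattern, assuming the schedulers are resolved from torch.optim.lr_scheduler; the repo may instead look the name up in its own solver package.

```python
from torch.optim import lr_scheduler

def build_lr_scheduler(optimizer, sched_name, **sched_kwargs):
    """Resolve the scheduler class from the name stored in the config."""
    sched_cls = getattr(lr_scheduler, sched_name)   # e.g. "MultiStepLR", "CosineAnnealingLR"
    return sched_cls(optimizer, **sched_kwargs)

# e.g. build_lr_scheduler(opt, "MultiStepLR", milestones=[40, 90], gamma=0.1)
```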
liaoxingyu
9e3f2c1e7a
fix($data/transforms): change augmix augmentation pool
...
change from augmentation_reid to augmentation_all in augmix,
because we found that AutoAugment with the ImageNet policy does not harm performance
2020-04-27 15:06:27 +08:00
liaoxingyu
4d3e5fd378
refactor(evaluation): add feature l2 norm in evaluation
...
Move the L2 norm from the module's inference function to reid evaluation,
because sometimes we need the original features generated by the model rather than the normalized ones.
2020-04-27 14:51:39 +08:00
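The change boils down to normalizing inside evaluation instead of in the model's inference path, so callers can still get the raw features; a minimal sketch:

```python
import torch
import torch.nn.functional as F

def build_distmat(query_feats, gallery_feats, metric="cosine"):
    """Normalize only here, so the raw model features stay available to callers."""
    if metric == "cosine":
        query_feats = F.normalize(query_feats, dim=1)
        gallery_feats = F.normalize(gallery_feats, dim=1)
        return 1 - query_feats @ gallery_feats.t()
    return torch.cdist(query_feats, gallery_feats)   # plain euclidean on raw features
```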
liaoxingyu
9910bb9158
fix($modeling/heads): fix targets missing bug
...
fix a bug in heads that returned outputs without targets.
2020-04-27 14:49:58 +08:00
liaoxingyu
2efbc6d371
fix($modeling/heads/bnneck_head): fix heads outputs bug
...
fix a bug in the heads outputs that led to no targets being returned.
2020-04-27 11:48:21 +08:00
liaoxingyu
a6bd0371e2
feat($data): add autoaugment
...
Add AutoAugment support with the ImageNet and CIFAR10 policies.
Modify the transforms and config code to adapt to this augmentation.
2020-04-27 11:41:12 +08:00
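For orientation, the same intent expressed with torchvision's later built-in AutoAugment (an assumption; this commit adds the repo's own implementation of the ImageNet and CIFAR10 policies inside its transforms module).

```python
from torchvision import transforms
from torchvision.transforms import AutoAugment, AutoAugmentPolicy   # torchvision >= 0.10

build_transforms = transforms.Compose([
    transforms.Resize((256, 128)),                    # typical reid input size
    AutoAugment(policy=AutoAugmentPolicy.IMAGENET),   # or AutoAugmentPolicy.CIFAR10
    transforms.ToTensor(),
])
```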
liaoxingyu
8abd3bab03
feat($layers): add new act func
...
add Mish and GELU support
2020-04-24 12:17:00 +08:00
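For reference, the two activations written out directly; Mish is x * tanh(softplus(x)), and GELU is available as F.gelu in PyTorch (the repo likely keeps its own modules so they can be selected by name in the config).

```python
import torch
from torch import nn
import torch.nn.functional as F

class Mish(nn.Module):
    """Mish(x) = x * tanh(softplus(x))."""
    def forward(self, x):
        return x * torch.tanh(F.softplus(x))

class GELU(nn.Module):
    """Thin wrapper so GELU can be selected like the other activations."""
    def forward(self, x):
        return F.gelu(x)
```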
liaoxingyu
3984f0c91d
refactor($modeling/meta): refactor heads output
...
remove intermediate variables generated by the reid heads to make it more flexible
2020-04-24 12:16:18 +08:00