Commit Graph

98 Commits (6cbca133a822b2f9eeb7427b71ead75b0360c0e4)

Author SHA1 Message Date
liaoxingyu 4be4cacb73 fix: add a simple way to reset data prefetcher when resuming training
use the data prefetcher's built-in reset function to reload it rather than
defining a new data prefetcher; otherwise it introduces other
problems in eval-only mode.
2020-05-09 11:58:27 +08:00
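A minimal sketch of the reset pattern this commit describes, assuming a CUDA device and a prefetcher that stages batches of tensors on a side stream (class and method names here are illustrative, not fastreid's actual API):

```python
import torch

class DataPrefetcher:
    """Wraps a DataLoader and pre-loads the next batch on a side CUDA stream."""

    def __init__(self, loader):
        self.loader = loader
        self.stream = torch.cuda.Stream()
        self.reset()

    def reset(self):
        # Restart from the beginning of the underlying loader. Calling this on
        # resume avoids rebuilding the prefetcher from scratch, which the
        # commit notes caused problems in eval-only mode.
        self.loader_iter = iter(self.loader)
        self._preload()

    def _preload(self):
        try:
            batch = next(self.loader_iter)
        except StopIteration:
            self.next_batch = None
            return
        with torch.cuda.stream(self.stream):
            self.next_batch = [t.cuda(non_blocking=True) for t in batch]

    def next(self):
        torch.cuda.current_stream().wait_stream(self.stream)
        batch = self.next_batch
        self._preload()
        return batch
```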
liaoxingyu 9fae467adf feat(engine/defaults): add DefaultPredictor to get image reid features
Add a new predictor interface and modify the demo code to predict image features.
2020-05-08 19:24:27 +08:00
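A rough sketch of what such a predictor interface can look like; the constructor arguments and preprocessing assumptions are hypothetical, not the real DefaultPredictor signature:

```python
import torch

class DefaultPredictor:
    """Runs a trained model on single images and returns reid features."""

    def __init__(self, model, device="cuda"):
        self.model = model.to(device).eval()
        self.device = device

    @torch.no_grad()
    def __call__(self, image):
        # image: float tensor of shape (C, H, W), already resized/normalized
        batch = image.unsqueeze(0).to(self.device)
        features = self.model(batch)
        return features.cpu()
```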
liaoxingyu 8ab0bc2455 style(backbone): make parameters loading logging more elegant 2020-05-08 12:22:06 +08:00
liaoxingyu 0b15ac4e03 feat(hooks&optim): update stochastic weight averaging hooks
Update the SWA method, which runs after regular training
when this option is enabled.
2020-05-08 12:20:04 +08:00
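For reference, the core of SWA is a running average of weights collected after regular training finishes. A minimal sketch, assuming a hypothetical train_one_epoch driver:

```python
import copy
import torch

@torch.no_grad()
def update_swa(swa_model, model, n_averaged):
    """One SWA step: w_swa += (w - w_swa) / (n_averaged + 1)."""
    for p_swa, p in zip(swa_model.parameters(), model.parameters()):
        p_swa.add_(p.detach() - p_swa, alpha=1.0 / (n_averaged + 1))
    return n_averaged + 1

# Hypothetical driver: after regular training, keep training with an SWA
# learning rate and fold each epoch's weights into the running average.
# swa_model = copy.deepcopy(model)
# n = 0
# for _ in range(swa_epochs):
#     train_one_epoch(model)
#     n = update_swa(swa_model, model, n)
# BN running statistics of swa_model must then be recomputed (e.g. via preciseBN).
```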
liaoxingyu afac8aad5d fix(engine): fix preciseBN dataloader bugs
preciseBN needs to be passed a data prefetcher, but a DataLoader was being passed
2020-05-06 14:26:34 +08:00
liaoxingyu 948af64fd1 feat: add swa algorithm
Add SWA and related config options;
when enabled, the model performs SWA after regular training
2020-05-06 10:17:44 +08:00
liaoxingyu 6d96529d4c fix(data): fix resume training bug
fix a dataset pid dictionary loading bug when resuming training:
the data prefetcher pre-loads a batch of data, which leads to
a misalignment between the old pid dict and the updated pid dict.
We address this by redefining the prefetcher in resume_or_load
2020-05-05 23:20:42 +08:00
liaoxingyu a2dcd7b4ab feat(layers/norm): add ghost batchnorm
add a get_norm function to easily switch normalization between batch norm, ghost BN, and group norm
2020-05-01 09:02:46 +08:00
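A sketch of the idea: ghost BN normalizes small splits of the batch independently, and a get_norm helper dispatches on a config string (the key names below are illustrative, not fastreid's actual ones):

```python
import torch
from torch import nn

class GhostBatchNorm(nn.BatchNorm2d):
    """Normalizes small 'ghost' splits of the batch independently during training."""

    def __init__(self, num_features, num_splits=4, **kwargs):
        super().__init__(num_features, **kwargs)
        self.num_splits = num_splits

    def forward(self, x):
        if self.training:
            # Each chunk sees its own batch statistics (running stats are
            # updated once per chunk, acceptable for a sketch).
            chunks = x.chunk(self.num_splits, dim=0)
            return torch.cat(
                [super(GhostBatchNorm, self).forward(c) for c in chunks], dim=0)
        return super().forward(x)

def get_norm(norm, out_channels):
    # Illustrative name-to-layer mapping; 32 groups assumes the channel
    # count is divisible by 32.
    return {
        "BN": nn.BatchNorm2d,
        "GhostBN": GhostBatchNorm,
        "GN": lambda c: nn.GroupNorm(32, c),
    }[norm](out_channels)
```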
liaoxingyu 329764bb60 refactor(heads): move num_classes out of heads
set the num_classes parameter in meta_arch to easily modify the fc layer of different heads
2020-04-29 21:29:48 +08:00
liaoxingyu d27729c5bb refactor(preciseBN): display the preciseBN datasets 2020-04-29 21:05:53 +08:00
liaoxingyu ec19bcc1d3 style(configs): put all config files together
put all config files in one place for easier management,
and add a tools directory for train_net.py, which is almost the same
across different projects
2020-04-29 16:18:54 +08:00
liaoxingyu e38a799b63 fix(engine/defaults): fix precise bn bug
fix a problem in precise BN, which did not use the precise BN datasets and threw errors
2020-04-29 16:16:54 +08:00
zjk15068083791 2f3f6e3267 Add files via upload 2020-04-27 16:27:44 +08:00
liaoxingyu 5daf322ac6 fix(data/samplers): fix bug in triplet sampler
use drop indices to avoid two groups with the same id,
but the drop indices = 0 case was not considered, which leads to
identities[:0] and a non-terminating loop
2020-04-27 15:25:29 +08:00
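The failure mode is a slice that silently becomes empty. A hypothetical reconstruction of the guard (function and variable names are invented for illustration):

```python
def split_identities(identities, num_ids_per_batch):
    """Drop the remainder so identities group evenly into batches."""
    drop = len(identities) % num_ids_per_batch
    if drop == 0:
        # Without this branch, identities[:-drop] with drop == 0 means
        # identities[:0] == [], and the batch-filling loop never terminates.
        return identities, []
    return identities[:-drop], identities[-drop:]

assert split_identities([1, 2, 3, 4], 2) == ([1, 2, 3, 4], [])
assert split_identities([1, 2, 3, 4, 5], 2) == ([1, 2, 3, 4], [5])
```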
liaoxingyu 325d9abb76 feat($solver): change scheduler call methods
use the lr scheduler name from the config to instantiate it
2020-04-27 15:12:01 +08:00
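A minimal sketch of name-based dispatch, assuming hypothetical config keys like cfg.SOLVER.SCHED and scheduler class names taken from torch.optim.lr_scheduler:

```python
import torch

def build_lr_scheduler(cfg, optimizer):
    """Look up the scheduler class by the name stored in the config."""
    sched_cls = getattr(torch.optim.lr_scheduler, cfg.SOLVER.SCHED)
    if cfg.SOLVER.SCHED == "MultiStepLR":
        return sched_cls(optimizer, milestones=cfg.SOLVER.STEPS,
                         gamma=cfg.SOLVER.GAMMA)
    if cfg.SOLVER.SCHED == "CosineAnnealingLR":
        return sched_cls(optimizer, T_max=cfg.SOLVER.MAX_EPOCH)
    return sched_cls(optimizer)
```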
liaoxingyu 9e3f2c1e7a fix($data/transforms): change augmix augmentation pool
switch augmix from augmentation_reid to augmentation_all,
because we found that AutoAugment with the ImageNet policy does not harm performance
2020-04-27 15:06:27 +08:00
liaoxingyu 4d3e5fd378 refactor(evaluation): add feature l2 norm in evaluation
move the L2 normalization from the model's inference function to the reid evaluation,
because sometimes we need the original features generated by the model rather than the normalized ones.
2020-04-27 14:51:39 +08:00
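The change amounts to normalizing at evaluation time instead of inside the model's forward. A sketch of the evaluation side (the function signature is illustrative):

```python
import torch.nn.functional as F

def compute_dist(query_feats, gallery_feats, normalize=True):
    """Cosine-style distance matrix for reid evaluation.

    Normalizing here instead of in the model keeps the raw features
    available to callers that need them.
    """
    if normalize:
        query_feats = F.normalize(query_feats, p=2, dim=1)
        gallery_feats = F.normalize(gallery_feats, p=2, dim=1)
    # With unit-norm features the inner product is cosine similarity,
    # so 1 - similarity serves as the distance.
    return 1 - query_feats @ gallery_feats.t()
```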
liaoxingyu 9910bb9158 fix($modeling/heads): fix targets missing bug
fix a bug where heads returned outputs without targets.
2020-04-27 14:49:58 +08:00
liaoxingyu 2efbc6d371 fix($modeling/heads/bnneck_head): fix heads outputs bug
fix a bug in the head outputs that caused no targets to be returned.
2020-04-27 11:48:21 +08:00
liaoxingyu a6bd0371e2 feat($data): add autoaugment
add AutoAugment support with the ImageNet and CIFAR10 policies.
modify the transforms and config code to accommodate this augmentation.
2020-04-27 11:41:12 +08:00
liaoxingyu 8abd3bab03 feat($layers): add new act func
add support for mish and gelu
2020-04-24 12:17:00 +08:00
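Mish is simple enough to sketch directly; the get_activation helper below is an invented example of how such a lookup might be wired, not fastreid's actual code:

```python
import torch
from torch import nn
import torch.nn.functional as F

class Mish(nn.Module):
    """Mish activation: x * tanh(softplus(x)) (Misra, 2019)."""

    def forward(self, x):
        return x * torch.tanh(F.softplus(x))

def get_activation(name):
    # Illustrative name-to-module mapping.
    return {"relu": nn.ReLU(inplace=True),
            "gelu": nn.GELU(),
            "mish": Mish()}[name]
```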
liaoxingyu 3984f0c91d refactor($modeling/meta): refactor heads output
remove the intermediate variables generated by the reid heads, making it more flexible
2020-04-24 12:16:18 +08:00
liaoxingyu e3ae03cc58 feat($modeling/backbones): add new backbones
add support for the osnet, resnext, and resnest backbones
2020-04-24 12:14:56 +08:00
liaoxingyu b098b194ba refactor($modeling/meta_arch): remove bdb_network 2020-04-21 11:44:29 +08:00
liaoxingyu 6c9af664dc refactor($modeling/meta_arch): remove unused parts
remove unused meta_archs and backbones
2020-04-21 11:42:14 +08:00
liaoxingyu bb50b6c5a7 docs($projects): update agw readme 2020-04-21 11:35:54 +08:00
liaoxingyu 95a3c62ad2 refactor(fastreid)
refactor the architecture
2020-04-20 10:59:29 +08:00
liaoxingyu 9684500a57 change arch
1. split the dataset display into separate trainset and testset displays
2. add a cls layer to easily plug in circle loss and arcface
2020-04-19 12:54:01 +08:00
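As an illustration of why a dedicated cls layer helps: margin-based losses such as arcface replace the plain linear classifier with a cosine classifier plus a margin, so only that layer needs to change. A sketch of an ArcFace-style layer (names and hyperparameters are illustrative):

```python
import torch
from torch import nn
import torch.nn.functional as F

class ArcfaceLinear(nn.Module):
    """Cosine classifier with an additive angular margin (ArcFace-style)."""

    def __init__(self, in_feat, num_classes, scale=64.0, margin=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, in_feat))
        nn.init.xavier_uniform_(self.weight)
        self.scale, self.margin = scale, margin

    def forward(self, features, targets):
        # cos(theta) between normalized features and class weights.
        cos = F.linear(F.normalize(features), F.normalize(self.weight))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        margin_cos = torch.cos(theta + self.margin)
        # Apply the margin only to each sample's target class.
        one_hot = F.one_hot(targets, cos.size(1)).bool()
        logits = torch.where(one_hot, margin_cos, cos)
        return logits * self.scale
```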
liaoxingyu be9faa5605 update focal loss
update dataset info display
update separate lr
update adaptive label smooth regularization
2020-04-17 13:46:10 +08:00
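For reference, plain (non-adaptive) label smoothing can be sketched in a few lines; the adaptive variant mentioned here presumably adjusts eps, which this sketch does not attempt:

```python
import torch
import torch.nn.functional as F

def smooth_cross_entropy(logits, targets, eps=0.1):
    """Cross entropy with uniform label smoothing.

    The true class gets probability 1 - eps; the remaining eps is spread
    uniformly over the other classes.
    """
    num_classes = logits.size(1)
    log_probs = F.log_softmax(logits, dim=1)
    smooth = torch.full_like(log_probs, eps / (num_classes - 1))
    smooth.scatter_(1, targets.unsqueeze(1), 1.0 - eps)
    return (-smooth * log_probs).sum(dim=1).mean()
```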
liaoxingyu 9cf222e093 refactor bn_no_bias 2020-04-08 21:04:09 +08:00
liaoxingyu 4d2fa28dbb update freeze layer
update preciseBN
update circle loss in both its metric-learning and cross-entropy loss forms
update loss call methods
2020-04-06 23:34:27 +08:00
liaoxingyu 6a8961ce48 1. upload circle loss and arcface
2. finish freeze training
3. update augmix data augmentation
2020-04-05 23:54:26 +08:00
liaoxingyu c6e0176c53 Upload demo.py and example 2020-04-03 15:07:27 +08:00
liaoxingyu eacee874aa fix merge 2020-03-25 11:06:39 +08:00
liaoxingyu 91dc9bc71f Merge branch 'master' of github.com:L1aoXingyu/fast-reid
 Conflicts:
	fastreid/config/defaults.py
	fastreid/layers/gem_pool.py
	fastreid/modeling/backbones/resnet.py
	fastreid/modeling/heads/__init__.py
	fastreid/modeling/heads/build.py
	fastreid/modeling/losses/build.py
	fastreid/modeling/meta_arch/__init__.py
	fastreid/modeling/meta_arch/abd_network.py
	fastreid/modeling/meta_arch/baseline.py
	fastreid/modeling/meta_arch/bdb_network.py
	fastreid/modeling/meta_arch/mf_network.py
	projects/StrongBaseline/configs/Base-Strongbaseline.yml
	projects/StrongBaseline/configs/baseline_dukemtmc.yml
	projects/StrongBaseline/train_net.py
2020-03-25 11:05:28 +08:00
liaoxingyu 23bedfce12 update version0.2 code 2020-03-25 10:58:26 +08:00
L1aoXingyu b1058118ca update BDB-net code
update MF-net code
2020-03-19 12:23:41 +08:00
L1aoXingyu acf363c181 1. Change the loss function to a built-in attribute of heads
2. Update agw and bagtricks result
2020-03-16 15:23:09 +08:00
L1aoXingyu bab602dfd2 Fix a minor bug in build criterion, which would be replaced by multiple calls
Refactor resnet pretrain
2020-02-28 21:20:41 +08:00
L1aoXingyu b020c7f0ae Fix data prefetcher minor bug 2020-02-27 12:16:57 +08:00
L1aoXingyu 12957f66aa Change architecture:
1. delete redundant preprocessing
2. add a data prefetcher to accelerate data loading
3. fix a minor triplet sampler bug when an id has only one image
2020-02-18 21:01:23 +08:00
L1aoXingyu e01d9b241f Update AGW baseline result 2020-02-13 20:37:08 +08:00
L1aoXingyu 327d74ffbb Update strong baseline result
Change data sampler
2020-02-13 00:19:15 +08:00
L1aoXingyu a2f69d0537 Update StrongBaseline results for market1501 and dukemtmc 2020-02-11 22:38:40 +08:00
L1aoXingyu 8a9c0ccfad Finish first version for fastreid 2020-02-10 22:13:04 +08:00
L1aoXingyu db6ed12b14 Update sampler code 2020-02-10 07:38:56 +08:00
liaoxingyu 71950d2c09 1. Fix evaluation code
2. Finish multi-dataset evaluation
3. Decouple image preprocessing and output postprocessing from the model forward for DataParallel training
4. Finish the backbone build registry
5. Fix dataset sampler
2020-01-21 20:24:26 +08:00
liaoxingyu b761b656f3 Finish basic training loop and evaluation results 2020-01-20 21:33:37 +08:00