# Solution of FGIA ACCV 2022 (1st Place)

This is the fine-tuning part of the 1st place solution for Webly-Supervised Fine-Grained Recognition, an ACCV 2022 workshop competition: https://www.cvmart.net/race/10412/base.
## Result
### Reproduce

For the detailed self-supervised pre-training code, please refer to MMSelfSup. For the detailed fine-tuning and inference code, please refer to this repo.
## Description

### Overview of Our Solution

### Our Model
- ViT (MAE pre-training) # pre-trained with MMSelfSup
- Swin-v2 (SimMIM pre-training) # from MMCls `swin_transformer_v2`
**The architectures we use:**
- ViT + CE-loss + post-LongTail-Adjustment
- ViT + SubCenterArcFaceWithAdvMargin(CE)
- Swin-B + SubCenterArcFaceWithAdvMargin(SoftMax-EQL)
- Swin-L + SubCenterArcFaceWithAdvMargin(SoftMax-EQL)
**Bag of tricks (papers and code)**
- MAE | Config
- Swin-v2 | Config
- ArcFace | Code
- SubCenterArcFaceWithAdvMargin | Code
- Post-LT-adjustment | Code
- SoftMaxEQL | Code
- FlipTTA | Code
- dataset cleaning
- self-ensemble: uniform model soup | Code
- pseudo-labeling | Code
- bagging ensemble | Code
- post-process: re-distribute-label
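The uniform model soup above combines several fine-tuned checkpoints by simply averaging their weights into a single model. A minimal sketch, assuming checkpoints share an identical architecture; `uniform_soup` is an illustrative name, not an identifier from this repo:

```python
import torch

def uniform_soup(state_dicts):
    """Average parameter tensors across checkpoints with identical keys and shapes."""
    keys = state_dicts[0].keys()
    return {
        k: torch.stack([sd[k].float() for sd in state_dicts]).mean(dim=0)
        for k in keys
    }

# Two toy "checkpoints" with a single parameter each; the soup is their mean.
sd_a = {"w": torch.tensor([0.0, 2.0])}
sd_b = {"w": torch.tensor([2.0, 4.0])}
soup = uniform_soup([sd_a, sd_b])
```

The averaged state dict can then be loaded into a fresh model with `load_state_dict`, giving an ensemble-like gain at the inference cost of a single model.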
**Used but brought no improvement**

- Using a retrieval paradigm to solve this classification task;
- Using an EfficientNetV2 backbone.
**Not used but worth trying**

- Try the DiVE algorithm to improve performance on the long-tailed dataset;
- Use SimMIM to pre-train Swin-v2 on the competition dataset;
- Refine the re-distribute-label tool.