Commit Graph

8 Commits (6254ebef8d1d106e9cca8d4097786c6172cf70ce)

Author SHA1 Message Date
Zaida Zhou b6a7fd98e4
Upgrade pre-commit hooks (#2321)
* Upgrade the versions of pre-commit hooks

* Update the versions in zh-cn.yaml
2022-10-08 11:48:44 +08:00
Zaida Zhou 45fa3e44a2
Add pyupgrade pre-commit hook (#1937)
* Add pyupgrade

* Add options for pyupgrade

* Minor refinement
2022-05-18 11:47:14 +08:00
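
For context on what this hook automates: pyupgrade rewrites older Python idioms into their modern equivalents when run over the codebase. A minimal before/after sketch (hypothetical snippets, not taken from this repository; the exact rewrites depend on the options passed, e.g. a minimum-version flag such as --py36-plus):

```python
# Hypothetical examples of rewrites pyupgrade performs automatically.

# Before: Python 2 era idioms.
class OldStyle(object):  # redundant explicit `object` base class
    def __init__(self):
        super(OldStyle, self).__init__()  # verbose two-argument super()

name = 'world'
greeting = 'hello, {}!'.format(name)  # str.format instead of an f-string

# After pyupgrade (run with e.g. --py36-plus):
class NewStyle:
    def __init__(self):
        super().__init__()

greeting = f'hello, {name}!'
```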
Zaida Zhou 6e9ce18323
Add copyright pre-commit hook (#1742)
* First commit

* Add copyright pre-commit hook
2022-02-24 09:24:25 +08:00
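
The hook added here enforces a copyright header on source files. A minimal sketch of a file that would satisfy such a check (the exact header text and file patterns are configured in the repository's .pre-commit-config.yaml, which is not shown in this log):

```python
# Copyright (c) OpenMMLab. All rights reserved.
"""Minimal module skeleton: the comment line above is the kind of header
a copyright pre-commit hook verifies (or inserts) at the top of each file."""
```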
q.yao 91b8478c84
Reduce ms_deformable_attn test memory usage (#1407) 2021-10-15 19:49:39 +08:00
Zaida Zhou 97e5bada4c
Continue PR #1223 (#1404)
* Fix MultiScaleDeformableAttention inference issue on CPU

* Fix lint

* Add unittest

* Remove some code

* Update tests/test_ops/test_ms_deformable_attn.py

Co-authored-by: Zaida Zhou <58739961+zhouzaida@users.noreply.github.com>

* Fix device

* Remove device

* Add more devices

* Refactor unittest

Co-authored-by: zhicheng huang <zhichenghzc@gmail.com>
Co-authored-by: zhangshilong <2392587229zsl@gmail.com>
Co-authored-by: Shilong Zhang <61961338+jshilong@users.noreply.github.com>
2021-10-14 20:50:38 +08:00
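
The fix in this PR concerns MultiScaleDeformableAttention when inputs live on the CPU, where the CUDA extension is unavailable and the pure-PyTorch reference implementation must be used instead. A minimal CPU invocation of that fallback, with illustrative shapes (not the ones from the unit test) and assuming an mmcv version that exposes the helper under mmcv.ops.multi_scale_deform_attn:

```python
import torch
from mmcv.ops.multi_scale_deform_attn import (
    multi_scale_deformable_attn_pytorch)

bs, num_queries, num_heads, embed_dims = 2, 5, 4, 16
spatial_shapes = torch.tensor([[6, 4], [3, 2]])  # (num_levels, 2) as (H, W)
num_levels, num_points = spatial_shapes.size(0), 2
num_keys = int(spatial_shapes.prod(dim=1).sum())  # keys summed over levels

# Per-head features for every key of every level, all on CPU.
value = torch.rand(bs, num_keys, num_heads, embed_dims // num_heads)
# Normalized sampling locations in [0, 1] plus their attention weights.
sampling_locations = torch.rand(
    bs, num_queries, num_heads, num_levels, num_points, 2)
attention_weights = torch.rand(
    bs, num_queries, num_heads, num_levels, num_points)
attention_weights /= attention_weights.sum(-1, keepdim=True)

output = multi_scale_deformable_attn_pytorch(
    value, spatial_shapes, sampling_locations, attention_weights)
print(output.shape)  # torch.Size([2, 5, 16])
```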
Shilong Zhang e05fb56031
Refactor the base classes related to transformer (#978)
* Minor changes

* Change to ModuleList

* Change to Sequential

* Replace dropout with attn_drop and proj_drop in MultiheadAttention

* Add operation_name for attn

* Add drop path and move all FFN args to ffn_cfgs

* Fix typo

* Fix a bug when using the default value of ffn_cfgs

* Fix FFNs

* Add deprecation warning

* Fix deprecation warning

* Change to pop kwargs

* Support registering FFN of transformer

* Support batch first

* Fix batch-first wrapper

* Fix forward wrapper

* Fix typo

* Fix lint

* Add unittest for transformer

* Fix unittest

* Fix equality check

* Use allclose

* Fix comments

* Fix comments

* Change ConfigDict to dict

* Move drop to a separate file

* Add comments for drop path

* Add noqa: E501

* Move bnc wrapper to MultiheadAttention

* Move bnc wrapper to MultiheadAttention

* Use deprecation warning

* Resolve comments

* Add unittest

* Rename residual to identity

* Revert runner

* MSDA residual to identity

* Rename inp_identity to identity

* Fix name

* Fix transformer

* Remove key in MSDA

* Remove assert for key

Co-authored-by: HIT-cwh <2892770585@qq.com>
Co-authored-by: bkhuang <congee524@gmail.com>
Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>
2021-06-11 18:09:31 +08:00
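
Taken together, the bullets above reshape the public interface of the transformer bricks: the single dropout argument splits into attn_drop and proj_drop, FFN arguments are grouped into ffn_cfgs, batch-first inputs gain a wrapper, and residual is renamed to identity. A minimal sketch of the resulting usage (argument names follow the bullets above; defaults may differ across mmcv versions):

```python
import torch
from mmcv.cnn.bricks.transformer import FFN, MultiheadAttention

attn = MultiheadAttention(
    embed_dims=256,
    num_heads=8,
    attn_drop=0.1,     # replaces the old single `dropout` argument
    proj_drop=0.1,
    batch_first=True)  # wrapper for (bs, num_queries, embed_dims) inputs

ffn = FFN(embed_dims=256, feedforward_channels=1024, num_fcs=2)

x = torch.rand(2, 100, 256)      # (bs, num_queries, embed_dims)
out = attn(query=x, identity=x)  # `residual` is now called `identity`
out = ffn(out, identity=out)
print(out.shape)                 # torch.Size([2, 100, 256])
```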
pc 732ff5093e
Add ms_deformable_attn in parrots (#1042) 2021-05-25 13:13:05 +08:00
ZhangShilong 54a7ebb4ec
[Feature]: Support Multi-Scale-DeformAttention in Deformable DETR (#878)
* Add C++ ms_deform_attn

* Fix cpp lint

* Fix cpp lint

* Apply clang-format

* Remove CMake file

* Use Google style

* Add clang-format pre-commit hook

* Use clang-format-lint-action

* Add transformer base class

* Add merge

* Add docstring

* Add pyargs

* Fix according to comments

* Register module

* Change to use BaseModule

* Add _ between build function names

* Split the name

* Fix according to comments

* Fix lint and unittest

* Fix cpp lint

* Fix bug of Deformable DETR attention

* Fix dropout

* Fix residual

* Use CUDA_1D_KERNEL_LOOP
2021-04-23 16:35:15 +08:00
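
The module introduced by this PR is consumed roughly as below. A hedged sketch assuming the MultiScaleDeformableAttention export from mmcv.ops and the default batch_first=False layout; it runs on CPU only thanks to the fallback added later in PR #1404 above, and the shapes are illustrative:

```python
import torch
from mmcv.ops import MultiScaleDeformableAttention

embed_dims, num_heads, num_levels, num_points = 32, 4, 2, 2
msda = MultiScaleDeformableAttention(
    embed_dims=embed_dims, num_heads=num_heads,
    num_levels=num_levels, num_points=num_points)

spatial_shapes = torch.tensor([[6, 4], [3, 2]])  # (H, W) per feature level
level_start_index = torch.cat(
    (spatial_shapes.new_zeros(1), spatial_shapes.prod(1).cumsum(0)[:-1]))
num_value = int(spatial_shapes.prod(1).sum())

bs, num_query = 2, 5
query = torch.rand(num_query, bs, embed_dims)  # batch_first=False layout
value = torch.rand(num_value, bs, embed_dims)
reference_points = torch.rand(bs, num_query, num_levels, 2)  # in [0, 1]

out = msda(
    query, value=value, reference_points=reference_points,
    spatial_shapes=spatial_shapes, level_start_index=level_start_index)
print(out.shape)  # torch.Size([5, 2, 32])
```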