Commit Graph

10 Commits (f07ed6bcc0fcfb82c12933748ec94f94b90c7cd6)

Author SHA1 Message Date
Hakjin Lee 8e8ab22686
[Enhancement] Support MultiScaleDeformableAttention with AMP (#2541)
* [Enhance] Support FP16 for MSDeformAttn

* [Fix] Data type mismatch

* Update mmcv/ops/multi_scale_deform_attn.py

* Add UT

Author:    nijkah <nijkah@gmail.com>

* Add cuda available condition

---------

Co-authored-by: Zaida Zhou <58739961+zhouzaida@users.noreply.github.com>
2023-02-17 19:27:14 +08:00
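
The AMP entry above makes the CUDA op usable under autocast. Below is a minimal sketch of that usage, assuming the public `mmcv.ops.MultiScaleDeformableAttention` module and illustrative tensor shapes; it is not code taken from the commit itself.

```python
# Minimal sketch, assuming mmcv.ops.MultiScaleDeformableAttention and a CUDA
# device; shapes are illustrative and not taken from the commit.
import torch
from mmcv.ops import MultiScaleDeformableAttention

attn = MultiScaleDeformableAttention(
    embed_dims=256, num_heads=8, num_levels=1, num_points=4).cuda()

bs, num_query = 2, 100
spatial_shapes = torch.tensor([[32, 32]], device='cuda')   # one feature level
level_start_index = torch.tensor([0], device='cuda')
num_value = int((spatial_shapes[:, 0] * spatial_shapes[:, 1]).sum())

query = torch.randn(num_query, bs, 256, device='cuda')
value = torch.randn(num_value, bs, 256, device='cuda')
reference_points = torch.rand(bs, num_query, 1, 2, device='cuda')

# With the FP16 support, the op is expected to also run when autocast
# feeds it half-precision tensors.
with torch.cuda.amp.autocast():
    out = attn(query, value=value, reference_points=reference_points,
               spatial_shapes=spatial_shapes,
               level_start_index=level_start_index)
print(out.shape)  # expected: (num_query, bs, 256)
```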
ZShaopeng 12442667ff
[Feature] Support MultiScaleDeformableAttn with cambricon MLU backend (#2396)
* [Feature] Support MsDeformAttnForward with cambricon MLU backend

* [Feature] Support MsDeformAttnForward with cambricon MLU backend

* [Feature] Support MsDeformAttnForward with cambricon MLU backend

* [Feature] Support MsDeformAttnForward with cambricon MLU backend

* [Feature] Support MsDeformAttnForward with cambricon MLU backend

* [Feature] Support MsDeformAttnForward with cambricon MLU backend

* [Feature] Support ms with cambricon MLU backend

* [Feature] Support msdeformattn_1104 with cambricon MLU backend

* [Feature] Support ms with cambricon MLU backend

* [Feature] Support MsDeformAttn_1108 with cambricon MLU backend

* [Feature] Support MsDeformAttn_1108 with cambricon MLU backend

* [Feature] Support MsDeformAttn_1108 with cambricon MLU backend

* [Feature] Support MsDeformAttn with cambricon MLU backend

* [Feature] Support MsDeformAttn with cambricon MLU backend

* [Feature] Support MsDeformAttn with cambricon MLU backend

* [Feature] Support MsDeformAttn with cambricon MLU backend

* [Feature] Support MsDeformAttn with cambricon MLU backend

* [Feature] Support MsDeformAttn with cambricon MLU backend

* [Feature] Support MsDeformAttn with cambricon MLU backend

* [Feature] Support MsDeformAttn with cambricon MLU backend

* [Feature] Support MsDeformAttn with cambricon MLU backend

* [Feature] Support MsDeformAttn with cambricon MLU backend

* Revert "[Feature] Support MsDeformAttn with cambricon MLU backend"

This reverts commit 27963cccc86d240852a40a2c1510147a3e9f269f.

* [Feature] Support MsdeformAttn with cambricon MLU backend

* [Feature] Support MsDeformAttn with cambricon MLU backend

* [Feature] Support MsdeformAttn with cambricon MLU backend

Co-authored-by: zhangshaopeng <wicky-zheng@outlook.com>
Co-authored-by: wicky-zheng <root@notebook-mmcv-290m8-torch1-6-y0yv66-notebook-0.notebook-mmcv-290m8-torch1-6-y0yv66.ns-ad8b689a0ecd41fcb4469c803dcd539d.svc.cluster.local>
2022-11-16 14:08:04 +08:00
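
The MLU entry ports the same forward op to Cambricon hardware. The fragment below is only a hedged sketch of how a caller would select that backend; it assumes a torch_mlu-enabled build where tensors can be moved to an `mlu` device and that `mmcv.utils` exposes an `IS_MLU_AVAILABLE` flag, neither of which is stated in the commit text.

```python
# Hedged sketch: assumes a Cambricon torch_mlu build where .to('mlu') is valid
# and that mmcv.utils exposes IS_MLU_AVAILABLE (both are assumptions).
import torch
from mmcv.ops import MultiScaleDeformableAttention
from mmcv.utils import IS_MLU_AVAILABLE

device = 'mlu' if IS_MLU_AVAILABLE else 'cpu'
attn = MultiScaleDeformableAttention(embed_dims=256, num_levels=1).to(device)
# Inputs are built exactly as in the CUDA sketch above, just on `device`;
# the Python-side call does not change, only the kernel dispatch does.
```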
Zaida Zhou b6a7fd98e4
Upgrade pre-commit hooks (#2321)
* Upgrade the versions of pre-commit hooks

* update the versions of zh-cn.yaml
2022-10-08 11:48:44 +08:00
Zaida Zhou 45fa3e44a2
Add pyupgrade pre-commit hook (#1937)
* add pyupgrade

* add options for pyupgrade

* minor refinement
2022-05-18 11:47:14 +08:00
Zaida Zhou 6e9ce18323
Add copyright pre-commit-hook (#1742)
* first commit

* Add copyright pre-commit-hook
2022-02-24 09:24:25 +08:00
q.yao 91b8478c84
Reduce ms_deformable_attn test memory usage (#1407) 2021-10-15 19:49:39 +08:00
Zaida Zhou 97e5bada4c
continue PR #1223 (#1404)
* fix MultiScaleDeformableAttention inference issue when running on CPU

* fix lint

* add unittest

* remove some code

* Update tests/test_ops/test_ms_deformable_attn.py

Co-authored-by: Zaida Zhou <58739961+zhouzaida@users.noreply.github.com>

* fix device

* remove device

* add more device

* refactor unittest

Co-authored-by: zhicheng huang <zhichenghzc@gmail.com>
Co-authored-by: zhangshilong <2392587229zsl@gmail.com>
Co-authored-by: Shilong Zhang <61961338+jshilong@users.noreply.github.com>
2021-10-14 20:50:38 +08:00
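
The fix in that entry lets inference run without CUDA. As a hedged sketch of the CPU path, the call below goes through `multi_scale_deformable_attn_pytorch`, the pure-PyTorch reference that lives alongside the op in `mmcv/ops/multi_scale_deform_attn.py`; the function name and shapes are assumptions based on that module, not text from the commit.

```python
# Sketch of the CPU path, assuming multi_scale_deformable_attn_pytorch is the
# pure-PyTorch reference used when inputs are not CUDA tensors.
import torch
from mmcv.ops.multi_scale_deform_attn import multi_scale_deformable_attn_pytorch

bs, num_query, num_heads, head_dims, num_points = 2, 10, 8, 32, 4
spatial_shapes = torch.tensor([[16, 16]])            # one feature level
num_value = 16 * 16

value = torch.randn(bs, num_value, num_heads, head_dims)
sampling_locations = torch.rand(bs, num_query, num_heads, 1, num_points, 2)
attention_weights = torch.softmax(
    torch.randn(bs, num_query, num_heads, 1, num_points), dim=-1)

out = multi_scale_deformable_attn_pytorch(
    value, spatial_shapes, sampling_locations, attention_weights)
print(out.shape)  # expected: (bs, num_query, num_heads * head_dims)
```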
Shilong Zhang e05fb56031
Refactor the baseclass related to transformer (#978)
* minor changes

* change to ModuleList

* change to Sequential

* replace dropout with attn_drop and proj_drop in MultiheadAttention

* add operation_name for attn

* add drop path and move all FFN args to ffn_cfgs

* fix typo

* fix a bug when using the default value of ffn_cfgs

* fix ffns

* add deprecation warning

* fix deprecation warning

* change to pop kwargs

* support register FFN of transformer

* support batch first

* fix batch first wrapper

* fix forward wrapper

* fix typo

* fix lint

* add unittest for transformer

* fix unittest

* fix equal

* use allclose

* fix comments

* fix comments

* change configdict to dict

* move drop to a file

* add comments for drop path

* add noqa 501

* move bnc wrapper to MultiheadAttention

* move bnc wrapper to MultiheadAttention

* use deprecation warning

* resolve comments

* add unittest

* rename residual to identity

* revert runner

* msda residual to identity

* rename inp_identity to identity

* fix name

* fix transformer

* remove key in msda

* remove assert for key

Co-authored-by: HIT-cwh <2892770585@qq.com>
Co-authored-by: bkhuang <congee524@gmail.com>
Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>
2021-06-11 18:09:31 +08:00
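
Several of those bullets (attn_drop/proj_drop, batch_first, residual renamed to identity) change the constructor and forward contract of the attention bricks. Here is a short hedged sketch of the resulting interface, assuming the `mmcv.cnn.bricks.transformer.MultiheadAttention` wrapper this refactor touches; shapes are illustrative.

```python
# Hedged sketch of the refactored interface: attn_drop/proj_drop replace the
# single dropout argument, batch_first switches to (bs, n, c) layout, and the
# residual input is called `identity`. Shapes are illustrative assumptions.
import torch
from mmcv.cnn.bricks.transformer import MultiheadAttention

attn = MultiheadAttention(embed_dims=256, num_heads=8,
                          attn_drop=0.1, proj_drop=0.1, batch_first=True)

x = torch.randn(2, 100, 256)                      # (bs, num_query, embed_dims)
out = attn(query=x, key=x, value=x, identity=x)   # output = identity + attention
print(out.shape)                                  # expected: (2, 100, 256)
```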
pc 732ff5093e
Add ms_deformable_attn in parrots (#1042) 2021-05-25 13:13:05 +08:00
ZhangShilong 54a7ebb4ec
[Feature]: support Multi-Scale-DeformAttention in deformable-detr (#878)
* add c++ ms_deform_atten

* fix cpp lint

* fix cpp lint

* clang format

* remove cmakefile

* google style

* clang-format precommit

* use clang-format-lint-action

* add transformer base class

* add merge

* add docstr

* add pyargs

* fix according to comments

* register module

* change to use BaseModule

* add _ between build function

* split the name

* fix according to comments

* fix lint and fix unittest

* fix cpp lint

* fix bug of deformdetr_atten

* fix dropout

* fix residual

* use CUDA_1D_KERNEL_LOOP
2021-04-23 16:35:15 +08:00
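
Since this commit registers the op as a transformer attention module (the "register module" and BaseModule bullets above), Deformable DETR configs can build it from a dict. Below is a hedged sketch of that pattern, assuming the ATTENTION registry and `build_attention` helper from mmcv's transformer bricks of that era; none of it is quoted from the commit.

```python
# Hedged sketch of building the registered attention from a config dict; the
# build_attention helper and ATTENTION registry names are assumptions based on
# mmcv's transformer bricks, not text from this commit.
import mmcv.ops  # noqa: F401  (importing the ops package triggers registration)
from mmcv.cnn.bricks.transformer import build_attention

attn_cfg = dict(
    type='MultiScaleDeformableAttention',
    embed_dims=256,
    num_heads=8,
    num_levels=4,   # number of feature levels in the multi-scale design
    num_points=4)   # sampling points per head per level
attn = build_attention(attn_cfg)
print(type(attn).__name__)  # expected: MultiScaleDeformableAttention
```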