mmpretrain/mmpretrain
Ma Zerun b017670e1b
[Improve] Use PyTorch official `scaled_dot_product_attention` to accelerate `MultiheadAttention`. (#1434)
* [Improve] Use PyTorch official `scaled_dot_product_attention` to accelerate `MultiheadAttention`.

* Support the `--local-rank` and `--amp` options for newer PyTorch versions.

* Fix imports and UT.
2023-03-29 15:50:44 +08:00
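For context, PyTorch 2.0 ships a fused `torch.nn.functional.scaled_dot_product_attention` kernel that can replace a hand-written softmax attention. The snippet below is a minimal sketch, not the mmpretrain implementation: the module name `SimpleAttention` and its hyper-parameters are illustrative assumptions, and the dispatch simply falls back to explicit attention on older PyTorch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleAttention(nn.Module):
    """Illustrative multi-head self-attention that uses the fused kernel when present."""

    def __init__(self, embed_dims=256, num_heads=8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dims = embed_dims // num_heads
        self.qkv = nn.Linear(embed_dims, embed_dims * 3)
        self.proj = nn.Linear(embed_dims, embed_dims)

    def forward(self, x):
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dims)
        # Each of q, k, v has shape (B, num_heads, N, head_dims).
        q, k, v = qkv.permute(2, 0, 3, 1, 4).unbind(0)

        if hasattr(F, 'scaled_dot_product_attention'):
            # PyTorch >= 2.0: fused flash / memory-efficient attention.
            out = F.scaled_dot_product_attention(q, k, v)
        else:
            # Older PyTorch: explicit scaled softmax attention.
            attn = (q @ k.transpose(-2, -1)) * self.head_dims ** -0.5
            attn = attn.softmax(dim=-1)
            out = attn @ v

        out = out.transpose(1, 2).reshape(B, N, C)
        return self.proj(out)


if __name__ == '__main__':
    x = torch.randn(2, 197, 256)
    print(SimpleAttention()(x).shape)  # torch.Size([2, 197, 256])
```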
apis [Refactor] Update dev scripts to be compatible with selfsup tasks. (#1412) 2023-03-20 14:30:57 +08:00
datasets [Docs] Update user guides docs and tools for MMPretrain. (#1429) 2023-03-27 14:32:26 +08:00
engine [Feature] Implement the universal visualizer for multiple tasks. (#1404) 2023-03-09 11:36:54 +08:00
evaluation [Docs] Update migration.md (#1417) 2023-03-17 10:30:09 +08:00
models [Improve] Use PyTorch official `scaled_dot_product_attention` to accelerate `MultiheadAttention`. (#1434) 2023-03-29 15:50:44 +08:00
structures [Refactor] Move transforms in mmselfsup to mmpretrain. (#1396) 2023-03-03 15:01:11 +08:00
utils [Refactor] Move and refactor utils from mmselfsup. (#1385) 2023-02-28 17:04:40 +08:00
visualization [Refactor] Refactor the `browse_dataset.py` to support selfsup pipeline. (#1414) 2023-03-15 14:18:36 +08:00
__init__.py Rename the package name to `mmpretrain`. 2023-02-17 15:20:55 +08:00
registry.py Rename the package name to `mmpretrain`. 2023-02-17 15:20:55 +08:00
version.py Rename the package name to `mmpretrain`. 2023-02-17 15:20:55 +08:00