mirror of
https://github.com/open-mmlab/mmselfsup.git
synced 2025-06-03 14:59:38 +08:00
update apex doc
This commit is contained in:
parent
a1c0e03aec
commit
f00e0bb25e
@ -63,6 +63,19 @@ Assuming that you only have 1 GPU that can contain 64 images in a batch, while y
```python
optimizer_config = dict(update_interval=4)
```
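The effect of `update_interval=4` can be sketched in plain Python. This is a toy illustration of gradient accumulation, not the actual mmselfsup optimizer hook: averaging the mean gradients of four micro-batches of 16 gives the same update as one batch of 64.

```python
# Stand-in per-sample gradients for a "batch" of 64 samples.
samples = list(range(64))

def mean_grad(batch):
    # Gradient of a mean loss = mean of per-sample gradients.
    return sum(batch) / len(batch)

# One large batch of 64 (what we want, but it may not fit in GPU memory):
full = mean_grad(samples)

# Four micro-batches of 16, accumulated and averaged before one
# optimizer step (what update_interval=4 effectively does):
micro = [samples[i * 16:(i + 1) * 16] for i in range(4)]
accumulated = sum(mean_grad(b) for b in micro) / 4

print(full, accumulated)  # identical updates
```

Because the loss is a mean over samples, the two schemes produce the same gradient; only the peak memory use differs.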
### Mixed Precision Training (Optional)
We use [Apex](https://github.com/NVIDIA/apex) to implement Mixed Precision Training.
To enable Mixed Precision Training, add the following to the config file.
```python
use_fp16 = True
optimizer_config = dict(use_fp16=use_fp16)
```
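Under the hood, fp16 training typically relies on loss scaling to keep small gradients from underflowing to zero in half precision. A minimal numpy illustration of the idea (not the Apex implementation):

```python
import numpy as np

# A gradient this small underflows to zero in fp16
# (the smallest fp16 subnormal is about 6e-8):
tiny = 1e-8
assert np.float16(tiny) == 0.0

# Loss scaling: multiply the loss (and hence all gradients) by a large
# factor before casting to fp16, then divide it back out in fp32
# before the weight update.
scale = 2.0 ** 16
scaled = np.float16(tiny * scale)            # survives in fp16
recovered = np.float32(scaled) / scale       # unscale in fp32
print(recovered)                             # close to the true gradient
```

Apex's amp handles this scaling (and choosing which ops run in fp16) automatically when `use_fp16` is enabled.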
An example of launching distributed training with an fp16 config:
```shell
bash tools/dist_train.sh configs/selfsup/moco/r50_v1_fp16.py 8
```
## Benchmarks
We provide several standard benchmarks to evaluate representation learning. If you use this repo in your publications, please do NOT change the evaluation config files or scripts mentioned below, so that all methods are compared fairly.
@ -51,6 +51,13 @@ e. Install.
```shell
pip install -v -e .  # or "python setup.py develop"
```
f. Install Apex (optional), following the [official instructions](https://github.com/NVIDIA/apex), e.g.
```shell
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```
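After installing, a quick sanity check that Apex's `amp` module is importable (a small convenience snippet, not part of the official instructions; it reports availability either way):

```python
# Sanity check: can Apex's amp module be imported in this environment?
try:
    from apex import amp  # noqa: F401
    apex_available = True
except ImportError:
    apex_available = False

print("Apex available:", apex_available)
```

If this prints `False`, mixed precision configs with `use_fp16 = True` will fail at startup until Apex is installed.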
Note:
1. The git commit id is written into the version number in step d, e.g. 0.6.0+2e7045c. This version string is also saved in trained models.