This page provides basic tutorials about the usage of OpenSelfSup.
For installation instructions, please see [INSTALL.md](INSTALL.md).
## Train existing methods
**Note**: The default learning rate in config files is for 8 GPUs (except for the configs under `configs/linear_classification`, which use 1 GPU). If you use a different number of GPUs, the total batch size changes in proportion, so you have to scale the learning rate following `new_lr = old_lr * new_ngpus / old_ngpus`. We recommend using `tools/dist_train.sh` even with 1 GPU, since some methods do not support non-distributed training.
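For example (numbers are illustrative): if a config tuned for 8 GPUs sets `lr=0.03` and you train on 4 GPUs, use `new_lr = 0.03 * 4 / 8 = 0.015`.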
### Train with single/multiple GPUs
```shell
# checkpoints and logs are saved in the same sub-directory as the config file under `work_dirs/` by default.
bash tools/dist_train.sh ${CONFIG_FILE} ${GPUS}
```

Arguments for the feature evaluation scripts:
- `${FEAT_LIST}` is a string specifying which features from layer1 to layer5 to evaluate; e.g., to evaluate layer5 only, set `FEAT_LIST` to `feat5`; to evaluate all features, set it to `feat1 feat2 feat3 feat4 feat5` (separated by spaces).
- `$GPU_NUM` is the number of GPUs used to extract features.
2. Compute the hash of the weight file and append the hash id to the filename:
```shell
python tools/publish_model.py ${WEIGHT_FILE}
```
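The script saves a copy of the weight file with the hash id appended to its name (e.g., `model.pth` might become something like `model-abcd1234.pth`; the exact naming is determined by the script), so the integrity of a downloaded file can be verified.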
## How-to
### Use a new dataset
1. Write a data source file under `openselfsup/datasets/data_sources/`. You may refer to the existing ones; a minimal sketch follows this list.
2. Create new config files for your experiments.
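Below is a minimal sketch of a data source class. The registry name (`DATASOURCES`) and the `get_length`/`get_sample` interface are assumptions based on the existing data sources; verify them against the files in the repo before copying.

```python
from PIL import Image

from ..registry import DATASOURCES


@DATASOURCES.register_module
class MyDataSource(object):
    """Hypothetical data source reading one image path per line."""

    def __init__(self, list_file, root):
        with open(list_file, 'r') as f:
            self.fns = ['{}/{}'.format(root, l.strip()) for l in f]

    def get_length(self):
        # Number of samples in the dataset.
        return len(self.fns)

    def get_sample(self, idx):
        # Return a PIL image; the dataset's pipeline applies the transforms.
        return Image.open(self.fns[idx]).convert('RGB')
```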
### Design your own methods
#### What you need to do
1. Create a dataset file under `openselfsup/datasets/` (preferably reusing an existing one);
2. Create a model file under `openselfsup/models/` (a minimal sketch is given after this list). The model typically contains:
i) backbone (required): images to deep features from different depths of layers.
ii) neck (optional): deep features to compact feature vectors.
iii) head (optional): defines loss functions.
iv) memory_bank (optional): defines memory banks.
3. Create a config file under `configs/` and setup the configs;
4. Create a hook file under `openselfsup/hooks/` if your method requires additional operations before a run, every several iterations, every several epochs, or after a run.
You may refer to existing modules under respective folders.
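As a sketch of how these pieces might fit together, the class below follows the backbone/neck/head decomposition described above. The registry and builder names mirror the patterns of the existing models, and `forward_train`/`mode` are assumptions; check the modules under `openselfsup/models/` for the exact interface.

```python
import torch.nn as nn

from . import builder
from .registry import MODELS


@MODELS.register_module
class MyMethod(nn.Module):
    """Hypothetical method wiring a backbone, a neck and a head."""

    def __init__(self, backbone, neck=None, head=None, pretrained=None):
        super(MyMethod, self).__init__()
        self.backbone = builder.build_backbone(backbone)
        self.neck = builder.build_neck(neck)
        self.head = builder.build_head(head)
        self.init_weights(pretrained=pretrained)

    def init_weights(self, pretrained=None):
        self.backbone.init_weights(pretrained=pretrained)

    def forward_train(self, img, **kwargs):
        x = self.backbone(img)   # images -> deep features
        z = self.neck(x)         # deep features -> compact vectors
        losses = self.head(z)    # the head computes the loss dict
        return losses

    def forward(self, img, mode='train', **kwargs):
        if mode == 'train':
            return self.forward_train(img, **kwargs)
        raise NotImplementedError('mode {} is not supported'.format(mode))
```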
* Configure data augmentations in the config file.
The augmentations are the same as those in `torchvision.transforms`, with `RandomAppliedTrans` corresponding to `torchvision.transforms.RandomApply`; `Lighting` and `GaussianBlur` are additionally implemented.
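For instance, a training pipeline in a config file might look like the following sketch. The transform names follow the conventions above; the specific values are illustrative, not a recommended recipe.

```python
train_pipeline = [
    dict(type='RandomResizedCrop', size=224),
    dict(type='RandomHorizontalFlip'),
    # Apply color jitter with probability 0.8, as RandomApply would.
    dict(
        type='RandomAppliedTrans',
        transforms=[
            dict(type='ColorJitter',
                 brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1)
        ],
        p=0.8),
    # GaussianBlur is implemented in OpenSelfSup rather than torchvision.
    dict(
        type='RandomAppliedTrans',
        transforms=[dict(type='GaussianBlur', sigma_min=0.1, sigma_max=2.0)],
        p=0.5),
    dict(type='ToTensor'),
    dict(type='Normalize',
         mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
]
```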
You may specify optimization parameters, including lr, momentum and weight_decay, for a certain group of parameters in the config file with `paramwise_options`. `paramwise_options` is a dict whose keys are regular expressions matched against parameter names and whose values are the options to apply. The options include 6 fields: lr, lr_mult, momentum, momentum_mult, weight_decay, weight_decay_mult.
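For example, the snippet below shrinks weight decay on normalization layers and raises the learning rate on a head; the patterns and multipliers are illustrative.

```python
paramwise_options = {
    # Weights and biases of BN/GN layers: 0.1x weight decay.
    r'(bn|gn)(\d+)?.(weight|bias)': dict(weight_decay_mult=0.1),
    # Parameters whose names start with 'head.': 10x lr, no momentum.
    r'\Ahead.': dict(lr_mult=10, momentum=0),
}
optimizer = dict(
    type='SGD', lr=0.03, momentum=0.9, weight_decay=0.0001,
    paramwise_options=paramwise_options)
```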