[Docs] Logger Hook Config Updated to Add WandB (#1345)

* [Docs] Logger Hook Config Updated to Add WandB

* [Docs] WandB init_kwargs comment added

* [Docs] WandbLoggerHook Details Added To Config Doc File

* [Docs] WandbLoggerHook Details Added To Config Doc File (Pass lint test)

* fix comment

Co-authored-by: liukuikun <liukuikun@sensetime.com>
AmirMasoud Nourollah 2022-10-12 20:50:56 -07:00 committed by GitHub
parent a7a3067769
commit 98ceffda7b
1 changed file with 19 additions and 14 deletions

@@ -223,16 +223,16 @@ Mainly include optimizer settings, `optimizer hook` settings, learning rate sche
```python
# The configuration file used to build the optimizer, supports all optimizers in PyTorch.
optimizer = dict(type='SGD',  # Optimizer type
                 lr=0.1,  # Learning rate of the optimizer, see the PyTorch documentation for detailed usage of the parameters
                 momentum=0.9,  # Momentum
                 weight_decay=0.0001)  # Weight decay of SGD
# Config used to build the optimizer hook, refer to https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/optimizer.py#L8 for implementation details.
optimizer_config = dict(grad_clip=None)  # Most of the methods do not use gradient clipping
# Learning rate scheduler config used to register the LrUpdater hook
lr_config = dict(policy='step',  # The policy of the scheduler, also supports CosineAnnealing, Cyclic, etc. Refer to the supported LrUpdater hooks at https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/lr_updater.py#L9.
                 step=[30, 60, 90])  # Steps to decay the learning rate
runner = dict(type='EpochBasedRunner',  # Type of runner to use (i.e. IterBasedRunner or EpochBasedRunner)
              max_epochs=100)  # The runner runs the workflow for `max_epochs` epochs in total; for IterBasedRunner use `max_iters` instead
```
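
The comments above mention gradient clipping and alternative LR policies without showing them. The snippet below is a minimal sketch of those two variants; the clipping threshold and warmup/annealing values are illustrative assumptions, not recommendations from this commit:

```python
# Enable gradient clipping in the optimizer hook (illustrative values).
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
# Replace step decay with cosine annealing plus a linear warmup (illustrative values).
lr_config = dict(policy='CosineAnnealing',
                 warmup='linear',
                 warmup_iters=500,
                 warmup_ratio=0.001,
                 min_lr=1e-5)
```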
### Runtime Setting
@@ -243,11 +243,16 @@ This part mainly includes saving the checkpoint strategy, log configuration, tra
# Config to set the checkpoint hook. Refer to https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/checkpoint.py for implementation.
checkpoint_config = dict(interval=1)  # The save interval is 1
# config to register logger hook
-log_config = dict(
-    interval=100,  # Interval to print the log
+log_config = dict(  # Config to register the logger hook
+    interval=50,  # Interval to print the log
    hooks=[
-        dict(type='TextLoggerHook'),  # The Tensorboard logger is also supported
-        # dict(type='TensorboardLoggerHook')
+        dict(type='TextLoggerHook', by_epoch=False),
+        dict(type='TensorboardLoggerHook', by_epoch=False),
+        dict(type='WandbLoggerHook', by_epoch=False,  # The Wandb logger is also supported; it requires `wandb` to be installed.
+             init_kwargs={
+                 'project': 'MMOCR',  # Project name in WandB
+             }),  # Check https://docs.wandb.ai/ref/python/init for more init arguments.
+        # ClearMLLoggerHook, DvcliveLoggerHook, MlflowLoggerHook, NeptuneLoggerHook, PaviLoggerHook and SegmindLoggerHook are also supported, based on the MMCV implementation.
    ])
dist_params = dict(backend='nccl')  # Parameters to set up distributed training; the port can also be set.
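
Since `init_kwargs` is passed to `wandb.init` (per the linked WandB docs), additional arguments can be supplied the same way. A minimal sketch, assuming `wandb` is installed and authenticated (`pip install wandb`, then `wandb login`); the run name and tags below are illustrative placeholders, not values from this commit:

```python
log_config = dict(
    interval=50,
    hooks=[
        dict(type='TextLoggerHook', by_epoch=False),
        dict(type='WandbLoggerHook',
             by_epoch=False,
             init_kwargs={
                 'project': 'MMOCR',                # project name shown in the WandB UI
                 'name': 'dbnet_r50_icdar2015',     # illustrative run name
                 'tags': ['textdet', 'baseline'],   # illustrative tags
             }),
    ])
```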
@@ -337,8 +342,8 @@ The `train_cfg` and `test_cfg` are deprecated in config file, please specify the
```python
# deprecated
model = dict(
    type=...,
    ...
)
train_cfg=dict(...)
test_cfg=dict(...)
@@ -349,9 +354,9 @@ The migration example is as below.
```python
# recommended
model = dict(
    type=...,
    ...
    train_cfg=dict(...),
    test_cfg=dict(...),
)
```
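
For configs that still carry the deprecated top-level keys, the move is mechanical. Below is a minimal sketch of that migration on a plain Python dict; the `migrate_cfg` helper and the `'DBNet'` model type are illustrative, not part of MMOCR's codebase:

```python
def migrate_cfg(cfg: dict) -> dict:
    """Fold deprecated top-level train_cfg/test_cfg into the model dict."""
    model = dict(cfg['model'])                   # shallow copy of the model sub-dict
    for key in ('train_cfg', 'test_cfg'):
        if key in cfg:                           # deprecated top-level key found
            model.setdefault(key, cfg.pop(key))  # move it under model, keeping any existing value
    cfg['model'] = model
    return cfg


old_style = dict(model=dict(type='DBNet'), train_cfg=dict(), test_cfg=dict())
new_style = migrate_cfg(old_style)
assert 'train_cfg' in new_style['model'] and 'train_cfg' not in new_style
```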