[Docs] Fix typos (#814)

* Update model.md

* Update model.md

* Update model.md

* Update evaluation.md

* Update param_scheduler.md

* Update hook.md

* Fix lint issue

* fix lint issues

Co-authored-by: shanmo <shanmo1412@gmail.com>
Timothy 2022-12-11 17:12:29 +08:00 committed by GitHub
parent 7e2d47ba5f
commit be0bc3a0ef
4 changed files with 6 additions and 6 deletions

hook.md View File

@@ -73,7 +73,7 @@ The four features mentioned above are described below.
- Save checkpoints by interval, and support saving them by epoch or iteration
Suppose we train a total of 20 epochs and want to save the checkpoints every 5 epochs, the following configuration will help us to achieve this requirement.
Suppose we train a total of 20 epochs and want to save the checkpoints every 5 epochs, the following configuration will help us achieve this requirement.
```python
# the default value of by_epoch is True
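# a hedged sketch of the truncated configuration: with CheckpointHook,
# interval=5 together with by_epoch=True saves a checkpoint every 5 epochs
default_hooks = dict(
    checkpoint=dict(type='CheckpointHook', interval=5, by_epoch=True))
```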

model.md View File

@@ -113,7 +113,7 @@ class MMResNet50(BaseModel):
elif mode == 'predict':
return x, labels
# train_step, val_step and test_step have been implemented in BaseModel. we
# train_step, val_step and test_step have been implemented in BaseModel.
# We list the equivalent code here for better understanding
def train_step(self, data, optim_wrapper):
data = self.data_preprocessor(data)
@@ -135,7 +135,7 @@ class MMResNet50(BaseModel):
Now, you may have a deeper understanding of dataflow, and can answer the first question in [Runner and model](#runner-and-model).
`BaseModel.train_step` implements the standard optimization standard, and if we want to customize a new optimization process, we can override it in the subclass. However, it is important to note that we need to make sure that `train_step` returns a loss dict.
`BaseModel.train_step` implements the standard optimization, and if we want to customize a new optimization process, we can override it in the subclass. However, it is important to note that we need to make sure that `train_step` returns a loss dict.
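For example, a customized `train_step` could look roughly like the sketch below (illustrative only, not MMEngine's own implementation; `parse_losses` and `optim_wrapper.update_params` are the helpers the default step relies on, and the override still returns a dict):

```python
class MMResNet50(BaseModel):
    ...  # forward as defined above

    def train_step(self, data, optim_wrapper):
        data = self.data_preprocessor(data)
        losses = self(*data, mode='loss')
        # illustrative customization: down-weight every loss term before the update
        losses = {name: value * 0.5 for name, value in losses.items()}
        parsed_losses, log_vars = self.parse_losses(losses)
        optim_wrapper.update_params(parsed_losses)
        return log_vars  # still a loss dict, as required
```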
## DataPreprocessor
@@ -155,7 +155,7 @@ The answer to the first question is that: `MMResNet50` inherit from `BaseModel`,
class BaseDataPreprocessor(nn.Module):
def forward(self, data, training=True): # ignore the training parameter here
# suppose data given by CIFAR10 is a tuple. Actually
# BaseDataPreprocessor could move varies type of data
# BaseDataPreprocessor could move various type of data
# to target device.
return tuple(_data.cuda() for _data in data)
```
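As a quick illustration of the simplified preprocessor above (a hypothetical sketch that assumes a CUDA device is available), calling it on a CIFAR10-style `(images, labels)` tuple moves both tensors to the GPU:

```python
import torch

preprocessor = BaseDataPreprocessor()
batch = (torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,)))
images, labels = preprocessor(batch)  # both tensors now live on the GPU
```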

param_scheduler.md View File

@@ -11,8 +11,8 @@ We first introduce how to use PyTorch's `torch.optim.lr_scheduler` to adjust learning rate
<details>
<summary>How to use PyTorch's builtin learning rate scheduler?</summary>
Here is an example which refers from [PyTorch official documentation](https://pytorch.org/docs/stable/optim.html):
Here is an example which refers from [PyTorch official documentation](https://pytorch.org/docs/stable/optim.html):
Initialize an ExponentialLR object, and call the `step` method after each training epoch.
```python
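# a hedged sketch of the truncated snippet, following the linked PyTorch docs
import torch
from torch.optim.lr_scheduler import ExponentialLR

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = ExponentialLR(optimizer, gamma=0.9)

for epoch in range(20):
    # ... run one training epoch here ...
    scheduler.step()  # decay the learning rate once per epoch
```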

evaluation.md View File

@@ -71,7 +71,7 @@ class SimpleAccuracy(BaseMetric):
def process(self, data_batch: Sequence[dict], data_samples: Sequence[dict]):
"""Process one batch of data and predictions. The processed
Results should be stored in `self.results`, which will be used
to computed the metrics when all batches have been processed.
to compute the metrics when all batches have been processed.
Args:
data_batch (Sequence[Tuple[Any, dict]]): A batch of data
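For context, here is a minimal sketch of how such a metric typically fills `self.results` in `process` and aggregates them in `compute_metrics` (the dictionary keys below are illustrative, not a fixed MMEngine schema):

```python
from typing import Sequence

from mmengine.evaluator import BaseMetric


class SimpleAccuracy(BaseMetric):

    def process(self, data_batch: Sequence[dict], data_samples: Sequence[dict]):
        # store per-batch statistics; they are gathered across batches (and ranks)
        # before compute_metrics is called
        for sample in data_samples:
            self.results.append(dict(
                correct=int(sample['pred'] == sample['gt']),
                total=1,
            ))

    def compute_metrics(self, results: list) -> dict:
        correct = sum(item['correct'] for item in results)
        total = sum(item['total'] for item in results)
        return dict(accuracy=correct / max(total, 1))
```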