diff --git a/docs/en/tutorials/hook.md b/docs/en/tutorials/hook.md
index d10d3b24..d17286ba 100644
--- a/docs/en/tutorials/hook.md
+++ b/docs/en/tutorials/hook.md
@@ -73,7 +73,7 @@ The four features mentioned above are described below.
 
 - Save checkpoints by interval, and support saving them by epoch or iteration
 
-  Suppose we train a total of 20 epochs and want to save the checkpoints every 5 epochs, the following configuration will help us to achieve this requirement.
+  Suppose we train a total of 20 epochs and want to save the checkpoints every 5 epochs, the following configuration will help us achieve this requirement.
 
   ```python
   # the default value of by_epoch is True
diff --git a/docs/en/tutorials/model.md b/docs/en/tutorials/model.md
index 73986028..adcaacc8 100644
--- a/docs/en/tutorials/model.md
+++ b/docs/en/tutorials/model.md
@@ -113,7 +113,7 @@ class MMResNet50(BaseModel):
         elif mode == 'predict':
             return x, labels
 
-    # train_step, val_step and test_step have been implemented in BaseModel. we
+    # train_step, val_step and test_step have been implemented in BaseModel.
     # We list the equivalent code here for better understanding
     def train_step(self, data, optim_wrapper):
         data = self.data_preprocessor(data)
@@ -135,7 +135,7 @@ class MMResNet50(BaseModel):
 
 Now, you may have a deeper understanding of dataflow, and can answer the first question in [Runner and model](#runner-and-model).
 
-`BaseModel.train_step` implements the standard optimization standard, and if we want to customize a new optimization process, we can override it in the subclass. However, it is important to note that we need to make sure that `train_step` returns a loss dict.
+`BaseModel.train_step` implements the standard optimization process, and if we want to customize a new optimization process, we can override it in the subclass. However, it is important to note that we need to make sure that `train_step` returns a loss dict.
 
 ## DataPreprocessor
 
@@ -155,7 +155,7 @@ The answer to the first question is that: `MMResNet50` inherit from `BaseModel`,
 class BaseDataPreprocessor(nn.Module):
     def forward(self, data, training=True):  # ignore the training parameter here
         # suppose data given by CIFAR10 is a tuple. Actually
-        # BaseDataPreprocessor could move varies type of data
+        # BaseDataPreprocessor could move various types of data
         # to target device.
         return tuple(_data.cuda() for _data in data)
 ```
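As a side note to the `BaseDataPreprocessor` hunk above, here is a minimal sketch (not part of the patch) of what "move various types of data to target device" can look like in plain PyTorch. The class name `NaiveDataPreprocessor` and its `device` argument are assumptions for illustration; real code should subclass MMEngine's `BaseDataPreprocessor` instead.

```python
import torch
import torch.nn as nn


class NaiveDataPreprocessor(nn.Module):
    """Toy preprocessor: move whatever container the dataloader yields
    (tensor, tuple/list, or dict) onto the target device."""

    def __init__(self, device=None):
        super().__init__()
        # Assumed default: fall back to CPU when no GPU is available.
        self.device = device or ('cuda' if torch.cuda.is_available() else 'cpu')

    def forward(self, data, training=True):  # training is ignored, as above
        if isinstance(data, torch.Tensor):
            return data.to(self.device)
        if isinstance(data, (tuple, list)):
            return type(data)(self.forward(item, training) for item in data)
        if isinstance(data, dict):
            return {k: self.forward(v, training) for k, v in data.items()}
        return data  # leave non-tensor leaves (e.g. integer labels) untouched
```

The tuple branch matches the CIFAR10 case shown in the diff; the list and dict branches just extend the same idea to other common batch formats.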
diff --git a/docs/en/tutorials/param_scheduler.md b/docs/en/tutorials/param_scheduler.md
index 33a6d6ab..e97335bd 100644
--- a/docs/en/tutorials/param_scheduler.md
+++ b/docs/en/tutorials/param_scheduler.md
@@ -11,8 +11,8 @@ We first introduce how to use PyTorch's `torch.optim.lr_scheduler` to adjust lea
 
 How to use PyTorch's builtin learning rate scheduler?
 
-Here is an example which refers from [PyTorch official documentation](https://pytorch.org/docs/stable/optim.html):
+Here is an example adapted from the [PyTorch official documentation](https://pytorch.org/docs/stable/optim.html): Initialize an ExponentialLR object, and call the `step` method after each training epoch.
 
 ```python
diff --git a/docs/zh_cn/tutorials/evaluation.md b/docs/zh_cn/tutorials/evaluation.md
index f19857ac..552fa032 100644
--- a/docs/zh_cn/tutorials/evaluation.md
+++ b/docs/zh_cn/tutorials/evaluation.md
@@ -71,7 +71,7 @@ class SimpleAccuracy(BaseMetric):
     def process(self, data_batch: Sequence[dict], data_samples: Sequence[dict]):
         """Process one batch of data and predictions. The processed
         Results should be stored in `self.results`, which will be used
-        to computed the metrics when all batches have been processed.
+        to compute the metrics when all batches have been processed.
 
         Args:
             data_batch (Sequence[Tuple[Any, dict]]): A batch of data
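As a side note to the `process` docstring fixed above: `process` accumulates per-batch results in `self.results`, and `compute_metrics` reduces them once all batches have been processed. Below is a minimal sketch of that pattern (not part of the patch); the `SketchAccuracy` name and the `pred_label`/`gt_label` keys are assumptions for illustration, not the keys used in evaluation.md.

```python
from typing import Dict, List, Sequence

from mmengine.evaluator import BaseMetric


class SketchAccuracy(BaseMetric):
    """Accumulate per-batch statistics, then reduce them at the end."""

    def process(self, data_batch: Sequence[dict], data_samples: Sequence[dict]):
        # Store only what compute_metrics needs; self.results is collected
        # across processes before compute_metrics is called.
        for sample in data_samples:
            self.results.append({
                'correct': int(sample['pred_label'] == sample['gt_label']),
                'total': 1,
            })

    def compute_metrics(self, results: List[dict]) -> Dict[str, float]:
        total = sum(item['total'] for item in results)
        correct = sum(item['correct'] for item in results)
        return {'accuracy': 100.0 * correct / max(total, 1)}
```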