[Docs] translate 2_data_pipeline.md and 3_new_module.md into Chinese and fix some typos. (#168)

* [Docs] translate 2_data_pipeline.md into Chinese

* [Docs] translate 3_new_module.md into Chinese

* [Docs] Fix typos from py to python
Yun Du 2022-01-10 12:39:14 +08:00 committed by GitHub
parent 86ca16c766
commit 54471dd5ef
12 changed files with 118 additions and 119 deletions

View File

@@ -28,7 +28,7 @@ To write a new dataset, you need to implement:
Assuming the name of your `DataSource` is `NewDataSource`, you can create a file named `new_data_source.py` under `mmselfsup/datasets/data_sources` and implement `NewDataSource` in it.
-```py
+```python
import mmcv
import numpy as np
@@ -49,7 +49,7 @@ class NewDataSource(BaseDataSource):
Then, add `NewDataSource` in `mmselfsup/dataset/data_sources/__init__.py`.
-```py
+```python
from .base import BaseDataSource
...
from .new_data_source import NewDataSource
@@ -63,7 +63,7 @@ __all__ = [
Assuming the name of your `Dataset` is `NewDataset`, you can create a file named `new_dataset.py` under `mmselfsup/datasets` and implement `NewDataset` in it.
-```py
+```python
# Copyright (c) OpenMMLab. All rights reserved.
import torch
from mmcv.utils import build_from_cfg
@@ -89,7 +89,7 @@ class NewDataset(BaseDataset):
Then, add `NewDataset` in `mmselfsup/dataset/__init__.py`.
-```py
+```python
from .base import BaseDataset
...
from .new_dataset import NewDataset
@@ -103,7 +103,7 @@ __all__ = [
To use `NewDataset`, you can modify the config as follows:
-```py
+```python
train=dict(
type='NewDataset',
data_source=dict(
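The hunks above show only fragments of the dataset code. For orientation, here is a minimal sketch of what a complete `NewDataSource` might look like; it assumes the `DATASOURCES` registry and a `load_annotations` interface on `BaseDataSource`, and the annotation-file format is invented for illustration:

```python
# Hedged sketch, not the repository's exact file: assumes BaseDataSource
# exposes self.ann_file / self.data_prefix and expects load_annotations
# to return a list of per-image dicts.
import mmcv
import numpy as np

from ..builder import DATASOURCES
from .base import BaseDataSource


@DATASOURCES.register_module()
class NewDataSource(BaseDataSource):

    def load_annotations(self):
        # Each line of the (hypothetical) annotation file is
        # "<filename> <label>".
        data_infos = []
        for line in mmcv.list_from_file(self.ann_file):
            filename, gt_label = line.strip().rsplit(' ', 1)
            info = {
                'img_prefix': self.data_prefix,
                'img_info': {'filename': filename},
                'gt_label': np.array(gt_label, dtype=np.int64),
            }
            data_infos.append(info)
        return data_infos
```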

View File

@@ -10,7 +10,7 @@
Here is a config example of `Pipeline` for `SimCLR` training:
-```py
+```python
train_pipeline = [
dict(type='RandomResizedCrop', size=224),
dict(type='RandomHorizontalFlip'),
@@ -36,7 +36,7 @@ Every augmentation in the `Pipeline` receives an image as input and outputs an a
1.Write a new transformation function in [transforms.py](../../mmselfsup/datasets/pipelines/transforms.py) and override the `__call__` function, which takes a `Pillow` image as input:
-```py
+```python
@PIPELINES.register_module()
class MyTransform(object):
@@ -47,7 +47,7 @@ class MyTransform(object):
2.Use it in config files. We reuse the config file shown above and add `MyTransform` to it.
-```py
+```python
train_pipeline = [
dict(type='RandomResizedCrop', size=224),
dict(type='RandomHorizontalFlip'),
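The diff truncates the `MyTransform` example; a self-contained sketch of such a transform, assuming the `PIPELINES` registry from `mmselfsup.datasets.builder` and the `Pillow`-image input described above, might look like this:

```python
# Hedged sketch: MyTransform is the tutorial's hypothetical example name,
# not a class shipped with MMSelfSup.
from ..builder import PIPELINES


@PIPELINES.register_module()
class MyTransform(object):
    """Apply a custom transformation to a Pillow image."""

    def __call__(self, img):
        # ... modify `img` here and return the augmented image ...
        return img

    def __repr__(self):
        return self.__class__.__name__
```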

View File

@@ -19,7 +19,7 @@ Assuming we are going to create a customized backbone `CustomizedBackbone`
1.Create a new file `mmselfsup/models/backbones/customized_backbone.py` and implement `CustomizedBackbone` in it.
-```py
+```python
import torch.nn as nn
from ..builder import BACKBONES
@@ -45,7 +45,7 @@ class CustomizedBackbone(nn.Module):
2.Import the customized backbone in `mmselfsup/models/backbones/__init__.py`.
-```py
+```python
from .customized_backbone import CustomizedBackbone
__all__ = [
@@ -55,7 +55,7 @@ __all__ = [
3.Use it in your config file.
-```py
+```python
model = dict(
...
backbone=dict(
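(The snippets above are cut off by the diff view.) As a rough, hedged sketch of the full pattern, a registered backbone could look like the following; the layer choices are illustrative only:

```python
# Illustrative sketch of the registration pattern, not the tutorial's
# exact code (its body is elided as "## TODO").
import torch.nn as nn

from ..builder import BACKBONES


@BACKBONES.register_module()
class CustomizedBackbone(nn.Module):
    """Toy backbone: a single conv stage."""

    def __init__(self, in_channels=3, out_channels=64):
        super(CustomizedBackbone, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, 3, padding=1)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # MMSelfSup backbones conventionally return a tuple of feature maps.
        return (self.relu(self.bn(self.conv(x))),)
```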
@@ -71,7 +71,7 @@ we include all projection heads in `mmselfsup/models/necks`. Assuming we are goi
1.Create a new file `mmselfsup/models/necks/customized_proj_head.py` and implement `CustomizedProjHead` in it.
-```py
+```python
import torch.nn as nn
from mmcv.runner import BaseModule
@@ -92,7 +92,7 @@ You need to implement the forward function, which takes the feature from the bac
2.Import the `CustomizedProjHead` in `mmselfsup/models/necks/__init__`.
-```py
+```python
from .customized_proj_head import CustomizedProjHead
__all__ = [
@@ -104,7 +104,7 @@ __all__ = [
3.Use it in your config file.
-```py
+```python
model = dict(
...,
neck=dict(
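Again the diff elides the class body; a hedged sketch of a projection head that "takes the feature from the backbone and outputs the projected feature" might be:

```python
# Illustrative sketch; the MLP sizes are assumptions, not repository code.
import torch.nn as nn
from mmcv.runner import BaseModule

from ..builder import NECKS


@NECKS.register_module()
class CustomizedProjHead(BaseModule):
    """Toy projection head: global pooling + 2-layer MLP."""

    def __init__(self, in_channels=2048, out_channels=128, init_cfg=None):
        super(CustomizedProjHead, self).__init__(init_cfg)
        self.projection = nn.Sequential(
            nn.Linear(in_channels, in_channels),
            nn.ReLU(inplace=True),
            nn.Linear(in_channels, out_channels))

    def forward(self, x):
        # `x` is the tuple of backbone feature maps; pool the last one.
        feat = x[-1].mean(dim=[2, 3])
        return [self.projection(feat)]
```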
@@ -119,7 +119,7 @@ To add a new loss function, we mainly implement the `forward` function in the lo
1.Create a new file `mmselfsup/models/heads/customized_head.py` and implement your customized `CustomizedHead` in it.
-```py
+```python
import torch
import torch.nn as nn
from mmcv.runner import BaseModule
@@ -142,7 +142,7 @@ class CustomizedHead(BaseModule):
2.Import the module in `mmselfsup/models/heads/__init__.py`
-```py
+```python
from .customized_head import CustomizedHead
__all__ = [..., CustomizedHead, ...]
@@ -150,7 +150,7 @@ __all__ = [..., CustomizedHead, ...]
3.Use it in your config file.
-```py
+```python
model = dict(
...,
head=dict(type='CustomizedHead')
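Since the head's `forward` is what carries the loss, here is a hedged sketch of the pattern; the cross-entropy criterion is a placeholder for illustration, not the tutorial's elided code:

```python
import torch.nn as nn
from mmcv.runner import BaseModule

from ..builder import HEADS


@HEADS.register_module()
class CustomizedHead(BaseModule):
    """Toy head: cross-entropy between predictions and targets."""

    def __init__(self):
        super(CustomizedHead, self).__init__()
        self.criterion = nn.CrossEntropyLoss()

    def forward(self, pred, target):
        # Heads return a dict of named losses.
        return dict(loss=self.criterion(pred, target))
```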
@@ -163,7 +163,7 @@ After creating each component, mentioned above, we need to create a `CustomizedA
1.Create a new file `mmselfsup/models/algorithms/customized_algorithm.py` and implement `CustomizedAlgorithm` in it.
-```py
+```python
# Copyright (c) OpenMMLab. All rights reserved.
import torch
@@ -187,7 +187,7 @@ class CustomizedAlgorithm(BaseModel):
2.Import the module in `mmselfsup/models/algorithms/__init__.py`
-```py
+```python
from .customized_algorithm import CustomizedAlgorithm
__all__ = [..., CustomizedAlgorithm, ...]
@@ -195,7 +195,7 @@ __all__ = [..., CustomizedAlgorithm, ...]
3.Use it in your config file.
-```py
+```python
model = dict(
type='CustomizedAlgorithm',
backbone=...,
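To show how the algorithm wires the components together, here is a hedged sketch assuming the `BaseModel` interface (`extract_feat`/`forward_train`) used by MMSelfSup algorithms; the exact signatures may differ:

```python
# Illustrative sketch only; the real tutorial body is elided as "## TODO".
from ..builder import ALGORITHMS, build_backbone, build_head, build_neck
from .base import BaseModel


@ALGORITHMS.register_module()
class CustomizedAlgorithm(BaseModel):

    def __init__(self, backbone, neck=None, head=None, init_cfg=None):
        super(CustomizedAlgorithm, self).__init__(init_cfg)
        self.backbone = build_backbone(backbone)
        self.neck = build_neck(neck)
        self.head = build_head(head)

    def extract_feat(self, img):
        return self.backbone(img)

    def forward_train(self, img, label, **kwargs):
        x = self.extract_feat(img)       # backbone features
        z = self.neck(x)                 # projected features
        losses = self.head(z[0], label)  # dict of named losses
        return losses
```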

View File

@@ -20,7 +20,7 @@ We already support to use all the optimizers implemented by PyTorch, and to use
For example, if you want to use SGD, the modification could be as follows.
-```py
+```python
optimizer = dict(type='SGD', lr=0.0003, weight_decay=0.0001)
```
@@ -28,7 +28,7 @@ To modify the learning rate of the model, just modify the `lr` in the config of
For example, if you want to use `Adam` with settings like `torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)` in PyTorch, the config should look like:
-```py
+```python
optimizer = dict(type='Adam', lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)
```
@@ -42,7 +42,7 @@ Learning rate decay is widely used to improve performance. And to use learning r
For example, we use the CosineAnnealing policy to train SimCLR, and the config is:
-```py
+```python
lr_config = dict(
policy='CosineAnnealing',
...)
@@ -67,7 +67,7 @@ Here are some examples:
1.linear & warmup by iter
-```py
+```python
lr_config = dict(
policy='CosineAnnealing',
by_epoch=False,
@@ -80,7 +80,7 @@ lr_config = dict(
2.exp & warmup by epoch
-```py
+```python
lr_config = dict(
policy='CosineAnnealing',
min_lr=0,
@@ -98,7 +98,7 @@ Momentum scheduler is usually used with LR scheduler, for example, the following
Here is an example:
-```py
+```python
lr_config = dict(
policy='cyclic',
target_ratio=(10, 1e-4),
@@ -119,7 +119,7 @@ Some models may have some parameter-specific settings for optimization, for exam
For example, if we do not want to apply weight decay to the parameters of BatchNorm or GroupNorm, and the bias in each layer, we can use the following config file:
-```py
+```python
optimizer = dict(
type=...,
lr=...,
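The hunk cuts the config off before the parameter-wise keys. A hedged completion, assuming the regex-keyed `paramwise_options` dict accepted by MMSelfSup 0.x's optimizer builder (double-check the key name against your version):

```python
# Assumed completion of the truncated config above; placeholder values
# stand in for the elided type/lr fields.
optimizer = dict(
    type='SGD',
    lr=0.003,
    weight_decay=0.0001,
    paramwise_options={
        # BatchNorm/GroupNorm weights and biases: no weight decay
        '(bn|gn)(\\d+)?.(weight|bias)': dict(weight_decay=0.),
        # every bias term: no weight decay
        'bias': dict(weight_decay=0.)})
```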
@@ -140,7 +140,7 @@ Currently we support `grad_clip` option in `optimizer_config`, and you can refer
Here is an example:
-```py
+```python
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
# norm_type: type of the used p-norm, here norm_type is 2.
```
@@ -153,14 +153,14 @@ When there is not enough computation resource, the batch size can only be set to
Here is an example:
-```py
+```python
data = dict(imgs_per_gpu=64)
optimizer_config = dict(type="DistOptimizerHook", update_interval=4)
```
This indicates that during training, back-propagation is performed every 4 iterations, giving an effective batch size of 64 × 4 = 256 images per GPU. The above is therefore equivalent to:
-```py
+```python
data = dict(imgs_per_gpu=256)
optimizer_config = dict(type="OptimizerHook")
```
@@ -171,7 +171,7 @@ In academic research and industrial practice, it is likely that you need some op
Implement your `CustomizedOptim` in `mmselfsup/core/optimizer/optimizers.py`
-```py
+```python
import torch
from torch.optim import * # noqa: F401,F403
from torch.optim.optimizer import Optimizer, required
@@ -193,7 +193,7 @@ class CustomizedOptim(Optimizer):
Import it in `mmselfsup/core/optimizer/__init__.py`
-```py
+```python
from .optimizers import CustomizedOptim
from .builder import build_optimizer
@@ -202,7 +202,7 @@ __all__ = ['CustomizedOptim', 'build_optimizer', ...]
Use it in your config file
-```py
+```python
optimizer = dict(
type='CustomizedOptim',
...
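To make the three steps concrete, a minimal skeleton might look as follows; it merely re-implements plain SGD to show the registration pattern, and registering through mmcv's `OPTIMIZERS` registry is an assumption about how MMSelfSup picks the class up:

```python
# Toy optimizer for illustration; not an optimizer shipped with MMSelfSup.
import torch
from mmcv.runner import OPTIMIZERS
from torch.optim.optimizer import Optimizer


@OPTIMIZERS.register_module()
class CustomizedOptim(Optimizer):
    """Plain SGD, shown only for the registration pattern."""

    def __init__(self, params, lr=0.01):
        defaults = dict(lr=lr)
        super(CustomizedOptim, self).__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is not None:
                    # vanilla gradient-descent update
                    p.add_(p.grad, alpha=-group['lr'])
        return loss
```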

View File

@@ -21,13 +21,13 @@ Workflow is a list of (phase, duration) to specify the running order and duratio
For example, we use the epoch-based runner by default, and the "duration" means how many epochs the phase is executed in a cycle. Usually, we only want to execute the training phase, so just use the following config.
-```py
+```python
workflow = [('train', 1)]
```
Sometimes we may want to check some metrics (e.g. loss, accuracy) of the model on the validation set. In such a case, we can set the workflow as
-```py
+```python
[('train', 1), ('val', 1)]
```
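As a hedged illustration of the `val` phase, note that it typically only runs if the data config defines a matching split; all names below are placeholders:

```python
# Hypothetical fragment: the ('val', 1) phase reads the `val` dataset.
data = dict(
    train=dict(type='NewDataset'),
    val=dict(type='NewDataset'))
workflow = [('train', 1), ('val', 1)]
```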

View File

@@ -11,7 +11,7 @@ In MMSelfSup, we provide many benchmarks, thus the models can be evaluated on di
- [Segmentation](#segmentation)
First, you are supposed to extract your backbone weights by `tools/model_converters/extract_backbone_weights.py`
-```
+```shell
python ./tools/misc/extract_backbone_weights.py {CHECKPOINT} {MODEL_FILE}
```
@@ -115,11 +115,10 @@ Remarks:
- `CONFIG`: Use config files under `configs/benchmarks/mmdetection/` or write your own config files
- `PRETRAIN`: the pretrained model file.
Or if you want to do the detection task with [detectron2](https://github.com/facebookresearch/detectron2), we also provide some config files.
Please refer to [INSTALL.md](https://github.com/facebookresearch/detectron2/blob/main/INSTALL.md) for installation and follow the [directory structure](https://github.com/facebookresearch/detectron2/tree/main/datasets) to prepare your datasets required by detectron2.
-```
+```shell
conda activate detectron2 # use detectron2 environment here, otherwise use open-mmlab environment
cd benchmarks/detection
python convert-pretrain-to-detectron2.py ${WEIGHT_FILE} ${OUTPUT_FILE} # must use .pkl as the output extension.

View File

@@ -28,7 +28,7 @@
假设你基于父类 `DataSource` 创建的子类名为 `NewDataSource`,你可以在 `mmselfsup/datasets/data_sources` 目录下创建一个文件,文件名为 `new_data_source.py`,并在这个文件中完成 `NewDataSource` 的创建。
-```py
+```python
import mmcv
import numpy as np
@@ -49,7 +49,7 @@ class NewDataSource(BaseDataSource):
然后,在 `mmselfsup/dataset/data_sources/__init__.py` 中添加 `NewDataSource`。
-```py
+```python
from .base import BaseDataSource
...
from .new_data_source import NewDataSource
@@ -63,7 +63,7 @@ __all__ = [
假设你基于父类 `Dataset` 创建的子类名为 `NewDataset`,你可以在 `mmselfsup/datasets` 目录下创建一个文件,文件名为 `new_dataset.py`,并在这个文件中完成 `NewDataset` 的创建。
-```py
+```python
# Copyright (c) OpenMMLab. All rights reserved.
import torch
from mmcv.utils import build_from_cfg
@@ -89,7 +89,7 @@ class NewDataset(BaseDataset):
然后,在 `mmselfsup/dataset/__init__.py` 中添加 `NewDataset`。
-```py
+```python
from .base import BaseDataset
...
from .new_dataset import NewDataset
@@ -103,7 +103,7 @@ __all__ = [
为了使用 `NewDataset`,你可以修改配置如下:
-```py
+```python
train=dict(
type='NewDataset',
data_source=dict(

View File

@@ -1,16 +1,16 @@
-# Tutorial 2: Customize Data Pipelines
+# 教程 2:自定义数据管道
-- [Tutorial 2: Customize Data Pipelines](#tutorial-2-customize-data-pipelines)
-  - [Overview of `Pipeline`](#overview-of-pipeline)
-  - [Creating new augmentations in `Pipeline`](#creating-new-augmentations-in-pipeline)
+- [教程 2:自定义数据管道](#教程-2-自定义数据管道)
+  - [`Pipeline` 概览](#Pipeline-概览)
+  - [在 `Pipeline` 中创建新的数据增强](#在-Pipeline-中创建新的数据增强)
-## Overview of `Pipeline`
+## `Pipeline` 概览
-`DataSource` and `Pipeline` are two important components in `Dataset`. We have introduced `DataSource` in [add_new_dataset](./1_new_dataset.md). And the `Pipeline` is responsible for applying a series of data augmentations to images, such as random flip.
+`DataSource` 和 `Pipeline` 是 `Dataset` 的两个重要组件。我们已经在 [add_new_dataset](./1_new_dataset.md) 中介绍了 `DataSource`,`Pipeline` 负责对图像进行一系列的数据增强,例如随机翻转。
-Here is a config example of `Pipeline` for `SimCLR` training:
+这是用于 `SimCLR` 训练的 `Pipeline` 的配置示例:
-```py
+```python
train_pipeline = [
dict(type='RandomResizedCrop', size=224),
dict(type='RandomHorizontalFlip'),
@@ -30,13 +30,13 @@ train_pipeline = [
]
```
-Every augmentation in the `Pipeline` receives an image as input and outputs an augmented image.
+`Pipeline` 中的每个增强都接收一张图像作为输入,并输出一张增强后的图像。
-## Creating new augmentations in `Pipeline`
+## 在 `Pipeline` 中创建新的数据增强
-1.Write a new transformation function in [transforms.py](../../mmselfsup/datasets/pipelines/transforms.py) and override the `__call__` function, which takes a `Pillow` image as input:
+1.在 [transforms.py](../../mmselfsup/datasets/pipelines/transforms.py) 中编写一个新的数据增强函数,并重写 `__call__` 函数,该函数接收一张 `Pillow` 图像作为输入:
-```py
+```python
@PIPELINES.register_module()
class MyTransform(object):
@@ -45,9 +45,9 @@ class MyTransform(object):
return img
```
-2.Use it in config files. We reuse the config file shown above and add `MyTransform` to it.
+2.在配置文件中使用它。我们重新使用上面的配置文件,并在其中添加 `MyTransform`。
-```py
+```python
train_pipeline = [
dict(type='RandomResizedCrop', size=224),
dict(type='RandomHorizontalFlip'),
@@ -66,4 +66,4 @@ train_pipeline = [
dict(type='RandomGrayscale', p=0.2),
dict(type='GaussianBlur', sigma_min=0.1, sigma_max=2.0, p=0.5)
]
-```
+```

View File

@@ -1,25 +1,25 @@
-# Tutorial 3: Adding New Modules
+# 教程 3:添加新的模块
-- [Tutorial 3: Adding New Modules](#tutorial-3-adding-new-modules)
-  - [Add new backbone](#add-new-backbone)
-  - [Add new necks](#add-new-necks)
-  - [Add new loss](#add-new-loss)
-  - [Combine all](#combine-all)
+- [教程 3:添加新的模块](#教程-3-添加新的模块)
+  - [添加新的 backbone](#添加新的-backbone)
+  - [添加新的 Necks](#添加新的-Necks)
+  - [添加新的损失](#添加新的损失)
+  - [合并所有改动](#合并所有改动)
-In self-supervised learning domain, each model can be divided into following four parts:
+在自监督学习领域,每个模型可以被分为以下四个部分:
-- backbone: used to extract image features
-- projection head: projects the feature extracted by the backbone to another space
-- loss: loss function the model will optimize
-- memory bank (optional): some methods, e.g. `odc`, need an extra memory bank to store image features.
+- backbone:用于提取图像特征。
+- projection head:将 backbone 提取的特征映射到另一空间。
+- loss:用于模型优化的损失函数。
+- memory bank(可选):一些方法(例如 `odc`)需要额外的 memory bank 用于存储图像特征。
-## Add new backbone
+## 添加新的 backbone
-Assuming we are going to create a customized backbone `CustomizedBackbone`
+假设我们要创建一个自定义的 backbone `CustomizedBackbone`
-1.Create a new file `mmselfsup/models/backbones/customized_backbone.py` and implement `CustomizedBackbone` in it.
+1.创建新文件 `mmselfsup/models/backbones/customized_backbone.py` 并在其中实现 `CustomizedBackbone`。
-```py
+```python
import torch.nn as nn
from ..builder import BACKBONES
@@ -43,9 +43,9 @@ class CustomizedBackbone(nn.Module):
## TODO
```
-2.Import the customized backbone in `mmselfsup/models/backbones/__init__.py`.
+2.在 `mmselfsup/models/backbones/__init__.py` 中导入自定义的 backbone。
-```py
+```python
from .customized_backbone import CustomizedBackbone
__all__ = [
@@ -53,9 +53,9 @@ __all__ = [
]
```
-3.Use it in your config file.
+3.在你的配置文件中使用它。
-```py
+```python
model = dict(
...
backbone=dict(
@@ -65,13 +65,13 @@ model = dict(
)
```
-## Add new necks
+## 添加新的 Necks
-we include all projection heads in `mmselfsup/models/necks`. Assuming we are going to create a `CustomizedProjHead`.
+我们在 `mmselfsup/models/necks` 中包含了所有的 projection heads。假设我们要创建一个 `CustomizedProjHead`。
-1.Create a new file `mmselfsup/models/necks/customized_proj_head.py` and implement `CustomizedProjHead` in it.
+1.创建一个新文件 `mmselfsup/models/necks/customized_proj_head.py` 并在其中实现 `CustomizedProjHead`。
-```py
+```python
import torch.nn as nn
from mmcv.runner import BaseModule
@@ -88,11 +88,11 @@ class CustomizedProjHead(BaseModule):
## TODO
```
-You need to implement the forward function, which takes the feature from the backbone and outputs the projected feature.
+你需要实现前向函数,该函数从 backbone 中获取特征,并输出映射后的特征。
-2.Import the `CustomizedProjHead` in `mmselfsup/models/necks/__init__`.
+2.在 `mmselfsup/models/necks/__init__` 中导入 `CustomizedProjHead`。
-```py
+```python
from .customized_proj_head import CustomizedProjHead
__all__ = [
@@ -102,9 +102,9 @@ __all__ = [
]
```
-3.Use it in your config file.
+3.在你的配置文件中使用它。
-```py
+```python
model = dict(
...,
neck=dict(
@@ -113,13 +113,13 @@ model = dict(
...)
```
-## Add new loss
+## 添加新的损失
-To add a new loss function, we mainly implement the `forward` function in the loss module.
+为了增加一个新的损失函数,我们主要在损失模块中实现 `forward` 函数。
-1.Create a new file `mmselfsup/models/heads/customized_head.py` and implement your customized `CustomizedHead` in it.
+1.创建一个新的文件 `mmselfsup/models/heads/customized_head.py` 并在其中实现你自定义的 `CustomizedHead`。
-```py
+```python
import torch
import torch.nn as nn
from mmcv.runner import BaseModule
@@ -140,30 +140,30 @@ class CustomizedHead(BaseModule):
## TODO
```
-2.Import the module in `mmselfsup/models/heads/__init__.py`
+2.在 `mmselfsup/models/heads/__init__.py` 中导入该模块。
-```py
+```python
from .customized_head import CustomizedHead
__all__ = [..., CustomizedHead, ...]
```
-3.Use it in your config file.
+3.在你的配置文件中使用它。
-```py
+```python
model = dict(
...,
head=dict(type='CustomizedHead')
)
```
-## Combine all
+## 合并所有改动
-After creating each component mentioned above, we need to create a `CustomizedAlgorithm` to organize them logically. And the `CustomizedAlgorithm` takes raw images as inputs and outputs the loss to the optimizer.
+在创建了上述每个组件后,我们需要创建一个 `CustomizedAlgorithm` 来有逻辑地将它们组织到一起。`CustomizedAlgorithm` 接收原始图像作为输入,并将损失输出给优化器。
-1.Create a new file `mmselfsup/models/algorithms/customized_algorithm.py` and implement `CustomizedAlgorithm` in it.
+1.创建一个新文件 `mmselfsup/models/algorithms/customized_algorithm.py` 并在其中实现 `CustomizedAlgorithm`。
-```py
+```python
# Copyright (c) OpenMMLab. All rights reserved.
import torch
@@ -185,17 +185,17 @@ class CustomizedAlgorithm(BaseModel):
## TODO
```
-2.Import the module in `mmselfsup/models/algorithms/__init__.py`
+2.在 `mmselfsup/models/algorithms/__init__.py` 中导入该模块。
-```py
+```python
from .customized_algorithm import CustomizedAlgorithm
__all__ = [..., CustomizedAlgorithm, ...]
```
-3.Use it in your config file.
+3.在你的配置文件中使用它。
-```py
+```python
model = dict(
type='CustomizedAlgorithm',
backbone=...,

View File

@@ -20,7 +20,7 @@ We already support to use all the optimizers implemented by PyTorch, and to use
For example, if you want to use SGD, the modification could be as follows.
-```py
+```python
optimizer = dict(type='SGD', lr=0.0003, weight_decay=0.0001)
```
@@ -28,7 +28,7 @@ To modify the learning rate of the model, just modify the `lr` in the config of
For example, if you want to use `Adam` with settings like `torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)` in PyTorch, the config should look like:
-```py
+```python
optimizer = dict(type='Adam', lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)
```
@@ -42,7 +42,7 @@ Learning rate decay is widely used to improve performance. And to use learning r
For example, we use the CosineAnnealing policy to train SimCLR, and the config is:
-```py
+```python
lr_config = dict(
policy='CosineAnnealing',
...)
@@ -67,7 +67,7 @@ Here are some examples:
1.linear & warmup by iter
-```py
+```python
lr_config = dict(
policy='CosineAnnealing',
by_epoch=False,
@@ -80,7 +80,7 @@ lr_config = dict(
2.exp & warmup by epoch
-```py
+```python
lr_config = dict(
policy='CosineAnnealing',
min_lr=0,
@@ -98,7 +98,7 @@ Momentum scheduler is usually used with LR scheduler, for example, the following
Here is an example:
-```py
+```python
lr_config = dict(
policy='cyclic',
target_ratio=(10, 1e-4),
@@ -119,7 +119,7 @@ Some models may have some parameter-specific settings for optimization, for exam
For example, if we do not want to apply weight decay to the parameters of BatchNorm or GroupNorm, and the bias in each layer, we can use the following config file:
-```py
+```python
optimizer = dict(
type=...,
lr=...,
@@ -140,7 +140,7 @@ Currently we support `grad_clip` option in `optimizer_config`, and you can refer
Here is an example:
-```py
+```python
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
# norm_type: type of the used p-norm, here norm_type is 2.
```
@@ -153,14 +153,14 @@ When there is not enough computation resource, the batch size can only be set to
Here is an example:
-```py
+```python
data = dict(imgs_per_gpu=64)
optimizer_config = dict(type="DistOptimizerHook", update_interval=4)
```
This indicates that during training, back-propagation is performed every 4 iterations, giving an effective batch size of 64 × 4 = 256 images per GPU. The above is therefore equivalent to:
-```py
+```python
data = dict(imgs_per_gpu=256)
optimizer_config = dict(type="OptimizerHook")
```
@@ -171,7 +171,7 @@ In academic research and industrial practice, it is likely that you need some op
Implement your `CustomizedOptim` in `mmselfsup/core/optimizer/optimizers.py`
-```py
+```python
import torch
from torch.optim import * # noqa: F401,F403
from torch.optim.optimizer import Optimizer, required
@@ -193,7 +193,7 @@ class CustomizedOptim(Optimizer):
Import it in `mmselfsup/core/optimizer/__init__.py`
-```py
+```python
from .optimizers import CustomizedOptim
from .builder import build_optimizer
@@ -202,7 +202,7 @@ __all__ = ['CustomizedOptim', 'build_optimizer', ...]
Use it in your config file
-```py
+```python
optimizer = dict(
type='CustomizedOptim',
...

View File

@@ -21,13 +21,13 @@ Workflow is a list of (phase, duration) to specify the running order and duratio
For example, we use the epoch-based runner by default, and the "duration" means how many epochs the phase is executed in a cycle. Usually, we only want to execute the training phase, so just use the following config.
-```py
+```python
workflow = [('train', 1)]
```
Sometimes we may want to check some metrics (e.g. loss, accuracy) of the model on the validation set. In such a case, we can set the workflow as
-```py
+```python
[('train', 1), ('val', 1)]
```

View File

@@ -11,7 +11,7 @@ In MMSelfSup, we provide many benchmarks, thus the models can be evaluated on di
- [Segmentation](#segmentation)
First, you are supposed to extract your backbone weights by `tools/model_converters/extract_backbone_weights.py`
-```
+```shell
python ./tools/misc/extract_backbone_weights.py {CHECKPOINT} {MODEL_FILE}
```
@@ -119,7 +119,7 @@ Remarks:
Or if you want to do the detection task with [detectron2](https://github.com/facebookresearch/detectron2), we also provide some config files.
Please refer to [INSTALL.md](https://github.com/facebookresearch/detectron2/blob/main/INSTALL.md) for installation and follow the [directory structure](https://github.com/facebookresearch/detectron2/tree/main/datasets) to prepare your datasets required by detectron2.
-```
+```shell
conda activate detectron2 # use detectron2 environment here, otherwise use open-mmlab environment
cd benchmarks/detection
python convert-pretrain-to-detectron2.py ${WEIGHT_FILE} ${OUTPUT_FILE} # must use .pkl as the output extension.