Mirror of https://github.com/open-mmlab/mmfewshot.git (synced 2025-06-03 14:49:43 +08:00)

Commit 3f5ea7524f: Merge branch 'main' into pr/53
.github/workflows/build.yml (vendored): 38 lines changed
@@ -223,3 +223,41 @@ jobs:
env_vars: OS,PYTHON
name: codecov-umbrella
fail_ci_if_error: false

test_windows:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [windows-2022]
python: [3.8]
platform: [cpu, cu102]
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python }}
- name: Upgrade pip
run: pip install pip --upgrade --user
- name: Install OpenCV
run: pip install opencv-python>=3
- name: Install PyTorch
# As a complement to Linux CI, we test on PyTorch LTS version
run: pip install torch==1.8.2+${{ matrix.platform }} torchvision==0.9.2+${{ matrix.platform }} -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
- name: Install MMCV
run: |
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cpu/torch1.8/index.html --only-binary mmcv-full
- name: Install MMCLS and MMDET
run: pip install mmcls mmdet
- name: Install unittest dependencies
run: pip install -r requirements/tests.txt -r requirements/optional.txt
- name: Build and install
run: pip install -e .
- name: Run unittests
run: |
python -m pip install timm
coverage run --branch --source mmfewshot -m pytest tests/
- name: Generate coverage report
run: |
coverage xml
coverage report -m
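The new Windows job pins the PyTorch 1.8.2 LTS wheels. As a quick local sanity check, a minimal sketch (assuming torch, torchvision, and mmcv-full are already installed) is to compare the installed versions against what the job above pins:

```python
# Minimal sketch: confirm the local environment matches the versions the
# Windows CI job pins (torch 1.8.2 LTS, torchvision 0.9.2).
import mmcv
import torch
import torchvision

print(torch.__version__)        # expected to start with '1.8.2'
print(torchvision.__version__)  # expected to start with '0.9.2'
print(mmcv.__version__)         # any mmcv-full build matching the torch above
```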
README.md: 19 lines changed
@@ -147,18 +147,21 @@ mmfewshot is an open source project that is contributed by researchers and engin

## Projects in OpenMMLab

- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
- [MIM](https://github.com/open-mmlab/mim): MIM Installs OpenMMLab Packages.
- [MIM](https://github.com/open-mmlab/mim): MIM installs OpenMMLab packages.
- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark.
- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.
- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.
- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.
- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.
- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.
- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.
- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.
- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark.
- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.
- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
- [MMOCR](https://github.com/open-mmlab/mmocr): A comprehensive toolbox for text detection, recognition and understanding.
- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab's next-generation toolbox for generative models.
- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark.
- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.
- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox.
- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.
@@ -147,21 +147,24 @@ MMFewShot is an open source project jointly contributed by different universities and companies. We

## Other Projects in OpenMMLab

- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision
- [MIM](https://github.com/open-mmlab/mim): MIM is the unified entry point for OpenMMLab projects, algorithms, and models
- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark
- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark
- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection
- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark
- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation video understanding toolbox and benchmark
- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab unified video perception platform
- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark
- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox
- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab full-pipeline text detection, recognition, and understanding toolkit
- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab's next-generation toolbox for generative models
- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark
- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab few-shot learning toolbox and benchmark
- [MIM](https://github.com/open-mmlab/mim): MIM is the unified entry point for OpenMMLab projects, algorithms, and models
- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox
- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab object detection toolbox
- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection
- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark
- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox
- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab full-pipeline text detection, recognition, and understanding toolbox
- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox
- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark
- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark
- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark
- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab few-shot learning toolbox and benchmark
- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation video understanding toolbox
- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab unified video perception platform
- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark
- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox
- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox
- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework

## Welcome to the OpenMMLab Community
@@ -1,6 +1,7 @@
## Test a model

- single GPU
- CPU
- single node multiple GPU
- multiple node

@@ -10,6 +11,10 @@ You can use the following commands to infer a dataset.
# single-gpu
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]

# CPU: disable GPUs and run single-gpu testing script
export CUDA_VISIBLE_DEVICES=-1
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]

# multi-gpu
./tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [optional arguments]
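For readers who launch evaluation from Python rather than a shell, a minimal equivalent of the CPU testing route shown above might look like the sketch below (CONFIG_FILE and CHECKPOINT_FILE are placeholders, not real paths):

```python
# Minimal sketch of the CPU testing path, driven from Python.
# CONFIG_FILE and CHECKPOINT_FILE are placeholders for real paths.
import os
import subprocess

os.environ['CUDA_VISIBLE_DEVICES'] = '-1'  # hide all GPUs so testing runs on CPU
subprocess.run(
    ['python', 'tools/test.py', 'CONFIG_FILE', 'CHECKPOINT_FILE'],
    check=True)
```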
@@ -46,6 +51,20 @@ python tools/train.py ${CONFIG_FILE} [optional arguments]

If you want to specify the working directory in the command, you can add an argument `--work_dir ${YOUR_WORK_DIR}`.

### Train on CPU

Training on the CPU follows the same process as single-GPU training; just disable the GPUs before starting.

```shell
export CUDA_VISIBLE_DEVICES=-1
python tools/train.py ${CONFIG_FILE} [optional arguments]
```

**Note**:

We do not recommend training on CPU because it is too slow; this feature exists so that users can conveniently debug on machines without a GPU.


### Train with multiple GPUs

```shell
@@ -1,6 +1,7 @@
## Test a model

- single GPU
- CPU
- single node multiple GPU
- multiple node

@@ -10,6 +11,10 @@ You can use the following commands to infer a dataset.
# single-gpu
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]

# CPU: disable GPUs and run single-gpu testing script
export CUDA_VISIBLE_DEVICES=-1
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]

# multi-gpu
./tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [optional arguments]

@@ -46,6 +51,20 @@ python tools/train.py ${CONFIG_FILE} [optional arguments]

If you want to specify the working directory in the command, you can add an argument `--work_dir ${YOUR_WORK_DIR}`.

### Train on CPU

Training on the CPU follows the same process as single-GPU training; just disable the GPUs before starting.

```shell
export CUDA_VISIBLE_DEVICES=-1
python tools/train.py ${CONFIG_FILE} [optional arguments]
```

**Note**:

We do not recommend training on CPU because it is too slow; this feature exists so that users can conveniently debug on machines without a GPU.


### Train with multiple GPUs

```shell
@@ -32,7 +32,7 @@ assert (digit_version(mmcv_minimum_version) <= mmcv_version
f'Please install mmcv>={mmcv_minimum_version}, <={mmcv_maximum_version}.'

mmdet_minimum_version = '2.16.0'
mmdet_maximum_version = '2.21.0'
mmdet_maximum_version = '2.23.0'
mmdet_version = digit_version(mmdet.__version__)
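The raised ceiling only matters at import time, when the installed mmdet is compared against this range with `digit_version`. A standalone sketch of that comparison, using `digit_version` from `mmcv.utils` and a hypothetical installed version:

```python
# Standalone illustration of the version-range check adjusted above.
# '2.22.0' stands in for whatever mmdet is actually installed.
from mmcv.utils import digit_version

mmdet_minimum_version = '2.16.0'
mmdet_maximum_version = '2.23.0'
installed_mmdet = '2.22.0'

assert (digit_version(mmdet_minimum_version) <= digit_version(installed_mmdet)
        <= digit_version(mmdet_maximum_version)), (
    f'Please install mmdet>={mmdet_minimum_version}, '
    f'<={mmdet_maximum_version}.')
```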
@@ -1,4 +1,5 @@
# Copyright (c) OpenMMLab. All rights reserved.
import warnings
from typing import Dict, Union

import torch
@@ -22,7 +23,7 @@ def train_model(model: Union[MMDataParallel, MMDistributedDataParallel],
distributed: bool = False,
validate: bool = False,
timestamp: str = None,
device: str = 'cuda',
device: str = None,
meta: Dict = None) -> None:
logger = get_root_logger(log_level=cfg.log_level)

@@ -54,13 +55,14 @@ def train_model(model: Union[MMDataParallel, MMDistributedDataParallel],
broadcast_buffers=False,
find_unused_parameters=find_unused_parameters)
else:
if device == 'cuda':
model = MMDataParallel(
model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)
elif device == 'cpu':
if device == 'cpu':
warnings.warn(
'The argument `device` is deprecated. To use cpu to train, '
'please refers to https://mmclassification.readthedocs.io/en'
'/latest/getting_started.html#train-a-model')
model = model.cpu()
else:
raise ValueError(F'unsupported device name {device}.')
model = MMDataParallel(model, device_ids=cfg.gpu_ids)

# build runner
optimizer = build_optimizer(model, cfg.optimizer)
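A minimal standalone sketch of the deprecation pattern introduced here (the function name and warning text are illustrative, not the project's exact code): `device` now defaults to `None`, and passing `'cpu'` only triggers a warning pointing users to the CUDA_VISIBLE_DEVICES route.

```python
import warnings

def wrap_for_training(model, device=None):
    # Illustrative only: `device` is kept for backward compatibility.
    if device == 'cpu':
        warnings.warn(
            'The argument `device` is deprecated. To train on CPU, hide the '
            'GPUs with CUDA_VISIBLE_DEVICES=-1 instead.', DeprecationWarning)
        model = model.cpu()
    elif device not in (None, 'cuda'):
        raise ValueError(f'unsupported device name {device}.')
    return model
```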
@@ -55,8 +55,8 @@ def train_detector(model: nn.Module,
broadcast_buffers=False,
find_unused_parameters=find_unused_parameters)
else:
model = MMDataParallel(
model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)
# Please use MMCV >= 1.4.4 for CPU training!
model = MMDataParallel(model, device_ids=cfg.gpu_ids)

# build runner
optimizer = build_optimizer(model, cfg.optimizer)
@@ -1,5 +1,6 @@
# Copyright (c) OpenMMLab. All rights reserved.
import copy
import os
import tempfile

import numpy as np
@@ -144,10 +145,10 @@ def test_few_shot_coco_dataset():

# test save and load dataset
with tempfile.TemporaryDirectory() as tmpdir:
dataset.save_data_infos(tmpdir + 'ann.json')
dataset.save_data_infos(tmpdir + f'{os.sep}ann.json')
data_config['ann_cfg'] = [{
'type': 'saved_dataset',
'ann_file': tmpdir + 'ann.json'
'ann_file': tmpdir + f'{os.sep}ann.json'
}]
dataset = FewShotCocoDataset(**data_config)
count = 0
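The `os.sep` change matters because `tempfile.TemporaryDirectory()` yields a path without a trailing separator, so plain concatenation writes a sibling file next to the temporary directory rather than a file inside it. A small standalone illustration:

```python
# Standalone illustration of why the os.sep fix is needed; not project code.
import os
import tempfile

with tempfile.TemporaryDirectory() as tmpdir:
    print(tmpdir + 'ann.json')               # e.g. /tmp/tmpabc123ann.json -> outside tmpdir
    print(tmpdir + f'{os.sep}ann.json')      # e.g. /tmp/tmpabc123/ann.json -> inside tmpdir
    print(os.path.join(tmpdir, 'ann.json'))  # equivalent, platform-safe spelling
```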
@@ -1,5 +1,6 @@
# Copyright (c) OpenMMLab. All rights reserved.
import copy
import os
import tempfile

import numpy as np
@@ -108,10 +109,10 @@ def test_few_shot_voc_dataset():
dataset.data_infos[1]['ann']['bboxes_ignore'] = np.array(
[[11, 11, 100, 100]])
dataset.data_infos[1]['ann']['labels_ignore'] = np.array([0])
dataset.save_data_infos(tmpdir + 'ann.json')
dataset.save_data_infos(tmpdir + f'{os.sep}ann.json')
data_config['ann_cfg'] = [{
'type': 'saved_dataset',
'ann_file': tmpdir + 'ann.json'
'ann_file': tmpdir + f'{os.sep}ann.json'
}]
dataset = FewShotVOCDataset(**data_config)
count = 0
@@ -1,4 +1,5 @@
# Copyright (c) OpenMMLab. All rights reserved.
import os
import tempfile

import numpy as np
@@ -85,4 +86,4 @@ def test_nway_kshot_dataset():
assert count <= 1
# test save dataset
with tempfile.TemporaryDirectory() as tmpdir:
nway_kshot_dataset.save_data_infos(tmpdir + 'ann.json')
nway_kshot_dataset.save_data_infos(tmpdir + f'{os.sep}ann.json')
@@ -1,4 +1,5 @@
# Copyright (c) OpenMMLab. All rights reserved.
import os
import tempfile

import numpy as np
@@ -115,4 +116,4 @@ def test_query_aware_dataset():

# test save dataset
with tempfile.TemporaryDirectory() as tmpdir:
query_aware_dataset.save_data_infos(tmpdir + 'ann.json')
query_aware_dataset.save_data_infos(tmpdir + f'{os.sep}ann.json')
@@ -1,4 +1,5 @@
# Copyright (c) OpenMMLab. All rights reserved.
import os
import tempfile

from mmdet.apis import set_random_seed
@@ -45,4 +46,4 @@ def test_two_branch_dataset():
assert len(two_branch_dataset) == 25
# test save dataset
with tempfile.TemporaryDirectory() as tmpdir:
two_branch_dataset.save_data_infos(tmpdir + 'ann.json')
two_branch_dataset.save_data_infos(tmpdir + f'{os.sep}ann.json')
@@ -3,6 +3,7 @@ import argparse
import os
import os.path as osp
import time
import warnings

import mmcv
import torch
@@ -64,10 +65,19 @@ def parse_args():
help='whether to set deterministic options for CUDNN backend.')
parser.add_argument('--local_rank', type=int, default=0)
parser.add_argument(
'--device',
choices=['cpu', 'cuda'],
default='cuda',
help='device used for testing')
'--device', default=None, help='device used for testing. (Deprecated)')
parser.add_argument(
'--gpu-ids',
type=int,
nargs='+',
help='(Deprecated, please use --gpu-id) ids of gpus to use '
'(only applicable to non-distributed testing)')
parser.add_argument(
'--gpu-id',
type=int,
default=0,
help='id of gpu to use '
'(only applicable to non-distributed testing)')
parser.add_argument(
'--show_task_results',
action='store_true',
@@ -75,6 +85,15 @@ def parse_args():
args = parser.parse_args()
if 'LOCAL_RANK' not in os.environ:
os.environ['LOCAL_RANK'] = str(args.local_rank)

if args.device:
warnings.warn(
'--device is deprecated. To use cpu to test, please '
'refers to https://mmclassification.readthedocs.io/en/latest/'
'getting_started.html#inference-with-pretrained-models')

assert args.metrics or args.out, \
'Please specify at least one of output path and evaluation metrics.'
return args


@@ -96,7 +115,14 @@ def main():
# use config filename as default work_dir if cfg.work_dir is None
cfg.work_dir = osp.join('./work_dirs',
osp.splitext(osp.basename(args.config))[0])

if args.gpu_ids is not None:
cfg.gpu_ids = args.gpu_ids[0:1]
warnings.warn('`--gpu-ids` is deprecated, please use `--gpu-id`. '
'Because we only support single GPU mode in '
'non-distributed testing. Use the first GPU '
'in `gpu_ids` now.')
else:
cfg.gpu_ids = [args.gpu_id]
# init distributed env first, since logger depends on the dist info.
if args.launcher == 'none':
distributed = False
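The fallback between the deprecated `--gpu-ids` flag and the new `--gpu-id` flag can be reproduced in isolation; a minimal standalone sketch (argument names mirror the diff, the command-line values are hypothetical):

```python
# Standalone sketch of the --gpu-ids / --gpu-id fallback introduced above.
import argparse
import warnings

parser = argparse.ArgumentParser()
parser.add_argument('--gpu-ids', type=int, nargs='+')
parser.add_argument('--gpu-id', type=int, default=0)
args = parser.parse_args(['--gpu-ids', '2', '3'])  # hypothetical CLI input

if args.gpu_ids is not None:
    gpu_ids = args.gpu_ids[0:1]  # only the first listed GPU is kept
    warnings.warn('`--gpu-ids` is deprecated, please use `--gpu-id`.')
else:
    gpu_ids = [args.gpu_id]

print(gpu_ids)  # [2]
```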
@@ -4,6 +4,7 @@ import copy
import os
import os.path as osp
import time
import warnings

import cv2
import mmcv
@@ -45,7 +46,13 @@ def parse_args():
'--gpu-ids',
type=int,
nargs='+',
help='ids of gpus to use '
help='(Deprecated, please use --gpu-id) ids of gpus to use '
'(only applicable to non-distributed training)')
group_gpus.add_argument(
'--gpu-id',
type=int,
default=0,
help='id of gpu to use '
'(only applicable to non-distributed training)')
parser.add_argument('--seed', type=int, default=None, help='random seed')
parser.add_argument(
@@ -87,10 +94,19 @@ def main():
osp.splitext(osp.basename(args.config))[0])
if args.resume_from is not None:
cfg.resume_from = args.resume_from
if args.gpus is not None:
cfg.gpu_ids = range(1)
warnings.warn('`--gpus` is deprecated because we only support '
'single GPU mode in non-distributed training. '
'Use `gpus=1` now.')
if args.gpu_ids is not None:
cfg.gpu_ids = args.gpu_ids
else:
cfg.gpu_ids = range(1) if args.gpus is None else range(args.gpus)
cfg.gpu_ids = args.gpu_ids[0:1]
warnings.warn('`--gpu-ids` is deprecated, please use `--gpu-id`. '
'Because we only support single GPU mode in '
'non-distributed training. Use the first GPU '
'in `gpu_ids` now.')
if args.gpus is None and args.gpu_ids is None:
cfg.gpu_ids = [args.gpu_id]

# init distributed env first, since logger depends on the dist info.
if args.launcher == 'none':
@@ -27,6 +27,18 @@ def parse_args():
nargs='+',
help='evaluation metrics, which depends on the dataset, e.g., "bbox",'
' "segm", "proposal" for COCO, and "mAP", "recall" for PASCAL VOC')
parser.add_argument(
'--gpu-ids',
type=int,
nargs='+',
help='(Deprecated, please use --gpu-id) ids of gpus to use '
'(only applicable to non-distributed training)')
parser.add_argument(
'--gpu-id',
type=int,
default=0,
help='id of gpu to use '
'(only applicable to non-distributed testing)')
parser.add_argument('--show', action='store_true', help='show results')
parser.add_argument(
'--show-dir', help='directory where painted images will be saved')
@@ -116,7 +128,14 @@ def main():
# currently only support single images testing
samples_per_gpu = cfg.data.test.pop('samples_per_gpu', 1)
assert samples_per_gpu == 1, 'currently only support single images testing'

if args.gpu_ids is not None:
cfg.gpu_ids = args.gpu_ids[0:1]
warnings.warn('`--gpu-ids` is deprecated, please use `--gpu-id`. '
'Because we only support single GPU mode in '
'non-distributed testing. Use the first GPU '
'in `gpu_ids` now.')
else:
cfg.gpu_ids = [args.gpu_id]
# init distributed env first, since logger depends on the dist info.
if args.launcher == 'none':
distributed = False
@@ -176,7 +195,8 @@ def main():
shuffle=False)

if not distributed:
model = MMDataParallel(model, device_ids=[0])
# Please use MMCV >= 1.4.4 for CPU testing!
model = MMDataParallel(model, device_ids=cfg.gpu_ids)
show_kwargs = dict(show_score_thr=args.show_score_thr)
if cfg.data.get('model_init', None) is not None:
from mmfewshot.detection.apis import (single_gpu_model_init,
@@ -48,8 +48,14 @@ def parse_args():
'--gpu-ids',
type=int,
nargs='+',
help='ids of gpus to use '
help='(Deprecated, please use --gpu-id) ids of gpus to use '
'(only applicable to non-distributed training)')
parser.add_argument(
'--gpu-id',
type=int,
default=0,
help='id of gpu to use '
'(only applicable to non-distributed testing)')
parser.add_argument('--seed', type=int, default=None, help='random seed')
parser.add_argument(
'--deterministic',
@@ -119,15 +125,24 @@ def main():
osp.splitext(osp.basename(args.config))[0])
if args.resume_from is not None:
cfg.resume_from = args.resume_from
if args.gpus is not None:
cfg.gpu_ids = range(1)
warnings.warn('`--gpus` is deprecated because we only support '
'single GPU mode in non-distributed training. '
'Use `gpus=1` now.')
if args.gpu_ids is not None:
cfg.gpu_ids = args.gpu_ids
else:
cfg.gpu_ids = range(1) if args.gpus is None else range(args.gpus)
cfg.gpu_ids = args.gpu_ids[0:1]
warnings.warn('`--gpu-ids` is deprecated, please use `--gpu-id`. '
'Because we only support single GPU mode in '
'non-distributed training. Use the first GPU '
'in `gpu_ids` now.')
if args.gpus is None and args.gpu_ids is None:
cfg.gpu_ids = [args.gpu_id]

# init distributed env first, since logger depends on the dist info.
if args.launcher == 'none':
distributed = False
rank, world_size = get_dist_info()
rank = 0
else:
distributed = True
init_dist(args.launcher, **cfg.dist_params)