q.yao 3a785f1223
[Refactor] Refactor codebase (#220)
* [WIP] Refactor v2.0 (#163)

* Refactor backend wrapper

* Refactor mmdet.inference

* Fix

* merge

* refactor utils

* Use deployer and deploy_model to manage pipeline

* Resolve comments

* Add a real inference api function

* rename wrappers

* Set execute to private method

* Rename deployer deploy_model

* Refactor task

* remove type hint

* lint

* Resolve comments

* resolve comments

* lint

* docstring

* [Fix]: Fix bugs in details in refactor branch (#192)

* [WIP] Refactor v2.0 (#163)

* Refactor backend wrapper

* Refactor mmdet.inference

* Fix

* merge

* refactor utils

* Use deployer and deploy_model to manage pipeline

* Resolve comments

* Add a real inference api function

* rename wrappers

* Set execute to private method

* Rename deployer deploy_model

* Refactor task

* remove type hint

* lint

* Resolve comments

* resolve comments

* lint

* docstring

* Fix errors

* lint

* resolve comments

* fix bugs

* conflict

* lint and typo

* Resolve comment

* refactor mmseg (#201)

* support mmseg

* fix docstring

* fix docstring

* [Refactor]: Get the count of backend files (#202)

* Fix backend files

* resolve comments

* lint

* Fix ncnn

* [Refactor]: Refactor folders of mmdet (#200)

* Move folders

* lint

* test object detection model

* lint

* reset changes

* fix openvino

* resolve comments

* __init__.py

* Fix path

* [Refactor]: move mmseg (#206)

* [Refactor]: Refactor mmedit (#205)

* feature mmedit

* edit2.0

* edit

* refactor mmedit

* fix __init__.py

* fix __init__

* fix format

* fix comment

* fix comment

* Fix wrong func_name of ConvFCBBoxHead (#209)

* [Refactor]: Refactor mmdet unit test (#207)

* Move folders

* lint

* test object detection model

* lint

* WIP

* remove print

* finish unit test

* Fix tests

* resolve comments

* Add mask test

* lint

* resolve comments

* Refine cfg file

* Move files

* add files

* Fix path

* [Unittest]: Refine the unit tests in mmdet #214

* [Refactor] refactor mmocr to mmdeploy/codebase (#213)

* refactor mmocr to mmdeploy/codebase

* fix docstring of show_result

* fix docstring of visualize

* refine docstring

* replace print with logging

* refine codes

* resolve comments

* resolve comments

* [Refactor]: mmseg tests (#210)

* refactor mmseg tests

* rename test_codebase

* update

* add model.py

* fix

* [Refactor] Refactor mmcls and the package (#217)

* refactor mmcls

* fix yapf

* fix isort

* refactor-mmcls-package

* fix print to logging

* fix docstrings according to others comments

* fix comments

* fix comments

* fix AllentDan's comment in PR #215

* remove mmocr init

* [Refactor] Refactor mmedit tests (#212)

* feature mmedit

* edit2.0

* edit

* refactor mmedit

* fix __init__.py

* fix __init__

* fix format

* fix comment

* fix comment

* buff

* edit test and code refactor

* refactor dir

* refactor tests/mmedit

* fix docstring

* add test coverage

* fix lint

* fix comment

* fix comment

* Update typehint (#216)

* update type hint

* update docstring

* update

* remove file

* fix ppl

* Refine get_predefined_partition_cfg

* fix tensorrt version > 8

* move parse_cuda_device_id to device.py

* Fix cascade

* onnx2ncnn docstring

Co-authored-by: Yifan Zhou <singlezombie@163.com>
Co-authored-by: RunningLeon <maningsheng@sensetime.com>
Co-authored-by: VVsssssk <88368822+VVsssssk@users.noreply.github.com>
Co-authored-by: AllentDan <41138331+AllentDan@users.noreply.github.com>
Co-authored-by: hanrui1sensetime <83800577+hanrui1sensetime@users.noreply.github.com>
2021-11-25 09:57:05 +08:00

# dataset settings
dataset_type = 'CityscapesDataset'
data_root = '.'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (128, 128)
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(128, 128),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
data = dict(
    samples_per_gpu=1,
    workers_per_gpu=1,
    val=dict(
        type=dataset_type,
        data_root=data_root,
        img_dir='',
        ann_dir='',
        pipeline=test_pipeline),
    test=dict(
        type=dataset_type,
        data_root=data_root,
        img_dir='',
        ann_dir='',
        pipeline=test_pipeline))
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True, momentum=0.01)
model = dict(
    type='EncoderDecoder',
    backbone=dict(
        type='FastSCNN',
        downsample_dw_channels=(32, 48),
        global_in_channels=64,
        global_block_channels=(64, 96, 128),
        global_block_strides=(2, 2, 1),
        global_out_channels=128,
        higher_in_channels=64,
        lower_in_channels=128,
        fusion_out_channels=128,
        out_indices=(0, 1, 2),
        norm_cfg=norm_cfg,
        align_corners=False),
    decode_head=dict(
        type='DepthwiseSeparableFCNHead',
        in_channels=128,
        channels=128,
        concat_input=False,
        num_classes=19,
        in_index=-1,
        norm_cfg=norm_cfg,
        align_corners=False,
        loss_decode=dict(
            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1)),
    # model training and testing settings
    train_cfg=dict(),
    test_cfg=dict(mode='whole'))
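
For reference, a config like the one above is normally loaded with mmcv.Config and passed to the mmsegmentation model builder. The snippet below is a minimal illustrative sketch, not part of the file itself; it assumes mmcv 1.x and mmsegmentation 0.x, and the config path it uses is hypothetical.

# Illustrative sketch only; not part of the config file above.
# Assumes mmcv 1.x and mmsegmentation 0.x; 'fast_scnn_cityscapes.py' is a
# hypothetical path for the config shown here.
import mmcv
from mmseg.models import build_segmentor

cfg = mmcv.Config.fromfile('fast_scnn_cityscapes.py')

# Build the Fast-SCNN EncoderDecoder described by the `model` dict; in this
# config style, train_cfg and test_cfg live inside the model dict itself.
segmentor = build_segmentor(cfg.model)
segmentor.eval()

# `data.test` pairs CityscapesDataset with `test_pipeline`, so evaluation
# reuses the MultiScaleFlipAug preprocessing defined above.
print(cfg.data.test.pipeline)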