mmsegmentation/mmseg/utils/inverted_residual_module.py

from mmcv.cnn import ConvModule, build_norm_layer
from torch import nn


class InvertedResidual(nn.Module):
"""Inverted residual module.
Args:
in_channels (int): The input channels of the InvertedResidual block.
out_channels (int): The output channels of the InvertedResidual block.
stride (int): Stride of the middle (first) 3x3 convolution.
expand_ratio (int): adjusts number of channels of the hidden layer
in InvertedResidual by this amount.
conv_cfg (dict): Config dict for convolution layer.
Default: None, which means using conv2d.
norm_cfg (dict): Config dict for normalization layer.
Default: dict(type='BN').
act_cfg (dict): Config dict for activation layer.
Default: dict(type='ReLU6').
"""

    def __init__(self,
                 in_channels,
                 out_channels,
                 stride,
                 expand_ratio,
                 dilation=1,
                 conv_cfg=None,
                 norm_cfg=dict(type='BN'),
                 act_cfg=dict(type='ReLU6')):
        super(InvertedResidual, self).__init__()
        self.stride = stride
        assert stride in [1, 2]
        hidden_dim = int(round(in_channels * expand_ratio))
        # The identity (residual) connection is only used when the block
        # preserves both the spatial size (stride == 1) and the channel number.
        self.use_res_connect = self.stride == 1 \
            and in_channels == out_channels

        layers = []
        if expand_ratio != 1:
            # pw: 1x1 pointwise conv expanding in_channels to hidden_dim
            layers.append(
                ConvModule(
                    in_channels,
                    hidden_dim,
                    kernel_size=1,
                    conv_cfg=conv_cfg,
                    norm_cfg=norm_cfg,
                    act_cfg=act_cfg))
        layers.extend([
            # dw: 3x3 depthwise conv (groups == channels)
            ConvModule(
                hidden_dim,
                hidden_dim,
                kernel_size=3,
                padding=dilation,
                stride=stride,
                dilation=dilation,
                groups=hidden_dim,
                conv_cfg=conv_cfg,
                norm_cfg=norm_cfg,
                act_cfg=act_cfg),
            # pw-linear: 1x1 pointwise projection back to out_channels,
            # followed by normalization but no activation
            nn.Conv2d(hidden_dim, out_channels, 1, 1, 0, bias=False),
            build_norm_layer(norm_cfg, out_channels)[1],
        ])
        self.conv = nn.Sequential(*layers)

    def forward(self, x):
        if self.use_res_connect:
            return x + self.conv(x)
        else:
            return self.conv(x)
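

# ---------------------------------------------------------------------------
# Usage sketch (illustrative addition, not part of the original module):
# builds an InvertedResidual block and runs a dummy forward pass. The channel
# sizes and input shape below are arbitrary example values.
# ---------------------------------------------------------------------------
if __name__ == '__main__':
    import torch

    # expand_ratio=6 gives a hidden width of round(32 * 6) = 192 channels;
    # stride=1 with in_channels == out_channels enables the residual path,
    # so the output keeps the input's shape.
    block = InvertedResidual(
        in_channels=32, out_channels=32, stride=1, expand_ratio=6)
    out = block(torch.rand(1, 32, 64, 64))
    assert out.shape == (1, 32, 64, 64)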