Upgrade pre commit hooks master (#2155)

* Upgrade pre commit hooks

* Upgrade pre commit hooks

* mim install mmcv-full

* install mim

* install mmcv-full

* test mmcv-full 1.6.0

* fix timm

* fix timm

* fix timm
Miao Zheng 2022-10-08 16:29:12 +08:00 committed by GitHub
parent 9d2312b4ac
commit 0391dcd105
7 changed files with 16 additions and 14 deletions


@@ -70,13 +70,14 @@ jobs:
coverage run --branch --source mmseg -m pytest tests/
coverage xml
coverage report -m
-if: ${{matrix.torch >= '1.5.0'}}
+# timm from v0.6.11 requires torch>=1.7
+if: ${{matrix.torch >= '1.7.0'}}
- name: Skip timm unittests and generate coverage report
run: |
coverage run --branch --source mmseg -m pytest tests/ --ignore tests/test_models/test_backbones/test_timm_backbone.py
coverage xml
coverage report -m
-if: ${{matrix.torch < '1.5.0'}}
+if: ${{matrix.torch < '1.7.0'}}
build_cuda101:
runs-on: ubuntu-18.04
@@ -144,13 +145,14 @@ jobs:
coverage run --branch --source mmseg -m pytest tests/
coverage xml
coverage report -m
-if: ${{matrix.torch >= '1.5.0'}}
+# timm from v0.6.11 requires torch>=1.7
+if: ${{matrix.torch >= '1.7.0'}}
- name: Skip timm unittests and generate coverage report
run: |
coverage run --branch --source mmseg -m pytest tests/ --ignore tests/test_models/test_backbones/test_timm_backbone.py
coverage xml
coverage report -m
-if: ${{matrix.torch < '1.5.0'}}
+if: ${{matrix.torch < '1.7.0'}}
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v1.0.10
with:
@@ -249,7 +251,7 @@ jobs:
run: pip install -e .
- name: Run unittests
run: |
-python -m pip install timm
+python -m pip install 'timm<0.6.11'
coverage run --branch --source mmseg -m pytest tests/
- name: Generate coverage report
run: |
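Editor's note on the change above: timm 0.6.11 and later require torch>=1.7, so the CI raises the version gate and pins an older timm for the remaining job. A minimal local sanity check along the same lines (an illustrative sketch, not part of this commit; it assumes the packaging library is installed):

    # Mirror the CI gating: decide whether the installed torch can use timm>=0.6.11.
    from packaging import version
    import torch

    torch_version = version.parse(torch.__version__.split('+')[0])
    if torch_version >= version.parse('1.7.0'):
        print('torch is new enough; the full test suite, including timm tests, can run')
    else:
        print("older torch: install a compatible timm with pip install 'timm<0.6.11' "
              'and skip tests/test_models/test_backbones/test_timm_backbone.py')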


@@ -1,6 +1,6 @@
repos:
- repo: https://gitlab.com/pycqa/flake8.git
-rev: 3.8.3
+rev: 5.0.4
hooks:
- id: flake8
- repo: https://github.com/PyCQA/isort
@@ -8,11 +8,11 @@ repos:
hooks:
- id: isort
- repo: https://github.com/pre-commit/mirrors-yapf
-rev: v0.30.0
+rev: v0.32.0
hooks:
- id: yapf
- repo: https://github.com/pre-commit/pre-commit-hooks
-rev: v3.1.0
+rev: v4.3.0
hooks:
- id: trailing-whitespace
- id: check-yaml
@@ -34,7 +34,7 @@ repos:
- mdformat_frontmatter
- linkify-it-py
- repo: https://github.com/codespell-project/codespell
-rev: v2.1.0
+rev: v2.2.1
hooks:
- id: codespell
- repo: https://github.com/myint/docformatter


@@ -53,7 +53,7 @@ Briefly, it is a deep supervision trick to improve the accuracy. In the training
## Why is the log file not created
-In the train script, we call `get_root_logger`at Line 167, and `get_root_logger` in mmseg calls `get_logger` in mmcv, mmcv will return the same logger which has beed initialized in 'mmsegmentation/tools/train.py' with the parameter `log_file`. There is only one logger (initialized with `log_file`) during training.
+In the train script, we call `get_root_logger`at Line 167, and `get_root_logger` in mmseg calls `get_logger` in mmcv, mmcv will return the same logger which has been initialized in 'mmsegmentation/tools/train.py' with the parameter `log_file`. There is only one logger (initialized with `log_file`) during training.
Ref: [https://github.com/open-mmlab/mmcv/blob/21bada32560c7ed7b15b017dc763d862789e29a8/mmcv/utils/logging.py#L9-L16](https://github.com/open-mmlab/mmcv/blob/21bada32560c7ed7b15b017dc763d862789e29a8/mmcv/utils/logging.py#L9-L16)
If you find the log file not been created, you might check if `mmcv.utils.get_logger` is called elsewhere.
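Editor's note: the FAQ paragraph being touched here relies on the fact that mmcv caches loggers by name, so the first call that passes `log_file` attaches the file handler and every later call with the same name returns that same logger. A minimal sketch of this behaviour (editor's illustration, assuming mmcv 1.x where `mmcv.utils.get_logger` is available):

    from mmcv.utils import get_logger

    # First call: creates the 'mmseg' logger and attaches the file handler.
    logger_a = get_logger('mmseg', log_file='run.log')
    # Later calls with the same name return the cached logger unchanged.
    logger_b = get_logger('mmseg')
    assert logger_a is logger_b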


@@ -33,7 +33,7 @@ data = dict(
- `train`, `val` and `test`: The [`config`](https://github.com/open-mmlab/mmcv/blob/master/docs/en/understand_mmcv/config.md)s to build dataset instances for model training, validation and testing by
using [`build and registry`](https://github.com/open-mmlab/mmcv/blob/master/docs/en/understand_mmcv/registry.md) mechanism.
-- `samples_per_gpu`: How many samples per batch and per gpu to load during model training, and the `batch_size` of training is equal to `samples_per_gpu` times gpu number, e.g. when using 8 gpus for distributed data parallel trainig and `samples_per_gpu=4`, the `batch_size` is `8*4=32`.
+- `samples_per_gpu`: How many samples per batch and per gpu to load during model training, and the `batch_size` of training is equal to `samples_per_gpu` times gpu number, e.g. when using 8 gpus for distributed data parallel training and `samples_per_gpu=4`, the `batch_size` is `8*4=32`.
If you would like to define `batch_size` for testing and validation, please use `test_dataloaser` and
`val_dataloader` with mmseg >=0.24.1.
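Editor's note, as a worked example of the rule described in the doc above (illustrative values, not from this commit): with `samples_per_gpu=4` on 8 GPUs the effective training `batch_size` is 8 * 4 = 32, and from mmseg 0.24.1 the validation/testing batch sizes can be set separately:

    data = dict(
        samples_per_gpu=4,   # per-GPU training batch size; with 8 GPUs, batch_size = 32
        workers_per_gpu=2,   # dataloader workers per GPU
        # From mmseg >= 0.24.1, validation/testing batch sizes are configured separately:
        val_dataloader=dict(samples_per_gpu=1),
        test_dataloader=dict(samples_per_gpu=1),
        # train=..., val=..., test=... dataset configs go here as in the snippet above
    )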


@@ -337,7 +337,7 @@ class VisionTransformer(BaseModule):
constant_init(m, val=1.0, bias=0.)
def _pos_embeding(self, patched_img, hw_shape, pos_embed):
"""Positiong embeding method.
"""Positioning embeding method.
Resize the pos_embed, if the input image size doesn't match
the training size.
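Editor's note on the docstring being fixed above: "resize the pos_embed" means interpolating the learned position embedding to the new patch grid when the input size differs from the training size. A rough standalone sketch of that idea (editor's illustration, not the mmseg implementation):

    import torch
    import torch.nn.functional as F

    def resize_pos_embed_sketch(pos_embed, old_hw, new_hw):
        # pos_embed: (1, 1 + old_h * old_w, C), with a leading cls token.
        cls_token, patch_tokens = pos_embed[:, :1], pos_embed[:, 1:]
        (old_h, old_w), (new_h, new_w) = old_hw, new_hw
        channels = patch_tokens.shape[-1]
        grid = patch_tokens.reshape(1, old_h, old_w, channels).permute(0, 3, 1, 2)
        grid = F.interpolate(grid, size=(new_h, new_w), mode='bilinear', align_corners=False)
        patch_tokens = grid.permute(0, 2, 3, 1).reshape(1, new_h * new_w, channels)
        return torch.cat([cls_token, patch_tokens], dim=1)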


@@ -78,7 +78,7 @@ def sigmoid_focal_loss(pred,
valid_mask=None,
reduction='mean',
avg_factor=None):
r"""A warpper of cuda version `Focal Loss
r"""A wrapper of cuda version `Focal Loss
<https://arxiv.org/abs/1708.02002>`_.
Args:
pred (torch.Tensor): The prediction with shape (N, C), C is the number
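Editor's note: the docstring fixed above belongs to a thin wrapper around the CUDA focal-loss op. For reference, a plain-PyTorch sketch of the standard sigmoid focal loss, FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t), with common default alpha/gamma (editor's illustration; the real wrapper additionally handles `valid_mask`, `reduction` and `avg_factor`):

    import torch
    import torch.nn.functional as F

    def sigmoid_focal_loss_reference(pred, target, gamma=2.0, alpha=0.25):
        # pred: (N, C) raw logits; target: (N,) integer class indices in [0, C).
        target_one_hot = F.one_hot(target, num_classes=pred.size(1)).float()
        prob = pred.sigmoid()
        pt = prob * target_one_hot + (1 - prob) * (1 - target_one_hot)
        alpha_t = alpha * target_one_hot + (1 - alpha) * (1 - target_one_hot)
        ce = F.binary_cross_entropy_with_logits(pred, target_one_hot, reduction='none')
        return (alpha_t * (1 - pt).pow(gamma) * ce).mean()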


@@ -19,4 +19,4 @@ default_section = THIRDPARTY
skip = *.po,*.ts,*.ipynb
count =
quiet-level = 3
-ignore-words-list = formating,sur,hist,dota,ba
+ignore-words-list = formating,sur,hist,dota,ba,warmup