commit 6f15e33ab9
@@ -0,0 +1,76 @@
# Contributor Covenant Code of Conduct

## Our Pledge

In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.

## Our Standards

Examples of behavior that contributes to creating a positive environment
include:

* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery and unwelcome sexual attention or
  advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
  address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
  professional setting

## Our Responsibilities

Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.

## Scope

This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at chenkaidev@gmail.com. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq
@@ -0,0 +1 @@
We appreciate all contributions to improve MMFewShot. Please refer to [CONTRIBUTING.md](https://github.com/open-mmlab/mmcv/blob/master/CONTRIBUTING.md) in MMCV for more details about the contributing guidelines.
@@ -0,0 +1,9 @@
blank_issues_enabled: false

contact_links:
  - name: Common Issues
    url: https://mmdetection.readthedocs.io/en/latest/faq.html
    about: Check if your issue already has solutions
  - name: MMDetection Documentation
    url: https://mmdetection.readthedocs.io/en/latest/
    about: Check if your question is answered in docs
@@ -0,0 +1,47 @@
---
name: Error report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''

---

Thanks for your error report and we appreciate it a lot.

**Checklist**

1. I have searched related issues but cannot get the expected help.
2. I have read the [FAQ documentation](https://mmdetection.readthedocs.io/en/latest/faq.html) but cannot get the expected help.
3. The bug has not been fixed in the latest version.

**Describe the bug**
A clear and concise description of what the bug is.

**Reproduction**

1. What command or script did you run?

```none
A placeholder for the command.
```

2. Did you make any modifications to the code or config? Did you understand what you have modified?
3. What dataset did you use?

**Environment**

1. Please run `python mmdet/utils/collect_env.py` to collect necessary environment information and paste it here.
2. You may add additional information that may be helpful for locating the problem, such as
    - How you installed PyTorch [e.g., pip, conda, source]
    - Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)

**Error traceback**
If applicable, paste the error traceback here.

```none
A placeholder for the traceback.
```

**Bug fix**
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!
@@ -0,0 +1,22 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''

---

**Describe the feature**

**Motivation**
A clear and concise description of the motivation for the feature.
Ex1. It is inconvenient when [....].
Ex2. There is a recent paper [....], which is very helpful for [....].

**Related resources**
If there is an official code release or third-party implementations, please also provide the information here, which would be very helpful.

**Additional context**
Add any other context or screenshots about the feature request here.
If you would like to implement the feature and create a PR, please leave a comment here and that would be much appreciated.
@@ -0,0 +1,8 @@
---
name: General questions
about: Ask general questions to get help
title: ''
labels: ''
assignees: ''

---
@@ -0,0 +1,68 @@
---
name: Reimplementation Questions
about: Ask questions about model reimplementation
title: ''
labels: 'reimplementation'
assignees: ''

---

**Notice**

There are several common situations in reimplementation issues, as listed below

1. Reimplement a model in the model zoo using the provided configs
2. Reimplement a model in the model zoo on another dataset (e.g., custom datasets)
3. Reimplement a custom model but all the components are implemented in MMDetection
4. Reimplement a custom model with new modules implemented by yourself

Different cases call for different steps, as described below.

- For cases 1 & 3, please follow the steps in the sections below so that we can quickly identify the issue.
- For cases 2 & 4, please understand that we are not able to help much because we usually do not know the full code, and users are responsible for the code they write.
- One suggestion for cases 2 & 4 is to first check whether the bug lies in the self-implemented code or in the original code. For example, users can first make sure that the same model runs well on supported datasets. If you still need help, please describe what you have done and what you obtained in the issue, follow the steps in the sections below, and be as clear as possible so that we can better help you.

**Checklist**

1. I have searched related issues but cannot get the expected help.
2. The issue has not been fixed in the latest version.

**Describe the issue**

A clear and concise description of the problem you met and what you have done.

**Reproduction**

1. What command or script did you run?

```none
A placeholder for the command.
```

2. Which config did you run?

```none
A placeholder for the config.
```

3. Did you make any modifications to the code or config? Did you understand what you have modified?
4. What dataset did you use?

**Environment**

1. Please run `python mmdet/utils/collect_env.py` to collect necessary environment information and paste it here.
2. You may add additional information that may be helpful for locating the problem, such as
    1. How you installed PyTorch [e.g., pip, conda, source]
    2. Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)

**Results**

If applicable, paste the related results here, e.g., what you expect and what you get.

```none
A placeholder for results comparison
```

**Issue fix**

If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!
@@ -0,0 +1,21 @@
Thanks for your contribution and we appreciate it a lot. The following instructions will help keep your pull request healthy and make it easier to get feedback. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.

## Motivation
Please describe the motivation of this PR and the goal you want to achieve through this PR.

## Modification
Please briefly describe what modification is made in this PR.

## BC-breaking (Optional)
Does the modification introduce changes that break the backward compatibility of the downstream repos?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.

## Use cases (Optional)
If this PR introduces a new feature, it is better to list some use cases here and update the documentation.

## Checklist

1. Pre-commit or other linting tools are used to fix the potential lint issues.
2. The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
3. If the modification has potential influence on downstream projects, this PR should be tested with downstream projects, like MMDet or MMCls.
4. The documentation has been modified accordingly, like docstring or example tutorials.
@@ -0,0 +1,161 @@
name: build

on: [push, pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.7
        uses: actions/setup-python@v2
        with:
          python-version: 3.7
      - name: Install pre-commit hook
        run: |
          pip install pre-commit
          pre-commit install
      - name: Linting
        run: pre-commit run --all-files
      # - name: Check docstring coverage
      #   run: |
      #     pip install interrogate
      #     interrogate -v --ignore-init-method --ignore-module --ignore-nested-functions --ignore-regex "__repr__" --fail-under 80 mmfewshot

  build_cpu:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [3.7]
        # 1.7.0 and 1.8.0 added here so the `include` entries below actually run
        torch: [1.3.1, 1.5.1, 1.6.0, 1.7.0, 1.8.0]
        include:
          - torch: 1.3.1
            torchvision: 0.4.2
            mmcv: "latest+torch1.3.0+cpu"
          - torch: 1.5.1
            torchvision: 0.6.1
            mmcv: "latest+torch1.5.0+cpu"
          - torch: 1.6.0
            torchvision: 0.7.0
            mmcv: "latest+torch1.6.0+cpu"
          - torch: 1.7.0
            torchvision: 0.8.1
            mmcv: "latest+torch1.7.0+cpu"
          - torch: 1.8.0
            torchvision: 0.9.0
            mmcv: "latest+torch1.8.0+cpu"
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install Pillow
        run: pip install Pillow==6.2.2
        if: ${{matrix.torchvision == '0.4.2'}}
      - name: Install PyTorch
        run: pip install torch==${{matrix.torch}}+cpu torchvision==${{matrix.torchvision}}+cpu -f https://download.pytorch.org/whl/torch_stable.html
      - name: Install MMCV
        run: |
          pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cpu/torch${{matrix.torch}}/index.html
          python -c 'import mmcv; print(mmcv.__version__)'
      - name: Install unittest dependencies
        run: pip install -r requirements/tests.txt -r requirements/optional.txt
      - name: Build and install
        run: rm -rf .eggs && pip install -e .
      - name: Run unittests and generate coverage report
        run: |
          coverage run --branch --source mmfewshot -m pytest tests/
          coverage xml
          coverage report -m

  build_cuda:
    runs-on: ubuntu-latest

    env:
      CUDA: 10.1.105-1
      CUDA_SHORT: 10.1
      UBUNTU_VERSION: ubuntu1804
    strategy:
      matrix:
        python-version: [3.7]
        torch: [1.3.1, 1.5.1+cu101, 1.6.0+cu101, 1.7.0+cu101, 1.8.0+cu101]
        include:
          - torch: 1.3.1
            torch_version: torch1.3.1
            torchvision: 0.4.2
            mmcv: "latest+torch1.3.0+cu101"
          - torch: 1.5.1+cu101
            torch_version: torch1.5.1
            torchvision: 0.6.1+cu101
            mmcv: "latest+torch1.5.0+cu101"
          - torch: 1.6.0+cu101
            torch_version: torch1.6.0
            torchvision: 0.7.0+cu101
            mmcv: "latest+torch1.6.0+cu101"
          - torch: 1.6.0+cu101
            torch_version: torch1.6.0
            torchvision: 0.7.0+cu101
            mmcv: "latest+torch1.6.0+cu101"
            python-version: 3.6
          - torch: 1.6.0+cu101
            torch_version: torch1.6.0
            torchvision: 0.7.0+cu101
            mmcv: "latest+torch1.6.0+cu101"
            python-version: 3.8
          - torch: 1.7.0+cu101
            torch_version: torch1.7.0
            torchvision: 0.8.1+cu101
            mmcv: "latest+torch1.7.0+cu101"
          - torch: 1.8.0+cu101
            torch_version: torch1.8.0
            torchvision: 0.9.0+cu101
            mmcv: "latest+torch1.8.0+cu101"

    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install CUDA
        run: |
          export INSTALLER=cuda-repo-${UBUNTU_VERSION}_${CUDA}_amd64.deb
          wget http://developer.download.nvidia.com/compute/cuda/repos/${UBUNTU_VERSION}/x86_64/${INSTALLER}
          sudo dpkg -i ${INSTALLER}
          wget https://developer.download.nvidia.com/compute/cuda/repos/${UBUNTU_VERSION}/x86_64/7fa2af80.pub
          sudo apt-key add 7fa2af80.pub
          sudo apt update -qq
          sudo apt install -y cuda-${CUDA_SHORT/./-} cuda-cufft-dev-${CUDA_SHORT/./-}
          sudo apt clean
          export CUDA_HOME=/usr/local/cuda-${CUDA_SHORT}
          export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${CUDA_HOME}/include:${LD_LIBRARY_PATH}
          export PATH=${CUDA_HOME}/bin:${PATH}
      - name: Install Pillow
        run: pip install Pillow==6.2.2
        # was `matrix.torchvision < 0.5`, a string/number comparison that never matches
        if: ${{matrix.torchvision == '0.4.2'}}
      - name: Install PyTorch
        run: pip install torch==${{matrix.torch}} torchvision==${{matrix.torchvision}} -f https://download.pytorch.org/whl/torch_stable.html
      - name: Install mmfewshot dependencies
        run: |
          pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu101/${{matrix.torch_version}}/index.html
          pip install -r requirements.txt
          python -c 'import mmcv; print(mmcv.__version__)'
      - name: Build and install
        run: |
          rm -rf .eggs
          python setup.py check -m -s
          TORCH_CUDA_ARCH_LIST=7.0 pip install .
      - name: Run unittests and generate coverage report
        run: |
          coverage run --branch --source mmfewshot -m pytest tests/
          coverage xml
          coverage report -m
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v1.0.10
        with:
          file: ./coverage.xml
          flags: unittests
          env_vars: OS,PYTHON
          name: codecov-umbrella
          fail_ci_if_error: false
@@ -0,0 +1,24 @@
name: deploy

on: push

jobs:
  build-n-publish:
    runs-on: ubuntu-latest
    if: startsWith(github.event.ref, 'refs/tags')
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.7
        uses: actions/setup-python@v2
        with:
          python-version: 3.7
      - name: Install torch
        run: pip install torch
      - name: Install wheel
        run: pip install wheel
      - name: Build MMFewShot
        run: python setup.py sdist bdist_wheel
      - name: Publish distribution to PyPI
        run: |
          pip install twine
          twine upload dist/* -u __token__ -p ${{ secrets.pypi_password }}
@@ -0,0 +1,121 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/

data/
data
.vscode
.idea
.DS_Store

# custom
*.pkl
*.pkl.json
*.log.json
work_dirs/

# Pytorch
*.pth
*.py~
*.sh~
@@ -0,0 +1,40 @@
repos:
  - repo: https://gitlab.com/pycqa/flake8.git
    rev: 3.8.3
    hooks:
      - id: flake8
  - repo: https://github.com/asottile/seed-isort-config
    rev: v2.2.0
    hooks:
      - id: seed-isort-config
  - repo: https://github.com/timothycrosley/isort
    rev: 4.3.21
    hooks:
      - id: isort
  - repo: https://github.com/pre-commit/mirrors-yapf
    rev: v0.30.0
    hooks:
      - id: yapf
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v3.1.0
    hooks:
      - id: trailing-whitespace
      - id: check-yaml
      - id: end-of-file-fixer
      - id: requirements-txt-fixer
      - id: double-quote-string-fixer
      - id: check-merge-conflict
      - id: fix-encoding-pragma
        args: ["--remove"]
      - id: mixed-line-ending
        args: ["--fix=lf"]
  - repo: https://github.com/jumanjihouse/pre-commit-hooks
    rev: 2.1.4
    hooks:
      - id: markdownlint
        args: ["-r", "~MD002,~MD013,~MD024,~MD029,~MD033,~MD034,~MD036", "-t", "allow_different_nesting"]
  - repo: https://github.com/myint/docformatter
    rev: v1.3.1
    hooks:
      - id: docformatter
        args: ["--in-place", "--wrap-descriptions", "79"]
@@ -0,0 +1,28 @@
import mmcv

from .version import __version__, short_version


def digit_version(version_str):
    digit_version = []
    for x in version_str.split('.'):
        if x.isdigit():
            digit_version.append(int(x))
        elif x.find('rc') != -1:
            patch_version = x.split('rc')
            digit_version.append(int(patch_version[0]) - 1)
            digit_version.append(int(patch_version[1]))
    return digit_version


mmcv_minimum_version = '1.3.2'
mmcv_maximum_version = '1.4.0'
mmcv_version = digit_version(mmcv.__version__)


assert (mmcv_version >= digit_version(mmcv_minimum_version)
        and mmcv_version <= digit_version(mmcv_maximum_version)), \
    f'MMCV=={mmcv.__version__} is used but incompatible. ' \
    f'Please install mmcv>={mmcv_minimum_version}, <={mmcv_maximum_version}.'

__all__ = ['__version__', 'short_version']
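For reference, a minimal, self-contained sketch of how `digit_version` above orders pre-releases: an `rc` segment lowers the preceding digit by one and appends the rc number, so a release candidate compares below its final release. The function body is copied from the hunk above; the sample version strings are illustrative only.

```python
# Copied from the mmfewshot __init__ hunk above; sample inputs are illustrative.
def digit_version(version_str):
    digit_version = []
    for x in version_str.split('.'):
        if x.isdigit():
            digit_version.append(int(x))
        elif x.find('rc') != -1:
            patch_version = x.split('rc')
            digit_version.append(int(patch_version[0]) - 1)
            digit_version.append(int(patch_version[1]))
    return digit_version


assert digit_version('1.3.2') == [1, 3, 2]
# '1.4.0rc1': the trailing '0rc1' becomes (-1, 1), so it sorts below '1.4.0'.
assert digit_version('1.4.0rc1') == [1, 4, -1, 1]
assert digit_version('1.4.0rc1') < digit_version('1.4.0')
```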
@@ -0,0 +1,40 @@
# dataset settings
dataset_type = 'ImageNet'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='RandomResizedCrop', size=224),
    dict(type='RandomFlip', flip_prob=0.5, direction='horizontal'),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='ImageToTensor', keys=['img']),
    dict(type='ToTensor', keys=['gt_label']),
    dict(type='Collect', keys=['img', 'gt_label'])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', size=(256, -1)),
    dict(type='CenterCrop', crop_size=224),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='ImageToTensor', keys=['img']),
    dict(type='Collect', keys=['img'])
]
data = dict(
    samples_per_gpu=32,
    workers_per_gpu=2,
    train=dict(
        type=dataset_type,
        data_prefix='data/imagenet/train',
        pipeline=train_pipeline),
    val=dict(
        type=dataset_type,
        data_prefix='data/imagenet/val',
        ann_file='data/imagenet/meta/val.txt',
        pipeline=test_pipeline),
    test=dict(
        # replace `data/val` with `data/test` for standard test
        type=dataset_type,
        data_prefix='data/imagenet/val',
        ann_file='data/imagenet/meta/val.txt',
        pipeline=test_pipeline))
evaluation = dict(interval=1, metric='accuracy')
@@ -0,0 +1,17 @@
# checkpoint saving
checkpoint_config = dict(interval=1)
task_type = 'mmcls'
# yapf:disable
log_config = dict(
    interval=100,
    hooks=[
        dict(type='TextLoggerHook'),
        # dict(type='TensorboardLoggerHook')
    ])
# yapf:enable

dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
@@ -0,0 +1,17 @@
# model settings
model = dict(
    type='ImageClassifier',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(3, ),
        style='pytorch'),
    neck=dict(type='GlobalAveragePooling'),
    head=dict(
        type='LinearClsHead',
        num_classes=1000,
        in_channels=2048,
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
        topk=(1, 5),
    ))
@@ -0,0 +1,6 @@
# optimizer
optimizer = dict(type='SGD', lr=0.1, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
# learning policy
lr_config = dict(policy='step', step=[30, 60, 90])
runner = dict(type='EpochBasedRunner', max_epochs=100)
@@ -0,0 +1,4 @@
_base_ = [
    '../_base_/models/resnet50.py', '../_base_/datasets/imagenet_bs32.py',
    '../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py'
]
@@ -0,0 +1,49 @@
# dataset settings
dataset_type = 'CocoDataset'
data_root = 'data/coco/'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1333, 800),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_train2017.json',
        img_prefix=data_root + 'train2017/',
        pipeline=train_pipeline),
    val=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val2017.json',
        img_prefix=data_root + 'val2017/',
        pipeline=test_pipeline),
    test=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val2017.json',
        img_prefix=data_root + 'val2017/',
        pipeline=test_pipeline))
evaluation = dict(interval=1, metric='bbox')
@@ -0,0 +1,55 @@
# dataset settings
dataset_type = 'VOCDataset'
data_root = 'data/VOCdevkit/'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Resize', img_scale=(1000, 600), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1000, 600),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type='RepeatDataset',
        times=3,
        dataset=dict(
            type=dataset_type,
            ann_file=[
                data_root + 'VOC2007/ImageSets/Main/trainval.txt',
                data_root + 'VOC2012/ImageSets/Main/trainval.txt'
            ],
            img_prefix=[data_root + 'VOC2007/', data_root + 'VOC2012/'],
            pipeline=train_pipeline)),
    val=dict(
        type=dataset_type,
        ann_file=data_root + 'VOC2007/ImageSets/Main/test.txt',
        img_prefix=data_root + 'VOC2007/',
        pipeline=test_pipeline),
    test=dict(
        type=dataset_type,
        ann_file=data_root + 'VOC2007/ImageSets/Main/test.txt',
        img_prefix=data_root + 'VOC2007/',
        pipeline=test_pipeline))
evaluation = dict(interval=1, metric='mAP')
@@ -0,0 +1,18 @@
checkpoint_config = dict(interval=1)
# Used in MMFewShot to identify the task type; mmcls and mmdet are supported
task_type = 'mmdet'
# yapf:disable
log_config = dict(
    interval=50,
    hooks=[
        dict(type='TextLoggerHook'),
        # dict(type='TensorboardLoggerHook')
    ])
# yapf:enable
custom_hooks = [dict(type='NumClassCheckHook')]

dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
@@ -0,0 +1,62 @@
# model settings
model = dict(
    type='FastRCNN',
    pretrained='torchvision://resnet50',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True,
        style='pytorch'),
    neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        num_outs=5),
    roi_head=dict(
        type='StandardRoIHead',
        bbox_roi_extractor=dict(
            type='SingleRoIExtractor',
            roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
            out_channels=256,
            featmap_strides=[4, 8, 16, 32]),
        bbox_head=dict(
            type='Shared2FCBBoxHead',
            in_channels=256,
            fc_out_channels=1024,
            roi_feat_size=7,
            num_classes=80,
            bbox_coder=dict(
                type='DeltaXYWHBBoxCoder',
                target_means=[0., 0., 0., 0.],
                target_stds=[0.1, 0.1, 0.2, 0.2]),
            reg_class_agnostic=False,
            loss_cls=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
            loss_bbox=dict(type='L1Loss', loss_weight=1.0))),
    # model training and testing settings
    train_cfg=dict(
        rcnn=dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.5,
                neg_iou_thr=0.5,
                min_pos_iou=0.5,
                match_low_quality=False,
                ignore_iof_thr=-1),
            sampler=dict(
                type='RandomSampler',
                num=512,
                pos_fraction=0.25,
                neg_pos_ub=-1,
                add_gt_as_proposals=True),
            pos_weight=-1,
            debug=False)),
    test_cfg=dict(
        rcnn=dict(
            score_thr=0.05,
            nms=dict(type='nms', iou_threshold=0.5),
            max_per_img=100)))
@@ -0,0 +1,112 @@
# model settings
norm_cfg = dict(type='BN', requires_grad=False)
model = dict(
    type='FasterRCNN',
    pretrained='open-mmlab://detectron2/resnet50_caffe',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=3,
        strides=(1, 2, 2),
        dilations=(1, 1, 1),
        out_indices=(2, ),
        frozen_stages=1,
        norm_cfg=norm_cfg,
        norm_eval=True,
        style='caffe'),
    rpn_head=dict(
        type='RPNHead',
        in_channels=1024,
        feat_channels=1024,
        anchor_generator=dict(
            type='AnchorGenerator',
            scales=[2, 4, 8, 16, 32],
            ratios=[0.5, 1.0, 2.0],
            strides=[16]),
        bbox_coder=dict(
            type='DeltaXYWHBBoxCoder',
            target_means=[.0, .0, .0, .0],
            target_stds=[1.0, 1.0, 1.0, 1.0]),
        loss_cls=dict(
            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
        loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
    roi_head=dict(
        type='StandardRoIHead',
        shared_head=dict(
            type='ResLayer',
            depth=50,
            stage=3,
            stride=2,
            dilation=1,
            style='caffe',
            norm_cfg=norm_cfg,
            norm_eval=True),
        bbox_roi_extractor=dict(
            type='SingleRoIExtractor',
            roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0),
            out_channels=1024,
            featmap_strides=[16]),
        bbox_head=dict(
            type='BBoxHead',
            with_avg_pool=True,
            roi_feat_size=7,
            in_channels=2048,
            num_classes=80,
            bbox_coder=dict(
                type='DeltaXYWHBBoxCoder',
                target_means=[0., 0., 0., 0.],
                target_stds=[0.1, 0.1, 0.2, 0.2]),
            reg_class_agnostic=False,
            loss_cls=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
            loss_bbox=dict(type='L1Loss', loss_weight=1.0))),
    # model training and testing settings
    train_cfg=dict(
        rpn=dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.7,
                neg_iou_thr=0.3,
                min_pos_iou=0.3,
                match_low_quality=True,
                ignore_iof_thr=-1),
            sampler=dict(
                type='RandomSampler',
                num=256,
                pos_fraction=0.5,
                neg_pos_ub=-1,
                add_gt_as_proposals=False),
            allowed_border=0,
            pos_weight=-1,
            debug=False),
        rpn_proposal=dict(
            nms_pre=12000,
            max_per_img=2000,
            nms=dict(type='nms', iou_threshold=0.7),
            min_bbox_size=0),
        rcnn=dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.5,
                neg_iou_thr=0.5,
                min_pos_iou=0.5,
                match_low_quality=False,
                ignore_iof_thr=-1),
            sampler=dict(
                type='RandomSampler',
                num=512,
                pos_fraction=0.25,
                neg_pos_ub=-1,
                add_gt_as_proposals=True),
            pos_weight=-1,
            debug=False)),
    test_cfg=dict(
        rpn=dict(
            nms_pre=6000,
            max_per_img=1000,
            nms=dict(type='nms', iou_threshold=0.7),
            min_bbox_size=0),
        rcnn=dict(
            score_thr=0.05,
            nms=dict(type='nms', iou_threshold=0.5),
            max_per_img=100)))
@@ -0,0 +1,103 @@
# model settings
norm_cfg = dict(type='BN', requires_grad=False)
model = dict(
    type='FasterRCNN',
    pretrained='open-mmlab://detectron2/resnet50_caffe',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        strides=(1, 2, 2, 1),
        dilations=(1, 1, 1, 2),
        out_indices=(3, ),
        frozen_stages=1,
        norm_cfg=norm_cfg,
        norm_eval=True,
        style='caffe'),
    rpn_head=dict(
        type='RPNHead',
        in_channels=2048,
        feat_channels=2048,
        anchor_generator=dict(
            type='AnchorGenerator',
            scales=[2, 4, 8, 16, 32],
            ratios=[0.5, 1.0, 2.0],
            strides=[16]),
        bbox_coder=dict(
            type='DeltaXYWHBBoxCoder',
            target_means=[.0, .0, .0, .0],
            target_stds=[1.0, 1.0, 1.0, 1.0]),
        loss_cls=dict(
            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
        loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
    roi_head=dict(
        type='StandardRoIHead',
        bbox_roi_extractor=dict(
            type='SingleRoIExtractor',
            roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
            out_channels=2048,
            featmap_strides=[16]),
        bbox_head=dict(
            type='Shared2FCBBoxHead',
            in_channels=2048,
            fc_out_channels=1024,
            roi_feat_size=7,
            num_classes=80,
            bbox_coder=dict(
                type='DeltaXYWHBBoxCoder',
                target_means=[0., 0., 0., 0.],
                target_stds=[0.1, 0.1, 0.2, 0.2]),
            reg_class_agnostic=False,
            loss_cls=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
            loss_bbox=dict(type='L1Loss', loss_weight=1.0))),
    # model training and testing settings
    train_cfg=dict(
        rpn=dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.7,
                neg_iou_thr=0.3,
                min_pos_iou=0.3,
                match_low_quality=True,
                ignore_iof_thr=-1),
            sampler=dict(
                type='RandomSampler',
                num=256,
                pos_fraction=0.5,
                neg_pos_ub=-1,
                add_gt_as_proposals=False),
            allowed_border=0,
            pos_weight=-1,
            debug=False),
        rpn_proposal=dict(
            nms_pre=12000,
            max_per_img=2000,
            nms=dict(type='nms', iou_threshold=0.7),
            min_bbox_size=0),
        rcnn=dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.5,
                neg_iou_thr=0.5,
                min_pos_iou=0.5,
                match_low_quality=False,
                ignore_iof_thr=-1),
            sampler=dict(
                type='RandomSampler',
                num=512,
                pos_fraction=0.25,
                neg_pos_ub=-1,
                add_gt_as_proposals=True),
            pos_weight=-1,
            debug=False)),
    test_cfg=dict(
        rpn=dict(
            nms=dict(type='nms', iou_threshold=0.7),
            nms_pre=6000,
            max_per_img=1000,
            min_bbox_size=0),
        rcnn=dict(
            score_thr=0.05,
            nms=dict(type='nms', iou_threshold=0.5),
            max_per_img=100)))
@@ -0,0 +1,108 @@
# model settings
model = dict(
    type='FasterRCNN',
    pretrained='torchvision://resnet50',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True,
        style='pytorch'),
    neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        num_outs=5),
    rpn_head=dict(
        type='RPNHead',
        in_channels=256,
        feat_channels=256,
        anchor_generator=dict(
            type='AnchorGenerator',
            scales=[8],
            ratios=[0.5, 1.0, 2.0],
            strides=[4, 8, 16, 32, 64]),
        bbox_coder=dict(
            type='DeltaXYWHBBoxCoder',
            target_means=[.0, .0, .0, .0],
            target_stds=[1.0, 1.0, 1.0, 1.0]),
        loss_cls=dict(
            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
        loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
    roi_head=dict(
        type='StandardRoIHead',
        bbox_roi_extractor=dict(
            type='SingleRoIExtractor',
            roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
            out_channels=256,
            featmap_strides=[4, 8, 16, 32]),
        bbox_head=dict(
            type='Shared2FCBBoxHead',
            in_channels=256,
            fc_out_channels=1024,
            roi_feat_size=7,
            num_classes=80,
            bbox_coder=dict(
                type='DeltaXYWHBBoxCoder',
                target_means=[0., 0., 0., 0.],
                target_stds=[0.1, 0.1, 0.2, 0.2]),
            reg_class_agnostic=False,
            loss_cls=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
            loss_bbox=dict(type='L1Loss', loss_weight=1.0))),
    # model training and testing settings
    train_cfg=dict(
        rpn=dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.7,
                neg_iou_thr=0.3,
                min_pos_iou=0.3,
                match_low_quality=True,
                ignore_iof_thr=-1),
            sampler=dict(
                type='RandomSampler',
                num=256,
                pos_fraction=0.5,
                neg_pos_ub=-1,
                add_gt_as_proposals=False),
            allowed_border=-1,
            pos_weight=-1,
            debug=False),
        rpn_proposal=dict(
            nms_pre=2000,
            max_per_img=1000,
            nms=dict(type='nms', iou_threshold=0.7),
            min_bbox_size=0),
        rcnn=dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.5,
                neg_iou_thr=0.5,
                min_pos_iou=0.5,
                match_low_quality=False,
                ignore_iof_thr=-1),
            sampler=dict(
                type='RandomSampler',
                num=512,
                pos_fraction=0.25,
                neg_pos_ub=-1,
                add_gt_as_proposals=True),
            pos_weight=-1,
            debug=False)),
    test_cfg=dict(
        rpn=dict(
            nms_pre=1000,
            max_per_img=1000,
            nms=dict(type='nms', iou_threshold=0.7),
            min_bbox_size=0),
        rcnn=dict(
            score_thr=0.05,
            nms=dict(type='nms', iou_threshold=0.5),
            max_per_img=100)
        # soft-nms is also supported for rcnn testing
        # e.g., nms=dict(type='soft_nms', iou_threshold=0.5, min_score=0.05)
    ))
@@ -0,0 +1,60 @@
# model settings
model = dict(
    type='RetinaNet',
    pretrained='torchvision://resnet50',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True,
        style='pytorch'),
    neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        start_level=1,
        add_extra_convs='on_input',
        num_outs=5),
    bbox_head=dict(
        type='RetinaHead',
        num_classes=80,
        in_channels=256,
        stacked_convs=4,
        feat_channels=256,
        anchor_generator=dict(
            type='AnchorGenerator',
            octave_base_scale=4,
            scales_per_octave=3,
            ratios=[0.5, 1.0, 2.0],
            strides=[8, 16, 32, 64, 128]),
        bbox_coder=dict(
            type='DeltaXYWHBBoxCoder',
            target_means=[.0, .0, .0, .0],
            target_stds=[1.0, 1.0, 1.0, 1.0]),
        loss_cls=dict(
            type='FocalLoss',
            use_sigmoid=True,
            gamma=2.0,
            alpha=0.25,
            loss_weight=1.0),
        loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
    # model training and testing settings
    train_cfg=dict(
        assigner=dict(
            type='MaxIoUAssigner',
            pos_iou_thr=0.5,
            neg_iou_thr=0.4,
            min_pos_iou=0,
            ignore_iof_thr=-1),
        allowed_border=-1,
        pos_weight=-1,
        debug=False),
    test_cfg=dict(
        nms_pre=1000,
        min_bbox_size=0,
        score_thr=0.05,
        nms=dict(type='nms', iou_threshold=0.5),
        max_per_img=100))
@@ -0,0 +1,56 @@
# model settings
model = dict(
    type='RPN',
    pretrained='open-mmlab://detectron2/resnet50_caffe',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=3,
        strides=(1, 2, 2),
        dilations=(1, 1, 1),
        out_indices=(2, ),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=False),
        norm_eval=True,
        style='caffe'),
    neck=None,
    rpn_head=dict(
        type='RPNHead',
        in_channels=1024,
        feat_channels=1024,
        anchor_generator=dict(
            type='AnchorGenerator',
            scales=[2, 4, 8, 16, 32],
            ratios=[0.5, 1.0, 2.0],
            strides=[16]),
        bbox_coder=dict(
            type='DeltaXYWHBBoxCoder',
            target_means=[.0, .0, .0, .0],
            target_stds=[1.0, 1.0, 1.0, 1.0]),
        loss_cls=dict(
            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
        loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
    # model training and testing settings
    train_cfg=dict(
        rpn=dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.7,
                neg_iou_thr=0.3,
                min_pos_iou=0.3,
                ignore_iof_thr=-1),
            sampler=dict(
                type='RandomSampler',
                num=256,
                pos_fraction=0.5,
                neg_pos_ub=-1,
                add_gt_as_proposals=False),
            allowed_border=0,
            pos_weight=-1,
            debug=False)),
    test_cfg=dict(
        rpn=dict(
            nms_pre=12000,
            max_per_img=2000,
            nms=dict(type='nms', iou_threshold=0.7),
            min_bbox_size=0)))
@@ -0,0 +1,58 @@
# model settings
model = dict(
    type='RPN',
    pretrained='torchvision://resnet50',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True,
        style='pytorch'),
    neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        num_outs=5),
    rpn_head=dict(
        type='RPNHead',
        in_channels=256,
        feat_channels=256,
        anchor_generator=dict(
            type='AnchorGenerator',
            scales=[8],
            ratios=[0.5, 1.0, 2.0],
            strides=[4, 8, 16, 32, 64]),
        bbox_coder=dict(
            type='DeltaXYWHBBoxCoder',
            target_means=[.0, .0, .0, .0],
            target_stds=[1.0, 1.0, 1.0, 1.0]),
        loss_cls=dict(
            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
        loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
    # model training and testing settings
    train_cfg=dict(
        rpn=dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.7,
                neg_iou_thr=0.3,
                min_pos_iou=0.3,
                ignore_iof_thr=-1),
            sampler=dict(
                type='RandomSampler',
                num=256,
                pos_fraction=0.5,
                neg_pos_ub=-1,
                add_gt_as_proposals=False),
            allowed_border=0,
            pos_weight=-1,
            debug=False)),
    test_cfg=dict(
        rpn=dict(
            nms_pre=2000,
            max_per_img=1000,
            nms=dict(type='nms', iou_threshold=0.7),
            min_bbox_size=0)))
@@ -0,0 +1,11 @@
# optimizer
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
# learning policy
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=500,
    warmup_ratio=0.001,
    step=[8, 11])
runner = dict(type='EpochBasedRunner', max_epochs=12)
@@ -0,0 +1,11 @@
# optimizer
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
# learning policy
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=500,
    warmup_ratio=0.001,
    step=[16, 19])
runner = dict(type='EpochBasedRunner', max_epochs=20)
@@ -0,0 +1,11 @@
# optimizer
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
# learning policy
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=500,
    warmup_ratio=0.001,
    step=[16, 22])
runner = dict(type='EpochBasedRunner', max_epochs=24)
@@ -0,0 +1,8 @@
_base_ = [
    '../_base_/models/faster_rcnn_r50_fpn.py',
    '../_base_/datasets/coco_detection.py',
    '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]

model = dict(type='TestDetection')
data = dict(samples_per_gpu=1)
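To see how the `_base_` inheritance above resolves, the merged config can be loaded with mmcv. A hedged sketch — the config filename is hypothetical, since the diff does not show file paths; only the `_base_` entries and overrides come from the hunk above:

```python
from mmcv import Config

# Hypothetical filename for the config above; substitute the real path in the repo.
cfg = Config.fromfile('configs/detection/test_detection.py')
print(cfg.model.type)            # 'TestDetection' -- the child config's override wins
print(cfg.data.samples_per_gpu)  # 1 -- overrides the value of 2 from coco_detection.py
print(cfg.task_type)             # 'mmdet' -- inherited from default_runtime.py
```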
@@ -0,0 +1,33 @@
import mmcls  # noqa: F401, F403
import mmcv
import mmdet  # noqa: F401, F403

from .builders import *  # noqa: F401, F403
from .classification import *  # noqa: F401, F403
from .detection import *  # noqa: F401, F403
from .version import __version__, short_version


def digit_version(version_str):
    digit_version = []
    for x in version_str.split('.'):
        if x.isdigit():
            digit_version.append(int(x))
        elif x.find('rc') != -1:
            patch_version = x.split('rc')
            digit_version.append(int(patch_version[0]) - 1)
            digit_version.append(int(patch_version[1]))
    return digit_version


mmcv_minimum_version = '1.3.2'
mmcv_maximum_version = '1.4.0'
mmcv_version = digit_version(mmcv.__version__)


assert (mmcv_version >= digit_version(mmcv_minimum_version)
        and mmcv_version <= digit_version(mmcv_maximum_version)), \
    f'MMCV=={mmcv.__version__} is used but incompatible. ' \
    f'Please install mmcv>={mmcv_minimum_version}, <={mmcv_maximum_version}.'

__all__ = ['__version__', 'short_version']
@@ -0,0 +1,2 @@
from .test import *  # noqa: F401, F403
from .train import *  # noqa: F401, F403
@@ -0,0 +1,21 @@
from mmcls.apis.test import multi_gpu_test as cls_multi_gpu_test
from mmcls.apis.test import single_gpu_test as cls_single_gpu_test
from mmdet.apis.test import multi_gpu_test as det_multi_gpu_test
from mmdet.apis.test import single_gpu_test as det_single_gpu_test


def single_gpu_test(*args, task_type='mmdet', **kwargs):
    if task_type == 'mmdet':
        return det_single_gpu_test(*args, **kwargs)
    elif task_type == 'mmcls':
        return cls_single_gpu_test(*args, **kwargs)
    else:
        raise NotImplementedError


def multi_gpu_test(*args, task_type='mmdet', **kwargs):
    if task_type == 'mmdet':
        return det_multi_gpu_test(*args, **kwargs)
    elif task_type == 'mmcls':
        return cls_multi_gpu_test(*args, **kwargs)
    raise NotImplementedError
@@ -0,0 +1,34 @@
import random

import numpy as np
import torch
from mmcls.apis.train import train_model as train_classifier
from mmdet.apis.train import train_detector


def set_random_seed(seed, deterministic=False):
    """Set random seed.

    Args:
        seed (int): Seed to be used.
        deterministic (bool): Whether to set the deterministic option for
            CUDNN backend, i.e., set `torch.backends.cudnn.deterministic`
            to True and `torch.backends.cudnn.benchmark` to False.
            Default: False.
    """
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    if deterministic:
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False


def train_model(*args, task_type='mmdet', **kwargs):
    if task_type == 'mmdet':
        train_detector(*args, **kwargs)
    elif task_type == 'mmcls':
        train_classifier(*args, **kwargs)
    else:
        raise NotImplementedError
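The `task_type` switch above lets one entry point drive both frameworks. A hedged sketch of the intended call — the import path is an assumption from the package layout in this diff, `model`, `datasets` and `cfg` are assumed to be produced by the builders later in this commit, and the remaining keyword arguments are those of the underlying mmdet/mmcls trainers:

```python
from mmfewshot.apis import set_random_seed, train_model  # assumed module layout

set_random_seed(0, deterministic=True)
# Dispatches to mmdet's train_detector or mmcls's train_model based on
# cfg.task_type; model/datasets/cfg are assumed to be built beforehand.
train_model(model, datasets, cfg,
            distributed=False, validate=True,
            task_type=cfg.task_type)
```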
@@ -0,0 +1,2 @@
from .dataset_builder import *  # noqa: F401, F403
from .model_builder import *  # noqa: F401, F403
@@ -0,0 +1,35 @@
from mmcls.datasets.builder import build_dataloader as build_cls_dataloader
from mmcls.datasets.builder import build_dataset as build_cls_dataset
from mmdet.datasets.builder import build_dataloader as build_det_dataloader
from mmdet.datasets.builder import build_dataset as build_det_dataset


def build_dataloader(dataset=None, task_type='mmdet', round_up=True, **kwargs):
    # TODO: identify how to build the dataloader via the type of dataset
    # just an example
    # if isinstance(dataset, base_meta_learning_dataset):
    #     data_loader = build_det_metalearning_dataloader(dataset=dataset, **kwargs)
    if task_type == 'mmdet':
        data_loader = build_det_dataloader(dataset=dataset, **kwargs)
    elif task_type == 'mmcls':
        data_loader = build_cls_dataloader(
            dataset=dataset, round_up=round_up, **kwargs)
    else:
        raise NotImplementedError
    return data_loader


def build_dataset(*args, task_type='mmdet', **kwargs):

    if task_type == 'mmdet':
        dataset = build_det_dataset(*args, **kwargs)
    elif task_type == 'mmcls':
        dataset = build_cls_dataset(*args, **kwargs)
    else:
        raise NotImplementedError
    return dataset


# TODO: check whether det and cls can use same dataloader for meta_learning
def build_det_metalearning_dataloader():
    pass
@@ -0,0 +1,10 @@
from mmcls.models.builder import build_classifier as build_cls_model
from mmdet.models.builder import build_detector as build_det_model


def build_model(*args, task_type='mmdet', **kwargs):
    if task_type == 'mmdet':
        return build_det_model(*args, **kwargs)
    elif task_type == 'mmcls':
        return build_cls_model(*args, **kwargs)
    raise NotImplementedError
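Together with the dataset builders above, this gives a task-agnostic construction path. A hedged usage sketch — the import path and config filename are assumptions, and the dataloader keyword arguments are those of the underlying mmdet builder:

```python
from mmcv import Config
from mmfewshot.builders import (build_dataloader, build_dataset,
                                build_model)  # assumed module layout

cfg = Config.fromfile('configs/detection/test_detection.py')  # hypothetical path
dataset = build_dataset(cfg.data.train, task_type=cfg.task_type)
model = build_model(cfg.model, task_type=cfg.task_type)
# Forwards to mmdet's build_dataloader since cfg.task_type == 'mmdet'.
data_loader = build_dataloader(
    dataset,
    task_type=cfg.task_type,
    samples_per_gpu=cfg.data.samples_per_gpu,
    workers_per_gpu=cfg.data.workers_per_gpu,
    num_gpus=1,
    dist=False)
```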
@@ -0,0 +1,2 @@
from .datasets import *  # noqa: F401,F403
from .models import *  # noqa: F401,F403
@@ -0,0 +1,8 @@
# just an example
from mmcls.datasets.builder import DATASETS
from mmcls.datasets.imagenet import ImageNet


@DATASETS.register_module()
class BaseMetaLearingDataset(ImageNet):
    pass
@@ -0,0 +1 @@
from .classifiers import *  # noqa: F401,F403
@@ -0,0 +1,3 @@
from .base_meta_learning_classifier import BaseMetaLearingClassifier

__all__ = ['BaseMetaLearingClassifier']
@@ -0,0 +1,8 @@
from mmcls.models.builder import CLASSIFIERS
from mmcls.models.classifiers import BaseClassifier


# just an example
@CLASSIFIERS.register_module()
class BaseMetaLearingClassifier(BaseClassifier):
    pass
@@ -0,0 +1,2 @@
from .datasets import *  # noqa: F401,F403
from .models import *  # noqa: F401,F403
@@ -0,0 +1,3 @@
from .base_meta_learning_dataset import BaseMetaLearingDataset

__all__ = ['BaseMetaLearingDataset']
@@ -0,0 +1,8 @@
# just an example
from mmdet.datasets.builder import DATASETS
from mmdet.datasets.custom import CustomDataset


@DATASETS.register_module()
class BaseMetaLearingDataset(CustomDataset):
    pass
@@ -0,0 +1 @@
from .detectors import *  # noqa: F401,F403
@@ -0,0 +1,3 @@
from .base_meta_learning_detector import TestDetection

__all__ = ['TestDetection']
@@ -0,0 +1,12 @@
from mmdet.models.builder import DETECTORS
from mmdet.models.detectors import BaseDetector, FasterRCNN


@DETECTORS.register_module()
class BaseMetaLearingDetector(BaseDetector):
    pass


@DETECTORS.register_module()
class TestDetection(FasterRCNN):
    pass
@@ -0,0 +1,10 @@
def check_config(cfg):
    """Check for missing or deprecated arguments."""
    support_tasks = ['mmcls', 'mmdet']
    if 'task_type' not in cfg:
        raise AttributeError(f'Please set `task_type` '
                             f'in your config, {support_tasks} are supported')
    if cfg.task_type not in support_tasks:
        raise ValueError(f'{support_tasks} are supported, '
                         f'but got `task_type` {cfg.task_type}')
    return cfg
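A quick sketch of what this validation accepts and rejects (assumes mmcv is installed; `check_config` refers to the function in the hunk above, whose file path the diff does not show):

```python
from mmcv import Config

# `check_config` is the function defined in the hunk above.
cfg = check_config(Config(dict(task_type='mmcls')))  # passes through unchanged
try:
    check_config(Config(dict(task_type='mmseg')))    # unsupported task type
except ValueError as err:
    print(err)  # ['mmcls', 'mmdet'] are supported, but got `task_type` mmseg
```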
@@ -0,0 +1,19 @@
# Copyright (c) Open-MMLab. All rights reserved.

__version__ = '0.1.0'
short_version = __version__


def parse_version_info(version_str):
    version_info = []
    for x in version_str.split('.'):
        if x.isdigit():
            version_info.append(int(x))
        elif x.find('rc') != -1:
            patch_version = x.split('rc')
            version_info.append(int(patch_version[0]))
            version_info.append(f'rc{patch_version[1]}')
    return tuple(version_info)


version_info = parse_version_info(__version__)
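Unlike `digit_version` earlier in this commit, `parse_version_info` keeps the rc tag as a string element instead of folding it into the numbers. A quick illustration — the sample inputs are illustrative, and `parse_version_info` is the function defined above:

```python
# `parse_version_info` is the function defined in the hunk above.
assert parse_version_info('0.1.0') == (0, 1, 0)
# The 'rc' suffix stays a separate string element: '1.0rc1' -> (1, 0, 'rc1').
assert parse_version_info('1.0rc1') == (1, 0, 'rc1')
```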
@@ -0,0 +1,7 @@
[pytest]
addopts = --xdoctest --xdoctest-style=auto
norecursedirs = .git ignore build __pycache__ data docker docs .eggs

filterwarnings = default
    ignore:.*No cfgstr given in Cacher constructor or call.*:Warning
    ignore:.*Define the __nice__ method for.*:Warning
@@ -0,0 +1,4 @@
-r requirements/build.txt
-r requirements/optional.txt
-r requirements/runtime.txt
-r requirements/tests.txt
@@ -0,0 +1,3 @@
# These must be installed before building mmfewshot
cython
numpy
@@ -0,0 +1,4 @@
recommonmark
sphinx
sphinx_markdown_tables
sphinx_rtd_theme
@@ -0,0 +1,5 @@
albumentations>=0.3.2
cityscapesscripts
imagecorruptions
scipy
sklearn
@@ -0,0 +1,5 @@
mmcls
mmcv
mmdet
torch
torchvision
@@ -0,0 +1,8 @@
matplotlib
mmcls
mmdet
numpy
pycocotools; platform_system == "Linux"
pycocotools-windows; platform_system == "Windows"
six
terminaltables
@ -0,0 +1,13 @@
asynctest
codecov
flake8
interrogate
isort==4.3.21
# Note: used for kwarray.group_items, this may be ported to mmcv in the future.
kwarray
mmcls
mmdet
pytest
ubelt
xdoctest>=0.10.0
yapf
@ -0,0 +1,13 @@
[isort]
line_length = 79
multi_line_output = 0
known_standard_library = setuptools
known_first_party = mmfewshot
known_third_party = mmcls,mmcv,mmdet,numpy,pytest,torch
no_lines_before = STDLIB,LOCALFOLDER
default_section = THIRDPARTY

[yapf]
BASED_ON_STYLE = pep8
BLANK_LINE_BEFORE_NESTED_CLASS_OR_DEF = true
SPLIT_BEFORE_EXPRESSION_AFTER_OPENING_PAREN = true
@ -0,0 +1,161 @@
#!/usr/bin/env python
import os
from setuptools import find_packages, setup

import torch
from torch.utils.cpp_extension import (BuildExtension, CppExtension,
                                       CUDAExtension)


def readme():
    with open('README.md', encoding='utf-8') as f:
        content = f.read()
    return content


version_file = 'mmfewshot/version.py'


def get_version():
    with open(version_file, 'r') as f:
        exec(compile(f.read(), version_file, 'exec'))
    return locals()['__version__']


def make_cuda_ext(name, module, sources, sources_cuda=[]):

    define_macros = []
    extra_compile_args = {'cxx': []}

    if torch.cuda.is_available() or os.getenv('FORCE_CUDA', '0') == '1':
        define_macros += [('WITH_CUDA', None)]
        extension = CUDAExtension
        extra_compile_args['nvcc'] = [
            '-D__CUDA_NO_HALF_OPERATORS__',
            '-D__CUDA_NO_HALF_CONVERSIONS__',
            '-D__CUDA_NO_HALF2_OPERATORS__',
        ]
        sources += sources_cuda
    else:
        print(f'Compiling {name} without CUDA')
        extension = CppExtension

    return extension(
        name=f'{module}.{name}',
        sources=[os.path.join(*module.split('.'), p) for p in sources],
        define_macros=define_macros,
        extra_compile_args=extra_compile_args)


def parse_requirements(fname='requirements.txt', with_version=True):
    """Parse the package dependencies listed in a requirements file but strip
    specific versioning information.

    Args:
        fname (str): path to requirements file
        with_version (bool, default=True): if True include version specs

    Returns:
        List[str]: list of requirements items

    CommandLine:
        python -c "import setup; print(setup.parse_requirements())"
    """
    import re
    import sys
    from os.path import exists
    require_fpath = fname

    def parse_line(line):
        """Parse information from a line in a requirements text file."""
        if line.startswith('-r '):
            # Allow specifying requirements in other files
            target = line.split(' ')[1]
            for info in parse_require_file(target):
                yield info
        else:
            info = {'line': line}
            if line.startswith('-e '):
                info['package'] = line.split('#egg=')[1]
            elif '@git+' in line:
                info['package'] = line
            else:
                # Remove versioning from the package
                pat = '(' + '|'.join(['>=', '==', '>']) + ')'
                parts = re.split(pat, line, maxsplit=1)
                parts = [p.strip() for p in parts]

                info['package'] = parts[0]
                if len(parts) > 1:
                    op, rest = parts[1:]
                    if ';' in rest:
                        # Handle platform specific dependencies
                        # http://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-platform-specific-dependencies
                        version, platform_deps = map(str.strip,
                                                     rest.split(';'))
                        info['platform_deps'] = platform_deps
                    else:
                        version = rest  # NOQA
                    info['version'] = (op, version)
            yield info

    def parse_require_file(fpath):
        with open(fpath, 'r') as f:
            for line in f.readlines():
                line = line.strip()
                if line and not line.startswith('#'):
                    for info in parse_line(line):
                        yield info

    def gen_packages_items():
        if exists(require_fpath):
            for info in parse_require_file(require_fpath):
                parts = [info['package']]
                if with_version and 'version' in info:
                    parts.extend(info['version'])
                if not sys.version.startswith('3.4'):
                    # apparently package_deps are broken in 3.4
                    platform_deps = info.get('platform_deps')
                    if platform_deps is not None:
                        parts.append(';' + platform_deps)
                item = ''.join(parts)
                yield item

    packages = list(gen_packages_items())
    return packages


if __name__ == '__main__':
    setup(
        name='mmfewshot',
        version=get_version(),
        description='OpenMMLab Few-Shot Learning Toolbox and Benchmark',
        long_description=readme(),
        long_description_content_type='text/markdown',
        author='OpenMMLab',
        author_email='openmmlab@gmail.com',
        keywords='computer vision, few-shot learning',
        url='https://github.com/open-mmlab/mmfewshot',
        packages=find_packages(exclude=('configs', 'tools', 'demo')),
        classifiers=[
            'Development Status :: 5 - Production/Stable',
            'License :: OSI Approved :: Apache Software License',
            'Operating System :: OS Independent',
            'Programming Language :: Python :: 3',
            'Programming Language :: Python :: 3.6',
            'Programming Language :: Python :: 3.7',
            'Programming Language :: Python :: 3.8',
        ],
        license='Apache License 2.0',
        setup_requires=parse_requirements('requirements/build.txt'),
        tests_require=parse_requirements('requirements/tests.txt'),
        install_requires=parse_requirements('requirements/runtime.txt'),
        extras_require={
            'all': parse_requirements('requirements.txt'),
            'tests': parse_requirements('requirements/tests.txt'),
            'build': parse_requirements('requirements/build.txt'),
            'optional': parse_requirements('requirements/optional.txt'),
        },
        ext_modules=[],
        cmdclass={'build_ext': BuildExtension},
        zip_safe=False)
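As a quick illustration of parse_requirements, pointing it at the requirements/build.txt hunk above would skip the comment line and yield the bare package names; a sketch assuming that file's contents from this commit:

    >>> parse_requirements('requirements/build.txt')
    ['cython', 'numpy']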
@ -0,0 +1,16 @@
import pytest
from mmcv import ConfigDict

from mmfewshot.utils.check_config import check_config


def test_check_config():
    config = dict(task_type='mmdet')
    cfg = ConfigDict(config)
    check_config(cfg)
    with pytest.raises(AttributeError):
        cfg.pop('task_type')
        check_config(cfg)
    with pytest.raises(ValueError):
        cfg.task_type = 'cls'
        check_config(cfg)
@ -0,0 +1,10 @@
#!/usr/bin/env bash

CONFIG=$1
CHECKPOINT=$2
GPUS=$3
PORT=${PORT:-29500}

PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \
python -m torch.distributed.launch --nproc_per_node=$GPUS --master_port=$PORT \
    $(dirname "$0")/test.py $CONFIG $CHECKPOINT --launcher pytorch ${@:4}
@ -0,0 +1,9 @@
#!/usr/bin/env bash

CONFIG=$1
GPUS=$2
PORT=${PORT:-29500}

PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \
python -m torch.distributed.launch --nproc_per_node=$GPUS --master_port=$PORT \
    $(dirname "$0")/train.py $CONFIG --launcher pytorch ${@:3}
@ -0,0 +1,24 @@
#!/usr/bin/env bash

set -x

PARTITION=$1
JOB_NAME=$2
CONFIG=$3
CHECKPOINT=$4
GPUS=${GPUS:-8}
GPUS_PER_NODE=${GPUS_PER_NODE:-8}
CPUS_PER_TASK=${CPUS_PER_TASK:-5}
PY_ARGS=${@:5}
SRUN_ARGS=${SRUN_ARGS:-""}

PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \
srun -p ${PARTITION} \
    --job-name=${JOB_NAME} \
    --gres=gpu:${GPUS_PER_NODE} \
    --ntasks=${GPUS} \
    --ntasks-per-node=${GPUS_PER_NODE} \
    --cpus-per-task=${CPUS_PER_TASK} \
    --kill-on-bad-exit=1 \
    ${SRUN_ARGS} \
    python -u tools/test.py ${CONFIG} ${CHECKPOINT} --launcher="slurm" ${PY_ARGS}
@ -0,0 +1,24 @@
#!/usr/bin/env bash

set -x

PARTITION=$1
JOB_NAME=$2
CONFIG=$3
WORK_DIR=$4
GPUS=${GPUS:-8}
GPUS_PER_NODE=${GPUS_PER_NODE:-8}
CPUS_PER_TASK=${CPUS_PER_TASK:-5}
SRUN_ARGS=${SRUN_ARGS:-""}
PY_ARGS=${@:5}

PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \
srun -p ${PARTITION} \
    --job-name=${JOB_NAME} \
    --gres=gpu:${GPUS_PER_NODE} \
    --ntasks=${GPUS} \
    --ntasks-per-node=${GPUS_PER_NODE} \
    --cpus-per-task=${CPUS_PER_TASK} \
    --kill-on-bad-exit=1 \
    ${SRUN_ARGS} \
    python -u tools/train.py ${CONFIG} --work-dir=${WORK_DIR} --launcher="slurm" ${PY_ARGS}
@ -0,0 +1,206 @@
import argparse
import os
import warnings

import mmcv
import torch
from mmcv import Config, DictAction
from mmcv.parallel import MMDataParallel, MMDistributedDataParallel
from mmcv.runner import (get_dist_info, init_dist, load_checkpoint,
                         wrap_fp16_model)

import mmfewshot # noqa: F401, F403
from mmfewshot.apis.test import multi_gpu_test, single_gpu_test
from mmfewshot.builders import build_dataloader, build_dataset, build_model
from mmfewshot.utils.check_config import check_config


def parse_args():
    parser = argparse.ArgumentParser(
        description='MMFewShot test (and eval) a model')
    parser.add_argument('config', help='test config file path')
    parser.add_argument('checkpoint', help='checkpoint file')
    parser.add_argument('--out', help='output result file in pickle format')
    parser.add_argument(
        '--eval',
        type=str,
        nargs='+',
        help='evaluation metrics, which depend on the dataset '
        'of the specific task_type, e.g., "bbox", "segm", "proposal" for '
        'COCO and "mAP", "recall" for PASCAL VOC in '
        'MMDet, or "accuracy", "precision", "recall", "f1_score", '
        '"support" for single-label datasets, and "mAP", "CP", "CR", '
        '"CF1", "OP", "OR", "OF1" for '
        'multi-label datasets in MMCls')
    parser.add_argument('--show', action='store_true', help='show results')
    parser.add_argument(
        '--show-dir', help='directory where painted images will be saved')
    parser.add_argument(
        '--show-score-thr',
        type=float,
        default=0.3,
        help='score threshold (default: 0.3), only works when task_type '
        'is mmdet')
    parser.add_argument(
        '--gpu-collect',
        action='store_true',
        help='whether to use gpu to collect results.')
    parser.add_argument(
        '--tmpdir',
        help='tmp directory used for collecting results from multiple '
        'workers, available when gpu-collect is not specified')
    parser.add_argument(
        '--cfg-options',
        nargs='+',
        action=DictAction,
        help='override some settings in the used config, the key-value pair '
        'in xxx=yyy format will be merged into config file. If the value to '
        'be overwritten is a list, it should be like key="[a,b]" or key=a,b '
        'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" '
        'Note that the quotation marks are necessary and that no white space '
        'is allowed.')
    parser.add_argument(
        '--options',
        nargs='+',
        action=DictAction,
        help='custom options for evaluation, the key-value pair in xxx=yyy '
        'format will be kwargs for dataset.evaluate() function (deprecated), '
        'change to --eval-options instead.')
    parser.add_argument(
        '--eval-options',
        nargs='+',
        action=DictAction,
        help='custom options for evaluation, the key-value pair in xxx=yyy '
        'format will be kwargs for dataset.evaluate() function')
    parser.add_argument(
        '--show-options',
        nargs='+',
        action=DictAction,
        help='custom options for show_result, key-value pairs in xxx=yyy '
        'format. Check available options in `model.show_result`. Only works '
        'when task_type is mmcls')
    parser.add_argument(
        '--launcher',
        choices=['none', 'pytorch', 'slurm', 'mpi'],
        default='none',
        help='job launcher')
    parser.add_argument('--local_rank', type=int, default=0)
    args = parser.parse_args()
    if 'LOCAL_RANK' not in os.environ:
        os.environ['LOCAL_RANK'] = str(args.local_rank)

    if args.options and args.eval_options:
        raise ValueError(
            '--options and --eval-options cannot be both '
            'specified, --options is deprecated in favor of --eval-options')
    if args.options:
        warnings.warn('--options is deprecated in favor of --eval-options')
        args.eval_options = args.options
    return args


def main():
    args = parse_args()

    assert args.out or args.eval or args.show \
        or args.show_dir, \
        ('Please specify at least one operation (save/eval/show the '
         'results) with the argument "--out", "--eval", '
         '"--show" or "--show-dir"')

    if args.out is not None and not args.out.endswith(('.pkl', '.pickle')):
        raise ValueError('The output file must be a pkl file.')

    cfg = Config.fromfile(args.config)
    if args.cfg_options is not None:
        cfg.merge_from_dict(args.cfg_options)

    cfg = check_config(cfg)

    # import modules from string list.
    if cfg.get('custom_imports', None):
        from mmcv.utils import import_modules_from_strings
        import_modules_from_strings(**cfg['custom_imports'])
    # set cudnn_benchmark
    if cfg.get('cudnn_benchmark', False):
        torch.backends.cudnn.benchmark = True
    cfg.model.pretrained = None

    # init distributed env first, since logger depends on the dist info.
    if args.launcher == 'none':
        distributed = False
    else:
        distributed = True
        init_dist(args.launcher, **cfg.dist_params)

    # build the dataloader
    dataset = build_dataset(cfg.data.test, task_type=cfg.task_type)
    data_loader = build_dataloader(
        dataset,
        samples_per_gpu=cfg.data.samples_per_gpu,
        workers_per_gpu=cfg.data.workers_per_gpu,
        dist=distributed,
        shuffle=False,
        round_up=False)

    # build the model and load checkpoint
    cfg.model.train_cfg = None
    model = build_model(cfg.model, task_type=cfg.task_type)
    fp16_cfg = cfg.get('fp16', None)
    if fp16_cfg is not None:
        wrap_fp16_model(model)
    checkpoint = load_checkpoint(model, args.checkpoint, map_location='cpu')
    # old versions did not save class info in checkpoints, this workaround is
    # for backward compatibility
    if 'CLASSES' in checkpoint.get('meta', {}):
        model.CLASSES = checkpoint['meta']['CLASSES']
    else:
        model.CLASSES = dataset.CLASSES

    if not distributed:
        model = MMDataParallel(model, device_ids=[0])
        if cfg.task_type == 'mmdet':
            show_kwargs = dict(show_score_thr=args.show_score_thr)
        elif cfg.task_type == 'mmcls':
            show_kwargs = {} if args.show_options is None \
                else args.show_options
        outputs = single_gpu_test(
            model,
            data_loader,
            args.show,
            args.show_dir,
            task_type=cfg.task_type,
            **show_kwargs)
    else:
        model = MMDistributedDataParallel(
            model.cuda(),
            device_ids=[torch.cuda.current_device()],
            broadcast_buffers=False)
        outputs = multi_gpu_test(
            model,
            data_loader,
            args.tmpdir,
            args.gpu_collect,
            task_type=cfg.task_type,
        )

    rank, _ = get_dist_info()
    if rank == 0:
        if args.out:
            print(f'\nwriting results to {args.out}')
            mmcv.dump(outputs, args.out)
        kwargs = {} if args.eval_options is None else args.eval_options
        if args.eval:
            eval_kwargs = cfg.get('evaluation', {}).copy()
            # hard-coded way to remove EvalHook args
            for key in [
                    'interval', 'tmpdir', 'start', 'gpu_collect', 'save_best',
                    'rule'
            ]:
                eval_kwargs.pop(key, None)
            eval_kwargs.update(dict(metric=args.eval, **kwargs))
            print(dataset.evaluate(outputs, **eval_kwargs))


if __name__ == '__main__':
    main()
@ -0,0 +1,192 @@
import argparse
import copy
import os
import os.path as osp
import time
import warnings

import mmcv
import torch
from mmcv import Config, DictAction
from mmcv.runner import get_dist_info, init_dist
from mmcv.utils import get_git_hash
from mmdet.utils import collect_env, get_root_logger

import mmfewshot # noqa: F401, F403
from mmfewshot import __version__
from mmfewshot.apis import set_random_seed, train_model
from mmfewshot.builders.dataset_builder import build_dataset
from mmfewshot.builders.model_builder import build_model
from mmfewshot.utils.check_config import check_config


def parse_args():
    parser = argparse.ArgumentParser(description='Train a FewShot model')
    parser.add_argument('config', help='train config file path')
    parser.add_argument('--work-dir', help='the dir to save logs and models')
    parser.add_argument(
        '--resume-from', help='the checkpoint file to resume from')
    parser.add_argument(
        '--no-validate',
        action='store_true',
        help='whether not to evaluate the checkpoint during training')
    group_gpus = parser.add_mutually_exclusive_group()
    group_gpus.add_argument(
        '--gpus',
        type=int,
        help='number of gpus to use '
        '(only applicable to non-distributed training)')
    group_gpus.add_argument(
        '--gpu-ids',
        type=int,
        nargs='+',
        help='ids of gpus to use '
        '(only applicable to non-distributed training)')
    parser.add_argument('--seed', type=int, default=None, help='random seed')
    parser.add_argument(
        '--deterministic',
        action='store_true',
        help='whether to set deterministic options for CUDNN backend.')
    parser.add_argument(
        '--options',
        nargs='+',
        action=DictAction,
        help='override some settings in the used config, the key-value pair '
        'in xxx=yyy format will be merged into config file (deprecated), '
        'change to --cfg-options instead.')
    parser.add_argument(
        '--cfg-options',
        nargs='+',
        action=DictAction,
        help='override some settings in the used config, the key-value pair '
        'in xxx=yyy format will be merged into config file. If the value to '
        'be overwritten is a list, it should be like key="[a,b]" or key=a,b '
        'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" '
        'Note that the quotation marks are necessary and that no white space '
        'is allowed.')
    parser.add_argument(
        '--launcher',
        choices=['none', 'pytorch', 'slurm', 'mpi'],
        default='none',
        help='job launcher')
    parser.add_argument('--local_rank', type=int, default=0)
    args = parser.parse_args()
    if 'LOCAL_RANK' not in os.environ:
        os.environ['LOCAL_RANK'] = str(args.local_rank)

    if args.options and args.cfg_options:
        raise ValueError(
            '--options and --cfg-options cannot be both '
            'specified, --options is deprecated in favor of --cfg-options')
    if args.options:
        warnings.warn('--options is deprecated in favor of --cfg-options')
        args.cfg_options = args.options

    return args


def main():
    args = parse_args()

    cfg = Config.fromfile(args.config)

    if args.cfg_options is not None:
        cfg.merge_from_dict(args.cfg_options)

    cfg = check_config(cfg)

    # import modules from string list.
    if cfg.get('custom_imports', None):
        from mmcv.utils import import_modules_from_strings
        import_modules_from_strings(**cfg['custom_imports'])
    # set cudnn_benchmark
    if cfg.get('cudnn_benchmark', False):
        torch.backends.cudnn.benchmark = True

    # work_dir is determined in this priority: CLI > segment in file > filename
    if args.work_dir is not None:
        # update configs according to CLI args if args.work_dir is not None
        cfg.work_dir = args.work_dir
    elif cfg.get('work_dir', None) is None:
        # use config filename as default work_dir if cfg.work_dir is None
        cfg.work_dir = osp.join('./work_dirs',
                                osp.splitext(osp.basename(args.config))[0])
    if args.resume_from is not None:
        cfg.resume_from = args.resume_from
    if args.gpu_ids is not None:
        cfg.gpu_ids = args.gpu_ids
    else:
        cfg.gpu_ids = range(1) if args.gpus is None else range(args.gpus)

    # init distributed env first, since logger depends on the dist info.
    if args.launcher == 'none':
        distributed = False
    else:
        distributed = True
        init_dist(args.launcher, **cfg.dist_params)
        # re-set gpu_ids with distributed training mode
        _, world_size = get_dist_info()
        cfg.gpu_ids = range(world_size)

    # create work_dir
    mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
    # dump config
    cfg.dump(osp.join(cfg.work_dir, osp.basename(args.config)))
    # init the logger before other steps
    timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime())
    log_file = osp.join(cfg.work_dir, f'{timestamp}.log')
    logger = get_root_logger(log_file=log_file, log_level=cfg.log_level)

    # init the meta dict to record some important information such as
    # environment info and seed, which will be logged
    meta = dict()
    # log env info
    env_info_dict = collect_env()
    env_info = '\n'.join([(f'{k}: {v}') for k, v in env_info_dict.items()])
    dash_line = '-' * 60 + '\n'
    logger.info('Environment info:\n' + dash_line + env_info + '\n' +
                dash_line)
    meta['env_info'] = env_info
    meta['config'] = cfg.pretty_text
    # log some basic info
    logger.info(f'Distributed training: {distributed}')
    logger.info(f'Config:\n{cfg.pretty_text}')

    # set random seeds
    if args.seed is not None:
        logger.info(f'Set random seed to {args.seed}, '
                    f'deterministic: {args.deterministic}')
        set_random_seed(args.seed, deterministic=args.deterministic)
    cfg.seed = args.seed
    meta['seed'] = args.seed
    meta['exp_name'] = osp.basename(args.config)

    model = build_model(cfg.model, task_type=cfg.task_type)
    model.init_weights()

    datasets = [build_dataset(cfg.data.train, task_type=cfg.task_type)]
    if len(cfg.workflow) == 2:
        val_dataset = copy.deepcopy(cfg.data.val)
        val_dataset.pipeline = cfg.data.train.pipeline
        datasets.append(build_dataset(val_dataset, task_type=cfg.task_type))
    if cfg.checkpoint_config is not None:
        # save mmfewshot version, config file content and class names in
        # checkpoints as meta data
        cfg.checkpoint_config.meta = dict(
            mmfewshot_version=__version__ + get_git_hash()[:7],
            CLASSES=datasets[0].CLASSES)
    # add an attribute for visualization convenience
    model.CLASSES = datasets[0].CLASSES
    train_model(
        model,
        datasets,
        cfg,
        task_type=cfg.task_type,
        distributed=distributed,
        validate=(not args.no_validate),
        timestamp=timestamp,
        meta=meta)


if __name__ == '__main__':
    main()