Bump version to v1.0.0rc8 (#1583)

* Bump version to v1.0.0rc8

* Apply suggestions from code review

Co-authored-by: Yixiao Fang <36138628+fangyixiao18@users.noreply.github.com>

* Update README.md

---------

Co-authored-by: Yixiao Fang <36138628+fangyixiao18@users.noreply.github.com>
pull/1655/head v1.0.0rc8
Ma Zerun 2023-05-23 11:22:51 +08:00 committed by GitHub
parent be389eb846
commit 4dd8a86145
10 changed files with 94 additions and 9 deletions


@@ -86,6 +86,12 @@ https://github.com/open-mmlab/mmpretrain/assets/26739999/e4dcd3a2-f895-4d1b-a351
## What's new
🌟 v1.0.0rc8 was released on 22/05/2023
- Support multiple **multi-modal** algorithms and inferencers. You can explore these features via the [gradio demo](https://github.com/open-mmlab/mmpretrain/tree/main/projects/gradio_demo)!
- Add EVA-02, Dino-V2, ViT-SAM and GLIP backbones.
- Register torchvision transforms into MMPretrain; you can now easily integrate torchvision's data augmentations in MMPretrain. See [the doc](https://mmpretrain.readthedocs.io/en/latest/api/data_process.html#torchvision-transforms).
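As a quick illustration of the torchvision registration noted above: registered transforms are addressed through a `torchvision/` prefix inside an ordinary pipeline config. This is a hedged sketch; the surrounding `LoadImageFromFile`/`PackInputs` steps and the parameter values are illustrative assumptions, and the linked doc lists the actually supported set.

```python
# Sketch of a training pipeline mixing MMPretrain and torchvision
# transforms; everything outside the 'torchvision/' prefix is an
# illustrative assumption, not a verbatim config from the repo.
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='torchvision/RandomResizedCrop', size=176),
    dict(type='torchvision/RandomHorizontalFlip', p=0.5),
    dict(type='PackInputs'),
]

# Configs are plain Python data, so the pipeline can be inspected directly:
tv_steps = [t['type'] for t in train_pipeline
            if t['type'].startswith('torchvision/')]
```

Because the pipeline is declared as plain dicts, swapping a torchvision augmentation in or out is a one-line config change.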
🌟 v1.0.0rc7 was released on 07/04/2023
- Integrated self-supervised learning algorithms from **MMSelfSup**, such as **MAE**, **BEiT**, etc.
@@ -160,6 +166,9 @@ Results and models are available in the [model zoo](https://mmpretrain.readthedo
<td>
<b>Self-supervised Learning</b>
</td>
<td>
<b>Multi-Modality Algorithms</b>
</td>
<td>
<b>Others</b>
</td>
@@ -239,6 +248,15 @@ Results and models are available in the [model zoo](https://mmpretrain.readthedo
<li><a href="configs/mixmim">MixMIM (arXiv'2022)</a></li>
</ul>
</td>
<td>
<ul>
<li><a href="configs/blip">BLIP (arxiv'2022)</a></li>
<li><a href="configs/blip2">BLIP-2 (arxiv'2023)</a></li>
<li><a href="configs/ofa">OFA (CoRR'2022)</a></li>
<li><a href="configs/flamingo">Flamingo (NeurIPS'2022)</a></li>
<li><a href="configs/chinese_clip">Chinese CLIP (arxiv'2022)</a></li>
</ul>
</td>
<td>
Image Retrieval Task:
<ul>


@@ -84,6 +84,12 @@ https://github.com/open-mmlab/mmpretrain/assets/26739999/e4dcd3a2-f895-4d1b-a351
## 更新日志
🌟 2023/5/22 发布了 v1.0.0rc8 版本
- 支持多种多模态算法和推理器。您可以通过 [gradio demo](https://github.com/open-mmlab/mmpretrain/tree/main/projects/gradio_demo) 探索这些功能!
- 新增 EVA-02、Dino-V2、ViT-SAM 和 GLIP 主干网络。
- 将 torchvision 变换注册到 MMPretrain,现在您可以轻松地将 torchvision 的数据增强集成到 MMPretrain 中。
🌟 2023/4/7 发布了 v1.0.0rc7 版本
- 整合来自 MMSelfSup 的自监督学习算法,例如 `MAE`、`BEiT` 等
@@ -157,6 +163,9 @@ mim install -e ".[multimodal]"
<td>
<b>自监督学习</b>
</td>
<td>
<b>多模态算法</b>
</td>
<td>
<b>其它</b>
</td>
@@ -235,6 +244,15 @@ mim install -e ".[multimodal]"
<li><a href="configs/mixmim">MixMIM (arXiv'2022)</a></li>
</ul>
</td>
<td>
<ul>
<li><a href="configs/blip">BLIP (arxiv'2022)</a></li>
<li><a href="configs/blip2">BLIP-2 (arxiv'2023)</a></li>
<li><a href="configs/ofa">OFA (CoRR'2022)</a></li>
<li><a href="configs/flamingo">Flamingo (NeurIPS'2022)</a></li>
<li><a href="configs/chinese_clip">Chinese CLIP (arxiv'2022)</a></li>
</ul>
</td>
<td>
图像检索任务:
<ul>


@@ -3,7 +3,7 @@ ARG CUDA="11.3"
ARG CUDNN="8"
FROM pytorch/torchserve:latest-gpu
-ARG MMPRE="1.0.0rc5"
+ARG MMPRE="1.0.0rc8"
ENV PYTHONUNBUFFERED TRUE


@@ -63,7 +63,7 @@ pip install -U openmim && mim install -e .
Just install with mim.
```shell
-pip install -U openmim && mim install "mmpretrain>=1.0.0rc7"
+pip install -U openmim && mim install "mmpretrain>=1.0.0rc8"
```
```{note}
@@ -80,7 +80,7 @@ can add `[multimodal]` during the installation. For example:
mim install -e ".[multimodal]"
# Install as a Python package
-mim install "mmpretrain[multimodal]>=1.0.0rc7"
+mim install "mmpretrain[multimodal]>=1.0.0rc8"
```
## Verify the installation


@@ -1,5 +1,52 @@
# Changelog (MMPreTrain)
## v1.0.0rc8(22/05/2023)
### Highlights
- Support multiple multi-modal algorithms and inferencers. You can explore these features via the [gradio demo](https://github.com/open-mmlab/mmpretrain/tree/main/projects/gradio_demo)!
- Add EVA-02, Dino-V2, ViT-SAM and GLIP backbones.
- Register torchvision transforms into MMPretrain; you can now easily integrate torchvision's data augmentations in MMPretrain.
### New Features
- Support Chinese CLIP. ([#1576](https://github.com/open-mmlab/mmpretrain/pull/1576))
- Add ScienceQA Metrics ([#1577](https://github.com/open-mmlab/mmpretrain/pull/1577))
- Support multiple multi-modal algorithms and inferencers. ([#1561](https://github.com/open-mmlab/mmpretrain/pull/1561))
- Add EVA-02 backbone ([#1450](https://github.com/open-mmlab/mmpretrain/pull/1450))
- Support dinov2 backbone ([#1522](https://github.com/open-mmlab/mmpretrain/pull/1522))
- Support some downstream classification datasets. ([#1467](https://github.com/open-mmlab/mmpretrain/pull/1467))
- Support GLIP ([#1308](https://github.com/open-mmlab/mmpretrain/pull/1308))
- Register torchvision transforms into mmpretrain ([#1265](https://github.com/open-mmlab/mmpretrain/pull/1265))
- Add ViT of SAM ([#1476](https://github.com/open-mmlab/mmpretrain/pull/1476))
### Improvements
- [Refactor] Support to freeze channel reduction and add layer decay function ([#1490](https://github.com/open-mmlab/mmpretrain/pull/1490))
- [Refactor] Support resizing pos_embed while loading ckpt and format output ([#1488](https://github.com/open-mmlab/mmpretrain/pull/1488))
### Bug Fixes
- Fix scienceqa ([#1581](https://github.com/open-mmlab/mmpretrain/pull/1581))
- Fix config of beit ([#1528](https://github.com/open-mmlab/mmpretrain/pull/1528))
- Fix incorrect stage freeze in the RIFormer model ([#1573](https://github.com/open-mmlab/mmpretrain/pull/1573))
- Fix ddp bugs caused by `out_type`. ([#1570](https://github.com/open-mmlab/mmpretrain/pull/1570))
- Fix multi-task-head loss potential bug ([#1530](https://github.com/open-mmlab/mmpretrain/pull/1530))
- Support bce loss without batch augmentations ([#1525](https://github.com/open-mmlab/mmpretrain/pull/1525))
- Fix clip generator init bug ([#1518](https://github.com/open-mmlab/mmpretrain/pull/1518))
- Fix the bug in binary cross entropy loss ([#1499](https://github.com/open-mmlab/mmpretrain/pull/1499))
### Docs Update
- Update PoolFormer citation to CVPR version ([#1505](https://github.com/open-mmlab/mmpretrain/pull/1505))
- Refine Inference Doc ([#1489](https://github.com/open-mmlab/mmpretrain/pull/1489))
- Add doc for usage of confusion matrix ([#1513](https://github.com/open-mmlab/mmpretrain/pull/1513))
- Update MMagic link ([#1517](https://github.com/open-mmlab/mmpretrain/pull/1517))
- Fix example_project README ([#1575](https://github.com/open-mmlab/mmpretrain/pull/1575))
- Add NPU support page ([#1481](https://github.com/open-mmlab/mmpretrain/pull/1481))
- train cfg: Removed old description ([#1473](https://github.com/open-mmlab/mmpretrain/pull/1473))
- Fix typo in MultiLabelDataset docstring ([#1483](https://github.com/open-mmlab/mmpretrain/pull/1483))
## v1.0.0rc7(07/04/2023)
### Highlights


@@ -16,7 +16,8 @@ and make sure you fill in all required information in the template.
| MMPretrain version | MMEngine version | MMCV version |
| :----------------: | :---------------: | :--------------: |
-| 1.0.0rc7 (main)    | mmengine >= 0.5.0 | mmcv >= 2.0.0rc4 |
+| 1.0.0rc8 (main)    | mmengine >= 0.7.1 | mmcv >= 2.0.0rc4 |
+| 1.0.0rc7           | mmengine >= 0.5.0 | mmcv >= 2.0.0rc4 |
```{note}
Since the `dev` branch is under frequent development, the MMEngine and MMCV


@@ -67,7 +67,7 @@ pip install -U openmim && mim install -e .
直接使用 mim 安装即可。
```shell
-pip install -U openmim && mim install "mmpretrain>=1.0.0rc7"
+pip install -U openmim && mim install "mmpretrain>=1.0.0rc8"
```
```{note}
@@ -83,7 +83,7 @@ MMPretrain 中的多模态模型需要额外的依赖项,要安装这些依赖
mim install -e ".[multimodal]"
# 作为 Python 包安装
-mim install "mmpretrain[multimodal]>=1.0.0rc7"
+mim install "mmpretrain[multimodal]>=1.0.0rc8"
```
## 验证安装


@@ -13,7 +13,8 @@
| MMPretrain 版本 | MMEngine 版本 | MMCV 版本 |
| :-------------: | :---------------: | :--------------: |
-| 1.0.0rc7 (main) | mmengine >= 0.5.0 | mmcv >= 2.0.0rc4 |
+| 1.0.0rc8 (main) | mmengine >= 0.7.1 | mmcv >= 2.0.0rc4 |
+| 1.0.0rc7        | mmengine >= 0.5.0 | mmcv >= 2.0.0rc4 |
```{note}
由于 `dev` 分支处于频繁开发中,MMEngine 和 MMCV 版本依赖可能不准确。如果您在使用


@@ -10,7 +10,7 @@ mmcv_minimum_version = '2.0.0rc4'
mmcv_maximum_version = '2.1.0'
mmcv_version = digit_version(mmcv.__version__)
-mmengine_minimum_version = '0.5.0'
+mmengine_minimum_version = '0.7.1'
mmengine_maximum_version = '1.0.0'
mmengine_version = digit_version(mmengine.__version__)
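The raised mmengine floor above takes effect through a version-window comparison at import time. The sketch below shows that window logic; this `digit_version` is a simplified stand-in (an assumption for illustration, not mmengine's exact implementation, which also handles other pre-release tags):

```python
def digit_version(version_str):
    # Simplified stand-in for mmengine's digit_version: maps '2.0.0rc4'
    # to (2, 0, 0, 0, 4) and '2.1.0' to (2, 1, 0, 1, 0), so that a
    # release candidate sorts below the corresponding final release.
    release, _, rc = version_str.partition('rc')
    parts = [int(p) for p in release.strip('.').split('.')]
    return tuple(parts + ([0, int(rc)] if rc else [1, 0]))

# The setup-time check amounts to a half-open version window:
mmengine_minimum_version = '0.7.1'
mmengine_maximum_version = '1.0.0'
installed = '0.7.2'  # hypothetical installed mmengine version
ok = (digit_version(mmengine_minimum_version) <= digit_version(installed)
      < digit_version(mmengine_maximum_version))
```

With this ordering, bumping the minimum to `0.7.1` rejects any older mmengine while still accepting everything below the `1.0.0` ceiling.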


@@ -1,6 +1,6 @@
# Copyright (c) OpenMMLab. All rights reserved
-__version__ = '1.0.0rc7'
+__version__ = '1.0.0rc8'
def parse_version_info(version_str):
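The body of `parse_version_info` is cut off by the diff view. A typical OpenMMLab-style implementation of such a parser looks like the following (a reconstruction sketch under that assumption, not guaranteed to match the file byte-for-byte):

```python
def parse_version_info(version_str):
    """Parse a version string into a comparable tuple.

    E.g. '1.0.0rc8' -> (1, 0, 0, 'rc', 8) and '1.0.0' -> (1, 0, 0).
    """
    version_info = []
    for segment in version_str.split('.'):
        if segment.isdigit():
            version_info.append(int(segment))
        elif 'rc' in segment:
            # Split a pre-release segment like '0rc8' into its parts.
            patch, rc_num = segment.split('rc')
            version_info.extend([int(patch), 'rc', int(rc_num)])
    return tuple(version_info)
```

The resulting tuple is what downstream code compares against, e.g. `parse_version_info('1.0.0rc8')` yields `(1, 0, 0, 'rc', 8)`.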