[Refactor] Fix spelling (#1689)

parent `5c43d3ef42`
commit `465b6bdeec`
```diff
@@ -38,7 +38,7 @@ For Vicuna model, please refer to [MiniGPT-4 page](https://github.com/Vision-CAI
 | :------------------------------ | :--------: | :-------: | :--------------------------------------: | :------------------------------------------------------------------------------------------------------------: |
 | `minigpt-4_vicuna-7b_caption`\* | 8121.32 | N/A | [config](minigpt-4_vicuna-7b_caption.py) | [model](https://download.openmmlab.com/mmpretrain/v1.0/minigpt4/minigpt-4_linear-projection_20230615-714b5f52.pth) |
 
-*Models with * are converted from the [official repo](https://github.com/Vision-CAIR/MiniGPT-4/tree/main). The config files of these models are only for inference. We haven't reprodcue the training results.*
+*Models with * are converted from the [official repo](https://github.com/Vision-CAIR/MiniGPT-4/tree/main). The config files of these models are only for inference. We haven't reproduce the training results.*
 
 ## Citation
```
```diff
@@ -48,7 +48,7 @@ python tools/test.py configs/otter/otter-9b_caption.py https://download.openmmla
 | :---------------------------- | :--------: | :------: | :------: | :---------------------------: | :------------------------------------------------------------------------------------------------------: |
 | `otter-9b_3rdparty_caption`\* | 8220.45 | Upcoming | Upcoming | [config](otter-9b_caption.py) | [model](https://download.openmmlab.com/mmclassification/v1/otter/otter-9b-adapter_20230613-51c5be8d.pth) |
 
-*Models with * are converted from the [official repo](https://github.com/Luodian/Otter/tree/main). The config files of these models are only for inference. We haven't reprodcue the training results.*
+*Models with * are converted from the [official repo](https://github.com/Luodian/Otter/tree/main). The config files of these models are only for inference. We haven't reproduce the training results.*
 
 ### Visual Question Answering on VQAv2
```
```diff
@@ -56,7 +56,7 @@ python tools/test.py configs/otter/otter-9b_caption.py https://download.openmmla
 | :------------------------ | :--------: | :------: | :-----------------------: | :------------------------------------------------------------------------------------------------------: |
 | `otter-9b_3rdparty_vqa`\* | 8220.45 | Upcoming | [config](otter-9b_vqa.py) | [model](https://download.openmmlab.com/mmclassification/v1/otter/otter-9b-adapter_20230613-51c5be8d.pth) |
 
-*Models with * are converted from the [official repo](https://github.com/Luodian/Otter/tree/main). The config files of these models are only for inference. We haven't reprodcue the training results.*
+*Models with * are converted from the [official repo](https://github.com/Luodian/Otter/tree/main). The config files of these models are only for inference. We haven't reproduce the training results.*
 
 ## Citation
```
```diff
@@ -163,7 +163,7 @@ class BlipRetrieval(BaseModel):
         ]
         self.copy_params()
 
-        # multimodal backone shares weights with text backbone in BLIP
+        # multimodal backbone shares weights with text backbone in BLIP
         # No need to set up
 
         # Notice that this topk is used for select k candidate to compute
```
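The `self.copy_params()` call in the hunk above follows the momentum-encoder pattern used in BLIP-style retrieval models: each momentum branch starts as an exact copy of its online counterpart and is thereafter updated by exponential moving average rather than by gradients. A minimal, framework-free sketch of that idea (class and method names here are illustrative stand-ins, not mmpretrain's actual API):

```python
class MomentumPair:
    """Toy online/momentum weight pair sketching the copy_params /
    momentum_update pattern (illustrative only, not mmpretrain code)."""

    def __init__(self, params, momentum=0.995):
        self.params = list(params)   # "online" weights, trained by gradients
        self.momentum = momentum
        self.params_m = []           # momentum ("slow") weights

    def copy_params(self):
        # Initialize the momentum branch as an exact copy of the online branch.
        self.params_m = list(self.params)

    def momentum_update(self):
        # EMA update: p_m <- m * p_m + (1 - m) * p
        m = self.momentum
        self.params_m = [m * pm + (1 - m) * p
                         for pm, p in zip(self.params_m, self.params)]


pair = MomentumPair([1.0, 2.0])
pair.copy_params()       # momentum branch now equals the online branch
pair.params[0] = 3.0     # pretend a training step changed an online weight
pair.momentum_update()   # momentum weight drifts slowly toward the new value
```

In the real model the copied objects are `nn.Module` parameter tensors and the momentum copies have `requires_grad` disabled, but the update rule is the same.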
```diff
@@ -39,7 +39,7 @@ class SimMIMSwinTransformer(SwinTransformer):
             freeze running stats (mean and var). Note: Effect on Batch Norm
             and its variants only. Defaults to False.
         norm_cfg (dict): Config dict for normalization layer at end
-            of backone. Defaults to dict(type='LN')
+            of backbone. Defaults to dict(type='LN')
         stage_cfgs (Sequence | dict): Extra config dict for each
             stage. Defaults to empty dict.
         patch_cfg (dict): Extra config dict for patch embedding.
```
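The `norm_cfg=dict(type='LN')` convention in the docstring above is the usual mm-series pattern: a config dict whose `type` key selects a layer class through a registry, with the remaining keys passed to the constructor. A simplified sketch of that dispatch (the registry and `build_norm_layer` here are hedged stand-ins, not mmpretrain's real implementation, which uses mmengine's `Registry` and `nn.Module` classes):

```python
# Minimal stand-in for the mm-style registry pattern behind norm_cfg
# (illustrative only).
NORM_LAYERS = {}

def register(name):
    def deco(cls):
        NORM_LAYERS[name] = cls
        return cls
    return deco

@register('LN')
class LayerNormStub:
    def __init__(self, dims):
        self.dims = dims

@register('BN')
class BatchNormStub:
    def __init__(self, dims):
        self.dims = dims

def build_norm_layer(cfg, dims):
    # Pop 'type' to pick the class; remaining keys become constructor kwargs.
    cfg = dict(cfg)
    layer_cls = NORM_LAYERS[cfg.pop('type')]
    return layer_cls(dims, **cfg)

layer = build_norm_layer(dict(type='LN'), dims=768)
```

This is why the docstring documents the default as `dict(type='LN')` rather than a class: configs stay serializable, and swapping the normalization layer is a one-key change.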