mmclassification/mmpretrain/models/heads/itc_head.py

# Copyright (c) OpenMMLab. All rights reserved.
from typing import Optional, Tuple

import torch
import torch.nn as nn
import torch.nn.functional as F
from mmengine.dist import all_gather
from mmengine.model import BaseModule

from mmpretrain.registry import MODELS


@MODELS.register_module()
class ITCHead(BaseModule):
    """Image-text contrastive head for multi-modal pre-training tasks,
    adapted from BLIP and ALBEF. Normally used for retrieval tasks.

    Args:
        embed_dim (int): Embed channel size for queue.
        queue_size (int): Queue size for image and text. Defaults to 57600.
        temperature (float): Temperature to calculate the similarity.
            Defaults to 0.07.
        use_distill (bool): Whether to use momentum distillation to calculate
            the loss. Defaults to True.
        alpha (float): Weight for the momentum similarity. Defaults to 0.4.
        init_cfg (dict, optional): The config to control the initialization.
            Defaults to None.
    """

    def __init__(self,
                 embed_dim: int,
                 queue_size: int = 57600,
                 temperature: float = 0.07,
                 use_distill: bool = True,
                 alpha: float = 0.4,
                 init_cfg: Optional[dict] = None):
        super(ITCHead, self).__init__(init_cfg=init_cfg)
        self.temp = nn.Parameter(temperature * torch.ones([]))
        self.use_distill = use_distill
        if self.use_distill:
            # create the feature and index queues
            self.register_buffer('image_queue',
                                 torch.randn(embed_dim, queue_size))
            self.register_buffer('text_queue',
                                 torch.randn(embed_dim, queue_size))
            self.register_buffer('idx_queue',
                                 torch.full((1, queue_size), -100))
            self.register_buffer('queue_ptr',
                                 torch.zeros(1, dtype=torch.long))

            self.image_queue = F.normalize(self.image_queue, dim=0)
            self.text_queue = F.normalize(self.text_queue, dim=0)

            self.queue_size = queue_size
            # This value will be warmed up by `WarmupParamHook`.
            self.alpha = alpha
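
    # Both feature queues are stored column-wise with shape
    # (embed_dim, queue_size), so each enqueued batch is written as a block
    # of columns in ``_dequeue_and_enqueue``. ``idx_queue`` is filled with
    # -100 so that slots that have not been written yet never match a real
    # ``image_id`` when building the positive mask in ``_get_loss``.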

    def forward(self, feats: Tuple[torch.Tensor]) -> torch.Tensor:
        """The forward process."""
        return feats[-1]

    def loss(self, feats: Tuple[torch.Tensor], data_samples, **kwargs) -> dict:
        """Calculate losses from the extracted features.

        Args:
            feats (tuple[Tensor]): The features extracted from the backbone.
                Multiple stage inputs are acceptable but only the last stage
                will be used. The shape of every item should be
                ``(num_samples, embed_dim)``.
            data_samples (List[ClsDataSample]): The annotation data of
                every sample.
            **kwargs: Other keyword arguments to forward the loss module.

        Returns:
            dict[str, Tensor]: A dictionary of loss components.
        """
        # The part can be traced by torch.fx
        img_feats, text_feats, img_feats_m, text_feats_m = self(feats)
        img_feats_all = torch.cat(
            [img_feats_m.t(), self.image_queue.clone().detach()], dim=1)
        text_feats_all = torch.cat(
            [text_feats_m.t(), self.text_queue.clone().detach()], dim=1)

        # The part can not be traced by torch.fx
        losses = self._get_loss(img_feats, text_feats, img_feats_m,
                                text_feats_m, img_feats_all, text_feats_all,
                                data_samples, **kwargs)
        return losses
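
    # ``forward`` returns ``feats[-1]``, so the last element of ``feats`` is
    # expected to unpack into (img_feats, text_feats, img_feats_m,
    # text_feats_m), where the ``*_m`` tensors come from the momentum
    # encoders. The queued features are detached above, so gradients only
    # flow through the in-batch features.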

    def _get_loss(self, img_feats, text_feats, img_feats_m, text_feats_m,
                  img_feats_all, text_feats_all, data_samples, **kwargs):
        """Unpack data samples and compute loss."""
        idx = torch.tensor([ds.image_id
                            for ds in data_samples]).to(img_feats.device)
        idx = idx.view(-1, 1)
        idx_all = torch.cat([idx.t(), self.idx_queue.clone().detach()], dim=1)
        pos_idx = torch.eq(idx, idx_all).float()
        sim_targets = pos_idx / pos_idx.sum(1, keepdim=True)

        with torch.no_grad():
            if self.use_distill:
                sim_i2t_m = img_feats_m @ text_feats_all / self.temp
                sim_t2i_m = text_feats_m @ img_feats_all / self.temp

                sim_i2t_targets = (
                    self.alpha * F.softmax(sim_i2t_m, dim=1) +
                    (1 - self.alpha) * sim_targets)
                sim_t2i_targets = (
                    self.alpha * F.softmax(sim_t2i_m, dim=1) +
                    (1 - self.alpha) * sim_targets)

        sim_i2t = img_feats @ text_feats_all / self.temp
        sim_t2i = text_feats @ img_feats_all / self.temp

        if self.use_distill:
            loss_i2t = -torch.sum(
                F.log_softmax(sim_i2t, dim=1) * sim_i2t_targets, dim=1).mean()
            loss_t2i = -torch.sum(
                F.log_softmax(sim_t2i, dim=1) * sim_t2i_targets, dim=1).mean()
        else:
            loss_i2t = -torch.sum(
                F.log_softmax(sim_i2t, dim=1) * sim_targets, dim=1).mean()
            loss_t2i = -torch.sum(
                F.log_softmax(sim_t2i, dim=1) * sim_targets, dim=1).mean()

        # compute loss
        losses = dict()
        losses['itc_loss'] = (loss_i2t + loss_t2i) / 2

        self._dequeue_and_enqueue(img_feats_m, text_feats_m, idx)
        return losses
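
    # The loss above is the soft-target contrastive objective used by
    # ALBEF/BLIP. With s_i2t = (img_feats @ text_feats_all) / temp, hard
    # targets y (``sim_targets``) and soft targets
    # q_i2t = alpha * softmax(s_i2t_m) + (1 - alpha) * y from the momentum
    # branch (symmetrically for t2i), the objective is
    #   itc_loss = -0.5 * (mean_i sum_j q_i2t[i, j] * log_softmax(s_i2t)[i, j]
    #                      + mean_i sum_j q_t2i[i, j] * log_softmax(s_t2i)[i, j])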

    @torch.no_grad()
    def _dequeue_and_enqueue(self, image_feat, text_feat, idxs=None):
        # gather keys before updating the queue
        image_feats = torch.cat(all_gather(image_feat))
        text_feats = torch.cat(all_gather(text_feat))

        batch_size = image_feats.shape[0]

        ptr = int(self.queue_ptr)
        assert self.queue_size % batch_size == 0  # for simplicity

        # replace the keys at ptr (dequeue and enqueue)
        self.image_queue[:, ptr:ptr + batch_size] = image_feats.T
        self.text_queue[:, ptr:ptr + batch_size] = text_feats.T

        if idxs is not None:
            idxs = torch.cat(all_gather(idxs))
            self.idx_queue[:, ptr:ptr + batch_size] = idxs.T

        ptr = (ptr + batch_size) % self.queue_size  # move pointer
        self.queue_ptr[0] = ptr
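

if __name__ == '__main__':
    # Minimal usage sketch, not part of the upstream module. It assumes a
    # single, non-distributed process (so ``all_gather`` simply returns its
    # input) and fakes data samples that only carry the ``image_id``
    # attribute read by ``_get_loss``.
    from types import SimpleNamespace

    embed_dim, batch_size = 256, 8
    head = ITCHead(embed_dim=embed_dim, queue_size=batch_size * 4)

    def rand_feat():
        # L2-normalized dummy features standing in for encoder outputs.
        return F.normalize(torch.randn(batch_size, embed_dim), dim=-1)

    # The last element of ``feats`` unpacks into
    # (img_feats, text_feats, img_feats_m, text_feats_m).
    feats = ((rand_feat(), rand_feat(), rand_feat(), rand_feat()), )
    data_samples = [SimpleNamespace(image_id=i) for i in range(batch_size)]

    losses = head.loss(feats, data_samples)
    print(losses['itc_loss'])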