# Visualization

Visualization can give an intuitive interpretation of the performance of the model.

## How visualization is implemented

It is recommended to learn the basic concept of visualization in engine.md.

OpenMMLab 2.0 introduces the visualization object `Visualizer` and several visualization backends `VisBackend`. The diagram below shows the relationship between `Visualizer` and `VisBackend`.
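As a quick illustration of that relationship, a `Visualizer` can be created with one or more backends, and every `add_xxx()` call is forwarded to each configured backend. The snippet below is a minimal sketch using MMEngine directly; the backend list and `save_dir` are arbitrary example values, and the TensorBoard backend assumes TensorBoard is installed.

```python
from mmengine.visualization import Visualizer

# Build a visualizer with two storage backends; each add_xxx() call below is
# written by both of them. 'work_dirs/vis_demo' is just an example directory.
visualizer = Visualizer(
    vis_backends=[
        dict(type='LocalVisBackend'),
        dict(type='TensorboardVisBackend'),  # requires tensorboard installed
    ],
    save_dir='work_dirs/vis_demo')

visualizer.add_scalar('loss', 0.23, step=1)
visualizer.add_scalars({'loss': 0.21, 'lr': 1e-3}, step=2)
```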

## What Visualization does in MMSelfSup

(1) Save training data using different storage backends

The backends in MMEngine include `LocalVisBackend`, `TensorboardVisBackend` and `WandbVisBackend`.

During training, `after_train_iter()` in the default hook `LoggerHook` is called, which uses `add_scalars()` to write scalars to the different backends, as follows:

```python
...
def after_train_iter(...):
    ...
    runner.visualizer.add_scalars(
        tag, step=runner.iter + 1, file_path=self.json_log_path)
...
```
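How often these scalars are written follows the logger hook settings in the config. As a typical MMEngine-style snippet (the interval value here is arbitrary):

```python
# Log, and therefore visualize, training scalars every 50 iterations.
default_hooks = dict(
    logger=dict(type='LoggerHook', interval=50))
```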

(2) Browse dataset

The function `add_datasample()` is implemented in `SelfSupVisualizer`, and it is mainly used in browse_dataset.py for browsing datasets. A more detailed tutorial is in analysis_tools.md.

## Use Different Storage Backends

If you want to use a different backend (Wandb, Tensorboard, or a custom backend with a remote window), just change the `vis_backends` in the config, as follows:

**Local**

```python
vis_backends = [dict(type='LocalVisBackend')]
```

**Tensorboard**

```python
vis_backends = [dict(type='TensorboardVisBackend')]
visualizer = dict(
    type='SelfSupVisualizer', vis_backends=vis_backends, name='visualizer')
```
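With TensorBoard installed, you can then point it at the working directory to browse the logged curves. MMEngine typically writes the event files under a `vis_data` folder inside the timestamped run directory, so pointing TensorBoard at the work directory (an example path below) is usually enough:

```shell
tensorboard --logdir work_dirs/simsiam_resnet50_8xb32-coslr-100e_in1k
```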

**Wandb**

```python
vis_backends = [dict(type='WandbVisBackend')]
visualizer = dict(
    type='SelfSupVisualizer', vis_backends=vis_backends, name='visualizer')
```

Note that when multiple visualization backends exist in `vis_backends`, only `WandbVisBackend` is valid.
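If you need to customize the Wandb run itself (for example the project name), `WandbVisBackend` accepts an `init_kwargs` dict that is passed to `wandb.init()`. The project and entity names below are only placeholders:

```python
vis_backends = [
    dict(
        type='WandbVisBackend',
        # placeholder project/entity names, replace with your own
        init_kwargs=dict(project='mmselfsup', entity='my-team'))
]
visualizer = dict(
    type='SelfSupVisualizer', vis_backends=vis_backends, name='visualizer')
```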

## Customize Visualization

The customization of the visualization is similar to other components. If you want to customize `Visualizer`, `VisBackend` or `VisualizationHook`, you can refer to the Visualization Doc in MMEngine.
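For instance, a custom storage backend only needs to subclass `BaseVisBackend`, implement the methods it cares about, and register itself so that it can be referenced from `vis_backends` by name. The sketch below is a toy example under that assumption; the class name and output format are made up.

```python
import os

from mmengine.registry import VISBACKENDS
from mmengine.visualization import BaseVisBackend


@VISBACKENDS.register_module()
class PlainTextVisBackend(BaseVisBackend):
    """Toy backend that appends scalars to a text file under ``save_dir``."""

    @property
    def experiment(self):
        # Return the object that actually records data; here the backend itself.
        return self

    def add_scalar(self, name, value, step=0, **kwargs):
        os.makedirs(self._save_dir, exist_ok=True)
        with open(os.path.join(self._save_dir, 'scalars.txt'), 'a') as f:
            f.write(f'{step} {name} {value}\n')
```

It can then be used in a config with `vis_backends = [dict(type='PlainTextVisBackend')]`.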

## Visualize Datasets

`tools/misc/browse_dataset.py` helps the user browse an MMSelfSup dataset (transformed images) visually, or save the images to a designated directory.

```shell
python tools/misc/browse_dataset.py ${CONFIG} [-h] [--skip-type ${SKIP_TYPE[SKIP_TYPE...]}] [--output-dir ${OUTPUT_DIR}] [--not-show] [--show-interval ${SHOW_INTERVAL}]
```

An example:

```shell
python tools/misc/browse_dataset.py configs/selfsup/simsiam/simsiam_resnet50_8xb32-coslr-100e_in1k.py
```
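To dump the transformed images to disk instead of popping up windows, combine the flags listed above, e.g. (the output directory name is arbitrary):

```shell
python tools/misc/browse_dataset.py configs/selfsup/simsiam/simsiam_resnet50_8xb32-coslr-100e_in1k.py \
    --output-dir tmp_vis/ --not-show
```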

An example of visualization:

- The left two pictures are images from the contrastive learning data pipeline.
- The right one is a masked image.

## Visualize t-SNE

We provide an off-the-shelf tool to visualize the quality of image representations by t-SNE.

```shell
python tools/analysis_tools/visualize_tsne.py ${CONFIG_FILE} --checkpoint ${CKPT_PATH} --work-dir ${WORK_DIR} [optional arguments]
```

Arguments:

- `CONFIG_FILE`: config file for t-SNE; the available configs are listed in the directory `configs/tsne/`.
- `CKPT_PATH`: the path or link of the model's checkpoint.
- `WORK_DIR`: the directory to save the results of the visualization.
- `[optional arguments]`: for optional arguments, you can refer to visualize_tsne.py.

An example command:

```shell
python ./tools/analysis_tools/visualize_tsne.py \
    configs/tsne/resnet50_imagenet.py \
    --checkpoint https://download.openmmlab.com/mmselfsup/1.x/mocov2/mocov2_resnet50_8xb32-coslr-200e_in1k/mocov2_resnet50_8xb32-coslr-200e_in1k_20220825-b6d23c86.pth \
    --work-dir  ./work_dirs/tsne/mocov2/ \
    --max-num-class 100
```

An example of the visualization, where the left image is from MoCoV2_ResNet50 and the right one is from MAE_ViT-base:

## Visualize Low-level Feature Reconstruction

We provide reconstruction visualization for the following algorithms:

- MAE
- SimMIM
- MaskFeat

Users can run the command below to visualize the reconstruction.

```shell
python tools/analysis_tools/visualize_reconstruction.py ${CONFIG_FILE} \
    --checkpoint ${CKPT_PATH} \
    --img-path ${IMAGE_PATH} \
    --out-file ${OUTPUT_PATH}
```

Arguments:

- `CONFIG_FILE`: config file for the pre-trained model.
- `CKPT_PATH`: the path of the model's checkpoint.
- `IMAGE_PATH`: the input image path.
- `OUTPUT_PATH`: the output image path, including 4 sub-images.
- `[optional arguments]`: for optional arguments, you can refer to visualize_reconstruction.py.

An example:

```shell
python tools/analysis_tools/visualize_reconstruction.py configs/selfsup/mae/mae_vit-huge-p16_8xb512-amp-coslr-1600e_in1k.py \
    --checkpoint https://download.openmmlab.com/mmselfsup/1.x/mae/mae_vit-huge-p16_8xb512-fp16-coslr-1600e_in1k/mae_vit-huge-p16_8xb512-fp16-coslr-1600e_in1k_20220916-ff848775.pth \
    --img-path data/imagenet/val/ILSVRC2012_val_00000003.JPEG \
    --out-file test_mae.jpg \
    --norm-pix


# As for SimMIM, it generates the mask in the data pipeline, thus we use '--use-vis-pipeline' to apply the 'vis_pipeline' defined in the config instead of the pipeline defined in the script.
python tools/analysis_tools/visualize_reconstruction.py configs/selfsup/simmim/simmim_swin-large_16xb128-amp-coslr-800e_in1k-192.py \
    --checkpoint https://download.openmmlab.com/mmselfsup/1.x/simmim/simmim_swin-large_16xb128-amp-coslr-800e_in1k-192/simmim_swin-large_16xb128-amp-coslr-800e_in1k-192_20220916-4ad216d3.pth \
    --img-path data/imagenet/val/ILSVRC2012_val_00000003.JPEG \
    --out-file test_simmim.jpg \
    --use-vis-pipeline
```

Results of MAE:

Results of SimMIM:

Results of MaskFeat: