- **`-o, --output-dir`**: The output path for visualized images. If not specified, it will be set to `''`, which means the images will not be saved.
- **`-p, --phase`**: The phase of the dataset to visualize; must be one of `['train', 'val', 'test']`. If not specified, it will be set to `'train'`.
- **`-n, --show-number`**: The number of samples to visualize. If not specified, all images in the dataset will be displayed.
- **`--show-interval`**: The display interval between images, in seconds.
- **`-m, --mode`**: The display mode; must be one of `['original', 'transformed', 'concat', 'pipeline']`. If not specified, it will be set to `'transformed'`.
- **`-r, --rescale-factor`**: The image rescale factor, which is useful if the output is too large or too small.
- **`-c, --channel-order`**: The channel order of the displayed images, either `BGR` or `RGB`. If not specified, it will be set to `'BGR'`.
- **`--cfg-options`**: Modifications to the configuration file, refer to [Learn about Configs](./config.md).
2. The `-r, --rescale-factor` option is used when the label information appears too large or too small relative to the image. For example, when visualizing the CIFAR dataset, since the resolution of its images is very small (32×32), `--rescale-factor` can be set to 10.
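As a side note on the `-c, --channel-order` option: converting between the two orders is simply a reversal of the channel axis. A minimal NumPy sketch of the idea (not code from the tool itself):

```python
import numpy as np

# A 1x2 "image" with 3 channels per pixel, stored in BGR order:
# one pure-blue pixel and one pure-red pixel.
bgr = np.array([[[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)

# Reversing the last (channel) axis converts BGR -> RGB, and vice versa.
rgb = bgr[..., ::-1]

print(rgb[0, 0].tolist())  # the blue pixel becomes [0, 0, 255] in RGB
```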
- **`-p, --parameter`**: The parameter whose change curve to visualize; choose from `"lr"` and `"momentum"`. Defaults to `"lr"`.
- **`-d, --dataset-size`**: The size of the dataset. If set, `build_dataset` will be skipped and `${DATASET_SIZE}` will be used as the size. If not set, the size is obtained via the `build_dataset` function.
- **`-n, --ngpus`**: The number of GPUs used in training. Defaults to 1.
- **`-s, --save-path`**: The save path of the learning rate curve plot. If not specified, the plot will not be saved.
- **`--title`**: The title of the figure. If not set, it defaults to the config file name.
- **`--style`**: The style of the plot. If not set, it defaults to `whitegrid`.
- **`--window-size`**: The shape of the display window. If not specified, it will be set to `12*7`. If used, it must be in the format `'W*H'`.
- **`--cfg-options`**: Modifications to the configuration file, refer to [Learn about Configs](./config.md).
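The `'W*H'` format for `--window-size` is just two integers separated by `*`. A hypothetical sketch of such parsing (the tool's actual implementation may differ):

```python
def parse_window_size(value: str) -> tuple:
    """Parse a window size given in the 'W*H' format, e.g. '12*7'."""
    parts = value.split('*')
    if len(parts) != 2:
        raise ValueError(f"window size must be in 'W*H' format, got {value!r}")
    width, height = (int(p) for p in parts)
    return width, height

print(parse_window_size('12*7'))  # (12, 7)
```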
MMClassification provides the `tools/visualizations/vis_cam.py` tool to visualize class activation maps (CAM). Please use the `pip install "grad-cam>=1.3.6"` command to install [pytorch-grad-cam](https://github.com/jacobgil/pytorch-grad-cam).
- **`--target-layers`**: The target layers to get activation maps from; one or more network layers can be specified. If not set, the norm layer of the last block is used.
- **`--preview-model`**: Whether to print all network layer names in the model.
- **`--method`**: The visualization method; supports `GradCAM`, `GradCAM++`, `XGradCAM`, `EigenCAM`, `EigenGradCAM` and `LayerCAM`, case-insensitively. Defaults to `GradCAM`.
- **`--target-category`**: The target category. If not set, the category predicted by the given model is used.
- **`--save-path`**: The path to save the CAM visualization image. If not set, the CAM image will not be saved.
- **`--vit-like`**: Whether the network is a ViT-like network.
- **`--num-extra-tokens`**: The number of extra tokens in ViT-like backbones. If not set, the `num_extra_tokens` attribute of the backbone is used.
- **`--aug_smooth`**: Whether to use TTA (test-time augmentation) when computing the CAM.
- **`--eigen_smooth`**: Whether to use the principal component to reduce noise.
- **`--device`**: The computing device to use. Defaults to `'cpu'`.
The `--preview-model` argument prints all network layer names in the given model. This is helpful if you don't know the model's layers when setting `--target-layers`.
```
**Examples (CNN)**:
Here are some examples of `target-layers` in ResNet-50, which can be any module or layer:
- `'backbone.layer4'` means the output of the fourth ResLayer.
- `'backbone.layer4.2'` means the output of the third Bottleneck block in the fourth ResLayer.
- `'backbone.layer4.2.conv1'` means the output of the `conv1` layer in the above Bottleneck block.
```{note}
For `ModuleList` or `Sequential`, you can also use the index to specify which sub-module is the target layer.
For example, `backbone.layer4[-1]` is the same as `backbone.layer4.2`, since `layer4` is a `Sequential` with three sub-modules.
```
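The indexing behavior described in the note can be illustrated with a minimal stand-in for a sequential container (a simplified sketch, not the real `torch.nn.Sequential`):

```python
class TinySequential:
    """Minimal container mimicking index-based access on torch.nn.Sequential."""

    def __init__(self, *modules):
        self._modules = list(modules)

    def __getitem__(self, idx):
        return self._modules[idx]

# A stand-in for `backbone.layer4` with three sub-modules (blocks).
layer4 = TinySequential('block0', 'block1', 'block2')

# Negative indexing works as usual: layer4[-1] is the last block,
# i.e. the same sub-module as layer4[2] (written `backbone.layer4.2`
# in the dotted layer-name syntax).
assert layer4[-1] is layer4[2]
```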
1. Use different methods to visualize the CAM for `ResNet50`; the `target-category` is the result predicted by the given checkpoint, using the default `target-layers`.
2. Use different `target-category` values to get CAMs from the same picture. In the `ImageNet` dataset, category 238 is 'Greater Swiss Mountain dog' and category 281 is 'tabby, tabby cat'.
For ViT-like networks, such as ViT, T2T-ViT and Swin-Transformer, the features are flattened. To draw the CAM, we need to specify the `--vit-like` argument to reshape the features into square feature maps.
Besides the flattened features, some ViT-like networks also add extra tokens, like the class token in ViT and T2T-ViT, and the distillation token in DeiT. In these networks, the final classification is done on the tokens computed in the last attention block; therefore, the classification score is not affected by the other features, and the gradient of the classification score with respect to them is zero. As a result, you shouldn't use the output of the last attention block as the target layer in these networks.
To exclude these extra tokens, we need to know their number. Almost all transformer-based backbones in MMClassification have the `num_extra_tokens` attribute. If you want to use this tool with a new or third-party network that doesn't have the `num_extra_tokens` attribute, please specify it via the `--num-extra-tokens` argument.
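The reshaping described above can be sketched in plain Python: drop the extra tokens at the front of the flattened sequence, then fold the remaining patch tokens back into a square grid. This is an illustrative sketch, not the tool's actual implementation:

```python
import math

def reshape_patch_tokens(tokens, num_extra_tokens):
    """Drop the leading extra tokens (e.g. the class token) and reshape
    the remaining flattened patch tokens into a square H x W grid."""
    patches = tokens[num_extra_tokens:]
    side = math.isqrt(len(patches))
    if side * side != len(patches):
        raise ValueError('patch tokens do not form a square grid')
    return [patches[i * side:(i + 1) * side] for i in range(side)]

# 1 class token followed by 4 patch tokens -> a 2x2 feature map.
feature_map = reshape_patch_tokens(['cls', 'p0', 'p1', 'p2', 'p3'],
                                   num_extra_tokens=1)
print(feature_map)  # [['p0', 'p1'], ['p2', 'p3']]
```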
1. Visualize CAM for `Swin Transformer`, using the default `target-layers`: