torchreid.utils¶
Average Meter¶
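A minimal usage sketch, assuming the conventional AverageMeter interface with an update(val, n=1) method and an avg attribute (loss_value and batch_size are placeholders):
>>> from torchreid.utils import AverageMeter
>>> losses = AverageMeter()
>>> losses.update(loss_value, batch_size)  # accumulate a batch-weighted running average
>>> print(losses.avg)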
Loggers¶
class torchreid.utils.loggers.Logger(fpath=None)[source]¶
Writes console output to an external text file.
Imported from https://github.com/Cysu/open-reid/blob/master/reid/utils/logging.py
Parameters: fpath (str) – path to the logging file.
Examples::
>>> import sys
>>> import os
>>> import os.path as osp
>>> from torchreid.utils import Logger
>>> save_dir = 'log/resnet50-softmax-market1501'
>>> log_name = 'train.log'
>>> sys.stdout = Logger(osp.join(save_dir, log_name))
class torchreid.utils.loggers.RankLogger(sources, targets)[source]¶
Records the rank1 matching accuracy obtained for each test dataset at specified evaluation steps and provides a function to show the summarized results, which are convenient for analysis.
Parameters:
- sources (str or list) – source dataset name(s).
- targets (str or list) – target dataset name(s).
Examples::
>>> from torchreid.utils import RankLogger
>>> s = 'market1501'
>>> t = 'market1501'
>>> ranklogger = RankLogger(s, t)
>>> ranklogger.write(t, 10, 0.5)
>>> ranklogger.write(t, 20, 0.7)
>>> ranklogger.write(t, 30, 0.9)
>>> ranklogger.show_summary()
>>> # You will see:
>>> # => Show performance summary
>>> # market1501 (source)
>>> # - epoch 10 rank1 50.0%
>>> # - epoch 20 rank1 70.0%
>>> # - epoch 30 rank1 90.0%
>>> # If there are multiple test datasets
>>> t = ['market1501', 'dukemtmcreid']
>>> ranklogger = RankLogger(s, t)
>>> ranklogger.write(t[0], 10, 0.5)
>>> ranklogger.write(t[0], 20, 0.7)
>>> ranklogger.write(t[0], 30, 0.9)
>>> ranklogger.write(t[1], 10, 0.1)
>>> ranklogger.write(t[1], 20, 0.2)
>>> ranklogger.write(t[1], 30, 0.3)
>>> ranklogger.show_summary()
>>> # You will see:
>>> # => Show performance summary
>>> # market1501 (source)
>>> # - epoch 10 rank1 50.0%
>>> # - epoch 20 rank1 70.0%
>>> # - epoch 30 rank1 90.0%
>>> # dukemtmcreid (target)
>>> # - epoch 10 rank1 10.0%
>>> # - epoch 20 rank1 20.0%
>>> # - epoch 30 rank1 30.0%
Generic Tools¶
torchreid.utils.tools.check_isfile(fpath)[source]¶
Checks if the given path is a file.
Parameters: fpath (str) – file path.
Returns: bool
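A minimal usage sketch (the checkpoint path here is only illustrative):
>>> from torchreid.utils import check_isfile
>>> check_isfile('log/my_model/model.pth.tar-10')  # True if this path points to an existing file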
torchreid.utils.tools.download_url(url, dst)[source]¶
Downloads a file from a url to a destination.
Parameters:
- url (str) – url to download the file from.
- dst (str) – destination path.
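A minimal usage sketch (the URL and destination path are hypothetical):
>>> from torchreid.utils import download_url
>>> url = 'http://example.com/pretrained/resnet50.pth.tar'
>>> download_url(url, 'log/pretrained/resnet50.pth.tar')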
ReID Tools¶
torchreid.utils.reidtools.visualize_ranked_results(distmat, dataset, save_dir='', topk=20)[source]¶
Visualizes ranked results.
Supports both image-reid and video-reid.
Parameters:
- distmat (numpy.ndarray) – distance matrix of shape (num_query, num_gallery).
- dataset (tuple) – a 2-tuple containing (query, gallery), each of which contains tuples of (img_path(s), pid, camid).
- save_dir (str) – directory to save output images.
- topk (int, optional) – number of top-ranked images in the rank list to visualize. Default is 20.
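A minimal usage sketch, assuming a precomputed distance matrix and the query/gallery lists described above:
>>> from torchreid.utils import visualize_ranked_results
>>> # distmat: (num_query, num_gallery) distance matrix computed beforehand
>>> # query/gallery: lists of (img_path(s), pid, camid) tuples
>>> visualize_ranked_results(distmat, (query, gallery), save_dir='log/ranked_results', topk=10)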
Torch Tools¶
torchreid.utils.torchtools.save_checkpoint(state, save_dir, is_best=False, remove_module_from_keys=False)[source]¶
Saves checkpoint.
Parameters:
- state (dict) – dictionary containing the model state_dict and related information, e.g. epoch, rank1 and the optimizer state_dict.
- save_dir (str) – directory to save checkpoint.
- is_best (bool, optional) – if True, this checkpoint will be copied and named model-best.pth.tar. Default is False.
- remove_module_from_keys (bool, optional) – whether to remove “module.” from layer names. Default is False.
Examples::
>>> state = {
>>>     'state_dict': model.state_dict(),
>>>     'epoch': 10,
>>>     'rank1': 0.5,
>>>     'optimizer': optimizer.state_dict()
>>> }
>>> save_checkpoint(state, 'log/my_model')
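To additionally keep a copy of the best-performing checkpoint as model-best.pth.tar (per the is_best parameter above), a sketch would be:
>>> save_checkpoint(state, 'log/my_model', is_best=True)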
torchreid.utils.torchtools.load_checkpoint(fpath)[source]¶
Loads checkpoint.
UnicodeDecodeError is handled, which means files saved with Python 2 can be read in Python 3.
Parameters: fpath (str) – path to checkpoint.
Returns: dict
Examples::
>>> from torchreid.utils import load_checkpoint
>>> fpath = 'log/my_model/model.pth.tar-10'
>>> checkpoint = load_checkpoint(fpath)
torchreid.utils.torchtools.resume_from_checkpoint(fpath, model, optimizer=None)[source]¶
Resumes training from a checkpoint.
This will load (1) the model weights and (2) the optimizer's state_dict if optimizer is not None.
Parameters:
- fpath (str) – path to checkpoint.
- model (nn.Module) – model.
- optimizer (Optimizer, optional) – an Optimizer.
Returns: start_epoch.
Return type: int
Examples::
>>> from torchreid.utils import resume_from_checkpoint
>>> fpath = 'log/my_model/model.pth.tar-10'
>>> start_epoch = resume_from_checkpoint(fpath, model, optimizer)
torchreid.utils.torchtools.open_all_layers(model)[source]¶
Opens all layers in model for training.
Examples::
>>> from torchreid.utils import open_all_layers
>>> open_all_layers(model)
torchreid.utils.torchtools.open_specified_layers(model, open_layers)[source]¶
Opens specified layers in model for training while keeping other layers frozen.
Parameters:
- model (nn.Module) – neural net model.
- open_layers (str or list) – layers open for training.
Examples::
>>> from torchreid.utils import open_specified_layers
>>> # Only model.classifier will be updated.
>>> open_layers = 'classifier'
>>> open_specified_layers(model, open_layers)
>>> # Only model.fc and model.classifier will be updated.
>>> open_layers = ['fc', 'classifier']
>>> open_specified_layers(model, open_layers)
torchreid.utils.torchtools.count_num_param(model)[source]¶
Counts number of parameters in a model while ignoring self.classifier.
Parameters: model (nn.Module) – network model.
Examples::
>>> from torchreid.utils import count_num_param
>>> model_size = count_num_param(model)
Warning
This method is deprecated in favor of torchreid.utils.compute_model_complexity.
torchreid.utils.torchtools.load_pretrained_weights(model, weight_path)[source]¶
Loads pretrained weights to model.
Features::
- Incompatible layers (unmatched in name or size) will be ignored.
- Can automatically deal with keys containing “module.”.
Parameters:
- model (nn.Module) – network model.
- weight_path (str) – path to pretrained weights.
Examples::
>>> from torchreid.utils import load_pretrained_weights
>>> weight_path = 'log/my_model/model-best.pth.tar'
>>> load_pretrained_weights(model, weight_path)
torchreid.utils.model_complexity.compute_model_complexity(model, input_size, verbose=False, only_conv_linear=True)[source]¶
Returns the number of parameters and FLOPs.
Note
(1) This function only provides an estimate of the theoretical time complexity rather than the actual running time, which depends on implementations and hardware. (2) FLOPs are only counted for layers that are used at test time; redundant layers such as the person ID classification layer are ignored because they are discarded when doing feature extraction. Note that the inference graph depends on how you construct the computations in forward().
Parameters:
- model (nn.Module) – network model.
- input_size (tuple) – input size, e.g. (1, 3, 256, 128).
- verbose (bool, optional) – shows detailed complexity of each module. Default is False.
- only_conv_linear (bool, optional) – only considers convolution and linear layers when counting FLOPs. Default is True. If set to False, the FLOPs of all layers will be counted.
Examples::
>>> from torchreid import models, utils
>>> model = models.build_model(name='resnet50', num_classes=1000)
>>> num_params, flops = utils.compute_model_complexity(model, (1, 3, 256, 128), verbose=True)