torchreid.utils

Average Meter

class torchreid.utils.avgmeter.AverageMeter[source]

Computes and stores the average and current value.

Examples::
>>> # Initialize a meter to record loss
>>> losses = AverageMeter()
>>> # Update meter after every minibatch update
>>> losses.update(loss_value, batch_size)
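>>> # Read the running statistics, e.g. for printing at the end of an epoch
>>> # (the meter keeps them in avg and val)
>>> avg_loss = losses.avg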

Loggers

class torchreid.utils.loggers.Logger(fpath=None)[source]

Writes console output to an external text file.

Imported from https://github.com/Cysu/open-reid/blob/master/reid/utils/logging.py

Parameters: fpath (str) – path to the log file.

Examples

>>> import sys
>>> import os
>>> import os.path as osp
>>> from torchreid.utils import Logger
>>> save_dir = 'log/resnet50-softmax-market1501'
>>> log_name = 'train.log'
>>> sys.stdout = Logger(osp.join(save_dir, log_name))
class torchreid.utils.loggers.RankLogger(sources, targets)[source]

Records the rank1 matching accuracy obtained for each test dataset at the specified evaluation steps, and provides a function to display a summary of the results for convenient analysis.

Parameters:
  • sources (str or list) – source dataset name(s).
  • targets (str or list) – target dataset name(s).
Examples::
>>> from torchreid.utils import RankLogger
>>> s = 'market1501'
>>> t = 'market1501'
>>> ranklogger = RankLogger(s, t)
>>> ranklogger.write(t, 10, 0.5)
>>> ranklogger.write(t, 20, 0.7)
>>> ranklogger.write(t, 30, 0.9)
>>> ranklogger.show_summary()
>>> # You will see:
>>> # => Show performance summary
>>> # market1501 (source)
>>> # - epoch 10   rank1 50.0%
>>> # - epoch 20   rank1 70.0%
>>> # - epoch 30   rank1 90.0%
>>> # If there are multiple test datasets
>>> t = ['market1501', 'dukemtmcreid']
>>> ranklogger = RankLogger(s, t)
>>> ranklogger.write(t[0], 10, 0.5)
>>> ranklogger.write(t[0], 20, 0.7)
>>> ranklogger.write(t[0], 30, 0.9)
>>> ranklogger.write(t[1], 10, 0.1)
>>> ranklogger.write(t[1], 20, 0.2)
>>> ranklogger.write(t[1], 30, 0.3)
>>> ranklogger.show_summary()
>>> # You can see:
>>> # => Show performance summary
>>> # market1501 (source)
>>> # - epoch 10   rank1 50.0%
>>> # - epoch 20   rank1 70.0%
>>> # - epoch 30   rank1 90.0%
>>> # dukemtmcreid (target)
>>> # - epoch 10   rank1 10.0%
>>> # - epoch 20   rank1 20.0%
>>> # - epoch 30   rank1 30.0%
show_summary()[source]

Shows saved results.

write(name, epoch, rank1)[source]

Writes result.

Parameters:
  • name (str) – dataset name.
  • epoch (int) – current epoch.
  • rank1 (float) – rank1 result.

Generic Tools

torchreid.utils.tools.mkdir_if_missing(dirname)[source]

Creates dirname if it is missing.
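
A minimal usage sketch; the directory name is only illustrative::
>>> from torchreid.utils import mkdir_if_missing
>>> mkdir_if_missing('log/my_experiment')
>>> # nested directories are created; nothing happens if the path already exists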

torchreid.utils.tools.check_isfile(fpath)[source]

Checks if the given path is a file.

Parameters: fpath (str) – file path.
Returns: bool
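A usage sketch; the checkpoint path is illustrative::
>>> from torchreid.utils import check_isfile
>>> if check_isfile('log/my_model/model.pth.tar-10'):
>>>     print('file exists')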
torchreid.utils.tools.read_json(fpath)[source]

Reads json file from a path.

torchreid.utils.tools.write_json(obj, fpath)[source]

Writes to a json file.
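
A small round-trip sketch; the dictionary and file path are illustrative::
>>> from torchreid.utils import write_json, read_json
>>> stats = {'epoch': 10, 'rank1': 0.5}
>>> write_json(stats, 'log/stats.json')
>>> stats = read_json('log/stats.json')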

torchreid.utils.tools.download_url(url, dst)[source]

Downloads a file from a URL to a destination path.

Parameters:
  • url (str) – URL of the file to download.
  • dst (str) – destination path.
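A usage sketch; the URL and destination are placeholders::
>>> from torchreid.utils import download_url
>>> url = 'http://example.com/resnet50_market.pth.tar'
>>> download_url(url, 'models/resnet50_market.pth.tar')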
torchreid.utils.tools.read_image(path)[source]

Reads image from path using PIL.Image.

Parameters: path (str) – path to an image.
Returns: PIL image
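A usage sketch; the image path is illustrative::
>>> from torchreid.utils import read_image
>>> img = read_image('data/market1501/query/0001_c1s1_001051_00.jpg')
>>> width, height = img.size  # standard PIL.Image attributes are available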

ReID Tools

torchreid.utils.reidtools.visualize_ranked_results(distmat, dataset, save_dir='', topk=20)[source]

Visualizes ranked results.

Supports both image-reid and video-reid.

Parameters:
  • distmat (numpy.ndarray) – distance matrix of shape (num_query, num_gallery).
  • dataset (tuple) – a 2-tuple containing (query, gallery), each of which contains tuples of (img_path(s), pid, camid).
  • save_dir (str) – directory to save output images.
  • topk (int, optional) – number of top-ranked images in the rank list to be visualized. Default is 20.
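A sketch of typical use in an evaluation script; distmat, query and gallery are assumed to come from your own test pipeline::
>>> from torchreid.utils import visualize_ranked_results
>>> # distmat: (num_query, num_gallery) numpy array computed by your evaluator
>>> # query/gallery: lists of (img_path(s), pid, camid) tuples from the dataset split
>>> visualize_ranked_results(distmat, (query, gallery), save_dir='log/visrank', topk=10)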

Torch Tools

torchreid.utils.torchtools.save_checkpoint(state, save_dir, is_best=False, remove_module_from_keys=False)[source]

Saves checkpoint.

Parameters:
  • state (dict) – dictionary.
  • save_dir (str) – directory to save checkpoint.
  • is_best (bool, optional) – if True, this checkpoint will be copied and named “model-best.pth.tar”. Default is False.
  • remove_module_from_keys (bool, optional) – whether to remove “module.” from layer names. Default is False.
Examples::
>>> state = {
>>>     'state_dict': model.state_dict(),
>>>     'epoch': 10,
>>>     'rank1': 0.5,
>>>     'optimizer': optimizer.state_dict()
>>> }
>>> save_checkpoint(state, 'log/my_model')
torchreid.utils.torchtools.load_checkpoint(fpath)[source]

Loads checkpoint.

UnicodeDecodeError is handled, so checkpoints saved with python2 can be loaded in python3.

Parameters: fpath (str) – path to checkpoint.
Returns: dict
Examples::
>>> from torchreid.utils import load_checkpoint
>>> fpath = 'log/my_model/model.pth.tar-10'
>>> checkpoint = load_checkpoint(fpath)
torchreid.utils.torchtools.resume_from_checkpoint(fpath, model, optimizer=None)[source]

Resumes training from a checkpoint.

This loads (1) the model weights and (2) the state_dict of the optimizer, if optimizer is not None.

Parameters:
  • fpath (str) – path to checkpoint.
  • model (nn.Module) – model.
  • optimizer (Optimizer, optional) – an Optimizer.
Returns: start_epoch.
Return type: int

Examples::
>>> from torchreid.utils import resume_from_checkpoint
>>> fpath = 'log/my_model/model.pth.tar-10'
>>> start_epoch = resume_from_checkpoint(fpath, model, optimizer)
torchreid.utils.torchtools.open_all_layers(model)[source]

Opens all layers in model for training.

Examples::
>>> from torchreid.utils import open_all_layers
>>> open_all_layers(model)
torchreid.utils.torchtools.open_specified_layers(model, open_layers)[source]

Opens specified layers in model for training while keeping other layers frozen.

Parameters:
  • model (nn.Module) – neural net model.
  • open_layers (str or list) – layers to be opened for training.
Examples::
>>> from torchreid.utils import open_specified_layers
>>> # Only model.classifier will be updated.
>>> open_layers = 'classifier'
>>> open_specified_layers(model, open_layers)
>>> # Only model.fc and model.classifier will be updated.
>>> open_layers = ['fc', 'classifier']
>>> open_specified_layers(model, open_layers)
torchreid.utils.torchtools.count_num_param(model)[source]

Counts number of parameters in a model.

Examples::
>>> from torchreid.utils import count_num_param
>>> model_size = count_num_param(model)
torchreid.utils.torchtools.load_pretrained_weights(model, weight_path)[source]

Loads pretrained weights into model.

Features::
  • Incompatible layers (unmatched in name or size) will be ignored.
  • Can automatically deal with keys containing “module.”.
Parameters:
  • model (nn.Module) – model.
  • weight_path (str) – path to pretrained weights.
Examples::
>>> from torchreid.utils import load_pretrained_weights
>>> weight_path = 'log/my_model/model-best.pth.tar'
>>> load_pretrained_weights(model, weight_path)