<h2><a class="toc-backref" href="#id1">Find model keys</a><a class="headerlink" href="#find-model-keys" title="Permalink to this headline">¶</a></h2>
<p>Keys are listed under the <em>Public keys</em> section within each model class in <a class="reference internal" href="pkg/models.html#torchreid-models"><span class="std std-ref">torchreid.models</span></a>.</p>
</div>
<div class="section" id="show-available-models">
<h2><a class="toc-backref" href="#id2">Show available models</a><a class="headerlink" href="#show-available-models" title="Permalink to this headline">¶</a></h2>
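<p>As a minimal sketch (assuming the <code class="docutils literal notranslate"><span class="pre">torchreid</span></code> package is installed), the model zoo can be printed with:</p>

```python
import torchreid

# Print the keys of all models implemented in torchreid.models
torchreid.models.show_avai_models()
```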
<h2><a class="toc-backref" href="#id3">Change the training sampler</a><a class="headerlink" href="#change-the-training-sampler" title="Permalink to this headline">¶</a></h2>
<p>The default <code class="docutils literal notranslate"><span class="pre">train_sampler</span></code> is “RandomSampler”. You can pass a specific sampler name to <code class="docutils literal notranslate"><span class="pre">train_sampler</span></code>, e.g. <code class="docutils literal notranslate"><span class="pre">train_sampler='RandomIdentitySampler'</span></code> for triplet loss.</p>
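<p>A sketch of such a call (the root path and <code class="docutils literal notranslate"><span class="pre">num_instances</span></code> value below are illustrative):</p>

```python
import torchreid

# RandomIdentitySampler builds batches of P identities x K instances,
# which the triplet loss requires
datamanager = torchreid.data.ImageDataManager(
    root='reid-data',
    sources='market1501',
    train_sampler='RandomIdentitySampler',
    num_instances=4  # K instances per identity in each batch
)
```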
<h2><a class="toc-backref" href="#id4">Choose an optimizer/lr_scheduler</a><a class="headerlink" href="#choose-an-optimizer-lr-scheduler" title="Permalink to this headline">¶</a></h2>
<p>Please refer to the source code of <code class="docutils literal notranslate"><span class="pre">build_optimizer</span></code>/<code class="docutils literal notranslate"><span class="pre">build_lr_scheduler</span></code> in <a class="reference internal" href="pkg/optim.html#torchreid-optim"><span class="std std-ref">torchreid.optim</span></a> for details.</p>
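<p>A minimal sketch, assuming <code class="docutils literal notranslate"><span class="pre">model</span></code> was built beforehand with <code class="docutils literal notranslate"><span class="pre">torchreid.models.build_model</span></code>; the learning rate and step size are illustrative:</p>

```python
import torchreid

optimizer = torchreid.optim.build_optimizer(
    model,          # a model built with torchreid.models.build_model
    optim='adam',   # choose the optimizer by name
    lr=0.0003
)

scheduler = torchreid.optim.build_lr_scheduler(
    optimizer,
    lr_scheduler='single_step',  # choose the scheduler by name
    stepsize=20                  # decay the learning rate after 20 epochs
)
```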
</div>
<div class="section" id="resume-training">
<h2><a class="toc-backref" href="#id5">Resume training</a><a class="headerlink" href="#resume-training" title="Permalink to this headline">¶</a></h2>
<p>Suppose the checkpoint is saved at “log/resnet50/model.pth.tar-30”; you can do</p>
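<p>A sketch of resuming from that checkpoint, assuming <code class="docutils literal notranslate"><span class="pre">model</span></code>, <code class="docutils literal notranslate"><span class="pre">optimizer</span></code> and <code class="docutils literal notranslate"><span class="pre">engine</span></code> were created as in the earlier sections:</p>

```python
from torchreid.utils import resume_from_checkpoint

# Restore model (and optimizer) state; returns the epoch to resume from
start_epoch = resume_from_checkpoint(
    'log/resnet50/model.pth.tar-30', model, optimizer
)

engine.run(
    save_dir='log/resnet50',
    max_epoch=60,
    start_epoch=start_epoch
)
```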
<h2><a class="toc-backref" href="#id6">Compute model complexity</a><a class="headerlink" href="#compute-model-complexity" title="Permalink to this headline">¶</a></h2>
<p>We provide a tool in <code class="docutils literal notranslate"><span class="pre">torchreid.utils.model_complexity.py</span></code> to automatically compute the model complexity, i.e. the number of parameters and FLOPs.</p>
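<p>A sketch of using this tool, assuming <code class="docutils literal notranslate"><span class="pre">model</span></code> was built beforehand; the input size below matches the common 256×128 person-reid resolution:</p>

```python
from torchreid.utils import compute_model_complexity

# Input size is (batch, channels, height, width)
num_params, flops = compute_model_complexity(
    model, (1, 3, 256, 128), verbose=True
)
```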
<h2><a class="toc-backref" href="#id7">Combine multiple datasets</a><a class="headerlink" href="#combine-multiple-datasets" title="Permalink to this headline">¶</a></h2>
<p>Easy. Just give whatever datasets (keys) you want to the <code class="docutils literal notranslate"><span class="pre">sources</span></code> argument when instantiating a data manager. For example,</p>
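<p>A sketch of such a data manager (the root path and image sizes are illustrative; note that <code class="docutils literal notranslate"><span class="pre">targets</span></code> is deliberately left unspecified):</p>

```python
import torchreid

datamanager = torchreid.data.ImageDataManager(
    root='reid-data',
    sources=['market1501', 'dukemtmcreid', 'cuhk03', 'msmt17'],
    height=256,
    width=128
)
```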
<p>In this example, the target datasets are Market1501, DukeMTMC-reID, CUHK03 and MSMT17 as the <code class="docutils literal notranslate"><span class="pre">targets</span></code> argument is not specified. Please refer to <code class="docutils literal notranslate"><span class="pre">Engine.test()</span></code> in <a class="reference internal" href="pkg/engine.html#torchreid-engine"><span class="std std-ref">torchreid.engine</span></a> for details regarding how evaluation is performed.</p>
<h2><a class="toc-backref" href="#id8">Do cross-dataset evaluation</a><a class="headerlink" href="#do-cross-dataset-evaluation" title="Permalink to this headline">¶</a></h2>
<p>Easy. Just give whatever datasets (keys) you want to the argument <code class="docutils literal notranslate"><span class="pre">targets</span></code>, like</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="n">targets</span><span class="o">=</span><span class="s1">'dukemtmcreid'</span><span class="p">,</span> <span class="c1"># or targets='cuhk03' or targets=['dukemtmcreid', 'cuhk03']</span>
</pre></div></div>
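<p>A fuller sketch of a cross-dataset setup (train on one dataset, test on another; the root path and image sizes are illustrative):</p>

```python
import torchreid

datamanager = torchreid.data.ImageDataManager(
    root='reid-data',
    sources='market1501',    # train on Market1501
    targets='dukemtmcreid',  # evaluate on DukeMTMC-reID
    height=256,
    width=128
)
```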
<h2><a class="toc-backref" href="#id9">Combine train, query and gallery</a><a class="headerlink" href="#combine-train-query-and-gallery" title="Permalink to this headline">¶</a></h2>
<p>This can be easily done by setting <code class="docutils literal notranslate"><span class="pre">combineall=True</span></code> when instantiating a data manager. Below is an example of using Market1501,</p>
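<p>A sketch of such an example (the root path and image sizes are illustrative):</p>

```python
import torchreid

datamanager = torchreid.data.ImageDataManager(
    root='reid-data',
    sources='market1501',
    combineall=True,  # merge train, query and gallery into the training set
    height=256,
    width=128
)
```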
<h2><a class="toc-backref" href="#id10">Optimize layers with different learning rates</a><a class="headerlink" href="#optimize-layers-with-different-learning-rates" title="Permalink to this headline">¶</a></h2>
<p>A common practice for fine-tuning pretrained models is to use a smaller learning rate for the base layers and a larger learning rate for the randomly initialized layers (referred to as <code class="docutils literal notranslate"><span class="pre">new_layers</span></code>). <code class="docutils literal notranslate"><span class="pre">torchreid.optim.optimizer</span></code> implements this feature. All you need to do is set <code class="docutils literal notranslate"><span class="pre">staged_lr=True</span></code> and give the names of the <code class="docutils literal notranslate"><span class="pre">new_layers</span></code>, such as “classifier”.</p>
<p>Below is an example of setting different learning rates for base layers and new layers in ResNet50,</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="c1"># New layer "classifier" has a learning rate of 0.01</span>
<span class="c1"># The base layers have a learning rate of 0.001</span>
optimizer = torchreid.optim.build_optimizer(
    model,
    optim='sgd',
    lr=0.01,
    staged_lr=True,
    new_layers='classifier',
    base_lr_mult=0.1
)
</pre></div></div>
<p>Please refer to <a class="reference internal" href="pkg/optim.html#torchreid-optim"><span class="std std-ref">torchreid.optim</span></a> for more details.</p>
<h2><a class="toc-backref" href="#id11">Do two-stepped transfer learning</a><a class="headerlink" href="#do-two-stepped-transfer-learning" title="Permalink to this headline">¶</a></h2>
<p>To prevent the pretrained layers from being damaged by harmful gradients back-propagated from the randomly initialized layers, one can adopt the <em>two-stepped transfer learning strategy</em> presented in <a class="reference external" href="https://arxiv.org/abs/1611.05244">Deep Transfer Learning for Person Re-identification</a>. The basic idea is to pretrain the randomly initialized layers for a few epochs, while keeping the base layers frozen, before training all layers end-to-end.</p>
<p>This has been implemented in <code class="docutils literal notranslate"><span class="pre">Engine.run()</span></code> (see <a class="reference internal" href="pkg/engine.html#torchreid-engine"><span class="std std-ref">torchreid.engine</span></a>). The arguments that enable this feature are <code class="docutils literal notranslate"><span class="pre">fixbase_epoch</span></code> and <code class="docutils literal notranslate"><span class="pre">open_layers</span></code>. Intuitively, <code class="docutils literal notranslate"><span class="pre">fixbase_epoch</span></code> denotes the number of epochs during which the base layers are kept frozen; <code class="docutils literal notranslate"><span class="pre">open_layers</span></code> specifies which layers are open for training during that period. Note that <code class="docutils literal notranslate"><span class="pre">fixbase_epoch</span></code> is not counted into <code class="docutils literal notranslate"><span class="pre">max_epoch</span></code>.</p>
<p>For example, say you want to pretrain the classification layer named “classifier” in ResNet50 for 5 epochs before training all layers, you can do</p>
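<p>A sketch of such a run, assuming <code class="docutils literal notranslate"><span class="pre">engine</span></code> was constructed as in the earlier sections; the save directory and epoch count are illustrative:</p>

```python
engine.run(
    save_dir='log/resnet50',
    max_epoch=60,
    fixbase_epoch=5,          # freeze base layers for the first 5 epochs
    open_layers='classifier'  # only this layer is trained during that phase
)
```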
<h2><a class="toc-backref" href="#id12">Test a trained model</a><a class="headerlink" href="#test-a-trained-model" title="Permalink to this headline">¶</a></h2>
<p>You can load a trained model using <code class="code docutils literal notranslate"><span class="pre">torchreid.utils.load_pretrained_weights(model,</span> <span class="pre">weight_path)</span></code> and set <code class="docutils literal notranslate"><span class="pre">test_only=True</span></code> in <code class="docutils literal notranslate"><span class="pre">engine.run()</span></code>.</p>
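<p>A sketch, assuming <code class="docutils literal notranslate"><span class="pre">model</span></code> and <code class="docutils literal notranslate"><span class="pre">engine</span></code> were created as in the earlier sections; the checkpoint path is illustrative:</p>

```python
from torchreid.utils import load_pretrained_weights

# Load the trained weights into the model, then run evaluation only
load_pretrained_weights(model, 'log/resnet50/model.pth.tar-60')
engine.run(test_only=True)
```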
<h2><a class="toc-backref" href="#id13">Visualize ranked results</a><a class="headerlink" href="#visualize-ranked-results" title="Permalink to this headline">¶</a></h2>
<p>Ranked images can be visualized by setting <code class="docutils literal notranslate"><span class="pre">visrank</span></code> to True in <code class="docutils literal notranslate"><span class="pre">engine.run()</span></code>. <code class="docutils literal notranslate"><span class="pre">visrank_topk</span></code> determines the top-k images to be visualized (default is <code class="docutils literal notranslate"><span class="pre">visrank_topk=20</span></code>). Typically, <code class="docutils literal notranslate"><span class="pre">visrank</span></code> is used in test mode, i.e. setting <code class="docutils literal notranslate"><span class="pre">test_only=True</span></code> in <code class="docutils literal notranslate"><span class="pre">engine.run()</span></code>. The images are saved under <code class="docutils literal notranslate"><span class="pre">osp.join(save_dir,</span> <span class="pre">'visrank-'+str(epoch+1),</span> <span class="pre">dataset_name)</span></code>.</p>
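<p>A sketch combining these arguments, assuming <code class="docutils literal notranslate"><span class="pre">engine</span></code> was constructed earlier; the save directory is illustrative:</p>

```python
engine.run(
    test_only=True,
    visrank=True,
    visrank_topk=20,         # number of top-ranked images to save
    save_dir='log/resnet50'  # visualizations go under save_dir/visrank-*/
)
```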
<h2><a class="toc-backref" href="#id14">Use your own dataset</a><a class="headerlink" href="#use-your-own-dataset" title="Permalink to this headline">¶</a></h2>
<li>Write your own dataset class. Below is a template for an image dataset. The same template also applies to a video dataset class; simply change <code class="docutils literal notranslate"><span class="pre">ImageDataset</span></code> to <code class="docutils literal notranslate"><span class="pre">VideoDataset</span></code>.</li>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="n">targets</span><span class="o">=</span><span class="s1">'market1501'</span> <span class="c1"># or targets=['market1501', 'cuhk03']</span>
</pre></div></div>
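<p>A sketch of such a dataset class and its registration; the class name, directory, and the sample tuples below are dummy placeholders:</p>

```python
import torchreid
from torchreid.data import ImageDataset

class NewDataset(ImageDataset):
    dataset_dir = 'new_dataset'  # hypothetical directory under root

    def __init__(self, root='', **kwargs):
        # Each sample is a tuple of (img_path, pid, camid);
        # person IDs and camera IDs should be 0-based
        train = [('dummy/a.jpg', 0, 0)]
        query = [('dummy/b.jpg', 0, 0)]
        gallery = [('dummy/c.jpg', 0, 1)]
        super(NewDataset, self).__init__(train, query, gallery, **kwargs)

# Register the class so it can be referred to by its key
# in ImageDataManager's sources/targets arguments
torchreid.data.register_image_dataset('new_dataset', NewDataset)
```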
<h2><a class="toc-backref" href="#id15">Design your own Engine</a><a class="headerlink" href="#design-your-own-engine" title="Permalink to this headline">¶</a></h2>
<p>A new Engine should be designed if you have your own loss function. The base Engine class <code class="docutils literal notranslate"><span class="pre">torchreid.engine.Engine</span></code> implements some generic methods which you can inherit to avoid rewriting them. Please refer to the source code for more details. We suggest looking at how <code class="docutils literal notranslate"><span class="pre">ImageSoftmaxEngine</span></code> and <code class="docutils literal notranslate"><span class="pre">ImageTripletEngine</span></code> are constructed (and likewise <code class="docutils literal notranslate"><span class="pre">VideoSoftmaxEngine</span></code> and <code class="docutils literal notranslate"><span class="pre">VideoTripletEngine</span></code>). All you might need to implement is just a <code class="docutils literal notranslate"><span class="pre">train()</span></code> function.</p>
Built with <a href="http://sphinx-doc.org/">Sphinx</a> using a <a href="https://github.com/rtfd/sphinx_rtd_theme">theme</a> provided by <a href="https://readthedocs.org">Read the Docs</a>.