update docs

gh-pages
KaiyangZhou 2019-05-22 22:23:33 +01:00
parent 48be7c83c1
commit 819e04956b
5 changed files with 16 additions and 7 deletions


@@ -422,9 +422,12 @@
<span class="sd">&quot;&quot;&quot;Returns number of parameters and FLOPs.</span>
<span class="sd"> .. note::</span>
<span class="sd"> Only layers that are used in the inference graph will be counted.</span>
<span class="sd"> For instance, person ID classification layer is not counted because it</span>
<span class="sd"> is typically discarded when doing feature extraction at test time.</span>
<span class="sd"> (1) this function only provides an estimate of the theoretical time complexity</span>
<span class="sd"> rather than the actual running time, which depends on the implementation and hardware;</span>
<span class="sd"> and (2) FLOPs are counted only for layers that are used at test time, so layers such as</span>
<span class="sd"> the person ID classification layer are ignored because they are discarded when doing</span>
<span class="sd"> feature extraction. Note that the inference graph depends on how the computations are</span>
<span class="sd"> constructed in ``forward()``.</span>
<span class="sd"> Args:</span>
<span class="sd"> model (nn.Module): network model.</span>


@@ -65,6 +65,8 @@ We provide a tool in ``torchreid.utils.model_complexity.py`` to automatically co
# count flops for all layers including ReLU and BatchNorm
utils.compute_model_complexity(model, (1, 3, 256, 128), verbose=True, only_conv_linear=False)
It is worth noting that (1) this function only provides an estimate of the theoretical time complexity rather than the actual running time, which depends on the implementation and hardware; and (2) FLOPs are counted only for layers that are used at test time, so layers such as the person ID classification layer are ignored because they are discarded when doing feature extraction. Note that the inference graph depends on how the computations are constructed in ``forward()``.
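To illustrate point (2) — that only layers actually invoked in ``forward()`` contribute to the count — here is a minimal pure-Python sketch. It uses no PyTorch and is not the actual torchreid implementation; all class and attribute names here are hypothetical:

```python
# Hypothetical sketch: count parameters only for layers that run at
# inference time, mirroring the idea behind compute_model_complexity.

class Layer:
    def __init__(self, name, num_params):
        self.name = name
        self.num_params = num_params
        self.used = False  # flipped to True when the layer is invoked

    def __call__(self, x):
        self.used = True
        return x  # identity; a real layer would transform x


class Model:
    def __init__(self):
        self.backbone = Layer("backbone", 1_000_000)
        # ID classification head: only needed during training
        self.classifier = Layer("classifier", 751 * 512)

    def forward(self, x, training=False):
        x = self.backbone(x)
        if training:  # the classifier never runs at test time
            x = self.classifier(x)
        return x


model = Model()
model.forward(object(), training=False)  # simulate one inference pass
counted = sum(
    layer.num_params
    for layer in (model.backbone, model.classifier)
    if layer.used
)
print(counted)  # only the backbone's 1000000 params are counted
```

Because ``training=False`` skips the classifier branch, its parameters never enter the total — the same reason the real tool's result depends on how ``forward()`` is written.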
Combine multiple datasets
---------------------------


@@ -622,9 +622,12 @@ other layers frozen.</p>
<dd><p>Returns number of parameters and FLOPs.</p>
<div class="admonition note">
<p class="first admonition-title">Note</p>
<p class="last">Only layers that are used in the inference graph will be counted.
For instance, person ID classification layer is not counted because it
is typically discarded when doing feature extraction at test time.</p>
<p class="last">(1) this function only provides an estimate of the theoretical time complexity
rather than the actual running time, which depends on the implementation and hardware;
and (2) FLOPs are counted only for layers that are used at test time, so layers such as
the person ID classification layer are ignored because they are discarded when doing
feature extraction. Note that the inference graph depends on how the computations are
constructed in <code class="docutils literal notranslate"><span class="pre">forward()</span></code>.</p>
</div>
<table class="docutils field-list" frame="void" rules="none">
<col class="field-name" />

File diff suppressed because one or more lines are too long


@@ -257,6 +257,7 @@
<span class="n">utils</span><span class="o">.</span><span class="n">compute_model_complexity</span><span class="p">(</span><span class="n">model</span><span class="p">,</span> <span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">256</span><span class="p">,</span> <span class="mi">128</span><span class="p">),</span> <span class="n">verbose</span><span class="o">=</span><span class="bp">True</span><span class="p">,</span> <span class="n">only_conv_linear</span><span class="o">=</span><span class="bp">False</span><span class="p">)</span>
</pre></div>
</div>
<p>It is worth noting that (1) this function only provides an estimate of the theoretical time complexity rather than the actual running time, which depends on the implementation and hardware; and (2) FLOPs are counted only for layers that are used at test time, so layers such as the person ID classification layer are ignored because they are discarded when doing feature extraction. Note that the inference graph depends on how the computations are constructed in <code class="docutils literal notranslate"><span class="pre">forward()</span></code>.</p>
</div>
<div class="section" id="combine-multiple-datasets">
<h2><a class="toc-backref" href="#id7">Combine multiple datasets</a><a class="headerlink" href="#combine-multiple-datasets" title="Permalink to this headline"></a></h2>