Update MODEL_ZOO.md

update places205 linear classification results
pull/32/head
Jiahao Xie 2020-09-01 00:58:51 +08:00 committed by GitHub
parent 838b49649c
commit 93fab337ba
1 changed file with 6 additions and 1 deletion

@@ -22,7 +22,12 @@
 ### Places205 Linear Classification
-Coming soon.
+**Note**
+* Config: `configs/benchmarks/linear_classification/places205/r50_multihead.py`.
+* For DeepCluster, use the corresponding config with `_sobel`.
+* Places205 evaluates features of around 9k dimensions extracted from different layers. The top-1 result of the last epoch is reported.
+
+<table><thead><tr><th rowspan="2">Method</th><th rowspan="2">Config</th><th rowspan="2">Remarks</th><th colspan="5">Places205</th></tr><tr><td>feat1</td><td>feat2</td><td>feat3</td><td>feat4</td><td>feat5</td></tr></thead><tbody><tr><td><a href="https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py" target="_blank" rel="noopener noreferrer">ImageNet</a></td><td>-</td><td>torchvision</td><td>21.27</td><td>36.10</td><td>43.03</td><td>51.38</td><td>53.05</td></tr><tr><td>Random</td><td>-</td><td>kaiming</td><td>17.19</td><td>21.70</td><td>19.23</td><td>14.59</td><td>11.73</td></tr><tr><td><a href="https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Doersch_Unsupervised_Visual_Representation_ICCV_2015_paper.pdf" target="_blank" rel="noopener noreferrer">Relative-Loc</a></td><td>selfsup/relative_loc/r50.py</td><td>default</td><td>21.07</td><td>34.86</td><td>42.84</td><td>45.71</td><td>41.45</td></tr><tr><td><a href="https://arxiv.org/abs/1803.07728" target="_blank" rel="noopener noreferrer">Rotation-Pred</a></td><td>selfsup/rotation_pred/r50.py</td><td>default</td><td>18.65</td><td>35.71</td><td>42.28</td><td>45.98</td><td>43.72</td></tr><tr><td><a href="https://arxiv.org/abs/1807.05520" target="_blank" rel="noopener noreferrer">DeepCluster</a></td><td>selfsup/deepcluster/r50.py</td><td>default</td><td>18.80</td><td>33.93</td><td>41.44</td><td>47.22</td><td>42.61</td></tr><tr><td><a href="https://arxiv.org/abs/1805.01978" target="_blank" rel="noopener noreferrer">NPID</a></td><td>selfsup/npid/r50.py</td><td>default</td><td>20.53</td><td>34.03</td><td>40.48</td><td>47.13</td><td>47.73</td></tr><tr><td><a href="https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhan_Online_Deep_Clustering_for_Unsupervised_Representation_Learning_CVPR_2020_paper.pdf" target="_blank" rel="noopener noreferrer">ODC</a></td><td>selfsup/odc/r50_v1.py</td><td>default</td><td>20.94</td><td>34.78</td><td>41.19</td><td>47.45</td><td>49.18</td></tr><tr><td><a href="https://arxiv.org/abs/1911.05722" target="_blank" rel="noopener noreferrer">MoCo</a></td><td>selfsup/moco/r50_v1.py</td><td>default</td><td>21.13</td><td>35.19</td><td>42.40</td><td>48.78</td><td>50.70</td></tr><tr><td><a href="https://arxiv.org/abs/2003.04297" target="_blank" rel="noopener noreferrer">MoCo v2</a></td><td>selfsup/moco/r50_v2.py</td><td>default</td><td>20.86</td><td>35.63</td><td>42.57</td><td>49.93</td><td>52.05</td></tr><tr><td></td><td>selfsup/moco/r50_v2_simclr_neck.py</td><td>-&gt; SimCLR neck</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td><a href="https://arxiv.org/abs/2002.05709" target="_blank" rel="noopener noreferrer">SimCLR</a></td><td>selfsup/simclr/r50_bs256_ep200.py</td><td>default</td><td>22.55</td><td>34.14</td><td>40.35</td><td>47.15</td><td>51.64</td></tr><tr><td></td><td>selfsup/simclr/r50_bs256_ep200_mocov2_neck.py</td><td>-&gt; MoCo v2 neck</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td><a href="https://arxiv.org/abs/2006.07733" target="_blank" rel="noopener noreferrer">BYOL</a></td><td>selfsup/byol/r50_bs4096_ep200.py</td><td>default</td><td>22.28</td><td>35.95</td><td>43.03</td><td>49.79</td><td>52.75</td></tr></tbody></table>
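The note above states that each layer's features are evaluated at around 9k dimensions. A common way to get there (a sketch under assumptions; the pool sizes below are illustrative, not the exact values from the benchmark config) is to adaptively average-pool each ResNet-50 stage to a layer-specific spatial size so that channels × height × width lands near 9k for every stage:

```python
# ResNet-50 channel counts per evaluated stage (stem conv through layer4).
# Mapping these stages to feat1..feat5 is an assumption for illustration.
CHANNELS = {"feat1": 64, "feat2": 256, "feat3": 512, "feat4": 1024, "feat5": 2048}

def pooled_dim(channels: int, cap: int = 9216) -> tuple[int, int]:
    """Pick the largest square pooling size s with channels * s * s <= cap.

    Returns (s, flattened_dim). The cap of 9216 is a hypothetical budget
    chosen so every stage ends up near 9k dimensions.
    """
    s = 1
    while channels * (s + 1) ** 2 <= cap:
        s += 1
    return s, channels * s * s

for name, ch in CHANNELS.items():
    s, dim = pooled_dim(ch)
    # e.g. feat5: 2048 channels pooled to 2x2 -> 8192-dim feature vector
    print(f"{name}: {ch} ch -> pool {s}x{s} -> {dim} dims")
```

Deeper stages have more channels, so they get smaller pooling grids; every stage then feeds a linear classifier of comparable input size, which keeps the per-layer comparison in the table fair.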
### ImageNet Semi-Supervised Classification