Commit Graph

55 Commits (d2f9b593d42dcb06a260e6fa2b2fb8555187587f)

Author SHA1 Message Date
Etienne Guevel d2f9b593d4 edit tests + README 2024-05-29 17:17:33 +02:00
Etienne Guevel be41e7b2d3 Add env for cpu 2024-05-29 13:50:56 +02:00
Etienne Guevel dc63c4c5f0 Merge branch 'RGB' of github.com:etienneguevel/dinov2 into RGB 2024-05-26 01:07:55 +02:00
Etienne Guevel d605692003 fix stuff 2024-05-26 01:07:49 +02:00
Etienne Guevel 279ec583f9 fix stuff 2024-05-26 01:00:55 +02:00
Etienne Guevel 9d6d6d42b5 Merge branch 'RGB' of github.com:etienneguevel/dinov2 into RGB 2024-05-24 17:52:19 +02:00
Etienne Guevel 0a0b3c13ac train changes for data preservation 2024-05-24 17:52:14 +02:00
Etienne Guevel 996dd33131 train changes for data preservation 2024-05-24 17:46:07 +02:00
Etienne Guevel 8da3dfab58 Merge branch 'RGB' of github.com:etienneguevel/dinov2 into RGB 2024-05-24 17:16:54 +02:00
Etienne Guevel 619d4edb6d Merge branch 'RGB' of github.com:etienneguevel/dinov2 into RGB 2024-05-24 17:16:23 +02:00
Etienne Guevel 84dd00b098 Merge branch 'RGB' of github.com:etienneguevel/dinov2 into RGB 2024-05-24 17:13:05 +02:00
Etienne Guevel 9ef866789f preserve lab data changes 2024-05-24 17:12:49 +02:00
Etienne Guevel 2e05e6b4ef preserve lab data changes 2024-05-24 16:55:10 +02:00
Etienne Guevel 6684955cef fix RGB issue 2024-05-16 23:02:51 +02:00
Etienne Guevel 6aaa74ce49 convert RGB 2024-05-16 12:10:38 +02:00
Etienne Guevel 9e43e49463 modifs for RGBA 2024-05-16 11:46:01 +02:00
Etienne Guevel 252ca3b3ab fix mem_per_gpu 2024-05-15 16:52:13 +02:00
Etienne Guevel 874ac5192e add mem-per-gpu 2024-05-15 16:17:15 +02:00
Etienne Guevel 087a4cea01 _handle and _streams fix 2024-05-15 16:01:34 +02:00
Etienne Guevel 0e1c5b4e35 _handles change 2024-05-15 10:18:33 +02:00
Etienne Guevel 12e4811549 initial changes 2024-05-14 15:11:51 +02:00
Patrick Labatut e1277af2ba
[bug] Fix interpolation of positional embeddings (#378)
Use size instead of scale factor to specify the output size of nn.interpolate(): this avoids rounding issues that lead to a mismatched output size and consistently generates the same output size as the previous kludge (from facebookresearch/dino#8).
2024-02-22 19:10:54 +01:00
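
A minimal sketch of the idea behind the fix above (#378), assuming a square grid of patch position embeddings; the helper name and shapes are illustrative, not the exact dinov2 implementation:

```python
import torch
import torch.nn.functional as F

def interpolate_pos_embed(patch_pos_embed: torch.Tensor, target_h: int, target_w: int) -> torch.Tensor:
    """Resample (1, N, dim) position embeddings to a (target_h, target_w) grid."""
    n, dim = patch_pos_embed.shape[1], patch_pos_embed.shape[2]
    side = int(n ** 0.5)
    grid = patch_pos_embed.reshape(1, side, side, dim).permute(0, 3, 1, 2)  # (1, dim, side, side)
    # Requesting the output size directly (instead of a float scale factor)
    # avoids any rounding mismatch between the expected and produced grid.
    grid = F.interpolate(grid, size=(target_h, target_w), mode="bicubic")
    return grid.permute(0, 2, 3, 1).reshape(1, target_h * target_w, dim)
```
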
qasfb 2302b6bf46
Update vision_transformer.py (#331)
Account for register tokens in get_intermediate_layers
2023-12-01 18:12:17 +01:00
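
A hedged sketch of what "accounting for register tokens" amounts to (the function name and token layout are illustrative, not the exact get_intermediate_layers code): the register tokens sit between the class token and the patch tokens, so they must be skipped when the patch tokens are extracted.

```python
import torch

def split_tokens(x: torch.Tensor, num_register_tokens: int):
    # x: (batch, 1 + num_register_tokens + num_patches, dim)
    cls_token = x[:, 0]
    patch_tokens = x[:, 1 + num_register_tokens:]  # drop CLS and register tokens
    return cls_token, patch_tokens
```
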
Patrick Labatut da4b3825f0 Lint 2023-10-27 07:33:32 -07:00
qasfb ad5a262b22
Update param_groups.py (#283)
Update LR decay rates for register tokens
2023-10-27 16:30:19 +02:00
Patrick Labatut e203621e57 More top-level README updates 2023-10-27 15:31:51 +02:00
Patrick Labatut 89272b5c0c Add more prominent link to paper 2023-10-27 15:26:57 +02:00
Patrick Labatut 9c7e324579
Add new backbones trained with registers (#282)
Add new backbones (and matching linear classification heads) trained with 4 registers following [Vision Transformers Need Registers](https://arxiv.org/abs/2309.16588).
2023-10-27 15:15:10 +02:00
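
Loading one of the new register-token backbones via PyTorch Hub; the entrypoint name below is an assumption based on the naming of the existing backbones, so check hubconf.py for the exact list of exposed models.

```python
import torch

# "dinov2_vitl14_reg" is an assumed entrypoint name following the existing
# naming scheme; the authoritative list of models is in hubconf.py.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vitl14_reg")
model.eval()
```
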
Patrick Labatut 44abdbe27c
Fix interpolate parameters to allow tracing (#247)
Pass scale factor as a tuple of floats to F.interpolate() to allow tracing.
2023-09-30 22:29:41 +02:00
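
A small sketch of the tracing-friendly call from #247, with placeholder tensor and scale values:

```python
import torch
import torch.nn.functional as F

# Passing scale_factor as a tuple of plain Python floats keeps the argument a
# constant that torch.jit.trace can record; the tensor and scales below are
# placeholders.
feature_map = torch.randn(1, 384, 16, 16)
sx, sy = 2.0, 2.0
scaled = F.interpolate(feature_map, scale_factor=(float(sx), float(sy)), mode="bicubic")
```
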
Patrick Labatut e7df9fc95d
Expose DPT depth models via torch.hub.load() (#238)
Add streamlined model versions without the mmcv dependency so they can be loaded directly via torch.hub.load().
2023-09-30 20:12:02 +02:00
Patrick Labatut 82185b17a8
Expose linear depth models via PyTorch Hub (#237)
Add streamlined model versions without the mmcv dependency so they can be loaded directly via torch.hub.load().
2023-09-30 20:06:05 +02:00
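
Loading the streamlined depth models from the two entries above via PyTorch Hub; the entrypoint names here are assumptions (a linear-head and a DPT-head variant), and the exact names are listed in the repository's hubconf.py.

```python
import torch

# Assumed entrypoint names for the mmcv-free depth models ("_ld" for the
# linear head, "_dd" for the DPT head); see hubconf.py for the exact names.
linear_depther = torch.hub.load("facebookresearch/dinov2", "dinov2_vitl14_ld")
dpt_depther = torch.hub.load("facebookresearch/dinov2", "dinov2_vitl14_dd")
```
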
Patrick Labatut b507fbcf50
Minor config tweaks (#246)
Ignore import warnings in hubconf and trigger lint workflow on PR.
2023-09-30 20:01:59 +02:00
Patrick Labatut 9a4564ce5e
Rework PyTorch Hub support code (#202)
Rework support code for torch.hub.load() to allow reusing shared functions and eventually expose more models.
2023-09-27 17:06:03 +02:00
Patrick Labatut 6a6261546c
Update README (#189)
Update the top-level README to make it clearer what's currently available.
2023-08-31 19:00:59 +02:00
Patrick Labatut dc1d2cbcc8 Fix broken links in notebooks 2023-08-31 09:43:55 -07:00
Patrick Labatut 91d8cd81c2
Add semantic segmentation (Mask2Former) code (#186)
Add semantic segmentation (Mask2Former based on ViT-Adapter) code + update demo notebook for segmentation with a dedicated section.
2023-08-31 15:36:47 +02:00
Patrick Labatut d5b0405eff
Add semantic segmentation (linear) code (#185)
Add semantic segmentation (linear) code + demo notebook
2023-08-31 15:09:49 +02:00
Patrick Labatut d5c376b5b3
Add depth estimation code (#184)
Add depth estimation code + demo notebook
2023-08-31 14:57:50 +02:00
Patrick Labatut 3a7bf1ca4b
Add (optional) extras dependencies (#183)
Add (optional) extras dependencies for dense tasks (mmcv and mmsegmentation) to conda and pip requirements.
2023-08-31 14:53:28 +02:00
Patrick Labatut 81b2b64193
Update license everywhere (#182)
Update code and models license from CC-BY-NC to Apache 2.0 in headers and other files.
2023-08-31 14:41:52 +02:00
qasfb bd0bd9be19
Set default number of nodes to 1
We set the default number of nodes to 1 so that the linear evaluation with 8 GPUs has a global batch size of 1024, which reproduces the reported results.
2023-08-30 17:52:02 +02:00
Patrick Labatut ebc1cba109
Allow disabling xFormers via environment variable (#180)
Allow disabling the use of xFormers (for inference) by setting the XFORMERS_DISABLED environment variable.
2023-08-30 17:20:47 +02:00
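
Using the switch from #180; the variable name comes from the commit message, while the value used and the need to set it before importing dinov2 are assumptions.

```python
import os

# Presence of XFORMERS_DISABLED is assumed to be what the code checks; set it
# before importing dinov2 so the flag is seen at import time. The shell
# equivalent would be `export XFORMERS_DISABLED=1`.
os.environ["XFORMERS_DISABLED"] = "1"
```
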
Patrick Labatut be7e57252f
Add missing hubconf arg (#178)
Add missing explicit layers argument
2023-08-29 16:05:32 +02:00
Leonid Ganeline 10d420147b
Remove mutable default arguments (#170)
Passing a list as a default argument is not recommended.
2023-08-24 02:15:22 +02:00
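
The standard Python fix for the issue addressed in #170, shown as a generic before/after (the function names are made up for illustration):

```python
# Problematic: the list in the def line is created once and shared by all calls.
def add_item_bad(item, items=[]):
    items.append(item)
    return items

# Recommended: use None as a sentinel and build a fresh list per call.
def add_item(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items
```
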
Leonid Ganeline 84afc6fcce
Exclude venv from flake8 linting (#168)
Issue: flake8 processes the venv dir.
Change: added `exclude = venv` to the flake8 config
2023-08-24 02:14:23 +02:00
Patrick Labatut 43c80c1ba8
Try to fix embedded video in README.md (#165) 2023-08-23 22:16:22 +02:00
Alexander Seiler c3c2683a13
Correct some typos (#62)
Signed-off-by: Alexander Seiler <seileralex@gmail.com>
2023-04-27 11:13:12 +02:00
Patrick Labatut c0ffb6ed71
Do not force using FlashAttention (#58)
Remove hardcoded selection of operator implementation and use xFormers fMHA dispatcher instead.
2023-04-26 02:26:24 +02:00
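
A sketch of the dispatcher-based call from #58, assuming xFormers is installed and a GPU is available; shapes and dtypes are illustrative only.

```python
import torch
import xformers.ops as xops

# Leave op=None so the fMHA dispatcher picks the best available kernel
# (FlashAttention or another implementation) instead of hardcoding one.
q = torch.randn(1, 197, 6, 64, device="cuda", dtype=torch.half)
k = torch.randn(1, 197, 6, 64, device="cuda", dtype=torch.half)
v = torch.randn(1, 197, 6, 64, device="cuda", dtype=torch.half)
out = xops.memory_efficient_attention(q, k, v, op=None)
```
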
Patrick Labatut ca58ffcd87
Fix linear classifier wrapper (#61)
Fix linear classifier wrapper in PyTorch Hub configuration module to support multi-sample batches.
2023-04-26 02:15:45 +02:00
Patrick Labatut 3e7e278d6f
Improve and fix ImageNet-1k dataset preparation (#60)
Document and fix implementation of extra metadata generation for ImageNet-1k.
2023-04-26 01:08:35 +02:00