27 Commits

Author SHA1 Message Date
hanoch
e2f0c15a00 remove torch anomaly() 2024-10-22 10:32:51 +03:00
hanoch
f366ed0f30 stability of denominator in data normalization/standardization 2024-10-21 15:57:07 +03:00
hanoch
5f8046d05f ClearML debug 2024-10-20 17:08:46 +03:00
hanoch
102d731395 ClearML debug 2024-10-20 16:51:29 +03:00
hanoch
60ef66ce19 train.py: save runs to incremental run dirs anywhere: /mnt/Data/<user> 2024-10-20 15:27:57 +03:00
hanoch
d166717631 ClearML connect config 2024-10-20 14:40:49 +03:00
hanoch
db5033a5f4 Milestone: mAP_person = 82.5%
gradient clipping with optimizer scaler
ClearML connect config
2024-10-20 10:46:21 +03:00
hanoch
3da5c91239 ClearML connect config 2024-10-15 15:22:54 +03:00
hanoch
890f748e16 ClearML connect config 2024-10-15 14:50:31 +03:00
hanoch
65d0872a36 ClearML connect config 2024-10-15 14:40:02 +03:00
hanoch
e7c36bab68 modify TIR channel expansion to be w/o augmentation 2024-10-14 14:02:24 +03:00
hanoch
abdcce0e70 Test: medium-size try/except 2024-10-14 10:32:25 +03:00
hanoch
e5654f43f0 TIR channel expansion; adapt mosaic to TIR or any dtype other than UINT8/RGB; inversion aug is dtype-dependent. test.py: verbose save of per-class mAP and mAP per size. ClearML support 2024-09-23 11:23:13 +03:00
hanoch
c3276b3de9 Test/train/detect: scaling and normalization. 2024-09-15 09:20:39 +03:00
hanoch
02f785d2dc Check train dataset objects for class-id violations; n-ch input as param. 2024-08-27 13:58:16 +03:00
hanoch
18086f7c35 comment 2024-08-19 15:56:20 +03:00
hanoch
c8c3363926 Fixed: randomizer gets integers only 2024-08-15 15:46:47 +03:00
hanoch
c1a580e621 comment 2024-08-15 15:40:01 +03:00
hanoch
b82fdc0574 Add support for using the "parent" directory in data.yaml as the parent of the images folder 2024-08-08 14:44:50 +03:00
hanoch
5bb345f112 fix annotation txt file creation 2024-08-07 18:03:48 +03:00
Kıvanç Tezören
55b90e1119
Add option to use YOLOv5 AP metric (#775)
* Add YOLOv5 metric option

* Inform if using v5 metric
2022-09-16 21:14:01 +03:00
AlexeyAB84
36ce6b2087 Use compute_loss_ota() if there is no loss_ota param or loss_ota==1 2022-08-16 02:10:07 +03:00
AlexeyAB84
711a16ba57 Added param loss_ota for hyp.yaml, to disable OTA for faster training 2022-08-09 07:48:28 +03:00
Mohammad Khoshbin
b8956dd5a5
fix training with frozen layers (#378) 2022-08-02 17:55:28 +03:00
Mohammad Khoshbin
cc42a206ef
fix cpu-only training CUDA error (#369) 2022-07-30 18:43:18 +03:00
Dhia_Oussayed
0d882e553e
Fixed a bug for Hyperparameters Evolution. (#344)
* Update hyp.scratch.custom.yaml

The anchors parameter was missing; evolution won't start without it.

* Update train.py

Updated the hyperparameter evolution metadata variable to match the hyp.yaml files for the evolution to run successfully.

* Update hyp.scratch.p5.yaml

Added the anchors parameter; evolution doesn't start without it.

* Update hyp.scratch.p6.yaml

Added the anchors parameter for the hyperparameter evolution.

* Update hyp.scratch.tiny.yaml

Added the anchors parameter for the hyperparameter evolution.

* Update hyp.scratch.custom.yaml

* Update hyp.scratch.tiny.yaml

* Update hyp.scratch.p5.yaml

* Update hyp.scratch.p6.yaml

* Update train.py

* Update train_aux.py
2022-07-28 22:09:52 +03:00
Kin-Yiu, Wong
a1c6e04c7c
main code
code for inference
2022-07-06 23:23:27 +08:00