148 Commits

Author SHA1 Message Date
Glenn Jocher
5fac5ad165
Precision-Recall Curve Feature Addition (#1107)
* initial commit

* Update general.py

Indent update

* Update general.py

refactor duplicate code

* 200 dpi
2020-10-09 14:50:59 +02:00
Glenn Jocher
66676eb039 init_torch_seeds >> init_seeds bug fix 2020-10-06 15:00:47 +02:00
Glenn Jocher
f1c63e2784 add mosaic and warmup to hyperparameters (#931) 2020-09-13 14:03:54 -07:00
Glenn Jocher
a62a45b2dd prevent testloader caching on --notest 2020-09-11 16:59:13 -07:00
Glenn Jocher
c8e51812a5 hyp evolution force-autoanchor fix 2020-09-04 13:13:10 -07:00
Glenn Jocher
c687d5c129 reorganize train initialization steps 2020-09-04 12:25:53 -07:00
Glenn Jocher
44cdcc7e0b hyp['anchors'] evolution update 2020-09-03 12:54:22 -07:00
NanoCode012
d8274d0434
Fix results_file not renaming (#903) 2020-09-03 00:47:50 -07:00
Glenn Jocher
281d78c105
Update train.py (#902)
* Update train.py with simplified ckpt names

* Return default hyps to hyp.scratch.yaml

Leave the line commented for future use, once the mystery of which finetuning hyps to apply becomes clearer.

* Force test_batch*_pred.jpg replot on final epoch

This will allow you to see predictions from the final testing run in runs/exp0 after training completes
2020-09-02 15:08:43 -07:00
Naman Gupta
6f3db5e662
Remove autoanchor and class checks on resumed training (#889)
* Class frequency not calculated on resuming training

Calculation of class frequency is not needed when resuming training.
Anchors can still be recalculated whether resuming or not.

* Check rank for autoanchor

* Update train.py

no autoanchor checks on resume

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2020-09-02 09:05:31 -07:00
Glenn Jocher
f06e2d518c opt.image_weights bug fix (#885) 2020-08-31 11:05:37 -07:00
Glenn Jocher
69ff781ca5 opt.img_weights bug fix (#885) 2020-08-31 10:33:07 -07:00
Glenn Jocher
08e97a2f88 Update hyperparameters to add lrf, anchors 2020-08-28 14:58:43 -07:00
Glenn Jocher
a21bd0687c Update train.py forward simplification 2020-08-25 13:48:03 -07:00
Glenn Jocher
09402a2174 torch.from_tensor() bug fix 2020-08-25 03:14:17 -07:00
Glenn Jocher
83dc540b1d remove ema.ema hasattr(ema, 'module') check 2020-08-22 15:18:39 -07:00
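The EMA bookkeeping these commits touch (keeping an exponential moving average of model weights alongside the live model) can be illustrated with a minimal, framework-free sketch. Plain dicts stand in for model state here; the update rule is the standard EMA formula, not the repository's exact implementation:

```python
def ema_update(ema_state, model_state, decay=0.999):
    """Blend current model weights into the EMA copy in place.

    ema <- decay * ema + (1 - decay) * weight
    """
    for k, w in model_state.items():
        ema_state[k] = decay * ema_state[k] + (1.0 - decay) * w
    return ema_state


# Usage: the EMA copy tracks the model but smooths out sudden jumps.
ema = {"w": 0.0}
for step in range(3):
    ema_update(ema, {"w": 1.0}, decay=0.9)
print(ema["w"])  # approaches 1.0 gradually: 0.1, 0.19, then ~0.271
```

Evaluating against the EMA weights rather than the raw weights typically gives slightly better and more stable mAP, which is why the trainer maintains both copies.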
Glenn Jocher
4447f4b937
--resume to same runs/exp directory (#765)
* initial commit

* add weight backup dir on resume
2020-08-20 18:24:33 -07:00
NanoCode012
fb4fc8cd02
Fix ema attribute error in DDP mode (#775)
* Fix ema error in DDP mode

* Update train.py

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2020-08-18 09:52:21 -07:00
Glenn Jocher
ebafd1ead5
single command --resume (#756)
* single command --resume

* else check files, remove TODO

* argparse.Namespace()

* tensorboard lr

* bug fix in get_latest_run()
2020-08-17 16:28:43 -07:00
Glenn Jocher
916d4aad9a
v3.0 Release (#725)
* initial commit

* remove yolov3-spp from test.py study

* update study --img range

* update mAP

* cleanup and speed updates

* update README plot
2020-08-13 14:25:05 -07:00
NanoCode012
0892c44bc4
Fix Logging (#719)
* Add logging setup

* Fix fusing layers message

* Fix logging missing end argument

* Add logging

* Change logging to use logger

* Update yolo.py

I tried this in a cloned branch, and everything seems to work fine

* Update yolo.py

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2020-08-12 14:18:19 -07:00
Marc
a925f283a7
max workers for dataloader (#722) 2020-08-12 13:57:36 -07:00
NanoCode012
4949401a94
Fix redundant outputs via Logging in DDP training (#500)
* Change print to logging

* Clean function set_logging

* Add line spacing

* Change leftover prints to log

* Fix scanning labels output

* Fix rank naming

* Change leftover print to logging

* Reorganized DDP variables

* Fix type error

* Make quotes consistent

* Fix spelling

* Clean function call

* Add line spacing

* Update datasets.py

* Update train.py

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2020-08-11 11:18:45 -07:00
Glenn Jocher
e71fd0ec0b Model freeze capability (#679) 2020-08-10 22:49:43 -07:00
Glenn Jocher
8e5c66579b update train.py remove save_json final_epoch 2020-08-09 21:24:40 -07:00
Glenn Jocher
41523e2c91
Dataset autodownload feature addition (#685)
* initial commit

* move download scripts into data/scripts

* new check_dataset() function in general.py

* move check_dataset() out of with context

* Update general.py

* DDP update

* Update general.py
2020-08-09 20:52:57 -07:00
NanoCode012
3d8ed0a76b
Fix missing model.stride in DP and DDP mode (#683) 2020-08-09 11:01:36 -07:00
Glenn Jocher
a0ac5adb7b Single-source training update (#680) 2020-08-09 02:27:35 -07:00
Glenn Jocher
3c6e2f7668
Single-source training (#680)
* Single-source training

* Extract hyperparameters into separate files

* weight decay scientific notation yaml reader bug fix

* remove import glob

* intersect_dicts() implementation

* 'or' bug fix

* .to(device) bug fix
2020-08-09 02:12:44 -07:00
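The intersect_dicts() mentioned above addresses a common transfer-learning problem: a pretrained checkpoint can only be partially loaded when the model head changes (e.g. a different class count). The idea is to keep only the entries present in both state dicts with matching shapes. A minimal stand-in using plain dicts of shape tuples (a sketch of the concept, not the repository's exact code):

```python
def intersect_dicts(da, db, exclude=()):
    """Return the entries of da whose key also exists in db with an
    identical shape, skipping keys containing any excluded substring."""
    return {
        k: v
        for k, v in da.items()
        if k in db and not any(x in k for x in exclude) and v == db[k]
    }


# Usage: the head changed class count, so only the backbone entry survives.
ckpt = {"conv1.weight": (64, 3, 3, 3), "head.weight": (80, 256)}
model = {"conv1.weight": (64, 3, 3, 3), "head.weight": (25, 256)}
loadable = intersect_dicts(ckpt, model)  # {"conv1.weight": (64, 3, 3, 3)}
```

The filtered dict can then be loaded non-strictly, so mismatched layers simply keep their freshly initialized weights.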
NanoCode012
d7cfbc47ab
Fix unrecognized local rank argument (#676) 2020-08-08 16:40:10 -07:00
Glenn Jocher
93684531c6
train.py --logdir argparser addition (#660)
* train.py --logdir argparser addition

* train.py --logdir argparser addition
2020-08-06 22:26:38 -07:00
NanoCode012
886b9841c8
Add Multi-Node support for DDP Training (#504)
* Add support for multi-node DDP

* Remove local_rank confusion

* Fix spacing
2020-08-06 11:15:24 -07:00
lorenzomammana
728efa6576
Fix missing imports (#627)
* Fix missing imports

* Update detect.py

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2020-08-04 12:22:15 -07:00
Glenn Jocher
eb99dff9ef import random bug fix (#614) 2020-08-03 10:52:59 -07:00
Jirka Borovec
d5b6416c87
Explicit Imports (#498)
* expand imports

* optimize

* miss

* fix
2020-08-02 15:47:36 -07:00
Glenn Jocher
f1096b2cf7 hyperparameter evolution update (#566) 2020-08-02 10:32:04 -07:00
Glenn Jocher
c1a2a7a411 hyperparameter evolution bug fix (#566) 2020-08-01 23:00:10 -07:00
Glenn Jocher
8074745908 hyperparameter evolution bug fix (#566) 2020-08-01 19:24:14 -07:00
Glenn Jocher
e32abb5fb9 hyperparameter evolution bug fix (#566) 2020-08-01 19:15:48 -07:00
Glenn Jocher
8056fe2db8 hyperparameter evolution bug fix (#566) 2020-08-01 15:07:40 -07:00
Glenn Jocher
127cbeb3f5 hyperparameter expansion to flips, perspective, mixup 2020-08-01 13:47:54 -07:00
Glenn Jocher
bcd452c482 replace random_affine() with random_perspective()
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2020-07-31 15:53:52 -07:00
Liu Changyu
c020875b17
PyTorch 1.6.0 update with native AMP (#573)
* PyTorch now has native Automatic Mixed Precision (AMP) training.

* Fixed inconsistent code indentation

* Fixed inconsistent code indentation

* Mixed precision training is turned on by default
2020-07-31 10:52:45 -07:00
Laughing
4e2b9ecc7e
LR --resume repeat bug fix (#565) 2020-07-30 10:48:20 -07:00
AlexWang1900
a209a32019
Fix bug #541 #542 (#545)
* fix #541 #542

* Update train.py

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2020-07-28 18:31:01 -07:00
NanoCode012
7f8471eaeb
--notest bug fix (#518)
* Fix missing results_file and fi when notest passed

* Update train.py

reverting previous changes and removing functionality from the 'if not opt.notest or final_epoch:  # Calculate mAP' block.

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2020-07-25 10:24:39 -07:00
Glenn Jocher
9da56b62dd
v2.0 Release (#491)
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
2020-07-23 15:34:23 -07:00
Glenn Jocher
5e970d45c4
Update train.py (#462) 2020-07-22 12:32:03 -07:00
Glenn Jocher
3edc38f603 update train.py gsutil bucket fix (#463) 2020-07-21 23:25:33 -07:00
Glenn Jocher
776555771f update train.py gsutil bucket fix (#463) 2020-07-21 23:21:36 -07:00