yolov7-pose

Implementation of "YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors"

The pose estimation implementation is based on YOLO-Pose.

Dataset preparation

Download the keypoints labels of MS COCO 2017 and place the dataset where data/coco_kpts.yaml expects it.
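
The label files are plain text with one person per line. As a rough sketch (an assumption based on the upstream YOLO-Pose label layout, not spelled out in this repository), each line holds the class id, the normalized box (cx, cy, w, h), and then 17 COCO keypoints as (x, y, visibility) triplets:

```python
# Minimal sketch of parsing one YOLO-Pose style label line.
# The 56-value layout (1 class + 4 box + 17*3 keypoint values) is an
# assumption based on the upstream YOLO-Pose format; the file path
# below is purely illustrative.
def parse_pose_label(line: str):
    values = [float(v) for v in line.split()]
    cls = int(values[0])
    box = values[1:5]                              # cx, cy, w, h in [0, 1]
    kpts = [tuple(values[5 + 3 * i:8 + 3 * i])     # (x, y, visibility) per keypoint
            for i in range((len(values) - 5) // 3)]
    return cls, box, kpts


with open("labels/train2017/000000000036.txt") as f:   # hypothetical label file
    for line in f:
        cls, box, kpts = parse_pose_label(line)
        print(cls, box, f"{len(kpts)} keypoints")
```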

Training

Download yolov7-w6-person.pt and place it under weights/; the training command below expects weights/yolov7-w6-person.pt.

python -m torch.distributed.launch --nproc_per_node 8 --master_port 9527 train.py --data data/coco_kpts.yaml --cfg cfg/yolov7-w6-pose.yaml --weights weights/yolov7-w6-person.pt --batch-size 128 --img 960 --kpt-label --sync-bn --device 0,1,2,3,4,5,6,7 --name yolov7-w6-pose --hyp data/hyp.pose.yaml
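
For a quick single-GPU run, the same train.py arguments can in principle be used without the torch.distributed.launch wrapper (e.g. --device 0, a smaller --batch-size, and no --sync-bn); this follows the usual YOLOv7 training workflow and is not documented separately in this repository.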

Deploy

TensorRT: https://github.com/nanmi/yolov7-pose

Testing

Download the trained yolov7-w6-pose.pt weights, then evaluate on the COCO keypoints validation set:

python test.py --data data/coco_kpts.yaml --img 960 --conf 0.001 --iou 0.65 --weights yolov7-w6-pose.pt --kpt-label
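
For single-image inference (as opposed to COCO evaluation), a minimal sketch follows. It assumes the pose-branch helpers letterbox, non_max_suppression_kpt, output_to_keypoint, and plot_skeleton_kpts are available under utils/ as in the upstream YOLOv7 pose code; person.jpg and result.jpg are placeholder paths.

```python
# Minimal single-image pose inference sketch (assumptions noted above).
import cv2
import torch
from torchvision import transforms

from utils.datasets import letterbox
from utils.general import non_max_suppression_kpt
from utils.plots import output_to_keypoint, plot_skeleton_kpts

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# The checkpoint is a dict; the model itself sits under the 'model' key.
ckpt = torch.load("yolov7-w6-pose.pt", map_location=device)
model = ckpt["model"].float().eval()
if torch.cuda.is_available():
    model = model.half().to(device)

# Letterbox to the 960 training resolution (stride 64 for the W6 head).
frame = cv2.cvtColor(cv2.imread("person.jpg"), cv2.COLOR_BGR2RGB)  # placeholder input
frame = letterbox(frame, 960, stride=64, auto=True)[0]
img = transforms.ToTensor()(frame).unsqueeze(0)      # 1x3xHxW, float in [0, 1]
if torch.cuda.is_available():
    img = img.half().to(device)

with torch.no_grad():
    output, _ = model(img)

# NMS over boxes plus 17 keypoints per detection, then flatten for plotting.
output = non_max_suppression_kpt(output, 0.25, 0.65,
                                 nc=model.yaml["nc"], nkpt=model.yaml["nkpt"],
                                 kpt_label=True)
output = output_to_keypoint(output)

# Draw the skeletons on the letterboxed frame and save the result.
canvas = (img[0].permute(1, 2, 0) * 255).float().cpu().numpy().astype("uint8")
canvas = cv2.cvtColor(canvas, cv2.COLOR_RGB2BGR)
for det in output:
    plot_skeleton_kpts(canvas, det[7:].T, 3)
cv2.imwrite("result.jpg", canvas)                    # placeholder output path
```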

Citation

@article{wang2022yolov7,
  title={{YOLOv7}: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors},
  author={Wang, Chien-Yao and Bochkovskiy, Alexey and Liao, Hong-Yuan Mark},
  journal={arXiv preprint arXiv:2207.02696},
  year={2022}
}

Acknowledgements
