== YOLOv6

Tutorial notebook on Colab: https://colab.research.google.com/github/meituan/YOLOv6/blob/main/turtorial.ipynb

Implementation of paper:
* YOLOv6 v3.0: A Full-Scale Reloading 🔥
* YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications
== What’s New
* [2023.01.06] Release P6 models and enhance the performance of P5 models. ⭐️ Benchmark
  * Renew the neck of the detector with a BiC module and SimCSPSPPF Block.
  * Propose an anchor-aided training (AAT) strategy.
  * Involve a new self-distillation strategy for small models of YOLOv6.
  * Expand YOLOv6 and hit a new SOTA performance on the COCO dataset.
* [2022.11.04] Release base models to simplify the training and deployment process.
* [2022.09.06] Customized quantization methods. 🚀 Quantization Tutorial
* [2022.09.05] Release M/L models and update N/T/S models with enhanced performance.
* [2022.06.23] Release N/T/S models with excellent performance.
== Benchmark
| Model | Size | mAP^val 0.5:0.95 | Speed^T4 trt fp16 b1 (fps) | Speed^T4 trt fp16 b32 (fps) | Params (M) | FLOPs (G) |
| :---- | ---- | :--------------- | -------------------------- | --------------------------- | ---------- | --------- |
| YOLOv6-N | 640 | 37.5 | 779 | 1187 | 4.7 | 11.4 |
| YOLOv6-S | 640 | 45.0 | 339 | 484 | 18.5 | 45.3 |
| YOLOv6-M | 640 | 50.0 | 175 | 226 | 34.9 | 85.8 |
| YOLOv6-L | 640 | 52.8 | 98 | 116 | 59.6 | 150.7 |
| YOLOv6-N6 | 1280 | 44.9 | 228 | 281 | 10.4 | 49.8 |
| YOLOv6-S6 | 1280 | 50.3 | 98 | 108 | 41.4 | 198.0 |
| YOLOv6-M6 | 1280 | 55.2 | 47 | 55 | 79.6 | 379.5 |
| YOLOv6-L6 | 1280 | 57.2 | 26 | 29 | 140.4 | 673.4 |
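The mAP^val column is COCO validation accuracy at the listed input size. As a hedged sketch of re-checking these numbers with the repository's evaluation script (the flag names are assumptions, verify with `tools/eval.py --help` in your copy of the repo):

```shell
# evaluate a checkpoint on the COCO validation set
# (--data/--weights/--task flag names are assumed; check the script's --help)
python tools/eval.py --data data/coco.yaml --weights yolov6s.pt --task val
```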
=== Legacy models

| Model | Size | mAP^val 0.5:0.95 | Speed^T4 trt fp16 b1 (fps) | Speed^T4 trt fp16 b32 (fps) | Params (M) | FLOPs (G) |
| :---- | ---- | :--------------- | -------------------------- | --------------------------- | ---------- | --------- |
| [YOLOv6-N](https://github.com/meituan/YOLOv6/releases/download/0.2.0/yolov6n.pt) | 640 | 35.9^300e / 36.3^400e | 802 | 1234 | 4.3 | 11.1 |
| [YOLOv6-T](https://github.com/meituan/YOLOv6/releases/download/0.2.0/yolov6t.pt) | 640 | 40.3^300e / 41.1^400e | 449 | 659 | 15.0 | 36.7 |
| [YOLOv6-S](https://github.com/meituan/YOLOv6/releases/download/0.2.0/yolov6s.pt) | 640 | 43.5^300e / 43.8^400e | 358 | 495 | 17.2 | 44.2 |
| [YOLOv6-M](https://github.com/meituan/YOLOv6/releases/download/0.2.0/yolov6m.pt) | 640 | 49.5 | 179 | 233 | 34.3 | 82.2 |
| [YOLOv6-L-ReLU](https://github.com/meituan/YOLOv6/releases/download/0.2.0/yolov6l_relu.pt) | 640 | 51.7 | 113 | 149 | 58.5 | 144.0 |
| [YOLOv6-L](https://github.com/meituan/YOLOv6/releases/download/0.2.0/yolov6l.pt) | 640 | 52.5 | 98 | 121 | 58.5 | 144.0 |
- Speed is tested with TensorRT 7.2 on T4.

=== Quantized model 🚀
| Model | Size | Precision | mAP^val 0.5:0.95 | Speed^T4 trt b1 (fps) | Speed^T4 trt b32 (fps) |
| :---- | ---- | --------- | :--------------- | --------------------- | ---------------------- |
| YOLOv6-N RepOpt | 640 | INT8 | 34.8 | 1114 | 1828 |
| YOLOv6-N | 640 | FP16 | 35.9 | 802 | 1234 |
| YOLOv6-T RepOpt | 640 | INT8 | 39.8 | 741 | 1167 |
| YOLOv6-T | 640 | FP16 | 40.3 | 449 | 659 |
| YOLOv6-S RepOpt | 640 | INT8 | 43.3 | 619 | 924 |
| YOLOv6-S | 640 | FP16 | 43.5 | 377 | 541 |
- Speed is tested with TensorRT 8.4 on T4.
- Precision (mAP) is evaluated on models trained for 300 epochs.
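The speed columns above are TensorRT engine throughput. As a rough sketch, not the authors' exact benchmark script, an ONNX export of the model can be built and timed with NVIDIA's `trtexec`; the input tensor name `images` and the 640x640 shape are assumptions:

```shell
# build and time an FP16 engine at batch size 1
trtexec --onnx=yolov6s.onnx --fp16 --saveEngine=yolov6s_fp16.engine \
        --shapes=images:1x3x640x640

# time an INT8 engine; for accuracy comparable to the table, a proper
# calibration (or the RepOpt quantization flow above) is still required
trtexec --onnx=yolov6s.onnx --int8 --shapes=images:1x3x640x640
```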
== Quick Start

* YouTube Tutorial: [How to train YOLOv6 on a custom dataset](https://youtu.be/fFCWrMFH2UY)
* Demo of YOLOv6 inference on Google Colab: https://colab.research.google.com/github/mahdilamb/YOLOv6/blob/main/inference.ipynb
* Blog post: [YOLOv6 Object Detection — Paper Explanation and Inference](https://learnopencv.com/yolov6-object-detection/)
=== [FAQ (Continuously updated)](https://github.com/meituan/YOLOv6/wiki/FAQ%EF%BC%88Continuously-updated%EF%BC%89)

If you have any questions, you are welcome to join our WeChat group to discuss and exchange.
=== Reproduce our results on COCO

Please refer to [Train COCO Dataset](./docs/Train_coco_data.md).

=== Resume training
If your training process is interrupted, you can resume training with:

```shell
# single GPU training.
python tools/train.py --resume

# multi GPU training.
python -m torch.distributed.launch --nproc_per_node 8 tools/train.py --resume
```

The command above will automatically find the latest checkpoint in the YOLOv6 directory and resume the training process from it. You can also pass a checkpoint path to the `--resume` parameter:

```shell
# remember to replace /path/to/your/checkpoint/path with the checkpoint path you want to resume training from.
--resume /path/to/your/checkpoint/path
```

This will resume training from the specific checkpoint you provide.
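For example, a full resume command might look like the following; the checkpoint path is purely illustrative:

```shell
# resume single-GPU training from an explicit checkpoint (path is hypothetical)
python tools/train.py --resume runs/train/exp/weights/last_ckpt.pt
```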
=== Inference

First, download a pretrained model from the YOLOv6 [release page](https://github.com/meituan/YOLOv6/releases/tag/0.3.0) or use your own trained model. Second, run inference with `tools/infer.py`:
```shell
# P5 models
python tools/infer.py --weights yolov6s.pt --source img.jpg / imgdir / video.mp4

# P6 models
python tools/infer.py --weights yolov6s6.pt --img 1280 1280 --source img.jpg / imgdir / video.mp4
```

If you want to run inference on a local camera or a web camera, you can run:

```shell
# P5 models
python tools/infer.py --weights yolov6s.pt --webcam --webcam-addr 0

# P6 models
python tools/infer.py --weights yolov6s6.pt --img 1280 1280 --webcam --webcam-addr 0
```
`webcam-addr` can be a local camera number id or an rtsp address.
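Several of the third-party deployments listed under Third-party resources below (ONNXRuntime, MNN, TNN, TensorRT) consume an ONNX export of the checkpoint. A minimal sketch, assuming the `deploy/ONNX/export_onnx.py` script shipped in this repository (verify its exact flags before use):

```shell
# export a P5 checkpoint to ONNX (flags assumed; check the script's --help)
python deploy/ONNX/export_onnx.py --weights yolov6s.pt --img 640 --batch 1
```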
=== Tutorials

* [User Guide (zh_CN)](https://yolov6-docs.readthedocs.io/zh_CN/latest/)
* [Train COCO Dataset](./docs/Train_coco_data.md)
* [Train custom data](./docs/Train_custom_data.md)
* [Test speed](./docs/Test_speed.md)
* [Tutorial of Quantization for YOLOv6](./docs/Tutorial%20of%20Quantization.md)

=== Third-party resources
* YOLOv6 NCNN Android app demo: [ncnn-android-yolov6](https://github.com/FeiGeChuanShu/ncnn-android-yolov6) from [FeiGeChuanShu](https://github.com/FeiGeChuanShu)
* YOLOv6 ONNXRuntime/MNN/TNN C++: [YOLOv6-ORT](https://github.com/DefTruth/lite.ai.toolkit/blob/main/lite/ort/cv/yolov6.cpp), [YOLOv6-MNN](https://github.com/DefTruth/lite.ai.toolkit/blob/main/lite/mnn/cv/mnn_yolov6.cpp) and [YOLOv6-TNN](https://github.com/DefTruth/lite.ai.toolkit/blob/main/lite/tnn/cv/tnn_yolov6.cpp) from [DefTruth](https://github.com/DefTruth)
* YOLOv6 TensorRT Python: [yolov6-tensorrt-python](https://github.com/Linaom1214/TensorRT-For-YOLO-Series) from [Linaom1214](https://github.com/Linaom1214)
* YOLOv6 TensorRT Windows C++: [yolort](https://github.com/zhiqwang/yolov5-rt-stack/tree/main/deployment/tensorrt-yolov6) from [Wei Zeng](https://github.com/Wulingtian)
* [YOLOv6 web demo](https://huggingface.co/spaces/nateraw/yolov6) on [Huggingface Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio)
* Tutorial: [How to train YOLOv6 on a custom dataset](https://blog.roboflow.com/how-to-train-yolov6-on-a-custom-dataset/)