English | 简体中文
- Use only regular convolution and ReLU activation functions.
- Apply CSP (1/2 channel dim) blocks in the network structure, except for the Nano base model (a minimal block sketch follows this list).
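
To make the two design points above concrete, here is a minimal PyTorch sketch of a CSP-style block built only from regular convolutions and ReLU, with the features split into two halves along the channel dimension. The module names (`ConvReLU`, `CSPBlock`) and the exact layer layout are illustrative only, not the repository's actual implementation.

```python
import torch
import torch.nn as nn

class ConvReLU(nn.Module):
    """Plain convolution + BatchNorm + ReLU -- no depthwise conv, no SiLU."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class CSPBlock(nn.Module):
    """CSP-style block: half the channels go through the conv stack,
    the other half bypass it; the two paths are concatenated and fused."""
    def __init__(self, channels, n=2):
        super().__init__()
        half = channels // 2                      # 1/2 channel dim split
        self.split_main = ConvReLU(channels, half, k=1)
        self.split_shortcut = ConvReLU(channels, half, k=1)
        self.blocks = nn.Sequential(*[ConvReLU(half, half, k=3) for _ in range(n)])
        self.fuse = ConvReLU(half * 2, channels, k=1)

    def forward(self, x):
        main = self.blocks(self.split_main(x))
        shortcut = self.split_shortcut(x)
        return self.fuse(torch.cat((main, shortcut), dim=1))

# quick shape check
if __name__ == "__main__":
    y = CSPBlock(64)(torch.randn(1, 64, 80, 80))
    print(y.shape)  # torch.Size([1, 64, 80, 80])
```

Keeping every op a plain Conv/BN/ReLU means each layer maps to well-supported, easily fused kernels on common inference backends, which is what makes the quantization behavior discussed next so predictable.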
Advantages:
- Adopt a unified network structure and configuration; the accuracy loss of the PTQ 8-bit quantized model is negligible (a post-training quantization sketch follows this list).
- Suitable for users who are just getting started, or who need to apply, optimize, and deploy an 8-bit quantized model quickly and frequently.
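
As a rough illustration of the post-training quantization (PTQ) workflow that a plain Conv + ReLU design is friendly to, below is a minimal static-quantization sketch using PyTorch's eager-mode quantization API. The `TinyBackbone` stub and the random calibration loop are hypothetical stand-ins for a real model and representative images; the INT8 numbers reported in the table below come from TensorRT, not from this API.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import (QuantStub, DeQuantStub, get_default_qconfig,
                                   prepare, convert, fuse_modules)

class TinyBackbone(nn.Module):
    """Hypothetical stub made only of Conv/BN/ReLU, the pattern that quantizes cleanly."""
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()        # float -> int8 at the model input
        self.conv1 = nn.Conv2d(3, 16, 3, 2, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(16)
        self.relu1 = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(16, 32, 3, 2, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(32)
        self.relu2 = nn.ReLU(inplace=True)
        self.dequant = DeQuantStub()    # int8 -> float at the model output

    def forward(self, x):
        x = self.quant(x)
        x = self.relu1(self.bn1(self.conv1(x)))
        x = self.relu2(self.bn2(self.conv2(x)))
        return self.dequant(x)

model = TinyBackbone().eval()

# Fuse Conv+BN+ReLU so each triplet becomes one quantized op.
model = fuse_modules(model, [["conv1", "bn1", "relu1"], ["conv2", "bn2", "relu2"]])

# Post-training static quantization: observe activations on a few calibration
# batches, then convert weights and activations to INT8 (x86 "fbgemm" backend).
model.qconfig = get_default_qconfig("fbgemm")
prepared = prepare(model)
for _ in range(8):                                  # representative calibration data
    prepared(torch.randn(1, 3, 640, 640))
int8_model = convert(prepared)
print(int8_model(torch.randn(1, 3, 640, 640)).shape)
```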
| Model | Size | mAP<sup>val</sup><br>0.5:0.95 | Speed<sup>T4</sup><br>TRT FP16 b1<br>(FPS) | Speed<sup>T4</sup><br>TRT FP16 b32<br>(FPS) | Speed<sup>T4</sup><br>TRT INT8 b1<br>(FPS) | Speed<sup>T4</sup><br>TRT INT8 b32<br>(FPS) | Params<br>(M) | FLOPs<br>(G) |
| :--- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| YOLOv6-N-base | 640 | 36.6<sup>distill</sup> | 727 | 1302 | 814 | 1805 | 4.65 | 11.46 |
| YOLOv6-S-base | 640 | 45.3<sup>distill</sup> | 346 | 525 | 487 | 908 | 13.14 | 30.6 |
| YOLOv6-M-base | 640 | 49.4<sup>distill</sup> | 179 | 245 | 284 | 439 | 28.33 | 72.30 |
| YOLOv6-L-base | 640 | 51.1<sup>distill</sup> | 116 | 157 | 196 | 288 | 59.61 | 150.89 |
- Speed is tested with TensorRT 8.2.4.2 on a T4 GPU (a generic timing sketch for reproducing throughput measurements is given after this list).
- The processes of model training, evaluation, and inference are the same as for the original models. For details, please refer to this README.
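
For readers who want to sanity-check throughput on their own hardware, here is a generic PyTorch timing sketch for measuring images per second at a given batch size. It is not the benchmark behind the table above (those numbers come from TensorRT 8.2.4.2 engines on a T4); `my_yolov6_base` in the usage comment is a placeholder for whatever model object you load.

```python
import time
import torch

@torch.no_grad()
def measure_fps(model, batch_size=1, img_size=640, iters=100, warmup=20):
    """Rough FPS (images/s) of a PyTorch model on GPU.

    The table above reports TensorRT engine throughput instead, so treat this
    only as a sketch of the measurement methodology, not a reproduction of it.
    """
    device = torch.device("cuda")
    model = model.eval().to(device)
    x = torch.randn(batch_size, 3, img_size, img_size, device=device)
    for _ in range(warmup):                 # warm-up to stabilize clocks and cudnn
        model(x)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        model(x)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    return iters * batch_size / elapsed     # images per second

# Usage (hypothetical model object):
# fps_b1 = measure_fps(my_yolov6_base, batch_size=1)
# fps_b32 = measure_fps(my_yolov6_base, batch_size=32)
```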