Sparse/Quantization Aware Training for YOLOv10 #2328
Hi @yoloyash, YOLOv10 is not currently on our roadmap, but starting from the YOLOv8 example should put you in a great spot. Targeting the convolutional layers (inputs and weights only) is a good place to start, and skipping the initial conv/predictors should also help with accuracy.
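A minimal sketch of that layer-targeting advice, written with plain PyTorch magnitude pruning rather than a SparseML recipe. The helper names and the `skip_keywords` substrings (`"model.0"`, `"detect"`, `"head"`) are assumptions about how a YOLOv10 implementation names its stem conv and prediction head, so adjust them to the actual model:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def select_prunable_convs(model: nn.Module, skip_keywords=("model.0", "detect", "head")):
    """Collect Conv2d weights to prune, skipping the stem conv and prediction head.

    `skip_keywords` are hypothetical substrings of module names; replace them
    with whatever the actual YOLOv10 implementation calls those layers.
    """
    targets = []
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d) and not any(k in name for k in skip_keywords):
            targets.append((module, "weight"))
    return targets

def apply_magnitude_pruning(model: nn.Module, sparsity: float = 0.5) -> nn.Module:
    # Global unstructured magnitude pruning over the selected conv weights.
    # The masks are re-applied on every forward pass, so pruned positions stay
    # zero in the effective weight while fine-tuning continues.
    targets = select_prunable_convs(model)
    prune.global_unstructured(targets, pruning_method=prune.L1Unstructured, amount=sparsity)
    return model
```

In practice the sparsity level would usually be ramped up gradually over training rather than applied in one shot, which is the kind of schedule a pruning recipe expresses.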
@bfineran thank you so much for the reply! I think this is enough to get me started. If you have any other suggestions, please let me know!
Hi @yoloyash
Hi @jeanniefinks thank you! |
Is your feature request related to a problem? Please describe.
I need to reduce the model size of YOLOv10 while maintaining performance.
Describe the solution you'd like
Sparse and quantization-aware training for YOLOv10: maintain sparsity while training and quantize to 8 bits. I just need some idea of how to go about implementing it. If I'm able to make it work, I'll open a PR.
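For the 8-bit half of this, here is a minimal sketch using PyTorch's eager-mode quantization-aware training rather than a SparseML recipe; the helper names are placeholders, and a real eager-mode flow would also need QuantStub/DeQuantStub wrapping and module fusion, which are omitted here:

```python
import torch
import torch.ao.quantization as tq

def prepare_for_qat(model: torch.nn.Module) -> torch.nn.Module:
    """Insert fake-quantization observers so training simulates int8 inference."""
    model.train()
    model.qconfig = tq.get_default_qat_qconfig("fbgemm")  # x86 backend; use "qnnpack" for ARM
    tq.prepare_qat(model, inplace=True)
    return model

def finalize_int8(model: torch.nn.Module) -> torch.nn.Module:
    """After QAT fine-tuning, convert fake-quant modules to real int8 modules."""
    model.eval()
    return tq.convert(model, inplace=False)
```

One thing to watch when combining this with pruning: `prepare_qat` swaps `Conv2d` modules for their QAT variants, so it is probably safest to apply (or re-apply) the pruning masks after that step so the masks keep zeroed weights at zero during the QAT fine-tune.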
Describe alternatives you've considered
I have been looking at the YOLOv8 recipes by SparseML. While these have given me a lot of ideas, I'm not sure which layers to quantize and prune.
Additional context
I'm also unsure what type of sparsification algorithm SparseML uses on the backend. Is it the rigged lottery (RigL)?