integrations/neural-magic/ #8506
Replies: 3 comments 2 replies
-
"Can I use DeepSparse on my Jetson Nano Dev Kit? Do I need to convert my YOLO model to TensorRT for that? The blog mentions converting it to ONNX format. Will using DeepSparse make my model run faster on my Jetson?" |
Beta Was this translation helpful? Give feedback.
1 reply
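As a point of reference for the ONNX export step mentioned in the question above, here is a minimal sketch using the Ultralytics API. It assumes the `ultralytics` package is installed; `yolov8n.pt` is a placeholder checkpoint and the export writes an `.onnx` file alongside it.

```python
from ultralytics import YOLO

# Load a YOLOv8 checkpoint (yolov8n.pt is a placeholder; use your own weights)
model = YOLO("yolov8n.pt")

# Export to ONNX — DeepSparse consumes ONNX models, not TensorRT engines
model.export(format="onnx")
```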
-
Which version of YOLO was DeepSparse tested on? In version 8.2.2, I'm unable to get it to work.
1 reply
-
I'm curious whether I can use SparseML with the OpenAI CLIP model. Is there code or support for this?
0 replies
-
integrations/neural-magic/
Learn how to deploy your YOLOv8 models rapidly using Neural Magic’s DeepSparse. This guide focuses on integrating Ultralytics YOLOv8 with the DeepSparse Engine for high-speed, CPU-based inference, leveraging advanced neural network sparsity techniques.
https://docs.ultralytics.com/integrations/neural-magic/
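As a rough illustration of the workflow the linked guide covers, the sketch below runs a YOLOv8 ONNX model through a DeepSparse pipeline for CPU inference. It assumes the `deepsparse` package is installed and that the model has already been exported to ONNX; the model and image paths are placeholders.

```python
from deepsparse import Pipeline

# Path to a YOLOv8 model already exported to ONNX (placeholder path)
model_path = "yolov8n.onnx"

# Create a DeepSparse pipeline for the YOLOv8 task (runs on CPU)
yolo_pipeline = Pipeline.create(task="yolov8", model_path=model_path)

# Run inference on an image (placeholder path) and inspect the detections
outputs = yolo_pipeline(images=["image.jpg"])
print(outputs)
```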