Hi! I know that PyTorch can use MPS to accelerate inference on Apple computers, but ONNX also provides the ability to use the ANE (Apple Neural Engine).
Have you tried converting BS-Roformer or Demucs to ONNX or CoreML?
My previous attempts to do so were unsuccessful, unfortunately.
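For reference, below is a minimal sketch of the kind of export being discussed. It assumes the `demucs` package's `get_model` API, an example model name ("htdemucs"), and a guessed stereo waveform input shape; in practice the complex-valued STFT operations inside Demucs/BS-Roformer are typically where tracing or conversion breaks down, so this should be read as an illustration of the attempt rather than a working recipe.

```python
import torch
import coremltools as ct
from demucs.pretrained import get_model  # assumes the `demucs` package is installed

# Model name is just an example; get_model may return a "bag" of sub-models
bag = get_model("htdemucs")
model = bag.models[0] if hasattr(bag, "models") else bag
model.eval()

# Dummy stereo waveform: (batch, channels, samples) -- shape is a guess
dummy = torch.randn(1, 2, 44100 * 10)

# ONNX route (this export step is often where the complex STFT ops fail)
torch.onnx.export(
    model, dummy, "htdemucs.onnx",
    opset_version=17,
    input_names=["waveform"], output_names=["sources"],
)

# CoreML route: trace first, then convert; ComputeUnit.ALL lets CoreML
# schedule supported layers onto the ANE/GPU/CPU as it sees fit
traced = torch.jit.trace(model, dummy)
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="waveform", shape=dummy.shape)],
    compute_units=ct.ComputeUnit.ALL,
)
mlmodel.save("htdemucs.mlpackage")
```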
Hey, thanks for the heads up! That's pretty cool - I haven't tried it, but if you could get the models which already use ONNX (e.g. UVR_MDXNET_KARA_2.onnx, UVR-MDX-NET-Inst_HQ_3.onnx, UVR-MDX-NET-Inst_HQ_4.onnx) working with the ANE, that would probably be easier than converting the architecture of one of the others.
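For what it's worth, a quick way to test whether those existing ONNX models can be pushed toward the ANE is to load them through onnxruntime's CoreML execution provider. This sketch assumes a macOS onnxruntime build that ships the CoreML EP; the model filename is just one of the examples above, not a verified ANE-compatible model.

```python
import onnxruntime as ort

# Request CoreML first, falling back to CPU for any unsupported operators.
# CoreML itself then decides which layers can actually run on the ANE.
session = ort.InferenceSession(
    "UVR-MDX-NET-Inst_HQ_3.onnx",
    providers=["CoreMLExecutionProvider", "CPUExecutionProvider"],
)

# If "CoreMLExecutionProvider" is missing here, the wheel was built without it.
print(session.get_providers())

# Inspect the expected input so a real spectrogram chunk can be fed in.
for inp in session.get_inputs():
    print(inp.name, inp.shape, inp.type)
```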
However, I'm not sure I really understand the goal - at least on my personal MacBook with an M3 Max, all of the models in audio-separator that I use frequently already make full use of my GPU!
Gotcha; I'm afraid I don't know where to start with that (I'd imagine it needs work in PyTorch first?) - PRs very welcome though if you can figure it out! 🙏