Can't load simple model (with 8bit and 16bit inputs) #28
Do you know if your model loads using onnxruntime from C or Python?
```python
import onnx
onnx.load("<path>/model.onnx")
```
works.
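(Note: onnx.load only parses and validates the protobuf; it does not execute the model. A minimal sketch of checking the same file against the Python onnxruntime bindings, assuming they are installed and using hypothetical input names "x" and "y" taken from the test case described below, might look like this:)

```python
# Sketch: run the same model through the Python onnxruntime bindings.
# The input names "x"/"y", the shapes, and the INT16 dtype are assumptions
# based on the "test_min_int16" case described below; adjust to the real model.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("<path>/model.onnx")
x = np.array([1, 2, 3], dtype=np.int16)
y = np.array([3, 2, 1], dtype=np.int16)
print(sess.run(None, {"x": x, "y": y}))
```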
OK, thanks, so it is likely a bug in this package! I don't have enough time to work on this. If you feel like doing it, here is one approach to pinpoint the bug:
It is in onnxruntime :( Now I'm not sure how the Python version works.
Thanks a lot for digging into this! I never looked into the Python implementation, but I am also curious how it works.
Hi, nice package!
I'm trying to add new ops to ONNX.jl, and I use this package to test whether the ONNX file is valid (loadable and returns the right results).
I'm using the ONNX backend test suite and I think there is a bug in this package, but I'm not sure.
Here is a minimal working example:
That's a simple model for "test_min_int16": model.zip. Basically, it's a min(x, y) graph where x, y are INT16. And I get this:
I think that should work.
For "test_min_int32" is it working. (the whole test suit)