bfloat16 Support #136
Most RTNeural layers are agnostic to the input data type. Although they're mostly designed to work with floating-point data types, it's also 100% possible to use most RTNeural layers with types like […]. I haven't tried using RTNeural yet with a […]. We should probably add some tests to the RTNeural test suite, to explicitly make sure that […].
I tried it; it doesn't seem to work with std::bfloat16_t. Some parts need to be statically cast, which solved only part of the problem. It would be really nice if you could have a quick look at it. So the answer is: it doesn't work.
Thanks for the update! Would it be possible to share a compiler log with any error messages that resulted? It would also be nice to know which compiler and RTNeural backend you were using in your test. If you already have a public repo or something similar where you've been doing your testing, that would be very useful as well!
Is there any way to use the model with the bfloat16 data type?