This repository has been archived by the owner on Nov 16, 2023. It is now read-only.

Fully convolutional network - cannot reuse inference session with different input shape #301

Open
hamster3d opened this issue May 28, 2021 · 1 comment

Comments

@hamster3d

I have a fully convolutional network with variable input shape, namely (None, None, None, 3).
After the first inference (with shape [1,116,32,3]), when I try to provide an input with a different shape I get a shape validation error:
Uncaught (in promise) Error: input tensor[1] check failed: expected shape '[1,116,32,3]' but got [1,205,40,3]

This error doesn't appear if all subsequent requests use the same input shape.

The workaround for now is to reload the model.

@fs-eire
Contributor

fs-eire commented Sep 10, 2021

Thanks for your feedback.

ONNX.js assumes that, for a single inference session, the graph is static, i.e. the shape of every value node in the graph will not change.

If you have only a few different input shapes, you can create a separate inference session for each of them, or you can try ONNX Runtime Web, which implements a shader key to resolve this problem. However, if you have many different input shapes (say, 100+), the WebGL backend may not be an option, because it builds a different shader program for each input shape. Too many input shapes means too many WebGL programs, which will quickly reach the browser's limit and eventually fail.
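
A minimal sketch of the per-shape session workaround with the ONNX.js API (the model path, the `webgl` backend hint, and the shape-key scheme are illustrative assumptions, not part of the library):

```js
import { InferenceSession, Tensor } from 'onnxjs';

// Cache one session per input shape so each session only ever sees a single,
// fixed shape. This is only workable when the set of shapes is small.
const sessionsByShape = new Map();

async function getSession(dims) {
  const key = dims.join('x');
  if (!sessionsByShape.has(key)) {
    const session = new InferenceSession({ backendHint: 'webgl' });
    await session.loadModel('./model.onnx'); // placeholder model path
    sessionsByShape.set(key, session);
  }
  return sessionsByShape.get(key);
}

async function infer(data, dims) {
  const session = await getSession(dims);
  const input = new Tensor(data, 'float32', dims);
  // run() accepts an array of input tensors and resolves to a map of outputs.
  const outputMap = await session.run([input]);
  return outputMap.values().next().value;
}
```

Note that the first run for each new shape still pays the shader-compilation cost, and every cached session keeps its own WebGL programs alive, so this only scales to a handful of distinct shapes.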


We are working on migrating ONNX.js to ONNX Runtime Web, which offers an enhanced user experience and improved performance. Please visit ONNX Runtime Web for more details.
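
For comparison, a sketch of the same model under ONNX Runtime Web, where a single session can accept varying input shapes (the model path, execution provider, and the input name `input` are assumptions; check `session.inputNames` for the real name):

```js
import * as ort from 'onnxruntime-web';

// One session, reused across different spatial sizes.
const session = await ort.InferenceSession.create('./model.onnx', {
  executionProviders: ['webgl'],
});

for (const [h, w] of [[116, 32], [205, 40]]) {
  const data = new Float32Array(1 * h * w * 3);
  const feeds = { input: new ort.Tensor('float32', data, [1, h, w, 3]) };
  const results = await session.run(feeds);
  console.log(Object.keys(results)); // output names
}
```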
