
Is there any way to solve cuda out of memory problem when input image is large? #45

Open
universewill opened this issue Oct 28, 2020 · 1 comment

@universewill

Is there any way to solve cuda out of memory problem when input image is large?
My input is about 1240x650. How can I get around this problem?

@Mukosame
Owner

Hi @universewill, your input is really large! Although there are some tricks (see this issue) we can try to make the network accept larger images, I'm afraid your input would still be too large.
Sorry again that I haven't provided a script supporting large-image inference. If you are willing to rewrite the dataloading & test function, here is the general idea of how to achieve it:

  • First, set a size threshold based on your GPU memory cap, then iteratively crop your input into patches with a padding (overlap) region so that every patch fits within the threshold.
  • Send the patch sequence through inference, then stitch the outputs back together iteratively in the same manner (see the sketch after this list).
    This way, you should be able to test on large input images. (Just pay attention to the overlap areas and blend them naturally.)
    Hope this solves your question!
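
A minimal sketch of the crop-and-stitch idea above, assuming a PyTorch model `net` that maps a `(1, C, h, w)` tensor to an output of the same spatial size. The names `net`, `tile`, and `overlap` are illustrative only and not part of this repository's code:

```python
import torch

def feather_mask(ph, pw, overlap, edges):
    """Linear ramp on the edges that border a neighbouring patch, so overlaps blend."""
    mask = torch.ones(1, ph, pw)
    ramp = torch.linspace(0.0, 1.0, overlap)
    if edges["top"]:
        mask[:, :overlap, :] *= ramp.view(-1, 1)
    if edges["bottom"]:
        mask[:, -overlap:, :] *= ramp.flip(0).view(-1, 1)
    if edges["left"]:
        mask[:, :, :overlap] *= ramp.view(1, -1)
    if edges["right"]:
        mask[:, :, -overlap:] *= ramp.flip(0).view(1, -1)
    return mask

@torch.no_grad()
def infer_tiled(net, img, tile=512, overlap=64, device="cuda"):
    """img: (C, H, W) float tensor; returns the stitched (C, H, W) output."""
    c, h, w = img.shape
    out = torch.zeros_like(img)
    weight = torch.zeros(1, h, w)
    stride = tile - overlap
    for top in range(0, h, stride):
        for left in range(0, w, stride):
            # Clamp the patch to the image and shift it back so every patch
            # is exactly `tile` pixels (or the whole image if it is smaller).
            bottom, right = min(top + tile, h), min(left + tile, w)
            top0, left0 = max(bottom - tile, 0), max(right - tile, 0)
            patch = img[:, top0:bottom, left0:right].unsqueeze(0).to(device)
            pred = net(patch).squeeze(0).float().cpu()
            # Feather only the edges that touch a neighbouring patch,
            # then accumulate weighted predictions and weights.
            mask = feather_mask(
                pred.shape[-2], pred.shape[-1], overlap,
                edges={"top": top0 > 0, "bottom": bottom < h,
                       "left": left0 > 0, "right": right < w},
            )
            out[:, top0:bottom, left0:right] += pred * mask
            weight[:, top0:bottom, left0:right] += mask
    return out / weight.clamp(min=1e-8)
```

With a 1240x650 input, `infer_tiled(net, image_tensor)` would run at most one 512x512 patch through the network at a time, so the memory threshold can be tuned simply via `tile`.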

@Mukosame Mukosame self-assigned this Oct 29, 2020