
CUDA memory efficiency #126

Open
Doan-IT opened this issue Oct 30, 2024 · 2 comments

Comments


Doan-IT commented Oct 30, 2024

  • Thanks for sharing a great project. I tested it offline and ran into a CUDA out-of-memory error.
  • Looking at the demo.ipynb notebook tutorial, I saw that you put the whole video on the GPU (video = video.cuda()) and run the CoTrackerThreeOffline forward pass with fmaps_chunk_size=200. This is expensive in GPU memory and inefficient.
  • Is it possible to keep the video split into frames on the CPU and process each frame in turn on the GPU?
  • I am not sure about the model's internal logic, but such processing would reduce GPU memory usage a lot and make the project more accessible.
    Thanks.
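For the data-transfer part of the question, the generic PyTorch pattern would be to keep the full video in host RAM and move only small windows of frames to the GPU, copying results back as you go. This is just a sketch of that pattern with placeholder names (track_in_chunks, chunk_size are mine; model stands in for the tracker). Whether CoTracker can actually produce correct tracks from independent windows depends on how far it correlates features across frames, which this sketch does not address.

```python
import torch

def track_in_chunks(video_cpu, model, chunk_size=64, device="cuda"):
    """Keep the full video in host RAM; ship only small windows to the GPU.

    video_cpu: float tensor of shape (T, C, H, W), resident on the CPU.
    model: any callable mapping a (t, C, H, W) batch to per-frame outputs.
    """
    outputs = []
    for start in range(0, video_cpu.shape[0], chunk_size):
        # Copy one window of frames to the device (async if memory is pinned).
        frames = video_cpu[start:start + chunk_size].to(device, non_blocking=True)
        with torch.no_grad():
            out = model(frames)
        outputs.append(out.cpu())  # move results back so device memory is freed
        del frames, out
        if device.startswith("cuda"):
            torch.cuda.empty_cache()  # return cached blocks between windows
    return torch.cat(outputs, dim=0)
```

Pinning the CPU tensor with video_cpu.pin_memory() first would let the non_blocking copies overlap with compute.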
@Haozong-Zeng

Same issue here. I tried running the demo on a 640×480 video with 3000 frames; it used 57 GB of VRAM (20 GB dedicated plus shared memory) and therefore ran very slowly. My project usually involves processing even longer videos.
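A quick back-of-envelope check (my arithmetic, using the sizes reported above) shows why uploading the whole clip at once is so costly: the decoded float32 frames alone need about 11 GB before any feature maps are computed.

```python
# Memory needed just to hold the decoded clip as float32, before any features.
frames, height, width, channels = 3000, 480, 640, 3
bytes_per_float32 = 4
raw_gb = frames * height * width * channels * bytes_per_float32 / 1e9
print(f"{raw_gb:.1f} GB")  # ~11.1 GB for the raw pixels alone
```

Intermediate feature maps and correlation volumes multiply this several-fold, which is consistent with the ~57 GB observed.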

@gjpblabla

Hello, have you solved this problem? I was wondering whether I could change the batch size, but I don't know how to do it.
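I have not verified this, but the only batch-like knob mentioned in this thread is the fmaps_chunk_size argument from the demo.ipynb call quoted above, which controls how many frames are featurized at a time; lowering it (and/or passing a window of frames rather than the full clip) should trade speed for peak VRAM. A hypothetical sketch; everything except fmaps_chunk_size is a placeholder, so check demo.ipynb for the real signature:

```python
# Hypothetical call, adapted from the demo.ipynb invocation quoted above.
# Only fmaps_chunk_size appears in this thread; the other names are placeholders.
pred_tracks, pred_visibility = model(
    video_window,         # a small slice of frames instead of the full clip
    fmaps_chunk_size=50,  # lowered from the demo's 200 to cut peak VRAM
)
```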
