Thanks for sharing a great project. I tested it offline and ran into a CUDA out-of-memory error.
Looking at the demo.ipynb notebook tutorial, I saw that the whole video is moved to the GPU with video = video.cuda() and then processed by the CoTrackerThreeOffline forward pass with fmaps_chunk_size=200. This is expensive in GPU memory and inefficient.
Would it be possible to keep the video frames on the CPU and move them to the GPU one frame (or one small chunk) at a time?
I am not sure whether the model's logic allows it, but such processing would reduce GPU memory usage a lot and make the model more accessible; a rough sketch of the idea is below.
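Here is a minimal sketch of what I mean, under the assumption that the model exposes its per-frame feature extractor as `model.fnet` (hypothetical attribute name; `compute_fmaps_streaming` and `chunk_size` are also just names I made up for illustration). The full video stays on the CPU and only a chunk of frames is ever resident on the GPU:

```python
import torch

def compute_fmaps_streaming(model, video_cpu, chunk_size=16, device="cuda"):
    """Sketch: stream CPU-resident frames to the GPU in chunks for feature extraction.

    video_cpu: float tensor of shape [B, T, C, H, W], kept on the CPU.
    model.fnet: assumed per-frame feature encoder (name may differ in the real code).
    """
    B, T, C, H, W = video_cpu.shape
    fmaps = []
    with torch.no_grad():
        for start in range(0, T, chunk_size):
            # Move only `chunk_size` frames to the GPU at a time.
            chunk = video_cpu[:, start:start + chunk_size].to(device, non_blocking=True)
            feats = model.fnet(chunk.reshape(-1, C, H, W))  # assumed [B*T_chunk, C, H, W] input
            fmaps.append(feats.reshape(B, -1, *feats.shape[1:]).cpu())
            del chunk, feats
            torch.cuda.empty_cache()
    # The feature maps are collected back on the CPU; this only helps overall
    # if the downstream tracking stages can also consume them chunk-wise.
    return torch.cat(fmaps, dim=1)
```

I realize the rest of the pipeline would also need to work chunk-wise for this to pay off, so this is only meant to illustrate the memory pattern I am asking about, not a working patch.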
Thanks.
Same issue here. I was trying to run the demo on a 640×480 video with 3000 frames; it used about 57 GB of memory (20 GB of VRAM plus shared memory) and therefore ran very slowly. My project usually involves processing even longer videos.
Thanks.