Switch cache from theine to otter #405
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Coverage diff (master vs. #405):

| | master | #405 | +/- |
| --- | --- | --- | --- |
| Coverage | 81.44% | 81.45% | +0.01% |
| Files | 40 | 40 | |
| Lines | 7200 | 7204 | +4 |
| Hits | 5864 | 5868 | +4 |
| Misses | 961 | 961 | |
| Partials | 375 | 375 | |

☔ View full report in Codecov by Sentry.
@FZambia Hi, I’m the author of Theine. Could you please provide more details about the memory leak issue so I can debug it? Thanks!
@Yiling-J hello, nice to see you here :) Probably a heap profile will give you more insights than it gave to me: [heap profile] (this was a heap from a Centrifugo node in production on which memory leaked quite a lot above normal values). The setup I used to reproduce the memory leak locally involves running the modified Go example from this repo and connecting from a Node.js script. The resulting heap was very similar to the one above. Since I have not identified the root cause, I can't be 100% sure that the issue is in `theine`. I already tried to come up with a minimal leak reproducer – but all my attempts failed, it just works properly. In general, my thinking is that somehow the theine cache keeps references to objects even after their TTL has expired, so that GC does not remove them. But it's just an assumption.
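For anyone trying to reproduce this: heap profiles like the one referenced above can be collected with the standard `net/http/pprof` handler. This is a generic sketch, not the exact Centrifugo or reproducer setup; the address and port are arbitrary.

```go
// Minimal sketch of exposing heap profiles from a Go process via the standard
// library. Generic example only — not the exact setup used in this thread.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
	// The heap profile is then available at
	//   http://localhost:6060/debug/pprof/heap
	// and can be inspected with:
	//   go tool pprof http://localhost:6060/debug/pprof/heap
	log.Println(http.ListenAndServe("localhost:6060", nil))
}
```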
@FZambia I think I found the issue. Theine stores removed entries in a sync.Pool for reuse. In the case of centrifuge, because the TTL is very short and the key is the data string, Theine deletes expired entries proactively and moves them to the sync pool very fast. This can result in a large number of entries in the pool, consuming significant memory. I have created a syncpool branch to address this issue. If you have time, could you try this branch and see if the memory usage issue is still reproducible?
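If I read the description above correctly, the pattern is roughly the following. This is a simplified sketch with hypothetical names (`entry`, `evictExpired`, `newEntry`), not Theine's actual internals:

```go
// Simplified sketch of the pattern described above: expired cache entries are
// handed back to a sync.Pool for reuse instead of being dropped immediately.
// Type and function names here are hypothetical, not Theine's real code.
package cache

import "sync"

type entry struct {
	key   string
	value []byte
}

var entryPool = sync.Pool{
	New: func() any { return new(entry) },
}

// evictExpired is called proactively when an entry's TTL passes.
func evictExpired(e *entry) {
	// Zero the fields so the pooled entry does not pin the old key/value...
	e.key, e.value = "", nil
	// ...and return it to the pool for the next Set to reuse.
	entryPool.Put(e)
}

// newEntry is used on Set: reuse a pooled entry if one is available.
func newEntry(key string, value []byte) *entry {
	e := entryPool.Get().(*entry)
	e.key, e.value = key, value
	return e
}
```

With a very short TTL, many entries can be evicted into the pool between reuse opportunities, which is the accumulation scenario described above.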
After thinking about it a bit and running some simple benchmarks, I think the sync pool might not be the cause. If entries in the sync pool are not used, they will be GCed after two GC cycles. And "not used" means there are no new `Set` operations, so memory usage should remain stable. And if new `Set` operations occur, they will reuse entries from the pool, still keeping memory usage stable. @FZambia could you also share an Otter profile for comparison? From the profile, the in-use space is not large (0.23GB). What would be the ideal memory usage if there were no leaks?
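The point about GC cycles can be sanity-checked with a tiny standalone program. This illustrates `sync.Pool` semantics in the current Go runtime (primary plus victim cache), not a Theine benchmark:

```go
// Demo of the sync.Pool behavior mentioned above: pooled objects that are
// never reused do not pin memory indefinitely — the pool is drained across
// GC cycles (fully after two, because of the victim cache).
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	var allocs int
	pool := sync.Pool{
		New: func() any {
			allocs++
			return make([]byte, 1<<20) // 1 MiB buffer
		},
	}

	// Park some buffers in the pool without ever reusing them.
	for i := 0; i < 8; i++ {
		pool.Put(make([]byte, 1<<20))
	}

	// Two GC cycles drain both the pool's primary and victim caches.
	runtime.GC()
	runtime.GC()

	// Nothing left to reuse, so Get falls back to New.
	_ = pool.Get()
	fmt.Println("allocations via New:", allocs) // typically prints 1
}
```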
I did experiments yesterday with the new commit – and yep, it does not help, the growth is still there. I'll run my reproducer for a longer time and share heap profiles.
@Yiling-J hello, Ran with `theine`: […]. Then tried with `otter`: […]. One more thing btw, the Centrifugo customer who reported the issue tried a new version in production where […].
In my test setup this fixes a memory leak when using the `CompressionPreparedMessageCacheSize` option. The reason for the memory leak with `theine` is unclear to me.
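For context, a hedged sketch of turning the mentioned option on. The assumption here (not confirmed by this thread) is that `CompressionPreparedMessageCacheSize` is a field on `centrifuge.Config` and is sized in bytes — check the centrifuge docs for the exact placement, type, and units.

```go
// Hedged sketch: enabling the prepared-message compression cache in a
// centrifuge-based node. The placement of CompressionPreparedMessageCacheSize
// on centrifuge.Config and its byte-based sizing are assumptions.
package main

import (
	"log"

	"github.com/centrifugal/centrifuge"
)

func main() {
	node, err := centrifuge.New(centrifuge.Config{
		// The option whose use triggered the memory growth with theine
		// in this thread (assumed field placement and units).
		CompressionPreparedMessageCacheSize: 64 << 20,
	})
	if err != nil {
		log.Fatal(err)
	}
	_ = node
	// Normal node setup (handlers, transports, node.Run()) would follow here.
}
```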