Calling group_send reconnects every time #100
Each call of the connection context manager causes a reconnection to Redis, so in group_send there are number_of_channels + 1 reconnections (with the same parameters in my case). Why is it so, and is there a way to optimize this?

Comments
May be related to #83.
Just checked: it works 7 times faster if I pass the connection from …
Is this on the released version of channels_redis?
Yes, those were metrics from the pip version; on the git one there are 2 connections per message, and if I make it one, the speed increases from 7.34 ms to 5.3 ms per message. But why does it connect for each message? Wouldn't it be better to just keep the first connection?
Ah yes, the git version is efficient at sending to lots of clients, but it will still connect every time you call group_send. I'll change this issue to represent the fact that we should have connection persistence, but I wouldn't expect it to get worked on in the next few months unless someone other than me has time, as I have a lot of other stuff to do as well.
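To make the difference concrete, here is a minimal sketch (assuming an aioredis 1.x-style API; the Redis URL and key name are illustrative, not taken from channels_redis) of connecting per message versus reusing one connection:

```python
import asyncio

import aioredis  # assuming the aioredis 1.x style API


async def send_reconnecting(messages):
    # What the issue describes: every message opens (and closes) its own
    # Redis connection, paying the connection setup cost each time.
    for message in messages:
        conn = await aioredis.create_redis("redis://localhost:6379")
        await conn.rpush("example-channel", message)
        conn.close()
        await conn.wait_closed()


async def send_persistent(messages):
    # With connection persistence the setup cost is paid once and every
    # message reuses the same connection.
    conn = await aioredis.create_redis("redis://localhost:6379")
    try:
        for message in messages:
            await conn.rpush("example-channel", message)
    finally:
        conn.close()
        await conn.wait_closed()


asyncio.get_event_loop().run_until_complete(send_persistent([b"hello"] * 10))
```

The second variant is what "connection persistence" would buy: the connection setup cost is paid once instead of once per message.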
Experienced the same problem. Added a PR which fixes the issue for me.
I think it should be …; I can see that the layer implements …
@andrewgodwin How are connections closed in "normal use"? Which part of the system does that? If having …
@Banksee The Redis connections are closed once the …
@andrewgodwin Yes, that is the case with the current implementation, but I wonder how we should close persistent connections. I am thinking of some counter for this, i.e.:
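Something along these lines, as a hedged sketch only (the RefCountedConnection name and the acquire/release methods are illustrative, not from channels_redis or the original comment):

```python
import asyncio

import aioredis  # assuming the aioredis 1.x style API


class RefCountedConnection:
    """Share one Redis connection and close it when the last user is done."""

    def __init__(self, url="redis://localhost:6379"):
        self._url = url
        self._conn = None
        self._refs = 0
        self._lock = asyncio.Lock()

    async def acquire(self):
        # Bump the counter and lazily open the shared connection.
        async with self._lock:
            if self._conn is None:
                self._conn = await aioredis.create_redis(self._url)
            self._refs += 1
            return self._conn

    async def release(self):
        # Drop the counter and close the connection once nobody holds it.
        async with self._lock:
            self._refs -= 1
            if self._refs == 0 and self._conn is not None:
                self._conn.close()
                await self._conn.wait_closed()
                self._conn = None
```

Each send would then be bracketed by acquire()/release(), so the connection is torn down as soon as the last in-flight operation completes.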
That will still fail tests, though - you can't persist any coroutine or task beyond the end of the last blocking …
@andrewgodwin Our team (@genialis) stumbled upon this problem recently, with servers starting to crash when there was increased traffic through the Channels layer; the system ran out of ports because of the constant reconnecting. We've managed to implement persistent connections and get the test suite to work with them (the code is at https://github.com/genialis/channels_redis_persist). Because our approach is substantially different from the one taken by @Anisa in #104, we'll open a new PR once the implementation passes internal testing and we've pulled in changes from upstream master.
Alright - I'll take a look at the PR when it's ready. If you've managed to make the connections persistent without leaking coroutines, then I would very much like that in the main repo!
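One way to reconcile persistent connections with the "nothing may outlive the event loop of the last blocking call" constraint is to cache one connection per running event loop. A hedged sketch under that assumption (names are illustrative; this is not the genialis or channels_redis implementation):

```python
import asyncio

import aioredis  # assuming the aioredis 1.x style API


class PerLoopConnections:
    """Cache one Redis connection per running event loop.

    A long-running server loop keeps reusing its connection, while test
    runs that create and destroy event loops never leak a connection (or
    its coroutines) across loops.
    """

    def __init__(self, url="redis://localhost:6379"):
        self._url = url
        self._connections = {}  # event loop -> open connection

    async def get(self):
        loop = asyncio.get_event_loop()
        if loop not in self._connections:
            self._connections[loop] = await aioredis.create_redis(self._url)
        return self._connections[loop]

    async def close_for_current_loop(self):
        # Call from teardown so nothing outlives the loop it was created on.
        conn = self._connections.pop(asyncio.get_event_loop(), None)
        if conn is not None:
            conn.close()
            await conn.wait_closed()
```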
I made some changes regarding this, please have a look.
I'm wondering if we could use Redis pub/sub to somehow mimic the behaviour of group_send?
You cannot use pub/sub, as it doesn't queue messages for processes that aren't currently listening, whereas using lists does. Additionally, I believe #117 should have solved this, so I'm going to close it. Reopen if the master version of this code still reconnects on group send!
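The distinction can be shown with plain redis-py calls (a hedged illustration only; the key and channel names are made up and this is not channel-layer code):

```python
import redis  # synchronous redis-py client, used here only for illustration

r = redis.Redis(host="localhost", port=6379)

# List-based delivery (what the channel layer does): the message is stored
# in Redis and waits until some consumer pops it, even one started later.
r.rpush("asgi:example-channel", b"hello")
message = r.blpop("asgi:example-channel", timeout=5)

# Pub/sub delivery: PUBLISH returns how many subscribers got the message;
# if no process is subscribed at that instant, the message is simply lost.
receivers = r.publish("example-topic", b"hello")
print(receivers)  # 0 when nobody was listening
```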
#117 works!
NB: connection pooling (#117) does not work when using async_to_sync() around group_send(). See #223 (comment).
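For reference, the pattern that note refers to is the usual Channels idiom for calling the async layer from synchronous code, roughly:

```python
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

channel_layer = get_channel_layer()

# Calling the async group_send() from synchronous code. Per the note above,
# the #117 pooling reportedly does not help here, presumably because
# async_to_sync runs the coroutine on a short-lived event loop, so any
# connection cached against that loop is thrown away with it.
async_to_sync(channel_layer.group_send)(
    "example-group",  # group name and message body are illustrative
    {"type": "chat.message", "text": "hello"},
)
```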