
Calling group_send reconnects every time #100

Closed · imbolc opened this issue Apr 15, 2018 · 21 comments

imbolc commented Apr 15, 2018

Each call of the connection context manager causes a reconnection to Redis, so in group_send there are number_of_channels + 1 reconnections (with the same parameters, in my case). Why is that, and is there a way to optimize it?

imbolc (Author) commented Apr 15, 2018

May be related to #83

imbolc (Author) commented Apr 15, 2018

Just checked: it works 7 times faster if I pass the connection from group_send down to send.
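
A hypothetical sketch of what that change could look like; the method names, self.host, and the connection= parameter are illustrative only, not channels_redis's real internals:

```python
# Hypothetical sketch only: _group_channels, self.host and the
# connection= parameter are illustrative, not the actual API.
import aioredis

async def group_send(self, group, message):
    conn = await aioredis.create_redis(self.host)  # connect once
    try:
        for channel in await self._group_channels(conn, group):
            # reuse the same connection for every channel in the group
            await self.send(channel, message, connection=conn)
    finally:
        conn.close()
        await conn.wait_closed()
```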

andrewgodwin (Member) commented

Is this on the released version of channels_redis? The git master branch uses a Lua-based sending script and should be significantly faster and reconnect less.
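
For context, a hedged sketch of how a Lua script can push one message onto many channel lists in a single round trip. This is not the actual channels_redis script, and redis-py is used here purely for brevity; the key names are made up:

```python
# Illustrative only -- not the actual channels_redis Lua script.
import redis

r = redis.Redis()

# KEYS = the per-channel list keys, ARGV[1] = the serialized message.
group_send_script = r.register_script("""
for i = 1, #KEYS do
    redis.call('RPUSH', KEYS[i], ARGV[1])
end
return #KEYS
""")

# One network round trip regardless of how many channels are in the group.
group_send_script(keys=["asgi:chan.a", "asgi:chan.b"], args=[b"message"])
```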

imbolc (Author) commented Apr 16, 2018

Yes, those were metrics from the pip version. On the git one there are 2 connections per message, and if I make it one, the speed increases from 7.34 ms to 5.3 ms per message. But why does it connect for each message? Wouldn't it be better to just keep the first connection?

andrewgodwin (Member) commented

Ah yes, the git version is efficient at sending to lots of clients but will still connect every time you call group_send because there's no connection persistence between calls. It used to be in there but it ended up causing connection leaks and making unit testing harder, and so I removed the persistence in favour of making it slow and correct, with the hope we'll get time to properly test and put persistence back in later.

I'll change this issue to represent the fact we should have connection persistence, but I wouldn't expect to get it worked on in the next few months unless someone other than me has time, as I have a lot of other stuff to do as well.

andrewgodwin changed the title from "It reconnects a lot of time in group_send" to "Calling group_send reconnects every time" on Apr 16, 2018
t-persson commented
Experienced the same problem. Added a PR which fixes the issue for me.

denolehov commented May 30, 2018

I think it should be Channels' responsibility to call self.channel_layer.close(), so, given that, what needs to be done in @Anisa's PR is adding .close() calls to the layer fixtures.

I can see that the layer implements a .close() method, but it never gets called in Channels.

andrewgodwin (Member) commented

The close() method is only really needed for tests, and even then I think I removed the need for it there. It's not needed in normal use.

denolehov commented May 31, 2018

@andrewgodwin how are connections closed in "normal use"? Which part of the system does that?

If having .close() in the tests is enough for @Anisa's PR, I will be able to submit my changes.

andrewgodwin (Member) commented

@Banksee The Redis connections are normally closed once the receive() or send() call returns. It's done via a context manager, if I remember right. I don't believe it will solve the problems in that PR, but you're welcome to try!
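
A rough sketch of that per-call pattern, assuming aioredis 1.x; the class and attribute names are illustrative, not the actual channels_redis code:

```python
# Illustrative sketch, not the actual channels_redis code: each call
# opens a fresh connection and drops it on exit, hence the reconnecting.
import aioredis

class ConnectionContext:
    def __init__(self, host):
        self.host = host

    async def __aenter__(self):
        self.conn = await aioredis.create_redis(self.host)
        return self.conn

    async def __aexit__(self, exc_type, exc, tb):
        self.conn.close()              # closed as soon as send()/receive()
        await self.conn.wait_closed()  # returns; nothing persists
```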

denolehov commented May 31, 2018

@andrewgodwin yes, that is the case with the current implementation, but I wonder how we should close persistent connections. I am thinking of some counter for this, i.e. (see the sketch after this list):

  1. async with connection(index) as connection: ... call
  2. lock, create connection if counter is 0, return existing connection if counter is >= 1
  3. increase counter, release lock
  4. on __aexit__(...) call
  5. decrease the counter
  6. call loop.call_later(0.3, self.close)
  7. in self.close() check if counter is still 0 after delay and close the connection, no-op otherwise
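
A minimal sketch of that counter idea, assuming aioredis-style connections; every name here is hypothetical and not part of any PR:

```python
# Hypothetical sketch of the refcounting scheme above; connection_factory
# stands in for something like aioredis.create_redis(...).
import asyncio

class PersistentConnection:
    def __init__(self, connection_factory, close_delay=0.3):
        self._factory = connection_factory
        self._close_delay = close_delay
        self._conn = None
        self._counter = 0
        self._lock = asyncio.Lock()

    async def __aenter__(self):
        async with self._lock:                      # step 2: lock
            if self._counter == 0:
                self._conn = await self._factory()  # first user connects
            self._counter += 1                      # step 3
            return self._conn

    async def __aexit__(self, exc_type, exc, tb):
        async with self._lock:
            self._counter -= 1                      # step 5
        loop = asyncio.get_event_loop()
        loop.call_later(                            # step 6
            self._close_delay,
            lambda: asyncio.ensure_future(self.close()),
        )

    async def close(self):
        async with self._lock:                      # step 7: still idle?
            if self._counter == 0 and self._conn is not None:
                self._conn.close()
                await self._conn.wait_closed()
                self._conn = None
```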

andrewgodwin (Member) commented

That will still fail tests, though - you can't persist any coroutine or task beyond the end of the last blocking receive() call, unless you find a way to convince aioredis to have an open connection without a coroutine stuck onto it.

denolehov commented
async_to_sync(aioredis.create_redis(**kwargs)) 😈

jberci (Contributor) commented Jun 21, 2018

@andrewgodwin Our team (@genialis) stumbled upon this problem recently, with servers starting to crash when there was increased traffic through the Channels layer; the system ran out of ports because of the constant reconnecting.

We've managed to implement persistent connections and get the test suite to work with them (the code is at https://github.com/genialis/channels_redis_persist). Because our approach was substantially different from the one taken by @Anisa in #104, we'll open a new PR once the implementation passes internal testing and we've taken in changes from upstream master.

andrewgodwin (Member) commented

Alright - I'll take a look at the PR when it's ready. If you've managed to make the connections persistent without leaking coroutines then I would very much like that in the main repo!

gekco (Contributor) commented Jul 29, 2018

I made some changes regarding this. Please have a look.

ericls commented Aug 13, 2018

I'm wondering if we can use Redis pub/sub to somehow mimic the behaviour of group_send?

andrewgodwin (Member) commented

You cannot use pubsub, as it doesn't queue messages for processes that aren't currently listening, whereas using lists does.
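
To illustrate the difference, a small redis-py sketch (the key and channel names are made up; channels_redis itself works with lists via aioredis):

```python
# Lists queue messages; pub/sub does not.
import redis

r = redis.Redis()

# List-based delivery: the message sits in the list until a consumer
# pops it, even if the consumer starts after the push.
r.rpush("asgi:my-channel", b"hello")
msg = r.blpop("asgi:my-channel", timeout=5)  # -> (b"asgi:my-channel", b"hello")

# Pub/sub delivery: publishing with no subscriber simply drops the
# message; a late subscriber never sees it.
r.publish("asgi:my-channel", b"hello")       # returns 0 receivers
```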

Additionally, I believe #117 should have solved this, so I'm going to close it. Reopen if the master version of this code still reconnects on group send!

ericls commented Aug 14, 2018

#117 works!
Looking forward to the new release.

glitchnsec commented
Confirmed #117 works; upgrading to the latest channels-redis took my team's sorrows away! On behalf of my team, thank you @genialis and the rest of the community!

harmv commented Mar 31, 2022

NB: connection pooling (#117) does not work when using async_to_sync() around group_send()

See: #223 (comment)
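
For reference, a minimal sketch of the pattern being warned about, assuming Channels' standard layer API; the group name and message body are made up, and the explanation in the comments follows the linked discussion:

```python
# Sketch of the problematic pattern: per the linked comment, each
# async_to_sync() call outside a running loop spins up a fresh event
# loop, and the connection pool from #117 is tied to the loop it was
# created on, so every call ends up opening a new Redis connection.
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

channel_layer = get_channel_layer()

async_to_sync(channel_layer.group_send)(
    "my-group",
    {"type": "chat.message", "text": "hello"},
)
```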
