high memory consumption #6
@joseph12631 Thoughts on this?
Just dumped heap on a production system where the memory is leaking like a faulty faucet. I'm also using…
Awesome @netroy, I really need to learn how to examine and group heapdumps as you did.
@jfromaniello https://github.com/bnoordhuis/node-heapdump is your friend
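For anyone following along, a minimal sketch of using that module (assuming `npm install heapdump`; the snapshot files open in Chrome DevTools, where you can diff two snapshots to group retained objects):

```js
var heapdump = require('heapdump');

// Write a snapshot on demand, e.g. from an admin endpoint or a timer.
heapdump.writeSnapshot('/tmp/' + Date.now() + '.heapsnapshot', function (err, filename) {
  if (err) console.error(err);
  else console.log('heap snapshot written to', filename);
});

// Alternatively, just requiring the module lets you trigger a snapshot
// externally with `kill -USR2 <pid>` on POSIX systems.
```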
Bumping the version of…
Not found. Any solution?
I'm seeing this as well, FWIW. Any updates by chance? If not, I'll take a look into it when I get the bandwidth. In the meantime, I plan on hacking together a quick fix by wrapping node-loggly + winston.
I don't know. I posted this issue two years ago and it never got solved. How can you build a product for developers and never take care of such an issue? I'll never ever use or try their product again. Open source doesn't work… We are also a startup for developers, and we put a lot of work and…
This should be updated to allow the bulk endpoint to be used. If you have a high throughput of logs, you are sending A LOT of requests, which will inevitably drive up memory usage. I don't think this is an actual bug; it's just that you are queueing way too many in-flight requests when your application is already under load, compounding the actual problem.
@jcrugzz it is a bug as far as I'm concerned: a bug in the code or a bug in the documentation. I would personally prefer the library to handle this for me. But you shouldn't care about my opinion since, as I mentioned before, I will never use the Loggly service. They don't care about their SDKs.
@jfromaniello The bug fix is to send logs in bulk, which is now supported.

Just for clarity, the REAL bug lies in the default socket pooling in Node (it caps at 5 sockets, and you are queueing many, many more requests), and at some point the requests will still need to be queued by the event loop. Now that I think about it, we could add a custom agent to mitigate some of this problem, but in general the bulk endpoint will be the way to go if this issue is seen.
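For illustration, a hedged sketch of the custom-agent idea: raise the per-host socket cap (Node's default was 5 at the time) and reuse connections with keep-alive. The endpoint path and token are placeholders, and whether a given node-loggly version lets you pass an agent through is an assumption:

```js
var https = require('https');

var logglyAgent = new https.Agent({
  keepAlive: true,  // reuse TCP connections instead of renegotiating each time
  maxSockets: 50    // allow more concurrent requests before queueing
});

var req = https.request({
  host: 'logs-01.loggly.com',
  path: '/inputs/YOUR-TOKEN/tag/http/', // placeholder token
  method: 'POST',
  agent: logglyAgent,
  headers: { 'content-type': 'application/json' }
}, function (res) {
  res.resume(); // drain the response so the socket can be reused
});
req.on('error', function (err) { console.error(err); });
req.end(JSON.stringify({ message: 'hello' }));
```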
@jcrugzz, I don't believe this to be the case. For this particular app, the logging is actually pretty reasonable by most logging standards, but more importantly, it's bottlenecked by the Twitter rate limit. So for a given 15-minute window, it spends maybe one or two minutes doing actual work, and then it sits idle for the rest.

@jfromaniello, I could be mistaken (feel free to correct me @indexzero), but this project isn't sponsored by Loggly.

Edit: Whoops, the…

Edit 2: I stumbled across an open issue for a potential memory leak in another library that I'm using, so it's entirely possible that this issue isn't the culprit.
Poking my head in here because I got the notifications, only to echo @dsimmons in pointing out that this project only exists because @indexzero is a nice man who likes to share things, and Nodejitsu is a nice company that likes to share things. @jfromaniello I know it's frustrating when things don't work, but abusing the author of an open source project is pretty much always poor form. By all means, drop Loggly an email telling them you'd love to use their product in Node with Winston, so they should invest some developer effort in making that more possible. @indexzero thanks for being rad.
Not attacking anyone; I thought the underlying loggly library was official. BTW, we are all good people here. I myself also maintain a lot of OSS…
This issue caused a hard-to-debug bug in my app. The root cause was the fact that my app generates a couple thousand log messages during startup. This means roughly 2000 almost-parallel HTTP requests, which translates to 2000 parallel TCP connections. That brought my MacBook to its knees. It'd be really nice if the loggly transport could do a little buffering and switch to the bulk API, if that's possible.
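As an illustration of the buffering idea, here's a minimal sketch (not the transport's actual code) that collects events in memory and flushes them, newline-separated, to Loggly's documented bulk endpoint; the token and tag are placeholders:

```js
var https = require('https');

var buffer = [];
var MAX_BUFFER = 100;      // flush after this many events...
var FLUSH_INTERVAL = 5000; // ...or at least every 5 seconds

function log(event) {
  buffer.push(JSON.stringify(event));
  if (buffer.length >= MAX_BUFFER) flush();
}

function flush() {
  if (buffer.length === 0) return;
  var body = buffer.join('\n'); // the bulk API expects one event per line
  buffer = [];

  var req = https.request({
    host: 'logs-01.loggly.com',
    path: '/bulk/YOUR-TOKEN/tag/bulk/', // placeholder token and tag
    method: 'POST',
    headers: { 'content-type': 'text/plain' }
  }, function (res) { res.resume(); });
  req.on('error', function (err) { console.error('loggly flush failed', err); });
  req.end(body);
}

setInterval(flush, FLUSH_INTERVAL).unref(); // don't keep the process alive
```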
Does anyone official know the status of this? Where is the actual bug? |
Batching in log files is one of those issues that is far more complex in its implications than in its implementation. What happens if we have 10 log lines buffered, waiting to be sent as a batch, and then the process dies, the server crashes, or the network fails? In the simple case, those log lines get dropped on the floor. Annoyingly, those are probably exactly the ones we wanted, because they describe what was occurring right when the problem happened.

The approach I've used in the past is to persist log lines to disk as immediately as possible (file-system write concerns being a whole different kettle of fish), then check that store for un-persisted lines the next time the process starts and send them off. That is possible, but it gets real messy real fast.

I think the issue we're really trying to solve is "HTTP connections are expensive. When we have 1000 of them at once, things are awful." I suggest the better way to solve that is to have a single long-lived connection to Loggly open that we then stream the log events down, rather than negotiating a separate connection for each one. This could be solved with varying success using keep-alive, SPDY, websockets, etc. It depends on what kinds of things the Loggly API supports.
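For concreteness, a rough sketch of that journal-to-disk idea, assuming a single process and hypothetical `sendToLoggly`/`markDelivered` helpers (the journal path is also a placeholder):

```js
var fs = require('fs');

var JOURNAL = '/var/log/myapp-loggly.journal'; // placeholder path

// 1. Persist each line before attempting delivery.
function log(line) {
  fs.appendFileSync(JOURNAL, line + '\n'); // synchronous on purpose: survive a crash
  sendToLoggly(line, function (err) {
    if (!err) markDelivered(line); // e.g. advance an offset file; omitted here
  });
}

// 2. On startup, replay anything that never got delivered.
function replayJournal() {
  if (!fs.existsSync(JOURNAL)) return;
  fs.readFileSync(JOURNAL, 'utf8')
    .split('\n')
    .filter(Boolean)
    .forEach(function (line) { sendToLoggly(line, function () {}); });
}
```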
The main issue is that winston-loggly uses Loggly's HTTP interface, which involves a new HTTP request for every log line.

I wrote a new winston transport that uses Loggly's syslog interface instead. It creates one permanent connection to Loggly's server, over SSL, and will reconnect if disconnected, with configurable options for how to handle the reconnection. It is RFC 5424 compliant, so your winston log levels get translated to syslog levels and are indexed on Loggly. Please feel free to check it out at https://github.com/mathrawka/winston-loggly-syslog
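This is not that transport's source, but the general pattern it describes looks roughly like the following sketch: one long-lived TLS socket with reconnect-on-close (host and port here are placeholders for Loggly's syslog endpoint):

```js
var tls = require('tls');

var socket = null;

function connect() {
  socket = tls.connect({ host: 'logs-01.loggly.com', port: 6514 }, function () {
    console.log('connected to loggly syslog');
  });
  socket.on('error', function (err) { console.error(err); });
  socket.on('close', function () {
    setTimeout(connect, 1000); // back off briefly, then re-establish
  });
}

function send(line) {
  // only write while the connection is up; a real transport would buffer here
  if (socket && socket.writable) socket.write(line + '\n');
}

connect();
```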
That's awesome @mathrawka. There will be a number of ways to fix this in…
We forked and added bulk support in this repo https://github.com/loggly/winston-loggly-bulk. This should fix the problem. |
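Usage of the forked transport, going by winston 2.x-era conventions (exact option names may differ between versions, so treat this as a sketch; the token and subdomain are placeholders):

```js
var winston = require('winston');
require('winston-loggly-bulk'); // registers winston.transports.Loggly

winston.add(winston.transports.Loggly, {
  token: 'YOUR-CUSTOMER-TOKEN', // placeholder
  subdomain: 'your-subdomain',  // placeholder
  tags: ['nodejs'],
  json: true
});

winston.log('info', 'hello from winston-loggly-bulk');
```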
I don't know if it's `winston-loggly`'s or `node-loggly`'s fault, but I have seen that with this enabled my application consumes a lot of memory under high load. The GC takes care of it after I stop the load tests, but it goes from 28 MB to 800 MB in a few seconds. After I disable winston-loggly (I did it by changing the log level) it goes from 28 MB to 60 MB and remains there the whole time.

I tried generating heapdumps but I couldn't gather any important information.