
high memory consumption #6

Open
jfromaniello opened this issue Dec 5, 2012 · 21 comments

Comments

@jfromaniello

I don't know if it's winston-loggly's or node-loggly's fault, but I have seen that with this enabled my application consumes a lot of memory under high load.

The GC takes care of it after I stop the load tests, but memory goes from 28 MB to 800 MB in a few seconds. After I disable winston-loggly (I did it by changing the log level) it goes from 28 MB to 60 MB and stays there the whole time.

I tried generating heap dumps but I couldn't gather any important information from them.

@indexzero
Member

@joseph12631 Thoughts on this?

@netroy

netroy commented Jan 3, 2013

Just dumped the heap on a production system where memory is leaking like a faulty faucet.
69% of the heap usage is in ClientRequest objects posting to logs.loggly.com:443.

[Screenshot: heap profile, 2013-01-03]

I'm also using [email protected] in the app and making a lot more HTTPS POST requests than the logger; no leaks there.
I updated the version of request in loggly/package.json and will report back if that fixes things.

@jfromaniello
Author

Awesome @netroy, I really need to learn how to examine and group heap dumps the way you did.
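For anyone wanting to reproduce this kind of analysis, a minimal sketch using the third-party `heapdump` npm module could look like the following (the module choice, paths, and signal handling are assumptions; netroy may well have used a different tool):

```js
// Sketch: capture heap snapshots around a load test with the `heapdump`
// npm module, then compare them in Chrome DevTools.
var heapdump = require('heapdump');

// One snapshot before the load test starts.
heapdump.writeSnapshot('/tmp/before.heapsnapshot');

// Another one on demand (e.g. `kill -USR2 <pid>`) while memory is high.
process.on('SIGUSR2', function () {
  heapdump.writeSnapshot('/tmp/after-' + Date.now() + '.heapsnapshot');
});

// Load both files in the Chrome DevTools Memory tab and use the
// "Comparison" view to group retained objects by constructor
// (e.g. ClientRequest), as in the screenshot above.
```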

@netroy

netroy commented Jan 3, 2013

Bumping the version of request did nothing... still super leaky.

@davidbanham

I'm seeing the same behaviour.

[Screenshot, 2013-08-26]

@dertin

dertin commented Nov 14, 2013

No fix found yet. Any solution?

@dsimmons

I'm seeing this as well FWIW.

Any updates by chance? If not, I'll look into it when I get the bandwidth -- in the meantime, I plan on hacking together a quick fix by wrapping node-loggly + winston.

@jfromaniello
Author

I don't know... I posted this issue two years ago and it never got solved. Not even a response from the author.

How can you build a product for developers and never take care of such an important bug in your SDKs?

I'll never ever use or try their product again. Open source doesn't work like this.

We are also a startup for developers, and we put a lot of work and attention into the details of our open-source SDKs.

@jcrugzz
Contributor

jcrugzz commented Jul 16, 2014

This should be updated to allow the bulk endpoint to be used. If you have a high throughput of logs, you are sending A LOT of requests, which will inevitably drive up memory usage. I don't think this is an actual bug; it's just the fact that you are queueing way too many requests in flight when your application is already under load, effectively doubling the actual problem.

@jfromaniello
Author

@jcrugzz it is a bug as far as I'm concerned: a bug in the code or a bug in the documentation.

I personally would prefer the library handle this for me. But you shouldn't care about my opinion since, as I mentioned before, I will never use the Loggly service. They don't care about their SDKs.

@jcrugzz
Contributor

jcrugzz commented Jul 16, 2014

@jfromaniello The bug fix is to send logs in bulk, which is now supported in node-loggly and which I would happily take a PR for :) (cc @dsimmons).

Just for clarity, the REAL bug lies in the default socket pooling in node (it caps at 5 sockets per host and you are queueing many, many more requests), and at some point the requests will still need to be queued by the event loop.

Now that I think about it, we could just add a custom agent to mitigate some of this problem, but in general the bulk endpoint will be the way to go if this issue is seen.
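As a rough illustration of the custom-agent idea (not something winston-loggly exposes today, and whether such an agent could be passed down to the underlying HTTP calls is an assumption):

```js
// Sketch: work around the old default of 5 pooled sockets per host by
// raising the limit or using a dedicated keep-alive agent.
var https = require('https');

// Option 1: bump the global agent (affects every HTTPS request in the process).
https.globalAgent.maxSockets = 50;

// Option 2: a dedicated keep-alive agent that could be handed to the
// underlying request, assuming the library exposed such an option.
var logglyAgent = new https.Agent({ keepAlive: true, maxSockets: 50 });
```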

@dsimmons

@jcrugzz, I don't believe this to be the case. For this particular app, the logging is actually pretty reasonable by most logging standards, but more importantly, it's bottlenecked by the Twitter rate limit. So for a particular 15-minute window, it spends maybe one or two minutes doing actual work, and then it sits idle for another 15 - timeTaken. If there were a case where the logging got clogged, the idle time should be more than sufficient to catch up.

@jfromaniello, I could be mistaken (feel free to correct me, @indexzero), but this project isn't sponsored by Loggly. They make the official node-loggly package instead, which this wraps. Just didn't want them getting caught in the crossfire.

Edit: Whoops, the node-loggly project above looks to be sponsored by Nodejitsu; my mistake.

Edit 2: I stumbled across an open issue for a potential memory leak in another library that I'm using, so it's entirely possible that this issue isn't the culprit.

@davidbanham

Poking my head in here because I got the notifications, only to echo @dsimmons in pointing out that this only exists because @indexzero is a nice man who likes to share things, and Nodejitsu is a nice company that likes to share things.

@jfromaniello I know it's frustrating when things don't work, but abusing the author of an open source project is pretty much always poor form.

By all means, drop Loggly an email telling them you'd love to use their product in node, and with Winston, so they should invest some developer effort in making that more possible.

@indexzero thanks for being rad.

@jfromaniello
Author

Not attacking anyone; I thought the underlying loggly library was official.

BTW, we are all good people here. I myself also maintain a lot of OSS software, even for things I don't own.

@ilkkao

ilkkao commented Jan 13, 2015

This issue caused a hard-to-debug bug in my app. The root cause was that my app generates a couple thousand log messages during startup. That means roughly 2000 nearly parallel HTTP requests, which translates to 2000 parallel TCP connections. It brought my MacBook to its knees.

It'd be really nice if the loggly transport could do a little buffering and switch to the bulk API, if that's possible.
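Something along these lines is what the transport could do internally. A minimal buffering sketch follows; the bulk endpoint host/path and the token placeholder are assumptions based on Loggly's HTTP bulk API, not code from this library:

```js
// Sketch: buffer log events and flush them to Loggly's bulk endpoint
// as newline-separated lines, both by size and by time.
var https = require('https');

var TOKEN = 'your-customer-token';           // hypothetical placeholder
var buffer = [];

function log(event) {
  buffer.push(JSON.stringify(event));
  if (buffer.length >= 100) flush();         // flush when the buffer is full...
}

function flush() {
  if (buffer.length === 0) return;
  var body = buffer.splice(0, buffer.length).join('\n');
  var req = https.request({
    host: 'logs-01.loggly.com',              // assumed bulk API host
    path: '/bulk/' + TOKEN + '/tag/bulk/',   // assumed bulk API path
    method: 'POST',
    headers: { 'content-type': 'text/plain' }
  });
  req.on('error', function (err) { console.error('loggly flush failed', err); });
  req.end(body);
}

setInterval(flush, 5000).unref();            // ...and on a timer
```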

@framerate
Contributor

Does anyone official know the status of this? Where is the actual bug?

@davidbanham

Batching up log lines is one of those issues that is way more complex in its implications than in its implementation.

What happens if we have 10 log lines buffered, waiting to be sent as a batch, then the process dies/server crashes/network fails? In the simple case, those log lines get dropped on the floor. Annoyingly, those are probably exactly the ones we wanted because they describe what was occurring right when the problem occurred.

The approach I've used in the past is to persist log lines to disk as immediately as possible (file-system write concerns being a whole different kettle of fish), then check that store for unsent lines the next time the process starts and send them off.
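A very rough sketch of that persist-to-disk idea (the spool path and the `send` callback are hypothetical, and winston-loggly does not do this itself):

```js
// Sketch: append each log line to a local spool file immediately, then ship
// and truncate the spool out of band (e.g. on startup or on an interval).
var fs = require('fs');

var SPOOL = '/var/log/myapp/loggly.spool';   // hypothetical path

function record(line) {
  // Synchronous append keeps the write as immediate as possible,
  // at the cost of briefly blocking the event loop per line.
  fs.appendFileSync(SPOOL, line + '\n');
}

function shipSpool(send) {
  // Read any lines that never made it to Loggly and send them off.
  if (!fs.existsSync(SPOOL)) return;
  var lines = fs.readFileSync(SPOOL, 'utf8').split('\n').filter(Boolean);
  lines.forEach(send);                       // `send` posts one line to Loggly
  fs.truncateSync(SPOOL, 0);                 // clear the spool once shipped
}
```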

That is possible, but it gets real messy real fast. I think the issue we're really trying to solve is "HTTP connections are expensive. When we have 1000 of them at once things are awful."

I suggest the better way to solve that is to have a single long-lived connection to loggly open that we then stream the log events down, rather than negotiating a separate connection for each one.

This could be solved to varying success with keepalive, SPDY, websockets, etc. Depends what kinds of things the Loggly API supports.

@emostar

emostar commented Nov 8, 2015

The main issue is that winston-loggly uses Loggly's HTTP interface, which means a new HTTP request for every log line.

I wrote a new winston transport that uses Loggly's syslog interface. It creates one permanent connection to Loggly's server over SSL and will reconnect if disconnected. It includes configurable options for how to handle reconnection as well. It is RFC 5424-compliant, so your winston log levels get translated to syslog levels and are indexed on Loggly.

Please feel free to check it out at https://github.com/mathrawka/winston-loggly-syslog
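For illustration, the single-connection approach described above might look roughly like this; the host, port, and framing are simplified assumptions rather than values taken from the linked repo:

```js
// Sketch: one persistent TLS socket with a naive reconnect, writing
// simplified RFC 5424-style lines.
var tls = require('tls');

var HOST = 'logs-01.loggly.com';             // assumed syslog host
var PORT = 6514;                             // assumed TLS syslog port
var socket = null;

function connect() {
  socket = tls.connect(PORT, HOST, function () {
    console.log('syslog connection established');
  });
  socket.on('error', function () { socket.destroy(); });
  socket.on('close', function () { setTimeout(connect, 1000); });  // reconnect
}

function send(severity, msg) {
  // <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID SD MSG
  var pri = 8 + severity;                    // facility "user" (1) * 8 + severity
  var line = '<' + pri + '>1 ' + new Date().toISOString() +
             ' myhost myapp - - - ' + msg + '\n';
  if (socket && socket.writable) socket.write(line);
}

connect();
```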

@indexzero
Member

That's awesome @mathrawka. There will be a number of ways to fix this in [email protected] once transports are normal streams. What I mean is that streams have a bulk write interface which, when implemented, will solve this problem.
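To make the streams point concrete, here is a shape-only sketch (not winston's actual transport API; the `sendBulkToLoggly` helper is hypothetical) of how a Writable stream's `_writev` hook can batch buffered writes into one bulk request:

```js
// Sketch: a Writable stream whose _writev turns a burst of buffered log
// writes into a single bulk request.
var stream = require('stream');
var util = require('util');

function BulkTransport(options) {
  stream.Writable.call(this, options);
}
util.inherits(BulkTransport, stream.Writable);

BulkTransport.prototype._write = function (chunk, encoding, callback) {
  // A single write can reuse the bulk path.
  this._writev([{ chunk: chunk, encoding: encoding }], callback);
};

BulkTransport.prototype._writev = function (chunks, callback) {
  var lines = chunks.map(function (c) { return c.chunk.toString(); });
  sendBulkToLoggly(lines, callback);         // hypothetical helper
};

function sendBulkToLoggly(lines, callback) {
  // Placeholder: POST lines.join('\n') to the bulk endpoint in one request.
  setImmediate(callback);
}
```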

@mostlyjason

mostlyjason commented Jun 28, 2016

We forked and added bulk support in this repo https://github.com/loggly/winston-loggly-bulk. This should fix the problem.

OtterCode pushed a commit to OtterCode/winston-loggly that referenced this issue Jun 22, 2018
Update README.md and package.json file