
JavaScript heap out of memory #3

pgudge opened this issue Sep 2, 2018 · 4 comments

pgudge commented Sep 2, 2018

Hi,

Thanks for the container.

When mounting a large photo library from a remote NAS, the container runs for about 5 minutes and then dies with:

<--- Last few GCs --->

[56:0x556c18623000]   518186 ms: Mark-sweep 976.0 (989.2) -> 976.0 (988.2) MB, 309.3 / 3.2 ms  (average mu = 0.522, current mu = 0.000) last resort GC in old space requested
[56:0x556c18623000]   518487 ms: Mark-sweep 976.0 (988.2) -> 976.0 (988.2) MB, 300.4 / 3.2 ms  (average mu = 0.348, current mu = 0.000) last resort GC in old space requested


<--- JS stacktrace --->

==== JS stack trace =========================================

    0: ExitFrame [pc: 0x20317f7841bd]
Security context: 0xb087121e6c9 <JSObject>
    1: readTag [0x195e0b020a71] [/usr/src/app/node_modules/exif-reader/index.js:~73] [pc=0x20317faf549b](this=0x3e40ac988d81 <JSGlobal Object>,buffer=0x3f876b95d3c1 <Uint8Array map = 0xedf11eeadc9>,offset=451,bigEndian=0x2b488aa823f1 <false>)
    2: readTags [0x195e0b020a11] [/usr/src/app/node_modules/exif-reader/index.js:59] [bytecode=0x28f8e6678221 offset=9...

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
Aborted (core dumped)
npm ERR! code ELIFECYCLE
npm ERR! errno 134
npm ERR! [email protected] start: `node ./bin/www`
npm ERR! Exit status 134
npm ERR!
npm ERR! Failed at the [email protected] start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2018-09-02T15_32_55_368Z-debug.log

What's the best way to give more RAM to node, so that it can continue?

Thanks,
Paul.


jwater7 (Owner) commented Sep 2, 2018

Wow, I've never encountered that error before and I have some larger albums, but increasing the node memory allocation as you mention might help in your case. I found a mention of the argument `--max-old-space-size=<size>` that you could try passing to node. I would try it myself, but I'm not sure I could reproduce the issue.

For the argument to work inside the docker container, you could try using the NODE_OPTIONS environment variable to pass it to node through docker. The docker run command would look something like `docker run -e 'NODE_OPTIONS=--max-old-space-size=4096' jwater7/responsive-photo-gallery` for a 4 GB allocation. I'm not sure what the default size is, so you might have to play around with the value a bit.

If you get that to work you won't need this, but you could also override the node command directly when running the docker container by passing a replacement command into docker run, for example: `docker run -it jwater7/responsive-photo-gallery sh -c 'npm run build-frontend && node --max-old-space-size=4096 ./bin/www'` (the `sh -c` wrapper keeps the `&&` inside the container instead of being interpreted by the host shell).
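
If you want to double-check that the flag actually reached the node process inside the container, a quick generic snippet (not part of this project, just Node's built-in v8 module) will print the current heap limit:

```js
// Prints the V8 heap limit of the current node process. With
// --max-old-space-size=4096 this should come out at roughly 4 GB;
// without the flag you'll see the much smaller default.
const v8 = require('v8');
const limitMb = v8.getHeapStatistics().heap_size_limit / 1024 / 1024;
console.log('V8 heap limit: ' + Math.round(limitMb) + ' MB');
```

You could run that as a one-off inside the container (e.g. via `docker exec`) to confirm the environment variable is being picked up.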

If this continues to be a problem, maybe we should look into using a different node module instead of exif-reader. Please let me know how it goes and thanks for the nice writeup.


pgudge (Author) commented Sep 3, 2018

Thanks for the info. I added that option, and indeed it ran longer: 7 hours! Just checked now and it looks like it hit an exception during thumbnail generation, something to do with dates.

{"log":"      returnMetadata['modifyDate'] = new Date(metadata.format.tags.creation_time); //'2018-03-26 18:07:46'\n","stream":"stderr","time":"2018-09-03T07:41:47.320204321Z"}
{"log":"                                                                   ^\n","stream":"stderr","time":"2018-09-03T07:41:47.320211442Z"}
{"log":"\n","stream":"stderr","time":"2018-09-03T07:41:47.320214757Z"}
{"log":"TypeError: Cannot read property 'creation_time' of undefined\n","stream":"stderr","time":"2018-09-03T07:41:47.320217784Z"}
{"log":"    at /usr/src/app/node_modules/fast-image-processing/index.js:233:68\n","stream":"stderr","time":"2018-09-03T07:41:47.320221026Z"}
{"log":"    at handleCallback (/usr/src/app/node_modules/fluent-ffmpeg/lib/ffprobe.js:106:9)\n","stream":"stderr","time":"2018-09-03T07:41:47.320224209Z"}
{"log":"    at handleExit (/usr/src/app/node_modules/fluent-ffmpeg/lib/ffprobe.js:223:11)\n","stream":"stderr","time":"2018-09-03T07:41:47.320227398Z"}
{"log":"    at Socket.\u003canonymous\u003e (/usr/src/app/node_modules/fluent-ffmpeg/lib/ffprobe.js:248:9)\n","stream":"stderr","time":"2018-09-03T07:41:47.320230568Z"}
{"log":"    at Socket.emit (events.js:187:15)\n","stream":"stderr","time":"2018-09-03T07:41:47.320234175Z"}
{"log":"    at Pipe._handle.close (net.js:598:12)\n","stream":"stderr","time":"2018-09-03T07:41:47.320255442Z"}
{"log":"\n","stream":"stderr","time":"2018-09-03T07:42:49.574190899Z"}
{"log":"(sharp:56): GLib-CRITICAL **: g_hash_table_lookup: assertion 'hash_table != NULL' failed\n","stream":"stderr","time":"2018-09-03T07:42:49.574213438Z"}
{"log":"\n","stream":"stderr","time":"2018-09-03T07:42:49.574504Z"}
{"log":"(sharp:56): GLib-CRITICAL **: g_hash_table_lookup: assertion 'hash_table != NULL' failed\n","stream":"stderr","time":"2018-09-03T07:42:49.574513253Z"}
{"log":"\n","stream":"stderr","time":"2018-09-03T07:42:49.5842583Z"}
{"log":"(sharp:56): GLib-CRITICAL **: g_hash_table_lookup: assertion 'hash_table != NULL' failed\n","stream":"stderr","time":"2018-09-03T07:42:49.584267493Z"}
{"log":"\n","stream":"stderr","time":"2018-09-03T07:42:49.584536576Z"}
{"log":"(sharp:56): GLib-CRITICAL **: g_hash_table_lookup: assertion 'hash_table != NULL' failed\n","stream":"stderr","time":"2018-09-03T07:42:49.584545541Z"}
{"log":"\n","stream":"stderr","time":"2018-09-03T07:42:49.766828416Z"}
{"log":"(sharp:56): GLib-CRITICAL **: g_hash_table_lookup: assertion 'hash_table != NULL' failed\n","stream":"stderr","time":"2018-09-03T07:42:49.766856065Z"}
...
...
{"log":"Segmentation fault (core dumped)\n","stream":"stderr","time":"2018-09-03T07:56:15.656834537Z"}
{"log":"npm ERR! code ELIFECYCLE\n","stream":"stderr","time":"2018-09-03T07:56:15.658551152Z"}
{"log":"npm ERR! errno 139\n","stream":"stderr","time":"2018-09-03T07:56:15.658730459Z"}
{"log":"npm ERR! [email protected] start: `node ./bin/www`\n","stream":"stderr","time":"2018-09-03T07:56:15.659375497Z"}
{"log":"npm ERR! Exit status 139\n","stream":"stderr","time":"2018-09-03T07:56:15.659443863Z"}
{"log":"npm ERR! \n","stream":"stderr","time":"2018-09-03T07:56:15.659517228Z"}
{"log":"npm ERR! Failed at the [email protected] start script.\n","stream":"stderr","time":"2018-09-03T07:56:15.659525997Z"}
{"log":"npm ERR! This is probably not a problem with npm. There is likely additional logging output above.\n","stream":"stderr","time":"2018-09-03T07:56:15.659608165Z"}

Sorry about the log format, it's from the Docker container log file. The lines above the ... are repeated thousands of times. It does look like it created all the thumbs in the persistent/storage/thumbs/NAS/ folders.


jwater7 (Owner) commented Sep 4, 2018

Could you describe your albums a bit? For example, are there lots of small videos, or just a couple of larger ones and the rest photos?

Video support is still pretty new and I'm working out the kinks. I'm committing some stuff that should at least help with the date TypeErrors in your log.
My hunch is that node's behaviour changed a bit when I updated and it's now running lots of ffmpeg jobs concurrently, so I also added some limits there. Hopefully a docker pull of the latest image fixes what you're seeing.
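
For reference, the guard amounts to something like this (a sketch only, the committed code may differ), so that videos without a creation_time tag no longer throw:

```js
// Sketch of the defensive check (illustrative, not necessarily the exact fix):
// ffprobe doesn't always report format.tags, so guard before reading
// creation_time instead of assuming it exists.
const tags = (metadata && metadata.format && metadata.format.tags) || {};
if (tags.creation_time) {
  returnMetadata['modifyDate'] = new Date(tags.creation_time); // e.g. '2018-03-26 18:07:46'
} else {
  // Leave modifyDate unset (or fall back to the file's mtime) when the
  // container/codec doesn't carry a creation_time tag.
  returnMetadata['modifyDate'] = undefined;
}
```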

I'm thinking that for video it might be time to implement a small database, or at least cache the metadata a bit better in some form, along the lines of the sketch below.
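
Something as simple as a JSON file keyed by path and mtime would probably go a long way. A rough sketch of the idea (nothing implemented yet; the cache path and the probeFn wrapper are made up for illustration):

```js
// Rough sketch of a per-file metadata cache: re-scans skip ffprobe for
// files whose mtime hasn't changed. Cache path and probeFn are illustrative.
const fs = require('fs');

const CACHE_FILE = '/persistent/storage/metadata-cache.json';
let cache = {};
try {
  cache = JSON.parse(fs.readFileSync(CACHE_FILE, 'utf8'));
} catch (e) {
  // First run (or corrupt cache file) - start with an empty cache.
}

function getCachedMetadata(filePath, probeFn, cb) {
  const mtime = fs.statSync(filePath).mtimeMs;
  const entry = cache[filePath];
  if (entry && entry.mtime === mtime) {
    return cb(null, entry.metadata); // cache hit, no ffprobe run needed
  }
  probeFn(filePath, (err, metadata) => {
    if (err) return cb(err);
    cache[filePath] = { mtime: mtime, metadata: metadata };
    fs.writeFileSync(CACHE_FILE, JSON.stringify(cache));
    cb(null, metadata);
  });
}
```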

Please let me know how it goes if you feel like trudging forward with this; I'd be interested to hear whether any of the latest changes help. Thanks


jwater7 (Owner) commented Aug 20, 2019

I did end up committing a limiter for the number of album thumbnails (previews) that get generated, and I also modified the album view to show more progress as images load, so this should no longer be an issue as long as people take care not to add too many images to one album.
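
For anyone curious, the idea behind the limiter is simply to bound how much preview generation happens at once instead of kicking everything off in parallel. A rough illustration of that approach (not the exact committed code) looks like:

```js
// Illustrative only (the actual commit may differ): generate previews with a
// small concurrency cap so huge albums don't spawn hundreds of sharp/ffmpeg
// jobs at the same time and exhaust memory.
function generatePreviewsLimited(imagePaths, generateOne, concurrency, done) {
  let inFlight = 0;
  let nextIndex = 0;
  let finished = 0;

  if (imagePaths.length === 0) return done();

  function launchNext() {
    // Keep at most `concurrency` generations running at any one time.
    while (inFlight < concurrency && nextIndex < imagePaths.length) {
      const imagePath = imagePaths[nextIndex++];
      inFlight++;
      generateOne(imagePath, function () {
        inFlight--;
        finished++;
        if (finished === imagePaths.length) return done();
        launchNext();
      });
    }
  }
  launchNext();
}
```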
