Switch to faster hashing for deflate #195
Thanks for the info. Any fork or alternative is good - that's how open source moves things forward.
Would you be OK, then, with a PR that changes this line to force fewer hash collisions? Obviously this would differ from true zlib.
By default, I would prefer to keep output binary-equal with zlib. BUT! Probably this can be done via a global option. I can not give a final answer without experimenting with benchmarks. At first glance, chances are high.
After digging around the internet a bit...
FYI:
crc32 takes 20% of the time, at least here. No idea why. But the only difference between deflate and gzip should be adler32 vs crc32.
That seems quite strange to me.
Maybe JIT differences for ES5/ES6 syntax. They do strange things sometimes. For example, if you compare node.js V8 versions, there was a series of notable regressions in ES5. Anyway, Slicing-by-4 needs uint32-aligned access. Madness with multiple ArrayBuffer views (8-bit and 32-bit) is not worth the effort.
I've started to experiment with the codebase and will report back if I make progress. The hashing is not centralized, i.e. it is set in many different places, so I need to deep-dive into the code to tune its performance. On another note, I saw you pushed some new commits for v2.0.0. If you're going to be releasing a new version, I'd recommend you consider a new idea I came up with: auto-workerization. Typical workerization techniques do not allow you to reuse existing synchronous code in an asynchronous fashion, but I developed a function that can take a function (along with all of its dependencies), generate an inline worker string, and cache it for higher performance. It's been specifically optimized to work even after mangling, minification, and usage in any bundler. More importantly, you can reference other functions, classes, and static constants with this new method. The main reason I suggest this is that very, very few people want to cause the environment (which could very well be a user's browser) to hang while running a CPU-intensive task on the main thread. You can see how I did it in fflate if you'd like to consider it.
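For illustration only, here is a minimal sketch of the inline-worker approach described above, assuming a browser environment. The names `workerize` and `deps` are hypothetical, not fflate's actual API, and dependencies must be named function declarations so their names survive stringification:

```js
// Build the worker source: dependency sources first, then an onmessage
// handler that applies the target function to the posted arguments.
function workerize(fn, deps) {
  const src =
    deps.map((d) => d.toString()).join('\n') +
    `\nonmessage = (e) => postMessage((${fn.toString()})(...e.data));`;
  // Back an inline Worker with a Blob URL; in a real library this worker
  // and URL would be cached and revoked when no longer needed.
  const url = URL.createObjectURL(new Blob([src], { type: 'text/javascript' }));
  const worker = new Worker(url);
  // Return an async wrapper around the original synchronous function.
  return (...args) =>
    new Promise((resolve, reject) => {
      worker.onmessage = (e) => resolve(e.data);
      worker.onerror = reject;
      worker.postMessage(args);
    });
}
```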
Workers are out of scope at this moment. This lib is still a zlib port by nature, a building block for upper abstraction layers. If my intent were to create just inflate/deflate... But I'd prefer a full-featured zlib port. Anyway, it's still possible to bump the version again :) I'm familiar with workification; pica uses it actively via a Browserify helper. I've pushed a temporary es6 branch for your info. There are just syntax changes; inflate & tests are still not covered. The last unpleasant problem is the lack of the original zlib 1.2.9 for binary comparison of deflate output - node.js switched to another lib after v12.16. Still not decided how to solve this. Ideally, it would be nice to have a docker image to build a light webassembly wrapper for testing purposes.
It's just an unrolled macro, easy to search for. I'll be OK with any form of PR, even a non-working example for a single place, with a comment "spread to all such entries". PS: performance of crc32 self-fixed at some moment :)
To answer your question: the current system requires you to pass a dependency array, but beyond that all workerization is handled for you. This is done at runtime and supports any environment for that reason. It's still not possible to workerize without a dependency list, which is why the library author must provide the async form with my solution. The benefit over existing solutions is that it works in any bundler or environment and has no impact on bundle size. The change in the Node 12 zlib binary is precisely why I made this issue; shifting away from Node.js parity can be justified now. I checked again and you are right, it's simply a macro. I may move this to a function instead. By the way, my solution relies on memLevel and requires between 4kB and 16MB extra memory (for most users 64kB).
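Continuing the illustrative sketch above, usage with a dependency list might look like this (hypothetical names again, not the real fflate API):

```js
// The author lists every dependency the function touches so its source can
// be bundled into the worker string along with the function itself.
function triple(x) { return x * 3; }        // named dependency
function heavyTask(n) {                     // stand-in for real CPU work
  let acc = 0;
  for (let i = 0; i < n; i++) acc = (acc + triple(i)) % 0xffff;
  return acc;
}
const heavyTaskAsync = workerize(heavyTask, [triple]);
heavyTaskAsync(1e7).then((result) => console.log(result)); // off the main thread
```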
Got it. Nice idea.
I've landed all changes to master and moved the hash to a separate function in 677ba48. If I switch to the alternate hasher, the result is strange - deflate is faster, but inflate (of the "improved deflate" output) is slower:

HASH_ZLIB:
HASH_FAST:
Any ideas?
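For context, `HASH_ZLIB` mirrors zlib's `UPDATE_HASH` macro; the exact formula of the `HASH_FAST` variant benchmarked above isn't shown in this thread, so the second definition below is only an assumed stand-in:

```js
// zlib's classic rolling update: shift the previous hash and XOR in one new byte.
const HASH_ZLIB = (s, prev, data) => ((prev << s.hash_shift) ^ data) & s.hash_mask;

// Illustrative "fast" variant (an assumption, not pako's exact code): a
// cheaper mix that trades collision quality for speed.
const HASH_FAST = (s, prev, data) => ((prev << 8) + (prev >> 8) + (data << 4)) & s.hash_mask;
```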
All of the above is with the default memLevel (8) => hash table size 32K (not 64K). memLevel 9 improves deflate speed 15 => 16 (not by much). So, I left the default to simplify investigation.
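For reference, zlib sizes its hash table from memLevel, which matches the numbers above:

```js
// hash_bits = memLevel + 7 in zlib's deflateInit2.
// memLevel 8 (default) -> 2^15 = 32768 entries; memLevel 9 -> 2^16 = 65536.
const hashBits = (memLevel) => memLevel + 7;
const hashSize = (memLevel) => 1 << hashBits(memLevel);
console.log(hashSize(8), hashSize(9)); // 32768 65536
```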
Unfortunately I've not had much time this week to spend on this PR, but I'll get to it in a day or two (over the weekend). The hash function you referred to in that code is suboptimal. UZIP uses it, but as it turns out, UZIP is much slower for large files than fflate. See this snippet for what I used. As for why inflate is slower, I don't have an answer yet.
Hmm, I don't understand how to express that hash via the "standard" signature:

Line 100 in 677ba48
Anyway, take a look at inflate speed after YOUR compression, it will be interesting to know. BTW, AFAIK the cloudflare zlib fork switched to crc32 with guaranteed result distribution.
Quick-checked
For mine you need access to three bytes at ind, ind + 1, and ind + 2. prev can be ind, data can be ind + 1, and you need a new parameter for ind + 2. I'm really confused as to how your inflate performance is hurt that much. I'll definitely look into it.
I've been trying to add my hash variant but the performance is somehow much worse (2x slower). There may be something I'm doing wrong; all I changed was adding an extra parameter for an extra byte, which I accessed in each call.
What about inflate speed? https://github.com/nodeca/pako/blob/master/benchmark/benchmark.js#L49 - replace the pako output here with your output.
I found the reason why inflate speed decreased - the compression ratio is much worse (bigger inflate input => slower).
Increasing memLevel to 9 (for a 64K hash table) does not help.
I'm still working on this; I've just been struggling with implementing the actual hash function. Could you find a way to get three bytes to the hasher instead of just the previous hash and the current byte?
AFAIK hash = 2 bytes. To be honest, I don't understand where the concept of "3 bytes" is taken from. Usually hashes for sequences are "recursive".
The 3 bytes come from the fact that the minimum length of an LZ77 match is 3 bytes in DEFLATE. Therefore, it's better to ensure all three bytes are unique than just two. It's true the recursive hash works for this as well, but it just seems to be less accurate in my testing.
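As an illustration only (a generic multiplicative mix, not fflate's exact code), a three-byte hash in this spirit might look like:

```js
// Pack the three bytes of a minimal LZ77 match into one integer, then spread
// it with a Fibonacci-style multiplicative mix; the top hashBits bits become
// the table index. Math.imul keeps the multiply in exact 32-bit arithmetic.
const hash3 = (b0, b1, b2, hashBits) =>
  Math.imul(b0 | (b1 << 8) | (b2 << 16), 0x9E3779B1) >>> (32 - hashBits);
```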
I think that's not too critical, because:
I mean, your variant should be a bit better, but it would require significant code rework for this concrete package. IMO the "classic" recursive hash would be a more optimal starting point.
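For reference, a minimal sketch of that "classic" recursive scheme, following zlib's conventions (hash_bits = memLevel + 7, shift = ceil(hash_bits / 3)):

```js
const MIN_MATCH = 3;                                // minimum DEFLATE match length
const hashBits = 8 + 7;                             // memLevel 8 -> 15-bit table index
const hashShift = Math.ceil(hashBits / MIN_MATCH);  // 5 bits shifted per byte
const hashMask = (1 << hashBits) - 1;

// Rolling update: after MIN_MATCH updates, a byte has been shifted left by
// 3 * hashShift >= hashBits positions, so the mask drops it entirely and the
// hash depends on exactly the last three bytes seen.
const updateHash = (h, byte) => ((h << hashShift) ^ byte) & hashMask;
```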
I've been looking into optimizing the function with a rolling hash, but I'm not sure how to get it done. The only thing I can conceive is capping the previous hash at 2 * memLevel bits, then appending memLevel bits to the end for each new byte. Would that work? EDIT: After much experimentation, the original hash seems about as fast as it gets given the constraints. So this PR is dead until a rework is possible.
I saw what was mentioned in #135 and #136 and decided to try out the performance benchmarks for myself.
If you would like to experiment with the patches I made, I forked the repo: 101arrowz/pako.
Note the addition of `fflate` and `uzip`. `uzip` was mentioned in the previous issues but is, as you mentioned, slightly unstable. It hangs on bad input rather than throwing an error, cannot be configured much, cannot be used asynchronously or in parallel (multiple calls to `inflate` in separate callbacks, for example, cause it to fail), and, worst of all, does not support streaming.

I created `fflate` to resolve these issues while implementing some of the popular features requested here (e.g. ES6 modules). In addition, `fflate` adds asynchronous support with Web and Node workers (offloading to a separate thread entirely rather than just adding to the event loop) and ZIP support that is parallelized to easily beat out JSZip. The bundle size ends up much smaller than `pako` too because of the much simpler code. As you can see, it is much faster than `pako`.

The purpose of this issue is not to boast about my library or to call `pako` a bad one. I believe `pako` can become much faster while still being more similar than `fflate` or `uzip` to the real `zlib` (i.e. the internal `lib/` directory). The overhauled implementation in Node 12, as mentioned in #193, is the perfect chance to escape the trap of trying to match the C source nearly line-for-line and instead become a canonical but high-performance rework in JavaScript. For this reason, I suggest you try to optimize some of the slower parts of the library. I found an even better hashing function that produced very few collisions in my testing, and for larger files/image files it outperforms even Node.js `zlib` in C; if you'd like to discuss, please let me know.

Personally: thank you for your incredible contributions to the open source community. I use `pako` in multiple projects and greatly appreciate its stability, not present in libraries such as my own.