Why bytes.push instead of Buffer #12
How would you know the size of the buffer to use beforehand? Currently, the object to encode is passed through to get the needed size, and then the content of the array (…)
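A minimal sketch of what that push-based approach looks like (illustrative code, not the project's actual internals): bytes go one by one into a plain JS array, and only at the end is the array copied into a Buffer, so the final size never needs to be known up front.

```javascript
// Illustrative sketch of the push-based approach (not the project's code):
// every byte goes through Array.prototype.push, and the total size is only
// known once the whole value has been walked.
function encodeString(str) {
  const bytes = [];
  const utf8 = Buffer.from(str, 'utf8');
  bytes.push(0xa0 | utf8.length); // msgpack fixstr header (assumes len < 32)
  for (let i = 0; i < utf8.length; i++) {
    bytes.push(utf8[i]); // one function call per byte
  }
  return Buffer.from(bytes); // final copy from array into a Buffer
}
```

For example, `encodeString('hi')` yields the three bytes `a2 68 69`.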
I would create a temporary buffer, maybe 8 KB, and then grow a destination buffer as needed (re-alloc and copy). For tiny objects it should be much faster than it is now. For larger ones it depends on the overhead of growing the buffer, but V8 does something similar for arrays anyway, so it will probably still be faster.
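That grow-and-copy idea can be sketched like this (all names here are hypothetical, chosen for illustration, not taken from any existing implementation):

```javascript
// Sketch of a grow-and-copy writer: write into a pre-allocated Buffer and
// double it whenever a write would overflow. Names are illustrative.
class GrowableBuffer {
  constructor(initialSize = 8 * 1024) { // e.g. an 8 KB scratch buffer
    this.buf = Buffer.allocUnsafe(initialSize);
    this.length = 0;
  }
  ensure(extra) {
    if (this.length + extra <= this.buf.length) return;
    let size = this.buf.length;
    while (size < this.length + extra) size *= 2; // geometric growth
    const bigger = Buffer.allocUnsafe(size);
    this.buf.copy(bigger, 0, 0, this.length); // the "re-alloc & copy" step
    this.buf = bigger;
  }
  writeUInt8(value) {
    this.ensure(1);
    this.buf.writeUInt8(value, this.length);
    this.length += 1;
  }
  writeString(str) {
    const byteLen = Buffer.byteLength(str);
    this.ensure(byteLen);
    this.buf.write(str, this.length);
    this.length += byteLen;
  }
  // Final copy, so the result doesn't retain the oversized scratch buffer.
  result() {
    return Buffer.from(this.buf.subarray(0, this.length));
  }
}
```

The geometric growth keeps the total copying cost amortized-linear, which is the same strategy V8 uses for growing array backing stores.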
That's interesting! Would you have time to implement it, so we can benchmark it against the current implementation?
I won't really have time in the short term. I saw that this project uses the buffer approach, and it is very fast according to its benchmarks: https://github.com/phretaddin/schemapack/blob/master/schemapack.js
I'll keep this open then, in case anyone wants to submit a pull request.
There is a package that could be useful for that: https://github.com/rvagg/bl
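For reference, the core idea behind bl — keep a list of chunks and join them once at the end — can be shown with nothing but Node built-ins (illustrative sketch, not bl's actual API):

```javascript
// Chunk-list approach, sketched with Node built-ins only: append Buffers
// to an array and do a single concat at the end, instead of growing one
// Buffer repeatedly.
function encodeStrings(strings) {
  const chunks = [];
  for (const s of strings) {
    const utf8 = Buffer.from(s, 'utf8');
    chunks.push(Buffer.from([0xa0 | utf8.length])); // fixstr header (len < 32)
    chunks.push(utf8);
  }
  return Buffer.concat(chunks); // one allocation + copy for the result
}
```

The trade-off versus grow-and-copy: no data is ever recopied while encoding, at the cost of allocating many small Buffers along the way.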
I tried the following changes:
The tiny/small/medium test cases got improvements, but on large, notepack still beats it. I am also wondering why I got different ops/sec for notepack on tiny; I was using Node 10 on Windows 7 x64.
@davalapar impressive, great job! I think that it will be difficult to beat those numbers...
@davalapar I am wondering: any insights on why decode did not get any improvement?
@davalapar it seems replacing
Way to go, that's definitely better!
@darrachequesne goddamn that's fricken awesome!
I think it's due to the decoding implementation being similar to notepack's: the same way of reading the buffer values and creating the objects and arrays. I had the crazy idea of inlining the equivalent of function calls such as
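As one example of that inlining idea (assuming the calls in question are Buffer's read helpers — my reading, not confirmed by the thread), replacing `buf.readUInt32BE(offset)` with the equivalent arithmetic done directly on the bytes would look like:

```javascript
// Inlining sketch: replace a Buffer method call with the equivalent
// arithmetic performed directly on the bytes.
const buf = Buffer.from([0x00, 0x01, 0x02, 0x03]);

// Method-call version:
const viaMethod = buf.readUInt32BE(0);

// Inlined version; `>>> 0` forces an unsigned 32-bit result so values with
// the high bit set don't come out negative.
const inlined = ((buf[0] << 24) | (buf[1] << 16) | (buf[2] << 8) | buf[3]) >>> 0;
```

Both produce `0x00010203` (66051) here; whether skipping the method dispatch actually helps depends on how well V8 inlines the Buffer methods itself.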
Results below, I'm on Windows 7 x64 with some open apps:
JSON stringify tiny x 1,364,321 ops/sec ±0.91% (90 runs sampled) <<
JSON stringify small x 219,199 ops/sec ±0.34% (92 runs sampled)
JSON stringify medium x 110,615 ops/sec ±1.82% (90 runs sampled)
JSON stringify large x 24.22 ops/sec ±0.52% (44 runs sampled)
JSON parse tiny x 1,336,891 ops/sec ±0.40% (91 runs sampled) <<
JSON parse small x 267,013 ops/sec ±0.22% (96 runs sampled) <<
JSON parse medium x 134,864 ops/sec ±0.21% (95 runs sampled)
JSON parse large x 31.81 ops/sec ±0.56% (54 runs sampled)
@davalapar I think these are really interesting results. In some cases even faster than JSON, very encouraging.
I wonder why the encoder is using a standard JS array and calls to push. Wouldn't it be much faster to work with a Buffer? I understand the challenges of growing the buffer, but it should be much faster considering the overhead of a function call for every element written to the array.
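The trade-off being raised, in miniature (illustrative code, not the library's):

```javascript
// One function call per element pushed into an array, plus a final copy...
function viaPush(n) {
  const bytes = [];
  for (let i = 0; i < n; i++) bytes.push(i & 0xff); // call per element
  return Buffer.from(bytes); // extra copy at the end
}

// ...versus indexed stores into a pre-allocated Buffer, with no
// per-element function call and no final copy.
function viaBuffer(n) {
  const buf = Buffer.allocUnsafe(n);
  for (let i = 0; i < n; i++) buf[i] = i & 0xff;
  return buf;
}
```

The direct-write version assumes the final size is known up front, which is exactly the difficulty raised in the first reply.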