Potential small performance optimizations #27
To clarify point 5: not everything needs to be inlined, and manual inlining only really helps in cases that are already highly polymorphic. Also, it only makes sense if the intent is just as clear after the inlining (like with …).
in my experience tuning down https://github.com/rrbit-org/lib-rrbit
Now that I'm running a few benchmarks, I stand corrected. (It's still negligible in practice, and I do recall V8 used to be a little more even.)
That's true with smaller functions, especially ones that just type-check (the engine normally inlines them), but in some megamorphic contexts I've gotten serious performance gains from inlining manually, because the engine didn't think to inline a megamorphic call but could get better type information after inlining. For example, this library I managed to get up to about 50% of Lodash's …
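To illustrate the kind of manual inlining being described (a hypothetical sketch, not code from either library; the names `sumBy` and `sumOfKeys` are made up for this example):

```javascript
// A generic helper that accepts any callback becomes megamorphic once
// many different functions flow through it, and the engine may then
// refuse to inline the callback at the call site.
function sumBy(arr, f) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) total += f(arr[i]);
  return total;
}

// Manually inlining the hot case gives the engine a monomorphic loop
// body with full type information, at the cost of some duplication.
function sumOfKeys(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) total += arr[i].key; // `f` inlined
  return total;
}
```

Both return the same result; the difference only shows up in how the engine specializes the loop.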
I tried a few of these but didn't see too much of a difference in artificial benchmarks. Still, I've gone ahead and removed … Please submit a PR if you think there are further micro-optimization opportunities. I'm hesitant to inline much else, however, unless it results in a significant performance boost.
@mattbierner what are you using to test performance?
I put together some simple benchmarks a while back: https://github.com/mattbierner/js-hashtrie-benchmark They only cover Node and are completely artificial, however.
@mjbvz When I looked at the posted results, I noticed they're pretty out of date. A few things of note:
I've rerun the benchmarks with the most recent versions of the libs and posted the results. Some nice perf gains across the board on Node 8.6, although I suspect upgrading the test machine from a 2009 laptop also helped matters.
Edit: My benchmark memories are apparently out of date...
I haven't actually profiled this, but I thought I'd take a gander through a few things.
1. In this function, it's much faster to use `const out = arr.slice()`, since engines provide optimized code gen for it.
2. In this function and this function, see if starting with an empty array literal is faster. (For some reason, engines are often faster with that than with a pre-allocated array.)
3. Consider using `k != null ? k.key : undefined`, etc. instead of `k && k.key`. Engines struggle to optimize the latter, especially when it's not in a position like `if (x) ...`.
4. Be careful about polymorphism. It will come back to bite you very quickly performance-wise if you're not careful, and the library has a very large amount of it, largely by necessity.
5. If it's super simple like this, this, or this, you might as well just inline it. The function call overhead, especially when dealing with higher-order functions, can confuse the engine's optimizer, giving you megamorphic perf hits earlier than you'd like.
6. Avoid closures where you can. They get deadly in a hurry, and no JS engine is quite as smart and magical as LuaJIT when it comes to optimizing higher-order functions.
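Points 1 and 3 above can be sketched concretely (a minimal illustration; the function names `copyAndSet` and `keyOf` are hypothetical, not taken from the library):

```javascript
// Point 1: copy an array with slice() rather than a hand-rolled loop;
// engines ship an optimized fast path for the built-in copy.
function copyAndSet(arr, i, value) {
  const out = arr.slice(); // optimized built-in copy
  out[i] = value;
  return out;
}

// Point 3: prefer an explicit null check over truthiness when pulling a
// property off a possibly-absent object, so the engine sees a simpler
// type test than the `k && k.key` pattern.
function keyOf(k) {
  return k != null ? k.key : undefined; // instead of `k && k.key`
}
```

Both versions behave the same on ordinary inputs; the claim in the thread is only about how well the engine optimizes each form.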
Also, have you considered using `class Map { ... }` for `hamt.Map`? It might smooth out your code a little by moving the class aliasing boilerplate out of the functional API stuff.