
caching parse results for rebuilds #3145

Closed

lkmill opened this issue Dec 18, 2017 · 5 comments
lkmill commented Dec 18, 2017

I am currently working on a project with quite a substantial amount of Less, and I am using gulp.watch to run rebuilds.

I was hoping the rebuilds would be faster than the initial build, but they take exactly as long. Is there a way to cache parsing results between rebuilds? Optimally, I would like to be able to store the cache to a file as well, so that it can also be read and used for the initial build.

I am willing to dig through the code and try to implement some kind of solution myself, but I would appreciate some pointers in the right direction. For example, should the FileManager or the ImportManager try to cache parsing results?
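
For reference, my watch setup is essentially the following (a minimal sketch; paths and task names are placeholders, assuming gulp 3 and gulp-less):

```js
// gulpfile.js - minimal rebuild-on-change setup (gulp 3 + gulp-less)
const gulp = require('gulp');
const less = require('gulp-less');

gulp.task('styles', () =>
    gulp.src('src/styles/main.less')   // placeholder entry point
        .pipe(less())                  // full re-parse + re-render on every run
        .pipe(gulp.dest('dist/css'))
);

// every change triggers a complete rebuild - nothing is cached between runs
gulp.task('watch', ['styles'], () =>
    gulp.watch('src/styles/**/*.less', ['styles'])
);
```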


seven-phases-max commented Dec 18, 2017

See #2640, #2746... (I would also add that it's a common misconception to assume that the bottleneck is the parsing, when it's actually the rendering - something like 30%/60% or even more of the overall time, depending on the actual code complexity and the features used... So ask yourself whether some 10 sec -> 7 sec improvement would be worth your efforts at all.)
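
A quick way to check that split for your own codebase (a rough sketch using the public less.parse / less.render APIs; the file path is a placeholder):

```js
const fs = require('fs');
const less = require('less');

const file = 'src/styles/main.less';          // placeholder entry file
const input = fs.readFileSync(file, 'utf8');
const options = { filename: file };

const t0 = Date.now();
less.parse(input, options, (err) => {
    if (err) throw err;
    const parseMs = Date.now() - t0;

    const t1 = Date.now();
    // render = parse + evaluate + generate CSS, so render-minus-parse
    // roughly approximates the non-parsing share of the work
    less.render(input, options).then(() => {
        const renderMs = Date.now() - t1;
        console.log('parse only:   %dms', parseMs);
        console.log('full render:  %dms', renderMs);
        console.log('~non-parsing: %dms', renderMs - parseMs);
    }, (e) => { throw e; });
});
```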


lkmill commented Dec 19, 2017

@seven-phases-max thanks for the pointers... Is there any way to cache rendering results, or partial rendering results?


seven-phases-max commented Dec 19, 2017

> Is there any way to cache rendering results, or partial rendering results?

Probably... Though I suspect that checking whether partial rendering results are affected would take just as long, simply because it's the same thing as the rendering: there are no independently compiled units within a single Less -> CSS session, so anything may be affected by everything.

In general, I would not do anything at all without performing a detailed benchmark first, as you never know where the real bottleneck might be (and it almost always turns out to be in some counter-intuitive place). For example, try disabling some unused or not-so-important features - e.g. sourcemap output - and see if there are significant changes in compilation time.
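
For example, a crude sketch of that kind of A/B check (file path is a placeholder; `sourceMap: {}` enables source map output in less.render):

```js
const fs = require('fs');
const less = require('less');

const file = 'src/styles/main.less';          // placeholder entry file
const input = fs.readFileSync(file, 'utf8');

function time(label, options) {
    const start = Date.now();
    return less.render(input, options)
        .then(() => console.log('%s: %dms', label, Date.now() - start));
}

// compare compilation time with a feature toggled off vs on
time('no sourcemap', { filename: file })
    .then(() => time('with sourcemap', { filename: file, sourceMap: {} }));
```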


matthew-dean commented Dec 20, 2017

@lohfu

Actually, in 3.x I did add some additional caching of imported files in the import manager, IIRC, but I'm not certain it would have any effect with gulp.watch. It's really there to prevent unnecessary duplicate XHR calls when the same file is imported twice from separate files within the same project.

With Less, I don't think you can really cache any parts of the evaluation, because the evaluation of any Less node depends on its context, and that context would change as the tree changes. And if, say, what changed in your source is a @plugin call, and that plugin adds a visitor, then even seemingly-static nodes could have altered output. The only way to evaluate nodes is to evaluate them.
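
To illustrate, here's a minimal sketch of a visitor plugin (passed programmatically via options.plugins; the visitor itself is hypothetical, and the node/method names follow the 3.x visitor convention). Any such visitor runs over the evaluated tree, so a cached "static" node could still be rewritten after evaluation:

```js
const less = require('less');

// hypothetical plugin: adds a visitor that walks the evaluated tree
const inspectorPlugin = {
    install(lessInstance, pluginManager) {
        function InspectorVisitor() {
            this.native = new lessInstance.visitors.Visitor(this);
            this.isReplacing = false;
        }
        InspectorVisitor.prototype.run = function (root) {
            return this.native.visit(root);
        };
        InspectorVisitor.prototype.visitDeclaration = function (node) {
            // a real plugin could mutate or replace the node here,
            // invalidating anything cached from a previous build
            return node;
        };
        pluginManager.addVisitor(new InspectorVisitor());
    }
};

less.render('a { color: red; }', { plugins: [inspectorPlugin] })
    .then(output => console.log(output.css));
```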

In short: as @seven-phases-max indicated, caching is not the answer. If you want to speed up performance, you have to look at optimizing evaluation paths. There are some current proposals that SHOULD improve evaluation speed by removing leaky variables from Less, which would effectively mean that mixin / detached-ruleset calls wouldn't have to be re-evaluated over and over when the same vars are passed in... BUT that's more a side benefit of simplifying the language, and a) it's not clear how much performance would be gained, and b) it's of course entirely dependent on your specific .less code.

Having spent a fair amount of time in the codebase by now (although some have spent more time there than me, like @lukeapage and probably @seven-phases-max), my suspicion is that there are a few potential targets for improving Less performance:

  1. Ruleset evaluation and flattening - when looking at the evaluation of rulesets, I noticed that the ruleset objects are sometimes altered / eval'd several times. Object creation in JS is expensive, and Less does a lot of it, sometimes for objects that are immediately discarded. And in the evaluation cycle, the rules are looped through more than once (I think 3 times per ruleset?), apparently to evaluate specific node types in a certain order. My suspicion is that this could be optimized into some kind of sorted evaluation that doesn't require so many loops.
  2. The logic of :extend and how nodes are searched - I think there are some notes in the code about improvements that could be made. I also thought that, rather than a recursive search, selectors could perhaps be indexed in a way that wouldn't hurt performance. But I don't know the trade-offs.
  3. Expression parsing and evaluation - a lot of code was added in 2.x for math and when guards. It feels bloated, but I may be wrong about that; it could be entirely necessary. It's just a hunch.
  4. Variable (and property) lookup and flattening - again, this is just a hunch, but I think some improvements could be made there.

The key to all of this is that you need accurate benchmarks to compare code changes. In all honesty, I haven't the faintest idea how one would go about that. At some point I added the performance-now library for benchmarking, but that will only give you a rough idea of speed on your particular system at a particular moment in time. I'm not sure how you would do A/B testing to see whether a change you made makes any difference whatsoever. I think if you figure that out, then actually testing other changes will be a lot easier.
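
For what it's worth, a bare-bones harness along those lines might look like this (a sketch only - the fixture path and run count are placeholders, using the performance-now library mentioned above):

```js
const now = require('performance-now');
const fs = require('fs');
const less = require('less');

const file = 'bench/fixture.less';            // placeholder fixture
const input = fs.readFileSync(file, 'utf8');
const RUNS = 20;

async function bench() {
    const samples = [];
    for (let i = 0; i < RUNS; i++) {
        const start = now();
        await less.render(input, { filename: file });
        samples.push(now() - start);
    }
    samples.sort((a, b) => a - b);
    // report the median; single runs are too noisy for A/B comparisons
    console.log('median: %sms over %d runs', samples[RUNS >> 1].toFixed(1), RUNS);
}

bench().catch(console.error);
```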

And maybe there is some additional caching that can be done, I don't know. You're welcome to look! And if you find areas of potential improvement, by all means make a PR that demos how you tracked that improvement.

Hope that helps.

stale bot commented Apr 19, 2018

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Apr 19, 2018
@stale stale bot closed this as completed May 3, 2018