Rendering performance regression between 0.84.3 and 0.85.1? #1590
Comments
I took a stab at reverting some of the extra assertions & iterator changes introduced in 64eb186 (see davidtaylorhq#1). It does improve things by a couple of percentage points, but it's nothing compared to the 30-40% regression shown above 😢
It looks like a further rendering-speed regression has been released as part of Ember 5.10 😭 (https://emberperf.discourse.org)
Edit: although it looks like glimmer-vm was not bumped between Ember 5.9 and Ember 5.10, so I guess this must be caused by a change in Ember itself.
For reference, the most recent upgrade PR (though this shipped in Ember 5.9): we went from 0.87.1 to 0.92.0. Likely suspects:
Also, a few deprecations were added (to the AST utils). I wonder how much that code being present (extra branches, etc.) contributes to the slowdown - especially since Ember has a bunch of extra transformations it uses. Something like the sketch below.
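To illustrate the concern, here is a hypothetical sketch (not glimmer-vm's actual code) of the kind of cost a deprecation guard adds to a hot path: an extra branch on every call, plus an eagerly built message string even when nothing is deprecated.

```ts
// Hypothetical sketch: `deprecate`, `AstNode`, and `transformNode` are
// illustrative names, not glimmer-vm or Ember APIs.
function deprecate(message: string, condition: boolean): void {
  if (!condition) {
    console.warn(`DEPRECATION: ${message}`);
  }
}

interface AstNode {
  type: string;
  legacyPath?: string;
}

function transformNode(node: AstNode): void {
  // This branch runs for every node, even in the common non-deprecated case,
  // and the template literal below is allocated on every single call:
  deprecate(
    `'legacyPath' is deprecated (found on ${node.type})`,
    node.legacyPath === undefined
  );
  // ...the actual transform work would go here...
}

transformNode({ type: 'PathExpression' });
```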
@davidtaylorhq are those benches all with classic builds? I'm curious how using Vite could affect the resulting score. As I look through the changelog in Ember, the only things that aren't adding deprecations or deleting old code are changes supporting Vite's strictness.
@NullVoxPopuli I opened an ember.js issue at emberjs/ember.js#20719 with more details on the most recent regression. It looks like the culprit is emberjs/ember.js@53b2de8. emberperf uses classic builds, and is pretty dependent on AMD resolution. So I think it'll need some pretty significant refactoring to work under Vite/Embroider.
It doesn’t, unless Discourse / the test suite used here loads the template compiler at runtime, which is atypical.
Yeah, both Discourse and the emberperf test suite compile templates in advance 👍
emberperf does load the template compiler into the browser, although at first glance it does it on a completely separate pageload from the one that measures rendering. I bring it up because there's definitely atypical stuff in there, but so far nothing I can see that would skew the results.
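For context on why ahead-of-time compilation matters here, a sketch assuming @glimmer/compiler's exported `precompile` entry point (exact options vary by version): compiling at build time means the browser ships only the serialized wire format, never the compiler itself.

```ts
// Build-time sketch: the template is compiled to wire format ahead of time.
// `precompile` is @glimmer/compiler's exported entry point; treat the exact
// shape of the output and options as version-dependent.
import { precompile } from '@glimmer/compiler';

const wireFormat: string = precompile('<h1>{{this.title}}</h1>');

// The wire-format string gets embedded in the app bundle; at runtime the VM
// only deserializes and evaluates it, which is the code path these
// benchmarks exercise.
console.log(wireFormat);
```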
This issue might be related - such a massive bump in bundle size could lead to a big slowdown in performance.
Mixed news so far
I've published the two apps here:
Second update, after fixing some things:
We have a similar degradation in both.
Did a memory allocation timeline and the graph looked like this:
which aligns with the work from @bendemboski in #1440. Nice work, @bendemboski!
So far:
I uploaded a performance profile captured with Firefox -- y'all can inspect and poke about here:
I've added 5.11 and 6.0-alpha.1. As you can see, there is still some variance, even though there isn't really a lot that changed in emberjs/ember.js@3dfb8a4...85a4f29 (but some logic around EXTEND_PROTOTYPES.Array did change). 3dfb8a4 is the actual v6 alpha.1 SHA.
I added another set of apps for comparing classic production builds. On my personal laptop, comparing with Embroider:
Note: it seems it's hard to control noise on my laptop.
Embroider (w/ 20x (I think) CPU slowdown, because I have a lot of machine "noise"). From this PR: #1606
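For anyone trying to reproduce the throttled numbers: the same CPU slowdown can be applied programmatically. A sketch using Puppeteer (an assumption about tooling, not necessarily what was used for the profiles above; the URL is a placeholder):

```ts
// Sketch: apply a 20x CPU slowdown via Puppeteer, roughly equivalent to
// DevTools' CPU throttling, to drown out machine noise between runs.
import puppeteer from 'puppeteer';

async function main(): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.emulateCPUThrottling(20); // 20x slowdown
  await page.goto('http://localhost:4200'); // placeholder benchmark URL
  // ...drive the benchmark and collect performance entries here...
  await browser.close();
}

main().catch(console.error);
```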
In Discourse, and in Emberperf, we saw a fairly significant rendering-performance hit as part of the Ember 5.5 -> 5.6 bump:
Ember 5.6 included a bump of glimmer-vm from 0.84.3 to 0.85.1 (emberjs/ember.js#20561)
Unfortunately, 0.84.3 -> 0.85.1 includes a lot of structural changes in glimmer-vm, much of which was done without glimmer-vm's own performance testing in working order.
I was able to boot the glimmer-vm benchmark app on a handful of old commits, and run tachometer on them to compare the `render` `performance.measure` metric. These numbers are clearly going in the wrong direction. Although it is also worth mentioning: the benchmark app itself underwent a bunch of refactoring across these commits... so it might not be a perfect comparison.
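For readers unfamiliar with the setup, this is roughly the instrumentation tachometer consumes — a minimal sketch, where `renderBenchmark` is a hypothetical stand-in for the app's real render entry point:

```ts
// Wrap the render in performance.mark/measure so a 'render' User Timing
// entry appears in the timeline. `renderBenchmark` is hypothetical.
function renderBenchmark(): void {
  // ...render the benchmark scene...
}

performance.mark('render-start');
renderBenchmark();
performance.mark('render-end');
performance.measure('render', 'render-start', 'render-end');

// Tachometer is then pointed at the page with a measurement config along
// the lines of { "mode": "performance", "entryName": "render" }, and it
// reports that measure's duration across repeated samples.
```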
I would love to be able to bisect into specific commits to identify what caused the regressions. Unfortunately, on all the intermediate commits I've tried, I've been unable to get the benchmark app to boot because of various import/dependency/package-json errors. It seems the `perf.yml` GitHub CI job was disabled for much of this time, so I assume this was a known problem on these commits, and not a problem with my local setup.
So... I don't really know where that leaves us. Does anyone have any pointers for what else we can do to isolate the source of the regression(s)?
Footnotes
with 56ddfa cherry-picked on top to make the benchmark app work