Attempt to get perf testing work #1532
Conversation
Force-pushed from 46f985c to 3a4e2d0
Force-pushed from 3a4e2d0 to b23d206
Just got some questions so I can understand some things.
Thanks for doing this, this is a huge help!!! <3
excellent, so in a future PR we could make different render calls or click around and measure the results afterwards.
like, some things we probably want to measure:
- appending 1 row
- appending 1000 rows
- clearing all rows
- switching two rows' positions
- updating data within one row
- ideally this doesn't cause every row to re-compute, even though I think today it does... 🙈 (depends on where the data is located)
- probably other stuff as well
Yep, that's right. We may need to explore/clean up the existing logic (from the previous bench, located at https://github.com/glimmerjs/glimmer-vm/tree/main/packages/%40glimmer-workspace/benchmark-env)
```js
  enforcePaintEvent();
}

function enforcePaintEvent() {
```
do we know why this exists?
@NullVoxPopuli, yep, TracerBench requires a paint event after the last mark (TracerBench/tracerbench#305). We may remove it in the future, once proper events are settled up and no issues are spotted.
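For context, forcing a paint after the final mark could look roughly like the sketch below. This is a hypothetical illustration of the idea, not the PR's actual implementation; it assumes a browser environment and degrades to a no-op elsewhere:

```js
// Hypothetical sketch: force a paint after the final performance mark,
// since TracerBench needs a paint event after the last mark
// (TracerBench/tracerbench#305).
function enforcePaintEvent() {
  const docElem = globalThis.document?.documentElement;
  if (!docElem) return; // not in a browser: nothing to paint

  // Touching layout-affecting state and reading it back forces a reflow;
  // the following animation frame then produces a paint.
  docElem.style.opacity = '0.999';
  void docElem.offsetHeight; // force synchronous layout
  requestAnimationFrame(() => {
    docElem.style.opacity = '';
  });
}
```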
```js
} else if (id === '@glimmer/local-debug-flags') {
  return '\0@glimmer/local-debug-flags';
}
},
load(id) {
```
is this needed? the build / package.json#exports should have already handled this 😅
oh I see it was pre-existing from here: https://github.com/glimmerjs/glimmer-vm/pull/1532/files#diff-860eda2adf31f087997e1c92eab487cb90113b3f303653d36adabd42213d5c42L26
We can probably evaluate removing / cleaning it up in a separate PR then.
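The `resolveId`/`load` pair in the snippet follows Rollup's virtual-module convention, where a `\0` prefix marks an id as plugin-owned so other plugins and core resolution skip it. A standalone sketch of that pattern (the plugin name and the flag body are assumptions for illustration):

```js
// Sketch of Rollup's virtual-module convention used in the diff above.
const VIRTUAL_ID = '@glimmer/local-debug-flags';
const RESOLVED_ID = '\0' + VIRTUAL_ID; // '\0' marks the id as plugin-owned

const localDebugFlagsPlugin = {
  name: 'local-debug-flags',
  resolveId(id) {
    // Claim the bare import so Rollup doesn't look for it on disk.
    if (id === VIRTUAL_ID) return RESOLVED_ID;
  },
  load(id) {
    // Supply the module source for the id we claimed.
    if (id === RESOLVED_ID) {
      return 'export const LOCAL_DEBUG = false;';
    }
  },
};
```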
```js
  process.env['EXPERIMENT_BRANCH_NAME'] || (await $`git rev-parse --abbrev-ref HEAD`).stdout.trim();
const controlBranchName = process.env['CONTROL_BRANCH_NAME'] || 'main';
const markers =
  process.env['MARKERS'] ||
```
is there any way to track all markers? or do we have to list them out? (I'm a tracerbench newb)
@NullVoxPopuli we don't really need all markers (at least at the current step); that would be a confusing way to measure things, because we must know what we're trying to measure.
```js
// we can't do it in parallel on CI,

// setup experiment
await within(async () => {
```
within is nice!
```js
await $`pnpm build`.quiet();

if (isMacOs) {
  await $`find ./packages -name 'package.json' -exec sed -i '' 's|"main": "index.ts",|"main": "./dist/prod/index.js","module": "./dist/prod/index.js",|g' {} \\;`;
```
note for myself and a later PR: I bet we can do this as part of the script @chancancode made for linking, where a whole new monorepo is created, but with just the published assets and resolved publishConfigs
yep, or we could expose a special "build-like-for-publishing" command (and use it inside the folders) to avoid this magic.
yeah, I think that's what I'll make (already implemented for the linking) and share it with this code
```shell
pnpm run benchmark:setup
```
Comment: this is just the setup, with build, checkout, automatic artifact upload, and comment posting (if configured); it does not provide the actual bench code. We're just making sure it works. Proper performance marks and benchmark logic should be settled independently.
Note: to get the GH message working for PRs made from forks, we need to explore this topic: https://github.com/marketplace/actions/add-pr-comment#proxy-for-fork-based-prs
Note: the commit/change needs to be pushed to GitHub to be "benchable" (same for local runs), because we check out the experiment branch.