
ensure! is better than assert!? #6322

Closed
AurevoirXavier opened this issue Nov 1, 2024 · 2 comments

Comments

@AurevoirXavier
Contributor

AurevoirXavier commented Nov 1, 2024

> ensure is better than assert, because the benchmarking pipeline can handle errors

Originally posted by @shawntabrizi in #6310 (comment)

I noticed that the benchmark code throughout this repository heavily utilizes assert!. Should we consider replacing all instances of it?
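
To make the difference concrete, here is a minimal, self-contained sketch (not the actual `frame_support::ensure!` macro): an `ensure!`-style check returns an error the caller can handle, while `assert!` panics and discards anything still held in memory. The macro, function names, and error strings are illustrative only.

```rust
// Simplified stand-in for an ensure!-style macro: on a failed condition it
// returns an error instead of panicking.
macro_rules! ensure {
    ($cond:expr, $err:expr) => {
        if !$cond {
            return Err($err);
        }
    };
}

fn verify_with_ensure(balance: u64, expected: u64) -> Result<(), &'static str> {
    // On failure this returns an error that the caller can log and recover from.
    ensure!(balance == expected, "post-condition failed: unexpected balance");
    Ok(())
}

#[allow(dead_code)]
fn verify_with_assert(balance: u64, expected: u64) {
    // On failure this panics; a pipeline holding earlier results in memory
    // would lose them.
    assert!(balance == expected, "post-condition failed: unexpected balance");
}

fn main() {
    match verify_with_ensure(90, 100) {
        Ok(()) => println!("benchmark verified"),
        Err(e) => println!("benchmark failed gracefully: {e}"),
    }
    // Calling `verify_with_assert(90, 100)` here would abort the whole process instead.
}
```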

AurevoirXavier changed the title from "ensure! is better than assert!" to "ensure! is better than assert!?" on Nov 1, 2024
@shawntabrizi
Member

Wherever we can return an error, we should probably use ensure, as it allows the benchmarking pipeline to exit gracefully.

For example, imagine you run benchmarks for the whole runtime: all of that data is kept in memory before it is parsed and put into the JSON / Handlebars templates.

If the benchmark process panics, all of that data is just thrown away.

AFAIK, it should be possible to get lots of reasonable and usable data even when a benchmark returns an error.

Some functions may not be able to return an error (like some kind of setup function), in which case assert is fine. But ensure is definitely better on average, as far as I know.
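
As an illustration of why a returned error is recoverable, here is a hypothetical runner loop (not the actual benchmarking CLI): a benchmark that fails with `Err` is skipped and the results collected so far are kept, whereas a panic would unwind past the loop and drop everything already collected. All names and numbers are made up.

```rust
struct BenchResult {
    name: &'static str,
    nanos: u128,
}

fn run_benchmark(name: &'static str) -> Result<BenchResult, &'static str> {
    // Imagine this runs the extrinsic and verifies post-conditions with `ensure!`.
    if name == "broken_case" {
        return Err("verification failed");
    }
    Ok(BenchResult { name, nanos: 1_234 })
}

fn main() {
    let mut results = Vec::new();
    for name in ["transfer", "broken_case", "set_balance"] {
        match run_benchmark(name) {
            Ok(r) => results.push(r),
            // The error is recorded and the run continues with the next benchmark.
            Err(e) => eprintln!("{name}: skipped ({e})"),
        }
    }
    for r in &results {
        println!("{}: {} ns", r.name, r.nanos);
    }
}
```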

cc @ggwpez

@ggwpez
Member

ggwpez commented Nov 1, 2024

The benchmarking CLI handles panics and errors in the same way; since it spawns a new wasm instance, these panics never propagate upwards anyway.

> If the benchmark process panics, all of that data is just thrown away.

This is pretty much mitigated since we run them per pallet (and also continuously in CI to check that they are not broken).
So a panic would throw away the results of that pallet, but not those of the other pallets. I think there is no way around this, as we cannot write partial pallet results to the weight files; some calls would then be missing.

> AFAIK, it should be possible to get lots of reasonable and usable data even when a benchmark returns an error.

Yeah, it could be improved to still write partial weight files or to output the intermediary JSON for recovery.
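
A rough sketch of what such a recovery path could look like, assuming `serde`/`serde_json` and a made-up file name; this is not existing CLI behaviour, just an illustration of dumping the intermediary results as JSON so they could be turned into weight files later.

```rust
use serde::Serialize;

#[derive(Serialize)]
struct PartialResult {
    extrinsic: String,
    nanos: u128,
}

fn dump_partial(results: &[PartialResult]) -> std::io::Result<()> {
    let json = serde_json::to_string_pretty(results).expect("results are serializable");
    // Even if a later benchmark fails or panics, this snapshot stays on disk.
    std::fs::write("benchmark-partial.json", json)
}

fn main() -> std::io::Result<()> {
    let collected = vec![
        PartialResult { extrinsic: "transfer".into(), nanos: 1_234 },
        PartialResult { extrinsic: "set_balance".into(), nanos: 987 },
    ];
    dump_partial(&collected)
}
```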

> Should we consider replacing all instances of it?

No, this would be too much time and review effort wasted on something minuscule. For new code we can keep it in mind, but I would not dig up old code just for that change.

ggwpez closed this as completed on Nov 1, 2024