> One remaining item may be how we want to handle repeats for network benchmarks, but I think we can deal with that in a separate Issue/PR.
A follow-up sounds good
TBH I think we could just set a `repeat` attribute for consistency, and then have a basic `for` loop that wraps the context + operation, appending to a `samples` list on each iteration, and return that list - this would produce a results structure identical to what we see in the timing tests
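A minimal sketch of that idea, assuming a hypothetical base class (`NetworkBenchmarkBase`, its `repeat` attribute, and the `setup`/`operation` hooks are illustrative names, not the project's actual API):

```python
import time


class NetworkBenchmarkBase:
    """Sketch: repeat the context + operation, collecting one sample per iteration."""

    repeat = 3  # analogous to timeit's repeat count

    def setup(self):
        pass  # subclass hook: prepare the context (e.g., open a remote file)

    def operation(self):
        raise NotImplementedError  # subclass hook: the tracked operation

    def run(self):
        samples = []
        for _ in range(self.repeat):
            self.setup()
            start = time.perf_counter()
            self.operation()
            samples.append(time.perf_counter() - start)
        # same shape as the timing results: a list of per-iteration samples
        return {"samples": samples}
```

A subclass would only override `setup` and `operation`; the repeat loop and result structure stay uniform across benchmark types.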
The only problem might be that for any tests with any kind of caching (including in-memory LRU), the operations would be performed in the same process, so the first run may have heterogeneous statistics compared to the rest of the samples 🤔 For comparison, the reason the timing tests can repeat so easily is that repeating is a built-in feature of `timeit`, which runs on new processes each time
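The caching concern is easy to demonstrate with a toy example: when repeats run in one process, an in-memory LRU cache makes the first sample pay the full cost while later samples return almost instantly (the `expensive_lookup` function and its 50 ms sleep are stand-ins for a slow network fetch, not the project's code):

```python
import time
from functools import lru_cache


@lru_cache(maxsize=None)
def expensive_lookup(key):
    time.sleep(0.05)  # stand-in for a slow network fetch
    return key


# naive in-process repeats: the cache survives between iterations
samples = []
for _ in range(3):
    start = time.perf_counter()
    expensive_lookup("chunk-0")
    samples.append(time.perf_counter() - start)

# first sample pays the full cost; the rest hit the in-memory cache
print(samples)
```

Per-iteration cleanup (e.g., `expensive_lookup.cache_clear()` between runs) or fresh processes would restore homogeneous samples.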
I think that depends in part on what we put into the `setup` method. For the network benchmarks we are controlling what is being measured via the network-tracking decorator, i.e., if necessary, we could put setup and clean-up code inside the benchmark function to make sure we have clean repeats
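A sketch of that approach, with per-iteration setup/cleanup wrapped around only the decorated operation (the `network_tracker` decorator here is a hypothetical stand-in for the project's real network-tracking decorator, and the class/method names are illustrative):

```python
import time


def network_tracker(func):
    """Hypothetical stand-in: times the call instead of capturing network traffic."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        return result, time.perf_counter() - start
    return wrapper


class RemoteReadBenchmark:
    repeat = 3

    def _setup_iteration(self):
        self.handle = object()  # e.g., open a fresh remote file handle

    def _cleanup_iteration(self):
        self.handle = None  # e.g., close the handle, drop any in-memory caches

    @network_tracker
    def _operation(self):
        time.sleep(0.01)  # stand-in for the measured network read

    def run(self):
        samples = []
        for _ in range(self.repeat):
            self._setup_iteration()      # fresh state so each repeat is clean
            _, elapsed = self._operation()
            samples.append(elapsed)
            self._cleanup_iteration()    # tear down before the next iteration
        return samples
```

Because only `_operation` is decorated, the setup and cleanup cost never contaminates the measured samples, even though they run inside the same loop.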
CodyCBakerPhD changed the title from *Discuss support for repeat of network benchmarks* to *Support repeat for network and other custom benchmarks* on Mar 11, 2024
CodyCBakerPhD changed the title from *Support repeat for network and other custom benchmarks* to *Support repeat for network and other custom tracking* on Mar 11, 2024
Originally posted by @CodyCBakerPhD in #21 (comment)