Summarize improvements #323

Open
wants to merge 3 commits into develop
6 changes: 3 additions & 3 deletions scripts/performance/benchmark
@@ -79,9 +79,9 @@ def run_benchmark(pid, output_file, data_interval):
             memory_used = process_to_measure.memory_info().rss
             cpu_percent = process_to_measure.cpu_percent()
             current_net = psutil.net_io_counters(pernic=True)[INTERFACE]
-        except psutil.AccessDenied:
-            # Trying to get process information from a closed process will
-            # result in AccessDenied.
+        except (psutil.AccessDenied, psutil.ZombieProcess):
+            # Trying to get process information from a closed or zombie process will
+            # result in corresponding exceptions.
             break

         # Collect data on the in/out network io.
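For context on the change above: psutil raises AccessDenied or ZombieProcess when a polled process has exited or become a zombie, so a sampling loop has to treat both as a signal to stop. The sketch below is a simplified, standalone illustration of that pattern, not code from the benchmark script; the function name and sampling interval are illustrative, and the network counters are omitted:

    import time

    import psutil


    def poll_process(pid, interval=1.0):
        """Sample RSS and CPU for a process until it exits or becomes unreadable."""
        process = psutil.Process(pid)
        samples = []
        while process.is_running():
            try:
                rss = process.memory_info().rss
                cpu = process.cpu_percent()
            except (psutil.AccessDenied, psutil.ZombieProcess):
                # The process has closed or turned into a zombie; its info can
                # no longer be read, so stop sampling instead of crashing.
                break
            samples.append((rss, cpu))
            time.sleep(interval)
        return samples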
154 changes: 108 additions & 46 deletions scripts/performance/summarize
@@ -12,19 +12,19 @@ Run this script with::

And that should output::

+------------------------+----------+----------------------+
| Metric over 1 run(s) | Mean | Standard Deviation |
+========================+==========+======================+
| Total Time (seconds) | 1.200 | 0.0 |
+------------------------+----------+----------------------+
| Maximum Memory | 42.3 MiB | 0 Bytes |
+------------------------+----------+----------------------+
| Maximum CPU (percent) | 88.1 | 0.0 |
+------------------------+----------+----------------------+
| Average Memory | 33.9 MiB | 0 Bytes |
+------------------------+----------+----------------------+
| Average CPU (percent) | 30.5 | 0.0 |
+------------------------+----------+----------------------+
+------------------------+---------------------+-----------+----------------------+
| Metric over 1 run(s) | Run 1 | Mean | Standard Deviation |
+========================+=====================+===========+======================+
| Total Time (seconds) | 263.8085448741913 | 263.809 | 0.0 |
+------------------------+---------------------+-----------+----------------------+
| Maximum Memory | 117.9 MiB | 117.9 MiB | 0 Bytes |
+------------------------+---------------------+-----------+----------------------+
| Maximum CPU (percent) | 0.2 | 0.2 | 0.0 |
+------------------------+---------------------+-----------+----------------------+
| Average Memory | 117.5 MiB | 117.5 MiB | 0 Bytes |
+------------------------+---------------------+-----------+----------------------+
| Average CPU (percent) | 0.07325581395348836 | 0.1 | 0.0 |
+------------------------+---------------------+-----------+----------------------+


The script can also be run with multiple files:
@@ -33,34 +33,56 @@

And will have a similar output:

+------------------------+----------+----------------------+
| Metric over 2 run(s) | Mean | Standard Deviation |
+========================+==========+======================+
| Total Time (seconds) | 1.155 | 0.0449999570847 |
+------------------------+----------+----------------------+
| Maximum Memory | 42.5 MiB | 110.0 KiB |
+------------------------+----------+----------------------+
| Maximum CPU (percent) | 94.5 | 6.45 |
+------------------------+----------+----------------------+
| Average Memory | 35.6 MiB | 1.7 MiB |
+------------------------+----------+----------------------+
| Average CPU (percent) | 27.5 | 3.03068181818 |
+------------------------+----------+----------------------+
+------------------------+---------------------+---------------------+-----------+----------------------+
| Metric over 2 run(s) | Run 1 | Run 2 | Mean | Standard Deviation |
+========================+=====================+=====================+===========+======================+
| Total Time (seconds) | 263.8085448741913 | 198.05210328102112 | 230.930 | 32.87822079658508 |
+------------------------+---------------------+---------------------+-----------+----------------------+
| Maximum Memory | 117.9 MiB | 112.4 MiB | 115.2 MiB | 2.7 MiB |
+------------------------+---------------------+---------------------+-----------+----------------------+
| Maximum CPU (percent) | 0.2 | 0.2 | 0.2 | 0.0 |
+------------------------+---------------------+---------------------+-----------+----------------------+
| Average Memory | 117.5 MiB | 111.0 MiB | 114.2 MiB | 3.2 MiB |
+------------------------+---------------------+---------------------+-----------+----------------------+
| Average CPU (percent) | 0.07325581395348836 | 0.09432989690721647 | 0.1 | 0.010537041476864052 |
+------------------------+---------------------+---------------------+-----------+----------------------+
Comment on lines +36 to +48
Contributor:

Having this table be generated dynamically seems like it will make it unreadable in the terminal when summarizing more than a few runs. Have you tried testing this?
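
For reference, the dynamic per-run columns this comment refers to can be reproduced with a standalone sketch along the following lines. It assumes the grid format of the tabulate library, which matches the tables shown above; the helper name and the single Total Time row are illustrative and not code from this pull request:

    from tabulate import tabulate


    def render_summary(run_times):
        # One row per metric: a column per run, then the mean and std dev.
        mean = sum(run_times) / len(run_times)
        std_dev = (sum((t - mean) ** 2 for t in run_times) / len(run_times)) ** 0.5
        rows = [
            ['Total Time (seconds)', *run_times, f'{mean:.3f}', std_dev],
        ]
        headers = [
            f'Metric over {len(run_times)} run(s)',
            *[f'Run {n}' for n in range(1, len(run_times) + 1)],
            'Mean',
            'Standard Deviation',
        ]
        # Each additional run adds a column, so the table widens with every file.
        return tabulate(rows, headers=headers, tablefmt='grid')


    print(render_summary([263.8085448741913, 198.05210328102112]))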



You can also specify the ``--output-format json`` option to print the
summary as JSON instead of a pretty printed table::

     {
-      "total_time": 72.76999998092651,
-      "std_dev_average_memory": 0.0,
-      "std_dev_total_time": 0.0,
-      "average_memory": 56884518.57534247,
-      "std_dev_average_cpu": 0.0,
-      "std_dev_max_memory": 0.0,
-      "average_cpu": 61.19315068493151,
-      "max_memory": 58331136.0
+      "executions": [
+        {
+          "execution_time": 263.8085448741913,
+          "average_memory": 123170974.75968993,
+          "max_memory": 123600896.0,
+          "average_cpu": 0.07325581395348836,
+          "max_cpu": 0.2,
+          "end_time": 1735586030.26786
+        },
+        {
+          "execution_time": 198.05210328102112,
+          "average_memory": 116410262.43298969,
+          "max_memory": 117899264.0,
+          "average_cpu": 0.09432989690721647,
+          "max_cpu": 0.2,
+          "end_time": 1735586230.422445
+        }
+      ],
+      "aggregate_stats": {
+        "execution_time": 230.9303240776062,
+        "std_dev_execution_time": 32.87822079658508,
+        "average_memory": 119790618.5963398,
+        "std_dev_average_memory": 3380356.16335012,
+        "max_memory": 120750080.0,
+        "std_dev_max_memory": 2850816.0,
+        "average_cpu": 0.08379285543035242,
+        "std_dev_average_cpu": 0.010537041476864052,
+        "max_cpu": 0.2,
+        "std_dev_max_cpu": 0.0
+      }
     }

"""

@@ -96,7 +118,7 @@ class Summarizer:
         self.total_files = 0
         self._num_rows = 0
         self._start_time = None
-        self._end_time = None
+        self._end_times = []
         self._totals = {
             'time': [],
             'average_memory': [],
@@ -168,22 +190,46 @@ class Summarizer:
         table = [
             [
                 'Total Time (seconds)',
+                *[
+                    f'{self._totals['time'][file]}'
+                    for file in range(0, self.total_files)
+                ],
                 f'{self.total_time:.3f}',
                 self.std_dev_total_time,
             ],
-            ['Maximum Memory', h(self.max_memory), h(self.std_dev_max_memory)],
+            [
+                'Maximum Memory',
+                *[
+                    f'{h(self._totals['max_memory'][file])}'
+                    for file in range(0, self.total_files)
+                ],
+                h(self.max_memory),
+                h(self.std_dev_max_memory),
+            ],
             [
                 'Maximum CPU (percent)',
+                *[
+                    f'{self._totals['max_cpu'][file]}'
+                    for file in range(0, self.total_files)
+                ],
                 f'{self.max_cpu:.1f}',
                 self.std_dev_max_cpu,
             ],
             [
                 'Average Memory',
+                *[
+                    f'{h(self._totals['average_memory'][file])}'
+                    for file in range(0, self.total_files)
+                ],
                 h(self.average_memory),
                 h(self.std_dev_average_memory),
             ],
             [
                 'Average CPU (percent)',
+                *[
+                    f'{self._totals['average_cpu'][file]}'
+                    for file in range(0, self.total_files)
+                ],
                 f'{self.average_cpu:.1f}',
                 self.std_dev_average_cpu,
             ],
@@ -192,6 +238,7 @@ class Summarizer:
             table,
             headers=[
                 f'Metric over {self.total_files} run(s)',
+                *[f'Run {n}' for n in range(1, self.total_files + 1)],
                 'Mean',
                 'Standard Deviation',
             ],
@@ -205,14 +252,29 @@ class Summarizer:
         """
         return json.dumps(
             {
-                'total_time': self.total_time,
-                'std_dev_total_time': self.std_dev_total_time,
-                'max_memory': self.max_memory,
-                'std_dev_max_memory': self.std_dev_max_memory,
-                'average_memory': self.average_memory,
-                'std_dev_average_memory': self.std_dev_average_memory,
-                'average_cpu': self.average_cpu,
-                'std_dev_average_cpu': self.std_dev_average_cpu,
+                'executions': [
+                    {
+                        'execution_time': self._totals['time'][file],
+                        'average_memory': self._totals['average_memory'][file],
+                        'max_memory': self._totals['max_memory'][file],
+                        'average_cpu': self._totals['average_cpu'][file],
+                        'max_cpu': self._totals['max_cpu'][file],
+                        'end_time': self._end_times[file],
+                    }
+                    for file in range(self.total_files)
+                ],
+                'aggregate_stats': {
+                    'execution_time': self.total_time,
+                    'std_dev_execution_time': self.std_dev_total_time,
+                    'average_memory': self.average_memory,
+                    'std_dev_average_memory': self.std_dev_average_memory,
+                    'max_memory': self.max_memory,
+                    'std_dev_max_memory': self.std_dev_max_memory,
+                    'average_cpu': self.average_cpu,
+                    'std_dev_average_cpu': self.std_dev_average_cpu,
+                    'max_cpu': self.max_cpu,
+                    'std_dev_max_cpu': self.std_dev_max_cpu,
+                },
             },
             indent=2,
         )
@@ -232,7 +294,7 @@ class Summarizer:
                 self._validate_row(row, benchmark_file)
                 self.process_data_row(row)
             self._validate_row(row, benchmark_file)
-            self._end_time = self._get_time(row)
+            self._end_times.append(self._get_time(row))
             self._finalize_processed_data_for_file()

     def _validate_row(self, row, filename):
@@ -261,7 +323,7 @@ class Summarizer:
     def _finalize_processed_data_for_file(self):
         # Add numbers to the total, which keeps track of data over
         # all files provided.
-        self._totals['time'].append(self._end_time - self._start_time)
+        self._totals['time'].append(self._end_times[-1] - self._start_time)
         self._totals['max_cpu'].append(self._maximums['cpu'])
         self._totals['max_memory'].append(self._maximums['memory'])
         self._totals['average_cpu'].append(
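For readers checking the figures in the updated docstring, the Mean and Standard Deviation columns are consistent with a plain arithmetic mean and a population standard deviation over the per-run values. The snippet below reproduces the Total Time aggregates from the two-run example; it is a verification sketch, not code from this pull request:

    import statistics

    run_times = [263.8085448741913, 198.05210328102112]

    mean_time = statistics.mean(run_times)  # 230.9303240776062
    std_dev = statistics.pstdev(run_times)  # approximately 32.87822079658508

    print(f'{mean_time:.3f}', std_dev)  # 230.930 32.878220796...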