add test coverage table to github summary #368
Conversation
Collects the code coverage report as JSON to upload to the build artifacts, then parses the file to generate a markdown table for presentation in the GitHub summary. Temporarily limits test cases and platforms in nm-remote-push, and skips benchmark, etc. in nm-build-test to make testing go faster.
temporarily restrict tests to one file for debugging
may be easier than passing the multi-line table from one action to another.
this time with correction.
and aligning columns
and revert changes used for testing purposes.
looking good
pip3 install tabulate
# As a multiline response we cannot pass the table directly to github
# so redirect it to a file, then cat the file to the output
python3 ./.github/scripts/coverage_report_breakdown.py ${{ inputs.coverage_json }} > COVERAGE_MD
the table ends up looking nice.
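If passing the multi-line table between steps ever becomes awkward again, one alternative is to append the markdown directly to the job summary file that GitHub exposes through the `GITHUB_STEP_SUMMARY` environment variable. A rough sketch, assuming the PR's `coverage_report_breakdown.py` is importable and exposes the `CodeCoverage` class shown in the snippets here (the `write_to_step_summary` helper is hypothetical):

```python
import os
from pathlib import Path

# assumed import; the module and class come from this PR's script
from coverage_report_breakdown import CodeCoverage


def write_to_step_summary(coverage_json: str) -> None:
    """Append the coverage table to the GitHub Actions job summary."""
    table_md = CodeCoverage(Path(coverage_json)).to_github_markdown()

    # GITHUB_STEP_SUMMARY points at a file; markdown appended to it is
    # rendered on the workflow run's summary page.
    summary_file = os.environ.get("GITHUB_STEP_SUMMARY")
    if summary_file:
        with open(summary_file, "a") as fh:
            fh.write(table_md + "\n")
    else:
        # running locally: just print the table
        print(table_md)
```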
args = parser.parse_args()
cc = CodeCoverage(Path(args.coverage_json_file))

print(cc.to_github_markdown())
i like that you are outputting to a file. this makes it easy to run locally.
ignore_index=True)
# clean up the `nan` values for display purposes
summary_df = summary_df.astype(str)
summary_df.replace({"nan": None}, inplace=True)
quick question, how do we generate "nan"?
this was "fun" 😉 . to get that empty row and header row between the overall info and the per-sub-directory breakdown I needed to add that empty_row_df
and header_row_df
. To get the concat
to work successfully I needed to populate those with pd.NA
, (which is nan
). once the concat
was done, I could clean up those nan
s
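For anyone following along, here is a minimal, self-contained sketch of that pattern. The column names and numbers are made up; the PR populates the spacer row with `pd.NA`, while this sketch uses `float("nan")` so that `astype(str)` reliably yields the `"nan"` string being replaced:

```python
import pandas as pd

# illustrative numbers only; the real values come from the coverage JSON
overall_df = pd.DataFrame({"name": ["vllm (overall)"], "coverage": [72.9]})
detail_df = pd.DataFrame({"name": ["core", "engine"], "coverage": [81.2, 64.5]})

# spacer row and repeated header row between the overall info and the
# per-sub-directory breakdown
empty_row_df = pd.DataFrame({"name": [float("nan")], "coverage": [float("nan")]})
header_row_df = pd.DataFrame({"name": ["name"], "coverage": ["coverage"]})

summary_df = pd.concat(
    [overall_df, empty_row_df, header_row_df, detail_df],
    ignore_index=True)

# clean up the `nan` values for display purposes (mirrors the PR's code);
# the None cells are then treated as missing values when the table is rendered
summary_df = summary_df.astype(str)
summary_df.replace({"nan": None}, inplace=True)

print(summary_df.to_markdown(index=False))
```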
our test runs won't ever enable it
looks good to me.
moved to nm-vllm-ent
This PR introduces the creation of a JSON form of the test coverage information, which up until this point has only been generated as HTML. The JSON content is then parsed by a script to generate a table that is compatible with `github-markdown` so that it can be included in the test job summary. The table shows the overall test coverage and the coverage provided by each sub-directory at the level just below vllm.

This nm remote push job demonstrates that the code changes are working (no failures), and shows the expected test coverage summary table (on the summary page, scroll down to the TEST summary) and the JSON artifact (on the summary page, scroll down to the artifacts listing and look for the files starting with `cc-vllm-json-*`).

Also included are some additional functions that will be excluded/omitted from the test coverage; see pyproject.toml.