
Add more benchmarking documentation #822

Merged
AdamGleave merged 7 commits into master from add_benchmarks_to_documentation on Dec 4, 2023

Conversation

ernestum (Collaborator)

Adds the benchmarking README and the Benchmark Summary to the Sphinx documentation.


codecov bot commented Nov 30, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison: base (1af5e4d) at 95.67% vs. head (e2a53e0) at 95.63%.

❗ Current head e2a53e0 differs from the pull request's most recent head 6b2c89b. Consider uploading reports for commit 6b2c89b to get more accurate results.

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #822      +/-   ##
==========================================
- Coverage   95.67%   95.63%   -0.04%     
==========================================
  Files         102      102              
  Lines        9654     9655       +1     
==========================================
- Hits         9236     9234       -2     
- Misses        418      421       +3     


@ernestum ernestum requested review from Rocamonde and removed request for dan-pandori November 30, 2023 13:58
@AdamGleave AdamGleave requested review from tomtseng and qxcv and removed request for Rocamonde November 30, 2023 19:04
@qxcv (Member) left a comment

docs/main-concepts/benchmarks.md (review thread resolved)

# -- Download the latest benchmark summary -------------------------------------
download_url = (
    "https://github.com/HumanCompatibleAI/imitation/releases/latest/"
Collaborator

using /latest redirects to the most recent release, which I think means we'll need to provide benchmark_runs.zip on every subsequent release — is that something we're committed to doing?

ernestum (Collaborator Author)

I would be. What do you think, @AdamGleave? I would eventually love to make this part of the release pipeline.
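
For context, here is a minimal sketch of what the conf.py download hook discussed above could look like, assuming the asset follows GitHub's /releases/latest/download/<asset> pattern and is named benchmark_runs.zip as mentioned in this thread; the extraction target directory is likewise an assumption.

# Minimal sketch (assumptions noted above): fetch the latest benchmark summary
# at docs build time using only the standard library.
import io
import urllib.request
import zipfile

download_url = (
    "https://github.com/HumanCompatibleAI/imitation/releases/latest/"
    "download/benchmark_runs.zip"  # assumed asset path on the latest release
)

# /releases/latest/ redirects to the newest tagged release, so this only works
# if every future release also ships a benchmark_runs.zip asset.
with urllib.request.urlopen(download_url) as response:
    archive = zipfile.ZipFile(io.BytesIO(response.read()))
archive.extractall("benchmark_runs")  # assumed extraction target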

@ernestum (Collaborator Author) commented Dec 4, 2023

Btw, I only made type annotation changes in mce_irl.py, where codecov indicates a change in coverage. So I think that change is spurious and we can ignore it. @AdamGleave, would you merge this?

@AdamGleave AdamGleave merged commit 928d576 into master Dec 4, 2023
7 of 9 checks passed
@AdamGleave AdamGleave deleted the add_benchmarks_to_documentation branch December 4, 2023 23:50
4 participants