add benchmarks for get_for_dialect #1862

Merged
merged 6 commits into tortoise:develop from markus-96:benchmark-get_for_dialect on Jan 28, 2025

Conversation

markus-96
Contributor

These are high-level benchmarks that end up calling Field.get_for_dialect.

Description

The above-mentioned method has some potential for optimization.

Motivation and Context

I want to optimize the method, like in this comment: #1859 (comment)

It is not called very often, but it is hit, for example, whenever you use a tortoise.expressions.Function or filter through / retrieve values of related models, ...
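Roughly, the new benchmarks look like this (a sketch only; the model, fixture wiring and import path are illustrative, the real tests live in tests/benchmarks/):

```python
import asyncio

from tortoise.functions import Count

from tests.benchmarks.models import BenchmarkFewFields  # illustrative import path


def test_expressions_count(benchmark, few_fields_benchmark_dataset):
    # The real benchmarks reuse a shared event loop; shown inline for brevity.
    loop = asyncio.get_event_loop()

    async def _bench():
        # Count() is a tortoise.expressions Function, so resolving the query
        # goes through Field.get_for_dialect for the active dialect.
        await BenchmarkFewFields.annotate(n=Count("id")).values("id", "n")

    def run():
        loop.run_until_complete(_bench())

    benchmark(run)
```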

How Has This Been Tested?

There seems to be a slight improvement with my optimizations.

Checklist:

  • My code follows the code style of this project.
  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.
  • I have added the changelog accordingly.
  • I have read the CONTRIBUTING document.
  • I have added tests to cover my changes.
  • All new and existing tests passed.

these are high-level benchmarks that will somehow call Field.get_for_dialect

codspeed-hq bot commented Jan 23, 2025

CodSpeed Performance Report

Merging #1862 will not alter performance

Comparing markus-96:benchmark-get_for_dialect (ad1e48b) with develop (3a5e836)

Summary

✅ 8 untouched benchmarks
🆕 4 new benchmarks

Benchmarks breakdown

| Benchmark                                      | BASE | HEAD     | Change |
|------------------------------------------------|------|----------|--------|
| 🆕 test_expressions_count                      | N/A  | 972.7 µs | N/A    |
| 🆕 test_expressions_f                          | N/A  | 849.4 µs | N/A    |
| 🆕 test_field_attribute_lookup_get_for_dialect | N/A  | 621.9 µs | N/A    |
| 🆕 test_relations_values_related_m2m           | N/A  | 1.2 ms   | N/A    |


coveralls commented Jan 23, 2025

Pull Request Test Coverage Report for Build 13008190876

Details

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 89.423%

Totals Coverage Status
Change from base Build 12927974311: 0.0%
Covered Lines: 6494
Relevant Lines: 7079

💛 - Coveralls

CHANGELOG.rst (outdated review thread, resolved)
tests/benchmarks/test_annotate.py (outdated review thread, resolved)
loop.run_until_complete(_bench())


def test_values_related_m2m(benchmark, create_team_with_participants):
Contributor


This one doesn't have .annotate but it is located in test_annotate.py. Maybe move it to test_filter.py?

Contributor Author


Moved it to test_relations.py for now. Maybe we can add more tests benchmarking relational behaviour in the future.

from tortoise.functions import Count


def test_function_count(benchmark, few_fields_benchmark_dataset):
Contributor


All the functions in this file are going to be the names of the benchmarks, so it would be nice to name them well:

  • test_annotate_with_count
  • test_annotate_with_decimal
  • etc.

Contributor Author


test_annotate.py was a working title I did not change yet, sorry for that. I simply wanted some functions that call get_for_dialect, so I looked up where the function is used, placed a 1/0 before/after the function call, ran make ci and hoped an error would be thrown somewhere. Then I copied these "failing" tests and adapted them into benchmarks. During this, I skipped everything in schema_generator.py because that is only executed once at startup and would not benchmark any real-world scenario.
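In other words, the temporary edit looked roughly like this (a throwaway tripwire, not part of the PR; class and signature paraphrased):

```python
class Field:  # stand-in for tortoise.fields.base.Field, for illustration only
    def get_for_dialect(self, dialect, key):
        1 / 0  # tripwire: every test that now fails exercises this method
        ...   # original implementation continues here
```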

Contributor

@henadzit left a comment


Looks good to me!

@henadzit merged commit 95f9467 into tortoise:develop on Jan 28, 2025
9 checks passed