
feat: improve logging and add optional perf logging #248

Merged: 2 commits merged into master from feat_improve_logging on Jan 17, 2025

Conversation

@nherment (Contributor) commented Jan 14, 2025

Mostly minor changes:

  • Remove the use of `Console` from server.py. This cascades into other changes downstream.
  • Add latency logs behind a `LOG_PERFORMANCE` env var:
    • Add a wrapper for all API calls to measure latency at the API level (a sketch follows this list)
    • Add detailed performance monitoring to the `tool_calling_llm` core `call()` function
  • Align the HTTP server (uvicorn) logging pattern with the other logging calls
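
A minimal sketch of what the API-level latency wrapper could look like, assuming a simple decorator gated on the `LOG_PERFORMANCE` env var. The decorator name `log_api_latency` and the example handler are hypothetical, not the PR's actual code:

```python
import logging
import os
import time
from functools import wraps

# LOG_PERFORMANCE is the gate described in this PR; everything below is illustrative.
LOG_PERFORMANCE = os.environ.get("LOG_PERFORMANCE")


def log_api_latency(func):
    """Hypothetical wrapper: log how long an API call took, when enabled."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        if not LOG_PERFORMANCE:
            return func(*args, **kwargs)
        start = time.monotonic()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = int((time.monotonic() - start) * 1000)
            logging.info("%s %dms", func.__name__, elapsed_ms)
    return wrapper


@log_api_latency
def investigate(request):  # placeholder endpoint, not the real handler
    ...
```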

Example of perf metrics for `tool_calling_llm`:

```
2025-01-14 09:41:17.177 INFO     tool_calling_llm.call(TOTAL) 8167ms
2025-01-14 09:41:17.177 INFO     	tool_calling_llm.call(get_all_tools_openai_format) +0ms  0ms
2025-01-14 09:41:17.177 INFO     	tool_calling_llm.call(start iteration 0) +0ms  0ms
2025-01-14 09:41:17.177 INFO     	tool_calling_llm.call(count tokens) +6ms  6ms
2025-01-14 09:41:17.177 INFO     	tool_calling_llm.call(llm.completion) +1831ms  1837ms
2025-01-14 09:41:17.177 INFO     	tool_calling_llm.call(pre-tool-calls) +0ms  1837ms
2025-01-14 09:41:17.177 INFO     	tool_calling_llm.call(tool completed fetch_finding_by_id) +570ms  2407ms
2025-01-14 09:41:17.177 INFO     	tool_calling_llm.call(end iteration 0) +0ms  2407ms
2025-01-14 09:41:17.177 INFO     	tool_calling_llm.call(start iteration 1) +0ms  2407ms
2025-01-14 09:41:17.177 INFO     	tool_calling_llm.call(count tokens) +7ms  2415ms
2025-01-14 09:41:17.177 INFO     	tool_calling_llm.call(llm.completion) +5752ms  8167ms
```
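
The per-step lines above suggest a checkpoint-style timer that records a label, the delta since the previous checkpoint, and the cumulative time. A rough sketch of such a helper, assuming a hypothetical `PerformanceTiming` class (the class and method names are illustrative, not taken from the PR's code):

```python
import logging
import time


class PerformanceTiming:
    """Hypothetical checkpoint timer producing output shaped like the excerpt above."""

    def __init__(self, name: str):
        self.name = name
        self.start = time.monotonic()
        self.last = self.start
        self.checkpoints: list[tuple[str, int, int]] = []

    def measure(self, label: str) -> None:
        # Record the step label, the delta since the last checkpoint, and the running total.
        now = time.monotonic()
        delta_ms = int((now - self.last) * 1000)
        total_ms = int((now - self.start) * 1000)
        self.checkpoints.append((label, delta_ms, total_ms))
        self.last = now

    def end(self) -> None:
        total_ms = int((time.monotonic() - self.start) * 1000)
        logging.info("%s(TOTAL) %dms", self.name, total_ms)
        for label, delta_ms, cumulative_ms in self.checkpoints:
            logging.info("\t%s(%s) +%dms  %dms", self.name, label, delta_ms, cumulative_ms)


# Usage sketch inside a hypothetical call() loop:
# perf = PerformanceTiming("tool_calling_llm.call")
# perf.measure("get_all_tools_openai_format")
# ... one measure() per step, e.g. "count tokens", "llm.completion" ...
# perf.end()
```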

@nherment marked this pull request as ready for review January 14, 2025 10:06
@arikalon1 (Contributor) left a comment

nice work

@arikalon1 merged commit 05cf2e5 into master Jan 17, 2025
12 checks passed
@arikalon1 deleted the feat_improve_logging branch January 17, 2025 16:05
moshemorad pushed a commit that referenced this pull request Jan 27, 2025
moshemorad pushed a commit that referenced this pull request Jan 27, 2025