Implement Token Usage Tracking #27

Merged · 7 commits · Oct 29, 2024
Conversation

ZachZimm
Contributor

This PR resolves issue #8 by adding token usage information to the response messages. The responses conform to the OpenAI chat completion schema for both non-streaming and streaming modes.
In this implementation, all streamed chunks include up-to-date token usage information.

This streaming implementation differs slightly from OpenAI's: their API accepts a `stream_options` dict parameter which can include `"include_usage": True`; without it, usage information is not streamed back. Additionally, they put forth a few rules about which chunks of the stream contain usage data.
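
For reference, this is roughly how a client opts into usage data on OpenAI's streaming API (a minimal sketch; the model name and prompt are placeholders, and the final-chunk behavior described in the comments is OpenAI's, not necessarily this server's):

```python
from openai import OpenAI

client = OpenAI()
stream = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
    stream_options={"include_usage": True},  # without this, streamed chunks carry no usage
)
for chunk in stream:
    if chunk.usage is not None:
        # OpenAI sends usage on a final chunk whose choices list is empty
        print(chunk.usage.prompt_tokens, chunk.usage.completion_tokens, chunk.usage.total_tokens)
```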

I did not add a `count_tokens` function as suggested in the issue; it seems better to count tokens during generation, since we are already using the tokenizer there, and in the non-streaming case we already have an array of prompt tokens.
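
For context, the usage block attached to responses follows OpenAI's schema; the token counts below are illustrative only:

```python
# Illustrative only: the shape of the usage object attached to responses.
usage = {
    "prompt_tokens": 12,       # tokens in the rendered prompt
    "completion_tokens": 34,   # tokens generated so far
    "total_tokens": 46,        # prompt_tokens + completion_tokens
}
# Non-streaming: "usage" is a top-level field of the chat completion object.
# Streaming (this PR): every chunk carries an up-to-date "usage" object,
# so a client can watch the running totals as tokens arrive.
```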

@Blaizzy
Collaborator

Blaizzy commented Aug 24, 2024

Thank you very much for the great PR, @ZachZimm!

@Blaizzy
Collaborator

Blaizzy commented Aug 24, 2024

  1. This streaming implementation differs slightly from OpenAI's: their API accepts a `stream_options` dict parameter which can include `"include_usage": True`; without it, usage information is not streamed back.
  2. Additionally, they put forth a few rules about which chunks of the stream contain usage data.

Can we implement this as well?

We need 1, to make it easier to define when we need usage information.

And 2 is simple to implement: all you need to do is pass the usage after streaming all tokens and just before yielding done (i.e., https://github.com/Blaizzy/fastmlx/blob/main/fastmlx/utils.py#L407).
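
A minimal sketch of the suggested flow, not the actual fastmlx code (the function and variable names here are hypothetical, and it assumes the underlying generator yields one token at a time): honor `stream_options["include_usage"]`, stream content chunks as usual, then emit a single usage chunk just before the final done marker.

```python
def stream_chat_chunks(prompt_tokens, token_stream, stream_options=None):
    """Yield OpenAI-style chunk dicts; emit usage once, right before [DONE]."""
    include_usage = bool(stream_options and stream_options.get("include_usage"))
    completion_tokens = 0
    for piece in token_stream:
        completion_tokens += 1
        yield {"choices": [{"delta": {"content": piece}}], "usage": None}
    if include_usage:
        yield {
            "choices": [],  # the usage-only chunk carries no content
            "usage": {
                "prompt_tokens": len(prompt_tokens),
                "completion_tokens": completion_tokens,
                "total_tokens": len(prompt_tokens) + completion_tokens,
            },
        }
    # the caller then terminates the SSE stream with "data: [DONE]"
```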

Review threads on fastmlx/utils.py (resolved)
@Blaizzy
Collaborator

Blaizzy commented Aug 24, 2024

Please run pre-commit to fix styling check errors.

@ZachZimm
Contributor Author

I pushed a new commit based on your feedback. I ran pre-commit this time (sorry, I hadn't heard of it before), and there shouldn't be any stray comments or prints.

@Blaizzy
Collaborator

Blaizzy commented Sep 10, 2024

@ZachZimm please update the tests to account for the token usage info; this is what's causing them to fail.
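
A hypothetical shape for that test update (the endpoint path, payload, and `client` fixture are assumptions, not taken from the fastmlx test suite): the assertions just need to expect the new usage block.

```python
def test_chat_completion_reports_usage(client):  # `client`: an assumed FastAPI TestClient fixture
    payload = {
        "model": "test-model",
        "messages": [{"role": "user", "content": "Hello"}],
    }
    body = client.post("/v1/chat/completions", json=payload).json()
    usage = body["usage"]
    assert set(usage) == {"prompt_tokens", "completion_tokens", "total_tokens"}
    assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
```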

Blaizzy merged commit cd199d8 into arcee-ai:main on Oct 29, 2024
2 checks passed