fix: the OpenAI-compatible interface returns incorrect choices when the 'n' parameter is not supported #1153
base: master
Conversation
Thanks @coolbeevip !
I think there are 2 parts we can further improve:
- If the model supports stream mode and the user turns on `stream=True`, we would get errors like the ones below, so maybe make sure the response is in `ChatCompletion` format?

camel/agents/chat_agent.py:936: error: Item "Stream[ChatCompletionChunk]" of "ChatCompletion | Stream[ChatCompletionChunk]" has no attribute "choices" [union-attr]
camel/agents/chat_agent.py:939: error: Item "Stream[ChatCompletionChunk]" of "ChatCompletion | Stream[ChatCompletionChunk]" has no attribute "choices" [union-attr]
camel/agents/chat_agent.py:943: error: Item "Stream[ChatCompletionChunk]" of "ChatCompletion | Stream[ChatCompletionChunk]" has no attribute "choices" [union-attr]

- We can update all configs under `camel/configs` to add `n`, as this would now be supported, but also remember to remove this parameter in the model backend run, since it's not natively supported and may raise an error if `n` is passed to the model client.
By the way, you can run `pre-commit run --all-files` before pushing to ensure the code passes the pre-commit check.
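The second point above could be sketched roughly as follows; `prepare_request_config` and its parameters are hypothetical names for illustration, not existing CAMEL APIs:

```python
def prepare_request_config(model_config_dict, supports_n=False):
    """Return (config, n): a copy of the config that is safe to pass to the
    model client, plus the requested number of completion choices.

    If the backend does not natively support 'n', the key is removed so the
    client does not raise on an unsupported parameter; the caller can then
    emulate 'n' by issuing repeated requests.
    """
    config = dict(model_config_dict)  # avoid mutating the shared config
    n = config.get("n", 1)
    if not supports_n:
        config.pop("n", None)
    return config, n
```

The caller would use the returned `n` to decide how many times to invoke the client when the parameter had to be stripped.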
Thank you for your suggestion. I also noticed this issue. The way choices are determined differs when using streaming returns, so I plan to improve the code a bit. The following code already implements differentiated handling of the return results. I think I can directly check the number of output_messages to avoid redundant conditional code.

if isinstance(response, ChatCompletion):
    output_messages, finish_reasons, usage_dict, response_id = (
        self.handle_batch_response(response)
    )
else:
    output_messages, finish_reasons, usage_dict, response_id = (
        self.handle_stream_response(response, num_tokens)
    )
Sorry, I didn't understand what you said. Could you explain it again?
Hey @coolbeevip, for the 2nd point, we can take … OpenAI-compatible interfaces
I did a bit of research, and to meet the requirements we need to make changes in these two areas:
https://github.com/camel-ai/camel/blob/master/camel/configs/mistral_config.py#L66
https://github.com/camel-ai/camel/blob/master/camel/models/mistral_model.py#L219
Do I understand this correctly?
Yeah exactly! Mistral is one example; I think we also have other models that don't support `n`.
I understand, maybe we can add this feature in another PR.
Sure, let's add this further support in another PR.
Thanks @coolbeevip , after some consideration I think we need to add this feature in a more systematic way, e.g. add an attribute to distinguish whether the model supports `n` natively, and handle the `n` parameter passing case by case. I plan to do this part next week based on the work you have done; for now you can modify the source code locally to meet your immediate needs. Thanks again for the contribution!
expected_completion_choices = self.model_backend.model_config_dict.get(
    'n', 1
)
I think we also need to check whether the model supports `n` natively; if it's natively supported, then we shouldn't handle the generation by ourselves.
If the model includes an attribute indicating whether it supports `n`, we can determine whether the model natively supports it.
In addition, confirming support for the `n` parameter in an OAI-compatible interface may only be possible at deployment time.
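The attribute idea discussed here could look roughly like this; `supports_n_natively`, `ModelBackend`, and `MistralBackend` are hypothetical names for illustration, not the actual CAMEL class hierarchy:

```python
class ModelBackend:
    # Assumed default: the provider handles 'n' itself.
    supports_n_natively: bool = True


class MistralBackend(ModelBackend):
    # Per the mistral_config.py / mistral_model.py links above,
    # Mistral does not accept 'n', so it would be emulated by
    # repeated calls in the agent layer.
    supports_n_natively = False


def should_emulate_n(backend: ModelBackend, requested_n: int) -> bool:
    """True if the agent must emulate 'n' via repeated API calls."""
    return requested_n > 1 and not backend.supports_n_natively
```

With such an attribute, the check the reviewer asks for becomes a single conditional at the call site rather than per-model special cases.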
Description
When OpenAI-compatible interfaces do not support the 'n' parameter, we call the API multiple times until we accumulate the expected number of completion choices.
Adds the ability to call repeatedly and print warning logs.
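The repeated-call behaviour described above might be sketched like this; `collect_choices` and its parameters are illustrative names, not the PR's actual implementation:

```python
import logging

logger = logging.getLogger(__name__)


def collect_choices(call_api, expected_n, max_attempts=None):
    """Call `call_api` repeatedly until `expected_n` choices accumulate.

    `call_api` is assumed to return a response whose `.choices` is a list;
    backends that ignore 'n' typically return one choice per call. A bounded
    attempt count (assumed here as 2 * expected_n) guards against backends
    that never return enough choices.
    """
    if max_attempts is None:
        max_attempts = expected_n * 2
    choices = []
    attempts = 0
    while len(choices) < expected_n and attempts < max_attempts:
        response = call_api()
        got = list(response.choices)
        remaining = expected_n - len(choices) - len(got)
        if remaining > 0:
            logger.warning(
                "Backend returned %d choice(s); %d more needed, retrying.",
                len(got), remaining,
            )
        choices.extend(got)
        attempts += 1
    return choices[:expected_n]
```

Each extra call is logged as a warning, matching the PR description's intent of making the fallback behaviour visible to users.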
Motivation and Context
close #996
Types of changes
What types of changes does your code introduce? Put an `x` in all the boxes that apply:

Checklist
Go over all the following points, and put an `x` in all the boxes that apply. If you are unsure about any of these, don't hesitate to ask. We are here to help!