
fix: the OpenAI-compatible interface returns an incorrect number of choices when the 'n' parameter is not supported #1153

Open

wants to merge 1 commit into master

Conversation

coolbeevip (Collaborator)

Description

When an OpenAI-compatible interface does not support the 'n' parameter, we now call the API repeatedly until the expected number of completion choices has accumulated.

This adds the ability to call the API repeatedly and print a warning log for each extra call.
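A minimal sketch of the idea (hypothetical helper names; only model_config_dict and the 'n' key appear in the actual diff):

    import logging

    logger = logging.getLogger(__name__)

    def collect_choices(call_model, model_config_dict: dict) -> list:
        # Number of completion choices the caller expects.
        expected = model_config_dict.get("n", 1)
        choices: list = []
        while len(choices) < expected:
            response = call_model()  # one API call; the backend may ignore 'n'
            choices.extend(response.choices)
            if len(choices) < expected:
                logger.warning(
                    f"Expected {expected} completion choices, but only "
                    f"{len(choices)} were returned. Making another call."
                )
        return choices[:expected]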

Final task prompt:
Task: Assist the Postdoc in drafting a research proposal to develop innovative, environmentally-friendly pre-training methods for large-scale language models, focusing on reducing energy consumption and carbon footprint while enhancing model performance and scalability. Aim for a minimum 20% efficiency improvement without compromising accuracy. The proposal should also suggest potential applications and societal impacts of these eco-conscious models.

RoleType.USER[Postdoc] expected 3 completion choices, but only 1 were returned. I will make another call until I accumulate 3 choices
RoleType.USER[Postdoc] expected 3 completion choices, but only 2 were returned. I will make another call until I accumulate 3 choices
Multiple messages returned in `step()`, message won't be recorded automatically. Please call `record_message()` to record the selected message manually.
> Proposals from Postdoc (RoleType.USER). Please choose an option:
Option 1:
Instruction: Research relevant literature on state-of-the-art pre-training methods for large language models.
Input: None

Option 2:
Instruction: Research relevant literature on state-of-the-art eco-friendly AI training methods.
Input: None

Option 3:
Instruction: Suggest an outline for the research proposal's structure.
Input: None

Please first enter your choice ([1-3]) and then your explanation and comparison: 

Motivation and Context

Closes #996

Types of changes

What types of changes does your code introduce? Put an x in all the boxes that apply:

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds core functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (update in the documentation)
  • Example (update in the folder of example)

Checklist

Go over all the following points, and put an x in all the boxes that apply.
If you are unsure about any of these, don't hesitate to ask. We are here to help!

  • I have read the CONTRIBUTION guide. (required)
  • My change requires a change to the documentation.
  • I have updated the tests accordingly. (required for a bug fix or a new feature)
  • I have updated the documentation accordingly.

coolbeevip marked this pull request as draft on November 5, 2024, 08:36
Wendong-Fan (Member) left a comment

Thanks @coolbeevip !
I think there are two parts we can further improve:

  1. If the model supports stream mode and the user turns on stream=True, we would get errors like the ones below, so maybe make sure the response is in ChatCompletion format?

     camel/agents/chat_agent.py:936: error: Item "Stream[ChatCompletionChunk]" of "ChatCompletion | Stream[ChatCompletionChunk]" has no attribute "choices"  [union-attr]
     camel/agents/chat_agent.py:939: error: Item "Stream[ChatCompletionChunk]" of "ChatCompletion | Stream[ChatCompletionChunk]" has no attribute "choices"  [union-attr]
     camel/agents/chat_agent.py:943: error: Item "Stream[ChatCompletionChunk]" of "ChatCompletion | Stream[ChatCompletionChunk]" has no attribute "choices"  [union-attr]

  2. We can update all configs under camel/configs to add n now that it is supported, but remember to remove this parameter in the model backend at run time, since n is not natively supported there and may raise an error if it is passed to the model client.

By the way, you can run pre-commit run --all-files before pushing to ensure the code passes the pre-commit check.

coolbeevip (Collaborator, Author)

> 1. If the model supports stream mode and the user turns on stream=True, we would get errors like the ones below, so maybe make sure the response is in ChatCompletion format?

Thank you for your suggestion. I also noticed this issue. The way the choices are determined differs for streaming responses, so I plan to improve the code a bit.

The following code already handles the two kinds of response separately. I think I can directly check the number of output_messages to avoid redundant conditional code.

        if isinstance(response, ChatCompletion):
            output_messages, finish_reasons, usage_dict, response_id = (
                self.handle_batch_response(response)
            )
        else:
            output_messages, finish_reasons, usage_dict, response_id = (
                self.handle_stream_response(response, num_tokens)
            )
> 2. We can update all configs under camel/configs to add n now that it is supported, but remember to remove this parameter in the model backend at run time [...]

Sorry, I didn't understand what you said. Could you explain it again?


Wendong-Fan (Member)

> Sorry, I didn't understand what you said. Could you explain it again?

Hey @coolbeevip ,

For the 2nd point, take camel/configs/mistral_config.py as one example: Mistral's models also don't support n natively, but since we added that support ourselves, we can add n to MistralConfig. However, in camel/models/mistral_model.py the MistralConfig is passed to Mistral.chat.complete(), which is not acceptable, so we need to remove n from MistralConfig before passing it to Mistral.chat.complete(). We can schedule a quick call if you feel it's necessary~
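A minimal sketch of that stripping step (hypothetical helper and variable names; assumes the dict-based config used elsewhere in CAMEL):

    def strip_unsupported_n(model_config_dict: dict) -> dict:
        """Return a copy of the config without the CAMEL-only 'n' key."""
        request_config = dict(model_config_dict)  # keep 'n' in the stored config
        request_config.pop("n", None)  # the native API would reject 'n'
        return request_config

    # Hypothetical call site inside the Mistral model backend:
    # response = client.chat.complete(
    #     messages=mistral_messages,
    #     model=model_type,
    #     **strip_unsupported_n(model_config_dict),
    # )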

coolbeevip (Collaborator, Author) commented Nov 6, 2024

> For the 2nd point, take camel/configs/mistral_config.py as one example [...] we need to remove n from MistralConfig before passing it to Mistral.chat.complete().

I did a bit of research; to meet the requirements, we need to make changes in these two places (a sketch of the config side follows the list):

  1. Add the n parameter here:
     https://github.com/camel-ai/camel/blob/master/camel/configs/mistral_config.py#L66

  2. Remove the n parameter here:
     https://github.com/camel-ai/camel/blob/master/camel/models/mistral_model.py#L219
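For illustration, the config-side change could look like this (a sketch only; the stand-in base class and the default value are assumptions, not the actual MistralConfig contents):

    from typing import Optional

    from pydantic import BaseModel  # standing in for CAMEL's config base class

    class MistralConfig(BaseModel):
        # Hypothetical addition: number of completion choices. The native
        # Mistral API rejects 'n', so CAMEL emulates it with repeated calls
        # and strips the key before the client call (see the sketch above).
        n: Optional[int] = 1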

Do I understand this correctly?

coolbeevip marked this pull request as ready for review on November 6, 2024, 08:50
Wendong-Fan (Member)

> Do I understand this correctly?

Yeah, exactly! Mistral is one example; I think we also have other models that don't support n natively and need to be handled like this.

coolbeevip (Collaborator, Author)

> Yeah, exactly! Mistral is one example; I think we also have other models that don't support n natively and need to be handled like this.

I understand; maybe we can add this feature in another PR.

Wendong-Fan (Member)

> I understand; maybe we can add this feature in another PR.

Sure, let's handle this further support in another PR.

Wendong-Fan (Member) left a comment

Thanks @coolbeevip ! After some consideration, I think we need to add this feature in a more systematic way, e.g. add an attribute that distinguishes whether a model supports n natively and handle the passing of the n parameter case by case. I plan to do this part next week based on the work you have done; for now you can modify the source code locally to meet your immediate needs. Thanks again for the contribution!
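A sketch of what such an attribute gate could look like (supports_native_n, the stub class, and the helper callables are hypothetical names, not part of this PR or the current code base):

    class ModelBackendStub:
        # Stand-in for CAMEL's model backend base class.
        supports_native_n: bool = False  # hypothetical new attribute

        def __init__(self, model_config_dict: dict):
            self.model_config_dict = model_config_dict

    def get_choices(backend, call_model, accumulate_choices):
        expected = backend.model_config_dict.get("n", 1)
        if backend.supports_native_n or expected == 1:
            # One call already returns all requested choices.
            return call_model().choices
        # Otherwise fall back to this PR's retry-accumulate loop.
        return accumulate_choices(expected)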

Comment on lines +969 to +971

    expected_completion_choices = self.model_backend.model_config_dict.get(
        'n', 1
    )
Wendong-Fan (Member):

I think we also need to check whether the model supports n natively; if it does, then we shouldn't handle the generation ourselves.

coolbeevip (Collaborator, Author):

If the model includes an attribute indicating whether it supports n, we can use it to determine whether n is natively supported.

In addition, for an OAI-compatible interface, support for the n parameter may only be confirmable at deployment time.

Development

Successfully merging this pull request may close these issues.

[Feature Request] Critic selection mechanism for models don't support multiple response at one time