text_completion output issues #7874
LlamaIndex uses the same library 😄
@one-punch-dimash what do you see when running with debug logging enabled? This should show the raw call being made to the LLM API, which could help explain the difference.
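For reference, a minimal sketch of enabling that verbose logging with the litellm Python package; the model name below is a placeholder, not taken from this thread:

```python
import litellm

# Global switch for verbose logging; newer releases also expose
# litellm._turn_on_debug(). Either prints the raw request and response.
litellm.set_verbose = True

response = litellm.text_completion(
    model="ollama/qwen2.5",  # placeholder model, substitute your own
    prompt="Classify the sentiment of: 'I love this.'",
)
print(response.choices[0].text)
```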
Request to litellm:
POST Request Sent from LiteLLM:
RAW RESPONSE:
I am getting a text_completion response and seeing this:
Its printed form is: TextChoices(stop_reason=None, prompt_logprobs=None, finish_reason='length', index=0, text='negative\nnegative\n\n### Output\nNegatives', logprobs=None). The correct output should be 'negative' (case sensitive, I specify it in the prompt); however, the model prints out 'negative' several times. UPD: made some edits for clarification.
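One detail worth noting in that TextChoices output: finish_reason='length' means the generation hit the token limit rather than stopping on its own, so the model keeps emitting labels until it is cut off. A sketch of constraining the completion to a single label; the parameter values here are my guesses, not taken from this issue:

```python
import litellm

prompt = (
    "Classify the sentiment as one of: negative, positive, sarcastic.\n"
    "Text: 'This is awful.'\n"
    "Category:"
)

response = litellm.text_completion(
    model="ollama/qwen2.5",  # placeholder; substitute the model from the report
    prompt=prompt,
    max_tokens=5,            # a single category name needs only a few tokens
    temperature=0,           # deterministic choice of one label
    stop=["\n"],             # stop at the first newline so only one label is returned
)
print(response.choices[0].text.strip())
```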
I have tried the native LiteLLM library for text categorization based on prompt instructions.
The output format is explained in the prompt and must be the single name of a chosen category. LiteLLM's text_completion gives weird responses: three different labels for Qwen, and an empty response text for Llama 3.2. What makes this weird is that I simultaneously ran the same prompt through llama_index.llms.litellm's LiteLLM wrapper and got a correct response (see the sketch after the outputs below).
-------------Native LiteLLM-------------
'negative
positive
sarcastic'
-------------LlamaIndex LiteLLM-------------
'negative'
The correct answer is 'negative'; however, for models other than Qwen2.5, native LiteLLM gives an empty output.
Same prompt, same parameters, same models, yet the LlamaIndex framework behaves better than native LiteLLM itself, which is strange (maybe I am wrong).
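For comparison, a minimal sketch of the two code paths as I understand them; the model name and prompt are placeholders, and the LlamaIndex side assumes the llama-index-llms-litellm integration is installed:

```python
import litellm
from llama_index.llms.litellm import LiteLLM

prompt = (
    "Classify the sentiment as one of: negative, positive, sarcastic.\n"
    "Text: 'This is awful.'\n"
    "Category:"
)

# Native LiteLLM: a raw text completion against the backend.
native = litellm.text_completion(model="ollama/qwen2.5", prompt=prompt)
print(native.choices[0].text)

# LlamaIndex wrapper around LiteLLM: same model, but the wrapper applies
# its own defaults (e.g. max_tokens and stop handling may differ), which
# could explain why the two paths return different text.
llm = LiteLLM(model="ollama/qwen2.5")
print(llm.complete(prompt).text)
```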