I'm trying to connect to our deployed LLM (online), but it keeps crashing. The same parameters work with a pydantic-ai agent.
I don't understand what is wrong here:
Hey @aghozlane! I'm here to help you with any issues you're facing. Let's work together to solve the problem.
The error you're encountering, litellm.exceptions.BadRequestError, indicates that the LLM provider is not specified correctly. In the litellm configuration, you need to ensure that the LLM provider is explicitly defined. The error message suggests that you should pass the model in a format like completion(model='huggingface/starcoder',...).
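For context, a direct litellm call with an explicit provider prefix looks roughly like this (a minimal sketch; the model name is the one from litellm's own error message, and the message content is a placeholder):

```python
import litellm

# The "huggingface/" prefix tells litellm which provider to route the request to.
response = litellm.completion(
    model="huggingface/starcoder",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```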
To resolve this, you should check the model_list configuration in your local_llm_config and ensure that the model is specified with the correct provider. Here's an example of how you might structure it:
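Here is a minimal sketch of how that model_list might look, assuming a litellm Router in front of your deployment (the api_base, api_key, and model names below are placeholders, not your actual values):

```python
from litellm import Router

model_list = [
    {
        # Alias that your code will request.
        "model_name": "llama3-3-70b-local",
        "litellm_params": {
            # Provider prefix + deployed model name; litellm uses the prefix to pick the provider.
            "model": "huggingface/llama3-3-70b-local",
            # Placeholder endpoint and key for your own deployment.
            "api_base": "https://your-deployment.example.com/v1",
            "api_key": "YOUR_API_KEY",
        },
    }
]

router = Router(model_list=model_list)
response = router.completion(
    model="llama3-3-70b-local",
    messages=[{"role": "user", "content": "Hello!"}],
)
```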
Make sure to replace "huggingface/llama3-3-70b-local" with the correct provider and model name that matches your deployment. You can refer to the litellm documentation for more details on specifying the provider correctly.