# Support LLM managers that use `model` instead of `model_name` #1887

## Conversation
**Disclaimer:** This review was made by a crew of AI Agents.

## Code Review Comment for PR #1887

### Overview
This pull request enhances the manager agent creation functionality in `_create_manager_agent`.

### Changes Made
The new code introduces a refined attribute lookup mechanism:

```python
self.manager_llm = (
    getattr(self.manager_llm, "model_name", None)
    or getattr(self.manager_llm, "model", None)  # New addition
    or getattr(self.manager_llm, "deployment_name", None)
    or self.manager_llm
)
```

### Positive Aspects

### Suggestions for Improvement

### Potential Issues to Consider

### Testing Recommendations

### Summary
While the changes introduced in PR #1887 are beneficial and preserve existing behavior, follow-up actions on validation, documentation enhancements, and testing will bolster the robustness and maintainability of the code. This PR is recommended for approval with the above improvements considered.

### Conclusion
The modifications improve compatibility with different LLM implementations without compromising the existing functionality of the codebase. Implementing the suggested improvements will ensure high code quality and resilience against future changes or bugs.
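As a concrete illustration of the testing recommendation above, the fallback chain can be exercised in isolation with a few assertions. This is a hypothetical sketch, not the crewAI test suite: the helper `resolve_manager_llm` and the stub classes are invented here to mirror the lookup logic shown in the review.

```python
# Stub LLM classes illustrating the three attribute conventions the
# lookup supports. These are illustrative, not real library classes.
class ModelNameLLM:
    model_name = "gpt-4o"

class ModelLLM:
    model = "ollama/llama3"  # e.g. langchain_ollama's ChatOllama exposes `model`

class DeploymentLLM:
    deployment_name = "azure-gpt4"

def resolve_manager_llm(manager_llm):
    """Mirror of the PR's lookup: return the model string regardless of
    which attribute name the LLM implementation uses."""
    return (
        getattr(manager_llm, "model_name", None)
        or getattr(manager_llm, "model", None)
        or getattr(manager_llm, "deployment_name", None)
        or manager_llm
    )

assert resolve_manager_llm(ModelNameLLM()) == "gpt-4o"
assert resolve_manager_llm(ModelLLM()) == "ollama/llama3"
assert resolve_manager_llm(DeploymentLLM()) == "azure-gpt4"
assert resolve_manager_llm("ollama/llama3") == "ollama/llama3"  # plain string passes through
```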
**Disclaimer:** This review was made by a crew of AI Agents.

## Code Review Comment for PR #1887

### Overview
This pull request proposes a modification to the `manager_llm` attribute lookup.

### Functionality Change
The change implements another fallback in the attribute lookup for initializing `self.manager_llm`:

```python
self.manager_llm = (
    getattr(self.manager_llm, "model_name", None)
    or getattr(self.manager_llm, "model", None)  # New addition
    or getattr(self.manager_llm, "deployment_name", None)
    or self.manager_llm
)
```

The intention here is clear: enabling the code to work seamlessly across different configurations of LLMs.

### Code Quality Assessment

#### Positive Aspects

### Suggestions for Improvement

### Final Recommendations
This change enriches the flexibility of our system regarding LLM configurations while maintaining a clean, organized code structure. Implementing the proposed improvements will further enhance maintainability and robustness.
Good fix @y4izus! Thank you!
When trying to use a hierarchical process with `ChatOllama` from `langchain_ollama` as `manager_llm`, I get this error:

```
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=model='ollama/llama3'
```

`ChatOllama` requires the model to be passed via the `model` parameter, while `_create_manager_agent` expects a `model_name`. As a result, the whole parameter, including the `model=` part, is passed as the provider. This PR adds the option to get the model when the parameter used is named `model`.
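The odd-looking `model=model='ollama/llama3'` in the error message can be reproduced in miniature: without a `model` fallback, the lookup chain returns the LLM object itself, and the object's string form (which embeds `model='ollama/llama3'`) ends up where a plain model string is expected. The following is an illustrative sketch; `FakeChatOllama` is a stand-in invented here, not the real `langchain_ollama` class, and the real failure path through litellm may differ in detail.

```python
# Stand-in for a ChatOllama-style LLM: stores the model under `model`
# and has no `model_name` attribute.
class FakeChatOllama:
    def __init__(self, model):
        self.model = model

    def __repr__(self):
        return f"model={self.model!r}"

llm = FakeChatOllama("ollama/llama3")

# Old lookup: no `model` fallback, so the object itself survives the chain.
old = (
    getattr(llm, "model_name", None)
    or getattr(llm, "deployment_name", None)
    or llm
)
# Interpolating the object reproduces the shape of the reported message.
assert f"model={old}" == "model=model='ollama/llama3'"

# New lookup with the `model` fallback added by this PR.
new = (
    getattr(llm, "model_name", None)
    or getattr(llm, "model", None)
    or getattr(llm, "deployment_name", None)
    or llm
)
assert new == "ollama/llama3"
```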