Bugfix/kickoff hangs when llm call fails #1943
Conversation
Disclaimer: This review was made by a crew of AI Agents.

**Code Review Comment for PR #1943**

**Overview**

This PR significantly improves the handling of authentication errors and the logging mechanisms in the CrewAI codebase, specifically around the LiteLLM integration and error management.

**Key Code Improvements**

**1. Logging Enhancements**

The current code contains several print statements for debugging purposes, which is not suitable for production-level code.

Current implementation:

```python
print("Authentication error: Please check your API credentials")
```

Suggested improvement:

```python
import logging

logging.error("Authentication error: Please check your API credentials")
```

This change allows for different logging levels and better control over log output.

**2. Error Handling Improvement**

The nested try-except structures reduce readability and complicate maintenance. It is advisable to simplify the error handling mechanism.

Current implementation:

```python
try:
    # operation
except LiteLLMAuthenticationError as auth_error:
    # handle auth error
except Exception as e:
    # handle general error
```

Suggested improvement:

```python
def _handle_errors(self):
    try:
        # operation
    except LiteLLMAuthenticationError:
        self._handle_auth_error()
    except Exception as e:
        self._handle_generic_error(e)
```

This modular approach improves readability and can be reused in other parts of the code.

**3. Magic Constants**

The code currently uses hard-coded strings, which should be defined as constants for maintainability.

Current implementation:

```python
self._printer.print(content="Authentication error with litellm occurred", color="red")
```

Suggested improvement:

```python
class CrewAgentExecutor:
    ERROR_COLOR = "red"
    AUTH_ERROR_MESSAGE = "Authentication error with litellm occurred."
```

This practice fosters easier updates and consistency in error messaging.

**Links to Historical Context and Learnings**

While I couldn't fetch related PRs, previous pull requests have highlighted the need for consistency in error handling and logging strategy, as shown by the ongoing shift from print statements to a more robust logging framework. Lessons learned from earlier PRs emphasize the value of structured error management and a cohesive logging strategy, which greatly improves debugging and system transparency.

**Specific Recommendations**

**Conclusion**

Overall, the adjustments made in PR #1943 enhance code maintainability, reliability, and production readiness. Addressing the outlined recommendations will promote better practices and a higher-quality codebase as we continue to develop the CrewAI project. This continued focus on code quality will pave the way for a robust error handling and logging framework that supports our ongoing efforts in machine learning and AI.
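Taken together, the review's three suggestions could look roughly like the sketch below: the logging module replaces print, errors are routed through small dedicated handlers, and the message and color strings become class constants. This is a minimal illustration rather than the actual CrewAI implementation; the handler names (`_handle_auth_error`, `_handle_generic_error`) come from the review itself, and the `litellm` import alias is an assumption.

```python
import logging

# Assumed import path and alias for LiteLLM's authentication error.
from litellm import AuthenticationError as LiteLLMAuthenticationError

logger = logging.getLogger(__name__)


class CrewAgentExecutor:
    # Constants instead of hard-coded strings (suggestion 3).
    ERROR_COLOR = "red"
    AUTH_ERROR_MESSAGE = "Authentication error with litellm occurred."

    def _handle_errors(self, operation):
        """Run `operation`, routing failures to dedicated handlers (suggestion 2)."""
        try:
            return operation()
        except LiteLLMAuthenticationError:
            self._handle_auth_error()
            raise  # bad credentials: do not retry
        except Exception as e:
            self._handle_generic_error(e)
            raise

    def _handle_auth_error(self) -> None:
        # Logging framework instead of print (suggestion 1).
        logger.error(self.AUTH_ERROR_MESSAGE)

    def _handle_generic_error(self, error: Exception) -> None:
        logger.error("LLM call failed: %s", error)
```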
if isinstance(e, LiteLLMAuthenticationError):
    # Do not retry on authentication errors
    raise e
Nice! When we raise, should we create a colored logger to make it clear that no keys were provided?
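One way to act on this suggestion, sketched under the assumption that a plain ANSI escape is acceptable; the real code could just as well route the message through the executor's existing `_printer` or a colored logging handler. The helper name `reraise_auth_error` and the constants are hypothetical.

```python
RED = "\033[91m"
RESET = "\033[0m"


def reraise_auth_error(error: Exception) -> None:
    # Hypothetical helper: print an unmistakable, colored hint that the API
    # key is missing or invalid, then re-raise so callers fail fast instead
    # of silently retrying.
    print(f"{RED}Authentication error: no valid API key provided; "
          f"check your LLM credentials.{RESET}")
    raise error
```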
@@ -145,10 +149,40 @@ def _invoke_loop(self):
    if self._is_context_length_exceeded(e):
        self._handle_context_length()
        continue
    elif self._is_litellm_authentication_error(e):
Nice, you did it here! Beautiful!
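For reference, a helper like `_is_litellm_authentication_error` presumably boils down to an `isinstance` check against LiteLLM's authentication error. A minimal sketch, written as a free function here even though the PR defines it as a method on the executor, and assuming the import path:

```python
# Assumed import path and alias for LiteLLM's authentication error.
from litellm import AuthenticationError as LiteLLMAuthenticationError


def _is_litellm_authentication_error(exception: Exception) -> bool:
    """Return True when the exception signals a LiteLLM authentication failure."""
    return isinstance(exception, LiteLLMAuthenticationError)
```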
goal="test goal", | ||
backstory="test backstory", | ||
llm=LLM(model="gpt-4"), | ||
max_retry_limit=0, # Disable retries for authentication errors |
Shouldn't this work without doing this? As in, if that error happens, we should drop max_retry_limit to 0?
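The behavior this question implies can be illustrated with a generic retry loop; this is not the actual CrewAI executor, and `call_with_retries`, its parameters, and the import path are illustrative. Transient errors consume the retry budget, while authentication errors are raised immediately regardless of what `max_retry_limit` is set to.

```python
import time

# Assumed import path and alias for LiteLLM's authentication error.
from litellm import AuthenticationError as LiteLLMAuthenticationError


def call_with_retries(call, max_retry_limit: int = 2, delay: float = 1.0):
    """Retry `call` on transient failures, but never on authentication errors."""
    attempts = 0
    while True:
        try:
            return call()
        except LiteLLMAuthenticationError:
            # Retrying cannot fix a missing or invalid API key, so fail fast.
            raise
        except Exception:
            attempts += 1
            if attempts > max_retry_limit:
                raise
            time.sleep(delay)
```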
Pretty important bug fix for OSS.

Root issue:

Issues for users: kickoff hangs when LLM call fails #1934

Solution:

Closes #1934
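A rough reproduction of the user-facing behavior, assuming a CrewAI setup with no valid API key configured: before the fix, `kickoff` would hang on the failing LLM call; with the fix, the error surfaces promptly. The agent and task values below are placeholders.

```python
from crewai import Agent, Crew, Task

researcher = Agent(
    role="researcher",
    goal="test goal",
    backstory="test backstory",
)
task = Task(
    description="Summarize anything.",
    expected_output="A short summary.",
    agent=researcher,
)
crew = Crew(agents=[researcher], tasks=[task])

try:
    crew.kickoff()
except Exception as exc:
    # With the fix, an authentication failure propagates here instead of
    # the run hanging while the LLM call is retried.
    print(f"LLM call failed: {exc}")
```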