[BUG] crewAI training error #1935

Open
tituslhy opened this issue Jan 21, 2025 · 0 comments
Assignees: bhancockio
Labels: bug (Something isn't working)
Description

I ran a crewAI training job and hit a KeyError. I believe this is a bug in the source code of crewAI's TaskEvaluator.

Steps to Reproduce

All I did was run crew.train() with the appropriate arguments, but there appears to be a bug in the TaskEvaluator.evaluate_training_data() method: the training data it aggregates has no "improved_output" key.
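
For reference, this is the exact call from my notebook (a minimal sketch; `job_crew` is a crewai.Crew I assembled earlier with its agents and tasks):

```python
# The training call that raises the KeyError. `job_crew` is assumed to be
# a fully configured crewai.Crew instance built earlier in the notebook.
job_crew.train(
    inputs={
        "job_requirements": "Generative AI related data scientist jobs or management positions."
    },
    n_iterations=1,
    filename="titus_training.pkl",
)
```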

Expected behavior

I expected the call to run to completion and save the results to the specified .pkl file.

Screenshots/Code snippets

[Screenshot of the error; the full traceback text is reproduced under Evidence below.]

Operating System

macOS Sonoma

Python Version

3.11

crewAI Version

0.95.0

crewAI Tools Version

0.32.1

Virtual Environment

Conda

Evidence

```
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[16], line 1
----> 1 job_crew.train(
      2     inputs = {
      3          'job_requirements': 'Generative AI related data scientist jobs or management positions.'
      4     },
      5     n_iterations = 1,
      6     filename="titus_training.pkl"
      7 )

File /opt/anaconda3/envs/crewai/lib/python3.11/site-packages/crewai/crew.py:502, in Crew.train(self, n_iterations, filename, inputs)
    500 for agent in train_crew.agents:
    501     if training_data.get(str(agent.id)):
--> 502         result = TaskEvaluator(agent).evaluate_training_data(
    503             training_data=training_data, agent_id=str(agent.id)
    504         )
    506         CrewTrainingHandler(filename).save_trained_data(
    507             agent_id=str(agent.role), trained_data=result.model_dump()
    508         )

File /opt/anaconda3/envs/crewai/lib/python3.11/site-packages/crewai/utilities/evaluators/task_evaluator.py:101, in TaskEvaluator.evaluate_training_data(self, training_data, agent_id)
     96 final_aggregated_data = ""
     97 for _, data in output_training_data.items():
     98     final_aggregated_data += (
     99         f"Initial Output:\n{data['initial_output']}\n\n"
    100         f"Human Feedback:\n{data['human_feedback']}\n\n"
--> 101         f"Improved Output:\n{data['improved_output']}\n\n"
    102     )
    104 evaluation_query = (
    105     "Assess the quality of the training data based on the llm output, human feedback , and llm output improved result.\n\n"
    106     f"{final_aggregated_data}"
   (...)
    109     "- A score from 0 to 10 evaluating on completion, quality, and overall performance from the improved output to the initial output based on the human feedback\n"
    110 )
    111 instructions = "I'm gonna convert this raw text into valid JSON."

KeyError: 'improved_output'
```
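
To make the failure concrete: the aggregation loop assumes every training-data entry carries all three keys, but at least one entry in my run had no 'improved_output'. A minimal sketch of the pattern (the entry contents here are illustrative, not taken from my actual data):

```python
# Illustrative sketch of the failing pattern, not actual data from my run:
# an entry with the first two keys but no 'improved_output'.
entry = {
    "initial_output": "first draft produced by the agent",
    "human_feedback": "feedback supplied during training",
    # 'improved_output' missing
}

# Direct indexing, as in task_evaluator.py line 101, raises the error:
text = f"Improved Output:\n{entry['improved_output']}\n\n"  # KeyError: 'improved_output'

# A tolerant lookup such as entry.get('improved_output', 'N/A') would avoid
# the crash, though the right fix depends on why the key is absent.
```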

Possible Solution

None

Additional context

None.

tituslhy added the bug label on Jan 21, 2025
bhancockio self-assigned this on Jan 21, 2025