Merge branch 'main' into memory_path
arnaudgelas authored Jan 17, 2025
2 parents b418cb5 + 30d0271 commit c3bb84a
Showing 31 changed files with 4,932 additions and 257 deletions.
7 changes: 7 additions & 0 deletions docs/concepts/knowledge.mdx
Original file line number Diff line number Diff line change
@@ -93,6 +93,12 @@ result = crew.kickoff(inputs={"question": "What city does John live in and how o

Here's another example using `CrewDoclingSource`, which is quite versatile and can handle multiple file formats, including TXT, PDF, DOCX, HTML, and more.

<Note>
You need to install `docling` for the following example to work: `uv add docling`
</Note>



```python Code
from crewai import LLM, Agent, Crew, Process, Task
from crewai.knowledge.source.crew_docling_source import CrewDoclingSource
@@ -282,6 +288,7 @@ The `embedder` parameter supports various embedding model providers that include
- `ollama`: Local embeddings with Ollama
- `vertexai`: Google Cloud VertexAI embeddings
- `cohere`: Cohere's embedding models
- `voyageai`: VoyageAI's embedding models
- `bedrock`: AWS Bedrock embeddings
- `huggingface`: Hugging Face models
- `watson`: IBM Watson embeddings
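The knowledge snippet above is cut off in the diff view. As a hedged sketch of how the pieces fit together (the file names and crew wiring here are hypothetical, and `docling` must be installed), a `CrewDoclingSource` can feed a crew's knowledge alongside one of the embedder providers listed above, e.g. the newly added `voyageai`:

```python Code
from crewai import Crew, Process
from crewai.knowledge.source.crew_docling_source import CrewDoclingSource

# Hypothetical files -- CrewDoclingSource accepts TXT, PDF, DOCX, HTML, etc.
content_source = CrewDoclingSource(
    file_paths=["report.pdf", "notes.docx"],
)

crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    knowledge_sources=[content_source],
    embedder={
        "provider": "voyageai",  # provider added in this commit
        "config": {"api_key": "YOUR_API_KEY", "model_name": "<model_name>"},
    },
)
```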
20 changes: 20 additions & 0 deletions docs/concepts/memory.mdx
@@ -293,6 +293,26 @@ my_crew = Crew(
}
)
```
### Using VoyageAI embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "voyageai",
        "config": {
            "api_key": "YOUR_API_KEY",
            "model_name": "<model_name>"
        }
    }
)
```
### Using HuggingFace embeddings

```python Code
1 change: 1 addition & 0 deletions docs/how-to/llm-connections.mdx
@@ -23,6 +23,7 @@ LiteLLM supports a wide range of providers, including but not limited to:
- Azure OpenAI
- AWS (Bedrock, SageMaker)
- Cohere
- VoyageAI
- Hugging Face
- Ollama
- Mistral AI
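Because crewAI resolves these providers through LiteLLM, switching between them is usually just a change to the `"<provider>/<model>"` string passed to `LLM`. A configuration sketch (the model names below are illustrative placeholders, not taken from this commit):

```python Code
from crewai import LLM

# Illustrative model string; substitute a model your account can access.
llm = LLM(model="cohere/command-r")

# Switching provider is a one-string change, e.g.:
# llm = LLM(model="ollama/llama3")       # local Ollama
# llm = LLM(model="azure/<deployment>")  # Azure OpenAI deployment name
```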
@@ -25,7 +25,7 @@ class OutputConverter(BaseModel, ABC):
     llm: Any = Field(description="The language model to be used to convert the text.")
     model: Any = Field(description="The model to be used to convert the text.")
     instructions: str = Field(description="Conversion instructions to the LLM.")
-    max_attempts: Optional[int] = Field(
+    max_attempts: int = Field(
         description="Max number of attempts to try to get the output formatted.",
         default=3,
     )
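The `Optional[int]` → `int` tightening above means callers never need a `None` guard before iterating on the retry budget. A minimal standalone sketch (a hypothetical `ConverterSketch`, not the crewai class) of the retry pattern this field type supports:

```python
from dataclasses import dataclass


@dataclass
class ConverterSketch:
    # Always an int with a default, mirroring the diff above -- so the
    # retry loop below needs no "if max_attempts is None" branch.
    max_attempts: int = 3

    def convert(self, parse):
        last_error = None
        for attempt in range(self.max_attempts):
            try:
                return parse(attempt)
            except ValueError as exc:
                last_error = exc  # remember the failure and retry
        raise last_error


def flaky(attempt):
    # Fails on the first two attempts, then succeeds.
    if attempt < 2:
        raise ValueError("badly formatted output")
    return "parsed!"


print(ConverterSketch().convert(flaky))  # succeeds on the third attempt
```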
11 changes: 6 additions & 5 deletions src/crewai/agents/agent_builder/utilities/base_token_process.py
@@ -2,11 +2,12 @@


 class TokenProcess:
-    total_tokens: int = 0
-    prompt_tokens: int = 0
-    cached_prompt_tokens: int = 0
-    completion_tokens: int = 0
-    successful_requests: int = 0
+    def __init__(self):
+        self.total_tokens: int = 0
+        self.prompt_tokens: int = 0
+        self.cached_prompt_tokens: int = 0
+        self.completion_tokens: int = 0
+        self.successful_requests: int = 0

     def sum_prompt_tokens(self, tokens: int):
         self.prompt_tokens = self.prompt_tokens + tokens
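The diff above moves the counters from class attributes into `__init__`. A minimal standalone sketch (not the crewai class) of why that matters: class-level defaults live on the class object, so a write through the class leaks into every instance that has not shadowed the attribute, while `__init__` gives each instance its own state.

```python
class ClassLevelCounter:
    total_tokens: int = 0  # one value stored on the class object


class InstanceLevelCounter:
    def __init__(self):
        self.total_tokens: int = 0  # a fresh value per instance


a, b = ClassLevelCounter(), ClassLevelCounter()
ClassLevelCounter.total_tokens = 99    # a write through the class...
print(a.total_tokens, b.total_tokens)  # ...is seen by both instances: 99 99

c, d = InstanceLevelCounter(), InstanceLevelCounter()
c.total_tokens = 99                    # shadows only this instance
print(c.total_tokens, d.total_tokens)  # 99 0
```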
