feat: update with instrumentation module (#27)
* fix: update LICENSE_HEADER

* feat: update with instrumentation

* feat: update helpers

* feat: update init

* feat: update versions

* feat: update tests for handler

* feat: add separated test for helpers and update it

* docs: update documentation

* Update src/argilla_llama_index/llama_index_handler.py

Co-authored-by: David Berenstein <[email protected]>

* Update src/argilla_llama_index/llama_index_handler.py

Co-authored-by: David Berenstein <[email protected]>

* Update src/argilla_llama_index/llama_index_handler.py

Co-authored-by: David Berenstein <[email protected]>

* feat: bump argilla version (for chatfield)

* fix: update to ArgillaHandler

* fix: add number_of_retrievals message and update default

* feat: add independent field for scores

* feat: add ChatField

* docs: modify images

* feat: remove chat_to_html from tests

* feat: add logic to handle events

* feat: add events to tree and show in a different color

* typo

* feat: update helper tests

* feat: update main tests

* docs: update previous documentation

* docs: add new tutorial

* update license header

* docs: add initial context

---------

Co-authored-by: David Berenstein <[email protected]>
sdiazlor and davidberenstein1957 authored Oct 7, 2024
1 parent 010a401 commit 7172ed2
Showing 16 changed files with 1,374 additions and 720 deletions.
4 changes: 2 additions & 2 deletions LICENSE_HEADER
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
Copyright 2023-present, Argilla, Inc.
Copyright 2024-present, Argilla, Inc.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
Expand All @@ -10,4 +10,4 @@ Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
limitations under the License.
23 changes: 12 additions & 11 deletions README.md
Original file line number Diff line number Diff line change
Expand Up @@ -22,7 +22,7 @@ If you already have deployed Argilla, you can skip this step. Otherwise, you can

## Basic Usage

To easily log your data into Argilla within your LlamaIndex workflow, you only need a simple step. Just call the Argilla global handler for Llama Index before starting production with your LLM.
To easily log your data into Argilla within your LlamaIndex workflow, you only need to initialize the handler and attach it to the LlamaIndex dispatcher. This ensures that the predictions obtained using LlamaIndex are automatically logged to the Argilla instance.

- `dataset_name`: The name of the dataset. If the dataset does not exist, it will be created with the specified name. Otherwise, it will be updated.
- `api_url`: The URL to connect to the Argilla instance.
Expand All @@ -33,23 +33,24 @@ To easily log your data into Argilla within your LlamaIndex workflow, you only n
> For more information about the credentials, check the documentation for [users](https://docs.argilla.io/latest/how_to_guides/user/) and [workspaces](https://docs.argilla.io/latest/how_to_guides/workspace/).
```python
from llama_index.core import set_global_handler
from llama_index.core.instrumentation import get_dispatcher
from argilla_llama_index import ArgillaHandler

set_global_handler(
"argilla",
dataset_name="query_model",
argilla_handler = ArgillaHandler(
dataset_name="query_llama_index",
api_url="http://localhost:6900",
api_key="argilla.apikey",
number_of_retrievals=2,
)
root_dispatcher = get_dispatcher()
root_dispatcher.add_span_handler(argilla_handler)
root_dispatcher.add_event_handler(argilla_handler)
```
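The two `add_*_handler` calls register the same object for both spans and events. A toy, stdlib-only sketch of that dispatcher pattern (`ToyDispatcher` and `LoggingHandler` are illustrative names, not the real LlamaIndex API):

```python
# Toy illustration of the dispatcher pattern used above: one handler object
# is registered for both spans and events, so it sees everything the
# workflow emits. Names here are hypothetical, not LlamaIndex internals.

class ToyDispatcher:
    def __init__(self):
        self.span_handlers = []
        self.event_handlers = []

    def add_span_handler(self, handler):
        self.span_handlers.append(handler)

    def add_event_handler(self, handler):
        self.event_handlers.append(handler)

    def emit_event(self, event):
        # Fan the event out to every registered event handler.
        for handler in self.event_handlers:
            handler.handle(event)

class LoggingHandler:
    """Stands in for ArgillaHandler: records whatever it receives."""
    def __init__(self):
        self.logged = []

    def handle(self, event):
        self.logged.append(event)

dispatcher = ToyDispatcher()
handler = LoggingHandler()
dispatcher.add_span_handler(handler)
dispatcher.add_event_handler(handler)
dispatcher.emit_event({"type": "query", "text": "example prompt"})
print(len(handler.logged))  # → 1
```

In the real integration, `ArgillaHandler` plays the role of `LoggingHandler` and forwards what it captures to the Argilla instance instead of a local list.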

Let's log some data into Argilla. With the code below, you can create a basic LlamaIndex workflow. We will use GPT3.5 from OpenAI as our LLM ([OpenAI API key](https://openai.com/blog/openai-api)). Moreover, we will use an example `.txt` file obtained from the [Llama Index documentation](https://docs.llamaindex.ai/en/stable/getting_started/starter_example.html).


Let's log some data into Argilla. With the code below, you can create a basic LlamaIndex workflow. We will use GPT3.5 from OpenAI as our LLM ([OpenAI API key](https://openai.com/blog/openai-api)). Moreover, we will use an example `.txt` file obtained from the [LlamaIndex documentation](https://docs.llamaindex.ai/en/stable/getting_started/starter_example.html).

```python
import os
import os

from llama_index.core import Settings, VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms.openai import OpenAI
Expand All @@ -63,8 +64,8 @@ Settings.llm = OpenAI(
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Create the query engine
query_engine = index.as_query_engine()
# Create the query engine with the same similarity top k as the number of retrievals
query_engine = index.as_query_engine(similarity_top_k=2)
```

Now, let's run the `query_engine` to have a response from the model. The generated response will be logged into Argilla.
Expand Down
Binary file modified docs/assets/UI-screenshot-github.png
Binary file modified docs/assets/UI-screenshot.png
Binary file added docs/assets/UI-screeshot-workflow.png
Binary file added docs/assets/screenshot-workflow.png
48 changes: 27 additions & 21 deletions docs/tutorials/getting_started.ipynb
Original file line number Diff line number Diff line change
Expand Up @@ -6,9 +6,9 @@
"source": [
"# ✨🦙 Getting started with Argilla's LlamaIndex Integration\n",
"\n",
"In this tutorial, we will show the basic usage of this integration that allows the user to include the feedback loop that Argilla offers into the LlamaIndex ecosystem. It's based on a callback handler to be run within the LlamaIndex workflow. \n",
"In this tutorial, we will show the basic usage of this integration that allows the user to include the feedback loop that Argilla offers into the LlamaIndex ecosystem. It's based on the span and event handlers to be run within the LlamaIndex workflow.\n",
"\n",
"Don't hesitate to check out both [LlamaIndex](https://github.com/run-llama/llama_index) and [Argilla](https://github.com/argilla-io/argilla)"
"Don't hesitate to check out both [LlamaIndex](https://github.com/run-llama/llama_index) and [Argilla](https://github.com/argilla-io/argilla)\n"
]
},
{
Expand All @@ -19,7 +19,7 @@
"\n",
"### Deploy the Argilla server¶\n",
"\n",
"If you already have deployed Argilla, you can skip this step. Otherwise, you can quickly deploy Argilla following [this guide](https://docs.argilla.io/latest/getting_started/quickstart/)."
"If you already have deployed Argilla, you can skip this step. Otherwise, you can quickly deploy Argilla following [this guide](https://docs.argilla.io/latest/getting_started/quickstart/).\n"
]
},
{
Expand All @@ -28,7 +28,7 @@
"source": [
"### Set up the environment¶\n",
"\n",
"To complete this tutorial, you need to install this integration."
"To complete this tutorial, you need to install this integration.\n"
]
},
{
Expand All @@ -37,14 +37,14 @@
"metadata": {},
"outputs": [],
"source": [
"!pip install argilla-llama-index"
"%pip install \"argilla-llama-index>=2.1.0\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's make the required imports:"
"Let's make the required imports:\n"
]
},
{
Expand All @@ -57,9 +57,11 @@
" Settings,\n",
" VectorStoreIndex,\n",
" SimpleDirectoryReader,\n",
" set_global_handler,\n",
")\n",
"from llama_index.llms.openai import OpenAI"
"from llama_index.core.instrumentation import get_dispatcher\n",
"from llama_index.llms.openai import OpenAI\n",
"\n",
"from argilla_llama_index import ArgillaHandler"
]
},
{
Expand Down Expand Up @@ -87,15 +89,15 @@
"source": [
"## Set the Argilla's LlamaIndex handler\n",
"\n",
"To easily log your data into Argilla within your LlamaIndex workflow, you only need a simple step. Just call the Argilla global handler for Llama Index before starting production with your LLM. This ensured that the predictions obtained using Llama Index are automatically logged to the Argilla instance.\n",
"To easily log your data into Argilla within your LlamaIndex workflow, you only need to initialize the Argilla handler and attach it to the LlamaIndex dispatcher for spans and events. This ensures that the predictions obtained using LlamaIndex are automatically logged to the Argilla instance, along with useful metadata.\n",
"\n",
"- `dataset_name`: The name of the dataset. If the dataset does not exist, it will be created with the specified name. Otherwise, it will be updated.\n",
"- `api_url`: The URL to connect to the Argilla instance.\n",
"- `api_key`: The API key to authenticate with the Argilla instance.\n",
"- `number_of_retrievals`: The number of retrieved documents to be logged. Defaults to 0.\n",
"- `workspace_name`: The name of the workspace to log the data. By default, the first available workspace.\n",
"\n",
"> For more information about the credentials, check the documentation for [users](https://docs.argilla.io/latest/how_to_guides/user/) and [workspaces](https://docs.argilla.io/latest/how_to_guides/workspace/)."
"> For more information about the credentials, check the documentation for [users](https://docs.argilla.io/latest/how_to_guides/user/) and [workspaces](https://docs.argilla.io/latest/how_to_guides/workspace/).\n"
]
},
{
Expand All @@ -104,27 +106,29 @@
"metadata": {},
"outputs": [],
"source": [
"set_global_handler(\n",
" \"argilla\",\n",
" dataset_name=\"query_model\",\n",
"argilla_handler = ArgillaHandler(\n",
" dataset_name=\"query_llama_index\",\n",
" api_url=\"http://localhost:6900\",\n",
" api_key=\"argilla.apikey\",\n",
" number_of_retrievals=2,\n",
")"
")\n",
"root_dispatcher = get_dispatcher()\n",
"root_dispatcher.add_span_handler(argilla_handler)\n",
"root_dispatcher.add_event_handler(argilla_handler)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Log the data to Argilla"
"## Log the data to Argilla\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With the code below, you can create a basic LlamaIndex workflow. We will use an example `.txt` file obtained from the [Llama Index documentation](https://docs.llamaindex.ai/en/stable/getting_started/starter_example.html)."
"With the code below, you can create a basic LlamaIndex workflow. We will use an example `.txt` file obtained from the [Llama Index documentation](https://docs.llamaindex.ai/en/stable/getting_started/starter_example.html).\n"
]
},
{
Expand All @@ -145,21 +149,23 @@
"outputs": [],
"source": [
"# LLM settings\n",
"Settings.llm = OpenAI(model=\"gpt-3.5-turbo\", temperature=0.8, openai_api_key=openai_api_key)\n",
"Settings.llm = OpenAI(\n",
" model=\"gpt-3.5-turbo\", temperature=0.8, openai_api_key=openai_api_key\n",
")\n",
"\n",
"# Load the data and create the index\n",
"documents = SimpleDirectoryReader(\"../../data\").load_data()\n",
"index = VectorStoreIndex.from_documents(documents)\n",
"\n",
"# Create the query engine\n",
"query_engine = index.as_query_engine()"
"# Create the query engine with the same similarity top k as the number of retrievals\n",
"query_engine = index.as_query_engine(similarity_top_k=2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, let's run the `query_engine` to have a response from the model."
"Now, let's run the `query_engine` to have a response from the model.\n"
]
},
{
Expand All @@ -178,7 +184,7 @@
"source": [
"The prompt given and the response obtained will be logged as a chat, as well as the indicated number of retrieved documents.\n",
"\n",
"![Argilla UI](../assets/UI-screenshot.png)"
"![Argilla UI](../assets/UI-screenshot.png)\n"
]
}
],
Expand Down