diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 9422f02..8ba6efb 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -17,13 +17,14 @@ Always be careful when approving any code!
 
 Prerequisites:
 
-- OpenAI API key
 - Cloned repository
 - Installed [python](https://www.python.org/)
 - Installed [poetry](https://python-poetry.org/)
 - Installed [node.js](https://nodejs.org/)
 
-### Interpreter
+Also have a look at the overall [architecture](/docs/architecture.png) first.
+
+### Services
 
 First, create a venv somewhere on your system, where ipython will run your code:
 
@@ -35,24 +36,34 @@
 pip install ipython
 pip install ...  # install whatever packages you like
 ```
 
-Now open a terminal in the *interpreter* folder and run the following:
+Now open a terminal in the *services* folder and run the following:
 
 ```shell
 poetry install
 poetry shell
 export IPYTHON_PATH=/home/user/venv_interpreter/bin/ipython
 export WORKING_DIRECTORY=/home/user/ipilot
-uvicorn main:app --reload
+uvicorn main_interpreter:app --reload --port 8000
 ```
 
+Open a second terminal in the *services* folder and run the following:
+
+```shell
+poetry shell
+export OPENAI_API_KEY=sk-your-api-key
+uvicorn main_llm:app --reload --port 8001
+```
+
+If you want to use something other than OpenAI, adjust the environment variables as explained in the [Readme](/README.md).
+
 ### UI
 
 Open a terminal in the *ui* folder and run the following:
 
 ```shell
 npm install
-export INTERPRETER_URL=localhost:8000
-export OPENAI_API_KEY=sk-your-api-key
+export NEXT_PUBLIC_INTERPRETER_URL=localhost:8000
+export NEXT_PUBLIC_LLM_URL=localhost:8001
 npm run dev
 ```
diff --git a/README.md b/README.md
index a50dfaa..2221fd1 100644
--- a/README.md
+++ b/README.md
@@ -2,29 +2,40 @@ logo

-

Your local AI code interpreter

+

An AI code interpreter for sensitive data, powered by GPT-4 or Llama 2.

-**Incognito Pilot** combines a large language model with a Python interpreter, so it can run code and execute tasks for you.
-It is similar to **ChatGPT Code Interpreter**, but the interpreter runs locally.
-This allows you to work with sensitive data without uploading it to the cloud.
-To still be able to use powerful models available via API only (like GPT-4), there is an approval mechanism in the UI, which separates your local data from the remote services.
+**Incognito Pilot** combines a Large Language Model (LLM) with a Python interpreter, so it can run code and execute tasks for you.
+It is similar to **ChatGPT Code Interpreter**, but the interpreter runs **locally** and it can use open-source models like **Llama 2**.
 
-With **Incognito Pilot**, you can:
+**Incognito Pilot** allows you to work with **sensitive data** without uploading it to the cloud.
+You either use a local LLM (like Llama 2) or an API (like GPT-4).
+In the latter case, there is an **approval mechanism** in the UI, which separates your local data from the remote services.
 
-- analyse data and create visualizations
-- convert your files, e.g. a video to a gif
-- automate tasks, like renaming all files in a directory
+With **Incognito Pilot**, you can:
+- :white_check_mark: analyse data and create visualizations
+- :white_check_mark: convert your files, e.g. a video to a gif
+- :white_check_mark: **access the internet** to e.g. download data and much more!
 
-It runs on every hardware, so you can for example analyze large datasets on powerful machines.
-We also plan to support more models like Llama 2 in the future.
-

- screenshot
- Screenshot of Incognito Pilot v1.0.0 -

+## :bulb: Demo
+
+https://github.com/silvanmelchior/IncognitoPilot/assets/6033305/05b0a874-6f76-4d22-afca-36c11f90b1ff
+
+The video shows Incognito Pilot with GPT-4.
+While your conversation and approved code results are sent to OpenAI's API, your **data is kept locally** on your machine.
+The interpreter is running locally as well and processes your data right there.
+And you can go even further and use Llama 2 to have everything running on your machine.
+
+## :package: Installation (GPT via OpenAI API)
+
+This section shows how to install **Incognito Pilot** using a GPT model via OpenAI's API. For
+
+- **Llama 2**, check [Installation for Llama 2](/docs/INSTALLATION_LLAMA.md) instead, and for
+- **GPT on Azure**, check [Installation with Azure](/docs/INSTALLATION_AZURE.md) instead.
+
+If you don't have docker, you can install **Incognito Pilot** on your system directly, using the development setup (see below).
 
-## :package: Installation
+Follow these steps:
 
 1. Install [docker](https://www.docker.com/).
 2. Create an empty folder somewhere on your system.
@@ -38,7 +49,7 @@ We also plan to support more models like Llama 2 in the future.
 ```shell
 docker run -i -t \
-    -p 3030:3030 -p 3031:3031 \
+    -p 3030:80 \
     -e OPENAI_API_KEY="sk-your-api-key" \
     -v /home/user/ipilot:/mnt/data \
     silvanmelchior/incognito-pilot:latest-slim
@@ -46,13 +57,10 @@ docker run -i -t \
 
 You can now visit http://localhost:3030 and should see the **Incognito Pilot** interface.
 
-Some final remarks:
+It's also possible to run **Incognito Pilot** with the free trial credits of OpenAI, without adding a credit card.
+At the moment, however, this does not include GPT-4, so see below for how to change the model to GPT-3.5.
 
-- If you don't have docker, you can install **Incognito Pilot** on your system directly, using the development setup (see below).
-- You can also run **Incognito Pilot** with the free trial credits of OpenAI, without adding a credit card.
-  At the moment, this does not include GPT-4 however, so see below how to change the model to GPT-3.5.
-
-## :rocket: Getting started
+## :rocket: Getting started (GPT)
 
 In the **Incognito Pilot** interface, you will see a chat interface, with which you can interact with the model.
 Let's try it out!
 
@@ -68,15 +76,16 @@ Let's try it out!
    After the approval, the model will confirm you the execution.
   Check your working directory now (e.g. */home/user/ipilot*): You should see the file!
 
-Now you should be ready to use **Incognito Pilot** for your own tasks.
+Now you should be ready to use **Incognito Pilot** for your own tasks. Just remember:
+
+- Everything you type or every code result you approve is sent to the OpenAI / Azure API
+- Your data stays on your machine and is processed locally
+
 One more thing: The version you just used has nearly no packages shipped with the Python interpreter.
 This means, things like reading images or Excel files will not work.
 To change this, head back to the console and press Ctrl-C to stop the container.
 Now re-run the command, but remove the `-slim` suffix from the image.
 This will download a much larger version, equipped with [many packages](/docker/requirements_full.txt).
 
-## :gear: Settings
-
 ### Change model
 
 To use another model than the default one (GPT-4), set the environment variable `LLM`.
@@ -86,32 +95,40 @@ OpenAI's GPT models have the prefix `gpt:`, so to use GPT-3.5 for example (the o
 -e LLM="gpt:gpt-3.5-turbo"
 ```
 
-Please note that GPT-4 is considerably better in this interpreter setup than GPT-3.5.
+Please note that GPT-4 is considerably better in the interpreter setup than GPT-3.5.
+
+## :gear: Settings
 
 ### Change port
 
-Per default, the UI is served on port 3030 and contacts the interpreter at port 3031.
-This can be changed to any ports using the port mapping of docker.
-However, the new port for the interpreter also needs to be communicated to the UI, using the environment variable `INTERPRETER_URL`.
-For example, to serve the UI on port 8080 and the interpreter on port 8081, run the following:
+To serve the UI at a port other than 3030, just map the internal port 80 to a different one, for example 8080:
 
 ```shell
 docker run -i -t \
-    -p 8080:3030 -p 8081:3031 \
-    -e OPENAI_API_KEY="sk-your-api-key" \
-    -e INTERPRETER_PORT=8081 \
-    -v /home/user/ipilot:/mnt/data \
+    -p 8080:80 \
+    ... \
     silvanmelchior/incognito-pilot
 ```
 
-### Further settings
+### Timeout
 
-The following further settings are available
+By default, the Python interpreter stops after 30 seconds.
+To change this, set the environment variable `INTERPRETER_TIMEOUT`.
+For 2 minutes, for example, add the following to the docker run command:
 
-- Per default, the Python interpreter stops after 30 seconds.
-  To change this, set the environment variable `INTERPRETER_TIMEOUT`.
-- To automatically start **Incognito Pilot** with docker / at startup, remove the remove `-i -t` from the run command and add `--restart always`.
-  Together with a bookmark of the UI URL, you'll have **Incognito Pilot** at your fingertips whenever you need it.
+```shell
+-e INTERPRETER_TIMEOUT="120"
+```
+
+### Autostart
+
+To automatically start **Incognito Pilot** with docker / at startup, remove the `-i -t` from the run command and add the following:
+
+```shell
+--restart always
+```
+
+Together with a bookmark of the UI URL, you'll have **Incognito Pilot** at your fingertips whenever you need it.
 
 ## :toolbox: Own dependencies
 
@@ -146,12 +163,42 @@ Then run the container like this:
 
 ```shell
 docker run -i -t \
-    -p 3030:3030 -p 3031:3031 \
-    -e OPENAI_API_KEY="sk-your-api-key" \
-    -v /home/user/ipilot:/mnt/data \
+    ... \
     incognito-pilot-custom
 ```
 
+## :question: FAQs
+
+### Is it as good as ChatGPT Code Interpreter?
+
+No, it has its limits.
+The tradeoff between privacy and capabilities is not an easy one in this case.
+For things like images, it is as powerful as ChatGPT Code Interpreter, because it doesn't need to know the content of the image to edit it.
+But for things like spreadsheets, if ChatGPT doesn't see the content, it has to guess, for example, the data format from the header, which can go wrong.
+
+However, in certain aspects, it's even better than ChatGPT Code Interpreter:
+The interpreter has internet access, allowing for many tasks which were not possible before.
+Also, you can run the interpreter on any machine, including very powerful ones, so you can solve much larger tasks than with ChatGPT Code Interpreter.
+
+### Why not just use ChatGPT to generate the code and run it myself?
+
+You can of course do this. However, using **Incognito Pilot** has quite a few advantages:
+
+- Incognito Pilot can run code in multiple rounds (e.g. first getting the file name of a CSV, then the structure, and then analyzing the content).
+  It can even correct itself, seeing the stack trace of its failed execution.
+  You can of course also copy code and results back and forth to achieve all of this manually, but it gets cumbersome quite quickly.
+- You have tons of pre-installed dependencies in Incognito Pilot
+- The code runs in a sandbox, protecting your computer
+
+### How can it be private if you use public cloud APIs?
+
+Whatever you type and all code results you approve are indeed not private, in the sense that they are sent to the cloud API.
+Your data, however, stays local.
+The interpreter runs locally as well, processing your data right where it is.
+For certain things, you will have to tell the model something about your data (e.g. the file name or structure),
+but this is usually metadata, which you actively approve in the UI, and not the actual data.
+At every step of the execution, you can simply reject sending something to the API.
+
 ## :house: Architecture
 
 ![Architecture Diagram](/docs/architecture.png)
diff --git a/VERSION b/VERSION
index 3eefcb9..227cea2 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-1.0.0
+2.0.0
diff --git a/docker/Dockerfile b/docker/Dockerfile
index 5d4de6f..d763c75 100644
--- a/docker/Dockerfile
+++ b/docker/Dockerfile
@@ -1,23 +1,30 @@
 ARG INTERPRETER_IMAGE
-FROM $INTERPRETER_IMAGE
-SHELL ["/bin/bash", "-c"]
-
-RUN apt update && apt install -y nodejs npm
+FROM node:18-alpine
 
 WORKDIR /opt/app
 
-COPY interpreter interpreter
-RUN python3 -m venv venv_backend && \
-    source venv_backend/bin/activate && \
-    pip3 install ./interpreter
-
 COPY ui ui
+COPY VERSION VERSION
+
 RUN cd ui && \
     npm install && \
-    npm run build && \
-    cd ..
+    npm run build
+
+FROM $INTERPRETER_IMAGE
+SHELL ["/bin/bash", "-c"]
+
+WORKDIR /opt/app
+COPY --from=0 /opt/app/ui/out /opt/app/ui
+
+RUN apt update && apt install -y nginx
+
+COPY services services
+RUN python3 -m venv venv_services && \
+    source venv_services/bin/activate && \
+    pip3 install ./services
+
+COPY docker/nginx.conf /etc/nginx/
 
-COPY VERSION VERSION
 COPY docker/start* .
 RUN chmod 755 start*
 
 CMD ["/opt/app/start.sh"]
diff --git a/docker/nginx.conf b/docker/nginx.conf
new file mode 100644
index 0000000..97a96d0
--- /dev/null
+++ b/docker/nginx.conf
@@ -0,0 +1,41 @@
+user www-data;
+worker_processes auto;
+pid /run/nginx.pid;
+include /etc/nginx/modules-enabled/*.conf;
+
+events {
+    worker_connections 768;
+}
+
+http {
+    sendfile on;
+    tcp_nopush on;
+    types_hash_max_size 2048;
+    include /etc/nginx/mime.types;
+    default_type application/octet-stream;
+
+    server {
+        listen 80;
+        listen [::]:80;
+
+        root /opt/app/ui;
+        index index.html index.htm index.nginx-debian.html;
+
+        location /api/llm {
+            proxy_set_header Upgrade $http_upgrade;
+            proxy_set_header Connection "Upgrade";
+            proxy_pass http://localhost:8081;
+        }
+
+        location /api/interpreter {
+            proxy_set_header Upgrade $http_upgrade;
+            proxy_set_header Connection "Upgrade";
+            proxy_pass http://localhost:8080;
+        }
+
+        location / {
+            try_files $uri $uri/ =404;
+        }
+    }
+
+}
diff --git a/docker/start.sh b/docker/start.sh
index c5d37b4..403b2b5 100644
--- a/docker/start.sh
+++ b/docker/start.sh
@@ -1,4 +1,6 @@
 #!/bin/bash
 
 /opt/app/start_interpreter.sh &
-/opt/app/start_ui.sh
+/opt/app/start_llm.sh &
+
+nginx -g "daemon off;"
diff --git a/docker/start_interpreter.sh b/docker/start_interpreter.sh
index d370b9e..9f6150d 100644
--- a/docker/start_interpreter.sh
+++ b/docker/start_interpreter.sh
@@ -1,11 +1,11 @@
 #!/bin/bash
 
 cd /opt/app
-. venv_backend/bin/activate
-cd interpreter
+. venv_services/bin/activate
+cd services
 
 mkdir -p /mnt/data
 export WORKING_DIRECTORY=/mnt/data
 export IPYTHON_PATH=/opt/app/venv_interpreter/bin/ipython
 
-uvicorn main:app --host 0.0.0.0 --port 3031
+uvicorn main_interpreter:app --host 0.0.0.0 --port 8080
diff --git a/docker/start_llm.sh b/docker/start_llm.sh
new file mode 100644
index 0000000..070d44c
--- /dev/null
+++ b/docker/start_llm.sh
@@ -0,0 +1,7 @@
+#!/bin/bash
+
+cd /opt/app
+. venv_services/bin/activate
+cd services
+
+uvicorn main_llm:app --host 0.0.0.0 --port 8081
diff --git a/docker/start_ui.sh b/docker/start_ui.sh
deleted file mode 100644
index 7cd63fc..0000000
--- a/docker/start_ui.sh
+++ /dev/null
@@ -1,11 +0,0 @@
-#!/bin/bash
-
-cd /opt/app/ui
-
-if [[ -z "${INTERPRETER_PORT}" ]]; then
-    export INTERPRETER_URL="localhost:3031"
-else
-    export INTERPRETER_URL="localhost:${INTERPRETER_PORT}"
-fi
-
-npm run start
diff --git a/docs/INSTALLATION_AZURE.md b/docs/INSTALLATION_AZURE.md
new file mode 100644
index 0000000..f55bc04
--- /dev/null
+++ b/docs/INSTALLATION_AZURE.md
@@ -0,0 +1,30 @@
+# :package: Installation (GPT via Azure)
+
+This section shows how to install **Incognito Pilot** using a GPT model via Azure.
+Follow these steps:
+
+1. Install [docker](https://www.docker.com/).
+2. Create an empty folder somewhere on your system.
+   This will be the working directory to which **Incognito Pilot** has access.
+   The code interpreter can read your files in this folder and store any results.
+   In the following, we assume it to be */home/user/ipilot*.
+3. Log in to the Azure portal and create an [Azure OpenAI Service](https://azure.microsoft.com/en-us/products/ai-services/openai-service-b).
+4. You will see the access key and endpoint, which we will use later.
+5. Open Azure OpenAI Studio and deploy a model.
+6. Now, just run the following command (replace your working directory, model name and API information):
+
+```shell
+docker run -i -t \
+    -p 3030:80 \
+    -e LLM="gpt-azure:your-deployment-name" \
+    -e AZURE_API_KEY="your-azure-openai-api-key" \
+    -e AZURE_API_BASE="https://your-azure-openai-service-name.openai.azure.com/" \
+    -v /home/user/ipilot:/mnt/data \
+    silvanmelchior/incognito-pilot:latest-slim
+```
+
+You can now visit http://localhost:3030 and should see the **Incognito Pilot** interface.
+
+Make sure you have access to a model which is capable of function calling; otherwise, you will get an error similar to "unknown argument 'function'".
+
+Let's head back to the [Getting Started](/README.md#rocket-getting-started-gpt) section.
diff --git a/docs/INSTALLATION_LLAMA.md b/docs/INSTALLATION_LLAMA.md
new file mode 100644
index 0000000..9eefd36
--- /dev/null
+++ b/docs/INSTALLATION_LLAMA.md
@@ -0,0 +1,127 @@
+# :package: Installation (Llama 2)
+
+This section shows how to install **Incognito Pilot** using Llama 2.
+Please note that you will only get satisfactory results with the largest model *llama-2-70b-chat*, which needs considerable hardware resources.
+And even then, the experience will not be comparable to GPT-4, since Llama 2 was not fine-tuned for this task.
+
+Nevertheless, it's a lot of fun to see what's already possible with open-source models.
+At the moment, there are two ways of using **Incognito Pilot** with Llama 2:
+
+- Using a cloud API from [replicate](https://replicate.com/).
+  While you don't have the advantage of a fully local setup here, you can try out the 70B model in a quick way without owning powerful hardware.
+- Using Hugging Face's [Text Generation Inference](https://github.com/huggingface/text-generation-inference) container,
+  which allows you to run Llama 2 locally with a simple `docker run` command.
+
+## Replicate
+
+Follow these steps:
+
+1. Install [docker](https://www.docker.com/).
+2. Create an empty folder somewhere on your system.
+   This will be the working directory to which **Incognito Pilot** has access.
+   The code interpreter can read your files in this folder and store any results.
+   In the following, we assume it to be */home/user/ipilot*.
+3. Create a [Replicate](https://replicate.com/) account,
+   add a [credit card](https://replicate.com/account/billing)
+   and copy your [API key](https://replicate.com/account/api-tokens).
+4. Now, just run the following command (replace your working directory and API key):
+
+```shell
+docker run -i -t \
+    -p 3030:80 \
+    -e LLM="llama-replicate:replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1" \
+    -e REPLICATE_API_KEY="your-replicate-api-key" \
+    -v /home/user/ipilot:/mnt/data \
+    silvanmelchior/incognito-pilot:latest-slim
+```
+
+You can of course also choose a [different model](https://replicate.com/blog/all-the-llamas), but the smaller ones are much less suited for this task.
+
+Now visit http://localhost:3030 and you should see the **Incognito Pilot** interface.
+Before you continue, remember:
+
+- Everything you type or every code result you approve is sent to the Replicate API
+- Your data stays on your machine and is processed locally
+
+Does it work? Great, let's move to the [Getting started](#rocket-getting-started-llama-2) section.
+
+## Text Generation Inference
+
+Follow these steps:
+
+1. Install [docker](https://www.docker.com/).
+2. Create an empty folder somewhere on your system.
+   This will be the working directory to which **Incognito Pilot** has access.
+   The code interpreter can read your files in this folder and store any results.
+   In the following, we assume it to be */home/user/ipilot*.
+3. Create a [Hugging Face](https://huggingface.co/) account.
+4. Make sure you get access to the [Llama 2 model weights](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) on Hugging Face.
+5. In the *Files and versions* tab, download the following three files (we assume them to be in */home/user/tokenizer*):
+   - tokenizer.json
+   - tokenizer.model
+   - tokenizer_config.json
+6. Create an [access token](https://huggingface.co/settings/tokens).
+
+Now, let's first run the *Text Generation Inference* service.
+Check out their [Readme](https://github.com/huggingface/text-generation-inference#readme).
+I had to run something similar to this:
+
+```shell
+docker run \
+    --gpus all \
+    --shm-size 1g \
+    -p 8080:80 \
+    -v /home/user/tgi_cache:/data \
+    -e HUGGING_FACE_HUB_TOKEN=hf_your-huggingface-api-token \
+    ghcr.io/huggingface/text-generation-inference \
+    --model-id "meta-llama/Llama-2-70b-chat-hf"
+```
+
+You can of course also choose a different model, but the smaller ones are much less suited for this task.
+Once the container shows a success message, you are ready for the next step.
+
+Visit http://localhost:8080/info.
+You should see a JSON with model information.
+Check out the value for *max_total_tokens*.
+It tells you how many tokens fit in the context for this model on your system.
+**Incognito Pilot** needs this information so it does not send overly long messages to the service.
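+
+To see what **Incognito Pilot** does with this value, here is a minimal sketch mirroring *services/llm/llama/llama_tgi.py*; the context size of 4096 and the tokenizer path are just assumed example values:
+
+```python
+from sentencepiece import SentencePieceProcessor
+
+MAX_TOKENS = 4096  # assumed example; use the max_total_tokens value from /info
+
+# the tokenizer files are the ones downloaded in step 5 above
+tokenizer = SentencePieceProcessor(model_file="/home/user/tokenizer/tokenizer.model")
+
+
+def get_max_new_tokens(prompt: str) -> int:
+    # whatever the prompt does not use up is left for the model's answer
+    prompt_tokens = len(tokenizer.encode(prompt))
+    return MAX_TOKENS - prompt_tokens
+```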
+
+Now, just run the following command (replace your directories and max tokens):
+
+```shell
+docker run -i -t \
+    -p 3030:80 \
+    -e LLM="llama-tgi:http://host.docker.internal:8080" \
+    -e MAX_TOKENS="your-max-tokens" \
+    -e TOKENIZER_PATH="/mnt/tokenizer/tokenizer.model" \
+    -v /home/user/tokenizer:/mnt/tokenizer \
+    -v /home/user/ipilot:/mnt/data \
+    silvanmelchior/incognito-pilot:latest-slim
+```
+
+Visit http://localhost:3030 and you should see the **Incognito Pilot** interface.
+Congrats! You have a fully local setup; everything is running on your own system :partying_face:.
+
+## :rocket: Getting started (Llama 2)
+
+In the **Incognito Pilot** interface, you will see a chat interface, with which you can interact with the model.
+Let's try it out!
+
+1. **File Access**: Type "Create a text file with all numbers from 0 to 100".
+   You will see how the *Code* part of the UI shows you a Python snippet.
+   As soon as you approve, the code will be executed on your machine (within the docker container).
+   You will see the result in the *Result* part of the UI.
+   As soon as you approve it, it will be sent back to the model.
+   In the case of using an API (like Replicate), this of course also means that this result will be sent to their services.
+   After the approval, the model will confirm the execution.
+   Check your working directory now (e.g. */home/user/ipilot*): You should see the file!
+2. **Math**: Type "What is 1 + 2 * 3 + 4 * 5 + 6 * 7 + 8 * 9?".
+   The model will use the Python interpreter to arrive at the correct result.
+
+Now you should be ready to use **Incognito Pilot** for your own tasks.
+One more thing: The version you just used has nearly no packages shipped with the Python interpreter.
+This means things like reading images or Excel files will not work.
+To change this, head back to the console and press Ctrl-C to stop the container.
+Now re-run the command, but remove the `-slim` suffix from the image.
+This will download a much larger version, equipped with [many packages](/docker/requirements_full.txt).
+
+Let's head back to the [Settings](/README.md#gear-settings) section.
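+
+By the way, the UI talks to the backend services over websockets, and for debugging you can do the same. The following is a minimal sketch (not part of the repository) of calling the LLM service directly, assuming the container from above is running on port 3030 and the Python `websockets` package is installed:
+
+```python
+import asyncio
+import json
+
+import websockets
+
+
+async def main():
+    # nginx inside the container proxies /api/llm to the LLM service
+    uri = "ws://localhost:3030/api/llm/chat"
+    async with websockets.connect(uri) as ws:
+        # the service expects the chat history as a JSON list of messages
+        history = [{"role": "user", "text": "What is 2 + 2?"}]
+        await ws.send(json.dumps(history))
+        # responses are streamed, prefixed with _success_ or _error_,
+        # until the service closes the connection
+        async for msg in ws:
+            print(msg)
+
+
+asyncio.run(main())
+```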
diff --git a/docs/architecture.drawio b/docs/architecture.drawio
index b492e12..519e132 100644
--- a/docs/architecture.drawio
+++ b/docs/architecture.drawio
(diagram XML changes omitted)
diff --git a/docs/architecture.png b/docs/architecture.png
index 6289aba..296c80e 100644
Binary files a/docs/architecture.png and b/docs/architecture.png differ
diff --git a/docs/screenshot.png b/docs/screenshot.png
index 267f83a..18aded5 100644
Binary files a/docs/screenshot.png and b/docs/screenshot.png differ
diff --git a/interpreter/.gitignore b/services/.gitignore
similarity index 100%
rename from interpreter/.gitignore
rename to services/.gitignore
diff --git a/interpreter/interpreter/__init__.py b/services/interpreter/__init__.py
similarity index 100%
rename from interpreter/interpreter/__init__.py
rename to services/interpreter/__init__.py
diff --git a/interpreter/interpreter/ipython_interpreter.py b/services/interpreter/ipython_interpreter.py
similarity index 100%
rename from interpreter/interpreter/ipython_interpreter.py
rename to services/interpreter/ipython_interpreter.py
diff --git a/services/llm/__init__.py b/services/llm/__init__.py
new file mode 100644
index 0000000..af76ee4
--- /dev/null
+++ b/services/llm/__init__.py
@@ -0,0 +1,3 @@
+from .base import BaseLLM, LLMException
+from .types import Message, Response
+from .selector import get_llm
diff --git a/services/llm/base.py b/services/llm/base.py
new file mode 100644
index 0000000..b7fdc01
--- /dev/null
+++ b/services/llm/base.py
@@ -0,0 +1,14 @@
+from abc import ABC, abstractmethod
+from typing import Generator
+
+from llm.types import Message, Response
+
+
+class BaseLLM(ABC):
+    @abstractmethod
+    def chat(self, history: list[Message]) -> Generator[Response, None, None]:
+        """Given a chat history, return a generator which streams the response."""
+
+
+class LLMException(Exception):
+    """If an error occurs in the LLM, raise this exception; it will be shown in the UI."""
diff --git a/services/llm/gpt/__init__.py b/services/llm/gpt/__init__.py
new file mode 100644
index 0000000..e31b5cf
--- /dev/null
+++ b/services/llm/gpt/__init__.py
@@ -0,0 +1,2 @@
+from .gpt_openai import GPTOpenAI
+from .gpt_azure import GPTAzure
diff --git a/services/llm/gpt/gpt.py b/services/llm/gpt/gpt.py
new file mode 100644
index 0000000..0ccc888
--- /dev/null
+++ b/services/llm/gpt/gpt.py
@@ -0,0 +1,53 @@
+from typing import Generator
+
+import openai
+from openai import OpenAIError
+
+from llm.base import BaseLLM, LLMException
+from llm.types import Message, Response
+from .parsing import msg_to_gpt_msg, lazy_parse_args, fill_dict
+from .prompt import FUNCTIONS
+
+
+class GPT(BaseLLM):
+    def __init__(self, model_selection: dict):
+        self._model_selection = model_selection
+
+    def chat(self, history: list[Message]) -> Generator[Response, None, None]:
+        messages = [msg_to_gpt_msg(msg) for msg in history]
+
+        try:
+            chunk_generator = openai.ChatCompletion.create(
+                **self._model_selection,
+                messages=messages,
+                temperature=0,
+                functions=FUNCTIONS,
+                function_call="auto",
+                stream=True,
+            )
+
+            response = {}
+            previous_code = None
+            for chunk_all in chunk_generator:
+                chunk = chunk_all["choices"][0]["delta"]
+                fill_dict(response, chunk)
+
+                text = None
+                if "content" in response:
+                    text = response["content"]
+
+                code = None
+                if (
+                    "function_call" in response
+                    and "arguments" in response["function_call"]
+                ):
+                    args = response["function_call"]["arguments"]
+                    code = lazy_parse_args(args)
+                if code is None:
+                    code = previous_code
+                previous_code = code
+
+                yield Response(text=text, code=code)
+
+        except OpenAIError as e:
+            raise LLMException(str(e))
diff --git a/services/llm/gpt/gpt_azure.py b/services/llm/gpt/gpt_azure.py
new file mode 100644
index 0000000..1b31377
--- /dev/null
+++ b/services/llm/gpt/gpt_azure.py
@@ -0,0 +1,13 @@
+import openai
+
+from utils import get_env_var
+from .gpt import GPT
+
+
+class GPTAzure(GPT):
+    def __init__(self, engine_name: str):
+        openai.api_type = "azure"
+        openai.api_base = get_env_var("AZURE_API_BASE")
+        openai.api_version = "2023-07-01-preview"
+        openai.api_key = get_env_var("AZURE_API_KEY")
+        super().__init__({"engine": engine_name})
diff --git a/services/llm/gpt/gpt_openai.py b/services/llm/gpt/gpt_openai.py
new file mode 100644
index 0000000..d0f3302
--- /dev/null
+++ b/services/llm/gpt/gpt_openai.py
@@ -0,0 +1,10 @@
+import openai
+
+from utils import get_env_var
+from .gpt import GPT
+
+
+class GPTOpenAI(GPT):
+    def __init__(self, model_name: str):
+        openai.api_key = get_env_var("OPENAI_API_KEY")
+        super().__init__({"model": model_name})
diff --git a/services/llm/gpt/parsing.py b/services/llm/gpt/parsing.py
new file mode 100644
index 0000000..65552ca
--- /dev/null
+++ b/services/llm/gpt/parsing.py
@@ -0,0 +1,58 @@
+import re
+import json
+
+from llm.types import Message
+
+
+def msg_to_gpt_msg(msg: Message) -> dict:
+    if msg.role == "user":
+        return {"role": "user", "content": msg.text}
+    if msg.role == "model":
+        response = {
+            "role": "assistant",
+            "content": msg.text or None,
+        }
+        if msg.code:
+            response["function_call"] = {
+                "name": "run_python_code",
+                "arguments": json.dumps({"code": msg.code}),
+            }
+        return response
+    if msg.role == "interpreter":
+        return {
+            "role": "function",
+            "name": "run_python_code",
+            "content": msg.code_result,
+        }
+    raise ValueError(f"Invalid message role {msg.role}")
+
+
+def lazy_parse_args(args_partial):
+    args = args_partial
+    if not re.sub(r"\s+", "", args).endswith('"}'):
+        args += '"}'
+
+    try:
+        args = json.loads(args)
+        if "code" not in args:
+            return None
+    except json.JSONDecodeError:
+        return None
+
+    return args["code"]
+
+
+def fill_dict(dst: dict, chunk: dict):
+    for key in chunk:
+        if chunk[key] is None:
+            dst[key] = None
+        elif isinstance(chunk[key], dict):
+            if key not in dst:
+                dst[key] = {}
+            fill_dict(dst[key], chunk[key])
+        elif isinstance(chunk[key], str):
+            if key not in dst:
+                dst[key] = ""
+            dst[key] += chunk[key]
+        else:
+            raise ValueError(f"Unsupported type {type(chunk[key])}")
diff --git a/services/llm/gpt/prompt.py b/services/llm/gpt/prompt.py
new file mode 100644
index 0000000..7cd7389
--- /dev/null
+++ b/services/llm/gpt/prompt.py
@@ -0,0 +1,22 @@
+FUNCTIONS = [
+    {
+        "name": "run_python_code",
+        "description": "Runs arbitrary Python code and returns stdout and stderr. "
+        + "The code is executed in an interactive shell, imports and variables are preserved between calls. "
+        + "The environment has internet and file system access. "
+        + "The current working directory is shared with the user, so files can be exchanged. "
+        + "There are many libraries pre-installed, including numpy, pandas, matplotlib, and scikit-learn. "
+        + "You cannot show rich outputs like plots or images, but you can store them in the working directory and point the user to them. "
+        + "If the code runs too long, there will be a timeout.",
+        "parameters": {
+            "type": "object",
+            "properties": {
+                "code": {
+                    "type": "string",
+                    "description": "The Python code to run",
+                },
+            },
+            "required": ["code"],
+        },
+    },
+]
diff --git a/services/llm/llama/__init__.py b/services/llm/llama/__init__.py
new file mode 100644
index 0000000..f190763
--- /dev/null
+++ b/services/llm/llama/__init__.py
@@ -0,0 +1,2 @@
+from .llama_replicate import LlamaReplicate
+from .llama_tgi import LlamaTGI
diff --git a/services/llm/llama/llama_replicate.py b/services/llm/llama/llama_replicate.py
new file mode 100644
index 0000000..c25f8c0
--- /dev/null
+++ b/services/llm/llama/llama_replicate.py
@@ -0,0 +1,43 @@
+from typing import Generator, Optional
+
+import replicate
+from replicate.exceptions import ReplicateException
+
+from llm.base import BaseLLM, LLMException
+from llm.types import Message, Response
+from utils import get_env_var
+
+from .prompt import SYSTEM_PROMPT
+from .parsing import msg_to_llama_msg, split_output
+
+
+class LlamaReplicate(BaseLLM):
+    def __init__(self, model_name: str):
+        self._model_name = model_name
+        self._client = replicate.Client(api_token=get_env_var("REPLICATE_API_KEY"))
+
+    def chat(self, history: list[Message]) -> Generator[Response, None, None]:
+        messages = [msg_to_llama_msg(msg) for msg in history]
+        try:
+            output = self._client.run(
+                self._model_name,
+                input={
+                    "prompt": " ".join(messages),
+                    "system_prompt": SYSTEM_PROMPT,
+                    "temperature": 0.01,
+                },
+            )
+
+            full_text = ""
+            for item in output:
+                full_text += item
+
+                text, code, finished = split_output(full_text)
+                if text is not None or code is not None:
+                    yield Response(text=text, code=code)
+
+                if finished:
+                    break
+
+        except ReplicateException as e:
+            raise LLMException(str(e))
diff --git a/services/llm/llama/llama_tgi.py b/services/llm/llama/llama_tgi.py
new file mode 100644
index 0000000..3b345f1
--- /dev/null
+++ b/services/llm/llama/llama_tgi.py
@@ -0,0 +1,78 @@
+from typing import Generator
+
+from requests import RequestException
+from sentencepiece import SentencePieceProcessor
+from text_generation import Client
+from text_generation.errors import ValidationError
+
+from llm.base import BaseLLM, LLMException
+from llm.types import Message, Response
+from utils import get_env_var
+
+from .prompt import SYSTEM_PROMPT
+from .parsing import msg_to_llama_msg, split_output
+
+
+B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
+B_TURN, E_TURN = "<s>", "</s>"
+
+
+class LlamaTGI(BaseLLM):
+    """Llama, served via text generation inference by huggingface."""
+
+    def __init__(self, client_url: str):
+        self._client = Client(client_url)
+        self._max_tokens = int(get_env_var("MAX_TOKENS"))
+        self._tokenizer = SentencePieceProcessor(
+            model_file=get_env_var("TOKENIZER_PATH")
+        )
+
+    @staticmethod
+    def get_prompt(history: list[Message]) -> str:
+        if len(history) == 0 or history[0].role != "user":
+            raise ValueError("First message must be user message")
+
+        first_message = Message(
+            role="user", text=f"{B_SYS}{SYSTEM_PROMPT}{E_SYS}{history[0].text}"
+        )
+        system_history = [first_message] + history[1:]
+
+        messages = [msg_to_llama_msg(msg) for msg in system_history]
+
+        if history[-1].role == "model":
+            raise ValueError("Last message must be user or interpreter message")
+
+        previous_turns = [
+            f"{B_TURN}{messages[idx]} {messages[idx+1]}{E_TURN}"
+            for idx in range(len(messages))[0:-1:2]
+        ]
+        last_turn = f"{B_TURN}{messages[-1]}"
+
+        prompt = "".join(previous_turns + [last_turn])
+        return prompt
+
+    def get_max_new_tokens(self, prompt: str) -> int:
+        prompt_tokens = len(self._tokenizer.encode(prompt))
+        return self._max_tokens - prompt_tokens
+
+    def chat(self, history: list[Message]) -> Generator[Response, None, None]:
+        prompt = self.get_prompt(history)
+        max_new_tokens = self.get_max_new_tokens(prompt)
+
+        try:
+            full_text = ""
+            for response in self._client.generate_stream(
+                prompt, temperature=0.75, max_new_tokens=max_new_tokens
+            ):
+                if not response.token.special:
+                    full_text += response.token.text
+
+                text, code, finished = split_output(full_text)
+                if text is not None or code is not None:
+                    yield Response(text=text, code=code)
+
+                if finished:
+                    break
+
+        except (RequestException, ValidationError) as e:
+            raise LLMException(str(e))
diff --git a/services/llm/llama/parsing.py b/services/llm/llama/parsing.py
new file mode 100644
index 0000000..b11c4fd
--- /dev/null
+++ b/services/llm/llama/parsing.py
@@ -0,0 +1,70 @@
+from typing import Optional
+
+from llm.base import Message
+
+CODE_BEGIN, CODE_END = "```python\n", "\n```"
+CODE_RESULT = "RESULT_PYTHON: "
+B_INST, E_INST = "[INST]", "[/INST]"
+
+
+def msg_to_llama_msg(msg: Message) -> str:
+    if msg.role == "user":
+        return f"{B_INST} {msg.text} {E_INST}"
+    if msg.role == "model":
+        return merge_output(msg.text, msg.code)
+    if msg.role == "interpreter":
+        return f"{B_INST} {format_code_result(msg.code_result)} {E_INST}"
+    raise ValueError(f"Invalid message role {msg.role}")
+
+
+def split_output(text: str) -> tuple[Optional[str], Optional[str], bool]:
+    """Assumes that there is at most one code block, which is at the end.
+    Last value in tuple is true if the full code block was found. This makes it
+    possible to enforce this policy by stopping generation as soon as it becomes true."""
+
+    finished = False
+
+    # split text and code if possible
+    if CODE_BEGIN in text:
+        text, code = text.split(CODE_BEGIN)
+    else:
+        text, code = text, None
+
+    # if couldn't split, might need to remove partially generated CODE_BEGIN
+    if code is None:
+        for i in range(len(CODE_BEGIN) - 1, 0, -1):
+            if text.endswith(CODE_BEGIN[:i]):
+                text = text[:-i]
+                break
+
+    # clean text
+    text = text.strip()
+    if text == "":
+        text = None
+
+    # if have code, might need to remove partially generated CODE_END
+    if code is not None:
+        if CODE_END in code:
+            code = code.split(CODE_END)[0]
+            finished = True
+        else:
+            for i in range(len(CODE_END) - 1, 0, -1):
+                if code.endswith(CODE_END[:i]):
+                    code = code[:-i]
+                    break
+
+        if code == "":
+            code = None
+
+    return text, code, finished
+
+
+def merge_output(text: Optional[str], code: Optional[str]) -> str:
+    response = text if text else ""
+    if code:
+        response += f"\n{CODE_BEGIN}{code}{CODE_END}"
+    return response
+
+
+def format_code_result(code_result: str) -> str:
+    return f"{CODE_RESULT}{code_result}"
diff --git a/services/llm/llama/prompt.py b/services/llm/llama/prompt.py
new file mode 100644
index 0000000..5d03902
--- /dev/null
+++ b/services/llm/llama/prompt.py
@@ -0,0 +1,18 @@
+SYSTEM_PROMPT = """\
+You are a helpful AI assistant.
+
+You have access to a python code interpreter, which supports you in your tasks.
+The code is executed in an interactive shell, imports and variables are preserved between calls.
+The environment has internet and file system access.
+The current working directory is shared with the user, so files can be exchanged.
+There are many libraries pre-installed, including numpy, pandas, matplotlib, and scikit-learn.
+You cannot show rich outputs like plots or images, but you can store them in the working directory and point the user to them.
+If the code runs too long, there will be a timeout.
+
+To access the interpreter, use the following format:
+```python
+<code>
+```
+If you want to call Python and still say something, only output text above the code block, NOT below.
+Only provide at most one code block per message.
+The code will be executed automatically and the result will be sent back to you."""
diff --git a/services/llm/selector.py b/services/llm/selector.py
new file mode 100644
index 0000000..d864627
--- /dev/null
+++ b/services/llm/selector.py
@@ -0,0 +1,19 @@
+from llm.llama import LlamaReplicate, LlamaTGI
+from llm.gpt import GPTAzure, GPTOpenAI
+from llm.base import BaseLLM
+
+
+MAP_LLM = {
+    "gpt-openai": GPTOpenAI,
+    "gpt-azure": GPTAzure,
+    "llama-replicate": LlamaReplicate,
+    "llama-tgi": LlamaTGI,
+}
+
+
+def get_llm(llm_setting: str) -> BaseLLM:
+    for prefix, llm_class in MAP_LLM.items():
+        if llm_setting.startswith(prefix):
+            return llm_class(llm_setting[len(prefix) + 1 :])
+
+    raise ValueError(f"Unknown LLM setting: {llm_setting}")
diff --git a/services/llm/types.py b/services/llm/types.py
new file mode 100644
index 0000000..0858736
--- /dev/null
+++ b/services/llm/types.py
@@ -0,0 +1,19 @@
+from typing import Literal, Optional
+
+from pydantic import BaseModel
+
+
+# a user sends a text
+# a model sends a text, code, or both
+# an interpreter sends a code_result
+# duplicate definition in frontend
+class Message(BaseModel):
+    role: Literal["user", "model", "interpreter"]
+    text: Optional[str] = None
+    code: Optional[str] = None
+    code_result: Optional[str] = None
+
+
+class Response(BaseModel):
+    text: Optional[str] = None
+    code: Optional[str] = None
diff --git a/interpreter/main.py b/services/main_interpreter.py
similarity index 53%
rename from interpreter/main.py
rename to services/main_interpreter.py
index d8f3932..fbf21ac 100644
--- a/interpreter/main.py
+++ b/services/main_interpreter.py
@@ -1,34 +1,18 @@
-import os
-import sys
 from pathlib import Path
 
-from fastapi import FastAPI, WebSocket, WebSocketDisconnect
-from fastapi.middleware.cors import CORSMiddleware
+from fastapi import WebSocket, WebSocketDisconnect
+from websockets.exceptions import ConnectionClosedError
 
 from interpreter import IPythonInterpreter
+from utils import get_app, get_env_var
 
-app = FastAPI()
-app.add_middleware(
-    CORSMiddleware,
-    allow_origins=["*"],
-    allow_credentials=True,
-    allow_methods=["*"],
-    allow_headers=["*"],
-)
 
-TIMEOUT = (
-    int(os.environ["INTERPRETER_TIMEOUT"])
-    if "INTERPRETER_TIMEOUT" in os.environ
-    else 30
-)
-TIMEOUT_MESSAGE = "ERROR: TIMEOUT REACHED"
+app = get_app()
 
-try:
-    WORKING_DIRECTORY = Path(os.environ["WORKING_DIRECTORY"])
-    IPYTHON_PATH = Path(os.environ["IPYTHON_PATH"])
-except KeyError:
-    print("ERROR: Missing environment variables, exiting...", file=sys.stderr)
-    sys.exit(1)
+WORKING_DIRECTORY = Path(get_env_var("WORKING_DIRECTORY"))
+IPYTHON_PATH = Path(get_env_var("IPYTHON_PATH"))
+TIMEOUT = int(get_env_var("INTERPRETER_TIMEOUT", "30"))
+TIMEOUT_MESSAGE = "ERROR: TIMEOUT REACHED"
 
 
 def get_interpreter() -> IPythonInterpreter:
@@ -41,15 +25,29 @@ def get_interpreter() -> IPythonInterpreter:
     return interpreter
 
 
-@app.websocket("/run")
+@app.websocket("/api/interpreter/run")
 async def run(websocket: WebSocket):
-    await websocket.accept()
+    ws_exceptions = WebSocketDisconnect, ConnectionClosedError
+
+    try:
+        await websocket.accept()
+    except ws_exceptions:
+        return
+
     try:
         interpreter = get_interpreter()
     except Exception as e:
-        await websocket.send_text(str(e))
+        try:
+            await websocket.send_text(str(e))
+        except ws_exceptions:
+            return
+        return
+
+    try:
+        await websocket.send_text("_ready_")
+    except ws_exceptions:
+        interpreter.stop()
         return
-    await websocket.send_text("_ready_")
 
     try:
         while True:
@@ -62,7 +60,7 @@
             except Exception as e:
                 response = f"_error_ {e}"
             await websocket.send_text(response)
-    except WebSocketDisconnect:
+    except ws_exceptions:
         pass
 
     interpreter.stop()
diff --git a/services/main_llm.py b/services/main_llm.py
new file mode 100644
index 0000000..544b50c
--- /dev/null
+++ b/services/main_llm.py
@@ -0,0 +1,56 @@
+import json
+
+from fastapi import WebSocket
+from fastapi.websockets import WebSocketDisconnect
+from pydantic import BaseModel
+from websockets.exceptions import ConnectionClosedError
+
+from llm import LLMException, Message, get_llm
+from utils import get_app, get_env_var
+
+
+app = get_app()
+
+LLM_SETTING = get_env_var("LLM", "gpt-openai:gpt-4")
+llm = get_llm(LLM_SETTING)
+
+
+class Request(BaseModel):
+    history: list[Message]
+
+
+@app.websocket("/api/llm/chat")
+async def chat(websocket: WebSocket):
+    ws_exceptions = WebSocketDisconnect, ConnectionClosedError
+
+    try:
+        await websocket.accept()
+        history = await websocket.receive_text()
+    except ws_exceptions:
+        return
+
+    try:
+        history = json.loads(history)
+        history = Request(history=history).history
+        response_generator = llm.chat(history)
+        try:
+            for response in response_generator:
+                msg = "_success_ " + response.json(exclude_none=True)
+                await websocket.send_text(msg)
+            await websocket.close()
+
+        except ws_exceptions:
+            response_generator.close()
+            return
+
+    except Exception as e:
+        try:
+            if isinstance(e, LLMException):
+                error = str(e)
+            else:
+                print(e, type(e))
+                error = "Internal error"
+            await websocket.send_text("_error_ " + error)
+            await websocket.close()
+        except ws_exceptions:
+            return
diff --git a/interpreter/poetry.lock b/services/poetry.lock
similarity index 78%
rename from interpreter/poetry.lock
rename to services/poetry.lock
index dd174ed..d5cb25a 100644
--- a/interpreter/poetry.lock
+++ b/services/poetry.lock
@@ -2,7 +2,7 @@
 name = "aiohttp"
 version = "3.8.5"
 description = "Async http client/server framework (asyncio)"
-category = "dev"
+category = "main"
 optional = false
 python-versions = ">=3.6"
 
@@ -22,21 +22,13 @@ speedups = ["Brotli", "aiodns", "cchardet"]
 name = "aiosignal"
 version = "1.3.1"
 description = "aiosignal: a list of registered asynchronous callbacks"
-category = "dev"
+category = "main"
 optional = false
 python-versions = ">=3.7"
 
 [package.dependencies]
 frozenlist = ">=1.1.0"
 
-[[package]]
-name = "annotated-types"
-version = "0.5.0"
-description = "Reusable constraint types to use with typing.Annotated"
-category = "main"
-optional = false
-python-versions = ">=3.7"
-
 [[package]]
 name = "anyio"
 version = "3.7.1"
@@ -81,7 +73,7 @@ test = ["astroid", "pytest"]
 name = "async-timeout"
 version = "4.0.2"
 description = "Timeout context manager for asyncio programs"
-category = "dev"
+category = "main"
 optional = false
 python-versions = ">=3.6"
 
@@ -89,7 +81,7 @@ python-versions = ">=3.6"
 name = "attrs"
 version = "23.1.0"
 description = "Classes Without Boilerplate"
-category = "dev"
+category = "main"
 optional = false
 python-versions = ">=3.7"
 
@@ -132,11 +124,19 @@ d = ["aiohttp (>=3.7.4)"]
 jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
 uvloop = ["uvloop (>=0.15.2)"]
 
+[[package]]
+name = "certifi" +version = "2023.7.22" +description = "Python package for providing Mozilla's CA Bundle." +category = "main" +optional = false +python-versions = ">=3.6" + [[package]] name = "charset-normalizer" version = "3.2.0" description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet." -category = "dev" +category = "main" optional = false python-versions = ">=3.7.0" @@ -205,14 +205,58 @@ typing-extensions = ">=4.5.0" [package.extras] all = ["email-validator (>=2.0.0)", "httpx (>=0.23.0)", "itsdangerous (>=1.1.0)", "jinja2 (>=2.11.2)", "orjson (>=3.2.1)", "pydantic-extra-types (>=2.0.0)", "pydantic-settings (>=2.0.0)", "python-multipart (>=0.0.5)", "pyyaml (>=5.3.1)", "ujson (>=4.0.1,!=4.0.2,!=4.1.0,!=4.2.0,!=4.3.0,!=5.0.0,!=5.1.0)", "uvicorn[standard] (>=0.12.0)"] +[[package]] +name = "filelock" +version = "3.12.2" +description = "A platform independent file lock." +category = "main" +optional = false +python-versions = ">=3.7" + +[package.extras] +docs = ["furo (>=2023.5.20)", "sphinx (>=7.0.1)", "sphinx-autodoc-typehints (>=1.23,!=1.23.4)"] +testing = ["covdefaults (>=2.3)", "coverage (>=7.2.7)", "diff-cover (>=7.5)", "pytest (>=7.3.1)", "pytest-cov (>=4.1)", "pytest-mock (>=3.10)", "pytest-timeout (>=2.1)"] + [[package]] name = "frozenlist" version = "1.4.0" description = "A list-like structure which implements collections.abc.MutableSequence" -category = "dev" +category = "main" optional = false python-versions = ">=3.8" +[[package]] +name = "fsspec" +version = "2023.6.0" +description = "File-system specification" +category = "main" +optional = false +python-versions = ">=3.8" + +[package.extras] +abfs = ["adlfs"] +adl = ["adlfs"] +arrow = ["pyarrow (>=1)"] +dask = ["dask", "distributed"] +devel = ["pytest", "pytest-cov"] +dropbox = ["dropbox", "dropboxdrivefs", "requests"] +full = ["adlfs", "aiohttp (!=4.0.0a0,!=4.0.0a1)", "dask", "distributed", "dropbox", "dropboxdrivefs", "fusepy", "gcsfs", "libarchive-c", "ocifs", "panel", "paramiko", "pyarrow (>=1)", "pygit2", "requests", "s3fs", "smbprotocol", "tqdm"] +fuse = ["fusepy"] +gcs = ["gcsfs"] +git = ["pygit2"] +github = ["requests"] +gs = ["gcsfs"] +gui = ["panel"] +hdfs = ["pyarrow (>=1)"] +http = ["aiohttp (!=4.0.0a0,!=4.0.0a1)", "requests"] +libarchive = ["libarchive-c"] +oci = ["ocifs"] +s3 = ["s3fs"] +sftp = ["paramiko"] +smb = ["smbprotocol"] +ssh = ["paramiko"] +tqdm = ["tqdm"] + [[package]] name = "h11" version = "0.14.0" @@ -221,6 +265,35 @@ category = "main" optional = false python-versions = ">=3.7" +[[package]] +name = "huggingface-hub" +version = "0.16.4" +description = "Client library to download and publish models, datasets and other repos on the huggingface.co hub" +category = "main" +optional = false +python-versions = ">=3.7.0" + +[package.dependencies] +filelock = "*" +fsspec = "*" +packaging = ">=20.9" +pyyaml = ">=5.1" +requests = "*" +tqdm = ">=4.42.1" +typing-extensions = ">=3.7.4.3" + +[package.extras] +all = ["InquirerPy (==0.3.4)", "Jinja2", "Pillow", "aiohttp", "black (>=23.1,<24.0)", "gradio", "jedi", "mypy (==0.982)", "numpy", "pydantic", "pytest", "pytest-asyncio", "pytest-cov", "pytest-env", "pytest-vcr", "pytest-xdist", "ruff (>=0.0.241)", "soundfile", "types-PyYAML", "types-requests", "types-simplejson", "types-toml", "types-tqdm", "types-urllib3", "urllib3 (<2.0)"] +cli = ["InquirerPy (==0.3.4)"] +dev = ["InquirerPy (==0.3.4)", "Jinja2", "Pillow", "aiohttp", "black (>=23.1,<24.0)", "gradio", "jedi", "mypy (==0.982)", "numpy", "pydantic", 
"pytest", "pytest-asyncio", "pytest-cov", "pytest-env", "pytest-vcr", "pytest-xdist", "ruff (>=0.0.241)", "soundfile", "types-PyYAML", "types-requests", "types-simplejson", "types-toml", "types-tqdm", "types-urllib3", "urllib3 (<2.0)"] +fastai = ["fastai (>=2.4)", "fastcore (>=1.3.27)", "toml"] +inference = ["aiohttp", "pydantic"] +quality = ["black (>=23.1,<24.0)", "mypy (==0.982)", "ruff (>=0.0.241)"] +tensorflow = ["graphviz", "pydot", "tensorflow"] +testing = ["InquirerPy (==0.3.4)", "Jinja2", "Pillow", "aiohttp", "gradio", "jedi", "numpy", "pydantic", "pytest", "pytest-asyncio", "pytest-cov", "pytest-env", "pytest-vcr", "pytest-xdist", "soundfile", "urllib3 (<2.0)"] +torch = ["torch"] +typing = ["pydantic", "types-PyYAML", "types-requests", "types-simplejson", "types-toml", "types-tqdm", "types-urllib3"] + [[package]] name = "idna" version = "3.4" @@ -304,7 +377,7 @@ traitlets = "*" name = "multidict" version = "6.0.4" description = "multidict implementation" -category = "dev" +category = "main" optional = false python-versions = ">=3.7" @@ -316,11 +389,30 @@ category = "dev" optional = false python-versions = ">=3.5" +[[package]] +name = "openai" +version = "0.27.8" +description = "Python client library for the OpenAI API" +category = "main" +optional = false +python-versions = ">=3.7.1" + +[package.dependencies] +aiohttp = "*" +requests = ">=2.20" +tqdm = "*" + +[package.extras] +datalib = ["numpy", "openpyxl (>=3.0.7)", "pandas (>=1.2.3)", "pandas-stubs (>=1.1.0.11)"] +dev = ["black (>=21.6b0,<22.0)", "pytest (>=6.0.0,<7.0.0)", "pytest-asyncio", "pytest-mock"] +embeddings = ["matplotlib", "numpy", "openpyxl (>=3.0.7)", "pandas (>=1.2.3)", "pandas-stubs (>=1.1.0.11)", "plotly", "scikit-learn (>=1.0.2)", "scipy", "tenacity (>=8.0.1)"] +wandb = ["numpy", "openpyxl (>=3.0.7)", "pandas (>=1.2.3)", "pandas-stubs (>=1.1.0.11)", "wandb"] + [[package]] name = "packaging" version = "23.1" description = "Core utilities for Python packages" -category = "dev" +category = "main" optional = false python-versions = ">=3.7" @@ -419,30 +511,18 @@ tests = ["pytest"] [[package]] name = "pydantic" -version = "2.1.1" -description = "Data validation using Python type hints" +version = "1.10.12" +description = "Data validation and settings management using python type hints" category = "main" optional = false python-versions = ">=3.7" [package.dependencies] -annotated-types = ">=0.4.0" -pydantic-core = "2.4.0" -typing-extensions = ">=4.6.1" +typing-extensions = ">=4.2.0" [package.extras] -email = ["email-validator (>=2.0.0)"] - -[[package]] -name = "pydantic-core" -version = "2.4.0" -description = "" -category = "main" -optional = false -python-versions = ">=3.7" - -[package.dependencies] -typing-extensions = ">=4.6.0,<4.7.0 || >4.7.0" +dotenv = ["python-dotenv (>=0.10.4)"] +email = ["email-validator (>=1.0.3)"] [[package]] name = "pygments" @@ -474,6 +554,56 @@ tomli = {version = ">=1.0.0", markers = "python_version < \"3.11\""} [package.extras] testing = ["argcomplete", "attrs (>=19.2.0)", "hypothesis (>=3.56)", "mock", "nose", "pygments (>=2.7.2)", "requests", "setuptools", "xmlschema"] +[[package]] +name = "pyyaml" +version = "6.0.1" +description = "YAML parser and emitter for Python" +category = "main" +optional = false +python-versions = ">=3.6" + +[[package]] +name = "replicate" +version = "0.11.0" +description = "Python client for Replicate" +category = "main" +optional = false +python-versions = ">=3.8" + +[package.dependencies] +packaging = "*" +pydantic = ">1" +requests = ">2" + 
+[package.extras] +dev = ["black", "mypy", "pytest", "responses", "ruff"] + +[[package]] +name = "requests" +version = "2.31.0" +description = "Python HTTP for Humans." +category = "main" +optional = false +python-versions = ">=3.7" + +[package.dependencies] +certifi = ">=2017.4.17" +charset-normalizer = ">=2,<4" +idna = ">=2.5,<4" +urllib3 = ">=1.21.1,<3" + +[package.extras] +socks = ["PySocks (>=1.5.6,!=1.5.7)"] +use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"] + +[[package]] +name = "sentencepiece" +version = "0.1.99" +description = "SentencePiece python wrapper" +category = "main" +optional = false +python-versions = "*" + [[package]] name = "six" version = "1.16.0" @@ -521,6 +651,19 @@ typing-extensions = {version = ">=3.10.0", markers = "python_version < \"3.10\"" [package.extras] full = ["httpx (>=0.22.0)", "itsdangerous", "jinja2", "python-multipart", "pyyaml"] +[[package]] +name = "text-generation" +version = "0.6.0" +description = "Hugging Face Text Generation Python Client" +category = "main" +optional = false +python-versions = ">=3.7,<4.0" + +[package.dependencies] +aiohttp = ">=3.8,<4.0" +huggingface-hub = ">=0.12,<1.0" +pydantic = ">=1.10,<2.0" + [[package]] name = "tomli" version = "2.0.1" @@ -529,6 +672,23 @@ category = "dev" optional = false python-versions = ">=3.7" +[[package]] +name = "tqdm" +version = "4.66.1" +description = "Fast, Extensible Progress Meter" +category = "main" +optional = false +python-versions = ">=3.7" + +[package.dependencies] +colorama = {version = "*", markers = "platform_system == \"Windows\""} + +[package.extras] +dev = ["pytest (>=6)", "pytest-cov", "pytest-timeout", "pytest-xdist"] +notebook = ["ipywidgets (>=6)"] +slack = ["slack-sdk"] +telegram = ["requests"] + [[package]] name = "traitlets" version = "5.9.0" @@ -549,6 +709,20 @@ category = "main" optional = false python-versions = ">=3.7" +[[package]] +name = "urllib3" +version = "2.0.4" +description = "HTTP library with thread-safe connection pooling, file post, and more." 
+category = "main" +optional = false +python-versions = ">=3.7" + +[package.extras] +brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)"] +secure = ["certifi", "cryptography (>=1.9)", "idna (>=2.0.0)", "pyopenssl (>=17.1.0)", "urllib3-secure-extra"] +socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"] +zstd = ["zstandard (>=0.18.0)"] + [[package]] name = "uvicorn" version = "0.23.1" @@ -585,7 +759,7 @@ python-versions = ">=3.7" name = "yarl" version = "1.9.2" description = "Yet another URL library" -category = "dev" +category = "main" optional = false python-versions = ">=3.7" @@ -595,8 +769,8 @@ multidict = ">=4.0" [metadata] lock-version = "1.1" -python-versions = ">=3.9" -content-hash = "d0c9a9ea8374b4d8ca3d1b96220b626d496ddbc173f29f613527eeaf36e3995b" +python-versions = ">=3.9, <4.0" +content-hash = "f3ae21397ca0a253dc085551c064a3760869d881ea6ea0559bdaaa0a1efe5070" [metadata.files] aiohttp = [ @@ -692,10 +866,6 @@ aiosignal = [ {file = "aiosignal-1.3.1-py3-none-any.whl", hash = "sha256:f8376fb07dd1e86a584e4fcdec80b36b7f81aac666ebc724e2c090300dd83b17"}, {file = "aiosignal-1.3.1.tar.gz", hash = "sha256:54cd96e15e1649b75d6c87526a6ff0b6c1b0dd3459f43d9ca11d48c339b68cfc"}, ] -annotated-types = [ - {file = "annotated_types-0.5.0-py3-none-any.whl", hash = "sha256:58da39888f92c276ad970249761ebea80ba544b77acddaa1a4d6cf78287d45fd"}, - {file = "annotated_types-0.5.0.tar.gz", hash = "sha256:47cdc3490d9ac1506ce92c7aaa76c579dc3509ff11e098fc867e5130ab7be802"}, -] anyio = [ {file = "anyio-3.7.1-py3-none-any.whl", hash = "sha256:91dee416e570e92c64041bd18b900d1d6fa78dff7048769ce5ac5ddad004fbb5"}, {file = "anyio-3.7.1.tar.gz", hash = "sha256:44a3c9aba0f5defa43261a8b3efb97891f2bd7d804e0e1f56419befa1adfc780"}, @@ -744,6 +914,10 @@ black = [ {file = "black-23.7.0-py3-none-any.whl", hash = "sha256:9fd59d418c60c0348505f2ddf9609c1e1de8e7493eab96198fc89d9f865e7a96"}, {file = "black-23.7.0.tar.gz", hash = "sha256:022a582720b0d9480ed82576c920a8c1dde97cc38ff11d8d8859b3bd6ca9eedb"}, ] +certifi = [ + {file = "certifi-2023.7.22-py3-none-any.whl", hash = "sha256:92d6037539857d8206b8f6ae472e8b77db8058fec5937a1ef3f54304089edbb9"}, + {file = "certifi-2023.7.22.tar.gz", hash = "sha256:539cc1d13202e33ca466e88b2807e29f4c13049d6d87031a3c110744495cb082"}, +] charset-normalizer = [ {file = "charset-normalizer-3.2.0.tar.gz", hash = "sha256:3bb3d25a8e6c0aedd251753a79ae98a093c7e7b471faa3aa9a93a81431987ace"}, {file = "charset_normalizer-3.2.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:0b87549028f680ca955556e3bd57013ab47474c3124dc069faa0b6545b6c9710"}, @@ -845,6 +1019,10 @@ fastapi = [ {file = "fastapi-0.100.1-py3-none-any.whl", hash = "sha256:ec6dd52bfc4eff3063cfcd0713b43c87640fefb2687bbbe3d8a08d94049cdf32"}, {file = "fastapi-0.100.1.tar.gz", hash = "sha256:522700d7a469e4a973d92321ab93312448fbe20fca9c8da97effc7e7bc56df23"}, ] +filelock = [ + {file = "filelock-3.12.2-py3-none-any.whl", hash = "sha256:cbb791cdea2a72f23da6ac5b5269ab0a0d161e9ef0100e653b69049a7706d1ec"}, + {file = "filelock-3.12.2.tar.gz", hash = "sha256:002740518d8aa59a26b0c76e10fb8c6e15eae825d34b6fdf670333fd7b938d81"}, +] frozenlist = [ {file = "frozenlist-1.4.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:764226ceef3125e53ea2cb275000e309c0aa5464d43bd72abd661e27fffc26ab"}, {file = "frozenlist-1.4.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d6484756b12f40003c6128bfcc3fa9f0d49a687e171186c2d85ec82e3758c559"}, @@ -908,10 +1086,18 @@ frozenlist = [ {file = "frozenlist-1.4.0-cp39-cp39-win_amd64.whl", hash = 
"sha256:6221d84d463fb110bdd7619b69cb43878a11d51cbb9394ae3105d082d5199167"}, {file = "frozenlist-1.4.0.tar.gz", hash = "sha256:09163bdf0b2907454042edb19f887c6d33806adc71fbd54afc14908bfdc22251"}, ] +fsspec = [ + {file = "fsspec-2023.6.0-py3-none-any.whl", hash = "sha256:1cbad1faef3e391fba6dc005ae9b5bdcbf43005c9167ce78c915549c352c869a"}, + {file = "fsspec-2023.6.0.tar.gz", hash = "sha256:d0b2f935446169753e7a5c5c55681c54ea91996cc67be93c39a154fb3a2742af"}, +] h11 = [ {file = "h11-0.14.0-py3-none-any.whl", hash = "sha256:e3fe4ac4b851c468cc8363d500db52c2ead036020723024a109d37346efaa761"}, {file = "h11-0.14.0.tar.gz", hash = "sha256:8f19fbbe99e72420ff35c00b27a34cb9937e902a8b810e2c88300c6f0a3b699d"}, ] +huggingface-hub = [ + {file = "huggingface_hub-0.16.4-py3-none-any.whl", hash = "sha256:0d3df29932f334fead024afc7cb4cc5149d955238b8b5e42dcf9740d6995a349"}, + {file = "huggingface_hub-0.16.4.tar.gz", hash = "sha256:608c7d4f3d368b326d1747f91523dbd1f692871e8e2e7a4750314a2dd8b63e14"}, +] idna = [ {file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"}, {file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"}, @@ -1012,6 +1198,10 @@ mypy-extensions = [ {file = "mypy_extensions-1.0.0-py3-none-any.whl", hash = "sha256:4392f6c0eb8a5668a69e23d168ffa70f0be9ccfd32b5cc2d26a34ae5b844552d"}, {file = "mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"}, ] +openai = [ + {file = "openai-0.27.8-py3-none-any.whl", hash = "sha256:e0a7c2f7da26bdbe5354b03c6d4b82a2f34bd4458c7a17ae1a7092c3e397e03c"}, + {file = "openai-0.27.8.tar.gz", hash = "sha256:2483095c7db1eee274cebac79e315a986c4e55207bb4fa7b82d185b3a2ed9536"}, +] packaging = [ {file = "packaging-23.1-py3-none-any.whl", hash = "sha256:994793af429502c4ea2ebf6bf664629d07c1a9fe974af92966e4b8d2df7edc61"}, {file = "packaging-23.1.tar.gz", hash = "sha256:a392980d2b6cffa644431898be54b0045151319d1e7ec34f0cfed48767dd334f"}, @@ -1053,111 +1243,42 @@ pure-eval = [ {file = "pure_eval-0.2.2.tar.gz", hash = "sha256:2b45320af6dfaa1750f543d714b6d1c520a1688dec6fd24d339063ce0aaa9ac3"}, ] pydantic = [ - {file = "pydantic-2.1.1-py3-none-any.whl", hash = "sha256:43bdbf359d6304c57afda15c2b95797295b702948082d4c23851ce752f21da70"}, - {file = "pydantic-2.1.1.tar.gz", hash = "sha256:22d63db5ce4831afd16e7c58b3192d3faf8f79154980d9397d9867254310ba4b"}, -] -pydantic-core = [ - {file = "pydantic_core-2.4.0-cp310-cp310-macosx_10_7_x86_64.whl", hash = "sha256:2ca4687dd996bde7f3c420def450797feeb20dcee2b9687023e3323c73fc14a2"}, - {file = "pydantic_core-2.4.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:782fced7d61469fd1231b184a80e4f2fa7ad54cd7173834651a453f96f29d673"}, - {file = "pydantic_core-2.4.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6213b471b68146af97b8551294e59e7392c2117e28ffad9c557c65087f4baee3"}, - {file = "pydantic_core-2.4.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:63797499a219d8e81eb4e0c42222d0a4c8ec896f5c76751d4258af95de41fdf1"}, - {file = "pydantic_core-2.4.0-cp310-cp310-manylinux_2_24_armv7l.whl", hash = "sha256:0455876d575a35defc4da7e0a199596d6c773e20d3d42fa1fc29f6aa640369ed"}, - {file = "pydantic_core-2.4.0-cp310-cp310-manylinux_2_24_ppc64le.whl", hash = "sha256:8c938c96294d983dcf419b54dba2d21056959c22911d41788efbf949a29ae30d"}, - {file = "pydantic_core-2.4.0-cp310-cp310-manylinux_2_24_s390x.whl", hash = 
"sha256:878a5017d93e776c379af4e7b20f173c82594d94fa073059bcc546789ad50bf8"}, - {file = "pydantic_core-2.4.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:69159afc2f2dc43285725f16143bc5df3c853bc1cb7df6021fce7ef1c69e8171"}, - {file = "pydantic_core-2.4.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:54df7df399b777c1fd144f541c95d351b3aa110535a6810a6a569905d106b6f3"}, - {file = "pydantic_core-2.4.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:e412607ca89a0ced10758dfb8f9adcc365ce4c1c377e637c01989a75e9a9ec8a"}, - {file = "pydantic_core-2.4.0-cp310-none-win32.whl", hash = "sha256:853f103e2b9a58832fdd08a587a51de8b552ae90e1a5d167f316b7eabf8d7dde"}, - {file = "pydantic_core-2.4.0-cp310-none-win_amd64.whl", hash = "sha256:3ba2c9c94a9176f6321a879c8b864d7c5b12d34f549a4c216c72ce213d7d953c"}, - {file = "pydantic_core-2.4.0-cp311-cp311-macosx_10_7_x86_64.whl", hash = "sha256:a8b7acd04896e8f161e1500dc5f218017db05c1d322f054e89cbd089ce5d0071"}, - {file = "pydantic_core-2.4.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:16468bd074fa4567592d3255bf25528ed41e6b616d69bf07096bdb5b66f947d1"}, - {file = "pydantic_core-2.4.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cba5ad5eef02c86a1f3da00544cbc59a510d596b27566479a7cd4d91c6187a11"}, - {file = "pydantic_core-2.4.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b7206e41e04b443016e930e01685bab7a308113c0b251b3f906942c8d4b48fcb"}, - {file = "pydantic_core-2.4.0-cp311-cp311-manylinux_2_24_armv7l.whl", hash = "sha256:c1375025f0bfc9155286ebae8eecc65e33e494c90025cda69e247c3ccd2bab00"}, - {file = "pydantic_core-2.4.0-cp311-cp311-manylinux_2_24_ppc64le.whl", hash = "sha256:3534118289e33130ed3f1cc487002e8d09b9f359be48b02e9cd3de58ce58fba9"}, - {file = "pydantic_core-2.4.0-cp311-cp311-manylinux_2_24_s390x.whl", hash = "sha256:94d2b36a74623caab262bf95f0e365c2c058396082bd9d6a9e825657d0c1e7fa"}, - {file = "pydantic_core-2.4.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:af24ad4fbaa5e4a2000beae0c3b7fd1c78d7819ab90f9370a1cfd8998e3f8a3c"}, - {file = "pydantic_core-2.4.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:bf10963d8aed8bbe0165b41797c9463d4c5c8788ae6a77c68427569be6bead41"}, - {file = "pydantic_core-2.4.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:68199ada7c310ddb8c76efbb606a0de656b40899388a7498954f423e03fc38be"}, - {file = "pydantic_core-2.4.0-cp311-none-win32.whl", hash = "sha256:6f855bcc96ed3dd56da7373cfcc9dcbabbc2073cac7f65c185772d08884790ce"}, - {file = "pydantic_core-2.4.0-cp311-none-win_amd64.whl", hash = "sha256:de39eb3bab93a99ddda1ac1b9aa331b944d8bcc4aa9141148f7fd8ee0299dafc"}, - {file = "pydantic_core-2.4.0-cp312-cp312-macosx_10_7_x86_64.whl", hash = "sha256:f773b39780323a0499b53ebd91a28ad11cde6705605d98d999dfa08624caf064"}, - {file = "pydantic_core-2.4.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:a297c0d6c61963c5c3726840677b798ca5b7dfc71bc9c02b9a4af11d23236008"}, - {file = "pydantic_core-2.4.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:546064c55264156b973b5e65e5fafbe5e62390902ce3cf6b4005765505e8ff56"}, - {file = "pydantic_core-2.4.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:36ba9e728588588f0196deaf6751b9222492331b5552f865a8ff120869d372e0"}, - {file = "pydantic_core-2.4.0-cp312-cp312-manylinux_2_24_armv7l.whl", hash = "sha256:57a53a75010c635b3ad6499e7721eaa3b450e03f6862afe2dbef9c8f66e46ec8"}, - {file = 
"pydantic_core-2.4.0-cp312-cp312-manylinux_2_24_ppc64le.whl", hash = "sha256:4b262bbc13022f2097c48a21adcc360a81d83dc1d854c11b94953cd46d7d3c07"}, - {file = "pydantic_core-2.4.0-cp312-cp312-manylinux_2_24_s390x.whl", hash = "sha256:01947ad728f426fa07fcb26457ebf90ce29320259938414bc0edd1476e75addb"}, - {file = "pydantic_core-2.4.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b2799c2eaf182769889761d4fb4d78b82bc47dae833799fedbf69fc7de306faa"}, - {file = "pydantic_core-2.4.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:a08fd490ba36d1fbb2cd5dcdcfb9f3892deb93bd53456724389135712b5fc735"}, - {file = "pydantic_core-2.4.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:1e8a7c62d15a5c4b307271e4252d76ebb981d6251c6ecea4daf203ef0179ea4f"}, - {file = "pydantic_core-2.4.0-cp312-none-win32.whl", hash = "sha256:9206c14a67c38de7b916e486ae280017cf394fa4b1aa95cfe88621a4e1d79725"}, - {file = "pydantic_core-2.4.0-cp312-none-win_amd64.whl", hash = "sha256:884235507549a6b2d3c4113fb1877ae263109e787d9e0eb25c35982ab28d0399"}, - {file = "pydantic_core-2.4.0-cp37-cp37m-macosx_10_7_x86_64.whl", hash = "sha256:4cbe929efa77a806e8f1a97793f2dc3ea3475ae21a9ed0f37c21320fe93f6f50"}, - {file = "pydantic_core-2.4.0-cp37-cp37m-macosx_11_0_arm64.whl", hash = "sha256:9137289de8fe845c246a8c3482dd0cb40338846ba683756d8f489a4bd8fddcae"}, - {file = "pydantic_core-2.4.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c5d8e764b5646623e57575f624f8ebb8f7a9f7fd1fae682ef87869ca5fec8dcf"}, - {file = "pydantic_core-2.4.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8fba0aff4c407d0274e43697e785bcac155ad962be57518d1c711f45e72da70f"}, - {file = "pydantic_core-2.4.0-cp37-cp37m-manylinux_2_24_armv7l.whl", hash = "sha256:30527d173e826f2f7651f91c821e337073df1555e3b5a0b7b1e2c39e26e50678"}, - {file = "pydantic_core-2.4.0-cp37-cp37m-manylinux_2_24_ppc64le.whl", hash = "sha256:bd7d1dde70ff3e09e4bc7a1cbb91a7a538add291bfd5b3e70ef1e7b45192440f"}, - {file = "pydantic_core-2.4.0-cp37-cp37m-manylinux_2_24_s390x.whl", hash = "sha256:72f1216ca8cef7b8adacd4c4c6b89c3b0c4f97503197f5284c80f36d6e4edd30"}, - {file = "pydantic_core-2.4.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b013c7861a7c7bfcec48fd709513fea6f9f31727e7a0a93ca0dd12e056740717"}, - {file = "pydantic_core-2.4.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:478f5f6d7e32bd4a04d102160efb2d389432ecf095fe87c555c0a6fc4adfc1a4"}, - {file = "pydantic_core-2.4.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:d9610b47b5fe4aacbbba6a9cb5f12cbe864eec99dbfed5710bd32ef5dd8a5d5b"}, - {file = "pydantic_core-2.4.0-cp37-none-win32.whl", hash = "sha256:ff246c0111076c8022f9ba325c294f2cb5983403506989253e04dbae565e019b"}, - {file = "pydantic_core-2.4.0-cp37-none-win_amd64.whl", hash = "sha256:d0c2b713464a8e263a243ae7980d81ce2de5ac59a9f798a282e44350b42dc516"}, - {file = "pydantic_core-2.4.0-cp38-cp38-macosx_10_7_x86_64.whl", hash = "sha256:12ef6838245569fd60a179fade81ca4b90ae2fa0ef355d616f519f7bb27582db"}, - {file = "pydantic_core-2.4.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:49db206eb8fdc4b4f30e6e3e410584146d813c151928f94ec0db06c4f2595538"}, - {file = "pydantic_core-2.4.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0a507d7fa44688bbac76af6521e488b3da93de155b9cba6f2c9b7833ce243d59"}, - {file = "pydantic_core-2.4.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ffe18407a4d000c568182ce5388bbbedeb099896904e43fc14eee76cfae6dec5"}, - 
{file = "pydantic_core-2.4.0-cp38-cp38-manylinux_2_24_armv7l.whl", hash = "sha256:fa8e48001b39d54d97d7b380a0669fa99fc0feeb972e35a2d677ba59164a9a22"}, - {file = "pydantic_core-2.4.0-cp38-cp38-manylinux_2_24_ppc64le.whl", hash = "sha256:394f12a2671ff8c4dfa2e85be6c08be0651ad85bc1e6aa9c77c21671baaf28cd"}, - {file = "pydantic_core-2.4.0-cp38-cp38-manylinux_2_24_s390x.whl", hash = "sha256:2f9ea0355f90db2a76af530245fa42f04d98f752a1236ed7c6809ec484560d5b"}, - {file = "pydantic_core-2.4.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:61d4e713f467abcdd59b47665d488bb898ad3dd47ce7446522a50e0cbd8e8279"}, - {file = "pydantic_core-2.4.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:453862ab268f6326b01f067ed89cb3a527d34dc46f6f4eeec46a15bbc706d0da"}, - {file = "pydantic_core-2.4.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:56a85fa0dab1567bd0cac10f0c3837b03e8a0d939e6a8061a3a420acd97e9421"}, - {file = "pydantic_core-2.4.0-cp38-none-win32.whl", hash = "sha256:0d726108c1c0380b88b6dd4db559f0280e0ceda9e077f46ff90bc85cd4d03e77"}, - {file = "pydantic_core-2.4.0-cp38-none-win_amd64.whl", hash = "sha256:047580388644c473b934d27849f8ed8dbe45df0adb72104e78b543e13bf69762"}, - {file = "pydantic_core-2.4.0-cp39-cp39-macosx_10_7_x86_64.whl", hash = "sha256:867d3eea954bea807cabba83cfc939c889a18576d66d197c60025b15269d7cc0"}, - {file = "pydantic_core-2.4.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:664402ef0c238a7f8a46efb101789d5f2275600fb18114446efec83cfadb5b66"}, - {file = "pydantic_core-2.4.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:64e8012ad60a5f0da09ed48725e6e923d1be25f2f091a640af6079f874663813"}, - {file = "pydantic_core-2.4.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ac2b680de398f293b68183317432b3d67ab3faeba216aec18de0c395cb5e3060"}, - {file = "pydantic_core-2.4.0-cp39-cp39-manylinux_2_24_armv7l.whl", hash = "sha256:8efc1be43b036c2b6bcfb1451df24ee0ddcf69c31351003daf2699ed93f5687b"}, - {file = "pydantic_core-2.4.0-cp39-cp39-manylinux_2_24_ppc64le.whl", hash = "sha256:d93aedbc4614cc21b9ab0d0c4ccd7143354c1f7cffbbe96ae5216ad21d1b21b5"}, - {file = "pydantic_core-2.4.0-cp39-cp39-manylinux_2_24_s390x.whl", hash = "sha256:af788b64e13d52fc3600a68b16d31fa8d8573e3ff2fc9a38f8a60b8d94d1f012"}, - {file = "pydantic_core-2.4.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:97c6349c81cee2e69ef59eba6e6c08c5936e6b01c2d50b9e4ac152217845ae09"}, - {file = "pydantic_core-2.4.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:cc086ddb6dc654a15deeed1d1f2bcb1cb924ebd70df9dca738af19f64229b06c"}, - {file = "pydantic_core-2.4.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:e953353180bec330c3b830891d260b6f8e576e2d18db3c78d314e56bb2276066"}, - {file = "pydantic_core-2.4.0-cp39-none-win32.whl", hash = "sha256:6feb4b64d11d5420e517910d60a907d08d846cacaf4e029668725cd21d16743c"}, - {file = "pydantic_core-2.4.0-cp39-none-win_amd64.whl", hash = "sha256:153a61ac4030fa019b70b31fb7986461119230d3ba0ab661c757cfea652f4332"}, - {file = "pydantic_core-2.4.0-pp310-pypy310_pp73-macosx_10_7_x86_64.whl", hash = "sha256:3fcf529382b282a30b466bd7af05be28e22aa620e016135ac414f14e1ee6b9e1"}, - {file = "pydantic_core-2.4.0-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2edef05b63d82568b877002dc4cb5cc18f8929b59077120192df1e03e0c633f8"}, - {file = "pydantic_core-2.4.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = 
"sha256:da055a1b0bfa8041bb2ff586b2cb0353ed03944a3472186a02cc44a557a0e661"}, - {file = "pydantic_core-2.4.0-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:77dadc764cf7c5405e04866181c5bd94a447372a9763e473abb63d1dfe9b7387"}, - {file = "pydantic_core-2.4.0-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:a4ea23b07f29487a7bef2a869f68c7ee0e05424d81375ce3d3de829314c6b5ec"}, - {file = "pydantic_core-2.4.0-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:382f0baa044d674ad59455a5eff83d7965572b745cc72df35c52c2ce8c731d37"}, - {file = "pydantic_core-2.4.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:08f89697625e453421401c7f661b9d1eb4c9e4c0a12fd256eeb55b06994ac6af"}, - {file = "pydantic_core-2.4.0-pp37-pypy37_pp73-macosx_10_7_x86_64.whl", hash = "sha256:43a405ce520b45941df9ff55d0cd09762017756a7b413bbad3a6e8178e64a2c2"}, - {file = "pydantic_core-2.4.0-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:584a7a818c84767af16ce8bda5d4f7fedb37d3d231fc89928a192f567e4ef685"}, - {file = "pydantic_core-2.4.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:04922fea7b13cd480586fa106345fe06e43220b8327358873c22d8dfa7a711c7"}, - {file = "pydantic_core-2.4.0-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:17156abac20a9feed10feec867fddd91a80819a485b0107fe61f09f2117fe5f3"}, - {file = "pydantic_core-2.4.0-pp37-pypy37_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:4e562cc63b04636cde361fd47569162f1daa94c759220ff202a8129902229114"}, - {file = "pydantic_core-2.4.0-pp37-pypy37_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:90f3785146f701e053bb6b9e8f53acce2c919aca91df88bd4975be0cb926eb41"}, - {file = "pydantic_core-2.4.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:e40b1e97edd3dc127aa53d8a5e539a3d0c227d71574d3f9ac1af02d58218a122"}, - {file = "pydantic_core-2.4.0-pp38-pypy38_pp73-macosx_10_7_x86_64.whl", hash = "sha256:b27f3e67f6e031f6620655741b7d0d6bebea8b25d415924b3e8bfef2dd7bd841"}, - {file = "pydantic_core-2.4.0-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:be86c2eb12fb0f846262ace9d8f032dc6978b8cb26a058920ecb723dbcb87d05"}, - {file = "pydantic_core-2.4.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4665f7ed345012a8d2eddf4203ef145f5f56a291d010382d235b94e91813f88a"}, - {file = "pydantic_core-2.4.0-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:79262be5a292d1df060f29b9a7cdd66934801f987a817632d7552534a172709a"}, - {file = "pydantic_core-2.4.0-pp38-pypy38_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:5fd905a69ac74eaba5041e21a1e8b1a479dab2b41c93bdcc4c1cede3c12a8d86"}, - {file = "pydantic_core-2.4.0-pp38-pypy38_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:2ad538b7e07343001934417cdc8584623b4d8823c5b8b258e75ec8d327cec969"}, - {file = "pydantic_core-2.4.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:dd2429f7635ad4857b5881503f9c310be7761dc681c467a9d27787b674d1250a"}, - {file = "pydantic_core-2.4.0-pp39-pypy39_pp73-macosx_10_7_x86_64.whl", hash = "sha256:efff8b6761a1f6e45cebd1b7a6406eb2723d2d5710ff0d1b624fe11313693989"}, - {file = "pydantic_core-2.4.0-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:32a1e0352558cd7ccc014ffe818c7d87b15ec6145875e2cc5fa4bb7351a1033d"}, - {file = "pydantic_core-2.4.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a027f41c5008571314861744d83aff75a34cf3a07022e0be32b214a5bc93f7f1"}, - {file 
= "pydantic_core-2.4.0-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1927f0e15d190f11f0b8344373731e28fd774c6d676d8a6cfadc95c77214a48b"}, - {file = "pydantic_core-2.4.0-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:7aa82d483d5fb867d4fb10a138ffd57b0f1644e99f2f4f336e48790ada9ada5e"}, - {file = "pydantic_core-2.4.0-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:b85778308bf945e9b33ac604e6793df9b07933108d20bdf53811bc7c2798a4af"}, - {file = "pydantic_core-2.4.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:3ded19dcaefe2f6706d81e0db787b59095f4ad0fbadce1edffdf092294c8a23f"}, - {file = "pydantic_core-2.4.0.tar.gz", hash = "sha256:ec3473c9789cc00c7260d840c3db2c16dbfc816ca70ec87a00cddfa3e1a1cdd5"}, + {file = "pydantic-1.10.12-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:a1fcb59f2f355ec350073af41d927bf83a63b50e640f4dbaa01053a28b7a7718"}, + {file = "pydantic-1.10.12-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:b7ccf02d7eb340b216ec33e53a3a629856afe1c6e0ef91d84a4e6f2fb2ca70fe"}, + {file = "pydantic-1.10.12-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8fb2aa3ab3728d950bcc885a2e9eff6c8fc40bc0b7bb434e555c215491bcf48b"}, + {file = "pydantic-1.10.12-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:771735dc43cf8383959dc9b90aa281f0b6092321ca98677c5fb6125a6f56d58d"}, + {file = "pydantic-1.10.12-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:ca48477862372ac3770969b9d75f1bf66131d386dba79506c46d75e6b48c1e09"}, + {file = "pydantic-1.10.12-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:a5e7add47a5b5a40c49b3036d464e3c7802f8ae0d1e66035ea16aa5b7a3923ed"}, + {file = "pydantic-1.10.12-cp310-cp310-win_amd64.whl", hash = "sha256:e4129b528c6baa99a429f97ce733fff478ec955513630e61b49804b6cf9b224a"}, + {file = "pydantic-1.10.12-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:b0d191db0f92dfcb1dec210ca244fdae5cbe918c6050b342d619c09d31eea0cc"}, + {file = "pydantic-1.10.12-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:795e34e6cc065f8f498c89b894a3c6da294a936ee71e644e4bd44de048af1405"}, + {file = "pydantic-1.10.12-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:69328e15cfda2c392da4e713443c7dbffa1505bc9d566e71e55abe14c97ddc62"}, + {file = "pydantic-1.10.12-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2031de0967c279df0d8a1c72b4ffc411ecd06bac607a212892757db7462fc494"}, + {file = "pydantic-1.10.12-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:ba5b2e6fe6ca2b7e013398bc7d7b170e21cce322d266ffcd57cca313e54fb246"}, + {file = "pydantic-1.10.12-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:2a7bac939fa326db1ab741c9d7f44c565a1d1e80908b3797f7f81a4f86bc8d33"}, + {file = "pydantic-1.10.12-cp311-cp311-win_amd64.whl", hash = "sha256:87afda5539d5140cb8ba9e8b8c8865cb5b1463924d38490d73d3ccfd80896b3f"}, + {file = "pydantic-1.10.12-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:549a8e3d81df0a85226963611950b12d2d334f214436a19537b2efed61b7639a"}, + {file = "pydantic-1.10.12-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:598da88dfa127b666852bef6d0d796573a8cf5009ffd62104094a4fe39599565"}, + {file = "pydantic-1.10.12-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ba5c4a8552bff16c61882db58544116d021d0b31ee7c66958d14cf386a5b5350"}, + {file = "pydantic-1.10.12-cp37-cp37m-musllinux_1_1_i686.whl", hash 
= "sha256:c79e6a11a07da7374f46970410b41d5e266f7f38f6a17a9c4823db80dadf4303"}, + {file = "pydantic-1.10.12-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:ab26038b8375581dc832a63c948f261ae0aa21f1d34c1293469f135fa92972a5"}, + {file = "pydantic-1.10.12-cp37-cp37m-win_amd64.whl", hash = "sha256:e0a16d274b588767602b7646fa05af2782576a6cf1022f4ba74cbb4db66f6ca8"}, + {file = "pydantic-1.10.12-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6a9dfa722316f4acf4460afdf5d41d5246a80e249c7ff475c43a3a1e9d75cf62"}, + {file = "pydantic-1.10.12-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:a73f489aebd0c2121ed974054cb2759af8a9f747de120acd2c3394cf84176ccb"}, + {file = "pydantic-1.10.12-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6b30bcb8cbfccfcf02acb8f1a261143fab622831d9c0989707e0e659f77a18e0"}, + {file = "pydantic-1.10.12-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2fcfb5296d7877af406ba1547dfde9943b1256d8928732267e2653c26938cd9c"}, + {file = "pydantic-1.10.12-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:2f9a6fab5f82ada41d56b0602606a5506aab165ca54e52bc4545028382ef1c5d"}, + {file = "pydantic-1.10.12-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:dea7adcc33d5d105896401a1f37d56b47d443a2b2605ff8a969a0ed5543f7e33"}, + {file = "pydantic-1.10.12-cp38-cp38-win_amd64.whl", hash = "sha256:1eb2085c13bce1612da8537b2d90f549c8cbb05c67e8f22854e201bde5d98a47"}, + {file = "pydantic-1.10.12-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:ef6c96b2baa2100ec91a4b428f80d8f28a3c9e53568219b6c298c1125572ebc6"}, + {file = "pydantic-1.10.12-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:6c076be61cd0177a8433c0adcb03475baf4ee91edf5a4e550161ad57fc90f523"}, + {file = "pydantic-1.10.12-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2d5a58feb9a39f481eda4d5ca220aa8b9d4f21a41274760b9bc66bfd72595b86"}, + {file = "pydantic-1.10.12-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e5f805d2d5d0a41633651a73fa4ecdd0b3d7a49de4ec3fadf062fe16501ddbf1"}, + {file = "pydantic-1.10.12-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:1289c180abd4bd4555bb927c42ee42abc3aee02b0fb2d1223fb7c6e5bef87dbe"}, + {file = "pydantic-1.10.12-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:5d1197e462e0364906cbc19681605cb7c036f2475c899b6f296104ad42b9f5fb"}, + {file = "pydantic-1.10.12-cp39-cp39-win_amd64.whl", hash = "sha256:fdbdd1d630195689f325c9ef1a12900524dceb503b00a987663ff4f58669b93d"}, + {file = "pydantic-1.10.12-py3-none-any.whl", hash = "sha256:b749a43aa51e32839c9d71dc67eb1e4221bb04af1033a32e3923d46f9effa942"}, + {file = "pydantic-1.10.12.tar.gz", hash = "sha256:0fe8a415cea8f340e7a9af9c54fc71a649b43e8ca3cc732986116b3cb135d303"}, ] pygments = [ {file = "Pygments-2.15.1-py3-none-any.whl", hash = "sha256:db2db3deb4b4179f399a09054b023b6a586b76499d36965813c71aa8ed7b5fd1"}, @@ -1167,6 +1288,103 @@ pytest = [ {file = "pytest-7.4.0-py3-none-any.whl", hash = "sha256:78bf16451a2eb8c7a2ea98e32dc119fd2aa758f1d5d66dbf0a59d69a3969df32"}, {file = "pytest-7.4.0.tar.gz", hash = "sha256:b4bf8c45bd59934ed84001ad51e11b4ee40d40a1229d2c79f9c592b0a3f6bd8a"}, ] +pyyaml = [ + {file = "PyYAML-6.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d858aa552c999bc8a8d57426ed01e40bef403cd8ccdd0fc5f6f04a00414cac2a"}, + {file = "PyYAML-6.0.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:fd66fc5d0da6d9815ba2cebeb4205f95818ff4b79c3ebe268e75d961704af52f"}, + {file = 
"PyYAML-6.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:69b023b2b4daa7548bcfbd4aa3da05b3a74b772db9e23b982788168117739938"}, + {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:81e0b275a9ecc9c0c0c07b4b90ba548307583c125f54d5b6946cfee6360c733d"}, + {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ba336e390cd8e4d1739f42dfe9bb83a3cc2e80f567d8805e11b46f4a943f5515"}, + {file = "PyYAML-6.0.1-cp310-cp310-win32.whl", hash = "sha256:bd4af7373a854424dabd882decdc5579653d7868b8fb26dc7d0e99f823aa5924"}, + {file = "PyYAML-6.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:fd1592b3fdf65fff2ad0004b5e363300ef59ced41c2e6b3a99d4089fa8c5435d"}, + {file = "PyYAML-6.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6965a7bc3cf88e5a1c3bd2e0b5c22f8d677dc88a455344035f03399034eb3007"}, + {file = "PyYAML-6.0.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f003ed9ad21d6a4713f0a9b5a7a0a79e08dd0f221aff4525a2be4c346ee60aab"}, + {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:42f8152b8dbc4fe7d96729ec2b99c7097d656dc1213a3229ca5383f973a5ed6d"}, + {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:062582fca9fabdd2c8b54a3ef1c978d786e0f6b3a1510e0ac93ef59e0ddae2bc"}, + {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d2b04aac4d386b172d5b9692e2d2da8de7bfb6c387fa4f801fbf6fb2e6ba4673"}, + {file = "PyYAML-6.0.1-cp311-cp311-win32.whl", hash = "sha256:1635fd110e8d85d55237ab316b5b011de701ea0f29d07611174a1b42f1444741"}, + {file = "PyYAML-6.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:bf07ee2fef7014951eeb99f56f39c9bb4af143d8aa3c21b1677805985307da34"}, + {file = "PyYAML-6.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50550eb667afee136e9a77d6dc71ae76a44df8b3e51e41b77f6de2932bfe0f47"}, + {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1fe35611261b29bd1de0070f0b2f47cb6ff71fa6595c077e42bd0c419fa27b98"}, + {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:704219a11b772aea0d8ecd7058d0082713c3562b4e271b849ad7dc4a5c90c13c"}, + {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:afd7e57eddb1a54f0f1a974bc4391af8bcce0b444685d936840f125cf046d5bd"}, + {file = "PyYAML-6.0.1-cp36-cp36m-win32.whl", hash = "sha256:fca0e3a251908a499833aa292323f32437106001d436eca0e6e7833256674585"}, + {file = "PyYAML-6.0.1-cp36-cp36m-win_amd64.whl", hash = "sha256:f22ac1c3cac4dbc50079e965eba2c1058622631e526bd9afd45fedd49ba781fa"}, + {file = "PyYAML-6.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b1275ad35a5d18c62a7220633c913e1b42d44b46ee12554e5fd39c70a243d6a3"}, + {file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:18aeb1bf9a78867dc38b259769503436b7c72f7a1f1f4c93ff9a17de54319b27"}, + {file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:596106435fa6ad000c2991a98fa58eeb8656ef2325d7e158344fb33864ed87e3"}, + {file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:baa90d3f661d43131ca170712d903e6295d1f7a0f595074f151c0aed377c9b9c"}, + {file = "PyYAML-6.0.1-cp37-cp37m-win32.whl", hash = "sha256:9046c58c4395dff28dd494285c82ba00b546adfc7ef001486fbf0324bc174fba"}, + {file = "PyYAML-6.0.1-cp37-cp37m-win_amd64.whl", hash 
= "sha256:4fb147e7a67ef577a588a0e2c17b6db51dda102c71de36f8549b6816a96e1867"}, + {file = "PyYAML-6.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1d4c7e777c441b20e32f52bd377e0c409713e8bb1386e1099c2415f26e479595"}, + {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a0cd17c15d3bb3fa06978b4e8958dcdc6e0174ccea823003a106c7d4d7899ac5"}, + {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28c119d996beec18c05208a8bd78cbe4007878c6dd15091efb73a30e90539696"}, + {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e07cbde391ba96ab58e532ff4803f79c4129397514e1413a7dc761ccd755735"}, + {file = "PyYAML-6.0.1-cp38-cp38-win32.whl", hash = "sha256:184c5108a2aca3c5b3d3bf9395d50893a7ab82a38004c8f61c258d4428e80206"}, + {file = "PyYAML-6.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:1e2722cc9fbb45d9b87631ac70924c11d3a401b2d7f410cc0e3bbf249f2dca62"}, + {file = "PyYAML-6.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9eb6caa9a297fc2c2fb8862bc5370d0303ddba53ba97e71f08023b6cd73d16a8"}, + {file = "PyYAML-6.0.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:c8098ddcc2a85b61647b2590f825f3db38891662cfc2fc776415143f599bb859"}, + {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5773183b6446b2c99bb77e77595dd486303b4faab2b086e7b17bc6bef28865f6"}, + {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b786eecbdf8499b9ca1d697215862083bd6d2a99965554781d0d8d1ad31e13a0"}, + {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc1bf2925a1ecd43da378f4db9e4f799775d6367bdb94671027b73b393a7c42c"}, + {file = "PyYAML-6.0.1-cp39-cp39-win32.whl", hash = "sha256:faca3bdcf85b2fc05d06ff3fbc1f83e1391b3e724afa3feba7d13eeab355484c"}, + {file = "PyYAML-6.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:510c9deebc5c0225e8c96813043e62b680ba2f9c50a08d3724c7f28a747d1486"}, + {file = "PyYAML-6.0.1.tar.gz", hash = "sha256:bfdf460b1736c775f2ba9f6a92bca30bc2095067b8a9d77876d1fad6cc3b4a43"}, +] +replicate = [ + {file = "replicate-0.11.0-py3-none-any.whl", hash = "sha256:fbb8815068864dc822cd4fa7b6103d6f4089d6ef122abd6c3441ca0f0f110c46"}, + {file = "replicate-0.11.0.tar.gz", hash = "sha256:4d54b5838c1552a6f76cc37c3af8d9a7998105382082d672acad31636ad443b5"}, +] +requests = [ + {file = "requests-2.31.0-py3-none-any.whl", hash = "sha256:58cd2187c01e70e6e26505bca751777aa9f2ee0b7f4300988b709f44e013003f"}, + {file = "requests-2.31.0.tar.gz", hash = "sha256:942c5a758f98d790eaed1a29cb6eefc7ffb0d1cf7af05c3d2791656dbd6ad1e1"}, +] +sentencepiece = [ + {file = "sentencepiece-0.1.99-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:0eb528e70571b7c02723e5804322469b82fe7ea418c96051d0286c0fa028db73"}, + {file = "sentencepiece-0.1.99-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:77d7fafb2c4e4659cbdf303929503f37a26eabc4ff31d3a79bf1c5a1b338caa7"}, + {file = "sentencepiece-0.1.99-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:be9cf5b9e404c245aeb3d3723c737ba7a8f5d4ba262ef233a431fa6c45f732a0"}, + {file = "sentencepiece-0.1.99-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:baed1a26464998f9710d20e52607c29ffd4293e7c71c6a1f83f51ad0911ec12c"}, + {file = "sentencepiece-0.1.99-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9832f08bb372d4c8b567612f8eab9e36e268dff645f1c28f9f8e851be705f6d1"}, + {file = 
"sentencepiece-0.1.99-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:019e7535108e309dae2b253a75834fc3128240aa87c00eb80732078cdc182588"}, + {file = "sentencepiece-0.1.99-cp310-cp310-win32.whl", hash = "sha256:fa16a830416bb823fa2a52cbdd474d1f7f3bba527fd2304fb4b140dad31bb9bc"}, + {file = "sentencepiece-0.1.99-cp310-cp310-win_amd64.whl", hash = "sha256:14b0eccb7b641d4591c3e12ae44cab537d68352e4d3b6424944f0c447d2348d5"}, + {file = "sentencepiece-0.1.99-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:6d3c56f24183a1e8bd61043ff2c58dfecdc68a5dd8955dc13bab83afd5f76b81"}, + {file = "sentencepiece-0.1.99-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ed6ea1819fd612c989999e44a51bf556d0ef6abfb553080b9be3d347e18bcfb7"}, + {file = "sentencepiece-0.1.99-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:a2a0260cd1fb7bd8b4d4f39dc2444a8d5fd4e0a0c4d5c899810ef1abf99b2d45"}, + {file = "sentencepiece-0.1.99-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8a1abff4d1ff81c77cac3cc6fefa34fa4b8b371e5ee51cb7e8d1ebc996d05983"}, + {file = "sentencepiece-0.1.99-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:004e6a621d4bc88978eecb6ea7959264239a17b70f2cbc348033d8195c9808ec"}, + {file = "sentencepiece-0.1.99-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:db361e03342c41680afae5807590bc88aa0e17cfd1a42696a160e4005fcda03b"}, + {file = "sentencepiece-0.1.99-cp311-cp311-win32.whl", hash = "sha256:2d95e19168875b70df62916eb55428a0cbcb834ac51d5a7e664eda74def9e1e0"}, + {file = "sentencepiece-0.1.99-cp311-cp311-win_amd64.whl", hash = "sha256:f90d73a6f81248a909f55d8e6ef56fec32d559e1e9af045f0b0322637cb8e5c7"}, + {file = "sentencepiece-0.1.99-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:62e24c81e74bd87a6e0d63c51beb6527e4c0add67e1a17bac18bcd2076afcfeb"}, + {file = "sentencepiece-0.1.99-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:57efcc2d51caff20d9573567d9fd3f854d9efe613ed58a439c78c9f93101384a"}, + {file = "sentencepiece-0.1.99-cp36-cp36m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6a904c46197993bd1e95b93a6e373dca2f170379d64441041e2e628ad4afb16f"}, + {file = "sentencepiece-0.1.99-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d89adf59854741c0d465f0e1525b388c0d174f611cc04af54153c5c4f36088c4"}, + {file = "sentencepiece-0.1.99-cp36-cp36m-win32.whl", hash = "sha256:47c378146928690d1bc106fdf0da768cebd03b65dd8405aa3dd88f9c81e35dba"}, + {file = "sentencepiece-0.1.99-cp36-cp36m-win_amd64.whl", hash = "sha256:9ba142e7a90dd6d823c44f9870abdad45e6c63958eb60fe44cca6828d3b69da2"}, + {file = "sentencepiece-0.1.99-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b7b1a9ae4d7c6f1f867e63370cca25cc17b6f4886729595b885ee07a58d3cec3"}, + {file = "sentencepiece-0.1.99-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d0f644c9d4d35c096a538507b2163e6191512460035bf51358794a78515b74f7"}, + {file = "sentencepiece-0.1.99-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c8843d23a0f686d85e569bd6dcd0dd0e0cbc03731e63497ca6d5bacd18df8b85"}, + {file = "sentencepiece-0.1.99-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:33e6f690a1caebb4867a2e367afa1918ad35be257ecdb3455d2bbd787936f155"}, + {file = "sentencepiece-0.1.99-cp37-cp37m-win32.whl", hash = "sha256:8a321866c2f85da7beac74a824b4ad6ddc2a4c9bccd9382529506d48f744a12c"}, + {file = "sentencepiece-0.1.99-cp37-cp37m-win_amd64.whl", hash = 
"sha256:c42f753bcfb7661c122a15b20be7f684b61fc8592c89c870adf52382ea72262d"}, + {file = "sentencepiece-0.1.99-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:85b476406da69c70586f0bb682fcca4c9b40e5059814f2db92303ea4585c650c"}, + {file = "sentencepiece-0.1.99-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:cfbcfe13c69d3f87b7fcd5da168df7290a6d006329be71f90ba4f56bc77f8561"}, + {file = "sentencepiece-0.1.99-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:445b0ec381af1cd4eef95243e7180c63d9c384443c16c4c47a28196bd1cda937"}, + {file = "sentencepiece-0.1.99-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c6890ea0f2b4703f62d0bf27932e35808b1f679bdb05c7eeb3812b935ba02001"}, + {file = "sentencepiece-0.1.99-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fb71af492b0eefbf9f2501bec97bcd043b6812ab000d119eaf4bd33f9e283d03"}, + {file = "sentencepiece-0.1.99-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:27b866b5bd3ddd54166bbcbf5c8d7dd2e0b397fac8537991c7f544220b1f67bc"}, + {file = "sentencepiece-0.1.99-cp38-cp38-win32.whl", hash = "sha256:b133e8a499eac49c581c3c76e9bdd08c338cc1939e441fee6f92c0ccb5f1f8be"}, + {file = "sentencepiece-0.1.99-cp38-cp38-win_amd64.whl", hash = "sha256:0eaf3591dd0690a87f44f4df129cf8d05d8a4029b5b6709b489b8e27f9a9bcff"}, + {file = "sentencepiece-0.1.99-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:38efeda9bbfb55052d482a009c6a37e52f42ebffcea9d3a98a61de7aee356a28"}, + {file = "sentencepiece-0.1.99-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6c030b081dc1e1bcc9fadc314b19b740715d3d566ad73a482da20d7d46fd444c"}, + {file = "sentencepiece-0.1.99-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:84dbe53e02e4f8a2e45d2ac3e430d5c83182142658e25edd76539b7648928727"}, + {file = "sentencepiece-0.1.99-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0b0f55d0a0ee1719b4b04221fe0c9f0c3461dc3dabd77a035fa2f4788eb3ef9a"}, + {file = "sentencepiece-0.1.99-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:18e800f206cd235dc27dc749299e05853a4e4332e8d3dfd81bf13d0e5b9007d9"}, + {file = "sentencepiece-0.1.99-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2ae1c40cda8f9d5b0423cfa98542735c0235e7597d79caf318855cdf971b2280"}, + {file = "sentencepiece-0.1.99-cp39-cp39-win32.whl", hash = "sha256:c84ce33af12ca222d14a1cdd37bd76a69401e32bc68fe61c67ef6b59402f4ab8"}, + {file = "sentencepiece-0.1.99-cp39-cp39-win_amd64.whl", hash = "sha256:350e5c74d739973f1c9643edb80f7cc904dc948578bcb1d43c6f2b173e5d18dd"}, + {file = "sentencepiece-0.1.99.tar.gz", hash = "sha256:189c48f5cb2949288f97ccdb97f0473098d9c3dcf5a3d99d4eabe719ec27297f"}, +] six = [ {file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"}, {file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"}, @@ -1183,10 +1401,18 @@ starlette = [ {file = "starlette-0.27.0-py3-none-any.whl", hash = "sha256:918416370e846586541235ccd38a474c08b80443ed31c578a418e2209b3eef91"}, {file = "starlette-0.27.0.tar.gz", hash = "sha256:6a6b0d042acb8d469a01eba54e9cda6cbd24ac602c4cd016723117d6a7e73b75"}, ] +text-generation = [ + {file = "text-generation-0.6.0.tar.gz", hash = "sha256:48560e7a67b9a88b38335382d357f66e23b5a75f53971ccd436fc6f696a00815"}, + {file = "text_generation-0.6.0-py3-none-any.whl", hash = "sha256:42ae7f7c9ff11f3a6c9d210f94fe708fe693eede79c6776da727456da1606ef9"}, +] tomli = [ {file = 
"tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"}, {file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"}, ] +tqdm = [ + {file = "tqdm-4.66.1-py3-none-any.whl", hash = "sha256:d302b3c5b53d47bce91fea46679d9c3c6508cf6332229aa1e7d8653723793386"}, + {file = "tqdm-4.66.1.tar.gz", hash = "sha256:d88e651f9db8d8551a62556d3cff9e3034274ca5d66e93197cf2490e2dcb69c7"}, +] traitlets = [ {file = "traitlets-5.9.0-py3-none-any.whl", hash = "sha256:9e6ec080259b9a5940c797d58b613b5e31441c2257b87c2e795c5228ae80d2d8"}, {file = "traitlets-5.9.0.tar.gz", hash = "sha256:f6cde21a9c68cf756af02035f72d5a723bf607e862e7be33ece505abf4a3bad9"}, @@ -1195,6 +1421,10 @@ typing-extensions = [ {file = "typing_extensions-4.7.1-py3-none-any.whl", hash = "sha256:440d5dd3af93b060174bf433bccd69b0babc3b15b1a8dca43789fd7f61514b36"}, {file = "typing_extensions-4.7.1.tar.gz", hash = "sha256:b75ddc264f0ba5615db7ba217daeb99701ad295353c45f9e95963337ceeeffb2"}, ] +urllib3 = [ + {file = "urllib3-2.0.4-py3-none-any.whl", hash = "sha256:de7df1803967d2c2a98e4b11bb7d6bd9210474c46e8a0401514e3a42a75ebde4"}, + {file = "urllib3-2.0.4.tar.gz", hash = "sha256:8d22f86aae8ef5e410d4f539fde9ce6b2113a001bb4d189e0aed70642d602b11"}, +] uvicorn = [ {file = "uvicorn-0.23.1-py3-none-any.whl", hash = "sha256:1d55d46b83ee4ce82b4e82f621f2050adb3eb7b5481c13f9af1744951cae2f1f"}, {file = "uvicorn-0.23.1.tar.gz", hash = "sha256:da9b0c8443b2d7ee9db00a345f1eee6db7317432c9d4400f5049cc8d358383be"}, diff --git a/interpreter/pyproject.toml b/services/pyproject.toml similarity index 63% rename from interpreter/pyproject.toml rename to services/pyproject.toml index fcd4b78..d4d4cbf 100644 --- a/interpreter/pyproject.toml +++ b/services/pyproject.toml @@ -1,16 +1,24 @@ [tool.poetry] -name = "interpreter" +name = "services" version = "0.0.0" description = "" authors = ["Silvan Melchior"] license = "MIT" -packages = [{include = "interpreter"}] +packages = [ + {include = "interpreter"}, + {include = "llm"}, + {include = "utils"}, +] [tool.poetry.dependencies] -python = ">=3.9" +python = ">=3.9, <4.0" fastapi = "^0.100.0" uvicorn = "^0.23.0" websockets = "^11.0.3" +openai = "^0.27.8" +replicate = "^0.11.0" +text-generation = "^0.6.0" +sentencepiece = "^0.1.99" [tool.poetry.group.dev.dependencies] pytest = "^7.4.0" diff --git a/interpreter/tests/test_interpreter.py b/services/tests/test_interpreter.py similarity index 100% rename from interpreter/tests/test_interpreter.py rename to services/tests/test_interpreter.py diff --git a/services/utils/__init__.py b/services/utils/__init__.py new file mode 100644 index 0000000..a4594a3 --- /dev/null +++ b/services/utils/__init__.py @@ -0,0 +1,2 @@ +from .app import get_app +from .env_var import get_env_var diff --git a/services/utils/app.py b/services/utils/app.py new file mode 100644 index 0000000..8e6460c --- /dev/null +++ b/services/utils/app.py @@ -0,0 +1,14 @@ +from fastapi import FastAPI +from fastapi.middleware.cors import CORSMiddleware + + +def get_app() -> FastAPI: + app = FastAPI() + app.add_middleware( + CORSMiddleware, + allow_origins=["*"], + allow_credentials=True, + allow_methods=["*"], + allow_headers=["*"], + ) + return app diff --git a/services/utils/env_var.py b/services/utils/env_var.py new file mode 100644 index 0000000..58b94f6 --- /dev/null +++ b/services/utils/env_var.py @@ -0,0 +1,13 @@ +import os +import sys + + +def get_env_var(key: str, default: str = None) -> str: + if key in os.environ: + 
return os.environ[key]
+
+    if default is not None:
+        return default
+
+    print(f"ERROR: Missing environment variable {key}, exiting...", file=sys.stderr)
+    sys.exit(1)
diff --git a/ui/app/api/chat/route.ts b/ui/app/api/chat/route.ts
deleted file mode 100644
index 5aa8767..0000000
--- a/ui/app/api/chat/route.ts
+++ /dev/null
@@ -1,29 +0,0 @@
-import { NextRequest, NextResponse } from "next/server";
-import { LLMException, Message } from "@/llm/base";
-import { chat as gptChat } from "@/llm/gpt";
-
-const DEFAULT_MODEL = "gpt:gpt-4";
-
-function getLLM() {
-  const setting = process.env.LLM ?? DEFAULT_MODEL;
-  if (setting.startsWith("gpt:")) {
-    return gptChat(setting.slice(4));
-  }
-  throw new LLMException("Invalid LLM setting");
-}
-
-export async function POST(
-  request: NextRequest,
-): Promise<NextResponse<Message> | Response> {
-  try {
-    const history = (await request.json()) as Message[];
-    const chat = getLLM();
-    const response = await chat(history);
-    return NextResponse.json(response);
-  } catch (e) {
-    if (e instanceof LLMException) {
-      return new Response(e.message, { status: 500 });
-    }
-    throw e;
-  }
-}
diff --git a/ui/app/helper/scroller.tsx b/ui/app/helper/scroller.tsx
new file mode 100644
index 0000000..eae5676
--- /dev/null
+++ b/ui/app/helper/scroller.tsx
@@ -0,0 +1,35 @@
+import React from "react";
+
+class Throttler {
+  private actionQueued = false;
+  constructor(private actionHandler: () => void) {}
+
+  actionRun = () => {
+    this.actionQueued = false;
+    this.actionHandler();
+  };
+
+  action = () => {
+    if (!this.actionQueued) {
+      this.actionQueued = true;
+      setTimeout(() => this.actionRun(), 200);
+    }
+  };
+}
+
+export default function useScroller(state: any) {
+  const scrollRef = React.useRef<HTMLDivElement | null>(null);
+
+  const throttler = React.useRef(
+    new Throttler(() => {
+      const div = scrollRef.current;
+      if (div !== null) {
+        div.scrollTo({ top: div.scrollHeight, behavior: "smooth" });
+      }
+    }),
+  );
+
+  React.useEffect(() => throttler.current.action(), [state]);
+
+  return scrollRef;
+}
diff --git a/ui/app/page.tsx b/ui/app/page.tsx
index 658408f..540bd85 100644
--- a/ui/app/page.tsx
+++ b/ui/app/page.tsx
@@ -1,18 +1,8 @@
 import React from "react";
-import Session_manager from "@/app/session/session_manager";
+import SessionManager from "@/app/session/session_manager";
 import path from "path";
 import * as fs from "fs";
 
-export const dynamic = "force-dynamic";
-
-function getInterpreterUrl() {
-  const interpreterUrl = process.env.INTERPRETER_URL;
-  if (interpreterUrl === undefined) {
-    throw new Error("INTERPRETER_URL is undefined");
-  }
-  return interpreterUrl;
-}
-
 function getVersion(): Promise<string> {
   const versionDir = path.dirname(
     path.dirname(path.dirname(path.dirname(__dirname))),
@@ -30,7 +20,5 @@ function getVersion(): Promise<string> {
 }
 
 export default async function Home() {
-  const interpreterUrl = getInterpreterUrl();
-
   const version = await getVersion();
-  return <Session_manager interpreterUrl={interpreterUrl} version={version} />;
+  return <SessionManager version={version} />;
 }
diff --git a/ui/app/session/approval/approver.tsx b/ui/app/session/approval/approver.tsx
index e65539c..ab2ad9c 100644
--- a/ui/app/session/approval/approver.tsx
+++ b/ui/app/session/approval/approver.tsx
@@ -1,10 +1,9 @@
 import React from "react";
 
 export class Approver {
-  private resolveHandler: ((value: void) => void) | null = null;
+  private resolveHandler: ((result: boolean) => void) | null = null;
 
   constructor(
-    private readonly setContent: (content: string) => void,
     private autoApprove: boolean,
     private readonly _setAutoApprove: (autoApprove: boolean) => void,
    private readonly setAskApprove: (askApprove: boolean) => void,
@@ -13,24 +12,21 @@ export class Approver {
   setAutoApprove = (autoApprove: boolean) => {
     this.autoApprove = autoApprove;
     this._setAutoApprove(autoApprove);
-    if (this.resolveHandler !== null) {
-      this.approve();
-    }
+    this.approve(true);
   };
 
-  approve = () => {
+  approve = (approval: boolean) => {
     if (this.resolveHandler !== null) {
       this.setAskApprove(false);
-      this.resolveHandler();
+      this.resolveHandler(approval);
       this.resolveHandler = null;
     }
   };
 
-  getApproval = (content: string, tmpAutoApprove: boolean = false) => {
-    this.setContent(content);
-    return new Promise<void>((resolve, reject) => {
-      if (this.autoApprove || tmpAutoApprove) {
-        resolve();
+  getApproval = () => {
+    return new Promise<boolean>((resolve, reject) => {
+      if (this.autoApprove) {
+        resolve(true);
       } else {
         this.resolveHandler = resolve;
         this.setAskApprove(true);
@@ -39,12 +35,11 @@ export class Approver {
   };
 }
 
-export function useApprover(): [Approver, string | null, boolean, boolean] {
-  const [content, setContent] = React.useState<string | null>(null);
+export function useApprover(): [Approver, boolean, boolean] {
   const [askApprove, setAskApprove] = React.useState(false);
   const [autoApprove, setAutoApprove] = React.useState(false);
   const approverRef = React.useRef(
-    new Approver(setContent, autoApprove, setAutoApprove, setAskApprove),
+    new Approver(autoApprove, setAutoApprove, setAskApprove),
   );
-  return [approverRef.current, content, askApprove, autoApprove];
+  return [approverRef.current, askApprove, autoApprove];
 }
diff --git a/ui/app/session/approval/interpreter_io.tsx b/ui/app/session/approval/interpreter_io.tsx
index 5ca2b41..f58929d 100644
--- a/ui/app/session/approval/interpreter_io.tsx
+++ b/ui/app/session/approval/interpreter_io.tsx
@@ -1,6 +1,9 @@
 import React from "react";
+import SyntaxHighlighter from "react-syntax-highlighter";
+import { docco } from "react-syntax-highlighter/dist/esm/styles/hljs";
 import { Approver } from "@/app/session/approval/approver";
 import Running from "@/app/session/approval/running";
+import useScroller from "@/app/helper/scroller";
 
 export default function InterpreterIO({
   title,
@@ -10,6 +13,7 @@ export default function InterpreterIO({
   autoApprove,
   disabled,
   busy,
+  language,
 }: {
   title: string;
   content: string | null;
@@ -18,18 +22,34 @@ export default function InterpreterIO({
   autoApprove: boolean;
   disabled: boolean;
   busy: boolean;
+  language: string;
 }) {
+  const scrollRef = useScroller(content);
+
   return (
{title}
- {busy ? : content} + {busy ? ( +
+ +
+ ) : ( + + {content ?? ""} + + )}
@@ -42,15 +62,20 @@ export default function InterpreterIO({ />{" "} auto-approve
-
- -
+ +
); diff --git a/ui/app/session/chat/brand.tsx b/ui/app/session/chat/brand.tsx index 092bf4f..a832e32 100644 --- a/ui/app/session/chat/brand.tsx +++ b/ui/app/session/chat/brand.tsx @@ -1,11 +1,8 @@ -import Image from "next/image"; -import iconColor from "./icon_color.png"; - export default function Brand() { return (
- Brand + Brand
Incognito Pilot
diff --git a/ui/app/session/chat/chat_history.tsx b/ui/app/session/chat/chat_history.tsx
index b7d5b58..32ef434 100644
--- a/ui/app/session/chat/chat_history.tsx
+++ b/ui/app/session/chat/chat_history.tsx
@@ -1,53 +1,56 @@
-import { Message } from "@/llm/base";
+import { Message } from "@/app/session/communication/message";
 import { TbUser } from "react-icons/tb";
-import Image from "next/image";
-import robotIcon from "../../icon.png";
 import React from "react";
 
-export default function ChatHistory({ history }: { history: Message[] }) {
-  const bottomRef = React.useRef<HTMLDivElement | null>(null);
-
-  React.useEffect(() => {
-    setTimeout(() => {
-      bottomRef.current?.scrollIntoView({ behavior: "smooth" });
-    }, 100);
-  }, [history]);
+export default function ChatHistory({
+  history,
+  thinking,
+}: {
+  history: Message[];
+  thinking: boolean;
+}) {
+  const historyFiltered = history.filter(
+    (msg, idx) =>
+      msg.role === "user" ||
+      (msg.role === "model" &&
+        (msg.text !== undefined || (thinking && idx === history.length - 1))),
+  );
 
   return (
- {history - .filter( - (msg) => - msg.role === "user" || - (msg.role === "model" && msg.text !== undefined), - ) - .map((msg, idx) => ( -
- {msg.role === "model" ? ( -
- robot -
- ) : ( -
- )} -
- {msg.text} + {historyFiltered.map((msg, idx) => ( +
+ {msg.role === "model" ? ( +
+ robot + {thinking && idx === historyFiltered.length - 1 && ( + thinking + )}
- {msg.role === "user" ? ( -
- -
- ) : ( -
- )} + ) : ( +
+ )} +
+ {msg.text === "" || msg.text === undefined ? "..." : msg.text}
- ))} -
+ {msg.role === "user" ? ( +
+ +
+ ) : ( +
+ )} +
+ ))}
); } diff --git a/ui/app/session/chat/chat_input.tsx b/ui/app/session/chat/chat_input.tsx index f53c85e..18de2fa 100644 --- a/ui/app/session/chat/chat_input.tsx +++ b/ui/app/session/chat/chat_input.tsx @@ -1,17 +1,13 @@ import React from "react"; import { BiSend } from "react-icons/bi"; -import thinkingImg from "./thinking.gif"; -import Image from "next/image"; export default function ChatInput({ innerRef, disabled, - llmAnimation, onMessage, }: { innerRef: React.MutableRefObject; disabled: boolean; - llmAnimation: boolean; onMessage: (message: string) => void; }) { const [message, setMessage] = React.useState(""); @@ -36,8 +32,8 @@ export default function ChatInput({ }; return ( -
-
+
+