Merge pull request #33 from silvanmelchior/dev
Dev
silvanmelchior authored Aug 26, 2023
2 parents 9cd674a + 8aadf6e commit 7694dad
Showing 50 changed files with 279 additions and 134 deletions.
20 changes: 7 additions & 13 deletions CONTRIBUTING.md
Original file line number Diff line number Diff line change
@@ -41,19 +41,14 @@ Now open a terminal in the *services* folder and run the following:
```shell
poetry install
poetry shell
export IPYTHON_PATH=/home/user/venv_interpreter/bin/ipython
export WORKING_DIRECTORY=/home/user/ipilot
export ALLOWED_HOSTS="localhost:3000"
uvicorn main_interpreter:app --reload --port 8000
```

Open a second terminal in the *services* folder and run the following:

```shell
poetry shell
export OPENAI_API_KEY=sk-your-api-key
export IPYTHON_PATH="/home/user/venv_interpreter/bin/ipython"
export WORKING_DIRECTORY="/home/user/ipilot"
export ALLOWED_HOSTS="localhost:3000"
uvicorn main_llm:app --reload --port 8001
export ENABLE_CORS="TRUE"
export OPENAI_API_KEY=sk-your-api-key

uvicorn main:app --reload
```

If you want to use something other than OpenAI, adjust the env-variables as explained in the [Readme](/README.md).
@@ -64,8 +59,7 @@ Open a terminal in the *ui* folder and run the following:

```shell
npm install
export NEXT_PUBLIC_INTERPRETER_URL=localhost:8000
export NEXT_PUBLIC_LLM_URL=localhost:8001
export NEXT_PUBLIC_SERVICES_URL="http://localhost:8000"
npm run dev
```

17 changes: 16 additions & 1 deletion README.md
@@ -56,7 +56,8 @@ docker run -i -t \
silvanmelchior/incognito-pilot:latest-slim
```

You can now visit http://localhost:3030 and should see the **Incognito Pilot** interface.
In the console, you should now see a URL.
Open it, and you should see the **Incognito Pilot** interface.

It's also possible to run **Incognito Pilot** with OpenAI's free trial credits, without adding a credit card.
At the moment, this does not include GPT-4, however, so see below for how to change the model to GPT-3.5.
@@ -113,6 +114,19 @@ docker run -i -t \
silvanmelchior/incognito-pilot
```

### Authentication

By default, the authentication token, which is part of the URL you open, is randomly generated at startup.
This means that whenever you restart the container, you need to copy the URL again.
To prevent this, you can pin the token to a fixed value by adding the following to the docker run command:

```shell
-e AUTH_TOKEN="some-secret-token"
```

Once you have opened the URL with the new token, the browser will remember it.
From then on, you can access **Incognito Pilot** by simply opening http://localhost:3030, without adding the token to the URL.

### Timeout

By default, the Python interpreter stops after 30 seconds.
@@ -132,6 +146,7 @@ To automatically start **Incognito Pilot** with docker / at startup, remove the
```

Together with a bookmark of the UI URL, you'll have **Incognito Pilot** at your fingertips whenever you need it.
Alternatively, you can use docker-compose.

## :toolbox: Own dependencies

2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
2.0.2
2.1.0
8 changes: 2 additions & 6 deletions docker/nginx.conf
@@ -21,13 +21,9 @@ http {
root /opt/app/ui;
index index.html index.htm index.nginx-debian.html;

location /api/llm {
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_pass http://localhost:8081;
}
add_header X-Frame-Options "DENY";

location /api/interpreter {
location /api {
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_pass http://localhost:8080;
3 changes: 1 addition & 2 deletions docker/start.sh
@@ -1,6 +1,5 @@
#!/bin/bash

/opt/app/start_interpreter.sh &
/opt/app/start_llm.sh &
/opt/app/start_services.sh &

nginx -g "daemon off;"
7 changes: 0 additions & 7 deletions docker/start_llm.sh

This file was deleted.

2 changes: 1 addition & 1 deletion docker/start_interpreter.sh → docker/start_services.sh
@@ -8,4 +8,4 @@ mkdir -p /mnt/data
export WORKING_DIRECTORY=/mnt/data
export IPYTHON_PATH=/opt/app/venv_interpreter/bin/ipython

uvicorn main_interpreter:app --host 0.0.0.0 --port 8080
uvicorn main:app --host 0.0.0.0 --port 8080
5 changes: 3 additions & 2 deletions docs/INSTALLATION_AZURE.md
@@ -11,7 +11,7 @@ Follow these steps:
3. Login to Azure portal and create an [Azure OpenAI Service](https://azure.microsoft.com/en-us/products/ai-services/openai-service-b).
4. You will see the access key and endpoint, which we will use later.
5. Open Azure OpenAI Studio and deploy a model.
6Now, just run the following command (replace your working directory, model-name and API information):
6. Now, just run the following command (replace your working directory, model-name and API information):

```shell
docker run -i -t \
@@ -24,7 +24,8 @@ docker run -i -t \
silvanmelchior/incognito-pilot:latest-slim
```

You can now visit http://localhost:3030 and should see the **Incognito Pilot** interface.
In the console, you should now see a URL.
Open it, and you should see the **Incognito Pilot** interface.

Make sure you have access to a model that is capable of function calling; otherwise, you will get an error similar to "unknown argument 'function'".

8 changes: 6 additions & 2 deletions docs/INSTALLATION_LLAMA.md
@@ -38,7 +38,9 @@ docker run -i -t \

You can of course also choose a [different model](https://replicate.com/blog/all-the-llamas), but the smaller ones are much less suited for this task.

Now visit http://localhost:3030 and should see the **Incognito Pilot** interface.
In the console, you should now see a URL.
Open it, and you should see the **Incognito Pilot** interface.

Before you continue, remember:
- Everything you type or every code result you approve is sent to the Replicate API
- Your data stays and is processed locally
@@ -99,7 +101,9 @@ docker run -i -t \
silvanmelchior/incognito-pilot:latest-slim
```

Visit http://localhost:3030 and should see the **Incognito Pilot** interface.
In the console, you should now see a URL.
Open it, and you should see the **Incognito Pilot** interface.

Congrats! You have a fully local setup, everything is running on your own system :partying_face:.

## :rocket: Getting started (Llama 2)
22 changes: 22 additions & 0 deletions services/main.py
@@ -0,0 +1,22 @@
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

from services.auth import auth_router, welcome_lifespan
from services.interpreter import interpreter_router
from services.llm.service import llm_router
from services.utils import get_env_var


app = FastAPI(lifespan=welcome_lifespan)
if get_env_var("ENABLE_CORS", "FALSE") == "TRUE":
    app.add_middleware(
        CORSMiddleware,
        allow_origins=["*"],
        allow_credentials=True,
        allow_methods=["*"],
        allow_headers=["*"],
    )

app.include_router(auth_router, prefix="/api/auth")
app.include_router(interpreter_router, prefix="/api/interpreter")
app.include_router(llm_router, prefix="/api/llm")
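The new `services/main.py` relies on `get_env_var` from `services.utils`, whose implementation is not part of this diff. From its call sites (`get_env_var("ENABLE_CORS", "FALSE")`, `get_env_var("INTERPRETER_TIMEOUT", "30")`), it presumably reads an environment variable with an optional default; a minimal sketch under that assumption:

```python
import os
from typing import Optional


def get_env_var(name: str, default: Optional[str] = None) -> str:
    # Assumed behaviour of services.utils.get_env_var (not shown in this diff):
    # return the variable if set, fall back to the default, and fail loudly
    # when a required variable has no value at all.
    value = os.environ.get(name, default)
    if value is None:
        raise ValueError(f"Missing environment variable {name}")
    return value
```

With `ENABLE_CORS` unset, the CORS middleware above is skipped entirely, which fits the docker image where nginx serves UI and API from one origin; the dev setup in CONTRIBUTING.md exports `ENABLE_CORS="TRUE"` instead.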
6 changes: 1 addition & 5 deletions services/pyproject.toml
@@ -4,11 +4,7 @@ version = "0.0.0"
description = ""
authors = ["Silvan Melchior"]
license = "MIT"
packages = [
{include = "interpreter"},
{include = "llm"},
{include = "utils"},
]
packages = [{include = "services"}]

[tool.poetry.dependencies]
python = ">=3.9, <4.0"
Empty file added services/services/__init__.py
Empty file.
4 changes: 4 additions & 0 deletions services/services/auth/__init__.py
@@ -0,0 +1,4 @@
from .token import AUTH_TOKEN
from .service import auth_router
from .websocket import verify_websocket, ALLOWED_HOSTS
from .welcome import welcome_lifespan
17 changes: 17 additions & 0 deletions services/services/auth/service.py
@@ -0,0 +1,17 @@
from fastapi import APIRouter, HTTPException
from pydantic import BaseModel

from .token import AUTH_TOKEN

auth_router = APIRouter()


class TokenRequest(BaseModel):
    token: str


@auth_router.post("/verify")
def verify(request: TokenRequest):
    if request.token == AUTH_TOKEN:
        return {"status": "success"}
    raise HTTPException(status_code=401, detail="Invalid token")
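Stripped of the FastAPI plumbing, the `/verify` endpoint reduces to a single string comparison. A minimal sketch (`verify_token` is a hypothetical name; `hmac.compare_digest` is used here as a constant-time variant of the plain `==` check in the diff):

```python
import hmac

# Hypothetical fixed value; the service reads this from services.auth.token.
AUTH_TOKEN = "some-secret-token"


def verify_token(token: str) -> dict:
    # Same check as the /api/auth/verify endpoint, minus the FastAPI plumbing.
    # compare_digest avoids timing side channels; the original uses plain ==.
    if hmac.compare_digest(token, AUTH_TOKEN):
        return {"status": "success"}
    # The service raises HTTPException(status_code=401) here instead.
    raise PermissionError("Invalid token")
```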
5 changes: 5 additions & 0 deletions services/services/auth/token.py
@@ -0,0 +1,5 @@
import secrets

from services.utils import get_env_var

AUTH_TOKEN = get_env_var("AUTH_TOKEN", secrets.token_urlsafe(32))
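`AUTH_TOKEN` is either taken from the environment or freshly generated with `secrets.token_urlsafe(32)` at import time, which is why restarting the container invalidates the old URL unless the env var is set. A standalone sketch of that precedence (assuming `get_env_var` falls back to its second argument):

```python
import os
import secrets


def resolve_auth_token() -> str:
    # A fixed AUTH_TOKEN env var wins; otherwise a fresh random token is
    # generated at startup (the service evaluates this once, at import time).
    return os.environ.get("AUTH_TOKEN") or secrets.token_urlsafe(32)


# 32 random bytes base64url-encode to 43 characters (padding stripped).
print(len(secrets.token_urlsafe(32)))  # 43
```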
26 changes: 26 additions & 0 deletions services/services/auth/websocket.py
@@ -0,0 +1,26 @@
import logging

from fastapi import WebSocket

from services.auth import AUTH_TOKEN
from services.utils.env_var import get_env_var

ALLOWED_HOSTS = get_env_var("ALLOWED_HOSTS").split(" ")


def _verify_origin(origin: str) -> bool:
    origin_raw = origin.replace("http://", "").replace("https://", "")
    return origin_raw in ALLOWED_HOSTS


async def verify_websocket(websocket: WebSocket):
    if not _verify_origin(websocket.headers["origin"]):
        logging.warning("Websocket with invalid origin")
        return False

    token = await websocket.receive_text()
    if token != AUTH_TOKEN:
        logging.warning("Websocket with invalid token")
        return False

    return True
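The handshake added here checks two things: the `Origin` header against the space-separated `ALLOWED_HOSTS` env var, and the first websocket text message against `AUTH_TOKEN`. The origin half is plain string manipulation; a self-contained sketch with the hosts hard-coded instead of read from the environment:

```python
# Normally parsed from the space-separated ALLOWED_HOSTS env var.
ALLOWED_HOSTS = "localhost:3000 localhost:3030".split(" ")


def verify_origin(origin: str) -> bool:
    # Strip the scheme and compare the bare host:port against the allow-list,
    # mirroring _verify_origin above.
    host = origin.replace("http://", "").replace("https://", "")
    return host in ALLOWED_HOSTS


print(verify_origin("http://localhost:3030"))  # True
print(verify_origin("https://evil.example"))   # False
```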
20 changes: 20 additions & 0 deletions services/services/auth/welcome.py
@@ -0,0 +1,20 @@
from contextlib import asynccontextmanager

from fastapi import FastAPI

from .websocket import ALLOWED_HOSTS
from .token import AUTH_TOKEN


@asynccontextmanager
async def welcome_lifespan(app: FastAPI):
    print("***")
    print("Welcome to Incognito Pilot")
    if len(ALLOWED_HOSTS) == 1:
        print("To start, open the following URL:")
    else:
        print("To start, open one of the following URLs:")
    for host in ALLOWED_HOSTS:
        print(f"  http://{host}#token={AUTH_TOKEN}")
    print("***")
    yield
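The lifespan prints one URL per allowed host, with the token in the URL fragment. The same format as a small helper (`build_start_urls` is a hypothetical name, not part of the commit):

```python
def build_start_urls(allowed_hosts, auth_token):
    # Matches the f-string in welcome_lifespan: the token goes into the
    # fragment, so the browser never sends it to the server in the request.
    return [f"http://{host}#token={auth_token}" for host in allowed_hosts]


print(build_start_urls(["localhost:3030"], "some-secret-token"))
# ['http://localhost:3030#token=some-secret-token']
```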
@@ -1 +1,2 @@
from .ipython_interpreter import IPythonInterpreter
from .service import interpreter_router
File renamed without changes.
@@ -1,19 +1,22 @@
from pathlib import Path

from fastapi import WebSocket, WebSocketDisconnect
from fastapi import APIRouter, WebSocket, WebSocketDisconnect
from websockets.exceptions import ConnectionClosedError

from interpreter import IPythonInterpreter
from utils import get_app, get_env_var, verify_origin
from services.interpreter import IPythonInterpreter
from services.utils import get_env_var
from services.auth import verify_websocket

app = get_app()

WORKING_DIRECTORY = Path(get_env_var("WORKING_DIRECTORY"))
IPYTHON_PATH = Path(get_env_var("IPYTHON_PATH"))
TIMEOUT = int(get_env_var("INTERPRETER_TIMEOUT", "30"))
TIMEOUT_MESSAGE = "ERROR: TIMEOUT REACHED"


interpreter_router = APIRouter()


def get_interpreter() -> IPythonInterpreter:
interpreter = IPythonInterpreter(
working_dir=WORKING_DIRECTORY,
@@ -24,15 +27,14 @@ def get_interpreter() -> IPythonInterpreter:
return interpreter


@app.websocket("/api/interpreter/run")
@interpreter_router.websocket("/run")
async def run(websocket: WebSocket):
if not verify_origin(websocket.headers["origin"]):
return

ws_exceptions = WebSocketDisconnect, ConnectionClosedError

await websocket.accept()
try:
await websocket.accept()
if not await verify_websocket(websocket):
return
except ws_exceptions:
return

@@ -1,3 +1,4 @@
from .base import BaseLLM, LLMException
from .types import Message, Response
from .selector import get_llm
from .service import llm_router
2 changes: 1 addition & 1 deletion services/llm/base.py → services/services/llm/base.py
@@ -1,7 +1,7 @@
from abc import ABC, abstractmethod
from typing import Generator

from llm.types import Message, Response
from services.llm.types import Message, Response


class BaseLLM(ABC):
File renamed without changes.
4 changes: 2 additions & 2 deletions services/llm/gpt/gpt.py → services/services/llm/gpt/gpt.py
@@ -3,8 +3,8 @@
import openai
from openai import OpenAIError

from llm.base import BaseLLM, LLMException
from llm.types import Message, Response
from services.llm.base import BaseLLM, LLMException
from services.llm.types import Message, Response
from .parsing import msg_to_gpt_msg, lazy_parse_args, fill_dict
from .prompt import FUNCTIONS

@@ -1,6 +1,6 @@
import openai

from utils import get_env_var
from services.utils import get_env_var
from .gpt import GPT


@@ -1,6 +1,6 @@
import openai

from utils import get_env_var
from services.utils import get_env_var
from .gpt import GPT


@@ -1,7 +1,7 @@
import re
import json

from llm.types import Message
from services.llm.types import Message


def msg_to_gpt_msg(msg: Message) -> dict:
File renamed without changes.
File renamed without changes.
@@ -1,12 +1,11 @@
from typing import Generator, Optional
from typing import Generator

import replicate
from replicate.exceptions import ReplicateException

from llm.base import BaseLLM, LLMException
from llm.types import Message, Response
from utils import get_env_var

from services.llm.base import BaseLLM, LLMException
from services.llm.types import Message, Response
from services.utils import get_env_var
from .prompt import SYSTEM_PROMPT
from .parsing import msg_to_llama_msg, split_output

@@ -5,10 +5,9 @@
from text_generation import Client
from text_generation.errors import ValidationError

from llm.base import BaseLLM, LLMException
from llm.types import Message, Response
from utils import get_env_var

from services.llm.base import BaseLLM, LLMException
from services.llm.types import Message, Response
from services.utils import get_env_var
from .prompt import SYSTEM_PROMPT
from .parsing import msg_to_llama_msg, split_output

