Merge pull request #163 from wandb/feature/openui-v2
2 weeks of work for the big redesign with lots of exciting new features!
vanpelt authored Jun 23, 2024
2 parents 908b7f7 + b3ac3b6 commit 6fcec93
Showing 134 changed files with 382,596 additions and 47,048 deletions.
Empty file added .env
Empty file.
202 changes: 202 additions & 0 deletions .github/workflows/docker.yml
@@ -0,0 +1,202 @@
name: Build, test and release OpenUI

on:
  push:
    branches:
      - "**"
  workflow_dispatch:

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build-frontend:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      packages: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
        name: Install pnpm
        with:
          version: 9
          run_install: false
      - name: Install Node.js
        uses: actions/setup-node@v4
        with:
          cache-dependency-path: frontend/pnpm-lock.yaml
          node-version: 20
          cache: "pnpm"
      - name: Get pnpm store directory
        shell: bash
        working-directory: ./frontend
        run: |
          echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
      - uses: actions/cache@v4
        name: Setup pnpm cache
        with:
          path: ${{ env.STORE_PATH }}
          key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
          restore-keys: |
            ${{ runner.os }}-pnpm-store-
      - name: Install dependencies
        working-directory: ./frontend
        run: pnpm install
      # We use npm here because pnpm wasn't executing post hooks
      - name: Build frontend
        working-directory: ./frontend
        run: npm run build
      - name: Upload build artifacts
        uses: actions/upload-artifact@v4
        with:
          name: frontend-${{ github.sha }}
          path: ./frontend/dist
      - name: Checking in frontend assets
        if: github.ref == 'refs/heads/main'
        run: |
          git config user.name github-actions
          git config user.email [email protected]
          git add backend/openui/dist
          git commit -m "Updated frontend assets"
          git push
  build-and-push-image:
    needs: build-frontend
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
      attestations: write
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Download build artifacts
        uses: actions/download-artifact@v4
        with:
          name: frontend-${{ github.sha }}
          path: ./backend/openui/dist
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to the Container registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          flavor: |
            latest=false
          tags: |
            type=ref,event=branch
            type=ref,event=tag
            type=ref,event=pr
            type=sha
      - name: Build and push Docker image
        id: push
        uses: docker/build-push-action@v6
        with:
          platforms: linux/amd64,linux/arm64
          context: backend/.
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
      - name: Generate artifact attestation
        uses: actions/attest-build-provenance@v1
        with:
          subject-name: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          subject-digest: ${{ steps.push.outputs.digest }}
          push-to-registry: true

  test:
    permissions:
      contents: read
      packages: write
      attestations: write
      id-token: write
    needs: build-and-push-image
    timeout-minutes: 10
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
        name: Install pnpm
        with:
          version: 9
          run_install: false
      - name: Install Node.js
        uses: actions/setup-node@v4
        with:
          cache-dependency-path: frontend/pnpm-lock.yaml
          node-version: 20
          cache: "pnpm"
      - name: Get pnpm store directory
        shell: bash
        working-directory: ./frontend
        run: |
          echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
      - uses: actions/cache@v4
        name: Setup pnpm cache
        with:
          path: ${{ env.STORE_PATH }}
          key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
          restore-keys: |
            ${{ runner.os }}-pnpm-store-
      - name: Install dependencies
        working-directory: ./frontend
        run: pnpm install
      - name: Install Playwright Browsers
        working-directory: ./frontend
        run: pnpm exec playwright install --with-deps chromium webkit
      - name: Get short SHA
        id: get_short_sha
        run: echo "short_sha=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
      - name: Run Playwright tests
        env:
          DOCKER_TAG: sha-${{ steps.get_short_sha.outputs.short_sha }}
        working-directory: ./frontend
        run: pnpm exec playwright test
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: playwright-report
          path: |
            ./frontend/playwright-report/
            ./frontend/screenshots/
          retention-days: 30

  release:
    needs: test
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
      attestations: write
      id-token: write
    steps:
      - name: Log in to the Container registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Convert full SHA to short SHA
        id: get_short_sha
        run: echo "short_sha=$(echo ${{ github.sha }} | cut -c1-7)" >> $GITHUB_OUTPUT
      - name: Tag latest image
        if: github.ref == 'refs/heads/main'
        env:
          IMAGE: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
        run: |
          docker manifest inspect ${{ env.IMAGE }}:sha-${{ steps.get_short_sha.outputs.short_sha }}
          docker buildx imagetools create --tag ${{ env.IMAGE }}:latest ${{ env.IMAGE }}:sha-${{ steps.get_short_sha.outputs.short_sha }}
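Note that the test job derives the short SHA with `git rev-parse --short HEAD`, while the release job derives it with `cut -c1-7`. These usually agree, but `git rev-parse --short` can emit more than seven characters when a seven-character abbreviation would be ambiguous, which would break the `sha-` tag lookup. A minimal sketch of the release job's derivation (the full SHA below is hypothetical):

```shell
# Derive the 7-character short SHA used for the image tag (release-job style)
FULL_SHA="6fcec93aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"  # hypothetical 40-char SHA
SHORT_SHA=$(echo "$FULL_SHA" | cut -c1-7)
echo "$SHORT_SHA"  # → 6fcec93
```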
1 change: 1 addition & 0 deletions .gitignore
@@ -1,3 +1,4 @@
.DS_Store
nohup.out
.cache/
.env
64 changes: 46 additions & 18 deletions README.md
@@ -18,7 +18,32 @@
OpenUI lets you describe UI using your imagination, then see it rendered live.

## Running Locally

You can also run OpenUI locally and use models available to [Ollama](https://ollama.com). [Install Ollama](https://ollama.com/download) and pull a model like [CodeLlama](https://ollama.com/library/codellama), then assuming you have git and python installed:
OpenUI supports [OpenAI](https://platform.openai.com/api-keys), [Groq](https://console.groq.com/keys), and any model [LiteLLM](https://docs.litellm.ai/docs/) supports, such as [Gemini](https://aistudio.google.com/app/apikey) or [Anthropic (Claude)](https://console.anthropic.com/settings/keys). Each of the following environment variables is optional, but the corresponding service only works when its variable is set in your environment:

- **OpenAI** `OPENAI_API_KEY`
- **Groq** `GROQ_API_KEY`
- **Gemini** `GEMINI_API_KEY`
- **Anthropic** `ANTHROPIC_API_KEY`
- **Cohere** `COHERE_API_KEY`
- **Mistral** `MISTRAL_API_KEY`

You can also use models available to [Ollama](https://ollama.com). [Install Ollama](https://ollama.com/download) and pull a model like [Llava](https://ollama.com/library/llava). If Ollama is not running on http://127.0.0.1:11434, you can set the `OLLAMA_HOST` environment variable to the host and port of your Ollama instance.

### Docker (preferred)

The following command forwards the specified API keys from your shell environment and points the container at an Ollama instance running on your machine:

```bash
export ANTHROPIC_API_KEY=xxx
export OPENAI_API_KEY=xxx
docker run --rm --name openui -p 7878:7878 -e OPENAI_API_KEY -e ANTHROPIC_API_KEY -e OLLAMA_HOST=http://host.docker.internal:11434 ghcr.io/wandb/openui
```

Now you can go to [http://localhost:7878](http://localhost:7878) and generate new UIs!

### From Source / Python

Assuming you have git and python installed:

> **Note:** There's a .python-version file that specifies **openui** as the virtual env name. If you have pyenv and pyenv-virtualenv, you can run the following from the root of the repository; otherwise just run `pyenv local 3.X`, where X is the version of Python you have installed.
> ```bash
@@ -31,25 +56,39 @@
git clone https://github.com/wandb/openui
cd openui/backend
# You probably want to do this from a virtual environment
pip install .
# This must be set to use OpenAI models, find your api key here: https://platform.openai.com/api-keys
# Set API keys for any LLMs you want to use
export OPENAI_API_KEY=xxx
# You may change the base URL to use an OpenAI-compatible API by setting the OPENAI_BASE_URL environment variable
# export OPENAI_BASE_URL=https://api.myopenai.com/v1
python -m openui
```
## Groq
## LiteLLM

To use the super fast [Groq](https://groq.com) models, set `GROQ_API_KEY` to your Groq api key which you can [find here](https://console.groq.com/keys). To use one of the Groq models, click the settings icon in the sidebar and choose from the list:
[LiteLLM](https://docs.litellm.ai/docs/) can be used to connect to basically any LLM service available. We generate a config automatically based on your environment variables. You can create your own [proxy config](https://litellm.vercel.app/docs/proxy/configs) to override this behavior. We look for a custom config in the following locations:

<img src="./assets/settings.jpeg" width="500" alt="Select Groq models" />
1. `litellm-config.yaml` in the current directory
2. `/app/litellm-config.yaml` when running in a docker container
3. An arbitrary path specified by the `OPENUI_LITELLM_CONFIG` environment variable
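A custom config follows LiteLLM's standard proxy config format. A minimal sketch (the model names below are illustrative examples, not defaults shipped with OpenUI):

```yaml
model_list:
  - model_name: claude-3-5-sonnet
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
      api_key: os.environ/ANTHROPIC_API_KEY
  - model_name: local-llama
    litellm_params:
      model: ollama/llama3
      api_base: http://127.0.0.1:11434
```

The `os.environ/` prefix tells LiteLLM to read the key from an environment variable rather than storing it in the file.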

You can also change the default base URL used for Groq (if necessary), e.g.
For example, to use a custom config in Docker you can run:

```bash
export GROQ_BASE_URL=https://api.groq.com/openai/v1
docker run --name openui -p 7878:7878 -v $(pwd)/litellm-config.yaml:/app/litellm-config.yaml ghcr.io/wandb/openui
```

To use LiteLLM from source, you can run:

```bash
pip install ".[litellm]"
export ANTHROPIC_API_KEY=xxx
python -m openui --litellm
```

## Groq

To use the super fast [Groq](https://groq.com) models, set `GROQ_API_KEY` to your Groq API key, which you can [find here](https://console.groq.com/keys). To use one of the Groq models, click the settings icon in the nav bar.

### Docker Compose

> **DISCLAIMER:** This is likely going to be very slow. If you have a GPU you may need to change the tag of the `ollama` container to one that supports it. If you're running on a Mac, follow the instructions above and run Ollama natively to take advantage of the M1/M2.
@@ -65,17 +104,6 @@
If you have your OPENAI_API_KEY set in the environment already, just remove `=xx

*If you make changes to the frontend or backend, you'll need to run `docker-compose build` to have them reflected in the service.*
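For reference, a compose file for this setup might look roughly like the following. This is a hedged sketch only, not the repository's actual `docker-compose.yaml`; the service names, volume layout, and environment values are assumptions:

```yaml
services:
  openui:
    build: ./backend
    ports:
      - "7878:7878"
    environment:
      - OPENAI_API_KEY=xxx
      - OLLAMA_HOST=http://ollama:11434
    depends_on:
      - ollama
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama
volumes:
  ollama:
```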

### Docker

You can build and run the docker file manually from the `/backend` directory:

```bash
docker build . -t wandb/openui --load
docker run -p 7878:7878 -e OPENAI_API_KEY -e GROQ_API_KEY wandb/openui
```

Now you can go to [http://localhost:7878](http://localhost:7878)

## Development

A [dev container](https://github.com/wandb/openui/blob/main/.devcontainer/devcontainer.json) is configured in this repository which is the quickest way to get started.
Binary file modified assets/demo.gif
3 changes: 2 additions & 1 deletion backend/.dockerignore
@@ -10,9 +10,10 @@
**/*.egg-info
**/.DS_Store
**/build
**/wandb
**/*.db

# flyctl launch added from openui/eval/.gitignore
openui/eval/**/wandb
openui/eval/**/datasets
openui/eval/**/components
fly.toml
4 changes: 2 additions & 2 deletions backend/Dockerfile
@@ -8,7 +8,7 @@
COPY README.md .
RUN mkdir -p openui/util && \
python -m venv /venv && \
/venv/bin/pip install --upgrade pip setuptools wheel && \
/venv/bin/pip install --disable-pip-version-check .
/venv/bin/pip install --disable-pip-version-check .[litellm]

# Copy the virtualenv into a distroless image
FROM python:3.12-slim-bookworm
@@ -22,4 +22,4 @@
WORKDIR /app

RUN pip install --no-deps -U /app

ENTRYPOINT ["python", "-m", "openui"]
ENTRYPOINT ["python", "-m", "openui", "--litellm"]
9 changes: 9 additions & 0 deletions backend/README.md
@@ -57,3 +57,12 @@
pytest
## Evaluation

The [eval](./openui/eval) folder contains scripts for evaluating the performance of a model. It automates generating UI, taking screenshots of the UI, then asking `gpt-4-vision-preview` to rate the elements. More details about the eval pipeline coming soon...


## Google Vertex AI

Create a service account with the appropriate permissions and authenticate with:

```bash
gcloud auth application-default login --impersonate-service-account ${GCLOUD_SERVICE_ACCOUNT}@${GCLOUD_PROJECT}.iam.gserviceaccount.com
```
