API · VSCode extension · Documentation
Quack companion helps software teams ship faster. See it as an instantly onboarded team member with full knowledge of your internal libraries and coding standards 🦆
The backend API is the gatekeeper for your LLM inference container (powered by our friends at Ollama). With your services up and running, you can use the code chat endpoint as a coding-specific LLM chat.
If you want to run this in the cloud, check our LLM latency benchmark across a few cloud providers.
With the service running, you can navigate to http://localhost:5050/docs to explore the interactive API documentation, or interact with the API directly through HTTP requests, as in the sketch below.
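For instance, a chat request over plain HTTP might look like this. The exact route, payload shape, and auth scheme here are assumptions, so check the interactive docs for the real schema:

```shell
# Hypothetical request: the route, payload, and auth header below are assumptions,
# not the documented API; see http://localhost:5050/docs for the real schema.
curl -X POST http://localhost:5050/api/v1/code/chat \
  -H "Authorization: Bearer $QUACK_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a function that reverses a string in Python."}]}'
```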
To run the stack locally, you'll need:

- Git
- Docker
- Docker Compose
- NVIDIA Container Toolkit and a GPU (>= 6 GB of VRAM for a good performance/latency balance; see the check below)
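You can verify each prerequisite from a terminal. The last command is the standard way to confirm that containers can see your GPU through the NVIDIA Container Toolkit; the CUDA image tag is only an example:

```shell
git --version
docker --version
docker compose version
# Run nvidia-smi inside a container to confirm GPU passthrough works
# (any recent CUDA base image will do; this tag is just an example)
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```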
Clone the repository:

```shell
git clone https://github.com/quack-ai/companion.git && cd companion
```
First, copy the example environment file:

```shell
cp .env.example .env
```

and then edit it:

```shell
nano .env
```
Replace the value of SUPERADMIN_GH_PAT with a GitHub Personal Access Token for your user. You can create one at https://github.com/settings/tokens (no extra permissions are needed, i.e. read-only access is enough).
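For reference, the relevant line in your .env should end up looking like this (the token value below is a placeholder, not a real token):

```shell
# .env — replace the placeholder with your own GitHub Personal Access Token
SUPERADMIN_GH_PAT=ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```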
Then pull the images and start the services:

```shell
docker compose pull
docker compose up
```
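If you'd rather keep your terminal free, you can run the stack in the background and follow the logs separately (standard Docker Compose flags):

```shell
docker compose up -d      # start all services in detached mode
docker compose logs -f    # follow the logs of the running services
```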
You can now access:
- your backend API at http://localhost:5050/docs
- your APM dashboard at http://localhost:3000
- your Gradio chat interface at http://localhost:7860
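As a quick smoke test, you can check that the backend API answers over HTTP; a 200 status code means the docs page is being served:

```shell
# Print only the HTTP status code returned by the API docs page
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5050/docs
```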
Oh hello there 👋 If you've scrolled this far, we bet it's because you like open source. Do you feel like integrating a new LLM backend? Improving our documentation? Or contributing in any other way?
You're in luck! You'll find everything you need in our contributing guide to help grow this project! And if you're interested, you can join us on Discord 🤗
Copyright (C) 2023-2024, Quack AI.
This program is licensed under the Apache License 2.0. See LICENSE or go to https://www.apache.org/licenses/LICENSE-2.0 for full license details.