High-performance OpenAI API proxy with rate limiting and streaming support.
- Asynchronous request forwarding
- SSE streaming support
- Configurable rate limiting
- Multi-platform support (Linux, macOS, FreeBSD)
- Docker images for easy deployment
The fastest way to run RustyGate is with the published Docker image:

```bash
# Set your OpenAI API key
export OPENAI_API_KEY="your-api-key"

# Run RustyGate
docker run -p 8080:8080 -e OPENAI_API_KEY=$OPENAI_API_KEY ghcr.io/3loc/rustygate
```
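Once the container is up, you can sanity-check the proxy before wiring up a real client. A minimal sketch using Python's `requests`, assuming RustyGate forwards the standard OpenAI-compatible `/v1/chat/completions` route (the same path the client example below relies on):

```python
# Smoke test: send one chat completion through the local proxy.
# No Authorization header is needed; RustyGate attaches the API key server-side.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```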
Alternatively, download a prebuilt binary from the releases page and install it:

```bash
# Example for Linux AMD64
curl -LO https://github.com/3loc/rustygate/releases/latest/download/rustygate-linux-amd64
chmod +x rustygate-linux-amd64
sudo mv rustygate-linux-amd64 /usr/local/bin/rustygate
```
Point the official OpenAI Python client at the proxy. No client-side API key is needed, since RustyGate handles authentication with OpenAI:

```python
from openai import OpenAI

# Initialize client with RustyGate URL
client = OpenAI(
    base_url="http://localhost:8080/v1",  # RustyGate proxy URL
    api_key="not-needed",  # API key is handled by RustyGate
)

# Non-streaming request
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)

# Streaming request
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
```
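Because RustyGate forwards requests asynchronously and supports SSE streaming, it also works with the async client. A sketch using `AsyncOpenAI` against the same proxy URL; nothing here is RustyGate-specific, it is just the standard async variant of the example above:

```python
import asyncio
from openai import AsyncOpenAI

# Same proxy URL; the API key is still handled by RustyGate
client = AsyncOpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

async def main():
    # Stream tokens as they arrive over SSE
    stream = await client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}],
        stream=True,
    )
    async for chunk in stream:
        if chunk.choices[0].delta.content is not None:
            print(chunk.choices[0].delta.content, end="")

asyncio.run(main())
```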
Environment variables:

- `OPENAI_API_KEY` (required): Your OpenAI API key
- `PORT`: Server port (default: 8080)
- `RATE_LIMIT`: Requests per second (default: 10)
- `RATE_LIMIT_BURST`: Burst capacity (default: 20)
- `RUST_LOG`: Log level (default: debug)
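Clients that exceed `RATE_LIMIT` need a retry strategy. A minimal sketch with exponential backoff, assuming RustyGate rejects over-limit requests with HTTP 429 (the README does not state the rejection status code, so treat that as an assumption); the OpenAI Python client surfaces 429 as `openai.RateLimitError`:

```python
# Retry with exponential backoff when the proxy's rate limit kicks in.
# Assumption: over-limit requests come back as HTTP 429, which the
# OpenAI client raises as openai.RateLimitError.
import time

import openai
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

def chat_with_retry(messages, attempts=5):
    for attempt in range(attempts):
        try:
            return client.chat.completions.create(
                model="gpt-4", messages=messages
            )
        except openai.RateLimitError:
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
    raise RuntimeError("still rate limited after retries")

response = chat_with_retry([{"role": "user", "content": "Hello!"}])
print(response.choices[0].message.content)
```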
To build from source and run the test suite:

```bash
# Clone and build
git clone https://github.com/3loc/rustygate.git
cd rustygate
cargo build --release

# Run tests
docker compose up tests
```
MIT License