Releases: BerriAI/litellm
v1.55.12
What's Changed
- Add `end_user`, `user` and `requested_model` on more Prometheus metrics by @krrishdholakia in #7399
- (feat) `/batches` - Add support for using `/batches` endpoints in OAI format by @ishaan-jaff in #7402 (see the sketch after this list)
- (feat) `/batches` - track `user_api_key_alias`, `user_api_key_team_alias` etc. for `/batch` requests by @ishaan-jaff in #7401
- Litellm dev 12 24 2024 p3 by @krrishdholakia in #7403
- (Feat) add `/v1/batches/{batch_id:path}/cancel` endpoint by @ishaan-jaff in #7406
- Litellm dev 12 24 2024 p4 by @krrishdholakia in #7407
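A minimal sketch of the OpenAI-format `/batches` flow against a locally running proxy, including the new cancel endpoint from #7406. The virtual key, file id, and batch id below are placeholders:
```
# Create a batch in OpenAI format via the LiteLLM proxy (values are placeholders)
curl http://localhost:4000/v1/batches \
  -H "Authorization: Bearer sk-1234" \
  -H "Content-Type: application/json" \
  -d '{
    "input_file_id": "file-abc123",
    "endpoint": "/v1/chat/completions",
    "completion_window": "24h"
  }'

# Cancel it via the endpoint added in #7406 (batch id is a placeholder)
curl -X POST http://localhost:4000/v1/batches/batch_abc123/cancel \
  -H "Authorization: Bearer sk-1234"
```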
Full Changelog: v1.55.11...v1.55.12
Docker Run LiteLLM Proxy
```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.12
```
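Once the container is up, a quick smoke test against the proxy's OpenAI-compatible chat endpoint. The virtual key and model name are placeholders; with `STORE_MODEL_IN_DB=True`, a model is typically added via the Admin UI or API first:
```
# Placeholder key and model; assumes a model is already configured on the proxy
curl http://localhost:4000/chat/completions \
  -H "Authorization: Bearer sk-1234" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "ping"}]
  }'
```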
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 220.0 | 241.51418849604215 | 6.334659319234715 | 0.0 | 1895 | 0 | 191.11329300005764 | 3854.987871999924 |
Aggregated | Passed ✅ | 220.0 | 241.51418849604215 | 6.334659319234715 | 0.0 | 1895 | 0 | 191.11329300005764 | 3854.987871999924 |
v1.55.11
What's Changed
- LiteLLM Minor Fixes & Improvements (12/23/2024) - p3 by @krrishdholakia in #7394
Full Changelog: v1.55.10...v1.55.11
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.11
Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 250.0 | 290.3865391657403 | 6.034920682874279 | 0.0 | 1804 | 0 | 229.06071099987457 | 2909.605226000167 |
Aggregated | Passed ✅ | 250.0 | 290.3865391657403 | 6.034920682874279 | 0.0 | 1804 | 0 | 229.06071099987457 | 2909.605226000167 |
v1.55.10
What's Changed
- (Admin UI) - Test Key Tab - Allow typing in `model` name + Add wrapping for text response by @ishaan-jaff in #7347
- (Admin UI) - Test Key Tab - Allow using `UI Session` instead of manually creating a virtual key by @ishaan-jaff in #7348
- (refactor) - fix `from enterprise.utils import ui_get_spend_by_tags` by @ishaan-jaff in #7352
- (chore) - enforce model budgets on virtual keys as enterprise feature by @ishaan-jaff in #7353
- (Admin UI) correctly render provider name in /models with wildcard routing by @ishaan-jaff in #7349
- (Admin UI) - maintain history on chat UI by @ishaan-jaff in #7351
- Litellm enforce enterprise features by @krrishdholakia in #7357
- Document team admins + Enforce assigning team admins as an enterprise feature by @krrishdholakia in #7359
- Litellm docs update by @krrishdholakia in #7365
- Complete 'requests' library removal by @krrishdholakia in #7350
- (chore) remove unused code files by @ishaan-jaff in #7363
- (security fix) - update base image for all docker images to `python:3.13.1-slim` by @ishaan-jaff in #7388
- LiteLLM Minor Fixes & Improvements (12/23/2024) - p1 by @krrishdholakia in #7383
- LiteLLM Minor Fixes & Improvements (12/23/2024) - P2 by @krrishdholakia in #7386
- [Bug Fix]: Errors in LiteLLM When Using Embeddings Model with Usage-Based Routing by @ishaan-jaff in #7390
- (Feat) Add input_cost_per_token_batches, output_cost_per_token_batches for OpenAI cost tracking Batches API by @ishaan-jaff in #7391
- (feat) Add basic logging support for `/batches` endpoints by @ishaan-jaff in #7381
- (feat) Add cost tracking for OpenAI `/batches` requests by @ishaan-jaff in #7384 (see the sketch after this list)
- dd logger fix - handle objects that can't be JSON dumped by @ishaan-jaff in #7393
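With the logging (#7381) and cost tracking (#7384) changes above, batch traffic through the proxy should now show up in logs and spend tracking. A sketch of polling a batch, assuming the standard OpenAI retrieve route; ids and key are placeholders:
```
# Retrieve a batch; with #7381/#7384 this traffic is logged and cost-tracked
curl http://localhost:4000/v1/batches/batch_abc123 \
  -H "Authorization: Bearer sk-1234"
```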
Full Changelog: v1.55.9...v1.55.10
Docker Run LiteLLM Proxy
```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.10
```
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 200.0 | 218.24862748744047 | 6.256831142894005 | 0.0 | 1871 | 0 | 177.71721199983403 | 1940.1571020000574 |
Aggregated | Passed ✅ | 200.0 | 218.24862748744047 | 6.256831142894005 | 0.0 | 1871 | 0 | 177.71721199983403 | 1940.1571020000574 |
v1.55.8-stable
Full Changelog: v1.55.8...v1.55.8-stable
Docker Run LiteLLM Proxy
```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:litellm_stable_release_branch-v1.55.8-stable
```
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 230.0 | 256.12454035233407 | 6.158450406948531 | 0.0 | 1842 | 0 | 207.30311900001652 | 2232.342858000038 |
Aggregated | Passed ✅ | 230.0 | 256.12454035233407 | 6.158450406948531 | 0.0 | 1842 | 0 | 207.30311900001652 | 2232.342858000038 |
v1.55.9
What's Changed
- Control fallback prompts client-side by @krrishdholakia in #7334
- [Bug fix]: Triton /infer handler incompatible with batch responses by @ishaan-jaff in #7337
- Litellm dev 12 20 2024 p3 by @krrishdholakia in #7339
- Litellm dev 2024 12 20 p1 by @krrishdholakia in #7335
- (fix) LiteLLM Proxy fix GET `/files/{file_id:path}/content` endpoint by @ishaan-jaff in #7342 (see the sketch after this list)
- (Bug fix) Azure cost calculation - `dall-e-3` by @ishaan-jaff in #7343
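A sketch of the fixed file-content route from #7342, assuming the usual `/v1` prefix; the file id and key are placeholders:
```
# Fetch a file's content through the proxy (placeholders throughout)
curl http://localhost:4000/v1/files/file-abc123/content \
  -H "Authorization: Bearer sk-1234"
```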
Full Changelog: v1.55.8...v1.55.9
Docker Run LiteLLM Proxy
```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.9
```
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 240.0 | 270.2192842992925 | 6.152704904591068 | 0.0 | 1841 | 0 | 213.0105499999786 | 2430.5650640000067 |
Aggregated | Passed ✅ | 240.0 | 270.2192842992925 | 6.152704904591068 | 0.0 | 1841 | 0 | 213.0105499999786 | 2430.5650640000067 |
v1.55.8
What's Changed
- fix(proxy_server.py): pass model access groups to get_key/get_team mo… by @krrishdholakia in #7281
- Litellm security fixes by @krrishdholakia in #7282
- Added sambanova cloud models by @rodrigo-92 in #7187
- Re-add prompt caching based model filtering (route to previous model) by @krrishdholakia in #7299
- (Fix) deprecated Pydantic Config class with model_config BerriAI/li… by @ishaan-jaff in #7300
- (feat - proxy) Add `status_code` to `litellm_proxy_total_requests_metric_total` by @ishaan-jaff in #7293 (see the sketch after this list)
- fix(hosted_vllm/transformation.py): return fake api key, if none give… by @krrishdholakia in #7301
- LiteLLM Minor Fixes & Improvements (2024/12/18) p1 by @krrishdholakia in #7295
- (feat proxy) v2 - model max budgets by @ishaan-jaff in #7302
- (proxy admin ui) - show Teams sorted by `Team Alias` by @ishaan-jaff in #7296
- (Refactor) use separate file for track_cost_callback by @ishaan-jaff in #7304
- o1 - add image param handling by @krrishdholakia in #7312
- (code quality) run ruff rule to ban unused imports by @ishaan-jaff in #7313
- [Bug Fix]: ImportError: cannot import name 'T' from 're' by @ishaan-jaff in #7314
- (code refactor) - Add `BaseRerankConfig`. Use `BaseRerankConfig` for `cohere/rerank` and `azure_ai/rerank` by @ishaan-jaff in #7319
- (feat) add infinity rerank models by @ishaan-jaff in #7321
- Litellm dev 12 19 2024 p2 by @krrishdholakia in #7315
- Langfuse Prompt Management Support by @krrishdholakia in #7322
- Fix LiteLLM Fireworks AI Documentation by @jravi-fireworks in #7333
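To see the new `status_code` label from #7293, one option is to scrape the proxy's Prometheus endpoint directly; the `/metrics` path is the usual default, but treat it as an assumption for your deployment:
```
# Scrape the metrics endpoint and filter for the per-status request counter
curl -s http://localhost:4000/metrics | grep litellm_proxy_total_requests_metric_total
```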
New Contributors
- @rodrigo-92 made their first contribution in #7187
- @jravi-fireworks made their first contribution in #7333
Full Changelog: v1.55.4...v1.55.8
Docker Run LiteLLM Proxy
```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.8
```
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 220.0 | 237.6551034099362 | 6.125601230624555 | 0.0 | 1832 | 0 | 193.92061900009594 | 1182.1513959999947 |
Aggregated | Passed ✅ | 220.0 | 237.6551034099362 | 6.125601230624555 | 0.0 | 1832 | 0 | 193.92061900009594 | 1182.1513959999947 |
v1.55.4
What's Changed
- (feat) Add Azure Blob Storage Logging Integration by @ishaan-jaff in #7265
- (feat) Add Bedrock knowledge base pass through endpoints by @ishaan-jaff in #7267
- docs(input.md): document 'extra_headers' param support by @krrishdholakia in #7268
- fix(utils.py): fix openai-like api response format parsing by @krrishdholakia in #7273
- LITELLM: Remove `requests` library usage by @krrishdholakia in #7235
- Litellm dev 12 17 2024 p2 by @krrishdholakia in #7277
- Litellm dev 12 17 2024 p3 by @krrishdholakia in #7279
- LiteLLM Minor Fixes & Improvements (12/16/2024) - p1 by @krrishdholakia in #7263
- Add Azure Llama 3.3 by @emerzon in #7283
- (feat) proxy Azure Blob Storage - Add support for `AZURE_STORAGE_ACCOUNT_KEY` Auth by @ishaan-jaff in #7280 (see the sketch after this list)
- Correct max_tokens on Model DB by @emerzon in #7284
- (fix) unable to pass input_type parameter to Voyage AI embedding model by @ishaan-jaff in #7276
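A sketch of wiring up the Azure Blob Storage logger with account-key auth (#7265, #7280). `AZURE_STORAGE_ACCOUNT_KEY` comes from the changelog; the other variable names follow LiteLLM's Azure storage docs but should be double-checked, all values are placeholders, and the logger itself still needs to be enabled in the proxy config:
```
docker run \
  -e STORE_MODEL_IN_DB=True \
  -e AZURE_STORAGE_ACCOUNT_NAME=my-account \
  -e AZURE_STORAGE_ACCOUNT_KEY=my-account-key \
  -e AZURE_STORAGE_FILE_SYSTEM=litellm-logs \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.55.4
```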
Full Changelog: v1.55.3...v1.55.4
Docker Run LiteLLM Proxy
```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.4
```
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 260.0 | 295.9831253378703 | 6.145780132592515 | 0.0 | 1838 | 0 | 220.05146400010744 | 2923.6937059999946 |
Aggregated | Passed ✅ | 260.0 | 295.9831253378703 | 6.145780132592515 | 0.0 | 1838 | 0 | 220.05146400010744 | 2923.6937059999946 |
v1.55.3
What's Changed
- LiteLLM Minor Fixes & Improvements (12/13/2024) pt.1 by @krrishdholakia in #7219
- (feat - Router / Proxy ) Allow setting budget limits per LLM deployment by @ishaan-jaff in #7220
- build(deps): bump nanoid from 3.3.7 to 3.3.8 in /ui/litellm-dashboard by @dependabot in #7216
- Litellm add router to base llm testing by @ishaan-jaff in #7202
- fix(main.py): fix retries being multiplied when using openai sdk by @krrishdholakia in #7221
- (proxy) - Auth fix, ensure re-using safe request body for checking `model` field by @ishaan-jaff in #7222
- (UI fix) - Allow editing Key Metadata by @ishaan-jaff in #7230
- (UI) Fix Usage Tab - Don't make expensive UI queries after SpendLogs crosses 1M Rows by @ishaan-jaff in #7229
- (code quality) Add ruff check to ban `print` in repo by @ishaan-jaff in #7233
- (UI QA) - stop making expensive UI queries when 1M+ SpendLogs in DB by @ishaan-jaff in #7234
- Fix vllm import by @ivanvykopal in #7224
- Add new Gemini 2.0 Flash model to Vertex AI. by @Manouchehri in #7193
- Litellm remove circular imports by @krrishdholakia in #7232
- (feat) Add Tag-based budgets on litellm router / proxy by @ishaan-jaff in #7236
- Litellm dev 12 14 2024 p1 by @krrishdholakia in #7231
New Contributors
- @ivanvykopal made their first contribution in #7224
Full Changelog: v1.55.2...v1.55.3
Docker Run LiteLLM Proxy
```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.3
```
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 250.0 | 281.1265187306242 | 6.1657490001280255 | 0.0033418693767631575 | 1845 | 1 | 119.36488499998177 | 3755.8482019999815 |
Aggregated | Passed ✅ | 250.0 | 281.1265187306242 | 6.1657490001280255 | 0.0033418693767631575 | 1845 | 1 | 119.36488499998177 | 3755.8482019999815 |
v1.55.1-stable
Full Changelog: v1.55.1...v1.55.1-stable
Docker Run LiteLLM Proxy
```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:litellm_stable_dec_14-stable
```
`litellm-database` image:
```
ghcr.io/berriai/litellm-database:litellm_stable_dec_14-stable
```
`litellm-non-root` image:
```
ghcr.io/berriai/litellm-non_root:litellm_stable_dec_14-stable
```
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 200.0 | 217.72878062246997 | 6.2754597178458145 | 0.0033415653449658223 | 1878 | 1 | 76.6410740000083 | 1257.3869729999956 |
Aggregated | Passed ✅ | 200.0 | 217.72878062246997 | 6.2754597178458145 | 0.0033415653449658223 | 1878 | 1 | 76.6410740000083 | 1257.3869729999956 |
v1.55.2
What's Changed
- Litellm dev 12 12 2024 by @krrishdholakia in #7203
- Litellm dev 12 11 2024 v2 by @krrishdholakia in #7215
Full Changelog: v1.55.1...v1.55.2
Docker Run LiteLLM Proxy
```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.2
```
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 250.0 | 282.51255728779716 | 6.192691226975396 | 0.0 | 1852 | 0 | 223.9336790000266 | 3178.0424589999257 |
Aggregated | Passed ✅ | 250.0 | 282.51255728779716 | 6.192691226975396 | 0.0 | 1852 | 0 | 223.9336790000266 | 3178.0424589999257 |