Releases: BerriAI/litellm
v1.57.0
What's Changed
- (Fix) make sure `init` custom loggers is non-blocking by @ishaan-jaff in #7554
- (Feat) Hashicorp Secret Manager - Allow storing virtual keys in secret manager by @ishaan-jaff in #7549
- Create and view organizations + assign org admins on the Proxy UI by @krrishdholakia in #7557
- (perf) fix [PROXY] don't use f-string in `add_litellm_data_to_request()` by @ishaan-jaff in #7558 (see the sketch after this list)
- fix(groq/chat/transformation.py): fix groq response_format transforma… by @krrishdholakia in #7565
- Support deleting keys by key_alias by @krrishdholakia in #7552
- (proxy perf improvement) - use `asyncio.create_task` for `service_logger_obj.async_service_success_hook` in pre_call by @ishaan-jaff in #7563 (see the sketch after this list)
- add `fireworks_ai/accounts/fireworks/models/deepseek-v3` by @Fredy in #7567
- FriendliAI: Documentation Updates by @minpeter in #7517
- Prevent istio injection for db migrations cron job by @lowjiansheng in #7513
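Two of the perf items above share a common asyncio pattern: schedule slow logging work with `asyncio.create_task` instead of awaiting it inline, and let the logger format debug strings lazily instead of paying for an f-string on every request. A minimal sketch of both (the hook body and payload here are hypothetical stand-ins, not LiteLLM's actual code):

```python
import asyncio
import logging

logger = logging.getLogger("proxy")

async def async_service_success_hook(payload: dict) -> None:
    """Hypothetical stand-in for a service logger's success hook."""
    await asyncio.sleep(0.05)  # e.g. network I/O to a logging backend

async def handle_request(data: dict) -> dict:
    # Fire-and-forget: schedule the hook rather than awaiting it, so the
    # request path never blocks on logging I/O (the #7563 pattern).
    asyncio.create_task(async_service_success_hook(data))

    # Lazy %-style logging: the message is only formatted if DEBUG is
    # enabled, whereas an f-string is always evaluated (the #7558 fix).
    logger.debug("received data: %s", data)
    return {"status": "ok"}

async def main() -> None:
    print(await handle_request({"model": "gpt-4o"}))
    await asyncio.sleep(0.1)  # demo only: let the background task finish

asyncio.run(main())
```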
Full Changelog: v1.56.10...v1.57.0
Docker Run LiteLLM Proxy
```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.57.0
```
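Once the container is up, the proxy serves an OpenAI-compatible API on port 4000. A quick smoke test with `requests` (the key and model name below are placeholders; use whatever your proxy config defines):

```python
import requests

resp = requests.post(
    "http://localhost:4000/chat/completions",
    headers={"Authorization": "Bearer sk-1234"},  # placeholder proxy key
    json={
        "model": "gpt-3.5-turbo",  # any model configured on the proxy
        "messages": [{"role": "user", "content": "Hello from LiteLLM"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```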
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 200.0 | 212.84027329611826 | 6.1961289027318704 | 0.0 | 1854 | 0 | 174.45147399996586 | 1346.3216149999653 |
Aggregated | Passed ✅ | 200.0 | 212.84027329611826 | 6.1961289027318704 | 0.0 | 1854 | 0 | 174.45147399996586 | 1346.3216149999653 |
v1.56.10
What's Changed
- fix(aws_secret_manager_V2.py): Error reading secret from AWS Secrets … by @krrishdholakia in #7541
- Support checking provider-specific `/models` endpoints for available models based on key by @krrishdholakia in #7538
- feat(router.py): support request prioritization for text completion c… by @krrishdholakia in #7540
- (Fix) - Docker build error with pyproject.toml by @ishaan-jaff in #7550
- (Fix) - Slack Alerting - don't send duplicate spend report when used on multi-instance settings by @ishaan-jaff in #7546
- add `cohere/command-r7b-12-2024` by @ishaan-jaff in #7553
Full Changelog: v1.56.9...v1.56.10
Docker Run LiteLLM Proxy
```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.10
```
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 230.0 | 268.3301603401397 | 6.21711064668469 | 0.0 | 1861 | 0 | 212.36320399998476 | 3556.7401620000396 |
Aggregated | Passed ✅ | 230.0 | 268.3301603401397 | 6.21711064668469 | 0.0 | 1861 | 0 | 212.36320399998476 | 3556.7401620000396 |
v1.56.9
What's Changed
- (fix) GCS bucket logger - apply truncate_standard_logging_payload_content to standard_logging_payload and ensure GCS flushes queue on fails by @ishaan-jaff in #7519
- (Fix) - Hashicorp secret manager - don't print hcorp secrets in debug logs by @ishaan-jaff in #7529
- [Bug-Fix]: None metadata not handled for `_PROXY_VirtualKeyModelMaxBudgetLimiter` hook by @ishaan-jaff in #7523
- Bump anthropic.claude-3-5-haiku-20241022-v1:0 to new limits by @Manouchehri in #7118
- Fix langfuse prompt management on proxy by @krrishdholakia in #7535
- (Feat) - Hashicorp secret manager, use TLS cert authentication by @ishaan-jaff in #7532
- Fix OTEL message redaction + Langfuse key leak in logs by @krrishdholakia in #7516
- feat: implement support for limit, order, before, and after parameters in get_assistants by @jeansouzak in #7537
- Add missing prefix for deepseek by @SmartManoj in #7508
- (fix) `aiohttp_openai/` route - get to 1K RPS on single instance by @ishaan-jaff in #7539
- Revert "feat: implement support for limit, order, before, and after parameters in get_assistants" by @krrishdholakia in #7542
- [Feature]: allow printing alert logs to the console by @ishaan-jaff in #7534
- (fix proxy perf) use `_read_request_body` instead of `ast.literal_eval` to get better performance by @ishaan-jaff in #7545 (see the sketch after this list)
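The last item above replaces `ast.literal_eval` with a plain parse of the raw request body. Roughly, the difference looks like this (a simplified sketch; LiteLLM's actual `_read_request_body` helper does more):

```python
import ast
import json

raw = b'{"model": "gpt-4o", "stream": false}'

# Old path: decode, then evaluate the string as a Python literal. This is
# slower (it builds and walks an AST) and chokes on JSON-isms such as
# `false`, `true`, and `null`, which are not Python literals.
try:
    parsed = ast.literal_eval(raw.decode("utf-8"))
except (ValueError, SyntaxError):
    parsed = None
print(parsed)  # None

# New path (roughly the #7545 approach): parse the bytes as JSON directly.
parsed = json.loads(raw)
print(parsed["model"])  # gpt-4o
```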
New Contributors
- @jeansouzak made their first contribution in #7537
- @SmartManoj made their first contribution in #7508
Full Changelog: v1.56.8...v1.56.9
Docker Run LiteLLM Proxy
```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.9
```
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 240.0 | 269.3983699320639 | 6.149252570882109 | 0.0 | 1840 | 0 | 211.95807399999467 | 2571.210135000001 |
Aggregated | Passed ✅ | 240.0 | 269.3983699320639 | 6.149252570882109 | 0.0 | 1840 | 0 | 211.95807399999467 | 2571.210135000001 |
v1.56.8-dev2
What's Changed
- (fix) GCS bucket logger - apply truncate_standard_logging_payload_content to standard_logging_payload and ensure GCS flushes queue on fails by @ishaan-jaff in #7519
- (Fix) - Hashicorp secret manager - don't print hcorp secrets in debug logs by @ishaan-jaff in #7529
- [Bug-Fix]: None metadata not handled for `_PROXY_VirtualKeyModelMaxBudgetLimiter` hook by @ishaan-jaff in #7523
- Bump anthropic.claude-3-5-haiku-20241022-v1:0 to new limits by @Manouchehri in #7118
- Fix langfuse prompt management on proxy by @krrishdholakia in #7535
- (Feat) - Hashicorp secret manager, use TLS cert authentication by @ishaan-jaff in #7532
- Fix OTEL message redaction + Langfuse key leak in logs by @krrishdholakia in #7516
- feat: implement support for limit, order, before, and after parameters in get_assistants by @jeansouzak in #7537
- Add missing prefix for deepseek by @SmartManoj in #7508
- (fix) `aiohttp_openai/` route - get to 1K RPS on single instance by @ishaan-jaff in #7539
New Contributors
- @jeansouzak made their first contribution in #7537
- @SmartManoj made their first contribution in #7508
Full Changelog: v1.56.8...v1.56.8-dev2
Docker Run LiteLLM Proxy
```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.8-dev2
```
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Failed ❌ | 260.0 | 302.69986428167584 | 6.1480113905567375 | 0.0 | 1839 | 0 | 230.89517400001114 | 2985.9468520000405 |
Aggregated | Failed ❌ | 260.0 | 302.69986428167584 | 6.1480113905567375 | 0.0 | 1839 | 0 | 230.89517400001114 | 2985.9468520000405 |
v1.56.3-stable
Full Changelog: v1.56.3...v1.56.3-stable
Docker Run LiteLLM Proxy
```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.3-stable
```
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 250.0 | 285.39144223780414 | 6.0307890213828905 | 0.0033430094353563695 | 1804 | 1 | 125.146089999987 | 3186.0641239999836 |
Aggregated | Passed ✅ | 250.0 | 285.39144223780414 | 6.0307890213828905 | 0.0033430094353563695 | 1804 | 1 | 125.146089999987 | 3186.0641239999836 |
v1.56.8-dev1
Full Changelog: v1.56.8...v1.56.8-dev1
Docker Run LiteLLM Proxy
```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.8-dev1
```
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 250.0 | 284.69056873304487 | 6.157751312397796 | 0.0 | 1843 | 0 | 211.56842700003153 | 2410.6343400000014 |
Aggregated | Passed ✅ | 250.0 | 284.69056873304487 | 6.157751312397796 | 0.0 | 1843 | 0 | 211.56842700003153 | 2410.6343400000014 |
v1.56.8
What's Changed
- Prometheus - custom metrics support + other improvements by @krrishdholakia in #7489
- (feat) POST `/fine_tuning/jobs` - support passing Vertex-specific hyper params by @ishaan-jaff in #7490
- (Feat) - LiteLLM Use `UsernamePasswordCredential` for Azure OpenAI by @ishaan-jaff in #7496
- (docs) Add docs on load testing benchmarks by @ishaan-jaff in #7499
- (Feat) Add support for reading secrets from Hashicorp vault by @ishaan-jaff in #7497
- Litellm dev 12 30 2024 p2 by @krrishdholakia in #7495
- Refactor Custom Metrics on Prometheus - allow setting k,v pairs on all metrics via config.yaml by @krrishdholakia in #7498
- (fix) GCS bucket logger - apply `truncate_standard_logging_payload_content` to `standard_logging_payload` and ensure GCS flushes queue on fails by @ishaan-jaff in #7500
- Litellm dev 01 01 2025 p3 by @krrishdholakia in #7503
- Litellm dev 01 02 2025 p2 by @krrishdholakia in #7512
- Revert "(fix) GCS bucket logger - apply
truncate_standard_logging_payload_content
tostandard_logging_payload
and ensure GCS flushes queue on fails" by @ishaan-jaff in #7515 - (perf) use
aiohttp
forcustom_openai
by @ishaan-jaff in #7514 - (perf) use threadpool executor - for sync logging integrations by @ishaan-jaff in #7509
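The threadpool item offloads blocking (synchronous) logging callbacks onto worker threads so they don't stall the event loop. A minimal sketch of that pattern, with a hypothetical callback standing in for a real integration:

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=4)

def sync_logging_callback(payload: dict) -> None:
    """Hypothetical blocking integration (e.g. a sync HTTP client)."""
    time.sleep(0.1)
    print("logged:", payload["model"])

async def handle_response(payload: dict) -> None:
    loop = asyncio.get_running_loop()
    # Run the blocking callback on the pool so the event loop stays free
    # to serve other requests (roughly the #7509 pattern).
    loop.run_in_executor(executor, sync_logging_callback, payload)

async def main() -> None:
    await asyncio.gather(*(handle_response({"model": f"m{i}"}) for i in range(3)))
    await asyncio.sleep(0.2)  # demo only: let the background work finish

asyncio.run(main())
```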
Full Changelog: v1.56.6...v1.56.8
Docker Run LiteLLM Proxy
```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.8
```
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 230.0 | 247.81903455189286 | 6.181081075067931 | 0.0 | 1850 | 0 | 191.81740900000932 | 2126.8676100000903 |
Aggregated | Passed ✅ | 230.0 | 247.81903455189286 | 6.181081075067931 | 0.0 | 1850 | 0 | 191.81740900000932 | 2126.8676100000903 |
v1.56.6.dev1
What's Changed
- Prometheus - custom metrics support + other improvements by @krrishdholakia in #7489
- (feat) POST `/fine_tuning/jobs` - support passing Vertex-specific hyper params by @ishaan-jaff in #7490
- (Feat) - LiteLLM Use `UsernamePasswordCredential` for Azure OpenAI by @ishaan-jaff in #7496
- (docs) Add docs on load testing benchmarks by @ishaan-jaff in #7499
- (Feat) Add support for reading secrets from Hashicorp vault by @ishaan-jaff in #7497
- Litellm dev 12 30 2024 p2 by @krrishdholakia in #7495
- Refactor Custom Metrics on Prometheus - allow setting k,v pairs on all metrics via config.yaml by @krrishdholakia in #7498
- (fix) GCS bucket logger - apply `truncate_standard_logging_payload_content` to `standard_logging_payload` and ensure GCS flushes queue on fails by @ishaan-jaff in #7500
- Litellm dev 01 01 2025 p3 by @krrishdholakia in #7503
Full Changelog: v1.56.6...v1.56.6.dev1
Docker Run LiteLLM Proxy
```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.6.dev1
```
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 230.0 | 255.89973974836954 | 6.151774848433542 | 0.003343355895887794 | 1840 | 1 | 94.9865199999067 | 1259.9916519999965 |
Aggregated | Passed ✅ | 230.0 | 255.89973974836954 | 6.151774848433542 | 0.003343355895887794 | 1840 | 1 | 94.9865199999067 | 1259.9916519999965 |
v1.56.6
What's Changed
- (fix) `v1/fine_tuning/jobs` with VertexAI by @ishaan-jaff in #7487
- (docs) Add docs on using Vertex with Fine Tuning APIs by @ishaan-jaff in #7491
- Fix team-based logging to langfuse + allow custom tokenizer on `/token_counter` endpoint by @krrishdholakia in #7493 (see the sketch after this list)
- Fix team admin create key flow on UI + other improvements by @krrishdholakia in #7488
- docs: added missing quote by @dsdanielko in #7481
- fix ollama embedding model response #7451 by @svenseeberg in #7473
- (Feat) - Add PagerDuty Alerting Integration by @ishaan-jaff in #7478
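For the `/token_counter` change above, a hedged sketch of how the endpoint can be exercised through the proxy (the exact route prefix and payload schema shown here are assumptions; check the LiteLLM docs for your version):

```python
import requests

resp = requests.post(
    "http://localhost:4000/utils/token_counter",  # route prefix may vary by version
    headers={"Authorization": "Bearer sk-1234"},  # placeholder proxy key
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "How many tokens is this?"}],
    },
    timeout=30,
)
print(resp.json())  # e.g. a token count for the given model's tokenizer
```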
New Contributors
- @dsdanielko made their first contribution in #7481
- @svenseeberg made their first contribution in #7473
Full Changelog: v1.56.5...v1.56.6
Docker Run LiteLLM Proxy
```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.6
```
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 250.0 | 287.411814751915 | 6.114731230663012 | 0.0 | 1830 | 0 | 228.32058200003758 | 3272.637599999939 |
Aggregated | Passed ✅ | 250.0 | 287.411814751915 | 6.114731230663012 | 0.0 | 1830 | 0 | 228.32058200003758 | 3272.637599999939 |
v1.56.5
What's Changed
- Refactor: move all bedrock invoke providers to BaseConfig by @krrishdholakia in #7463
- (fix) `litellm.amoderation` - support using `model=openai/omni-moderation-latest`, `model=omni-moderation-latest`, and `model=None` by @ishaan-jaff in #7475 (see the sketch after this list)
- [Bug Fix]: rerank restfulapi response parse still too strict by @ishaan-jaff in #7476
- Litellm dev 12 30 2024 p1 by @krrishdholakia in #7480
- HumanLoop integration for Prompt Management by @krrishdholakia in #7479
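A minimal sketch of the `litellm.amoderation` call from #7475 (assumes `OPENAI_API_KEY` is set in the environment; the response mirrors OpenAI's moderation schema):

```python
import asyncio
import litellm

async def main() -> None:
    # Per #7475, `model` may be "openai/omni-moderation-latest",
    # "omni-moderation-latest", or left as None.
    resp = await litellm.amoderation(
        input="I want to harm them.",
        model="openai/omni-moderation-latest",
    )
    print(resp.results[0].flagged)

asyncio.run(main())
```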
Full Changelog: v1.56.4...v1.56.5
Docker Run LiteLLM Proxy
```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.5
```
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 230.0 | 268.0630784626629 | 6.174316845767241 | 0.0 | 1848 | 0 | 212.08500100010497 | 3189.481879000027 |
Aggregated | Passed ✅ | 230.0 | 268.0630784626629 | 6.174316845767241 | 0.0 | 1848 | 0 | 212.08500100010497 | 3189.481879000027 |