What happened?

When using acompletion and logging to Langfuse, duplicate traces are created. Here's the code I'm running:

```python
import os
import asyncio

os.environ["OPENAI_API_KEY"] = "openai_api_key"
os.environ["ANTHROPIC_API_KEY"] = "anthropic_api_key"
os.environ["LITELLM_LOG"] = "DEBUG"
os.environ["LANGFUSE_DEBUG"] = "True"
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk"
os.environ["LANGFUSE_SECRET_KEY"] = "sk"

import litellm

litellm.set_verbose = True
litellm.success_callback = ["langfuse"]
litellm.failure_callback = ["langfuse"]  # logs errors to langfuse


async def main():
    models = ["gpt-4o-mini", "claude-3-5-haiku-20241022"]
    messages = [
        {"role": "user", "content": "Hello, how are you?"},
    ]
    resp = await litellm.acompletion(
        model=models[0],
        messages=messages,
        temperature=0.0,
        fallbacks=models[1:],
        metadata={
            "generation_name": "test-gen",
            "project": "litellm-test",
        },
    )
    return resp


if __name__ == "__main__":
    asyncio.run(main())
```

And here are the traces in Langfuse: you can see the completion one is a single trace, while the acompletion ones are duplicates.

Relevant log output

No response

Are you a ML Ops Team?

Yes

What LiteLLM version are you on ?

v1.56.4

Twitter / LinkedIn details

No response
Able to repro with the script. Investigating.
Seems to be caused by the fallbacks: param. Investigating.
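The duplicate traces are consistent with the fallback wrapper and the inner completion call each firing the success callbacks once for the same response. A library-agnostic sketch of that failure mode (all names here are hypothetical illustrations, not LiteLLM internals):

```python
# Hypothetical sketch of double callback firing -- not LiteLLM internals.
success_callbacks = []

def run_callbacks(response):
    # Fire every registered success callback for this response.
    for cb in success_callbacks:
        cb(response)

def completion(model):
    response = f"response from {model}"
    run_callbacks(response)  # the inner completion call logs once
    return response

def completion_with_fallbacks(models):
    for model in models:
        try:
            response = completion(model)
            run_callbacks(response)  # bug: the wrapper logs the same response again
            return response
        except Exception:
            continue  # try the next model in the fallback list

calls = []
success_callbacks.append(calls.append)
completion_with_fallbacks(["gpt-4o-mini", "claude-3-5-haiku-20241022"])
print(len(calls))  # 2 -- one logging trace per callback firing, hence duplicates
```

With only one model and no fallback wrapper, the callbacks fire once per request; the wrapper's extra firing is what shows up as a second trace.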
fix(utils.py): prevent double logging when passing 'fallbacks=' to .completion() (94fa390). Fixes #7477.
Fixed - 94fa390
Will be live in today's release.
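Until you can upgrade to a release with the fix, one workaround is to drop the fallbacks= argument and loop over the models yourself, so only one acompletion call (and one callback firing) happens per request. A sketch, with a stub standing in for litellm.acompletion (the helper name and the stub are illustrative, not part of the library):

```python
import asyncio

async def acompletion_with_manual_fallbacks(acompletion, models, messages, **kwargs):
    """Try each model in order; return the first successful response."""
    last_err = None
    for model in models:
        try:
            return await acompletion(model=model, messages=messages, **kwargs)
        except Exception as err:
            last_err = err  # remember the failure and try the next model
    raise last_err

# Demo with a stub in place of litellm.acompletion:
async def fake_acompletion(model, messages, **kwargs):
    if model == "gpt-4o-mini":
        raise RuntimeError("primary model unavailable")
    return {"model": model, "content": "ok"}

result = asyncio.run(
    acompletion_with_manual_fallbacks(
        fake_acompletion,
        ["gpt-4o-mini", "claude-3-5-haiku-20241022"],
        [{"role": "user", "content": "Hello"}],
    )
)
print(result["model"])  # claude-3-5-haiku-20241022
```

Because each attempt is a plain acompletion call with no fallbacks= param, the success callbacks only fire for the attempt that actually succeeds.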
@N13T Can we do a 10min call sometime this/next week?
Want to learn how you're using the LiteLLM SDK, so we can improve it for your team.
Attaching my calendly, for your convenience - https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
krrishdholakia