-
Can you share a conversation where this happens to you? Use the export feature from the top right; you can redact the content if you wish.
-
Hi, I'm consistently getting this error ONLY when using Agents.
Interestingly, it works perfectly well with Azure Assistants (though I need the features that Agents offer):
The librechat-api log indicates the following:
```
2024-12-26 09:01:49 error: [handleAbortError] AI response error; aborting request: 400 This model's maximum context length is 128000 tokens. However, your messages resulted in 202177 tokens (202122 in the messages, 55 in the functions). Please reduce the length of the messages or functions.
```
The token count may vary, the largest number I got so far is around 450k.
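For reference, the overflow reported in the 400 error is simple arithmetic; a minimal sketch, with the token counts taken directly from the log line above and the 128k context window that gpt-4o documents:

```python
# Reproduce the budget overflow from the 400 error above.
# MODEL_CONTEXT is gpt-4o's 128k context window; the message and
# function token counts come straight from the librechat-api log line.
MODEL_CONTEXT = 128_000
message_tokens = 202_122   # "202122 in the messages"
function_tokens = 55       # "55 in the functions"

total = message_tokens + function_tokens
overflow = total - MODEL_CONTEXT
print(f"request: {total} tokens, over budget by {overflow}")
# → request: 202177 tokens, over budget by 74177
```

So whatever the Agent is injecting into the prompt (presumably the Azure AI Search results) is roughly 60% larger than the entire context window on its own, which is why reducing the model's max context setting alone wouldn't help.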
I'm using LibreChat v0.76, latest commit as of today (12/26), to interact with an Azure AI Search Index using LibreChat's Agents (it's not like it worked at some point; it never did).
AzureOpenAI provider, gpt-4o model (the error also happens with gpt-35-turbo-16k).
My librechat.yml file looks like this:
Any guidance is welcome!