Add support for parallel_tool_calls option when configuring Langchain::Assistant #813

Open · 2 of 5 tasks
andreibondarev opened this issue Oct 4, 2024 · 4 comments · Fixed by #827
Labels: assistants (Related to Langchain::Assistant class), enhancement (New feature or request)

andreibondarev (Collaborator) commented Oct 4, 2024

Is your feature request related to a problem? Please describe.
We'd like to enable better control of tool calling when using Langchain::Assistant. Some of the supported LLMs (Anthropic and OpenAI) let you control whether parallel tool calls ("multiple tool calls") are made. In some use cases the Assistant must call tools sequentially, hence we should be able to toggle that option on the Assistant instance.

Describe the solution you'd like
Similar to tool_choice, enable the developer to toggle:

```ruby
assistant = Langchain::Assistant.new(parallel_tool_calls: true/false, ...)
assistant.parallel_tool_calls = true/false
```

Tasks (a rough sketch of the adapter-side plumbing follows this list):

- [x] Langchain::Assistant::LLM::Adapters::Anthropic support
- [x] Langchain::Assistant::LLM::Adapters::OpenAI support
- [ ] Langchain::Assistant::LLM::Adapters::GoogleGemini support (not currently supported)
- [ ] Langchain::Assistant::LLM::Adapters::MistralAI support (not currently supported)
- [ ] Langchain::Assistant::LLM::Adapters::Ollama support (not currently supported)
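A minimal sketch of what each adapter has to translate. The module and method names below are hypothetical, not the library's actual adapter API; what is real is that OpenAI's chat completions API accepts a top-level `parallel_tool_calls` boolean, while Anthropic's Messages API expresses the same control inside `tool_choice` as `disable_parallel_tool_use`, with inverted polarity:

```ruby
# Hypothetical sketch only -- class and method names are illustrative,
# not the library's actual adapter API.
module Sketch
  class OpenAIAdapter
    # OpenAI accepts a top-level boolean on chat completion requests.
    def apply_parallel_tool_calls(params, enabled)
      params.merge(parallel_tool_calls: enabled)
    end
  end

  class AnthropicAdapter
    # Anthropic expresses the same control inside tool_choice,
    # with inverted polarity: disable_parallel_tool_use.
    def apply_parallel_tool_calls(params, enabled)
      tool_choice = params.fetch(:tool_choice, { type: "auto" })
      params.merge(tool_choice: tool_choice.merge(disable_parallel_tool_use: !enabled))
    end
  end
end
```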
sergiobayona (Contributor) commented:

It seems Google Gemini does support parallel function calling, see:

https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling#supported_models
https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling#parallel-samples

andreibondarev (Collaborator, Author) commented:

> It seems Google Gemini does support parallel function calling

It does, but there's no way to configure whether functions can be called in parallel or not.

ms-ati commented Jan 10, 2025:

If we pass parallel_tool_calls = false, could we stop the debug output each time telling us that the adapter doesn't support parallel tool calls?

andreibondarev (Collaborator, Author) commented:

> If we pass parallel_tool_calls = false, could we stop the debug output each time telling us that the adapter doesn't support parallel tool calls?

Have you tried changing the logger level? Something like Langchain.logger.level = Logger::ERROR should work.
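For example, a minimal sketch of that suggestion (Logger comes from Ruby's standard library; the Ollama URL is a placeholder):

```ruby
require "langchain"
require "logger"

# Suppress debug/warn chatter, such as the message that an adapter
# doesn't support parallel tool calls, by raising the log level.
Langchain.logger.level = Logger::ERROR

# Example setup: Ollama is one of the adapters listed above as not
# supporting the option, so it would otherwise trigger the message.
assistant = Langchain::Assistant.new(
  llm: Langchain::LLM::Ollama.new(url: "http://localhost:11434"),
  parallel_tool_calls: false
)
```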
