
graph stream with stream_mode=updates misses tool messages when using tools that return Command #2831

Closed
4 tasks done
rayshen92 opened this issue Dec 19, 2024 · 3 comments · Fixed by #2903

Comments

@rayshen92

Checked other resources

  • This is a bug, not a usage question. For questions, please use GitHub Discussions.
  • I added a clear and detailed title that summarizes the issue.
  • I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
  • I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.

Example Code

import os

from langchain_core.messages import ToolMessage
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langchain_core.tools.base import InjectedToolCallId
from langgraph.types import Command

from typing_extensions import Annotated


@tool
def add(
    a: int,
    b: int,
    tool_call_id: Annotated[str, InjectedToolCallId],
    config: RunnableConfig,
):
    """add two numbers"""

    result = a + b

    return Command(
        update={
            "messages": [
                ToolMessage(f"add result: {result}", tool_call_id=tool_call_id)
            ],
        }
    )


@tool
def sub(
    a: int,
    b: int,
    tool_call_id: Annotated[str, InjectedToolCallId],
    config: RunnableConfig,
):
    """subtract two numbers"""

    result = a - b

    return f"sub result: {result}"


from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

model = ChatOpenAI(
    model="gpt-4o",
)

from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()

tools = [add, sub]
agent = create_react_agent(model, tools=tools, checkpointer=memory)

config = {
    "configurable": {"thread_id": "1"},
}

# use add tool
for chunk in agent.stream(
    input={
        "messages": [
            (
                "user",
                "add(1,1), add(1,2), add(1,3) at once",
            ),
        ]
    },
    config=config,
    stream_mode="updates",
):
    for node, values in chunk.items():
        print(f"Receiving update from node: '{node}'")
        print(values)
        print("\n\n")

# use sub tool
for chunk in agent.stream(
    input={
        "messages": [
            (
                "user",
                "sub(1,1), sub(1,2), sub(1,3) at once",
            ),
        ]
    },
    config=config,
    stream_mode="updates",
):
    for node, values in chunk.items():
        print(f"Receiving update from node: '{node}'")
        print(values)
        print("\n\n")

print("======================message history=================\n\n")
cur_state = agent.get_state(config)
messages = cur_state.values.get("messages", [])
for message in messages:
    message.pretty_print()

Error Message and Stack Trace (if applicable)

Receiving update from node: 'agent'
{'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_vvaLfOBoaWoxaCtkF0kKAWAJ', 'function': {'arguments': '{"a": 1, "b": 1}', 'name': 'add'}, 'type': 'function'}, {'id': 'call_aYTuKfiWaF8ldAR4cfqU5VXj', 'function': {'arguments': '{"a": 1, "b": 2}', 'name': 'add'}, 'type': 'function'}, {'id': 'call_qO6RfVysJhQ8gP6DaSNpSg6v', 'function': {'arguments': '{"a": 1, "b": 3}', 'name': 'add'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 67, 'prompt_tokens': 85, 'total_tokens': 152, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_f3927aa00d', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-63311600-238f-4c51-a4cf-70617c0e9da3-0', tool_calls=[{'name': 'add', 'args': {'a': 1, 'b': 1}, 'id': 'call_vvaLfOBoaWoxaCtkF0kKAWAJ', 'type': 'tool_call'}, {'name': 'add', 'args': {'a': 1, 'b': 2}, 'id': 'call_aYTuKfiWaF8ldAR4cfqU5VXj', 'type': 'tool_call'}, {'name': 'add', 'args': {'a': 1, 'b': 3}, 'id': 'call_qO6RfVysJhQ8gP6DaSNpSg6v', 'type': 'tool_call'}], usage_metadata={'input_tokens': 85, 'output_tokens': 67, 'total_tokens': 152, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})]}



Receiving update from node: 'tools'
{'messages': [ToolMessage(content='add result: 4', name='add', id='11837479-f7b7-4f4d-b245-8a32e5d02776', tool_call_id='call_qO6RfVysJhQ8gP6DaSNpSg6v')]}



Receiving update from node: 'agent'
{'messages': [AIMessage(content='The results of the additions are as follows:\n- \\(1 + 1 = 2\\)\n- \\(1 + 2 = 3\\)\n- \\(1 + 3 = 4\\)', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 43, 'prompt_tokens': 183, 'total_tokens': 226, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_f3927aa00d', 'finish_reason': 'stop', 'logprobs': None}, id='run-8b12c0d2-9b5a-4a6a-8d5f-56cc773c83a4-0', usage_metadata={'input_tokens': 183, 'output_tokens': 43, 'total_tokens': 226, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})]}



Receiving update from node: 'agent'
{'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_vMYScsWHPGVSLMD9hS8fDA5Q', 'function': {'arguments': '{"a": 1, "b": 1}', 'name': 'sub'}, 'type': 'function'}, {'id': 'call_nxtqJoafmwh6qtFO9f5lhVBt', 'function': {'arguments': '{"a": 1, "b": 2}', 'name': 'sub'}, 'type': 'function'}, {'id': 'call_vXrXRc4IClKzKZ3dRmABGISU', 'function': {'arguments': '{"a": 1, "b": 3}', 'name': 'sub'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 67, 'prompt_tokens': 253, 'total_tokens': 320, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_04751d0b65', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-bb8e32b7-eea5-4d59-a7a1-69973a99a1be-0', tool_calls=[{'name': 'sub', 'args': {'a': 1, 'b': 1}, 'id': 'call_vMYScsWHPGVSLMD9hS8fDA5Q', 'type': 'tool_call'}, {'name': 'sub', 'args': {'a': 1, 'b': 2}, 'id': 'call_nxtqJoafmwh6qtFO9f5lhVBt', 'type': 'tool_call'}, {'name': 'sub', 'args': {'a': 1, 'b': 3}, 'id': 'call_vXrXRc4IClKzKZ3dRmABGISU', 'type': 'tool_call'}], usage_metadata={'input_tokens': 253, 'output_tokens': 67, 'total_tokens': 320, 'input_token_details': {}, 'output_token_details': {}})]}



Receiving update from node: 'tools'
{'messages': [ToolMessage(content='sub result: 2', name='sub', id='f7dfa2fb-d578-46b6-a39d-4523ac1abaed', tool_call_id='call_vMYScsWHPGVSLMD9hS8fDA5Q'), ToolMessage(content='sub result: 3', name='sub', id='c3be1d0d-7575-4ec0-be00-4e181964241a', tool_call_id='call_nxtqJoafmwh6qtFO9f5lhVBt'), ToolMessage(content='sub result: 4', name='sub', id='8d5fad1f-9fbb-4acf-be78-a766de53d322', tool_call_id='call_vXrXRc4IClKzKZ3dRmABGISU')]}



Receiving update from node: 'agent'
{'messages': [AIMessage(content='The results of the subtractions are as follows:\n- \\(1 - 1 = 0\\)\n- \\(1 - 2 = -1\\)\n- \\(1 - 3 = -2\\)', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 44, 'prompt_tokens': 351, 'total_tokens': 395, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_f3927aa00d', 'finish_reason': 'stop', 'logprobs': None}, id='run-f94e074c-18ce-4513-9f1a-ff19be9c2532-0', usage_metadata={'input_tokens': 351, 'output_tokens': 44, 'total_tokens': 395, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})]}



======================message history=================


================================ Human Message =================================

add(1,1), add(1,2), add(1,3) at once
================================== Ai Message ==================================
Tool Calls:
  add (call_vvaLfOBoaWoxaCtkF0kKAWAJ)
 Call ID: call_vvaLfOBoaWoxaCtkF0kKAWAJ
  Args:
    a: 1
    b: 1
  add (call_aYTuKfiWaF8ldAR4cfqU5VXj)
 Call ID: call_aYTuKfiWaF8ldAR4cfqU5VXj
  Args:
    a: 1
    b: 2
  add (call_qO6RfVysJhQ8gP6DaSNpSg6v)
 Call ID: call_qO6RfVysJhQ8gP6DaSNpSg6v
  Args:
    a: 1
    b: 3
================================= Tool Message =================================
Name: add

add result: 2
================================= Tool Message =================================
Name: add

add result: 3
================================= Tool Message =================================
Name: add

add result: 4
================================== Ai Message ==================================

The results of the additions are as follows:
- \(1 + 1 = 2\)
- \(1 + 2 = 3\)
- \(1 + 3 = 4\)
================================ Human Message =================================

sub(1,1), sub(1,2), sub(1,3) at once
================================== Ai Message ==================================
Tool Calls:
  sub (call_vMYScsWHPGVSLMD9hS8fDA5Q)
 Call ID: call_vMYScsWHPGVSLMD9hS8fDA5Q
  Args:
    a: 1
    b: 1
  sub (call_nxtqJoafmwh6qtFO9f5lhVBt)
 Call ID: call_nxtqJoafmwh6qtFO9f5lhVBt
  Args:
    a: 1
    b: 2
  sub (call_vXrXRc4IClKzKZ3dRmABGISU)
 Call ID: call_vXrXRc4IClKzKZ3dRmABGISU
  Args:
    a: 1
    b: 3
================================= Tool Message =================================
Name: sub

sub result: 2
================================= Tool Message =================================
Name: sub

sub result: 3
================================= Tool Message =================================
Name: sub

sub result: 4
================================== Ai Message ==================================

The results of the subtractions are as follows:
- \(1 - 1 = 0\)
- \(1 - 2 = -1\)
- \(1 - 3 = -2\)

Description

I'm using Command in tools to update the graph state from within a tool. When I call stream with stream_mode="updates" and the LLM calls multiple tools at once, only the tool message from the last tool call is streamed out; the others never appear in the stream.

If I define the tool without returning a Command, it works as expected.
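As a stopgap until this is fixed, one option is to stream with stream_mode="values" (which emits the full state after each step) and diff consecutive snapshots to recover every new message, including all ToolMessages. This is a plain-Python sketch of that diffing idea, assuming the "messages" channel is append-only, as it is for a create_react_agent graph:

```python
# Hedged workaround sketch (no langgraph required): with an append-only
# message list, the new messages in each snapshot are simply the tail
# beyond the previous snapshot's length.
def new_messages(prev, curr):
    # prev and curr are the "messages" lists from consecutive state snapshots
    return curr[len(prev):]

prev = ["human", "ai(tool_calls)"]
curr = ["human", "ai(tool_calls)", "tool:2", "tool:3", "tool:4"]
print(new_messages(prev, curr))  # -> ['tool:2', 'tool:3', 'tool:4']
```

In the agent above you would keep the previous chunk's "messages" list between iterations of the stream loop and apply this diff to each new chunk.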

System Info

System Information

OS: Linux
OS Version: #1 SMP Sat Oct 7 17:52:50 CST 2023
Python Version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0]

Package Information

langchain_core: 0.3.25
langsmith: 0.1.140
langchain_openai: 0.2.6
langgraph_sdk: 0.1.47

Optional packages not installed

langserve

Other Dependencies

httpx: 0.27.0
jsonpatch: 1.33
openai: 1.54.3
orjson: 3.10.3
packaging: 24.0
pydantic: 2.7.4
PyYAML: 6.0.1
requests: 2.32.3
requests-toolbelt: 1.0.0
tenacity: 9.0.0
tiktoken: 0.8.0
typing-extensions: 4.12.2

@rayshen92 (Author)

The add tool was called 3 times, but only the tool message from the last call was streamed out.

@rayshen92 (Author)

I found that the bug comes from this line:

grouped[node] = value[0]
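To illustrate why that line loses updates, here is a hypothetical plain-Python sketch (not the actual langgraph source): if the per-node writes for a superstep arrive as lists, keeping only one element per node drops the remaining updates before they ever reach the "updates" stream.

```python
# Hypothetical model of grouping per-node writes from one superstep.
def group_buggy(writes):
    grouped = {}
    for node, value in writes.items():
        grouped[node] = value[0]  # discards value[1:], losing tool messages
    return grouped

def group_fixed(writes):
    # Keep every update so multiple Command returns from one node all stream.
    return {node: list(value) for node, value in writes.items()}

# Three parallel Command-returning tool calls produce three updates
# under the same "tools" node:
writes = {"tools": [
    {"messages": ["add result: 2"]},
    {"messages": ["add result: 3"]},
    {"messages": ["add result: 4"]},
]}
```

With group_buggy, the "tools" chunk carries a single tool message; with group_fixed, all three survive.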

@vbarda (Collaborator) commented Dec 19, 2024

Thanks for reporting -- this is indeed a broader issue with nodes returning lists of updates, e.g.:

from langgraph.graph import StateGraph, START
from langgraph.types import Command
from typing import TypedDict, Annotated
import operator

class State(TypedDict):
    foo: Annotated[str, operator.add]

def node_a(state):
    return [Command(update={"foo": "a1"}), Command(update={"foo": "a2"})]

def node_b(state):
    return {"foo": "b"}

graph = StateGraph(State).add_sequence([node_a, node_b]).add_edge(START, "node_a").compile()

# graph.invoke({"foo": ""}) # -> returns full expected output

# only streams the last update from node_a
for chunk in graph.stream({"foo": ""}, stream_mode="updates"):
    print(chunk)
    print("\n\n")

We'll fix this up
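This also explains why graph.invoke() returns the full expected output while stream_mode="updates" loses chunks: the channel reducer (operator.add for `foo` in the example above) folds every update into the state, independently of which updates get surfaced to the stream consumer. A plain-Python sketch of that folding, under the assumption of a single additive channel:

```python
import operator

# Fold a sequence of per-channel updates into the state with a reducer,
# the way an Annotated[str, operator.add] channel accumulates values.
def apply_updates(state, updates, reducer=operator.add):
    new_state = dict(state)
    for update in updates:
        for key, value in update.items():
            new_state[key] = reducer(new_state[key], value)
    return new_state

final = apply_updates({"foo": ""}, [{"foo": "a1"}, {"foo": "a2"}, {"foo": "b"}])
print(final)  # -> {'foo': 'a1a2b'}
```

So both Command updates from node_a reach the state either way; only the streamed "updates" view drops one of them.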
