
Why isn’t the state updated to the last executed node on interrupt? #2586

Open

minki-j opened this issue Nov 30, 2024 · 4 comments

Comments
minki-j commented Nov 30, 2024

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangGraph/LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangGraph/LangChain rather than my code.
  • I am sure this is better as an issue rather than a GitHub discussion, since this is a LangGraph bug and not a design question.

Example Code

from pydantic import BaseModel, Field
from langgraph.graph import START, END, StateGraph
from langgraph.checkpoint.memory import MemorySaver


class OverallState(BaseModel):
    input_message: str = Field(default="")
    middle_way: str = Field(default="")

g_sub = StateGraph(OverallState)
g_sub.add_edge(START, "node1")
g_sub.add_node("node1", lambda state: {"middle_way": "Hello " + state.input_message})
g_sub.add_edge("node1", "node2")
g_sub.add_node("node2", lambda state: {"middle_way": ""})
g_sub.add_edge("node2", END)

graph_sub = g_sub.compile(checkpointer=MemorySaver(), interrupt_after=["node1"])

g = StateGraph(OverallState)
g.add_edge(START, "graph_sub")
g.add_node("graph_sub", graph_sub)
g.add_edge("graph_sub", END)

graph = g.compile(checkpointer=MemorySaver())

config = {"configurable": {"thread_id": 2}}

output_middle_way = graph.invoke({"input_message": "Minki", "middle_way": ""}, config)
print("output_middle_way:", output_middle_way["middle_way"], ".")
# prints <output_middle_way:  .> whereas I expected <output_middle_way: Hello Minki .>

Error Message and Stack Trace (if applicable)

No response

Description

I expected the interrupt to return the state updated up to the point of the last executed node. However, based on the example code I provided, this doesn't seem to be the case when the interrupted node is inside a subgraph. Could you help clarify this behavior?

If I’ve misunderstood or misused the feature, please let me know. I’d appreciate any guidance on the correct usage.

Thank you for your support and for building LangGraph!

System Info

langgraph==0.2.53
langgraph-api-inmem==0.0.4
langgraph-checkpoint==2.0.6
langgraph-checkpoint-sqlite==1.0.4
langgraph-cli==0.1.55
langgraph-sdk==0.1.36

minki-j closed this as completed Dec 1, 2024
minki-j reopened this Dec 1, 2024

minki-j commented Dec 1, 2024

Ahh, I found the solution. I had to access the state of the subgraph like this:

state = graph.get_state(config, subgraphs=True)  # include nested subgraph snapshots
subgraph_state = state.tasks[0].state  # snapshot of the interrupted subgraph

https://langchain-ai.github.io/langgraph/how-tos/human_in_the_loop/breakpoints/#simple-usage


minki-j commented Dec 1, 2024

This method feels overly complex, especially when retrieving the state of multi-layer nested subgraphs. I had to write recursive code to determine the deepest state, which seems cumbersome.

Do you have plans to simplify this process? For instance, it would be extremely helpful to have a straightforward way to query the state of the subgraph where the last interruption occurred.


gbaian10 commented Dec 1, 2024

I have the same question.
Has there been any consideration of providing a prebuilt feature to quickly find the deepest node?
This feature would be very useful as long as there aren't multiple nodes in an interrupted state simultaneously.

I have also implemented a recursive function myself to solve this problem.


minki-j commented Dec 4, 2024

@gbaian10 good to see that you also faced the same problem. Here is my implementation. Did you do it similarly? Any suggestions for a better way to do that?

def get_deepest_state(state):
    # Base case
    if len(state.tasks) == 0 or not state.tasks[0].state:
        return state
    
    # Recursive case
    next_level_state = state.tasks[0].state
    return get_deepest_state(next_level_state)

To see how it fits in the system, check this snippet of code: https://github.com/minki-j/ai-mock-coding-interview-agent/blob/main/backend/main.py#L209-L260
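For what it's worth, the same traversal can also be written iteratively, which sidesteps Python's recursion limit on very deeply nested graphs. This is only a sketch: `FakeTask` and `FakeSnapshot` are hypothetical stand-ins that mimic the `tasks` and `state` attributes used above, so the walk can be demonstrated without a running graph; they are not LangGraph classes.

```python
from dataclasses import dataclass, field


# Hypothetical stand-ins mimicking LangGraph's snapshot/task shape:
# a snapshot holds a list of tasks, and a task may carry a nested snapshot.
@dataclass
class FakeTask:
    state: "FakeSnapshot | None" = None


@dataclass
class FakeSnapshot:
    tasks: list = field(default_factory=list)


def get_deepest_state(state):
    # Follow the first pending task's nested snapshot until none remains.
    while state.tasks and state.tasks[0].state:
        state = state.tasks[0].state
    return state


# Two levels of nesting: the leaf snapshot has no further nested tasks.
leaf = FakeSnapshot()
root = FakeSnapshot(tasks=[FakeTask(state=FakeSnapshot(tasks=[FakeTask(state=leaf)]))])
assert get_deepest_state(root) is leaf
```

Like the recursive version, this only follows `tasks[0]`, so it assumes a single interrupted branch at each level.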
