If you’re exploring LangChain, agents, or building AI apps that can interactively involve humans when needed, this tutorial is your roadmap. By the end of this guide, you’ll be able to build a modular, reusable Human-in-the-Loop agent with dynamic control flow using LangGraph.
What You’ll Learn
- What is a Human-in-the-Loop agent?
- How to build agent flows using LangGraph (state, nodes, edges)
- How tools are defined and invoked by the agent
- How to pause and ask the user for input
- How to resume the graph after human feedback
- How to reuse and scale this template for any AI agent
Tools & Libraries Used
# LangChain & LangGraph Core
from langchain_core.messages import HumanMessage, AIMessage, ToolMessage
from langchain_core.runnables import Runnable
from langchain_core.tools import tool
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages      # reducer that appends new messages to state
from langgraph.checkpoint.memory import MemorySaver   # in-memory checkpointer for pausing/resuming

# Typing and Data Management
from typing import TypedDict, Annotated, List
import os
import uuid
from dotenv import load_dotenv

load_dotenv()  # loads OPENAI_API_KEY (and other settings) from a local .env file
# os.environ["OPENAI_API_KEY"] = "your-api-key-here"  # or set the key directly here
What is a Human-in-the-Loop Agent?
Human-in-the-Loop is a design pattern where an AI agent can pause and wait for user input before continuing, especially when:
- It lacks sufficient information
- Decisions are risky or subjective
- A tool fails or gives ambiguous results
Why LangGraph?
LangGraph allows you to create AI agents with custom, stateful, and interruptible workflows. Think of it like a flowchart for AI reasoning, where each block (node) does something, and edges determine the path.
PART 1 Overview
What You Can Do
- Define a shared state with TypedDict to store messages and next steps.
- Create a dummy tool (e.g., search function) using @tool.
- Write node functions for the graph: agent_node (decides the next step), tool_node (runs the tool), and ask_user (pauses for the user).
- Use conditions in should_continue to check whether to continue or stop.
- Build a LangGraph using StateGraph(), with defined edges and entry/exit points.
- Test the graph by running it step by step and checking outputs.
agent_node
→ if tool_call → tool_node
→ if asks user → ask_user
→ if done → END
Purpose of This Flow
- Simulates how a real-world agent might try first, then ask for help.
- Helps beginners visualize a controlled agent flow.
- Establishes the backbone for the extensions in Parts 2 and 3.
PART 2 Overview
This part focuses on handling user responses and tracking tool calls across multiple turns of interaction.
What You Can Do
- Simulate a multi-step interaction where the agent calls a tool but doesn’t know how to proceed.
- Use ToolMessage to track tool outputs and re-feed it into the conversation.
- Introduce user interruptions, e.g., when the agent asks for more context.
1. Agent calls a tool → tool_node
2. A ToolMessage gets created → the response from the tool
3. Agent asks the user → ask_user
4. User responds → a ToolMessage carrying the user input
5. Agent continues → calls another tool or completes
Key Concept: Tool Call ID
- Each tool call has a tool_call_id that maps the output back to the request (see the example below).
- It is needed to maintain continuity in the loop.
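For illustration, here is how a tool call and its reply line up (the id string here is just a placeholder):

# The agent's message carries a tool call with an id...
ai_msg = AIMessage(
    content="Searching...",
    tool_calls=[{"name": "search_tool", "args": {"query": "covid update"}, "id": "call_001"}],
)
# ...and the reply (from the tool, or from the user) references the same id
reply = ToolMessage(content="For Singapore", tool_call_id="call_001")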
Purpose
- Make the graph loop-friendly and reactive.
- Add flexibility for real-time interruptions and context updates.
PART 3 Overview
This is a realistic extension that simulates how a LangGraph can handle real-time human responses during runtime, just like in production systems.
Key Concepts
- Using graph.update_state(…) to inject new messages dynamically.
- Treating the user’s response as a ToolMessage tied to a tool_call_id.
What You Can Do
- Execute a LangGraph.
- When a human is required (agent asks a question), pause the flow.
- Capture the last tool_call ID.
- Inject user’s answer back into the graph using graph.update_state(…).
- The agent resumes processing.
1. Run graph until it pauses (asks user)
2. Capture tool_call_id
3. Construct ToolMessage with user response
4. Update graph state using update_state()
5. Continue execution (see the sketch below)
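A minimal sketch of steps 1-5, assuming the graph is compiled with a checkpointer and run under a thread_id (both are shown in the step-by-step implementation below):

config = {"configurable": {"thread_id": "session-1"}}          # one thread per conversation
graph.invoke(inputs, config)                                   # 1. run until the graph stops to ask the user
snapshot = graph.get_state(config)                             # 2. read the saved state
tool_call_id = snapshot.values["messages"][-1].tool_calls[0]["id"]
answer = ToolMessage(content="Singapore", tool_call_id=tool_call_id)      # 3. wrap the user's response
graph.update_state(config, {"messages": [answer]}, as_node="ask_user")    # 4. inject it into the state
# graph.invoke(None, config)                                   # 5. continue execution if more work remains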
Purpose
- Enables true human-agent interactivity.
- Supports applications such as chatbots that need verification, knowledge workers who need decision input, and agents that require real-time human augmentation
Step-by-Step Implementation
Step 1: Define State
This state tracks the conversation messages and controls what to do next. The messages field uses LangGraph’s add_messages reducer so new messages are appended rather than overwriting the list, which matters later when we inject the user’s reply with update_state.
class AgentState(TypedDict):
    messages: Annotated[List, add_messages]  # conversation memory; new messages are appended
    next: str  # "continue", "tool", "ask_user", or "end"
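For example, the initial input passed into the graph can look like this:

initial_state: AgentState = {
    "messages": [HumanMessage(content="Search latest COVID update")],
    "next": "continue",
}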
Step 2: Define a Tool
Let’s define a mock tool. You can plug in your own APIs later. The @tool decorator uses the function’s docstring as the tool description, so include one.
@tool
def search_tool(query: str) -> str:
    """Mock search tool; replace with a real API call later."""
    return f"Search result for: {query}"
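You can call the tool directly to check it works before wiring it into the graph:

print(search_tool.invoke({"query": "latest COVID update"}))
# -> Search result for: latest COVID update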
Step 3: Create Node Functions
Each node in the graph is a function. Here are three main types:
Agent Node
Invokes the LLM and decides what to do next (call tool, ask user, or finish).
def agent_node(state: AgentState) -> AgentState:
    messages = state["messages"]
    # Call your LLM here (mocked below). The mock asks the user a question and
    # attaches a tool_call id so the user's reply can be mapped back to it later.
    response = AIMessage(
        content="What location do you want to search?",
        tool_calls=[{"name": "search_tool", "args": {"query": ""}, "id": str(uuid.uuid4())}],
    )
    return {"messages": messages + [response], "next": "ask_user"}
Tool Node
Handles tool execution.
def tool_node(state: AgentState) -> AgentState:
    messages = state["messages"]
    # Look up the pending tool call and run the tool with its arguments
    last_tool_call = messages[-1].tool_calls[0]
    output = search_tool.invoke({"query": last_tool_call["args"]["query"]})
    tool_msg = ToolMessage(tool_call_id=last_tool_call["id"], content=output)
    return {"messages": messages + [tool_msg], "next": "continue"}
Ask User Node
Pauses for user input.
def ask_user(state: AgentState) -> AgentState:
    print("Agent is asking user for more info...")
    return state
Step 4: Define Edges with Conditions
This router reads the next field and returns the name of the node to visit (or END).

def should_continue(state: AgentState):
    if state["next"] == "ask_user":
        return "ask_user"
    elif state["next"] == "tool":
        return "tool"
    elif state["next"] == "continue":
        return "agent"
    return END
Step 5: Build and Compile the Graph

graph_builder = StateGraph(AgentState)
graph_builder.add_node("agent", agent_node)
graph_builder.add_node("tool", tool_node)
graph_builder.add_node("ask_user", ask_user)

graph_builder.set_entry_point("agent")
graph_builder.add_conditional_edges("agent", should_continue)
graph_builder.add_edge("ask_user", END)
graph_builder.add_edge("tool", "agent")

# Compile with a checkpointer so Step 6 can read and update the saved state
graph = graph_builder.compile(checkpointer=MemorySaver())
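Optionally, print a Mermaid diagram of the compiled graph as a quick sanity check of the wiring:

print(graph.get_graph().draw_mermaid())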
Step 6: Execute and Interact
Run the graph, then simulate the user's reply by injecting it into the saved state.

# Each conversation is tracked by a thread_id in the checkpointer
config = {"configurable": {"thread_id": "demo-thread"}}
inputs = {"messages": [HumanMessage(content="Search latest COVID update")], "next": "continue"}
graph.invoke(inputs, config)

# Read back the saved state and capture the pending tool_call id
snapshot = graph.get_state(config)
tool_call_id = snapshot.values["messages"][-1].tool_calls[0]["id"]

# Simulate the user's response and inject it as if the ask_user node produced it
user_response = ToolMessage(content="For Singapore", tool_call_id=tool_call_id)
graph.update_state(config, {"messages": [user_response]}, as_node="ask_user")
# graph.invoke(None, config)  # resume execution if the graph has more work to do
Step 7: Gradio UI
Finally, wrap the same pattern in a small Gradio app so a human can answer the agent’s clarification question from the browser.
import gradio as gr
import uuid
from typing import TypedDict, Annotated, List
from langchain_core.messages import HumanMessage, AIMessage, ToolMessage
from langchain_core.tools import tool
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages
from langgraph.checkpoint.memory import MemorySaver
# --- Step 1: Define the State ---
class AgentState(TypedDict):
    messages: Annotated[List, add_messages]  # conversation memory; new messages are appended
    next: str  # "continue", "tool", "ask_user", or "end"
# --- Step 2: Define Tool ---
@tool
def search_tool(query: str) -> str:
    """Simulated web search."""
    return f"Simulated search results for: {query}"
# --- Step 3: Define Node Functions ---
def agent_node(state: AgentState) -> AgentState:
    messages = state["messages"]
    last = messages[-1]
    # If a tool (or the user, via a ToolMessage) just responded, wrap up instead of looping
    if isinstance(last, ToolMessage):
        response = AIMessage(content=f"Here is what I found: {last.content}")
        return {"messages": messages + [response], "next": "end"}
    last_msg = last.content.lower()
    if "clarify" in last_msg or "?" in last_msg:
        # Attach a tool_call id so the user's clarification can be mapped back to this request
        response = AIMessage(
            content="Can you please specify the location?",
            tool_calls=[{"name": "search_tool", "args": {"query": last_msg}, "id": str(uuid.uuid4())}],
        )
        return {"messages": messages + [response], "next": "ask_user"}
    # Mock a tool_call if it's a direct query
    tool_call_id = str(uuid.uuid4())
    response = AIMessage(
        content="Searching...",
        tool_calls=[{"name": "search_tool", "args": {"query": last_msg}, "id": tool_call_id}]
    )
    return {"messages": messages + [response], "next": "tool"}
def tool_node(state: AgentState) -> AgentState:
    messages = state["messages"]
    tool_call = messages[-1].tool_calls[0]
    result = search_tool.invoke(tool_call["args"])
    tool_msg = ToolMessage(tool_call_id=tool_call["id"], content=result)
    return {"messages": messages + [tool_msg], "next": "end"}

def ask_user(state: AgentState) -> AgentState:
    return state
# --- Step 4: Conditional Logic ---
def should_continue(state: AgentState):
    if state["next"] == "ask_user":
        return "ask_user"
    elif state["next"] == "continue":
        return "agent"
    elif state["next"] == "tool":
        return "tool"
    return END
# --- Step 5: Compile Graph ---
builder = StateGraph(AgentState)
builder.add_node("agent", agent_node)
builder.add_node("tool", tool_node)
builder.add_node("ask_user", ask_user)
builder.set_entry_point("agent")
builder.add_conditional_edges("agent", should_continue)
builder.add_edge("tool", "agent")
builder.add_edge("ask_user", END)
# Compile with a checkpointer so each session can be paused, inspected, and updated
graph = builder.compile(checkpointer=MemorySaver())
# --- Gradio Interface State ---
session = {}

def interact(query, session_id):
    entry = session.get(session_id)
    state = entry["state"] if entry else {"messages": [], "next": "continue"}
    state["messages"] = list(state["messages"]) + [HumanMessage(content=query)]
    # Each Gradio session maps to one checkpointer thread
    config = {"configurable": {"thread_id": session_id}}
    result = graph.invoke(state, config)
    session[session_id] = {"state": result, "config": config}
    display = "\n".join(f"{m.type.capitalize()}: {m.content}" for m in result["messages"])
    return display
def clarify_response(clarification, session_id):
    entry = session[session_id]
    config = entry["config"]
    # Map the user's answer back to the tool_call the agent attached when it asked
    tool_call_id = entry["state"]["messages"][-1].tool_calls[0]["id"]
    tool_msg = ToolMessage(tool_call_id=tool_call_id, content=clarification)
    # Inject the answer into the saved state as if the ask_user node produced it
    graph.update_state(config, {"messages": [tool_msg]}, as_node="ask_user")
    result = graph.get_state(config).values
    # graph.invoke(None, config)  # optionally resume if the graph has more work to do
    session[session_id] = {"state": result, "config": config}
    display = "\n".join(f"{m.type.capitalize()}: {m.content}" for m in result["messages"])
    return display
# --- Gradio UI ---
with gr.Blocks() as demo:
    gr.Markdown("## Human-in-the-Loop Agent (LangGraph Demo)")
    session_id = gr.State(str(uuid.uuid4()))
    with gr.Row():
        query = gr.Textbox(label="Ask something")
        submit_btn = gr.Button("Send")
    output = gr.Textbox(label="Conversation", lines=10)
    with gr.Row():
        clarification = gr.Textbox(label="User Clarification")
        clarify_btn = gr.Button("Submit Clarification")
    submit_btn.click(interact, inputs=[query, session_id], outputs=output)
    clarify_btn.click(clarify_response, inputs=[clarification, session_id], outputs=output)

demo.launch()
Real-World Use Cases
- Customer support bots that route to humans when unsure
- Learning assistants that ask for clarification before answering
- RPA automation with manual checkpoints
- Financial bots asking for user intent before investing
Reusability
Yes, the structure (state, nodes, flow, user check, tool call) is reusable. You can:
- Plug in different tools (e.g., web search, APIs, RAG)
- Customize the agent logic per app
- Control flow via HITL checkpoints
Reusability Tips
- Swap search_tool with real tools (e.g., web search, calculator, database); see the sketch after this list
- Use ask_user node to plug in UI (web app, chatbot, etc.)
- Maintain stateful context across long conversations
- Integrate with langchain.memory or databases for persistence
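For instance, a drop-in replacement for search_tool could look like this (the weather lookup below is a hypothetical placeholder, not a real API):

@tool
def weather_tool(city: str) -> str:
    """Return a short weather summary for a city."""
    # Hypothetical placeholder: call a real weather API or database here
    return f"Weather report for {city}: 31°C, partly cloudy"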
Final Thoughts
This LangGraph template gives you a production-ready foundation for building intelligent, interruptible AI agents that blend automation with human supervision. If you’re working on agents, bots, or decision workflows, Human-in-the-Loop is not optional; it’s essential.