add langchain human-in-the-loop middleware #1543

huiwq1990 wants to merge 1 commit into kagent-dev:main
Conversation
Pull request overview
Adds a LangChain Human-in-the-Loop (HITL) middleware adapter to kagent-langgraph and introduces a new LangGraph sample agent demonstrating tool-approval flow.
Changes:
- Added `KAgentHumanInTheLoopMiddleware` (a LangChain middleware integration) and exported it from `kagent.langgraph`.
- Added a new `hitl-agent` sample (pyproject, agent code, Dockerfile, k8s Agent manifest).
- Updated the Python workspace lockfile/dependencies (including adding `langchain`).
Reviewed changes
Copilot reviewed 11 out of 12 changed files in this pull request and generated 9 comments.
Show a summary per file
| File | Description |
|---|---|
| python/uv.lock | Adds hitl-agent workspace member; introduces langchain and bumps related LangGraph/LangChain packages. |
| python/samples/langgraph/hitl-agent/pyproject.toml | Defines the new sample package and console script entrypoint. |
| python/samples/langgraph/hitl-agent/hitl_agent/cli.py | Uvicorn CLI entrypoint for running the sample agent. |
| python/samples/langgraph/hitl-agent/hitl_agent/agent.py | Sample agent graph/tools wiring and HITL middleware usage. |
| python/samples/langgraph/hitl-agent/hitl_agent/agent-card.json | A2A agent card metadata for the sample. |
| python/samples/langgraph/hitl-agent/hitl_agent/__init__.py | Package init for the sample. |
| python/samples/langgraph/hitl-agent/agent.yaml | KAgent Agent manifest for deploying the sample. |
| python/samples/langgraph/hitl-agent/Dockerfile | Container build for the sample runtime. |
| python/packages/kagent-langgraph/src/kagent/langgraph/_hitl.py | New HITL middleware implementation bridging LangChain middleware to LangGraph interrupts. |
| python/packages/kagent-langgraph/src/kagent/langgraph/__init__.py | Exports KAgentHumanInTheLoopMiddleware. |
| python/packages/kagent-langgraph/pyproject.toml | Adds langchain dependency to support middleware imports. |
| python/Makefile | Adds hitl-agent-sample docker build target. |
```python
import os

import uvicorn
from agent import graph
```
from agent import graph will fail when the package is executed via the console script entrypoint (hitl-agent = "hitl_agent.cli:main"), because it imports a top-level agent module instead of hitl_agent.agent. Use a package-qualified import (e.g., from hitl_agent.agent import graph) or a relative import (from .agent import graph) and run the module accordingly.
```diff
- from agent import graph
+ from .agent import graph
```
```python
kagent_checkpointer = KAgentCheckpointer(
    client=httpx.AsyncClient(base_url=KAgentConfig().url),
    app_name=KAgentConfig().app_name,
)
```
httpx.AsyncClient(...) is created at import time and never closed. In a long-running server this can leak open connections/file descriptors. Consider creating the client in an app lifespan/startup hook and closing it on shutdown (or using an async context manager) and passing it into KAgentCheckpointer.
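One possible shape for this, sketched with a stand-in client class so the snippet runs on its own (`FakeAsyncClient` and `checkpointer_client` are illustrative names, not part of kagent; the real code would use `httpx.AsyncClient` and wire the context manager into the server's lifespan hook):

```python
import asyncio
from contextlib import asynccontextmanager


class FakeAsyncClient:
    """Stand-in for httpx.AsyncClient so the sketch is self-contained."""

    def __init__(self, base_url: str):
        self.base_url = base_url
        self.closed = False

    async def aclose(self) -> None:
        self.closed = True


@asynccontextmanager
async def checkpointer_client(base_url: str):
    # Build the client on startup and guarantee aclose() on shutdown,
    # instead of creating it at import time and never closing it.
    client = FakeAsyncClient(base_url)
    try:
        yield client  # pass this into KAgentCheckpointer(client=...)
    finally:
        await client.aclose()
```

An ASGI lifespan (or FastAPI `lifespan=` parameter) would enter this context manager once at startup and exit it at shutdown, so the connection pool is released cleanly.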
```python
def __init__(
    self,
    interrupt_on: list[str] | dict[str, bool] | None = None,
    **kwargs: Any,
):
    super().__init__(interrupt_on=interrupt_on, **kwargs)

def after_model(self, state: AgentState[Any], runtime: Runtime[ContextT]) -> dict[str, Any] | None:
    messages = state["messages"]
    if not messages:
        return None

    last_ai_msg = next((msg for msg in reversed(messages) if isinstance(msg, AIMessage)), None)
    if not last_ai_msg or not last_ai_msg.tool_calls:
        return None

    # Create action requests and review configs for tools that need approval
    interrupt_indices: list[int] = []
    decisions: list[dict] = []

    for idx, tool_call in enumerate(last_ai_msg.tool_calls):
        if (config := self.interrupt_on.get(tool_call["name"])) is not None:
            interrupt_indices.append(idx)
```
interrupt_on is declared as list[str] | dict[str, bool] | None, but after_model() assumes self.interrupt_on is a mapping (uses .get(...) and [...]). Either normalize interrupt_on to a dict in __init__ (e.g., convert lists to {name: True}) or tighten the accepted type so callers can’t pass a list that would break at runtime.
```python
def convert_to_langchain_decision(decision: dict) -> Decision:
    decision_type = decision.get("decision_type", "reject") if isinstance(decision, dict) else "reject"
    if decision_type != "approve":
        reason = ""
        if isinstance(decision, dict):
            reasons = decision.get("rejection_reasons", {})
            reason = reasons.get("*", "") if isinstance(reasons, dict) else ""
        rejection_msg = "Tool call was rejected by user."
        if reason:
            rejection_msg += f" Reason: {reason}"

        return RejectDecision(type=decision_type, message=rejection_msg)
    else:
        return ApproveDecision(type="approve")
```
convert_to_langchain_decision() passes through any non-"approve" decision_type into RejectDecision(type=...). The kagent executor can resume with decision_type="batch" (see LangGraphAgentExecutor._handle_resume), which would produce RejectDecision(type="batch") and may fail validation or behave unexpectedly. Map all non-approve outcomes to a proper reject decision (and, if batch is supported, read the per-call decision from decisions for the current tool call id).
666b608 to 6d5d229
Signed-off-by: huiwq1990 <huiwq1990@163.com>
supreme-gg-gg left a comment
Hey @huiwq1990, thanks for the PR! We currently use LangGraph's interrupt primitive to handle HITL events in LangGraph BYO; what would be the behaviour of using the middleware from LangChain? Would it be similar to how the existing hitl-tool agent in samples works?
I'm not an expert in LangChain / LangGraph, so this might be a simpler approach than the current one we have, but Kagent does use some custom HITL logic, documented in docs/architecture/human-in-the-loop.md, that might require some special handling with the middleware and the tool.
No description provided.