
add langchain human-in-the-loop middleware#1543

Closed
huiwq1990 wants to merge 1 commit intokagent-dev:mainfrom
huiwq1990:feat-lc

Conversation

@huiwq1990

No description provided.

Copilot AI review requested due to automatic review settings March 25, 2026 07:55
Contributor

Copilot AI left a comment


Pull request overview

Adds a LangChain Human-in-the-Loop (HITL) middleware adapter to kagent-langgraph and introduces a new LangGraph sample agent demonstrating tool-approval flow.

Changes:

  • Added KAgentHumanInTheLoopMiddleware (LangChain middleware integration) and exported it from kagent.langgraph.
  • Added a new hitl-agent sample (pyproject, agent code, Dockerfile, k8s Agent manifest).
  • Updated Python workspace lockfile/dependencies (including adding langchain).

Reviewed changes

Copilot reviewed 11 out of 12 changed files in this pull request and generated 9 comments.

| File | Description |
| --- | --- |
| python/uv.lock | Adds hitl-agent workspace member; introduces langchain and bumps related LangGraph/LangChain packages. |
| python/samples/langgraph/hitl-agent/pyproject.toml | Defines the new sample package and console script entrypoint. |
| python/samples/langgraph/hitl-agent/hitl_agent/cli.py | Uvicorn CLI entrypoint for running the sample agent. |
| python/samples/langgraph/hitl-agent/hitl_agent/agent.py | Sample agent graph/tools wiring and HITL middleware usage. |
| python/samples/langgraph/hitl-agent/hitl_agent/agent-card.json | A2A agent card metadata for the sample. |
| python/samples/langgraph/hitl-agent/hitl_agent/__init__.py | Package init for the sample. |
| python/samples/langgraph/hitl-agent/agent.yaml | KAgent Agent manifest for deploying the sample. |
| python/samples/langgraph/hitl-agent/Dockerfile | Container build for the sample runtime. |
| python/packages/kagent-langgraph/src/kagent/langgraph/_hitl.py | New HITL middleware implementation bridging LangChain middleware to LangGraph interrupts. |
| python/packages/kagent-langgraph/src/kagent/langgraph/__init__.py | Exports KAgentHumanInTheLoopMiddleware. |
| python/packages/kagent-langgraph/pyproject.toml | Adds langchain dependency to support middleware imports. |
| python/Makefile | Adds hitl-agent-sample docker build target. |


Comment thread python/packages/kagent-langgraph/src/kagent/langgraph/_hitl.py Outdated
Comment thread python/samples/langgraph/hitl-agent/agent.yaml
```python
import os

import uvicorn
from agent import graph
```
Copilot AI Mar 25, 2026
from agent import graph will fail when the package is executed via the console script entrypoint (hitl-agent = "hitl_agent.cli:main"), because it imports a top-level agent module instead of hitl_agent.agent. Use a package-qualified import (e.g., from hitl_agent.agent import graph) or a relative import (from .agent import graph) and run the module accordingly.

Suggested change:

```diff
-from agent import graph
+from .agent import graph
```

Comment on lines +34 to +37

```python
kagent_checkpointer = KAgentCheckpointer(
    client=httpx.AsyncClient(base_url=KAgentConfig().url),
    app_name=KAgentConfig().app_name,
)
```
Copilot AI Mar 25, 2026
httpx.AsyncClient(...) is created at import time and never closed. In a long-running server this can leak open connections/file descriptors. Consider creating the client in an app lifespan/startup hook and closing it on shutdown (or using an async context manager) and passing it into KAgentCheckpointer.
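One shape this fix could take is sketched below: create the client in an async lifespan hook and guarantee it is closed on shutdown. The `StubAsyncClient` class is a hypothetical stand-in for `httpx.AsyncClient` so the snippet has no external dependencies; real code would construct an `httpx.AsyncClient` and pass it to `KAgentCheckpointer` inside the lifespan.

```python
import asyncio
from contextlib import asynccontextmanager


class StubAsyncClient:
    """Hypothetical stand-in for httpx.AsyncClient (which exposes aclose())."""

    def __init__(self, base_url: str):
        self.base_url = base_url
        self.closed = False

    async def aclose(self) -> None:
        self.closed = True


@asynccontextmanager
async def lifespan(app_state: dict):
    # Create the client on startup and hand it to whatever needs it
    # (e.g., the checkpointer), instead of building it at import time...
    client = StubAsyncClient(base_url="http://kagent.example")
    app_state["client"] = client
    try:
        yield app_state
    finally:
        # ...and close it exactly once on shutdown, so no connections leak.
        await client.aclose()


async def main() -> None:
    state: dict = {}
    async with lifespan(state):
        assert state["client"].closed is False  # usable while the app runs
    assert state["client"].closed is True  # closed after shutdown


asyncio.run(main())
```

ASGI frameworks accept a lifespan context manager of this shape, so the same pattern transfers to the uvicorn-served sample app.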

Comment on lines +20 to +42

```python
def __init__(
    self,
    interrupt_on: list[str] | dict[str, bool] | None = None,
    **kwargs: Any,
):
    super().__init__(interrupt_on=interrupt_on, **kwargs)

def after_model(self, state: AgentState[Any], runtime: Runtime[ContextT]) -> dict[str, Any] | None:
    messages = state["messages"]
    if not messages:
        return None

    last_ai_msg = next((msg for msg in reversed(messages) if isinstance(msg, AIMessage)), None)
    if not last_ai_msg or not last_ai_msg.tool_calls:
        return None

    # Create action requests and review configs for tools that need approval
    interrupt_indices: list[int] = []
    decisions: list[dict] = []

    for idx, tool_call in enumerate(last_ai_msg.tool_calls):
        if (config := self.interrupt_on.get(tool_call["name"])) is not None:
            interrupt_indices.append(idx)
```
Copilot AI Mar 25, 2026

interrupt_on is declared as list[str] | dict[str, bool] | None, but after_model() assumes self.interrupt_on is a mapping (uses .get(...) and [...]). Either normalize interrupt_on to a dict in __init__ (e.g., convert lists to {name: True}) or tighten the accepted type so callers can’t pass a list that would break at runtime.

Comment on lines +94 to +107

```python
def convert_to_langchain_decision(decision: dict) -> Decision:
    decision_type = decision.get("decision_type", "reject") if isinstance(decision, dict) else "reject"
    if decision_type != "approve":
        reason = ""
        if isinstance(decision, dict):
            reasons = decision.get("rejection_reasons", {})
            reason = reasons.get("*", "") if isinstance(reasons, dict) else ""
        rejection_msg = "Tool call was rejected by user."
        if reason:
            rejection_msg += f" Reason: {reason}"

        return RejectDecision(type=decision_type, message=rejection_msg)
    else:
        return ApproveDecision(type="approve")
```
Copilot AI Mar 25, 2026

convert_to_langchain_decision() passes through any non-"approve" decision_type into RejectDecision(type=...). The kagent executor can resume with decision_type="batch" (see LangGraphAgentExecutor._handle_resume), which would produce RejectDecision(type="batch") and may fail validation or behave unexpectedly. Map all non-approve outcomes to a proper reject decision (and, if batch is supported, read the per-call decision from decisions for the current tool call id).
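A minimal sketch of the suggested fix follows. `ApproveDecision` and `RejectDecision` are stubbed here as plain dataclasses (the real LangChain decision types are not imported in this snippet); the point is that every non-approve outcome, including `"batch"` or unknown values, is coerced to a well-formed reject decision with `type="reject"`.

```python
from dataclasses import dataclass


@dataclass
class ApproveDecision:  # stand-in for the LangChain decision type
    type: str


@dataclass
class RejectDecision:  # stand-in for the LangChain decision type
    type: str
    message: str


def convert_to_langchain_decision(decision: dict):
    decision_type = decision.get("decision_type", "reject") if isinstance(decision, dict) else "reject"
    if decision_type == "approve":
        return ApproveDecision(type="approve")
    # Anything that is not an explicit approve ("reject", "batch",
    # unexpected values) becomes a well-formed reject decision.
    reason = ""
    if isinstance(decision, dict):
        reasons = decision.get("rejection_reasons", {})
        reason = reasons.get("*", "") if isinstance(reasons, dict) else ""
    rejection_msg = "Tool call was rejected by user."
    if reason:
        rejection_msg += f" Reason: {reason}"
    return RejectDecision(type="reject", message=rejection_msg)
```

If batch resumes need per-tool-call outcomes, the same function would additionally look up the per-call entry in the resume payload's `decisions` list before falling back to reject.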

Comment thread python/Makefile Outdated
Comment thread python/samples/langgraph/hitl-agent/hitl_agent/agent.py Outdated
Comment thread python/packages/kagent-langgraph/src/kagent/langgraph/_hitl.py Outdated
@huiwq1990 force-pushed the feat-lc branch 2 times, most recently from 666b608 to 6d5d229 on March 25, 2026 13:43
Signed-off-by: huiwq1990 <huiwq1990@163.com>
Contributor

@supreme-gg-gg supreme-gg-gg left a comment


Hey @huiwq1990, thanks for the PR! We currently use LangGraph's interrupt primitive to handle HITL events in the LangGraph BYO integration. What would be the behaviour of using the middleware from LangChain? Would it be similar to the existing hitl-tool agent in the samples?

I'm not an expert in LangChain / LangGraph, so this might be a simpler approach than the current one we have, but Kagent does use some custom HITL logic, documented in docs/architecture/human-in-the-loop.md, that might require special handling with the middleware and the tool.

@huiwq1990 closed this Mar 30, 2026

3 participants