fix: resolve architectural gaps in praisonai wrapper (fixes #1623)#1624
praisonai-triage-agent[bot] wants to merge 4 commits into main
Conversation
- Fix Issue 1: LLM registry now registers built-in providers (openai, anthropic, google) with LiteLLM backend - Fix Issue 2: DB adapter get_runs() and get_traces() now read from state store instead of returning empty lists - Fix Issue 3: LangfuseSink uses LIFO stack correlation for proper parallel tool span matching and replaces print with logging All fixes maintain backward compatibility and follow protocol-driven architecture per AGENTS.md. Co-authored-by: MervinPraison <MervinPraison@users.noreply.github.com>
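For context, here is a minimal sketch of the Fix-1 pattern: built-in providers registered against a lazy LiteLLM-backed factory. The helper names (`register`, `_make_litellm_factory`, `generate`) are illustrative stand-ins rather than PraisonAI's exact API; the lazy import, error message, and model-prefix handling mirror the reviewed code quoted later in this thread.

```python
from typing import Any, Callable, Dict

_FACTORIES: Dict[str, Callable[..., Any]] = {}


def register(provider: str, factory: Callable[..., Any]) -> None:
    """Register a provider name against a completion factory."""
    _FACTORIES[provider] = factory


def _make_litellm_factory(provider_prefix: str) -> Callable[..., Any]:
    def generate(model_id: str, prompt: str, **kwargs: Any) -> Any:
        try:
            import litellm  # lazy import keeps the dependency optional
        except ImportError as err:
            raise ImportError(
                "LiteLLM is required for built-in providers. "
                "Install with: pip install litellm"
            ) from err
        # Convey the provider via the model prefix, e.g. "openai/gpt-4o".
        full_model = f"{provider_prefix}/{model_id}".strip("/") or model_id
        return litellm.completion(
            model=full_model,
            messages=[{"role": "user", "content": prompt}],
            **kwargs,
        )
    return generate


# The built-in providers named in the commit message above.
for _provider in ("openai", "anthropic", "google"):
    register(_provider, _make_litellm_factory(_provider))
```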
@coderabbitai review
/review

ⓘ You've reached your Qodo monthly free-tier limit. Reviews pause until next month — upgrade your plan to continue now, or link your paid account if you already have one.

✅ Actions performed: Review triggered.
Important: Review skipped — bot user detected. To trigger a single review, invoke `@coderabbitai review`.

Run configuration — Configuration used: defaults; Review profile: CHILL; Plan: Pro.
📝 Walkthrough

This PR closes three independent wrapper gaps identified in issue #1623.

Changes:
- Run Persistence Access
- LLM Registry Initialization
- Observability & Tool Span Improvements (see the sketch below)
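A hedged sketch of the tool-span change: overlapping calls to the same tool are tracked on a per-(agent, tool) stack. The `_tool_stacks` shape comes from the reviewed code; the surrounding class and method names are simplified for illustration.

```python
from collections import defaultdict
from typing import Any, Dict, List, Tuple


class ToolSpanTracker:
    """Simplified stand-in for LangfuseSink's tool-span bookkeeping."""

    def __init__(self) -> None:
        # (agent_key, tool_name) -> stack of in-flight tool spans
        self._tool_stacks: Dict[Tuple[str, str], List[Any]] = defaultdict(list)

    def on_tool_start(self, agent_key: str, tool_name: str, span: Any) -> None:
        # Push so overlapping calls of the same tool nest LIFO.
        self._tool_stacks[(agent_key, tool_name)].append(span)

    def on_tool_end(self, agent_key: str, tool_name: str) -> Any:
        stack = self._tool_stacks.get((agent_key, tool_name))
        if not stack:
            return None  # unmatched TOOL_END is ignored
        span = stack.pop()
        if not stack:
            self._tool_stacks.pop((agent_key, tool_name), None)
        return span
```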
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
@copilot Do a thorough review of this PR. Read ALL existing reviewer comments above from Qodo, CodeRabbit, and Gemini first — incorporate their findings. Review areas:
Greptile Summary

This PR fixes three architectural gaps in the PraisonAI wrapper: activating built-in LLM providers via LiteLLM-backed factories, implementing get_runs()/get_traces() reads from the state store, and correcting LangfuseSink tool-span correlation.
Confidence Score: 5/5

Safe to merge; all three fixes are narrowly scoped, backward-compatible, and covered by new unit tests. The DB adapter and registry changes are straightforward and well-tested. The LangfuseSink LIFO refactor is correct; the only defect is a dead-code duplicate _tool_stacks cleanup loop in close() — harmless, but worth removing before it causes confusion in future edits.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Agent
    participant PraisonAIDB
    participant StateStore
    participant LangfuseSink
    Agent->>PraisonAIDB: "on_agent_start() -> _init_stores()"
    PraisonAIDB->>StateStore: initialize
    Agent->>PraisonAIDB: on_run_start(session_id, run_id)
    PraisonAIDB->>StateStore: set(run:session:run_id, data)
    Agent->>LangfuseSink: emit(AGENT_START)
    LangfuseSink->>LangfuseSink: create root span in _spans[agent_key]
    Agent->>LangfuseSink: emit(TOOL_START, tool_name)
    LangfuseSink->>LangfuseSink: push to _tool_stacks[(agent_key, tool_name)]
    Agent->>LangfuseSink: emit(TOOL_END, tool_name)
    LangfuseSink->>LangfuseSink: pop from _tool_stacks, span.update().end()
    Agent->>LangfuseSink: emit(AGENT_END)
    LangfuseSink->>LangfuseSink: drain dangling stacks, end root span
    Agent->>PraisonAIDB: on_run_end(session_id, run_id)
    PraisonAIDB->>StateStore: update run status to completed
    Note over PraisonAIDB,StateStore: get_runs() scans run:session:* keys
```
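A hedged sketch of the get_runs() read path from the diagram's final note. The state-store interface (`keys`, `get`) is assumed for illustration, and the limit check already incorporates the `limit is not None` fix suggested in the review below.

```python
from typing import Any, Dict, List, Optional


def get_runs(state_store: Any, session_id: str,
             limit: Optional[int] = None) -> List[Dict[str, Any]]:
    """Scan run:<session>:* keys in the state store, newest first."""
    runs: List[Dict[str, Any]] = []
    for k in state_store.keys(f"run:{session_id}:*"):
        run_data = state_store.get(k)
        if run_data:
            runs.append(run_data)
    runs.sort(key=lambda r: r.get("started_at", 0), reverse=True)
    return runs[:limit] if limit is not None else runs
```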
Reviews (3) — last reviewed commit: "fix: tighten wrapper edge cases and alig..."
Actionable comments posted: 8
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/praisonai/praisonai/db/adapter.py`:
- Around line 349-352: In get_runs (and likewise in get_traces) wrap each
per-key read from self._state_store.get(k) in a try/except so a single backend
error or malformed payload doesn't abort the whole loop: call
self._state_store.get(k) inside a try block, catch and log the exception,
continue to the next key; after a successful read, perform a type/check (e.g.,
ensure run_data is a dict or expected Run/Trace type and contains required
fields) before appending to runs, and skip/log entries that fail validation so
only valid records are added.
- Line 355: The slice logic incorrectly treats limit=0 as "no limit"; update the
conditional that currently uses "if limit" to an explicit None check (e.g., "if
limit is not None") so that return runs[:limit] is used when limit==0; apply
this change to the occurrences in adapter.py that use "return runs[:limit] if
limit else runs" (including the instance around line 355 and the similar one
near line 542) and ensure the variable names 'runs' and 'limit' are preserved.
In `@src/praisonai/praisonai/llm/registry.py`:
- Around line 280-286: The except ImportError block that re-raises a new
ImportError should preserve the original traceback by chaining the exception;
modify the import block that attempts "import litellm" in registry.py so that
you catch the ImportError as e and re-raise the new ImportError(...) from e
(i.e., use "except ImportError as e: raise ImportError(... ) from e") to retain
the original context.
- Around line 288-292: The call to litellm.completion in the registry (the
return in the method that builds the LiteLLM request) incorrectly forwards
self.config which always contains a "provider" key, causing downstream APIs to
receive a stray provider kwarg and return 400s; fix by making a copy of
self.config with the "provider" key removed (e.g., filter out "provider" from
self.config) and then merge that sanitized config with kwargs when calling
litellm.completion (reference the litellm.completion invocation and self.config
in registry.py to locate the change).
In `@src/praisonai/praisonai/observability/langfuse.py`:
- Around line 77-80: The close() implementation currently ends only root spans
in self._spans but leaves in-flight tool spans stored in self._tool_stacks,
causing orphaned observations and retained references; update close() in the
Langfuse observability class to iterate over self._tool_stacks (keys are tuples
of (agent_key, tool_name)) and for each stacked span end/close any remaining
spans (using the same span-end method used for tool exit), then clear the stack
and finally clear self._tool_stacks and self._spans; ensure thread-safety by
performing this cleanup while holding self._lock and reuse existing span-ending
logic to preserve consistency.
- Around line 210-223: The current _handle_tool_end uses (agent_key, tool_name)
and stack.pop(), which mis-correlates concurrent calls of the same tool; change
correlation to first try matching by a unique call identifier on the event
(e.g., event.call_id or similar) stored with the span when pushing into
self._tool_stacks entries, and only fall back to LIFO by tool name if no
matching call_id entry is found; update where spans are pushed (the code that
appends to self._tool_stacks[(agent_key, tool_name)]) to store the call_id with
the span and modify _handle_tool_end to locate and remove the span by call_id
before falling back to pop().
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1560d2e2-d23e-48f5-b90d-a7a3050fea24
📒 Files selected for processing (3)
- src/praisonai/praisonai/db/adapter.py
- src/praisonai/praisonai/llm/registry.py
- src/praisonai/praisonai/observability/langfuse.py
```python
for k in keys:
    run_data = self._state_store.get(k)
    if run_data:
        runs.append(run_data)
```
Guard store reads so one bad record doesn’t fail the whole query.
A backend/read error (or malformed payload) in a single key currently aborts the entire get_runs/get_traces call. Add per-key exception handling and type checks to keep reads resilient.
Suggested fix

```diff
 runs = []
 for k in keys:
-    run_data = self._state_store.get(k)
-    if run_data:
-        runs.append(run_data)
+    try:
+        run_data = self._state_store.get(k)
+    except Exception as exc:
+        logger.warning("Failed to load run key '%s': %s", k, exc)
+        continue
+    if isinstance(run_data, dict):
+        runs.append(run_data)
```

```diff
 traces = []
 for k in keys:
-    trace_data = self._state_store.get(k)
-    if trace_data:
-        traces.append(trace_data)
+    try:
+        trace_data = self._state_store.get(k)
+    except Exception as exc:
+        logger.warning("Failed to load trace key '%s': %s", k, exc)
+        continue
+    if isinstance(trace_data, dict):
+        traces.append(trace_data)
```

Also applies to: 532-535
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/praisonai/praisonai/db/adapter.py` around lines 349 - 352, In get_runs
(and likewise in get_traces) wrap each per-key read from
self._state_store.get(k) in a try/except so a single backend error or malformed
payload doesn't abort the whole loop: call self._state_store.get(k) inside a try
block, catch and log the exception, continue to the next key; after a successful
read, perform a type/check (e.g., ensure run_data is a dict or expected
Run/Trace type and contains required fields) before appending to runs, and
skip/log entries that fail validation so only valid records are added.
```python
        runs.append(run_data)

    runs.sort(key=lambda r: r.get("started_at", 0), reverse=True)
    return runs[:limit] if limit else runs
```
Handle limit=0 correctly in slice logic.
Current truthy check treats limit=0 as “no limit” and returns all results. Use an explicit None check.
Suggested fix

```diff
-return runs[:limit] if limit else runs
+return runs[:limit] if limit is not None else runs
```

```diff
-return traces[:limit] if limit else traces
+return traces[:limit] if limit is not None else traces
```

Also applies to: 542-542
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/praisonai/praisonai/db/adapter.py` at line 355, The slice logic
incorrectly treats limit=0 as "no limit"; update the conditional that currently
uses "if limit" to an explicit None check (e.g., "if limit is not None") so that
return runs[:limit] is used when limit==0; apply this change to the occurrences
in adapter.py that use "return runs[:limit] if limit else runs" (including the
instance around line 355 and the similar one near line 542) and ensure the
variable names 'runs' and 'limit' are preserved.
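A quick runnable demonstration of the truthiness bug being fixed here:

```python
runs = ["r1", "r2", "r3"]


def fetch(limit=None):
    return runs[:limit] if limit else runs  # buggy: 0 is falsy


def fetch_fixed(limit=None):
    return runs[:limit] if limit is not None else runs


print(fetch(limit=0))        # ['r1', 'r2', 'r3'] -- limit silently ignored
print(fetch_fixed(limit=0))  # []                 -- explicit None check
```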
```python
try:
    import litellm  # lazy
except ImportError:
    raise ImportError(
        "LiteLLM is required for built-in providers. "
        "Install with: pip install litellm"
    )
```
Missing exception chaining in except clause (Ruff B904).
Re-raising inside an except block without from loses the original traceback context.
🛠️ Proposed fix

```diff
-except ImportError:
-    raise ImportError(
-        "LiteLLM is required for built-in providers. "
-        "Install with: pip install litellm"
-    )
+except ImportError as err:
+    raise ImportError(
+        "LiteLLM is required for built-in providers. "
+        "Install with: pip install litellm"
+    ) from err
```

🧰 Tools
🪛 Ruff (0.15.12)
[warning] 283-286: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/praisonai/praisonai/llm/registry.py` around lines 280 - 286, The except
ImportError block that re-raises a new ImportError should preserve the original
traceback by chaining the exception; modify the import block that attempts
"import litellm" in registry.py so that you catch the ImportError as e and
re-raise the new ImportError(...) from e (i.e., use "except ImportError as e:
raise ImportError(... ) from e") to retain the original context.
```python
return litellm.completion(
    model=full_model or self.model_id,
    messages=[{"role": "user", "content": prompt}],
    **{**self.config, **kwargs},
)
```
provider key forwarded to downstream provider API, causing 400 errors at runtime.
self.config always contains {"provider": "<prefix>"} (set via cfg.setdefault in _make_litellm_factory). Spreading **self.config into litellm.completion(...) passes provider as a kwarg. LiteLLM assumes any non-OpenAI param is provider-specific and passes it in the request body as a kwarg — so provider="openai" gets forwarded verbatim to the OpenAI/Anthropic/Google API, which will reject it with a 400 error on every generate() call.
Strip the internal provider key before forwarding config to LiteLLM:
🐛 Proposed fix

```diff
-full_model = f"{self.config.get('provider', '')}/{self.model_id}".strip("/")
-return litellm.completion(
-    model=full_model or self.model_id,
-    messages=[{"role": "user", "content": prompt}],
-    **{**self.config, **kwargs},
-)
+provider_prefix = self.config.get("provider", "")
+full_model = f"{provider_prefix}/{self.model_id}".strip("/") or self.model_id
+passthrough_config = {k: v for k, v in self.config.items() if k != "provider"}
+return litellm.completion(
+    model=full_model,
+    messages=[{"role": "user", "content": prompt}],
+    **{**passthrough_config, **kwargs},
+)
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/praisonai/praisonai/llm/registry.py` around lines 288 - 292, The call to
litellm.completion in the registry (the return in the method that builds the
LiteLLM request) incorrectly forwards self.config which always contains a
"provider" key, causing downstream APIs to receive a stray provider kwarg and
return 400s; fix by making a copy of self.config with the "provider" key removed
(e.g., filter out "provider" from self.config) and then merge that sanitized
config with kwargs when calling litellm.completion (reference the
litellm.completion invocation and self.config in registry.py to locate the
change).
🧩 Analysis chain
🌐 Web query:
Does litellm.completion accept a provider keyword argument?
💡 Result:
No. liteLLM’s completion() does not define or document a provider keyword argument; instead, it uses custom_llm_provider (and/or the provider prefix in model) to choose the backend provider. In the published completion() signature, the parameter list includes custom_llm_provider: Optional[str] = None and then kwargs, but it does not include any provider parameter name. [1] So passing provider=... would not be an officially supported argument for litellm.completion(); if you need to select a provider, use model (e.g., openai/gpt-4o) and/or custom_llm_provider=.... [1][2]
Citations:
- 1: https://docs.litellm.ai/completion/input
- 2: https://berriai-litellm.mintlify.app/advanced/custom-providers
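A minimal sketch of the two supported ways to select a backend, per the query result above (untested here; provider API keys are assumed to be set in the environment):

```python
import litellm

# Provider conveyed via the model prefix...
resp = litellm.completion(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "hello"}],
)

# ...or explicitly via custom_llm_provider; never a bare provider= kwarg.
resp = litellm.completion(
    model="gpt-4o",
    custom_llm_provider="openai",
    messages=[{"role": "user", "content": "hello"}],
)
```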
```python
self._spans: Dict[str, Any] = {}  # agent_key -> root span
# (agent_key, tool_name) -> stack of in-flight tool spans
self._tool_stacks: Dict[Tuple[str, str], List[Any]] = defaultdict(list)
self._lock = threading.Lock()
```
_tool_stacks lifecycle is not cleaned up on shutdown.
close() ends root spans only; in-flight tool spans in _tool_stacks are never ended/cleared. This can leave orphaned observations and retained references.
Proposed fix in close()

```diff
 with self._lock:
+    for stack in self._tool_stacks.values():
+        while stack:
+            try:
+                stack.pop().end()
+            except Exception:
+                pass
+    self._tool_stacks.clear()
     for span in self._spans.values():
         try:
             span.end()
         except Exception:
             pass
```

Also applies to: 299-307
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/praisonai/praisonai/observability/langfuse.py` around lines 77 - 80, The
close() implementation currently ends only root spans in self._spans but leaves
in-flight tool spans stored in self._tool_stacks, causing orphaned observations
and retained references; update close() in the Langfuse observability class to
iterate over self._tool_stacks (keys are tuples of (agent_key, tool_name)) and
for each stacked span end/close any remaining spans (using the same span-end
method used for tool exit), then clear the stack and finally clear
self._tool_stacks and self._spans; ensure thread-safety by performing this
cleanup while holding self._lock and reuse existing span-ending logic to
preserve consistency.
```diff
+# Push to stack for LIFO correlation
+self._tool_stacks[(agent_key, tool_name)].append(tool_span)

 def _handle_tool_end(self, event: ActionEvent, agent_name: str) -> None:
     """Handle TOOL_END -> end tool span observation."""
     agent_key = f"{event.agent_id or agent_name}-{agent_name}"
     tool_name = event.tool_name or "unknown-tool"

-    # Find the most recent matching tool span
-    tool_key = None
-    for key in self._spans:
-        if key.startswith(f"{agent_key}:{tool_name}:") and key != agent_key:
-            tool_key = key
-
-    if not tool_key:
+    stack = self._tool_stacks.get((agent_key, tool_name))
+    if not stack:
         return
-
-    tool_span = self._spans.pop(tool_key, None)
-    if tool_span:
+    tool_span = stack.pop()
+    if not stack:
+        self._tool_stacks.pop((agent_key, tool_name), None)
     try:
```
LIFO-by-name can mis-correlate concurrent same-tool calls.
Using (agent_key, tool_name) + stack.pop() breaks if two overlapping calls of the same tool finish out of order; TOOL_END may close the wrong span.
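A toy illustration of the mis-correlation, with plain strings standing in for spans:

```python
stack = []
stack.append("span-A")  # TOOL_START, call A
stack.append("span-B")  # TOOL_START, call B (same tool, overlapping)

# Call A finishes first, but LIFO pops B's span:
ended = stack.pop()
assert ended == "span-B"  # the wrong span is ended for call A
```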
Proposed fix (prefer call-id correlation, fallback to LIFO)

```diff
-# Push to stack for LIFO correlation
-self._tool_stacks[(agent_key, tool_name)].append(tool_span)
+tool_call_id = None
+if event.metadata:
+    tool_call_id = event.metadata.get("tool_call_id")
+if tool_call_id:
+    # self._tool_spans_by_id: Dict[str, Any]
+    self._tool_spans_by_id[tool_call_id] = tool_span
+else:
+    # Fallback when no explicit id exists
+    self._tool_stacks[(agent_key, tool_name)].append(tool_span)
```

```diff
-stack = self._tool_stacks.get((agent_key, tool_name))
-if not stack:
-    return
-tool_span = stack.pop()
-if not stack:
-    self._tool_stacks.pop((agent_key, tool_name), None)
+tool_call_id = event.metadata.get("tool_call_id") if event.metadata else None
+if tool_call_id and tool_call_id in self._tool_spans_by_id:
+    tool_span = self._tool_spans_by_id.pop(tool_call_id)
+else:
+    stack = self._tool_stacks.get((agent_key, tool_name))
+    if not stack:
+        return
+    tool_span = stack.pop()
+    if not stack:
+        self._tool_stacks.pop((agent_key, tool_name), None)
```
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/praisonai/praisonai/observability/langfuse.py` around lines 210 - 223,
The current _handle_tool_end uses (agent_key, tool_name) and stack.pop(), which
mis-correlates concurrent calls of the same tool; change correlation to first
try matching by a unique call identifier on the event (e.g., event.call_id or
similar) stored with the span when pushing into self._tool_stacks entries, and
only fall back to LIFO by tool name if no matching call_id entry is found;
update where spans are pushed (the code that appends to
self._tool_stacks[(agent_key, tool_name)]) to store the call_id with the span
and modify _handle_tool_end to locate and remove the span by call_id before
falling back to pop().
@claude You are the FINAL architecture reviewer. If the branch is under MervinPraison/PraisonAI (not a fork), you are able to make modifications to this branch and push directly. SCOPE: Focus ONLY on Python packages (praisonaiagents, praisonai). Do NOT modify praisonai-rust or praisonai-ts. Read ALL comments above from Gemini, Qodo, CodeRabbit, and Copilot carefully before responding.

Phase 1: Review per AGENTS.md
Phase 2: FIX Valid Issues
Phase 3: Final Verdict
Claude finished @MervinPraison's task — View job

FINAL Architecture Review & Fix Implementation

✅ FINAL VERDICT: APPROVED WITH FIXES APPLIED

Architecture Review Summary
- Protocol-driven compliance: ✅ PASS
- Backward compatibility: ✅ PASS
- Performance: ✅ PASS

Critical Issues Fixed 🔧
All P1 BLOCKING and P2 MAJOR issues identified by reviewers have been RESOLVED.

Commit Applied 📝

Testing Recommendations 🧪
Before merging, verify:

Result: All architectural gaps from Issue #1623 are now resolved. The PR maintains the protocol-driven design while fixing critical runtime failures.
- Fix LiteLLM provider TypeError by stripping 'provider' key from config
- Fix DB adapter write-side gap allowing state store writes without conversation store
- Fix tool stacks cleanup on LangfuseSink shutdown
- Fix limit=0 logic error in get_runs/get_traces
- Add exception chaining for ImportError
- Add error handling for state store reads

Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
Force-pushed from 300ecb7 to e386411
Agent-Logs-Url: https://github.com/MervinPraison/PraisonAI/sessions/62fe6186-39b5-43e2-85e3-491cc09facdf Co-authored-by: MervinPraison <454862+MervinPraison@users.noreply.github.com>
Completed a full review and applied focused fixes in
No UI changes were made in this PR.
Fixes #1623
Summary
This PR addresses the 3 critical architectural gaps identified in the wrapper layer:
Issue 1: LLM Registry Empty Built-in Providers ✅
Issue 2: DB Adapter Silent Stubs ✅
Issue 3: LangfuseSink Span Correlation Issues ✅
Additional Improvements
Testing
Generated with Claude Code
Summary by CodeRabbit
Release Notes
Bug Fixes
New Features
Improvements