Fix Codex prolite support and trust reported model availability #2006
ElSargo wants to merge 6 commits into pingdotgg:main from …
- add prolite plan support for Spark eligibility and auth labels
- preserve built-in display names for known app-server models
- treat non-empty model/list results as trusted model availability
- ignore empty listed models and wait for model/list in one-shot discovery
…3code into feat/codex-prolite-fix
Approvability
Verdict: Needs human review
This PR introduces significant runtime behavior changes: adding a new 'prolite' plan type that enables Spark access, and fundamentally changing model resolution to trust app-server-reported model availability over hardcoded account-plan rules. These changes affect which models users can access and how model selection decisions are made, warranting human review.
Note
This is a revised version of #1980 with a narrower scope and a clearer write-up of the known tradeoffs.
What Changed
This PR adds support for the new distinction between `pro` and `prolite` in Codex account handling.

Reference: https://help.openai.com/en/articles/9793128-what-is-chatgpt-pro
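For illustration, the plan gating this PR extends can be sketched roughly as follows. The set name mirrors `CODEX_SPARK_ENABLED_PLAN_TYPES` from the change, but the plan-type union and helper here are assumptions, not the repository's actual definitions:

```typescript
// Sketch only: plan-type union and helper are illustrative, not T3 Code's
// real definitions. The PR's change is that "prolite" joins "pro" in the
// Spark-enabled set.
type CodexPlanType = "free" | "plus" | "pro" | "prolite" | "unknown";

const CODEX_SPARK_ENABLED_PLAN_TYPES: ReadonlySet<CodexPlanType> =
  new Set<CodexPlanType>([
    "pro",
    "prolite", // newly added by this PR
  ]);

function isSparkEligibleByPlan(plan: CodexPlanType): boolean {
  return CODEX_SPARK_ENABLED_PLAN_TYPES.has(plan);
}
```

Note that plan-based gating alone still answers "no" for `unknown` accounts, which is why the second half of the change (trusting the reported model list) matters.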
The change has two parts:

- Adds a `prolite` account profile alongside the existing plan types.
- Trusts the `model/list` response when it returns a non-empty model list.

The second part is important because the app-server can currently report this account type as `unknown`, and in that case account-based gating alone is not sufficient to determine whether `gpt-5.3-codex-spark` should be available.

Why
In my testing, the Codex app-server currently reports `prolite` accounts as `unknown`.

Before this change, T3 Code only treated `pro` accounts as Spark-capable, so `gpt-5.3-codex-spark` was filtered out for `prolite` users even when the app-server itself reported Spark as available.

This PR fixes that in two ways:

- Trusts the app-server's reported model availability when `model/list` returns a non-empty result.
- Adds `prolite` support for when the app-server catches up and reports the newer plan type directly.

Potential Issues
Technically possible startup latency increase
This revised PR explicitly keeps `model/list` in the one-shot app-server discovery probe, so provider discovery now waits for `account/read`, `skills/list`, and `model/list`.

If `model/list` were to hang while the other requests succeeded, startup could take longer than before.

Why I think this is acceptable:

- In testing, I did not observe a case where `model/list` hung while the other probe requests succeeded.
- There is no sign that `model/list` is uniquely failure-prone relative to the other probe requests.

Possibility of selecting a model that later fails server-side
Because this PR is specifically meant to handle cases where the account type is reported as `unknown`, model availability may depend on `model/list` rather than account classification alone.

Why I think this is acceptable:
UI Changes
`gpt-5.3-codex-spark` now appears in the model selector for `prolite` users.

Before:
57d7746
After:
Other Considerations
I was not able to test this change against other account types.
This PR may look larger than the behavior change suggests because it also factors shared Codex model parsing and metadata into `apps/server/src/provider/codexModels.ts`.

Note
Medium Risk
Changes Codex model gating/selection to trust app-server `model/list` and introduces custom model passthrough; mistakes could cause unexpected model fallback or startup/discovery regressions.

Overview
- Adds `prolite` as a first-class Codex plan type and treats it as Spark-capable, updating auth labeling and related tests.
- Updates Codex session startup and provider discovery to call and consume app-server `model/list`, store the resulting available model set, and prefer reported availability when resolving models (including deterministic fallbacks when the requested/default model isn't available).
- Introduces configurable `customModels` plumbing from server settings through the adapter/manager and ensures custom selections are preserved even when the app-server reports a different model list, backed by new parsing utilities/tests in `codexModels` and expanded discovery snapshot coverage.

Reviewed by Cursor Bugbot for commit cac0551.
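The resolution order the overview describes can be sketched as below. This is a hypothetical helper with assumed input shapes; the real `resolveCodexModelForAccount` in the repo differs in structure and detail, and the fallback model names are illustrative:

```typescript
// Hedged sketch of the resolution order: custom selections first, then a
// trusted non-empty model/list, then account-based spark gating.
interface ResolveInput {
  requested: string;
  reportedModels: string[];     // from the app-server model/list probe
  customModels: string[];       // user-configured passthrough models
  sparkEligibleByPlan: boolean; // result of account-based gating
}

function resolveModel(input: ResolveInput): string {
  const { requested, reportedModels, customModels, sparkEligibleByPlan } = input;

  // Custom selections are preserved even when the app-server reports a
  // different model list.
  if (customModels.includes(requested)) return requested;

  // A non-empty model/list result is trusted as the source of truth.
  if (reportedModels.length > 0) {
    if (reportedModels.includes(requested)) return requested;
    return reportedModels[0]; // deterministic fallback (illustrative choice)
  }

  // Empty or missing list: fall back to account-based spark gating.
  if (requested === "gpt-5.3-codex-spark" && !sparkEligibleByPlan) {
    return "gpt-5.3-codex"; // assumed non-spark default, for illustration
  }
  return requested;
}
```

The key behavior change is the middle branch: when `model/list` reports availability, that report wins even for accounts classified as `unknown`.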
Note
Fix Codex prolite plan support and trust app-server reported model availability
- Adds `'prolite'` to `CODEX_SPARK_ENABLED_PLAN_TYPES` and adds a `'ChatGPT Pro Lite Subscription'` label in `codexAccount.ts`, so prolite accounts correctly get spark models.
- Updates `resolveCodexModelForAccount` to prefer the app-server's reported model list when non-empty, falling back to account-based spark gating only when no models are reported.
- Extends `probeCodexDiscovery` in `codexAppServer.ts` to call `model/list` and include results in the `CodexDiscoverySnapshot`.
- Adds `parseCodexModelListResult` in `codexModels.ts` to normalize app-server model list responses into `ServerProviderModel` entries, filtering hidden models and applying built-in capabilities.
- Plumbs `customModels` from `CodexAdapter` through to the session manager, preserving user-configured model selections during resolution.

Macroscope summarized 9007527.
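A minimal sketch of the kind of normalization `parseCodexModelListResult` performs, assuming a simplified wire shape. The field names (`id`, `hidden`), the entry type, and the display-name table below are illustrative assumptions, not the app-server's actual schema:

```typescript
// Sketch: normalize a model/list response into provider model entries,
// skipping hidden or malformed entries and preserving built-in display
// names for known models (per the commit list). Shapes are assumed.
interface RawListedModel {
  id?: string;
  hidden?: boolean;
}

interface ProviderModel {
  id: string;
  displayName: string;
}

// Illustrative label table; the real mapping lives in codexModels.ts.
const BUILT_IN_DISPLAY_NAMES: Record<string, string> = {
  "gpt-5.3-codex-spark": "GPT-5.3 Codex Spark", // assumed label
};

function parseModelList(raw: unknown): ProviderModel[] {
  if (!Array.isArray(raw)) return []; // non-array responses yield no models
  const models: ProviderModel[] = [];
  for (const entry of raw as RawListedModel[]) {
    if (!entry || typeof entry.id !== "string" || entry.id === "") continue;
    if (entry.hidden) continue; // hidden models are filtered out
    models.push({
      id: entry.id,
      displayName: BUILT_IN_DISPLAY_NAMES[entry.id] ?? entry.id,
    });
  }
  return models;
}
```

An empty result from this step is what the discovery code treats as "no trusted availability", falling back to account-based gating.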