Fix Codex prolite support and trust reported model availability#2006

Open
ElSargo wants to merge 6 commits into pingdotgg:main from ElSargo:feat/codex-prolite-fix

Conversation


@ElSargo ElSargo commented Apr 13, 2026

Note

This is a revised version of #1980 with a narrower scope and a clearer write-up of the known tradeoffs.

What Changed

This PR adds support for the new distinction between pro and prolite in Codex account handling.

Reference:
https://help.openai.com/en/articles/9793128-what-is-chatgpt-pro

The change has two parts:

  1. Add a prolite account profile alongside the existing plan types.
  2. Respect the Codex app-server model/list response when it returns a non-empty model list.

The second part is important because the app-server can currently report this account type as unknown, and in that case account-based gating alone is not sufficient to determine whether gpt-5.3-codex-spark should be available.

Why

In my testing, the Codex app-server currently reports prolite accounts as unknown.

Before this change, T3 Code only treated pro accounts as Spark-capable, so gpt-5.3-codex-spark was filtered out for prolite users even when the app-server itself reported Spark as available.

This PR fixes that in two ways:

  1. Add explicit prolite support for when the app-server catches up and reports the newer plan type directly.
  2. Prefer the app-server's advertised model list when it is available, so stale account taxonomy does not incorrectly hide supported models.
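The two fixes above can be sketched as a single resolution rule. This is an illustrative sketch only; the actual implementation lives in `resolveCodexModelForAccount` and the names `isSparkAvailable` and `SPARK_ENABLED_PLAN_TYPES` are simplified stand-ins, not the exact identifiers in this PR:

```typescript
type PlanType = "pro" | "prolite" | "unknown";

// Plan types that gate Spark when the app-server reports nothing useful.
const SPARK_ENABLED_PLAN_TYPES: ReadonlySet<PlanType> = new Set([
  "pro",
  "prolite",
]);

function isSparkAvailable(
  planType: PlanType,
  reportedModels: string[],
): boolean {
  // 1. A non-empty model list from the app-server is the source of truth.
  if (reportedModels.length > 0) {
    return reportedModels.includes("gpt-5.3-codex-spark");
  }
  // 2. Otherwise fall back to account-based gating, which now includes prolite.
  return SPARK_ENABLED_PLAN_TYPES.has(planType);
}
```

Under this rule, an account reported as `unknown` still gets Spark as long as the app-server advertises it, and a `prolite` account gets it even when the model list is empty.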

Potential Issues

Possible startup latency increase

This revised PR explicitly keeps model/list in the one-shot app-server discovery probe, so provider discovery now waits for account/read, skills/list, and model/list.

If model/list were to hang while the other requests succeeded, startup could take longer than before.

Why I think this is acceptable:

  • I reviewed prior Codex app-server versions and was not able to reproduce a case where model/list hung while the other probe requests succeeded.
  • Based on that investigation, I do not have evidence that model/list is uniquely failure-prone relative to the other probe requests.
  • In practice, model/list appears to behave like the other discovery calls: the probe requests either succeed together or fail together.
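The tradeoff above comes from waiting on all three requests in one shot. A minimal sketch of that shape, assuming a generic `rpc` transport (the real `probeCodexDiscovery` and `CodexDiscoverySnapshot` differ in detail):

```typescript
interface DiscoverySnapshot {
  account: unknown;
  skills: unknown;
  models: unknown;
}

// One-shot discovery probe: all three requests are awaited together, so a
// hang in any one of them (including model/list) delays the whole snapshot.
async function probeDiscovery(
  rpc: (method: string) => Promise<unknown>,
): Promise<DiscoverySnapshot> {
  const [account, skills, models] = await Promise.all([
    rpc("account/read"),
    rpc("skills/list"),
    rpc("model/list"),
  ]);
  return { account, skills, models };
}
```

Because `Promise.all` rejects as soon as any request fails, a failing model/list surfaces immediately rather than stalling discovery indefinitely; only a silent hang would add latency.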

Possibility of selecting a model that later fails server-side

Because this PR is specifically meant to handle cases where account type is reported as unknown, model availability may depend on model/list rather than account classification alone.

Why I think this is acceptable:

  • Any newly exposed models come from the app-server's own advertised model list.
  • This means the UI is following the server's stated capabilities rather than guessing based on incomplete account metadata.

UI Changes

gpt-5.3-codex-spark now appears in the model selector for prolite users.

Before: (screenshot)

After: (screenshot)

Other Considerations

I was not able to test this change against other account types.

This PR may look larger than the behavior change suggests because it also factors shared Codex model parsing and metadata into apps/server/src/provider/codexModels.ts.


Note

Medium Risk
Changes Codex model gating/selection to trust app-server model/list and introduces custom model passthrough; mistakes could cause unexpected model fallback or startup/discovery regressions.

Overview
Adds prolite as a first-class Codex plan type and treats it as Spark-capable, updating auth labeling and related tests.

Updates Codex session startup and provider discovery to call/consume app-server model/list, store the resulting available model set, and prefer reported availability when resolving models (including deterministic fallbacks when the requested/default model isn’t available).

Introduces configurable customModels plumbing from server settings through the adapter/manager and ensures custom selections are preserved even when the app-server reports a different model list, backed by new parsing utilities/tests in codexModels and expanded discovery snapshot coverage.
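The custom-model preservation described above can be sketched as a merge step. This is a hedged illustration of the behavior, not the PR's actual code; `mergeSelectableModels` is a hypothetical name:

```typescript
// Merge the app-server's reported models with user-configured custom models.
// Reported models stay the source of truth; custom entries are appended so a
// user's configured selection is never silently dropped when the app-server
// reports a different list.
function mergeSelectableModels(
  reported: string[],
  customModels: string[],
): string[] {
  const merged = new Set(reported);
  for (const model of customModels) {
    merged.add(model);
  }
  return [...merged];
}
```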

Reviewed by Cursor Bugbot for commit cac0551.

Note

Fix Codex prolite plan support and trust app-server reported model availability

  • Adds 'prolite' to CODEX_SPARK_ENABLED_PLAN_TYPES and adds a 'ChatGPT Pro Lite Subscription' label in codexAccount.ts, so prolite accounts correctly get spark models.
  • Updates resolveCodexModelForAccount to prefer the app-server's reported model list when non-empty, falling back to account-based spark gating only when no models are reported.
  • Extends probeCodexDiscovery in codexAppServer.ts to call model/list and include results in the CodexDiscoverySnapshot.
  • Adds parseCodexModelListResult in codexModels.ts to normalize app-server model list responses into ServerProviderModel entries, filtering hidden models and applying built-in capabilities.
  • Forwards configured customModels from CodexAdapter through to the session manager, preserving user-configured model selections during resolution.
  • Behavioral Change: model selection during session start and turns now uses app-server reported models as the source of truth when available, which may change the model sent to the app-server.
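The parsing step in the list above can be sketched roughly as follows. The raw entry shape (`id`, `hidden` fields) is an assumption for illustration; the real `parseCodexModelListResult` normalizes into full `ServerProviderModel` entries and applies built-in capabilities:

```typescript
interface RawModelEntry {
  id?: unknown;
  hidden?: unknown;
}

interface ProviderModel {
  id: string;
}

// Normalize an app-server model/list result into provider models, dropping
// malformed entries and anything the server marks as hidden. An unexpected
// or empty payload yields [] so callers fall back to account-based gating.
function parseModelList(result: unknown): ProviderModel[] {
  if (!result || !Array.isArray((result as { models?: unknown }).models)) {
    return [];
  }
  const entries = (result as { models: RawModelEntry[] }).models;
  return entries
    .filter((e) => typeof e.id === "string" && e.hidden !== true)
    .map((e) => ({ id: e.id as string }));
}
```

Returning an empty array for malformed input matters here: an empty parse result is exactly the case where the PR keeps the old account-based gating, so a bad payload degrades to prior behavior rather than hiding all models.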

Macroscope summarized 9007527.

ElSargo added 3 commits April 14, 2026 09:09
  • add prolite plan support for Spark eligibility and auth labels
  • preserve built-in display names for known app-server models
  • treat non-empty model/list results as trusted model availability
  • ignore empty listed models and wait for model/list in one-shot discovery
@coderabbitai

coderabbitai bot commented Apr 13, 2026

Important

Review skipped

Auto reviews are disabled on this repository. Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 45ade295-2981-479e-b1a3-0f2290ce0cf9


@github-actions github-actions bot added size:L 100-499 changed lines (additions + deletions). vouch:unvouched PR author is not yet trusted in the VOUCHED list. labels Apr 13, 2026
@ElSargo ElSargo marked this pull request as draft April 13, 2026 21:56
@github-actions github-actions bot added size:XL 500-999 changed lines (additions + deletions). and removed size:L 100-499 changed lines (additions + deletions). labels Apr 13, 2026
@ElSargo ElSargo marked this pull request as ready for review April 13, 2026 22:08
@macroscopeapp
Contributor

macroscopeapp bot commented Apr 13, 2026

Approvability

Verdict: Needs human review

This PR introduces significant runtime behavior changes: adding a new 'prolite' plan type that enables spark access, and fundamentally changing model resolution to trust app-server reported model availability over hardcoded account-plan rules. These changes affect which models users can access and how model selection decisions are made, warranting human review.

