feat: add Enterprise support, dynamic model sync, and Node.js compatibility #9
oniz93 wants to merge 7 commits into rsvedant:master
Conversation
📝 Walkthrough

The pull request introduces model synchronization capabilities by adding a script that discovers Windsurf models from local SQLite storage, parses their definitions, and populates the OpenCode configuration automatically. It extends the plugin with a Node.js runtime fallback, improves CSRF token extraction with platform-specific logic, adds gRPC-based model fetching, and extends the model enum to support private model slots.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant CLI as CLI<br/>(sync-models)
    participant FS as Filesystem
    participant SQLite as Windsurf<br/>SQLite DB
    participant Config as OpenCode<br/>Config JSON
    User->>CLI: bun run sync-models
    CLI->>FS: Resolve Windsurf DB path
    CLI->>SQLite: Query ItemTable for<br/>windsurfConfigurations
    SQLite-->>CLI: Return base64-encoded blob
    CLI->>CLI: Decode base64 & parse<br/>with regex patterns
    CLI->>CLI: Extract (Label, Model UID)<br/>pairs & normalize IDs
    CLI->>CLI: Apply fallback models<br/>for missing entries
    CLI->>Config: Read OpenCode config
    Config-->>CLI: Return current config
    CLI->>Config: Update models array<br/>in windsurf provider
    Config-->>FS: Write updated config
    FS-->>User: Success summary logged
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 10
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0076f56c-8067-4282-b6e3-5cc40884bb4c
⛔ Files ignored due to path filters (1)
bun.lock is excluded by !**/*.lock
📒 Files selected for processing (9)
- README.md
- package.json
- scripts/sync-models.ts
- src/plugin.ts
- src/plugin/auth.ts
- src/plugin/grpc-client.ts
- src/plugin/models.ts
- src/plugin/types.ts
- tests/live/verify-plugin.ts
```ts
if (!fs.existsSync(DB_PATH)) {
  console.error("Windsurf database not found.");
  return;
```
Return a non-zero exit code on sync failures.
All of these branches just log and return, so the process exits successfully and authorize() reports success even when nothing was synchronized. Throw or set process.exitCode = 1 for every failure path.
Also applies to: 24-26, 79-81, 93-95
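A minimal sketch of the suggested fix. The `fail` helper is hypothetical (not in the PR); it illustrates logging and setting `process.exitCode = 1` so callers such as authorize() observe the failure:

```typescript
// Hypothetical helper: log the error and mark the process as failed
// without aborting, so any remaining cleanup still runs and callers
// see a non-zero exit code instead of a silent success.
export function fail(message: string): void {
  console.error(message);
  process.exitCode = 1; // propagates to the process exit status
}

// Usage inside each failure branch of the sync script (sketch):
// if (!fs.existsSync(DB_PATH)) { fail("Windsurf database not found."); return; }
```

Setting `process.exitCode` rather than calling `process.exit(1)` lets pending I/O flush before the process terminates.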
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/sync-models.ts` around lines 17 - 19, Update the failure branches
that currently just log and return (e.g., the DB_PATH existence check and the
other branches referenced around the same areas) so the process exits with a
non-zero status: set process.exitCode = 1 (or throw) immediately after logging
the error before returning; ensure any callers like authorize() will observe the
non-zero exit code instead of a successful exit. Target the checks that print
"Windsurf database not found." and the other similar error logs and replace the
plain return with setting process.exitCode = 1 (or throwing an Error) so
failures propagate correctly.
```ts
}

// Extract the raw binary blob from the SQLite database
const configRaw = execSync(`sqlite3 "${DB_PATH}" "SELECT value FROM ItemTable WHERE key = 'windsurfConfigurations'"`).toString('utf8');
```
🧩 Analysis chain
🏁 Script executed:
# First, let's check the package.json for sqlite3 dependency
cat package.json | grep -A 20 -B 5 "sqlite3"Repository: rsvedant/opencode-windsurf-auth
Length of output: 596
🏁 Script executed:
# Now let's examine the scripts/sync-models.ts file around line 23
if [ -f scripts/sync-models.ts ]; then
echo "=== scripts/sync-models.ts ===" && cat -n scripts/sync-models.ts | head -40
fiRepository: rsvedant/opencode-windsurf-auth
Length of output: 1875
🏁 Script executed:
# Check if there are other sqlite3 references in the codebase
rg "sqlite3" --type ts --type js --type jsonRepository: rsvedant/opencode-windsurf-auth
Length of output: 865
🏁 Script executed:
# Let's also check what the npm sqlite3 package provides
npm info sqlite3 2>/dev/null | head -50Repository: rsvedant/opencode-windsurf-auth
Length of output: 1039
sqlite3 here is a system CLI requirement, not the npm binding package you added.
This script shells out to sqlite3, but package.json only installs the Node.js binding (sqlite3@5.1.7), which is not a CLI executable. Fresh machines without a global sqlite3 CLI will fail even after bun install; use the installed JS library programmatically or explicitly document the system CLI dependency.
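A sketch of the programmatic lookup this comment asks for. The sqlite3 binding is callback-based, so a small promisified wrapper keeps the surrounding async flow readable; the `Db` interface below is a stub of the subset of that API the sketch assumes, and `readWindsurfConfig` is a hypothetical name:

```typescript
// Stub of the subset of the sqlite3 binding API assumed here.
interface Db {
  get(sql: string, params: unknown[], cb: (err: Error | null, row?: { value?: string }) => void): void;
  close(): void;
}

// Query ItemTable for the windsurfConfigurations blob, promisified so
// configRaw can simply be awaited where it is used. Closes the DB in
// both success and error paths.
export function readWindsurfConfig(db: Db): Promise<string> {
  return new Promise((resolve, reject) => {
    db.get(
      "SELECT value FROM ItemTable WHERE key = ?",
      ["windsurfConfigurations"],
      (err, row) => {
        db.close();
        if (err) return reject(err);
        if (!row?.value) return reject(new Error("windsurfConfigurations not found"));
        resolve(row.value);
      }
    );
  });
}
```

With this in place, `const configRaw = await readWindsurfConfig(new sqlite3.Database(DB_PATH))` replaces the execSync call and removes the system CLI dependency.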
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/sync-models.ts` at line 23, The script currently shells out via
execSync to call the sqlite3 CLI (the line using execSync and DB_PATH to query
ItemTable for key='windsurfConfigurations') but package.json installs the Node
sqlite3 binding; replace the execSync call and the configRaw assignment with a
programmatic query using the installed sqlite3 binding: open a
sqlite3.Database(DB_PATH), run db.get("SELECT value FROM ItemTable WHERE key =
?", ['windsurfConfigurations'], ...) to retrieve the row, set configRaw from
row.value (handle null/err), and close the DB (db.close()) — update any
surrounding async/await flow to accommodate the callback or promisify the query
so configRaw is available where used.
```ts
// Always ensure base standard models are present as fallbacks
const standard = ["gpt-4o", "claude-3-5-sonnet", "claude-3-7-sonnet"];
for (const id of standard) {
  if (!discovered[id]) {
    discovered[id] = {
      name: id.replace(/-/g, ' ').replace(/\b\w/g, c => c.toUpperCase()) + " (Windsurf)",
      limit: { context: 200000, output: 8192 }
    };
  }
}
```
Don't inject fallback models into an account-scoped sync.
The contract of this tool is “models actually enabled for your account”, but these hardcoded additions reintroduce models that may be unavailable for the current tier or region. That makes the generated opencode.json inaccurate again.
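A sketch of the "guard behind a flag" option from this comment, assuming the model shape shown in the snippet above; `withFallbacks` and the `accountScoped` flag are hypothetical names:

```typescript
interface ModelDef {
  name: string;
  limit: { context: number; output: number };
}

// Only inject the standard fallbacks on an explicitly global run;
// account-scoped runs return the discovered set untouched, so the
// generated opencode.json reflects only models enabled for the account.
export function withFallbacks(
  discovered: Record<string, ModelDef>,
  accountScoped: boolean
): Record<string, ModelDef> {
  if (accountScoped) return discovered;
  const standard = ["gpt-4o", "claude-3-5-sonnet", "claude-3-7-sonnet"];
  const out = { ...discovered };
  for (const id of standard) {
    out[id] ??= {
      name: id.replace(/-/g, " ").replace(/\b\w/g, (c) => c.toUpperCase()) + " (Windsurf)",
      limit: { context: 200000, output: 8192 },
    };
  }
  return out;
}
```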
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/sync-models.ts` around lines 68 - 77, Remove the hardcoded fallback
injection so account-scoped sync only records models actually discovered: delete
(or disable) the `standard` array and the for-loop that assigns into
`discovered` (the block starting with const standard = [...] and for (const id
of standard) {...}), or guard it behind a non-account/global-only flag; ensure
`discovered` is left unmodified by this script during account-scoped runs so the
generated opencode.json reflects only enabled models.
```ts
const opencodeConfig = JSON.parse(fs.readFileSync(CONFIG_PATH, 'utf8'));
opencodeConfig.provider.windsurf.models = discovered;
fs.writeFileSync(CONFIG_PATH, JSON.stringify(opencodeConfig, null, 2));
```
Initialize provider.windsurf before assigning models.
On a fresh opencode.json without that provider block, this throws and the sync aborts. The sync flow should create the missing nesting instead of assuming it already exists.
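A minimal sketch of the defensive initialization suggested here; the config type is an assumption inferred from the assignment above:

```typescript
type OpencodeConfig = {
  provider?: { windsurf?: { models?: Record<string, unknown> } };
};

// Create each missing level before assigning, so a fresh opencode.json
// without a provider block never triggers a TypeError mid-sync.
export function setWindsurfModels(
  config: OpencodeConfig,
  discovered: Record<string, unknown>
): OpencodeConfig {
  config.provider ??= {};
  config.provider.windsurf ??= {};
  config.provider.windsurf.models = discovered;
  return config;
}
```

Calling this between the JSON.parse and writeFileSync steps guarantees the written structure is always valid.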
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/sync-models.ts` around lines 84 - 86, The code assumes
opencodeConfig.provider.windsurf exists when assigning models; initialize
missing nesting first to avoid exceptions on fresh opencode.json: after parsing
CONFIG_PATH into opencodeConfig, ensure opencodeConfig.provider is an object
(create if falsy) and ensure opencodeConfig.provider.windsurf is an object
(create if falsy) before setting opencodeConfig.provider.windsurf.models =
discovered; reference CONFIG_PATH, opencodeConfig, provider.windsurf.models, and
discovered when making this change so the writeFileSync call always writes a
valid structure.
```ts
async authorize() {
  console.log('Synchronizing Windsurf models...');
  try {
    const { execSync } = await import('child_process');
    const path = await import('path');
    const { fileURLToPath } = await import('url');

    // Find the root of the plugin package
    const currentFile = fileURLToPath(import.meta.url);
    const pluginRoot = path.join(path.dirname(currentFile), '..', '..');
    const scriptPath = path.join(pluginRoot, 'scripts', 'sync-models.ts');

    console.log(`Executing sync script: ${scriptPath}`);
    // Run bun install first to ensure sqlite3 exists in the cache folder, then run script
    execSync(`cd ${pluginRoot} && bun install --silent && bun ${scriptPath}`, { stdio: 'inherit' });
    return { type: 'success', key: 'synced' };
  } catch (err) {
    console.error('Failed to sync models:', err);
    return { type: 'failed' };
```
🧩 Analysis chain
🏁 Script executed:
# Check the file exists and get line count
wc -l src/plugin.tsRepository: rsvedant/opencode-windsurf-auth
Length of output: 92
🏁 Script executed:
# Read the specific lines mentioned in the review
sed -n '881,899p' src/plugin.tsRepository: rsvedant/opencode-windsurf-auth
Length of output: 1141
🏁 Script executed:
# Get broader context around those lines to understand the function better
sed -n '870,910p' src/plugin.tsRepository: rsvedant/opencode-windsurf-auth
Length of output: 1716
🏁 Script executed:
# Search for any Node.js compatibility logic or fallbacks in the file
rg -n "node|Node|fallback|Bun" src/plugin.ts | head -20Repository: rsvedant/opencode-windsurf-auth
Length of output: 1233
🏁 Script executed:
# Check if there's a PR description or git context
git log --oneline -20Repository: rsvedant/opencode-windsurf-auth
Length of output: 100
The authorize() function lacks a Node.js fallback despite the PR adding Node.js compatibility elsewhere.
The sync action at line 892 hard-codes bun install --silent && bun ${scriptPath}, with no conditional logic for Node.js environments. While the proxy server itself includes a Node.js fallback (lines 735–776), the new "Sync Models" button will fail in any Node.js-only environment where Bun is unavailable. Add a runtime check similar to the one used for the proxy server to either use Bun when available or provide a Node.js alternative.
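A sketch of the runtime probe and command selection this comment describes. The Bun command mirrors the current behavior; the Node fallback invocation (`node -r ts-node/register`) is an assumption from the comment, not the project's documented invocation:

```typescript
// Probe for Bun: the Bun runtime exposes a global `Bun` object.
export function hasBun(): boolean {
  return typeof (globalThis as { Bun?: unknown }).Bun !== "undefined";
}

// Pick the sync command based on the detected runtime. The Node branch
// is a hypothetical fallback; swap in the project's documented Node
// invocation if one exists.
export function syncCommand(scriptPath: string, bunAvailable: boolean = hasBun()): string {
  return bunAvailable
    ? `bun install --silent && bun ${scriptPath}`
    : `npm install --silent && node -r ts-node/register ${scriptPath}`;
}
```

authorize() would then pass `syncCommand(scriptPath)` to execSync instead of the hard-coded Bun string, leaving the catch block unchanged.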
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/plugin.ts` around lines 881 - 899, The authorize() method currently
always runs Bun; change it to detect runtime (reuse the same Bun-vs-Node runtime
check used for the proxy server) and conditionally run the appropriate command
via execSync: if Bun is available run `bun install --silent && bun
${scriptPath}` (current behavior), otherwise run a Node fallback such as `npm
install --silent && node -r ts-node/register ${scriptPath}` or the project's
documented Node invocation; update the authorize() branch that constructs and
calls execSync (and uses scriptPath) so it picks the command based on the
runtime probe rather than hard-coding Bun, and ensure the catch block remains
unchanged.
```ts
// 3. Linux fallback: check /proc if ps aux didn't show it
if (process.platform === 'linux') {
  try {
    const pidMatch = processInfo.match(/^\s*\S+\s+(\d+)/);
    if (pidMatch) {
      const pid = pidMatch[1];
      const environ = fs.readFileSync(`/proc/${pid}/environ`, 'utf8');
      const lines = environ.split('\0');
      for (const line of lines) {
        if (line.startsWith('WINDSURF_CSRF_TOKEN=')) {
          return line.split('=')[1];
        }
      }
    }
  } catch {
    // Fall through
  }
}
```
Linux /proc fallback lacks token format validation.
Stages 1 and 2 validate the CSRF token format using regex [a-f0-9-]+, but the /proc fallback returns the value directly without the same validation. This inconsistency could allow malformed tokens to pass through.
🛡️ Proposed fix to add format validation:

```diff
 if (process.platform === 'linux') {
   try {
     const pidMatch = processInfo.match(/^\s*\S+\s+(\d+)/);
     if (pidMatch) {
       const pid = pidMatch[1];
       const environ = fs.readFileSync(`/proc/${pid}/environ`, 'utf8');
       const lines = environ.split('\0');
       for (const line of lines) {
         if (line.startsWith('WINDSURF_CSRF_TOKEN=')) {
-          return line.split('=')[1];
+          const token = line.substring('WINDSURF_CSRF_TOKEN='.length);
+          if (/^[a-f0-9-]+$/.test(token)) {
+            return token;
+          }
         }
       }
     }
   } catch {
     // Fall through
   }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/plugin/auth.ts` around lines 138 - 155, The /proc linux fallback
currently returns the raw WINDSURF_CSRF_TOKEN value without validating its
format; update the block that reads environ (look for process.platform ===
'linux', pidMatch, fs.readFileSync(`/proc/${pid}/environ`), lines, and the
WINDSURF_CSRF_TOKEN handling) to validate the extracted token against the same
regex used in stages 1–2 (e.g. /^[a-f0-9-]+$/i or the project’s existing
pattern) and only return it if it matches; if it does not match, continue
searching/ignore and allow the fallback to fall through as before.
```ts
return new Promise((resolve, reject) => {
  const client = http2.connect(`http://localhost:${port}`);
  const chunks: Buffer[] = [];

  client.on('error', reject);

  const req = client.request({
    ':method': 'POST',
    ':path': '/exa.language_server_pb.LanguageServerService/GetModelStatuses',
    'content-type': 'application/grpc',
    'te': 'trailers',
    'x-codeium-csrf-token': csrfToken,
  });

  req.on('data', (chunk: Buffer) => {
    chunks.push(chunk);
  });

  req.on('end', () => {
    client.close();
    const fullBuffer = Buffer.concat(chunks);
    if (fullBuffer.length < 5) {
      resolve([]);
      return;
    }

    const messageLength = fullBuffer.readUInt32BE(1);
    const payload = fullBuffer.subarray(5, 5 + messageLength);

    try {
      const models: DiscoveredModel[] = [];
      let offset = 0;

      // Parse GetModelStatusesResponse
      // Field 1: model_status_infos (repeated ModelStatusInfo)
      while (offset < payload.length) {
        const field = parseProtobufField(payload, offset);
        if (!field) break;
        offset += field.bytesConsumed;

        if (field.fieldNum === 1 && field.wireType === 2 && Buffer.isBuffer(field.value)) {
          // Parse ModelStatusInfo
          // Field 1: model (enum)
          // Field 4: model_uid (string)
          let infoOffset = 0;
          let modelEnum = 0;
          let modelUid = '';

          const infoBuf = field.value;
          while (infoOffset < infoBuf.length) {
            const infoField = parseProtobufField(infoBuf, infoOffset);
            if (!infoField) break;
            infoOffset += infoField.bytesConsumed;

            if (infoField.fieldNum === 1 && infoField.wireType === 0) {
              modelEnum = Number(infoField.value);
            } else if (infoField.fieldNum === 4 && infoField.wireType === 2 && Buffer.isBuffer(infoField.value)) {
              modelUid = infoField.value.toString('utf8');
            }
          }

          if (modelEnum > 0) {
            models.push({
              id: modelUid || `model-${modelEnum}`,
              name: modelUid || `Model ${modelEnum}`,
              enumValue: modelEnum
            });
          }
        }
      }
      resolve(models);
    } catch (err) {
      reject(err);
    }
  });

  req.end(body);
});
```
🧩 Analysis chain
🏁 Script executed:
# First, let's check the file exists and read the specific lines mentioned
wc -l src/plugin/grpc-client.tsRepository: rsvedant/opencode-windsurf-auth
Length of output: 104
🏁 Script executed:
# Read the specific lines (698-775) to verify the code snippet
sed -n '698,775p' src/plugin/grpc-client.tsRepository: rsvedant/opencode-windsurf-auth
Length of output: 2642
🏁 Script executed:
# Find the getModels function definition
rg -n "getModels\s*\(" src/plugin/grpc-client.ts | head -20Repository: rsvedant/opencode-windsurf-auth
Length of output: 176
🏁 Script executed:
# Look for any trailer handling or grpc-status patterns in the file
rg -n "trailer|grpc-status" src/plugin/grpc-client.tsRepository: rsvedant/opencode-windsurf-auth
Length of output: 454
🏁 Script executed:
# Check for req error handlers or other error patterns
rg -n "req\.on\('error" src/plugin/grpc-client.tsRepository: rsvedant/opencode-windsurf-auth
Length of output: 144
🏁 Script executed:
# Get the full context of one of the properly-implemented functions for comparison
sed -n '510,560p' src/plugin/grpc-client.tsRepository: rsvedant/opencode-windsurf-auth
Length of output: 1493
Add error handlers to reject on gRPC failures instead of returning empty list.
getModels() requests trailers with 'te': 'trailers' but never validates them, and lacks a request error handler. Failures like authentication errors or service unavailability are currently silently resolved to [], making callers unable to distinguish between "no models enabled" and actual errors. Add req.on('trailers', ...) to check grpc-status and req.on('error', ...) to handle request-stream failures, matching the error handling pattern used in other functions in this file (e.g., lines 529–556).
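A sketch of the suggested trailer validation. The `grpcTrailerError` helper name is hypothetical; it converts gRPC trailers into an Error when `grpc-status` is non-zero, and the comment shows how it would be wired into the existing `req`/`client` variables:

```typescript
// Turn gRPC trailers into an Error when grpc-status is non-zero;
// returns null for success (status "0") or when no status is present.
export function grpcTrailerError(
  trailers: Record<string, string | string[] | undefined>
): Error | null {
  const status = trailers["grpc-status"];
  if (status === undefined || status === "0") return null;
  const message = trailers["grpc-message"] ?? "unknown gRPC error";
  return new Error(`GetModelStatuses failed: grpc-status=${status} (${message})`);
}

// Wiring inside the Promise body (sketch):
// req.on("error", (err) => { client.close(); reject(err); });
// req.on("trailers", (headers) => {
//   const err = grpcTrailerError(headers);
//   if (err) { client.close(); reject(err); }
// });
```

This lets callers distinguish "no models enabled" (resolved empty list) from authentication or availability failures (rejection).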
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/plugin/grpc-client.ts` around lines 698 - 775, The Promise returned for
the GetModelStatuses call currently ignores request-level errors and gRPC
trailers, causing failures to resolve to an empty list; add a req.on('error',
...) handler that rejects the Promise on stream errors and add
req.on('trailers', (headers) => { ... }) to inspect 'grpc-status' and
'grpc-message' from the trailers (rejecting when grpc-status !== '0' with a
descriptive Error) and ensure the http2 client is closed on error; reference the
request stream variable req, the http2 client variable client, and the
GetModelStatuses path
'/exa.language_server_pb.LanguageServerService/GetModelStatuses' when making
these additions so behavior matches the error handling used elsewhere in this
file (see the pattern around lines handling other gRPC calls).
```ts
// Generic Private slots
[ModelEnum.MODEL_PRIVATE_1]: 'private-1',
[ModelEnum.MODEL_PRIVATE_2]: 'private-2',
[ModelEnum.MODEL_PRIVATE_3]: 'private-3',
[ModelEnum.MODEL_PRIVATE_4]: 'private-4',
[ModelEnum.MODEL_PRIVATE_5]: 'private-5',
[ModelEnum.MODEL_PRIVATE_6]: 'private-6',
[ModelEnum.MODEL_PRIVATE_7]: 'private-7',
[ModelEnum.MODEL_PRIVATE_8]: 'private-8',
[ModelEnum.MODEL_PRIVATE_9]: 'private-9',
[ModelEnum.MODEL_PRIVATE_10]: 'private-10',
[ModelEnum.MODEL_PRIVATE_11]: 'private-11',
[ModelEnum.MODEL_PRIVATE_12]: 'private-12',
[ModelEnum.MODEL_PRIVATE_13]: 'private-13',
[ModelEnum.MODEL_PRIVATE_14]: 'private-14',
[ModelEnum.MODEL_PRIVATE_15]: 'private-15',
[ModelEnum.MODEL_PRIVATE_16]: 'private-16',
[ModelEnum.MODEL_PRIVATE_17]: 'private-17',
[ModelEnum.MODEL_PRIVATE_18]: 'private-18',
[ModelEnum.MODEL_PRIVATE_19]: 'private-19',
[ModelEnum.MODEL_PRIVATE_20]: 'private-20',
[ModelEnum.MODEL_PRIVATE_21]: 'private-21',
[ModelEnum.MODEL_PRIVATE_22]: 'private-22',
[ModelEnum.MODEL_PRIVATE_23]: 'private-23',
[ModelEnum.MODEL_PRIVATE_24]: 'private-24',
[ModelEnum.MODEL_PRIVATE_25]: 'private-25',
[ModelEnum.MODEL_PRIVATE_26]: 'private-26',
[ModelEnum.MODEL_PRIVATE_27]: 'private-27',
[ModelEnum.MODEL_PRIVATE_28]: 'private-28',
[ModelEnum.MODEL_PRIVATE_29]: 'private-29',
[ModelEnum.MODEL_PRIVATE_30]: 'private-30',
```
Don't surface generic private-* slots as canonical public models.
getCanonicalModels() builds its output from ENUM_TO_MODEL_NAME, so these entries will make /v1/models advertise 30 placeholder models to every account. Keep them internal or filter them out until they're actually discovered for the current user.
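A sketch of the filtering option: keep the private-* slots in the enum map for internal resolution but skip them when building the public canonical list. The map shape is an assumption based on the snippet above, and `filterCanonical` is a hypothetical helper name:

```typescript
// private-* entries stay in the full enum map for internal lookups,
// but are excluded from anything advertised via /v1/models until they
// are actually discovered for the current account.
export function filterCanonical(
  enumToName: Record<number, string>
): Record<number, string> {
  return Object.fromEntries(
    Object.entries(enumToName).filter(([, name]) => !/^private-\d+$/.test(name))
  );
}
```

getCanonicalModels() would build its output from `filterCanonical(ENUM_TO_MODEL_NAME)` instead of the raw map.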
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/plugin/models.ts` around lines 643 - 673, ENUM_TO_MODEL_NAME currently
exposes many placeholder entries (ModelEnum.MODEL_PRIVATE_1 ..
ModelEnum.MODEL_PRIVATE_30) which makes getCanonicalModels() advertise them
publicly; remove or hide these generic "private-*" entries from the public
mapping or filter them out when building the canonical list. Concretely, either
move the ModelEnum.MODEL_PRIVATE_* entries out of ENUM_TO_MODEL_NAME into an
internal-only map, or modify getCanonicalModels() to skip any mapping whose
value matches /^private-/ or whose key is one of
ModelEnum.MODEL_PRIVATE_1..MODEL_PRIVATE_30 so only discovered/real models are
returned. Ensure references to ENUM_TO_MODEL_NAME and getCanonicalModels remain
consistent and add a comment noting why private-* entries are kept internal.
```ts
// ============================================================================
// Additional Models
// ============================================================================
GPT_OSS_120B: 326,
```
Add the gpt-oss-120b name mapping before exposing this enum.
ModelEnum.GPT_OSS_120B is new here, but src/plugin/models.ts still has no forward or reverse mapping for it. If the sync flow writes that model id, requests will silently fall back to the default Claude route instead of GPT OSS.
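A sketch of the missing bidirectional mapping. The literal 326 matches the enum value added in types.ts; the map names below are assumptions standing in for the actual structures in src/plugin/models.ts:

```typescript
const GPT_OSS_120B = 326; // mirrors ModelEnum.GPT_OSS_120B in types.ts

// Forward mapping: model id string -> enum value.
export const nameToEnum: Record<string, number> = {
  "gpt-oss-120b": GPT_OSS_120B,
};

// Reverse mapping: enum value -> model id string.
export const enumToName: Record<number, string> = {
  [GPT_OSS_120B]: "gpt-oss-120b",
};
```

With both directions present, requests for "gpt-oss-120b" resolve to GPT_OSS_120B rather than falling back to the default route.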
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/plugin/types.ts` around lines 215 - 218, ModelEnum.GPT_OSS_120B was added
but there's no mapping for the name "gpt-oss-120b" in the model name↔id maps, so
requests with that id will fall back to the wrong provider; add the forward and
reverse mapping entries for "gpt-oss-120b" ↔ ModelEnum.GPT_OSS_120B in the model
mapping structures used by the conversion functions (the model name-to-id and
id-to-name maps in the models module) so both lookup directions resolve to
GPT_OSS_120B.
```ts
// Test 6: Test Claude 3.7
console.log('6. Testing with Claude 3.7 Sonnet...');
const messages3: ChatMessage[] = [
  { role: 'user', content: 'Reply with just: "Claude 3.7 OK"' }
];

try {
  const chunks: string[] = [];
  const generator = streamChatGenerator(credentials, {
    model: 'claude-3.7-sonnet',
    messages: messages3,
  });

  process.stdout.write('   Response: ');
  for await (const chunk of generator) {
    chunks.push(chunk);
    process.stdout.write(chunk);
  }
  console.log('\n   OK: Claude 3.7 streaming completed\n');
} catch (error) {
  console.error(`\n   FAIL: Claude 3.7 failed: ${error instanceof Error ? error.message : error}`);
  process.exit(1);
```
Skip claude-3.7-sonnet when the account does not expose it.
This repo now supports account-scoped model availability, but this test hard-fails on one fixed model. That turns a healthy install into a false negative for tiers or regions where Claude 3.7 is unavailable; gate it on discovered support or fall back to any supported Claude model.
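A sketch of the gating logic: prefer the exact model, fall back to any Claude model the account exposes, and signal a skip otherwise. The list of available ids is assumed to come from an existing discovery helper such as getAvailableModels(credentials); `pickClaudeModel` is a hypothetical name:

```typescript
// Returns the model id the test should use, or null to skip the
// Claude 3.7 test without failing the suite.
export function pickClaudeModel(
  available: string[],
  preferred: string = "claude-3.7-sonnet"
): string | null {
  if (available.includes(preferred)) return preferred;
  return available.find((id) => id.includes("claude")) ?? null;
}
```

The Test 6 block would call this first and only invoke streamChatGenerator when a non-null model id comes back, logging a skip otherwise.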
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/live/verify-plugin.ts` around lines 106 - 127, The test currently
hardcodes model 'claude-3.7-sonnet' (in the generator call inside the Test 6
block using streamChatGenerator and messages3), which can fail if the account
doesn't expose it; modify the test to first probe available models (e.g., call
an existing helper like getAvailableModels(credentials) or implement a quick
isModelSupported(credentials, 'claude-3.7-sonnet') check) and only run the
streaming block with model: 'claude-3.7-sonnet' if it is reported as supported;
otherwise either pick a supported Claude model from the returned list (e.g.,
find a model name containing 'claude') and use that in the call to
streamChatGenerator, or log and skip the Claude 3.7 test without failing the
suite.
Description
This PR introduces major improvements and fixes to the opencode-windsurf-auth plugin, expanding its compatibility and feature set for both standard and Enterprise users. These changes were implemented using Gemini CLI.
Key Changes
These updates ensure the plugin remains functional with the latest Windsurf versions and provides a seamless, data-driven experience.