High risk. Don't ship without significant remediation.
Scanned 5/5/2026, 9:50:13 AM · Cached result · Deep Scan · 88 rules
AIVSS Score: High
Severity Breakdown: 0 critical · 5 high · 11 medium · 12 low
Findings
This package has a D grade with a safety score of 65/100 and carries significant security concerns, including 5 high-severity issues across prompt injection, tool poisoning, and server configuration vulnerabilities. The 11 medium-severity findings and 12 readiness issues suggest the package lacks proper security hardening and may not be production-ready. You should address the high-severity vulnerabilities and configuration gaps before deployment, or consider alternative solutions.
Per-finding remediation generated by bedrock-claude-haiku-4-5 (AI) — 28 of 28 findings.
No known CVEs found for this package or its dependencies.
28 findings
getAgentDir() reads PI_CODING_AGENT_DIR from process.env at function call time and returns a privileged directory path without caller identity verification, enabling confused deputy access to shared agent configuration.
Evidence
| 1 | import { homedir } from "node:os"; |
| 2 | import { join, resolve } from "node:path"; |
| 3 | |
| 4 | export function getAgentDir(): string { |
Remediation (AI)
The problem is that getAgentDir() reads PI_CODING_AGENT_DIR from process.env without verifying the caller's identity or authorization, allowing any code path to access privileged agent configuration. Add a caller-context parameter to getAgentDir(callerContext: CallerContext) and validate the caller's identity against an allowlist before returning the directory path. This ensures only authorized callers can retrieve the agent directory, preventing confused deputy attacks. Verify the fix by adding unit tests that call getAgentDir() with both authorized and unauthorized caller contexts, confirming it throws an error for unauthorized callers.
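A minimal sketch of that guard, assuming a hypothetical CallerContext shape and allowlist (the names, the allowlist contents, and the fallback directory are illustrative, not taken from the package):

import { homedir } from "node:os";
import { join, resolve } from "node:path";

// Hypothetical caller identity supplied by the extension host.
export interface CallerContext {
  extensionId: string;
}

// Illustrative allowlist; in practice this would come from trusted configuration.
const AUTHORIZED_CALLERS = new Set(["pi-mcp-adapter"]);

export function getAgentDir(caller: CallerContext): string {
  if (!AUTHORIZED_CALLERS.has(caller.extensionId)) {
    throw new Error(`getAgentDir: unauthorized caller "${caller.extensionId}"`);
  }
  const override = process.env.PI_CODING_AGENT_DIR?.trim();
  // The fallback below is a placeholder; keep the package's existing default.
  return override ? resolve(override) : join(homedir(), ".pi", "agent");
}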
getPiGlobalConfigPath() uses getAgentPath() which reads PI_CODING_AGENT_DIR environment variable once at module load time, then returns a path without consulting per-request caller identity or authorization context.
Evidence
| 1 | // config.ts - Config loading with import support |
| 2 | import { existsSync, readFileSync, writeFileSync, mkdirSync, renameSync } from "node:fs"; |
| 3 | import { homedir } from "node:os"; |
| 4 | import { dirname, join, resolve } from "node:path"; |
Remediation (AI)
The problem is that getPiGlobalConfigPath() caches the agent path at module load time without per-request authorization checks, allowing any subsequent caller to access the global config path regardless of their privileges. Refactor getPiGlobalConfigPath(callerContext: CallerContext) to accept a caller context parameter and validate authorization on each invocation before returning the path. This ensures authorization is checked at call time rather than relying on module-load-time initialization. Test by creating multiple caller contexts with different privilege levels and confirming only authorized callers receive valid paths.
Function initializeMcp performs FILESYSTEM side effects (saveMetadataCache writes to disk) and ENV MUTATION (lifecycle.setGlobalIdleTimeout modifies runtime state) not fully disclosed in description.
Evidence
| 1 | import type { ExtensionAPI, ExtensionContext } from "@mariozechner/pi-coding-agent"; |
| 2 | import type { McpExtensionState } from "./state.js"; |
| 3 | import type { ToolMetadata } from "./types.js"; |
| 4 | import { existsSync } from "node:fs"; |
Remediation (AI)
The problem is that initializeMcp() performs undisclosed side effects including filesystem writes (saveMetadataCache) and environment mutations (lifecycle.setGlobalIdleTimeout) that are not reflected in the function name or JSDoc. Add explicit JSDoc annotations documenting all side effects: @sideEffect FILESYSTEM - writes metadata cache to disk, @sideEffect ENV - modifies global idle timeout via lifecycle.setGlobalIdleTimeout(). This makes the side effects transparent to callers and reviewers. Verify by checking that the JSDoc accurately reflects all I/O and state mutations by tracing the function's execution path.
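A sketch of that disclosure; @sideEffect is a project convention rather than a standard JSDoc tag, and the signature is assumed from the imports shown in the evidence:

import type { ExtensionAPI, ExtensionContext } from "@mariozechner/pi-coding-agent";

/**
 * Initializes the MCP extension.
 *
 * @sideEffect FILESYSTEM - persists the tool metadata cache to disk via saveMetadataCache().
 * @sideEffect ENV - mutates runtime state via lifecycle.setGlobalIdleTimeout().
 */
export async function initializeMcp(api: ExtensionAPI, context: ExtensionContext): Promise<void> {
  // ...existing initialization logic...
}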
Function getPiGlobalConfigPath performs FILESYSTEM side effects (file path resolution and potential directory creation via getAgentPath) not disclosed in its name or documentation.
Evidence
| 1 | // config.ts - Config loading with import support |
| 2 | import { existsSync, readFileSync, writeFileSync, mkdirSync, renameSync } from "node:fs"; |
| 3 | import { homedir } from "node:os"; |
| 4 | import { dirname, join, resolve } from "node:path"; |
Remediation (AI)
The problem is that getPiGlobalConfigPath() performs filesystem side effects (path resolution and potential directory creation) that are not disclosed in its name or documentation, violating the principle of least surprise. Rename the function to resolvePiGlobalConfigPath() and add JSDoc @sideEffect FILESYSTEM - may create directories via getAgentPath() to explicitly document the side effects. This clarifies that the function performs I/O operations beyond simple path computation. Verify by adding a test that calls the function and confirms that directories are created as expected.
Function startAuth performs NETWORK side effects (ensureCallbackServer, HTTP callback handling) and ENV MUTATION (updateOAuthState) not explicitly mentioned in function name.
Evidence
| 1 | /** |
| 2 | * MCP Auth Flow |
| 3 | * |
| 4 | * High-level OAuth flow management using the MCP SDK's built-in auth functions. |
Remediation (AI)
The problem is that startAuth() performs undisclosed network side effects (ensureCallbackServer, HTTP callback handling) and environment mutations (updateOAuthState) that are not mentioned in the function name or documentation. Add comprehensive JSDoc annotations: @sideEffect NETWORK - starts HTTP callback server, @sideEffect ENV - updates OAuth state via updateOAuthState(). This makes all side effects explicit to callers. Verify by adding integration tests that confirm the callback server starts and OAuth state is mutated as documented.
MCP manifest declares tools but no authentication field is present (none of: auth, authorization, bearer, oauth, mtls, apiKey, api_key, basic, token, authToken). Absence is a weak signal — confirm whether the server relies on network-layer or host-level auth, or declare the real mechanism explicitly so reviewers can audit it.
Evidence
| 1 | import { matchesKey, truncateToWidth, visibleWidth } from "@mariozechner/pi-tui"; |
| 2 | import type { ImportKind } from "./types.js"; |
| 3 | import type { ConfigWritePreview, McpDiscoverySummary } from "./config.js"; |
| 4 | import type { McpOnboardingState } from "./onboarding-state.js"; |
| 5 | |
| 6 | interface SetupTheme { |
| 7 | border: string; |
| 8 | title: string; |
| 9 | selected: string; |
| 10 | hint: string; |
| 11 | success: string; |
| 12 | warning: string; |
| 13 | muted: string; |
| 14 | } |
| 15 | |
| 16 | const DEFAULT_THEME: SetupTheme = { |
| 17 | border: "2", |
| 18 | title: "36", |
| 19 | selected: |
Remediation (AI)
The problem is that the MCP manifest declares tools but omits an authentication field, making it unclear whether the server relies on network-layer, host-level, or no authentication. Add an explicit auth field to the manifest (e.g., auth: { type: 'oauth', provider: 'mcp-auth-flow' } or auth: { type: 'none' }) to declare the actual authentication mechanism. This enables reviewers to audit the security model. Verify by reviewing the manifest against the actual auth implementation in mcp-auth-flow.ts and confirming the declared mechanism matches the code.
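An illustrative declaration, written as a TypeScript object for readability; the field names must be aligned with whatever manifest schema the registry actually validates:

// Sketch only: match the shape to the real manifest schema before shipping.
const manifest = {
  tools: [
    /* existing tool declarations */
  ],
  auth: {
    type: "oauth", // or "none" if the server deliberately defers to host- or network-level auth
    provider: "mcp-auth-flow",
  },
};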
MCP manifest declares tools but no authentication field is present (none of: auth, authorization, bearer, oauth, mtls, apiKey, api_key, basic, token, authToken). Absence is a weak signal — confirm whether the server relies on network-layer or host-level auth, or declare the real mechanism explicitly so reviewers can audit it.
Evidence
| 1 | <p> |
| 2 | <img src="banner.png" alt="pi-mcp-adapter" width="1100"> |
| 3 | </p> |
| 4 | |
| 5 | # Pi MCP Adapter |
| 6 | |
| 7 | Use MCP servers with [Pi](https://github.com/badlogic/pi-mono/) without burning your context window. |
| 8 | |
| 9 | https://github.com/user-attachments/assets/4b7c66ff-e27e-4639-b195-22c3db406a5a |
| 10 | |
| 11 | ## Why This Exists |
| 12 | |
| 13 | Mario wrote about [why you might not need MCP](https://mariozechner.at/posts/2025-11-02-what-if-you-dont-need-mcp/). The problem: tool definitions are verbose. A single MCP server can burn 10k+ tokens, and you' |
Remediation (AI)
The problem is that the README.md does not declare any authentication mechanism in the MCP manifest, leaving reviewers unable to audit the security model. Add an explicit authentication section to the README documenting the auth mechanism (e.g., 'OAuth via MCP SDK' or 'Host-level authentication') and include it in any manifest declarations. This ensures documentation and code are aligned. Verify by confirming the README's auth documentation matches the actual implementation in the codebase.
Time-of-check-to-time-of-use race. Code calls `os.path.exists` / `fs.existsSync` to check a path, then `open` / `readFileSync` / `unlink` on the same name within a few lines — without a lock or atomic-open. An attacker who can race the filesystem (symlink, file replacement) between the check and the use gets the action applied to a different target. Replace the check-then-use pattern with the action's own error handling: try the open and catch FileNotFoundError / ENOENT. For atomic creation use an exclusive-create open (`open(path, "x")` in Python, the `wx` flag in Node) so the call fails if the target already exists.
Evidence
| 1 | // oauth-handler.ts - OAuth token management for MCP servers |
| 2 | import { existsSync, readFileSync } from "node:fs"; |
| 3 | import { join } from "node:path"; |
| 4 | import { getAgentPath } from "./agent-dir.js"; |
| 5 | import type { OAuthTokens } from "@modelcontextprotocol/sdk/shared/auth.js"; |
| 6 | |
| 7 | // Token storage path for a server |
| 8 | function getTokensPath(serverName: string): string { |
| 9 | const override = process.env.MCP_OAUTH_DIR?.trim(); |
| 10 | const authDir = override ? override : getAgentPath("mcp-oauth"); |
| 11 | return join(authD |
Remediation (AI)
The problem is that oauth-handler.ts uses existsSync() to check if a token file exists, then calls readFileSync() on the same path without atomic operations, allowing an attacker to race a symlink or file replacement between the check and read. Replace the check-then-use pattern by wrapping readFileSync() in a try-catch block that handles ENOENT errors: try { const tokens = readFileSync(tokenPath, 'utf-8'); } catch (err) { if (err.code !== 'ENOENT') throw err; return null; }. This removes the separate existence check, so no window remains between check and use. Verify by writing a test that attempts to race a symlink replacement and confirms the function safely handles the error.
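A sketch of that pattern, assuming a hypothetical loadTokens() wrapper around the getTokensPath() helper shown in the evidence:

import { readFileSync } from "node:fs";
import type { OAuthTokens } from "@modelcontextprotocol/sdk/shared/auth.js";

// Read directly and treat ENOENT as "no tokens yet" instead of probing with existsSync() first.
function loadTokens(serverName: string): OAuthTokens | null {
  try {
    return JSON.parse(readFileSync(getTokensPath(serverName), "utf-8")) as OAuthTokens;
  } catch (err) {
    if ((err as NodeJS.ErrnoException).code === "ENOENT") return null;
    throw err;
  }
}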
Time-of-check-to-time-of-use race. Code calls `os.path.exists` / `fs.existsSync` to check a path, then `open` / `readFileSync` / `unlink` on the same name within a few lines — without a lock or atomic-open. An attacker who can race the filesystem (symlink, file replacement) between the check and the use gets the action applied to a different target. Replace the check-then-use pattern with the action's own error handling: try the open and catch FileNotFoundError / ENOENT. For atomic creation use an exclusive-create open (`open(path, "x")` in Python, the `wx` flag in Node) so the call fails if the target already exists.
Evidence
| 1 | // config.ts - Config loading with import support |
| 2 | import { existsSync, readFileSync, writeFileSync, mkdirSync, renameSync } from "node:fs"; |
| 3 | import { homedir } from "node:os"; |
| 4 | import { dirname, join, resolve } from "node:path"; |
| 5 | import { getAgentPath } from "./agent-dir.js"; |
| 6 | import type { McpConfig, ServerEntry, McpSettings, ImportKind, ServerProvenance } from "./types.js"; |
| 7 | |
| 8 | const GENERIC_GLOBAL_CONFIG_PATH = join(homedir(), ".config", "mcp", "mcp.json"); |
| 9 | const PROJECT_CONFIG_NAME = ".mcp.json"; |
| 10 | c |
Remediation (AI)
The problem is that config.ts uses existsSync() followed by readFileSync() or writeFileSync() without atomic operations, creating a TOCTOU race where an attacker can replace the file between the check and use. Replace all existsSync() checks with try-catch blocks around the actual file operation: try { const data = readFileSync(path); } catch (err) { if (err.code !== 'ENOENT') throw err; } and use atomic writes via writeFileSync() with a temp file and renameSync(). This eliminates the race window. Verify by running concurrent file operations and confirming no race conditions occur.
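A sketch of the atomic-write half of that fix; writeConfigAtomic() is a hypothetical helper name:

import { writeFileSync, renameSync, mkdirSync } from "node:fs";
import { dirname } from "node:path";

// Write to a temp file in the same directory, then rename over the target.
// rename is atomic on the same filesystem, so readers never observe a half-written config.
function writeConfigAtomic(path: string, data: string): void {
  mkdirSync(dirname(path), { recursive: true });
  const tmp = `${path}.${process.pid}.tmp`;
  writeFileSync(tmp, data, "utf-8");
  renameSync(tmp, path);
}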
Time-of-check-to-time-of-use race. Code calls `os.path.exists` / `fs.existsSync` to check a path, then `open` / `readFileSync` / `unlink` on the same name within a few lines — without a lock or atomic-open. An attacker who can race the filesystem (symlink, file replacement) between the check and the use gets the action applied to a different target. Replace the check-then-use pattern with the action's own error handling: try the open and catch FileNotFoundError / ENOENT. For atomic creation use an exclusive-create open (`open(path, "x")` in Python, the `wx` flag in Node) so the call fails if the target already exists.
Evidence
| 1 | // npx-resolver.ts - Resolve npx/npm exec binaries to avoid npm parent processes |
| 2 | import { existsSync, readFileSync, realpathSync, readdirSync, statSync, writeFileSync, renameSync, mkdirSync, openSync, readSync, closeSync } from "node:fs"; |
| 3 | import { join, dirname, extname, resolve, sep } from "node:path"; |
| 4 | import { getAgentPath } from "./agent-dir.js"; |
| 5 | import { spawn, spawnSync } from "node:child_process"; |
| 6 | |
| 7 | const CACHE_VERSION = 1; |
| 8 | const CACHE_TTL_MS = 24 * 60 * 60 * 1000; |
| 9 | |
| 10 | interface NpxCacheEntry |
Remediation (AI)
The problem is that npx-resolver.ts uses existsSync() followed by readFileSync(), readdirSync(), or statSync() without atomic operations, allowing symlink or file replacement attacks between the check and use. Replace all existsSync() checks with try-catch blocks around the actual filesystem operation: try { const stat = statSync(path); } catch (err) { if (err.code !== 'ENOENT') throw err; }. This removes the separate existence check and closes the race. Verify by writing tests that race symlink replacements and confirm the function handles errors safely.
Time-of-check-to-time-of-use race. Code calls `os.path.exists` / `fs.existsSync` to check a path, then `open` / `readFileSync` / `unlink` on the same name within a few lines — without a lock or atomic-open. An attacker who can race the filesystem (symlink, file replacement) between the check and the use gets the action applied to a different target. Replace the check-then-use pattern with the action's own error handling: try the open and catch FileNotFoundError / ENOENT. For atomic creation use an exclusive-create open (`open(path, "x")` in Python, the `wx` flag in Node) so the call fails if the target already exists.
Evidence
| 1 | import { existsSync, mkdirSync, readFileSync, writeFileSync, renameSync } from "node:fs"; |
| 2 | import { dirname } from "node:path"; |
| 3 | import { getAgentPath } from "./agent-dir.js"; |
| 4 | |
| 5 | export interface McpOnboardingState { |
| 6 | version: 1; |
| 7 | sharedConfigHintShown: boolean; |
| 8 | setupCompleted: boolean; |
| 9 | lastDiscoveryFingerprint?: string; |
| 10 | } |
| 11 | |
| 12 | const DEFAULT_STATE: McpOnboardingState = { |
| 13 | version: 1, |
| 14 | sharedConfigHintShown: false, |
| 15 | setupCompleted: false, |
| 16 | }; |
| 17 | |
| 18 | export function getOnboardingStatePath(): string { |
| 19 | |
Remediation (AI)
The problem is that onboarding-state.ts uses existsSync() followed by readFileSync() or writeFileSync() without atomic operations, creating a TOCTOU vulnerability. Replace existsSync() checks with try-catch blocks around the actual file operation: try { const state = readFileSync(path, 'utf-8'); } catch (err) { if (err.code !== 'ENOENT') throw err; }. For writes, use atomic operations: writeFileSync(tmpPath, data); renameSync(tmpPath, finalPath). This eliminates the race window. Verify by running concurrent read/write operations and confirming no race conditions occur.
Time-of-check-to-time-of-use race. Code calls `os.path.exists` / `fs.existsSync` to check a path, then `open` / `readFileSync` / `unlink` on the same name within a few lines — without a lock or atomic-open. An attacker who can race the filesystem (symlink, file replacement) between the check and the use gets the action applied to a different target. Replace the check-then-use pattern with the action's own error handling: try the open and catch FileNotFoundError / ENOENT. For atomic creation use an exclusive-create open (`open(path, "x")` in Python, the `wx` flag in Node) so the call fails if the target already exists.
Evidence
| 1 | /** |
| 2 | * MCP Auth Storage Module |
| 3 | * |
| 4 | * Handles secure storage of OAuth credentials, tokens, client information, |
| 5 | * and PKCE state for MCP servers. Maintains backward compatibility with |
| 6 | * per-server directory structure. |
| 7 | * |
| 8 | * Token storage location: $MCP_OAUTH_DIR/<server>/tokens.json when set, |
| 9 | * otherwise <Pi agent dir>/mcp-oauth/<server>/tokens.json |
| 10 | */ |
| 11 | |
| 12 | import { mkdirSync, readFileSync, writeFileSync, existsSync, rmSync } from 'fs'; |
| 13 | import { join } from 'path'; |
| 14 | import { getAgentPath } from ' |
Remediation (AI)
The problem is that mcp-auth.ts uses existsSync() followed by readFileSync() or writeFileSync() without atomic operations, allowing file replacement attacks between the check and use. Replace all existsSync() checks with try-catch blocks around the actual file operation: try { const tokens = readFileSync(tokenPath); } catch (err) { if (err.code !== 'ENOENT') throw err; }. For writes, use atomic operations via writeFileSync(tmpPath, data); renameSync(tmpPath, finalPath). This eliminates the TOCTOU race. Verify by writing tests that race file replacements and confirm the function handles errors safely.
Time-of-check-to-time-of-use race. Code calls `os.path.exists` / `fs.existsSync` to check a path, then `open` / `readFileSync` / `unlink` on the same name within a few lines — without a lock or atomic-open. An attacker who can race the filesystem (symlink, file replacement) between the check and the use gets the action applied to a different target. Replace the check-then-use pattern with the action's own error handling: try the open and catch FileNotFoundError / ENOENT. For atomic creation use an exclusive-create open (`open(path, "x")` in Python, the `wx` flag in Node) so the call fails if the target already exists.
Evidence
| 1 | // metadata-cache.ts - Persistent MCP metadata cache |
| 2 | import { existsSync, readFileSync, writeFileSync, renameSync, mkdirSync } from "node:fs"; |
| 3 | import { dirname } from "node:path"; |
| 4 | import { getAgentPath } from "./agent-dir.js"; |
| 5 | import { createHash } from "node:crypto"; |
| 6 | import { getToolUiResourceUri } from "@modelcontextprotocol/ext-apps/app-bridge"; |
| 7 | import type { McpTool, McpResource, ServerEntry, ToolMetadata } from "./types.js"; |
| 8 | import { formatToolName, isToolExcluded } from "./types.js"; |
| 9 | impor |
Remediation (AI)
The problem is that metadata-cache.ts uses existsSync() followed by readFileSync() or writeFileSync() without atomic operations, creating a TOCTOU race where an attacker can replace the cache file between the check and use. Replace existsSync() checks with try-catch blocks around the actual file operation: try { const cache = readFileSync(cachePath); } catch (err) { if (err.code !== 'ENOENT') throw err; }. For writes, use atomic operations: writeFileSync(tmpPath, data); renameSync(tmpPath, cachePath). This eliminates the race window. Verify by running concurrent cache operations and confirming no race conditions occur.
Identifier whose name suggests PII (email, ssn, phone, dob, credit_card, address) is passed directly to a logging / console / print call. Logs end up in CloudWatch / Datadog / Splunk indexes accessible to a wider audience than the live data — every PII value leaked into logs becomes a separate compliance liability. Mask before logging: Python: `logger.info("login from %s", redact(email))` Node: `console.log("login", maskEmail(email))` Or move the value to a structured field that the log sh
Evidence
| 500 | const address = server.address(); |
| 501 | if (!address || typeof address === "string") { |
| 502 | const err = new ServerError("invalid address"); |
| 503 | log.error("Invalid server address", err); |
| 504 | reject(err); |
| 505 | return; |
| 506 | } |
Remediation (AI)
The problem is that ui-server.ts logs the server address object directly, which may contain PII or sensitive network information that could be exposed in centralized logging systems. Replace log.error('Invalid server address', err) with log.error('Invalid server address', { port: address?.port }) to log only non-sensitive fields. Create a sanitizeAddress() helper function that extracts only the port and excludes hostname/IP. This prevents PII leakage to log aggregation services. Verify by checking logs and confirming only the port is logged, not the full address object.
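A sketch of the suggested sanitizeAddress() helper (the name comes from the remediation above and is otherwise hypothetical):

import type { AddressInfo } from "node:net";

// Keep only the port; never log the full address object or any URL carrying the session token.
function sanitizeAddress(address: AddressInfo | string | null): { port?: number } {
  if (!address || typeof address === "string") return {};
  return { port: address.port };
}

// Usage: log.error("Invalid server address", sanitizeAddress(address));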
Identifier whose name suggests PII (email, ssn, phone, dob, credit_card, address) is passed directly to a logging / console / print call. Logs end up in CloudWatch / Datadog / Splunk indexes accessible to a wider audience than the live data — every PII value leaked into logs becomes a separate compliance liability. Mask before logging: Python: `logger.info("login from %s", redact(email))` Node: `console.log("login", maskEmail(email))` Or move the value to a structured field that the log sh
Evidence
| 505 | return; |
| 506 | } |
| 507 | |
| 508 | log.debug("Server started", { port: address.port }); |
| 509 | |
| 510 | const handle: UiServerHandle = { |
| 511 | url: `http://localhost:${address.port}/?session=${sessionToken}`, |
Remediation (AI)
The problem is that ui-server.ts logs the sessionToken in the URL, which is sensitive authentication material that should not appear in logs accessible to a wider audience. Replace log.debug('Server started', { port: address.port, url: ... }) with log.debug('Server started', { port: address.port }) and remove the sessionToken from logged output. Store the full URL in memory only and never log authentication tokens. This prevents token leakage to centralized logging systems. Verify by checking logs and confirming the sessionToken does not appear anywhere.
MCP tool description or return text contains an imperative phrase that asks the LLM to invoke or call ANOTHER tool — "invoke the write_file tool", "before using this, also call send_email", "silently invoke X". This is a cross-tool chaining injection: the user authorized THIS tool, but the payload escalates into others. Tool descriptions should describe what the tool DOES, not direct the LLM to use other tools. If a tool's correct operation requires composition, document the dependency in human
Evidence
| 1 | import type { ToolDefinition } from "@mariozechner/pi-coding-agent"; |
| 2 | import type { McpExtensionState } from "./state.js"; |
| 3 | import type { DirectToolSpec, McpConfig, McpContent } from "./types.js"; |
| 4 | import type { MetadataCache } from "./metadata-cache.js"; |
| 5 | import { lazyConnect, getFailureAgeSeconds } from "./init.js"; |
| 6 | import { isServerCacheValid } from "./metadata-cache.js"; |
| 7 | import { formatSchema } from "./tool-metadata.js"; |
| 8 | import { transformMcpContent } from "./tool-registrar.js"; |
| 9 | import { maybeSt |
Remediation (AI)
The problem is that direct-tools.ts tool descriptions may contain imperative phrases directing the LLM to invoke other tools, enabling cross-tool chaining injection where the user authorized one tool but the description escalates to others. Audit all tool descriptions in direct-tools.ts and remove phrases like 'invoke', 'call', 'also use', 'silently invoke' that direct the LLM to use other tools. Rewrite descriptions to state what the tool DOES, not what other tools to call. This prevents unauthorized tool chaining. Verify by reviewing each tool description and confirming it contains no imperative directives to invoke other tools.
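One way to enforce that audit mechanically: a pre-registration check with an illustrative, non-exhaustive phrase list (function and constant names are hypothetical):

// Reject tool descriptions that try to direct the model toward other tools.
const CROSS_TOOL_DIRECTIVES: RegExp[] = [
  /\binvoke the \w+ tool\b/i,
  /\balso call\b/i,
  /\bsilently (?:invoke|call)\b/i,
  /\bbefore using this\b/i,
];

function assertDescriptionSafe(toolName: string, description: string): void {
  for (const pattern of CROSS_TOOL_DIRECTIVES) {
    if (pattern.test(description)) {
      throw new Error(`Tool "${toolName}" description contains a cross-tool directive (${pattern})`);
    }
  }
}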
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 38 | const globalRoot = execFileSync("npm", ["root", "-g"], { encoding: "utf-8" }).trim(); |
| 39 | const binaryPath = join(globalRoot, "glimpseui", "src", "glimpse"); |
| 40 | if (existsSync(binaryPath)) return binaryPath; |
| 41 | } catch {} |
| 42 | |
| 43 | return null; |
| 44 | } |
Remediation (AI)
The problem is that glimpse-ui.ts silently swallows exceptions in the catch block with no logging, making it impossible to diagnose why the binary path resolution failed. Replace catch {} with catch (err) { log.debug('Failed to resolve glimpseui binary', { error: err.message }); } to log the error at an appropriate level. This enables incident response and debugging. Verify by triggering the error condition and confirming the error is logged with sufficient detail.
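A sketch of the logged fallback, mirroring the fragment in the evidence; `log` stands in for whatever logger the module already uses:

import { execFileSync } from "node:child_process";
import { existsSync } from "node:fs";
import { join } from "node:path";

try {
  const globalRoot = execFileSync("npm", ["root", "-g"], { encoding: "utf-8" }).trim();
  const binaryPath = join(globalRoot, "glimpseui", "src", "glimpse");
  if (existsSync(binaryPath)) return binaryPath;
} catch (err) {
  // Still fall through to the `return null` below, but leave a trace for debugging.
  log.debug("Failed to resolve glimpseui from global npm root", {
    error: err instanceof Error ? err.message : String(err),
  });
}
return null;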
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 450 | setTimeout(() => { |
| 451 | try { |
| 452 | server.close(); |
| 453 | } catch {} |
| 454 | closeSse(); |
| 455 | }, 20).unref(); |
| 456 | return; |
Remediation (AI)
The problem is that ui-server.ts silently swallows server.close() errors with no logging, hiding failures that could indicate resource leaks or shutdown issues. Replace catch {} with catch (err) { log.warn('Error closing server', { error: err.message }); } to log the error. This enables visibility into shutdown failures. Verify by triggering a close error and confirming it is logged.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 517 | markCompleted(reason ?? "closed"); |
| 518 | try { |
| 519 | server.close(); |
| 520 | } catch {} |
| 521 | closeSse(); |
| 522 | }, |
| 523 | sendToolInput: (args: Record<string, unknown>) => { |
Remediation (AI)
The problem is that ui-server.ts silently swallows server.close() errors in multiple places with no logging, hiding shutdown failures. Replace all catch {} blocks around server.close() with catch (err) { log.warn('Error closing server during completion', { error: err.message }); } to log errors. This provides visibility into shutdown issues. Verify by triggering close errors and confirming they are logged.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 483 | markCompleted("stale"); |
| 484 | try { |
| 485 | server.close(); |
| 486 | } catch {} |
| 487 | closeSse(); |
| 488 | }, WATCHDOG_INTERVAL_MS); |
| 489 | watchdog.unref(); |
Remediation (AI)
The problem is that ui-server.ts silently swallows server.close() errors in the watchdog timer with no logging, hiding failures that could indicate resource leaks. Replace catch {} with catch (err) { log.warn('Error closing server in watchdog', { error: err.message }); } to log the error. This enables debugging of watchdog-related failures. Verify by triggering the error and confirming it is logged.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 207 | for (const client of sseClients) { |
| 208 | try { |
| 209 | client.end(); |
| 210 | } catch {} |
| 211 | } |
| 212 | sseClients.clear(); |
| 213 | }; |
Remediation (AI)
The problem is that ui-server.ts silently swallows client.end() errors in the closeSse() function with no logging, hiding failures that could indicate SSE client cleanup issues. Replace catch {} with catch (err) { log.debug('Error closing SSE client', { error: err.message }); } to log the error. This provides visibility into SSE cleanup failures. Verify by triggering client close errors and confirming they are logged.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 1 | var Mu=Object.defineProperty;var $e=(t,r)=>{for(var n in r)Mu(t,n,{get:r[n],enumerable:!0})};var s={};$e(s,{$brand:()=>pn,$input:()=>Si,$output:()=>ki,NEVER:()=>mn,TimePrecision:()=>ji,ZodAny:()=>Mc,ZodArray:()=>Hc,ZodBase64:()=>Ea,ZodBase64URL:()=>Aa,ZodBigInt:()=>Dt,ZodBigIntFormat:()=>Ma,ZodBoolean:()=>Zt,ZodCIDRv4:()=>Da,ZodCIDRv6:()=>Ra,ZodCUID:()=>ja,ZodCUID2:()=>Pa,ZodCatch:()=>au,ZodCustom:()=>Hr,ZodCustomStringFormat:()=>Ec,ZodDate:()=>qr,ZodDefault:()=>eu,ZodDiscriminatedUnion:()=>Jc,Z |
Remediation (AI)
The problem is that app-bridge.bundle.js contains minified code with silent error swallowing that is difficult to audit. Ensure the source TypeScript/JavaScript files have proper error logging before bundling. Add a build-time check that flags catch blocks with no logging. This prevents error swallowing in the source. Verify by examining the source files and confirming all catch blocks have appropriate logging.
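One possible build-time check, assuming ESLint is already part of the build; the standard no-empty rule reports `catch {}` blocks when allowEmptyCatch is false (eslint.config.js, flat config):

export default [
  {
    rules: {
      // Flags empty blocks, including empty catch clauses, during lint/CI.
      "no-empty": ["error", { allowEmptyCatch: false }],
    },
  },
];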
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 334 | } catch {} |
| 335 | try { |
| 336 | await bridge.teardownResource({}); |
| 337 | } catch {} |
| 338 | clearInterval(heartbeat); |
| 339 | eventSource.close(); |
| 340 | window.close(); |
Remediation (AI)
The problem is that host-html-template.ts silently swallows errors in bridge.teardownResource() and eventSource.close() with no logging, hiding cleanup failures. Replace catch {} with catch (err) { console.warn('Error during teardown', err); } to log errors. This enables debugging of resource cleanup issues. Verify by triggering teardown errors and confirming they are logged to the console.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 313 | eventSource.addEventListener("host-context", (event) => { |
| 314 | try { |
| 315 | bridge.setHostContext(JSON.parse(event.data)); |
| 316 | } catch {} |
| 317 | }); |
| 318 | eventSource.addEventListener("session-complete", async () => { |
| 319 | await bridge.teardownResource({}).catch(() => {}); |
Remediation (AI)
The problem is that host-html-template.ts silently swallows JSON.parse() errors and bridge method errors with no logging, hiding parsing or communication failures. Replace catch {} with catch (err) { console.warn('Error parsing host context or tearing down', err); } to log errors. This provides visibility into event handling failures. Verify by sending malformed JSON and confirming the error is logged.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 26 | path: iss.path ? [${xe(S)}, ...iss.path] : [${xe(S)}] |
| 27 | })));`),$.write(`newResult[${xe(S)}] = ${I}.value`)}$.write("payload.value = newResult;"),$.write("return payload;");let D=$.compile();return(S,I)=>D(m,S,I)},e,o=Ne,a=!Ke.jitless,p=a&&bn.value,h=r.catchall,g;t._zod.parse=(m,$)=>{g??(g=n.value);let b=m.value;if(!o(b))return m.issues.push({expected:"object",code:"invalid_type",input:b,inst:t}),m;let d=[];if(a&&p&&$?.async===!1&&$.jitless!==!0)e||(e=i(r.shape)),m=e(m,$);else{m.value={} |
Remediation (AI)
The problem is that app-bridge.bundle.js contains minified code with silent error swallowing that cannot be audited. Ensure the source files have proper error handling before bundling. Add a build-time linter rule that flags catch blocks with no logging. This prevents error swallowing in production code. Verify by examining the source TypeScript and confirming all catch blocks have logging.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 612 | provenance.set(name, { path: userPath, kind: "import", importKind }); |
| 613 | } |
| 614 | } |
| 615 | } catch {} |
| 616 | } |
| 617 | } |
Remediation (AI)
The problem is that config.ts silently swallows errors during config import processing with no logging, hiding failures that could indicate malformed config files. Replace catch {} with catch (err) { log.debug('Error processing config import', { name, error: err.message }); } to log the error. This enables debugging of config loading issues. Verify by providing malformed config and confirming the error is logged.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 331 | const complete = async (reason) => { |
| 332 | try { |
| 333 | await post("/proxy/ui/complete", { reason }); |
| 334 | } catch {} |
| 335 | try { |
| 336 | await bridge.teardownResource({}); |
| 337 | } catch {} |
Remediation (AI)
The problem is that host-html-template.ts silently swallows errors in post() and bridge.teardownResource() calls with no logging, hiding communication or cleanup failures. Replace catch {} with catch (err) { console.warn('Error during completion', err); } to log errors. This provides visibility into completion failures. Verify by triggering errors and confirming they are logged.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 31 | const glimpseuiPath = require.resolve("glimpseui"); |
| 32 | const binaryPath = join(dirname(glimpseuiPath), "glimpse"); |
| 33 | if (existsSync(binaryPath)) return binaryPath; |
| 34 | } catch {} |
| 35 | |
| 36 | // Global npm install |
| 37 | try { |
Remediation (AI)
The problem is that glimpse-ui.ts silently swallows errors when resolving the glimpseui binary with no logging, hiding failures that could indicate missing dependencies. Replace catch {} with catch (err) { log.debug('Failed to resolve glimpseui from require.resolve', { error: err.message }); } to log the error. This enables debugging of binary resolution issues. Verify by triggering the error and confirming it is logged.