Use with caution. Address findings before production.
Scanned 5/12/2026, 7:16:09 PM · Cached result · Deep Scan · 91 rules · How we decide →
AIVSS Score
Medium
Severity Breakdown
0
critical
1
high
20
medium
0
low
MCP Server Information
Findings
This package carries a C grade with 20 medium-severity issues that pose meaningful security risks, primarily centered on server configuration weaknesses (14 findings) and resource exhaustion vulnerabilities (5 findings). One high-severity command injection vulnerability and one hardcoded secret represent direct attack vectors that should be addressed before deployment. The 70/100 safety score indicates this package requires careful review and likely remediation before use in production environments.
AI · Per-finding remediation generated by bedrock-claude-haiku-4-5 for 21 of 21 findings. Click any finding to read.
No known CVEs found for this package or its dependencies.
Scan Details
Done
Building your own MCP server?
Same rules, same LLM judges, same grade. Private scans stay isolated to your account and never appear in the public registry. Required for code your team hasn't shipped yet.
21 of 21 findings
Command injection risk. Shell-execution sink called with interpolated / attacker-controllable input. Use list-arg subprocess with shell=False, or escape every variable via shlex.quote (Python) / shell-escape (Node).
Evidence
| 50 | console.log(`Downloaded to ${publisherPath}`); |
| 51 | |
| 52 | // Create the new server.json in the temporary directory |
| 53 | execSync(`${publisherPath} init`, {cwd: tmpDir, stdio: 'inherit'}); |
| 54 | |
| 55 | const newServerJsonPath = path.join(tmpDir, 'server.json'); |
| 56 | const newServerJson = JSON.parse(fs.readFileSync(newServerJsonPath, 'utf-8')); |
Remediation (AI)
The problem is that `execSync()` is called with a template string containing `publisherPath`, which is attacker-controllable and can inject arbitrary shell commands. Note that `execSync()` accepts only a command string, so there is no array form of it to fall back on; replace `execSync(\`${publisherPath} init\`, {cwd: tmpDir, stdio: 'inherit'})` with `execFileSync(publisherPath, ['init'], {cwd: tmpDir, stdio: 'inherit'})` (or `spawnSync()` with the same argument array). Passing arguments as an array spawns the binary without a shell, preventing command injection even if `publisherPath` contains spaces or shell metacharacters. Verify by testing with a path containing spaces, semicolons, or backticks; the command should fail gracefully or treat them as literal filename characters rather than executing them.
Hardcoded secret detected in source. MCP servers often proxy between the model and a third-party API, so any committed credential grants that access to anyone who can read the repo. Move to an environment variable or secret manager and rotate the leaked value.
Evidence
| 235 | const cruxManager = DevTools.CrUXManager.instance(); |
| 236 | // go/jtfbx. Yes, we're aware this API key is public. ;) |
| 237 | cruxManager.setEndpointForTesting( |
| 238 | 'https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=AIzaSyBn5gimNjhiEyA_euicSKko6IlD3HdgUfk', |
| 239 | ); |
| 240 | const cruxSetting = |
| 241 | DevTools.Common.Settings.Settings.instance().createSetting('field-data', { |
Remediation (AI)
The problem is that a Google API key is hardcoded in the source file `src/tools/performance.ts` in the CrUX endpoint URL, which grants anyone with repository access direct use of the key's API quota and potential for abuse. Move the API key to an environment variable by building the endpoint as `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${process.env.CRUX_API_KEY}`, then rotate the exposed key in the Google Cloud Console. This ensures credentials are never committed to version control and can be managed per deployment environment. Verify by confirming the environment variable is set before running the tool, and check that the API call succeeds with the new variable-based URL.
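A minimal sketch of the environment-variable approach; `CRUX_API_KEY` is an assumed variable name, not one this project defines, and failing fast beats interpolating `undefined` into the request URL:

```javascript
// Hedged sketch: CRUX_API_KEY is an assumed variable name. The env
// parameter defaults to process.env but is injectable for testing.
function cruxEndpoint(env = process.env) {
  const key = env.CRUX_API_KEY;
  if (!key) {
    // Fail loudly at startup instead of sending a broken request.
    throw new Error('CRUX_API_KEY is not set');
  }
  return `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${encodeURIComponent(key)}`;
}
```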
Network / IO / subprocess call without an explicit timeout. A malicious or hung upstream (HTTP host, socket peer, child process) can pin threads, exhaust connection/process pools, and make the MCP server unresponsive. Always pass a bounded timeout. v2 extends v1 with subprocess coverage (R03 from the legacy readiness audit).
Evidence
| 348 | Example without arguments: `() => { |
| 349 | return document.title |
| 350 | }` or `async () => { |
| 351 | return await fetch("example.com") |
| 352 | }`. |
| 353 | Example with arguments: `(el) => { |
| 354 | return el.innerText; |
Remediation (AI)
The problem is that the example code in the documentation shows `fetch('example.com')` without any timeout parameter, which could hang indefinitely if the remote server is unresponsive or malicious. Add an explicit timeout to the fetch call by wrapping it with `Promise.race()` and `setTimeout()`, or use the `AbortController` with a timeout: `const controller = new AbortController(); const timeout = setTimeout(() => controller.abort(), 5000); await fetch('example.com', {signal: controller.signal})`. This prevents thread exhaustion and connection pool starvation by ensuring the MCP server can recover from hung requests. Verify by testing with a non-responsive endpoint and confirming the request aborts after the specified timeout (e.g., 5 seconds).
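The `AbortController` pattern from the remediation can be packaged as a small helper; the URL and the 5-second budget are illustrative:

```javascript
// Minimal timeout wrapper around fetch. Clearing the timer in
// `finally` matters: a dangling setTimeout keeps the event loop
// alive even after the request has settled.
async function fetchWithTimeout(url, ms = 5000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    return await fetch(url, {signal: controller.signal});
  } finally {
    clearTimeout(timer);
  }
}
```

On timeout the promise rejects with an `AbortError`, which the caller can distinguish from ordinary network failures.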
Network / IO / subprocess call without an explicit timeout. A malicious or hung upstream (HTTP host, socket peer, child process) can pin threads, exhaust connection/process pools, and make the MCP server unresponsive. Always pass a bounded timeout. v2 extends v1 with subprocess coverage (R03 from the legacy readiness audit).
Evidence
| 178 | name: 'function', |
| 179 | type: 'string', |
| 180 | description: |
| 181 | 'A JavaScript function declaration to be executed by the tool in the currently selected page.\nExample without arguments: `() => {\n return document.title\n}` or `async () => {\n return await fetch("example.com")\n}`.\nExample with arguments: `(el) => {\n return el.innerText;\n}`\n', |
| 182 | required: true, |
| 183 | }, |
| 184 | args: { |
Remediation (AI)
The problem is that the tool description in `src/bin/chrome-devtools-cli-options.ts` documents executing arbitrary JavaScript functions via `fetch()` without mentioning or enforcing timeouts, allowing malicious or slow upstream hosts to hang the MCP server. Update the description and implementation to document and enforce a timeout parameter (e.g., 30 seconds) for all network operations, and add validation in the execution handler to reject functions without timeout specifications or wrap them with `AbortController` and a default timeout. This ensures user-provided functions cannot pin threads indefinitely. Verify by executing a test function that makes a slow fetch and confirming it aborts after the timeout.
Network / IO / subprocess call without an explicit timeout. A malicious or hung upstream (HTTP host, socket peer, child process) can pin threads, exhaust connection/process pools, and make the MCP server unresponsive. Always pass a bounded timeout. v2 extends v1 with subprocess coverage (R03 from the legacy readiness audit).
Evidence
| 16 | htmlContent: ` |
| 17 | <h1>Network Test</h1> |
| 18 | <script> |
| 19 | fetch('/network_test.html'); // Self fetch to ensure at least one request |
| 20 | </script> |
| 21 | `, |
| 22 | }, |
Remediation (AI)
The problem is that the test HTML in `scripts/eval_scenarios/network_test.ts` contains a bare `fetch('/network_test.html')` call without a timeout, which could hang the test suite if the server is slow or unresponsive. Wrap the fetch call with a timeout using `Promise.race()` or `AbortController`: `const controller = new AbortController(); setTimeout(() => controller.abort(), 5000); fetch('/network_test.html', {signal: controller.signal})`. This prevents test hangs and ensures the scenario completes within a bounded time. Verify by running the test scenario and confirming it completes within the timeout even if the endpoint is slow.
Network / IO / subprocess call without an explicit timeout. A malicious or hung upstream (HTTP host, socket peer, child process) can pin threads, exhaust connection/process pools, and make the MCP server unresponsive. Always pass a bounded timeout. v2 extends v1 with subprocess coverage (R03 from the legacy readiness audit).
Evidence
| 37 | if (cachePath) { |
| 38 | try { |
| 39 | const response = await fetch(`${getRegistry()}/chrome-devtools-mcp/latest`); |
| 40 | const data = response.ok ? await response.json() : null; |
| 41 | |
| 42 | if ( |
Remediation (AI)
The problem is that `src/bin/check-latest-version.ts` calls `fetch()` without a timeout when querying the registry, allowing a slow or compromised registry to hang the version-check operation and block the MCP server startup. Add an explicit timeout by using `AbortController` with a 10-second limit: `const controller = new AbortController(); const timeout = setTimeout(() => controller.abort(), 10000); const response = await fetch(\`${getRegistry()}/chrome-devtools-mcp/latest\`, {signal: controller.signal})`. This ensures the version check fails fast and the server can start even if the registry is unreachable. Verify by simulating a slow registry endpoint and confirming the fetch aborts after 10 seconds.
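A sketch under the same assumptions (the `getRegistry()` URL shape from the evidence), using `AbortSignal.timeout()` (Node 17.3+) as a shorter equivalent of wiring up `AbortController` by hand, and failing open so an unreachable registry never blocks startup:

```javascript
// Illustrative fail-open version check; registryUrl stands in for
// getRegistry() from the original code. Any failure (timeout, DNS
// error, non-2xx status) resolves to null so the caller proceeds.
async function fetchLatestVersion(registryUrl, timeoutMs = 10_000) {
  try {
    const response = await fetch(`${registryUrl}/chrome-devtools-mcp/latest`, {
      signal: AbortSignal.timeout(timeoutMs),
    });
    return response.ok ? await response.json() : null;
  } catch {
    return null; // timed out or unreachable: skip the check
  }
}
```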
Network / IO / subprocess call without an explicit timeout. A malicious or hung upstream (HTTP host, socket peer, child process) can pin threads, exhaust connection/process pools, and make the MCP server unresponsive. Always pass a bounded timeout. v2 extends v1 with subprocess coverage (R03 from the legacy readiness audit).
Evidence
| 30 | Example without arguments: \`() => { |
| 31 | return document.title |
| 32 | }\` or \`async () => { |
| 33 | return await fetch("example.com") |
| 34 | }\`. |
| 35 | Example with arguments: \`(el) => { |
| 36 | return el.innerText; |
Remediation (AI)
The problem is that the documentation in `src/tools/script.ts` shows example code with `fetch('example.com')` without timeout handling, which could allow user-provided scripts to hang the MCP server indefinitely. Update the example and add a note that all fetch calls must include a timeout via `AbortController` or `Promise.race()`, and enforce this in the script execution handler by wrapping user code with a default 30-second timeout. This prevents malicious or slow scripts from exhausting server resources. Verify by executing a test script with a slow fetch and confirming it aborts after the timeout.
Package declares an install-time hook (npm postinstall/preinstall/prepare, setup.py cmdclass override, custom setuptools install class, or non-default pyproject build-backend). Anyone installing this package runs the hook. Confirm the hook is necessary and review its contents; prefer shipping a plain library without install-time execution.
Evidence
| 24 | "test:no-build": "node scripts/test.mjs", |
| 25 | "test:only": "npm run build && node scripts/test.mjs --test-only", |
| 26 | "test:update-snapshots": "npm run build && node scripts/test.mjs --test-update-snapshots", |
| 27 | "prepare": "node --experimental-strip-types scripts/prepare.ts", |
| 28 | "verify-server-json-version": "node --experimental-strip-types scripts/verify-server-json-version.ts", |
| 29 | "update-lighthouse": "node --experimental-strip-types scripts/update-lighthouse.ts", |
| 30 | "update-metrics": "node |
Remediation (AI)
The problem is that `package.json` declares a `prepare` hook that runs `node --experimental-strip-types scripts/prepare.ts`, so the script executes automatically on `npm install` in the repository (and when the package is installed as a git dependency) without explicit user consent. Renaming it to `postinstall` would not help, since that also runs at install time; instead, remove the hook and invoke the script from the `build` script so type-stripping happens only during development and CI builds. Verify by running `npm install` in a clean environment and confirming the script does not execute (or document why it must run at install time if it is truly necessary).
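As a sketch, the same script can run as a build-stage hook instead; `prebuild` is one option, assuming the package's build entry point is the `build` script (only the relevant key is shown):

```json
{
  "scripts": {
    "prebuild": "node --experimental-strip-types scripts/prepare.ts"
  }
}
```

npm runs `prebuild` automatically before `build`, so the step still happens on every development/CI build but no longer on consumer installs.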
MCP manifest declares tools but no authentication field is present (none of: auth, authorization, bearer, oauth, mtls, apiKey, api_key, basic, token, authToken). Absence is a weak signal; confirm whether the server relies on network-layer or host-level auth, or declare the real mechanism explicitly so reviewers can audit it.
Evidence
| 1 | { |
| 2 | "mcpServers": { |
| 3 | "chrome-devtools": { |
| 4 | "command": "npx", |
| 5 | "args": ["chrome-devtools-mcp@latest"] |
| 6 | } |
| 7 | } |
| 8 | } |
Remediation (AI)
The problem is that `.mcp.json` declares MCP tools but includes no `auth`, `authorization`, `apiKey`, or other authentication field, making it unclear whether the server enforces any access control. Add an explicit authentication field documenting the actual mechanism, such as `"auth": "none"` if the server is public, or `"auth": "host-level"` if it relies on the host environment's security. This allows reviewers to audit the security model clearly. Verify by checking that the authentication field matches the actual implementation and that documentation explains the security implications.
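A hedged sketch of the declared-mechanism variant; whether clients and scanners honor a per-server `auth` key is an assumption, and `"none"` here documents that a stdio server relies on host-level isolation rather than its own credentials:

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["chrome-devtools-mcp@latest"],
      "auth": "none"
    }
  }
}
```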
MCP manifest declares tools but no authentication field is present (none of: auth, authorization, bearer, oauth, mtls, apiKey, api_key, basic, token, authToken). Absence is a weak signal; confirm whether the server relies on network-layer or host-level auth, or declare the real mechanism explicitly so reviewers can audit it.
Evidence
| 1 | # How to contribute |
| 2 | |
| 3 | We'd love to accept your patches and contributions to this project. |
| 4 | |
| 5 | ## Before you begin |
| 6 | |
| 7 | ### Sign our Contributor License Agreement |
| 8 | |
| 9 | Contributions to this project must be accompanied by a |
| 10 | [Contributor License Agreement](https://cla.developers.google.com/about) (CLA). |
| 11 | You (or your employer) retain the copyright to your contribution; this simply |
| 12 | gives us permission to use and redistribute your contributions as part of the |
| 13 | project. |
| 14 | |
| 15 | If you or your current employer have already |
Remediation (AI)
The problem is that `CONTRIBUTING.md` does not include an authentication field in any MCP manifest declaration, leaving the security model undocumented for contributors. Add an explicit `auth` or `authorization` field to any embedded MCP configuration examples in the documentation, or add a section explaining that the chrome-devtools MCP server uses host-level authentication and does not require explicit credentials. This clarifies the security posture for contributors and reviewers. Verify by confirming that the authentication model is documented and matches the actual server implementation.
MCP manifest declares tools but no authentication field is present (none of: auth, authorization, bearer, oauth, mtls, apiKey, api_key, basic, token, authToken). Absence is a weak signal; confirm whether the server relies on network-layer or host-level auth, or declare the real mechanism explicitly so reviewers can audit it.
Evidence
| 1 | { |
| 2 | "name": "chrome-devtools-mcp", |
| 3 | "version": "0.26.0", |
| 4 | "description": "Reliable automation, in-depth debugging, and performance analysis in Chrome using Chrome DevTools and Puppeteer", |
| 5 | "mcpServers": { |
| 6 | "chrome-devtools": { |
| 7 | "command": "npx", |
| 8 | "args": [ |
| 9 | "chrome-devtools-mcp@latest" |
| 10 | ] |
| 11 | } |
| 12 | } |
| 13 | } |
Remediation (AI)
The problem is that `.github/plugin/plugin.json` declares MCP tools without an authentication field, making the security model opaque to GitHub Actions users and plugin consumers. Add an explicit `"auth": "host-level"` or appropriate authentication field to the manifest, and document in the plugin README what authentication mechanism is used. This ensures users understand the security implications of installing the plugin. Verify by checking that the authentication field is present and the plugin documentation explains the security model.
MCP manifest declares tools but no authentication field is present (none of: auth, authorization, bearer, oauth, mtls, apiKey, api_key, basic, token, authToken). Absence is a weak signal; confirm whether the server relies on network-layer or host-level auth, or declare the real mechanism explicitly so reviewers can audit it.
Evidence
| 1 | { |
| 2 | "name": "chrome-devtools-mcp", |
| 3 | "version": "latest", |
| 4 | "mcpServers": { |
| 5 | "chrome-devtools": { |
| 6 | "command": "npx", |
| 7 | "args": ["chrome-devtools-mcp@latest"] |
| 8 | } |
| 9 | } |
| 10 | } |
Remediation (AI)
The problem is that `gemini-extension.json` declares MCP tools without an authentication field, leaving the security model unclear for Gemini extension users. Add an explicit `"auth": "none"` or appropriate authentication field to the manifest, and document whether the server requires any credentials or relies on the host environment's security. This allows Gemini users to understand the security implications. Verify by confirming the authentication field is present and the extension documentation explains the security model.
MCP manifest declares tools but no authentication field is present (none of: auth, authorization, bearer, oauth, mtls, apiKey, api_key, basic, token, authToken). Absence is a weak signal; confirm whether the server relies on network-layer or host-level auth, or declare the real mechanism explicitly so reviewers can audit it.
Evidence
| 1 | { |
| 2 | "name": "chrome-devtools-mcp", |
| 3 | "version": "0.26.0", |
| 4 | "description": "Reliable automation, in-depth debugging, and performance analysis in Chrome using Chrome DevTools and Puppeteer", |
| 5 | "mcpServers": { |
| 6 | "chrome-devtools": { |
| 7 | "command": "npx", |
| 8 | "args": [ |
| 9 | "chrome-devtools-mcp@latest" |
| 10 | ] |
| 11 | } |
| 12 | } |
| 13 | } |
Remediation (AI)
The problem is that `.claude-plugin/plugin.json` declares MCP tools without an authentication field, making the security model undocumented for Claude plugin users. Add an explicit `"auth": "host-level"` or appropriate authentication field to the manifest, and ensure the plugin documentation explains the authentication mechanism. This clarifies the security posture for Claude users. Verify by checking that the authentication field is present and the plugin README documents the security model.
MCP manifest declares tools but no authentication field is present (none of: auth, authorization, bearer, oauth, mtls, apiKey, api_key, basic, token, authToken). Absence is a weak signal; confirm whether the server relies on network-layer or host-level auth, or declare the real mechanism explicitly so reviewers can audit it.
Evidence
| 1 | # Troubleshooting |
| 2 | |
| 3 | ## General tips |
| 4 | |
| 5 | - Run `npx chrome-devtools-mcp@latest --help` to test if the MCP server runs on your machine. |
| 6 | - Make sure that your MCP client uses the same npm and node version as your terminal. |
| 7 | - When configuring your MCP client, try using the `--yes` argument to `npx` to |
| 8 | auto-accept installation prompt. |
| 9 | - Find a specific error in the output of the `chrome-devtools-mcp` server. |
| 10 | Usually, if your client is an IDE, logs would be in the Output pane. |
| 11 | - Search the [GitHub rep |
Remediation (AI)
The problem is that `docs/troubleshooting.md` does not document or declare an authentication mechanism for the MCP server, leaving users uncertain about the security model. Add a section to the troubleshooting guide explaining the authentication model (e.g., "The chrome-devtools MCP server uses host-level authentication and does not require explicit credentials"), and ensure any MCP manifest examples in the documentation include an explicit `auth` field. This helps users understand the security implications. Verify by confirming the documentation clearly explains the authentication mechanism.
GitHub Actions `uses:` reference is not pinned to a 40-character commit SHA. Tags (`@v4`) and branches (`@main`) are mutable; a compromised maintainer or a tag rewrite can substitute malicious code into your CI pipeline silently. Pin to a SHA: `uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab`. For readability, include the version as a trailing comment: `# v4.1.1`. Tools like `pinact` / `ratchet` automate this. Allowed unpinned forms (excluded by the rule): local actions (`./…`).
Evidence
| 10 | release-please: |
| 11 | runs-on: ubuntu-latest |
| 12 | steps: |
| 13 | - uses: googleapis/release-please-action@v5 |
| 14 | with: |
| 15 | token: ${{ secrets.BROWSER_AUTOMATION_BOT_TOKEN }} |
| 16 | target-branch: main |
Remediation (AI)
The problem is that `.github/workflows/release-please.yml` uses `uses: googleapis/release-please-action@v5`, a mutable tag that a compromised maintainer or a tag rewrite can point at malicious code, silently injecting it into the CI pipeline. Pin the action to the full 40-character commit SHA of the intended release, resolved from the action's repository, and keep the human-readable version as a trailing comment (e.g., `@<40-char-sha> # v5.0.0`). This ensures the exact code of the action is used and prevents silent substitution. Verify by running the workflow and confirming it uses the pinned SHA, and use tools like `ratchet` or `pinact` to automate SHA pinning in future updates.
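The pinned form looks like this; the all-zero SHA is a placeholder for shape only, not release-please's real commit, and must be resolved from the action's repository (or generated with `pinact` / `ratchet`):

```yaml
steps:
  # Placeholder SHA: substitute the real 40-character commit
  # for the v5 release you intend to run.
  - uses: googleapis/release-please-action@0000000000000000000000000000000000000000 # v5.0.0
    with:
      token: ${{ secrets.BROWSER_AUTOMATION_BOT_TOKEN }}
      target-branch: main
```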
Time-of-check-to-time-of-use race. Code calls `os.path.exists` / `fs.existsSync` to check a path, then `open` / `readFileSync` / `unlink` on the same name within a few lines, without a lock or atomic-open. An attacker who can race the filesystem (symlink, file replacement) between the check and the use gets the action applied to a different target. Replace the check-then-use pattern with the action's own error handling: try the open and catch FileNotFoundError / ENOENT. For atomic creation, use an exclusive-create open (`O_CREAT|O_EXCL` in Python, flag `'wx'` in Node).
Evidence
| 1 | /** |
| 2 | * @license |
| 3 | * Copyright 2026 Google LLC |
| 4 | * SPDX-License-Identifier: Apache-2.0 |
| 5 | */ |
| 6 | |
| 7 | import {spawn} from 'node:child_process'; |
| 8 | import fs from 'node:fs'; |
| 9 | import net from 'node:net'; |
| 10 | |
| 11 | import {logger} from '../logger.js'; |
| 12 | import type {CallToolResult} from '../third_party/index.js'; |
| 13 | import {PipeTransport} from '../third_party/index.js'; |
| 14 | import {getTempFilePath} from '../utils/files.js'; |
| 15 | |
| 16 | import type {DaemonMessage, DaemonResponse} from './types.js'; |
| 17 | import { |
| 18 | DAEMON_SCRIPT_PATH, |
| 19 | getSocketPath |
Remediation (AI)
The problem is that `src/daemon/client.ts` likely contains a check-then-use pattern (e.g., `fs.existsSync()` followed by `fs.readFileSync()` or `fs.unlink()`) without atomic operations, allowing an attacker to race the filesystem and apply operations to a different target. Replace the check-then-use pattern with direct error handling: remove the `existsSync()` call and wrap the file operation in a try-catch block that handles `ENOENT` errors, e.g., `try { const data = fs.readFileSync(path); } catch (err) { if (err.code !== 'ENOENT') throw err; }`. This eliminates the race window by making the check and use atomic. Verify by writing a test that races symlink creation between the check and use, confirming the operation either succeeds atomically or fails safely.
Time-of-check-to-time-of-use race. Code calls `os.path.exists` / `fs.existsSync` to check a path, then `open` / `readFileSync` / `unlink` on the same name within a few lines, without a lock or atomic-open. An attacker who can race the filesystem (symlink, file replacement) between the check and the use gets the action applied to a different target. Replace the check-then-use pattern with the action's own error handling: try the open and catch FileNotFoundError / ENOENT. For atomic creation, use an exclusive-create open (`O_CREAT|O_EXCL` in Python, flag `'wx'` in Node).
Evidence
| 1 | /** |
| 2 | * @license |
| 3 | * Copyright 2026 Google LLC |
| 4 | * SPDX-License-Identifier: Apache-2.0 |
| 5 | */ |
| 6 | |
| 7 | import fs from 'node:fs'; |
| 8 | import os from 'node:os'; |
| 9 | import path from 'node:path'; |
| 10 | import process from 'node:process'; |
| 11 | |
| 12 | import {logger} from '../logger.js'; |
| 13 | import type {YargsOptions} from '../third_party/index.js'; |
| 14 | |
| 15 | export const DAEMON_SCRIPT_PATH = path.join(import.meta.dirname, 'daemon.js'); |
| 16 | export const INDEX_SCRIPT_PATH = path.join( |
| 17 | import.meta.dirname, |
| 18 | '..', |
| 19 | 'bin', |
| 20 | 'chrome-devtools-mcp.js', |
| 21 | ); |
| 22 | |
| 23 |
Remediation (AI)
The problem is that `src/daemon/utils.ts` likely contains a time-of-check-to-time-of-use race where `fs.existsSync()` is called before `fs.readFileSync()`, `fs.writeFileSync()`, or `fs.unlinkSync()`, allowing an attacker to replace or redirect the file via symlink. Replace the check-then-use pattern with atomic error handling: remove the `existsSync()` call and let the operation itself decide, e.g., create exclusively with `fs.writeFileSync(path, data, {flag: 'wx'})` and catch `EEXIST` (a plain `writeFileSync` without the `'wx'` flag never throws it), or read directly and catch `ENOENT`. This makes the operation atomic and eliminates the race. Verify by testing with a symlink that points to a different file and confirming the operation either succeeds on the intended file or fails safely.
Time-of-check-to-time-of-use race. Code calls `os.path.exists` / `fs.existsSync` to check a path, then `open` / `readFileSync` / `unlink` on the same name within a few lines, without a lock or atomic-open. An attacker who can race the filesystem (symlink, file replacement) between the check and the use gets the action applied to a different target. Replace the check-then-use pattern with the action's own error handling: try the open and catch FileNotFoundError / ENOENT. For atomic creation, use an exclusive-create open (`O_CREAT|O_EXCL` in Python, flag `'wx'` in Node).
Evidence
| 1 | /** |
| 2 | * @license |
| 3 | * Copyright 2026 Google LLC |
| 4 | * SPDX-License-Identifier: Apache-2.0 |
| 5 | */ |
| 6 | |
| 7 | import fs from 'node:fs'; |
| 8 | import path from 'node:path'; |
| 9 | import {pathToFileURL} from 'node:url'; |
| 10 | import {parseArgs} from 'node:util'; |
| 11 | |
| 12 | import {GoogleGenAI, mcpToTool} from '@google/genai'; |
| 13 | import {Client} from '@modelcontextprotocol/sdk/client/index.js'; |
| 14 | import {StdioClientTransport} from '@modelcontextprotocol/sdk/client/stdio.js'; |
| 15 | |
| 16 | import {TestServer} from '../build/tests/server.js'; |
| 17 | |
| 18 | const ROOT_DIR = path. |
Remediation (AI)
The problem is that `scripts/eval_gemini.ts` likely contains a check-then-use race where `fs.existsSync()` or similar is called before reading or writing a file, allowing an attacker to race the filesystem. Replace the check-then-use pattern with atomic error handling by removing the existence check and wrapping the file operation in try-catch: `try { const data = fs.readFileSync(path); } catch (err) { if (err.code !== 'ENOENT') throw err; }`. This ensures the check and use are atomic. Verify by writing a test that races file replacement between the check and use, confirming the operation either succeeds atomically or fails safely.
Time-of-check-to-time-of-use race. Code calls `os.path.exists` / `fs.existsSync` to check a path, then `open` / `readFileSync` / `unlink` on the same name within a few lines, without a lock or atomic-open. An attacker who can race the filesystem (symlink, file replacement) between the check and the use gets the action applied to a different target. Replace the check-then-use pattern with the action's own error handling: try the open and catch FileNotFoundError / ENOENT. For atomic creation, use an exclusive-create open (`O_CREAT|O_EXCL` in Python, flag `'wx'` in Node).
Evidence
| 1 | #!/usr/bin/env node |
| 2 | |
| 3 | /** |
| 4 | * @license |
| 5 | * Copyright 2026 Google LLC |
| 6 | * SPDX-License-Identifier: Apache-2.0 |
| 7 | */ |
| 8 | |
| 9 | import fs from 'node:fs'; |
| 10 | import {createServer, type Server} from 'node:net'; |
| 11 | import path from 'node:path'; |
| 12 | import process from 'node:process'; |
| 13 | |
| 14 | import {logger} from '../logger.js'; |
| 15 | import { |
| 16 | Client, |
| 17 | PipeTransport, |
| 18 | StdioClientTransport, |
| 19 | } from '../third_party/index.js'; |
| 20 | import {VERSION} from '../version.js'; |
| 21 | |
| 22 | import type {DaemonMessage} from './types.js'; |
| 23 | import { |
| 24 | DAEMON_CLIENT_NA |
Remediation (AI)
The problem is that `src/daemon/daemon.ts` likely contains a check-then-use race where `fs.existsSync()` is called before `fs.readFileSync()` or similar, allowing an attacker to replace the file via symlink between the check and use. Replace the check-then-use pattern with atomic error handling: remove the `existsSync()` call and wrap the file operation in a try-catch block, e.g., `try { const config = fs.readFileSync(configPath, 'utf8'); } catch (err) { if (err.code !== 'ENOENT') throw err; }`. This eliminates the race window. Verify by testing with a symlink that points to a sensitive file and confirming the operation either succeeds on the intended file or fails safely.
Time-of-check-to-time-of-use race. Code calls `os.path.exists` / `fs.existsSync` to check a path, then `open` / `readFileSync` / `unlink` on the same name within a few lines, without a lock or atomic-open. An attacker who can race the filesystem (symlink, file replacement) between the check and the use gets the action applied to a different target. Replace the check-then-use pattern with the action's own error handling: try the open and catch FileNotFoundError / ENOENT. For atomic creation, use an exclusive-create open (`O_CREAT|O_EXCL` in Python, flag `'wx'` in Node).
Evidence
| 1 | /** |
| 2 | * @license |
| 3 | * Copyright 2026 Google LLC |
| 4 | * SPDX-License-Identifier: Apache-2.0 |
| 5 | */ |
| 6 | |
| 7 | import * as fs from 'node:fs'; |
| 8 | import * as path from 'node:path'; |
| 9 | |
| 10 | import { |
| 11 | cliOptions, |
| 12 | parseArguments, |
| 13 | } from '../build/src/bin/chrome-devtools-mcp-cli-options.js'; |
| 14 | import { |
| 15 | getPossibleFlagMetrics, |
| 16 | type FlagMetric, |
| 17 | } from '../build/src/telemetry/flagUtils.js'; |
| 18 | import { |
| 19 | applyToExisting, |
| 20 | applyToExistingMetrics, |
| 21 | generateToolMetrics, |
| 22 | type ToolMetric, |
| 23 | } from '../build/src/telemetry/toolMetricsUti |
Remediation (AI)
The problem is that `scripts/update_metrics.ts` likely contains a check-then-use race where `fs.existsSync()` is called before `fs.readFileSync()` or `fs.writeFileSync()`, allowing an attacker to race the filesystem. Replace the check-then-use pattern with atomic error handling by removing the existence check and wrapping the file operation in try-catch: `try { const metrics = fs.readFileSync(metricsPath); } catch (err) { if (err.code !== 'ENOENT') throw err; }`. This makes the operation atomic and eliminates the race. Verify by writing a test that races file replacement and confirming the operation either succeeds atomically or fails safely.
Time-of-check-to-time-of-use race. Code calls `os.path.exists` / `fs.existsSync` to check a path, then `open` / `readFileSync` / `unlink` on the same name within a few lines, without a lock or atomic-open. An attacker who can race the filesystem (symlink, file replacement) between the check and the use gets the action applied to a different target. Replace the check-then-use pattern with the action's own error handling: try the open and catch FileNotFoundError / ENOENT. For atomic creation, use an exclusive-create open (`O_CREAT|O_EXCL` in Python, flag `'wx'` in Node).
Evidence
| 1 | /** |
| 2 | * Copyright 2021 Google LLC. |
| 3 | * Copyright (c) Microsoft Corporation. |
| 4 | * |
| 5 | * Licensed under the Apache License, Version 2.0 (the "License"); |
| 6 | * you may not use this file except in compliance with the License. |
| 7 | * You may obtain a copy of the License at |
| 8 | * |
| 9 | * http://www.apache.org/licenses/LICENSE-2.0 |
| 10 | * |
| 11 | * Unless required by applicable law or agreed to in writing, software |
| 12 | * distributed under the License is distributed on an "AS IS" BASIS, |
| 13 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, |
Remediation (AI)
The problem is that `rollup.config.mjs` likely contains a check-then-use race where `fs.existsSync()` is called before `fs.readFileSync()` or similar during the build process, allowing an attacker to race the filesystem. Replace the check-then-use pattern with atomic error handling: remove the `existsSync()` call and wrap the file operation in try-catch, e.g., `try { const config = fs.readFileSync(configPath); } catch (err) { if (err.code !== 'ENOENT') throw err; }`. This eliminates the race window during build time. Verify by testing the build process with a symlink that points to a different file and confirming the build either succeeds atomically or fails safely.