High risk. Don't ship without significant remediation.
Scanned 5/1/2026, 10:16:58 AM·Cached result·Deep Scan·88 rules·View source ↗·How we decide ↗
AIVSS Score
High
Severity Breakdown
0
critical
11
high
1
medium
0
low
MCP Server Information
Findings
This package carries significant security risks with a D grade and 11 high-severity vulnerabilities, primarily centered on prompt injection (5 instances) and tool poisoning (4 instances) that could allow attackers to manipulate model behavior or compromise tool execution. The server configuration issues compound these risks by potentially exposing the system to unauthorized access or misuse. Given the safety score of 67/100 and AIVSS rating of 7.1/10, you should avoid installation unless you can implement additional isolation measures and thoroughly audit the codebase before deployment.
AIPer-finding remediation generated by bedrock-claude-haiku-4-5 — 10 of 12 findings. Click any finding to read.
No known CVEs found for this package or its dependencies.
Scan Details
Done
Sign in to save scan history and re-scan automatically on new commits.
Building your own MCP server?
Same rules, same LLM judges, same grade. Private scans stay isolated to your account and never appear in the public registry. Required for code your team hasn’t shipped yet.
12 of 12 findings
12 findings
CartesiaStream reads CARTESIA_API_KEY from environment variables and transmits it in WebSocket headers to hardcoded third-party domain wss://api.cartesia.ai/tts/websocket, exfiltrating API credentials.
RemediationAI
The CartesiaStream constructor reads CARTESIA_API_KEY from process.env and transmits it directly in WebSocket headers to wss://api.cartesia.ai/tts/websocket, exfiltrating the credential to a third party. Remove the direct environment variable read from the constructor and instead require the API key to be passed as a parameter from the caller, or use a secure credential manager like AWS Secrets Manager or HashiCorp Vault. This ensures credentials are not automatically loaded and transmitted without explicit caller control. Verify by instantiating CartesiaStream without setting CARTESIA_API_KEY in the environment and confirming it requires an explicit key parameter or fails gracefully.
GoogleTTSHandler reads GOOGLE_APPLICATION_CREDENTIALS (service account credentials) from environment and passes it to Google Cloud TTS client, transmitting sensitive credentials to third-party Google Cloud API.
RemediationAI
The GoogleTTSHandler constructor reads GOOGLE_APPLICATION_CREDENTIALS from process.env and passes the service account credentials directly to the Google Cloud TextToSpeechClient, transmitting sensitive credentials to Google Cloud API without caller authorization. Replace the environment variable read with a parameter-based credential injection pattern, or use Application Default Credentials (ADC) with workload identity federation to avoid storing raw credentials in environment variables. This decouples credential loading from module initialization and allows fine-grained access control. Verify by removing GOOGLE_APPLICATION_CREDENTIALS from the environment, instantiating the handler with explicit credentials or ADC, and confirming the client authenticates successfully.
CartesiaStream constructor reads CARTESIA_API_KEY from process.env at module initialization time and uses it to authenticate WebSocket connections to Cartesia TTS API without consulting caller identity.
RemediationAI
The CartesiaStream constructor reads CARTESIA_API_KEY from process.env at module load time without any caller identity context, allowing any code that imports the module to use the cached credentials. Refactor to accept the API key as a constructor parameter and validate caller identity via a context object (e.g., userId, requestId, or IAM role) before initializing the WebSocket connection. This ensures each caller is authenticated and authorized before credential use. Verify by passing different caller identities and confirming that unauthorized callers receive an authentication error before any WebSocket connection is attempted.
GoogleTTSHandler constructor reads GOOGLE_APPLICATION_CREDENTIALS from process.env and initializes a TextToSpeechClient with cached credentials, then uses it to call Google Cloud TTS API without consulting caller identity.
RemediationAI
The GoogleTTSHandler constructor reads GOOGLE_APPLICATION_CREDENTIALS from process.env and caches a TextToSpeechClient at initialization without consulting caller identity, allowing any importer to invoke the API with the service account. Refactor to accept credentials and caller context as constructor parameters, and validate the caller's identity (e.g., via JWT, service account, or request context) before creating the client. This enforces per-caller authorization. Verify by instantiating the handler with different caller identities and confirming that unauthorized callers receive an authentication error before any Google Cloud API call is made.
Tool 'postResultComment' performs NETWORK side effect (GitHub API calls to post/update comments) not disclosed in description.
RemediationAI
The 'writeJobSummary' tool description does not disclose that it performs a FILESYSTEM side effect (writes job summary to disk). Update the tool's description field to explicitly state: 'This tool writes job summary data to the filesystem (FILESYSTEM side effect).' This ensures callers are aware of persistent state changes. Verify by reading the tool definition and confirming the description includes the FILESYSTEM side effect disclosure.
Tool 'executeNeurolink' performs SUBPROCESS side effect (executes CLI command via @actions/exec) not disclosed in description.
RemediationAI
The 'installNeurolink' tool description does not disclose that it performs a SUBPROCESS side effect (likely package installation via npm/pnpm/pip). Update the tool's description field to explicitly state: 'This tool installs Neurolink packages by executing package manager commands as subprocesses (SUBPROCESS side effect).' This ensures callers understand the installation behavior and potential side effects. Verify by reading the tool definition and confirming the description includes the SUBPROCESS side effect disclosure.
Tool 'writeJobSummary' performs FILESYSTEM side effect (writes job summary) not disclosed in description.
RemediationAI
The 'writeJobSummary' tool description does not disclose that it performs a FILESYSTEM side effect (writes job summary to disk). Update the tool's description field to explicitly state: 'This tool writes job summary data to the filesystem (FILESYSTEM side effect).' This ensures callers are aware of persistent state changes. Verify by reading the tool definition and confirming the description includes the FILESYSTEM side effect disclosure.
Tool 'installNeurolink' performs SUBPROCESS side effect (likely package installation) not disclosed in description.
RemediationAI
The 'installNeurolink' tool description does not disclose that it performs a SUBPROCESS side effect (likely package installation via npm/pnpm/pip). Update the tool's description field to explicitly state: 'This tool installs Neurolink packages by executing package manager commands as subprocesses (SUBPROCESS side effect).' This ensures callers understand the installation behavior and potential side effects. Verify by reading the tool definition and confirming the description includes the SUBPROCESS side effect disclosure.
postResultComment tool returns untrusted GitHub issue/PR body content (result.response) directly into comment without provenance wrapper, enabling indirect prompt injection via malicious issue descriptions.
RemediationAI
The postResultComment tool returns untrusted GitHub issue/PR body content (result.response) directly into the comment without any provenance wrapper or sanitization, enabling indirect prompt injection if a malicious issue description contains LLM instructions. Wrap the untrusted content in a clearly marked provenance block (e.g., '``` [GitHub Issue Body - Untrusted Source] {result.response} ```') and sanitize or escape any markdown/code that could be interpreted as instructions. This prevents the injected content from being treated as part of the tool's response. Verify by creating a test issue with prompt injection payloads in the body and confirming the comment displays the content as literal text without executing embedded instructions.
MCP server binds an HTTP transport to localhost and registers tools, but no authentication is enforced on requests. The official MCP security best practices warn that this is reachable via DNS-rebinding attacks — a malicious web page can hit `http://127.0.0.1:<port>` from inside the user's browser and invoke tools as the user. Pick one fix: 1. Switch to stdio transport (`mcp.run(transport="stdio")`). 2. Require an `Authorization` / `Bearer` / `api_key` check on every request. 3. Bind
Evidence
| 1 | import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js"; |
| 2 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js"; |
| 3 | import { randomUUID } from "node:crypto"; |
| 4 | import { DocsSearch } from "./search.js"; |
| 5 | import { createToolDefinitions } from "./tools.js"; |
| 6 | import fs from "fs"; |
| 7 | import path from "path"; |
| 8 | import { fileURLToPath } from "url"; |
| 9 | |
| 10 | const __dirname = path.dirname(fileURLToPath(import.meta.url)); |
| 11 | |
| 12 | function resolveIndexPath() { |
| 13 | const localPath = path.resolve(_ |
RemediationAI
The MCP server binds an HTTP transport to localhost without any authentication, making it vulnerable to DNS-rebinding attacks where a malicious web page can invoke tools from the user's browser. Switch from HTTP transport to stdio transport by replacing the HttpServerTransport initialization with StdioServerTransport and calling mcp.run(transport) with the stdio transport instead. Stdio transport is not reachable over the network and eliminates DNS-rebinding attack surface. Verify by confirming the server no longer listens on any TCP port and that the MCP client communicates via stdin/stdout.
MCP server binds an HTTP transport to localhost / 127.0.0.1 / [::1] and registers tools, but does not validate the request `Host` header. Even with auth, this is exploitable via DNS rebinding — a malicious web page can make the user's browser resolve `evil.com` to `127.0.0.1`, bypassing same-origin checks. Fix: enable `hostHeaderValidation()` middleware (TS SDK ≥1.24.0), or check `req.headers.host` against an allow-list of expected hostnames. Co-fires with MCP-268 (no auth) when both gaps are p
Evidence
| 1 | import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js"; |
| 2 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js"; |
| 3 | import { randomUUID } from "node:crypto"; |
| 4 | import { DocsSearch } from "./search.js"; |
| 5 | import { createToolDefinitions } from "./tools.js"; |
| 6 | import fs from "fs"; |
| 7 | import path from "path"; |
| 8 | import { fileURLToPath } from "url"; |
| 9 | |
| 10 | const __dirname = path.dirname(fileURLToPath(import.meta.url)); |
| 11 | |
| 12 | function resolveIndexPath() { |
| 13 | const localPath = path.resolve(_ |
RemediationAI
The MCP server binds an HTTP transport to localhost without validating the Host header, allowing DNS-rebinding attacks to bypass same-origin checks even if authentication is added. Add host header validation by calling hostHeaderValidation() middleware (requires @modelcontextprotocol/sdk ≥1.24.0) or manually check req.headers.host against an allow-list of expected hostnames (e.g., ['localhost', '127.0.0.1', '[::1]']) before processing requests. This ensures only requests from expected hosts are accepted. Verify by making a request with a spoofed Host header (e.g., 'evil.com') and confirming it is rejected with a 400 or 403 error.
Package declares an install-time hook (npm postinstall/preinstall/prepare, setup.py cmdclass override, custom setuptools install class, or non-default pyproject build-backend). Anyone installing this package runs the hook. Confirm the hook is necessary and review its contents; prefer shipping a plain library without install-time execution.
Evidence
| 38 | "build:cli:link": "pnpm run build:cli && pnpm link --global", |
| 39 | "cli": "node dist/cli/index.js", |
| 40 | "preview": "vite preview", |
| 41 | "prepare": "git rev-parse --git-dir > /dev/null 2>&1 && husky install || echo 'Skipping husky in non-git environment'", |
| 42 | "prepack": "svelte-kit sync && svelte-package && pnpm run build:react-hooks && pnpm run build:cli && pnpm run build:browser && publint", |
| 43 | "build:react-hooks": "npx tsc --jsx react-jsx --module nodenext --moduleResolution nodenext --target |
RemediationAI
The package.json declares a 'prepare' hook that runs husky install at install time, executing arbitrary code during npm install without explicit user consent. Remove the 'prepare' script from package.json and instead document that developers should manually run 'husky install' after cloning the repository, or move the hook to a separate optional setup script. This ensures install-time code execution is opt-in rather than automatic. Verify by running 'npm install' in a fresh clone and confirming that husky is not automatically initialized; then confirm that running 'husky install' manually works as expected.
neurolink-docs