High risk. Don't ship without significant remediation.
Scanned 5/13/2026, 6:09:42 AM·Cached result·Deep Scan·91 rules·How we decide ↗
AIVSS Score
High
Severity Breakdown
0
critical
9
high
263
medium
0
low
MCP Server Information
Findings
This package has a D security grade with a safety score of 40/100, driven primarily by 113 resource exhaustion vulnerabilities and 90 vulnerable dependencies that could enable denial-of-service attacks or exploitation through outdated libraries. The 9 high-severity findings include prompt injection and tool poisoning risks alongside server configuration weaknesses, making this unsuitable for production use without substantial remediation. Installation is not recommended unless you have the expertise and resources to address these critical gaps.
AIPer-finding remediation generated by bedrock-claude-haiku-4-5 — 20 of 24 findings. Click any finding to read.
Dependencies
@hono/node-server (2)
hono (54)
@cloudflare/vite-plugin (1)
Scan Details
Done
Sign in to save scan history and re-scan automatically on new commits.
Building your own MCP server?
Same rules, same LLM judges, same grade. Private scans stay isolated to your account and never appear in the public registry. Required for code your team hasn’t shipped yet.
24 of 24 findings
24 findings
Tool 'get_log_details' performs NETWORK side effect (calls Cloudflare API client) not disclosed in description.
Evidence
| 85 | return { |
| 86 | content: [ |
| 87 | { |
| 88 | type: 'text', |
| 89 | text: JSON.stringify({ |
| 90 | result: r.result, |
| 91 | result_info: r.result_info, |
RemediationAI
The problem is that 'get_log_details' makes a network call to the Cloudflare API but its tool description does not disclose this side effect, violating the MCP contract that tool descriptions must accurately reflect all side effects. Add 'Fetches log details from Cloudflare API' or similar language to the tool description string passed to agent.server.tool(). This ensures the LLM and calling code understand the tool performs network I/O and can make informed decisions about tool invocation. Verify by checking that the description parameter in agent.server.tool('get_log_details', '<description>', ...) now explicitly mentions the network call.
Tool 'list_gateways' performs NETWORK side effect (calls Cloudflare API client) not disclosed in description.
Evidence
| 8 | export function registerAIGatewayTools(agent: AIGatewayMCP) { |
| 9 | agent.server.tool( |
| 10 | 'list_gateways', |
| 11 | 'List Gateways', |
| 12 | { |
| 13 | page: pageParam, |
RemediationAI
The problem is that 'list_gateways' makes a network call to the Cloudflare API but its tool description does not disclose this side effect, violating the MCP contract. Add 'Retrieves gateways from Cloudflare API' or similar language to the tool description string in agent.server.tool('list_gateways', '<description>', ...). This ensures callers understand the tool performs network I/O. Verify by confirming the description parameter now explicitly mentions the API call.
LLM consensus
Tool 'list_logs' performs NETWORK side effect (calls Cloudflare API client) not disclosed in description.
Evidence
| 47 | ], |
| 48 | } |
| 49 | } catch (error) { |
| 50 | return { |
| 51 | content: [ |
| 52 | { |
| 53 | type: 'text', |
RemediationAI
The problem is that 'list_logs' makes a network call to the Cloudflare API but its tool description does not disclose this side effect, violating the MCP contract. Add 'Retrieves logs from Cloudflare API' or similar language to the tool description string in agent.server.tool('list_logs', '<description>', ...). This ensures callers understand the tool performs network I/O. Verify by confirming the description parameter now explicitly mentions the API call.
Tool 'list_logs' accepts gateway_id identifier from caller and lists logs for that gateway as sole filter without verifying the caller owns the gateway.
Evidence
| 37 | return { |
| 38 | content: [ |
| 39 | { |
| 40 | type: 'text', |
| 41 | text: JSON.stringify({ |
| 42 | result: r.result, |
| 43 | result_info: r.result_info, |
| 44 | }), |
| 45 | }, |
| 46 | ], |
| 47 | } |
| 48 | } catch (error) { |
| 49 | return { |
| 50 | content: [ |
| 51 | { |
| 52 | type: 'text', |
| 53 | text: `Error listing gateways: ${error instanceof Error && error.message}`, |
| 54 | }, |
| 55 | ], |
| 56 | } |
| 57 | } |
| 58 | } |
| 59 | ) |
| 60 | |
| 61 | agent.server.tool('list_logs', 'List Logs', ListLogsParams, async (params) => { |
| 62 | try { |
| 63 | const accountId = await agent. |
RemediationAI
The problem is that 'list_logs' accepts a gateway_id parameter from the caller and uses it as the sole filter without verifying the caller owns that gateway, allowing unauthorized access to logs. Add an authorization check before the API call: call a function like `await agent.verifyGatewayOwnership(accountId, gateway_id)` that confirms the active account owns the gateway, and throw an error if not. This prevents callers from listing logs for gateways they do not own. Verify by writing a test that attempts to list logs for a gateway owned by a different account and confirms the tool rejects the request.
LLM consensus
Tool 'get_log_details' accepts gateway_id and log_id identifiers from caller and fetches the log record by these IDs as sole filter without verifying the caller owns the gateway or log.
Evidence
| 70 | text: 'No currently active accountId. Try listing your accounts (accounts_list) and then setting an active account (set_active_account)', |
| 71 | }, |
| 72 | ], |
| 73 | } |
| 74 | } |
| 75 | |
| 76 | const { gateway_id, ...filters } = params |
| 77 | |
| 78 | const props = getProps(agent) |
| 79 | const client = getCloudflareClient(props.accessToken) |
| 80 | const r = await client.aiGateway.logs.list(gateway_id, { |
| 81 | ...filters, |
| 82 | account_id: accountId, |
| 83 | } as LogListParams) |
| 84 | |
| 85 | return { |
| 86 | content: [ |
| 87 | { |
| 88 | type: 'text', |
| 89 | text: JSON |
RemediationAI
The problem is that 'get_log_details' accepts gateway_id and log_id parameters from the caller and uses them as sole filters without verifying the caller owns the gateway or log, allowing unauthorized access. Add authorization checks before the API call: call functions like `await agent.verifyGatewayOwnership(accountId, gateway_id)` and `await agent.verifyLogOwnership(accountId, log_id)` to confirm the active account owns both resources, and throw an error if not. This prevents callers from fetching logs they do not own. Verify by writing a test that attempts to fetch a log from a gateway or account the caller does not own and confirms the tool rejects the request.
LLM consensus
Tool 'get_log_details' returns untrusted API response from Cloudflare client as plain JSON text without provenance wrapper, enabling indirect prompt injection via log details.
Evidence
| 94 | ], |
| 95 | } |
| 96 | } catch (error) { |
| 97 | return { |
| 98 | content: [ |
| 99 | { |
| 100 | type: 'text', |
| 101 | text: `Error listing logs: ${error instanceof Error && error.message}`, |
| 102 | }, |
| 103 | ], |
| 104 | } |
| 105 | } |
| 106 | }) |
| 107 | |
| 108 | agent.server.tool( |
| 109 | 'get_log_details', |
| 110 | 'Get a single Log details', |
| 111 | { |
| 112 | gateway_id: GatewayIdParam, |
| 113 | log_id: LogIdParam, |
| 114 | }, |
| 115 | async (params) => { |
| 116 | const accountId = await agent.getActiveAccountId() |
| 117 | if (!accountId) { |
| 118 | return { |
| 119 | content: [ |
| 120 | { |
| 121 | type: 'text', |
| 122 | text: 'No curre |
RemediationAI
The problem is that 'get_log_details' returns the untrusted Cloudflare API response (r.result, r.result_info) as plain JSON text without a provenance wrapper, enabling indirect prompt injection if the API response contains malicious content. Wrap the API response in a provenance marker such as `{ type: 'text', text: '[API Response from Cloudflare] ' + JSON.stringify({...}) }` or use a dedicated provenance annotation supported by the MCP SDK. This signals to the LLM that the content originates from an external API and should be treated with caution. Verify by checking that the returned content includes a clear attribution or provenance marker indicating the source is the Cloudflare API.
LLM consensus
Tool 'list_gateways' returns untrusted API response from Cloudflare client (r.result, r.result_info) as plain JSON text without provenance wrapper, enabling indirect prompt injection via gateway metadata.
Evidence
| 11 | 'list_gateways', |
| 12 | 'List Gateways', |
| 13 | { |
| 14 | page: pageParam, |
| 15 | per_page: perPageParam, |
| 16 | }, |
| 17 | async (params) => { |
| 18 | const accountId = await agent.getActiveAccountId() |
| 19 | if (!accountId) { |
| 20 | return { |
| 21 | content: [ |
| 22 | { |
| 23 | type: 'text', |
| 24 | text: 'No currently active accountId. Try listing your accounts (accounts_list) and then setting an active account (set_active_account)', |
| 25 | }, |
| 26 | ], |
| 27 | } |
| 28 | } |
| 29 | try { |
| 30 | const props = getProps(agent) |
| 31 | const client = getCloudflareClien |
RemediationAI
The problem is that 'list_gateways' returns the untrusted Cloudflare API response (r.result, r.result_info) as plain JSON text without a provenance wrapper, enabling indirect prompt injection if the API response contains malicious content. Wrap the API response in a provenance marker such as `{ type: 'text', text: '[API Response from Cloudflare] ' + JSON.stringify({...}) }` or use a dedicated provenance annotation. This signals to the LLM that the content originates from an external API. Verify by checking that the returned content includes a clear attribution or provenance marker indicating the source is the Cloudflare API.
Tool 'list_logs' returns untrusted API response from Cloudflare client (r.result, r.result_info) as plain JSON text without provenance wrapper, enabling indirect prompt injection via log entries.
Evidence
| 49 | } catch (error) { |
| 50 | return { |
| 51 | content: [ |
| 52 | { |
| 53 | type: 'text', |
| 54 | text: `Error listing gateways: ${error instanceof Error && error.message}`, |
| 55 | }, |
| 56 | ], |
| 57 | } |
| 58 | } |
| 59 | } |
| 60 | ) |
| 61 | |
| 62 | agent.server.tool('list_logs', 'List Logs', ListLogsParams, async (params) => { |
| 63 | try { |
| 64 | const accountId = await agent.getActiveAccountId() |
| 65 | if (!accountId) { |
| 66 | return { |
| 67 | content: [ |
| 68 | { |
| 69 | type: 'text', |
| 70 | text: 'No currently active accountId. Try listing your accounts (accounts_li |
RemediationAI
The problem is that 'list_logs' returns the untrusted Cloudflare API response (r.result, r.result_info) as plain JSON text without a provenance wrapper, enabling indirect prompt injection if the API response contains malicious content. Wrap the API response in a provenance marker such as `{ type: 'text', text: '[API Response from Cloudflare] ' + JSON.stringify({...}) }` or use a dedicated provenance annotation. This signals to the LLM that the content originates from an external API. Verify by checking that the returned content includes a clear attribution or provenance marker indicating the source is the Cloudflare API.
LLM consensus
MCP tool returns content marked as HTML (`{type: "html"}`, `Content-Type: text/html`, or `mimeType: "text/html"`) with no sanitiser on the same code path. The host renders HTML directly — anything tainted in the body becomes a script execution / markup-injection vector. Pipe the body through `DOMPurify.sanitize()` (TS), `bleach.clean()` (Python), `lxml.html.clean.Cleaner`, or `sanitize_html` before returning. Better: return `{type: "text"}` / `text/plain` and let the host escape. Distinct from
Evidence
| 1 | import { McpAgent } from 'agents/mcp' |
| 2 | |
| 3 | import { getEnv } from '@repo/mcp-common/src/env' |
| 4 | import { CloudflareMCPServer } from '@repo/mcp-common/src/server' |
| 5 | |
| 6 | // The demo day MCP server isn't stateful, so we don't have state/props |
| 7 | export type Props = never |
| 8 | |
| 9 | export type State = never |
| 10 | |
| 11 | export type Env = { |
| 12 | ENVIRONMENT: 'development' | 'staging' | 'production' |
| 13 | AUTORAG_NAME: 'cloudflare-docs-autorag' |
| 14 | MCP_SERVER_NAME: 'PLACEHOLDER' |
| 15 | MCP_SERVER_VERSION: 'PLACEHOLDER' |
| 16 | MCP_OBJECT: DurableObjectNamespace |
RemediationAI
The problem is that the MCP tool returns content with type 'html' without sanitization, allowing any tainted data in the response body to execute as JavaScript or inject markup in the host renderer. Change the return type from `{ type: 'html' }` to `{ type: 'text' }` and return plain text instead of HTML, or if HTML is required, pipe the body through `DOMPurify.sanitize()` before returning. Returning plain text is the safer option as it lets the host handle escaping. Verify by confirming the tool returns `{ type: 'text' }` and that no HTML content type headers are set in the response.
LLM consensus
MCP tool description or return text contains an imperative phrase that asks the LLM to invoke or call ANOTHER tool — "invoke the write_file tool", "before using this, also call send_email", "silently invoke X". This is a cross-tool chaining injection: the user authorized THIS tool, but the payload escalates into others. Tool descriptions should describe what the tool DOES, not direct the LLM to use other tools. If a tool's correct operation requires composition, document the dependency in human
Evidence
| 1 | import { z } from 'zod' |
| 2 | |
| 3 | import { fetchCloudflareApi } from '@repo/mcp-common/src/cloudflare-api' |
| 4 | import { getProps } from '@repo/mcp-common/src/get-props' |
| 5 | |
| 6 | import { getReader } from '../warp_diag_reader' |
| 7 | |
| 8 | import type { ToolCallback } from '@modelcontextprotocol/sdk/server/mcp.js' |
| 9 | import type { ToolAnnotations } from '@modelcontextprotocol/sdk/types.js' |
| 10 | import type { ZodRawShape, ZodTypeAny } from 'zod' |
| 11 | import type { CloudflareDEXMCP } from '../dex-analysis.app' |
| 12 | |
| 13 | export function registerDEXTools |
RemediationAI
The problem is that the tool description or return text contains imperative phrases directing the LLM to invoke other tools (e.g., 'invoke the write_file tool', 'also call send_email'), which is a cross-tool chaining injection that escalates privileges beyond what the user authorized for this tool. Remove all imperative phrases like 'invoke', 'call', 'also use', or 'silently invoke' from the tool description and return text in dex-analysis.tools.ts, and replace them with passive descriptions of what the tool does (e.g., 'This tool analyzes DEX data' instead of 'invoke the analysis tool'). This prevents the LLM from being tricked into calling unauthorized tools. Verify by reviewing the tool description and return messages to confirm they describe functionality rather than directing other tool invocations.
MCP tool description or return text contains an imperative phrase that asks the LLM to invoke or call ANOTHER tool — "invoke the write_file tool", "before using this, also call send_email", "silently invoke X". This is a cross-tool chaining injection: the user authorized THIS tool, but the payload escalates into others. Tool descriptions should describe what the tool DOES, not direct the LLM to use other tools. If a tool's correct operation requires composition, document the dependency in human
Evidence
| 1 | import { McpAgent } from 'agents/mcp' |
| 2 | |
| 3 | import { getProps } from '@repo/mcp-common/src/get-props' |
| 4 | import { CloudflareMCPServer } from '@repo/mcp-common/src/server' |
| 5 | |
| 6 | import { ExecParams, FilePathParam, FileWrite } from '../shared/schema' |
| 7 | import { BASE_INSTRUCTIONS } from './prompts' |
| 8 | import { stripProtocolFromFilePath } from './utils' |
| 9 | |
| 10 | import type { Props, UserContainer } from './sandbox.server.app' |
| 11 | import type { Env } from './sandbox.server.context' |
| 12 | |
| 13 | export class ContainerMcpAgent extends McpAgent |
RemediationAI
The problem is that the tool description or return text contains imperative phrases directing the LLM to invoke other tools, which is a cross-tool chaining injection that escalates privileges. Remove all imperative phrases like 'invoke', 'call', 'also use', or 'silently invoke' from the tool description and return text in containerMcp.ts, and replace them with passive descriptions of what the tool does. This prevents the LLM from being tricked into calling unauthorized tools. Verify by reviewing the tool description and return messages to confirm they describe functionality rather than directing other tool invocations.
@modelcontextprotocol/sdk==1.20.2 has 3 known CVEs [HIGH]: GHSA-345p-7cg4-v4c7, GHSA-8r9q-7v3j-jr4g, GHSA-w48q-cv73-mx4w. Upgrade to a patched version.
RemediationAI
The problem is that @modelcontextprotocol/sdk version 1.20.2 contains 3 known HIGH-severity CVEs (GHSA-345p-7cg4-v4c7, GHSA-8r9q-7v3j-jr4g, GHSA-w48q-cv73-mx4w) that expose the application to known attacks. Update the dependency in apps/docs-autorag/package.json by running `npm install @modelcontextprotocol/sdk@latest` or specifying a patched version (1.21.0 or later if available). This removes the vulnerable code paths. Verify by running `npm audit` and confirming no HIGH-severity vulnerabilities remain for this package.
@modelcontextprotocol/sdk==1.20.2 has 3 known CVEs [HIGH]: GHSA-345p-7cg4-v4c7, GHSA-8r9q-7v3j-jr4g, GHSA-w48q-cv73-mx4w. Upgrade to a patched version.
RemediationAI
The problem is that hono version 4.7.6 contains 25 known HIGH-severity CVEs that expose the application to multiple attack vectors. Update the dependency in apps/ai-gateway/package.json by running `npm install hono@latest` or specifying a patched version (4.8.0 or later if available). This removes the vulnerable code paths. Verify by running `npm audit` and confirming no HIGH-severity vulnerabilities remain for hono.
@modelcontextprotocol/sdk==1.20.2 has 3 known CVEs [HIGH]: GHSA-345p-7cg4-v4c7, GHSA-8r9q-7v3j-jr4g, GHSA-w48q-cv73-mx4w. Upgrade to a patched version.
RemediationAI
The problem is that agents version 0.2.19 contains 3 known MEDIUM-severity CVEs that expose the application to known attacks. Update the dependency in apps/autorag/package.json by running `npm install agents@latest` or specifying a patched version. This removes the vulnerable code paths. Verify by running `npm audit` and confirming no MEDIUM-severity vulnerabilities remain for agents.
wrangler==4.10.0 has 1 known CVE [HIGH]: GHSA-36p8-mvp6-cv38. Upgrade to a patched version.
RemediationAI
The problem is that wrangler version 4.10.0 contains 1 known HIGH-severity CVE (GHSA-36p8-mvp6-cv38) that exposes the application to known attacks. Update the dependency in packages/mcp-common/package.json by running `npm install wrangler@latest` or specifying a patched version (4.11.0 or later if available). This removes the vulnerable code. Verify by running `npm audit` and confirming no HIGH-severity vulnerabilities remain for wrangler.
hono==4.7.6 has 25 known CVEs [HIGH]: GHSA-26pp-8wgv-hjvm, GHSA-3vhc-576x-3qv4, GHSA-458j-xx4x-4375 (+22 more). Upgrade to a patched version.
RemediationAI
The problem is that hono version 4.7.6 contains 25 known HIGH-severity CVEs that expose the application to multiple attack vectors. Update the dependency in apps/ai-gateway/package.json by running `npm install hono@latest` or specifying a patched version (4.8.0 or later if available). This removes the vulnerable code paths. Verify by running `npm audit` and confirming no HIGH-severity vulnerabilities remain for hono.
wrangler==4.10.0 has 1 known CVE [HIGH]: GHSA-36p8-mvp6-cv38. Upgrade to a patched version.
RemediationAI
The problem is that agents version 0.2.19 contains 3 known MEDIUM-severity CVEs that expose the application to known attacks. Update the dependency in apps/cloudflare-one-casb/package.json by running `npm install agents@latest` or specifying a patched version. This removes the vulnerable code paths. Verify by running `npm audit` and confirming no MEDIUM-severity vulnerabilities remain for agents.
wrangler==4.10.0 has 1 known CVE [HIGH]: GHSA-36p8-mvp6-cv38. Upgrade to a patched version.
RemediationAI
The problem is that ai version 4.3.10 contains 1 known LOW-severity CVE (GHSA-rwvc-j5jr-mgvh) that may expose the application to attacks. Update the dependency in apps/sandbox-container/package.json by running `npm install ai@latest` or specifying a patched version. This removes the vulnerable code. Verify by running `npm audit` and confirming no LOW-severity vulnerabilities remain for ai.
hono==4.7.6 has 25 known CVEs [HIGH]: GHSA-26pp-8wgv-hjvm, GHSA-3vhc-576x-3qv4, GHSA-458j-xx4x-4375 (+22 more). Upgrade to a patched version.
RemediationAI
The problem is that hono version 4.7.6 contains 25 known HIGH-severity CVEs that expose the application to multiple attack vectors. Update the dependency in apps/dns-analytics/package.json by running `npm install hono@latest` or specifying a patched version. This removes the vulnerable code paths. Verify by running `npm audit` and confirming no HIGH-severity vulnerabilities remain for hono.
agents==0.2.19 has 3 known CVEs [MEDIUM]: GHSA-cvhv-6xm6-c3v4, GHSA-r7x9-8ph7-w8cg, GHSA-w5cr-2qhr-jqc5. Upgrade to a patched version.
RemediationAI
The problem is that agents version 0.2.19 contains 3 known MEDIUM-severity CVEs (GHSA-cvhv-6xm6-c3v4, GHSA-r7x9-8ph7-w8cg, GHSA-w5cr-2qhr-jqc5) that expose the application to known attacks. Update the dependency in apps/logpush/package.json by running `npm install agents@latest` or specifying a patched version. This removes the vulnerable code paths. Verify by running `npm audit` and confirming no MEDIUM-severity vulnerabilities remain for agents.
agents==0.2.19 has 3 known CVEs [MEDIUM]: GHSA-cvhv-6xm6-c3v4, GHSA-r7x9-8ph7-w8cg, GHSA-w5cr-2qhr-jqc5. Upgrade to a patched version.
RemediationAI
The problem is that agents version 0.2.19 contains 3 known MEDIUM-severity CVEs that expose the application to known attacks. Update the dependency in apps/cloudflare-one-casb/package.json by running `npm install agents@latest` or specifying a patched version. This removes the vulnerable code paths. Verify by running `npm audit` and confirming no MEDIUM-severity vulnerabilities remain for agents.
agents==0.2.19 has 3 known CVEs [MEDIUM]: GHSA-cvhv-6xm6-c3v4, GHSA-r7x9-8ph7-w8cg, GHSA-w5cr-2qhr-jqc5. Upgrade to a patched version.
RemediationAI
The problem is that agents version 0.2.19 contains 3 known MEDIUM-severity CVEs that expose the application to known attacks. Update the dependency in apps/autorag/package.json by running `npm install agents@latest` or specifying a patched version. This removes the vulnerable code paths. Verify by running `npm audit` and confirming no MEDIUM-severity vulnerabilities remain for agents.
ai==4.3.10 has 1 known CVE [LOW]: GHSA-rwvc-j5jr-mgvh. Upgrade to a patched version.
RemediationAI
The problem is that ai version 4.3.10 contains 1 known LOW-severity CVE (GHSA-rwvc-j5jr-mgvh) that may expose the application to attacks. Update the dependency in apps/sandbox-container/package.json by running `npm install ai@latest` or specifying a patched version. This removes the vulnerable code. Verify by running `npm audit` and confirming no LOW-severity vulnerabilities remain for ai.
wrangler==4.10.0 has 1 known CVE [HIGH]: GHSA-36p8-mvp6-cv38. Upgrade to a patched version.
RemediationAI
The problem is that wrangler version 4.10.0 contains 1 known HIGH-severity CVE (GHSA-36p8-mvp6-cv38) that exposes the application to known attacks. Update the dependency in packages/eval-tools/package.json by running `npm install wrangler@latest` or specifying a patched version. This removes the vulnerable code. Verify by running `npm audit` and confirming no HIGH-severity vulnerabilities remain for wrangler.
list_logs
get_log_details
list_gateways
mcp_demo_day_info
container_initialize