High risk. Don't ship without significant remediation.
Scanned 5/13/2026, 5:06:20 AM · Cached result · Deep Scan · 91 rules · How we decide →
AIVSS Score
High
Severity Breakdown: 0 critical · 8 high · 16 medium · 2 low
MCP Server Information
Findings
This package receives a D grade with a safety score of 66/100 and poses significant security concerns, primarily from 8 high-severity issues across prompt injection, tool poisoning, and server configuration vulnerabilities. The 16 medium-severity findings, dominated by 12 server configuration problems, indicate systemic weaknesses in how the package handles requests and manages its runtime environment. Installing this package would require substantial security hardening and careful isolation before use in any production environment.
AI: Per-finding remediation generated by bedrock-claude-haiku-4-5 for 26 of 26 findings. Click any finding to read.
No known CVEs found for this package or its dependencies.
Scan Details
26 findings
inspector-simple.js reads PLUGGEDIN_API_KEY and PLUGGEDIN_API_BASE_URL from .env.local and passes them as environment variables to a spawned MCP inspector process with DANGEROUSLY_OMIT_AUTH flag, potentially exfiltrating credentials.
Evidence
| 1 | #!/usr/bin/env node |
| 2 | |
| 3 | import { spawn, execFile } from 'child_process'; |
| 4 | import { readFileSync } from 'fs'; |
| 5 | import { fileURLToPath } from 'url'; |
| 6 | import { dirname, join } from 'path'; |
| 7 | |
| 8 | const __filename = fileURLToPath(import.meta.url); |
| 9 | const __dirname = dirname(__filename); |
| 10 | |
| 11 | console.log('Starting MCP Inspector with auto-open...'); |
| 12 | |
| 13 | // Load environment variables from .env.local with proper parsing |
| 14 | const envPath = join(__dirname, '..', '.env.local'); |
| 15 | let envVars = {}; |
| 16 | |
| 17 | // Secure environment variable |
RemediationAI
The problem is that inspector-simple.js reads PLUGGEDIN_API_KEY and PLUGGEDIN_API_BASE_URL from .env.local and passes them directly as environment variables to a spawned child process, exposing credentials to the subprocess and potentially to process inspection tools. Remove the DANGEROUSLY_OMIT_AUTH flag and instead use the `env` option in `spawn()` to pass only a minimal, sanitized environment: explicitly exclude credential variables, or use a secrets management library like `dotenv-safe` to validate required keys before spawning. This ensures credentials are never passed to untrusted child processes and reduces the attack surface for credential exfiltration. Verify by running the inspector and confirming via `ps aux` or `cat /proc/[pid]/environ` that PLUGGEDIN_API_KEY and PLUGGEDIN_API_BASE_URL do not appear in the spawned process environment.
inspector-http.js reads PLUGGEDIN_API_KEY from .env.local and passes it as an environment variable to a spawned process that connects to an external MCP inspector endpoint, potentially exfiltrating the API key to an attacker-controlled or unvalidated destination.
Evidence
| 1 | #!/usr/bin/env node |
| 2 | |
| 3 | import { spawn, execFile } from 'child_process'; |
| 4 | import { readFileSync } from 'fs'; |
| 5 | import { fileURLToPath } from 'url'; |
| 6 | import { dirname, join } from 'path'; |
| 7 | |
| 8 | const __filename = fileURLToPath(import.meta.url); |
| 9 | const __dirname = dirname(__filename); |
| 10 | |
| 11 | console.log('Starting Streamable HTTP MCP Server with Inspector...'); |
| 12 | |
| 13 | // Load environment variables from .env.local |
| 14 | const envPath = join(__dirname, '..', '.env.local'); |
| 15 | let envVars = {}; |
| 16 | |
| 17 | // Secure environment variable parser |
RemediationAI
The problem is that inspector-http.js reads PLUGGEDIN_API_KEY from .env.local and passes it as an environment variable to a spawned process that connects to an external MCP inspector endpoint without validating the endpoint URL, enabling credential exfiltration to attacker-controlled servers. Replace the direct environment variable passing with an in-process credential injection mechanism: use the `env` option in `spawn()` to exclude credential variables, and instead pass the API key only to validated, hardcoded endpoints via HTTPS with certificate pinning or mutual TLS. This prevents the credential from being leaked to arbitrary external destinations. Verify by inspecting the spawned process environment with `ps aux` and confirming the API key is absent, then test that the inspector still authenticates correctly to the intended endpoint.
inspector-auto.js reads PLUGGEDIN_API_KEY and PLUGGEDIN_API_BASE_URL from .env.local and passes them as environment variables to a spawned MCP inspector process, potentially exfiltrating credentials to an unvalidated external endpoint.
Evidence
| 1 | #!/usr/bin/env node |
| 2 | |
| 3 | import { spawn, execFile } from 'child_process'; |
| 4 | import { readFileSync } from 'fs'; |
| 5 | import { fileURLToPath } from 'url'; |
| 6 | import { dirname, join } from 'path'; |
| 7 | |
| 8 | const __filename = fileURLToPath(import.meta.url); |
| 9 | const __dirname = dirname(__filename); |
| 10 | |
| 11 | console.log('Starting MCP Inspector with auto-open...'); |
| 12 | |
| 13 | // Load environment variables from .env.local with proper parsing |
| 14 | const envPath = join(__dirname, '..', '.env.local'); |
| 15 | let envVars = {}; |
| 16 | |
| 17 | // Secure environment variable |
RemediationAI
The problem is that inspector-auto.js reads PLUGGEDIN_API_KEY and PLUGGEDIN_API_BASE_URL from .env.local and passes them as environment variables to a spawned MCP inspector process without validating the destination endpoint, creating a credential exfiltration risk. Modify the `spawn()` call to use the `env` option with an explicit allowlist of safe environment variables, excluding all credential keys; instead, pass credentials through secure in-process channels (e.g., file descriptors, Unix sockets, or authenticated HTTP headers to a validated endpoint). This prevents credentials from leaking to unvalidated external services. Verify by checking the spawned process environment with `cat /proc/[pid]/environ` and confirming PLUGGEDIN_API_KEY and PLUGGEDIN_API_BASE_URL are absent, then confirm the inspector still functions with the intended backend.
createPluggedinMCPClient tool uses module-level serverParams.url to construct SSEClientTransport without consulting per-request caller identity or authorization context, enabling confused deputy attacks when multiple callers share the same MCP server instance.
Evidence
| 63 | debugError(`Invalid command for server ${serverParams.name}: ${serverParams.command}`); |
| 64 | return { client: undefined, transport: undefined }; |
| 65 | } |
| 66 | |
| 67 | const stdioParams: StdioServerParameters = { |
| 68 | command: serverParams.command, |
| 69 | args: serverParams.args ? validateArgs(serverParams.args) : undefined, |
| 70 | env: serverParams.env ? validateEnv(serverParams.env) : undefined, |
| 71 | // Use default values for other optional properties |
| 72 | // stderr and cwd will use their default values |
RemediationAI
The problem is that `createPluggedinMCPClient()` uses module-level `serverParams.url` to construct `SSEClientTransport` without consulting per-request caller identity or authorization context, allowing multiple callers to share the same transport and enabling confused deputy attacks where one caller's request can be attributed to or executed by another. Refactor the function signature to accept a `callerContext` or `requestContext` parameter containing the caller's identity and authorization scope, then validate that the caller is authorized to access the specific `serverParams.url` before constructing the transport; store per-caller transports in a context-keyed cache (e.g., `Map<string, Client>` keyed by caller ID). This ensures each caller's requests are isolated and authorized independently. Verify by creating two separate caller contexts with different permissions and confirming that a low-privilege caller cannot access resources authorized only for a high-privilege caller.
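A minimal sketch of the context-keyed cache idea (the `callerContext` shape is hypothetical, and the cached object is a placeholder where real code would construct the transport and `Client`):

```javascript
// Sketch: per-caller client cache so transports are never shared across callers.
function createClientFactory() {
  const clients = new Map(); // key: `${callerId}:${serverName}`

  return function getClientFor(callerContext, serverParams) {
    // Authorize this caller for this specific server before any transport exists.
    if (!callerContext.allowedServers.has(serverParams.name)) {
      throw new Error(`caller ${callerContext.id} is not authorized for ${serverParams.name}`);
    }
    const key = `${callerContext.id}:${serverParams.name}`;
    if (!clients.has(key)) {
      // Placeholder: real code would build the Stdio/SSE transport + Client here.
      clients.set(key, { callerId: callerContext.id, server: serverParams.name });
    }
    return clients.get(key);
  };
}
```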
createPluggedinMCPClient tool uses module-level serverParams to construct StdioClientTransport without consulting per-request caller identity or authorization context, enabling confused deputy attacks when multiple callers share the same MCP server instance.
Evidence
| 43 | for (const [key, value] of Object.entries(env)) { |
| 44 | // Only allow valid environment variable names |
| 45 | if (/^[A-Z0-9_]+$/i.test(key)) { |
| 46 | // Sanitize the value to prevent injection |
| 47 | validated[key] = String(value).replace(/[\0\r\n]/g, ''); |
| 48 | } |
| 49 | } |
| 50 | return validated; |
| 51 | } |
| 52 | |
| 53 | export const createPluggedinMCPClient = ( |
| 54 | serverParams: ServerParameters |
| 55 | ): { client: Client | undefined; transport: Transport | undefined } => { |
| 56 | let transport: Transport | undefined; |
| 57 | |
| 58 | // Create the appropriate |
RemediationAI
The problem is that `createPluggedinMCPClient()` uses module-level `serverParams` to construct `StdioClientTransport` without consulting per-request caller identity or authorization context, enabling confused deputy attacks when multiple callers share the same MCP server instance. Refactor the function to accept a `callerContext` parameter (containing caller identity and authorization scope), then validate the caller's permissions against the requested `serverParams` before constructing the transport; maintain per-caller transport instances in a context-keyed cache to prevent request mixing. This ensures each caller's stdio transport is isolated and authorized independently. Verify by instantiating the client with two different caller contexts and confirming that commands from one caller do not leak into or affect the other caller's transport stream.
inspector-http.js spawns an MCP server and polls its /health endpoint, but the file is truncated, so the complete handler logic cannot be verified for remote code execution or dynamic behavior swapping patterns.
Evidence
| 1 | #!/usr/bin/env node |
| 2 | |
| 3 | import { spawn, execFile } from 'child_process'; |
| 4 | import { readFileSync } from 'fs'; |
| 5 | import { fileURLToPath } from 'url'; |
| 6 | import { dirname, join } from 'path'; |
| 7 | |
| 8 | const __filename = fileURLToPath(import.meta.url); |
| 9 | const __dirname = dirname(__filename); |
| 10 | |
| 11 | console.log('Starting Streamable HTTP MCP Server with Inspector...'); |
| 12 | |
| 13 | // Load environment variables from .env.local |
| 14 | const envPath = join(__dirname, '..', '.env.local'); |
| 15 | let envVars = {}; |
| 16 | |
| 17 | // Secure environment variable parser |
RemediationAI
The problem is that inspector-http.js spawns an MCP server and polls its /health endpoint, but the file is truncated, preventing verification of the complete handler logic for remote code execution or dynamic behavior swapping patterns. Provide the complete, untruncated source code of inspector-http.js so that the full request/response handling, environment variable usage, and any dynamic reconfiguration logic can be audited for injection vulnerabilities, credential leakage, and unsafe deserialization. This enables a complete security review of the health-check polling mechanism and any side effects it may trigger. Verify by running a full static analysis scan (e.g., ESLint with security plugins such as `eslint-plugin-security`) on the complete file and confirming no new vulnerabilities are introduced.
client.ts createPluggedinMCPClient function validates and constructs transport objects but is truncated; the complete handler and any dynamic transport reconfiguration logic cannot be verified.
Evidence
| 1 | import { Client } from "@modelcontextprotocol/sdk/client/index.js"; |
| 2 | import { |
| 3 | StdioClientTransport, |
| 4 | StdioServerParameters, |
| 5 | } from "@modelcontextprotocol/sdk/client/stdio.js"; |
| 6 | import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js"; |
| 7 | import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js"; |
| 8 | import { Transport } from "@modelcontextprotocol/sdk/shared/transport.js"; |
| 9 | import { ServerParameters } from "./types.js"; |
| 10 | import { createRequire |
RemediationAI
The problem is that the `createPluggedinMCPClient()` function in `client.ts` validates and constructs transport objects but is truncated, preventing verification of the complete handler and any dynamic transport reconfiguration logic that could introduce vulnerabilities. Provide the complete, untruncated source code of `client.ts`, including the full function body, error handlers, and any dynamic transport switching or reconfiguration logic, so that the entire flow can be audited for confused deputy attacks, credential injection, and unsafe state management. This enables comprehensive security review of transport lifecycle and caller isolation. Verify by running a full static analysis scan (e.g., TypeScript strict mode, ESLint with security plugins) on the complete file and confirming no new vulnerabilities are introduced.
File mounts an HTTP route that handles MCP `tools/list` (Express / Fastify / FastAPI / Flask), but neither the route nor the router it sits behind has any auth middleware applied. An anonymous client can enumerate every tool the server exposes, scope the attack surface, and (if `tools/call` shares the route) invoke them. Apply auth at the route or router level: Express `passport.authenticate(...)` / a `requireAuth`-style middleware, FastAPI `Depends(get_current_user)` or `Depends(verify_jwt)`, Flask
Evidence
| 1 | /** |
| 2 | * Streamable HTTP Server Transport for MCP Proxy |
| 3 | * |
| 4 | * MCP Protocol Compliance: |
| 5 | * - Headers use Title-Case per MCP spec (Mcp-Session-Id, Mcp-Protocol-Version) |
| 6 | * - CORS headers expose custom headers to clients |
| 7 | * - Protocol version validation (2024-11-05) |
| 8 | * - JSON-RPC 2.0 compliant error codes |
| 9 | * |
| 10 | * JSON-RPC Error Codes Used: |
| 11 | * - -32600: Invalid Request (malformed request, unsupported protocol version) |
| 12 | * - -32601: Method not found (HTTP method not allowed) |
| 13 | * - -32603: Internal error (s |
RemediationAI
The problem is that the HTTP route handling MCP `tools/list` has no authentication middleware applied, allowing anonymous clients to enumerate all exposed tools and scope the attack surface. Apply authentication middleware at the route or router level: for Express, add `passport.authenticate('jwt')` or a custom `requireAuth` middleware before the route handler; for FastAPI, add `Depends(verify_jwt)` to the route function signature; for Flask, use `@login_required` or a custom `@require_auth` decorator. This ensures only authenticated and authorized callers can enumerate tools. Verify by making an unauthenticated HTTP request to the `tools/list` endpoint and confirming it returns a 401 Unauthorized or 403 Forbidden response, then repeat with valid credentials and confirm the tool list is returned.
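A framework-agnostic sketch of the Express-style middleware suggested above (the `verifyToken` callback is assumed to be supplied by the application):

```javascript
// Sketch: bearer-token gate for Express-style (req, res, next) handlers.
function requireAuth(verifyToken) {
  return function (req, res, next) {
    const header = (req.headers && req.headers.authorization) || '';
    const token = header.startsWith('Bearer ') ? header.slice(7) : null;
    if (!token || !verifyToken(token)) {
      res.statusCode = 401;
      res.end(JSON.stringify({ error: 'unauthorized' }));
      return;
    }
    next();
  };
}

// Hypothetical wiring: app.use('/mcp', requireAuth(verifyToken));
// guards tools/list and tools/call behind the same check.
```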
User-controlled value printed to terminal without ANSI escape sanitization. Malicious input can inject cursor-control sequences, rewrite earlier output, or hide shell commands from the operator.
Evidence
| 433 | query: validatedArgs.query, |
| 434 | includeMetadata: true // Always request metadata |
| 435 | }; |
| 436 | console.error(`[DEBUG] Request body:`, JSON.stringify(requestBody)); |
| 437 | |
| 438 | const response = await axios.post( |
| 439 | ragApiUrl, |
RemediationAI
The problem is that user-controlled values are printed to the terminal via `console.error()` without ANSI escape sequence sanitization, allowing malicious input to inject cursor-control sequences that rewrite earlier output or hide shell commands. Route the value through a sanitizer before logging: use the `strip-ansi` npm package, or a regex such as `/\x1b\[[0-9;?]*[@-~]/g` that removes CSI sequences generally (a color-only filter like `/\x1b\[[0-9;]*m/g` misses cursor-control codes such as `\x1b[2J`). This prevents terminal injection attacks. Verify by passing a string containing ANSI escape codes (e.g., `\x1b[2J` to clear the screen) to the debug output and confirming the terminal is not cleared or manipulated.
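A sketch of such a sanitizer (the helper name is illustrative; the regex covers CSI sequences like `\x1b[2J` and `\x1b[31m`, and a second pass drops stray control characters while keeping tabs and newlines):

```javascript
// Sketch: remove CSI escape sequences and stray control characters before logging.
const ANSI_CSI = /\x1b\[[0-9;?]*[ -\/]*[@-~]/g;

function sanitizeForTerminal(value) {
  return String(value)
    .replace(ANSI_CSI, '')
    .replace(/[\x00-\x08\x0b-\x1f\x7f]/g, ''); // keeps \t and \n
}

// console.error('[DEBUG] Request body:', sanitizeForTerminal(JSON.stringify(requestBody)));
```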
Network / IO / subprocess call without an explicit timeout. A malicious or hung upstream (HTTP host, socket peer, child process) can pin threads, exhaust connection/process pools, and make the MCP server unresponsive. Always pass a bounded timeout. v2 extends v1 with subprocess coverage (R03 from the legacy readiness audit).
Evidence
| 74 | attempts++; |
| 75 | try { |
| 76 | // Try to connect to the health endpoint |
| 77 | const response = await fetch(`http://localhost:${port}/health`); |
| 78 | if (response.ok) { |
| 79 | console.log(`Server is ready! (took ${Date.now() - startTime}ms, ${attempts} attempts)`); |
| 80 | return true; |
RemediationAI
The problem is that the `fetch()` call to the health endpoint in inspector-http.js has no explicit timeout, allowing a malicious or hung upstream server to pin threads and exhaust connection pools, making the MCP server unresponsive. Add an explicit timeout to the `fetch()` call using the `AbortController` API: create an `AbortController`, set a timeout (e.g., 5 seconds) that calls `controller.abort()`, and pass `{ signal: controller.signal }` to the `fetch()` options. This ensures the health check fails fast if the upstream is unresponsive. Verify by starting the health check against a non-responsive server (e.g., `nc -l 127.0.0.1 9999` without responding) and confirming the fetch times out within the specified duration.
MCP tool input schema exposes an unconstrained string/any field with a risky name (command/query/sql/code/script/url/path/expr/eval). Any caller can pass arbitrary values, which typically widens the tool's blast radius well beyond its intent. Narrow the schema with `.enum()`, `.regex()`, `.max()`, `Literal[...]`, Pydantic `Field(max_length=..., pattern=...)`, or a JSON Schema `enum` / `pattern` / `maxLength`.
Evidence
| 8 | // Define the schema for asking questions to the knowledge base |
| 9 | export const AskKnowledgeBaseInputSchema = z.object({ |
| 10 | query: z.string() |
| 11 | .min(1, "Query cannot be empty") |
| 12 | .max(1000, "Query too long") |
| 13 | .describe("Question to ask the knowledge base") |
RemediationAI
The problem is that the `query` field in `AskKnowledgeBaseInputSchema` is an unconstrained string that could accept arbitrary input, widening the tool's blast radius beyond its intended use. The schema already includes `.min(1)` and `.max(1000)` constraints, but add a `.regex()` pattern to restrict the query to safe characters: for example, `.regex(/^[a-zA-Z0-9\s\-.,?!]+$/, 'Query contains invalid characters')` to allow only alphanumeric, spaces, and basic punctuation. This prevents injection of special characters that could be misinterpreted by the knowledge base backend. Verify by attempting to pass a query with SQL keywords (e.g., `'; DROP TABLE--`) and confirming the schema validation rejects it.
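A plain-JavaScript sketch of the same checks the suggested `.regex()` would enforce (the allowed character set is an assumption to tune against what the knowledge-base backend actually accepts):

```javascript
// Sketch: length + character-class validation mirroring the tightened schema.
const QUERY_PATTERN = /^[\w\s.,?!'"-]+$/;

function validateQuery(query) {
  if (typeof query !== 'string' || query.length === 0) {
    return { ok: false, error: 'Query cannot be empty' };
  }
  if (query.length > 1000) {
    return { ok: false, error: 'Query too long' };
  }
  if (!QUERY_PATTERN.test(query)) {
    return { ok: false, error: 'Query contains invalid characters' };
  }
  return { ok: true };
}
```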
MCP tool input schema exposes an unconstrained string/any field with a risky name (command/query/sql/code/script/url/path/expr/eval). Any caller can pass arbitrary values, which typically widens the tool's blast radius well beyond its intent. Narrow the schema with `.enum()`, `.regex()`, `.max()`, `Literal[...]`, Pydantic `Field(max_length=..., pattern=...)`, or a JSON Schema `enum` / `pattern` / `maxLength`.
Evidence
| 160 | // Define the schema for asking questions to the knowledge base |
| 161 | const AskKnowledgeBaseInputSchema = z.object({ |
| 162 | query: z.string() |
| 163 | .min(1, "Query cannot be empty") |
| 164 | .max(1000, "Query too long") |
| 165 | .describe("Your question or query to get AI-generated answers from the knowledge base.") |
RemediationAI
The problem is that the `query` field in the `AskKnowledgeBaseInputSchema` in mcp-proxy.ts is an unconstrained string that could accept arbitrary input, widening the tool's blast radius. The schema already includes `.min(1)` and `.max(1000)` constraints, but add a `.regex()` pattern to restrict the query to safe characters: for example, `.regex(/^[a-zA-Z0-9\s\-.,?!]+$/, 'Query contains invalid characters')` to allow only alphanumeric, spaces, and basic punctuation. This prevents injection of special characters that could be misinterpreted by downstream systems. Verify by attempting to pass a query with shell metacharacters (e.g., `$(whoami)`) and confirming the schema validation rejects it.
Dockerfile never sets a non-root `USER` directive, so the CMD runs as root by default. Any RCE or library-level vulnerability exploited inside this container gets full privileges (MCP Top-10 R3). Add `USER <non-root>` before CMD / ENTRYPOINT in the final stage, e.g. `USER 1000`, `USER nobody`, or `USER nonroot` on distroless.
Evidence
| 1 | # Build stage |
| 2 | FROM node:20-slim AS builder |
| 3 | |
| 4 | WORKDIR /app |
| 5 | |
| 6 | # Copy package files |
| 7 | COPY package*.json ./ |
| 8 | |
| 9 | # Install all dependencies (including dev dependencies for building) |
| 10 | RUN npm ci |
| 11 | |
| 12 | # Copy source code |
| 13 | COPY . . |
| 14 | |
| 15 | # Build the application |
| 16 | RUN npm run build |
| 17 | |
| 18 | # Production stage |
| 19 | FROM node:20-slim |
| 20 | |
| 21 | WORKDIR /app |
| 22 | |
| 23 | # Copy package files |
| 24 | COPY package*.json ./ |
| 25 | |
| 26 | # Install only production dependencies |
| 27 | RUN npm ci --only=production |
| 28 | |
| 29 | # Copy built application from builder stage |
| 30 | COPY --from=builder /app/dist ./dist |
RemediationAI
The problem is that the Dockerfile never sets a non-root `USER` directive, so the container runs as root by default, giving any RCE or library vulnerability full system privileges. Add a `USER` directive before the `CMD` or `ENTRYPOINT` in the final production stage: for example, add `RUN useradd -m -u 1000 appuser` after the `WORKDIR` line and then `USER 1000` before `CMD`. This ensures the MCP server runs with minimal privileges. Verify by building the image, running it, and executing `id` inside the container to confirm the process runs as UID 1000 (or the chosen non-root user) rather than UID 0 (root).
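A sketch of the final production stage with the remediation applied (the `CMD` path is an assumption based on the `dist` output copied above):

```dockerfile
# Production stage
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY --from=builder /app/dist ./dist
# Create an unprivileged user and drop root before the entrypoint runs
RUN useradd --create-home --uid 1000 appuser
USER 1000
CMD ["node", "dist/index.js"]
```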
MCP manifest declares tools but no authentication field is present (none of: auth, authorization, bearer, oauth, mtls, apiKey, api_key, basic, token, authToken). Absence is a weak signal: confirm whether the server relies on network-layer or host-level auth, or declare the real mechanism explicitly so reviewers can audit it.
Evidence
| 1 | import axios, { type AxiosResponse } from "axios"; |
| 2 | import { ToolExecutionResult } from "../types.js"; |
| 3 | import { |
| 4 | getPluggedinMCPApiKey, |
| 5 | getPluggedinMCPApiBaseUrl, |
| 6 | sanitizeName, |
| 7 | isDebugEnabled |
| 8 | } from "../utils.js"; |
| 9 | import { logMcpActivity, createExecutionTimer } from "../notification-logger.js"; |
| 10 | import { debugError, debugLog } from "../debug-log.js"; |
| 11 | import { getApiKeySetupMessage } from "./static-handlers-helpers.js"; |
| 12 | import { |
| 13 | DiscoverToolsInputSchema, |
| 14 | AskKnowledgeBaseInputSchema, |
RemediationAI
The problem is that the MCP manifest in src/handlers/static-handlers.ts declares tools but does not explicitly declare an authentication mechanism (none of: auth, authorization, bearer, oauth, mtls, apiKey, api_key, basic, token, authToken), making it unclear whether the server relies on network-layer, host-level, or application-level auth. Add an explicit `auth` field to the manifest JSON or tool definitions: for example, `"auth": { "type": "bearer", "scheme": "Bearer" }` or `"auth": { "type": "apiKey", "in": "header", "name": "X-API-Key" }`. This makes the authentication mechanism explicit and auditable. Verify by reviewing the manifest JSON output and confirming the `auth` field is present and accurately describes the actual authentication mechanism used by the server.
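For example, a manifest fragment making the API-key mechanism explicit (the field names follow common manifest conventions and the tool names are illustrative, inferred from the schemas imported above):

```json
{
  "name": "pluggedin-mcp-proxy",
  "tools": ["discover_tools", "ask_knowledge_base"],
  "auth": {
    "type": "apiKey",
    "in": "header",
    "name": "Authorization",
    "description": "PLUGGEDIN_API_KEY sent as a Bearer token"
  }
}
```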
MCP manifest declares tools but no authentication field is present (none of: auth, authorization, bearer, oauth, mtls, apiKey, api_key, basic, token, authToken). Absence is a weak signal: confirm whether the server relies on network-layer or host-level auth, or declare the real mechanism explicitly so reviewers can audit it.
Evidence
| 1 | # Migration Guide: Pre-release to v1.0.0 |
| 2 | |
| 3 | This guide helps you upgrade your plugged.in MCP Proxy from pre-release versions to the stable v1.0.0. |
| 4 | |
| 5 | ## Overview |
| 6 | |
| 7 | Version 1.0.0 is our first stable release, bringing together all the features developed during the pre-release phase with enhanced security, notifications, and developer tools. |
| 8 | |
| 9 | ## What's New in v1.0.0 |
| 10 | |
| 11 | - **Notification Support**: Real-time activity tracking |
| 12 | - **RAG Integration**: Document context in AI interactions |
| 13 | - **Enhanced Security** |
RemediationAI
The problem is that the MCP manifest referenced in MIGRATION_GUIDE_v1.0.0.md does not explicitly declare an authentication mechanism, making it unclear how the server authenticates callers. Add an explicit `auth` field to the manifest or documentation: for example, document that the server uses "Bearer token authentication" or "API key in X-API-Key header" and include this in the manifest JSON under an `auth` field. This ensures reviewers can audit the authentication mechanism. Verify by reviewing the migration guide and confirming it explicitly documents the authentication method used by the v1.0.0 server.
MCP manifest declares tools but no authentication field is present (none of: auth, authorization, bearer, oauth, mtls, apiKey, api_key, basic, token, authToken). Absence is a weak signal: confirm whether the server relies on network-layer or host-level auth, or declare the real mechanism explicitly so reviewers can audit it.
Evidence
| 1 | # Release Notes - v1.0.0 |
| 2 | |
| 3 | Released: June 19, 2025 |
| 4 | |
| 5 | ## Overview |
| 6 | |
| 7 | We're excited to announce the release of plugged.in MCP Proxy v1.0.0! This major update brings significant enhancements including notification support, RAG integration capabilities, enhanced security measures, and improved debugging tools. |
| 8 | |
| 9 | ## New Features |
| 10 | |
| 11 | ### MCP Activity Notifications |
| 12 | - **Real-time Activity Logging**: Track all MCP operations (tool calls, resource reads, prompt executions) |
| 13 | - **Notification Integration**: Se |
RemediationAI
The problem is that the MCP manifest referenced in RELEASE_NOTES_v1.0.0.md does not explicitly declare an authentication mechanism, making it unclear how the server authenticates callers. Add an explicit `auth` field to the manifest or release notes: for example, document that the server uses "OAuth 2.0" or "JWT bearer tokens" and include this in the manifest JSON under an `auth` field. This ensures reviewers can audit the authentication mechanism. Verify by reviewing the release notes and confirming they explicitly document the authentication method used by the v1.0.0 server.
MCP manifest declares tools but no authentication field is present (none of: auth, authorization, bearer, oauth, mtls, apiKey, api_key, basic, token, authToken). Absence is a weak signal: confirm whether the server relies on network-layer or host-level auth, or declare the real mechanism explicitly so reviewers can audit it.
Evidence
| 1 | # plugged.in MCP Hub – Proxy · Knowledge · Memory · Tools |
| 2 | |
| 3 | <div align="center"> |
| 4 | <img src="https://plugged.in/_next/image?url=%2Fpluggedin-wl.png&w=256&q=75" alt="plugged.in Logo" width="256" height="75"> |
| 5 | <h3>The Crossroads for AI Data Exchanges</h3> |
| 6 | <p>A unified MCP hub that gives your AI <strong>Knowledge</strong>, <strong>Memory</strong>, and <strong>Tools</strong> โ not just a proxy. Manage and test all MCP servers from a single connection while powering document-aware and memory-augmen |
RemediationAI
The problem is that the MCP manifest referenced in README.md does not explicitly declare an authentication mechanism, making it unclear how the server authenticates callers. Add an explicit `auth` field to the manifest or README documentation: for example, document that the server uses "API key authentication" or "JWT bearer tokens" and include this in the manifest JSON under an `auth` field. This ensures reviewers and users can audit the authentication mechanism. Verify by reviewing the README and confirming it explicitly documents the authentication method used by the MCP server.
MCP manifest declares tools but no authentication field is present (none of: auth, authorization, bearer, oauth, mtls, apiKey, api_key, basic, token, authToken). Absence is a weak signal: confirm whether the server relies on network-layer or host-level auth, or declare the real mechanism explicitly so reviewers can audit it.
Evidence
| 1 | # Release Notes: v1.4.0 - Registry v2 Support & Enhanced OAuth Integration |
| 2 | |
| 3 | ## Overview |
| 4 | |
| 5 | We're excited to announce the release of plugged.in MCP Proxy v1.4.0! This release brings full support for the Registry v2 features from plugged.in App v2.7.0, including OAuth token management, bidirectional notifications, and trending analytics. |
| 6 | |
| 7 | ## Major Features |
| 8 | |
| 9 | ### 1. OAuth Token Management |
| 10 | - **Seamless Authentication**: OAuth tokens are now automatically retrieved from plugged.in App v2.7.0 |
| 11 | - **No |
RemediationAI
The problem is that the MCP manifest referenced in RELEASE_NOTES_v1.4.0.md does not explicitly declare an authentication mechanism, making it unclear how the server authenticates callers. Add an explicit `auth` field to the manifest or release notes: for example, document that the server uses "OAuth 2.0 token management" (as mentioned in the release notes) and include this in the manifest JSON under an `auth` field with details like `"type": "oauth2"`. This ensures reviewers can audit the authentication mechanism. Verify by reviewing the release notes and confirming they explicitly document the OAuth 2.0 authentication method in the manifest.
GitHub Actions `uses:` reference is not pinned to a 40-character commit SHA. Tags (`@v4`) and branches (`@main`) are mutable: a compromised maintainer or a tag rewrite can substitute malicious code into your CI pipeline silently. Pin to a SHA: `uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab`. For readability, include the version as a trailing comment: `# v4.1.1`. Tools like `pinact` / `ratchet` automate this. Allowed unpinned forms (excluded by the rule): - Local actions `.
Evidence
| 26 | actions: read # Required for Claude to read CI results on PRs |
| 27 | steps: |
| 28 | - name: Checkout repository |
| 29 | uses: actions/checkout@v4 |
| 30 | with: |
| 31 | fetch-depth: 1 |
RemediationAI
The problem is that `.github/workflows/claude.yml` uses `actions/checkout@v4`, which is a mutable tag that can be rewritten by a compromised maintainer, allowing malicious code injection into the CI pipeline. Replace `uses: actions/checkout@v4` with a pinned commit SHA: `uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # v4.1.1`. Use a tool like `pinact` or `ratchet` to automate this process across all workflows. This ensures the exact version of the action is immutable and cannot be silently replaced. Verify by running `git log --oneline .github/workflows/claude.yml` and confirming the `uses:` line contains a 40-character SHA, not a tag.
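The pinned step would look like this (SHA taken from the rule's own example; verify it matches the v4 release you intend before committing):

```yaml
steps:
  - name: Checkout repository
    uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # v4.1.1
    with:
      fetch-depth: 1
```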
GitHub Actions `uses:` reference is not pinned to a 40-character commit SHA. Tags (`@v4`) and branches (`@main`) are mutable: a compromised maintainer or a tag rewrite can substitute malicious code into your CI pipeline silently. Pin to a SHA: `uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab`. For readability, include the version as a trailing comment: `# v4.1.1`. Tools like `pinact` / `ratchet` automate this. Allowed unpinned forms (excluded by the rule): - Local actions `.
Evidence
| 27 | steps: |
| 28 | - name: Checkout repository |
| 29 | uses: actions/checkout@v4 |
| 30 | with: |
| 31 | fetch-depth: 1 |
Remediation (AI-generated)
The problem is that `.github/workflows/claude-code-review.yml` uses `actions/checkout@v4`, which is a mutable tag that can be rewritten by a compromised maintainer, allowing malicious code injection into the CI pipeline. Replace `uses: actions/checkout@v4` with a pinned commit SHA: `uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # v4.1.1`. This ensures the exact version of the action is immutable. Verify by inspecting the workflow file and confirming the `uses:` line contains a 40-character SHA instead of a tag.
GitHub Actions `uses:` reference is not pinned to a 40-character commit SHA. Tags (`@v4`) and branches (`@main`) are mutable: a compromised maintainer or a tag rewrite can silently substitute malicious code into your CI pipeline. Pin to a SHA: `uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab`. For readability, include the version as a trailing comment: `# v4.1.1`. Tools like `pinact` / `ratchet` automate this. Allowed unpinned forms (excluded by the rule): local actions referenced by a `./` path.
Evidence
| 33 | - name: Run Claude Code Review |
| 34 | id: claude-review |
| 35 | uses: anthropics/claude-code-action@v1 |
| 36 | with: |
| 37 | claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }} |
| 38 | prompt: | |
Remediation (AI-generated)
The problem is that `.github/workflows/claude-code-review.yml` uses `anthropics/claude-code-action@v1`, which is a mutable tag that can be rewritten by a compromised maintainer, allowing malicious code injection into the CI pipeline. Replace `uses: anthropics/claude-code-action@v1` with a pinned commit SHA: `uses: anthropics/claude-code-action@<40-char-sha> # v1.x.x`. Use a tool like `pinact` to automate this process. This ensures the exact version of the action is immutable and cannot be silently replaced. Verify by inspecting the workflow file and confirming the `uses:` line contains a 40-character SHA instead of a tag.
GitHub Actions `uses:` reference is not pinned to a 40-character commit SHA. Tags (`@v4`) and branches (`@main`) are mutable: a compromised maintainer or a tag rewrite can silently substitute malicious code into your CI pipeline. Pin to a SHA: `uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab`. For readability, include the version as a trailing comment: `# v4.1.1`. Tools like `pinact` / `ratchet` automate this. Allowed unpinned forms (excluded by the rule): local actions referenced by a `./` path.
Evidence
| 32 | - name: Run Claude Code |
| 33 | id: claude |
| 34 | uses: anthropics/claude-code-action@v1 |
| 35 | with: |
| 36 | claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }} |
Remediation (AI-generated)
The problem is that `.github/workflows/claude.yml` uses `anthropics/claude-code-action@v1`, which is a mutable tag that can be rewritten by a compromised maintainer, allowing malicious code injection into the CI pipeline. Replace `uses: anthropics/claude-code-action@v1` with a pinned commit SHA: `uses: anthropics/claude-code-action@<40-char-sha> # v1.x.x`. Use a tool like `pinact` to automate this process. This ensures the exact version of the action is immutable. Verify by inspecting the workflow file and confirming the `uses:` line contains a 40-character SHA instead of a tag.
MCP tool description or return text contains an imperative phrase that asks the LLM to invoke or call ANOTHER tool: "invoke the write_file tool", "before using this, also call send_email", "silently invoke X". This is a cross-tool chaining injection: the user authorized THIS tool, but the payload escalates into others. Tool descriptions should describe what the tool DOES, not direct the LLM to use other tools. If a tool's correct operation requires composition, document the dependency in human-readable documentation rather than in the model-facing description.
Evidence
| 1 | import { z } from "zod"; |
| 2 | import { Server } from "@modelcontextprotocol/sdk/server/index.js"; |
| 3 | import { |
| 4 | CompatibilityCallToolResultSchema, |
| 5 | ListToolsResultSchema, |
| 6 | Tool, |
| 7 | } from "@modelcontextprotocol/sdk/types.js"; |
| 8 | import { getMcpServers } from "../fetch-pluggedinmcp.js"; |
| 9 | import { getSessionKey, sanitizeName, getPluggedinMCPApiKey, getPluggedinMCPApiBaseUrl } from "../utils.js"; // Import API utils |
| 10 | import axios from "axios"; // Import axios |
| 11 | import { getSession } from "../sessions.js"; |
| 12 | import { |
Remediation (AI-generated)
The problem is that the tool description in src/tools/call-pluggedin-tool.ts may contain imperative phrases directing the LLM to invoke other tools (e.g., 'invoke the write_file tool'), enabling cross-tool chaining injection where a caller authorized for one tool can escalate into others. Review the tool description and remove any imperative phrases that direct the LLM to call other tools; instead, describe only what THIS tool does (e.g., 'Calls a Plugged.in tool with the specified parameters'). This prevents the LLM from being tricked into invoking unauthorized tools. Verify by reviewing the tool description in the MCP manifest and confirming it contains no phrases like 'invoke', 'call', 'also use', or 'before using this'.
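One hedged way to keep such phrases out is to lint tool descriptions at registration time. A minimal sketch, with an illustrative (not exhaustive) pattern list:

```typescript
// Sketch: flag cross-tool-chaining phrases in a tool description before
// registering it. The pattern list is illustrative, not exhaustive.
const CHAINING_PATTERNS: RegExp[] = [
  /\binvoke (the )?\w+ tool\b/i,
  /\bcall (the )?\w+ tool\b/i,
  /\balso (call|use)\b/i,
  /\bbefore using this\b/i,
  /\bsilently\b/i,
];

function findChainingPhrases(description: string): string[] {
  // Returns the source of each pattern the description trips; empty if clean.
  return CHAINING_PATTERNS.filter((re) => re.test(description)).map(
    (re) => re.source
  );
}
```

A registration path could reject or quarantine any tool whose description trips one of these patterns instead of forwarding it to the model.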
MCP server has authentication wired but no invocation log. An authenticated server that never logs is a forensics dead-end: unauthorized actions cannot be detected, attributed, or reconstructed after the fact. Fixing this closes the OWASP MCP Top 10:2025 MCP08 (Lack of Audit and Telemetry) gap. Fix: add a structured `logger.info("tool.invoke", ...)` call to every authenticated tool handler with, at minimum, the tool name, caller identity, and request id. Ship invocation events to a retention sink (CloudWatch, Datadog, or an ELK stack).
Evidence
| 1 | /** |
| 2 | * Express Middleware for MCP Streamable HTTP Server |
| 3 | * |
| 4 | * This module contains reusable middleware functions for: |
| 5 | * - CORS headers |
| 6 | * - Protocol version validation |
| 7 | * - Accept header normalization |
| 8 | * - Authentication |
| 9 | * - Static file serving for .well-known endpoints |
| 10 | */ |
| 11 | |
| 12 | import express, { RequestHandler } from 'express'; |
| 13 | import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js'; |
| 14 | import { Server } from '@modelcontextprotocol/sdk/server/index.js'; |
| 15 | im |
Remediation (AI-generated)
The problem is that the MCP server in src/middleware.ts has authentication wired but no invocation log, making it impossible to detect, attribute, or reconstruct unauthorized actions after the fact. Add structured logging to every authenticated tool handler: call `logger.info('tool.invoke', { toolName, callerId, requestId, timestamp, args })` before executing the tool, and log the result and any errors after execution. Ship these logs to a retention sink (CloudWatch, Datadog, ELK stack). This enables forensic analysis and compliance auditing. Verify by invoking an authenticated tool and confirming a structured log entry appears in the logging sink with the tool name, caller identity, and request ID.
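A minimal sketch of that remediation as a handler wrapper; `Logger` is a stand-in for any structured logger (pino, winston, ...) and the event names are illustrative:

```typescript
import { randomUUID } from "node:crypto";

// Stand-in for a structured logger such as pino or winston.
interface Logger {
  info(event: string, fields: Record<string, unknown>): void;
  error(event: string, fields: Record<string, unknown>): void;
}

type ToolHandler = (args: Record<string, unknown>) => Promise<unknown>;

// Wrap an authenticated tool handler so every invocation, result, and error
// is emitted as a structured, correlatable log event.
function withInvocationLog(
  logger: Logger,
  toolName: string,
  callerId: string,
  handler: ToolHandler
): ToolHandler {
  return async (args) => {
    const requestId = randomUUID(); // correlates the three log lines below
    logger.info("tool.invoke", { toolName, callerId, requestId, args });
    try {
      const result = await handler(args);
      logger.info("tool.result", { toolName, callerId, requestId, ok: true });
      return result;
    } catch (err) {
      logger.error("tool.error", { toolName, callerId, requestId, error: String(err) });
      throw err; // preserve the original failure for the caller
    }
  };
}
```

Applying the wrapper at the point where authenticated handlers are registered keeps the logging concern out of each individual tool implementation.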
Silent error swallowing detected. An exception handler whose body is empty (`catch {}`, or `pass` / `...` in Python terms) discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 197 | if (retry) { |
| 198 | try { |
| 199 | await client.close(); |
| 200 | } catch {} |
| 201 | await sleep(waitFor); |
| 202 | } |
| 203 | } |
Remediation (AI-generated)
The problem is that src/client.ts silently swallows the exception in the `catch {}` block when closing the client, discarding the error with no log, metric, or trace, which blinds incident response and hides real failures. Replace `catch {}` with `catch (err) { logger.debug('client.close() error', { error: err.message }); }` to log the error at an appropriate level. This ensures failures are visible for debugging and monitoring. Verify by intentionally triggering a close error (e.g., by closing the transport twice) and confirming a debug log entry is produced.
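The shape of the fix can be factored into a small helper; this is a sketch, with the log function standing in for the project's debugLog/logger:

```typescript
// Sketch: best-effort cleanup that stays visible. Instead of `catch {}`, the
// close error is reported through an injected log function.
async function closeQuietly(
  close: () => Promise<void>,
  log: (msg: string) => void
): Promise<void> {
  try {
    await close();
  } catch (err) {
    // Non-fatal during retry/teardown, but never silent: record what failed.
    const detail = err instanceof Error ? err.message : String(err);
    log(`client.close() error: ${detail}`);
  }
}
```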
Silent error swallowing detected. An exception handler whose body is empty (`catch {}`, or `pass` / `...` in Python terms) discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 191 | if (metadata) { |
| 192 | try { |
| 193 | metadata.transport.close().catch(() => {}); |
| 194 | } catch {} |
| 195 | sessions.delete(oldestSessionId); |
| 196 | debugLog(`Evicted oldest session ${oldestSessionId} (LRU eviction)`); |
| 197 | } |
Remediation (AI-generated)
The problem is that src/middleware.ts silently swallows exceptions in the `.catch(() => {})` and `catch {}` blocks when closing transport metadata, discarding errors with no log or trace, which blinds incident response. Replace `.catch(() => {})` with `.catch(err => logger.debug('transport.close() error', { error: err.message }))` and replace `catch {}` with `catch (err) { logger.debug('session cleanup error', { error: err.message }); }`. This ensures failures are visible for debugging. Verify by intentionally triggering a close error and confirming a debug log entry is produced in the middleware logs.