High risk. Don't ship without significant remediation.
Scanned 5/13/2026, 6:10:17 AM · Cached result · Deep Scan · 91 rules
AIVSS Score
High
Severity Breakdown
0 critical
7 high
18 medium
1 low
MCP Server Information
Findings
This package receives a D grade with a safety score of 69/100 and carries significant security concerns: 7 high-severity findings, with prompt injection (6 findings) and server configuration issues (16 findings) dominating the results overall. The 18 medium-severity findings point to tool poisoning, resource exhaustion, and behavioral mismatches that could be exploited or cause unexpected behavior. Address these vulnerabilities or seek alternatives before deploying this in production.
Per-finding remediation generated by bedrock-claude-haiku-4-5 for 26 of 26 findings.
No known CVEs found for this package or its dependencies.
Scan Details
26 findings
Tool 'postgrestRequest' accepts arbitrary path and method arguments allowing direct object reference attacks; caller can construct POST/PUT/PATCH/DELETE requests to any PostgREST endpoint without ownership validation, enabling unauthorized access to or modification of records belonging to other users.
Evidence
| 70 | body: z |
| 71 | .union([ |
| 72 | z.record(z.string(), z.unknown()), |
| 73 | z.array(z.record(z.string(), z.unknown())), |
| 74 | ]) |
| 75 | .optional(), |
| 76 | }), |
| 77 | outputSchema: z.object({ result: z.unknown() }), |
| 78 | async execute({ method, path, body }) { |
| 79 | // normalize path concating to apiUrl |
| 80 | const { pathname, search } = new URL(path, 'http://mock/'); |
| 81 | const normalizedPath = `${pathname}${search}`; |
| 82 | const url = new URL(`$ |
Remediation (AI)
The problem is that `postgrestRequest` accepts arbitrary `path` and `method` parameters without validating ownership, allowing callers to construct requests to any PostgREST endpoint and access or modify records belonging to other users. Add row-level security (RLS) policy validation by checking the `Authorization` header or caller identity against the PostgREST JWT claims before executing the request, or implement a whitelist of allowed paths using `path: z.string().regex(/^\/(allowed_table_1|allowed_table_2)(\?.*)?$/)`. This ensures only authenticated users can access their own data. Verify the fix by testing that a request to `/users/999` (another user's ID) returns a 403 Forbidden when the caller's JWT does not own that record.
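As a sketch of the allowlist approach, the path check might look like the following; the table names and helper are hypothetical, not part of the package:

```typescript
// Hypothetical allowlist guard for postgrestRequest paths.
// ALLOWED_TABLES and isAllowedPath are illustrative names.
const ALLOWED_TABLES = new Set(['todos', 'profiles']);

function isAllowedPath(path: string): boolean {
  // Resolve against a mock base so '..' segments and absolute URLs
  // normalize before the table name is extracted.
  const { pathname } = new URL(path, 'http://mock/');
  const table = pathname.split('/').filter(Boolean)[0];
  return table !== undefined && ALLOWED_TABLES.has(table);
}
```

Calling `isAllowedPath('/todos?select=*')` passes while `isAllowedPath('/secrets')` fails, so the guard can sit at the top of `execute` before any request is issued.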
Tool 'postgrestRequest' shadows reserved 'request' (http/network category); lacks server-specific prefix.
Remediation (AI)
The tool name `postgrestRequest` shadows the reserved `request` category in MCP and lacks a server-specific prefix, creating ambiguity and potential conflicts. Rename the tool to `postgrest_query` or `postgrest_api_request` using the `name` property in the tool definition. This eliminates naming collisions and makes the tool's origin explicit. Verify by checking that the renamed tool appears correctly in the MCP server's tool list and does not conflict with other `request`-named tools.
Tool 'postgrestRequest' uses module-level apiKey credential (from options) to call PostgREST API without consulting caller identity or per-request authorization.
Remediation (AI)
The problem is that `postgrestRequest` uses a module-level `apiKey` credential without consulting the caller's identity or per-request authorization, allowing any caller to act as the service account. Modify the `execute` function to extract and validate the caller's JWT token from the MCP request context (e.g., via `context.authorization` or a custom header), then pass that token to PostgREST instead of the module-level key, or implement caller-specific API key rotation. This ensures each request is authorized under the caller's own identity. Verify by logging the authorization header used in each request and confirming it matches the caller's identity, not the service account.
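A minimal sketch of per-request credentials, assuming the host exposes the caller's JWT on the request context (the extraction point and function name are assumptions about your MCP integration):

```typescript
// Hypothetical: build headers from the caller's JWT rather than a
// module-level service key; fail closed when no credential is present.
function buildHeaders(callerJwt: string | undefined): Record<string, string> {
  if (!callerJwt) {
    throw new Error('missing caller authorization');
  }
  return {
    'content-type': 'application/json',
    authorization: `Bearer ${callerJwt}`,
  };
}
```

Failing closed here is deliberate: a silent fallback to the service key would reintroduce the original confused-deputy problem.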
Tool 'sqlToRest' uses module-level apiKey credential (from options) to call PostgREST API without consulting caller identity or per-request authorization.
Remediation (AI)
The problem is that `sqlToRest` uses a module-level `apiKey` credential without consulting the caller's identity or per-request authorization, allowing any caller to execute SQL as the service account. Modify the tool's `execute` function to accept an optional `authorization` parameter or extract it from the MCP context, then pass the caller's JWT to PostgREST instead of the module-level key. This ensures SQL execution is authorized under the caller's own identity. Verify by testing that a caller without SELECT permission on a table receives a 403 error when attempting to query it via `sqlToRest`.
Tool 'postgrestRequest' performs NETWORK side effects (HTTP requests with POST/PUT/PATCH/DELETE methods that modify remote state) but description only mentions 'performs an HTTP request' without disclosing that it can mutate data on the PostgREST API.
Remediation (AI)
The problem is that `postgrestRequest` performs destructive network side effects (POST/PUT/PATCH/DELETE mutations) but its description only states 'performs an HTTP request', failing to disclose that it can mutate remote data. Update the `description` field to explicitly state: `'Performs an HTTP request against the PostgREST API. WARNING: POST, PUT, PATCH, and DELETE methods will modify data on the remote server.'` This makes the destructive capability transparent to callers. Verify by reading the tool description in the MCP server's introspection output and confirming the mutation warning is present.
Tool '/spec' resource fetches OpenAPI spec from PostgREST API and returns untrusted JSON response verbatim without provenance wrapper, enabling indirect prompt injection.
Evidence
| 37 | }; |
| 38 | |
| 39 | if (apiKey) { |
| 40 | headers.apikey = apiKey; |
| 41 | headers.authorization = `Bearer ${apiKey}`; |
| 42 | } |
| 43 | |
| 44 | return headers; |
| 45 | } |
| 46 | |
| 47 | return createMcpServer({ |
| 48 | name: 'supabase/postgrest', |
| 49 | version, |
| 50 | resources: resources('postgrest', [ |
| 51 | jsonResource('/spec', { |
| 52 | name: 'OpenAPI spec', |
| 53 | description: 'OpenAPI spec for the PostgREST API', |
| 54 | async read(uri) { |
| 55 | const response = await fetch(ensureTrailingSlash(apiUrl), { |
Remediation (AI)
The problem is that the `/spec` resource fetches the OpenAPI spec from PostgREST and returns the untrusted JSON response verbatim without a provenance wrapper, enabling indirect prompt injection if the spec contains malicious content. Wrap the response in a provenance object: `{ source: 'postgrest-api', spec: response, timestamp: new Date().toISOString() }` and add a note in the resource description that the spec originates from an external API. This signals to the LLM that the content is untrusted. Verify by inspecting the resource output and confirming it includes a `source` field and timestamp.
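The suggested wrapper can be sketched as a small generic helper (names are illustrative):

```typescript
// Hypothetical provenance envelope for untrusted upstream payloads.
interface ProvenanceWrapped<T> {
  source: string;    // where the payload came from, e.g. 'postgrest-api'
  timestamp: string; // ISO-8601 fetch time
  data: T;           // the untrusted payload, nested rather than inline
}

function wrapWithProvenance<T>(source: string, data: T): ProvenanceWrapped<T> {
  return { source, timestamp: new Date().toISOString(), data };
}
```

Nesting the payload under `data` keeps untrusted content structurally separated from the trusted metadata, which the consuming LLM can be told to treat as a boundary.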
Tool 'postgrestRequest' fetches from caller-supplied URL path and returns untrusted JSON response verbatim without provenance wrapper, enabling indirect prompt injection via PostgREST API responses.
Evidence
| 65 | postgrestRequest: tool({ |
| 66 | description: 'Performs an HTTP request against the PostgREST API', |
| 67 | parameters: z.object({ |
| 68 | method: z.enum(['GET', 'POST', 'PUT', 'PATCH', 'DELETE']), |
| 69 | path: z.string(), |
| 70 | body: z |
| 71 | .union([ |
| 72 | z.record(z.string(), z.unknown()), |
| 73 | z.array(z.record(z.string(), z.unknown())), |
| 74 | ]) |
| 75 | .optional(), |
| 76 | }), |
| 77 | outputSchema: z.object({ result: z.unknown() }), |
| 78 | async exe |
Remediation (AI)
The problem is that `postgrestRequest` fetches from a caller-supplied `path` parameter and returns the untrusted JSON response verbatim without a provenance wrapper, enabling indirect prompt injection if the PostgREST API returns malicious content. Wrap the response in a provenance object: `{ source: 'postgrest-api', path: path, data: response, timestamp: new Date().toISOString() }` and document in the tool description that responses are untrusted external data. This signals to the LLM that the content originates from an external API. Verify by testing that the tool output includes a `source` field and that the original response is nested under a `data` key.
Network / IO / subprocess call without an explicit timeout. A malicious or hung upstream (HTTP host, socket peer, child process) can pin threads, exhaust connection/process pools, and make the MCP server unresponsive. Always pass a bounded timeout. v2 extends v1 with subprocess coverage (R03 from the legacy readiness audit).
Evidence
| 52 | name: 'OpenAPI spec', |
| 53 | description: 'OpenAPI spec for the PostgREST API', |
| 54 | async read(uri) { |
| 55 | const response = await fetch(ensureTrailingSlash(apiUrl), { |
| 56 | headers: getHeaders(), |
| 57 | }); |
Remediation (AI)
The problem is that the `fetch` call to PostgREST lacks an explicit timeout, allowing a hung or malicious upstream server to pin threads and exhaust connection pools, making the MCP server unresponsive. Add a `timeout` option to the `fetch` call: `fetch(ensureTrailingSlash(apiUrl), { headers: getHeaders(), signal: AbortSignal.timeout(5000) })` (5 seconds is a reasonable default). This ensures the request fails fast if the upstream server does not respond. Verify by testing that a request to a non-responsive endpoint times out within 5 seconds and the server remains responsive.
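A small sketch of the bounded fetch, assuming Node 18+ where `AbortSignal.timeout` is available (the 5-second default is an assumption to tune per deployment):

```typescript
// Merge a bounded timeout into any fetch init before the call is made.
function withTimeout(
  init: { signal?: AbortSignal; [k: string]: unknown } = {},
  timeoutMs = 5000
) {
  return { ...init, signal: AbortSignal.timeout(timeoutMs) };
}

// Usage sketch: fetch(ensureTrailingSlash(apiUrl), withTimeout({ headers: getHeaders() }));
```

With the signal attached, a hung upstream rejects the fetch promise with a `TimeoutError` instead of pinning the connection indefinitely.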
MCP tool input schema exposes an unconstrained string/any field with a risky name (command/query/sql/code/script/url/path/expr/ eval). Any caller can pass arbitrary values, which typically widens the tool's blast radius well beyond its intent. Narrow the schema with `.enum()`, `.regex()`, `.max()`, `Literal[...]`, Pydantic `Field(max_length=..., pattern=...)`, or a JSON Schema `enum` / `pattern` / `maxLength`.
Evidence
| 231 | "$schema": "http://json-schema.org/draft-07/schema#", |
| 232 | "additionalProperties": false, |
| 233 | "properties": { |
| 234 | "sql": { |
| 235 | "type": "string", |
| 236 | }, |
| 237 | }, |
| 238 | "required": [ |
| 239 | "sql", |
Remediation (AI)
The problem is that the `sql` field in the test schema is an unconstrained string, allowing callers to pass arbitrary SQL queries with no length or pattern restrictions, widening the tool's blast radius. Add constraints to the schema: `sql: z.string().max(10000).regex(/^[A-Za-z0-9\s,;()\-*'"=<>!]+$/)` to limit length and allow only safe SQL characters, or use a whitelist of allowed query patterns. This prevents excessively long or malformed SQL from being executed. Verify by testing that a query exceeding 10,000 characters is rejected and that a query with shell metacharacters is rejected.
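An equivalent standalone guard mirroring the zod constraints above (the length cap and character whitelist are illustrative and should be tuned to the SQL you actually expect):

```typescript
// Hypothetical input guard for an unconstrained `sql` string field.
function validateSqlInput(sql: string, maxLen = 10_000): boolean {
  if (sql.length === 0 || sql.length > maxLen) return false;
  // Conservative character whitelist; widen deliberately if needed.
  return /^[A-Za-z0-9\s,;()\-*'"=<>!.]+$/.test(sql);
}
```

A whitelist like this is a blast-radius limiter, not an injection-proof parser; it should complement, not replace, parameterized execution and least-privilege database roles.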
MCP tool input schema exposes an unconstrained string/any field with a risky name (command/query/sql/code/script/url/path/expr/ eval). Any caller can pass arbitrary values, which typically widens the tool's blast radius well beyond its intent. Narrow the schema with `.enum()`, `.regex()`, `.max()`, `Literal[...]`, Pydantic `Field(max_length=..., pattern=...)`, or a JSON Schema `enum` / `pattern` / `maxLength`.
Evidence
| 123 | export const applyMigrationOptionsSchema = z.object({ |
| 124 | name: z.string().min(1), |
| 125 | query: z.string().min(1), |
| 126 | }); |
| 127 | |
| 128 | export const migrationSchema = z.object({ |
Remediation (AI)
The problem is that the `query` field in `applyMigrationOptionsSchema` is an unconstrained string, allowing callers to pass arbitrary SQL queries with no length or pattern restrictions. Add constraints: `query: z.string().min(1).max(50000).regex(/^[A-Za-z0-9\s,;()\-*'"=<>!]+$/)` to limit length and restrict to safe SQL characters. This prevents excessively long or malformed SQL from being executed. Verify by testing that a query exceeding 50,000 characters is rejected and that a query with shell metacharacters is rejected.
MCP tool input schema exposes an unconstrained string/any field with a risky name (command/query/sql/code/script/url/path/expr/ eval). Any caller can pass arbitrary values, which typically widens the tool's blast radius well beyond its intent. Narrow the schema with `.enum()`, `.regex()`, `.max()`, `Literal[...]`, Pydantic `Field(max_length=..., pattern=...)`, or a JSON Schema `enum` / `pattern` / `maxLength`.
Evidence
| 206 | ], |
| 207 | "type": "string", |
| 208 | }, |
| 209 | "path": { |
| 210 | "type": "string", |
| 211 | }, |
| 212 | }, |
| 213 | "required": [ |
| 214 | "method", |
Remediation (AI)
The problem is that the `path` field in the test schema is an unconstrained string, allowing callers to pass arbitrary paths with no length or pattern restrictions, potentially enabling traversal or injection attacks. Add constraints: `path: z.string().max(500).regex(/^\/[a-zA-Z0-9_\-/.?=&*,]+$/)` to limit length and restrict to the characters PostgREST paths actually need (`*` and `,` must stay allowed for `select=*` and column lists). This prevents malicious paths from being constructed. Verify by testing that a path exceeding 500 characters is rejected and that a path containing disallowed characters is rejected.
MCP tool input schema exposes an unconstrained string/any field with a risky name (command/query/sql/code/script/url/path/expr/ eval). Any caller can pass arbitrary values, which typically widens the tool's blast radius well beyond its intent. Narrow the schema with `.enum()`, `.regex()`, `.max()`, `Literal[...]`, Pydantic `Field(max_length=..., pattern=...)`, or a JSON Schema `enum` / `pattern` / `maxLength`.
Evidence
| 100 | description: |
| 101 | 'Converts SQL query to a PostgREST API request (method, path)', |
| 102 | parameters: z.object({ |
| 103 | sql: z.string(), |
| 104 | }), |
| 105 | outputSchema: z.object({ |
| 106 | method: z.string(), |
Remediation (AI)
The problem is that the `sql` field in `sqlToRest` is an unconstrained string, allowing callers to pass arbitrary SQL queries with no length or pattern restrictions. Add constraints: `sql: z.string().min(1).max(10000).regex(/^(SELECT|INSERT|UPDATE|DELETE|WITH)[A-Za-z0-9\s,;()\-*'"=<>!]+$/i)` to limit length and restrict to allowed SQL keywords. This prevents excessively long or malformed SQL from being executed. Verify by testing that a query exceeding 10,000 characters is rejected and that a query starting with DROP is rejected.
MCP tool input schema exposes an unconstrained string/any field with a risky name (command/query/sql/code/script/url/path/expr/ eval). Any caller can pass arbitrary values, which typically widens the tool's blast radius well beyond its intent. Narrow the schema with `.enum()`, `.regex()`, `.max()`, `Literal[...]`, Pydantic `Field(max_length=..., pattern=...)`, or a JSON Schema `enum` / `pattern` / `maxLength`.
Evidence
| 100 | const executeSqlInputSchema = z.object({ |
| 101 | project_id: z.string(), |
| 102 | query: z.string().describe('The SQL query to execute'), |
| 103 | }); |
| 104 | |
| 105 | const executeSqlOutputSchema = z.object({ |
Remediation (AI)
The problem is that the `query` field in `executeSqlInputSchema` is an unconstrained string, allowing callers to pass arbitrary SQL queries with no length or pattern restrictions. Add constraints: `query: z.string().min(1).max(50000).regex(/^[A-Za-z0-9\s,;()\-*'"=<>!]+$/)` to limit length and restrict to safe SQL characters. This prevents excessively long or malformed SQL from being executed. Verify by testing that a query exceeding 50,000 characters is rejected and that a query with shell metacharacters is rejected.
MCP tool input schema exposes an unconstrained string/any field with a risky name (command/query/sql/code/script/url/path/expr/ eval). Any caller can pass arbitrary values, which typically widens the tool's blast radius well beyond its intent. Narrow the schema with `.enum()`, `.regex()`, `.max()`, `Literal[...]`, Pydantic `Field(max_length=..., pattern=...)`, or a JSON Schema `enum` / `pattern` / `maxLength`.
Evidence
| 89 | search: tool({ |
| 90 | description: 'Search text', |
| 91 | parameters: z.object({ |
| 92 | query: z.string(), |
| 93 | caseSensitive: z.boolean().default(false), |
| 94 | }), |
| 95 | outputSchema: z.object({ |
Remediation (AI)
The problem is that the `query` field in the search tool is an unconstrained string, allowing callers to pass arbitrary search queries with no length or pattern restrictions. Add constraints: `query: z.string().min(1).max(1000)` to limit length and prevent excessively long or resource-intensive searches. This prevents denial-of-service attacks via large search queries. Verify by testing that a query exceeding 1,000 characters is rejected and that search performance remains acceptable.
MCP tool input schema exposes an unconstrained string/any field with a risky name (command/query/sql/code/script/url/path/expr/ eval). Any caller can pass arbitrary values, which typically widens the tool's blast radius well beyond its intent. Narrow the schema with `.enum()`, `.regex()`, `.max()`, `Literal[...]`, Pydantic `Field(max_length=..., pattern=...)`, or a JSON Schema `enum` / `pattern` / `maxLength`.
Evidence
| 91 | const applyMigrationInputSchema = z.object({ |
| 92 | project_id: z.string(), |
| 93 | name: z.string().describe('The name of the migration in snake_case'), |
| 94 | query: z.string().describe('The SQL query to apply'), |
| 95 | }); |
| 96 | |
| 97 | const applyMigrationOutputSchema = z.object({ |
Remediation (AI)
The problem is that the `query` field in `applyMigrationInputSchema` is an unconstrained string, allowing callers to pass arbitrary SQL queries with no length or pattern restrictions. Add constraints: `query: z.string().min(1).max(50000).regex(/^[A-Za-z0-9\s,;()\-*'"=<>!]+$/)` to limit length and restrict to safe SQL characters. This prevents excessively long or malformed SQL from being executed. Verify by testing that a query exceeding 50,000 characters is rejected and that a query with shell metacharacters is rejected.
MCP tool input schema exposes an unconstrained string/any field with a risky name (command/query/sql/code/script/url/path/expr/ eval). Any caller can pass arbitrary values, which typically widens the tool's blast radius well beyond its intent. Narrow the schema with `.enum()`, `.regex()`, `.max()`, `Literal[...]`, Pydantic `Field(max_length=..., pattern=...)`, or a JSON Schema `enum` / `pattern` / `maxLength`.
Evidence
| 116 | }); |
| 117 | |
| 118 | export const executeSqlOptionsSchema = z.object({ |
| 119 | query: z.string().min(1), |
| 120 | parameters: z.array(z.unknown()).optional(), |
| 121 | read_only: z.boolean().optional(), |
| 122 | }); |
Remediation (AI)
The problem is that the `query` field in `executeSqlOptionsSchema` is an unconstrained string, allowing callers to pass arbitrary SQL queries with no length or pattern restrictions. Add constraints: `query: z.string().min(1).max(50000).regex(/^[A-Za-z0-9\s,;()\-*'"=<>!]+$/)` to limit length and restrict to safe SQL characters. This prevents excessively long or malformed SQL from being executed. Verify by testing that a query exceeding 50,000 characters is rejected and that a query with shell metacharacters is rejected.
MCP tool input schema exposes an unconstrained string/any field with a risky name (command/query/sql/code/script/url/path/expr/ eval). Any caller can pass arbitrary values, which typically widens the tool's blast radius well beyond its intent. Narrow the schema with `.enum()`, `.regex()`, `.max()`, `Literal[...]`, Pydantic `Field(max_length=..., pattern=...)`, or a JSON Schema `enum` / `pattern` / `maxLength`.
Evidence
| 104 | }), |
| 105 | outputSchema: z.object({ |
| 106 | method: z.string(), |
| 107 | path: z.string(), |
| 108 | }), |
| 109 | execute: async ({ sql }) => { |
| 110 | const statement = await processSql(sql); |
Remediation (AI)
The problem is that the `sql` input passed to `processSql` is an unconstrained string, allowing arbitrary SQL of any length to be translated. Add constraints to the input schema: `sql: z.string().min(1).max(10000)` to bound length, optionally with a pattern restricting statements to expected keywords. This prevents excessively long or malformed SQL from being processed. Verify by testing that a query exceeding 10,000 characters is rejected and that the tool output respects the constraints.
MCP tool input schema exposes an unconstrained string/any field with a risky name (command/query/sql/code/script/url/path/expr/ eval). Any caller can pass arbitrary values, which typically widens the tool's blast radius well beyond its intent. Narrow the schema with `.enum()`, `.regex()`, `.max()`, `Literal[...]`, Pydantic `Field(max_length=..., pattern=...)`, or a JSON Schema `enum` / `pattern` / `maxLength`.
Evidence
| 66 | description: 'Performs an HTTP request against the PostgREST API', |
| 67 | parameters: z.object({ |
| 68 | method: z.enum(['GET', 'POST', 'PUT', 'PATCH', 'DELETE']), |
| 69 | path: z.string(), |
| 70 | body: z |
| 71 | .union([ |
| 72 | z.record(z.string(), z.unknown()), |
Remediation (AI)
The problem is that the `path` field in `postgrestRequest` is an unconstrained string, allowing callers to pass arbitrary paths with no length or pattern restrictions, potentially enabling traversal or injection attacks. Add constraints: `path: z.string().max(500).regex(/^\/[a-zA-Z0-9_\-/.?=&*,]+$/)` to limit length and restrict to the characters PostgREST paths actually need (`*` and `,` must stay allowed for `select=*` and column lists). This prevents malicious paths from being constructed. Verify by testing that a path exceeding 500 characters is rejected and that a path containing disallowed characters is rejected.
MCP tool input schema exposes an unconstrained string/any field with a risky name (command/query/sql/code/script/url/path/expr/ eval). Any caller can pass arbitrary values, which typically widens the tool's blast radius well beyond its intent. Narrow the schema with `.enum()`, `.regex()`, `.max()`, `Literal[...]`, Pydantic `Field(max_length=..., pattern=...)`, or a JSON Schema `enum` / `pattern` / `maxLength`.
Evidence
| 93 | caseSensitive: z.boolean().default(false), |
| 94 | }), |
| 95 | outputSchema: z.object({ |
| 96 | query: z.string(), |
| 97 | caseSensitive: z.boolean(), |
| 98 | }), |
| 99 | execute: async (args) => { |
Remediation (AI)
The problem is that the search tool's `query` string, echoed unconstrained in the output schema, is never bounded at the input. Add constraints to the input schema: `query: z.string().min(1).max(1000)` to limit length and prevent excessively long or resource-intensive queries; the echoed output then inherits the same bound. This prevents denial-of-service via oversized queries. Verify by testing that a query exceeding 1,000 characters is rejected and that the tool output respects the constraint.
MCP tool input schema exposes an unconstrained string/any field with a risky name (command/query/sql/code/script/url/path/expr/ eval). Any caller can pass arbitrary values, which typically widens the tool's blast radius well beyond its intent. Narrow the schema with `.enum()`, `.regex()`, `.max()`, `Literal[...]`, Pydantic `Field(max_length=..., pattern=...)`, or a JSON Schema `enum` / `pattern` / `maxLength`.
Evidence
| 13 | }); |
| 14 | |
| 15 | const getProjectUrlOutputSchema = z.object({ |
| 16 | url: z.string(), |
| 17 | }); |
| 18 | |
| 19 | const getPublishableKeysInputSchema = z.object({ |
Remediation (AI)
The problem is that the `url` field in `getProjectUrlOutputSchema` is an unconstrained string, allowing the tool to return arbitrary URLs with no validation or pattern restrictions, potentially enabling open redirect attacks. Add constraints: `url: z.string().url().regex(/^https:\/\/([a-z0-9-]+\.)?supabase\.(co|dev)(\/|$)/)` to restrict to HTTPS and Supabase domains; the trailing `(\/|$)` anchor matters, since without it a hostname like `supabase.co.evil.com` would pass. Verify by testing that a non-HTTPS URL is rejected and that a URL from a non-Supabase domain is rejected.
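Hostname parsing is often more robust than a regex for URL allowlists; a standalone sketch (the allowed suffixes are assumptions for illustration):

```typescript
// Validate a project URL by parsing it and checking protocol and
// hostname suffix exactly, rather than pattern-matching the raw string.
function isAllowedProjectUrl(raw: string): boolean {
  try {
    const url = new URL(raw);
    if (url.protocol !== 'https:') return false;
    return ['supabase.co', 'supabase.dev'].some(
      (domain) => url.hostname === domain || url.hostname.endsWith(`.${domain}`)
    );
  } catch {
    return false; // not a parseable URL at all
  }
}
```

Exact-match-or-dotted-suffix comparison on the parsed hostname cannot be fooled by lookalike hosts that merely contain the allowed domain as a prefix.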
MCP tool input schema exposes an unconstrained string/any field with a risky name (command/query/sql/code/script/url/path/expr/ eval). Any caller can pass arbitrary values, which typically widens the tool's blast radius well beyond its intent. Narrow the schema with `.enum()`, `.regex()`, `.max()`, `Literal[...]`, Pydantic `Field(max_length=..., pattern=...)`, or a JSON Schema `enum` / `pattern` / `maxLength`.
Evidence
| 9 | import { z } from 'zod/v4'; |
| 10 | |
| 11 | export const graphqlRequestSchema = z.object({ |
| 12 | query: z.string(), |
| 13 | variables: z.record(z.string(), z.unknown()).optional(), |
| 14 | }); |
Remediation (AI)
The problem is that the `query` field in `graphqlRequestSchema` is an unconstrained string, allowing callers to pass arbitrary GraphQL queries with no length or pattern restrictions, potentially enabling denial-of-service attacks. Add constraints: `query: z.string().min(1).max(10000)` to limit length and prevent excessively long or resource-intensive queries. This prevents denial-of-service attacks via large GraphQL queries. Verify by testing that a query exceeding 10,000 characters is rejected and that query performance remains acceptable.
MCP manifest declares tools but no authentication field is present (none of: auth, authorization, bearer, oauth, mtls, apiKey, api_key, basic, token, authToken). Absence is a weak signal: confirm whether the server relies on network-layer or host-level auth, or declare the real mechanism explicitly so reviewers can audit it.
Evidence
| 1 | # Contributing |
| 2 | |
| 3 | ## Development setup |
| 4 | |
| 5 | This repo uses pnpm for package management and the active LTS version of Node.js. Node.js and pnpm versions are managed via [mise](https://mise.jdx.dev/) (see `mise.toml`). |
| 6 | |
| 7 | > **Why mise?** We use mise to ensure all contributors use consistent versions of tools, reducing instances where code behaves differently on different machines. This is useful not only for managing Node.js and pnpm versions, but also binaries published outside of the npm ecosystem such |
Remediation (AI)
The problem is that the MCP manifest in CONTRIBUTING.md does not declare an authentication mechanism, making it unclear how the server validates caller identity and authorization. Add an explicit `authentication` section to the README or manifest documenting the auth mechanism: 'Authentication: The server uses PostgREST JWT tokens passed via the Authorization header. Callers must provide a valid JWT signed by the PostgREST instance.' This makes the security model transparent to reviewers. Verify by checking that the README includes an authentication section and that the security model is clearly documented.
MCP manifest declares tools but no authentication field is present (none of: auth, authorization, bearer, oauth, mtls, apiKey, api_key, basic, token, authToken). Absence is a weak signal: confirm whether the server relies on network-layer or host-level auth, or declare the real mechanism explicitly so reviewers can audit it.
Evidence
| 1 | # @supabase/mcp-server-postgrest |
| 2 | |
| 3 | This is an MCP server for [PostgREST](https://postgrest.org). It allows LLMs to perform CRUD operations on your app via REST API. |
| 4 | |
| 5 | This server works with Supabase projects (which run PostgREST) and any standalone PostgREST server. |
| 6 | |
| 7 | ## Tools |
| 8 | |
| 9 | The following tools are available: |
| 10 | |
| 11 | ### `postgrestRequest` |
| 12 | |
| 13 | Performs an HTTP request to a [configured](#usage) PostgREST server. It accepts the following arguments: |
| 14 | |
| 15 | - `method`: The HTTP method to use (eg. `GET`, `POST`, `PA |
Remediation (AI)
The problem is that the MCP manifest in packages/mcp-server-postgrest/README.md does not declare an authentication mechanism, making it unclear how the server validates caller identity and authorization. Add an explicit `authentication` section to the README documenting the auth mechanism: 'Authentication: The server uses PostgREST API keys and JWT tokens. Callers must provide a valid API key or JWT token via the Authorization header.' This makes the security model transparent to reviewers. Verify by checking that the README includes an authentication section and that the security model is clearly documented.
MCP manifest declares tools but no authentication field is present (none of: auth, authorization, bearer, oauth, mtls, apiKey, api_key, basic, token, authToken). Absence is a weak signal: confirm whether the server relies on network-layer or host-level auth, or declare the real mechanism explicitly so reviewers can audit it.
Evidence
| 1 | # Supabase MCP Server |
| 2 | |
| 3 | [](https://registry.modelcontextprotocol.io/?q=com.supabase%2Fmcp) |
| 4 | |
| 5 | > Connect your Supabase projects to Cursor, Claude, Windsurf, and other AI assistants. |
| 6 | |
…and emits no audit event anywhere in the file. Without an audit event, an investigator cannot answer "who deleted record X on day Y?"; the irreversible action leaves no trail. Closes the OWASP MCP Top 10:2025 MCP08 (Lack of Audit and Telemetry) gap. Distinct from MCP-201 (no confirmation) and MCP-283…
Evidence
| 1 | import { Client } from '@modelcontextprotocol/sdk/client/index.js'; |
| 2 | import { AuthClient } from '@supabase/auth-js'; |
| 3 | import { StreamTransport } from '@supabase/mcp-utils'; |
| 4 | import { describe, expect, test } from 'vitest'; |
| 5 | import { createPostgrestMcpServer } from './server.js'; |
| 6 | |
| 7 | // Requires local Supabase stack running |
| 8 | const API_URL = 'http://127.0.0.1:54321'; |
| 9 | const REST_API_URL = `${API_URL}/rest/v1`; |
| 10 | const AUTH_API_URL = `${API_URL}/auth/v1`; |
| 11 | |
| 12 | /** |
| 13 | * Sets up a client and server for testing. |
| 14 | */ |
| 15 | a |
Remediation (AI)
The problem is that the `postgrestRequest` tool performs destructive operations (DELETE, PUT, PATCH) but emits no audit event, leaving no trail of who performed the action or when. Add an audit log call after each destructive operation: `await auditLog({ action: 'postgrest_request', method, path, caller: context.userId, timestamp: new Date(), status: 'success' })` using a centralized audit logging function. This enables investigators to answer 'who deleted record X on day Y?'. Verify by testing that a DELETE request generates an audit log entry with the caller's ID, method, path, and timestamp.
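A minimal in-memory sketch of the audit call; field names and the sink are assumptions, and production code would write to durable, append-only storage:

```typescript
// Hypothetical audit-event record for destructive tool calls.
interface AuditEvent {
  action: string;
  method: string;
  path: string;
  caller: string;
  status: 'success' | 'failure';
  timestamp: string;
}

const auditTrail: AuditEvent[] = []; // stand-in for a durable sink

function auditLog(event: Omit<AuditEvent, 'timestamp'>): void {
  auditTrail.push({ ...event, timestamp: new Date().toISOString() });
}
```

Recording caller, method, path, and outcome on every mutating call is what lets an investigator reconstruct who deleted which record and when.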
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 552 | let existingEdgeFunction: EdgeFunction | undefined; |
| 553 | try { |
| 554 | existingEdgeFunction = await functions.getEdgeFunction(projectId, name); |
| 555 | } catch (error) {} |
| 556 | |
| 557 | const import_map_file = inputFiles.find((file) => |
| 558 | ['deno.json', 'import_map.json'].includes(file.name) |
Remediation (AI)
The problem is that the `catch (error) {}` block silently swallows the exception with no log, metric, or trace, blinding incident response and hiding real failures. Replace with: `catch (error) { console.error('Failed to get edge function:', projectId, name, error); }` or use a centralized error logging function. This ensures errors are visible for debugging and incident response. Verify by testing that when `getEdgeFunction` fails, an error message is logged to stderr or the logging system.
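The pattern, sketched synchronously for clarity (the real lookup is async, but the catch-and-log shape is the same; the names are illustrative):

```typescript
// Wrap a lookup so failures are logged and surfaced as undefined
// instead of being silently discarded.
function lookupSafely<T>(
  lookup: () => T,
  log: (msg: string, err: unknown) => void = console.error
): T | undefined {
  try {
    return lookup();
  } catch (error) {
    log('Failed to get edge function:', error); // never swallow silently
    return undefined;
  }
}
```

The injectable `log` parameter also makes the failure path testable, which a bare `catch {}` never is.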