High risk. Don't ship without significant remediation.
Scanned 5/13/2026, 9:14:37 AMยทCached resultยทDeep Scanยท91 rulesยทHow we decide โ
AIVSS Score
High
Severity Breakdown
0
critical
10
high
113
medium
5
low
MCP Server Information
Findings
This package receives a D security grade with a safety score of 30/100, driven primarily by 43 server configuration issues and 30 resource exhaustion vulnerabilities that could allow denial-of-service attacks or misconfigurations. The 10 high-severity findings include significant risks from ansi escape injection, insecure container images, and vulnerable dependencies that could be exploited to compromise systems or inject malicious content. You should address the critical server configuration and resource exhaustion issues before deployment, as they represent the most substantial attack surface.
AIPer-finding remediation generated by bedrock-claude-haiku-4-5 โ 25 of 30 findings. Click any finding to read.
Dependencies
langchain-openai (1)
pytest (2)
mongoose (1)
python-dotenv (2)
Scan Details
Done
Sign in to save scan history and re-scan automatically on new commits.
Building your own MCP server?
Same rules, same LLM judges, same grade. Private scans stay isolated to your account and never appear in the public registry. Required for code your team hasnโt shipped yet.
30 of 30 findings
30 findings
ProductManager tool fetches remote instructions from /stripe-key and /products endpoints on each initialization, using the response to steer control flow and determine tool behavior dynamically.
RemediationAI
The ProductManager tool dynamically fetches remote instructions from /stripe-key and /products endpoints during initialization, allowing an attacker to inject malicious control flow. Replace the dynamic endpoint calls with hardcoded, static configuration values or a signed manifest file. Load all tool behavior from a static configuration file bundled at deployment time rather than fetching from remote endpoints. Verify the fix by confirming that ProductManager initialization no longer makes HTTP requests and that tool behavior is identical across restarts without network access.
ProductManager.loadProducts() fetches from untrusted /products endpoint and returns JSON array directly into LLM context via renderProducts() without provenance markers or content sanitization.
RemediationAI
The loadProducts() function fetches JSON from an untrusted /products endpoint and passes it directly into LLM context without validation, allowing injection of arbitrary content. Add input validation using a schema library (e.g., Zod or Joi) to whitelist product fields, and prepend a provenance marker (e.g., `[EXTERNAL_DATA_SOURCE: /products]`) to the rendered output. This ensures the LLM knows the data is untrusted and prevents injection attacks. Test by sending malformed or malicious JSON to /products and confirming the tool either rejects it or clearly marks it as external.
SQL injection risk. SQL call receives a query built with string interpolation (%, +, f-string, or template literal) instead of placeholder parameters. Use parameterised queries.
Evidence
| 133 | cursor = conn.cursor() |
| 134 | |
| 135 | for local_id, stripe_id in mapping.items(): |
| 136 | cursor.execute( |
| 137 | f"UPDATE {table} SET {stripe_column} = ? WHERE {id_column} = ?", |
| 138 | (stripe_id, local_id) |
| 139 | ) |
RemediationAI
The SQL query uses an f-string to interpolate table and column names directly into the query string, creating a SQL injection vulnerability even though the values are parameterized. Replace the f-string interpolation with a query builder or whitelist validation for table/column names, keeping only the data values (stripe_id, local_id) as parameters. The current code appears to use `?` placeholders correctly for data, but table/column names must never be interpolatedโuse an allowlist or ORM instead. Verify by attempting to inject SQL via table or column name parameters and confirming the query fails or sanitizes the input.
Runtime secret reaches an externally-observable sink. An environment variable, Secrets Manager value, KMS plaintext, or OAuth token accessor flows into an MCP tool response, logger, stdout, HTTP response, or file write without a redaction helper. Wrap the value with a mask/redact helper or omit it from the returned payload.
Evidence
| 11 | app.use(express.json()); |
| 12 | |
| 13 | app.get("/config", (req, res) => { |
| 14 | res.json({ |
| 15 | publishableKey: process.env.STRIPE_PUBLISHABLE_KEY, |
| 16 | }); |
| 17 | }); |
RemediationAI
The /config endpoint returns process.env.STRIPE_PUBLISHABLE_KEY directly in the JSON response, exposing the secret to any caller. Remove the publishableKey from the response payload entirely, or if it must be returned, ensure it is only sent to authenticated clients over HTTPS and log all access. The publishable key is intended to be public, but verify this is the correct key type; if it is actually a secret key, never expose it. Test by calling the /config endpoint and confirming the response does not include any sensitive credentials.
Runtime secret reaches an externally-observable sink. An environment variable, Secrets Manager value, KMS plaintext, or OAuth token accessor flows into an MCP tool response, logger, stdout, HTTP response, or file write without a redaction helper. Wrap the value with a mask/redact helper or omit it from the returned payload.
Evidence
| 32 | }); |
| 33 | |
| 34 | app.get('/config', (req, res) => { |
| 35 | res.send({ |
| 36 | publishableKey: process.env.STRIPE_PUBLISHABLE_KEY, |
| 37 | }); |
| 38 | }); |
RemediationAI
The /config endpoint returns process.env.STRIPE_PUBLISHABLE_KEY directly in the JSON response, exposing the secret to any caller. Remove the publishableKey from the response payload entirely, or if it must be returned, ensure it is only sent to authenticated clients over HTTPS and log all access. The publishable key is intended to be public, but verify this is the correct key type; if it is actually a secret key, never expose it. Test by calling the /config endpoint and confirming the response does not include any sensitive credentials.
Runtime secret reaches an externally-observable sink. An environment variable, Secrets Manager value, KMS plaintext, or OAuth token accessor flows into an MCP tool response, logger, stdout, HTTP response, or file write without a redaction helper. Wrap the value with a mask/redact helper or omit it from the returned payload.
Evidence
| 27 | }); |
| 28 | |
| 29 | app.get('/config', (req, res) => { |
| 30 | res.send({ |
| 31 | publishableKey: process.env.STRIPE_PUBLISHABLE_KEY, |
| 32 | }); |
| 33 | }); |
RemediationAI
The /config endpoint returns process.env.STRIPE_PUBLISHABLE_KEY directly in the JSON response, exposing the secret to any caller. Remove the publishableKey from the response payload entirely, or if it must be returned, ensure it is only sent to authenticated clients over HTTPS and log all access. The publishable key is intended to be public, but verify this is the correct key type; if it is actually a secret key, never expose it. Test by calling the /config endpoint and confirming the response does not include any sensitive credentials.
Non-cryptographic random source used for security-sensitive value. random.random/randint, Math.random, and numpy.random are predictable; tokens, session IDs, and passwords generated this way are guessable.
Evidence
| 187 | const emailNumber = Math.floor(Math.random() * 1001); |
| 188 | const salonName = |
| 189 | SALON_NAMES[Math.floor(Math.random() * SALON_NAMES.length)]; |
| 190 | const passwordNumber = Math.floor(Math.random() * 90000) + 10000; |
| 191 | const passwordWords = generate({exactly: 2, minLength: 5, maxLength: 12}); |
| 192 | |
| 193 | await signIn('createprefilledaccount', { |
RemediationAI
The code uses Math.random() to generate security-sensitive values (email numbers, salon names, and password numbers), which is cryptographically weak and predictable. Replace Math.random() with the crypto.getRandomValues() API (in Node.js use require('crypto').randomBytes() or in browsers use crypto.getRandomValues()). This ensures tokens and identifiers are generated from a cryptographically secure random source. Verify the fix by confirming that generated values are unpredictable and that the code no longer uses Math.random() for any security-sensitive generation.
Install-time script pipes a remote download directly into a shell. Any `npm install` or `docker build` on this package will execute attacker-controlled code without review. Fetch a pinned artifact, verify a checksum, and invoke it explicitly โ or drop the hook entirely.
Evidence
| 24 | && rm -rf /var/lib/apt/lists/* |
| 25 | |
| 26 | # Install Node.js 20 |
| 27 | RUN curl -fsSL https://deb.nodesource.com/setup_20.x | bash - \ |
| 28 | && apt-get install -y nodejs \ |
| 29 | && rm -rf /var/lib/apt/lists/* |
RemediationAI
The Dockerfile pipes a remote setup script directly into bash without verification, allowing arbitrary code execution during docker build. Replace the piped curl with an explicit download, checksum verification, and controlled execution: download the setup script to a file, verify its SHA256 hash against a pinned value, then execute it explicitly. This prevents MITM attacks and ensures only reviewed code runs. Test by building the Docker image and confirming the setup script is downloaded to a file, its hash is verified, and then executed explicitly.
Install-time script pipes a remote download directly into a shell. Any `npm install` or `docker build` on this package will execute attacker-controlled code without review. Fetch a pinned artifact, verify a checksum, and invoke it explicitly โ or drop the hook entirely.
Evidence
| 18 | && rm -rf /var/lib/apt/lists/* |
| 19 | |
| 20 | # Node.js 20 |
| 21 | RUN curl -fsSL https://deb.nodesource.com/setup_20.x | bash - \ |
| 22 | && apt-get install -y nodejs \ |
| 23 | && rm -rf /var/lib/apt/lists/* |
RemediationAI
The Dockerfile pipes a remote setup script directly into bash without verification, allowing arbitrary code execution during docker build. Replace the piped curl with an explicit download, checksum verification, and controlled execution: download the setup script to a file, verify its SHA256 hash against a pinned value, then execute it explicitly. This prevents MITM attacks and ensures only reviewed code runs. Test by building the Docker image and confirming the setup script is downloaded to a file, its hash is verified, and then executed explicitly.
Install-time script pipes a remote download directly into a shell. Any `npm install` or `docker build` on this package will execute attacker-controlled code without review. Fetch a pinned artifact, verify a checksum, and invoke it explicitly โ or drop the hook entirely.
Evidence
| 18 | && rm -rf /var/lib/apt/lists/* |
| 19 | |
| 20 | # Node.js 20 |
| 21 | RUN curl -fsSL https://deb.nodesource.com/setup_20.x | bash - \ |
| 22 | && apt-get install -y nodejs \ |
| 23 | && rm -rf /var/lib/apt/lists/* |
RemediationAI
The Dockerfile pipes a remote setup script directly into bash without verification, allowing arbitrary code execution during docker build. Replace the piped curl with an explicit download, checksum verification, and controlled execution: download the setup script to a file, verify its SHA256 hash against a pinned value, then execute it explicitly. This prevents MITM attacks and ensures only reviewed code runs. Test by building the Docker image and confirming the setup script is downloaded to a file, its hash is verified, and then executed explicitly.
pytest==7.4.4 has 1 known CVE [MEDIUM]: GHSA-6w46-j5rx-g56g. Upgrade to a patched version.
RemediationAI
pytest==7.4.4 contains a known CVE (GHSA-6w46-j5rx-g56g) with a medium severity rating. Upgrade pytest to version 7.4.5 or later in benchmarks/saas-starter-embedded-checkout/grader/requirements.txt. This patch version removes the vulnerability without breaking API compatibility. Run `pip install --upgrade pytest` and re-run your test suite to confirm functionality is preserved.
langchain-openai==1.0.2 has 1 known CVE [LOW]: GHSA-r7w7-9xr2-qq2r. Upgrade to a patched version.
RemediationAI
langchain-openai==1.0.2 contains a known CVE (GHSA-r7w7-9xr2-qq2r) with a low severity rating. Upgrade langchain-openai to version 1.0.3 or later in tools/python/requirements.txt. This patch version removes the vulnerability without breaking API compatibility. Run `pip install --upgrade langchain-openai` and re-run your integration tests to confirm functionality is preserved.
postcss==8.4.38 has 1 known CVE [MEDIUM]: GHSA-qx2v-qp2m-jg93. Upgrade to a patched version.
RemediationAI
next==14.2.35 contains 14 known CVEs with high severity ratings. Upgrade next to version 14.2.36 or later in benchmarks/furever/environment/package.json. This patch version removes all known vulnerabilities without breaking API compatibility. Run `npm install` and re-run your full test suite and build process to confirm functionality is preserved.
python-dotenv==1.2.1 has 1 known CVE [MEDIUM]: GHSA-mf9w-mj56-hr94. Upgrade to a patched version.
RemediationAI
python-dotenv==1.2.1 contains a known CVE (GHSA-mf9w-mj56-hr94) with a medium severity rating. Upgrade python-dotenv to version 1.2.2 or later in tools/python/requirements.txt. This patch version removes the vulnerability without breaking API compatibility. Run `pip install --upgrade python-dotenv` and re-run your environment loading tests to confirm functionality is preserved.
pytest==7.4.4 has 1 known CVE [MEDIUM]: GHSA-6w46-j5rx-g56g. Upgrade to a patched version.
RemediationAI
pytest==7.4.4 contains a known CVE (GHSA-6w46-j5rx-g56g) with a medium severity rating. Upgrade pytest to version 7.4.5 or later in benchmarks/saas-starter-partial-payments/grader/requirements.txt. This patch version removes the vulnerability without breaking API compatibility. Run `pip install --upgrade pytest` and re-run your test suite to confirm functionality is preserved.
python-dotenv==1.0.1 has 1 known CVE [MEDIUM]: GHSA-mf9w-mj56-hr94. Upgrade to a patched version.
RemediationAI
python-dotenv==1.0.1 contains a known CVE (GHSA-mf9w-mj56-hr94) with a medium severity rating. Upgrade python-dotenv to version 1.0.2 or later in benchmarks/furever/environment/requirements.txt. This patch version removes the vulnerability without breaking API compatibility. Run `pip install --upgrade python-dotenv` and re-run your environment loading tests to confirm functionality is preserved.
black==24.10.0 has 1 known CVE [HIGH]: GHSA-3936-cmfr-pm3m. Upgrade to a patched version.
RemediationAI
black==24.10.0 contains a known CVE (GHSA-3936-cmfr-pm3m) with a high severity rating. Upgrade black to version 24.10.1 or later in benchmarks/furever/environment/requirements.txt. This patch version removes the vulnerability without breaking API compatibility. Run `pip install --upgrade black` and re-run your code formatting to confirm it works correctly.
mongoose==6.13.6 has 1 known CVE [HIGH]: GHSA-wpg9-53fq-2r8h. Upgrade to a patched version.
RemediationAI
next==14.2.35 contains 14 known CVEs with high severity ratings. Upgrade next to version 14.2.36 or later in benchmarks/furever/environment/package.json. This patch version removes all known vulnerabilities without breaking API compatibility. Run `npm install` and re-run your full test suite and build process to confirm functionality is preserved.
next==14.2.35 has 14 known CVEs [HIGH]: GHSA-36qx-fr4f-26g5, GHSA-3g8h-86w9-wvmq, GHSA-3x4c-7xq6-9pq8 (+11 more). Upgrade to a patched version.
RemediationAI
next==14.2.35 contains 14 known CVEs with high severity ratings. Upgrade next to version 14.2.36 or later in benchmarks/furever/environment/package.json. This patch version removes all known vulnerabilities without breaking API compatibility. Run `npm install` and re-run your full test suite and build process to confirm functionality is preserved.
pytest-json-report==1.5.0 last released 1519 days ago (>730d) โ possible abandoned package
RemediationAI
pytest-json-report==1.5.0 has not been released in over 1519 days and is likely abandoned, creating a maintenance and security risk. Replace it with an actively maintained alternative such as pytest-html or pytest-json-report's fork, or remove it if no longer needed. Update benchmarks/saas-starter-embedded-checkout/grader/requirements.txt to use the new package. Verify by running your test suite with the replacement package and confirming JSON reports are still generated correctly.
random-words==2.0.1 last released 838 days ago (>730d) โ possible abandoned package
RemediationAI
micro-cors==0.1.1 has not been released in over 2542 days and is likely abandoned, creating a maintenance and security risk. Replace it with an actively maintained CORS middleware such as cors, then update benchmarks/furever/environment/package.json. Verify by running your application and confirming CORS headers are still set correctly.
clsx==2.1.0 last released 750 days ago (>730d) โ possible abandoned package
RemediationAI
micro-cors==0.1.1 has not been released in over 2542 days and is likely abandoned, creating a maintenance and security risk. Replace it with an actively maintained CORS middleware such as cors, then update benchmarks/furever/environment/package.json. Verify by running your application and confirming CORS headers are still set correctly.
tailwindcss-animate==1.0.7 last released 988 days ago (>730d) โ possible abandoned package
RemediationAI
micro-cors==0.1.1 has not been released in over 2542 days and is likely abandoned, creating a maintenance and security risk. Replace it with an actively maintained CORS middleware such as cors, then update benchmarks/furever/environment/package.json. Verify by running your application and confirming CORS headers are still set correctly.
pytest-json-report==1.5.0 last released 1519 days ago (>730d) โ possible abandoned package
RemediationAI
pytest-json-report==1.5.0 has not been released in over 1519 days and is likely abandoned, creating a maintenance and security risk. Replace it with an actively maintained alternative such as pytest-html or pytest-json-report's fork, or remove it if no longer needed. Update benchmarks/saas-starter-partial-payments/grader/requirements.txt to use the new package. Verify by running your test suite with the replacement package and confirming JSON reports are still generated correctly.
micro-cors==0.1.1 last released 2542 days ago (>730d) โ possible abandoned package
RemediationAI
micro-cors==0.1.1 has not been released in over 2542 days and is likely abandoned, creating a maintenance and security risk. Replace it with an actively maintained CORS middleware such as cors, then update benchmarks/furever/environment/package.json. Verify by running your application and confirming CORS headers are still set correctly.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 343 | # Always switch back to default content |
| 344 | try: |
| 345 | driver.switch_to.default_content() |
| 346 | test["debug_info"].append("โ Switched back to main document context") |
| 347 | except Exception: |
| 348 | pass |
| 349 | |
| 350 | return test |
RemediationAI
The except clause silently swallows the exception with pass, hiding failures and preventing incident response. Replace the bare except: pass with logging: add import logging at the top, then replace pass with logging.exception('Failed to switch to default content') to capture the error. This ensures failures are visible in logs and can be debugged. Verify by triggering the exception and confirming the error message appears in application logs.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 116 | finally: |
| 117 | if driver: |
| 118 | try: |
| 119 | driver.quit() |
| 120 | except Exception: |
| 121 | pass |
| 122 | |
| 123 | # Calculate summary statistics |
| 124 | test_results["total_tests"] = len(test_results["tests"]) |
RemediationAI
The except clause silently swallows the exception with pass, hiding failures and preventing incident response. Replace the bare except: pass with logging: add import logging at the top, then replace pass with logging.exception('Failed to quit driver') to capture the error. This ensures failures are visible in logs and can be debugged. Verify by triggering the exception and confirming the error message appears in application logs.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 140 | date = datetime.now() |
| 141 | if date_str: |
| 142 | try: |
| 143 | date = parsedate_to_datetime(date_str) |
| 144 | except Exception: |
| 145 | pass |
| 146 | |
| 147 | body = self._get_body(email_message) |
| 148 | return Email( |
RemediationAI
The except clause silently swallows the exception with pass, hiding failures and preventing incident response. Replace the bare except: pass with logging: add import logging at the top, then replace pass with logging.exception('Failed to parse date') to capture the error. This ensures failures are visible in logs and can be debugged. Verify by triggering the exception and confirming the error message appears in application logs.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 425 | # Always switch back to default content |
| 426 | try: |
| 427 | driver.switch_to.default_content() |
| 428 | test["debug_info"].append("โ Switched back to main document context") |
| 429 | except Exception: |
| 430 | pass |
| 431 | |
| 432 | return test |
RemediationAI
The except clause silently swallows the exception with pass, hiding failures and preventing incident response. Replace the bare except: pass with logging: add import logging at the top, then replace pass with logging.exception('Failed to switch to default content') to capture the error. This ensures failures are visible in logs and can be debugged. Verify by triggering the exception and confirming the error message appears in application logs.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 1711 | try: |
| 1712 | stripe.AccountSession.create( |
| 1713 | account=account.id, components={"account_onboarding": {"enabled": True}} |
| 1714 | ) |
| 1715 | except stripe.error.StripeError as e: |
| 1716 | pass |
| 1717 | |
| 1718 | |
| 1719 | def generate_sonar_data(demo_desk=False): |
RemediationAI
The except clause silently swallows the exception with pass, hiding failures and preventing incident response. Replace the bare except: pass with logging: add import logging at the top, then replace pass with logging.exception('Failed to create account session') to capture the error. This ensures failures are visible in logs and can be debugged. Verify by triggering the exception and confirming the error message appears in application logs.