Use with caution. Address findings before production.
Scanned 5/12/2026, 7:21:56 PM · Cached result · Deep Scan · 91 rules
AIVSS Score: Medium
Severity Breakdown: 0 critical, 6 high, 338 medium, 22 low
MCP Server Information
Findings
This package presents significant security concerns with a C grade and safety score of 57/100, driven primarily by 289 verbose error instances that could leak sensitive information and 30 server configuration issues that may expose the system to attacks. The presence of 10 hardcoded secrets and 6 high-severity findings creates additional risk, though the 338 medium-severity issues suggest widespread quality and security gaps throughout the codebase. Installation should be reconsidered unless these vulnerabilities can be remediated or the package's functionality is non-critical to your application.
AI per-finding remediation generated by bedrock-claude-haiku-4-5 for 26 of 26 findings. Click any finding to read.
AWS API MCP File Access Restriction Bypass
Scan Details
A variable named like a secret (secret/token/apikey/password/credential/private_key/bearer) is emitted to a logger, stdout, HTTP response, MCP tool response, or file write without redaction. If the value is genuinely not sensitive, rename it; otherwise wrap with a redaction helper.
Evidence
| 434 | except client.exceptions.NotAuthorizedException as e: |
| 435 | logger.error(f'Authentication failed: {e}') |
| 436 | logger.error('Please check your Cognito credentials (client ID, username, password)') |
| 437 | logger.error( |
| 438 | 'Make sure the user exists in the Cognito User Pool and the password is correct' |
| 439 | ) |
RemediationAI
The problem is that exception details from Cognito authentication failures are logged without redaction, potentially exposing sensitive information about credentials or user data. Replace the generic `logger.error(f'Authentication failed: {e}')` with a redacted version by wrapping the exception message using a redaction helper function (e.g., `redact_sensitive_data(str(e))`) or by logging only a generic error code without the exception detail. This fix prevents credential hints and internal error messages from being exposed in logs that may be aggregated or reviewed by unauthorized parties. Verify by triggering an authentication failure and confirming the log output contains no credential names, user details, or specific error context.
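A minimal sketch of such a redaction helper; `redact_sensitive_data` and the keyword list are illustrative, not part of this codebase:

```python
import re

# Hypothetical redaction helper: masks values that follow credential-style
# keywords (password=..., token: ...) before a message reaches a log sink.
_SENSITIVE = re.compile(
    r'(?i)\b(password|secret|token|apikey|credential)\b\s*[:=]\s*\S+'
)

def redact_sensitive_data(message: str) -> str:
    """Replace anything that looks like key=value credential material."""
    return _SENSITIVE.sub(lambda m: f"{m.group(1)}=[REDACTED]", message)
```

The call site would then become `logger.error(f'Authentication failed: {redact_sensitive_data(str(e))}')`.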
TLS certificate verification is disabled on an outbound HTTP client. Any MITM in the network path can intercept and modify requests/responses; credentials, tokens, and tool output flow over a channel with no integrity guarantee. Python requests/httpx: drop `verify=False`. If the peer uses a private CA, set `verify="/path/to/ca-bundle.pem"` or configure the system trust store. Node/TS axios/fetch: drop `rejectUnauthorized: false` from the agent/`httpsAgent` options; the same private-CA approach applies.
Evidence
| 61 | tls_context.verify_mode = ssl.CERT_REQUIRED |
| 62 | else: |
| 63 | tls_context.check_hostname = False |
| 64 | tls_context.verify_mode = ssl.CERT_NONE |
| 65 | if tls_cert_path and tls_key_path: |
| 66 | tls_context.load_cert_chain(tls_cert_path, tls_key_path) |
RemediationAI
The problem is that TLS certificate verification is disabled by setting `ssl.CERT_NONE` and `check_hostname = False`, allowing man-in-the-middle attacks to intercept and modify all traffic including credentials and tool responses. Remove the `else` branch that disables verification, or if a private CA is required, load the CA certificate bundle using `tls_context.load_verify_locations('/path/to/ca-bundle.pem')` instead of disabling verification entirely. This fix ensures all outbound connections validate the server's certificate against a trusted CA, preventing MITM interception. Verify by attempting a connection with an invalid certificate and confirming it is rejected; then test with the correct CA bundle to confirm legitimate connections succeed.
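A sketch of the fixed setup using the stdlib `ssl` module shown in the evidence: verification stays on, and a private CA is trusted explicitly rather than worked around with `CERT_NONE`. The bundle path is illustrative.

```python
import ssl

def make_tls_context(ca_bundle_path=None):
    """Build a TLS context that always verifies the peer certificate."""
    # create_default_context() sets CERT_REQUIRED and check_hostname=True.
    ctx = ssl.create_default_context()
    if ca_bundle_path:
        # Trust a private CA explicitly instead of disabling verification.
        ctx.load_verify_locations(cafile=ca_bundle_path)
    return ctx
```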
MCP server uses OAuth/JWT authentication but does not expose a `/.well-known/oauth-protected-resource` (PRM) route. The June 2025 MCP authorization spec made PRM mandatory; clients discover the authorization server via this endpoint. Add a route handler that returns a JSON object with at minimum `resource` (this server's identifier) and `authorization_servers` (the list of issuer URLs you accept).
Evidence
| 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. |
| 2 | # |
| 3 | # Licensed under the Apache License, Version 2.0 (the "License"); |
| 4 | # you may not use this file except in compliance with the License. |
| 5 | # You may obtain a copy of the License at |
| 6 | # |
| 7 | # http://www.apache.org/licenses/LICENSE-2.0 |
| 8 | # |
| 9 | # Unless required by applicable law or agreed to in writing, software |
| 10 | # distributed under the License is distributed on an "AS IS" BASIS, |
| 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express |
RemediationAI
The problem is that the OAuth/JWT-protected MCP server does not expose the mandatory `/.well-known/oauth-protected-resource` endpoint required by the June 2025 MCP authorization spec, preventing clients from discovering the authorization server configuration. Add a route handler (e.g., using Flask `@app.route('/.well-known/oauth-protected-resource')` or FastAPI `@app.get('/.well-known/oauth-protected-resource')`) that returns a JSON object with at minimum `{"resource": "<server-identifier>", "authorization_servers": ["<issuer-url-1>", "<issuer-url-2>"]}`. This fix enables compliant MCP clients to automatically discover and validate the authorization configuration, closing the spec compliance gap. Verify by making a GET request to `/.well-known/oauth-protected-resource` and confirming it returns valid JSON with the required fields.
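A framework-agnostic sketch of the response body; `RESOURCE_ID` and `ISSUERS` are placeholders for this server's identifier and accepted issuer URLs, and the function would be wired to whatever router the server already uses:

```python
import json

# Placeholder deployment values -- substitute the real server identifier
# and the issuer URLs this server accepts.
RESOURCE_ID = "https://mcp.example.com"
ISSUERS = ["https://auth.example.com"]

def protected_resource_metadata() -> str:
    """JSON body for GET /.well-known/oauth-protected-resource."""
    return json.dumps({
        "resource": RESOURCE_ID,
        "authorization_servers": ISSUERS,
    })
```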
Hardcoded secret detected in source. MCP servers often proxy between the model and a third-party API, so any committed credential grants that access to anyone who can read the repo. Move to an environment variable or secret manager and rotate the leaked value.
Evidence
| 139 | ```file |
| 140 | # fictitious `.env` file with AWS temporary credentials |
| 141 | AWS_ACCESS_KEY_ID=ASIAIOSFODNN7EXAMPLE |
| 142 | AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY |
| 143 | AWS_SESSION_TOKEN=AQoEXAMPLEH4aoAH0gNCAPy...truncated...zrkuWJOgQs8IZZaIv2BXIa2R4Olgk |
| 144 | AWS_REGION=us-east-1 |
RemediationAI
The problem is that fictitious AWS credentials are hardcoded in the README.md documentation, and if any real credentials were committed this way, anyone with repository access would have full AWS account access. Replace the example credentials in the README with placeholder text like `YOUR_AWS_ACCESS_KEY_ID` and add a `.env.example` file showing the required environment variable names without values. This fix prevents accidental credential exposure and establishes a pattern where developers must explicitly load credentials from environment variables or a secrets manager at runtime. Verify by running `git log -p` or using a secret scanner tool to confirm no real AWS credentials exist in the repository history, and test that the application loads credentials from `AWS_ACCESS_KEY_ID` environment variable at startup.
Hardcoded secret detected in source. MCP servers often proxy between the model and a third-party API, so any committed credential grants that access to anyone who can read the repo. Move to an environment variable or secret manager and rotate the leaked value.
Evidence
| 134 | ```file |
| 135 | # fictitious `.env` file with AWS temporary credentials |
| 136 | AWS_ACCESS_KEY_ID=ASIAIOSFODNN7EXAMPLE |
| 137 | AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY |
| 138 | AWS_SESSION_TOKEN=AQoEXAMPLEH4aoAH0gNCAPy...truncated...zrkuWJOgQs8IZZaIv2BXIa2R4Olgk |
| 139 | ``` |
RemediationAI
The problem is that fictitious AWS credentials are hardcoded in the README.md documentation, and if any real credentials were committed this way, anyone with repository access would have full AWS account access. Replace the example credentials in the README with placeholder text like `YOUR_AWS_ACCESS_KEY_ID` and add a `.env.example` file showing the required environment variable names without values. This fix prevents accidental credential exposure and establishes a pattern where developers must explicitly load credentials from environment variables or a secrets manager at runtime. Verify by running `git log -p` or using a secret scanner tool to confirm no real AWS credentials exist in the repository history, and test that the application loads credentials from `AWS_ACCESS_KEY_ID` environment variable at startup.
Hardcoded secret detected in source. MCP servers often proxy between the model and a third-party API, so any committed credential grants that access to anyone who can read the repo. Move to an environment variable or secret manager and rotate the leaked value.
Evidence
| 323 | ```file |
| 324 | # fictitious `.env` file with AWS temporary credentials |
| 325 | AWS_ACCESS_KEY_ID=ASIAIOSFODNN7EXAMPLE # pragma: allowlist secret |
| 326 | AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY # pragma: allowlist secret |
| 327 | AWS_SESSION_TOKEN=AQoEXAMPLEH4aoAH0gNCAPy...truncated...zrkuWJOgQs8IZZaIv2BXIa2R4Olgk # pragma: allowlist secret |
| 328 | ``` |
RemediationAI
The problem is that fictitious AWS credentials are hardcoded in the README.md documentation with `# pragma: allowlist secret` comments, which suppresses secret scanning tools and creates a false sense of security if real credentials are later added. Remove the credentials and the allowlist pragma comments entirely, replacing them with placeholder text like `YOUR_AWS_ACCESS_KEY_ID`, and document that credentials must be loaded from environment variables or AWS credential files. This fix prevents the allowlist from being misused to hide real secrets and ensures secret scanning tools remain effective. Verify by removing the pragma comments and re-running your secret scanner to confirm it now flags any real credentials that might be present.
Hardcoded secret detected in source. MCP servers often proxy between the model and a third-party API, so any committed credential grants that access to anyone who can read the repo. Move to an environment variable or secret manager and rotate the leaked value.
Evidence
| 545 | ```.env |
| 546 | # contents of a .env file with fictitious AWS temporary credentials |
| 547 | AWS_ACCESS_KEY_ID=ASIAIOSFODNN7EXAMPLE |
| 548 | AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY |
| 549 | AWS_SESSION_TOKEN=AQoEXAMPLEH4aoAH0gNCAPy...truncated...zrkuWJOgQs8IZZaIv2BXIa2R4Olgk |
| 550 | ``` |
RemediationAI
The problem is that fictitious AWS credentials are hardcoded in the root README.md documentation, and if any real credentials were committed this way, anyone with repository access would have full AWS account access. Replace the example credentials with placeholder text like `YOUR_AWS_ACCESS_KEY_ID` and add a `.env.example` file showing the required environment variable names without values. This fix prevents accidental credential exposure and establishes a pattern where developers must explicitly load credentials from environment variables or a secrets manager at runtime. Verify by running `git log -p` or using a secret scanner tool to confirm no real AWS credentials exist in the repository history, and test that the application loads credentials from `AWS_ACCESS_KEY_ID` environment variable at startup.
Hardcoded secret detected in source. MCP servers often proxy between the model and a third-party API, so any committed credential grants that access to anyone who can read the repo. Move to an environment variable or secret manager and rotate the leaked value.
Evidence
| 96 | ```.env |
| 97 | # contents of a .env file with fictitious AWS temporary credentials |
| 98 | AWS_ACCESS_KEY_ID=ASIAIOSFODNN7EXAMPLE |
| 99 | AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY |
| 100 | AWS_SESSION_TOKEN=AQoEXAMPLEH4aoAH0gNCAPy...truncated...zrkuWJOgQs8IZZaIv2BXIa2R4Olgk |
| 101 | ``` |
RemediationAI
The problem is that fictitious AWS credentials are hardcoded in the docusaurus installation documentation, and if any real credentials were committed this way, anyone with repository access would have full AWS account access. Replace the example credentials with placeholder text like `YOUR_AWS_ACCESS_KEY_ID` and add a `.env.example` file showing the required environment variable names without values. This fix prevents accidental credential exposure and establishes a pattern where developers must explicitly load credentials from environment variables or a secrets manager at runtime. Verify by running `git log -p` or using a secret scanner tool to confirm no real AWS credentials exist in the repository history, and test that the application loads credentials from `AWS_ACCESS_KEY_ID` environment variable at startup.
Hardcoded secret detected in source. MCP servers often proxy between the model and a third-party API, so any committed credential grants that access to anyone who can read the repo. Move to an environment variable or secret manager and rotate the leaked value.
Evidence
| 286 | ```file |
| 287 | # fictitious `.env` file with AWS temporary credentials |
| 288 | AWS_ACCESS_KEY_ID=ASIAIOSFODNN7EXAMPLE |
| 289 | AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY |
| 290 | AWS_SESSION_TOKEN=AQoEXAMPLEH4aoAH0gNCAPy...truncated...zrkuWJOgQs8IZZaIv2BXIa2R4Olgk |
| 291 | ``` |
RemediationAI
The problem is that fictitious AWS credentials are hardcoded in the README.md documentation, and if any real credentials were committed this way, anyone with repository access would have full AWS account access. Replace the example credentials with placeholder text like `YOUR_AWS_ACCESS_KEY_ID` and add a `.env.example` file showing the required environment variable names without values. This fix prevents accidental credential exposure and establishes a pattern where developers must explicitly load credentials from environment variables or a secrets manager at runtime. Verify by running `git log -p` or using a secret scanner tool to confirm no real AWS credentials exist in the repository history, and test that the application loads credentials from `AWS_ACCESS_KEY_ID` environment variable at startup.
Hardcoded secret detected in source. MCP servers often proxy between the model and a third-party API, so any committed credential grants that access to anyone who can read the repo. Move to an environment variable or secret manager and rotate the leaked value.
Evidence
| 67 | class DynamoDBClientConfig: |
| 68 | """Configuration for DynamoDB client setup.""" |
| 69 | |
| 70 | DUMMY_ACCESS_KEY = 'AKIAIOSFODNN7EXAMPLE' # pragma: allowlist secret |
| 71 | DUMMY_SECRET_KEY = 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' # pragma: allowlist secret |
| 72 | DEFAULT_REGION = 'us-east-1' |
RemediationAI
The problem is that hardcoded AWS access keys and secret keys are defined as class constants in `model_validation_utils.py`, even though they are marked as dummy values with a pragma allowlist comment. If real credentials are later added using the same pattern, the allowlist will suppress detection. Replace the hardcoded constants with environment variable lookups (e.g., `os.getenv('DUMMY_ACCESS_KEY', 'AKIAIOSFODNN7EXAMPLE')`) and remove the pragma allowlist comment to keep secret scanning active. This fix ensures that if real credentials are accidentally added, they will be detected by scanning tools. Verify by removing the pragma comment and re-running your secret scanner to confirm it flags the dummy credentials, then add a test that confirms the code reads from environment variables when present.
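A sketch of that remediation: the dummy values are read from the environment, with the documented AWS example keys as local-only fallbacks, so the pragma comments can go and the secret scanner stays free to flag anything that replaces them.

```python
import os

# Fallbacks are AWS's published example credentials (documentation-only
# values); real credentials must come from the environment or a secret
# manager, never from source.
DUMMY_ACCESS_KEY = os.getenv('DUMMY_ACCESS_KEY', 'AKIAIOSFODNN7EXAMPLE')
DUMMY_SECRET_KEY = os.getenv(
    'DUMMY_SECRET_KEY', 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
)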
Hardcoded secret detected in source. MCP servers often proxy between the model and a third-party API, so any committed credential grants that access to anyone who can read the repo. Move to an environment variable or secret manager and rotate the leaked value.
Evidence
| 96 | ```file |
| 97 | # fictitious `.env` file with AWS temporary credentials |
| 98 | AWS_ACCESS_KEY_ID=ASIAIOSFODNN7EXAMPLE |
| 99 | AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY |
| 100 | AWS_SESSION_TOKEN=AQoEXAMPLEH4aoAH0gNCAPy...truncated...zrkuWJOgQs8IZZaIv2BXIa2R4Olgk |
| 101 | ``` |
RemediationAI
The problem is that fictitious AWS credentials are hardcoded in the README.md documentation, and if any real credentials were committed this way, anyone with repository access would have full AWS account access. Replace the example credentials with placeholder text like `YOUR_AWS_ACCESS_KEY_ID` and add a `.env.example` file showing the required environment variable names without values. This fix prevents accidental credential exposure and establishes a pattern where developers must explicitly load credentials from environment variables or a secrets manager at runtime. Verify by running `git log -p` or using a secret scanner tool to confirm no real AWS credentials exist in the repository history, and test that the application loads credentials from `AWS_ACCESS_KEY_ID` environment variable at startup.
Hardcoded secret detected in source. MCP servers often proxy between the model and a third-party API, so any committed credential grants that access to anyone who can read the repo. Move to an environment variable or secret manager and rotate the leaked value.
Evidence
| 100 | ```file |
| 101 | # fictitious `.env` file with AWS temporary credentials |
| 102 | AWS_ACCESS_KEY_ID=ASIAIOSFODNN7EXAMPLE |
| 103 | AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY |
| 104 | AWS_SESSION_TOKEN=AQoEXAMPLEH4aoAH0gNCAPy...truncated...zrkuWJOgQs8IZZaIv2BXIa2R4Olgk |
| 105 | AWS_REGION=us-east-1 |
RemediationAI
The problem is that fictitious AWS credentials are hardcoded in the README.md documentation, and if any real credentials were committed this way, anyone with repository access would have full AWS account access. Replace the example credentials with placeholder text like `YOUR_AWS_ACCESS_KEY_ID` and add a `.env.example` file showing the required environment variable names without values. This fix prevents accidental credential exposure and establishes a pattern where developers must explicitly load credentials from environment variables or a secrets manager at runtime. Verify by running `git log -p` or using a secret scanner tool to confirm no real AWS credentials exist in the repository history, and test that the application loads credentials from `AWS_ACCESS_KEY_ID` environment variable at startup.
MCP tool file registers a tool, performs a destructive sink (fs.unlink / shutil.rmtree / DROP TABLE / DELETE FROM / TRUNCATE / UPDATE ... SET / HTTP DELETE|PUT|PATCH / subprocess / exec / spawn), and emits no audit event anywhere in the file. Without an audit event, an investigator cannot answer "who deleted record X on day Y?"; the irreversible action leaves no trail. Closes the OWASP MCP Top 10:2025 MCP08 (Lack of Audit and Telemetry) gap. Distinct from MCP-201 (no confirmation) and MCP-283
Evidence
| 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. |
| 2 | # |
| 3 | # Licensed under the Apache License, Version 2.0 (the "License"); |
| 4 | # you may not use this file except in compliance with the License. |
| 5 | # You may obtain a copy of the License at |
| 6 | # |
| 7 | # http://www.apache.org/licenses/LICENSE-2.0 |
| 8 | # |
| 9 | # Unless required by applicable law or agreed to in writing, software |
| 10 | # distributed under the License is distributed on an "AS IS" BASIS, |
| 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express |
RemediationAI
The problem is that the PostgreSQL MCP server performs destructive database operations (DELETE, DROP TABLE, etc.) without emitting any audit events, making it impossible to investigate who performed a destructive action and when. Add audit logging by calling a centralized audit function (e.g., `audit_log(action='DELETE', table=table_name, user=current_user, timestamp=datetime.now())`) immediately after each destructive operation succeeds, and log both the action parameters and the result. This fix creates an audit trail that enables post-incident investigation and compliance reporting. Verify by executing a destructive tool (e.g., deleting a record), then checking the audit log file or centralized logging system to confirm the action, user, timestamp, and affected resource are recorded.
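A sketch of a centralized audit helper like the one the remediation names; `audit_log` and its field names mirror the remediation text and are illustrative, not an existing API in this package:

```python
import json
import logging
from datetime import datetime, timezone

audit_logger = logging.getLogger("audit")

def audit_log(action, user, **details):
    """Emit one structured audit record per destructive action."""
    record = {
        "action": action,
        "user": user,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        **details,  # e.g. table=..., record_id=..., result=...
    }
    audit_logger.info(json.dumps(record))
    return record
```

Each tool handler would call it immediately after the destructive operation succeeds, passing the action parameters and result.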
MCP tool file registers a tool, performs a destructive sink (fs.unlink / shutil.rmtree / DROP TABLE / DELETE FROM / TRUNCATE / UPDATE ... SET / HTTP DELETE|PUT|PATCH / subprocess / exec / spawn), and emits no audit event anywhere in the file. Without an audit event, an investigator cannot answer "who deleted record X on day Y?"; the irreversible action leaves no trail. Closes the OWASP MCP Top 10:2025 MCP08 (Lack of Audit and Telemetry) gap. Distinct from MCP-201 (no confirmation) and MCP-283
Evidence
| 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. |
| 2 | # |
| 3 | # Licensed under the Apache License, Version 2.0 (the "License"); |
| 4 | # you may not use this file except in compliance with the License. |
| 5 | # You may obtain a copy of the License at |
| 6 | # |
| 7 | # http://www.apache.org/licenses/LICENSE-2.0 |
| 8 | # |
| 9 | # Unless required by applicable law or agreed to in writing, software |
| 10 | # distributed under the License is distributed on an "AS IS" BASIS, |
| 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express |
RemediationAI
The problem is that the SAM local invoke tool performs destructive operations (subprocess execution, file deletion) without emitting any audit events, making it impossible to investigate who executed what code and when. Add audit logging by calling a centralized audit function (e.g., `audit_log(action='INVOKE', function=function_name, user=current_user, timestamp=datetime.now())`) immediately after each invocation completes, and log both the invocation parameters and the result. This fix creates an audit trail that enables post-incident investigation and compliance reporting. Verify by executing a SAM invoke tool, then checking the audit log file or centralized logging system to confirm the action, user, timestamp, and function name are recorded.
MCP tool file registers a tool, performs a destructive sink (fs.unlink / shutil.rmtree / DROP TABLE / DELETE FROM / TRUNCATE / UPDATE ... SET / HTTP DELETE|PUT|PATCH / subprocess / exec / spawn), and emits no audit event anywhere in the file. Without an audit event, an investigator cannot answer "who deleted record X on day Y?"; the irreversible action leaves no trail. Closes the OWASP MCP Top 10:2025 MCP08 (Lack of Audit and Telemetry) gap. Distinct from MCP-201 (no confirmation) and MCP-283
Evidence
| 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. |
| 2 | # |
| 3 | # Licensed under the Apache License, Version 2.0 (the "License"); |
| 4 | # you may not use this file except in compliance with the License. |
| 5 | # You may obtain a copy of the License at |
| 6 | # |
| 7 | # http://www.apache.org/licenses/LICENSE-2.0 |
| 8 | # |
| 9 | # Unless required by applicable law or agreed to in writing, software |
| 10 | # distributed under the License is distributed on an "AS IS" BASIS, |
| 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express |
RemediationAI
The problem is that the document loader MCP server performs destructive operations (file deletion, document removal) without emitting any audit events, making it impossible to investigate who deleted what document and when. Add audit logging by calling a centralized audit function (e.g., `audit_log(action='DELETE', document_id=doc_id, user=current_user, timestamp=datetime.now())`) immediately after each destructive operation succeeds, and log both the operation parameters and the result. This fix creates an audit trail that enables post-incident investigation and compliance reporting. Verify by executing a destructive tool (e.g., deleting a document), then checking the audit log file or centralized logging system to confirm the action, user, timestamp, and affected resource are recorded.
Full exception detail or stack trace returned to the caller. Leaking tracebacks exposes internal paths, library versions, and query structure, all useful recon for attackers.
Evidence
| 64 | result = r.srem(key, member) |
| 65 | return f"Successfully removed {result} member from set '{key}'" |
| 66 | except ValkeyError as e: |
| 67 | return f"Error removing from set '{key}': {str(e)}" |
| 68 | |
| 69 | |
| 70 | @mcp.tool() |
RemediationAI
The problem is that the full ValkeyError exception message is returned directly to the caller in the `set.py` tool, exposing internal error details, library versions, and query structure that aid attackers in reconnaissance. Replace `return f"Error removing from set '{key}': {str(e)}"` with a generic error message like `return f"Error removing from set '{key}': operation failed"` and log the full exception details server-side using `logger.error(f'Valkey error: {e}', exc_info=True)`. This fix hides internal details from the client while preserving the full error context for debugging. Verify by triggering an error condition and confirming the client receives only a generic message, while the server logs contain the full exception traceback.
Full exception detail or stack trace returned to the caller. Leaking tracebacks exposes internal paths, library versions, and query structure, which is useful recon for attackers.
Evidence
| 216 | result = r.hlen(key) |
| 217 | return str(result) |
| 218 | except ValkeyError as e: |
| 219 | return f"Error getting hash length from '{key}': {str(e)}" |
| 220 | |
| 221 | |
| 222 | @mcp.tool() |
RemediationAI
The problem is that the full ValkeyError exception message is returned directly to the caller in the `hash.py` tool, exposing internal error details, library versions, and query structure that aid attackers in reconnaissance. Replace `return f"Error getting hash length from '{key}': {str(e)}"` with a generic error message like `return f"Error getting hash length from '{key}': operation failed"` and log the full exception details server-side using `logger.error(f'Valkey error: {e}', exc_info=True)`. This fix hides internal details from the client while preserving the full error context for debugging. Verify by triggering an error condition and confirming the client receives only a generic message, while the server logs contain the full exception traceback.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 2181 | 'expected': '< 80%', |
| 2182 | '_key': f'cw_mem_high|{d["id"]}', |
| 2183 | } |
| 2184 | ) |
| 2185 | except (ValueError, TypeError): |
| 2186 | pass |
| 2187 | if '🔴' in d['status_check']: |
| 2188 | findings.append( |
| 2189 | { |
RemediationAI
The problem is that the `except (ValueError, TypeError): pass` clause silently discards parsing errors with no logging or metrics, preventing incident responders from detecting data quality issues or malformed health check responses. Bind the exception (`except (ValueError, TypeError) as e:`) and replace `pass` with `logger.warning(f'Failed to parse health metric for {d.get("id")}: {e}', exc_info=True)`, optionally incrementing a metric counter such as `metrics.increment('health_parse_errors')`. This fix ensures all errors are visible for debugging and monitoring. Verify by triggering a ValueError or TypeError in the health check parsing and confirming a warning log entry and metric increment are recorded.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 594 | $PORT = $match.Matches[0].Groups[1].Value |
| 595 | break |
| 596 | } |
| 597 | } catch {} |
| 598 | } |
| 599 | |
| 600 | if ($PORT) { |
RemediationAI
The problem is that the PowerShell `catch {}` block silently discards all exceptions with no logging or error handling, preventing incident responders from detecting port discovery failures. Replace `catch {}` with `catch { Write-Error "Failed to find port: $_"; exit 1 }` or equivalent error logging. This fix ensures all errors are visible for debugging and monitoring. Verify by triggering an error condition (e.g., invalid registry access) and confirming an error message is logged.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 117 | finally: |
| 118 | if mock_file_path and os.path.exists(mock_file_path): |
| 119 | try: |
| 120 | os.unlink(mock_file_path) |
| 121 | except OSError: |
| 122 | pass |
RemediationAI
The problem is that the `except OSError: pass` clause silently discards file deletion errors with no logging, preventing incident responders from detecting cleanup failures that could leave test artifacts behind. Bind the exception (`except OSError as e:`) and replace `pass` with `logger.warning(f'Failed to clean up mock file {mock_file_path}: {e}')` to log the error. This fix ensures cleanup failures are visible for debugging and monitoring. Verify by making the file read-only or inaccessible, running the cleanup code, and confirming a warning log entry is recorded.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 499 | if isinstance(cost, str) and '$' in cost: |
| 500 | try: |
| 501 | cost_value = float(cost.replace('$', '').replace(',', '')) |
| 502 | total_cost += cost_value |
| 503 | except ValueError: |
| 504 | pass |
| 505 | |
| 506 | if total_cost > 0: |
| 507 | monthly_cost = f'${total_cost:.2f}' |
RemediationAI
The problem is that the `except ValueError: pass` clause silently discards cost parsing errors with no logging, preventing incident responders from detecting malformed pricing data that could lead to incorrect cost calculations. Bind the exception (`except ValueError as e:`) and replace `pass` with `logger.warning(f'Failed to parse cost value "{cost}": {e}')` to log the error. This fix ensures parsing failures are visible for debugging and monitoring. Verify by passing a malformed cost string (e.g., `'invalid$price'`) and confirming a warning log entry is recorded.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 79 | sample_{{ entity_name.lower() }}.{{ param }} |
| 80 | {%- if not loop.last %}, {% endif -%} |
| 81 | {%- endfor %}) |
| 82 | print(f"  🗑️ Deleted leftover {{ entity_name.lower() }} (if existed)") |
| 83 | except Exception: |
| 84 | pass # Ignore errors - item might not exist |
| 85 | {%- endfor %} |
| 86 | {%- endfor %} |
| 87 | print("✅ Pre-test cleanup completed\n") |
RemediationAI
The problem is that the Jinja2 template contains `except Exception: pass` which silently discards all deletion errors with no logging, preventing incident responders from detecting cleanup failures in generated code. Bind the exception (`except Exception as e:`) and replace `pass` with `logger.warning(f'Failed to delete leftover {{ entity_name.lower() }}: {e}')` or equivalent error logging in the generated code. This fix ensures cleanup failures are visible for debugging and monitoring. Verify by generating code from the template, running it with a deletion that fails (e.g., permission denied), and confirming a warning log entry is recorded.
run_query