High risk. Don't ship without significant remediation.
Scanned 5/12/2026, 7:16:43 PM · Cached result · Deep Scan · 91 rules
AIVSS Score
High
Severity Breakdown
0
critical
12
high
10
medium
0
low
MCP Server Information
Findings
This package carries a D security grade with 12 high-severity findings, centered on prompt injection vulnerabilities (8 instances) and server configuration issues (7 instances) that could allow attackers to manipulate tool behavior or exploit misconfigurations. The combination of prompt injection and tool poisoning risks means this package could be leveraged to bypass safety controls or execute unintended operations; it is unsuitable for production use without substantial remediation.
AI per-finding remediation generated by bedrock-claude-haiku-4-5 for 22 of 22 findings. Click any finding to read.
Dependencies
requests (2)
Scan Details
Done
22 findings
rename_function accepts old_name and new_name identifiers and renames a function by old name alone, with no ownership or access control check.
Evidence
| 72 | return safe_get("classes", {"offset": offset, "limit": limit}) |
| 73 | |
| 74 | @mcp.tool() |
| 75 | def decompile_function(name: str) -> str: |
| 76 | """ |
| 77 | Decompile a specific function by name and return the decompiled C code. |
| 78 | """ |
| 79 | return safe_post("decompile", name) |
| 80 | |
| 81 | @mcp.tool() |
| 82 | def rename_function(old_name: str, new_name: str) -> str: |
RemediationAI
The problem is that rename_function() accepts old_name and new_name parameters and performs a rename operation via safe_post() with no verification that the caller owns or has permission to modify the target function. The concrete fix is to add an access control check before calling safe_post(): implement a function like `verify_function_ownership(old_name, user_id)` that queries the Ghidra server to confirm the current user has write permissions on that function, and raise a PermissionError if not. This eliminates the vulnerability by ensuring only authorized users can rename functions they control. To verify, test by attempting to rename a function as an unprivileged user and confirm the operation is rejected with a 403-like error.
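The guard described above can be sketched as follows. Note that `verify_function_ownership`, the `PERMISSIONS` store, and the `user_id` parameter are all hypothetical, since the package currently has no notion of callers or ownership:

```python
# Hypothetical in-memory permission store: user_id -> functions that user may modify.
PERMISSIONS = {"alice": {"main", "decrypt_payload"}}

def verify_function_ownership(old_name: str, user_id: str) -> None:
    """Raise PermissionError unless user_id has write access to old_name."""
    if old_name not in PERMISSIONS.get(user_id, set()):
        raise PermissionError(f"{user_id} may not rename {old_name}")

def rename_function(old_name: str, new_name: str, user_id: str) -> str:
    verify_function_ownership(old_name, user_id)  # reject before any side effect
    # The real tool would now call safe_post("renameFunction", ...)
    return f"renamed {old_name} -> {new_name}"
```

An unprivileged caller then fails closed: the PermissionError is raised before any request reaches the Ghidra server.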
LLM consensus
decompile_function accepts a function name identifier and returns decompiled code by posting to 'decompile' endpoint with only the name as filter, with no ownership or access control check.
Evidence
| 65 | return safe_get("methods", {"offset": offset, "limit": limit}) |
| 66 | |
| 67 | @mcp.tool() |
| 68 | def list_classes(offset: int = 0, limit: int = 100) -> list: |
| 69 | """ |
| 70 | List all namespace/class names in the program with pagination. |
| 71 | """ |
| 72 | return safe_get("classes", {"offset": offset, "limit": limit}) |
| 73 | |
| 74 | @mcp.tool() |
| 75 | def decompile_function(name: str) -> str: |
RemediationAI
The problem is that decompile_function() accepts a function name and posts it to the 'decompile' endpoint with no ownership or access control check, allowing any caller to retrieve decompiled code for any function in the binary. The concrete fix is to add an authorization check before calling safe_post("decompile", name): implement `verify_function_read_access(name, user_id)` that confirms the user has permission to view that function's decompiled code, raising PermissionError if denied. This eliminates the vulnerability by restricting decompilation access to authorized users only. To verify, test by attempting to decompile a restricted function as an unprivileged user and confirm access is denied.
LLM consensus
rename_data accepts an address identifier and renames a data label at that address with only the address as filter, with no ownership or access control check.
Evidence
| 79 | return safe_post("decompile", name) |
| 80 | |
| 81 | @mcp.tool() |
| 82 | def rename_function(old_name: str, new_name: str) -> str: |
| 83 | """ |
| 84 | Rename a function by its current name to a new user-defined name. |
| 85 | """ |
| 86 | return safe_post("renameFunction", {"oldName": old_name, "newName": new_name}) |
| 87 | |
| 88 | @mcp.tool() |
| 89 | def rename_data(address: str, new_name: str) -> str: |
RemediationAI
The problem is that rename_data() accepts an address parameter and renames a data label at that address via safe_post() with no ownership or access control check, allowing any caller to rename any data symbol. The concrete fix is to add an authorization check before calling safe_post("renameData", ...): implement `verify_data_ownership(address, user_id)` that confirms the user has write permissions on the data at that address, raising PermissionError if not. This eliminates the vulnerability by ensuring only authorized users can rename data they control. To verify, test by attempting to rename a data label at a restricted address as an unprivileged user and confirm the operation is rejected.
LLM consensus
Tool 'rename_function' performs NETWORK side effect (POST request to Ghidra server) that is not disclosed in description.
Evidence
| 70 | List all namespace/class names in the program with pagination. |
| 71 | """ |
| 72 | return safe_get("classes", {"offset": offset, "limit": limit}) |
| 73 | |
| 74 | @mcp.tool() |
| 75 | def decompile_function(name: str) -> str: |
| 76 | """ |
| 77 | Decompile a specific function by name and return the decompiled C code. |
| 78 | """ |
| 79 | return safe_post("decompile", name) |
RemediationAI
The problem is that the rename_function() docstring states 'Rename a function by its current name to a new user-defined name' but does not disclose that the function performs a POST request to the Ghidra server, which is a network side effect that modifies state. The concrete fix is to update the docstring to explicitly document the side effect: change the docstring to 'Rename a function by its current name to a new user-defined name. **This tool modifies state on the Ghidra server via a POST request.**' This eliminates the vulnerability by making the side effect transparent to callers and LLMs. To verify, confirm the updated docstring appears in the tool's metadata by calling the MCP tools/list endpoint and inspecting the description field.
LLM consensus
Tool 'rename_data' performs NETWORK side effect (POST request to Ghidra server) that is not disclosed in description.
Evidence
| 75 | def decompile_function(name: str) -> str: |
| 76 | """ |
| 77 | Decompile a specific function by name and return the decompiled C code. |
| 78 | """ |
| 79 | return safe_post("decompile", name) |
| 80 | |
| 81 | @mcp.tool() |
| 82 | def rename_function(old_name: str, new_name: str) -> str: |
| 83 | """ |
| 84 | Rename a function by its current name to a new user-defined name. |
| 85 | """ |
RemediationAI
The problem is that the rename_data() docstring states 'Rename a data label at the specified address' but does not disclose that the function performs a POST request to the Ghidra server, which is a network side effect that modifies state. The concrete fix is to update the docstring to explicitly document the side effect: change the docstring to 'Rename a data label at the specified address. **This tool modifies state on the Ghidra server via a POST request.**' This eliminates the vulnerability by making the side effect transparent to callers and LLMs. To verify, confirm the updated docstring appears in the tool's metadata by calling the MCP tools/list endpoint and inspecting the description field.
LLM consensus
Tool 'decompile_function' fetches untrusted decompiled C code from external Ghidra server and returns it verbatim without provenance markers, enabling indirect prompt injection.
Evidence
| 65 | return safe_get("methods", {"offset": offset, "limit": limit}) |
| 66 | |
| 67 | @mcp.tool() |
| 68 | def list_classes(offset: int = 0, limit: int = 100) -> list: |
| 69 | """ |
| 70 | List all namespace/class names in the program with pagination. |
| 71 | """ |
| 72 | return safe_get("classes", {"offset": offset, "limit": limit}) |
| 73 | |
| 74 | @mcp.tool() |
| 75 | def decompile_function(name: str) -> str: |
RemediationAI
The problem is that decompile_function() fetches decompiled C code from the external Ghidra server and returns it verbatim without any provenance markers or sanitization, allowing malicious or injected code in the decompiled output to influence the LLM's reasoning. The concrete fix is to wrap the return value with a provenance marker: change `return safe_post("decompile", name)` to `return f"[EXTERNAL: Ghidra decompile output for {name}]\n{safe_post('decompile', name)}\n[END EXTERNAL]"`. This eliminates the vulnerability by clearly marking untrusted external data so the LLM treats it as potentially adversarial. To verify, call decompile_function() and confirm the output is wrapped with [EXTERNAL] and [END EXTERNAL] markers.
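The wrapping pattern can be sketched minimally as below, with a stubbed decompiler result standing in for the real `safe_post` call:

```python
def wrap_external(source: str, payload: str) -> str:
    """Frame untrusted external text so an LLM can treat it as data, not instructions."""
    return f"[EXTERNAL: {source}]\n{payload}\n[END EXTERNAL]"

def decompile_function(name: str) -> str:
    # Stub standing in for safe_post("decompile", name).
    decompiled = "int main(void) { return 0; }"
    return wrap_external(f"Ghidra decompile output for {name}", decompiled)
```

The same wrapper applies unchanged to list_methods, list_classes, list_segments, and list_imports below. Markers reduce, but do not eliminate, injection risk; treat them as one layer of defense, not a complete fix.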
LLM consensus
Tool 'list_segments' fetches untrusted memory segment information from external Ghidra server and returns it verbatim without provenance markers, enabling indirect prompt injection.
Evidence
| 86 | return safe_post("renameFunction", {"oldName": old_name, "newName": new_name}) |
| 87 | |
| 88 | @mcp.tool() |
| 89 | def rename_data(address: str, new_name: str) -> str: |
| 90 | """ |
| 91 | Rename a data label at the specified address. |
| 92 | """ |
| 93 | return safe_post("renameData", {"address": address, "newName": new_name}) |
| 94 | |
| 95 | @mcp.tool() |
| 96 | def list_segments(offset: int = 0, limit: int = 100) -> list: |
RemediationAI
The problem is that list_segments() fetches memory segment information from the external Ghidra server and returns it verbatim without provenance markers, allowing injected or malicious segment metadata to influence the LLM's reasoning. The concrete fix is to wrap the return value with a provenance marker: change `return safe_get("segments", ...)` to `return f"[EXTERNAL: Ghidra segments]\n{safe_get('segments', ...)}\n[END EXTERNAL]"`. This eliminates the vulnerability by clearly marking untrusted external data. To verify, call list_segments() and confirm the output is wrapped with [EXTERNAL] and [END EXTERNAL] markers.
LLM consensus
Tool 'list_methods' fetches untrusted decompiled function names from external Ghidra server and returns them verbatim without provenance markers, enabling indirect prompt injection.
Evidence
| 51 | response = requests.post(url, data=data.encode("utf-8"), timeout=5) |
| 52 | response.encoding = 'utf-8' |
| 53 | if response.ok: |
| 54 | return response.text.strip() |
| 55 | else: |
| 56 | return f"Error {response.status_code}: {response.text.strip()}" |
| 57 | except Exception as e: |
| 58 | return f"Request failed: {str(e)}" |
| 59 | |
| 60 | @mcp.tool() |
| 61 | def list_methods(offset: int = 0, limit: int = 100) -> list: |
RemediationAI
The problem is that list_methods() fetches function names from the external Ghidra server and returns them verbatim without provenance markers, allowing injected or malicious function names to influence the LLM's reasoning. The concrete fix is to wrap the return value with a provenance marker: change `return safe_get("methods", ...)` to `return f"[EXTERNAL: Ghidra methods]\n{safe_get('methods', ...)}\n[END EXTERNAL]"`. This eliminates the vulnerability by clearly marking untrusted external data. To verify, call list_methods() and confirm the output is wrapped with [EXTERNAL] and [END EXTERNAL] markers.
LLM consensus
Tool 'list_imports' fetches untrusted imported symbol information from external Ghidra server and returns it verbatim without provenance markers, enabling indirect prompt injection.
Evidence
| 93 | return safe_post("renameData", {"address": address, "newName": new_name}) |
| 94 | |
| 95 | @mcp.tool() |
| 96 | def list_segments(offset: int = 0, limit: int = 100) -> list: |
| 97 | """ |
| 98 | List all memory segments in the program with pagination. |
| 99 | """ |
| 100 | return safe_get("segments", {"offset": offset, "limit": limit}) |
| 101 | |
| 102 | @mcp.tool() |
| 103 | def list_imports(offset: int = 0, limit: int = 100) -> list: |
RemediationAI
The problem is that list_imports() fetches imported symbol information from the external Ghidra server and returns it verbatim without provenance markers, allowing injected or malicious import data to influence the LLM's reasoning. The concrete fix is to wrap the return value with a provenance marker: change `return safe_get("imports", ...)` to `return f"[EXTERNAL: Ghidra imports]\n{safe_get('imports', ...)}\n[END EXTERNAL]"`. This eliminates the vulnerability by clearly marking untrusted external data. To verify, call list_imports() and confirm the output is wrapped with [EXTERNAL] and [END EXTERNAL] markers.
LLM consensus
Tool 'list_classes' fetches untrusted namespace/class names from external Ghidra server and returns them verbatim without provenance markers, enabling indirect prompt injection.
Evidence
| 58 | return f"Request failed: {str(e)}" |
| 59 | |
| 60 | @mcp.tool() |
| 61 | def list_methods(offset: int = 0, limit: int = 100) -> list: |
| 62 | """ |
| 63 | List all function names in the program with pagination. |
| 64 | """ |
| 65 | return safe_get("methods", {"offset": offset, "limit": limit}) |
| 66 | |
| 67 | @mcp.tool() |
| 68 | def list_classes(offset: int = 0, limit: int = 100) -> list: |
RemediationAI
The problem is that list_classes() fetches namespace/class names from the external Ghidra server and returns them verbatim without provenance markers, allowing injected or malicious class names to influence the LLM's reasoning. The concrete fix is to wrap the return value with a provenance marker: change `return safe_get("classes", ...)` to `return f"[EXTERNAL: Ghidra classes]\n{safe_get('classes', ...)}\n[END EXTERNAL]"`. This eliminates the vulnerability by clearly marking untrusted external data. To verify, call list_classes() and confirm the output is wrapped with [EXTERNAL] and [END EXTERNAL] markers.
LLM consensus
MCP server binds an HTTP transport to localhost and registers tools, but no authentication is enforced on requests. The official MCP security best practices warn that this is reachable via DNS-rebinding attacks: a malicious web page can hit `http://127.0.0.1:<port>` from inside the user's browser and invoke tools as the user. Pick one fix: 1. Switch to stdio transport (`mcp.run(transport="stdio")`). 2. Require an `Authorization` / `Bearer` / `api_key` check on every request. 3. Bind
Evidence
| 1 | # /// script |
| 2 | # requires-python = ">=3.10" |
| 3 | # dependencies = [ |
| 4 | # "requests>=2,<3", |
| 5 | # "mcp>=1.2.0,<2", |
| 6 | # ] |
| 7 | # /// |
| 8 | |
| 9 | import sys |
| 10 | import requests |
| 11 | import argparse |
| 12 | import logging |
| 13 | from urllib.parse import urljoin |
| 14 | |
| 15 | from mcp.server.fastmcp import FastMCP |
| 16 | |
| 17 | DEFAULT_GHIDRA_SERVER = "http://127.0.0.1:8080/" |
| 18 | |
| 19 | logger = logging.getLogger(__name__) |
| 20 | |
| 21 | mcp = FastMCP("ghidra-mcp") |
| 22 | |
| 23 | # Initialize ghidra_server_url with default value |
| 24 | ghidra_server_url = DEFAULT_GHIDRA_SERVER |
| 25 | |
| 26 | def safe_get(endpoint: str, params: dic |
RemediationAI
The problem is that the MCP server binds to localhost HTTP without authentication, making it vulnerable to DNS-rebinding attacks where a malicious web page can invoke tools from the user's browser. The concrete fix is to switch to stdio transport: change the server startup from `mcp.run(transport="http", ...)` to `mcp.run(transport="stdio")` and remove the HTTP binding configuration. This eliminates the vulnerability by removing the network-accessible HTTP interface entirely, so only the parent process can communicate with the server via stdin/stdout. To verify, confirm the server starts without binding to any TCP port by checking that no listening socket is created.
LLM consensus
MCP server binds an HTTP transport to localhost / 127.0.0.1 / [::1] and registers tools, but does not validate the request `Host` header. Even with auth, this is exploitable via DNS rebinding: a malicious web page can make the user's browser resolve `evil.com` to `127.0.0.1`, bypassing same-origin checks. Fix: enable `hostHeaderValidation()` middleware (TS SDK ≥1.24.0), or check `req.headers.host` against an allow-list of expected hostnames. Co-fires with MCP-268 (no auth) when both gaps are p
Evidence
| 1 | # /// script |
| 2 | # requires-python = ">=3.10" |
| 3 | # dependencies = [ |
| 4 | # "requests>=2,<3", |
| 5 | # "mcp>=1.2.0,<2", |
| 6 | # ] |
| 7 | # /// |
| 8 | |
| 9 | import sys |
| 10 | import requests |
| 11 | import argparse |
| 12 | import logging |
| 13 | from urllib.parse import urljoin |
| 14 | |
| 15 | from mcp.server.fastmcp import FastMCP |
| 16 | |
| 17 | DEFAULT_GHIDRA_SERVER = "http://127.0.0.1:8080/" |
| 18 | |
| 19 | logger = logging.getLogger(__name__) |
| 20 | |
| 21 | mcp = FastMCP("ghidra-mcp") |
| 22 | |
| 23 | # Initialize ghidra_server_url with default value |
| 24 | ghidra_server_url = DEFAULT_GHIDRA_SERVER |
| 25 | |
| 26 | def safe_get(endpoint: str, params: dic |
RemediationAI
The problem is that the MCP server binds to localhost without validating the Host header, making it vulnerable to DNS-rebinding attacks where a malicious web page resolves a domain to 127.0.0.1 and bypasses same-origin checks. The concrete fix is to add Host header validation middleware: implement a request interceptor that checks `req.headers.host` against an allowlist of expected hostnames (e.g., `['localhost', '127.0.0.1', '[::1]']`) and reject requests with mismatched Host headers with a 400 error. This eliminates the vulnerability by ensuring only requests from expected hosts are accepted. To verify, test by making a request with a spoofed Host header (e.g., `Host: evil.com`) and confirm it is rejected.
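The allow-list check might look like the sketch below; the middleware wiring is framework-specific and omitted, and the helper name is illustrative:

```python
ALLOWED_HOSTS = {"localhost", "127.0.0.1", "[::1]"}

def host_is_allowed(host_header: str) -> bool:
    """Return True only for loopback Host values, with or without a :port suffix."""
    head, sep, tail = host_header.rpartition(":")
    # Strip a numeric port; bracketed IPv6 literals like [::1] survive intact.
    candidate = head if sep and tail.isdigit() else host_header
    return candidate in ALLOWED_HOSTS
```

A request whose Host header fails this check (for example `Host: evil.com` sent during a rebinding attack) should be rejected with 400 before any tool dispatch happens.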
LLM consensus
requests==2.32.3 has 2 known CVEs [MEDIUM]: GHSA-9hjg-9r4m-mvj7, GHSA-gc5v-m9x4-r6x2. Upgrade to a patched version.
RemediationAI
The problem is that requests==2.32.3 has 2 known CVEs (GHSA-9hjg-9r4m-mvj7, GHSA-gc5v-m9x4-r6x2) that could allow attackers to exploit the HTTP client. The concrete fix is to upgrade the requests library: change `requests==2.32.3` to `requests>=2.33.0` in requirements.txt (or the latest patched version). This eliminates the vulnerability by applying security patches that fix the known CVEs. To verify, run `pip install -U requests` and confirm the installed version is ≥2.33.0 by running `pip show requests`.
mcp==1.5.0 has 3 known CVEs [HIGH]: GHSA-3qhf-m339-9g5v, GHSA-9h52-p55h-vw2f, GHSA-j975-95f5-7wqh. Upgrade to a patched version.
RemediationAI
The problem is that mcp==1.5.0 has 3 known CVEs (GHSA-3qhf-m339-9g5v, GHSA-9h52-p55h-vw2f, GHSA-j975-95f5-7wqh) that could allow attackers to exploit the MCP framework. The concrete fix is to upgrade the mcp library: change `mcp==1.5.0` to `mcp>=1.6.0` in requirements.txt (or the latest patched version that addresses all 3 CVEs). This eliminates the vulnerability by applying security patches that fix the known CVEs. To verify, run `pip install -U mcp` and confirm the installed version is ≥1.6.0 by running `pip show mcp`.
Full exception detail or stack trace returned to the caller. Leaking tracebacks exposes internal paths, library versions, and query structure; useful recon for attackers.
Evidence
| 40 | else: |
| 41 | return [f"Error {response.status_code}: {response.text.strip()}"] |
| 42 | except Exception as e: |
| 43 | return [f"Request failed: {str(e)}"] |
| 44 | |
| 45 | def safe_post(endpoint: str, data: dict | str) -> str: |
| 46 | try: |
RemediationAI
The problem is that the safe_get() function returns full exception details in the error message (e.g., `f"Request failed: {str(e)}"`), which leaks internal paths, library versions, and query structure to attackers. The concrete fix is to replace the exception detail with a generic error message: change `return [f"Request failed: {str(e)}"]` to `return ["Request failed: Unable to retrieve data from Ghidra server"]` and log the full exception internally using `logging.error()`. This eliminates the vulnerability by hiding implementation details from the caller. To verify, trigger an error condition (e.g., by making the Ghidra server unreachable) and confirm the response contains only the generic message, not the full exception.
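One way to structure the fix, sketched here with a generic callable so it runs standalone (the real `safe_get` wraps `requests.get`, and `safe_call` is a made-up name):

```python
import logging

logger = logging.getLogger(__name__)

def safe_call(fetch, generic_message: str):
    """Run fetch(); on failure, keep the traceback server-side and return a bland error."""
    try:
        return fetch()
    except Exception:
        logger.exception("Ghidra request failed")  # full detail goes to server logs only
        return [generic_message]

def failing_fetch():
    raise RuntimeError("/home/user/ghidra internal path")  # detail that must not leak

result = safe_call(failing_fetch,
                   "Request failed: unable to retrieve data from Ghidra server")
```

The caller sees only the generic message, while operators can still diagnose failures from the logged traceback.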
LLM consensus
Full exception detail or stack trace returned to the caller. Leaking tracebacks exposes internal paths, library versions, and query structure; useful recon for attackers.
Evidence
| 55 | else: |
| 56 | return f"Error {response.status_code}: {response.text.strip()}" |
| 57 | except Exception as e: |
| 58 | return f"Request failed: {str(e)}" |
| 59 | |
| 60 | @mcp.tool() |
| 61 | def list_methods(offset: int = 0, limit: int = 100) -> list: |
RemediationAI
The problem is that the safe_post() function returns full exception details in the error message (e.g., `f"Request failed: {str(e)}"`), which leaks internal paths, library versions, and query structure to attackers. The concrete fix is to replace the exception detail with a generic error message: change `return f"Request failed: {str(e)}"` to `return "Request failed: Unable to complete operation on Ghidra server"` and log the full exception internally using `logging.error()`. This eliminates the vulnerability by hiding implementation details from the caller. To verify, trigger an error condition and confirm the response contains only the generic message, not the full exception.
LLM consensus
MCP manifest declares tools but no authentication field is present (none of: auth, authorization, bearer, oauth, mtls, apiKey, api_key, basic, token, authToken). Absence is a weak signal โ confirm whether the server relies on network-layer or host-level auth, or declare the real mechanism explicitly so reviewers can audit it.
Evidence
| 1 | [](https://www.apache.org/licenses/LICENSE-2.0) |
| 2 | [](https://github.com/LaurieWired/GhidraMCP/releases) |
| 3 | [](https://github.com/LaurieWired/GhidraMCP/stargazers) |
| 4 | [](https://github.com/Lauri |
RemediationAI
The problem is that the MCP manifest (README.md or mcp.json) does not declare any authentication mechanism (auth, authorization, bearer, oauth, mtls, apiKey, etc.), making it unclear to reviewers whether the server relies on network-layer auth, host-level auth, or no auth at all. The concrete fix is to add an explicit authentication field to the manifest: add a section to README.md or create/update mcp.json with `"authentication": "none (relies on network isolation via stdio transport)"` or the actual auth method being used. This eliminates the vulnerability by making the security posture transparent for audit. To verify, confirm the authentication field is present and accurately describes the server's auth mechanism.
ghidra-mcp server uses FastMCP with @mcp.tool() decorators that can be dynamically modified at runtime, but the tools/list response does not include per-tool content-bound integrity fields (version, etag, digest, sha256, hash) to detect tool definition changes, and no notifications/tools/list_changed capability is declared to signal mutations.
Evidence
| 1 | # /// script |
| 2 | # requires-python = ">=3.10" |
| 3 | # dependencies = [ |
| 4 | # "requests>=2,<3", |
RemediationAI
The problem is that FastMCP tools can be dynamically modified at runtime, but the tools/list response does not include per-tool integrity fields (version, etag, digest, sha256) to detect mutations, and no tools/list_changed notification capability is declared. The concrete fix is to add integrity metadata to each tool definition: modify the @mcp.tool() decorator or response handler to include a `sha256` field computed from the tool's docstring and signature, and declare the `notifications/tools/list_changed` capability in the server initialization. This eliminates the vulnerability by allowing clients to detect unauthorized tool modifications. To verify, compute the SHA256 of a tool definition, call tools/list, and confirm the hash is present and matches.
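A content-bound digest can be computed from each tool's name, signature, and docstring. The `tool_digest` helper below is a hypothetical addition, not an existing FastMCP API:

```python
import hashlib
import inspect

def tool_digest(fn) -> str:
    """SHA-256 over the tool's name, signature, and docstring.
    A client that records this at first use can detect later redefinition."""
    material = f"{fn.__name__}{inspect.signature(fn)}{inspect.getdoc(fn) or ''}"
    return hashlib.sha256(material.encode("utf-8")).hexdigest()

def decompile_function(name: str) -> str:
    """Decompile a specific function by name and return the decompiled C code."""
    return ""
```

The server would attach `tool_digest(fn)` to each entry in its tools/list response; any change to a tool's docstring or signature then changes the digest, which clients can compare against a pinned value.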
LLM consensus
GitHub Actions `uses:` reference is not pinned to a 40-character commit SHA. Tags (`@v4`) and branches (`@main`) are mutable: a compromised maintainer or a tag rewrite can substitute malicious code into your CI pipeline silently. Pin to a SHA: `uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab`. For readability, include the version as a trailing comment: `# v4.1.1`. Tools like `pinact` / `ratchet` automate this. Allowed unpinned forms (excluded by the rule): - Local actions `.
Evidence
| 25 | steps: |
| 26 | - uses: actions/checkout@v4 |
| 27 | - name: Set up JDK 21 |
| 28 | uses: actions/setup-java@v4 |
| 29 | with: |
| 30 | java-version: '21' |
| 31 | distribution: 'temurin' |
RemediationAI
The problem is that .github/workflows/build.yml uses `actions/checkout@v4`, a mutable tag that a compromised maintainer or a tag rewrite can repoint, allowing malicious code injection into the CI pipeline. The concrete fix is to pin the action to the full 40-character commit SHA of the release you intend to use: change `uses: actions/checkout@v4` to `uses: actions/checkout@<full-commit-sha> # v4.x.x`, taking the SHA from the action's release page. This eliminates the vulnerability by ensuring the exact version of the action is used, preventing tag rewrites from affecting the build. To verify, confirm the workflow file contains the full 40-character SHA and that the action version matches the trailing comment.
GitHub Actions `uses:` reference is not pinned to a 40-character commit SHA. Tags (`@v4`) and branches (`@main`) are mutable: a compromised maintainer or a tag rewrite can substitute malicious code into your CI pipeline silently. Pin to a SHA: `uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab`. For readability, include the version as a trailing comment: `# v4.1.1`. Tools like `pinact` / `ratchet` automate this. Allowed unpinned forms (excluded by the rule): - Local actions `.
Evidence
| 54 | cp bridge_mcp_ghidra.py release/ |
| 55 | |
| 56 | - name: Upload artifact |
| 57 | uses: actions/upload-artifact@v4 |
| 58 | with: |
| 59 | name: GhidraMCP-artifact |
| 60 | path: | |
RemediationAI
The problem is that .github/workflows/build.yml uses `actions/setup-java@v4`, a mutable tag that a compromised maintainer or a tag rewrite can repoint, allowing malicious code injection into the CI pipeline. The concrete fix is to pin the action to the full 40-character commit SHA of the release you intend to use: change `uses: actions/setup-java@v4` to `uses: actions/setup-java@<full-commit-sha> # v4.x.x`, taking the SHA from the action's release page. This eliminates the vulnerability by ensuring the exact version of the action is used. To verify, confirm the workflow file contains the full 40-character SHA.
GitHub Actions `uses:` reference is not pinned to a 40-character commit SHA. Tags (`@v4`) and branches (`@main`) are mutable: a compromised maintainer or a tag rewrite can substitute malicious code into your CI pipeline silently. Pin to a SHA: `uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab`. For readability, include the version as a trailing comment: `# v4.1.1`. Tools like `pinact` / `ratchet` automate this. Allowed unpinned forms (excluded by the rule): - Local actions `.
Evidence
| 23 | Framework/Gui/lib/Gui.jar |
| 24 | |
| 25 | steps: |
| 26 | - uses: actions/checkout@v4 |
| 27 | - name: Set up JDK 21 |
| 28 | uses: actions/setup-java@v4 |
| 29 | with: |
RemediationAI
The problem is that .github/workflows/build.yml uses `actions/upload-artifact@v4` which is a mutable tag that can be rewritten by a compromised maintainer, allowing malicious code injection into the CI pipeline. The concrete fix is to pin the action to a specific commit SHA: change `uses: actions/upload-artifact@v4` to `uses: actions/upload-artifact@c7d193f32eddeaaf6d8e23213702579f431e579c # v4.0.0` (use the actual SHA for the desired version). This eliminates the vulnerability by ensuring the exact version of the action is used. To verify, confirm the workflow file contains the full 40-character SHA.
MCP server file uses Python's `global` keyword. `global` mutates module-level state from inside a function โ in a multi-tenant MCP server, this is almost always a cross-request data path. Closes the OWASP MCP Top 10:2025 MCP10 (Context Injection & Over-Sharing) gap. Fix: thread state through function arguments or a per-request context object. Module-level mutable singletons have no place in a tool handler.
Evidence
| 300 | args = parser.parse_args() |
| 301 | |
| 302 | # Use the global variable to ensure it's properly updated |
| 303 | global ghidra_server_url |
| 304 | if args.ghidra_server: |
| 305 | ghidra_server_url = args.ghidra_server |
RemediationAI
The problem is that the code uses Python's `global` keyword to mutate module-level state (ghidra_server_url) from inside a function, which in a multi-tenant MCP server creates a cross-request data path where one user's configuration can affect another user's requests. The concrete fix is to remove the global variable and thread the ghidra_server_url through function arguments: refactor the code to pass ghidra_server_url as a parameter to safe_get() and safe_post() instead of relying on a global, and initialize it once at server startup without mutation. This eliminates the vulnerability by ensuring each request uses its own isolated configuration. To verify, confirm that ghidra_server_url is no longer declared as global and that all functions accept it as a parameter.
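The refactor might look like this sketch: configuration is frozen into an object at startup and closed over by the request helpers, so nothing mutates module state. `GhidraConfig` and `make_safe_get` are illustrative names, not part of the package:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GhidraConfig:
    base_url: str = "http://127.0.0.1:8080/"

def make_safe_get(cfg: GhidraConfig):
    """Bind the server URL once; with no `global`, one request cannot
    retarget the URL used by another."""
    def safe_get(endpoint: str, params: dict) -> str:
        # Placeholder for requests.get(urljoin(cfg.base_url, endpoint), params=params)
        return f"GET {cfg.base_url}{endpoint}"
    return safe_get

# At startup, build the helpers from parsed args instead of mutating a global.
safe_get = make_safe_get(GhidraConfig(base_url="http://10.0.0.5:8080/"))
```

The frozen dataclass makes accidental mutation a hard error, and each tool handler receives the bound helper rather than reaching into module scope.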
LLM consensus
decompile_function
list_classes
rename_function
list_methods
+2 more · click to filter
rename_data
list_segments