High risk. Don't ship without significant remediation.
Scanned 5/4/2026, 4:42:16 AM · Cached result · Deep Scan · 88 rules
AIVSS Score: High

Severity Breakdown: 0 critical · 8 high · 15 medium · 3 low
MCP Server Information
Findings
This package has a poor security grade (D) and a low safety score (59/100), indicating significant risks. It contains 8 high-severity and 15 medium-severity issues, including prompt injection vulnerabilities (6), verbose error leaks (10), and tool poisoning risks (2), which could expose your system to attacks or unintended behavior. While no critical findings were detected, the volume and nature of these flaws suggest it may not be safe for production use without careful review and mitigation.
Per-finding remediation generated by bedrock-claude-haiku-4-5 (26 of 26 findings).
No known CVEs found for this package or its dependencies.
Scan Details: Done
26 of 26 findings
Tool 'search_prts' shadows reserved tool name 'search' from the search category.
Evidence
| 37 | @mcp.tool() |
| 38 | async def search_prts( |
| 39 | query: Annotated[str, Field(description="搜索关键词,支持中文,如「罗德岛」、「整合运动」。")], |
| 40 | limit: Annotated[int, Field(default=5, description="返回结果数量上限,默认 5,最大建议不超过 10。")] = 5, |
| 41 | ) -> str: |
| 42 | """搜索 PRTS 明日方舟中文维基词条。 |
Remediation (AI)
The tool name 'search_prts' collides with the reserved 'search' category namespace, creating ambiguity in tool routing and discovery. Rename the function and its @mcp.tool() decorator from 'search_prts' to 'search_prts_wiki' (or similar non-reserved name) in server.py. This eliminates namespace collision and ensures the tool is uniquely identifiable in the MCP registry. Verify by running `mcp list-tools` or inspecting the server's tool manifest to confirm the new name appears without conflicts.
LLM consensus
Tool search_prts uses module-level global USER_AGENT and makes unauthenticated HTTP requests to PRTS_API_ENDPOINT without consulting caller identity or per-request credentials.
Evidence
| 24 | params = { |
| 25 | "action": "query", |
| 26 | "list": "search", |
| 27 | "srsearch": query, |
| 28 | "srlimit": limit, |
| 29 | "format": "json", |
| 30 | } |
Remediation (AI)
The search_prts tool makes HTTP requests to PRTS_API_ENDPOINT using a module-level global USER_AGENT without per-caller credentials or rate-limit awareness, allowing any caller to exhaust quotas or trigger IP bans. Add a `credentials: Optional[dict]` parameter to the tool signature and pass it through to the httpx client (e.g., via Authorization headers or custom headers dict), and implement per-caller rate-limiting keyed by caller identity. This ensures each caller is rate-limited independently and can supply their own credentials. Test by calling the tool twice in rapid succession and verifying that the second call respects a per-caller rate limit.
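A minimal sketch of the per-caller throttling half of this fix, assuming caller identity is available as a string key (how the server derives that identity, and how credentials are threaded into httpx headers, is not shown in this package, so `PerCallerRateLimiter` and its wiring are illustrative):

```python
import time


class PerCallerRateLimiter:
    """Enforce a minimum interval between upstream requests per caller.

    One caller hammering the PRTS API then cannot exhaust the shared
    quota (or trigger an IP ban) for everyone else on the server.
    """

    def __init__(self, min_interval: float = 1.0) -> None:
        self.min_interval = min_interval
        self._last_call: dict[str, float] = {}

    def allow(self, caller_id: str) -> bool:
        now = time.monotonic()
        last = self._last_call.get(caller_id)
        if last is not None and now - last < self.min_interval:
            return False  # this caller must wait; others are unaffected
        self._last_call[caller_id] = now
        return True
```

The tool handler would check `limiter.allow(caller_id)` before each upstream request and return a "rate limited, retry later" message when it is `False`.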
Function _github_headers reads GITHUB_TOKEN from os.environ once at module level and uses it for all GitHub API calls without per-caller token resolution.
Evidence
| 35 | _GITHUB_RAW_URL = "https://raw.githubusercontent.com/{owner}/{repo}/{branch}/{path}" |
| 36 | _GITHUB_UA = "PRTS-MCP-Bot/0.1 (Arknights fan-creation helper)" |
| 37 | |
| 38 | |
| 39 | def _github_headers() -> dict[str, str]: |
| 40 | headers: dict[str, str] = {"User-Agent": _GITHUB_UA} |
| 41 | token = os.environ.get("GITHUB_TOKEN") |
Remediation (AI)
The _github_headers() function reads GITHUB_TOKEN from os.environ once at module load time, reusing the same token for all GitHub API calls regardless of caller identity, violating multi-tenant isolation. Refactor _github_headers() to accept an optional `token: Optional[str]` parameter (defaulting to os.environ.get("GITHUB_TOKEN")), and update all call sites to pass caller-supplied or request-scoped tokens. This allows each caller to use their own GitHub credentials. Verify by setting different GITHUB_TOKEN values in separate test environments and confirming that each caller's token is used for their respective API calls.
Tool read_page uses module-level global USER_AGENT and makes unauthenticated HTTP requests to PRTS_API_ENDPOINT without consulting caller identity or per-request credentials.
Evidence
| 44 | async def read_page(title: str) -> str: |
| 45 | """Fetch plain-text extract for a PRTS wiki page.""" |
| 46 | await _rate_limit() |
| 47 | params = { |
| 48 | "action": "query", |
| 49 | "titles": title, |
| 50 | "prop": "extracts", |
Remediation (AI)
The read_page tool makes unauthenticated HTTP requests to PRTS_API_ENDPOINT using a module-level global USER_AGENT without consulting caller identity or per-request credentials, allowing quota exhaustion and IP bans. Add a `credentials: Optional[dict]` parameter to read_page() and pass it through to the httpx client headers, and implement per-caller rate-limiting. This ensures each caller is independently rate-limited and can supply credentials. Test by calling read_page() multiple times and verifying that rate-limit headers or per-caller throttling are enforced.
Tool 'search_prts' makes undisclosed NETWORK calls to PRTS wiki API via httpx in the handler (prts_wiki.py:search_prts).
Evidence
| 47 | """ |
| 48 | results = await _search_prts(query, limit) |
| 49 | if not results: |
| 50 | return f"未找到与 '{query}' 相关的词条。" |
| 51 | parts = [] |
| 52 | for r in results: |
| 53 | parts.append(f"**{r['title']}**\n{r['snippet']}") |
Remediation (AI)
The search_prts tool makes undisclosed network calls to the PRTS wiki API via httpx, but the tool definition does not declare this capability in its metadata or documentation. Add a `resources: ["network"]` field (or equivalent) to the @mcp.tool() decorator, and update the tool's docstring to explicitly state "⚠️ This tool makes network requests to the PRTS wiki API." This ensures clients are aware of the network dependency and can enforce policies accordingly. Verify by inspecting the tool's JSON schema in the MCP manifest to confirm the network resource is declared.
Tool 'read_prts_page' makes undisclosed NETWORK calls to PRTS wiki API via httpx in the handler (prts_wiki.py:read_page).
Evidence
| 62 | 返回该词条经过清洗的纯文本,已去除 Wikitext 模板、文件链接和 HTML 标签, |
| 63 | 内容可能较长。强烈建议先调用 search_prts 确认词条的准确标题,避免因 |
| 64 | 拼写错误导致读取失败。 |
| 65 | """ |
| 66 | return await _read_page(page_title) |
Remediation (AI)
The read_prts_page tool makes undisclosed network calls to the PRTS wiki API via httpx, but the tool definition does not declare this capability. Add a `resources: ["network"]` field to the @mcp.tool() decorator and update the docstring to state "⚠️ This tool makes network requests to the PRTS wiki API." This ensures clients are aware of the network dependency. Verify by inspecting the tool's JSON schema in the MCP manifest to confirm the network resource is declared.
Tool read_prts_page fetches untrusted wiki page content from PRTS_API_ENDPOINT and returns it verbatim to the LLM without provenance delimiters, enabling indirect prompt injection via malicious wiki edits.
Evidence
| 44 | async def read_page(title: str) -> str: |
| 45 | """Fetch plain-text extract for a PRTS wiki page.""" |
| 46 | await _rate_limit() |
| 47 | params = { |
| 48 | "action": "query", |
| 49 | "titles": title, |
| 50 | "prop": "extracts", |
Remediation (AI)
The read_page function fetches untrusted wiki content from PRTS_API_ENDPOINT and returns it verbatim to the LLM without provenance markers, enabling indirect prompt injection if the wiki is compromised. Wrap the returned content with explicit provenance delimiters: return f"[PRTS Wiki Source]\n{content}\n[End PRTS Wiki Source]" and add a note in the docstring warning that content is user-editable. This makes the LLM aware that the content is external and untrusted. Test by injecting a prompt-injection payload into a mock wiki response and verifying that the delimiters prevent the LLM from treating it as a system instruction.
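The wrapping step can be sketched as a small helper. The delimiter strings are illustrative, not a standard; what matters is that the markers are applied by trusted server code, never by the wiki content itself:

```python
def wrap_wiki_content(title: str, content: str) -> str:
    """Wrap untrusted wiki text in provenance delimiters (sketch).

    The delimiters tell the LLM that everything between them is
    user-editable external data, not instructions to follow.
    """
    return (
        f"[PRTS Wiki Source: {title} (untrusted, user-editable)]\n"
        f"{content}\n"
        f"[End PRTS Wiki Source]"
    )
```

`read_page` would return `wrap_wiki_content(title, extract)` instead of the raw extract; the same helper covers the per-snippet wrapping in `search_prts`.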
Tool search_prts fetches untrusted wiki search snippets from PRTS_API_ENDPOINT and returns them verbatim to the LLM without provenance delimiters, enabling indirect prompt injection via malicious wiki content.
Evidence
| 25 | "action": "query", |
| 26 | "list": "search", |
| 27 | "srsearch": query, |
| 28 | "srlimit": limit, |
| 29 | "format": "json", |
| 30 | } |
| 31 | async with httpx.AsyncClient(headers={"User-Agent": USER_AGENT}, timeout=15) as client: |
Remediation (AI)
The search_prts function fetches untrusted wiki search snippets from PRTS_API_ENDPOINT and returns them verbatim without provenance delimiters, enabling indirect prompt injection. Wrap each snippet with explicit provenance markers: `f"[PRTS Wiki: {r['title']}]\n{r['snippet']}\n[End PRTS Wiki]"` and add a docstring warning that snippets are user-editable wiki content. This signals to the LLM that the content is external and untrusted. Test by injecting a prompt-injection payload into a mock search result and verifying that the delimiters prevent exploitation.
Full exception detail or stack trace returned to the caller. Leaking tracebacks exposes internal paths, library versions, and query structure — useful recon for attackers.
Evidence
| 196 | try: |
| 197 | zip_path = _require_story_zip(cfg) |
| 198 | except RuntimeError as e: |
| 199 | return str(e) |
| 200 | |
| 201 | try: |
| 202 | chapter = _read_story(zip_path, story_key, include_narration=include_narration) |
Remediation (AI)
The exception handler in server.py catches RuntimeError and returns str(e) directly to the caller, leaking internal error messages, file paths, and library versions useful for reconnaissance. Replace `return str(e)` with `return "Failed to load story data. Please check server logs."` and add logging: `import logging; logger.exception("Story load failed")` to log the full traceback server-side only. This hides implementation details while preserving debuggability. Verify by triggering the exception and confirming that the caller sees a generic message while server logs contain the full traceback.
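A sketch of the log-then-genericize pattern that applies to this and the similar findings below; the `load` callable is a hypothetical stand-in for the real `_require_story_zip`/`_read_story` chain:

```python
import logging

logger = logging.getLogger(__name__)


def load_story_safe(load, story_key: str) -> str:
    """Return story text, or a generic error message on failure.

    The full traceback (internal paths, library versions) goes to the
    server log only; the caller never sees it.
    """
    try:
        return load(story_key)
    except RuntimeError:
        logger.exception("Story load failed for key %r", story_key)
        return "Failed to load story data. Please check server logs."
```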
Full exception detail or stack trace returned to the caller. Leaking tracebacks exposes internal paths, library versions, and query structure — useful recon for attackers.
Evidence
| 168 | try: |
| 169 | char_id = _resolve_char_id(name) |
| 170 | except FileNotFoundError as exc: |
| 171 | return str(exc) |
| 172 | if char_id is None: |
| 173 | return f"未找到干员 '{name}'。请使用游戏内中文名称(如'阿米娅')。" |
Remediation (AI)
The operator.py handler catches FileNotFoundError and returns str(exc) directly, leaking file paths and internal structure. Replace `return str(exc)` with `return f"未找到干员 '{name}'。请使用游戏内中文名称(如'阿米娅')。"` and add `logger.exception("Operator lookup failed")` to log server-side. This hides internal paths while preserving user-facing error messages. Verify by triggering the exception and confirming the caller sees only the generic Chinese message.
Full exception detail or stack trace returned to the caller. Leaking tracebacks exposes internal paths, library versions, and query structure — useful recon for attackers.
Evidence
| 125 | try: |
| 126 | charwords = _load_charword_table().get("charWords", {}) |
| 127 | except FileNotFoundError as exc: |
| 128 | return str(exc) |
| 129 | lines: list[str] = [] |
| 130 | for entry in charwords.values(): |
| 131 | if entry.get("charId") == char_id and entry.get("voiceText"): |
Remediation (AI)
The charword lookup catches FileNotFoundError and returns str(exc), leaking file paths. Replace `return str(exc)` with `return f"干员 '{name}' 暂无语音数据。"` and add `logger.exception("Charword load failed")`. This hides internal structure. Verify by triggering the exception and confirming only the generic message is returned.
Full exception detail or stack trace returned to the caller. Leaking tracebacks exposes internal paths, library versions, and query structure — useful recon for attackers.
Evidence
| 175 | try: |
| 176 | ct = _load_character_table() |
| 177 | except FileNotFoundError as exc: |
| 178 | return str(exc) |
| 179 | info = ct.get(char_id) |
| 180 | if info is None: |
| 181 | return f"干员 '{name}' 暂无基本信息。" |
Remediation (AI)
The character table lookup catches FileNotFoundError and returns str(exc), leaking file paths. Replace `return str(exc)` with `return f"干员 '{name}' 暂无基本信息。"` and add `logger.exception("Character table load failed")`. This hides internal structure. Verify by triggering the exception and confirming only the generic message is returned.
Full exception detail or stack trace returned to the caller. Leaking tracebacks exposes internal paths, library versions, and query structure — useful recon for attackers.
Evidence
| 161 | try: |
| 162 | zip_path = _require_story_zip(cfg) |
| 163 | except RuntimeError as e: |
| 164 | return str(e) |
| 165 | |
| 166 | try: |
| 167 | chapters = _list_stories(zip_path, event_id) |
Remediation (AI)
The list_stories handler catches RuntimeError and returns str(e), leaking internal error details. Replace `return str(e)` with `return "Failed to load story list. Please check server logs."` and add `logger.exception("Story list failed")`. This hides implementation details. Verify by triggering the exception and confirming a generic message is returned.
Full exception detail or stack trace returned to the caller. Leaking tracebacks exposes internal paths, library versions, and query structure — useful recon for attackers.
Evidence
| 238 | try: |
| 239 | zip_path = _require_story_zip(cfg) |
| 240 | except RuntimeError as e: |
| 241 | return str(e) |
| 242 | |
| 243 | try: |
| 244 | result = _read_activity( |
Remediation (AI)
The read_activity handler catches RuntimeError and returns str(e), leaking internal error details. Replace `return str(e)` with `return "Failed to load activity data. Please check server logs."` and add `logger.exception("Activity read failed")`. This hides implementation details. Verify by triggering the exception and confirming a generic message is returned.
Full exception detail or stack trace returned to the caller. Leaking tracebacks exposes internal paths, library versions, and query structure — useful recon for attackers.
Evidence
| 118 | try: |
| 119 | char_id = _resolve_char_id(name) |
| 120 | except FileNotFoundError as exc: |
| 121 | return str(exc) |
| 122 | if char_id is None: |
| 123 | return f"未找到干员 '{name}'。请使用游戏内中文名称(如'阿米娅')。" |
Remediation (AI)
The operator lookup catches FileNotFoundError and returns str(exc), leaking file paths. Replace `return str(exc)` with `return f"未找到干员 '{name}'。请使用游戏内中文名称(如'阿米娅')。"` and add `logger.exception("Operator lookup failed")`. This hides internal structure. Verify by triggering the exception and confirming only the generic message is returned.
Full exception detail or stack trace returned to the caller. Leaking tracebacks exposes internal paths, library versions, and query structure — useful recon for attackers.
Evidence
| 86 | try: |
| 87 | char_id = _resolve_char_id(name) |
| 88 | except FileNotFoundError as exc: |
| 89 | return str(exc) |
| 90 | if char_id is None: |
| 91 | return f"未找到干员 '{name}'。请使用游戏内中文名称(如'阿米娅')。" |
Remediation (AI)
The operator lookup catches FileNotFoundError and returns str(exc), leaking file paths. Replace `return str(exc)` with `return f"未找到干员 '{name}'。请使用游戏内中文名称(如'阿米娅')。"` and add `logger.exception("Operator lookup failed")`. This hides internal structure. Verify by triggering the exception and confirming only the generic message is returned.
Full exception detail or stack trace returned to the caller. Leaking tracebacks exposes internal paths, library versions, and query structure — useful recon for attackers.
Evidence
| 130 | try: |
| 131 | zip_path = _require_story_zip(cfg) |
| 132 | except RuntimeError as e: |
| 133 | return str(e) |
| 134 | |
| 135 | try: |
| 136 | events = _list_story_events(zip_path, category=category) |
Remediation (AI)
The list_story_events handler catches RuntimeError and returns str(e), leaking internal error details. Replace `return str(e)` with `return "Failed to load story events. Please check server logs."` and add `logger.exception("Story events list failed")`. This hides implementation details. Verify by triggering the exception and confirming a generic message is returned.
Full exception detail or stack trace returned to the caller. Leaking tracebacks exposes internal paths, library versions, and query structure — useful recon for attackers.
Evidence
| 93 | try: |
| 94 | handbook = _load_handbook_table().get("handbookDict", {}) |
| 95 | except FileNotFoundError as exc: |
| 96 | return str(exc) |
| 97 | entry = handbook.get(char_id) |
| 98 | if entry is None: |
| 99 | return f"干员 '{name}' 暂无档案数据。" |
Remediation (AI)
The handbook lookup catches FileNotFoundError and returns str(exc), leaking file paths. Replace `return str(exc)` with `return f"干员 '{name}' 暂无档案数据。"` and add `logger.exception("Handbook load failed")`. This hides internal structure. Verify by triggering the exception and confirming only the generic message is returned.
Network / IO / subprocess call without an explicit timeout. A malicious or hung upstream (HTTP host, socket peer, child process) can pin threads, exhaust connection/process pools, and make the MCP server unresponsive. Always pass a bounded timeout. v2 extends v1 with subprocess coverage (R03 from the legacy readiness audit).
Evidence
| 64 | def _get_cascading(url: str, *, timeout: float, **kwargs: object) -> httpx.Response: |
| 65 | """httpx.get() wrapper that cascades through URL candidates on failure. |
| 66 | |
| 67 | - HTTP 4xx from the direct URL propagates immediately (resource missing). |
| 68 | - Network error or HTTP 5xx from any candidate → try the next one. |
Remediation (AI)
The _get_cascading() function does not enforce an explicit timeout on httpx.get() calls, allowing a hung upstream server to block indefinitely and exhaust connection pools. Add `timeout=timeout` parameter to the httpx.get() call: `response = httpx.get(url, timeout=timeout, **kwargs)`. This ensures all HTTP requests respect the caller-supplied timeout. Verify by mocking a slow upstream server and confirming that the request times out after the specified duration.
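The "always pass a bounded timeout" rule can be enforced with a thin guard. `client_get` is a stand-in for `httpx.get` or `client.get` (the body of `_get_cascading` is not shown, so the wiring is an assumption):

```python
def get_with_timeout(client_get, url: str, timeout: float = 15.0, **kwargs):
    """Forward a request, refusing to send it without a bounded timeout.

    A hung upstream then fails fast instead of pinning a worker thread
    and exhausting the connection pool.
    """
    if timeout is None or timeout <= 0:
        raise ValueError("a positive timeout is required for upstream calls")
    return client_get(url, timeout=timeout, **kwargs)
```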
Dockerfile never sets a non-root `USER` directive, so the CMD runs as root by default. Any RCE or library-level vulnerability exploited inside this container gets full privileges (MCP Top-10 R3). Add `USER <non-root>` before CMD / ENTRYPOINT in the final stage — e.g. `USER 1000`, `USER nobody`, or `USER nonroot` on distroless.
Evidence
| 1 | FROM python:3.11-slim |
| 2 | |
| 3 | WORKDIR /app |
| 4 | |
| 5 | COPY python/pyproject.toml . |
| 6 | COPY README.md . |
| 7 | COPY python/src/ src/ |
| 8 | # Bundled game data baked in at build time (pre-fetched by CI via |
| 9 | # python/scripts/fetch_gamedata.py). Serves as a read-only offline fallback |
| 10 | # when /data/gamedata (the volume mount-point) has no data yet. |
| 11 | COPY data/ data/ |
| 12 | |
| 13 | RUN pip install --no-cache-dir . |
| 14 | |
| 15 | # Tell config.py we are running inside Docker so it uses /data/gamedata |
| 16 | # (the volume mount-point) as the auto-sync target instead of th |
Remediation (AI)
The Dockerfile runs the MCP server as root by default, so any RCE or library vulnerability grants full container privileges. Add `USER 1000` (or `USER nobody`) before the CMD directive in the final stage of the Dockerfile. This ensures the container runs with minimal privileges. Verify by building the image, running it, and confirming that `whoami` inside the container returns a non-root user.
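A minimal sketch of the fix, appended to the existing Dockerfile before its CMD/ENTRYPOINT. The uid 1000 and the name `appuser` are assumptions, and `/app` ownership matters only if the process writes there; `useradd` is available in the Debian-based `python:3.11-slim` image:

```dockerfile
# Create an unprivileged user and drop root before the entrypoint runs.
RUN useradd --uid 1000 --create-home appuser \
    && chown -R appuser:appuser /app
USER 1000
# CMD / ENTRYPOINT below now runs as uid 1000 instead of root.
```

Note that build-time `chown` does not affect the `/data` volume mount-point; grant write access there at run time (e.g. `docker run --user 1000` with a pre-chowned volume).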
MCP manifest declares tools but no authentication field is present (none of: auth, authorization, bearer, oauth, mtls, apiKey, api_key, basic, token, authToken). Absence is a weak signal — confirm whether the server relies on network-layer or host-level auth, or declare the real mechanism explicitly so reviewers can audit it.
Evidence
| 1 | Metadata-Version: 2.4 |
| 2 | Name: prts-mcp |
| 3 | Version: 0.4.2 |
| 4 | Summary: MCP Server for Arknights PRTS Wiki & game data |
| 5 | Project-URL: Homepage, https://github.com/3aKHP/prts-mcp |
| 6 | Project-URL: Repository, https://github.com/3aKHP/prts-mcp |
| 7 | Project-URL: Changelog, https://github.com/3aKHP/prts-mcp/blob/main/python/CHANGELOG.md |
| 8 | Project-URL: Bug Tracker, https://github.com/3aKHP/prts-mcp/issues |
| 9 | Author: 3aKHP |
| 10 | License: MIT |
| 11 | Keywords: ai-agent,anthropic,arknights,claude,fanfiction,game-data,mcp,mcp-server,model-contex |
Remediation (AI)
The MCP manifest (PKG-INFO) declares tools but does not include an explicit authentication field, making it unclear whether the server enforces any auth mechanism. Add an `authentication` field to the manifest (e.g., in pyproject.toml or a separate mcp.json): `authentication = "none (network-layer auth via host firewall)"` or `authentication = "bearer_token"` if applicable. This makes the auth model explicit for reviewers. Verify by inspecting the generated manifest and confirming the authentication field is present and accurate.
MCP manifest declares tools but no authentication field is present (none of: auth, authorization, bearer, oauth, mtls, apiKey, api_key, basic, token, authToken). Absence is a weak signal — confirm whether the server relies on network-layer or host-level auth, or declare the real mechanism explicitly so reviewers can audit it.
Evidence
| 1 | # PRTS MCP Server — Docker 部署指南 |
| 2 | |
| 3 | > 服务器启动时会自动从 GitHub Release 同步干员数据压缩包到挂载的 volume(或容器内部),无需手动下载或配置数据文件。镜像内置了构建时预置的 bundled 数据作为离线保底。 |
| 4 | |
| 5 | ## 前置条件 |
| 6 | |
| 7 | - [Docker](https://docs.docker.com/get-docker/) 已安装并正常运行 |
| 8 | - (推荐)如运行环境可能命中 GitHub 匿名限流,可提供 `GITHUB_TOKEN` |
| 9 | |
| 10 | --- |
| 11 | |
| 12 | ## 1. 构建镜像 |
| 13 | |
| 14 | ```bash |
| 15 | cd /path/to/PRTS-MCP |
| 16 | docker build -t prts-mcp . |
| 17 | ``` |
| 18 | |
| 19 | > 本地构建的镜像不含 bundled 数据(游戏数据文件已从 git 历史中排除)。首次运行时 auto-sync 会自动下载,需要网络连接。如需包含 bundled 数据,先运行 `python scripts/fetch_gamedata.py` 再构建。 |
| 20 | |
| 21 | 正式发布的 Docker 镜像由 CI 在构建前预置 `data/gamedata |
Remediation (AI)
The deployment documentation does not declare the authentication model for the MCP server. Add a section to docs/deployment.md: "## Authentication\nThis MCP server does not enforce application-level authentication. Secure it via: (1) network-layer firewall rules, (2) host-level authentication (e.g., SSH key-based access), or (3) a reverse proxy with auth middleware." This makes the auth model explicit. Verify by reviewing the documentation and confirming the authentication section is clear.
MCP manifest declares tools but no authentication field is present (none of: auth, authorization, bearer, oauth, mtls, apiKey, api_key, basic, token, authToken). Absence is a weak signal — confirm whether the server relies on network-layer or host-level auth, or declare the real mechanism explicitly so reviewers can audit it.
Evidence
| 1 | # PRTS MCP Server — Python 实现 |
| 2 | |
| 3 | 明日方舟同人创作辅助 MCP Server,Python 版本。通过 **stdio 传输**接入 MCP 客户端(Claude Desktop、Claude Code、Chatbox 等),支持 Docker 部署。 |
| 4 | |
| 5 | 提供工具集:`search_prts` / `read_prts_page` / `get_operator_archives` / `get_operator_voicelines` / `get_operator_basic_info` / `list_story_events` / `list_stories` / `read_story` / `read_activity` |
| 6 | |
| 7 | --- |
| 8 | |
| 9 | ## 快速开始(Docker) |
| 10 | |
| 11 | ```bash |
| 12 | # 从仓库根目录构建(需先预置数据,详见下方) |
| 13 | docker build -f python/Dockerfile -t prts-mcp . |
| 14 | |
| 15 | # 运行(named volume 持久化游戏数据,推荐) |
| 16 | docker run -i --rm -v prts-mcp- |
Remediation (AI)
The README does not declare the authentication model for the MCP server. Add a section: "## Security\nThis server does not enforce application-level authentication. Secure deployment requires network-layer isolation or a reverse proxy with authentication." This makes the auth model explicit for users. Verify by reviewing the README and confirming the security section is present.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 496 | except Exception: |
| 497 | for tmp in tmp_paths: |
| 498 | try: |
| 499 | tmp.unlink(missing_ok=True) |
| 500 | except OSError: |
| 501 | pass |
| 502 | raise |
Remediation (AI)
The sync.py cleanup code catches OSError and silently discards it with `pass`, hiding cleanup failures from logs and metrics. Replace `except OSError: pass` with `except OSError as e: logger.warning(f"Failed to clean up temp file: {e}")`. This logs cleanup failures for incident response. Verify by triggering a cleanup failure and confirming that a warning appears in the server logs.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 234 | # Clean up any temp files on failure |
| 235 | for tmp, _ in tmp_pairs: |
| 236 | try: |
| 237 | tmp.unlink(missing_ok=True) |
| 238 | except OSError: |
| 239 | pass |
| 240 | raise |
Remediation (AI)
The sync.py cleanup code catches OSError and silently discards it with `pass`, hiding cleanup failures. Replace `except OSError: pass` with `except OSError as e: logger.warning(f"Failed to clean up temp file: {e}")`. This logs cleanup failures. Verify by triggering a cleanup failure and confirming that a warning appears in the server logs.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 389 | ).save(_release_cache_path(spec)) |
| 390 | except Exception: |
| 391 | try: |
| 392 | tmp.unlink(missing_ok=True) |
| 393 | except OSError: |
| 394 | pass |
| 395 | raise |
Remediation (AI)
The sync.py cleanup code catches OSError and silently discards it with `pass`, hiding cleanup failures. Replace `except OSError: pass` with `except OSError as e: logger.warning(f"Failed to clean up temp file: {e}")`. This logs cleanup failures. Verify by triggering a cleanup failure and confirming that a warning appears in the server logs.