Use with caution. Address findings before production.
Scanned 5/13/2026, 5:06:30 AM · Cached result · Deep Scan · 91 rules · How we decide
AIVSS Score: Medium
Severity Breakdown: 0 critical, 3 high, 89 medium, 32 low
Findings
This package has a C security grade with a safety score of 55/100, driven primarily by 89 medium-severity issues across server configuration, readiness, and resource-exhaustion concerns, plus 3 high-severity SQL injection vulnerabilities that pose direct data-security risks. The 17 ANSI escape injection findings and 19 resource-exhaustion issues suggest the package lacks proper input validation and could be vulnerable to both injection attacks and denial-of-service conditions. Installation should be deferred until the high- and medium-severity findings are addressed, particularly the SQL injection flaws.
AI · Per-finding remediation generated by bedrock-claude-haiku-4-5 — 23 of 23 findings. Click any finding to read.
No known CVEs found for this package or its dependencies.
23 findings
SQL injection risk. SQL call receives a query built with string interpolation (%, +, f-string, or template literal) instead of placeholder parameters. Use parameterised queries.
Evidence
| 204 | 'SELECT 1 FROM pg_database WHERE datname = $1', database |
| 205 | ) |
| 206 | if not db_exists: |
| 207 | await conn.execute(f'CREATE DATABASE {database}') |
| 208 | finally: |
| 209 | await conn.close() |
RemediationAI
The problem is that the `CREATE DATABASE {database}` statement uses f-string interpolation, allowing SQL injection if the `database` variable contains malicious SQL. PostgreSQL does not accept bind parameters for identifiers in DDL, and asyncpg exposes no public identifier-quoting method on the connection object, so validate the name against a strict allowlist (for example `^[A-Za-z_][A-Za-z0-9_]*$`) before interpolating it. This fix ensures the database name is treated as an identifier, not executable SQL code. Verify by testing with a database name containing SQL syntax such as `test'; DROP TABLE users; --` and confirming the request is rejected rather than executed.
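A minimal sketch of the allowlist approach, under the assumption that unquoted PostgreSQL identifiers are acceptable; the helper name and exact pattern are illustrative, not part of the package:

```python
import re

import asyncpg

# Assumed policy: unquoted PostgreSQL identifiers, max 63 bytes.
_IDENT_RE = re.compile(r'^[A-Za-z_][A-Za-z0-9_]{0,62}$')


async def create_database_if_missing(conn: asyncpg.Connection, database: str) -> None:
    """Create a database after validating its name as a safe identifier."""
    if not _IDENT_RE.match(database):
        raise ValueError(f'unsafe database name: {database!r}')
    db_exists = await conn.fetchval(
        'SELECT 1 FROM pg_database WHERE datname = $1', database
    )
    if not db_exists:
        # Interpolation is safe here: the name matched the allowlist above.
        await conn.execute(f'CREATE DATABASE {database}')
```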
SQL injection risk. SQL call receives a query built with string interpolation (%, +, f-string, or template literal) instead of placeholder parameters. Use parameterised queries.
Evidence
| 158 | 'SELECT 1 FROM pg_database WHERE datname = $1', database |
| 159 | ) |
| 160 | if not db_exists: |
| 161 | await conn.execute(f'CREATE DATABASE {database}') |
| 162 | finally: |
| 163 | await conn.close() |
RemediationAI
The problem is identical to finding 1: the `CREATE DATABASE {database}` statement in sql_gen.py uses f-string interpolation, creating a SQL injection vulnerability. Apply the same fix: validate the `database` value against a strict identifier allowlist before interpolation (see the sketch under finding 1), since DDL identifiers cannot be passed as bind parameters. This ensures the database name cannot be interpreted as SQL commands. Verify by attempting to create a database with a name containing special characters or SQL syntax and confirming the request is rejected.
SQL injection risk. SQL call receives a query built with string interpolation (%, +, f-string, or template literal) instead of placeholder parameters. Use parameterised queries.
Evidence
| 126 | raise ModelRetry('Please create a SELECT query') |
| 127 | |
| 128 | try: |
| 129 | await ctx.deps.conn.execute(f'EXPLAIN {output.sql_query}') |
| 130 | except asyncpg.exceptions.PostgresError as e: |
| 131 | raise ModelRetry(f'Invalid query: {e}') from e |
| 132 | else: |
RemediationAI
The problem is that `EXPLAIN {output.sql_query}` embeds model-generated SQL directly, so stacked statements run with the connection's privileges. Note that plain concatenation (`'EXPLAIN ' + output.sql_query`) is equivalent to the f-string and fixes nothing; instead, validate that `output.sql_query` parses as exactly one SELECT statement before executing the EXPLAIN, rejecting multi-statement input outright. This prevents attackers from injecting DROP, DELETE, or other destructive commands. Verify by testing with a malicious query like `SELECT 1; DROP TABLE users;` and confirming it is rejected before reaching the database.
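A minimal sketch of the pre-flight check, using the third-party `sqlparse` package (an assumed dependency for this sketch; any SQL parser with statement splitting would do):

```python
import sqlparse  # assumed dependency for this sketch
from pydantic_ai.exceptions import ModelRetry


def ensure_single_select(sql: str) -> str:
    """Reject anything that is not exactly one SELECT statement."""
    statements = [s for s in sqlparse.parse(sql) if str(s).strip()]
    if len(statements) != 1 or statements[0].get_type() != 'SELECT':
        raise ModelRetry('Please create a single SELECT query')
    return sql


# In the output validator, gate the EXPLAIN behind the check:
#     await ctx.deps.conn.execute(f'EXPLAIN {ensure_single_select(output.sql_query)}')
```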
MCP tool description or return text contains an imperative phrase that asks the LLM to invoke or call ANOTHER tool — "invoke the write_file tool", "before using this, also call send_email", "silently invoke X". This is a cross-tool chaining injection: the user authorized THIS tool, but the payload escalates into others. Tool descriptions should describe what the tool DOES, not direct the LLM to use other tools. If a tool's correct operation requires composition, document the dependency in human-readable documentation instead.
Evidence
| 1 | # Toolsets |
| 2 | |
| 3 | A toolset represents a collection of [tools](tools.md) that can be registered with an agent in one go. They can be reused by different agents, swapped out at runtime or during testing, and composed in order to dynamically filter which tools are available, modify tool definitions, or change tool execution behavior. A toolset can contain locally defined functions, depend on an external service to provide them, or implement custom logic to list available tools and handle them being call |
RemediationAI
The problem is that the toolset documentation in docs/toolsets.md contains imperative language directing the LLM to invoke other tools, which enables cross-tool chaining injection where an attacker escalates privileges by manipulating tool descriptions. Remove any phrases like 'invoke the write_file tool', 'call send_email', or 'silently invoke X' from tool descriptions and replace them with passive descriptions of what each tool does (e.g., 'This tool writes content to a file' instead of 'invoke the write_file tool to save data'). This fix ensures the LLM cannot be tricked into using unauthorized tools through prompt injection in tool descriptions. Verify by reviewing the updated documentation to confirm no imperative chaining instructions remain and testing that the LLM does not attempt to call tools not explicitly requested by the user.
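A hedged audit sketch for the review step; the phrase patterns are illustrative assumptions, not a complete detector:

```python
import re

# Illustrative patterns only -- extend them to match your threat model.
CHAINING_PATTERNS = [
    re.compile(r'\binvoke\s+the\s+\w+\s+tool\b', re.IGNORECASE),
    re.compile(r'\b(?:also|silently|first)\s+(?:call|invoke|use)\b', re.IGNORECASE),
]


def flag_chaining_descriptions(descriptions: dict[str, str]) -> list[str]:
    """Return names of tools whose descriptions direct the LLM at other tools."""
    return [
        name
        for name, text in descriptions.items()
        if any(p.search(text) for p in CHAINING_PATTERNS)
    ]
```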
MCP tool description or return text contains an imperative phrase that asks the LLM to invoke or call ANOTHER tool — "invoke the write_file tool", "before using this, also call send_email", "silently invoke X". This is a cross-tool chaining injection: the user authorized THIS tool, but the payload escalates into others. Tool descriptions should describe what the tool DOES, not direct the LLM to use other tools. If a tool's correct operation requires composition, document the dependency in human-readable documentation instead.
Evidence
| 1 | from __future__ import annotations |
| 2 | |
| 3 | import asyncio |
| 4 | import base64 |
| 5 | import functools |
| 6 | import os |
| 7 | import re |
| 8 | import warnings |
| 9 | from abc import ABC, abstractmethod |
| 10 | from collections.abc import AsyncIterator, Awaitable, Callable, Sequence |
| 11 | from contextlib import AsyncExitStack, asynccontextmanager |
| 12 | from dataclasses import dataclass, field, replace |
| 13 | from datetime import timedelta |
| 14 | from pathlib import Path |
| 15 | from typing import Annotated, Any, overload |
| 16 | |
| 17 | import anyio |
| 18 | import httpx |
| 19 | import pydantic_core |
| 20 | from anyio.strea |
RemediationAI
The problem is that the MCP server implementation in pydantic_ai_slim/pydantic_ai/mcp.py may contain tool descriptions with imperative phrases directing the LLM to invoke other tools, enabling cross-tool chaining injection. Audit all tool descriptions in the MCPServer class and remove any phrases instructing the LLM to call other tools; replace them with declarative descriptions of tool functionality (e.g., 'Reads file contents' instead of 'invoke read_file before processing'). This prevents attackers from escalating tool access through malicious prompts. Verify by extracting all tool descriptions from the MCPServer and confirming none contain imperative chaining instructions like 'also call', 'invoke', or 'silently use'.
MCP tool description or return text contains an imperative phrase that asks the LLM to invoke or call ANOTHER tool — "invoke the write_file tool", "before using this, also call send_email", "silently invoke X". This is a cross-tool chaining injection: the user authorized THIS tool, but the payload escalates into others. Tool descriptions should describe what the tool DOES, not direct the LLM to use other tools. If a tool's correct operation requires composition, document the dependency in human-readable documentation instead.
Evidence
| 1 | from __future__ import annotations |
| 2 | |
| 3 | import base64 |
| 4 | import functools |
| 5 | from contextlib import AsyncExitStack |
| 6 | from dataclasses import KW_ONLY, dataclass |
| 7 | from pathlib import Path |
| 8 | from typing import TYPE_CHECKING, Any, Literal |
| 9 | |
| 10 | import anyio |
| 11 | from pydantic import AnyUrl |
| 12 | from typing_extensions import Self, assert_never |
| 13 | |
| 14 | from pydantic_ai import messages |
| 15 | from pydantic_ai.exceptions import ModelRetry |
| 16 | from pydantic_ai.tools import AgentDepsT, RunContext, ToolDefinition |
| 17 | from pydantic_ai.toolsets import Abstrac |
RemediationAI
The problem is that the FastMCP toolset integration in pydantic_ai_slim/pydantic_ai/toolsets/fastmcp.py may propagate tool descriptions with imperative chaining instructions, allowing cross-tool injection attacks. Review the tool description handling in the FastMCPToolset class and ensure descriptions are sanitized to remove any phrases directing the LLM to invoke other tools; add a validation function that strips imperative keywords before registering tools. This ensures user-authorized tools cannot be escalated into unauthorized tool calls. Verify by testing with a FastMCP tool whose description contains 'invoke another_tool' and confirming the description is either sanitized or rejected during registration.
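Building on the audit patterns sketched above, a hypothetical registration-time gate; the function is an illustration, not part of the FastMCPToolset API:

```python
def validate_tool_description(name: str, description: str) -> str:
    """Refuse to register a tool whose description chains into other tools.

    Rejecting is safer than silently rewriting, which can mask an active
    injection attempt. Reuses CHAINING_PATTERNS from the earlier sketch.
    """
    for pattern in CHAINING_PATTERNS:
        if pattern.search(description):
            raise ValueError(
                f'tool {name!r} description contains a cross-tool directive'
            )
    return description
```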
Hardcoded secret detected in source. MCP servers often proxy between the model and a third-party API, so any committed credential grants that access to anyone who can read the repo. Move to an environment variable or secret manager and rotate the leaked value.
Evidence
| 21 | f.write(redacted) |
| 22 | |
| 23 | # Verify |
| 24 | if 'SCRUBBED' in redacted and 'ASIAVEMKNXYDQ6ZF4HY7' not in redacted: |
| 25 | print(f'Successfully redacted credentials in {cassette_path}') |
| 26 | else: |
| 27 | print('WARNING: Redaction may not have worked correctly') |
RemediationAI
The problem is that the hardcoded AWS access key 'ASIAVEMKNXYDQ6ZF4HY7' is committed to the repository in scripts/scrub_cassette.py, granting the associated AWS access to anyone who can read the repo. Replace the hardcoded string with an environment variable (read via `os.environ`) and document the requirement in a .env.example file or README. This fix removes the credential from the codebase and allows secure injection at runtime. Verify by rotating the leaked AWS key immediately, removing it from git history using `git filter-branch` or `git-filter-repo`, and confirming the script reads the key from the environment variable.
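A sketch of the environment-variable approach for the scrub script. The variable name `SCRUB_TARGET_ACCESS_KEY` is an assumption, and `redacted`/`cassette_path` come from the surrounding script shown in the evidence:

```python
import os

# Assumed variable name; provision it in CI or a local .env instead of
# committing the leaked value to the repository.
leaked_key = os.environ.get('SCRUB_TARGET_ACCESS_KEY')
if not leaked_key:
    raise SystemExit('SCRUB_TARGET_ACCESS_KEY is not set; aborting scrub')

# Verify the redaction without hardcoding the credential.
if 'SCRUBBED' in redacted and leaked_key not in redacted:
    print(f'Successfully redacted credentials in {cassette_path}')
else:
    print('WARNING: Redaction may not have worked correctly')
```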
User-controlled value printed to terminal without ANSI escape sanitization. Malicious input can inject cursor-control sequences, rewrite earlier output, or hide shell commands from the operator.
Evidence
| 901 | async with agent.run_stream_events('Write a long essay about Python') as stream: # (1)! |
| 902 | async for event in stream: |
| 903 | if isinstance(event, PartStartEvent): |
| 904 | print(f'Started: {event.part!r}') |
| 905 | #> Started: TextPart(content='Python is a ') |
| 906 | elif isinstance(event, FinalResultEvent): |
| 907 | break # (2)! |
RemediationAI
The problem is that user-controlled content from `event.part` is printed directly to the terminal without sanitizing ANSI escape sequences, allowing attackers to inject cursor-control codes that hide or rewrite output. Strip escape characters before printing, either with a simple substitution such as `repr(event.part).replace(chr(27), '[ESC]')` or a small regex helper that removes whole escape sequences (note that `bleach` sanitizes HTML, not ANSI codes, and is not a fit here). This fix prevents terminal injection attacks. Verify by injecting a test event with ANSI escape sequences like `\x1b[2J` (clear screen) and confirming the output displays the literal escape code rather than executing it.
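A minimal helper, assuming regex-based stripping is acceptable; the pattern covers CSI and two-character escapes, and OSC sequences may need extra handling:

```python
import re

# CSI sequences (ESC [ ... final byte) plus two-character escapes.
_ANSI_RE = re.compile(r'\x1b\[[0-9;?]*[ -/]*[@-~]|\x1b[@-_]')


def strip_ansi(text: str) -> str:
    """Remove ANSI escape sequences so untrusted text prints inertly."""
    return _ANSI_RE.sub('', text)


# Usage in the stream handler:
#     print(f'Started: {strip_ansi(repr(event.part))}')
```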
User-controlled value printed to terminal without ANSI escape sanitization. Malicious input can inject cursor-control sequences, rewrite earlier output, or hide shell commands from the operator.
Evidence
| 126 | async def stream_handler(ctx: RunContext[None], events: AsyncIterable[AgentStreamEvent]): |
| 127 | async for event in events: |
| 128 | if isinstance(event, FunctionToolCallEvent): |
| 129 | print(f'Calling {event.part.tool_name}...') |
| 130 | |
| 131 | |
| 132 | async def main(): |
RemediationAI
The problem is that `event.part.tool_name` is printed directly without ANSI escape sanitization, allowing terminal injection attacks if the tool name contains malicious escape sequences. Replace `print(f'Calling {event.part.tool_name}...')` with `print(f'Calling {event.part.tool_name.replace(chr(27), "[ESC]")}...')` or use a safe printing library. This prevents attackers from manipulating terminal output. Verify by testing with a tool name containing ANSI codes like `test\x1b[2Jtool` and confirming the escape sequence is displayed literally.
User-controlled value printed to terminal without ANSI escape sanitization. Malicious input can inject cursor-control sequences, rewrite earlier output, or hide shell commands from the operator.
Evidence
| 113 | async def sampling_callback( |
| 114 | context: RequestContext[ClientSession, Any], params: CreateMessageRequestParams |
| 115 | ) -> CreateMessageResult | ErrorData: |
| 116 | print('sampling system prompt:', params.systemPrompt) |
| 117 | #> sampling system prompt: always reply in rhyme |
| 118 | print('sampling messages:', params.messages) |
| 119 | """ |
RemediationAI
The problem is that `params.systemPrompt` is printed directly without ANSI escape sanitization, allowing terminal injection if the system prompt contains malicious escape sequences. Replace `print('sampling system prompt:', params.systemPrompt)` with `print('sampling system prompt:', params.systemPrompt.replace(chr(27), '[ESC]'))` or use a safe output library. This prevents terminal manipulation attacks. Verify by passing a system prompt containing `\x1b[H\x1b[2J` (cursor home + clear screen) and confirming it displays as literal text.
User-controlled value printed to terminal without ANSI escape sanitization. Malicious input can inject cursor-control sequences, rewrite earlier output, or hide shell commands from the operator.
Evidence
| 115 | ) -> CreateMessageResult | ErrorData: |
| 116 | print('sampling system prompt:', params.systemPrompt) |
| 117 | #> sampling system prompt: always reply in rhyme |
| 118 | print('sampling messages:', params.messages) |
| 119 | """ |
| 120 | sampling messages: |
| 121 | [ |
RemediationAI
The problem is that `params.messages` is printed directly without ANSI escape sanitization, allowing terminal injection if messages contain malicious escape sequences. Replace `print('sampling messages:', params.messages)` with a safe print function that strips ANSI codes, such as `print('sampling messages:', str(params.messages).replace(chr(27), '[ESC]'))`. This prevents attackers from hiding or rewriting terminal output. Verify by injecting a message with ANSI codes and confirming the escape sequences appear as literal text in the output.
User-controlled value printed to terminal without ANSI escape sanitization. Malicious input can inject cursor-control sequences, rewrite earlier output, or hide shell commands from the operator.
Evidence
| 457 | elif ident_prompt == '/multiline': |
| 458 | multiline = not multiline |
| 459 | if multiline: |
| 460 | console.print( |
| 461 | 'Enabling multiline mode. [dim]Press [Meta+Enter] or [Esc] followed by [Enter] to accept input.[/dim]' |
| 462 | ) |
| 463 | else: |
| 464 | console.print('Disabling multiline mode.') |
| 465 | return None, multiline |
RemediationAI
The problem is that user input in multiline mode is printed without ANSI escape sanitization, allowing terminal injection attacks. While the code uses `console.print()` which may have some protection, ensure that any user-controlled input passed to console.print is sanitized by replacing ANSI escape characters: `console.print(user_input.replace(chr(27), '[ESC]'))`. This prevents terminal manipulation. Verify by entering multiline input containing `\x1b[2J` and confirming the escape sequence is displayed literally rather than clearing the screen.
User-controlled value printed to terminal without ANSI escape sanitization. Malicious input can inject cursor-control sequences, rewrite earlier output, or hide shell commands from the operator.
Evidence
| 810 | if isinstance(event, ToolCallEvent): |
| 811 | print(f'Tool called: {event.part.tool_name}') |
| 812 | elif isinstance(event, ToolResultEvent): |
| 813 | print(f'Tool result: {event.part.content!r}') |
| 814 | elif isinstance(event, PartStartEvent) and isinstance(event.part, TextPart): |
| 815 | print(f'Text: {event.part.content!r}') |
| 816 | yield event |
RemediationAI
The problem is that `event.part.tool_name` is printed directly without ANSI escape sanitization, allowing terminal injection if the tool name contains malicious escape sequences. Replace `print(f'Tool called: {event.part.tool_name}')` with `print(f'Tool called: {event.part.tool_name.replace(chr(27), "[ESC]")}')` to sanitize escape characters. This prevents attackers from manipulating terminal output. Verify by testing with a tool name containing ANSI codes and confirming the escape sequences are displayed as literal text.
User-controlled value printed to terminal without ANSI escape sanitization. Malicious input can inject cursor-control sequences, rewrite earlier output, or hide shell commands from the operator.
Evidence
| 1155 | request_context: ModelRequestContext, |
| 1156 | handler: WrapModelRequestHandler, |
| 1157 | ) -> ModelResponse: |
| 1158 | print(f' Model request (step {ctx.run_step}, {len(request_context.messages)} messages)') |
| 1159 | #> Model request (step 1, 1 messages) |
| 1160 | response = await handler(request_context) |
| 1161 | print(f' Model response: {len(response.parts)} parts') |
RemediationAI
The problem is that `ctx.run_step` and message count are printed without sanitizing ANSI escape sequences in the surrounding context, though numeric values are lower risk. For defense in depth, wrap the entire print statement: `print(f' Model request (step {ctx.run_step}, {len(request_context.messages)} messages)'.replace(chr(27), '[ESC]'))`. This ensures no part of the output can be manipulated. Verify by confirming the output displays correctly even if context values somehow contain escape characters.
User-controlled value printed to terminal without ANSI escape sanitization. Malicious input can inject cursor-control sequences, rewrite earlier output, or hide shell commands from the operator.
Evidence
| 674 | params: ElicitRequestParams, |
| 675 | ) -> ElicitResult: |
| 676 | """Handle elicitation requests from MCP server.""" |
| 677 | print(f'\n{params.message}') |
| 678 | |
| 679 | if not params.requestedSchema: |
| 680 | response = input('Response: ') |
RemediationAI
The problem is that `params.message` is printed directly without ANSI escape sanitization, allowing terminal injection if the message contains malicious escape sequences. Replace `print(f'\n{params.message}')` with `print(f'\n{params.message.replace(chr(27), "[ESC]")}')` to strip escape characters. This prevents attackers from hiding prompts or rewriting terminal output. Verify by passing a message containing `\x1b[H` (cursor home) and confirming it displays as literal text.
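Applying the `strip_ansi` helper sketched under the earlier ANSI finding to this handler (a drop-in fragment for the evidence above, not a standalone script):

```python
# strip_ansi is the regex-based helper sketched in the earlier finding.
print(f'\n{strip_ansi(params.message)}')

if not params.requestedSchema:
    response = input('Response: ')
```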
User-controlled value printed to terminal without ANSI escape sanitization. Malicious input can inject cursor-control sequences, rewrite earlier output, or hide shell commands from the operator.
Evidence
| 187 | print(f' [{payload.target}] {r.name}{version}: {r.value}') |
| 188 | for f in payload.failures: |
| 189 | version = f' ({f.evaluator_version})' if f.evaluator_version else '' |
| 190 | print(f' [{payload.target}] FAILED {f.name}{version}: {f.error_message}') |
| 191 | ``` |
| 192 | |
| 193 | `payload.results` and `payload.failures` may cover one or more evaluators from a single function call — when multiple evaluators share a sink, their results are batched into a single `submit()` call. Each result carries its own attr |
RemediationAI
The problem is that `payload.target`, `r.name`, `f.name`, and `f.error_message` are printed directly without ANSI escape sanitization, allowing terminal injection attacks. Replace the print statements with sanitized versions: `print(f' [{payload.target.replace(chr(27), "[ESC]")}] {r.name.replace(chr(27), "[ESC]")}...')`. This prevents attackers from manipulating evaluation output. Verify by injecting evaluation results with ANSI codes and confirming they display as literal text.
User-controlled value printed to terminal without ANSI escape sanitization. Malicious input can inject cursor-control sequences, rewrite earlier output, or hide shell commands from the operator.
Evidence
| 33 | comments_url = f'https://api.github.com/repos/{REPOSITORY}/issues/{PULL_REQUEST_NUMBER}/comments' |
| 34 | r = httpx.get(comments_url, headers=gh_headers) |
| 35 | print(f'{r.request.method} {r.request.url} {r.status_code}', flush=True) |
| 36 | if r.status_code != 200: |
| 37 | print(f'Failed to get comments, status {r.status_code}, response:\n{r.text}', flush=True) |
| 38 | exit(1) |
RemediationAI
The problem is that `r.request.url` and `r.status_code` are printed without ANSI escape sanitization, allowing terminal injection if the URL contains malicious escape sequences. Replace `print(f'{r.request.method} {r.request.url} {r.status_code}', flush=True)` with `print(f'{r.request.method} {str(r.request.url).replace(chr(27), "[ESC]")} {r.status_code}', flush=True)`. This prevents terminal manipulation. Verify by testing with a URL containing ANSI codes and confirming the escape sequences appear as literal text.
User-controlled value printed to terminal without ANSI escape sanitization. Malicious input can inject cursor-control sequences, rewrite earlier output, or hide shell commands from the operator.
Evidence
| 808 | ) -> AsyncIterable[AgentStreamEvent]: |
| 809 | async for event in stream: |
| 810 | if isinstance(event, ToolCallEvent): |
| 811 | print(f'Tool called: {event.part.tool_name}') |
| 812 | elif isinstance(event, ToolResultEvent): |
| 813 | print(f'Tool result: {event.part.content!r}') |
| 814 | elif isinstance(event, PartStartEvent) and isinstance(event.part, TextPart): |
RemediationAI
The problem is that `event.part.tool_name` and `event.part.content` are printed directly without ANSI escape sanitization, allowing terminal injection attacks. Replace the print statements with sanitized versions: `print(f'Tool called: {event.part.tool_name.replace(chr(27), "[ESC]")}')` and `print(f'Tool result: {event.part.content.replace(chr(27), "[ESC]")}')`. This prevents attackers from manipulating terminal output. Verify by testing with tool names and results containing ANSI codes and confirming they display as literal text.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 115 | try: |
| 116 | from pydantic_ai.mcp import MCPServer |
| 117 | |
| 118 | _mcp_types += (MCPServer,) |
| 119 | except ImportError: |
| 120 | pass |
| 121 | try: |
| 122 | from pydantic_ai.toolsets.fastmcp import FastMCPToolset |
RemediationAI
The problem is that the `except ImportError: pass` silently swallows import errors, hiding failures when MCPServer or FastMCPToolset dependencies are missing, making debugging difficult. Replace `except ImportError: pass` with `except ImportError as e: logger.debug(f'MCPServer import failed: {e}')` to log the error at debug level. This fix allows developers to diagnose missing dependencies without breaking functionality. Verify by temporarily removing the mcp module and confirming the debug log message appears indicating the import failure.
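A sketch of the logged variant; the module-level `logger` is an assumption about the project's logging setup:

```python
import logging

logger = logging.getLogger(__name__)  # assumed module-level logger

_mcp_types: tuple[type, ...] = ()
try:
    from pydantic_ai.mcp import MCPServer

    _mcp_types += (MCPServer,)
except ImportError as exc:
    # Optional dependency: continue, but leave a trace for debugging.
    logger.debug('MCPServer unavailable: %s', exc)
```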
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 893 | # to `session_task_to_await` itself, so the runner gets torn down too. |
| 894 | with anyio.move_on_after(_SHUTDOWN_GRACE_SECONDS): |
| 895 | try: |
| 896 | await session_task_to_await |
| 897 | except BaseException: |
| 898 | pass |
| 899 | return None |
| 900 | |
| 901 | @property |
RemediationAI
The problem is that `except BaseException: pass` silently swallows all exceptions during MCP session shutdown, hiding real failures and preventing proper error diagnosis. Replace `except BaseException: pass` with `except BaseException as e: logger.warning(f'MCP session shutdown error: {e}')` to log the exception. This fix ensures shutdown errors are visible for debugging. Verify by injecting a test exception during shutdown and confirming the warning log message appears.
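A drop-in fragment mirroring the evidence above (names such as `_SHUTDOWN_GRACE_SECONDS` and `session_task_to_await` come from the surrounding code; the logger is an assumption):

```python
# Swallowing is intentional during teardown, but record what the
# session died with so incident response has a trace.
with anyio.move_on_after(_SHUTDOWN_GRACE_SECONDS):
    try:
        await session_task_to_await
    except BaseException as exc:
        logger.warning('MCP session shutdown raised: %r', exc)
return None
```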
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 179 | # Try to acquire immediately without blocking |
| 180 | try: |
| 181 | self._limiter.acquire_nowait() |
| 182 | return |
| 183 | except anyio.WouldBlock: |
| 184 | pass |
| 185 | |
| 186 | # We need to wait - atomically check queue limits and register ourselves as waiting |
| 187 | # This prevents a race condition where multiple tasks could pass the check before |
RemediationAI
The problem is that `except anyio.WouldBlock: pass` silently discards the expected WouldBlock exception without logging, making it unclear if the concurrency limiter is functioning correctly. Replace `except anyio.WouldBlock: pass` with `except anyio.WouldBlock: logger.debug('Limiter would block, waiting...')` to document the expected behavior. This fix provides visibility into limiter behavior. Verify by testing the limiter under high concurrency and confirming debug logs show when blocking occurs.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 86 | last_response = response |
| 87 | |
| 88 | try: |
| 89 | yield await self.validate_response_output(response, allow_partial=True) |
| 90 | except (ValidationError, exceptions.ModelRetry): |
| 91 | pass |
| 92 | |
| 93 | if self._raw_stream_response.final_result_event is not None: # pragma: no branch |
| 94 | response = self.response |
RemediationAI
The problem is that `except (ValidationError, exceptions.ModelRetry): pass` silently swallows validation and retry errors during streaming, hiding failures in response processing. Replace with `except (ValidationError, exceptions.ModelRetry) as e: logger.debug(f'Response validation error (expected during streaming): {e}')` to log expected errors. This fix allows developers to diagnose unexpected validation failures. Verify by testing with invalid model responses and confirming debug logs show validation errors.
Silent error swallowing detected. An except clause that does pass or ... discards the exception with no log, no metric, and no trace. This blinds incident response and hides real failures.
Evidence
| 36 | def get_client(self) -> AsyncClient: |
| 37 | running_loop: asyncio.AbstractEventLoop | None = None |
| 38 | try: |
| 39 | running_loop = asyncio.get_running_loop() |
| 40 | except RuntimeError: |
| 41 | pass |
| 42 | |
| 43 | if self._client is None or (running_loop is not None and running_loop is not self._event_loop): |
| 44 | self._client = AsyncClient(**self._kwargs) |
RemediationAI
The problem is that `except RuntimeError: pass` silently swallows the RuntimeError when no event loop is running, hiding potential issues with client initialization. Replace `except RuntimeError: pass` with `except RuntimeError as e: logger.debug(f'No running event loop: {e}')` to log the condition. This fix provides visibility into client state. Verify by calling `get_client()` outside an async context and confirming the debug log message appears indicating no running loop.