aisentry Report

Generated: 2026-01-12 12:36:55 UTC

12
Combined Security Score
5
Vulnerability Score
19
Security Posture
1727
Files Scanned
224
Issues Found
67%
Confidence
9.9s
Scan Time

Vulnerabilities (224)

Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/partners/openai/langchain_openai/embeddings/base.py:429
Function '_tokenize' on line 429 has 4 DoS risk(s): LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _tokenize( self, texts: list[str], chunk_size: int ) -> tuple[Iterable[int], list[list[int] | str], list[int], list[int]]: """Tokenize and batch input texts. Splits texts based on `embedding_ctx_length` and groups them into batches of size `chunk_size`. Args: texts: The list of texts to tokenize. chunk_size: The maximum number of texts to include in a single batch.
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/partners/openai/langchain_openai/chat_models/base.py:1338
Function '_generate' on line 1338 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _generate( self, messages: list[BaseMessage], stop: list[str] | None = None, run_manager: CallbackManagerForLLMRun | None = None, **kwargs: Any, ) -> ChatResult: self._ensure_sync_client_available() payload = self._get_request_payload(messages, stop=stop, **kwargs) generation_info = None raw_response = None
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Critical decision without oversight in '_construct_responses_api_payload'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/partners/openai/langchain_openai/chat_models/base.py:3754
Function '_construct_responses_api_payload' on line 3754 makes critical data_modification decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
def _construct_responses_api_payload( messages: Sequence[BaseMessage], payload: dict ) -> dict: # Rename legacy parameters
Remediation
Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
Critical decision without oversight in 'get_num_tokens_from_messages'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/partners/openai/langchain_openai/chat_models/base.py:1724
Function 'get_num_tokens_from_messages' on line 1724 makes critical security decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
return encoding_model.encode(text) def get_num_tokens_from_messages( self, messages: Sequence[BaseMessage], tools: Sequence[dict[str, Any] | type | Callable | BaseTool] | None = None,
Remediation
Critical security decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/partners/ollama/langchain_ollama/chat_models.py:1605
LLM output variable 'llm' flows to 'RunnableMap' on line 1605 via direct flow. This creates a command_injection vulnerability.
) return RunnableMap(raw=llm) | parser_with_fallback return llm | output_parser
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/partners/ollama/langchain_ollama/chat_models.py:945
Function '_create_chat_stream' on line 945 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _create_chat_stream( self, messages: list[BaseMessage], stop: list[str] | None = None, **kwargs: Any, ) -> Iterator[Mapping[str, Any] | str]: chat_params = self._chat_params(messages, stop, **kwargs) if chat_params["stream"]: if self._client: yield from self._client.chat(**chat_params)
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/partners/huggingface/langchain_huggingface/chat_models/huggingface.py:1218
LLM output variable 'llm' flows to 'RunnableMap' on line 1218 via direct flow. This creates a command_injection vulnerability.
) return RunnableMap(raw=llm) | parser_with_fallback return llm | output_parser
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/partners/huggingface/langchain_huggingface/chat_models/huggingface.py:723
Function '_generate' on line 723 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _generate( self, messages: list[BaseMessage], stop: list[str] | None = None, run_manager: CallbackManagerForLLMRun | None = None, stream: bool | None = None, # noqa: FBT001 **kwargs: Any, ) -> ChatResult: should_stream = stream if stream is not None else self.streaming if _is_huggingface_textgen_inference(self.llm):
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/partners/anthropic/langchain_anthropic/chat_models.py:1701
LLM output variable 'llm' flows to 'RunnableMap' on line 1701 via direct flow. This creates a command_injection vulnerability.
) return RunnableMap(raw=llm) | parser_with_fallback return llm | output_parser
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
User input 'prompt' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/partners/anthropic/langchain_anthropic/llms.py:291
User input parameter 'prompt' is directly passed to LLM API call 'self.client.messages.create'. This is a high-confidence prompt injection vector.
response = self.client.messages.create( messages=self._format_messages(prompt),
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
Critical decision without oversight in '_call'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/partners/anthropic/langchain_anthropic/llms.py:249
Function '_call' on line 249 makes critical data_modification decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
return messages def _call( self, prompt: str, stop: list[str] | None = None,
Remediation
Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/partners/perplexity/langchain_perplexity/chat_models.py:589
Function '_generate' on line 589 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _generate( self, messages: list[BaseMessage], stop: list[str] | None = None, run_manager: CallbackManagerForLLMRun | None = None, **kwargs: Any, ) -> ChatResult: if self.streaming: stream_iter = self._stream( messages, stop=stop, run_manager=run_manager, **kwargs )
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Critical decision without oversight in 'bind_tools'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/partners/deepseek/langchain_deepseek/chat_models.py:395
Function 'bind_tools' on line 395 makes critical data_modification decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
) from e def bind_tools( self, tools: Sequence[dict[str, Any] | type | Callable | BaseTool], *,
Remediation
Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/partners/mistralai/langchain_mistralai/chat_models.py:1156
LLM output variable 'llm' flows to 'RunnableMap' on line 1156 via direct flow. This creates a command_injection vulnerability.
) return RunnableMap(raw=llm) | parser_with_fallback return llm | output_parser
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/cli/langchain_cli/namespaces/integration.py:135
LLM output from 'subprocess.run' is used in 'run(' on line 135 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
env.pop("VIRTUAL_ENV", None) subprocess.run( ["uv", "sync", "--dev", "--no-progress"], # noqa: S607 cwd=destination_dir,
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
Direct execution of LLM-generated code in 'new'
LLM08: Excessive Agency CRITICAL
/private/tmp/langchain-test/libs/cli/langchain_cli/namespaces/integration.py:60
Function 'new' on line 60 directly executes code generated or influenced by an LLM using exec()/eval() or subprocess. This creates a critical security risk where malicious or buggy LLM outputs can execute arbitrary code, potentially compromising the entire system.
return replacements @integration_cli.command() def new( name: Annotated[ str, typer.Option( help="The name of the integration to create (e.g. `my-integration`)", prompt="The name of the integration to create (e.g. `my-integration`)",
Remediation
Code Execution Security: 1. NEVER execute LLM-generated code directly with exec()/eval() 2. If code execution is necessary, use sandboxed environments (Docker, VM) 3. Implement strict code validation and static analysis before execution 4. Use allowlists for permitted functions/modules 5. Set resource limits (CPU, memory, time) for execution 6. Parse and validate code structure before running 7. Consider using safer alternatives (JSON, declarative configs) 8. Log all code execution attempts with full context 9. Require human review for generated code 10. Use tools like RestrictedPython for safer Python execution
Direct execution of LLM output in 'new'
LLM09: Overreliance CRITICAL
/private/tmp/langchain-test/libs/cli/langchain_cli/namespaces/integration.py:60
Function 'new' on line 60 directly executes LLM-generated code using subprocess.run. This is extremely dangerous and allows arbitrary code execution.
@integration_cli.command() def new( name: Annotated[ str, typer.Option(
Remediation
NEVER directly execute LLM-generated code: 1. Remove direct execution: - Do not use eval(), exec(), or os.system() - Avoid dynamic code execution - Use safer alternatives (allow-lists) 2. If code generation is required: - Generate code for review only - Require human approval before execution - Use sandboxing (containers, VMs) - Implement strict security policies 3. Use structured outputs: - Return data, not code - Use JSON schemas - Define clear interfaces 4. Add safeguards: - Static code analysis before execution - Whitelist allowed operations - Rate limiting and monitoring
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/cli/langchain_cli/namespaces/app.py:366
LLM output from 'uvicorn.run' is used in 'run(' on line 366 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
uvicorn.run( app_str, host=host_str,
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/cli/langchain_cli/namespaces/app.py:260
LLM output from 'subprocess.run' is used in 'run(' on line 260 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
typer.echo(f"Running: pip install -e \\\n {cmd_str}") subprocess.run(cmd, cwd=cwd, check=True) # noqa: S603 chain_names = []
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
Direct execution of LLM-generated code in 'add'
LLM08: Excessive Agency CRITICAL
/private/tmp/langchain-test/libs/cli/langchain_cli/namespaces/app.py:128
Function 'add' on line 128 directly executes code generated or influenced by an LLM using exec()/eval() or subprocess. This creates a critical security risk where malicious or buggy LLM outputs can execute arbitrary code, potentially compromising the entire system.
) @app_cli.command() def add( dependencies: Annotated[ list[str] | None, typer.Argument(help="The dependency to add"), ] = None, *,
Remediation
Code Execution Security: 1. NEVER execute LLM-generated code directly with exec()/eval() 2. If code execution is necessary, use sandboxed environments (Docker, VM) 3. Implement strict code validation and static analysis before execution 4. Use allowlists for permitted functions/modules 5. Set resource limits (CPU, memory, time) for execution 6. Parse and validate code structure before running 7. Consider using safer alternatives (JSON, declarative configs) 8. Log all code execution attempts with full context 9. Require human review for generated code 10. Use tools like RestrictedPython for safer Python execution
Direct execution of LLM output in 'add'
LLM09: Overreliance CRITICAL
/private/tmp/langchain-test/libs/cli/langchain_cli/namespaces/app.py:128
Function 'add' on line 128 directly executes LLM-generated code using subprocess.run. This is extremely dangerous and allows arbitrary code execution.
@app_cli.command() def add( dependencies: Annotated[ list[str] | None, typer.Argument(help="The dependency to add"),
Remediation
NEVER directly execute LLM-generated code: 1. Remove direct execution: - Do not use eval(), exec(), or os.system() - Avoid dynamic code execution - Use safer alternatives (allow-lists) 2. If code generation is required: - Generate code for review only - Require human approval before execution - Use sandboxing (containers, VMs) - Implement strict security policies 3. Use structured outputs: - Return data, not code - Use JSON schemas - Define clear interfaces 4. Add safeguards: - Static code analysis before execution - Whitelist allowed operations - Rate limiting and monitoring
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/cli/langchain_cli/namespaces/template.py:133
LLM output from 'uvicorn.run' is used in 'run(' on line 133 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
uvicorn.run( script, factory=True,
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/cli/langchain_cli/namespaces/template.py:84
LLM output from 'subprocess.run' is used in 'run(' on line 84 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
if with_poetry: subprocess.run(["poetry", "install"], cwd=destination_dir, check=True) # noqa: S607
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
Direct execution of LLM-generated code in 'new'
LLM08: Excessive Agency CRITICAL
/private/tmp/langchain-test/libs/cli/langchain_cli/namespaces/template.py:19
Function 'new' on line 19 directly executes code generated or influenced by an LLM using exec()/eval() or subprocess. This creates a critical security risk where malicious or buggy LLM outputs can execute arbitrary code, potentially compromising the entire system.
package_cli = typer.Typer(no_args_is_help=True, add_completion=False) @package_cli.command() def new( name: Annotated[str, typer.Argument(help="The name of the folder to create")], with_poetry: Annotated[ # noqa: FBT002 bool, typer.Option("--with-poetry/--no-poetry", help="Don't run poetry install"), ] = False,
Remediation
Code Execution Security: 1. NEVER execute LLM-generated code directly with exec()/eval() 2. If code execution is necessary, use sandboxed environments (Docker, VM) 3. Implement strict code validation and static analysis before execution 4. Use allowlists for permitted functions/modules 5. Set resource limits (CPU, memory, time) for execution 6. Parse and validate code structure before running 7. Consider using safer alternatives (JSON, declarative configs) 8. Log all code execution attempts with full context 9. Require human review for generated code 10. Use tools like RestrictedPython for safer Python execution
Direct execution of LLM output in 'new'
LLM09: Overreliance CRITICAL
/private/tmp/langchain-test/libs/cli/langchain_cli/namespaces/template.py:19
Function 'new' on line 19 directly executes LLM-generated code using subprocess.run. This is extremely dangerous and allows arbitrary code execution.
@package_cli.command() def new( name: Annotated[str, typer.Argument(help="The name of the folder to create")], with_poetry: Annotated[ # noqa: FBT002 bool,
Remediation
NEVER directly execute LLM-generated code: 1. Remove direct execution: - Do not use eval(), exec(), or os.system() - Avoid dynamic code execution - Use safer alternatives (allow-lists) 2. If code generation is required: - Generate code for review only - Require human approval before execution - Use sandboxing (containers, VMs) - Implement strict security policies 3. Use structured outputs: - Return data, not code - Use JSON schemas - Define clear interfaces 4. Add safeguards: - Static code analysis before execution - Whitelist allowed operations - Rate limiting and monitoring
User input 'input' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/langchain_v1/langchain/chat_models/base.py:701
User input parameter 'input' is directly passed to LLM API call 'self._model(config).invoke'. This is a high-confidence prompt injection vector.
) -> Any: return self._model(config).invoke(input, config=config, **kwargs)
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
User input 'request' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/langchain_v1/langchain/agents/middleware/tool_emulator.py:124
User input 'request' flows to LLM call via f-string in variable 'prompt'. Function 'wrap_tool_call' may be vulnerable to prompt injection attacks.
""" tool_name = request.tool_call["name"] # Check if this tool should be emulated should_emulate = self.emulate_all or tool_name in self.tools_to_emulate if not should_emulate: # Let it execute normally by calling the handler return handler(request) # Extract tool information for emulation tool_args = request.tool_call["args"] tool_description = request.tool.description if request.tool else "No description available" # Build prompt for emulator LLM prompt = ( f"You are emulating a tool call for testing purposes.\n\n" f"Tool: {tool_name}\n" f"Description: {tool_description}\n" f"Arguments: {tool_args}\n\n" f"Generate a realistic response that this tool would return " f"given these arguments.\n" f"Return ONLY the tool's output, no explanation or preamble. " f"Introduce variation into your responses." ) # Get emulated response from LLM response = self.model.invoke([HumanMessage(prompt)])
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain_v1/langchain/agents/middleware/tool_emulator.py:150
LLM output from 'self.model.invoke' is used in 'call(' on line 150 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
# Get emulated response from LLM response = self.model.invoke([HumanMessage(prompt)]) # Short-circuit: return emulated result without executing real tool
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain_v1/langchain/agents/middleware/file_search.py:281
LLM output from 'subprocess.run' is used in 'run(' on line 281 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
try: result = subprocess.run( # noqa: S603 cmd, capture_output=True,
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
Direct execution of LLM-generated code in '_ripgrep_search'
LLM08: Excessive Agency CRITICAL
/private/tmp/langchain-test/libs/langchain_v1/langchain/agents/middleware/file_search.py:259
Function '_ripgrep_search' on line 259 directly executes code generated or influenced by an LLM using exec()/eval() or subprocess. This creates a critical security risk where malicious or buggy LLM outputs can execute arbitrary code, potentially compromising the entire system.
raise ValueError(msg) from None return full_path def _ripgrep_search( self, pattern: str, base_path: str, include: str | None ) -> dict[str, list[tuple[int, str]]]: """Search using ripgrep subprocess.""" try: base_full = self._validate_and_resolve_path(base_path)
Remediation
Code Execution Security: 1. NEVER execute LLM-generated code directly with exec()/eval() 2. If code execution is necessary, use sandboxed environments (Docker, VM) 3. Implement strict code validation and static analysis before execution 4. Use allowlists for permitted functions/modules 5. Set resource limits (CPU, memory, time) for execution 6. Parse and validate code structure before running 7. Consider using safer alternatives (JSON, declarative configs) 8. Log all code execution attempts with full context 9. Require human review for generated code 10. Use tools like RestrictedPython for safer Python execution
Direct execution of LLM output in '_ripgrep_search'
LLM09: Overreliance CRITICAL
/private/tmp/langchain-test/libs/langchain_v1/langchain/agents/middleware/file_search.py:259
Function '_ripgrep_search' on line 259 directly executes LLM-generated code using subprocess.run. This is extremely dangerous and allows arbitrary code execution.
return full_path def _ripgrep_search( self, pattern: str, base_path: str, include: str | None ) -> dict[str, list[tuple[int, str]]]: """Search using ripgrep subprocess."""
Remediation
NEVER directly execute LLM-generated code: 1. Remove direct execution: - Do not use eval(), exec(), or os.system() - Avoid dynamic code execution - Use safer alternatives (allow-lists) 2. If code generation is required: - Generate code for review only - Require human approval before execution - Use sandboxing (containers, VMs) - Implement strict security policies 3. Use structured outputs: - Return data, not code - Use JSON schemas - Define clear interfaces 4. Add safeguards: - Static code analysis before execution - Whitelist allowed operations - Rate limiting and monitoring
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain_v1/langchain/agents/middleware/summarization.py:562
Function '_create_summary' on line 562 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _create_summary(self, messages_to_summarize: list[AnyMessage]) -> str: """Generate summary for the given messages.""" if not messages_to_summarize: return "No previous conversation history." trimmed_messages = self._trim_messages_for_summary(messages_to_summarize) if not trimmed_messages: return "Previous conversation was too long to summarize." # Format messages to avoid token inflation from metadata when str() is called on # message objects
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain_v1/langchain/agents/middleware/context_editing.py:245
LLM output from 'request.model.get_num_tokens_from_messages' is used in 'call(' on line 245 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
def count_tokens(messages: Sequence[BaseMessage]) -> int: return request.model.get_num_tokens_from_messages( system_msg + list(messages), request.tools )
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain_v1/langchain/agents/middleware/context_editing.py:244
Function 'count_tokens' on line 244 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def count_tokens(messages: Sequence[BaseMessage]) -> int: return request.model.get_num_tokens_from_messages( system_msg + list(messages), request.tools ) edited_messages = deepcopy(list(request.messages)) for edit in self.edits: edit.apply(edited_messages, count_tokens=count_tokens) return handler(request.override(messages=edited_messages))
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain_v1/langchain/agents/middleware/context_editing.py:281
Function 'count_tokens' on line 281 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def count_tokens(messages: Sequence[BaseMessage]) -> int: return request.model.get_num_tokens_from_messages( system_msg + list(messages), request.tools ) edited_messages = deepcopy(list(request.messages)) for edit in self.edits: edit.apply(edited_messages, count_tokens=count_tokens) return await handler(request.override(messages=edited_messages))
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
User input 'text' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/model_laboratory.py:97
User input parameter 'text' is directly passed to LLM API call 'chain.run'. This is a high-confidence prompt injection vector.
print_text(name, end="\n") output = chain.run(text) print_text(output, color=self.chain_colors[str(i)], end="\n\n")
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/model_laboratory.py:97
LLM output from 'chain.run' is used in 'run(' on line 97 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
print_text(name, end="\n") output = chain.run(text) print_text(output, color=self.chain_colors[str(i)], end="\n\n")
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/model_laboratory.py:83
Function 'compare' on line 83 has 4 DoS risk(s): LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def compare(self, text: str) -> None: """Compare model outputs on an input text. If a prompt was provided with starting the laboratory, then this text will be fed into the prompt. If no prompt was provided, then the input text is the entire prompt. Args: text: input text to run all models on. """ print(f"\033[1mInput:\033[0m\n{text}\n") # noqa: T201
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
User input 'query' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/contextual_compression.py:34
User input parameter 'query' is directly passed to LLM API call 'self.base_retriever.invoke'. This is a high-confidence prompt injection vector.
) -> list[Document]: docs = self.base_retriever.invoke( query,
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/contextual_compression.py:27
Function '_get_relevant_documents' on line 27 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _get_relevant_documents( self, query: str, *, run_manager: CallbackManagerForRetrieverRun, **kwargs: Any, ) -> list[Document]: docs = self.base_retriever.invoke( query, config={"callbacks": run_manager.get_child()}, **kwargs,
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
User input 'query' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/merger_retriever.py:69
User input parameter 'query' is directly passed to LLM API call 'retriever.invoke'. This is a high-confidence prompt injection vector.
retriever_docs = [ retriever.invoke( query,
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
User input 'query' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/re_phraser.py:76
User input parameter 'query' is directly passed to LLM API call 'self.llm_chain.invoke'. This is a high-confidence prompt injection vector.
""" re_phrased_question = self.llm_chain.invoke( query,
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/re_phraser.py:61
Function '_get_relevant_documents' on line 61 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _get_relevant_documents( self, query: str, *, run_manager: CallbackManagerForRetrieverRun, ) -> list[Document]: """Get relevant documents given a user question. Args: query: user question run_manager: callback handler to use
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
User input 'query' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/ensemble.py:224
User input parameter 'query' is directly passed to LLM API call 'retriever.invoke'. This is a high-confidence prompt injection vector.
retriever_docs = [ retriever.invoke( query,
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
User input 'query' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/multi_query.py:179
User input parameter 'query' is directly passed to LLM API call 'self.generate_queries'. This is a high-confidence prompt injection vector.
""" queries = self.generate_queries(query, run_manager) if self.include_original:
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/multi_query.py:164
Function '_get_relevant_documents' on line 164 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _get_relevant_documents( self, query: str, *, run_manager: CallbackManagerForRetrieverRun, ) -> list[Document]: """Get relevant documents given a user query. Args: query: user query run_manager: the callback handler to use.
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/memory/chat_memory.py:74
Function 'save_context' on line 74 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def save_context(self, inputs: dict[str, Any], outputs: dict[str, str]) -> None: """Save context from this conversation to buffer.""" input_str, output_str = self._get_input_output(inputs, outputs) self.chat_memory.add_messages( [ HumanMessage(content=input_str), AIMessage(content=output_str), ], ) async def asave_context(
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/memory/vectorstore.py:67
Function 'load_memory_variables' on line 67 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def load_memory_variables( self, inputs: dict[str, Any], ) -> dict[str, list[Document] | str]: """Return history buffer.""" input_key = self._get_prompt_input_key(inputs) query = inputs[input_key] docs = self.retriever.invoke(query) return self._documents_to_memory_variables(docs) async def aload_memory_variables(
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/memory/summary.py:36
Function 'predict_new_summary' on line 36 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def predict_new_summary( self, messages: list[BaseMessage], existing_summary: str, ) -> str: """Predict a new summary based on the messages and existing summary. Args: messages: List of messages to summarize. existing_summary: Existing summary to build upon.
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/memory/summary.py:103
Function 'from_messages' on line 103 has 4 DoS risk(s): LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def from_messages( cls, llm: BaseLanguageModel, chat_memory: BaseChatMessageHistory, *, summarize_step: int = 2, **kwargs: Any, ) -> ConversationSummaryMemory: """Create a ConversationSummaryMemory from a list of messages. Args:
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/memory/summary.py:157
Function 'save_context' on line 157 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def save_context(self, inputs: dict[str, Any], outputs: dict[str, str]) -> None: """Save context from this conversation to buffer.""" super().save_context(inputs, outputs) self.buffer = self.predict_new_summary( self.chat_memory.messages[-2:], self.buffer, ) def clear(self) -> None: """Clear memory contents.""" super().clear()
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/memory/entity.py:502
Function 'load_memory_variables' on line 502 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def load_memory_variables(self, inputs: dict[str, Any]) -> dict[str, Any]: """Load memory variables. Returns chat history and all generated entities with summaries if available, and updates or clears the recent entity cache. New entity name can be found when calling this method, before the entity summaries are generated, so the entity cache values may be empty if no entity descriptions are generated yet. """ # Create an LLMChain for predicting entity names from the recent chat history:
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/memory/entity.py:567
Function 'save_context' on line 567 has 5 DoS risk(s): LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def save_context(self, inputs: dict[str, Any], outputs: dict[str, str]) -> None: """Save context from this conversation history to the entity store. Generates a summary for each entity in the entity cache by prompting the model, and saves these summaries to the entity store. """ super().save_context(inputs, outputs) if self.input_key is None: prompt_input_key = get_prompt_input_key(inputs, self.memory_variables) else:
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Critical decision without oversight in 'load_memory_variables'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/langchain/langchain_classic/memory/entity.py:502
Function 'load_memory_variables' on line 502 makes critical data_modification decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
return ["entities", self.chat_history_key] def load_memory_variables(self, inputs: dict[str, Any]) -> dict[str, Any]: """Load memory variables. Returns chat history and all generated entities with summaries if available,
Remediation
Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
Critical decision without oversight in 'save_context'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/langchain/langchain_classic/memory/entity.py:567
Function 'save_context' on line 567 makes critical data_modification decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
} def save_context(self, inputs: dict[str, Any], outputs: dict[str, str]) -> None: """Save context from this conversation history to the entity store. Generates a summary for each entity in the entity cache by prompting
Remediation
Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
User input 'input' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chat_models/base.py:773
User input parameter 'input' is directly passed to LLM API call 'self._model(config).invoke'. This is a high-confidence prompt injection vector.
) -> Any: return self._model(config).invoke(input, config=config, **kwargs)
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/agent.py:1353
LLM output variable 'output' flows to 'run' on line 1353 via direct flow. This creates a command_injection vulnerability.
tool_run_kwargs = self._action_agent.tool_run_logging_kwargs() observation = ExceptionTool().run( output.tool_input, verbose=self.verbose,
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/agent.py:1351
LLM output variable 'output' flows to 'run_manager.on_agent_action' on line 1351 via direct flow. This creates a command_injection vulnerability.
if run_manager: run_manager.on_agent_action(output, color="green") tool_run_kwargs = self._action_agent.tool_run_logging_kwargs() observation = ExceptionTool().run(
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/agent.py:419
Function 'plan' on line 419 has 4 DoS risk(s): LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def plan( self, intermediate_steps: list[tuple[AgentAction, str]], callbacks: Callbacks = None, **kwargs: Any, ) -> AgentAction | AgentFinish: """Based on past history and current inputs, decide what to do. Args: intermediate_steps: Steps the LLM has taken to date, along with the observations.
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/agent.py:531
Function 'plan' on line 531 has 4 DoS risk(s): LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def plan( self, intermediate_steps: list[tuple[AgentAction, str]], callbacks: Callbacks = None, **kwargs: Any, ) -> list[AgentAction] | AgentFinish: """Based on past history and current inputs, decide what to do. Args: intermediate_steps: Steps the LLM has taken to date, along with the observations.
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/agent.py:1301
Function '_iter_next_step' on line 1301 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _iter_next_step( self, name_to_tool_map: dict[str, BaseTool], color_mapping: dict[str, str], inputs: dict[str, str], intermediate_steps: list[tuple[AgentAction, str]], run_manager: CallbackManagerForChainRun | None = None, ) -> Iterator[AgentFinish | AgentAction | AgentStep]: """Take a single step in the thought-action-observation loop. Override this to take control of how the agent makes and acts on choices.
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Critical decision without oversight in 'plan'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/agent.py:419
Function 'plan' on line 419 makes critical security decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
return self.input_keys_arg def plan( self, intermediate_steps: list[tuple[AgentAction, str]], callbacks: Callbacks = None,
Remediation
Critical security decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
Critical decision without oversight in 'plan'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/agent.py:531
Function 'plan' on line 531 makes critical security decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
return self.input_keys_arg def plan( self, intermediate_steps: list[tuple[AgentAction, str]], callbacks: Callbacks = None,
Remediation
Critical security decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
User input 'prompt_value' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/output_parsers/retry.py:245
User input parameter 'prompt_value' is directly passed to LLM API call 'self.retry_chain.run'. This is a high-confidence prompt injection vector.
if self.legacy and hasattr(self.retry_chain, "run"): completion = self.retry_chain.run( prompt=prompt_value.to_string(),
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
User input 'prompt_value' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/output_parsers/retry.py:251
User input parameter 'prompt_value' is directly passed to LLM API call 'self.retry_chain.invoke'. This is a high-confidence prompt injection vector.
else: completion = self.retry_chain.invoke( {
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/output_parsers/retry.py:245
LLM output variable 'completion' flows to 'self.retry_chain.run' on line 245 via direct flow. This creates a command_injection vulnerability.
if self.legacy and hasattr(self.retry_chain, "run"): completion = self.retry_chain.run( prompt=prompt_value.to_string(), completion=completion,
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/output_parsers/retry.py:97
Function 'parse_with_prompt' on line 97 has 4 DoS risk(s): LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def parse_with_prompt(self, completion: str, prompt_value: PromptValue) -> T: """Parse the output of an LLM call using a wrapped parser. Args: completion: The chain completion to parse. prompt_value: The prompt to use to parse the completion. Returns: The parsed completion. """ retries = 0
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/output_parsers/retry.py:234
Function 'parse_with_prompt' on line 234 has 4 DoS risk(s): LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def parse_with_prompt(self, completion: str, prompt_value: PromptValue) -> T: retries = 0 while retries <= self.max_retries: try: return self.parser.parse(completion) except OutputParserException as e: if retries == self.max_retries: raise retries += 1 if self.legacy and hasattr(self.retry_chain, "run"):
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/output_parsers/fix.py:81
LLM output variable 'completion' flows to 'self.retry_chain.run' on line 81 via direct flow. This creates a command_injection vulnerability.
if self.legacy and hasattr(self.retry_chain, "run"): completion = self.retry_chain.run( instructions=self.parser.get_format_instructions(), completion=completion,
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/output_parsers/fix.py:70
Function 'parse' on line 70 has 4 DoS risk(s): LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def parse(self, completion: str) -> T: retries = 0 while retries <= self.max_retries: try: return self.parser.parse(completion) except OutputParserException as e: if retries == self.max_retries: raise retries += 1 if self.legacy and hasattr(self.retry_chain, "run"):
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
LLM output flows to code_execution sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/evaluation/loading.py:178
LLM output variable 'llm' flows to 'evaluator_cls.from_llm' on line 178 via direct flow. This creates a code_execution vulnerability.
raise ValueError(msg) from e return evaluator_cls.from_llm(llm=llm, **kwargs) return evaluator_cls(**kwargs)
Remediation
Mitigations for Code Execution: 1. Never pass LLM output to eval() or exec() 2. Use safe alternatives (ast.literal_eval for data) 3. Implement sandboxing if code execution is required
User input 'inputs' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/llm.py:117
User input parameter 'inputs' is directly passed to LLM API call 'self.generate'. This is a high-confidence prompt injection vector.
) -> dict[str, str]: response = self.generate([inputs], run_manager=run_manager) return self.create_outputs(response)[0]
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
User input 'input_list' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/llm.py:241
User input parameter 'input_list' is directly passed to LLM API call 'self.generate'. This is a high-confidence prompt injection vector.
try: response = self.generate(input_list, run_manager=run_manager) except BaseException as e:
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/llm.py:246
LLM output variable 'outputs' flows to 'run_manager.on_chain_end' on line 246 via direct flow. This creates a command_injection vulnerability.
outputs = self.create_outputs(response) run_manager.on_chain_end({"outputs": outputs}) return outputs
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/llm.py:112
Function '_call' on line 112 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _call( self, inputs: dict[str, Any], run_manager: CallbackManagerForChainRun | None = None, ) -> dict[str, str]: response = self.generate([inputs], run_manager=run_manager) return self.create_outputs(response)[0] def generate( self, input_list: list[dict[str, Any]],
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/llm.py:120
Function 'generate' on line 120 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def generate( self, input_list: list[dict[str, Any]], run_manager: CallbackManagerForChainRun | None = None, ) -> LLMResult: """Generate LLM result from inputs.""" prompts, stop = self.prep_prompts(input_list, run_manager=run_manager) callbacks = run_manager.get_child() if run_manager else None if isinstance(self.llm, BaseLanguageModel): return self.llm.generate_prompt( prompts,
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/llm.py:224
Function 'apply' on line 224 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def apply( self, input_list: list[dict[str, Any]], callbacks: Callbacks = None, ) -> list[dict[str, str]]: """Utilize the LLM generate method for speed gains.""" callback_manager = CallbackManager.configure( callbacks, self.callbacks, self.verbose, )
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/mapreduce.py:113
LLM output from 'self.combine_documents_chain.run' is used in 'run(' on line 113 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
} outputs = self.combine_documents_chain.run( _inputs, callbacks=_run_manager.get_child(),
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/mapreduce.py:99
Function '_call' on line 99 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _call( self, inputs: dict[str, str], run_manager: CallbackManagerForChainRun | None = None, ) -> dict[str, str]: _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager() # Split the larger text into smaller chunks. doc_text = inputs.pop(self.input_key) texts = self.text_splitter.split_text(doc_text) docs = [Document(page_content=text) for text in texts] _inputs: dict[str, Any] = {
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/sequential.py:173
LLM output variable '_input' flows to 'chain.run' on line 173 via direct flow. This creates a command_injection vulnerability.
for i, chain in enumerate(self.chains): _input = chain.run( _input, callbacks=_run_manager.get_child(f"step_{i + 1}"),
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/sequential.py:179
LLM output variable '_input' flows to '_run_manager.on_text' on line 179 via direct flow. This creates a command_injection vulnerability.
_input = _input.strip() _run_manager.on_text( _input, color=color_mapping[str(i)],
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/sequential.py:164
Function '_call' on line 164 has 4 DoS risk(s): LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _call( self, inputs: dict[str, str], run_manager: CallbackManagerForChainRun | None = None, ) -> dict[str, str]: _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager() _input = inputs[self.input_key] color_mapping = get_color_mapping([str(i) for i in range(len(self.chains))]) for i, chain in enumerate(self.chains): _input = chain.run( _input,
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
User input 'inputs' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/base.py:413
User input parameter 'inputs' is directly passed to LLM API call 'self.invoke'. This is a high-confidence prompt injection vector.
return self.invoke( inputs,
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/base.py:413
LLM output from 'self.invoke' is used in 'call(' on line 413 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
return self.invoke( inputs, cast("RunnableConfig", {k: v for k, v in config.items() if v is not None}),
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/base.py:369
Function '__call__' on line 369 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def __call__( self, inputs: dict[str, Any] | Any, return_only_outputs: bool = False, # noqa: FBT001,FBT002 callbacks: Callbacks = None, *, tags: list[str] | None = None, metadata: dict[str, Any] | None = None, run_name: str | None = None, include_run_info: bool = False, ) -> dict[str, Any]:
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Critical decision without oversight in 'query'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/langchain/langchain_classic/indexes/vectorstore.py:34
Function 'query' on line 34 makes critical data_modification decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
) def query( self, question: str, llm: BaseLanguageModel | None = None,
Remediation
Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
Critical decision without oversight in 'query_with_sources'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/langchain/langchain_classic/indexes/vectorstore.py:104
Function 'query_with_sources' on line 104 makes critical data_modification decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
return (await chain.ainvoke({chain.input_key: question}))[chain.output_key] def query_with_sources( self, question: str, llm: BaseLanguageModel | None = None,
Remediation
Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
User input 'inputs' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/hyde/base.py:96
User input parameter 'inputs' is directly passed to LLM API call 'self.llm_chain.invoke'. This is a high-confidence prompt injection vector.
_run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager() return self.llm_chain.invoke( inputs,
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/hyde/base.py:89
Function '_call' on line 89 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _call( self, inputs: dict[str, Any], run_manager: CallbackManagerForChainRun | None = None, ) -> dict[str, str]: """Call the internal llm chain.""" _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager() return self.llm_chain.invoke( inputs, config={"callbacks": _run_manager.get_child()}, )
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/elasticsearch_database/base.py:140
LLM output variable 'es_cmd' flows to '_run_manager.on_text' on line 140 via direct flow. This creates a command_injection vulnerability.
_run_manager.on_text(es_cmd, color="green", verbose=self.verbose) intermediate_steps.append( es_cmd,
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/elasticsearch_database/base.py:149
LLM output variable 'result' flows to '_run_manager.on_text' on line 149 via direct flow. This creates a command_injection vulnerability.
_run_manager.on_text("\nESResult: ", verbose=self.verbose) _run_manager.on_text(result, color="yellow", verbose=self.verbose) _run_manager.on_text("\nAnswer:", verbose=self.verbose)
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/elasticsearch_database/base.py:160
LLM output variable 'final_result' flows to '_run_manager.on_text' on line 160 via direct flow. This creates a command_injection vulnerability.
intermediate_steps.append(final_result) # output: final answer _run_manager.on_text(final_result, color="green", verbose=self.verbose) chain_result: dict[str, Any] = {self.output_key: final_result} if self.return_intermediate_steps:
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/elasticsearch_database/base.py:116
Function '_call' on line 116 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _call( self, inputs: dict[str, Any], run_manager: CallbackManagerForChainRun | None = None, ) -> dict[str, Any]: _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager() input_text = f"{inputs[self.input_key]}\nESQuery:" _run_manager.on_text(input_text, verbose=self.verbose) indices = self._list_indices() indices_info = self._get_indices_infos(indices) query_inputs: dict = {
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/sql_database/query.py:33
Function 'create_sql_query_chain' on line 33 has 4 DoS risk(s): LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def create_sql_query_chain( llm: BaseLanguageModel, db: SQLDatabase, prompt: BasePromptTemplate | None = None, k: int = 5, *, get_col_comments: bool | None = None, ) -> Runnable[SQLInput | SQLInputWithTables | dict[str, Any], str]: r"""Create a chain that generates SQL queries. *Security Note*: This chain generates SQL queries for the given database.
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Critical decision without oversight in 'create_sql_query_chain'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/sql_database/query.py:33
Function 'create_sql_query_chain' on line 33 makes critical security decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
def create_sql_query_chain( llm: BaseLanguageModel, db: SQLDatabase, prompt: BasePromptTemplate | None = None,
Remediation
Critical security decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/retrieval_qa/base.py:154
LLM output from 'self.combine_documents_chain.run' is used in 'run(' on line 154 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
docs = self._get_docs(question) # type: ignore[call-arg] answer = self.combine_documents_chain.run( input_documents=docs, question=question,
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/retrieval_qa/base.py:129
Function '_call' on line 129 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _call( self, inputs: dict[str, Any], run_manager: CallbackManagerForChainRun | None = None, ) -> dict[str, Any]: """Run get_relevant_text and llm on input query. If chain has 'return_source_documents' as 'True', returns the retrieved documents as well under the key 'source_documents'. Example:
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/qa_with_sources/base.py:167
LLM output from 'self.combine_documents_chain.run' is used in 'run(' on line 167 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
answer = self.combine_documents_chain.run( input_documents=docs, callbacks=_run_manager.get_child(),
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/qa_with_sources/base.py:153
Function '_call' on line 153 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _call( self, inputs: dict[str, Any], run_manager: CallbackManagerForChainRun | None = None, ) -> dict[str, str]: _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager() accepts_run_manager = ( "run_manager" in inspect.signature(self._get_docs).parameters ) if accepts_run_manager: docs = self._get_docs(inputs, run_manager=_run_manager)
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/qa_generation/base.py:116
Function '_call' on line 116 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _call( self, inputs: dict[str, Any], run_manager: CallbackManagerForChainRun | None = None, ) -> dict[str, list]: docs = self.text_splitter.create_documents([inputs[self.input_key]]) results = self.llm_chain.generate( [{"text": d.page_content} for d in docs], run_manager=run_manager, ) qa = [json.loads(res[0].text) for res in results.generations]
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/constitutional_ai/base.py:262
LLM output variable 'response' flows to '_run_manager.on_text' on line 262 via direct flow. This creates a command_injection vulnerability.
_run_manager.on_text( text="Initial response: " + response + "\n\n", verbose=self.verbose,
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/constitutional_ai/base.py:271
LLM output variable 'response' flows to 'self.critique_chain.run' on line 271 via direct flow. This creates a command_injection vulnerability.
raw_critique = self.critique_chain.run( input_prompt=input_prompt, output_from_model=response,
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/constitutional_ai/base.py:307
LLM output variable 'critique' flows to '_run_manager.on_text' on line 307 via direct flow. This creates a command_injection vulnerability.
_run_manager.on_text( text="Critique: " + critique + "\n\n", verbose=self.verbose,
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/constitutional_ai/base.py:313
LLM output variable 'revision' flows to '_run_manager.on_text' on line 313 via direct flow. This creates a command_injection vulnerability.
_run_manager.on_text( text="Updated response: " + revision + "\n\n", verbose=self.verbose,
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
Model DoS vulnerability: LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/constitutional_ai/base.py:249
Function '_call' on line 249 has 5 DoS risk(s): LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _call( self, inputs: dict[str, Any], run_manager: CallbackManagerForChainRun | None = None, ) -> dict[str, Any]: _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager() response = self.chain.run( **inputs, callbacks=_run_manager.get_child("original"), ) initial_response = response
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Critical decision without oversight in '_call'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/constitutional_ai/base.py:249
Function '_call' on line 249 makes critical data_modification decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
return ["output"] def _call( self, inputs: dict[str, Any], run_manager: CallbackManagerForChainRun | None = None,
Remediation
Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/natbot/base.py:113
Function '_call' on line 113 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _call( self, inputs: dict[str, str], run_manager: CallbackManagerForChainRun | None = None, ) -> dict[str, str]: _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager() url = inputs[self.input_url_key] browser_content = inputs[self.input_browser_content_key] llm_cmd = self.llm_chain.invoke( { "objective": self.objective,
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/api/base.py:289
LLM output variable 'api_url' flows to '_run_manager.on_text' on line 289 via direct flow. This creates a command_injection vulnerability.
) _run_manager.on_text(api_url, color="green", end="\n", verbose=self.verbose) api_url = api_url.strip() if self.limit_to_domains and not _check_in_allowed_domain(
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/api/base.py:300
LLM output variable 'api_response' flows to '_run_manager.on_text' on line 300 via direct flow. This creates a command_injection vulnerability.
api_response = self.requests_wrapper.get(api_url) _run_manager.on_text( str(api_response), color="yellow",
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/llm_math/base.py:275
LLM output from 'self.llm_chain.predict' is used in 'call(' on line 275 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
_run_manager.on_text(inputs[self.input_key]) llm_output = self.llm_chain.predict( question=inputs[self.input_key], stop=["```output"],
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/llm_math/base.py:268
Function '_call' on line 268 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _call( self, inputs: dict[str, str], run_manager: CallbackManagerForChainRun | None = None, ) -> dict[str, str]: _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager() _run_manager.on_text(inputs[self.input_key]) llm_output = self.llm_chain.predict( question=inputs[self.input_key], stop=["```output"], callbacks=_run_manager.get_child(),
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/combine_documents/reduce.py:321
LLM output from 'self._collapse_chain.run' is used in 'run(' on line 321 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
def _collapse_docs_func(docs: list[Document], **kwargs: Any) -> str: return self._collapse_chain.run( input_documents=docs, callbacks=callbacks,
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/combine_documents/refine.py:145
Function 'combine_docs' on line 145 has 4 DoS risk(s): LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def combine_docs( self, docs: list[Document], callbacks: Callbacks = None, **kwargs: Any, ) -> tuple[str, dict]: """Combine by mapping first chain over all, then stuffing into final chain. Args: docs: List of documents to combine callbacks: Callbacks to be passed through
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/conversational_retrieval/base.py:177
LLM output variable 'docs' flows to 'self.combine_docs_chain.run' on line 177 via direct flow. This creates a command_injection vulnerability.
new_inputs["chat_history"] = chat_history_str answer = self.combine_docs_chain.run( input_documents=docs, callbacks=_run_manager.get_child(),
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
User input 'user_input' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/flare/base.py:147
User input parameter 'user_input' is directly passed to LLM API call 'self.response_chain.invoke'. This is a high-confidence prompt injection vector.
context = "\n\n".join(d.page_content for d in docs) result = self.response_chain.invoke( {
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
Model DoS vulnerability: LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/flare/base.py:135
Function '_do_generation' on line 135 has 5 DoS risk(s): LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _do_generation( self, questions: list[str], user_input: str, response: str, _run_manager: CallbackManagerForChainRun, ) -> tuple[str, bool]: callbacks = _run_manager.get_child() docs = [] for question in questions: docs.extend(self.retriever.invoke(question))
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/flare/base.py:198
Function '_call' on line 198 has 5 DoS risk(s): LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _call( self, inputs: dict[str, Any], run_manager: CallbackManagerForChainRun | None = None, ) -> dict[str, Any]: _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager() user_input = inputs[self.input_keys[0]] response = ""
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Direct execution of LLM-generated code in '_call'
LLM08: Excessive Agency CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/flare/base.py:198
Function '_call' on line 198 directly executes code generated or influenced by an LLM using exec()/eval() or subprocess. This creates a critical security risk where malicious or buggy LLM outputs can execute arbitrary code, potentially compromising the entire system.
end="\n", ) return self._do_generation(questions, user_input, response, _run_manager) def _call( self, inputs: dict[str, Any], run_manager: CallbackManagerForChainRun | None = None, ) -> dict[str, Any]: _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
Remediation
Code Execution Security: 1. NEVER execute LLM-generated code directly with exec()/eval() 2. If code execution is necessary, use sandboxed environments (Docker, VM) 3. Implement strict code validation and static analysis before execution 4. Use allowlists for permitted functions/modules 5. Set resource limits (CPU, memory, time) for execution 6. Parse and validate code structure before running 7. Consider using safer alternatives (JSON, declarative configs) 8. Log all code execution attempts with full context 9. Require human review for generated code 10. Use tools like RestrictedPython for safer Python execution
Direct execution of LLM output in '_call'
LLM09: Overreliance CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/flare/base.py:198
Function '_call' on line 198 directly executes LLM-generated code using eval(. This is extremely dangerous and allows arbitrary code execution.
return self._do_generation(questions, user_input, response, _run_manager) def _call( self, inputs: dict[str, Any], run_manager: CallbackManagerForChainRun | None = None,
Remediation
NEVER directly execute LLM-generated code: 1. Remove direct execution: - Do not use eval(), exec(), or os.system() - Avoid dynamic code execution - Use safer alternatives (allow-lists) 2. If code generation is required: - Generate code for review only - Require human approval before execution - Use sandboxing (containers, VMs) - Implement strict security policies 3. Use structured outputs: - Return data, not code - Use JSON schemas - Define clear interfaces 4. Add safeguards: - Static code analysis before execution - Whitelist allowed operations - Rate limiting and monitoring
Critical decision without oversight in 'from_llm'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/flare/base.py:250
Function 'from_llm' on line 250 makes critical data_modification decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
@classmethod def from_llm( cls, llm: BaseLanguageModel | None, max_generation_len: int = 32,
Remediation
Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/router/llm_router.py:137
LLM output from 'self.llm_chain.predict' is used in 'call(' on line 137 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
prediction = self.llm_chain.predict(callbacks=callbacks, **inputs) return cast( "dict[str, Any]",
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/evaluation/agents/trajectory_eval_chain.py:298
LLM output from 'self.eval_chain.run' is used in 'call(' on line 298 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
_run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager() raw_output = self.eval_chain.run( chain_input, callbacks=_run_manager.get_child(),
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/evaluation/agents/trajectory_eval_chain.py:280
Function '_call' on line 280 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _call( self, inputs: dict[str, str], run_manager: CallbackManagerForChainRun | None = None, ) -> dict[str, Any]: """Run the chain and generate the output. Args: inputs: The input values for the chain. run_manager: The callback manager for the chain run.
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Critical decision without oversight in 'create_openai_tools_agent'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/openai_tools/base.py:17
Function 'create_openai_tools_agent' on line 17 makes critical security decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
def create_openai_tools_agent( llm: BaseLanguageModel, tools: Sequence[BaseTool], prompt: ChatPromptTemplate,
Remediation
Critical security decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
Critical decision without oversight in 'create_openai_functions_agent'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/openai_functions_agent/base.py:287
Function 'create_openai_functions_agent' on line 287 makes critical security decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
def create_openai_functions_agent( llm: BaseLanguageModel, tools: Sequence[BaseTool], prompt: ChatPromptTemplate,
Remediation
Critical security decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
Critical decision without oversight in 'create_tool_calling_agent'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/tool_calling_agent/base.py:18
Function 'create_tool_calling_agent' on line 18 makes critical security decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
def create_tool_calling_agent( llm: BaseLanguageModel, tools: Sequence[BaseTool], prompt: ChatPromptTemplate,
Remediation
Critical security decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
Critical decision without oversight in 'create_json_chat_agent'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/json_chat/base.py:14
Function 'create_json_chat_agent' on line 14 makes critical security decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
def create_json_chat_agent( llm: BaseLanguageModel, tools: Sequence[BaseTool], prompt: ChatPromptTemplate,
Remediation
Critical security decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
Critical decision without oversight in 'create_xml_agent'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/xml/base.py:115
Function 'create_xml_agent' on line 115 makes critical security decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
def create_xml_agent( llm: BaseLanguageModel, tools: Sequence[BaseTool], prompt: BasePromptTemplate,
Remediation
Critical security decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
User input 'input' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/openai_assistant/base.py:360
User input parameter 'input' is directly passed to LLM API call 'self.client.beta.threads.messages.create'. This is a high-confidence prompt injection vector.
elif "run_id" not in input: _ = self.client.beta.threads.messages.create( input["thread_id"],
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
User input 'input_dict' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/openai_assistant/base.py:560
User input parameter 'input_dict' is directly passed to LLM API call 'self.client.beta.threads.runs.create'. This is a high-confidence prompt injection vector.
} return self.client.beta.threads.runs.create( input_dict["thread_id"],
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/openai_assistant/base.py:371
LLM output variable 'run' flows to 'self._wait_for_run' on line 371 via direct flow. This creates a command_injection vulnerability.
run = self.client.beta.threads.runs.submit_tool_outputs(**input) run = self._wait_for_run(run.id, run.thread_id) except BaseException as e: run_manager.on_chain_error(e)
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/openai_assistant/base.py:382
LLM output variable 'response' flows to 'run_manager.on_chain_end' on line 382 via direct flow. This creates a command_injection vulnerability.
else: run_manager.on_chain_end(response) return response
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/openai_assistant/base.py:379
LLM output variable 'run' flows to 'run_manager.on_chain_error' on line 379 via direct flow. This creates a command_injection vulnerability.
except BaseException as e: run_manager.on_chain_error(e, metadata=run.dict()) raise else:
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/openai_assistant/base.py:288
Function 'invoke' on line 288 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def invoke( self, input: dict, config: RunnableConfig | None = None, **kwargs: Any, ) -> OutputType: """Invoke assistant. Args: input: Runnable input dict that can have: content: User message when starting a new run.
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/openai_assistant/base.py:542
Function '_create_run' on line 542 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _create_run(self, input_dict: dict) -> Any: params = { k: v for k, v in input_dict.items() if k in ( "instructions", "model", "tools", "additional_instructions", "parallel_tool_calls",
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/openai_assistant/base.py:668
Function '_wait_for_run' on line 668 has 4 DoS risk(s): LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _wait_for_run(self, run_id: str, thread_id: str) -> Any: in_progress = True while in_progress: run = self.client.beta.threads.runs.retrieve(run_id, thread_id=thread_id) in_progress = run.status in ("in_progress", "queued") if in_progress: sleep(self.check_every_ms / 1000) return run async def _aparse_intermediate_steps( self,
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Critical decision without oversight in 'invoke'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/openai_assistant/base.py:288
Function 'invoke' on line 288 makes critical security decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
@override def invoke( self, input: dict, config: RunnableConfig | None = None,
Remediation
Critical security decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
Critical decision without oversight in 'create_react_agent'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/react/agent.py:16
Function 'create_react_agent' on line 16 makes critical security decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
def create_react_agent( llm: BaseLanguageModel, tools: Sequence[BaseTool], prompt: BasePromptTemplate,
Remediation
Critical security decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/smith/evaluation/runner_utils.py:1659
LLM output variable 'revision_id' flows to '_DatasetRunContainer.prepare' on line 1659 via direct flow. This creates a command_injection vulnerability.
client = client or Client() container = _DatasetRunContainer.prepare( client, dataset_name,
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/smith/evaluation/runner_utils.py:1674
LLM output variable 'container' flows to '_run_llm_or_chain' on line 1674 via direct flow. This creates a command_injection vulnerability.
batch_results = [ _run_llm_or_chain( example, config,
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/smith/evaluation/runner_utils.py:1685
LLM output variable 'container' flows to 'runnable_config.get_executor_for_config' on line 1685 via direct flow. This creates a command_injection vulnerability.
else: with runnable_config.get_executor_for_config(container.configs[0]) as executor: batch_results = list( executor.map(
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
LLM output flows to code_execution sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/smith/evaluation/runner_utils.py:1687
LLM output variable 'container' flows to 'executor.map' on line 1687 via direct flow. This creates a code_execution vulnerability.
batch_results = list( executor.map( functools.partial( _run_llm_or_chain,
Remediation
Mitigations for Code Execution: 1. Never pass LLM output to eval() or exec() 2. Use safe alternatives (ast.literal_eval for data) 3. Implement sandboxing if code execution is required
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/smith/evaluation/runner_utils.py:861
Function '_run_llm' on line 861 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _run_llm( llm: BaseLanguageModel, inputs: dict[str, Any], callbacks: Callbacks, *, tags: list[str] | None = None, input_mapper: Callable[[dict], Any] | None = None, metadata: dict[str, Any] | None = None, ) -> str | BaseMessage: """Run the language model on the example.
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Insecure tool function '_run_llm' executes dangerous operations
LLM07: Insecure Plugin Design HIGH
/private/tmp/langchain-test/libs/langchain/langchain_classic/smith/evaluation/runner_utils.py:861
Tool function '_run_llm' on line 861 takes LLM output as a parameter and performs dangerous operations (file_access) without proper validation. Attackers can craft malicious LLM outputs to execute arbitrary commands, access files, or perform SQL injection.
## Sync Utilities def _run_llm( llm: BaseLanguageModel, inputs: dict[str, Any], callbacks: Callbacks, *, tags: list[str] | None = None,
Remediation
Secure Tool/Plugin Implementation: 1. NEVER execute shell commands from LLM output directly 2. Use allowlists for permitted commands/operations 3. Validate all file paths against allowed directories 4. Use parameterized queries - never raw SQL from LLM 5. Validate URLs against allowlist before HTTP requests 6. Implement strict input schemas (JSON Schema, Pydantic) 7. Add rate limiting and request throttling 8. Log all tool invocations for audit 9. Use principle of least privilege 10. Implement human-in-the-loop for destructive operations
Critical decision without oversight in '_run_llm'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/langchain/langchain_classic/smith/evaluation/runner_utils.py:861
Function '_run_llm' on line 861 makes critical data_modification decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
def _run_llm( llm: BaseLanguageModel, inputs: dict[str, Any], callbacks: Callbacks,
Remediation
Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
Critical decision without oversight in 'run_on_dataset'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/langchain/langchain_classic/smith/evaluation/runner_utils.py:1512
Function 'run_on_dataset' on line 1512 makes critical security, data_modification decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
def run_on_dataset( client: Client | None, dataset_name: str, llm_or_chain_factory: MODEL_OR_CHAIN_FACTORY,
Remediation
Critical security, data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/smith/evaluation/string_run_evaluator.py:298
Function '_prepare_input' on line 298 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _prepare_input(self, inputs: dict[str, Any]) -> dict[str, str]: run: Run = inputs["run"] example: Example | None = inputs.get("example") evaluate_strings_inputs = self.run_mapper(run) if not self.string_evaluator.requires_input: # Hide warning about unused input evaluate_strings_inputs.pop("input", None) if example and self.example_mapper and self.string_evaluator.requires_reference: evaluate_strings_inputs.update(self.example_mapper(example)) elif self.string_evaluator.requires_reference: msg = (
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Critical decision without oversight in '_prepare_input'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/langchain/langchain_classic/smith/evaluation/string_run_evaluator.py:298
Function '_prepare_input' on line 298 makes critical data_modification decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
return ["feedback"] def _prepare_input(self, inputs: dict[str, Any]) -> dict[str, str]: run: Run = inputs["run"] example: Example | None = inputs.get("example") evaluate_strings_inputs = self.run_mapper(run)
Remediation
Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
User input 'query' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/document_compressors/listwise_rerank.py:95
User input parameter 'query' is directly passed to LLM API call 'self.reranker.invoke'. This is a high-confidence prompt injection vector.
"""Filter down documents based on their relevance to the query.""" results = self.reranker.invoke( {"documents": documents, "query": query},
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/document_compressors/listwise_rerank.py:88
Function 'compress_documents' on line 88 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def compress_documents( self, documents: Sequence[Document], query: str, callbacks: Callbacks | None = None, ) -> Sequence[Document]: """Filter down documents based on their relevance to the query.""" results = self.reranker.invoke( {"documents": documents, "query": query}, config={"callbacks": callbacks}, )
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/document_compressors/cross_encoder_rerank.py:31
Function 'compress_documents' on line 31 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def compress_documents( self, documents: Sequence[Document], query: str, callbacks: Callbacks | None = None, ) -> Sequence[Document]: """Rerank documents using CrossEncoder. Args: documents: A sequence of documents to compress. query: The query to use for compressing the documents.
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/document_compressors/chain_extract.py:68
Function 'compress_documents' on line 68 has 4 DoS risk(s): LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def compress_documents( self, documents: Sequence[Document], query: str, callbacks: Callbacks | None = None, ) -> Sequence[Document]: """Compress page content of raw documents.""" compressed_docs = [] for doc in documents: _input = self.get_input(query, doc) output_ = self.llm_chain.invoke(_input, config={"callbacks": callbacks})
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
User input 'query' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/self_query/base.py:316
User input parameter 'query' is directly passed to LLM API call 'self.query_constructor.invoke'. This is a high-confidence prompt injection vector.
) -> list[Document]: structured_query = self.query_constructor.invoke( {"query": query},
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/self_query/base.py:310
Function '_get_relevant_documents' on line 310 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _get_relevant_documents( self, query: str, *, run_manager: CallbackManagerForRetrieverRun, ) -> list[Document]: structured_query = self.query_constructor.invoke( {"query": query}, config={"callbacks": run_manager.get_child()}, ) if self.verbose:
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
User input 'input' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/language_models/chat_models.py:402
User input parameter 'input' is directly passed to LLM API call 'self.generate_prompt'. This is a high-confidence prompt injection vector.
"ChatGeneration", self.generate_prompt( [self._convert_input(input)],
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
User input 'input' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/language_models/chat_models.py:492
User input parameter 'input' is directly passed to LLM API call 'self.invoke'. This is a high-confidence prompt injection vector.
"AIMessageChunk", self.invoke(input, config=config, stop=stop, **kwargs), )
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
User input 'input' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/language_models/fake.py:106
User input parameter 'input' is directly passed to LLM API call 'self.invoke'. This is a high-confidence prompt injection vector.
) -> Iterator[str]: result = self.invoke(input, config) for i_c, c in enumerate(result):
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/language_models/fake.py:98
Function 'stream' on line 98 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def stream( self, input: LanguageModelInput, config: RunnableConfig | None = None, *, stop: list[str] | None = None, **kwargs: Any, ) -> Iterator[str]: result = self.invoke(input, config) for i_c, c in enumerate(result): if self.sleep is not None:
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
User input 'input' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/language_models/llms.py:378
User input parameter 'input' is directly passed to LLM API call 'self.generate_prompt'. This is a high-confidence prompt injection vector.
return ( self.generate_prompt( [self._convert_input(input)],
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
User input 'inputs' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/language_models/llms.py:431
User input parameter 'inputs' is directly passed to LLM API call 'self.generate_prompt'. This is a high-confidence prompt injection vector.
try: llm_result = self.generate_prompt( [self._convert_input(input_) for input_ in inputs],
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/language_models/llms.py:102
LLM output from 'asyncio.run' is used in 'run(' on line 102 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
except RuntimeError: asyncio.run(coro) else: if loop.is_running():
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/language_models/llms.py:109
LLM output from 'asyncio.run' is used in 'run(' on line 109 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
else: asyncio.run(coro) except Exception as e: _log_error_once(f"Error in on_retry: {e}")
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/language_models/llms.py:368
Function 'invoke' on line 368 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def invoke( self, input: LanguageModelInput, config: RunnableConfig | None = None, *, stop: list[str] | None = None, **kwargs: Any, ) -> str: config = ensure_config(config) return ( self.generate_prompt(
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/language_models/llms.py:508
Function 'stream' on line 508 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def stream( self, input: LanguageModelInput, config: RunnableConfig | None = None, *, stop: list[str] | None = None, **kwargs: Any, ) -> Iterator[str]: if type(self)._stream == BaseLLM._stream: # noqa: SLF001 # model doesn't implement streaming, so use default implementation yield self.invoke(input, config=config, stop=stop, **kwargs)
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/language_models/fake_chat_models.py:158
Function 'batch' on line 158 has 5 DoS risk(s): LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def batch( self, inputs: list[Any], config: RunnableConfig | list[RunnableConfig] | None = None, *, return_exceptions: bool = False, **kwargs: Any, ) -> list[AIMessage]: if isinstance(config, list): return [ self.invoke(m, c, **kwargs)
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
User input 'query' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/tools/retriever.py:65
User input parameter 'query' is directly passed to LLM API call 'retriever.invoke'. This is a high-confidence prompt injection vector.
) -> str | tuple[str, list[Document]]: docs = retriever.invoke(query, config={"callbacks": callbacks}) content = document_separator.join(
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/tools/retriever.py:62
Function 'func' on line 62 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def func( query: str, callbacks: Callbacks = None ) -> str | tuple[str, list[Document]]: docs = retriever.invoke(query, config={"callbacks": callbacks}) content = document_separator.join( format_document(doc, document_prompt_) for doc in docs ) if response_format == "content_and_artifact": return (content, docs) return content
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/tools/base.py:995
LLM output variable 'output' flows to 'run_manager.on_tool_end' on line 995 via direct flow. This creates a command_injection vulnerability.
output = _format_output(content, artifact, tool_call_id, self.name, status) run_manager.on_tool_end(output, color=color, name=self.name, **kwargs) return output
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/tools/base.py:992
LLM output variable 'error_to_raise' flows to 'run_manager.on_tool_error' on line 992 via direct flow. This creates a command_injection vulnerability.
if error_to_raise: run_manager.on_tool_error(error_to_raise, tool_call_id=tool_call_id) raise error_to_raise output = _format_output(content, artifact, tool_call_id, self.name, status)
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/tools/base.py:628
Function 'invoke' on line 628 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def invoke( self, input: str | dict | ToolCall, config: RunnableConfig | None = None, **kwargs: Any, ) -> Any: tool_input, kwargs = _prep_run_args(input, config, **kwargs) return self.run(tool_input, **kwargs) @override async def ainvoke(
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/callbacks/manager.py:358
LLM output from 'runner.run' is used in 'run(' on line 358 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
while pending := asyncio.all_tasks(runner.get_loop()): runner.run(asyncio.wait(pending)) else: # Before Python 3.11 we need to run each coroutine in a new event loop
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/callbacks/manager.py:364
LLM output from 'asyncio.run' is used in 'run(' on line 364 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
try: asyncio.run(coro) except Exception as e: logger.warning("Error in callback coroutine: %s", repr(e))
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/callbacks/manager.py:352
LLM output from 'runner.run' is used in 'run(' on line 352 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
try: runner.run(coro) except Exception as e: logger.warning("Error in callback coroutine: %s", repr(e))
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/callbacks/manager.py:340
Function '_run_coros' on line 340 has 4 DoS risk(s): LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _run_coros(coros: list[Coroutine[Any, Any, Any]]) -> None: if hasattr(asyncio, "Runner"): # Python 3.11+ # Run the coroutines in a new event loop, taking care to # - install signal handlers # - run pending tasks scheduled by `coros` # - close asyncgens and executors # - close the loop with asyncio.Runner() as runner: # Run the coroutine, get the result for coro in coros:
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/graph_mermaid.py:310
LLM output from 'asyncio.run' is used in 'run(' on line 310 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
if draw_method == MermaidDrawMethod.PYPPETEER: img_bytes = asyncio.run( _render_mermaid_using_pyppeteer( mermaid_syntax, output_file_path, background_color, padding
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/config.py:181
LLM output from 'ctx.run' is used in 'run(' on line 181 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
ctx = copy_context() config_token, _ = ctx.run(_set_config_context, config) try: yield ctx
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/config.py:185
LLM output from 'ctx.run' is used in 'run(' on line 185 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
finally: ctx.run(var_child_runnable_config.reset, config_token) ctx.run( _set_tracing_context,
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/config.py:186
LLM output from 'ctx.run' is used in 'run(' on line 186 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
ctx.run(var_child_runnable_config.reset, config_token) ctx.run( _set_tracing_context, {
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/config.py:553
LLM output from 'contexts.pop().run' is used in 'run(' on line 553 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
def _wrapped_fn(*args: Any) -> T: return contexts.pop().run(fn, *args) return super().map(
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/config.py:135
Function '_set_config_context' on line 135 has 4 DoS risk(s): LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _set_config_context( config: RunnableConfig, ) -> tuple[Token[RunnableConfig | None], dict[str, Any] | None]: """Set the child Runnable config + tracing context. Args: config: The config to set. Returns: The token to reset the config and the previous tracing context. """
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
User input 'input_' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/configurable.py:189
User input parameter 'input_' is directly passed to LLM API call 'bound.invoke'. This is a high-confidence prompt injection vector.
else: return bound.invoke(input_, config, **kwargs)
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
User input 'input_' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/configurable.py:185
User input parameter 'input_' is directly passed to LLM API call 'bound.invoke'. This is a high-confidence prompt injection vector.
try: return bound.invoke(input_, config, **kwargs) except Exception as e:
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/configurable.py:142
Function 'invoke' on line 142 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def invoke( self, input: Input, config: RunnableConfig | None = None, **kwargs: Any ) -> Output: runnable, config = self.prepare(config) return runnable.invoke(input, config, **kwargs) @override async def ainvoke( self, input: Input, config: RunnableConfig | None = None, **kwargs: Any ) -> Output: runnable, config = self.prepare(config)
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/configurable.py:178
Function 'invoke' on line 178 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def invoke( prepared: tuple[Runnable[Input, Output], RunnableConfig], input_: Input, ) -> Output | Exception: bound, config = prepared if return_exceptions: try: return bound.invoke(input_, config, **kwargs) except Exception as e: return e else:
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
User input 'input' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/branch.py:215
User input parameter 'input' is directly passed to LLM API call 'condition.invoke'. This is a high-confidence prompt injection vector.
expression_value = condition.invoke( input,
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
User input 'input' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/branch.py:234
User input parameter 'input' is directly passed to LLM API call 'self.default.invoke'. This is a high-confidence prompt injection vector.
else: output = self.default.invoke( input,
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
User input 'input' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/branch.py:224
User input parameter 'input' is directly passed to LLM API call 'runnable.invoke'. This is a high-confidence prompt injection vector.
if expression_value: output = runnable.invoke( input,
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
User input 'input' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/branch.py:327
User input parameter 'input' is directly passed to LLM API call 'condition.invoke'. This is a high-confidence prompt injection vector.
expression_value = condition.invoke( input,
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/branch.py:244
LLM output variable 'output' flows to 'run_manager.on_chain_end' on line 244 via direct flow. This creates a command_injection vulnerability.
raise run_manager.on_chain_end(output) return output
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
Model DoS vulnerability: LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/branch.py:189
Function 'invoke' on line 189 has 5 DoS risk(s): LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def invoke( self, input: Input, config: RunnableConfig | None = None, **kwargs: Any ) -> Output: """First evaluates the condition, then delegate to `True` or `False` branch. Args: input: The input to the `Runnable`. config: The configuration for the `Runnable`. **kwargs: Additional keyword arguments to pass to the `Runnable`. Returns:
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/branch.py:296
Function 'stream' on line 296 has 5 DoS risk(s): LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def stream( self, input: Input, config: RunnableConfig | None = None, **kwargs: Any | None, ) -> Iterator[Output]: """First evaluates the condition, then delegate to `True` or `False` branch. Args: input: The input to the `Runnable`. config: The configuration for the `Runnable`.
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
User input 'input_' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/retry.py:188
User input parameter 'input_' is directly passed to LLM API call 'super().invoke'. This is a high-confidence prompt injection vector.
with attempt: result = super().invoke( input_,
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
Model DoS vulnerability: LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/retry.py:179
Function '_invoke' on line 179 has 5 DoS risk(s): LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _invoke( self, input_: Input, run_manager: "CallbackManagerForChainRun", config: RunnableConfig, **kwargs: Any, ) -> Output: for attempt in self._sync_retrying(reraise=True): with attempt: result = super().invoke( input_,
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
User input 'input' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/fallbacks.py:193
User input parameter 'input' is directly passed to LLM API call 'context.run'. This is a high-confidence prompt injection vector.
with set_config_context(child_config) as context: output = context.run( runnable.invoke,
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
User input 'input' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/fallbacks.py:496
User input parameter 'input' is directly passed to LLM API call 'context.run'. This is a high-confidence prompt injection vector.
with set_config_context(child_config) as context: stream = context.run( runnable.stream,
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/fallbacks.py:207
LLM output variable 'output' flows to 'run_manager.on_chain_end' on line 207 via direct flow. This creates a command_injection vulnerability.
else: run_manager.on_chain_end(output) return output if first_error is None:
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/fallbacks.py:501
LLM output variable 'stream' flows to 'context.run' on line 501 via direct flow. This creates a command_injection vulnerability.
) chunk: Output = context.run(next, stream) except self.exceptions_to_handle as e: first_error = e if first_error is None else first_error
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
Model DoS vulnerability: LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/fallbacks.py:466
Function 'stream' on line 466 has 5 DoS risk(s): LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def stream( self, input: Input, config: RunnableConfig | None = None, **kwargs: Any | None, ) -> Iterator[Output]: if self.exception_key is not None and not isinstance(input, dict): msg = ( "If 'exception_key' is specified then input must be a dictionary." f"However found a type of {type(input)} for input" )
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
User input 'input_' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/router.py:162
User input parameter 'input_' is directly passed to LLM API call 'runnable.invoke'. This is a high-confidence prompt injection vector.
else: return runnable.invoke(input_, config, **kwargs)
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
User input 'input_' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/router.py:158
User input parameter 'input_' is directly passed to LLM API call 'runnable.invoke'. This is a high-confidence prompt injection vector.
try: return runnable.invoke(input_, config, **kwargs) except Exception as e:
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/router.py:107
Function 'invoke' on line 107 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def invoke( self, input: RouterInput, config: RunnableConfig | None = None, **kwargs: Any ) -> Output: key = input["key"] actual_input = input["input"] if key not in self.runnables: msg = f"No runnable associated with key '{key}'" raise ValueError(msg) runnable = self.runnables[key] return runnable.invoke(actual_input, config)
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/router.py:153
Function 'invoke' on line 153 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def invoke( runnable: Runnable[Input, Output], input_: Input, config: RunnableConfig ) -> Output | Exception: if return_exceptions: try: return runnable.invoke(input_, config, **kwargs) except Exception as e: return e else: return runnable.invoke(input_, config, **kwargs)
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
User input 'input_' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:2060
User input parameter 'input_' is directly passed to LLM API call 'context.run'. This is a high-confidence prompt injection vector.
"Output", context.run( call_func_with_variable_args, # type: ignore[arg-type]
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
User input 'input_' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:979
User input parameter 'input_' is directly passed to LLM API call 'self.invoke'. This is a high-confidence prompt injection vector.
else: out = self.invoke(input_, config, **kwargs)
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
User input 'input_' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:975
User input parameter 'input_' is directly passed to LLM API call 'self.invoke'. This is a high-confidence prompt injection vector.
try: out: Output | Exception = self.invoke(input_, config, **kwargs) except Exception as e:
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
User input 'input_' embedded in LLM prompt
LLM01: Prompt Injection CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:3861
User input parameter 'input_' is directly passed to LLM API call 'context.run'. This is a high-confidence prompt injection vector.
with set_config_context(child_config) as context: return context.run( step.invoke,
Remediation
Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:2326
LLM output variable 'iterator' flows to 'context.run' on line 2326 via direct flow. This creates a command_injection vulnerability.
while True: chunk: Output = context.run(next, iterator) yield chunk if final_output_supported:
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:1130
Function 'stream' on line 1130 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def stream( self, input: Input, config: RunnableConfig | None = None, **kwargs: Any | None, ) -> Iterator[Output]: """Default implementation of `stream`, which calls `invoke`. Subclasses must override this method if they support streaming output. Args:
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:2027
Function '_call_with_config' on line 2027 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _call_with_config( self, func: Callable[[Input], Output] | Callable[[Input, CallbackManagerForChainRun], Output] | Callable[[Input, CallbackManagerForChainRun, RunnableConfig], Output], input_: Input, config: RunnableConfig | None, run_type: str | None = None, serialized: dict[str, Any] | None = None, **kwargs: Any | None, ) -> Output:
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:2261
Function '_transform_stream_with_config' on line 2261 has 5 DoS risk(s): LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _transform_stream_with_config( self, inputs: Iterator[Input], transformer: Callable[[Iterator[Input]], Iterator[Output]] | Callable[[Iterator[Input], CallbackManagerForChainRun], Iterator[Output]] | Callable[ [Iterator[Input], CallbackManagerForChainRun, RunnableConfig], Iterator[Output], ], config: RunnableConfig | None, run_type: str | None = None,
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:3127
Function 'invoke' on line 3127 has 5 DoS risk(s): LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def invoke( self, input: Input, config: RunnableConfig | None = None, **kwargs: Any ) -> Output: # setup callbacks and context config = ensure_config(config) callback_manager = get_callback_manager_for_config(config) # start the root run run_manager = callback_manager.on_chain_start( None, input, name=config.get("run_name") or self.get_name(),
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:3830
Function 'invoke' on line 3830 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def invoke( self, input: Input, config: RunnableConfig | None = None, **kwargs: Any ) -> dict[str, Any]: # setup callbacks config = ensure_config(config) callback_manager = CallbackManager.configure( inheritable_callbacks=config.get("callbacks"), local_callbacks=None, verbose=False, inheritable_tags=config.get("tags"), local_tags=None,
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:5685
Function 'invoke' on line 5685 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def invoke( self, input: Input, config: RunnableConfig | None = None, **kwargs: Any | None, ) -> Output: return self.bound.invoke( input, self._merge_configs(config), **{**self.kwargs, **kwargs}, )
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:901
Function 'invoke' on line 901 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def invoke(input_: Input, config: RunnableConfig) -> Output | Exception: if return_exceptions: try: return self.invoke(input_, config, **kwargs) except Exception as e: return e else: return self.invoke(input_, config, **kwargs) # If there's only one input, don't bother with the executor if len(inputs) == 1:
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:970
Function 'invoke' on line 970 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def invoke( i: int, input_: Input, config: RunnableConfig ) -> tuple[int, Output | Exception]: if return_exceptions: try: out: Output | Exception = self.invoke(input_, config, **kwargs) except Exception as e: out = e else: out = self.invoke(input_, config, **kwargs)
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:3852
Function '_invoke_step' on line 3852 has 4 DoS risk(s): No rate limiting, No input length validation, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def _invoke_step( step: Runnable[Input, Any], input_: Input, config: RunnableConfig, key: str ) -> Any: child_config = patch_config( config, # mark each step as a child run callbacks=run_manager.get_child(f"map:key:{key}"), ) with set_config_context(child_config) as context: return context.run( step.invoke,
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/tracers/core.py:109
LLM output from 'self.run_map.get' is used in 'run(' on line 109 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
run.dotted_order += "." + current_dotted_order if parent_run := self.run_map.get(str(run.parent_run_id)): self._add_child_run(parent_run, run) else:
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/tracers/stdout.py:78
LLM output variable 'current_run' flows to 'self.run_map.get' on line 78 via direct flow. This creates a command_injection vulnerability.
while current_run.parent_run_id: parent = self.run_map.get(str(current_run.parent_run_id)) if parent: parents.append(parent)
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/tracers/stdout.py:105
LLM output variable 'run_type' flows to 'self.function_callback' on line 105 via direct flow. This creates a command_injection vulnerability.
run_type = run.run_type.capitalize() self.function_callback( f"{get_colored_text('[chain/start]', color='green')} " + get_bolded_text(f"[{crumbs}] Entering {run_type} run with input:\n")
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/tracers/stdout.py:114
LLM output variable 'run_type' flows to 'self.function_callback' on line 114 via direct flow. This creates a command_injection vulnerability.
run_type = run.run_type.capitalize() self.function_callback( f"{get_colored_text('[chain/end]', color='blue')} " + get_bolded_text(
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
LLM output flows to command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/tracers/stdout.py:125
LLM output variable 'run_type' flows to 'self.function_callback' on line 125 via direct flow. This creates a command_injection vulnerability.
run_type = run.run_type.capitalize() self.function_callback( f"{get_colored_text('[chain/error]', color='red')} " + get_bolded_text(
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable
Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits
LLM04: Model Denial of Service CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/tracers/stdout.py:66
Function 'get_parents' on line 66 has 4 DoS risk(s): LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits. These missing protections enable attackers to exhaust model resources through excessive requests, large inputs, or recursive calls, leading to service degradation or unavailability.
def get_parents(self, run: Run) -> list[Run]: """Get the parents of a run. Args: run: The run to get the parents of. Returns: A list of parent runs. """ parents = [] current_run = run
Remediation
Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets
LLM output used in dangerous command_injection sink
LLM02: Insecure Output Handling CRITICAL
/private/tmp/langchain-test/libs/core/langchain_core/tracers/base.py:49
LLM output from 'self.run_map.pop' is used in 'run(' on line 49 without sanitization. This creates a command_injection vulnerability where malicious LLM output can compromise application security.
self._persist_run(run) self.run_map.pop(str(run.id)) self._on_run_update(run)
Remediation
Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell
Critical decision without oversight in '_end_trace'
LLM09: Overreliance INFO
/private/tmp/langchain-test/libs/core/langchain_core/tracers/base.py:45
Function '_end_trace' on line 45 makes critical data_modification decisions based on LLM output without human oversight or verification. No action edges detected - advisory only.
self._on_run_create(run) def _end_trace(self, run: Run) -> None: """End a trace for a run.""" if not run.parent_run_id: self._persist_run(run)
Remediation
Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards
19
Overall Score
Initial
25
Controls Detected
of 61
2660
Files Analyzed
286
Total Recommendations

Category Scores

Prompt Security
28/100
  • Prompt Sanitization Advanced
  • Rate Limiting Missing
  • Input Validation Advanced
  • Output Filtering Advanced
  • Context Window Protection Missing
  • Red Team Testing Missing
  • Prompt Anomaly Detection Missing
  • System Prompt Protection Missing
3 Detected 0 Partial 5 Missing
Model Security
25/100
  • Access Control Missing
  • Model Versioning Missing
  • Dependency Scanning Missing
  • API Security Missing
  • Model Source Verification Advanced
  • Differential Privacy Intermediate
  • Model Watermarking Missing
  • Secure Model Loading Advanced
3 Detected 0 Partial 5 Missing
Data Privacy
31/100
  • PII Detection Intermediate
  • Data Redaction Advanced
  • Data Encryption Intermediate
  • Audit Logging Advanced
  • Consent Management Missing
  • NER PII Detection Missing
  • Data Retention Policy Missing
  • GDPR Compliance Missing
4 Detected 0 Partial 4 Missing
OWASP LLM Top 10
45/100
  • LLM01: Prompt Injection Defense Advanced
  • LLM02: Insecure Output Handling Intermediate
  • LLM03: Training Data Poisoning Partial
  • LLM04: Model Denial of Service Missing
  • LLM05: Supply Chain Vulnerabilities Intermediate
  • LLM06: Sensitive Information Disclosure Advanced
  • LLM07: Insecure Plugin Design Advanced
  • LLM08: Excessive Agency Partial
  • LLM09: Overreliance Advanced
  • LLM10: Model Theft Missing
6 Detected 2 Partial 2 Missing
Blue Team Operations
21/100
  • Model Monitoring Advanced
  • Drift Detection Missing
  • Anomaly Detection Missing
  • Adversarial Attack Detection Missing
  • AI Incident Response Missing
  • Model Drift Monitoring Missing
  • Data Quality Monitoring Advanced
2 Detected 0 Partial 5 Missing
AI Governance
0/100
  • Model Explainability Missing
  • Bias Detection Missing
  • Model Documentation Missing
  • Compliance Tracking Missing
  • Human Oversight Missing
0 Detected 0 Partial 5 Missing
Supply Chain Security
25/100
  • Dependency Scanning Missing
  • Model Provenance Tracking Missing
  • Model Integrity Verification Advanced
1 Detected 0 Partial 2 Missing
Hallucination Mitigation
35/100
  • RAG Implementation Advanced
  • Confidence Scoring Missing
  • Source Attribution Intermediate
  • Temperature Control Missing
  • Fact Checking Intermediate
3 Detected 0 Partial 2 Missing
Ethical AI & Bias
12/100
  • Fairness Metrics Missing
  • Model Explainability Intermediate
  • Bias Testing Missing
  • Model Cards Missing
1 Detected 0 Partial 3 Missing
Incident Response
0/100
  • Monitoring Integration Missing
  • Audit Logging Missing
  • Rollback Capability Missing
0 Detected 0 Partial 3 Missing

All Recommendations (286)

Rate Limiting
Audit critical

Detection failed: 'bool' object has no attribute 'lower'

Context Window Protection
Audit critical

Detection failed: 'bool' object has no attribute 'lower'

Red Team Testing
Audit critical

Detection failed: 'ConfigAnalyzer' object has no attribute 'file_exists'

Prompt Anomaly Detection
Audit critical

Implement statistical analysis on prompt patterns

Prompt Anomaly Detection
Audit critical

Use ML-based anomaly detection for unusual inputs

Prompt Anomaly Detection
Audit critical

Set up alerts for prompt anomaly detection

System Prompt Protection
Audit critical

Detection failed: 'bool' object has no attribute 'lower'

Access Control
Audit critical

Detection failed: 'bool' object has no attribute 'lower'

Model Versioning
Audit critical

Detection failed: 'bool' object has no attribute 'lower'

Dependency Scanning
Audit critical

Detection failed: 'bool' object has no attribute 'lower'

API Security
Audit critical

Detection failed: 'bool' object has no attribute 'lower'

Model Watermarking
Audit critical

Implement watermarking for model outputs

Model Watermarking
Audit critical

Use cryptographic watermarks for model weights

Model Watermarking
Audit critical

Track watermark verification for model theft detection

Consent Management
Audit critical

Detection failed: 'bool' object has no attribute 'lower'

NER PII Detection
Audit critical

Use Presidio or SpaCy for NER-based PII detection

NER PII Detection
Audit critical

Implement custom NER models for domain-specific PII

NER PII Detection
Audit critical

Run PII detection on all inputs and outputs

Data Retention Policy
Audit critical

Detection failed: 'bool' object has no attribute 'lower'

GDPR Compliance
Audit critical

Detection failed: 'bool' object has no attribute 'lower'

LLM04: Model Denial of Service
Audit critical

Detection failed: 'bool' object has no attribute 'lower'

LLM10: Model Theft
Audit critical

Implement rate limiting on API endpoints

LLM10: Model Theft
Audit critical

Add query logging and anomaly detection

LLM10: Model Theft
Audit critical

Monitor for extraction patterns

Drift Detection
Audit critical

Implement drift detection with evidently or alibi-detect

Drift Detection
Audit critical

Monitor input data distribution changes

Drift Detection
Audit critical

Set up automated alerts for drift events

Anomaly Detection
Audit critical

Implement anomaly detection on model inputs

Anomaly Detection
Audit critical

Monitor for unusual query patterns

Anomaly Detection
Audit critical

Use statistical methods or ML-based detection

Adversarial Attack Detection
Audit critical

Implement adversarial input detection

Adversarial Attack Detection
Audit critical

Use adversarial robustness toolkits

Adversarial Attack Detection
Audit critical

Add input perturbation analysis

AI Incident Response
Audit critical

Detection failed: 'bool' object has no attribute 'lower'

Model Drift Monitoring
Audit critical

Use Evidently or alibi-detect for drift monitoring

Model Drift Monitoring
Audit critical

Set up automated alerts for significant drift

Model Drift Monitoring
Audit critical

Implement automatic retraining pipelines

Model Explainability
Audit critical

Use SHAP or LIME for model explanations

Model Explainability
Audit critical

Provide decision explanations in outputs

Model Explainability
Audit critical

Implement feature attribution tracking

Bias Detection
Audit critical

Use Fairlearn or AIF360 for bias detection

Bias Detection
Audit critical

Implement fairness metrics tracking

Bias Detection
Audit critical

Test for demographic parity and equalized odds

Model Documentation
Audit critical

Detection failed: 'bool' object has no attribute 'lower'

Compliance Tracking
Audit critical

Detection failed: 'bool' object has no attribute 'lower'

Human Oversight
Audit critical

Detection failed: 'bool' object has no attribute 'lower'

Dependency Scanning
Audit critical

Detection failed: 'bool' object has no attribute 'lower'

Model Provenance Tracking
Audit critical

Use MLflow, DVC, or Weights & Biases for model tracking

Model Provenance Tracking
Audit critical

Implement model versioning with metadata

Model Provenance Tracking
Audit critical

Maintain model registry with provenance information

Confidence Scoring
Audit critical

Detection failed: 'bool' object has no attribute 'lower'

Temperature Control
Audit critical

Detection failed: 'bool' object has no attribute 'lower'

Fairness Metrics
Audit critical

Use Fairlearn or AIF360 for fairness metrics

Fairness Metrics
Audit critical

Implement demographic parity testing

Fairness Metrics
Audit critical

Monitor fairness metrics in production

Bias Testing
Audit critical

Implement adversarial testing for bias

Bias Testing
Audit critical

Test across demographic groups

Bias Testing
Audit critical

Use TextAttack or CheckList for NLP bias testing

Model Cards
Audit critical

Detection failed: 'ConfigAnalyzer' object has no attribute 'file_exists'

Monitoring Integration
Audit critical

Detection failed: 'bool' object has no attribute 'lower'

Audit Logging
Audit critical

Detection failed: 'bool' object has no attribute 'lower'

Rollback Capability
Audit critical

Detection failed: 'bool' object has no attribute 'lower'

Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/partners/openai/langchain_openai/embeddings/base.py:429
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/partners/openai/langchain_openai/chat_models/base.py:1338
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/partners/ollama/langchain_ollama/chat_models.py:1605
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/partners/ollama/langchain_ollama/chat_models.py:945
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/partners/huggingface/langchain_huggingface/chat_models/huggingface.py:1218
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/partners/huggingface/langchain_huggingface/chat_models/huggingface.py:723
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/partners/anthropic/langchain_anthropic/chat_models.py:1701
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

User input 'prompt' embedded in LLM prompt/private/tmp/langchain-test/libs/partners/anthropic/langchain_anthropic/llms.py:291
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/partners/perplexity/langchain_perplexity/chat_models.py:589
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/partners/mistralai/langchain_mistralai/chat_models.py:1156
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/cli/langchain_cli/namespaces/integration.py:135
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

Direct execution of LLM-generated code in 'new'/private/tmp/langchain-test/libs/cli/langchain_cli/namespaces/integration.py:60
Scan critical

Code Execution Security: 1. NEVER execute LLM-generated code directly with exec()/eval() 2. If code execution is necessary, use sandboxed environments (Docker, VM) 3. Implement strict code validation and static analysis before execution 4. Use allowlists for permitted functions/modules 5. Set resource limits (CPU, memory, time) for execution 6. Parse and validate code structure before running 7. Consider using safer alternatives (JSON, declarative configs) 8. Log all code execution attempts with full context 9. Require human review for generated code 10. Use tools like RestrictedPython for safer Python execution

Direct execution of LLM output in 'new'/private/tmp/langchain-test/libs/cli/langchain_cli/namespaces/integration.py:60
Scan critical

NEVER directly execute LLM-generated code: 1. Remove direct execution: - Do not use eval(), exec(), or os.system() - Avoid dynamic code execution - Use safer alternatives (allow-lists) 2. If code generation is required: - Generate code for review only - Require human approval before execution - Use sandboxing (containers, VMs) - Implement strict security policies 3. Use structured outputs: - Return data, not code - Use JSON schemas - Define clear interfaces 4. Add safeguards: - Static code analysis before execution - Whitelist allowed operations - Rate limiting and monitoring

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/cli/langchain_cli/namespaces/app.py:366
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/cli/langchain_cli/namespaces/app.py:260
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

Direct execution of LLM-generated code in 'add'/private/tmp/langchain-test/libs/cli/langchain_cli/namespaces/app.py:128
Scan critical

Code Execution Security: 1. NEVER execute LLM-generated code directly with exec()/eval() 2. If code execution is necessary, use sandboxed environments (Docker, VM) 3. Implement strict code validation and static analysis before execution 4. Use allowlists for permitted functions/modules 5. Set resource limits (CPU, memory, time) for execution 6. Parse and validate code structure before running 7. Consider using safer alternatives (JSON, declarative configs) 8. Log all code execution attempts with full context 9. Require human review for generated code 10. Use tools like RestrictedPython for safer Python execution

Direct execution of LLM output in 'add'/private/tmp/langchain-test/libs/cli/langchain_cli/namespaces/app.py:128
Scan critical

NEVER directly execute LLM-generated code: 1. Remove direct execution: - Do not use eval(), exec(), or os.system() - Avoid dynamic code execution - Use safer alternatives (allow-lists) 2. If code generation is required: - Generate code for review only - Require human approval before execution - Use sandboxing (containers, VMs) - Implement strict security policies 3. Use structured outputs: - Return data, not code - Use JSON schemas - Define clear interfaces 4. Add safeguards: - Static code analysis before execution - Whitelist allowed operations - Rate limiting and monitoring

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/cli/langchain_cli/namespaces/template.py:133
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/cli/langchain_cli/namespaces/template.py:84
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

Direct execution of LLM-generated code in 'new'/private/tmp/langchain-test/libs/cli/langchain_cli/namespaces/template.py:19
Scan critical

Code Execution Security: 1. NEVER execute LLM-generated code directly with exec()/eval() 2. If code execution is necessary, use sandboxed environments (Docker, VM) 3. Implement strict code validation and static analysis before execution 4. Use allowlists for permitted functions/modules 5. Set resource limits (CPU, memory, time) for execution 6. Parse and validate code structure before running 7. Consider using safer alternatives (JSON, declarative configs) 8. Log all code execution attempts with full context 9. Require human review for generated code 10. Use tools like RestrictedPython for safer Python execution

Direct execution of LLM output in 'new'/private/tmp/langchain-test/libs/cli/langchain_cli/namespaces/template.py:19
Scan critical

NEVER directly execute LLM-generated code: 1. Remove direct execution: - Do not use eval(), exec(), or os.system() - Avoid dynamic code execution - Use safer alternatives (allow-lists) 2. If code generation is required: - Generate code for review only - Require human approval before execution - Use sandboxing (containers, VMs) - Implement strict security policies 3. Use structured outputs: - Return data, not code - Use JSON schemas - Define clear interfaces 4. Add safeguards: - Static code analysis before execution - Whitelist allowed operations - Rate limiting and monitoring

User input 'input' embedded in LLM prompt/private/tmp/langchain-test/libs/langchain_v1/langchain/chat_models/base.py:701
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

User input 'request' embedded in LLM prompt/private/tmp/langchain-test/libs/langchain_v1/langchain/agents/middleware/tool_emulator.py:124
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/langchain_v1/langchain/agents/middleware/tool_emulator.py:150
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/langchain_v1/langchain/agents/middleware/file_search.py:281
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

Direct execution of LLM-generated code in '_ripgrep_search'/private/tmp/langchain-test/libs/langchain_v1/langchain/agents/middleware/file_search.py:259
Scan critical

Code Execution Security: 1. NEVER execute LLM-generated code directly with exec()/eval() 2. If code execution is necessary, use sandboxed environments (Docker, VM) 3. Implement strict code validation and static analysis before execution 4. Use allowlists for permitted functions/modules 5. Set resource limits (CPU, memory, time) for execution 6. Parse and validate code structure before running 7. Consider using safer alternatives (JSON, declarative configs) 8. Log all code execution attempts with full context 9. Require human review for generated code 10. Use tools like RestrictedPython for safer Python execution

Direct execution of LLM output in '_ripgrep_search'/private/tmp/langchain-test/libs/langchain_v1/langchain/agents/middleware/file_search.py:259
Scan critical

NEVER directly execute LLM-generated code: 1. Remove direct execution: - Do not use eval(), exec(), or os.system() - Avoid dynamic code execution - Use safer alternatives (allow-lists) 2. If code generation is required: - Generate code for review only - Require human approval before execution - Use sandboxing (containers, VMs) - Implement strict security policies 3. Use structured outputs: - Return data, not code - Use JSON schemas - Define clear interfaces 4. Add safeguards: - Static code analysis before execution - Whitelist allowed operations - Rate limiting and monitoring

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain_v1/langchain/agents/middleware/summarization.py:562
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/langchain_v1/langchain/agents/middleware/context_editing.py:245
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain_v1/langchain/agents/middleware/context_editing.py:244
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain_v1/langchain/agents/middleware/context_editing.py:281
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

User input 'text' embedded in LLM prompt/private/tmp/langchain-test/libs/langchain/langchain_classic/model_laboratory.py:97
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/model_laboratory.py:97
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/model_laboratory.py:83
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

User input 'query' embedded in LLM prompt/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/contextual_compression.py:34
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/contextual_compression.py:27
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

User input 'query' embedded in LLM prompt/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/merger_retriever.py:69
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

User input 'query' embedded in LLM prompt/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/re_phraser.py:76
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/re_phraser.py:61
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

User input 'query' embedded in LLM prompt/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/ensemble.py:224
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

User input 'query' embedded in LLM prompt/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/multi_query.py:179
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/multi_query.py:164
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/memory/chat_memory.py:74
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/memory/vectorstore.py:67
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/memory/summary.py:36
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/memory/summary.py:103
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/memory/summary.py:157
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/memory/entity.py:502
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/memory/entity.py:567
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

User input 'input' embedded in LLM prompt/private/tmp/langchain-test/libs/langchain/langchain_classic/chat_models/base.py:773
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/agent.py:1353
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/agent.py:1351
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/agent.py:419
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/agent.py:531
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/agent.py:1301
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

User input 'prompt_value' embedded in LLM prompt/private/tmp/langchain-test/libs/langchain/langchain_classic/output_parsers/retry.py:245
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

User input 'prompt_value' embedded in LLM prompt/private/tmp/langchain-test/libs/langchain/langchain_classic/output_parsers/retry.py:251
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/output_parsers/retry.py:245
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/output_parsers/retry.py:97
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/output_parsers/retry.py:234
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/output_parsers/fix.py:81
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/output_parsers/fix.py:70
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

LLM output flows to code_execution sink/private/tmp/langchain-test/libs/langchain/langchain_classic/evaluation/loading.py:178
Scan critical

Mitigations for Code Execution: 1. Never pass LLM output to eval() or exec() 2. Use safe alternatives (ast.literal_eval for data) 3. Implement sandboxing if code execution is required

User input 'inputs' embedded in LLM prompt/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/llm.py:117
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

User input 'input_list' embedded in LLM prompt/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/llm.py:241
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/llm.py:246
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/llm.py:112
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/llm.py:120
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/llm.py:224
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/mapreduce.py:113
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/mapreduce.py:99
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/sequential.py:173
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/sequential.py:179
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/sequential.py:164
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

User input 'inputs' embedded in LLM prompt/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/base.py:413
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/base.py:413
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/base.py:369
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

User input 'inputs' embedded in LLM prompt/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/hyde/base.py:96
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/hyde/base.py:89
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/elasticsearch_database/base.py:140
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/elasticsearch_database/base.py:149
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/elasticsearch_database/base.py:160
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/elasticsearch_database/base.py:116
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/sql_database/query.py:33
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/retrieval_qa/base.py:154
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/retrieval_qa/base.py:129
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/qa_with_sources/base.py:167
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/qa_with_sources/base.py:153
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/qa_generation/base.py:116
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/constitutional_ai/base.py:262
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/constitutional_ai/base.py:271
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/constitutional_ai/base.py:307
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/constitutional_ai/base.py:313
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

Model DoS vulnerability: LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/constitutional_ai/base.py:249
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/natbot/base.py:113
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/api/base.py:289
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/api/base.py:300
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/llm_math/base.py:275
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/llm_math/base.py:268
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/combine_documents/reduce.py:321
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/combine_documents/refine.py:145
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/conversational_retrieval/base.py:177
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

User input 'user_input' embedded in LLM prompt/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/flare/base.py:147
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

Model DoS vulnerability: LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/flare/base.py:135
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/flare/base.py:198
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Direct execution of LLM-generated code in '_call'/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/flare/base.py:198
Scan critical

Code Execution Security: 1. NEVER execute LLM-generated code directly with exec()/eval() 2. If code execution is necessary, use sandboxed environments (Docker, VM) 3. Implement strict code validation and static analysis before execution 4. Use allowlists for permitted functions/modules 5. Set resource limits (CPU, memory, time) for execution 6. Parse and validate code structure before running 7. Consider using safer alternatives (JSON, declarative configs) 8. Log all code execution attempts with full context 9. Require human review for generated code 10. Use tools like RestrictedPython for safer Python execution

Direct execution of LLM output in '_call'/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/flare/base.py:198
Scan critical

NEVER directly execute LLM-generated code: 1. Remove direct execution: - Do not use eval(), exec(), or os.system() - Avoid dynamic code execution - Use safer alternatives (allow-lists) 2. If code generation is required: - Generate code for review only - Require human approval before execution - Use sandboxing (containers, VMs) - Implement strict security policies 3. Use structured outputs: - Return data, not code - Use JSON schemas - Define clear interfaces 4. Add safeguards: - Static code analysis before execution - Whitelist allowed operations - Rate limiting and monitoring

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/router/llm_router.py:137
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/evaluation/agents/trajectory_eval_chain.py:298
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/evaluation/agents/trajectory_eval_chain.py:280
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

User input 'input' embedded in LLM prompt/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/openai_assistant/base.py:360
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

User input 'input_dict' embedded in LLM prompt/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/openai_assistant/base.py:560
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/openai_assistant/base.py:371
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/openai_assistant/base.py:382
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/openai_assistant/base.py:379
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/openai_assistant/base.py:288
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/openai_assistant/base.py:542
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/openai_assistant/base.py:668
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/smith/evaluation/runner_utils.py:1659
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/smith/evaluation/runner_utils.py:1674
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/langchain/langchain_classic/smith/evaluation/runner_utils.py:1685
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

LLM output flows to code_execution sink/private/tmp/langchain-test/libs/langchain/langchain_classic/smith/evaluation/runner_utils.py:1687
Scan critical

Mitigations for Code Execution: 1. Never pass LLM output to eval() or exec() 2. Use safe alternatives (ast.literal_eval for data) 3. Implement sandboxing if code execution is required

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/smith/evaluation/runner_utils.py:861
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/smith/evaluation/string_run_evaluator.py:298
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

User input 'query' embedded in LLM prompt/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/document_compressors/listwise_rerank.py:95
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/document_compressors/listwise_rerank.py:88
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/document_compressors/cross_encoder_rerank.py:31
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/document_compressors/chain_extract.py:68
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

User input 'query' embedded in LLM prompt/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/self_query/base.py:316
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/langchain/langchain_classic/retrievers/self_query/base.py:310
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

User input 'input' embedded in LLM prompt/private/tmp/langchain-test/libs/core/langchain_core/language_models/chat_models.py:402
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

User input 'input' embedded in LLM prompt/private/tmp/langchain-test/libs/core/langchain_core/language_models/chat_models.py:492
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

User input 'input' embedded in LLM prompt/private/tmp/langchain-test/libs/core/langchain_core/language_models/fake.py:106
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/language_models/fake.py:98
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

User input 'input' embedded in LLM prompt/private/tmp/langchain-test/libs/core/langchain_core/language_models/llms.py:378
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

User input 'inputs' embedded in LLM prompt/private/tmp/langchain-test/libs/core/langchain_core/language_models/llms.py:431
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/core/langchain_core/language_models/llms.py:102
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/core/langchain_core/language_models/llms.py:109
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/language_models/llms.py:368
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/language_models/llms.py:508
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/language_models/fake_chat_models.py:158
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

User input 'query' embedded in LLM prompt/private/tmp/langchain-test/libs/core/langchain_core/tools/retriever.py:65
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/tools/retriever.py:62
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/core/langchain_core/tools/base.py:995
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/core/langchain_core/tools/base.py:992
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/tools/base.py:628
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/core/langchain_core/callbacks/manager.py:358
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/core/langchain_core/callbacks/manager.py:364
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/core/langchain_core/callbacks/manager.py:352
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/callbacks/manager.py:340
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/core/langchain_core/runnables/graph_mermaid.py:310
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/core/langchain_core/runnables/config.py:181
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/core/langchain_core/runnables/config.py:185
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/core/langchain_core/runnables/config.py:186
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/core/langchain_core/runnables/config.py:553
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/runnables/config.py:135
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

User input 'input_' embedded in LLM prompt/private/tmp/langchain-test/libs/core/langchain_core/runnables/configurable.py:189
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

User input 'input_' embedded in LLM prompt/private/tmp/langchain-test/libs/core/langchain_core/runnables/configurable.py:185
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/runnables/configurable.py:142
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/runnables/configurable.py:178
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

User input 'input' embedded in LLM prompt/private/tmp/langchain-test/libs/core/langchain_core/runnables/branch.py:215
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

User input 'input' embedded in LLM prompt/private/tmp/langchain-test/libs/core/langchain_core/runnables/branch.py:234
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

User input 'input' embedded in LLM prompt/private/tmp/langchain-test/libs/core/langchain_core/runnables/branch.py:224
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

User input 'input' embedded in LLM prompt/private/tmp/langchain-test/libs/core/langchain_core/runnables/branch.py:327
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/core/langchain_core/runnables/branch.py:244
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

Model DoS vulnerability: LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/runnables/branch.py:189
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/runnables/branch.py:296
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

User input 'input_' embedded in LLM prompt/private/tmp/langchain-test/libs/core/langchain_core/runnables/retry.py:188
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

Model DoS vulnerability: LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/runnables/retry.py:179
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

User input 'input' embedded in LLM prompt/private/tmp/langchain-test/libs/core/langchain_core/runnables/fallbacks.py:193
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

User input 'input' embedded in LLM prompt/private/tmp/langchain-test/libs/core/langchain_core/runnables/fallbacks.py:496
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/core/langchain_core/runnables/fallbacks.py:207
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/core/langchain_core/runnables/fallbacks.py:501
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

Model DoS vulnerability: LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/runnables/fallbacks.py:466
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

User input 'input_' embedded in LLM prompt/private/tmp/langchain-test/libs/core/langchain_core/runnables/router.py:162
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

User input 'input_' embedded in LLM prompt/private/tmp/langchain-test/libs/core/langchain_core/runnables/router.py:158
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/runnables/router.py:107
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/runnables/router.py:153
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

User input 'input_' embedded in LLM prompt/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:2060
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

User input 'input_' embedded in LLM prompt/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:979
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

User input 'input_' embedded in LLM prompt/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:975
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

User input 'input_' embedded in LLM prompt/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:3861
Scan critical

Mitigations: 1. Use structured prompt templates (e.g., LangChain PromptTemplate) 2. Implement input sanitization to remove prompt injection patterns 3. Use separate 'user' and 'system' message roles (ChatML format) 4. Apply input validation and length limits 5. Use allowlists for expected input formats 6. Consider prompt injection detection libraries

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:2326
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:1130
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:2027
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:2261
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: LLM calls in loops, No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:3127
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:3830
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:5685
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:901
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:970
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

Model DoS vulnerability: No rate limiting, No input length validation, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/runnables/base.py:3852
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/core/langchain_core/tracers/core.py:109
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/core/langchain_core/tracers/stdout.py:78
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/core/langchain_core/tracers/stdout.py:105
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/core/langchain_core/tracers/stdout.py:114
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

LLM output flows to command_injection sink/private/tmp/langchain-test/libs/core/langchain_core/tracers/stdout.py:125
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable

Model DoS vulnerability: LLM calls in loops, No rate limiting, No timeout configuration, No token/context limits/private/tmp/langchain-test/libs/core/langchain_core/tracers/stdout.py:66
Scan critical

Model DoS Mitigations: 1. Implement rate limiting per user/IP (@limiter.limit('10/minute')) 2. Validate and limit input length (max 1000 chars) 3. Set token limits (max_tokens=500) 4. Configure timeouts (timeout=30 seconds) 5. Avoid LLM calls in unbounded loops 6. Implement circuit breakers for cascading failures 7. Monitor and alert on resource usage 8. Use queuing for batch processing 9. Implement cost controls and budgets

LLM output used in dangerous command_injection sink/private/tmp/langchain-test/libs/core/langchain_core/tracers/base.py:49
Scan critical

Mitigations for Command Injection: 1. Never pass LLM output to shell commands 2. Use subprocess with shell=False and list arguments 3. Apply allowlist validation for expected values 4. Use shlex.quote() if shell execution is unavoidable 5. Consider alternative APIs that don't use shell

Insecure tool function '_run_llm' executes dangerous operations/private/tmp/langchain-test/libs/langchain/langchain_classic/smith/evaluation/runner_utils.py:861
Scan high

Secure Tool/Plugin Implementation: 1. NEVER execute shell commands from LLM output directly 2. Use allowlists for permitted commands/operations 3. Validate all file paths against allowed directories 4. Use parameterized queries - never raw SQL from LLM 5. Validate URLs against allowlist before HTTP requests 6. Implement strict input schemas (JSON Schema, Pydantic) 7. Add rate limiting and request throttling 8. Log all tool invocations for audit 9. Use principle of least privilege 10. Implement human-in-the-loop for destructive operations

Critical decision without oversight in '_construct_responses_api_payload'/private/tmp/langchain-test/libs/partners/openai/langchain_openai/chat_models/base.py:3754
Scan low

Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards

Critical decision without oversight in 'get_num_tokens_from_messages'/private/tmp/langchain-test/libs/partners/openai/langchain_openai/chat_models/base.py:1724
Scan low

Critical security decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards

Critical decision without oversight in '_call'/private/tmp/langchain-test/libs/partners/anthropic/langchain_anthropic/llms.py:249
Scan low

Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards

Critical decision without oversight in 'bind_tools'/private/tmp/langchain-test/libs/partners/deepseek/langchain_deepseek/chat_models.py:395
Scan low

Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards

Critical decision without oversight in 'load_memory_variables'/private/tmp/langchain-test/libs/langchain/langchain_classic/memory/entity.py:502
Scan low

Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards

Critical decision without oversight in 'save_context'/private/tmp/langchain-test/libs/langchain/langchain_classic/memory/entity.py:567
Scan low

Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards

Critical decision without oversight in 'plan'/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/agent.py:419
Scan low

Critical security decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards

Critical decision without oversight in 'plan'/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/agent.py:531
Scan low

Critical security decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards

Critical decision without oversight in 'query'/private/tmp/langchain-test/libs/langchain/langchain_classic/indexes/vectorstore.py:34
Scan low

Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards

Critical decision without oversight in 'query_with_sources'/private/tmp/langchain-test/libs/langchain/langchain_classic/indexes/vectorstore.py:104
Scan low

Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards

Critical decision without oversight in 'create_sql_query_chain'/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/sql_database/query.py:33
Scan low

Critical security decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards

Critical decision without oversight in '_call'/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/constitutional_ai/base.py:249
Scan low

Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards

Critical decision without oversight in 'from_llm'/private/tmp/langchain-test/libs/langchain/langchain_classic/chains/flare/base.py:250
Scan low

Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards

Critical decision without oversight in 'create_openai_tools_agent'/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/openai_tools/base.py:17
Scan low

Critical security decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards

Critical decision without oversight in 'create_openai_functions_agent'/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/openai_functions_agent/base.py:287
Scan low

Critical security decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards

Critical decision without oversight in 'create_tool_calling_agent'/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/tool_calling_agent/base.py:18
Scan low

Critical security decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards

Critical decision without oversight in 'create_json_chat_agent'/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/json_chat/base.py:14
Scan low

Critical security decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards

Critical decision without oversight in 'create_xml_agent'/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/xml/base.py:115
Scan low

Critical security decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards

Critical decision without oversight in 'invoke'/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/openai_assistant/base.py:288
Scan low

Critical security decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards

Critical decision without oversight in 'create_react_agent'/private/tmp/langchain-test/libs/langchain/langchain_classic/agents/react/agent.py:16
Scan low

Critical security decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards

Critical decision without oversight in '_run_llm'/private/tmp/langchain-test/libs/langchain/langchain_classic/smith/evaluation/runner_utils.py:861
Scan low

Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards

Critical decision without oversight in 'run_on_dataset'/private/tmp/langchain-test/libs/langchain/langchain_classic/smith/evaluation/runner_utils.py:1512
Scan low

Critical security, data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards

Critical decision without oversight in '_prepare_input'/private/tmp/langchain-test/libs/langchain/langchain_classic/smith/evaluation/string_run_evaluator.py:298
Scan low

Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards

Critical decision without oversight in '_end_trace'/private/tmp/langchain-test/libs/core/langchain_core/tracers/base.py:45
Scan low

Critical data_modification decision requires human oversight: 1. Implement human-in-the-loop review: - Add review queue for high-stakes decisions - Require explicit human approval before execution - Log all decisions for audit trail 2. Add verification mechanisms: - Cross-reference with trusted sources - Implement multi-step verification - Use confidence thresholds 3. Include safety checks: - Set limits on transaction amounts - Require secondary confirmation - Implement rollback mechanisms 4. Add disclaimers: - Inform users output may be incorrect - Recommend professional consultation - Document limitations clearly 5. Monitor and review: - Track decision outcomes - Review failures and near-misses - Continuously improve safeguards