LLM Server-Side Request Forgery (SSRF)
Description
The web application's Large Language Model (LLM) implementation is vulnerable to Server-Side Request Forgery (SSRF), allowing attackers to manipulate the model into making unauthorized HTTP requests through specially crafted prompts. When exploited, the LLM fetches content from attacker-specified URLs and incorporates the responses into its output, effectively turning the AI system into a proxy for accessing both internal and external network resources. This vulnerability arises when LLMs are granted network access capabilities without proper input validation and request filtering controls.
Remediation
Implement defense-in-depth controls to prevent unauthorized network requests from the LLM system:
1. Network Segmentation and Access Controls:
• Deploy the LLM in an isolated network segment with strict egress filtering
• Block access to private IP ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 169.254.0.0/16, 127.0.0.0/8)
• Deny access to cloud metadata endpoints by default
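The private and link-local ranges above can be checked programmatically before any fetch. A minimal sketch using Python's standard `ipaddress` module (the helper name `is_ip_blocked` is illustrative, not from an existing library):

```python
import ipaddress

# Covers the RFC 1918 private ranges, loopback, and the link-local range
# used by cloud metadata services (e.g. 169.254.169.254).
BLOCKED_NETWORKS = [
    ipaddress.ip_network(cidr)
    for cidr in (
        "10.0.0.0/8",
        "172.16.0.0/12",
        "192.168.0.0/16",
        "169.254.0.0/16",
        "127.0.0.0/8",
    )
]

def is_ip_blocked(ip_str: str) -> bool:
    """Return True if the address falls inside any blocked range."""
    try:
        addr = ipaddress.ip_address(ip_str)
    except ValueError:
        return True  # unparseable input: fail closed
    return any(addr in net for net in BLOCKED_NETWORKS)
```

Note that this check must run on the resolved IP address, not only on the hostname, or DNS-based bypasses remain possible.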
2. Implement URL Allowlisting:
If external requests are required, use a strict allowlist approach:
# Python example using URL validation
from urllib.parse import urlparse

ALLOWED_DOMAINS = ['api.trusted-service.com', 'data.partner.com']
BLOCKED_HOSTS = ['127.0.0.1', '169.254.169.254', 'localhost']

def is_url_safe(url):
    parsed = urlparse(url)
    # Require HTTPS
    if parsed.scheme != 'https':
        return False
    # Block loopback hosts and metadata endpoints as defense-in-depth,
    # even if the allowlist below is misconfigured
    if parsed.hostname in BLOCKED_HOSTS:
        return False
    # Allow only explicitly trusted domains
    if parsed.hostname not in ALLOWED_DOMAINS:
        return False
    return True
3. Disable Direct Network Access:
• Remove or disable LLM plugins and tools that enable URL fetching or web browsing
• If network capabilities are essential, route all requests through a hardened proxy with content inspection
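If a fetch capability must remain, outbound traffic can be pinned to the proxy at the HTTP-client level so the LLM service never connects directly. A sketch using the standard library (the proxy address `proxy.internal:3128` is a placeholder for your environment):

```python
import urllib.request

# Hypothetical hardened egress proxy; substitute your own address.
PROXY = "http://proxy.internal:3128"

# Opener that routes both HTTP and HTTPS through the proxy.
opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
)

def fetch_via_proxy(url: str, timeout: float = 5.0) -> bytes:
    """Fetch a URL strictly through the egress proxy."""
    with opener.open(url, timeout=timeout) as resp:
        return resp.read()
```

Combined with egress filtering that drops all other outbound traffic, this makes the proxy the single enforcement point for content inspection.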
4. Input Validation and Prompt Filtering:
• Implement detection rules to identify and block prompts containing URLs, IP addresses, or SSRF-indicative patterns
• Use content filtering to strip or sanitize URL-like strings from user inputs before processing
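The stripping step above can be sketched with regular expressions. The patterns below are intentionally broad (they will also catch benign URLs) and should be tuned to your inputs; the function name is illustrative:

```python
import re

# Match http/https URLs and bare IPv4 addresses in user input.
URL_PATTERN = re.compile(r"\bhttps?://\S+", re.IGNORECASE)
IP_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def sanitize_prompt(prompt: str) -> str:
    """Replace URL-like and IPv4-like strings with neutral placeholders."""
    cleaned = URL_PATTERN.sub("[URL removed]", prompt)
    cleaned = IP_PATTERN.sub("[IP removed]", cleaned)
    return cleaned
```

Sanitization is a supplementary control: attackers can obfuscate URLs (encodings, split strings), so it should never be the only defense.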
5. Monitoring and Logging:
• Log all outbound connection attempts with full request details (destination, headers, response codes)
• Set up alerts for connections to suspicious endpoints or unusual traffic patterns
• Regularly review logs for potential SSRF exploitation attempts
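A minimal sketch of the logging hook, called before each outbound attempt. The logger name and field names are illustrative; align them with your SIEM schema:

```python
import logging

# Dedicated logger for LLM egress events, so alerts can key on its name.
egress_log = logging.getLogger("llm.egress")

def log_outbound_attempt(url, method, status=None):
    """Record destination, method, and response code for audit review."""
    egress_log.info(
        "outbound request",
        extra={"dest": url, "method": method, "status": status},
    )
```

Emitting the destination and status as structured fields (rather than interpolated strings) makes it straightforward to alert on connections to metadata endpoints or private ranges.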
6. Principle of Least Privilege:
• Run the LLM service with minimal network permissions
• Use service accounts with no access to sensitive internal resources
• Implement application-level authentication for any permitted external API calls
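For the last point, each permitted external call can carry its own application-level credential so the LLM runtime itself holds no broad network privileges. A sketch using the standard library (the endpoint and key are placeholders):

```python
import urllib.request

def build_authenticated_request(url: str, api_key: str) -> urllib.request.Request:
    """Build a request carrying an application-level bearer token."""
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {api_key}"},
    )
```

Scoping one key per downstream service means a compromised LLM session can at most reach the specific APIs it was explicitly granted.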