LLM Model Detected
Description
The application exposes enough information to identify the specific Large Language Model (LLM) it uses. This finding provides visibility into the AI/ML components integrated within the application, which is important for understanding the attack surface: once an attacker knows which model is in use, they can target known jailbreaks, prompt-injection techniques, and published weaknesses specific to that model.
Remediation
1. Implement response filtering to prevent the LLM from disclosing its identity, version, or configuration details in user-facing outputs.
2. Review and harden the LLM integration by implementing input validation, output sanitization, and rate limiting to prevent abuse.
3. Ensure proper access controls are in place for LLM endpoints and restrict direct user interaction where possible.
4. Monitor LLM interactions for suspicious patterns such as prompt injection attempts or data exfiltration queries.
5. Apply the principle of least privilege to the LLM's access to backend systems, databases, and APIs.
6. Regularly update the LLM implementation and review security advisories specific to the identified model.
7. Consider implementing a security layer that validates and sanitizes both inputs to and outputs from the LLM before processing or displaying them to users.
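The filtering and monitoring steps (1, 4, and 7) can be sketched as a minimal guard layer that screens prompts before they reach the model and redacts model self-identification from responses. The regex patterns below are illustrative placeholders, not a complete or production-grade ruleset:

```python
import re

# Hypothetical patterns for illustration only; a real deployment should use
# a maintained, regularly updated ruleset.
IDENTITY_PATTERNS = [
    # Catches responses like "I am GPT-4" or "This is Claude" (step 1).
    re.compile(r"\b(i am|i'm|this is)\s+(an?\s+)?(gpt-[\w.]+|claude[\w.-]*|llama[\w.-]*)\b", re.I),
]
INJECTION_PATTERNS = [
    # Common prompt-injection phrasings (step 4).
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"reveal (your )?(system prompt|instructions)", re.I),
]

def screen_input(prompt: str) -> str:
    """Reject prompts matching known injection patterns before they reach the LLM."""
    for pat in INJECTION_PATTERNS:
        if pat.search(prompt):
            raise ValueError("prompt rejected: possible injection attempt")
    return prompt

def filter_output(text: str) -> str:
    """Redact model self-identification from responses before display."""
    for pat in IDENTITY_PATTERNS:
        text = pat.sub("[redacted]", text)
    return text
```

In practice this layer would also log rejected prompts for the monitoring described in step 4, so injection attempts surface in security telemetry rather than failing silently.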
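The rate limiting in step 2 could take the form of a per-client token bucket in front of the LLM endpoint. This is one possible sketch; the capacity and refill rate are arbitrary example values, and a production system would typically use shared state (e.g. a cache service) rather than in-process memory:

```python
import time
from collections import defaultdict

class RateLimiter:
    """Token bucket per client: up to `capacity` requests, refilled at `rate` per second."""

    def __init__(self, capacity: int = 10, rate: float = 1.0):
        self.capacity = capacity
        self.rate = rate
        # Each client starts with a full bucket; state is (tokens, last_seen).
        self.state = defaultdict(lambda: (float(capacity), time.monotonic()))

    def allow(self, client_id: str) -> bool:
        tokens, last = self.state[client_id]
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.state[client_id] = (tokens - 1, now)
            return True
        self.state[client_id] = (tokens, now)
        return False
```

Calls that return `False` should be rejected before any prompt is sent to the model, which limits both abuse and cost exposure.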