The OWASP Top 10 for Large Language Model Applications has become the de facto standard for AI security risk identification. The 2025 update reflects a year of real-world attacks, enterprise deployments, and regulatory developments that changed how organisations must approach AI governance.
If your organisation deploys ChatGPT, Copilot, Claude, or any LLM-based system, you need documented controls addressing these risks. Auditors, customers, and regulators are already asking.
The 2025 Top 10: What’s on the List
LLM01: Prompt Injection — Attackers manipulate LLM behaviour through crafted inputs. This remains the top risk because it’s difficult to fully mitigate and can lead to data exfiltration, unauthorised actions, or system compromise.
LLM02: Sensitive Information Disclosure — LLMs inadvertently reveal confidential data from training sets or conversation context. This is particularly concerning for enterprise deployments where proprietary information may be exposed.
LLM03: Supply Chain — Risks from third-party models, training data, plugins, and integration components. Most organisations don’t train their own models, making supply chain risk management critical.
LLM04: Data and Model Poisoning — Manipulation of training data or fine-tuning processes to embed malicious behaviour. Relevant for organisations that customise models or use retrieval-augmented generation.
LLM05: Improper Output Handling — Failure to validate, sanitise, or properly handle LLM outputs before use in downstream systems. Outputs may contain malicious code, incorrect information, or unexpected formats.
LLM06: Excessive Agency — LLMs granted too much autonomy or access to sensitive functions. Agentic AI systems that can take actions on behalf of users present amplified risk.
LLM07: System Prompt Leakage — Exposure of system prompts that reveal business logic, security controls, or sensitive instructions embedded in the application.
LLM08: Vector and Embedding Weaknesses — Vulnerabilities in retrieval-augmented generation systems, including manipulation of vector databases and embedding poisoning.
LLM09: Misinformation — LLMs generating false, misleading, or fabricated information presented as fact. Particularly concerning for customer-facing applications.
LLM10: Unbounded Consumption — Resource exhaustion attacks through excessive API calls, token consumption, or computationally expensive queries.
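To illustrate the kind of control LLM10 calls for, here is a minimal sketch of a per-client token budget. The limits, window, and `client_id` scheme are assumptions; production deployments typically enforce this at an API gateway rather than in application code.

```python
import time
from collections import defaultdict

# Hypothetical budget; real limits depend on your provider's pricing
# and your application's expected usage patterns.
MAX_TOKENS_PER_HOUR = 50_000
WINDOW_SECONDS = 3600

_usage = defaultdict(list)  # client_id -> list of (timestamp, tokens)

def check_token_budget(client_id: str, requested_tokens: int) -> bool:
    """Return True if the request fits within the client's hourly token budget."""
    now = time.time()
    # Drop usage records that have aged out of the window.
    _usage[client_id] = [(t, n) for t, n in _usage[client_id]
                         if now - t < WINDOW_SECONDS]
    spent = sum(n for _, n in _usage[client_id])
    if spent + requested_tokens > MAX_TOKENS_PER_HOUR:
        return False  # reject or queue; also a good place to alert on abuse
    _usage[client_id].append((now, requested_tokens))
    return True
```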
Documentation Requirements for Each Risk
Addressing OWASP LLM Top 10 risks requires more than technical controls. You need documented policies, procedures, and evidence that demonstrates governance.
For Prompt Injection (LLM01):
- Input validation procedures (a minimal screening sketch follows this list)
- Prompt template controls
- Output filtering requirements
- Incident response procedures for injection attempts
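To make the input validation item concrete, here is a minimal deny-pattern screen. The patterns are illustrative assumptions, and signature matching alone cannot fully mitigate prompt injection; it is one layer alongside template controls, output filtering, and incident response.

```python
import re

# Illustrative deny-patterns only. Pattern matching catches low-effort
# injection attempts but cannot fully mitigate LLM01.
SUSPECT_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior|above) instructions", re.I),
    re.compile(r"reveal (your )?(system|hidden) prompt", re.I),
    re.compile(r"you are now (in )?developer mode", re.I),
]

def screen_user_input(text: str) -> tuple[bool, str | None]:
    """Return (allowed, matched_pattern) for a user-supplied prompt."""
    for pattern in SUSPECT_PATTERNS:
        if pattern.search(text):
            return False, pattern.pattern  # log and route to incident response
    return True, None
```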
For Sensitive Information Disclosure (LLM02):
- Data classification policy for AI systems
- Training data governance procedures
- Output review processes
- Data leakage prevention controls (a redaction sketch follows this list)
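As one hedged example of an output review and leakage-prevention control, the sketch below redacts a few illustrative patterns from model output before it leaves the system. The detectors are assumptions; a real deployment would use a DLP service or library tuned to your data classification policy.

```python
import re

# Illustrative detectors only; tune to your data classification policy.
DETECTORS = {
    "email":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk_ni":   re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "api_key": re.compile(r"\b(sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def redact_output(text: str) -> tuple[str, list[str]]:
    """Redact classified patterns from model output; return text and hit labels."""
    hits = []
    for label, pattern in DETECTORS.items():
        if pattern.search(text):
            hits.append(label)
            text = pattern.sub(f"[REDACTED:{label}]", text)
    return text, hits  # non-empty hits should feed the output review process
```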
For Supply Chain (LLM03):
- AI vendor assessment criteria
- Third-party model evaluation procedures (a hash-pinning sketch follows this list)
- Plugin and integration security requirements
- Continuous monitoring for supply chain risks
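One concrete supply chain control is to pin evaluated model artifacts by hash, the same idea as a dependency lockfile: pin what you assessed, and fail closed if the artifact you later download differs. The manifest format below is a hypothetical illustration, not a standard.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical manifest format: {"model-name": "<sha256 of the artifact>"}.
def verify_model_artifact(artifact: Path, manifest_path: Path,
                          model_name: str) -> bool:
    """Fail closed if a downloaded model artifact differs from the pinned hash."""
    pinned = json.loads(manifest_path.read_text())[model_name]
    digest = hashlib.sha256(artifact.read_bytes()).hexdigest()
    if digest != pinned:
        raise RuntimeError(
            f"{model_name}: hash {digest} does not match pinned {pinned}")
    return True
```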
For Excessive Agency (LLM06):
- Principle of least privilege for AI systems
- Human-in-the-loop requirements
- Action approval workflows (an approval-gate sketch follows this list)
- Audit logging for AI-initiated actions
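The sketch below shows how least privilege, human-in-the-loop approval, and audit logging can combine at a single enforcement point. The tool names and the read-only versus state-changing split are assumptions for illustration.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai.actions")

# Assumed policy: read-only tools run automatically; anything that mutates
# state requires explicit human approval. Tool names are illustrative.
AUTO_APPROVED = {"search_kb", "read_ticket"}
NEEDS_HUMAN = {"send_email", "update_record", "issue_refund"}

def execute_tool(tool: str, args: dict, approver: str | None = None):
    """Gate an AI-initiated action behind an allowlist, approval, and logging."""
    if tool not in AUTO_APPROVED | NEEDS_HUMAN:
        raise PermissionError(f"Tool {tool!r} is not on the allowlist")
    if tool in NEEDS_HUMAN and approver is None:
        raise PermissionError(f"Tool {tool!r} requires human approval")
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "tool": tool, "args": args, "approver": approver,
    }))
    # ... dispatch to the real tool implementation here ...
```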
Mapping to NIST AI RMF
The OWASP Top 10 for LLMs aligns with NIST AI Risk Management Framework functions:
Map — Identify where AI systems operate and their risk context. Document all LLM deployments, use cases, and data flows.
Measure — Assess risks against the OWASP Top 10. Conduct regular vulnerability assessments and penetration testing of AI systems.
Manage — Implement controls for identified risks. Document technical controls, procedural safeguards, and residual risk acceptance.
Govern — Establish oversight structures. Define roles, responsibilities, and accountability for AI risk management.
Your AI governance documentation should demonstrate coverage across all four NIST AI RMF functions while specifically addressing the OWASP Top 10 risks relevant to your deployment.
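One lightweight way to make that coverage auditable is to track, per OWASP risk, which documented control supports each RMF function, then report the gaps. The entries below are illustrative assumptions, not a complete mapping.

```python
# Per OWASP risk, the documented controls supporting each NIST AI RMF function.
COVERAGE = {
    "LLM01 Prompt Injection": {
        "Map": ["LLM deployment inventory"],
        "Measure": ["Quarterly injection testing"],
        "Manage": ["Input validation standard", "Output filtering standard"],
        "Govern": ["AI acceptable use policy"],
    },
    "LLM06 Excessive Agency": {
        "Map": ["Agent tool inventory"],
        "Measure": ["Privilege reviews"],
        "Manage": ["Approval workflows", "Audit logging"],
        "Govern": [],  # a gap the report below would surface
    },
}

def coverage_gaps(coverage: dict) -> dict[str, list[str]]:
    """Return, per risk, the RMF functions with no documented control."""
    functions = ("Map", "Measure", "Manage", "Govern")
    return {risk: [f for f in functions if not controls.get(f)]
            for risk, controls in coverage.items()}

print(coverage_gaps(COVERAGE))  # LLM06 shows 'Govern' as an open gap
```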
What Regulators and Customers Want to See
The EU AI Act creates legal obligations for high-risk AI systems. Even if your organisation isn’t directly subject to EU regulation, enterprise customers increasingly require AI governance evidence in security questionnaires.
Expect questions like:
- How do you prevent prompt injection attacks?
- What controls prevent sensitive data leakage through AI systems?
- How do you assess third-party AI model risks?
- What human oversight exists for AI-generated outputs?
- How do you monitor for AI system misuse?
Without documented policies and procedures, you cannot answer these questions credibly.
Building Your AI Governance Program
Start with an AI inventory. Document every LLM deployment, including shadow AI usage by employees accessing ChatGPT or similar tools through personal accounts.
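A minimal sketch of what an inventory record might capture follows; the field names are assumptions. The goal is enough context to scope risk assessments: who owns the deployment, what data it touches, and whether it was sanctioned.

```python
from dataclasses import dataclass, field

# Illustrative inventory schema; adapt fields to your asset register.
@dataclass
class LLMDeployment:
    name: str                  # e.g. "Support chatbot"
    provider: str              # e.g. "OpenAI", "Anthropic", "self-hosted"
    owner: str                 # accountable business owner
    use_cases: list[str] = field(default_factory=list)
    data_classifications: list[str] = field(default_factory=list)
    sanctioned: bool = True    # False records shadow AI found in discovery
```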
Create an AI Acceptable Use Policy. Define permitted uses, prohibited activities, data handling requirements, and employee responsibilities.
Develop AI-specific risk assessment procedures. Standard IT risk assessment frameworks don’t adequately cover AI-specific risks like prompt injection or model poisoning.
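As a hedged illustration, an AI-specific register can start from a simple likelihood times impact score over the OWASP risks. The scales and example scores below are assumptions to calibrate against your own methodology.

```python
# Scales and scores are assumptions; calibrate to your own methodology.
SCALE = range(1, 6)  # 1 = very low ... 5 = very high

def risk_score(likelihood: int, impact: int) -> int:
    """Classic likelihood x impact score on a 5x5 matrix (1..25)."""
    assert likelihood in SCALE and impact in SCALE
    return likelihood * impact

# Example register entries; e.g. treat scores of 15+ as requiring treatment.
register = {
    "LLM01 Prompt Injection": risk_score(likelihood=4, impact=4),          # 16
    "LLM04 Data and Model Poisoning": risk_score(likelihood=2, impact=5),  # 10
}
```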
Establish vendor assessment criteria for AI providers. Your standard vendor questionnaire needs AI-specific questions addressing model provenance, training data governance, and security controls.
Implement technical controls with documented procedures. Input validation, output filtering, access controls, and monitoring all require documented implementation standards.
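For output handling specifically, the documented standard can be as simple as "treat model output as untrusted input to downstream systems". A minimal sketch, assuming the output is rendered into a web page:

```python
import html

# LLM05: treat model output as untrusted. Escaping before rendering means any
# markup or script the model emits is displayed as text, not executed.
def render_model_output(raw: str) -> str:
    return f"<div class='llm-answer'>{html.escape(raw)}</div>"
```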
Build incident response capabilities for AI-specific scenarios. Your incident response plan should include playbooks for prompt injection attempts, data leakage incidents, and model manipulation.
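A playbook can start life as structured data and then be exported into the incident response plan; the trigger, steps, and owner below are illustrative assumptions for one scenario.

```python
# Trigger, steps, and owner are illustrative assumptions.
PROMPT_INJECTION_PLAYBOOK = {
    "trigger": "Input screening alert or report of anomalous model behaviour",
    "default_severity": "medium",
    "steps": [
        "Capture the full conversation, system prompt version, and model ID",
        "Suspend the affected session or API key",
        "Assess whether outputs reached downstream systems or end users",
        "Review audit logs for AI-initiated actions during the session",
        "Escalate to the data leakage playbook if sensitive data was disclosed",
    ],
    "owner": "Security operations",
}
```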
Ridgeline Cyber Defence provides AI Security Toolkit documentation mapped to the NIST AI RMF and the OWASP LLM Top 10 2025, including policies, risk assessment frameworks, vendor evaluation criteria, and incident response procedures.
Related Reading
- Risk Management Program: Assessment, BIA & Vendor Risk
- Security Policy Documentation: Enterprise Quality
AI Security Toolkit
Complete AI security program — 22 professional documents plus an intelligent browser-based governance app with 46 security controls, risk assessment, ethics reviews, and AI-powered assistance.
Implementation Services
Need this customised to your organisation?
We'll customise any product to your organisation and deliver in 1–2 weeks. Fixed price, fully async. You review it, your team runs it.
Foundation $1,997 · Toolkit $2,997 · Suite $5,997 · Program $8,997