OWASP Top 10 for LLM Applications (2025) + OWASP ASI Top 10 for AI Agents (2026): What Hudson Valley SMBs Deploying AI Must Secure Now

Two OWASP frameworks. Twenty risks. One practical action plan for businesses using ChatGPT, Copilot, chatbots, or autonomous AI agents.

By Jim Venuto | October 22, 2025 | Hudson Valley CISO

Local Hook: If your Hudson Valley business uses ChatGPT, Microsoft Copilot, customer-facing chatbots, or autonomous AI agents for sales, support, or operations, the 2025 OWASP Top 10 for LLM Applications and the new 2026 OWASP ASI (Agentic Security Initiative) Top 10 define the security controls you need before your first data breach triggers NY SHIELD Act notification requirements.

Plain-English Obligations

The OWASP Top 10 for Large Language Model (LLM) Applications provides security guidance for organizations deploying generative AI technologies like ChatGPT, Claude, Gemini, and custom-trained models. The 2025 update reflects lessons learned from real-world LLM deployments and introduces new risks related to Retrieval-Augmented Generation (RAG), resource consumption, and supply-chain vulnerabilities.

The OWASP ASI Top 10 for Agentic Applications (2026) is a new framework addressing autonomous AI agents—systems that can plan, use tools, make decisions, and execute code without human intervention. As 2025 emerges as the “year of LLM agents,” these guidelines cover goal hijacking, tool misuse, cascading failures, and rogue agent scenarios that traditional LLM security models don’t address.

Neither framework creates legal obligations, but they define security baselines that map directly to compliance requirements under NY SHIELD Act “reasonable safeguards,” FTC Safeguards Rule information security programs, and NIST AI Risk Management Framework governance controls.

The 2025 OWASP Top 10 for LLM Applications

Risk | What Changed in 2025 | Hudson Valley SMB Impact
LLM01: Prompt Injection | Remains #1; expanded coverage of indirect injection via uploaded files, URLs, and RAG data sources | Customer-facing chatbots can be manipulated to leak data, execute unauthorized actions, or bypass policy guardrails
LLM02: Sensitive Information Disclosure | Moved up from #6 in 2023; models trained on or retrieving proprietary data may leak trade secrets, PII, or credentials | Risk of NY SHIELD breach notification if customer PII is embedded in training data or RAG knowledge bases
LLM03: Supply Chain Vulnerabilities | Expanded to include poisoned datasets, vulnerable pre-trained models, and third-party plugins | Vendor risk: SaaS AI tools may introduce supply-chain compromise without your knowledge
LLM04: Data and Model Poisoning | Broadened from 2023's "Training Data Poisoning"; covers training-time and RAG-time data corruption | RAG systems (53% of companies rely on RAG vs. fine-tuning) are vulnerable to knowledge-base poisoning via malicious documents
LLM05: Improper Output Handling | Renamed from "Insecure Output Handling"; covers downstream risks when LLM outputs are executed (SQL injection, XSS, command injection) | If your app takes LLM-generated code or queries and runs them, you're exposed to remote code execution
LLM06: Excessive Agency | Moved up from #8 in 2023; addresses over-privileged agents with tool access | AI agents with database write access, payment APIs, or external integrations can cause financial or data damage if misused
LLM07: System Prompt Leakage | New in 2025; attackers extract hidden instructions, business logic, or API keys from system prompts | Proprietary workflows embedded in prompts (e.g., pricing logic, compliance rules) can be stolen
LLM08: Vector and Embedding Weaknesses | New in 2025; focuses on RAG security (vector database poisoning, embedding model attacks) | RAG-powered chatbots pulling from corrupted or malicious vector stores will produce harmful outputs
LLM09: Misinformation | Broadened from 2023's "Overreliance"; covers hallucinations, biased outputs, and harmful content generation | Customer-facing bots generating false product claims, medical advice, or legal guidance create liability
LLM10: Unbounded Consumption | Renamed from "Model Denial of Service"; now covers cost/resource abuse | Attacker-induced token consumption can rack up $10k+ API bills overnight; lack of rate limits enables this

The 2026 OWASP ASI Top 10 for AI Agents

Risk | Description | Hudson Valley SMB Scenario
ASI01: Agent Goal Hijack | Attacker redirects the agent's objectives via prompt injection, malicious artifacts, or external data | Sales agent instructed to "offer 90% discount to all customers" via crafted email input
ASI02: Tool Misuse & Exploitation | Agent uses legitimate tools in harmful ways due to over-privileged access | AI agent with Stripe API access creates fraudulent refunds or subscriptions
ASI03: Identity & Privilege Abuse | Agent escalates privileges by abusing its own identity or inheriting user credentials | Agent impersonates an admin user to access restricted financial records
ASI04: Agentic Supply Chain Vulnerabilities | Compromised models, poisoned RAG data, vulnerable tool definitions, or malicious plugins | Third-party agent marketplace plugin contains backdoor code that exfiltrates data
ASI05: Unexpected Code Execution (RCE) | Agent tricked into generating and executing malicious code | Coding agent writes a Python script that uploads the customer database to an attacker-controlled server
ASI06: Memory & Context Poisoning | Persistent corruption of the agent's long-term memory | Attacker seeds false "facts" into the agent's memory so future responses leak credentials
ASI07: Insecure Inter-Agent Communication | Multi-agent systems allowing message forging, spoofing, or MITM attacks | Agent A impersonates Agent B to approve unauthorized transactions
ASI08: Cascading Failures | Small error triggers an uncontrolled chain reaction in the agent workflow | Failed payment triggers refund, then duplicate order, then inventory depletion and financial loss
ASI09: Human-Agent Trust Exploitation | Attacker manipulates agent output to deceive the human-in-the-loop | Agent presents a fraudulent invoice for CEO approval with fabricated vendor justification
ASI10: Rogue Agents | Agents operating outside their intended mandate due to governance failure | Auto-updated agent begins sending customer data to an unapproved third-party analytics service

Practical Implementation Plan

PHASE 1 — WEEK 1 Inventory & Risk Classification

Identify all LLM/AI agent deployments (internal tools, customer-facing bots, code assistants, automation workflows).

Classify by autonomy level: passive (chat-only) vs. agentic (tool-using, decision-making).

Map data flows: what PII, trade secrets, or regulated data can the model access or output?

Document the tools and APIs each agent can invoke.
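A Phase 1 inventory can be as simple as a structured record per deployment. The sketch below is illustrative (the field names and risk rule are assumptions, not part of either OWASP framework) but shows how classifying by autonomy and data access immediately surfaces the deployments that need priority review.

```python
from dataclasses import dataclass, field

# Hypothetical inventory record; field names and the high-risk rule are
# illustrative choices, not OWASP-defined schema.
@dataclass
class AIDeployment:
    name: str
    autonomy: str                                      # "passive" (chat-only) or "agentic" (tool-using)
    data_classes: list = field(default_factory=list)   # e.g. ["PII", "trade secrets"]
    tools: list = field(default_factory=list)          # APIs/tools the agent can invoke

    @property
    def high_risk(self) -> bool:
        # Agentic systems that touch regulated data get priority review.
        return self.autonomy == "agentic" and "PII" in self.data_classes

inventory = [
    AIDeployment("support-chatbot", "passive", ["PII"]),
    AIDeployment("refund-agent", "agentic", ["PII"], ["stripe_refund"]),
]
priority = [d.name for d in inventory if d.high_risk]
```

Sorting by a rule like this gives Phase 2 a concrete worklist instead of a vague "audit everything" mandate.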

PHASE 2 — WEEKS 2-4 OWASP LLM Top 10 Controls

Prompt Injection Defense: input sanitization, system-prompt isolation, output validation.
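As a minimal sketch of those three layers: the patterns and message shapes below are illustrative assumptions (real defenses also need model-side guardrails and output validation against an allowlist), but they show input screening and system-prompt isolation via separate message roles.

```python
import re

# Illustrative injection phrasings only; a real filter is one layer of several.
INJECTION_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"reveal (the )?system prompt",
]

def sanitize_input(user_text: str) -> str:
    """Reject input containing obvious injection phrasing before it reaches the model."""
    for pat in INJECTION_PATTERNS:
        if re.search(pat, user_text, re.IGNORECASE):
            raise ValueError("possible prompt injection")
    return user_text

def build_messages(system_prompt: str, user_text: str) -> list:
    # The system prompt travels in its own role and is never concatenated
    # with untrusted user text.
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": sanitize_input(user_text)},
    ]
```

Pattern filters are easy to evade on their own; their value is in raising the cost of casual attacks and generating alertable events.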

Sensitive Data Protection: Remove PII from training data; implement output filtering.
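Output filtering can start with pattern-based redaction before a response leaves your boundary. The patterns below are a minimal sketch, not an exhaustive PII detector; production systems typically add a dedicated DLP or named-entity step.

```python
import re

# Minimal redaction pass on model output; patterns are illustrative.
REDACTIONS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def filter_output(text: str) -> str:
    """Redact recognizable PII from model output before it reaches the user."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```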

RAG Security (LLM08): Cryptographic integrity checks on vector databases; sanitize documents before embedding.
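One concrete form of those integrity checks is hashing every document at ingestion and refusing to embed anything whose hash no longer matches. This sketch uses an in-memory registry for illustration; in practice the hash store would live somewhere tamper-evident, separate from the vector database.

```python
import hashlib

# Document integrity registry for a RAG pipeline (illustrative in-memory store).
registry: dict = {}

def register_document(doc_id: str, content: bytes) -> str:
    """Record a SHA-256 digest when a trusted document is first ingested."""
    digest = hashlib.sha256(content).hexdigest()
    registry[doc_id] = digest
    return digest

def verify_before_embedding(doc_id: str, content: bytes) -> bool:
    """Refuse to embed a document whose content no longer matches its digest."""
    return registry.get(doc_id) == hashlib.sha256(content).hexdigest()
```

A swapped or edited knowledge-base file then fails verification instead of silently poisoning retrieval results.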

Excessive Agency Controls (LLM06): least-privilege tool access; human approval for high-risk actions.
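A least-privilege matrix can be encoded directly as policy data checked before every tool call. The agent names, tools, and policy shape below are hypothetical; the point is that high-risk tools require an explicit human-approval flag in addition to being on the allowlist.

```python
# Illustrative least-privilege matrix: each agent lists allowed tools, and
# high-risk tools additionally require human approval (names are hypothetical).
TOOL_POLICY = {
    "support-bot":  {"allowed": {"search_kb"}, "needs_approval": set()},
    "refund-agent": {"allowed": {"lookup_order", "issue_refund"},
                     "needs_approval": {"issue_refund"}},
}

def authorize(agent: str, tool: str, human_approved: bool = False) -> bool:
    """Deny by default; allow only listed tools, gating risky ones on approval."""
    policy = TOOL_POLICY.get(agent, {"allowed": set(), "needs_approval": set()})
    if tool not in policy["allowed"]:
        return False
    if tool in policy["needs_approval"] and not human_approved:
        return False
    return True
```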

Unbounded Consumption Limits (LLM10): Rate-limit API calls; set token budgets; implement circuit breakers.
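A token budget with a circuit breaker can be a few lines wrapped around every API call. The limit below is an illustrative number; the breaker stays tripped until a human resets it, which converts a runaway bill into a single paged alert.

```python
class TokenBudget:
    """Daily token budget with a circuit breaker (limit is illustrative)."""

    def __init__(self, daily_limit: int = 500_000):
        self.daily_limit = daily_limit
        self.used = 0
        self.tripped = False

    def charge(self, tokens: int) -> bool:
        """Return True if the call may proceed; trip the breaker on overrun."""
        if self.tripped:
            return False
        if self.used + tokens > self.daily_limit:
            self.tripped = True   # halt all further calls; alert an admin
            return False
        self.used += tokens
        return True
```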

Supply Chain Vetting: AI-specific SBOM; validate all third-party components.

PHASE 3 — WEEKS 5-8 OWASP ASI Agentic Controls

Goal Integrity (ASI01): Log all agent instructions; alert on goal changes; read-only mode for high-risk agents.

Tool Sandboxing (ASI02, ASI05): Hardware-enforced sandboxing; zero-access containers; disable network egress by default.
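Real egress control belongs at the container or OS layer; still, the deny-by-default idea can be illustrated in-process. This sketch wraps Python's `socket.create_connection` so any host off an explicit allowlist is refused (the allowlist entry is a hypothetical example).

```python
import socket

# Illustrative process-level guard: deny outbound connections unless the host
# is explicitly allowlisted. This complements, not replaces, container/OS
# sandboxing and network policy.
ALLOWED_HOSTS = {"api.openai.com"}   # hypothetical allowlist entry
_real_create = socket.create_connection

def guarded_create_connection(address, *args, **kwargs):
    host = address[0]
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"egress to {host} blocked by policy")
    return _real_create(address, *args, **kwargs)

socket.create_connection = guarded_create_connection
```

The same default-deny posture is what container network policies enforce more robustly, since in-process guards can be bypassed by code the agent generates.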

Identity Management (ASI03): Unique, short-lived, session-based credentials per agent.
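Short-lived per-session credentials can be sketched with a random token and an expiry timestamp; the 15-minute TTL below is an illustrative default, and real deployments would bind the token to a workload identity system rather than a plain dict.

```python
import secrets
import time

def issue_credential(agent_id: str, ttl_seconds: int = 900) -> dict:
    """Mint a unique, session-scoped credential that expires quickly."""
    return {
        "agent_id": agent_id,
        "token": secrets.token_urlsafe(32),
        "expires_at": time.time() + ttl_seconds,
    }

def is_valid(cred: dict) -> bool:
    """A credential past its expiry is rejected, limiting the abuse window."""
    return time.time() < cred["expires_at"]
```

Because every session gets a fresh token, a leaked credential cannot be replayed indefinitely, and logs can attribute each action to one agent session.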

Inter-Agent Security (ASI07): mTLS + digital signing for all agent-to-agent messages.
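mTLS protects the transport; message signing additionally lets a receiving agent verify who authored a message and that it was not altered in transit. This HMAC sketch is a minimal stand-in (the shared key and envelope fields are assumptions; per-agent asymmetric keys are stronger in practice).

```python
import hashlib
import hmac
import json

# Illustrative shared signing key; production systems would use per-agent
# keys from a secrets manager, rotated regularly.
SHARED_KEY = b"rotate-me-regularly"

def sign(message: dict) -> dict:
    """Wrap an agent-to-agent message with an HMAC over its canonical JSON."""
    payload = json.dumps(message, sort_keys=True).encode()
    return {"body": message,
            "sig": hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()}

def verify(envelope: dict) -> bool:
    """Reject any message whose body was forged or modified in transit."""
    payload = json.dumps(envelope["body"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])
```

With this in place, Agent A cannot silently alter an approval amount Agent B signed, which is exactly the ASI07 forgery scenario.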

Cascading Failure Prevention (ASI08): Transactional rollback; circuit breakers; safe failure modes.
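A failure-count circuit breaker is one way to stop a small error from cascading: after a threshold of consecutive failures (the threshold here is illustrative), the workflow halts and routes to a safe failure mode instead of retrying itself into the duplicate-order scenario above.

```python
class CircuitBreaker:
    """Halt an agent workflow after repeated failures (threshold illustrative)."""

    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0
        self.open = False

    def record(self, success: bool) -> None:
        """Count consecutive failures; open the breaker at the threshold."""
        if success:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.open = True   # route to safe failure mode / human review

    def allow(self) -> bool:
        return not self.open
```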

Kill Switch (ASI10): Auditable emergency stop; continuous behavioral monitoring.
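The kill switch reduces to a shared flag that every agent loop must check before acting, plus an audit trail of who threw it and why. The sketch below uses a threading event for illustration; a multi-host deployment would back the flag with a shared store all agents poll.

```python
import threading

# Auditable emergency stop: a shared event every agent loop must check.
KILL = threading.Event()
AUDIT_LOG: list = []

def emergency_stop(operator: str, reason: str) -> None:
    """Record who stopped the system and why, then halt all agents."""
    AUDIT_LOG.append({"operator": operator, "reason": reason})
    KILL.set()

def agent_step(action) -> bool:
    """Refuse to act once the switch is thrown."""
    if KILL.is_set():
        return False
    action()
    return True
```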

PHASE 4 — WEEKS 9-12 Evidence & Governance

Establish an AI governance committee (CISO, legal, business owner) for quarterly reviews.

Map OWASP controls to NIST AI RMF functions (Govern, Map, Measure, Manage).

Create an incident playbook for AI-specific events.

Document "reasonable safeguards" for AI data handling (NY SHIELD compliance).

Implement logging for all agent actions with one-year retention.

Evidence Pack

Control Area | Evidence | OWASP Mapping | Compliance Link
AI Inventory | List of all LLM/agent deployments, data access, tool permissions | LLM06, ASI02 | NIST AI RMF Map
Prompt Injection Defense | Input sanitization rules, output validation logs | LLM01 | NY SHIELD reasonable safeguards
RAG Data Integrity | Vector DB integrity checks, document sanitization logs | LLM08 | FTC Safeguards data security
Agent Tool Access Controls | Least-privilege matrix, human-approval workflows | LLM06, ASI02 | NIST AI RMF Govern
Code Execution Sandboxing | Container config, network egress logs, sandbox audit trail | ASI05 | NIST AI RMF Manage
Inter-Agent mTLS Config | Certificate chain, message signing logs | ASI07 | NIST AI RMF Manage
Rate Limits & Cost Controls | API token budgets, circuit-breaker logs, billing alerts | LLM10 | FTC Safeguards risk assessment
AI Incident Response Plan | Playbook for prompt injection, data leak, rogue agent scenarios | LLM01, ASI10 | NY SHIELD breach notification
Kill Switch Documentation | Emergency stop procedure, behavioral monitoring alerts | ASI10 | NIST AI RMF Govern
AI Governance Meeting Minutes | Quarterly reviews, risk acceptance decisions, deployment approvals | All | NIST AI RMF Govern

Schedule an AI Security Assessment

Hudson Valley CISO offers AI Security Assessments using the OWASP LLM Top 10 and ASI Top 10 frameworks, tailored to SMBs deploying ChatGPT, Microsoft Copilot, custom chatbots, or autonomous agents. Services include AI inventory discovery, NIST AI RMF mapping, RAG security reviews, and NY SHIELD “reasonable safeguards” documentation for AI data handling. Schedule a 30-minute AI security scoping call at hudsonvalleyciso.com.
