LLM Adoption
Version: 1.0
Authors: David Cervigni (using GenAI)
Executive Summary
This section contains an executive summary of the identified threats and their mitigation status.
There are 13 unmitigated threats without proposed operational controls.
Threat ID | CVSS | Always valid? |
---|---|---|
LLM_ADOPTION_THREAT_MODEL.LLM01_PROMPT_INJECTION | 9.8 (Critical) | Yes |
LLM_ADOPTION_THREAT_MODEL.LLM03_SUPPLY_CHAIN | 9.8 (Critical) | Yes |
LLM_ADOPTION_THREAT_MODEL.LLM04_DATA_MODEL_POISONING | 9.8 (Critical) | Yes |
LLM_ADOPTION_THREAT_MODEL.LLM06_EXCESSIVE_AGENCY | 9.8 (Critical) | Yes |
LLM_ADOPTION_THREAT_MODEL.LLM05_IMPROPER_OUTPUT_HANDLING | 9.1 (Critical) | Yes |
LLM_ADOPTION_THREAT_MODEL.LLM09_MISINFORMATION | 9.1 (Critical) | Yes |
LLM_ADOPTION_THREAT_MODEL.LLM02_SENSITIVE_INFO_DISCLOSURE | 7.5 (High) | Yes |
LLM_ADOPTION_THREAT_MODEL.LLM07_SYSTEM_PROMPT_LEAKAGE | 7.5 (High) | Yes |
LLM_ADOPTION_THREAT_MODEL.LLM08_VECTOR_EMBEDDING_WEAKNESS | 7.5 (High) | Yes |
LLM_ADOPTION_THREAT_MODEL.LLM15_ABUSE_MONITORING_BYPASS | 6.8 (Medium) | Yes |
LLM_ADOPTION_THREAT_MODEL.LLM10_UNBOUNDED_CONSUMPTION | 6.5 (Medium) | Yes |
LLM_ADOPTION_THREAT_MODEL.LLM16_FEATURE_DATA_EXPOSURE | 5.9 (Medium) | Yes |
LLM_ADOPTION_THREAT_MODEL.LLM14_DATA_RESIDENCY_VIOLATION | 4.9 (Medium) | Yes |
Threats Summary
This section contains a summary of the threats and their mitigation status.
There are a total of 13 identified threats, of which 13 are not fully mitigated
by default, and 13 are unmitigated without proposed operational controls.
LLM Adoption - scope of analysis
Overview
NOTE: This threat model addresses potential risks associated with the adoption of large language models (LLMs) in enterprise environments. It is based on the OWASP Top 10 for LLM Applications 2025 and defines the assets, security objectives, assumptions, and attackers relevant to LLM deployment.
LLM Adoption security objectives
Data Security:
System Security:
Identity & Access Management:
Governance:
Operational Security:
Diagram:
Details:
Access Control (ACCESS_CONTROL)
Implement robust authentication, authorization, and audit mechanisms to control access to LLM resources and ensure proper user permissions and accountability.
Priority: High
Attack tree:
Compliance & Governance (COMPLIANCE)
Ensure LLM operations adhere to regulatory requirements, industry standards, and organizational policies while maintaining transparency and auditability.
Priority: High
Attack tree:
Data Protection (DATA_PROTECTION)
Protect sensitive data throughout the LLM lifecycle, including training data, model weights, and user inputs/outputs, ensuring proper classification, handling, and storage.
Priority: High
Attack tree:
Model Integrity (MODEL_INTEGRITY)
Maintain the integrity and reliability of the LLM system by preventing model poisoning, ensuring supply chain security, and validating model outputs against expected behaviors.
Priority: High
Attack tree:
Privacy Protection (PRIVACY_PROTECTION)
Safeguard user privacy by implementing data anonymization, encryption, and access controls to prevent unauthorized data exposure or misuse.
Priority: High
Attack tree:
System Resilience (RESILIENCE)
Maintain system availability and performance under normal and adverse conditions, including protection against resource exhaustion and service degradation.
Priority: High
Attack tree:
Linked threat Models
- Model Context Protocols (ID: LLM_ADOPTION_THREAT_MODEL.MCP)
LLM Adoption Threat Actors
The terms actors, agents, users, and attackers are used synonymously.
Authorized users with malicious intent seeking to [...] (MALICIOUS_USER)
- Description:
- Authorized users with malicious intent seeking to exploit the LLM application for unauthorized actions.
- In Scope as threat actor:
- Yes
Unauthenticated external entities attempting to ex[...] (EXTERNAL_ATTACKER)
- Description:
- Unauthenticated external entities attempting to exploit vulnerabilities in the LLM deployment.
- In Scope as threat actor:
- Yes
Assumptions
- TRUSTED_ENVIRONMENT
- The underlying infrastructure is assumed to have baseline security controls, though LLM-specific risks remain.
- STATIC_MODEL_CONFIGURATION
- The deployed LLM is configured with fixed parameters that may not dynamically adjust to emerging threats.
- DATA_PROCESSING_ISOLATION
- Azure OpenAI Service processes data in isolation. Prompts and completions are NOT:
  - Available to other customers
  - Available to OpenAI
  - Used to improve OpenAI models
  - Used to train/retrain Azure OpenAI foundation models
  - Used to improve Microsoft or 3rd-party services without permission
- GEOGRAPHIC_PROCESSING
- Data is processed within customer-specified geography unless using Global deployment type. Data at rest is always stored in customer-designated geography.
- MODEL_STATELESSNESS
- The models are stateless - no prompts or generations are stored in the model itself.
LLM Adoption Analysis
This threat model evaluates the key risks involved in adopting large language models by mapping potential threat vectors—derived from the OWASP Top 10 for LLM Applications 2025—against specific countermeasures. It is intended to support secure integration within the software development lifecycle, ensuring continuous monitoring and effective mitigation of risks.
LLM Adoption Attack tree
LLM Adoption Threats
Note: This section contains the threats and mitigations identified during the analysis phase.
Prompt Injection (LLM01_PROMPT_INJECTION)
- Threat actors:
- Threat Description
- Attackers craft inputs—either directly or indirectly—to inject malicious commands into the prompt, bypassing safety constraints and altering the intended response.
- Impact
- Malicious input may alter the model's behavior, leading to unauthorized actions, disclosure of sensitive information, or harmful outputs.
- Security objectives: MODEL_INTEGRITY, ACCESS_CONTROL, DATA_PROTECTION
- CVSS
  - Base score: 9.8 (Critical)
  - Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
Counter-measures for LLM01_PROMPT_INJECTION
- Constrain Model Prompts (CONSTRAINED_PROMPT): Define and enforce strict system prompts that limit the scope of user inputs, preventing unauthorized modifications.
  - Countermeasure in place? ✔ Public and disclosable? ✔
- Input and Output Filtering (INPUT_OUTPUT_FILTERING): Apply robust filtering and validation for all data entering and exiting the model to detect and block injection attempts (see the sketch after this list).
  - Countermeasure in place? ✔ Public and disclosable? ✔
- Regular Adversarial Testing (ADVERSARIAL_TESTING): Conduct periodic red team exercises and adversarial simulations to identify and remediate prompt injection vulnerabilities.
  - Countermeasure in place? ❌ Public and disclosable? ❌
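To make the input-filtering control concrete, the following is a minimal sketch of a deny-list screen applied before user text reaches the model. The patterns and function name are illustrative assumptions; a production filter would combine curated pattern sets with ML-based classifiers.

```python
import re

# Illustrative deny-list of common injection phrasings (assumption: real
# deployments maintain curated, regularly updated pattern sets).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous|prior) instructions", re.I),
    re.compile(r"reveal (the )?system prompt", re.I),
    re.compile(r"you are now (in )?developer mode", re.I),
]

def screen_user_input(prompt: str) -> str:
    """Block inputs matching known injection patterns before they reach
    the model, so the caller can log and reject the request."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            raise ValueError(f"Potential prompt injection blocked: {pattern.pattern}")
    return prompt
```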
Sensitive Information Disclosure (LLM02_SENSITIVE_INFO_DISCLOSURE)
- Threat actors:
- Threat Description
- Exploiting inadequate data sanitization or prompt injection flaws, an attacker can force the LLM to reveal sensitive information.
- Impact
- The unintended exposure of confidential data, proprietary algorithms, or internal configurations via LLM outputs.
- Security objectives: DATA_PROTECTION, COMPLIANCE
- CVSS
  - Base score: 7.5 (High)
  - Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N
Counter-measures for LLM02_SENSITIVE_INFO_DISCLOSURE
- Data Sanitization (DATA_SANITIZATION): Implement comprehensive scrubbing and masking techniques on both training inputs and model outputs to prevent leakage of sensitive data (see the sketch after this list).
  - Countermeasure in place? ✔ Public and disclosable? ✔
- Strict Access Control (ACCESS_CONTROL): Enforce role-based access controls and data classification policies to restrict access to sensitive information.
  - Countermeasure in place? ✔ Public and disclosable? ✔
- Output Validation and Review (OUTPUT_VALIDATION): Regularly review and validate outputs with automated tools and human oversight to detect and mitigate unintended disclosures.
  - Countermeasure in place? ❌ Public and disclosable? ❌
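As an illustration of the DATA_SANITIZATION countermeasure, here is a minimal masking sketch; the regex patterns are simplistic assumptions, and a real deployment would rely on a dedicated PII-detection service.

```python
import re

# Illustrative PII patterns (assumption: production systems use a
# dedicated PII-detection service rather than hand-rolled regexes).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_pii(text: str) -> str:
    """Replace detected PII with typed placeholders before text is
    logged, stored, or returned to a caller."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label.upper()}]", text)
    return text
```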
Supply Chain Risks (LLM03_SUPPLY_CHAIN)
- Threat actors:
- Threat Description
- Attackers tamper with third-party components or training data during procurement or integration, introducing malicious modifications that undermine model security.
- Impact
- Vulnerabilities in third-party models, datasets, or fine-tuning processes may compromise the integrity of the LLM.
- Security objectives: MODEL_INTEGRITY, COMPLIANCE
- CVSS
  - Base score: 9.8 (Critical)
  - Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
Counter-measures for LLM03_SUPPLY_CHAIN
- Third-Party Supplier Audit (SUPPLIER_AUDIT): Regularly audit and verify the security posture of suppliers, including models, datasets, and fine-tuning tools, to ensure compliance with security standards.
  - Countermeasure in place? ✔ Public and disclosable? ✔
- Software Bill of Materials (SBOM_INTEGRATION): Implement SBOM practices to document and monitor all third-party components, ensuring timely updates and vulnerability management.
  - Countermeasure in place? ❌ Public and disclosable? ❌
- Model and Data Integrity Checks (INTEGRITY_CHECKS): Perform cryptographic integrity validations and provenance checks on models and datasets before deployment (see the sketch after this list).
  - Countermeasure in place? ✔ Public and disclosable? ✔
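A minimal sketch of the INTEGRITY_CHECKS countermeasure, assuming the publisher distributes a pinned SHA-256 digest alongside each model or dataset artifact:

```python
import hashlib
from pathlib import Path

def verify_artifact(path: Path, expected_sha256: str) -> None:
    """Stream the artifact, compute its SHA-256 digest, and refuse to
    proceed if it does not match the publisher's pinned value."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    if digest.hexdigest() != expected_sha256:
        raise RuntimeError(f"Integrity check failed for {path}")
```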
Data and Model Poisoning (LLM04_DATA_MODEL_POISONING)
- Threat actors:
- Threat Description
- Attackers inject adversarial or manipulated data into the training pipeline to compromise model outputs, causing systemic errors or hidden vulnerabilities.
- Impact
- Malicious alteration of training data or fine-tuning processes can introduce biases, backdoors, or degrade model performance.
- Security objectives: MODEL_INTEGRITY, DATA_PROTECTION, COMPLIANCE
- CVSS
  - Base score: 9.8 (Critical)
  - Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
Counter-measures for LLM04_DATA_MODEL_POISONING
- Rigorous Training Data Validation (TRAINING_DATA_VALIDATION): Validate and verify the provenance and integrity of all training and fine-tuning datasets using version control and anomaly detection (see the sketch after this list).
  - Countermeasure in place? ✔ Public and disclosable? ✔
- Regular Red Teaming Exercises (RED_TEAMING): Conduct red team exercises to simulate poisoning attacks and identify vulnerabilities in the training pipeline.
  - Countermeasure in place? ❌ Public and disclosable? ❌
- Continuous Pipeline Monitoring (PIPELINE_MONITORING): Implement real-time monitoring and logging of training pipelines to quickly detect anomalies indicative of data poisoning.
  - Countermeasure in place? ✔ Public and disclosable? ✔
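One simple form the anomaly-detection part of TRAINING_DATA_VALIDATION can take is a statistical outlier check over incoming records; this sketch flags records whose length deviates sharply from the corpus mean. The threshold and feature choice are assumptions, and real pipelines would use richer features.

```python
import statistics

def flag_anomalous_records(records: list[str], z_threshold: float = 4.0) -> list[int]:
    """Return indices of records whose length is a strong outlier; a
    cheap first signal of injected or corrupted training data."""
    lengths = [len(r) for r in records]
    mean = statistics.fmean(lengths)
    stdev = statistics.pstdev(lengths) or 1.0  # avoid division by zero
    return [i for i, n in enumerate(lengths) if abs(n - mean) / stdev > z_threshold]
```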
Improper Output Handling (LLM05_IMPROPER_OUTPUT_HANDLING)
- Threat actors:
- Threat Description
- Exploiting weak output controls, an attacker may trigger the model to emit outputs that reveal confidential information or misrepresent data.
- Impact
- Improperly formatted or unfiltered outputs can disclose sensitive data or be manipulated to mislead end users.
- Security objectives: DATA_PROTECTION, COMPLIANCE, ACCESS_CONTROL
- CVSS
  - Base score: 9.1 (Critical)
  - Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N
Counter-measures for LLM05_IMPROPER_OUTPUT_HANDLING
- Enforce Standardized Output Formats (OUTPUT_FORMAT_ENFORCEMENT): Define and enforce deterministic output formats with strict validation rules to ensure consistency and prevent data leaks (see the sketch after this list).
  - Countermeasure in place? ✔ Public and disclosable? ✔
- Human-in-the-Loop Review (HUMAN_REVIEW): Integrate manual review processes for high-risk outputs to provide an additional layer of verification.
  - Countermeasure in place? ❌ Public and disclosable? ❌
- Automated Output Monitoring (AUTOMATED_MONITORING): Deploy automated monitoring solutions to continuously analyze model outputs and detect deviations from expected patterns.
  - Countermeasure in place? ✔ Public and disclosable? ✔
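To illustrate OUTPUT_FORMAT_ENFORCEMENT, the sketch below parses model output against a fixed response contract; the key names are hypothetical.

```python
import json

# Hypothetical response contract: the model must return exactly these keys.
REQUIRED_KEYS = {"answer": str, "confidence": float}

def parse_model_output(raw: str) -> dict:
    """Parse model output as JSON and enforce the expected shape;
    anything that does not match is rejected rather than passed on."""
    data = json.loads(raw)  # raises ValueError on non-JSON output
    if set(data) != set(REQUIRED_KEYS):
        raise ValueError(f"Unexpected keys: {set(data) ^ set(REQUIRED_KEYS)}")
    for key, expected_type in REQUIRED_KEYS.items():
        if not isinstance(data[key], expected_type):
            raise TypeError(f"'{key}' must be {expected_type.__name__}")
    return data
```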
Excessive Agency (LLM06_EXCESSIVE_AGENCY)
- Threat actors:
- Threat Description
- An attacker exploits overly permissive agent configurations or permissions to drive the LLM into executing tasks without proper oversight.
- Impact
- Granting excessive autonomy to LLM-driven agents can lead to unauthorized actions or unintended system modifications.
- Security objectives: ACCESS_CONTROL, COMPLIANCE, MODEL_INTEGRITY
- CVSS
  - Base score: 9.8 (Critical)
  - Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
Counter-measures for LLM06_EXCESSIVE_AGENCY
- Enforce Least Privilege for Autonomous Agents (LEAST_PRIVILEGE_AGENCY): Restrict agent permissions strictly to only those functions necessary for operation, and monitor for deviations (see the sketch after this list).
  - Countermeasure in place? ✔ Public and disclosable? ✔
- Human-in-the-Loop Controls (HUMAN_IN_THE_LOOP): Integrate manual approval for high-risk agent actions to ensure human oversight over autonomous decisions.
  - Countermeasure in place? ❌ Public and disclosable? ❌
- Regular Permission Audits (PERMISSION_AUDITS): Conduct periodic audits of agent permissions and operational logs to verify adherence to security policies.
  - Countermeasure in place? ✔ Public and disclosable? ❌
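A minimal sketch combining LEAST_PRIVILEGE_AGENCY and HUMAN_IN_THE_LOOP: an explicit tool allow-list with risk tiers, where high-risk actions require recorded human approval. The tool names and tiers are illustrative assumptions.

```python
# Illustrative tool registry mapping each permitted tool to a risk tier.
ALLOWED_TOOLS = {"search_docs": "low", "read_calendar": "low", "send_email": "high"}

def dispatch_tool_call(tool: str, args: dict, approved_by_human: bool = False):
    """Refuse unregistered tools outright and require explicit human
    approval before any high-risk tool is executed."""
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool '{tool}' is not on the allow-list")
    if ALLOWED_TOOLS[tool] == "high" and not approved_by_human:
        raise PermissionError(f"Tool '{tool}' requires human approval")
    ...  # invoke the approved tool with its arguments here
```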
System Prompt Leakage (LLM07_SYSTEM_PROMPT_LEAKAGE)
- Threat actors:
- Threat Description
- An attacker gains access to internal system prompt data through vulnerabilities in prompt management or inadequate access controls.
- Impact
- Leakage of internal system prompts or configuration details can enable attackers to reverse-engineer or subvert LLM behavior.
- Security objectives: DATA_PROTECTION, ACCESS_CONTROL, MODEL_INTEGRITY
- CVSS
  - Base score: 7.5 (High)
  - Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N
Counter-measures for LLM07_SYSTEM_PROMPT_LEAKAGE
- Secure Prompt Isolation (PROMPT_ISOLATION): Isolate system prompts from user-facing interfaces and restrict access using strong authentication and access controls (see the sketch after this list).
  - Countermeasure in place? ✔ Public and disclosable? ✔
- Detailed Prompt Access Logging (ACCESS_LOGGING): Maintain comprehensive logs of all accesses to system prompt data to detect and investigate potential breaches.
  - Countermeasure in place? ✔ Public and disclosable? ❌
- Regular Security Audits for Prompts (REGULAR_AUDITS): Conduct periodic audits of prompt storage and management systems to ensure no leakage of sensitive information.
  - Countermeasure in place? ❌ Public and disclosable? ❌
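A minimal sketch of PROMPT_ISOLATION and ACCESS_LOGGING, assuming the system prompt is held in server-side secret storage (an environment variable stands in here) and every access is logged:

```python
import logging
import os

log = logging.getLogger("prompt_access")

def build_request(user_input: str) -> list[dict]:
    """Assemble the chat payload server-side so the system prompt never
    transits the client; each access is logged for audit."""
    system_prompt = os.environ["SYSTEM_PROMPT"]  # stand-in for a secret store
    log.info("system prompt accessed by request builder")
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]

def scrub_response(response_text: str) -> str:
    """Reject responses that echo the opening of the system prompt."""
    if os.environ["SYSTEM_PROMPT"][:80] in response_text:
        raise RuntimeError("Response leaked system prompt content")
    return response_text
```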
Vector and Embedding Weaknesses (LLM08_VECTOR_EMBEDDING_WEAKNESS)
- Threat actors:
- Threat Description
- An attacker exploits insecure embedding databases or indexing methods to extract sensitive information from vector representations.
- Impact
- Vulnerabilities in the storage and retrieval of vector embeddings can lead to the unintended disclosure of sensitive context or data.
- Security objectives: DATA_PROTECTION, ACCESS_CONTROL, MODEL_INTEGRITY
- CVSS
  - Base score: 7.5 (High)
  - Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N
Counter-measures for LLM08_VECTOR_EMBEDDING_WEAKNESS
- Encrypt Embedding Storage (ENCRYPT_EMBEDDINGS): Apply strong encryption to embedding databases and enforce robust authentication controls (see the sketch after this list).
  - Countermeasure in place? ✔ Public and disclosable? ✔
- Implement Secure Indexing (SECURE_INDEXING): Use secure indexing and query mechanisms to restrict unauthorized access to embeddings.
  - Countermeasure in place? ✔ Public and disclosable? ❌
- Enforce Embedding Access Controls (EMBEDDING_ACCESS_CONTROL): Implement role-based access control for embedding data to limit exposure to only authorized users.
  - Countermeasure in place? ✔ Public and disclosable? ✔
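To illustrate ENCRYPT_EMBEDDINGS, here is a sketch using symmetric encryption from the `cryptography` package; in practice the key would come from a managed KMS rather than being generated in process.

```python
import json
from cryptography.fernet import Fernet  # pip install cryptography

# Assumption: the key is fetched from a managed KMS; generated here only
# to keep the sketch self-contained.
fernet = Fernet(Fernet.generate_key())

def seal_embedding(vector: list[float]) -> bytes:
    """Encrypt an embedding before writing it to the vector store, so a
    leaked database snapshot does not expose raw vectors."""
    return fernet.encrypt(json.dumps(vector).encode())

def open_embedding(blob: bytes) -> list[float]:
    """Decrypt an embedding fetched from the store."""
    return json.loads(fernet.decrypt(blob))
```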
Misinformation (LLM09_MISINFORMATION)
- Threat actors:
- Threat Description
- Attackers manipulate training data or craft adversarial prompts to induce the LLM to produce misleading or harmful outputs.
- Impact
- Generation of biased or false outputs can mislead users and adversely affect decision-making processes.
- Security objectives: MODEL_INTEGRITY, COMPLIANCE, DATA_PROTECTION
- CVSS
  - Base score: 9.1 (Critical)
  - Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N
Counter-measures for LLM09_MISINFORMATION
- Validate Outputs Against Trusted Sources (OUTPUT_CROSS_VALIDATION): Implement mechanisms to cross-check LLM outputs with trusted datasets and human review for critical decisions.
  - Countermeasure in place? ❌ Public and disclosable? ✔
- Adversarial Training (ADVERSARIAL_TRAINING): Regularly update and train the model with adversarial examples to improve resistance against manipulative inputs.
  - Countermeasure in place? ✔ Public and disclosable? ✔
- Maintain Transparency Logs (TRANSPARENCY_LOGS): Keep detailed logs of output generation processes to enable post-incident analysis and continuous improvement (see the sketch after this list).
  - Countermeasure in place? ❌ Public and disclosable? ❌
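A minimal sketch of TRANSPARENCY_LOGS: each generation is recorded with content hashes so incidents can be correlated later without storing raw, possibly sensitive text. The field names are illustrative.

```python
import hashlib
import json
import logging
import time

audit_log = logging.getLogger("llm_transparency")

def record_generation(prompt: str, response: str, model_id: str) -> None:
    """Append an audit record for each generation; hashes permit
    post-incident matching without retaining the raw text."""
    entry = {
        "ts": time.time(),
        "model": model_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }
    audit_log.info(json.dumps(entry))
```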
Unbounded Consumption (LLM10_UNBOUNDED_CONSUMPTION)
- Threat actors:
- Threat Description
- An attacker triggers repeated or resource-intensive operations against the LLM system, exhausting computational resources and degrading performance.
- Impact
- Excessive or uncontrolled resource consumption may lead to service degradation, denial of service, and unanticipated cost escalations.
- Security objectives: RESILIENCE, COMPLIANCE, ACCESS_CONTROL
- CVSS
  - Base score: 6.5 (Medium)
  - Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H
Counter-measures for LLM10_UNBOUNDED_CONSUMPTION
- Rate Limiting (RATE_LIMITING): Implement rate limiting controls on requests to the LLM application to prevent abuse and resource exhaustion (see the sketch after this list).
  - Countermeasure in place? ✔ Public and disclosable? ✔
- Continuous Resource Monitoring (RESOURCE_MONITORING): Deploy monitoring systems to track resource usage and trigger alerts when predefined thresholds are exceeded.
  - Countermeasure in place? ✔ Public and disclosable? ✔
- Cost Management Practices (COST_MANAGEMENT): Establish policies and automated alerts to manage and control operational costs associated with resource consumption.
  - Countermeasure in place? ❌ Public and disclosable? ❌
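The RATE_LIMITING control is commonly implemented as a per-caller token bucket; a minimal sketch follows (capacity and refill rate are illustrative defaults).

```python
import time

class TokenBucket:
    """Per-caller token bucket: requests are refused once the budget is
    spent, and the budget refills at a fixed rate."""

    def __init__(self, capacity: int = 10, refill_per_sec: float = 1.0):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True and spend a token if the caller is within budget."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```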
Data Residency Violation (LLM14_DATA_RESIDENCY_VIOLATION)
- Threat actors:
- Threat Description
- System configuration or deployment type choices lead to data being processed or stored outside permitted geographic boundaries.
- Impact
- Processing or storing data outside designated geographic boundaries could violate data residency requirements and regulations.
- Security objectives: COMPLIANCE, DATA_PROTECTION, PRIVACY_PROTECTION
- CVSS
  - Base score: 4.9 (Medium)
  - Vector: CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:N/A:N
Counter-measures for LLM14_DATA_RESIDENCY_VIOLATION
- Deployment Type Control (DEPLOYMENT_TYPE_CONTROL): Carefully control use of Global and DataZone deployment types based on data residency requirements (see the sketch after this list).
  - Countermeasure in place? ✔ Public and disclosable? ✔
- Geographic Processing Monitoring (GEOGRAPHY_MONITORING): Monitor and audit data processing locations to ensure compliance with residency requirements.
  - Countermeasure in place? ✔ Public and disclosable? ✔
- Storage Location Control (STORAGE_LOCATION_CONTROL): Ensure data at rest is stored only in approved geographic locations regardless of deployment type.
  - Countermeasure in place? ✔ Public and disclosable? ✔
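A sketch of DEPLOYMENT_TYPE_CONTROL as a pipeline gate: workload classifications map to the deployment types they may use. The classification names and policy table are assumptions; the Global and DataZone names follow the deployment types referenced above.

```python
# Assumed policy: which deployment types each data classification may use.
RESIDENCY_POLICY = {
    "regulated": {"Standard"},                      # must stay in-region
    "internal": {"Standard", "DataZone"},
    "public": {"Standard", "DataZone", "Global"},
}

def validate_deployment(classification: str, deployment_type: str) -> None:
    """Fail a deployment pipeline that pairs residency-sensitive data
    with a deployment type able to process it outside approved regions."""
    allowed = RESIDENCY_POLICY.get(classification, set())
    if deployment_type not in allowed:
        raise ValueError(
            f"Deployment type '{deployment_type}' not permitted for "
            f"'{classification}' data (allowed: {sorted(allowed)})"
        )
```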
Abuse Monitoring System Bypass (LLM15_ABUSE_MONITORING_BYPASS)
- Threat actors:
- Threat Description
- Attackers attempt to circumvent content filtering and abuse monitoring systems to generate prohibited content.
- Impact
- Bypassing abuse monitoring systems could allow generation of harmful content or violation of service terms.
- Security objectives: COMPLIANCE, MODEL_INTEGRITY
- CVSS
  - Base score: 6.8 (Medium)
  - Vector: CVSS:3.1/AV:N/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:N
Counter-measures for LLM15_ABUSE_MONITORING_BYPASS
- Real-time Content Filtering (CONTENT_FILTERING): Implement synchronous content filtering during prompt processing and content generation (see the sketch after this list).
  - Countermeasure in place? ✔ Public and disclosable? ✔
- AI-based Review System (AI_REVIEW): Deploy AI systems to review prompts and completions for potential abuse patterns.
  - Countermeasure in place? ✔ Public and disclosable? ✔
- Human Review Process (HUMAN_REVIEW): Maintain authorized human reviewer access for flagged content with proper security controls.
  - Countermeasure in place? ✔ Public and disclosable? ✔
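A minimal sketch of synchronous CONTENT_FILTERING: the abuse classifier runs on both the prompt and the completion, so evading one check still leaves the other. `classify` and `generate` are injected callables standing in for the real filtering and model services.

```python
def generate_with_filtering(prompt: str, classify, generate) -> str:
    """Run the abuse classifier synchronously before and after the model
    call; flagged prompts are rejected, flagged completions suppressed."""
    if classify(prompt) == "flagged":
        raise PermissionError("Prompt rejected by abuse filter")
    completion = generate(prompt)
    if classify(completion) == "flagged":
        return "[Content removed by policy filter]"
    return completion
```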
Feature Data Exposure (LLM16_FEATURE_DATA_EXPOSURE)
- Threat actors:
- Threat Description
- Attackers target stored data used by specific Azure OpenAI features to gain unauthorized access.
- Impact
- Improper handling of data stored for specific features (Assistants API, Batch processing, etc.) could lead to unauthorized access.
- Security objectives: DATA_PROTECTION, PRIVACY_PROTECTION
- CVSS
  - Base score: 5.9 (Medium)
  - Vector: CVSS:3.1/AV:N/AC:H/PR:L/UI:N/S:U/C:H/I:L/A:N
Counter-measures for LLM16_FEATURE_DATA_EXPOSURE
- Double Encryption Implementation (DOUBLE_ENCRYPTION): Implement double encryption at rest using AES-256 and optional customer-managed keys.
  - Countermeasure in place? ✔ Public and disclosable? ✔
- Feature Data Isolation (FEATURE_ISOLATION): Ensure data for different features remains isolated and stored within appropriate geographic boundaries.
  - Countermeasure in place? ✔ Public and disclosable? ✔
- Customer Deletion Control (CUSTOMER_DELETION_CONTROL): Provide customers with the ability to delete stored feature data at any time.
  - Countermeasure in place? ✔ Public and disclosable? ✔
Model Context Protocols
Version: 1.0
Authors: David Cervigni
Model Context Protocols - scope of analysis
Overview
Model Context Protocols (MCP) are a set of protocols designed to enhance the interaction between large language models (LLMs) and their users by providing a structured way to manage context, state, and user preferences. REF: https://modelcontextprotocol.io/introduction
Requests For Information
Operational Security Hardening Guide
Seq | Countermeasure Details |
---|---|
Testing guide
This guide lists all testable attacks described in the threat model.
Seq | Attack to test | Pass/Fail/NA |
---|---|---|