Third-party breach involvement doubled to 30% in a single year. Meanwhile, vendors are embedding AI into their products faster than most organizations can assess it. The governance gap is real, and the liability sits with you.
By the numbers:
- 30% of breaches now involve third parties, double the 15% of a year earlier (Verizon 2025 DBIR).
- 48% of risk professionals cite cyber as their top TPRM concern, yet only 15% have high confidence in their program data (KPMG 2026).
- 64% of organizations now monitor their vendors’ vendors (EY 2025).
- Manual programs cover 25-30% of vendors effectively; AI-powered platforms achieve 90%+.
The 2025 Verizon Data Breach Investigations Report found that third-party involvement in breaches doubled in a single year, from 15% to 30% of all confirmed data breaches. That statistic alone should reframe how U.S. organizations think about their vendor portfolios. Now layer on a complication: vendors are embedding AI into their products at a pace that outstrips most organizations’ ability to evaluate it. PwC reports that many vendors have begun integrating AI into off-the-shelf software, often without their customers having full visibility into the change. The 2026 KPMG Global TPRM Survey found that 48% of risk professionals cite cyber risk as their top third-party concern, yet only 15% express high confidence in the data underpinning their programs.

Traditional vendor risk management was built for a world of static software, defined data flows, and predictable controls. AI vendors introduce probabilistic systems, opaque model behavior, training data dependencies, and automated decision-making that traditional questionnaires were never designed to assess.
Why AI Vendors Create Risks That Traditional Vendor Management Cannot Address
A traditional SaaS vendor delivers deterministic software. The code executes the same way every time. You can audit the code, test the outputs, verify the controls. The risk profile is stable between assessments.
An AI vendor delivers a probabilistic system. Model outputs vary based on inputs, training data, and context. Behavior can change through retraining or data drift without any code modification. A model that performs fairly at assessment time may develop bias three months later as input distributions shift.
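To make the contrast concrete, here is a minimal sketch (plain Python with NumPy, toy numbers throughout) of why a sampled model output differs across runs even when the input is identical:

```python
import numpy as np

def sample_output(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Sample one output from model scores, the way generative models do."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

# Identical input -> identical logits, yet the sampled output varies run to run.
logits = np.array([2.0, 1.5, 0.3])  # toy scores for three candidate outputs
print([sample_output(logits, temperature=0.8) for _ in range(10)])
# e.g. [0, 0, 1, 0, 2, 0, 1, 0, 0, 1] -- non-deterministic by design
```

Deterministic software would return the same answer all ten times; a probabilistic system, by construction, does not.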
This creates five categories of risk that traditional TPRM does not capture.
Model opacity. Most AI vendors treat models as proprietary. You cannot inspect the architecture, review the training data, or verify decision logic. You are trusting a black box that makes decisions affecting your customers, employees, or operations.
Training data provenance. The AI system’s behavior is a direct function of its training data. If the vendor trained on biased or improperly sourced data, the bias is embedded in every output your organization receives. You inherit data quality risk from a dataset you never saw.
Output variability. Unlike traditional software that returns consistent results for identical inputs, AI systems can produce different results across runs. Point-in-time assessments may not reflect future behavior.
Embedded decision-making. When a vendor’s AI influences decisions about creditworthiness, employment, insurance, or medical triage, your organization bears the regulatory liability for those decisions, regardless of who built the model.
Supply chain depth. AI vendors frequently depend on foundation models (from OpenAI, Anthropic, Google, Meta), third-party datasets, and external infrastructure. Your AI vendor risk is actually fourth-party and fifth-party risk extending through layers you may not know exist. EY’s 2025 survey found that 64% of organizations now monitor their vendors’ vendors, but this capability remains rare for AI-specific supply chains.
The U.S. Regulatory Landscape for Third-Party AI Risk
No single federal law mandates a comprehensive third-party AI risk management program. Instead, pressure comes from sector-specific requirements, enforcement actions, and state legislation that collectively create a patchwork of obligations.
Financial services. OCC Bulletin 2023-17 on third-party relationships applies to banks’ use of AI vendors. SR 11-7 requires model validation, including for third-party models. The CFPB requires lenders using AI for credit decisions to provide specific adverse action notices, even when the vendor’s model is opaque. The SEC identified AI washing as a 2026 enforcement priority.
Healthcare. HIPAA Business Associate Agreement requirements extend to AI vendors processing PHI. The FDA regulates AI/ML-based Software as a Medical Device and requires predetermined change control plans for models that update post-deployment.
Employment. NYC Local Law 144 requires bias audits of automated employment decision tools, including vendor-provided ones. The EEOC has made clear that employers are liable for discriminatory AI hiring tools even when supplied by a vendor. Colorado’s AI Act (effective February 2026) requires deployers to complete impact assessments for high-risk AI, explicitly including vendor systems.
Cross-sector enforcement. The FTC has used Section 5 authority against companies whose AI vendors caused consumer harm. Executive Order 14110 directed federal agencies to evaluate AI risks including from third-party systems.
The common thread: Regulators hold the deploying organization accountable for AI outcomes, regardless of whether the AI was built in-house or procured from a vendor. Outsourcing the technology does not outsource the liability.
What to Assess: The AI Vendor Due Diligence Framework
Standard due diligence covers information security, business continuity, financial stability, and compliance. AI vendors require all of that, plus the eight additional dimensions below (a machine-readable sketch follows the table).
| Assessment Dimension | Key Questions | Framework Reference |
|---|---|---|
| Model Governance | Does the vendor have a documented AI policy? Who owns model risk decisions? What approval processes exist for changes? | ISO 42001 Clause 5.2, 5.3. NIST Govern 1.1 |
| Training Data Integrity | What data sources? How is quality validated? What bias testing on training data? Personal data included? | ISO 42001 Annex B. NIST Map 2.3 |
| Bias and Fairness | What fairness metrics? Which protected categories? How frequently? What remediation process? | ISO 42001 C.2.5. NIST Measure 2.11 |
| Transparency / Explainability | Can the vendor explain individual decisions? What documentation for architecture, limitations, known failures? | ISO 42001 C.2.3. NIST Measure 2.5 |
| Security & Adversarial Resilience | Tested for adversarial inputs, prompt injection, data poisoning, model extraction? What monitoring in production? | ISO 42001 C.2.10. NIST Measure 2.7 |
| Performance & Drift Monitoring | How is drift detected? What SLAs for degradation? What retraining triggers? | ISO 42001 Clause 9.1. NIST Measure 2.6 |
| AI Supply Chain Transparency | Foundation models from third parties? Which ones? Datasets? Can the vendor provide an AIBOM? | ISO 42001 Clause 8.1. NIST Govern 1.6 |
| AI Incident Response | AI-specific incident response plan? How are model failures, bias incidents, security compromises reported? | ISO 42001 Clause 10.2. NIST Manage 4.1 |
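One way to operationalize the table is to encode the dimensions as data, so questionnaire coverage can be checked programmatically. A minimal sketch in Python (the structure and names are illustrative, not from any standard tooling):

```python
# Illustrative encoding of the eight AI due-diligence dimensions above.
DIMENSIONS = {
    "model_governance":        {"iso42001": ["5.2", "5.3"], "nist_ai_rmf": ["Govern 1.1"]},
    "training_data_integrity": {"iso42001": ["Annex B"],    "nist_ai_rmf": ["Map 2.3"]},
    "bias_and_fairness":       {"iso42001": ["C.2.5"],      "nist_ai_rmf": ["Measure 2.11"]},
    "transparency":            {"iso42001": ["C.2.3"],      "nist_ai_rmf": ["Measure 2.5"]},
    "adversarial_resilience":  {"iso42001": ["C.2.10"],     "nist_ai_rmf": ["Measure 2.7"]},
    "drift_monitoring":        {"iso42001": ["9.1"],        "nist_ai_rmf": ["Measure 2.6"]},
    "supply_chain":            {"iso42001": ["8.1"],        "nist_ai_rmf": ["Govern 1.6"]},
    "incident_response":       {"iso42001": ["10.2"],       "nist_ai_rmf": ["Manage 4.1"]},
}

def coverage_gaps(answered: set[str]) -> list[str]:
    """Return the due-diligence dimensions a vendor questionnaire has not covered."""
    return sorted(set(DIMENSIONS) - answered)

print(coverage_gaps({"model_governance", "bias_and_fairness"}))
```

Encoding the framework this way also makes it trivial to trace each questionnaire gap back to the ISO 42001 clause or NIST AI RMF function it leaves unaddressed.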
Tiering AI Vendors by Risk
Applying identical due diligence to every AI vendor is impractical. Assessment depth should match the risk the vendor introduces. Three factors drive tiering (a scoring sketch follows the table below).
Decision impact. Does the AI affect individuals’ rights, financial outcomes, health, employment, or safety? A vendor providing AI email suggestions operates in a different tier than one providing AI loan underwriting.
Data sensitivity. Does the vendor process protected health information, financial records, biometric data, or minors’ personal information?
Autonomy. Does the AI operate autonomously or with human review of every output? The more autonomous the system, the less opportunity there is to catch errors before they affect outcomes.
| Tier | Characteristics | Assessment Depth | Monitoring Frequency |
|---|---|---|---|
| Critical (Tier 1) | Autonomous decisions, sensitive data, regulated domain | Full AI due diligence, audit, model docs, bias evidence, AIBOM | Continuous + quarterly |
| High (Tier 2) | AI influences decisions with human review, moderate data sensitivity | Comprehensive questionnaire, bias/fairness evidence, drift monitoring | Continuous + semi-annual |
| Standard (Tier 3) | Internal efficiency, no individual impact, low sensitivity | Standard questionnaire with AI addendum | Annual + event-triggered |
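The three factors translate naturally into a scoring rule. A simplified sketch (the thresholds and 0-2 scales are illustrative; a real program would calibrate them to its own risk appetite):

```python
def tier_ai_vendor(decision_impact: int, data_sensitivity: int, autonomy: int) -> str:
    """Assign a vendor tier from three 0-2 factor scores (0 = low, 2 = high).

    decision_impact:  effect on rights, finances, health, employment, safety
    data_sensitivity: PHI, financial records, biometrics, minors' data
    autonomy:         degree of operation without human review
    """
    score = decision_impact + data_sensitivity + autonomy
    if decision_impact == 2 or score >= 5:
        return "Tier 1 (Critical)"  # autonomous decisions in sensitive/regulated domains
    if score >= 3:
        return "Tier 2 (High)"
    return "Tier 3 (Standard)"

# AI loan underwriting: high impact, high sensitivity, mostly autonomous
print(tier_ai_vendor(decision_impact=2, data_sensitivity=2, autonomy=2))  # Tier 1
# AI email suggestions: internal efficiency, low sensitivity, human reviews output
print(tier_ai_vendor(decision_impact=0, data_sensitivity=0, autonomy=1))  # Tier 3
```

Note that high decision impact alone forces Tier 1 regardless of the other scores, reflecting the regulatory reality that decisions about individuals carry liability even when data sensitivity is moderate.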
Contract Provisions: What Your AI Vendor Agreements Must Include
Vague contracts create delays and confusion during incidents. For AI vendors, contractual gaps are especially dangerous because the technology changes continuously. Nine provisions are non-negotiable.
- AI disclosure obligation. The vendor must disclose all AI systems, models, and automated decision-making components. This includes embedded AI features not marketed as AI products.
- Training data transparency. Identify data sources, confirm lawful collection, disclose personal data in training sets. Notify when sources change.
- Model change notification. Advance notice before material changes to architecture, retraining data, or foundation model switches. Material changes trigger reassessment rights.
- Bias testing and reporting. Conduct and share bias testing results across protected categories on a defined schedule. Specify methodology, metrics, and remediation timelines.
- Performance SLAs with drift thresholds. Define acceptable performance ranges and drift tolerances. Threshold breaches trigger notification and contractual remedies, including retraining commitments (see the sketch after this list).
- Audit and assessment rights. Right to audit AI governance, model performance, bias testing, and security controls. Includes model documentation requests and individual decision explanations.
- AI incident notification. Defined timeframe for AI-specific incidents: model failures, detected bias, security compromises, adversarial attacks, data integrity issues. Generic breach clauses are insufficient.
- Subprocessor and fourth-party disclosure. Disclose all AI subprocessors, foundation model providers, and third-party datasets. Supply chain changes trigger notification and consent.
- Exit and data provisions. Model portability, data deletion (including from training sets), and transition assistance. If the vendor trained on your data, contractual certainty about post-termination handling.
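To show how the SLA provision becomes testable rather than aspirational, here is a minimal sketch of encoding contractual thresholds and checking a vendor's reported metrics against them (the metric names and limits are hypothetical):

```python
# Hypothetical thresholds a contract schedule might define for an AI vendor.
SLA_THRESHOLDS = {
    "accuracy_min": 0.92,                 # accuracy must stay at or above this
    "demographic_parity_gap_max": 0.05,   # max selection-rate gap between groups
    "p95_latency_ms_max": 400,
}

def sla_breaches(reported: dict[str, float]) -> list[str]:
    """Compare a vendor's reported metrics to contract thresholds; list breaches."""
    breaches = []
    if reported["accuracy"] < SLA_THRESHOLDS["accuracy_min"]:
        breaches.append("accuracy below contractual minimum")
    if reported["demographic_parity_gap"] > SLA_THRESHOLDS["demographic_parity_gap_max"]:
        breaches.append("fairness gap exceeds contractual maximum")
    if reported["p95_latency_ms"] > SLA_THRESHOLDS["p95_latency_ms_max"]:
        breaches.append("latency above contractual maximum")
    return breaches  # any entry triggers the notification-and-remedy clause

print(sla_breaches({"accuracy": 0.90, "demographic_parity_gap": 0.03,
                    "p95_latency_ms": 350}))
# ['accuracy below contractual minimum']
```

The point of writing thresholds this precisely into the contract is that "material degradation" stops being a negotiation during an incident and becomes a number both parties agreed to in advance.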
Continuous Monitoring: Moving Beyond Point-in-Time Assessments
Annual reviews leave dangerous gaps. Exploit code appears within days of disclosure. A vendor breach in March goes undetected until the October assessment cycle. For AI vendors, the problem is worse: model drift can introduce bias gradually without any discrete security event to trigger an alert.
Effective continuous monitoring combines three layers.
External threat intelligence. Monitor breach databases, news, regulatory filings, and security ratings. EY’s 2025 survey found manual programs cover 25-30% of vendors effectively, while AI-powered platforms achieve 90%+.
Contractual reporting. Require periodic performance reports, bias testing results, drift metrics, and incident notifications. Build into the contract with specific formats, frequencies, and escalation triggers.
Output monitoring. Where feasible, track the vendor’s AI outputs directly: prediction distributions, error rates, demographic disparities, latency. Statistical process control detects shifts before they cause harm.
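For the output-monitoring layer, one common statistical check is the population stability index (PSI), which quantifies the shift between a baseline distribution of model outputs and the current one. A minimal sketch (the 0.2 alert threshold is a widely used rule of thumb, not a standard):

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between baseline and current score distributions."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range current values
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) in sparse bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.50, 0.10, 5_000)  # vendor scores at assessment time
current = rng.normal(0.58, 0.10, 5_000)   # this quarter: distribution has shifted
print(f"PSI = {psi(baseline, current):.3f}")
# > 0.2 is a common 'significant shift' alert threshold
```

A check like this runs on your side of the integration, which is exactly the independence the next paragraph calls for.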
ISO 42001 Clause 9.1 requires determining what needs to be monitored, including performance of the AI management system. For AI vendors, monitoring extends beyond vendor self-reporting to your independent verification of system behavior.
Governance Structure: Who Owns Third-Party AI Risk?
Third-party AI risk falls between organizational boundaries. Procurement selects vendors. Security assesses them. Legal negotiates contracts. Data science evaluates models. Compliance monitors regulations. Without clear ownership, AI vendor risk management becomes fragmented.
Centralized AI governance committee. Cross-functional committee (procurement, legal, security, data science, compliance, business) owns the AI vendor risk policy, assessment criteria, and escalation thresholds. Works for organizations with a small number of high-risk AI vendors.
Extended TPRM team with AI specialization. The existing TPRM function adds AI assessment capabilities through staff or advisors with data science and governance expertise. Works for mature TPRM programs with growing AI vendor portfolios.
Three lines of defense. Business units manage day-to-day relationships (first line). Risk and compliance set standards and monitor (second line). Internal audit independently verifies effectiveness (third line). ISO 42001 Clause 5.3 requires defined roles for AI governance. NIST Govern 1.2 emphasizes integration with enterprise risk management.
Building Your Program: A Practical Roadmap
- Inventory all AI vendor relationships. Include embedded AI features. PwC recommends DNS/web traffic analysis supplemented by direct outreach. ISO 42001 Clause 4.3 requires AIMS scope to include externally provided AI. A minimal inventory record sketch follows this list.
- Classify and tier vendors by AI risk. Apply the tiering framework. Deep due diligence for Tier 1 and 2. Standardized questionnaires with AI addendums for Tier 3.
- Develop AI-specific assessment criteria. Augment existing questionnaires with the eight dimensions above. Align to ISO 42001 Annex A/B and NIST AI RMF functions.
- Update contracts and onboarding. Incorporate the nine provisions. Require AI representations at onboarding. Build AI disclosure into intake.
- Implement continuous monitoring. Automated external intelligence, contractual reporting cadences, output monitoring for Tier 1. Organizations report 40-50% reduction in onboarding time with AI-powered TPRM tools.
- Define escalation and exit procedures. Thresholds for risk escalation. Termination criteria. Transition procedures for vendors that fail requirements.
- Train your teams. Procurement, legal, security, and compliance need AI risk literacy. ISO/IEC 42001 Lead Implementer training provides framework-level knowledge for assessing AI governance maturity.
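Step 1 produces better results when the inventory is captured in a consistent, queryable shape rather than a spreadsheet of free text. A minimal record sketch (field names and example values are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class AIVendorRecord:
    """One row in the AI vendor inventory (step 1), feeding tiering (step 2)."""
    vendor: str
    product: str
    uses_ai: bool                   # includes embedded AI features not sold as "AI"
    ai_disclosed_by_vendor: bool    # disclosed, or found via traffic analysis?
    foundation_models: list[str] = field(default_factory=list)  # fourth parties
    processes_sensitive_data: bool = False
    tier: str = "unassessed"

inventory = [
    AIVendorRecord("Acme HRTech", "resume screener", uses_ai=True,
                   ai_disclosed_by_vendor=False, foundation_models=["gpt-4o"],
                   processes_sensitive_data=True),
]
# Undisclosed AI usage is exactly what the intake process should surface first.
print([r.vendor for r in inventory if r.uses_ai and not r.ai_disclosed_by_vendor])
```

The `foundation_models` field is what later makes fourth-party disclosure and AIBOM requests auditable instead of ad hoc.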
Common Mistakes in Third-Party AI Vendor Risk Management
Relying on the vendor’s AI ethics statement as evidence of governance. A published statement is a policy commitment, not an operational control. Ask for audit results, bias reports, model documentation, and incident history. ISO 42001 certification provides third-party verified evidence.
Assessing once and never revisiting. AI systems change continuously. Annual assessments capture a snapshot. Build continuous monitoring and event-triggered reassessments into the program.
Treating AI vendors like traditional SaaS. SOC 2 and ISO 27001 do not cover model bias, training data provenance, adversarial resilience, drift, or explainability. Supplement with AI-specific evidence.
Ignoring the supply chain behind your vendor. Your AI vendor probably depends on foundation models, datasets, and infrastructure from their own third parties. Request AIBOM documentation and fourth-party disclosure.
Assuming the vendor will notify you of problems. Without contractual obligations, vendors have limited incentive to report AI drift and bias voluntarily. Specify AI incident notification requirements explicitly.
The Vendor Perimeter Is Now Your AI Governance Perimeter
Third-party AI creates a governance paradox: you bear the regulatory liability for AI decisions, but you may not have visibility into how those decisions are made. The Verizon DBIR’s 30% figure captures only the security dimension. Add bias, drift, opacity, and regulatory non-compliance, and the risk surface is significantly larger than most organizations have measured.
The practical starting point is a complete inventory of every vendor using AI in your service delivery, including those embedding it without explicit disclosure. From that inventory, tiering, assessment, contract updates, and monitoring follow a logical sequence.
GAICC offers ISO/IEC 42001 Lead Implementer training that covers third-party AI risk governance, vendor assessment frameworks, and the contractual and monitoring requirements organizations need to manage AI systems they did not build. Explore the program to build your governance capability.
