

AI Vendor Due Diligence Checklist for Legal Teams: The Pre-Contract Assessment Guide

97% of AI-breached organizations lacked access controls. Supply chain attacks cost $4.91M average. Traditional assessment misses model-specific risks. Here is the 8-domain checklist legal teams need before signing.

The AI vendor risk gap: 13% of organizations reported AI-related breaches (IBM 2025), and 97% of those lacked proper AI access controls. Supply chain compromise carries a $4.91M average cost, and third-party attacks accounted for 47% of affected individuals. Traditional questionnaires miss training data provenance, model behavior, bias, fourth-party dependencies, and AI-specific compliance.

Traditional vendor risk assessment covers financial stability, uptime, and basic security. AI vendors add layers those questionnaires miss: training data provenance, model behavior under edge cases, algorithmic bias, output reliability, fourth-party AI dependencies, and AI-specific regulatory compliance. The regulatory shift from “intent” to “evidence” means regulators demand proof of pre-deployment assessment. This checklist provides the structured framework.

Why Traditional Assessment Fails for AI

Non-deterministic behavior. Demo performance ≠ production performance. Failures happen at edge cases and scale, not in curated tests.

Training data creates upstream liability. Vendor’s data practices = customer’s legal exposure. FTC disgorgement cascades to customers.

Invisible fourth-party dependencies. Foundation model reliance rarely disclosed. Vendor may claim ISO 27001 while using uncertified AI sub-processor.

New, evolving compliance. Colorado, Illinois, NYC, EU AI Act, DPDPA create obligations most vendors haven’t addressed.

Unpredictable costs. Token usage and API costs manageable in pilot, unpredictable at scale.

Step One: Tier Your Vendors

| Tier | Criteria | Assessment Depth | Examples |
|---|---|---|---|
| Tier 1: Critical | Consequential decisions. Sensitive data. Customer-facing. Regulated sector. | Full 8-domain. Deep-dive sessions. Third-party verification. Annual. | Hiring AI, credit scoring, medical diagnostics, chatbots with PII, fraud detection. |
| Tier 2: Significant | Internal tools, moderate exposure. Influences operations, not individual decisions. | 6-domain. Questionnaire-based. Biennial. | Internal analytics, doc summarization, project management AI, content generation. |
| Tier 3: Commodity | Low-risk internal. Non-consequential. No regulated data. | 3-domain abbreviated. Self-certification. Spot checks. | Spell-check, spam filtering, meeting transcription, productivity tools. |
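The tiering criteria above can be sketched as a simple classifier. This is an illustrative sketch only: the field names, the profile class, and the decision order are assumptions for the example, not a GAICC-defined schema.

```python
from dataclasses import dataclass

@dataclass
class AIVendorProfile:
    # Hypothetical intake fields, mirroring the tier criteria in the table.
    consequential_decisions: bool  # hiring, credit, diagnostics, etc.
    sensitive_data: bool           # PII or other regulated data
    customer_facing: bool
    regulated_sector: bool
    influences_operations: bool    # internal tools with moderate exposure

def classify_tier(v: AIVendorProfile) -> int:
    """Return 1 (Critical), 2 (Significant), or 3 (Commodity)."""
    # Any single Tier 1 criterion is enough to require the full assessment.
    if (v.consequential_decisions or v.sensitive_data
            or v.customer_facing or v.regulated_sector):
        return 1
    if v.influences_operations:
        return 2
    return 3
```

Run at intake, before any questionnaire goes out: a hiring AI vendor (consequential decisions) lands in Tier 1, an internal analytics tool in Tier 2, a spell-checker in Tier 3.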

The 8-Domain Checklist

Domain 1: Corporate and Financial Viability

Standard assessment plus AI-specific: AI liability insurance, leadership governance expertise, business continuity for model availability, runway/stability if major customer exits.

Key questions: D&O/E&O coverage for AI claims? AI liability insurance? What happens to our data if you cease operations?

Domain 2: Data Governance and Training Data Provenance

Highest-risk domain. Where disgorgement risk originates. Training data collection consent, copyrighted material licenses, data isolation, retention/deletion, default training on customer data.

Key questions: Will our data train your models? Where is it processed/stored? Complete data provenance record? Third-party IP claims? What AI services touch our data?
Red flags: Cannot describe training sources. No written customer data policy. No data isolation. Copyright complaint history.

Domain 3: Model Governance and Technical Quality

Beyond demo performance: model cards, edge case testing, versioning, drift monitoring, hallucination rates.

Key questions: Accuracy across demographic groups? Edge case testing process? Drift detection/response? Version control and rollback? Documented hallucination rates?
Red flags: No model card. Testing only on curated data. No drift monitoring. No version control. No disaggregated accuracy.

Domain 4: Bias and Fairness Assessment

Mandatory for Tier 1 in employment, credit, housing. Methodology, protected categories, fairness metrics, audit results, remediation.

Key questions: Most recent bias audit report? Fairness metrics? Protected categories tested? Testing frequency? Remediation timeline? Third-party auditors?
Red flags: No bias testing. Aggregate-only performance. No third-party audit. No remediation process. Claims “not applicable.”

Domain 5: Security and Privacy

SOC 2/ISO 27001 plus AI-specific: prompt injection, model extraction, data poisoning, adversarial detection, AI incident response, model weight encryption.

Key questions: SOC 2 Type II or ISO 27001? AI security testing (MITRE ATLAS, OWASP LLM Top 10)? AI incident notification timeline? Penetration testing of AI APIs?
Red flags: No SOC 2/ISO 27001. No AI security testing. No AI incident response. No AI access controls (97% of breached orgs lacked these).

Domain 6: Regulatory Compliance Posture

Does the vendor’s compliance cover the regulations that govern YOUR use case: Colorado, Illinois, NYC, the EU AI Act, DPDPA, CFPB, EEOC, and sector-specific rules (HIPAA, GLBA, FDA)?

Key questions: Which regulations assessed against? ISO 42001 or NIST alignment? Compliance documentation for [our specific regulation]? Regulatory change monitoring process? Deployer documentation provided?
Red flags: Claims “compliant” without specifying which regulations. No ISO 42001/NIST. Cannot provide deployer documentation. No change monitoring.

Domain 7: Transparency and Explainability

Can the system explain decisions (CFPB, EEOC)? Provide disclosure documentation (CA SB 942, EU AI Act Art. 13)? Audit logs for regulatory review? Support human-in-the-loop?

Key questions: Explainability methods (SHAP, LIME, counterfactuals)? Individual decision explanations for adverse action? Audit logging? Human-in-the-loop support?

Domain 8: Subprocessor and Fourth-Party Risk

Map the AI supply chain: foundation models used, subprocessors, nth-party risk, certification coverage, contingency for model unavailability.

Key questions: Complete list of AI subprocessors and foundation models? Covered by your certifications? Contingency if foundation model changes terms/pricing? Contractual rights for our use case?
Red flags: Cannot identify foundation model dependencies. No subprocessor list. Not covered by certifications. No contingency plan.
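The eight domains and the tiered assessment depths can be encoded as data, so the right scope is generated automatically per vendor. A minimal sketch, with one loud assumption: the article fixes the domain counts (8 / 6 / 3) but not which domains the Tier 2 and Tier 3 subsets include, so those subsets below are illustrative guesses.

```python
DOMAINS = {
    1: "Corporate and Financial Viability",
    2: "Data Governance and Training Data Provenance",
    3: "Model Governance and Technical Quality",
    4: "Bias and Fairness Assessment",
    5: "Security and Privacy",
    6: "Regulatory Compliance Posture",
    7: "Transparency and Explainability",
    8: "Subprocessor and Fourth-Party Risk",
}

# Tier 1 = full 8-domain; Tier 2 = 6-domain; Tier 3 = 3-domain abbreviated.
# The Tier 2 and Tier 3 domain selections are assumptions for this sketch.
TIER_DOMAINS = {
    1: list(DOMAINS),
    2: [1, 2, 3, 5, 6, 8],
    3: [2, 5, 6],
}

def assessment_scope(tier: int) -> list[str]:
    """Return the domain names to assess for a vendor at the given tier."""
    return [DOMAINS[d] for d in TIER_DOMAINS[tier]]
```

Keeping the scope in data rather than in each assessor's head makes the depth per tier consistent and auditable across the vendor portfolio.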

Operationalizing: Five Steps

  1. Centralize AI procurement intake. Single entry point. No contract signed without completed assessment.
  2. Tier vendors on intake. Classify as Tier 1/2/3. Determines assessment depth.
  3. Distribute structured questionnaire. Domain-specific questions with deadline (15 days Tier 1, 10 days Tier 2). Require evidence, not assertions.
  4. Validate independently. Don’t accept self-certification for Tier 1. Request SOC 2, ISO 42001, bias audit reports. Technical deep-dive for critical AI.
  5. Document for the audit trail. Completed assessment = defensible record of pre-deployment evaluation. Centralized repository. Regulators demand evidence of assessment before deployment.

Due diligence → Contract clauses: Assessment determines vendor risk. Contract clauses allocate that risk. The findings from this 8-domain checklist directly inform which of the 12 AI contract clauses need strongest negotiation for each vendor. Due diligence happens before the contract. Clauses codify what you learned.

Assess Before You Sign. Document Before You Deploy.

The shift from “intent” to “evidence” means proving pre-deployment assessment, not post-breach policies. This 8-domain checklist converts vendor promises into verifiable claims, creates the audit trail regulators demand, and informs the contract clauses that allocate identified risk.

The practical first step: take your next AI vendor evaluation and run it through all eight domains at the appropriate tier depth. The gaps between what you currently assess and what this checklist covers are the unexamined risks your organization is carrying.

GAICC offers ISO/IEC 42001 Lead Implementer training that covers the AI governance framework referenced in Domain 6 of this checklist. Understanding ISO 42001 enables legal teams to evaluate vendor governance maturity, interpret vendor compliance documentation, and specify ISO 42001 alignment as a procurement requirement. Explore the program to strengthen your due diligence capability.

Frequently Asked Questions (FAQs)

How does AI due diligence differ from standard vendor due diligence?

Five additional dimensions: model unpredictability, training data liability (disgorgement), invisible fourth-party dependencies, AI-specific regulations, unpredictable costs at scale. Traditional questionnaires miss all five.

Which domain is most critical?

Domain 2: Data Governance. Where disgorgement risk originates. First three questions: Will our data train your models? Where is it processed? What third-party AI services touch it?

Should we require ISO 42001?

Tier 1: yes, or credible equivalent. Tier 2: request or accept documented self-assessment. Tier 3: not required. Strongest evidence of functioning AI management system.

Biggest red flags?

Cannot describe training data. No bias testing. No model docs. No access controls (97% of breached orgs). No subprocessor list. Claims “compliant” without specifics. No disaggregated accuracy.

How often to reassess?

Tier 1: annually or on material changes. Tier 2: biennially. Tier 3: spot checks. AI evolves through retraining; risk changes faster than traditional software.

How does this relate to contract clauses?

Due diligence before contract. Clauses codify obligations from findings. Assessment determines risk. Clauses allocate it. Findings inform which clauses need strongest negotiation.

Can we automate parts?

Domains 1 and 5 have automation tools. Domains 2, 3, 4 require vendor evidence. Domain 8 partially automatable. Legal judgment essential for Tier 1.
About the Author

Dr Faiz Rasool

Director at the Global AI Certification Council (GAICC) and PM Training School

A globally certified instructor in ISO/IEC, PMI®, TOGAF®, SAFe®, and Scrum.org disciplines. With over three years’ hands-on experience in ISO/IEC 42001 AI governance, he delivers training and consulting across New Zealand, Australia, Malaysia, the Philippines, and the UAE, combining high-end credentials with practical, real-world expertise and global reach.
