Most organizations don't build all their AI in-house. They buy it, subscribe to it, embed it, or integrate it via APIs. The v2.1 BoK added a specific performance indicator on third-party AI risk — and the exam will test it.
Before engaging an AI vendor, assess these governance dimensions (a structured-checklist sketch follows the list):
Transparency — Does the vendor explain how its AI system works? Can they provide documentation on model architecture, training data sources, and known limitations?
Data practices — Where does the vendor's training data come from? Do they use customer data for model improvement? What data retention and deletion policies apply?
Security — What security measures protect the AI system? How are API keys managed? What about model security (adversarial robustness, prompt injection protections)?
Bias and fairness — Has the vendor conducted bias testing? Can they provide fairness metrics? Do they commit to ongoing monitoring?
Compliance — Does the vendor comply with relevant regulations (GDPR, EU AI Act, sector-specific requirements)? Can they demonstrate compliance?
Incident response — What is the vendor's process for AI incidents? Do they commit to timely notification?
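These dimensions are easier to track when captured as structured due-diligence records rather than free-form notes. The sketch below is one minimal way to do that in Python, not a standard schema: the `VendorAssessment` class, the `Rating` scale, and the field names are hypothetical, though the six dimensions mirror the list above.

```python
from dataclasses import dataclass, field
from enum import Enum


class Rating(Enum):
    """Reviewer's judgment for one governance dimension."""
    SATISFACTORY = "satisfactory"
    NEEDS_REMEDIATION = "needs_remediation"
    UNACCEPTABLE = "unacceptable"
    NOT_ASSESSED = "not_assessed"


# The six governance dimensions from the list above.
DIMENSIONS = (
    "transparency",
    "data_practices",
    "security",
    "bias_and_fairness",
    "compliance",
    "incident_response",
)


@dataclass
class VendorAssessment:
    """Due-diligence record for one AI vendor (hypothetical schema)."""
    vendor: str
    ratings: dict[str, Rating] = field(
        default_factory=lambda: {d: Rating.NOT_ASSESSED for d in DIMENSIONS}
    )
    notes: dict[str, str] = field(default_factory=dict)

    def open_items(self) -> list[str]:
        """Dimensions that still block approval."""
        return [d for d, r in self.ratings.items() if r is not Rating.SATISFACTORY]


# Usage: record findings as the review progresses.
assessment = VendorAssessment(vendor="ExampleAI Inc.")
assessment.ratings["transparency"] = Rating.SATISFACTORY
assessment.ratings["data_practices"] = Rating.NEEDS_REMEDIATION
assessment.notes["data_practices"] = "Customer data used for training by default; opt-out only."
print(assessment.open_items())  # every dimension except transparency
```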
Standard vendor contracts need AI-specific provisions:
Transparency obligations — Vendor must provide documentation on model capabilities, limitations, and known biases. Must notify customer of material model changes.
Audit rights — Customer has the right to audit the AI system or require independent third-party audits.
Data use restrictions — Vendor cannot use customer data for model training or improvement without explicit consent. Data must be handled according to agreed classification levels.
Incident notification — Vendor must notify customer of AI-specific incidents (model failures, bias discoveries, security breaches) within a defined timeframe.
Performance commitments — Defined SLAs for accuracy, availability, and fairness metrics, with remedies for non-compliance.
Termination and portability — Clear provisions for data return, model portability (where applicable), and transition support upon contract termination.
Liability allocation — Clear allocation of liability for AI-related harms, including indemnification provisions.
AI supply chains introduce risks that go beyond traditional vendor management:
Model provenance — A vendor's AI may incorporate open-source models, pre-trained components, or third-party datasets. Each link in the chain introduces risk.
Data sourcing — The vendor's training data may itself come from third parties, creating a chain of data provenance that's difficult to verify.
API dependencies — If your AI system depends on a third-party API (e.g., OpenAI, Anthropic), changes to that API's behavior, pricing, or availability directly affect your system (a provider-abstraction sketch follows this list).
Concentration risk — Many organizations depend on the same handful of foundation model providers. A failure at one provider could affect thousands of downstream applications.
Regulatory cascade — If a vendor's AI system is classified as high-risk under the EU AI Act, your organization's use of that system may trigger deployer obligations of its own.
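One common mitigation for the API-dependency and concentration points above is a thin abstraction layer between your application and any single foundation-model provider, so an outage, behavior change, or repricing at one vendor doesn't require rewriting every call site. The sketch below is illustrative only: `CompletionProvider`, the two provider classes, and their internals are hypothetical stand-ins, not real vendor SDKs.

```python
from typing import Protocol


class CompletionProvider(Protocol):
    """The minimal interface the application codes against (hypothetical)."""
    name: str

    def complete(self, prompt: str) -> str: ...


class PrimaryProvider:
    """Stand-in for a hosted foundation-model API client."""
    name = "primary"

    def complete(self, prompt: str) -> str:
        # A real implementation would call the vendor's client library here.
        raise RuntimeError("provider unavailable")  # simulate an outage


class FallbackProvider:
    """Stand-in for a second vendor or a self-hosted model."""
    name = "fallback"

    def complete(self, prompt: str) -> str:
        return f"[fallback] response to: {prompt}"


def complete_with_failover(prompt: str, providers: list[CompletionProvider]) -> str:
    """Try providers in order, failing over instead of hard-coding one vendor."""
    errors = []
    for provider in providers:
        try:
            return provider.complete(prompt)
        except Exception as exc:  # in production, catch vendor-specific error types
            errors.append(f"{provider.name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))


print(complete_with_failover("Summarize this contract.", [PrimaryProvider(), FallbackProvider()]))
```

The abstraction doesn't remove concentration risk (the fallback may depend on the same upstream model), but it makes the dependency explicit and testable.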
Build a practical framework for managing third-party AI risk:
1. Inventory — Maintain a register of all third-party AI systems, including shadow AI tools used by employees.
2. Risk classification — Classify each third-party AI by risk level based on: data sensitivity, decision impact, regulatory requirements, and autonomy level (a scoring sketch follows this list).
3. Due diligence — Conduct proportionate due diligence based on risk classification. High-risk AI vendors require comprehensive assessment; low-risk tools require basic review.
4. Contractual protections — Ensure contracts include AI-specific provisions appropriate to the risk level.
5. Ongoing monitoring — Don't stop at onboarding. Monitor vendor compliance, track incidents, and reassess periodically.
6. Reassessment triggers — Define events that trigger reassessment: material model changes, security incidents, regulatory changes, or significant performance degradation.
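Step 2 lends itself to a small worked example. In the sketch below, the four factors come straight from the step, but the 1-to-3 scales, the equal weighting, and the tier thresholds are hypothetical; your own risk methodology would set these.

```python
# Each factor is scored 1 (low) to 3 (high); factors mirror step 2 above.
FACTORS = ("data_sensitivity", "decision_impact", "regulatory_requirements", "autonomy_level")


def classify(scores: dict[str, int]) -> str:
    """Map factor scores to a due-diligence tier (illustrative thresholds)."""
    total = sum(scores[f] for f in FACTORS)  # possible range: 4..12
    if total >= 10:
        return "high"    # comprehensive assessment (step 3)
    if total >= 7:
        return "medium"  # standard review
    return "low"         # basic review


# Example: a vendor chatbot that handles personal data with little human oversight.
vendor_scores = {
    "data_sensitivity": 3,
    "decision_impact": 2,
    "regulatory_requirements": 2,
    "autonomy_level": 3,
}
print(classify(vendor_scores))  # -> "high"
```

A tiered output like this feeds directly into steps 3 and 4: the tier determines the depth of due diligence and the contractual provisions required.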