Day 7 of 30

Third-Party AI Risk Management

⏱ 18 min · 📊 Medium · AIGP Certification Prep

Most organizations don't build all their AI in-house. They buy it, subscribe to it, embed it, or integrate it via APIs. Version 2.1 of the AIGP Body of Knowledge (BoK) added a specific performance indicator on third-party AI risk, and the exam will test it.

AI Vendor Due Diligence

Before engaging an AI vendor, assess these governance dimensions (a checklist sketch follows the list):

Transparency — Does the vendor explain how its AI system works? Can they provide documentation on model architecture, training data sources, and known limitations?

Data practices — Where does the vendor's training data come from? Do they use customer data for model improvement? What data retention and deletion policies apply?

Security — What security measures protect the AI system? How are API keys managed? What about model security (adversarial robustness, prompt injection protections)?

Bias and fairness — Has the vendor conducted bias testing? Can they provide fairness metrics? Do they commit to ongoing monitoring?

Compliance — Does the vendor comply with relevant regulations (GDPR, EU AI Act, sector-specific requirements)? Can they demonstrate compliance?

Incident response — What is the vendor's process for AI incidents? Do they commit to timely notification?
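These dimensions are easier to enforce when they're tracked rather than just discussed. Below is a minimal Python sketch of a scored checklist; the `VendorAssessment` class, the 0–3 scale, and the follow-up floor of 2 are illustrative assumptions, not AIGP or BoK requirements.

```python
from dataclasses import dataclass, field

# Dimension names mirror the list above; the 0-3 scale and the floor of 2
# are illustrative assumptions, not AIGP/BoK requirements.
DIMENSIONS = [
    "transparency",
    "data_practices",
    "security",
    "bias_and_fairness",
    "compliance",
    "incident_response",
]

@dataclass
class VendorAssessment:
    vendor: str
    scores: dict[str, int] = field(default_factory=dict)  # dimension -> 0..3

    def unassessed(self) -> list[str]:
        """Dimensions not yet scored: due diligence is incomplete."""
        return [d for d in DIMENSIONS if d not in self.scores]

    def follow_ups(self, floor: int = 2) -> list[str]:
        """Dimensions scoring below the floor, flagged for deeper review."""
        return [d for d, s in self.scores.items() if s < floor]

a = VendorAssessment("Acme Transcribe")
a.scores = {"transparency": 1, "security": 3, "compliance": 2}
print(a.unassessed())   # ['data_practices', 'bias_and_fairness', 'incident_response']
print(a.follow_ups())   # ['transparency']
```

The point of the structure is that an unscored dimension is visibly different from a low-scoring one: the first means due diligence isn't finished, the second means you found a real gap.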

Knowledge Check
During due diligence, an AI vendor refuses to disclose their training data sources, citing trade secrets. What is the BEST governance response?
The governance response should seek alternative assurance. While training data details may be proprietary, the organization needs some form of assurance about data practices. Independent audits, published bias testing results, or contractual warranties provide this assurance without requiring full disclosure.

Contract Provisions for AI

Standard vendor contracts need AI-specific provisions (a review-gate sketch follows the list):

Transparency obligations — Vendor must provide documentation on model capabilities, limitations, and known biases. Must notify customer of material model changes.

Audit rights — Customer has the right to audit the AI system or require independent third-party audits.

Data use restrictions — Vendor cannot use customer data for model training or improvement without explicit consent. Data must be handled according to agreed classification levels.

Incident notification — Vendor must notify customer of AI-specific incidents (model failures, bias discoveries, security breaches) within a defined timeframe.

Performance commitments — Defined SLAs for accuracy, availability, and fairness metrics, with remedies for non-compliance.

Termination and portability — Clear provisions for data return, model portability (where applicable), and transition support upon contract termination.

Liability allocation — Clear allocation of liability for AI-related harms, including indemnification provisions.
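Contract reviews can also be gated mechanically. The sketch below checks a draft contract's clause list against a required set per risk tier; the tier names, clause identifiers, and the mapping itself are hypothetical and would be tailored to your own program.

```python
# Clause identifiers mirror the provisions above; the two tiers and the
# mapping are hypothetical and would be tailored to your program.
REQUIRED_CLAUSES = {
    "low": {
        "data_use_restrictions",
        "incident_notification",
    },
    "high": {
        "transparency_obligations",
        "audit_rights",
        "data_use_restrictions",
        "incident_notification",
        "performance_commitments",
        "termination_and_portability",
        "liability_allocation",
    },
}

def missing_clauses(present: set[str], risk_tier: str) -> set[str]:
    """Return the required AI clauses absent from a draft contract."""
    return REQUIRED_CLAUSES[risk_tier] - present

draft = {"audit_rights", "incident_notification", "liability_allocation"}
print(sorted(missing_clauses(draft, "high")))
```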

Supply Chain Risks in AI

AI supply chains introduce risks that go beyond traditional vendor management (a provenance sketch follows the list):

Model provenance — A vendor's AI may incorporate open-source models, pre-trained components, or third-party datasets. Each link in the chain introduces risk.

Data sourcing — The vendor's training data may itself come from third parties, creating a chain of data provenance that's difficult to verify.

API dependencies — If your AI system depends on a third-party API (e.g., OpenAI, Anthropic), changes to that API's behavior, pricing, or availability directly affect your system.

Concentration risk — Many organizations depend on the same handful of foundation model providers. A failure at one provider could affect thousands of downstream applications.

Regulatory cascade — If a vendor's AI system is classified as high-risk under the EU AI Act, your use of that system may trigger additional obligations.
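One way to make these chain risks visible is to record upstream components explicitly, in the spirit of an "AI bill of materials". The sketch below uses a minimal, assumed schema; the field names and the example suppliers are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UpstreamComponent:
    name: str        # e.g. a base model, training dataset, or hosted API
    kind: str        # "model" | "dataset" | "api"
    supplier: str    # who actually operates or licenses it
    direct: bool     # True if you hold the contract; False if inherited

# Example entries are invented for illustration.
components = [
    UpstreamComponent("vendor-transcriber", "model", "Acme AI", direct=True),
    UpstreamComponent("foundation-llm-api", "api", "BigModel Co", direct=False),
    UpstreamComponent("web-crawl-corpus", "dataset", "unknown", direct=False),
]

# Indirect dependencies are where concentration and cascade risk hide:
# you have no contract with these suppliers, only flow-down terms (if any).
for c in (c for c in components if not c.direct):
    print(f"Indirect dependency: {c.supplier} supplies {c.name} ({c.kind})")
```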

Knowledge Check
Your company uses an AI vendor that, in turn, relies on a major foundation model provider's API. The foundation model provider unexpectedly changes its content policy, causing your vendor's product to reject legitimate business queries. This illustrates:
This is a supply chain risk — your organization is exposed to risks from a third party (the foundation model provider) that you don't have a direct relationship with. The concentration aspect is that many vendors depend on the same few foundation model providers, amplifying the risk.

Third-Party AI Risk Framework

Build a practical framework for managing third-party AI risk (a classification sketch follows the steps):

1. Inventory — Maintain a register of all third-party AI systems, including shadow AI tools used by employees.

2. Risk classification — Classify each third-party AI by risk level based on: data sensitivity, decision impact, regulatory requirements, and autonomy level.

3. Due diligence — Conduct proportionate due diligence based on risk classification. High-risk AI vendors require comprehensive assessment; low-risk tools require basic review.

4. Contractual protections — Ensure contracts include AI-specific provisions appropriate to the risk level.

5. Ongoing monitoring — Don't stop at onboarding. Monitor vendor compliance, track incidents, and reassess periodically.

6. Reassessment triggers — Define events that trigger reassessment: material model changes, security incidents, regulatory changes, or significant performance degradation.
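As a concrete illustration of step 2, the sketch below maps a few risk factors to a tier. The factor names, scales, and cut-offs are assumptions chosen for illustration; a real program would calibrate them against its own risk appetite and regulatory exposure.

```python
# Factor names, scales, and cut-offs are assumptions for illustration;
# calibrate against your own risk appetite and regulatory exposure.
def classify_vendor(
    data_sensitivity: int,  # 0 = public data ... 3 = regulated/special-category
    decision_impact: int,   # 0 = advisory only ... 3 = consequential decisions
    autonomy: int,          # 0 = human-in-the-loop ... 3 = fully autonomous
    regulated: bool,        # e.g. in scope of EU AI Act high-risk obligations
) -> str:
    score = data_sensitivity + decision_impact + autonomy
    if regulated or score >= 7:
        return "high"    # comprehensive due diligence, full contract clauses
    if score >= 4:
        return "medium"  # standard due diligence, periodic review
    return "low"         # basic review at onboarding

# A transcription tool used on sensitive client meetings, advisory output,
# human in the loop, no sector regulation: 3 + 1 + 0 = 4 -> "medium".
print(classify_vendor(3, 1, 0, regulated=False))
```

A tiered classifier like this is what makes step 3's "proportionate due diligence" operational: the tier decides how deep the assessment and the contract requirements go.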

Final Check
An organization discovers that employees have been using an unapproved AI transcription tool for sensitive client meetings. Under a third-party AI risk framework, what should happen FIRST?
The first step is risk assessment: understand what happened, what data was exposed, and the potential harm. That assessment informs the appropriate response. Terminating employees, approving the tool, or banning all AI are possible responses, but none should happen before the risk is understood. Inventory and assessment come first.
🎯
Day 7 Complete
"Most AI risk enters through third parties. Build governance into your vendor lifecycle: due diligence at onboarding, AI-specific contract provisions, ongoing monitoring, and reassessment triggers. Shadow AI is your biggest blind spot."
Next Lesson
How Privacy Laws Apply to AI — GDPR Essentials