Welcome to Domain II — the heaviest domain on the AIGP exam (22–30 questions). You'll spend the next 10 days mastering the legal and regulatory landscape for AI governance.
We start with the GDPR, because it's the most-tested privacy regulation on the exam and has specific provisions that directly impact AI systems.
Article 22 is the GDPR's most AI-relevant provision. It gives individuals the right not to be subject to a decision based solely on automated processing that produces legal or similarly significant effects.
Key elements:
- Applies only to solely automated decisions — if a human meaningfully reviews the decision, Article 22 doesn't apply
- Must produce legal effects (e.g., credit denial) or similarly significant effects (e.g., job rejection)
- Three exceptions: explicit consent, contractual necessity, or authorization by EU/member state law
- When exceptions apply, the data controller must implement suitable safeguards, including the right to obtain human intervention, express a point of view, and contest the decision
Exam trap: Article 22 does NOT apply to AI that merely assists human decision-making. If a human reviews and can override the AI's recommendation, it's not "solely automated."
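The Article 22 applicability test above can be sketched as a small decision function. This is a study mnemonic with illustrative parameter names (all assumptions, not legal terms of art), not a compliance tool:

```python
# Sketch of the Article 22(1) applicability test, assuming a simple
# boolean model of the facts. Parameter names are illustrative.

def article_22_applies(solely_automated: bool,
                       meaningful_human_review: bool,
                       legal_effect: bool,
                       similarly_significant_effect: bool) -> bool:
    """Return True if the decision falls within Article 22(1)."""
    # If a human meaningfully reviews and can override the output,
    # the decision is not "solely automated" and Article 22 does not apply.
    if meaningful_human_review or not solely_automated:
        return False
    # The decision must also produce legal or similarly significant effects.
    return legal_effect or similarly_significant_effect

# Fully automated credit denial: Article 22 applies.
print(article_22_applies(True, False, True, False))   # True
# AI recommendation reviewed by a loan officer who can override: it does not.
print(article_22_applies(True, True, False, True))    # False
```

Note how the human-review check comes first: that ordering mirrors the exam trap above.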
GDPR Articles 13, 14, and 15 require data controllers to provide "meaningful information about the logic involved" in automated decision-making. Together, these provisions are widely read as creating a right to explanation for AI decisions.
What must be explained:
- The existence of automated decision-making
- Meaningful information about the logic involved
- The significance and envisaged consequences for the individual
What this means in practice:
- You don't need to disclose the full algorithm or source code
- You DO need to explain the general logic, key factors, and how the decision was reached
- The explanation must be understandable to the individual — not a technical model architecture description
Under Article 35, a Data Protection Impact Assessment (DPIA) is mandatory when processing is likely to result in a high risk to the rights and freedoms of natural persons. AI processing almost always triggers this requirement.
Specific triggers relevant to AI:
- Systematic and extensive profiling with significant effects
- Large-scale processing of special category data
- Systematic monitoring of publicly accessible areas
- Use of new technologies (including AI)
A DPIA must include:
1. Systematic description of the processing
2. Assessment of necessity and proportionality
3. Assessment of risks to data subjects
4. Measures to address those risks
Exam tip: If a question involves deploying a new AI system that processes personal data, the answer almost always includes "conduct a DPIA."
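The trigger list above is essentially a checklist: if any high-risk characteristic is present, a DPIA is required. A minimal sketch, with trigger names invented here to mirror the bullets:

```python
# Hedged sketch of the Article 35 trigger check. The set members are
# shorthand labels for the bullets above, not terms from the regulation.

DPIA_TRIGGERS = {
    "systematic_extensive_profiling",
    "large_scale_special_category_data",
    "systematic_public_monitoring",
    "new_technology",  # includes AI
}

def dpia_required(processing_characteristics: set[str]) -> bool:
    """A DPIA is mandatory if any high-risk trigger is present."""
    return bool(processing_characteristics & DPIA_TRIGGERS)

# A new AI hiring tool profiling candidates at scale:
print(dpia_required({"new_technology", "systematic_extensive_profiling"}))  # True
```

Any single match suffices, which is why "conduct a DPIA" is so often the right exam answer for new AI deployments.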
Every use of personal data for AI requires one of the six lawful bases under GDPR Article 6:
Consent — Freely given, specific, informed, and unambiguous. Rarely practical for AI training at scale. Must be withdrawable.
Contractual necessity — Processing necessary for a contract with the data subject. Limited applicability — the AI processing must be truly necessary, not just convenient.
Legitimate interest — Most common basis for AI processing. Requires a balancing test: organization's legitimate interest vs. data subjects' rights and expectations. Must document the assessment.
Legal obligation — Processing required by law.
Public interest — Processing necessary for a task in the public interest (mainly public-sector).
Vital interests — Processing necessary to protect someone's life (emergency medical AI, for example).
Key exam point: "Legitimate interest" requires a documented balancing test. You can't just assert legitimate interest — you must assess whether the individual's rights override your business interest.
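The documentation requirement can be made concrete with a sketch of a balancing-test record. The structure and field names are assumptions for study purposes, not an official legitimate interests assessment template:

```python
# Illustrative sketch of documenting a legitimate-interest balancing test.
# All field names are assumptions; the GDPR requires the assessment,
# not any particular format.

from dataclasses import dataclass, field

@dataclass
class BalancingTest:
    purpose: str                      # the organization's legitimate interest
    necessity: str                    # why the processing is necessary for it
    individual_impacts: list[str]     # risks to data subjects' rights
    mitigations: list[str] = field(default_factory=list)
    interests_override: bool = False  # True if individuals' rights prevail

    def lawful(self) -> bool:
        # Legitimate interest fails if the individual's rights override
        # the organization's interest.
        return not self.interests_override

lia = BalancingTest(
    purpose="Improve fraud-detection model accuracy",
    necessity="Historical transaction data is required to train the model",
    individual_impacts=["profiling of spending behavior"],
    mitigations=["pseudonymization", "opt-out mechanism"],
)
print(lia.lawful())  # True
```

The point of the record is the exam point itself: you cannot merely assert legitimate interest; the purpose, necessity, impacts, and outcome of the weighing must all be written down.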