Yesterday you learned why AI needs its own governance. Today, you'll learn to categorize the risks AI creates — because you can't govern what you can't classify.
The AIGP exam tests your ability to distinguish between different types of AI risks and map them to appropriate governance responses. Let's build your risk taxonomy.
AI can harm people directly in several ways:
Discrimination and bias — AI systems that systematically disadvantage people based on race, gender, age, disability, or other protected characteristics. A hiring algorithm that screens out women. A lending model that charges higher rates to minorities.
Privacy violations — AI trained on personal data without consent. Facial recognition used for mass surveillance. Generative AI that memorizes and reproduces private information from training data.
Safety risks — Autonomous vehicles causing accidents. Medical AI providing incorrect diagnoses. AI-controlled systems making decisions that endanger physical safety.
Manipulation — AI-generated deepfakes used for fraud. Recommendation algorithms designed to maximize engagement through psychological manipulation. AI-powered social engineering attacks.
Environmental harm — The massive computational resources required to train large AI models contribute to carbon emissions and energy consumption.
AI creates specific risks for the organizations that build or deploy it:
Legal liability — Violations of anti-discrimination laws, privacy regulations, consumer protection statutes, or the EU AI Act can result in lawsuits, fines, and enforcement actions.
Reputational damage — A single AI failure can become a global news story. The reputational cost often exceeds the legal penalties.
Financial risk — Beyond fines, AI failures can cause direct financial losses through incorrect automated decisions, trading errors, or business disruption.
Operational risk — Over-reliance on AI systems that fail, drift, or become unavailable. Shadow AI — ungoverned tools that employees introduce into workflows — compounds this exposure.
Intellectual property risk — AI trained on copyrighted material. Ownership questions around AI-generated content. Trade secrets inadvertently disclosed to AI tools.
Beyond individuals and organizations, AI poses risks to society at large:
Democratic processes — AI-generated disinformation, deepfakes targeting elections, and algorithmic amplification of polarizing content can undermine democratic institutions.
Labor displacement — AI automation may eliminate jobs faster than new ones are created, hitting some industries and demographic groups far harder than others.
Concentration of power — AI development requires massive resources, potentially concentrating technological and economic power in a small number of organizations or nations.
Misalignment and loss of control — As AI systems become more capable, the risk of systems pursuing goals that diverge from human intentions grows. This is the "alignment problem."
The AIGP exam expects you to connect risk categories to appropriate governance actions:
Bias and discrimination risks → Fairness testing, bias audits, representative training data, demographic parity monitoring
Privacy risks → Data protection impact assessments, purpose limitation policies, consent management, anonymization
Safety risks → Red teaming, adversarial testing, human oversight requirements, kill switches
Operational risks → Monitoring frameworks, drift detection, fallback procedures, incident response plans
IP risks → Acceptable use policies, data classification, contractual protections, access controls
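To make "demographic parity monitoring" concrete, here is a minimal sketch of how a bias audit might compare favorable-outcome rates across groups. The data, group names, and the 0.8 threshold (borrowed from the informal "four-fifths rule" used in US employment contexts) are illustrative assumptions, not a compliance standard on their own:

```python
# Hypothetical demographic parity check: compare the rate of favorable
# outcomes (e.g., loan approvals or interview invitations) across groups.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 decisions."""
    return {group: sum(d) / len(d) for group, d in outcomes.items()}

def parity_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest (1.0 = parity)."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Illustrative decisions from a hypothetical hiring model
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 6/8 selected -> 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 3/8 selected -> 0.375
}

ratio = parity_ratio(decisions)
print(f"parity ratio: {ratio:.2f}")        # 0.375 / 0.75 = 0.50
if ratio < 0.8:                            # illustrative threshold only
    print("potential disparate impact - flag for bias audit")
```

A real fairness program would pair a statistical check like this with representative data review and human investigation; the metric flags a disparity but cannot explain its cause.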
This mapping is foundational — you'll use it throughout the rest of this course.