ISO 42001 is the first international standard for AI Management Systems (AIMS). It's certifiable — meaning organizations can be audited and certified against it. The AIGP exam tests your understanding of its structure and how it relates to other ISO standards.
ISO 42001 follows the familiar ISO management system structure (Annex SL), making it compatible with ISO 27001 (information security) and ISO 27701 (privacy).
Core structure:
Context of the organization — Understand internal and external factors affecting AI, interested parties and their requirements, scope of the AIMS.
Leadership — Top management commitment, AI policy, roles and responsibilities.
Planning — AI risk assessment, treatment of risks and opportunities, AI objectives.
Support — Resources, competence, awareness, communication, documented information.
Operation — AI system impact assessment, AI system lifecycle processes, third-party relationships.
Performance evaluation — Monitoring, measurement, internal audit, management review.
Improvement — Nonconformities, corrective actions, continual improvement.
Key requirements:
AI risk assessment — Organizations must identify AI-specific risks, assess their likelihood and impact, and determine treatment options. This includes risks to individuals, groups, organizations, and society.
AI system impact assessment — Before developing or deploying AI, organizations must assess potential impacts on individuals and society, including fairness, transparency, and accountability impacts.
AI lifecycle management — Controls must be applied throughout the AI lifecycle: design, development, testing, deployment, monitoring, and retirement.
Third-party management — When using third-party AI components, organizations must assess and manage associated risks.
Documentation — Extensive documentation requirements including AI policies, risk assessments, impact assessments, and operational records.
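The risk assessment flow described above (identify risks, assess likelihood and impact, determine treatment) can be sketched as a simple risk register. This is an illustrative sketch only, not part of ISO 42001: the 1–5 scoring scale, the score thresholds, and the treatment labels are assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    description: str
    affected_parties: list[str]  # e.g. individuals, groups, organizations, society
    likelihood: int              # assumed 1-5 scale (not prescribed by the standard)
    impact: int                  # assumed 1-5 scale

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring; ISO 42001 does not mandate a formula.
        return self.likelihood * self.impact

    def treatment(self) -> str:
        # Thresholds are illustrative assumptions, not drawn from the standard.
        if self.score >= 15:
            return "mitigate or avoid"
        if self.score >= 8:
            return "mitigate"
        return "accept and monitor"

risks = [
    AIRisk("Biased hiring recommendations", ["individuals", "groups"], 4, 5),
    AIRisk("Model drift degrades accuracy", ["organizations"], 3, 3),
]
# Review highest-scoring risks first.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.description}: score={r.score}, treatment={r.treatment()}")
```

In practice the register would also capture owners, review dates, and links to the corresponding impact assessments, feeding the documentation requirements above.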
Related ISO standards:
ISO 23894 (AI Risk Management) — Provides guidance on managing risks specifically related to AI systems. It aligns with ISO 31000 (general risk management) and complements the NIST AI RMF.
ISO 22989 (AI Concepts and Terminology) — Establishes common vocabulary and concepts for AI. Important for ensuring consistent communication across governance frameworks.
ISO 23053 (Framework for AI Systems Using Machine Learning) — Technical framework for ML-based AI systems, covering the ML pipeline from data to deployment.
ISO 38507 (Governance of AI) — Guidance for governing bodies on the governance implications of organizational AI use, addressing board-level oversight responsibilities.
The OECD AI Principles (adopted 2019, updated 2024) are not legally binding but have influenced virtually every AI governance framework globally:
1. Inclusive growth, sustainable development, and well-being — AI should benefit people and the planet
2. Human-centered values and fairness — Respect for rule of law, human rights, democratic values, and diversity
3. Transparency and explainability — Meaningful information about AI systems
4. Robustness, security, and safety — AI systems should function appropriately and not pose unreasonable safety risks
5. Accountability — Organizations and individuals should be accountable for AI systems
These principles are referenced in the EU AI Act, NIST AI RMF, and ISO 42001. They're the common thread linking global AI governance frameworks.