Principles without policies are just aspirations. Today you'll learn how to turn governance principles into enforceable organizational policy — a skill the AIGP exam tests heavily.
An AI governance charter is the foundational document that establishes the governance program. It typically includes:
- Purpose and scope — Why the governance program exists and which AI activities it covers (internal development, third-party procurement, shadow AI, research, etc.)
- Guiding principles — The organization's AI principles, aligned with industry standards such as the OECD AI Principles or the EU High-Level Expert Group's (HLEG) requirements for trustworthy AI.
- Governance structure — The roles, committees, and reporting lines described in Lesson 4.
- Authority and mandate — The governance program's decision-making authority, including the power to halt deployments, require remediation, or escalate to leadership.
- Scope of applicability — Which teams, systems, and use cases fall under the governance framework.
- Review cadence — How often the charter is reviewed and updated (typically annually, or whenever significant regulatory changes occur).
An acceptable use policy (AUP) for AI defines what employees can and cannot do with AI tools. This is one of the most practical and immediately impactful governance documents.
A well-designed AUP addresses:
- Approved AI tools — Which AI tools are sanctioned for use? Which are prohibited?
- Data classification — What types of data can be input into AI systems? (e.g., public data: yes; confidential client data: never)
- Use case boundaries — What decisions can AI inform vs. make autonomously?
- Output review — When must AI outputs be reviewed by a human before use?
- Prohibited uses — Specific uses that are never acceptable (e.g., autonomous hiring decisions, surveillance of employees)
- Incident reporting — How to report AI misuse or unexpected behavior
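The data-classification and approved-tool rules above can even be enforced in code, for example as a gate in a proxy or plugin that sits between employees and AI tools. Here is a minimal sketch; the classification levels, tool names, and `may_submit` helper are all hypothetical, not part of any standard:

```python
# Hypothetical AUP gate: which data classifications may be sent to which
# AI tools. Levels and tool names are illustrative assumptions.
ALLOWED = {
    "public": {"any_approved_tool"},      # public data: any sanctioned tool
    "internal": {"enterprise_llm"},       # internal data: enterprise tool only
    "confidential_client": set(),         # confidential client data: never
}

def may_submit(data_class: str, tool: str) -> bool:
    """Return True if the AUP permits sending this data class to this tool."""
    permitted = ALLOWED.get(data_class, set())
    return "any_approved_tool" in permitted or tool in permitted

print(may_submit("public", "enterprise_llm"))              # True
print(may_submit("confidential_client", "enterprise_llm")) # False
```

Unknown classifications fall through to an empty set and are denied, which mirrors the deny-by-default posture most AUPs take toward unclassified data.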
Every organization must define its risk appetite for AI — the level and type of AI risk it's willing to accept in pursuit of its objectives.
- Risk appetite — The broad statement of willingness to accept risk. Example: "We are willing to accept moderate AI risk for customer-facing applications that have undergone bias testing and operate under human oversight."
- Risk tolerance — The specific, measurable thresholds that define acceptable risk levels. Example: "No AI system may be deployed with a fairness gap exceeding 5% across demographic groups."
- Risk capacity — The maximum risk the organization can absorb before facing existential harm.
For the AIGP exam, remember:
- Risk appetite is set by the board or senior leadership
- Risk tolerance is defined by the AI risk committee or governance office
- Risk tolerance must be measurable and auditable
- Different AI use cases may have different risk tolerances (a chatbot answering FAQs vs. an AI making credit decisions)
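"Measurable and auditable" means a tolerance can be checked mechanically. A small sketch, using the 5% fairness-gap tolerance from the example above and made-up approval rates:

```python
# Hypothetical risk-tolerance check: block deployment if the fairness gap
# (max minus min approval rate across demographic groups) exceeds 5%.
TOLERANCE = 0.05  # set by the AI risk committee / governance office

def fairness_gap(approval_rates: dict[str, float]) -> float:
    """Gap between the highest and lowest group approval rate."""
    rates = approval_rates.values()
    return max(rates) - min(rates)

def within_tolerance(approval_rates: dict[str, float]) -> bool:
    return fairness_gap(approval_rates) <= TOLERANCE

# Illustrative audit record, not real data.
rates = {"group_a": 0.71, "group_b": 0.68, "group_c": 0.74}
print(round(fairness_gap(rates), 2))  # 0.06
print(within_tolerance(rates))        # False: gap exceeds the 5% tolerance
```

Because the threshold, the metric, and the inputs are all explicit, an internal auditor can re-run the same check and reach the same conclusion — which is exactly what the exam means by "auditable."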
Most organizations already have Governance, Risk, and Compliance (GRC) frameworks. AI governance should integrate with — not duplicate — these existing structures.
Integration points:
- Enterprise risk management — Add AI-specific risk categories to existing risk registers
- Compliance management — Map AI regulatory requirements alongside existing compliance obligations
- Internal audit — Include AI systems in the audit universe; train auditors on AI-specific risks
- Vendor management — Extend vendor assessment criteria to cover AI-specific risks
- Data governance — Build on existing data governance for AI training data requirements
- Change management — Use existing change approval processes for AI model updates
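The first integration point — extending the existing risk register rather than starting a new one — can be pictured as a simple merge. The category and risk names below are illustrative assumptions, not a prescribed taxonomy:

```python
# Hypothetical sketch: fold AI-specific risk categories into an existing
# enterprise risk register instead of maintaining a separate AI register.
existing_register = {
    "operational": ["vendor outage", "key-person dependency"],
    "compliance": ["data protection breach"],
}

ai_risk_categories = {
    "ai_model": ["bias / fairness gap", "model drift", "hallucination"],
    "ai_data": ["training-data provenance", "data poisoning"],
    "ai_vendor": ["third-party model change without notice"],
}

# One shared register means AI risks flow through the same reporting,
# audit, and escalation workflows as every other enterprise risk.
existing_register.update(ai_risk_categories)

print(sorted(existing_register))
# ['ai_data', 'ai_model', 'ai_vendor', 'compliance', 'operational']
```

The design point is the single `update` into the shared structure: the AI categories gain the register's existing owners, review cycles, and audit trail for free, which is precisely what a standalone program forfeits.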
Common mistake: Building a standalone AI governance program disconnected from existing GRC. This creates silos, duplicates effort, and reduces effectiveness.