Welcome to the AAISM Blitz. You already manage security programs — this course extends that expertise into AI. Every lesson assumes CISM/CISSP-level knowledge and focuses exclusively on AI-specific governance, risk, and controls.
Today we establish the foundation: the AI governance charter, the roles that make it work, and the stakeholders who must be aligned from day one.
An AI governance charter is the authorizing document for your entire AI security program. Without it, you have informal influence. With it, you have executive mandate.
A well-structured charter includes:
Purpose and scope — Which AI systems are covered? All enterprise AI, or a phased approach starting with high-risk systems?
Authority — Who has decision-making power over AI risk acceptance, deployment approvals, and incident escalation?
Alignment — How does AI governance map to existing enterprise risk management, information security, and compliance programs?
Accountability — Clear lines from board oversight through operational execution.
The charter should reference your organization's risk appetite statement and explicitly address AI-specific risks that fall outside traditional IT security governance.
AI security introduces roles that don't exist in traditional security programs. Understanding these roles — and how they interact — is essential for AAISM.
AI Security Architect — Designs security controls for AI systems. Evaluates model architectures, data pipelines, and deployment patterns for security implications.
AI Governance Engineer — Implements governance controls: model inventories, approval workflows, policy enforcement automation.
AI Risk Analyst — Assesses AI-specific risks: bias, drift, adversarial vulnerability, regulatory exposure. Maintains the AI risk register.
AI Auditor — Evaluates AI systems against governance policies, regulatory requirements, and ethical standards. Conducts model audits and data lineage reviews.
AI Ethics Officer — Oversees responsible AI principles. Reviews high-risk use cases for ethical implications before deployment.
As security manager, you don't fill these roles — you ensure they exist, are staffed, and have clear authority.
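To make the AI Governance Engineer's work concrete, here is a minimal sketch of a model inventory entry with an approval workflow. All field names, statuses, and the `approve` helper are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class ApprovalStatus(Enum):
    DRAFT = "draft"
    UNDER_REVIEW = "under_review"
    APPROVED = "approved"
    RETIRED = "retired"

@dataclass
class ModelRecord:
    """One entry in an AI model inventory (illustrative fields only)."""
    model_id: str
    owner: str                     # accountable business owner
    risk_tier: str                 # internal classification, e.g. high / limited / minimal
    status: ApprovalStatus = ApprovalStatus.DRAFT
    approvals: list = field(default_factory=list)  # sign-offs collected so far

def approve(record: ModelRecord, approver: str, required: set) -> ModelRecord:
    """Record a sign-off; mark APPROVED once every required approver has signed."""
    record.approvals.append(approver)
    if required.issubset(record.approvals):
        record.status = ApprovalStatus.APPROVED
    else:
        record.status = ApprovalStatus.UNDER_REVIEW
    return record
```

The point of the sketch: an inventory is not a spreadsheet of model names but a workflow object, where deployment approval is gated on explicit sign-offs from defined roles.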
AI governance requires broader stakeholder alignment than traditional security. You need engagement from groups that may never have interacted with your security program before.
Board and C-suite — Set risk appetite for AI. Approve governance charter. Receive reporting on AI risk posture.
Legal and compliance — Interpret regulatory requirements (EU AI Act, sector-specific AI regulations). Review AI contracts and liability.
Engineering and data science — Build and deploy AI systems. Must understand and comply with governance policies without excessive friction.
HR and ethics — Address workforce impacts of AI. Ensure AI hiring tools meet anti-discrimination requirements.
Procurement — Evaluate AI vendors. Ensure contracts include security, transparency, and audit provisions.
Regulators — Understand expectations. Prepare for regulatory inquiries. Maintain documentation for audit readiness.
The key insight: AI governance is cross-functional by nature. A security manager who treats it as a security-only initiative will fail.
Your AI governance program should align with established frameworks rather than creating governance from scratch.
NIST AI RMF — The Govern function maps directly to charter development, roles, and stakeholder engagement. Use it as your structural backbone.
ISO/IEC 42001 — The AI management system standard. Provides auditable requirements for AI governance that align with the ISO/IEC 27001 patterns you already know.
EU AI Act — Even if you're not in the EU, the risk classification framework (unacceptable, high, limited, minimal risk) provides a practical model for internal AI classification.
ISACA COBIT — Maps AI governance to enterprise governance of IT. Useful for organizations that already govern IT with COBIT.
The exam expects you to know when to apply which framework, not to memorize every clause.
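The EU AI Act's four-tier model translates naturally into an internal classification rule. A minimal sketch, where the use-case labels and tier assignments are hypothetical examples of an internal taxonomy, not text from the Act:

```python
# Internal risk tiers modeled on the EU AI Act's four categories.
UNACCEPTABLE = "unacceptable"   # prohibited outright
HIGH = "high"                   # strict controls before deployment
LIMITED = "limited"             # transparency obligations
MINIMAL = "minimal"             # baseline governance only

# Illustrative use-case labels — an organization would maintain its own lists.
PROHIBITED_USES = {"social_scoring"}
HIGH_RISK_DOMAINS = {"hiring", "credit_scoring", "medical_triage"}
TRANSPARENCY_ONLY = {"chatbot", "content_generation"}

def classify(use_case: str) -> str:
    """Map an internal use-case label to a risk tier."""
    if use_case in PROHIBITED_USES:
        return UNACCEPTABLE
    if use_case in HIGH_RISK_DOMAINS:
        return HIGH
    if use_case in TRANSPARENCY_ONLY:
        return LIMITED
    return MINIMAL
```

Even outside EU jurisdiction, a rule like this gives engineering and procurement a shared, auditable answer to "how risky is this system?" before controls are selected.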
Your organization already has a risk appetite statement. AI governance must connect to it — not create a parallel risk universe.
Key questions to answer:
What level of AI risk is acceptable? Some organizations will accept higher risk for competitive advantage. Others (healthcare, financial services) have near-zero tolerance for certain AI failures.
How does AI risk aggregate with existing risks? A model that's individually low-risk might create systemic risk when combined with other AI systems.
Who decides? Risk acceptance for AI should follow the same escalation path as other enterprise risks, with AI-specific thresholds added.
Document these decisions in the charter. They become the guardrails for every subsequent governance decision.
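The "who decides?" question can be sketched as routing logic: AI risk acceptance follows the existing escalation path, with an AI-specific adjustment layered on. The roles, scores, and thresholds below are illustrative assumptions, not recommended values:

```python
# Existing enterprise escalation path, lowest authority first (illustrative).
ESCALATION_PATH = ["system_owner", "ciso", "risk_committee", "board"]

def acceptance_authority(risk_score: int, ai_specific: bool) -> str:
    """Return the lowest role permitted to accept a risk of this score.
    AI-specific risks escalate one level higher than conventional ones
    (example thresholds on a 1-25 likelihood x impact scale)."""
    if risk_score >= 20:
        level = 3    # board
    elif risk_score >= 12:
        level = 2    # risk committee
    elif risk_score >= 6:
        level = 1    # CISO
    else:
        level = 0    # system owner
    if ai_specific:
        level = min(level + 1, len(ESCALATION_PATH) - 1)
    return ESCALATION_PATH[level]
```

This captures the charter principle in executable form: one escalation path for all enterprise risks, with AI-specific thresholds expressed as an adjustment rather than a parallel process.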