Today we apply everything from Days 1-5 to a comprehensive scenario. You'll practice ISACA-style governance questions and learn the patterns that distinguish correct answers on the exam.
Scenario: First AI security manager at a financial services firm
Background: You've been hired as the first AI Security Manager at MidWest Financial, a regional bank with $12B in assets. The bank currently uses AI for:

- Credit decisioning
- A customer service chatbot
- Fraud detection
There is no AI governance program. Engineering deploys models directly to production. Legal has raised concerns about regulatory compliance. The board wants a quarterly AI risk report starting next quarter. The CISO (your manager) has given you six months to build the program.
Your task: Design the governance program. The following questions test your ability to apply Domain 1 concepts in the ISACA mindset.
Scenario Question 1
You have six months and limited budget. What is your FIRST priority?
**Governance first.** The charter establishes your authority and mandate. The asset inventory tells you what you're governing. Without these, every other activity lacks foundation. Monitoring and risk assessment come next, but they require governance authority and asset visibility.
Scenario Question 2
The credit decisioning AI was deployed 18 months ago without any governance review. The model development team says it's performing well. What is the MOST appropriate action?
The credit decisioning model is **high-risk and regulated.** It should be prioritized for governance review — not halted (disproportionate) and not deferred (too risky given regulatory exposure). A retrospective review establishes governance without disrupting operations.
Scenario Question 3
Legal asks whether the customer service chatbot needs to disclose that customers are talking to an AI. What should you recommend?
**Governance mindset:** Transparency is a core AI governance principle. Recommending disclosure aligns with the EU AI Act and emerging regulations. Documenting the decision creates an audit trail. The security manager should **recommend** based on governance principles while respecting that legal makes the final determination.
Scenario Question 4
Engineering wants to use an open-source LLM for internal document summarization. They argue it's low-risk. What governance process should apply?
**Risk-proportionate governance.** The use case may be lower risk, but it still involves an open-source model (supply chain risk) processing internal documents (data classification implications). A proportionate process applies appropriate scrutiny without the full weight of high-risk governance.
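The risk-proportionate triage described above can be pictured as a simple intake classifier. Everything here — the attribute names, the tier labels, the ordering of checks — is a hypothetical illustration of the concept, not an ISACA-prescribed scheme:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """Hypothetical attributes an intake/triage form might capture."""
    name: str
    handles_customer_data: bool
    affects_credit_or_lending: bool
    third_party_model: bool

def governance_tier(uc: AIUseCase) -> str:
    """Map a use case to a governance tier, heaviest condition first."""
    if uc.affects_credit_or_lending:
        return "high"      # full review: fairness testing, legal sign-off
    if uc.handles_customer_data or uc.third_party_model:
        return "standard"  # data classification + supply chain checks
    return "light"         # self-attestation with periodic spot audits

# The open-source summarizer from Question 4: lower risk, but the
# third-party model still pulls it into a standard review tier.
summarizer = AIUseCase("doc-summarizer", handles_customer_data=False,
                       affects_credit_or_lending=False, third_party_model=True)
print(governance_tier(summarizer))  # -> standard
```

The point of the sketch is the ordering: the heaviest trigger (regulated lending impact) is checked first, so no supply-chain or data argument can downgrade a high-risk system.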
Scenario Question 5
The board wants AI risk on the quarterly risk report starting next quarter. You have three months. What do you report?
**Be honest about maturity stage.** Reporting current state (inventory, preliminary classification, program plan) is appropriate and expected. Waiting until everything is complete delays board visibility. Reporting only high-risk systems omits relevant information. The board needs to understand both the current state and the improvement plan.
Key ISACA governance patterns
These patterns appear repeatedly on the AAISM exam:
1. Governance before technical. When both governance and technical answers seem correct, choose governance. Establish authority, policies, and processes before deploying tools.
2. Risk-proportionate response. Don't apply the same governance weight to every AI system. High-risk systems get comprehensive governance. Low-risk systems get proportionate governance.
3. Process-driven decisions. Correct answers reference documented processes, predefined thresholds, and established procedures. Avoid answers that rely on individual judgment without process backing.
4. Cross-functional engagement. AI governance is not a security-only function. Correct answers involve legal, engineering, ethics, and business stakeholders.
5. Pragmatic phased implementation. ISACA prefers phased approaches based on risk prioritization over comprehensive but delayed implementations.
Scenario Question 6
After three months, you've established the governance charter, completed the asset inventory, and begun risk assessments. The CISO asks what the program's biggest remaining gap is. What do you answer?
At this stage, governance structure exists but **enforcement is the gap.** Policies without enforcement are ineffective (Day 2 lesson). Automated deployment gates, compliance checks in CI/CD, and approval workflows are what move the program from a defined state to a managed one.
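One way to picture such an enforcement gate is a CI step that refuses to deploy any model without a completed governance review on record. This is a minimal sketch under assumed conventions — the in-memory dict stands in for a real AI asset inventory or model registry, and the model IDs are invented:

```python
import sys

# Stand-in for an AI asset inventory / model registry lookup.
# In a real pipeline this would be an API call, not a dict.
GOVERNANCE_RECORDS = {
    "fraud-detect-v3":   {"review_complete": True,  "risk_tier": "high"},
    "doc-summarizer-v1": {"review_complete": False, "risk_tier": "standard"},
}

def deployment_gate(model_id: str) -> bool:
    """Return True only if the model may proceed to production."""
    record = GOVERNANCE_RECORDS.get(model_id)
    if record is None:
        print(f"BLOCK {model_id}: not in AI asset inventory")
        return False
    if not record["review_complete"]:
        print(f"BLOCK {model_id}: governance review incomplete")
        return False
    print(f"ALLOW {model_id}")
    return True

if __name__ == "__main__":
    model = sys.argv[1] if len(sys.argv) > 1 else "doc-summarizer-v1"
    sys.exit(0 if deployment_gate(model) else 1)  # non-zero exit fails the CI stage
```

Note the default-deny posture: a model that is missing from the inventory is blocked, which is exactly the preventive control that would have caught the ungoverned deployment in Question 9.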
Scenario Question 7
A competitor bank receives a large regulatory fine for AI bias in lending. Your board asks what your organization is doing to prevent this. What is the BEST response?
**Governance response:** Don't panic-react and don't dismiss. Present the **current program status** with specific reference to what you know about your own AI systems. If the risk assessment identified fairness gaps, present the remediation plan. This demonstrates mature governance, not reactive scrambling.
Scenario Question 8
The fraud detection team wants to retrain their model with a new dataset from a third-party vendor. Under your new governance program, what approval is needed?
Retraining with new third-party data introduces **vendor risk, data quality risk, and model behavior risk.** The governance workflow should assess all three. This isn't just a change management issue — it's a governance event that requires the AI-specific review process.
Scenario Question 9
Six months in, your program is operational. An engineering team deploys an AI model to production without governance review, claiming urgency. How do you handle this?
**Balanced response.** A retrospective review addresses the immediate risk without disproportionate disruption. Using the incident to strengthen preventive controls (deployment gates) addresses the root cause. Escalation may be needed eventually, but start with remediation and prevention.
Scenario Question 10
The board asks you to benchmark your AI governance program against industry best practices. What framework do you recommend as the PRIMARY benchmark?
**NIST AI RMF** provides the most comprehensive and flexible framework for benchmarking AI governance maturity. It's designed for assessment and improvement, not just compliance. ISO/IEC 42001 is a certifiable management system standard (more prescriptive), the EU AI Act is regulation (a compliance target, not a benchmark), and COBIT is general IT governance (broader scope).
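A benchmarking exercise against the NIST AI RMF typically scores the program on its four core functions (Govern, Map, Measure, Manage). The scores below are illustrative, not from any real assessment:

```python
# Self-assessment sketch against the NIST AI RMF core functions.
# Scores on a 0-5 scale are invented for illustration.
scores = {"Govern": 3, "Map": 3, "Measure": 1, "Manage": 2}

def weakest_function(scores: dict[str, int]) -> str:
    """Identify where the next improvement effort should go."""
    return min(scores, key=scores.get)

print(weakest_function(scores))  # -> Measure
```

Even a toy scorecard like this supports the board conversation: it shows relative maturity across functions and makes the next priority explicit rather than anecdotal.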
🎯
Day 6 Complete — Domain 1 Done
"When governance and technical answers both seem correct, ISACA wants governance. Establish authority and process before deploying tools."