Welcome to Domain III. For the next 7 days, you'll learn to govern the AI development process — from problem definition to deployment readiness. This domain tests your ability to embed governance at every stage of the AI lifecycle.
1. Problem Formulation and Use Case Assessment
- Define the business problem AI is intended to solve
- Assess whether AI is the right solution (not every problem needs AI)
- Identify stakeholders and affected populations
- Governance checkpoint: Use case review — Is this use case aligned with organizational AI principles? What risk level does it represent?
2. Data Collection and Preparation
- Gather training, validation, and test data
- Clean, label, and transform data for model consumption
- Governance checkpoint: Data governance review — Do we have rights to use this data? Is it representative? Has bias been assessed?
3. Model Selection and Training
- Choose appropriate model architecture
- Train the model on prepared data
- Governance checkpoint: Technical review — Is the model appropriate for the use case? Are training procedures documented?
4. Testing and Evaluation
- Evaluate model performance against defined metrics
- Conduct fairness, robustness, and security testing
- Governance checkpoint: Testing gate — Do test results meet defined thresholds? Have bias audits been completed?
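A testing gate like this can be expressed as a simple threshold check. The sketch below computes a demographic parity gap (the difference in positive-outcome rates between two groups) and gates on it; the 0.1 threshold and function names are illustrative assumptions, not a regulatory standard.

```python
def demographic_parity_gap(outcomes_a: list[int], outcomes_b: list[int]) -> float:
    """Absolute difference in positive-outcome rates between two groups.

    Each list holds binary outcomes (1 = positive decision) for one group.
    """
    rate = lambda xs: sum(xs) / len(xs)
    return abs(rate(outcomes_a) - rate(outcomes_b))


def passes_fairness_gate(outcomes_a: list[int], outcomes_b: list[int],
                         threshold: float = 0.1) -> bool:
    """Gate: the parity gap must not exceed the defined threshold.

    The 0.1 default is a hypothetical policy choice, set per use case.
    """
    return demographic_parity_gap(outcomes_a, outcomes_b) <= threshold
```

In practice a testing gate would combine several such metrics (equalized odds, robustness scores) against thresholds defined during problem formulation.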
5. Deployment Readiness Review
- Final governance review before production release
- Governance checkpoint: Go/no-go decision — Have all required reviews, approvals, and documentation been completed?
6. Post-Deployment Monitoring
- Monitor model performance, fairness, and drift in production
- Governance checkpoint: Continuous monitoring — Are KPIs being tracked? Are escalation triggers defined?
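Drift monitoring is often automated with a statistic such as the Population Stability Index (PSI), which compares the distribution of a feature (or of model scores) in production against the training baseline. A minimal sketch, assuming pre-binned proportions; the ~0.2 escalation trigger is a common convention, not a standard:

```python
import math

def population_stability_index(expected: list[float],
                               actual: list[float],
                               eps: float = 1e-6) -> float:
    """PSI over pre-binned proportions (each list sums to ~1.0).

    Values above roughly 0.2 are conventionally treated as significant
    drift and would fire an escalation trigger.
    """
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))
```

A monitoring job would compute this on a schedule and escalate when the value crosses the trigger defined in the monitoring plan.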
A common objection: "Governance gates slow down development." The governance professional has four responses:
Proportionate governance — Not every AI system needs the same level of review. Low-risk AI (spam filters, recommendation engines) can have lightweight gates. High-risk AI (lending, hiring, medical) requires comprehensive review.
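Proportionate governance can be encoded directly in tooling: the system's risk tier determines which gates apply. A minimal sketch, where the tier names and gate lists are hypothetical examples of an organization's own taxonomy:

```python
# Hypothetical mapping from risk tier to required governance gates.
REVIEW_REQUIREMENTS: dict[str, list[str]] = {
    "low":  ["automated_data_quality_scan"],
    "medium": ["automated_data_quality_scan", "bias_metrics_report"],
    "high": ["automated_data_quality_scan", "bias_metrics_report",
             "human_ethics_review", "deployment_signoff"],
}

def required_gates(risk_tier: str) -> list[str]:
    """Return the governance gates a system of this tier must pass."""
    return REVIEW_REQUIREMENTS[risk_tier]
```

A low-risk spam filter would pass through one automated gate, while a high-risk lending model would accumulate the full review chain.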
Automated checks — Many governance checks can be automated: data quality scans, bias metrics, documentation completeness checks, compliance checklists.
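One of the simplest checks to automate is documentation completeness: a script that flags missing required artifacts before a gate opens. A sketch, assuming a hypothetical set of required fields:

```python
# Illustrative required fields; a real organization defines its own list.
REQUIRED_FIELDS = ("use_case", "data_provenance",
                   "training_procedure", "test_results")

def missing_documentation(record: dict) -> list[str]:
    """Return the required fields absent or empty in a model's record.

    An empty return value means the completeness check passes.
    """
    return [f for f in REQUIRED_FIELDS if not record.get(f)]
```

Run in CI, a check like this turns a documentation requirement into a fast, repeatable gate rather than a manual review bottleneck.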
Parallel processes — Governance reviews can run in parallel with development, not sequentially. While developers train the model, governance can review the use case and data rights.
Shift left — Embed governance requirements into the development process from the start, rather than adding review at the end. Developers who understand governance requirements build compliant systems from day one.
Governance documentation should be created during each stage, not after:
Problem formulation → Use case assessment, stakeholder analysis, risk classification
Data collection → Data provenance records, rights assessment, representativeness analysis
Model training → Training procedures, hyperparameter choices, architectural decisions
Testing → Test results, fairness metrics, known limitations
Deployment → Deployment decision rationale, monitoring plan, rollback procedures
Post-deployment → Monitoring reports, incident logs, retraining decisions
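The stage-by-stage mapping above can be captured as an append-only audit trail, where each document is recorded at the moment the stage produces it. A minimal sketch; the class and field names are illustrative:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LifecycleRecord:
    """One governance artifact, captured at the stage that produced it."""
    stage: str      # e.g. "data_collection"
    artifact: str   # e.g. "data provenance record"
    content: str
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

audit_trail: list[LifecycleRecord] = []

def document(stage: str, artifact: str, content: str) -> None:
    """Append a record as the work happens, not retroactively."""
    audit_trail.append(LifecycleRecord(stage, artifact, content))
```

Because every record is timestamped at creation, the trail itself evidences that documentation was produced during each stage rather than reconstructed afterward.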
The key principle: if it's not documented, it didn't happen. Documentation created after the fact is unreliable and often insufficient for regulatory compliance.