Day 23 of 30

The Go/No-Go Decision — Deployment Readiness Review

⏱ 18 min 📊 Advanced AIGP Certification Prep

The go/no-go decision is the most consequential governance moment in the AI lifecycle. It's the final gate between development and production. The AIGP exam frequently presents scenarios requiring you to make or evaluate this decision.

The Deployment Readiness Framework

A structured go/no-go framework evaluates readiness across five dimensions:

1. Legal and Compliance Readiness

- All applicable regulations identified and addressed

- Required assessments completed (DPIA, FRIA, or AIA, as applicable)

- Lawful basis established for data processing

- Contract provisions in place for third-party components

2. Technical Readiness

- Model performance meets defined thresholds

- Fairness metrics within tolerance

- Robustness and security testing completed

- Infrastructure ready for production workload

3. Governance and Documentation Readiness

- Technical documentation complete (including Annex IV if applicable)

- Model card and datasheets finalized

- Risk assessment documented with approved residual risk level

- Version control and audit trail in place

4. Operational Readiness

- Monitoring framework defined with KPIs and alert thresholds

- Human oversight personnel identified and trained

- Incident response plan specific to this AI system

- Rollback procedures tested

5. Stakeholder Readiness

- Affected stakeholders identified and notified

- Transparency requirements met (disclosures, explanations)

- Feedback mechanisms in place

- Support team trained on AI system behavior
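The five dimensions above can be sketched as a simple checklist evaluation, where a "go" requires every item in every dimension to pass. This is a minimal illustration, not an implementation of any specific governance tool; all dimension names, item names, and values are abbreviated examples.

```python
# Hypothetical readiness checklist: five dimensions, two example items
# each (the full framework lists four per dimension). All values are
# illustrative.
readiness = {
    "legal_compliance": {
        "assessments_completed": True,   # DPIA / FRIA / AIA as applicable
        "lawful_basis_established": True,
    },
    "technical": {
        "performance_thresholds_met": True,
        "fairness_within_tolerance": False,  # e.g. an open fairness gap
    },
    "governance_documentation": {
        "technical_documentation_complete": True,
        "residual_risk_approved": True,
    },
    "operational": {
        "monitoring_defined": True,
        "rollback_tested": True,
    },
    "stakeholder": {
        "transparency_requirements_met": True,
        "feedback_mechanisms_in_place": True,
    },
}

def failing_items(checklist):
    """Return (dimension, item) pairs that block deployment."""
    return [
        (dim, item)
        for dim, items in checklist.items()
        for item, passed in items.items()
        if not passed
    ]

blockers = failing_items(readiness)
decision = "go" if not blockers else "no-go"
```

With the open fairness gap above, `blockers` is non-empty and the decision resolves to "no-go" regardless of how many other items pass, reflecting the principle that readiness must hold across all five dimensions simultaneously.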

Knowledge Check
During a go/no-go review, the fairness audit reveals a 12% performance gap across demographic groups. The governance committee should:
A 12% performance gap is typically above acceptable risk tolerance for fairness. Deployment should be halted until mitigation addresses the disparity. Deploying with monitoring or disclaimers doesn't reduce the discriminatory impact. Removing demographic features may not eliminate proxy discrimination.

Approval Authority and Escalation

Who has the authority to approve deployment? The governance framework must define:

Standard approvals — Low-risk AI systems may be approved by the model owner with technical sign-off from the development team.

Elevated approvals — Medium-risk AI systems require additional sign-off from the AI governance team or risk committee.

Executive approvals — High-risk AI systems require approval from the AI governance officer, legal review, and potentially board-level notification.
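The tiered approval structure can be expressed as a simple routing table mapping risk tier to required sign-offs. This is an illustrative sketch; the role names and tier labels are assumptions, not prescribed by any framework.

```python
# Hypothetical approval matrix: risk tier -> required approver roles.
APPROVAL_MATRIX = {
    "low": ["model_owner", "dev_team_signoff"],
    "medium": ["model_owner", "dev_team_signoff", "ai_governance_team"],
    "high": ["model_owner", "dev_team_signoff", "ai_governance_officer",
             "legal_review", "board_notification"],
}

def required_approvers(risk_tier):
    """Return the sign-offs required for a given risk tier."""
    if risk_tier not in APPROVAL_MATRIX:
        # An unknown tier is itself an escalation trigger, not a default.
        raise ValueError(f"Unknown risk tier: {risk_tier!r}")
    return APPROVAL_MATRIX[risk_tier]
```

Note the design choice: an unrecognized tier raises an error rather than falling back to a default approval path, mirroring the governance principle that novel cases without precedent should escalate.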

Escalation triggers:

- Test results that fail to meet defined thresholds

- Unresolved legal or compliance questions

- Disagreement between technical and governance teams

- Novel use cases without precedent

- External stakeholder concerns

Conditional Deployment Options

Not every deployment decision is binary. Consider graduated approaches:

Pilot program — Deploy to a limited user group with enhanced monitoring. Validate real-world performance before full rollout.

Limited rollout — Deploy in a specific geography, business unit, or use case before expanding.

Staged deployment — Gradually increase the AI's decision-making authority. Start with AI-assisted (human decides), progress to AI-driven (human reviews), then to AI-autonomous (human monitors).

Shadow mode — The AI runs in production but its decisions are not actioned. Compare AI decisions against actual human decisions to validate alignment.

These approaches reduce deployment risk while enabling real-world validation.
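Shadow mode in particular lends itself to a concrete sketch: the model's outputs are logged and compared against the human decisions actually actioned, but never acted on themselves. The function and field names below are hypothetical, and `model_predict` stands in for whatever inference call the system uses.

```python
def shadow_compare(cases, model_predict):
    """Compare shadow-mode AI outputs to actioned human decisions.

    The AI decision is only logged, never actioned; the human decision
    remains the decision of record.
    """
    log = []
    agreements = 0
    for case in cases:
        ai_decision = model_predict(case["features"])
        human_decision = case["human_decision"]  # what was actually done
        agree = ai_decision == human_decision
        agreements += agree
        log.append({"case_id": case["id"], "ai": ai_decision,
                    "human": human_decision, "agree": agree})
    agreement_rate = agreements / len(cases)
    return agreement_rate, log

# Toy usage: a stand-in model that approves positive scores.
toy_cases = [
    {"id": 1, "features": 5, "human_decision": "approve"},
    {"id": 2, "features": -3, "human_decision": "deny"},
]
rate, _ = shadow_compare(toy_cases, lambda f: "approve" if f > 0 else "deny")
```

The resulting agreement rate and per-case log give the governance committee real-world evidence of alignment before any AI decision is actioned.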

Knowledge Check
A healthcare organization wants to deploy an AI diagnostic tool but is concerned about performance in real-world conditions. What deployment approach BEST balances risk management with learning?
Shadow mode allows the organization to validate AI performance against real clinical decisions without exposing patients to AI-driven decisions. This provides real-world performance data while maintaining the safety of human-driven care. Full deployment risks patient harm. Delaying deployment postpones that learning. Limited deployment provides less comprehensive data.

Documenting the Deployment Decision

Every go/no-go decision must be documented:

- Decision — Go, no-go, or conditional deployment

- Rationale — Why this decision was made

- Conditions — Any conditions attached to the deployment (monitoring requirements, review dates, scope limitations)

- Residual risks — Known risks accepted and why

- Approvers — Who approved, their authority level, date

- Review date — When the deployment will be reassessed
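The decision record fields above can be captured in a structured format so every deployment decision is machine-readable and auditable. The class and field names below are one illustrative way to do this, not a mandated schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DeploymentDecision:
    """Hypothetical go/no-go decision record (illustrative schema)."""
    decision: str                  # "go", "no-go", or "conditional"
    rationale: str
    conditions: List[str] = field(default_factory=list)
    residual_risks: List[str] = field(default_factory=list)
    approvers: List[dict] = field(default_factory=list)  # name, authority, date
    review_date: str = ""

# Example record for a conditional deployment (all details invented).
record = DeploymentDecision(
    decision="conditional",
    rationale="Technical readiness met; legal review still pending.",
    conditions=["Limited pilot scope", "Legal review completed before expansion"],
    residual_risks=["Regulatory interpretation uncertainty"],
    approvers=[{"name": "J. Doe", "authority": "AI governance officer",
                "date": "2025-06-01"}],
    review_date="2025-09-01",
)
```

Serializing such records (e.g. to JSON) gives the audit trail that regulators and post-incident reviewers expect.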

This documentation is critical for:

- Regulatory compliance (demonstrating governance process)

- Organizational accountability (who decided and why)

- Post-incident analysis (understanding the context of the deployment decision)

Final Check
An AI system passes all technical tests but the legal review is incomplete due to regulatory uncertainty. The product team pushes for immediate deployment citing competitive pressure. The governance committee should:
Conditional deployment balances business needs with governance requirements. Technical readiness alone is insufficient — legal and compliance readiness is a separate dimension that must be addressed. A limited scope deployment with a clear timeline for legal completion manages both competitive and compliance risks.
🎯
Day 23 Complete
"The go/no-go decision evaluates five dimensions: legal, technical, governance, operational, and stakeholder readiness. Not every deployment is binary — pilots, staged rollouts, and shadow mode reduce risk while enabling learning."
Next Lesson
Domain III Capstone — Mock Development Governance Review