Day 28 of 30

Updating, Retraining, and Retiring AI Systems

⏱ 18 min 📊 Medium AIGP Certification Prep

AI systems are not static. They must be updated, retrained, and eventually retired. Each of these lifecycle events creates governance obligations — and the AIGP exam tests whether you can manage them.

Retraining Triggers

When should an AI system be retrained? Define triggers in advance:

Performance triggers:

- Accuracy drops below defined threshold

- Error rates exceed acceptable levels

- Drift detection thresholds breached

Regulatory triggers:

- New regulations change compliance requirements

- Regulatory guidance changes interpretation of existing requirements

- Enforcement actions against similar AI systems reveal new risks

Business triggers:

- Change in business context (new products, markets, customer segments)

- Change in data sources or availability

- Strategic decision to expand or modify the AI's scope

Scheduled triggers:

- Regular retraining on a defined schedule (quarterly, annually)

- Periodic re-evaluation of model performance and relevance
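Defining triggers in advance means encoding thresholds before deployment, not after a problem appears. A minimal sketch of a performance-trigger check (the threshold names and values are illustrative, not from any specific framework):

```python
from dataclasses import dataclass

@dataclass
class TriggerConfig:
    """Hypothetical thresholds a governance team might define in advance."""
    max_false_positive_rate: float = 0.05  # e.g. the 5% ceiling in the fraud example
    max_error_rate: float = 0.10
    max_drift_score: float = 0.20

def fired_triggers(metrics: dict, config: TriggerConfig) -> list[str]:
    """Return the performance triggers that have been breached."""
    fired = []
    if metrics.get("false_positive_rate", 0.0) > config.max_false_positive_rate:
        fired.append("false_positive_rate")
    if metrics.get("error_rate", 0.0) > config.max_error_rate:
        fired.append("error_rate")
    if metrics.get("drift_score", 0.0) > config.max_drift_score:
        fired.append("drift_score")
    return fired
```

Applied to the knowledge check below: a false positive rate of 8% against a 5% threshold returns a fired trigger, and the governance framework then requires action rather than continued monitoring.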

Knowledge Check
An AI fraud detection model's false positive rate has gradually increased from 3% to 8% over 6 months. The governance framework defined a 5% threshold. What should happen?
The defined governance threshold (5%) has been exceeded. The governance framework requires action — the retraining trigger has been activated. Continuing to monitor despite exceeding thresholds undermines governance. Adjusting thresholds to avoid action is governance avoidance. Replacing the model may be premature — retraining may resolve the issue.

Governance Approval for Retrained Models

A critical governance concept: retraining creates a new model that requires its own governance review.

A retrained model may behave differently from the original:

- Different training data may introduce new biases

- Different performance characteristics across demographic groups

- Changed behavior for edge cases

- Different error patterns

Governance requirements for retraining:

1. Document the retraining trigger and rationale

2. Apply the same data governance standards to new training data

3. Conduct the same fairness, performance, and robustness testing as the original deployment

4. Compare the retrained model against the current model and the original baseline

5. Go through the appropriate approval process (proportionate to risk level)

6. Update all documentation (model card, technical documentation)

7. Maintain the ability to roll back to the previous model version
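The seven requirements above can be treated as an all-or-nothing deployment gate: a retrained model ships only when every item is satisfied. A minimal sketch (the gate names are hypothetical labels for the list above, not a standard):

```python
# One gate per governance requirement for retrained models (illustrative names).
RETRAINING_GATES = [
    "trigger_documented",
    "data_governance_applied",
    "fairness_performance_robustness_tested",
    "compared_against_current_and_baseline",
    "risk_proportionate_approval_obtained",
    "documentation_updated",
    "rollback_available",
]

def approve_retrained_model(completed: set[str]) -> tuple[bool, list[str]]:
    """Approve deployment only if every gate is satisfied; report what is missing."""
    missing = [gate for gate in RETRAINING_GATES if gate not in completed]
    return (not missing, missing)
```

The point of the gate structure is that partial compliance (say, accuracy comparison without fairness testing) is reported as a blocking gap rather than silently waved through.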

EU AI Act implication: If retraining constitutes a "substantial modification," the system may need to go through a new conformity assessment.

Version Control and Rollback

AI governance requires robust version control:

Model versioning:

- Every deployed model version must be uniquely identified

- Training data, hyperparameters, and configuration for each version must be recorded

- Performance metrics for each version must be documented

Rollback procedures:

- The previous model version must remain available for rapid rollback

- Define rollback triggers (performance degradation, unexpected behavior, incident)

- Rollback procedures must be tested before deployment

- Rollback decision authority must be defined (who can trigger a rollback?)

Documentation chain:

- Each version's documentation must be maintained (not overwritten)

- The transition from one version to another must be documented

- Version comparison reports must be created and reviewed

Knowledge Check
After retraining a high-risk AI system, what MUST the governance team do before deploying the retrained model?
A retrained model is effectively a new model — it must go through comprehensive testing, documentation, and approval. Accuracy comparison alone is insufficient; fairness and robustness must also be validated. Immediate deployment bypasses governance controls. Regulatory notification may be required in some cases but isn't the first step.

AI System Retirement

AI systems eventually need to be decommissioned. Governance for retirement includes:

Retirement criteria:

- System no longer meets performance requirements despite retraining

- Regulatory changes make the system non-compliant

- Business need has changed or ended

- Replacement system is available and proven

- Continued operation creates unacceptable risk

Retirement process:

1. Formal retirement decision with documented rationale

2. Stakeholder communication plan (users, affected individuals, business partners)

3. Data handling: retention, archival, or deletion per policy and regulation

4. Documentation archival: maintain records for regulatory and legal purposes

5. Transition plan: migrate to replacement system or manual process

6. Post-retirement validation: confirm the system is fully decommissioned

Common mistake: Retiring an AI system without retaining documentation. Regulatory inquiries, legal proceedings, or audit requests may require access to historical AI system records long after retirement.
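One way to make the archival requirement hard to skip is to block the retirement decision itself until documentation is archived. A minimal sketch (function and field names are hypothetical):

```python
from datetime import date

def retire_system(system_id: str, rationale: str, docs_archived: bool,
                  retention_until: date) -> dict:
    """Record a formal retirement decision; refuse to proceed without archived docs."""
    if not docs_archived:
        raise RuntimeError(
            "retirement blocked: documentation must be archived first; "
            "regulatory inquiries can arrive long after decommissioning"
        )
    return {
        "system_id": system_id,
        "rationale": rationale,
        "retired_on": date.today().isoformat(),
        "docs_retained_until": retention_until.isoformat(),
    }
```

Encoded this way, the failure in the scenario below (documentation deleted before a regulator asks for it) is caught at decision time rather than discovered six months later.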

Final Check
An organization retires an AI lending model and replaces it with a new system. Six months later, a regulator requests documentation about decisions made by the retired model. The organization cannot comply because all documentation was deleted. What was the governance failure?
Retirement governance must include documentation archival. Regulatory inquiries, legal claims, and audit requests can arise after retirement. The failure was in the retirement process — not archiving documentation that may be needed for compliance purposes.
Day 28 Complete
"Retraining creates a new model requiring full governance review. Maintain version control with rollback capability. Retirement requires documentation archival — regulatory inquiries don't stop when the model does."
Next Lesson
Domain IV Capstone — Building a Monitoring and Incident Runbook