Yesterday we identified AI risks. Today we define when those risks require action, how to treat them, and how to manage what remains. The key concept is risk-proportionate response: match the intensity of your response to the actual level of risk.
Risk thresholds define the boundary between acceptable and unacceptable risk. For AI systems, thresholds must address dimensions that traditional IT risk management doesn't cover:
Performance drift thresholds — At what point does model accuracy degradation trigger action? A 2% drop might be acceptable. A 10% drop requires immediate response. Define these thresholds before deployment, not during an incident.
Bias thresholds — What level of demographic disparity triggers review? A 1% gap might be within tolerance. A 5% gap requires investigation. Align thresholds with regulatory requirements and organizational fairness commitments.
Confidence thresholds — Below what confidence level should the AI defer to human judgment? A medical diagnosis AI at 60% confidence should escalate. A product recommendation at 60% might be acceptable.
Data quality thresholds — What level of data quality degradation triggers retraining? Missing data, distribution changes, and label quality all need defined thresholds.
Latency and availability thresholds — AI-specific SLAs covering model inference latency, API availability, and throughput requirements.
Thresholds must be documented, measurable, and monitored. A threshold without monitoring is useless.
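To make this concrete, here is a minimal sketch of what codified thresholds might look like. The AIThresholds fields, the example values, and the evaluate_thresholds helper are illustrative assumptions, not a reference implementation:

```python
from dataclasses import dataclass

@dataclass
class AIThresholds:
    """Hypothetical pre-deployment threshold definitions for one AI system."""
    max_accuracy_drop: float = 0.02      # fraction of baseline accuracy lost before action
    max_demographic_gap: float = 0.01    # outcome-rate gap across groups before review
    min_confidence: float = 0.60         # below this, defer to human judgment
    max_missing_data_rate: float = 0.05  # missing-feature rate before retraining review
    max_p99_latency_ms: float = 250.0    # inference latency SLA

def evaluate_thresholds(t: AIThresholds, metrics: dict) -> list[str]:
    """Compare live metrics against thresholds; return the triggered actions."""
    actions = []
    if metrics["accuracy_drop"] > t.max_accuracy_drop:
        actions.append("performance: investigate model drift")
    if metrics["demographic_gap"] > t.max_demographic_gap:
        actions.append("fairness: open bias review")
    if metrics["mean_confidence"] < t.min_confidence:
        actions.append("confidence: route predictions to human review")
    if metrics["missing_data_rate"] > t.max_missing_data_rate:
        actions.append("data quality: assess retraining need")
    if metrics["p99_latency_ms"] > t.max_p99_latency_ms:
        actions.append("latency: page on-call, scale inference")
    return actions

# Example: a 4% accuracy drop and slow inference both trigger action.
triggered = evaluate_thresholds(
    AIThresholds(),
    {"accuracy_drop": 0.04, "demographic_gap": 0.005,
     "mean_confidence": 0.72, "missing_data_rate": 0.02,
     "p99_latency_ms": 310.0},
)
for action in triggered:
    print(action)
```

The value of encoding thresholds this way is that each one pairs a measurable metric with a predefined action, so monitoring has something concrete to fire on.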
The four traditional treatment strategies apply to AI with important nuances:
Mitigate — Implement controls to reduce risk. For AI: add human oversight, implement monitoring, improve training data quality, add adversarial testing. Most common treatment for AI risks.
Transfer — Shift risk to a third party. For AI: use a vendor's AI service (transfers some operational risk), purchase AI-specific insurance, or contractually transfer liability. Caution: You can transfer financial risk but not reputational risk or regulatory accountability.
Accept — Acknowledge the risk and continue. For AI: appropriate for low-risk systems where the cost of treatment exceeds the potential impact. Requires documented risk acceptance by a decision-maker with the appropriate level of authority.
Avoid — Eliminate the risk by not pursuing the activity. For AI: don't deploy the AI system, don't use certain data types, or don't automate certain decisions. Appropriate when risk exceeds organizational appetite and cannot be adequately mitigated.
The choice between strategies should be documented with rationale and approved at the appropriate governance level. High-risk AI systems require senior management or board-level risk acceptance.
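That documentation requirement can be made enforceable by modeling the decision as data. The sketch below assumes a simple three-tier governance model; the TreatmentDecision record and the required-approval table are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    MITIGATE = "mitigate"
    TRANSFER = "transfer"
    ACCEPT = "accept"
    AVOID = "avoid"

class ApprovalLevel(Enum):  # ordered: higher value = more senior
    RISK_TEAM = 1
    SENIOR_MANAGEMENT = 2
    BOARD = 3

@dataclass
class TreatmentDecision:
    system: str
    risk_id: str
    strategy: Treatment
    rationale: str
    risk_level: str            # e.g. "low" / "medium" / "high"
    approved_by: ApprovalLevel

# Assumed mapping: higher-risk decisions need more senior sign-off.
REQUIRED_APPROVAL = {"low": ApprovalLevel.RISK_TEAM,
                     "medium": ApprovalLevel.SENIOR_MANAGEMENT,
                     "high": ApprovalLevel.BOARD}

def validate(decision: TreatmentDecision) -> None:
    """Reject decisions approved below the required governance level."""
    required = REQUIRED_APPROVAL[decision.risk_level]
    if decision.approved_by.value < required.value:
        raise ValueError(
            f"{decision.risk_id}: {decision.risk_level}-risk decision "
            f"needs {required.name} approval, got {decision.approved_by.name}")

validate(TreatmentDecision(
    system="loan-scoring-model", risk_id="AI-2024-017",
    strategy=Treatment.MITIGATE,
    rationale="Add human review for declines; monitor demographic gap.",
    risk_level="high", approved_by=ApprovalLevel.BOARD))
```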
After treatment, residual risk remains. Managing it requires:
Documentation — What risk remains after controls are applied? What is the residual likelihood and impact? This is your residual risk statement.
Monitoring — Track residual risk continuously; it can increase over time as models drift, threat landscapes evolve, or regulations change.
Board reporting — Aggregate residual AI risk and report to the board. Use trend analysis to show whether residual risk is stable, increasing, or decreasing. Boards care about trends more than absolute numbers.
Reassessment triggers — Define events that trigger risk reassessment: model retraining, regulatory changes, significant incidents, or organizational changes (merger, new business line).
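Pulled together, a residual risk entry can carry its own statement, score history, and trend. A minimal sketch, assuming a hypothetical ResidualRisk record and a 1-5 likelihood/impact scale:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ResidualRisk:
    """Hypothetical residual risk statement plus a monitored score history."""
    risk_id: str
    statement: str                 # what risk remains after controls
    likelihood: int                # 1 (rare) .. 5 (almost certain)
    impact: int                    # 1 (minor) .. 5 (severe)
    history: list = field(default_factory=list)  # (date, score) snapshots

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

    def snapshot(self, on: date) -> None:
        self.history.append((on, self.score))

    def trend(self) -> str:
        """Board-level summary: the direction matters more than the number."""
        if len(self.history) < 2:
            return "insufficient data"
        first, last = self.history[0][1], self.history[-1][1]
        return ("increasing" if last > first
                else "decreasing" if last < first else "stable")

risk = ResidualRisk(
    risk_id="AI-2024-017",
    statement="Bias may re-emerge between quarterly fairness audits.",
    likelihood=2, impact=4)
risk.snapshot(date(2024, 1, 1))
risk.likelihood = 3          # e.g. drift detected after model retraining
risk.snapshot(date(2024, 4, 1))
print(risk.trend())          # -> "increasing": flag in board reporting
```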
Continuous risk assessment post-deployment — AI risk doesn't end at deployment. Post-deployment monitoring must cover all risk dimensions: performance, fairness, security, and compliance. This is fundamentally different from traditional IT, where risk is relatively stable after deployment.
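A minimal sketch of one pass of that multi-dimension monitoring; the dimension checks, metric names, and thresholds below are placeholders, assuming findings feed the reassessment triggers above:

```python
# Hypothetical per-dimension checks; each returns a list of findings.
def check_performance(m): return ["accuracy drop 4%"] if m["accuracy_drop"] > 0.02 else []
def check_fairness(m):    return ["demographic gap 5%"] if m["demographic_gap"] > 0.01 else []
def check_security(m):    return ["anomalous query volume"] if m["attack_score"] > 0.8 else []
def check_compliance(m):  return ["new regulatory guidance"] if m["regulatory_change"] else []

DIMENSIONS = [("performance", check_performance), ("fairness", check_fairness),
              ("security", check_security), ("compliance", check_compliance)]

def monitoring_cycle(metrics: dict) -> None:
    """One scheduled pass over every risk dimension; findings open reassessments."""
    for name, check in DIMENSIONS:
        for finding in check(metrics):
            print(f"[{name}] {finding} -> open reassessment ticket")

monitoring_cycle({"accuracy_drop": 0.04, "demographic_gap": 0.005,
                  "attack_score": 0.2, "regulatory_change": True})
```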
Every risk treatment decision must align with the organization's risk appetite:
Risk appetite defines how much risk the organization is willing to accept in pursuit of its objectives. For AI, this means balancing innovation potential against risk exposure.
Key alignment questions:
- Does the proposed AI deployment fall within our stated risk appetite?
- If treatment is needed, does the residual risk fall within appetite after treatment?
- Who has authority to accept risk that approaches or exceeds appetite boundaries?
- How do we communicate risk appetite to engineering teams in practical terms?
Practical translation: Convert abstract risk appetite statements into concrete, measurable criteria that engineering teams can apply. "Low risk tolerance for regulatory compliance" translates to "all AI systems in regulated domains require full governance review before deployment."
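One way to operationalize that translation is a simple mapping from appetite statements to machine-checkable deployment gates. The APPETITE_GATES table and the system fields here are invented for illustration:

```python
# Hypothetical translation table: abstract appetite statements mapped to
# concrete gates an engineering team can check before deployment.
APPETITE_GATES = {
    "low tolerance for regulatory compliance risk": [
        ("regulated_domain",
         lambda sys: not sys["regulated"] or sys["governance_review_done"]),
    ],
    "moderate tolerance for performance risk": [
        ("accuracy_floor", lambda sys: sys["accuracy"] >= 0.90),
    ],
}

def deployment_allowed(system: dict) -> bool:
    """Every gate derived from the appetite statements must pass."""
    for statement, gates in APPETITE_GATES.items():
        for name, check in gates:
            if not check(system):
                print(f"blocked by '{statement}' (gate: {name})")
                return False
    return True

print(deployment_allowed({
    "regulated": True, "governance_review_done": False, "accuracy": 0.93}))
# -> blocked by 'low tolerance for regulatory compliance risk' ...
```

The gate names matter less than the pattern: every abstract appetite statement gets at least one check an engineer can run before shipping.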
Risk appetite is set by the board. Risk tolerance (the acceptable variation around appetite) is set by management. Risk thresholds (the operational triggers) are set by the security and risk teams.