Day 12 of 30

The EU AI Act — Provider Obligations for High-Risk AI

⏱ 20 min 📊 Advanced AIGP Certification Prep

Providers of high-risk AI systems face the most extensive obligations under the EU AI Act. The AIGP exam tests your knowledge of each major requirement. Let's go through them article by article.

Article 9 — Risk Management System

Providers must establish a continuous, iterative risk management system throughout the AI system's lifecycle. This includes:

- Identification and analysis of known and reasonably foreseeable risks

- Estimation and evaluation of risks arising from intended use and reasonably foreseeable misuse

- Risk mitigation measures adopted based on assessment results

- Testing to ensure risk management measures are effective

The risk management system must consider risks to health, safety, and fundamental rights. It must be documented and updated throughout the system's lifecycle.

Article 10 — Data and Data Governance

Training, validation, and testing datasets must meet specific quality criteria:

- Relevant, sufficiently representative, and as free of errors as possible

- Appropriate statistical properties for the intended geographic, behavioral, or functional setting

- Bias examination — datasets must be examined for possible biases

- Gaps and shortcomings must be addressed through appropriate data governance measures

For high-risk AI using personal data: data governance practices must ensure compliance with data protection law, including purpose limitation and data minimization.
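The Act does not prescribe any particular tooling for the bias examination above, but one common starting point is a representativeness check. The sketch below is purely illustrative: `representation_gap`, the loan-application records, and the reference proportions are all hypothetical, not anything the Act names.

```python
from collections import Counter

def representation_gap(records, attribute, reference):
    """Compare each group's share in the dataset against a reference
    distribution; returns dataset proportion minus expected proportion.
    `reference` maps group -> expected proportion (sums to 1.0)."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - expected
        for group, expected in reference.items()
    }

# Hypothetical loan-application dataset: 80% urban, 20% rural applicants,
# checked against a reference population of 60% urban, 40% rural.
records = [{"region": "urban"}] * 80 + [{"region": "rural"}] * 20
gaps = representation_gap(records, "region", {"urban": 0.6, "rural": 0.4})
print(gaps)  # rural applicants under-represented relative to the reference
```

A negative gap flags under-representation, the kind of "gap or shortcoming" that would then need to be addressed through data governance measures.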

Articles 11–13 — Documentation, Logging, and Transparency

Article 11 (Technical Documentation) — Must be drawn up before market placement and kept up to date. Contents specified in Annex IV:

- General description of the AI system

- Detailed description of development process

- Monitoring, functioning, and control information

- Risk management documentation

- Changes throughout the lifecycle

Article 12 (Record-Keeping) — High-risk AI systems must have automatic logging capabilities. Logs must record events relevant to identifying risks, enable post-market monitoring, and facilitate traceability.
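Article 12 does not mandate a log format, but the traceability goal can be illustrated with a minimal sketch of automatic event logging: timestamped, append-only records of each inference and human action. The `AuditLogger` class and field names are assumptions for illustration, not requirements from the Act.

```python
import json
import time
import uuid

class AuditLogger:
    """Minimal append-only event log for an AI system (illustrative only;
    the EU AI Act does not prescribe a specific log format)."""
    def __init__(self):
        self.events = []

    def log(self, event_type, **details):
        record = {
            "event_id": str(uuid.uuid4()),  # unique, supports traceability
            "timestamp": time.time(),       # when the event occurred
            "event_type": event_type,       # e.g. "inference", "override"
            "details": details,
        }
        self.events.append(record)
        return record

    def export(self):
        # Serialized logs support post-market monitoring and audits
        return "\n".join(json.dumps(e) for e in self.events)

logger = AuditLogger()
logger.log("inference", input_ref="app-123", output="approve", model_version="1.4.2")
logger.log("override", input_ref="app-123", reviewer="analyst-7", new_output="refer")
print(logger.export())
```

Note how each record ties an event to an input, an output, and a model version, which is what makes post-hoc risk identification possible.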

Article 13 (Transparency) — Providers must ensure high-risk AI systems are designed to be sufficiently transparent to enable deployers to interpret output and use it appropriately. Instructions for use must include the provider's identity, system characteristics, performance metrics, known limitations, and human oversight measures.

Knowledge Check
Under the EU AI Act, when must technical documentation for a high-risk AI system be prepared?
Article 11 requires technical documentation to be drawn up BEFORE market placement and kept continuously updated. It is not a post-market or on-demand requirement; the documentation must exist from the outset.

Articles 14–15 — Human Oversight and Robustness

Article 14 (Human Oversight) — High-risk AI systems must be designed to allow effective oversight by natural persons during use, including the ability to:

- Understand the system's capabilities and limitations

- Monitor the system's operation

- Interpret the system's output correctly

- Decide not to use the system or override/reverse its output

- Intervene or stop the system ("stop button")
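These capabilities are design requirements rather than code, but the override and stop functions can be sketched with a hypothetical wrapper that keeps a human decision point between the model's output and the final outcome. Everything here (`OversightWrapper`, the credit model, the reviewer rule) is an invented example, not language from the Act.

```python
class OversightWrapper:
    """Hypothetical wrapper keeping a human between model output and the
    final decision (illustrates Article 14; not prescribed by it)."""
    def __init__(self, model):
        self.model = model
        self.stopped = False

    def stop(self):
        # "Stop button": halt the system entirely
        self.stopped = True

    def decide(self, case, human_review):
        if self.stopped:
            raise RuntimeError("system stopped by human overseer")
        suggestion = self.model(case)
        # The human may accept, override, or reverse the suggestion
        return human_review(case, suggestion)

def credit_model(case):
    return "deny" if case["score"] < 600 else "approve"

def reviewer(case, suggestion):
    # Override: a human routes borderline denials to manual review
    if suggestion == "deny" and case["score"] >= 580:
        return "refer"
    return suggestion

system = OversightWrapper(credit_model)
print(system.decide({"score": 590}, reviewer))  # prints "refer" (overridden)
```

The point of the design is that the model only ever produces a suggestion; the binding decision passes through a person who can accept, reverse, or stop the system.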

Article 15 (Accuracy, Robustness, Cybersecurity) — High-risk AI systems must achieve appropriate levels of:

- Accuracy — consistent with the intended purpose

- Robustness — resilient to errors, faults, and attempted manipulations

- Cybersecurity — protected against unauthorized access and adversarial attacks

Knowledge Check
Article 14 of the EU AI Act requires human oversight of high-risk AI. Is the ability to modify the underlying algorithm a required oversight capability?
No. Article 14 requires understanding, monitoring, interpretation, override, and intervention capabilities. It does NOT require the ability to modify the underlying algorithm; that would be a development activity, not an oversight function. Human oversight means meaningful control over the system's operation, not real-time engineering.

Conformity Assessment

Before placing a high-risk AI system on the market, providers must undergo a conformity assessment to demonstrate compliance with all requirements.

Two assessment procedures:

1. Internal conformity assessment (most high-risk AI) — The provider self-assesses compliance. Requires a quality management system and technical documentation review.

2. Third-party conformity assessment (biometric AI under Annex III, point 1) — Requires assessment by a notified body (independent third-party assessor) where the provider has not fully applied harmonised standards. High-risk AI embedded in products already subject to third-party assessment under sectoral product law follows those sectoral procedures.

After successful conformity assessment:

- Provider issues an EU declaration of conformity

- Provider affixes the CE marking to the AI system

- System is registered in the EU database for high-risk AI systems

Final Check
A company develops a high-risk AI system for credit scoring. Which conformity assessment procedure applies?
Credit scoring AI, while high-risk under Annex III, falls under the internal conformity assessment procedure. Third-party assessment by a notified body is required only for biometric systems under Annex III, point 1 (and only where harmonised standards are not fully applied). All other Annex III high-risk AI systems, including credit scoring, use internal conformity assessment.
🎯
Day 12 Complete
"Providers bear the heaviest EU AI Act burden: risk management, data governance, documentation, logging, transparency, human oversight, and conformity assessment. Know Articles 9–15 — they define what 'compliance' means for high-risk AI."
Next Lesson
The EU AI Act — Deployer, Importer, and Distributor Obligations