Welcome to Domain 3 — the largest domain at 38%. Important context: this domain is tested at management level. You evaluate and approve architectural decisions; you don't make them yourself. Know enough to ask the right questions and assess whether proposed architectures are secure.
AI systems follow several common architecture patterns, each with different security implications:
Centralized model serving — All models run on centralized infrastructure (cloud or on-prem). Easier to secure, monitor, and govern. Single point of control. Risk: single point of failure and potential bottleneck.
Distributed model serving — Models deployed across multiple locations: edge devices, regional servers, client applications. Harder to secure and monitor. More resilient. Risk: expanded attack surface, harder to update and patch.
API-based model consumption — Your application calls a vendor's model API (e.g., OpenAI, Anthropic). Vendor manages infrastructure. Risk: data sent to third party, dependency on vendor availability, limited visibility into model behavior.
Embedded models — Models compiled into applications or firmware. Common for IoT and mobile. Risk: models are difficult to update, may be reverse-engineered from the device.
As security manager, understand which pattern your organization uses and the security tradeoffs of each. Don't dictate architecture — assess and advise.
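For the API-based consumption pattern, the main risk named above is data sent to a third party. A minimal sketch of one mitigation, a redaction gateway that strips classified data before a prompt leaves the organization; the patterns and the `send` callable are illustrative assumptions, not any vendor's actual SDK:

```python
import re

# Hypothetical patterns for data that must not leave the organization.
# In practice this list comes from your data classification policy.
REDACTION_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US SSN-like number
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email address
]

def redact(prompt: str) -> str:
    """Strip classified data before a prompt is sent to a third-party model API."""
    for pattern, placeholder in REDACTION_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

def call_vendor_model(prompt: str, send) -> str:
    """Gateway for API-based consumption: redact, then delegate to the vendor call.
    `send` stands in for the actual vendor API call (an assumption here)."""
    return send(redact(prompt))
```

As a governance control, routing all vendor calls through one gateway like this also gives you a single place to log what was sent, which helps with the limited-visibility risk.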
Apply the defense-in-depth principle to AI systems across five layers:
Network layer — Segment AI infrastructure. Model training environments should be isolated from production inference. API endpoints need rate limiting, authentication, and DDoS protection.
Application layer — Secure the applications that interact with AI models. Input validation, output filtering, authentication, and authorization. Prevent direct access to model APIs from untrusted sources.
Model layer — Model-specific controls: adversarial robustness testing, model access controls (who can query, retrain, or export the model), and model integrity verification.
Data layer — Protect training data, model weights, and inference data. Encryption at rest and in transit. Access controls based on data classification. Data lineage tracking.
Output layer — Controls on what the model produces: content filtering for generative AI, confidence thresholds for decision-making AI, and output logging for audit.
Each layer provides independent protection. An attacker must defeat multiple layers to compromise the AI system.
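The layered structure can be sketched as a chain of independent checks, where any single failing layer denies the request. This is a minimal illustration under assumed checks (the specific rate-limiting, validation, and filtering rules are placeholders, not a real control set):

```python
# Defense-in-depth sketch for an inference endpoint: each layer is an
# independent check, and a request must clear every one of them.

def network_layer(req):      # network controls: only trusted sources reach the API
    return req.get("source") in {"internal-gateway"}

def application_layer(req):  # input validation before the model sees the prompt
    return isinstance(req.get("prompt"), str) and len(req["prompt"]) < 4096

def model_layer(req):        # model access control: is this caller allowed to query?
    return "query" in req.get("permissions", set())

def output_layer(resp):      # content filtering on what the model produces
    return "BLOCKED_TERM" not in resp

LAYERS = [network_layer, application_layer, model_layer]

def serve(req, model):
    """An attacker must defeat every layer; any single failing layer denies the request."""
    if not all(layer(req) for layer in LAYERS):
        return "denied"
    resp = model(req["prompt"])
    return resp if output_layer(resp) else "filtered"
```

Note that the output layer runs after inference: even a fully authorized request can still have its response filtered, which is what makes the layers independent.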
Most AI deployments involve cloud services. Understand the shared responsibility model for AI:
Cloud provider responsibility — Physical infrastructure, hypervisor security, base network security. For managed AI services (SageMaker, Vertex AI, Azure ML): platform security, GPU infrastructure, and service availability.
Your responsibility — Model security, data security, access controls, monitoring, compliance, and governance. Regardless of what the cloud provider manages, you remain accountable for the AI system's behavior and compliance.
Key questions for cloud AI deployments:
- Where is data processed and stored? (Data residency)
- Is your data used to improve the provider's models? (Data usage)
- What access controls exist between tenants? (Isolation)
- How are model updates managed? (Change control)
- What logging and monitoring are available? (Visibility)
Don't assume cloud managed = cloud secured. The shared responsibility model means you still own significant security obligations.
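The five questions above lend themselves to a simple due-diligence checklist that flags what a vendor assessment has not yet answered. The field names below are illustrative assumptions; a real assessment would map to your organization's vendor questionnaire:

```python
# The key cloud-AI questions, encoded as a due-diligence checklist.
CLOUD_AI_QUESTIONS = {
    "data_residency": "Where is data processed and stored?",
    "data_usage": "Is your data used to improve the provider's models?",
    "tenant_isolation": "What access controls exist between tenants?",
    "change_control": "How are model updates managed?",
    "visibility": "What logging and monitoring are available?",
}

def open_items(answers: dict) -> list:
    """Return the questions a vendor assessment has not yet answered."""
    return [q for key, q in CLOUD_AI_QUESTIONS.items() if not answers.get(key)]
```

An assessment with open items is incomplete, regardless of how much the provider manages, which is the point of the shared responsibility model.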
AI deployed on edge devices and IoT introduces unique challenges:
Physical access — Edge devices may be in physically accessible locations. Attackers can physically extract models, reverse-engineer them, or tamper with inputs.
Limited resources — Edge devices may not support full encryption, monitoring, or real-time updates. Security controls must be adapted for constrained environments.
Update challenges — Deploying model updates to thousands of edge devices requires secure update mechanisms, rollback capability, and verification that updates were applied.
Data collection — Edge AI often collects sensitive data (cameras, sensors, biometrics). Data governance must address collection, local processing vs. cloud transmission, and retention.
Monitoring gaps — Edge devices may have intermittent connectivity. Security monitoring must account for offline periods and delayed alert delivery.
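The secure-update requirement above can be sketched as digest verification with rollback: the device accepts a model update only if it matches a digest published over a trusted channel, and otherwise keeps the current model. This is a simplified assumption; a production system would use asymmetric signatures rather than a bare hash:

```python
import hashlib

def publish_digest(payload: bytes) -> str:
    """Digest the update server publishes over a trusted channel (sketch)."""
    return hashlib.sha256(payload).hexdigest()

def apply_update(current_model: bytes, update: bytes, published_digest: str) -> bytes:
    """Verify an update before applying it; retain the current model on failure.

    Keeping the old model on rejection is the rollback capability the
    edge-update requirement calls for.
    """
    if hashlib.sha256(update).hexdigest() != published_digest:
        return current_model  # reject tampered or corrupted update
    return update
```

The same digest comparison also gives you verification that an update was applied: a fleet report of each device's current model digest shows which devices are still on the old version.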
For security managers, architecture review is a governance control, not a technical activity:
When to require review — New AI deployments, significant changes to existing AI architectures, new vendor integrations, and migration between deployment models.
What to assess — Alignment with security architecture standards, defense-in-depth implementation, data flow mapping, threat modeling results, and compliance with regulatory requirements.
Who reviews — Architecture reviews should include AI security architects, data protection officers, and compliance representatives. The security manager ensures the review happens and the findings are addressed.
How to enforce — Architecture review should be a gate in the deployment approval process. No AI system goes to production without an approved architecture review.
The exam will test whether you understand architecture well enough to evaluate proposals and identify gaps — not whether you can design architectures yourself.