Day 12 of 18

AI Security Architecture and Design

⏱ 18 min 📊 Advanced ISACA AAISM Certification Prep

Welcome to Domain 3 — the largest domain at 38%. Important context: this domain is tested at management level. You evaluate and approve architectural decisions; you don't make them yourself. Know enough to ask the right questions and assess whether proposed architectures are secure.

Architecture patterns for AI systems

AI systems follow several common architecture patterns, each with different security implications:

Centralized model serving — All models run on centralized infrastructure (cloud or on-prem). Easier to secure, monitor, and govern. Single point of control. Risk: single point of failure and potential bottleneck.

Distributed model serving — Models deployed across multiple locations: edge devices, regional servers, client applications. Harder to secure and monitor. More resilient. Risk: expanded attack surface, harder to update and patch.

API-based model consumption — Your application calls a vendor's model API (e.g., OpenAI, Anthropic). Vendor manages infrastructure. Risk: data sent to third party, dependency on vendor availability, limited visibility into model behavior.

Embedded models — Models compiled into applications or firmware. Common for IoT and mobile. Risk: models are difficult to update, may be reverse-engineered from the device.

As security manager, understand which pattern your organization uses and the security tradeoffs of each. Don't dictate architecture — assess and advise.
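The pattern-versus-risk tradeoffs above can be summarized in a small reference structure, the kind an architecture review checklist might draw on. This is a minimal sketch; the field names and wording are illustrative, not an ISACA-defined taxonomy.

```python
# Serving patterns and their primary security tradeoffs (illustrative).
PATTERNS = {
    "centralized": {"control": "single point of control",
                    "key_risk": "single point of failure / bottleneck"},
    "distributed": {"control": "per-location, harder to monitor",
                    "key_risk": "expanded attack surface, patching lag"},
    "api_based":   {"control": "vendor-managed infrastructure",
                    "key_risk": "data sent to third party, limited visibility"},
    "embedded":    {"control": "baked into application or firmware",
                    "key_risk": "hard to update, reverse engineering"},
}

def key_risk(pattern):
    """Look up the primary risk a reviewer should probe for a given pattern."""
    return PATTERNS[pattern]["key_risk"]
```

A reviewer does not need to memorize the table; the point is that each pattern maps to a distinct first question to ask.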

Defense-in-depth for AI

Apply the defense-in-depth principle to AI systems across five layers:

Network layer — Segment AI infrastructure. Model training environments should be isolated from production inference. API endpoints need rate limiting, authentication, and DDoS protection.

Application layer — Secure the applications that interact with AI models. Input validation, output filtering, authentication, and authorization. Prevent direct access to model APIs from untrusted sources.

Model layer — Model-specific controls: adversarial robustness testing, model access controls (who can query, retrain, or export the model), and model integrity verification.

Data layer — Protect training data, model weights, and inference data. Encryption at rest and in transit. Access controls based on data classification. Data lineage tracking.

Output layer — Controls on what the model produces: content filtering for generative AI, confidence thresholds for decision-making AI, and output logging for audit.

Each layer provides independent protection. An attacker must defeat multiple layers to compromise the AI system.
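The layered model can be sketched as a request pipeline in which each layer gets an independent chance to reject. This is a hypothetical sketch: the constants, pattern, and function names (`serve_request`, `MODEL_ALLOWLIST`-style checks) are illustrative, and a real deployment would implement each control in dedicated infrastructure rather than one function.

```python
import re

MAX_REQUESTS_PER_MINUTE = 60                             # network layer: rate limiting
PROMPT_PATTERN = re.compile(r"^[\w\s.,?!'-]{1,2000}$")   # application layer: input validation
CONFIDENCE_THRESHOLD = 0.8                               # output layer: decision threshold
BLOCKED_TERMS = {"ssn", "password"}                      # output layer: content filtering

def serve_request(user, prompt, request_count, model):
    # Network layer: reject clients exceeding the rate limit.
    if request_count > MAX_REQUESTS_PER_MINUTE:
        return {"status": "rejected", "reason": "rate limit"}
    # Application layer: validate input before it reaches the model.
    if not PROMPT_PATTERN.match(prompt):
        return {"status": "rejected", "reason": "invalid input"}
    # Model layer: enforce access control on who may query.
    if not user.get("can_query"):
        return {"status": "rejected", "reason": "not authorized"}
    prediction, confidence = model(prompt)
    # Output layer: confidence threshold, then content filtering.
    if confidence < CONFIDENCE_THRESHOLD:
        return {"status": "deferred", "reason": "low confidence"}
    if any(term in prediction.lower() for term in BLOCKED_TERMS):
        return {"status": "filtered", "reason": "blocked content"}
    return {"status": "ok", "prediction": prediction}
```

Note that a request can fail at any layer independently: defeating the rate limiter does nothing against the authorization check or the output filter.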

Defense-in-depth adapted for AI. Each layer provides independent protection across network, application, model, data, and output.
Knowledge Check
An architecture review reveals that the production AI model serving infrastructure shares the same network segment as the model training environment. What is the PRIMARY security concern?
**Network segmentation.** Shared network segments mean a breach in the less-controlled training environment provides a path to the higher-value production environment. This is a fundamental defense-in-depth violation — isolation between environments is essential.

Cloud AI shared responsibility

Most AI deployments involve cloud services. Understand the shared responsibility model for AI:

Cloud provider responsibility — Physical infrastructure, hypervisor security, base network security. For managed AI services (SageMaker, Vertex AI, Azure ML): platform security, GPU infrastructure, and service availability.

Your responsibility — Model security, data security, access controls, monitoring, compliance, and governance. Regardless of what the cloud provider manages, you remain accountable for the AI system's behavior and compliance.

Key questions for cloud AI deployments:

- Where is data processed and stored? (Data residency)

- Is your data used to improve the provider's models? (Data usage)

- What access controls exist between tenants? (Isolation)

- How are model updates managed? (Change control)

- What logging and monitoring are available? (Visibility)

Don't assume cloud managed = cloud secured. The shared responsibility model means you still own significant security obligations.
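The five questions above lend themselves to a pre-deployment gate. A minimal sketch, with hypothetical field names mirroring the questions; the point is that an unanswered question blocks sign-off, not that these keys are a standard schema.

```python
# Hypothetical cloud AI deployment checklist keyed to the five questions.
REQUIRED_ANSWERS = {
    "data_residency":   "Where is data processed and stored?",
    "data_usage":       "Is our data used to improve the provider's models?",
    "tenant_isolation": "What access controls exist between tenants?",
    "change_control":   "How are model updates managed?",
    "visibility":       "What logging and monitoring are available?",
}

def review_cloud_ai(answers):
    """Return the unanswered questions; an empty list means ready for sign-off."""
    return [question for key, question in REQUIRED_ANSWERS.items()
            if not answers.get(key)]
```

In practice the answers come from vendor documentation, contracts, and due-diligence questionnaires; the checklist only makes the gaps visible.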

Edge AI and IoT security

AI deployed on edge devices and IoT introduces unique challenges:

Physical access — Edge devices may be in physically accessible locations. Attackers can physically extract models, reverse-engineer them, or tamper with inputs.

Limited resources — Edge devices may not support full encryption, monitoring, or real-time updates. Security controls must be adapted for constrained environments.

Update challenges — Deploying model updates to thousands of edge devices requires secure update mechanisms, rollback capability, and verification that updates were applied.

Data collection — Edge AI often collects sensitive data (cameras, sensors, biometrics). Data governance must address collection, local processing vs. cloud transmission, and retention.

Monitoring gaps — Edge devices may have intermittent connectivity. Security monitoring must account for offline periods and delayed alert delivery.
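The update challenge above, secure delivery plus verification, can be sketched with an integrity check and a rollback guard. This sketch uses an HMAC over the model artifact for brevity; a real fleet would use asymmetric signatures (e.g., Ed25519) so devices never hold a signing key. The key and version scheme here are purely illustrative.

```python
import hashlib
import hmac

DEVICE_KEY = b"provisioned-at-manufacture"  # hypothetical per-device secret

def sign_update(model_bytes, key=DEVICE_KEY):
    """Compute an authentication tag over the model artifact."""
    return hmac.new(key, model_bytes, hashlib.sha256).hexdigest()

def apply_update(model_bytes, signature, current_version, new_version, key=DEVICE_KEY):
    # Verify integrity and authenticity before installing anything.
    if not hmac.compare_digest(sign_update(model_bytes, key), signature):
        return "rejected: bad signature"
    # Refuse version rollbacks to block downgrade attacks.
    if new_version <= current_version:
        return "rejected: version rollback"
    return f"installed v{new_version}"
```

Rollback *capability* (reverting a bad update on command) is separate from rollback *protection* (refusing an attacker-supplied older model); a secure update mechanism needs both.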

Knowledge Check
Your organization is deploying AI-powered cameras for quality inspection in manufacturing facilities. The cameras process images locally and send only defect alerts to the cloud. What is the MOST important security architecture consideration?
**Physical security of edge AI.** In a manufacturing environment, devices are physically accessible. The primary concern is model extraction (IP theft) and model tampering (causing the system to miss defects). Transit encryption and availability are important but secondary to the physical access risk.

Architecture review as governance control

For security managers, architecture review is a governance control, not a technical activity:

When to require review — New AI deployments, significant changes to existing AI architectures, new vendor integrations, and migration between deployment models.

What to assess — Alignment with security architecture standards, defense-in-depth implementation, data flow mapping, threat modeling results, and compliance with regulatory requirements.

Who reviews — Architecture reviews should include AI security architects, data protection officers, and compliance representatives. The security manager ensures the review happens and the findings are addressed.

How to enforce — Architecture review should be a gate in the deployment approval process. No AI system goes to production without an approved architecture review.

The exam will test whether you understand architecture well enough to evaluate proposals and identify gaps — not whether you can design architectures yourself.
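The review-as-gate idea can be expressed as a simple check: deployment is approved only when every required review artifact is present. The artifact names below are illustrative, not an ISACA-prescribed list; each organization defines its own required set.

```python
# Hypothetical required artifacts for an AI architecture review gate.
REQUIRED_ARTIFACTS = {
    "data_flow_map",
    "threat_model",
    "defense_in_depth_controls",
    "monitoring_plan",        # e.g., drift and adversarial-input monitoring
    "compliance_assessment",
}

def deployment_gate(submitted_artifacts):
    """Approve only when every required review artifact has been submitted."""
    missing = REQUIRED_ARTIFACTS - set(submitted_artifacts)
    if missing:
        return {"approved": False, "missing": sorted(missing)}
    return {"approved": True, "missing": []}
```

Note how this mirrors the Final Check below: a missing monitoring plan does not reject the deployment outright, it returns the specific gap the team must close before approval.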

Final Check
An engineering team proposes deploying a new AI model to production. They present a thorough technical architecture but have not addressed how the model will be monitored for drift or adversarial inputs post-deployment. How should the security manager respond?
**Governance gate.** Monitoring is a required security control, not an optional improvement. The architecture review should not pass without it. However, the response is to require the gap be addressed — not to reject the entire deployment. Work with the team to complete the architecture, then approve.
🏗️
Day 12 Complete
"Domain 3 is tested at management level. You evaluate and approve architectures — you don't design them. Know enough to ask the right questions."
Next Lesson
AI Lifecycle: Model Selection, Training, Validation