Beyond privacy and anti-discrimination, a web of sector-specific laws governs AI in consumer-facing contexts. The AIGP exam tests your ability to identify which laws apply to which AI use cases.
The Federal Trade Commission enforces Section 5 of the FTC Act, prohibiting "unfair or deceptive acts or practices." The FTC has aggressively applied this to AI:
Deceptive practices involving AI:
- Claiming AI capabilities that don't exist ("AI-powered" product that isn't actually AI)
- Failing to disclose AI-generated content (deepfakes, synthetic media)
- Misleading consumers about how their data is used for AI training
Unfair practices involving AI:
- Using AI in ways that cause substantial consumer injury
- Deploying biased AI without adequate testing
- Collecting excessive data for AI training without consumer knowledge
FTC enforcement priorities for AI:
- AI claims must be truthful and substantiated
- Data used for AI must be obtained fairly
- AI decisions affecting consumers must be transparent
- Companies must assess and mitigate AI bias risks
The FTC has ordered companies to delete AI models trained on improperly collected data, a remedy sometimes called algorithmic disgorgement and a significant enforcement tool.
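Because a model built on improperly collected data can be ordered deleted, tracking the consent basis of each training record matters in practice. The sketch below is a hypothetical provenance filter, not an FTC-prescribed procedure; the record fields are invented for illustration.

```python
# Hypothetical provenance filter: exclude training records that lack a
# valid consent basis, so a model is not built on improperly collected
# data (the scenario behind FTC model-deletion orders).

records = [
    {"id": 1, "consent": True,  "text": "sample record"},
    {"id": 2, "consent": False, "text": "sample record"},
]

# Only consented records are eligible for AI training.
trainable = [r for r in records if r["consent"]]
print([r["id"] for r in trainable])  # [1]
```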
The Fair Credit Reporting Act (FCRA) governs consumer reports: information used to evaluate consumers for credit, employment, insurance, or housing. AI intersects with the FCRA when:
- AI systems use consumer data to make eligibility decisions
- AI-generated scores or assessments function as consumer reports
- Third-party AI vendors provide scoring services that qualify as consumer reporting agencies
FCRA requirements applicable to AI:
- Accuracy — Reasonable procedures to ensure maximum possible accuracy
- Adverse action notices — Must inform consumers when AI-based decisions negatively affect them
- Dispute rights — Consumers can dispute inaccurate information
- Permissible purpose — Can only obtain consumer reports for specified purposes
- Consent — Written consumer consent required for employment-related reports
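The adverse-action requirement above can be modeled as a simple post-decision check: when a consumer report feeds an AI decision and the outcome is negative, a notice is due. The `Decision` class and `needs_adverse_action_notice` function below are illustrative names, not statutory text or any real library's API.

```python
from dataclasses import dataclass

# Illustrative model of an FCRA adverse-action trigger.
# Field and function names are hypothetical, not from the statute.

@dataclass
class Decision:
    used_consumer_report: bool  # did a consumer report feed the AI model?
    outcome: str                # "approved" or "denied"

def needs_adverse_action_notice(d: Decision) -> bool:
    """Notice is required when a consumer report contributes to a
    negative decision (credit, employment, insurance, housing)."""
    return d.used_consumer_report and d.outcome == "denied"

print(needs_adverse_action_notice(Decision(True, "denied")))   # True
print(needs_adverse_action_notice(Decision(False, "denied")))  # False
```

A real compliance workflow would also capture the content of the notice (the reporting agency's identity, dispute rights); this sketch covers only the trigger condition.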
The Health Insurance Portability and Accountability Act (HIPAA) governs protected health information (PHI). AI in healthcare raises specific HIPAA concerns:
Training data — AI models trained on PHI must comply with HIPAA privacy and security rules. De-identification to HIPAA standards (Safe Harbor or Expert Determination) may be required.
Business associates — AI vendors processing PHI are business associates requiring a Business Associate Agreement (BAA).
Minimum necessary — HIPAA's minimum necessary standard applies: only the minimum amount of PHI needed for the AI purpose should be used or disclosed.
Patient rights — Patients have the right to access their health information, including information generated by AI diagnostic tools.
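One way to operationalize de-identification before AI training is the Safe Harbor method of removing HIPAA's enumerated identifiers. The sketch below strips a subset of those identifier categories from a record; the field names and the `deidentify` helper are hypothetical, and a real pipeline must cover all 18 categories in 45 CFR 164.514(b)(2) plus the "no actual knowledge" of re-identifiability condition.

```python
# Hypothetical sketch: strip a subset of HIPAA Safe Harbor identifiers
# before using records as AI training data. Real de-identification must
# address all 18 identifier categories (45 CFR 164.514(b)(2)).

SAFE_HARBOR_FIELDS = {
    "name", "street_address", "email", "phone", "ssn",
    "medical_record_number", "ip_address", "photo",
}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with listed direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in SAFE_HARBOR_FIELDS}

patient = {"name": "Jane Doe", "ssn": "000-00-0000",
           "diagnosis": "J45.909", "age": 42}
print(deidentify(patient))  # {'diagnosis': 'J45.909', 'age': 42}
```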
The Children's Online Privacy Protection Act (COPPA) applies when AI systems collect or use data from children under 13:
- Verifiable parental consent required before collecting children's personal data
- AI systems that interact with children (chatbots, educational AI, games) must comply
- AI voice assistants and smart toys collecting children's data are subject to COPPA
- The FTC has increased enforcement against AI systems targeting or interacting with children
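The COPPA rules above reduce to a gating condition that an AI system interacting with children must enforce before collecting personal data. The function below is a minimal illustrative sketch, assuming self-reported age and a recorded consent flag; verifying that consent is actually "verifiable" under the COPPA Rule is a separate, harder problem.

```python
# Hypothetical COPPA gating sketch: block personal-data collection from
# users under 13 until verifiable parental consent has been recorded.

def may_collect_data(age: int, parental_consent: bool) -> bool:
    """COPPA: users under 13 require verifiable parental consent
    before their personal data may be collected."""
    if age >= 13:
        return True
    return parental_consent

print(may_collect_data(15, False))  # True  (COPPA does not apply)
print(may_collect_data(10, False))  # False (consent required first)
print(may_collect_data(10, True))   # True  (consent on file)
```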