This document is for informational purposes only and does not constitute legal advice. Regulations are evolving rapidly. Consult with regulatory counsel for your specific situation and jurisdiction.
Regulatory Landscape
A comprehensive overview of US and international regulations governing AI in mental health applications. The regulatory environment is evolving rapidly—stay current.
US State Regulations
Several US states have enacted or proposed legislation specifically addressing AI in mental health contexts. This represents the most active area of new regulation.
Illinois: Leading the Way
Artificial Intelligence and Mental Health Services Act (Proposed)
Status: Under consideration as of 2025
Key Provisions:
- AI systems cannot "provide therapy" or "act as a therapist"
- Must disclose AI nature to users before interaction
- Requires licensed professional oversight for clinical applications
- Mandates crisis detection and escalation capabilities (see the sketch after this list)
- Annual safety audits required
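To make the disclosure and crisis-escalation provisions concrete, here is a minimal Python sketch of a chat turn handler that discloses the AI's nature before the first substantive reply and routes crisis signals to a human instead of generating an automated response. Everything here is illustrative: the keyword screen, the `notify_on_call_clinician` hook, and `generate_ai_reply` are hypothetical placeholders, not a compliant implementation.

```python
from dataclasses import dataclass, field

AI_DISCLOSURE = (
    "You are interacting with an automated AI program, not a licensed "
    "therapist. It cannot provide therapy or handle emergencies."
)
CRISIS_RESOURCES = (
    "If you are in crisis, call or text 988 (US) or contact local emergency services."
)
# Naive keyword screen used only as a placeholder; a real product would need a
# validated crisis classifier with clinical review.
CRISIS_KEYWORDS = ("suicide", "kill myself", "end my life", "hurt myself")


def detect_crisis_signals(message: str) -> bool:
    lowered = message.lower()
    return any(keyword in lowered for keyword in CRISIS_KEYWORDS)


def notify_on_call_clinician(user_id: str, message: str) -> None:
    # Placeholder escalation hook: page the on-call licensed professional.
    print(f"Escalating session for {user_id} to on-call clinician.")


def generate_ai_reply(message: str) -> str:
    # Placeholder for the model call.
    return "Thanks for sharing. Here is some general wellness support..."


@dataclass
class Session:
    user_id: str
    disclosed: bool = False
    transcript: list[str] = field(default_factory=list)


def handle_message(session: Session, message: str) -> str:
    session.transcript.append(message)

    # Disclose the AI's nature before the first substantive exchange.
    preamble = "" if session.disclosed else AI_DISCLOSURE + "\n\n"
    session.disclosed = True

    # Escalate to a human instead of replying when crisis signals appear.
    if detect_crisis_signals(message):
        notify_on_call_clinician(session.user_id, message)
        return preamble + CRISIS_RESOURCES

    return preamble + generate_ai_reply(message)
```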
Nevada: AB 406
Assembly Bill 406
Status: Enacted 2024
Key Provisions:
- AI cannot provide "counseling or therapy" without licensed professional supervision
- Clear disclosure requirements when AI is involved in health communications
- Penalties for violations of up to $10,000 per incident
California: Privacy Focus
California Consumer Privacy Act (CCPA) + Health Data
Status: Enacted, ongoing amendments
Key Provisions:
- Mental health data classified as "sensitive personal information"
- Opt-in consent required for collection
- Right to deletion, portability, and knowledge of data use
- Private right of action for violations
State-by-State Summary
| State | AI-Specific Law | Relevant Privacy Law | Key Focus |
|---|---|---|---|
| Illinois | Proposed | BIPA | AI therapy restrictions |
| Nevada | AB 406 | NRS 603A | Licensed oversight |
| California | Proposed | CCPA/CPRA | Data privacy, consent |
| Colorado | AI Act 2024 | CPA | Algorithmic discrimination |
| New York | Proposed | SHIELD Act | Transparency, audits |
| Texas | None | TCDPA (limited) | N/A |
US Federal Regulations
FDA: Software as a Medical Device (SaMD)
When Does FDA Regulate Mental Health Software?
FDA asserts jurisdiction over software that is intended to diagnose, cure, mitigate, treat, or prevent disease. This potentially includes mental health AI that:
- Screens or diagnoses mental health conditions
- Provides treatment recommendations
- Monitors symptoms to inform clinical decisions
Key FDA Guidance Documents:
- Clinical Decision Support Software Guidance (2022)
- Digital Health Software Pre-certification Program
- AI/ML-Based Software as a Medical Device Action Plan (2021)
FDA currently exercises "enforcement discretion" for many low-risk digital health products, meaning it does not actively regulate most "wellness" apps. However, claims of clinical efficacy or treatment can trigger oversight.
FTC: Consumer Protection
Section 5: Unfair or Deceptive Practices
The FTC can bring enforcement actions against mental health AI providers for:
- Deceptive efficacy claims
- Unfair data practices
- Failure to disclose material information (like AI nature)
- Privacy violations
Recent FTC Actions:
- BetterHelp settlement (2023): $7.8M for data sharing violations
- Multiple telehealth enforcement actions for deceptive practices
HIPAA
When Does HIPAA Apply?
HIPAA applies to "covered entities" (healthcare providers, plans, clearinghouses) and their "business associates." Key questions for mental health AI:
- Is the app provider a covered entity? (Usually no, for consumer apps)
- Is the app a business associate of a covered entity?
- Is the app integrated into a covered entity's workflow?
If HIPAA Applies:
- Privacy Rule compliance (consent, minimum necessary, etc.)
- Security Rule compliance (technical, administrative, physical safeguards)
- Business Associate Agreement required
- Breach notification requirements
Many consumer mental health apps fall outside HIPAA because they're not covered entities and don't have relationships with covered entities. This means less federal privacy protection for sensitive mental health data—a significant regulatory gap.
European Union
EU AI Act
Classification and Requirements
The EU AI Act (effective 2024-2026) classifies AI systems by risk level. Mental health AI is likely to be classified as "high-risk" under:
- Annex III, 5(b): "AI systems intended to be used to evaluate the eligibility of natural persons for health care services"
- Safety components: If the AI is a safety component of a medical device
High-Risk AI Requirements:
- Risk management system
- Data governance and quality
- Technical documentation
- Record-keeping and logging (see the sketch after this list)
- Transparency to users
- Human oversight mechanisms
- Accuracy, robustness, and cybersecurity
- Conformity assessment before deployment
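As one way to approach the record-keeping and logging requirement, the sketch below appends a structured record for every AI interaction, including the model version and an optional human reviewer field to support human oversight. The JSON-lines format, field names, and hashing choice are assumptions for illustration, not a prescribed schema.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_interaction_log.jsonl")


def log_interaction(user_id: str, prompt: str, response: str,
                    model_version: str, reviewed_by: str | None = None) -> None:
    """Append one structured record per AI interaction."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        # Hash the text when the log only needs traceability, not content.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode("utf-8")).hexdigest(),
        "model_version": model_version,
        "human_reviewer": reviewed_by,  # supports the human-oversight requirement
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


log_interaction("user-123", "I feel anxious before exams",
                "Here are some grounding exercises...",
                model_version="wellness-bot-1.4")
```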
GDPR
Special Category Data
Mental health data is "special category data" under GDPR Article 9, requiring:
- Explicit consent for processing, or another Article 9(2) basis (e.g., provision of health care, substantial public interest); see the sketch after this list
- Data minimization
- Purpose limitation
- Right to erasure ("right to be forgotten")
- Data portability
- Right not to be subject to solely automated decision-making with legal or similarly significant effects (Article 22)
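A minimal sketch of how explicit-consent gating, data minimization, erasure, and portability might look in application code follows; the in-memory store, class names, and fields are assumptions for illustration only.

```python
from dataclasses import dataclass, field


@dataclass
class UserRecord:
    user_id: str
    explicit_consent: bool = False  # Article 9(2)(a) basis, recorded explicitly
    mood_entries: list[str] = field(default_factory=list)


class MentalHealthStore:
    def __init__(self) -> None:
        self._records: dict[str, UserRecord] = {}

    def give_consent(self, user_id: str) -> None:
        self._records.setdefault(user_id, UserRecord(user_id)).explicit_consent = True

    def record_mood(self, user_id: str, entry: str) -> None:
        record = self._records.setdefault(user_id, UserRecord(user_id))
        if not record.explicit_consent:
            raise PermissionError("Explicit consent required before processing health data")
        # Data minimization: store only the entry itself, no device or location metadata.
        record.mood_entries.append(entry)

    def erase(self, user_id: str) -> None:
        # Right to erasure: remove the record entirely, not just mark it inactive.
        self._records.pop(user_id, None)

    def export(self, user_id: str) -> dict:
        # Data portability: return the user's data in a structured, machine-readable form.
        record = self._records[user_id]
        return {"user_id": record.user_id, "mood_entries": list(record.mood_entries)}
```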
Medical Device Regulation (MDR)
If mental health AI qualifies as a medical device, it falls under the Medical Device Regulation (EU 2017/745) with requirements for:
- CE marking
- Clinical evaluation
- Post-market surveillance
- Quality management system
Other International Frameworks
UK
Post-Brexit, the UK is developing its own AI framework. The UK AI Safety Institute is leading on AI safety standards, and the MHRA regulates medical devices, including software.
Canada
Health Canada regulates software as medical devices. PIPEDA governs privacy. Voluntary guidance on AI in healthcare.
Australia
TGA regulates medical device software. Privacy Act applies to health data. Voluntary AI Ethics Principles.
WHO Guidance
"Ethics and Governance of AI for Health" (2021) provides non-binding guidance emphasizing transparency, safety, and equity.
Compliance Strategy Considerations
Questions to Answer
- What claims are you making? "Wellness" vs. "treatment" has major regulatory implications.
- Where will users be located? Determines applicable jurisdictions.
- Is human oversight involved? Required by many regulations; reduces risk classification.
- What data are you collecting? Mental health data triggers heightened requirements.
- Who are you working with? Partnerships with covered entities trigger HIPAA.
- Does your product make clinical decisions? May trigger FDA and high-risk AI classification.
Safe Harbor Approaches
- Wellness framing: Focus on general wellness support, not treatment claims
- Human in the loop: Route clinical decisions to licensed professionals (see the sketch after this list)
- Transparency: Clear disclosure of AI nature and limitations
- Data minimization: Collect only what's necessary
- Conservative claims: "May help" not "will treat"
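Here is a minimal sketch of the human-in-the-loop approach: wellness-style requests get an automated reply, while anything resembling a clinical decision is queued for a licensed professional. The marker-based classifier and the review queue are hypothetical placeholders, not a production triage system.

```python
from enum import Enum, auto
from queue import Queue


class Intent(Enum):
    WELLNESS = auto()  # journaling prompts, psychoeducation, general support
    CLINICAL = auto()  # diagnosis, medication, treatment decisions


# Naive marker list as a stand-in for a real intent classifier.
CLINICAL_MARKERS = ("diagnose", "medication", "dosage", "treatment plan", "am i depressed")

clinician_review_queue = Queue()  # holds (user_id, message) tuples for licensed review


def classify_intent(message: str) -> Intent:
    lowered = message.lower()
    return Intent.CLINICAL if any(m in lowered for m in CLINICAL_MARKERS) else Intent.WELLNESS


def route(user_id: str, message: str) -> str:
    if classify_intent(message) is Intent.CLINICAL:
        clinician_review_queue.put((user_id, message))
        return ("This looks like something a licensed professional should weigh in on. "
                "Your message has been shared with our clinical team.")
    return "Here is some general wellness support..."  # automated, non-clinical reply
```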
Emerging Trends
- More states likely to pass AI mental health laws
- FDA clarifying digital health oversight
- EU AI Act enforcement beginning 2024-2026
- Professional licensing boards updating guidance
- Potential federal US legislation
Professional Standards and Guidelines
Beyond legal requirements, professional organizations provide guidance on ethical AI use in mental health:
APA (American Psychological Association)
- Guidelines for the use of technology in psychological practice
- Emphasis on competence, informed consent, confidentiality
APA (American Psychiatric Association)
- App Evaluation Model
- Framework for assessing digital mental health tools
NHS Digital
- Digital Technology Assessment Criteria (DTAC)
- Evidence standards framework for digital health
NICE (UK)
- Evidence standards for digital health technologies
- Assessment criteria for NHS adoption