Ethics

Digital mental health requires clear ethical boundaries. The principles below guide this work.

Core Principles

These principles are non-negotiable for all work in digital mental health.

Standing Commitments
  1. Open-source tools where possible—so others can verify and improve
  2. Full documentation of methods—nothing hidden
  3. Citation of prior work consistently—credit where due
  4. Honest reporting of limitations—including what doesn't work
  5. No therapeutic claims without evidence—"may help" not "will cure"
  6. Crisis resources prominently displayed—always a path to human help
  7. Human oversight for all clinical applications—technology augments, not replaces

Red Lines

Boundaries are as important as commitments. This project explicitly refuses to:

  • Claim therapeutic benefit without evidence.
    If something might help, the language says "might." If evidence is limited, that's stated.
  • Pretend AI can replace human connection.
    The research is clear: hybrid approaches work best. AI is adjunctive.
  • Hide sources.
    Everything is cited. The work can be checked.
  • Promise what technology can't deliver.
    Digital tools have real limitations. They're discussed openly.
  • Deploy AI for crisis intervention without human backup.
    Current AI fails at crisis detection. No pretense otherwise.
  • Collect unnecessary data.
    Minimal collection. User control. No data selling. (See the sketch after this list.)
  • Ignore algorithmic bias.
    Testing for differential performance across groups is standard practice.
  • Obscure limitations in marketing language.
    Plain language. Honest framing.
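
To make the data-collection red line concrete, here is a minimal sketch in Python. The schema, field names, and retention window are hypothetical illustrations, not anything the project prescribes: the record type enumerates the only fields ever stored, and purging and deletion are first-class operations.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical schema: only what the tool needs to function.
# No name, email, location, or device identifiers are ever collected.
@dataclass
class SessionRecord:
    session_id: str         # random token, not linked to identity
    created_at: datetime
    checkin_score: int      # e.g., a 1-5 self-rated mood check-in

RETENTION = timedelta(days=30)  # illustrative retention window, not a standard

def purge_expired(store: list[SessionRecord]) -> list[SessionRecord]:
    """Drop anything older than the retention window."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [r for r in store if r.created_at >= cutoff]

def delete_user_data(store: list[SessionRecord], session_id: str) -> list[SessionRecord]:
    """User control: deletion on request, no questions asked."""
    return [r for r in store if r.session_id != session_id]
```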

Regulatory Landscape

Digital mental health operates in an evolving regulatory environment. Key frameworks affecting this work:

Jurisdiction      Regulation   Key Provisions
US (Federal)      HIPAA        Privacy and security for health data
US (California)   CCPA/CPRA    Consumer privacy rights
US (Illinois)     WOPR         Restrictions on AI in mental health treatment
US (Nevada)       AB 406       AI therapy limitations
European Union    GDPR         Comprehensive privacy framework
European Union    MDR          Medical device requirements

Regulatory Caution

Several US states have begun restricting AI applications in mental health. Illinois (WOPR) and Nevada (AB 406) limit AI-delivered "therapy" without licensed professional oversight. These protections align with the design philosophy here: tools that complement—not replace—professional care.

Position on AI

AI Should Be

  • Adjunctive: Supporting human care, not replacing it
  • Bounded: Clear limitations on what it will discuss
  • Transparent: Always identified as non-human
  • Supervised: Human oversight for all clinical applications
  • Conservative: Erring toward caution and referral

AI Should Never

  • Claim therapeutic capability
  • Replace crisis intervention
  • Diagnose conditions
  • Recommend medication changes
  • Engage with active suicidality without immediate escalation
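
A minimal sketch of how these two lists can be enforced in code, with names like `respond`, `GuardrailResult`, `CRISIS_PATTERNS`, and `generate_model_reply` that are purely illustrative. The keyword screen below is not a validated crisis detector (as the next section explains, current AI fails at this); it exists only to show the conservative default: on any possible signal, stop engaging, show crisis resources, and escalate to a human.

```python
from dataclasses import dataclass

# Hypothetical, illustrative patterns only; NOT a validated crisis detector.
CRISIS_PATTERNS = ("suicide", "kill myself", "end my life", "self-harm")

CRISIS_FOOTER = (
    "If you're in crisis: call or text 988 (US), text HOME to 741741, "
    "or call 911 / your local emergency number."
)

@dataclass
class GuardrailResult:
    reply: str
    escalate_to_human: bool  # supervised: a person reviews every escalation

def generate_model_reply(message: str) -> str:
    # Placeholder for whatever model backs the tool; irrelevant to the guardrail.
    return "Here is some general information."

def respond(user_message: str) -> GuardrailResult:
    """Conservative wrapper around any model reply: transparent, bounded,
    and erring toward referral rather than engagement."""
    text = user_message.lower()

    # Conservative: any possible crisis signal ends AI engagement immediately.
    if any(p in text for p in CRISIS_PATTERNS):
        return GuardrailResult(
            reply=("I'm an automated tool and can't help with a crisis. "
                   + CRISIS_FOOTER),
            escalate_to_human=True,
        )

    # Bounded: no diagnosis, no medication advice; out of scope by design.
    if any(p in text for p in ("diagnose", "medication", "dosage")):
        return GuardrailResult(
            reply="That's a question for a licensed clinician, not for me.",
            escalate_to_human=False,
        )

    # Transparent: every ordinary reply identifies the system as non-human.
    draft = generate_model_reply(user_message)
    return GuardrailResult(
        reply=f"[Automated response, not a therapist] {draft}\n{CRISIS_FOOTER}",
        escalate_to_human=False,
    )
```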

Current AI Safety Concerns

Research has documented significant limitations in how AI chatbots respond to mental health crises. Chatbots:

  • Often fail to recognize clear crisis signals
  • May give harmful or dismissive responses
  • Have not been sufficiently validated for crisis detection use cases

Until AI demonstrably improves at crisis detection, conversational AI should not be deployed for clinical applications without robust human oversight.

Crisis Safety Protocols

Any digital mental health tool must have robust crisis protocols. These are required:

⚠️ Required Safety Features
  1. Visible crisis resources on every page—988, Crisis Text Line, emergency services
  2. Clear disclaimers that tools are not therapy or crisis services
  3. Exit paths to human help at every point
  4. No engagement with active suicidality—immediate referral to crisis resources
  5. Clinician contact information where applicable
  6. Emergency instructions clearly displayed
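
As a sketch of how requirements 1 and 2 can be implemented, the snippet below (hypothetical names, minimal Python) keeps the resources in a single constant so every page renders the same footer and disclaimer; the numbers are the US resources listed at the bottom of this page.

```python
# Crisis resources from this page; a single source of truth so the
# footer cannot drift out of date across pages.
CRISIS_RESOURCES = [
    ("988 Suicide & Crisis Lifeline", "Call or text 988 (US)"),
    ("Crisis Text Line", "Text HOME to 741741"),
    ("Emergency", "Call 911 or go to your nearest emergency department"),
]

DISCLAIMER = "This tool is not therapy and not a crisis service."

def render_crisis_footer() -> str:
    """Plain-text footer meant to appear on every page, per requirement 1."""
    lines = [DISCLAIMER] + [f"{name}: {detail}" for name, detail in CRISIS_RESOURCES]
    return "\n".join(lines)

if __name__ == "__main__":
    print(render_crisis_footer())
```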

Equity and Access

Digital mental health tools risk widening disparities if not designed carefully. Essential practices:

Testing for Bias

Any algorithms used are tested for differential performance across demographic groups. Disparities are reported when found.
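
A minimal sketch of what that testing can look like, assuming labeled evaluation data tagged with a demographic group. The function names and the 0.05 threshold are illustrative, not a standard: compute a per-group false negative rate and flag any gap above the threshold.

```python
from collections import defaultdict

def false_negative_rates(records):
    """records: iterable of (group, y_true, y_pred) with binary labels.
    Returns FNR per group: missed positives / actual positives."""
    pos = defaultdict(int)     # actual positives per group
    missed = defaultdict(int)  # positives the model missed per group
    for group, y_true, y_pred in records:
        if y_true == 1:
            pos[group] += 1
            if y_pred == 0:
                missed[group] += 1
    return {g: missed[g] / pos[g] for g in pos if pos[g] > 0}

def report_disparities(records, threshold=0.05):
    """Flag any spread in FNR across groups beyond the threshold.
    The 0.05 value is an arbitrary illustration, not a standard."""
    rates = false_negative_rates(records)
    worst, best = max(rates.values()), min(rates.values())
    if worst - best > threshold:
        print(f"Disparity found: FNR ranges {best:.2f}-{worst:.2f} -> {rates}")
    else:
        print(f"No FNR gap above {threshold}: {rates}")

# Toy example: the model misses more positives in group "b" than in "a".
data = [("a", 1, 1), ("a", 1, 1), ("a", 1, 0), ("a", 0, 0),
        ("b", 1, 0), ("b", 1, 0), ("b", 1, 1), ("b", 0, 0)]
report_disparities(data)
```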

Accessibility

Tools are designed for screen readers, keyboard navigation, and varying literacy levels.

Cultural Humility

Recognition that approaches developed in one context may not transfer universally. Feedback is welcome.

Digital Divide

Acknowledgment that not everyone has smartphones or reliable internet. Digital tools are one option, not the only option.

Questions and Feedback

Ethical frameworks are living documents. If you see:

  • A gap in these principles
  • A concern not addressed
  • A way these tools might cause harm
  • Better approaches to adopt

Please reach out through the 22b1 correspondence page.

If You're in Crisis

This is a research site, not a crisis service. If you or someone you know is in immediate danger:

  • 988 Suicide & Crisis Lifeline: Call or text 988 (US)
  • Crisis Text Line: Text HOME to 741741
  • Emergency: Call 911 or go to your nearest emergency department
  • International: Find a helpline in your country