Research

Evidence synthesis on digital mental health—what works, what doesn't, and where the gaps remain.

Effect Size Comparison

Headline effect sizes from the evidence reviewed here:

  • HRV Biofeedback: d = 0.81
  • Digital CBT: d = 0.54
  • App-Based Mindfulness: d = 0.35

Evidence Grading

Not all evidence is equal. A simplified grading system helps readers understand the strength of claims:

  • Strong: consistent findings across multiple high-quality studies (typical sources: meta-analyses, multiple RCTs)
  • Moderate: promising findings with some limitations (typical sources: a single RCT, multiple observational studies)
  • Emerging: initial evidence that needs replication (typical sources: pilot studies, case series)
  • Limited: insufficient data, theoretical basis only (typical sources: expert opinion, analogous research)
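
To make the rubric concrete, here is a minimal sketch of how it might be encoded, assuming simplified inputs (counts of each study type) and a hypothetical grade_evidence function; real grading also weighs study quality and consistency of findings, which counts alone cannot capture.

    # Hypothetical encoding of the simplified evidence-grading rubric above.
    # Counts alone are a proxy; real grading also weighs quality and consistency.
    def grade_evidence(meta_analyses: int, rcts: int,
                       observational: int, pilots: int) -> str:
        if meta_analyses >= 1 or rcts >= 2:
            return "Strong"    # consistent findings, multiple high-quality studies
        if rcts == 1 or observational >= 2:
            return "Moderate"  # promising findings, some limitations
        if pilots >= 1:
            return "Emerging"  # initial evidence, needs replication
        return "Limited"       # expert opinion or analogous research only

    print(grade_evidence(meta_analyses=1, rcts=3, observational=0, pilots=0))  # Strong
    print(grade_evidence(meta_analyses=0, rcts=0, observational=0, pilots=1))  # Emerging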

A note on effect sizes: Throughout this research, effect sizes (Cohen's d or Hedges' g) are reported where available. For context:

  • d = 0.2 is "small" (visible to careful measurement)
  • d = 0.5 is "medium" (visible to casual observation)
  • d = 0.8 is "large" (obvious to anyone)

Most digital mental health interventions show small-to-medium effects in controlled trials. Real-world effectiveness is typically lower.
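
For readers who want to see the arithmetic, a minimal sketch follows: Cohen's d computed from two groups' outcome scores, the small-sample correction that yields Hedges' g, and a conversion of d to probability of superiority (the chance that a randomly chosen treated participant outscores a randomly chosen control, assuming normal distributions). Function names are illustrative, not taken from the cited studies.

    import math
    from statistics import mean, variance

    def cohens_d(treated: list[float], control: list[float]) -> float:
        """Standardized mean difference using the pooled standard deviation."""
        nt, nc = len(treated), len(control)
        pooled_var = ((nt - 1) * variance(treated) +
                      (nc - 1) * variance(control)) / (nt + nc - 2)
        return (mean(treated) - mean(control)) / math.sqrt(pooled_var)

    def hedges_g(treated: list[float], control: list[float]) -> float:
        """Cohen's d with Hedges' small-sample bias correction."""
        n = len(treated) + len(control)
        return (1 - 3 / (4 * n - 9)) * cohens_d(treated, control)

    def probability_of_superiority(d: float) -> float:
        """Phi(d / sqrt(2)): P(treated score > control score) under normality."""
        return 0.5 * (1 + math.erf(d / 2))

    print(round(probability_of_superiority(0.54), 2))  # ~0.65 for digital CBT's d

So a d of 0.54 means a randomly chosen treated person has roughly a 65% chance of a better outcome than a randomly chosen control: a real but modest advantage.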

Key Numbers

  • d = 0.54: digital CBT effect on depression (Andersson & Cuijpers, 2009)
  • d = 0.81: HRV biofeedback for anxiety (Goessl et al., 2017)
  • < 4%: app retention at 14 days (Baumel et al., 2019)
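
The retention figure is worth unpacking, since definitions vary across studies. One common reading is the share of users still opening an app on or after day 14 post-install; a minimal sketch under that assumption follows, with a hypothetical events structure (user id mapped to active days since install) that is not drawn from Baumel et al.

    # Hypothetical sketch: day-14 retention as the share of installed users
    # still active on or after day 14 (definitions vary across studies).
    def day_n_retention(events: dict[str, list[int]], n: int = 14) -> float:
        """events maps user id -> days-since-install on which the user was active."""
        if not events:
            return 0.0
        retained = sum(1 for days in events.values() if any(d >= n for d in days))
        return retained / len(events)

    sample = {"u1": [0, 1, 2], "u2": [0, 14, 20], "u3": [0, 3]}
    print(round(day_n_retention(sample), 2))  # 0.33: only u2 is active at day 14+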

Building on Existing Work

This research synthesizes—it does not invent. It builds explicitly on the work of established researchers in the field.

Dr. John Torous

Harvard Medical School / Beth Israel Deaconess. Pioneer of digital phenotyping and the mindLAMP platform.

Dr. Paul Lehrer

Rutgers University. Decades of research on HRV biofeedback mechanisms and clinical applications.

NHS IAPT Program

Demonstrating that evidence-based psychological therapy can scale to millions of people.

SAMHSA / CCBHCs

Developing community mental health models that technology can support.

Full acknowledgments and attributions →