Millions of people with mental health disorders, including depression and bipolar disorder, are not getting the help they need. The World Health Organization (WHO) estimates that up to 50% of people in developed countries and 85% in developing ones lack proper access to treatment.
But many of these underserved people do have access to smartphones. And a simple search on Apple's App Store yields hundreds of apps that claim to do everything from diagnosing depression to treating insomnia. As you might imagine, these apps range in quality. Some are peddling pure snake oil (one app recommended that people with bipolar disorder drink hard liquor during manic episodes), while others have been independently peer-reviewed by scientists and researchers.
I asked three experts to weigh in on whether mental health app makers have an ethical obligation to back up their claims with clinical evidence. Here's what they had to say.
John Torous, clinical fellow and senior resident in psychiatry at Brigham and Women's Hospital in Boston, Massachusetts: How would you react if your doctor told you the medication she is recommending for your life-threatening infection has never actually been tested, studied, or evaluated? The foundation of health care is trust. So why should the standard be any different for mental health technologies, like smartphone apps that claim to offer either diagnostic or therapeutic guidance?
We have clear evidence that some apps can cause harm or are ineffective, and many make boastful claims that are simply not true. While opponents may argue that clinical studies take too long to generate evidence, new models of research, like Agile Science, can facilitate rapid results. And there is the simple truth that if you want to claim an app is effective for long-term care, then you simply have to study it over the long term. Instead of the science and evidence-based information patients and clinicians need to make informed decisions about mental health apps, there is currently a lack of clinical outcomes data and a world rife with bold claims and promises.
Thus the real question is not whether we should have evidence for these apps, but why it is currently acceptable to make claims about mental health outcomes with no supporting data. Is it stigma against mental health, a belief that patients and clinicians will accept low standards, or a drive for commercialization that has created a landscape where many are making bold claims about mental health that resemble snake oil rather than science? I believe that smartphone technology for mental health will likely grow and evolve into useful clinical tools as we begin to better study and understand them, rather than sell them.
Peter Hames, CEO of digital medicine startup Big Health: Here’s the simple fact: Evidence in health care is non-negotiable. Be it drugs, talking therapies, or even homeopathy, if we're claiming an outcome, we owe it to the people who use our services to give them things that work. This is about more than grudgingly meeting regulations. It’s our ethical duty.
As a founder, I all too frequently hear the view that the time and resources required to conduct controlled trials and publish the evidence are beyond the reach of startups. This is junk. Our first randomized, controlled trial was conducted and published while we were bootstrapped. If you have a promising innovation and believe in the importance of evidence, there are people out there desperate to help you. Every clinical academic department I’ve ever had contact with is hungry for novel digital solutions to test, often with funding secured and at the ready.
But this evidence-ducking is more than just lazy—it's dangerous. For the first time ever, we have an opportunity to get evidence-based non-drug health care to the millions of people who need it. And in the process we can create a huge new industry that operates under that rarest of things—a strong code of ethics aligned with the interests of the end user. In short, this is a golden opportunity to create a better type of medicine, and if we’re not careful we’ll blow it.
If we’re serious about fulfilling the potential of digital therapeutics then we need to behave like grown-ups to win the trust of patients, clinicians, and payers. That means doing the hard work, getting the evidence, making it available for scrutiny, and publishing it. Or digital therapeutics will forever be treated like toys, footnotes at the fringes of health care. The stakes are too high, and the need for change is too great. Let’s not shrug our shoulders—let’s roll up our sleeves, meet our ethical responsibilities, and create a better type of medicine.
Thomas Goetz, cofounder and CEO of Iodine: Medicine often works at a glacial pace. Beyond the oft-cited 17-year lag between research and practice, there’s also an everyday gap between what we know health care should do—the platonic ideal that’s often been proven in clinical trials—and what health care actually does—the day-to-day practice of usual care.
Mobile technologies offer us a true opportunity to speed up this cycle and to put effective interventions to work at scale. Apps, in other words, are a chance to "fork the code"—to turn well-proven but manual interventions into automatic, algorithmic ones that use software to bring best practices to the many, rather than the few. This is a huge shift, and a huge chance for medicine to improve the speed of practice.
This doesn’t mean doing away with proper validation studies that vet mobile apps for real effectiveness. But we should take advantage of the rapid, iterative, launch-and-fix-bugs pace of software. And we shouldn’t expect software to stand still for one or two years while academic validation works its way through review and publication.
At my company, Iodine, we’ve taken the approach of building on years of well-researched interventions, validated measures, and guidelines from the Institute for Clinical Systems Improvement, and seeing if we can turn that into something real people will actually find useful in their daily lives. Having proven that on the battlefields of real life, we’re now moving on to academic validation studies that will help us sell our products to our real customers: providers, insurers, and employers. To take the opposite approach, it seems to me, would fail to take advantage of what technology is so good at—iterative development—and succumb to what medicine still suffers from—a timeline driven by publishing deadlines and peer review rather than the pressing needs of real people in the real world.
What do you think? Please share your views with me on Twitter (@chrissyfarr) or via email (email@example.com).