App Analyzes Your Voice for Mood Swings

May 16, 2014

Researchers are testing a smartphone app that monitors your mood by listening for changes in your voice. They’re hopeful it could be a tool to detect mood shifts in people with bipolar disorder—and perhaps changes seen in conditions like PTSD and Parkinson’s disease. 

While the app still needs much testing before widespread use, early results from a small group of patients show its potential to monitor moods while protecting privacy.

The developers call the project PRIORI, because they hope it will yield a biological marker that helps prioritize bipolar disorder care for the people who need it most urgently to stabilize their moods—especially in regions of the world with scarce mental health services.

Bipolar disorder affects tens of millions of people worldwide, and can have devastating effects, including suicide.

“These pilot study results give us preliminary proof of the concept that we can detect mood states in regular phone calls by analyzing broad features and properties of speech, without violating the privacy of those conversations,” says Zahi Karam, a postdoctoral fellow and specialist in machine learning and speech analysis at the University of Michigan.

How It Works

The app runs in the background on an ordinary smartphone, and automatically monitors the patients’ voice patterns during any calls made as well as during weekly conversations with a member of the patient’s care team. The computer program analyzes many characteristics of the sounds—and silences—of each conversation.
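
The researchers have not published the exact feature set, but the general idea can be illustrated with a short sketch in Python. The function below is a simplified stand-in rather than the PRIORI code: it slices one side of a call into short frames and summarizes how much of the call is speech, how long the pauses run, and how loud the speaking portions are. Every threshold and name here is an assumption made for illustration.

```python
# Minimal sketch (not the PRIORI pipeline): summarize the "sounds and
# silences" of one side of a call using only frame energy.
import numpy as np

def call_features(samples: np.ndarray, sr: int = 8000,
                  frame_ms: int = 25, silence_db: float = -35.0) -> dict:
    """Summarize a mono waveform (float samples in [-1, 1]) as a few numbers."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)

    # Per-frame loudness in dB relative to full scale.
    rms = np.sqrt(np.mean(frames ** 2, axis=1) + 1e-12)
    db = 20 * np.log10(rms)

    speech = db > silence_db  # crude voice-activity decision
    # Lengths of consecutive silent runs, in seconds.
    changes = np.diff(np.concatenate(([0], (~speech).astype(int), [0])))
    starts = np.where(changes == 1)[0]
    ends = np.where(changes == -1)[0]
    pause_lens = (ends - starts) * frame_ms / 1000.0

    return {
        "speech_fraction": float(speech.mean()),
        "mean_pause_s": float(pause_lens.mean()) if len(pause_lens) else 0.0,
        "num_pauses": int(len(pause_lens)),
        "loudness_db_mean": float(db[speech].mean()) if speech.any() else float("nan"),
        "loudness_db_std": float(db[speech].std()) if speech.any() else float("nan"),
    }
```

A real system would add richer acoustic measures, such as pitch movement and voice quality, but even coarse numbers like these capture the kind of sound-and-silence summary the researchers describe.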

Only the patient’s side of everyday phone calls is recorded—and the recordings themselves are encrypted and kept off-limits to the research team. They can see only the results of computer analysis of the recordings, which are stored in secure servers that comply with patient privacy laws.
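
As a rough illustration of that separation, the sketch below encrypts the raw recording while leaving only the derived feature summary readable. The store_call helper and the key handling are hypothetical, and the study's actual storage design is not described at this level of detail; the example simply uses the widely available Python "cryptography" package.

```python
# Illustrative sketch of the privacy split described above: the raw audio is
# encrypted at rest, while analysts see only the derived feature record.
import json
from cryptography.fernet import Fernet

def store_call(raw_audio: bytes, features: dict, key: bytes):
    """Return (encrypted recording, plain-text feature record)."""
    encrypted_audio = Fernet(key).encrypt(raw_audio)  # off-limits to the research team
    feature_record = json.dumps(features)             # what the analysts can see
    return encrypted_audio, feature_record

key = Fernet.generate_key()  # held by the secure server, not by the researchers
blob, record = store_call(b"\x00\x01\x02", {"speech_fraction": 0.62}, key)
```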

Eventually, the system will include a feedback loop that shares results with the patient, his or her care team, and even a chosen family member.

Standardized weekly mood assessments with a trained clinician provide a benchmark for the patient’s mood, and are used to correlate the acoustic features of speech with the patient’s mood state.
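
To make that correlation step concrete, here is a hedged sketch of how per-call feature summaries might be paired with clinician ratings and fed to a standard classifier. The feature values, the labels, and the choice of logistic regression are all illustrative assumptions, not details taken from the study.

```python
# Illustrative sketch: weekly clinician ratings serve as labels for the calls
# recorded that week, and a simple classifier learns to flag concerning calls.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: one row per call, with made-up values for
# [speech_fraction, mean_pause_s, num_pauses, loudness_db_mean, loudness_db_std].
# y: clinician label for that week (0 = stable, 1 = depressed).
X = np.array([[0.62, 0.40, 31, -22.1, 4.0],
              [0.48, 0.95, 55, -26.3, 3.1],
              [0.70, 0.35, 24, -20.8, 4.4],
              [0.41, 1.20, 61, -27.9, 2.8]])
y = np.array([0, 1, 0, 1])

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

new_call = np.array([[0.45, 1.05, 58, -25.5, 3.0]])
print("Probability this call sounds depressed:",
      model.predict_proba(new_call)[0, 1])
```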

Because other mental health conditions also cause changes in a person’s voice, the same technology framework developed for bipolar disorder could prove useful in everything from schizophrenia and post-traumatic stress disorder to Parkinson’s disease, the researchers say.

Results So Far

The first six patients all have a rapid-cycling form of Type 1 bipolar disorder and a history of frequent depressive and manic episodes. The researchers showed that their analysis of voice characteristics from everyday conversations could detect elevated and depressed moods.

The detection of mood states should improve over time as the software is trained on more conversations and on data from more patients.

The researchers study patients as they experience all aspects of bipolar disorder mood changes, from mild depressions and hypomania (mild mania) to full-blown depressed and manic states. Over time, they hope to develop software that will learn to detect the changes that precede the transitions to each of these states.

They also need to develop and explore strategies for notifying the app user and care providers about mood changes, so that appropriate intervention can take place.

The app currently runs on phones using the Android operating system, and complies with laws about recording conversations because only one side of the conversation is actually recorded. The University of Michigan has applied for patent protection for the intellectual property involved.

The researchers presented early results this month at the International Conference on Acoustics, Speech and Signal Processing in Italy, and published details simultaneously in the conference proceedings.

The National Institute of Mental Health funded the study.

Source: University of Michigan. Republished from Futurity.org under Creative Commons License 3.0.
