Later On

A blog written for those whose interests more or less match mine.

Machine Learning Algorithm Spots Depression in Speech Patterns


Michael Byrne has an intriguing article in Motherboard:

Researchers from the University of Southern California have developed a new machine learning tool capable of detecting certain speech-related diagnostic criteria in patients being evaluated for depression. Known as SimSensei, the tool listens to patients’ voices during diagnostic interviews for reductions in vowel expression characteristic of psychological and neurological disorders that may not be sufficiently clear to human interviewers. The idea is (of course) not to replace those interviewers, but to add additional objective weight to the diagnostic process.

The group’s work is described in the journal IEEE Transactions on Affective Computing.

Depression misdiagnosis is a huge problem in health care, particularly in cases in which a primary care doctor is making (or not making) the diagnosis. A 2009 meta-study covering some 50,000 patients found that docs were correctly identifying depression only about half the time, with false positives outnumbering false negatives by a ratio of about three to one. That’s totally unacceptable.

But it’s also understandable. Doctors, especially general practitioners, will pretty much always overdiagnose an illness for two simple and related reasons: one, diagnosing an illness in error is almost always safer than not diagnosing an illness in error; two, eliminating with certainty the possibility of any single diagnosis requires more expertise/more confidence than otherwise. See also: overprescribing antibiotics.

A big part of the problem in diagnosing depression is that it’s a very heterogeneous disease. It has many different causes and is expressed in many different ways. Figure that a primary care doctor is seeing maybe hundreds of patients in a week, for all manner of illness, and the challenge involved in extracting a psychiatric diagnosis from the vagaries of self-reported symptoms and interview-based observations is pretty clear. There is, then, a huge hole for something like SimSensei to fill.

The depression-related variations in speech tracked by SimSensei are already well-documented. “Prior investigations revealed that depressed patients often display flattened or negative affect, reduced speech variability and monotonicity in loudness and pitch, reduced speech, reduced articulation rate, increased pause duration, and varied switching pause duration,” the USC paper notes. “Further, depressed speech was found to show increased tension in the vocal tract and the vocal folds.” . . .

Continue reading.
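For readers curious about what features like “reduced speech variability,” “monotonicity in loudness and pitch,” and “increased pause duration” might look like in practice, here is a minimal Python sketch of my own (not SimSensei’s actual pipeline, whose details are in the IEEE paper) that pulls a few such measures from a recording using the open-source librosa library. The file name interview.wav and the silence threshold are placeholders.

```python
# Illustrative sketch only -- not SimSensei. Extracts a few of the speech
# features mentioned in the quoted passage: pitch variability, loudness
# variability, fraction of time spent speaking, and pause durations.
import numpy as np
import librosa


def speech_features(path):
    # Load audio (librosa resamples to mono, 22.05 kHz by default)
    y, sr = librosa.load(path)

    # Fundamental-frequency track via pYIN; a lower standard deviation
    # corresponds to the "monotonicity in pitch" described above
    f0, voiced_flag, _ = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C6"),
        sr=sr,
    )
    pitch_sd = np.nanstd(f0)  # NaN frames are unvoiced; ignore them

    # Frame-level energy as a rough proxy for loudness variability
    loudness_sd = np.std(librosa.feature.rms(y=y))

    # Non-silent regions; gaps between them approximate pauses
    # (top_db=30 is an arbitrary placeholder threshold)
    intervals = librosa.effects.split(y, top_db=30)
    speech_time = sum(int(e - s) for s, e in intervals) / sr
    total_time = len(y) / sr
    pauses = [
        (intervals[i + 1][0] - intervals[i][1]) / sr
        for i in range(len(intervals) - 1)
    ]
    mean_pause = float(np.mean(pauses)) if pauses else 0.0

    return {
        "pitch_sd_hz": float(pitch_sd),          # pitch variability
        "loudness_sd": float(loudness_sd),        # loudness variability
        "speech_fraction": speech_time / total_time,  # "reduced speech"
        "mean_pause_s": mean_pause,               # "increased pause duration"
    }


print(speech_features("interview.wav"))
```

The actual system would of course feed features like these (plus the vowel-space measures the article mentions) into a trained classifier rather than just printing them, but the sketch gives a sense of the raw signal the tool is working from.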

Written by LeisureGuy

9 July 2016 at 8:52 am
