Algorithms crunch calls to health insurer for signs of disease


Did your voice give it away? US start-up Canary Speech is developing deep-learning algorithms to detect if people have neurological conditions like Parkinson’s or Alzheimer’s disease just by listening to the sound of their voice. And it’s found a controversial source of audio data to train its algorithms on: phone calls to a health insurer.

The health insurer – which Canary Speech would not name but says is “a very large American healthcare and insurance provider” – has provided the company with hundreds of millions of phone calls that have been collected over the past 15 years and are labelled with information about the speaker’s medical history and demographic background.

Using this data, the company says its algorithms could pick up on vocal cues that distinguish someone with a particular condition from someone without that condition. “For modelling purposes, we want to be able to see an individual over a period of years,” says Canary Speech CEO Henry O’Connell.

Co-founder Jeff Adams says the company hasn’t yet received all of the audio data, but could have an algorithm that aims to detect vocal indicators of Alzheimer’s disease ready within two months. It also aims to look for vocal markers for depression, stress and dyslexia.

All about the data

While you may be surprised to learn that calls to a health insurer could be used in this way, O’Connell says that the company Canary Speech is working with has “express permissions” in place to allow it. He says that he has spoken to several UK companies too, but that stricter UK data protection laws may prevent a similar project there.

How the technology is ultimately used will be down to Canary Speech’s customers, says O’Connell. The insurer that is providing the call data also manages clinics in the US, he says, and discussions have so far related to using the technology in a clinical setting, for example to help with diagnosing these conditions. When asked if the technology could potentially be used to screen callers or influence insurance premiums, he responded that such an application “may be regulated”.

“This is the type of thing where we’d want to make sure patient privacy is protected,” says Caitlin Donovan at the US National Patient Advocate Foundation. “I would be worried that this [algorithm] could be used to either track or diagnose a condition that the patient may not be aware they have.”

The US Affordable Care Act (ACA) prevents insurers from denying benefits or raising costs because of a pre-existing condition, but that could change under the new administration of Donald Trump. Before the ACA took effect, people with Parkinson’s and Alzheimer’s were often denied coverage by insurers.

Early diagnosis

We have known about vocal indicators of neurological conditions for a long time, says Sandy Schneider at Saint Mary’s College in Notre Dame, Indiana. But only recently have people started to explore how machine learning could aid diagnosis. “Right now, it takes somebody maybe two or three years to be definitively diagnosed with a neurodegenerative disease,” she says. She thinks algorithms may one day be able to detect symptoms in the voice before clinical diagnosis, meaning treatment of symptoms could start sooner.

The Canary Speech team is coming at the problem from a big data perspective, rather than a medical one, says Adams. “We have a large collection of audio files of people with and without [diseases], and we just have to find in those files what are the differences in the audio, in the waves,” he says. “It doesn’t require medical intuition to do that, it requires signal processing and machine learning.”

But Max Little at Aston University in Birmingham, UK, who explores the use of voice-processing algorithms to predict the severity of Parkinson’s, isn’t convinced by the big data approach. “I’m deeply sceptical of efforts that naively believe you can simply accumulate more data and that will do the job,” he says.

Little uses data collected from relatively small numbers of patients who are recorded while they read or repeat specific sounds or phrases. Machine-learning algorithms then search for vocal indicators that are associated with Parkinson’s, such as hypophonia, a softness of speech resulting from lack of coordination over the vocal muscles.

Canary Speech, however, is feeding its machine-learning algorithms with vast amounts of conversational speech, in the hope they will learn to recognise subtle differences in the voices of people with different conditions.

“People have been able to show that there is enormous promise in this approach,” says Little. “But to really scale it up to be large enough that you could basically throw away all of the fundamental science and just go with some machine learning algorithm – I’m not so sure about that.”
