MIT has a new AI model that can detect asymptomatic COVID-19 patients just by using the sound of their coughs recorded over phone calls
- The Massachusetts Institute of Technology (MIT) has developed a new AI model that detects asymptomatic COVID-19 patients by analysing the sounds of their coughs.
- The long-term plan is to make this AI model accessible on a large scale by incorporating it into a user-friendly app.
- The AI model uses four algorithms to run its analysis. It accurately identified 98.5% of coughs from people confirmed to have COVID-19, and detected those who had no symptoms but tested positive with 100% accuracy.
MIT’s new AI model can detect who may be carrying the virus, even without any discernible physical symptoms, just by hearing the way a person coughs. The salient differences between the cough of a healthy person and that of someone who may be unhealthy are not audible to the human ear, but they can be picked up by AI.
“Things we easily derive from fluent speech, AI can pick up simply from coughs, including things like the person’s gender, mother tongue, or even emotional state. There’s in fact sentiment embedded in how you cough,” explained co-author Brian Subirana.
The long-term plan is to make this accessible on a large scale by incorporating the model into a user-friendly app. If it receives Food and Drug Administration (FDA) approval, MIT hopes it can be a free, convenient, and non-invasive screening tool for coronavirus.
MIT’s AI model detects asymptomatic COVID-19 patients with 100% accuracy
The study, published in the IEEE Open Journal of Engineering in Medicine and Biology, used forced-cough recordings that people voluntarily submitted through web browsers, smartphones, and laptops.
Using this information, the AI model was trained on tens of thousands of coughs and spoken words. At the end of the experiment, it accurately identified 98.5% of coughs from people confirmed to have COVID-19, and detected those who had no symptoms but tested positive with 100% accuracy.
An AI model backed by four algorithms
The AI model is a combination of three machine learning (ML) neural networks, topped by a fourth overlay algorithm. The first, ResNet50, can discriminate between sounds associated with different degrees of vocal cord strength.
The second ML algorithm was trained to distinguish between different emotional states evident in speech: certain tones may indicate frustration, while others indicate happiness.
The third and final neural network was trained on a database of coughs and can discern changes in lung and respiratory performance.
All three of these models were combined, and one last algorithm was overlaid to filter the analysis and detect muscular degradation.
The results showed that, taken together, vocal cord strength, sentiment, lung and respiratory performance, and muscular degradation were effective biomarkers for diagnosing the disease.
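The combination step described above can be sketched in a few lines of code. This is only an illustrative outline, not MIT's actual implementation: the real system feeds audio through ResNet50-based neural networks, whereas here the three biomarker scores, the equal weighting, and the 0.5 threshold are all hypothetical placeholders.

```python
def overlay_classifier(vocal_cord, sentiment, lung_resp, threshold=0.5):
    """Combine three biomarker scores (each assumed to lie in [0, 1])
    into a single COVID-19 likelihood score.

    The equal weighting and the 0.5 threshold are hypothetical
    placeholders; the actual model learns how to combine the outputs
    of its three neural networks.
    """
    score = (vocal_cord + sentiment + lung_resp) / 3.0
    return score, score >= threshold


# Hypothetical biomarker scores for one forced-cough recording:
likelihood, flagged = overlay_classifier(0.9, 0.7, 0.8)
```

The point of the overlay stage is that no single biomarker is decisive on its own; only the combined signal across vocal cord strength, sentiment, and respiratory performance was strong enough to separate COVID-19-positive coughs from healthy ones.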
“The sounds of talking and coughing are both influenced by the vocal cords and surrounding organs. This means that when you talk, part of your talking is like coughing and vice versa,” explained Subirana.
This research was supported, in part, by Takeda Pharmaceutical Company Limited.