A new app that collects audio recordings of people coughing and breathing aims to help researchers detect people infected with coronavirus.
Anyone can go to the University of Cambridge’s ‘COVID-19 Sounds’ webpage to submit recordings of themselves breathing in and out, coughing, and speaking.
The audio data, to be stored on university servers, will help develop machine-learning algorithms that could be used for automatic detection of the illness.
The developers of the application, who hope to collect recordings from as many people as possible, think it can help them detect coronavirus from a person’s cough or even their voice.
The project has already been launched as a webpage, with apps for Android and iOS to follow.
‘The work is about diagnostics and in time, with enough data, we hope that cough or even voice could be used for early diagnosis,’ said Professor Cecilia Mascolo from Cambridge’s Department of Computer Science and Technology, who led the development of the app.
‘Apparently both can have quite specific changes in this disease.
‘There’s still so much we don’t know about this virus and the illness it causes, and in a pandemic situation like the one we’re currently in, the more reliable information you can get, the better.’
Professor Mascolo said the recordings of people without a diagnosis of coronavirus would act as the ‘control’ group in the data set.
Once on the COVID-19 Sounds webpage, volunteers just need to select their age, biological sex, medical conditions and whether or not they’re a smoker.
They then need to select from a list of symptoms, many of which are associated with COVID-19, including fever, dry cough, difficulty breathing and loss of taste or smell.
Participants are also asked their approximate location, whether they’ve had a positive test for COVID-19 in the last 14 days and whether or not they’re in hospital.
Volunteers then record themselves breathing in and out five times, coughing three times and saying the words ‘I hope my data can help to manage the virus pandemic’ three times.
They can then submit their results, all from the single webpage.
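The submission described above amounts to a simple structured record per volunteer. As a minimal illustrative sketch (the field names and phrase counts below are assumptions drawn from this article, not the project’s actual schema), it might look like this, including the rule Professor Mascolo describes for treating undiagnosed volunteers as the control group:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of one COVID-19 Sounds submission record.
# Field names are illustrative only, not the project's real schema.
@dataclass
class SoundsSubmission:
    age_band: str                        # e.g. "30-39"
    biological_sex: str
    medical_conditions: List[str]
    smoker: bool
    symptoms: List[str]                  # e.g. ["dry cough", "fever"]
    approx_location: str
    tested_positive_last_14_days: bool
    in_hospital: bool
    breath_recordings: int = 5           # breathing in and out, five times
    cough_recordings: int = 3            # coughing three times
    phrase_recordings: int = 3           # the spoken phrase, three times

def is_control(sub: SoundsSubmission) -> bool:
    """Volunteers without a coronavirus diagnosis act as the 'control' group."""
    return not sub.tested_positive_last_14_days

example = SoundsSubmission(
    age_band="30-39",
    biological_sex="female",
    medical_conditions=[],
    smoker=False,
    symptoms=["dry cough"],
    approx_location="Cambridge, UK",
    tested_positive_last_14_days=False,
    in_hospital=False,
)
print(is_control(example))  # -> True
```

This is only a sketch of the data being collected; the researchers have not published their internal data format.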
The University of Cambridge researchers said that once they have completed their initial analysis of the data, they will release the data set to other researchers.
They say the data set could help shed light on how the disease progresses and on the relationship between respiratory complications and a patient’s medical history.
Researchers did admit, however, that there is no way to verify the information users submit.
‘Having spoken to doctors, one of the most common things they have noticed about patients with the virus is the way they catch their breath when they’re speaking, as well as a dry cough, and the intervals of their breathing patterns,’ said Professor Mascolo.
‘There are very few large data sets of respiratory sounds, so to make better algorithms that could be used for early detection, we need as many samples from as many participants as we can get.
‘Even if we don’t get many positive cases of coronavirus, we could find links with other health conditions.’
The application will not provide any medical advice and will not track users, the university said.
It will collect location data only once, while users are actively using it.
COVID-19 Sounds currently works only in the Chrome and Firefox browsers, not in iOS browsers, with Android and iOS apps to follow.
The new application is one of several that use crowd-sourced data from Brits to better understand COVID-19 and its spread.
Researchers from the universities of Manchester and Liverpool have partnered with the NHS-backed health record app Evergreen Life to display four different sets of data in colour-coded maps.
The COVID-19 heat map, which takes data submitted by users, revealed that Middlesbrough has the highest percentage of people who are not staying at home during the lockdown, as of this weekend, while Arun, West Sussex, has the highest percentage of those who are.
Swansea has the most households with reported COVID-19 symptoms, it also revealed, while Hull has the fewest.