https://github.com/sayururehan/ai-and-covid19

An article I wrote about the role of Artificial Intelligence during the Covid-19 pandemic

ai artificial-intelligence covid-19 it machine-learning

# AI & Covid-19
![image](https://user-images.githubusercontent.com/42121050/154215018-941ec458-9326-4f2e-9aa0-a87a054d6a1f.png)

When Covid-19 struck in 2020, hospitals were plunged into a poorly understood health crisis, and doctors had little idea how to manage Covid patients.
But with the data coming out of China, they had a four-month head start in the race to beat the pandemic. If machine-learning algorithms could be trained on that data to help doctors understand the disease and make decisions, it might save lives. If there was ever a time for A.I. to prove its usefulness to the world, it was now…

Sadly, it never happened. Research teams around the world stepped up to help, and the A.I. community in particular rushed to develop software that many believed would let hospitals diagnose patients faster, bringing much-needed support to the front lines. In the end, hundreds of predictive tools were developed. But none of them made a real difference, and some were even potentially harmful.

## Not fit for clinical use

This echoes the findings of two major studies that reviewed the hundreds of predictive tools developed last year. The first, a living review in The British Medical Journal, is still being updated as new tools are released. Its authors have looked at over 200 algorithms for diagnosing patients and found that none of them were fit for clinical use; only two were singled out as promising enough for future testing.

This finding was backed by a second team, at the University of Cambridge, which looked at deep-learning models for diagnosing Covid and predicting patient risk. It reviewed over 400 published tools and, sadly, concluded that none were fit for clinical use.

This pandemic was a big test for A.I. in medicine, and success would have gone a long way toward winning the public over. Unfortunately, things didn't work out that way.
Both teams found that researchers repeated the same basic errors in the way they trained or tested their tools. Incorrect assumptions about the data often meant that the trained models did not work as claimed.

A.I. has the potential to help. But there is a concern that it could be harmful if developed in the wrong way, because a flawed tool could miss certain diagnoses or underestimate the risk for vulnerable patients.

So, what went wrong? And how do we bridge this gap?

## What went wrong?

Many of the problems that were discovered are linked to the poor quality of the data that researchers used to develop their tools. Information about Covid patients was collected and shared in the middle of a global pandemic, often by the very doctors trying to treat those patients. Researchers wanted to help as soon as they could, and these were the only datasets publicly available. But it meant that many tools were built on mislabeled data or on data from unknown sources.
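
One concrete consequence of merging data from unknown sources is leakage: the same scan can end up in both the training and the test split, quietly inflating reported accuracy. A minimal sanity check, assuming hypothetical folders of PNG scans, might hash every file and look for overlap:

```python
import hashlib
from pathlib import Path

def file_hashes(folder: str) -> dict:
    """Map SHA-256 content hash -> file path for every PNG in a folder."""
    return {
        hashlib.sha256(p.read_bytes()).hexdigest(): p
        for p in Path(folder).glob("*.png")
    }

# Hypothetical train/test folders of chest scans.
train = file_hashes("data/train")
test = file_hashes("data/test")

# Any shared hash means the same scan sits in both splits, so test
# accuracy would overstate real-world performance.
leaked = set(train) & set(test)
print(f"{len(leaked)} scans appear in both train and test")
```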

Many tools were developed either by A.I. researchers who lacked medical expertise or by medical researchers who lacked mathematical skills.

Another problem is bias introduced at the point a dataset is labeled. For example, it is much better to label a medical scan with the result of a PCR test than with one doctor's opinion. But there isn't always time for statistical niceties in busy hospitals.
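
As a sketch of what PCR-backed labeling might look like in code, with entirely hypothetical tables and column names:

```python
import pandas as pd

# Hypothetical data: one row per scan, one PCR result per patient.
scans = pd.DataFrame({
    "scan_id": [1, 2, 3, 4],
    "patient_id": ["p1", "p2", "p3", "p4"],
})
pcr = pd.DataFrame({
    "patient_id": ["p1", "p2", "p4"],
    "pcr_positive": [1, 0, 1],
})

# Label each scan from the PCR test rather than one doctor's read.
# Scans without a PCR result are dropped, not labeled by opinion:
# fewer labels, but less biased ones.
labeled = scans.merge(pcr, on="patient_id", how="inner")
print(f"kept {len(labeled)} of {len(scans)} scans with PCR-backed labels")
```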

## How to fix it?

Better, properly labeled data would be a big help, but in the middle of a global pandemic that's a big ask. It's more important to make the most of the datasets we already have. The simplest move would be for A.I. teams to collaborate more with doctors. Researchers also need to share their models and disclose how they were trained so that others can test them and build on them. That alone might resolve half of the issues identified.
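
One lightweight way to disclose how a model was trained is to publish a small metadata file alongside the weights. The fields below are hypothetical, just to show the kind of provenance worth recording:

```python
import json

# Hypothetical training summary, saved next to the model weights so
# that other teams can reproduce, test, and build on the work.
training_card = {
    "model": "covid-cxr-classifier",      # hypothetical model name
    "train_data": "hospital_A_2020_q2",   # provenance of training data
    "label_source": "PCR results",        # how ground truth was assigned
    "excluded": ["pediatric scans"],      # known gaps in coverage
    "eval_data": "hospital_B_holdout",    # external test set, if any
    "metrics": {"auroc": 0.81},           # reported performance
}

with open("training_card.json", "w") as f:
    json.dump(training_card, f, indent=2)
```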

Getting hold of better data would also be easier if formats were standardized. Instead, most researchers set out to develop their own models rather than collaborating to improve existing ones. As a result, teams around the world produced hundreds of tools rather than a handful of properly trained and tested ones.
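
To illustrate what standardization could mean in practice, here is a hypothetical shared record shape; if every hospital exported scans like this, datasets could be pooled without re-mapping fields each time:

```python
from dataclasses import dataclass
from typing import Optional

# A hypothetical shared record format for pooling scans across sites.
@dataclass
class ScanRecord:
    patient_id: str
    image_path: str               # path to the scan in an agreed image format
    modality: str                 # e.g. "chest X-ray" or "CT"
    pcr_positive: Optional[bool]  # PCR-backed label; None if untested
    acquired: str                 # ISO 8601 acquisition date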

The models are so similar that nearly all of them use the same techniques with minor variations and the same inputs, and they all make the same mistakes. If the people developing new models had instead tested the models that were already available, maybe we'd have something that could really help end the pandemic by now…
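
Testing an existing model means external validation: scoring a published model on your own site's data rather than training yet another one. The sketch below uses synthetic data purely so that it runs; the point is the workflow, not the numbers:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Stand-in for a model published by another team (trained here on
# synthetic "site A" data only to make the example self-contained).
X_a = rng.normal(size=(500, 5))
y_a = (X_a[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)
published_model = LogisticRegression().fit(X_a, y_a)

# External validation: score the published model on your own site's
# data ("site B"), whose distribution is deliberately shifted.
X_b = rng.normal(loc=0.3, size=(300, 5))
y_b = (X_b[:, 0] + 0.5 * rng.normal(size=300) > 0).astype(int)
auroc = roc_auc_score(y_b, published_model.predict_proba(X_b)[:, 1])
print(f"site-B AUROC: {auroc:.2f}")  # a sharp drop signals poor generalization
```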

Thanks for reading! Check out my article on Medium!

https://medium.com/@sayururehan/a-i-and-covid-6443ff582d58

https://www.linkedin.com/feed/update/urn:li:activity:6901350562160373760/