At the start of the Covid-19 pandemic, doctors were at a loss, scrambling for any information they could use to combat the spread of the disease. What were the risk factors? Which patients were likely to get extremely sick? What types of care would they require? The questions kept coming, and time wasn't on their side.

Fighting Covid-19 with Artificial Intelligence

As the numbers rose, doctors needed answers fast. This is when Mayo Clinic in Phoenix turned to algorithms for help. The hospital assembled a team of specialists to outline the data they needed from health records and applied artificial intelligence to create a predictive treatment model. Within two weeks, they had built an algorithm from 12,000 patient records that used age, race, gender, socioeconomic status, vaccination history, and medications to estimate a person's risk from the novel coronavirus. At a time when tests were expensive, this algorithm helped doctors determine whether patients needed one.
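To make the idea concrete, here is a minimal sketch of how a feature-based risk model of this kind can be put together. This is not the Mayo Clinic's actual code: the toy records, the reduced feature set, and the choice of a scikit-learn logistic-regression classifier in Python are all illustrative assumptions.

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy stand-in for de-identified patient records; a real model would be
# trained on thousands of records with the full feature set listed above.
records = pd.DataFrame({
    "age": [34, 71, 58, 45],
    "gender": ["f", "m", "f", "m"],
    "medication_count": [1, 6, 3, 2],
    "high_risk": [0, 1, 1, 0],   # label: did the illness become severe?
})

numeric = ["age", "medication_count"]
categorical = ["gender"]

# Scale numeric features, one-hot encode categorical ones, then fit a
# simple logistic-regression classifier on the risk label.
model = Pipeline([
    ("prep", ColumnTransformer([
        ("num", StandardScaler(), numeric),
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ])),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(records.drop(columns="high_risk"), records["high_risk"])

# Score a new patient: the predicted probability can be thresholded into
# the kind of risk level doctors used when deciding who needed a test.
new_patient = pd.DataFrame(
    {"age": [62], "gender": ["m"], "medication_count": [4]}
)
print(model.predict_proba(new_patient)[:, 1])

In practice, most of the work lies in curating and validating the records and in checking that features such as race and socioeconomic status do not bake bias into the predictions, a concern the researchers themselves raise below.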

Since then, Mayo Clinic has published more than thirty papers about using artificial intelligence in healthcare. Models were created to identify Covid-19 patients who needed hospitalization, which helped with capacity planning. Another model estimated a patient's risk of needing intensive care and prioritized those at higher risk for aggressive treatment. When patients were sent home, a third model helped flag which of them might need reevaluation.

At Massachusetts General Hospital in Boston, doctors successfully employed data analysis to help anticipate patients' needs. They created a model to determine risk levels at three-, five-, and seven-day intervals after admission. Researchers identified which patients were likely to return as well as those who could be discharged to make room for new ones.

While hospitals have been using artificial intelligence to help improve services for years, the emergence of Covid-19 and the surge of cases set off a spike in such work around the world. Theories became models and models became policies rather quickly. Researchers have successfully identified the patients most at risk, predicted patient outcomes, and allocated hospital resources using data. These new tools do not come without challenges: as new models are built, researchers always need to ask whether the datasets used are sufficient and free from bias.

Artificial intelligence has been applied to a wide range of Covid-related problems, from guiding treatment decisions to allocating hospital resources. The Mount Sinai Hospital in New York, for example, has developed an artificial intelligence program to help classify Covid patients' healthcare needs.

Prior to the outbreak, a research team from the University of Virginia Medical Center developed software to help doctors spot potential respiratory failure leading to intubation. When Covid-19 hit, the software was adapted and put to the test. 

Known as CoMET, the software gathers a wide range of data, including vital signs, EKG readings, and laboratory test results, and projects a comet-shaped marker onto a screen to show each patient's risk level. This unique visual alarm stands out amid the din of beeps and buzzes. The software is now available for use under license.

More models and software tools are in the pipeline, many of them in the final stages of development. One example is Cleveland Clinic's software for analyzing chest X-rays of suspected Covid patients. Once approved by the Food and Drug Administration, it will take only seconds to find patterns associated with the virus.

Biomedical engineers and heart specialists at Johns Hopkins University in Baltimore have developed a model that identifies a Covid patient's risk for cardiac arrest or blood clots, giving doctors several hours to prepare. Data from more than 2,000 patients were used to train and test the system, which is now being integrated into hospitals.

As hospitals embrace artificial intelligence as an ally in developing treatment protocols, some researchers raise concerns that the rush for approval can lead to the use of invalid statistical tools. The line between tools that require FDA approval and those that do not is fuzzy: researchers do not need clearance for models whose output still depends on a healthcare worker's interpretation. Improving the software tools' reliability and accuracy without introducing racial and socioeconomic bias is another major challenge.
