The University of Nottingham published a study in PLOS ONE in which researchers used the health data (age, lifestyle factors and so on) of British people aged 40 to 69 to build a machine-learning model that predicts the risk of premature death. An earlier study tackled a related question: researchers at Google applied machine learning to mine large numbers of electronic health records and estimate the likelihood of a patient dying during hospitalization. The two studies share a common purpose: to assess whether such information can help clinicians decide which patients may benefit the most from interventions.
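To make the approach concrete, below is a minimal, hypothetical sketch of how a mortality-risk model of this kind can be trained on tabular health data. The synthetic features, outcome labels and the choice of a random-forest classifier are illustrative assumptions only, not the actual pipelines used in the Nottingham or Google studies.

```python
# Minimal illustrative sketch of a mortality-risk classifier on tabular
# health data (age, lifestyle factors, etc.). All data here are synthetic;
# the published studies used far richer cohort and hospital records.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Synthetic stand-in for a cohort aged 40-69.
df = pd.DataFrame({
    "age": rng.integers(40, 70, n),
    "smoker": rng.integers(0, 2, n),
    "bmi": rng.normal(27, 4, n),
    "exercise_hours_per_week": rng.exponential(3, n),
})

# Synthetic outcome: premature death within a follow-up window,
# loosely driven by age, smoking and exercise plus random noise.
risk = (0.02 * (df["age"] - 40)
        + 0.5 * df["smoker"]
        - 0.05 * df["exercise_hours_per_week"])
df["died_prematurely"] = (risk + rng.normal(0, 0.5, n) > 0.6).astype(int)

X = df.drop(columns="died_prematurely")
y = df["died_prematurely"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Discrimination on held-out data (AUC), a typical headline metric
# in risk-prediction papers.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```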

At the same time, the US FDA is closely monitoring the application of AI in the medical field. Earlier this month, it issued a white paper proposing a new approval framework for medical devices based on AI and machine learning. As discussion of artificial intelligence in medicine has grown, experts have called for specific oversight of AI's role in predicting the risk of death.

There are several reasons for this. First, researchers and scientists have expressed concern about possible bias in AI. Bias in machine learning stems from the training data and other inputs embedded in the algorithm, which can carry human prejudice. Researchers have begun to pay attention to this issue, but it is far from resolved.
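One simple way this kind of bias can be surfaced is sketched below: train a model on synthetic data in which outcomes are skewed by a demographic attribute, then compare the false-negative rate across groups. The data, group labels and logistic-regression model are assumptions made purely for illustration.

```python
# Illustrative bias audit: compare a model's false-negative rate across
# demographic groups. Data, group labels and model are synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 4000
group = rng.integers(0, 2, n)            # hypothetical demographic indicator
feature = rng.normal(0, 1, n)
# Outcome labels are skewed by group membership.
outcome = (feature + 0.3 * group + rng.normal(0, 1, n) > 0.5).astype(int)

# Deliberately omit `group` from the features: bias can still enter
# through correlated inputs and skewed outcome labels.
X = feature.reshape(-1, 1)
model = LogisticRegression().fit(X, outcome)
pred = model.predict(X)

audit = pd.DataFrame({"group": group, "y": outcome, "pred": pred})
for g, sub in audit.groupby("group"):
    positives = sub[sub["y"] == 1]
    fnr = (positives["pred"] == 0).mean()
    print(f"group {g}: false-negative rate = {fnr:.2f}")
```

Even though the group variable is never given to the model, the two groups end up with different false-negative rates, which is the kind of disparity a bias audit is meant to flag.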

Then there is the problem of unconscious bias in medicine itself, an issue that has been studied extensively in both academic research and patient care. For example, clinicians assess and treat pain differently depending on a patient's ethnicity, although the effect may vary with the doctor's gender and level of awareness; one study found that this bias was less frequent among Black and female doctors.

A 2017 study of these biases showed that although doctors may show a preference for white patients, this did not necessarily affect their clinical decision-making. Many other studies, however, tell a different story. For example, Black patients who live in communities with greater racial prejudice against them may have worse prognoses for some diseases. Gender-based bias cannot be ignored either: women with heart attacks (acute coronary syndrome), for instance, tend to receive less aggressive treatment.

These prejudices are even more of a concern when it comes to death and end-of-life care. A study called SUPPORT (Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments) examined data from more than 9,000 patients in five hospitals and found that Black patients received fewer interventions at the end of their lives; even though Black patients expressed a desire to discuss cardiopulmonary resuscitation (CPR) with their doctors, the statistics show those conversations were significantly less likely to take place. Other studies have reached similar conclusions, finding that Black patients have less knowledge of end-of-life care.

These trends are not consistent across studies, however. An analysis of survey data from 2007 found no significant ethnic differences in end-of-life care, and many other studies have found that some ethnic groups tend to prefer more aggressive care at the end of life, which may itself be a response to a systematically biased medical system. Although preferences may vary from one racial group to another, bias can still creep in when doctors unconsciously fail to present all the options or make assumptions about what a particular patient would prefer.

In some cases, though, careful use of AI could help with end-of-life assessment and reduce the impact of bias. Just last year, researchers in China used AI to assess brain death; notably, the algorithm was able to detect brain activity that doctors had missed using standard techniques. Even so, AI should be carefully validated before it is used for anything beyond research.

Artificial intelligence is a promising tool and has shown great potential in diagnosis. But using the technology to predict, or even determine, the timing and risk of a person's death is a uniquely challenging area, one where the same prejudices found in doctor-patient interactions can resurface. Before it can be applied to actual clinical decisions, scientists must first make sure that the technology does not inherit our human biases. And there is, for sure, a long way to go.

BOC Sciences always keeps a close watch on the latest trends in the medical world. AI is increasingly applied in the pharmaceutical industry, and BOC Sciences is working with its partners to actively explore its potential by offering a wide range of research chemicals, such as ADC Cytotoxin, ADC Linker, small molecule inhibitors, Sigma Receptor Inhibitor, Aryl Hydrocarbon Receptors Inhibitor and CCKBR Inhibitor, as well as laboratory services such as Virtual Screening, Targeted Protein Degradation Platform, Nucleotide Synthesis and Fluorescent Probes.

Author's Bio: 

This article was written by scientists from BOC Sciences.