Health care is full of cognitive biases. Some think AI can help.

Subconscious biases are common in medicine because doctors and other health staff are humans who are prone to such errors. Cognitive biases are not necessarily caused by negative intentions — they occur based on a medical professional’s experience and observations. Even so, they can lead to issues in diagnosis and treatment. Some are looking to AI to help eliminate bias in the diagnostic process, but the technology may not be free from bias itself.

What are the main types of bias in medicine?

There are several cognitive biases that can occur in a medical context:

Availability bias: The availability bias is the “tendency to overestimate the likelihood of events when they readily come to mind,” said a 2020 study. For example, if a previous patient had a negative reaction to a treatment, the doctor is less likely to recommend it to the next one, even if the chances of a complication are low.

Confirmation bias: This type of bias “skews our perceptions and interpretations, leading us to embrace information that aligns with our initial beliefs — and causing us to discount all indications to the contrary,” said Forbes. In a medical context, “with scant information to go on, doctors quickly form a hypothesis, using additional questions, diagnostic testing and medical-record information to support their first impression,” essentially confirming what their initial impression of the problem was even if it is not correct.

Affect heuristic: Medical professionals can be “swayed by emotional reactions rather than rational deliberation about risks and benefits,” said an article published in the AMA Journal of Ethics. This can “manifest when physicians label patients as ‘complainers’ or when they experience positive or negative feelings toward a patient, based on prior experiences.”

Framing effect: The framing effect refers to how a person can react differently to information based on how it is presented. "Patients will consent to a chemotherapy regimen that has a 20% chance of cure but decline the same treatment when told it has 80% likelihood of failure," said Forbes.

Self-serving bias: Doctors may have the tendency to prioritize their own interests rather than those of the patient. “Pharmaceutical and medical-device companies aggressively reward physicians who prescribe and recommend their products,” said Forbes. “And yet, physicians swear that no meal or gift will influence their prescribing habits.”

Can bias be reduced?

There are mixed opinions on whether subconscious bias can be reduced. "Simply increasing physicians' familiarity with the many types of cognitive biases — and how to avoid them — may be one of the best strategies to decrease bias-related errors," said the AMA Journal of Ethics article. Teaching medical students about potential biases could also lower levels of bias, and "the practice of reflection reinforces behaviors that reduce bias in complex situations."

However, that may be easier said than done. In high-stress situations, it becomes harder to recognize biases. Even when medical professionals have had extensive knowledge and training on identifying biases, “when their effect on reducing diagnostic errors has been tested, they have proven ineffective, regardless of the specialty and level of training at which they have been applied,” said a 2024 study published in the journal Academic Emergency Medicine.

Can AI help?

Some experts are hopeful about the diagnostic capabilities of AI. “Future generations of generative AI, pretrained with data from people’s electronic health records and fed with information about cognitive biases, will be able to spot these types of errors when they occur,” said Forbes. While there is potential, AI models are trained with data from humans. Because of this, biases can easily make their way into the models.

“Generative AI models display human-like cognitive biases,” said a study published in the New England Journal of Medicine AI, and the “magnitude of bias can be larger than observed in practicing clinicians.” They may be “particularly prone to biases in medicine,” said study co-author Dr. Donald Redelmeier, because it’s an area where “uncertainty and complexity are widespread.” With further advancements, safeguards against cognitive biases could be added.

The other concern is that AI has displayed more than just cognitive bias. AI models, especially those associated with health and medicine, have shown gender and racial bias as well, because the medical data used to train them has largely excluded women and people of color. "When we talk about 'bias in AI,' we must remember that computers learn from us," said Michael Choma, an associate professor adjunct of radiology and biomedical imaging at Yale School of Medicine. "Bias is a human problem."
