As We Flock to AI, Let’s Not Forget the EQ

For the human journeys doctors must navigate with their patients, chatbots can talk the talk, but they can’t walk alongside.

“All I want to do is go home to my family and have a birthday dinner: my wife is making meatballs,” croaked the 75-year-old man ridden with edema in every extracellular space. This was not his first hospitalization for heart failure, although each of his admissions was making it increasingly apparent that this one, or the next, might well be his last.

As his treating team, we “walked with” him. We sat down next to him, heard his wishes, and eased his symptoms so that he could get home. For all the technology we had to offer (which, in heart failure, is considerable), therapeutics and devices were immaterial to this man. He just wanted to go home.

In an era of rapid technological growth, where artificial intelligence (AI) might be just as good as, or better than, a cardiologist at picking up heart disease on a 12-lead electrocardiogram, and where smartwatches can detect if the heart is out of sync, what does it mean to be a doctor? Furthermore, as these technologies get folded into our doctor-patient interactions, what does it mean to be human?

Physicians are no strangers to weighing the potential risks and rewards of new technologies. We adapt to them and implement them where they have clear potential to benefit our patients.

In the field of heart failure, we use left ventricular assist devices to keep blood pumping when the heart cannot. We can pace a human heart when its own electrical circuitry becomes faulty or stops. In those with lethal arrhythmias, defibrillators offer another chance at life. But what about technology that starts to take the very words from our mouths?

Large language models (LLMs) like the chatbot “ChatGPT” have become an almost overnight success in their use of neural networks to create human-like language from large inputs of text. Very quickly, users figured out that ChatGPT has numerous applications for improving doctors’ efficiency, from writing discharge summaries to transcribing clinic notes—administrative chores that can take up 20% of a doctor’s day. In academic medicine, ChatGPT can produce a comprehensive paper, prompting medical publishers to hastily issue rules around AI authorship.

For now, at least, the more technical the topic, the greater the gaps. An equal-opportunity approach to information gathering means LLMs rely on everything from medical texts to Wikipedia, and the technology cannot always credit its sources, which means that at this stage it is not a reliable reference for medical decision-making.

In many ways, a career in medicine mirrors some of the behavior of these chatbots: as physicians, we are committed to lifelong learning, constantly rethinking what we have learnt in the presence of new data and new technology, and basing our treatments on what we have learnt from the medical literature.

More than 70 years ago, the famed codebreaker and computer scientist Alan Turing was the first to describe a test, grounded in nuanced communication, to differentiate human from artificial intelligence. In Turing’s “Imitation Game,” if the machine was indistinguishable from a human in a two-way conversation, it was deemed to have passed the test.

A chatbot that couldn’t hold a conversation involving active listening, nuance, and humor could easily be caught out. ChatGPT and DALL-E have swiftly overcome this by drawing on a range of language sources to be both coherent and “human-like.” GPT-4, due for release in 2023, is rumored to be even more sophisticated.

For physicians, this potential of LLM technology is as exciting as it is uncertain, forcing us to confront a very existential question: will artificial intelligence demonstrate enough emotional intelligence, or “EQ,” to fill our shoes, to walk with another human in their most vulnerable times?

I saw some parallels in a New York Times article about using ChatGPT to cook a Thanksgiving feast. Much like medicine requires applying a formulaic differential diagnosis list, producing a memorable family meal involves following a recipe. Yet, to make a truly memorable dish we must taste and adjust flavors, listen to feedback, and respect the cultural context in which the meal is served. There is always the possibility of an intended or unintended inventive twist.

In the end, the NYT food columnists breathed a sigh of relief: their jobs were safe. The LLM could not write a personalized recipe for the perfect Thanksgiving turkey and pumpkin pie, or even choose an accurate accompanying photo when given a specific and personalized prompt. 

Much like a recipe requires finesse, doctor-patient interactions (and indeed all human relationships) require just the right mix of nuance: watching and listening, cultural and contextual respect, and above all, empathy. They require emotional intelligence just as much as they do data input. Unlike a robot relying on machine learning, we must also rely on our patients—who are seldom as predictable as the textbooks suggest.

What do our patients want? What will provide benefit? What might cause harm? How can we help them navigate the many decisions they face, such as choosing to enroll in a clinical trial or deciding to stop dialysis? All these day-to-day interactions are what inspire creative approaches to applying evidence-based therapies or prompt us to ask new research questions in pursuit of scientific solutions to better their care.

The beloved author-neurologist Oliver Sacks is famously quoted as saying, “In examining disease, we gain wisdom about anatomy and physiology and biology. In examining the person with disease, we gain wisdom about life.” This wisdom is clearly a different beast than intelligence, whether artificial or human. It encompasses accountability, trustworthiness, empathy, and communication, honed over many walks across many patient journeys, all of which shape the ability to see the specific patient in front of you.

For now, ChatGPT cannot sit down with my elderly patient to discuss the options for getting him home for his birthday dinner. As long as we still have the privilege of “walking with” the patients we care for, our jobs, too, are safe.

Off Script is a first-person blog written by leading voices in the field of cardiology. It does not reflect the editorial position of TCTMD.
