Q&A: Lior Jankelson on AI Tools for Suspected ACS and More

The electrophysiologist discusses research being done at NYU that uses AI to enhance familiar tools like ECG.

Artificial intelligence (AI) has been working its way into more and more areas of cardiovascular medicine in recent years, mirroring trends in society more broadly. TCTMD spoke with Lior Jankelson, MD, PhD (NYU Langone Health, New York, NY), an electrophysiologist who leads the AI/machine learning (ML) in cardiology group at his center, about his work in developing new AI tools.

What types of data sources can be incorporated into AI models?

There are a ton of opportunities in cardiovascular health for AI. As an electrophysiologist, I’m obviously biased toward the EKG, but I think we have a plethora of highly informative data sources, and our overarching goal is to make the best use of the tools that we already have. My philosophy is that instead of finding new tests, new blood work, new molecules, or new forms of energy, which is clearly important work, we can do so much better by using what we already have. And that’s the premise of my work.

Generally speaking, every patient over the age of 50 going through the emergency department (ED) tends to get an EKG and an X-ray, for example. There’s a lot of information in there that is not necessarily linked to the primary cause of the ED visit. Let’s say someone comes in with a cough or a fever, so they get an X-ray for pneumonia or flu. There is a ton of information in that X-ray and the other data that’s gathered around that visit that we can use for completely different purposes for cardiovascular health. And that’s what we’re trying to do. We’re trying to inform our decisions and inform our knowledge by harnessing what’s already there. I think these are the tools that are going to be important in the very near future.

Everybody here is very excited about this era and the opportunities it’s going to bring.

What AI tools have you been developing at NYU?

We are developing an array of AI tools. These range from EKG-based algorithms, which primarily add an AI layer on top of the standard 12-lead EKG, to models that incorporate multiple data sources, including electronic health record (EHR) data, imaging data, and EKG data, to provide predictions and classifications for problems that are more complex or multifaceted and very difficult to model.
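To make the idea of an "AI layer" over the 12-lead EKG concrete, here is a toy sketch of how such a model processes a signal. This is my own illustration, not the NYU group's actual architecture (which the interview does not detail): a single 1-D convolution across all 12 leads, global average pooling, and a logistic output, in plain NumPy with random weights standing in for trained ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid-mode 1-D convolution across all input channels.
    x: (channels, samples); kernels: (n_filters, channels, width)."""
    n_f, _, w = kernels.shape
    out_len = x.shape[1] - w + 1
    out = np.empty((n_f, out_len))
    for f in range(n_f):
        for t in range(out_len):
            out[f, t] = np.sum(kernels[f] * x[:, t:t + w])
    return np.maximum(out, 0.0)  # ReLU activation

def ekg_risk_score(ecg, kernels, weights, bias):
    """Toy forward pass: conv -> global average pool -> logistic output."""
    features = conv1d(ecg, kernels).mean(axis=1)  # one value per filter
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

# Simulated 12-lead strip: 12 leads x 500 samples (1 s at 500 Hz)
ecg = rng.standard_normal((12, 500))
kernels = rng.standard_normal((4, 12, 16)) * 0.1  # 4 filters, width 16
weights = rng.standard_normal(4) * 0.1
score = ekg_risk_score(ecg, kernels, weights, bias=0.0)
print(float(score))  # a probability-like risk score between 0 and 1
```

Real clinical models are far deeper and trained on large labeled datasets, but the data flow is the same: the raw voltage traces go in, and a risk score or classification comes out.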

Then we have a lot of activity working with language models. For example, we’ve developed a tool that simplifies cardiovascular imaging reports so our patients can receive a communication in lay language instead of a very complex, jargon-filled echo report. They’ll still get the full report, but they’ll also get a version they can understand, which can be much more helpful.

An algorithm for detecting coronary occlusions in patients with suspected ACS is one example we have developed in the EKG space, and that’s about to be deployed in a study setting. We also have models that predict atrial fibrillation, heart failure, and low ejection fraction. We can predict adverse responses to medications in the form of QT prolongation. We have a tool for predicting hypertrophic cardiomyopathy. These are all EKG-related tools.

To be clear, we are not clinically using any of these tools outside of research, but these are fully functional research tools with very promising results. For example, we have an ongoing prospective randomized trial for the prediction of atrial fibrillation from EKGs showing normal sinus rhythm. The providers are getting alerts, and we recommend action upon these alerts. We are trying to understand if this is really beneficial when the rubber hits the road. And we have other AI tools that are in various stages of clinical trials and clinical development.

How close are you to deploying AI tools in routine practice?

Obviously, we have to comply with the regulatory pathways that exist now, so to really deploy an AI tool in a clinical setting, we would have to submit the data to the FDA and get approval for the algorithm.

But if you’re asking me about how close we are in general? I think we’re very close. There’s a very large body of evidence that suggests, particularly for the EKG, that the application of AI opens a completely new dimension of insight humans cannot compete with.

If you take chest pain, for instance, you could argue that there are changes in the EKG that expert physicians are supposed to identify. And then the question is an automation question: can the computer be more reliable, make fewer mistakes, and be available all the time? These are obviously very important aspects. When you’re looking at predicting an arrhythmia from a normal rhythm, there are basically no known markers on a normal EKG that suggest high risk of atrial fibrillation. There are very nonspecific signals that some patients can have, but it’s unlikely for the general practitioner to pick up on these signs on a normal EKG when a patient is presenting in sinus rhythm. There are many examples like that where AI can really look at things that humans just cannot see.

What are the main issues or concerns that need to be addressed when thinking about greater integration of AI in clinical practice?

It starts with the very basics, which is efficacy. For any test, there should be clear information and data regarding its sensitivity, specificity, accuracy, positive predictive value, and negative predictive value. We have to understand the test itself, and these have to be thoroughly studied.
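For readers less familiar with these test characteristics, all of them follow from a 2x2 confusion matrix comparing the AI tool's calls against a gold-standard diagnosis. A minimal sketch (the counts below are hypothetical, purely for illustration):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard test characteristics from a 2x2 confusion matrix.

    tp/fp/fn/tn: counts of true/false positives and negatives
    measured against a gold-standard diagnosis.
    """
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical validation counts for an EKG classifier:
m = diagnostic_metrics(tp=80, fp=10, fn=20, tn=90)
print({k: round(v, 3) for k, v in m.items()})
```

Note that sensitivity and specificity are properties of the test itself, while PPV and NPV also depend on how common the disease is in the population being screened, which is one reason these numbers must be studied in the setting where the tool will actually be deployed.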

Then there’s a unique aspect of AI, which is the infamous black box nature. Concerns about that are probably overinflated to some degree, because one can make a very strong argument that many of the drugs used on a daily basis have mechanisms of action that are very poorly understood, and that doesn’t prohibit their use. It’s just a given that we all accept.

In a similar way of thinking, I don’t think that a lack of understanding of the inner layers of a neural network precludes it from being a useful clinical tool, but that opacity is real. Language models are even more complex to understand. We don’t really understand how they make their predictions. Obviously, the general concepts and the algorithms are understood, but what’s really happening under the hood, in the very deep layers and very complex dimensions, is poorly understood. That’s another popular concern.

I’m also very interested in the practical implications of AI on daily care and the pipelines of health delivery. Because these tools are so transformative and so powerful, they could create a lot of impact. The positive is pretty obvious, but the negative also has to be thought about.

That includes things like alignment. Now that we have all these AI predictions, who deals with them? Who communicates them to the patient? Who manages the patient flow that is generated by AI? Is it something that we should add to the queue of the current practitioner workflow? That’s probably not a great idea, because we already know that providers across the healthcare system are fairly overworked, so further dividing their attention with these new tools is very problematic. We have to figure out a way to do that.

Then, are the patients ready? Are they agreeable to interact with AI or to accept the impact of AI on their healthcare? Do we have enough understanding of how to communicate that, how to manage that relationship? Now there’s something between us that’s new.

These are very significant challenges, and this is another thing we’re putting to the test. We are designing trials to look at a complete AI-first workflow in which there is a designated, predefined solution for managing AI: an AI clinic that interacts with patients, where patients are seen by professionals intentionally trained in a setting built specifically around AI.

This is where things are going. There’s probably going to be a transition phase where we see hybrid solutions, with human experts managing all of those relationships, and then I think we’ll gradually move more and more toward automation. But in the short and intermediate term, these AI-first, human-centered, distinct workflows need to be created and tested.
