AHA Weighs Clinical Value of AI in Cardiac Imaging

Transparency remains key to applying AI in clinical practice, as well as to tracking its performance, a new scientific statement asserts.


New artificial intelligence (AI) tools seem to be bursting onto the cardiology imaging scene almost daily, but a variety of stakeholders will need to agree on their practical use and integrate them into workflows for the tools to have clinical value, according to a new American Heart Association (AHA) scientific statement.

“We really wanted to take a larger, bigger-picture view and think about how does AI add value really across the imaging chain,” writing committee chair Kate Hanneman, MD, MPH (University of Toronto, Canada), told TCTMD. “Even before the patient is coming into our imaging department, [we should be] thinking about what is the best imaging test for a given patient? How do we acquire the images? How can AI tools help us there?”

The imaging chain extends to creating and issuing the report and communicating those findings to a referring clinician, said Hanneman.

In the statement, published online last week in Circulation, Hanneman and colleagues first define clinical value: AI-based tools need to help clinicians in daily practice, as well as address societal and financial concerns.

Considering all these aspects is important because sometimes they "might be in conflict," Hanneman said.

The statement outlines the variety of AI-based tools that can be applied in cardiac imaging, from natural language processing to image processing and generation, as well as decision analysis and direct prediction of disease and outcomes. As of April 2023, the FDA had cleared 46 AI applications for cardiac imaging, the statement notes.

The technology also changes rapidly, making it difficult to stay abreast of developments. One example Hanneman gave was generative AI, which creates new images from a lower-resolution image that may have been acquired quickly. This technology received only a short mention in the paper because it was a "fairly new" concept when the article was being drafted.

The statement highlights some of the pitfalls and ethical considerations associated with using AI, especially as these tools move from the realm of research into clinical practice. "Despite the absolutely enormous AI development or validation of models, relatively few have actually been implemented in clinical practice," Hanneman said. "How do we tackle those challenges in terms of actually leveraging the potential benefits of the model?"

She would like to see clinical guidelines developed to aid transparent deployment in clinical practice, "both to patients who may have these tools applied in their images, [but] also to referring physicians." Consistent communication needs to be in place for informing a patient that an AI tool was used in an analysis, for example. There also needs to be transparency in monitoring how AI tools perform over time.

“That's an important conversation not just in imaging, but also in medicine more broadly,” Hanneman concluded.

Disclosures
  • Hanneman reports no relevant conflicts of interest.
