AI Model Saves Time, Increases Accuracy of Echo Measurements

EchoNet-Measurements shows promise against sonographer measurements and in an external validation set, but it still needs prospective study.

A novel, open-source, deep learning model can accurately quantify 18 anatomic and Doppler measurements in echocardiography, showing promise that it can one day automate this process and save time, according to new retrospective data.

The EchoNet-Measurements algorithm “directly annotates the areas that cardiologists would’ve annotated themselves instead of going end-to-end to a conclusion,” senior author David Ouyang, MD (Cedars-Sinai Medical Center, Los Angeles, CA), told TCTMD. “This is more interpretable and potentially more robust.”

Cardiologists take measurements and characterize the severity of disease, and previous AI models have taken a similar tack, he explained. “[They] just tell you what the model is thinking without a sense of why they think it’s mild, moderate, or severe. Whereas in this model, our approach will recreate the types of measurements that cardiologists use . . . that actually then get distilled into whether it’s mild, moderate, or severe.”

The same team has already shown in a randomized clinical trial that similar technology can estimate left ventricular function on echocardiography better than sonographers. Another echo model more accurately measured cardiac function and identified subtle changes in LV wall geometry that could aid the diagnosis of often-missed conditions like hypertrophic cardiomyopathy and cardiac amyloidosis.

The EchoNet-Measurements Tool

For the new study, published online in JACC with first author Yuki Sahashi, MD, PhD (Cedars-Sinai Medical Center), the researchers used 877,983 echocardiographic measurements from 155,215 studies from their institution to develop EchoNet-Measurements.

They found high levels of accuracy with the model across all 9 B-mode and 9 Doppler measurements, both against sonographer measurements (mean coverage probabilities of 0.796 and 0.839) and against an external validation data set from Stanford Healthcare (mean relative differences of 0.120 and 0.096). On end-to-end evaluation of 2,103 temporally distinct studies at their institution, the researchers found similarly reasonable performance with EchoNet-Measurements (mean coverage probability of 0.803; mean relative difference of 0.108).
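For readers unfamiliar with these agreement metrics, the following is a minimal Python sketch of how a coverage probability and a mean relative difference could be computed when comparing model output against sonographer measurements. The 10% tolerance band, the function names, and the sample values are illustrative assumptions for this sketch; the paper's exact definitions may differ.

```python
import numpy as np

def coverage_probability(model, human, tolerance=0.10):
    """Fraction of model measurements falling within a relative tolerance
    band around the human (sonographer) measurement. The 10% band is an
    illustrative assumption, not the study's published definition."""
    model, human = np.asarray(model, float), np.asarray(human, float)
    return np.mean(np.abs(model - human) <= tolerance * np.abs(human))

def mean_relative_difference(model, human):
    """Mean absolute model-human difference, normalized by the human value."""
    model, human = np.asarray(model, float), np.asarray(human, float)
    return np.mean(np.abs(model - human) / np.abs(human))

# Hypothetical example: LV internal diameter in diastole (cm)
human_lvidd = [4.6, 5.1, 4.2, 5.8, 4.9]
model_lvidd = [4.7, 5.0, 4.4, 5.6, 5.0]

print(coverage_probability(model_lvidd, human_lvidd))    # 1.0 (all within 10%)
print(mean_relative_difference(model_lvidd, human_lvidd))  # ~0.029
```

Under these toy definitions, a higher coverage probability and a lower mean relative difference both indicate closer agreement with the human reference.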

There were no differences in the algorithm’s performance based on patient characteristics, including age, sex, atrial fibrillation, and obesity status, or across machine vendors.

Ouyang said the results were expected, given their past work with the technology. “If we follow this approach, these measurements can potentially be more precise than human measurements because there tends to be a difference between how cardiologist A and cardiologist B do the same measurements,” he explained, adding that he also expects the model to save 10 to 20 minutes per study.

While EchoNet-Measurements remains a research tool at this time, Ouyang and colleagues are in the planning stages of a prospective trial to compare how it works against human sonographers.

Other similar commercially available products exist, including the Us2.ai system, but the difference with EchoNet-Measurements is that it is “free and open source,” Ouyang stressed. “We put the code online and it’s trained on a larger amount of clinical data.”

In a world where many AI tools are being commercialized, he feels that their model will be “a good benchmark for clinicians to evaluate how well different AI algorithms are doing in the space. We see that inevitably there will be more and better commercial products in the future, but this is hopefully [going] to push the field to develop algorithms with more training.”

Disclosures
  • This work was funded by the National Institutes of Health and the National Heart, Lung, and Blood Institute.
  • Ouyang reports receiving support from Alexion and receiving consulting or honoraria for lectures from EchoIQ, Ultromics, Pfizer, InVision, the Korean Society of Echo, and the Japanese Society of Echo.
  • Sahashi reports receiving honoraria for consulting from m3.com Inc and InVision.
