Differences Across AUC Documents Sow Confusion for CAD Imaging

Is “appropriateness” in the eye of the beholder? The existence of multiple, heterogeneous recommendations suggests it might.

Most US cardiologists are well aware of the 2014 act of Congress that tied Centers for Medicare & Medicaid Services (CMS) reimbursement to appropriate use criteria (AUC) guiding imaging tests for coronary artery disease. Few, however, may be aware that multiple AUC documents exist and that their recommendations are by no means uniform.

Most cardiologists are familiar with the AUC pioneered by the American College of Cardiology (ACC) back in 2005, which now extend to multiple areas of cardiology. The AUC that apply to coronary imaging in chronic coronary syndromes were updated earlier this summer in collaboration with the American Heart Association (AHA) and nine other cardiovascular professional groups.

But as David E. Winchester, MD (University of Florida, Gainesville), and colleagues discovered when they went looking for AUC publications, seven different qualifying organizations have issued documents rating the appropriateness of a range of medical imaging tests across different patient/coronary disease scenarios.

“I don't think people realize that there's multiple sets [of AUC],” Winchester told TCTMD. He himself has been working in the AUC realm for a decade and is the lead author of the 2023 ACC/AHA update. He said he first became aware of other AUC efforts after connecting with peers in radiology, who have AUC that predate those in cardiology. In 2016, Winchester and colleagues published a comparison of criteria from the ACC and the American College of Radiology (ACR) for gauging the appropriateness of nuclear myocardial perfusion imaging, finding notable discrepancies between the two.

Seven Different Takes

This latest paper, published yesterday in the Annals of Internal Medicine, compares AUC recommendations across seven different documents, including the ACC’s and the ACR’s as well as those put out by the Society of Nuclear Medicine & Molecular Imaging (SNMMI), Intermountain Healthcare, Johns Hopkins University School of Medicine, RAYUS Quality Institute, and the Synergetic Professional Guidelines Institute (SPGI). All may be used to guide test decisions, with tests deemed appropriate being reimbursable under CMS rules.

All of these entities, he stressed, are designated provider-led entities (PLEs) that met CMS qualifications outlining strict rules for the writing groups. Those specify that AUC be written by multidisciplinary groups that conduct systematic reviews of the literature using formal methods, disclose any conflicts of interest, and be published on an open-access website.

In fact, Winchester et al found 17 PLEs in their review, but only seven of these had broached testing for coronary artery disease in stable patients. Strikingly, however, while some AUC documents listed all authors and organizational committee members along with their clinical specialties, others did not. Conflict of interest (COI) reporting also varied substantially. While the ACC’s AUC spelled out individual COI for each member of the writing committee, Intermountain Healthcare, RAYUS, the ACR, and the SPGI opted to post policy statements on their own websites without providing individualized information for each author. For Johns Hopkins Medicine, conflicts were handled with a simple statement that COI information was “available on request.”

Winchester was adamant that he didn’t want to ascribe financial motives to the different AUC approaches of different PLEs, though he said bias could creep in if, for example, higher weight were given to a test from which an individual or practice draws particular financial benefit.

“I certainly don't want to suggest that [this happens] for a financial incentive, but it could be done for a financial incentive,” he explained. “Even though there are safeguards in place—CMS set out a bunch of different criteria for what you have to do in order to be considered a legitimate entity that can develop these—there's flexibility within those to the extent that the system could definitely still be manipulated either consciously or subconsciously to say, ‘Well, you know, doing this test for this indication we do a really good job of it here. So even though the evidence is only kind of soft on this point, we definitely think it should be considered appropriate at our facility.’”

Everyone is reviewing the same literature to derive these criteria, he added, but there are no randomized controlled trial data covering the 60-plus scenarios tackled in these documents. “You can put together 10 committees and you're going to get 10 different groups of people with potentially 10 different opinions about something, even if they are balanced,” Winchester pointed out.

Different Recommendations

Disparities were also seen in the types of recommendations provided for different clinical scenarios. For example, the ACC, ACR, SNMMI, Hopkins, and Intermountain AUCs all stated that myocardial perfusion imaging was an acceptable test in a patient at intermediate risk for acute coronary syndrome, whereas RAYUS and SPGI gave no rating. More heterogeneity was seen in the use of MPI as an initial test in a symptomatic patient with no known CAD and a low pretest probability of having it. Here, the ACC, SNMMI, Hopkins, RAYUS, and SPGI all stipulated that such tests were not appropriate, whereas the ACR and Intermountain said they were. Discrepancies were also seen for this particular clinical scenario and the use of coronary CT angiography (CCTA), with the ACC, ACR, Hopkins, and RAYUS giving CCTA a thumbs up, Intermountain giving it a thumbs down, and the SNMMI and SPGI providing no recommendations.

To TCTMD, Winchester said that in addition to giving disparate recommendations in different clinical scenarios, an even bigger issue is the very different way in which the recommendations are set out. “Some are built around symptoms, whereas others are built around conditions; some are built up as flowcharts, and some aren't; and some got these things all divided up into nine or 10 different documents, whereas others like the ACC’s try to put them all together into one,” he said. “What really struck me more than anything is just how they all look so different and then, as a result of that, how they're going to be used differently.”

He has heard anecdotally about colleagues coming into work to find that their electronic medical records system has been updated and, with it, a new set of AUC implemented, leaving cardiologists in the bizarre predicament of having a certain test deemed appropriate one day and inappropriate the next.

“In my opinion, and the opinion of some other people that I work with that are policy-minded, this program just really is not the best way to get people to try and order more appropriate tests,” he said. “There are other strategies that are going to be more effective with less implementation headache.”

At the big-picture level, he said, “I think this paper helps to inform discussions with policymakers about how Medicare and others should proceed in deciding about reimbursement for advanced imaging.”

Individual clinicians, however, might want to pay more attention to how their practice or institution decides which AUC to adopt and, at a minimum, be aware that different AUC exist, each offering a slightly different version of what is and isn’t “appropriate” for coronary artery disease testing.

Contacted for comment, David J. Maron, MD (Stanford University School of Medicine, CA), a co-author of the ACC’s AUC document as well as principal investigator for the ISCHEMIA trial, said he was not aware of the existence of other AUC documents guiding the testing of patients with chronic coronary disease. He agreed with Winchester that the heterogeneity could sow confusion. 

“It calls into question which AUC a clinician should follow,” Maron said. “The same can be said for practice guidelines, which do not all align.”

Shelley Wood was the Editor-in-Chief of TCTMD and the Editorial Director at the Cardiovascular Research Foundation (CRF) from October 2015…

Disclosures
  • Winchester reports no relevant conflicts of interest.