Appropriate Use Criteria for Imaging in Chest Pain Come Under Fire
Two cardiovascular professional societies recently withdrew their support from a document outlining which imaging tests should and shouldn’t be used in patients with chest pain in the emergency department. Now, their presidents have come forward to say the document should be revised—or ignored.
In an editorial released online earlier this week, the presidents of both the Society for Cardiovascular Angiography and Interventions and the American Society of Echocardiography write that the 2015 Appropriate Utilization of Cardiovascular Imaging in Emergency Department Patients With Chest Pain document, published in January 2016, “could adversely impact patients.”
According to James C. Blankenship, MD (Geisinger Medical Center, Danville, PA), and Susan E. Wiegers, MD (Temple University School of Medicine, Philadelphia, PA), writing on behalf of their respective societies, the Journal of the American College of Cardiology declined to publish a letter they wrote outlining their concerns with the appropriate use criteria (AUC). Instead, they say, their letter was referred to the American College of Cardiology (ACC) writing committee to be considered in the next version of the document. Dissatisfied with this, “SCAI and ASE have taken the unusual step of publishing this joint statement detailing our concerns,” they say.
The editorial was published online March 1, 2016, in Catheterization and Cardiovascular Interventions and the Journal of the American Society of Echocardiography.
Too Broad, Too Vague
That SCAI and ASE were displeased with how the chest pain imaging AUC came out is not news. On the day the AUC were published in January, Blankenship posted a president’s message on the SCAI website, explaining why SCAI had decided not to endorse the document, noting that the ASE “had similar concerns.” Of note, while the document was jointly convened by the ACC and the American College of Radiology (ACR), an additional 12 medical societies, including the American Heart Association and a wide range of imaging groups, did endorse the document.
Blankenship and Wiegers’ editorial, however, outlines a number of their concerns, chief among them the fact that, in their view, many common clinical scenarios are not included in the AUC.
Worse, as Blankenship explained to TCTMD, in some cases the scenarios are too broad and “too vague,” such that a recommendation for or against a specific test in a group of patients might be appropriate for most but not all patients.
For example, occluded circumflex arteries in patients who come to the emergency department with sustained chest discomfort “are famous for not causing EKG changes.” Yet in these cases, according to the AUC, heading to cardiac catheterization is “maybe” or “rarely” appropriate, depending on how the troponin test comes back. “And by the time the enzymes come back, the patient has had their infarct—that’s just one example of how that could play out,” he says.
Another key complaint voiced in their editorial is the composition of the ratings panel. AUC are typically written by one group, with ratings on each scenario outsourced to a second, independent group composed of a wider range of experts. A third group, drawn from members of the endorsing societies, serves as the “review” panel.
On these AUC, report Blankenship and Wiegers, the writing group included neither an ASE nor a SCAI representative “nor, to our knowledge, any physician who routinely performs invasive angiography.” Moreover, the rating panel included only one representative from ASE and no representatives from SCAI.
“Our societies are concerned that the clinical scenarios and the ratings assigned to them fail to integrate the value of both invasive and noninvasive imaging, may not adequately represent real-life patients, and do not represent current standards of practice,” they write.
What’s Done Is Done
Speaking with TCTMD, AUC writing group co-chair James E. Udelson, MD (Tufts Medical Center, Boston, MA), the ACC’s representative on the document, noted that there were two interventional cardiologists on the review panel, including one from SCAI. Additionally, “many of the societies had members on the ratings panel,” he said. But Udelson did not confirm or deny whether SCAI and ASE were represented, saying only that the composition of the ratings panel “was the right mix.”
An ACC spokesperson told TCTMD that the ratings panel had one representative from ASE but that SCAI declined an invitation to appoint a panel member. The ACC therefore appointed Lloyd Klein, MD (Advocate Heart Institute, Chicago, IL), “to ensure someone with interventional experience would be on the panel.”
“By design, rating people are a very broad group who may not be experts in invasive cardiology and may not be experts in echocardiography, but they’re looking at the literature, they are looking at the scenarios. [While] their ultimate rating may not be what an echo person might do, or what I might do as a nuclear physician, . . . once the rating is done, it’s done,” Udelson said. “If one society says, ‘oh we don’t like that—that came out to be a May Be Appropriate instead of an Appropriate,’ we can’t go back to the rating panel. It’s done. That’s just the process.”
He wouldn’t comment on whether he felt the makeup of the ratings panel was the right mix to make these kinds of ratings, saying only that it was “consistent” with earlier AUC documents.
Lack of Consensus
Also at issue is the approach taken in these AUC to scenarios where the rating group could not reach consensus. Early AUC documents used the ratings of Appropriate, Uncertain, and Inappropriate. These have evolved over the years to become Appropriate, May Be Appropriate, and Rarely Appropriate in order to preserve what physicians have long argued is an essential aspect of doctoring, namely, the ability to tailor tests and therapies to individual patients. In the chest pain AUC at the heart of this week’s controversy, however, the groups tasked with putting together the document introduced another nuance: the “M-star.” A “May Be Appropriate” rating with an asterisk, explained Udelson, denotes a category where the rating group could not reach consensus.
“The star means there wasn’t a certain level of agreement about [a given rating],” he said. “In our thinking we were just being transparent about the work of the ratings panel, but this is the first time that was used and people had issues about it. Almost every society brought that up as a question, we explained it to everyone, and most of them said, okay, got it.”
That didn’t fly with SCAI and ASE. In their editorial, Blankenship and Wiegers point out that the “well understood” rating of “M” was used just five times in this AUC, whereas “M*” was used a full 23 times. “We have suggested to ACC that it is not advisable to publish a document with so much evidence of lack of consensus,” they write.
What’s Best for Patients
Asked by TCTMD what they would like their member physicians to do with these AUC, Wiegers pointed out that the American Society of Echocardiography released its own guidelines within weeks of the AUC document and that in a number of specific scenarios the two contradict each other. “I think that every physician tries hard to do what’s best for their patients,” she said, “and I think they should continue to do that.”
The major concern with these AUC, however—as with all AUC that have gone before—is that payers will use ratings to offer reimbursement for certain tests over others. Udelson stressed that the AUC writers explicitly stated “in boilerplate language” that “the designation May Be Appropriate should not be used as the sole grounds for denial of reimbursement for a given examination for a specific clinical scenario.”
Wiegers, however, objected to the word “sole,” calling it “ridiculous.”
“In fact, I thought their wording was more detrimental than usual,” she said. “One of the reasons we’re so concerned is that there have been a number of circumstances where payers have required or tried to require that modalities that are rated ‘A’ be preferentially ordered over modalities that are rated ‘M.’ That’s not what AUC are supposed to be used for in the first place, but even if that’s a reasonable approach, it makes getting the rating correct all the more important. I’m very concerned that these flawed ratings will be used to deny patients access to tests that are indicated in good patient care.”
She pointed to the controversy over the new hypertension guidelines and the new cholesterol guidelines and hinted that the same level of scrutiny—and mutiny—may be required here, too. “Physicians need to think about it, filter it, and apply it to the individual patient in front of them,” Wiegers stressed. “We strongly suggested to the ACC that they not publish this because it’s so flawed, that they reconstitute the panel and try again, but they chose to publish it anyway. We’ve been going back and forth with them since October with multiple letters and documents and discussions, and in the end, we couldn’t endorse it because it is so flawed.”
Blankenship, noting that this document was done in partnership with the ACR, likewise said he believes the ACC could consider revisiting the question with a cardiology focus. “The ACC could take a second look at it and say, ‘Well, gee, this isn’t quite as good as we had hoped it would be,’ and they could certainly do it again,” he said. “My understanding was that the ACR took the lead role in it, but the ACC could go and say, ‘Well, that was a great job, we learned a lot from it, but maybe we should do it again [by] ourselves.’”
That seems unlikely to happen any time soon—AUC documents are typically reconstituted and revisited on a cycle of three to five years. Udelson notes that while the ASE document came out prior to the publication of the AUC, the ASE was well aware of what the ratings were, those having been communicated to the societies months earlier. “At some point you can’t keep going back,” he said. “We can’t go back to the raters and say, oops, someone wasn’t happy with scenario 14, because that’s just not how this process works.”
Udelson also made the point that the document was very much intended to be applied “way up front” in the very initial stages of patient assessment in the emergency department, not for what happens next.
“I think one of the things that the SCAI took issue with is that a lot of the decisions for catheterization get made a little bit later in these people, after they are admitted and the troponin values are back,” he said.
1. Blankenship JC, Wiegers SE. Concerns regarding “2015 ACR/ACC/AHA/AATS/ACEP/ASNC/NASCI/SAEM/SCCT/SCMR/SCPC/SNMMI/STR/STS: Appropriate utilization of cardiovascular imaging in emergency department patients with chest pain.” Catheter Cardiovasc Interv. 2016;Epub ahead of print.
2. Rybicki FJ, Udelson JE, Peacock WF, et al. 2015 ACR/ACC/AHA/AATS/ACEP/ASNC/NASCI/SAEM/SCCT/SCMR/SCPC/SNMMI/STR/STS: Appropriate utilization of cardiovascular imaging in emergency department patients with chest pain. A joint document of the American College of Radiology Appropriateness Criteria Committee and the American College of Cardiology Appropriate Use Criteria Task Force. J Am Coll Cardiol. 2016;67:853-879.
- Blankenship, Wiegers, and Udelson all report having no conflicts.