Deserved or Not, Public Reports on Underperforming PCI Hospitals Improve Performance

The use of in-hospital mortality as a quality marker remains contentious, even as one study shows publicly sharing this information may affect care.

Naming a hospital as an outlier for increased mortality following PCI in states with public reporting—regardless of the accuracy of the identification—seems to boost systematic improvements at these institutions in the years that follow, according to a new analysis of administrative databases.

“We believe the improvement in clinical outcomes reflects quality improvement stemming from the intended effect of public reporting, especially in the absence of evidence supporting disproportionate risk aversion,” write Stephen Waldo, MD (VA Eastern Colorado Health Care System, Denver), and colleagues.

The value of publicly reporting PCI outcomes has been hotly debated for years, out of concern that patients may weigh these data heavily when choosing an operator for an elective PCI procedure, or that physicians themselves may withhold care rather than see their public statistics take a hit. Last year, both the Society for Cardiovascular Angiography and Interventions and the American College of Cardiology released statements recommending substantial changes to public reporting programs with the goal of minimizing risk aversion.

Waldo et al’s study was published online March 1, 2017, ahead of print in Circulation.

To investigate what happens after a hospital is tagged as an “outlier”—defined as having a risk-adjusted mortality rate exceeding the upper bound of the 95% confidence interval for expected PCI mortality—Waldo and colleagues looked at state data from Massachusetts and New York for all PCI procedures completed at 86 hospitals between 2002 and 2012. The 31 outlier hospitals were larger, were more likely to have an active cardiothoracic surgery program, treated more acute MI patients, and had a higher rate of percutaneous revascularization procedures compared with institutions that weren’t outliers.

Over time, rates of PCI following public reports of outlier status increased to a similar degree both at outlier (RR 1.13; 95% CI 1.12-1.15) and nonoutlier hospitals (RR 1.13; 95% CI 1.11-1.14; P for interaction = 0.5).

In-hospital mortality for all patients presenting with MI decreased at all hospitals during the study period, but it did so to a greater degree at those designated as outliers. Looking just at the subset of MI patients who went on to be treated with PCI, rates of in-hospital mortality at outlier hospitals dropped substantially in the postreporting period as compared with the prereporting period (RR 0.72; 95% CI 0.66-0.79)—a sharper decline than that seen at nonoutliers (RR 0.87; 95% CI 0.80-0.96; P for interaction < 0.001).

Speaking to TCTMD, senior study author Robert Yeh, MD, MSc (Beth Israel Deaconess Medical Center, Boston, MA), said the overall message of the study “is a positive one.” Specifically, outliers do not appear to be more risk averse than nonoutliers in public reporting states, he said. “The flip side of it is that that doesn’t mean that [outliers] were identified correctly. What we know is when you publicly identify a hospital [as an outlier] that they do things to improve. If you randomly identified hospitals and just put labels on them and said they were poorly performing in an inaccurate way, there’s a good chance that those hospitals you've identified would improve also.”

‘Double-Edged Sword’

Ajay Kirtane, MD, SM (NewYork-Presbyterian/Columbia University Medical Center, New York), who was not involved in the study, told TCTMD that what concerns him about this study is that the topline results seem to imply that “outliers got identified and outliers got better, so the conclusion should therefore be we ought to do more to identify outliers.”

However, some states have no public reporting programs at all, he noted, commenting that “there are negative consequences of public reporting that might overwhelm these potential benefits that the authors observed.” One potential explanation, Kirtane suggested, is that “once a hospital is identified as an outlier, they just do a better job of documenting risk adjustment, or factors that would be incorporated into risk adjustment, and so therefore their risk-adjusted mortality would come down, while at the same time the overall mortality might not change as much.”

This is exactly what happened in the study, he pointed out, given that the unadjusted in-hospital mortality rate for the overall cohort went up from 8% to 9% after identification of outlier status even as adjusted mortality decreased. “So whether or not they are truly modifying their behavior to ‘get better’ at least in my mind is uncertain,” Kirtane said.

Using mortality as a quality indicator for PCI “is really a double-edged sword,” Gregory Dehmer, MD (Texas A&M University School of Medicine, Temple, TX), who was not involved in the study but chairs the NCDR’s public reporting advisory group, told TCTMD. “While it’s important to evaluate it, a lot of these issues of mortality need to really be adjudicated at the local hospital level, which is difficult to do sometimes,” he said. “If you're at smaller hospitals, there may only be three or four physicians that engage in this particular kind of work, and they’re all friends and colleagues with each other, and they are hesitant to throw a colleague under the bus.”

While mortality certainly shouldn’t be ignored, he continued, “one has to be very careful about using that as the primary measure of whether or not a program is a good or bad program.” The fact that elective and emergent PCIs are combined in public reporting data does not help, Dehmer added. “It now becomes a function of who is the individual on call the night that one of these terribly sick patients comes in, as opposed to what is the real skill of the individual. There's a lot of variables and you have to be very careful when you start using mortality. It’s very easy to describe—you're either alive or you're dead—but it’s not so easy to fold that into a quality measurement,” he said.

Physicians Want ‘Full’ Transparency

Still, as Waldo argued to TCTMD in an email, there are a number of theoretical advantages to programs that publicly report clinical outcomes. These include improved transparency, objective evaluation, improved performance via the Hawthorne effect (individuals modify their behavior because they know they are being observed), and “a more open marketplace” for healthcare services. These benefits, however, run up against “practical realities,” such as the concerns typically raised by physicians uncomfortable with having their statistics made public. While generally supportive of public reporting, Waldo said he believes “there need to be modifications put in place to realize its full potential.” These might include excluding specific patient populations to avoid risk-averse behavior and taking a hard look at how risk-adjusted mortality is calculated.

For his part, Yeh said he is neither for nor against public reporting as it stands. “I am for rationally designed transparency that does not adversely affect patient care [and] that improves patient care,” he said.

Kirtane agreed, so long as in-hospital mortality is not the sole endpoint used to determine quality, although he acknowledged that more research needs to be done to determine which endpoints could be used and how. For now, all physicians, regardless of whether or not their state has a public reporting program, “can do a better job of accurately capturing the risk adjusters, or the comorbid conditions, of the patient,” Kirtane suggested.

And while some might view the anti-public reporting stance taken by some physicians as an indication that they are opposed to transparency, that’s typically not the case, Kirtane concluded. “We want full transparency, and unfortunately that is not necessarily conveyed when looking at a bland metric like 30-day risk-adjusted mortality.”

  • Waldo SW, McCabe JM, Kennedy KF, et al. Quality of care at hospitals identified as outliers in publicly reported mortality statistics for percutaneous coronary intervention. Circulation. 2017;Epub ahead of print.

  • Waldo reports serving as the associate director of the VA Clinical Assessment, Reporting and Tracking Program, a national quality improvement and reporting program for VA cardiac catheterization laboratories.
  • Yeh and Kirtane report no relevant conflicts of interest.
  • Dehmer reports serving as chair of the NCDR public reporting advisory group.