Lessons From 2023’s IVUS/OCT Confusion Can Lead to Better Evidence-Based Practice

We all get excited by results from single randomized trials. The wise clinician will be guided by the wider body of evidence.

“Science is supposed to be cumulative, but scientists only rarely cumulate evidence scientifically.” - Iain Chalmers, Cochrane Collaboration

Last year at the European Society of Cardiology (ESC) 2023 Congress, two large intravascular coronary imaging trials, ILUMIEN IV and OCTOBER, showed seemingly contradictory results, while a third trial, OCTIVUS, muddied things further by showing OCT to be noninferior to IVUS. Finally, in that same session, Gregg Stone, MD (Icahn School of Medicine at Mount Sinai, New York, NY), presented a network meta-analysis of IVUS and OCT trials, and confusion—for some—skyrocketed.

Every year, multiple important randomized trials are presented at meetings and simultaneously published in high-impact journals. This is great news for the cardiology community, since it provides the fundamental fuel for our current best standard: evidence-based medicine (EBM). While it is natural to feel uncomfortable when faced with seemingly contradictory conclusions, EBM offers us a solution.

Heading into 2024, my hope is that cardiologists can embrace the core concepts of EBM to help interpret evidence. The intravascular imaging trials of 2023 can serve as a guide, so that we don’t make the same mistakes again and again.

Principles of Evidence-Based Medicine

The three principles of EBM were developed by clinical epidemiologists, including those who coined the term more than three decades ago:

  1. Consider all of the best available evidence.
  2. Assess evidence credibility.
  3. Accept that evidence alone is never enough to make clinical decisions.

The third principle is always a welcome one for clinicians, highlighting the need to integrate evidence with patient values and preferences, available resources, and other factors. The second principle is about assessing the credibility of research, such as the risk of bias or the hierarchy of available data. The first principle, however, is the most relevant to our intravascular imaging dilemma.

Would you believe me if I told you that the best standard of EBM ignores the primary outcome conclusions of big trials? For those who say, “No way!” I have a surprise for you today.

Philosophers, and probably you, too, agreed a long time ago that truth relies on incorporating the totality of the evidence. Cherry-picking evidence or relying on a subset of trials that favors a particular point of view or prior beliefs would inevitably push us away from the truth. Imagine prosecutors following “their beliefs” to reach a guilty verdict by prioritizing one piece of evidence—the accused was the owner of the gun that killed the victim—while ignoring another—the accused was on a transatlantic flight when the murder was committed. Assessing the totality of the evidence would not only prevent innocent people from going to jail, but also prevent cardiologists from arriving at inaccurate conclusions.

The totality of the best available evidence comes from systematic reviews and is the substrate we need to create the “body of the evidence.” It’s important to emphasize “systematic”: the methods used to amass the evidence must be rigorous, just as we expect of prosecutors weighing whether someone is guilty. Reviews that fail this principle cannot be considered to adhere to the best EBM standards. Now pause for a second: how many of the clinical guidelines influencing your practice are not based on systematic reviews, instead considering some but not all of the available clinical trials?

What Is the Body of Evidence for Intravascular Imaging?

A recent systematic review including all available trials was published in November 2023 in the BMJ. In it, Khan and colleagues conclude that, compared with angiography guidance, intravascular imaging guidance resulted in lower rates of cardiac death and other key cardiovascular outcomes, with high certainty of evidence using the GRADE approach. This is consistent with many prior systematic reviews.

It’s important to highlight that in this paper, the I² statistic (a measure of heterogeneity: the higher the value, the more the results differ between trials) was 0% for all cardiovascular outcomes, meaning there is no evidence that the results differ across trials.

So how can I² be 0% when individual trial conclusions are markedly different?

Think of scenarios in which one trial reports a statistically nonsignificant reduction (eg, in ILUMIEN IV, the risk ratio for cardiac death was 0.57; 95% CI 0.25-1.29; P > 0.05) while another detects a similar risk ratio with statistical significance (eg, in RENOVATE-COMPLEX-PCI, the risk ratio for cardiac death was 0.47; 95% CI 0.24-0.93; P < 0.05). Both trials roughly halved cardiac death, which is very consistent. Yet the former based its conclusions on a P value greater than 0.05 (a “negative trial”) while the latter based its conclusions on a P value less than 0.05 (a “positive trial”). Thankfully, working with the totality of the evidence improves precision: across all 13 studies included in the BMJ systematic review, the risk ratio for cardiac death is 0.53 (95% CI 0.39-0.72; P < 0.001).
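For readers who like to see the arithmetic, here is a minimal sketch of inverse-variance pooling and the I² calculation, using only the two cardiac-death estimates quoted above; it is an illustration, not the review’s actual methodology, which pooled all 13 trials. Even with just these two trials, the pooled risk ratio lands near the 13-trial result, and I² comes out to 0% because the two estimates differ by less than chance alone would predict.

```python
# Illustrative sketch: fixed-effect (inverse-variance) pooling of log risk ratios
# and the I^2 heterogeneity statistic, using only the two cardiac-death estimates
# quoted above (ILUMIEN IV and RENOVATE-COMPLEX-PCI), not the full 13-trial dataset.
import math

trials = {
    "ILUMIEN IV": (0.57, 0.25, 1.29),             # risk ratio, lower 95% CI, upper 95% CI
    "RENOVATE-COMPLEX-PCI": (0.47, 0.24, 0.93),
}

log_rrs, weights = [], []
for rr, lo, hi in trials.values():
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the 95% CI
    log_rrs.append(math.log(rr))
    weights.append(1 / se**2)                        # inverse-variance weight

# Pooled estimate on the log scale, then back-transformed
pooled = sum(w * y for w, y in zip(weights, log_rrs)) / sum(weights)
se_pooled = 1 / math.sqrt(sum(weights))
ci_low = math.exp(pooled - 1.96 * se_pooled)
ci_high = math.exp(pooled + 1.96 * se_pooled)
print(f"Pooled RR {math.exp(pooled):.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")

# Cochran's Q and I^2: how much between-trial variation exceeds what chance predicts?
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_rrs))
df = len(trials) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
print(f"Cochran's Q = {q:.2f}, I^2 = {i2:.0f}%")
```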

In a nutshell: the interpretation of the body of the evidence is totally independent of individual trial conclusions.

Khan et al’s review also concludes that “the estimated absolute effects of intravascular imaging-guided percutaneous coronary intervention showed a proportional relation with baseline risk, driven by the severity and complexity of coronary artery disease”.

This is true for most scenarios in medicine: for the same risk ratio, those at higher risk of outcomes—in this case, patients with more-complex coronary disease—benefit more in absolute terms. This may be the main explanation for why some trials reach statistical significance while others do not, despite being consistent in their key findings.
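As a back-of-the-envelope illustration of that point, the snippet below applies the review’s pooled risk ratio for cardiac death to two hypothetical baseline risks (the 2% and 8% figures are assumptions for illustration, not numbers from the review): the relative effect is identical, but the absolute risk reduction and the number needed to treat differ severalfold.

```python
# Illustration only: a constant relative effect yields different absolute benefits
# depending on baseline risk. The risk ratio is the pooled cardiac-death estimate
# from the BMJ review; the baseline risks are hypothetical.
RISK_RATIO = 0.53

for baseline_risk in (0.02, 0.08):  # assumed low-risk vs higher-risk populations
    treated_risk = baseline_risk * RISK_RATIO
    arr = baseline_risk - treated_risk   # absolute risk reduction
    nnt = 1 / arr                        # number needed to treat
    print(f"baseline {baseline_risk:.0%}: ARR {arr:.1%}, NNT ~{round(nnt)}")
```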

Another question you might be asking: is it fair to put OCT and IVUS in the same basket?

OCT and IVUS are both intravascular imaging modalities, so their effects should be similar, though some may fairly argue they are not identical. And since most of our evidence does not come from direct OCT-versus-IVUS comparisons, how can we answer this question? Here is where Stone and colleagues’ network meta-analysis of 20 randomized controlled trials comes into play: it concluded not only that any form of intravascular imaging was superior to angiography, but also that, analyzed separately, OCT-guided and IVUS-guided procedures yielded similar improvements compared with angiography-guided procedures.

Saving Time and Trouble

We, as end users of the evidence, can probably save ourselves the time and trouble of doing critical appraisals of individual new trials. I know this is difficult: I spent hundreds of hours during my cardiology residency doing critical appraisals for “journal clubs” or “evidence review rounds” that looked at dozens of trials I don’t even recall anymore, using phrases like “too many crossovers limit interpretation,” or “borderline statistical significance,” or “only secondary outcome positive results” and many others. I don’t believe this is the best way to keep teaching EBM.

That’s not to say critical appraisal skills are irrelevant, but it makes sense to spend more time learning and properly using the EBM tools that embrace the body-of-evidence concept rather than nitpicking individual trial results. Ideally, regularly updated clinical guidelines or textbooks that integrate evidence-based information are the best options, but when those are not available (for example, when fresh evidence is likely to alter decisions), high-quality systematic reviews are a great alternative.

Tikkinen and Guyatt once observed: “The notion that most clinicians emerging from professional training will regularly evaluate the risk of bias in methods and results of primary studies is deluded. Most will be uninterested in acquiring the sophisticated skills that such appraisal requires; most of those who are interested will never make obtaining the training to acquire these skills a sufficient priority; and those who do obtain the training and skills will often not have the time to apply them.”

That means clinicians, eager to devour the results of the next big clinical trial, would do well to pause and remember the rules of EBM. We were lucky with the intravascular imaging dilemma posed by the IVUS and OCT trials in 2023—updated systematic reviews like the one presented by Dr. Stone at ESC are rarely available so soon after individual trials are published.

In 2024, I doubt we’ll be so lucky: most presented trials won’t be instantaneously incorporated into a systematic review. Still, make an extra effort to contain your excitement and avoid the urge to make up your mind solely based on new trial results.

 

Off Script is a first-person blog written by leading voices in the field of cardiology. It does not reflect the editorial position of TCTMD.
