Cardiac Intervention and Surgery RCTs: Room for Improvement?

Contemporary RCTs “are relatively small and fragile,” and there are upsides and downsides to industry funding, researchers say.


The randomized trials that have guided the use of invasive cardiovascular interventions—both surgical and nonsurgical—over the past 12 years are not always designed to provide answers to meaningful clinical questions, new research suggests. Moreover, industry funding appears to influence how studies are conducted and reported, in both good and bad ways.

The findings come from an analysis of the characteristics of 216 interventional cardiology and cardiac surgery RCTs published between January 2008 and May 2019 that was performed by Mario Gaudino, MD (Weill Cornell Medicine, New York, NY), and colleagues.

“This may seem like just intellectual curiosity looking at the quality of the evidence that is published in cardiovascular intervention, but I don’t see it in this way,” Gaudino told TCTMD.

“A randomized trial is considered the highest level of evidence. Most of what we do in terms of our guidelines and decisions that we make in practice every day are based on the results of randomized trials,” he explained. “So evaluating the quality of randomized trials and making sure that they answer the question that they are supposed to answer has very important implications for patient care. Because in the end, those trials are the foundation of the decisions that physicians take and that ultimately, in the end, affect patients. So I see this as an attempt to improve the outcomes of patients even though it is not research on patients.”

The analysis turned up three key findings, according to Gaudino. First, “we as a cardiovascular intervention community are not doing a great job in performing randomized trials,” he said, pointing to the average of about 20 trials published each year over the study period.

Secondly, “those trials were not necessarily designed to answer important clinical questions,” Gaudino reported. Only about half of trials (53.2%) used major clinical events as a primary outcome. “We need to do more trials and do more trials that look at research questions that are relevant for our patients, not for us,” he stressed.

And finally, there were differences between industry-funded and noncommercially funded trials, he said. On the one hand, industry trials were larger, provided less-fragile results, and appeared in higher-impact journals. On the other, two findings raised red flags: commercial sponsorship was associated with a greater likelihood of finding a primary outcome benefit for the studied intervention (64.3% vs 48.5%) and of the authors using “spin” to imply positive results when the trial was neutral or negative (80.6% vs 54.2%).

“Without industry there would not be a lot of research going on, so there is no doubt that the support of industry is very, very important,” Gaudino said, referring to the fact that 53.2% of trials received commercial support. “However, it needs to be very carefully regulated.”

Weaknesses Found

For the study, published online June 1, 2020, ahead of print in JAMA Internal Medicine, Gaudino et al examined trials in the fields of coronary, vascular, and structural interventional cardiology and vascular and cardiac surgery. The median sample size was 502, and the median follow-up duration was 12 months.

Most of the trials had 80% power to detect an estimated treatment effect of 30%. More than half (59.3%) used composite primary endpoints.
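To see why designing around large assumed treatment effects keeps trials small, the standard two-proportion sample-size formula can be sketched in a few lines. This is a generic illustration, not the calculation used in the study; the event rates (10% control, a 30% relative reduction to 7%) are assumptions chosen for the example.

```python
import math

def n_per_arm(p_control, relative_reduction, alpha=0.05, power=0.80):
    """Approximate sample size per arm for comparing two proportions.

    Uses the normal-approximation formula:
    n = (z_{1-alpha/2} + z_{1-beta})^2 * (p1(1-p1) + p2(1-p2)) / (p1 - p2)^2
    """
    # Standard normal quantiles for the usual alpha = 0.05 / power = 0.80
    z_alpha = 1.959964  # two-sided 5% significance
    z_beta = 0.841621   # 80% power
    p1 = p_control
    p2 = p_control * (1 - relative_reduction)  # treatment event rate
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Assumed example: 10% control event rate, 30% relative risk reduction
n = n_per_arm(0.10, 0.30)
print(n)  # roughly 1,350 patients per arm
```

Note that even under this optimistic 30% effect assumption, a trial with a 10% control event rate needs roughly 2,700 patients in total, far above the median sample size of 502 reported in the analysis; a smaller, more realistic effect size would push the requirement far higher.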

Trials conducted with industry support were more likely to involve multiple centers and to use a noninferiority design, had larger median sample sizes (800 vs 302), and were less likely to use major clinical events as a primary endpoint. The first and/or last author of the published industry-funded studies reported a conflict of interest with the company in 58.3% of cases.

A primary endpoint favoring the experimental intervention and the presence of spin in the case of neutral findings were both common across the board—57.0% and 65.5%, respectively—although they were more frequent in industry versus noncommercial studies, even after accounting for differences in trial characteristics.

Discrepancies between registered and published primary outcomes were seen in 38.0% of trials, without differences based on funding source. Such differences may be unavoidable—in the case of protocol changes in response to the COVID-19 pandemic, for instance—but the observed rate is concerning, Gaudino said. Whenever this situation arises, it “should be very thoroughly investigated and the reasons for the changes need to be very clearly described in the final publication of the paper so that everybody understands what was changed, when it was changed, and why it was changed,” he said.

The investigators also assessed the fragility of trials, defined as the number of patients who would have to experience a different outcome to change a statistically significant trial result to a nonsignificant one. This number was slightly higher in industry-funded trials (median 5.0 vs 4.5), indicating a more solid finding.
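The fragility concept described above can be made concrete with a small sketch: repeatedly flip one outcome in the arm with fewer events and recompute a two-sided Fisher exact test until the result loses significance. This is a simplified illustration of the general idea (the function names and the example event counts are assumptions for demonstration), not the exact procedure or data from the study.

```python
from math import comb

def fisher_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    r1 = a + b   # row total (arm 1)
    c1 = a + c   # column total (events)
    def p_table(x):
        # Hypergeometric probability of a table with x events in arm 1
        return comb(c1, x) * comb(n - c1, r1 - x) / comb(n, r1)
    p_obs = p_table(a)
    lo = max(0, r1 - (n - c1))
    hi = min(r1, c1)
    # Sum probabilities of all tables at least as extreme as the observed one
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

def fragility_index(events_a, n_a, events_b, n_b, alpha=0.05):
    """Number of outcome flips needed to lose significance.

    Simplified convention: pass the arm with fewer events first; events are
    added to that arm one at a time until p >= alpha. Returns 0 if the
    result is already nonsignificant.
    """
    flips = 0
    e_a = events_a
    while fisher_two_sided(e_a, n_a - e_a, events_b, n_b - events_b) < alpha:
        e_a += 1
        flips += 1
        if e_a > n_a:
            return None  # significance never lost (degenerate case)
    return flips

# Assumed example: 10/200 events vs 30/200 events
fi = fragility_index(10, 200, 30, 200)
print(fi)
```

A fragility index in the single digits, as reported for both trial groups in the analysis (median 5.0 vs 4.5), means that changing the outcome of only a handful of patients would have erased statistical significance.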

“These results suggest that contemporary RCTs of invasive cardiovascular interventions are relatively small and fragile, have short follow-up, and have limited power to detect large treatment effects,” Gaudino et al conclude.

Collective Responsibility to Improve the Process

Asked how he would ensure that the trials driving clinical practice are designed, conducted, and reported in a meaningful way, Gaudino highlighted the need for contributions at multiple levels.

He reiterated the need to carefully monitor the role of industry. “Ideally, industry should provide support to the trials but should not be involved in any part of the trial design, analysis, and reporting.”

Journals and peer reviewers also have a responsibility to evaluate the quality of trial data and ensure that the way they are presented in a publication is consistent with the actual results, he said. But ultimately, Gaudino said, it’s incumbent upon individual physicians to fully read trial publications so they’re not influenced by spin in the abstract or conclusion of a paper.

For Yves Rosenberg, MD, chief of the atherothrombosis and coronary artery disease branch of the National Heart, Lung, and Blood Institute in Bethesda, MD, there’s not a lot of new information to be gleaned from the analysis. It’s been known for a long time that invasive cardiovascular trials are difficult to perform, have moderate sample sizes, and are often designed with the assumption of large treatment effects in order to limit sample sizes, he commented to TCTMD. That “means that often they miss the mark in the traditional statistical sense of a P value of 0.05.”

Some of the differences between commercial and noncommercial trials were not particularly surprising either, Rosenberg said, because industry can afford to perform larger trials, tends to use noninferiority designs that compare a new product with an existing one, and often publishes in higher-impact journals because of the larger sample sizes and more frequent positive results.

When asked about the higher degree of spin in industry-funded trials, Rosenberg said the level of concern depends on where it’s found in the publication. If it’s found in the abstract and conclusion, it’s “really concerning,” he said, “but if you include that in the discussion that’s perfectly fine in my view . . . that you can have some level of interpretation of this.” But Rosenberg echoed Gaudino’s call for a collective responsibility among funders, investigators, and journals to ensure that results are being presented in an objective and reasonable way.

To improve the quality of trials in this space, Rosenberg made a “call to have larger trials, more adequately powered, with more reasonable assumptions regarding the magnitude of the treatment effects they’re supposed to detect. But they still need to be designed in a way that they detect some clinically meaningful difference.”

Gaudino brought the discussion about trial quality back to the potential impact on patients. “Most of what we do,” he said, “is based on those studies and if those studies are not properly performed, designed, or reported, in the end they will harm patients.”

Todd Neale is the Associate News Editor for TCTMD and a Senior Medical Journalist.
Disclosures
  • Gaudino reports no relevant conflicts of interest.
