Memo for Meta-analyses: Put Quality Over Quantity, Says AHA

Although they’re growing in popularity, not all meta-analyses are equal in terms of quality, experts caution.

The publication of meta-analyses is on the upswing in the cardiovascular literature, but care must be taken to ensure they provide accurate and useful information for researchers, clinicians, and patients, according to a new scientific statement issued by the American Heart Association (AHA).

The statement, published online August 7, 2017, ahead of print in Circulation, offers advice on everything from how to select the highest-quality sources to the best means for analyzing data. Some old habits are worth keeping, its authors say, but others should be amended and new approaches added to the mix.

“Despite the increasing popularity of meta-analyses and systematic reviews in general, problems with methodology are widespread and frequently undermine the credibility of the results,” note writing group chair Goutham Rao, MD (Case Western Reserve University and University Hospitals, Cleveland, OH), and colleagues. “New guidance is needed for both researchers who carry out [these analyses] and the consumers who read them and rely on the results.”

Meta-analyses—even those based on randomized controlled trials—are not without potential pitfalls, they observe. Putting undue trust in them, Rao et al say, “is truly unfortunate because problems with published meta-analyses, including unstandardized methods and misapplication and misinterpretation of statistical and other techniques, are widespread and long-standing.”

According to its authors, the AHA statement aims to provide guidance for researchers doing meta-analyses, for readers seeking to interpret results, and for journal editors weighing whether to publish a particular paper.

Ori Ben-Yehuda, MD (Cardiovascular Research Foundation, New York, NY), agreed that meta-analyses can be problematic. “One of the issues with meta-analyses is that the people doing them are not necessarily the experts or have even done any of the studies [on the topic at hand],” he told TCTMD. Many times, their authors are inexperienced or haven’t had the right level of mentorship to understand the implications for practice in a particular area. “It has become sort of a way for people to have, how shall I put it, an easy pathway to publication without doing the hard work actually required in clinical trials,” he observed.

Given their many nuances and complexities, the results of meta-analyses “have to be taken with a big grain of salt,” Ben-Yehuda cautioned. “That’s the most important thing. They’re not the be-all and end-all.”

That said, meta-analyses can play a useful role in certain situations. “Where they help is where we really have a few smaller, underpowered studies that all point to the same direction but are not powered enough to be convincing,” said Ben-Yehuda. “You fit them together and you now have sufficient power.”

When the included studies are too heterogeneous, he added, the end product “is less convincing.”

Emphasis on Quality

In brief, here are the key recommendations from the AHA:

  • The practice of putting systematic reviews and meta-analyses at the “very top of evidence pyramids” should be dropped. “Instead, a careful assessment of the methods used in a meta-analysis should be carried out to determine its risk of bias and contribution to closing important gaps in knowledge.”
  • When conducting a meta-analysis, the research team should discuss and agree on whether it’s needed in the first place. “This rationale,” Rao et al say, “should be documented in writing in the form of a protocol that includes the details of study selection, abstraction of data, models for assessing associations, and criteria for interpretation of data.” If the data do not lend themselves to pooling, a systematic review alone may be in order. Finally, all teams need to involve a biostatistician or other expert in meta-analytic methods.
  • The approach to searching for data in the literature should be set out in advance.
  • Regarding quality assessment, all included studies must be reviewed thoroughly to ascertain the appropriateness of their design.
  • Additionally, “it is inappropriate to pool data from studies that are clinically or methodologically very heterogeneous (eg, significantly different populations, differing doses of interventions, etc),” stress Rao et al. When studies are homogeneous enough for pooling, the choice between fixed- and random-effects models “should still be based on similarities among studies to be pooled in terms of populations, interventions, exposures, and outcome measures,” they say.

The AHA document also provides advice on how to measure statistical heterogeneity, avoid publication bias, and test the robustness of a meta-analysis’ results. On a cautionary note, Rao et al add: “Emerging methods such as network meta-analysis and Bayesian methods should be undertaken only with expert guidance, [as these] are still under development.”
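To make the pooling and heterogeneity concepts above concrete, here is a minimal sketch of the standard calculations involved: inverse-variance (fixed-effect) pooling, Cochran’s Q, the I² heterogeneity statistic, and the DerSimonian-Laird random-effects estimator. This is an illustration with made-up effect sizes, not code from the AHA statement:

```python
import math

def pool(effects, ses, random_effects=False):
    """Pool study effect estimates; return (pooled effect, SE, I^2 percent)."""
    # Fixed-effect inverse-variance weights: w_i = 1 / SE_i^2
    w = [1 / se**2 for se in ses]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)

    # Cochran's Q quantifies between-study heterogeneity;
    # I^2 expresses the share of variability beyond chance.
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    if not random_effects:
        return fixed, math.sqrt(1 / sum(w)), i2

    # DerSimonian-Laird estimate of between-study variance tau^2,
    # which inflates each study's variance before re-weighting.
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    w_re = [1 / (se**2 + tau2) for se in ses]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    return pooled, math.sqrt(1 / sum(w_re)), i2

# Three hypothetical small trials reporting log odds ratios:
effects = [-0.60, -0.10, -0.45]
ses = [0.15, 0.12, 0.20]
est, se, i2 = pool(effects, ses, random_effects=True)
print(f"pooled log OR = {est:.3f} (SE {se:.3f}), I^2 = {i2:.1f}%")
```

When I² is high, as with these hypothetical trials, the random-effects standard error comes out wider than the fixed-effect one; that widening is the statistical price of pooling heterogeneous studies, and it is why Rao et al tie the model choice to how similar the studies really are.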

In terms of practical advice, Ben-Yehuda pointed out that even studies that seem similar on the surface may not be when it comes to entry criteria and other details. “They’re not exactly the same,” he told TCTMD. “They can’t just be lumped together. So we have to be very careful with these things, and I think a pooled patient-level analysis is better as opposed to these just from-the-literature [analyses]. That takes a lot more work, and requires cooperation from the authors [of included studies]. It’s not just someone looking at the published data, which may have a lot of subtleties they’re not aware of.”

Importantly, he noted, the research teams doing meta-analyses should include members who conducted the trials on which they are based.

Disclosures
  • Rao and Ben-Yehuda report no relevant conflicts of interest.
