The Perils of Perusing: New Research Cautions Against Relying Solely on RCT Abstracts

Don’t “just go with what the abstract tells you,” a researcher says. Also, keep an eye out for overly rosy “spin.”

Busy cardiologists may sheepishly admit that they only have time to read the abstracts of published randomized controlled trials rather than digging into the full results. But two recent studies suggest that, for a range of reasons, it’s important to read findings in full before applying any lessons to clinical practice.

Muhammad Shahzeb Khan, MD (John H. Stroger, Jr. Hospital of Cook County, Chicago, IL), is the lead author for both reports: one in Circulation: Cardiovascular Quality and Outcomes and the other in JAMA Network Open. In one, the focus was on the completeness of abstracts in particular and for the other, he explained, researchers looked throughout the entirety of academic papers for evidence of “spin,” or “manipulation of language to potentially mislead readers from the likely truth of the results.”

The idea for looking more deeply at journal abstracts, according to senior author Richard A. Krasuski, MD (Duke University Health System, Durham, NC), goes back to discussions of how residents, medical students, and other trainees digest papers and transmit those results to their peers.

“They get kind of a selective interpretation because some of them may not have time to read through the entire paper, . . . and sometimes what you find in the abstract is very different than what is contained within the paper,” Krasuski explained, adding that reading the whole article is necessary to understand a study’s methodology, patient population, and in some cases, even its primary results.

“It’s important [for clinicians] to vet the data, not just go with what the abstract tells you and gauge your practice on that. But we’ve all done it before. I’ve been as guilty as anyone—when I have a lot of material presented to me, I may take the most time-effective way out,” said Krasuski. “All of us would like the luxury of time to be able to [read a paper] but we all recognize there are times we don’t.”

Spotlight on RCTs

For the abstract study, researchers included 478 randomized controlled trial (RCT) abstracts published between 2011 and 2017 in three major cardiovascular journals—Circulation, Journal of the American College of Cardiology, and European Heart Journal. The analysis used the Consolidated Standards of Reporting Trials (CONSORT) criteria to examine the quality of the published abstracts.

They found that all of the abstracts reported detailed information for both study groups being assessed, and 81% specified trial registration. Sixty-three percent provided results for the primary outcome and 55% mentioned harms or adverse effects. In terms of reporting methodology, however, abstracts fell short: only 9% described eligibility criteria and data collection, 43% mentioned whether or not a trial was blinded, and just 0.8% gave information on allocation concealment.

For the ‘spin’ study, the researchers looked for evidence of three dubious strategies: focusing on significant secondary results when the primary-outcome comparison had been nonsignificant; interpreting a nonsignificant result for the primary outcome to show treatment equivalence or rule out an adverse event; and emphasizing a treatment’s benefit with or without acknowledging that the primary-endpoint comparison didn’t reach statistical significance.

They identified 93 RCT papers published in one of six journals—New England Journal of Medicine, the Lancet, JAMA, Circulation, Journal of the American College of Cardiology (JACC), and European Heart Journal—in 2015 or 2016. They found spin in 11% of titles, 57% of abstracts, and 67% of the papers’ main texts. Slightly more than half (54%) had this type of misleading wording in the paper’s conclusion section and 48% in the abstract’s conclusion. Spin was seen in results (abstract and/or main text) in approximately four out of every 10 papers.

Researchers may be unconsciously injecting spin into their papers on negative trials, in the hopes that major journals won’t write off the results as unimportant, Khan said. One step forward is to recognize that negative trials still have scientific merit in terms of informing future research, he stressed. Journal readers should also keep in mind that trials are powered specifically for primary endpoints, so any differences in secondary outcomes should be considered hypothesis-generating only.

Filling the Gaps

Commenting on the findings for TCTMD, James de Lemos, MD (UT Southwestern Medical Center, Dallas, TX), said the “overarching message that we can all learn from—authors, reviewers, journal editors—is just that the abstract is so often the last thing an author puts together after they finish their paper and not necessarily the focus of the detailed review.”

de Lemos, the executive editor of Circulation, agreed that many journal readers rely on the abstract alone. Sometimes even he uses the abstract as a “first pass” to see if he’s interested in reading a paper all the way through.

“We’re not paying enough attention to ensuring that that’s a standalone document,” he said, adding, “It’s a great lesson for me as an editor.”

Khan et al suggest that lengthening abstracts beyond 250 words might help researchers fit in more aspects of their work, and de Lemos noted that Circulation has in fact done so, raising the limit to 350 words. The journal has encouraged authors to use this space for describing methods, so that readers can better assess the quality of the investigation. “That’s fundamentally where the gaps are,” he observed.

Regarding the other analysis, de Lemos said “the desire to spin is universal and inherent. There’s pressure to do it because authors believe that they have to emphasize the impact of their study to get into competitive journals and to get people to believe their research is important and the conclusions are valid. So there’s a natural tendency.” To try to combat this, Circulation editors go through papers line by line after they’ve been reviewed to tone down causal inferences, take out superlative words, and address potential exaggeration of effects, he said.

de Lemos did not, however, think Khan and colleagues’ other suggestion—that journals should agree to a more consistent format—was likely to happen “given the heterogeneity in the focus of journals. . . . I don’t know if that solves the problem. It’s not so much the format [but] the information contained.”

Importantly, there’s a balance between digestibility and technical detail, he added.

Krasuski framed the two analyses as a starting point to ask how best to ensure that information is properly communicated. “In an ideal world, if we could standardize the way things are presented from journal to journal, I think that would be a wonderful way. As a researcher, I am oftentimes frustrated by the fact that we’re constantly reformatting things and then trying to touch on all these different topics,” said Krasuski, who pointed out that he, too, serves as an editor at several journals.

Valentin Fuster, MD, PhD (Icahn School of Medicine at Mount Sinai, New York, NY), editor-in-chief for JACC, emphasized to TCTMD that journals depend upon statisticians and peer reviewers to ensure accuracy in papers. “This is as much as one can do from an executive branch of a journal,” he commented.

Only a few words can make a difference in terms of conveying information, he agreed. “I can only say that we try to really correct [for] this in the editorial meeting as much as we can. There are situations where sometimes it gets very difficult, but you already have the experts [on hand]. . . . We have to rely on their judgment.”

Khan made one additional point that might also be of interest to journal publishers. A lack of time is just one obstacle to reading articles in full; journals’ paywalls are another. “When you search PubMed, a lot of the articles aren’t open access, so we can’t immediately [get] the full-text article,” he observed.

Disclosures
  • Khan reports no relevant conflicts of interest.
  • Krasuski reported receiving grants from Edwards Lifesciences and Abbott; receiving grants and personal fees from Actelion; and serving as a nonfunded scientific advisor for Ventripoint outside the submitted work.