Need to Spin a Negative Trial? Consult the Playbook
Key opinion leaders seem obliged to come up with “excuses” to explain disappointing trial results, Darrel Francis says.
Darrel Francis, MD (Imperial College London, England), the senior ORBITA investigator who sparred with commentators after results of that trial exploded onto the interventional cardiology scene last year, has a message for those who make excuses for negative trials: Save it.
Francis, along with lead author Adam Hartley, MBBS (Imperial College London), and colleagues, delivered that message in a satirical feature article published in the annual Christmas issue of the BMJ, which is known for its offbeat and quirky content. In it, the investigators provide a “Panelists’ Playbook,” which contains a list of 40 excuses from which experts can choose to explain a negative result.
“This is a long-standing observation of mine that people who are otherwise perfectly intelligent seem to be unable to understand negative trials when journalists speak to them,” Francis told TCTMD, adding that key opinion leaders (KOLs) seem obliged to come up with reasons other than the intervention simply not working.
Francis insisted the article has nothing to do with ORBITA, the results of which sent interventional cardiologists scrambling for explanations as to why PCI was no better than a sham procedure for improving symptoms in patients with stable angina. This new paper was 3 years in the making and was always destined for the BMJ Christmas issue, he said.
To compile the list of excuses—defined as any reason given for a negative result other than the intervention not working—Francis’ team combed through news coverage from Medscape and MedPage Today of the annual meetings of the American Heart Association, the American College of Cardiology, and the European Society of Cardiology held from 2013 to 2017. Forty percent of trials presented were considered negative, meaning that the primary endpoint was not met. Most news stories about those trials (85%) contained at least one excuse for the negative result provided by a KOL.
The Panelists’ Playbook consists of 40 theoretically possible excuses. The most common was that the sample size was too small (31%), and the authors note that in only one of those stories was a calculation provided for the correct sample size. “It is not clear whether in the other 38/39 (97%) of cases the experts were reluctant to divulge the fruits of their 30 seconds of calculation or that the excuse was simply the first thing that came into their head,” Hartley et al write.
The next most common excuses were that more studies were needed (21%), the study population was too inclusive (17%), and the follow-up was too short (17%).
“The Panelists’ Playbook provides a comprehensive approach to summarizing, or even generating, cheery key opinion leader remarks in the face of disappointing results,” the authors write. “With the help of the playbook, no intervention is too ineffective for an excuse. . . . Even if [KOLs] lack the time, inclination, or ability to think deeply about the trial, they can pluck items from the Panelists’ Playbook to provide an effortless veneer of insight.”
Why Avoid the Negative?
Though presented playfully and sarcastically, the article shines a light on the real issue of spinning negative trial results, Francis said, adding that there could be a couple of explanations for why KOLs feel the need to come up with reasons for disappointing trial findings.
First, these commentators might behave differently when speaking with journalists than when they’re speaking with colleagues. “They may, in a bar or hospital corridor, tell their friends it doesn’t work, but when they’re speaking to a journalist, they are extra guarded on negative results in a way that they’re not on positive results,” he suggested.
And second, it could be the consequence of a process through which companies that make drugs or devices influence the discussion by aligning themselves with KOLs who are likely to put a positive spin on all situations. “There’s a selection process and an internal maturation process where we are selected for always having something nice to say,” said Francis, who added that he had a firsthand view of this process during a brief stint as a KOL for a well-known device company. When Francis started publicly stating things that didn’t sync up with that company’s view, he said, speaking opportunities on that and other topics dried up.
“This is entirely understandable, but it struck me that it would have an undesirable effect,” Francis said.
“The people giving commentaries on research tend to be the people who are famous for being famous, the cardiac celebrities. They have reached this elite state by being frequently called upon to lecture, write, or speak to journalists,” he said. “If they have been selected for giving a positive viewpoint in all circumstances, and perhaps even have learned to do so, then that entire body of opinion is effectively useless for me as a general cardiologist trying to learn something. There’s no point having an advisor who always says yes. They have to say no sometimes for the yes to be meaningful.”
Although it’s possible, in principle, that an intervention that fails in a trial might succeed in a different patient population, it’s unlikely, Francis said. Companies spend a lot of money to select the patients in whom the intervention is most likely to work, so individual physicians commenting on a trial are not in a position to identify a better study population based on personal experience, he observed. That’s because those physicians lack the benefits of a sufficiently large number of patients, randomization, and blinding of outcomes.
“So, you are evaluating the person having the therapy after having decided to give them the therapy, and of course that is the biggest conflict because you always convince yourself that you made the right decision,” Francis said.
The bottom line, Francis said, is that “the people who should be neutral third-party observers who you think would be able to say [an intervention] doesn’t work cannot do so, and it is therefore dangerous to use their reported opinions in [news] articles as a basis for deciding what to do with your patients. . . . For a negative trial, don’t look at the KOL comments because they will just confuse you.”
Hartley A, Shah M, Nowbar AN, et al. Key opinion leaders’ guide to spinning a disappointing clinical trial result. BMJ. 2018;363:k5207.
- Hartley and Francis report no relevant conflicts of interest.