Unreliable Narrators: Patients Both Over- and Underreport Hospitalizations After Acute MI
Patients can’t be counted on to accurately self-report hospitalization data after acute MI, according to an analysis of the TRANSLATE-ACS study population that found substantial numbers of patients both understating and exaggerating their number of return trips within 1 year of discharge.
Many clinical trial designs rely on patients to tell researchers about events like hospitalizations over a certain period of time, which can then be validated by various means. But the accuracy of this method has never been adequately tested, say the study authors led by Arun Krishnamoorthy, MD, of Duke Clinical Research Institute (Durham, NC). Their findings “underscore the challenge of relying on follow-up that is patient reported,” and moreover, they note, using this kind of data in clinical trials “may lead to major inaccuracies in outcomes assessment.”
Of 10,643 patients in TRANSLATE-ACS who were hospitalized and treated for acute MI between April 2010 and October 2012, 43% reported a total of 7,734 rehospitalizations within 1 year (53% of which were reportedly unplanned). After reviewing medical billing records, however, the investigators confirmed only 6,786 rehospitalizations in 47% of the population—even after checking admission dates within 7 days before and after, and hospitals within 60 miles of, what the patient originally reported in order to account for potential errors.
While some hospitalizations were reported by patients and not medical records, and some vice versa, the cumulative incidence rate of rehospitalization was higher when identified by patients as opposed to billing records (43% vs 37%; P < .001).
Overall, 72% of patients accurately reported their number of rehospitalizations, but 18% overreported by a mean of 1.3 events—with some overstating by as many as 7—and 10% underreported by a mean of 1.5 events—with some omitting as many as 14. Underreporters were more likely to be older, women, African-American, or unemployed compared with those who accurately reported. Notably, overreporters were also more likely to be women or unemployed.
MACE and its individual components (MI, stroke, or unplanned coronary revascularization) were all more likely for underreporters compared with those who accurately reported hospitalizations within 1 year (OR 7.03; 95% CI 6.01-8.12). Similarly, MACE was also more likely in overreporters than in accurate reporters (OR 1.82; 95% CI 1.59-2.09).
Lastly, when the population was narrowed to only those patients who were actually rehospitalized in the year following discharge according to medical billing records (n = 5,015), 38% were overreporters and 21% were underreporters. MACE remained more likely among underreporters than accurate reporters (OR 1.30), while overreporters were actually found to have lower risk (OR 0.34).
Verification Is Costly, Not Always Definitive
Krishnamoorthy and colleagues point out that relying on patient reporting in this population to identify events “would have led to higher rates of rehospitalization, but not recurrent MI,” since agreement between patient reports and medical records on MI within 1 year was similar.
But verifying patient-reported events is “costly and typically requires intensive study resources” that many research groups do not have, they write. “Moreover, it is unclear whether patient report of an event always leads to definitive documentation of an actual event or whether important events are missed because of inaccurate patient recall.”
Going forward, trial designers will be challenged to balance the accuracy of their data against the efficiency of their resources, and “therefore, the practice of using patient report in clinical studies warrants critical appraisal,” according to the authors. Underreporting costs researchers money and time spent making sure events are not missed, while overreporting “may ultimately lead to unnecessary expenditure of trial resources,” they add.
Identifying Inaccurate Reporters
Also, the fact that the United States lacks a single-payer system “impedes the ability to centrally monitor longitudinal events,” the authors write. What might help, they say, would be identifying the patients who are likely to underreport events and building “safety nets” around them in practical study designs. Examples include “planned queries of electronic health records or health care providers for additional screening of events.”
Addressing overreporting will be more challenging, Krishnamoorthy and colleagues say, as overreporters “may be more difficult to distinguish from accurate reporters.” While the current data cannot pinpoint exact markers of overreporting, they write, “patient input into trial design may be warranted to further facilitate effective and efficient data collection, as well as to understand how patients process adverse events.”
A universal health care record could also go a long way toward streamlining accurate data collection, the authors write.
Krishnamoorthy A, Peterson ED, Knight JD, et al. How reliable are patient-reported rehospitalizations? Implications for the design of future practical clinical studies. J Am Heart Assoc. 2016;5:e002695.
- Krishnamoorthy reports receiving research funding from Novartis Pharmaceutical Corporation and travel support from Medtronic.