Clinical Epidemiology (Feb 2020)
External Validation of an Algorithm to Identify Patients with High Data-Completeness in Electronic Health Records for Comparative Effectiveness Research
Abstract
Kueiyu Joshua Lin,1,2 Gary E Rosenthal,3 Shawn N Murphy,4,5 Kenneth D Mandl,6 Yinzhu Jin,1 Robert J Glynn,1 Sebastian Schneeweiss1

1Division of Pharmacoepidemiology and Pharmacoeconomics, Department of Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA; 2Department of Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; 3Department of Internal Medicine, Wake Forest School of Medicine, Winston-Salem, NC, USA; 4Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; 5Research Information Science and Computing, Partners Healthcare, Somerville, MA, USA; 6Computational Health Informatics Program, Boston Children’s Hospital, Harvard Medical School, Boston, MA, USA

Correspondence: Kueiyu Joshua Lin, Division of Pharmacoepidemiology and Pharmacoeconomics, Department of Medicine, Brigham and Women’s Hospital, Harvard Medical School, 1620 Tremont St, Suite 3030, Boston, MA 02120, USA. Tel +1 617 278-0930; Fax +1 617 232-8602; Email [email protected]

Purpose: Electronic health record (EHR) data-discontinuity, i.e., receiving care outside of a particular EHR system, may cause misclassification of study variables. We aimed to validate an algorithm that identifies patients with high EHR data-continuity in order to reduce such bias.

Materials and Methods: We analyzed data from two EHR systems linked with Medicare claims data from 2007 through 2014, one in Massachusetts (MA, n=80,588) and the other in North Carolina (NC, n=33,207). We quantified EHR data-continuity as the Mean Proportion of Encounters Captured (MPEC) by the EHR system, using complete recording in the claims data as the reference. The prediction model for MPEC was developed in the MA system and validated in the NC system. Stratified by predicted EHR data-continuity, we quantified misclassification of 40 key variables by the Mean Standardized Difference (MSD) between the proportions of these variables based on EHR data alone vs the linked claims-EHR data.

Results: The mean MPEC was 27% in the MA system and 26% in the NC system. Predicted and observed EHR data-continuity were highly correlated (Spearman correlation=0.78 and 0.73, respectively). The misclassification (MSD) of the 40 variables was 44% smaller (95% CI: 40–48%) in patients in the predicted high EHR data-continuity cohort than in the remaining population.

Discussion: The comorbidity profiles were similar in patients with high vs low EHR data-continuity. Restricting an analysis to patients with high EHR data-continuity may therefore reduce information bias while preserving the representativeness of the study cohort.

Conclusion: We have successfully validated an algorithm that identifies a high EHR data-continuity cohort representative of the source population.

Keywords: electronic medical records, data linkage, comparative effectiveness research, information bias, continuity, external validation
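To make the MPEC metric in Materials and Methods concrete, the sketch below shows one way the per-patient proportion of claims-recorded encounters captured by the EHR could be computed and averaged. This is a minimal illustration, not the authors' code; the input layout (one row per claims encounter with a flag for whether it also appears in the EHR) and the name compute_mpec are assumptions made for the example.

import pandas as pd

def compute_mpec(encounters: pd.DataFrame) -> float:
    """Mean Proportion of Encounters Captured (MPEC).

    Assumed columns in `encounters` (one row per claims-recorded encounter):
      - 'patient_id': patient identifier
      - 'in_ehr': True if the encounter is also captured in the EHR system

    For each patient, take the proportion of claims encounters captured by the
    EHR; MPEC is the mean of these per-patient proportions.
    """
    per_patient = encounters.groupby("patient_id")["in_ehr"].mean()
    return float(per_patient.mean())

# Toy example: patient 1 has 1 of 3 encounters captured, patient 2 has 2 of 2
toy = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "in_ehr":     [True, False, False, True, True],
})
print(compute_mpec(toy))  # (1/3 + 2/2) / 2 ≈ 0.67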
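The misclassification metric can be illustrated in the same spirit: for each of the 40 variables, compare the prevalence based on EHR data alone with the prevalence based on the linked claims-EHR data using the standard two-proportion standardized difference, then average across variables. The formula and input format below are assumptions for illustration only, not taken from the paper's analytic code.

from math import sqrt

def standardized_difference(p_ehr: float, p_linked: float) -> float:
    """Absolute standardized difference between two prevalences."""
    pooled_var = (p_ehr * (1 - p_ehr) + p_linked * (1 - p_linked)) / 2
    if pooled_var == 0:
        return 0.0
    return abs(p_ehr - p_linked) / sqrt(pooled_var)

def mean_standardized_difference(pairs) -> float:
    """MSD across study variables.

    `pairs` is a list of (p_ehr, p_linked) tuples, one per variable: the
    prevalence from EHR data alone vs from the linked claims-EHR data.
    """
    return sum(standardized_difference(p, q) for p, q in pairs) / len(pairs)

# Toy example: three variables with EHR-only vs linked prevalences
print(mean_standardized_difference([(0.10, 0.15), (0.30, 0.32), (0.05, 0.05)]))

A smaller MSD within a stratum (eg, the predicted high data-continuity cohort) indicates that EHR-only ascertainment of the variables agrees more closely with the linked claims-EHR reference, ie, less misclassification.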