Journal of Patient-Reported Outcomes (Jun 2020)
Randomized comparative study of child and caregiver responses to three software functions added to the Japanese version of the electronic Pediatric Quality of Life Inventory (ePedsQL) questionnaire
Abstract
Background: Patient-reported outcomes (PROs) are reports of the status of a patient’s health condition, health behavior, or experience with healthcare that come directly from the patient, without interpretation of the patient’s response by a clinician or any other external party. Many PROs, such as the Pediatric Quality of Life Inventory (PedsQL), were originally administered in paper-and-pencil format and are now available as electronic versions (ePROs). Although an ePRO could simply replicate the structure of its paper version, we developed an alternative ePedsQL incorporating three software functions: 1) a non-forcing non-response alert, 2) a conditional question branch that displays the School Functioning Scale only for (pre)school children, and 3) a vertical item-by-item display for small-screen devices. This report evaluates the effect of these functions on item non-response rate, survey completion time, and user experience.
Methods: All surveys were administered in online/computer mode. We compared the dynamic format, which contained the three functions, with the basic format in a randomized comparative study of 2803 children and 6289 caregivers in Japan.
Results: The non-response alert lowered the item non-response rate (from 0.338% to 0.046%; t = −4.411, p < 0.001 by generalized linear mixed model analysis). The conditional question branch had mixed effects on survey completion time depending on the respondents’ age. Surprisingly, respondents rated the vertical item-by-item display for handheld devices as less legible than the matrix format. Furthermore, multigroup structural equation modelling showed that the same factor configuration fit both formats acceptably (CFI 0.933, RMSEA 0.060, SRMR 0.038), but the errors of the observed variables were larger for the dynamic format than for the basic format.
Conclusions: We confirmed the robustness of the ePedsQL across formats; the item non-response rate was very low even without the alert. The conditional branch and item-by-item display were effective but not necessary for all populations. Our findings further our understanding of how people respond to special software functions and different digital survey formats, and provide new insight into how the three tested functions might be most successfully implemented.
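The abstract describes the three software functions only at a high level and does not show the study’s implementation. As a purely illustrative sketch (not the authors’ code), the TypeScript fragment below shows one way a survey front end could implement a non-forcing non-response alert and a conditional branch that reveals the School Functioning Scale only for (pre)school children; all type and function names (Respondent, buildQuestionnaire, canLeavePage, confirmSkip, etc.) are hypothetical.

```typescript
// Illustrative sketch only; all names here are hypothetical.

interface Respondent {
  attendsSchool: boolean; // true for (pre)school children
}

interface Page {
  scale: string;
  answers: (number | null)[]; // null = item left unanswered
}

// Conditional question branch: include the School Functioning Scale
// only when the child attends school or preschool.
function buildQuestionnaire(r: Respondent): string[] {
  const scales = ["Physical Functioning", "Emotional Functioning", "Social Functioning"];
  if (r.attendsSchool) {
    scales.push("School Functioning");
  }
  return scales;
}

// Non-forcing non-response alert: warn about unanswered items,
// but let the respondent proceed anyway if they confirm.
function canLeavePage(page: Page, confirmSkip: (msg: string) => boolean): boolean {
  const unanswered = page.answers.filter((a) => a === null).length;
  if (unanswered === 0) {
    return true;
  }
  return confirmSkip(
    `${unanswered} item(s) on the ${page.scale} scale are unanswered. Continue anyway?`
  );
}
```

In a browser, confirmSkip could simply be window.confirm; the alert stays non-forcing because the respondent can still continue with items left blank, which is consistent with the low non-response rates reported even without the alert.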