Health Technology Assessment (Jul 2024)

Home-monitoring for neovascular age-related macular degeneration in older adults within the UK: the MONARCH diagnostic accuracy study

  • Ruth E Hogg,
  • Robin Wickens,
  • Sean O’Connor,
  • Eleanor Gidman,
  • Elizabeth Ward,
  • Charlene Treanor,
  • Tunde Peto,
  • Ben Burton,
  • Paul Knox,
  • Andrew J Lotery,
  • Sobha Sivaprasad,
  • Michael Donnelly,
  • Chris A Rogers,
  • Barnaby C Reeves

DOI: https://doi.org/10.3310/CYRA9912
Journal volume & issue: Vol. 28, No. 32

Abstract


Background
Most treatments for neovascular age-related macular degeneration (nAMD) involve long-term follow-up of disease activity. Home monitoring would reduce the burden on patients and on those they depend on for transport, and would release clinic appointments for other patients. The study aimed to evaluate three home-monitoring tests for patients to use to detect active nAMD, compared with diagnosis of active nAMD at hospital follow-up.

Objectives
There were five objectives: (A) estimate the accuracy of three home-monitoring tests to detect active nAMD; (B) determine the acceptability of home monitoring to patients and carers and adherence to home monitoring; (C) explore whether inequalities exist in recruitment, participants' ability to self-test and their adherence to weekly testing during follow-up; (D) provide pilot data about the accuracy of home monitoring to detect conversion to nAMD in fellow eyes of patients with unilateral nAMD; and (E) describe challenges experienced when implementing home-monitoring tests.

Design
Diagnostic test accuracy cohort study, stratified by time since starting treatment.

Setting
Six United Kingdom Hospital Eye Service macular clinics (Belfast, Liverpool, Moorfields, James Paget, Southampton, Gloucester).

Participants
Patients with at least one study eye being monitored by hospital follow-up.

Reference standard
Detection of active nAMD by an ophthalmologist at hospital follow-up.

Index tests
KeepSight Journal (KSJ): paper-based near-vision tests presented as word puzzles. MyVisionTrack® (mVT): electronic test, viewed on a tablet device. MultiBit (MBT): electronic test, viewed on a tablet device. Participants provided test scores weekly. Raw scores between hospital follow-ups were summarised as averages.

Qualitative study sampling (Objective B)
Participants were sampled for interview to vary by age (younger and older than 70 years), gender, one or both eyes with nAMD, time since first treatment (as defined above) and adherence to home monitoring (test data from the two electronic tests were used to categorise participants into 'regular' and 'irregular' testers). Patients who declined to participate in MONARCH but consented to be contacted about the qualitative study, informal carers, supporters or significant others in the lives of patients, and healthcare professionals who interacted with participants at study site visits were also approached to gather their perspectives on the acceptability of home monitoring.

Statistical analysis
Objective A: The test accuracy of the index tests was estimated by fitting a logistic regression model to predict the reference standard from summary test scores for the interval between monitoring visits, adjusting for participants' baseline data. Accuracy was estimated for the primary outcome using all index test data; in sensitivity analyses using data only for the 4 weeks preceding the monitoring visit and using a reference standard based on reading centre decisions made from optical coherence tomography (OCT) images; and for the secondary outcome. Test scores were summarised as: means (MBT and mVT); medians (KSJ self-reported near visual acuity (VA), ordinal six-point scale); and proportions (KSJ-reported VA, Amsler grid and household object appearance reported as worse than baseline vs. the same or better). All four scores were fitted in the KSJ model and a single area under the receiver operating characteristic curve (AUROC) was estimated. Separate models were fitted for each test for the primary outcome, the two sensitivity analyses and the secondary outcome. Model performance was quantified by the odds ratio (OR) for the index test summary score(s) and the estimate of the AUROC, with their respective confidence intervals (CIs). AUROCs were based on predicted probabilities calculated using only the fixed effects in the models. Sensitivity, specificity, positive and negative predictive values and 95% CIs were calculated using cut-off thresholds corresponding to Youden's index for each model (the threshold maximising the sum of sensitivity and specificity). Average test scores above and below the thresholds were also calculated. Analyses took account of the structure within the data, that is, the nesting of visits and eyes within patients.
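
As an illustration of this type of analysis only (not the study's own code, which is not reproduced here), the sketch below fits a logistic regression of a binary reference standard on an interval-level summary test score and a baseline covariate, estimates the AUROC from the predicted probabilities, and derives the cut-off corresponding to Youden's index with its sensitivity and specificity. All data and column names are hypothetical, and the sketch ignores the adjustment for full baseline data and the nesting of visits and eyes within patients that the study's models accounted for.

    # Illustrative sketch only: logistic-regression test accuracy with AUROC and
    # a Youden-index cut-off. Data and column names are hypothetical.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(0)
    n = 500  # one row per monitoring interval of a study eye
    df = pd.DataFrame({
        "summary_score": rng.normal(0.0, 1.0, n),  # summarised weekly index-test scores
        "baseline_va": rng.normal(0.2, 0.2, n),    # baseline visual acuity (logMAR)
        "active_namd": rng.integers(0, 2, n),      # reference standard at the visit
    })

    X = df[["summary_score", "baseline_va"]]
    y = df["active_namd"]

    model = LogisticRegression().fit(X, y)
    prob = model.predict_proba(X)[:, 1]            # predicted probability of active nAMD

    auroc = roc_auc_score(y, prob)                 # area under the ROC curve

    # Youden's index: the threshold maximising sensitivity + specificity - 1.
    fpr, tpr, thresholds = roc_curve(y, prob)
    best = np.argmax(tpr - fpr)
    cutoff = thresholds[best]
    sensitivity, specificity = tpr[best], 1.0 - fpr[best]

    print(f"AUROC={auroc:.2f}, cut-off={cutoff:.2f}, "
          f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")

In the study itself, summary scores were averages of the weekly raw scores for the interval between monitoring visits, and AUROCs were calculated from the fixed effects of models that allowed for clustering within patients.
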
Objective B: All interviews were audio-recorded and transcribed. A directed content analysis approach based on deductive and inductive coding was used. NVivo version 12 was used to manage the data and facilitate the analysis process, which, in summary, included the following stages: (1) independent transcription, (2) data familiarisation, (3) independent coding, (4) development of an analytical framework, (5) indexing, (6) charting and (7) interpreting the data.

Objective C: Willingness in principle to participate was defined as an approached eligible patient agreeing to attend a research visit for training. Ability to perform an index test was defined as the proportion of monitoring visits for which some valid index test data were available. Adherence was defined as the proportion of weeks between monitoring visits for which some valid data for an index test were available. The ability and adherence models were fitted for each test separately at the level of the patient. Regression models estimated associations of age, sex, Index of Multiple Deprivation (IMD), stratum of time since first diagnosis and baseline visual acuity at diagnosis with the outcomes of willingness to participate, ability to perform tests and adherence to weekly testing. Associations were reported with 95% CIs. Analyses of ability and adherence took account of the nesting of visits within participants (illustrated in the sketch after this section).

Objective D: The test accuracy of the index tests for the reference standard of an ophthalmologist's classification of a fellow eye as having active disease at a monitoring visit, that is, conversion to active nAMD, was estimated by the same methods as described for Objective A. Two sensitivity analyses were carried out: (1) the same reference standard but using test data only for the 4 weeks preceding the monitoring visit; and (2) the alternative reference standard of classification of a fellow eye as having active disease based on reading centre grading of OCT scans carried out during the monitoring visits.

Objective E: This objective used descriptive statistics only.
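
The abstract does not specify how the clustering of repeated observations within patients was modelled; as a minimal sketch of one common way to allow for within-patient correlation when analysing a binary outcome such as weekly adherence, the Python example below fits a logistic generalised estimating equation (GEE) with an exchangeable working correlation. The data and column names are hypothetical, and GEE is used purely for illustration; the study's own specification (described in the full report) may differ.

    # Illustrative sketch only: a logistic GEE allowing for correlation of
    # repeated weekly observations within the same patient. Data and column
    # names are hypothetical; the study's own model specification may differ.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_patients, n_weeks = 60, 8
    patient_id = np.repeat(np.arange(n_patients), n_weeks)
    df = pd.DataFrame({
        "patient_id": patient_id,
        "adherent": rng.integers(0, 2, patient_id.size),  # valid test data that week?
        "age": np.repeat(rng.normal(75, 7, n_patients), n_weeks),
        "imd_decile": np.repeat(rng.integers(1, 11, n_patients), n_weeks),
    })

    # Logistic GEE with an exchangeable working correlation within patient.
    model = smf.gee(
        "adherent ~ age + imd_decile",
        groups="patient_id",
        data=df,
        family=sm.families.Binomial(),
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    result = model.fit()
    print(result.summary())
    print(np.exp(result.params))  # exponentiated coefficients, i.e. odds ratios

A random-effects (multilevel) logistic model would be an alternative way to respect the same nesting of visits within patients.
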
Results
The study recruited 297 patients (consented participants) between 21 August 2018 and 31 March 2020. Half of the recruited participants were first treated for nAMD 6–17 months before consenting, 28% 18–29 months before consenting and 22% 30–41 months before consenting. At the end of the study, data for at least one monitoring visit after starting to use the index tests were available for 357 study eyes in 297 patients. Data for at least one complete monitoring visit were available for 317 study eyes, including 9 second eyes that became eligible during follow-up, in 261 participants (1549 complete visits). More participants were women (58.6%). Participants' mean age was 74.9 years [standard deviation (SD) 6.6]. The mean visual acuity of study eyes (the better-seeing eye if a participant had two study eyes) was 0.2 logMAR (SD 0.2).

Objective A: Median testing frequency was three times per month. In the primary analysis, estimated AUROCs were low for all three index tests and ruled out adequate accuracy for diagnosing active nAMD without hospital review.

Objective C: Baseline characteristics were not clearly associated with ability to test or adherence (all p ≥ 0.08), except that worse IMD was associated with better adherence for the KSJ (χ2 = 12.15, p = 0.016). Recruiting site was also associated with being able to test and with adhering to weekly testing.

Objective D: There were 132 fellow eyes with data from 544 monitoring visits; 17 fellow eyes (12.9%) had nAMD recorded at one or more monitoring visits, over about 100 participant-years. This rate of conversion was higher than expected from epidemiological studies of conversion in unaffected fellow eyes, potentially because study eyes had developed nAMD longer ago. Some predictors could not be fitted in the models and estimates of associations were imprecise. The model without index test data predicted conversion better than the corresponding model for Objective A (AUROC = 0.73), and the electronic tests did not improve on this (AUROCs = 0.73 and 0.76 for MBT and mVT, respectively). The estimated AUROC for the KSJ was 0.85, owing to a strong positive association of the household object summary score with conversion (OR 15.3, p = 0.036).

Objective E: Despite two-thirds of participants having previously used a smartphone, a variety of challenges were experienced with the electronic devices while testing at home, contributing both to reduced adherence and, ultimately, to withdrawals from the study.

Strengths and limitations
The study had several strengths. Estimates of the diagnostic test accuracy of the index tests were at low risk of bias: the study population was appropriate for the intended use of the tests, and summary test scores were not available to the ophthalmologists providing the reference standard, which was judged after the index test data were collected. Limitations were as follows. The sample size was smaller than planned (less than half the target number of monitoring visits); nonetheless, 95% CIs for AUROCs were narrow (± 0.04) and the estimates were able to rule out the tests providing adequate accuracy for diagnosing active nAMD to enable patients to be monitored without hospital review. Tests were sometimes unavailable for technical reasons beyond the control of the research team. The study had no control over monitoring visits, and participants are likely to have reported their subjective visual experience to their consultants, which might have influenced the reference standard. We could not define test thresholds a priori and instead estimated AUROCs. We did not compare AUROCs between tests because of their poor accuracy.
The ways in which patients were approached and screened varied across sites, generating a site effect in analyses of potential inequalities; these variations may have reflected research staff's preconceptions about patients' capability to use the electronic tests.

Conclusions
Based on the detection of lesion activity assessed by clinicians in the clinic, we have shown that none of the index tests provides acceptable test accuracy for home monitoring in this context. Associations of increasing age and of the deprivation index for the home address with unwillingness in principle to participate, despite provision of the hardware, highlight the potential for inequality with interventions of the kind evaluated. While a proportion of patients with nAMD are willing to try home monitoring and interested in its potential, substantial practical and technological issues were encountered in implementing it, requiring a significant support infrastructure, including a study helpline.

Future work
Future research should focus on the methodological challenge of efficiently evaluating mobile health technologies in a field where new technologies constantly emerge. The clear evidence of inequalities in participation and retention should prompt research on ways to encourage participation in, and adoption of, mobile health technologies by underserved populations. Attention should also be given to methods for improving adherence and retention in longitudinal studies involving electronic testing, particularly the nature of the feedback given to participants.

Trial registration
This trial is registered as ISRCTN79058224.

Funding
This award was funded by the National Institute for Health and Care Research (NIHR) Health Technology Assessment programme (NIHR award ref: 15/97/02) and is published in full in Health Technology Assessment; Vol. 28, No. 32. See the NIHR Funding and Awards website for further award information.
