BMJ Oncology (Nov 2024)

From prediction to practice: mitigating bias and data shift in machine-learning models for chemotherapy-induced organ dysfunction across unseen cancers

  • Heather Shaw,
  • Pinkie Chambers,
  • Matthew Watson,
  • Luke Steventon,
  • James Harmsworth King,
  • Angelo Ercia,
  • Noura Al Moubayed

DOI
https://doi.org/10.1136/bmjonc-2024-000430
Journal volume & issue
Vol. 3, no. 1

Abstract

Objectives Routine monitoring of renal and hepatic function during chemotherapy ensures that treatment-related organ damage has not occurred and that clearance of subsequent treatment is not impaired; however, the frequency and timing of this monitoring are not optimised. Concerns about model bias and data heterogeneity have hampered the deployment of machine learning (ML) in clinical practice. This study aimed to develop models that could support individualised decisions on the timing of renal and hepatic monitoring, while exploring the effect of data shift on model performance.

Methods and analysis We used retrospective data from three UK hospitals to develop and validate ML models predicting unacceptable rises in creatinine/bilirubin after cycle 3 for patients undergoing treatment for breast, colorectal, lung or ovarian cancer, or diffuse large B-cell lymphoma.

Results We extracted data for 3614 patients with no missing blood test results across cycles 1–6 of chemotherapy treatment. We improved on previous work by including predictions after cycle 3. Optimised for sensitivity, the models achieved F2 scores of 0.7773 (bilirubin) and 0.6893 (creatinine) on unseen data. Performance was consistent on tumour types unseen during training (F2 bilirubin: 0.7423; F2 creatinine: 0.6820).

Conclusion Our technique highlights the effectiveness of ML in clinical settings and demonstrates its potential to improve the delivery of care. Notably, our ML models generalise to unseen tumour types. We propose gold-standard bias-mitigation steps for ML models: evaluation on multisite data, thorough analysis of the patient population, and both formalised bias measures and comparisons of model performance across patient subgroups. We also demonstrate that data aggregation techniques can have unintended consequences for model bias.
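Two technical devices in the Results are worth making concrete: the F2 score, which weights recall (sensitivity) twice as heavily as precision, F_beta = (1 + beta^2)·P·R / (beta^2·P + R) with beta = 2, and evaluation on tumour types held out of training. The sketch below is illustrative only, not the authors' code: the features, labels and logistic-regression model are synthetic stand-ins, and scikit-learn's LeaveOneGroupOut is one standard way to realise the unseen-tumour-type evaluation the abstract describes.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import fbeta_score
    from sklearn.model_selection import LeaveOneGroupOut

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: a feature matrix (e.g., routine blood results from
    # early cycles), a binary label (1 = unacceptable creatinine/bilirubin
    # rise after cycle 3) and a tumour-type label per patient.
    n = 300
    X = rng.normal(size=(n, 6))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0.8).astype(int)
    tumour = rng.choice(["breast", "colorectal", "lung", "ovarian", "DLBCL"], size=n)

    # Hold out one tumour type at a time: train on the remaining types, test
    # on the unseen type, and score with F2 (beta=2 weights recall twice as
    # much as precision, matching the abstract's sensitivity-first focus).
    logo = LeaveOneGroupOut()
    for train_idx, test_idx in logo.split(X, y, groups=tumour):
        model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        f2 = fbeta_score(y[test_idx], model.predict(X[test_idx]),
                         beta=2, zero_division=0)
        held_out = tumour[test_idx][0]
        print(f"unseen tumour type {held_out:>10}: F2 = {f2:.3f} (n={len(test_idx)})")

The same per-group loop doubles as the subgroup performance comparison the Conclusion proposes as a bias-mitigation step: replacing the tumour-type labels with any patient subgroup of interest yields per-subgroup F2 scores that can be compared against the overall figure.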