Communications Medicine (Jan 2024)

Generative deep learning furthers the understanding of local distributions of fat and muscle on body shape and health using 3D surface scans

  • Lambert T. Leong,
  • Michael C. Wong,
  • Yong E. Liu,
  • Yannik Glaser,
  • Brandon K. Quon,
  • Nisa N. Kelly,
  • Devon Cataldi,
  • Peter Sadowski,
  • Steven B. Heymsfield,
  • John A. Shepherd

DOI
https://doi.org/10.1038/s43856-024-00434-w
Journal volume & issue
Vol. 4, no. 1
pp. 1–9

Abstract

Background
Body shape, an intuitive health indicator, is deterministically driven by body composition. We developed and validated a deep learning model that generates accurate dual-energy X-ray absorptiometry (DXA) scans from three-dimensional optical body scans (3DO), enabling compositional analysis of the whole body and specified subregions. Previous works on generative medical imaging models lack quantitative validation and report only image-quality metrics.

Methods
Our model was pretrained in a self-supervised manner on two large clinical DXA datasets and fine-tuned using the Shape Up! Adults study dataset. Model-predicted scans from a holdout test set were evaluated for compositional accuracy using clinical commercial DXA software.

Results
Predicted DXA scans achieve R² values of 0.73, 0.89, and 0.99 and RMSEs of 5.32, 6.56, and 4.15 kg for total fat mass (FM), fat-free mass (FFM), and total mass, respectively. Custom subregion analysis yields R² values of 0.70–0.89 for left and right thigh composition. We demonstrate the ability of the models to produce quantitatively accurate visualizations of soft tissue and bone, confirming a strong relationship between body shape and composition.

Conclusions
This work highlights the potential of generative models in medical imaging and reinforces the importance of quantitative validation for assessing their clinical utility.
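The two validation metrics reported above, R² and RMSE, can be sketched as follows. This is an illustrative example only, not the authors' code; the fat-mass values are hypothetical placeholders, not data from the study.

```python
import math

def r_squared(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    # Root-mean-square error, in the same units as the inputs (kg here)
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

# Hypothetical reference vs. model-predicted total fat mass (kg)
reference = [20.1, 35.4, 28.0, 42.3, 18.7]
predicted = [21.0, 33.9, 29.2, 40.8, 19.5]

print(f"R^2  = {r_squared(reference, predicted):.2f}")
print(f"RMSE = {rmse(reference, predicted):.2f} kg")
```

RMSE keeps the units of the measurement (kilograms here), which is why the abstract reports it alongside the unitless R².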