Scientific Reports (Aug 2023)

Two sequence- and two structure-based ML models have learned different aspects of protein biochemistry

  • Anastasiya V. Kulikova,
  • Daniel J. Diaz,
  • Tianlong Chen,
  • T. Jeffrey Cole,
  • Andrew D. Ellington,
  • Claus O. Wilke

DOI
https://doi.org/10.1038/s41598-023-40247-w
Journal volume & issue
Vol. 13, no. 1
pp. 1–9

Abstract

Deep learning models are seeing increased use as methods to predict mutational effects or allowed mutations in proteins. The models commonly used for these purposes include large language models (LLMs) and 3D convolutional neural networks (CNNs). These two model types have very different architectures and are commonly trained on different representations of proteins. LLMs make use of the transformer architecture and are trained purely on protein sequences, whereas 3D CNNs are trained on voxelized representations of local protein structure. While comparable overall prediction accuracies have been reported for both types of models, it is not known to what extent these models make comparable specific predictions and/or have generalized protein biochemistry in similar ways. Here, we perform a systematic comparison of two LLMs and two structure-based models (CNNs) and show that the different model types have distinct strengths and weaknesses. The overall prediction accuracies are largely uncorrelated between the sequence- and structure-based models. Overall, the two structure-based models are better at predicting buried aliphatic and hydrophobic residues, whereas the two LLMs are better at predicting solvent-exposed polar and charged amino acids. Finally, we find that a combined model that takes the individual model predictions as input can leverage these individual model strengths, resulting in significantly improved overall prediction accuracy.
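The combined model described in the final sentence is, in effect, a stacking ensemble: the per-residue amino-acid predictions of the four base models become the input features of a small second-stage classifier. The sketch below illustrates the idea on synthetic data with a logistic-regression combiner; the random probability vectors and the choice of classifier are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of a combined ("stacked") model, assuming each base model
# emits a 20-dimensional amino-acid probability vector per residue. The
# synthetic inputs and the logistic-regression combiner are assumptions for
# illustration only, not the paper's implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_residues, n_aa = 5000, 20

# Stand-ins for per-residue predictions from the four base models
# (two sequence-based LLMs, two structure-based 3D CNNs).
llm1 = rng.dirichlet(np.ones(n_aa), n_residues)
llm2 = rng.dirichlet(np.ones(n_aa), n_residues)
cnn1 = rng.dirichlet(np.ones(n_aa), n_residues)
cnn2 = rng.dirichlet(np.ones(n_aa), n_residues)
wild_type = rng.integers(0, n_aa, n_residues)  # true residue identities

# Concatenate the four probability vectors into one 80-dim feature vector
# per residue, then fit a simple classifier on top of them.
X = np.hstack([llm1, llm2, cnn1, cnn2])
X_train, X_test, y_train, y_test = train_test_split(
    X, wild_type, test_size=0.2, random_state=0
)
combined = LogisticRegression(max_iter=1000)
combined.fit(X_train, y_train)

# With real base-model outputs the combiner can exploit their complementary
# strengths; on this random synthetic data, accuracy stays near chance (~0.05).
print(f"combined-model accuracy: {combined.score(X_test, y_test):.3f}")
```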