PLoS ONE (Jan 2022)

Large-scale design and refinement of stable proteins using sequence-only models

  • Jedediah M. Singer,
  • Scott Novotney,
  • Devin Strickland,
  • Hugh K. Haddox,
  • Nicholas Leiby,
  • Gabriel J. Rocklin,
  • Cameron M. Chow,
  • Anindya Roy,
  • Asim K. Bera,
  • Francis C. Motta,
  • Longxing Cao,
  • Eva-Maria Strauch,
  • Tamuka M. Chidyausiku,
  • Alex Ford,
  • Ethan Ho,
  • Alexander Zaitzeff,
  • Craig O. Mackenzie,
  • Hamed Eramian,
  • Frank DiMaio,
  • Gevorg Grigoryan,
  • Matthew Vaughn,
  • Lance J. Stewart,
  • David Baker,
  • Eric Klavins

Journal volume & issue
Vol. 17, no. 3

Abstract


Engineered proteins generally must possess a stable structure in order to achieve their designed function. Stable designs, however, are astronomically rare within the space of all possible amino acid sequences. As a consequence, many designs must be tested computationally and experimentally in order to find stable ones, which is expensive in terms of time and resources. Here we use a high-throughput, low-fidelity assay to experimentally evaluate the stability of approximately 200,000 novel proteins. These include a wide range of sequence perturbations, providing a baseline for future work in the field. We build a neural network model that predicts protein stability given only the amino acid sequence, and compare its predictions to the assayed values. We also report a second network model that generates the amino acid sequences of novel stable proteins given requested secondary structures. Finally, we show that the predictive model, despite weaknesses including a noisy data set, can be used to substantially increase the stability of both expert-designed and model-generated proteins.
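The core idea of a sequence-only stability predictor can be illustrated with a deliberately minimal sketch: encode each amino acid sequence numerically and map it to a scalar stability score. The encoding, the linear readout, and all names below are hypothetical simplifications for illustration, not the architecture reported in the paper.

```python
import numpy as np

# Canonical 20 amino acids; index lookup for one-hot encoding.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}


def one_hot(seq: str) -> np.ndarray:
    """Encode an amino acid sequence as a (length, 20) one-hot matrix."""
    x = np.zeros((len(seq), 20))
    for i, aa in enumerate(seq):
        x[i, AA_INDEX[aa]] = 1.0
    return x


def predict_stability(seq: str, w: np.ndarray, b: float) -> float:
    """Toy sequence-only predictor: average the per-position one-hot
    vectors into a 20-dim composition vector, then apply a linear
    readout to get a scalar stability score. A real model would learn
    w and b (and far richer features) from assay data."""
    composition = one_hot(seq).mean(axis=0)  # shape (20,)
    return float(composition @ w + b)
```

In practice such a model would be trained by regressing predicted scores against the high-throughput assay measurements; the sketch only shows the input-to-score mapping that "sequence-only" refers to.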