Abstract
Computational protein design using machine learning models has advanced rapidly since the introduction of AlphaFold2, and there is now a suite of tools that enable in silico design of proteins with desired structures and properties. Most design workflows require pairing a designed backbone with a sequence that stabilizes it, and many machine learning sequence design models have been proposed. These models are trained to recover the native sequence paired with a known structure, a task whose performance is measured by native sequence recovery (NSR). Here, we demonstrate the limitations of optimizing a sequence design model only for NSR. We show that NSR is often misaligned with more important measures of model performance: the compatibility of the generated sequence with the desired fold and the ability of the model to predict the energetic effects of mutations. We introduce PottsMPNN, which is trained to generate, from a protein backbone, a Potts energy function consisting of single-residue and residue-pair terms, and we demonstrate that learning a Potts model reduces NSR but improves sequence generation and energy prediction. To further show that NSR is not the optimal metric, we trained PottsMPNN with noised backbone structures and multiple sequence alignments. In tests on held-out data, NSR decreased, but the quality of the designed sequences and energy predictions improved. By demonstrating the limitations of NSR as a training target and the effectiveness of strategies that avoid over-optimizing for it, our work provides a new direction for the sequence design field.
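To make the Potts functional form mentioned above concrete, the following is a minimal sketch of evaluating a Potts energy with single-residue fields and residue-pair couplings. The function name, array shapes, and random parameters here are illustrative assumptions, not the paper's PottsMPNN implementation.

```python
import numpy as np

def potts_energy(seq, h, J):
    """Illustrative Potts energy (not the paper's implementation):
    E(s) = sum_i h[i, s_i] + sum_{i<j} J[i, j, s_i, s_j],
    where h is (L, A) single-residue fields and J is (L, L, A, A)
    residue-pair couplings, for a sequence s of length L over an
    alphabet of A residue types (indices, e.g. 0..19 for amino acids).
    """
    L = len(seq)
    single = sum(h[i, seq[i]] for i in range(L))
    pair = sum(
        J[i, j, seq[i], seq[j]]
        for i in range(L)
        for j in range(i + 1, L)
    )
    return single + pair

# Tiny example: L = 3 positions, A = 2 residue types, random parameters.
rng = np.random.default_rng(0)
h = rng.normal(size=(3, 2))
J = rng.normal(size=(3, 3, 2, 2))
e_wt = potts_energy([0, 1, 0], h, J)

# Predicting a mutation effect under this sketch is a difference of
# two such energies (lower is more favorable by convention):
e_mut = potts_energy([0, 1, 1], h, J)
delta_e = e_mut - e_wt
```

Because the energy decomposes into sums over single positions and pairs, a single-point mutation changes only one field term and the coupling terms involving that position, which is what makes mutation-effect scoring with such a model cheap.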