Abstract
MOTIVATION: Protein language models (PLMs) have transformed protein research by learning rich representations from sequence data alone, yet they largely ignore the wealth of structural information now available through advances in structure prediction. Current methods that incorporate structural data often require substantial computational resources and complex architectures, limiting their practical adoption. We present a novel joint sequence and structure embedding method that is both computationally and parameter efficient while maintaining high performance. Our approach introduces a lightweight integration framework in which specialized structural adapters augment the self-attention of a pretrained sequence transformer, enabling structural knowledge to be incorporated seamlessly into existing PLMs.

RESULTS: The method is remarkably efficient, requiring only modest pretraining with a standard masked language modeling objective on 542K protein structures, three orders of magnitude less data than is used to train PLMs. Despite this lightweight approach, our joint embeddings consistently outperform sequence-only models such as ESM-2 and achieve results comparable to more complex structure-based methods that use significantly more parameters and computational resources. This work establishes a new paradigm for protein representation learning that balances performance with practical constraints. By providing computationally efficient joint sequence-structure embeddings, we offer the scientific community an accessible tool that captures both sequential and structural protein information without the computational overhead typically associated with structure-aware models.

AVAILABILITY AND IMPLEMENTATION: Code and links to checkpoints are available at https://github.com/BorgwardtLab/PST.
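The sketch below is a minimal, hypothetical illustration, assuming a PyTorch setting, of how a lightweight structural adapter could inject per-residue structural features into the self-attention of a frozen pretrained sequence transformer; it is not the authors' released implementation. All class names, dimensions, and the structural feature tensor (struct_feats) are illustrative assumptions; the actual PST code is available in the linked repository.

```python
# Hypothetical sketch of a structural adapter feeding a frozen PLM attention layer.
# Names (StructuralAdapter, StructureAwareAttention, struct_feats) are illustrative only.
import torch
import torch.nn as nn


class StructuralAdapter(nn.Module):
    """Small bottleneck MLP mapping per-residue structural features
    into the hidden space of a pretrained sequence transformer."""

    def __init__(self, struct_dim: int, hidden_dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(struct_dim, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_dim)
        self.act = nn.GELU()

    def forward(self, struct_feats: torch.Tensor) -> torch.Tensor:
        # struct_feats: (batch, seq_len, struct_dim) per-residue structure encoding
        return self.up(self.act(self.down(struct_feats)))


class StructureAwareAttention(nn.Module):
    """Wraps a frozen pretrained self-attention layer; the adapter output is
    added to the token representations before attention is computed, so only
    the small adapter needs to be trained."""

    def __init__(self, pretrained_attn: nn.MultiheadAttention, adapter: StructuralAdapter):
        super().__init__()
        self.attn = pretrained_attn
        for p in self.attn.parameters():
            p.requires_grad = False  # keep the pretrained PLM weights fixed
        self.adapter = adapter

    def forward(self, seq_hidden: torch.Tensor, struct_feats: torch.Tensor) -> torch.Tensor:
        h = seq_hidden + self.adapter(struct_feats)  # inject structural signal
        out, _ = self.attn(h, h, h, need_weights=False)
        return out
```

Under these assumptions, only the adapter parameters are updated during structure pretraining, which is one way the parameter and compute footprint described in the abstract could be kept small relative to retraining a full structure-aware model.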