Abstract
Understanding the determinants of protein structural stability remains a central challenge in computational biology. While recent prediction models achieve high accuracy, they offer limited insight into why specific conformations persist or how topology and packing confer robustness. We introduce the Support Field Neural Representation Learner (SF-NRL), a framework that represents stability as a learnable scalar field shaped by persistent topological motifs and local density. SF-NRL integrates persistent homology, kernel density estimation, and equivariant geometric encoders to predict residue-level support values. These values correlate with biophysical markers such as RMSF and B-factors, capture fold-level motifs, and generalize to unseen structural classes. Unlike methods based on contact maps or empirical potentials, SF-NRL yields an interpretable, energy-like landscape that reflects intrinsic stability. This work provides a theoretical and computational foundation for stability-aware modeling, linking topological insight with deep representation learning.