Abstract
Recurrent neural networks (RNNs) have emerged as a prominent tool for modeling cortical function. However, their conventional architecture fundamentally lacks physiological and anatomical fidelity, which raises questions about the validity of the insights gleaned from them. We therefore develop mathematically grounded methods for incorporating both Dale's law and highly sparse connectivity motifs into the RNN training pipeline, such that the performance of our constrained models empirically matches that of RNNs trained without any constraints. We then demonstrate the utility of these methods for inferring multi-regional interactions by training RNN models with data-driven, cell type-specific connectivity constraints to reconstruct two-photon calcium imaging data, recorded across multiple cortical layers and brain areas, from mice during visual behavior. The interactions inferred by our models corroborate experimental findings, consistent with the theory of predictive coding, across both long and short timescales.