Abstract
We present LazyNet, a compact one-step neural-ODE model for single-cell CRISPR activation/interference (A/I) that operates directly on two-snapshot ("pre → post") measurements and yields parameters with clear mechanistic meaning. The core log-linear-exp residual block represents multiplicative effects exactly, so synergistic multi-locus responses appear as explicit components rather than opaque composites. On a 53k-cell × 18k-gene neuronal Perturb-seq matrix, a three-replica LazyNet ensemble trained under a matched 1 h budget achieved strong threshold-free ranking and competitive accuracy (genome-wide r ≈ 0.67) while running entirely on CPUs. For comparison, we instantiated transformer (scGPT-style) and state-space (RetNet/CellFM-style) architectures and trained them from scratch (random initialization, no large-scale pretraining, no external data) on the same dataset, within the same 1 h cap, on a GPU platform. Under these strictly controlled, low-data conditions, LazyNet matched or exceeded their predictive performance while using far fewer parameters and resources. A T-cell screen, included solely to test generalization, showed the same ranking advantage under an identical evaluation pipeline. Beyond prediction, LazyNet exposes directed, local elasticities: averaging Jacobians across replicas yields a consensus interaction matrix from which compact subgraphs are extracted and evaluated at the module level. The resulting networks show coherent enrichment against authoritative resources (large-scale co-expression and curated functional associations) and concordance with orthogonal GPX4-knockout proteomes, recovering known ferroptosis regulators and nominating testable links in a lysosomal-mitochondrial-immune module. These results position LazyNet as a practical option for from-scratch, low-data CRISPR A/I studies where large-scale pretraining of foundation models is not feasible.
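As a minimal sketch of why a log-linear-exp block captures multiplicative effects exactly and exposes elasticities directly, assume the block's core map takes the generic form below (the residual connection and LazyNet's specific parameterization are omitted here; this form is an illustrative assumption, not the exact model definition):

\[
f_i(x) \;=\; \exp\!\Big(b_i + \sum_j W_{ij}\,\log x_j\Big) \;=\; e^{b_i}\prod_j x_j^{\,W_{ij}},
\qquad
\frac{\partial \log f_i}{\partial \log x_j} \;=\; W_{ij},
\qquad
\bar{J}_{ij} \;=\; \frac{1}{R}\sum_{r=1}^{R} \frac{\partial \log f_i^{(r)}}{\partial \log x_j}.
\]

Under this assumption, each output is a pure power-law (multiplicative) combination of its inputs, the log-log Jacobian entries are the directed, local elasticities referred to above, and averaging those Jacobians over the R ensemble replicas gives the consensus interaction matrix from which subgraphs are extracted.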