Abstract
The ability to continuously anticipate future events is a hallmark of biological vision, yet standard deep learning models often struggle with long-term coherence because they rigidly discretize time. In this paper, we propose NeuralVisionNet, a probabilistic framework that models visual anticipation as a continuous generative process, drawing inspiration from the predictive coding mechanisms of the hippocampal-entorhinal circuit. Our architecture combines hierarchical Video Swin Transformers with Attentive Neural Processes, employing a novel grid-like coding scheme to represent spatiotemporal dynamics as a continuous function of time rather than a fixed sequence of frames. We further introduce a variational global latent variable that encodes the "event gist," enforcing semantic consistency over extended horizons. Extensive evaluations on the KTH, Human3.6M, and UCF-101 benchmarks demonstrate that NeuralVisionNet significantly outperforms state-of-the-art stochastic baselines in perceptual quality (Fréchet Video Distance) and structural fidelity (SSIM), offering a robust computational proof-of-concept for continuous, bio-inspired visual forecasting.
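To make the pipeline concrete, the sketch below illustrates the flow the abstract describes: context frames tagged with arbitrary (continuous) timestamps are encoded, a variational global latent summarizes the event gist, and future query times attend to the encoded context, Attentive-Neural-Process style. Everything in it is an illustrative assumption, not the paper's implementation: a small CNN stands in for the Video Swin backbone, a sinusoidal time code stands in for the grid-like coding scheme, and all names and sizes (`AnticipationSketch`, `d=64`, 64x64 frames) are hypothetical.

```python
# Minimal PyTorch sketch of continuous-time video anticipation
# (hypothetical stand-ins throughout; not the authors' code).
import torch
import torch.nn as nn

def time_code(t, dim=64):
    # Continuous-time positional code: a toy stand-in for grid-like coding.
    # t: (B, N) times -> (B, N, dim)
    freqs = torch.exp(torch.linspace(0, 4, dim // 2))
    angles = t.unsqueeze(-1) * freqs            # (B, N, dim/2)
    return torch.cat([angles.sin(), angles.cos()], dim=-1)

class AnticipationSketch(nn.Module):
    def __init__(self, d=64):
        super().__init__()
        # Frame encoder: a small CNN stands in for the Video Swin backbone.
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=4), nn.GELU(),
            nn.Conv2d(32, d, 4, stride=4), nn.GELU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Variational global latent ("event gist"): mean / log-variance heads.
        self.to_mu = nn.Linear(d, d)
        self.to_logvar = nn.Linear(d, d)
        # ANP-style cross-attention: target times attend to context frames.
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.t_proj = nn.Linear(d, d)            # project time codes
        self.dec = nn.Linear(2 * d, 3 * 64 * 64) # decode to a 64x64 frame

    def forward(self, ctx_frames, ctx_t, tgt_t):
        B, N = ctx_frames.shape[:2]
        r = self.enc(ctx_frames.flatten(0, 1)).view(B, N, -1)  # per-frame reps
        r = r + self.t_proj(time_code(ctx_t, r.shape[-1]))     # tag with time
        # Global latent z ~ q(z | context) via the reparameterization trick.
        pooled = r.mean(dim=1)
        mu, logvar = self.to_mu(pooled), self.to_logvar(pooled)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        # Query the context at arbitrary target times (continuous in t).
        q = self.t_proj(time_code(tgt_t, r.shape[-1]))
        attended, _ = self.attn(q, r, r)                       # (B, M, d)
        zs = z.unsqueeze(1).expand(-1, attended.shape[1], -1)
        out = self.dec(torch.cat([attended, zs], dim=-1))
        return out.view(B, -1, 3, 64, 64), mu, logvar

model = AnticipationSketch()
frames = torch.randn(2, 5, 3, 64, 64)         # 5 context frames per sequence
ctx_t = torch.rand(2, 5).sort(dim=1).values   # their continuous timestamps
tgt_t = ctx_t[:, -1:] + torch.rand(2, 3)      # 3 future query times
pred, mu, logvar = model(frames, ctx_t, tgt_t)
print(pred.shape)                             # torch.Size([2, 3, 3, 64, 64])
```

Because the decoder is conditioned on a query time rather than a frame index, the same trained model can, in principle, be sampled at any future instant; the global latent `z` is what would carry sequence-level semantics across those samples.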