Abstract
In this paper, we address the challenges of inference and learning with a Gaussian process regression model when the number of observations is large (N ≫ 1). First, we propose a flexible construction of well-adapted covariances derived from specific differential operators. Second, we prove its convergence and show that its computational cost scales as O(Nm²) for inference and O(m³) for learning, instead of O(N³) for a canonical Gaussian process, where N ≫ m. Moreover, we develop an implementation that requires only O(m²) memory instead of O(N²). Finally, we demonstrate the effectiveness of the proposed method through simulation studies and experiments on real data, and we conduct a comparative study positioning it relative to state-of-the-art methods.
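To illustrate where the O(Nm²) inference and O(m²) memory scaling can come from, the following is a minimal sketch of a generic reduced-rank GP regression with m basis functions. It is not the paper's construction: it assumes a standard sinusoidal (Laplacian-eigenfunction) basis on an interval and a squared-exponential spectral density, and all function and parameter names are illustrative.

```python
# Illustrative sketch (NOT the paper's method): reduced-rank GP regression.
# With m basis functions, forming the m x m system costs O(N m^2), solving
# it costs O(m^3), and memory is O(m^2) for the system matrix, versus
# O(N^3) time / O(N^2) memory for an exact GP.
import numpy as np

def reduced_rank_gp_fit(X, y, m=20, L=2.0, noise=0.1, ell=0.5, sf=1.0):
    """Fit a rank-m GP on [-L, L] using sinusoidal basis functions
    (eigenfunctions of the Laplacian with Dirichlet boundary conditions)."""
    j = np.arange(1, m + 1)
    lam = (np.pi * j / (2 * L)) ** 2               # Laplacian eigenvalues
    def phi(x):                                    # basis matrix, shape (N, m)
        return np.sin(np.sqrt(lam) * (x[:, None] + L)) / np.sqrt(L)
    # Squared-exponential spectral density evaluated at sqrt(lam)
    s = sf ** 2 * np.sqrt(2 * np.pi) * ell * np.exp(-0.5 * ell ** 2 * lam)
    Phi = phi(X)
    # m x m normal equations: the only large matrix product, O(N m^2)
    A = Phi.T @ Phi + noise ** 2 * np.diag(1.0 / s)
    w = np.linalg.solve(A, Phi.T @ y)              # O(m^3) solve
    return lambda Xs: phi(Xs) @ w                  # posterior-mean predictor

# Usage: N = 2000 observations, m = 20 basis functions
rng = np.random.default_rng(0)
X = rng.uniform(-1.5, 1.5, 2000)
y = np.sin(3 * X) + 0.1 * rng.standard_normal(X.size)
predict = reduced_rank_gp_fit(X, y)
print(predict(np.array([0.5])))                    # close to sin(1.5)
```

The key design point shared by this family of methods is that the N×N covariance matrix is never formed: only the m×m system A appears in memory, and N enters the cost only through the product `Phi.T @ Phi`.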