Abstract
Temporal domain generalization is crucial for forecasting sensor data because most sensor-generated time series are non-stationary and evolve over time. However, temporal dynamics vary in scale, semantics, and structure, producing distribution shifts over which a single model cannot easily generalize. Moreover, conflicts between domain-specific temporal patterns and limited model capacity make it difficult to learn shared parameters that work universally. To address these challenges, we propose an ensemble learning framework that leverages multiple domain-specific models to improve temporal domain generalization for sensor data forecasting. We first segment the original sensor time series into distinct temporal tasks to better handle the distribution shifts inherent in sensor measurements, and then apply a meta-learning strategy to extract representations shared across these tasks. Specifically, during meta-training, a recurrent encoder combined with variational inference captures contextual information for each task, which is used to generate task-specific model parameters; relationships among tasks are modeled via a self-attention mechanism. For each query, the predictions of all previously learned task-specific models are adaptively reweighted to produce the final forecast. At inference time, predictions are generated directly through the learned ensemble mechanism without additional tuning. Extensive experiments on public sensor datasets demonstrate that our method significantly improves generalization when forecasting across unseen sensor segments.
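To make the adaptive reweighting step concrete, the following is a minimal sketch of an attention-based ensemble combination; all tensor names, shapes, and the temperature parameter are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def ensemble_forecast(query_ctx, task_ctx, task_preds, temperature=1.0):
    """Adaptively reweight task-specific forecasts for one query.

    Shapes (hypothetical):
      query_ctx  : (d,)          context embedding of the query segment
      task_ctx   : (T, d)        context embeddings of the T learned temporal tasks
      task_preds : (T, horizon)  forecast produced by each task-specific model
    """
    # Attention scores: similarity between the query context and each task context.
    scores = task_ctx @ query_ctx / temperature   # (T,)
    # Normalize into ensemble weights over the T task-specific models.
    weights = F.softmax(scores, dim=0)            # (T,)
    # Final forecast: attention-weighted combination of task-specific predictions.
    return weights @ task_preds                   # (horizon,)
```

Under this sketch, inference requires no additional tuning: the query's context embedding alone determines how the previously learned task-specific models are weighted.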