Abstract
Sequence learning is a crucial aspect of intelligence research, and sequence prediction tasks are commonly used to evaluate sequence learning models. This paper introduces and tests a novel sequence learning model that mimics the structure of neocortical mini-columns and is grounded in Non-Axiomatic Logic, which makes its behavior interpretable. The model's learning mechanism comprises three steps, hypothesizing, revising, and recycling, enabling it to operate effectively under insufficient knowledge and resources. The model's performance is assessed on synthetic sequence prediction datasets. The results show that the model consistently achieves high accuracy across various levels of difficulty, reaching the theoretical maximum. Furthermore, the model's concept-centered representation avoids catastrophic forgetting, a finding supported by the experimental results.