Abstract
The subset sum problem is a classical NP-hard problem. Various methods have been developed to solve it, including backtracking, dynamic programming, branch-and-bound, and Monte Carlo methods. In recent years, researchers have proposed several neural-network-based methods for combinatorial optimization problems, with promising results. However, little research exists on the performance guarantees of recurrent neural networks (RNNs) applied to the subset sum problem. In this paper, we present the first investigation into the performance guarantees of RNNs for solving the subset sum problem. We develop a construction method for RNNs that computes both exact and approximate solutions of subset sum problems, and we rigorously define the mathematical model of each hidden layer. Furthermore, we strictly prove the correctness of the proposed RNNs through mathematical reasoning and thoroughly analyze their performance. In particular, we prove that w_NN ≥ w_OPT(1 − ε), i.e., the error between the approximate solutions obtained by the proposed ASS-NN model and the actual optimal solutions is small and consistent with theoretical expectations. Finally, the validity of the RNNs is verified through a series of examples, in which the actual error of the approximate solution aligns closely with the theoretical error bound. In addition, our research reveals that recurrence relations in dynamic programming can effectively simulate the process of constructing solutions.
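For reference, the classical dynamic-programming recurrence and the trimmed-list approximation scheme that underlie the (1 − ε) guarantee can be sketched as follows. This is a hypothetical illustration of the baseline algorithms only, not the paper's ASS-NN construction; the function names and the trimming parameter δ = ε/(2n) are assumptions.

```python
def best_subset_sum(weights, capacity):
    # Exact DP recurrence: reachable[s] is True iff some subset sums to s.
    reachable = [True] + [False] * capacity
    for w in weights:
        for s in range(capacity, w - 1, -1):
            reachable[s] = reachable[s] or reachable[s - w]
    return max(s for s in range(capacity + 1) if reachable[s])

def approx_subset_sum(weights, capacity, eps):
    # Trimmed-list scheme: keep one representative per (1 + delta) band,
    # which yields a result w with w >= (1 - eps) * OPT.
    sums = [0]
    delta = eps / (2 * len(weights))
    for w in weights:
        merged = sorted(set(sums + [s + w for s in sums if s + w <= capacity]))
        trimmed, last = [], -1.0
        for s in merged:
            if s > last * (1 + delta):  # drop sums too close to the last kept one
                trimmed.append(s)
                last = s
        sums = trimmed
    return max(sums)
```

For example, `best_subset_sum([3, 34, 4, 12, 5, 2], 9)` finds the exact optimum 9 (from 3 + 4 + 2), and the trimmed variant returns a value within the stated (1 − ε) factor of it.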