Performance Guarantees of Recurrent Neural Networks for the Subset Sum Problem


Abstract

The subset sum problem is a classical NP-hard problem. Various methods have been developed to address it, including backtracking, dynamic programming, branch-and-bound, and Monte Carlo methods. In recent years, researchers have proposed several neural-network-based methods for solving combinatorial optimization problems, which have shown commendable performance. However, there has been little research on the performance guarantees of recurrent neural networks (RNNs) applied to the subset sum problem. In this paper, we investigate the performance guarantees of RNNs for the subset sum problem for the first time. We develop a construction method for RNNs that computes both exact and approximate solutions of subset sum instances, and we rigorously define the mathematical model of each hidden layer. Furthermore, the correctness of the proposed RNNs is proven through mathematical reasoning, and their performance is thoroughly analyzed. In particular, we prove that $w_{\mathrm{NN}} \ge w_{\mathrm{OPT}}(1-\varepsilon)$, i.e., the error between the approximate solution obtained by the proposed ASS-NN model and the actual optimal solution is small and consistent with theoretical expectations. Finally, the validity of the RNNs is verified through a series of examples, in which the actual error of the approximate solution aligns closely with the theoretical error bound. In addition, our research shows that the recurrence relations of dynamic programming can effectively simulate the process of constructing solutions.
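The abstract refers to two ingredients: the dynamic-programming recurrence that the proposed RNN layers simulate, and the $(1-\varepsilon)$ guarantee satisfied by the approximate (ASS-NN) solution. The following is a minimal Python sketch of those ingredients only, not of the paper's RNN construction itself; the function names, the example instance, and the trimming step (a standard FPTAS-style technique) are illustrative assumptions rather than details taken from the paper.

```python
def subset_sum_exact(weights, capacity):
    """Exact pseudo-polynomial DP in O(n * capacity) time.
    reachable[s] is True iff some subset of the items seen so far sums to s.
    The layer-by-layer update reachable[s] |= reachable[s - w] is the kind of
    recurrence an RNN hidden layer would have to simulate."""
    reachable = [True] + [False] * capacity
    for w in weights:
        for s in range(capacity, w - 1, -1):  # iterate downward: each item used at most once
            reachable[s] = reachable[s] or reachable[s - w]
    return max(s for s in range(capacity + 1) if reachable[s])

def subset_sum_approx(weights, capacity, eps):
    """FPTAS-style trimming: keep a sorted list of achievable sums, dropping
    any sum within a (1 + delta) factor of the previous kept one, so the list
    stays polynomial in size while the best kept sum is >= (1 - eps) * optimum."""
    n = len(weights)
    delta = eps / (2 * n)  # per-item trimming tolerance
    sums = [0]
    for w in weights:
        merged = sorted(set(sums + [s + w for s in sums if s + w <= capacity]))
        trimmed, last = [], -1.0
        for s in merged:
            if s > last * (1 + delta):  # keep s only if not too close to the last kept sum
                trimmed.append(s)
                last = s
        sums = trimmed
    return sums[-1]

if __name__ == "__main__":
    ws, cap, eps = [104, 102, 201, 101], 308, 0.4  # hypothetical instance
    exact = subset_sum_exact(ws, cap)              # 307 = 104 + 102 + 101
    approx = subset_sum_approx(ws, cap, eps)
    print(exact, approx, approx >= (1 - eps) * exact)  # guarantee holds: True
```

The downward iteration over s in the exact DP enforces the 0/1 constraint that each item is used at most once; the trimming tolerance $\delta = \varepsilon/(2n)$ bounds the multiplicative loss per item so that, after all n items, the best retained sum is within a $(1-\varepsilon)$ factor of the optimum, matching the form of the guarantee $w_{\mathrm{NN}} \ge w_{\mathrm{OPT}}(1-\varepsilon)$ stated in the abstract.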
