Kolmogorov GAM Networks Are All You Need!


Abstract

Kolmogorov GAM (K-GAM) networks have been shown to be an efficient architecture for both training and inference. They are additive models with embeddings that are independent of the target function of interest, and they provide an alternative to Transformer architectures. They are a machine-learning counterpart of Kolmogorov's superposition theorem (KST), which provides an efficient representation of multivariate functions. Such representations are useful in machine learning for encoding dictionaries (a.k.a. "look-up" tables). KST also provides a representation based on translates of the Köppen function. The goal of our paper is to interpret this representation in a machine learning context for applications in artificial intelligence (AI). Our architecture is equivalent to a topological embedding, which is independent of the function, together with an additive layer that uses a generalized additive model (GAM). This yields a class of learning procedures with far fewer parameters than current deep learning algorithms. Implementations are parallelizable, which makes our algorithms computationally attractive. To illustrate our methodology, we use the iris dataset from statistical learning. We also show that our additive model with non-linear embedding provides an alternative to Transformer architectures, which, from a statistical viewpoint, are kernel smoothers. Additive KAN models therefore provide a natural alternative to Transformers. Finally, we conclude with directions for future research.
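To make the architecture concrete, below is a minimal sketch in Python of a K-GAM-style pipeline on the iris data mentioned in the abstract. It is an illustration under stated assumptions, not the paper's implementation: the logistic `psi` is a smooth stand-in for the Köppen function (which has its own recursive construction), the branch weights `lam`, the shift `a`, and the hinge-basis outer fit are illustrative choices, and helper names such as `kgam_features` are hypothetical.

```python
# Minimal sketch of a K-GAM-style model: a function-independent additive
# embedding (the KST inner sums) followed by a fitted additive readout.
# Assumptions: psi is a logistic surrogate for the true Köppen function;
# lam, the shift a, and the hinge basis are illustrative, not the paper's.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split

def psi(t):
    # Smooth monotone stand-in for the Köppen inner function.
    return 1.0 / (1.0 + np.exp(-t))

def kgam_features(X, a=0.1):
    # KST-style inner sums: z_q = sum_p lam_p * psi(x_p + q * a),
    # for q = 0, ..., 2n. Note the embedding never sees the target y.
    n = X.shape[1]
    lam = 2.0 ** -np.arange(1, n + 1)             # fixed branch weights
    return np.stack(
        [(lam * psi(X + q * a)).sum(axis=1) for q in range(2 * n + 1)],
        axis=1,
    )                                             # shape: (samples, 2n + 1)

def hinge_basis(Z, knots):
    # Per-branch hinge features; the linear fit over them plays the role
    # of the outer GAM function g applied additively across branches.
    return np.concatenate([np.maximum(Z - k, 0.0) for k in knots], axis=1)

X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
Ztr, Zte = kgam_features(Xtr), kgam_features(Xte)
knots = np.linspace(Ztr.min(), Ztr.max(), 8)      # knots from training data only
clf = RidgeClassifier().fit(hinge_basis(Ztr, knots), ytr)
print("held-out accuracy:", clf.score(hinge_basis(Zte, knots), yte))
```

Because the embedding is fixed and shared, the 2n + 1 inner sums can be computed independently per branch, which is one way to read the abstract's claim that the method is parallelizable; only the small outer additive fit depends on the target.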
