Compact and Interpretable Neural Networks Using Lehmer Activation Units

Abstract

We introduce Lehmer Activation Units (LAUs), a class of aggregation-based neural activations derived from the Lehmer transform that unify feature weighting and nonlinearity within a single differentiable operator. Unlike conventional pointwise activations, LAUs operate on collections of features and adapt their aggregation behavior through learnable parameters, yielding intrinsically interpretable representations. We develop both real-valued and complex-valued formulations, with the complex extension enabling phase-sensitive interactions and enhanced expressive capacity. We establish a universal approximation theorem for LAU-based networks, providing formal guarantees of expressive completeness. Empirically, we show that LAUs enable highly compact architectures to achieve strong predictive performance under tightly controlled experimental settings, demonstrating that expressive power can be concentrated within individual neurons rather than in architectural depth. These results position LAUs as a principled, interpretable, and efficient alternative to conventional activation functions.
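The abstract does not reproduce the operator itself, but the classical weighted Lehmer mean, L_p(x; w) = (Σ_i w_i x_i^p) / (Σ_i w_i x_i^(p-1)), captures the two properties claimed above: it aggregates a whole collection of features rather than acting pointwise, and a single exponent p controls its behavior. The sketch below is a minimal illustration under that assumption; the class name LehmerActivationUnit, the softplus shift to keep inputs positive, and all parameter choices are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LehmerActivationUnit(nn.Module):
    """Illustrative aggregation-based activation built on a weighted Lehmer mean.

    Maps a feature vector x in R^d to a scalar
        L_p(x; w) = sum(w * x**p) / sum(w * x**(p-1)),
    where both the aggregation order p and the feature weights w are learnable,
    so weighting and nonlinearity live in one differentiable operator.
    """

    def __init__(self, in_features: int, eps: float = 1e-6):
        super().__init__()
        # Unconstrained logits; softmax yields positive, normalized weights.
        self.weight_logits = nn.Parameter(torch.zeros(in_features))
        # Learnable aggregation order p (p=1: weighted mean; large p: max-like).
        self.p = nn.Parameter(torch.tensor(2.0))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features). Shift to strictly positive values so that
        # fractional powers x**p are well defined.
        x = F.softplus(x) + self.eps
        w = torch.softmax(self.weight_logits, dim=0)
        num = (w * x.pow(self.p)).sum(dim=-1)
        den = (w * x.pow(self.p - 1)).sum(dim=-1)
        return num / (den + self.eps)


if __name__ == "__main__":
    lau = LehmerActivationUnit(in_features=8)
    out = lau(torch.randn(4, 8))  # one aggregated scalar per example
    print(out.shape)  # torch.Size([4])
```

For positive inputs the Lehmer mean interpolates smoothly across familiar aggregations: p = 1 recovers the weighted arithmetic mean, p = 0 the weighted harmonic mean, and p → ±∞ approaches the maximum and minimum, respectively. A learnable p therefore lets each unit adapt its aggregation behavior during training, which is consistent with the abstract's claim that expressive power can be concentrated within individual neurons.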
