Forward Stepwise Deep Autoencoder-based Monotone Nonlinear Dimensionality Reduction Methods


Abstract

Dimensionality reduction is an unsupervised learning task aimed at creating a low-dimensional summary and/or extracting the most salient features of a dataset. Principal components analysis (PCA) is a linear dimensionality reduction method in the sense that each principal component is a linear combination of the input variables. To allow features that are nonlinear functions of the input variables, many nonlinear dimensionality reduction methods have been proposed. In this paper we propose novel nonlinear dimensionality reduction methods based on bottleneck deep autoencoders (Kramer, 1991). Our contributions are twofold: (1) We introduce a monotonicity constraint into bottleneck deep autoencoders for estimating a single nonlinear component and propose two methods for fitting the model. (2) We propose a new, forward stepwise (FS) deep learning architecture for estimating multiple nonlinear components. The former helps extract interpretable, monotone components when the assumption of monotonicity holds, and the latter helps evaluate reconstruction errors in the original data space for a range of components. We conduct numerical studies to compare different model fitting methods and use two real data examples from studies of human immune responses to HIV to illustrate the proposed methods.
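To make the monotonicity constraint concrete, the sketch below (a simplified illustration, not the paper's exact architecture; the layer sizes and the softplus reparameterization are assumptions) shows one standard way to build a decoder that is monotone in a single bottleneck component: reparameterize the decoder weights to be strictly positive and use monotone-increasing activations, so every reconstructed coordinate is a nondecreasing function of the bottleneck value.

```python
import numpy as np

rng = np.random.default_rng(0)
h, d = 8, 5  # hidden width and data dimension (illustrative choices)

# Unconstrained parameters; softplus maps them to strictly positive weights,
# which is what guarantees monotonicity of the composed map.
A = rng.normal(size=(1, h)); a = rng.normal(size=h)
B = rng.normal(size=(h, d)); b = rng.normal(size=d)
softplus = lambda z: np.log1p(np.exp(z))

def monotone_decode(u):
    """Map bottleneck values u of shape (n, 1) to reconstructions (n, d).

    tanh is increasing and the weights softplus(A), softplus(B) are
    positive, so each output coordinate is nondecreasing in u.
    """
    hid = np.tanh(u @ softplus(A) + a)
    return hid @ softplus(B) + b

u = np.linspace(-3.0, 3.0, 50).reshape(-1, 1)  # increasing bottleneck values
X_hat = monotone_decode(u)

# Every reconstructed coordinate is monotone in the bottleneck component.
assert np.all(np.diff(X_hat, axis=0) >= 0)
```

In a full bottleneck autoencoder these decoder weights would be learned jointly with an (unconstrained) encoder by minimizing reconstruction error; only the decoder path needs the positivity constraint to make the fitted component monotone.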
