Pattern and structural detection in grayscale images through the application of quantile graphs in higher-dimensional spaces


Abstract

Deep Learning (DL) and Machine Learning (ML) algorithms are adept at managing and classifying a wide range of data formats, including time series, text, and images, addressing challenges in both supervised and unsupervised learning. However, the practical applicability of certain algorithms, particularly convolutional neural networks (CNNs) and vision transformers (VTs), is often constrained by the need for large datasets, extensive training, and complex parameter tuning, which frequently relies on trial and error. Other approaches, such as visibility graphs (VGs), often produce networks with an exceedingly high number of nodes, incurring significant computational costs in runtime and memory usage. Recent research has explored alternative feature extraction and classification solutions to address these challenges. One noteworthy innovation is the quantile graph (QG), initially applied to time series data, which transforms data points into a complex network of quantiles. This method effectively identifies key structural patterns while minimizing computational requirements. QGs have produced promising results in the analysis of physiological time series related to brain function and disorders, including Alzheimer's disease. This research adapts quantile graphs to image identification and introduces a feature extraction method applicable to ML and DL pipelines in computer vision. The novelty of this work lies in extending the QG framework from one-dimensional time series to two-dimensional images, introducing a scalable graph-based approach for image classification, and providing an open-source implementation of the method. The study used two well-established benchmark datasets: the Modified National Institute of Standards and Technology (MNIST) handwritten digit database and Fashion-MNIST. The performance of the proposed QGs was evaluated against that of CNNs and VTs.
Our findings reveal that, while CNNs and VTs achieve superior accuracy in certain circumstances, the proposed QGs outperform these methods in other scenarios, particularly when training data is limited. Additionally, QGs yielded more consistent results across all settings, suggesting that the choice of training components has less influence on them than on CNNs and VTs. Moreover, the QGs were applied to a medical imaging dataset to demonstrate their relevance to real biological data, indicating potential for integration into brain-disease detection applications.
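For readers unfamiliar with quantile graphs, the following is a minimal sketch of the one-dimensional construction the abstract alludes to, assuming the standard formulation for time series (quantile bins as nodes, transitions between consecutive samples as weighted directed edges). The function name, the number of quantiles `q`, and the row-normalization step are illustrative choices, not details taken from the paper, whose contribution is the extension of this idea to two-dimensional images.

```python
import numpy as np

def quantile_graph(series, q=4):
    """Map a 1-D series to a q x q weighted transition matrix (the QG adjacency)."""
    series = np.asarray(series, dtype=float)
    # Split the value range into q equal-probability bins (the graph's nodes).
    edges = np.quantile(series, np.linspace(0.0, 1.0, q + 1))
    # Assign each sample to its quantile bin, clipping boundary cases into [0, q-1].
    labels = np.clip(np.searchsorted(edges, series, side="right") - 1, 0, q - 1)
    # Count transitions between consecutive samples -> weighted directed edges.
    counts = np.zeros((q, q))
    for a, b in zip(labels[:-1], labels[1:]):
        counts[a, b] += 1
    # Row-normalize occupied rows so each row is a transition-probability distribution.
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# Example: a sine wave visits every quantile bin, yielding a dense small graph.
W = quantile_graph(np.sin(np.linspace(0, 8 * np.pi, 500)), q=4)
```

The key property motivating the abstract's efficiency claim is visible here: the graph always has exactly `q` nodes regardless of the input length, unlike visibility graphs, whose node count grows with the number of data points.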
