Optimizing visual data retrieval using deep learning driven CBIR for improved human machine interaction



Abstract

Content-based image retrieval (CBIR) systems face formidable obstacles in bridging human comprehension and machine-driven feature extraction, driven by the exponential growth of visual data across many domains. Traditional CBIR methods struggle to perform robustly across varied datasets because they rely on hand-crafted features and inflexible structures. This study presents a deep adaptive attention network (DAAN) for CBIR that combines multi-scale feature extraction with a hybrid neural architecture to address these problems and improve the speed and accuracy of visual retrieval. The DAAN architecture integrates transformer-based models, which capture contextual relationships within images, with a deep neural network (DNN) that extracts spatial features. A new adaptive multi-level attention module (AMLA) guarantees accurate feature weighting, improving the system's ability to detect subtle variations in visual content. Findings show that DAAN-CBIR outperforms existing approaches, achieving high mean average precision (mAP), fast retrieval, and reduced training time. These advances demonstrate its efficacy in fields including e-commerce, digital information preservation, medical imaging diagnostics, and personalized media recommendation.
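The abstract does not specify how the AMLA module weights features or how retrieval is scored, so the following is only a rough illustrative sketch of the two ideas it names: attention-weighted fusion of multi-scale feature vectors, and similarity-ranked retrieval. All function names and the NumPy implementation are assumptions for illustration, not the authors' code.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_multilevel_attention(features, w):
    """Fuse multi-scale feature vectors with attention weights.

    features: list of (d,) vectors, one per scale (e.g. CNN stages
              plus a transformer embedding).
    w: (d,) scoring vector, a stand-in for a learned attention head.
    Returns a single (d,) attention-weighted descriptor.
    """
    F = np.stack(features)        # (num_scales, d)
    scores = F @ w                # one relevance score per scale
    alpha = softmax(scores)       # attention weights, sum to 1
    return alpha @ F              # weighted combination of scales

def retrieve(query_vec, gallery, top_k=3):
    """Rank gallery descriptors by cosine similarity to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    G = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = G @ q
    return np.argsort(-sims)[:top_k]
```

In a real system the scoring vector `w` would be learned end-to-end and the gallery descriptors precomputed offline, so query time reduces to one fusion step plus a nearest-neighbor search.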
