Novel cross-dimensional coarse-fine-grained complementary network for image-text matching


Abstract

Image-text matching is fundamental to multimodal applications, yet the cross-modal heterogeneity gap between images and texts remains a complex challenge. Researchers have made numerous efforts to narrow the semantic gap between the visual and textual modalities. However, existing methods are usually limited to computing the similarity between images (image regions) and texts (text words), ignoring the semantic consistency between fine-grained region-word matching and coarse-grained overall image-text matching. Additionally, these methods often overlook semantic differences across feature dimensions. Such limitations may result in an overemphasis on specific details at the expense of holistic understanding during image-text matching. To tackle this challenge, this article proposes a novel Cross-Dimensional Coarse-Fine-Grained Complementary Network (CDGCN). First, CDGCN performs fine-grained semantic alignment of image regions and sentence words based on cross-dimensional dependencies. Next, a Coarse-Grained Cross-Dimensional Semantic Aggregation (CGDSA) module is developed to complement local alignment with global image-text matching, ensuring semantic consistency. This module aggregates local features both across different dimensions and within the same dimension to form coherent global features, thus preserving the semantic integrity of the information. CDGCN is evaluated against state-of-the-art methods on two multimodal datasets, Flickr30K and MS-COCO, and achieves substantial improvements, with performance increments of 7.7-16% on both datasets.
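The complementary idea described in the abstract, combining fine-grained region-word alignment with a coarse-grained global image-text score, can be illustrated with a minimal sketch. This is not the paper's actual CDGCN architecture (the cross-dimensional dependency and CGDSA designs are specific to the paper); it is a simplified, hypothetical illustration using cosine-similarity attention for the fine-grained branch and mean-pooled global features for the coarse-grained branch, with all function names and the weighting parameter `alpha` being assumptions for this example.

```python
import numpy as np

def l2norm(x, axis=-1):
    # normalize features to unit length so dot products are cosine similarities
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

def fine_grained_similarity(regions, words):
    # regions: (R, D) image-region features; words: (W, D) word features
    r, w = l2norm(regions), l2norm(words)
    att = r @ w.T                                        # (R, W) region-word cosine similarities
    # for each word, softmax-attend over image regions
    weights = np.exp(att) / np.exp(att).sum(axis=0, keepdims=True)
    attended = weights.T @ r                             # (W, D) word-specific visual context
    # average cosine similarity between each word and its attended visual context
    return float(np.mean(np.sum(l2norm(attended) * w, axis=1)))

def coarse_grained_similarity(regions, words):
    # aggregate local features (simple mean pooling here) into global vectors
    g_img = l2norm(regions.mean(axis=0))
    g_txt = l2norm(words.mean(axis=0))
    return float(g_img @ g_txt)

def complementary_score(regions, words, alpha=0.5):
    # blend local (fine-grained) alignment with global (coarse-grained) matching
    return alpha * fine_grained_similarity(regions, words) + \
           (1 - alpha) * coarse_grained_similarity(regions, words)
```

In this toy version the global branch simply mean-pools local features, whereas the paper's CGDSA module additionally aggregates across feature dimensions to preserve semantic integrity; the sketch only conveys why a global score complements purely local region-word matching.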
