Context-aware hierarchical attention for abstractive dialogue summarization

Abstract

Abstractive dialogue summarization has gained increasing attention due to its ability to generate concise and informative summaries from complex conversational data. In social dialogues, phenomena like ellipsis and topic shifts frequently occur, making it essential to account for the rich contextual information embedded at multiple levels. Traditional transformer-based models often fail to fully exploit this multi-level context. To address this limitation, we propose a novel Hierarchical Context-aware Attention (HCAtt) network. Our model incorporates both segment-level and utterance-level contextual information into the transformer framework, enhancing the model's ability to capture the intricate dependencies in dialogue data. Specifically, we hierarchically integrate these levels during the calculation of query and key transformations, which improves the modeling of contextual relationships across token representations. Experimental results on the benchmark SAMSum, DialogSum and AMI datasets demonstrate that HCAtt outperforms existing methods, highlighting its effectiveness in handling the complexities of dialogue summarization.
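The core idea described above, injecting segment-level and utterance-level context into the query and key transformations of attention, can be illustrated with a minimal sketch. The function names, the mean-pooling used to form context vectors, and the additive way the two levels are combined are all assumptions for illustration; the paper's exact formulation may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hierarchical_context_attention(X, seg_ids, utt_ids, Wq, Wk, Wv, Uq, Uk):
    """Illustrative context-aware attention (not the paper's exact math):
    each token's query and key are augmented with context vectors pooled
    over its utterance and its segment before attention scores are computed."""
    n, d = X.shape
    # Utterance-level context: mean of token vectors sharing an utterance id.
    utt_ctx = np.stack([X[utt_ids == utt_ids[i]].mean(axis=0) for i in range(n)])
    # Segment-level context: mean of token vectors sharing a segment id.
    seg_ctx = np.stack([X[seg_ids == seg_ids[i]].mean(axis=0) for i in range(n)])
    ctx = utt_ctx + seg_ctx          # hierarchical combination (assumed additive)
    Q = X @ Wq + ctx @ Uq            # context-infused queries
    K = X @ Wk + ctx @ Uk            # context-infused keys
    V = X @ Wv
    scores = Q @ K.T / np.sqrt(d)    # scaled dot-product attention
    return softmax(scores) @ V

# Toy usage: 6 tokens in 2 segments, each split into utterances.
rng = np.random.default_rng(0)
n, d = 6, 4
X = rng.normal(size=(n, d))
seg_ids = np.array([0, 0, 0, 1, 1, 1])
utt_ids = np.array([0, 0, 1, 1, 2, 2])
Wq, Wk, Wv, Uq, Uk = (rng.normal(size=(d, d)) for _ in range(5))
out = hierarchical_context_attention(X, seg_ids, utt_ids, Wq, Wk, Wv, Uq, Uk)
```

In a full model the two context levels would be computed from learned segment and utterance encoders rather than simple mean pooling, but the sketch shows where the hierarchy enters: inside the query/key projections, so the attention weights themselves become context-dependent.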
