Human-interpretable clustering of short text using large language models


Abstract

Clustering short text is a difficult problem, owing to the low word co-occurrence between short text documents. This work shows that large language models (LLMs) can overcome the limitations of traditional clustering approaches by generating embeddings that capture the semantic nuances of short text. In this study, clusters are found in the embedding space using Gaussian mixture modelling. The resulting clusters are found to be more distinctive and more human-interpretable than clusters produced using the popular methods of doc2vec and latent Dirichlet allocation. The success of the clustering approach is quantified using human reviewers and through the use of a generative LLM. The generative LLM shows good agreement with the human reviewers and is suggested as a means to bridge the 'validation gap' which often exists between cluster production and cluster interpretation. The comparison between LLM coding and human coding reveals intrinsic biases in each, challenging the conventional reliance on human coding as the definitive standard for cluster validation.
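The pipeline the abstract describes, embedding short texts with an LLM and then fitting a Gaussian mixture model in the embedding space, can be sketched as follows. This is an illustrative sketch, not the paper's code: the LLM embedding step is stubbed with synthetic vectors (`make_blobs`) so the example runs without a model, and all dimensions and component counts are assumptions.

```python
# Hedged sketch: cluster "embeddings" with a Gaussian mixture model,
# as in the abstract's approach. In the actual study the vectors would
# come from an LLM embedding model applied to short text documents;
# here synthetic 16-dimensional blobs stand in for them.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

# Stand-in for LLM embeddings: 300 points drawn from 3 latent "topics".
X, _ = make_blobs(n_samples=300, centers=3, n_features=16,
                  cluster_std=1.0, random_state=0)

gmm = GaussianMixture(n_components=3, covariance_type="full",
                      random_state=0)
labels = gmm.fit_predict(X)     # hard cluster assignments
probs = gmm.predict_proba(X)    # soft (probabilistic) memberships

print(labels.shape)             # one label per document
print(np.allclose(probs.sum(axis=1), 1.0))
```

A practical note on the design choice: unlike k-means, a Gaussian mixture yields soft membership probabilities (`predict_proba`), which is useful when a short text plausibly belongs to more than one semantic cluster.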
