Medical knowledge representation enhancement in large language models through clinical tokens optimization


Abstract

During the training of medical large language models (LLMs), conventional tokenizers frequently segment domain-specific medical terms into multiple subword tokens, resulting in suboptimal recognition and representation of specialized vocabulary. As a consequence, the model encounters difficulties in effectively acquiring medical domain knowledge during fine-tuning. To address this limitation, the present study introduces "clinical tokens"—medical subword units—by augmenting the vocabulary of the original LLaMA2 tokenizer. The adapted tokenizer retains medical terms as whole tokens wherever feasible, thereby improving tokenization accuracy and enabling the model to learn and interpret medical knowledge more effectively. For downstream task adaptation, this study employs the Byte Pair Encoding (BPE) algorithm to construct a domain-specific vocabulary and tokenization model, ensuring the inclusion of medical subword units (clinical tokens). We compare the tokenization performance of three variants: the original LLaMA2 tokenizer, the Chinese-LLaMA2 tokenizer (expanded with an extended Chinese vocabulary), and the clinical token-augmented tokenizer. We then fine-tune the large language models on curated medical datasets. The experimental results indicate that the enhanced tokenizer improves encoding and decoding efficiency, extends the model's effective context window, and yields superior performance on downstream medical tasks.
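The BPE vocabulary-construction step can be sketched in a minimal, self-contained form. The toy corpus, function names, and merge budget below are illustrative assumptions, not the paper's actual setup; the study applies standard BPE to a medical corpus and merges the resulting clinical tokens into the LLaMA2 vocabulary.

```python
from collections import Counter

def learn_bpe_merges(word_freqs, num_merges):
    """Learn BPE merge rules from a frequency dict of words.

    Each word starts as a tuple of characters; the most frequent
    adjacent symbol pair is repeatedly merged into a new symbol.
    Returns the ordered merge list and the final segmented vocabulary.
    """
    vocab = {tuple(w): f for w, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        # Count all adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for syms, freq in vocab.items():
            for a, b in zip(syms, syms[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break  # every word is already a single token
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the winning merge to every word.
        new_vocab = {}
        for syms, freq in vocab.items():
            out, i = [], 0
            while i < len(syms):
                if i < len(syms) - 1 and (syms[i], syms[i + 1]) == best:
                    out.append(syms[i] + syms[i + 1])
                    i += 2
                else:
                    out.append(syms[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges, vocab

# Hypothetical miniature medical corpus (term -> frequency).
corpus = {"tachycardia": 10, "bradycardia": 8, "cardiology": 6}
merges, vocab = learn_bpe_merges(corpus, 30)

# Each merge yields a candidate clinical token; in the paper's setting,
# these would be appended to the base LLaMA2 tokenizer vocabulary so
# frequent medical terms are encoded as whole tokens.
clinical_tokens = {a + b for a, b in merges}
```

With enough merges the frequent terms collapse into single tokens, so `"tachycardia"` ends up both as one symbol in the segmented vocabulary and as an entry in `clinical_tokens`; shared subwords such as the `cardi` stem are merged once and reused across terms.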
