Ontology-Based Medication Named Entity Recognition Using Pretrained Transformer Models From a Thai Hospital: Model Fine-Tuning and Validation Study


Abstract

BACKGROUND: Extracting accurate medication information from Thai hospital records is challenging because of the narrative style of medical notes, which often mix Thai and English terminology. Named entity recognition (NER) is the foundational step for advanced clinical information extraction (IE) tasks, including medical concept normalization and relation extraction. This study aimed to establish a robust NER framework that addresses these difficulties by leveraging ontology-based annotation and pretrained transformer models.

OBJECTIVE: The primary objective of this study was to evaluate the performance of 5 fine-tuned pretrained transformer models based on Bidirectional Encoder Representations from Transformers (BERT), namely BioClinicalBERT, ClinicalBERT, PubMedBERT, MultilingualBERT, and ThaiBERT, in extracting structured medication information from unstructured Thai hospital discharge summaries.

METHODS: Ninety discharge summaries were collected from Maharaj Nakhon Chiang Mai Hospital. These documents were annotated by physicians following annotation guidelines based on international standards, including the Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT) and Health Level Seven Fast Healthcare Interoperability Resources (HL7 FHIR). The dataset was divided into fine-tuning (70 records, 78%, 2030 annotated spans), validation (10 records, 11%, 277 annotated spans), and testing (10 records, 11%, 358 annotated spans) sets. The 5 transformer models were fine-tuned and evaluated on this annotated data to recognize and classify key medication entities (substance, route of administration, unit of measure, time patterns, and unit of presentation).

RESULTS: All models achieved good NER performance on both the validation and test datasets. On the test set, ClinicalBERT achieved the highest exact F1-score at 0.973, compared with 0.968 for BioClinicalBERT, 0.925 for PubMedBERT, 0.931 for MultilingualBERT, and 0.969 for ThaiBERT. All models were strong at accurately identifying "Substance" and "Dosage" entities, whereas "Unit of Measure" proved the most challenging entity type for all models because this information is often implicit in the source text.

CONCLUSIONS: The findings suggest that ontology-based medication IE using transformer-based models holds promise for enhancing data standardization and interoperability within the Thai health care system. Future work will need to leverage the granular annotations preserved in the dataset to develop medical concept normalization and relation extraction models, completing the medical IE system.
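The exact F1-score reported above credits a predicted entity only when both its boundaries and its type match a gold annotation exactly. The following is a minimal sketch of that metric, assuming the model outputs are expressed as BIO tag sequences (the paper does not state its tagging scheme, so the BIO encoding and the entity labels used here are illustrative assumptions):

```python
from typing import List, Set, Tuple

Span = Tuple[int, int, str]  # (start token, end token exclusive, entity type)

def bio_to_spans(tags: List[str]) -> Set[Span]:
    """Convert a BIO tag sequence into a set of (start, end, type) spans."""
    spans, start, etype = set(), None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:
                spans.add((start, i, etype))
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is not None and tag[2:] == etype:
            continue  # span continues with the same entity type
        else:
            if start is not None:
                spans.add((start, i, etype))
            start, etype = None, None
    if start is not None:  # close a span that runs to the end of the sequence
        spans.add((start, len(tags), etype))
    return spans

def exact_f1(gold: List[str], pred: List[str]) -> float:
    """Span-level exact-match F1: boundaries and type must both match."""
    g, p = bio_to_spans(gold), bio_to_spans(pred)
    tp = len(g & p)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(p), tp / len(g)
    return 2 * precision * recall / (precision + recall)

# Example: the prediction truncates the "Substance" span by one token,
# so that span scores zero under exact matching.
gold = ["B-Substance", "I-Substance", "O", "B-Route"]
pred = ["B-Substance", "O", "O", "B-Route"]
print(exact_f1(gold, gold))  # 1.0
print(exact_f1(gold, pred))  # 0.5
```

In practice this entity-level evaluation is usually delegated to a library such as seqeval rather than hand-rolled, but the sketch makes explicit why "Unit of Measure" spans that are implicit in the text are penalized: a missed or mis-bounded span earns no partial credit.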
