Assessing the Impact of the Quality of Textual Data on Feature Representation and Machine Learning Models: Quantitative Study Using Large Language Models


Abstract

BACKGROUND: Data collected in controlled settings typically results in high-quality datasets. In real-world applications, however, the quality of data collection is often compromised. It is well established that dataset quality significantly affects the performance of machine learning models. In health care, detailed information about individuals is often recorded in free-text progress notes. Given the critical nature of health applications, it is essential to evaluate the impact of textual data quality, as an incorrect prediction can have serious, potentially life-threatening consequences.

OBJECTIVE: This study aims to quantify the quality of textual datasets and systematically evaluate the impact of varying error levels on feature representations and machine learning models. The primary goal is to determine whether feature representations and machine learning models are tolerant of errors and whether investing additional time and computational resources to improve data quality is justified.

METHODS: We developed a rudimentary error rate metric to evaluate textual dataset quality at the token level. The Mixtral large language model (LLM) was used to quantify and correct errors in low-quality datasets. The study analyzed two health care datasets: the high-quality MIMIC-III public hospital dataset (for mortality prediction) and a lower-quality private dataset from Australian aged care homes (AACHs; for depression and fall risk prediction). Errors were systematically introduced into MIMIC-III at varying rates, while the quality of the AACH dataset was improved using the LLM. Feature representations and machine learning models were assessed using the area under the receiver operating characteristic curve.

RESULTS: For the sampled 35,774 and 6,336 patients from the MIMIC-III and AACH datasets, respectively, we used Mixtral to introduce errors into MIMIC-III and to correct errors in the AACH dataset. Mixtral correctly detected errors in 63% of progress notes, with 17% containing a single token misclassified because of medical terminology. LLMs demonstrated potential for improving progress note quality by addressing various error types. Under varying error rates (5%-20%, in 5% increments), feature representation performance was tolerant of lower error rates (<10%) but declined significantly at higher rates. This aligned with the AACH dataset's 8% error rate, at which no major performance drop was observed. Across both datasets, term frequency-inverse document frequency (TF-IDF) features outperformed embedding features, and machine learning models varied in effectiveness, highlighting that the optimal feature representation and model depend on the specific task.

CONCLUSIONS: This study revealed that models performed relatively well on datasets with lower error rates (<10%), but their performance declined significantly as error rates increased (≥10%). It is therefore crucial to evaluate the quality of a dataset before using it for machine learning tasks. For datasets with higher error rates, implementing corrective measures is essential to ensure the reliability and effectiveness of machine learning models.
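The abstract describes a token-level error rate metric and the systematic injection of errors at fixed rates (5%-20%). The paper's exact procedure is not given here; the following is a minimal sketch, assuming errors are injected by corrupting randomly chosen tokens (a hypothetical truncation typo) and the error rate is measured as the fraction of token positions that differ from a reference note.

```python
import random


def inject_token_errors(tokens, error_rate, seed=0):
    """Corrupt a fixed fraction of tokens.

    Truncation typos (dropping the last character) are an illustrative
    corruption scheme, not the one used in the study.
    """
    rng = random.Random(seed)
    corrupted = list(tokens)
    n_errors = round(len(tokens) * error_rate)
    for i in rng.sample(range(len(tokens)), n_errors):
        token = corrupted[i]
        # Dropping (or doubling) a character always changes the token.
        corrupted[i] = token[:-1] if len(token) > 1 else token + token
    return corrupted


def token_error_rate(noisy_tokens, reference_tokens):
    """Fraction of token positions that differ from the reference note."""
    mismatches = sum(a != b for a, b in zip(noisy_tokens, reference_tokens))
    return mismatches / max(len(reference_tokens), 1)


note = "Patient alert and oriented denies chest pain shortness of breath".split()
noisy = inject_token_errors(note, error_rate=0.20)
print(token_error_rate(noisy, note))  # 2 of 10 tokens corrupted -> 0.2
```

By construction, the measured error rate matches the injection rate, which is what makes the 5% increments of the study's design directly comparable across runs.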
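The abstract reports that TF-IDF features outperformed embedding features. As background, here is a minimal stdlib-only sketch of how TF-IDF weights are computed from tokenized notes; the unsmoothed idf formula below is one common variant, and the study would likely rely on a library implementation with slightly different normalization.

```python
import math
from collections import Counter


def tfidf_vectors(docs):
    """Compute sparse TF-IDF weights for a list of tokenized documents.

    tf = term count / document length; idf = log(N / document frequency).
    Terms that appear in every document get an idf (and weight) of zero.
    """
    n_docs = len(docs)
    doc_freq = Counter()
    for doc in docs:
        doc_freq.update(set(doc))
    idf = {term: math.log(n_docs / df) for term, df in doc_freq.items()}
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({term: (count / len(doc)) * idf[term]
                        for term, count in tf.items()})
    return vectors


notes = [
    "patient denies chest pain".split(),
    "patient reports chest tightness".split(),
]
vecs = tfidf_vectors(notes)
print(vecs[0]["denies"] > vecs[0]["patient"])  # True: "patient" is in every note
```

The design choice TF-IDF encodes is visible in the example: ubiquitous tokens ("patient") are down-weighted to zero, while tokens that discriminate between notes ("denies") keep positive weight.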
