Abstract
BACKGROUND: Data collected in controlled settings typically results in high-quality datasets. However, in real-world applications, the quality of data collection is often compromised. It is well established that the quality of a dataset significantly impacts the performance of machine learning models. In health care settings, detailed information about individuals is often recorded in progress notes. Given the critical nature of health applications, it is essential to evaluate the impact of textual data quality, as any incorrect prediction can have serious, potentially life-threatening consequences.

OBJECTIVE: This study aims to quantify the quality of textual datasets and systematically evaluate the impact of varying levels of errors on feature representations and machine learning models. The primary goal is to determine whether feature representations and machine learning models are tolerant of errors and to assess whether investing additional time and computational resources to improve data quality is justified.

METHODS: We developed a rudimentary error rate metric to evaluate textual dataset quality at the token level. The Mixtral large language model (LLM) was used to quantify and correct errors in low-quality datasets. The study analyzed two health care datasets: the high-quality MIMIC-III public hospital dataset (for mortality prediction) and a lower-quality private dataset from Australian aged care homes (AACHs; for depression and fall risk prediction). Errors were systematically introduced into MIMIC-III at varying rates, while the AACH dataset quality was improved using the LLM. Feature representations and machine learning models were assessed using the area under the receiver operating characteristic curve (AUROC).

RESULTS: For the sampled 35,774 and 6336 patients from the MIMIC-III and AACH datasets, respectively, we used Mixtral to introduce errors into MIMIC-III and correct errors in the AACH dataset. Mixtral correctly detected errors in 63% of progress notes, while in 17% of notes a single token was misclassified as an error because it was medical terminology. LLMs demonstrated potential for improving progress note quality by addressing various errors. Under varying error rates (5%-20%, in 5% increments), feature representation performance was tolerant of lower error rates (<10%) but declined significantly at higher rates. This aligned with the AACH dataset's 8% error rate, at which no major performance drop was observed. Across both datasets, term frequency-inverse document frequency (TF-IDF) features outperformed embedding features, and machine learning models varied in effectiveness, highlighting that optimal feature representation and model choice depend on the specific task.

CONCLUSIONS: This study revealed that models performed relatively well on datasets with lower error rates (<10%), but their performance declined significantly as error rates increased (≥10%). Therefore, it is crucial to evaluate the quality of a dataset before using it for machine learning tasks. For datasets with higher error rates, implementing corrective measures is essential to ensure the reliability and effectiveness of machine learning models.
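The sketch below is a minimal Python illustration of the two Methods steps that lend themselves to code: computing a token-level error rate and injecting errors into a note at a target rate (5%-20%). It is not the study's implementation; the whitespace tokenization, the toy in-vocabulary check, the character-swap corruption, and the sample note are illustrative assumptions, since the study itself used Mixtral for error detection and correction.

    import random

    def token_error_rate(tokens, is_error):
        # Rudimentary token-level metric: erroneous tokens / total tokens.
        return sum(1 for t in tokens if is_error(t)) / max(len(tokens), 1)

    def inject_errors(tokens, rate, seed=0):
        # Corrupt roughly `rate` of the tokens by swapping two adjacent
        # characters; any other corruption function could be substituted.
        rng = random.Random(seed)
        out = list(tokens)
        for i in rng.sample(range(len(out)), round(len(out) * rate)):
            t = out[i]
            if len(t) >= 2:
                j = rng.randrange(len(t) - 1)
                out[i] = t[:j] + t[j + 1] + t[j] + t[j + 2:]
        return out

    # Hypothetical usage: corrupt a note at the error rates studied
    # (5%-20%, in 5% increments) and measure the resulting error rate
    # against a toy in-vocabulary check.
    note = "patient reports chest pain and shortness of breath overnight".split()
    vocab = set(note)
    for rate in (0.05, 0.10, 0.15, 0.20):
        corrupted = inject_errors(note, rate)
        print(rate, token_error_rate(corrupted, lambda t: t not in vocab))

In the downstream evaluation described in the Methods, the corrupted or corrected notes would then be vectorized (eg, with TF-IDF or embedding features), fed to the machine learning models, and scored with AUROC; those steps are outside the scope of this sketch.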