Abstract
Readability formulas are prominent tools for assessing health communication, but they can yield varying estimates. Such variation is often treated as error in computerized tools, yet it can result from text preprocessing decisions in manual and computerized assessments alike. This study illustrates the effect of preprocessing on reading grade level estimates for short-form online content, demonstrating both the importance of reporting these decisions and the limitations of these formulas.

We manually counted words, sentences, and syllables in a sample of 100 Tweets posted by U.S. state health agencies from 2012 through 2022. We applied the Simplified Measure of Gobbledygook (SMOG) and Flesch-Kincaid formulas under seven inclusive preprocessing scenarios that differentially included URLs, hashtags, and/or numbers in word counts, and we compared the resulting estimates to those from a restrictive baseline that excluded these elements. Wilcoxon signed-rank tests revealed significant differences in median grade level estimates. No significant differences were found in the percentage of Tweets meeting an 8th-grade benchmark. Linear regression showed that baseline estimates did not adequately explain the observed variation.

Despite the potential benefit of interpretability, we conclude that readability formulas are unreliable for short-form online content. Instead, we recommend using word, sentence, and syllable counts directly. We also recommend conducting sensitivity analyses for readability assessments.
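As a minimal sketch (not drawn from the study's data), the following Python snippet illustrates the kind of preprocessing sensitivity the abstract describes, using the Flesch-Kincaid formula; the same logic applies to SMOG. The tweet, the token-filtering rule, and the syllable totals are hypothetical assumptions; only the 0.39, 11.8, and 15.59 coefficients are the published Flesch-Kincaid constants.

```python
def flesch_kincaid(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid grade level from raw counts
    (0.39, 11.8, and 15.59 are the published coefficients)."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Hypothetical two-sentence tweet; the URL and hashtag are the contested tokens.
tweet = "Get your flu shot today! Visit https://example.gov/flu #FluSeason"
tokens = tweet.split()

# Restrictive baseline: exclude URLs and hashtags from the word count.
restrictive = [t for t in tokens if not t.startswith(("http", "#"))]
# Inclusive scenario: every whitespace-delimited token counts as a word.
inclusive = tokens

# Syllable totals are hand-counted under one assumed convention: the six
# plain words contribute 8 syllables, and reading the URL and hashtag
# aloud is assumed to add 9 more. Other conventions yield other totals.
print(f"restrictive: grade {flesch_kincaid(len(restrictive), 2, 8):.1f}")  # ~1.3
print(f"inclusive:   grade {flesch_kincaid(len(inclusive), 2, 17):.1f}")   # ~11.0
```

Under these assumptions, including the URL and hashtag shifts the estimate by roughly ten grade levels on a single tweet, which is the scale of sensitivity that motivates reporting preprocessing decisions.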