Abstract
Objective: With a reported global prevalence of 50% to 100%, gingivitis is the most common oral disease in adults worldwide. It is characterized by clinical signs of inflammation such as redness, swelling, and bleeding on gentle probing. Although considered a milder form of periodontal disease, gingivitis plays an important role in overall oral health, and early detection and treatment are essential to prevent progression to more severe conditions. Diagnosis is typically performed by dental professionals, as individuals are often unable to assess accurately whether they are affected. The aim of the present study was therefore to determine to what degree gingivitis is visually detectable by an easy-to-use camera-based application.

Materials and methods: Standardized intraoral photographs were taken with a specialized intraoral camera and processed using a custom-developed filter based on a machine-learning algorithm trained to highlight areas suggestive of gingivitis. A total of 110 participants were enrolled through ad hoc sampling, yielding 320 assessable test sites. A dentist provided two reference standards: the clinical diagnosis based on bleeding on probing of the periodontal sulcus (BOP) and an independent visual assessment of the same images. Agreement between diagnostic methods was measured using Cohen's kappa statistic.

Results: Agreement between the application's output and the BOP-based clinical diagnosis was low, with a kappa value of 0.055 (p = 0.010). Similarly, the dentist's visual assessment of the clinical photographs showed low agreement with BOP (kappa = 0.087, p < 0.001). In contrast, agreement between the application and the dentist's photo-based evaluations was higher (kappa = 0.280, p < 0.001).

Conclusions: In its current form, the camera-based application cannot reliably detect gingivitis.
The low agreement between the dentist's visual assessments and the clinical gold standard highlights that gingivitis is difficult to identify by visual inspection alone. These results underscore the need to further refine visual diagnostic approaches, which could support future self-assessment or remote screening applications.
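For readers unfamiliar with the agreement statistic used above, Cohen's kappa compares the observed agreement between two raters against the agreement expected by chance from each rater's marginal frequencies. The sketch below is a minimal, self-contained illustration with invented toy ratings (1 = gingivitis, 0 = healthy); it does not use the study's data, and the function name is our own.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items on which the raters concur.
    p_o = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Chance agreement: expected concordance if ratings were independent,
    # computed from each rater's marginal label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy example (hypothetical data, not the study's measurements):
app_rating = [1, 1, 1, 1, 0, 0, 1, 0]   # e.g. application output
bop_rating = [1, 1, 0, 1, 0, 1, 0, 0]   # e.g. BOP-based diagnosis
print(cohens_kappa(app_rating, bop_rating))  # → 0.25
```

A kappa of 0 indicates no agreement beyond chance and 1 indicates perfect agreement, which is why the study's values near 0.05 to 0.09 are read as low agreement despite raters often assigning the same label.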