Comparing three algorithms of automated facial expression analysis in autistic children: different sensitivities but consistent proportions



Abstract

BACKGROUND: Difficulties with non-verbal communication, including atypical use of facial expressions, are a core feature of autism. Quantifying atypical use of facial expressions during naturalistic social interactions in a reliable, objective, and direct manner is difficult, but potentially possible with computer vision algorithms that identify facial expressions in video recordings.

METHODS: We analyzed >5 million video frames from 100 verbal children, 2-7 years old (72 with autism and 28 controls), who were recorded during a ~45-minute ADOS-2 assessment (module 2 or 3) while interacting with a clinician. Three facial analysis algorithms (iMotions, FaceReader, and Py-Feat) were used to identify the presence of six facial expressions (anger, fear, sadness, surprise, disgust, and happiness) in each video frame. We then compared results across algorithms and between the autism and control groups using robust non-parametric statistical tests.

RESULTS: The three facial analysis algorithms differed significantly in their output, including in the proportion of frames identified as containing a face and in the proportion of frames classified as containing each of the six examined facial expressions. Nevertheless, none of the three algorithms revealed significant differences in the quantity of any facial expression produced by children with autism versus controls. Furthermore, the quantity of facial expressions did not correlate with autism symptom severity as measured by ADOS-2 CSS scores.

LIMITATIONS: The current findings are limited to verbal children with autism who completed ADOS-2 assessments using modules 2 and 3 and were able to sit in a stable manner while facing a wall-mounted camera. Furthermore, the analyses focused on comparing the quantity of facial expressions across groups rather than their quality, timing, or social context.

CONCLUSIONS: Commonly used automated facial analysis algorithms exhibit large variability in their output when identifying facial expressions of young children during naturalistic social interactions. Nonetheless, none of the three algorithms identified group differences in the quantity of facial expressions, suggesting that atypical production of facial expressions in verbal children with autism is more likely related to their quality, timing, and social context than to their quantity during natural social interaction.
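The group comparison described above — per-child proportions of frames classified as showing a given expression, compared across groups with a rank-based non-parametric test — can be sketched as follows. This is an illustrative sketch only: the paper does not publish its analysis code, the exact statistical test is not specified in the abstract, and the frame labels below are simulated, not real study data.

```python
# Illustrative sketch (assumptions: simulated frame labels; Mann-Whitney U
# as a stand-in for the paper's unspecified non-parametric test).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

def expression_proportion(frame_labels, expression):
    """Fraction of face-containing frames classified as `expression`."""
    labels = np.asarray(frame_labels)
    return float(np.mean(labels == expression)) if labels.size else 0.0

# Hypothetical per-frame classifications for each child (one label per frame),
# mimicking the study's group sizes (72 autism, 28 controls).
expressions = ["anger", "fear", "sadness", "surprise",
               "disgust", "happiness", "neutral"]
autism_props = [expression_proportion(rng.choice(expressions, size=500),
                                      "happiness") for _ in range(72)]
control_props = [expression_proportion(rng.choice(expressions, size=500),
                                       "happiness") for _ in range(28)]

# Two-sided rank-based comparison of the per-child proportions across groups.
stat, p = mannwhitneyu(autism_props, control_props, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")
```

The same per-child proportions could also be correlated against ADOS-2 CSS scores (e.g. with a Spearman rank correlation) to probe the severity analysis the abstract mentions.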
