Shared Multimodal Input Through Social Coordination: Infants With Monolingual and Bilingual Learning Experiences

Abstract

A growing number of children in the United States are exposed to multiple languages at home from birth. However, relatively little is known about the early process of word learning, that is, how words are mapped to their referents in children's child-centered learning experiences. The present study operationally defined parental input as the integrated, multimodal learning experience that unfolds as an infant engages with his or her parent in an interactive play session with objects. Using a head-mounted eye-tracking device, we recorded visual scenes from the infant's point of view, along with the parent's social input with respect to gaze, labeling, and object-handling actions. Fifty-one infants and toddlers (aged 6-18 months) from English monolingual or diverse bilingual households were recruited to observe early multimodal learning experiences in an object play session. Although monolingual parents spoke more and labeled objects more frequently than bilingual parents did, infants from both language groups received a comparable amount of socially coordinated experience in which parents named an object while the infant was looking at it. In addition, a sequential path analysis revealed multiple socially coordinated pathways that facilitate infant object looking; in particular, young children's attention to referent objects was directly influenced by parents' object handling. These findings point to a new approach to early language input and show how multimodal learning experiences are socially coordinated for young children growing up in monolingual and bilingual learning contexts.
