Abstract
Children's ability to share attention with a social partner (joint attention) plays an important role in language development. However, our understanding of joint attention comes mainly from children learning spoken languages, yielding a narrow, speech-centric view of its role. This study broadens the scope by examining how deaf children learning a sign language achieve joint attention with their caregivers during natural social interaction, and how caregivers provide word learning opportunities within it. We analyzed naturalistic play sessions of 54 caregiver-child dyads using American Sign Language (ASL) and, with a comprehensive multimodal coding scheme, identified the joint attention episodes that surrounded caregivers' labeling of familiar or novel objects. We observed that dyads using ASL establish joint attention through linguistic, visual, and tactile cues, and that most naming events took place within a successful joint attention episode. Key characteristics of these episodes were significantly correlated with children's expressive vocabulary size, mirroring the patterns observed in spoken language acquisition. We also found that sign familiarity and the order of mention of object labels affected the timing of naming events within joint attention. Our results suggest that caregivers using ASL are highly sensitive to their child's visual attention during interactions and modulate joint attention differently when providing familiar versus novel object labels. These joint attention episodes facilitate word learning in sign language, just as they do in spoken language interactions.