Tracking the time course of orthographic information in spoken-word recognition


Abstract

Two visual-world experiments evaluated the time course and use of orthographic information in spoken-word recognition using printed words as referents. Participants saw 4 words on a computer screen and listened to spoken sentences instructing them to click on one of the words (e.g., Click on the word bead). The printed words appeared 200 ms before the onset of the spoken target word. In Experiment 1, the display included the target word and a competitor with either a lower degree (e.g., bear) or a higher degree (e.g., bean) of phonological overlap with the target. Both competitors had the same degree of orthographic overlap with the target. There were more fixations to the competitors than to unrelated distractors. Crucially, the likelihood of fixating a competitor did not vary as a function of the amount of phonological overlap between target and competitor. In Experiment 2, the display included the target word and a competitor with either a lower degree (e.g., bare) or a higher degree (e.g., bear) of orthographic overlap with the target. Competitors were homophonous and thus had the same degree of phonological overlap with the target. There were more fixations to higher overlap competitors than to lower overlap competitors, beginning during the temporal interval where initial fixations driven by the vowel are expected to occur. The authors conclude that orthographic information is rapidly activated as a spoken word unfolds and is immediately used in mapping spoken words onto potential printed referents.
