Multimodal instruction with AI-generated images for noun retention: Exploring semantic scene and materiality effects

Abstract

This study examines the effectiveness of multimodal instruction that pairs artificial intelligence (AI)-generated visual content with English noun vocabulary teaching, compared with text-only instruction. Rather than treating visual presentation as an end in itself, the approach uses generative image technology to create contextually relevant stimuli aligned with cognitive principles of memory formation. A controlled experiment (text-only vs. text + AI-generated images) was conducted with 40 English learners recruited in China. Participants completed immediate and delayed recall tests, a definition-selection task, an image-to-word matching task (administered only in the multimodal condition), and semantic rating tasks. The multimodal group significantly outperformed the text-only group on all measures, with large effect sizes for memory retention and semantic understanding. Because no condition with traditional (non-AI) images was included, this advantage cannot be attributed specifically to the AI-generated nature of the images. The findings indicate that multimodal presentation can support durable, meaningful vocabulary learning when visual materials are designed to reflect perceptual and contextual features that aid memory. The study highlights the pedagogical potential of combining multimodal materials with memory-informed instructional design in language education.
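The abstract reports "large effect sizes" for the between-group comparisons. For readers unfamiliar with the statistic, a standard way to quantify this is Cohen's d (mean difference divided by pooled standard deviation), where d ≥ 0.8 is conventionally read as a large effect. The sketch below computes it on hypothetical recall scores; the data and the two group names are invented for illustration and are not the study's results.

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    # Sample variances (Bessel-corrected)
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical recall scores (max 20), 20 learners per condition -- NOT the study's data
multimodal = [18, 17, 19, 16, 18, 17, 18, 19, 16, 17,
              18, 17, 19, 18, 16, 17, 18, 19, 17, 18]
text_only  = [17, 16, 18, 15, 17, 16, 17, 18, 15, 16,
              17, 16, 18, 17, 15, 16, 17, 18, 16, 17]

print(round(cohens_d(multimodal, text_only), 2))  # ~1.01, a "large" effect by convention
```

With a design like the one described (two independent groups of 20), d is typically reported alongside the t-test for each outcome measure.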
