Visual Speech Segmentation: Using Facial Cues to Locate Word Boundaries in Continuous Speech
Language, Cognition and Neuroscience
Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech, and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative about word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition.
Mitchel, Aaron and Weiss, Daniel J. "Visual Speech Segmentation: Using Facial Cues to Locate Word Boundaries in Continuous Speech." Language, Cognition and Neuroscience 29, no. 7 (2014): 771–780.