A Mechanism For Sensorimotor Translation in Singing: The Multi-Modal Imagery Association (MMIA) Model
We propose a new framework to understand singing accuracy, based on multi-modal imagery associations: the MMIA model. The model draws on recent data suggesting a link between auditory imagery and singing accuracy, evidence for a link between imagery and the functioning of internal models for sensorimotor associations, and the use of imagery in singing pedagogy. By this account, imagery involves automatic associations between different modalities, which in the present context comprise associations between pitch height and the regulation of vocal fold tension. Importantly, these associations are based on probabilistic relationships that may vary with respect to their precision and accuracy. We further describe how this framework may be extended to multi-modal associations at the sequential level, and how these associations develop. The model we propose here constitutes one part of a larger architecture responsible for singing, but at the same time is cast at a general level that can extend to multi-modal associations outside the domain of singing.
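The distinction between the precision and the accuracy of a probabilistic pitch-to-motor association can be sketched computationally. The following is a minimal illustration, not an implementation of the MMIA model itself: it assumes (hypothetically) that the imagery association maps an imagined pitch to a produced pitch with a systematic bias (inaccuracy) and Gaussian noise (imprecision), and compares the resulting mean absolute pitch error for three simulated singers.

```python
import random
import statistics

def imagery_association(target_pitch, bias=0.0, noise_sd=0.5):
    """Map an imagined pitch (in semitones) to a produced pitch.

    `bias` stands in for systematic inaccuracy of the association;
    `noise_sd` stands in for its imprecision. Both parameters are
    hypothetical simplifications, not quantities defined by the model.
    """
    return target_pitch + bias + random.gauss(0.0, noise_sd)

def mean_absolute_error(targets, bias, noise_sd, trials=1000):
    """Average absolute pitch error over repeated productions."""
    errors = []
    for _ in range(trials):
        for t in targets:
            errors.append(abs(imagery_association(t, bias, noise_sd) - t))
    return statistics.mean(errors)

random.seed(0)
targets = [60, 62, 64, 65, 67]  # a short melodic sequence (MIDI pitch numbers)

accurate_precise = mean_absolute_error(targets, bias=0.0, noise_sd=0.2)
accurate_imprecise = mean_absolute_error(targets, bias=0.0, noise_sd=1.0)
inaccurate_precise = mean_absolute_error(targets, bias=1.5, noise_sd=0.2)

print(f"accurate & precise:    {accurate_precise:.2f} semitones")
print(f"accurate & imprecise:  {accurate_imprecise:.2f} semitones")
print(f"inaccurate & precise:  {inaccurate_precise:.2f} semitones")
```

Under these assumptions, an association can be precise yet inaccurate (consistently sharp or flat) or accurate on average yet imprecise (scattered around the target), and singing error increases along either dimension independently.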