Poster No:
2495
Submission Type:
Abstract Submission
Authors:
Stefano Ioannucci1, Carole Jordan2, Anne-Sophie Carnet2, Carolyn McGettigan3, Petra Vetter2
Institutions:
1University of Fribourg, Fribourg, Switzerland, 2Visual and Cognitive Neuroscience Lab, University of Fribourg, Fribourg, Switzerland, 3UCL Speech Hearing and Phonetic Sciences, University College London, London, United Kingdom
First Author:
Stefano Ioannucci
University of Fribourg
Fribourg, Switzerland
Co-Author(s):
Carole Jordan
Visual and Cognitive Neuroscience Lab, University of Fribourg
Fribourg, Switzerland
Anne-Sophie Carnet
Visual and Cognitive Neuroscience Lab, University of Fribourg
Fribourg, Switzerland
Carolyn McGettigan
UCL Speech Hearing and Phonetic Sciences, University College London
London, United Kingdom
Petra Vetter
Visual and Cognitive Neuroscience Lab, University of Fribourg
Fribourg, Switzerland
Introduction:
Sound symbolism, also called the "bouba-kiki effect", is a fascinating phenomenon in psychology that demonstrates a non-arbitrary association between meaningless speech sounds and visual shapes. For example, participants routinely associate rounded, curvy shapes with the word "bouba" and jagged, angular shapes with the word "kiki" (Lockwood & Dingemanse, 2015; Ramachandran & Hubbard, 2001). This suggests a cross-modal association between certain speech sounds and specific visual characteristics.
However, despite a long tradition of behavioral experiments demonstrating sound symbolism, evidence for its neural correlates in the human brain remains sparse (Peiffer-Smadja & Cohen, 2019; Revill et al., 2014).
Given that early visual cortex has been shown to respond to auditory stimuli in a selective and content-specific way (Vetter et al., 2014, 2020), we sought to build on these two separate lines of research and assess whether visual cortical regions implicated in the processing of visual shape are sensitive to the implicit shape conveyed by sound-symbolic speech sounds. We hypothesized that sound-symbolic cross-modal shape associations should result in distinguishable neural activity patterns in the visual cortex, particularly in shape-selective brain regions.
Methods:
We acquired 3T fMRI BOLD signals from 8 healthy adult participants while they were blindfolded and listened to a set of sound-symbolic words (rounded, spiky, and mixed). Participants' task was to categorize the visual shape associated with each word on a 4-point scale (from more spiky to more round). After standard fMRI pre-processing, we extracted beta weights from anatomically defined visual regions of interest (V1, V2, V3 and LOC). A within-subject multivariate pattern classification analysis (MVPA) was then carried out, with chance level and the null distribution estimated empirically via label permutation, to assess whether different sound-symbolic word categories were represented by distinct neural activity patterns in our visual regions of interest.
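As an illustration of the analysis logic, a within-subject, permutation-based MVPA of ROI beta weights could be sketched as follows in Python with scikit-learn. The classifier, cross-validation scheme, number of permutations, and the placeholder data are assumptions for illustration only, not the actual analysis pipeline.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder data: per-trial beta weights from one ROI (e.g. V1 or LOC)
# and the sound-symbolic category of each trial (0 = rounded, 1 = spiky).
betas = rng.normal(size=(80, 500))   # (n_trials, n_voxels)
labels = np.repeat([0, 1], 40)       # (n_trials,)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Observed cross-validated decoding accuracy
observed_acc = cross_val_score(clf, betas, labels, cv=cv).mean()

# Empirical null distribution: repeat the decoding with permuted category labels
n_permutations = 1000
null_acc = np.array([
    cross_val_score(clf, betas, rng.permutation(labels), cv=cv).mean()
    for _ in range(n_permutations)
])

# One-sided permutation p-value for above-chance decoding in this ROI
p_value = (np.sum(null_acc >= observed_acc) + 1) / (n_permutations + 1)
print(f"accuracy = {observed_acc:.3f}, permutation p = {p_value:.3f}")

In practice, scikit-learn's permutation_test_score wraps the same permutation logic in a single call.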
Results:
Behavioral results showed that we could replicate the "bouba-kiki" effect inside the MRI scanner: "rounded" words were more often judged to be associated with rounded shapes than "spiky" words, and vice versa. Preliminary fMRI analyses indicate that sounds judged as "round" or "spiky" can be successfully decoded from fMRI activity within visual cortical regions of interest in 5 out of 8 participants, corresponding to a maximum a posteriori (MAP) point estimate of 68% as the most likely value for the population prevalence of the effect (Ince et al., 2021).
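For context, a minimal sketch of a MAP population-prevalence estimate in the spirit of Ince et al. (2021) is given below, assuming a uniform prior and a known within-participant false-positive rate alpha. The significance threshold and posterior summaries used in the actual analysis are not stated here, so the sketch is illustrative and not expected to reproduce the reported estimate exactly.

def map_prevalence(k, n, alpha=0.05):
    """MAP estimate of the population prevalence of an effect, given k of n
    participants significant at within-participant level alpha (uniform prior)."""
    # Mode of the Beta(k + 1, n - k + 1) posterior on the probability that a
    # randomly chosen participant yields a significant result.
    theta_map = k / n
    # Linear map from that probability to prevalence, clipped at zero.
    return max(0.0, (theta_map - alpha) / (1.0 - alpha))

# Hypothetical usage; the study's actual within-participant threshold is assumed here.
print(f"MAP prevalence: {map_prevalence(k=5, n=8, alpha=0.05):.2f}")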
Conclusions:
Our preliminary results show that sound-symbolic words entailing a visual shape association can be decoded from fMRI activity patterns in early visual and shape-selective cortical regions, even in the absence of visual stimulation. These results provide evidence that the visual cortex can represent information from speech, even when it consists of meaningless non-words, as long as these non-words carry a visual shape association. This supports the notion that sound symbolism may be a genuine effect of sensory and cross-modal association in the human brain rather than a linguistic epiphenomenon. Furthermore, our results add to the existing evidence that visual brain regions do not just process visual feed-forward information but are also sensitive to top-down, high-level auditory information. How finely tuned the visual cortex is to different characteristics of auditory stimuli, and thus how deep audiovisual interactions in the visual cortex run, remains to be explored.
Higher Cognitive Functions:
Imagery 2
Modeling and Analysis Methods:
Multivariate Approaches
Perception, Attention and Motor Behavior:
Perception: Multisensory and Crossmodal 1
Perception: Visual
Keywords:
Cognition
FUNCTIONAL MRI
Hearing
Language
Multivariate
Perception
Vision
1|2 Indicates the priority used for review
References:
Ince, R. A., Paton, A. T., Kay, J. W., & Schyns, P. G. (2021). Bayesian inference of population prevalence. eLife, 10, e62461. https://doi.org/10.7554/eLife.62461
Lockwood, G., & Dingemanse, M. (2015). Iconicity in the lab: A review of behavioral, developmental, and neuroimaging research into sound-symbolism. Frontiers in Psychology, 6, 1246. https://doi.org/10.3389/fpsyg.2015.01246
Peiffer-Smadja, N., & Cohen, L. (2019). The cerebral bases of the bouba-kiki effect. NeuroImage, 186, 679–689. https://doi.org/10.1016/j.neuroimage.2018.11.033
Ramachandran, V. S., & Hubbard, E. M. (2001). Synaesthesia—A window into perception, thought and language. Journal of Consciousness Studies, 8(12), 3–34.
Revill, K. P., Namy, L. L., DeFife, L. C., & Nygaard, L. C. (2014). Cross-linguistic sound symbolism and crossmodal correspondence: Evidence from fMRI and DTI. Brain and Language, 128(1), 18–24. https://doi.org/10.1016/j.bandl.2013.11.002
Vetter, P., Bola, Ł., Reich, L., Bennett, M., Muckli, L., & Amedi, A. (2020). Decoding natural sounds in early “visual” cortex of congenitally blind individuals. Current Biology, 30(15), 3039-3044.e2. https://doi.org/10.1016/j.cub.2020.05.071
Vetter, P., Smith, F. W., & Muckli, L. (2014). Decoding sound and imagery content in early visual cortex. Current Biology, 24(11), 1256–1262. https://doi.org/10.1016/j.cub.2014.04.020