Understanding the neurocognitive mechanisms of vocal communication in health and disease

Research Lines

VoicES in context

Despite the increasing number of studies probing how humans decode voice, speech, and emotion information, the neurocognitive underpinnings of these processes remain elusive. At the VoicES Laboratory, we investigate the effects of stimulus properties (e.g., sound duration, intensity), task demands (e.g., attention to either speech or speaker-specific cues), and social context (e.g., joint action) on vocal communication.

As the dynamically changing nature of auditory stimuli suggests, timing is everything in voice and speech research. Taking advantage of the high temporal resolution of the ERP methodology, we demonstrated, for example, that vocal emotional information is processed very rapidly and in a highly automatic way, i.e., even when it is not relevant to the listener’s current task or not in the focus of attention.

Pinheiro, A. P., Lima, D., Albuquerque, P., Anikin, A., & Lima, C. F. (2019) 
Spatial location and emotion modulate voice perception. Cognition and Emotion, 33(8), 1577-1586. doi: 10.1080/02699931.2019.1586647

Conde, T., Gonçalves, O. F., & Pinheiro, A. P. (2018)
Stimulus complexity matters when you hear your own voice: Attention effects on self-generated voice processing. International Journal of Psychophysiology, 133, 66-78. doi: 10.1016/j.ijpsycho.2018.08.007

Pinheiro, A. P., Barros, C., Dias, M., & Kotz, S. A. (2017)
Laughter catches attention! Biological Psychology, 130, 11-21. doi: 10.1016/j.biopsycho.2017.09.012

Pinheiro, A. P., Barros, C., Dias, M., & Kotz, S. A. (2017)
Is laughter a better vocal change detector than a growl? Cortex, 92, 233-248. doi: 10.1016/j.cortex.2017.03.018

VoicES in neuropsychiatric disorders

Alterations in voice, emotion, and speech perception mechanisms are characteristic of neuropsychiatric disorders such as psychosis. Specifically, a substantial body of evidence shows that a failure to distinguish between internally and externally generated sensory signals (e.g., my voice vs. someone else’s voice) underlies the experience of auditory verbal hallucinations (AVH). This failure seems to stem from an inability to predict the sensory consequences of self-generated signals, reflecting impaired internal forward modeling.

Our studies aim to bring us closer to understanding why some people hear voices when there is nobody speaking. Enhancing our basic understanding of AVH pathophysiology may inform early diagnosis and treatment/prevention strategies.

Our work benefits from established international collaborations (Maastricht University; Harvard Medical School [HMS]; the International Consortium on Hallucination Research [ICHR]; and the Early Career Hallucination Research group [ECHR]), in close interaction with mental health units (e.g., Psychiatry Department, Hospital de Santa Maria, Lisbon).

Pinheiro, A. P., Schwartze, M., & Kotz, S. A. (2020)
Cerebellar circuitry and auditory verbal hallucinations: An integrative synthesis and perspective. Neuroscience and Biobehavioral Reviews. doi: 10.1016/j.neubiorev.2020.08.004

Pinheiro, A. P., Schwartze, M., Amorim, M., Coentre, R., Levy, P., & Kotz, S. A. (2020)
Changes in motor preparation affect the sensory consequences of voice production in voice hearers. Neuropsychologia, 146, 107531. doi: 10.1016/j.neuropsychologia.2020.107531

Pinheiro, A. P., & Niznikiewicz, M. (2019)
Altered attentional processing of happy prosody in schizophrenia. Schizophrenia Research, 206, 217-224. doi: 10.1016/j.schres.2018.11.024

VoicES across the lifespan

As social beings, we face the critical developmental challenge of learning to rapidly and successfully decode the intentions and goals of others.

The production and perception of voice, speech and emotion undergo major developmental changes from childhood to older adulthood. For example, the ability to accurately decode emotions improves from childhood to adulthood but declines in older adulthood. There is also some evidence suggesting that the recognition of different emotions may develop at different paces.

Our research aims to specify how voice, speech, and emotion perception is affected by age and how these perceptual changes relate to brain changes across the lifespan.

Amorim, M., Anikin, A., Mendes, A. J., Lima, C. F., Kotz, S. A., & Pinheiro, A. P. (in press)
Changes in vocal emotion recognition across the life span. Emotion. doi: 10.1037/emo0000692

Adaptation in VoicES

We investigate adaptation processes in voice, speech, and emotion by examining two distinct models of experience-driven neuroplasticity: long-term musical training and visual deprivation.

The advantages of musical training and expertise for different cognitive domains have been consistently demonstrated. Some studies indicate that this expertise might translate into enhanced language and speech perception abilities, including vocal emotion perception. Our studies aim to clarify whether musical training enhances the extraction of regularities from voice and speech stimuli, as well as the decoding of emotional salience from voice and face stimuli.

After the loss of vision, the human brain reorganizes to compensate for the missing sensory input. In contrast to sighted individuals, who rely strongly on vision, blind listeners depend more on voice cues to identify and interact with their social partners in everyday life. By combining EEG and behavioral methods, we also investigate whether visual deprivation at different stages of neural development affects the (re)organization of voice identity and emotion perception mechanisms.

Pinheiro, A. P., Vasconcelos, M., Dias, M., Arrais, N., & Gonçalves, O. F. (2015)
The music of language: An ERP investigation of the effects of musical training on emotional prosody processing. Brain and Language, 140, 24-34. doi: 10.1016/j.bandl.2014.10.009

Our work has been funded by FCT and the BIAL Foundation:

“Voice perception in the visually deprived: Behavioral and electrophysiological insights”.

This project aims to specify the effects of visual deprivation on voice perception mechanisms, combining behavioral and EEG methods.

“When prediction errs: Examining the brain dynamics of altered saliency in self-voice perception”.

This project examined how unexpected changes in the emotional quality of the voice during speaking impact EEG signatures indexing sensory feedback processing (N1, P2, neural oscillations).

“I predict, therefore I do not hallucinate: a longitudinal study testing the neurophysiological underpinnings of auditory verbal hallucinations”.

The project probed which aspects of impaired sensory prediction may explain the experience of AVH in psychotic and nonclinical voice hearers. Combining ERP measures and neural oscillatory activity in the time-frequency domain, it examined the role of stimulus salience in altered sensory prediction in AVH.

“Examining abnormalities in auditory emotional processing in schizophrenia: an electrophysiological investigation with high-risk, early-stage and chronic patients”.

This project examined the brain mechanisms underlying vocal emotion perception across the psychosis spectrum, using EEG.

“Electrophysiological investigation of auditory affective processing in schizophrenia and its relationship with self-monitoring: a window into auditory hallucinations?”.

This project examined the role of emotion in self-other voice discrimination in psychosis, and its relationship with auditory verbal hallucinations.

Address

Faculty of Psychology
University of Lisbon
Alameda da Universidade
1649-013 Lisboa
Portugal

The VoicES Lab is part of the CICPSI CO2 Research Group.

This website was created with financial support from FCT (UIDB/04527/2020 and UIDP/04527/2020).

© 2020 VoicES Neuroscience Lab | All rights reserved | Developed by Luminária Digital Agency