The human voice is likely the most important sound category in the human auditory landscape. For example, a preference for human voices over non-social auditory stimuli is already observed in neonates. In the context of a conversation, listeners need to rapidly integrate multiple auditory cues, which include not only linguistic but also paralinguistic or speaker-specific information, such as the speaker's emotional state. At the VoicES Laboratory we investigate how humans perceive, recognize, and make sense of voice, speech, and emotional information. For example:
We have a new paper online in Cortex! “Real and imagined sensory feedback have comparable effects on action anticipation”
https://authors.elsevier.com/a/1bQhM2VHXzzH3
Life under lockdown can be extremely demanding. We would like to invite you to participate in a study on the effects of social isolation in the context of the COVID-19 pandemic.
To learn more, click here:
https://ulfp.qualtrics.com/jfe/form/SV_0wbTE0XZIOo560R
Have you read our latest paper on #memory? “Is Internal Source Memory Recognition Modulated by Emotional Encoding Contexts?” https://doi.org/10.1007/s00426-020-01294-4
The VoicES Lab is part of the CICPSI CO2 Research Group.
This website was created with financial support from FCT (UIDB/04527/2020 and UIDP/04527/2020).
Faculty of Psychology
University of Lisbon
Alameda da Universidade
1649-013 Lisboa
Portugal
© 2020 VoiCES Neuroscience Lab | All rights reserved | Developed by Luminária Digital Agency