Understanding the neurocognitive mechanisms of vocal communication in health and disease
Welcome to the Voice, Emotion, and Speech (VoicES) Neuroscience Lab!
Our Research

The human voice is likely the most important category of sound in our auditory landscape. For example, a preference for human voices over non-social auditory stimuli is already observed in neonates. In the context of a conversation, listeners need to rapidly integrate multiple auditory cues, which include not only linguistic information but also paralinguistic, speaker-specific information, such as the speaker's emotional state. At the VoicES Laboratory we investigate how humans perceive, recognize, and make sense of voice, speech, and emotional information. For example:


How do we differentiate familiar and unfamiliar voices?

How do we decode emotions from other people’s voices?

How do we assign meaning to speech stimuli?

Where and when do these processes take place in our brains?

To address these questions, we use behavioral and brain imaging measures in both healthy and patient populations.

Meet our Team

Latest News


VoicES Lab – Current semester meetings

This semester, the VoicES Lab Meetings take place online on Wednesdays at 11:00 am. Contact…

Pointing with words – Language as a social motor activity?

Language is everywhere and the use of it is one of the defining features of…

Why do we get chills when listening to music?

Have you ever experienced a pleasant chill running up your spine when listening to a…

Follow us on our social media channels

We have a new paper online in Cortex! “Real and imagined sensory feedback have comparable effects on action anticipation”
https://authors.elsevier.com/a/1bQhM2VHXzzH3

Life under lockdown can be extremely demanding and tough. We would like to invite you to participate in a study about the effects of social isolation in the context of the COVID-19 pandemic.
To learn more, click here:
https://ulfp.qualtrics.com/jfe/form/SV_0wbTE0XZIOo560R

Have you read our latest paper on #memory? “Is Internal Source Memory Recognition Modulated by Emotional Encoding Contexts?” https://doi.org/10.1007/s00426-020-01294-4



Address

Faculty of Psychology
University of Lisbon
Alameda da Universidade
1649-013 Lisboa
Portugal

Project by

The VoicES Lab is part of the CICPSI CO2 Research Group.


This website was created with financial support from FCT (UIDB/04527/2020 and UIDP/04527/2020).

© 2020 VoicES Neuroscience Lab | All rights reserved | Developed by Luminária Digital Agency