
Principal Investigators (PIs)


Alessandro Vinciarelli

School of Computing Science, University of Glasgow (UK)


Alessandro Vinciarelli is a full Professor at the University of Glasgow, where he is a member of the School of Computing Science and an Affiliate Academic of the Institute of Neuroscience and Psychology. His main research interest is Social Signal Processing, the computing domain aimed at the modelling, analysis, and synthesis of nonverbal communication in human-human and human-machine interactions. In particular, his work aims to develop computational models capable of inferring social and psychological phenomena (e.g., personality or conflict) from nonverbal behavioural cues (e.g., facial expressions and tone of voice) automatically detected in recordings of human behaviour (e.g., videos) captured with multiple sensors (e.g., cameras and accelerometers). In simple terms, he helps machines to understand the social landscape in the same way humans do. The goal is to make machines socially intelligent, i.e., capable of participating seamlessly in social interactions.

Alessandro Vinciarelli is the leader of WP2, the work package dedicated to AI-driven approaches for the automatic analysis and synthesis of socially relevant stimuli in Virtual Reality. His efforts will focus on developing approaches for mapping virtual behavioural stimuli into social perceptions.


  • Tayarani, A. Esposito and A. Vinciarelli, “What an ‘Ehm’ Leaks About You: Mapping Fillers into Personality Traits with Quantum Evolutionary Feature Selection Algorithms”, accepted for publication in IEEE Transactions on Affective Computing, to appear, 2021.
  • Aylett, M. Wester and A. Vinciarelli, “Speech Synthesis for the Generation of Artificial Personality”, IEEE Transactions on Affective Computing, Vol. 11, No. 2, pp. 361-372, 2020.
  • Mohammadi and A. Vinciarelli, “Automatic Personality Perception: Prediction of Trait Attribution Based on Prosodic Features”, IEEE Transactions on Affective Computing, Vol. 3, No. 3, pp. 273-284, 2012.