[Image: A person wearing a VR headset, holding a controller, surrounded by speakers]




SONICOM is a 5-year research project funded under the Horizon 2020 FET Proactive – Boosting Emerging Technologies call (H2020-FETPROACT-2018-2020).

Immersive audio is our everyday experience of being able to hear and interact with sounds around us. Simulating spatially located sounds in virtual or augmented reality (VR/AR) must be tailored to each individual listener, and currently requires expensive, time-consuming individual measurements, making it commercially unfeasible. The impact of immersive audio beyond perceptual metrics such as presence and localisation is still an unexplored area of research, particularly in relation to social interaction, entering the behavioural and cognitive realms.

SONICOM will revolutionise the way we interact socially within AR/VR environments and applications by leveraging Artificial Intelligence to design a new generation of immersive audio technologies and techniques, specifically looking at personalisation and customisation of the audio rendering. Using a data-driven approach, it will explore, map, and model how the physical characteristics of spatialised auditory stimuli can influence observable behavioural, physiological, kinematic, and psychophysical reactions of listeners within social interaction scenarios.

The developed techniques and models will be evaluated in an ecologically valid manner, exploiting AR/VR simulations as well as real-life scenarios, and developing appropriate hardware and software proofs of concept.

Finally, to reinforce reproducible research and promote future development and innovation in the area of auditory-based social interaction, the SONICOM Ecosystem will be created, which will include auditory data closely linked with model implementations and immersive audio rendering components.


“Imagine a virtual meeting space where you see colleagues to your right, left, and across from you. We want to make this possible in audio form, using AI not only to improve and personalise the immersive sound rendering, but also to model the reactions of the listeners and predict how this could influence and possibly modify the interactions.”

Listen here to an example of binaurally spatialised audio – please wear a pair of headphones.

— Lead investigator Dr Lorenzo Picinali, Imperial's Dyson School of Design Engineering



SONICOM involves an international team of 10 research institutions and creative tech companies from six European countries. The partnership includes researchers and AR/VR experts from seven research/academic institutions and three small-to-medium enterprises.

Read more about SONICOM's partners

Research/academic partners:

Small-to-medium enterprise partners:


SONICOM's partners

Map of SONICOM partners