Enhancing Virtual Reality with BeRTA: A Modular Spatial Audio Renderer

Close your eyes and listen to the world around you for a moment. Whether it’s birdsong, cars, the TV, or coworkers typing away at their desks, you’ll experience a variety of sounds that have depth, distance and direction.

Simulating this kind of 3D soundscape through headphones is known as spatial audio. As this technology becomes more important in fields like virtual reality, interactive media, and audio perception research, there’s a growing need for tools that are not only powerful and accurate, but also accessible and flexible.

That’s where the Binaural Rendering Toolbox (BRT) comes in. Developed by researchers from the SONICOM project at the University of Málaga and Imperial College London, BRT is a modular, open-source library designed to simulate realistic spatial sound in real time. Built with researchers and developers in mind, it allows precise control over how sound sources behave, how spaces respond acoustically, and how listeners perceive those environments.

On top of BRT, we’ve built BeRTA, a standalone application that brings this technology to life in a user-friendly way. BeRTA connects seamlessly with popular graphics engines like Unity, making it easy to create immersive auditory scenes for experiments, virtual environments, or artistic projects. Whether you’re studying human perception or designing rich interactive soundscapes, BRT and BeRTA provide the tools to do it with precision and creativity.

A powerful, flexible tool

BeRTA is a highly configurable application built on a modular architecture, allowing users to design a wide range of virtual sound scenarios. It handles all audio processing and spatialisation through the Binaural Rendering Toolbox (BRT), which builds upon algorithms originally developed in the 3D Tune-In Toolkit, expanding their functionality within a more flexible and research-oriented framework.

BeRTA integrates additional open-source components for tasks such as file handling, graphical user interface rendering, and communication via Open Sound Control (OSC). The entire system can be controlled remotely using OSC commands, while the GUI primarily serves as a visual aid for configuration and monitoring. A startup configuration file allows users to define default parameters and customise auditory scenes to meet specific experimental or creative needs.
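
Because the renderer is driven entirely by OSC, any OSC-capable client can act as the control surface. The short Python sketch below uses the python-osc package to illustrate the idea; the UDP port and the address paths shown are placeholders chosen for this example, not BeRTA's documented command set, so please check the official documentation for the exact messages.

# Minimal sketch of remote control over OSC (illustrative only).
# The OSC addresses and port below are hypothetical placeholders;
# consult the BeRTA documentation for the actual command set.
from pythonosc.udp_client import SimpleUDPClient

# BeRTA listens for OSC on a UDP port defined in its startup configuration
# (the port used here is an assumption for this example).
client = SimpleUDPClient("127.0.0.1", 10017)

# Hypothetical command: place the listener at the origin of the scene.
client.send_message("/listener/location", [0.0, 0.0, 0.0])

# Hypothetical command: move a source named "speech" 2 m in front of
# the listener and 1 m to the right.
client.send_message("/source/location", ["speech", 2.0, 1.0, 0.0])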

BeRTA supports multi-listener setups, simulates both omnidirectional and directional sound sources, and generates binaural output using HRTF and BRIR models. It recreates realistic room acoustics through techniques like Scattering Delay Networks, and includes binaural filters for perceptual compensation and fine-tuning.
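
At its core, binaural rendering comes down to filtering a mono signal with a pair of head-related impulse responses (HRIRs), one per ear, chosen for the source's direction. The sketch below is a generic illustration of that principle using NumPy and SciPy with synthetic stand-in HRIRs; it is not BRT's implementation, which works with measured HRTF and BRIR sets, models distance and room acoustics, and processes audio block-wise in real time.

# Conceptual illustration of binaural rendering by HRIR convolution.
# NOT the BRT implementation; the HRIRs here are synthetic stand-ins.
import numpy as np
from scipy.signal import fftconvolve

fs = 48000                                 # sample rate in Hz
mono = np.random.randn(fs)                 # 1 s of noise as a stand-in source

# Toy HRIR pair for a source to the listener's right: the left ear receives
# a slightly delayed, attenuated copy (interaural time/level differences).
hrir_right = np.zeros(256); hrir_right[0] = 1.0
hrir_left = np.zeros(256); hrir_left[30] = 0.6    # ~0.6 ms later, quieter

# Convolve the mono source with each ear's impulse response.
left = fftconvolve(mono, hrir_left)
right = fftconvolve(mono, hrir_right)

binaural = np.stack([left, right], axis=1)  # 2-channel signal for headphones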

This versatility makes BeRTA a robust solution for advanced auditory research and interactive spatial sound design.

More yet to come

BeRTA and the BRT Library are under active development, with upcoming features including advanced hybrid reverberation models, support for ambisonic sound sources, and tools for simulating hearing impairments.

As a versatile, open-source solution for spatial audio, BeRTA is designed to meet the needs of researchers and developers working in virtual reality, auditory perception, and immersive media. Its modular architecture, real-time rendering capabilities, and flexibility enable the creation of rich, realistic soundscapes that go far beyond what conventional audio tools offer.

You can explore the documentation, access the source code, and start using BeRTA today via our official GitHub repository: https://github.com/GrupoDiana/BRTLibrary
