Decoding the Shape of our Ears

In everyday indoor environments, we can locate sound sources easily, even with our eyes closed. One reason is room acoustics: the way sound is reflected and absorbed by the materials around us. Yet take the room away, and we retain this ability. Why? Because of the way the human body is shaped.

Sound is absorbed and reflected by our bodies. By the time sound arrives at our eardrums, its properties have changed – some frequencies are attenuated and others enhanced – and this happens differently for each position a sound can come from. This position-dependent filtering is known as the head-related transfer function (HRTF). It acts as a personal acoustic filter, shaped by our individual anatomy.

Imagine a drone hovering in front of you while your eyes are closed. The drone flies straight up above you, then lands on the ground behind you. For all of these positions, there are no time or level differences between the ears. The reason we can still easily locate the drone is the particular shape of our ears. The outer ear is as individual as a fingerprint (yes, even the left and right ear differ from one another!). Over the course of our lives, we have learnt to map the filtering effect of our ears to specific locations around us.

To see this in action, check out engineer and online educator Mark Rober’s short demonstration of how our ear structures shape the way we locate sound:

For sound sources to our left or right, time and level differences arise between the two ears. However, some positions produce identical – or no – differences between the ears: for example, a sound directly in front of us versus directly behind us, or directly above versus directly below.
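To make the time-difference cue concrete, here is a small sketch using Woodworth's classic spherical-head approximation of the interaural time difference (ITD). The function name, the head radius, and the speed of sound are illustrative assumptions, not values from the SONICOM project:

```python
import math

def interaural_time_difference(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of the interaural
    time difference (ITD), in seconds, for a distant source.
    0 deg = straight ahead, 90 deg = directly to one side.
    The head radius (~8.75 cm) and speed of sound (343 m/s) are
    typical textbook values, assumed here for illustration."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# A source straight ahead produces no ITD (the same holds anywhere
# on the median plane, e.g. directly above or behind), while a
# source at the side produces roughly two thirds of a millisecond:
print(round(interaural_time_difference(0) * 1000, 2))   # 0.0  (ms)
print(round(interaural_time_difference(90) * 1000, 2))  # 0.66 (ms)
```

The zero ITD on the median plane is exactly why front/back and up/down confusions would occur without the spectral cues of the HRTF.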

All the cues we use to locate sound are part of an individual HRTF. This ability to localise vanishes when we listen over ordinary headphones, because the sound bypasses the filtering of our anatomy, and so we perceive it inside our head. However, when we filter the headphone signal with an individual HRTF, we can trick the brain into perceiving the sound outside the head. As a result, we can feel more immersed in the movie, music, conversation, or anything else we’re listening to. This is one of the objectives of the SONICOM project.
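In practice, this filtering amounts to convolving the signal with a pair of head-related impulse responses (HRIRs), the time-domain counterparts of the HRTF. The sketch below is a minimal, hypothetical illustration: the toy HRIRs are hand-made delays rather than measured data, and the function name is our own:

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Spatialise a mono signal by convolving it with one
    head-related impulse response (HRIR) per ear."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Toy HRIRs for a source on the listener's left (illustrative only):
# the left ear hears the sound slightly earlier and louder.
fs = 48_000                                      # sample rate, Hz
hrir_left = np.zeros(64)
hrir_left[2] = 1.0                               # ~0.04 ms delay
hrir_right = np.zeros(64)
hrir_right[30] = 0.5                             # ~0.6 ms delay, quieter

mono = np.random.default_rng(0).standard_normal(fs // 10)  # 100 ms noise
binaural = render_binaural(mono, hrir_left, hrir_right)    # shape (2, N)
```

Real HRIRs, such as those in the SONICOM database, additionally encode the direction-dependent spectral shaping of the ears, which is what the toy delays above cannot capture.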

The challenge of individual HRTFs 

Because of our complex and unique ear shapes, it is not easy to provide individual HRTFs to a large number of people. HRTFs can be measured acoustically or computed from a high-resolution scan of the head and ears. Both methods are cumbersome, requiring a dedicated room and special, often very expensive, equipment.

The SONICOM database is currently one of the largest available datasets of both acoustically measured HRTFs and high-resolution scans of the torso, head, and ears, covering over 300 participants. Within the project, we have followed several approaches to model and approximate individual ear shapes, so that in the future we won’t need a high-resolution scan or acoustic measurement with special equipment for every person.

The Turret Lab at Imperial College London, where researchers are collecting HRTFs as part of the SONICOM project

Broader applications

Individual HRTFs have applications beyond entertainment. By tailoring headphone playback to a listener’s unique hearing pattern, individual HRTFs can reduce listening fatigue. An individual HRTF can also help the brain to identify dialogue in a noisy environment. 

Additionally, research into HRTFs and spatial hearing is crucial for developing and improving hearing-assistive devices, such as hearing aids and cochlear implants. One promising assistive application of HRTFs is auditory navigation for visually impaired people. SONICOM partner Dreamwaves has developed a mobile application, waveOut, that guides a person to a specific point using virtual sound sources played through headphones. This becomes incredibly helpful when a visually impaired person wants to locate something, such as a door in a building.

The interdisciplinary approach of SONICOM is a key strength. Researchers from a wide range of disciplines and backgrounds are collaborating to advance both fundamental and applied hearing science. Through SONICOM, spatial hearing research is not only being pushed forward but also translated into practical applications that offer real-world benefits.

Guest Author