Binaural synthesis allows for the authentic reproduction and placement of virtual sound sources within three-dimensional space.
Localisation cues for horizontal direction, elevation and distance are captured in a set of binaural room impulse responses (BRIRs), recorded with an artificial head or with a real person wearing microphones inserted into each ear canal.
During playback, the left- and right-ear BRIRs matching the direction of the sound source are selected and convolved with the source signal, employing a partitioned convolution scheme for real-time processing. The scene is updated dynamically as angles and distances change.
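The core of such a renderer is the block-wise convolution of the source signal with the selected BRIR. The following is a minimal numpy sketch of uniformly partitioned overlap-save convolution, the standard low-latency scheme hinted at above; the function name, block size and interface are illustrative assumptions, not taken from the project itself (a real renderer would run this once per ear with the left and right BRIRs).

```python
import numpy as np

def partitioned_convolve(x, h, B=64):
    """Uniformly partitioned overlap-save convolution (sketch).

    x : input signal, h : (BR)IR for one ear, B : block size.
    Returns the first ceil(len(x)/B)*B samples of x * h.
    """
    # Partition the impulse response into P blocks of length B and
    # precompute their spectra (FFT size 2B -> B+1 rfft bins).
    P = -(-len(h) // B)
    h = np.pad(h, (0, P * B - len(h)))
    H = np.fft.rfft(h.reshape(P, B), n=2 * B, axis=1)

    n_in = -(-len(x) // B)
    x = np.pad(x, (0, n_in * B - len(x)))

    fdl = np.zeros((P, B + 1), dtype=complex)   # frequency-domain delay line
    out = np.empty(n_in * B)
    prev = np.zeros(B)                          # overlap-save: previous input block
    for n in range(n_in):
        cur = x[n * B:(n + 1) * B]
        fdl = np.roll(fdl, 1, axis=0)           # shift spectra one block back in time
        fdl[0] = np.fft.rfft(np.concatenate([prev, cur]))
        # Sum partition-wise products, transform back, keep the valid half.
        y = np.fft.irfft((fdl * H).sum(axis=0))
        out[n * B:(n + 1) * B] = y[B:]
        prev = cur
    return out
```

Only one FFT of the input is needed per block regardless of the IR length, which is what makes the scheme attractive for real-time processing of long room responses.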
Playback over headphones is mandatory: it ensures that the time and frequency response characteristics belonging to a single direction reach each of the listener's ears discretely and under controlled conditions.
Given a head-tracking device, head movements can also be accounted for, keeping source positions stable in space and improving localisation precision, since small head movements provide additional cues.
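Head-tracking compensation amounts to rendering each source at its direction relative to the current head orientation, then snapping that direction to the nearest measured BRIR. A minimal sketch for the horizontal plane, assuming a uniform measurement grid (the function names, the 5-degree grid and the yaw-only tracking are illustrative assumptions):

```python
def relative_azimuth(source_az_deg, head_yaw_deg):
    """Source direction in the head's frame, wrapped to (-180, 180]."""
    rel = (source_az_deg - head_yaw_deg) % 360.0
    return rel - 360.0 if rel > 180.0 else rel

def nearest_brir_index(rel_az_deg, grid_deg=5.0):
    """Snap to the closest measured BRIR direction on a uniform azimuth grid."""
    return int(round((rel_az_deg % 360.0) / grid_deg)) % int(360 / grid_deg)
```

When the head turns, the relative azimuth changes and a different BRIR pair is selected, so the source stays fixed in the room rather than rotating with the head; a practical renderer would additionally crossfade between the outgoing and incoming BRIRs to avoid switching artefacts.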
The realism and plausibility of the scene depend on how well the listener's head and body anatomy, including the influence of the headphones, match the conditions of the recording situation. The experience may therefore vary from person to person owing to different anatomical shapes, especially of the pinna. With a good match, the most prominent feature, besides distinct positions for source localisation, is externalisation: the perception of the sound source outside the listener's head.
BinauralBoids is the third in a series of audio devices relating spatial reproduction to a set of parameters derived from a swarm simulation, evolving from the Stereoboids instrument plugin (2007) through the Audioboids wave field synthesis installation (2008) to the BinauralBoids of today.