Y2 PROJECT

3D Headphone Technology

The sound we hear in everyday life comes from every direction. We hear a bird singing from where the bird is, footsteps from below us, and raindrops hitting an umbrella from above. Rendering these simple sound directions in a virtual environment, however, is very difficult, because humans locate a sound by distinguishing tiny differences between the sound waves arriving at the right and left ears.
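One of the "tiny differences" the brain uses is the interaural time difference (ITD): sound from the side reaches the near ear slightly earlier than the far ear. As an illustrative sketch (not part of the technology described here), Woodworth's classic spherical-head approximation estimates this delay from the source azimuth; the head radius used below is an assumed average value.

```python
import math

HEAD_RADIUS_M = 0.0875   # assumed average head radius in meters
SPEED_OF_SOUND = 343.0   # speed of sound in air, m/s (~20 degrees C)

def itd_seconds(azimuth_deg):
    """Woodworth's spherical-head approximation of the interaural
    time difference for a far-field source at the given azimuth
    (0 = straight ahead, 90 = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (math.sin(theta) + theta)

# A source directly to the side arrives roughly 0.66 ms earlier
# at the near ear than at the far ear.
print(round(itd_seconds(90) * 1000, 2))  # delay in milliseconds
```

Delays on the order of a few hundred microseconds, together with level and spectral differences between the ears, are what a rendering system must reproduce to make a virtual source appear to come from a particular direction.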

The 3D Headphone technology introduced here can render a sound source in any direction on the sphere by controlling the signals fed to the right and left channels of the headphones. In other words, you can place virtual sound sources wherever you want.


Try it out with your headphones.
Viewing the video at low resolution will reduce the audio quality.
For the best viewing and listening experience possible,
please set the video quality to high definition (1080p or higher). [1]

How does it work?

[Image: 3d_simulation.jpg]

We use Head-Related Transfer Functions (HRTFs). An HRTF models how a sound wave propagates from a sound source to the ears. We developed an advanced generic HRTF that fits many listeners by computing an average of human ear shapes and its associated average HRTF. This was achieved with a recently introduced ear shape modeling technology [2], combined with high-fidelity acoustic simulation techniques.

Our new HRTF technology thus achieves high-quality rendering of sound localization not only for any direction within the horizontal plane, but also for sound sources placed above or below the listener, which was previously very difficult. In addition, the technology requires neither measuring the listener's individual HRTFs nor selecting a fitting HRTF from a predefined database. It is therefore easy to use and can be integrated into any kind of application that makes use of HRTFs.
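The basic rendering step an HRTF enables can be sketched as follows: filter a mono source with the left-ear and right-ear impulse responses (HRIRs) of the desired direction, producing the two headphone channels. This is a minimal illustrative sketch, not the actual implementation described above; the HRIR values are made up for demonstration (real HRIRs are hundreds of samples long and come from measurement or simulation).

```python
# Minimal sketch of binaural rendering with a time-domain HRTF
# (a pair of head-related impulse responses, HRIRs, per direction).

def convolve(signal, impulse_response):
    """Direct-form FIR convolution.
    Output length: len(signal) + len(impulse_response) - 1."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

def render_binaural(mono, hrir_left, hrir_right):
    """Filter one mono source with the left/right HRIRs of a direction,
    returning the (left, right) headphone signals."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# Toy HRIRs for a source on the listener's left: the near (left) ear
# receives the sound earlier and louder than the far (right) ear.
hrir_l = [0.9, 0.2]
hrir_r = [0.0, 0.0, 0.4, 0.1]  # leading zeros model the extra delay

left, right = render_binaural([1.0, 0.0, 0.0], hrir_l, hrir_r)
```

To move the virtual source, a renderer simply switches to the HRIR pair of a different direction (interpolating between measured or simulated directions as needed).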

What can I use it for?

It is suited to any application meant to deliver a highly realistic acoustic sound field: for example, rendering the voice of a video-chat participant from their position on the screen, rendering 3D audio for virtual reality head-mounted displays, 3D games, and so on.

For more information, please contact us.

[1] The maximum audio bitrate available on YouTube is 192 kbps (as of June 2016). Compared with the uncompressed source, sound localization performance is reduced even when viewing at the best possible video quality.



[2] Kaneko, S., Suenaga, T., Fujiwara, M., Kumehara, K., Shirakihara, F., & Sekine, S. (2016, February). Ear Shape Modeling for 3D Audio and Acoustic Virtual Reality: The Shape-Based Average HRTF. In Audio Engineering Society Conference: 61st International Conference: Audio for Games. Audio Engineering Society.