Posted on August 3, 2022
Experts working on the University of York's XR Stories project have proposed a new sound strategy for developing virtual reality (VR) environments, based on how people experience images and sounds in the real world, thereby improving the user experience and reducing the risk of adverse effects such as VR motion sickness.
Virtual reality applications – typically experienced in the home – use a variety of techniques to deliver sound: some take a “movie” approach, in which sound sits outside the interactions of the VR world; others attach music to a single object, such as a radio; and some tie sound to environmental interactions, such as the sound of an object falling to the ground.
More often, however, virtual reality uses a mixture of audio methods – some sounds coming from an object in the world and other sounds appearing to be imagined or to exist outside the virtual world.
This approach comes with some issues, however, which can contribute to a feeling of being disconnected from the virtual world or, in some cases, create a feeling of motion sickness, in which the signals sent to the brain from the user's eyes and ears do not quite match.
If the sound does not reflect the everyday audio experience – a bird chirping should become louder as the VR user approaches it, for example – or if music is overlaid with no apparent source or reason for the sound, this can lead to an inauthentic and confusing listening experience.
Research at the University of York has proposed a new sound design strategy based on objects that would naturally produce sound in the “real world”; this means that the entire audio environment in virtual reality would be exclusively built around objects that produce sound in a specific position in space, reflecting how people experience sound in real time.
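The core idea – every sound in the scene belongs to an object with a position in space, and loudness falls off with distance to the listener, as in the bird-chirping example above – can be sketched roughly as follows. This is an illustrative sketch only; the class and parameter names are hypothetical and do not come from the researchers' implementation.

```python
import math

class SoundSource:
    """A hypothetical positioned sound source: audio is always tied
    to an object at a specific location, never 'disembodied'."""

    def __init__(self, name, position, base_gain=1.0):
        self.name = name
        self.position = position  # (x, y, z) in metres
        self.base_gain = base_gain

    def gain_at(self, listener_position, ref_distance=1.0):
        # Inverse-distance attenuation, clamped at the reference
        # distance so gain never exceeds base_gain.
        d = math.dist(self.position, listener_position)
        return self.base_gain * ref_distance / max(d, ref_distance)

bird = SoundSource("bird", (10.0, 2.0, 0.0))
far_gain = bird.gain_at((0.0, 0.0, 0.0))   # listener far away: quieter
near_gain = bird.gain_at((9.0, 2.0, 0.0))  # listener close: louder
```

An audio engine would update each source's gain (and spatialisation) every frame as the listener moves, so the chirp naturally swells as the user approaches.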
Constantin Popp, research associate on the XR Stories project at the AudioLab at the University of York, said: “The difference with previous methods is that we aim to apply this thinking to all sounds and music that form part of an experience, not just certain parts.
“This thinking allows us to make each sound-producing object interactive and responsive to the user, which enhances the user experience. This way we can also reuse existing data in the VR game, such as the speed of an object, and apply it to sound.
“For example, when a user drops an object, the game would play a corresponding sound indicating how fast and where the object fell. This strategy improves credibility and narrative depth.”
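The idea in the quote above – reusing physics data the game already tracks, such as an object's impact speed, to drive its sound – might look something like this. The function name and the specific gain/pitch mappings are illustrative assumptions, not the researchers' actual code.

```python
def impact_sound_params(speed_mps, max_speed=10.0):
    """Map an object's impact speed (m/s) to a playback gain in [0, 1]
    and a pitch factor, so faster falls sound louder and brighter.
    A hypothetical mapping for illustration only."""
    # Normalise speed into [0, 1]
    s = min(max(speed_mps, 0.0), max_speed) / max_speed
    gain = s ** 0.5           # perceptually gentler ramp than linear
    pitch = 0.8 + 0.4 * s     # harder impacts play slightly higher
    return gain, pitch

# A fast fall produces a louder, brighter impact than a gentle drop:
loud_gain, high_pitch = impact_sound_params(8.0)
soft_gain, low_pitch = impact_sound_params(1.0)
```

The same per-object state (position, velocity, material) that the physics engine maintains can thus parameterise the audio, which is what makes each sound-producing object interactive and responsive to the user.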
The researchers added, however, that this methodology requires more computing power in the VR headset than many current approaches, and would also lengthen the VR development phase and increase overall cost, so more work may be required to make the process faster and cheaper.
Professor Damian Murphy, from the University of York and director of XR Stories, said: “We believe that by applying this method we can make virtual reality experiences more ‘real’, as it increases the responsiveness of the environment to be more in tune with that of our everyday world.
“Better audio-visual design also reduces the risk of the user feeling ‘spaced out’ or suffering the effects of motion sickness – a common problem for some people in VR – by keeping what the ears hear in sync with the images the eyes send to the brain.
“This approach to audio in VR can provide a more unified, natural and authentic experience.”
The research is supported by the XR Stories project of the Arts and Humanities Research Council (AHRC) under the Creative Industries Cluster Programme, and published in the journal Applied Sciences.