
AMD launches TrueAudio Next for realistic VR audio experiences

by Mark Tyson on 18 August 2016, 11:01

Tags: AMD (NYSE:AMD)


AMD has launched TrueAudio Next as part of its LiquidVR SDK; it is open source and available via the firm's GitHub software repository. This is AMD addressing the same audio issues targeted by Nvidia's VRWorks Audio. In brief, it is a "scalable AMD technology that enables full real-time dynamic physics-based audio acoustics rendering, leveraging the powerful resources of AMD GPU Compute". The tech promises immersive audio to match the visual immersion provided by VR headsets.

For many years audio advancement seemed to stagnate while graphics surged ahead. In a post on the GPUOpen blog, entitled The Importance of Audio in VR, Carl Wakeland, a Fellow Design Engineer at AMD, suggests that this is down to the dominance of the 2D screen, positioned a little in front of the user in computing and gaming. Yes, some FPS games and the like might sprinkle a bit of 3D audio into the mix, where it helps the player gauge the positions of foes, but most of the time 3D audio would be a distraction for anything but minor background noises. Now the growing popularity of the head-mounted display "changes everything," asserts Wakeland.

AMD TrueAudio Next is a significant step towards bringing environmental sound rendering closer to real-world acoustics by modelling the physics of sound propagation – a process known as auralization. Though the modelling is not perfect, it comes very close, thanks to the power of real-time GPU compute enabled by AMD TrueAudio Next combined with other system resources.

Discussing the computing power required by auralization, Wakeland says that two primary algorithms need to be catered for: time-varying convolution (in the audio processing component) and ray-tracing (in the propagation component). He goes on to explain that "On AMD Radeon GPUs, ray-tracing can be accelerated using AMD’s open-source FireRays library, and time-varying real-time convolution can be accelerated with the AMD TrueAudio Next library." The new AMD TrueAudio Next library is a high-performance, OpenCL-based real-time math acceleration library for audio, with special emphasis on GPU compute acceleration.
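As a rough illustration of the convolution workload Wakeland names, the sketch below implements block-based, time-varying convolution in plain Python; the real TrueAudio Next library does this with OpenCL kernels on the GPU, and the function names and crossfade scheme here are invented for the example.

```python
def convolve(signal, ir):
    """Direct convolution of a signal with an impulse response."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

def time_varying_convolve(signal, ir_old, ir_new, block=4):
    """Time-varying convolution: process the signal block by block,
    crossfading from the old impulse response to the new one, as an
    engine would when the listener or scene geometry moves."""
    n_blocks = (len(signal) + block - 1) // block
    total = n_blocks * block + max(len(ir_old), len(ir_new)) - 1
    out = [0.0] * total
    for b in range(n_blocks):
        chunk = signal[b * block:(b + 1) * block]
        w = b / max(n_blocks - 1, 1)   # crossfade weight ramps 0 -> 1
        for k, v in enumerate(convolve(chunk, ir_old)):
            out[b * block + k] += (1.0 - w) * v
        for k, v in enumerate(convolve(chunk, ir_new)):
            out[b * block + k] += w * v
    return out
```

In a real engine the impulse responses themselves come from the propagation (ray-tracing) stage, and the per-block convolutions are FFT-based and run on the GPU rather than nested loops on the CPU.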

Importantly, to ensure audio smoothness to match the VR display, AMD implements its new 'CU Reservation' feature, which reserves a portion of the GPU's compute units (CUs) for audio as necessary, alongside asynchronous compute.
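The idea behind CU reservation can be mimicked on the CPU; the toy sketch below is not the actual OpenCL mechanism (which partitions compute units at the driver level), but it shows the same principle: latency-sensitive audio work gets its own reserved worker, so a burst of graphics jobs can never queue ahead of it. All function names here are made up for the illustration.

```python
from concurrent.futures import ThreadPoolExecutor

# Reserved resource for audio vs. a shared pool for everything else.
audio_pool = ThreadPoolExecutor(max_workers=1)     # the "reserved CU"
graphics_pool = ThreadPoolExecutor(max_workers=4)  # remaining capacity

def render_audio_block(block_id):
    return f"audio block {block_id} mixed"

def render_frame(frame_id):
    return f"frame {frame_id} drawn"

# Even with the graphics pool saturated, the audio submission is
# serviced immediately by the reserved worker.
frames = [graphics_pool.submit(render_frame, i) for i in range(32)]
audio = audio_pool.submit(render_audio_block, 0)
print(audio.result())  # -> audio block 0 mixed
```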



HEXUS Forums :: 5 Comments

Reminds me of the original Operation Flashpoint; I assume they carried over their audio processing to Arma. Sound would travel at the correct speed, so a far-off explosion would be seen before it was heard. Even a sniper shot with a hill close by would echo correctly, so you didn't know where the original shot came from.

EAX was pretty cool, but not as advanced as this. It was my main advantage in CS, using EAX and surround speakers where most people used headphones. Still firmly stick with speakers as a result.
Dashers
EAX was pretty cool, but not as advanced as this. It was my main advantage in CS, using EAX and surround speakers where most people used headphones. Still firmly stick with speakers as a result.

You only have two ears; provided the audio processing is done correctly, you can get all the surround effect you need with stereo headphones. Your hearing judges location based on sound phase and volume, which can be calculated by audio processors.

Not that lots of games actually do that proper stereo processing of course…
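The phase-and-volume cues described above can be sketched numerically. The snippet below uses Woodworth's classic spherical-head formula for the interaural time difference; the head radius and the 0.7 level factor are illustrative assumptions, not values from any game engine.

```python
import math

HEAD_RADIUS_M = 0.0875    # average adult head radius (assumed value)
SPEED_OF_SOUND = 343.0    # m/s in air at room temperature

def itd_seconds(azimuth_rad):
    """Interaural time difference for a distant source, via Woodworth's
    spherical-head approximation (0 = straight ahead, positive = right)."""
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (math.sin(azimuth_rad) + azimuth_rad)

def pan_binaural(mono, azimuth_rad, sample_rate=48000):
    """Crude headphone pan: delay the far ear by the ITD (rounded to
    whole samples) and attenuate it slightly as a toy level cue."""
    delay = round(abs(itd_seconds(azimuth_rad)) * sample_rate)
    near = list(mono) + [0.0] * delay           # ear facing the source
    far = [0.0] * delay + [0.7 * s for s in mono]
    if azimuth_rad >= 0:                        # source on the right
        return far, near                        # (left, right)
    return near, far
```

At 90 degrees the formula gives roughly 0.66 ms, which matches the commonly quoted maximum human ITD; a proper binaural renderer would use full HRTF filtering rather than a single delay and gain.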
That's not quite true. Whilst you only have two ears, unlike eyes they don't receive on a 2D plane. Your ear can detect noise through your head very easily, and the direction is significant.

Whilst virtual surround is often very good, it's simply no match for directional audio.
Lets see if this actually gets used in more games! ;)
Dashers
That's not quite true. Whilst you only have two ears, unlike eyes they don't receive on a 2D plane. Your ear can detect noise through your head very easily, and the direction is significant.

Whilst virtual surround is often very good, it's simply no match for directional audio.

That's irrelevant; the sound still arrives at one of two eardrums, and hence things like distortion caused by transmission through bone can be calculated through signal processing (though I imagine the difference is negligible at best). There are countless demos proving you can get full directional sound from a stereo headset. You also don't have to deal with things like speakers not being aligned as the studio intended, or sound reflections from walls or objects in the room.

However, like I said, the main issue is many games just not doing this, which is why lots of sound cards/headphones allow you to feed them a 5.1/7.1 input and they do the binaural processing instead, usually to good effect.

A similar concept is used in radio beamforming - having two antennas driven through some signal processing allows it to effectively increase the gain in a given direction whilst destructively cancelling it elsewhere.
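That two-antenna idea can be sketched in a few lines. The toy delay-and-sum model below (element count, spacing, and frequency are arbitrary example values chosen to give half-wavelength spacing) shows the constructive/destructive pattern described: full gain in the steering direction, a null off to the side.

```python
import cmath
import math

def array_gain(theta_rad, steer_rad, n_elems=2, spacing_m=0.5,
               freq_hz=343.0, speed=343.0):
    """Normalised magnitude response of an n-element delay-and-sum
    beamformer steered to steer_rad, evaluated at angle theta_rad.
    Defaults: 343 Hz in air -> 1 m wavelength, so 0.5 m spacing is
    the classic half-wavelength layout."""
    k = 2 * math.pi * freq_hz / speed   # wavenumber
    total = 0j
    for m in range(n_elems):
        # Phase of element m's arrival, minus the steering compensation.
        phase = k * m * spacing_m * (math.sin(theta_rad) - math.sin(steer_rad))
        total += cmath.exp(1j * phase)
    return abs(total) / n_elems
```

With two half-wavelength elements steered broadside, arrivals from straight ahead add in phase (gain 1), while arrivals from 90 degrees off-axis are exactly half a cycle apart and cancel (gain 0), which is the directional selectivity being described.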