

There are two reasons why 3D audio is nowhere near where people expect it to be. First, sound waves reflect and bounce around an environment many times. At the two points in space where your ears are located, you'd need to calculate all the interference and delays from those multiple reflections, plus the influence of the physical properties of the objects they bounced off. It's like asking why we don't have real-time ray tracers for graphics yet: we just don't have that kind of computational power to dedicate. The second problem is that audio location is worked out by your brain from the delay between the sound reaching each of your ears, along with the additional influence of the shape of your head, shoulders, and pinnae, which are unique to you and only you. To give an example of how much you rely on your own anatomy, someone who has lost an ear will experience a period of poor sound localisation while their brain adjusts to the differences, and will never fully recover front/back resolution (because your ears face to the sides and forwards). There's a really easy way to demonstrate to yourself just how strong the effect is: put some music on, grab hold of your ears, and pull them around while listening to how dramatically the frequency content shifts. The University of Surrey has used high-resolution 3D capture to customise HRIRs to individuals and demonstrated an improvement over a generic model, but it's still a good way behind the spatial resolution experienced in real life. TL;DR: convincing 3D audio is at least a decade away, and in the meantime we're stuck with some basic approximations that sound pretty fake.

This is actually pretty sad to me. I did some research on this: we have directional sound, but we don't have truly 3D audio. To achieve truly 3D audio you need a dedicated sound card. Most of us probably know a bit about this, but as graphics cards blew up, sound quality (directionality) kind of went to the back burner. Most chips now include onboard sound integration, but they are not powerful enough to do truly immersive audio. You have to be able to detect where a sound started, and all the objects it will encounter before reaching the person who hears it, which would become pretty taxing on a system. So while most games do have directional audio, it's not very well done; the reason real soundscapes are so immersive is that the sound is actually "calculated" for how you're going to hear it, every second. Think of a game like ARMA: a sniper is 900 meters out and shoots, and that sound has to travel outward in 360 degrees. Immediately the sound is affected by the shooter's body and the ground he was on when he pulled the trigger. Then, as it travels outward, it hits trees, plants, buildings. And besides his target, you have 4 other people who hear it. Even if each person is facing directly toward the shooter and they're all 500 m away, they are all going to have a completely different perspective on that sound depending on what was between them and the shooter, even if all that was between them was a change in elevation. That was a bit of a rant, but it does bug me that most people think we have incredible game audio now, when really it's shitty.
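To put a number on the "delay between the sound reaching each of your ears" point: a minimal sketch using Woodworth's classic spherical-head approximation (the head radius and the formula are textbook values, not anything a game engine is claimed to use):

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 °C
HEAD_RADIUS = 0.0875     # m, a commonly used average head radius

def itd_woodworth(azimuth_deg):
    """Interaural time difference (seconds) for a distant source at the
    given azimuth, using Woodworth's spherical-head model:
    ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source dead ahead gives zero difference, which is part of why
# front/back confusion happens; a source 90 degrees to one side gives
# the maximum delay of roughly 0.65 milliseconds.
```

That maximum of well under a millisecond is the entire timing cue your brain works with, which is why the pinna-shaped filtering matters so much on top of it.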
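The "basic approximations" most games actually ship can be sketched in a few lines: a single straight ray from source to listener, giving an arrival delay from the speed of sound and an inverse-distance falloff, with occlusion reduced to one hand-tuned scalar. Everything here (the function, the clamp, the occlusion factor) is illustrative, not any particular engine's model:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def propagation(listener, source, occlusion=1.0):
    """Very rough single-ray model: arrival delay from the speed of
    sound plus 1/r amplitude falloff, scaled by a hand-tuned occlusion
    factor (1.0 = clear line of sight). A truly 3D model would instead
    trace every reflection path, which is the expensive part."""
    dx, dy, dz = (s - l for s, l in zip(source, listener))
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    delay = distance / SPEED_OF_SOUND      # seconds until the sound arrives
    gain = occlusion / max(distance, 1.0)  # 1/r falloff, clamped near the source
    return delay, gain

# The sniper at 900 m: every listener hears the shot about 2.6 seconds
# after the muzzle flash, and each one's gain differs with what's in the way.
delay, gain = propagation((0.0, 0.0, 0.0), (900.0, 0.0, 0.0))
```

Note what this throws away: the shooter's body, the ground, every tree and building, and the elevation change between the five listeners all collapse into that one `occlusion` number.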
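Finally, the HRIR work mentioned above boils down to one operation: convolving a dry mono signal with one measured impulse response per ear. A toy sketch with made-up, hand-built impulse responses (real HRIRs are measured per person, which is exactly what the personalisation research is about):

```python
import numpy as np

def spatialise(mono, hrir_left, hrir_right):
    """Binaural rendering in a nutshell: convolve a dry mono signal with a
    head-related impulse response per ear. Personalised 3D audio is 'just'
    this, with HRIRs measured for your own head and pinnae."""
    return np.convolve(mono, hrir_left), np.convolve(mono, hrir_right)

# Toy HRIRs (assumed, not measured): the right ear hears the source
# slightly later and quieter, as if it sat off to the listener's left.
fs = 48_000
click = np.zeros(64)
click[0] = 1.0
hl = np.zeros(32)
hl[0] = 1.0        # left ear: direct, full level
hr = np.zeros(32)
hr[20] = 0.6       # right ear: 20 samples (~0.4 ms) later, attenuated
left, right = spatialise(click, hl, hr)
```

Swap those toy arrays for impulse responses captured from your own ears and the same two convolutions become a personalised renderer; the hard part is the measurement, not the math.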
