[sldev] "directed audio"? [was: Vote for voice protocol
documentation]
Soft Linden
soft at lindenlab.com
Fri Aug 17 08:36:52 PDT 2007
On 8/16/07, Matthew Dowd <matthew.dowd at hotmail.co.uk> wrote:
>
> E.g., the easiest way to do spatial audio of an avatar talking is to
> regard the avatar itself as the sound source - but, as mentioned, this
> means the sound intensity is the same regardless of which way the
> avatar is facing.
>
> A more accurate way is to have the avatar's mouth as the sound
> source, and include the avatar's head and body in the occlusion
> calculations for the audio.
>
> Similarly, you could simply have the avatar as the sound receiver
> or, more accurately, have the avatar's ears as the receivers, with
> the avatar's head and body included in the audio occlusion.
>
> However, I doubt that DiamondWare et al. go to quite these lengths;
> they probably just use some simple heuristics based on the
> direction things are facing, much as graphics engines handle
> directional/projective light sources.
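For concreteness, here is a minimal sketch (in Python) of the kind of
facing heuristic Matthew describes, treating the speaker like a spot
light with a cardioid-style falloff. The function name and the
min_gain floor are made up for illustration; nothing here reflects
what Vivox or DiamondWare actually do:

    import math

    def directional_gain(source_pos, source_facing, listener_pos,
                         min_gain=0.4):
        """Scale source loudness by how directly it faces the listener.

        Cardioid-style pattern: full gain straight ahead, falling to
        min_gain directly behind, analogous to a spot-light falloff.
        Positions are (x, y, z) tuples; source_facing is a unit vector.
        """
        to_listener = tuple(l - s for l, s in zip(listener_pos, source_pos))
        dist = math.sqrt(sum(c * c for c in to_listener)) or 1.0
        to_listener = tuple(c / dist for c in to_listener)
        # cos_angle is 1.0 facing the listener, -1.0 facing away.
        cos_angle = sum(f * t for f, t in zip(source_facing, to_listener))
        # Remap [-1, 1] -> [min_gain, 1.0].
        return min_gain + (1.0 - min_gain) * (cos_angle + 1.0) / 2.0

    # Listener directly in front of the speaker: full gain (1.0).
    print(directional_gain((0, 0, 0), (1, 0, 0), (5, 0, 0)))
    # Listener directly behind the speaker: the min_gain floor (0.4).
    print(directional_gain((0, 0, 0), (1, 0, 0), (-5, 0, 0)))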
If you can find good sources of information, I expect we can pass on
any suggested improvements to see if Vivox and all want to act on
them. I know the facing of the speaker is important, as is whether the
speaker is in front of or behind the listener (the shape of your ears
reduces how well you hear higher frequencies behind you). I don't know
which aspects are most critical, however.
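To illustrate the front/back effect, one toy approach is to run the
source through a low-pass filter whose cutoff drops as the source
moves behind the listener. This is only a sketch under assumed
parameters; the cutoff values and function name are placeholders, not
anything Vivox is known to use:

    import math

    def behind_listener_lowpass(samples, sample_rate, cos_front,
                                front_cutoff=16000.0, rear_cutoff=4000.0):
        """Crude front/back cue: damp highs when the source is behind.

        cos_front is the cosine of the angle between the listener's
        facing direction and the direction to the source (1.0 = dead
        ahead, -1.0 = directly behind). The cutoffs are placeholder
        values, not measured ones.
        """
        # Interpolate cutoff from front_cutoff (ahead) to rear_cutoff
        # (behind).
        t = (1.0 - cos_front) / 2.0
        cutoff = front_cutoff + (rear_cutoff - front_cutoff) * t
        # One-pole low-pass filter coefficient.
        alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff / sample_rate)
        out, y = [], 0.0
        for x in samples:
            y += alpha * (x - y)
            out.append(y)
        return out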
Googling "Head Related Transform Functions" should get you to a lot of
related pages, and if you have access to SIGGraph notes (many schools
do), there have a been a number of related SIGSound talks given there.
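As a first taste of what those references cover, the classic starting
point is interaural time and level differences. The sketch below uses
Woodworth's spherical-head approximation for the time delay; the
level-difference curve is a made-up toy, and a real HRTF is a
measured, frequency-dependent filter pair rather than two numbers:

    import math

    SPEED_OF_SOUND = 343.0   # m/s
    HEAD_RADIUS = 0.0875     # m, rough average adult head

    def interaural_cues(azimuth_rad):
        """Very rough ITD/ILD for a source at a given azimuth.

        Returns (time delay in seconds, level difference in dB) using
        Woodworth's ITD approximation, r/c * (theta + sin(theta)),
        for frontal azimuths. The 6 dB ILD scale is illustrative only.
        """
        theta = max(-math.pi / 2, min(math.pi / 2, azimuth_rad))
        itd = HEAD_RADIUS / SPEED_OF_SOUND * (theta + math.sin(theta))
        # Toy ILD: up to ~6 dB louder in the nearer ear.
        ild_db = 6.0 * math.sin(theta)
        return itd, ild_db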