[sldev] Call for requirements: ISO MPEG-V (mpeg for virtual
worlds) Deadline: July 16, 2008
Matthew Underwood
sakkaku at gmail.com
Sun May 25 17:11:04 PDT 2008
On Sun, May 25, 2008 at 7:56 PM, Argent Stonecutter
<secret.argent at gmail.com> wrote:
> On 2008-05-25, at 18:16, Lawson English wrote:
>>
>> Argent Stonecutter wrote:
>>>
>>> On 2008-05-25, at 15:32, Lawson English wrote:
>>>>
>>>> Sure, but MPEG-4 streams audio/video anyway, and I doubt that there's
>>>> more data coming into SL than a streaming QT movie, MPEG-4 or not.
>>>
>
>>> Um, I'd think about that again if I were you. Steady state, maybe not,
>>> but right after you rez? Bandwidth requirements in SL when you're moving
>>> around from area to area and sim to sim are pretty high these days.
>>
>
>> For a new sim yes,
>
> Which means "for the start of any new stream".
>
>>> But then you wouldn't be encapsulating the game engine, which is what I
>>> thought you were talking about. Now you're talking about a standardized
>>> scheme that the SL environment could be mapped into, which is what I was
>>> presenting as a more credible alternative.
>
>> Sorry. I hadn't even dreamed of encapsulating the game engine itself,
>> which is probably why I misunderstood half of what you were talking about.
>
> Isn't that the example you used?
>
>> At the least, you need a reasonably good way to send data back to the
>> server. I think I was wrong: there's no real 2-way pipe in MPEG-4 streaming.
>> There's bound to be "resend lost data" signals, but we need something to
>> send, at the least, keypresses for avatar movement, and preferably ways of
>> uploading data for baked textures, locally built items, etc.
>
>
> In other words, just run Second Life.
>
> What I was talking about was having a stripped down version of the viewer
> that takes care of the presence and "avatar is still alive" stuff, and then
> repackages what comes in from the sim in the new streaming format, with
> feedback coming in through a separate control channel if you're not just
> replaying an existing sequence of events. Moving around in this world would
> mostly be changing the viewpoint of the camera, and sending controls back
> via HTTP to update the game's-eye view of the camera position when needed.
>
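The stripped-down-viewer scheme quoted above could be sketched roughly like this. This is a Python sketch under my own assumptions, not anything from the thread: the movement threshold, the JSON message shape, and the idea of only posting a camera update once the camera has moved far enough are all hypothetical illustrations of "sending controls back via HTTP ... when needed".

```python
# Sketch of the "separate control channel" idea: the viewer mostly moves
# a local camera, and only posts a position update back over HTTP when
# the camera has moved far enough to matter. The threshold value and
# message format here are hypothetical, not part of any real protocol.
import json
import math

UPDATE_THRESHOLD = 0.5  # metres of camera movement before we bother the server


def camera_update_needed(last_sent, current, threshold=UPDATE_THRESHOLD):
    """Return True once the camera has moved beyond the threshold."""
    return math.dist(last_sent, current) >= threshold


def make_control_message(current):
    """Serialize a camera update for the (hypothetical) HTTP control channel."""
    x, y, z = current
    return json.dumps({"type": "camera", "pos": [x, y, z]})
```

The point of the threshold check is the "when needed" part: small local camera motion never touches the network, so the control channel stays far cheaper than re-streaming the view.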
Wouldn't it be more efficient to send the textures/models (or links to
the assets) and keyframes to the client to render? If you make the
format agnostic enough, most clients should be able to render it easily
(of course, "easy" is relative). The bandwidth requirement would be
high initially as all the assets are loaded, but after that you would
only need a few dozen KB/sec to update motions/deletions/additions/etc.
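The keyframe/delta idea in that last paragraph can be sketched as a simple scene diff. This is a minimal Python sketch under my own assumptions: representing a scene as a mapping from object id to position is a stand-in for real asset/model data, and the diff categories just mirror the "motions/deletions/additions" wording above.

```python
# Sketch of the keyframe/delta scheme: ship the full scene once (the
# expensive initial load), then per update send only what changed.
# The scene representation (object id -> position tuple) is a
# hypothetical stand-in for real asset and transform data.

def scene_delta(previous, current):
    """Diff two scene snapshots into a small per-frame update."""
    added = {oid: pos for oid, pos in current.items() if oid not in previous}
    removed = [oid for oid in previous if oid not in current]
    moved = {oid: pos for oid, pos in current.items()
             if oid in previous and previous[oid] != pos}
    return {"added": added, "removed": removed, "moved": moved}
```

Since most frames move a handful of objects and add or delete almost nothing, a delta like this stays tiny compared with the initial asset download, which is where the "few dozen KB/sec steady state" intuition comes from.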