[sldev] [AWG] Why the LLSD notation format should be thrown out

Gary Wardell gwardell at gwsystems.co.il
Wed Oct 17 20:06:54 PDT 2007


If it's going into a spec, it should certainly be marked as either "deprecated" or "obsolete".

Gary

> -----Original Message-----
> From: sldev-bounces at lists.secondlife.com
> [mailto:sldev-bounces at lists.secondlife.com] On Behalf Of John Hurliman
> Sent: Wed, October 17, 2007 10:56 PM
> To: Second Life Developer Mailing List
> Subject: Re: [sldev] [AWG] Why the LLSD notation format should be
> thrown out
> 
> 
> Right, so there should never be a need to deserialize notation format
> since no machine should ever have to read it (aside from support for
> the current login system, which will soon become legacy). I don't
> understand the purpose of putting a serialize-only format into a spec,
> but as long as it won't be necessary in the future that's fine. We're
> working on unit tests for LLSD, and I'm going to skip notation format
> since there is no real way to write them without both serialization
> and deserialization support.
> 
> 
> Phoenix wrote:
> > I think the moral of the story really is: don't use notation for
> > machine->machine exchanges. We currently do use it in some
> > circumstances, but I don't believe anyone is planning on using it
> > for any new services anywhere.
> >
> >
> > On 2007-10-17, at 19:13, John Hurliman wrote:
> >> Why the LLSD notation format should be thrown out:
> >>
> >> * It mixes strings and raw binary data. One of the two binary
> >> serializations looks like this: b(13)"thisisatest", where
> >> thisisatest is raw binary data. Notice that it encloses the data
> >> with quotes, even though it is raw binary and not an escaped
> >> string. It also includes the two quotes in the byte length even
> >> though they are not part of the original data (so in code it looks
> >> something like int len = myByteArray.Length + 2;). The entire rest
> >> of the format uses strings, so a UUID is 32 characters plus the
> >> hyphens instead of 16 bytes. A parser has to either constantly
> >> convert small byte arrays into strings and parse the strings, or
> >> convert the entire thing to a string and convert the binary parts
> >> back to byte arrays. (A short sketch of this quirk follows the
> >> list.)
> >>
> >> * It puts implementation-specific details into the protocol. The
> >> only purpose of notation format is to provide something that is
> >> human readable. While this may be useful for debugging, there is no
> >> reason two separate machines need to exchange data in a human
> >> readable format. If you wanted, the XML serialization of LLSD is
> >> perfectly readable for anything except binary data, and a local
> >> pseudo-markup can be used to create a human readable format (for
> >> example, in libsecondlife we originally used an ASCII-art tree
> >> structure). Forcing LLSD implementations to agree on this format
> >> makes implementation and unit testing more tedious, and means it
> >> will sneak its way into the protocol in places it should not be,
> >> such as how the current XML-RPC login exchange uses small bits of
> >> it. (The two text serializations are compared after the list.)
> >>
> >> * It is the most difficult of the three formats to implement in
> >> code. The binary format is very straightforward to implement and
> >> very efficient (speed-wise) to parse, and the XML format is easily
> >> implemented by piggybacking on top of an existing XML library. The
> >> notation format involves the most branching of code paths in the
> >> parser and has the most formatting options for the serializer,
> >> along with the aforementioned issues of converting back and forth
> >> between binary and string data. (A sketch of that branching follows
> >> the list.)
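> >>
> >> To make the first point concrete, here is a rough Python sketch of
> >> the length quirk (the function name is mine; it illustrates the
> >> behavior described above, not the actual serializer):
> >>
> >> def notation_binary(data):
> >>     # The declared length covers the two enclosing quote characters
> >>     # as well as the payload, so 11 bytes are announced as 13.
> >>     length = len(data) + 2
> >>     return b'b(' + str(length).encode('ascii') + b')"' + data + b'"'
> >>
> >> print(notation_binary(b'thisisatest'))   # -> b(13)"thisisatest"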
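> >>
> >> For the second point, here is the same small map written by hand in
> >> notation and in the XML serialization; both literals are written
> >> from memory and are illustrative only, not copied from the spec:
> >>
> >> notation = "{'name':'Test Object','count':i3}"
> >> xml = ("<llsd><map><key>name</key><string>Test Object</string>"
> >>        "<key>count</key><integer>3</integer></map></llsd>")
> >> print(notation)
> >> print(xml)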
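> >>
> >> And for the third point, a deliberately incomplete Python sketch of
> >> the first-character dispatch a notation parser ends up doing (the
> >> helpers and the subset of prefixes handled are my own choices; a
> >> real parser needs branches for maps, arrays, reals, dates, URIs,
> >> booleans, and the binary encodings as well):
> >>
> >> import uuid
> >>
> >> def parse_value(text, pos=0):
> >>     c = text[pos]
> >>     if c == 'i':                 # integer, e.g. i42
> >>         end = pos + 1
> >>         while end < len(text) and text[end] in '-0123456789':
> >>             end += 1
> >>         return int(text[pos + 1:end]), end
> >>     if c == 'u':                 # uuid, 36 characters after the prefix
> >>         return uuid.UUID(text[pos + 1:pos + 37]), pos + 37
> >>     if c == "'":                 # single-quoted string (escapes ignored)
> >>         end = text.index("'", pos + 1)
> >>         return text[pos + 1:end], end + 1
> >>     # ... many more branches: maps, arrays, reals, other string
> >>     # forms, binary, dates, URIs, booleans, undef ...
> >>     raise NotImplementedError('prefix %r not handled in this sketch' % c)
> >>
> >> print(parse_value("i42"))
> >> print(parse_value("u3c115e51-04f4-523c-9fa6-98aff1034730"))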
> >>
> >> John Hurliman


