[sldev] Packing algorithm behind LAND_LAYER_CODE part of
a LayerData message
John Hurliman
jhurliman at wsu.edu
Wed Mar 14 15:30:55 PDT 2007
Tleiades wrote:
> Hi
>
> I've been trying to figure out the algorithm behind packing and
> decoding the LAND_LAYER_CODE part of a LayerData message. I'd like to
> get the broad picture of how that data is encoded, but I seem to get
> lost, not seeing the forest, because all the trees get in the way.
>
> Is there someone who can provide me with an overall picture of the
> algorithm or point me to some URLs explaining the algorithm?
>
Again I'll defer to the libsecondlife source code: TerrainManager.cs
has a functional encoder and decoder for land packets. The Heightmap
example will log an avatar in to the grid and show the heightmap and
water for that simulator, and the OpenSim project uses the
libsecondlife encoder to hook up custom terrain generators and send
terrain data to clients. I don't have any good documentation written,
just a scattering of notes that were sent to me for the implementation.

The basic idea is that there is a header for the whole packet that says
what type of data this is (yes, it's a duplicate of the other field in
the packet), the patch size (the current viewer implementation only
supports 16x16 and 32x32, although I've never actually seen a 32x32
patch), and a few other things. Then each patch has a small header,
followed by DCT-compressed data (similar to JPEG but using non-standard
coefficients).
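To make the layout concrete, here is a rough sketch of those headers as
C# structs. The field names and widths are my best recollection of the
libsecondlife implementation, not an authoritative spec, so verify them
against TerrainManager.cs:

// Rough shape of the land LayerData headers; field names and widths
// are from memory, not a spec.
struct GroupHeader
{
    public ushort Stride;    // 16 bits; one of the values we've been guessing at
    public byte PatchSize;   // 8 bits; 16 or 32
    public byte Type;        // 8 bits; duplicates the layer type field in the packet
}

struct PatchHeader
{
    public byte QuantWBits;  // 8 bits; quantization and word-size information
    public float DCOffset;   // full 32-bit float
    public ushort Range;     // 16 bits
    public int PatchX;       // patch coordinates, packed together into
    public int PatchY;       //   10 bits for 16x16 patches
}
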
The decoder is in much better shape than the encoder, which makes some
guesses at values for header fields and hardcodes a few things; that is
probably not optimal.

The trickiest part about this whole thing is that it uses what LL calls
"bitpacking", where CPU cycles are thrown to the wind in favor of
smashing the data into the most compact form you can get. That is
probably for the better, since these packets constantly send dynamic
wind and cloud data to the client. Bitpacking means you can pack and
unpack integer and floating point values using an arbitrary number of
bits, although in the LayerData code I think floating point values
always get their full 32 bits. So, for example, you might pack an
integer I using three bits, then an integer J using eight bits, then a
float F:
I I I J J J J J | J J J F F F F F | F F F F F F F F | F F F F F F F F |
F F F F F F F F | F F F . . . . .
In the example above it took five bytes and some change: the current
byte position in the bit packer is five (zero based) and the current
bit position is three (zero based). There is no metadata in there, so
the unpacker has to know to unpack a three-bit integer, an eight-bit
integer, and then a 32-bit float. The entire data field in the packet
is one big bitpacked stream, so the end of a DCT patch and the
beginning of the next header may fall in the middle of a byte.
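Here is a minimal bitpacker sketch in C# that reproduces the example
above. This is not the actual libsecondlife BitPack class; the names
are illustrative, and bits are written most-significant-first to match
the diagram:

using System;

class SimpleBitPacker
{
    private readonly byte[] buffer;
    public int BytePos { get; private set; }  // zero based byte position
    public int BitPos { get; private set; }   // zero based bit position, 0..7

    public SimpleBitPacker(int capacity) { buffer = new byte[capacity]; }

    // Pack the low 'count' bits of 'value', most significant bit first
    public void PackBits(uint value, int count)
    {
        for (int i = count - 1; i >= 0; i--)
        {
            if (((value >> i) & 1) != 0)
                buffer[BytePos] |= (byte)(0x80 >> BitPos);
            BitPos++;
            if (BitPos == 8) { BitPos = 0; BytePos++; }
        }
    }

    // In the LayerData code floats always get their full 32 bits
    public void PackFloat(float value)
    {
        byte[] bytes = BitConverter.GetBytes(value);
        PackBits(BitConverter.ToUInt32(bytes, 0), 32);
    }
}

class Example
{
    static void Main()
    {
        var packer = new SimpleBitPacker(16);
        packer.PackBits(5, 3);    // integer I in three bits
        packer.PackBits(200, 8);  // integer J in eight bits
        packer.PackFloat(42.5f);  // float F in 32 bits

        // 3 + 8 + 32 = 43 bits: byte position 5, bit position 3
        Console.WriteLine("byte {0}, bit {1}", packer.BytePos, packer.BitPos);
    }
}

An unpacker is the mirror image: it reads the same bit widths in the
same order, which is why both sides have to agree on the layout ahead
of time.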
The viewer and libsecondlife code both achieve the same thing, but they
work a bit differently: libsecondlife builds all of the
compression/decompression tables once at startup, while the viewer
rebuilds these tables every time it decodes a patch, and there are a
few other minor optimizations. Maybe someone could backport them to the
viewer and submit a patch? I would use the existing define (#define
16_and_32_only or something like that) to create two separate sets of
tables suffixed with 16 and 32, and fill them the first time that code
is executed, in place of the code that initializes them each time; see
the sketch below.
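For what it's worth, the build-once pattern could look something like
this. The names here are hypothetical (the real tables live in
TerrainManager.cs and the viewer's patch code), but it shows the idea
of paying the table-building cost once per patch size instead of once
per patch:

using System;

// Hypothetical build-once tables for both supported patch sizes; the
// static initializers run a single time, when the class is first used.
static class PatchTables
{
    public static readonly float[] CosineTable16 = BuildCosineTable(16);
    public static readonly float[] CosineTable32 = BuildCosineTable(32);

    // DCT basis values that the viewer currently rebuilds for every patch
    static float[] BuildCosineTable(int size)
    {
        var table = new float[size * size];
        for (int u = 0; u < size; u++)
            for (int n = 0; n < size; n++)
                table[u * size + n] =
                    (float)Math.Cos((2.0 * n + 1.0) * u * Math.PI / (2.0 * size));
        return table;
    }
}
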
If someone wants to document this on the official wiki, that would be
great, and maybe we could get more knowledgeable people to fill in the
missing bits that we've been guessing at, such as the stride and when
to use which wbits values.
John Hurliman