[opensource-dev] J2C fast decoder

Philippe (Merov) Bossut merov at lindenlab.com
Mon Sep 13 21:15:52 PDT 2010


Very interesting discussion, though it seems that folks conflate several
things when talking about "textures" in general terms: there's a world of
difference between a repetitive 64x64 texture used to tile a brick wall and
a photographic-quality 1024x1024 texture. The former could certainly benefit
from being stored and sent around as PNG (low format overhead; lossy
compression will actually make things worse on such a small image no matter
what, and lossless JPEG will end up being bigger than PNG), while the latter
will benefit tremendously from the wavelet compression provided by JPEG2000.
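To make that concrete, here's a minimal sketch (plain Python, with zlib standing in for PNG's lossless DEFLATE stage; the pixel values are made up) showing why lossless compression thrives on a repetitive tile but does almost nothing for high-entropy, photograph-like data:

```python
import os
import zlib

# A repetitive 64x64 RGB "brick" tile: one 64-pixel row repeated 64 times.
row = b"\x80\x40\x20" * 64            # 192 bytes, one row of raw RGB pixels
tile = row * 64                       # 12288 bytes total

# Photograph-like data of the same size: high-entropy bytes as a stand-in.
photo = os.urandom(64 * 64 * 3)       # 12288 bytes

tile_c = zlib.compress(tile, 9)
photo_c = zlib.compress(photo, 9)

# The repetitive tile shrinks dramatically; the noisy data barely shrinks.
# That's why a DEFLATE-based format (PNG) wins on the former while
# transform coding (DCT or wavelets) is needed for the latter.
print(len(tile), len(tile_c), len(photo_c))
```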

I won't go into the advantages of wavelet compression for photographic
images, as there is a *huge* literature on the subject with loads and loads
of data proving the point. One can argue between "normal" JPEG (using DCT)
and JPEG2000 (using wavelets), but there's absolutely no contest between
JPEG (whichever flavor) and PNG for photographic images in terms of quality
at high or even moderate compression ratios.

On the subject of access per resolution (aka "discard levels" in SL
parlance), it is of great interest, as some folks mentioned, when viewing a
texture from a distance. No matter what the transport protocol is,
exchanging a 32x32 will be faster than exchanging a 1024x1024 (with RGBA
pixel values...). The viewer is able to use partially populated mipmaps
(which are, in effect, subres pyramids themselves, as there isn't much
difference between a "discard level" and a LOD...) and, therefore, use
partially downloaded and decompressed images. Note that with JPEG2000, when
asking for a new level, one does not download the whole full-res 32 bits
per pixel data but the wavelet coefficients for that level, which are,
roughly speaking, encoding the difference with the previous level. That
translates into huge compression benefits, in slowly changing areas in
particular.
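That incremental scheme can be illustrated with a one-level Haar transform (a sketch of the idea only; JPEG2000 actually uses the 5/3 or 9/7 filter banks): each level stores averages plus difference coefficients, and the differences are near zero wherever the image changes slowly, which is exactly what compresses so well:

```python
# One-level 1-D Haar transform: split a signal into a half-resolution
# "average" band (the lower discard level) and a "difference" band
# (the wavelet coefficients needed to refine it back up one level).

def haar_forward(signal):
    avgs = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    diffs = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return avgs, diffs

def haar_inverse(avgs, diffs):
    out = []
    for m, d in zip(avgs, diffs):
        out += [m + d, m - d]   # perfectly recover both samples
    return out

smooth = [10, 10, 11, 11, 12, 12, 13, 13]    # a slowly changing area
avgs, diffs = haar_forward(smooth)
print(avgs)     # the lower-resolution level: [10.0, 11.0, 12.0, 13.0]
print(diffs)    # near-zero refinement coefficients: [0.0, 0.0, 0.0, 0.0]
assert haar_inverse(avgs, diffs) == smooth   # transform is invertible
```

Downloading the next discard level means fetching only those small difference coefficients, not the full-resolution pixels again.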

One wavelet property, though, that our viewer does not take advantage of is
the spatial random access property of the JPEG2000 format. That would
allow, for instance, requesting and downloading only a portion of the
full-res data when needed. That is advantageous in cases where only a
portion of the whole image is mapped on a prim. I have no data, though, on
how frequent a case that is in SL; it would be interesting to know how much
of a texture is truly displayed on average. There may be an interesting ore
of performance to mine there.
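As an illustration of what spatial random access buys (a sketch only; real JPEG2000 region access goes through tiles and precincts in the codestream, and the sizes here are hypothetical): if a 1024x1024 texture were coded in 128x128 tiles and only a 256x256 corner is mapped on the prim, the viewer would need to fetch just a fraction of the tiles:

```python
import math

def tiles_for_region(img_w, img_h, tile, x0, y0, x1, y1):
    """Return the (col, row) indices of tiles intersecting the region,
    clamped to the image bounds."""
    cols = range(max(0, x0 // tile), min(math.ceil(img_w / tile),
                                         math.ceil(x1 / tile)))
    rows = range(max(0, y0 // tile), min(math.ceil(img_h / tile),
                                         math.ceil(y1 / tile)))
    return [(c, r) for r in rows for c in cols]

needed = tiles_for_region(1024, 1024, 128, 0, 0, 256, 256)
total = (1024 // 128) ** 2
print(len(needed), "of", total, "tiles")   # 4 of 64
```

Only the intersecting tiles' codestream packets would have to cross the wire, on top of the per-resolution savings already discussed.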

All that, though, needs to be backed by data. The preliminary
performance-gathering toolbox I talked about is a first step in that
direction.

- Merov
