[sldev] Texture Cache: A summary of The Plan as it stands

Dahlia Trimble dahliatrimble at gmail.com
Thu Jun 12 23:48:34 PDT 2008


Some initial technical thoughts...

I suspect there are quite a few factors that affect the speed of
retrieving raw image data from a hard disk: the operating system and
file systems in use, whether disk compression is in place, the transfer
rate and seek time of the disk hardware, the size and efficiency of the
disk cache, how badly fragmented the drive is, how many other processes
on the machine are competing for disk access... and I'm sure there are
more. You may want to offer easy-to-use end-user options for enabling
and controlling your proposed changes, and do some testing on a few
different machines. I suspect that in a multi-core, multi-threaded
setup you may get equal or better results by storing the JPEG2000 files
as they are currently stored and instead changing the criteria for
deleting cached textures so they aren't downloaded so often.
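
By "changing the criteria" I mean something along the lines of the
sketch below - only evicting textures that haven't been touched for a
while, rather than whenever space is needed. The structure, field
names, and the 30-day figure are placeholders of mine, not anything
from the viewer source:

  #include <cstddef>
  #include <ctime>

  // Minimal bookkeeping a time-based eviction rule would need.
  struct CachedTextureInfo
  {
      std::time_t lastAccess;   // updated on every cache hit
      std::size_t sizeBytes;
  };

  // A texture only becomes a deletion candidate after it has sat
  // unused for maxIdle seconds (30 days here, but tunable).
  bool eligibleForEviction(const CachedTextureInfo& info, std::time_t now)
  {
      const std::time_t maxIdle = 30 * 24 * 60 * 60;
      return (now - info.lastAccess) > maxIdle;
  }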

good luck! :)

On Thu, Jun 12, 2008 at 10:57 PM, Buckaroo Mu <sldev at bitparts.org> wrote:

> Leaving aside the politics and hoping to clear up any misunderstandings,
> I'd like to summarize what I believe the conclusions are, and how I, at
> least, would like to move forward. This is a technical summary, and is
> still open to discussion (of course) - and it's how I plan to attempt to
> fumble my way through the code (wish me luck). It's late, and I don't
> have Visual Studio open to check the real names of data files &
> functions right now, so I'm going from memory - be kind.
>
> CURRENTLY: The texture cache stores the compressed JPEG2000 versions of
> textures, as received from the region/asset server, in two places: the
> index data (UUID & such) and the header of the JPEG2000 data go into a
> flat-file array of structures (texture.cache), while the remaining
> image data is stored in the cache/textures/x/uuid files. When a texture
> is pulled from the cache, the header data is retrieved, the remaining
> bulk of the data is read from the individual texture cache file, and
> the result is decoded and returned to whatever called it as an
> llRawImage object.
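>
> Roughly, the layout I mean is something like this (the struct and field
> names are my own shorthand from memory, not the viewer's real
> identifiers, and 600 is just a stand-in for whatever the header slot
> size actually is):
>
>   #include <cstdint>
>
>   // Roughly what each texture.cache record holds today: the UUID plus
>   // the leading chunk of the *encoded* JPEG2000 stream; the rest of
>   // the encoded stream sits in cache/textures/x/uuid. Every cache hit
>   // therefore still ends in a full JPEG2000 decode.
>   struct EncodedCacheEntry
>   {
>       uint8_t  uuid[16];      // index key
>       uint32_t imageSize;     // total encoded size on disk
>       uint8_t  head[600];     // leading encoded bytes kept inline
>   };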
>
> MY PLAN: When a texture is received from the region/asset server, it
> would first be decoded, and the resulting llRawImage data would then be
> stored: the first x bytes (however many are currently used for the
> JPEG2000 texture headers) go into the texture.cache file, with the
> remaining data stored in the cache/textures/x/uuid files. The index
> could also carry data such as the dimensions of the texture and its
> total size (normally found in the JPEG2000 header, if I'm not
> mistaken). When a texture is pulled from the cache, the "header" data
> is retrieved from texture.cache, with the remainder coming from the
> individual file. No further decoding is necessary. I also plan on
> finding the code that hard-limits the cache to 1 GB and raising that
> limit to as much as 100 GB - although setting it to something smaller
> would still be possible via the GUI with a simple modification of the
> XML file.
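>
> Here's a sketch of the read path I have in mind - again, every name and
> number is illustrative, not the actual cache code, and it assumes the
> index record has grown the dimension fields mentioned above:
>
>   #include <algorithm>
>   #include <cstddef>
>   #include <cstdint>
>   #include <cstdio>
>   #include <cstring>
>   #include <string>
>   #include <vector>
>
>   // Proposed texture.cache record: the dimensions and size that used
>   // to live in the JPEG2000 header, plus the first slice of raw pixels.
>   struct RawCacheEntry
>   {
>       uint8_t  uuid[16];
>       uint16_t width;
>       uint16_t height;
>       uint8_t  components;    // 3 = RGB, 4 = RGBA
>       uint32_t totalSize;     // width * height * components
>       uint8_t  head[600];     // first raw bytes (600 is arbitrary)
>   };
>
>   // Cache hit: copy the inline head, append the body file, done.
>   // No decode step anywhere on this path.
>   std::vector<uint8_t> readCachedRaw(const RawCacheEntry& e,
>                                      const std::string& bodyPath)
>   {
>       std::vector<uint8_t> pixels(e.totalSize);
>       std::size_t inHead =
>           std::min<std::size_t>(sizeof(e.head), pixels.size());
>       std::memcpy(pixels.data(), e.head, inHead);
>       if (pixels.size() > inHead)
>       {
>           // remainder lives in cache/textures/x/uuid
>           FILE* f = std::fopen(bodyPath.c_str(), "rb");
>           if (f)
>           {
>               std::fread(pixels.data() + inHead, 1,
>                          pixels.size() - inHead, f);
>               std::fclose(f);
>           }
>       }
>       return pixels;   // ready to hand straight to the renderer
>   }
>
> The write path would be the mirror image: decode once on receipt, split
> the raw bytes the same way, and never decode again.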
>
> The immediate impact I can see from these changes is a massive increase
> in render speed for previously cached textures - although there may be
> some slowdown for new textures, if the JPEG2000 format is designed to
> allow decoding to proceed from low resolution to higher resolution as
> the file is downloaded the first time. If I'm wrong about that, please
> correct me (as with anything in this post). In other words, the texture
> will not progressively decode as it's downloading - it will have to
> download completely before it is decoded.
>
> This is all very first-stage planning - I presume the cache discard
> algorithm will have to be tweaked for larger caches, and the memory
> footprint required by a larger cache will need checking (I'm not sure
> how much is held in memory). Changes to download prioritization may
> also be necessary to get the most benefit from this, although I
> strongly suspect that moving to HTTP delivery of textures will
> necessitate radical changes in that area as well. Down the line, we
> might look at decoding new textures on the fly, then writing the raw
> image data to the cache once the download is complete. For that
> matter, I don't see why the cache couldn't store both JPEG2000-encoded
> and raw files at the same time, with an image format indicator as part
> of the texture.cache data. This could theoretically pave the way for
> caching textures delivered in other formats - PNG, TGA, etc. - should
> that ever become a potential benefit.
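>
> As a rough sketch of what that format indicator could look like (names
> are again mine, not anything in the existing code):
>
>   #include <cstdint>
>
>   // Tag each index record with its on-disk format and branch on it at
>   // read time; legacy JPEG2000 entries keep working while new entries
>   // are written pre-decoded.
>   enum CachedFormat
>   {
>       FORMAT_JPEG2000 = 0,  // still needs a decode on read
>       FORMAT_RAW,           // pre-decoded pixels, usable as-is
>       FORMAT_PNG,           // room for other delivery formats later
>       FORMAT_TGA
>   };
>
>   struct TaggedCacheEntry
>   {
>       uint8_t  uuid[16];
>       uint8_t  format;      // one of CachedFormat, drives the read path
>       uint32_t totalSize;
>   };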
>
> I'm not sure if I see a benefit from XORing the raw texture data - granted,
> it's much less processor-intensive than decoding is, but it's still an extra
> step, and the system described herein would be just as obfuscated as the
> existing system.
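>
> For reference, this is all I understand the XOR step to be - one pass
> over the buffer with a fixed key on write and the identical pass on
> read (the key value here is arbitrary):
>
>   #include <cstddef>
>   #include <cstdint>
>
>   // Applying this twice restores the original bytes, so the same call
>   // serves for both obfuscating and de-obfuscating the cached data.
>   void xorBuffer(uint8_t* data, std::size_t size, uint8_t key)
>   {
>       for (std::size_t i = 0; i < size; ++i)
>       {
>           data[i] ^= key;
>       }
>   }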
>
> OK - don't be brutal, but please, poke holes - point out where I'm
> wrong, TECHNICALLY. I get enough politics every evening from the news.
> Thanks!
>
> - Not a Linden, and not wanting to be a Linden,
>  Buckaroo Mu
>