[opensource-dev] Openjpeg/KDU the cold hard metrics

Robin Cornelius robin.cornelius at gmail.com
Thu Sep 23 02:02:18 PDT 2010


On Thu, Sep 23, 2010 at 3:40 AM, Sheet Spotter <sheet.spotter at gmail.com> wrote:
> There may be another option for obtaining a common set of J2C files for
> comparison.
>
> Viewer 2 installs 784 JPEG-2000 files (*.j2c) in the "local_assets" folder
> under the main install folder (e.g., "C:\Program Files\SecondLifeViewer2").
>

Doh! Good call. We could set the default path for the test to use those
files. +1 on that idea.

> The "j2k_metric" test harness decoded all 784 files from the "local_assets"
> folder without errors. The OpenJPEG 1.3.0 library was 3.7 times slower than
> the KDU library that comes with Viewer 2.1.1.208043 (the current release
> version).

That's significantly different from my latest results, but I need to
validate against that texture set. I'm expecting we will see large
differences in relative performance from CPU to CPU. I'll put up a
Google doc that anyone can add to, so we can all (if you want) record
our individual results against this set.

>
> I have been unsuccessful at implementing the additional functions to make
> the OpenJPEG v2 interface compatible with the earlier v1.3.0 interface. The
> test application aborted with buffer overrun errors. Oopsie!

Did you see https://jira.secondlife.com/browse/SNOW-361 ? I used that
code to interface to OpenJPEG 2. You do need to build OpenJPEG with a
compile flag, USE_OPJ_DEPRECATED, to expose two functions that have
been deprecated.
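
For reference, this is roughly how the flag works, if I remember right:
the deprecated declarations in openjpeg.h are guarded by an #ifdef, so
the define has to be visible both when the library is built and when
your own code includes the header. A minimal sketch:

    /* Sketch only: exposing OpenJPEG v2's deprecated entry points.
     * USE_OPJ_DEPRECATED must be defined before openjpeg.h is pulled in
     * (or passed on the compiler command line, e.g. -DUSE_OPJ_DEPRECATED),
     * and the OpenJPEG library itself needs to be built with the same
     * define so the symbols are actually compiled in. */
    #define USE_OPJ_DEPRECATED
    #include <openjpeg.h>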

> There was one small surprise. Only two threads were in use for both the
> OpenJPEG v1.3.0 and KDU tests. I expected KDU to use all eight available
> threads.

I'm not overly concerned about threading, for this reason: JPEG2000
images are often gigabytes or terabytes in size; NASA and others use
them for planet surveys and all kinds of things, and multi-core support
is really valuable there and clearly speeds up the total process. In SL
we are decoding lots of small images, so my argument is that KDU using
all 4/8/however many cores on a single image gives little real-world
advantage over 4/8/however many threads decoding separate images in
parallel, and for the end user the effect will be very similar
(assuming similar single-thread decode performance).

In fact, depending on how KDU is written, it might even carry a
performance penalty if it spawns threads for each decode, as starting
threads has a cost. That cost is worth paying for a single very large
image, but could be too much for many small images as the thread
start-up costs add up. I suspect the KDU API has an option to enable or
disable multi-core decoding, so maybe LL have some control over this
and can optimise it?
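
To illustrate what I mean, here's a rough sketch (nothing to do with
the actual viewer code; decode_one_texture() is just a hypothetical
stand-in for a single-threaded OpenJPEG or KDU decode call): a small
pool of worker threads, each pulling whole textures off a shared queue
and decoding them independently.

    /* Rough sketch, not viewer code: a fixed pool of worker threads
     * each decodes whole textures independently, so all cores stay
     * busy even though each individual decode is single-threaded.
     * decode_one_texture() is a hypothetical stand-in for an
     * OpenJPEG/KDU decode call. */
    #include <pthread.h>
    #include <stdio.h>

    #define NUM_WORKERS  4
    #define NUM_TEXTURES 16

    static const char *textures[NUM_TEXTURES];
    static int next_texture = 0;
    static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

    static void decode_one_texture(const char *path)
    {
        /* hypothetical single-threaded decode of one small texture */
        printf("decoding %s\n", path);
    }

    static void *worker(void *arg)
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&queue_lock);
            int i = (next_texture < NUM_TEXTURES) ? next_texture++ : -1;
            pthread_mutex_unlock(&queue_lock);
            if (i < 0)
                break;
            decode_one_texture(textures[i]);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NUM_WORKERS];
        for (int i = 0; i < NUM_TEXTURES; i++)
            textures[i] = "texture.j2c";    /* placeholder paths */
        for (int i = 0; i < NUM_WORKERS; i++)
            pthread_create(&threads[i], NULL, worker, NULL);
        for (int i = 0; i < NUM_WORKERS; i++)
            pthread_join(threads[i], NULL);
        return 0;
    }

The point of the fixed pool is that the thread start-up cost is paid
once, not once per texture, which is exactly the overhead I'm worried
about with lots of small images.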

Robin

