[opensource-dev] Script memory limit vs server CPU utilization as a key metric

Joel Foner joel.foner at gmail.com
Tue Mar 9 10:49:36 PST 2010


Many apologies if this has been discussed at length in a place that I've
missed...

I'm a bit baffled by the continuing strong focus on the memory utilization of
scripts rather than on the CPU load they place on the host servers. If (maybe
I'm missing an important issue here) the goal is to prevent a resident or a
scripted item from causing performance problems on a region, wouldn't the
relative CPU load imposed by that script be the critical measure?

I understand that if the total active memory footprint on a server grows beyond
its available physical RAM, paging would increase and could create issues. Is
there any objective analysis of servers running the Second Life simulator code
showing that they go into continuous swapping in that case, or is the effect
just occasional "blips" of performance degradation at longer intervals? It
seems to me that sustained excessive CPU load would produce an ongoing low
simulator frame rate, which would be more frustrating than occasional hits
from swapping.
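
For what it's worth, the sustained-CPU case is the one residents can already
observe from inside LSL, at least crudely. Something along these lines (just an
illustration, not a proposal; the 60-second interval is arbitrary) reports
region time dilation and simulator FPS, both of which drop when the simulator
can't keep up:

    // Rough probe of region-level performance, reported to the owner.
    // Time dilation near 1.0 and sim FPS near 45 mean the simulator is
    // keeping up; sustained lower values suggest CPU pressure.
    default
    {
        state_entry()
        {
            llSetTimerEvent(60.0); // check once a minute (arbitrary interval)
        }

        timer()
        {
            float dilation = llGetRegionTimeDilation();
            float fps = llGetRegionFPS();
            llOwnerSay("Time dilation: " + (string)dilation
                + ", sim FPS: " + (string)fps);
        }
    }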

This line of thinking makes me wonder whether a better metric for managing the
user's perception of performance would be script CPU load rather than memory
footprint.
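
To be concrete about what I mean by "script CPU load": even without a dedicated
per-script counter, a script author can at least crudely time a unit of work
with the script clock, much as llGetFreeMemory() gives a crude view of memory.
A rough sketch (do_some_work() is a hypothetical stand-in for whatever the
script actually does, and wall-clock time is only an approximation of CPU
cost):

    // Crudely time one pass of a script's work and report it alongside
    // free script memory. llGetTime() returns seconds since llResetTime().
    do_some_work()
    {
        // Stand-in workload; a real script would do its actual work here.
        integer i;
        float x = 0.0;
        for (i = 0; i < 1000; ++i)
        {
            x += 1.0;
        }
    }

    default
    {
        touch_start(integer total_number)
        {
            llResetTime();
            do_some_work();
            float elapsed = llGetTime();
            llOwnerSay("Work took " + (string)elapsed + " seconds; free memory: "
                + (string)llGetFreeMemory() + " bytes");
        }
    }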

Thanks in advance, and again, if this has already been addressed, please feel
free to point me at the thread so I can read up.

Best regards,

Joel