[sldev] Script/Parcel/Memory Limits - Memory Limit Configuration

Michael Schlenker schlenk at uni-oldenburg.de
Sat Dec 19 05:15:57 PST 2009


Simplicity has its value, but as Einstein said: As simple as possible, but not simpler.

Setting a fixed limit per avatar/parcel makes things easily analyzable and safe, but obviously wastes a huge amount of resources on reserves. It is the perfect solution if you need 100% reliability and can afford the reserve memory to back it up for 'normal' use.
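The cost of those reserves is easy to put numbers on. Here is a back-of-the-envelope sketch in Python; every figure in it is made up purely for illustration, not taken from any real Second Life region:

```python
# Hypothetical numbers: what a fixed per-parcel script reserve costs.
REGION_SCRIPT_BUDGET_MB = 512                        # script memory per region (assumed)
PARCELS = 16
FIXED_LIMIT_MB = REGION_SCRIPT_BUDGET_MB / PARCELS   # 32 MB guaranteed to each parcel
TYPICAL_USE_MB = 6                                   # what an average parcel actually uses (assumed)

reserved = FIXED_LIMIT_MB * PARCELS
used = TYPICAL_USE_MB * PARCELS
print(f"reserved {reserved:.0f} MB, typically used {used} MB "
      f"({100 * used / reserved:.0f}% utilization)")
```

With these invented numbers the region holds back 512 MB to guarantee the limit but typically sees only 96 MB in use, i.e. under 20% utilization of the reserve. That idle headroom is exactly the cost of the 100%-reliability guarantee.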

It would be really good if LL could provide some insight into the memory use of the servers in general: how much memory is used by scripts, how much is needed for physics, for avatar state, for textures, inventory etc. Script memory might be an easy target, because it's very easy to implement a trivial control on VM memory limits, but is it the right target?

LL probably does not yet know what kind of limits would be sane, because they simply do not have good profiling data about it yet. It takes some time to track and collect the right data.

So, in fact there are a few issues here:
- LL wants to cut cost and increase utilization of its hardware
- Residents want to keep at least the same experience they have now, probably a better one
- Scripters want a reliable and working base upon which they can build

So, can we eat three cakes and keep all of them?

I think LL is doing a good thing by providing some tools to profile script costs. I'm not sure whether those tools will be worth using; it would be nice if some Linden could provide a short description of what kind of metrics we can expect (e.g. useless aggregated data with no timeline, as we see now in the debug panel for estate owners, or fine-grained profiling info down to individual functions of our scripts; or maybe a special 'trace' mode for developers who want to analyze their scripts). If your system does as good a job as, say, DTrace on Solaris or OS X, very good; if it does less, let's see how much less...

The second step, providing more efficient script functions for common tasks, is also a great thing. Long overdue. Let's hope you kill some more warts of the language in the process. How about some functions to override the built-in animations with our own, so we do not need silly AOs lagging the world while trying to win the race condition against your built-in anims? Or how about a real timer queue instead of the simple timer event, where you simply register callback functions to be called at some time in the future (e.g. like Tcl's 'after' command, including something like 'after idle' to run code when the simulator isn't really too busy doing other stuff)? Some persistent storage would also be quite nice, as I guess a lot of scripts that need much memory just use it to store data, like logs or messages, and do not really need to have all of it in RAM (or swapped in).
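To make the timer-queue idea concrete, here is a toy Python sketch of the semantics I mean. This is not LSL and not any proposed LL API; the class and method names ('after', 'after_idle') are borrowed from Tcl purely for illustration, and time is simulated in whole ticks:

```python
import heapq
import itertools

class TimerQueue:
    """Toy timer queue in the spirit of Tcl's 'after' command:
    callbacks fire in time order, and 'after idle' callbacks run
    only on ticks where nothing else is due."""

    def __init__(self):
        self._heap = []                 # (due_tick, seq, callback)
        self._idle = []                 # callbacks waiting for a quiet tick
        self._seq = itertools.count()   # tie-breaker so callbacks never compare
        self._now = 0

    def after(self, ticks, callback):
        heapq.heappush(self._heap, (self._now + ticks, next(self._seq), callback))

    def after_idle(self, callback):
        self._idle.append(callback)

    def tick(self):
        """Advance simulated time by one tick and run whatever is due."""
        self._now += 1
        fired = False
        while self._heap and self._heap[0][0] <= self._now:
            _, _, cb = heapq.heappop(self._heap)
            cb()
            fired = True
        if not fired:                   # simulator had a quiet tick: run idle work
            while self._idle:
                self._idle.pop(0)()

log = []
q = TimerQueue()
q.after(2, lambda: log.append("timer-2"))
q.after(1, lambda: log.append("timer-1"))
q.after_idle(lambda: log.append("idle"))
for _ in range(3):
    q.tick()
# Timers fire in registration-time order; the idle callback waits
# until a tick where no timer is due.
```

The point of the design is that many independent delayed actions share one queue and one scheduling decision per tick, instead of every script spinning its own timer event.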

But the third step, memory limits and enforcement, is the part where you need to be really careful not to just create chaos.

Fixed limits are good in general. They make things really reliable and ease planning, which is good for both scripters and LL. But the drop in available capacity in the average case makes them a really ugly solution for residents. It works, but will annoy either a lot of residents or LL's cash balance (if they invest in the reserves needed to handle peak utilization gracefully).

Is there a reason why we only discuss script memory limits? How about textures, which must eat a lot of memory too? And how about memory sharing techniques? If you bitch about resizer scripts in every prim of a 100-prim hair, why not optimize memory sharing for those? If any of those scripts needed more than about 512 bytes of really modifiable memory, that would just be a case for bitching about the bad choice of the Mono runtime. Maybe provide a 'const' keyword or some other way to allow more aggressive sharing of script resources, and the whole case of 'excessive' amounts of identical scripts would nearly vanish, with NO impact on residents or scripters. It might be a bit more complex for the people writing the server code, but it's easier to hire some qualified people to fix the server code than it is to educate a few thousand scripters and a few million users.
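The sharing idea above is just the classic flyweight pattern. Here is a minimal Python sketch of it; none of this reflects how the Mono runtime or LL's servers actually work, and all names (SharedScriptImage, ScriptInstance) are hypothetical:

```python
class SharedScriptImage:
    """One immutable compiled image, interned by source text, so every
    copy of an identical script shares the same object."""
    _cache = {}

    def __new__(cls, source):
        if source not in cls._cache:
            obj = super().__new__(cls)
            obj.source = source
            obj.bytecode = source.encode()  # stand-in for real compilation
            cls._cache[source] = obj
        return cls._cache[source]

class ScriptInstance:
    """Per-prim state: only the small mutable part is private."""
    def __init__(self, source):
        self.image = SharedScriptImage(source)  # shared, read-only
        self.registers = {}                     # tiny private mutable state

resizer = "default { touch_start(integer n) { /* resize */ } }"
prims = [ScriptInstance(resizer) for _ in range(100)]
# All 100 prims point at one shared image; only 'registers' is duplicated.
assert all(p.image is prims[0].image for p in prims)
```

With something like a 'const' marker the runtime could prove which parts of a script are immutable and safe to intern this way, so a hundred identical resizers would cost roughly one script's code plus a hundred tiny state records.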

Michael


Am 18.12.2009 um 19:13 schrieb Kelly Linden:

> I'll be honest.  I just really don't like the dynamic resource limits idea.  It is very neat and interesting in theory, and fun to design and discuss.  However I see a lot of value in knowing all my content will continue to work and knowing what content I can use - In knowing that when I buy/rent/lease land as part of that I am buying/renting/leasing a specific amount of resources.  I hate the idea of *any* of my content only sometimes working.
> 
> I place a high value on simplicity.  I want to trivially understand where I am, how much headroom I have, how close I am to what limits there are.  I don't want to code complex solutions with multiple behaviors based on the state of the region.  And yes, I know people already do this by monitoring sim stats but it would be awesome if they didn't need to.  "normal" residents also need to understand these limits and be able to see where they are.  These limits will affect *everyone*, even if all you do is rent a house and buy content to furnish it.  You will need to know what will and won't work, and buying something that will sometimes work, no matter what the reason is, is just going to be a nightmare.  If I buy a fish tank it needs to always work and always have fish and not have fish that sleep or disappear when my neighbors decide it is chicken shooting time.
> 
> That said, I also understand the usage issues here, which mirror closely the more generic web hosting problems.  Resource usage patterns aren't equal or consistent over time or space, this is obvious and known and is NOT something we are ignoring.  The general solution for web hosting is to over-sell, rely on some rules of averages and be able to move things around to accommodate users.  Doing something similar is certainly a possibility, and one I have pushed for.  It isn't as trivial as just setting higher numbers - we need to adjust and fix our infrastructure to more optimally assign regions to hosts - but it is certainly not impossible, and indeed such infrastructure changes would benefit everyone regardless.
> 
> Dynamic resource limits are just complicated by nature.  They are fluid in some respect, and they change based on time and usage - that is just what it means.  Unfortunately it is that nature that makes it hard to plan around and hard to build content for and hard to understand.  The system we use needs to be as easy as prim limits are now, where you can see the cost of an object and you can see how much you can support.
> 
>  - Kelly
