[opensource-dev] Script Memory Limits UI

Marine Kelley marinekelley at gmail.com
Mon Mar 8 23:54:45 PST 2010


Change requires work. In this instance that work is unnecessary, unwanted, and
uncalled for. We have to adapt to handle a part of the task that LL was
supposed to do themselves. Of course this is a hard job, allocating memory
dynamically in an environment like this. Perhaps it is impossible. But I have
yet to hear a Linden say, in all honesty, "sorry guys, we can't do it as
initially planned, we have to ask you to participate by tailoring your
scripts, because we can't do it from our side". What I have heard so far is
"you will be provided tools to adapt to the change that is taking place". The
two mean exactly the same thing, but a little honesty does not hurt. This
additional workload was not planned; it is a shift of work that we were never
supposed to take on in the first place, with no compensation, so I would have
liked at least a little explanation.

I am not against the limits. Of course scripts need limits, and they have
actually always had some; they were just hidden, unmanaged, and had to be
discovered through trial and error. As I said, "I am ok with that". What I am
not ok with is the fact that I have to review every single script of mine to
find out how much memory it needs in the worst case. Finding said worst case
is already a challenge in itself. I have thousands of scripts in-world, not
even counting the ones in my products that I will have to issue an update
for. Just wait a little and you'll see merchants start to advertise how small
the memory footprint of their scripts is compared to the next competitor's,
regardless of how crappy said scripts are.
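For what it's worth, one way to approximate that worst-case figure is to have
the script itself track a high-water mark from its busiest event handlers.
A minimal sketch, assuming the llGetUsedMemory/llGetFreeMemory calls (part of
the same Mono tooling effort) are available on the region:

```lsl
// Sketch: track the high-water mark of this script's own memory usage.
// llGetUsedMemory() reports what a Mono script currently uses; calling it
// from the busiest handlers approximates the worst case over time.
integer peak_used;

checkPeak()
{
    integer used = llGetUsedMemory();
    if (used > peak_used) peak_used = used;
}

default
{
    touch_start(integer n)
    {
        checkPeak();
        llOwnerSay("Peak so far: " + (string)peak_used + " bytes, free: "
                   + (string)llGetFreeMemory());
    }
}
```

This only measures the paths you actually exercise, of course, which is
exactly why finding the true worst case is a challenge in itself.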

Anyway, this is a moot point now. As far as I understand it, LL has decided
that their implementation will require us to adapt our scripts ourselves,
exactly as I described.
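Concretely, the adaptation being described would amount to every script
declaring its own ceiling up front. A hypothetical sketch; the function name
llSetMemoryLimit is an assumption about what the eventual API might look
like, not a documented call at the time of this post:

```lsl
// Hypothetical sketch of the per-script tailoring described above: each
// script declares, up front, the most memory it will ever need.
// llSetMemoryLimit is assumed here; the UI would then report this figure
// as the script's usage, whether or not it is actually consumed.
default
{
    state_entry()
    {
        // Worst case determined by hand for this particular script.
        llSetMemoryLimit(24576);    // ask for 24K instead of the default 64K
    }
}
```

Multiply that one-line change by thousands of existing scripts and the scale
of the workload becomes clear.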



On 9 March 2010, at 02:40, Lear Cale <lear.cale at gmail.com> wrote:

> It would be nice if everything were free, too.
>
> The issue is memory *allocation*.  If a script only uses 16K but is
> allocated 64K, that 64K counts against the server's actual memory
> allocation limit.
>
> So, cool, wouldn't it be nice to only allocate what is actually
> requested?  Well that implies rewriting a lot of code to use memory
> differently, and requires frequent reallocation of memory (as the
> needs grow or shrink).  There is a real cost to this, in terms of
> programming effort and in terms of runtime costs.
>
> Until now, script memory has seemed to be a free lunch.  Well, the
> free lunch is over, and we'll have to deal with it.  I won't believe
> that it's feasible to simply "report the actual memory used" until
> someone who really understands Mono memory allocation and our
> scripting language's arrays (the main memory users) says that there
> actually is a feasible solution along these lines.
>
> Change causes upset.  This will be an issue.  IMHO, though, failing to
> address the problem would be worse.
>
> Regards
> Jeff
>
> On Sun, Mar 7, 2010 at 6:20 AM, Marine Kelley  
> <marinekelley at gmail.com> wrote:
>> Well, we have two mutually exclusive solutions here.
>>
>> Either Mono scripts are given a hard memory limit that we (the
>> scripters) can change within the scripts, with all the overhead work
>> that implies (i.e. modifying hundreds of scripts before issuing an
>> update, and having to know upfront exactly how much memory will be
>> taken), which means that with regard to the script memory usage UI,
>> the script will be reported as using exactly as much as the limit it
>> has requested, whether it really uses it or not. This gives wasted
>> memory and false information.
>>
>> Or, Mono scripts are given a hard memory limit that we cannot change,
>> and they report exactly as many bytes as they use at any time. But we
>> shouldn't be able to change the limit ourselves, because it wouldn't
>> make sense to do so: it would only restrain us if we set less than
>> 64K, and waste memory if we set more than 64K.
>>
>> In both cases, whether or not the script crashes when reaching the
>> limit is a separate question.
>>
>> I seriously, and I mean seriously, think that choosing the first
>> option is going to hurt the established scripters very badly, and
>> therefore the grid as a whole. To me, scripts should report exactly
>> as much memory as they use, not more, and should not require the
>> scripters to modify them to report something that the sims could
>> compute more accurately anyway.
>>
>> Of course it is tempting to tell the scripters "you can now decide
>> how much memory to allow, and that way you are certain it will
>> report the amount you have set", just as it is tempting to shift the
>> workload of allocating script memory onto the scripters, since LL
>> can't seem to do it.
>>
>> Remember, we are now going to have limits on a service that didn't
>> have them before. For the same price. All for the sake of stabilizing
>> the grid. OK for me. This will already hurt scripters, who will have
>> to adapt bad scripts. But now we are told we will have to adapt good
>> scripts as well! I repeat, this is unacceptable.
>>
>> Marine
>>
>>
>> On 7 March 2010 03:02, Frans <mrfrans at gmail.com> wrote:
>>>
>>> As for dynamic vs fixed memory usage: of course it would make sense
>>> to have dynamic memory usage, but I haven't seen a response yet on
>>> how to solve the problem that Kelly described, about scripts
>>> suddenly running out of available memory when they fill up lists
>>> with data, etc., and breaking because of it. Or is this considered
>>> not to be a big problem?
>>
>>
>> _______________________________________________
>> Policies and (un)subscribe information available here:
>> http://wiki.secondlife.com/wiki/OpenSource-Dev
>> Please read the policies before posting to keep unmoderated posting
>> privileges
>>
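The failure mode Frans and Kelly describe, a list growing until the script
hits its ceiling and dies, can at least be guarded against inside the script
itself. A sketch, checking headroom with the long-standing llGetFreeMemory
call before each append (the 4K margin is an arbitrary choice for
illustration):

```lsl
// Sketch: grow a list only while comfortable headroom remains, so the
// script degrades gracefully instead of crashing at its memory limit.
list log_entries;

append(string item)
{
    if (llGetFreeMemory() < 4096)                       // safety margin
    {
        log_entries = llDeleteSubList(log_entries, 0, 0); // drop oldest
    }
    log_entries += [item];
}
```

Whether dropping data silently is acceptable depends entirely on the script,
which is part of why the question keeps coming back to the scripters.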

