[sldev] Request for help with modifying client camera behavior

Kent Quirk (Q Linden) q at lindenlab.com
Tue Aug 26 22:03:30 PDT 2008


Hi, Victor.

Although I work at Linden Lab, I'm not able to give you much help on  
the details of the code you'll have to write, as I haven't looked at  
our camera code at all, nor am I an expert in LSL.

But what I can do is give you some benefit from having done some work  
on camera cinematics in a game I worked on. We had the need for an  
intelligent camera that could track between waypoints, and we also  
needed a repeatable camera that we could fine-tune the motion track for.

I (and others) spent a fair amount of time playing around with the  
system until we got the sorts of results we wanted. What I can  
remember (it's been 8 years now) is this:

For camera motion smoothing:

* We had a desired position of the camera eye, and a desired "lookat"  
point (the target of the camera). The camera had its actual position  
of the eye and the target. It was possible for both the desired  
position and the desired lookat to be moving. Each frame, we would  
move the camera's position a fraction toward its final location, and  
we'd adjust the rotation fractionally toward the desired orientation.

* The camera also had a velocity and an acceleration. We had a maximum  
acceleration, and a maximum velocity. We also limited rotational  
acceleration and rotational velocity. The values for these limits were  
determined by experiment.

* Our position goal was to move the camera each frame 1/16th of the  
distance to the target position, but without violating our  
acceleration and velocity constraints.

* Our rotational goal was to rotate the camera each frame 1/16th (I  
think) of the rotation to the target angle, again without violating  
the speed or acceleration limits.

* For position moves, we would subtract the camera's current position
from its target position for this frame; that difference gave us a
desired velocity vector. If its magnitude exceeded our max velocity,
we'd clamp it.

* We'd then subtract the current velocity vector from that desired
velocity vector to get an acceleration; if that exceeded our limit,
we'd clamp the acceleration vector.

* Finally, we'd add the acceleration vector to the velocity vector to  
get a new velocity vector. We'd add that to position to get the new  
position (note that all of these calculations need to be scaled by
delta t).
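To make the steps above concrete, here's a minimal sketch of that per-frame loop in Python. The limit values, the minimum-distance cutoff, and all the names are made up for illustration (the post says the real limits were tuned by experiment); only position and velocity carry over between frames.

```python
import math

MAX_SPEED = 20.0       # assumed units/sec; real values were found by experiment
MAX_ACCEL = 40.0       # assumed units/sec^2
APPROACH = 1.0 / 16.0  # fraction of the remaining distance to cover per frame
MIN_DIST = 0.01        # "close enough": stop moving the camera entirely

def clamp(vec, limit):
    """Scale a 3-vector down so its magnitude does not exceed limit."""
    mag = math.sqrt(sum(c * c for c in vec))
    if mag <= limit or mag == 0.0:
        return vec
    s = limit / mag
    return tuple(c * s for c in vec)

def step_camera(pos, vel, target, dt):
    """One frame of the smoothing loop: clamp desired velocity, then
    clamp the acceleration needed to reach it, then integrate."""
    offset = tuple(t - p for t, p in zip(target, pos))
    if math.sqrt(sum(c * c for c in offset)) < MIN_DIST:
        return pos, (0.0, 0.0, 0.0)  # within the minimum distance: don't move
    # Desired velocity: cover 1/16 of the remaining distance this frame...
    desired_vel = tuple(c * APPROACH / dt for c in offset)
    # ...but never faster than the speed limit.
    desired_vel = clamp(desired_vel, MAX_SPEED)
    # Acceleration needed to reach that velocity, clamped to its own limit.
    accel = clamp(tuple(d - v for d, v in zip(desired_vel, vel)), MAX_ACCEL)
    # Integrate: new velocity, then new position (both scaled by dt).
    vel = tuple(v + a * dt for v, a in zip(vel, accel))
    pos = tuple(p + v * dt for p, v in zip(pos, vel))
    return pos, vel
```

Called once per frame with the frame's delta t, this eases the camera into position rather than snapping it there.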

A similar, but independent calculation would take place for camera  
rotation. Note that using Euler angles for 3D rotations can cause odd  
tumbling artifacts. There are matrix rotation systems, but all the  
cool kids (including LL) use quaternions, because they can be  
interpolated smoothly. But unless you're going to fly over things, you  
may be able to get away with just dealing with 2D rotations and minor  
amounts of tilt.

Note that in this system, every frame started over, independently; the  
only carryover from frame to frame was the camera's position and  
velocity. We also had a minimum distance -- if you're close (for some  
arbitrary definition of close) you don't even bother to move the camera.

This gave rise to some wonderful swooping and beautiful pans with a  
nice ease into final position -- very much not jerky.


For fixed camera motions that were replayable, we actually took (as  
you suggest) a series of points that corresponded to keyframes, and  
stored those points and orientations in a database. We then added  
timing information for each keyframe.

We then generated a spline curve through all of those points. Our  
problem was that we attempted to do it by creating two splines -- one  
for camera position, and one for camera angle. The trouble was that  
while each waypoint was perfect for one frame, we tended to get rapid  
changes in angle around the waypoints, just at the point when you'd  
like the angle to be most stable.

As I recall, we fixed that problem by kicking the math up a notch --  
but I don't remember the details, because I wasn't the one who coded  
it. We probably also added additional keyframes to control where the  
camera was before and after the key moment.

Once you've done that, you can generate all of the other tween frames,  
and then feed that to some data stream that controls your camera.
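The post doesn't say which spline family was used; one common choice that passes through every keyframe is Catmull-Rom. Here's a sketch (all names hypothetical) that generates tween frames from timed position keyframes. It splines position only; orientation would need its own pass, which is exactly where the rapid angle changes around waypoints described above can creep in.

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom point between p1 and p2 at parameter t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * ((2.0 * b) + (-a + c) * t
               + (2.0 * a - 5.0 * b + 4.0 * c - d) * t2
               + (-a + 3.0 * b - 3.0 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3))

def tween(keyframes, times, fps=30):
    """Generate per-frame camera positions through all keyframes.

    keyframes: list of (x, y, z) waypoints; times: timestamp per keyframe.
    Endpoints are duplicated so the curve reaches the first and last point.
    """
    pts = [keyframes[0]] + list(keyframes) + [keyframes[-1]]
    frames = []
    for i in range(len(keyframes) - 1):
        # Frame count for this segment comes from the keyframe timing info.
        seg_frames = max(1, round((times[i + 1] - times[i]) * fps))
        for f in range(seg_frames):
            t = f / seg_frames
            frames.append(catmull_rom(pts[i], pts[i + 1],
                                      pts[i + 2], pts[i + 3], t))
    frames.append(keyframes[-1])
    return frames
```

The resulting per-frame list is the data stream you'd then feed to whatever controls the camera.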

Good luck!

	Q




On Aug 26, 2008, at 7:13 PM, Vector Hastings wrote:

> Hello all, I'm new to this list, never learned C++, but was once a  
> pretty
> competent programmer on the IBM midrange platform.
>
> Please forgive a long first posting, but I've been trying other  
> avenues (as
> this will make clear) and find myself needing to escalate to client  
> hacking.
> A full treatment of what I'm seeking involves a certain amount of  
> detail,
> which now follows:
>
> I'm trying to launch an ambitious filmmaking project inside Second  
> Life
> (machinima). One of the critical keys to getting a more cinematic  
> look to
> machinima in Second Life is better camera control.
>
> The ultimate goal is an in-world camera that can follow a fully
> reproducible, smooth, and non-colliding path.
>
> There's been a lot of work on that already: the 3dNavigator flycam  
> gives us
> beautiful smooth paths, non-colliding behavior, focal-length  
> control, but
> there is zero reproducibility. There are a number of waypoint  
> systems that
> use a vehicle to move the avatar around, allowing for a somewhat
> reproducible path, but at the cost of collisions which are  
> unacceptable in
> interior spaces, and the effect is often choppy and fine control of  
> camera
> angles is not achieved with current systems (even though I suspect  
> that last
> item is theoretically possible).
>
> I've experimented with making a system of waypoints trying to use the
> llSetCameraParams option to move just the av's camera. I believe  
> this is the
> best middle-ground approach, and will achieve a major step forward in
> machinima cameras, but...
>
> The inherent desire of the camera to return to the filming avatar's  
> default
> camera position means my filming results are so choppy that they're  
> only
> useful for simulating an earthquake. :-\
>
> I know that viewer code is in flux right now, so anything done now  
> is at
> risk of having to be re-developed later. But I am also on deadlines,  
> so I'm
> reaching out for guidance in how to do this. Ending up with an older  
> client
> is workable for me. Hopefully the feature could be ultimately
> re-engineered
> for the more stable 1.21. (I'm also happy to use an RC 1.21 viewer,  
> if it
> becomes usable on the main grid in the next two to three weeks. I  
> was a
> long-time user of the last two RC candidates, doing a lot of our
> pre-production filming with them.)
>
>
> What I think I need is a hack of the client to solve the problem  
> with my
> rig, and there's probably multiple approaches:
>
> 	1. Create a debug setting that tells the camera to apply its native,
> 	manual-mode Cam Transition time and Cam Smoothing to changes in  
> scripted
> camera location.
> 	This sounds relatively simple, since the initial entry and exit into
> scripted camera control obeys those parameters,
> 	but the movement from one location to another does not, which  
> implies to me
> that the camera routines know whether
> 	they are under scripted control or not and have been set to behave  
> in this
> instantaneous relocation mode when switching
> 	from one camera position and/or focus to the next.
>
> 	2. Create a debug setting that turns off av camera target drift  
> altogether.
> There are lag parameters in
> 	llSetCameraParams, but they only seem to be effective when  
> position_locked
> = FALSE -- those lag parameters would
> 	give excellent speed control in the camera -- but right now,
> position_locked = FALSE means the camera pulls toward
> 	the owning avatar like it's attached with a giant rubber band, and  
> the
> effect is a seesawing nightmare.
>
> 	3. Create a flycam recorder. This might be a very general and  
> powerful
> solution, but authoring a camera path with it
> 	would probably be unwieldy. One would want the ability to store a  
> camera
> path log file to disk in a human readable format
> 	so that it could be tweaked into shape. That tweaking could perhaps  
> be done
> with some sort of statistical analysis program,
> 	but I think it would take one dramatically outside the world of SL  
> to do
> it.
>
> 	4. Create a waypoint recorder in the client, that simply lets you  
> hit a
> button to add a camera position/angle to
> 	the list of spots to hit, then reposition, add another point. It  
> would be
> nice to have a speed control, and vital to be
> 	able to save it, and helpful to be able to edit it. This is the  
> basics of
> the scripted waypoint systems in-world,
> 	including the one I'm working on.
>
> I downloaded the code last night, and went to the library to check out
> "teach yourself c++" today.
>
> Somebody have mercy on me, please!
>
> While I will initially use the camera system I develop on my  
> project, as
> soon as we begin releasing episodes, I will also release the camera  
> as a way
> of building good-will and buzz in the project.
>
> If you want to see a little about the project I'm cooking, you can  
> visit
> www.vectorpicturestudios.com.
>
> All the best to you all,
>
> Vector Hastings
>
> _______________________________________________
> Policies and (un)subscribe information available here:
> http://wiki.secondlife.com/wiki/SLDev
> Please read the policies before posting to keep unmoderated posting  
> privileges


