[sldev] Anyone here with OpenCV experience?

Moriz Gupte moriz.gupte at gmail.com
Thu May 21 14:36:31 PDT 2009


Hello,
For non-immersive VR work, I have seen a few webcam applications that
allow one to map trackers to the control keys of a game
(http://www.camspace.com). I was not sure whether they share their API,
but I just checked: they do provide an evaluation copy of their SDK, a Lua
scripting reference, and even a link for emulation authoring here:
http://wiki.camspace.com.

Are we still designing for folks sitting in front of the screen? Would eye
tracking not be more appropriate than head tracking (head position remains
fixed most of the time in my case)? Would a linear mapping of head movement
into the virtual environment be useful for informing peers about the areas
of interest of the person whose head is being tracked? Or are we looking
more at head movements to drive avatar navigation, or camera position?
There are interesting usability issues to resolve here: having head control
of any one of these things will impact the time needed to select and
interact with targets unless additional modal controls are introduced. Head
tracking and eye tracking are definitely *more* problematic than in
immersive 3D.
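To make the linear-mapping question concrete, here is a toy sketch of
mapping a tracked head displacement linearly onto a camera offset. The
function name, gain, and dead-zone values are made-up illustrative choices,
not anything from CamSpace or Second Life:

```python
# Toy sketch: linear head-position -> camera-offset mapping.
# GAIN and DEAD_ZONE are made-up illustrative values.
GAIN = 5.0        # virtual metres of camera motion per metre of head motion
DEAD_ZONE = 0.01  # ignore tracker jitter below 1 cm

def head_to_camera_offset(head_pos, rest_pos):
    """Return a (dx, dy, dz) camera offset for a tracked head position,
    relative to a calibrated rest position."""
    deltas = (h - r for h, r in zip(head_pos, rest_pos))
    return tuple(GAIN * d if abs(d) > DEAD_ZONE else 0.0 for d in deltas)

# Head 5 cm to the right of the rest position -> camera shifts about 25 cm.
offset = head_to_camera_offset((0.05, 0.0, 0.0), (0.0, 0.0, 0.0))
```

A dead zone like this matters in practice because raw webcam tracking
jitters, and a purely linear map would transmit that jitter straight into
the camera.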
R
*Apologies if this post got duplicated; I had to repost because I used a
name that is not acceptable to the list.

On Thu, May 21, 2009 at 3:29 PM, Tigro Spottystripes <
tigrospottystripes at gmail.com> wrote:

> From what I remember of an old version of that program, you can change
> the positions where it expects to find the dots in 3D as long as you
> follow a simple rule (e.g. for 4 dots they can't be coplanar; I'm not
> sure about the requirements for 3 dots), so a generic pattern for the
> positions of features on the face should work with minimal calibration.
>
>
> Jan Ciger wrote:
> > Tigro Spottystripes wrote:
> > > I'm talking about using the 6DOF calculation algorithm, but inputting
> > > 3 or 4 dots that were acquired a different way, say the eyes and the
> > > nose identified by another algorithm.
> >
> > Yes? You probably do not realize that those points need to be in a
> > specific spatial relationship. The original algorithm works by fitting
> > a model of the rig with known dimensions to the 2D image. If you change
> > the size (everyone's face is different), it won't work anymore. So you
> > cannot just feed it 4 arbitrary points acquired from somewhere else.
> >
> > Otherwise you would need at least two cameras in a calibrated rig (like
> > that Minoru webcam) to be able to track anything more than 2DOF.
> >
> > Regards,
> >
> > Jan
>
> _______________________________________________
> Policies and (un)subscribe information available here:
> http://wiki.secondlife.com/wiki/SLDev
> Please read the policies before posting to keep unmoderated posting
> privileges
>
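Jan's point about the model needing known dimensions can be seen with a toy
pinhole projection: a face scaled up and moved proportionally farther away
produces exactly the same 2D image, so a single camera cannot recover pose
from points whose true spatial relationship is unknown. A minimal sketch,
where the focal length and the "eyes and nose" coordinates are made-up
illustrative values:

```python
import numpy as np

F = 800.0  # made-up focal length in pixels

def project(points_3d):
    # Simple pinhole projection: (x, y, z) -> (F*x/z, F*y/z).
    p = np.asarray(points_3d, dtype=float)
    return p[:, :2] * (F / p[:, 2:3])

# Made-up "face" model: left eye, right eye, nose tip, 2 m from the camera.
model = np.array([[-0.03,  0.00, 2.0],
                  [ 0.03,  0.00, 2.0],
                  [ 0.00, -0.04, 2.0]])

# A face 1.5x larger, pushed 1.5x farther away: scaling every coordinate
# (including depth) by the same factor leaves x/z and y/z unchanged,
# so the projected image points are identical.
bigger_farther_face = model * 1.5
same = np.allclose(project(model), project(bigger_farther_face))
```

This is exactly the size/distance ambiguity: without a rig of known
dimensions (or a second calibrated camera), the scaled-up face is
indistinguishable from the original.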



-- 
Rameshsharma Ramloll PhD, Research Assistant Professor, Idaho State
University, Pocatello. Tel: 208-282-5333
More info at http://tr.im/RRamloll