2016 options for headtracking with low latency



Since my dream of the Oculus Rift went down in flames (AUD $1000+ is too much for my limited budget), I was wondering about my options for head tracking with very low latency, as I get somewhat nauseous with lag.

 

I have a PS3 Eye and a 3-point IR cap that I used with FreeTrack. It works OK, although it's cumbersome, and I'm also a bit worried about future compatibility since the drivers are a PITA. I wouldn't mind a TrackIR, but at $250-300 it's out of my budget for something I don't use regularly.

 

I also have a 10DOF sensor board and an Arduino available, but I haven't wired them up or played with them yet. Would this work better than the IR cap?

 

Are there ways to do head tracking now with a Kinect-style twin-lens sensor? I wear glasses all the time, so would it work well to track the edges of the frames, which stay a constant shape, as opposed to the changing shape of a human face?


I don't know about the DIY stuff, but TrackIR 5 captures 120 frames per second, and the perceived latency inside DCS mostly depends on the game's frame rate. If the DCS frame rate is above 75 fps it starts to feel more or less latency-free; if it drops to 60 fps or lower it starts to bother me.


The newest version of (TrackHat) OpenTrack allows driving the PS3 Eye at higher rates (up to 189 Hz in 320x240 mode). However, that also means more frames have to be processed, which takes more PC performance and could lower DCS fps. I think it's more of a DCS problem though, since I get little to zero lag in OpenTrack's preview; think of the stuttering/lagging when panning the external view with the mouse, so I guess something is buggy there.

 

However, there are professional motion-capture cameras with built-in image processing and high frame rates/resolutions starting at around $100, if I remember right. I'm not sure whether there's free software for those, though, or whether you'd have to buy something extra.

 

There was an article in the German c't magazine about a guy building his own Jedi "holodeck" (more like a bigger version of SteamVR) where those cameras were mentioned. Sadly you'll have to pay to read the full article: http://www.heise.de/ct/ausgabe/2014-24-Mit-VR-Brille-und-Motion-Capture-Kameras-zur-eigenen-Jedi-Cave-2433093.html

 

The 10DOF sensor and Arduino could also offer almost zero latency, since the Arduino (a Leonardo; I'm not sure about the Mega and Uno) can simply act as a USB mouse or HID joystick, so there would be no impact on the PC's performance. I got myself a sensor and experimented with it, but I gave up, because it's not as easy as reading the sensor data and mapping it to mouse/joystick movement. First you have to use a communication protocol to get the sensor data (such as I2C), which requires a software library on the Arduino. The raw sensor data is also noisy and therefore needs to be filtered, and that's where things get really complicated. I once stumbled across such a filter, but the filter code alone was so long it wouldn't fit into the Arduino's memory together with the I2C library and my own sketch. All I managed to do was get a very shaky 3D cube (an example sketch included with Processing) to spin on two axes; the third axis delivered random values, and I'm not sure whether that was a broken sensor or a bug in the communication protocol.
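
To give an idea of what that "read over I2C, filter, emit HID" pipeline looks like, here is a minimal sketch, assuming a Leonardo-class board and an MPU-6050-style sensor at I2C address 0x68. The register map and scale factor follow the common MPU-6050 examples, and the simple complementary filter, axis mapping and mouse gain are placeholders, so treat it as a starting point rather than tested code:

[code]
// Minimal head tracker sketch (assumptions: MPU-6050 at 0x68, Leonardo-class
// board with native USB HID). Reads raw accel/gyro over I2C, fuses pitch with
// a complementary filter, integrates yaw, and emits relative mouse movement.
#include <Wire.h>
#include <Mouse.h>

const uint8_t MPU = 0x68;
float pitch = 0, yaw = 0;                 // fused angles in degrees
float lastPitch = 0, lastYaw = 0;
unsigned long lastMicros = 0;

int16_t readWord() {                      // two bytes from the I2C buffer, high byte first
  int16_t hi = Wire.read();
  int16_t lo = Wire.read();
  return (hi << 8) | (lo & 0xFF);
}

void setup() {
  Wire.begin();
  Wire.beginTransmission(MPU);
  Wire.write(0x6B);                       // PWR_MGMT_1: wake the sensor up
  Wire.write(0);
  Wire.endTransmission();
  Mouse.begin();
  lastMicros = micros();
}

void loop() {
  // Burst-read 14 bytes starting at 0x3B: accel XYZ, temperature, gyro XYZ.
  Wire.beginTransmission(MPU);
  Wire.write(0x3B);
  Wire.endTransmission(false);
  Wire.requestFrom((int)MPU, 14);
  int16_t ax = readWord();
  int16_t ay = readWord();
  int16_t az = readWord();
  readWord();                             // skip temperature
  readWord();                             // skip gyro X (roll not used for a mouse)
  int16_t gy = readWord();                // pitch rate (depends on how it's mounted)
  int16_t gz = readWord();                // yaw rate

  float dt = (micros() - lastMicros) / 1.0e6;
  lastMicros = micros();

  // Accelerometer pitch (gravity) is noisy but drift-free; the gyro is smooth
  // but drifts, so blend the two - a basic complementary filter.
  float accPitch = atan2(-(float)ax, sqrt((float)ay * ay + (float)az * az)) * 180.0 / PI;
  pitch = 0.98 * (pitch + (gy / 131.0) * dt) + 0.02 * accPitch;
  yaw  += (gz / 131.0) * dt;              // no gravity reference for yaw: it will drift

  // Send the change in angle as relative mouse motion (the gain of 8 is arbitrary).
  Mouse.move((int)((yaw - lastYaw) * 8), (int)((pitch - lastPitch) * 8), 0);
  lastYaw = yaw;
  lastPitch = pitch;
  delay(2);
}
[/code]

Because the board shows up as a plain USB mouse, nothing extra has to run on the PC, which is where the "almost zero latency" comes from.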

 

A good starting point could be to google "arduino wii remote", as that has been done before and there should be sketches for it. A used Wii remote is also cheaper and easier to get hold of than a bare sensor.

 

 

Forgot to mention: for a cheap try at 3DOF you could get an app like Trinus VR and use your smartphone as the sensor, even without a Cardboard, or use a webcam plus FaceTrackNoIR, which doesn't need IR LEDs but tracks your face instead (6DOF).


Edited by tob.s

Unfortunately the Kinect was never able to reach the level of performance offered by TrackIR. The Kinect actually isn't stereoscopic: it uses a single 640x480 infrared camera and a laser diode projected through a special diffraction grating. The grating creates a pseudo-random speckle pattern which, with some fairly involved calculations, can be converted into a depth map. The only decent use of the Kinect is simply using its IR camera to track your IR point model, which isn't any more accurate than a webcam.
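
For a rough idea of what that depth calculation boils down to: both a structured-light system like the Kinect and a true stereo pair ultimately rely on triangulation, where the pixel shift (disparity) of a feature between two views maps to depth. A simplified illustration with made-up numbers (real systems add calibration, subpixel matching and distortion correction on top of this):

[code]
#include <cstdio>

// Idealized depth-from-disparity: two views separated by a baseline see the
// same speckle/feature shifted by 'disparityPx' pixels.
double depthFromDisparity(double focalPx, double baselineM, double disparityPx) {
    if (disparityPx <= 0.0) return 0.0;        // unmatched or "at infinity"
    return focalPx * baselineM / disparityPx;  // Z = f * b / d
}

int main() {
    // Hypothetical numbers: 580 px focal length, 7.5 cm baseline.
    std::printf("disparity 20 px -> %.2f m\n", depthFromDisparity(580.0, 0.075, 20.0));
    std::printf("disparity 40 px -> %.2f m\n", depthFromDisparity(580.0, 0.075, 40.0));
    return 0;
}
[/code]

The inverse relationship is also why depth resolution gets worse with distance: one pixel of disparity covers a much larger depth range at a few metres than right in front of the camera.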

 

One technology I'm really interested in is the Intel RealSense depth camera, which uses a similar speckle-pattern-to-depth-map process but has stereoscopic infrared cameras and an HD RGB camera. It's already supported by OpenTrack, and I'm considering purchasing one.

 

See my post on the RealSense here:

http://forums.eagle.ru/showthread.php?t=157606


A 10DOF sensor board + Arduino works better than the IR cap. Only 3 axes are available at the moment, but you can make pseudo 6DOF.

 

 

 

That looks very impressive; how is the response/latency? Is there a video showing the person in front of the screen?

 

Could you explain what pseudo 6DOF is? How is translation generated from an IMU?

 

I have my doubts about the 10DOF solution being better than tracking fixed points of known geometry.

 

Position has to be estimated by double-integrating acceleration, and it's hard to overcome the drawbacks of such a system, even with a magnetometer.
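
To illustrate why: even a tiny constant accelerometer bias turns into position error that grows with the square of time once it is integrated twice. A quick back-of-the-envelope simulation (the bias and sample rate are made-up numbers):

[code]
#include <cstdio>

int main() {
    // Simulate a stationary sensor whose accelerometer reads a small constant
    // bias (0.01 m/s^2) instead of exactly zero, sampled at 100 Hz.
    const double bias = 0.01, dt = 0.01;
    double velocity = 0.0, position = 0.0;

    for (int step = 1; step <= 1000; ++step) {   // 10 seconds
        velocity += bias * dt;                   // first integral
        position += velocity * dt;               // second integral
        if (step % 200 == 0)
            std::printf("t = %4.1f s  drift = %.3f m\n", step * dt, position);
    }
    return 0;
}
[/code]

After just 10 seconds the sensor "thinks" it has moved about half a metre without moving at all, which is why optical references or some other drift correction are usually needed for translation.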


That looks very impressive; how is the response/latency? Is there a video showing the person in front of the screen?

You want me to write up instructions? )) Of course, I will try.

 

Could you explain what pseudo 6DOF is? How is translation generated from an IMU?

http://2.firepic.org/2/images/2016-02/16/a96agxp6nryv.jpg

looks good

 


 

 


Edited by Econ

A word regarding TrackIR.

 

I've had TrackIR 5 for quite a while now, using it with the bundled reflective clip on a hat. For the longest time I stayed away from the TrackClip Pro, which emits IR and puts the sensor in passive mode, because of the highly negative reviews about its durability.

 

Well, let me tell you, I wish I had never listened. It may not be the most durable, but the difference in responsiveness and accuracy is out of this world. To me there isn't even a comparison.

 

As a result I was also able to turn the smoothing down considerably, and it's rock stable, with no noticeable lag to me, and just an amazing experience all around.

 

My advice is don't even hesitate: go with TrackIR 5 plus the clip; it's worth every penny as far as I'm concerned. Getting a profile set up just right takes a bit of getting used to, but once you've got it, it's amazing.

 

I can't ever see a need for the Oculus. Giving up the ability to see the stick, throttle, and keyboard is a terrible trade to me, not to mention the price. One note, though: as someone else mentioned, it feels less great at a low frame rate. I have no frame rate problems so it feels great, but if you're consistently below 60 fps you might not enjoy it as much, given your sensitivity to lag. The only thing I can say is upgrade if you need to and you can; it's well worth it for the enjoyment. I also just got VAICOM going, and it adds even more immersion and works unbelievably well once you get past the learning curve of the commands.


Edited by FeistyLemur

  • 2 weeks later...
You want me to write up instructions? )) Of course, I will try.

 

 

http://2.firepic.org/2/images/2016-02/16/a96agxp6nryv.jpg

looks good

 

 

 

Hi Econ,

 

I wonder how you got the x, y, z positioning to work.

 

Currently I'm playing around with a Nano, a 9DOF MPU6050 board and the OpenTrack hatire plugin. Roll, pitch and yaw work flawlessly, and the acceleration data looks good as well, but what about position? I could calculate it with integrals...
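
One idea I've seen for faking translation without integrating acceleration (just a sketch of the geometry, not necessarily how Econ does it): assume the head pivots about a point roughly at the base of the neck, so each rotation implies a small, repeatable shift of the viewpoint that can be passed to the game as pseudo X/Y/Z:

[code]
#include <cmath>
#include <cstdio>

const double PI = 3.14159265358979;

struct Translation { double forward, vertical; };   // metres

// Pseudo translation from pitch alone, assuming the eyes sit 'pivotDistM'
// above a fixed neck pivot. Nodding down moves the viewpoint forward and
// slightly down; the same idea works for yaw/roll with a lateral offset.
Translation pseudoTranslationFromPitch(double pitchDeg, double pivotDistM) {
    double p = pitchDeg * PI / 180.0;
    return { pivotDistM * std::sin(p),               // lean forward/back
             pivotDistM * (std::cos(p) - 1.0) };     // slight drop as you pitch
}

int main() {
    // Example: eyes about 20 cm from the neck pivot, pitching down 30 degrees.
    Translation t = pseudoTranslationFromPitch(30.0, 0.20);
    std::printf("forward %.3f m, vertical %.3f m\n", t.forward, t.vertical);
    return 0;
}
[/code]

It's obviously not real positional tracking - you can't lean without turning your head - but it gives leaning-style cues from orientation data alone, without the drift of double integration.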

 

Edit: The link to the jpeg does not work


Edited by f4l0
