Headtracking effect - A true 3D perspective "window"



Johnny Chung Lee is pretty awesome. He's done some very simple, powerful things with the Wiimote as a sort of "proof of concept" set of demos.

 

First of all, TrackIR and this setup are identical in terms of hardware: both use an infrared camera and infrared lights/reflectors to monitor your head movements. The effect is also extremely similar, if not the same.

 

The point of this demo is to show how the Wiimote, being a relatively cheap device, has yet another use (check out his other demos) on top of being a Wii controller. He's trying to pave the way for intuitive, simple, powerful interface devices to be adopted en masse. Also, the Wiimote isn't "hobbled"; it's just temporarily hooked up to a PC. RedTiger makes it sound like you have to destroy a Wiimote to get this to work.

 

6DOF TrackIR does exactly what this setup does, if not a little better, since it has 3, 4, or more dots with which to resolve more complex geometries. Since the Wiimote setup only has two light sources, it cannot tell the difference between the light-source array yawing and the array moving farther away (both simply bring the two points closer together from the camera's point of view). However, it would be a simple matter to add more light sources to remove such ambiguities.
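To put rough numbers on that ambiguity, here's a minimal pinhole-camera sketch (my own illustration, not Lee's code; the focal length and baseline are made-up values):

```python
import math

def apparent_separation(baseline_m, distance_m, yaw_deg, focal_px=1000):
    """Pixel separation of two IR points seen by a pinhole camera.
    baseline_m: real spacing of the lights; yaw_deg: rotation of the
    light bar; distance_m: range from the camera to the bar."""
    return focal_px * baseline_m * math.cos(math.radians(yaw_deg)) / distance_m

# Two very different poses give the same measurement:
print(apparent_separation(0.2, 1.0, 60.0))  # bar yawed 60 deg at 1 m  -> ~100 px
print(apparent_separation(0.2, 2.0, 0.0))   # bar facing camera at 2 m -> ~100 px
```

With only two points, both poses are indistinguishable; a third, off-axis point breaks the tie.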

 

Overall the lesson is that you can do headtracking with a Wiimote, but you can't play Wii with a TrackIR.

 

If I wanted to make a really good, competitive TrackIR-like system, I would make a device like the Wiimote (minus the unnecessary electronics like the accelerometers and the console interface), which would basically be a camera. This would basically be FreeTrack with a nice camera. After that I think I would couple it with some wearable screen glasses and the ability to track very large head motions (mostly yaw). That way I could make it 1:1, and you wouldn't care that you weren't facing forward because the screen would be strapped onto your face.

 

I think uniquely identifying the LEDs via different flashing frequencies might help the software un-confuse the points (assuming they go all the way around behind your head, so you can look backward even when the front lights can't be seen). However, flashing is probably problematic unless you get a really high-FPS camera.
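As a toy sketch of what frequency-based identification might look like (entirely my own guess at a scheme, with made-up blink periods; real trackers may do nothing of the sort):

```python
# Each LED blinks with a distinct period, measured in camera frames.
LED_PERIODS = {"front-left": 2, "front-right": 3, "rear": 4}

def classify(observed):
    """Match an observed per-frame on/off history to a known blink period."""
    for name, period in LED_PERIODS.items():
        expected = [1 if i % period == 0 else 0 for i in range(len(observed))]
        if expected == observed:
            return name
    return None

print(classify([1, 0, 1, 0, 1, 0]))  # 'front-left'
# The cost: at 30 FPS the period-2 LED is dark every other frame, so half
# your position samples vanish - hence the need for a high-FPS camera.
```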

 

Perhaps you could use LEDs of different "colors" and a camera that could tell them apart. That would allow unique point identification with a low-FPS camera and smooth motion.


It seems to me that if BS had the FOV (up to nearly 180 deg) programmed as an axis control, you (anyone, not necessarily ED) could pretty easily enable one "cockpit window LCD" (as described in my above post).

 

Method (a sketch of steps 2-3 in code follows the list):

1: FreeTrack receives 3 DOF (x/y/z position only, not rotations) from the camera

2: FreeTrack computes the necessary head direction (the other 3 DOF, not position), based on the head position and the known location/size of the window in the helo, in order to keep the view always pointed at the window.

3: FreeTrack computes the FOV based on how close the head position is to the window/LCD

4: FreeTrack gives all 6 DOF of data (3 received & 3 computed) plus the FOV value to PPJoy

5: PPJoy gives the data to BS in joystick form

6: BS displays the image according to what it's given
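In code, steps 2 and 3 might look something like this (a minimal sketch of the geometry only; the coordinate frame, function name, and window parameters are my own assumptions, not FreeTrack's actual API):

```python
import math

def window_view(head, window_center, window_width):
    """Given the head position (step 1), return the view direction that
    keeps the camera aimed at a fixed cockpit window (step 2) and the
    FOV implied by the head-to-window distance (step 3).
    Positions are (x, y, z) tuples in a cockpit-fixed frame, in meters."""
    dx = window_center[0] - head[0]             # lateral offset to window
    dy = window_center[1] - head[1]             # vertical offset to window
    dz = window_center[2] - head[2]             # depth offset to window
    yaw = math.degrees(math.atan2(dx, dz))      # aim left/right at the window
    pitch = math.degrees(math.atan2(dy, math.hypot(dx, dz)))  # aim up/down
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    fov = math.degrees(2 * math.atan2(window_width / 2, dist))  # closer head -> wider FOV
    return yaw, pitch, fov

# Head centered 60 cm behind a 40 cm-wide window:
print(window_view((0.0, 0.0, 0.0), (0.0, 0.0, 0.6), 0.4))  # (0.0, 0.0, ~36.9)
```

Steps 4-6 are then just plumbing: hand the raw (x, y, z) plus these three computed values to PPJoy as seven axes.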

 

The shortfall (I think we can't currently do this) is that, for a practical cockpit, we would need the ability to define multiple dynamic views and to have joystick axis controls in all 7 DOF (6 + FOV) for each one of them.

 

ED???


You can do this with FreeTrack by mapping inverted x and y measurements to yaw and pitch in the game (e.g. moving to the left rotates the view to the right, moving up rotates the view down) and z to inverted FOV (moving away from the monitor zooms the view in). If you fiddle with sensitivity and limits you can get something with a nice effect.
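A minimal sketch of that mapping (the gains and limits are made-up placeholders for whatever you'd dial into FreeTrack's curves, not its real config):

```python
def position_to_view(x, y, z, yaw_gain=1.5, pitch_gain=1.5, fov_gain=3.0):
    """Map head translation (cm) to view angles (deg) and FOV (deg).
    x: + is right, y: + is up, z: + is away from the monitor."""
    yaw = max(-90.0, min(90.0, -x * yaw_gain))        # move left -> view swings right
    pitch = max(-60.0, min(60.0, -y * pitch_gain))    # move up -> view tilts down
    fov = max(20.0, min(120.0, 80.0 - z * fov_gain))  # move away -> narrower FOV (zoom in)
    return yaw, pitch, fov

print(position_to_view(-10.0, 0.0, 5.0))  # lean left and back: (15.0, -0.0, 65.0)
```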


I was shown this vid last night and am really impressed by how well it works. Wouldn't it be cool if ED could implement this technique in all future releases?

 

 

The immersion factor would be huge, but I guess so would the computation.

 

Cheers!

 

This is what Eagle Dynamics is all about:

Giving you the best immersion!

Some Demos:

 

http://www.youtube.com/watch?v=rn9-y-wOFiA&feature=channel_page

 

http://www.veoh.com/videos/v1627326198keXc3X?source=embed

(The sound really sucks!)

 

http://www.veoh.com/videos/v16445605e7zQQnfX?source=embed


I got a huge boost in enjoyment by assigning my TIR Z axis to both Longitude Camera (the default) and Zoom View, and setting the TIR sensitivity on the Z axis so that less head movement = more axis movement. This way, when I move closer to the screen, the view both zooms in and moves forward in the simulation, which compensates for the lack of resolution and depth perception of computer displays (even at 24" 1920x1200, it's not like the real world :)).
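Roughly what that dual binding does, as a sketch (the gain and ranges are my own stand-ins for the TIR sensitivity settings, not actual values):

```python
def z_to_camera(z_norm, gain=2.0):
    """One Z-axis reading drives two view effects at once.
    z_norm: 0.0 (head at rest) .. 1.0 (leaning fully forward);
    gain > 1 gives 'less head movement = more axis movement'."""
    a = min(1.0, z_norm * gain)   # amplified, clamped axis value
    forward_m = 0.30 * a          # Longitude Camera: lean up to 30 cm forward
    fov_deg = 90.0 - 55.0 * a     # Zoom View: narrow the FOV at the same time
    return forward_m, fov_deg

print(z_to_camera(0.25))  # a quarter-lean already gives (0.15, 62.5)
```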


I just realized the difference between Lee's demo and TrackIR is simply 6DOF vs 3DOF.

 

TrackIR moves the camera position (x, y, z) and the camera vector (yaw, roll, pitch), then displays the result of the camera's view on the screen.

 

Lee's method has 3DOF, which he uses for head positioning (x, y, z). To make the experience less dull, he doesn't just let the view vector stay fixed forward in direction; instead he "auto-aims" the view, constraining it to pass through a fixed, pre-determined point, so the view angle is always the one that looks at that fixed point. If Lee had 3 more DOF to work with (I don't know if the Wiimote tracks more than 2 points in hardware) he could easily duplicate TIR.

 

The reason Lee's method feels more "real" is that he is using 1:1 scaling, whereas TrackIR tends not to. If you set TrackIR to turn the view 10 deg for every 10 deg your head turns, and to move 10 feet for every 10 feet your head moves, then it would look perfectly realistic. The problem, of course, is that your screen only covers a small angle of your vision, so the center of your vision would quickly leave the screen.

 

If you mounted a screen to your face and made TIR perfectly 1:1, then it would seem utterly real from your point of view. Black Shark already supports this, if you have the budget.


If I wanted to make a really good, competitive TrackIR-like system, I would make a device like the Wiimote (minus the unnecessary electronics like the accelerometers and the console interface), which would basically be a camera. This would basically be FreeTrack with a nice camera. After that I think I would couple it with some wearable screen glasses and the ability to track very large head motions (mostly yaw). That way I could make it 1:1, and you wouldn't care that you weren't facing forward because the screen would be strapped onto your face.

 

I think uniquely identifying the LEDs via different flashing frequencies might help the software un-confuse the points (assuming they go all the way around behind your head, so you can look backward even when the front lights can't be seen). However, flashing is probably problematic unless you get a really high-FPS camera.

 

Perhaps you could use LEDs of different "colors" and a camera that could tell them apart. That would allow unique point identification with a low-FPS camera and smooth motion.

 

Try mounting the Wiimote (or TrackIR camera) above your head facing downward, with your 3 (or 4) point source (homemade or TrackClip Pro) pointing up instead of toward your monitor. Swap the axes in the FreeTrack / TrackIR software as needed. You should get plenty of yaw coverage that way, without your head obstructing the camera's view.
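Geometrically, the reason this helps is that yaw becomes an in-plane rotation from the camera's viewpoint, so the points never hide behind your head. A hedged sketch (axis names and conventions are mine, not actual FreeTrack/TrackIR settings):

```python
import math

def overhead_yaw(p_left, p_right):
    """Head yaw from two IR points seen by a camera mounted overhead,
    looking straight down. p_left/p_right: (x, y) image coordinates.
    Because yaw is an in-plane rotation from this viewpoint, the full
    360 degrees stays trackable."""
    dx = p_right[0] - p_left[0]
    dy = p_right[1] - p_left[1]
    return math.degrees(math.atan2(dy, dx))

print(overhead_yaw((0.0, 0.0), (1.0, 0.0)))   # facing the monitor: 0 deg
print(overhead_yaw((0.0, 0.0), (0.0, -1.0)))  # turned 90 deg right: -90 deg
```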


(I don't know if the Wiimote tracks more than 2 points in hardware) he could easily duplicate TIR.

 

Authentic Wiimotes track up to 4 points. Some knock-offs are known to track only 2 points.

 

FreeTrack provides 6DOF with a 3-point model (hat or clip) or the obsolete 4-point model.


Frederf: That's close but not exactly right. The wii-man's program actually changes the field of view as you get closer to the screen (check the football field part). BS does not do this.

 

It is important to understand the difference between zoom and field of view. Think of the BS rendering process like this: an observer looks through a picture frame into the BS world. The observer is a fixed distance from the frame, and the frame rotates around the observer when he looks left/right/up/down. It's as if you walked around in real life with a picture frame held at arm's length: it turns when you turn. That fixed distance between the observer and the frame determines the field of view. In the BS cockpit, leaning forward moves the observer and the frame forward together, thus maintaining the same field of view. Objects on screen become larger (of course, you're closer to them) but your peripheral vision through the frame stays the same. That's just zoom.

 

The wii-man's program simulates the effect of moving the observer closer to the picture frame. When the observer moves closer, he has greater peripheral vision through the frame (again, see the football stadium scene). If you consider your monitor as your picture frame, then moving your head closer to that frame should allow you to see more of the BS world at the edges of your screen (the stuff that is actually in your real-life peripheral vision because your face is so close to the screen). This does not happen. When you (physically) move toward the screen in BS, the in-game frame stays the same distance from the in-game observer, thus maintaining the FOV.
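To make the frame analogy concrete, here's a small worked example (my own illustration, not BS's actual rendering code) of how the observer-to-frame distance alone sets the field of view:

```python
import math

def frame_fov(frame_width, eye_to_frame):
    """Horizontal FOV (deg) seen through a picture frame of the given
    width held at the given distance from the eye (same units)."""
    return math.degrees(2 * math.atan2(frame_width / 2, eye_to_frame))

print(frame_fov(0.5, 0.7))  # frame at arm's length: ~39 deg slice of the world
print(frame_fov(0.5, 0.2))  # lean in close to the same frame: ~103 deg
```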

 

For the FOV effect to be useful in the game, the in-game picture frame essentially has to have a fixed position relative to the helo. Thus moving your head left/right/up/down/fwd/bkwd (regardless of where it's pointing) alters the view through the in-game frame in the same way that it would if you moved left/right/up/down/fwd/bkwd around a real picture frame. When it comes down to it, moving your head left should angle your view through the window to the right. Moving your head up should angle your view through the window down. Finally, moving your head closer should increase the field of view. The last step keeps the frame edges in the same in-game position and essentially compresses a larger angular 'slice' of the world into that frame. Of course the practicality of this is limited... unless you have a separate monitor for each helicopter window and a to-scale, physically built helo cockpit, in which case it would be astonishing.



This is what Eagle Dynamics is all about:

Giving you the best immersion!

Some Demos:

 

http://www.youtube.com/watch?v=rn9-y-wOFiA&feature=channel_page

 

http://www.veoh.com/videos/v1627326198keXc3X?source=embed

(The sound really sucks!)

 

http://www.veoh.com/videos/v16445605e7zQQnfX?source=embed

 

Urze, I like the multiscreen setups. Now imagine you had a semi-wraparound screen, a high-def projector, and a Wiimote. Watch the clips below.

 

http://uk.youtube.com/watch?v=nhSR_6-Y5Kg

 

http://uk.youtube.com/watch?v=XgrGjJUBF_I&feature=channel



If you have TrackIR and 3D glasses, you get exactly the same effect in almost every game, from FPS to flight sims. It's been around a long time; I don't see what is so special about doing it with a Wii remote.


The wii-man's program actually changes the field of view as you get closer to the screen (check the football field part). BS does not do this. {1}

 

In the BS cockpit, leaning forward moves the observer and the frame forward together, thus maintaining the same field of view. {2} Objects on screen become larger (of course, you're closer to them) but your peripheral vision through the frame stays the same. That's just zoom. {3}

 

The wii-man's program simulates the effect of moving the observer closer to the picture frame. When the observer moves closer, he has greater peripheral vision through the frame (again, see the football stadium scene). {4} When you (physically) move toward the screen in BS, the in-game frame stays the same distance from the in-game observer, thus maintaining the FOV.

 

1. Yeah, by default. I was speaking mostly about the hardware; the software setups differ, but not the software capabilities. That is not to say you couldn't get fore-aft movement in TIR to adjust FOV if you adjusted an axis setting in the options.

 

2. As it should be. A proper 6DOF system should only move the camera and the camera vector, not adjust FOV. I mean, there's no real-life analog to zooming in with the human eye. I understand that visual acuity through a monitor (resolution + FOV-squishing issues) is not "fair", but I was speaking theoretically, as if these weren't issues.

 

3. I'm pretty sure that FOV adjustment and zoom are functionally identical when referring to a fixed-size viewing device. FOV is how much angular space is displayed, and zoom is how large the image is. If you have a fixed-size viewing device, then FOV dictates zoom and vice versa.
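A quick worked version of that equivalence (my own illustration): on a fixed-size screen, an on-axis object's magnification is set entirely by the rendered FOV, so "zoom factor" is just a ratio of FOV tangents.

```python
import math

def zoom_factor(fov_deg, base_fov_deg=90.0):
    """Relative image magnification on a fixed-size display when the
    rendered horizontal FOV changes from base_fov_deg to fov_deg."""
    return math.tan(math.radians(base_fov_deg) / 2) / math.tan(math.radians(fov_deg) / 2)

print(zoom_factor(90.0))   # 1.0  : baseline
print(zoom_factor(45.0))   # ~2.41: narrower FOV = zoomed in
print(zoom_factor(120.0))  # ~0.58: wider FOV = zoomed out
```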

 

4. Again, as well it should. Mr. Lee could have programmed his demo in much the same way. I was speaking more to the capabilities than to the specific manner in which he chose to employ those capabilities.

 

Ultimately, the "fixed aiming point" method is not a good way to go about using head tracking in flight sims; at best it's a way to cover up the limitations of a 3DOF system by coupling the direction and displacement axes. If the fixed aim point were at the HUD, you'd have to physically get out of the helicopter and walk around in front of the HUD to look back at your seat. It then comes down to whether you want the Z-displacement axis (fore-aft translation) to drive displacement of the camera or FOV/zoom adjustment.

 

Ideally I would want a head-mounted display with an FOV matching the max FOV the simulation can produce (can it do 120?) and a resolution where my eyes were the limiting factor in acuity. Then I could use 6DOF in a direct 1:1 relationship where the FOV never needs to be adjusted for any reason.

 

I don't see what is so special about doing it with a Wii remote.

It's basically cheaper, and I think it offloads some of the calculations onto the hardware component (rather than the CPU).


