Microsoft presented Project Natal at E3 this year. This has rekindled my interest in the matching VR display problem (we want that holodeck!). Anyway, my main interest is in multiview displays, or panoramagrams.
The idea is that a display presents multiple images at different angles, thus delivering a different image to each eye from whichever angle you look at it – producing a stereoscopic image without glasses.
I think the future solution to this will be something like nano-pixels, where a group of light sources (LEDs or something smaller) forms a half sphere. The separate rays are delineated by tiny, tiny tubes (maybe not quite nano) that prevent the eye from seeing the other rays from a given angle.
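To make that angular delineation a bit more concrete, here is a minimal sketch of the geometry involved. Everything in it is hypothetical (the function name, the idealized pinhole-style model, the parameter values); it just quantizes the eye-to-pixel angle into one of N discrete view zones, the way those tiny tubes would.

```python
import math

def view_index(eye_x, eye_z, pixel_x, num_views, fov_deg=90.0):
    """Map the angle from a pixel to the eye onto one of num_views
    discrete view zones. Hypothetical, idealized 2D model: the display
    lies along the x-axis and the viewer is at distance eye_z from it."""
    # Angle between the display normal and the pixel-to-eye ray (0 = head-on).
    angle = math.degrees(math.atan2(eye_x - pixel_x, eye_z))
    half = fov_deg / 2.0
    if not -half <= angle <= half:
        return None  # outside the display's angular range: no tube points here
    # Quantize the angle into equal angular slices, one per view.
    frac = (angle + half) / fov_deg          # 0.0 .. 1.0 across the fan of tubes
    return min(int(frac * num_views), num_views - 1)

# Two eyes ~6.5 cm apart, 60 cm from the screen, looking at the center pixel:
left = view_index(-0.0325, 0.6, 0.0, num_views=64)
right = view_index(0.0325, 0.6, 0.0, num_views=64)
```

With enough views per pixel, the two eyes land in different zones and each receives its own image, which is exactly the stereoscopic effect described above; with too few views, both eyes fall into the same zone and the depth illusion collapses.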
This, together with motion-detection tech similar to Natal and a VR setup like the CAVE, would pretty much check all the boxes for a holodeck, although we still can't project objects onto our hands that way (we'd need a display on our hands with the current solution, which seems infeasible). So it's not perfect, but definitely a step forward, as it works without peripherals and with multiple participants: both the motion-detection and the multiview-screen techs are independent of the number of users.