
The depth of the matter

Research labs and TV companies are working on ‘real’, glasses-free 3D displays, but several challenges lie ahead

Even as we watched Jake fly his Banshee over the sublime floating mountains in Avatar, some of us complained of headaches from the 3D experience. This was attributed to the focus-convergence problem. In the real world, our eyes automatically focus and converge at the same distance—for instance, when looking at a nearby object, the eyeballs rotate inwards to converge on it while the lenses adjust focus to the same point. While watching a 3D movie, our eyes are required to focus on the screen but converge at different distances to perceive different illusions of depth. If the convergence point—where the left and right camera lens axes meet—is behind the object, the object appears in front of the screen; and if the point is in front of the object, the object appears behind the screen. This decoupling of focus and convergence puts a strain on the eyes. However, engineering 3D content in accordance with what is comfortable to the human eye—by minimising the distance between the focus and convergence points—is the least of the problems facing the 3D industry. “It’s all about the display,” says Sudhir Dixit, director of HP Labs, Bangalore, one of several research labs across the world working on delivering a “true”, glasses-free 3D experience.
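
To make the geometry concrete, the short Python sketch below works out the vergence angles involved. The eye separation, viewing distance and perceived depth are assumed figures chosen for illustration, not numbers from the article.

```python
import math

# Illustrative numbers (assumptions, not from the article): a viewer sitting
# 2 m from the screen, eyes 65 mm apart, watching an object whose on-screen
# parallax makes it appear to float 0.5 m in front of the screen.
EYE_SEPARATION = 0.065    # metres
SCREEN_DISTANCE = 2.0     # metres; the eyes must FOCUS here
PERCEIVED_DISTANCE = 1.5  # metres; the eyes must CONVERGE here

def vergence_angle(distance):
    """Angle (degrees) between the two lines of sight for a point
    straight ahead at the given distance."""
    return math.degrees(2 * math.atan((EYE_SEPARATION / 2) / distance))

focus_angle = vergence_angle(SCREEN_DISTANCE)        # where the image is sharp
converge_angle = vergence_angle(PERCEIVED_DISTANCE)  # where the eyes point

print(f"vergence at the screen  : {focus_angle:.2f} deg")
print(f"vergence at the 3D object: {converge_angle:.2f} deg")
print(f"focus-convergence mismatch: {converge_angle - focus_angle:.2f} deg")
# The larger this mismatch, the harder the eyes work, which is why 3D content
# is graded to keep the illusory depth close to the screen plane.
```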

To understand this, try closing one eye and pouring water from a glass into a narrow-mouthed bottle. It becomes a struggle because depth perception depends on both eyes: they are about 65 mm apart and see an object from slightly different angles, producing two different images of the same object that the brain stitches together into a single 3D image. This is why people with one eye do not see in 3D. The ability to sense depth from the difference in image angle between the two eyes, and to perceive a single ‘cyclopean’ image of the world, is called stereoscopic vision. The difference in the position of an object viewed from two different points—in this case, the two eyes—is called parallax. A still from a 3D movie is made up of two superimposed images of different polarisations. 3D glasses carry a different polarisation filter over each eye, so each eye sees only one of the two images simultaneously projected on the screen, and the brain perceives depth. TV companies are now working on autostereoscopic displays that don’t require viewers to wear glasses. “In a regular display, each pixel displays the same colour in all directions and there is no parallax, and no depth. A light field display, on the other hand, is one where pixels display different colours in each direction, so that there is horizontal as well as vertical parallax. This is real 3D,” Dixit says.
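
The parallax described above can be put in numbers with a simple pinhole-eye model. The sketch below uses an assumed 65 mm eye separation and a rough 17 mm focal length for the eye to show how the left-right offset of an object in the two views shrinks with distance; all figures are illustrative assumptions.

```python
# A minimal sketch of binocular parallax, assuming a simple pinhole model:
# two "eyes" 65 mm apart look at a point straight ahead at distance z.
# The horizontal offset between where the point lands in the left-eye and
# right-eye views is the parallax the brain turns into a sense of depth.
EYE_SEPARATION = 0.065  # metres
FOCAL_LENGTH = 0.017    # metres; a rough figure for the human eye (assumption)

def image_positions(z):
    """Horizontal image position of a point straight ahead of the midpoint
    between the eyes, as projected in each eye (pinhole model)."""
    left = FOCAL_LENGTH * (+EYE_SEPARATION / 2) / z   # shifted one way in the left eye
    right = FOCAL_LENGTH * (-EYE_SEPARATION / 2) / z  # shifted the other way in the right eye
    return left, right

for z in (0.3, 1.0, 3.0, 10.0):
    l, r = image_positions(z)
    parallax_mm = (l - r) * 1000
    print(f"object at {z:4.1f} m -> parallax of {parallax_mm:.2f} mm between the two views")
# Parallax falls off quickly with distance, which is why stereoscopic depth
# perception fades for faraway objects.
```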

Most glasses-free 3D displays available today, such as the Nintendo 3DS, use an LCD panel at the back with an optical layer in front, either a parallax barrier (a grid of fine slits) or a lenticular sheet (an array of tiny magnifying lenses), that directs different pixels to each eye. These have serious limitations. For instance, Samsung’s and Toshiba’s lenticular-display prototypes can be viewed from just nine positions. “Resolution is often reduced relative to glasses-based systems due to the use of parallax barriers or lenticular sheets. Also, glasses-free displays have a finite depth of field, so objects blur rapidly as they move away from the display surface. And, brightness is often reduced. This is due to using time-sequential methods, where images are sequentially projected to each eye,” says Douglas Lanman, a postdoctoral associate at the Camera Culture Lab at the Massachusetts Institute of Technology, US. “Computationally, there isn’t too much of a difference at the moment compared to glasses-based systems: two views must be captured/rendered and fed to the display driver,” he adds. Using two LCD panels, one of them transparent, and innovative content-adaptive parallax barriers, the MIT Media Lab has developed a ‘High-Rank 3D’ display that addresses some of these issues. “Content-adaptive parallax barriers allow the spacing and orientation of slits to be optimised to transmit as much light as possible, while retaining the fidelity of the projected 3D images. The insight is to realise that the pattern of slits must be changed depending on the 3D scene being projected… Content-adaptive parallax barriers are well-suited to mobile devices, optimising the brightness of displays without reducing battery life,” explains the MIT paper on the technology.
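
The “high rank” in the name refers to treating the two stacked panels as a rank-constrained factorisation problem: the light passing through both layers is the product of their patterns, so a target light field can be approximated by a small set of time-multiplexed pattern pairs. The toy Python sketch below illustrates that principle with a made-up target and a standard non-negative factorisation; it is a sketch of the idea under those assumptions, not the Media Lab’s actual algorithm or code.

```python
import numpy as np

# Toy illustration of the idea behind content-adaptive parallax barriers:
# approximate a target light field matrix L (rows: front-layer pixels,
# columns: back-layer pixels; values here are made up) as L ~ F @ B,
# where the columns of F and rows of B are the time-multiplexed front-
# and back-layer patterns. Non-negativity matters because LCD layers can
# only attenuate light, never add it.
rng = np.random.default_rng(0)
L = rng.random((64, 64))   # hypothetical target light field
rank = 3                   # number of pattern pairs flashed per frame

# Non-negative matrix factorisation via Lee-Seung multiplicative updates.
F = rng.random((L.shape[0], rank))
B = rng.random((rank, L.shape[1]))
for _ in range(200):
    B *= (F.T @ L) / (F.T @ F @ B + 1e-9)
    F *= (L @ B.T) / (F @ B @ B.T + 1e-9)

error = np.linalg.norm(L - F @ B) / np.linalg.norm(L)
print(f"relative reconstruction error with {rank} pattern pairs: {error:.3f}")
# More pattern pairs shown in quick succession give a brighter, more faithful
# light field, but demand a faster-refreshing panel.
```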

Now, 3D is coming to mobile phones armed with dual-lens cameras. But is a paradigm shift to 3D really justified? 3D viewing has been shown to improve understanding of abstract information, and a truly immersive experience must be a 3D one. This is what HP Labs India, which has worked on multimodal integration of touch, voice and face recognition in 2D interfaces, is interested in. “Our interaction with computers is complex; a touch and gesture paradigm, supported by face and speech recognition and other intelligent systems, makes the experience easier. And it’s happening. Computing companies have demonstrated features where, depending on the fixation of the user’s gaze on a certain icon, a file opens without any command,” Dixit says. Translating gesture recognition and touch into 3D is a greater challenge. “I’d say we are about 10 years away, if not more, from developing a 3D system where everything is real-world-like and natural,” he says.
