Virtual Reality and Augmented Reality (or Mixed Reality) headsets have evolved considerably over the last few years. Improvements in resolution, latency and other factors have led to new, extremely immersive systems such as the HTC Vive. However, one missing feature is still holding back the technology:
Most of today’s headsets consist of a two-dimensional display placed at a fixed distance from the user’s eyes. This creates a conflict for our eyes and brain, which in the real world are used to a linked adjustment of the angle between the eyes (“vergence”) and the focal plane (“accommodation”). Recent proof-of-concept systems use up to three display planes, giving us discrete near, mid-range and far layers to focus on, but a better, more immersive 3D experience will require an almost continuous focal range.
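To make the mismatch concrete, here is a minimal sketch of the two cues. It assumes illustrative values not taken from any particular headset: an interpupillary distance of 63 mm and display optics with a fixed focal plane at 1.5 m. Vergence tracks the virtual object’s distance, while accommodation stays locked to the screen:

```python
import math

# Illustrative assumptions (not from any specific headset):
IPD_M = 0.063        # typical interpupillary distance in metres
SCREEN_DIST_M = 1.5  # fixed focal distance of the display optics

def vergence_deg(distance_m):
    """Angle between the two eyes' lines of sight for an object at distance_m."""
    return math.degrees(2 * math.atan(IPD_M / (2 * distance_m)))

def accommodation_dpt(distance_m):
    """Focus demand in dioptres for an object at distance_m."""
    return 1.0 / distance_m

for virtual_dist in (0.5, 1.5, 5.0):
    # Vergence follows the *virtual* object; accommodation is stuck at the screen.
    print(f"object at {virtual_dist:>4} m: "
          f"vergence {vergence_deg(virtual_dist):5.2f} deg, "
          f"eye focuses at {accommodation_dpt(SCREEN_DIST_M):.2f} dpt "
          f"instead of {accommodation_dpt(virtual_dist):.2f} dpt")
```

Only at 1.5 m do the two cues agree; everywhere else the brain receives a vergence signal for one distance and an accommodation signal for another, which is exactly the conflict a light field display is meant to remove.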
The most promising solution to this problem is light field technology: For instance, Nvidia’s light field display prototype has successfully shown (though at low resolution) that it is possible to construct a light field image that allows placement of multiple objects at different focal planes or virtual distances. The Nvidia prototype uses a microlens array, much like those in light field cameras from Lytro or Raytrix. Magic Leap is another company working on light field technology. While the company has teased a head-mounted light field display on several occasions, they have yet to explain how exactly their system works, let alone present a working prototype to the public.
Now, another company has entered the light field space. Head-mounted display maker Avegant has announced a display that uses “a new method to create light fields” to simultaneously show multiple objects at different focal planes. While all digital light fields have discrete focal planes, according to Avegant CTO Edward Tang, the new technology can interpolate between them to create a “continuous, dynamic focal plane”. “This is a new optic that we’ve developed that results in a new method to create light fields,” says Tang.
Light field imaging has captured the minds of many technology enthusiasts and imaging pioneers, and there have been rumours of light field cameras in future iPhones and Android smartphones.
Now a new patent has surfaced showing that Apple is still interested in light field cameras. The twist: the proposed “plenoptic” (a.k.a. light field) camera system is intended to aid robots in the manufacturing process.
The quality of a DSLR in a camera that fits in your pocket – that’s what camera startup Light promises with the just-announced L16, the world’s first multi-aperture computational compact camera, made up of 16 individual camera modules.
Apple is known to have been interested in light field technology since before Lytro released their first-generation light field camera, as Ren Ng was reportedly invited by Steve Jobs himself to discuss the technology’s potential. The company has even patented some of their own inventions in the field.
Now, it seems that the tech giant has made its next move towards light field photography: Apple has acquired Israeli camera module maker LinX, which specializes in thin camera arrays similar to Pelican Imaging’s PiCam.
LinX promises powerful camera modules with advanced image quality (“leading the way to DSLR performance in slim handsets”), but also additional information such as scene depth through its “multi-aperture” modules (read: array cameras and possibly light field technology).
It’s gotten a bit quiet around Pelican Imaging lately – until today, when the mobile plenoptics specialist broke the silence and announced its own version of a WebGL light field viewer.
“What do photos with depth look like?”, the company teased in its newsletter. To answer that question, it has published a small sample image gallery based on the new “Pelican 3D Image Viewer”, which lets users view and interact with eight sample images taken with the Pelican Array Camera.