Jan Kučera has recently released a suite of software updates for his Lytro Meltdown tools: the Lytro Compatible Viewer, the Lytro Compatible Communicator and the Lytro Compatible Library.
The updates include a 3D mesh view generated from depth maps in the Viewer, improved demosaicing, and user manuals. The library has gained accessors for well-known components of light field packages, as well as dedicated classes and methods for easier access to sub-aperture and individual microlens images.
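To illustrate what "sub-aperture" versus "microlens" access means, here is a minimal sketch in Python/NumPy. It assumes a light field already decoded into a 4D array indexed as (u, v, s, t), with (u, v) the angular position under each microlens and (s, t) the spatial position across the lens grid; the function names are illustrative and are not the actual API of the Lytro Compatible Library.

```python
import numpy as np

def extract_subaperture(lightfield, u, v):
    """Return the sub-aperture image for angular position (u, v):
    one pixel taken from the same offset under every microlens."""
    return lightfield[u, v, :, :]

def extract_microlens(lightfield, s, t):
    """Return the microlens (lenslet) image at spatial position (s, t):
    all angular samples recorded behind a single microlens."""
    return lightfield[:, :, s, t]

# Synthetic example: 9x9 angular samples, 64x48 spatial resolution.
lf = np.random.rand(9, 9, 64, 48)
center_view = extract_subaperture(lf, 4, 4)  # shape (64, 48)
lenslet = extract_microlens(lf, 10, 20)      # shape (9, 9)
```

A sub-aperture image looks like an ordinary photograph taken through one small portion of the main lens aperture, which is why shifting between them produces parallax.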
Earlier this year at Augmented World Expo, Nvidia researcher Douglas Lanman gave a talk about Near-Eye Light Field displays, i.e. electronic glasses that let users experience both stereoscopic 3D and natural depth cues. When asked about Augmented Reality (AR) applications during the discussion, Lanman noted that creating a set of transparent glasses that would include microlenses (or something equivalent) while still allowing "normal" see-through vision was a real challenge. He very briefly teased "pinlight displays", which were to be presented at the same conference, but no further information could be found online.
In the Emerging Technologies section of the Siggraph 2014 conference (10-14 August 2014), Adam Maimone and colleagues from the University of North Carolina at Chapel Hill and Nvidia will be presenting their new invention in a talk entitled “Pinlight Displays: Wide-Field-of-View Augmented-Reality Eyeglasses Using Defocused Point-Light Sources”.
With today’s light field sensors, extracting 3D stereo images from light field recordings typically results in a lowered effective image resolution – but that limitation may soon be history: Sony has developed a novel sensor design with overlapping pixels in two layers that will allow 3D output without the typical decrease in image resolution. In Sony’s recently granted US Patent No. US20140071244, author Isao Hirota introduces a dual-level microlens array setup in combination with a sensor that consists of two layers of light-sensitive pixel grids – front-facing and back-facing grids that are rotated at, for example, 45 degrees.
The described configuration allows different neighbouring pixels to share the same information from a single microlens while being allocated to either the left or right stereo views, resulting in higher-resolution 3D stereo output from a single-lens, single-sensor device (i.e. a “monocular 3D stereo camera”).
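The principle of allocating pixels under one microlens to different stereo views can be sketched with a deliberately simplified toy model – this is not Sony's actual two-layer, rotated-grid design from the patent, just a flat single-layer approximation in which each microlens covers a small square of pixels and the left and right halves feed the left and right views:

```python
import numpy as np

def split_stereo(raw, lens_size=2):
    """Toy model: split a raw plenoptic image into left/right views by
    averaging the left and right pixel columns under each
    (lens_size x lens_size) microlens."""
    h, w = raw.shape
    # Group pixels by microlens: (lens rows, pixels, lens cols, pixels).
    lenses = raw.reshape(h // lens_size, lens_size, w // lens_size, lens_size)
    left = lenses[:, :, :, : lens_size // 2].mean(axis=(1, 3))
    right = lenses[:, :, :, lens_size // 2 :].mean(axis=(1, 3))
    return left, right

raw = np.arange(16, dtype=float).reshape(4, 4)
left, right = split_stereo(raw)
# Each view has one sample per microlens: shape (2, 2).
```

In this naive scheme every stereo view only uses half the pixels, which is exactly the resolution loss the patented overlapping-pixel layout is designed to avoid: by letting neighbouring pixels in the two rotated layers share light from the same microlens, each view can draw on a denser effective sampling grid.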
About a year ago, Nvidia presented a novel head-mounted display that is based on light field technology and offers both depth and refocus capability to the human eye. Their so-called Near-Eye Light Field Display was more of a proof of concept, but it’s an exciting new technology that solves a number of existing problems with stereoscopic virtual reality glasses.
Nvidia researcher Douglas Lanman recently gave a talk at Augmented World Expo (AWE2014), in which he explained the background and evolution of head-mounted displays and the history and design of Nvidia’s near-eye light field display prototypes.
Today’s glasses-free 3D displays usually consist of dozens of devices, which makes them not only very complex, but also bulky, energy-consuming and costly. At the SIGGRAPH 2014 conference, Gordon Wetzstein and Matthew Hirsch from MIT’s Camera Culture Group presented a new approach to glasses-free 3D that is based on projectors and optical technology found in Keplerian telescopes. Their novel method for “Compressive Light Field Projection” consists of a single device without mechanically moving parts.
Because it’s relatively cheap to build with today’s optics and electronics, the presented prototype could pave the way for cinema-scale glasses-free 3D displays.