Jun 25

Patent: Integrated Light Field Sensor on a Chip

(Figure by Kim 2017, modified by www.lightfield-forum.com)

Current light field sensors typically consist of an imaging sensor and a separate microlens array, both of which are assembled into one optical system. While this allows the use of common CCD or CMOS sensors, it can also introduce problems wherever extreme precision is needed for optimal imaging conditions, e.g. with microlenses whose focal lengths lie in the micrometer range: a mismatch between the separately fabricated elements can degrade image quality.
A new patent application by Jong Eun Kim at SK hynix (Korea) aims to solve these potential issues: it describes a novel light field imaging device in which the microlens array is formed on top of the imaging sensor. Continue reading

Apr 22

Light Field Lab: Startup is Working on Glasses-Free Holographic TV Sets

(Picture: Light Field Lab)

At this week’s National Association of Broadcasters (NAB) show in Las Vegas, a startup named Light Field Lab announced that it is developing the next big thing in display technology: glasses-free holographic TVs.
Founded by former Lytro specialists Jon Karafin (former Head of Light Field Video at Lytro), Brendan Bevensee (former Lead Engineer at Lytro) and Ed Ibe (former Lead Hardware Engineer at Lytro), the company is working on the “next generation of light field display technologies”. Continue reading

Apr 16

Lytro Immerge becomes Bigger and Better

(Photo: Road to VR)

Back in August 2016, Lytro unveiled its first Virtual Reality experience, “Moon” (see below), to show off the capabilities of Immerge, the company’s groundbreaking, high-end production camera that records light fields for virtual reality. While it was reportedly an impressive experience for the VR viewer, it also had its limitations (especially with moving objects in the recorded scene).
Now, Ben Lang from RoadToVR talks about a recent visit to Lytro, where he saw the new and improved Immerge prototype. Continue reading

Apr 08

MIT Boosts Efficiency of Lensless Single Pixel Cameras by Factor 50

(Figure: original image, top; single-pixel camera image with 50 and 2500 exposures, second row; reconstructions from ultrafast sensing, third and fourth rows, using 50 exposures at 100 and 20 picosecond sensing, respectively. Image: Satat et al., 2017)

Today’s conventional cameras require a set of highly precise lenses and a large array of individual light sensors. This general blueprint limits the use of cameras in new applications, e.g. in ever-thinner smartphones or in spectral ranges outside visible light.
To overcome these limitations and completely rethink the basics of imaging, researchers from Rice University, Heriot-Watt University and the University of Glasgow (among others) recently developed a “compressive sensing” concept camera that records pictures with only a single pixel and no lens whatsoever, using computational imaging. The trick lies in the light source, which illuminates the scene with a series of defined black-and-white patterns (coded masks). Based on the changes in measured light intensity across many exposures, the single-pixel camera can then infer the position of objects and patterns in the scene.
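To make the measurement model concrete, here is a minimal simulation sketch in Python with NumPy (the scene, the mask count and the ridge-regularized solver are illustrative assumptions, not the researchers’ actual pipeline): each coded mask yields one intensity reading at the single pixel, and the scene is recovered from the stack of readings.

# Minimal single-pixel "compressive sensing" simulation (illustrative sketch only).
# A hypothetical 8x8 scene is illuminated with random binary mask patterns; the
# single pixel records one total intensity per mask, and the scene is recovered
# from those measurements by regularized least squares.
import numpy as np

rng = np.random.default_rng(0)

N = 8                      # scene is N x N pixels
scene = np.zeros((N, N))
scene[2:6, 3:5] = 1.0      # a simple bright rectangle as the "object"
x_true = scene.ravel()

M = 40                     # number of masked illumination patterns (exposures)
A = rng.integers(0, 2, size=(M, N * N)).astype(float)  # binary coded masks

y = A @ x_true             # single-pixel intensity measured for each mask

# Ridge-regularized least squares as a stand-in for the sparsity-based
# solvers used in real compressive-sensing cameras.
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(N * N), A.T @ y)

print("relative reconstruction error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))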
Until recently, such single-pixel systems required a large number of exposures, in the range of 2000 or more. Now, Guy Satat and colleagues from MIT's Camera Culture lab have combined the single-pixel camera with another cutting-edge technology: ultrafast femto- or picosecond light sensors. By looking not only at the intensity changes across masked illumination bursts, but also within individual bursts, the researchers are able to break up the signal into light reflected from different distances in the scene. This brings the number of required exposures down from 2500 to just 50 in the example outlined in the paper and video. By moving from a single pixel to several light-sensing pixels arranged in a defined sensor pattern, this number can be reduced even further without loss of image quality. Continue reading
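The following toy model (again Python/NumPy, with made-up numbers; it is not the Satat et al. algorithm itself) sketches why time resolution helps: if the returning light is sorted into one time bin per depth slice, every masked exposure delivers several measurements instead of one, so far fewer exposures are needed.

# Toy model of time-resolved single-pixel sensing (illustrative assumptions only):
# light returning from different depths lands in different time bins, so each
# masked exposure yields one measurement per depth slice instead of a single number.
import numpy as np

rng = np.random.default_rng(1)

N, K = 8, 3                          # 8x8 scene split into 3 depth slices
layers = np.zeros((K, N * N))        # hypothetical content of each depth slice
layers[0, 3:8] = 1.0
layers[1, 20:28] = 1.0
layers[2, 40:44] = 1.0

M = 20                               # far fewer masks than the 64 unknowns per slice
A = rng.integers(0, 2, size=(M, N * N)).astype(float)   # binary coded masks

# Time-resolved measurement: one intensity per (mask, time bin) pair.
Y = A @ layers.T                     # shape (M, K)

# Each depth slice is reconstructed independently from its own time bin,
# so M exposures provide M * K usable measurements instead of just M.
lam = 1e-2
recon = np.linalg.solve(A.T @ A + lam * np.eye(N * N), A.T @ Y)   # shape (N*N, K)

print("per-slice relative errors:",
      np.linalg.norm(recon - layers.T, axis=0) / np.linalg.norm(layers.T, axis=0))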

Apr 02

Avegant: New Light Field Display for better Augmented Reality Headsets

(Mockup via RoadtoVR.com)

Virtual Reality and Augmented Reality (or Mixed Reality) headsets have evolved quite a bit over the last few years. Improvements in resolution, lag and other factors have led to new, extremely immersive systems such as the HTC Vive. However, one missing feature is still holding back the technology:
Generally speaking, most of today’s headsets use a two-dimensional display that’s placed at a fixed distance from the user’s eyes. This creates a conflict for our eyes and brain, which in the real world are used to a linked adjustment of the angle between the eyes (“vergence”) and the focal plane (“accommodation”). Recent proof-of-concept systems use up to three display planes, giving us discrete near, mid-range and far layers to focus on, but for a better, more immersive 3D experience we’ll need the ability to focus across an almost continuous range of distances.
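For a rough sense of the mismatch, the small Python snippet below compares the vergence angle with the accommodation demand (in diopters) for an object rendered at 0.5 m on a headset whose display is focused at a fixed 2 m; the interpupillary distance and both distances are assumed example values.

# Illustrative numbers for the vergence-accommodation conflict: for a real object,
# vergence and accommodation agree; in a fixed-focus headset the eyes converge on
# the virtual object while focus stays at the display's optical distance.
import math

IPD = 0.063            # assumed interpupillary distance in metres

def vergence_deg(distance_m):
    """Angle between the two eyes' lines of sight when fixating at distance_m."""
    return math.degrees(2 * math.atan(IPD / (2 * distance_m)))

def accommodation_dpt(distance_m):
    """Focal demand in diopters (1 / distance in metres)."""
    return 1.0 / distance_m

virtual_object = 0.5   # virtual object rendered 0.5 m away
display_focus = 2.0    # assumed fixed optical focus distance of the display

print("vergence driven by the object:           %.2f deg" % vergence_deg(virtual_object))
print("accommodation a real 0.5 m object needs: %.2f D" % accommodation_dpt(virtual_object))
print("accommodation forced by the display:     %.2f D" % accommodation_dpt(display_focus))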
The most promising solution to this problem is light field technology: For instance, Nvidia’s light field display prototype has shown successfully (though at low resolution) that it is possible to construct a light field image that allows placement of multiple objects at different focal planes or virtual distances. The Nvidia prototype uses a microlens array, much like in light field cameras from Lytro or Raytrix. Magic Leap is another company working on light field technology. While the company has teased a head-mounted light field display on several occasions, they have yet to explain how exactly their system works, let alone present a working prototype to the public.

Now, another company has entered the light field space. Head-mounted display maker Avegant has announced a new display that uses “a new method to create light fields” to simultaneously display multiple objects at different focal planes. While all digital light fields have discrete focal planes, according to Avegant CTO Edward Tang, the new technology can interpolate between these to create a “continuous, dynamic focal plane”. “This is a new optic that we’ve developed that results in a new method to create light fields,” says Tang. Continue reading
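Avegant has not disclosed how this interpolation works. Purely as a generic illustration, the sketch below uses depth-weighted blending, linear in diopters, between the two nearest focal planes; this is a technique known from multifocal-display research, and the plane distances are hypothetical.

# Generic depth-weighted blending between discrete focal planes (illustration only,
# not Avegant's disclosed method): an object at an intermediate depth is split
# between the two nearest planes with weights linear in diopters.
import numpy as np

plane_dpt = np.array([3.0, 1.0, 0.2])   # hypothetical focal planes at 0.33 m, 1 m, 5 m

def blend_weights(object_dpt):
    """Per-plane intensity weights for an object at object_dpt diopters."""
    w = np.zeros(len(plane_dpt))
    if object_dpt >= plane_dpt[0]:       # nearer than the nearest plane: clamp
        w[0] = 1.0
        return w
    if object_dpt <= plane_dpt[-1]:      # farther than the farthest plane: clamp
        w[-1] = 1.0
        return w
    for i in range(len(plane_dpt) - 1):  # find the two planes bracketing the object
        near, far = plane_dpt[i], plane_dpt[i + 1]
        if far <= object_dpt <= near:
            t = (near - object_dpt) / (near - far)
            w[i], w[i + 1] = 1.0 - t, t
            return w
    return w

print(blend_weights(2.0))   # halfway (in diopters) between the 3 D and 1 D planes -> [0.5 0.5 0. ]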