Jun 25

Patent: Integrated Light Field Sensor on a Chip

(Figure by Kim 2017, modified by www.lightfield-forum.com)

Current light field sensors typically consist of an imaging sensor and a separate microlens array, both of which are assembled into an optical system. While this allows the use of common CCD or CMOS sensors, it can also introduce alignment issues where extreme precision is needed for optimal imaging, e.g. with microlenses whose focal lengths lie in the micrometer range. A mismatch between these separately fabricated elements can degrade image quality.
A new patent application by Jong Eun Kim at SK hynix (Korea) aims to solve these potential issues: it details a novel light field imaging device in which the microlens array is formed directly on top of the imaging sensor.

May 07

Video: The Making of Hallelujah with Lytro Immerge

Lytro recently upped its Immerge VR camera to the next generation, with a larger, planar camera array for easier VR video production. Its most highly promoted feature is recording content with 6 degrees of freedom, meaning that you can not only rotate your view, but actually move your head through space (within limits).

At the recent Tribeca Film Festival, the company presented a first VR video experience titled “Hallelujah”, featuring a performance of Leonard Cohen’s popular song and recorded with the second-generation Immerge. Lytro’s “Making Of” video not only hints at what VR viewers will see, but also gives some insight into the Immerge production controls and interfaces.

Apr 22

Light Field Lab: Startup is Working on Glasses-Free Holographic TV Sets

(Picture: Light Field Lab)

At this week’s National Association of Broadcasters (NAB) show in Las Vegas, a startup named Light Field Lab announced that it is developing the next big thing in display technology: glasses-free holographic TVs.
Founded by former Lytro specialists Jon Karafin (Head of Light Field Video), Brendan Bevensee (Lead Engineer) and Ed Ibe (Lead Hardware Engineer), the company is working on the “next generation of light field display technologies”.

Apr 16

Lytro Immerge becomes Bigger and Better

(Photo: Road to VR)

Back in August 2016, Lytro unveiled its first Virtual Reality experience, “Moon”, to show off the capabilities of Immerge, the company’s groundbreaking, high-end production camera that records light fields for virtual reality. While it was reportedly an impressive experience for the VR viewer, it also had its limitations, especially with moving objects in the recorded scene.
Now, Ben Lang from Road to VR reports on a recent visit to Lytro, where he saw the new and improved Immerge prototype.

Apr 08

MIT Boosts Efficiency of Lensless Single Pixel Cameras by Factor 50

(Image: Satat et al., 2017. Original image, top; single-pixel camera image after 50 and 2,500 exposures, second row; reconstructions from ultrafast sensing with 50 exposures at 100 and 20 picosecond time resolution, third and fourth rows.)

Today’s conventional cameras require a set of highly precise lenses and a large array of individual light sensors. This general blueprint limits the use of cameras in new applications, e.g. in ever-thinner smartphones or in spectral ranges outside visible light.
To overcome these limitations and completely rethink the basics of imaging, researchers from Rice University, Heriot-Watt University and the University of Glasgow (among others) recently developed a “compressive sensing” concept camera that uses only a single pixel and no lens whatsoever, recording pictures through computational imaging. The trick lies in the light source, which illuminates the scene with a series of defined black-and-white patterns (a coded mask). From the changes in measured light intensity across many exposures, the single-pixel camera can then infer the positions of objects and patterns in the scene.
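To make the measurement model concrete, here is a minimal numerical sketch of that single-pixel principle in Python. Everything in it is illustrative: the 16×16 toy scene, the 120 random binary masks and the ridge-regression solver are our assumptions, not the actual hardware or reconstruction algorithm from the paper (real compressive-sensing pipelines use sparsity-promoting solvers such as L1 or total-variation minimization).

    import numpy as np

    rng = np.random.default_rng(0)

    N = 16                      # toy scene of N x N pixels
    n = N * N
    M = 120                     # number of patterned exposures (M < n)

    scene = np.zeros((N, N))
    scene[4:12, 6:10] = 1.0     # a simple bright object
    x_true = scene.ravel()

    # Each exposure: illuminate the scene with one random binary mask and
    # record a SINGLE total-intensity value on the single pixel.
    Phi = rng.integers(0, 2, size=(M, n)).astype(float)   # coded masks
    y = Phi @ x_true                                      # one reading per mask

    # Recover the image from the underdetermined system; a ridge-regularized
    # least-squares solve stands in here for the sparsity-promoting solvers
    # used in real compressive-sensing pipelines.
    lam = 1e-2
    x_hat = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n), Phi.T @ y)

    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

The point of the sketch: with 120 scalar readings for 256 unknowns, the system is underdetermined, yet the known coded masks plus regularization still recover a usable image.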
Until recently, such single-pixel systems required a large number of exposures, in the range of 2,000 or more. Now, Guy Satat and colleagues from MIT’s Camera Culture lab have combined the single-pixel camera with another cutting-edge technology: ultrafast femto- or picosecond light sensors. By looking not only at the intensity changes across masked illumination bursts, but also at arrival times within individual bursts, the researchers are able to break the signal up into light reflected from different distances in the scene. This brings the number of required exposures down from 2,500 to just 50 in the example outlined in the paper and video. By moving from a single pixel to several light-sensing pixels placed in a defined sensor pattern, this number can be reduced further without loss of image quality.
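The exposure savings can be illustrated by extending the sketch above: if each exposure is time-resolved into T bins, and each bin only collects light from one depth slice of the scene, the one big inverse problem splits into T smaller, better-conditioned ones. Again a hedged toy model, not the MIT group’s actual reconstruction: the slice count, the sparse random scene and the per-slice ridge solve are our illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    N = 16                      # toy scene of N x N pixels
    n = N * N
    T = 4                       # number of time bins (depth slices)
    M = 40                      # coded exposures -- far fewer than before

    # Hypothetical scene: sparse intensities, each pixel in one depth slice.
    x_true = rng.random(n) * (rng.random(n) > 0.7)
    depth_bin = rng.integers(0, T, size=n)

    # Same random binary illumination masks as in the previous sketch.
    Phi = rng.integers(0, 2, size=(M, n)).astype(float)

    # Time-resolved measurement: bin t of each exposure only collects light
    # reflected from pixels in depth slice t, so one exposure yields T readings.
    Y = np.zeros((M, T))
    for t in range(T):
        in_slice = depth_bin == t
        Y[:, t] = Phi[:, in_slice] @ x_true[in_slice]

    # Reconstruction: T small ridge-regularized solves, one per depth slice.
    lam = 1e-2
    x_hat = np.zeros(n)
    for t in range(T):
        in_slice = depth_bin == t
        k = in_slice.sum()
        if k == 0:
            continue
        A = Phi[:, in_slice]
        x_hat[in_slice] = np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ Y[:, t])

    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

Each depth slice now contributes only about n/T unknowns to its own subproblem, which is why far fewer coded exposures suffice for the same scene.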