Apple’s latest light field patent describes the use of a camera array for immersive augmented reality (AR), live display walls, head-mounted displays, video conferencing, and similar applications based on a user’s point of view. The patent application, simply titled “Light field capture”, talks about AR video conferencing where the user’s background can be replaced with other information (e.g. their own view of a scene, or live sports).
The invention also covers concepts such as pixel culling (i.e. following the user’s movements and cropping to the interesting parts of the entire camera view), and the conversion of 3D data into 2D views for the left and right eyes of the second party.
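Just to illustrate the pixel culling idea, here’s a minimal sketch in Python. It is purely our illustration, not the patent’s actual method; the function name, crop window and tracking input are made up:

```python
# Minimal pixel culling sketch (our illustration; names and numbers are
# made up, not taken from the patent).
import numpy as np

def cull_pixels(frame, viewer_xy, window=(480, 640)):
    """Crop a full camera frame (H x W x 3) to the region of interest
    around the point that the tracked viewer position maps to."""
    h, w = frame.shape[:2]
    win_h, win_w = window
    cy = int(viewer_xy[1] * h)               # normalized (0..1) position
    cx = int(viewer_xy[0] * w)               # mapped onto the sensor
    top = min(max(cy - win_h // 2, 0), h - win_h)
    left = min(max(cx - win_w // 2, 0), w - win_w)
    return frame[top:top + win_h, left:left + win_w]

full_view = np.zeros((2160, 3840, 3), dtype=np.uint8)   # one camera's 4K frame
cropped = cull_pixels(full_view, viewer_xy=(0.7, 0.4))  # viewer right of center
print(cropped.shape)                                    # (480, 640, 3)
```

The point of culling this early in the pipeline is to avoid transmitting and processing the parts of the camera view that the current viewpoint doesn’t need.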
Interestingly, the authors also mention the possibility of a hybrid display/camera array that would integrate both devices into a single, light-field-sensing screen.
For more information, check out Patently Apple and Patent US9681096 – Light field capture on Google Patents.
Current light field sensors typically consist of an imaging sensor and a separate microlens array, both of which are assembled into one optical system. While this approach allows the use of common CCD or CMOS sensors, it can also introduce issues wherever extreme precision is needed for optimal imaging, e.g. with microlenses whose focal lengths are in the micrometer range. A mismatch between these separately fabricated elements can degrade image quality.
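To get a feeling for the required precision, here’s a back-of-envelope calculation with assumed, illustrative numbers (the pixel pitch, microlens pitch and f-number are our own, not from the patent): in a classic plenoptic design, the microlens array has to sit exactly one focal length above the sensor surface, and a spacing error of just a few micrometers already blurs the sub-images by more than a pixel.

```python
# Back-of-envelope tolerance estimate (illustrative numbers, not from the patent).
pixel_pitch_um = 1.4    # assumed sensor pixel pitch
lens_pitch_um = 14.0    # assumed microlens pitch (10 pixels per microlens)
f_number = 2.0          # assumed microlens f-number (N = f / aperture)

focal_length_um = f_number * lens_pitch_um   # f = N * D  ->  28 um
# A spacing error dz defocuses each sub-image by a blur spot of about dz / N.
# Keeping that blur below one pixel bounds the mounting tolerance:
tolerance_um = pixel_pitch_um * f_number     # dz < p * N  ->  2.8 um

print(f"microlens focal length: {focal_length_um:.0f} um")
print(f"axial mounting tolerance: +/-{tolerance_um:.1f} um")
```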
A new patent application by Jong Eun Kim at SK hynix (Korea) aims to solve these potential issues: it details a novel light field imaging device in which the microlens array is formed directly on top of the imaging sensor. Continue reading
At this week’s National Association of Broadcasters (NAB) show in Las Vegas, a startup named Light Field Lab has announced that it is developing the next big thing in display technology: glasses-free holographic TVs.
Founded by former Lytro specialists Jon Karafin (Head of Light Field Video), Brendan Bevensee (Lead Engineer) and Ed Ibe (Lead Hardware Engineer), the company is working on the “next generation of light field display technologies”. Continue reading
Back in August 2016, Lytro unveiled its first Virtual Reality experience, “Moon”, to show off the capabilities of Immerge, the company’s groundbreaking, high-end production camera that records light fields for virtual reality. While it was reportedly an impressive experience for the VR viewer, it also had its limitations (especially with moving objects in the recorded scene).
Now, Ben Lang from RoadToVR talks about a recent visit to Lytro, where he saw the new and improved Immerge prototype. Continue reading
Today’s conventional cameras require a set of highly precise lenses and a large array of individual light sensors. This general blueprint limits the use of cameras in new applications, e.g. in ever-thinner smartphones or in spectral ranges outside visible light.
To overcome these limitations and completely rethink the basics of imaging, researchers from Rice University, Heriot-Watt University and the University of Glasgow (among others) have recently developed a “compressive sensing” concept camera which uses only a single pixel and no lens whatsoever to record pictures through computational imaging. The trick lies in the light source, which illuminates the scene with a series of defined black-and-white patterns (coded masks). Based on changes in the resulting light intensity across many exposures, the single-pixel camera can then infer the position of objects and patterns in the scene.
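The following toy simulation shows the principle at work (a sketch under strong simplifying assumptions, not the authors’ actual pipeline): each exposure projects one random pattern onto a tiny synthetic scene, the single pixel records the summed reflected intensity, and a standard sparse-recovery solver reconstructs the image from fewer exposures than there are pixels. For simplicity we use ±1 patterns; a physical system would realize each one as a pair of complementary black-and-white masks.

```python
# Toy single-pixel compressive imaging (our sketch, not the authors' code).
import numpy as np

rng = np.random.default_rng(1)
n = 16
scene = np.zeros((n, n))
scene[5:11, 7:10] = 1.0              # sparse scene: a small bright patch
x_true = scene.ravel()               # 256 unknown pixel values, 18 nonzero

m = 100                              # only 100 masked exposures for 256 pixels
A = rng.choice([-1.0, 1.0], size=(m, n * n))   # one illumination pattern per row
y = A @ x_true                       # the single pixel's reading per exposure

# Sparse recovery via ISTA (iterative soft thresholding), a standard
# compressive sensing solver, standing in for the authors' reconstruction.
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
lam = 0.1                            # sparsity weight
x = np.zeros(n * n)
for _ in range(2000):
    z = x - A.T @ (A @ x - y) / L    # gradient step on the data term
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

# A small relative error indicates successful recovery from m < n*n exposures.
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```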
Until recently, such single-pixel systems required a large number of exposures, in the range of 2000 or more. Now, Guy Satat and colleagues from MIT’s Camera Culture lab have combined the single-pixel camera with another cutting-edge technology: ultrafast femto- or picosecond light sensors. Looking not only at the intensity changes across masked illumination bursts, but also within individual bursts, the researchers are able to break the signal up into light reflected from different distances in the scene. This brings the number of required exposures down from 2500 to just 50 in the example outlined in the paper and video. By moving from a single pixel to several light-sensing pixels arranged in a defined sensor pattern, this number can be reduced even further without loss of image quality. Continue reading
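Here’s an idealized toy model of why time resolution cuts the exposure count so drastically (our own construction, much simpler than the real setup): the ultrafast sensor sorts each masked flash’s return into arrival-time bins, i.e. depth slices, and each slice contains only a fraction of the scene’s pixels. Every slice then becomes a small, well-posed subproblem that plain least squares can invert, with no sparsity prior and far fewer patterns than the full image would need.

```python
# Toy time-resolved single-pixel imaging (idealized; our own construction).
import numpy as np

rng = np.random.default_rng(2)
n = 16                                    # 16x16 scene, 256 pixels
depth = rng.integers(0, 4, size=n * n)    # each pixel lies in one of 4 depth bins
x_true = rng.random(n * n)                # reflectance per pixel

m = 100                                   # far fewer exposures than 256 pixels
A = rng.choice([-1.0, 1.0], size=(m, n * n))

# Time-resolved measurement: each exposure yields one value per time bin,
# containing only the light reflected from the matching depth slice.
y = np.zeros((m, 4))
for d in range(4):
    sel = depth == d
    y[:, d] = A[:, sel] @ x_true[sel]

# Invert each depth slice independently: every slice has only ~64 unknowns,
# so 100 exposures overdetermine each subproblem and least squares is exact.
x_rec = np.zeros(n * n)
for d in range(4):
    sel = depth == d
    sol, *_ = np.linalg.lstsq(A[:, sel], y[:, d], rcond=None)
    x_rec[sel] = sol

print("max error:", np.abs(x_rec - x_true).max())   # ~0 in this noiseless toy
```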