The US Patent and Trademark Office has just released a patent application by Lytro titled “Light field image capture device having 2D image capture mode”. The application was filed on September 8, 2014 by nine (then-)Lytro employees and describes a dual-mode light field camera that can switch between light field imaging and traditional high-resolution 2D imaging:
Abstract: A dual-mode light field camera or plenoptic camera is enabled to perform both 3D light field imaging and conventional high-resolution 2D imaging, depending on the selected mode. In particular, an active system is provided that enables the microlenses to be optically or effectively turned on or turned off, allowing the camera to selectively operate as a 2D imaging camera or a 3D light field camera.
In order to record colour images, camera sensors typically use a colour filter array consisting of red, green, and blue filters on top of the light-intensity sensing sub-pixels. After recording each sub-pixel’s light intensity, the so-called “demosaic” process combines four monochrome sub-pixels (1x red, 2x green, 1x blue) into a single pixel containing RGB colour information.
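The combination described above can be sketched in a few lines. This is a minimal nearest-neighbour demosaic for an RGGB Bayer mosaic (one of several common filter layouts); real camera pipelines use more sophisticated interpolation, and the function name here is purely illustrative:

```python
import numpy as np

def demosaic_nearest(raw):
    """Nearest-neighbour demosaic of an RGGB Bayer mosaic.

    Each 2x2 cell of monochrome sub-pixels (R, G / G, B) is combined
    into one RGB pixel; the two green samples are averaged.
    raw: 2D array of light intensities with even dimensions (2H, 2W).
    Returns an RGB image of shape (H, W, 3).
    """
    r  = raw[0::2, 0::2]   # red sub-pixels
    g1 = raw[0::2, 1::2]   # green sub-pixels (red rows)
    g2 = raw[1::2, 0::2]   # green sub-pixels (blue rows)
    b  = raw[1::2, 1::2]   # blue sub-pixels
    return np.dstack([r, (g1 + g2) / 2.0, b])

# Example: a single 2x2 mosaic cell collapses to one RGB pixel.
mosaic = np.array([[100.0, 80.0],
                   [ 60.0, 40.0]])
print(demosaic_nearest(mosaic))  # [[[100.  70.  40.]]]
```

Note that each output pixel draws on four sensor sub-pixels, which is why the demosaiced image has a quarter of the sensor’s raw sub-pixel count.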
In microlens-based light field cameras, this demosaicing step can cause a blur effect around the boundaries of objects in the final image. Image Sensors World found a patent application by Samsung that may solve this blur problem: in the application entitled “Photographing device and photographing method for taking picture by using a plurality of microlenses”, Tae-Hee Lee et al. propose moving the colour filters in front of the microlenses (instead of placing them behind the microlenses), creating single-colour sub-images.
Light field technology is making its way into the mainstream, but the production and assembly of some of its components has not quite reached an efficient scale of mass production.
A typical light field sensor consists of an ordinary image sensor and a microlens array (MLA) or printed mask.
In the assembly of light field sensors, one of the most vital processes is the precise adjustment of the MLA's position on the sensor. This adjustment is required for every individual sensor and can thus take a long time. Since the MLA is usually positioned using screws or springs, physical impact on the light field camera may displace the light field sensor's layers.
With today’s light field sensors, extracting 3D stereo images from light field recordings typically results in a lowered effective image resolution – but that limitation may soon be history: Sony has developed a novel sensor design with overlapping pixels in two layers that will allow 3D output without the typical decrease in image resolution. In Sony’s recently granted US Patent No. US20140071244, author Isao Hirota introduces a dual-level microlens array setup in combination with a sensor that consists of two layers of light-sensitive pixel grids – front-facing and back-facing grids that are rotated at, for example, 45 degrees.
The described configuration allows different neighbouring pixels to share the same information from a single microlens while being allocated to either the left or right stereo view, resulting in higher-resolution 3D stereo output from a single-lens, single-sensor device (i.e. a “monocular 3D stereo camera”).
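The basic idea of allocating pixels under each microlens to the two stereo views can be illustrated with a toy sketch. This is a deliberately simplified single-layer model (not Sony’s two-layer, rotated-grid design): we assume each microlens covers two horizontally adjacent pixels, and that the pixel on one side of a lens sees light from the opposite half of the main aperture, yielding the two viewpoints:

```python
import numpy as np

def split_stereo(sensor, pixels_per_lens=2):
    """Toy split of a light field sensor into left/right stereo views.

    Assumes each microlens covers `pixels_per_lens` horizontally
    adjacent pixels. A pixel on the left side under a lens receives
    rays from the right half of the aperture (the right-eye view),
    and vice versa. Simplified illustration only.
    """
    h, w = sensor.shape
    lenses = sensor.reshape(h, w // pixels_per_lens, pixels_per_lens)
    right_view = lenses[:, :, 0]  # left pixel under each lens
    left_view  = lenses[:, :, 1]  # right pixel under each lens
    return left_view, right_view

# One sensor row of 8 pixels covered by 4 microlenses:
row = np.arange(8.0).reshape(1, 8)
left, right = split_stereo(row)
print(right)  # [[0. 2. 4. 6.]]
print(left)   # [[1. 3. 5. 7.]]
```

In this naive scheme each view ends up with half the horizontal resolution; Sony’s two-layer arrangement is aimed precisely at avoiding that loss by letting neighbouring pixels in the two grids share a microlens.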
About a year ago, Nvidia presented a novel head-mounted display that is based on light field technology and offers both depth and refocus capability to the human eye. Their so-called Near-Eye Light Field Display was more of a proof of concept, but it’s exciting new technology that solves a number of existing problems with stereoscopic virtual reality glasses.
Nvidia researcher Douglas Lanman recently gave a talk at Augmented World Expo (AWE2014), in which he explained the background and evolution of head-mounted displays and the history and design of Nvidia’s near-eye light field display prototypes.