Current light field sensors typically consist of an imaging sensor and a separate microlens array, assembled together into an optical system. While this allows the use of common CCD or CMOS sensors, it can also introduce issues where extreme precision is required for optimal imaging conditions, e.g. with microlenses of micrometer-range focal lengths. A mismatch between these separately fabricated elements can degrade image quality.
A new patent application by Jong Eun Kim at SK hynix (Korea) aims to solve these potential issues: it describes a novel light field imaging device in which the microlens array is formed directly on top of the imaging sensor.
One of the most limiting hardware factors in light field photography is the loss of image resolution caused by the use of microlens arrays: in Lytro’s light field cameras, the effective image resolution is a factor of 10 below the sensor resolution (i.e. 4 Megapixel images from the 40 Megaray sensor in the Lytro Illum). Raytrix, on the other hand, has managed to achieve up to 25% of sensor resolution using multi-focus plenoptic arrays.
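The trade-off above can be expressed as simple arithmetic. The following minimal sketch (the function name and the reuse of the 40-unit sensor for the Raytrix comparison are illustrative assumptions, not figures from either vendor) just applies the resolution fractions quoted in the text:

```python
# Effective 2D image resolution for microlens-based light field cameras.
# Fractions from the text: Lytro trades a factor of ~10, Raytrix
# recovers up to ~25% of sensor resolution.

def effective_megapixels(sensor_megapixels, fraction):
    """Effective 2D image resolution after the light field trade-off."""
    return sensor_megapixels * fraction

# Lytro Illum: 40 Megaray sensor, ~1/10 of the rays become image pixels
lytro = effective_megapixels(40, 1 / 10)    # 4.0 Megapixels

# Raytrix multi-focus design on a hypothetical sensor of the same size
raytrix = effective_megapixels(40, 0.25)    # 10.0 Megapixels
```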
In a recent article on SPIE.org, the website of SPIE, the international society for optics and photonics, researchers José Manuel Rodriguez-Ramos and colleagues discuss a new deconvolution approach which allows recovery of full image resolution from a raw light field picture.
The US Patent and Trademark Office has just released a patent application by Lytro titled “Light field image capture device having 2D image capture mode”. The application was filed on September 8, 2014 by nine (then-) Lytro employees, and describes a dual-mode light field camera that can switch between two modes, allowing either light field imaging or traditional high-resolution 2D imaging:
Abstract: A dual-mode light field camera or plenoptic camera is enabled to perform both 3D light field imaging and conventional high-resolution 2D imaging, depending on the selected mode. In particular, an active system is provided that enables the microlenses to be optically or effectively turned on or turned off, allowing the camera to selectively operate as a 2D imaging camera or a 3D light field camera.
In order to record colour images, camera sensors typically use a colour filter array consisting of red, green, and blue filters on top of the light-intensity sensing sub-pixels. After recording each sub-pixel’s light intensity, the so-called “demosaic” process combines four monochrome sub-pixels (1x red, 2x green, 1x blue) into a single pixel containing RGB colour information.
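The combination step can be sketched as follows. This is a deliberate simplification: real demosaicing interpolates a full RGB value at every sub-pixel location, while this sketch only collapses each 2x2 RGGB Bayer cell into one RGB pixel, as described in the paragraph above. The function and variable names are illustrative assumptions:

```python
# Minimal sketch of the "demosaic" step, assuming an RGGB Bayer layout:
# each 2x2 cell holds one red, two green, and one blue sub-pixel.

def demosaic_cell(r, g1, g2, b):
    """Combine one RGGB Bayer cell into a single RGB pixel.
    The two green samples are averaged; red and blue pass through."""
    return (r, (g1 + g2) / 2, b)

# A raw 4x4 sensor patch (RGGB pattern); values are light intensities 0-255
raw = [
    [120, 200, 130, 210],   # R  G  R  G
    [190,  40, 195,  45],   # G  B  G  B
    [125, 205, 135, 215],   # R  G  R  G
    [192,  42, 198,  48],   # G  B  G  B
]

# Each 2x2 cell yields one RGB pixel -> a 2x2 colour image
rgb = [
    [demosaic_cell(raw[y][x], raw[y][x + 1], raw[y + 1][x], raw[y + 1][x + 1])
     for x in range(0, 4, 2)]
    for y in range(0, 4, 2)
]
```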
In microlens-based light field cameras, this demosaicing step can produce a blur effect around the boundaries of objects in the final image.
Image Sensors World found a patent application by Samsung that addresses this blur problem: in the patent application entitled “Photographing device and photographing method for taking picture by using a plurality of microlenses”, authors Tae-Hee Lee et al. propose moving the colour filter in front of the microlenses (instead of placing it behind them), creating single-colour sub-images.
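The idea of single-colour sub-images can be illustrated with a short sketch. Here the Bayer colour is assigned per microlens rather than per sub-pixel, so every sub-pixel behind a given microlens records the same colour; the 2x2 repeating pattern and the function names are assumptions for illustration, not details from the Samsung application:

```python
# Hypothetical sketch: the colour filter sits in front of the microlens
# array, so the Bayer colour applies to a whole sub-image at once.

BAYER = [["R", "G"], ["G", "B"]]  # repeating 2x2 colour pattern

def subimage_colour(lens_x, lens_y):
    """Colour of the entire sub-image behind microlens (lens_x, lens_y)."""
    return BAYER[lens_y % 2][lens_x % 2]

# Every sub-pixel behind microlens (0, 0) sees red, behind (1, 0) green, ...
colours = [[subimage_colour(x, y) for x in range(4)] for y in range(2)]
```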
Light field technology is making its way into the mainstream, but the production and assembly of some of its components has not quite reached an efficient scale of mass production.
A typical light field sensor consists of an ordinary image sensor and a microlens array (MLA) or printed mask.
In the assembly of light field sensors, one of the most critical processes is the precise adjustment of the MLA’s position on the sensor. This adjustment is required for every individual sensor and can thus be time-consuming. Since the MLA is usually positioned using screws or springs, physical impact on the light field camera may displace the light field sensor’s layers.
With patent application US 20140183334 A1 “Image sensor for light field device and manufacturing method thereof”, recently discovered by Image Sensors World, Visera Technologies proposes an integrated manufacturing method for light field sensors: authors Wei-Ko Wang and colleagues describe a system where two layers of microlenses (and an intermediate spacer layer) are formed directly on the image sensor using semiconductor processes.