Apr 08

MIT Boosts Efficiency of Lensless Single Pixel Cameras by Factor 50

Original image (top), single-pixel camera image (50 and 2500 exposures, respectively; second row), and reconstructions from ultrafast sensing (third and fourth rows, 50 exposures using 100 and 20 picosecond sensing, respectively). Image: Satat et al., 2017.

Today’s conventional cameras require a set of highly precise lenses and a large array of individual light sensors. This general blueprint limits the application of cameras for new uses, e.g. in ever-thinner smartphones or in spectra outside the visible light range.
To overcome these limitations and completely rethink the basics of imaging, researchers from Rice University, Heriot-Watt University and the University of Glasgow (among others) have recently developed “compressive sensing” concept cameras that use only a single pixel and no lens whatsoever to record pictures via computational imaging. The trick lies in the light source, which illuminates the scene with a series of defined black-and-white patterns (a coded mask). From the changes in measured light intensity across many exposures, the single-pixel camera can then infer the positions of objects and patterns in the scene.
Until recently, such single-pixel systems required a large number of exposures, on the order of 2000 or more. Now, Guy Satat and colleagues from MIT‘s Camera Culture lab have combined the single-pixel camera with another cutting-edge technology: ultrafast femto- and picosecond light sensors. By looking not only at the intensity changes across masked illumination bursts but also within individual bursts, the researchers are able to break up the signal into light reflected from different distances in the scene. This brings the number of required exposures down from 2500 to just 50 in the example outlined in the paper and video. By moving from a single pixel to several light-sensing pixels arranged in a defined sensor pattern, this number can be reduced even further without loss of image quality. Continue reading
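The masked-illumination principle is easy to sketch in a few lines of NumPy. The following is an illustrative toy simulation, not the authors' code: a random binary pattern stands in for the patterned light source, the "camera" records a single summed intensity per exposure, and with enough exposures the scene can be recovered by ordinary least squares (compressive variants get by with far fewer exposures by exploiting sparsity).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 8x8 scene with one bright square, flattened to a vector.
n = 8
scene = np.zeros((n, n))
scene[2:5, 3:6] = 1.0
x = scene.ravel()

# One exposure = one binary illumination pattern + ONE recorded intensity.
num_exposures = 2 * n * n  # overdetermined here; compressive setups use fewer
A = rng.integers(0, 2, size=(num_exposures, n * n)).astype(float)
y = A @ x  # the single pixel only ever sees these summed intensities

# Recover the full scene from the intensity record alone.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
recovered = x_hat.reshape(n, n)
```

The time-resolved variant in the paper effectively adds a second dimension to `y` (arrival time within each burst), which is what lets the exposure count drop so sharply.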

Jun 26

MIT Camera Culture: Simple, Cheap Method for Light Field Photography at Full Sensor Resolution

At its current stage of development, light field photography (based on microlens arrays) forces a compromise between spatial information and resolution: the more refocus or perspective a camera is required to provide, the more of its sensor resolution is sacrificed. In the Lytro Light Field Camera, an 11-Megapixel sensor produces final pictures of only 1.1 Megapixels, so just about 10 % of the sensor resolution makes it into the final image.

MIT Camera Culture: Simple, cheap method for Light Field Photography at Full Sensor Resolution (picture: Kshitij Marwah)

As reported previously, MIT‘s Camera Culture group has come up with a new method to capture light fields which is both cheaper and more effective. In a new article published by MIT News, the researchers explain what their system, named “Focii”, is capable of:

At this summer’s Siggraph — the major computer graphics conference — they’ll present a paper demonstrating that Focii can produce a full, 20-megapixel multiperspective 3-D image from a single exposure of a 20-megapixel sensor.

Continue reading

Jun 06

Bell Labs Creates Compressive Sensing Camera: No Lens, Always in Focus

Bell Labs Creates Compressive Sensing Camera: No Lens, Always in Focus (picture: Technology Review)

Traditionally, all cameras contain an optical lens and some sort of imaging sensor (analog or digital). With the help of sophisticated computing, these basics may soon be reduced to just a sensor.

According to a new report on Technology Review, Bell Labs (a research organization within Alcatel-Lucent) has developed a new type of camera which makes lenses unnecessary. Instead, the new camera prototype sees the world through a “series of transparent openings” (aperture assembly). The camera compares the individual images coming through each aperture (think “coded mask“), and uses the differences between them to reconstruct the final image.
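Reconstructing a scene from fewer aperture measurements than pixels is the core of compressive sensing. A standard textbook solver for this is iterative soft-thresholding (ISTA); the sketch below uses it on a synthetic sparse scene and should not be read as Bell Labs' actual algorithm, and the measurement matrix, sizes and parameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# A sparse 16x16 scene: a handful of bright pixels on black.
n = 16
x = np.zeros(n * n)
x[rng.choice(n * n, size=8, replace=False)] = 1.0

# Fewer aperture patterns (measurements) than pixels: 96 vs. 256.
m = 96
A = rng.standard_normal((m, n * n)) / np.sqrt(m)
y = A @ x

# ISTA: gradient step on the data-fit term, then soft-threshold
# to enforce sparsity -- this is what fills in the "missing" pixels.
step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 0.005
x_hat = np.zeros_like(x)
for _ in range(3000):
    x_hat = x_hat - step * (A.T @ (A @ x_hat - y))
    x_hat = np.sign(x_hat) * np.maximum(np.abs(x_hat) - step * lam, 0.0)

rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

The "always in focus" property follows from the same logic: with no lens, there is no focal plane, and sharpness is a product of the reconstruction rather than the optics.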

Continue reading

May 06

Compressive LightField Photography enables Higher Resolution LightFields in a Single Image

Today’s LightField technology uses one of two methods to record a LightField: it either reconstructs a single low-resolution LightField image (e.g. using microlens arrays or coded masks), or requires several individual pictures to be taken and combined into a high-resolution LightField (e.g. using camera gantries or coded apertures).
In a recent publication, Kshitij Marwah and colleagues introduced a new LightField camera prototype that combines the advantages of these two methods to reconstruct higher-resolution LightFields from a single, coded image. To do so, they co-designed the prototype camera around both main aspects of LightField technology: camera optics and computational processing.
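The payoff of recording a LightField is easiest to see in its simplest computational operation, synthetic refocus. The sketch below is a hypothetical shift-and-add over a 3x3 grid of sub-aperture views (not the paper's coded-mask reconstruction): shifting each view in proportion to its aperture position and averaging brings one depth into sharp focus while blurring everything else.

```python
import numpy as np

# Synthetic 3x3 grid of sub-aperture views: a point at a given depth
# shifts by depth * (u, v) pixels between views (u, v = aperture coords).
size, depth = 32, 2  # toy disparity of 2 px per aperture step
views = {}
for u in (-1, 0, 1):
    for v in (-1, 0, 1):
        img = np.zeros((size, size))
        img[16 + depth * v, 16 + depth * u] = 1.0  # shifted point source
        views[(u, v)] = img

def refocus(views, alpha):
    """Shift each view back by alpha*(u, v) and average (synthetic refocus)."""
    acc = np.zeros((size, size))
    for (u, v), img in views.items():
        acc += np.roll(np.roll(img, -alpha * v, axis=0), -alpha * u, axis=1)
    return acc / len(views)

sharp = refocus(views, depth)  # focused at the point's depth: energy aligns
blurred = refocus(views, 0)    # focused elsewhere: energy stays spread out
```

Microlens cameras like the Lytro capture these sub-aperture views directly at the cost of resolution; the coded-mask approach instead recovers them computationally from a single full-resolution sensor image.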

Continue reading