Tech giant Toshiba first appeared in the “refocus market” several months ago, when news got out about a tiny light field camera module for smartphones and tablets in development. According to the original report, the prototype was scheduled for mass production “by the end of fiscal 2013”.
Last week, Toshiba officially announced a smartphone camera module with refocus capability, but it is quite different from the product described earlier this year: instead of a single 1 cm² camera module with an 8-13 megapixel sensor, 500,000 microlenses and an effective resolution of 2 megapixels (6 MP in the second prototype), the new prototype, dubbed TCM9518MD, consists of two 5 megapixel cameras, a Large Scale Integration (LSI) chip and no microlenses at all.
In an official press release, Toshiba announced that the dual-camera module will offer software refocus and other features, but no 3D functionality. The module is priced at 5,000 yen (approx. 52 USD / 38 EUR). Working samples will be available in January 2014, and mass production is set for April 2014.
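Toshiba hasn’t published details of its algorithms, but refocusing with two cameras typically starts from a depth map computed by stereo matching between the two views. A minimal sketch in Python/NumPy (with invented data and parameters throughout, not Toshiba’s method) of the block-matching step on a single scanline:

```python
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=8):
    """Estimate per-pixel disparity along a 1-D scanline by block matching.

    For each block in the left scanline, find the horizontal shift into
    the right scanline with the lowest sum of absolute differences (SAD).
    """
    n = len(left)
    half = block // 2
    disp = np.zeros(n, dtype=int)
    for i in range(half, n - half - max_disp):
        patch = left[i - half:i + half + 1]
        costs = [np.abs(patch - right[i - half + d:i + half + 1 + d]).sum()
                 for d in range(max_disp + 1)]
        disp[i] = int(np.argmin(costs))
    return disp

# Synthetic test pair: the second scanline is the first shifted by 3 pixels,
# i.e. a uniform disparity of 3 everywhere.
rng = np.random.default_rng(0)
left = rng.random(64)
right = np.roll(left, 3)
disparity = block_match_disparity(left, right)
```

Once a depth map like this exists, refocusing is a matter of blurring each pixel according to its estimated depth.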
At this year’s SIGGRAPH conference, currently taking place in Anaheim, CA, tech blog Engadget spotted an unusual participant in the “Emerging Technologies” section. Douglas Lanman and David Luebke from the research labs of graphics processing specialist Nvidia presented what may be considered a prototype of the future of Virtual Reality: a near-eye light field display.
But what does it do?
Microlens arrays, mounted just in front of the high-resolution displays, convert individual pixels into light rays, creating a light field directly in front of the eye. The viewer is then able to refocus at multiple depths within the scene.
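The pixel-to-ray conversion is simple geometry. A minimal sketch (all lens and pixel dimensions below are assumed for illustration, not the prototype’s actual specs): each microlens sits one focal length above a small patch of display pixels, and a pixel’s offset from the lenslet axis determines its ray direction.

```python
import math

f_mm = 3.3              # assumed lenslet focal length
pitch_mm = 1.0          # assumed lenslet pitch
pixels_per_lens = 5     # assumed pixels covered by one lenslet
pixel_mm = pitch_mm / pixels_per_lens

# A pixel at offset x from the lenslet's optical axis emits a ray at
# angle theta = atan(x / f), so the pixels under one lenslet sample a
# small fan of ray directions -- together they form a light field.
ray_angles = []
for p in range(pixels_per_lens):
    x = (p - pixels_per_lens // 2) * pixel_mm   # offset from the lens axis
    ray_angles.append(math.degrees(math.atan2(x, f_mm)))
```

With these assumed numbers, each lenslet emits a symmetric fan of rays roughly ±7 degrees around its axis; the eye then focuses different subsets of those rays to different depths.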
Traditionally, every camera contains an optical lens and some sort of imaging sensor (analog or digital). With the help of sophisticated computing, these basics may soon be reduced to just the sensor.
According to a new report on Technology Review, Bell Labs (a research organization within Alcatel-Lucent) has developed a new type of camera that makes lenses unnecessary. Instead, the new camera prototype sees the world through a “series of transparent openings” (an aperture assembly). The camera compares the individual images coming through each aperture (think “coded mask”), and uses the differences between them to reconstruct the final image.
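Bell Labs hasn’t disclosed its exact reconstruction method, but the idea of comparing the light arriving through several coded openings can be sketched as a linear inverse problem. A toy example in Python/NumPy (a 1-D “scene” and random binary masks, both invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

n = 16    # unknown scene values (a tiny 1-D "scene")
m = 24    # measurements: one per aperture pattern

scene = rng.random(n)

# Each row is one "transparent opening" pattern: a random binary mask
# that either passes (1) or blocks (0) the light from each scene point.
masks = rng.integers(0, 2, size=(m, n)).astype(float)

# The sensor records a single weighted sum of the scene per mask.
measurements = masks @ scene

# Reconstruction: with more measurements than unknowns, ordinary least
# squares inverts the coded measurements back into the scene.
recovered, *_ = np.linalg.lstsq(masks, measurements, rcond=None)
```

The real camera exploits compressed sensing to get away with far fewer measurements than unknowns by assuming the scene is sparse in some basis; the overdetermined version above just shows the principle.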
Today’s LightField technology records a LightField in one of two ways: it either reconstructs a low-resolution LightField from a single image (e.g. using microlens arrays or coded masks), or combines several individual pictures into a high-resolution LightField (e.g. using camera gantries or coded apertures).
In a recent publication, Kshitij Marwah and colleagues introduced a new LightField camera prototype that combines the advantages of these two methods to reconstruct higher-resolution LightFields from a single coded image. To do so, they co-designed the prototype camera around both main aspects of LightField technology: the camera optics and the computational processing.
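The heart of such a design is the measurement model: a coded mask turns the many angular views of a LightField into a single coded sensor image, which is later inverted computationally. A toy forward model in Python/NumPy (tiny dimensions and a random code, chosen only for illustration; the sparsity-based recovery step is omitted):

```python
import numpy as np

rng = np.random.default_rng(7)

n_spatial, n_angular = 8, 4   # tiny light field: 8 pixels x 4 views

# Toy light field l(v, x): one row per angular view.
light_field = rng.random((n_angular, n_spatial))

# A coded mask between lens and sensor modulates each angular view with
# a shifted copy of one random code before the views sum on the sensor:
# a single coded image of a higher-dimensional quantity.
code = rng.random(n_spatial)
coded_image = np.zeros(n_spatial)
for v in range(n_angular):
    coded_image += np.roll(code, v) * light_field[v]

# The same model written as a linear system y = Phi @ l: each sensor
# pixel is one mask-weighted sum over the angular views. Recovery then
# inverts this underdetermined system using a sparsity prior (the paper
# uses a learned dictionary), which this sketch leaves out.
Phi = np.hstack([np.diag(np.roll(code, v)) for v in range(n_angular)])
```

Because the system is underdetermined (8 measurements for 32 unknowns here), the prior carried by the computational side is what makes the co-design work.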
3D displays are slowly moving into the mainstream, but most of the technologies in use today require viewers to wear special 3D glasses, or to watch from a small, precisely defined optimum viewpoint. More advanced 3D displays use eye tracking and create a stereoscopic effect by sending a different image to each eye.
David Fattal and colleagues from HP Laboratories in Palo Alto, California, have developed a new approach to glasses-free 3D displays that brings a number of improvements: their prototype displays use multi-directional diffractive backlight technology, which makes them particularly well-suited for mobile devices (e.g. smartphones, tablets or watches). They are high-resolution, very thin (<1 mm), require no eye tracking, and offer a very wide viewing zone (up to 180 degrees) at observation distances of up to a metre. The work was recently published in Nature.
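The Nature paper covers the full optics, but the viewing principle can be sketched simply: the backlight steers many views into evenly spaced directional zones, and a viewer’s two eyes land in different zones, which yields stereo without glasses or eye tracking. A toy calculation (the view count and zone layout below are assumed for illustration, not HP’s actual figures):

```python
n_views = 64                   # assumed number of directional views
zone_deg = 180.0 / n_views     # angular width of one view zone

def view_index(theta_deg):
    """Index of the directional view seen from angle theta (0..180 deg)."""
    return min(int(theta_deg // zone_deg), n_views - 1)
```

With zones under 3 degrees wide, two eyes a few degrees apart (as seen from a handheld device) fall into different zones and therefore receive different images.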