Smartphone makers around the world are racing to release the world’s first light-field-enabled smartphone. According to some reports, manufacturers such as Apple, HTC, and Nokia, as well as MIT, are working on further miniaturizing the technology to fit into mobile devices. Meanwhile, companies like Pelican and Toshiba are finalizing their camera designs for third-party licensing.
Now, the US Patent and Trademark Office has granted Apple a new patent describing a “digital camera including refocusable imaging mode adaptor”, and it comes with an interesting addition to the existing light field solutions by Lytro or Raytrix.
For the past year, the Fraunhofer Digital Cinema Alliance has been researching new, cheaper technology and workflow solutions for 3D filmmaking. In a recent press release, the Fraunhofer Institute for Integrated Circuits (IIS) announced some of the newest innovations from the Spatial-AV project. There’s a new, miniaturized 360-degree ultra-high definition panoramic video camera rig, a prototype microphone management solution for spatial audio recording, and – most interesting to us – a “Light-field Media Production System” that is touted as being “the most innovative lightfield camera recording system to date”.
Let’s have a closer look at Fraunhofer’s setup:
Today’s light field processing algorithms have mostly been tailored to relatively low image resolutions in the range of a few megapixels. That means that even as sensor resolutions increase, light field technology remains effectively limited by resolution. The analysis of light fields at high spatio-angular resolution – so-called “gigaray light fields” – remains a technological challenge due to the sheer computing power it requires.
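To get a feel for why “gigaray” is the right word, a quick back-of-the-envelope calculation helps. A 4D light field L(x, y, u, v) stores one ray per combination of spatial pixel (x, y) and viewpoint (u, v), so ray counts multiply quickly. The numbers below are illustrative, not taken from the Disney paper:

```python
def ray_count(spatial_w, spatial_h, angular_u, angular_v):
    """Total number of rays in a 4D light field L(x, y, u, v):
    one ray per spatial pixel per viewpoint."""
    return spatial_w * spatial_h * angular_u * angular_v

# Illustrative example: 4096x4096 pixels per view, captured from an
# 8x8 grid of viewpoints, already adds up to 2**30 rays -- a gigaray.
rays = ray_count(4096, 4096, 8, 8)
print(rays)  # 1073741824

# At 3 bytes per ray (8-bit RGB), that is 3 GiB of raw data
# for a single light field frame.
print(rays * 3 / 2**30)  # 3.0
```

Even modest per-view resolutions, multiplied across dozens of viewpoints, land in the gigaray range – which is why naive processing quickly becomes impractical.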
Researchers at Disney Research in Zürich, Switzerland, have come up with a new, faster way of processing such light fields. Their secret: ignore some of today’s established practices in image-based reconstruction, and try something different.
Events such as concerts, public performances or weddings have two things in common: virtually everybody is taking pictures, and it can be quite unsatisfying to be in the wrong spot. What if you could simply switch your perspective to that of someone in the front row to get the perfect view, or even move around the scene as you like?
CrowdCam is a smartphone app concept by Aydin Arpa (MIT) and colleagues, designed to do exactly that and more, using everybody’s smartphone cameras. The app, which is currently in development, compares photos taken at the event and estimates the relative angles between camera views. It then arranges these pictures according to their relative location in the scene, letting users swipe between different points of view while stabilizing the transitions and keeping the image centered on the main object of interest.
In other words, the app creates a collaborative network of cameras and views, allowing you to find the best view and virtually move around in any scene.
At this year’s SIGGRAPH conference, currently taking place in Anaheim, CA, tech blog Engadget spotted an unusual participant in the “Emerging Technologies” section. Douglas Lanman and David Luebke from the research labs at graphics processing specialist Nvidia presented what may be considered a prototype of the future of Virtual Reality: a near-eye light field display.
But what does it do?
Microlens arrays, mounted just in front of the high-resolution displays, convert pixels into individual light rays, creating a light field directly in front of the eye. The viewer is thus able to refocus at multiple depths into the scene.
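The pixel-to-ray conversion can be sketched with a simple pinhole-lenslet approximation: each pixel sitting some offset away from its microlens’s optical axis emits a ray through the lenslet center, tilted in proportion to that offset. The focal length and offset below are illustrative values, not Nvidia’s actual display parameters:

```python
import numpy as np

def pixel_to_ray(pixel_offset_mm, lenslet_focal_mm):
    """Direction of the ray produced by a pixel displaced (dx, dy) mm
    from the optical axis of its microlens, treating the lenslet as a
    pinhole at distance `lenslet_focal_mm` in front of the pixel.
    A pixel displaced toward +x emits a ray tilted toward -x."""
    dx, dy = pixel_offset_mm
    d = np.array([-dx, -dy, lenslet_focal_mm])
    return d / np.linalg.norm(d)  # unit-length ray direction

# Illustrative: a pixel 0.05 mm off-axis under a 3.3 mm lenslet
# yields a ray tilted roughly 0.87 degrees off the display normal.
ray = pixel_to_ray((0.05, 0.0), 3.3)
print(np.degrees(np.arctan2(-ray[0], ray[2])))
```

Because each lenslet covers many pixels, the display can emit several distinct ray directions per lenslet position – which is exactly the angular information the eye needs in order to refocus at different depths.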