Lytro recently took their Immerge VR Camera to the next generation, with a larger, planar camera array for easier VR video production. Their most heavily promoted feature is recording content with six degrees of freedom, meaning that you can not only rotate your view, but actually move your head around in space (within limits).
At the recent Tribeca Film Festival, the company presented a first VR video experience titled “Hallelujah”, featuring a performance of Leonard Cohen’s popular song and recorded with the second-generation Immerge. Lytro’s “Making Of” video not only hints at what VR viewers will see in the video, but also gives some insight into the Immerge production controls and interfaces: Continue reading
Back in August 2016, Lytro unveiled its first Virtual Reality experience, “Moon” (see below), to show off the capabilities of Immerge, the company’s groundbreaking, high-end production camera that records light fields for virtual reality. While it was reportedly an impressive experience for the VR viewer, it also had its limitations (especially with moving objects in the recorded scene).
Now, Ben Lang from RoadToVR talks about a recent visit to Lytro, where he saw the new and improved Immerge prototype. Continue reading
Today’s conventional cameras require a set of highly precise lenses and a large array of individual light sensors. This general blueprint limits the use of cameras in new applications, e.g. in ever-thinner smartphones, or in spectra outside the visible light range.
To overcome these limitations and completely rethink the basics of imaging, researchers from Rice University, Heriot-Watt University and the University of Glasgow (among others) have recently developed a “compressive sensing” concept camera which uses only a single pixel and no lens whatsoever to record pictures using computational imaging. The trick lies in the light source, which illuminates the scene with a series of defined black-and-white patterns (coded masks). Based on the changes in resulting light intensity across many exposures, the single-pixel camera can then infer the positions of objects and patterns in the scene.
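To illustrate the measurement principle (a minimal sketch of the general idea, not the researchers’ actual pipeline; the scene, mask patterns and solver are all assumptions for the demo), here is a toy single-pixel simulation in Python: every coded mask yields one scalar intensity reading, and with as many masks as pixels the image can be recovered by solving a linear system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: a 16x16 image with one bright rectangle (256 unknown pixels).
N = 16
scene = np.zeros((N, N))
scene[5:10, 6:12] = 1.0
x_true = scene.ravel()

# Each exposure projects one random binary mask onto the scene; the single
# pixel records the total reflected intensity, i.e. the dot product <mask, scene>.
M = N * N                                        # one exposure per unknown
masks = rng.integers(0, 2, size=(M, N * N)).astype(float)
y = masks @ x_true                               # M single-pixel measurements

# Recover the image by solving the linear system y = masks @ x.
x_hat, *_ = np.linalg.lstsq(masks, y, rcond=None)
print("max reconstruction error:", np.abs(x_hat - x_true).max())
```

With fewer exposures than pixels the system becomes underdetermined, and reconstruction has to lean on sparsity priors, which is where the “compressive” in compressive sensing comes in.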
Until recently, such single-pixel systems required a large number of exposures, in the range of 2000 or more. Now, Guy Satat and colleagues from MIT’s Camera Culture lab have combined the single-pixel camera with another cutting-edge technology: ultrafast femto- or picosecond light sensors. By looking not only at the intensity changes across masked illumination bursts, but also within individual bursts, the researchers are able to break up the signal into light reflected from different distances in the scene. This brings the number of required exposures down from 2500 to just 50 in the example outlined in the paper and video. Moving from a single pixel to several light-sensing pixels placed in a defined sensor pattern, this number can be reduced further without losses in image quality. Continue reading
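To sketch why time resolution helps (a strongly simplified model, not the actual method from the paper; the scene, mask type and sparse solver are invented for the demo): if the detector time-bins each return, light from different depths arrives in different bins, so every depth slice becomes its own, much sparser single-pixel problem that a sparse-recovery solver can handle with far fewer masks:

```python
import numpy as np

rng = np.random.default_rng(1)

def ista(A, y, lam=None, iters=500):
    """Iterative shrinkage-thresholding for min ||A x - y||^2 + lam * ||x||_1."""
    if lam is None:
        lam = 0.05 * np.abs(A.T @ y).max()
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * (A.T @ (y - A @ x))                       # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft threshold
    return x

N, T, M = 16, 16, 50        # 16x16 scene, 16 depth/time bins, only 50 exposures

# Toy assumption: each pixel reflects from exactly one depth bin, so every
# depth slice x_t contains only ~16 of the 256 pixels (it is sparse).
depth = rng.integers(0, T, size=N * N)
albedo = rng.uniform(0.5, 1.0, size=N * N)
slices = np.zeros((T, N * N))
slices[depth, np.arange(N * N)] = albedo

# +/-1 patterns (realizable in hardware as the difference of two binary masks).
masks = rng.choice([-1.0, 1.0], size=(M, N * N))

# A time-resolved detector records one histogram per mask: y[i, t] = <mask_i, x_t>,
# because reflection depth maps directly to photon arrival time.
y = masks @ slices.T                                             # shape (M, T)

# Approximately recover each sparse depth slice from just M = 50 exposures.
recon = np.stack([ista(masks, y[:, t]) for t in range(T)])
print("mean abs error:", np.abs(recon - slices).mean())
```

Summing the recovered slices gives the conventional 2D image back, while the per-slice sparsity is what lets a few dozen exposures do the work of thousands.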
Virtual Reality and Augmented Reality (or Mixed Reality) headsets have evolved quite a bit over the last few years. Improvements in resolution, lag and other factors have led to new, extremely immersive systems such as the HTC Vive. However, one missing feature is still holding back the technology:
Generally speaking, most of today’s headsets consist of a two-dimensional display placed at a fixed distance from the user’s eyes. This creates a conflict for our eyes and brain, which in the real world are used to a linked adjustment of the angle between the eyes (“vergence”) and the focal plane (“accommodation”). Recent proof-of-concept systems use up to three display planes, giving us discrete near, mid-range and far layers to focus on, but for a better, more immersive 3D experience we’ll need an almost continuous focal range.
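To put numbers on the mismatch (my own illustration, using a typical interpupillary distance and an assumed fixed screen distance, not figures from the article): vergence demand is the triangulation angle between the eyes, while accommodation demand is the reciprocal of the focal distance in diopters. On a fixed-focus headset the eyes must keep focusing at the screen distance no matter where a virtual object sits:

```python
import math

IPD = 0.063           # interpupillary distance in meters (typical adult)
SCREEN_DIST = 1.5     # assumed fixed focal distance of the headset optics (m)

def vergence_deg(d):
    """Angle between the two eyes' lines of sight for a target at distance d."""
    return math.degrees(2 * math.atan(IPD / (2 * d)))

def accommodation_diopters(d):
    """Focusing demand for a target at distance d, in diopters (1/m)."""
    return 1.0 / d

for target in (0.5, 1.5, 4.0):
    print(f"target {target:>4} m | vergence {vergence_deg(target):5.2f} deg"
          f" | eye focuses at {accommodation_diopters(SCREEN_DIST):.2f} D"
          f" (should be {accommodation_diopters(target):.2f} D)")
```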
The most promising solution to this problem is light field technology: Nvidia’s light field display prototype, for instance, has successfully shown (though at low resolution) that it is possible to construct a light field image that places multiple objects at different focal planes or virtual distances. The Nvidia prototype uses a microlens array, much like the ones in light field cameras from Lytro or Raytrix. Magic Leap is another company working on light field technology. While the company has teased a head-mounted light field display on several occasions, it has yet to explain how exactly its system works, let alone present a working prototype to the public.
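As a rough sketch of how a microlens-array display produces a light field (a textbook simplification with invented parameters, not Nvidia’s actual design): with the panel sitting at the lenslets’ focal distance, each pixel behind a lenslet is collimated into a beam whose direction depends on the pixel’s offset from the lenslet’s optical axis, so the group of pixels under each lenslet encodes a small fan of ray directions:

```python
import math

# Assumed toy parameters (not from any real prototype).
LENS_PITCH = 1.0e-3       # lenslet spacing: 1 mm
FOCAL_LEN = 3.3e-3        # lenslet focal length: 3.3 mm
PIXEL_PITCH = 0.1e-3      # display pixel spacing: 0.1 mm -> 10 pixels per lenslet

def ray_angle_deg(pixel_index_under_lens):
    """Emission angle of the beam from a pixel at the given offset (in pixels)
    from its lenslet's optical axis; with the panel at the focal plane, the
    lenslet collimates each pixel into a single direction."""
    offset = pixel_index_under_lens * PIXEL_PITCH
    return math.degrees(math.atan(offset / FOCAL_LEN))

# The 10 pixels behind one lenslet cover a small fan of ray directions:
for i in range(-5, 5):
    print(f"pixel offset {i:+d}: ray angle {ray_angle_deg(i):+6.2f} deg")
```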
Now, another company has entered the light field space. Head-mounted display maker Avegant has announced a new display that uses “a new method to create light fields” to simultaneously display multiple objects at different focal planes. While all digital light fields have discrete focal planes, according to Avegant CTO Edward Tang, the new technology can interpolate between these to create a “continuous, dynamic focal plane”. “This is a new optic that we’ve developed that results in a new method to create light fields,” says Tang. Continue reading
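Avegant hasn’t disclosed how its interpolation works; one established technique for approximating in-between depths (known as depth-weighted blending or “depth-fused 3D” in the display literature, used here purely as an illustration) splits an object’s luminance between the two nearest focal planes in proportion to its depth between them:

```python
def blend_between_planes(target_diopters, plane_a, plane_b):
    """Depth-weighted blending: split luminance between two focal planes
    (given in diopters) so the fused image appears at an intermediate depth.
    Returns (weight_a, weight_b), which sum to 1."""
    w_b = (target_diopters - plane_a) / (plane_b - plane_a)
    w_b = min(max(w_b, 0.0), 1.0)   # clamp to the span between the planes
    return 1.0 - w_b, w_b

# Three planes at 3.0 D (0.33 m), 1.0 D (1 m), 0.2 D (5 m); place an object
# at 0.6 D (~1.7 m), halfway between the mid and far planes:
print(blend_between_planes(0.6, plane_a=1.0, plane_b=0.2))  # -> (0.5, 0.5)
```

Viewed along the shared optical axis, the fused image appears to lie at an intermediate depth, which is one plausible way to turn a handful of discrete planes into a quasi-continuous focal range.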
Every now and then, somebody comes up with a radically different approach to an established technology. One such example is the Flexible Sheet Camera that researchers at the Laboratory for Unconventional Electronics at Columbia University have developed. Rather than a little handheld box with a single lens and some sort of zoom optics, this super-thin concept camera lets you adjust the field of view simply by bending it: