The LightField is defined as the set of all light rays at every point in space, travelling in every direction. Described naively this is five-dimensional data (three coordinates for position, two for direction), but because a ray's radiance does not change along its path through free space, it can be reduced to a 4D representation. The concept of the LightField was introduced to computer graphics and vision in the 1990s to solve common problems in those fields.
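To make the 4D idea concrete, here is a minimal sketch of how a light field is commonly stored in software: a 4D array indexed by viewpoint and by pixel. The array sizes and variable names are purely illustrative.

```python
import numpy as np

# A light field is often stored as a 4D array L[u, v, s, t]:
# (u, v) index the viewpoint, (s, t) the pixel within each view.
# The sizes below are hypothetical.
U, V, S, T = 9, 9, 64, 64           # 9x9 viewpoints, 64x64 pixels each
L = np.zeros((U, V, S, T))

# Fixing one viewpoint (u, v) gives an ordinary 2D photograph:
center_view = L[U // 2, V // 2]     # shape (64, 64)

# Fixing one pixel (s, t) gives the same scene point as seen from
# every viewpoint -- the basis for parallax and depth estimation.
pixel_across_views = L[:, :, 10, 10]  # shape (9, 9)
```

Slicing the array along different axes is what makes the later tricks (refocusing, perspective shifts) possible.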
How Does LightField Photography Work?
Traditional cameras – analog or digital – record only a two-dimensional representation of a scene, using the two available dimensions (width and height; pixels along the x and y axes) of the film or sensor.
In contrast, LightField cameras (also called plenoptic cameras) place a microlens array just in front of the imaging sensor. Such arrays consist of many microscopic lenses (often on the order of 100,000) with tiny focal lengths (as low as 0.15 mm), which split up what would otherwise have become a single 2D pixel into individual light rays just before they reach the sensor. The resulting raw image is a mosaic of as many tiny sub-images as there are microlenses.
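The decoding of such a raw mosaic can be sketched as a simple array rearrangement, assuming an idealized sensor where each microlens covers an exact K×K block of pixels (real cameras need calibration for lens rotation and hexagonal grids; all sizes here are hypothetical).

```python
import numpy as np

K = 5                        # pixels behind each microlens (hypothetical)
NY, NX = 40, 60              # microlens grid: 40 x 60 lenses
raw = np.arange(NY * K * NX * K, dtype=float).reshape(NY * K, NX * K)

# Split the raw sensor image into per-microlens sub-images:
# lf[my, mx] is the K x K tiny image under microlens (my, mx).
lf = raw.reshape(NY, K, NX, K).swapaxes(1, 2)   # shape (NY, NX, K, K)

# Regrouping by pixel position *under* each lens instead yields
# "sub-aperture" views: views[i, j] is a full NY x NX image seen
# through aperture position (i, j), each with a slightly shifted
# perspective.
views = lf.transpose(2, 3, 0, 1)                # shape (K, K, NY, NX)
```

Each of the K×K sub-aperture views is an ordinary photograph taken from a marginally different position within the main lens aperture.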
Here’s the fascinating part: every sub-image differs a little from its neighbours, because the light rays are diverted slightly differently depending on the corresponding microlens’s position in the array.
Next, sophisticated software finds matching light rays across all these sub-images. Once it has collected (1) the matching light rays, (2) their positions in the microlens array and (3) their positions within each sub-image, this information can be used to reconstruct a sharp 3D model of the scene.
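The core of that matching step is finding how far a feature has shifted between neighbouring views – the disparity, which encodes depth. Here is a toy 1D illustration, assuming a simple sum-of-squared-differences search; the function and signal are purely illustrative, not any real camera's pipeline.

```python
import numpy as np

def disparity(a, b, max_shift):
    """Return the integer shift of signal b relative to a that best
    aligns them (smallest sum of squared differences)."""
    best, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        err = np.sum((a - np.roll(b, -s)) ** 2)
        if err < best_err:
            best, best_err = s, err
    return best

a = np.zeros(32); a[10] = 1.0   # a feature at position 10 in one view
b = np.roll(a, 3)               # the same feature, shifted by 3 pixels
                                # in a neighbouring view
```

`disparity(a, b, 5)` recovers the 3-pixel shift; a large disparity means the scene point is close to the camera, a small one means it is far away.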
Using this model, you have all of the LightField capabilities at your fingertips: you can choose which parts of the image should be in focus or out of focus, adjust the depth of field, bring everything into focus at once, shift the perspective (parallax) a little, and more. You can even use the parallax data to create 3D pictures from a single LightField lens and capture.
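Refocusing after the fact is often explained with the classic shift-and-add scheme: shift each sub-aperture view in proportion to its offset from the central aperture position, then average. Points at the chosen depth line up and come out sharp; everything else blurs. This is a minimal sketch with integer-pixel shifts (real implementations interpolate sub-pixel shifts); names and parameters are illustrative.

```python
import numpy as np

def refocus(views, alpha):
    """Shift-and-add refocusing sketch.

    views : array of shape (K, K, H, W) holding sub-aperture images.
    alpha : refocus parameter; each view is shifted in proportion to
            its offset from the central aperture, then all views are
            averaged.
    """
    K = views.shape[0]
    c = K // 2
    out = np.zeros(views.shape[2:])
    for i in range(K):
        for j in range(K):
            dy = int(round(alpha * (i - c)))
            dx = int(round(alpha * (j - c)))
            out += np.roll(views[i, j], (dy, dx), axis=(0, 1))
    return out / (K * K)
```

Varying `alpha` sweeps the synthetic focal plane through the scene – which is exactly the "focus after you shoot" capability described above.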
All of this can be done after you’ve recorded the image.
Fascinating, isn’t it?