Hello Jokerro,
In a nutshell, it comes down to the tiny differences between the individual sub-images, which can be computationally combined with one another.
You can find detailed information in the dissertation of Lytro founder Ren Ng. Here is an introductory excerpt from chapter 1.3:
To process final photographs from the recorded light field, digital light field photography uses ray-tracing techniques. The idea is to imagine a camera configured as desired, and trace the recorded light rays through its optics to its imaging plane. Summing the light rays in this imaginary image produces the desired photograph. This ray-tracing framework provides a general mechanism for handling the undesired non-convergence of rays that is central to the focus problem. What is required is imagining a camera in which the rays converge as desired in order to drive the final image computation.
For example, let us return to the first manifestation of the focus problem – the burden of having to focus the camera before exposure. Digital light field photography frees us of this chore by providing the capability of refocusing photographs after exposure (Figure 1.3). The solution is to imagine a camera with the depth of the film plane altered so that it is focused as desired. Tracing the recorded light rays onto this imaginary film plane sorts them to a different location in the image, and summing them there produces the images focused at different depths.
The same computational framework provides solutions to the other two manifestations of the focus problem. Imagining a camera in which each output pixel is focused independently severs the coupling between aperture size and depth of field. Similarly, imagining a lens that is free of aberrations yields clearer, sharper images. Final image computation involves taking rays from where they actually refracted and re-tracing them through the perfect, imaginary lens.
Particularly interesting is chapter "3.2 Computing Photographs from the Light Field", found on page 26.
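The refocusing the excerpt describes boils down to a shift-and-sum over the sub-aperture images: each sub-image is shifted in proportion to its offset from the lens center (the shift depending on the chosen virtual film-plane depth), then all sub-images are averaged. Here is a minimal NumPy sketch of that idea; the function name `refocus`, the `(U, V, S, T)` array layout, and the rounding to integer pixel shifts are my own simplifying assumptions, not Ng's actual (continuous, interpolating) implementation:

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-sum refocusing of a 4D light field (simplified sketch).

    light_field: array of shape (U, V, S, T) -- the sub-aperture images,
    indexed by lens coordinates (u, v) and pixel coordinates (s, t).
    alpha: ratio of the virtual film-plane depth to the recorded one;
    alpha = 1.0 reproduces the original focus.
    """
    U, V, S, T = light_field.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture image in proportion to its offset
            # from the lens center, rounded to whole pixels here for
            # simplicity (a real implementation would interpolate).
            du = int(round((u - U // 2) * (1 - 1.0 / alpha)))
            dv = int(round((v - V // 2) * (1 - 1.0 / alpha)))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```

With `alpha = 1.0` all shifts are zero and the result is simply the average of the sub-aperture images, i.e. the photograph at the original focus; other values of `alpha` refocus nearer or farther.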