Optica Publishing Group

F-number and focal length of light field systems: a comparative study of field of view, light efficiency, signal to noise ratio, and depth of field

Open Access Open Access

Abstract

The paper discusses the light efficiency and signal-to-noise ratio (SNR) of light field imaging systems in comparison to classical 2D imaging, which necessitates the definition of focal length and f-number. A comparison framework between 2D imaging and arbitrary light field imaging systems is developed and exemplified for the kaleidoscopic and the afocal light field imaging architectures. Since the f-number, in addition to the light efficiency of the system, is conceptually linked to the depth-of-field, an appropriate depth-of-field interpretation for light field systems is discussed as well.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The $\textrm {F}/\#$ captures major features of imaging systems and is therefore in popular use. It describes the light efficiency of an optical system as well as the corresponding loss in depth-of-field for higher-performing lenses.

Since light field imaging systems [15] perform aperture subdivision, the exact meaning of the $\textrm {F}/\#$ is ambiguous in their case. In addition, since the $\textrm {F}/\#$ involves the focal length of the system, the field of view interpretation is affected as well.

This article therefore discusses the major features related to the $\textrm {F}/\#$ of arbitrary light field imaging systems: their field-of-view, light efficiency, SNR, and depth-of-field. The focus is on establishing conditions that enable a comparison to standard 2D imaging systems and that yield a consistent meaningful description of their properties. As for 2D imaging systems, these will be suitable definitions of focal length and $\textrm {F}/\#$.

Since light field imaging systems differ considerably in their optical implementation, the derivations are carried out in object space using the equivalent camera array (ECA) model of Mignard-Debise and Ihrke [6], which is a geometrical optics model. While the analysis is therefore limited to first-order considerations, this measure abstracts from the particular systems being studied and enables their comparison on common grounds by linking them to an equivalent 2D imaging scenario. The development is exemplified by studying the K|Lens One, a light field objective lens based on the kaleidoscopic light field imaging principle [4], and the Lytro Illum, a lenslet-based light field camera in the afocal configuration [1,2]. The light efficiency of the afocal configuration has been studied in the microscopic context [7]. The other known implementations are:

  • the focused lenslet configuration [1,3], where the depth-of-field has been analyzed [8], and
  • the Fourier Light Field configuration for microscopy applications, where depth-of-field and light efficiency have been studied [9].
General review articles covering the major aspects of light field imaging are [10,11] and the interested reader is referred to the references therein for an extensive review.

The majority of treatments are in terms of thin lens geometrical models [2,3,8,12,13] for individual light field architectures. Unifying geometric models based on light beam parameterizations have been developed to enable comparative studies of different architectures [6,14]; the former work is in terms of phase space cells, as also explored in [15], while the latter is formulated in terms of virtual cameras that form an “equivalent camera array” (ECA). This description is most suited to the current analysis.

The major attention of these geometric models has, however, been on the geometric properties of the resulting cameras [6,14] since these directly relate to their depth sensing capabilities and performance. Another notable use of geometric light field imaging models has been the calibration of these systems [12,13], which is a necessary prerequisite for satisfactory operation. Optical design considerations for light field systems have only recently been published [16]. Light throughput and depth-of-field analyses have so far been restricted to light field microscopy systems [7,9,17].

In contrast to prior work, this article aims at photographic applications and enables a comparative study of different light field systems in terms of their major optical properties relating to light throughput and SNR. I propose suitable definitions and discuss details of interpretation for the focal length, f-number, and depth-of-field, relating them to their 2D imaging counterparts. The article is intended to aid the communication and interpretation of these numbers in the context of light field imaging.

The paper is organized as follows: Section 2 introduces the comparison setting that is elaborated throughout the paper and introduces the equivalent focal length of a light field imaging system. Section 3 then investigates the conditions for imaging at the same exposure, which is a prerequisite for analyzing the SNR properties in Section 4. The development of Sections 2–4 is performed in a simplified setting. Complicating factors of real light field imaging systems and the required adaptations are then discussed in Section 5. The comparison settings thus derived result in a different depth of field for the comparison systems; the implications are therefore discussed in Section 7. Section 6 applies the developed concepts to two example light field systems.

2. Comparison setting and equivalent focal length

2.1 Setting

We are concerned with imaging the same field of view for a light field system ($\textrm {LF}$) and a standard 2D imaging system ($\textrm {St}$) at equal exposure, fixing two of the important quantities for comparison. The labels in parentheses are used in the following as a superscript to variables indicating the system that a quantity relates to. Further, a sensor of the same size and equal pixel resolution is assumed for the standard ($\textrm {St}$) and the light field ($\textrm {LF}$) setting.

As the $(\textrm {LF})$ system, in order to obtain angular information, captures sub-views at a lower spatial resolution, we also consider a low resolution ($\textrm {LR}$) setting. This system shares the same field of view but uses a sensor with a lower pixel resolution matching the one of a single $(\textrm {LF})$ system sub-view, but having the same physical size as the standard system $(\textrm {St})$.

A summary of the setting is shown in Fig. 1. The analysis, except for parts of Section 5, will be performed in a 2D setting to simplify the expressions. The conclusions are easily transferred to the full 3D case.


Fig. 1. Object-side view of three comparison systems observing the same field of view. The systems a) and b) use the same hypothetical sensor; system c) uses a sensor of the same size but with larger pixels. Left: standard imaging. Middle: light field imaging. Right: standard imaging with a low-resolution sensor (same physical size as in the other cases). In the case of the light field system, the aperture is subdivided by a factor of $N$ (here $N=3$), and the object-side pixel size $h_o$ of the standard system is increased by the same factor. The low-resolution system has the full NA of the standard system, but a smaller number of larger pixels, chosen to match those of a light field subview. In the lower row (“spatial”), the object-side image of the sensor has been colored red to emphasize this fact. The different pixel sizes of the different settings are indicated by the different spacings of the tick marks. Please also note that the entrance pupil planes of the 2D systems and the LF system differ in general. This is discussed in detail in Section 5.


Further, the main characteristics and the comparison setting are first developed in a simplified manner, i.e. an idealized light field setting is assumed where 1. the entrance pupil of the light field system is tightly subdivided and fully filled by the light field sub-apertures, and 2. the entrance pupil plane of the main lens and that of the equivalent camera array (ECA) of the light field system agree. The adaptations to real cases where these assumptions do not hold will be discussed in Section 5.

Introducing ${h_i}$ as a sensor pixel size, the assumptions lead to

$${h_i}^{\textrm{St}} = {h_i}^{\textrm{LF}} = \frac{1}{N}\times{h_i}^{\textrm{LR}},$$
where $N$ is the number of sub-views that the $(\textrm {LF})$ system generates via aperture sub-sampling. In the following, we will gradually build up Table 1, where all relations are collected in an easily accessible manner.


Table 1. Relations for the 3 comparison systems shown in Fig. 1 that are derived in the paper. All quantities are in reference to the standard system ($\textrm {St}$).

2.2 Equivalent focal length

Returning to Fig. 1, we use an object-space description [6] to abstract from specific optical light field system implementations. We primarily consider the entrance pupil of the system and argue in object space using the magnification $M=\frac {{h_i}}{{h_o}}$, where ${h_o}$ is the object space pixel size. Since we are assuming a common field of view of the comparison systems, we have

$${h_o}^{\textrm{St}} = \frac{1}{N}\times{h_o}^{\textrm{LF}} = \frac{1}{N}\times{h_o}^{\textrm{LR}},$$
i.e. the standard system ($\textrm {St}$) resolves the object plane $N\times$ better than both the ($\textrm {LF}$) and ($\textrm {LR}$) systems. The required optical magnification is therefore
$${M}^{\textrm{St}} = N\times{M}^{\textrm{LF}} = {M}^{\textrm{LR}},$$
which follows from
$$\begin{aligned} {M}^{\textrm{LF}} &= \frac{{h_i}^{\textrm{LF}}}{{h_o}^{\textrm{LF}}}=\frac{{h_i}^{\textrm{St}}}{N\times{h_o}^{\textrm{St}}}=\frac{1}{N}\times{M}^{\textrm{St}}, \textrm{and}\\ {M}^{\textrm{LR}} &= \frac{{h_i}^{\textrm{LR}}}{{h_o}^{\textrm{LR}}}=\frac{N\times{h_i}^{\textrm{St}}}{N\times{h_o}^{\textrm{St}}}={M}^{\textrm{St}}. \end{aligned}$$
The magnification can be expressed in terms of the system focal length ${f}$ and the object plane distance ${d}$ as ${M}=\frac {f}{d-f}$. Solving for the focal length in terms of the inverse magnification ${M}^{-1}$, we obtain
$$\begin{aligned} {f}^{\textrm{St}} &= \frac{{d}}{({M}^{\textrm{St}})^{{-}1}+1} \approx \frac{{d}}{({M}^{\textrm{St}})^{{-}1}}, \,\,\textrm{but}\\ {f}^{\textrm{LF}} &= \frac{{d}}{({M}^{\textrm{LF}})^{{-}1}+1} = \frac{{d}}{N\times({M}^{\textrm{St}})^{{-}1}+1} \approx \frac{1}{N}{f}^{\textrm{St}}. \end{aligned}$$
For the low resolution ($\textrm {LR}$) system, ${f}^{\textrm {LR}}={f}^{\textrm {St}}$ due to equal magnification. The approximation in Eq. (4) holds for moderately large ${M}^{-1}$ which is the setting for photography at medium to large distances.
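The relations of Eqs. (3) and (4) can be checked numerically; the following is a minimal sketch with hypothetical values ($d=5~\textrm{m}$, ${f}^{\textrm{St}}=78~\textrm{mm}$, $N=3$):

```python
# Sketch of Eqs. (3) and (4): for the same field of view, the light
# field system requires roughly 1/N of the standard focal length.
# Hypothetical values: d = 5 m, f_St = 78 mm, N = 3.
d = 5000.0                 # object plane distance in mm
f_St = 78.0                # standard-system focal length in mm
N = 3                      # number of aperture subdivisions

M_St = f_St / (d - f_St)   # magnification M = f / (d - f)
M_LF = M_St / N            # Eq. (3): M_LF = M_St / N

# Exact focal length from the magnification, f = d / (M^-1 + 1):
f_LF = d / (1.0 / M_LF + 1.0)

# f_LF differs from f_St / N only by the neglected "+1" term,
# about 1% at this distance.
print(f_LF, f_St / N)
```

The approximation error shrinks further for larger object distances, consistent with the photographic regime assumed above.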

We see that the required optical focal length is reduced by the factor $\frac {1}{N}$ for the light field ($\textrm {LF}$) setting. Since the focal length / sensor size combination is commonly used to interpret the field of view of the system, the optical focal length ${f}^{\textrm {LF}}$ of the ($\textrm {LF}$) system is misleading.

I therefore propose to use the focal length ${f}^{\textrm {St}}$ of the comparable standard system as an equivalent focal length for the light field system:

$${f_\textrm{eq}}^{\textrm{LF}} = {f}^{\textrm{St}} = {f}^{\textrm{LR}}.$$

3. Light efficiency and F-number

Since the $\textrm {F}/\#=\frac{{f}}{{D}}$ involves the focal length ${f}$ as well as the physical aperture diameter ${D}$ in the entrance pupil plane, we now discuss the conditions for equal exposure by the comparison systems.

3.1 F-Number

We perform the analysis in terms of the object-side numerical aperture $\textrm {NA}_o={n}\sin \alpha _o=1/(2{M}\textrm {F}/\#_w)$ with $\textrm {F}/\#_w=(1+|{M}|)\textrm {F}/\#$ being the working f-number for unit pupil magnification and imaging in air (${n}=1$) which is assumed in the following. For photography at medium to large distances, the working f-number and the f-number are approximately identical.

As is seen from Fig. 1,

$$\textrm{NA}_o^{\textrm{St}} = N\times\textrm{NA}_o^{\textrm{LF}} = \textrm{NA}_o^{\textrm{LR}}$$
and therefore
$$\begin{aligned} \textrm {F}/\#_w^{\textrm{St}} &= \frac{1}{2{M}^{\textrm{St}}\textrm{NA}_o^{\textrm{St}}},\\ \textrm {F}/\#_w^{\textrm{LF}} &= \frac{1}{2{M}^{\textrm{LF}}\textrm{NA}_o^{\textrm{LF}}} = \frac{1}{2 (N\times{M}^{\textrm{St}})(\frac{1}{N}\times\textrm{NA}_o^{\textrm{St}})}=\textrm {F}/\#_w^{\textrm{St}}, \end{aligned}$$
and $\textrm {F}/\#_w^{\textrm {LR}}=\textrm {F}/\#_w^{\textrm {St}}$ because of equal NA and magnification in the two settings.

A similar argument can be made in terms of entrance aperture sizes: Let the full aperture in Fig. 1 be given by ${D}$, then ${D}^{\textrm {St}} = N \times {D}^{\textrm {LF}} = {D}^{\textrm {LR}}$. Using the respective optical focal length of Eq. (4), $\textrm {F}/\#^{\textrm {St}}=\textrm {F}/\#^{\textrm {LF}}=\textrm {F}/\#^{\textrm {LR}}$ is obtained.

3.2 Exposure

While the $\textrm {F}/\#$ is equal for all comparison systems, the exposure, which is proportional to the étendue ${G} = {h_i} \textrm {NA}_i = {h_o} \textrm {NA}_o$ of a system, differs:

$$\begin{aligned} {G}^{\textrm{St}} &= {h_o}^{\textrm{St}} \textrm{NA}_o^{\textrm{St}}, \\ {G}^{\textrm{LF}} &= {h_o}^{\textrm{LF}} \textrm{NA}_o^{\textrm{LF}} = (N\times{h_o}^{\textrm{St}})(\frac{1}{N}\textrm{NA}_o^{\textrm{St}})={G}^{\textrm{St}}, \,\,\textrm{whereas}\\ {G}^{\textrm{LR}} &= {h_o}^{\textrm{LR}} \textrm{NA}_o^{\textrm{LR}} = (N\times{h_o}^{\textrm{St}})(\textrm{NA}_o^{\textrm{St}})=N\times{G}^{\textrm{St}}. \end{aligned}$$
Using comparable optical focal lengths, Eq. (4), and an equal $\textrm {F}/\#$, Eq. (7) based on it, we see that the exposure is equal for the ($\textrm {St}$) and the ($\textrm {LF}$) systems, whereas, as expected, the ($\textrm {LR}$) system gathers $N\times$ more light per pixel.
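As a minimal numeric sketch of Eq. (8) (hypothetical object-side pixel size and NA; only the ratios matter):

```python
# Sketch of Eq. (8): per-pixel étendue G = h_o * NA_o of the three
# comparison systems relative to the standard system (St).
# Hypothetical values in arbitrary units.
N = 3
h_o_St = 1.0
NA_o_St = 0.01

G_St = h_o_St * NA_o_St
G_LF = (N * h_o_St) * (NA_o_St / N)  # N-times larger pixels, 1/N aperture
G_LR = (N * h_o_St) * NA_o_St        # N-times larger pixels, full aperture

# (LF) matches the standard exposure; (LR) gathers N times more light.
print(G_LF / G_St, G_LR / G_St)
```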

The following additional points should be emphasized:

  • 1. When using the equivalent focal length ${f_\textrm {eq}}^{\textrm {LF}}$ proposed in Eq. (5) for describing the light field system, the full system aperture ${D}^{\textrm {St}}$, i.e. covering all light field sub-views, should be used to satisfy the $\textrm {F}/\#$ equality.
  • 2. The comparison $\textrm {F}/\#^{\textrm {St}}$ is not equal to the main lens f-number of the light field system since this would be based on the optical focal length: ${f}^{\textrm {LF}} / {D}^{\textrm {St}}$ rather than on the equivalent focal length ${f_\textrm {eq}}^{\textrm {LF}} / {D}^{\textrm {St}}$.
  • 3. As seen from Eq. (8), the f-number is not a good concept for predicting the exposure if different sensors are involved. Instead, the étendue must be considered.
  • 4. Alternatively, the factor $N$ in the last row of Eq. (8) can be moved from the object-space pixel size ${h_o}^{\textrm {St}}$ to the $\textrm {NA}_o^{\textrm {St}}$-term. Formulated in terms of $\textrm {F}/\#_w$, this yields a factor of $1/N$, i.e. an equivalent f-number $\textrm {F}/\#_\textrm {eq}^{\textrm {LR}}= 1/N \times \textrm {F}/\#^{\textrm {St}}$ that expresses the improved exposure in relation to the standard system. This is also relevant for the digital summation of the subviews, as discussed next.

4. Signal-to-noise ratio

The current section is concerned with digital summation procedures that produce virtually equivalent exposures in Eq. (8), i.e. that compensate the factor-of-$N$ exposure advantage of the low resolution system: for the standard system (St), $N$ adjacent high-resolution pixels can be digitally added, whereas for the light field system (LF), a more complex procedure involving sub-view registration is required [10]. Since $N$ sub-views are available, depth-compensated corresponding pixels (i.e. those showing the same object point) can be used for summation. The following discussion focuses on the difference between analog integration of photo electrons vs. digital summation after A/D conversion.

In the following we use a simple noise model that is commonly used in the literature [18–20]:

$$\sigma_{\textrm{tot}}^{2} = \sigma_{\textrm{r}}^{2} + \sigma_{\textrm{p}}^{2},$$
i.e. a combination of read noise $\sigma _{\textrm {r}}$ and photon noise $\sigma _{\textrm {p}}$ for a pixel. The read noise depends on the operating conditions and electronics of the pixel (but not its size) and can be assumed to be equal for the comparison systems, whereas the photon noise is inherent in measuring the arriving photons. Dark noise could be added and modeled as depending on the pixel size, assuming that the quantum well scales with the size of the pixel.

In the following, consider the object to consist of a homogeneous light source as e.g. required by the EMVA 1288 standard [21], or, alternatively, of a Lambertian reflector. In both cases, the radiance $L [\frac {\textrm {W}}{\textrm {m}^{2} \textrm {sr}}]$ is constant for all light directions. The energy $Q [\textrm {J}]$ falling onto a single pixel can therefore be expressed in terms of the étendue $G [\textrm {m}^{2} \textrm {sr}]$ of the optical system: $Q [\textrm {J}] = L [\frac {\textrm {W}}{\textrm {m}^{2} \textrm {sr}}] \cdot G [\textrm {m}^{2} \textrm {sr}] \cdot t [\textrm {s}]$. Joining the factors characterizing illumination/scene and exposure time into a single constant $a=L\cdot t$, the signal can be expressed as $a \cdot G$. The photon noise is proportional to the square root of the signal, i.e.

$$\sigma_{\textrm{p}} {{=}} \sqrt{{{a \cdot}} G}.$$
The SNR is the ratio of signal over noise:
$$\textrm{SNR} = \frac{{{a \cdot}} G}{\sigma_{\textrm{tot}}} = \frac{a \cdot {h_i}}{2 \textrm {F}/\#_w \sigma_{\textrm{tot}}},$$
where the last equality is due to combining Eqs. (7) and (8) and using the definition of ${M}$. It is of interest to directly relate SNR and $\textrm {F}/\#$.

Returning to the comparison with the low-resolved comparison system (LR), the effect in terms of signal-to-noise ratio (SNR) of a single pixel is an approximate $\sqrt {N}$ improvement of the low-resolved standard system (LR) over the pixels in the individual light field (LF) subviews and the pixels in the high-resolved standard system (St):

$$\begin{aligned} \textrm{SNR}^{\textrm{LR}} &\stackrel{(11)}{=}& \frac{{{a \cdot}} G^{\textrm{LR}}}{{\sigma_{\textrm{tot}}}^{\textrm{LR}}}\\ &\stackrel{(8),(9)}{=}& \frac{N\cdot{{a \cdot}} G^{\textrm{St}}}{\sqrt{{\sigma_{\textrm{p}}^{2}}^{\textrm{LR}}+{\sigma_{\textrm{r}}^{2}}^{\textrm{LR}}}}\\ &\stackrel{(8),(10)}{=}& \frac{N\cdot{{a \cdot}} G^{\textrm{St}}}{\sqrt{N\cdot{\sigma_{\textrm{p}}^{2}}^{\textrm{St}}+{\sigma_{\textrm{r}}^{2}}^{\textrm{St}}}} \end{aligned}$$
where I assume that the read noise is the same for both cases, i.e. $\sigma _{\textrm {r}}^{\textrm {LR}} = \sigma _{\textrm {r}}^{\textrm {St}}$.

4.1 Case I: Negligible read noise

If read noise is negligible, i.e. photon noise dominates over other noise sources such as the read noise $\sigma _{\textrm {r}} \ll \sigma _{\textrm {p}}$, the expression may be further simplified to

$$\textrm{SNR}^{\textrm{LR}} = \sqrt{N}\cdot\frac{{{a \cdot}} G^{\textrm{St}}}{\sigma_{\textrm{tot}}^{\textrm{St}}} = \sqrt{N}\cdot\textrm{SNR}^{\textrm{St}} = \sqrt{N}\cdot\textrm{SNR}^{\textrm{LF}}_{\textrm{oneview}},$$
where the last equality is due to the same étendue for the light field (LF) and the high-resolution standard (St) system, Eq. (8). An important observation is that this argument holds for each of the $N$ individual subviews of the light field system (LF), whereas the low-resolved comparison system (LR) only has a single view.

Let us assume that the $N$ light field sub-views can be registered (i.e. their disparity can be computed and compensated for). In this case, the registered subviews may be added digitally to produce an improved SNR.

The signal is then composed of the sum of $N$ digitally registered pixels and the SNR for negligible read noise becomes:

$$\textrm{SNR}^{\textrm{LF}} = \frac{\sum_{i=1}^{N} {{a \cdot}} G^{\textrm{LF}}}{\sqrt{\sum_{i=1}^{N} {\sigma_{\textrm{p}}^{2}}^{\textrm{LF}}}} = \sqrt{N} \cdot \textrm{SNR}^{\textrm{LF}}_{\textrm{oneview}}.$$
Comparing with Eq. (13), we see that digital averaging has, as expected, the same effect as physical integration, i.e. the signal-to-noise ratio of a digitally summed registered light field view is equivalent to the low-resolution comparison system (LR). The presence of dark noise would not change this situation since it was assumed to scale with the pixel size. The same argument holds for the summation of adjacent pixels in the high-resolved standard image (St).

4.2 Case II: Low light conditions

In low light conditions, the read noise is non-negligible. The expected difference is that, in the case of light field imaging (LF) and digitally summed standard imaging ($\textrm {St}$), the read noise of $N$ pixel amplifiers poses a disadvantage, as compared to the low-resolved standard imaging system (LR) where only one amplifier is present. Indeed

$$\textrm{SNR}^{\textrm{LF}} = \frac{\sum_{i=1}^{N} {{a \cdot}} G^{\textrm{LF}}}{\sqrt{\sum_{i=1}^{N} {\sigma_{\textrm{tot}}^{2}}^{\textrm{LF}}}} = \frac{N \cdot {{a \cdot}} G^{\textrm{LF}}}{\sqrt{N \cdot \left({\sigma_{\textrm{p}}^{2}}^{\textrm{LF}} + {\sigma_{\textrm{r}}^{2}}^{\textrm{LF}}\right)}} = \frac{N\cdot {{a \cdot}} G^{\textrm{St}}}{\sqrt{N\cdot{\sigma_{\textrm{p}}^{2}}^{\textrm{St}} + N\cdot{\sigma_{\textrm{r}}^{2}}^{\textrm{St}}} },$$
where the last equality is due to the equality of signals between standard imaging (St) and light field imaging (LF), Eq. (8). Comparison with Eq. (12) shows, as expected, that in low light conditions physical integration is advantageous and light field and other digitally summed systems have a lower SNR even with a digital summation of the registered subviews.
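The two regimes of Sections 4.1 and 4.2 can be illustrated with a small sketch, comparing Eq. (12) (physical integration, LR) with Eq. (15) (digital summation of $N$ registered subviews, LF); the signal and read-noise values are hypothetical:

```python
import math

# Sketch comparing Eq. (12) (LR: one read-out, N-fold signal) with
# Eq. (15) (LF: N read-outs). Per Eq. (10), the photon-noise variance
# equals the signal a*G. Hypothetical values.
N = 3
signal_St = 100.0                    # a * G_St per standard pixel

def snr_LR(sig, sigma_r):
    # Eq. (12): N-fold signal, N-fold photon variance, one read-out.
    return N * sig / math.sqrt(N * sig + sigma_r ** 2)

def snr_LF(sig, sigma_r):
    # Eq. (15): N-fold signal, N-fold photon variance, N read-outs.
    return N * sig / math.sqrt(N * sig + N * sigma_r ** 2)

# Photon-limited regime: both reach sqrt(N) * SNR_St, cf. Eqs. (13), (14).
print(snr_LR(signal_St, 0.0), snr_LF(signal_St, 0.0))
# Read-noise-limited regime: physical integration (LR) is superior.
print(snr_LR(signal_St, 50.0), snr_LF(signal_St, 50.0))
```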

Note that all SNR expressions can equivalently be given in terms of the working f-number $\textrm {F}/\#_w$, Eq. (11).

5. Real light field systems and the equivalent f-number

The previous analysis describes an idealized setting in that it is tacitly assumed that the complete light cone of the standard system ($\textrm {St}$) is partitioned, i.e. sub-divided without loss of rays and without overlap, and that all sensor pixels are utilized. In general, this is not the case [17,22].

5.1 Vignetting: entrance pupil under-fill

Real light field systems exhibit vignetting and Mignard-Debise [22] introduced a separation into spatial and angular vignetting, as illustrated in Fig. 2. The illustrating example systems are kaleidoscopic light field imaging [4] and afocal lenslet-based light field imaging [1,2] systems. For correctly interpreting the figure, note that the afocal lenslet-based light field architecture is characterized by exchanged positions for the angular and the spatial sampling, see also [6].


Fig. 2. Vignetting in two optical light field configurations. Left: kaleidoscopic light field imaging, Right: afocal lenslet-based light field imaging. White areas pass light, whereas grayed-out areas are vignetted. The top row shows the angular sampling pattern in the aperture of the main lens of the light field system. The middle row shows the spatial sampling pattern on the sensor, while the bottom row illustrates decoded light field subviews. The areas of the main lens aperture and the sensor, respectively, that are not covered by the light field system introduce a loss of light as compared to the standard 2D imaging system of Fig. 1 a). The kaleidoscopic system suffers from angular vignetting since the main lens aperture is not fully covered. The lenslet system has angular vignetting between the microlens images (middle row), but often also shows spatial vignetting (bottom row) that is introduced by the cat’s-eye shape of the outer microlens images.


According to Mignard-Debise [22], vignetting can be roughly described by a scalar factor

$${c_{\textrm{system}}} [\%] = {c_{\textrm{spatial}}} [\%] \times {c_{\textrm{angular}}} [\%],$$
that multiplies the étendue ${G}$ of the ideal system. Since the étendue is a measure of exposure, i.e. it applies to pixels individually, it is more adequate to only consider the angular vignetting part in the definition of an effective étendue:
$$\hat{{G}} = {c_{\textrm{angular}}} \cdot {G}.$$
For estimating the vignetting factor, we usually need to resort to the 3D case by considering a 2D version of the aperture, e.g. as in Fig. 2 (upper left). An example calculation for this system will be given in Section 6.1. Denote the areas of the light-passing sub-view apertures by $A_i,\, i=1,\ldots,9$, and assume the encircling aperture of the comparison system ($\textrm {St}$) has a diameter of ${D}^{\textrm {St}}$; then ${c_{\textrm {angular}}}$ is given by the ratio
$${c_{\textrm{angular}}} = \frac{\sum_i A_i}{\pi/4 \times ({D}^{\textrm{St}})^{2}}.$$
Now, consider ${c_{\textrm {angular}}}$ to be estimated and known. To relate back to the derivations performed so far in the simplified setting, the estimated angular vignetting factor can be used to compute an equivalent effective system diameter ${D}^{\textrm {LF}}_\textrm {eff}$, assuming a circular aperture with the same relative area as the sum of the sub-view areas in Eq. (18):
$${D}^{\textrm{LF}}_\textrm{eff} = \sqrt{{c_{\textrm{angular}}}}\times{D}^{\textrm{St}}.$$
Using ${D}^{\textrm {LF}}_\textrm {eff}$, a more realistic effective f-number can be defined as $\textrm {F}/\#_\textrm {eff}={f_\textrm {eq}}^{\textrm {LF}} / {D}^{\textrm {LF}}_\textrm {eff} = 1/\sqrt{{c_{\textrm {angular}}}}\times \textrm {F}/\#^{\textrm {St}}$. Having been derived in terms of the full comparison system aperture, the effective f-number applies to the standard system ($\textrm {St}$), but due to the equality of the f-numbers also to the other two comparison systems.
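A minimal sketch of Eqs. (17)–(19), using the K|Lens values from Section 6.1 (${c_{\textrm{angular}}}=0.42$, ${f_\textrm{eq}}^{\textrm{LF}}=78~\textrm{mm}$, ${D}^{\textrm{St}}=20~\textrm{mm}$) as inputs:

```python
import math

# Sketch of Eq. (19): effective pupil diameter and effective f-number
# from the angular vignetting factor. Values from Section 6.1.
f_eq = 78.0        # equivalent focal length in mm
D_St = 20.0        # full comparison-system entrance pupil diameter in mm
c_angular = 0.42   # fraction of the pupil area that passes light

D_eff = math.sqrt(c_angular) * D_St   # Eq. (19): equal-area circular pupil
F_eff = f_eq / D_eff                  # effective f-number

# F_eff = F_St / sqrt(c_angular) = 3.9 / sqrt(0.42), about 6.0.
print(round(F_eff, 1))
```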

It should additionally be emphasized that the spatial vignetting part ${c_{\textrm {spatial}}}$ of Eq. (16) should not be forgotten in the analysis of a light field imaging system. Effectively, spatial vignetting reduces the number of “mega-rays” that can be successfully acquired by a light field imaging system and typically reduces the angular coverage for extreme field points.

5.2 Entrance pupil plane mismatch: entrance pupil overfill

Several light field imaging systems, in particular the common microlens-based afocal and focused configurations [2,3], have virtual entrance pupil planes for their equivalent camera array that are displaced from the entrance pupil of the main lens; see [6] for details. In this case, an overfill of the comparison entrance pupil can be observed, see Fig. 3; this effect is also referred to, and quantified, as overlap in [23].


Fig. 3. Propagating the LF equivalent camera array (ECA) entrance pupil (red) to that of the comparison standard system (blue) overfills its aperture as compared to a perfect $1/N$ subdivision of the pupil.


As an example, if the virtual entrance pupils of the light field subviews are located in front of the main lens entrance pupil (e.g. the afocal microlens setting results in a displacement of one main lens focal length), the pupil areas of different field points spread over a larger region of the main lens aperture and are individually enlarged as compared to the perfect $1/N$ subdivision discussed in Sections 2–4. This typically leads to spatial vignetting at the edges of the field, Fig. 2(b) (middle and lower).

This factor can be taken into account by comparing the sum of the subview NAs to the main lens NA [23]. In the current discussion, it is equivalent to state the relation in terms of entrance pupil diameters. Liu et al. [23] propose to quantify the overlap in percent by:

$$\oslash_p = 1-\frac{{D}^{\textrm{St}}}{N\cdot{D}^{\textrm{LF}}}.$$
Underfill, described by the vignetting factor ${c_{\textrm {angular}}}$ of the previous subsection, could be interpreted as negative overlap. Alternatively, overfill could be seen as ${c_{\textrm {angular}}}>1$, where ${c_{\textrm {spatial}}}$ is typically reduced. The two descriptions can thus be used interchangeably.

6. Example systems

6.1 Case Study I: K|Lens One

To illustrate the discussion, let us consider an example system. We will use the K|Lens One (K|Lens GmbH, http://k-lens-one.com) that is based on the kaleidoscopic light field imaging principle shown in Fig. 2 (left), the working principle of which is described in [4]. The kaleidoscopic LF architecture has matched entrance pupils for the ECA and the main lens, but features angular vignetting, as discussed in Section 5.1.

The system has an equivalent focal length ${f_\textrm {eq}}^{\textrm {LF}}=78~\textrm {mm}={f}^{\textrm {St}}$ and a main lens entrance pupil diameter of ${D}^{\textrm {St}}=20~\textrm {mm}$. From the considerations in Section 3, we would estimate $\textrm {F}/\#=78/20=3.9$. However, from the basic geometry in Fig. 2 (upper left), we can estimate ${c_{\textrm {angular}}}=0.42$. The conditions used are: 1. the sub-view aperture geometry touches the main lens aperture in the diagonal sub-apertures, 2. the ratio of center distances is $3:2$, and 3. vertical sub-apertures touch. These conditions correspond to the maximal setting of a round aperture for the sub-views in the kaleidoscopic light field architecture. Using the vignetting factor, we obtain an effective $\textrm {F}/\#_\textrm {eff}=6.0$. The real sub-view aperture size from optical simulations is ${D}^{\textrm {LF}}=4.1~\textrm {mm}$, which yields an $\textrm {F}/\#_\textrm {eff}=6.3$.

A comparison photograph on a Nikon D850 showing equal exposure of a light field sub-view, Fig. 4 (middle row, left), and an equivalent standard lens (Nikon AF Nikkor $28\text{–}80~\textrm {mm}$ f/3.3–5.6G set to ${f}^{\textrm {St}}=75~\textrm {mm}$, $\textrm {F}/\#^{\textrm {St}}=6.3$), using equal ISO and exposure time settings, is shown in Fig. 4 (middle row, right). An example of digitally summed light field views is shown in the bottom row (right). The virtual exposure is much brighter and can be characterized by the equivalent f-number $\textrm {F}/\#_\textrm {eq}=6.3/3=2.1$ proposed in Section 3.


Fig. 4. Kaleidoscopic light field lens “K|Lens One” with a view of the sub-apertures (top row, left); acquired sensor image of the light field system (top row, right); cropped center sub-view of the light field imaging system, equivalent focal length $=78~\textrm {mm}$, ISO $=200$, exposure time $=1/10~\textrm {s}$, insets show raw image content (middle row, left); comparison standard system $(\textrm {St})$ photograph using a Nikon AF Nikkor $28\text{–}80~\textrm {mm}$ f/3.3–5.6G set to a focal length of $75~\textrm {mm}$, aperture $=f/6.3$, ISO $=200$, exposure time $=1/10~\textrm {s}$, insets show raw image content; note the $3\times$ higher resolution but the much shallower DoF (middle row, right); an averaging of the light field views gives an approximate impression of the digitally decreased DoF available to focus synthesis methods [4], the blur region being slightly larger than in the standard system $(\textrm {St})$ due to uncompensated sub-view distortions (lower row, left); summation of the light field views demonstrates improved exposure and SNR, e.g. on the body of the owl statue (lower row, right). Raw images used to generate this figure can be found in Dataset 1 [25].


6.2 Case Study II: Lytro Illum

The second example is the Lytro Illum, a light field camera based on the afocal microlens configuration [2]. In this setting, the ECA entrance pupil plane and the main lens entrance pupil do not agree. In addition, there is no angular vignetting as illustrated in Fig. 2 (right).

For the following discussion, it should be noted that the Lytro Illum uses a $1/1.2''$ sensor $(10.82 \times 7.52~\textrm {mm}^{2})$ as compared to the K|Lens One that is designed for a full frame sensor $(36 \times 24~\textrm {mm}^{2})$. All dimensional quantities are therefore linearly scaled by $1/3.33$ for this system.

For generating the same field-of-view as in the previous case, the equivalent focal length is chosen as ${f_\textrm {eq}}=78~\textrm {mm}/3.33=23.4~\textrm {mm}$. As extracted from the meta-data of the example raw file, the main lens has an $\textrm {F}/\#=2.2$ for this setting. The entrance pupil diameter follows as $D=23.4~\textrm {mm} / 2.2 = 10.65~\textrm {mm}$.

In the afocal light field configuration, the object space image of a micro-image pixel serves as the virtual entrance pupil of a light field subview [6]. Paraxial calculations with Lytro Illum parameters (${f}_{\textrm {main}}=23.4~\textrm {mm}, {f}_{\textrm {ML}}=40~\mu \textrm {m}, {h_i}=1.4~\mu \textrm {m}$) yield a subview entrance pupil size of ${D}^{\textrm {LF}}=0.85~\textrm {mm}$. With $14$ subviews (in one lateral direction), the overall entrance pupil of the ECA is thus estimated as $14 \times 0.85~\textrm {mm}=11.9~\textrm {mm}$. With these values, Eq. (20) yields an overlap of $\oslash _p = 10\%$. The actual f-number of the light field subviews is therefore $\textrm {F}/\#^{\textrm {LF}}=2.0$. The same result is obtained from the optical focal length of a subview, ${f}=\frac {{f_\textrm {eq}}}{N}=1.67~\textrm {mm}$, and the subview entrance pupil diameter ${D}^{\textrm {LF}}$: $1.67/0.85 = 1.97 \approx 2.0$. This is also the value stated on the Illum’s main lens. A visual example is shown in Fig. 6.
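The arithmetic of this case study can be retraced in a few lines. The following sketch simply reproduces the quoted values (variable names are ours):

```python
# Numerical retrace of the Lytro Illum case study (values from the text).
f_eq = 78.0 / 3.33               # equivalent focal length [mm] -> ~23.4
D_main = f_eq / 2.2              # main lens entrance pupil [mm] -> ~10.65
D_lf = 0.85                      # paraxial sub-view entrance pupil [mm]
N = 14                           # sub-views in one lateral direction
D_eca = N * D_lf                 # ECA entrance pupil [mm] -> 11.9
overlap = 1.0 - D_main / D_eca   # Eq. (20): pupil overlap -> ~0.105, i.e. ~10 %
f_sub = f_eq / N                 # optical focal length of a sub-view [mm] -> ~1.67
f_num_sub = f_sub / D_lf         # sub-view f-number -> ~1.97, i.e. F/2.0
```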

6.3 Comparison table

For direct comparison, the optical data, as well as some additional sensor properties, are summarized in Table 2. Values in parentheses indicate the values chosen in the above discussion for settings that are adjustable. Digital super-resolution performance is taken from company information.

Table 2. Main parameters of the two discussed example light field systems.

7. Depth-of-field

Since the f-number is also used to communicate an expected depth-of-field, a discussion is not complete without investigating the DoF, and in particular, the scaling laws for aperture subdivision by a factor of $N$ as in light field imaging. In the following, a geometrical optics argument is invoked to determine the tendencies underlying light field imaging systems. For numerical computations in real systems, additional effects such as aberrations, wave optical effects, vignetting, sensor tilt, etc. need to be considered. The development in this section is therefore only a first approximation.

7.1 Depth-of-field formula

For completeness, the derivation of the used depth-of-field formula is sketched in the following. The relevant geometry is shown in Fig. 5. The argument, again, follows a predominantly object-space approach in order to abstract from particular systems. However, since the DoF is typically defined in terms of circles of confusion (CoC) that relate to the pixel size of the sensor (e.g. the CoC is $1$ pixel), it will be necessary to relate to the image space pixel size ${h_i}$ via the optical focal length of the imaging system, Eq. (4).

Fig. 5. Sketch illustrating the geometry for the DoF discussion. ${D}$ is the aperture diameter, ${d}$ the distance of the focal plane from the aperture plane (considered positive towards the object space) and ${h_o}^{-}$ the object space pixel size at the near depth-of-field plane. Similarly, ${d^{-}}$ and ${d^{+}}$ are distances from the aperture plane.

Fig. 6. Comparison standard 2D imaging (Nikon D850, AF Nikkor 85 mm f/1.8; $\textrm {F}/\#=2.2$, exposure time=$1/80~\textrm {s}$, ISO=100) and Lytro Illum (exposure time=$1/100~\textrm {s}$, ISO=100, ${f}_{\textrm{eq}} \approx 80~\textrm{mm}$). The output is as “unprocessed” as possible. The colored raw image uses the Bayer pattern brightness assigned to the respective color channel. Raw images are shown as captured. The processed Lytro Illum images have been directly generated from the raw data stored in the .lfp camera file. Only demosaicing and resampling has been applied to the 10-bit output. Note the similar exposure. The DoF is different due to incomparable sensor and pixel sizes. Raw images used to generate this figure can be found in Dataset 1 [25].


The ansatz is a similarity relation between the two triangles indicated in the figure (the green triangle has been flipped for clarity). The derivation is carried out for the distance of the near depth-of-field plane ${d^{-}}$:

$$\frac{\frac{{D}}{2}-\frac{{h_o}^{-}}{2}}{\frac{{D}}{2}} = \frac{{D}-{h_o}^{-}}{{D}}=\frac{{d^{-}}}{{d}}.$$
Here, ${D}$ is the aperture diameter, ${h_o}^{-}$ the object side pixel size at the near depth-of-field plane ${d^{-}}$ and ${d}$ is the object distance that the system is focused at. The object side pixel size is obtained from the (fixed) image-space pixel size ${h_i}$ via the magnification ${M}^{-}=\frac {f}{{d^{-}}-f}$ for the near DoF plane:
$${h_o}^{-}= \frac{1}{{M}^{-}} {h_i} = \frac{{d^{-}}-f}{f} {h_i}.$$
Inserting Eq. (22) into (21), solving for ${d^{-}}$ and simplifying
$${d^{-}}= \frac{f {d}({D}+{h_i})}{{D}f + {d} {h_i}}$$
is obtained. Similarly, ${d^{+}}$ is derived as
$${d^{+}}= \frac{f {d}({D}-{h_i})}{{D}f - {d} {h_i}}.$$
Finally, the DoF is given by the difference of these two distances $\textrm {DoF} = {d^{+}} - {d^{-}}$, which, after insertion of Eqs. (23) and (24) simplifies to
$$\textrm{DoF} = \frac{2{D}f {d} {h_i} ( {d} - f )}{{D}^{2} f^{2} - {d}^{2} {h_i}^{2}}.$$
In the case of $f\ll {d}$ and ${d}^{2} {h_i}^{2} \ll {D}^{2} f^{2}$, which holds for intermediate focal distances for common systems, a useful approximation is obtained:
$$\textrm{DoF} \approx \frac{2 {d}^{2} {h_i}}{{D}f}.$$
By using the definition of $\textrm {F}/\#={f} / {D}$, rearranging for ${D}$ and inserting in the previous equation, we can also express the depth of field in terms of the $\textrm {F}/\#$:
$$\textrm{DoF} \approx \textrm {F}/\# \frac{2{d}^{2}}{{f}^{2}} {h_i}.$$
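The exact formula, Eq. (25), and the f-number approximation, Eq. (27), are easy to evaluate numerically. The following sketch, with hypothetical lens values of our choosing, shows the approximation agreeing with the exact expression to within a few percent at an intermediate focus distance:

```python
# Depth of field: exact Eq. (25) vs. approximation Eq. (27).
def dof_exact(D, f, d, h_i):
    """Eq. (25): DoF = 2 D f d h_i (d - f) / (D^2 f^2 - d^2 h_i^2)."""
    return 2 * D * f * d * h_i * (d - f) / (D**2 * f**2 - d**2 * h_i**2)

def dof_approx(f_number, f, d, h_i):
    """Eq. (27): DoF ~= F/# * 2 d^2 h_i / f^2."""
    return f_number * 2 * d**2 * h_i / f**2

# Hypothetical example: 75 mm lens at F/6.3, focused at 2 m, 5 um pixels.
f, d, h_i, f_num = 75e-3, 2.0, 5e-6, 6.3
D = f / f_num                          # aperture diameter from F/# = f/D
exact = dof_exact(D, f, d, h_i)        # ~43 mm
approx = dof_approx(f_num, f, d, h_i)  # ~45 mm, within ~4 % of the exact value
```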

7.2 Comparison

From Eq. (27), we obtain the following relations for the three comparison systems. The ($\textrm {LF}$) system has an $N^{2}\times$ larger DoF than the comparison system ($\textrm {St}$) due to its optical focal length ($f$ in Eq. (27)) being reduced by a factor of $1/N$, cf. Table 1. This effect can, intuitively, be attributed to two factors of $N$ each: 1) the reduced $\textrm {NA}_o^{\textrm {LF}}$ of the light field sub-views, and, 2) the larger object space pixel size ${h_o}^{\textrm {LF}}$ due to lower resolution, but same field of view as the comparison system ($\textrm {St}$). The low-resolution standard system ($\textrm {LR}$) occupies an intermediate position: it has an $N\times$ increased DoF as compared to ($\textrm {St}$) that is due to effect 2) only (${h_i}^{\textrm {LR}}=N\times {h_i}^{\textrm {St}}$). Similar conclusions have been drawn by Levoy et al. [7].
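These scaling relations can be checked directly against the approximation of Eq. (27). The values below are hypothetical, chosen only to exhibit the $N^{2}$ and $N$ factors:

```python
# Scaling check for the three comparison systems using Eq. (27):
# DoF ~= F/# * 2 d^2 h_i / f^2. The working f-number is the same for
# all three systems; only f and h_i differ.
def dof_approx(f_number, f, d, h_i):
    return f_number * 2 * d**2 * h_i / f**2

f_num, f, d, h_i, N = 6.3, 75e-3, 2.0, 5e-6, 3
dof_st = dof_approx(f_num, f, d, h_i)      # standard system (St)
dof_lf = dof_approx(f_num, f / N, d, h_i)  # sub-view: focal length reduced to f/N
dof_lr = dof_approx(f_num, f, d, N * h_i)  # low-res system: pixel size N * h_i

assert abs(dof_lf / dof_st - N**2) < 1e-6  # N^2 larger DoF for (LF)
assert abs(dof_lr / dof_st - N) < 1e-6     # N   larger DoF for (LR)
```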

7.3 Discussion

This analysis appears to show that the $\textrm {F}/\#$ is not an adequate measure for characterizing the DoF if different sensor systems or modalities are compared. However, the picture is not complete without the digital processing that can be applied to the raw data of light field imaging systems.

In particular, the sub-views can be (shifted and) summed to synthesize images that are equivalent to refocused versions of the image generated by the low-resolution standard system ($\textrm {LR}$). For practical application, note that (1) the DoF changes with the focal distance, so digital “shift-and-add” refocusing is not exactly equivalent to optical refocusing, and (2) generating a visually pleasing synthetic refocus typically requires view interpolation [10]. The effective aperture of the ($\textrm {LF}$) system in the table then becomes equal to that of the low-resolution standard system ($\textrm {LR}$).
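A minimal sketch of such shift-and-add refocusing (our own illustrative implementation, integer-pixel shifts only; practical pipelines use sub-pixel shifts and view interpolation [10]):

```python
import numpy as np

def shift_and_add(views, disparity):
    """Synthetic refocus: shift each sub-view by its view-dependent
    disparity (rounded to whole pixels here) and average the result.
    'views' is a list of equally sized 2D arrays ordered along one
    parallax direction; 'disparity' selects the synthetic focal plane."""
    n = len(views)
    center = (n - 1) / 2.0
    acc = np.zeros_like(views[0], dtype=float)
    for i, v in enumerate(views):
        shift = int(round((i - center) * disparity))
        acc += np.roll(v, shift, axis=1)  # horizontal parallax only
    return acc / n
```

Objects at the selected disparity add coherently and stay sharp, while objects at other depths are averaged into blur, emulating the shallower DoF of the undivided aperture.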

In order to synthesize the even narrower DoF of the high-resolution standard system ($\textrm {St}$), digital super-resolution techniques [10,24] must be employed, which effectively reduce ${h_i}^{\textrm {LF}}$ to ${h_i}^{\textrm {LF}}/N={h_i}^{\textrm {St}}$. The narrow DoF of the high-resolution standard system ($\textrm {St}$) can then be synthesized fully. In fact, it is even possible to extrapolate synthetic apertures and achieve a still narrower DoF [4].

Therefore, I propose to still interpret the $\textrm {F}/\#$ as a rough measure of the minimum synthetic DoF that can certainly be achieved (i.e. without extrapolation) by a light field imaging system involving digital processing.

8. Conclusions

This article analyzed the $\textrm {F}/\#$ of light field imaging systems by linking them to equivalent 2D standard imaging systems; in doing so, several relations and peculiarities were illuminated. In summary, just as in standard photography, the $\textrm {F}/\#$ is a good measure of light efficiency and SNR in good illumination conditions. With digital summation, even the SNR of the equivalent low-resolved system can be matched. For low light conditions, light field systems suffer a slight disadvantage that may be exacerbated by registration problems for the then noisy data. In terms of depth-of-field, the equivalent $\textrm {F}/\#$ provides a rough measure of the minimal synthetic refocus depth of field that can be achieved. For the analysis of a light field system, the comparable 2D standard imaging system quantities should be determined and communicated.

Acknowledgments

Portions of this work were presented at the conference “Quality Control by Artificial Vision” (QCAV) in 2019 as a non-peer reviewed invited paper contribution with the title “An Equivalent F-Number for Light Field Systems: Light Efficiency, Signal-to-Noise Ratio, and Depth of Field”. I would like to thank Loïs Mignard-Debise for his careful proof-reading of this prior version of the paper. I would also like to thank the anonymous reviewers for their careful reading and insightful questions.

Disclosures

II: K|Lens GmbH (F,I,E).

Data availability

Data underlying the results presented in this paper are available in Dataset 1, Ref. [25].

References

1. E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Machine Intell. 14(2), 99–106 (1992). [CrossRef]  

2. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Computer Science Technical Report CSTR 2(11), 1–11 (2005).

3. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in 2009 IEEE International Conference on Computational Photography (ICCP), (IEEE, 2009), pp. 1–8.

4. A. Manakov, J. Restrepo, O. Klehm, R. Hegedus, E. Eisemann, H.-P. Seidel, and I. Ihrke, “A reconfigurable camera add-on for high dynamic range, multispectral, polarization, and light-field imaging,” ACM Trans. Graph. 32(4), 1–14 (2013). [CrossRef]  

5. A. Llavador, J. Sola-Pikabea, G. Saavedra, B. Javidi, and M. Martínez-Corral, “Resolution improvements in integral microscopy with Fourier plane recording,” Opt. Express 24(18), 20792–20798 (2016). [CrossRef]  

6. L. Mignard-Debise, J. Restrepo, and I. Ihrke, “A unifying first-order model for light-field cameras: the equivalent camera array,” IEEE Trans. Comput. Imaging 3(4), 798–810 (2017). [CrossRef]  

7. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. 25(3), 924–934 (2006). [CrossRef]  

8. T. G. Georgiev and A. Lumsdaine, “Depth of Field in plenoptic cameras,” in Eurographics (Short Papers) (2009). [CrossRef]  

9. M. Martínez-Corral and B. Javidi, “Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems,” Adv. Opt. Photonics 10(3), 512–566 (2018). [CrossRef]  

10. I. Ihrke, J. Restrepo, and L. Mignard-Debise, “Principles of light field imaging: Briefly revisiting 25 years of research,” IEEE Signal Process. Mag. 33(5), 59–69 (2016). [CrossRef]  

11. G. Wu, B. Masia, A. Jarabo, Y. Zhang, L. Wang, Q. Dai, T. Chai, and Y. Liu, “Light field image processing: An overview,” IEEE J. Sel. Top. Signal Process. 11(7), 926–954 (2017). [CrossRef]  

12. O. Johannsen, C. Heinze, B. Goldluecke, and C. Perwaß, “On the calibration of focused plenoptic cameras,” in Proc. Seminar Time-of-Flight and Depth Imaging: Sensors, Algorithms, Applications, 302–317 (2013).

13. Q. Zhang, C. Zhang, J. Ling, Q. Wang, and J. Yu, “A generic multi-projection-center model and calibration method for light field cameras,” IEEE Trans. Pattern Anal. Mach. Intell. 41(11), 2539–2552 (2018). [CrossRef]  

14. C. Hahne, A. Aggoun, S. Haxha, V. Velisavljevic, and J. C. J. Fernández, “Light field geometry of a standard plenoptic camera,” Opt. Express 22(22), 26659–26673 (2014). [CrossRef]  

15. R. Ng, “Digital light field photography,” PhD Thesis, Stanford University (2006).

16. Q. Cui, S. Zhu, and L. Gao, “Developing an optical design pipeline for correcting lens aberrations and vignetting in light field cameras,” Opt. Express 28(22), 33632–33643 (2020). [CrossRef]  

17. L. Mignard-Debise and I. Ihrke, “A vignetting model for light field cameras with applications to light field microscopy,” IEEE Trans. Comput. Imaging 5(4), 585 (2019). [CrossRef]  

18. C. J. Oliver and E. R. Pike, “Multiplex advantage in the detection of optical images in the photon noise limit,” Appl. Opt. 13(1), 158–161 (1974). [CrossRef]  

19. A. Wuttig, “Optimal transformations for optical multiplex measurements in the presence of photon noise,” Appl. Opt. 44(14), 2710–2719 (2005). [CrossRef]  

20. I. Ihrke, G. Wetzstein, and W. Heidrich, “A theory of plenoptic multiplexing,” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, (IEEE, 2010), pp. 483–490.

21. European Machine Vision Association, “EMVA Standard 1288: Standard for characterization of image sensors and cameras, release 3.0,” European Machine Vision Association, Nov. 29, (2010).

22. L. Mignard-Debise and I. Ihrke, “Light-field microscopy with a consumer light-field camera,” in 2015 International Conference on 3D Vision, (IEEE, 2015), pp. 335–343.

23. J. Liu, D. Claus, T. Xu, T. Keßner, A. Herkommer, and W. Osten, “Light field endoscopy and its parametric description,” Opt. Lett. 42(9), 1804–1807 (2017). [CrossRef]  

24. T. E. Bishop, S. Zanetti, and P. Favaro, “Light field superresolution,” in 2009 IEEE International Conference on Computational Photography (ICCP), (IEEE, 2009), 1–9.

25. I. Ihrke, “Raw images K|Lens One and Lytro,” Zenodo (2022), https://doi.org/10.5281/zenodo.6327312.

Supplementary Material (1)

Dataset 1: Raw images K|Lens One and Lytro.




Figures (6)

Fig. 1. Object-side view of three comparison systems observing the same field of view. The systems a) and b) use the same hypothetical sensor; system c) uses a sensor of the same size but with larger pixels. Left: standard imaging. Middle: light field imaging. Right: standard imaging with a low-resolution sensor (same physical size as in the other cases). In the case of the light field system, the aperture is subdivided by a factor of $N=3$ in this case, and the object-side pixel size $h_o$ of the standard system is increased by the same factor. The low-resolution system has the full NA of the standard system, but a smaller number of larger pixels chosen so as to match those of a light field subview. In the lower row (’spatial’), the object-side image of the sensor has been colored red to emphasize this fact. The different pixel size for the different settings is indicated by the different spacing of the tick marks. Please also note that the entrance pupil planes for the 2D systems and the LF system differ in general. This is discussed in detail in Section 5.

Fig. 2. Vignetting in two optical light field configurations. Left: kaleidoscopic light field imaging. Right: afocal lenslet-based light field imaging. White areas pass light, whereas grayed-out areas are vignetted. The top row shows the angular sampling pattern in the aperture of the main lens of the light field system. The middle row shows the spatial sampling pattern on the sensor, while the bottom row illustrates decoded light field subviews. The areas of the main lens aperture and the sensor, respectively, that are not covered by the light field system introduce loss of light as compared to the standard 2D imaging system of Fig. 1 a). The kaleidoscopic system suffers from angular vignetting since the main lens aperture is not fully covered. The lenslet system has angular vignetting between the microlens images (middle row), but often also shows spatial vignetting (bottom row) that is introduced by the cat’s-eye shape of the outer microlens images.

Fig. 3. Propagating the LF equivalent camera array (ECA) entrance pupil (red) to that of the comparison standard system (blue) overfills its aperture as compared to a perfect $1/N$ subdivision of the pupil.

Tables (2)

Table 1. Relations for the 3 comparison systems shown in Fig. 1 that are derived in the paper. All quantities are in reference to the standard system (St).


Equations (28)

$$h_i^{\textrm {St}} = h_i^{\textrm {LF}} = \frac {1}{N} \times h_i^{\textrm {LR}},$$
$$h_o^{\textrm {St}} = \frac {1}{N} \times h_o^{\textrm {LF}} = \frac {1}{N} \times h_o^{\textrm {LR}},$$
$$M^{\textrm {St}} = N \times M^{\textrm {LF}} = M^{\textrm {LR}},$$
$$M^{\textrm {LF}} = \frac {h_i^{\textrm {LF}}}{h_o^{\textrm {LF}}} = \frac {h_i^{\textrm {St}}}{N \times h_o^{\textrm {St}}} = \frac {1}{N} \times M^{\textrm {St}}, \quad \textrm {and} \quad M^{\textrm {LR}} = \frac {h_i^{\textrm {LR}}}{h_o^{\textrm {LR}}} = \frac {N \times h_i^{\textrm {St}}}{N \times h_o^{\textrm {St}}} = M^{\textrm {St}}.$$
$$f^{\textrm {St}} = \frac {d}{(M^{\textrm {St}})^{-1}+1} \approx \frac {d}{(M^{\textrm {St}})^{-1}}, \quad \textrm {but} \quad f^{\textrm {LF}} = \frac {d}{(M^{\textrm {LF}})^{-1}+1} = \frac {d}{N \times (M^{\textrm {St}})^{-1}+1} \approx \frac {1}{N} f^{\textrm {St}}.$$
$$f_{\textrm {eq}}^{\textrm {LF}} = f^{\textrm {St}} = f^{\textrm {LR}}.$$
$$\textrm {NA}_o^{\textrm {St}} = N \times \textrm {NA}_o^{\textrm {LF}} = \textrm {NA}_o^{\textrm {LR}}$$
$$\textrm {F}/\#_w^{\textrm {St}} = \frac {1}{2} \frac {M^{\textrm {St}}}{\textrm {NA}_o^{\textrm {St}}}, \quad \textrm {F}/\#_w^{\textrm {LF}} = \frac {1}{2} \frac {M^{\textrm {LF}}}{\textrm {NA}_o^{\textrm {LF}}} = \frac {1}{2} \frac {\frac {1}{N} \times M^{\textrm {St}}}{\frac {1}{N} \times \textrm {NA}_o^{\textrm {St}}} = \textrm {F}/\#_w^{\textrm {St}},$$
$$G^{\textrm {St}} = h_o^{\textrm {St}}\, \textrm {NA}_o^{\textrm {St}}, \quad G^{\textrm {LF}} = h_o^{\textrm {LF}}\, \textrm {NA}_o^{\textrm {LF}} = (N \times h_o^{\textrm {St}}) \left(\frac {1}{N} \textrm {NA}_o^{\textrm {St}}\right) = G^{\textrm {St}}, \quad \textrm {whereas} \quad G^{\textrm {LR}} = h_o^{\textrm {LR}}\, \textrm {NA}_o^{\textrm {LR}} = (N \times h_o^{\textrm {St}}) (\textrm {NA}_o^{\textrm {St}}) = N \times G^{\textrm {St}}.$$
$$\sigma_{\textrm {tot}}^{2} = \sigma_r^{2} + \sigma_p^{2},$$
$$\sigma_p = \sqrt {a G}.$$
$$\textrm {SNR} = \frac {a G}{\sigma_{\textrm {tot}}} = \frac {a\, h_i}{2\, \textrm {F}/\#_w\, \sigma_{\textrm {tot}}},$$
$$\textrm {SNR}^{\textrm {LR}} \overset {(11)}{=} \frac {a G^{\textrm {LR}}}{\sigma_{\textrm {tot}}^{\textrm {LR}}} \overset {(8),(9)}{=} \frac {N a G^{\textrm {St}}}{\sqrt {\sigma_p^{2\,\textrm {LR}} + \sigma_r^{2\,\textrm {LR}}}} \overset {(8),(10)}{=} \frac {N a G^{\textrm {St}}}{\sqrt {N \sigma_p^{2\,\textrm {St}} + \sigma_r^{2\,\textrm {St}}}}$$
$$\textrm {SNR}^{\textrm {LR}} = \sqrt {N}\, \frac {a G^{\textrm {St}}}{\sigma_{\textrm {tot}}^{\textrm {St}}} = \sqrt {N}\, \textrm {SNR}^{\textrm {St}} = \sqrt {N}\, \textrm {SNR}_{\textrm {oneview}}^{\textrm {LF}},$$
$$\textrm {SNR}^{\textrm {LF}} = \frac {\sum_{i=1}^{N} a G^{\textrm {LF}}}{\sqrt {\sum_{i=1}^{N} \sigma_p^{2\,\textrm {LF}}}} = \sqrt {N}\, \textrm {SNR}_{\textrm {oneview}}^{\textrm {LF}}.$$
$$\textrm {SNR}^{\textrm {LF}} = \frac {\sum_{i=1}^{N} a G^{\textrm {LF}}}{\sqrt {\sum_{i=1}^{N} \sigma_{\textrm {tot}}^{2\,\textrm {LF}}}} = \frac {N a G^{\textrm {LF}}}{\sqrt {N (\sigma_p^{2\,\textrm {LF}} + \sigma_r^{2\,\textrm {LF}})}} = \frac {N a G^{\textrm {St}}}{\sqrt {N \sigma_p^{2\,\textrm {St}} + N \sigma_r^{2\,\textrm {St}}}},$$
$$c_{\textrm {system}}\,[\%] = c_{\textrm {spatial}}\,[\%] \times c_{\textrm {angular}}\,[\%],$$
$$\hat {G} = c_{\textrm {angular}}\, G.$$
$$c_{\textrm {angular}} = \frac {\sum_i A_i}{\pi/4 \times (D^{\textrm {St}})^{2}}.$$
$$D_{\textrm {eff}}^{\textrm {LF}} = \sqrt {c_{\textrm {angular}}} \times D^{\textrm {St}}.$$
$$\oslash_p = 1 - \frac {D^{\textrm {St}}}{N\, D^{\textrm {LF}}}.$$