Abstract
The paper discusses the light efficiency and signal-to-noise ratio (SNR) of light field imaging systems in comparison to classical 2D imaging, which necessitates the definition of focal length and f-number. A comparison framework between 2D imaging and arbitrary light field imaging systems is developed and exemplified for the kaleidoscopic and the afocal light field imaging architectures. Since the f-number, in addition to the light efficiency of the system, is conceptually linked to the depth-of-field, an appropriate depth-of-field interpretation for light field systems is discussed as well.
© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
1. Introduction
The $\textrm {F}/\#$ captures the major features of imaging systems and is therefore in popular use. It describes the light efficiency of an optical system as well as the corresponding loss in depth-of-field for faster lenses.
Since light field imaging systems [1–5] perform aperture subdivision, the exact meaning of the $\textrm {F}/\#$ is ambiguous in their case. In addition, since the $\textrm {F}/\#$ involves the focal length of the system, the field of view interpretation is affected as well.
This article therefore discusses the major features related to the $\textrm {F}/\#$ of arbitrary light field imaging systems: their field-of-view, light efficiency, SNR, and depth-of-field. The focus is on establishing conditions that enable a comparison to standard 2D imaging systems and that yield a consistent meaningful description of their properties. As for 2D imaging systems, these will be suitable definitions of focal length and $\textrm {F}/\#$.
Since light field imaging systems differ considerably in their optical implementation, the derivations are carried out in object space using the equivalent camera array (ECA) model of Mignard-Debise and Ihrke [6], which is a geometrical optics model. While the analysis is therefore limited to first-order considerations, this measure abstracts from the particular systems being studied and enables their comparison on common grounds by linking them to an equivalent 2D imaging scenario. The development is exemplified by studying the K|Lens One, a light field objective lens based on the kaleidoscopic light field imaging principle [4], and the Lytro Illum, a lenslet-based light field camera in the afocal configuration [1,2]. The light efficiency of the afocal configuration has been studied in the microscopic context [7]. The other known implementations are:
- the Fourier Light Field configuration for microscopy applications, where depth-of-field and light efficiency have been studied [9].
The majority of treatments are in terms of thin lens geometrical models [2,3,8,12,13] for individual light field architectures. Unifying geometric models based on light beam parameterizations have been developed to enable comparative studies of different architectures [6,14], where the former work is in terms of phase space cells, as also explored in [15], and the latter work is formulated in terms of virtual cameras that are forming an “equivalent camera array” (ECA). This description is most suited to the current analysis.
The major attention of these geometric models has, however, been on the geometric properties of the resulting cameras [6,14] since these directly relate to their depth sensing capabilities and performance. Another notable use of geometric light field imaging models has been the calibration of these systems [12,13], which is a necessary prerequisite for satisfactory operation. Optical design considerations for light field systems have only recently been published [16]. Light throughput and depth-of-field analyses have so far been restricted to light field microscopy systems [7,9,17].
In contrast to prior work, this article aims at photographic applications and enables a comparative study of different light field systems in terms of their major optical properties relating to light throughput and SNR. I propose suitable definitions and discuss details of interpretation for the focal length, f-number, and depth-of-field, relating them to their 2D imaging counterparts. The article is intended to aid the communication and interpretation of these numbers in the context of light field imaging.
The paper is organized as follows: Section 2 introduces the comparison setting that is elaborated throughout the paper and introduces the equivalent focal length of a light field imaging system. Section 3 then investigates the conditions for imaging at the same exposure, which is a prerequisite for analyzing the SNR properties in Section 4. The development of Sections 2–4 is performed in a simplified setting. Complicating factors of real light field imaging systems and the required adaptations are then discussed in Section 5. Section 6 applies the developed concepts to two example light field systems. Since the comparison settings derived in this way result in a different depth of field for the comparison systems, the implications are discussed in Section 7.
2. Comparison setting and equivalent focal length
2.1 Setting
We are concerned with imaging the same field of view for a light field system ($\textrm {LF}$) and a standard 2D imaging system ($\textrm {St}$) at equal exposure, fixing two of the important quantities for comparison. The labels in parentheses are used in the following as a superscript to variables indicating the system that a quantity relates to. Further, a sensor of the same size and equal pixel resolution is assumed for the standard ($\textrm {St}$) and the light field ($\textrm {LF}$) setting.
As the $(\textrm {LF})$ system, in order to obtain angular information, captures sub-views at a lower spatial resolution, we also consider a low resolution ($\textrm {LR}$) setting. This system shares the same field of view but uses a sensor with a lower pixel resolution matching the one of a single $(\textrm {LF})$ system sub-view, but having the same physical size as the standard system $(\textrm {St})$.
A summary of the setting is shown in Fig. 1. The analysis, except for parts of Section 5, will be performed in a 2D setting to simplify the expressions. The conclusions are easily transferred to the full 3D case.
Further, the main characteristics and the comparison setting are first developed in a simplified manner, i.e. an idealized light field setting is assumed where 1. the entrance pupil of the light field system is tightly subdivided and fully filled by the light field sub-apertures, and 2. the entrance pupil plane of the main lens and that of the equivalent camera array (ECA) of the light field system agree. The adaptations to real cases where these assumptions do not hold will be discussed in Section 5.
Introducing ${h_i}$ as a sensor pixel size, the assumptions lead to
where $N$ is the number of sub-views that the $(\textrm {LF})$ system generates via aperture sub-sampling. In the following, we will gradually build up Table 1, where all relations are collected in an easily accessible manner.

2.2 Equivalent focal length
Returning to Fig. 1, we use an object-space description [6] to abstract from specific optical light field system implementations. We primarily consider the entrance pupil of the system and argue in object space using the magnification $M=\frac {{h_i}}{{h_o}}$, where ${h_o}$ is the object space pixel size. Since we are assuming a common field of view of the comparison systems, we have
We see that the required optical focal length is reduced by the factor $\frac {1}{N}$ for the light field ($\textrm {LF}$) setting. Since the focal length / sensor size combination is commonly used to interpret the field of view of the system, the optical focal length ${f}^{\textrm {LF}}$ of the ($\textrm {LF}$) system is misleading.
I therefore propose to use the focal length ${f}^{\textrm {St}}$ of the comparable standard system as an equivalent focal length for the light field system:
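As a small numerical illustration (my own sketch, not code from the paper), the equivalent focal length simply rescales the optical sub-view focal length by the number of sub-views $N$ per lateral direction:

```python
# Equivalent focal length (my own sketch, not code from the paper):
# the light field system is described by f_eq = N * f^LF, i.e. the focal
# length of the standard 2D system imaging the same field of view.

def equivalent_focal_length(f_lf: float, n_subviews: int) -> float:
    """f_eq = N * f^LF for an N-fold aperture subdivision per direction."""
    return n_subviews * f_lf

# K|Lens One values of Section 6.1: sub-view focal length 78/3 = 26 mm, N = 3.
print(equivalent_focal_length(26.0, 3))  # 78.0
```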
3. Light efficiency and F-number
Since the $\textrm {F}/\#={\frac {f}{D}}$ involves the focal length $f$ as well as the physical aperture size $D$ in the entrance pupil plane ${d}=0$, we now discuss the conditions for equal exposure by the comparison systems.
3.1 F-Number
We perform the analysis in terms of the object-side numerical aperture $\textrm {NA}_o={n}\sin \alpha _o={M}/(2\,\textrm {F}/\#_w)$ with $\textrm {F}/\#_w=(1+|{M}|)\textrm {F}/\#$ being the working f-number for unit pupil magnification and imaging in air (${n}=1$), which is assumed in the following. For photography at medium to large distances, the working f-number and the f-number are approximately identical.
As is seen from Fig. 1,
and therefore

A similar argument can be made in terms of entrance aperture sizes: Let the full aperture in Fig. 1 be given by ${D}$, then ${D}^{\textrm {St}} = N \times {D}^{\textrm {LF}} = {D}^{\textrm {LR}}$. Using the respective optical focal lengths of Eq. (4), $\textrm {F}/\#^{\textrm {St}}=\textrm {F}/\#^{\textrm {LF}}=\textrm {F}/\#^{\textrm {LR}}$ is obtained.
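The f-number equality can be checked numerically. The sketch below is my own; the values are taken from the K|Lens example of Section 6.1, and the partition is the idealized one of Section 2:

```python
import math

# Numeric check of the F/# equality of Section 3.1 under the idealized
# aperture partition of Section 2 (illustrative values, N = 3 sub-views
# per lateral direction):

N = 3
f_st, D_st = 78.0, 20.0          # (St): focal length and full aperture (mm)
f_lf, D_lf = f_st / N, D_st / N  # (LF) sub-view: both scale by 1/N
f_lr, D_lr = f_st, D_st          # (LR): unchanged optics, coarser sensor

def fnum(f, D):
    return f / D

assert math.isclose(fnum(f_st, D_st), fnum(f_lf, D_lf))
assert math.isclose(fnum(f_st, D_st), fnum(f_lr, D_lr))
print(fnum(f_st, D_st))  # 3.9
```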
3.2 Exposure
While the $\textrm {F}/\#$ is equal for all comparison systems, the exposure, which is proportional to the étendue ${G} = {h_i} \textrm {NA}_i = {h_o} \textrm {NA}_o$ of a system differs:
The following additional points should be emphasized:
- 1. When using the equivalent focal length ${f_\textrm {eq}}^{\textrm {LF}}$ proposed in Eq. (5) for describing the light field system, the full system aperture ${D}^{\textrm {St}}$, i.e. covering all light field sub-views, should be used to satisfy the $\textrm {F}/\#$ equality.
- 2. The comparison $\textrm {F}/\#^{\textrm {St}}$ is not equal to the main lens f-number of the light field system since this would be based on the optical focal length: ${f}^{\textrm {LF}} / {D}^{\textrm {St}}$ rather than on the equivalent focal length ${f_\textrm {eq}}^{\textrm {LF}} / {D}^{\textrm {St}}$.
- 3. As seen from Eq. (8), the f-number is not a good concept for predicting the exposure if different sensors are involved. Instead, the étendue must be considered.
- 4. Alternatively, the factor $N$ in the last row of Eq. (8) could be moved from the object-space pixel size ${h_o}^{\textrm {St}}$ to the $\textrm {NA}_o^{\textrm {St}}$-term. Formulated in terms of $\textrm {F}/\#_w$, this results in a factor of $1/N$, i.e. an equivalent f-number $\textrm {F}/\#_\textrm {eq}^{\textrm {LR}}= 1/N \times \textrm {F}/\#^{\textrm {St}}$ that expresses the improved exposure in relation to the standard system. This is also relevant for the digital summation of the subviews as discussed next.
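The étendue relations of Eq. (8) can be sketched as follows (my own sketch in the idealized 2D setting; the units are arbitrary and illustrative):

```python
import math

# Étendue comparison of Eq. (8) in the idealized 2D setting:
# G = h_o * NA_o per pixel (arbitrary illustrative units).

N = 3
h_o_st, na_st = 1.0, 0.1            # (St) object-space pixel size and NA

G_st = h_o_st * na_st               # high-resolution standard system
G_lf = (N * h_o_st) * (na_st / N)   # sub-view: N x pixel, NA reduced by 1/N
G_lr = (N * h_o_st) * na_st         # low-res system: N x pixel, full NA

assert math.isclose(G_lf, G_st)     # same per-pixel exposure for (LF) and (St)
assert math.isclose(G_lr, N * G_st) # factor-N exposure advantage of (LR)
```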
4. Signal-to-noise ratio
The current section is concerned with digital summation procedures that produce virtually equivalent exposures in Eq. (8), i.e. that compensate the factor-$N$ exposure advantage of the low resolution system: for the standard system (St), $N$ adjacent high-resolution pixels can be digitally added, whereas for the light field system (LF), a more complex procedure involving sub-view registration is required [10]. Since $N$ sub-views are available, depth-compensated corresponding pixels (i.e. those showing the same object point) can be used for summation. The following discussion focuses on the difference between analog integration of photo electrons vs. digital summation after A/D conversion.
In the following we use a simple noise model that is commonly used in the literature [18–20]:
i.e. a combination of read noise $\sigma _{\textrm {r}}$ and photon noise $\sigma _{\textrm {p}}$ for a pixel. The read noise depends on the operating conditions and electronics of the pixel (but not its size) and can be assumed to be equal for the comparison systems, whereas the photon noise is inherent in measuring the arriving photons. Dark noise could be added and modeled as depending on the pixel size, assuming that the quantum well scales with the size of the pixel.

In the following, consider the object to consist of a homogeneous light source as e.g. required by the EMVA 1288 standard [21], or, alternatively, of a Lambertian reflector. In both cases, the radiance $L [\frac {\textrm {W}}{\textrm {m}^{2} \textrm {sr}}]$ is constant for all light directions. The energy $Q [\textrm {J}]$ falling onto a single pixel can therefore be expressed in terms of the étendue $G [\textrm {m}^{2} \textrm {sr}]$ of the optical system: $Q [\textrm {J}] = L [\frac {\textrm {W}}{\textrm {m}^{2} \textrm {sr}}] \cdot G [\textrm {m}^{2} \textrm {sr}] \cdot t [\textrm {s}]$. Joining the factors characterizing illumination/scene and exposure time into a single constant $a=L\cdot t$, the signal can be expressed as $a \cdot G$. The photon noise is proportional to the square root of the signal, i.e.
The SNR is the ratio of signal over noise:

Returning to the comparison with the low-resolved comparison system (LR), the effect in terms of signal-to-noise ratio (SNR) of a single pixel is an approximate $\sqrt {N}$ improvement of the low-resolved standard system (LR) over the pixels in the individual light field (LF) subviews and the pixels in the high-resolved standard system (St):
4.1 Case I: Negligible read noise
If read noise is negligible, i.e. photon noise dominates over all other noise sources ($\sigma _{\textrm {r}} \ll \sigma _{\textrm {p}}$), the expression may be further simplified to
Let us assume that the $N$ light field sub-views can be registered (i.e. their disparity can be computed and compensated for). In this case, the registered subviews may be added digitally to produce an improved SNR.
The signal is then composed of the sum of $N$ digitally registered pixels and the SNR for negligible read noise becomes:
4.2 Case II: Low light conditions
In low light conditions, the read noise is non-negligible. The expected difference is that, in the case of light field imaging (LF) and digitally summed standard imaging ($\textrm {St}$), the read noise of $N$ pixel amplifiers poses a disadvantage, as compared to the low-resolved standard imaging system (LR) where only one amplifier is present. Indeed
Note that all SNR expressions can equivalently be given in terms of the working f-number $\textrm {F}/\#_w$, Eq. (11).
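The two cases can be illustrated with a toy numerical model (my own sketch; as in the text, $a$ bundles radiance and exposure time, and the values below are illustrative):

```python
import math

# Toy model of the SNR discussion in Section 4: the signal is a*G and the
# photon noise variance equals the signal in electron units.

def snr_single(signal, sigma_r):
    """Single pixel: analog integration, one read-out."""
    return signal / math.sqrt(sigma_r**2 + signal)

def snr_digital_sum(signal, sigma_r, n):
    """n registered pixels summed after A/D conversion: n read-outs."""
    return n * signal / math.sqrt(n * sigma_r**2 + n * signal)

N, aG = 3, 1.0e4          # sub-views and per-pixel signal (electrons)
sigma_r = 5.0             # read noise (electrons)

snr_sum = snr_digital_sum(aG, sigma_r, N)  # (LF)/(St) pixels summed digitally
snr_lr = snr_single(N * aG, sigma_r)       # (LR): N x the signal, one amplifier

# Case II: with non-negligible read noise, analog integration of (LR) wins.
assert snr_lr > snr_sum
# Case I: in the photon-limited regime the two coincide.
assert abs(snr_digital_sum(aG, 0.0, N) - snr_single(N * aG, 0.0)) < 1e-9
```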
5. Real light field systems and the equivalent f-number
The previous analysis describes an idealized setting in that it is tacitly assumed that the complete light cone of the standard system ($\textrm {St}$) is partitioned, i.e. sub-divided without loss of rays and without overlap, and that all sensor pixels are utilized. In general, this is not the case [17,22].
5.1 Vignetting: entrance pupil under-fill
Real light field systems exhibit vignetting and Mignard-Debise [22] introduced a separation into spatial and angular vignetting, as illustrated in Fig. 2. The illustrating example systems are kaleidoscopic light field imaging [4] and afocal lenslet-based light field imaging [1,2] systems. For correctly interpreting the figure, note that the afocal lenslet-based light field architecture is characterized by exchanged positions for the angular and the spatial sampling, see also [6].
According to Mignard-Debise [22], vignetting can be roughly described by a scalar factor
that multiplies the étendue ${G}$ of the ideal system. Since the étendue is a measure of exposure, i.e. it applies to pixels individually, it is more adequate to only consider the angular vignetting part in the definition of an effective étendue:

For estimating the vignetting factor, we usually need to resort to the 3D case by considering a 2D version of the aperture, e.g. as in Fig. 2 (upper left). An example calculation for this system will be given in Section 6.1. Denote the area of the light-passing sub-view apertures by $A_i, i=1..9$ and assume the encircling aperture of the comparison system ($\textrm {St}$) has a diameter of ${D}^{\textrm {St}}$; then ${c_{\textrm {angular}}}$ is given by the ratio

Now, consider ${c_{\textrm {angular}}}$ to be estimated and known. To relate back to our derivations hitherto performed in the simplified setting, we can use the estimated angular vignetting factor to compute an equivalent effective system diameter ${D}^{\textrm {LF}}_\textrm {eff}$, assuming a circular aperture with the same relative area as the sum of the sub-view areas in Eq. (18). Using ${D}^{\textrm {LF}}_\textrm {eff}$, a more realistic effective f-number can be defined as $\textrm {F}/\#_\textrm {eff}={f_\textrm {eq}}^{\textrm {LF}} / {D}^{\textrm {LF}}_\textrm {eff} = 1/\sqrt {{c_{\textrm {angular}}}}\times \textrm {F}/\#^{\textrm {St}}$. From the derivation in terms of the full comparison system aperture, the effective f-number applies to the standard system ($\textrm {St}$), but due to the equality of the f-numbers also to the other two comparison systems.

It should additionally be emphasized that the spatial vignetting part ${c_{\textrm {spatial}}}$ of Eq. (16) should not be forgotten in the analysis of a light field imaging system. Effectively, spatial vignetting reduces the number of “mega-rays” that can be successfully acquired by a light field imaging system and typically reduces the angular coverage for extreme field points.
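The chain from sub-aperture areas to an effective f-number can be sketched as follows (my own helper functions, not code from the paper):

```python
import math

# Helper sketch for Section 5.1: estimate c_angular as the area ratio of
# Eq. (18) and derive the effective diameter and f-number from it.

def c_angular(subaperture_areas, d_st):
    """Ratio of light-passing sub-view area to the full (St) aperture area."""
    return sum(subaperture_areas) / (math.pi * (d_st / 2) ** 2)

def d_eff(c, d_st):
    """Circular aperture with the same relative area as the sub-views."""
    return math.sqrt(c) * d_st

def fnum_eff(fnum_st, c):
    """F/#_eff = F/#_St / sqrt(c_angular)."""
    return fnum_st / math.sqrt(c)

# With the K|Lens values of Section 6.1 (c_angular = 0.42, F/# = 3.9):
print(round(fnum_eff(3.9, 0.42), 1))  # 6.0
```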
5.2 Entrance pupil plane mismatch: entrance pupil overfill
Several light field imaging systems, in particular, the common microlens-based afocal and focused configurations [2,3], have virtual entrance pupil planes for their equivalent camera array that are displaced from the entrance pupil of the main lens, see [6] for details. In this case, an overfill of the comparison entrance pupil can be observed, see Fig. 3, also referred to, and quantified as overlap in [23].
As an example, if the virtual entrance pupils of the light field subviews are located in front of the main lens entrance pupil (e.g. the afocal microlens setting results in a displacement of one main lens focal length), the pupil areas of different field points spread over a larger area in the main lens aperture as well as exhibiting enlarged areas individually as compared to the perfect $1/N$ subdivision discussed in Sections 2–4. This typically leads to spatial vignetting at the edges of the field, Fig. 2(b) (middle and lower).
This factor can be taken into account by comparing the sum of the subview NAs to the main lens NA [23]. In the current discussion, it is equivalent to state the relation in terms of entrance pupil diameters. Liu et al. [23] propose to quantify the overlap in percent by:
Underfill and the vignetting factor ${c_{\textrm {angular}}}$, as discussed in the previous subsection, could be interpreted as negative overlap. Alternatively, overfill could be seen as a ${c_{\textrm {angular}}}>1$, where ${c_{\textrm {spatial}}}$ is typically reduced. The two definitions can thus be used interchangeably.

6. Example systems
6.1 Case Study I: K|Lens One
To illustrate the discussion, let us consider an example system. We will use the K|Lens One (K|Lens GmbH, http://k-lens-one.com) that is based on the kaleidoscopic light field imaging principle shown in Fig. 2 (left), the working principle of which is described in [4]. The kaleidoscopic LF architecture has matched entrance pupils for the ECA and the main lens, but features angular vignetting, as discussed in Section 5.1.
The system has an equivalent focal length ${f_\textrm {eq}}^{\textrm {LF}}=78\textrm {mm}={f}^{\textrm {St}}$ and a main lens entrance pupil diameter of ${D}^{\textrm {St}}=20\textrm {mm}$. From considerations in Section 3, we would estimate $\textrm {F}/\#=78/20=3.9$. However, from the basic geometry in Fig. 2 (upper left), we can estimate ${c_{\textrm {angular}}}=0.42$. The conditions used are 1. the sub-view aperture geometry touches the main lens aperture in the diagonal sub-apertures. 2. the ratio of center distances is $3:2$, and 3. vertical sub-apertures touch. The conditions can be related to the maximal setting of a round aperture for the sub-views in the kaleidoscopic light field architecture. Using the vignetting factor, we obtain an effective $\textrm {F}/\#_\textrm {eff}=6.0$. The real sub-view aperture size from optical simulations is ${D}^{\textrm {LF}}=4.1\textrm {mm}$ which yields an $\textrm {F}/\#_\textrm {eff}=6.3$.
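The value ${c_{\textrm {angular}}}=0.42$ can be reproduced from elementary geometry. The sketch below is my own reconstruction; the grid spacings encode my reading of conditions 1.–3. and are assumptions, not values stated in the paper:

```python
import math

# Reconstruction of c_angular for the K|Lens One aperture geometry of
# Fig. 2 (upper left): nine circular sub-apertures on a 3 x 3 grid.

d = 1.0          # sub-aperture diameter (arbitrary units)
s_v = d          # condition 3: vertically adjacent sub-apertures touch
s_h = 1.5 * s_v  # condition 2: center-distance ratio 3:2

# condition 1: the diagonal sub-aperture touches the main aperture from inside
R = math.hypot(s_h, s_v) + d / 2

c_angular = 9 * (d / 2) ** 2 / R ** 2   # area ratio of Eq. (18), 9 sub-views
fnum_eff = 3.9 / math.sqrt(c_angular)   # effective f-number of Section 5.1

print(round(c_angular, 2), round(fnum_eff, 1))  # 0.42 6.0
```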
A comparison photograph on a Nikon D850 showing equal exposure of a light field sub-view, Fig. 4 (middle row, left), and an equivalent standard lens (Nikon AF Nikkor $28$–$80\,\textrm {mm}$ $f/3.3$–$5.6$G set to ${f}^{\textrm {St}}=75\textrm {mm}, \textrm {F}/\#^{\textrm {St}}=6.3$), using equal ISO and exposure time settings, is shown in Fig. 4 (middle row, right). An example of digitally summed light field views is shown in the bottom row (right). The virtual exposure is much brighter and could be characterized by the equivalent f-number $\textrm {F}/\#_\textrm {eq}=6.3/3=2.1$ proposed in Section 3.
6.2 Case Study II: Lytro Illum
The second example is the Lytro Illum, a light field camera based on the afocal microlens configuration [2]. In this setting, the ECA entrance pupil plane and the main lens entrance pupil do not agree. In addition, there is no angular vignetting as illustrated in Fig. 2 (right).
For the following discussion, it should be noted that the Lytro Illum uses a $1/1.2''$ sensor $(10.82 \times 7.52~\textrm {mm}^{2})$ as compared to the K|Lens One that is designed for a full frame sensor $(36 \times 24~\textrm {mm}^{2})$. All dimensional quantities are therefore linearly scaled by $1/3.33$ for this system.
For generating the same field-of-view as in the previous case, the equivalent focal length is chosen as ${f_\textrm {eq}}=78~\textrm {mm}/3.33=23.4~\textrm {mm}$. As extracted from the meta-data of the example raw file, the main lens has an $\textrm {F}/\#=2.2$ for this setting. The entrance pupil diameter follows as $D=23.4~\textrm {mm} / 2.2 = 10.65~\textrm {mm}$.
In the afocal light field configuration, the object space image of a micro-image pixel serves as the virtual entrance pupil of a light field subview [6]. Paraxial calculations with Lytro Illum parameters (${f}_{\textrm {main}}=23.4~\textrm {mm}, {f}_{\textrm {ML}}=40~\mu \textrm {m}, {h_i}=1.4~\mu \textrm {m}$) yield a subview entrance pupil size of ${D}^{\textrm {LF}}=0.85~\textrm {mm}$. With $14$ subviews (in one lateral direction), the overall entrance pupil of the ECA is thus estimated as $14 \times 0.85=11.9~\textrm {mm}$. With these values, Eq. (20) yields an overlap of $\oslash _p = 10\%$. The actual f-number of the light field subviews is therefore $\textrm {F}/\#^{\textrm {LF}}=2.0$. The same result is obtained using the optical focal length of a subview ${f}=\frac {{f_\textrm {eq}}}{N}=1.67$ and the subview entrance pupil diameter ${D}^{\textrm {LF}}$: $1.67/0.85 = 1.97$. This is also the value stated on the Illum’s main lens. A visual example is shown in Fig. 6.
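The numbers above can be retraced with a short paraxial sketch. This is my own reconstruction: the subview entrance pupil is approximated as the main-lens image of a micro-image pixel, ${D}^{\textrm {LF}} \approx {h_i}\,{f}_{\textrm {main}}/{f}_{\textrm {ML}}$, and the overlap expression is my reading of Eq. (20), which is not reproduced here:

```python
# Paraxial reconstruction of the Lytro Illum numbers of Section 6.2.

f_main = 23.4      # mm, equivalent focal length
f_ml = 40e-3       # mm, microlens focal length
h_i = 1.4e-3       # mm, sensor pixel size
N = 14             # subviews per lateral direction

D_main = f_main / 2.2              # main lens entrance pupil, ~10.65 mm
D_lf = h_i * f_main / f_ml         # ~0.82 mm (the paper states 0.85 mm)
D_eca = N * 0.85                   # overall ECA pupil, with the paper's value

overlap = (D_eca - D_main) / D_eca # overfill, ~10 %
fnum_lf = (f_main / N) / 0.85      # subview f-number

print(round(fnum_lf, 2))  # 1.97
```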
6.3 Comparison table
For direct comparison, the optical data, as well as some additional sensor properties, are summarized in Table 2. Values in parentheses indicate chosen values in the above discussion in the case of adjustable settings. Digital super-resolution performance is taken from company information.
7. Depth-of-field
Since the f-number is also used to communicate an expected depth-of-field, a discussion is not complete without investigating the DoF, and in particular, the scaling laws for aperture subdivision by a factor of $N$ as in light field imaging. In the following, a geometrical optics argument is invoked to determine the tendencies underlying light field imaging systems. For numerical computations in real systems, additional effects such as aberrations, wave optical effects, vignetting, sensor tilt, etc. need to be considered. The development in this section is therefore only a first approximation.
7.1 Depth-of-field formula
For completeness, the derivation of the used depth-of-field formula is sketched in the following. The relevant geometry is shown in Fig. 5. The argument, again, follows a predominantly object-space approach in order to abstract from particular systems. However, since the DoF is typically defined in terms of circles of confusion (CoC) that relate to the pixel size of the sensor (e.g. the CoC is $1$ pixel), it will be necessary to relate to the image space pixel size ${h_i}$ via the optical focal length of the imaging system, Eq. (4).
The ansatz is a similarity relation between the two triangles indicated in the figure (the green triangle has been flipped for clarity). The derivation is carried out for the distance of the near depth-of-field plane ${d^{-}}$:
7.2 Comparison
From Eq. (27), we obtain the following relations for the three comparison systems. The ($\textrm {LF}$) system has an $N^{2}\times$ larger DoF than the comparison system ($\textrm {St}$) due to its optical focal length ($f$ in Eq. (27)) being reduced by a factor of $1/N$, cf. Table 1. This effect can, intuitively, be attributed to two factors of $N$ each: 1) the reduced $\textrm {NA}_o^{\textrm {LF}}$ of the light field sub-views, and, 2) the larger object space pixel size ${h_o}^{\textrm {LF}}$ due to lower resolution, but same field of view as the comparison system ($\textrm {St}$). The low-resolution standard system ($\textrm {LR}$) occupies an intermediate position: it has an $N\times$ increased DoF as compared to ($\textrm {St}$) that is due to effect 2) only (${h_i}^{\textrm {LR}}=N\times {h_i}^{\textrm {St}}$). Similar conclusions have been drawn by Levoy et al. [7].
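The scaling relations can be verified with the textbook thin-lens approximation $\textrm {DoF} \approx 2 d^{2}\, (\textrm {F}/\#)\, c / f^{2}$, valid far below the hyperfocal distance. This sketch stands in for Eq. (27), which is not reproduced here, and uses illustrative values:

```python
# Scaling check for Section 7.2 with the thin-lens DoF approximation
# DoF ~ 2 * d^2 * (F/#) * c / f^2 (c: circle of confusion = 1 pixel).

def dof(d, fnum, coc, f):
    return 2 * d**2 * fnum * coc / f**2

N = 3
f_st, fnum_st, h_i, d = 78.0, 3.9, 5e-3, 2000.0   # mm, illustrative

dof_st = dof(d, fnum_st, h_i, f_st)          # high-res standard system (St)
dof_lr = dof(d, fnum_st, N * h_i, f_st)      # (LR): CoC is one N x larger pixel
dof_lf = dof(d, fnum_st, h_i, f_st / N)      # (LF): same pixels, f reduced by 1/N

assert abs(dof_lr / dof_st - N) < 1e-9       # N x larger DoF for (LR)
assert abs(dof_lf / dof_st - N**2) < 1e-9    # N^2 x larger DoF for (LF)
```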
7.3 Discussion
This analysis appears to show that the $\textrm {F}/\#$ is not an adequate measure for characterizing the DoF if different sensor systems or modalities are compared. However, the picture is not complete without the digital processing that can be applied to the raw data of light field imaging systems.
In particular, the sub-views can be (shifted and) summed to synthesize images that are equivalent to refocused versions of the image that the low-resolution standard system ($\textrm {LR}$) generates. For practical application, note that 1. the DoF changes with the focal distance; therefore, the digital “shift-and-add” refocusing is not exactly equivalent to the natural one, and 2. for generating a visually pleasing synthetic refocus, view interpolation is typically required [10]. The effective aperture for the ($\textrm {LF}$) system in the table then becomes equal to the one of the low-resolution standard system ($\textrm {LR}$).
In order to synthesize the even narrower DoF of the high-resolution standard system ($\textrm {St}$), digital super-resolution techniques [10,24] must be employed which effectively reduces ${h_i}^{\textrm {LF}}$ to ${h_i}^{\textrm {LF}}/N={h_i}^{\textrm {St}}$. In this case, the narrow DoF of the high-resolution standard system ($\textrm {St}$) can be synthesized fully. In fact, it is even possible to extrapolate synthetic apertures and achieve an even narrower DoF [4].
Therefore, I propose to still interpret the $\textrm {F}/\#$ as a rough measure of the minimum synthetic DoF that can certainly be achieved (i.e. without extrapolation) by a light field imaging system involving digital processing.
8. Conclusions
In this article, the f-number of light field imaging systems was analyzed by comparison to equivalent 2D imaging systems. In doing so, several relations and peculiarities were illuminated. In summary, just as in standard photography, the $\textrm {F}/\#$ is a good measure of light efficiency and SNR in good illumination conditions. With digital summation, even the SNR of the equivalent low-resolved system can be matched. For low light conditions, light field systems suffer a slight disadvantage that may be exacerbated by registration problems for the then noisy data. In terms of depth-of-field, the equivalent $\textrm {F}/\#$ provides a rough measure of the minimal synthetic refocus depth of field that can be achieved. For the analysis of a light field system, the comparable 2D standard imaging system quantities should be determined and communicated.
Acknowledgments
Portions of this work were presented at the conference “Quality Control by Artificial Vision” (QCAV) in 2019 as a non-peer reviewed invited paper contribution with the title “An Equivalent F-Number for Light Field Systems: Light Efficiency, Signal-to-Noise Ratio, and Depth of Field”. I would like to thank Loïs Mignard-Debise for his careful proof-reading of this prior paper version. I would also like to thank the anonymous reviewers for their careful reading and insightful questions.
Disclosures
II: K|Lens GmbH (F,I,E).
Data availability
Data underlying the results presented in this paper are available in Dataset 1, Ref. [25].
References
1. E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Machine Intell. 14(2), 99–106 (1992). [CrossRef]
2. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report CSTR2, 1–11 (2005).
3. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in 2009 IEEE International Conference on Computational Photography (ICCP), (IEEE, 2009), pp. 1–8.
4. A. Manakov, J. Restrepo, O. Klehm, R. Hegedus, E. Eisemann, H.-P. Seidel, and I. Ihrke, “A reconfigurable camera add-on for high dynamic range, multispectral, polarization, and light-field imaging,” ACM Trans. Graph. 32(4), 1–14 (2013). [CrossRef]
5. A. Llavador, J. Sola-Pikabea, G. Saavedra, B. Javidi, and M. Martínez-Corral, “Resolution improvements in integral microscopy with Fourier plane recording,” Opt. Express 24(18), 20792–20798 (2016). [CrossRef]
6. L. Mignard-Debise, J. Restrepo, and I. Ihrke, “A unifying first-order model for light-field cameras: the equivalent camera array,” IEEE Trans. Comput. Imaging 3(4), 798–810 (2017). [CrossRef]
7. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. 25(3), 924–934 (2006). [CrossRef]
8. T. G. Georgiev and A. Lumsdaine, “Depth of Field in plenoptic cameras,” Eurographics (Short Papers) 11814, 118140B (2009). [CrossRef]
9. M. Martínez-Corral and B. Javidi, “Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems,” Adv. Opt. Photonics 10(3), 512–566 (2018). [CrossRef]
10. I. Ihrke, J. Restrepo, and L. Mignard-Debise, “Principles of light field imaging: Briefly revisiting 25 years of research,” IEEE Signal Process. Mag. 33(5), 59–69 (2016). [CrossRef]
11. G. Wu, B. Masia, A. Jarabo, Y. Zhang, L. Wang, Q. Dai, T. Chai, and Y. Liu, “Light field image processing: An overview,” IEEE J. Sel. Top. Signal Process. 11(7), 926–954 (2017). [CrossRef]
12. O. Johannsen, C. Heinze, B. Goldluecke, and C. Perwaß, “On the calibration of focused plenoptic cameras,” in Proc. Seminar Time- Flight Depth Imag. Sens., Algorithms, Appl., 302–317 (2013).
13. Q. Zhang, C. Zhang, J. Ling, Q. Wang, and J. Yu, “A generic multi-projection-center model and calibration method for light field cameras,” IEEE Trans. Pattern Anal. Mach. Intell. 41(11), 2539–2552 (2018). [CrossRef]
14. C. Hahne, A. Aggoun, S. Haxha, V. Velisavljevic, and J. C. J. Fernández, “Light field geometry of a standard plenoptic camera,” Opt. Express 22(22), 26659–26673 (2014). [CrossRef]
15. R. Ng, “Digital light field photography,” PhD Thesis, Stanford University (2006).
16. Q. Cui, S. Zhu, and L. Gao, “Developing an optical design pipeline for correcting lens aberrations and vignetting in light field cameras,” Opt. Express 28(22), 33632–33643 (2020). [CrossRef]
17. L. Mignard-Debise and I. Ihrke, “A vignetting model for light field cameras with applications to light field microscopy,” IEEE Trans. Comput. Imaging 5(4), 585 (2019). [CrossRef]
18. C. J. Oliver and E. R. Pike, “Multiplex advantage in the detection of optical images in the photon noise limit,” Appl. Opt. 13(1), 158–161 (1974). [CrossRef]
19. A. Wuttig, “Optimal transformations for optical multiplex measurements in the presence of photon noise,” Appl. Opt. 44(14), 2710–2719 (2005). [CrossRef]
20. I. Ihrke, G. Wetzstein, and W. Heidrich, “A theory of plenoptic multiplexing,” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, (IEEE, 2010), pp. 483–490.
21. European Machine Vision Association, “EMVA Standard 1288: Standard for characterization of image sensors and cameras, release 3.0,” European Machine Vision Association, Nov. 29, (2010).
22. L. Mignard-Debise and I. Ihrke, “Light-field microscopy with a consumer light-field camera,” in 2015 International Conference on 3D Vision, (IEEE, 2015), pp. 335–343.
23. J. Liu, D. Claus, T. Xu, T. Keßner, A. Herkommer, and W. Osten, “Light field endoscopy and its parametric description,” Opt. Lett. 42(9), 1804–1807 (2017). [CrossRef]
24. T. E. Bishop, S. Zanetti, and P. Favaro, “Light field superresolution,” in 2009 IEEE International Conference on Computational Photography (ICCP), (IEEE, 2009), 1–9.
25. I. Ihrke, “Raw images K|Lens One and Lytro,” Zenodo (2022), https://doi.org/10.5281/zenodo.6327312.