## Abstract

This paper derives a mathematical point spread function (PSF) and a depth-invariant focal sweep point spread function (FSPSF) for plenoptic camera 2.0. Derivation of PSF is based on the Fresnel diffraction equation and image formation analysis of a self-built imaging system which is divided into two sub-systems to reflect the relay imaging properties of plenoptic camera 2.0. The variations in PSF, which are caused by changes of object’s depth and sensor position variation, are analyzed. A mathematical model of FSPSF is further derived, which is verified to be depth-invariant. Experiments on the real imaging systems demonstrate the consistency between the proposed PSF and the actual imaging results.

© 2017 Optical Society of America

## 1. Introduction

Light field cameras, also known as plenoptic cameras, have attracted increasing attention over the past decades [1–3]. Compared with conventional cameras, light field cameras insert a microlens array between the main lens and the image sensor, which enables the image sensor to record both the spatial and angular information of the light field in a single shot [4]. Together with rendering algorithms, light field cameras can refocus images at different depths and reconstruct images from different viewpoints [5], which facilitates their application to 3D reconstruction, depth estimation and digital refocusing [6]. Existing light field cameras can mainly be classified into two types, the so-called plenoptic camera 1.0 [3] and plenoptic camera 2.0 [7]. Different from plenoptic camera 1.0, which inserts a microlens array at the image plane of the main lens, plenoptic camera 2.0 is a relay imaging system that inserts a microlens array behind the image plane of the main lens to re-image the object. This benefits the spatial resolution through the rendering algorithms proposed in [8] and the superresolution reconstruction proposed in [9,10].

Although plenoptic camera 2.0 outperforms plenoptic camera 1.0 in spatial resolution, it is still fundamentally desirable to investigate its point spread function (PSF) in order to extend the depth of field and further increase the spatial resolution [11]. The PSF, as the impulse response of an imaging system, is the fundamental unit of theoretical models and reflects the image formation process of the system. It can be used to correct optical aberrations to improve imaging quality and to recover light field information by extending the depth of field and performing superresolution. Furthermore, the details in the recovered light field information can be fully exploited to improve the quality of applications such as 3D reconstruction, depth map estimation and object detection. To derive the PSF for plenoptic cameras, S. A. Shroff *et al.* provided a mathematical model based on the image formation properties, but the PSF was only applicable to plenoptic camera 1.0 [12–14]. T. E. Bishop *et al.* also derived the PSF for plenoptic camera 1.0 under the assumption that both the PSF of the main lens and that of the microlens are defocused PSFs [15,16]. Although the defocused PSF has low computational complexity, its accuracy in representing the optical structure of a plenoptic camera is limited. M. Turola discussed simulated PSFs of plenoptic cameras in his dissertation based on Fourier optics, concentrating mainly on the simulation process, such as strategies for setting the sampling rate, the choice of spatial frequency filter and the optimization of computational time [17]. However, his work lacks a mathematical description of the image formation process and impulse response of plenoptic camera 2.0 via wave optics.

Furthermore, when extending the depth of field of a camera and performing superresolution reconstruction on the captured images, the focal sweep PSF (FSPSF), the aggregate PSF for a scene point, is generally used by exploiting its depth-invariant property. For conventional cameras, the FSPSF is obtained by changing the focal plane during the exposure time, which is carried out by moving the sensor [18]. Y. Bando *et al.* proved the depth-invariant property of the FSPSF theoretically for conventional cameras [19], and S. Kuthirummal *et al.* proved both the space- and depth-invariant properties of the FSPSF empirically [20]. R. Yokoya *et al.* extended the FSPSF to catadioptric imaging systems and verified its depth-invariant property [21]. However, for the FSPSF of plenoptic camera 2.0, neither a mathematical model nor an empirical analysis is available yet.

Consequently, this paper builds the mathematical model of the PSF and FSPSF for plenoptic camera 2.0 and analyzes their properties. The mathematical derivation of the PSF is based on the Fresnel diffraction equation and image formation analysis. The variations in PSF caused by changes in the object’s depth and in the sensor position are provided. A mathematical model of the FSPSF for plenoptic camera 2.0 is further derived and verified to be depth-invariant. The consistency between the results calculated by the proposed PSF and those captured by the real imaging system demonstrates the correctness of the proposed PSF.

The rest of the paper is organized as follows. Section 2 describes the derivation of the PSF in detail. The properties of both the PSF and its FSPSF are analyzed in Section 3. Verifications of the proposed PSF are provided in Section 4, followed by conclusions in Section 5.

## 2. PSF for plenoptic camera 2.0

#### 2.1 Optical prototype of imaging system

To derive the PSF, the image formation process and optical structure of plenoptic camera 2.0 are first analyzed. Figure 1(a) shows the structure of plenoptic camera 2.0. Parameters in Fig. 1(a) satisfy the Gaussian equation [5]:

$$\frac{1}{d_1}+\frac{1}{d_2}=\frac{1}{f_1},\qquad \frac{1}{d_{3.1}}+\frac{1}{d_{3.2}}=\frac{1}{f_2},\qquad (1)$$

where $f_1$ is the focal length of the main lens and $f_2$ is the focal length of the microlens. T. Georgiev and A. Lumsdaine have analyzed the image formation properties of plenoptic camera 2.0 in [8]. First, rays from the object pass through the main lens and focus on the image plane of the main lens. Then, treating the light field on the image plane as a new object, the microlens array re-images it onto the sensor as a relay imaging system. Different from a micro image (i.e., the image under each microlens) captured by plenoptic camera 1.0, which mainly records the angular distribution of rays, a micro image captured by plenoptic camera 2.0 records both the angular and positional distribution of rays and presents a portion of the image generated by the object, as shown in Fig. 1(b) [22].
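As a quick numerical check of the relay geometry implied by the Gaussian equation, the sketch below solves each imaging stage for its image distance. The parameter values are illustrative assumptions, not those of Table 1:

```python
def image_distance(f, d_obj):
    """Solve 1/d_obj + 1/d_img = 1/f for the image distance d_img (thin-lens Gaussian equation)."""
    return 1.0 / (1.0 / f - 1.0 / d_obj)

# Stage 1: main lens (illustrative values in mm)
f1, d1 = 50.0, 90.0           # focal length, object distance
d2 = image_distance(f1, d1)   # image-plane distance behind the main lens

# Stage 2: the microlens re-images the main-lens image plane onto the sensor
f2, d31 = 10.0, 15.0           # microlens focal length, image-plane distance
d32 = image_distance(f2, d31)  # in-focus sensor distance behind the microlens
```

Moving the sensor away from this in-focus `d32` is exactly the defocus variation studied in Section 3.1.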

Considering that each micro image presents a portion of the image generated by the object and that the position of each microlens only affects the area through which rays pass, we propose to simplify the imaging system by placing only one microlens on the optical axis to analyze the imaging results. The impulse response of the system can then be verified easily by comparing the real imaging results with the mathematically calculated results, based on which the model can be further extended to an imaging system using a microlens array. The imaging system is described in the next subsection.

#### 2.2 Design of imaging system

The architecture of the self-built imaging system is shown in Fig. 2. A non-directional white light source illuminates the object and produces Lambertian reflection. Rays from the object plane propagate through an optical filter at 532 nm to facilitate formulation under a monochromatic wavelength. Then, the filtered light rays pass through a main lens and a microlens, and the image is retrieved from the CMOS sensor. Figure 3 is a photo of the experimental setup. The imaging system is installed on a slide rail with an accuracy of 0.5mm. The geometric parameters are listed in Table 1. In this system, the diameter of the microlens we used is much larger than that used in [8]. A bigger microlens is used to increase the amount of light passing through; it provides a brighter imaging result to facilitate model verification without changing the propagation rules relative to a small microlens.

#### 2.3 PSF derivation

Based on the above optical analysis, plenoptic camera 2.0 can be considered as a two-stage imaging system. Thus, the self-built imaging system can be divided into two subsystems: Subsystem 1 represents ray propagation from the object plane to the image plane; Subsystem 2 represents ray propagation from the image plane to the sensor. Combining the analyses of the two subsystems, a layered derivation of the PSF for plenoptic camera 2.0 is given as follows.

First, we analyze Subsystem 1. Rays emitted from a point $(x_0, y_0)$ on the object plane, as shown in Fig. 1(a), reach a point $(x_{main}, y_{main})$ on the main lens. Considering $h_{11}(x_{main}, y_{main}, x_0, y_0)$ as the impulse response describing this ray propagation process, the field at the point $(x_{main}, y_{main})$ can be modeled by:

$$U(x_{main},y_{main})=h_{11}(x_{main},y_{main},x_0,y_0).\qquad (2)$$

Using the Fresnel diffraction equation, $U(x_{main}, y_{main})$ can also be formulated by:

$$U(x_{main},y_{main})=\frac{e^{jkd_1}}{j\lambda d_1}\exp\left\{\frac{jk}{2d_1}\left[(x_{main}-x_0)^2+(y_{main}-y_0)^2\right]\right\},\qquad (3)$$

where $k$ is the wave number and equals $2\pi/\lambda$, and $d_1$ is the distance between the object and the main lens. Comparing Eq. (2) with Eq. (3), we get:

$$h_{11}(x_{main},y_{main},x_0,y_0)=\frac{e^{jkd_1}}{j\lambda d_1}\exp\left\{\frac{jk}{2d_1}\left[(x_{main}-x_0)^2+(y_{main}-y_0)^2\right]\right\}.\qquad (4)$$

After generating the field at point $(x_{main}, y_{main})$, rays pass through the main lens and propagate to the image plane. The field at point $(x_1, y_1)$ on the image plane, as shown in Fig. 1(a), can be formulated by:

$$U(x_1,y_1)=\iint U(x_{main},y_{main})\,t_{main}(x_{main},y_{main})\,h_{12}(x_1,y_1,x_{main},y_{main})\,dx_{main}\,dy_{main},\qquad (5)$$

where $h_{12}(x_1, y_1, x_{main}, y_{main})$ is the impulse response describing ray propagation from the main lens to the image plane. Similarly, using the Fresnel diffraction equation, $U(x_1, y_1)$ is given by:

$$U(x_1,y_1)=\iint U(x_{main},y_{main})\,t_{main}(x_{main},y_{main})\,\frac{e^{jkd_2}}{j\lambda d_2}\exp\left\{\frac{jk}{2d_2}\left[(x_1-x_{main})^2+(y_1-y_{main})^2\right]\right\}dx_{main}\,dy_{main},\qquad (6)$$

where $d_2$ is the distance between the main lens and the image plane, and $t_{main}(x_{main}, y_{main})$ is the phase correction factor of the main lens, which represents its optical characteristic. The formulation of $t_{main}(x_{main}, y_{main})$ is:

$$t_{main}(x_{main},y_{main})=P_1(x_{main},y_{main})\exp\left[-\frac{jk}{2f_1}\left(x_{main}^2+y_{main}^2\right)\right],\qquad (7)$$

where $P_1(x_{main}, y_{main})$ is the pupil function of the main lens and $f_1$ is the focal length of the main lens. Comparing Eq. (5) with Eq. (6), we have:

$$h_{12}(x_1,y_1,x_{main},y_{main})=\frac{e^{jkd_2}}{j\lambda d_2}\exp\left\{\frac{jk}{2d_2}\left[(x_1-x_{main})^2+(y_1-y_{main})^2\right]\right\}.\qquad (8)$$

Thus, the PSF of Subsystem 1, $h_1(x_1, y_1, x_0, y_0)$, can be formulated by:

$$h_1(x_1,y_1,x_0,y_0)=\iint h_{11}(x_{main},y_{main},x_0,y_0)\,t_{main}(x_{main},y_{main})\,h_{12}(x_1,y_1,x_{main},y_{main})\,dx_{main}\,dy_{main}.\qquad (9)$$

Plugging Eqs. (4), (7) and (8) into Eq. (9), $h_1(x_1, y_1, x_0, y_0)$ is explicitly given by:

$$h_1(x_1,y_1,x_0,y_0)=\frac{e^{jk(d_1+d_2)}}{(j\lambda d_1)(j\lambda d_2)}\iint P_1\exp\left[-\frac{jk}{2f_1}\left(x_{main}^2+y_{main}^2\right)\right]\exp\left\{\frac{jk}{2d_1}\left[(x_{main}-x_0)^2+(y_{main}-y_0)^2\right]\right\}\exp\left\{\frac{jk}{2d_2}\left[(x_1-x_{main})^2+(y_1-y_{main})^2\right]\right\}dx_{main}\,dy_{main}.\qquad (10)$$
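To make the Subsystem 1 derivation concrete, the sketch below discretizes the double integral defining $h_1$ as a Riemann sum over the main-lens pupil. All grid sizes and geometric parameters are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Illustrative parameters (mm); not the paper's exact values
lam = 532e-6                  # filtered wavelength, mm
k = 2 * np.pi / lam           # wave number
d1, d2, f1, R1 = 90.0, 112.5, 50.0, 5.0   # object/image distances, focal length, pupil radius
# Note: d1 and d2 are conjugate here (1/90 + 1/112.5 = 1/50), i.e., the in-focus case.

def h1(x1, y1, x0=0.0, y0=0.0, n=128):
    """Riemann sum of the Subsystem 1 PSF: integrate h11 * t_main * h12 over the pupil."""
    u = np.linspace(-R1, R1, n)
    xm, ym = np.meshgrid(u, u)
    pupil = (xm**2 + ym**2 <= R1**2).astype(float)   # P1: circular aperture
    h11 = np.exp(1j * k / (2 * d1) * ((xm - x0)**2 + (ym - y0)**2)) / (1j * lam * d1)
    tmain = pupil * np.exp(-1j * k / (2 * f1) * (xm**2 + ym**2))
    h12 = np.exp(1j * k / (2 * d2) * ((x1 - xm)**2 + (y1 - ym)**2)) / (1j * lam * d2)
    du = u[1] - u[0]
    return np.sum(h11 * tmain * h12) * du * du

psf_center = abs(h1(0.0, 0.0))**2   # on-axis intensity of the image-plane spot
```

In the in-focus case the quadratic phases cancel on axis ($1/d_1 + 1/d_2 - 1/f_1 = 0$), so the integrand adds coherently and the on-axis response dominates off-axis points; a faithful evaluation away from focus needs much denser sampling of the chirped phase.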

Then, the analysis is performed on Subsystem 2. The imaging system with only one microlens is considered here, as described in Section 2.1, to simplify the model. Using a derivation similar to that of Subsystem 1, ray propagation from a point $(x_1, y_1)$ on the image plane to a point $(x_{micro}, y_{micro})$ on the microlens plane, and afterwards from the point $(x_{micro}, y_{micro})$ on the microlens plane to a point $(x, y)$ on the sensor, is modeled. Thus, the field at point $(x, y)$ on the sensor is given by:

$$U(x,y)=\iint U(x_1,y_1)\,h_2(x,y,x_1,y_1)\,dx_1\,dy_1,\qquad (11)$$

where $h_2(x, y, x_1, y_1)$ is the PSF of Subsystem 2.

$h_2(x, y, x_1, y_1)$ is given by:

$$h_2(x,y,x_1,y_1)=\iint h_{21}(x_{micro},y_{micro},x_1,y_1)\,t_{micro}(x_{micro},y_{micro})\,h_{22}(x,y,x_{micro},y_{micro})\,dx_{micro}\,dy_{micro},\qquad (12)$$

where $h_{21}(x_{micro}, y_{micro}, x_1, y_1)$ and $h_{22}(x, y, x_{micro}, y_{micro})$ are the impulse responses describing ray propagation from the image plane to the microlens and from the microlens to the sensor, respectively. $h_{21}(x_{micro}, y_{micro}, x_1, y_1)$ is given by:

$$h_{21}(x_{micro},y_{micro},x_1,y_1)=\frac{e^{jkd_{3.1}}}{j\lambda d_{3.1}}\exp\left\{\frac{jk}{2d_{3.1}}\left[(x_{micro}-x_1)^2+(y_{micro}-y_1)^2\right]\right\},\qquad (13)$$

where $d_{3.1}$ is the distance between the image plane and the microlens. $h_{22}(x, y, x_{micro}, y_{micro})$ is given by:

$$h_{22}(x,y,x_{micro},y_{micro})=\frac{e^{jkd_{3.2}}}{j\lambda d_{3.2}}\exp\left\{\frac{jk}{2d_{3.2}}\left[(x-x_{micro})^2+(y-y_{micro})^2\right]\right\},\qquad (14)$$

where $d_{3.2}$ is the distance between the microlens and the sensor, and $t_{micro}(x_{micro}, y_{micro})$ is the phase correction factor of a microlens, formulated by:

$$t_{micro}(x_{micro},y_{micro})=P_2(x_{micro},y_{micro})\exp\left[-\frac{jk}{2f_2}\left(x_{micro}^2+y_{micro}^2\right)\right],\qquad (15)$$

where $P_2(x_{micro}, y_{micro})$ and $f_2$ are the pupil function and focal length of the microlens, respectively.

Plugging Eq. (13) to Eq. (15) into Eq. (12), the PSF of Subsystem 2 with only one microlens is given by:

$$h_2(x,y,x_1,y_1)=\frac{e^{jk(d_{3.1}+d_{3.2})}}{(j\lambda d_{3.1})(j\lambda d_{3.2})}\iint P_2\exp\left[-\frac{jk}{2f_2}\left(x_{micro}^2+y_{micro}^2\right)\right]\exp\left\{\frac{jk}{2d_{3.1}}\left[(x_{micro}-x_1)^2+(y_{micro}-y_1)^2\right]\right\}\exp\left\{\frac{jk}{2d_{3.2}}\left[(x-x_{micro})^2+(y-y_{micro})^2\right]\right\}dx_{micro}\,dy_{micro}.\qquad (16)$$

Extending the PSF to a microlens array with $m \times n$ microlenses, the only change lies in $h_{22}(x, y, x_{micro}, y_{micro})$, which describes ray propagation from the microlens plane to the sensor. The phase correction factor of the microlens array becomes the accumulation of the phase correction factors of the $m \times n$ microlenses, $\sum_{m}\sum_{n} t_{micro}(x_{micro}-mD,\,y_{micro}-nD)$, where $D$ is the diameter of a microlens. Thus, $h_{22}(x, y, x_{micro}, y_{micro})$, with the accumulated phase correction factor folded in, becomes:

$$h_{22}(x,y,x_{micro},y_{micro})=\sum_{m}\sum_{n}t_{micro}(x_{micro}-mD,\,y_{micro}-nD)\,\frac{e^{jkd_{3.2}}}{j\lambda d_{3.2}}\exp\left\{\frac{jk}{2d_{3.2}}\left[(x-x_{micro})^2+(y-y_{micro})^2\right]\right\}.\qquad (17)$$

So, $h_2(x, y, x_1, y_1)$ becomes:

$$h_2(x,y,x_1,y_1)=\iint h_{21}(x_{micro},y_{micro},x_1,y_1)\,h_{22}(x,y,x_{micro},y_{micro})\,dx_{micro}\,dy_{micro}.\qquad (18)$$
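The accumulated array phase factor $\sum_{m}\sum_{n} t_{micro}(x_{micro}-mD,\,y_{micro}-nD)$ can be sketched by tiling one microlens phase screen across the plane. The parameters below are illustrative, and a circular pupil of diameter $D$ is assumed:

```python
import numpy as np

lam = 532e-6                       # wavelength, mm
k = 2 * np.pi / lam
f2, D = 10.0, 0.5                  # microlens focal length and diameter, mm (illustrative)

u = np.linspace(-1.0, 1.0, 801)    # microlens-plane coordinates, mm
xm, ym = np.meshgrid(u, u)

def t_micro(x, y):
    """Phase correction factor of one microlens: P2 * exp(-jk(x^2+y^2)/(2 f2))."""
    pupil = (x**2 + y**2 <= (D / 2) ** 2)
    return np.where(pupil, np.exp(-1j * k / (2 * f2) * (x**2 + y**2)), 0)

# Accumulate the phase factors of a 3x3 patch of the array: sum_m sum_n t(x - mD, y - nD)
t_array = sum(t_micro(xm - m * D, ym - n * D)
              for m in (-1, 0, 1) for n in (-1, 0, 1))
```

Each shifted copy is nonzero only on its own circular pupil, so the sum simply paves the plane with identical quadratic phase screens centered on the microlens grid.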

Having the PSFs of Subsystem 1 and Subsystem 2, the field on the sensor of the whole imaging system is:

$$U(x,y)=\iint h_1(x_1,y_1,x_0,y_0)\,h_2(x,y,x_1,y_1)\,dx_1\,dy_1.\qquad (19)$$

Using $h(x, y, x_0, y_0)$ to represent the PSF of the whole system, Eq. (19) can be rewritten as:

$$U(x,y)=h(x,y,x_0,y_0),\qquad (20)$$

where $h(x, y, x_0, y_0)$ equals:

$$h(x,y,x_0,y_0)=\iint h_1(x_1,y_1,x_0,y_0)\,h_2(x,y,x_1,y_1)\,dx_1\,dy_1.\qquad (21)$$
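Structurally, the whole-system PSF is a coherent cascade: sample the intermediate image plane, multiply the two subsystem responses, and integrate. The 1D toy below uses made-up Gaussian-envelope stand-ins for $h_1$ and $h_2$ (not the Fresnel integrals themselves) purely to illustrate that computation and the intensity normalization used later in Section 3.1:

```python
import numpy as np

# Sample the intermediate image plane (x1 axis), mm
x1 = np.linspace(-1.0, 1.0, 2001)
dx1 = x1[1] - x1[0]

# Stand-in complex fields: Gaussian envelopes with quadratic (chirp) phases.
# These are placeholders for the Fresnel-integral subsystem PSFs, not the real h1/h2.
def h1_toy(x1, x0):
    return np.exp(-((x1 - x0) ** 2) / 0.01) * np.exp(1j * 40.0 * (x1 - x0) ** 2)

def h2_toy(x, x1):
    return np.exp(-((x - x1) ** 2) / 0.04) * np.exp(1j * 25.0 * (x - x1) ** 2)

def h_system(x, x0):
    """1D cascade: h(x, x0) = integral of h1(x1, x0) * h2(x, x1) dx1 (Riemann sum)."""
    return np.sum(h1_toy(x1, x0) * h2_toy(x, x1)) * dx1

# Intensity PSF on a small sensor grid for an on-axis point, normalized by its maximum
xs = np.linspace(-0.5, 0.5, 101)
psf = np.array([abs(h_system(x, 0.0)) ** 2 for x in xs])
psf /= psf.max()
```

The key point is that the two stages are multiplied as complex fields before the intermediate plane is integrated out; the intensity is formed only at the sensor.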

## 3. Wave analysis for PSF and FSPSF in plenoptic camera 2.0

#### 3.1 PSF wave analysis

The characteristics of the proposed PSF are analyzed by varying the imaging parameters according to the self-built system shown in Fig. 3. Two sets of analyses have been conducted by calculating the response of PSF according to the variable settings listed in Table 2.

The first set of analysis evaluates the object-depth responses by changing *d*_{1} from 70mm to 110mm, as listed in the first row of Table 2. PSFs calculated according to Eq. (21) are shown in Fig. 4, normalized by the maximum intensity among the five PSFs. Since the system was designed to be perfectly focused when *d*_{1} is 90mm, the PSF at *d*_{1} = 90mm becomes the clearest and smallest spot in the center, as shown in Fig. 4(c), and its outer circular rings are darker than those of the other PSFs. As the defocus degree of the object changes, corresponding to the change in the difference between *d*_{1} and 90mm, the outer circular rings gradually become brighter and the energy at the center becomes weaker and more dispersed. From Figs. 4(a) to 4(e), the maximum intensities after normalization are 0.204, 0.206, 1.000, 0.963 and 0.508, which indicates that the energy concentration capability at *d*_{1} = 90mm is the highest while that at the defocused planes is much weaker. The energy concentration degree directly affects the image quality: the higher the concentration degree, the clearer the image.

The second set of analysis, *Analysis Set* 2 in Table 2, evaluates the sensor position responses by changing *d*_{3.2} from 10mm to 20mm. The calculated PSFs are shown in Fig. 5. Since changing *d*_{3.2} corresponds to moving the focal plane of the system according to Eq. (1), moving the sensor farther from/closer to the microlens corresponds to the distance between the focal plane and the main lens becoming shorter/longer. Thus, the variations in PSFs at different sensor positions correspond to the variations in PSFs at different object depths. Since the system is designed to be focused at *d*_{3.2} = 15mm, the PSF shown in Fig. 5(c) has the brightest spot in the center and the darkest outer rings. As the difference between *d*_{3.2} and 15mm increases, the energy spreads further from the central spot to the outer rings, i.e., the energy becomes more decentralized.

#### 3.2 FSPSF Derivation

Based on the proposed PSF, the FSPSF can be calculated for extending the depth of field and for superresolution reconstruction. By definition, the FSPSF is the integration of the PSF over time:

$$h_{FS}(x,y,x_0,y_0)=\int_0^T h(x,y,x_0,y_0;t)\,dt,\qquad (22)$$

where $T$ is the exposure time. During this time interval, the sensor position $d_{3.2}$ gradually changes, so Eq. (22) becomes:

$$h_{FS}(x,y,x_0,y_0)=\int_0^T h\left(x,y,x_0,y_0;d_{3.2}(t)\right)dt.\qquad (23)$$

The sensor plane moves uniformly at a constant velocity, denoted by $v$, so the sensor position $d_{3.2}$ is $vt + d_0$, where $d_0$ is the initial position of the sensor plane. Thus, Eq. (23) becomes:

$$h_{FS}(x,y,x_0,y_0)=\int_0^T h(x,y,x_0,y_0;vt+d_0)\,dt.\qquad (24)$$

Changing the integration variable to $d_{3.2}$, we obtain:

$$h_{FS}(x,y,x_0,y_0)=\frac{1}{v}\int_{d_0}^{d_0+vT} h(x,y,x_0,y_0;d_{3.2})\,dd_{3.2}.\qquad (25)$$

Equation (25) gives the mathematical FSPSF for plenoptic camera 2.0. Verification of the properties of the FSPSF based on Eq. (25) is shown in Section 4.3.
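Numerically, Eq. (25) amounts to averaging the instantaneous PSF over sampled sensor positions. The sketch below uses a toy depth-dependent Gaussian blur as a stand-in for the wave-optics PSF, only to illustrate the focal-sweep averaging and the resulting insensitivity to depth:

```python
import numpy as np

x = np.linspace(-1, 1, 401)   # 1D sensor coordinate

def toy_psf(d32, d_focus):
    """Toy instantaneous PSF: blur width grows with |d32 - d_focus| (stand-in for Eq. 21)."""
    sigma = 0.02 + 0.08 * abs(d32 - d_focus)
    p = np.exp(-x**2 / (2 * sigma**2))
    return p / p.sum()

def fspsf(d_focus, d0=10.0, dT=30.0, n=200):
    """Eq. (25) as a discrete average: sweep the sensor from d0 to d0 + vT."""
    sweep = np.linspace(d0, dT, n)
    return np.mean([toy_psf(d, d_focus) for d in sweep], axis=0)

# Two object depths, modeled here as two different in-focus sensor positions
p_a, p_b = toy_psf(15.0, 14.0), toy_psf(15.0, 18.0)   # instantaneous PSFs differ clearly
f_a, f_b = fspsf(14.0), fspsf(18.0)                   # swept PSFs nearly coincide
```

As long as both in-focus positions fall inside the sweep range, each swept PSF mixes the same family of blur widths, which is the intuition behind the depth invariance verified in Section 4.3.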

## 4. Experimental results

#### 4.1 PSF verification

The correctness of the proposed PSF is verified by comparing PSFs captured by a real imaging system with those calculated by Eq. (21). First, using the imaging system described in Fig. 3, PSFs of the system with a single microlens are captured while changing the object’s depth *d*_{1}. The point light source is generated using an LED lamp bead and an objective lens, by which the light from the LED converges to a point and thereafter diverges omnidirectionally.

As shown in the figure, the variation tendency of the real imaging results, shown in Figs. 6(a)-6(d), is visually similar to that of the simulated results, shown in Figs. 6(e)-6(h), both in terms of size changes and degree of blur. A small difference is observed in the imaging result at *d*_{1} = 75mm: the central energy of the experimental result, as shown in Fig. 6(a), is lower than that of the simulated one, as shown in Fig. 6(e). This is mainly caused by the difference between the actual position where the light rays focus after passing through the main lens and the position calculated by the theoretical model. In the theoretical model, the optical path differences are calculated by approximating the spherical surfaces of the main lens as parabolic ones, and the approximation error affects the calculated results more noticeably as the object distance decreases.

Second, a real imaging system using a microlens array is built to further demonstrate the correctness of the proposed model. The imaging system is shown in Fig. 7(a). The point light source is also generated by an LED lamp bead and an objective lens. The geometric parameters of the real imaging system are listed in Table 3. Figure 7(b) shows the experimental results at *d*_{1} = 90mm, where the focal plane of the system is located.

The imaging result of the system is shown in Fig. 7(b), and that of the adjacent seven microlenses, the region outlined in red in Fig. 7(b), is magnified in Fig. 7(c). It is compared with the simulated PSF calculated by Eq. (21) and shown in Fig. 7(d). The two PSFs are visually similar. The distances between the PSF of the center microlens and those of its six circumjacent microlenses are 0.170mm and 0.172mm on average in Fig. 7(c) and Fig. 7(d), respectively, showing high consistency in scale. There is a small difference between the simulated and experimental results in the circumjacent micro images, which may be caused by aberrations since the point light source illuminates the edge regions of those microlenses.

#### 4.2 Image formation verification

The correctness of the proposed PSF is then verified by comparing the real imaging results with the corresponding simulation results. The results generated by the proposed PSF are obtained by calculating the system response of every point on the object using Eq. (21), retrieving the intensity and accumulating all the responses together.

First, two objects of different shapes shown in Fig. 8 are used as the objects in Fig. 2. By changing the object’s depth *d*_{1}, images captured on the sensor are retrieved, normalized and compared with the normalized results generated by the proposed PSF. Here, two criteria are investigated to measure the difference between the real imaging results and the corresponding simulated results: the size of the image, which is measured along the *x* and *y* directions marked in red in Fig. 8 between the two dashed lines; and the defocus level, which is measured by calculating the ratio of the high-frequency energy to the low-frequency energy (the four DCT coefficients at DC and the lowest frequencies) after applying an 8 × 8 DCT transformation to the image [23]. A higher defocus-level value, i.e., more energy at high frequencies, indicates better focus performance.
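The defocus-level measure can be sketched as follows; this is one plausible reading of the DCT-based criterion in [23] (the exact low-frequency split is an assumption), using SciPy's DCT:

```python
import numpy as np
from scipy.fftpack import dct

def defocus_level(img):
    """Ratio of high- to low-frequency DCT energy over 8x8 blocks.

    "Low frequency" is taken as the DC coefficient plus the three lowest AC
    coefficients of each block (an assumption about the split used in [23]).
    """
    h, w = img.shape[0] // 8 * 8, img.shape[1] // 8 * 8
    img = img[:h, :w].astype(float)
    hi_energy, lo_energy = 0.0, 0.0
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            block = img[i:i + 8, j:j + 8]
            # Separable 2D type-II DCT of the block
            c = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
            e = c ** 2
            low = e[0, 0] + e[0, 1] + e[1, 0] + e[1, 1]
            hi_energy += e.sum() - low
            lo_energy += low
    return hi_energy / lo_energy

# Sanity check: a sharp random texture should score higher than its blurred version
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
           + np.roll(sharp, (1, 1), (0, 1))) / 4    # 2x2 box blur
```

Blurring attenuates the high-frequency DCT coefficients while leaving the low-frequency ones nearly unchanged, so the ratio drops as defocus grows.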

The results are shown in Fig. 9. Since the imaging system is focused at *d*_{1} = 90mm, the imaging results and the simulated results in Fig. 9(b) are the sharpest. As the object depth *d*_{1} changes, images become smaller/bigger and blurred at the edges, which is caused by defocus. As the object moves away from the main lens, corresponding to Figs. 9(c)–9(f), the image size becomes smaller gradually, while the degree of blur is positively correlated with the difference between the object depth and 90mm.

Besides the visual similarity between the real imaging results and the corresponding simulated results, the variations in object size and defocus level are consistent with each other when comparing the images in the first/third row with those in the second/fourth row. The difference in size between the two sets of results, corresponding to the two objects in Fig. 8, is 1.36% and 4.61% on average. Although the defocus-level measurement is sensitive to the environmental illumination during the real imaging process, the variation trend in the defocus level is still quite similar between the real imaging results and the corresponding simulated results, which demonstrates the correctness of the proposed model.

Second, the USAF resolution target, as shown in Fig. 10, with more texture details, is tested to further verify the proposed PSF formulation. As the size of the USAF resolution target is much bigger than the two objects in Fig. 8, the focal lengths and diameters of the main lens and the microlens have been changed to decrease the optical aberrations at the edge of the lens. The diameters and focal lengths of the main lens/microlens are 40mm/25.4mm and 100mm/30mm, respectively. The real imaging results and the corresponding simulated results are shown in Fig. 11.

As shown in Fig. 11, the variations in image size and defocus level are almost consistent between the simulated results and the real imaging results as well. As the imaging system is focused at *d*_{1} = 300mm, the results shown in Fig. 11(b) are the sharpest and their defocus values are the highest. When the object depth *d*_{1} increases/decreases relative to 300mm, the size of the imaging results becomes smaller/larger and the defocus becomes stronger. Averaging over the three pairs, the size difference is only 4.11%. Although the defocus values of the real imaging results are affected by optical aberrations and by the difference between the actual focal plane of the real imaging system and the theoretical focal plane of the simulation model, the variation tendency is still consistent between the two rows.

These results verify the consistency between the real imaging results and the results calculated by the proposed model, demonstrating that the proposed PSF represents the image formation property of the imaging system correctly.

#### 4.3 FSPSF property analysis

To analyze the properties of the FSPSF for plenoptic camera 2.0, FSPSFs at different object depths are calculated. The integration range for the sensor position ${d}_{3.2}$ is from 10mm to 30mm. Normalized FSPSFs at five depths are shown in Fig. 12. The five object depths are the same as those in Fig. 4, at which the PSFs vary obviously. The five FSPSFs, however, are almost visually identical.

Since the FSPSFs are almost centrally symmetric, we extract the FSPSF distributions at *y* = 0mm to compare them in detail in Fig. 13. FSPSFs at other depths, with *d*_{1} from 75mm to 105mm in steps of 5mm, are also calculated and included in Fig. 13. As shown in the figure, the curves of the FSPSFs at different depths almost overlap with each other, which demonstrates the depth-invariant property of the FSPSF. This empirical verification can assist deconvolution algorithms, which can help to extend the depth of field and perform superresolution.

## 5. Conclusions

In this paper, we derived a mathematical PSF and FSPSF for plenoptic camera 2.0 by analyzing its image formation process and optical structure. The mathematical derivation of the PSF is provided according to the analysis of the image formation property and the Fresnel diffraction equation. Based on the proposed PSF and the FSPSF definition, the mathematical model of the FSPSF, which is further verified to be depth-invariant empirically, is derived. Experiments on the self-built imaging system verify the correctness of the proposed PSF.

To optimize the proposed mathematical PSF and FSPSF, future work includes extending the model to thick and commercial lenses, theoretically verifying the depth-invariant property of the FSPSF, and optimizing the parameters for FSPSF calculation, such as the integration range of the sensor position. In addition, light field recovery algorithms based on the proposed PSF and FSPSF, such as extending the depth of field, superresolution reconstruction and optical aberration correction, will also be investigated.

## Funding

National Natural Science Foundation of China (NSFC) (61371138); Natural Science Foundation of Guangdong, China (2014A030313733); National Key Scientific Instrument and Equipment Development Project, China (2013YQ140517).

## References and links

**1. **E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. **14**(2), 99–106 (1992). [CrossRef]

**2. **R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light Field Photography with a Hand-Held Plenoptic Camera,” Technical Report, Stanford University (2005).

**3. **R. Ng, “Digital light field photography,” Ph.D. thesis, Stanford University (2006).

**4. **M. Levoy, R. Ng, A. Adam, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. **25**(3), 924–934 (2006). [CrossRef]

**5. **E. Y. Lam, “Computational photography with plenoptic camera and light field capture: tutorial,” J. Opt. Soc. Am. A **32**(11), 2021–2032 (2015). [CrossRef] [PubMed]

**6. **V. Boominathan, K. Mitra, and A. Veeraraghavan, “Improving resolution and depth-of-field of light field cameras using a hybrid imaging system,” in Proceedings of 2014 IEEE International Conference on Computational Photography (ICCP) (2014), pp. 1–10. [CrossRef]

**7. **A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in Proceedings of IEEE International Conference on Computational Photography (ICCP, 2009), pp. 1–8. [CrossRef]

**8. **T. Georgiev and A. Lumsdaine, “Focused plenoptic camera and rendering,” J. Electron. Imaging **19**(2), 1–28 (2010).

**9. **A. Lumsdaine and T. Georgiev, “Full resolution light field rendering,” Technical report, Adobe Systems (2008).

**10. **T. Georgiev and A. Lumsdaine, “Superresolution with Plenoptic 2.0 cameras,” in *Frontiers in Optics 2009/Laser Science XXV/Fall 2009*, OSA Technical Digest (CD) (Optical Society of America) (2009), paper STuA6.

**11. **M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express **21**(21), 25418–25439 (2013). [CrossRef] [PubMed]

**12. **S. Shroff and K. Berkner, “Plenoptic System Response and Image Formation,” in *Imaging and Applied Optics*, OSA Technical Digest (online) (Optical Society of America, 2013), paper JW3B.1.

**13. **S. Shroff and K. Berkner, “High Resolution Image Reconstruction for Plenoptic Imaging Systems using System Response,” in Imaging and Applied Optics Technical Papers, OSA Technical Digest (online) (Optical Society of America, 2012), paper CM2B.2. [CrossRef]

**14. **S. Shroff and K. Berkner, “Wave analysis of a plenoptic system and its applications,” Proc. SPIE **8667**, 86671L (2013). [CrossRef]

**15. **T. E. Bishop and P. Favaro, “The Light Field Camera: Extended Depth of Field, Aliasing, and Superresolution,” IEEE Trans. Pattern Anal. Mach. Intell. **34**(5), 972–986 (2012). [CrossRef] [PubMed]

**16. **T. E. Bishop, S. Zanetti, and P. Favaro, “Light field superresolution,” in 2009 IEEE International Conference on Computational Photography (ICCP) (2009), pp.1–9. [CrossRef]

**17. **M. Turola, “Investigation of plenoptic imaging systems: a wave optics approach,” PhD dissertation, City University London (2016).

**18. **G. Häusler, “A method to increase the depth of focus by two step image processing,” Opt. Commun. **6**(1), 38–42 (1972). [CrossRef]

**19. **Y. Bando, H. Holtzman, and R. Raskar, “Near-invariant blur for depth and 2D motion via time-varying light field analysis,” ACM Trans. Graph. **32**(2), 539–555 (2013). [CrossRef]

**20. **S. Kuthirummal, H. Nagahara, C. Zhou, and S. K. Nayar, “Flexible Depth of Field Photography,” IEEE Trans. Pattern Anal. Mach. Intell. **33**(1), 58–71 (2011). [CrossRef] [PubMed]

**21. **R. Yokoya and S. K. Nayar, “Extended Depth of Field Catadioptric Imaging Using Focal Sweep,” in 2015 IEEE International Conference on Computer Vision (ICCV) (IEEE, 2015), pp. 3505–3513. [CrossRef]

**22. **T. Georgiev, “Plenoptic 2.0 data: Photographer,” http://www.tgeorgiev.net/Jeff.jpg.

**23. **X. Marichal, W. Ma, and H. Zhang, “Blur determination in the compressed domain using DCT information,” in Proceedings of 1999 International Conference on Image Processing (Cat. 99CH36348), Kobe (1999), pp. 386–390. [CrossRef]