Abstract

This paper derives a mathematical point spread function (PSF) and a depth-invariant focal sweep point spread function (FSPSF) for plenoptic camera 2.0. The derivation of the PSF is based on the Fresnel diffraction equation and an image formation analysis of a self-built imaging system, which is divided into two sub-systems to reflect the relay imaging property of plenoptic camera 2.0. The variations in the PSF caused by changes in object depth and sensor position are analyzed. A mathematical model of the FSPSF is further derived and verified to be depth-invariant. Experiments on real imaging systems demonstrate the consistency between the proposed PSF and the actual imaging results.

© 2017 Optical Society of America

1. Introduction

Light field cameras, also known as plenoptic cameras, have attracted increasing attention over the past decades [1–3]. Compared with conventional cameras, light field cameras insert a microlens array between the main lens and the image sensor, which enables the image sensor to record both spatial and angular information of the light field in a single shot [4]. Together with rendering algorithms, light field cameras can refocus images at different depths and reconstruct images with viewpoint changes [5], which facilitates their application to 3D reconstruction, depth estimation and digital refocusing [6]. Existing light field cameras can be classified into two main types, the so-called plenoptic camera 1.0 [3] and plenoptic camera 2.0 [7]. Different from plenoptic camera 1.0, which inserts a microlens array at the image plane of the main lens, plenoptic camera 2.0 is a relay imaging system that inserts a microlens array behind the image plane of the main lens to re-image the object. This design increases the spatial resolution using the rendering algorithms proposed in [8] and the superresolution reconstruction proposed in [9,10].

Although plenoptic camera 2.0 outperforms plenoptic camera 1.0 in spatial resolution, investigating its point spread function (PSF) remains fundamental to extending the depth of field and further increasing the spatial resolution [11]. The PSF, as the impulse response of an imaging system, is the fundamental unit of theoretical models and reflects the image formation process of the system. It can be used to correct optical aberrations to improve imaging quality, and to recover light field information by extending the depth of field and performing superresolution. Furthermore, the details in the recovered light field information can be fully exploited to improve the quality of applications such as 3D reconstruction, depth map estimation and object detection. To derive the PSF for plenoptic cameras, S. A. Shroff et al. provided a mathematical model based on the image formation properties, but the PSF is only applicable to plenoptic camera 1.0 [12–14]. T. E. Bishop et al. also derived the PSF for plenoptic camera 1.0 under the assumption that the PSFs of both the main lens and the microlens are defocused PSFs [15,16]. Although the defocused PSF is computationally inexpensive, its accuracy in representing the optical structure of a plenoptic camera is limited. M. Turola discussed simulated PSFs of plenoptic cameras based on Fourier optics in his dissertation, which mainly concentrated on the simulation process, such as setting strategies for the sampling rate, the choice of spatial frequency filter and the optimization of computational time [17]. However, his work lacks a mathematical description, via wave optics, of the image formation process and impulse response of plenoptic camera 2.0.

Furthermore, when extending the depth of field of a camera and performing superresolution reconstruction of the captured images, the focal sweep PSF (FSPSF), the aggregate PSF for a scene point, is generally used by exploiting its depth-invariant property. For conventional cameras, the FSPSF is obtained by changing the focal plane during the exposure time, which is carried out by moving the sensor [18]. Y. Bando et al. proved the depth-invariant property of the FSPSF theoretically for conventional cameras [19], and S. Kuthirummal et al. proved both the space- and depth-invariant properties of the FSPSF empirically [20]. R. Yokoya et al. extended the FSPSF to catadioptric imaging systems and verified its depth-invariant property [21]. However, for the FSPSF of plenoptic camera 2.0, neither a mathematical model nor an empirical analysis is available yet.

Consequently, this paper builds mathematical models of the PSF and FSPSF for plenoptic camera 2.0 and analyzes their properties. The mathematical derivation of the PSF is based on the Fresnel diffraction equation and image formation analysis. The variations in the PSF caused by changes in object depth and sensor position are provided. A mathematical model of the FSPSF for plenoptic camera 2.0 is further derived and verified to be depth-invariant. The consistency between the results calculated by the proposed PSF and those captured by the real imaging system demonstrates the correctness of the proposed PSF.

The rest of the paper is organized as follows. Section 2 describes the derivation of the PSF in detail. The properties of the PSF and its FSPSF are analyzed in Section 3. Verifications of the proposed PSF are provided in Section 4, followed by conclusions in Section 5.

2. PSF for plenoptic camera 2.0

2.1 Optical prototype of imaging system

To derive the PSF, the image formation process and optical structure of plenoptic camera 2.0 are first analyzed. Figure 1(a) shows the structure of plenoptic camera 2.0. Parameters in Fig. 1(a) satisfy the Gaussian equation [5]:

$$\frac{1}{d_1}+\frac{1}{d_2}=\frac{1}{f_1},\qquad \frac{1}{d_{3.1}}+\frac{1}{d_{3.2}}=\frac{1}{f_2},\tag{1}$$
where f1 is the focal length of the main lens and f2 is the focal length of the microlens. T. Georgiev and A. Lumsdaine analyzed the image formation properties of plenoptic camera 2.0 in [8]. First, rays from the object pass through the main lens and focus on the image plane of the main lens. Then, treating the light field on the image plane as a new object, the microlens array re-images it onto the sensor as a relay imaging system. Different from a micro image (i.e., the image under each microlens) captured by plenoptic camera 1.0, which mainly records the angular distribution of rays, a micro image captured by plenoptic camera 2.0 records both the angular and positional distribution of rays and presents a portion of the image generated by the object, as shown in Fig. 1(b) [22].
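As a quick numerical illustration of Eq. (1), the image-side conjugate distances can be solved directly. The sketch below is minimal, and the numeric values (f1 = 45 mm, d1 = 90 mm, f2 = 10 mm, d3.1 = 30 mm) are illustrative assumptions rather than the parameters listed in Table 1.

```python
# Minimal sketch of Eq. (1): solve the Gaussian lens equation for the
# image-side distance. The numeric values below are illustrative
# assumptions, not the parameters listed in Table 1.

def conjugate_distance(d_obj, f):
    """Return d_img satisfying 1/d_obj + 1/d_img = 1/f (thin-lens equation)."""
    if d_obj == f:
        raise ValueError("object at the focal plane: image at infinity")
    return 1.0 / (1.0 / f - 1.0 / d_obj)

# Main lens: an object at d1 = 90 mm with f1 = 45 mm images at d2 = 90 mm
d2 = conjugate_distance(90.0, 45.0)
# Microlens relay: the image plane at d3.1 = 30 mm with f2 = 10 mm
# re-images onto the sensor at d3.2 = 15 mm
d3_2 = conjugate_distance(30.0, 10.0)
```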

Fig. 1 (a) Optical structure of plenoptic camera 2.0; and (b) raw image and magnifications of three micro images [22].

Considering that each micro image presents a portion of the image generated by the object, and that the position of each microlens only affects the area through which rays pass, we propose to simplify the imaging system by placing only one microlens on the optical axis to analyze the imaging results. The impulse response of this system can be verified easily by comparing the real imaging results with the mathematically calculated results, and the model can then be extended to an imaging system using a microlens array. The imaging system is described in the next subsection.

2.2 Design of imaging system

The architecture of the self-built imaging system is shown in Fig. 2. A non-directional white light source illuminates the object and produces Lambertian reflection. Rays from the object plane propagate to an optical filter at 532 nm, which facilitates formulating the model under a monochromatic wavelength. The filtered light rays then pass through a main lens and a microlens, and the image is retrieved from the CMOS sensor. Figure 3 is a photo of the experimental setup. The imaging system is mounted on a slide rail with an accuracy of 0.5 mm. The geometric parameters are listed in Table 1. In this system, the diameter of the microlens is much larger than that used in [8]. A larger microlens increases the amount of light passing through, which provides a brighter imaging result to facilitate model verification without changing the propagation rules relative to a small microlens.

Fig. 2 The schematic diagram of the self-built imaging system.

Fig. 3 The prototype of the self-built imaging system: (a) the top view; and (b) magnification of the subsystem in (a).

Table 1. Geometric parameters of the self-built imaging system.

2.3 PSF derivation

Based on the above optical analysis, plenoptic camera 2.0 can be considered a two-stage imaging system. Thus, the self-built imaging system is divided into two subsystems: Subsystem 1 represents ray propagation from the object plane to the image plane, and Subsystem 2 represents ray propagation from the image plane to the sensor. Combining the analyses of the two subsystems, a layered derivation of the PSF for plenoptic camera 2.0 is given as follows.

First, we analyze Subsystem 1. Rays emitted from a point (x0, y0) on the object plane, as shown in Fig. 1(a), reach a point (xmain, ymain) on the main lens. Letting h11(xmain, ymain, x0, y0) be the impulse response describing this ray propagation process, the field at the point (xmain, ymain) can be modeled by:

$$U(x_{main},y_{main})=\iint_{-\infty}^{+\infty}U(x_0,y_0)\,h_{11}(x_{main},y_{main},x_0,y_0)\,dx_0\,dy_0.\tag{2}$$
According to the Fresnel diffraction equation, the field U(xmain, ymain) can also be formulated by:
$$\begin{aligned}U(x_{main},y_{main})&=\frac{\exp(ikd_1)}{i\lambda d_1}\exp\!\left[\frac{ik}{2d_1}\left(x_{main}^2+y_{main}^2\right)\right]\iint_{-\infty}^{+\infty}U(x_0,y_0)\\&\quad\times\exp\!\left[\frac{ik}{2d_1}\left(x_0^2+y_0^2\right)\right]\exp\!\left[-\frac{ik}{d_1}\left(x_0x_{main}+y_0y_{main}\right)\right]dx_0\,dy_0,\end{aligned}\tag{3}$$
where λ is the wavelength of the light; k = 2π/λ is the wave number; and d1 is the distance between the object and the main lens. Comparing Eq. (2) with Eq. (3), we get:

$$h_{11}(x_{main},y_{main},x_0,y_0)=\frac{\exp(ikd_1)}{i\lambda d_1}\exp\!\left\{\frac{ik}{2d_1}\left[(x_{main}-x_0)^2+(y_{main}-y_0)^2\right]\right\}.\tag{4}$$

After generating the field at point (xmain, ymain), rays pass through the main lens and propagate to the image plane. The field at point (x1, y1) on the image plane, as shown in Fig. 1(a), can be formulated by:

$$U(x_1,y_1)=\iint_{-\infty}^{+\infty}U(x_{main},y_{main})\,h_{12}(x_1,y_1,x_{main},y_{main})\,dx_{main}\,dy_{main},\tag{5}$$
where h12(x1, y1, xmain, ymain) is the impulse response describing ray propagation from the main lens to the image plane. Similarly, using the Fresnel diffraction equation, U(x1, y1) is given by:
$$\begin{aligned}U(x_1,y_1)&=\frac{\exp(ikd_2)}{i\lambda d_2}\exp\!\left[\frac{ik}{2d_2}\left(x_1^2+y_1^2\right)\right]\iint_{-\infty}^{+\infty}U(x_{main},y_{main})\\&\quad\times t_{main}(x_{main},y_{main})\exp\!\left[\frac{ik}{2d_2}\left(x_{main}^2+y_{main}^2\right)\right]\\&\quad\times\exp\!\left[-\frac{ik}{d_2}\left(x_{main}x_1+y_{main}y_1\right)\right]dx_{main}\,dy_{main},\end{aligned}\tag{6}$$
where d2 is the distance between the main lens and the image plane; and tmain(xmain, ymain) is the phase correction factor of the main lens, which represents its optical characteristic. The formulation of tmain(xmain, ymain) is:
$$t_{main}(x_{main},y_{main})=P_1(x_{main},y_{main})\exp\!\left[-\frac{ik}{2f_1}\left(x_{main}^2+y_{main}^2\right)\right],\tag{7}$$
where P1 (xmain, ymain) is the pupil function of the main lens; and f1 is the focal length of the main lens. Comparing Eq. (5) with Eq. (6), we have:
$$h_{12}(x_1,y_1,x_{main},y_{main})=\frac{\exp(ikd_2)}{i\lambda d_2}t_{main}(x_{main},y_{main})\exp\!\left\{\frac{ik}{2d_2}\left[(x_{main}-x_1)^2+(y_{main}-y_1)^2\right]\right\}.\tag{8}$$
Then, plugging Eq. (2) into Eq. (5), the field at the image plane is formulated by:
$$U(x_1,y_1)=\iiiint_{-\infty}^{+\infty}U(x_0,y_0)\,h_{11}(x_{main},y_{main},x_0,y_0)\,h_{12}(x_1,y_1,x_{main},y_{main})\,dx_{main}\,dy_{main}\,dx_0\,dy_0.\tag{9}$$
Thus, the PSF of Subsystem 1, h1(x1, y1, x0, y0), can be formulated by:

$$\begin{aligned}h_1(x_1,y_1,x_0,y_0)&=\iint_{-\infty}^{+\infty}h_{11}(x_{main},y_{main},x_0,y_0)\,h_{12}(x_1,y_1,x_{main},y_{main})\,dx_{main}\,dy_{main}\\&=\frac{\exp[ik(d_1+d_2)]}{-\lambda^2 d_1 d_2}\iint_{-\infty}^{+\infty}t_{main}(x_{main},y_{main})\\&\quad\times\exp\!\left\{\frac{ik}{2d_1}\left[(x_0-x_{main})^2+(y_0-y_{main})^2\right]\right\}\\&\quad\times\exp\!\left\{\frac{ik}{2d_2}\left[(x_1-x_{main})^2+(y_1-y_{main})^2\right]\right\}dx_{main}\,dy_{main}.\end{aligned}\tag{10}$$
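The Subsystem-1 PSF of Eq. (10) can be evaluated numerically by direct quadrature over the main-lens aperture. The sketch below is an illustrative implementation under assumed parameters (d1 = d2 = 90 mm, f1 = 45 mm, a 2.5 mm aperture radius); at this in-focus configuration the quadratic phases cancel for the conjugate point, so the on-axis intensity dominates.

```python
import numpy as np

# Numerical sketch of Eq. (10): direct quadrature of the Subsystem-1 PSF
# over the main-lens aperture. All parameter values (the 532 nm wavelength
# aside) are illustrative assumptions, not the values of Table 1.

lam = 532e-9                      # wavelength (m), matching the 532 nm filter
k = 2 * np.pi / lam
d1, d2, f1 = 0.09, 0.09, 0.045    # in-focus conjugates: 1/d1 + 1/d2 = 1/f1
R = 2.5e-3                        # main-lens aperture radius (m), assumed

n = 256
xm = np.linspace(-R, R, n)        # aperture samples (xmain, ymain)
XM, YM = np.meshgrid(xm, xm)
pupil = (XM**2 + YM**2 <= R**2).astype(float)                  # P1
t_main = pupil * np.exp(-1j * k / (2 * f1) * (XM**2 + YM**2))  # lens phase

def h1(x1, y1, x0=0.0, y0=0.0):
    """Evaluate the Eq. (10) integral for one image-plane point (x1, y1)."""
    dA = (xm[1] - xm[0])**2
    integrand = (t_main
                 * np.exp(1j * k / (2 * d1) * ((x0 - XM)**2 + (y0 - YM)**2))
                 * np.exp(1j * k / (2 * d2) * ((x1 - XM)**2 + (y1 - YM)**2)))
    pref = np.exp(1j * k * (d1 + d2)) / (-lam**2 * d1 * d2)
    return pref * integrand.sum() * dA

# In focus, the quadratic phases cancel for the on-axis conjugate point,
# so its intensity dominates off-axis values by orders of magnitude.
on_axis = abs(h1(0.0, 0.0))**2
off_axis = abs(h1(50e-6, 0.0))**2
```

The grid spacing is chosen so that the residual linear phase across one sample stays well below π, which keeps the quadrature stable near focus.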

Then, the analysis is performed on Subsystem 2. As described in Section 2.1, the imaging system with only one microlens is considered here to simplify the model. Using a derivation similar to that for Subsystem 1, ray propagation from a point (x1, y1) on the image plane to a point (xmicro, ymicro) on the microlens plane, and afterwards from the point (xmicro, ymicro) to a point (x, y) on the sensor, is modeled. Thus, the field at point (x, y) on the sensor is given by:

$$U(x,y)=\iint_{-\infty}^{+\infty}U(x_1,y_1)\,h_2(x,y,x_1,y_1)\,dx_1\,dy_1,\tag{11}$$
where h2(x, y, x1, y1) is the PSF of Subsystem 2. h2 (x, y, x1, y1) is given by:
$$h_2(x,y,x_1,y_1)=\iint_{-\infty}^{+\infty}h_{21}(x_{micro},y_{micro},x_1,y_1)\,h_{22}(x,y,x_{micro},y_{micro})\,dx_{micro}\,dy_{micro},\tag{12}$$
where h21(xmicro, ymicro, x1, y1) and h22(x, y, xmicro, ymicro) are the impulse responses describing ray propagation from the image plane to the microlens and from the microlens to the sensor, respectively. h21(xmicro, ymicro, x1, y1) is given by:
$$\begin{aligned}h_{21}(x_{micro},y_{micro},x_1,y_1)&=\frac{\exp(ikd_{3.1})}{i\lambda d_{3.1}}\exp\!\left[\frac{ik}{2d_{3.1}}\left(x_{micro}^2+y_{micro}^2\right)\right]\\&\quad\times\exp\!\left[\frac{ik}{2d_{3.1}}\left(x_1^2+y_1^2\right)\right]\exp\!\left[-\frac{ik}{d_{3.1}}\left(x_1x_{micro}+y_1y_{micro}\right)\right]\\&=\frac{\exp(ikd_{3.1})}{i\lambda d_{3.1}}\exp\!\left\{\frac{ik}{2d_{3.1}}\left[(x_{micro}-x_1)^2+(y_{micro}-y_1)^2\right]\right\},\end{aligned}\tag{13}$$
where d3.1 is the distance between the image plane and the microlens. h22 (x, y, xmicro, ymicro) is given by:
$$h_{22}(x,y,x_{micro},y_{micro})=\frac{\exp(ikd_{3.2})}{i\lambda d_{3.2}}t_{micro}(x_{micro},y_{micro})\exp\!\left\{\frac{ik}{2d_{3.2}}\left[(x-x_{micro})^2+(y-y_{micro})^2\right]\right\},\tag{14}$$
where d3.2 is the distance between the microlens and the sensor; tmicro (xmicro, ymicro) is the phase correction factor for a microlens and is formulated by:
$$t_{micro}(x_{micro},y_{micro})=P_2(x_{micro},y_{micro})\exp\!\left[-\frac{ik}{2f_2}\left(x_{micro}^2+y_{micro}^2\right)\right],\tag{15}$$
where P2 (xmicro, ymicro) and f2 are the pupil function and focal length of the microlens, respectively.

Plugging Eqs. (13)–(15) into Eq. (12), the PSF of Subsystem 2 with only one microlens is given by:

$$\begin{aligned}h_2(x,y,x_1,y_1)&=\iint_{-\infty}^{+\infty}h_{21}(x_{micro},y_{micro},x_1,y_1)\,h_{22}(x,y,x_{micro},y_{micro})\,dx_{micro}\,dy_{micro}\\&=\frac{\exp[ik(d_{3.1}+d_{3.2})]}{-\lambda^2 d_{3.1} d_{3.2}}\iint_{-\infty}^{+\infty}t_{micro}(x_{micro},y_{micro})\\&\quad\times\exp\!\left\{\frac{ik}{2d_{3.1}}\left[(x_1-x_{micro})^2+(y_1-y_{micro})^2\right]\right\}\\&\quad\times\exp\!\left\{\frac{ik}{2d_{3.2}}\left[(x-x_{micro})^2+(y-y_{micro})^2\right]\right\}dx_{micro}\,dy_{micro}.\end{aligned}\tag{16}$$

Extending the PSF to a microlens array with m × n microlenses, the only change lies in h22(x, y, xmicro, ymicro), which describes ray propagation from the microlens plane to the sensor. The phase correction factor of the microlens array becomes the accumulation of the phase correction factors of the m × n microlenses: $\sum_m\sum_n t_{micro}(x_{micro}-mD,\,y_{micro}-nD)$. Thus, h22(x, y, xmicro, ymicro) becomes:

$$\begin{aligned}h_{22}(x,y,x_{micro},y_{micro})&=\frac{\exp(ikd_{3.2})}{i\lambda d_{3.2}}\sum_m\sum_n t_{micro}(x_{micro}-mD,\,y_{micro}-nD)\\&\quad\times\exp\!\left\{\frac{ik}{2d_{3.2}}\left[(x-x_{micro})^2+(y-y_{micro})^2\right]\right\},\end{aligned}\tag{17}$$
where D is the diameter of a microlens. Accordingly, h2(x, y, x1, y1) becomes:

$$\begin{aligned}h_2(x,y,x_1,y_1)&=\iint_{-\infty}^{+\infty}h_{21}(x_{micro},y_{micro},x_1,y_1)\,h_{22}(x,y,x_{micro},y_{micro})\,dx_{micro}\,dy_{micro}\\&=\frac{\exp[ik(d_{3.1}+d_{3.2})]}{-\lambda^2 d_{3.1} d_{3.2}}\sum_m\sum_n\iint_{-\infty}^{+\infty}t_{micro}(x_{micro}-mD,y_{micro}-nD)\\&\quad\times\exp\!\left\{\frac{ik}{2d_{3.1}}\left[(x_1-x_{micro})^2+(y_1-y_{micro})^2\right]\right\}\\&\quad\times\exp\!\left\{\frac{ik}{2d_{3.2}}\left[(x-x_{micro})^2+(y-y_{micro})^2\right]\right\}dx_{micro}\,dy_{micro}.\end{aligned}\tag{18}$$
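The array phase screen $\sum_m\sum_n t_{micro}(x_{micro}-mD,\,y_{micro}-nD)$ appearing in Eqs. (17) and (18) can be built explicitly on a grid. The following sketch constructs a 3 × 3 patch of lenslets with assumed pitch and focal length (D = 1 mm, f2 = 10 mm); the fill factor of circular pupils on a square grid should approach π/4.

```python
import numpy as np

# Sketch of the microlens-array phase screen of Eqs. (17)/(18): the array
# transmittance is the sum of shifted single-lens factors
# t_micro(x - m*D, y - n*D). Pitch and focal length are assumed values.

lam = 532e-9
k = 2 * np.pi / lam
f2 = 10e-3        # microlens focal length (m), assumed
D = 1e-3          # microlens pitch = diameter (m), assumed

N = 512
L = 3 * D         # grid window covering a 3 x 3 patch of lenslets
x = np.linspace(-L / 2, L / 2, N)
X, Y = np.meshgrid(x, x)

def t_micro(xs, ys):
    """Single-microlens factor: circular pupil P2 times the thin-lens phase."""
    pupil = xs**2 + ys**2 <= (D / 2)**2
    return np.where(pupil, np.exp(-1j * k / (2 * f2) * (xs**2 + ys**2)), 0)

# Accumulate the shifted lenslets, as in the double sum of Eq. (17)
t_array = np.zeros((N, N), dtype=complex)
for m in (-1, 0, 1):
    for nn in (-1, 0, 1):
        t_array += t_micro(X - m * D, Y - nn * D)

# Circular pupils on a square grid cover pi/4 of the plane
fill_factor = np.mean(np.abs(t_array) > 0)
```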

Having the PSF of Subsystem 1 and Subsystem 2, the field on the sensor of the whole imaging system is:

$$U(x,y)=\iiiint_{-\infty}^{+\infty}U(x_0,y_0)\,h_1(x_1,y_1,x_0,y_0)\,h_2(x,y,x_1,y_1)\,dx_1\,dy_1\,dx_0\,dy_0.\tag{19}$$
Using h(x, y, x0, y0) to represent the PSF of the whole system, Eq. (19) can be rewritten as:
$$U(x,y)=\iint_{-\infty}^{+\infty}U(x_0,y_0)\,h(x,y,x_0,y_0)\,dx_0\,dy_0,\tag{20}$$
where h(x, y, x0, y0) equals:
$$\begin{aligned}h(x,y,x_0,y_0)&=\iint_{-\infty}^{+\infty}h_1(x_1,y_1,x_0,y_0)\,h_2(x,y,x_1,y_1)\,dx_1\,dy_1\\&=\frac{\exp[ik(d_1+d_2+d_{3.1}+d_{3.2})]}{\lambda^4 d_1 d_2 d_{3.1} d_{3.2}}\sum_m\sum_n\iint_{-\infty}^{+\infty}\Bigg[\iint_{-\infty}^{+\infty}t_{micro}(x_{micro}-mD,y_{micro}-nD)\\&\quad\times\exp\!\left\{\frac{ik}{2d_{3.1}}\left[(x_1-x_{micro})^2+(y_1-y_{micro})^2\right]\right\}\\&\quad\times\exp\!\left\{\frac{ik}{2d_{3.2}}\left[(x-x_{micro})^2+(y-y_{micro})^2\right]\right\}dx_{micro}\,dy_{micro}\Bigg]\\&\quad\times\Bigg[\iint_{-\infty}^{+\infty}t_{main}(x_{main},y_{main})\exp\!\left\{\frac{ik}{2d_1}\left[(x_0-x_{main})^2+(y_0-y_{main})^2\right]\right\}\\&\quad\times\exp\!\left\{\frac{ik}{2d_2}\left[(x_1-x_{main})^2+(y_1-y_{main})^2\right]\right\}dx_{main}\,dy_{main}\Bigg]dx_1\,dy_1.\end{aligned}\tag{21}$$
For a real imaging system, the image on the sensor for a point light source is the intensity of the PSF, i.e., the absolute square of the PSF.

3. Wave analysis for PSF and FSPSF in plenoptic camera 2.0

3.1 PSF wave analysis

The characteristics of the proposed PSF are analyzed by varying the imaging parameters of the self-built system shown in Fig. 3. Two sets of analyses are conducted by calculating the PSF response according to the variable settings listed in Table 2.

Table 2. The variation settings of object’s depth and sensor position.

The first set of analyses evaluates the response to object depth by changing d1 from 70 mm to 110 mm, as listed in the first row of Table 2. PSFs calculated according to Eq. (21) are shown in Fig. 4, normalized by the maximum intensity among the five PSFs. Since the system was designed to be perfectly focused at d1 = 90 mm, the PSF at d1 = 90 mm is the sharpest and smallest spot in the center, as shown in Fig. 4(c), and its outer circular rings are darker than those of the other PSFs. As the defocus degree of the object changes, corresponding to the difference between d1 and 90 mm, the outer circular rings gradually become brighter and the energy at the center becomes weaker and more dispersed. From Figs. 4(a) to 4(e), the maximum intensities after normalization are 0.204, 0.206, 1.000, 0.963 and 0.508, which indicates that the energy concentration capability is highest at d1 = 90 mm and much weaker at the defocused planes. The energy concentration degree directly affects the image quality: the higher the concentration, the clearer the image.

Fig. 4 PSFs calculated at different object’s depths d1 by Eq. (21): (a) d1 = 70mm; (b) d1 = 80mm; (c) d1 = 90mm; (d) d1 = 100mm and (e) d1 = 110mm.

The second set of analyses, Analysis Set 2 in Table 2, evaluates the response to sensor position by changing d3.2 from 10 mm to 20 mm. The calculated PSFs are shown in Fig. 5. Since changing d3.2 corresponds to moving the focal plane of the system, as seen from Eq. (1), moving the sensor farther from/closer to the microlens corresponds to the distance between the focal plane and the main lens becoming shorter/longer. Thus, the variations in the PSFs at different sensor positions correspond to the variations in the PSFs at different object depths. Since the system is designed to be focused at d3.2 = 15 mm, the PSF shown in Fig. 5(c) has the brightest spot in the center and the darkest outer rings. As the difference between d3.2 and 15 mm increases, the energy spreads from the central spot to the outer rings, indicating a higher degree of energy decentralization.

Fig. 5 PSF calculated at different sensor positions d3.2 by Eq. (21): (a) d3.2 = 10mm; (b) d3.2 = 12mm; (c) d3.2 = 15mm; (d) d3.2 = 18mm; and (e) d3.2 = 20mm.

3.2 FSPSF derivation

Based on the proposed PSF, the FSPSF can be calculated for extending the depth of field and for superresolution reconstruction. By definition, the FSPSF is the integral of the PSF over time:

$$\mathrm{FSPSF}=\int \mathrm{PSF}\,dt.\tag{22}$$
During the time interval corresponding to a specific exposure time, the sensor position d3.2 gradually changes, so Eq. (22) becomes:
$$\mathrm{FSPSF}=\int \mathrm{PSF}(d_{3.2})\,dt.\tag{23}$$
The sensor plane moves at a constant velocity, denoted by v, so the sensor position is d3.2 = vt + d0, where d0 is the initial position of the sensor plane. Thus, Eq. (23) becomes:
$$\mathrm{FSPSF}=\int \mathrm{PSF}(d_{3.2})\,d\!\left(\frac{d_{3.2}-d_0}{v}\right)=\frac{1}{v}\int \mathrm{PSF}(d_{3.2})\,dd_{3.2}.\tag{24}$$
Based on Eq. (24), we can obtain the FSPSF by integrating the PSF over d3.2 as:

$$\begin{aligned}\mathrm{FSPSF}&=\frac{1}{v}\int h(x,y,x_0,y_0)\,dd_{3.2}\\&=\frac{1}{v}\int\frac{\exp[ik(d_1+d_2+d_{3.1}+d_{3.2})]}{\lambda^4 d_1 d_2 d_{3.1} d_{3.2}}\sum_m\sum_n\iint_{-\infty}^{+\infty}\Bigg[\iint_{-\infty}^{+\infty}t_{micro}(x_{micro}-mD,y_{micro}-nD)\\&\quad\times\exp\!\left\{\frac{ik}{2d_{3.1}}\left[(x_1-x_{micro})^2+(y_1-y_{micro})^2\right]\right\}\\&\quad\times\exp\!\left\{\frac{ik}{2d_{3.2}}\left[(x-x_{micro})^2+(y-y_{micro})^2\right]\right\}dx_{micro}\,dy_{micro}\Bigg]\\&\quad\times\Bigg[\iint_{-\infty}^{+\infty}t_{main}(x_{main},y_{main})\exp\!\left\{\frac{ik}{2d_1}\left[(x_0-x_{main})^2+(y_0-y_{main})^2\right]\right\}\\&\quad\times\exp\!\left\{\frac{ik}{2d_2}\left[(x_1-x_{main})^2+(y_1-y_{main})^2\right]\right\}dx_{main}\,dy_{main}\Bigg]dx_1\,dy_1\,dd_{3.2}.\end{aligned}\tag{25}$$

Equation (25) gives the mathematical FSPSF for plenoptic camera 2.0. Verification for the properties of FSPSF based on Eq. (25) is shown in Section 4.3.
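The depth-invariance claimed for the focal sweep can be probed with a toy numerical experiment. The sketch below uses a 1D version of the Subsystem-2 PSF (cf. Eq. (16)) for an on-axis point and accumulates intensity PSFs over the d3.2 sweep, since the sensor records the intensity; the aperture half-width, focal length, sweep range and sampling are all illustrative assumptions.

```python
import numpy as np

lam = 532e-9
k = 2 * np.pi / lam
f2 = 10e-3                 # microlens focal length (m), assumed
a = 0.5e-3                 # aperture half-width (m), assumed

xm = np.linspace(-a, a, 1001)            # aperture samples (~1 um spacing)
x = np.linspace(-0.2e-3, 0.2e-3, 201)    # sensor coordinate (m)

def psf_1d(d31, d32):
    """1D intensity PSF of Subsystem 2 for an on-axis point (cf. Eq. (16)).

    Constant and unit-modulus phase factors are dropped; they do not
    affect |field|^2.
    """
    # residual quadratic (defocus) phase after combining both Fresnel kernels
    alpha = 0.5 * k * (1.0 / d31 + 1.0 / d32 - 1.0 / f2)
    chirp = np.exp(1j * alpha * xm**2)
    # linear (shift) phase toward each sensor position x
    field = (np.exp(-1j * k * np.outer(x, xm) / d32) * chirp).sum(axis=1)
    field *= (xm[1] - xm[0]) / (lam * d31 * d32)   # quadrature step and 1/d scaling
    return np.abs(field)**2

def fspsf(d31, sweep):
    """Accumulated (focal-sweep) intensity PSF over sensor positions d3.2."""
    acc = sum(psf_1d(d31, d) for d in sweep)
    return acc / acc.max()

sweep = np.linspace(10e-3, 30e-3, 41)
f_a = fspsf(28e-3, sweep)   # two different image-plane depths d3.1
f_b = fspsf(32e-3, sweep)
similarity = np.corrcoef(f_a, f_b)[0, 1]   # close to 1 if depth-invariant
```

If the sweep range covers the in-focus sensor positions of both depths, the two normalized profiles nearly coincide, mirroring the behavior reported in Section 4.3.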

4. Experimental results

4.1 PSF verification

The correctness of the proposed PSF is verified by comparing PSFs captured by a real imaging system with those calculated by Eq. (21). First, using the imaging system described in Fig. 3, PSFs of the system with a single microlens are captured while changing the object depth d1. The point light source is generated using an LED lamp bead and an objective lens, by which the light from the LED converges to a point and thereafter diverges omnidirectionally.

As shown in the figure, the variation tendency of the real imaging results, shown in Figs. 6(a)-6(d), is visually similar to that of the simulated results, shown in Figs. 6(e)-6(h), both in terms of size changes and degree of blur. A slight difference is observed in the imaging result at d1 = 75 mm: the central energy of the experimental result, shown in Fig. 6(a), is lower than that of the simulated one, shown in Fig. 6(e). This is mainly caused by the difference between the actual position where the light rays focus after passing through the main lens and the position calculated by the theoretical model. In the theoretical model, the optical path differences are calculated by approximating the spherical surfaces of the main lens as parabolic ones, and the approximation error affects the calculated results more obviously as the object distance decreases.

Fig. 6 PSFs with a single microlens: (a)-(d): PSFs obtained by the real imaging systems with d1 = 75mm, d1 = 90mm, d1 = 100mm, and d1 = 125mm, respectively; (e)-(h): simulated PSFs with d1 = 75mm, d1 = 90mm, d1 = 100mm, and d1 = 125mm, respectively.

Second, a real imaging system using a microlens array is built to further demonstrate the correctness of the proposed model. The imaging system is shown in Fig. 7(a). The point light source is again generated by an LED lamp bead and an objective lens. The geometric parameters of the real imaging system are listed in Table 3. Figure 7(b) shows the experimental results at d1 = 90mm, where the focal plane of the system is located.

Fig. 7 PSFs with a microlens array: (a) the prototype of the real imaging system with a microlens array; (b) experimental PSF captured as d1 = 90mm; (c) the enlarged view of the region lined in red in (b); (d) simulated PSF as d1 = 90mm.

Table 3. Geometric parameters of the imaging system with a microlens array.

The imaging result of the system is shown in Fig. 7(b), and that of the adjacent seven microlenses, the region outlined in red in Fig. 7(b), is magnified in Fig. 7(c). It is compared with the simulated PSF calculated by Eq. (21) and shown in Fig. 7(d). The two PSFs are visually similar. The distances between the PSF of the center microlens and those of its six surrounding microlenses are 0.170 mm and 0.172 mm on average in Fig. 7(c) and Fig. 7(d), respectively, showing high consistency in scale. There is a small difference between the simulated and experimental results in the surrounding micro images, which may be caused by aberrations, since the point light source illuminates the edges of those microlenses.

4.2 Image formation verification

The correctness of the proposed PSF is then verified by comparing the real imaging results with the corresponding simulation results. The results generated by the proposed PSF are obtained by calculating the system response of every point on the object using Eq. (21), retrieving the intensity, and accumulating all the responses.
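The accumulation procedure described above can be sketched as follows; note that a separable Gaussian is used here purely as a stand-in for the intensity response of Eq. (21), so the numbers are illustrative only.

```python
import numpy as np

# Sketch of the accumulation described above: the simulated image is the
# sum, over object points, of shifted intensity responses. A Gaussian is
# used purely as a stand-in for the |Eq. (21)|^2 response.

def psf_intensity(n, sigma):
    """Normalized stand-in intensity PSF on an n x n grid."""
    ax = np.arange(n) - n // 2
    X, Y = np.meshgrid(ax, ax)
    g = np.exp(-(X**2 + Y**2) / (2.0 * sigma**2))
    return g / g.sum()

def render(obj, sigma=1.5):
    """Incoherent image formation: accumulate each point's intensity response."""
    n = obj.shape[0]
    img = np.zeros_like(obj, dtype=float)
    psf = psf_intensity(2 * n, sigma)   # oversized so every shift stays in range
    for (i, j), w in np.ndenumerate(obj):
        if w > 0:
            img += w * psf[n - i:2 * n - i, n - j:2 * n - j]
    return img
```

Because the responses are accumulated as intensities, not complex fields, this corresponds to spatially incoherent illumination, consistent with the statement after Eq. (21) that the sensor records the absolute square of the PSF.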

First, the two objects of different shapes shown in Fig. 8 are used as the objects in Fig. 2. By changing the object depth d1, images captured on the sensor are retrieved, normalized and compared with the normalized results generated by the proposed PSF. Two criteria are investigated to measure the difference between the real imaging results and the corresponding simulated results: the size of the image, which is measured along the x and y directions marked in red in Fig. 8 between the two dashed lines; and the defocus level, which is measured by applying an 8 × 8 DCT to the image and calculating the ratio of the high-frequency energy to the low-frequency energy (the four DCT coefficients at DC and the lowest frequencies) [23]. The higher the defocus level value, i.e., the more energy at high frequencies, the better the focus performance.
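The defocus-level criterion can be reproduced with a small routine. The sketch below implements an orthonormal 8 × 8 block DCT in NumPy and takes the 2 × 2 lowest-frequency coefficients (including DC) as the low-frequency energy; treating exactly those four coefficients as "low" is our reading of the criterion in [23], so the split is an assumption.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix: rows are cosine basis vectors."""
    C = np.zeros((n, n))
    for u in range(n):
        scale = np.sqrt(1.0 / n) if u == 0 else np.sqrt(2.0 / n)
        C[u] = scale * np.cos((2 * np.arange(n) + 1) * u * np.pi / (2 * n))
    return C

def defocus_level(img, bs=8):
    """Ratio of high- to low-frequency DCT energy over all bs x bs blocks."""
    C = dct_matrix(bs)
    lo = hi = 0.0
    h, w = img.shape
    for i in range(0, h - h % bs, bs):
        for j in range(0, w - w % bs, bs):
            d = C @ img[i:i + bs, j:j + bs] @ C.T   # 2D block DCT
            e = d * d
            low = e[:2, :2].sum()                   # DC + three lowest AC terms
            lo += low
            hi += e.sum() - low
    return hi / lo

# A sharp pattern should score higher than its locally averaged (blurred) copy.
sharp = (np.indices((64, 64)).sum(axis=0) % 2).astype(float)
blur = sum(np.roll(np.roll(sharp, di, 0), dj, 1)
           for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0
```

For the checkerboard above, the blurred copy keeps the same mean (DC) but has its high-frequency amplitude reduced, so its defocus level drops sharply.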

Fig. 8 Two objects used in the experiments: (a) the first object; (b) the second object. Red lines describe the lengths calculated in x and y directions.

The results are shown in Fig. 9. Since the imaging system is focused at d1 = 90 mm, the imaging results and the simulated results in Fig. 9(b) are the sharpest. As the object depth d1 changes, the images become smaller/bigger and blurred at the edges, which is caused by defocus. As the object moves away from the main lens, corresponding to Figs. 9(c)–9(f), the image size gradually becomes smaller, while the degree of blur is positively correlated with the difference between the object depth and 90 mm.

Fig. 9 The first row and the second row are the real imaging results and the simulated results of the object in Fig. 8(a), respectively. The third row and the fourth row are the real imaging results and the simulated results of the object in Fig. 8(b), respectively. Columns (a) to (f) correspond to d1 = 75mm, d1 = 90mm, d1 = 100mm, d1 = 125mm, d1 = 150mm and d1 = 175mm, respectively.

Besides the visual similarity between the real imaging results and the corresponding simulated results, the variations in object size and defocus level are consistent with each other when comparing the images in the first/third row with those in the second/fourth row. The differences in size between the two sets of results, corresponding to the two objects in Fig. 8, are 1.36% and 4.61% on average. Although the defocus level measurement is sensitive to the environmental illumination during real imaging, the variation trend in the defocus level is still quite similar between the real imaging results and the corresponding simulated results, which demonstrates the correctness of the proposed model.

Second, the USAF resolution target shown in Fig. 10, which contains more texture details, is tested to further verify the proposed PSF formulation. As the USAF resolution target is much bigger than the two objects in Fig. 8, the focal lengths and diameters of the main lens and the microlens were changed to decrease the optical aberrations at the edges of the lenses. The diameters and focal lengths of the main lens/microlens are 40 mm/25.4 mm and 100 mm/30 mm, respectively. The real imaging results and corresponding simulated results are shown in Fig. 11.

Fig. 10 USAF resolution target used in the real imaging systems. Red lines describe the lengths calculated in x and y directions.

Fig. 11 The first row and the second row are the real imaging results and the simulated results of the USAF resolution target in Fig. 10, respectively. Columns (a)-(c) correspond to d1 = 280mm, d1 = 300mm, and d1 = 330mm.

As shown in Fig. 11, the variations in image size and defocus level are almost consistent between the simulated and real imaging results as well. As the imaging system is focused at d1 = 300 mm, the results shown in Fig. 11(b) are the sharpest and their defocus values are the highest. When the object depth d1 increases/decreases relative to 300 mm, the imaging results become smaller/larger and the defocus blur increases. Averaging over the three pairs, the size difference is only 4.11%. Although the defocus values of the real imaging results are affected by optical aberrations and by the difference between the actual focal plane of the real imaging system and the theoretical focal plane of the simulation model, the variation tendency is still consistent between the two rows.

These results verify the consistency between the real imaging results and the results calculated by the proposed model, demonstrating that the proposed PSF correctly represents the image formation property of the imaging system.

4.3 FSPSF property analysis

To analyze the properties of the FSPSF for plenoptic camera 2.0, FSPSFs at different object depths are calculated. The integration range for the sensor position d3.2 is from 10 mm to 30 mm. Normalized FSPSFs at five depths are shown in Fig. 12. The five object depths are the same as those in Fig. 4, at which the PSFs vary obviously; the five FSPSFs, however, are almost visually identical.

Fig. 12 FSPSFs with depth d1 changes: (a) d1 = 70mm; (b) d1 = 80mm; (c) d1 = 90mm; (d) d1 = 100mm; and (e) d1 = 110mm.

Since the FSPSFs are almost centrally symmetric, we extract the FSPSF distributions at y = 0 mm to compare them in detail in Fig. 13. FSPSFs at other depths, with d1 ranging from 75 mm to 105 mm in steps of 5 mm, are also calculated and included in Fig. 13. As shown in the figure, the curves of the FSPSFs at different depths almost overlap, which demonstrates the depth-invariant property of the FSPSF. This empirical verification can assist deconvolution algorithms, which can help extend the depth of field and perform superresolution.

Fig. 13 x-cross section at y = 0 mm of FSPSFs for objects with different object’s depths d1.

5. Conclusions

In this paper, we derived a mathematical PSF and FSPSF for plenoptic camera 2.0 by analyzing its image formation process and optical structure. The mathematical derivation of the PSF follows from the image formation analysis and the Fresnel diffraction equation. Based on the proposed PSF and the FSPSF definition, the mathematical model of the FSPSF is derived and verified empirically to be depth-invariant. Experiments on the self-built imaging system verify the correctness of the proposed PSF.

To optimize the proposed mathematical PSF and FSPSF, future work includes extending the model to thick and commercial lenses, theoretically verifying the depth-invariant property of the FSPSF, and optimizing the parameters for FSPSF calculation, such as the integration range of the sensor position. In addition, light field recovery algorithms based on the proposed PSF and FSPSF, such as extended depth of field, superresolution reconstruction and optical aberration correction, will also be investigated.

Funding

National Natural Science Foundation of China (NSFC) (61371138); Natural Science Foundation of Guangdong, China (2014A030313733); National Key Scientific Instrument and Equipment Development Project, China (2013YQ140517).

References and links

1. E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 99–106 (1992). [CrossRef]  

2. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light Field Photography with a Hand-Held Plenoptic Camera,” Technical Report, Stanford University (2005).

3. R. Ng, “Digital light field photography,” Ph.D. thesis, Stanford University (2006).

4. M. Levoy, R. Ng, A. Adam, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. 25(3), 924–934 (2006). [CrossRef]  

5. E. Y. Lam, “Computational photography with plenoptic camera and light field capture: tutorial,” J. Opt. Soc. Am. A 32(11), 2021–2032 (2015). [CrossRef]   [PubMed]  

6. V. Boominathan, K. Mitra, and A. Veeraraghavan, “Improving resolution and depth-of-field of light field cameras using a hybrid imaging system,” in Proceedings of 2014 IEEE International Conference on Computational Photography (ICCP) (2014), pp. 1–10. [CrossRef]  

7. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in Proceedings of IEEE International Conference on Computational Photography (ICCP, 2009), pp. 1–8. [CrossRef]  

8. T. Georgiev and A. Lumsdaine, “Focused plenoptic camera and rendering,” J. Electron. Imaging 19(2), 1–28 (2010).

9. A. Lumsdaine and T. Georgiev, “Full resolution light field rendering,” Technical report, Adobe Systems (2008).

10. T. Georgiev and A. Lumsdaine, “Superresolution with Plenoptic 2.0 cameras,” in Frontiers in Optics 2009/Laser Science XXV/Fall 2009, OSA Technical Digest (CD) (Optical Society of America, 2009), paper STuA6.

11. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21(21), 25418–25439 (2013). [CrossRef]   [PubMed]  

12. S. Shroff and K. Berkner, “Plenoptic System Response and Image Formation,” in Imaging and Applied Optics, OSA Technical Digest (online) (Optical Society of America, 2013), paper JW3B.1.

13. S. Shroff and K. Berkner, “High Resolution Image Reconstruction for Plenoptic Imaging Systems using System Response,” in Imaging and Applied Optics Technical Papers, OSA Technical Digest (online) (Optical Society of America (2012)), paper CM2B.2. [CrossRef]  

14. S. Shroff and K. Berkner, “Wave analysis of a plenoptic system and its applications,” Proc. SPIE 8667, 86671L (2013). [CrossRef]  

15. T. E. Bishop and P. Favaro, “The Light Field Camera: Extended Depth of Field, Aliasing, and Superresolution,” IEEE Trans. Pattern Anal. Mach. Intell. 34(5), 972–986 (2012). [CrossRef]   [PubMed]  

16. T. E. Bishop, S. Zanetti, and P. Favaro, “Light field superresolution,” in 2009 IEEE International Conference on Computational Photography (ICCP) (2009), pp.1–9. [CrossRef]  

17. M. Turola, “Investigation of plenoptic imaging systems: a wave optics approach,” PhD dissertation, City University London (2016).

18. G. Häusler, “A method to increase the depth of focus by two step image processing,” Opt. Commun. 6(1), 38–42 (1972). [CrossRef]  

19. Y. Bando, H. Holtzman, and R. Raskar, “Near-invariant blur for depth and 2D motion via time-varying light field analysis,” ACM Trans. Graph. 32(2), 539–555 (2013). [CrossRef]  

20. S. Kuthirummal, H. Nagahara, C. Zhou, and S. K. Nayar, “Flexible Depth of Field Photography,” IEEE Trans. Pattern Anal. Mach. Intell. 33(1), 58–71 (2011). [CrossRef]   [PubMed]  

21. R. Yokoya and S. K. Nayar, “Extended Depth of Field Catadioptric Imaging Using Focal Sweep,” in 2015 IEEE International Conference on Computer Vision (ICCV) (IEEE, 2015), pp. 3505–3513. [CrossRef]  

22. T. Georgiev, “Plenoptic 2.0 data: Photographer,” http://www.tgeorgiev.net/Jeff.jpg.

23. X. Marichal, W. Ma, and H. Zhang, “Blur determination in the compressed domain using DCT information,” in Proceedings of 1999 International Conference on Image Processing (Cat. 99CH36348), Kobe (1999), pp. 386–390. [CrossRef]  

Figures (13)

Fig. 1 (a) Optical structure of plenoptic camera 2.0; and (b) raw image and magnifications of three micro images [22].
Fig. 2 The schematic diagram of the self-built imaging system.
Fig. 3 The prototype of the self-built imaging system: (a) the top view; and (b) magnification of the subsystem in (a).
Fig. 4 PSFs calculated at different object depths d1 by Eq. (21): (a) d1 = 70mm; (b) d1 = 80mm; (c) d1 = 90mm; (d) d1 = 100mm; and (e) d1 = 110mm.
Fig. 5 PSFs calculated at different sensor positions d3.2 by Eq. (21): (a) d3.2 = 10mm; (b) d3.2 = 12mm; (c) d3.2 = 15mm; (d) d3.2 = 18mm; and (e) d3.2 = 20mm.
Fig. 6 PSFs with a single microlens: (a)-(d) PSFs obtained by the real imaging systems with d1 = 75mm, d1 = 90mm, d1 = 100mm, and d1 = 125mm, respectively; (e)-(h) simulated PSFs with d1 = 75mm, d1 = 90mm, d1 = 100mm, and d1 = 125mm, respectively.
Fig. 7 PSFs with a microlens array: (a) the prototype of the real imaging system with a microlens array; (b) experimental PSF captured at d1 = 90mm; (c) the enlarged view of the region outlined in red in (b); (d) simulated PSF at d1 = 90mm.
Fig. 8 Two objects used in the experiments: (a) the first object; (b) the second object. Red lines describe the lengths calculated in the x and y directions.
Fig. 9 The first row and the second row are the real imaging results and the simulated results of the object in Fig. 8(a), respectively. The third row and the fourth row are the real imaging results and the simulated results of the object in Fig. 8(b), respectively. Columns (a) to (f) correspond to d1 = 75mm, d1 = 90mm, d1 = 100mm, d1 = 125mm, d1 = 150mm and d1 = 175mm, respectively.
Fig. 10 USAF resolution target used in the real imaging systems. Red lines describe the lengths calculated in the x and y directions.
Fig. 11 The first row and the second row are the real imaging results and the simulated results of the USAF resolution target in Fig. 10, respectively. Columns (a)-(c) correspond to d1 = 280mm, d1 = 300mm, and d1 = 330mm.
Fig. 12 FSPSFs as depth d1 changes: (a) d1 = 70mm; (b) d1 = 80mm; (c) d1 = 90mm; (d) d1 = 100mm; and (e) d1 = 110mm.
Fig. 13 x-cross section at y = 0 mm of FSPSFs for objects with different object depths d1.

Tables (3)

Table 1 Geometric parameters of the self-built imaging system.
Table 2 The variation settings of object depth and sensor position.
Table 3 Geometric parameters of the imaging system with a microlens array.

Equations (25)

(1) \( \frac{1}{d_1}+\frac{1}{d_2}=\frac{1}{f_1}, \qquad \frac{1}{d_{3.1}}+\frac{1}{d_{3.2}}=\frac{1}{f_2}. \)

(2) \( U(x_{main},y_{main})=\iint_{-\infty}^{+\infty} U(x_0,y_0)\,h_{11}(x_{main},y_{main},x_0,y_0)\,dx_0\,dy_0. \)

(3) \( U(x_{main},y_{main})=\frac{\exp(ikd_1)}{i\lambda d_1}\exp\left[\frac{ik}{2d_1}(x_{main}^2+y_{main}^2)\right]\iint_{-\infty}^{+\infty} U(x_0,y_0)\exp\left[\frac{ik}{2d_1}(x_0^2+y_0^2)\right]\exp\left[-\frac{ik}{d_1}(x_0x_{main}+y_0y_{main})\right]dx_0\,dy_0. \)

(4) \( h_{11}(x_{main},y_{main},x_0,y_0)=\frac{\exp(ikd_1)}{i\lambda d_1}\exp\left\{\frac{ik}{2d_1}\left[(x_{main}-x_0)^2+(y_{main}-y_0)^2\right]\right\}. \)

(5) \( U(x_1,y_1)=\iint_{-\infty}^{+\infty} U(x_{main},y_{main})\,h_{12}(x_1,y_1,x_{main},y_{main})\,dx_{main}\,dy_{main}. \)

(6) \( U(x_1,y_1)=\frac{\exp(ikd_2)}{i\lambda d_2}\exp\left[\frac{ik}{2d_2}(x_1^2+y_1^2)\right]\iint_{-\infty}^{+\infty} U(x_{main},y_{main})\,t_{main}(x_{main},y_{main})\exp\left[\frac{ik}{2d_2}(x_{main}^2+y_{main}^2)\right]\exp\left[-\frac{ik}{d_2}(x_{main}x_1+y_{main}y_1)\right]dx_{main}\,dy_{main}. \)

(7) \( t_{main}(x_{main},y_{main})=P_1(x_{main},y_{main})\exp\left[-\frac{ik}{2f_1}(x_{main}^2+y_{main}^2)\right]. \)

(8) \( h_{12}(x_1,y_1,x_{main},y_{main})=\frac{\exp(ikd_2)}{i\lambda d_2}\,t_{main}(x_{main},y_{main})\exp\left\{\frac{ik}{2d_2}\left[(x_{main}-x_1)^2+(y_{main}-y_1)^2\right]\right\}. \)

(9) \( U(x_1,y_1)=\iiiint_{-\infty}^{+\infty} U(x_0,y_0)\,h_{11}(x_{main},y_{main},x_0,y_0)\,h_{12}(x_1,y_1,x_{main},y_{main})\,dx_{main}\,dy_{main}\,dx_0\,dy_0. \)

(10) \( h_1(x_1,y_1,x_0,y_0)=\iint_{-\infty}^{+\infty} h_{11}(x_{main},y_{main},x_0,y_0)\,h_{12}(x_1,y_1,x_{main},y_{main})\,dx_{main}\,dy_{main}=\frac{\exp[ik(d_1+d_2)]}{-\lambda^2 d_1 d_2}\iint_{-\infty}^{+\infty} t_{main}(x_{main},y_{main})\exp\left\{\frac{ik}{2d_1}\left[(x_0-x_{main})^2+(y_0-y_{main})^2\right]\right\}\exp\left\{\frac{ik}{2d_2}\left[(x_1-x_{main})^2+(y_1-y_{main})^2\right]\right\}dx_{main}\,dy_{main}. \)

(11) \( U(x,y)=\iint_{-\infty}^{+\infty} U(x_1,y_1)\,h_2(x,y,x_1,y_1)\,dx_1\,dy_1. \)

(12) \( h_2(x,y,x_1,y_1)=\iint_{-\infty}^{+\infty} h_{21}(x_{micro},y_{micro},x_1,y_1)\,h_{22}(x,y,x_{micro},y_{micro})\,dx_{micro}\,dy_{micro}. \)

(13) \( h_{21}(x_{micro},y_{micro},x_1,y_1)=\frac{\exp(ikd_{3.1})}{i\lambda d_{3.1}}\exp\left[\frac{ik}{2d_{3.1}}(x_{micro}^2+y_{micro}^2)\right]\exp\left[\frac{ik}{2d_{3.1}}(x_1^2+y_1^2)\right]\exp\left[-\frac{ik}{d_{3.1}}(x_1x_{micro}+y_1y_{micro})\right]=\frac{\exp(ikd_{3.1})}{i\lambda d_{3.1}}\exp\left\{\frac{ik}{2d_{3.1}}\left[(x_{micro}-x_1)^2+(y_{micro}-y_1)^2\right]\right\}. \)

(14) \( h_{22}(x,y,x_{micro},y_{micro})=\frac{\exp(ikd_{3.2})}{i\lambda d_{3.2}}\,t_{micro}(x_{micro},y_{micro})\exp\left\{\frac{ik}{2d_{3.2}}\left[(x-x_{micro})^2+(y-y_{micro})^2\right]\right\}. \)

(15) \( t_{micro}(x_{micro},y_{micro})=P_2(x_{micro},y_{micro})\exp\left[-\frac{ik}{2f_2}(x_{micro}^2+y_{micro}^2)\right]. \)

(16) \( h_2(x,y,x_1,y_1)=\iint_{-\infty}^{+\infty} h_{21}(x_{micro},y_{micro},x_1,y_1)\,h_{22}(x,y,x_{micro},y_{micro})\,dx_{micro}\,dy_{micro}=\frac{\exp[ik(d_{3.1}+d_{3.2})]}{-\lambda^2 d_{3.1} d_{3.2}}\iint_{-\infty}^{+\infty} t_{micro}(x_{micro},y_{micro})\exp\left\{\frac{ik}{2d_{3.1}}\left[(x_1-x_{micro})^2+(y_1-y_{micro})^2\right]\right\}\exp\left\{\frac{ik}{2d_{3.2}}\left[(x-x_{micro})^2+(y-y_{micro})^2\right]\right\}dx_{micro}\,dy_{micro}. \)

(17) \( h_{22}(x,y,x_{micro},y_{micro})=\frac{\exp(ikd_{3.2})}{i\lambda d_{3.2}}\sum_m\sum_n t_{micro}(x_{micro}-mD,\,y_{micro}-nD)\exp\left\{\frac{ik}{2d_{3.2}}\left[(x-x_{micro})^2+(y-y_{micro})^2\right]\right\}. \)

(18) \( h_2(x,y,x_1,y_1)=\iint_{-\infty}^{+\infty} h_{21}(x_{micro},y_{micro},x_1,y_1)\,h_{22}(x,y,x_{micro},y_{micro})\,dx_{micro}\,dy_{micro}=\frac{\exp[ik(d_{3.1}+d_{3.2})]}{-\lambda^2 d_{3.1} d_{3.2}}\sum_m\sum_n\iint_{-\infty}^{+\infty} t_{micro}(x_{micro}-mD,\,y_{micro}-nD)\exp\left\{\frac{ik}{2d_{3.1}}\left[(x_1-x_{micro})^2+(y_1-y_{micro})^2\right]\right\}\exp\left\{\frac{ik}{2d_{3.2}}\left[(x-x_{micro})^2+(y-y_{micro})^2\right]\right\}dx_{micro}\,dy_{micro}. \)

(19) \( U(x,y)=\iiiint_{-\infty}^{+\infty} U(x_0,y_0)\,h_1(x_1,y_1,x_0,y_0)\,h_2(x,y,x_1,y_1)\,dx_1\,dy_1\,dx_0\,dy_0. \)

(20) \( U(x,y)=\iint_{-\infty}^{+\infty} U(x_0,y_0)\,h(x,y,x_0,y_0)\,dx_0\,dy_0. \)

(21) \( h(x,y,x_0,y_0)=\iint_{-\infty}^{+\infty} h_1(x_1,y_1,x_0,y_0)\,h_2(x,y,x_1,y_1)\,dx_1\,dy_1=\frac{\exp[ik(d_1+d_2+d_{3.1}+d_{3.2})]}{\lambda^4 d_1 d_2 d_{3.1} d_{3.2}}\sum_m\sum_n\iint_{-\infty}^{+\infty}\left\{\iint_{-\infty}^{+\infty} t_{micro}(x_{micro}-mD,\,y_{micro}-nD)\exp\left\{\frac{ik}{2d_{3.1}}\left[(x_1-x_{micro})^2+(y_1-y_{micro})^2\right]\right\}\exp\left\{\frac{ik}{2d_{3.2}}\left[(x-x_{micro})^2+(y-y_{micro})^2\right]\right\}dx_{micro}\,dy_{micro}\times\iint_{-\infty}^{+\infty} t_{main}(x_{main},y_{main})\exp\left\{\frac{ik}{2d_1}\left[(x_0-x_{main})^2+(y_0-y_{main})^2\right]\right\}\exp\left\{\frac{ik}{2d_2}\left[(x_1-x_{main})^2+(y_1-y_{main})^2\right]\right\}dx_{main}\,dy_{main}\right\}dx_1\,dy_1. \)

(22) \( \mathrm{FSPSF}=\int \mathrm{PSF}\,dt. \)

(23) \( \mathrm{FSPSF}=\int \mathrm{PSF}(d_{3.2})\,dt. \)

(24) \( \mathrm{FSPSF}=\int \mathrm{PSF}(d_{3.2})\,d\!\left(\frac{d_{3.2}-d_0}{v}\right)=\frac{1}{v}\int \mathrm{PSF}(d_{3.2})\,dd_{3.2}. \)

(25) \( \mathrm{FSPSF}=\frac{1}{v}\int h(x,y,x_0,y_0)\,dd_{3.2}=\frac{1}{v}\int\frac{\exp[ik(d_1+d_2+d_{3.1}+d_{3.2})]}{\lambda^4 d_1 d_2 d_{3.1} d_{3.2}}\sum_m\sum_n\iint_{-\infty}^{+\infty}\left\{\iint_{-\infty}^{+\infty} t_{micro}(x_{micro}-mD,\,y_{micro}-nD)\exp\left\{\frac{ik}{2d_{3.1}}\left[(x_1-x_{micro})^2+(y_1-y_{micro})^2\right]\right\}\exp\left\{\frac{ik}{2d_{3.2}}\left[(x-x_{micro})^2+(y-y_{micro})^2\right]\right\}dx_{micro}\,dy_{micro}\times\iint_{-\infty}^{+\infty} t_{main}(x_{main},y_{main})\exp\left\{\frac{ik}{2d_1}\left[(x_0-x_{main})^2+(y_0-y_{main})^2\right]\right\}\exp\left\{\frac{ik}{2d_2}\left[(x_1-x_{main})^2+(y_1-y_{main})^2\right]\right\}dx_{main}\,dy_{main}\right\}dx_1\,dy_1\,dd_{3.2}. \)
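As an illustration of how the relay PSF of Eq. (21) can be evaluated numerically, the two sub-systems can be simulated as cascaded FFT-based Fresnel propagations with a thin-lens phase screen at the main lens and at a single on-axis microlens; the focal-sweep FSPSF then reduces to averaging the PSF over a sweep of sensor positions d3.2. This is a minimal sketch under assumed parameters (wavelength, focal lengths, apertures, and sampling grid are illustrative, not the values of Table 1) and it omits the microlens-array summation over (m, n):

```python
import numpy as np

def fresnel_propagate(u, wavelength, z, dx):
    """Free-space Fresnel propagation over distance z (transfer-function method)."""
    n = u.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel transfer function: exp(ikz) * exp(-i*pi*lambda*z*(fx^2 + fy^2))
    H = np.exp(1j * 2 * np.pi * z / wavelength) \
        * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u) * H)

def thin_lens(n, dx, f, radius, wavelength):
    """Thin-lens transmittance t = P(x,y) exp(-ik/(2f)(x^2 + y^2)) with a circular pupil P."""
    x = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(x, x)
    pupil = (X**2 + Y**2 <= radius**2).astype(float)
    k = 2 * np.pi / wavelength
    return pupil * np.exp(-1j * k / (2 * f) * (X**2 + Y**2))

def psf(d1, d2, d31, d32, f1, f2, wavelength=532e-9, n=512, dx=2e-6,
        r_main=0.4e-3, r_micro=0.2e-3):
    """|U|^2 at the sensor for an on-axis point source:
    object plane -> (d1) -> main lens -> (d2 + d3.1) -> microlens -> (d3.2) -> sensor."""
    u = np.zeros((n, n), dtype=complex)
    u[n // 2, n // 2] = 1.0                              # impulse (point source)
    u = fresnel_propagate(u, wavelength, d1, dx)          # to the main lens
    u *= thin_lens(n, dx, f1, r_main, wavelength)
    u = fresnel_propagate(u, wavelength, d2 + d31, dx)    # to the microlens
    u *= thin_lens(n, dx, f2, r_micro, wavelength)
    u = fresnel_propagate(u, wavelength, d32, dx)         # to the sensor
    return np.abs(u) ** 2

def fspsf(d32_values, **kw):
    """Focal-sweep PSF: average the PSF over a uniform sweep of sensor positions d3.2."""
    return np.mean([psf(d32=z, **kw) for z in d32_values], axis=0)
```

For instance, with a hypothetical f1 = 50 mm main lens at d1 = 90 mm (so d2 = 112.5 mm from Eq. (1)) and an f2 = 5 mm microlens at d3.1 = 10 mm, `fspsf(np.linspace(9e-3, 11e-3, 11), d1=0.09, d2=0.1125, d31=0.01, f1=0.05, f2=0.005)` produces the swept kernel. The grid spacing `dx` must be chosen fine enough for the Fresnel transfer function to be adequately sampled at the distances used.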
