Abstract

Recovering the real light field, including the light field intensity distributions and continuous volumetric data in the object space, is an attractive and important topic with the development of light-field imaging. In this paper, a blind light field reconstruction method is proposed to recover the intensity distributions and continuous volumetric data without the assistance of prior geometric information. The light field reconstruction problem is approximated as a summation of localized reconstructions based on image formation analysis. Blind volumetric information derivation is proposed based on backward image formation modeling to exploit the correspondence among the deconvolved results. Finally, the light field is blindly reconstructed via the proposed inverse image formation approximation and wave propagation. We demonstrate that the method can blindly recover the light field intensity with continuous volumetric data. It can be further extended to other light field imaging systems if the backward image formation model can be derived.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Recovering the real light field in the object space is an attractive and important topic with the development of light-field imaging. According to wave-optics models, the light field in the object space consists of the intensity distributions and continuous volumetric data. Reconstructing them using only the spatial and angular information recorded on the sensor is challenging, since the volumetric information is lost during acquisition and the spatial resolution of the acquired data is limited.

Existing light field recovery works mainly use the data acquired by plenoptic cameras, since they can record the direction of light rays in a single shot [1–3]. They reconstruct the light field by computationally synthesizing 3D focal stacks across the scene based on ray optics [3,4]. However, the volumetric information they provide is only a relative depth among the virtual focal planes, and the compromise between lateral and angular resolution causes resolution loss during reconstruction. Zhang et al. reconstructed 3D objects by moving a plenoptic camera around the object and updating the structure-from-motion method [5]. Although the point cloud can be reconstructed, multiple light-field images must be captured and registered, which is only applicable to static objects. S. Shroff et al. proposed wave-optics models to reconstruct the light field through point-spread-function (PSF) deconvolution [6–8]. This mitigates the resolution loss of the reconstructed light field; however, prior information, such as the distance of each object or, in the extreme case, of each object point, is needed. We proposed a light field reconstruction model to tackle scenarios in which imaging noise exists in the acquired data [9]. However, the exact distance of the object plane is still needed for reconstruction. C. Guo et al. extended the work to the microscopic scale and reconstructed 3D volumetric information; nevertheless, the geometric information of the scene is needed [10]. M. Broxton et al. used the Richardson-Lucy algorithm to recover the 3D scene [11]. However, their work cannot obtain the exact object distance, which makes it difficult to identify which object at which depth generates the reconstructed intensity. Also, they cannot reconstruct the light field for a specific object.

Therefore, in this paper, a blind light field reconstruction method is proposed to recover the intensity distributions and continuous volumetric data without the assistance of prior geometric information. Plenoptic camera 2.0 [12–14], which inserts a microlens array behind the image plane of the main lens for an improved spatial resolution of the acquired light field, is exploited to benefit from its distinct image response. By analyzing the image responses among neighboring microlenses, we propose to approximate the light field reconstruction problem as a summation of localized reconstructions. Based on this approximation, the blind light field reconstruction problem reduces to blindly deriving the distance correspondence from the reconstructions generated by the microlens images. Blind volumetric information derivation is proposed based on backward image formation modeling to exploit the correspondence among the deconvolved results. Finally, the light field is blindly reconstructed via the proposed inverse image formation approximation and wave propagation. We demonstrate that the method can blindly recover the light field intensity with continuous volumetric data. It can be further extended to other light field imaging systems if the backward image formation model can be derived.

The paper is organized as follows. The proposed light field reconstruction approximation is described in detail in Section 2. Section 3 describes the proposed blind light field reconstruction method. Section 4 provides experimental results to demonstrate the effectiveness of the proposed method. Section 5 concludes the paper.

2. Light field reconstruction approximation

2.1 Image formation of plenoptic camera 2.0 and light field reconstruction modeling

The optical configuration of plenoptic camera 2.0 is shown in Fig. 1.


Fig. 1 Optical structure of plenoptic camera 2.0.


As shown in the figure, a microlens array is inserted between the image plane of the main lens and the imaging sensor. Rays coming from an object on the focal plane (the green rays) pass through the main lens and focus on the image plane. Then, treating the light field on the image plane as a new object, the microlens array reimages it onto the sensor. Thus, by dividing the relay imaging system into several sub-imaging systems, our previous work modeled the image formation process of plenoptic camera 2.0 for a point light source placed at $(x_0, y_0)$ at depth $d_1$ using wave optics as [9,15]

$$
\begin{aligned}
h(x,y,x_0,y_0) = {} & \frac{\exp[ik(d_1+d_2+d_3+l)]}{\lambda^4 d_1 d_2 d_3 l} \sum_m \sum_n \int_{-\infty}^{+\infty}\!\!\!\int t_{\mathrm{micro}}(x_m - mD,\, y_m - nD) \\
& \times \exp\left\{\frac{ik}{2d_3}\left[(x_1 - x_m)^2 + (y_1 - y_m)^2\right]\right\} \times \exp\left\{\frac{ik}{2l}\left[(x - x_m)^2 + (y - y_m)^2\right]\right\}\mathrm{d}x_m\,\mathrm{d}y_m \\
& \times \iint t_{\mathrm{main}}(x_{\mathrm{main}}, y_{\mathrm{main}})\exp\left\{\frac{ik}{2d_1}\left[(x_0 - x_{\mathrm{main}})^2 + (y_0 - y_{\mathrm{main}})^2\right]\right\} \\
& \times \exp\left\{\frac{ik}{2d_2}\left[(x_1 - x_{\mathrm{main}})^2 + (y_1 - y_{\mathrm{main}})^2\right]\right\}\mathrm{d}x_{\mathrm{main}}\,\mathrm{d}y_{\mathrm{main}}\,\mathrm{d}x_1\,\mathrm{d}y_1,
\end{aligned}
\tag{1}
$$
where $(x_0, y_0)$, $(x_{\mathrm{main}}, y_{\mathrm{main}})$, $(x_1, y_1)$, $(x_m, y_m)$, and $(x, y)$ are the coordinates of a point on the object plane, the main lens plane, the image plane of the main lens, the microlens array plane, and the sensor plane, respectively; $d_1$ and $d_2$ are the distances between the object and the main lens and between the main lens and the image plane, respectively; $d_3$ and $l$ are the distances between the image plane and the microlens array and between the microlens array and the sensor plane, respectively; $\lambda$ is the wavelength of the light; $k$ is the wave number, equal to $2\pi/\lambda$; and $t_{\mathrm{main}}(x_{\mathrm{main}}, y_{\mathrm{main}})$ and $t_{\mathrm{micro}}(x_m, y_m)$ are the phase correction factors of the main lens and of a single microlens, respectively. $t_{\mathrm{main}}(x_{\mathrm{main}}, y_{\mathrm{main}})$ represents the optical characteristic of the main lens, which is given by [15]:
$$t_{\mathrm{main}}(x_{\mathrm{main}}, y_{\mathrm{main}}) = P_1(x_{\mathrm{main}}, y_{\mathrm{main}})\exp\left[\frac{ik}{2f_1}\left(x_{\mathrm{main}}^2 + y_{\mathrm{main}}^2\right)\right], \tag{2}$$
where $P_1(x_{\mathrm{main}}, y_{\mathrm{main}})$ is the pupil function of the main lens and $f_1$ is the focal length of the main lens. $t_{\mathrm{micro}}(x_m, y_m)$ represents the optical characteristic of a single microlens, which is given by [15]:
$$t_{\mathrm{micro}}(x_m, y_m) = P_2(x_m, y_m)\exp\left[\frac{ik}{2f_2}\left(x_m^2 + y_m^2\right)\right], \tag{3}$$
where $P_2(x_m, y_m)$ is the pupil function of the microlens and $f_2$ is the focal length of a single microlens.

$h(x, y, x_0, y_0)$ describes the imaging response of a point light source and is called the PSF of the imaging system. Treating a real imaging target as a set of point light sources, a pixel on the sensor actually records the summation of the imaging responses from all the light sources. Since only the intensity value is recorded, the intensity of a pixel on the sensor, $I(x, y)$, can be formulated as:

$$I(x,y) = \iint I(x_0, y_0)\left|h(x,y,x_0,y_0)\right|^2 \mathrm{d}x_0\,\mathrm{d}y_0, \tag{4}$$
where $I(x_0, y_0)$ is the intensity of point $(x_0, y_0)$. Thus, a linear forward imaging model can be established [4–7,16,17] as:
$$I_s^{d_{1n}} = H^{d_{1n}} I_0^{d_{1n}}. \tag{5}$$
$I_0^{d_{1n}}$, a $(P_0 \times Q_0, 1)$ vector, represents an object consisting of $P_0 \times Q_0$ point light sources on the object plane $d_1 = d_{1n}$ away from the main lens. Each entry in $I_0^{d_{1n}}$ represents the intensity of a point light source. Similarly, $I_s^{d_{1n}}$ is a $(P_s \times Q_s, 1)$ vector corresponding to the $P_s \times Q_s$ sensor data generated by the point sources at depth $d_{1n}$. $H^{d_{1n}}$ is the $(P_s \times Q_s, P_0 \times Q_0)$ system transmission matrix at object distance $d_{1n}$. It is organized as:
$$\begin{bmatrix} I_s^{d_{1n}}(1,1) \\ I_s^{d_{1n}}(1,2) \\ \vdots \\ I_s^{d_{1n}}(2,1) \\ I_s^{d_{1n}}(2,2) \\ \vdots \\ I_s^{d_{1n}}(P_s,Q_s) \end{bmatrix} = \begin{bmatrix} H^{d_{1n}}(1,1,1,1) & H^{d_{1n}}(1,1,1,2) & \cdots & H^{d_{1n}}(1,1,P_0,Q_0) \\ H^{d_{1n}}(1,2,1,1) & H^{d_{1n}}(1,2,1,2) & \cdots & H^{d_{1n}}(1,2,P_0,Q_0) \\ H^{d_{1n}}(2,1,1,1) & H^{d_{1n}}(2,1,1,2) & \cdots & H^{d_{1n}}(2,1,P_0,Q_0) \\ H^{d_{1n}}(2,2,1,1) & H^{d_{1n}}(2,2,1,2) & \cdots & H^{d_{1n}}(2,2,P_0,Q_0) \\ \vdots & \vdots & & \vdots \\ H^{d_{1n}}(P_s,Q_s,1,1) & H^{d_{1n}}(P_s,Q_s,1,2) & \cdots & H^{d_{1n}}(P_s,Q_s,P_0,Q_0) \end{bmatrix}\begin{bmatrix} I_0^{d_{1n}}(1,1) \\ I_0^{d_{1n}}(1,2) \\ I_0^{d_{1n}}(2,1) \\ I_0^{d_{1n}}(2,2) \\ \vdots \\ I_0^{d_{1n}}(P_0,Q_0) \end{bmatrix}, \tag{6}$$
where $H^{d_{1n}}(x,y,x_0,y_0)$ equals $|h(x,y,x_0,y_0)|^2$. Column $i$ of $H^{d_{1n}}$ describes how the light rays coming from the $i$-th object point contribute to all the pixels on the sensor, while row $j$ of $H^{d_{1n}}$ describes how the object space points contribute to the $j$-th pixel on the sensor.
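To make the structure of Eq. (6) concrete, the following minimal sketch assembles $H^{d_{1n}}$ column by column from a squared-PSF routine. The routine `psf_intensity` is a hypothetical placeholder for a numerical evaluation of $|h(x,y,x_0,y_0)|^2$ in Eq. (1); all other names are illustrative.

```python
import numpy as np

def build_transmission_matrix(psf_intensity, obj_points, sensor_shape, d1n):
    """Assemble the system transmission matrix H^{d1n} column by column.

    psf_intensity : callable(x0, y0, d1n) -> 2-D array of shape sensor_shape,
        a hypothetical numerical evaluation of |h(x, y, x0, y0)|^2 in Eq. (1).
    obj_points    : list of (x0, y0) object-plane coordinates at depth d1n.
    sensor_shape  : (Ps, Qs) sensor resolution.
    """
    Ps, Qs = sensor_shape
    H = np.zeros((Ps * Qs, len(obj_points)))
    for col, (x0, y0) in enumerate(obj_points):
        # Column i of H: contribution of the i-th object point to every sensor pixel.
        H[:, col] = psf_intensity(x0, y0, d1n).ravel()
    return H
```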

So, to recover the light field intensity in the object space, $I_0^{d_{1n}}$ in Eq. (5), the inverse problem of Eq. (5) can be formulated by Tikhonov regularization [18,19], considering the existence of imaging noise and that $H^{d_{1n}}$ may be singular or ill-conditioned [9]. It is given by:

$$I_0^{d_{1n}} = \arg\min\left\|H^{d_{1n}} I_0^{d_{1n}} - I_s^{d_{1n}}\right\|_2^2 + \tau\left\|I_0^{d_{1n}}\right\|_2^2, \tag{7}$$
where $\tau$ is a regularization parameter. If $\tau > 0$, a unique solution always exists and $I_0^{d_{1n}}$ is bounded. Besides the intensity smoothness prior, other priors can also be used to model the inverse problem of Eq. (5), which is outside the scope of this paper.
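Since the cited solvers [18,19] are damped least-squares methods, Eq. (7) can be solved, for example, with SciPy's damped LSQR, which minimizes $\|Hx-b\|_2^2 + \mathrm{damp}^2\|x\|_2^2$; setting $\mathrm{damp} = \sqrt{\tau}$ matches Eq. (7). The snippet below is a minimal sketch under that assumption; function and variable names are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def tikhonov_reconstruct(H, I_s, tau):
    """Solve Eq. (7): argmin ||H I0 - I_s||_2^2 + tau ||I0||_2^2.

    lsqr's damped formulation minimizes ||H x - b||^2 + damp^2 ||x||^2,
    so the damping factor is sqrt(tau).
    """
    I0 = lsqr(H, np.asarray(I_s).ravel(), damp=np.sqrt(tau))[0]
    return I0
```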

If $d_{1n}$ is known, i.e., the prior geometric information of the imaging target is available, $I_0^{d_{1n}}$ can be recovered with high accuracy by deriving $H^{d_{1n}}$. However, such prior information is generally unavailable, which makes blind light field reconstruction challenging.

2.2 The proposed light field reconstruction approximation

Considering that the exploitable information is limited to the sensor data and the optical configuration of the imaging system, we propose to approximate the light field reconstruction problem by exploiting the optical structure of plenoptic camera 2.0. Referring to the system structure shown in Fig. 1, the image on the sensor can also be treated as the summation of the imaging responses of all the microlenses. So, the image formation process of plenoptic camera 2.0 with $M \times N$ microlenses can be reformulated as [14]:

$$I_s^{d_{1n}} = \sum_{\mathbf{m}=(1,1)}^{(M,N)} I_s^{\mathbf{m},d_{1n}} = \sum_{\mathbf{m}=(1,1)}^{(M,N)} H^{\mathbf{m},d_{1n}} I_0^{d_{1n}}, \qquad \mathbf{m} = (m_x, m_y),\; m_x \in [1,M],\; m_y \in [1,N], \tag{8}$$
where $I_s^{\mathbf{m},d_{1n}}$ is the sensor response generated by the $(m_x, m_y)$-th microlens and $H^{\mathbf{m},d_{1n}}$ is the $(m_x, m_y)$-th microlens's system transmission matrix at object distance $d_{1n}$. Theoretically, the dimension of $I_s^{\mathbf{m},d_{1n}}$ is the same as that of $I_s^{d_{1n}}$, and the dimension of $H^{\mathbf{m},d_{1n}}$ is the same as that of $H^{d_{1n}}$. However, storing $M \times N$ transmission matrices is expensive, and retrieving $I_s^{\mathbf{m},d_{1n}}$ is impractical for a real image.

To simplify this, the image formation process is further analyzed using ray optics to discover the ray contributions on the sensor. For a point light source at $(x_0, y_0)$ in the object space, as shown in Fig. 1, rays coming from it pass through the main lens and converge at $(x_1, y_1)$. $(x_1, y_1)$ and $(x_0, y_0)$ satisfy:

$$x_1 = -\frac{d_2}{d_1}x_0, \qquad y_1 = -\frac{d_2}{d_1}y_0. \tag{9}$$
$d_1$ and $d_2$ satisfy the Gaussian lens equation:

$$\frac{1}{d_1} + \frac{1}{d_2} = \frac{1}{f_1}. \tag{10}$$
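As a quick numerical check with the configuration adopted later in Section 4 ($f_1$ = 40 mm, an object on the system focal plane at $d_1$ = 65 mm), Eq. (10) gives

$$\frac{1}{d_2} = \frac{1}{40} - \frac{1}{65} = \frac{25}{2600}\ \mathrm{mm}^{-1} \quad\Rightarrow\quad d_2 = 104\ \mathrm{mm},$$

so with the main-lens-to-microlens-array spacing of 122.49 mm used in Section 4, the intermediate image forms in front of the microlens array, which is the case analyzed first below.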

Then, treating the point at $(x_1, y_1)$ as a new object, the microlenses within the imaging range reimage it. The rays passing through the edge of the main lens, Ray1 and Ray2 in Fig. 1, determine the imaging range on the microlens array. Using the vertical direction as an example, the coordinates of the microlenses within the imaging range can be derived as follows. For Ray1, the vertical coordinate of its intersection with the microlens array plane is $y_{m1}$, given by:

$$y_{m1} = \frac{y_1 - R}{d_2}L + R, \tag{11}$$
where $R$ is the radius of the main lens and $L$ is the distance between the main lens plane and the microlens array plane. Similarly, the vertical coordinate of the intersection of Ray2 in Fig. 1 with the microlens array plane is $y_{m2}$, which equals:

$$y_{m2} = \frac{y_1 + R}{d_2}L - R. \tag{12}$$

When $L$ is larger than $d_2$, which corresponds to imaging objects whose rays converge before the microlens array, the focused image $y_1$ lies between the main lens and the microlens array, and $y_{m1}$ is vertically below $y_{m2}$. The vertical index of a microlens within the imaging range, $m_y \in [1, N]$, satisfies:

$$2m_y r + r \ge y_{m1}, \qquad 2m_y r - r \le y_{m2}, \tag{13}$$
where $r$ is the radius of a microlens. Based on the above equations, the microlens index $m_y$ within the imaging range satisfies:
$$\frac{y_1 - R}{d_2}L + R - r \le 2m_y r \le \frac{y_1 + R}{d_2}L - (R - r). \tag{14}$$
Substituting Eqs. (9) and (10) into it, we have:

$$\left(-\frac{y_0}{d_1} - \frac{R(d_1 - f_1)}{d_1 f_1}\right)L + R - r \le 2m_y r \le \left(-\frac{y_0}{d_1} + \frac{R(d_1 - f_1)}{d_1 f_1}\right)L - R + r. \tag{15}$$

When $L$ is smaller than $d_2$, which corresponds to imaging objects whose rays converge behind the microlens array, the focused image $y_1$ lies behind the microlens array, and $y_{m1}$ is vertically above $y_{m2}$. Similarly, $m_y$ within the imaging range satisfies:

$$\left(-\frac{y_0}{d_1} + \frac{R(d_1 - f_1)}{d_1 f_1}\right)L - R + r \le 2m_y r \le \left(-\frac{y_0}{d_1} - \frac{R(d_1 - f_1)}{d_1 f_1}\right)L + R - r. \tag{16}$$

The above derivation can be performed equally for the $x$ dimension. Combining Eqs. (15) and (16), it is found that for a specific object point $(x_0, y_0)$, no matter whether it is focused or defocused, only some microlenses $(m_x, m_y)$, together with the pixels under them, record its information.
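As an illustration of this observation, the following sketch enumerates the vertical microlens indices that satisfy Eq. (15) for a given object point in the $L > d_2$ case; it assumes the reconstructed form of Eq. (15) above, and all names are illustrative.

```python
import numpy as np

def microlenses_seeing_point(y0, d1, f1, R, r, L, N):
    """Vertical microlens indices m_y in [1, N] satisfying Eq. (15) (case L > d2)."""
    # Lower and upper bounds on 2*m_y*r from Eq. (15).
    lower = (-y0 / d1 - R * (d1 - f1) / (d1 * f1)) * L + R - r
    upper = (-y0 / d1 + R * (d1 - f1) / (d1 * f1)) * L - R + r
    m_y = np.arange(1, N + 1)
    mask = (2 * m_y * r >= lower) & (2 * m_y * r <= upper)
    return m_y[mask]
```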

Further analyzing the imaging response of $(x_0, y_0)$ on the sensor, the convergence point $(x_2, y_2)$ generated behind a microlens $(m_x, m_y)$ is given by:

$$x_2 = -\frac{d_4}{L - d_2}(x_1 - m_x D) + m_x D, \qquad y_2 = -\frac{d_4}{L - d_2}(y_1 - m_y D) + m_y D, \tag{17}$$
where $D$ is the pitch of a microlens, equal to $2r$. $d_4$ and $(L - d_2)$ satisfy the Gaussian lens equation:

$$\frac{1}{L - d_2} + \frac{1}{d_4} = \frac{1}{f_2}. \tag{18}$$
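Continuing the numerical check with the Section 4 configuration ($f_2$ = 4 mm, $L$ = 122.49 mm, and $d_2$ = 104 mm for the 65 mm focal plane), Eq. (18) gives

$$L - d_2 = 18.49\ \mathrm{mm}, \qquad \frac{1}{d_4} = \frac{1}{4} - \frac{1}{18.49}\ \mathrm{mm}^{-1} \quad\Rightarrow\quad d_4 \approx 5.104\ \mathrm{mm},$$

which coincides with the microlens-to-sensor spacing $l$ = 5.104 mm used in Section 4, i.e., points on the system focal plane are refocused exactly onto the sensor, consistent with the single-pixel response of the green rays in Fig. 1.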

After the rays converge at $(x_2, y_2)$, if the object point is on the focal plane of the main lens (the green rays in Fig. 1), the imaging result behind microlens $(m_x, m_y)$ is a single pixel $(x, y)$. If the object point is not on the focal plane (the red rays in Fig. 1), the rays propagate from $(x_2, y_2)$ to the sensor, resulting in a bright disk on the sensor. Since the image formation properties of the disk center and the surrounding points are similar, we use the disk center $(x, y)$ in the following derivations for its simplicity in mathematical expression. The center of the disk is:

$$x = -\frac{l}{L - d_2}(x_1 - m_x D) + m_x D, \qquad y = -\frac{l}{L - d_2}(y_1 - m_y D) + m_y D. \tag{19}$$
Substituting Eqs. (9) and (10) into Eq. (19), we have:
$$x = \frac{l d_2}{(L - d_2) d_1}\left(x_0 + m_x D\left(\frac{(L - d_2) d_1}{l d_2} + \frac{d_1}{d_2}\right)\right), \qquad y = \frac{l d_2}{(L - d_2) d_1}\left(y_0 + m_y D\left(\frac{(L - d_2) d_1}{l d_2} + \frac{d_1}{d_2}\right)\right), \tag{20}$$
which indicates that for a specific pixel $(x, y)$ on the sensor, only a small group of object points $(x_0, y_0)$ and a few microlenses $(m_x, m_y)$ contribute to it.

Thus, combining the two observations with the design constraint described in [3] that the image-side f-number must match the microlens f-number, which prevents the microlens images from overlapping and maximizes the illuminated area behind each microlens, we propose to approximate the image formation process of plenoptic camera 2.0 with $M \times N$ microlenses by the summation of the localized responses of the microlenses as:

$$I_s^{d_{1n}} = \sum_{\mathbf{m}=(1,1)}^{(M,N)} I_s^{\mathbf{m},d_{1n}} = \sum_{\mathbf{m}=(1,1)}^{(M,N)} H^{\mathbf{m},d_{1n}} I_0^{\mathbf{m},d_{1n}}. \tag{21}$$
Here, $I_s^{\mathbf{m},d_{1n}}$ is the image under the $\mathbf{m}$-th microlens, which is spatially cropped from $I_s^{d_{1n}}$, and $I_0^{\mathbf{m},d_{1n}}$ represents the point light sources that contribute to the $\mathbf{m}$-th microlens. $H^{\mathbf{m},d_{1n}}$ is the $\mathbf{m}$-th microlens's system transmission matrix, approximated by keeping the rows of $H^{d_{1n}}$ that correspond to the pixels in $I_s^{\mathbf{m},d_{1n}}$ unchanged while setting the other rows to zero, based on the observation above that only a small group of object points $(x_0, y_0)$ and a few microlenses $(m_x, m_y)$ contribute to specific pixels on the sensor. Considering that $I_0^{d_{1n}}$ consists of point light sources from multiple objects at depth $d_{1n}$ and that a group of object points only contributes to a specific set of pixels on the sensor, we further simplify Eq. (21) to:
$$I_s^{d_{1n}} = \sum_{\mathbf{m}=(1,1)}^{(M,N)} I_s^{\mathbf{m},d_{1n}} = \sum_{\mathbf{m}=(1,1)}^{(M,N)} \sum_{k=1}^{O} I_{s,\Omega_k}^{\mathbf{m},d_{1n}} = \sum_{\mathbf{m}=(1,1)}^{(M,N)} H^{\mathbf{m},d_{1n}} \sum_{k=1}^{O} I_{0,\Omega_k}^{\mathbf{m},d_{1n}}, \tag{22}$$
where $\Omega_k$ is the point light source set of a non-overlapping object at depth $d_{1n}$, $O$ is the total number of objects at $d_{1n}$, $I_{0,\Omega_k}^{\mathbf{m},d_{1n}}$ is the intensity of the point sources in $\Omega_k$ that contribute to the $\mathbf{m}$-th microlens, and $I_{s,\Omega_k}^{\mathbf{m},d_{1n}}$ is the image corresponding to $\Omega_k$ under microlens $\mathbf{m}$.

Since exchanging the order of summation does not affect the result in Eq. (22), recovering the light field intensity in the object space, $I_0^{d_{1n}}$ in Eq. (7), is finally approximated by:

$$I_0^{d_{1n}} = \sum_{k=1}^{O} \sum_{\mathbf{m}=(1,1)}^{(M,N)} \arg\min\left\|H^{\mathbf{m},d_{1n}} I_{0,\Omega_k}^{\mathbf{m},d_{1n}} - I_{s,\Omega_k}^{\mathbf{m},d_{1n}}\right\|_2^2 + \tau\left\|I_{0,\Omega_k}^{\mathbf{m},d_{1n}}\right\|_2^2. \tag{23}$$
Based on this approximation, the blind light field reconstruction problem reduces to deriving the distance correspondence from the reconstructions generated by the microlens images. We propose to solve this by backward image formation modeling, which is described in detail in the next section.

3. Blind light field reconstruction

3.1 Backward image formation modeling and blind volumetric information derivation

To blindly derive $d_{1n}$ for a correct reconstruction, the backward image formation process is analyzed to estimate $d_{1n}$ from the spatial correspondence among the reconstructions generated at a series of candidate depths using multiple microlens images.

Substituting Eq. (10) into Eq. (20) and generalizing $(x, y)$ to $(x_{d_{1n}}^{m_x}, y_{d_{1n}}^{m_y})$, a pixel under microlens $\mathbf{m} = (m_x, m_y)$ corresponding to a point light source $(x_{0,\Omega_k}^{d_{1n},m_x}, y_{0,\Omega_k}^{d_{1n},m_y})$ in $\Omega_k$ whose intensity is an element of $I_{0,\Omega_k}^{\mathbf{m},d_{1n}}$, we can express the relationship between $(x_{d_{1n}}^{m_x}, y_{d_{1n}}^{m_y})$ and $(x_{0,\Omega_k}^{d_{1n},m_x}, y_{0,\Omega_k}^{d_{1n},m_y})$ as a function of $d_{1n}$. Using the horizontal direction as an example, we have:

$$x_{d_{1n}}^{m_x} = \frac{l(d_{1n} - f_1)}{L(d_{1n} - f_1) - d_{1n} f_1}\left(\frac{f_1}{d_{1n} - f_1}\, x_{0,\Omega_k}^{d_{1n},m_x} + m_x D\right) + m_x D, \tag{24}$$

$$x_{0,\Omega_k}^{d_{1n},m_x} = \frac{d_{1n} - f_1}{f_1}\left(\frac{L(d_{1n} - f_1) - d_{1n} f_1}{l(d_{1n} - f_1)}\left(x_{d_{1n}}^{m_x} - m_x D\right) - m_x D\right). \tag{25}$$
Since $d_{1n}$ is unknown, a candidate depth $d_{1n}'$ that differs from $d_{1n}$ may be assigned in Eq. (25) during recovery. Meanwhile, $x_{d_{1n}}^{m_x}$ in Eq. (24) is constant because $x_{0,\Omega_k}^{d_{1n},m_x}$ and $d_{1n}$ are fixed. Thus, substituting Eq. (24) into Eq. (25), the inverse projection at $d_{1n}'$ is related to that at the real distance $d_{1n}$ by:
$$x_{0,\Omega_k}^{d_{1n}',m_x} = \frac{L(d_{1n}' - f_1) - d_{1n}' f_1}{L(d_{1n} - f_1) - d_{1n} f_1}\, x_{0,\Omega_k}^{d_{1n},m_x} + \left(\frac{(d_{1n} - f_1)\left[L(d_{1n}' - f_1) - d_{1n}' f_1\right]}{f_1\left[L(d_{1n} - f_1) - d_{1n} f_1\right]} - \frac{d_{1n}' - f_1}{f_1}\right) m_x D. \tag{26}$$
It can be found that as $d_{1n}'$ approaches $d_{1n}$ from below or above, the reconstructed $x_{0,\Omega_k}^{d_{1n}',m_x}$ gradually moves closer to the correct position $x_{0,\Omega_k}^{d_{1n},m_x}$. The distance between $x_{0,\Omega_k}^{d_{1n}',m_x}$ and $x_{0,\Omega_k}^{d_{1n},m_x}$ changes linearly with $d_{1n}'$, and it equals zero only when $d_{1n}'$ equals $d_{1n}$.

If $(x_{0,\Omega_k}^{d_{1n},m_x}, y_{0,\Omega_k}^{d_{1n},m_y})$ also contributes to pixels under other microlenses, like the red point source in Fig. 1, it can be reconstructed from the pixels of different microlenses. Thus, the distance between $x_{0,\Omega_k}^{d_{1n}',m_{x1}}$ and $x_{0,\Omega_k}^{d_{1n}',m_{x2}}$, recovered from the pixels under microlenses $m_{x1}$ and $m_{x2}$, respectively, using a $d_{1n}'$ different from $d_{1n}$, is given by:

$$x_{0,\Omega_k}^{d_{1n}',m_{x1}} - x_{0,\Omega_k}^{d_{1n}',m_{x2}} = \left(\frac{(d_{1n} - f_1)\left[L(d_{1n}' - f_1) - d_{1n}' f_1\right]}{f_1\left[L(d_{1n} - f_1) - d_{1n} f_1\right]} - \frac{d_{1n}' - f_1}{f_1}\right) D\,(m_{x1} - m_{x2}), \tag{27}$$
which is also linear in $d_{1n}'$ and in the distance between microlenses $m_{x1}$ and $m_{x2}$. When $d_{1n}'$ differs from $d_{1n}$, the reconstructed object points $x_{0,\Omega_k}^{d_{1n}',m_{x1}}$ and $x_{0,\Omega_k}^{d_{1n}',m_{x2}}$ are spatially separated, which appears visually as a ghosting effect in the reconstructed light field. The effect is eliminated only when $d_{1n}'$ equals $d_{1n}$, i.e., when the reconstructed points spatially coincide at the real object point. The derivation obviously generalizes to all the point light sources belonging to $\Omega_k$ at depth $d_{1n}$. So, blindly deriving $d_{1n}$ can be solved by detecting whether the object points reconstructed from the pixels under different microlenses are spatially coincident. The above derivations are equally valid for the vertical direction.
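A minimal numerical sketch of this ghosting behavior, assuming the reconstructed forms of Eqs. (24) and (25) above: an object point at the true depth is forward-projected to its pixel under two microlenses and then back-projected with a candidate depth; the separation between the two back-projections vanishes only at the true depth. The source position, microlens indices, and candidate depths below are illustrative; the optical parameters are those of Section 4.

```python
import numpy as np

def forward_pixel(x0, d, f1, L, l, mx, D):
    """Pixel coordinate under microlens mx for a source at true depth d, Eq. (24)."""
    a = l * (d - f1) / (L * (d - f1) - d * f1)
    return a * (f1 / (d - f1) * x0 + mx * D) + mx * D

def backward_point(x_pix, d_prime, f1, L, l, mx, D):
    """Back-projected object coordinate assuming candidate depth d_prime, Eq. (25)."""
    b = (L * (d_prime - f1) - d_prime * f1) / (l * (d_prime - f1))
    return (d_prime - f1) / f1 * (b * (x_pix - mx * D) - mx * D)

# Section 4 configuration (mm): f1 = 40, L = 122.49, l = 5.104, D = 2r = 0.32 (r = 160 um).
f1, L, l, D = 40.0, 122.49, 5.104, 0.32
x0, d_true = 0.5, 67.0                      # illustrative source position, depth of "S"
for d_prime in np.arange(62.0, 72.0, 1.0):  # candidate depths, as swept in Figs. 4 and 5
    rec = [backward_point(forward_pixel(x0, d_true, f1, L, l, mx, D),
                          d_prime, f1, L, l, mx, D) for mx in (2, 3)]
    print(d_prime, rec[0] - rec[1])         # separation is zero only at d_prime = 67
```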

Since $(x_{0,\Omega_k}^{d_{1n}',m_{x1}}, y_{0,\Omega_k}^{d_{1n}',m_{y1}})$ and $(x_{0,\Omega_k}^{d_{1n}',m_{x2}}, y_{0,\Omega_k}^{d_{1n}',m_{y2}})$ indicate the entries of the reconstructed pixel intensities in the images $I_{0,\Omega_k}^{\mathbf{m}_1,d_{1n}'}$ and $I_{0,\Omega_k}^{\mathbf{m}_2,d_{1n}'}$, we propose to detect whether $(x_{0,\Omega_k}^{d_{1n}',m_{x1}}, y_{0,\Omega_k}^{d_{1n}',m_{y1}})$ is spatially coincident with $(x_{0,\Omega_k}^{d_{1n}',m_{x2}}, y_{0,\Omega_k}^{d_{1n}',m_{y2}})$ by evaluating the similarity between the corresponding reconstructed intensity images $I_{0,\Omega_k}^{\mathbf{m}_1,d_{1n}'}$ and $I_{0,\Omega_k}^{\mathbf{m}_2,d_{1n}'}$. When $d_{1n}'$ differs from $d_{1n}$, the intensity of the point located at $(x_{0,\Omega_k}^{d_{1n}',m_{x1}}, y_{0,\Omega_k}^{d_{1n}',m_{y1}})$ in $I_{0,\Omega_k}^{\mathbf{m}_1,d_{1n}'}$ differs from the intensity of the collocated point in $I_{0,\Omega_k}^{\mathbf{m}_2,d_{1n}'}$. So, if the pixel-wise intensity difference between $I_{0,\Omega_k}^{\mathbf{m}_1,d_{1n}'}$ and $I_{0,\Omega_k}^{\mathbf{m}_2,d_{1n}'}$ is calculated, the difference decreases as the difference between $d_{1n}'$ and $d_{1n}$ decreases. It reaches its minimum when $I_{0,\Omega_k}^{\mathbf{m}_1,d_{1n}'}$ is exactly the same as $I_{0,\Omega_k}^{\mathbf{m}_2,d_{1n}'}$, which corresponds to $(x_{0,\Omega_k}^{d_{1n}',m_{x1}}, y_{0,\Omega_k}^{d_{1n}',m_{y1}})$ being spatially coincident with $(x_{0,\Omega_k}^{d_{1n}',m_{x2}}, y_{0,\Omega_k}^{d_{1n}',m_{y2}})$. Generalizing the process to both the horizontal and vertical directions, $d_{1n}$ can be blindly derived by:

$$d_{1n} = \arg\min_{d_{1n}'} \mathrm{Dis}\!\left(I_{0,\Omega_k}^{\mathbf{m}_1,d_{1n}'},\, I_{0,\Omega_k}^{\mathbf{m}_2,d_{1n}'}\right), \tag{28}$$
where $\mathrm{Dis}(A, B)$ is a function evaluating the spatial similarity between signals $A$ and $B$. In the image processing area, there are several methods for evaluating the spatial similarity between two images. In this paper, we use the Euclidean distance between $I_{0,\Omega_k}^{\mathbf{m}_1,d_{1n}'}$ and $I_{0,\Omega_k}^{\mathbf{m}_2,d_{1n}'}$ as the evaluation method, considering its low complexity and sufficient accuracy in identification.

So, for the point sources in $\Omega_k$ at depth $d_{1n}$, we segment their image under microlens $\mathbf{m}_1$ as $I_{s,\Omega_k}^{\mathbf{m}_1,d_{1n}}$ in Eq. (23) and use a series of $H^{\mathbf{m}_1,d_{1n}'}$ at candidate depths $d_{1n}'$ to reconstruct a series of $I_{0,\Omega_k}^{\mathbf{m}_1,d_{1n}'}$ by:

$$I_{0,\Omega_k}^{\mathbf{m}_1,d_{1n}'} = \arg\min\left\|H^{\mathbf{m}_1,d_{1n}'} I_{0,\Omega_k}^{\mathbf{m}_1,d_{1n}'} - I_{s,\Omega_k}^{\mathbf{m}_1,d_{1n}}\right\|_2^2 + \tau\left\|I_{0,\Omega_k}^{\mathbf{m}_1,d_{1n}'}\right\|_2^2. \tag{29}$$
Performing this process equally on the segmented image under microlens $\mathbf{m}_2$, $I_{s,\Omega_k}^{\mathbf{m}_2,d_{1n}}$, a series of $I_{0,\Omega_k}^{\mathbf{m}_2,d_{1n}'}$ can be reconstructed. Feeding the $I_{0,\Omega_k}^{\mathbf{m}_1,d_{1n}'}$ and $I_{0,\Omega_k}^{\mathbf{m}_2,d_{1n}'}$ series into Eq. (28), $d_{1n}$ is derived. Then, $I_0^{d_{1n}}$ can be directly reconstructed by adding together the reconstructions of each object under each microlens at $d_{1n}$, $I_{0,\Omega_k}^{\mathbf{m},d_{1n}}$, according to Eq. (23), since they have already been reconstructed while deriving $d_{1n}$. Although, theoretically, the images under all the microlenses need to go through this process for each object, it can be simplified by using a limited number of images, based on the discussion above that the rays from a specific object point only contribute to a limited number of pixels on the sensor. In the implementation, we use the two most complete images of each object from two microlenses, which greatly reduces the computational complexity while preserving the reconstruction quality.
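Putting Eqs. (28) and (29) together, the per-object depth search could look like the sketch below. It reuses the hypothetical `tikhonov_reconstruct` helper sketched in Section 2.1 and assumes that the localized transmission matrices have been precomputed for every candidate depth and that both reconstructions live on a common object-plane grid; all names are illustrative.

```python
import numpy as np

def blind_depth_search(Is_m1, Is_m2, H_m1, H_m2, candidate_depths, tau):
    """Blindly derive d_1n for one object (Eqs. (28) and (29)).

    Is_m1, Is_m2 : segmented sensor images of the object under microlenses m1 and m2.
    H_m1, H_m2   : dicts mapping a candidate depth to the localized transmission
                   matrix of the corresponding microlens at that depth.
    Returns the depth minimizing the Euclidean distance between the two
    reconstructions, together with the reconstructions at that depth.
    """
    best = None
    for d in candidate_depths:
        I0_m1 = tikhonov_reconstruct(H_m1[d], Is_m1, tau)   # Eq. (29) for m1
        I0_m2 = tikhonov_reconstruct(H_m2[d], Is_m2, tau)   # Eq. (29) for m2
        dis = np.linalg.norm(I0_m1 - I0_m2)                 # Dis(.,.) in Eq. (28)
        if best is None or dis < best[0]:
            best = (dis, d, I0_m1, I0_m2)
    _, d_est, I0_m1, I0_m2 = best
    return d_est, I0_m1, I0_m2
```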

3.2 Light field repropagation

In a real scenario where several imaging targets are located at different depths, i.e., different $d_{1n}$, the above processing can be applied iteratively to recover each $I_0^{d_{1n}}$. Since $I_0^{d_{1n}}$ only contains the light field intensity of the targets, i.e., the $\Omega_k$'s, located at depth $d_{1n}$, light field repropagation is required to obtain the additional light field at $d_{1n}$ generated by the light sources (imaging targets) at other depths. Using the same light propagation as exploited in deriving the imaging response of plenoptic camera 2.0 [14], the light field at a point $(x', y')$ on plane $d_{1m}$ generated by the light propagated from a light source $(x_0, y_0)$ in $\Omega_k$ at depth $d_{1n}$ equals:

$$
\begin{aligned}
U^{d_{1m},d_{1n}}(x',y') = {} & \frac{\exp\{ik|d_{1n} - d_{1m}|\}}{i\lambda|d_{1n} - d_{1m}|}\exp\left[\frac{ik}{2|d_{1n} - d_{1m}|}\left(x'^2 + y'^2\right)\right] \int_{-\infty}^{+\infty}\!\!\!\int I_{0,\Omega_k}^{d_{1n}}(x_0, y_0)\exp\{i\theta(x_0, y_0)\} \\
& \times \exp\left[\frac{ik}{2|d_{1n} - d_{1m}|}\left(x_0^2 + y_0^2\right)\right]\exp\left[-\frac{ik}{|d_{1n} - d_{1m}|}\left(x_0 x' + y_0 y'\right)\right]\mathrm{d}x_0\,\mathrm{d}y_0,
\end{aligned}
\tag{30}
$$
where $\theta(x_0, y_0)$ is the initial phase of the point light source at $(x_0, y_0)$ on $d_{1n}$, which is determined by the type of light source. Finally, the light field intensity at $d_{1m}$ equals:
$$\left|U^{d_{1m}}\right| = I_0^{d_{1m}} + \sum_{d_{1n} \neq d_{1m}} \left|U^{d_{1m},d_{1n}}\right|, \tag{31}$$
where $|\cdot|$ retrieves the intensity from the light field response.
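The following sketch directly discretizes Eq. (30) for a single query point $(x', y')$; it follows the equation as written above (intensity and initial phase entering the integrand), and the sampling grids, units, and names are the caller's responsibility and purely illustrative.

```python
import numpy as np

def repropagate(I0, theta0, xs, ys, d_src, d_dst, xq, yq, wavelength):
    """Field at (xq, yq) on plane d_dst from the sources on plane d_src, after Eq. (30).

    I0, theta0 : 2-D arrays of source intensity and initial phase sampled on xs x ys.
    xs, ys     : 1-D coordinate vectors of the source grid (same units as wavelength).
    """
    k = 2 * np.pi / wavelength
    dz = abs(d_src - d_dst)
    X0, Y0 = np.meshgrid(xs, ys, indexing="ij")
    dx, dy = xs[1] - xs[0], ys[1] - ys[0]
    pref = np.exp(1j * k * dz) / (1j * wavelength * dz) \
         * np.exp(1j * k / (2 * dz) * (xq**2 + yq**2))
    integrand = I0 * np.exp(1j * theta0) \
              * np.exp(1j * k / (2 * dz) * (X0**2 + Y0**2)) \
              * np.exp(-1j * k / dz * (X0 * xq + Y0 * yq))
    # Direct quadrature of the double integral in Eq. (30).
    return pref * integrand.sum() * dx * dy
```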

4. Experiments and results

The effectiveness of the proposed blind light field reconstruction method is demonstrated by testing on simulated sensor data. The plenoptic camera 2.0 system is simulated according to [14]; it consists of a main lens with $f_1$ = 40 mm and 4 mm radius, and a 3 × 3 microlens array with $f_2$ = 4 mm and 160 μm radius for each microlens. The focal plane of the whole system is set 65 mm before the main lens. $L$ and $l$ equal 122.49 mm and 5.104 mm, respectively. Three objects, “P,” “S,” and “F,” are placed at $d_{1n}$ = 65 mm, 67 mm, and 69 mm, respectively, as shown in Fig. 2(a), and the simulated sensor data is shown in Fig. 2(b).


Fig. 2 (a) Imaging targets “P,” “S,” and “F” are placed at 65mm, 67mm and 69mm, respectively; (b) the simulated sensor data using the plenoptic camera 2.0 with a main lens (f1 = 40mm and 4mm radius) and a 3 × 3 microlens array (f2 = 4mm and 160μm radius).


To extract the imaging results of the same object under a microlens from the sensor data, i.e., $I_{s,\Omega_k}^{\mathbf{m},d_{1n}}$ in Eq. (23), several image segmentation methods, such as graph cuts [20], can be exploited to distinguish the objects' responses on the sensor. In the experiments, we use the connected components analysis in [21] to label the 8-connected components in the image and segment out each region as the image of an object. The segmented regions are outlined in red in Fig. 3. Using the segmented images of “P” as examples, the regions outlined in red are magnified on the right of Fig. 3. According to Eq. (23), the segmented images of “P” under microlenses (1,1), (1,2), and (2,1) are denoted by $I_{s,\Omega_k}^{\mathbf{m}_1,d_{1n}}$, $I_{s,\Omega_k}^{\mathbf{m}_2,d_{1n}}$, and $I_{s,\Omega_k}^{\mathbf{m}_3,d_{1n}}$, respectively.
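A minimal sketch of this segmentation step using SciPy's connected-component labeling with an 8-connected (3 × 3) structuring element; the intensity threshold is an illustrative assumption.

```python
import numpy as np
from scipy import ndimage

def segment_objects(sensor_image, threshold):
    """Label 8-connected bright regions on the sensor, one per object per microlens."""
    mask = sensor_image > threshold
    # A full 3x3 structuring element gives 8-connectivity, matching the labeling in [21].
    labels, num = ndimage.label(mask, structure=np.ones((3, 3)))
    # Crop each labeled region, i.e., one candidate segmented image for Eq. (23).
    regions = [sensor_image[sl] * (labels[sl] == i + 1)
               for i, sl in enumerate(ndimage.find_objects(labels))]
    return regions, num
```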


Fig. 3 Imaging results under each microlens. Microlens coordinate is (mx, my). The region circled by dotted line corresponds to the imaging area of a microlens. The regions lined in red are the segmented imaging results “P”, “F” and “S” under each microlens. The segmented imaging results of “P” under microlens (1,1), (1,2) and (2,1) are magnified on the right.


4.1 Blind depth derivation verification

First, the correctness of Eq. (28), which derives the depth by evaluating the similarity between the reconstructed images, is verified by executing Eq. (29) for $I_{s,\Omega_k}^{\mathbf{m}_1,d_{1n}}$, $I_{s,\Omega_k}^{\mathbf{m}_2,d_{1n}}$, and $I_{s,\Omega_k}^{\mathbf{m}_3,d_{1n}}$ at a series of candidate depths $d_{1n}'$ and comparing the result of Eq. (28) with the real depth. Candidate depths $d_{1n}'$ from 62 mm to 71 mm with a 1 mm interval are used, and the reconstructed $I_{0,\Omega_k}^{\mathbf{m}_1,d_{1n}'}$, $I_{0,\Omega_k}^{\mathbf{m}_2,d_{1n}'}$, and $I_{0,\Omega_k}^{\mathbf{m}_3,d_{1n}'}$ are shown in Figs. 4(b)-4(k), respectively.


Fig. 4 (a) Segmented imaging results of “P” under microlenses (1,1), (1,2), and (2,1); (b) to (k) are the reconstructed intensity at candidate depths $d_{1n}'$ from 62 mm to 71 mm using the images in (a).


We use the Euclidean distance as the function $\mathrm{Dis}(A,B)$ in Eq. (28) because of its simplicity and sufficient accuracy. A smaller value corresponds to a smaller distance and higher similarity. The results for each pair of reconstructed images at each $d_{1n}'$ are listed in Table 1. It can be found that as $d_{1n}'$ increases from 62 mm to the real distance of 65 mm, the value of $\mathrm{Dis}(\cdot)$ decreases, which corresponds to the reconstructed images spatially moving closer to each other. The effect is consistent with that shown in Fig. 4 and the derivation in Eq. (27). Conversely, as $d_{1n}'$ increases from the real distance of 65 mm to 71 mm, $\mathrm{Dis}(\cdot)$ increases, which corresponds to the reconstructed images spatially moving apart from each other. $\mathrm{Dis}(\cdot)$ always reaches its minimum at 65 mm, the real distance at which “P” is placed, for all the pairs. This indicates that, according to Eq. (28), $d_{1n}$ = 65 mm is obtained no matter which pair of images is used as input.


Table 1. Spatial similarity measurement between the reconstructed intensity images at different depths

Similar processes are performed on the images of “S” and “F.” Since any pair of images under two microlenses can derive the real distance, we only show the results of “S” and “F” using the two most complete images of each object from two microlenses. Reconstructing “S” uses the segmented images under microlenses (2,2) and (3,2) as $I_{s,\Omega_k}^{\mathbf{m}_1,d_{1n}}$ and $I_{s,\Omega_k}^{\mathbf{m}_2,d_{1n}}$, respectively. Reconstructing “F” uses the segmented images under microlenses (2,3) and (3,3) as $I_{s,\Omega_k}^{\mathbf{m}_1,d_{1n}}$ and $I_{s,\Omega_k}^{\mathbf{m}_2,d_{1n}}$, respectively. The reconstructed intensity images are shown in Figs. 5(b)-5(k). To show the spatial disparity of the reconstructed results clearly, red is used to represent $I_{s,\Omega_k}^{\mathbf{m}_1,d_{1n}}$ and its reconstruction $I_{0,\Omega_k}^{\mathbf{m}_1,d_{1n}'}$, and green is used to highlight $I_{s,\Omega_k}^{\mathbf{m}_2,d_{1n}}$ and its reconstruction $I_{0,\Omega_k}^{\mathbf{m}_2,d_{1n}'}$. The similarities measured by $\mathrm{Dis}(\cdot)$ between $I_{0,\Omega_k}^{\mathbf{m}_1,d_{1n}'}$ and $I_{0,\Omega_k}^{\mathbf{m}_2,d_{1n}'}$ are listed in Table 2. From Fig. 5, it can be found that the spatial disparity between $I_{0,\Omega_k}^{\mathbf{m}_1,d_{1n}'}$ and $I_{0,\Omega_k}^{\mathbf{m}_2,d_{1n}'}$ reaches its minimum at 67 mm, the real depth of “S,” and at 69 mm, the real depth of “F,” for the respective objects. Combining the disparity results in Tables 1 and 2, the proposed blind volumetric information derivation method is demonstrated to be effective and accurate.


Fig. 5 (a) Segmented imaging results under microlenses (2,2) and (3,2) for object “S” and those under microlenses (2,3) and (3,3) for object “F,” respectively. (b) to (k) correspond to the reconstructed intensity at candidate depths $d_{1n}'$ from 62 mm to 71 mm using the images in (a). Images in red represent $I_{s,\Omega_k}^{\mathbf{m}_1,d_{1n}}$ and its reconstruction $I_{0,\Omega_k}^{\mathbf{m}_1,d_{1n}'}$; images in green represent $I_{s,\Omega_k}^{\mathbf{m}_2,d_{1n}}$ and its reconstruction $I_{0,\Omega_k}^{\mathbf{m}_2,d_{1n}'}$.



Table 2. Spatial similarity measurement between the reconstructed intensity images for “S” and “F”

4.2 Blind depth derivation verification for noisy imaging results

To further verify that the proposed depth derivation method, i.e., Eq. (28), also works on noisy imaging results, Gaussian noise is added to the imaging result in Fig. 2(b). The noisy imaging result, shown in Fig. 6, has a peak signal-to-noise ratio (PSNR) of only 25 dB and presents strong noise distortion relative to the noise-free result in Fig. 2(b).
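For reference, Gaussian noise at a prescribed PSNR can be synthesized as in the sketch below, using $\mathrm{PSNR} = 10\log_{10}(\mathrm{peak}^2/\sigma^2)$; the helper name and the seed are illustrative.

```python
import numpy as np

def add_gaussian_noise(image, target_psnr_db, peak=None, seed=0):
    """Add zero-mean Gaussian noise so the noisy image has roughly the requested PSNR."""
    rng = np.random.default_rng(seed)
    peak = image.max() if peak is None else peak
    # PSNR = 10*log10(peak^2 / MSE)  =>  noise std = peak / 10^(PSNR/20).
    sigma = peak / (10 ** (target_psnr_db / 20))
    return image + rng.normal(0.0, sigma, image.shape)
```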


Fig. 6 Sensor data with noise. The brightness of the image is adjusted by 20% to show the noise clearly.


Same as in the processing of the noise-free sensor data, we use the segmented images under microlenses (1,1) and (1,2) for object “P,” those under microlenses (2,2) and (3,2) for object “S,” and those under microlenses (2,3) and (3,3) for object “F.” Treating them as the image responses $I_{s,\Omega_k}^{\mathbf{m}_1,d_{1n}}$ and $I_{s,\Omega_k}^{\mathbf{m}_2,d_{1n}}$, the reconstructed $I_{0,\Omega_k}^{\mathbf{m}_1,d_{1n}'}$ and $I_{0,\Omega_k}^{\mathbf{m}_2,d_{1n}'}$ as $d_{1n}'$ varies from 62 mm to 71 mm with a 1 mm interval are shown in Figs. 7(b)-7(k), respectively.


Fig. 7 (a) Segmented imaging results with 50 dB noise under microlenses (1,1) and (1,2) for object “P,” those under microlenses (2,2) and (3,2) for object “S,” and those under microlenses (2,3) and (3,3) for object “F,” respectively. (b) to (k) correspond to the reconstructed intensity at candidate depths $d_{1n}'$ from 62 mm to 71 mm using the images in (a). Images in red represent $I_{s,\Omega_k}^{\mathbf{m}_1,d_{1n}}$ and its reconstruction $I_{0,\Omega_k}^{\mathbf{m}_1,d_{1n}'}$; images in green represent $I_{s,\Omega_k}^{\mathbf{m}_2,d_{1n}}$ and its reconstruction $I_{0,\Omega_k}^{\mathbf{m}_2,d_{1n}'}$.


It can be found that the spatial disparities between the reconstructed object points are similar to those in the noise-free case. $I_{0,\Omega_k}^{\mathbf{m}_1,d_{1n}'}$ is spatially coincident with $I_{0,\Omega_k}^{\mathbf{m}_2,d_{1n}'}$ at 65 mm, the real depth of “P.” Likewise, the spatial disparity reaches its minimum at 67 mm and 69 mm, the real depths of “S” and “F,” respectively. Still using the Euclidean distance as $\mathrm{Dis}(A,B)$ in Eq. (28), the spatial similarities measured by $\mathrm{Dis}(\cdot)$ are listed in Table 3. Although the strong noise reduces the differences in the spatial similarity, the reconstruction model in Eq. (29) weakens the noise influence through the smoothness regularization. Thus, the correct object distances can still be obtained from Table 3, which demonstrates the robustness of the proposed method.


Table 3. Spatial similarity measurement between images of “P,” “S,” and “F” reconstructed from noisy imaging result

4.3 Light field reconstruction results

Using the distances blindly derived above, the reconstructed discrete intensity at each derived distance and the recovered volumetric information are shown in Fig. 8.


Fig. 8 The reconstructed volumetric information of “P”, “S,” and “F” in the object space, in which the recovered “P”, “S,” and “F” are located at distance 65mm, 67mm and 69mm, respectively. The images on the right in each subimage are the recovered spatial discrete intensity at the specific distance. (a) reconstructed from noise-free imaging results in Fig. 2(b); (b) reconstructed from noisy imaging results in Fig. 6.


Comparing Fig. 8 with the original object information in Fig. 2(a), the reconstructed volumetric information embodies the depth and the actual size of the objects, which gives real space information. Further applying the light field repropagation in Eq. (30), the light field intensity at depths 65 mm, 67 mm, and 69 mm is generated using Eq. (31) and shown in Fig. 9. As shown in the figure, the light field propagates in all directions, so that on each light field slice some intensity generated by the light sources of objects “P,” “S,” and “F” can be observed. The effect is consistent with the theoretical understanding of the light field.


Fig. 9 Recovered light field intensity using light field repropagation at the object distance: (a) 65mm; (b) 67mm and (c) 69mm.


4.4 Light field reconstruction for a bigger object with more microlenses

To further verify the universality of the proposed method, reconstruction results are provided for a much bigger imaging target using a plenoptic camera 2.0 with a 7 × 7 microlens array.

The system parameters are consistent with those in the above experiments. The imaging target “A,” shown in Fig. 10(a), is placed at 66 mm. Its physical size is much larger than that of “P,” “S,” or “F” used before, so it cannot be fully imaged by a single microlens. Thus, as shown in the simulated sensor data in Fig. 10(b), the image response under each microlens covers only a part of “A.”


Fig. 10 (a) Imaging target “A” placed at 66mm; (b) the simulated sensor data using the plenoptic camera 2.0 with a 7 × 7 microlens array.


Since the image responses under different microlenses correspond to different parts of the object, as shown in Fig. 10(b), we use three pairs of image responses to recover the light field of the whole object. As shown in Fig. 11, the first pair uses the image responses under microlenses (2,2) and (2,3) as $I_{s,\Omega_k}^{\mathbf{m}_1,d_{1n}}$ and $I_{s,\Omega_k}^{\mathbf{m}_2,d_{1n}}$, respectively; the second pair uses those under microlenses (5,2) and (5,3); and the third pair uses those under microlenses (4,4) and (4,5).


Fig. 11 (a) The first pair of imaging responses; (b) The second pair of imaging responses; (c) The third pair of imaging responses.


Using the distance derived by Eq. (28), the reconstructed information from the first, second, and third pairs of image responses is shown in Figs. 12(a)-12(c), respectively. Since all the derived object distances are 66 mm, the recovered light field intensity at 66 mm is generated by Eq. (23), i.e., by adding the three reconstructed light fields together. The recovered light field intensity is shown in Fig. 12(d), which shows the information of object “A.” Comparing it with the original imaging target in Fig. 10(a), the recovered “A” has exactly the same physical size. The completeness of the recovered “A” can be further improved by reconstructing the image responses under more microlenses. This demonstrates that the proposed approach also works for a plenoptic camera 2.0 with more microlenses and bigger imaging targets.


Fig. 12 The reconstructed volumetric information at 66mm. (a) reconstructed information from the first pair; (b) reconstructed information from the second pair; (c) reconstructed information from the third pair; (d) The recovered light field intensity at 66mm.


5. Conclusion

In this paper, we proposed a blind light field reconstruction method based on inverse image formation approximation and blind volumetric information derivation. The inverse image formation is approximated as a summation of localized reconstructions based on image formation analysis. Blind volumetric information derivation based on backward image formation modeling exploits the correspondence among the deconvolved results. The light field is then blindly reconstructed via the proposed inverse image formation approximation and wave propagation. Experimental results demonstrated the correctness and effectiveness of the proposed method in blindly recovering light field intensity with continuous volumetric data. Since changes of the internal parameters do not affect the mathematical form of the derivations and the image formation analysis provided in the paper, the proposed method generalizes to different optical parameters of plenoptic camera 2.0.

To further optimize the proposed algorithm, we are investigating more automatic segmentation methods to extract the imaging results even when depth-dependent imaging distortion exists. Recovering real objects with heterogeneous optical configurations is also being modeled.

Funding

National Natural Science Foundation of China (NSFC) (61771275); Shenzhen Project, China (JCYJ20170817162658573).

References and links

1. E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 99–106 (1992).
2. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Technical Report, Stanford University (2005).
3. R. Ng, “Digital light field photography,” Ph.D. thesis, Stanford University (2006).
4. N. Antipa, S. Necula, R. Ng, and L. Waller, “Single-shot diffuser-encoded light field imaging,” in 2016 IEEE International Conference on Computational Photography (ICCP), Evanston, IL, pp. 1–11 (2016).
5. Y. Zhang, Z. Li, W. Yang, P. Yu, H. Lin, and J. Yu, “The light field 3D scanner,” in 2017 IEEE International Conference on Computational Photography (ICCP), Stanford, CA, pp. 1–9 (2017).
6. S. Shroff and K. Berkner, “High resolution image reconstruction for plenoptic imaging systems using system response,” in Imaging and Applied Optics Technical Papers, OSA Technical Digest (online) (Optical Society of America, 2012), paper CM2B.2.
7. S. Shroff and K. Berkner, “Plenoptic system response and image formation,” in Imaging and Applied Optics, OSA Technical Digest (online) (Optical Society of America, 2013), paper JW3B.1.
8. S. Shroff and K. Berkner, “Wave analysis of a plenoptic system and its applications,” Proc. SPIE 8667, 86671L (2013).
9. L. Liu, X. Jin, and Q. Dai, “Image formation analysis and light field information reconstruction for plenoptic Camera 2.0,” in Pacific-Rim Conference on Multimedia (PCM) 2017, Sept. 28–29, Harbin, China (2017).
10. C. Guo, H. Li, I. Muniraj, B. Schroeder, J. Sheridan, and S. Jia, “Volumetric light-field encryption at the microscopic scale,” in Frontiers in Optics 2017, OSA Technical Digest (online) (Optical Society of America, 2017), paper JTu2A.94.
11. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21(21), 25418–25439 (2013).
12. E. Y. Lam, “Computational photography with plenoptic camera and light field capture: tutorial,” J. Opt. Soc. Am. A 32(11), 2021–2032 (2015).
13. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in Proceedings of IEEE International Conference on Computational Photography (ICCP, 2009), pp. 1–8.
14. T. Georgiev and A. Lumsdaine, “Focused plenoptic camera and rendering,” J. Electron. Imaging 19(2), 1–28 (2010).
15. X. Jin, L. Liu, Y. Chen, and Q. Dai, “Point spread function and depth-invariant focal sweep point spread function for plenoptic camera 2.0,” Opt. Express 25(9), 9947–9962 (2017).
16. T. Georgiev and A. Lumsdaine, “Superresolution with plenoptic 2.0 cameras,” in Frontiers in Optics 2009/Laser Science XXV/Fall 2009, OSA Technical Digest (CD) (Optical Society of America, 2009), paper STuA6.
17. T. E. Bishop and P. Favaro, “The light field camera: extended depth of field, aliasing, and superresolution,” IEEE Trans. Pattern Anal. Mach. Intell. 34(5), 972–986 (2012).
18. C. C. Paige and M. A. Saunders, “LSQR: an algorithm for sparse linear equations and sparse least squares,” ACM Trans. Math. Softw. 8(1), 43–71 (1982).
19. D. C. L. Fong and M. Saunders, “LSMR: an iterative algorithm for sparse least-squares problems,” SIAM J. Sci. Comput. 33(5), 2950–2971 (2011).
20. Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” IEEE Trans. Pattern Anal. Mach. Intell. 23(11), 1222–1239 (2001).
21. R. M. Haralick and L. G. Shapiro, Computer and Robot Vision, Vol. I (Addison-Wesley Longman Publishing Co., 1992), pp. 28–48.
