Optica Publishing Group

Light field reconstruction from projection modeling of focal stack

Open Access

Abstract

This paper aims to reconstruct the object-side light field from a focal stack focused on different imaging planes. In the forward problem, the focal stack is modeled as a set of projections of the light field. Based on this projection model, both the filtered backprojection (FBP) method and the Landweber iterative scheme for solving the inverse problem of light field reconstruction from the focal stack are derived by applying methods from computerized tomography (CT). The experimental results show that a high-precision light field can be reconstructed via the FBP and Simultaneous Algebraic Reconstruction Technique (SART) algorithms, and that the depth and surface of the scene can be reconstructed from the recovered light field.

© 2017 Optical Society of America

1. Introduction

In conventional camera imaging, information about light direction and scene depth is compressed onto the imaging plane. A light field imaging system obtains 4D light field data and records the radiation information of a 3D scene [1,2], including the direction and radiance of light rays [3,4]. As a crucial method of computational photography [5], light field imaging can break through the limitations of conventional cameras and capture richer visual information. Computational imaging techniques enabled by the light field include digital refocusing [6], viewpoint switching [7], extended depth of field [8], depth reconstruction [9], 3D scene reconstruction [10] and stereo display [11,12].

The design of light field imaging systems is based on the two-plane parameterization of the light field [3,4], in which the 7D plenoptic function is approximated and simplified to the 4D light field. The quaternary group (x, y, u, v) uniquely determines a light ray, with the (x, y) plane recording the spatial information and the (u, v) plane recording the directional information. The 4D light field acquisition methods can be classified into direct and indirect methods. Direct acquisition of light field data entails the camera array light field system [13,14] and the microlens array light field camera [15,16]. Indirect methods comprise light field reconstruction from coded mask data [17–21] and from the focal stack [22].

The focal stack contains a wealth of 3D information and has been widely used in computer vision, e.g., in shape reconstruction from defocus and focus. As a sequence of images focused on different imaging planes [23], the focal stack enables flexible acquisition of real scenes. Compared with the direct methods and the coded mask method, the first advantage of the focal stack is that the light field can be reconstructed with arbitrary angular resolution. The second is that no new optical elements need to be attached to the camera. There are two types of reconstruction methods from the focal stack, i.e., depth-based methods [24,25] and deconvolution-based methods [22,23]. The former rely on a depth estimation process, while the latter require a dense focal stack to obtain a depth-invariant kernel. In recent studies, a partial light field was reconstructed in the epipolar images using the depth map and the all-in-focus image [25], and a filter-based iterative method was proposed, based on a modified normal equation, to reconstruct the light field from the focal stack [26].

This paper aims to reconstruct the object-side light field from the focal stack. In this study, the forward and inverse problems of light field reconstruction from the focal stack were elaborated according to the imaging process of the focal stack. The forward problem shares significant similarity with tomographic reconstruction from projections, especially CT. Consequently, the ideas and technologies of CT can be extended to light field reconstruction from the focal stack. As a well-established technique of tomographic reconstruction [27,28], the FBP method was derived from the projection slice theorem in the continuous case. As the Landweber iterative scheme [29] is one of the powerful methods for solving inverse problems such as computerized tomography [30,31], we adopted it to reconstruct the light field iteratively.

2. Related work

Light field imaging is a vibrant and cross-disciplinary research field which enables a broad range of novel imaging applications, in which the 7D plenoptic function is approximated and simplified into the 4D light field. Digital refocusing is one of the most important light field imaging techniques. The Fourier slice photography theorem [32] was established by Ren Ng; it reveals that a refocused image corresponds to a 2D slice of the 4D light field in the frequency domain. As the theoretical basis of this frequency domain relationship, Fourier slice photography can be used to implement digital refocusing from light field data. The theorem provides a theoretical tool for the implementation and analysis of digital refocusing [33], and has prompted varied extensions and applications in light field imaging [19,20,34]. It extends the classical projection slice theorem to light field imaging. The classical Fourier slice theorem was established by Ronald N. Bracewell in 1956, arising from a problem in radio astronomy [35]. The theorem can be generalized to finite-dimensional Hilbert spaces: the generalized version indicates that integral equations whose kernels are supported on linear manifolds satisfy the projection slice theorem, which is the basic theoretical outcome of the Fourier analysis of these integral equations [28].

The integral equations of CT image reconstruction and light field reconstruction are significantly similar. This similarity provides the theoretical basis for introducing the ideas and techniques of CT image reconstruction into the inverse problem of light field reconstruction. CT image reconstruction is theoretically attributed to the problem of reconstructing an image from projections, on the basis of the relationship between the image space and the projection data space. There are two main types of CT reconstruction methods, i.e., analytical and iterative methods. FBP is an approximate and stable form of the inverse Radon transform based on the Fourier slice theorem in 2D imaging. FBP remains one of the most effective methods in medical imaging and industrial detection [36], for it can be implemented directly as discrete calculations according to the reconstruction formula. As a method for obtaining an approximate solution of the integral equation, the Landweber iterative scheme presents a concise iteration structure, which provides a framework for theoretical derivation and proof of convergence [30]. This general iterative scheme has varied special cases depending on the V-norm and W-norm, such as the SART, Algebraic Reconstruction Technique (ART), and Component Averaging (CAV) algorithms.

In this study, the object-side light field reconstruction from the focal stack was theoretically transformed into an inverse problem based on the projection modeling of the focal stack. Then we derived the applicable methods of solving this inverse problem, including the analytical and iterative methods.

3. The forward model of light field reconstruction

3.1. Projection modeling of focal stack

The formation of the focal stack by the light field is the focal imaging process. In the two-plane parameterization of the light field, the projection operator is defined to characterize the focal imaging process, which is the 2D projection of the 4D light field [32].

The object-side light field and the image-side light field are conjugate on account of the main-lens system. In this study, the object-side light field is reconstructed. The object-side light field Ls (xs, ys, u, v) is parameterized by the (u, v) and (xs, ys) planes and describes the radiance of the light rays (Fig. 1(a)). The focal stack E(s, xs, ys) represents the irradiance at depth s (Fig. 1(b)). (u, v) is the main-lens plane. (xs, ys) is an arbitrary focal plane in the object space. The (xs0, ys0) plane is the reference focal plane. s and s0 are the distances from the (u, v) plane to the (xs, ys) and (xs0, ys0) planes, respectively.

Fig. 1 Focus imaging based on the two-plane parameterization of the object-side light field.

We denote $L(x,y,u,v) \triangleq L_{s_0}(x_{s_0}, y_{s_0}, u, v)$. $L(x,y,u,v)$ and $L_s(x_s, y_s, u, v)$ represent the same light ray. We have

$$L(x,y,u,v) = L_s(x_s, y_s, u, v) \tag{1}$$

The affine transformations of (x, u) to (xs, u), and (y, v) to (ys, v), are

$$\begin{pmatrix} x_s \\ u \end{pmatrix} = \begin{pmatrix} \frac{s}{s_0} & 1-\frac{s}{s_0} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ u \end{pmatrix}, \qquad \begin{pmatrix} y_s \\ v \end{pmatrix} = \begin{pmatrix} \frac{s}{s_0} & 1-\frac{s}{s_0} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} y \\ v \end{pmatrix} \tag{2}$$
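As a concrete illustration, the affine map above can be sketched in a few lines of NumPy (the function name and the example values are ours, not from the paper):

```python
import numpy as np

def reparameterize(x, u, s, s0):
    """Apply the affine map above: a ray crossing the reference plane at x
    with direction coordinate u crosses the focal plane at depth s at
    x_s = (s/s0) x + (1 - s/s0) u; the direction coordinate u is unchanged."""
    r = s / s0
    xs = r * np.asarray(x, dtype=float) + (1.0 - r) * np.asarray(u, dtype=float)
    return xs, u

# A ray with x = 2 and u = 0, refocused at s = 1.5 s0:
xs, _ = reparameterize(2.0, 0.0, s=1.5, s0=1.0)  # xs = 1.5 * 2 - 0.5 * 0 = 3.0
```

At s = s0 the map reduces to the identity on x, as expected for the reference plane.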
Definition 1 Let P be a bounded linear operator from the light field space into the focal stack space. The projection operator P is the focus imaging process of forming the focal stack E(s, xs, ys) from the light field L(x, y, u, v):

$$E(s,x_s,y_s) = P\left[L(x,y,u,v)\right] = \iiiint L(x,y,u,v)\, \delta\!\left(\frac{s}{s_0}x + \left(1-\frac{s}{s_0}\right)u - x_s,\ \frac{s}{s_0}y + \left(1-\frac{s}{s_0}\right)v - y_s\right) du\, dv\, dx\, dy \tag{3}$$

Equation (3) is an integral equation with the kernel $\delta\left(\frac{s}{s_0}x + \left(1-\frac{s}{s_0}\right)u - x_s,\ \frac{s}{s_0}y + \left(1-\frac{s}{s_0}\right)v - y_s\right)$. The projection operator P integrates the light field on ℝ⁴ over the integral paths $x_s = \frac{s}{s_0}x + \left(1-\frac{s}{s_0}\right)u$ and $y_s = \frac{s}{s_0}y + \left(1-\frac{s}{s_0}\right)v$. Thus, the focal stack E(s, xs, ys) at the depth of s is a projection of the 4D light field L(x, y, u, v).
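A minimal discrete sketch of the projection operator P of Eq. (3), assuming a sampled light field L[x, y, u, v] on an integer grid (nearest-neighbour sampling with edge clamping is our simplification; a practical implementation would interpolate):

```python
import numpy as np

def project(L, s, s0):
    """Discrete sketch of the projection operator P in Eq. (3): sum the
    4D light field L[x, y, u, v] along the integral paths
    x_s = (s/s0) x + (1 - s/s0) u and y_s = (s/s0) y + (1 - s/s0) v."""
    nx, ny, nu, nv = L.shape
    r = s / s0
    xs = np.arange(nx)
    ys = np.arange(ny)
    E = np.zeros((nx, ny))
    for u in range(nu):
        for v in range(nv):
            # Invert x_s = r*x + (1 - r)*u to find which sample of the
            # sub-aperture image (u, v) lands at each pixel x_s.
            x = np.clip(np.round((xs - (1 - r) * u) / r), 0, nx - 1).astype(int)
            y = np.clip(np.round((ys - (1 - r) * v) / r), 0, ny - 1).astype(int)
            E += L[x[:, None], y[None, :], u, v]
    return E

# Focusing on the reference plane (s = s0) just sums over the aperture:
L = np.arange(2 * 2 * 2 * 2, dtype=float).reshape(2, 2, 2, 2)
E0 = project(L, s=1.0, s0=1.0)
assert np.allclose(E0, L.sum(axis=(2, 3)))
```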

The integral path of the forward model in CT image reconstruction is a straight line, while the integral path of the forward model in light field reconstruction is a linear manifold. Therefore, the methods of solving the inverse problem in CT image reconstruction can be applied to the inverse problem of light field reconstruction, from which both the analytical method and the iterative scheme can be derived.

3.2. Projection slice theorem for light field reconstruction

In the CT image reconstruction, the projection slice theorem [27] reveals the relationship between the projection data space and the image space in the Fourier domain. The Fourier slice photography theorem [32, 33] is an extension of the classical projection slice theorem [35] in the light field imaging.

In the forward problem of light field reconstruction from focal stack, the focal stack E(s, xs, ys) at the depth of s is the 2D projection of the 4D light field L(x, y, u, v) in the spatial domain. The following projection slice theorem presents the relationship in the Fourier domain between the light field space and the focal stack space.

Theorem 1 The 2D Fourier transform of the image E(s, xs, ys) at the depth of s is a 2D slice of the 4D Fourier transform of the light field L(x, y, u, v).

Proof:

According to Eq. (3), the focal stack E(s, xs, ys) at the depth of s is given by

$$E(s,x_s,y_s) = \iiiint L(x,y,u,v)\, \delta\!\left(\frac{s}{s_0}x + \left(1-\frac{s}{s_0}\right)u - x_s,\ \frac{s}{s_0}y + \left(1-\frac{s}{s_0}\right)v - y_s\right) du\, dv\, dx\, dy \tag{4}$$
With the expression of the 2D $\delta$-function

$$\delta\!\left(\frac{s}{s_0}x + \left(1-\frac{s}{s_0}\right)u - x_s,\ \frac{s}{s_0}y + \left(1-\frac{s}{s_0}\right)v - y_s\right) = \iint \exp\!\left(-2\pi i\left(\left(\frac{s}{s_0}x + \left(1-\frac{s}{s_0}\right)u - x_s\right)\omega_1 + \left(\frac{s}{s_0}y + \left(1-\frac{s}{s_0}\right)v - y_s\right)\omega_2\right)\right) d\omega_1\, d\omega_2 \tag{5}$$
We have
$$\begin{aligned} E(s,x_s,y_s) &= \iiiint\!\!\iint L(x,y,u,v) \exp\!\left(-2\pi i\left(\left(\tfrac{s}{s_0}x + \left(1-\tfrac{s}{s_0}\right)u - x_s\right)\omega_1 + \left(\tfrac{s}{s_0}y + \left(1-\tfrac{s}{s_0}\right)v - y_s\right)\omega_2\right)\right) d\omega_1\, d\omega_2\, du\, dv\, dx\, dy \\ &= \iint \left[\iiiint L(x,y,u,v) \exp\!\left(-2\pi i\left(\left(\tfrac{s}{s_0}x + \left(1-\tfrac{s}{s_0}\right)u\right)\omega_1 + \left(\tfrac{s}{s_0}y + \left(1-\tfrac{s}{s_0}\right)v\right)\omega_2\right)\right) du\, dv\, dx\, dy \right] \exp\!\left(2\pi i\left(x_s\omega_1 + y_s\omega_2\right)\right) d\omega_1\, d\omega_2 \end{aligned} \tag{6}$$
Then, with
$$\iiiint L(x,y,u,v) \exp\!\left(-2\pi i\left(\left(\tfrac{s}{s_0}x + \left(1-\tfrac{s}{s_0}\right)u\right)\omega_1 + \left(\tfrac{s}{s_0}y + \left(1-\tfrac{s}{s_0}\right)v\right)\omega_2\right)\right) du\, dv\, dx\, dy = \mathcal{L}\!\left(\frac{s}{s_0}\omega_1,\ \frac{s}{s_0}\omega_2,\ \left(1-\frac{s}{s_0}\right)\omega_1,\ \left(1-\frac{s}{s_0}\right)\omega_2\right) \tag{7}$$
where $\mathcal{L}(\omega_x, \omega_y, \omega_u, \omega_v)$ represents the 4D Fourier transform of the light field $L(x,y,u,v)$.

Now Eq. (4) becomes

$$E(s,x_s,y_s) = \iint \mathcal{L}\!\left(\frac{s}{s_0}\omega_1, \frac{s}{s_0}\omega_2, \left(1-\frac{s}{s_0}\right)\omega_1, \left(1-\frac{s}{s_0}\right)\omega_2\right) \exp\!\left(2\pi i\left(x_s\omega_1 + y_s\omega_2\right)\right) d\omega_1\, d\omega_2 \tag{8}$$
Taking the 2D Fourier transform of both sides of Eq. (8), we have
$$\mathcal{F}\left[E(s,x_s,y_s)\right] = \mathcal{L}\!\left(\frac{s}{s_0}\omega_1, \frac{s}{s_0}\omega_2, \left(1-\frac{s}{s_0}\right)\omega_1, \left(1-\frac{s}{s_0}\right)\omega_2\right) \tag{9}$$
Here, $\mathcal{F}[E(s,x_s,y_s)]$ represents the 2D Fourier transform of the focal stack $E(s,x_s,y_s)$ at the depth of $s$. $\mathcal{F}[E(s,x_s,y_s)]$ is a 2D slice of $\mathcal{L}(\omega_x, \omega_y, \omega_u, \omega_v)$, and the slice in the Fourier domain is selected as
$$\omega_x = \frac{s}{s_0}\omega_1, \quad \omega_y = \frac{s}{s_0}\omega_2, \quad \omega_u = \left(1-\frac{s}{s_0}\right)\omega_1, \quad \omega_v = \left(1-\frac{s}{s_0}\right)\omega_2 \tag{10}$$
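Theorem 1 can be sanity-checked numerically in a discrete setting. In the special case s = s0, the projection reduces to a plain sum over (u, v), and the statement becomes exact for the DFT: the 2D DFT of the refocused image equals the ω_u = ω_v = 0 slice of the 4D DFT of the light field. A minimal NumPy check (array shapes are arbitrary):

```python
import numpy as np

# Discrete check of the slice theorem for s = s0: refocusing on the
# reference plane sums the light field over (u, v), and the 2D DFT of
# that image equals the omega_u = omega_v = 0 slice of the 4D DFT.
rng = np.random.default_rng(0)
L = rng.standard_normal((8, 8, 5, 5))   # L[x, y, u, v]
E = L.sum(axis=(2, 3))                  # image focused at s = s0
slice2d = np.fft.fftn(L)[:, :, 0, 0]    # slice of the 4D spectrum
assert np.allclose(np.fft.fft2(E), slice2d)
```

For general s the discrete statement holds only approximately, since the sheared slice falls between DFT grid points and requires interpolation.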

4. Light field reconstruction methods from focal stack

The projection model and the projection slice theorem are the essential characterizations of the compression of the light field data into the focal stack data. The analytical methods of light field reconstruction, i.e., the filtered backprojection and convolution backprojection algorithms, can be derived from the projection slice theorem. On the basis of the integral equation, the Landweber iterative scheme can be applied to solving the inverse problem of light field reconstruction.

4.1. Filtered backprojection method for light field reconstruction

In this section, we derive the analytical method. According to the selected slice in Eq. (10), we need the following substitution of integral variables:

$$d\omega_u\, d\omega_x = J_1\, d\omega_1\, ds, \qquad d\omega_v\, d\omega_y = J_2\, d\omega_2\, ds \tag{11}$$
With the Jacobian determinants $J_1$ and $J_2$, we have
$$J_1 = \left|\begin{matrix} \frac{\partial \omega_u}{\partial \omega_1} & \frac{\partial \omega_u}{\partial s} \\ \frac{\partial \omega_x}{\partial \omega_1} & \frac{\partial \omega_x}{\partial s} \end{matrix}\right| = \frac{1}{s_0}\left|\omega_1\right|, \qquad J_2 = \left|\begin{matrix} \frac{\partial \omega_v}{\partial \omega_2} & \frac{\partial \omega_v}{\partial s} \\ \frac{\partial \omega_y}{\partial \omega_2} & \frac{\partial \omega_y}{\partial s} \end{matrix}\right| = \frac{1}{s_0}\left|\omega_2\right| \tag{12}$$
Then we get the integral variable substitution as follows:
$$d\omega_u\, d\omega_x = \frac{1}{s_0}\left|\omega_1\right| d\omega_1\, ds, \qquad d\omega_v\, d\omega_y = \frac{1}{s_0}\left|\omega_2\right| d\omega_2\, ds \tag{13}$$
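The Jacobians J1, J2 above can be verified symbolically, e.g. with SymPy (shown for J1; J2 is identical with ω2). Taking the symbols positive lets |ω1|/s0 simplify to ω1/s0:

```python
import sympy as sp

# Symbolic check of the Jacobian J1 above.
w1, s, s0 = sp.symbols('omega_1 s s_0', positive=True)
wx = (s / s0) * w1        # omega_x = (s/s0) * omega_1
wu = (1 - s / s0) * w1    # omega_u = (1 - s/s0) * omega_1
J1 = sp.Matrix([[sp.diff(wu, w1), sp.diff(wu, s)],
                [sp.diff(wx, w1), sp.diff(wx, s)]]).det()
assert sp.simplify(J1 - w1 / s0) == 0
```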

Thus, the analytical method of reconstructing the light field L(x, y, u, v) is derived as follows. In the Fourier domain of the light field, we have

$$\mathcal{L}(\omega_x, \omega_y, \omega_u, \omega_v) = \mathcal{L}\!\left(\frac{s}{s_0}\omega_1, \frac{s}{s_0}\omega_2, \left(1-\frac{s}{s_0}\right)\omega_1, \left(1-\frac{s}{s_0}\right)\omega_2\right) \tag{14}$$
Hence,
$$\begin{aligned} L(x,y,u,v) &= \iiiint \mathcal{L}(\omega_x, \omega_y, \omega_u, \omega_v)\, e^{2\pi i (x\omega_x + y\omega_y + u\omega_u + v\omega_v)}\, d\omega_x\, d\omega_y\, d\omega_u\, d\omega_v \\ &= \iiiint \mathcal{L}\!\left(\tfrac{s}{s_0}\omega_1, \tfrac{s}{s_0}\omega_2, \left(1-\tfrac{s}{s_0}\right)\omega_1, \left(1-\tfrac{s}{s_0}\right)\omega_2\right) \exp\!\left(2\pi i \left(x\tfrac{s}{s_0}\omega_1 + y\tfrac{s}{s_0}\omega_2 + u\left(1-\tfrac{s}{s_0}\right)\omega_1 + v\left(1-\tfrac{s}{s_0}\right)\omega_2\right)\right) d\omega_x\, d\omega_y\, d\omega_u\, d\omega_v \end{aligned} \tag{15}$$
Using the integral variable substitution in Eq. (13), we obtain
$$L(x,y,u,v) = \iiint \mathcal{L}\!\left(\tfrac{s}{s_0}\omega_1, \tfrac{s}{s_0}\omega_2, \left(1-\tfrac{s}{s_0}\right)\omega_1, \left(1-\tfrac{s}{s_0}\right)\omega_2\right) \exp\!\left(2\pi i \left(x\tfrac{s}{s_0}\omega_1 + y\tfrac{s}{s_0}\omega_2 + u\left(1-\tfrac{s}{s_0}\right)\omega_1 + v\left(1-\tfrac{s}{s_0}\right)\omega_2\right)\right) \left(\frac{1}{s_0}\right)^2 \left|\omega_1\right|\left|\omega_2\right| d\omega_1\, d\omega_2\, ds \tag{16}$$
Substituting Eq. (8) into Eq. (16), the result is
$$L(x,y,u,v) = \iiint \mathcal{F}\left[E(s,x_s,y_s)\right] \exp\!\left(2\pi i \left(x\tfrac{s}{s_0}\omega_1 + y\tfrac{s}{s_0}\omega_2 + u\left(1-\tfrac{s}{s_0}\right)\omega_1 + v\left(1-\tfrac{s}{s_0}\right)\omega_2\right)\right) \left(\frac{1}{s_0}\right)^2 \left|\omega_1\right|\left|\omega_2\right| d\omega_1\, d\omega_2\, ds \tag{17}$$
Recalling $x_s = \frac{s}{s_0}x + \left(1-\frac{s}{s_0}\right)u$ and $y_s = \frac{s}{s_0}y + \left(1-\frac{s}{s_0}\right)v$, we have
$$L(x,y,u,v) = \iiint \mathcal{F}\left[E(s,x_s,y_s)\right] \exp\!\left(2\pi i \left(x_s\omega_1 + y_s\omega_2\right)\right) \left(\frac{1}{s_0}\right)^2 \left|\omega_1\right|\left|\omega_2\right| d\omega_1\, d\omega_2\, ds = \left(\frac{1}{s_0}\right)^2 \int \left[\iint \mathcal{F}\left[E(s,x_s,y_s)\right] \left|\omega_1\right|\left|\omega_2\right| \exp\!\left(2\pi i \left(x_s\omega_1 + y_s\omega_2\right)\right) d\omega_1\, d\omega_2\right] ds \tag{18}$$

Then we have

$$L(x,y,u,v) = \left(\frac{1}{s_0}\right)^2 \int \mathcal{F}^{-1}\!\left[\mathcal{F}\left[E(s,x_s,y_s)\right] \left|\omega_1\right|\left|\omega_2\right|\right] ds \tag{19}$$

From Eq. (19), we obtain the FBP method for reconstructing the 4D light field from the focal stack. Using convolution in the spatial domain instead of filtering in the Fourier domain, FBP becomes the convolution backprojection (CBP) algorithm:

$$L(x,y,u,v) = \left(\frac{1}{s_0}\right)^2 \int E(s,x_s,y_s) * \mathcal{F}^{-1}\left(\left|\omega_1\right|\left|\omega_2\right|\right) ds \tag{20}$$
where * is the convolution operation.
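A minimal NumPy sketch of one possible discretization of the FBP pipeline of Eq. (19): filter each focal-stack image with the ramp |ω1||ω2| in the DFT domain, then backproject it onto the (x, y, u, v) grid by sampling along x_s = (s/s0)x + (1 - s/s0)u. The grid conventions, nearest-neighbour sampling, and unwindowed ramp are our simplifications; a practical implementation would interpolate and apodize the ramp with a window:

```python
import numpy as np

def fbp_light_field(stack, s_list, s0, nu, nv):
    """Sketch of FBP reconstruction from a focal stack stack[k, x, y]
    captured at depths s_list: ramp-filter each image, then backproject
    and sum over depths (Eq. (19)), sampling with nearest neighbours."""
    ns, nx, ny = stack.shape
    w1 = np.fft.fftfreq(nx)[:, None]
    w2 = np.fft.fftfreq(ny)[None, :]
    ramp = np.abs(w1) * np.abs(w2)          # unwindowed |w1||w2| filter
    L = np.zeros((nx, ny, nu, nv))
    x = np.arange(nx)[:, None]
    y = np.arange(ny)[None, :]
    for k, s in enumerate(s_list):
        filtered = np.fft.ifft2(np.fft.fft2(stack[k]) * ramp).real
        r = s / s0
        for u in range(nu):
            for v in range(nv):
                # Sample the filtered image where the ray (x, y, u, v)
                # meets the focal plane at depth s.
                xi = np.clip(np.round(r * x + (1 - r) * u), 0, nx - 1).astype(int)
                yi = np.clip(np.round(r * y + (1 - r) * v), 0, ny - 1).astype(int)
                L[:, :, u, v] += filtered[xi, yi] / s0 ** 2
    return L

stack = np.random.default_rng(0).standard_normal((3, 8, 8))
L = fbp_light_field(stack, s_list=[0.8, 1.0, 1.2], s0=1.0, nu=3, nv=3)
```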

Equations (19) and (20) are ideal reconstruction equations. According to the Paley-Wiener criterion, the ideal frequency filter $\mathcal{H}(\omega_1, \omega_2) = \left|\omega_1\right|\left|\omega_2\right|$ has infinite bandwidth and cannot be realized, because

$$\int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} \left(\left|\omega_1\right|\left|\omega_2\right|\right)^2 d\omega_1\, d\omega_2 \to \infty \tag{21}$$

In the numerical calculation, the approximate filter function $\tilde{\mathcal{H}}(\omega_1, \omega_2) = \mathcal{H}(\omega_1, \omega_2)\, W(\omega_1, \omega_2)$ can be used to replace $\mathcal{H}(\omega_1, \omega_2)$, where $W(\omega_1, \omega_2)$ is a truncated window function. Adopting this filter function, the light field can be reconstructed by the FBP and CBP methods. $\tilde{\mathcal{H}}(\omega_1, \omega_2)$ can be selected as the rectangular filter function $\mathcal{H}_{\mathrm{rect}}(\omega_1, \omega_2)$ or the sinc filter function $\mathcal{H}_{\mathrm{sinc}}(\omega_1, \omega_2)$:

$$\mathcal{H}_{\mathrm{rect}}(\omega_1, \omega_2) = \left|\omega_1\right|\left|\omega_2\right| \mathrm{Rect}(\omega_1, \omega_2) \tag{22}$$
$$\mathcal{H}_{\mathrm{sinc}}(\omega_1, \omega_2) = \left|\omega_1\right|\left|\omega_2\right| \mathrm{sinc}(\omega_1, \omega_2)\, \mathrm{Rect}(\omega_1, \omega_2) \tag{23}$$
Here, the window function is
$$\mathrm{Rect}(\omega_1, \omega_2) = \begin{cases} 1, & \left|\omega_1\right| < \dfrac{B_{\omega_1}}{2} \ \text{and} \ \left|\omega_2\right| < \dfrac{B_{\omega_2}}{2} \\ 0, & \text{otherwise} \end{cases}$$
and $\mathrm{sinc}(\omega_1, \omega_2) = \mathrm{sinc}(\omega_1)\, \mathrm{sinc}(\omega_2)$. $B_{\omega_1}$ and $B_{\omega_2}$ are the bandwidths of $\omega_1$ and $\omega_2$.
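The two windowed ramp filters above can be written directly in NumPy; note that np.sinc uses the normalized convention sin(πω)/(πω), and the argument scaling relative to the bandwidth is our assumption:

```python
import numpy as np

def rect_window(w1, w2, b1, b2):
    """Truncated window: 1 for |w1| < b1/2 and |w2| < b2/2, else 0."""
    return ((np.abs(w1) < b1 / 2) & (np.abs(w2) < b2 / 2)).astype(float)

def h_rect(w1, w2, b1, b2):
    """Rectangularly windowed ramp filter |w1||w2| * Rect."""
    return np.abs(w1) * np.abs(w2) * rect_window(w1, w2, b1, b2)

def h_sinc(w1, w2, b1, b2):
    """Sinc-apodized ramp filter |w1||w2| * sinc(w1) sinc(w2) * Rect."""
    return (np.abs(w1) * np.abs(w2) * np.sinc(w1) * np.sinc(w2)
            * rect_window(w1, w2, b1, b2))
```

Both filters vanish at DC and outside the band; the sinc apodization additionally tapers the high frequencies, which is what suppresses the ringing observed with the plain rectangular window.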

4.2. Landweber iterative scheme for light field reconstruction

Equation (3) is an integral equation characterized by the kernel $\delta\left(\frac{s}{s_0}x + \left(1-\frac{s}{s_0}\right)u - x_s,\ \frac{s}{s_0}y + \left(1-\frac{s}{s_0}\right)v - y_s\right)$. In this section, we analyze and derive the Landweber iterative scheme for light field reconstruction from the focal stack. The light field reconstruction is considered as solving an inverse problem based on the forward model. As a forward process, forming the focal stack from the 4D light field data is the projection operator of the focal imaging process. The light field reconstruction is the inverse process of solving the integral equation E(s, xs, ys) = P[L(x, y, u, v)].

The Landweber iterative scheme [29] is the steepest descent method for the target functional $\frac{1}{2}\left\|E(s,x_s,y_s) - P\left[L(x,y,u,v)\right]\right\|^2$, which can be used to obtain an iterative solution for reconstructing the 4D light field from the focal stack:

$$L^{(n+1)}(x,y,u,v) = L^{(n)}(x,y,u,v) + \alpha_n \int P^*\!\left(E(s,x_s,y_s) - P\left[L^{(n)}(x,y,u,v)\right]\right) ds \tag{24}$$
Here, $L^{(n)}(x,y,u,v)$ is the result of the nth iteration of the light field, $P^*$ is the adjoint operator of $P$, and $\alpha_n$ is the relaxation parameter. The projection operator $P$ corresponds to tracking the trajectory of an imaging point through the light field space L(x, y, u, v); $P^*$ corresponds to tracking the trajectory of a light ray L(x, y, u, v) through the focal stack space E(s, xs, ys). The light field reconstruction can be modeled in the discretized form as follows:
$$AX = B \tag{25}$$
where $A = (a_{ij})_{M\times N}$ denotes an $M\times N$ matrix, $B = (b_1 \cdots b_M)^{\mathrm T} \in \mathbb{R}^M$ the discretized focal stack, and $X = (x_1 \cdots x_N)^{\mathrm T} \in \mathbb{R}^N$ the discretized light field.

Under the W-norm and V-norm, solving the discretized form of the integral equation is equivalent to the weighted least-squares optimization problem

$$X^* = \arg\min\left\{\frac{1}{2}\left\|B - AX\right\|_{W,V}^{2}\right\} \tag{26}$$
Then we obtain the discretized Landweber iterative scheme [30] with respect to the W-norm and V-norm:
$$X^{(n+1)} = X^{(n)} + \alpha_n V^{-1} A^{\mathrm T} W\left(B - AX^{(n)}\right) \tag{27}$$
Here, $A^{\mathrm T}$ denotes the transpose of $A$, and $V$ and $W$ are two positive definite diagonal matrices. This general iterative scheme entails varied special cases contingent on the choice of $V$ and $W$.
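For concreteness, the discretized Landweber scheme above with diagonal V and W can be sketched as follows (the toy system, step size, and iteration count are ours; the diagonals are passed as 1D arrays):

```python
import numpy as np

def landweber(A, B, V, W, alpha, n_iter):
    """Discretized Landweber scheme with diagonal V and W (given as 1D
    arrays of their diagonal entries):
    X <- X + alpha * V^-1 A^T W (B - A X)."""
    X = np.zeros(A.shape[1])
    for _ in range(n_iter):
        X = X + alpha * (A.T @ (W * (B - A @ X))) / V
    return X

# Toy consistent system; with V = W = identity this is plain Landweber.
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_true = np.array([1.0, 2.0])
B = A @ X_true
X = landweber(A, B, V=np.ones(2), W=np.ones(3), alpha=0.2, n_iter=2000)
```

With this step size the iteration satisfies the usual convergence condition 0 < α λmax(AᵀA) < 2 and X converges to the exact solution of the consistent system.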

The SART iterative method [30,37] is a special case of the Landweber iterative scheme:
$$x_j^{(n+1)} = x_j^{(n)} + \frac{\alpha_n}{\sum_{i=1}^{M} a_{ij}} \sum_{i=1}^{M} a_{ij}\, \frac{b_i - \sum_{k=1}^{N} a_{ik} x_k^{(n)}}{\sum_{k=1}^{N} a_{ik}} \tag{28}$$
In SART, the diagonal matrix $V$ is defined by $v_{jj} = \sum_{i=1}^{M} a_{ij}$, and $W$ by $w_{ii} = 1\big/\sum_{j=1}^{N} a_{ij}$.
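As a check, one SART step written componentwise coincides with one step of the generic Landweber scheme when V holds the column sums of A and W the reciprocal row sums (the random A, B, X are illustrative only; A is kept positive so the weights are well defined):

```python
import numpy as np

# One componentwise SART step versus one generic Landweber step with the
# SART weights v_jj = sum_i a_ij and w_ii = 1 / sum_j a_ij.
rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(4, 3))
B = rng.uniform(size=4)
X = rng.uniform(size=3)
alpha = 0.5
col = A.sum(axis=0)          # column sums: sum_i a_ij
row = A.sum(axis=1)          # row sums:    sum_j a_ij
res = (B - A @ X) / row      # row-normalized residual
X_sart = X + alpha / col * (A * res[:, None]).sum(axis=0)  # componentwise
X_lw = X + alpha * (A.T @ res) / col                       # generic scheme
assert np.allclose(X_sart, X_lw)
```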

5. Experimental results

In this section, the FBP method and the Landweber iterative scheme were validated on actually captured focal stack data. The focal stack was captured by a Point Grey camera (model: GS3-U3-60S6M-C) and a Myutron prime lens (model: HF5018V) with an exposure time of 10 ms and an F-number of f/1.8. The captured focal stack consists of 13 images (see Visualization 1). The angular resolution of the reconstructed light field is 9 × 9. The central view of the reconstructed light field and close-ups are given below (Figs. 2 and 3). Furthermore, computational imaging results for the depth (Fig. 4) and the scene surface (Fig. 5) are computed from the reconstructed light field data.

Fig. 2 Central view of the light field reconstructed by the FBP method, with the rectangle filter and the sinc filter respectively. The reconstructed light field data are animated in Visualization 2 (rectangle filter) and Visualization 3 (sinc filter). The result with the sinc filter achieves higher accuracy.

Fig. 3 Central view of the light field reconstructed by the Landweber iterative scheme. The SART iterative method was adopted with a relaxation factor of 0.5. The reconstruction results after 2 and 8 iterations are shown; the reconstructed light field data are animated in Visualization 4 (2 iterations) and Visualization 5 (8 iterations). The SART reconstruction improves as the number of iterations increases.

Fig. 4 Depth reconstructed from the recovered light field. The scene depth map was reconstructed from the light field data by the iterative method proposed in [38].

Fig. 5 Scene surface reconstructed from the recovered light field. The scene surface was reconstructed from the central view image of the light field and the depth map of the scene (see Visualization 6).

Experimental results show that the reconstruction with the rectangular filter function has a low signal-to-noise ratio, while the sinc filter function can be used to reconstruct high-precision light field data. As the number of iterations increases, the accuracy of the SART reconstruction of the light field improves accordingly. The angular resolution can be set arbitrarily to satisfy practical needs. In this experiment, we presented depth reconstruction and scene surface reconstruction results from the light field data. Other computational imaging and display techniques can also be realized from the light field data, such as viewpoint switching, extended depth of field, 3D point cloud reconstruction and stereo display.

6. Discussion

Due to the inherent similarity of the imaging model, the methods of the CT image reconstruction can be applied to the light field reconstruction. From the perspective of the CT image reconstruction, the high-precision reconstruction algorithms for the light field reconstruction from the focal stack can be achieved.

Newly developed CT techniques are of referential significance for the light field reconstruction problem, but they are not necessarily applicable in theory. For instance, the 3D reconstruction algorithms of CT, such as the Pi-line algorithm [39] and the M-line algorithm [40] of spiral CT and the FDK algorithm [41] of cone beam CT, cannot be derived directly from the projection slice theorem, because the scanning model of 3D imaging differs from that of 2D imaging. How to apply CT image reconstruction algorithms to light field reconstruction, and how to construct applicable algorithms, are issues that require further study. For example, an analytic inversion of the focused imaging operator is expected to be derived to form a spatial-domain reconstruction algorithm. Moreover, since the discrete Fourier transforms of the images form a polar grid, the frequency domain of the light field could be reconstructed by interpolating from the polar grid onto Cartesian grid points.

We derived the analytical and iterative methods of solving the inverse problem of light field reconstruction from the focal stack. The analytical method we derived is the FBP algorithm, which can be implemented directly by discrete accumulation according to the reconstruction formula. The iterative method we derived is the Landweber iterative scheme. As a general iterative scheme, it entails varied special cases: different iteration algorithms correspond to different V-norms and W-norms, constraining the light field data space and the focal stack data space respectively. Different from the Landweber iterative scheme, the filter-based iterative method proposed by X. Yin et al. [26] is based on a modified normal equation. The method proposed by A. Mousnier et al. [25] first calculates the depth map and the all-in-focus image from the focal stack, and then partially reconstructs the light field in the epipolar images using them. The methods proposed in our paper reconstruct the light field directly from the focal stack, independently of the depth map and the all-in-focus image.

As a means of reconstructing the light field, the focal stack can capture a real scene flexibly without inserting new optical elements, and high-precision light field data can be reconstructed at arbitrary angular resolutions. However, there are two shortcomings. First, during acquisition of the focal stack data, identical exposure parameters must be kept to record the focal stack exactly. Second, the reconstruction process is computationally complex, which increases the time needed to solve the inverse problem.

7. Summary

In this study, we modeled the projection relationship between the 4D light field data space and the focal stack space. Light field reconstruction from the focal stack was formulated as an inverse problem solved by backprojection and iterative methods. The FBP and CBP methods were derived based on the projection slice theorem. Moreover, the forward process was discretized into linear equations, and the inverse problem of light field reconstruction was solved by adopting the Landweber scheme. The precision of the reconstructed light field can be improved by optimizing the filter function and the iterative scheme. The projection modeling has referential and enlightening implications for the theories and applications of light field imaging.

The methods proposed in this study are based on the projection model and the inverse problem, which differ from those conventional ones based on the optical model. There is space for further research on the parameter adjustments of both the Landweber iterative scheme and the FBP algorithm. The characteristics of the scene can be integrated, such as the scene irradiance information, the geometric information and the depth range, to form the specific light field reconstruction algorithms. In the light of the characteristics of the scene, a flexible selection of the V-norm and the W-norm can be conducive to forming the specific acquisition model and the iterative algorithm. In keeping with the Fourier domain characteristics of the light field, the specific filter might be designed to form a more effective FBP algorithm.

Acknowledgment

This study is jointly supported by the National Basic Research Program of China (973 Program) (2015CB351803), National Natural Science Foundation of China (61372150, 61421062, 61520106004), and Sino-German Center (GZ 1025).

References and links

1. A. Gershun, “The light field,” J. Math. Phys. 18(1–4), 51–151 (1939).

2. E. H. Adelson and J. R. Bergen, “The plenoptic function and the elements of early vision,” in Computational Models of Visual Processing (MIT Press, 1991), pp. 3–20.

3. M. Levoy and P. Hanrahan, “Light field rendering,” in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’96 (ACM, 1996), pp. 31–42.

4. S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen, “The lumigraph,” in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’96 (ACM, 1996), pp. 43–54.

5. C. Zhou and S. K. Nayar, “Computational cameras: convergence of optics and processing,” IEEE Trans. Image Process. 20(12), 3322–3340 (2011).

6. F. P. Nava, J. G. Marichal-Hernandez, and J. M. Rodriguez-Ramos, “The discrete focal stack transform,” in 16th European Signal Processing Conference (2008), pp. 1–5.

7. M. Klug, T. Burnett, A. Fancello, A. Heath, K. Gardner, S. O’Connell, and C. Newswanger, “A scalable, collaborative, interactive light-field display system,” SID Symposium Digest of Technical Papers 44(1), 412–415 (2013).

8. H. Nagahara, S. Kuthirummal, C. Zhou, and S. K. Nayar, “Flexible depth of field photography,” in Proceedings of the 10th European Conference on Computer Vision (2008), pp. 60–73.

9. J. P. Luke, F. Rosa, J. G. Marichal-Hernández, J. C. Sanluís, C. D. Conde, and J. M. Rodríguez-Ramos, “Depth from light fields analyzing 4D local structure,” J. Disp. Technol. 11(11), 900–907 (2015).

10. C. Kim, H. Zimmer, Y. Pritch, A. Sorkine-Hornung, and M. Gross, “Scene reconstruction from high spatio-angular resolution light fields,” ACM Trans. Graph. 32(4), 73 (2013).

11. G. Wetzstein, D. Lanman, M. Hirsch, and R. Raskar, “Tensor displays: compressive light field synthesis using multilayer displays with directional backlighting,” ACM Trans. Graph. 31(4), 80 (2012).

12. D. Lanman, G. Wetzstein, M. Hirsch, W. Heidrich, and R. Raskar, “Polarization fields: dynamic light field display using multi-layer LCDs,” ACM Trans. Graph. 30(6), 186 (2011).

13. B. Wilburn, N. Joshi, V. Vaish, E. V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24(3), 765–776 (2005).

14. B. Wilburn, N. Joshi, V. Vaish, M. Levoy, and M. Horowitz, “High-speed videography using a dense camera array,” in Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2004), pp. 294–301.

15. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Tech. Rep. 11 (2005).

16. C. Perwass and L. Wietzke, “Single lens 3D-camera with extended depth-of-field,” Proc. SPIE 8291, 829108 (2012).

17. C. K. Liang, T. H. Lin, B. Y. Wong, C. Liu, and H. H. Chen, “Programmable aperture photography: multiplexed light field acquisition,” ACM Trans. Graph. 27(3), 55 (2008).

18. H. Nagahara, C. Zhou, T. Watanabe, H. Ishiguro, and S. K. Nayar, “Programmable aperture camera using LCoS,” in ECCV (6) (2010), pp. 337–350.

19. A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26(3), 1276463 (2007).

20. Z. Xu, J. Ke, and E. Y. Lam, “High-resolution lightfield photography using two masks,” Opt. Express 20(10), 10971–10983 (2012).

21. K. Marwah, G. Wetzstein, Y. Bando, and R. Raskar, “Compressive light field photography using overcomplete dictionaries and optimized projections,” ACM Trans. Graph. 32(4), 46 (2013).

22. J. R. Alonso, A. Fernández, and J. A. Ferrari, “Reconstruction of perspective shifts and refocusing of a three-dimensional scene from a multi-focus image stack,” Appl. Opt. 55(9), 2380–2386 (2016).

23. A. Levin and F. Durand, “Linear view synthesis using a dimensionality gap light field prior,” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2010), pp. 1831–1838.

24. L. McMillan and G. Bishop, “Plenoptic modeling: an image-based rendering system,” in Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques (1995), pp. 39–46.

25. A. Mousnier, E. Vural, and C. Guillemot, “Partial light field tomographic reconstruction from a fixed-camera focal stack,” arXiv:1503.01903 (2015).

26. X. Yin, G. Wang, W. Li, and Q. Liao, “Iteratively reconstructing 4D light fields from focal stacks,” Appl. Opt. 55(30), 8457–8463 (2016).

27. G. T. Herman, Fundamentals of Computerized Tomography: Image Reconstruction from Projections (Springer Science and Business Media, 2009).

28. F. Natterer and F. Wubbeling, Mathematical Methods in Image Reconstruction (Society for Industrial and Applied Mathematics, 2001).

29. L. Landweber, “An iteration formula for Fredholm integral equations of the first kind,” Am. J. Math. 73, 615–624 (1951).

30. M. Jiang and G. Wang, “Convergence studies on iterative algorithms for image reconstruction,” IEEE Trans. Medical Imaging 22(5), 569–579 (2003).

31. J. Qiu and M. Xu, “A method of symmetric block-iterative for image reconstruction,” J. Electron. Inf. Technol. 29(10), 2296–2300 (2007).

32. R. Ng, “Fourier slice photography,” ACM Trans. Graph. 24(3), 735–744 (2005).

33. E. Y. Lam, “Computational photography with plenoptic camera and light field capture: tutorial,” J. Opt. Soc. Am. A 32(11), 2021–2032 (2015).

34. D. Dansereau and L. T. Bruton, “A 4D dual-fan filter bank for depth filtering in light fields,” IEEE Trans. Signal Process. 55(2), 542–549 (2007).

35. R. N. Bracewell, “Strip integration in radio astronomy,” Australian J. Phys. 9(2), 198–217 (1956).

36. X. Pan, E. Y. Sidky, and M. Vannier, “Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?” Inverse Problems 25(12), 123009 (2009).

37. A. H. Andersen and A. C. Kak, “Simultaneous Algebraic Reconstruction Technique (SART): a superior implementation of the ART algorithm,” Ultrasonic Imaging 6(1), 81–94 (1984).

38. C. Liu, J. Qiu, and S. Zhao, “Iterative reconstruction of scene depth with fidelity based on light field data,” Appl. Opt. 56(11), 3185–3192 (2017).

39. Y. Zou and X. Pan, “Image reconstruction on PI-lines by use of filtered backprojection in helical cone-beam CT,” Phys. Med. Biol. 49(12), 2717 (2004).

40. J. D. Pack and F. Noo, “Cone-beam reconstruction using 1D filtering along the projection of M-lines,” Inverse Problems 21(3), 1105 (2005).

41. M. Grass, T. Köhler, and R. Proksa, “3D cone-beam CT reconstruction for circular trajectories,” Phys. Med. Biol. 45(2), 329 (2000).

Supplementary Material (6)

Visualization 1 (MP4, 673 KB): the focal stack data we captured
Visualization 2 (MP4, 3415 KB): the light field reconstructed by the FBP algorithm with the rectangle filter
Visualization 3 (MP4, 323 KB): the light field reconstructed by the FBP algorithm with the sinc filter
Visualization 4 (MP4, 205 KB): the light field reconstructed by the SART algorithm with 2 iterations
Visualization 5 (MP4, 374 KB): the light field reconstructed by the SART algorithm with 8 iterations
Visualization 6 (MP4, 5187 KB): the reconstructed scene surface via the reconstructed light field



Figures (5)

Fig. 1 Focus imaging based on the two-plane parameterization of the object-side light field.
Fig. 2 Central view of the light field reconstructed by the FBP method, with the rectangle filter and the sinc filter employed respectively. The reconstructed light field data are animated in Visualization 2 (rectangle filter) and Visualization 3 (sinc filter). The sinc filter achieved higher reconstruction accuracy.
Fig. 3 Central view of the light field reconstructed by the Landweber iterative scheme, in which the SART iterative method was adopted with a relaxation factor of 0.5. Results with 2 and 8 iterations are shown; the reconstructed light field data are animated in Visualization 4 (2 iterations) and Visualization 5 (8 iterations). The reconstruction quality of the SART algorithm improved as the number of iterations increased.
Fig. 4 Depth reconstructed via the reconstructed light field. The scene depth map was reconstructed from the light field data by the iterative method proposed in [38].
Fig. 5 Scene surface reconstructed via the reconstructed light field. The scene surface was reconstructed from the central view image of the light field and the depth map of the scene (see Visualization 6).

Equations (28)


(1) $L(x, y, u, v) = L_s(x_s, y_s, u, v)$

(2) $\begin{pmatrix} x_s \\ u \end{pmatrix} = \begin{pmatrix} \frac{s}{s_0} & 1-\frac{s}{s_0} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ u \end{pmatrix}, \qquad \begin{pmatrix} y_s \\ v \end{pmatrix} = \begin{pmatrix} \frac{s}{s_0} & 1-\frac{s}{s_0} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} y \\ v \end{pmatrix}$

(3) $E(s, x_s, y_s) = P[L(x, y, u, v)] = \iiiint L(x, y, u, v)\, \delta\!\left(\frac{s}{s_0}x + \left(1-\frac{s}{s_0}\right)u - x_s,\ \frac{s}{s_0}y + \left(1-\frac{s}{s_0}\right)v - y_s\right) du\, dv\, dx\, dy$

(4) $E(s, x_s, y_s) = \iiiint L(x, y, u, v)\, \delta\!\left(\frac{s}{s_0}x + \left(1-\frac{s}{s_0}\right)u - x_s,\ \frac{s}{s_0}y + \left(1-\frac{s}{s_0}\right)v - y_s\right) du\, dv\, dx\, dy$
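The projection model above (a focal-stack image as a shear-and-integrate projection of the light field over the aperture coordinates) can be sketched in a minimal 2D $(x, u)$ analogue. The toy light field, grid sizes, and nearest-neighbour sampling are assumptions for illustration, not the paper's implementation:

```python
# Minimal 2D (x, u) analogue of the focal-stack projection model:
# E(s, x_s) = sum over u of L(x, u), with x_s = (s/s0)*x + (1 - s/s0)*u.

def project(L, s, s0):
    """Shear-and-integrate a discrete light field L[x][u] into a 1D image at plane s."""
    nx, nu = len(L), len(L[0])
    E = [0.0] * nx
    for xs in range(nx):
        for u in range(nu):
            # invert x_s = (s/s0)*x + (1 - s/s0)*u for the spatial sample x
            x = (xs - (1 - s / s0) * u) / (s / s0)
            xi = round(x)
            if 0 <= xi < nx:  # nearest-neighbour sampling; rays leaving the grid are dropped
                E[xs] += L[xi][u]
    return E

# A 4x3 toy light field: a point source at x = 2, visible from every direction u.
L = [[0.0] * 3 for _ in range(4)]
for u in range(3):
    L[2][u] = 1.0

E_focused = project(L, s=1.0, s0=1.0)    # focusing on the reference plane: point stays sharp
E_defocused = project(L, s=0.5, s0=1.0)  # other planes: the point's energy spreads out
print(E_focused)    # [0.0, 0.0, 3.0, 0.0]
print(E_defocused)  # [0.0, 1.0, 1.0, 0.0]
```

Refocusing at the reference plane ($s = s_0$) reduces the shear to the identity, so the projection simply integrates over the aperture; for other $s$ the sheared integration blurs off-plane points, which is exactly the structure the inverse problem exploits.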
(5) $\delta\!\left(\frac{s}{s_0}x + \left(1-\frac{s}{s_0}\right)u - x_s,\ \frac{s}{s_0}y + \left(1-\frac{s}{s_0}\right)v - y_s\right) = \iint \exp\!\left(-2\pi i\left(\left(\frac{s}{s_0}x + \left(1-\frac{s}{s_0}\right)u - x_s\right)\omega_1 + \left(\frac{s}{s_0}y + \left(1-\frac{s}{s_0}\right)v - y_s\right)\omega_2\right)\right) d\omega_1\, d\omega_2$

(6) $E(s, x_s, y_s) = \iint \left[\iiiint L(x, y, u, v) \exp\!\left(-2\pi i\left(\left(\frac{s}{s_0}x + \left(1-\frac{s}{s_0}\right)u\right)\omega_1 + \left(\frac{s}{s_0}y + \left(1-\frac{s}{s_0}\right)v\right)\omega_2\right)\right) du\, dv\, dx\, dy\right] \exp\!\left(2\pi i (x_s\omega_1 + y_s\omega_2)\right) d\omega_1\, d\omega_2$

(7) $\iiiint L(x, y, u, v) \exp\!\left(-2\pi i\left(\left(\frac{s}{s_0}x + \left(1-\frac{s}{s_0}\right)u\right)\omega_1 + \left(\frac{s}{s_0}y + \left(1-\frac{s}{s_0}\right)v\right)\omega_2\right)\right) du\, dv\, dx\, dy = \mathcal{F}[L]\!\left(\frac{s}{s_0}\omega_1,\ \frac{s}{s_0}\omega_2,\ \left(1-\frac{s}{s_0}\right)\omega_1,\ \left(1-\frac{s}{s_0}\right)\omega_2\right)$

(8) $E(s, x_s, y_s) = \iint \mathcal{F}[L]\!\left(\frac{s}{s_0}\omega_1,\ \frac{s}{s_0}\omega_2,\ \left(1-\frac{s}{s_0}\right)\omega_1,\ \left(1-\frac{s}{s_0}\right)\omega_2\right) \exp\!\left(2\pi i (x_s\omega_1 + y_s\omega_2)\right) d\omega_1\, d\omega_2$

(9) $\mathcal{F}[E(s, x_s, y_s)](\omega_1, \omega_2) = \mathcal{F}[L]\!\left(\frac{s}{s_0}\omega_1,\ \frac{s}{s_0}\omega_2,\ \left(1-\frac{s}{s_0}\right)\omega_1,\ \left(1-\frac{s}{s_0}\right)\omega_2\right)$

(10) $\omega_x = \frac{s}{s_0}\omega_1,\quad \omega_y = \frac{s}{s_0}\omega_2,\quad \omega_u = \left(1-\frac{s}{s_0}\right)\omega_1,\quad \omega_v = \left(1-\frac{s}{s_0}\right)\omega_2$

(11) $d\omega_u\, d\omega_x = |J_1|\, d\omega_1\, ds,\quad d\omega_v\, d\omega_y = |J_2|\, d\omega_2\, ds$

(12) $J_1 = \begin{vmatrix} \frac{\partial \omega_u}{\partial \omega_1} & \frac{\partial \omega_u}{\partial s} \\ \frac{\partial \omega_x}{\partial \omega_1} & \frac{\partial \omega_x}{\partial s} \end{vmatrix} = \frac{1}{s_0}|\omega_1|,\quad J_2 = \begin{vmatrix} \frac{\partial \omega_v}{\partial \omega_2} & \frac{\partial \omega_v}{\partial s} \\ \frac{\partial \omega_y}{\partial \omega_2} & \frac{\partial \omega_y}{\partial s} \end{vmatrix} = \frac{1}{s_0}|\omega_2|$

(13) $d\omega_u\, d\omega_x = \frac{1}{s_0}|\omega_1|\, d\omega_1\, ds,\quad d\omega_v\, d\omega_y = \frac{1}{s_0}|\omega_2|\, d\omega_2\, ds$

(14) $(\omega_x, \omega_y, \omega_u, \omega_v) = \left(\frac{s}{s_0}\omega_1,\ \frac{s}{s_0}\omega_2,\ \left(1-\frac{s}{s_0}\right)\omega_1,\ \left(1-\frac{s}{s_0}\right)\omega_2\right)$

(15) $L(x, y, u, v) = \iiiint \mathcal{F}[L](\omega_x, \omega_y, \omega_u, \omega_v)\, e^{2\pi i (x\omega_x + y\omega_y + u\omega_u + v\omega_v)}\, d\omega_x\, d\omega_y\, d\omega_u\, d\omega_v = \iiiint \mathcal{F}[L]\!\left(\frac{s}{s_0}\omega_1,\ \frac{s}{s_0}\omega_2,\ \left(1-\frac{s}{s_0}\right)\omega_1,\ \left(1-\frac{s}{s_0}\right)\omega_2\right) \exp\!\left(2\pi i\left(x\frac{s}{s_0}\omega_1 + y\frac{s}{s_0}\omega_2 + u\left(1-\frac{s}{s_0}\right)\omega_1 + v\left(1-\frac{s}{s_0}\right)\omega_2\right)\right) d\omega_x\, d\omega_y\, d\omega_u\, d\omega_v$

(16) $L(x, y, u, v) = \iiint \mathcal{F}[L]\!\left(\frac{s}{s_0}\omega_1,\ \frac{s}{s_0}\omega_2,\ \left(1-\frac{s}{s_0}\right)\omega_1,\ \left(1-\frac{s}{s_0}\right)\omega_2\right) \exp\!\left(2\pi i\left(x\frac{s}{s_0}\omega_1 + y\frac{s}{s_0}\omega_2 + u\left(1-\frac{s}{s_0}\right)\omega_1 + v\left(1-\frac{s}{s_0}\right)\omega_2\right)\right) \left(\frac{1}{s_0}\right)^2 |\omega_1||\omega_2|\, d\omega_1\, d\omega_2\, ds$

(17) $L(x, y, u, v) = \iiint \mathcal{F}[E(s, x_s, y_s)](\omega_1, \omega_2) \exp\!\left(2\pi i\left(x\frac{s}{s_0}\omega_1 + y\frac{s}{s_0}\omega_2 + u\left(1-\frac{s}{s_0}\right)\omega_1 + v\left(1-\frac{s}{s_0}\right)\omega_2\right)\right) \left(\frac{1}{s_0}\right)^2 |\omega_1||\omega_2|\, d\omega_1\, d\omega_2\, ds$

(18) $L(x, y, u, v) = \iiint \mathcal{F}[E(s, x_s, y_s)](\omega_1, \omega_2) \exp\!\left(2\pi i (x_s\omega_1 + y_s\omega_2)\right) \left(\frac{1}{s_0}\right)^2 |\omega_1||\omega_2|\, d\omega_1\, d\omega_2\, ds = \left(\frac{1}{s_0}\right)^2 \int \left[\iint \mathcal{F}[E(s, x_s, y_s)](\omega_1, \omega_2)\, |\omega_1||\omega_2| \exp\!\left(2\pi i (x_s\omega_1 + y_s\omega_2)\right) d\omega_1\, d\omega_2\right] ds$

(19) $L(x, y, u, v) = \left(\frac{1}{s_0}\right)^2 \int \mathcal{F}^{-1}\!\left[\mathcal{F}[E(s, x_s, y_s)]\, |\omega_1||\omega_2|\right] ds$

(20) $L(x, y, u, v) = \left(\frac{1}{s_0}\right)^2 \int E(s, x_s, y_s) * \mathcal{F}^{-1}\!\left(|\omega_1||\omega_2|\right) ds$
(21) $\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} \left(|\omega_1||\omega_2|\right)^2 d\omega_1\, d\omega_2 = \infty$

(22) $H_{\mathrm{rect}}(\omega_1, \omega_2) = |\omega_1||\omega_2|\, \mathrm{Rect}(\omega_1, \omega_2)$

(23) $H_{\mathrm{sinc}}(\omega_1, \omega_2) = |\omega_1||\omega_2|\, \mathrm{sinc}(\omega_1, \omega_2)\, \mathrm{Rect}(\omega_1, \omega_2)$
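Because the ramp filter $|\omega_1||\omega_2|$ is not square-integrable, it is band-limited by a rectangular window, optionally tapered by a sinc window. A minimal sketch of the two windowed filters, with an assumed cut-off $W$ and the normalized sinc convention $\mathrm{sinc}(t) = \sin(\pi t)/(\pi t)$ (both are illustrative assumptions, not values from the paper):

```python
import math

W = 1.0  # assumed cut-off frequency of the rectangular window

def rect_window(w1, w2):
    """Hard 2D cut-off: 1 inside the band, 0 outside."""
    return 1.0 if abs(w1) <= W and abs(w2) <= W else 0.0

def H_rect(w1, w2):
    """Ramp filter |w1||w2| with a rectangular cut-off."""
    return abs(w1) * abs(w2) * rect_window(w1, w2)

def sinc(t):
    """Normalized sinc (an assumed convention for this sketch)."""
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def H_sinc(w1, w2):
    """Ramp filter additionally tapered by a separable sinc inside the same cut-off."""
    return abs(w1) * abs(w2) * sinc(w1) * sinc(w2) * rect_window(w1, w2)

print(H_rect(0.5, 0.5))   # 0.25
print(H_sinc(0.5, 0.5))   # 0.25 * sinc(0.5)^2, about 0.101
print(H_rect(2.0, 0.5))   # 0.0, outside the cut-off
```

The sinc taper attenuates the filter near the band edge, which is consistent with the smoother reconstructions reported for the sinc filter in Fig. 2.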
(24) $L^{(n+1)}(x, y, u, v) = L^{(n)}(x, y, u, v) + \alpha_n \int P^*\!\left(E(s, x_s, y_s) - P\!\left[L^{(n)}(x, y, u, v)\right]\right) ds$

(25) $AX = B$

(26) $X^* = \arg\min\left\{\frac{1}{2}\left\|B - AX\right\|_{W,V}^2\right\}$

(27) $X^{(n+1)} = X^{(n)} + \alpha_n V^{-1} A^{T} W \left(B - A X^{(n)}\right)$

(28) $x_j^{(n+1)} = x_j^{(n)} + \frac{\alpha_n}{\sum_{i=1}^{M} a_{ij}} \sum_{i=1}^{M} a_{ij}\, \frac{b_i - \sum_{j=1}^{N} a_{ij} x_j^{(n)}}{\sum_{j=1}^{N} a_{ij}}$
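The SART update above corrects each unknown by ray residuals, normalized per ray by the row sum and per unknown by the column sum. A minimal pure-Python sketch on a toy system (the 2x2 matrix, the relaxation factor, and the iteration count are illustrative assumptions; the paper's relaxation factor of 0.5 is reused):

```python
# One SART sweep over a discrete system A x = b.

def sart_step(A, b, x, alpha=0.5):
    M, N = len(A), len(A[0])
    x_new = list(x)
    for j in range(N):
        col_sum = sum(A[i][j] for i in range(M))  # per-unknown normalization
        if col_sum == 0:
            continue
        corr = 0.0
        for i in range(M):
            row_sum = sum(A[i])                   # per-ray normalization
            residual = b[i] - sum(A[i][k] * x[k] for k in range(N))
            corr += A[i][j] * residual / row_sum
        x_new[j] = x[j] + alpha * corr / col_sum  # simultaneous update from old x
    return x_new

# Toy consistent system with exact solution x = (1, 2).
A = [[1.0, 1.0],
     [1.0, 2.0]]
b = [3.0, 5.0]

x = [0.0, 0.0]
for _ in range(1000):
    x = sart_step(A, b, x)
print(x)  # converges towards [1.0, 2.0]
```

All components are updated simultaneously from the previous iterate, which is the distinction between SART and the sequential ART sweep; in the paper's setting $A$ is the discretized projection operator $P$ and $b$ the focal stack.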