## Abstract

A novel technique for depth filtering in integral imaging is proposed. Integral imaging captures the spatio-angular distribution of light rays, which delivers three-dimensional information about the object scene. The proposed method performs a filtering operation in the frequency domain of the captured spatio-angular light ray distribution, achieving depth selective reconstruction. Grating projection further enhances the depth discrimination performance. The principle is verified experimentally.

©2011 Optical Society of America

## 1. Introduction

Integral imaging has been considered one of the most prominent autostereoscopic three-dimensional (3D) display techniques [1–12]. Besides its benefits as a 3D display technique, integral imaging is also a versatile technology for 3D capture and processing [13–16]. Unlike usual two-dimensional (2D) imaging, which records only the spatial distribution of light rays, integral imaging captures their spatio-angular distribution so that 3D information can be acquired [13,14]. Various techniques for processing the captured spatio-angular distribution of light rays have been reported, including 3D mesh model reconstruction [15,17], arbitrary view reconstruction [18], hologram synthesis [19–21], and depth plane reconstruction [22–24].

Depth plane reconstruction in integral imaging is usually called computational integral imaging reconstruction (CIIR). By collecting and averaging the light rays corresponding to each point in a depth plane, an image refocused on that plane is obtained [13,22]. Repeating this process for successive depth planes gives a stack of refocused images, so that the 3D structure of the object scene can be understood visually. Since CIIR is performed by simple pixel averaging without significant image processing, it is efficient and free from image processing errors. However, CIIR has its limitations as well. One significant limitation is that the reconstructed image of CIIR is simply an image refocused on a certain depth plane, without any further depth processing. The reconstructed image consists of focused object points accompanied by object points blurred according to the difference between their actual depths and the reconstruction depth. Additional processing such as depth sectioning or multi-plane refocusing is not possible in conventional CIIR.

A few techniques have been reported to give additional functionality to CIIR. K.-J. Lee et al. applied a focus filter to extract the focused part of the object in an effort to realize tomographic imaging [25]. The focus filter, however, relies on the spatial variation in an image window and is generally prone to error. G. Baasantseren et al. used random pattern illumination to suppress the blurred parts [26]. However, apart from reducing the effective depth of focus, this approach does not provide other depth filtering functionality.

In this paper, we propose a method to perform depth filtering on the spatio-angular light ray distribution captured by integral imaging. Using the depth dependency of the frequency distribution of the light rays [27], the proposed method enables various depth filtering operations, including depth selection and multi-plane refocusing. It is also possible to reconstruct various views from the depth filtered data, which was not possible in conventional CIIR. In addition to the basic method, we also propose a method using grating projection in order to enhance the depth discrimination. In the following, we explain the principle and present experimental results for verification.

## 2. Theory

#### 2.1. Depth filtering using frequency distribution of light rays

Figure 1 shows the geometry of integral imaging capture. A 3D scene is captured through an array of identical small lenses. Since each lens captures a different perspective of the 3D scene, the resultant image is a set of perspectives, each of which is called an elemental image. Assuming a pinhole lens model and ray optics, each elemental image can be regarded as a representation of the angular ray distribution at the principal point of the corresponding lens. With an array of lenses, the captured set of elemental images contains the spatio-angular distribution of the light rays in the lens array plane.

In order to investigate the frequency characteristics of the captured spatio-angular light ray distribution, let us suppose a plane object at a specific distance *z* from the lens array as shown in Fig. 2. For a non-specular plane object *f*, the light ray *l* at a position *x* and an angle *θ* in the lens array plane is given under the paraxial approximation by

$$l(x,\theta )=f(x+\theta z), \qquad (1)$$

where *θ* is measured in radians. By Fourier-transforming Eq. (1), the spatio-angular frequency distribution of the captured light rays *L* is given by

$$L({f}_{x},{f}_{\theta })=F({f}_{x})\delta ({f}_{\theta }-z{f}_{x}), \qquad (2)$$

where *F* is the Fourier transform of the plane object *f*, and *f _{x}* and *f _{θ}* are the spatial and angular frequencies measured in cycles/m and cycles/rad, respectively. Here the Fourier transform is performed only over *x* and *θ*, not over the object depth *z*. Equation (2) reveals that a plane object at a distance *z* is represented by a single line in the frequency domain representation, with a slanting angle proportional to the distance *z*, as shown in Fig. 2.
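The line structure described by Eqs. (1) and (2) can be reproduced numerically. The following sketch is our own illustration (the cosine test object, grid sizes, and depth value are assumptions, not from the paper): it builds *l*(*x*, *θ*) = *f*(*x* + *θz*) for a single depth and checks that the dominant spectral peak lies on the line *f _{θ}* = *zf _{x}*.

```python
import numpy as np

# A plane object at depth z seen through integral imaging gives the ray
# distribution l(x, theta) = f(x + theta*z) (Eq. 1), whose 2D spectrum is
# concentrated on the line f_theta = z*f_x (Eq. 2).  Grid sizes and the
# cosine test object below are illustrative assumptions.
z = 0.05                    # object depth [m]
f0 = 200.0                  # object spatial frequency [cycles/m]
x = np.linspace(-0.05, 0.05, 256, endpoint=False)     # position [m]
theta = np.linspace(-0.2, 0.2, 256, endpoint=False)   # ray angle [rad]
X, TH = np.meshgrid(x, theta, indexing='ij')

l = np.cos(2*np.pi*f0*(X + TH*z))                     # captured ray distribution

L = np.fft.fftshift(np.fft.fft2(l))
fx = np.fft.fftshift(np.fft.fftfreq(x.size, d=x[1] - x[0]))            # cycles/m
ft = np.fft.fftshift(np.fft.fftfreq(theta.size, d=theta[1] - theta[0]))  # cycles/rad

# locate the strongest spectral peak and verify it lies on f_theta = z*f_x
i, j = np.unravel_index(np.argmax(np.abs(L)), L.shape)
slope = ft[j] / fx[i]
print(abs(slope - z) < 0.1*abs(z))   # True: peak slope matches the depth
```

The recovered slope equals the object depth, which is the depth selectivity that the filtering of this section exploits.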

Note that this can also be explained using the Fourier slice theorem. As can be observed in Fig. 2, the projection in the spatial domain onto the line *θ* = *zx* gives the plane object function *f*. According to the Fourier slice theorem, the corresponding slice *f _{θ}* = *zf _{x}* in the frequency domain represents the Fourier transform of the object function *f*. The other projections in the spatial domain give a constant function, assuming the extent of the spatio-angular distribution is sufficiently large in the spatial domain, and thus their corresponding slices give delta functions located at the origin. Consequently, the plane object is represented as a line *f _{θ}* = *zf _{x}* in the frequency domain. Also note that in the case of specular reflection, the frequency spectrum of a plane object cannot be represented as a single line due to the limited angular extent of the rays. Hence, the discussion in this paper is valid only for objects with diffusive surfaces.

For a volume object with an extended depth range, the frequency representation of the captured light rays becomes an area comprising the collection of those slanted lines.

Based on this characteristic of the captured light rays, the proposed depth filtering is performed as shown in Fig. 3. The light ray distribution of a 3D scene is captured using a lens array following the integral imaging principle. The captured light ray distribution is Fourier transformed to yield the spatio-angular frequency domain representation. A desired filtering operation is performed on this frequency domain representation. As an example, depth pass filtering is illustrated in Fig. 3. Finally, an inverse Fourier transform gives the depth filtered light ray distribution. Any previously reported technique to visualize the 3D information embedded in the elemental images can be additionally applied to this depth filtered light ray distribution. Note that for color objects, the proposed method is performed for each color channel, i.e. red, green, and blue, independently. The filtered color channels are then merged to generate the color output. In the following, frequency spectra are plotted only for the red channel, while spatial domain representations are shown using all color channels for visibility.
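The pipeline above can be sketched in a few lines for the simplified 2D case of *l*(*x*, *θ*). The helper name `depth_pass`, the pass-band width, and the two-cosine test scene are our own assumptions, not the paper's implementation:

```python
import numpy as np

# Depth-pass filtering sketch: FFT the ray distribution, keep only the
# frequencies near the line f_theta = z*f_x of the selected depth, and
# inverse-FFT.  The scene mixes two cosine "plane objects" at depths z1, z2;
# the filter should retain only the z1 component.
def depth_pass(l, x, theta, z_sel, half_width):
    """Keep spectral energy with |f_theta - z_sel*f_x| < half_width [cycles/rad]."""
    L = np.fft.fft2(l)
    fx = np.fft.fftfreq(x.size, d=x[1] - x[0])[:, None]              # cycles/m
    ft = np.fft.fftfreq(theta.size, d=theta[1] - theta[0])[None, :]  # cycles/rad
    mask = np.abs(ft - z_sel*fx) < half_width
    return np.real(np.fft.ifft2(L*mask))

x = np.linspace(-0.05, 0.05, 256, endpoint=False)      # position [m]
theta = np.linspace(-0.2, 0.2, 256, endpoint=False)    # angle [rad]
X, TH = np.meshgrid(x, theta, indexing='ij')
z1, z2 = 0.05, -0.025                                  # two object depths [m]
l = np.cos(2*np.pi*200*(X + TH*z1)) + np.cos(2*np.pi*200*(X + TH*z2))

out = depth_pass(l, x, theta, z1, half_width=3.0)
err = np.max(np.abs(out - np.cos(2*np.pi*200*(X + TH*z1))))
print(err < 1e-6)   # True: the z1 object survives, the z2 object is removed
```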

In order to verify the principle, a simulation is performed for a 3D scene comprised of three plane objects as shown in Fig. 4. Note that the three plane objects are located in both the left and right fields of the lens array plane in order to emphasize the depth dependency of the frequency spectrum of the spatio-angular ray distribution. In a real implementation of the integral imaging capture system, this condition is satisfied when the objects are not captured directly but via their intermediate images, formed by an additional imaging lens and captured by the lens array. In other cases, the objects are usually located in the right field, i.e. positive *z*, of the lens array. The depth filtering result is shown in Fig. 5. Depth filtering is performed to select one or two objects out of the three plane objects using the proposed method, and the filtered light ray distribution is further processed to synthesize a view at a central position for visualization purposes, following the conventional method [13]. The depth range for filtering of each object is set to 10 mm around its actual depth.

In the second row, named ‘original’, in Fig. 5, it can be seen that the frequency spectrum *f _{x}*-*f _{θ}* plot and *f _{y}*-*f _{φ}* plot reveal three lines with different slanting angles, as expected. Filtering is performed to select specific lines in this frequency spectrum, as shown in the two columns ‘*f _{x}*-*f _{θ}* plot’ and ‘*f _{y}*-*f _{φ}* plot’. The last column in Fig. 5 shows the synthesized central views for the filtered data. Note that the selection of two objects at distant depths, as shown in the last three rows of Fig. 5, could not be done by conventional CIIR. From Fig. 5, it can be confirmed that the proposed depth filtering operation performs as expected.

In order to verify that conventional processing can be additionally applied to the filtered data, various view reconstruction is performed for the filtered data as an example. Figure 6 shows the results for the original data and the data filtered for the apple and banana objects, with two movies. From Fig. 6, it can be confirmed that the conventional view reconstruction algorithm works well with the filtered data.

#### 2.2. Depth discrimination enhancement using grating projection

The basic method proposed in the previous section enables depth selective filtering of the captured 3D scene. The depth discrimination ratio, however, is not very good. As shown in the last column of Fig. 5, the object in the unselected depth plane is not completely removed in the reconstruction, but remains as a blurry shape. This becomes especially severe when the depth separation between the objects is small. In this sub-section, we propose an additional method to enhance the depth discrimination ratio.

The low depth discrimination ratio is primarily due to the overlapping low frequency components of the depth planes. As shown in Fig. 7(a), the slanted lines corresponding to different depth planes intersect at the origin of the spatio-angular frequency domain. Note that for real 3D objects, most of the energy is concentrated in the low frequency range around the origin. Hence, any depth pass filtering is accompanied by significant energy from other depths, resulting in a low discrimination ratio.
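The overlap at the origin can be illustrated numerically. In this toy sketch (grid and objects are our own assumptions), the strongest spectral bin of a non-negative plane object is the DC bin regardless of its depth, so a pass band around any depth line unavoidably collects energy shared by all depths:

```python
import numpy as np

# For a non-negative intensity object, the strongest spectral component is DC,
# whatever the depth z.  Every depth line f_theta = z*f_x therefore shares its
# highest-energy point at the origin of the frequency plane.
x = np.linspace(-0.05, 0.05, 256, endpoint=False)
theta = np.linspace(-0.2, 0.2, 256, endpoint=False)
X, TH = np.meshgrid(x, theta, indexing='ij')

peaks_at_dc = []
for z in (0.02, 0.05):
    l = 1.0 + 0.5*np.cos(2*np.pi*200*(X + TH*z))   # non-negative plane object
    L = np.fft.fftshift(np.fft.fft2(l))
    i, j = np.unravel_index(np.argmax(np.abs(L)), L.shape)
    peaks_at_dc.append((i, j) == (128, 128))       # index (128, 128) is DC
print(all(peaks_at_dc))   # True: the DC bin dominates for every depth
```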

In order to enhance the discrimination ratio, we propose a method using grating projection, inspired by the standard phase retrieval procedures used in applications such as phase-shifting interferometry [28]. Figure 7(b) illustrates the proposed method. Instead of the usual white light illumination, four sinusoidal amplitude grating patterns ${G}_{\varphi}(x)=1+\mathrm{sin}\left(\pi x/{T}_{x}+\varphi \right)$ are projected onto the 3D scene sequentially with *ϕ* = 0°, 90°, 180°, and 270°. The four captured sets of spatio-angular ray distributions *l _{0°}*, *l _{90°}*, *l _{180°}*, and *l _{270°}* are processed following

$$A(x,\theta )=\frac{1}{4}\sqrt{{\left[{l}_{0°}(x,\theta )-{l}_{180°}(x,\theta )\right]}^{2}+{\left[{l}_{90°}(x,\theta )-{l}_{270°}(x,\theta )\right]}^{2}}, \qquad (3)$$

$$\psi (x,\theta )={\mathrm{tan}}^{-1}\left[\frac{{l}_{0°}(x,\theta )-{l}_{180°}(x,\theta )}{{l}_{90°}(x,\theta )-{l}_{270°}(x,\theta )}\right], \qquad (4)$$

to synthesize a complex-valued ray distribution *l*(*x*, *θ*) = *A*(*x*, *θ*)exp[*jψ*(*x*, *θ*)]. For simplicity, let us consider a plane object *f*(*x*) at a distance *z*. Note that by the grating projection, the object function *f*(*x*) becomes *f*(*x*)*G _{ϕ}*(*x*). Hence, from Eq. (1), the captured ray distribution with grating phase shift *ϕ* is *l _{ϕ}*(*x*, *θ*) = *f*(*x* + *θz*)*G _{ϕ}*(*x* + *θz*). Substituting this into Eqs. (3) and (4) shows that the complex-valued ray distribution *l*(*x*, *θ*) is given by

$$l(x,\theta )=\frac{1}{2}f(x+\theta z)\mathrm{exp}\left[j\pi (x+\theta z)/{T}_{x}\right]. \qquad (5)$$

Equation (5) indicates that the complex-valued ray distribution synthesized by Eqs. (3) and (4) is equivalent to the ray distribution of a complex-valued 3D scene *f _{c}* given by

$${f}_{c}(x)=\frac{1}{2}f(x)\mathrm{exp}\left(j\pi x/{T}_{x}\right), \qquad (6)$$

where *T _{x}* is the half period of the projected grating pattern along the *x* axis. Therefore, in the frequency domain representation of the complex-valued ray distribution *l*(*x*, *θ*), the spectrum of each depth plane is shifted by 1/(2*T _{x}*) along the *f _{x}* axis, as shown in Fig. 7(b). The slanted lines corresponding to different depth planes still intersect at the origin, but the low frequency part, where most of the energy is concentrated, is now moved to *f _{x}* = 1/(2*T _{x}*) and is well separated without overlapping. Therefore, the depth discrimination ratio can be enhanced in the reconstruction.

Note that since the effect of the grating projection is limited to the phase of the 3D scene, as revealed by Eq. (6), and what we usually care about in the reconstruction is the amplitude distribution in the object space, no additional processing is required to compensate for the added phase term in the reconstruction. Also note that the grating period *T _{x}* does not need to be kept constant for all objects at different depths. When the grating period *T _{x}* is a function of the depth *z*, the spectra of the depth planes are shifted by different amounts. However, the low frequency parts of the depth planes are still separated by the spectrum shifts, and thus the depth discrimination is enhanced. Another noteworthy point is that, due to the shift of the spectrum, higher spatial frequencies can be captured for one side band. Considering that the other side band can be recovered using the conjugate symmetry of the frequency spectrum of a real-valued object function, there is a possibility of enhancing not only the depth discrimination but also the resolution.
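A minimal 1D sketch of the grating projection step follows. This is our own illustration: the four-step combination below is one way to realize the amplitude and phase recovery of Eqs. (3) and (4), and the object profile, grid, and grating half period are assumptions.

```python
import numpy as np

# Project G_phi(x) = 1 + sin(pi*x/Tx + phi) for phi = 0, 90, 180, 270 degrees,
# then combine the four captures into a complex-valued distribution whose
# spectrum is the object spectrum shifted to fx = 1/(2*Tx).
Tx = 0.005                                    # half period of the grating [m]
x = np.linspace(-0.05, 0.05, 1024, endpoint=False)
f = 1.0 + 0.5*np.cos(2*np.pi*50*x)            # non-negative "object" profile

cap = {phi: f*(1.0 + np.sin(np.pi*x/Tx + phi))
       for phi in (0.0, 0.5*np.pi, np.pi, 1.5*np.pi)}

s = cap[0.0] - cap[np.pi]                # = 2 f sin(pi x / Tx)
c = cap[0.5*np.pi] - cap[1.5*np.pi]      # = 2 f cos(pi x / Tx)
l_complex = 0.25*(c + 1j*s)              # = f(x) exp(j pi x / Tx) / 2

print(np.allclose(np.abs(l_complex), 0.5*f))   # True: grating fluctuation removed

# the object's DC peak is shifted to fx = 1/(2*Tx) in the spectrum
F = np.fft.fftshift(np.fft.fft(l_complex))
fx = np.fft.fftshift(np.fft.fftfreq(x.size, d=x[1] - x[0]))
peak = fx[np.argmax(np.abs(F))]
print(peak)   # ≈ 100 cycles/m = 1/(2*Tx)
```

The amplitude check mirrors the role of Eq. (3) (removing the grating fluctuation), and the spectral peak location mirrors the side-band shift that separates the low-frequency parts of different depth planes.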

Figures 8-10 show the simulation results. The simulation is performed for the same 3D scene shown in Fig. 4, but the locations of the three plane objects are changed to 30 mm, 20 mm, and 10 mm to the right of the lens array plane for the Lena, banana, and apple objects, respectively. Note that the spacing between the plane objects is reduced to emphasize the depth discrimination ratio. A diagonally slanted grating pattern is used in the simulation to give equal contributions to the *x* and *y* directions. Using Eqs. (3) and (4), the four-dimensional, i.e. *x*, *y*, *θ*, and *φ*, complex-valued light ray distribution is obtained. Figure 8 shows two slices revealing the *x*-*θ* plot and the *y*-*φ* plot. As shown in Fig. 8, the phase of the captured light ray distribution now has a periodic grating pattern due to the grating projection technique.

Figure 9 shows two slices, the *f _{x}*-*f _{θ}* plot and the *f _{y}*-*f _{φ}* plot, with and without grating projection. Although the three lines are not identified individually due to the small depth separation, it can be observed that the peak points representing the DC component are moved from the coordinate origin, as shown in Figs. 9(a) and (b), to different positions, as shown in Figs. 9(c) and (d).

The depth filtering result for one out of the three plane objects is shown in Fig. 10. The depth range for filtering is set to 4 mm around the object’s actual depth. In Fig. 10(a), residual blurred images remain with significant energy due to the low depth discrimination. In Fig. 10(b), however, it can be confirmed that they are largely suppressed by the grating projection technique, leaving the desired object unchanged.

## 3. Experimental result

We verified the proposed method experimentally. Two experiments were performed to verify the depth filtering operation and the depth discrimination ratio enhancement, respectively. For the first experiment, three plane objects ‘J’, ‘K’, and ‘M’ shown in Fig. 11 are located at 2 cm, 7 cm, and 12 cm from the lens array, respectively. The lens array used in the experiment consists of identical elemental lenses of 1 mm lens pitch and 3.3 mm focal length. The number of elemental lenses in the array is about 110(H)×55(V). Under uniform white illumination, the elemental images are captured through the lens array as shown in Fig. 12. The number of pixels per elemental image is 31(H)×31(V).
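For reference, the angular sampling implied by the quoted capture parameters can be estimated with a simple pinhole-model calculation (our own back-of-envelope figures, not values stated in the paper):

```python
import math

# Pinhole-model estimate for the capture setup described above
# (1 mm lens pitch, 3.3 mm focal length, 31x31 pixels per elemental image).
# All derived numbers are approximations under the pinhole assumption.
pitch, focal, npix = 1.0e-3, 3.3e-3, 31
fov = 2*math.atan(pitch/(2*focal))   # angular field of view per lens [rad]
step = fov/npix                      # angular sampling step per pixel [rad]
print(round(math.degrees(fov), 1))   # ≈ 17.2 degrees across each elemental image
print(round(math.degrees(step), 2))  # ≈ 0.56 degrees per pixel
```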

In the experiment, the depth range for filtering of each object was determined empirically, considering the resultant depth discrimination and the brightness of the reconstruction. The depth filtering result is shown in Fig. 13. Although the reconstruction of the object ‘M’ is rather weak due to its large depth, it can be seen that the depth selection filtering for one or two objects out of the three is performed successfully.

The original set of elemental images and the filtered one are further processed for various view reconstruction, and the result is shown in Figs. 14 and 15. Figures 14 and 15 reveal that different views of the 3D object scene can be reconstructed from the filtered elemental images as well as from the original elemental images.

The experiment for verification of the depth discrimination enhancement using grating projection is performed with the setup shown in Fig. 16(a). Three plane objects ‘J’, ‘K’, and ‘M’ are located at 4 cm, 6 cm, and 8 cm from the lens array. Instead of uniform illumination, four diagonal sinusoidal amplitude grating patterns with 90° phase shifts are projected onto the objects, and the corresponding four sets of elemental images are captured as shown in Fig. 16(b). The grating period projected on the object surfaces is approximately 9 mm along both the *x* and *y* axes. In the captured elemental images, this projected grating period is sampled with more than 14 pixels along both the *x* and *y* axes, without aliasing. The magnified image of the captured elemental images shown in Fig. 16(b) shows the intensity variation due to the grating projection.

From these four sets of elemental images, the complex-valued elemental images are synthesized using Eqs. (3) and (4). Figure 17 shows the amplitude and phase distributions of the synthesized complex-valued elemental images. Figure 17(a) reveals that the intensity fluctuation in the raw image of Fig. 16 is now removed by Eq. (3). The phase distribution shown in Fig. 17(b) shows the periodic grating pattern as desired.

Figure 18 shows the *f _{x}*-*f _{θ}* plot with and without the grating projection. The plot for the non-grating-projection case shown in Fig. 18(a) is calculated by ignoring the phase term of the complex-valued set of elemental images of Fig. 17. As expected, the high energy point representing the DC components of the objects is moved from the origin to a different location.

Finally, the depth filtering result is shown in Fig. 19. One of the three plane objects is selected for the reconstruction in the non-grating and grating cases. In the last three rows, it is observed that residual objects remain in the reconstruction with significant intensity due to the small depth separation between the objects. By the grating projection method, however, they are successfully suppressed, leaving the desired object, as shown in the first four rows of Fig. 19. From the results in Fig. 19, it can be confirmed that the depth discrimination ratio is enhanced by the proposed grating projection method.

## 4. Conclusion

A novel method to perform depth filtering using integral imaging is proposed. By using the spatio-angular frequency characteristics of the captured light ray distribution, various depth filtering operations can be performed. Any conventional processing developed for integral imaging can be further applied after the proposed filtering. We also propose an additional method using grating projection, which enhances the depth discrimination performance of the depth filtering operation by reducing the overlapped energy between depth planes. Experimental results are provided for verification of the proposed method.

## Acknowledgment

This work was supported by the research grant of the Chungbuk National University in 2009.

## References and links

**1. **F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. **36**(7), 1598–1603 (1997). [CrossRef] [PubMed]

**2. **N. Davies, M. McCormick, and L. Yang, “Three-dimensional imaging systems: a new development,” Appl. Opt. **27**(21), 4520 (1988). [CrossRef] [PubMed]

**3. **S.-W. Min, J. Kim, and B. Lee, “New characteristic equation of three-dimensional integral imaging system and its applications,” Jpn. J. Appl. Phys. **44**(2), L71–L74 (2005). [CrossRef]

**4. **R. Martinez-Cuenca, A. Pons, G. Saavedra, M. Martinez-Corral, and B. Javidi, “Optically-corrected elemental images for undistorted Integral image display,” Opt. Express **14**(21), 9657–9663 (2006). [CrossRef] [PubMed]

**5. **J. Arai, M. Okui, T. Yamashita, and F. Okano, “Integral three-dimensional television using a 2000-scanning-line video system,” Appl. Opt. **45**(8), 1704–1712 (2006). [CrossRef] [PubMed]

**6. **H. Liao, T. Dohi, and M. Iwahara, “Improved viewing resolution of integral videography by use of rotated prism sheets,” Opt. Express **15**(8), 4814–4822 (2007). [CrossRef] [PubMed]

**7. **J. Hahn, Y. Kim, and B. Lee, “Uniform angular resolution integral imaging display with boundary folding mirrors,” Appl. Opt. **48**(3), 504–511 (2009). [CrossRef] [PubMed]

**8. **J. Kim, S.-W. Min, and B. Lee, “Viewing window expansion of integral floating display,” Appl. Opt. **48**(5), 862–867 (2009). [CrossRef] [PubMed]

**9. **Y. Kim, H. Choi, J. Kim, S.-W. Cho, Y. Kim, G. Park, and B. Lee, “Depth-enhanced integral imaging display system with electrically variable image planes using polymer-dispersed liquid-crystal layers,” Appl. Opt. **46**(18), 3766–3773 (2007). [CrossRef] [PubMed]

**10. **J.-H. Jung, Y. Kim, Y. Kim, J. Kim, K. Hong, and B. Lee, “Integral imaging system using an electroluminescent film backlight for three-dimensional-two-dimensional convertibility and a curved structure,” Appl. Opt. **48**(5), 998–1007 (2009). [CrossRef] [PubMed]

**11. **M. Shin, G. Baasantseren, K.-C. Kwon, N. Kim, and J.-H. Park, “Three-dimensional display system based on integral imaging with viewing direction control,” Jpn. J. Appl. Phys. **49**(7), 072501 (2010). [CrossRef]

**12. **J.-Y. Son, B. Javidi, S. Yano, and K.-H. Choi, “Recent Developments in 3-D Imaging Technologies,” J. Display Technol. **6**(10), 394–403 (2010). [CrossRef]

**13. **J.-H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. **48**(34), H77–H94 (2009). [CrossRef] [PubMed]

**14. **M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. on Graphics (Proc. SIGGRAPH) **25**, 924–934 (2006).

**15. **J.-H. Jung, K. Hong, G. Park, I. Chung, J.-H. Park, and B. Lee, “Reconstruction of three-dimensional occluded object using optical flow and triangular mesh reconstruction in integral imaging,” Opt. Express **18**(25), 26373–26387 (2010). [CrossRef] [PubMed]

**16. **B. Javidi, I. Moon, and S. Yeom, “Three-dimensional identification of biological microorganism using integral imaging,” Opt. Express **14**(25), 12096–12108 (2006). [CrossRef] [PubMed]

**17. **G. Passalis, N. Sgouros, S. Athineos, and T. Theoharis, “Enhanced reconstruction of three-dimensional shape and texture from integral photography images,” Appl. Opt. **46**(22), 5311–5320 (2007). [CrossRef] [PubMed]

**18. **J.-H. Park, G. Baasantseren, N. Kim, G. Park, J.-M. Kang, and B. Lee, “View image generation in perspective and orthographic projection geometry based on integral imaging,” Opt. Express **16**(12), 8800–8813 (2008). [CrossRef] [PubMed]

**19. **T. Mishina, M. Okui, and F. Okano, “Calculation of holograms from elemental images captured by integral photography,” Appl. Opt. **45**(17), 4026–4036 (2006). [CrossRef] [PubMed]

**20. **N. T. Shaked, J. Rosen, and A. Stern, “Integral holography: white-light single-shot hologram acquisition,” Opt. Express **15**(9), 5754–5760 (2007). [CrossRef] [PubMed]

**21. **J.-H. Park, M.-S. Kim, G. Baasantseren, and N. Kim, “Fresnel and Fourier hologram generation using orthographic projection images,” Opt. Express **17**(8), 6320–6334 (2009). [CrossRef] [PubMed]

**22. **S.-H. Hong, J.-S. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express **12**(3), 483–491 (2004). [CrossRef] [PubMed]

**23. **J.-B. Hyun, D.-C. Hwang, D.-H. Shin, and E.-S. Kim, “Curved computational integral imaging reconstruction technique for resolution-enhanced display of three-dimensional object images,” Appl. Opt. **46**(31), 7697–7708 (2007). [CrossRef] [PubMed]

**24. **B. Javidi and Y. S. Hwang, “Passive near-infrared 3D sensing and computational reconstruction with synthetic aperture integral imaging,” J. Display Technol. **4**(1), 3–5 (2008). [CrossRef]

**25. **K.-J. Lee, D.-C. Hwang, S.-C. Kim, and E.-S. Kim, “Blur-metric-based resolution enhancement of computationally reconstructed integral images,” Appl. Opt. **47**(15), 2859–2869 (2008). [CrossRef] [PubMed]

**26. **G. Baasantseren, J.-H. Park, and N. Kim, “Depth discrimination enhanced computational integral imaging using random pattern illumination,” Jpn. J. Appl. Phys. **48**(2), 020216 (2009). [CrossRef]

**27. **J.-X. Chai, S.-C. Chan, H.-Y. Shum, and X. Tong, “Plenoptic sampling,” Proc. ACM SIGGRAPH, 307–318 (2000).

**28. **X. Chen, M. Gramaglia, and J. A. Yeazell, “Phase-shift calibration algorithm for phase-shifting interferometry,” J. Opt. Soc. Am. A **17**(11), 2061–2066 (2000). [CrossRef] [PubMed]