Abstract

A novel technique for depth filtering in integral imaging is proposed. Integral imaging captures the spatio-angular distribution of light rays, which delivers three-dimensional information about the object scene. The proposed method performs the filtering operation in the frequency domain of the captured spatio-angular light ray distribution, achieving depth selective reconstruction. Grating projection further enhances the depth discrimination performance. The principle is verified experimentally.

©2011 Optical Society of America

1. Introduction

Integral imaging has been considered one of the most prominent autostereoscopic three-dimensional (3D) display techniques [1–12]. Besides its benefits as a 3D display technique, integral imaging is also a versatile technology for 3D capturing and processing [13–16]. Unlike usual two-dimensional (2D) imaging, which records only the spatial distribution of light rays, integral imaging captures the spatio-angular distribution of light rays so that 3D information can be acquired [13,14]. Various techniques for processing the captured spatio-angular distribution of light rays have been reported, including 3D mesh model reconstruction [15,17], arbitrary view reconstruction [18], hologram synthesis [19–21], and depth plane reconstruction [22–24].

Depth plane reconstruction in integral imaging is usually called computational integral imaging reconstruction (CIIR). By collecting and averaging the light rays corresponding to each point in a depth plane, an image refocused on that plane is obtained [13,22]. Repeating this process for successive depth planes gives a stack of refocused images so that the 3D structure of the object scene can be understood visually. Since CIIR is performed by simple pixel averaging without significant image processing, it is efficient and free from image processing errors. However, CIIR has its limitations as well. One significant limitation is that the reconstructed image of CIIR is simply a refocused image in a certain depth plane without any further depth processing. The reconstructed image consists of focused object points accompanied by object points blurred according to the difference between their actual depths and the reconstruction depth. Additional processing such as depth sectioning or multi-plane refocusing is not possible in conventional CIIR.
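To make the averaging concrete, below is a minimal Python sketch of CIIR-style refocusing. It assumes a pinhole ray model in which the view at angle θ sees a depth-z point shifted by θz; the 4D array layout, the integer-pixel shifting via np.roll, and all names are illustrative assumptions rather than the exact implementation of [13,22].

```python
import numpy as np

def ciir_refocus(lf, angles, z, dx):
    """Sketch of computational integral imaging reconstruction (CIIR).

    lf:     4D ray distribution indexed as [x, y, theta, phi]
    angles: angular sample positions in radians (shared by theta and phi)
    z:      reconstruction depth in meters
    dx:     spatial sampling pitch in meters

    Each angular view is shifted so that rays from depth z align,
    then all views are averaged; points at other depths stay blurred.
    """
    acc = np.zeros(lf.shape[:2])
    for i, th in enumerate(angles):
        for j, ph in enumerate(angles):
            sx = int(round(th * z / dx))   # pixel shift aligning depth z
            sy = int(round(ph * z / dx))
            # np.roll wraps at the borders; a real implementation would pad.
            acc += np.roll(lf[:, :, i, j], (sx, sy), axis=(0, 1))
    return acc / len(angles) ** 2

# A refocused stack is then [ciir_refocus(lf, angles, z, dx) for z in depths].
```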

A few techniques have been reported that give additional functionality to CIIR. K.-J. Lee et al. applied a focus filter to extract the focused parts of the object in an effort to realize tomographic imaging [25]. The focus filter, however, relies on the spatial variation within an image window and is generally prone to error. G. Baasantseren et al. used random pattern illumination to suppress the blurred parts [26]. However, beyond reducing the effective depth of focus, this approach does not enable general depth filtering.

In this paper, we propose a method to perform depth filtering on the spatio-angular light ray distribution captured by integral imaging. Using the depth dependency of the frequency distribution of the light rays [27], the proposed method enables various depth filtering operations, including depth selection and multi-plane refocusing. It is also possible to reconstruct various views from the depth filtered data, which was not possible with conventional CIIR. In addition to the basic method, we also propose a method using grating projection to enhance the depth discrimination. In the following sections, we explain the principle and present experimental results for verification.

2. Theory

2.1. Depth filtering using frequency distribution of light rays

Figure 1 shows the geometry of integral imaging capture. A 3D scene is captured through an array of identical small lenses. Since each lens captures a different perspective of the 3D scene, the resulting image is a set of perspectives, each of which is called an elemental image. Assuming a pinhole lens model and ray optics, each elemental image can be regarded as a representation of the angular ray distribution at the principal point of the corresponding lens. With an array of lenses, the captured set of elemental images contains the spatio-angular distribution of the light rays in the lens array plane.

Fig. 1 Geometry of integral imaging capture

In order to investigate the frequency characteristics of the captured spatio-angular light ray distribution, let us consider a plane object at a specific distance z from the lens array, as shown in Fig. 2. For a non-specular plane object f, the light ray l at position x and angle θ in the lens array plane is given, under the paraxial approximation, by

$$l(x,\theta) = f(x+\theta z), \tag{1}$$

where θ is measured in radians. Fourier-transforming Eq. (1) gives the spatio-angular frequency distribution of the captured light rays L:

$$L(f_x, f_\theta) = \iint f(x+\theta z)\, e^{-j2\pi(f_x x + f_\theta \theta)}\, dx\, d\theta = F(f_x)\,\delta(f_x z - f_\theta), \tag{2}$$
where F is the Fourier transform of the plane object f, and fx and fθ are the spatial and angular frequencies measured in cycles/m and cycles/rad, respectively. Here the Fourier transform is performed only over x and θ, not over the object depth z. Equation (2) reveals that a plane object at a distance z is represented by a single line in the frequency domain, with a slope proportional to the distance z, as shown in Fig. 2.
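To make the second equality in Eq. (2) explicit, substitute u = x + θz (so x = u − θz at fixed θ), which separates the double integral:

$$L(f_x, f_\theta) = \int f(u)\, e^{-j2\pi f_x u}\, du \int e^{-j2\pi (f_\theta - f_x z)\theta}\, d\theta = F(f_x)\,\delta(f_\theta - f_x z),$$

which equals F(fx)δ(fxz − fθ) since the delta function is even.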

Fig. 2 Frequency spectrum of a single plane object

Note that this can also be explained using the Fourier slice theorem. As can be observed in Fig. 2, the projection in the spatial domain onto the line θ=zx gives the plane object function f. According to the Fourier slice theorem, the corresponding slice fθ=zfx in the frequency domain represents the Fourier transform of the object function f. The other projections in the spatial domain give a constant function, assuming the extent of the spatio-angular distribution is sufficiently large in the spatial domain, and thus their corresponding slices give delta functions located at the origin. Consequently, the plane object is represented as a line fθ=zfx in the frequency domain. Also note that in the case of specular reflection, the frequency spectrum of a plane object cannot be represented as a single line due to the limited angular extent of the rays. Hence, the discussion in this paper is valid only for objects with diffusive surfaces.

For a volume object with an extended depth range, the frequency representation of the captured light rays becomes an area comprising the collection of such slanted lines.

Based on this characteristic of the captured light rays, the proposed depth filtering is performed as shown in Fig. 3. The light ray distribution of a 3D scene is captured using a lens array following the integral imaging principle. The captured light ray distribution is Fourier transformed to yield the spatio-angular frequency domain representation. A desired filtering operation is performed on this frequency domain representation; depth pass filtering is illustrated in Fig. 3 as an example. Finally, an inverse Fourier transform gives the depth filtered light ray distribution. Any previously reported technique for visualizing the 3D information embedded in the elemental images can additionally be applied to this depth filtered light ray distribution. Note that for color objects, the proposed method is performed for each color channel, i.e., red, green, and blue, independently. The filtered color channels are then merged to generate the color output. In the following, frequency spectra are plotted only for the red channel, while spatial domain representations are shown using all color channels for visibility.
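As an illustration of this pipeline, the following Python sketch applies a depth pass filter to a simplified two-dimensional x–θ ray distribution. The grid sizes, object textures, depths, and pass band are hypothetical choices for illustration, not the paper's simulation parameters.

```python
import numpy as np

# Hypothetical sampling grid for the spatio-angular ray distribution.
nx, nt = 512, 128
x = np.linspace(-0.05, 0.05, nx)        # spatial axis (m)
theta = np.linspace(-0.1, 0.1, nt)      # angular axis (rad)
X, T = np.meshgrid(x, theta, indexing="ij")

# Scene: two plane objects at different depths, cf. Eq. (1).
f1 = lambda u: 1.0 + np.cos(2 * np.pi * 200 * u)   # object at z = 0.02 m
f2 = lambda u: 1.0 + np.cos(2 * np.pi * 500 * u)   # object at z = 0.05 m
l = f1(X + T * 0.02) + f2(X + T * 0.05)

# Frequency-domain representation: each object lies on a line f_theta = z*f_x.
L = np.fft.fft2(l)
fx = np.fft.fftfreq(nx, d=x[1] - x[0])[:, None]          # cycles/m
ft = np.fft.fftfreq(nt, d=theta[1] - theta[0])[None, :]  # cycles/rad

def depth_pass(L, z_min, z_max):
    """Keep components on lines f_theta = z*f_x with z_min <= z <= z_max."""
    mask = (ft >= z_min * fx) & (ft <= z_max * fx)   # f_x >= 0 half
    mask |= (ft <= z_min * fx) & (ft >= z_max * fx)  # f_x < 0 half
    mask[0, 0] = True   # the DC bin is shared by all depths (cf. Sec. 2.2)
    return L * mask

# Select the z = 0.02 m object with a 10 mm band around its depth.
l_filtered = np.real(np.fft.ifft2(depth_pass(L, 0.015, 0.025)))
```

Multi-plane refocusing follows by combining the masks of several depth bands before the inverse transform.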

Fig. 3 Procedure of proposed depth filtering

In order to verify the principle, a simulation is performed for a 3D scene comprising three plane objects, as shown in Fig. 4. Note that the three plane objects are located on both the left and right sides of the lens array plane in order to emphasize the depth dependency of the frequency spectrum of the spatio-angular ray distribution. In a real implementation of the integral imaging capture system, this condition is satisfied when the objects are not captured directly but their intermediate images, formed by an additional imaging lens, are captured by the lens array. In other cases, the objects are usually located on the right side, i.e., positive z, of the lens array. The depth filtering result is shown in Fig. 5. Depth filtering is performed to select one or two objects out of the three plane objects using the proposed method, and the filtered light ray distribution is further processed to synthesize a view at a central position for visualization purposes, following the conventional method [13]. The depth range for filtering each object is set to 10 mm around its actual depth.

Fig. 4 3D scene used in simulation

Fig. 5 Simulation result of depth filtering.

In the second row, labeled ‘original’, of Fig. 5, it can be seen that the frequency spectrum fx–fθ and fy–fφ plots reveal three lines with different slopes, as expected. Filtering is performed to select specific lines in this frequency spectrum, as shown in the two columns ‘fx–fθ plot’ and ‘fy–fφ plot’. The last column in Fig. 5 shows the synthesized central views of the filtered data. Note that the selection of two objects at distant depths, as shown in the last three rows of Fig. 5, could not be done by conventional CIIR. From Fig. 5, it can be confirmed that the proposed depth filtering operation performs as expected.

In order to verify that conventional processing can additionally be applied to the filtered data, view reconstruction from various directions is performed on the filtered data as an example. Figure 6 shows the results for the original data and for the data filtered for the apple and banana objects, together with two movies. From Fig. 6, it can be confirmed that the conventional view reconstruction algorithm works well with the filtered data.
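Filtered data remains compatible with such processing because filtering changes only the content of the ray distribution, not its format. As a hedged sketch in the spirit of the orthographic view generation of [18], one view is simply an angular slice of the (filtered) 4D distribution; the array layout and the placeholder data below are assumptions.

```python
import numpy as np

def orthographic_view(lf, i_theta, i_phi):
    """Collect, for every lens position, the ray leaving at one fixed
    direction; this angular slice of lf[x, y, theta, phi] is an
    orthographic view of the (possibly depth filtered) scene."""
    return lf[:, :, i_theta, i_phi]

# Usage: the central view is the slice at the middle angular indices.
lf = np.zeros((110, 55, 31, 31))   # placeholder with the experiment's dimensions
central = orthographic_view(lf, 15, 15)
```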

Fig. 6 View reconstructions: (a) 9 examples and (b) movie (Media 1) using original data, (c) 9 examples and (d) movie (Media 2) using filtered data

2.2. Depth discrimination enhancement using grating projection

The basic method proposed in the previous section enables depth selective filtering of the captured 3D scene. The depth discrimination ratio, however, is not very good. As shown in the last column of Fig. 5, an unselected depth plane object is not completely removed in the reconstruction but remains as a blurred shape. This becomes especially severe when the depth separation between the objects is small. In this subsection, we propose an additional method to enhance the depth discrimination ratio.

The low depth discrimination ratio is primarily due to the overlapping low frequency components of the depth planes. As shown in Fig. 7(a), the slanted lines corresponding to different depth planes intersect at the origin of the spatio-angular frequency domain. Note that for real 3D objects, most of the energy is concentrated in the low frequency range around the origin. Hence, any depth pass filtering is accompanied by significant energy from other depths, resulting in a low discrimination ratio.

Fig. 7 Frequency spectrum (a) without grating projection and (b) with grating projection

In order to enhance the discrimination ratio, we propose a method using grating projection, inspired by the standard phase retrieval procedures used in applications such as phase-shifting interferometry [28]. Figure 7(b) illustrates the proposed method. Instead of the usual white light illumination, four sinusoidal amplitude grating patterns Gφ(x) = 1 + sin(πx/Tx + φ) are projected onto the 3D scene sequentially with φ = 0°, 90°, 180°, and 270°. The four captured sets of spatio-angular ray distributions, l0°, l90°, l180°, and l270°, are processed following

$$|l(x,\theta)| = \frac{l_{0^\circ}(x,\theta) + l_{180^\circ}(x,\theta)}{2}, \tag{3}$$

$$\angle l(x,\theta) = 2\tan^{-1}\!\left(\frac{l_{0^\circ}(x,\theta) - l_{180^\circ}(x,\theta)}{l_{90^\circ}(x,\theta) - l_{270^\circ}(x,\theta)}\right), \tag{4}$$
to yield the complex-valued ray distribution l(x,θ). For simplicity, let us consider a plane object f(x) at a distance z. Note that under grating projection, the object function f(x) becomes f(x)Gφ(x). Hence, from Eq. (1), the captured ray distribution with grating phase shift φ is lφ(x,θ) = f(x+θz)Gφ(x+θz). Substituting this into Eqs. (3) and (4) shows that the complex-valued ray distribution l(x,θ) is given by

$$l(x,\theta) = f(x+\theta z)\exp\!\left\{\frac{j2\pi(x+\theta z)}{T_x}\right\}. \tag{5}$$
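This follows directly: with α = π(x+θz)/Tx, the four captures give l0° − l180° = 2f sin α and l90° − l270° = 2f cos α, so Eqs. (3) and (4) yield

$$\frac{l_{0^\circ} + l_{180^\circ}}{2} = f(x+\theta z), \qquad 2\tan^{-1}\!\left(\frac{l_{0^\circ} - l_{180^\circ}}{l_{90^\circ} - l_{270^\circ}}\right) = 2\tan^{-1}(\tan\alpha) = \frac{2\pi(x+\theta z)}{T_x},$$

which are exactly the amplitude and phase of Eq. (5).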

Equation (5) indicates that the complex-valued ray distribution synthesized by Eqs. (3) and (4) is equivalent to the ray distribution of a complex-valued 3D scene fc given by

$$f_c(x) = f(x)\exp\!\left(\frac{j2\pi x}{T_x}\right), \tag{6}$$
where Tx is half the period of the projected grating pattern along the x axis. Therefore, in the frequency domain representation of the complex-valued ray distribution l(x,θ), the spectrum of each depth plane is shifted by 1/Tx along the fx axis, as shown in Fig. 7(b). The slanted lines corresponding to different depth planes still intersect at the origin, but the low frequency part, where most of the energy is concentrated, is now moved to fx = 1/Tx and is well separated without overlap. Therefore, the depth discrimination ratio can be enhanced in the reconstruction.
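The following Python sketch simulates the four phase-shifted captures for a single plane object and demodulates them with Eqs. (3) and (4); the grid, texture, depth, and grating parameter Tx are hypothetical values chosen only to make the spectrum shift visible.

```python
import numpy as np

nx, nt = 512, 128
x = np.linspace(-0.05, 0.05, nx)             # spatial axis (m)
theta = np.linspace(-0.1, 0.1, nt)           # angular axis (rad)
X, T = np.meshgrid(x, theta, indexing="ij")

z, Tx = 0.02, 0.005                          # object depth and grating parameter (m)
u = X + T * z                                # ray coordinate of Eq. (1)
f = 1.0 + 0.5 * np.cos(2 * np.pi * 100 * u)  # strictly positive object texture

# Four captures under the phase-shifted gratings G_phi = 1 + sin(pi*u/Tx + phi).
cap = {p: f * (1 + np.sin(np.pi * u / Tx + np.radians(p))) for p in (0, 90, 180, 270)}

# Eqs. (3) and (4): amplitude and phase of the complex-valued distribution.
amp = 0.5 * (cap[0] + cap[180])
phase = 2 * np.arctan2(cap[0] - cap[180], cap[90] - cap[270])
l_complex = amp * np.exp(1j * phase)         # equals f * exp(j*2*pi*u/Tx), Eq. (5)

# The DC energy now sits near f_x = 1/Tx instead of at the origin (Fig. 7(b)).
L = np.fft.fft2(l_complex)
fx = np.fft.fftfreq(nx, d=x[1] - x[0])
print("spectral peak at f_x ~", fx[np.argmax(np.abs(L)) // nt], "cycles/m; 1/Tx =", 1 / Tx)
```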

Note that since the effect of the grating projection is limited to the phase of the 3D scene, as revealed by Eq. (6), and what usually matters in the reconstruction is the amplitude distribution in the object space, no additional processing is required to compensate for the added phase term in the reconstruction. Also note that the grating period Tx need not be kept constant for all objects at different depths. When the grating period Tx is a function of the depth z, the spectra of the depth planes are shifted by different amounts. However, the low frequency parts of the depth planes are still separated by the spectrum shifts, and thus the depth discrimination is enhanced. Another noteworthy point is that, due to the shift of the spectrum, higher spatial frequencies can be captured for one side band. Considering that the other side band can be recovered using the conjugate symmetry of the frequency spectrum of a real-valued object function, there is a possibility of enhancing not only the depth discrimination but also the resolution.

Figures 8–10 show the simulation results. The simulation is performed for the same 3D scene shown in Fig. 4, but the three plane objects are moved to 30 mm, 20 mm, and 10 mm to the right of the lens array plane for the Lena, banana, and apple objects, respectively. Note that the spacing between the plane objects is reduced to emphasize the depth discrimination ratio. A diagonally slanted grating pattern is used in the simulation to give equal contributions to the x and y directions. Using Eqs. (3) and (4), the four-dimensional, i.e., x, y, θ, and φ, complex-valued light ray distribution is obtained. Figure 8 shows two slices, the x–θ plot and the y–φ plot. As shown in Fig. 8, the phase of the captured light ray distribution now has a periodic grating pattern due to the grating projection technique.

Fig. 8 A slice of spatio-angular light ray distribution: (a) amplitude and (b) phase of x–θ plot at y=φ=0, (c) amplitude and (d) phase of y–φ plot at x=θ=0

Fig. 10 Depth filtering results (a) without grating projection, (b) with grating projection

Figure 9 shows two slices, the fx–fθ plot and the fy–fφ plot, with and without grating projection. Although the three lines cannot be identified individually due to the small depth separation, it can be observed that the peak points representing the DC component move from the coordinate origin, as shown in Figs. 9(a) and 9(b), to different positions, as shown in Figs. 9(c) and 9(d).

Fig. 9 A slice of spatio-angular frequency spectrum (a) fx–fθ plot at fy=fφ=0 and (b) fy–fφ plot at fx=fθ=0 without grating projection, (c) fx–fθ plot at fy=fφ=0 and (d) fy–fφ plot at fx=fθ=0 with grating projection

The depth filtering result for one of the three plane objects is shown in Fig. 10. The depth range for filtering is set to 4 mm around the object’s actual depth. In Fig. 10(a), residual blurred images remain with significant energy due to the low depth discrimination. In Fig. 10(b), however, it can be confirmed that they are largely suppressed by the grating projection technique, leaving the desired object unchanged.

3. Experimental results

We verified the proposed method experimentally. Two experiments were performed to verify the depth filtering operation and the depth discrimination ratio enhancement, respectively. For the first experiment, three plane objects ‘J’, ‘K’, and ‘M’ shown in Fig. 11 are located at 2 cm, 7 cm, and 12 cm from a lens array, respectively. The lens array used in the experiment consists of identical elemental lenses with a 1 mm lens pitch and a 3.3 mm focal length. The number of elemental lenses in the array is about 110(H)×55(V). Under uniform white illumination, the elemental images are captured through the lens array as shown in Fig. 12. The number of pixels per elemental image is 31(H)×31(V).
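For concreteness, the sketch below shows one plausible way to rearrange such a captured mosaic of elemental images into the 4D ray distribution used in Section 2, with the experiment's parameters (1 mm pitch, 3.3 mm focal length, 31×31 pixels per lens) plugged into a paraxial angle mapping. The mosaic layout, function name, and index conventions are assumptions for illustration.

```python
import numpy as np

pitch = 1e-3           # lens pitch (m), from the experiment
focal = 3.3e-3         # elemental lens focal length (m)
n_px = 31              # pixels per elemental image along each axis
px = pitch / n_px      # sensor pixel pitch under each lens (m)

def to_light_field(mosaic, n_lens_x=110, n_lens_y=55):
    """Rearrange an elemental-image mosaic into lf[x, y, theta, phi].

    mosaic: 2D array of shape (n_lens_x*n_px, n_lens_y*n_px), assuming each
    lens's block of n_px x n_px pixels is stored contiguously. Under the
    pinhole model, a pixel offset from the lens center samples the ray
    angle ~ offset * px / focal (paraxial).
    """
    lf = mosaic.reshape(n_lens_x, n_px, n_lens_y, n_px).transpose(0, 2, 1, 3)
    angles = (np.arange(n_px) - n_px // 2) * px / focal   # ~ +/-0.15 rad here
    return lf, angles

# Usage with placeholder data of the experiment's dimensions.
lf, angles = to_light_field(np.zeros((110 * 31, 55 * 31)))
print(lf.shape)   # (110, 55, 31, 31)
```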

Fig. 11 Experimental setup (a) object (b) configuration

Fig. 12 Captured elemental images

In the experiment, the depth range for filtering each object was determined empirically, considering the resulting depth discrimination and the brightness of the reconstruction. The depth filtering result is shown in Fig. 13. Although the reconstruction of the object ‘M’ is rather weak due to its large depth, it can be seen that the depth selective filtering of one or two objects out of three is performed successfully.

Fig. 13 Experimental depth filtering result

The original set of elemental images and the filtered one are further processed for view reconstruction from various directions, and the results are shown in Figs. 14 and 15. Figures 14 and 15 show that different views of the 3D object scene can be reconstructed from the filtered elemental images as well as from the original elemental images.

Fig. 14 Examples of the view synthesis (a) original data, (b) filtered for ‘J’ and ‘M’, (c) filtered for ‘J’ and ‘K’, and (d) filtered for ‘K’ and ‘M’

Fig. 15 Movies of view synthesis (a) (Media 3) original data, (b) (Media 4) filtered for ‘J’ and ‘M’, (c) (Media 5) filtered for ‘J’ and ‘K’, and (d) (Media 6) filtered for ‘K’ and ‘M’

The experiment for verification of the depth discrimination enhancement using grating projection is performed with the setup shown in Fig. 16(a). Three plane objects ‘J’, ‘K’, and ‘M’ are located at 4 cm, 6 cm, and 8 cm from the lens array. Instead of uniform illumination, four diagonal sinusoidal amplitude grating patterns with 90° phase shifts are projected onto the objects, and the corresponding four sets of elemental images are captured, as shown in Fig. 16(b). The grating period projected on the object surfaces is approximately 9 mm along both the x and y axes. In the captured elemental images, this projected grating period is sampled with more than 14 pixels along both the x and y axes, without aliasing. The magnified view of the captured elemental images in Fig. 16(b) shows the intensity variation due to the grating projection.

Fig. 16 Experimental setup for grating projection (a) configuration, (b) captured elemental images

From these four sets of elemental images, the complex-valued elemental images are synthesized using Eqs. (3) and (4). Figure 17 shows the amplitude and phase distributions of the synthesized complex-valued elemental images. Figure 17(a) reveals that the intensity fluctuation in the raw images of Fig. 16 is now removed by Eq. (3). The phase distribution shown in Fig. 17(b) shows the periodic grating pattern as desired.

Fig. 17 Synthesized set of elemental images (a) amplitude (b) phase

Figure 18 shows the fx–fθ plot with and without the grating projection. The plot for the non-grating-projection case shown in Fig. 18(a) is calculated by ignoring the phase term of the complex-valued set of elemental images of Fig. 17. As expected, the high energy point representing the DC components of the objects is moved from the origin to a different location.

Fig. 18 A slice of spatio-angular frequency spectrum (fx–fθ plot at fy=fφ=0) (a) without grating projection (b) with grating projection

Finally, the depth filtering result is shown in Fig. 19. One of the three plane objects is selected for the reconstruction in the non-grating and grating cases. In the last three rows of Fig. 19, it is observed that residual objects remain in the reconstruction with significant intensity due to the small depth separation between the objects. With the grating projection method, however, they are successfully suppressed, leaving the desired object, as shown in the first four rows of Fig. 19. From the results in Fig. 19, it can be confirmed that the depth discrimination ratio is enhanced by the proposed grating projection method.

Fig. 19 Experimental result of depth discrimination enhancement using grating projection

4. Conclusion

A novel method to perform depth filtering using integral imaging has been proposed. By using the spatio-angular frequency characteristics of the captured light ray distribution, various depth filtering operations can be performed. Any conventional processing developed for integral imaging can be further applied after the proposed filtering. We also proposed an additional method using grating projection. The grating projection method enhances the depth discrimination performance of the depth filtering operation by reducing the overlapping energy between depth planes. Experimental results are provided for verification of the proposed method.

Acknowledgment

This work was supported by the research grant of the Chungbuk National University in 2009.

References and links

1. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36(7), 1598–1603 (1997).

2. N. Davies, M. McCormick, and L. Yang, “Three-dimensional imaging systems: a new development,” Appl. Opt. 27(21), 4520 (1988).

3. S.-W. Min, J. Kim, and B. Lee, “New characteristic equation of three-dimensional integral imaging system and its applications,” Jpn. J. Appl. Phys. 44(2), L71–L74 (2005).

4. R. Martinez-Cuenca, A. Pons, G. Saavedra, M. Martinez-Corral, and B. Javidi, “Optically-corrected elemental images for undistorted integral image display,” Opt. Express 14(21), 9657–9663 (2006).

5. J. Arai, M. Okui, T. Yamashita, and F. Okano, “Integral three-dimensional television using a 2000-scanning-line video system,” Appl. Opt. 45(8), 1704–1712 (2006).

6. H. Liao, T. Dohi, and M. Iwahara, “Improved viewing resolution of integral videography by use of rotated prism sheets,” Opt. Express 15(8), 4814–4822 (2007).

7. J. Hahn, Y. Kim, and B. Lee, “Uniform angular resolution integral imaging display with boundary folding mirrors,” Appl. Opt. 48(3), 504–511 (2009).

8. J. Kim, S.-W. Min, and B. Lee, “Viewing window expansion of integral floating display,” Appl. Opt. 48(5), 862–867 (2009).

9. Y. Kim, H. Choi, J. Kim, S.-W. Cho, Y. Kim, G. Park, and B. Lee, “Depth-enhanced integral imaging display system with electrically variable image planes using polymer-dispersed liquid-crystal layers,” Appl. Opt. 46(18), 3766–3773 (2007).

10. J.-H. Jung, Y. Kim, Y. Kim, J. Kim, K. Hong, and B. Lee, “Integral imaging system using an electroluminescent film backlight for three-dimensional–two-dimensional convertibility and a curved structure,” Appl. Opt. 48(5), 998–1007 (2009).

11. M. Shin, G. Baasantseren, K.-C. Kwon, N. Kim, and J.-H. Park, “Three-dimensional display system based on integral imaging with viewing direction control,” Jpn. J. Appl. Phys. 49(7), 0725011–0725017 (2010).

12. J.-Y. Son, B. Javidi, S. Yano, and K.-H. Choi, “Recent developments in 3-D imaging technologies,” J. Display Technol. 6(10), 394–403 (2010).

13. J.-H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. 48(34), H77–H94 (2009).

14. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. on Graphics (Proc. SIGGRAPH) 25, 924–934 (2006).

15. J.-H. Jung, K. Hong, G. Park, I. Chung, J.-H. Park, and B. Lee, “Reconstruction of three-dimensional occluded object using optical flow and triangular mesh reconstruction in integral imaging,” Opt. Express 18(25), 26373–26387 (2010).

16. B. Javidi, I. Moon, and S. Yeom, “Three-dimensional identification of biological microorganism using integral imaging,” Opt. Express 14(25), 12096–12108 (2006).

17. G. Passalis, N. Sgouros, S. Athineos, and T. Theoharis, “Enhanced reconstruction of three-dimensional shape and texture from integral photography images,” Appl. Opt. 46(22), 5311–5320 (2007).

18. J.-H. Park, G. Baasantseren, N. Kim, G. Park, J.-M. Kang, and B. Lee, “View image generation in perspective and orthographic projection geometry based on integral imaging,” Opt. Express 16(12), 8800–8813 (2008).

19. T. Mishina, M. Okui, and F. Okano, “Calculation of holograms from elemental images captured by integral photography,” Appl. Opt. 45(17), 4026–4036 (2006).

20. N. T. Shaked, J. Rosen, and A. Stern, “Integral holography: white-light single-shot hologram acquisition,” Opt. Express 15(9), 5754–5760 (2007).

21. J.-H. Park, M.-S. Kim, G. Baasantseren, and N. Kim, “Fresnel and Fourier hologram generation using orthographic projection images,” Opt. Express 17(8), 6320–6334 (2009).

22. S.-H. Hong, J.-S. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express 12(3), 483–491 (2004).

23. J.-B. Hyun, D.-C. Hwang, D.-H. Shin, and E.-S. Kim, “Curved computational integral imaging reconstruction technique for resolution-enhanced display of three-dimensional object images,” Appl. Opt. 46(31), 7697–7708 (2007).

24. B. Javidi and Y. S. Hwang, “Passive near-infrared 3D sensing and computational reconstruction with synthetic aperture integral imaging,” J. Display Technol. 4(1), 3–5 (2008).

25. K.-J. Lee, D.-C. Hwang, S.-C. Kim, and E.-S. Kim, “Blur-metric-based resolution enhancement of computationally reconstructed integral images,” Appl. Opt. 47(15), 2859–2869 (2008).

26. G. Baasantseren, J.-H. Park, and N. Kim, “Depth discrimination enhanced computational integral imaging using random pattern illumination,” Jpn. J. Appl. Phys. 48(2), 0202161–0202163 (2009).

27. J.-X. Chai, S.-C. Chan, H.-Y. Shum, and X. Tong, “Plenoptic sampling,” in Proc. ACM SIGGRAPH, 307–318 (2000).

28. X. Chen, M. Gramaglia, and J. A. Yeazell, “Phase-shift calibration algorithm for phase-shifting interferometry,” J. Opt. Soc. Am. A 17(11), 2061–2066 (2000).

Supplementary Material (6)

Media 1: AVI (1516 KB)
Media 2: AVI (1628 KB)
Media 3: AVI (2060 KB)
Media 4: AVI (2052 KB)
Media 5: AVI (2028 KB)
Media 6: AVI (2004 KB)
