Abstract

In this paper, we propose a simple method for correcting distorted elemental images in computational integral imaging reconstruction (CIIR) by using surface markers on the lenslet array. The positions of the surface markers are extracted from elemental images distorted by geometric misalignments such as skew and rotation. The elemental images are then corrected by applying a linear transformation calculated from the extracted positions. The corrected elemental images enable precise reconstruction of 3D plane images in CIIR. To show the usefulness of the proposed method, preliminary experiments were carried out and their results are presented. To the best of our knowledge, this is the first report on compensating for distorted elemental images in computational integral imaging.

©2009 Optical Society of America

1. Introduction

Three-dimensional (3D) displays for next-generation 3D television and movies should generate true 3D images in space [1–4]. For this purpose, integral imaging has been studied extensively [5–10]. Integral imaging can provide observers with true-color 3D images with full parallax and continuous viewing points. Recently, based on the principle of integral imaging, several computational integral imaging systems have been introduced for 3D visualization and recognition [11–15]. A computational integral imaging system is shown in Fig. 1. It is composed of an optical pickup and the computational integral imaging reconstruction (CIIR) based on the pinhole-array model [11]. In the optical pickup shown in Fig. 1(a), a 3D object is recorded as elemental images through a lenslet array. In the CIIR process shown in Fig. 1(b), the elemental images are digitally processed by a computer, so that 3D images can be reconstructed at any output plane without optical devices.

Fig. 1 Concept of computational integral imaging (a) Optical pickup (b) CIIR.

Fig. 3 (a) Proposed pickup system using surface markers on lenslet array (b) Example of distorted elemental images (c) Example of corrected elemental images

However, the schematic pickup geometry shown in Fig. 1(a) may be difficult to use for real objects because pickup devices are physically small compared with the lenslet array, and the pixels of the pickup device do not match the lenslet-array pitch. To overcome this size difference between the lenslet array and the pickup device, the use of a relay optic system (ROS) was proposed, as shown in Fig. 2 [10,16,17]. In practical realizations of the pickup stage in integral imaging, the ROS is composed of a combination of several lenses. In this case, the size and position of the pickup device should be matched well to the elemental-cell grid of the lenslet array. A perfect match between the lenslet array and the pickup device is hard to obtain because of distortion and diffraction in the ROS. Advanced optical design techniques could mitigate this problem, but they are not cost-efficient; a digital correction method provides an effective solution without high fabrication cost. In addition, misalignment of either the lenslet array or the pickup device causes spatial geometric distortions such as rotation and tilting. When the ROS-based pickup system is used, the recorded elemental images may therefore contain various distortions that prevent high-quality 3D image reconstruction. Recently, a geometric analysis of spatial distortion in each elemental image was reported for optically displayed 3D images [18]; it showed that spatial geometric errors lead to significantly distorted 3D images in optical display. In this paper, by contrast, we consider the global geometric distortions related to the structure of the entire set of elemental images in the pickup process.

Fig. 2 Pickup stage using the relay optic system.

Although many research efforts on the ROS-based pickup system have been made in the past few years, no study has dealt with distortion-contained elemental images. Therefore, in this paper, we propose a method for correcting distorted elemental images so that the desired 3D plane images can be reconstructed by CIIR. The proposed correction method simply uses surface markers on the lenslet array, which provide initial information about the distortions introduced in the pickup stage. The positions of the surface markers are extracted from the distorted elemental images, and the elemental images are corrected by applying the linear transformation calculated from the extracted positions. Thus, the proposed method can correct geometric misalignments such as skew, rotation, and translation, and it enables precise reconstruction of 3D plane images in CIIR. To the best of our knowledge, this is the first report on compensating for distorted elemental images in computational integral imaging.

2. Proposed correction method

The proposed architecture for correcting the distorted elemental images recorded with the ROS-based pickup system of Fig. 2 is shown in Fig. 3. We assume that the geometric distortion is caused by misalignment of the lenslet array, so the recorded elemental images may contain geometric errors such as skew and rotation. To correct these errors, we introduce surface markers on the lenslet array. The positions of the surface markers can be selected according to the system structure, and the accuracy may improve as the number of markers increases. In this paper, we consider an example in which the number of surface markers, denoted by n, is 9, as shown in Fig. 3. The surface markers on the lenslet array provide the information needed to estimate the geometric error in the recorded elemental images. Since the markers are designed by the user, their positions can be easily extracted, and the distortion can then be compensated by a linear perspective transform.

As shown in Fig. 3(a), we directly mark the nine surface markers on the lenslet array. These markers are recorded in the elemental images, as shown in Fig. 3(b), and are used as reference points. The size of each surface marker is identical to that of one lenslet. In the recorded elemental images, a marker can be easily extracted because its intensity (or color) is known in advance. The central position of each recorded marker is computed by averaging the pixel coordinates of its estimated area and is used as the marker position. Using the estimated marker positions, we compensate for the geometric distortion caused by misalignment. Consider an estimated marker position (x, y) in the originally recorded elemental images and its target position (u, v) in the desired elemental images.

Now, suppose we recover the mapping from the distorted elemental images to the desired elemental images using the position information of the n surface markers. This mapping can be expressed by a single projective transform,

$$u = \frac{p_1 x + p_2 y + p_3}{p_7 x + p_8 y + p_9}, \qquad (1)$$
$$v = \frac{p_4 x + p_5 y + p_6}{p_7 x + p_8 y + p_9}, \qquad (2)$$
with eight degrees of freedom $P = (p_1, \ldots, p_9)^T$ constrained by $\|P\| = 1$. The same transform is expressed in homogeneous coordinates as
$$\begin{pmatrix} uw \\ vw \\ w \end{pmatrix} = \begin{pmatrix} p_1 & p_2 & p_3 \\ p_4 & p_5 & p_6 \\ p_7 & p_8 & p_9 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}. \qquad (3)$$
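For illustration, Eq. (3) can be evaluated per marker position as in the following minimal sketch; `apply_homography` is our own hypothetical helper name, not part of the paper:

```python
import numpy as np

def apply_homography(P, x, y):
    """Map (x, y) to (u, v) through the 3x3 projective transform P,
    i.e. compute (uw, vw, w) = P (x, y, 1)^T and divide by w."""
    uw, vw, w = P @ np.array([x, y, 1.0])
    return uw / w, vw / w

# A pure translation transform shifts every point by (5, -2).
P = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0,  1.0]])
```

With eight free parameters, four point correspondences in general position would fix P exactly; using all n = 9 markers makes the estimate robust to marker-detection noise.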
P can be determined when the estimated positions of the n markers are given. With the n point correspondences shown in Fig. 3, Eq. (3) can be rearranged into Eq. (4).

$$AP = \begin{bmatrix} x_1 & y_1 & 1 & 0 & 0 & 0 & -u_1 x_1 & -u_1 y_1 & -u_1 \\ 0 & 0 & 0 & x_1 & y_1 & 1 & -v_1 x_1 & -v_1 y_1 & -v_1 \\ & & & & \vdots & & & & \\ x_n & y_n & 1 & 0 & 0 & 0 & -u_n x_n & -u_n y_n & -u_n \\ 0 & 0 & 0 & x_n & y_n & 1 & -v_n x_n & -v_n y_n & -v_n \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \\ \vdots \\ p_9 \end{bmatrix} = 0. \qquad (4)$$

In Eq. (4), the goal is to find the unit vector P that minimizes |AP|; this is the eigenvector corresponding to the smallest eigenvalue of ATA. For the nine surface markers used in our method, the correction parameters are obtained from Eq. (4). An example of our correction of distorted elemental images is shown in Fig. 3(c).
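This eigenvector solution can be sketched as follows, assuming exact marker correspondences; `estimate_homography` is an illustrative name, and we rely on NumPy's `eigh` returning eigenvalues in ascending order, so column 0 of the eigenvector matrix is the minimizer:

```python
import numpy as np

def estimate_homography(src, dst):
    """Build the 2n x 9 matrix A of Eq. (4) from n correspondences
    (x, y) -> (u, v) and return the unit vector P minimizing |AP|,
    reshaped into the 3x3 projective transform."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    # Eigenvector of A^T A with the smallest eigenvalue (||P|| = 1).
    _, vecs = np.linalg.eigh(A.T @ A)
    return vecs[:, 0].reshape(3, 3)
```

Since P is recovered only up to scale and sign, dividing by one nonzero entry (e.g. the last) makes it directly comparable to a known transform.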

3. Experiments and results

3.1 Computational experiments

To show the usefulness of the proposed method in CIIR, we first carried out computational experiments, which allow a precise analysis of the proposed method. Figure 4(a) illustrates the setup of the computational tests. The pinhole array consists of 34×25 pinholes and is located at z=0 mm. The interval between pinholes is 1.08 mm, and the gap g between the elemental images and the pinhole array is 3 mm. Nine surface markers were used, as shown in Fig. 3; they have a grey level of 255 for easy detection. Three images named Lena, Cow, and Car are used as test images, as shown in Fig. 5(a). The size of each image is 1020×750 pixels. With the test image located at a distance z=27 mm from the pinhole array, the elemental images were synthesized by computational pickup based on simple ray-geometric analysis [14]. The full set of elemental images is 34×25, and each elemental image has 30×30 pixels. The synthesized elemental images were then distorted geometrically using a graphics tool. For the distorted elemental images, the correction process was performed as shown in Fig. 4(b). First, the positions of the surface markers were estimated; Fig. 6 shows the detection process. The distorted elemental images are thresholded at the known marker intensity, and blob (connected-component) analysis is performed to obtain the blob areas corresponding to the surface markers. We then calculate the center position of each blob and apply this position information to the single projective transform of Eq. (4), which yields the corrected elemental images.
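The marker-detection step just described can be sketched in pure NumPy; the breadth-first flood fill below stands in for a generic blob (connected-component) routine, and `find_marker_centers` is our own illustrative name, not the authors' code:

```python
import numpy as np
from collections import deque

def find_marker_centers(image, marker_level=255):
    """Threshold at the known marker intensity, group marker pixels into
    4-connected blobs, and return each blob's center as the average of
    its pixel coordinates (row, col)."""
    mask = image == marker_level
    seen = np.zeros_like(mask, dtype=bool)
    centers = []
    h, w = mask.shape
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        # Breadth-first flood fill collects one blob.
        queue = deque([(sy, sx)])
        seen[sy, sx] = True
        pixels = []
        while queue:
            y, x = queue.popleft()
            pixels.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    queue.append((ny, nx))
        ys, xs = zip(*pixels)
        centers.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centers
```

The resulting centers, paired with the known marker positions on an undistorted lenslet grid, are exactly the (x, y) → (u, v) correspondences needed by Eq. (4).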

Fig. 4 Conceptual diagram for computational test (a) Pickup and distortion process (b) Correction and CIIR process.

Fig. 5 (a) Three test images for computational experiments. (b) Their elemental images with markers.

Fig. 6 Diagram of our correction process.

To evaluate the characteristics of the elemental images of the three images in Fig. 5(a), we simulated the CIIR method shown in Fig. 1(b). The principle of CIIR used in the experiment is as follows [11]. First, each elemental image is inversely projected through the corresponding pinhole. Second, when an image is reconstructed on the output plane at distance z from the pinhole array, the inversely projected elemental image is digitally magnified by a factor of z/g, the ratio of the distance z between the virtual pinhole array and the output plane to the distance g between the pinhole array and the elemental-image plane. Third, the magnified elemental images are overlapped and summed at the corresponding pixels of the output plane. To obtain the complete reconstructed plane image of a 3D object at distance z, the same process is applied to all picked-up elemental images through their corresponding pinholes. By iterating this process over increasing z values, a series of z-dependent plane images can be obtained; when z coincides with the distance at which the original object was located, a clear plane image results. The corrected elemental images were applied to CIIR, and the plane images were reconstructed at z=27 mm. To quantitatively estimate the visual quality of the reconstructed plane images, the peak signal-to-noise ratio (PSNR) was employed as the image quality measure [14].
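The three reconstruction steps above, together with the PSNR measure, can be sketched as follows. This is a simplified illustration, not the authors' exact implementation: the helper names are ours, the magnification z/g is assumed to be an integer, and magnification is done by pixel replication:

```python
import numpy as np

def ciir_reconstruct(elemental, pitch, z, g):
    """Simplified CIIR: each pitch-by-pitch elemental image is digitally
    magnified by M = z/g (inverse projection through its pinhole),
    superimposed on the output plane at its pinhole position, and the
    accumulated sum is normalized by the overlap count."""
    M = int(round(z / g))                  # magnification factor z/g
    rows = elemental.shape[0] // pitch
    cols = elemental.shape[1] // pitch
    H = (rows - 1) * pitch + M * pitch     # output-plane height
    W = (cols - 1) * pitch + M * pitch     # output-plane width
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for i in range(rows):
        for j in range(cols):
            cell = elemental[i*pitch:(i+1)*pitch, j*pitch:(j+1)*pitch]
            big = np.kron(cell, np.ones((M, M)))   # magnify by M
            acc[i*pitch:i*pitch + M*pitch, j*pitch:j*pitch + M*pitch] += big
            cnt[i*pitch:i*pitch + M*pitch, j*pitch:j*pitch + M*pitch] += 1
    return acc / np.maximum(cnt, 1)        # overlap normalization

def psnr(ref, img, peak=255.0):
    """PSNR between a reference and a reconstructed plane image."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

In the computational experiments above, g = 3 mm and z = 27 mm, so the in-focus plane is reconstructed with a magnification of 9.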

3.2 Computational test for rotation and skew distortions

We consider two distortions in our experiments: rotation and skew. First, rotation distortion was applied in the procedure of Fig. 4. We reconstructed plane images by CIIR and measured the PSNR for various rotation angles. The reconstructed plane images are shown in Fig. 7, and the PSNR results are presented in Fig. 8. As Fig. 7 shows, the rotation-distorted elemental images yield severely distorted plane images, and in Fig. 8 the PSNR decreases as the rotation angle increases. In contrast, the elemental images corrected by our method recovered the correct plane images with high PSNR values at all rotation angles.

Fig. 7 Examples of reconstructed plane images according to the rotation angle.

Fig. 8 PSNR results for rotation distortion in CIIR

Next, we applied skew distortion to the elemental images of the three test images. The elemental images were skew-distorted using a graphics tool, as shown in Fig. 9. The PSNR between the original and the corrected elemental images is given below each elemental image. We then performed the CIIR experiment using these elemental images; the reconstructed images are shown in Fig. 10. With the skew-distorted elemental images, the plane images were not reconstructed clearly, whereas the corrected elemental images yielded well-reconstructed plane images. The results of Fig. 10 show that our correction method reconstructs the plane images precisely.

Fig. 9 Skew-distorted elemental images (a) Lena (b) Cow (c) Car.

Fig. 10 (a) Original elemental images and its CIIR image (b) Skew-distorted elemental images and its CIIR image (c) Corrected elemental images and its CIIR image.

3.3 Experiments for real 3D object

To evaluate the proposed method for a real 3D object, experiments were carried out using elemental images optically picked up from a real 3D object. The experimental setup is shown in Fig. 11. The test object is a toy car located longitudinally at approximately z=55 mm from the lenslet array. The lenslet array, with 30×30 lenslets, is located at z=0 mm; each lenslet has a size d of 1.08 mm, and each elemental image is composed of 30×30 pixels. To realize the surface markers, we used a marker sheet in which square markers are printed on a transparent sheet, as shown in Fig. 11; the sheet was placed in contact with the lenslet array. The elemental images obtained by optical pickup are shown in Fig. 12(a). Using the recorded elemental images, we performed CIIR experiments, and the plane images were reconstructed at z=55 mm, as shown in Fig. 12(b). We tested rotation and skew distortions in this experiment. First, rotation-distorted elemental images were recorded by rotating the lenslet array by approximately 1°; the recorded elemental images and their CIIR image are shown in Fig. 13(a). Next, skew-distorted elemental images and their CIIR image, shown in Fig. 14(a), were obtained by tilting the lenslet array by approximately 5°. In both cases, the distorted elemental images produced unclear CIIR images. Our correction method was then applied to both sets of distorted elemental images; the corrected elemental images and their CIIR images are shown in Figs. 13(b) and 14(b), respectively. The experimental results show that the elemental images corrected by our method improve the visual quality of the plane images dramatically.

Fig. 11 Experimental structure for pickup of real 3D object.

Fig. 12 (a) Original elemental images without distortion (b) CIIR image.

Fig. 13 (a) Rotational-distorted elemental images and its CIIR image (b) Rotation-corrected elemental images and its CIIR image.

Fig. 14 (a) Skew-distorted elemental images and its CIIR image (b) Skew-corrected elemental images and its CIIR image.

4. Discussion and conclusion

In the proposed method, we believe that the accuracy may improve as the number of surface markers increases. However, each marker occludes a lenslet, so the corresponding elemental images are lost in the ROS-based pickup system. In this case, the missing elemental images can be recovered by a computational method such as the intermediate-view reconstruction technique [19].

In conclusion, we have proposed a method for correcting elemental images distorted by geometric errors in the pickup stage of CIIR-based application systems. Since the proposed method uses surface markers on the lenslet array, it has a simple structure and provides good compensation ability. The experimental results show that our method can recover high-quality elemental images even when severely distorted elemental images are recorded.

Acknowledgment

We thank the anonymous reviewer whose valuable comments greatly improved this paper. This research was supported by Institute of Ambient Intelligence (IAI), Dongseo University.

References and links

1. S. A. Benton, ed., Selected Papers on Three-Dimensional Displays (SPIE Optical Engineering Press, Bellingham, WA, 2001).

2. T. Okoshi, “Three-dimensional display,” Proc. IEEE 68(5), 548–564 (1980). [CrossRef]  

3. A. R. L. Travis, “The display of Three-dimensional video images,” Proc. IEEE 85(11), 1817–1832 (1997). [CrossRef]  

4. D. H. McMahon and H. J. Caulfield, “A technique for producing wide-angle holographic displays,” Appl. Opt. 9(1), 91–96 (1970). [CrossRef]   [PubMed]  

5. G. Lippmann, “La photographie integrale,” Comptes-Rendus Academie des Sciences 146, 446–451 (1908).

6. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36(7), 1598–1603 (1997). [CrossRef]   [PubMed]  

7. J.-S. Jang and B. Javidi, “Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics,” Opt. Lett. 27(5), 324–326 (2002). [CrossRef]  

8. S.-W. Min, B. Javidi, and B. Lee, “Enhanced three-dimensional integral imaging system by use of double display devices,” Appl. Opt. 42(20), 4186–4195 (2003). [CrossRef]   [PubMed]  

9. D.-H. Shin, S.-H. Lee, and E.-S. Kim, “Optical display of true 3D objects in depth-priority integral imaging using an active sensor,” Opt. Commun. 275(2), 330–334 (2007). [CrossRef]  

10. R. Martinez-Cuenca, A. Pons, G. Saavedra, M. Martinez-Corral, and B. Javidi, “Optically-corrected elemental images for undistorted Integral image display,” Opt. Express 14(21), 9657–9663 (2006). [CrossRef]   [PubMed]  

11. S.-H. Hong, J.-S. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express 12(3), 483–491 (2004). [CrossRef]   [PubMed]  

12. S.-H. Hong and B. Javidi, “Improved resolution 3D object reconstruction using computational integral imaging with time multiplexing,” Opt. Express 12(19), 4579–4588 (2004). [CrossRef]   [PubMed]  

13. J.-S. Park, D.-C. Hwang, D.-H. Shin, and E.-S. Kim, “Resolution-enhanced three-dimensional image correlator using computationally reconstructed integral images,” Opt. Commun. 276, 72–79 (2007). [CrossRef]  

14. D.-H. Shin and H. Yoo, “Image quality enhancement in 3D computational integral imaging by use of interpolation methods,” Opt. Express 15(19), 12039–12049 (2007). [CrossRef]   [PubMed]  

15. D.-H. Shin and E.-S. Kim, “Computational integral imaging reconstruction of 3D object using a depth conversion technique,” J. Opt. Soc. Korea 12(3), 131–135 (2008). [CrossRef]  

16. R. Martínez-Cuenca, H. Navarro, G. Saavedra, B. Javidi, and M. Martinez-Corral, “Enhanced viewing-angle integral imaging by multiple-axis telecentric relay system,” Opt. Express 15(24), 16255–16260 (2007). [CrossRef]   [PubMed]  

17. J. Hahn, Y. Kim, E. H. Kim, and B. Lee, “Undistorted pickup method of both virtual and real objects for integral imaging,” Opt. Express 16(18), 13969–13978 (2008). [CrossRef]   [PubMed]  

18. M. Kawakita, H. Sasaki, J. Arai, F. Okano, K. Suehiro, Y. Haino, M. Yoshimura, and M. Sato, “Geometric analysis of spatial distortion in projection-type integral imaging,” Opt. Lett. 33(7), 684–686 (2008). [CrossRef]   [PubMed]  

19. D.-C. Hwang, J.-S. Park, S.-C. Kim, D.-H. Shin, and E.-S. Kim, “Magnification of 3D reconstructed images in integral imaging using an intermediate-view reconstruction technique,” Appl. Opt. 45(19), 4631–4637 (2006). [CrossRef]   [PubMed]  
