
Enhanced viewing-angle integral imaging by multiple-axis telecentric relay system

Open Access

Abstract

One of the main limitations of integral imaging is its narrow viewing angle. This drawback comes from the limited field of view of the microlenses during pickup and display. We propose a novel all-optical technique that allows a substantial increase of the field of view of any microlens and, therefore, of the viewing angle of integral-imaging displays.

©2007 Optical Society of America

1. Introduction

The design and development of techniques for the recording and display of three-dimensional (3D) images has attracted the attention of scientists and engineers in very different disciplines. Examples of this are the works by Smith [1] and Porterfield [2] who, during the 18th century, studied the influence of binocular vision on the disparity between the perceived images and on depth perception. In the 19th century Wheatstone [3] built the first stereoscope. Note that with a stereoscope two observers at different locations would perceive the same image.

At the beginning of the 20th century, Lippmann [4] proposed Integral Photography (IP), which allows the reconstruction of true 3D images. Based on the principle of reversibility of light rays, IP is characterized by being autostereoscopic, i.e., there is no need for any additional device to perceive the 3D images. A few years later, H. E. Ives [5] simplified the implementation of Lippmann's technique by using lens sheets and parallax barriers instead of fly's-eye lenses. Many years later the groups led by Burckhardt, Okoshi and McCormick [6-8] continued the study and improvement of IP designs.

The first attempts to obtain 3D animated images were severely limited by the low resolution of the digital cameras and video projectors available in the 1980s. However, advances in microlens-array manufacturing techniques and the development of high-resolution digital methods have made it possible to revive the IP concept under the name of Integral Imaging (InI) [9]. Nowadays InI has become a promising procedure for the production of real-time 3D image displays [10-26]. In the past few years important research efforts have been devoted to overcoming fundamental limitations of InI, such as the limited depth of field [12-15], the generation of orthoscopic integral images [9,16] and the improvement of the quality of displayed images [17,18]. In addition, there have been remarkable practical advances in the design of 2D-3D convertible displays [19] and multiview video architecture and rendering [20].

Due to its advanced degree of development, InI technology could be ready for massive commercialization in the coming decades. However, there are still some problems that delay such implementation. One is the limited viewing angle of InI monitors. This is a very important deficiency, because the 3D images reconstructed with InI systems can be visualized only from a narrow range of angular positions. Some research groups have recognized the importance of this problem and have made interesting proposals. Among them, the use of dynamic barriers [21], the switching of the elemental lenses [22], and the use of curved lens arrays [23,24] are remarkable. These techniques, however, do not display simultaneously in all directions or, in the latter case, are useful only for 3D scenes in the vicinity of the center of the curved array. Other proposals for achieving some enhancement of the viewing angle consider the use of a 2000-scanning-line video system [25] or the computational synthetic aperture [26].

The aim of this paper is to present a novel architecture for pickup and display, which we name the Multiple-Axis Telecentric RElay System (MATRES). In the pickup, the MATRES permits the parallel acquisition of three sets of elemental images, each with a different, but complementary, range of viewing angles. The telecentric nature of the MATRES allows the optical implementation of the barriers that prevent the overlapping between elemental images. In the display stage, the use of the MATRES allows the simultaneous display of the three sets of elemental images and, therefore, the visualization of the reconstructed images over a threefold viewing angle. The power of the proposed method results from the combination of the telecentric-relay concept and space-division multiplexing. Space-division multiplexing is faster than time-division multiplexing [27], and allows the viewing angle to be increased without any cost in resolution or depth of field.

2. Viewing angle limitations

Let us start by drawing a schematic configuration of the pickup stage of an InI system. As shown in Fig. 1(a), a collection of elemental images, each with a different perspective of the 3D scene, is generated onto a matrix sensor. Note, however, that this architecture cannot be used for an actual pickup. This is because, when capturing large scenes, the elemental images are much larger than the corresponding elemental cells, giving rise to a strong overlapping between elemental images. In such a case, any elemental cell receives light from many microlenses and therefore no useful information can be extracted from it. Typical solutions to this problem are the use of an array of GRIN microlenses [28] or the insertion of opaque barriers, commonly known as optical barriers [21], as shown in the scheme of Fig. 1(b).
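To make the overlap argument concrete, the short Python sketch below (our own illustration, not part of the original setup) traces the chief ray from an off-axis object point through a single microlens and checks whether its image lands outside that microlens's elemental cell; all parameter values are hypothetical.

    # Toy paraxial check (illustrative values): does the image of an off-axis
    # object point, formed through a single microlens, spill outside that
    # microlens's elemental cell when no barriers are used?
    p = 1.0      # microlens pitch (mm), hypothetical
    g = 5.26     # microlens-to-sensor gap (mm), hypothetical
    d = 100.0    # object distance from the MLA (mm), hypothetical

    def image_height(h_obj, x_lens):
        """Paraxial image height on the sensor for an object point at height
        h_obj, imaged through the microlens centered at x_lens."""
        return x_lens - (h_obj - x_lens) * g / d   # lateral magnification -g/d

    x_lens = 0.0                  # consider the central microlens
    for h in (0.0, 5.0, 20.0):    # object heights (mm)
        x_img = image_height(h, x_lens)
        inside = abs(x_img - x_lens) <= p / 2
        print(f"h = {h:5.1f} mm -> image at x = {x_img:+.3f} mm "
              f"({'inside' if inside else 'outside'} the elemental cell)")

For sufficiently off-axis points the image falls outside the cell, i.e., it invades the neighboring elemental cell, which is the overlap that the barriers are meant to suppress.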


Fig. 1. The pickup stage of an InI system. (a) The elemental images can invade the neighboring elemental cells; (b) the opaque barriers prevent the overlapping between elemental images.


In a previous paper [29] we suggested the use of a telecentric relay system (TRES). As shown in Fig. 2, the TRES is inserted between the microlens array (MLA) and the image sensor, so that three problems are solved simultaneously: (i) the difference in size between the MLA and the image sensor; (ii) the overlapping between elemental images; and (iii) the shift of the elemental images towards the optical axis of the complete system.


Fig. 2. Pickup stage with the TRES inserted between the MLA and the image sensor. The micro-EP is the conjugate of the aperture stop through the microlens.


The conjugates of the TRES aperture stop through the different microlenses are the elemental entrance pupils of the system, and will be denoted as micro-EPs. The micro-EPs are located just in front of the center of each microlens. Thus, among the rays emitted by the object, only those passing through the micro-EPs, and therefore landing on the corresponding elemental cell, are captured. As seen in the figure, any microlens can form the image only of the parts of the object lying within a cone originating at the corresponding micro-EP. The angular extension of such a cone defines the field of view (FOV) of the microlens. The FOV covered by the whole MLA is much wider than the one covered by a single microlens. In summary, the use of the TRES permits the implementation of the barriers by optical means.

The use of the TRES in the display stage prevents the typical flipping effect that appears when an elemental image is seen through a neighboring microlens. The rays follow the same trajectory as in the pickup, but in the reverse direction (see Fig. 3). The image of any point of the 3D object is reconstructed by the intersection of rays emanating from the elemental images in which such a point was recorded. Due to the optical barriers, any point of the object is recorded in only a few elemental images. Consequently, during the display the image of such a point can be seen only through the same few microlenses. This effect determines the viewing angle of InI displays. In the example of Fig. 3, in which the display is illustrated with a distorted scale, the image of the central point of the 3D object is stored only in some central elemental images. The viewing angle, Ω, can be calculated as tan Ω = p/(2f), where p and f are the pitch and the focal length of the MLA, respectively.
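As a quick numeric illustration of this formula (a sketch of ours, using microlens parameters of the same order as those in Section 4):

    import math

    # Illustrative check of tan(Omega) = p / (2 f); values are hypothetical.
    p = 1.01   # microlens pitch (mm)
    f = 5.0    # microlens focal length (mm)

    omega = math.degrees(math.atan(p / (2 * f)))
    print(f"Conventional viewing angle: Omega ~= {omega:.1f} degrees")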


Fig. 3. The viewing angle is determined by the pitch and focal length of the microlenses.


3. Oblique optical barriers by multi-axis telecentric relay

To overcome the problem of the viewing-angle limitation in InI monitors, Choi et al. suggested a clever method in Ref. [21], where they presented the concept of an oblique opaque-barrier array. As shown above, when the set of elemental images is captured with perpendicular barriers, the reconstructed image of a given point of the object, for example the central point of the arrow, can be seen from positions within the cone of angle Ω. If the set of elemental images is captured with oblique barriers, the reconstructed image can be seen from a range of angles different from the one obtained in the perpendicular-barrier case. Thus, if one could capture three sets of elemental images with three different barrier arrangements and display them simultaneously, the viewing angle could be expanded by a factor of three. To do that, Choi et al. suggested tilting the barrier fast enough to induce the afterimage effect, and synchronizing the display of the corresponding elemental images.

What we propose here is a new design of the telecentric relay for the parallel acquisition, by optical means, of the three sets of elemental images. The same system is used for the parallel display of the three sets. We name the proposed optical architecture the Multiple-Axis Telecentric RElay System. As shown in Fig. 4, in the MATRES the camera lens is substituted by an array of three camera lenses, each with its corresponding aperture stop. In this way, the telecentricity condition is fulfilled in three directions: the perpendicular one and two oblique directions. The central camera lens acquires the same collection of elemental images as the conventional TRES. The left and right camera lenses acquire two additional sets of elemental images, each with different, but complementary, perspective information. Note that the MATRES allows the simultaneous implementation, by optical means, of three sets of perpendicular and oblique barriers. As in Fig. 3, any microlens can form the image only of the parts of the object lying within a cone originating at the corresponding micro-EP. But now each microlens has, in parallel, three micro-EPs. In other words, each microlens has three complementary cones associated with it, which yields a threefold increase of the FOV of any microlens.


Fig. 4. Illustration of the MATRES. (a) The telecentricity condition holds in three directions; (b) the three micro-EPs corresponding to a single microlens, showing that there is no crosstalk between the picked-up images.


The same MATRES is used in the display stage. The rays follow the same trajectory as in the pickup, but in the reverse direction, allowing the implementation by optical means of the perpendicular and oblique barriers that avoid the flipping effect (see Fig. 5). The image of any point of the 3D object is reconstructed by the intersection of rays emanating from the three sets of elemental images. Consequently, during the display the image of a point (for example the central point of the arrow) is perceived from a wider range of viewing angles. Specifically, with the MATRES the tangent of the viewing angle, Ω_M, is enlarged by a factor of three as compared with the conventional TRES realization.
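Continuing the illustrative calculation of Section 2, the following sketch (ours, with hypothetical parameter values) compares the viewing angles obtained with the conventional TRES and with the MATRES:

    import math

    # Illustrative arithmetic only: the MATRES triples the tangent of the
    # viewing angle, tan(Omega_M) = 3 tan(Omega). Values are hypothetical.
    p, f = 1.01, 5.0                 # microlens pitch and focal length (mm)
    tan_omega = p / (2 * f)          # conventional TRES
    tan_omega_m = 3 * tan_omega      # MATRES: three micro-EPs per microlens

    print(f"TRES:   Omega   ~= {math.degrees(math.atan(tan_omega)):.1f} deg")
    print(f"MATRES: Omega_M ~= {math.degrees(math.atan(tan_omega_m)):.1f} deg")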


Fig. 5. 3D display with the MATRES enlarges the viewing angle by a factor of three. As in the pickup, the micro-EPs are the conjugates of the aperture stops through the field lens.


4. Hybrid experiment

To illustrate our approach, we performed a hybrid experiment in which the pickup was simulated numerically whereas the reconstruction was obtained experimentally in the laboratory. In the pickup, a microlens array composed of 25×25 square microlenses, 1.01 mm×1.01 mm in size and with focal length f=5 mm, was used. Each elemental cell consisted of 91×91 pixels. We calculated two sets of elemental images (the central and the right ones, see Fig. 4) of a 3D scene consisting of two capital letters, namely R and B, each printed on a different plate and located at distances of 50 mm and 110 mm from the MLA. The letters were surrounded by a square frame. The size of the letters was set so that they were recorded with the same size and, therefore, with the same resolution. In the simulation the reference plane was set at d=100 mm, so that the gap between the microlenses and the aerial-image plane was g=5.26 mm. The two sets of elemental images obtained with the MATRES are shown in Fig. 6. Note that the central set is precisely the one that would be obtained if the pickup were performed with the conventional TRES.
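The value of g quoted above follows from thin-lens conjugation of the reference plane through a microlens; the following minimal check (our own sketch, assuming the standard 1/d + 1/g = 1/f relation) reproduces it:

    # Thin-lens conjugation of the reference plane: 1/d + 1/g = 1/f.
    f = 5.0     # microlens focal length (mm)
    d = 100.0   # distance from the MLA to the reference plane (mm)

    g = d * f / (d - f)
    print(f"gap g = {g:.2f} mm")   # -> 5.26 mm, the value used in the simulation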

For the experimental reconstruction we printed the two sets of elemental images on photographic paper. The prints were illuminated with white light and inserted in the display setup, which was composed of the same MLA as in the pickup stage, a field lens of focal length f_L = 200 mm, and two camera lenses of focal length f_C = 225 mm. The aperture-stop diameters were set to ϕ = 2 mm. The separation between the central and the right optical axes was Δ = p f_L/g ≈ 38 mm. In Fig. 7 we show a movie of the reconstructed image. The frames of the movie were obtained with a digital camera placed at a distance D = 370 mm from the MLA. We displaced the camera laterally from the central position (x = 0) to x = +180 mm, which corresponds to viewing angles within the interval [0°, +25.9°]. In the left-hand movie we show the reconstruction corresponding to the conventional pickup and display, namely, the one in which only the central set of elemental images is used. In the right-hand movie we show the image reconstructed with the MATRES. It is clear that the use of the MATRES in pickup and display enlarges the viewing angle of the InI monitor.
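The numbers quoted in this paragraph can be rechecked with elementary geometry; the sketch below (ours; in particular, the relation Δ = p f_L/g is our reading of the geometry, not a formula taken from the setup description) reproduces the maximum observation angle and the axis separation:

    import math

    # Rechecking the quoted numbers with elementary geometry (our own sketch).
    D = 370.0       # camera-to-MLA distance (mm)
    x_max = 180.0   # maximum lateral camera displacement (mm)
    print(f"maximum observation angle ~= "
          f"{math.degrees(math.atan(x_max / D)):.1f} deg")   # ~25.9 deg

    # Axis separation: Delta = p * f_L / g (assumed relation).
    p, f_L, g = 1.01, 200.0, 5.26   # pitch, field-lens focal length, gap (mm)
    print(f"axis separation Delta ~= {p * f_L / g:.0f} mm")  # ~38 mm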


Fig. 6. Set of 25×25 elemental images obtained with: (a) the central camera lens; (b) the right-hand camera lens.



Fig. 7. Reconstructed images obtained with the conventional setup (left movie) and with the MATRES (right movie). (Video file, 3.1 MB.) [Media 1]


5. Conclusions

We have presented an all-optical technique for the substantial enhancement of the viewing angle of InI monitors. The method is based on the extension of the concept of telecentricity to oblique angles. Additionally, the multi-axis telecentric relay prevents, by optical means, the overlapping between elemental images in the pickup and the flipping in the display. We have performed a hybrid experiment showing that the proposed setup provides reconstructed 3D scenes with almost a threefold increase of the viewing angle. Naturally, the use of small aperture-stop diameters reduces the intensity of the captured light, which may require longer camera exposure times and high-intensity projectors.

Acknowledgements

This work has been funded in part by the Plan Nacional I+D+I (grant DPI2006-8309), Ministerio de Educación y Ciencia, Spain. R. Martínez-Cuenca acknowledges funding from the Universitat de València (Cinc Segles grant). We also acknowledge the support from the Generalitat Valenciana (grant GV06/219).

References and Links

1. R. Smith, A Complete System of Optics in Four Books, viz. A Popular, a Mathematical, a Mechanical, and a Philosophical Treatise. To Which Are Added Remarks upon the Whole, Vol. 2 (Cambridge, 1738).

2. W. Porterfield, A Treatise on the Eye, the Manner and Phaenomena of Vision (Hamilton and Balfour, Edinburgh, 1759).

3. C. Wheatstone, "On some remarkable, and hitherto unobserved, phenomena of binocular vision," Philos. Trans. R. Soc. London 128, 371-394 (1838).

4. G. Lippmann, "Epreuves reversibles donnant la sensation du relief," J. Phys. (Paris) 7, 821-825 (1908).

5. H. E. Ives, "Optical properties of a Lippmann lenticulated sheet," J. Opt. Soc. Am. 21, 171-176 (1931).

6. C. B. Burckhardt, "Optimum parameters and resolution limitation of integral photography," J. Opt. Soc. Am. 58, 71-76 (1968).

7. T. Okoshi, "Optimum design and depth resolution of lens-sheet and projection-type three-dimensional displays," Appl. Opt. 10, 2284-2291 (1971).

8. L. Yang, M. McCormick, and N. Davies, "Discussion of the optics of a new 3-D imaging system," Appl. Opt. 27, 4529-4534 (1988).

9. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, "Real-time pickup method for a three-dimensional image based on integral photography," Appl. Opt. 36, 1598-1603 (1997).

10. J. Arai, F. Okano, H. Hoshino, and I. Yuyama, "Gradient-index lens-array method based on real-time integral photography for three-dimensional images," Appl. Opt. 37, 2034-2045 (1998).

11. B. Javidi and F. Okano, eds., Three-Dimensional Television, Video, and Display Technologies (Springer-Verlag, Berlin, 2002).

12. M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, "Integral imaging with improved depth of field by use of amplitude-modulated microlens array," Appl. Opt. 43, 5806-5813 (2004).

13. R. Martínez-Cuenca, G. Saavedra, M. Martínez-Corral, and B. Javidi, "Enhanced depth of field integral imaging with sensor resolution constraints," Opt. Express 12, 5237-5242 (2004).

14. R. Martínez-Cuenca, G. Saavedra, A. Pons, B. Javidi, and M. Martínez-Corral, "Facet braiding: a fundamental problem in integral imaging," Opt. Lett. 32, 1078-1080 (2007).

15. A. Castro, Y. Frauel, and B. Javidi, "Integral imaging with large depth of field using an asymmetric phase mask," Opt. Express 15, 10266-10273 (2007).

16. M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, "Formation of real, orthoscopic integral images by smart pixel mapping," Opt. Express 13, 9175-9180 (2005).

17. J.-S. Jang and B. Javidi, "Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics," Opt. Lett. 27, 324-326 (2002).

18. M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, "Multifacet structure of observed reconstructed integral images," J. Opt. Soc. Am. A 22, 597-603 (2005).

19. J.-H. Park, H.-R. Kim, Y. Kim, J. Kim, J. Hong, S.-D. Lee, and B. Lee, "Depth-enhanced three-dimensional-two-dimensional convertible display based on modified integral imaging," Opt. Lett. 29, 2734-2736 (2004).

20. H. Liao, M. Iwahara, N. Hata, and T. Dohi, "High-quality integral videography using a multiprojector," Opt. Express 12, 1067-1076 (2004).

21. H. Choi, S.-W. Min, S. Jung, J.-H. Park, and B. Lee, "Multiple viewing zone integral imaging using dynamic barrier array for three-dimensional displays," Opt. Express 11, 927-932 (2003).

22. J.-H. Park, S. Jung, H. Choi, and B. Lee, "Viewing-angle-enhanced integral imaging by elemental image resizing and elemental lens switching," Appl. Opt. 41, 6875-6883 (2002).

23. J.-S. Jang and B. Javidi, "Depth and size control of three-dimensional images in projection integral imaging," Opt. Express 12, 3778-3790 (2004).

24. Y. Kim, J.-H. Park, S.-W. Min, S. Jung, H. Choi, and B. Lee, "Wide-viewing-angle integral three-dimensional imaging system by curving a screen and a lens array," Appl. Opt. 44, 546-552 (2005).

25. J. Arai, M. Okui, T. Yamashita, and F. Okano, "Integral three-dimensional television using a 2000-scanning-line video system," Appl. Opt. 45, 1704-1712 (2006).

26. A. Stern and B. Javidi, "3D computational synthetic aperture integral imaging (COMPSAII)," Opt. Express 11, 2446-2451 (2003).

27. A. Stern and B. Javidi, "Three-dimensional sensing, visualization and processing using integral imaging," Proc. IEEE 94, 591-607 (2006).

28. F. Okano, J. Arai, and M. Okui, "Amplified optical window for three-dimensional images," Opt. Lett. 31, 1842-1844 (2006).

29. R. Martínez-Cuenca, A. Pons, G. Saavedra, M. Martínez-Corral, and B. Javidi, "Optically-corrected elemental images for undistorted integral image display," Opt. Express 14, 9657-9663 (2006).

Supplementary Material (1)

Media 1: GIF (3104 KB)     
