## Abstract

We propose a multi-viewer tracking integral imaging system that improves the viewing angle and the viewing zone. In the tracking integral imaging system, the pickup angle of each elemental lens in the lens array is determined by the positions of the viewers, which means the elemental images can be generated for each viewer to provide a wider viewing angle and a larger viewing zone. Our tracking integral imaging system is implemented with an infrared camera and infrared light emitting diodes, which can track the viewers’ exact positions robustly. For multiple viewers to watch integrated three-dimensional images in the tracking integral imaging system, it is necessary to formulate the relationship between the multiple viewers’ positions and the elemental images. We analyze this relationship and the conditions for multiple viewers, and verify them by implementing a two-viewer tracking integral imaging system.

©2009 Optical Society of America

## 1. Introduction

Research on three-dimensional (3D) display devices has been conducted for more than a century, but 3D display has recently emerged as one of the hottest issues in the display industry and academia, especially in technologies adopting flat-panel displays with high frame rates. Integral imaging has attracted much attention because it can provide both horizontal and vertical parallax through a micro-lens array without glasses, and it offers quasi-continuous views to observers [1, 2]. On the other hand, integral imaging also has difficulties to overcome: limited 3D image resolution, a narrow viewing angle, and a small depth range. Much research has focused on solving these problems [3–13].

Tracking technology has been used to recognize users’ positions or motions in fields such as virtual reality, and to compensate for the weak points of some 3D display technologies. Virtual reality systems such as the CAVE (Cave Automatic Virtual Environment), the Workbench, and the Illusionhole use stereoscopic 3D display and apply tracking technology to implement motion parallax, which is an important cue for spatial cognition [14–16]. SeeReal Technologies took advantage of sub-holograms, which are generated in real time from the information of a viewer’s position [17, 18]. Eye tracking was the main enabling factor in developing large real-time holographic display systems.

In integral imaging, tracking technology can also be used to enhance the viewing angle and viewing zone for one viewer, as we proposed at a recent conference [19]. Viewer tracking enables elemental images to be generated dynamically according to the viewer’s position. As a result, a wider viewing angle and a broader viewing zone can be implemented in the same integral imaging system. However, a problem arises when more than one viewer wants to see 3D images in the tracking integral imaging system: the overlapping problem, in which the elemental images for different viewers overlap on the same elemental image plane. In this paper, the conditions under which no overlap occurs with two viewers are analyzed using the positions of the elemental images for each viewer in both the real and virtual modes. The analysis is used to interpret the system parameters of tracking integral imaging, and we implemented a tracking integral imaging system for two viewers based on this analysis.

## 2. Principle of the proposed method

We use a tracking system to change the elemental images dynamically as the viewer’s position changes. Tracking in an integral imaging system keeps the viewer always located at the central position of the viewing zone. This means the viewing angle can be widened as far as the aberrations of the lens array are tolerable and the tracking system supports a wide angle. This section explains the difference between the conventional and the tracking integral imaging systems, the overlapping problem in the tracking integral imaging system, and the analysis of viewing zones to avoid the overlapping problem.

#### 2.1 Comparison of viewing zones in conventional and tracking integral imaging systems

In the conventional integral imaging system, the positions of the elemental images are static. Therefore, the pickup angle of each elemental lens, through which rays pass during the capturing and displaying procedures, is determined by the pitch of the lens array and the gap between the lens array and the display device. From the viewer’s standpoint, the viewing zone is the space where the displaying zones of all the elemental lenses overlap, as shown in Fig. 1(a). The nearest point of the viewing zone to the lens array is proportional to the number of lenses in a row or a column of the lens array. Therefore, the system is limited in providing viewers with large binocular disparity.
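As a rough illustration of this proportionality, the sketch below estimates the nearest viewing-zone distance under a simplified geometry that we assume only for illustration (each elemental lens emits rays within a cone of half-angle arctan(P_L / 2g) about its own axis, and the viewing zone is where all N cones overlap); the function and model are ours, not from the paper.

```python
import math

def nearest_viewing_distance(num_lenses: int, lens_pitch: float, gap: float) -> float:
    """Approximate nearest viewing-zone distance of a conventional
    integral imaging display under the simplified cone model described
    in the lead-in (a hypothetical illustration, not the paper's formula)."""
    half_angle = math.atan(lens_pitch / (2.0 * gap))
    # The outermost lens centers sit (N - 1) * P_L / 2 from the array
    # center; their display cones first all overlap on the central
    # axis at this depth.
    return ((num_lenses - 1) * lens_pitch / 2.0) / math.tan(half_angle)
```

Under this model the result reduces to (N − 1)·g, so doubling the number of lenses roughly doubles the minimum viewing distance, consistent with the proportionality stated above.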

The tracking integral imaging system uses the tracked viewers’ positions to change the displaying angle of each elemental lens in the lens array in accordance with each viewer’s 3D position in real time. Therefore it can expand the available viewing zone and give viewers larger disparity at a nearer distance, as shown in Fig. 1(b). In the one-viewer tracking integral imaging system, the area for an elemental image is defined by rays from the position of the viewer passing through the center of each lens, as shown in Fig. 2, and the pickup angle *θ* is determined from that area. Both the positions of the elemental image areas and the pickup angle *θ* are variables that change continuously according to the tracking results.

Unlike in the one-viewer tracking integral imaging system [19], an overlapping problem can occur in a tracking integral imaging system for multiple viewers because the elemental image plane on the display device has a limited area. Figure 3 shows the different positions of the elemental images for the positions of three viewers. Each viewer can see only the red, green, or blue area through the same lens on the elemental image plane. However, the elemental images for the primary and secondary viewers are slightly overlapped, which we refer to as the overlapping problem.

While the conventional integral imaging system assumes that viewers are infinitely far from the lens array, the proposed system tracks a viewer located at specific 3D coordinates relative to the lens array, as shown in Fig. 4. Figure 4(a) shows the terms used to describe the positions of the elemental images and the integrated images when the central depth plane is in front of the lens array, which is called the real mode in integral imaging. Figure 4(b) shows the virtual mode, in which the central depth plane is behind the lens array. In Fig. 4, *A _{n}* is the *n*-th boundary coordinate of the elemental image area, i.e. the position where the ray from the viewer through each boundary of the elemental lens meets the elemental image plane. *B _{n}* is the *n*-th boundary coordinate of the elemental images magnified by each lens in the lens array, which form parts of the integrated images on the central depth plane. Not the whole area of each elemental image is used to make the integrated image for the viewer; only the area $\overline{{C}_{n-1}{C}_{n-2}}$ within the elemental image area $\overline{{A}_{n}{A}_{n-1}}$ is integrated. The following formulas can be applied in both the real and virtual modes.

The *y* coordinate *A _{n}* in Fig. 4 can be calculated as in Eq. (1a). In this formula, *g* is the gap between the display device and the lens array. It can be seen that the distance between *A _{n}* and *A _{n-1}*, i.e. $\overline{{A}_{n}{A}_{n-1}}$, is larger than the lens pitch *P _{L}*. For simplicity of analysis, the following formulas treat the elemental image plane as a one-dimensional line along the *y*-axis, but they can easily be extended to the two-dimensional *x*-*y* plane.

We can obtain *B _{n}* from *A _{n}*, which results in the following formulas. In Eqs. (2a) and (2b), *L* is the position of the central depth plane on the *z*-axis; *L* has a positive value in the real mode and a negative value in the virtual mode. The distance $\overline{{B}_{n}{B}_{n-1}}$ between *B _{n}* and *B _{n-1}* is obtained from *B _{n}* as in Eq. (2b). *C _{n-1}* is the position on the elemental image plane determined by the ray from *B _{n+1}* through the center of the *n*-th lens, and *C _{n-2}* is the position determined by the ray from *B _{n}* through the center of the *n*-th lens. These positions can be obtained from the following Eqs. (3a)–(3c):
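The boundary constructions described above can be sketched numerically. The snippet below is a hedged illustration under sign conventions we choose ourselves (lens centers at y = (n − 1/2)·P_L, display plane a gap g behind the array, viewer at (V_y, V_z), central depth plane at z = L); it is not a transcription of Eqs. (1)–(3), whose exact forms appear in the original figures.

```python
def A(n, Vy, Vz, PL, g):
    """Boundary A_n: ray from the viewer (Vy, Vz) through the lens
    boundary at y = n * PL, extended to the display plane (gap g)."""
    yb = n * PL
    return yb + (yb - Vy) * g / Vz

def B(n, Vy, Vz, PL, g, L):
    """Boundary B_n: A_n imaged through the center of the n-th lens
    (assumed at y = (n - 0.5) * PL) onto the central depth plane z = L.
    L > 0 corresponds to the real mode, L < 0 to the virtual mode."""
    c = (n - 0.5) * PL
    return c + (c - A(n, Vy, Vz, PL, g)) * L / g

def C_interval(n, Vy, Vz, PL, g, L):
    """The used part of the n-th elemental image area, found by tracing
    rays from B_n and B_{n+1} back through the n-th lens center to the
    elemental image plane, as in the text's definition of C_{n-2}, C_{n-1}."""
    c = (n - 0.5) * PL
    p1 = c + (c - B(n, Vy, Vz, PL, g, L)) * g / L
    p2 = c + (c - B(n + 1, Vy, Vz, PL, g, L)) * g / L
    return min(p1, p2), max(p1, p2)
```

Under these conventions the spacing A_n − A_{n−1} works out to P_L(1 + g/V_z), which exceeds the lens pitch for any finite viewer distance, matching the observation above about $\overline{{A}_{n}{A}_{n-1}}$.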

#### 2.2 Overlapping problem in multi-viewer tracking integral imaging system

When elemental images for one viewer are generated by the tracking system, they cause no problem in showing a normal integrated image to the viewer. Each elemental image can use the whole area of $\overline{{A}_{n}{A}_{n-1}}$. However, the viewer cannot watch the entire elemental image at any instant, because the integrated images are made by magnifying the specific areas $\overline{{C}_{n-1}{C}_{n-2}}$ within $\overline{{A}_{n}{A}_{n-1}}$ by the magnification ratio and integrating them in the viewer’s direction. Therefore, the entire area $\overline{{A}_{n}{A}_{n-1}}$ does not have to be used; only the area $\overline{{C}_{n-1}{C}_{n-2}}$ is needed. This means that more than one viewer can use the tracking integral imaging system, provided that there is no overlap between the elemental images for all viewers, as shown in Fig. 5. However, the tracking results must be more precise than in the one-viewer tracking integral imaging system, because each smaller area covers a smaller angle from the lens array. This angle is about 6.5° in our tracking integral imaging system.

In the real world, the positions of the viewers cannot be identical, so the elemental images are located in different positions. Nevertheless, we need to analyze the no-overlap condition rigorously to design the parameters of the multi-viewer integral imaging system optimally.

#### 2.3 Viewing zone for secondary viewer for avoiding overlapping problem in multi-viewer tracking integral imaging system

To avoid the overlapping problem, the relationship between the positions of the viewers and $\overline{{C}_{n-1}{C}_{n-2}}$ on the elemental image plane needs to be known. As shown in Fig. 5, if there is no overlap on the elemental image plane with two viewers, each viewer can watch his or her own directional integrated 3D image.

We analyzed the two modes of the integral imaging system, the real mode and the virtual mode, each with two viewers in the system. Figure 6 shows the elemental images, i.e. the sequential $\overline{{C}_{n-1}{C}_{n-2}}$ areas on the elemental image plane, for the primary and secondary viewers. In total, four cases are considered, covering each mode of integral imaging and the relative positions of the two viewers.

The first case is when *V _{y2}* > *V _{y1}* in the real mode. Here, *V _{y1}* and *V _{y2}* are the primary and secondary viewers’ positions on the *y*-axis. In Figs. 6(a) and 6(b), the thick red lines on the left of the elemental image plane represent the elemental images for the primary viewer, while the thick blue lines on the right represent those for the secondary viewer. From Fig. 6(a), we can build the following inequalities on each elemental image area for no overlap:

If *V _{z1}* = *V _{z2}* = *V _{z}* is assumed for simplicity of analysis, inequalities (4a) and (4b) can be converted to the following:

The second case is when *V _{y2}* < *V _{y1}* in the real mode, i.e. the viewers occupy relatively opposite positions compared with the first case. Inequalities (5a) and (5b) follow from the position relation in Fig. 6(b):

If *V _{z1}* = *V _{z2}* = *V _{z}* is assumed again, inequalities (5a) and (5b) can be rewritten as

Inequalities (4c) and (5c) can be combined into (6), where *ΔV _{y}* is the difference between *V _{y1}* and *V _{y2}*. It can be seen that the separation between the viewers’ positions is critical to the overlapping condition; the required separation is determined by the magnification ratio, which depends on *g*, *L*, and the focal length of the elemental lens, and by the viewers’ distance *V _{z}* from the lens array.

The no-overlap inequality in the virtual mode is different from that in the real mode because the directions of the elemental images are not inverted when they are imaged on the central depth plane, as shown in Fig. 4.

The third case is when *V _{y2}* > *V _{y1}* in the virtual mode. The relationship between the elemental image areas for the two viewers is shown in Fig. 6(c). The elemental image for the primary viewer, $\overline{C{\text{'}}_{n-1}C{\text{'}}_{n-2}}$, should be located between the boundaries of the elemental images for the secondary viewer. We obtain the following inequalities (7a) and (7b) from this condition:

Under the same assumption applied in the first and second cases, we obtain the following no-overlap relationship for the third case.

The fourth case is when *V _{y2}* < *V _{y1}* in the virtual mode. In this case, $\overline{C{\text{'}}_{n-1}C{\text{'}}_{n-2}}$ should lie between ${C}_{(n+1)-2}$ and ${C}_{n-1}$, as shown in Fig. 6(d). This can be formulated as the following inequalities (8a) and (8b).

We can also obtain a relationship similar to the third case under the same assumption that *V _{z1}* = *V _{z2}* = *V _{z}*. Inequalities (7c) and (8c) can be combined into (9).

Inequalities (6) and (9) give the condition under which two viewers can watch integrated 3D images without any overlapping problem in the tracking integral imaging system. We plotted the condition for the real mode in Fig. 7. It is assumed that the primary viewer has a shoulder width of 600 mm, represented as a yellow ellipse in each figure, and that the viewer is at the same height as the display device. The red dashed lines represent the inner boundaries for the prevention of overlapping, while the blue solid lines represent the outer boundaries. In other words, the secondary viewer can watch integrated images without any overlap when standing in the area between the red and blue lines. Figures 7(a) and 7(b) show that the secondary viewer’s viewing zone occupies areas at larger angles around the primary viewer as the magnification ratio increases. This is because the effective elemental image of the primary viewer becomes smaller, leaving a larger remaining area. The focal length of the lens array is a critical factor affecting the viewing angle: a lens array with a smaller focal length yields a larger viewing angle than one with a larger focal length, as shown in Figs. 7(c) and 7(d). Figures 7(e) and 7(f) show no difference when the primary viewer’s distance from the tracking integral imaging system is within 1.5 m to 2.5 m.
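As a numerical stand-in for checking inequalities (6) and (9), the sketch below recomputes the used elemental-image intervals for two viewers and tests them for intersection. The sign conventions (lens centers at (n − 1/2)·P_L, display gap g, central depth plane at z = L) are our own illustrative assumptions, not the paper's exact formulation.

```python
def used_interval(n, Vy, Vz, PL, g, L):
    """Used part of the n-th elemental image for a viewer at (Vy, Vz),
    under assumed conventions: lens centers at (n - 0.5) * PL, display
    gap g, central depth plane at z = L (L < 0 for the virtual mode)."""
    def A(k):  # ray from viewer through lens boundary k * PL to display
        yb = k * PL
        return yb + (yb - Vy) * g / Vz
    def B(k):  # A_k imaged through the k-th lens center onto z = L
        c = (k - 0.5) * PL
        return c + (c - A(k)) * L / g
    c = (n - 0.5) * PL
    p1 = c + (c - B(n)) * g / L
    p2 = c + (c - B(n + 1)) * g / L
    return min(p1, p2), max(p1, p2)

def viewers_overlap(v1, v2, PL, g, L, lenses=range(-10, 11)):
    """True if, for any lens, the two viewers' used intervals intersect
    on the elemental image plane (the overlapping problem)."""
    for n in lenses:
        a_lo, a_hi = used_interval(n, v1[0], v1[1], PL, g, L)
        b_lo, b_hi = used_interval(n, v2[0], v2[1], PL, g, L)
        if a_lo < b_hi and b_lo < a_hi:
            return True
    return False
```

With the illustrative parameters below, two closely spaced viewers collide on the elemental image plane while widely separated viewers do not, mirroring the role of ΔV_y in inequality (6).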

## 3. Experimental results

Many kinds of tracking technologies have been studied. However, the tracking integral imaging system requires a tracking method with a response time fast enough to be applied in a relatively large space, since the elemental images must be generated in real time while tracking the viewers. In our experiment, an infrared (IR) camera and IR light emitting diodes (LEDs) are used for tracking. This method is efficient because IR LED markers can be distinguished from visible light with little delay. The IR LED markers in the images from the IR camera are recognized by a contour-finding algorithm in OpenCV [20], and the results are used to calculate the 3D positions of the viewers. Two IR LEDs are mounted on goggles to provide a depth cue to the IR camera; they indicate the position of the viewer’s head. It is assumed that the two IR LEDs are at the same distance from the IR camera. In our system, the IR camera is installed on the monitor, and the tracking results are updated in both an information window and a tracking window, as shown in Fig. 8.
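The depth-recovery step from the two equidistant IR LEDs can be sketched with a simple pinhole-camera model. The calibration values below (marker baseline, focal length in pixels) are hypothetical, and the real system detects the marker coordinates with OpenCV contour finding rather than receiving them directly.

```python
def depth_from_markers(u1, u2, baseline_mm, focal_px):
    """Estimate the viewer's distance from the horizontal pixel
    coordinates (u1, u2) of the two IR LED markers, using a pinhole
    model and the assumption that both LEDs are equidistant from the
    camera. baseline_mm (LED separation) and focal_px (focal length
    in pixels) are hypothetical calibration values."""
    sep_px = abs(u1 - u2)
    if sep_px == 0:
        raise ValueError("markers coincide; cannot estimate depth")
    # Similar triangles: apparent separation shrinks inversely with depth.
    return focal_px * baseline_mm / sep_px
```

For example, with a 160 mm LED baseline and a 1000 px focal length, markers detected 80 px apart place the viewer at 2000 mm.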

The integral imaging system is configured with the specifications shown in Table 1. A lens array with a 10 mm lens pitch is used because lenses with a large pitch require only a small number of elemental images, allowing the elemental images to be generated from the tracking results in almost real time. We experimented in the virtual mode because, with a small magnification ratio, the secondary viewing zone is narrow enough to display in the tracking window. The central depth plane is located 85.6 mm behind the lens array, which corresponds to the center position of the 3D objects shown in Figs. 9(a) and 9(b). Figures 9(c) and 9(d) are the elemental images of each 3D object in the conventional integral imaging system, and Figs. 9(e) and 9(f) are those of each 3D object in the tracking integral imaging system for one viewer, which appear as parts of Figs. 9(c) and 9(d). Different objects for the two viewers were selected to emphasize that there is no correlation between them; it makes no difference whether the same 3D objects are used or not.

Under our experimental conditions, each elemental image covers a specific angle of 6.5° toward a viewer, as shown in Fig. 3. Therefore, the tracking error can be ignored as long as the tracked result stays within this angle. However, since other IR sources such as the sun can cause critical errors, these are rejected in software by filtering on the size of the detected shape.
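A minimal sketch of this tolerance check, assuming the 6.5° zone is centered on the true viewer direction so that half the zone is available on either side (our assumption, not stated explicitly in the text):

```python
import math

def within_tolerance(true_y, tracked_y, Vz, zone_deg=6.5):
    """True if the angular error between the true and tracked viewer
    positions (both at distance Vz from the lens array, positions in
    the same units as Vz) stays inside half of the angle covered by
    one per-viewer elemental image area (about 6.5 degrees here)."""
    err_deg = abs(math.degrees(math.atan2(true_y - tracked_y, Vz)))
    return err_deg <= zone_deg / 2.0
```

At a 2 m viewing distance this tolerates lateral tracking errors of roughly 11 cm, which suggests why coarse LED-based tracking suffices in practice.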

The result of tracking two viewers is shown in Fig. 10. Each viewer wears two IR LEDs separated by the same fixed distance. The distance from the lens array can be obtained from the separation between the two IR LED points in the IR camera images. The tracking window shows the no-overlap boundaries as blue and orange squares, which are the inner and outer boundaries, respectively. Figure 10 is a case with no overlapping problem because the secondary viewer is inside the orange squares and outside the blue squares. From this tracking result, the elemental images shown in Fig. 11 are generated. No elemental images overlap with one another.

With the positions of the two viewers fixed as shown in Fig. 10, images from seven positions are captured, as shown in Fig. 12. The primary viewer is supposed to watch ‘3D’, and the secondary viewer is supposed to watch ‘SNU’. It can be seen that the images from positions next to viewer 1 show ‘3D’ but distorted, and the images from positions next to viewer 2 show an incomplete image of ‘SNU’. The integrated image of ‘3D’ can be seen only within the narrow viewing zone around the primary viewer, and the image of ‘SNU’ only within the narrow viewing zone around the secondary viewer. There is little correlation between the two images, as shown in Fig. 12. The leftmost image is captured from the position (−800, 0, 1850) mm, and the rightmost image from the position (600, 0, 1850) mm. The other images are obtained from evenly spaced positions between the leftmost and rightmost positions.

In contrast, when the secondary viewer is inside the blue rectangle in the tracking window, as shown in Fig. 13, the overlapping problem between the two sets of elemental images occurs. Figure 14 shows the partly overlapped elemental images and the overlapped integrated image, which shows ‘3D’ and ‘SNU’ simultaneously at positions around the primary or secondary viewer. The overlapped elemental images cause distortions called the facet-braiding effect, because the parts of the elemental images needed to make an integrated image are hidden by the elemental images for the other viewer [21, 22].

## 4. Conclusion

A multi-viewer tracking integral imaging system is proposed to enhance the viewing angle and viewing zone for multiple viewers. When the elemental images are generated from the results of viewer tracking, they do not have to be made for wide-angle display; they need to integrate a 3D image only in a viewer’s direction. Therefore, the whole area of the elemental image plane corresponding to each lens in the lens array is not used; only a part of the area is used to integrate the 3D image for one direction. The rest of the area can be used to display the elemental images for other viewers, but the overlapping problem remains. To avoid this problem, we analyzed the conditions under which no overlap occurs in the two-viewer case and plotted the viewing zones where the conditions are satisfied. The secondary viewer’s overlap-free viewing zone can be expanded much further if the magnification ratio of the integral imaging system is larger, as in the focused mode. Then it becomes possible for two or more viewers to watch their own integrated images with a wider viewing angle.

## Acknowledgment

This research was supported by the IT R&D program of MKE/IITA. [2009-F-208-01, Signal Processing Elements and their SoC Developments to Realize the Integrated Service System for Interactive Digital Holograms].

## References and links

**1. **T. Okoshi, *Three-Dimensional Imaging Techniques* (Academic Press, New York, 1976).

**2. **B. Lee, J.-H. Park, and S.-W. Min, “Three-dimensional display and information processing based on integral imaging,” in *Digital Holography and Three-Dimensional Display*, T.-C. Poon, ed. (Springer, 2006), Chap. 12, 333–378.

**3. **J.-H. Park, S.-W. Min, S. Jung, and B. Lee, “Analysis of viewing parameters for two display methods based on integral photography,” Appl. Opt. **40**(29), 5217–5232 (2001). [CrossRef]

**4. **J.-H. Park, S. Jung, H. Choi, and B. Lee, “Integral imaging with multiple image planes using a uniaxial crystal plate,” Opt. Express **11**(16), 1862–1875 (2003). [CrossRef] [PubMed]

**5. **J.-S. Jang and B. Javidi, “Large depth-of-focus time-multiplexed three-dimensional integral imaging by use of lenslets with nonuniform focal lengths and aperture sizes,” Opt. Lett. **28**(20), 1924–1926 (2003). [CrossRef] [PubMed]

**6. **D.-H. Shin and E.-S. Kim, “Computational integral imaging reconstruction of 3D object using a depth conversion technique,” J. Opt. Soc. Korea **12**(3), 131–135 (2008). [CrossRef]

**7. **M.-O. Jeong, N. Kim, and J.-H. Park, “Elemental image synthesis for integral imaging using phase-shifting digital holography,” J. Opt. Soc. Korea **12**(4), 275–280 (2008). [CrossRef]

**8. **A. Stern and B. Javidi, “Three-dimensional image sensing, visualization, and processing using integral imaging,” Proc. IEEE **94**(3), 591–607 (2006). [CrossRef]

**9. **S. Jung, J. Hong, J.-H. Park, Y. Kim, and B. Lee, “Depth-enhanced integral-imaging 3D display using different optical path lengths by polarization devices or mirror barrier array,” J. Soc. Inf. Disp. **12**(4), 461–467 (2004). [CrossRef]

**10. **H. Liao, M. Iwahara, Y. Katayama, N. Hata, and T. Dohi, “Three-dimensional display with a long viewing distance by use of integral photography,” Opt. Lett. **30**(6), 613–615 (2005). [CrossRef] [PubMed]

**11. **R. Martínez-Cuenca, H. Navarro, G. Saavedra, B. Javidi, and M. Martínez-Corral, “Enhanced viewing-angle integral imaging by multiple-axis telecentric relay system,” Opt. Express **15**(24), 16255–16260 (2007). [CrossRef] [PubMed]

**12. **Y. Kim, H. Choi, J. Kim, S.-W. Cho, Y. Kim, G. Park, and B. Lee, “Depth-enhanced integral imaging display system with electrically variable image planes using polymer-dispersed liquid-crystal layers,” Appl. Opt. **46**(18), 3766–3773 (2007). [CrossRef] [PubMed]

**13. **J.-H. Park, J. Kim, Y. Kim, and B. Lee, “Resolution-enhanced three-dimension / two-dimension convertible display based on integral imaging,” Opt. Express **13**(6), 1875–1884 (2005). [CrossRef] [PubMed]

**14. **C. Cruz-Neira, D. J. Sandin, and T. A. DeFanti, “Surround-screen projection-based virtual reality: the design and implementation of the CAVE,” Proc. SIGGRAPH, 135–142 (1993).

**15. **M. Agrawala, A. C. Beers, B. Fröhlich, P. Hanrahan, I. McDowall, and M. Bolas, “The two-user responsive workbench: support for collaboration through individual views of a shared space,” Proc. SIGGRAPH, 327–332 (1997).

**16. **Y. Kitamura, T. Nakayama, T. Nakashima, and S. Yamamoto, “The Illusionhole with polarization filters,” Proc. of the ACM Symposium on Virtual Reality Software and Technology, 244–251 (2006).

**17. **R. Haussler, S. Reichelt, N. Leister, E. Zschau, R. Missbach, and A. Schwerdtner, “Large real-time holographic displays: from prototypes to a consumer product,” Proc. SPIE **7237**, 72370S (2009). [CrossRef]

**18. **A. Schwerdtner, N. Leister, R. Häussler, and S. Reichelt, “Eye-tracking solutions for real-time holographic 3-D display,” Soc. Inf. Display Digest (SID’08), 345–347 (2008).

**19. **G. Park, J. Hong, Y. Kim, and B. Lee, “Enhancement of viewing angle and viewing distance in integral imaging by head tracking,” in *Digital Holography and Three-Dimensional Imaging*, OSA Technical Digest (Optical Society of America, 2009), DWB27.

**20. **“OpenCV,” http://opencv.willowgarage.com/wiki.

**21. **M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, “Multifacet structure of observed reconstructed integral images,” J. Opt. Soc. Am. A **22**, 597–603 (2005). [CrossRef]

**22. **R. Martínez-Cuenca, G. Saavedra, A. Pons, B. Javidi, and M. Martínez-Corral, “Facet braiding: a fundamental problem in integral imaging,” Opt. Lett. **32**(9), 1078–1080 (2007). [CrossRef] [PubMed]