Abstract

In recent years, head-mounted display technologies have greatly advanced. To overcome the accommodation-convergence conflict, light field displays reconstruct three-dimensional (3D) images with focus cues but sacrifice resolution. In this paper, a hybrid head-mounted display system based on a liquid crystal microlens array is proposed. By using a time-multiplexed method, the display signals can be divided into light field and two-dimensional (2D) modes to show comfortable 3D images whose resolution is compensated by the 2D image. According to the experimental results, the prototype supports a resolution of 12.28 ppd in the diagonal direction, which reaches 82% of that of a traditional virtual reality (VR) head-mounted display (HMD).

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Over the past decade, three-dimensional (3D) display technologies [1–3] have greatly advanced to improve the sense of reality in flat panel displays (FPDs) [4]. To broaden their applications, 3D display technologies have been combined with head-mounted displays (HMDs) [5,6] for augmented reality (AR) [7], mixed reality (MR) [8], and virtual reality (VR) [9]. However, most current HMD products generate 3D images through binocular parallax [10,11], which causes a mismatch between the accommodation distance and the convergence distance of the human eyes. This accommodation-convergence conflict (AC conflict) [12–14], as Figs. 1(a) and 1(b) show, makes observers dizzy and further induces visual fatigue. Consequently, to prevent the reduction of usage time and improve the observers' sense of immersion, the AC conflict issue must be addressed in HMDs.

 

Fig. 1 The relationship between accommodation and convergence distances in (a) the real world, (b) a binocular 3D display, and (c) a light field 3D display.


To overcome the AC conflict, light field (LF) display technology [15–25] has been proposed and is considered a promising solution, as it displays 3D content in a way that is more natural for the human visual system, as shown in Fig. 1(c). In a typical LF display system, a microlens array (MLA) is placed in front of a display panel with an appropriate gap, and each microlens covers an elemental image composed of a number of pixels. LF images simultaneously provide stereo parallax and focus cues, making the accommodation of the user's eyes coincide with the depth of an object. In this manner, the AC conflict can be effectively resolved. Moreover, compared with holographic displays, LF displays do not require coherent light sources because they work by refraction, which makes them more suitable for practical applications.

However, in an LF display system, since the panel carries not only spatial but also angular information, the resolution of the reconstructed 3D images is severely sacrificed. In general, the resolution is affected by ray aberration, diffraction, defocusing, and the sampling rate [26–28]. To enhance the resolution of LF displays, diverse methods have been proposed, such as mechanically moving the MLA [29], using an MLA with an adjustable focal length [30,31], or reducing the pixel size [32]. Unfortunately, fast-response mechanical components are difficult to realize with current technologies, and the pixel size and lens pitch cannot be reduced without manufacturing limitations. Consequently, in this paper, a new hybrid HMD is proposed that combines LF display technology and conventional 2D images through time multiplexing to solve the AC conflict and compensate for the resolution simultaneously.

2. Principle and design of hybrid VR HMD

2.1 Optical system principle

Conceptually, the proposed hybrid HMD works by switching between an LF mode and a 2D mode, where an LF image and a full-resolution 2D image are shown in the two modes, respectively. As long as the switching is fast enough to conduct time multiplexing [33,34], observers can perceive a resolution-enhanced image with depth information via visual persistence. To achieve such a system, a microdisplay panel, a liquid crystal (LC) MLA with a static LC alignment, a twisted nematic (TN) cell, a linear polarizer, and a main lens were adopted. The birefringent LC MLA [35,36] only deflects the light polarized parallel to its alignment direction. The TN cell [37] acts as a fast polarization rotator. In addition, the synchronization of the frame rate between each component should also be considered. Figure 2 illustrates the working principle and driving methods of each component.

 

Fig. 2 The driving methods and working functions of the display, LC MLA, and TN cell in our proposed system.


In the LF mode, as shown in Fig. 3, the elemental images, which are generated by inversely tracing chief rays from a 3D scene to the display panel, are shown on the panel [38]. After the unpolarized light from the display passes through the LC MLA, the light deflected by the lenses reconstructs an LF image. Meanwhile, a voltage is applied to the TN cell to maintain the polarization state of the light passing through it. In this manner, the unnecessary light, which passes through the LC MLA without being deflected, is blocked by the linear polarizer. Finally, through the magnification of the main lens, an observer sees a low-resolution LF image with depth information in the LF mode.

 

Fig. 3 Schematic layout of the hybrid VR HMD in light field mode.


In the 2D mode, as shown in Fig. 4, full-resolution 2D images are directly displayed on the panel. Similarly, the unpolarized light from the panel passes through the LC MLA. However, with no voltage applied, the TN cell now rotates the polarization direction of the incoming light by 90 degrees; thus, the light not deflected by the LC MLA is further refracted by the main lens to generate full-resolution 2D images.
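The per-mode component states described above (and summarized in Fig. 2) can be sketched as a simple lookup plus an alternating frame schedule. This is an illustrative sketch only; the state names and the strict LF/2D alternation are our assumptions, not the actual driving waveforms:

```python
# Component states for the two time-multiplexed modes (cf. Fig. 2).
# TN-cell behavior: voltage ON  -> polarization preserved  (LF mode),
#                   voltage OFF -> polarization rotated 90° (2D mode).
MODE_STATES = {
    "LF": {
        "panel_content": "elemental images",   # inversely ray-traced from the 3D scene
        "tn_voltage_on": True,                 # keep polarization unchanged
        "light_used": "deflected by LC MLA",   # only the lens-polarized component passes
    },
    "2D": {
        "panel_content": "full-resolution image",
        "tn_voltage_on": False,                # rotate polarization by 90 degrees
        "light_used": "undeflected by LC MLA",
    },
}

def frame_schedule(n_frames):
    """Alternate LF and 2D frames so visual persistence fuses them."""
    return ["LF" if i % 2 == 0 else "2D" for i in range(n_frames)]
```

Each displayed frame is then driven according to the looked-up state, with the panel content and TN voltage switched synchronously.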

 

Fig. 4 Schematic layout of the hybrid VR HMD in 2D mode.


2.2 Optical components

An organic light-emitting diode (OLED) panel with a high pixel density and fast response, provided by AU Optronics (AUO), was adopted as the display panel of our system; its specifications are shown in Table 1. Since the panel size and pixel size differ in the vertical and horizontal directions, the field of view (FOV) [39] and the resolution of our system will be discussed separately for the two directions. In our structure, since the frame rate of the OLED panel (72 Hz) is not sufficiently fast, the perceived hybrid image may exhibit flickering, which could be eliminated by an OLED panel with a higher frame rate.


Table 1. Specifications of OLED panel.

The LC MLA is a gradient-index lens array that focuses the light polarized parallel to the LC molecules and has no bending power for the orthogonal polarization; hence, it considerably affects the image quality of both modes in our proposed system. Owing to its low viscosity and common nematic phase, E7, a nematic LC with positive dielectric anisotropy (ε∥ > ε⊥), was selected as the material of the LC MLA to achieve high fabrication consistency. Figure 5 and Table 2 show the structure and specifications of our fabricated LC MLA. To achieve a smooth gradient of the electric field distribution, i.e., better lens quality, a Nb2O5 high-resistance (Hi-R) [40–42] layer with a thickness of 20 nm was coated on top of the aluminum electrode to guide the electric field lines into the central part of the LC MLA. In addition, with this layer, the voltage applied to the LC MLA can be reduced, allowing a simpler driving method. Finally, a polyimide (PI) layer was spin-coated on both the Hi-R layer and the bottom electrode to simplify the alignment.

 

Fig. 5 (a)(b) Structure and (c) electric line of the LC MLA optical component.



Table 2. Specifications of LC MLA.

Corresponding to the phase retardation caused by the gradient of the LC molecule distribution, the focal length of the LC MLA is given by Eq. (1) in terms of the optical path length (OPL) [43]:

\[ f \approx \frac{n_0 (D/2)^2}{2\,\Delta n\, d}, \tag{1} \]
where f and D represent the effective focal length and the diameter of the lens aperture, respectively, Δn is the difference between the refractive indices of the central and marginal regions of the LC cell, and d is the LC cell gap. To generate a focal length of 2.0 mm, the LC MLA was driven at 8 V and 1 MHz to produce Δn = 0.29. Furthermore, to verify the focal length and quality of the LC MLA, the interference pattern and point spread function (PSF) produced by a He-Ne laser unit were measured using a confocal microscope with a polarizer and an analyzer, as shown in Fig. 6. In the interference pattern, the intervals between every two fringes are almost the same, indicating that the index is distributed with a smooth gradient and hence good lens quality. In addition, the center of each fringe pattern coincides with the center of each lens, revealing that the image quality degradation caused by an off-center lens can be largely avoided.
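Eq. (1) can be checked numerically by inverting it for the cell gap that yields the target 2.0-mm focal length. The ordinary index n0 ≈ 1.52 (typical for E7) and the 0.3-mm aperture below are illustrative assumptions; the actual lens pitch and cell gap are given in Table 2:

```python
def lc_mla_focal_length(n0, D, delta_n, d):
    """Eq. (1): f ≈ n0 * (D/2)^2 / (2 * Δn * d). All lengths in mm."""
    return n0 * (D / 2) ** 2 / (2 * delta_n * d)

def required_cell_gap(n0, D, delta_n, f):
    """Invert Eq. (1) to find the LC cell gap d for a target focal length f."""
    return n0 * (D / 2) ** 2 / (2 * delta_n * f)

# Hypothetical 0.3-mm aperture; driven Δn = 0.29 (from the text); target f = 2.0 mm.
d_gap = required_cell_gap(n0=1.52, D=0.3, delta_n=0.29, f=2.0)  # ~0.03 mm (30 µm)
```

With these assumed values the required gap comes out at a few tens of micrometers, a plausible LC cell thickness, which is why driving Δn rather than the geometry is the practical tuning knob.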

 

Fig. 6 (a)(b) Interference pattern (IP) and (c) point spread function (PSF) of the HiR LC MLA with 8 V and 1 MHz as the driving conditions.


A TN cell (model X-FPM(L) from Unice E-O Services Inc.) was adopted as a polarization switch, whose specifications are shown in Table 3. In particular, the response speed of the TN cell is fast enough for our requirement when the applied voltage is 24 V. Moreover, the polarization contrast in the visible spectrum reaches almost 700 in the polarization-altering state and 4000 in the non-altering state, as Fig. 7 shows. Finally, a lens from Google Cardboard ver. 2.0 [44] was chosen as the main lens, and its specifications are shown in Table 4.


Table 3. Specifications of the TN cell.

 

Fig. 7 The polarization contrast of the TN cell at different wavelengths.



Table 4. Specifications of the main lens.

2.3 Hybrid VR HMD design

The proposed binocular system can be regarded as two identical monocular systems. The design of the optical layout can be divided into three steps, and the structure of the monocular optical system is illustrated in Fig. 8.

 

Fig. 8 Structure and parameters of the monocular optical system.


First, the distance between the eye box and the LF virtual imaging plane was set to 1 m (1 diopter) for comfortable eye accommodation, and the eye relief between the eye box and the main lens was set to a common value of 18 mm. In this manner, the position of the reconstructed LF image plane, located 0.95 mm under the OLED panel, can be determined using the Gaussian lens formula.

Second, to approximate the typical depth of focus of human eyes, the interval between the 2D and LF virtual imaging planes was set to 0.6 diopter. Therefore, the locations of the 2D virtual imaging plane and the display panel were determined to be 81.6 mm and 56.32 mm from the eye position, respectively.

Last but not least, depending on whether the panel-lens gap equals the focal length of the LC MLA, LF displays can be classified into depth-priority integral imaging (DPII) and resolution-priority integral imaging (RPII) systems [45]. In our design, to achieve high resolution within a limited depth of field, the RPII type was adopted by creating an appropriate central depth plane (CDP) [46] of the LC MLA and reconstructing LF images around the CDP. In addition, the CDP of the LC MLA should be located at the position of the reconstructed LF imaging plane, and the object distance needs to satisfy the paraxial imaging equation. However, owing to the restrictions of the LC MLA fabrication process, the designed focal length was constrained by three adjustable factors: the lens pitch, the cell gap, and the LC material. Generally, the lens pitch determines the number of pixels underneath a single microlens and consequently governs the tradeoff between angular and spatial information: with more covered pixels, the viewing zone is enlarged, but the image resolution is decreased. Therefore, the criteria for LF image formation constrain the lens pitch and focal length of the LC MLA. Consequently, the system parameters can only be generated when the LC MLA reconstructs a virtual LF image.
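The first design step can be sketched with the Gaussian (thin lens) formula for a virtual image. The main-lens focal length used below is a placeholder assumption; the actual value comes from the Google Cardboard lens specification in Table 4:

```python
def object_distance_for_virtual_image(f_main, v_image):
    """Gaussian lens formula for a virtual image on the object side:
    1/f = 1/u - 1/v  =>  u = f*v / (f + v).  All distances in mm,
    measured from the main lens (u: object, v: virtual image)."""
    return f_main * v_image / (f_main + v_image)

EYE_RELIEF = 18.0           # mm, eye box to main lens (from the text)
LF_PLANE_FROM_EYE = 1000.0  # mm, 1-diopter accommodation target

f_main = 45.0  # mm, assumed placeholder for the main-lens focal length
v = LF_PLANE_FROM_EYE - EYE_RELIEF            # virtual image distance from the lens
u = object_distance_for_virtual_image(f_main, v)  # required object-side distance
```

With these numbers the object distance lands a few millimeters inside the focal length, which is exactly the condition for the main lens to form a magnified virtual image at the chosen accommodation plane.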

For simplicity, the whole design process can be summarized into an algorithm. By using the thin lens imaging formula,

\[ d = S_{oLF} - \frac{f_M \left( L_{\Delta x} - e_r \right)}{f_M + \left( L_{\Delta x} - e_r \right)}, \tag{2} \]
the lateral magnification in the two modes and the distance between the panel and the reconstructed LF imaging plane can be calculated. In addition, the relationship between the focal length of the LC MLA, which was eventually set to 2 mm owing to fabrication limitations, and the optical performance of the hybrid HMD can also be determined by simulation, as shown in Fig. 9. Because the pixel size of our adopted panel differs in the horizontal and vertical directions, the resolution should be discussed in two dimensions. Moreover, the FOV of the proposed system is limited by the specification of the main lens to 80°. In addition, the resolution of the 2D virtual image is independent of the LC MLA; therefore, the resolution in pixels per degree (ppd) of the hybrid HMD can be given by Eqs. (3) and (4):
\[ \mathrm{Resolution_{LF}} = \frac{1}{P_i \left( \frac{g+d}{g} \right) \left( \frac{L - e_r}{S_{oLF}} \right)} \cdot \frac{2\pi L}{360}, \tag{3} \]
\[ \mathrm{Resolution_{2D}} = \frac{1}{P_i \left( \frac{L_{\Delta x} - e_r}{S_{oLF} - d} \right)} \cdot \frac{2\pi L_{\Delta x}}{360}, \tag{4} \]
where Pi is the pixel size, and g is the gap between the panel and the LC MLA. Finally, Table 5 summarizes all the system parameters.
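The ppd expressions can be sketched as follows: the per-mode pixel footprint on the virtual plane is the panel pixel magnified by the MLA and then by the main lens, and that footprint is converted to an angle seen from the eye box. All numeric values below are illustrative assumptions, not the actual prototype parameters of Table 5:

```python
import math

def ppd_lf(p_i, g, d, L, e_r, s_olf):
    """Pixels per degree of the LF image: pixel p_i magnified by the
    LC MLA ((g + d)/g) and the main lens ((L - e_r)/s_olf), then the
    footprint on the virtual plane is viewed from distance L."""
    footprint = p_i * ((g + d) / g) * ((L - e_r) / s_olf)
    return (math.pi / 180.0) * L / footprint  # (2*pi*L/360) / footprint

def ppd_2d(p_i, d, L_dx, e_r, s_olf):
    """Pixels per degree of the 2D image: here the panel itself is the
    object, sitting at s_olf - d from the main lens."""
    footprint = p_i * ((L_dx - e_r) / (s_olf - d))
    return (math.pi / 180.0) * L_dx / footprint

# Illustrative (assumed) geometry in mm: 45-µm pixels, ~1-mm MLA gap.
r2d = ppd_2d(p_i=0.045, d=0.95, L_dx=625.0, e_r=18.0, s_olf=38.4)
```

Note how ppd scales inversely with pixel size and with total magnification, which is why the LF mode, carrying the extra MLA magnification stage, ends up with the lower figure.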

 

Fig. 9 Theoretical resolution performance of the hybrid HMD.



Table 5. Specifications of the proposed hybrid VR HMD.

In the proposed system, the light efficiency needs to be considered since additional absorptive optical components are used. In the LC MLA layer, the aperture ratio of the top electrode is 90%, and the transmittances of the bottom electrode material (ITO) and the LC material (E7) are 95% and 90%, respectively. Therefore, the transmittance of the LC MLA is 77%, obtained by multiplying these three factors. Moreover, based on the specifications in Table 3, the transmittance of the TN cell with the polarizer is 43.5%. Compared with a traditional VR HMD, the light efficiency of our system is 33.5%. Such a sacrifice in light efficiency is necessary for the time multiplexing and can be easily overcome by using a brighter display panel.
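The light-efficiency budget above is a straightforward product of the quoted transmittances:

```python
# Light-efficiency budget of the hybrid HMD (values from the text).
aperture_ratio = 0.90   # top electrode of the LC MLA
t_ito = 0.95            # bottom ITO electrode transmittance
t_e7 = 0.90             # E7 LC material transmittance

t_lc_mla = aperture_ratio * t_ito * t_e7   # ~0.77, as stated
t_tn_polarizer = 0.435                     # TN cell + polarizer (Table 3)
efficiency = t_lc_mla * t_tn_polarizer     # ~0.335 relative to a traditional VR HMD
```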

3. Experiments and results

3.1 Experimental setup

After determining all the parameters and fabricating the optical components, a prototype of our hybrid VR HMD system was set up to verify the proposed design. Figure 10 illustrates the configuration of the hybrid VR HMD, which includes a movable and rotatable stage with six degrees of freedom. In the experiment, first, the OLED panel was placed on the bottom of the optical stage, which was fixed on an optical table. Second, the fabricated LC MLA, connected to a function generator, was placed on the OLED panel with the designed gap of 0.98 mm to cover the entire display area. Third, the TN cell and polarizer were stacked on the LC MLA. To synchronize the display images with the driving signals in both the LF and 2D modes, a flexible printed circuit (FPC) board provided by AUO was connected to the OLED panel and the TN cell separately. Finally, the main lens was fixed on a translation stage to adjust the interval to the eye pupils of corresponding observers. Before the experiment, a beam-expanded He-Ne laser was used to determine the focal point of the main lens by finding the smallest light spot of the laser bundle passing through the main lens.

 

Fig. 10 The optical components and experimental setup of the hybrid VR HMD.


To measure the FOV and resolution and to evaluate the image quality, an industrial camera on a high-precision mobile stage was used to imitate the human eye, because the entrance pupil of the camera (5.7 mm) is similar to that of the human eye pupil. Next, to measure the depth range, a Canon 5D Mark II camera with a 100-mm lens was adopted because it has a very shallow depth of focus (DOF), which can sensitively reflect depth variation.

Moreover, the FOV, which significantly affects the sense of immersion, was verified using a test pattern containing symmetrical lines. Based on the experimental results, the monocular FOV of the prototype was 76° in both the horizontal and vertical directions. With a 50° overlap, the binocular FOV was calculated to be 102° in the horizontal direction. Compared with the theoretical limit imposed by the main lens, 80°, the FOV of our design is acceptable and sufficient to produce an immersive visual experience in the hybrid HMD.
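The binocular FOV follows directly from the monocular FOV and the stated overlap:

```python
def binocular_fov(monocular_fov_deg, overlap_deg):
    """Total horizontal binocular FOV from two overlapping monocular fields:
    the two 76° fields share a 50° central overlap, counted once."""
    return 2 * monocular_fov_deg - overlap_deg
```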

3.2 Resolution assessment

To measure the visual resolution in ppd [47], the USAF 1951 test chart [48], resized appropriately for each depth position, was adopted to provide line pairs with different spatial frequencies, whose values can be calculated with Eq. (5):

\[ \mathrm{Resolution\ (lp/mm)} = 2^{\,\mathrm{group} + (\mathrm{element} - 1)/6}. \tag{5} \]
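The USAF 1951 conversion can be checked numerically; for example, group 2, element 1 corresponds to 4 lp/mm:

```python
def usaf_resolution_lp_per_mm(group, element):
    """USAF 1951 chart: Resolution (lp/mm) = 2**(group + (element - 1) / 6)."""
    return 2 ** (group + (element - 1) / 6)
```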
According to the design, the LF and 2D images are located 100 cm and 62.5 cm from the eye box, respectively. In addition, the hybrid imaging plane can be assumed to be at a position between the LF and 2D planes, which will be discussed and evaluated in the next section.

In the experiment, the original USAF 1951 images were resized separately, considering the different pixel sizes in the two dimensions and the magnifications at different depths, to keep the same FOV. After capture by the industrial camera, the experimental results of the resolution assessment with normalized size are shown in Fig. 11. According to the conversion equation and a look-up table, the resolutions of the LF, 2D, and hybrid virtual images are listed in Table 6. Based on the results, the resolution of the hybrid virtual image was clearly enhanced in our system, reaching 12.28 ppd in the diagonal direction, which verifies the theoretical resolution analysis in Fig. 9.

 

Fig. 11 Captured images of the USAF pattern in the (a) LF, (b) hybrid, and (c) 2D image planes.



Table 6. Image resolution of the LF, 2D and hybrid image planes in the hybrid VR HMD.

To further evaluate the image quality, a picture containing a 3D cube and a colorful diamond was used, as shown in Fig. 12, where the resolution of the captured hybrid virtual image is indeed enhanced. A video of the hybrid virtual image is provided in Visualization 1, in which the depth information of the hybrid image can be perceived. In addition, slight flicker can be observed in the video, which could be eliminated by an OLED panel with a higher frame rate.

 

Fig. 12 Comparison between the LF, hybrid, and 2D virtual images (see Visualization 1).


3.3 Depth evaluation

In the depth experiment, the monocular depth information of the perceived images confirmed that the proposed system supplies natural focus cues to solve the AC conflict via LF technology. To generate the experimental patterns, first, the magnification effect and the correct resizing of the 2D and LF virtual images for the corresponding depths need to be considered. In addition, the pattern for the LF virtual image should be further computed as an elemental image with a continuous depth map by the LF ray-tracing algorithm. For the hybrid virtual image, the input patterns of both modes should be processed in the same way; however, the resizing ratio should be modified to match the hybrid depth distance for superposition. Based on the design, the depths of the 2D and LF virtual images are located at 62.5 cm and 100 cm, respectively, and the depth of the hybrid virtual image lies between these two planes.

To confirm the depth characteristics of the proposed system, the experiment was separated into two parts: the reconstructed planes of the virtual images and the continuous depth range with focus cues. First, to verify the depth positions, the camera lens was set to focus at 62.5 cm, 100 cm, and a position between them from the eye box. Figure 13 presents the captured 2D, hybrid, and LF virtual images. According to the results, once the camera focused at 100 cm, the LF virtual image was well reconstructed, but the 2D and hybrid virtual images still exhibited blur at the image edges. This indicates that the depth position at 100 cm is the best imaging plane of the LF virtual image only. In contrast, the depth position at 62.5 cm could be verified as the 2D virtual image plane, since the 2D image there contained sharper edges than the others, while the LF and hybrid images were not the clearest by the same comparison. Furthermore, among the measured results, the hybrid virtual image presents the best resolution at around 80 cm, which is almost at the middle of the LF and 2D virtual imaging planes.

 

Fig. 13 The verified positions of the reconstructed LF, 2D, and hybrid imaging planes.


Second, the continuous depth range with focus cues of the proposed system was verified by experiment. Owing to the depth-fused effect of time multiplexing in our approach, the depth difference in the hybrid virtual image is smaller than that in the LF virtual image. By using a narrow-DOF camera, the depth information of the hybrid virtual image could be captured. Based on the camera specifications, the DOF is 0.95 cm at a distance of 80 cm when the lens focal length is 100 mm and the f-number is f/2.8. A diamond pattern with depth information was designed, in which the depth planes of the purple and red diamonds were set to 147 cm and 75 cm in the LF image, respectively, as shown in Fig. 14. When the camera lens focused on the purple diamond, the purple diamond was clearly captured while the red diamond was blurred, and vice versa. Consequently, the hybrid virtual images were verified to carry natural depth information, which eliminates the visual fatigue caused by the AC conflict.
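The quoted DOF is consistent with the standard thin-lens approximation DOF ≈ 2·N·c·s²/f² (valid for subject distance s much larger than f). The circle-of-confusion value below is our assumption for a full-frame sensor, not a figure from the paper:

```python
def depth_of_field(f_mm, f_number, subject_mm, coc_mm):
    """Approximate total DOF ≈ 2 * N * c * s^2 / f^2, all lengths in mm.
    Valid when the subject distance s >> focal length f and DOF << s."""
    return 2 * f_number * coc_mm * subject_mm ** 2 / f_mm ** 2

# 100-mm lens at f/2.8 focused at 80 cm; assumed circle of confusion ~26.5 µm.
dof = depth_of_field(f_mm=100.0, f_number=2.8, subject_mm=800.0, coc_mm=0.0265)
```

With this assumed circle of confusion the result is roughly 9.5 mm, matching the quoted 0.95 cm and confirming the camera is sharp enough to resolve the diamond pattern's depth separation.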

 

Fig. 14 The depth range with focus cues of the hybrid virtual image.


3.4 Discussion

After the above experiments, the FOV, resolution, and depth information of the hybrid image in the proposed system were verified and measured. A comparison between a traditional VR HMD, a light field VR HMD, and the proposed hybrid VR HMD is shown in Table 7. The disadvantage of our structure is the hardware requirement: a high-response-speed panel and a TN cell are needed; otherwise, flickering appears when the frame rate is not fast enough. Moreover, a circuit board synchronizing all the components with the signal contents is also necessary. However, in the proposed system, not only can the AC conflict be solved by light field technology, but the resolution and quality of the reconstructed virtual image can also be compensated by the 2D image. In the design of this paper, the monocular depth range of the hybrid image in the prototype is from 66 cm to 121 cm, which is fused from the depth position of the 2D image, 62.5 cm, and the depth range of the light field image, from 70 cm to 180 cm. With binocular parallax designed into the image contents, the depth range of the proposed system could be further extended. On the other hand, the resolution of the hybrid virtual image in the diagonal direction is 12.28 ppd, where the light field image, 6.07 ppd, is compensated by the 2D image, 15.01 ppd.


Table 7. Comparison of traditional, light field, and hybrid VR HMDs.

In the proposed system, the generation of the hybrid image is similar to a two-focal-plane depth-fused display (DFD) [49–52], which reconstructs 3D images by merging front and rear images. In a DFD, the perceived depth of the fused images can be considered a weighted sum of the depths of the two focal planes, with depth-weighted fusing functions of luminance. However, since the image generation principle of the proposed system is different, in that the integral imaging system considers the eye box size while the DFD system does not, the depth of the hybrid virtual images cannot be calculated in the same manner. To our knowledge, this is the first time a system combining 2D and LF images has been proposed; thus, little literature provides an accurate equation for determining the depth of the hybrid image. Although the photographs above showed that the depth of the hybrid image lies between the 2D and LF planes, future studies will require more physiological experiments to determine the accurate accommodation response to the hybrid virtual images.

4. Conclusions

Over the past few years, the technologies for VR HMDs have been greatly advanced by several companies to give users high-quality VR visual perception in diverse fields, such as entertainment, education, medicine, and military training. Based on the principle of displaying 3D images, VR HMD technologies can be divided into stereoscopic and LF systems. However, both technologies still have serious issues that need to be solved. In the stereoscopic system, since the accommodation and convergence distances of the human eyes are mismatched, the observer easily suffers visual fatigue from the AC conflict. In contrast, in LF technology, while the AC conflict is eliminated, the resolution of the 3D image is decreased because pixels of the original panel are sacrificed to record the angular information. Therefore, in this paper, a new type of HMD, called a hybrid HMD, was developed to solve both problems simultaneously using a time-multiplexed method.

In the proposed system, the hybrid HMD was designed to consist of LF and 2D modes, which compensate for the resolution of the LF image with a full-resolution 2D image. The components of the hybrid HMD include an OLED panel, an electric circuit, an LC MLA, a TN cell, a polarizer, and a main lens. In this structure, the LC MLA functions as a lens only along the rubbing direction, and the TN cell and polarizer together act as a polarization switch that controls which light reconstructs the virtual image in each mode. By switching the two images faster than human visual persistence, the observer sees a hybrid image with high resolution and depth information through the time-multiplexed method.

In this paper, a prototype of the hybrid HMD was built to verify the proposed structure. Based on the experimental results, the hybrid HMD supports a wide FOV, almost 76° in both directions. In addition, the effective resolution of the hybrid virtual image is compensated to 12.28 ppd in the diagonal direction, which is much higher than the resolution of the LF virtual image, 6.07 ppd, and reaches 82% of that of the traditional VR HMD, 15.01 ppd. Furthermore, the depth planes of the LF, 2D, and hybrid images and the depth range of the natural focus cues were also verified in the experiment. Therefore, this paper presented a new structure, called a hybrid HMD, that displays a high-resolution image with focus cues, solving the AC conflict by a time-multiplexed method.

Funding

Ministry of Science and Technology (MOST) in Taiwan (contract No. MOST 104-2628-E-009-012-MY3 and 107-2221-E-009-115-MY3).

Acknowledgments

The OLED panel and circuit board were generously provided by the small-size OLED product design department of AU Optronics Company (AUO), Taiwan.

References

1. N. Holliman, “3D display systems,” in Department of Computer Science (University of Durham, 2005).

2. J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photonics 5(4), 456–535 (2013). [CrossRef]   [PubMed]  

3. L. Hill and A. Jacobs, “3-D liquid crystal displays and their applications,” Proc. IEEE 94(3), 575–590 (2006). [CrossRef]  

4. S. W. Depp and W. E. Howard, “Flat-panel displays,” Sci. Am. 266(3), 90–97 (1993). [CrossRef]  

5. I. E. Sutherland, “A head-mounted three dimensional display,” in Proceedings of the Fall Joint Computer Conference (1968).

6. K. Keller, A. State, and H. Fuchs, “Head mounted displays for medical use,” J. Disp. Technol. 4(4), 468–472 (2008). [CrossRef]  

7. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014). [CrossRef]   [PubMed]  

8. J. P. Rolland and H. Hua, “Head-mounted display systems,” in Encyclopedia of Optical Engineering (Dekker, 2005).

9. H. McLellan, “Virtual realities,” in Handbook of research for educational communications and technology, (1996).

10. C. Wheatstone, “Contributions to the physiology of vision.–Part the first. On some remarkable, and hitherto unobserved, phenomena of binocular vision,” in Philosophical transactions of the Royal Society of London (1838).

11. M. Lappe, F. Bremmer, and A. Van den Berg, “Perception of self-motion from visual flow,” Trends Cogn. Sci. (Regul. Ed.) 3(9), 329–336 (1999). [CrossRef]   [PubMed]  

12. R. Burke and L. Brickson, “Focus cue enabled head-mounted display via microlens array,” TOG 32, 220 (2013).

13. K. Akşit, J. Kautz, and D. Luebke, “Slim near-eye display using pinhole aperture arrays,” Appl. Opt. 54(11), 3422–3427 (2015). [CrossRef]   [PubMed]  

14. F.-C. Huang, D. P. Luebke, and G. Wetzstein, “The light field stereoscope,” ACM Transactions on Graphics 34, 60:1–60:12 (2015).

15. J.-S. Jang and B. Javidi, “Three-dimensional integral imaging of micro-objects,” Opt. Lett. 29(11), 1230–1232 (2004). [CrossRef]   [PubMed]  

16. M. Levoy, Z. Zhang, and I. McDowall, “Recording and controlling the 4D light field in a microscope using microlens arrays,” J. Microsc. 235(2), 144–162 (2009).

17. E. H. Adelson and J. R. Bergen, “The plenoptic function and the elements of early vision,” in Computational Models of Visual Processing (MIT Press, 1991), pp. 3–20.

18. S.-W. Min, J. Kim, and B. Lee, “New characteristic equation of three-dimensional integral imaging system and its applications,” Jpn. J. Appl. Phys. 44(2), 71–74 (2005).

19. X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications,” Appl. Opt. 52(4), 546–560 (2013).

20. N. Balram and I. Tošić, “Light-field imaging and display systems,” Inf. Disp. 32(4), 6–13 (2016).

21. X. Jin, L. Liu, and Q. Dai, “Approximation and blind reconstruction of volumetric light field,” Opt. Express 26(13), 16836–16852 (2018).

22. M. Liu, C. Lu, H. Li, and X. Liu, “Bifocal computational near eye light field displays and structure parameters determination scheme for bifocal computational display,” Opt. Express 26(4), 4060–4074 (2018).

23. C. Yao, D. Cheng, T. Yang, and Y. Wang, “Design of an optical see-through light-field near-eye display using a discrete lenslet array,” Opt. Express 26(14), 18292–18301 (2018).

24. Z. Cai, X. Liu, X. Peng, and B. Z. Gao, “Ray calibration and phase mapping for structured-light-field 3D reconstruction,” Opt. Express 26(6), 7598–7613 (2018).

25. Z. Xin, D. Wei, X. Xie, M. Chen, X. Zhang, J. Liao, H. Wang, and C. Xie, “Dual-polarized light-field imaging micro-system via a liquid-crystal microlens array for direct three-dimensional observation,” Opt. Express 26(4), 4035–4049 (2018).

26. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36(7), 1598–1603 (1997).

27. H. Huang and H. Hua, “Systematic characterization and optimization of 3D light field displays,” Opt. Express 25(16), 18508–18525 (2017).

28. N. Viganò, H. Der Sarkissian, C. Herzog, O. de la Rochefoucauld, R. van Liere, and K. J. Batenburg, “Tomographic approach for the quantitative scene reconstruction from light field images,” Opt. Express 26(18), 22574–22602 (2018).

29. V. Boominathan, K. Mitra, and A. Veeraraghavan, “Improving resolution and depth-of-field of light field cameras using a hybrid imaging system,” in Proceedings of the IEEE International Conference on Computational Photography (IEEE, 2014), pp. 1–10.

30. L. Erdmann and K. J. Gabriel, “High-resolution digital integral photography by use of a scanning microlens array,” Appl. Opt. 40(31), 5592–5599 (2001).

31. T.-H. Jen, X. Shen, G. Yao, Y.-P. Huang, H.-P. D. Shieh, and B. Javidi, “Dynamic integral imaging display with electrically moving array lenslet technique using liquid crystal lens,” Opt. Express 23(14), 18415–18421 (2015).

32. H. Hoshino, F. Okano, H. Isono, and I. Yuyama, “Analysis of resolution limitation of integral photography,” J. Opt. Soc. Am. A 15(8), 2059–2065 (1998).

33. G. Johansson, “Visual perception of biological motion and a model for its analysis,” Percept. Psychophys. 14(2), 201–211 (1973).

34. S.-H. Hong and B. Javidi, “Improved resolution 3D object reconstruction using computational integral imaging with time multiplexing,” Opt. Express 12(19), 4579–4588 (2004).

35. C.-W. Chen, M. Cho, Y.-P. Huang, and B. Javidi, “Three-dimensional imaging with axially distributed sensing using electronically controlled liquid crystal lens,” Opt. Lett. 37(19), 4125–4127 (2012).

36. M. Martinez-Corral, P.-Y. Hsieh, A. Doblas, E. Sanchez-Ortiga, G. Saavedra, and Y.-P. Huang, “Fast axial-scanning widefield microscopy with constant magnification and resolution,” J. Disp. Technol. 11(11), 913–920 (2015).

37. M. Schadt and W. Helfrich, “Voltage-dependent optical activity of a twisted nematic liquid crystal,” Appl. Phys. Lett. 18(4), 127–128 (1971).

38. H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. 26(3), 157–159 (2001).

39. K. W. Arthur and F. P. Brooks, Jr., “Effects of field of view on performance with head-mounted displays,” University of North Carolina at Chapel Hill (2000).

40. P.-Y. Hsieh, P.-Y. Chou, H.-A. Lin, C.-Y. Chu, C.-T. Huang, C.-H. Chen, Z. Qin, M. M. Corral, B. Javidi, and Y.-P. Huang, “Long working range light field microscope with fast scanning multifocal liquid crystal microlens array,” Opt. Express 26(8), 10981–10996 (2018).

41. Y.-C. Chang, T.-H. Jen, C.-H. Ting, and Y.-P. Huang, “High-resistance liquid-crystal lens array for rotatable 2D/3D autostereoscopic display,” Opt. Express 22(3), 2714–2724 (2014).

42. A. Hassanfiroozi, Y.-P. Huang, B. Javidi, and H.-P. D. Shieh, “Hexagonal liquid crystal lens array for 3D endoscopy,” Opt. Express 23(2), 971–981 (2015).

43. Y. P. Huang, L. Y. Liao, and C. W. Chen, “2-D/3-D switchable autostereoscopic display with multi-electrically driven liquid-crystal (MeD-LC) lenses,” J. Soc. Inf. Disp. 18(9), 642–646 (2010).

44. D. MacIsaac, “Google Cardboard: A virtual reality headset for $10?” Phys. Teach. 53(2), 125 (2015).

45. M. Cho, M. Daneshpanah, I. Moon, and B. Javidi, “Three-dimensional optical sensing and visualization using integral imaging,” Proc. IEEE 99(4), 556–575 (2011).

46. C.-J. Kim, M. Chang, M. Lee, J. Kim, and Y.-H. Won, “Depth plane adaptive integral imaging using a varifocal liquid lens array,” Appl. Opt. 54(10), 2565–2571 (2015).

47. Z. Qin, P.-J. Wong, W.-C. Chao, F.-C. Lin, Y.-P. Huang, and H.-P. D. Shieh, “Contrast-sensitivity-based evaluation method of a surveillance camera’s visual resolution: improvement from the conventional slanted-edge spatial frequency response method,” Appl. Opt. 56(5), 1464–1471 (2017).

48. G. Brooker, N. Siegel, V. Wang, and J. Rosen, “Optimal resolution in Fresnel incoherent correlation holographic fluorescence microscopy,” Opt. Express 19(6), 5047–5062 (2011).

49. S. Liu and H. Hua, “A systematic method for designing depth-fused multi-focal plane three-dimensional displays,” Opt. Express 18(11), 11562–11573 (2010).

50. X. Hu and H. Hua, “Design and assessment of a depth-fused multi-focal-plane display prototype,” J. Disp. Technol. 10(4), 308–316 (2014).

51. S.-G. Park, J.-H. Jung, Y. Jeong, and B. Lee, “Depth-fused display with improved viewing characteristics,” Opt. Express 21(23), 28758–28770 (2013).

52. S.-G. Park, J.-Y. Hong, C.-K. Lee, and B. Lee, “Real-mode depth-fused display with viewer tracking,” Opt. Express 23(20), 26710–26722 (2015).

2018 (7)

X. Jin, L. Liu, and Q. Dai, “Approximation and blind reconstruction of volumetric light field,” Opt. Express 26(13), 16836–16852 (2018).
[Crossref] [PubMed]

M. Liu, C. Lu, H. Li, and X. Liu, “Bifocal computational near eye light field displays and Structure parameters determination scheme for bifocal computational display,” Opt. Express 26(4), 4060–4074 (2018).
[Crossref] [PubMed]

C. Yao, D. Cheng, T. Yang, and Y. Wang, “Design of an optical see-through light-field near-eye display using a discrete lenslet array,” Opt. Express 26(14), 18292–18301 (2018).
[Crossref] [PubMed]

Z. Cai, X. Liu, X. Peng, and B. Z. Gao, “Ray calibration and phase mapping for structured-light-field 3D reconstruction,” Opt. Express 26(6), 7598–7613 (2018).
[Crossref] [PubMed]

Z. Xin, D. Wei, X. Xie, M. Chen, X. Zhang, J. Liao, H. Wang, and C. Xie, “Dual-polarized light-field imaging micro-system via a liquid-crystal microlens array for direct three-dimensional observation,” Opt. Express 26(4), 4035–4049 (2018).
[Crossref] [PubMed]

N. Viganò, H. Der Sarkissian, C. Herzog, O. de la Rochefoucauld, R. van Liere, and K. J. Batenburg, “Tomographic approach for the quantitative scene reconstruction from light field images,” Opt. Express 26(18), 22574–22602 (2018).
[Crossref] [PubMed]

P.-Y. Hsieh, P.-Y. Chou, H.-A. Lin, C.-Y. Chu, C.-T. Huang, C.-H. Chen, Z. Qin, M. M. Corral, B. Javidi, and Y.-P. Huang, “Long working range light field microscope with fast scanning multifocal liquid crystal microlens array,” Opt. Express 26(8), 10981–10996 (2018).
[Crossref] [PubMed]

2017 (2)

2016 (1)

N. Balram and I. Tošić, “Light-field imaging and display systems,” Inf. Disp. 32(4), 6–13 (2016).
[Crossref]

2015 (7)

2014 (3)

2013 (4)

2012 (1)

2011 (2)

M. Cho, M. Daneshpanah, I. Moon, and B. Javidi, “Three-dimensional optical sensing and visualization using integral imaging,” Proc. IEEE 99(4), 556–575 (2011).
[Crossref]

G. Brooker, N. Siegel, V. Wang, and J. Rosen, “Optimal resolution in Fresnel incoherent correlation holographic fluorescence microscopy,” Opt. Express 19(6), 5047–5062 (2011).
[Crossref] [PubMed]

2010 (2)

S. Liu and H. Hua, “A systematic method for designing depth-fused multi-focal plane three-dimensional displays,” Opt. Express 18(11), 11562–11573 (2010).
[Crossref] [PubMed]

Y. P. Huang, L. Y. Liao, and C. W. Chen, “2‐D/3‐D switchable autostereoscopic display with multi‐electrically driven liquid‐crystal (MeD‐LC) lenses,” J. Soc. Inf. Disp. 18(9), 642–646 (2010).
[Crossref]

2009 (1)

M. Levoy, Z. Zhang, and I. McDowall, “Recording and controlling the 4D light field in a microscope using microlens arrays,” J. Microsc. 235(2), 144–162 (2009).
[Crossref] [PubMed]

2008 (1)

K. Keller, A. State, and H. Fuchs, “Head mounted displays for medical use,” J. Disp. Technol. 4(4), 468–472 (2008).
[Crossref]

2006 (1)

L. Hill and A. Jacobs, “3-D liquid crystal displays and their applications,” Proc. IEEE 94(3), 575–590 (2006).
[Crossref]

2005 (1)

S.-W. Min, J. Kim, and B. Lee, “New characteristic equation of three-dimensional integral imaging system and its applications,” Jpn. J. Appl. Phys. 44(2), 71–74 (2005).
[Crossref]

2004 (2)

2001 (2)

1999 (1)

M. Lappe, F. Bremmer, and A. Van den Berg, “Perception of self-motion from visual flow,” Trends Cogn. Sci. (Regul. Ed.) 3(9), 329–336 (1999).
[Crossref] [PubMed]

1998 (1)

1997 (1)

1993 (1)

S. W. Depp and W. E. Howard, “Flat-panel displays,” Sci. Am. 266(3), 90–97 (1993).
[Crossref]

1973 (1)

G. Johansson, “Visual perception of biological motion and a model for its analysis,” Percept. Psychophys. 14(2), 201–211 (1973).
[Crossref]

1971 (1)

M. Schadt and W. Helfrich, “Voltage‐dependent optical activity of a twisted nematic liquid crystal,” Appl. Phys. Lett. 18(4), 127–128 (1971).
[Crossref]

Aksit, K.

Arai, J.

Arimoto, H.

Balram, N.

N. Balram and I. Tošić, “Light-field imaging and display systems,” Inf. Disp. 32(4), 6–13 (2016).
[Crossref]

Batenburg, K. J.

Boominathan, V.

V. Boominathan, K. Mitra, and A. Veeraraghavan, “Improving resolution and depth-of-field of light field cameras using a hybrid imaging system,” in Proceedings IEEE Conference on Computational Photography (IEEE, 2014), pp. 1–10.
[Crossref]

Bremmer, F.

M. Lappe, F. Bremmer, and A. Van den Berg, “Perception of self-motion from visual flow,” Trends Cogn. Sci. (Regul. Ed.) 3(9), 329–336 (1999).
[Crossref] [PubMed]

Brickson, L.

R. Burke and L. Brickson, “Focus cue enabled head-mounted display via microlens array,” TOG 32, 220 (2013).

Brooker, G.

Burke, R.

R. Burke and L. Brickson, “Focus cue enabled head-mounted display via microlens array,” TOG 32, 220 (2013).

Cai, Z.

Chang, M.

Chang, Y.-C.

Chao, W.-C.

Chen, C. W.

Y. P. Huang, L. Y. Liao, and C. W. Chen, “2‐D/3‐D switchable autostereoscopic display with multi‐electrically driven liquid‐crystal (MeD‐LC) lenses,” J. Soc. Inf. Disp. 18(9), 642–646 (2010).
[Crossref]

Chen, C.-H.

Chen, C.-W.

Chen, M.

Cheng, D.

Cho, M.

C.-W. Chen, M. Cho, Y.-P. Huang, and B. Javidi, “Three-dimensional imaging with axially distributed sensing using electronically controlled liquid crystal lens,” Opt. Lett. 37(19), 4125–4127 (2012).
[Crossref] [PubMed]

M. Cho, M. Daneshpanah, I. Moon, and B. Javidi, “Three-dimensional optical sensing and visualization using integral imaging,” Proc. IEEE 99(4), 556–575 (2011).
[Crossref]

Chou, P.-Y.

Chu, C.-Y.

Corral, M. M.

Dai, Q.

Daneshpanah, M.

M. Cho, M. Daneshpanah, I. Moon, and B. Javidi, “Three-dimensional optical sensing and visualization using integral imaging,” Proc. IEEE 99(4), 556–575 (2011).
[Crossref]

de la Rochefoucauld, O.

Depp, S. W.

S. W. Depp and W. E. Howard, “Flat-panel displays,” Sci. Am. 266(3), 90–97 (1993).
[Crossref]

Der Sarkissian, H.

Doblas, A.

M. Martinez-Corral, P.-Y. Hsieh, A. Doblas, E. Sanchez-Ortiga, G. Saavedra, and Y.-P. Huang, “Fast axial-scanning widefield microscopy with constant magnification and resolution,” J. Disp. Technol. 11(11), 913–920 (2015).
[Crossref]

Erdmann, L.

Fuchs, H.

K. Keller, A. State, and H. Fuchs, “Head mounted displays for medical use,” J. Disp. Technol. 4(4), 468–472 (2008).
[Crossref]

Gabriel, K. J.

Gao, B. Z.

Geng, J.

J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photonics 5(4), 456–535 (2013).
[Crossref] [PubMed]

Hassanfiroozi, A.

Helfrich, W.

M. Schadt and W. Helfrich, “Voltage‐dependent optical activity of a twisted nematic liquid crystal,” Appl. Phys. Lett. 18(4), 127–128 (1971).
[Crossref]

Herzog, C.

Hill, L.

L. Hill and A. Jacobs, “3-D liquid crystal displays and their applications,” Proc. IEEE 94(3), 575–590 (2006).
[Crossref]

Hong, J.-Y.

Hong, S.-H.

Hoshino, H.

Howard, W. E.

S. W. Depp and W. E. Howard, “Flat-panel displays,” Sci. Am. 266(3), 90–97 (1993).
[Crossref]

Hsieh, P.-Y.

P.-Y. Hsieh, P.-Y. Chou, H.-A. Lin, C.-Y. Chu, C.-T. Huang, C.-H. Chen, Z. Qin, M. M. Corral, B. Javidi, and Y.-P. Huang, “Long working range light field microscope with fast scanning multifocal liquid crystal microlens array,” Opt. Express 26(8), 10981–10996 (2018).
[Crossref] [PubMed]

M. Martinez-Corral, P.-Y. Hsieh, A. Doblas, E. Sanchez-Ortiga, G. Saavedra, and Y.-P. Huang, “Fast axial-scanning widefield microscopy with constant magnification and resolution,” J. Disp. Technol. 11(11), 913–920 (2015).
[Crossref]

Hu, X.

X. Hu and H. Hua, “Design and assessment of a depth-fused multi-focal-plane display prototype,” J. Disp. Technol. 10(4), 308–316 (2014).
[Crossref]

Hua, H.

Huang, C.-T.

Huang, H.

Huang, Y. P.

Y. P. Huang, L. Y. Liao, and C. W. Chen, “2‐D/3‐D switchable autostereoscopic display with multi‐electrically driven liquid‐crystal (MeD‐LC) lenses,” J. Soc. Inf. Disp. 18(9), 642–646 (2010).
[Crossref]

Huang, Y.-P.

P.-Y. Hsieh, P.-Y. Chou, H.-A. Lin, C.-Y. Chu, C.-T. Huang, C.-H. Chen, Z. Qin, M. M. Corral, B. Javidi, and Y.-P. Huang, “Long working range light field microscope with fast scanning multifocal liquid crystal microlens array,” Opt. Express 26(8), 10981–10996 (2018).
[Crossref] [PubMed]

Z. Qin, P.-J. Wong, W.-C. Chao, F.-C. Lin, Y.-P. Huang, and H.-P. D. Shieh, “Contrast-sensitivity-based evaluation method of a surveillance camera’s visual resolution: improvement from the conventional slanted-edge spatial frequency response method,” Appl. Opt. 56(5), 1464–1471 (2017).
[Crossref]

A. Hassanfiroozi, Y.-P. Huang, B. Javidi, and H.-P. D. Shieh, “Hexagonal liquid crystal lens array for 3D endoscopy,” Opt. Express 23(2), 971–981 (2015).
[Crossref] [PubMed]

M. Martinez-Corral, P.-Y. Hsieh, A. Doblas, E. Sanchez-Ortiga, G. Saavedra, and Y.-P. Huang, “Fast axial-scanning widefield microscopy with constant magnification and resolution,” J. Disp. Technol. 11(11), 913–920 (2015).
[Crossref]

T.-H. Jen, X. Shen, G. Yao, Y.-P. Huang, H.-P. D. Shieh, and B. Javidi, “Dynamic integral imaging display with electrically moving array lenslet technique using liquid crystal lens,” Opt. Express 23(14), 18415–18421 (2015).
[Crossref] [PubMed]

Y.-C. Chang, T.-H. Jen, C.-H. Ting, and Y.-P. Huang, “High-resistance liquid-crystal lens array for rotatable 2D/3D autostereoscopic display,” Opt. Express 22(3), 2714–2724 (2014).
[Crossref] [PubMed]

C.-W. Chen, M. Cho, Y.-P. Huang, and B. Javidi, “Three-dimensional imaging with axially distributed sensing using electronically controlled liquid crystal lens,” Opt. Lett. 37(19), 4125–4127 (2012).
[Crossref] [PubMed]

Isono, H.

Jacobs, A.

L. Hill and A. Jacobs, “3-D liquid crystal displays and their applications,” Proc. IEEE 94(3), 575–590 (2006).
[Crossref]

Jang, J.-S.

Javidi, B.

P.-Y. Hsieh, P.-Y. Chou, H.-A. Lin, C.-Y. Chu, C.-T. Huang, C.-H. Chen, Z. Qin, M. M. Corral, B. Javidi, and Y.-P. Huang, “Long working range light field microscope with fast scanning multifocal liquid crystal microlens array,” Opt. Express 26(8), 10981–10996 (2018).
[Crossref] [PubMed]

A. Hassanfiroozi, Y.-P. Huang, B. Javidi, and H.-P. D. Shieh, “Hexagonal liquid crystal lens array for 3D endoscopy,” Opt. Express 23(2), 971–981 (2015).
[Crossref] [PubMed]

T.-H. Jen, X. Shen, G. Yao, Y.-P. Huang, H.-P. D. Shieh, and B. Javidi, “Dynamic integral imaging display with electrically moving array lenslet technique using liquid crystal lens,” Opt. Express 23(14), 18415–18421 (2015).
[Crossref] [PubMed]

H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014).
[Crossref] [PubMed]

X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications,” Appl. Opt. 52(4), 546–560 (2013).
[Crossref] [PubMed]

C.-W. Chen, M. Cho, Y.-P. Huang, and B. Javidi, “Three-dimensional imaging with axially distributed sensing using electronically controlled liquid crystal lens,” Opt. Lett. 37(19), 4125–4127 (2012).
[Crossref] [PubMed]

M. Cho, M. Daneshpanah, I. Moon, and B. Javidi, “Three-dimensional optical sensing and visualization using integral imaging,” Proc. IEEE 99(4), 556–575 (2011).
[Crossref]

S.-H. Hong and B. Javidi, “Improved resolution 3D object reconstruction using computational integral imaging with time multiplexing,” Opt. Express 12(19), 4579–4588 (2004).
[Crossref] [PubMed]

J.-S. Jang and B. Javidi, “Three-dimensional integral imaging of micro-objects,” Opt. Lett. 29(11), 1230–1232 (2004).
[Crossref] [PubMed]

H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. 26(3), 157–159 (2001).
[Crossref] [PubMed]

Jen, T.-H.

Jeong, Y.

Jin, X.

Johansson, G.

G. Johansson, “Visual perception of biological motion and a model for its analysis,” Percept. Psychophys. 14(2), 201–211 (1973).
[Crossref]

Jung, J.-H.

Kautz, J.

Keller, K.

K. Keller, A. State, and H. Fuchs, “Head mounted displays for medical use,” J. Disp. Technol. 4(4), 468–472 (2008).
[Crossref]

Kim, C.-J.

Kim, J.

C.-J. Kim, M. Chang, M. Lee, J. Kim, and Y.-H. Won, “Depth plane adaptive integral imaging using a varifocal liquid lens array,” Appl. Opt. 54(10), 2565–2571 (2015).
[Crossref] [PubMed]

S.-W. Min, J. Kim, and B. Lee, “New characteristic equation of three-dimensional integral imaging system and its applications,” Jpn. J. Appl. Phys. 44(2), 71–74 (2005).
[Crossref]

Lappe, M.

M. Lappe, F. Bremmer, and A. Van den Berg, “Perception of self-motion from visual flow,” Trends Cogn. Sci. (Regul. Ed.) 3(9), 329–336 (1999).
[Crossref] [PubMed]

Lee, B.

Lee, C.-K.

Lee, M.

Levoy, M.

M. Levoy, Z. Zhang, and I. McDowall, “Recording and controlling the 4D light field in a microscope using microlens arrays,” J. Microsc. 235(2), 144–162 (2009).
[Crossref] [PubMed]

Li, H.

Liao, J.

Liao, L. Y.

Y. P. Huang, L. Y. Liao, and C. W. Chen, “2‐D/3‐D switchable autostereoscopic display with multi‐electrically driven liquid‐crystal (MeD‐LC) lenses,” J. Soc. Inf. Disp. 18(9), 642–646 (2010).
[Crossref]

Lin, F.-C.

Lin, H.-A.

Liu, L.

Liu, M.

Liu, S.

Liu, X.

Lu, C.

Luebke, D.

MacIsaac, D.

D. MacIsaac, “Google Cardboard: A virtual reality headset for $10?” Phys. Teach. 53(2), 125 (2015).
[Crossref]

Martinez-Corral, M.

M. Martinez-Corral, P.-Y. Hsieh, A. Doblas, E. Sanchez-Ortiga, G. Saavedra, and Y.-P. Huang, “Fast axial-scanning widefield microscopy with constant magnification and resolution,” J. Disp. Technol. 11(11), 913–920 (2015).
[Crossref]

X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications,” Appl. Opt. 52(4), 546–560 (2013).
[Crossref] [PubMed]

McDowall, I.

M. Levoy, Z. Zhang, and I. McDowall, “Recording and controlling the 4D light field in a microscope using microlens arrays,” J. Microsc. 235(2), 144–162 (2009).
[Crossref] [PubMed]

Min, S.-W.

S.-W. Min, J. Kim, and B. Lee, “New characteristic equation of three-dimensional integral imaging system and its applications,” Jpn. J. Appl. Phys. 44(2), 71–74 (2005).
[Crossref]

Mitra, K.

V. Boominathan, K. Mitra, and A. Veeraraghavan, “Improving resolution and depth-of-field of light field cameras using a hybrid imaging system,” in Proceedings IEEE Conference on Computational Photography (IEEE, 2014), pp. 1–10.
[Crossref]

Moon, I.

M. Cho, M. Daneshpanah, I. Moon, and B. Javidi, “Three-dimensional optical sensing and visualization using integral imaging,” Proc. IEEE 99(4), 556–575 (2011).
[Crossref]

Okano, F.

Park, S.-G.

Peng, X.

Qin, Z.

Rosen, J.

Saavedra, G.

M. Martinez-Corral, P.-Y. Hsieh, A. Doblas, E. Sanchez-Ortiga, G. Saavedra, and Y.-P. Huang, “Fast axial-scanning widefield microscopy with constant magnification and resolution,” J. Disp. Technol. 11(11), 913–920 (2015).
[Crossref]

Sanchez-Ortiga, E.

M. Martinez-Corral, P.-Y. Hsieh, A. Doblas, E. Sanchez-Ortiga, G. Saavedra, and Y.-P. Huang, “Fast axial-scanning widefield microscopy with constant magnification and resolution,” J. Disp. Technol. 11(11), 913–920 (2015).
[Crossref]

Schadt, M.

M. Schadt and W. Helfrich, “Voltage‐dependent optical activity of a twisted nematic liquid crystal,” Appl. Phys. Lett. 18(4), 127–128 (1971).
[Crossref]

Shen, X.

Shieh, H.-P. D.

Siegel, N.

State, A.

K. Keller, A. State, and H. Fuchs, “Head mounted displays for medical use,” J. Disp. Technol. 4(4), 468–472 (2008).
[Crossref]

Stern, A.

Ting, C.-H.

Tošic, I.

N. Balram and I. Tošić, “Light-field imaging and display systems,” Inf. Disp. 32(4), 6–13 (2016).
[Crossref]

Van den Berg, A.

M. Lappe, F. Bremmer, and A. Van den Berg, “Perception of self-motion from visual flow,” Trends Cogn. Sci. (Regul. Ed.) 3(9), 329–336 (1999).
[Crossref] [PubMed]

van Liere, R.

Veeraraghavan, A.

V. Boominathan, K. Mitra, and A. Veeraraghavan, “Improving resolution and depth-of-field of light field cameras using a hybrid imaging system,” in Proceedings IEEE Conference on Computational Photography (IEEE, 2014), pp. 1–10.
[Crossref]

Viganò, N.

Wang, H.

Wang, V.

Wang, Y.

Wei, D.

Won, Y.-H.

Wong, P.-J.

Xiao, X.

Xie, C.

Xie, X.

Xin, Z.

Yang, T.

Yao, C.

Yao, G.

Yuyama, I.

Zhang, X.

Zhang, Z.

M. Levoy, Z. Zhang, and I. McDowall, “Recording and controlling the 4D light field in a microscope using microlens arrays,” J. Microsc. 235(2), 144–162 (2009).
[Crossref] [PubMed]

Adv. Opt. Photonics (1)

J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photonics 5(4), 456–535 (2013).
[Crossref] [PubMed]

Appl. Opt. (6)

Appl. Phys. Lett. (1)

M. Schadt and W. Helfrich, “Voltage‐dependent optical activity of a twisted nematic liquid crystal,” Appl. Phys. Lett. 18(4), 127–128 (1971).
[Crossref]

Inf. Disp. (1)

N. Balram and I. Tošić, “Light-field imaging and display systems,” Inf. Disp. 32(4), 6–13 (2016).
[Crossref]

J. Disp. Technol. (3)

M. Martinez-Corral, P.-Y. Hsieh, A. Doblas, E. Sanchez-Ortiga, G. Saavedra, and Y.-P. Huang, “Fast axial-scanning widefield microscopy with constant magnification and resolution,” J. Disp. Technol. 11(11), 913–920 (2015).
[Crossref]

K. Keller, A. State, and H. Fuchs, “Head mounted displays for medical use,” J. Disp. Technol. 4(4), 468–472 (2008).
[Crossref]

X. Hu and H. Hua, “Design and assessment of a depth-fused multi-focal-plane display prototype,” J. Disp. Technol. 10(4), 308–316 (2014).
[Crossref]

J. Microsc. (1)

M. Levoy, Z. Zhang, and I. McDowall, “Recording and controlling the 4D light field in a microscope using microlens arrays,” J. Microsc. 235(2), 144–162 (2009).
[Crossref] [PubMed]

J. Opt. Soc. Am. A (1)

J. Soc. Inf. Disp. (1)

Y. P. Huang, L. Y. Liao, and C. W. Chen, “2-D/3-D switchable autostereoscopic display with multi-electrically driven liquid-crystal (MeD-LC) lenses,” J. Soc. Inf. Disp. 18(9), 642–646 (2010).

Jpn. J. Appl. Phys. (1)

S.-W. Min, J. Kim, and B. Lee, “New characteristic equation of three-dimensional integral imaging system and its applications,” Jpn. J. Appl. Phys. 44(2), 71–74 (2005).

Opt. Express (17)

S.-H. Hong and B. Javidi, “Improved resolution 3D object reconstruction using computational integral imaging with time multiplexing,” Opt. Express 12(19), 4579–4588 (2004).

T.-H. Jen, X. Shen, G. Yao, Y.-P. Huang, H.-P. D. Shieh, and B. Javidi, “Dynamic integral imaging display with electrically moving array lenslet technique using liquid crystal lens,” Opt. Express 23(14), 18415–18421 (2015).

X. Jin, L. Liu, and Q. Dai, “Approximation and blind reconstruction of volumetric light field,” Opt. Express 26(13), 16836–16852 (2018).

M. Liu, C. Lu, H. Li, and X. Liu, “Bifocal computational near eye light field displays and Structure parameters determination scheme for bifocal computational display,” Opt. Express 26(4), 4060–4074 (2018).

C. Yao, D. Cheng, T. Yang, and Y. Wang, “Design of an optical see-through light-field near-eye display using a discrete lenslet array,” Opt. Express 26(14), 18292–18301 (2018).

Z. Cai, X. Liu, X. Peng, and B. Z. Gao, “Ray calibration and phase mapping for structured-light-field 3D reconstruction,” Opt. Express 26(6), 7598–7613 (2018).

Z. Xin, D. Wei, X. Xie, M. Chen, X. Zhang, J. Liao, H. Wang, and C. Xie, “Dual-polarized light-field imaging micro-system via a liquid-crystal microlens array for direct three-dimensional observation,” Opt. Express 26(4), 4035–4049 (2018).

H. Huang and H. Hua, “Systematic characterization and optimization of 3D light field displays,” Opt. Express 25(16), 18508–18525 (2017).

N. Viganò, H. Der Sarkissian, C. Herzog, O. de la Rochefoucauld, R. van Liere, and K. J. Batenburg, “Tomographic approach for the quantitative scene reconstruction from light field images,” Opt. Express 26(18), 22574–22602 (2018).

H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014).

S.-G. Park, J.-H. Jung, Y. Jeong, and B. Lee, “Depth-fused display with improved viewing characteristics,” Opt. Express 21(23), 28758–28770 (2013).

S.-G. Park, J.-Y. Hong, C.-K. Lee, and B. Lee, “Real-mode depth-fused display with viewer tracking,” Opt. Express 23(20), 26710–26722 (2015).

P.-Y. Hsieh, P.-Y. Chou, H.-A. Lin, C.-Y. Chu, C.-T. Huang, C.-H. Chen, Z. Qin, M. M. Corral, B. Javidi, and Y.-P. Huang, “Long working range light field microscope with fast scanning multifocal liquid crystal microlens array,” Opt. Express 26(8), 10981–10996 (2018).

Y.-C. Chang, T.-H. Jen, C.-H. Ting, and Y.-P. Huang, “High-resistance liquid-crystal lens array for rotatable 2D/3D autostereoscopic display,” Opt. Express 22(3), 2714–2724 (2014).

A. Hassanfiroozi, Y.-P. Huang, B. Javidi, and H.-P. D. Shieh, “Hexagonal liquid crystal lens array for 3D endoscopy,” Opt. Express 23(2), 971–981 (2015).

G. Brooker, N. Siegel, V. Wang, and J. Rosen, “Optimal resolution in Fresnel incoherent correlation holographic fluorescence microscopy,” Opt. Express 19(6), 5047–5062 (2011).

S. Liu and H. Hua, “A systematic method for designing depth-fused multi-focal plane three-dimensional displays,” Opt. Express 18(11), 11562–11573 (2010).

Opt. Lett. (3)

Percept. Psychophys. (1)

G. Johansson, “Visual perception of biological motion and a model for its analysis,” Percept. Psychophys. 14(2), 201–211 (1973).

Phys. Teach. (1)

D. MacIsaac, “Google Cardboard: A virtual reality headset for $10?” Phys. Teach. 53(2), 125 (2015).

Proc. IEEE (2)

M. Cho, M. Daneshpanah, I. Moon, and B. Javidi, “Three-dimensional optical sensing and visualization using integral imaging,” Proc. IEEE 99(4), 556–575 (2011).

L. Hill and A. Jacobs, “3-D liquid crystal displays and their applications,” Proc. IEEE 94(3), 575–590 (2006).

Sci. Am. (1)

S. W. Depp and W. E. Howard, “Flat-panel displays,” Sci. Am. 266(3), 90–97 (1993).

TOG (1)

R. Burke and L. Brickson, “Focus cue enabled head-mounted display via microlens array,” TOG 32, 220 (2013).

Trends Cogn. Sci. (Regul. Ed.) (1)

M. Lappe, F. Bremmer, and A. Van den Berg, “Perception of self-motion from visual flow,” Trends Cogn. Sci. (Regul. Ed.) 3(9), 329–336 (1999).

Other (9)

N. Holliman, “3D display systems,” technical report (Department of Computer Science, University of Durham, 2005).

F.-C. Huang, D. P. Luebke, and G. Wetzstein, “The light field stereoscope,” ACM Transactions on Graphics 34, 60:1–60:12 (2015).

E. H. Adelson and J. R. Bergen, “The plenoptic function and the elements of early vision,” in Computational Models of Visual Processing (MIT Press, 1991), pp. 3–20.

I. E. Sutherland, “A head-mounted three dimensional display,” in Proceedings of the Fall Joint Computer Conference (1968).

J. P. Rolland and H. Hua, “Head-mounted display systems,” in Encyclopedia of Optical Engineering (Dekker, 2005).

H. McLellan, “Virtual realities,” in Handbook of Research for Educational Communications and Technology (1996).

C. Wheatstone, “Contributions to the physiology of vision.–Part the first. On some remarkable, and hitherto unobserved, phenomena of binocular vision,” in Philosophical Transactions of the Royal Society of London (1838).

V. Boominathan, K. Mitra, and A. Veeraraghavan, “Improving resolution and depth-of-field of light field cameras using a hybrid imaging system,” in Proceedings IEEE Conference on Computational Photography (IEEE, 2014), pp. 1–10.

K. W. Arthur and F. P. Brooks, Jr., “Effects of field of view on performance with head-mounted displays” (University of North Carolina at Chapel Hill, 2000).

Supplementary Material (1)

Visualization 1: This video shows the hybrid virtual image containing a 3D cube and a colorful diamond to evaluate the image quality of the proposed hybrid light field head-mounted display, which consists of an OLED panel, an electric circuit, an LC MLA, a TN cell, and a polarizer.



Figures (14)

Fig. 1 The relationship between accommodation and convergence distances in (a) the real world, (b) a binocular 3D display, and (c) a light field 3D display.
Fig. 2 The driving methods and working functions of the display, LC MLA, and TN cell in our proposed system.
Fig. 3 Schematic layout of the hybrid VR HMD in light field mode.
Fig. 4 Schematic layout of the hybrid VR HMD in 2D mode.
Fig. 5 (a), (b) Structure and (c) electric line of the LC MLA optical component.
Fig. 6 (a), (b) Interference pattern (IP) and (c) point spread function (PSF) of the HiR LC MLA with 8 V and 1 MHz as the driving conditions.
Fig. 7 The polarization contrast of the TN cell at different wavelengths.
Fig. 8 Structure and parameters of the monocular optical system.
Fig. 9 Theoretical resolution performance of the hybrid HMD.
Fig. 10 The optical components and experimental setup of the hybrid VR HMD.
Fig. 11 Captured images of the USAF pattern in the (a) LF, (b) hybrid, and (c) 2D image planes.
Fig. 12 Comparison between the LF, hybrid, and 2D virtual images (see Visualization 1).
Fig. 13 The verified positions of the reconstructed LF, 2D, and hybrid imaging planes.
Fig. 14 The depth range with focus cues of the hybrid virtual image.

Tables (7)

Table 1 Specifications of the OLED panel.
Table 2 Specifications of the LC MLA.
Table 3 Specifications of the TN cell.
Table 4 Specifications of the main lens.
Table 5 Specifications of the proposed hybrid VR HMD.
Table 6 Image resolution of the LF, 2D, and hybrid image planes in the hybrid VR HMD.
Table 7 Comparison of traditional, light field, and hybrid VR HMDs.

Equations (5)


\[ f = \frac{n_0\,(D/2)^2}{2\,\Delta n\, d}, \]
\[ d = S_{oLF} - \frac{f_M\,(L - \Delta x - e_r)}{f_M + (L - \Delta x - e_r)}, \]
\[ \mathrm{Resolution_{LF}} = \frac{1}{P_i}\left(\frac{g+d}{g}\right)\left(\frac{L - e_r}{S_{oLF}}\right)\frac{L}{2}\cdot\frac{\pi}{90}, \]
\[ \mathrm{Resolution_{2D}} = \frac{1}{P_i}\left(\frac{L - \Delta x - e_r}{S_{oLF} - d}\right)\frac{L - \Delta x}{2}\cdot\frac{\pi}{90}, \]
\[ \mathrm{Resolution\ (lp/mm)} = 2^{\,\mathrm{group\ number} + (\mathrm{element\ number} - 1)/6}. \]
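The last equation is the standard USAF-1951 resolution-target relation used to read the captured test patterns in Fig. 11. As a quick numerical check, here is a minimal Python sketch (the function name is ours, not from the paper):

```python
def usaf_resolution(group, element):
    """Resolution in line pairs per mm of a USAF-1951 target element,
    following the standard formula 2 ** (group + (element - 1) / 6)."""
    return 2 ** (group + (element - 1) / 6)

# For example, group 2, element 3 corresponds to about 5.04 lp/mm.
print(round(usaf_resolution(2, 3), 2))  # → 5.04
```

Each element step multiplies the spatial frequency by the sixth root of two, so six elements span exactly one group (a factor of two in lp/mm).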
