Holographic near-eye display system with large viewing area based on liquid crystal axicon


Abstract

In this paper, a holographic near-eye display system with a large viewing area based on a liquid crystal axicon is proposed. The viewing area of the system is extended by implementing the liquid crystal axicon. The error diffusion algorithm is used to calculate the computer-generated hologram (CGH). When incident on the liquid crystal axicon placed at the back focal plane of the Fourier lens, the reconstruction light modulated by the CGH is deflected into two directions, resulting in a viewing area extension. Meanwhile, to illustrate the potential of the proposed system, a two-dimensional viewing area extension is demonstrated. It combines the frequency spectrum shift with the proposed system and doubles the horizontal viewing area while tripling the vertical viewing area. The feasibility of the proposed system is verified by optical experiments. The proposed system has potential applications in holographic augmented reality (AR) display.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Augmented reality (AR) is expected to have promising applications in education, gaming, manufacturing and various other fields. The near-eye display system is a powerful approach to implementing AR technology and providing users with an immersive experience. Typical techniques for near-eye 3D display systems include stereoscopic display, light field display, varifocal display and holographic display [1–5]. Among them, holographic display holds great potential because of its capability of modulating the wavefront precisely and providing all the depth cues [6]. However, the limited field of view (FOV) and small viewing area hinder the application of holographic near-eye displays, and many efforts have been devoted to addressing these limitations.

Some researchers arrange several spatial light modulators (SLMs) in planar [7,8] or circular [9,10] configurations to extend the viewing angle and enlarge the viewing area. Increasing the number of SLMs raises the spatial bandwidth product, but it also makes the system bulky, expensive and unsuitable for near-eye display. Meanwhile, temporal multiplexing methods that fully utilize the refresh rate of the SLM have also been proposed [11,12]. By using temporal scanning techniques, the viewing area and the screen size can be increased at the same time [13], and even a 360-degree table-screen reconstruction can be achieved [14]. The temporal multiplexing method makes it possible to increase the viewing area using only one SLM, but at the cost of complex synchronizing systems and flicker problems. Additionally, specially designed optical elements may serve as alternative options. The viewing angle can be increased by a specially designed lens with a large numerical aperture [15]. A customized holographic optical element (HOE) manufactured by a holographic printer can extend the viewing area with an FOV of 50° [16]. When a MEMS mirror is used to alter the direction of the laser beam incident on a pupil-shifting HOE, the viewing area of a near-eye display can be actively extended with pupil tracking [17], but the pupil-tracking system inevitably increases the cost. Since a conical holographic optical element can reflect the reconstruction wavefront with a widely spread angle, it can also be used to enlarge the viewing area [18]. Besides, non-periodic photon sieves provide fresh perspectives on viewing area extension [19]. Owing to the deep-subwavelength dimensions of the nanosieves, the photon-nanosieve hologram offers a large viewing angle of 40° × 40° [20].

In recent years, planar liquid crystal optical elements have attracted much attention because of their merits of customized design, low-cost manufacturing and the ability to precisely control phase retardation [21–23]. In holographic display, a transmissive Pancharatnam-Berry (PB) phase lens can be implemented as an eyepiece and contributes to a 50° FOV [24]. In a Maxwellian display system, the reconstruction light is deflected in three different directions by a PB grating divided into three regions, so the viewing points of the system are tripled [25]. Nevertheless, the small intervals between the viewing points result in image overlapping. To realize dynamic pupil steering in a Maxwellian display system, a cholesteric liquid crystal combiner containing switchable half-wave plates and a polarization-selective cholesteric liquid crystal holographic lens can be designed [26], but only two viewing points can be steered between in this configuration. When two liquid-crystal-based lenses are stacked together, incident light of orthogonal polarization states is diffracted into the left and right halves of the FOV, respectively, doubling the FOV of the display system [27]. However, since the incident light needs to be switched between two orthogonal polarization states, the refresh rate of the system is sacrificed. A two-dimensional beam deflector composed of a quarter wave plate and two polarization gratings, whose output light diffracts in the x-direction and y-direction, respectively, has also been designed for viewing area expansion [28]. Nevertheless, the manufacturing process of such a customized two-dimensional beam deflector with a two-layer structure may be demanding.

In this paper, we propose a large-viewing-area holographic near-eye display system based on a liquid crystal axicon. The liquid crystal axicon is placed at the spectral plane of the 4f system to enlarge the viewing area. A polarizer and a quarter wave plate (QWP) are combined to alter the polarization state of the reconstruction light incident on the liquid crystal axicon, which ensures that the light intensity is uniform across the whole viewing area. The CGH is calculated by the error diffusion algorithm and loaded onto the SLM. When the reconstruction light of linear polarization (LP) passes through the liquid crystal axicon, it is divided into two beams diffracting toward different viewing areas. Since each beam carries the information of the reconstructed image, the viewing area of the proposed system is thereby doubled. To fully exploit the potential of the proposed system, a two-dimensional viewing area extension is demonstrated as an example. The frequency spectrum shift based on carrier waves is utilized to alter the converging points of the reconstruction light at the liquid crystal axicon. The reconstruction light converges at two spots whose radial directions are perpendicular to each other and therefore diffracts in four directions after passing through the liquid crystal axicon. Hence, the viewing area of the system is extended by two times horizontally and three times vertically. The proposed system has a simple configuration and extends the viewing area without sacrificing the refresh rate.

2. Principle of the proposed system

2.1 Structure of the proposed system

The schematic diagram of the proposed large-viewing-area holographic near-eye system is shown in Fig. 1. The system contains a laser, a pinhole, a collimating lens, two polarizers, an SLM, two beam splitters (BSs), two Fourier lenses, a QWP, an iris, a camera and a liquid crystal axicon. The pinhole and the collimating lens are used for beam expansion and collimation. Polarizer 1 controls the LP direction of the laser and adjusts the intensity of the input light. When the LP light illuminates the SLM loaded with the CGH, it is reflected after wavefront modulation and passes through the BS. Fourier lenses 1 and 2 form a 4f system that eliminates the unwanted diffraction light with the help of the iris. The liquid crystal axicon is located at the back focal plane of Fourier lens 1. The BS behind Fourier lens 2 combines the reconstructed image with the real scene. The camera is used to capture the reconstructed image.

Fig. 1. Schematic diagram of the proposed system.

The proposed system extends the viewing area by taking advantage of the polarization features of the liquid crystal axicon. After passing through polarizer 1, the collimated LP light illuminates the SLM, which is loaded with the phase-only hologram calculated by the error diffusion algorithm. The light reflected from the SLM carries all the information of the reconstructed image and is therefore called the reconstruction light. The reconstruction light then passes through the BS and is converged by Fourier lens 1 onto the liquid crystal axicon located at the Fourier plane. The polarization state of the reconstruction light is altered by polarizer 2 and the QWP before it is incident on the liquid crystal axicon. When passing through the liquid crystal axicon, the reconstruction light is split into two output beams diffracting at different angles. The two output beams contribute to the left and right viewing areas, respectively, resulting in the viewing area extension. Finally, after passing through Fourier lens 2, the output beams are combined with the real scene by the BS. The camera is moved laterally to capture the reconstructed image over a large viewing area.

2.2 Polarization features of the liquid crystal axicon

The core component of the proposed system is the liquid crystal axicon. Since the phase profile of the liquid crystal axicon is generated by spatially varying the liquid crystal directors rather than the optical path length, the liquid crystal axicon can be made sufficiently thin, which contributes to a compact near-eye system [29]. The polarization dependency of the customized liquid crystal axicon is exploited to steer light and expand the viewing area. The polarization features of the liquid crystal axicon are shown in Fig. 2. Figure 2(a) shows the phase change of the liquid crystal axicon, where the red line and the blue line represent the phase change for left-handed circularly polarized (LCP) light and right-handed circularly polarized (RCP) light, respectively. Figure 2(b) is the polarizing optical micrograph under crossed polarizers. As seen in Fig. 2(a), the phase changes for LCP light and RCP light are opposite, which means that LCP light and RCP light diffract in different directions with opposite diffraction angles. Furthermore, Fig. 2(a) illustrates that the phase change varies along every radial direction rather than only the horizontal direction. As a result, the diffraction direction of the output light depends on both the polarization state and the incident position of the incident light. Compared with conventional polarization gratings, where the output light usually diffracts in only two fixed directions, the liquid crystal axicon allows more flexibility.
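To make the radial phase profile of Fig. 2(a) concrete, the short sketch below generates an idealized conical phase map whose sign flips with the handedness of the incident circular polarization. It is a minimal illustration, not the fabricated element's design data: the sampling grid, the sign convention for handedness, and the function name are our own choices, while the 20 mm aperture and 1° deflection angle are simply taken from the experimental parameters quoted later.

```python
import numpy as np

def axicon_phase(aperture_m=20e-3, samples=1024, deflection_deg=1.0,
                 wavelength_m=526.5e-9, handedness=+1):
    """Idealized conical geometric phase of a liquid crystal axicon.

    handedness = +1 for one circular polarization, -1 for the other,
    so the two circular components see mirrored phase profiles
    (and hence opposite deflection angles).
    """
    coords = np.linspace(-aperture_m / 2, aperture_m / 2, samples)
    X, Y = np.meshgrid(coords, coords)
    r = np.hypot(X, Y)                              # radial coordinate (m)
    k = 2 * np.pi / wavelength_m                    # wavenumber (rad/m)
    # Conical phase: linear in r, slope set by the desired deflection angle.
    phase = handedness * k * r * np.sin(np.deg2rad(deflection_deg))
    return np.mod(phase, 2 * np.pi)                 # wrapped to [0, 2*pi)

phi_plus = axicon_phase(handedness=+1)   # one circular component
phi_minus = axicon_phase(handedness=-1)  # the orthogonal circular component
```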

Fig. 2. Polarization features of the liquid crystal axicon. (a) Phase change of the liquid crystal axicon along radius; (b) polarizing optical micrograph under crossed polarizers.

The schematic diagram of the liquid crystal axicon operation is shown in Fig. 3. When LCP light is incident on the liquid crystal axicon, it is deflected toward the optical axis by an angle θ and its polarization state changes to RCP. When the incident light is RCP, it changes to LCP after passing through the liquid crystal axicon and is deflected away from the optical axis by −θ. Since LP light can be regarded as a combination of LCP light and RCP light, for incident LP light the liquid crystal axicon outputs two beams with opposite handedness and opposite diffraction angles.
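The equal split of LP light into the two deflected beams can be checked with a brief Jones-calculus sketch. It only assumes the standard circular-basis decomposition (the sign convention of the circular states does not affect the result); it is our own illustration rather than part of the authors' derivation.

```python
import numpy as np

lp_x = np.array([1, 0], dtype=complex)                  # linearly polarized along x
lcp = np.array([1,  1j], dtype=complex) / np.sqrt(2)    # left-handed circular (one convention)
rcp = np.array([1, -1j], dtype=complex) / np.sqrt(2)    # right-handed circular

c_l = np.vdot(lcp, lp_x)   # amplitude coupled into the +theta branch
c_r = np.vdot(rcp, lp_x)   # amplitude coupled into the -theta branch

print(abs(c_l)**2, abs(c_r)**2)   # 0.5 0.5 -> the two deflected beams carry equal power
```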

Fig. 3. Schematic diagram of the liquid crystal axicon operation for (a) LCP, (b) RCP and (c) LP.

2.3 Calculation of the hologram based on error diffusion algorithm

During the CGH calculation process, the angular spectrum theory is used to simulate the diffraction process and the hologram is optimized by the error diffusion algorithm. According to the angular spectrum theory, the complex hologram of the target image is expressed as follows:

$$H(x_1, y_1) = \mathcal{F}^{-1}\left\{ \mathcal{F}\{ I(x, y) \} \cdot \exp\!\left( ikz\sqrt{1 - \lambda^2\left( f_X^2 + f_Y^2 \right)} \right) \right\}, \tag{1}$$
$$f_X = \cos\alpha / \lambda, \tag{2}$$
$$f_Y = \cos\beta / \lambda, \tag{3}$$
where H(x1, y1) and I(x, y) are the light fields of the hologram plane and the target image, respectively, k and λ are the wavenumber and wavelength, respectively, z is the diffraction distance between the hologram plane and the target image plane, fX and fY are the spatial frequencies, α and β are the angles between the wave vector and the x-axis and y-axis, respectively, and $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the Fourier transform and the inverse Fourier transform, respectively.
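As a rough illustration of Eq. (1), the snippet below implements band-limited angular-spectrum propagation with NumPy. The function name and sampling choices are ours, and the sign convention of z (forward or backward propagation) should be set to match the geometry described above; it is a sketch, not the authors' implementation.

```python
import numpy as np

def angular_spectrum_hologram(target, z, wavelength, pitch):
    """Complex hologram H(x1, y1) of a target field I(x, y), following Eq. (1).

    target:     2-D array, the target image I(x, y)
    z:          propagation distance between hologram and image planes (m)
    wavelength: laser wavelength (m)
    pitch:      sampling interval / SLM pixel pitch (m)
    """
    ny, nx = target.shape
    fx = np.fft.fftfreq(nx, d=pitch)                  # spatial frequencies f_X
    fy = np.fft.fftfreq(ny, d=pitch)                  # spatial frequencies f_Y
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength
    arg = 1 - wavelength**2 * (FX**2 + FY**2)
    # Transfer function exp(ikz*sqrt(1 - lambda^2 (f_X^2 + f_Y^2))); evanescent waves dropped.
    transfer = np.where(arg > 0, np.exp(1j * k * z * np.sqrt(np.maximum(arg, 0))), 0)
    return np.fft.ifft2(np.fft.fft2(np.asarray(target, dtype=complex)) * transfer)
```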

Currently, commercial SLMs can hardly modulate the complex amplitude directly. Therefore, it is necessary to encode the complex hologram into an amplitude-only hologram or a phase-only hologram. Considering the high diffraction efficiency of phase-only SLMs, the complex hologram is encoded into a phase-only hologram by extracting its phase. The phase-only hologram h(x1, y1) is given by Eq. (4), where angle() denotes the function that extracts the phase of a complex amplitude.

$$h(x_1, y_1) = \mathrm{angle}\left( H(x_1, y_1) \right). \tag{4}$$

Nevertheless, since the phase-only hologram keeps only the phase of the complex hologram and discards its amplitude, errors inevitably exist compared with the complex hologram. To suppress these errors, which may degrade the quality of the reconstructed image, we use the error diffusion algorithm to optimize the phase distribution of the hologram.

The error E(x1, y1) between the phase-only hologram and the complex hologram is expressed as follows:

$$E(x_1, y_1) = H(x_1, y_1) - h(x_1, y_1). \tag{5}$$

By scanning the hologram pixel by pixel, the errors are diffused into the adjacent pixels following the equation [30]:

$$H'(x_1, y_1 + 1) = H(x_1, y_1 + 1) + w \times E(x_1, y_1), \tag{6}$$
where H′ is the optimized complex amplitude after error diffusion and w is the weighting coefficient. When the error is diffused into the neighboring pixels H(x1+1, y1−1), H(x1+1, y1), H(x1+1, y1+1) and H(x1, y1+1), w takes the values 3/16, 5/16, 1/16 and 7/16, respectively. Therefore, the optimized phase-only hologram after error diffusion is given by:
$$h_{\mathrm{optimized}}(x_1, y_1) = \mathrm{angle}\left( H'(x_1, y_1) \right). \tag{7}$$
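A compact NumPy sketch of the error-diffusion encoding in Eqs. (4)-(7) is given below. One detail is our own reading: when the error of Eq. (5) is formed, the phase-only pixel is represented as the unit-amplitude field exp(i·angle(H)) so that the subtraction is between two complex fields; the neighbour weights follow the values listed above.

```python
import numpy as np

def error_diffusion_phase_hologram(H):
    """Encode a complex hologram H into a phase-only hologram via error diffusion.

    The hologram is scanned pixel by pixel; the residual of each pixel is
    spread to its not-yet-visited neighbours as in Eq. (6).
    """
    H = np.asarray(H, dtype=complex).copy()
    ny, nx = H.shape
    # (row offset, column offset, weight): 7/16 right, 3/16 lower-left,
    # 5/16 below, 1/16 lower-right, matching the values quoted above.
    kernel = [(0, 1, 7 / 16), (1, -1, 3 / 16), (1, 0, 5 / 16), (1, 1, 1 / 16)]
    for y in range(ny):
        for x in range(nx):
            quantized = np.exp(1j * np.angle(H[y, x]))      # phase-only pixel, Eq. (4)
            error = H[y, x] - quantized                     # residual, Eq. (5)
            for dy, dx, w in kernel:
                yy, xx = y + dy, x + dx
                if 0 <= yy < ny and 0 <= xx < nx:
                    H[yy, xx] += w * error                  # diffusion, Eq. (6)
    return np.angle(H)                                      # optimized phase, Eq. (7)
```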

3. Experimental results and discussion

3.1 Verification experiments

To verify the feasibility of the proposed system, optical experiments are conducted. A laser with a wavelength of 526.5 nm is used as the light source. The SLM has a resolution of 1920 × 1080 and a pixel pitch of 6.4 µm. The focal length of the Fourier lenses in the 4f system is 25 cm. A 2D object "BH" with a resolution of 1920 × 1080 is used as the target image. The reconstruction distance is set to 30 cm. The liquid crystal axicon, made of liquid crystal polymers, has a clear aperture of 20 mm and a deflection angle of 1°. A Canon 77D camera is used to capture the reconstructed image.

Experimental results of the 2D object reconstruction are shown in Fig. 4. The settings of the camera remain unchanged during the experiment. A real object "kitty" at the same depth plane as the reconstructed image is used for reference. The camera is moved horizontally and captures the reconstructed image at different positions to simulate a viewer observing the reconstructed image in different viewing areas. The different distances between the reconstructed image "BH" and the "kitty" indicate the variation of the viewing position. When the conventional system without the liquid crystal axicon is used for holographic reconstruction, the reconstructed image can only be seen in a single viewing area, as shown in Fig. 4(a). Based on the proposed system, the viewing area is doubled, as shown in Figs. 4(b)-(c). Figure 4(b) and Fig. 4(c) are the reconstructed images captured from the left and right viewing areas of the proposed system, respectively. When observed from the left viewing area, the "BH" appears further to the left; thus, the distance between the "BH" and the "kitty" in Fig. 4(b) is smaller than that in Fig. 4(c). The light intensity in Fig. 4(a) is higher than that in Figs. 4(b) and (c), while the light intensities in Figs. 4(b) and (c) are basically the same. This is because the reconstruction light is split into two beams when passing through the liquid crystal axicon, causing an energy redistribution. The ratio of the energy redistribution changes with the orientations of polarizer 2 and the QWP, so the light intensity across the expanded viewing area is kept uniform by adjusting polarizer 2 and the QWP.

Fig. 4. Experimental results of the reconstruction of 2D object. (a) Reconstructed image of the conventional system; reconstructed image of the proposed system captured from the (b) left viewing area and (c) right viewing area.

Fig. 5. Experimental results of the 3D object. Reconstructed image of the conventional system (a) when the "clover" is focused and (b) when the "cherry" is focused; reconstructed image of the proposed system observed from the left viewing area (c) when the "clover" is focused and (d) when the "cherry" is focused; reconstructed image of the proposed system observed from the right viewing area (e) when the "clover" is focused and (f) when the "cherry" is focused.

Furthermore, a 3D object is recorded and reconstructed to demonstrate the large viewing area of the proposed system. The 3D object consists of a "cherry" and a "clover" at reconstruction distances of 30 cm and 60 cm, respectively. A real object "kitty" is at the same depth plane as the "cherry" and a real object "chick" is at the same depth plane as the "clover". The comparison of the experimental results of the conventional and proposed systems for 3D holographic display is shown in Fig. 5. The change in the position of the reconstructed images relative to the "chick" and the "kitty" indicates a change in the viewing area. The red box in the left corner shows the magnified reconstructed image for clear illustration. Figures 5(a)-(b) are the reconstructed images of the 3D object obtained with the conventional system. Figures 5(c)-(d) are the reconstructed images captured from the left viewing area of the proposed system, while Figs. 5(e)-(f) are those captured from the right viewing area. As shown in Fig. 5(a), Fig. 5(c) and Fig. 5(e), the "clover" is in focus while the "cherry" is blurred; meanwhile, the "chick" is clear and the "kitty" is blurred. As shown in Fig. 5(b), Fig. 5(d) and Fig. 5(f), the "cherry" is in focus while the "clover" is blurred; at the same time, the "kitty" becomes clear and the "chick" is blurred. The fact that the "clover" and the "cherry" come into focus at different depth planes verifies the 3D display capability of the proposed system. Since the camera is moved laterally to capture the reconstructed image in different viewing areas, the reconstructed image has a lateral shift relative to the positions of the "chick" and the "kitty". In turn, the different lateral positions of the reconstructed image with respect to the "chick" and the "kitty" indicate different viewing areas. As can be seen from the comparison, the proposed system has a viewing area twice that of the conventional system.

3.2 Two-dimensional viewing area extension

The proposed system has the advantage of greater flexibility in expanding the viewing area. To illustrate the advantage and potential of the proposed system, two-dimensional viewing area extension is demonstrated as an example. The proposed system is combined with the frequency spectrum shift to expand the viewing area both horizontally and vertically.

The location of the frequency spectrum on the Fourier plane is manipulated [31] so that the convergence points of the reconstruction light fall at specific locations on the liquid crystal axicon. The process of the frequency spectrum shift is illustrated in Fig. 6. In the previous cases, the reconstruction light modulated by the CGH converges at the focal point (0, 0) after passing through Fourier lens 1. To expand the viewing area in both the horizontal and vertical directions, the reconstruction light is expected to converge at (xa, ya) and (xb, yb). Since the focal plane of the Fourier lens is the Fourier plane of the reconstruction light, the convergence point of the reconstruction light can be altered by manipulating the frequency spectrum. To this end, different carrier waves are introduced into the hologram, and the sub-holograms Ha(x1, y1) and Hb(x1, y1) are calculated by the following equations:

$$H_a(x_1, y_1) = \mathcal{F}^{-1}\left\{ \mathcal{F}\{ I(x, y) \cdot \exp[ ik( x\sin\theta_{xa} + y\sin\theta_{ya} ) ] \} \cdot \exp\!\left( ikz\sqrt{1 - \lambda^2\left( f_X^2 + f_Y^2 \right)} \right) \right\}, \tag{8}$$
$$H_b(x_1, y_1) = \mathcal{F}^{-1}\left\{ \mathcal{F}\{ I(x, y) \cdot \exp[ ik( x\sin\theta_{xb} + y\sin\theta_{yb} ) ] \} \cdot \exp\!\left( ikz\sqrt{1 - \lambda^2\left( f_X^2 + f_Y^2 \right)} \right) \right\}, \tag{9}$$
where $\theta_{xa}$, $\theta_{ya}$, $\theta_{xb}$ and $\theta_{yb}$ are the incline angles in the x-direction and y-direction, satisfying $\sin\theta_{xa} = x_a/f$, $\sin\theta_{ya} = y_a/f$, $\sin\theta_{xb} = x_b/f$ and $\sin\theta_{yb} = y_b/f$, respectively, and f is the focal length of Fourier lens 1.

Fig. 6. Process of the frequency spectrum shift.

Then, the composite complex hologram Hcarrier(x1, y1) is generated by adding sub-holograms:

$$H_{\mathrm{carrier}}(x_1, y_1) = H_a(x_1, y_1) + H_b(x_1, y_1). \tag{10}$$

After error diffusion and phase encoding, the phase-only composite hologram hcarrier(x1, y1) is expressed as Eq. (11):

$$h_{\mathrm{carrier}}(x_1, y_1) = \mathrm{angle}\left( H'_{\mathrm{carrier}}(x_1, y_1) \right), \tag{11}$$
where $H'_{\mathrm{carrier}}(x_1, y_1)$ is the complex hologram optimized by the error diffusion algorithm. The relationship between $H'_{\mathrm{carrier}}(x_1, y_1)$ and $H_{\mathrm{carrier}}(x_1, y_1)$ satisfies Eq. (6).
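The carrier-wave pipeline of Eqs. (8)-(11) can be sketched by reusing the two illustrative helpers introduced earlier (angular_spectrum_hologram and error_diffusion_phase_hologram, both our own code, not the authors' implementation). The spot coordinates and focal length in the commented example are the experimental values given later in this subsection.

```python
import numpy as np

def carrier_subhologram(target, z, wavelength, pitch, spot_xy, focal_length):
    """Sub-hologram whose spectrum converges at spot_xy on the Fourier plane (Eqs. (8)-(9))."""
    ny, nx = target.shape
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)
    k = 2 * np.pi / wavelength
    # Incline angles chosen so the converging point lands at spot_xy: sin(theta) = coordinate / f.
    sin_tx, sin_ty = spot_xy[0] / focal_length, spot_xy[1] / focal_length
    carrier = np.exp(1j * k * (X * sin_tx + Y * sin_ty))
    return angular_spectrum_hologram(target * carrier, z, wavelength, pitch)

# H_a = carrier_subhologram(I, 0.3, 526.5e-9, 6.4e-6, (0.0, -9e-3), 0.25)   # spot A
# H_b = carrier_subhologram(I, 0.3, 526.5e-9, 6.4e-6, (9e-3, 0.0), 0.25)    # spot B
# H_carrier = H_a + H_b                                   # composite hologram, Eq. (10)
# h_carrier = error_diffusion_phase_hologram(H_carrier)   # phase-only CGH, Eq. (11)
```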

The schematic diagram of the two-dimensional viewing area extension is shown in Fig. 7. When the carrier waves are introduced into the hologram, the frequency spectrum of the reconstructed image can be shifted in specific directions. As illustrated in Eqs. (8)-(9), the direction and distance of the frequency spectrum shift depend on the incline angles of the carrier waves. By setting proper incline angles, the reconstruction light can be made to converge at specific spots on the Fourier plane after passing through Fourier lens 1. Therefore, introducing two carrier waves with different incline angles into the hologram allows the reconstruction light to converge at two specific spots A and B, as illustrated in Fig. 6. The coordinates of A and B are (xa, ya) and (xb, yb), respectively. To expand the viewing area both horizontally and vertically, the condition xa = yb = 0 and |ya| = |xb| should be satisfied. The liquid crystal axicon is placed at the back focal plane of Fourier lens 1. The reconstruction light converging at spot A is equally split into two beams with different longitudinal (vertical) deflections when passing through the liquid crystal axicon, while the reconstruction light converging at spot B is equally split into two beams with different lateral (horizontal) deflections. Thus, the viewing area is expanded both vertically and horizontally. It is worth mentioning that, by properly choosing the incline angles of the carrier waves and the reconstruction distance, the vertically expanded viewing areas lie directly below the left half of the horizontally expanded viewing area.
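This geometry can be illustrated with a small sketch that assumes a purely radial ±θ deflection at each convergence spot, a simplification of the axicon behaviour described in Section 2.2; the 1° deflection angle matches the element used in the experiments, and the function name and return format are our own.

```python
import numpy as np

def split_directions(spot_xy, deflection_deg=1.0):
    """Angular directions of the two beams leaving the axicon for light converging at spot_xy.

    The local grating vector of the axicon points radially, so the two circular
    components are deflected by +theta and -theta along the radial direction.
    """
    radial = np.array(spot_xy, dtype=float)
    radial /= np.hypot(*spot_xy)                       # unit vector from the axicon centre to the spot
    return +deflection_deg * radial, -deflection_deg * radial

# Spot A (0, -9 mm): radial direction is vertical  -> the viewing area splits vertically.
# Spot B (9 mm, 0):  radial direction is horizontal -> the viewing area splits horizontally.
print(split_directions((0.0, -9e-3)))   # approx ((0, -1 deg), (0, +1 deg))
print(split_directions((9e-3, 0.0)))    # approx ((+1 deg, 0), (-1 deg, 0))
```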

Fig. 7. Schematic diagram of the two-dimensional viewing area extension.

To verify the feasibility of the two-dimensional viewing area extension, optical experiments are conducted. A letter "M" is used as the recorded image with a reconstruction distance of 30 cm. A "chick" and two rulers placed perpendicular to each other are used for reference. When the camera is moved laterally and vertically to capture the reconstructed image from different parts of the viewing area, the position of the reconstructed image relative to the rulers and the "chick" varies accordingly. After passing through Fourier lens 1, the reconstruction light is set to converge at A (0, −9 mm) and B (9 mm, 0). The focal length of Fourier lens 1 is 25 cm. The reconstructed images captured across the two-dimensional viewing area are shown in Fig. 8, where Figs. 8(a)-(b) correspond to the horizontally expanded viewing area and Figs. 8(c)-(d) correspond to the vertically expanded viewing area. Since the vertically expanded viewing area is directly below the left part of the horizontally expanded viewing area, as denoted in Fig. 8(a), Fig. 8(c) and Fig. 8(d), a seamless two-dimensional viewing area extension is realized. The horizontal viewing area is doubled and the vertical viewing area is tripled compared with that of the conventional system. Besides, the light intensity over the whole viewing area is basically uniform, which provides a satisfactory display effect.

Fig. 8. Experimental results of the two-dimensional viewing area extension captured from the (a) upper left, (b) upper right, (c) middle left, (d) lower left viewing area.

In general, there are three main innovations in this paper that distinguish it from previously reported research that uses polarization optics to extend the viewing area. Firstly, the core component of the proposed system is the polarization-dependent liquid crystal axicon. Compared with conventional polarization-dependent optics [26–28], the polarization-dependent liquid crystal axicon allows more flexibility in viewing area expansion. Secondly, the frequency spectrum shift is introduced to show the potential of the proposed system, and a two-dimensional viewing area extension without sacrificing the refresh rate is realized. When more converging points are generated at customized positions on the liquid crystal axicon, the viewing area can be further expanded in the future. Thirdly, the proposed system is cost-friendly, considering that the manufacturing of the liquid crystal axicon is quite mature and that the viewing area is extended without using synchronizing [13,14] or additional MEMS systems [16].

In the optical experiments, the SLM with a pixel pitch of 6.4 µm is illuminated by a laser with a wavelength of 526.5 nm. The Fourier lenses used to form the 4f system have the same focal length. Thus, the FOV is around 4.8° according to diffraction theory. In the future, we plan to use an SLM with a smaller pixel pitch for reconstruction. For example, a commercial SLM with a pixel pitch of 3.74 µm has a maximum diffraction angle of 8.1° at a wavelength of 526.5 nm. Meanwhile, by utilizing a high-magnification eyepiece or a deformed 4f system with an HOE-based combiner, it is feasible to increase the FOV to above 30°.
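The quoted FOV figures follow from the maximum diffraction angle of a pixelated SLM, 2·arcsin(λ/2p). The quick check below is our own arithmetic, rounded to one decimal place.

```python
import numpy as np

def slm_fov_deg(wavelength_m, pixel_pitch_m):
    """Full diffraction-limited FOV of an SLM: 2 * arcsin(lambda / (2 * pitch))."""
    return 2 * np.degrees(np.arcsin(wavelength_m / (2 * pixel_pitch_m)))

print(round(slm_fov_deg(526.5e-9, 6.4e-6), 1))    # ~4.7 deg, consistent with the ~4.8 deg above
print(round(slm_fov_deg(526.5e-9, 3.74e-6), 1))   # ~8.1 deg for the 3.74 um pixel pitch
```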

For now, it is still difficult to realize full-color holographic display in the liquid crystal axicon based system because of severe chromatic aberration and unequal efficiency. In future work, we plan to fabricate and integrate liquid crystal axicons working at the red, green and blue wavelengths, respectively, and to pre-compensate the chromatic aberration in the hologram based on theoretical analysis.

4. Conclusions

In this paper, we propose a liquid crystal axicon based holographic near-eye display system with an extended viewing area. The polarization properties of the liquid crystal axicon are used to enlarge the viewing area without sacrificing the refresh rate or resorting to a bulky system configuration. The hologram is calculated by the error diffusion algorithm. The viewing area of the proposed system is twice that of the conventional system. To further demonstrate the potential of the proposed system, a two-dimensional viewing area extension is also illustrated; as a result, the viewing area is doubled horizontally and tripled vertically compared with that of the conventional system. Experimental results verify the feasibility of the proposed system. By changing the incline angles of the carrier waves, more converging points at customized positions on the liquid crystal axicon can be generated, so the viewing area of the holographic near-eye system can be extended even further. The proposed system has compelling potential in AR/VR displays.

Funding

National Key Research and Development Program of China (2021YFB2802100); National Natural Science Foundation of China (62020106010, 62011540406).

Acknowledgments

The authors would like to thank Dr. Fan Chu, Dr. Zhao-Song Li and Dr. Yi Zheng for their contributions to this paper.

Disclosures

The authors declare that there are no conflicts of interest related to this paper.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. H. Urey, K. V. Chellappan, E. Erden, and P. Surman, “State of the art in stereoscopic and autostereoscopic displays,” Proc. IEEE 99(4), 540–555 (2011). [CrossRef]  

2. D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 1–10 (2013). [CrossRef]  

3. K. Akşit, W. Lopes, J. Kim, P. Shirley, and D. Luebke, “Near-eye varifocal augmented reality display using see-through screens,” ACM Trans. Graph. 36(6), 1–13 (2017). [CrossRef]  

4. Y. L. Li, N. N. Li, D. Wang, F. Chu, S. D. Lee, Y. W. Zheng, and Q. H. Wang, “Tunable liquid crystal grating based holographic 3D display system with wide viewing angle and large size,” Light: Sci. Appl. 11(1), 1–10 (2022). [CrossRef]  

5. D. Wang, C. Liu, C. Shen, Y. Xing, and Q. H. Wang, “Holographic capture and projection system of real object based on tunable zoom lens,” PhotoniX 1(1), 6–15 (2020). [CrossRef]  

6. F. Yaras, H. Kang, and L. Onural, “State of the art in holographic displays: a survey,” J. Disp. Technol. 6(10), 443–454 (2010). [CrossRef]  

7. H. Sasaki, K. Yamamoto, Y. Ichihashi, and T. Senoh, “Image size scalable full-parallax coloured three-dimensional video by electronic holography,” Sci. Rep. 4(1), 4000 (2014). [CrossRef]  

8. P. L. Makowski, T. Kozacki, P. Zdankowski, and W. Zaperty, “Synthetic aperture Fourier holography for wide-angle holographic display of real scenes,” Appl. Opt. 54(12), 3658–3665 (2015). [CrossRef]  

9. Z. Zeng, H. Zheng, Y. Yu, A. K. Asundi, and S. Valyukh, “Full-color holographic display with increased-viewing-angle [Invited],” Appl. Opt. 56(13), F112–F120 (2017). [CrossRef]  

10. F. Yaraş, H. Kang, and L. Onural, “Circular holographic video display system,” Opt. Express 19(10), 9147–9156 (2011). [CrossRef]  

11. Y. Sando, D. Barada, and T. Yatagai, “Aerial holographic 3D display with an enlarged field of view by the time-division method,” Appl. Opt. 60(17), 5044–5048 (2021). [CrossRef]  

12. Y. Z. Liu, X. N. Pang, S. Jiang, and J. W. Dong, “Viewing-angle enlargement in holographic augmented reality using time division and spatial tiling,” Opt. Express 21(10), 12068–12076 (2013). [CrossRef]  

13. Y. Takaki and K. Fujii, “Viewing-zone scanning holographic display using a MEMS spatial light modulator,” Opt. Express 22(20), 24713–24721 (2014). [CrossRef]  

14. T. Inoue and Y. Takaki, “Table screen 360-degree holographic display using circular viewing-zone scanning,” Opt. Express 23(5), 6533–6542 (2015). [CrossRef]  

15. Z. Chen, X. Sang, Q. Lin, J. Li, X. Yu, X. Gao, B. Yan, K. Wang, C. Yu, and S. Xie, “A see-through holographic head-mounted display with the large viewing angle,” Opt. Commun. 384, 125–129 (2017). [CrossRef]  

16. J. Jeong, J. Lee, C. Yoo, S. Moon, B. Lee, and B. Lee, “Holographically customized optical combiner for eye-box extended near-eye display,” Opt. Express 27(26), 38006–38018 (2019). [CrossRef]  

17. C. Jang, K. Bang, G. Li, and B. Lee, “Holographic near-eye display with expanded eye-box,” ACM Trans. Graph. 37(6), 195 (2019). [CrossRef]  

18. Y. Sando, K. Satoh, D. Barada, and T. Yatagai, “Holographic augmented reality display with conical holographic optical element for wide viewing zone,” Light Adv. Manuf. 3(1), 1–9 (2022). [CrossRef]  

19. J. Park, K. Lee, and Y. Park, “Ultrathin wide-angle large-area digital 3D holographic display using a non-periodic photon sieve,” Nat. Commun. 10(1), 1–8 (2019). [CrossRef]  

20. K. Huang, H. Liu, G. Si, Q. Wang, J. Lin, and J. Teng, “Photon-nanosieve for ultrabroadband and large-angle-of-view holograms,” Laser Photonics Rev. 11(3), 1700025 (2017). [CrossRef]  

21. J. Xiong and S. T. Wu, “Planar liquid crystal polarization optics for augmented reality and virtual reality: from fundamentals to applications,” eLight 1(1), 3–20 (2021). [CrossRef]  

22. J. Kobashi, H. Yoshida, and M. Ozaki, “Planar optics with patterned chiral liquid crystals,” Nat. Photonics 10(6), 389–392 (2016). [CrossRef]  

23. T. Zhan, J. Xiong, J. Zou, and S. T. Wu, “Multifocal displays: review and prospect,” PhotoniX 1(1), 10–31 (2020). [CrossRef]  

24. S. W. Nam, S. Moon, B. Lee, D. Kim, S. Lee, C. K. Lee, and B. Lee, “Aberration-corrected full-color holographic augmented reality near-eye display using a Pancharatnam-Berry phase lens,” Opt. Express 28(21), 30836–30850 (2020). [CrossRef]  

25. L. Wang, Y. Li, S. Liu, X. Li, and Y. Su, “42.6: Maxwellian-viewing-super-multi-view near eye display using a Pancharatnam-Berry optical element,” SID Symp. Dig. Tech. Pap. 52(S2), 533–536 (2021). [CrossRef]  

26. J. Xiong, Y. Li, K. Li, and S. T. Wu, “Aberration-free pupil steerable Maxwellian display for augmented reality with cholesteric liquid crystal holographic lenses,” Opt. Lett. 46(7), 1760–1763 (2021). [CrossRef]  

27. K. Yin, Z. He, K. Li, and S. T. Wu, “Doubling the FOV of AR displays with a liquid crystal polarization-dependent combiner,” Opt. Express 29(8), 11512–11519 (2021). [CrossRef]  

28. T. Lin, T. Zhan, J. Zou, F. Fan, and S. T. Wu, “Maxwellian near-eye display with an expanded eyebox,” Opt. Express 28(26), 38616–38625 (2020). [CrossRef]  

29. J. Kim, Y. Li, M. N. Miskiewicz, C. Oh, M. W. Kudenov, and M. J. Escuti, “Fabrication of ideal geometric-phase holograms with arbitrary wavefronts,” Optica 2(11), 958–964 (2015). [CrossRef]  

30. N. N. Li, D. Wang, Y. L. Li, and Q. H. Wang, “Method of curved composite hologram generation with suppressed speckle noise,” Opt. Express 28(23), 34378–34389 (2020). [CrossRef]  

31. C. Chang, W. Cui, J. Park, and L. Gao, “Computational holographic Maxwellian near-eye display with an expanded eyebox,” Sci. Rep. 9(1), 18749 (2019). [CrossRef]  
