Geometrical-lightguide-based head-mounted lightfield displays using polymer-dispersed liquid-crystal films

Open Access

Abstract

Integrating the promising waveguide or lightguide optical combiners into head-mounted lightfield display (LF-HMD) systems offers a great opportunity to achieve both the compact optical see-through capability required for augmented or mixed reality applications and a true 3D scene with correct focus cues required for mitigating the well-known vergence-accommodation conflict. Due to the non-sequential ray propagation nature of these flat combiners and the ray construction nature of a lightfield display engine, however, adapting the two technologies to each other confronts several significant challenges. In this paper, we explore the feasibility of combining an integral-imaging-based lightfield display engine with a geometrical lightguide based on microstructure mirror arrays. The image artifacts and the key challenges in a lightguide-based LF-HMD system are systematically analyzed and further quantified via non-sequential ray tracing simulation. We further propose to utilize polymer-dispersed liquid-crystal (PDLC) films to address the inherent problems associated with a lightguide combiner, such as increasing the viewing density and improving the image coupling uniformity. We finally demonstrate, to the best of our knowledge, the first lightguide-based LF-HMD system that takes advantage of both the compact form factor of a lightguide combiner and the true 3D virtual image rendering capability of a lightfield display.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Conventional stereoscopic displays, which enable the perception of a 3D scene via a pair of two-dimensional (2D) perspective images, one for each eye, with binocular disparities and other pictorial depth cues, typically lack the ability to render correct retinal blur effects and stimulate natural eye accommodative responses, which leads to the well-known vergence-accommodation conflict (VAC) problem. Several display methods that are potentially capable of rendering focus cues and overcoming the VAC problem have been demonstrated [1], including volumetric displays [2,3], holographic displays [4–6], multi-focal-plane displays [7,8], Maxwellian view displays [9,10], and light field displays [11–13]. Among all these methods, an integral-imaging-based (InI-based) lightfield display reconstructs a 3D scene by reproducing the directional rays apparently emitted by 3D points at different depths of the scene, and is therefore capable of rendering correct focus cues similar to those of natural viewing. Several pioneering works have already adapted this lightfield rendering approach to head-mounted display (HMD) designs for both immersive virtual reality (VR) and optical see-through augmented or mixed reality (AR/MR) applications. For instance, Lanman and Luebke demonstrated a near-eye immersive lightfield display by placing a microdisplay and microlens array (MLA) in front of the viewer’s eye [13]. Hua and Javidi demonstrated an optical see-through LF-HMD system by combining a micro-InI unit with a see-through freeform magnifying eyepiece [11]. More recently, Huang and Hua demonstrated an optical see-through LF-HMD system offering a high spatial resolution of about 3 arc minutes over an extended depth of field of over 3 diopters [14].

Although these pioneering works have successfully demonstrated the potential of LF-HMD systems for rendering focus cues and thereby addressing the well-known VAC problem of conventional stereoscopic displays, the existing LF-HMD prototypes offering optical see-through capabilities rely upon conventional optical combiners, such as a flat beamsplitter or a freeform beamsplitting surface, and are generally bulky and heavy [11,15]. An optical combiner, which combines the displayed virtual images with the real-world scene, is a key optical element in state-of-the-art optical see-through HMDs (OST-HMDs). Among the different combiner technologies, waveguide and lightguide optics are the most promising solutions because of their small volume, light weight, and relatively high efficiency. They propagate the light rays from a virtual image by total internal reflection (TIR) in a thin, transparent substrate, and utilize couplers at both ends of the substrate to couple in and extract out the virtual images [16]. Integrating these promising waveguide or lightguide optical combiners into LF-HMD systems can thus offer a great opportunity to achieve both the compact optical see-through capability required for AR/MR applications and a true 3D scene with correct focus cues required for mitigating the VAC problem.

Due to the non-sequential ray propagation nature of waveguide and lightguide combiners and the ray construction nature of a lightfield display engine, however, adapting these two technologies to each other confronts several significant challenges. The key challenges are to efficiently couple out all elemental views with a limited aperture size and to minimize image artifacts with a discrete out-coupler arrangement. In this paper, we investigate the feasibility of combining an InI-based lightfield display engine with a geometrical lightguide based on microstructure mirror arrays (MMA). The image artifacts and the key challenges in a lightguide-based LF-HMD system are characterized (Section 2). Based on the characteristics of the inherent problems associated with a lightguide combiner, such as loss of viewing density and image coupling uniformity, we propose a method to improve the out-coupled viewing density of a lightguide-based InI lightfield display by increasing the footprint fill factor, or equivalently the numerical aperture (NA), of the in-coupling ray bundles (Section 3). Via non-sequential ray tracing simulation, we characterize the out-coupled image performance and residual artifacts in relation to the expanded footprint fill factor (Section 4). Finally, we present a proof-of-concept prototype utilizing multi-layer-stacking polymer-dispersed liquid-crystal (ML-PDLC) films as the NA expansion component, demonstrating the image performance of a lightguide-based InI-HMD with two different types of ML-PDLC samples (Section 5). Although the paper utilizes an MMA-based lightguide as an example, the general analytical methods and proposed solution are applicable to other waveguide- or lightguide-based combiners.

2. System overview and key challenges in a lightguide-based LF-HMD

There are many different types of waveguides and lightguides, which can be classified by their coupling mechanisms as surface relief gratings [17–19], thin or thick holograms [20–22], metasurfaces [23], resonant waveguide gratings [24], flat beamsplitters [25], partially reflective mirror arrays [26–28], and microstructure mirror arrays (MMA) [29–33]. More generally, they fall into two types: holographic waveguides (whose couplers are diffractive optical elements) and geometrical lightguides (whose couplers are reflective optical elements). Among all these types, MMA-based geometrical lightguides have the advantages of weaker angular and wavelength dependence, less stray light, and ease of manufacture, and are thus considered a promising combiner construction.

Figure 1(a) shows the schematic layout of a geometrical-lightguide-based integral-imaging HMD system (LG-InI-HMD), which mainly consists of a micro-InI unit, an image collimator, and an MMA-based lightguide. The micro-InI unit, which renders the lightfields of a 3D scene, is composed of a microdisplay and a microlens array (MLA). The microdisplay renders an array of elemental images (EIs) providing positional sampling of the 3D scene lightfield. Each EI provides a perspective view of the 3D scene and is imaged through a corresponding element of the MLA. The MLA generates the directional views of the lightfield: the ray bundles from the EIs pass through their corresponding microlenses and integrate at their corresponding reconstruction planes to reconstruct the lightfield of a 3D scene. By changing the perspective contents of each EI, objects at different depths can be rendered. An example of rendering a 3D image point is shown in Figs. 1(a) and 1(b), where two elemental ray bundles from two adjacent EIs reconstruct a 3D image point P by rendering two perspective views. An aperture array can be inserted between the microdisplay and the MLA to reduce image crosstalk between adjacent microlenses [34]. The miniature 3D scene generated by the micro-InI unit is then magnified by the image collimator and is coupled into the MMA lightguide through its in-coupling surface. The ray bundles from the EIs are coupled into the lightguide substrate by a wedge-shaped in-coupler, propagate through the substrate by TIR, and are coupled out by an MMA out-coupler toward the eyebox. The MMA is an array of slanted microstructure mirrors (about 1 mm wide) coated on top of spatially separated wedged grooves. The microstructure mirrors are separated by uncoated transparent flat regions (about 1 to 2 mm wide) to enable the see-through view of the real-world scene.

Fig. 1. Two potential image artifacts in an InI-based MMA lightguide. (a) and (b): Ray path diagrams of two elemental views of a reconstruction point P; ray path splitting arises in (a) and elemental view missing arises in (b). (c) Camera-captured image when the image is at infinity (left) and at 3 diopters, where image ghosts arise due to ray path splitting (right). (d) Camera-captured image when all EIs are displayed (left) and when elemental views are lost (right); three sets of Snellen-chart InI with 3 × 3 views are rendered at 0.6 diopters away.

When attempting to out-couple a 3D lightfield source through a lightguide as shown in Fig. 1(a), there are two critical issues. The first issue relates to the image quality degradation and artifacts over the whole field of view (FOV) in geometrical lightguides displaying images at a finite or varifocal depth, which has been discussed thoroughly in our recent work [29]. Due to the non-sequential ray propagation through the substrate and the discrete out-coupling structures of the out-coupler, such as micro-mirror arrays, the ray bundles from the same pixel on a display source are usually split into multiple optical paths, either by different numbers of TIRs or by different segments of the out-coupler, and are inherently subject to different optical path lengths (OPLs). This ray path splitting can induce ghost-like image artifacts and degrade the image quality. It also limits the size of the eyebox, since more ray path splitting arises from different numbers of TIRs during pupil replication. An example is shown in Fig. 1(a), where an elemental ray bundle is split into three sub-ray paths (shown in orange, green, and red) by three micromirrors when it is coupled out through the eyebox. Ray path splitting does not affect the image performance when the central depth plane (CDP) of the micro-InI unit, which refers to the optical conjugate of the microdisplay through the MLA [35], is located at the front focal plane of the collimator: in this collimated condition, all elemental ray bundles are collimated before being coupled into the lightguide, and all sub-ray paths from an elemental ray bundle are coupled out at the same field angle. The collimated condition of the CDP, however, imposes a significant compromise on the resolution and depth range of the reconstructed light field. It is therefore preferred to place the CDP inside the front focal plane of the collimator. When the CDP is not at the front focal plane, the elemental ray bundles are focused at a finite depth in visual space, and the split sub-ray paths from an elemental ray bundle will form multiple image points in visual space due to their different OPLs, as demonstrated by the example in Fig. 1(c).

The second issue affects the viewing density and uniformity of the out-coupled light field image. It is specific to implementing an InI-engine in an MMA lightguide and is caused by the reduced footprint size of each elemental ray bundle of the InI-engine on the eye pupil. In an InI-based lightfield rendering system, a 3D point is rendered by several ray bundles emitted from multiple selected pixels of different EIs, and the individual ray bundles are projected into an array of spatially separated footprints on the exit pupil of the collimator lens located on the in-coupling wedge surface. In this case, an elemental ray bundle representing a specific elemental view only occupies a small portion of the overall exit pupil, so some elemental views fail to be coupled out through the eyebox. Figure 1(b) shows a ray path diagram where an elemental ray bundle misses the eyebox after being coupled out, due to the limited footprint size of the input elemental ray bundle. As a result, some of the reconstructed elemental views and image contents are not visible through the eyebox after propagating through the lightguide. Figure 1(d) shows an image captured through a 4 mm eyebox of the InI-based system, where the micro-InI unit has rendered three sets of resolution targets with 3 × 3 views at 0.6 diopters away from the viewer. For comparison, the left part of Fig. 1(d) shows an image captured directly at the exit pupil of the collimator without the MMA lightguide, where all the targets are properly reconstructed without missing EIs. The right part of the figure shows the captured image of the out-coupled lightfields after implementing the MMA lightguide. It can be clearly seen that several parts of the targets are not properly reconstructed, with noticeable missing parts and degraded resolution. Some of the elemental views are not coupled out due to the mismatched footprint positions of these elemental ray bundles, while some parts of the image contents in the red circle are totally missing, since all the elemental views at these fields fail to be coupled out through the eyebox.

3. Method for enhancing the out-coupled viewing density of a LG-InI engine

As discussed in Section 2, two challenges arise when adapting a micro-InI lightfield engine to a geometrical lightguide or waveguide. While the first issue of ghost-image artifacts related to image focal depth change has been extensively discussed in our recent work [29], this paper focuses on addressing the second issue, which is specific to lightfield rendering.

Figure 2(a) shows the schematic layout of a typical InI-engine without a lightguide combiner and the projected footprints of elemental ray bundles at the exit pupil of the collimator, where 3 × 3 elemental views reconstruct an image point on the CDP. The footprint fill factor of an elemental view, ${P_{fp}}$, is defined as the ratio of the footprint diameter d of an elemental ray bundle to the central distance or pitch, s, between two adjacent views [35,36]. In the example of Fig. 2(a), the fill factor of each elemental view equals 1. ${P_{fp}}$ typically ranges from 0 to 1, since it is limited by the arrangement of the MLA and the NA of the ray bundles from the microdisplay. The constraint ${P_{fp}} \le 1$ limits the NA of an elemental ray bundle to be no higher than the NA of the MLA, to avoid crosstalk from neighboring elemental views.

Fig. 2. (a) A typical InI-engine when a 3D point is rendered by 3 × 3 views and reconstructed at the CDP; the footprint fill factor ${P_{fp}} \le 1$. (b) An NA expander (e.g., a holographic diffuser) is inserted at the depth of the CDP to expand the effective NA of the elemental views, making the fill factor ${P_{fp}} > 1$ on the exit pupil. (c) A typical InI-engine when a 3D point is rendered by 3 × 3 views and reconstructed at a depth displaced by $\Delta {z_0}$ from the CDP, while an NA expander is inserted accordingly at the depth of the reconstruction plane.

When coupling the rendered lightfield through a geometrical lightguide, as demonstrated by the example of Fig. 1(d), some of the elemental views fail to be out-coupled inside the eyebox, and in severe situations parts of the reconstructed content can be lost entirely, because the footprint of each elemental view on the in-coupling surface of the lightguide is substantially smaller than in a non-lightfield display. To increase the chances of elemental ray bundles being coupled out through the eyebox and allow more elemental views to be seen, a possible solution is to increase the footprint size of each elemental ray bundle on the in-coupler of the lightguide. Directly increasing the NA of the micro-InI unit (e.g., by decreasing the f-number of the MLA) does not improve the out-coupling uniformity of the EIs and can introduce more vignetting at the in-coupler wedge of the lightguide, since it increases the spacing, s, between adjacent views and thus the size of the overall exit pupil, which reduces the effective viewing density for a fixed eye pupil size.

An alternative method is to increase the footprint fill factor, ${P_{fp}}$, of each elemental view by increasing its footprint size, d, while maintaining the same spacing and arrangement among adjacent views, so that the projected area of each elemental view occupies a larger portion of the exit pupil and the elemental views have a higher chance of being coupled out through the eyebox. Figure 2(b) shows the schematic layout of a method to increase ${P_{fp}}$ without introducing crosstalk by adding an NA expansion component at a reconstruction depth plane of the micro-InI unit. With an NA expander such as a holographic diffuser inserted at the same depth as the CDP, the emitting angle of each elemental ray bundle originating from the CDP is expanded, which results in an increased projected footprint diameter d on the exit pupil plane of the collimator without changing the footprint arrangement and pitch size s. With an increased footprint fill factor, the elemental ray bundles are more likely to be coupled out through the eyebox by the MMA out-coupler, which improves the effective viewing density and image uniformity of the out-coupled lightfield.

Based on the schematic layout in Fig. 2, the projected pitch size, s, of an elemental ray bundle on the exit pupil of the collimator, which only depends on the micro-InI unit and the collimator and is not affected by the NA expander, can be calculated by

$$s = \frac{{{N_{view}}}}{{(1 + {N_{view}}) \cdot f/{\#_{MLA}}}} \cdot \frac{{{z_0}}}{{z^{\prime}}} \cdot (z^{\prime} + {z_{LG}}), \tag{1}$$
where ${N_{view}}$ is the number of elemental views in the vertical or horizontal direction of the reconstructed 3D scene, which equals the lateral magnification ${M_{MLA}}$ of a lenslet in the MLA at the CDP; $f/{\#_{MLA}}$ is the f-number of a lenslet in the MLA; and ${z_0}$, $z^{\prime}$, and ${z_{LG}}$ are the distances from the collimator to the reconstructed image plane, the image plane in visual space, and the exit pupil plane, respectively. ${z_{LG}}$ equals the focal length of the collimator to maintain object-space telecentricity, which reduces the size of the exit pupil as well as the crosstalk and image artifacts from adjacent EIs. The expanded footprint diameter ${d_e}$, which depends on the expanded numerical aperture $N{A_e}$ of the NA expander as well as on the collimator, can be calculated by
$${d_e} = \frac{{2N{A_e} \cdot {z_0}}}{{z^{\prime}}} \cdot (z^{\prime} + {z_{LG}}), \tag{2}$$
where $N{A_e} = \tan \theta$ and $\theta$ is the half emitting angle of each elemental view after passing through the NA expander. From Eqs. (1) and (2), ${P_{fp}}$ can be changed by varying the half emitting angle $\theta$ of the NA expander and can be greater than 1, which allows the projected footprints to overlap on the exit pupil of the collimator, as illustrated in Fig. 2(b). For an InI-based MMA lightguide system, the value of ${P_{fp}}$ should be determined by weighing the effective viewing density of the out-coupled light field image against the image artifacts induced by ray path splitting. Using high-$N{A_e}$ diffusers with large ${P_{fp}}$ can significantly increase the footprint size of the elemental images and thus the viewing density of the out-coupled light field image; however, more ray paths are then generated at the out-coupler, which induces severe image artifacts and degrades the image performance.
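Dividing Eq. (2) by Eq. (1) cancels the common projection factor $({z_0}/z^{\prime}) \cdot (z^{\prime} + {z_{LG}})$, so the fill factor reduces to ${P_{fp}} = 2N{A_e}(1 + {N_{view}}) \cdot f/{\#_{MLA}}/{N_{view}}$, independent of the reconstruction depth. The short sketch below evaluates this ratio for a few half emitting angles, using the prototype values ${N_{view}} = 3$ and $f/{\#_{MLA}} = 3.3$ from Section 4; the chosen angles are examples only and are not taken from Table 1.

```python
import math

def fill_factor(na_e, n_view=3, f_num_mla=3.3):
    """Footprint fill factor P_fp = d_e / s from Eqs. (1) and (2).

    Dividing Eq. (2) by Eq. (1) cancels the common projection factor
    (z0 / z') * (z' + z_LG), so P_fp depends only on the expanded NA,
    the view count, and the MLA f-number.
    """
    s_factor = n_view / ((1 + n_view) * f_num_mla)  # Eq. (1) without the projection factor
    d_factor = 2 * na_e                             # Eq. (2) without the projection factor
    return d_factor / s_factor

for theta_deg in (2.5, 10, 15, 23):                 # illustrative half emitting angles
    na_e = math.tan(math.radians(theta_deg))        # NA_e = tan(theta)
    print(f"theta = {theta_deg:5.1f} deg -> P_fp = {fill_factor(na_e):.2f}")
```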

By inserting an NA expander at the depth of the CDP, a natural consequence of the expanded NA and footprints of the elemental views is a more rapid image quality degradation, and thus a reduced depth of field compared with systems without NA expansion, when reconstructing objects displaced from the CDP, due to the defocusing effects inherent to InI-based light field rendering [35]. These adverse effects may be remedied by dynamically adjusting the placement of the NA expander so that its axial location coincides with the depth of reconstruction. Figure 2(c) shows the case where the NA expander is displaced from the depth of the CDP by a distance $\Delta {z_0}$ and a 3D point is reconstructed at the same depth as the NA expander. Although the microdisplay is still conjugate to the CDP, the elemental images on the microdisplay are altered such that the chief rays of the corresponding elemental views intersect at the point of reconstruction; the footprints of the corresponding ray bundles are projected onto the CDP as three spatially separated elemental image points but onto the NA expander as a single overlapping area, as shown by the inset in Fig. 2(c). The size of the overlapping ray footprint on the NA expander increases as $\Delta {z_0}$ increases, due to the defocusing of elemental views inherent to InI-based rendering, which degrades the resolution of the reconstructed object; this effect, however, is independent of the properties of the NA expander. An NA expander placed at the same depth as the reconstructed object will expand the effective NA of the rays departing from it and thus increase their footprint size on the collimator aperture, but it will not increase the apparent size of the ray projection area on the NA expander, and thus will not degrade the apparent resolution of the reconstructed object. Therefore, to obtain optimal image quality and correct focus cues across a large depth range, the position of the NA expander should be dynamically adjusted to be in close proximity to the depth of the reconstruction plane, as shown in Fig. 2(c). Though ideal image quality is obtained when the NA expander lies exactly on the reconstruction plane, some depth mismatch is allowed; the tolerance depends on the diffusing angle and the depth of field after expansion, and is discussed in Section 5.
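To make the chief-ray condition above concrete, the following sketch uses a simple pinhole model of the MLA: for a reconstruction point displaced from the CDP, each lenslet's elemental image pixel is found by projecting the point through that lenslet's center onto the microdisplay. The 1 mm lenslet pitch matches Section 4; the display-to-MLA gap and the point coordinates are assumed purely for illustration.

```python
# Minimal pinhole sketch of how the EI contents are altered so that the chief
# rays of adjacent elemental views intersect at a reconstruction point
# displaced from the CDP (Fig. 2(c)). Parameter values are illustrative
# assumptions, not the prototype's exact MLA/display geometry.
MLA_PITCH = 1.0       # lenslet pitch, mm (from Section 4)
GAP = 3.0             # microdisplay-to-MLA gap, mm (assumed)

def ei_pixel_for_point(x_p, z_p, lenslet_index):
    """Lateral position (mm) on the microdisplay of the pixel that lenslet
    `lenslet_index` must light up so its chief ray passes through the 3D
    point (x_p, z_p), with z_p measured from the MLA toward the viewer."""
    x_lens = lenslet_index * MLA_PITCH          # lenslet center (pinhole)
    # Similar triangles: the ray from (x_p, z_p) through the lenslet center
    # continues to the display plane at distance GAP behind the MLA.
    return x_lens - (x_p - x_lens) * GAP / z_p

# A point 0.5 mm off-axis reconstructed 40 mm from the MLA, rendered by three
# adjacent views: each view must light a slightly different EI pixel.
for j in (-1, 0, 1):
    print(j, round(ei_pixel_for_point(0.5, 40.0, j), 4))
```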

4. Characterizing display engine by image performance

Based on the method in Section 3, a micro-InI unit can potentially be adapted to a geometrical lightguide to enable a compact lightfield display by effectively increasing the fill factor of the elemental views. On the other hand, considering the tradeoff between the effective viewing density of the out-coupled light field image and the image artifacts induced by ray path splitting, we anticipate that the choice of the NA expander plays a critical role in image performance and that an optimal ${P_{fp}}$ should be selected. By adopting the modeling and retinal image simulation methods we previously developed for MMA-based lightguides in [29,30], we investigate the out-coupled image performance of an InI-LG system with different footprint fill factors ${P_{fp}}$ and develop guidelines for an optimal choice of $N{A_e}$ and the resulting ${P_{fp}}$.

For the purpose of simulation, the microdisplay is assumed to have a 4.7 µm pixel size and an active resolution of 1485 × 892 pixels. The MLA has a pitch of 1 mm and an f-number of 3.3. The lateral magnification ${M_{MLA}}$ of a lenslet in the MLA is 3, which gives a rendered scene with 3 × 3 views. The image collimator has a focal length of 20.82 mm, which gives a FOV of 19.11° × 11.49°. The CDP of the reconstructed lightfield after the collimator is located at 0.6 diopters from the eye pupil in visual space, and a 3D scene is rendered on the CDP in the simulation. The in-coupling surface of the MMA lightguide is located at the back focal distance of the collimator, which coincides with the exit pupil of the collimator. The lightguide has dimensions of 51 mm (L) × 4.39 mm (W) × 16 mm (H), with an in-coupler wedge surface width of 13.42 mm and an MMA out-coupler area of 13.3 mm (L) × 1.53 mm (W) × 16 mm (H). The eyebox, circular with a 4 mm diameter, is located 23 mm away from the inner surface of the lightguide. These parameters were chosen based on the parts available for prototype implementation.
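As a sanity check, the quoted FOV and the 2.3-arcmin angular sampling used later in this section follow directly from these parameters; the short calculation below is a sketch assuming a simple paraxial tangent model and neglecting distortion, so the horizontal FOV it returns (about 19.0°) is within a tenth of a degree of the quoted 19.11°.

```python
import math

# Cross-check of the simulation parameters: the FOV follows from the
# microdisplay active area and the collimator focal length, and the
# ~2.3 arcmin sampling follows from dividing the FOV by the number of
# reconstructed pixels (display pixels / N_view per direction).
PIXEL = 4.7e-3                 # pixel size, mm
RES_H, RES_V = 1485, 892       # active resolution
F_COLL = 20.82                 # collimator focal length, mm
N_VIEW = 3                     # views per direction

def fov_deg(n_pixels):
    half = n_pixels * PIXEL / 2
    return 2 * math.degrees(math.atan(half / F_COLL))

fov_h, fov_v = fov_deg(RES_H), fov_deg(RES_V)
print(f"FOV ~ {fov_h:.2f} deg x {fov_v:.2f} deg")     # ~19.0 x 11.5 deg
ang_res = fov_h * 60 / (RES_H / N_VIEW)               # arcmin per reconstructed pixel
print(f"angular sampling ~ {ang_res:.2f} arcmin")     # ~2.3 arcmin
```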

The simulation was done in LightTools by setting up the lightguide model and tracing each elemental ray bundle from a pixel on the microdisplay of the InI-engine. The simulation method has been discussed in detail in [29,30]. For simplicity, we only simulated the three elemental views in the YOZ plane, since the elemental views in the XOZ plane experience similar ray paths and have similar image performance. The data of each elemental ray bundle hitting the eyebox were collected iteratively, including the number of ray paths, the ray positions, and the directional cosines of the rays, for subsequent image reconstruction and performance evaluation. To compare the image performance and efficiency of InI-engines with different fill factors, the beam width of the in-coupling elemental ray bundle was altered in accordance with the emitting angles of different NA expanders. Table 1 summarizes the half emitting angles $\theta$, the corresponding footprint diameters ${d_e}$ on the lightguide in-coupler wedge, and the footprint fill factors ${P_{fp}}$ of all the simulated cases.

Table 1. Selected NA expansion conditions and fill factors on reconstruction plane in simulations.

Figure 3 summarizes the main simulation results characterizing the effects of different footprint fill factors in the InI-based MMA lightguide system. The in-coupling field angle ${\theta _i}$, labeled in Fig. 2(a), is defined as the angular position of an image point on the reconstruction plane relative to the center of the collimator. The number of image points or ray paths ${n_r}$ originating from each elemental ray bundle coupled through the eyebox is plotted in Fig. 3(a) to identify the field angles ${\theta _i}$ at which ghost-like image artifacts (Fig. 1(c)) or missing elemental views (Fig. 1(d)) arise. The data of the three elemental ray bundles rendering the three elemental views of each 3D image point in the YOZ plane are labeled as blue, red, and yellow stars in the figure, respectively, counting the ray paths originating from the blue, red, and green elemental ray bundles in Fig. 2(b), respectively. Each ray path reconstructs an image point seen by the viewer. Since an elemental ray bundle may be split into multiple ray paths and generate multiple image points due to ray path splitting, the image point generated by the ray path with the highest optical power in a bundle is the primary image point, and all the other image points are ghost image points. The reconstructed 3D scene will thus be free of image artifacts if ${n_r} = 1$ for all elemental ray bundles, while an elemental view will suffer ghost-like image artifacts if ${n_r} \ge 2$, or will be totally missing if ${n_r} = 0$ for a specific elemental ray bundle. The simulated results of ${n_r}$ as a function of the in-coupling field angle ${\theta _i}$ under three NA expansion conditions, when ${P_{fp}}$ equals 1, 3.71, and 5.61, are shown in Fig. 3(a). When there is no NA expander (${P_{fp}} = 1$), the EIs from +1.2° to +4.2° are totally missing, since no elemental image can be coupled out at these fields; these elemental images can be coupled out when ${P_{fp}}$ is 3.71 or 5.61. Figure 3(b) plots the normalized statistical distribution of the number of ray paths ${n_r}$ from all 1485 elemental ray bundles when the footprint fill factor varies from 1 to 7.07. The results show a tradeoff between the two aforementioned artifacts: some of the elemental views are missing (${n_r} = 0$) when ${P_{fp}} < 3.71$, while the percentage of image ghosts increases as the fill factor increases. Figure 3(c) plots the power ratio of the primary image point to the overall out-coupled elemental ray bundle when ${P_{fp}}$ equals 1, 3.71, and 5.61. As ${P_{fp}}$ increases, the missing EIs from +1.2° to +4.2° can be coupled out, but more optical power may be turned into ghost image points, which can cause image contrast degradation.
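The per-bundle bookkeeping behind Fig. 3 can be summarized with a small post-processing routine. The sketch below assumes the ray tracer has already grouped the out-coupled ray paths of each elemental bundle and recorded their optical powers (the data structure and field-angle keys are hypothetical); it then classifies each bundle as missing (${n_r} = 0$), clean (${n_r} = 1$), or ghosted (${n_r} \ge 2$) and computes the primary-point power ratio plotted in Fig. 3(c).

```python
from collections import Counter

def classify_bundles(bundle_powers):
    """Post-process ray-trace output: `bundle_powers` maps each elemental
    ray bundle (keyed e.g. by in-coupling field angle) to the list of
    optical powers of its out-coupled ray paths reaching the eyebox.
    Returns the n_r statistics and the primary-point power ratios."""
    stats, ratios = Counter(), {}
    for key, powers in bundle_powers.items():
        n_r = len(powers)
        stats[n_r] += 1
        if n_r == 0:
            continue                  # elemental view entirely missing
        # The highest-power path forms the primary image point; the rest
        # are ghost image points.
        ratios[key] = max(powers) / sum(powers)
    return stats, ratios

# Toy data: one clean, one ghosted, and one missing elemental view.
demo = {"-2.0deg": [0.8], "0.0deg": [0.5, 0.3, 0.1], "+2.4deg": []}
stats, ratios = classify_bundles(demo)
print(dict(stats))    # {1: 1, 3: 1, 0: 1}
print(ratios)         # primary power ratio per out-coupled bundle
```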

Fig. 3. Simulation results of the number of ray paths (image points) and power distributions under different ${P_{fp}}$. (a) The number of out-coupled ray paths (image points) from each ray bundle when ${P_{fp}}$ equals 1, 3.71, and 5.61; the three elemental ray bundles are labeled in different colors. (b) The statistical distribution of the number of image points coupled out when the footprint fill factor ${P_{fp}}$ varies as in Table 1, and (c) the power ratio of the out-coupled image point with the highest power to the overall out-coupled power of each elemental view when ${P_{fp}}$ equals 1, 3.71, and 5.61.

To evaluate the image performance of the actual out-coupled lightfields, we also simulated the distribution of the reconstructed image points as a function of FOV in visual space. The whole FOV in the YOZ direction is divided into bins of 2.3 arcmin to collect the elemental views, where the sampling of 2.3 arcmin is consistent with the angular resolution of the reconstructed image in visual space. Figure 4(a) plots the number of elemental views actually coupled out through the eyebox, within each sampling bin, as primary image points in visual space when ${P_{fp}}$ equals 1, 3.71, and 5.61; this represents the angular distribution of the out-coupled elemental views, disregarding ghost images. The results again show that the image between +1.2° and +4.2° cannot be coupled out from the lightguide when there is no NA expansion (${P_{fp}} = 1$), while as ${P_{fp}}$ increases, more elemental views can be coupled out. It is worth noting that the number of elemental views in visual space can exceed three at some field angles. This is due to the optical path differences induced by ray path splitting, which lead to lateral displacements of elemental images in visual space. The normalized statistical distributions of the number of primary image points in visual space when ${P_{fp}}$ changes from 1 to 7.07 are plotted in Fig. 4(b), which gives the statistical results of Fig. 4(a) under different ${P_{fp}}$. When ${P_{fp}}$ is larger than 3.71, all viewing directions across the FOV have at least two elemental views in the YOZ plane, which satisfies the minimal condition for rendering depth cues in an InI-based lightfield display, namely at least two elemental views converging at the rendered image depth. Figure 4(c) plots the distributions of the overall out-coupled image points, including the ghost image points generated by ray path splitting, showing that more ghost images are observed as ${P_{fp}}$ increases, which affects the image contrast.
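A minimal version of this binning step is sketched below, assuming a list of out-coupled primary image points with their field angles in the YOZ plane (the sample angles are placeholders, not simulation output): the FOV is divided into 2.3-arcmin bins and each bin is checked against the two-view condition for rendering depth cues.

```python
import numpy as np

# Bin out-coupled primary image points into 2.3-arcmin field bins, as used
# for Fig. 4: each bin counts how many elemental views reach the eyebox in
# that viewing direction. Angles below are illustrative placeholders.
BIN = 2.3 / 60.0                                   # bin width, degrees
fov = (-9.5, 9.5)                                  # YOZ half-FOV range, degrees
edges = np.arange(fov[0], fov[1] + BIN, BIN)

primary_angles = np.array([-4.2, -4.18, 0.01, 0.02, 0.03, 6.7])  # hypothetical
counts, _ = np.histogram(primary_angles, bins=edges)

# At least 2 views per bin are needed for depth cues to be rendered there.
ok = counts >= 2
print(f"{ok.sum()} of {len(counts)} bins satisfy the 2-view condition")
```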

Fig. 4. (a) Angular distribution of the number of primary image points in visual space when ${P_{fp}}$ equals 1, 3.71, and 5.61; (b) the statistical distributions of the number of elemental views (disregarding ghost images) seen through the eyebox in visual space across the FOV; (c) the statistical distribution of overall image points seen through the eyebox (including primary and ghost image points) as a function of FOV in visual space when ${P_{fp}}$ varies as in Table 1.

Based on the ray tracing method, the retinal image of an InI-based MMA lightguide system can also be reconstructed. The method of reconstructing the retinal image of an MMA lightguide system has been discussed in [30]. The incoherent retinal point spread function (PSF) of each image point, including ghost image points, was simulated based on the collected ray path data. The Arizona Eye Model was adopted to simulate the optical performance of the human eye. Diffraction effects introduced by the pupil function and the field-dependent pupil transmittance were also considered. A sinusoid fringe pattern with a spatial frequency of 0.77 cy/deg was employed as the test image, shown in Fig. 5(a). The test image is rendered on the CDP, which is 0.6 diopters away in visual space. The simulated retinal images under different fill factors ${P_{fp}}$, obtained by convolving the original image with the field-dependent PSFs, are shown in Fig. 5(b). As the fill factor increases, the image uniformity improves significantly, though some ghost images caused by ray path splitting and some image contrast degradation can be observed.
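The final convolution step can be illustrated with a one-dimensional stand-in, sketched below: a 0.77 cy/deg sinusoid is blurred by a PSF and the residual fringe contrast is read off. A single isotropic Gaussian replaces the field-dependent PSFs derived from the traced ray data and the Arizona Eye Model, so the numbers are illustrative only.

```python
import numpy as np

# 1D stand-in for the retinal-image simulation: blur a 0.77 cy/deg sinusoid
# fringe with a PSF and measure the residual contrast. The Gaussian PSF is a
# placeholder for the field-dependent, ray-data-derived PSFs used in the paper.
FOV_DEG, N = 19.0, 512
x = np.linspace(0.0, FOV_DEG, N)
dx = x[1] - x[0]                                     # degrees per sample
fringe = 0.5 + 0.5 * np.sin(2 * np.pi * 0.77 * x)    # test pattern (cy/deg)

def gaussian_psf(sigma_deg):
    t = np.arange(-1.0, 1.0 + dx, dx)                # +/-1 deg support, same grid
    psf = np.exp(-t**2 / (2 * sigma_deg**2))
    return psf / psf.sum()                           # normalize to unit energy

retina = np.convolve(fringe, gaussian_psf(0.05), mode="same")
interior = retina[64:-64]                            # avoid zero-padding edges
contrast = (interior.max() - interior.min()) / (interior.max() + interior.min())
print(f"fringe contrast after blur: {contrast:.2f}")
```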

Fig. 5. Retinal image simulation of the InI-based MMA lightguide. (a) The original input image, rendered on the CDP, and (b) the simulated retinal images when ${P_{fp}}$ varies from 1 to 7.07.

In summary, the footprint fill factor ${P_{fp}}$ affects the number of out-coupled elemental views, the image uniformity, and the number of ghost images. An optimal fill factor ${P_{fp}}$ should at least enable the image across the whole FOV to be coupled out with correct focus cues and yield acceptable image uniformity, while maintaining a low ghost image ratio. For example, to maintain a desirable image quality, ${P_{fp}}$ in the range of 3.71 to 4.66 may be preferable: based on the above analysis, the image across the whole field can then be coupled out with at least two elemental views (shown in Fig. 4(b), which enables elemental images to converge and focus at the correct image depths), with moderate image uniformity (Fig. 5(b)) and a relatively low ghost image ratio (Fig. 4(c)).

5. Experimental results

Based on the schematics in Fig. 1 and Fig. 2, we implemented a proof-of-concept prototype of an InI-based MMA lightguide system, as shown in Fig. 6(a). We used a monochromatic OLED microdisplay and an existing MLA to construct the InI-engine, a commercial objective as the image collimator, and an MMA lightguide as the optical combiner. The system specifications and setup are the same as those used in the simulation described in Section 4. The InI-engine was carefully aligned before the MMA lightguide was installed. The MLA was mounted on a three-dimensional translation stage, whose position relative to the OLED could be controlled by three micrometers. A machine vision camera was placed at the exit pupil of the collimator during the alignment. The position and tip-tilt of the OLED, MLA, and collimator were finely adjusted so that a clear light field image could be captured by the camera at the rendering depths. The MMA lightguide was then installed, and the camera was moved to the exit pupil of the lightguide to capture the out-coupled image through the lightguide. To increase the footprint fill factor ${P_{fp}}$, engineered diffusers or PDLCs (Fig. 6(b)) with certain diffusing angles were inserted at the CDP.

Fig. 6. (a) Prototype of the InI-based MMA lightguide using an engineered diffuser as the NA expander. (b) An electrically switched PDLC film in its diffusing (top) and transparent (bottom) states, later adopted as the NA expander.

The first scene was rendered on the CDP, which is conjugate to 0.6 diopters in visual space. Three groups of Snellen letter ‘E’s at the same depth were rendered at 0.6 diopters. The angular resolutions of the Snellen letters were 0.63, 0.42, and 0.21 degrees/cycle (top to bottom) in visual space, corresponding to letter line widths of 3, 2, and 1 pixels. Figure 7(a) shows the elemental views rendered on the microdisplay. The elemental views were integrated by the MLA, and the 3D scene was reconstructed through the collimator, apparently located at a depth of 0.6 diopters from the camera. The reconstructed lightfield of the scene was then coupled through the MMA lightguide. Figure 7(b) shows the captured out-coupled images from the InI-LG when no diffuser was inserted and when an engineered diffuser with a full width at half maximum (FWHM) diffusing angle of 5, 10, 15, or 30 degrees was inserted. The camera exposure and aperture were kept unchanged during capture. In the image captured without a diffuser, there is a dark region on the right side caused by missing elemental views (marked by the red rectangle). As the diffusing angle increases, this missing region begins to be coupled out and the overall image uniformity improves significantly, while the overall image becomes dimmer. Some punctate artifacts can also be observed in the images captured with diffusers, induced by the irregular structures of the engineered diffusers. The results validate that the NA expansion scheme helps the elemental views to be coupled out and improves the out-coupling uniformity over the FOV.

Fig. 7. (a) The array of elemental images displayed on the microdisplay when rendering three sets of Snellen charts at 0.6 diopters with 3 × 3 views, and (b) the captured out-coupled images of the InI-LG with no diffuser and with a 5°, 10°, 15°, or 30° FWHM engineered diffuser inserted at the CDP. The red rectangle marks the dark fields with all elemental views missing.

To study the rendering performance of the InI-engine over a wide depth range, a second scene was rendered with the same three groups of Snellen letters, but with each group located at a different reconstruction depth of 0.01, 0.6, and 3 diopters, respectively (from top-right to bottom-left). The engineered diffuser was replaced by multi-layer-stacking PDLCs (ML-PDLCs) as the NA expander for the reconstructed 3D images at different depths. The transparency of each layer in the ML-PDLCs can be switched electrically at high speed (e.g., in less than 1 ms), so that the diffusing depth can be switched between different depths, each approximately matching the depth of a desired reconstruction target plane. It is worth noting that the collimator needs to be moved closer to the InI-engine when inserting a PDLC with a glass substrate, to compensate for the reduced equivalent air thickness introduced by the substrate. For comparison, we selected two commercial ML-PDLCs with different specifications as the NA expander, named MP1 (from Kent Optronics) and MP2 (from LightSpace), respectively. Table 2 lists the specifications of the two ML-PDLCs.

Table 2. Optical properties and specifications of ML-PDLCs.

To compare the optical performance of MP1 and MP2, only two adjacent layers of MP2 were used, while the third layer was kept in the transparent state. During the experiments, the back layer of MP1 or MP2, i.e., the layer farther from the viewer, was placed at the CDP for diffusing distant objects, while the front layer (the layer closer to the viewer) was placed at the depth of the near scenes. Figure 8 shows the results of the InI-engine with MP1 as the NA expander. To compare the image performance and focus cues when the diffusing plane is at different depths, we captured images with the back layer in the diffusing state and the front layer in the transparent state (FtBd in Fig. 8(a)), and vice versa (FdBt in Fig. 8(b)). The targets rendered at 0.01 and 0.6 diopters were closer to the back PDLC layer, while the target at the 3-diopter depth was closer to the front PDLC layer in dioptric space. The camera focus was also changed in accordance with the depths of the three targets. The targets look much sharper when their depths approximately match the depths of the corresponding diffusing screen and camera focus. Due to the low transmittance and large diffusing angle of MP1, the captured images are much dimmer compared with the image without the ML-PDLC (the first image in Fig. 7(b)), but all of the elemental views are coupled out and the field uniformity of the captured images is much improved. In addition, the lateral displacements between elemental images when the camera focus moves away from the reconstructed image plane are less visible, and the defocus blur appears much more natural than in a conventional InI-engine. The results show that correct depth cues were rendered when the diffusing screen was placed near the rendering depth.

Fig. 8. Images captured when three targets at 0.01 D, 0.6 D, and 3 D were rendered by the InI-engine with MP1 as the NA expander. (a) The back PDLC layer was in the diffusing state and the front PDLC layer was in the transparent state, and (b) the back PDLC layer was in the transparent state and the front PDLC layer was in the diffusing state. The camera focus was changed corresponding to the reconstruction depths of the three targets, and the exposure was maximized.

Figure 9 shows the results of the InI-engine with MP2 as the NA expander, when the back layer (Fig. 9(a)) or the front layer (Fig. 9(b)) was switched into the diffusing state, respectively. The overall image coupling efficiency is much higher compared with the results shown in Fig. 8 (the camera exposure time in Fig. 8 is more than ten times longer than that used in Fig. 9), because of the higher transmittance and narrower diffusing angle of MP2, as shown in Table 2. However, the field uniformity is not as good as in Fig. 8, due to the narrower diffusing angle. In addition, comparing the results in Figs. 9(a) and 9(b), the depth cues are less sensitive to the location of the diffusing screen and rely more on the rendering depth of the InI-engine. It is worth noting that the image depth in visual space may change due to the different OPLs of rays coupling through different field angles, which may induce depth errors in visual space [29]. These depth errors can be pre-calibrated and corrected by measuring the lateral shifts of the elemental views and compensating for the shifts digitally by moving the rendered contents laterally.
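The digital compensation mentioned above amounts to translating each elemental image opposite to its calibrated lateral shift before display. A minimal sketch is given below; the per-view shift table is a hypothetical calibration result, and `np.roll` stands in for whatever resampling the actual rendering pipeline uses.

```python
import numpy as np

# Sketch of the digital depth-error correction: once the lateral shift of
# each elemental view has been calibrated (in pixels per view), the rendered
# EI contents are translated in the opposite direction before display.
def compensate(elemental_images, calibrated_shifts_px):
    """elemental_images: dict view_index -> 2D array (one EI).
    calibrated_shifts_px: dict view_index -> (dy, dx) measured shift."""
    corrected = {}
    for view, ei in elemental_images.items():
        dy, dx = calibrated_shifts_px.get(view, (0, 0))
        # Translate the content opposite to the measured displacement.
        corrected[view] = np.roll(ei, shift=(-dy, -dx), axis=(0, 1))
    return corrected

# Hypothetical calibration: view (-1, 0) shifted 2 px horizontally.
eis = {(-1, 0): np.zeros((100, 100)), (0, 0): np.zeros((100, 100))}
shifts = {(-1, 0): (0, 2), (0, 0): (0, 0)}
corrected = compensate(eis, shifts)
```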

Fig. 9. Images captured when three targets at 0.01 D, 0.6 D, and 3 D were rendered by the InI-engine with MP2 as the NA expander. (a) The back PDLC layer was in the diffusing state and the front PDLC layer was in the transparent state, and (b) the back PDLC layer was in the transparent state and the front PDLC layer was in the diffusing state. The camera focus was changed corresponding to the reconstruction depths of the three targets. The camera exposure was less than that in Fig. 8.

In summary, the optical performance of an InI-LG system depends on the optical properties of the NA expander. When adapting an InI-engine to a lightguide-based OST-HMD, the diffusing angle and the position of the NA expander are the two major factors determining the image quality and implementation of the InI-LG. The diffusing angle determines the footprint fill factor ${P_{fp}}$ of each EI, which affects the uniformity and efficiency of the out-coupled image. When the diffusing angle of the NA expander increases, the increased ${P_{fp}}$ gives the EIs a higher chance of being coupled out, but a larger portion of the ray bundles from the EIs is also lost through the lightguide. In this case, the out-coupled image becomes much more uniform at the cost of image coupling efficiency, as shown by the results in Fig. 5 and Fig. 7.

On the other hand, the diffusing angle also affects the positional sensitivity of the NA expander. When the diffusing angle is so large that the projected beam size on the reconstructed image plane is much larger than the original beam size of the elemental ray bundles, the image quality is more sensitive to the location of the PDLC because of the shallower depth of field. In this case, a stacked ML-PDLC should be adopted as the NA expander, because the position of the NA expansion, rather than the depth displacement between the reconstructed image plane and the CDP, becomes the dominant factor affecting the reconstructed angular resolution of the InI-engine. The location and layer separation of each PDLC then become dominant factors affecting the image quality and need to be arranged carefully based on the displayed depth and the desired image quality.

Although targets at discrete depths were used to demonstrate the focusing effects, it should be noted that a continuous sub-volume of a 3D scene with continuous depth is rendered by a single layer of PDLC diffuser, owing to the nature of light field rendering. For instance, in the example shown in Fig. 8 with MP1, which has a large diffusing angle, when the NA expander layer coincides with the CDP depth of 0.6 diopters, as shown in Fig. 8(a), the in-focus image of the targets at 0.01 diopters (the third column) is nearly as sharp as the in-focus image of the targets at 0.6 diopters (the second column). The in-focus image of the targets at 3 diopters (the first column), however, is excessively blurred due to the large depth separation of 2.4 diopters from the PDLC layer, and thus requires a second layer of PDLC placed near the 3-diopter depth, as shown in Fig. 8(b). With this particular PDLC sample, a layer placed at 0.6 diopters can render a continuous sub-volume of ±0.6 diopters. In contrast, when the PDLC has a narrower diffusing angle and high transmittance, one PDLC layer at a fixed position is enough to display a 3D scene over a wide depth range, since the depth of field is extended and the image quality is less sensitive to the PDLC position; at the same time, the image uniformity worsens due to the narrower diffusing angle. As a comparison, Fig. 9 shows the results of a different PDLC sample with a much smaller diffusing angle. Even a single PDLC layer placed at the depth of 0.6 diopters can produce a good-quality, in-focus image for targets located at the 3-diopter depth, as shown by the first column of Fig. 9(a), while the image uniformity is generally not as good as that of Fig. 8.
Overall, the depth of field afforded by a single PDLC layer depends on the diffusing properties of the PDLC. Consequently, the layer separation and the number of PDLC layers required for reconstructing a large continuous volume vary. Two PDLC layers, with a diffusing angle in the reasonable range suggested in Section 4, should be adequate for rendering a continuous 3D volume of about 3 diopters. As such, a switching speed of 120 Hz should enable alternating between the two layers at an overall refresh rate of 60 Hz per layer, above the critical flicker frequency of the human visual system; a speed above 120 Hz would be desirable to further minimize flickering artifacts. The switching speed requirement naturally increases if more layers are desired.
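The refresh budget above is simple to check: with the PDLC layers time-multiplexed, the per-layer refresh rate is the switching rate divided by the number of layers, as the sketch below shows (60 Hz is used as the nominal critical flicker frequency).

```python
# Per-layer refresh rate for a time-multiplexed ML-PDLC stack.
CFF_HZ = 60.0                        # nominal critical flicker frequency

def per_layer_rate(switch_rate_hz, n_layers):
    return switch_rate_hz / n_layers

for layers in (2, 3):
    rate = per_layer_rate(120.0, layers)
    flag = "ok" if rate >= CFF_HZ else "needs faster switching"
    print(f"{layers} layers at 120 Hz -> {rate:.0f} Hz per layer ({flag})")
```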

6. Conclusion

In conclusion, a lightguide-based OST-HMD with the capability of rendering a 3D light field by an InI-engine was presented for the first time. With an MMA lightguide as the optical combiner, the system takes advantage of a small form factor with an eyeglass-like appearance; with an InI-engine, the system offers a true light field 3D image, which alleviates the vergence-accommodation discrepancy and the visual discomfort of conventional lightguide-based systems. Engineered diffusers and ML-PDLCs are effective NA expansion components that increase the viewing density of the InI-engine and improve the image coupling uniformity of the MMA lightguide. Future work includes quantifying the rendered image depth error versus the number of views coupled into the eye, and improving the overall image efficiency by customizing the ML-PDLC diffusing angle and layer separation or by increasing the brightness of the microdisplay panel.

Disclosures

Dr. Hong Hua has a disclosed financial interest in Magic Leap Inc. The terms of this arrangement have been properly disclosed to The University of Arizona and reviewed by the Institutional Review Committee in accordance with its conflict of interest policies.

References

1. H. Hua, “Enabling focus cues in head-mounted displays,” Proc. IEEE 105(5), 805–824 (2017). [CrossRef]  

2. G. E. Favalora, “Volumetric 3D displays and application infrastructure,” Comput. 38(8), 37–44 (2005). [CrossRef]  

3. R. Hirayama, D. M. Plasencia, N. Masuda, and S. Subramanian, “A volumetric display for visual, tactile and audio presentation using acoustic trapping,” Nature 575(7782), 320–323 (2019). [CrossRef]  

4. H. Yu, K. Lee, J. Park, and Y. Park, “Ultrahigh-definition dynamic 3D holographic display by active control of volume speckle fields,” Nat. Photonics 11(3), 186–192 (2017). [CrossRef]  

5. G. Li, D. Lee, Y. Jeong, J. Cho, and B. Lee, “Holographic display for see-through augmented reality using mirror-lens holographic optical element,” Opt. Lett. 41(11), 2486–2489 (2016). [CrossRef]  

6. S. Tay, P. A. Blanche, R. Voorakaranam, A. V. Tunç, W. Lin, S. Rokutanda, T. Gu, D. Flores, P. Wang, G. Li, and P. St Hilaire, “An updatable holographic three-dimensional display,” Nature 451(7179), 694–698 (2008). [CrossRef]  

7. S. Liu and H. Hua, “A systematic method for designing depth-fused multi-focal plane three-dimensional displays,” Opt. Express 18(11), 11562–11573 (2010). [CrossRef]  

8. J. Rolland, M. Krueger, and A. Goon, “Multifocal planes head-mounted displays,” Appl. Opt. 39(19), 3209–3215 (2000). [CrossRef]  

9. J. S. Lee, Y. K. Kim, M. Y. Lee, and Y. H. Won, “Enhanced see-through near-eye display using time-division multiplexing of a Maxwellian-view and holographic display,” Opt. Express 27(2), 689–701 (2019). [CrossRef]  

10. S. B. Kim and J. H. Park, “Optical see-through Maxwellian near-to-eye display with an enlarged eyebox,” Opt. Lett. 43(4), 767–770 (2018). [CrossRef]  

11. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014). [CrossRef]  

12. G. Wetzstein, D. Lanman, M. Hirsch, and R. Raskar, “Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting,” ACM Trans. Graph. 31(4), 1–11 (2012). [CrossRef]  

13. D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 1–10 (2013). [CrossRef]  

14. H. Huang and H. Hua, “High-performance integral-imaging-based light field augmented reality display using freeform optics,” Opt. Express 26(13), 17578–17590 (2018). [CrossRef]  

15. H. Huang and H. Hua, “Generalized methods and strategies for modeling and optimizing the optics of 3D head-mounted light field displays,” Opt. Express 27(18), 25154–25171 (2019). [CrossRef]  

16. B. C. Kress, Optical Architectures for Augmented-, Virtual-, and Mixed-Reality Headsets (SPIE, 2020).

17. B. C. Kress and W. J. Cummings, “11-1: Invited Paper: Towards the Ultimate Mixed Reality Experience: HoloLens Display Architecture Choices,” SID Symp. Digest Tech. Pap. 48(1), 127–131 (2017). [CrossRef]  

18. T. Levola and P. Laakkonen, “Replicated slanted gratings with a high refractive index material for in and outcoupling of light,” Opt. Express 15(5), 2067–2074 (2007). [CrossRef]  

19. P. Äyräs, P. Saarikko, and T. Levola, “Exit pupil expander with a large field of view based on diffractive optics,” J. Soc. Inf. Disp. 17(8), 659–664 (2009). [CrossRef]  

20. J. D. Waldern, A. J. Grant, and M. M. Popovich, “17-4: DigiLens AR HUD Waveguide Technology,” SID Symp. Digest Tech. Pap. 49(1), 204–207 (2018). [CrossRef]  

21. J. Xiao, J. Liu, Z. Lv, X. Shi, and J. Han, “On-axis near-eye display system based on directional scattering holographic waveguide and curved goggle,” Opt. Express 27(2), 1683–1692 (2019). [CrossRef]  

22. C. Yoo, K. Bang, C. Jang, D. Kim, C. K. Lee, G. Sung, and B. Lee, “Dual-focal waveguide see-through near-eye display with polarization-dependent lenses,” Opt. Lett. 44(8), 1920–1923 (2019). [CrossRef]  

23. G. Y. Lee, J. Y. Hong, S. Hwang, S. Moon, H. Kang, S. Jeon, H. Kim, J. H. Jeong, and B. Lee, “Metasurface eyepiece for augmented reality,” Nat. Commun. 9(1), 1–10 (2018). [CrossRef]  

24. G. Quaranta, G. Basset, O. J. Martin, and B. Gallinet, “Recent advances in resonant waveguide gratings,” Laser Photonics Rev. 12(9), 1800017 (2018). [CrossRef]  

25. A. Wilson and H. Hua, “Design and demonstration of a vari-focal optical see-through head-mounted display using freeform Alvarez lenses,” Opt. Express 27(11), 15627–15637 (2019). [CrossRef]  

26. Y. Amitai, “P-21: Extremely Compact High-Performance HMDs Based on Substrate-Guided Optical Element,” SID Symp. Digest Tech. Pap. 35(1), 310–313 (2004). [CrossRef]  

27. Y. Amitai, “P-27: A Two-Dimensional Aperture Expander for Ultra-Compact, High-Performance Head-Worn Displays,” SID Symp. Digest Tech. Pap. 36(1), 360–363 (2005). [CrossRef]  

28. D. Cheng, Y. Wang, C. Xu, W. Song, and G. Jin, “Design of an ultra-thin near-eye display with geometrical waveguide and freeform optics,” Opt. Express 22(17), 20705–20719 (2014). [CrossRef]  

29. M. Xu and H. Hua, “Finite-depth and varifocal head-mounted display based on geometrical lightguide,” Opt. Express 28(8), 12121–12137 (2020). [CrossRef]  

30. M. Xu and H. Hua, “Methods of optimizing and evaluating geometrical lightguides with microstructure mirrors for augmented reality displays,” Opt. Express 27(4), 5523–5543 (2019). [CrossRef]  

31. M. Xu and H. Hua, “Ultrathin optical combiner with microstructure mirrors in augmented reality,” Proc. SPIE 10676, 1067614 (2018). [CrossRef]  

32. M. Xu and H. Hua, “Effects of image focal depth in geometrical lightguide head mounted displays,” Proc. SPIE 11310, 113100N (2020). [CrossRef]  

33. K. Sarayeddline, K. Mirza, P. Benoit, and X. Hugel, “Monolithic light guide optics enabling new user experience for see-through AR glasses,” Proc. SPIE 9202, 92020E (2014). [CrossRef]  

34. H. Huang and H. Hua, “An integral-imaging-based head-mounted light field display using a tunable lens and aperture array,” J. Soc. Inf. Disp. 25(3), 200–207 (2017). [CrossRef]  

35. H. Huang and H. Hua, “Systematic characterization and optimization of 3D light field displays,” Opt. Express 25(16), 18508–18525 (2017). [CrossRef]  

36. H. Huang and H. Hua, “Effects of ray position sampling on the visual responses of 3D light field displays,” Opt. Express 27(7), 9343–9360 (2019). [CrossRef]  
