Optica Publishing Group

Framework for optimizing AR waveguide in-coupler architectures

Open Access

Abstract

Waveguide displays have been shown to exhibit multiple interactions of light at the in-coupler diffractive surface, leading to light loss. Any losses at the in-coupler set a fundamental upper limit on the full-system efficiency. Furthermore, these losses vary spatially across the beam for each field, significantly decreasing the displayed image quality. We present a framework for alleviating the losses based on irradiance, efficiency, and modulation transfer function (MTF) maps. We then derive and quantify the innate tradeoff between the in-coupling efficiency and the achievable MTF characterizing image quality. Applying the framework, we present a new in-coupler architecture that mitigates the efficiency vs. image quality tradeoff. For the example architecture, we demonstrate a computation speed that is 2,000 times faster than that of a commercial non-sequential ray tracer, enabling faster optimization and more thorough exploration of the parameter space. Results show that with this architecture, the in-coupling efficiency still meets the fundamental limit, while the MTF achieves the diffraction limit up to and including 30 cycles/deg, equivalent to 20/20 vision.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Augmented reality (AR) displays are an emerging technology that overlays computer-generated information onto a user's field of view (FOV). The earliest head-worn AR display was implemented in 1968 with Ivan Sutherland's “Sword of Damocles” [1]. In the intervening decades, leaps and bounds have been made in technology to make AR displays smaller, brighter, and more immersive [2–4]. They are applied in diverse fields, ranging from education to entertainment, engineering to medicine [4–7].

A key component of head-worn AR displays is the optical combiner, which combines light from a display engine and the real world so the user sees both together [2–4,8–10]. A myriad of approaches have been proposed, including freeform mirrors and prisms [11–15], birdbath architectures [3,16], retinal scanning displays [17–19], and waveguide combiners [8,20–36]. A waveguide combiner is currently a common approach for AR devices because it is typically a thin plate in front of the eyes, approaching an eyeglass form factor while maintaining the extensive eyebox that is crucial for user comfort, as illustrated in Fig. 1(a).

Fig. 1. Impact of multi-zone in-coupler on irradiance uniformity. (a) Illustration of a waveguide AR display showing beam expansion. The in-coupler is in the area with the dashed frame around the diffractive surface, and the red cone indicates an incident beam from a point on an extended light source shown as a dark grey rectangle. (b & c) Expanded views of the highlighted region in (a). The incident beam irradiance map is the red region to the left, and the in-coupled beam irradiance map is the blue region to the right. Rays inside and outside the waveguide are shown with dashed and solid lines, respectively. Ray thickness indicates relative power. (b) The in-coupler is a single 3 mm wide diffractive surface with 100% diffraction efficiency. The green arrow shows the interaction separation, ${s_x}$. (c) The in-coupler is 3 mm wide and consists of three optimized diffractive surfaces, as indicated by the three zones separated by the vertical black lines. For this example architecture, the optimal diffraction efficiencies are 95%, 54%, and 27%, and the widths of each zone are 1.06 mm, 1.01 mm, and 0.93 mm.


Waveguide combiners rely on an in-coupler to couple light into the waveguide through total internal reflection (TIR). The TIR property guides the light until it reaches the expansion region, where some light is redirected toward the out-coupler each time light interacts with it, while the remainder continues to propagate. These interactions replicate each beam and expand the eyebox along one dimension. The redirected light then begins interacting with the out-coupler, which expands the light along the orthogonal direction and extracts it toward the user [8,21–27]. Alternatively, the expansion and out-coupling can be done in a single region using crossed gratings [31–33]. The replication of rays expands the eyebox without reducing the full FOV, ultimately increasing the system's etendue [8,20–24,28–32] at the expense of reducing the displayed brightness.

Waveguide combiners can be coarsely divided into two types: geometric or diffractive, where geometric and diffractive waveguides use reflection or diffraction, respectively, to redirect and replicate light [36]. Geometric waveguides rely on embedded mirrors or prisms [2,20–22,36–39] to redirect light, while diffractive waveguides typically use surface relief, holographic, or metasurface gratings to the same end. Some systems combine both approaches [23,38]. This paper focuses on diffractive waveguides that tend to be thinner and, thus, even more limited by multiple interactions with the in-coupler than geometric waveguides. However, the methods and findings apply broadly to all waveguide in-couplers.

Waveguide displays also face several challenges that impede the realization of optimal image quality and user experience. These challenges include limited FOV, eyebox uniformity, image plane brightness, and image sharpness. Some of these are fundamental to waveguides, such as how the maximum FOV is limited by the refractive index of the waveguide, which in turn affects the critical angle of TIR [21,23,35]. Additionally, because waveguide combiners (especially diffractive waveguides) are often very thin (<1 mm thick), multiple interactions with the in-coupler limit system efficiency [40]. This in-coupling efficiency limit bottlenecks the system's brightness since any light lost at the in-coupler cannot be regained.

This work asks how the waveguide in-coupler affects the displayed image quality. In general, the image quality can be affected by both the amplitude and the phase transfer function of the in-coupler [41]. An ideal amplitude transfer function for the in-coupler is unity. However, in-coupling losses (demonstrated in [39]) vary spatially over the incident beam and thus result in a spatially varying amplitude transfer function. These spatially varying losses affect the in-coupled beam's irradiance distribution, as shown by the beam irradiance maps in Figs. 1(b) and (c). In Fig. 1(b), the blue area of the original incident beam is lost entirely, equivalent to cutting off part of the exit pupil from the display engine, which lowers the diffraction-limited MTF cut-off frequency. Additionally, low uniformity over the in-coupled irradiance map will lower the MTF diffraction limit curve, limiting the contrast of mid- and higher-resolution details. The display engine may deliver diffraction-limited information to the waveguide, but information loss at the in-coupler will blur the image. Thus, maintaining uniformity (i.e., near constant amplitude transfer function) over the entire in-coupled beam for each field angle prevents the waveguide from lowering the displayed image sharpness via MTF loss.

Phase imparted by the in-coupler also impacts image quality. An ideal phase transfer function is a linear phase ramp that guarantees a planar wavefront for the in-coupled beam as it propagates and replicates within the waveguide. If the in-coupler imparts anything but linear phase to the beam, the overlapping out-coupled wavefronts will no longer match, blurring the viewed image. Here, we assume an ideal linear phase for the in-coupler surface and focus on the image quality degradation due to the in-coupling losses that reduce the amplitude transfer function. Other factors like dispersion, ghost images, scattered light from diffractive surfaces, and manufacturing and misalignment errors of the waveguide and their impacts on image quality have been discussed in previous work [10,20,21]. However, to the best of our knowledge, this is the first time the diffraction limit imposed by the waveguide in-coupler has been investigated.

Therefore, we developed a framework to design a waveguide in-coupler that accounts for efficiency and MTF losses by deriving the limit for each waveguide geometry. The framework enables mitigation of the tradeoff between image quality and efficiency by splitting the in-coupler into multiple zones, resulting in a more uniform irradiance over the in-coupled beam area for all field points. We demonstrate the framework and the improvements in irradiance uniformity using an example geometry as shown in Fig. 1. One previous paper implemented a waveguide where one portion of the in-coupler had a thin film under it to increase in-coupling efficiency for some of the FOV. However, this was done for a system where the fields were not converging at the in-coupler and did not focus on image sharpness limitations from the in-coupler [42]. The current methodology assumes a general display engine with an exit pupil coplanar with the in-coupler, as Appendix A shows. Having the exit pupil be coplanar with the in-coupler ensures that the irradiance distribution across the in-coupler is the same for each FOV angle.

This paper describes a framework for understanding the causes of current limitations of waveguide in-couplers and using this understanding to overcome them. Section 2 briefly reviews prior work that derived the limitation on in-coupling efficiency posed by multiple interactions with the in-coupler. We then add to this understanding by calculating irradiance maps, which can be used to calculate a limit on in-coupled image quality. Investigating both in-coupling efficiency and in-coupled image quality reveals a fundamental tradeoff between in-coupling efficiency and image sharpness, assuming a single-zone in-coupler, due to losses from multiple interactions. Section 3 breaks this assumption and splits the in-coupler into multiple zones. Allowing multiple zones opens an exploration of how varying the width and diffraction efficiency of each zone can mitigate the efficiency vs image quality tradeoff. We use these results to determine the maximum number of zones required to achieve the theoretical performance limit. Finally, we validate our framework by comparing a single-zone in-coupler with 100% diffraction efficiency to an optimized three-zone in-coupler, illustrating that the in-coupling efficiency limit posed by multiple interactions can be met without compromising image sharpness. The framework applies generally to all waveguide geometries with multiple interactions (i.e., thin waveguides), paving the way for designing a broad range of devices coupling light with reflective or diffractive surfaces of all types.

2. Framework for quantifying the system limitations and tradeoffs

Multiple interactions with the in-coupler cause losses, as indicated by the loss of light in the in-coupled pupils in Fig. 1(b) and (c). As demonstrated in [40], if light that has already been in-coupled interacts with the in-coupler again, the sum of the diffraction efficiencies for the already in-coupled light and the newly incident light cannot exceed 100%. This bound follows from the principle of reversibility and energy conservation. Thus, a diffractive surface designed with 100% diffraction efficiency for the first interaction will be 0% efficient for any light that interacts with it again after diffracting once. In this paper, both diffraction efficiency and ${\eta _1}$ refer to the first-interaction efficiency, and the second-interaction efficiency is assumed to be 100% minus ${\eta _1}$.

An in-coupler with ${\eta _1} = $100% is shown in Fig. 1(b), where all light from the left-hand side of the beam irradiance map (the blue area) is lost because it interacted with the in-coupler more than once. Furthermore, the in-coupler with ${\eta _1} = $100% sets a limit on in-coupling efficiency, as demonstrated in [40]. We calculate the in-coupling efficiency for a given field by dividing the sum of the in-coupled irradiance map by the sum of the incident beam irradiance map. This in-coupling efficiency calculation can be done for each field to yield an in-coupling efficiency limit map over the full FOV, as shown in Fig. 2(a), given a basic set of parameters for the waveguide combiner.

Fig. 2. Impact of waveguide geometry on in-coupling efficiency limit: (a) In-coupling efficiency limit over the full FOV for the waveguide geometry in this paper. The red circle indicates the minimum field efficiency (MFE). (b) Impact of waveguide thickness on MFE.


In this paper, the example waveguide substrate is N-BK7 (n = 1.52 @ 532 nm) glass, which is common for affordable optical components. The waveguide thickness is 0.5 mm, also common for N-BK7 wafers. The in-coupler is 3 mm by 3 mm, close to the human vision pupil size in daylight conditions. The light source is monochromatic at 532 nm and spatially incoherent for assessing image sharpness. Polarization is not considered for this analysis, but it would be crucial for designing real diffractive surfaces. The full FOV is 20° x 20°. The grating period of the coupler is 453 nm, calculated such that the minimum diffracted angle within the waveguide over the FOV is just greater than the critical angle for TIR.

In a full system with a display, the brightest points in the FOV can be digitally dimmed to achieve uniform brightness over the full FOV. However, darker points can only be brightened up to the display's maximum output. Consequently, the uniform brightness of the complete display is constrained by the FOV point with the lowest in-coupling efficiency. This value is called the minimum field efficiency (MFE). For this waveguide, the MFE occurs at the (−10,0) field and has a value of 29%. As evidenced by Fig. 2(b), this limit is specific to the geometry of the waveguide, especially the thickness, which is directly proportional to the separation between interactions with the in-coupler [labeled ${s_x}$ in Fig. 1(b)]. Other factors like refractive index and grating period also affect ${s_x}$, but thickness has the most direct impact. The method for calculating the MFE limit applies to any waveguide geometry, although the limit is trivial for thicker waveguides without a second interaction.
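The geometric dependence of ${s_x}$ on thickness can be sketched with the first-order grating equation. The following Python snippet is a sketch using the parameters quoted above (n = 1.52, 453 nm period, 532 nm source); the function name is ours, and the paper's own implementation is in MATLAB.

```python
import math

def interaction_separation(t_mm, theta_in_deg, wl_nm=532.0,
                           period_nm=453.0, n=1.52):
    """Separation s_x between successive in-coupler interactions.

    First-order grating equation (air -> waveguide):
        n * sin(theta_d) = sin(theta_in) + wl / period
    A guided ray advances 2 * t * tan(theta_d) per TIR round trip.
    """
    sin_td = (math.sin(math.radians(theta_in_deg)) + wl_nm / period_nm) / n
    return 2.0 * t_mm * math.tan(math.asin(sin_td))

theta_c = math.degrees(math.asin(1.0 / 1.52))  # TIR critical angle, ~41.1 deg
s_x = interaction_separation(0.5, -10.0)       # worst-case (-10, 0) field
```

For the (−10,0) field, the diffracted angle (about 41.2°) sits just above the critical angle, giving ${s_x}$ of roughly 0.87 mm for the 0.5 mm substrate, consistent with the 0.875 mm strip width quoted in Section 2.2; doubling the thickness doubles ${s_x}$, which is why thicker waveguides escape the multiple-interaction regime.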

2.1 Calculating irradiance maps for each field point

Prior work derived a formula for calculating the MFE limit assuming an in-coupler with ${\eta _1} = $100% [40]. However, that formula cannot quantify the actual in-coupling efficiency if the in-coupler is less than 100% efficient because the in-coupled irradiance varies over the beam. Additionally, it does not calculate the map of irradiance distribution for the in-coupled beam. We can calculate both the in-coupling efficiency and the MTF limit by mapping the in-coupled irradiance.

To calculate an in-coupled irradiance map for a point in the FOV, we developed an iterative method, illustrated in Fig. 3. The first step is to compute the beam irradiance incident on the in-coupler [Fig. 3(a)] and map ${\eta _1}$ over the in-coupler for the first interaction [Fig. 3(b)]. The beam irradiance map and in-coupler efficiency map are multiplied to get the in-coupled irradiance map after the first interaction [Fig. 3(c)]. For successive interactions, the beam is shifted by ${s_x}$ each time [Fig. 3(d)], and the secondary interaction efficiency is 100% minus ${\eta _1}$ [Fig. 3(e)]. As long as the beam and in-coupler overlap [Fig. 3(f)], the beam irradiance map and the secondary in-coupler efficiency map are multiplied together [Fig. 3(g)] and the beam is shifted again, repeating until the beam and in-coupler no longer overlap. The in-coupled irradiance map is complete once the beam passes the in-coupler, as shown in Fig. 3(h).

Fig. 3. Flowchart showing the method for calculating the in-coupled irradiance map for a given field angle. (a) The incident beam irradiance map. The blue area is zero-padding to allow the beam to shift. (b) The in-coupler map for the first interaction. The teal area is the diffractive surface, and the reddish-brown area is one-padding. (c) Product of (a) and (b). (d) Shift (c) by ${s_x}$. (e) The in-coupler map for secondary interactions. The yellow-green area is the diffractive surface region, and the reddish-brown area is one-padding. (f) Checkpoint to see if the diffractive surface and the beam still overlap. The tan arrow indicates the loop to continue, and the green arrow indicates breaking the loop. (g) Product of (d) and (e). The empty yellow squares pointed to by the yellow arrows in (d) and (g) outline the location of the in-coupler relative to the beam. (h) Final in-coupled beam irradiance map. The area in the white-dashed outline can be re-centered and zoomed in to isolate just the extent of the beam.


The first interaction can, therefore, be represented mathematically as

$$E_1(x,y) = E_0(x,y) \cdot \eta_1(x,y)$$
where $E_1$ is the irradiance map after the first interaction, $E_0(x,y)$ is the irradiance map of the beam incident on the in-coupler, and $\eta_1(x,y)$ is the diffraction efficiency map of the in-coupler. Successive interactions are then represented as
$$E_{n+1}(x,y) = E_n(x + s_x, y + s_y) \cdot [1 - \eta_1(x,y)]$$
where $E_{n+1}$ is the beam irradiance map after interaction $n+1$ and $E_n(x + s_x, y + s_y)$ is the beam irradiance map after interaction $n$ shifted by the interaction separations $s_x$ and $s_y$ along the x- and y-directions, respectively. The maximum number of interactions, $N$, is calculated by $N = \textrm{ceil}(w / s_x)$, where $w$ is the width of the in-coupler and $\textrm{ceil}()$ rounds up to the nearest integer.
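A minimal NumPy sketch of this iteration follows (the naming is ours and the paper's tool is in MATLAB; the beam and in-coupler are assumed co-located and equal in extent, with shifting along x only, as in Fig. 3).

```python
import numpy as np

def incoupled_irradiance(E0, eta1, shift_px):
    """Iterative in-coupled irradiance map per Eqs. (1)-(2) and Fig. 3.

    E0       : 2-D incident-beam irradiance map (beam and in-coupler are
               assumed co-located and equal in extent)
    eta1     : 2-D first-interaction diffraction-efficiency map
    shift_px : interaction separation s_x in pixels (x-direction only)
    """
    h, w = E0.shape
    n_max = int(np.ceil(w / shift_px))            # N = ceil(w / s_x)
    pad = n_max * shift_px                        # zero-padding for shifts
    E = np.zeros((h, w + pad))
    E[:, :w] = E0 * eta1                          # Eq. (1): first interaction
    keep = np.ones((h, w + pad))                  # one-padding off-coupler
    keep[:, :w] = 1.0 - eta1                      # secondary efficiency
    for _ in range(1, n_max):                     # loop of Fig. 3(d)-(g)
        E = np.roll(E, shift_px, axis=1) * keep   # Eq. (2): shift, attenuate
    return E
```

With a 120-pixel-wide (3 mm) in-coupler, ${\eta _1}$ = 100%, and a 35-pixel (≈0.875 mm) shift, only the leading ${s_x}$-wide strip survives, and the summed efficiency is 35/120 ≈ 29%, reproducing the MFE limit discussed above.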

We implemented this method in MATLAB. A comparison validating this calculation against a LightTools model can be found in Appendix A. Computing an irradiance map like the one shown in Fig. 3(h), with 120 × 120 points across the beam, takes about 0.02 seconds on average with our tool. Computing a similar map with non-sequential ray tracing on the same computer using 5 million rays takes 53 seconds, which shows that our method is more than 2000 times faster for this task. While this example shows a square in-coupler and beam shifting only along the x-direction, our method works for arbitrary in-coupler and beam shapes and shifting in any direction in the x-y plane.

2.2 Tradeoff between in-coupling efficiency and image sharpness

Once the in-coupled beam irradiance map is calculated, we can calculate both in-coupling efficiency and the modulation transfer function for the given field. Figures 4(a) & (c) show the beam irradiance maps when the initial in-coupler ${\eta _1}$ value is 100% or 40%, respectively. In-coupling efficiency is calculated by integrating the incident and in-coupled irradiance maps to get incident and in-coupled power and then dividing the in-coupled power by the incident power.

Fig. 4. Impact of ${\eta _1}$ on beam irradiance map, in-coupling efficiency (i.e., MFE), and MTF: (a) Beam irradiance map after in-coupling if ${\eta _1}$ is 100%. The overlayed white values in (a) & (c) are the MFE values. (b) The X (solid orange) and Y (dotted yellow) slices of the MTF for the beam irradiance map in (a). The black dashed lines in (b) & (d) correspond to 30 cy/deg (i.e., 20/20 vision). The blue curve shows the diffraction limit for the original incident beam irradiance map from Fig. 1. (c) Beam irradiance map after in-coupling if ${\eta _1}$ is 40%. (d) The MTF curves along the X and Y directions of the beam irradiance map in (c). (e) MFE (blue) vs. MTF-X at 30 cy/deg (red) curves as a function of ${\eta _1}$ when the in-coupler is a single uniform diffractive surface. The dashed lines show the respective limits for each.


The beam irradiance map is used as the pupil amplitude function for calculating a diffraction-limited modulation transfer function (MTF) [41]. In Fig. 4(a), the irradiance map along x is narrow (0.875 mm) compared to the incident beam (3 mm). Thus, the MTF-X drops to 0.0 at 30 cy/deg, compared with 0.71 for the original diffraction limit curve, as shown in Fig. 4(b). MTF-X is a slice along the x-axis of the full 2D MTF calculated from the pupil amplitude function (i.e., the beam irradiance map). In Fig. 4(c), more areas of the beam are successfully coupled into the waveguide, but each area couples less efficiently. However, the in-coupled beam is larger and more uniform than in Fig. 4(a), so the MTF-X is higher, as seen in Fig. 4(d).
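The MTF calculation can be sketched using the fact that the incoherent diffraction-limited OTF is the normalized autocorrelation of the pupil function. This Python sketch (ours, not the paper's code) computes it via two FFTs and omits the mapping of the frequency axis to cycles/degree, which depends on the wavelength and angular geometry:

```python
import numpy as np

def mtf_from_pupil(pupil_amp, pad_factor=4):
    """Diffraction-limited MTF from a pupil amplitude map.

    The incoherent OTF is the normalized autocorrelation of the pupil
    function; here it is obtained via |FFT|^2 (the PSF) followed by a
    second FFT (Wiener-Khinchin), with zero-padding to avoid wrap-around.
    The result is unshifted: row 0 holds the x-frequency slice.
    """
    shape = tuple(pad_factor * s for s in pupil_amp.shape)
    psf = np.abs(np.fft.fft2(pupil_amp, s=shape)) ** 2   # incoherent PSF
    otf = np.abs(np.fft.fft2(psf))                       # pupil autocorrelation
    return otf / otf[0, 0]                               # normalize at DC
```

For a uniform square pupil, the x-slice is the familiar triangle function, falling to 0.5 at half the cutoff frequency; feeding in a truncated or non-uniform in-coupled irradiance map lowers the curve in the same way as Figs. 4(b) and (d).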

Assuming the in-coupler consists of a single diffractive surface, we can look at the MFE vs. MTF-X tradeoff by plotting the values for each on a single plot, as shown in Fig. 4(e). We look specifically at 30 cy/deg, corresponding to 20/20 vision [43]. These curves demonstrate the tradeoff between image brightness (MFE) and sharpness (MTF-X) when the in-coupler is uniform. As the MFE increases, MTF-X decreases. A new architecture is sought to overcome this fundamental tradeoff.
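The origin of the tradeoff can be illustrated with a toy strip model (ours, ignoring the partial last strip): the strip of the beam that sits k interaction separations behind the leading edge couples with efficiency ${\eta _1}$ and then survives k re-interactions, each passing a fraction $1 - {\eta _1}$.

```python
import numpy as np

def strip_weights(eta1, n_strips=4):
    """Power coupled from each s_x-wide strip of a uniform single-zone
    in-coupler: the strip k separations behind the leading edge
    re-interacts k times, so its surviving fraction is
    eta1 * (1 - eta1)**k."""
    k = np.arange(n_strips)
    return eta1 * (1.0 - eta1) ** k

w100 = strip_weights(1.0)  # all power in the leading strip only
w40 = strip_weights(0.4)   # power spread over all four strips
```

With ${\eta _1}$ = 100%, all coupled power sits in a single ${s_x}$-wide strip (maximal MFE, but a collapsed pupil and hence the MTF-X drop of Fig. 4(b)); with ${\eta _1}$ = 40%, power spreads over all four strips at the cost of total efficiency, mirroring the wider, dimmer pupil of Fig. 4(c).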

3. Mitigating the system tradeoffs using a multi-zone in-coupler

In the previous section, we considered an in-coupler with one diffractive surface covering the entire area, as seen in Fig. 5(a). However, the framework and methods described above can also evaluate the performance of multi-zone in-couplers, which can mitigate the inherent tradeoff between MFE and MTF-X, as shown in Figs. 5(d) and (f). To visualize the tradeoff for a uniform in-coupler [shown in Fig. 5(a)] as a single value, the product of the MFE and MTF-X values at the (−10,0) field is normalized by the maximum value, as shown in Fig. 5(b). The maximum value for the MFE is 29%, and the maximum value for the MTF-X is 0.71, as set by the original diffraction limit curve shown in Fig. 4(b) before in-coupling. The maximum value for the MFE-MTF-X product is therefore 0.29 × 0.71 ≈ 0.206. The best case for a single diffractive surface is ${\eta _1} = $ 40%, which achieves only about 65% of the maximum MFE-MTF metric.

Fig. 5. Splitting the single-zone in-coupler into multiple zones and the impact on the normalized MFE-MTF product: (a) Illustration of a single-zone in-coupler with a single diffraction efficiency. (b) Normalized MFE * MTF value as a function of ${\eta _1}$ when the in-coupler is a single zone. (c) Illustration of the in-coupler when it is split into two equal zones (indicated by the different colors). (d) Normalized MFE * MTF value when Zone 1 (y-axis) and Zone 2 (x-axis) diffraction efficiencies vary independently. (e) Illustration of the in-coupler when it is split into three equal zones (indicated by the different colors). (f) Normalized MFE * MTF value when Zone 2 (y-axis) and Zone 3 (x-axis) diffraction efficiencies vary independently. The diffraction efficiency of Zone 1 is fixed to be 100%.


If the in-coupler is split into multiple zones and ${\eta _1}$ in each zone is allowed to vary independently, as shown in Fig. 5(c) and (e), the MFE vs. MTF tradeoff can be mitigated, as shown by the higher MFE*MTF values in Fig. 5(d) and (f). In the two-zone case shown in Fig. 5(c), the diffraction efficiencies of Zone 1 and Zone 2 vary independently. The merit function map in Fig. 5(d) peaks when ${\eta _1}$ of Zone 1 is 70% and Zone 2 is 35%. The MFE is 27%, and the MTF-X value is 0.61. The normalized merit function is about 80% of the maximum metric, an improvement over the single-zone case.

Splitting the in-coupler into three zones improves the merit function even further. Keeping the first zone's efficiency at 100% and varying the efficiency of the other two, the merit function achieves 99% of the maximum metric, as shown in Fig. 5(f). The merit function peaks when ${\eta _1}$ = 60% in Zone 2 and ${\eta _1}$ = 30% in Zone 3. The MFE is 29%, and the MTF-X value is 0.70. With a multi-zone in-coupler, enough light couples into the waveguide to meet the MFE limit, and the irradiance map of the in-coupled beam can be tailored such that the MTF meets the original diffraction limit.

3.1 Optimizing the zones

To understand the impact of splitting the in-coupler into multiple zones more fully, we built an optimizer to maximize the merit function (i.e., MFE * MTF-X) using each zone's width and ${\eta _1}$ as variables. The widths are constrained so that they sum to the original in-coupler width. We ran the optimizer assuming a different number of zones each time, as shown in Fig. 6. An example MTF-X curve resulting from optimization [shown in Fig. 6(a)] shows that after splitting the in-coupler into three zones, the MTF-X curve for the (−10,0) field maintains the original diffraction-limited performance up to and even beyond 30 cy/deg. The data in Fig. 6(b) shows that it also achieves the MFE limit.

Fig. 6. Effect of the number of zones on the MFE-MTF product. (a) The optimized MTF function when the in-coupler is split into three zones. The arrow indicates the corresponding point in (b). (b) The optimized normalized merit function (black curve) maximizes the product of the MFE (red curve) and the MTF-X (blue curve) at 30 cy/deg. Each zone's width and diffraction efficiency were used as variables for optimization. (c) The effect of waveguide thickness (colored lines) on the number of zones needed to maximize the MFE*MTF merit function.


It is apparent from Fig. 6(b) that the merit function (black curve) does not increase significantly after three zones. This plateau in the merit function corresponds to the maximum number of times most rays would interact with the in-coupler. The MFE curve stabilizes at the in-coupling efficiency limit of 29%, and the MTF-X for the (−10,0) field achieves the original diffraction limit of 0.71 at 30 cy/deg. This plateau shows that splitting the in-coupler into three regions can mitigate the MFE-MTF tradeoff for this waveguide geometry. There is a slight increase in the MTF-X and the merit function after five zones. This is due to apodization-like effects increasing the MTF-X above the original diffraction limit at 30 cy/deg but reducing it at higher frequencies, as seen in Appendix B [41].

To assess how the waveguide's geometry generally impacts the optimal number of zones for improving in-coupling efficiency and MTF-X for the (−10,0) field at 30 cy/deg, another simulation progressively increased both the waveguide's thickness and the number of zones before each optimization cycle, as depicted in Fig. 6(c). As the thickness increases, so does the MFE limit, as shown in Fig. 2(b), so the merit function maximum also increases. The beginning of the plateau for each curve shows how many zones the in-coupler should be divided into to maximize performance. The merit function plateaus at only two zones for the 0.9 mm and 1 mm lines. The plateau for these two cases occurs because ${s_x}$ is greater than 1.5 mm so there are at most two interactions for any ray in the in-coupler. The merit function plateaus at three zones for the 0.5–0.8 mm lines. In each case, most rays undergo three interactions with the in-coupler at most. For the 0.1–0.4 mm lines, the merit function plateaus by four zones. In the 0.1 mm case, ${s_x} = 0.175\; \textrm{mm}$ which means some rays will undergo as many as eighteen interactions with the in-coupler. However, these plateaus show that the in-coupler only needs to be split into at most four zones to mitigate the MTF vs. MFE tradeoff, even when there are more than four interactions.

3.2 Impact on system performance

To demonstrate the impact of splitting the in-coupler on system performance, we compare the baseline 100% single zone case [Fig. 7(a)-(c)] against an optimized three-zone in-coupler [Fig. 7(d)-(f)].
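As a quick plausibility check of this comparison, the following 1-D column model (ours, a simplification of the full 2-D maps; zone values taken from Fig. 7(d), and ${s_x}$ ≈ 0.875 mm assumed for the worst-case field) estimates the in-coupling efficiency of both designs:

```python
import numpy as np

W, SX = 3.0, 0.875       # in-coupler width and interaction separation, mm
x = np.linspace(0.0, W, 3000, endpoint=False)   # 1 um column sampling

def field_efficiency(eta_of_x):
    """1-D column model of in-coupling efficiency for one field: each
    column couples eta(x) on its first interaction, then keeps (1 - eta)
    at every re-interaction while it remains over the in-coupler."""
    eff = np.asarray(eta_of_x(x), dtype=float).copy()
    k = 1
    while k * SX < W:
        xr = x + k * SX              # column position at k-th re-interaction
        inside = xr < W
        eff[inside] *= 1.0 - eta_of_x(xr[inside])
        k += 1
    return eff.mean()

single = field_efficiency(lambda xx: np.ones_like(xx))   # 100% single zone
# Optimized three-zone design of Fig. 7(d): 95/54/27% over 1.06/1.01/0.93 mm
zones = lambda xx: np.select([xx < 1.06, xx < 1.06 + 1.01], [0.95, 0.54], 0.27)
three = field_efficiency(zones)
```

In this coarse model, both designs come out near the 29% MFE limit, while the three-zone design spreads the coupled light far more uniformly across the beam, consistent with the MTF-X improvement shown in Fig. 7(f).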

Fig. 7. Improving MTF without sacrificing MFE by splitting the in-coupler into multiple zones: (a) Illustration of a single zone with 100% diffraction efficiency everywhere. (b & e) In-coupling efficiency maps over the FOV in cases (a) & (d). (c & f) MTF-X maps over the FOV in cases (a) & (d). (d) Illustration of the in-coupler being split into three zones where the diffraction efficiencies are 95%, 54%, and 27% and are 1.06 mm, 1.01 mm, and 0.93 mm wide, respectively.


Note from the in-coupling efficiency maps in Fig. 7(b) and (e) that the MFE is 29% in both cases. The maximum in-coupling efficiency is lower in Fig. 7(e), but this is beneficial as it increases image uniformity. The methodology in Section 2.1 calculates irradiance maps for each field, which are then used to compute in-coupling efficiency and MTF. These data are summarized in an MTF-X map showing contrast at 30 cy/deg over the FOV, as shown in Fig. 7(c) and (f). The minimum MTF-X jumps from 0.0 in Fig. 7(c) to over 0.64 in (f). Note that the minimum MTF-X no longer occurs at the (−10,0) field but at the (10,10) and (10,−10) fields. However, since the MTF-X over the FOV is still near the original diffraction limit, the waveguide in-coupler will not significantly reduce the image quality provided by the display engine [43].

4. Discussion

Section 2 reviewed how the multiple interaction problem limits in-coupling efficiency and is the origin of the MFE limit. We demonstrated a framework for calculating in-coupled irradiance maps, which are used to calculate MTF limits. This framework has two main advantages when designing an in-coupler architecture: (1) the simultaneous evaluation of both the in-coupling efficiency and MTF maps (like those shown in Fig. 7) and (2) modeling speed. Efficiency maps over the FOV can be similarly produced in other software, but getting diffraction-based MTF effects is more difficult. Waveguide design typically occurs in non-sequential ray-tracers, which often do not track the effect of diffraction on image quality like we show in this work.

Furthermore, non-sequential ray tracing is time-consuming as each ray has to be traced individually. In our tool, we bypass the individual ray tracing by iteratively calculating the full irradiance map instead, as shown in Fig. 3. Our method is over 2000 times faster than non-sequential ray tracing at calculating the irradiance map and MTF for a single field for the same geometry. The MTF maps in Fig. 7 are calculated for 81 × 81 field points, which means the 0.02-second vs. 53-second difference in simulation time for a single field's irradiance map would magnify to two minutes (our method) vs. four days for the full field MTF map. Therefore, our framework enables a much faster in-coupler design and optimization cycle.

We used the developed methodology to show how a single diffractive surface at the in-coupler results in a tradeoff between MFE and MTF-X. The value of the MTF-X curve for the (−10,0) field (i.e., corresponding to the MFE) at 30 cy/deg was chosen as the metric for evaluating image quality since it corresponds to 20/20 vision. Additionally, the goal of attaining an MTF diffraction limit of 0.71 at 30 cy/deg was chosen based on the diffraction limit of the original incident beam. However, image quality metrics vary based on application. For example, diffraction-limited MTF values at 60 cy/deg (the limit of human foveal vision) may also be important for systems displaying extremely high-resolution images. Alternatively, the Strehl ratio could have been used. A different metric might instead raise the MTF uniformly across all frequencies rather than targeting a single frequency. Additionally, this optimization must be done at multiple field points to yield target diffraction-efficiency curves for designing a real grating.

While investigating the optimized performance shown in Fig. 6(c), the plateaus in the merit function showed that at most four zones were necessary to mitigate the MFE-MTF tradeoff. However, the merit function curves undergo small variations: for example, the 0.3 mm line plateaus at three zones, dips slightly at five zones, and rises again at six. Consistency may improve with a different image sharpness metric, or with a different method for combining the contributions of the in-coupling efficiency and image sharpness rather than multiplying them. Small differences are also expected from run-to-run variation in the optimization algorithm.

Regardless of how the merit function and optimization scheme are implemented, dividing the in-coupler into multiple zones is useful for the waveguide designer. The key concept remains that when multiple interactions are present, splitting the in-coupler into multiple zones allows the system to meet the MFE limit while maintaining higher image sharpness than a single-zone in-coupler. This kind of optimization is more quickly and easily done in our tool, but full-system evaluation of the final design should still be performed in a non-sequential ray tracer with real gratings. The understanding of multiple interactions, irradiance maps, and multiple zones applies broadly to systems that rely on TIR and diffractive surfaces to transmit information.

5. Conclusions

This paper presents a framework for designing waveguide in-couplers that meet the upper limits on both MFE and MTF. We first demonstrated an efficient method for calculating irradiance maps for each field after coupling into a thin AR waveguide and then used them to calculate in-coupling efficiency and MTF. To the best of our knowledge, this is the first time an upper limit on image sharpness through a waveguide display has been quantified. Using these irradiance maps, we showed an inherent tradeoff between MFE (in-coupling efficiency) and MTF-X (image sharpness) for waveguides with multiple interactions on a single-zone in-coupler.

To overcome this tradeoff, we split the in-coupler into multiple zones, allowing ${\eta _1}$ and width to vary during optimization in each zone. We demonstrated the developed framework by designing an in-coupler for an example waveguide geometry. The design was optimized using a single-value metric derived by multiplying the MFE by the MTF-X at 30 cy/deg. The goal of this metric was to maintain an in-coupling efficiency equal to the MFE limit (29%) and to achieve a diffraction-limited MTF (0.71). The optimization process over multiple zones showed that three zones were required for the example waveguide geometry to mitigate the MFE-MTF tradeoff. Further, in the more general case, a maximum of four zones was necessary to mitigate this tradeoff, even for very thin waveguides (0.1 mm).
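The single-zone tradeoff and the MFE × MTF-X merit can be reproduced in a one-dimensional toy model. The following is a hedged sketch in Python/NumPy, not the paper's optimization: the beam width, pixel shift, and sampled frequency bin are illustrative. Raising a uniform ${\eta _1}$ increases the coupled power but narrows and distorts the surviving beam, lowering the MTF.

```python
import numpy as np

def merit_1d(eta_profile, shift_px, freq_bin=4, pad=64):
    """Return (efficiency, MTF at freq_bin, their product) for a 1-D in-coupler.

    eta_profile : per-pixel diffraction efficiencies across the in-coupler zones.
    shift_px    : interaction separation s_x in pixels.
    """
    w = eta_profile.size
    E0 = np.zeros(w + pad); E0[:w] = 1.0        # uniform incident beam
    eta = np.zeros(w + pad); eta[:w] = eta_profile
    E = E0 * eta                                # first interaction couples eta in
    while True:
        E = np.roll(E, shift_px); E[:shift_px] = 0.0
        if not E[:w].any():                     # beam has walked off the grating
            break
        E[:w] *= 1.0 - eta[:w]                  # re-interaction out-coupling loss
    eff = E.sum() / E0.sum()
    amp = np.sqrt(E)                            # pupil amplitude from irradiance
    mtf = np.abs(np.fft.ifft(np.abs(np.fft.fft(amp)) ** 2))
    mtf /= mtf[0]                               # normalized autocorrelation of amp
    return eff, mtf[freq_bin], eff * mtf[freq_bin]

# Single uniform zone: higher eta1 gives more coupled power but a lower MTF.
eff_lo, mtf_lo, _ = merit_1d(np.full(16, 0.4), shift_px=8)
eff_hi, mtf_hi, _ = merit_1d(np.full(16, 1.0), shift_px=8)
```

A multi-zone profile (e.g., a concatenation of per-zone constants) plugs into the same function, so zone efficiencies and widths could in principle be optimized against the product, in the spirit of the multi-zone optimization described above.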

The framework laid out herein applies broadly to AR systems that rely on diffractive elements and TIR to relay light from the display engine to the eye of the user. We evaluate and use irradiance maps to visualize tradeoffs in the in-coupler design and guide multi-zone optimization. Effects over the entire FOV can be understood using efficiency and MTF maps computed from the irradiance maps for each field. The irradiance map calculation enabled by our framework is 2000 times faster for each field point than modeling the system using non-sequential ray tracing. This massive speed increase enables faster parameter space exploration and optimization of the waveguide in-coupler. The thorough understanding and tools we presented led to in-coupler designs with the highest possible in-coupling efficiency and image quality for a given waveguide geometry.

Appendix A

This section compares the method for calculating the irradiance maps of the in-coupled beam using MATLAB from Section 2.1 to a LightTools model. To verify the validity of the MATLAB calculations, we built a LightTools model to perform a non-sequential ray trace, as shown in Fig. 8(a). The display is 2 mm × 2 mm, and the eyepiece's focal length is 5.6713 mm. The resulting full FOV is 20$^\circ $ × 20$^\circ $. For the data shown here, a single point source at the right edge of the display (corresponding to the source of the black rays) is used. After collimation, this point source corresponds to the (−10,0) field discussed throughout this paper. The eyepiece is telecentric in the display plane, and its entrance pupil is coplanar with the in-coupler, ensuring that each beam's irradiance map is as uniform as possible. The lenses in the eyepiece are perfect lenses using the LightTools perfect-lens optical property. The data from the incident-beam detector and the in-coupled-beam detector shown in Fig. 8(a) are imported into MATLAB, normalized by the maximum irradiance of the incident beam, and then displayed on the same color scale as the MATLAB models. The MATLAB data are shown in Fig. 8(b) and (d), and the LightTools data are shown in Fig. 8(c) and (e). The models show excellent agreement with each other.


Fig. 8. Verifying MATLAB calculation with a LightTools model. (a) Illustration of the LightTools model with a display, an eyepiece, an in-coupler with a coplanar detector for capturing the incident beam, and a detector for capturing the in-coupled beam. The red and black rays show the on-axis and (-10,0) field, respectively. (b,c) In-coupled beam irradiance map for the 100% efficient in-coupler case when modeled by MATLAB (b) and LightTools (c). (d,e) In-coupled beam irradiances for the optimized three-zone in-coupler case when modeled by MATLAB (d) and LightTools (e).


The MATLAB model can use either a point source or uniform irradiance distribution. The point source irradiance distribution matches the LightTools model for this system. Uniform irradiance can be used for a more general system approximation or display engines that produce uniform irradiance distributions. The difference between this system's uniform and point source irradiance can be seen in Fig. 9(b) and (c).


Fig. 9. Calculation of beam irradiance incident on a plane from a point-source. (a) Drawing of a point-source and its irradiance pattern at a plane (shown in red) located at the effective focal length, $EFL$. The angle, $\theta $, is the angle between the z-axis and a point $({{x_i},{y_i}} )$ in the red plane and centered at the point-source location. (b) Uniform irradiance pattern. (c) Point-source irradiance pattern.



Fig. 10. Data for an optimized seven-zone in-coupler. (a) Beam irradiance map after in-coupling if the in-coupler is optimized for seven zones. (b) The MTF curves along the X and Y directions of the beam map in (a).


We use a point-source model to calculate the irradiance at the in-coupler, as shown in Fig. 9(a). Since the eyepiece is telecentric at the display plane, the irradiance for each beam at the in-coupler is the same, regardless of location in the display plane. We start with the equation for irradiance on a plane from an on-axis point-source

$${E_d}({r,\theta } )= \frac{{\mathrm{\Phi }\cos \theta }}{{4\pi {r^2}}}$$
where $\mathrm{\Phi }$ is the source flux, $\theta $ is the angle between the surface normal of the plane, centered at the point-source, and the line to a location $({{x_i},{y_i}} )$ on the plane [shown in Fig. 9(a)], and r is the distance from the point-source to the location $({{x_i},{y_i}} )$.

From trigonometry, it is clear that

$$\cos \theta = \frac{z}{r}$$
and
$$r = \sqrt {{x^2} + {y^2} + {z^2}} $$

Substituting the identities in Eqs. (4) and (5) into Eq. (3) yields

$${E_d}({x,y,z} )= \frac{{\mathrm{\Phi }\frac{z}{{\sqrt {{x^2} + {y^2} + {z^2}} }}}}{{4\pi {{\left( {\sqrt {{x^2} + {y^2} + {z^2}} } \right)}^2}}}$$

Equation (6) can be reduced to

$${E_d}({x,y,z} )= \frac{{\mathrm{\Phi }z}}{{4\pi {{({{x^2} + {y^2} + {z^2}} )}^{\frac{3}{2}}}}}$$

In the case of the system used in this paper, x and y are the coordinates on the in-coupler, and z is the effective focal length (EFL) of the eyepiece.
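Equation (7) is straightforward to evaluate on the in-coupler grid. A short sketch (Python/NumPy; the unit flux normalization is an assumption for illustration) using the eyepiece EFL of 5.6713 mm quoted above; the off-axis evaluation checks the $\cos^3\theta$ falloff implied by Eqs. (3)-(5).

```python
import numpy as np

def point_source_irradiance(x, y, efl, flux=1.0):
    """Eq. (7): irradiance at (x, y) on a plane a distance z = EFL from a point source."""
    return flux * efl / (4.0 * np.pi * (x**2 + y**2 + efl**2) ** 1.5)

efl = 5.6713                                      # eyepiece EFL in mm (Appendix A)
E_axis = point_source_irradiance(0.0, 0.0, efl)   # on-axis: flux / (4 pi EFL^2)
E_off = point_source_irradiance(efl, 0.0, efl)    # 45 deg off axis: falls as cos^3
```

Evaluating this function over a grid of $(x, y)$ in-coupler coordinates produces the point-source irradiance pattern of Fig. 9(c); replacing it with a constant gives the uniform pattern of Fig. 9(b).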

Appendix B

This section shows an in-coupled beam irradiance map whose MTF-X at 30 cy/deg exceeds the 0.71 limit, as seen in Fig. 10. In this case, the in-coupler is split into seven zones. The zone diffraction efficiencies are 100%, 97%, 59%, 55%, 37%, 28%, and 20%, respectively, and the zone widths are 0.36, 0.51, 0.39, 0.48, 0.37, 0.48, and 0.41 mm.

The MFE is still 29%, but the MTF-X at 30 cy/deg is 0.73. Note how the MTF-X curve sits slightly above the original diffraction-limit curve in Fig. 10(b) from 0 to 40 cy/deg; beyond 40 cy/deg, the MTF-X curve drops below it. This behavior is similar to that of systems designed with apodized apertures targeted specifically to increase contrast at certain frequencies [41].
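For reference, the seven-zone parameters above can be assembled into a per-pixel efficiency profile for use in an irradiance-map calculation. A sketch in Python/NumPy; the 10 µm sampling pitch is an assumed discretization, not a value from the paper.

```python
import numpy as np

# Optimized seven-zone in-coupler: per-zone diffraction efficiencies and widths (mm).
etas = [1.00, 0.97, 0.59, 0.55, 0.37, 0.28, 0.20]
widths = [0.36, 0.51, 0.39, 0.48, 0.37, 0.48, 0.41]

dx = 0.01  # assumed sampling pitch in mm (10 um)
eta_profile = np.concatenate(
    [np.full(round(w / dx), e) for e, w in zip(etas, widths)]
)
total_width = eta_profile.size * dx  # the seven zones span the 3 mm in-coupler
```

The resulting step profile is monotonically decreasing from 100% to 20%, consistent with keeping early interactions efficient while reducing out-coupling losses for light that re-interacts downstream.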

Funding

National Science Foundation (DGE-1922591).

Acknowledgments

We thank Synopsys Optical Solutions for the student license of LightTools and MathWorks for the student license of MATLAB.

Disclosures

The authors declare no conflicts of interest.

Data availability

The paper and the Supplementary Materials present all the data needed to evaluate the conclusions. Additional data related to this paper may be requested from the authors.

References

1. I. E. Sutherland, “A head-mounted three dimensional display,” in Proceedings of the Fall Joint Computer Conference, part I (ACM), (1968), 757–764.

2. B. C. Kress, Optical Architectures for Augmented-, Virtual-, and Mixed-Reality Headsets (SPIE Press, 2020), Vol. PM316.

3. A. Bauer and J. P. Rolland, “The Optics of Augmented Reality Displays,” in Springer Handbook of Augmented Reality, A. Y. C. Nee, eds. (Springer, 2023), pp. 187–209.

4. J. Peddie, Augmented reality: Where we will all live, 2 ed. (Springer, 2023), Vol. 349.

5. J. Garzón, “An overview of twenty-five years of augmented reality in education,” Multimodal Technologies and Interaction 5(7), 37 (2021). [CrossRef]  

6. P.-H. Diao and N.-J. Shih, “Trends and research issues of augmented reality studies in architectural and civil engineering education—A review of academic journal publications,” Appl. Sci. 9(9), 1840 (2019). [CrossRef]  

7. M. Eckert, J. S. Volmerg, and C. M. Friedrich, “Augmented reality in medicine: systematic and bibliographic review,” JMIR mHealth and uHealth 7(4), e10967 (2019). [CrossRef]  

8. A. Cameron, Optical waveguide technology and its application in head-mounted displays, SPIE Defense, Security, and Sensing (SPIE, 2012), Vol. 8383.

9. Y. Itoh, T. Langlotz, J. Sutton, et al., “Towards indistinguishable augmented reality: A survey on optical see-through head-mounted displays,” ACM Comput. Surv. 54(6), 1–36 (2022). [CrossRef]  

10. J. Xiong, E.-L. Hsiang, Z. He, et al., “Augmented reality and virtual reality displays: emerging technologies and future perspectives,” Light: Sci. Appl. 10(1), 216 (2021). [CrossRef]  

11. A. Bauer and J. P. Rolland, “Visual space assessment of two all-reflective, freeform, optical see-through head-worn displays,” Opt. Express 22(11), 13155–13163 (2014). [CrossRef]  

12. D. K. Nikolov, A. Bauer, F. Cheng, et al., “Metaform optics: Bridging nanophotonics and freeform optics,” Sci. Adv. 7, 1 (2021). [CrossRef]  

13. D. Cheng, H. Chen, C. Yao, et al., “Design, stray light analysis, and fabrication of a compact head-mounted display using freeform prisms,” Opt. Express 30(18), 36931–36948 (2022). [CrossRef]  

14. A. Takagi, S. Yamazaki, Y. Saito, et al., “Development of a stereo video see-through HMD for AR systems,” in Proceedings IEEE and ACM International Symposium on Augmented Reality, (2000), 68–77.

15. J. P. Rolland, M. A. Davies, T. J. Suleski, et al., “Freeform optics for imaging,” Optica 8(2), 161–176 (2021). [CrossRef]  

16. C. C. Wu, K.-T. Shih, J.-W. Huang, et al., “A novel birdbath eyepiece for light field AR glasses,” Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) IV (2023). [CrossRef]  

17. S.-B. Kim and J.-H. Park, “Optical see-through Maxwellian near-to-eye display with an enlarged eyebox,” Opt. Lett. 43(4), 767–770 (2018). [CrossRef]  

18. L. Mi, C. P. Chen, Y. Lu, et al., “Design of lensless retinal scanning display with diffractive optical element,” Opt. Express 27(15), 20493–20507 (2019). [CrossRef]  

19. M. von Waldkirch, P. Lukowicz, and G. Tröster, “Defocusing simulations on a retinal scanning display for quasi accommodation-free viewing,” Opt. Express 11(24), 3220–3233 (2003). [CrossRef]  

20. B. C. Kress and I. Chatterjee, “Waveguide combiners for mixed reality headsets: a nanophotonics design perspective,” Nanophotonics 10(1), 41–74 (2020). [CrossRef]  

21. B. C. Kress, “Optical waveguide combiners for AR headsets: features and limitations,” in Digital Optical Technologies 2019, (SPIE, 2019), 75–100.

22. C.-J. Wang and B. Amirparviz, “Image waveguide with mirror arrays,” U.S. patent 8,189,263 B1 (29 May 2012).

23. Y. Wu, C. Pan, C. Lu, et al., “Hybrid waveguide based augmented reality display system with extra large field of view and 2D exit pupil expansion,” Opt. Express 31(20), 32799–32812 (2023). [CrossRef]  

24. K. Yin, H. Y. Lin, and S. T. Wu, “Chirped polarization volume grating for wide FOV and high-efficiency waveguide-based AR displays,” J. Soc. Inf. Disp. 28(4), 368–374 (2020). [CrossRef]  

25. A. Liu, Y. Zhang, Y. Weng, et al., “Diffraction Efficiency Distribution of Output Grating in Holographic Waveguide Display System,” IEEE Photonics J. 10(4), 1–10 (2018). [CrossRef]  

26. D. Ni, D. Cheng, Y. Wang, et al., “Design and fabrication method of holographic waveguide near-eye display with 2D eye box expansion,” Opt. Express 31(7), 11019–11040 (2023). [CrossRef]  

27. Y. Weng, Y. Zhang, W. Wang, et al., “High-efficiency and compact two-dimensional exit pupil expansion design for diffractive waveguide based on polarization volume grating,” Opt. Express 31(4), 6601 (2023). [CrossRef]  

28. E. Muslimov, D. Akhmetov, D. Kharitonov, et al., Composite waveguide holographic display, SPIE Photonics Europe (SPIE, 2022), Vol. 12138.

29. J. Han, J. Liu, X. Yao, et al., “Portable waveguide display system with a large field of view by integrating freeform elements and volume holograms,” Opt. Express 23(3), 3534–3549 (2015). [CrossRef]  

30. H. Boo, Y. S. Lee, H. Yang, et al., “Metasurface wavefront control for high-performance user-natural augmented reality waveguide glasses,” Sci. Rep. 12(1), 5832 (2022). [CrossRef]  

31. D. J. Grey, “The ideal imaging AR waveguide,” in Digital Optical Technologies 2017, (SPIE, 2017), 66–74.

32. H.-H. M. Cheng, Y. Chen, A. Christophe, et al., “Optimization and tolerance for an exit pupil expander with 2D grating as out-coupler,” in Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) IV, (SPIE, 2023), 199–208.

33. C. H. Gan, M.-E. Kleemann, A. Golos, et al., “Effects of polarisation and spatial coherence in the pupil expansion with crossed gratings in an AR display,” in Digital Optics for Immersive Displays II, (SPIE, 2020), 9–16.

34. S. Yan, E. Zhang, J. Guo, et al., “Eyebox uniformity optimization over the full field of view for optical waveguide displays based on linked list processing,” Opt. Express 30(21), 38139–38151 (2022). [CrossRef]  

35. J. Xiong, G. Tan, T. Zhan, et al., “Breaking the field-of-view limit in augmented reality with a scanning waveguide display,” OSA Continuum 3(10), 2730–2740 (2020). [CrossRef]  

36. Y. Ding, Q. Yang, Y. Li, et al., “Waveguide-based augmented reality displays: perspectives and challenges,” eLight 3(1), 24 (2023). [CrossRef]  

37. Y. Amitai, “P-27: A Two-Dimensional Aperture Expander for Ultra-Compact, High-Performance Head-Worn Displays,” in SID Symposium Digest of Technical Papers, (Wiley Online Library, 2005), 360–363.

38. Z. Wu, J. Liu, and Y. Wang, “A high-efficiency holographic waveguide display system with a prism in-coupler,” J. Soc. Inf. Disp. 21, 524–528 (2013). [CrossRef]  

39. D. Cheng, Y. Wang, C. Xu, et al., “Design of an ultra-thin near-eye display with geometrical waveguide and freeform optics,” Opt. Express 22(17), 20705–20719 (2014). [CrossRef]  

40. J. Goodsell, P. Xiong, D. K. Nikolov, et al., “Metagrating meets the geometry-based efficiency limit for AR waveguide in-couplers,” Opt. Express 31(3), 4599–4614 (2023). [CrossRef]  

41. J. W. Goodman, Introduction to Fourier Optics, 4th ed. (W. H. Freeman, New York, NY, USA, 2017), p. 564.

42. Y. Lin, H. Xu, R. Shi, et al., “Enhanced diffraction efficiency with angular selectivity by inserting an optical interlayer into a diffractive waveguide for augmented reality displays,” Opt. Express 30(17), 31244–31255 (2022). [CrossRef]  

43. J. Schwiegerling, Field guide to visual and ophthalmic optics (SPIE, 2004).




Figures (10)

Fig. 1. Impact of multi-zone in-coupler on irradiance uniformity. (a) Illustration of a waveguide AR display showing beam expansion. The in-coupler is in the area with the dashed frame around the diffractive surface, and the red cone indicates an incident beam from a point on an extended light source shown as a dark grey rectangle. (b & c) Expanded views of the highlighted region in (a). The incident beam irradiance map is the red region to the left, and the in-coupled beam irradiance map is the blue region to the right. Rays inside and outside the waveguide are shown with dashed and solid lines, respectively. Ray thickness indicates relative power. (b) The in-coupler is a single 3 mm wide diffractive surface with 100% diffraction efficiency. The green arrow shows the interaction separation, ${s_x}$. (c) The in-coupler is 3 mm wide and consists of three optimized diffractive surfaces, as indicated by the three zones separated by the vertical black lines. For this example architecture, the optimal diffraction efficiencies are 95%, 54%, and 27%, and the widths of each zone are 1.06 mm, 1.01 mm, and 0.93 mm.

Fig. 2. Impact of waveguide geometry on in-coupling efficiency limit: (a) In-coupling efficiency limit over the full FOV for the waveguide geometry in this paper. The red circle indicates the minimum field efficiency (MFE). (b) Impact of waveguide thickness on MFE.

Fig. 3. Flowchart showing the method for calculating the in-coupled irradiance map for a given field angle. (a) The incident beam irradiance map. The blue area is zero-padding to allow the beam to shift. (b) The in-coupler map for the first interaction. The teal area is the diffractive surface, and the reddish-brown area is one-padding. (c) Product of (a) and (b). (d) Shift (c) by ${s_x}$. (e) The in-coupler map for secondary interactions. The yellow-green area is the diffractive surface region, and the reddish-brown area is one-padding. (f) Checkpoint to see if the diffractive surface and the beam still overlap. The tan arrow indicates the loop to continue, and the green arrow indicates breaking the loop. (g) Product of (d) and (e). The empty yellow squares pointed to by the yellow arrows in (d) and (g) outline the location of the in-coupler relative to the beam. (h) Final in-coupled beam irradiance map. The area in the white-dashed outline can be re-centered and zoomed in to isolate just the extent of the beam.

Fig. 4. Impact of ${\eta _1}$ on beam irradiance map, in-coupling efficiency (i.e., MFE), and MTF: (a) Beam irradiance map after in-coupling if ${\eta _1}$ is 100%. The overlaid white values in (a) & (c) are the MFE values. (b) The X (solid orange) and Y (dotted yellow) slices of the MTF for the beam irradiance map in (a). The black dashed lines in (b) & (d) correspond to 30 cy/deg (i.e., 20/20 vision). The blue curve shows the diffraction limit for the original incident beam irradiance map from Fig. 1. (c) Beam irradiance map after in-coupling if ${\eta _1}$ is 40%. (d) The MTF curves along the X and Y directions of the beam irradiance map in (c). (e) MFE (blue) vs. MTF-X at 30 cy/deg (red) curves as a function of ${\eta _1}$ when the in-coupler is a single uniform diffractive surface. The dashed lines show the respective limits for each.

Fig. 5. Splitting the single-value in-coupler into multiple zones and impact on normalized MFE-MTF product: (a) Illustration of a single-zone in-coupler with a single diffraction efficiency. (b) Normalized MFE * MTF value as a function of ${\eta _1}\; $ when the in-coupler is a single zone. (c) Illustration of the in-coupler when it is split into two equal zones (indicated by the different colors). (d) Normalized MFE * MTF value when Zone 1 (y-axis) and Zone 2 (x-axis) diffraction efficiencies vary independently. (e) Illustration of the in-coupler when it is split into three equal zones (indicated by the different colors). (f) Normalized MFE * MTF value when Zone 2 (y-axis) and Zone 3 (x-axis) diffraction efficiencies vary independently. The diffraction efficiency of Zone 1 is fixed to be 100%.

Fig. 6. Effect of the number of zones on the MFE-MTF product. (a) The optimized MTF function when the in-coupler is split into three zones. The arrow indicates the corresponding point in (b). (b) The optimized normalized merit function (black curve) maximizes the product of the MFE (red curve) and the MTF-X (blue curve) at 30 cy/deg. Each zone's width and diffraction efficiency were used as variables for optimization. (c) The effect of waveguide thickness (colored lines) on the number of zones needed to maximize the MFE*MTF merit function.

Fig. 7. Improving MTF without sacrificing MFE by splitting the in-coupler into multiple zones: (a) Illustration of a single zone with 100% diffraction efficiency everywhere. (b & e) In-coupling efficiency maps over the FOV in cases (a) & (d). (c & f) MTF-X maps over the FOV in cases (a) & (d). (d) Illustration of the in-coupler split into three zones with diffraction efficiencies of 95%, 54%, and 27% and widths of 1.06 mm, 1.01 mm, and 0.93 mm, respectively.
