
Efficient evaluation of a three-dimensional eye-box in a near-eye display using light-field acquisition of luminance distribution

Open Access

Abstract

We propose, what we believe to be, a novel assessment methodology for evaluating the three-dimensional (3D) characteristics of an eye-box volume in a near-eye display (NED) using light-field (LF) data acquired at a single measuring distance. In contrast to conventional evaluation methods for the eye-box, where a light measuring device (LMD) changes its position in the lateral and longitudinal directions, the proposed method requires an LF of the luminance distribution (LFLD) of the NED captured only at a single observation distance, and the 3D eye-box volume is evaluated via a simple post-analysis. We explore an LFLD-based representation for the efficient evaluation of the 3D eye-box, and the theoretical analysis is validated by simulation results using Zemax OpticStudio. As an experimental verification, we acquired an LFLD for an augmented reality NED at a single observation distance. The assessed LFLD successfully constructed a 3D eye-box over a distance range of 20 mm, which included assessment conditions where it was hard to measure the light rays' distributions directly with the conventional methodologies. The proposed method is further verified by comparison with actually observed images of the NED both inside and outside of the evaluated 3D eye-box.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

As interest in near-eye displays (NEDs) continues to increase, evaluation methods to measure their optical quality have attracted attention. Various optical properties such as the eye-box, the field of view (FOV), and the luminance are used to evaluate the representative quality of NEDs [1,2]. Among them, the eye-box, which represents the available space for the human eye to be placed in order to view the entire virtual image without vignetting, is a key design parameter for NEDs to become widespread in the market, since a wide eye-box is crucial for adoption by a large population of consumers [3].

The evaluation methods for the size of the eye-box have been actively studied. One of the most intuitive ways to measure it is based on the luminance distribution against the observation positions, as shown in Fig. 1(a) [4–6]. For a device under test (DUT), a light measuring device (LMD) assesses changes in the luminance according to the measuring positions. As guided by the International Electrotechnical Commission (IEC), the eye-box is determined by the area where the luminance decreases to a threshold at the marginal field angles [6]. In general, the eye-box evaluation is mainly conducted at an eye-relief distance [4–6], and is often treated as a two-dimensional (2D) area rather than a three-dimensional (3D) volume. Since the measured results contain no information about the longitudinal (z-axis) direction, one should repeat the entire evaluation process to estimate the size of the eye-box at different observation distances.


Fig. 1. Schematics of evaluating the eye-box of the NED: a conventional approach for assessing (a) the 2D eye-box at the eye-relief distance, (b) the 3D eye-box using the repetitive evaluations at the multiple observation distances, and (c) the 3D eye-box evaluation using the LFLD acquisition at the single distance (the proposed method).


Considering the diverse shapes of human faces, a 3D description of the eye-box is inevitable in both the design and manufacturing processes of NED products [7]. However, as shown in Fig. 1(b), a full characterization of the 3D eye-box volume leads to a time-consuming, layer-based evaluation at multiple observation distances. For N layers of eye-boxes distributed along the z-axis, the measurement process has to be repeated N times over multiple transverse planes. In Refs. [8–10], the 3D eye-boxes were investigated by using luminance data at various distances along the optical axis of the NEDs, but the number of measuring distances was limited to only six in the simulations [8] or three in the experiments [9,10]. Moreover, in those distance-sliced measurements, the inter-plane distributions, which are not actually measured, may be inaccurate, and versatile use of the acquired data at other observation distances is hardly achievable.

Recently, a few methods for rapidly evaluating the 3D eye-box by using the ray trajectories confined by an optical aperture of the NEDs have been proposed [11,12]. However, multiple acquisitions along the optical axis, such as moving a diffuser along the z-axis and capturing the diffused images at various distances [11] or capturing at least two images at different observation distances [12], were still required rather than a single measuring distance. Additionally, those studies require careful alignment between the optical axis of the DUT and the movement direction of the LMD, and are sensitive to the optical aberrations that practically occur in the multiple lenses and apertures of the NEDs.

In this paper, we propose a novel method for evaluating the 3D eye-box volume more efficiently by using light-field (LF) data captured at a single observation distance, as in Fig. 1(c). The main advantage of the proposed method is that all of the assessment processes are completed at a single observation distance, and the viewing characteristics at arbitrary distances can be evaluated after the acquisition. Since the LF of the luminance distribution (LFLD) of an NED contains the spatio-angular information about the light rays, it can be used to evaluate eye-boxes at arbitrary distances displaced from the captured one. In Sec. 2, we develop an analysis model for the LFLD-based evaluation of the 3D eye-box from the data acquired at a single distance, and simulation results using Zemax OpticStudio verify the proposed method. In Sec. 3, experimental verifications are provided using a commercial see-through NED. We acquired the LFLD at a single measuring distance and evaluated the 3D eye-box volume. To the authors' knowledge, this is the first study to adopt the LF analysis in measurements of the optical properties of NEDs.

2. Principles

2.1 LFLD analysis model for evaluating 3D eye-box

In this section, we first explore an LFLD-based analysis model for evaluating a 3D eye-box of an NED. In general, since an LF representation of the light rays contains information about their propagation directions and positions at a given depth [13–17], the LFLD at a single observation distance allows us to evaluate the luminance profiles at other distances by a simple post-analysis. In contrast to the conventional measurement methodology, which requires tens (or hundreds) of time-consuming assessments at different positions and angles [4,6], it is possible to efficiently evaluate the 3D eye-box, or a series of 2D eye-boxes at arbitrary observation distances, by using the LFLD at a single distance.

As shown in Fig. 2(a), most NEDs can be simplified as a composition of a display panel and an effectively modeled eyepiece lens, separated from each other by almost the focal length of the lens [1,8]. Supposing each pixel in the display has an emission angle of φ, the maximum size of the eye-box at an eye-relief distance (ze) is determined by the overlapped area of the (quasi-)collimated rays after refraction by the eyepiece lens, denoted as Bx,max in Fig. 2(a). The eye-box size, Bx(z), varies according to the longitudinal distance along the z-axis, and reaches a minimum value (i.e., zero) at z = zmin and z = zmax in Fig. 2(a).


Fig. 2. Demonstration of the LFLD-based evaluation for the eye-box of the NED: (a) the schematic diagram for forming the eye-box in the simplified NED configuration, (b) the visualization of the LFLD at the eye-relief, (c) after, and (d) before the eye-relief distance.


In this paper, we denote the LFLD of the NED by L(x, y, θx, θy), parameterized by four coordinates (x, y, θx, θy), where (x, y) is a spatial coordinate and (θx, θy) is an angular coordinate at a given distance. The distance for acquiring the LFLD and the acquired LFLD are expressed as z = zi and Li(x, y, θx, θy), respectively, where the subscript i represents the initial distance and the initial LFLD data for the analysis. It is worth noting that it is possible to acquire the LFLD at arbitrary observation distances, but the region of interest in the spatial domain is smallest when the LFLD is acquired near the eye-relief of the NED.

Suppose the LFLD of the NED is acquired at the eye-relief distance (i.e., zi = ze). Figure 2(b) depicts a 2D slice of the four-dimensional (4D) LFLD in the (x, θx) coordinates: Li(x, θx) with y = 0 and θy = 0 at z = zi = ze. Li(x, θx) is bandlimited to Bx,max (spatially) and Ωx (angularly). In this paper, we define the FOV and the eye-box by using the luminance falloff criterion [6]. In the luminance falloff criterion, the marginal field angles and the FOV are determined by the angular range where the measured luminance is brighter than a threshold (lt,Ω) for the LMD located at the eye-point. The eye-box is defined as the area where the observed luminance for the marginal field angles (i.e., the outermost regions of the virtual images) is higher than a threshold (lt). In Fig. 2(b), for example, as the marginal fields and the FOV are determined by the angular range satisfying L(x = 0, θx) ≥ lt,Ω, the marginal fields are θx = ±Ωx/2 and the FOV is Ωx. Hence, the eye-box is given by the spatial range of x satisfying the inequality below.

$$L(x, - \frac{{{\Omega _x}}}{2} \le {\theta _x} \le \frac{{{\Omega _x}}}{2}) \ge {l_t},$$
where lt denotes the threshold of the luminance falloff criterion for the eye-box. In the luminance falloff criterion, both the FOV and the eye-box are determined according to the relative changes in the luminance distribution. Hence, lt,Ω = kΩ × lc and lt = k × lc, where kΩ and k are coefficients determining the FOV and the eye-box, respectively, and lc is the maximum luminance of the center pixel observed at the eye-point [6]. Both kΩ and k are unitless coefficients for the normalized LFLD, which lie within [0, 1]. If the spatial range of x is expressed as xmin ≤ x ≤ xmax, the width of the eye-box at the measuring distance is Bx = xmax – xmin. For example, at every x position within [–Bx,max/2, Bx,max/2] in Fig. 2(b), all of the light rays with field angles within [–Ωx/2, Ωx/2] are viewable with a luminance higher than the threshold. Hence, the eye-box size at the eye-relief distance is expressed as the width of the rectangle, Bx,max, in the (x, θx) coordinates, as presented in Fig. 2(b).
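To make the falloff criterion concrete, the check of Eq. (1) can be sketched in a few lines of Python for a discretized 2D LFLD slice. This is an illustrative sketch, not the authors' code; the array layout, the coordinate grids, and the function name `eyebox_width` are assumptions.

```python
import numpy as np

def eyebox_width(L, x, theta, omega_half, k=0.5):
    """Eye-box width from a 2D LFLD slice L[x, theta] via Eq. (1).

    L          : 2D array of luminance samples over (x, theta)
    x, theta   : 1D coordinate grids (mm, degrees)
    omega_half : half-FOV, so the marginal fields are theta = +/- omega_half
    k          : falloff coefficient; the threshold is lt = k * lc
    """
    # lc: luminance of the center pixel observed on-axis (x closest to 0)
    lc = L[np.argmin(np.abs(x)), np.argmin(np.abs(theta))]
    lt = k * lc
    in_fov = np.abs(theta) <= omega_half
    # a position x lies in the eye-box if every field angle inside the FOV
    # is observed brighter than the threshold
    inside = np.all(L[:, in_fov] >= lt, axis=1)
    if not inside.any():
        return 0.0
    return float(x[inside].max() - x[inside].min())
```

On a synthetic rectangular LFLD bandlimited to |x| ≤ Bx,max/2 and |θx| ≤ Ωx/2, this returns a width of Bx,max up to the sampling interval.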

At z = ze + Δz, the light rays emitted from the leftmost pixel (θx = –Ωx/2) are not observed at the leftmost positions such as x = –Bx,max/2, and vice versa. This is represented as a sheared shape of the LFLD, as depicted in Fig. 2(c). Similarly, in LF theory, as the light rays propagate by Δz, the LF is sheared and shows a parallelogram shape with a slope closely related to the propagation distance (Δz) [14,16,17]. Since the light rays within the angular range of [–Ωx/2, Ωx/2] have to be observable in the eye-box, the eye-box shrinks to the horizontal range of Bx(z) in Fig. 2(c). If we consider an observation distance in front of z = ze (i.e., Δz < 0), as shown in Fig. 2(d), the LFLD is sheared in the opposite direction compared to that of Fig. 2(c). In both cases of Figs. 2(c) and 2(d), as the observation distance is displaced from the eye-relief, the eye-box becomes smaller.

Even though the aforementioned LFLD-based analysis of the eye-box was described for the 2D LFLD in the (x, θx) coordinates, the process and principle for evaluating the eye-box remain identical for the 4D LFLD. In a straightforward manner, the eye-box at a given distance z = zi + Δz is defined as the spatial range of (x, y) satisfying the inequality below for the 4D LFLD.

$${L_{\Delta z}}(x,y,{\Omega _{x,\min }} \le {\theta _x} \le {\Omega _{x,\max }},{\Omega _{y,\min }} \le {\theta _y} \le {\Omega _{y,\max }}) \ge {l_t},$$
where LΔz(x, y, θx, θy) denotes the LFLD at z = zi + Δz, and Ωx,min (Ωy,min) and Ωx,max (Ωy,max) denote the minimum and maximum field angles in the virtual image of the NED, respectively. According to the aforementioned definition of the marginal fields, Ωx,min (Ωy,min) and Ωx,max (Ωy,max) are the minimum and maximum values of the field angles θx (θy) that satisfy L(x = 0, y = 0, θx, θy) ≥ lt,Ω in the x-axis (y-axis), respectively. From Li(x, y, θx, θy), the LFLD at an arbitrary observation distance z = zi + Δz, LΔz(x, y, θx, θy), can be calculated by considering the propagation of the rays [16,17].
$${L_{\Delta z}}(x,y,{\theta _x},{\theta _y}) = {L_i}(x - \Delta z\tan {\theta _x},y - \Delta z\tan {\theta _y},{\theta _x},{\theta _y}).$$
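The propagation in Eq. (3) amounts to a shear of the LFLD along the spatial axis. A minimal sketch for a 2D slice, assuming a uniformly sampled x grid and nearest-neighbor resampling (the function name and discretization are our own, not the authors'):

```python
import numpy as np

def propagate_lfld(L, x, theta_deg, dz):
    """Shear a 2D LFLD slice L[x, theta] by a propagation distance dz, per
    Eq. (3): L_dz(x, theta) = L_i(x - dz * tan(theta), theta).
    Positions shifted outside the measured aperture are set to zero,
    since no data was captured there."""
    out = np.zeros_like(L)
    dx = x[1] - x[0]
    for j, t in enumerate(theta_deg):
        shift = dz * np.tan(np.radians(t))   # lateral displacement of rays at angle t
        src = x - shift                      # source positions at the acquisition plane
        idx = np.round((src - x[0]) / dx).astype(int)
        valid = (idx >= 0) & (idx < len(x))
        out[valid, j] = L[idx[valid], j]
    return out
```

A ray at x = 0 with θ = 45° lands at x = dz after propagating by dz, which is a quick sanity check for the sign convention.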

The LMD to acquire LFLD could be implemented based on an LF camera (with an additional imaging lens) such as a configuration presented in Ref. [17], or an LMD-array such as a configuration proposed in Ref. [18]. Regardless of the type of implementation, the acceptable range of the angular and spatial domains for the LMD is required to be larger than the FOV and the eye-box, respectively.

Figure 3 represents a schematic diagram of an LMD based on a micro-lens-array (MLA) on a detector, as in a conventional LF camera. In this case, the focal length and the size of the lenslets in the MLA mainly determine the acceptable range of the LFLD.


Fig. 3. Schematic diagram for the LFLD acquisition system based on the MLA and the detector.


In order to derive the requirements, the LMD is supposed to be located at z = ze. The measurable field angles of the LMD and the size of the synthetic apertures of the (imaged) MLA are required to be larger than the FOV and the eye-box of the NED, respectively. Otherwise, a part of the marginal fields or the marginal rays of each field may be missed, and the eye-box is not calculated correctly. The requirements for the LMD are expressed in Eqs. (4) and (5).

$$\frac{{\Delta x}}{{{f_{LF}}}} \ge 2\tan \left( {\frac{{{\Omega _{x,\max }} - {\Omega _{x,\min }}}}{2}} \right),$$
$${N_x} \times \Delta x \ge {B_{x,\max }},$$
where Δx and Nx represent the sampling interval and the number of sampling points of the LFLD in the spatial coordinates, and fLF represents the focal length of the lenslets. From Eq. (4), a high numerical aperture (NA) of the MLA is preferred. Similarly, in the case of the LMD-array, the individual LMDs in the array need to have measurement field angles larger than Ωx,max – Ωx,min and synthetic apertures larger than Bx,max, respectively.
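The two requirements can be bundled into a small feasibility check. This is an illustrative sketch; the function and parameter names are assumptions, with angles in degrees and lengths in millimeters:

```python
import math

def lmd_meets_requirements(dx_mm, f_lf_mm, n_x, fov_x_deg, bx_max_mm):
    """Check an MLA-based LMD against Eqs. (4) and (5): the lenslet NA must
    cover the FOV of the NED, and the sampled synthetic aperture must cover
    the maximum eye-box."""
    # Eq. (4): dx / f_LF >= 2 * tan(FOV / 2)
    angular_ok = dx_mm / f_lf_mm >= 2.0 * math.tan(math.radians(fov_x_deg / 2.0))
    # Eq. (5): N_x * dx >= B_x,max
    spatial_ok = n_x * dx_mm >= bx_max_mm
    return angular_ok and spatial_ok
```

For instance, with the simulated sampling of Sec. 2.2 (Δx = 0.5 mm, 35 samples, a 13° FOV, and an eye-box of about 5.9 mm), any lenslet focal length up to roughly Δx / (2 tan 6.5°) ≈ 2.2 mm would satisfy both inequalities.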

2.2 Simulation results

By using the analysis model in Sec. 2.1, we can predict the light distribution at arbitrary observation distances from the NED by using the initial LFLD, which is acquired at a single observation distance. In order to validate the proposed method, we built a simple NED system composed of an achromatic eyepiece lens and a micro-display using commercial optical design software (Zemax OpticStudio). We carried out a non-sequential ray-tracing simulation of the light distribution of the NED system, and acquired an LFLD at the eye-relief distance to evaluate the 3D eye-box.

Figure 4(a) shows the simulation geometry built in OpticStudio. In order to virtually capture the LFLD of the NED, we placed an array of 35 (H) × 35 (V) virtual detectors with small apertures (0.5 mm) in front of them. Each virtual detector captured the light intensity in the angular domain at a specific spatial position, and the distance between the eyepiece lens and the captured position of the LFLD was 27.5 mm. The emission angle of the displayed pixels (φ) in the micro-display was assumed to be 13.3° at the full width at half maximum (FWHM). Since the effective focal length of the eyepiece lens was 30 mm and the size of the display was 7 mm in both width and height, Ωx = Ωy = 13.0° at the FWHM. Other simulation parameters are provided in Table 1. In Table 1, Δx (Δy), Δθx (Δθy), and Nx (Ny) represent the sampling intervals in the spatial and the angular coordinates, and the number of sampling points in the spatial coordinates, respectively.


Fig. 4. Numerical simulation using Zemax OpticStudio: (a) the configuration of the non-sequential ray-tracing simulation, (b) the visualization of the simulated LFLD at z = zi = 27.5 mm. Note that we enhanced the brightness of the magnified pictures in the inset of sky-blue (II) and green (III) colors for increasing readability.



Table 1. Specifications for the LFLD acquisition simulation

A high resolution of the LFLD is preferred to evaluate the 3D eye-box with a small Δz. In order to enhance the angular and spatial resolution of the LFLD, we carried out an upsampling process on the initial LFLD by using interpolation between adjacent luminance distributions. Note that since the LFLD in the proposed method consists of smooth variations of the luminance rather than the complicated textures of the typical LF images used in computer vision applications such as Refs. [13–17], a simple interpolation with the adjacent data can be used. Additionally, the upsampling process of the LFLD also helps to relax the required resolution of the LMD in practice. The resolution of the LFLD is enhanced in the spatial coordinates by the following equation, where Llow(·) and Lhigh(·) denote the LFLD before and after the upsampling process, and Δx and Δy represent the sampling intervals of the LFLD in the x- and y-axis, respectively.

$$\scalebox{0.97}{$\displaystyle {L_{high}}(x,y,{\theta _x},{\theta _y}) = \left\{ {\begin{array}{@{}cl@{}} {{L_{low}}({x_{low}},{y_{low}},{\theta_x},{\theta_y}),}&{\textrm{for }x = {x_{low}},y = {y_{low}}}\\ {\frac{1}{2}\sum\limits_{j = 1}^2 {{L_{low}}({x_{low}} + {{( - 1)}^j}\frac{{\Delta x}}{2},{y_{low}},{\theta_x},{\theta_y}),} }&{\textrm{for }x \ne {x_{low}},y = {y_{low}}}\\ {\frac{1}{2}\sum\limits_{k = 1}^2 {{L_{low}}({x_{low}},{y_{low}} + {{( - 1)}^k}\frac{{\Delta y}}{2},{\theta_x},{\theta_y}),} }&{\textrm{for }x = {x_{low}},y \ne {y_{low}}}\\ {\frac{1}{4}\sum\limits_{k = 1}^2 {\sum\limits_{j = 1}^2 {{L_{low}}({x_{low}} + {{( - 1)}^j}\frac{{\Delta x}}{2},{y_{low}} + {{( - 1)}^k}\frac{{\Delta y}}{2},{\theta_x},{\theta_y}),} } }&{\textrm{for }x \ne {x_{low}},y \ne {y_{low}}} \end{array}} \right..$}$$
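One pass of Eq. (6) doubles the spatial sampling by keeping the original samples and averaging two or four neighbors at the new midpoints, taking an Nx × Ny spatial grid to (2Nx – 1) × (2Ny – 1); applied twice, a 35 × 35 grid becomes 137 × 137. A sketch, assuming the LFLD is stored as a 4D array indexed (x, y, θx, θy):

```python
import numpy as np

def upsample_spatial(L):
    """One pass of the Eq. (6) upsampling: double the spatial resolution of a
    4D LFLD L[x, y, theta_x, theta_y] by averaging adjacent samples.
    Spatial shape (Nx, Ny) -> (2*Nx - 1, 2*Ny - 1)."""
    nx, ny = L.shape[:2]
    out = np.zeros((2 * nx - 1, 2 * ny - 1) + L.shape[2:], dtype=float)
    out[::2, ::2] = L                               # original samples are kept
    out[1::2, ::2] = 0.5 * (L[:-1] + L[1:])         # midpoints along x
    out[::2, 1::2] = 0.5 * (L[:, :-1] + L[:, 1:])   # midpoints along y
    out[1::2, 1::2] = 0.25 * (L[:-1, :-1] + L[1:, :-1]   # midpoints along both axes
                              + L[:-1, 1:] + L[1:, 1:])
    return out
```

This is simply separable linear interpolation on the half-spacing grid, which is adequate here because the LFLD varies smoothly in space.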

Figure 4(b) presents the normalized LFLD at z = zi after the upsampling process, where it was processed by Eq. (6) twice. The initially acquired LFLD, Llow(x, y, θx, θy), had a resolution of (35 × 35 × 61 × 61), and the upsampled LFLD, Lhigh(x, y, θx, θy), had a resolution of (137 × 137 × 61 × 61). The magnified pictures on the right side of Fig. 4(b) present the change of the angular distribution according to the position. For example, the inset of (I), which shows the angular distribution of the luminance at the center of the eye-box, presents a square pattern confined to the angular range of (–6.5°, –6.5°) < (θx, θy) < (6.5°, 6.5°) (i.e., Ωx = Ωy = 13°). We chose kΩ = 0.5 (i.e., lt,Ω = 0.5×lc) to determine the FOV of the simulated LFLD. In the inset of (II), as the spatial position moved along the –x direction, the overall luminance decreased, and some parts of the distribution in the horizontal direction disappeared. Similarly, in the inset of (III), the overall luminance decreased further compared to the inset of (II), and the vignetting was observed in both the θx and θy directions.

Figure 5(a) shows Li(x, θx) for the LFLD of Fig. 4(b). As we simulated an x-y symmetric optical system, Li(y, θy) showed an almost identical shape to Fig. 5(a). In Fig. 5(a), Bx,max was calculated by normalizing the LFLD and setting k = 0.5 (i.e., lt = 0.5×lc) according to Ref. [6]. In Fig. 5(b), we present Li(x, θx = –6.5°) and Li(x, θx = 6.5°). The range in the x-axis where the normalized luminance is higher than 0.5 for θx = ±6.5° is the size of the eye-box. We also present LΔz(x, θx) in Figs. 5(c) and 5(d), where Δz is –10 mm and 10 mm, respectively. The directions of the sheared slopes in Figs. 5(c) and 5(d) are opposite to each other, since the propagation directions of the LFLD are reversed. The evaluated sizes of the eye-boxes at the different distances according to Eq. (1) are marked in Figs. 5(c) and 5(d).


Fig. 5. Simulation results for evaluating the eye-box from the LFLD: (a) Li(x, θx), (b) Li(x, θx=±6.5°), LΔz(x, θx) when Δz were (c) –10 mm, and (d) 10 mm. We normalized the LFLD by using the maximum value of the acquired luminance distribution.


Figure 6(a) presents the 3D eye-box evaluated from the LFLD, where the eye-box at each observation distance is calculated according to Eqs. (2) and (3). In Fig. 6(a), the 3D eye-box is presented in terms of 2D sliced eye-boxes at various observation distances with an interval of 5.0 mm. The yellow region at each Δz indicates the area where the observer can view the entire range of marginal field angles of the NED (±6.5°) at over half of the maximum luminance. The shape of the eye-box was almost circular, and its size at z = zi was 5.9 mm in diameter. As the observation distance changed, the size of the eye-box decreased for both positive and negative Δz.
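Combining the shear of Eq. (3) with the threshold of Eq. (1) yields the per-distance eye-box sizes that stack up into a 3D eye-box like the one in Fig. 6(a). A self-contained 2D sketch of that loop (illustrative, not the authors' implementation; nearest-neighbor resampling on a uniform x grid is assumed):

```python
import numpy as np

def eyebox_vs_distance(L, x, theta_deg, dz_list, omega_half, k=0.5):
    """Eye-box widths at several observation distances from one 2D LFLD slice:
    shear the LFLD by each dz (Eq. (3)), then apply the luminance falloff
    threshold over the FOV (Eq. (1))."""
    dx = x[1] - x[0]
    lt = k * L[np.argmin(np.abs(x)), np.argmin(np.abs(theta_deg))]
    in_fov = np.abs(theta_deg) <= omega_half
    widths = []
    for dz in dz_list:
        sheared = np.zeros_like(L)
        for j, t in enumerate(theta_deg):
            # L_dz(x, t) = L_i(x - dz*tan(t), t), resampled to the x grid
            idx = np.round((x - dz * np.tan(np.radians(t)) - x[0]) / dx).astype(int)
            ok = (idx >= 0) & (idx < len(x))
            sheared[ok, j] = L[idx[ok], j]
        inside = np.all(sheared[:, in_fov] >= lt, axis=1)
        widths.append(float(x[inside].max() - x[inside].min()) if inside.any() else 0.0)
    return widths
```

For a rectangular LFLD acquired at the eye-relief, the width shrinks by about 2·|Δz|·tan(Ωx/2) as |Δz| grows, which matches the parallelogram picture of Figs. 2(c) and 2(d).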


Fig. 6. The evaluated 3D eye-box and the observing simulation results: (a) the 3D eye-box evaluated from the acquired LFLD, (b) the eye-box representation at z = zi and z = zi + 10 mm (the yellow region: the evaluated eye-box at each observation distance), (c) the observing simulation results at different conditions inside and outside of the 3D eye-box, and (d) the intensity profiles along A–A’.


For the verification of the evaluated 3D eye-box, we further simulated the observed images at various positions inside and outside of the eye-box volume. In the simulation, a virtual eye composed of an ideal thin lens and a detector moved its position, and the pupil of the virtual eye was assumed to be 4.0 mm in diameter. We marked the simulated observation conditions in Fig. 6(b). Cases (I) and (II) were within the 3D eye-box volume, and Cases (III) and (IV) were chosen outside of the 3D eye-box. Cases (II) and (IV) had an identical lateral position (x, y), but different longitudinal positions (z). A comparison between those two conditions shows the importance of characterizing the 3D eye-box volume: the reduced size of the eye-box has to be considered as the observer is displaced from the eye-relief along the z-axis. In Figs. 6(c) and 6(d), the observed intensities showed the maximum values for Case (I). In Case (II), even though the intensity decreased compared to Case (I), the overall image was brighter than the threshold (k = 0.5). Meanwhile, in Cases (III) and (IV), the intensity of the marginal field was below the threshold, as expected. In Fig. 6(d), we present a comparison among the intensity profiles along the dashed lines A–A’ at the center rows of Fig. 6(c). The peak intensities at the marginal field were 0.50, 0.42, 0.31, and 0.23 for Cases (I), (II), (III), and (IV), respectively. Comparing Cases (II) and (IV), the intensities at the center field were similar, but the intensity of Case (IV) severely decreased as the observed field became larger.

In practice, if the elemental lenslets in the MLA or the individual LMDs in the LMD-array have different optical characteristics such as transmittance, an additional calibration process is required among them. Several studies have verified that the camera calibration can be achieved by capturing common reference patterns such as uniformly lit white surfaces [19] or point light sources placed in front of the array [20]. Once the parameters among the elemental lenslets or the individual LMDs are calibrated, the overall transmittance of the LMD hardly affects the eye-box calculation in the post-analysis process. Since the LFLD is normalized, it is possible to estimate the eye-box based on the relative changes of the luminance distribution without accurate absolute values, which may differ according to the common transmittance of the array. In Fig. 7, we compare the effect of the transmittance of the LMD-array on the evaluated eye-box by using the configuration of Fig. 4(a). We applied a common transmittance to all of the virtual detectors in the array while varying the transmittance of the apertures in front of them as follows: 1.0 (as reference), 0.95, 0.9, and 0.5. As shown in Fig. 7, even though the maximum of the LFLD (lc) differed according to the transmittance of the apertures, the calculated eye-box sizes (Bx) showed consistency regardless of the conditions.


Fig. 7. Simulation results for comparing the change of the maximum LFLD values and the calculated eye-boxes according to the common transmittance conditions of the LMD-array.


3. Experiments

For further verification of the proposed method, we experimentally acquired an LFLD for a commercial NED product, which presents see-through contents via reflective optics. To capture the LFLD, we acquired a number of angular luminance distributions by moving an LMD at a target observation distance. As shown in Fig. 8(a), an LMD from Gamma Scientific Inc. (USA) was used. The LMD moved with an interval of 1.5 mm (H) × 1.5 mm (V) in the spatial domain, and we integrated the acquired luminance distributions into the LFLD. The angular range and the sampling interval of the LMD were 24° (H) × 16° (V) and 2° (H) × 2° (V), respectively.


Fig. 8. Experimental setups and captured LFLD: (a) a picture of the experimental setup for measuring the luminance profile of the NED product, and (b) the visualization of the LFLD after the upsampling process (inset: originally acquired LFLD without the upsampling process).


As the exact value of the desirable eye-relief was not disclosed, we located the LMD at a distance of 20 mm behind the last surface of the lens of the DUT, as guided in Ref. [12]. We also upsampled the LFLD to enhance the resolution: from the original LFLD, Llow(x, y, θx, θy), with a resolution of (9 × 7 × 13 × 9) to the upsampled one, Lhigh(x, y, θx, θy), with a resolution of (33 × 25 × 25 × 17). In contrast to Sec. 2.2, the angular distributions of the acquired LFLD at each position were further enhanced by a factor of 2 (H) × 2 (V) using interpolation, before applying Eq. (6) in the spatial domain. Considering the angular sampling of the LMD, the marginal fields for the analysis were set as follows: Ωx,min = –8.0°, Ωx,max = 9.0°, Ωy,min = –5.0°, and Ωy,max = 4.0°. Hence, the entire FOV in the evaluation process of the 3D eye-box was Ωx = 17.0° and Ωy = 9.0°.

In Fig. 8(b), the upsampled LFLD and the originally acquired one are provided for comparison. In the magnified pictures of the captured LFLD at different positions, presented on the right side of Fig. 8(b), the changes in the luminance according to the positions are observable.

Figures 9(a), 9(b), and 9(c) show LΔz(x, θx) and LΔz(y, θy), when Δz was –10 mm, 0 mm, and 10 mm, respectively. An obvious change in the degree of the shearing according to the observation distance is found in the visualized LFLD. We also marked the targeted marginal fields in Fig. 9. Even in the case of Δz = 0 mm, the LFLD shows sheared shapes in both the (x, θx) and the (y, θy) coordinates. This is because the acquisition distance of the LFLD (i.e., 20 mm behind the NED) exceeded the desirable eye-relief of the NED. However, since the acquired LFLD had full information about the light rays emitted from the NED, it was possible to evaluate the 3D eye-box volume.


Fig. 9. The visualization of the normalized LFLD in the (x, θx) and the (y, θy) coordinates, when Δz is (a) –10 mm, (b) 0 mm, and (c) 10 mm.


Figure 10(a) presents the 3D eye-box volume within the distance range of 20 mm. Due to the uniformity issue in the luminance distribution of the actual NED, k was set to 0.25 (i.e., lt = 0.25×lc) in the experiments. There was no area in which to view the entire range of field angles beyond Δz = 5 mm. It is worth noting that the eye-box at the Δz = –15 mm condition was quite hard to assess with the conventional methodology, because there was not enough space between the LMD and the DUT for the measurement. Meanwhile, the proposed method makes it possible to simulate the light distributions at arbitrary distances in the post-analysis process. Hence, the proposed method is capable of obtaining the viewing characteristics of the NEDs, including the 3D eye-box, even in circumstances where it is otherwise hard to measure the light rays' distributions directly.


Fig. 10. The evaluated results for the 3D eye-box volume using the LFLD: (a) the constructed 3D eye-box, and (b) the eye-boxes at different two observation distances (Δz=–10 mm, and 0 mm).


Figure 10(b) presents the eye-boxes at Δz = –10 mm and 0 mm. The maximum sizes of the eye-boxes were 6.0 mm (H) × 4.1 mm (V) and 2.6 mm (H) × 1.5 mm (V) at Δz = –10 mm and 0 mm, respectively. As a further verification of the evaluated 3D eye-box using actual observations, we captured the virtual images provided by the DUT at four different positions, marked as Cases (I), (II), (III), and (IV) in Fig. 10(b). Cases (I), (II), and (III) were within the 3D eye-box volume, and Case (IV) was located outside of the derived 3D eye-box.

In Fig. 11, we present the captured virtual images at the observation positions of Cases (I), (II), (III), and (IV). Three patterns were used as the image sources: a full-white pattern, a resolution pattern, and a typical full-color image pattern (from Big Buck Bunny [21]), as provided in Fig. 11(a). A smartphone camera was moved by linear stages along the x-, y-, and z-axis, so that it captured the virtual images at the various positions under identical exposure, brightness, and focus settings. The images observed at Case (IV) (outside of the 3D eye-box) showed a drastic degradation in brightness regardless of the image pattern, and vignetting appeared in the virtual images of Case (IV), as shown in Fig. 11(b).


Fig. 11. Experimental setups and virtual images observed through the DUT: (a) a picture of the experimental setup for capturing the virtual images at the different observation positions inside and outside of the 3D eye-box, (b) the observed images (Visualization 1), and (c) the intensity profiles of the captured images for the full-white pattern (first column of Fig. 11(b)) along line A–A’. Source image © 2008, Blender Foundation / www.bigbuckbunny.org.


For a more quantitative analysis of the observed luminance distribution, the intensity profiles of the images captured with the full-white pattern, along the lines A–A’, are provided in Fig. 11(c). For Case (IV), the observed intensity of the rightmost marginal field in the virtual images decreased to less than half of that of the other cases. In the images captured at Cases (I), (II), and (III) within the 3D eye-box, the entire range of field angles was clearly observable without a severe decrease of the intensity or vignetting. Visualization 1 shows the changes in the observed images when the camera moved only along the z-axis (Δz within the range of –10 mm to 0 mm), for the verification of the 3D eye-box volume evaluated by the proposed method.

4. Discussion

In this paper, we have presented the analysis, the simulations, and the experiments based on the luminance falloff criterion for the marginal fields, which is one of the most widely used methods for determining the eye-box [6,12]. However, since the LFLD contains the spatio-angular information of the light rays emitted from the NED, it can also be applied to other criteria for determining the eye-box. In this section, we briefly discuss the principles of applying the LFLD to evaluate the eye-box with other criteria, such as the center luminance falloff criterion [9], the field-nonuniformity criterion for a few displayed pixels [3,8], and the contrast falloff criterion using the Michelson contrast of the marginal fields [22].

In the center luminance falloff criterion, the change of the center luminance is analyzed, so LΔz(x, y, θx = 0°, θy = 0°) can be used. In the post-analysis at each observation distance Δz, the eye-box is determined as the spatial range of (x, y) satisfying LΔz(x, y, θx = 0°, θy = 0°) ≥ lt.
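As a concrete illustration, the center luminance falloff check reduces to thresholding a single on-axis slice of the LFLD. The following sketch assumes the LFLD is stored as a 4D NumPy array L[ix, iy, iθx, iθy] with known angular sampling grids; the array layout, function name, and grids are our illustrative assumptions, not part of the original measurement pipeline.

```python
import numpy as np

def center_luminance_eyebox(lfld, theta_x, theta_y, l_t):
    """Eye-box mask from the center luminance falloff criterion.

    lfld    : 4D array L[ix, iy, itx, ity] of luminance samples at one dz
    theta_x : 1D array of sampled field angles along x (degrees)
    theta_y : 1D array of sampled field angles along y (degrees)
    l_t     : luminance threshold
    Returns a boolean (Nx, Ny) mask: True where L(x, y, 0deg, 0deg) >= l_t.
    """
    # Pick the angular samples closest to the on-axis direction (0 deg).
    itx0 = int(np.argmin(np.abs(theta_x)))
    ity0 = int(np.argmin(np.abs(theta_y)))
    center = lfld[:, :, itx0, ity0]
    return center >= l_t
```

With a measured LFLD propagated to a given Δz, the eye-box at that distance would then be the spatial region where the returned mask is True.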

If one uses the field-nonuniformity criterion for a few selected virtual pixels as in Refs. [3,8], the eye-box is computed by measuring the Michelson contrast (c) among the virtual pixels positioned at the center and edges of the virtual image. The Michelson contrast is expressed as Eq. (7).

$$c = \frac{{{I_{\max }} - {I_{\min }}}}{{{I_{\max }} + {I_{\min }}}},$$
where Imax and Imin represent the maximum and minimum luminances of the selected pixels. Hence, the contrast c at a position (x, y) is calculated using the maximum and minimum in the set of LΔz(x, y, θx = α, θy = β), where (α, β) are the predefined field angles corresponding to the positions of the virtual pixels.
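This field-nonuniformity computation can be sketched as follows, again under the assumption of a 4D NumPy layout L[ix, iy, iθx, iθy] of our own choosing; the function name and field-angle list are illustrative.

```python
import numpy as np

def field_nonuniformity(lfld, theta_x, theta_y, fields):
    """Michelson contrast (Eq. 7) among a few selected virtual pixels.

    lfld   : 4D array L[ix, iy, itx, ity]
    fields : list of (alpha, beta) field angles (degrees) of the selected
             virtual pixels, e.g. the center and corners of the virtual image
    Returns a (Nx, Ny) map of c = (Imax - Imin) / (Imax + Imin).
    """
    samples = []
    for alpha, beta in fields:
        # Nearest sampled field angle to the requested (alpha, beta).
        ia = int(np.argmin(np.abs(theta_x - alpha)))
        ib = int(np.argmin(np.abs(theta_y - beta)))
        samples.append(lfld[:, :, ia, ib])
    stack = np.stack(samples)                       # (n_fields, Nx, Ny)
    i_max, i_min = stack.max(axis=0), stack.min(axis=0)
    denom = i_max + i_min
    with np.errstate(invalid="ignore", divide="ignore"):
        c = np.where(denom > 0, (i_max - i_min) / denom, 0.0)
    return c
```

Under this criterion a low c means uniform fields, so the eye-box would be the region where c stays below the chosen threshold (0.5 in the simulations of Sec. 4).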

Similarly, in the contrast falloff criterion at the marginal fields as in Ref. [22], Eq. (7) is calculated for each marginal field at a position (x, y). The eye-box is determined as the area in which the contrast c for every marginal field is larger than the threshold value.

In Fig. 12, we provide simulation results for a preliminary validation of applying the LFLD with the field-nonuniformity and contrast falloff criteria. The simulation configuration is similar to that of Sec. 2.2, but we changed the test patterns at the display surface and enhanced the resolution of the virtual detectors to capture the rapid changes in the luminance distribution. As the test patterns, we used a five-point pattern for the field-nonuniformity criterion as in Ref. [3] and a horizontally varying binary pattern for the contrast falloff criterion as in Ref. [22].


Fig. 12. Application of the LFLD to determine the eye-boxes using different criteria: (a) L(x = 0, y = 0, θx, θy) for the field-nonuniformity criterion (adopting the five-point field-nonuniformity method), (b) for the contrast falloff criterion, (c) the visualization of the normalized LFLD in the (x, θx) coordinate and the field-nonuniformity along the x-axis at different observation distances, and (d) the LFLD and the changes in the contrast of the marginal fields along the x-axis. In the post-analysis process, the conditions of Δz = 0 mm, 5 mm, and 10 mm were simulated.


In Figs. 12(a) and 12(b), the visualizations of Li(x = 0, y = 0, θx, θy) represent the images observed with a small aperture at the eye-point, which is considered the reference location of the measurements for acquiring the optimal performance of the NED [6]. In Figs. 12(c) and 12(d), we present the LFLD visualization in the (x, θx) coordinates and the changes of the eye-boxes according to the observation distance for the field-nonuniformity criterion and the contrast falloff criterion, respectively. To evaluate the eye-boxes at Δz of 0 mm, 5 mm, and 10 mm, the LFLDs were propagated as addressed in Sec. 2. The field-nonuniformity and the contrast of the marginal fields were calculated at each sampling point in the spatial domain using Eq. (7). In each case, the eye-box is presented as the extent satisfying the threshold value of 0.5.
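The propagation step used in this post-analysis follows Eq. (3), which shears the spatial coordinates of the LFLD by Δz·tanθ per angular sample. A minimal nearest-neighbor resampling sketch, assuming the same hypothetical 4D NumPy layout as above (a practical implementation might instead use proper interpolation):

```python
import numpy as np

def propagate_lfld(lfld, x, y, theta_x, theta_y, dz):
    """Shear propagation of an LFLD to observation distance z_i + dz,
    following L_dz(x, y, tx, ty) = L_i(x - dz*tan(tx), y - dz*tan(ty), tx, ty).

    lfld : 4D array L[ix, iy, itx, ity] measured at z = z_i
    x, y : 1D arrays of spatial sample positions (mm)
    theta_x, theta_y : 1D arrays of field angles (degrees)
    dz   : signed propagation distance (mm)
    Samples falling outside the measured aperture are set to 0.
    """
    out = np.zeros_like(lfld)
    for itx, tx in enumerate(theta_x):
        xs = x - dz * np.tan(np.radians(tx))   # sheared source x positions
        for ity, ty in enumerate(theta_y):
            ys = y - dz * np.tan(np.radians(ty))
            # Nearest measured sample for each sheared position.
            ix = np.argmin(np.abs(x[:, None] - xs[None, :]), axis=0)
            iy = np.argmin(np.abs(y[:, None] - ys[None, :]), axis=0)
            valid_x = (xs >= x.min()) & (xs <= x.max())
            valid_y = (ys >= y.min()) & (ys <= y.max())
            patch = lfld[np.ix_(ix, iy)][:, :, itx, ity]
            out[:, :, itx, ity] = patch * valid_x[:, None] * valid_y[None, :]
    return out
```

Any of the criteria above can then be evaluated on the propagated array at each Δz of interest.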

As shown in Figs. 12(c) and 12(d), the eye-boxes became smaller as Δz increased. In Fig. 12(c), a field-nonuniformity of zero means the observed image has a uniform brightness distribution, while a higher value means that part of the analyzed fields became dim or vignetted. Meanwhile, in the contrast falloff criterion, since the contrast is calculated over the small angular range of field angles around the marginal fields, a higher value means better image quality at the given spatial frequency. In the right column of Fig. 12(d), we visualize the minimum contrast among the marginal fields at each spatial position (x). As shown in Fig. 12, the LFLD can be applied to other types of criteria for determining the eye-box of the NED.

5. Conclusion

In this paper, we proposed a new methodology to characterize the 3D eye-box volume of an NED using an LFLD capturing system and a post-analysis. The data acquisition in the proposed method needs to be conducted only at a single observation distance, and the eye-box profiles at arbitrary observation distances can be constructed via the simple post-analysis.

We derived the analysis model for evaluating the 3D eye-box using the LFLD. The theoretical analysis was verified by numerical simulations using Zemax OpticStudio, where we acquired the LFLD and demonstrated the 3D eye-box evaluation process by building a simplified NED system. The observing simulation results at various positions in OpticStudio coincided with the proposed analysis for characterizing the 3D eye-box. An experimental verification using a commercial NED with a targeted FOV of 17.0° × 9.0° was also presented. We acquired the LFLD at 20 mm behind the NED and evaluated the 3D eye-box over a large distance range of 20 mm. The view images actually captured by the camera verified the effectiveness of the evaluated 3D eye-box in the test bed. It is worth noting that the range and interval along the z-axis for assessing the eye-boxes can be further improved, since the LFLD contains the full information of the directions and positions of the light rays emitted from the NED.

Funding

Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea government (MOTIE) (P0014183); Korea Evaluation Institute of Industrial Technology (KEIT) grant funded by the Korea government (MOTIE) (20016869).

Acknowledgments

This work was supported by Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea government (MOTIE) (No. P0014183, Establishment on the Empirical Grounding for AR/VR Device and Service, 50%), and Korea Evaluation Institute of Industrial Technology (KEIT) grant funded by the Korea government (MOTIE) (No. 20016869, Development of light field technology and optical component supporting adaptive focus for AR devices, 50%).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. B. C. Kress, Optical Architectures for Augmented-, Virtual-, and Mixed-Reality Headsets (SPIE Press, 2020).

2. J. Xiong, E.-L. Hsiang, Z. He, T. Zhan, and S.-T. Wu, “Augmented reality and virtual reality displays: emerging technologies and future perspectives,” Light: Sci. Appl. 10, 216 (2021). [CrossRef]  

3. S. A. Cholewiak, Z. Başgöze, O. Cakmakci, D. M. Hoffman, and E. A. Cooper, “A perceptual eyebox for near-eye displays,” Opt. Express 28(25), 38008–38028 (2020). [CrossRef]  

4. K. Oshima, K. Naruse, K. Tsurutani, J. Iwai, T. Totani, S. Uehara, S. Ouchi, Y. Shibahara, H. Takenaka, Y. Sato, T. Kozakai, M. Kurashige, and H. Wakemoto, “Eyewear Display Measurement Method: Entrance Pupil Size Dependence in Measurement Equipment,” in SID Symposium Digest of Technical Papers, vol. 47 (Wiley Online Library, 2016), 1064–1067.

5. D. A. Fellowes and R. S. Draper, “Near to eye display test station,” Proc. SPIE 5079, 51–62 (2003). [CrossRef]  

6. IEC 63145-20-10 Ed. 1.0, Eyewear display – Part 20-10: Fundamental measurement methods – Optical properties (International Electrotechnical Commission, 2019).

7. J. Ratcliff, A. Supikov, S. Alfaro, and R. Azuma, “ThinVR: Heterogeneous microlens arrays for compact, 180 degree FOV VR near-eye displays,” IEEE Trans. Visual. Comput. Graphics 26, 1981–1990 (2020).

8. O. Cakmakci, D. M. Hoffman, and N. Balram, “3D eyebox in augmented and virtual reality optics,” in SID Symposium Digest of Technical Papers, vol. 50 (Wiley Online Library, 2019), 438–441.

9. O. Cakmakci, Y. Qin, P. Bosel, and G. Wetzstein, “Holographic pancake optics for thin and lightweight optical see-through augmented reality,” Opt. Express 29(22), 35206–35215 (2021). [CrossRef]  

10. R. L. Austin, B. S. Denning, B. C. Drews, V. B. Fedoriouk, and R. C. Calpito, “Qualified viewing space determination of near-eye and head-up displays,” J. Soc. Inf. Disp. 26(9), 567–575 (2018). [CrossRef]  

11. H. Hong, “Fast measurement of eyebox and field of view of virtual and augmented reality devices using the ray trajectories extending from positions on virtual image,” Curr. Opt. Photonics 4, 336–344 (2020). [CrossRef]  

12. R. Varshneya, R. S. Draper, J. Penczek, B. M. Pixton, T. F. Nicholas, and P. A. Boynton, “Standardizing Fundamental Criteria for Near Eye Display Optical Measurements: Determining Eye Point Position,” in SID Symposium Digest of Technical Papers, vol. 49 (Wiley Online Library, 2018), 961–964.

13. J. Park, “Efficient calculation scheme for high pixel resolution non-hogel-based computer generated hologram from light field,” Opt. Express 28(5), 6663–6683 (2020). [CrossRef]  

14. G. Wu, B. Masia, A. Jarabo, Y. Zhang, L. Wang, Q. Dai, T. Chai, and Y. Liu, “Light field image processing: an overview,” IEEE J. Sel. Top. Signal Process. 11(7), 926–954 (2017). [CrossRef]  

15. A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26(3), 69 (2007). [CrossRef]  

16. W. Fu, X. Tong, C. Shan, S. Zhu, and B. Chen, “Implementing light field image refocusing algorithm,” in Proceedings of the 2nd International Conference on Opto-Electronics and Applied Optics (2015), 1–8.

17. Y. Jeong, J. Kim, J. Yeom, C. -K. Lee, and B. Lee, “Real-time depth controllable integral imaging pickup and reconstruction method with a light field camera,” Appl. Opt. 54(35), 10333–10341 (2015). [CrossRef]  

18. X. Lin, J. Wu, G. Zheng, and Q. Dai, “Camera array based light field microscopy,” Biomed. Opt. Express 6(9), 3179–3189 (2015). [CrossRef]  

19. H. Nanda and R. Cutler, “Practical calibrations for a real-time digital omnidirectional camera,” in Technical Sketches, Computer Vision and Pattern Recognition (2001).

20. J. C. Yang, M. Everett, C. Buehler, and L. McMillan, “A real-time distributed light field camera,” in Proceedings of the 13th Eurographics Workshop on Rendering (Eurographics Association, 2002), 77–86.

21. Big Buck Bunny, http://peach.blender.org/.

22. IEC 63145-20-20 Ed. 1.0, Eyewear display – Part 20-20: Fundamental measurement methods – Image quality (International Electrotechnical Commission, 2019).

Supplementary Material (1)

Visualization 1: Observed images according to positions inside and outside of the 3D eye-box (Fig. 9(b)).


Figures (12)

Fig. 1. Schematics of evaluating the eye-box of the NED: a conventional approach for assessing (a) the 2D eye-box at the eye-relief distance, (b) the 3D eye-box using the repetitive evaluations at the multiple observation distances, and (c) the 3D eye-box evaluation using the LFLD acquisition at the single distance (the proposed method).

Fig. 2. Demonstration of the LFLD-based evaluation for the eye-box of the NED: (a) the schematic diagram for forming the eye-box in the simplified NED configuration, (b) the visualization of the LFLD at the eye-relief, (c) after, and (d) before the eye-relief distance.

Fig. 3. Schematic diagram of the LFLD acquisition system based on the MLA and the detector.

Fig. 4. Numerical simulation using Zemax OpticStudio: (a) the configuration of the non-sequential ray-tracing simulation, (b) the visualization of the simulated LFLD at z = zi = 27.5 mm. Note that we enhanced the brightness of the magnified pictures in the sky-blue (II) and green (III) insets for readability.

Fig. 5. Simulation results for evaluating the eye-box from the LFLD: (a) Li(x, θx), (b) Li(x, θx = ±6.5°), and LΔz(x, θx) when Δz is (c) –10 mm and (d) 10 mm. We normalized the LFLD by the maximum value of the acquired luminance distribution.

Fig. 6. The evaluated 3D eye-box and the observing simulation results: (a) the 3D eye-box evaluated from the acquired LFLD, (b) the eye-box representation at z = zi and z = zi + 10 mm (yellow region: the evaluated eye-box at each observation distance), (c) the observing simulation results at different conditions inside and outside of the 3D eye-box, and (d) the intensity profiles along A–A’.

Fig. 7. Simulation results comparing the change of the maximum LFLD values and the calculated eye-boxes according to the common transmittance conditions of the LMD-array.

Fig. 8. Experimental setups and captured LFLD: (a) a picture of the experimental setup for measuring the luminance profile of the NED product, and (b) the visualization of the LFLD after the upsampling process (inset: originally acquired LFLD without the upsampling process).

Fig. 9. The visualization of the normalized LFLD in the (x, θx) and (y, θy) coordinates when Δz is (a) –10 mm, (b) 0 mm, and (c) 10 mm.

Fig. 10. The evaluated results for the 3D eye-box volume using the LFLD: (a) the constructed 3D eye-box, and (b) the eye-boxes at two different observation distances (Δz = –10 mm and 0 mm).

Tables (1)


Table 1. Specifications for the LFLD acquisition simulation

Equations (7)


$$L(x, - \frac{{{\Omega _x}}}{2} \le {\theta _x} \le \frac{{{\Omega _x}}}{2}) \ge {l_t},$$
$${L_{\Delta z}}(x,y,{\Omega _{x,\min }} \le {\theta _x} \le {\Omega _{x,\max }},{\Omega _{y,\min }} \le {\theta _y} \le {\Omega _{y,\max }}) \ge {l_t},$$
$${L_{\Delta z}}(x,y,{\theta _x},{\theta _y}) = {L_i}(x - \Delta z\tan {\theta _x},y - \Delta z\tan {\theta _y},{\theta _x},{\theta _y}).$$
$$\frac{{\Delta x}}{{{f_{LF}}}} \ge 2\tan \left( {\frac{{{\Omega _{x,\max }} - {\Omega _{x,\min }}}}{2}} \right),$$
$${N_x} \times \Delta x \ge {B_{x,\max }},$$
$$L_{high}(x,y,\theta_x,\theta_y) = \begin{cases} L_{low}(x_{low},y_{low},\theta_x,\theta_y), & \text{for } x = x_{low},\ y = y_{low},\\ \dfrac{1}{2}\sum\limits_{j=1}^{2} L_{low}\Big(x_{low}+(-1)^{j}\dfrac{\Delta x}{2},\, y_{low},\, \theta_x,\theta_y\Big), & \text{for } x \ne x_{low},\ y = y_{low},\\ \dfrac{1}{2}\sum\limits_{k=1}^{2} L_{low}\Big(x_{low},\, y_{low}+(-1)^{k}\dfrac{\Delta y}{2},\, \theta_x,\theta_y\Big), & \text{for } x = x_{low},\ y \ne y_{low},\\ \dfrac{1}{4}\sum\limits_{k=1}^{2}\sum\limits_{j=1}^{2} L_{low}\Big(x_{low}+(-1)^{j}\dfrac{\Delta x}{2},\, y_{low}+(-1)^{k}\dfrac{\Delta y}{2},\, \theta_x,\theta_y\Big), & \text{for } x \ne x_{low},\ y \ne y_{low}. \end{cases}$$
$$c = \frac{{{I_{\max }} - {I_{\min }}}}{{{I_{\max }} + {I_{\min }}}},$$