
Aberration-corrected full-color holographic augmented reality near-eye display using a Pancharatnam-Berry phase lens


Abstract

We present a full-color holographic augmented reality near-eye display using a Pancharatnam-Berry phase lens (PBP lens) together with its aberration correction method. Monochromatic and chromatic aberrations of the PBP lens are corrected by utilizing the complex wavefront modulation of the holographic display. A hologram calculation method incorporating the phase profile of the PBP lens is proposed to correct the monochromatic aberration. Moreover, the chromatic aberration is corrected by warping the image using a mapping function obtained from ray tracing. The proposed system is demonstrated with a benchtop prototype, and the experimental results show that it offers full-color holographic images with a 50° field of view and without optical aberrations.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Recently, augmented reality (AR) devices have been gaining attention in both research and industry. Consequently, the development of a compact near-eye display has become an important issue for the commercialization of AR devices. Various types of near-eye displays using free-form lenses [1,2], diffractive optical elements [3,4], holographic optical elements (HOEs) [5,6], index-matched anisotropic crystal lenses [7], and meta-lenses [8] have been proposed to achieve a wide field of view (FOV) with a small form factor.

The Pancharatnam-Berry phase (PBP) optical element is an emerging optical component with advantages for the compact near-eye display. The PBP optical element is a planar component that modulates the wavefront with liquid-crystal directors [9,10]. Since the desired phase profile is generated by spatially varying the directions of the liquid-crystal directors, a lens with a high numerical aperture (NA) can be fabricated while maintaining a planar structure. Furthermore, the PBP optical element shows different optical effects depending on the polarization state of the incident light. Owing to its polarization selectivity, high numerical aperture, and planar structure, it has been used in near-eye displays in versatile roles such as couplers, deflectors, and lenses [11–16]. Moreover, advances in PBP optical elements have enabled the implementation of complex structures [17–19]. However, chromatic aberration due to the wavelength dependency of the PBP optical element hinders the realization of a full-color near-eye display.

In our previous research, a PBP eyepiece (PBPE) for the AR near-eye display was proposed, and additional HOEs were adopted to compensate for the chromatic aberration [14,15]. In [14], stacked diffuser-HOEs with physical gaps were used as transparent screens, but the gaps between the HOEs made the system bulky. To reduce the form factor, a chromatic HOE that compensates for the focal length of PBPE was adopted in a Maxwellian-view display in [15]. However, a monochromatic aberration that degrades the resolution at the outermost part of the image was observed. Furthermore, neither system provided focus cues to the user. Therefore, a method that corrects both chromatic and monochromatic aberrations without additional optical components while providing focus cues should be devised.

In this paper, we present a holographic AR near-eye display using PBPE and a methodology to correct the optical aberrations of PBPE. The complex wavefront modulation of the holographic display makes it possible to provide focus cues while correcting the aberrations through proper computer-generated hologram (CGH) calculation [20,21]. Here, the monochromatic aberration is corrected with a proposed point-spread function (PSF) based CGH calculation method. In order to incorporate the phase profile of PBPE while avoiding the aliasing issue, the holographic image is generated by superposing PSFs during the backpropagation. The lateral chromatic aberration is corrected by warping the image with a mapping function obtained from ray tracing, and the axial chromatic aberration is corrected by adjusting the depth of the holographic image according to the wavelength. The presented display system is implemented as a benchtop prototype. Experimental results show that the proposed CGH calculation method corrects the observed aberrations of full-color images, and that the display system provides holographic AR images with a 50° FOV and focus cues.

2. Background

2.1 Pancharatnam-Berry phase lens eyepiece

Unlike a conventional spherical lens that modulates the wavefront through optical path differences, the phase profile of a PBP lens is determined by the direction of spatially varying liquid-crystal directors [9,10]. Therefore, the PBP lens has a thin planar structure regardless of its focal length, which gives it an advantage over the conventional spherical lens as an eyepiece for the compact near-eye display. Figure 1(a) shows the operation principle of the PBP lens. The PBP lens functions as a convex lens for incident right-handed circularly polarized (RCP) light, while it functions as a concave lens of the same focal length for left-handed circularly polarized (LCP) light. In both cases, the handedness of the incident polarization is reversed: RCP light becomes LCP, and LCP light becomes RCP.

In our previous research, PBPE was proposed as a transmissive AR eyepiece [14]. PBPE consists of two PBP lenses, a linear polarizer (LP), and a quarter-wave plate (QWP), as shown in Fig. 1(b). The LP and QWP between the two PBP lenses make the output light of the first PBP lens RCP regardless of the incident polarization state. Therefore, RCP light passes through two convex lenses, while LCP light passes through a convex lens and a concave lens with the same focal length $f$. The infinitesimally small gap between the two stacked PBP lenses makes PBPE act as a convex lens with an effective focal length of $f/2$ for RCP light. On the contrary, PBPE becomes a transparent plate for LCP light, which undergoes two conjugate lenses. Hence, an AR eyepiece is obtained by providing the virtual image with RCP and the real-world image with LCP. Furthermore, since all the optical components of PBPE have small thicknesses, PBPE maintains a thin planar structure. However, due to the linear polarizer inside PBPE, the efficiency for unpolarized incident light is 25%.

Fig. 1. Operation principle of (a) a single PBP lens and (b) PBPE. The schematic diagrams are not to scale, and there is no gap between optical components of PBPE. LPS indicates linear polarization state.

One of the major issues in implementing the PBP lens is chromatic aberration. Since the phase profile of the PBP lens is identical over a wide spectral range, its focal length changes with the wavelength. The relationship between the design wavelength $\lambda _d$, the design focal length $f_d$, the wavelength of the incident light $\lambda _i$, and the corresponding focal length $f_i$ is expressed as Eq. (1) [8].

$${\phi _{\textrm{PBP}}}\left( r \right) = \frac{{2\pi }}{{{\lambda _{d}}}}\left( {{f_d} - \sqrt {{r^2} + {f_d}^2} } \right) = \frac{{2\pi }}{{{\lambda _{i}}}}\left( {{f_i} - \sqrt {{r^2} + {f_i}^2} } \right),$$
where ${\phi _{{\textrm {PBP}}}}\left ( r \right )$ is the phase profile of the PBP lens. Since PBPE consists of two stacked PBP lenses, the phase profile of PBPE becomes ${\phi _{{\textrm {PBPE}}}}\left ( r \right )= 2{\phi _{{\textrm {PBP}}}}\left ( r \right )$. With the paraxial approximation $r\ll {f_d}$, we have $f - \sqrt {r^2 + f^2} \approx -r^2/2f$, so Eq. (1) reduces to ${f_i} = {\lambda _d}{f_d}/{\lambda _i}$ for the focal length of the PBP lens at wavelength $\lambda _i$. Different focal lengths for different wavelengths induce axial and lateral chromatic aberrations: the axial chromatic aberration floats the images to different depths, and the lateral chromatic aberration magnifies the images to different sizes according to the wavelength. In this research, the chromatic aberration is corrected without additional optical components using CGHs calculated with the proposed aberration correction method. The effect of the chromatic aberration and its correction method will be discussed in detail in section 3.2.
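For illustration, the short Python sketch below evaluates this paraxial relation with the design values of the prototype described later in section 4.1; it reproduces the PBPE focal lengths quoted there.

```python
# Illustrative sketch: wavelength-dependent focal length of the PBP lens from
# the paraxial relation f_i = lambda_d * f_d / lambda_i. Design values follow
# the prototype of section 4.1 (550 nm, 45 mm single-lens focal length).

def pbp_focal_length(wavelength, design_wavelength=550e-9, design_focal=45e-3):
    """Paraxial focal length of a single PBP lens at `wavelength` (meters)."""
    return design_wavelength * design_focal / wavelength

for lam in (660e-9, 532e-9, 457e-9):
    f_pbp = pbp_focal_length(lam)
    f_pbpe = f_pbp / 2.0  # two stacked PBP lenses halve the focal length
    print(f"{lam*1e9:.0f} nm: f_PBP = {f_pbp*1e3:.1f} mm, f_PBPE = {f_pbpe*1e3:.1f} mm")
```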

2.2 Holographic near-eye displays

The holographic display reconstructs the wavefront of a real object with a spatial light modulator (SLM), providing focus cues to the user [20]. The holographic near-eye display enlarges the FOV by adopting an eyepiece: a short focal length widens the FOV by magnifying the holographic image. However, the adoption of the eyepiece limits the eye-box of the display system. Due to the pixelated structure of the SLM, the diffraction angle $\theta _d$ is determined by the pixel pitch $p$ of the SLM as $\theta _d=2{\sin ^{-1}\left ({\lambda }/{2p}\right )}$. The limited diffraction angle restricts the eye-box to a region around the focal plane of the eyepiece, as shown in Fig. 2. The width of the eye-box $l_f$ and the FOV at the focal plane $\theta _{\textrm {FOV}}$ are expressed as

$${l_f} = \frac{{\lambda f}}{p},$$
$${\theta _{\textrm{FOV}}} = 2{\tan ^{ - 1}}\left( {\frac{{{l_{SLM}}}}{{2f}}} \right),$$
where $\lambda$ is the wavelength of the incident light, $f$ is the focal length of the eyepiece, and $l_{SLM}$ is the width of the SLM [22]. In order to obtain a wide FOV, the eyepiece should have a short focal length and a large aperture, i.e., a high NA.
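The sketch below evaluates Eqs. (2) and (3) for illustrative SLM parameters. Note that it describes a bare SLM: it ignores the 1.7× 4-$f$ magnification used in the prototype of section 4.1, so its numbers do not match the implemented system.

```python
import numpy as np

# Minimal sketch of Eqs. (2)-(3): eye-box width and FOV at the focal plane.
# Bare-SLM example values; the prototype additionally magnifies the SLM 1.7x.
lam = 532e-9          # wavelength (m)
p = 3.74e-6           # SLM pixel pitch (m)
f = 23.3e-3           # eyepiece focal length at 532 nm (m)
l_slm = 4096 * p      # SLM width (m)

l_f = lam * f / p                              # eye-box width, Eq. (2)
theta_fov = 2 * np.arctan(l_slm / (2 * f))     # FOV at the focal plane, Eq. (3)
print(f"eye-box = {l_f*1e3:.2f} mm, FOV = {np.degrees(theta_fov):.1f} deg")
```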

The holographic display can correct the aberration of the optical system by providing a proper CGH. A CGH for aberration correction is generally obtained with a feedback-based method, which searches for the CGH that provides a sharp PSF to the user [5,6,23,24]. Since the feedback-based method repeats the process until a sharp PSF is obtained, it guarantees an aberration-corrected holographic image. However, it requires a time-consuming and laborious process, and the termination of the iteration loop is determined by the user or by a fit function, which can be inaccurate and subjective. On the contrary, aberration pre-compensation based on knowledge of the optical system provides an efficient alternative. In previous studies, the aberration of an HOE in the holographic display was pre-compensated with a ray-tracing method and a wave-vector-based method [25,26]. However, only astigmatism was considered, and the other aberrations of the eyepiece were ignored. Therefore, a CGH calculation method that incorporates the phase profile of the eyepiece should be devised.

Fig. 2. Schematic diagram of the holographic near-eye display.

3. Proposed CGH calculation methods for aberration correction

3.1 Monochromatic aberration correction

In our previous research on the near-eye display using PBPE, the resolution of the image was lower at the outermost part than at the center [15]. This monochromatic aberration results from the non-ideal imaging of PBPE. Here, the monochromatic aberration is corrected with a CGH calculation method that incorporates the phase profile of PBPE. A straightforward approach is to propagate the target virtual image backward to the SLM while accounting for the optical system: the complex amplitude is calculated by alternating free-space propagation with multiplication by the phase of each optical component. However, it is difficult to apply wave optics to the near-eye display because of the aliasing issue caused by the high NA of the eyepiece. To express the phase profile of the lens in a wave-optics simulation, the sampling condition that avoids aliasing must be satisfied. For a lens, this condition is $\Delta x \le \frac {\lambda }{2\textrm{NA}}$, where $\Delta x$ is the sampling distance and NA is the numerical aperture of the lens [27]. The short sampling distance required by the high NA of the eyepiece therefore makes the memory size exceed the computation limit during the propagation.

In order to avoid the aliasing issue, we propose a PSF-based aberration correction method. In this method, a point on the virtual image plane is backpropagated to obtain an analytical expression of the PSF at the input image plane, and the superposition of PSFs generates the aberration-corrected holographic image. In a single-lens imaging system, the PSF is expressed as a Fourier transform of the pupil function under the paraxial approximation [28]. In this expression, the lens phase does not appear explicitly; only the phase term that induces aberration remains in the pupil function. Based on this idea, we rewrite the phase profile of the eyepiece $\phi _{\textrm {PBPE}}$ as $W_{\textrm {lens}}+W_{\textrm {aberr}}$, where $W_{\textrm {lens}}$ is the ideal single-lens imaging term and $W_{\textrm {aberr}}$ is the aberration term. In the analytical expression of the PSF, the high-NA term $W_{\textrm {lens}}$ vanishes, and only the slowly varying term $W_{\textrm {aberr}}$ remains in the efficient pupil function of PBPE. Therefore, the aliasing issue can be avoided by finding the efficient pupil function of the holographic near-eye display system.

Figure 3 shows the schematic diagram of the single-lens imaging system in the holographic near-eye display. For convenience of explanation, the plane where the input image exists is called the object plane, and the figure is depicted for the case where the imaging system forms a real image. The SLM generates the holographic image at the object plane, and the virtual image is floated at the image plane by the eyepiece located at the lens plane. $\left ( {{x_o},{y_o}} \right )$, $\left ( {{x_l},{y_l}} \right )$, and $\left ( {{x_i},{y_i}} \right )$ are the coordinates of the ray at the object, lens, and image planes obtained from the ray-tracing method, which will be discussed in section 3.2. With the paraxial approximation, the horizontal and vertical diffraction angles of the SLM are limited to ${\theta _x} = \frac {\lambda }{{{p_x}}}$ and ${\theta _y} = \frac {\lambda }{{{p_y}}}$, where ${p_x},{p_y}$ are the pixel pitches along the $x$ and $y$ directions, respectively. Accordingly, the efficient pupil function is limited to a rectangular region of width ${l_x} = {z_o}{\theta _x}$ and height ${l_y} = {z_o}{\theta _y}$ centered around $\left ( {{x_l},{y_l}} \right )$, generating different PSFs depending on the lateral location of the object point. Therefore, the aberration-corrected image is obtained by point-by-point superposition of the spatially varying PSFs.

Fig. 3. Schematic diagram of the backpropagation process of the lens imaging system.

In order to obtain the PSF at $\left ( {{x_o},{y_o}} \right )$, a spherical wave originating from $\left ( {{x_i},{y_i}} \right )$ is backpropagated to the object plane via $\left ( {{x_l},{y_l}} \right )$. The complex amplitude at the lens plane ${U_l}\left ( {x,y} \right )$ becomes the product of the backpropagated spherical wave ${\hat {U}_l}\left ( {x,y} \right )$ and the conjugate of the PBPE phase, which is expressed as

$${\hat{U}_l}\left( {x,y} \right) = \frac{1}{{{r_i}\left( {x,y} \right)}}\exp \left( { - \frac{{{z_i}}}{{\left| {{z_i}} \right|}}jk{r_i}\left( {x,y} \right)} \right),$$
$${U_l}\left( {x,y} \right) = \exp \left( { - j{\phi _{{\textrm{PBPE}}}}\left( {x,y} \right)} \right)\operatorname{rect} \left( {\frac{{x - {x_l}}}{{{l_x}}},\frac{{y - {y_l}}}{{{l_y}}}} \right){\hat{U}_l}\left( {x,y} \right),$$
where ${r_i}\left ( {x,y} \right ) = \sqrt {z_i^2 + {{\left ( {x - {x_i}} \right )}^2} + {{\left ( {y - {y_i}} \right )}^2}}$ is the distance between an arbitrary point $\left (x,y\right )$ at the lens plane and $\left ( {{x_i},{y_i}} \right )$.

The complex amplitude at the object plane ${U_o}\left ( {x,y} \right )$ is obtained by convolving ${U_l}\left ( {x,y} \right )$ with the conjugate of the free-space propagation impulse response $h\left ( {x,y,z} \right )$. From the Rayleigh-Sommerfeld solution, the impulse response of free-space propagation and ${U_o}\left ( {x,y} \right )$ become

$$h\left( {x,y,z} \right) = \frac{1}{{j\lambda }}\frac{{\exp \left( {jkr} \right)}}{r}\frac{z}{r},$$
$$\begin{aligned}{U_o}\left( {x,y} \right) = &\iint {\exp \left( { - j{\phi _{{\textrm{PBPE}}}}\left( {x^\prime,y^\prime} \right)} \right)\operatorname{rect} \left( {\frac{{{x^\prime } - {x_l}}}{{{l_x}}},\frac{{{y^\prime } - {y_l}}}{{{l_y}}}} \right){\hat{U}_l}\left( {{x^\prime },{y^\prime }} \right)}\\ &{h^*}\left( {x - {x^\prime },y - {y^\prime },{z_o}} \right)d{x^\prime }d{y^\prime }, \end{aligned}$$
where $r = \sqrt {{x^2} + {y^2} + {z^2}}$ [28]. To rewrite Eq. (7) as a Fourier transform of the efficient pupil function, the coordinates of the object plane and the lens plane are shifted. The backpropagated field at the lens plane is valid only in the region centered around $\left ( {{x_l},{y_l}} \right )$, and the PSF at the object plane has a valid value only near $\left ( {{x_o},{y_o}} \right )$. Therefore, the origins of the object plane and the lens plane are shifted to $\left ( {{x_o},{y_o}} \right )$ and $\left ( {{x_l},{y_l}} \right )$ by substituting $\left ( {\tilde x,\tilde y} \right ) = \left ( {x - {x_o},y - {y_o}} \right )$ and $\left ( {{{\tilde x}^\prime },{{\tilde y}^\prime }} \right ) = \left ( {{x^\prime } - {x_l},{y^\prime } - {y_l}} \right )$, respectively. Also, $\left ( {{x_o},{y_o}} \right ) = \left ( {{x_l},{y_l}} \right )$ is used since the light from the SLM is parallel to the z-axis in the proposed system. By assuming $\tilde x,\tilde x^{\prime} ,\tilde y,\tilde y^{\prime} \ll {z_o}$, Eq. (7) is rewritten with a Taylor series approximation in which third- and higher-order terms are ignored.
$${U_o}\left( {\tilde x,\tilde y} \right) = \frac{1}{{j\lambda {z_o}}}\exp \left( { - j\frac{k}{{2{z_o}}}\left( {{{\tilde x}^2} + {{\tilde y}^2}} \right)} \right){\left. {{\mathcal{F}^{ - 1}}\left[ {{P_{\textrm{PBPE}}}\left( {{{\tilde x}^\prime },{{\tilde y}^\prime }} \right)} \right]} \right|_{{\nu _x} = \frac{{\tilde x}}{{\lambda {z_o}}},{\nu _y} = \frac{{\tilde y}}{{\lambda {z_o}}}}},$$
$$\begin{aligned} {P_{\textrm{PBPE}}}\left( {{{\tilde x}^\prime },{{\tilde y}^\prime }} \right) = &\exp \left( { - j{\phi _{{\textrm{PBPE}}}}\left( {{{\tilde x}^\prime } + {x_l},{{\tilde y}^\prime } + {y_l}} \right)} \right)\operatorname{rect} \left( {\frac{{{{\tilde x}^\prime }}}{{{l_x}}},\frac{{{{\tilde y}^\prime }}}{{{l_y}}}} \right){\hat{U}_l}\left( {{{\tilde x}^\prime } + {x_l},{{\tilde y}^\prime } + {y_l}} \right)\\ &\times\exp \left( { - j\frac{k}{{2{z_o}}}\left( {{{\tilde x}^{\prime2} } + {{\tilde y}^{\prime2}}} \right)} \right). \end{aligned}$$
As a result, the PSF at the object plane is expressed as ${U_o}\left ( {\tilde x,\tilde y} \right )$ with the efficient pupil function ${P_{\textrm {PBPE}}}$ and an additional parabolic phase profile.
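As an illustration of how Eqs. (4)–(9) can be evaluated numerically, the following Python sketch samples the efficient pupil function on a grid inside one rectangular window and obtains the PSF with an inverse FFT. The geometry (object distance, window center, image-point location, and the sign convention $z_i<0$ for a virtual image, following the factor $z_i/|z_i|$ in Eq. (4)) is our assumption for the example, not the exact prototype configuration.

```python
import numpy as np

# Hedged numerical sketch of Eqs. (4) and (8)-(9): the PSF around (x_o, y_o)
# is the inverse Fourier transform of the effective pupil function P_PBPE.
# All geometry below is illustrative; z_i < 0 encodes a virtual image.
lam, lam_d, f_d = 532e-9, 550e-9, 45e-3   # wavelength, PBP design values
k = 2*np.pi/lam
z_o, z_i = 20e-3, -142.8e-3               # object / (virtual) image distance
xl, yl = 8e-3, 0.0                        # window center = (x_o, y_o)
xi, yi = 59.4e-3, 0.0                     # image point via the Eq. (13) mapping
lx = ly = 1.6e-3                          # window size, l_x ~ z_o * lambda / p

n = 512
u = np.linspace(-lx/2, lx/2, n)
Xp, Yp = np.meshgrid(u, u)                # shifted lens coordinates (x~', y~')

def phi_pbpe(x, y):                       # Eq. (1) with phi_PBPE = 2*phi_PBP
    return (4*np.pi/lam_d)*(f_d - np.sqrt(x**2 + y**2 + f_d**2))

r_i = np.sqrt(z_i**2 + (Xp + xl - xi)**2 + (Yp + yl - yi)**2)
U_hat = np.exp(-np.sign(z_i)*1j*k*r_i)/r_i                      # Eq. (4)
P = (np.exp(-1j*phi_pbpe(Xp + xl, Yp + yl)) * U_hat
     * np.exp(-1j*k/(2*z_o)*(Xp**2 + Yp**2)))                   # Eq. (9)

# The rect window of Eq. (10) is implicit in the grid extent.
psf = np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(P))))**2
psf /= psf.max()          # normalized PSF sampled around (x_o, y_o), Eq. (8)
```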

In order to verify that the proposed method avoids the aliasing issue, the sampling distance for the wavefront of the efficient pupil function $P_{\textrm {PBPE}}$ is calculated. From Eq. (9), the wavefront of $P_{\textrm {PBPE}}$ is expressed as $W_{\textrm {aberr}}$ in Eq. (10) by separating the rectangular function from the phase terms. In addition, the sampling condition for an arbitrary wavefront $W\left (x,y\right )$ is expressed as Eq. (11), where $\Delta x, \Delta y$ are the horizontal and vertical sampling distances, respectively [27].

$${P_{{\textrm{PBPE}}}}\left( {{{\tilde x}^\prime },{{\tilde y}^\prime }} \right) = \operatorname{rect} \left( {\frac{{{{\tilde x}^\prime }}}{{{l_x}}},\frac{{{{\tilde y}^\prime }}}{{{l_y}}}} \right)\exp \left( {j{W_{\textrm{aberr}}}\left( {{{\tilde x}^\prime },{{\tilde y}^\prime }} \right)} \right),$$
$$\Delta x{\left| {\frac{{\partial W\left( {x,y} \right)}}{{\partial x}}} \right|_{\max }} \leq \pi \textrm{, and } \Delta y{\left| {\frac{{\partial W\left( {x,y} \right)}}{{\partial y}}} \right|_{\max }} \leq \pi.$$
The wavefront $W\left (x,y\right )$ in Eq. (11) becomes $\phi _{\textrm {PBPE}}$ when the phase profile of PBPE is expressed directly, resulting in a small sampling distance due to the high NA. On the contrary, $W\left (x,y\right )$ becomes $W_{\textrm {aberr}}$ in the proposed method, which permits a longer sampling distance. Substituting the 532 nm wavelength and a virtual image floated at a depth of 100 mm as an example, the required sampling distance is 0.427 $\mu$m for $\phi _{\textrm {PBPE}}$ but 25.7 $\mu$m for $W_{\textrm {aberr}}$. The relatively long sampling distance of the proposed method requires less memory while avoiding the aliasing issue.
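A one-dimensional version of this check is sketched below: it evaluates the maximum phase gradients of $\phi _{\textrm {PBPE}}$ and of $W_{\textrm {aberr}}$ inside one pupil window and converts them to sampling distances via Eq. (11). The geometry is illustrative and differs from the 100 mm example above, so the printed numbers only approximate the 0.427 $\mu$m and 25.7 $\mu$m quoted there, but they exhibit the same orders-of-magnitude gap.

```python
import numpy as np

# Hedged 1-D check of the sampling condition Eq. (11): compare the phase
# gradient of the full PBPE profile with that of the residual wavefront
# W_aberr (phase of P_PBPE, Eqs. (9)-(10)) inside one 1.6 mm pupil window.
lam, lam_d, f_d = 532e-9, 550e-9, 45e-3
k = 2*np.pi/lam
z_o, z_i = 20e-3, -142.8e-3          # illustrative geometry (virtual image)
xl, xi, lx = 8e-3, 59.4e-3, 1.6e-3   # window center, image point, window size

x = np.linspace(xl - lx/2, xl + lx/2, 4001)
phi_pbpe = (4*np.pi/lam_d)*(f_d - np.sqrt(x**2 + f_d**2))
r_i = np.sqrt(z_i**2 + (x - xi)**2)
# Total phase of P_PBPE per Eq. (9); this windowed residual is W_aberr.
w = -phi_pbpe - np.sign(z_i)*k*r_i - k/(2*z_o)*(x - xl)**2

for name, phase in (("phi_PBPE", phi_pbpe), ("W_aberr", w)):
    g = np.max(np.abs(np.gradient(phase, x)))   # max |dW/dx| in the window
    print(f"{name}: dx <= {np.pi/g*1e6:.2f} um")
```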

Figure 4 shows the wavefront of the efficient pupil function and the phase profile of PBPE at the edge of the lens plane. The width and height of the displayed phase profiles in Figs. 4(a) and 4(b) are $l_x$ and $l_y$, both 1.6 mm. Due to the strict sampling condition, $\phi _{\textrm {PBPE}}$ shows a severe aliasing effect; a shorter sampling distance, which requires more memory, would be needed to fully describe the phase profile of PBPE, as shown in Fig. 4(c). Therefore, by using the efficient pupil function, the proposed method enables CGH calculation that incorporates the phase profile of the eyepiece without the aliasing issue.

Fig. 4. (a) The wavefront $W_{\textrm {aberr}}$ of the efficient pupil function $P_{\textrm {PBPE}}$. (b) The phase profile $\phi _{\textrm {PBPE}}$ of PBPE, and (c) its enlargement.

3.2 Chromatic aberration correction

Due to the chromatic aberration, a single white image separates into three color images with different axial and lateral displacements when it is floated by PBPE, as shown in Fig. 5(a). A simple way to correct the chromatic aberration of PBPE in the holographic display would be to integrate into the CGH an additional lens profile that compensates for the wavelength-dependent focal lengths. However, the maximum optical power of the lens that the SLM can generate is not enough to compensate for the chromatic aberration because of the limited space-bandwidth product of the SLM. Therefore, providing properly warped images at different depths depending on the wavelength is a more suitable method. The depth of the input image plane $z_o$ that places the virtual image at the desired depth $z_i$ for each wavelength is calculated with the thin-lens equation, where the focal length of PBPE at each wavelength is given by ${f_i} = {\lambda _d}{f_d}/{\lambda _i}$.
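A minimal sketch of this axial correction is given below, assuming the usual thin-lens sign convention with the virtual image on the object side (i.e., $1/z_o - 1/z_i = 1/f_i$ with all distances positive); the 1 m target depth is an arbitrary example, not a system value.

```python
# Hedged sketch of the axial chromatic correction: for each wavelength, place
# the input image at the depth z_o that floats the virtual image to a common
# target depth z_i, using the thin-lens equation with the PBPE focal length
# f_i = lambda_d * f_d / lambda_i. Sign convention is our assumption.
lam_d, f_d_pbpe = 550e-9, 22.5e-3       # PBPE design focal length = 45 mm / 2
z_i = 1.0                               # common target virtual image depth (m)

for lam in (660e-9, 532e-9, 457e-9):
    f_i = lam_d * f_d_pbpe / lam
    z_o = f_i * z_i / (f_i + z_i)       # from 1/z_o - 1/z_i = 1/f_i
    print(f"{lam*1e9:.0f} nm: f_PBPE = {f_i*1e3:.2f} mm, z_o = {z_o*1e3:.2f} mm")
```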

Each color image is warped to correct the lateral chromatic aberration [29]. The mapping function between the input image plane and the virtual image plane is calculated by ray tracing with the phase profile of PBPE. Considering the limited space-bandwidth product of the SLM, the rays departing from the SLM are assumed to be parallel to the z-axis. The direction of the rays refracted by PBPE is calculated by regarding PBPE as a diffraction grating, whose grating vector is expressed as

$${\vec K_{{\textrm{PBPE}}}}\left( {x,y} \right) = \left( {\frac{{\partial {\phi _{{\textrm{PBPE}}}}}}{{\partial x}},\frac{{\partial {\phi _{{\textrm{PBPE}}}}}}{{\partial y}}} \right) = - \frac{{4\pi }}{{{\lambda _d}}}\left( {\frac{x}{{\sqrt {{x^2} + {y^2} + f_d^2} }},\frac{y}{{\sqrt {{x^2} + {y^2} + f_d^2} }}} \right).$$
Therefore, the direction of the refracted ray is calculated by adding the grating vector of PBPE and the lateral wavevector of the incident ray [30]. Since the direction of the incident ray is parallel to z-axis in the proposed system, relation between a point at the input image $\left ( {{x_o},{y_o}} \right )$ and the virtual image $\left ( {{x_i} ,{y_i}} \right )$ depicted in Fig. 5(b) is expressed as
$$\left( {{x_i} ,{y_i}} \right) = \left( {{x_o},{y_o}} \right) - {z_i}\frac{{{{\vec K}_{\textrm{PBPE}}}\left( {{x_o},{y_o}} \right)}}{{\sqrt {k_i^2 - {{\left| {{{\vec K}_{\textrm{PBPE}}}\left( {{x_o},{y_o}} \right)} \right|}^2}} }},$$
where ${k_i} = \frac {{2\pi }}{{{\lambda _i}}}$ is the wavenumber of the incident light, and $z_i$ is the distance between PBPE and the virtual image. Image warping using the mapping function of Eq. (13) is applied to obtain identical images for all wavelengths. This method corrects not only the chromatic aberration but also the pincushion distortion because it is based on the phase profile of PBPE.
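The forward mapping of Eqs. (12) and (13) can be written compactly as below; inverting it (for example, with an interpolation-based warp in the spirit of [29]) yields the pre-distorted input image for each color channel. The function and its argument names are our illustrative choices, not code from the original work.

```python
import numpy as np

# Hedged sketch of Eqs. (12)-(13): map an object-plane point (x_o, y_o) to its
# virtual-image position (x_i, y_i); z_i is the PBPE-to-virtual-image distance.
def map_to_image(xo, yo, lam_i, z_i, lam_d=550e-9, f_d=45e-3):
    denom = np.sqrt(xo**2 + yo**2 + f_d**2)
    Kx = -(4*np.pi/lam_d) * xo/denom          # grating vector of PBPE, Eq. (12)
    Ky = -(4*np.pi/lam_d) * yo/denom
    k_i = 2*np.pi/lam_i
    kz = np.sqrt(k_i**2 - Kx**2 - Ky**2)      # axial wavevector of refracted ray
    return xo - z_i*Kx/kz, yo - z_i*Ky/kz     # Eq. (13)

# The same object point lands at different image positions per color; this
# spread is the lateral chromatic aberration that the warp removes.
for lam in (660e-9, 532e-9, 457e-9):
    print(f"{lam*1e9:.0f} nm:", map_to_image(5e-3, 0.0, lam, z_i=1.0))
```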

Fig. 5. Schematic diagram of the image formation by PBPE (a) without and (b) with chromatic aberration correction.

4. Results

4.1 Benchtop prototype of the proposed display system

Figure 6(a) shows the PBPE used in the experiment. The design wavelength of the PBP lens (Edmund Optics, 45 mm focal length polarization directed flat lens) is 550 nm, and the design focal length is 45 mm. Therefore, the focal length of PBPE is 18.8 mm, 23.3 mm, and 27.1 mm at wavelengths of 660 nm, 532 nm, and 457 nm, respectively. The diameter is 24.5 mm, and the thickness is less than 1 mm.

The benchtop prototype of the proposed holographic AR near-eye display using PBPE is shown in Fig. 6(b). Linearly polarized red (Cobolt Flamenco, 660 nm), green (Cobolt Samba, 532 nm), and blue (Cobolt Twist, 457 nm) lasers are used as light sources to provide full-color images. The lasers are collimated and combined with beam splitters before illuminating the SLM (Jasper JD7714) via a beam splitter. The resolution of the SLM is 4096 by 2400, and the pixel pitch is 3.74 $\mu$m in both the horizontal and vertical directions. High-order and DC noise of the holographic image are eliminated by a 4-$f$ system with a spatial filter. The focal lengths of the lenses in the 4-$f$ system are 50 mm and 85 mm, enlarging the image of the SLM 1.7 times to fill the aperture of PBPE. After the 4-$f$ system, the holographic image is combined with the real-world image by a beam splitter, and PBPE is placed after the beam splitter to serve as the eyepiece. In order to modulate the real-world image and the holographic image separately, an LP and a half-wave plate (HWP) are placed before the beam splitter, and a QWP is placed in front of PBPE. The polarization direction of the LP is chosen to make the real-world image LCP after passing through the QWP, and the fast axis of the HWP is chosen to make the holographic image RCP after passing through the QWP. Therefore, only the holographic image is virtually floated, while the real-world image is conveyed to the user without any modulation. The light efficiency of the display system is measured with an optical power meter (Newport, 1830-C) and an optical power detector (Newport, 818-SL); the measured light efficiency of the holographic image is 17.2%, 16.7%, and 16.6% at 660 nm, 532 nm, and 457 nm, respectively.

Fig. 6. The picture of (a) PBPE and (b) the experimental setup for the benchtop prototype of the proposed system.

4.2 Experimental results

The monochromatic aberration correction method is evaluated with a grid-pattern image and the PSF. Figure 7(a) shows the observed holographic images and their enlargements with and without aberration correction. The FOV of Fig. 7(a) is 70°, which is the full FOV of the monochromatic green image. The image at the outermost part clearly becomes as sharp as the central part after the aberration correction. The normalized intensities of the PSFs at the outermost parts of the red, green, and blue images are plotted in Fig. 7(b); the intensity is measured along the cross-section of the PSF indicated by the dashed line in the enlarged PSF pictures. The peak value increases and the width decreases after the aberration correction, confirming that the proposed CGH calculation method corrects the monochromatic aberration of PBPE for all wavelengths. However, since only the blue image utilizes the full FOV, the aberration correction is most evident for the blue image and less so for the red and green images. The FOV of the proposed system will be discussed in section 5.1.

Fig. 7. (a) The holographic images and their enlargements before and after the monochromatic aberration correction. (b) PSF at the outermost part of the image and their intensity of the cross-section before and after the monochromatic aberration correction.

An image of white letters is used as the target image to visualize the effect of the chromatic aberration and its correction, as shown in Fig. 8. When the CGH calculated for the green image is used for all of the red, green, and blue images, both axial and lateral chromatic aberrations appear. A clear full-color image is obtained after correcting the chromatic aberration.

Fig. 8. Observed images of white letters before and after the chromatic aberration correction.

Figure 9 shows the full-color holographic images obtained with the proposed display system. The CGH is generated from an RGB image and a depth map using the proposed CGH calculation methods. In order to observe only the effects of the aberration and its correction, 10 holographic images with different random phases are digitally combined for each color channel to reduce speckle noise [31], and the color channels are then digitally combined to obtain the full-color image. Displaying these frames optically at a 60 Hz frame rate would require a display running at 1800 Hz (10 frames $\times$ 3 colors $\times$ 60 Hz); a high-speed display such as a digital micromirror device, which provides frame rates above 10 kHz, could therefore be used for a practical implementation of the proposed system [32]. The focus cues provided by the holographic display system are demonstrated in Fig. 9(a). The nearest object is placed at 10 diopters (D), and the farthest object at 0.5 D. The holographic image and the real-world image are combined successfully, as shown in Fig. 9(b). Furthermore, the FOV of the proposed display system is about 50°, as shown in Fig. 9(c).

Fig. 9. (a) Observed holographic images without the real-world image, showing focus cues. (b) The holographic AR images, and (c) FOV of the proposed display system.

However, some noise appears in the center of the image when the CCD is focused at a far distance. This noise is a virtual image of the spatial filter in the 4-$f$ system, caused by the limited efficiency of PBPE. The image of the spatial filter is floated to an infinite distance by the second lens of the 4-$f$ system; part of this image passes through PBPE without any modulation and is therefore observed when the camera focuses at a far distance.

5. Discussion

5.1 FOV design of the proposed system

As discussed in section 2.2, the eye-box of the holographic near-eye display is located at the focal plane of the eyepiece. Due to the chromatic aberration of PBPE, the position of the eye-box varies with the wavelength, so the viewing position where the maximum FOV is obtained differs for each wavelength. However, since the holographic near-eye display also provides images outside the eye-box with a decreased FOV, it is possible to define a viewing zone where a sufficient FOV is provided for all wavelengths. In order to find this region, the FOV of the holographic near-eye display at arbitrary points along the z-axis must be calculated.

In the holographic near-eye display, three stops limit the FOV, as shown in Fig. 10(a). The first is the SLM itself, denoted as stop 1 in Fig. 10(a). Regarding the light as coming from the virtual image of the SLM, the size of the SLM becomes an aperture stop of the display. Therefore, the FOV limited by the size of the SLM is

$${\theta _{\textrm{FOV,1}}} = 2{\tan ^{ - 1}}\left( {\frac{{M{l_{\textrm{SLM}}} + {d_p}}}{{2\left( {{z_i} + {z_e}} \right)}}} \right),$$
where $z_{i}$ is the distance between the eyepiece and the virtual image, $M$ is the magnification of the eyepiece, $d_{p}$ is the diameter of the user's pupil, and $z_{e}$ is the eye relief. The second is the aperture of the eyepiece, denoted as stop 2 in Fig. 10(a). Since the holographic image is transferred to the user through the eyepiece, its aperture becomes a stop of the system. The FOV limited by the aperture of the eyepiece is expressed as
$${\theta _{\textrm{FOV,2}}} = 2{\tan ^{ - 1}}\left( {\frac{{{D_{ep}} + {d_p}}}{{2{z_e}}}} \right),$$
where $D_{ep}$ is the diameter of the eyepiece. Lastly, the limited angular spectrum due to the pixelated structure of the SLM acts as a stop of the holographic near-eye display. The limited angular spectrum forms the eye-box of width $l_f$ at the focal plane, and all the light coming from the SLM passes through this limited eye-box, making it act as a stop, denoted as stop 3 in Fig. 10(a). The FOV determined by the angular spectrum of the SLM is as follows:
$${\theta _{\textrm{FOV,3}}} = 2{\tan ^{ - 1}}\left( {\frac{{{l_f} + {d_p}}}{{2\left| {{z_e} - f} \right|}}} \right).$$

Fig. 10. (a) Top view of the holographic near-eye display with its stops to design FOV. (b) Simulation results of FOV at arbitrary points along the z-axis.

The system stop of the display is the smallest among the candidate aperture stops. Therefore, the minimum of $\theta _{\textrm {FOV,1}}$, $\theta _{\textrm {FOV,2}}$, and $\theta _{\textrm {FOV,3}}$ is designated as the FOV of the holographic near-eye display. Substituting the specifications of the proposed system and assuming a pupil diameter of 6 mm, the lateral FOV of the proposed system for each wavelength is plotted in Fig. 10(b). When the eye relief is in the range from 20 mm to 25.6 mm, the lateral FOV of the proposed system is 50° to 60°. Therefore, the proposed system achieves a viewing zone with an axial size of 5.6 mm that provides an FOV over 50° for all wavelengths. In the experiment, the eye relief is set to 25.6 mm for the maximum FOV of the full-color image.
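The sketch below re-creates the spirit of this design procedure: for each wavelength it evaluates Eqs. (14)–(16) at the experimental eye relief and reports the minimum. The magnification, virtual image depth, and effective SLM size after the 4-$f$ relay are our assumptions, so the output will not exactly reproduce Fig. 10(b).

```python
import numpy as np

# Hedged sketch of the FOV design: FOV at a viewing distance z_e is the
# minimum of Eqs. (14)-(16). M, z_i, and the 1.7x-magnified SLM geometry
# are illustrative assumptions, not quoted system values.
p_eff = 1.7 * 3.74e-6                 # effective pixel pitch after the 4-f relay
l_slm = 1.7 * 4096 * 3.74e-6          # effective SLM width (m)
D_ep, d_p = 24.5e-3, 6e-3             # eyepiece aperture, pupil diameter (m)
z_i, M = 1.0, 50.0                    # virtual image depth, magnification (assumed)

def fov_deg(lam, z_e, lam_d=550e-9, f_d_pbpe=22.5e-3):
    f = lam_d * f_d_pbpe / lam                            # PBPE focal length
    l_f = lam * f / p_eff                                 # eye-box width, Eq. (2)
    t1 = 2*np.arctan((M*l_slm + d_p) / (2*(z_i + z_e)))   # SLM size, Eq. (14)
    t2 = 2*np.arctan((D_ep + d_p) / (2*z_e))              # eyepiece, Eq. (15)
    t3 = 2*np.arctan((l_f + d_p) / (2*abs(z_e - f)))      # eye-box, Eq. (16)
    return np.degrees(min(t1, t2, t3))

for lam in (660e-9, 532e-9, 457e-9):
    print(f"{lam*1e9:.0f} nm: FOV = {fov_deg(lam, z_e=25.6e-3):.1f} deg")
```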

In the proposed system, it is hard to define the eye-box in the usual way due to the chromatic aberration of PBPE. Therefore, we define the eye-box of the system as the region where the full-color image is provided at the viewing position. When the eye relief is larger than the focal length of the eyepiece, the size of the eye-box increases and the FOV decreases. Since the viewing position is near the focal plane of the blue image and far from the focal planes of the red and green images, the blue image has the smallest eye-box and limits the eye-box of the system to 1.9 mm, calculated from Eq. (2).
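This estimate follows directly from Eq. (2) once the pixel pitch is scaled by the 1.7× magnification of the 4-$f$ system (our assumption about how the effective pitch enters):

```python
# Hedged check of the eye-box estimate: Eq. (2) with the blue focal length and
# an effective pixel pitch scaled by the 1.7x 4-f magnification (assumption).
lam_b, f_b = 457e-9, 27.1e-3
p_eff = 1.7 * 3.74e-6
print(f"eye-box = {lam_b*f_b/p_eff*1e3:.1f} mm")   # prints ~1.9 mm
```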

5.2 Lateral chromatic aberration depending on the user’s focal plane

In section 3.2, the lateral chromatic aberration is corrected by warping the image with the mapping function obtained from ray tracing. However, the lateral chromatic aberration must be corrected at the user's focal plane, not at the virtual image plane. In the holographic near-eye display, the light from the virtual image propagates like a single ray due to the small divergence angle of the SLM. Since the wavelength-dependent lens power of PBPE is not compensated, the directions of the rays departing from a single point of the virtual image differ for each wavelength. Because of the small divergence angle of the light, the chromatic aberration therefore reappears when the user's focal plane changes.

Figure 11 shows schematic diagrams of the lateral chromatic aberration depending on the user's focal plane, together with the corresponding experimental results. When the user focuses on the virtual image plane, the lateral chromatic aberration is successfully corrected by warping the input image to the virtual image plane, as shown in Fig. 11(a). However, when the user shifts focus to another plane, the red, green, and blue images separate from each other again because of the different ray directions, as shown in Fig. 11(b); the user then observes a defocused virtual image with chromatic aberration. Therefore, the lateral chromatic aberration must be corrected at the user's focal plane, as shown in Fig. 11(c). Even when the eye moves inside the eye-box, the hologram need not be updated as long as the user focuses on the identical plane. However, since pupil swim rotates the user's focal plane, the hologram should be updated according to the rotation angle.

Fig. 11. Schematic diagram of the chromatic aberration depending on the user’s focal plane and its corresponding experimental results. The image is corrected to the virtual image plane, and the user focuses on (a) the virtual image plane and (b) the arbitrary plane. (c) The image is corrected to the plane where the user focuses.

Warping the input image to the user's focal plane requires knowing that plane. Therefore, additional sensors that find the focal plane of the eye, such as the eye trackers with a depth camera used in previous studies, should be included in the display system for actual use [33,34]. In this research, all the experiments were conducted under the assumption that the user's focal plane is already known.

5.3 Acceleration of CGH calculation

The proposed CGH calculation method generates the aberration-corrected holographic image by superposing a PSF for every point of the target image. However, calculating and superposing every PSF takes too much computation time. In order to accelerate the calculation, the aberration-corrected image is generated by dividing the target image into multiple patches and using an identical PSF within each patch. Assuming that the PSF is a spatially slowly varying function, the PSF inside each patch is considered constant; the aberration-corrected image is then obtained by merging the convolution of each patch with the PSF corresponding to the center of the patch. However, the result of this calculation differs from the ground truth generated by point-by-point superposition of PSFs, so a patch size with sufficiently small error compared to the ground truth must be found.
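A minimal sketch of this patch-based acceleration is shown below. The function `psf_at` is a hypothetical placeholder standing in for the PSF computation of section 3.1; a practical implementation would also restrict each convolution to a neighborhood of its patch instead of convolving full frames.

```python
import numpy as np
from scipy.signal import fftconvolve

# Hedged sketch of the patch-based acceleration: split the target image into
# square patches, convolve each patch with the PSF sampled at its center, and
# merge the results. `psf_at(cx, cy)` is a placeholder returning a 2-D PSF.
def aberration_corrected_image(target, psf_at, patch=40):
    out = np.zeros_like(target, dtype=complex)
    h, w = target.shape
    for y0 in range(0, h, patch):
        for x0 in range(0, w, patch):
            tile = np.zeros_like(target)
            tile[y0:y0+patch, x0:x0+patch] = target[y0:y0+patch, x0:x0+patch]
            kern = psf_at(x0 + patch//2, y0 + patch//2)  # PSF at patch center
            out += fftconvolve(tile, kern, mode="same")  # superpose responses
    return out
```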

In order to evaluate the generated aberration-corrected image, the peak signal-to-noise ratio (PSNR) is used. An image with a resolution of 4096$\times$2400, with the intensity of every pixel normalized to one, is used as the target image. Figure 12(a) shows the PSNR and the calculation time according to the patch size. A square patch is used, and the patch size is defined as the width of the patch in pixels. When the patch size is 1, the generated image becomes the ground truth, and the calculation time is about 10 hours. As the patch size increases, both the PSNR and the calculation time decrease. In this research, the lower bound of the PSNR is set to about 30 dB, and the patch size is accordingly set to 40. The star-shaped marks in Fig. 12(a) indicate the patch size of 40 used in the experiment; the corresponding calculation time is 32 seconds, and the PSNR is 29.41 dB.

Fig. 12. (a) Calculation time and PSNR of the aberration-corrected image according to the patch size. (b) Squared error between the ground truth and the obtained aberration-corrected image when the patch size is 40 and 400.

Figure 12(b) visualizes the squared error between the ground truth and the generated aberration-corrected image for patch sizes of 40 and 400. Though the patch size of 400 achieves a short calculation time of 2 seconds, it shows a high squared error because the PSF is no longer constant inside such a large patch. On the contrary, the small patch size of 40 achieves a high PSNR and a low squared error across the entire image. Therefore, the CGH calculation can be accelerated while preserving the quality of the aberration-corrected image. Although CPU processing in MATLAB is used in this research, GPU acceleration could be employed to obtain the accurate aberration-corrected image since the calculation is fully parallelizable.

6. Conclusion

In this research, a full-color holographic AR near-eye display using PBPE and a CGH calculation method for aberration correction are presented. By using PBPE as an eyepiece, a holographic AR near-eye display with a 50° FOV is obtained. When PBPE is used as an eyepiece, monochromatic and chromatic aberrations appear due to its non-ideal imaging and wavelength dependency. In this paper, we correct these aberrations through the wavefront modulation of the holographic display. The monochromatic aberration is corrected with the proposed CGH calculation method, which incorporates the phase profile of PBPE in the backpropagation process. In order to avoid the aliasing issue during the backpropagation, PSFs at the input image plane of PBPE are calculated and superposed to generate the aberration-corrected holographic image. The axial chromatic aberration is corrected by floating the holographic images at different depths depending on the wavelength, and the lateral chromatic aberration is corrected by warping the image with the mapping function obtained from ray tracing. The proposed system is demonstrated with a benchtop prototype, and the experimental results show that it provides full-color holographic images with a 50° FOV while correcting the observed aberrations of PBPE.

Disclosures

The authors declare no conflicts of interest.

References

1. D. Cheng, Y. Wang, H. Hua, and M. Talha, “Design of an optical see-through head-mounted display with a low f-number and large field of view using a freeform prism,” Appl. Opt. 48(14), 2655–2668 (2009).

2. X. Hu and H. Hua, “High-resolution optical see-through multi-focal-plane head-mounted display using freeform optics,” Opt. Express 22(11), 13896–13903 (2014).

3. T. Levola, “Diffractive optics for virtual reality displays,” J. Soc. Inf. Disp. 14(5), 467–475 (2006).

4. P. Äyräs, P. Saarikko, and T. Levola, “Exit pupil expander with a large field of view based on diffractive optics,” J. Soc. Inf. Disp. 17(8), 659–664 (2009).

5. A. Maimone, A. Georgiou, and J. S. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36(4), 85 (2017).

6. C. Jang, K. Bang, G. Li, and B. Lee, “Holographic near-eye display with expanded eye-box,” ACM Trans. Graph. 37(6), 195 (2019).

7. J.-Y. Hong, C.-K. Lee, S. Lee, B. Lee, D. Yoo, C. Jang, J. Kim, J. Jeong, and B. Lee, “See-through optical combiner for augmented reality head-mounted display: index-matched anisotropic crystal lens,” Sci. Rep. 7, 2753 (2017).

8. G.-Y. Lee, J.-Y. Hong, S. Hwang, S. Moon, H. Kang, S. Jeon, H. Kim, J.-H. Jeong, and B. Lee, “Metasurface eyepiece for augmented reality,” Nat. Commun. 9, 4562 (2018).

9. J. Kim, Y. Li, M. N. Miskiewicz, C. Oh, M. W. Kudenov, and M. J. Escuti, “Fabrication of ideal geometric-phase holograms with arbitrary wavefronts,” Optica 2(11), 958–964 (2015).

10. T. Zhan, Y.-H. Lee, G. Tan, J. Xiong, K. Yin, F. Gou, J. Zou, N. Zhang, D. Zhao, J. Yang, S. Liu, and S.-T. Wu, “Pancharatnam–Berry optical elements for head-up and near-eye displays,” J. Opt. Soc. Am. B 36(5), D52–D65 (2019).

11. Y.-H. Lee, G. Tan, T. Zhan, Y. Weng, G. Liu, F. Gou, F. Peng, N. V. Tabiryan, S. Gauza, and S.-T. Wu, “Recent progress in Pancharatnam–Berry phase optical elements and the applications for virtual/augmented realities,” Opt. Data Process. and Storage 3(1), 79–88 (2017).

12. T. Zhan, Y.-H. Lee, and S.-T. Wu, “High-resolution additive light field near-eye display by switchable Pancharatnam–Berry phase lenses,” Opt. Express 26(4), 4863–4872 (2018).

13. C. Yoo, M. Chae, S. Moon, and B. Lee, “Retinal projection type lightguide-based near-eye display with switchable viewpoints,” Opt. Express 28(3), 3116–3135 (2020).

14. S. Moon, C.-K. Lee, S.-W. Nam, C. Jang, G.-Y. Lee, W. Seo, G. Sung, H.-S. Lee, and B. Lee, “Augmented reality near-eye display using Pancharatnam-Berry phase lenses,” Sci. Rep. 9, 6616 (2019).

15. S. Moon, S.-W. Nam, Y. Jeong, C.-K. Lee, H.-S. Lee, and B. Lee, “Compact augmented reality combiner using Pancharatnam-Berry phase lens,” IEEE Photonics Technol. Lett. 32(5), 235–238 (2020).

16. C. Yoo, J. Xiong, S. Moon, D. Yoo, C.-K. Lee, S.-T. Wu, and B. Lee, “Foveated display system based on a doublet geometric phase lens,” Opt. Express 28(16), 23690–23702 (2020).

17. Z. He, Y.-H. Lee, R. Chen, D. Chanda, and S.-T. Wu, “Switchable Pancharatnam–Berry microlens array with nano-imprinted liquid crystal alignment,” Opt. Lett. 43(20), 5062–5065 (2018).

18. K. Yin, Z. He, and S.-T. Wu, “Reflective polarization volume lens with small f-number and large diffraction angle,” Adv. Opt. Mater. 8(11), 2000170 (2020).

19. Z. He, K. Yin, and S.-T. Wu, “Standing wave polarization holography for realizing liquid crystal Pancharatnum-Berry phase lenses,” Opt. Express 28(15), 21729–21736 (2020).

20. F. Yaraş, H. Kang, and L. Onural, “State of the art in holographic displays: a survey,” J. Disp. Technol. 6(10), 443–454 (2010).

21. J. Upatnieks, A. Vander Lugt, and E. Leith, “Correction of lens aberrations by means of holograms,” Appl. Opt. 5(4), 589–593 (1966).

22. J. Hahn, H. Kim, Y. Lim, G. Park, and B. Lee, “Wide viewing angle dynamic holographic stereogram with a curved array of spatial light modulators,” Opt. Express 16(16), 12372–12386 (2008).

23. A. Kaczorowski, G. S. Gordon, A. Palani, S. Czerniawski, and T. D. Wilkinson, “Optimization-based adaptive optical correction for holographic projectors,” J. Disp. Technol. 11(7), 596–603 (2015).

24. A. Kaczorowski, G. S. Gordon, and T. D. Wilkinson, “Adaptive, spatially-varying aberration correction for real-time holographic projectors,” Opt. Express 24(14), 15742–15756 (2016).

25. H.-J. Yeom, H.-J. Kim, S.-B. Kim, H. Zhang, B. Li, Y.-M. Ji, S.-H. Kim, and J.-H. Park, “3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation,” Opt. Express 23(25), 32025–32034 (2015).

26. J.-H. Park and S.-B. Kim, “Optical see-through holographic near-eye-display with eyebox steering and depth of field control,” Opt. Express 26(21), 27076–27088 (2018).

27. D. Voelz, Computational Fourier Optics: A MATLAB Tutorial (Society of Photo-Optical Instrumentation Engineers, Bellingham, 2011).

28. J. W. Goodman, Introduction to Fourier Optics, 4th ed. (W. H. Freeman and Company, 2017).

29. G. Wolberg, Digital Image Warping (IEEE Computer Society Press, Los Alamitos, CA, 1990).

30. N. Uchida, “Calculation of diffraction efficiency in hologram gratings attenuated along the direction perpendicular to the grating vector,” J. Opt. Soc. Am. 63(3), 280–287 (1973).

31. J. Amako, H. Miura, and T. Sonehara, “Speckle-noise reduction on kinoform reconstruction using a phase-only spatial light modulator,” Appl. Opt. 34(17), 3165–3171 (1995).

32. B. Lee, D. Yoo, J. Jeong, S. Lee, D. Lee, and B. Lee, “Wide-angle speckleless DMD holographic display using structured illumination with temporal multiplexing,” Opt. Lett. 45(8), 2148–2151 (2020).

33. R. Konrad, N. Padmanaban, K. Molner, E. A. Cooper, and G. Wetzstein, “Accommodation-invariant computational near-eye displays,” ACM Trans. Graph. 36(4), 1–12 (2017).

34. N. Padmanaban, R. Konrad, and G. Wetzstein, “Autofocals: Evaluating gaze-contingent eyeglasses for presbyopes,” Sci. Adv. 5(6), eaav6187 (2019).
