
Holographic display using layered computer-generated volume hologram


Abstract

The spatial frequency of the reconstructed image of a planar computer-generated hologram (CGH) is limited by the sampling interval and the lack of thickness. To break through this limitation of planar CGH, we propose a new computer-generated volume hologram (CGVH) for full-color dynamic holographic three-dimensional (3D) display, together with an iteration-free layered CGVH generation method. The proposed CGVH is equivalent to a volume hologram sampled discretely in three directions. The generation method employs layered angular spectrum diffraction to calculate the light field inside the layered CGVH and then encodes it into a CGVH. Numerical simulation results show that the CGVH can accurately reconstruct full-color 3D objects, achieving better imaging quality, more concentrated diffraction energy, denser reconstructed spatial-frequency information, and a larger viewing angle than planar CGH. The proposed CGVH is expected to be applied to realize dynamic modulation, wavelength multiplexing, and angle multiplexing in various optical fields in the future.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Holography is a technology that records both amplitude and phase information in a hologram by interference [1]. Holograms can also be generated by computer simulation, which is called CGH [2]. Holograms can accurately reconstruct 3D images of the recorded 3D objects. Therefore, holographic display based on CGH has been considered the ultimate technology for 3D display [3]. However, the spatial frequency of a planar CGH is limited by the sampling interval and the lack of thickness, leading to the loss of high-frequency information and resulting in small reconstructed images, poor reconstruction quality, and a small viewing angle.

To address the problem of low resolution, some researchers attempted horizontal splicing of multiple spatial light modulators (SLMs) [4,5], but this approach increases the cost and optical complexity. Similarly, time-division multiplexing [6,7], pupil tracking [8,9], and pupil point duplication [10,11] have been used to address the problem of a small viewing angle. However, time-division multiplexing greatly increases the complexity and volume of the system, and pupil tracking and pupil point duplication can only meet the needs of single-person observation. In addition, none of the above methods can truly improve the spatial frequency of the CGH. The only way to improve the spatial frequency of a planar CGH is to reduce the sampling interval, which is typically equal to the pixel pitch of the SLM loaded with the CGH. Although an SLM with a horizontal pixel pitch of 1 µm has been produced [12], its longitudinal pixel pitch is large and it cannot realize dynamic display. Theoretically, the spatial frequency of a CGH display is limited by the sampling interval; none of the above methods can break this limit, so the improvement they offer is limited. A key issue is therefore to propose a new holographic display scheme that can overcome the limitation of the planar sampling interval and reconstruct higher spatial frequencies.
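As a hedged numerical illustration of this limit (the specific numbers are ours, taken from the pixel pitch used later in Section 3, not a claim made in this paragraph): for a green wavelength of 520 nm and a sampling interval $\Delta x$ of 3.7 µm, the maximum recordable spatial frequency and the corresponding diffraction half-angle of a planar CGH are approximately
$${f_{\max }} = \frac{1}{{2\Delta x}} \approx 135\ \textrm{mm}^{ - 1},\qquad {\theta _{\max }} = \arcsin ({\lambda {f_{\max }}} )= \arcsin \left( {\frac{{520\ \textrm{nm}}}{{2 \times 3.7\ \mathrm{\mu m}}}} \right) \approx 4.0^\circ$$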

Theoretically, a CGVH is more akin to an optical hologram because of its thickness, allowing it to reconstruct higher-spatial-frequency information. However, existing CGVHs are either realized by superimposing planar CGHs recorded in an optical medium, without actually calculating the CGVH [13], or calculated by a slow iterative algorithm [14]. Neither approach can realize dynamic holographic display.

To solve the above problems, we propose a new CGVH for full-color dynamic holographic 3D display, together with an iteration-free layered CGVH generation method. The proposed CGVH is equivalent to a volume hologram sampled discretely in the x, y, and z directions. Each pixel applies a very small, independent amplitude, phase, or complex-amplitude modulation to the light passing through it. The generation method uses layered angular spectrum diffraction to calculate the light field distribution of the target object within the layered CGVH, and then encodes it into a CGVH by unique phase, amplitude, or complex-amplitude encoding. Our CGVH retains the advantage of free encoding enjoyed by planar CGH and thus can reconstruct both real and virtual objects. At the same time, it also has the advantages of a conventional volume hologram: it can record and reconstruct higher-spatial-frequency information to expand the viewing angle without twin images or higher-order diffraction, and it has angle selectivity and wavelength selectivity, enabling angle multiplexing [15] and wavelength multiplexing [16].

The advantages mentioned above can pave the way for CGVH to offer a better display effect in holographic display and wider application prospects in holographic optical elements, beam shaping, and optical tweezers. For comparison, when CGH and CGVH are used for 3D display, a planar CGH can only display a single-view image within a small viewing angle, as shown in Fig. 1(a); in contrast, a CGVH can display 3D objects with horizontal parallax over a large viewing angle, as shown in Fig. 1(b), and can display different images at different angles, as shown in Fig. 1(c).

Fig. 1. Comparison of CGH and CGVH display effects. (a) CGH display effect with a small viewing angle; (b) CGVH display effect with a large viewing angle; (c) CGVH displaying different images at different angles.


In the second part of this paper, we provide a detailed introduction to the calculation, encoding, and reconstruction methods of CGVH. In the third part, we analyze and discuss the reconstruction quality, angle selectivity, wavelength selectivity, and reconstruction frequency of CGVH through theory and simulation.

2. Theory

2.1 CGVH model

Before calculating, we specify the discrete sampling method of the CGVH. The proposed CGVH is similar to the optical volume hologram discretely sampled in x, y and z axis directions, with sampling intervals of $\Delta x$, $\Delta y$ and $\Delta z$ as shown in Fig. 2(b). For the convenience of calculation, we transform the 3D discrete lattice into a series of two-dimensional discrete planar lattices. The pixel intervals on each plane are $\Delta x$ and $\Delta y$, and the distance between planes is $\Delta z$, as shown in Fig. 2(c). It should be noted that these two forms are completely equivalent, with identical pixel distribution.

Fig. 2. CGVH reconstruction and sampling schematic. (a) CGVH reconstruction schematic; (b) CGVH sampled discretely in the x, y, and z directions; (c) CGVH sampled discretely as a series of planes.


An optical volume hologram is a 3D interference pattern recorded in a medium with a certain thickness [17]. Interference patterns are generally recorded in the form of dielectric constant changes. In an optical volume hologram, the dielectric constant changes practically continuously, continuously modulating the amplitude and phase of the light field passing through the volume hologram. The pixels in a CGVH are discrete, and each pixel applies a certain modulation to the light field passing through it. There are many possible modulation mechanisms, such as liquid crystal modulation or acousto-optic modulation. Therefore, we do not consider the mechanism and only consider the amplitude and phase modulation that each pixel imposes on the light field. Our theory aims to calculate the modulation values of the pixels in the CGVH, which is called the recording process, and to calculate the reconstructed image of the CGVH, which is called the reconstruction process.

The single-frame recording process is as follows: first, obtain the 3D information of the objects at the specific frame, then calculate the light field distribution of the objects within the volume of the CGVH, and finally encode it into a CGVH. The single-frame reconstruction process is shown in Fig. 2(a): the reconstruction laser illuminates the dynamic volume display device loaded with the CGVH, the image of the object is reconstructed after diffraction, and the reconstructed image can be viewed directly without filtering. Full-color display can be realized by synthesizing three monochromatic CGVHs or by wavelength multiplexing, and dynamic display can be achieved by calculating and refreshing the CGVH on the volume display device in real time. The above process is similar to planar CGH display. The main difference between CGVH and planar CGH is that both the hologram and the display device change from planar to volumetric.

2.2 Recording

We divide the recording process into three steps: obtaining the 3D information, calculating the light field distribution, and encoding it into a CGVH.

The first step is to obtain the 3D information of the object and layer it, as shown in Fig. 3(a). We use 3D Studio Max to create the 3D model and obtain the position information and intensity information of each point of the 3D object through rendering. The information obtained by this method has already addressed the occlusion problem. Then we layer these points according to the z coordinate. The light field distribution of the m-th layer is ${U_m}({{x_m},{y_m}} )$.
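As an illustration only, the following is a minimal numpy sketch of this layering step; the function name, the array shapes, and the choice of amplitude = sqrt(intensity) are our assumptions, not the authors' code.

```python
import numpy as np

def layer_object_points(points, intensities, n_layers, z_min, z_max, ny=2048, nx=2048):
    """Sort rendered object points into n_layers planes U_m by their z coordinate.
    points: (N, 3) array of (x, y, z), with x and y already given in pixel indices."""
    layers = np.zeros((n_layers, ny, nx), dtype=np.complex128)
    z_edges = np.linspace(z_min, z_max, n_layers + 1)
    layer_idx = np.clip(np.digitize(points[:, 2], z_edges) - 1, 0, n_layers - 1)
    for (px, py, _), m, inten in zip(points, layer_idx, intensities):
        # use the square root of the rendered intensity as the field amplitude
        layers[m, int(py), int(px)] += np.sqrt(inten)
    return layers
```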

Fig. 3. Algorithm flow chart. (a) Obtain the 3D information of the object and layer it; (b) calculate the light field distribution on the wavefront recording plane; (c) calculate the light field distribution on each layer of the CGVH; (d) encode the light field distribution into a CGVH.


The second step is calculating the light field distribution of the objects within the volume of the CGVH. As mentioned above, we regard the 3D discrete lattice of Fig. 2(b) as a series of two-dimensional discrete planar lattices as in Fig. 2(c). This allows us to transform the problem from calculating the light field distribution in a volume to calculating the light field distribution on multiple planes. The frequently used methods for calculating the light field distribution are the point source method [18] and the sub-hologram method [19]. However, for 3D objects it is computationally expensive to calculate the light field distribution on multiple planes, so we insert an angular spectrum recording plane. Unlike the wavefront recording plane [20], we calculate the angular spectrum distribution instead of the light field distribution. We first use layered angular spectrum diffraction [21] to calculate the angular spectrum on the recording plane, as shown in Fig. 3(b), and the equation is

$${A_{\textrm{rp}}}({f_x},{f_y}) = \sum\limits_{m = 1}^n {\mathrm{{\cal F}}\{{{U_m}({{x_m},{y_m}} )} \}\cdot {H_m}({{f_x},{f_y}} )}$$
where ${A_{\textrm{rp}}}({{f_x},\; {f_y}} )$ is the angular spectrum on the recording plane; n is the number of layers of the layered 3D objects; $\mathrm{{\cal F}}$ is the Fourier transform operator; ${U_m}({{x_m},{y_m}} )$ is the light field distribution of the m-th layer of the objects; ${H_m}({{f_x},{f_y}} )$ is the transfer function from the m-th layer to the recording plane, which varies with the depth of the object layer, and the transfer function is
$$H({{f_x},{f_y}} )= \exp \left[ {\textrm{j}kz\sqrt {1 - {{({\lambda {f_x}} )}^2} - {{({\lambda {f_y}} )}^2}} } \right]$$
where k is the wave number; $\lambda $ is the wavelength; ${f_x}$ and ${f_y}$ are the spatial frequencies; z is the distance between the planes where the propagation starts and ends. When z is negative, the CGVH will reconstruct the real image of the objects; otherwise it will reconstruct the virtual image.
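A minimal numpy sketch of Eqs. (1) and (2) is given below, under our own conventions (FFT-based angular spectrum with evanescent components suppressed); the variable names and sign conventions are assumptions, not the authors' implementation.

```python
import numpy as np

def transfer_function(ny, nx, dx, dy, wavelength, z):
    """Angular-spectrum transfer function H(fx, fy) of Eq. (2)."""
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dy)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))  # evanescent waves set to zero
    return np.exp(1j * kz * z)

def spectrum_on_recording_plane(object_layers, layer_z, z_rp, dx, dy, wavelength):
    """Eq. (1): coherent sum of the propagated angular spectra of all object layers."""
    ny, nx = object_layers[0].shape
    A_rp = np.zeros((ny, nx), dtype=np.complex128)
    for U_m, z_m in zip(object_layers, layer_z):
        H_m = transfer_function(ny, nx, dx, dy, wavelength, z_rp - z_m)
        A_rp += np.fft.fft2(U_m) * H_m
    return A_rp
```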

After calculating the angular spectrum on the recording plane, we also use the angular spectrum diffraction to calculate the light field distribution on each layer of the CGVH, as shown in Fig. 3(c), the equation is

$${U_{\textrm{Obj}}}({{x_i},{y_i}} )= {\mathrm{{\cal F}}^{ - 1}}\{{{A_{\textrm{rp}}}({{f_x},{f_y}} )\cdot {H_i}({{f_x},{f_y}} )} \}$$
where ${U_{\textrm{Obj}}}({{x_i},{y_i}} )$ is the light field distribution on the i-th layer of the CGVH; ${H_i}({{f_x},{f_y}} )$ is the transfer function from the recording plane to the i-th layer of the CGVH, which changes with the z coordinate of the CGVH layer and is given by Eq. (2); ${\mathrm{{\cal F}}^{ - 1}}$ is the inverse Fourier transform operator. It is worth noting that the position variable of the recording plane is eliminated in the expansion of Eq. (3), which means that the position of the recording plane can be selected arbitrarily without affecting the calculation results.
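Continuing the sketch above (and reusing the assumed transfer_function helper), Eq. (3) back-propagates the recording-plane spectrum to every CGVH layer:

```python
import numpy as np

def field_on_cgvh_layers(A_rp, cgvh_layer_z, z_rp, dx, dy, wavelength):
    """Eq. (3): propagate the recording-plane angular spectrum to each CGVH layer."""
    ny, nx = A_rp.shape
    return [np.fft.ifft2(A_rp * transfer_function(ny, nx, dx, dy, wavelength, z_i - z_rp))
            for z_i in cgvh_layer_z]
```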

The third step is encoding the light field distribution into a CGVH, as shown in Fig. 3(d). There are three encoding methods, namely complex-amplitude encoding, phase encoding, and amplitude encoding. Complex-amplitude encoding only needs to make the reconstructed light field equal to ${U_{\textrm{Obj}}}$. It is simple and accurate and is widely used in holographic simulations, but it is difficult to apply in experiments. The modulation functions of phase encoding and amplitude encoding can be obtained by imitating the recording process of the optical volume hologram and ignoring the useless terms:

$$\Delta \varphi = {\varphi _{\textrm{out}}} - {\varphi _{\textrm{in}}} = \alpha ({{U_{\textrm{Obj}}}{R^ \ast } + RU_{\textrm{Obj}}^ \ast } )$$
$$- \Delta E = \frac{{{E_{\textrm{out}}} - {E_{\textrm{in}}}}}{{{E_{\textrm{in}}}}} = \alpha ({{U_{\textrm{Obj}}}{R^ \ast } + RU_{\textrm{Obj}}^ \ast{+} C} )$$
where $\mathrm{\Delta }\varphi $ is the phase modulation; ${\varphi _{\textrm{in}}}$ and ${\varphi _{\textrm{out}}}$ are the phases of the incident light and the outgoing light, respectively; $\mathrm{\Delta }E$ is the absolute value of the relative amplitude modulation; ${E_{\textrm{in}}}$ and ${E_{\textrm{out}}}$ are the amplitudes of the incident light and the outgoing light, respectively; R is the reference light field and also the reconstruction light field; ${U_{\textrm{Obj}}}{R^\ast }$ is the term that reconstructs the light field; $R{U_{\textrm{Obj}}}^\ast $ is the conjugate term, which ensures that the modulation value is a real number; C is a constant used to ensure that the amplitude modulation is always negative; $\alpha $ is the modulation coefficient, which is far less than 1 so that $\mathrm{\Delta }E$ and $\mathrm{\Delta }\varphi $ of each point in the CGVH are far less than 1. Under this approximate condition, the output complex amplitude can be approximated as
$$\begin{aligned} {{\boldsymbol E}_{\textrm{out}}} &= (1 - \Delta E){{\boldsymbol E}_{\textrm{in}}}{\textrm{e}^{\textrm{j}\Delta \varphi }}\\ &\approx {{\boldsymbol E}_{\textrm{in}}} - (\Delta E - \textrm{j}\Delta \varphi ){{\boldsymbol E}_{\textrm{in}}} \end{aligned}$$
where ${{\boldsymbol E}_{\textrm{out}}}$ is the output complex amplitude; ${{\boldsymbol E}_{\textrm{in}}}$ is the incident complex amplitude, and in our method it is the incident light R. The approximate light field consists of two terms, the incident light field ${{\boldsymbol E}_{\textrm{in}}}$ and the diffraction field $- ({\Delta E - \textrm{j}\Delta \varphi } ){{\boldsymbol E}_{\textrm{in}}}$. We define the plane modulation factor on the i-th layer as ${\beta _i}({{x_i},{y_i}} )$
$${\beta _i}({{x_i},{y_i}} )={-} ({\Delta {E_i}({{x_i},{y_i}} )- \textrm{j}\Delta {\varphi_i}({{x_i},{y_i}} )} )$$
where $\Delta {E_i}({{x_i},{y_i}} )$ is the relative amplitude modulation of the i-th layer, which can be calculated by Eq. (5); $\Delta {\varphi _i}({{x_i},{y_i}} )$ is the phase modulation of the i-th layer, which can be calculated by Eq. (4). Note that $\Delta \varphi $ equals 0 for amplitude modulation and $\Delta E$ equals 0 for phase modulation.
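The sketch below summarizes the encoding step following Eqs. (4), (5) and (7); the value of alpha and the choice of the constant C are our own assumptions (any values satisfying the weak-modulation condition would do).

```python
import numpy as np

def encode_layer(U_obj, R, alpha=1e-3, mode="phase"):
    """Encode one CGVH layer into its plane modulation factor beta_i, Eq. (7)."""
    interference = (U_obj * np.conj(R) + R * np.conj(U_obj)).real  # real-valued term
    if mode == "phase":
        dphi = alpha * interference              # Eq. (4); amplitude modulation set to 0
        dE = np.zeros_like(dphi)
    else:                                        # amplitude encoding
        C = -interference.max()                  # keeps E_out - E_in <= 0 everywhere
        dE = -alpha * (interference + C)         # Eq. (5): -dE = alpha*(U R* + R U* + C)
        dphi = np.zeros_like(dE)
    return -(dE - 1j * dphi)                     # Eq. (7)
```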

2.3 Reconstruction

The reconstruction process requires that the CGVH satisfy the approximation condition, namely that $\Delta E$ and $\Delta \varphi $ of each point in the CGVH are far less than 1. Under this condition, the diffracted light field of the CGVH can be written using the solution from integral equation theory [22]:

$${U_\textrm{o}}({{x_\textrm{o}},{y_\textrm{o}},{z_\textrm{o}}} )= \frac{\pi }{{\textrm{j}{\lambda ^2}}}\int\!\!\!\int\!\!\!\int {\frac{{{\textrm{e}^{\textrm{j}kr}}}}{r}\beta ({x,y,z} )R({x,y,z} )dxdydz}$$
where ${U_\textrm{o}}({{x_\textrm{o}},{y_\textrm{o}},{z_\textrm{o}}} )$ is the reconstruction light field; $R({x,y,z} )$ is the incident light; $\beta ({x,y,z} )$ is the volume modulation factor of CGVH; r is the distance between point $({x,y,z} )$ and point $({{x_\textrm{o}},{y_\textrm{o}},{z_\textrm{o}}} )$, the equation is
$$r = \sqrt {{{({x_\textrm{o}} - x)}^2} + {{({y_\textrm{o}} - y)}^2} + {{({z_\textrm{o}} - z)}^2}}$$

Transforming the triple integral of Eq. (8) into a discrete sum in the z direction, the equation becomes

$${U_\textrm{o}}({{x_\textrm{o}},{y_\textrm{o}},{z_\textrm{o}}} )= \sum\limits_{i = 1}^n {\frac{1}{{\textrm{j}\lambda }}\int\!\!\!\int {\frac{{{\textrm{e}^{\textrm{j}kr}}}}{r}{\beta _i}({{x_i},{y_i}} ){R_i}({{x_i},{y_i}} )d{x_i}d{y_i}} }$$
where ${\beta _i}({{x_i},{y_i}} )= \frac{{\mathrm{\Delta }z}}{\lambda }\beta ({{x_i},{y_i},{z_i}} )$ is the planar modulation of the i-th layer, which can be calculated by Eq. (7); $\mathrm{\Delta }z$ is the thickness of the layer; ${R_i}({{x_i},{y_i}} )= R({{x_i},{y_i},{z_i}} )$ is the incident light on the i-th layer. It can be seen that the formula inside the summation is the Fresnel-Kirchhoff diffraction formula. Eq. (10) shows that the diffraction field of the CGVH is equal to the sum of the diffraction fields of the individual layers. When calculating the light field distribution of a layer, according to scalar diffraction theory, the Fresnel-Kirchhoff formula can be replaced by the angular spectrum diffraction formula. After the replacement, the light field distribution on the reconstruction plane is
$${U_{\textrm{rp}}}({{x_\textrm{o}},{y_\textrm{o}}} )= \sum\limits_{i = 1}^n {{\mathrm{{\cal F}}^{ - 1}}\{{\mathrm{{\cal F}}\{{{\beta_i}({{x_i},{y_i}} )\cdot R({{x_i},{y_i}} )} \} { \cdot {H_{io}}({{f_x},{f_y}} )} \}} }$$
where ${U_{\textrm{rp}}}({{x_\textrm{o}},{y_\textrm{o}}} )$ is the light field on the reconstruction plane; ${H_{io}}({{f_x},{f_y}} )$ is the transfer function from the i-th plane to the reconstruction plane, which can be calculated by Eq. (2).
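A corresponding reconstruction sketch of Eq. (11), again reusing the assumed transfer_function helper from Section 2.2: the diffracted field of every layer is propagated to the reconstruction plane and summed coherently.

```python
import numpy as np

def reconstruct(beta_layers, cgvh_layer_z, R_layers, z_out, dx, dy, wavelength):
    """Eq. (11): coherent sum over layers of beta_i * R_i propagated to z_out."""
    ny, nx = beta_layers[0].shape
    U_rp = np.zeros((ny, nx), dtype=np.complex128)
    for beta_i, R_i, z_i in zip(beta_layers, R_layers, cgvh_layer_z):
        H_io = transfer_function(ny, nx, dx, dy, wavelength, z_out - z_i)
        U_rp += np.fft.ifft2(np.fft.fft2(beta_i * R_i) * H_io)
    return U_rp
```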

We use the peak signal-to-noise ratio (PSNR) to measure the CGVH reconstruction quality; the PSNR is defined as

$$\textrm{PSNR} = 10 \times \textrm{lo}{\textrm{g}_{10}}\left( {\frac{{MN}}{{\sum\nolimits_{m = 1}^M {\sum\nolimits_{n = 1}^N {{{[{{I_0}({m,n} )- {I_\textrm{r}}({m,n} )} ]}^2}} } }}} \right)$$
where M and N are the numbers of rows and columns of the original image, respectively. ${I_0}({m,n} )$ and ${I_r}({m,n} )$ denote the normalized intensity of the pixel $({m,n} )$ of the original image and the reconstructed image, respectively.
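Equation (12) is the standard PSNR for intensities normalized to [0, 1]; a trivial numpy version (our own implementation) is:

```python
import numpy as np

def psnr(I0, Ir):
    """Eq. (12): PSNR of the normalized reconstructed intensity Ir against I0."""
    return 10.0 * np.log10(1.0 / np.mean((I0 - Ir) ** 2))
```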

3. Numerical simulation and analysis of characteristics

3.1 3D reconstruction

In order to verify the correctness and feasibility of our method, we conducted a complete simulation of a transmissive full-color CGVH, including the calculation, encoding, and reconstruction processes. We used phase encoding as the encoding method, and the phase modulation can be calculated by Eq. (4). The resolution of the 3D object used in our simulation in the x, y, and z directions is 2048 × 2048 × 256, and the pixel size is 3.7 µm × 3.7 µm × 3.92 µm. The resolution of the CGVH is 2048 × 2048 × 100, and the pixel size is 3.7 µm × 3.7 µm × 7.3 µm. The reconstruction distance is 10 mm. The wavelengths of the red, green, and blue reconstruction lights are 638 nm, 520 nm, and 450 nm, respectively. As shown in Fig. 4(c), the azimuth $\varphi $ and the zenith angle $\theta $ of the incident direction of the reconstruction light are both 45°, so the angles between the reconstruction light and the x, y, and z axes of the CGVH are 60°, 60°, and 45°, respectively.

Fig. 4. Simulation reconstruction results. (a) The color intensity image and depth image of the 3D object; (b) the reconstructed 3D image focused on the person; (c) the incident angle of the reconstruction light, $\varphi = 45^\circ $, $\theta = 45^\circ $; (d) local reconstructed images at different depths; (e) PSNR as a function of the number of layers.


The color intensity and depth images of the 3D objects used in the calculation are shown in Fig. 4(a). The reconstruction result focused on the person is shown in Fig. 4(b). Only when reconstructing 3D objects do we use the time-average method [23], to eliminate the crosstalk between layers of the 3D objects. Figure 4(d) shows the local reconstructed images at different depths, demonstrating that the CGVH can accurately reconstruct 3D objects.

To calculate the PSNR, we calculate and reconstruct 2D objects. We use the intensity image in Fig. 4(a) as a 2D object to calculate the reconstructed results of the CGVH with different numbers of layers (different thicknesses). Figure 4(e) shows the relationship between the PSNR and the number of layers of the CGVH. It can be seen that the PSNR of each color increases with the number of layers. The average PSNR reaches a maximum value of 47.28 at 100 layers. The average PSNR reached 30.20 at 14 layers, far exceeding the PSNR of 11.41 achieved by planar CGH when the number of layers is 1. Therefore, good reconstruction quality is one of the advantages of CGVH.

In terms of calculation time, we used a computer with an i7-12700F (2.1 GHz) CPU to calculate a CGVH with a resolution of 2048 × 2048 × 100. The calculation time was 69.74 s. Additionally, we calculated a CGVH with a resolution of 64 × 64 × 64 to compare the calculation time with that of the projection-onto-constraint-sets optimization algorithm. The calculation time of the projection-onto-constraint-sets optimization algorithm is less than a minute [24], while the calculation time of our method is only 0.86 s. This demonstrates the superiority of our method in terms of calculation time.

3.2 Angle selectivity and wavelength selectivity

In our CGVH model, each layer of the CGVH generates a diffracted light field, and these diffracted light fields are coherent. The coherent sum of these light fields is the final diffraction field. Therefore, the CGVH has angle selectivity and wavelength selectivity, similar to an optical volume hologram. As is well known, a volume hologram has high diffraction efficiency only when the incident light satisfies the Bragg condition, namely when the incident light has the same direction and wavelength as the reference light. The Bragg condition also applies to CGVH.

The reconstructed image of the CGVH is generated by the interference of the light fields of its layers, and so are its angle selectivity and wavelength selectivity. Assume that the angle between the incident light direction and the z-axis of the CGVH is $\theta $, the wave number is k, and the angle between the diffracted light direction and the z-axis is $\alpha $. Then the phase difference $\Delta \varphi $ between the diffracted light fields of two adjacent layers with an interval $\Delta z$ is

$$\Delta \varphi = k\Delta z(\cos \theta - \cos \alpha )$$

Therefore, when we irradiate CGVH with the reference light and another incident light, the difference between the phase differences in two cases is

$$\delta = \Delta {\varphi _1} - \Delta {\varphi _0} = \Delta z[{{k_1}\cos {\theta_1} - {k_0}\cos {\theta_0} - ({{k_1} - {k_0}} )\cos \alpha } ]$$
where $\Delta {\varphi _0}$ is the phase difference when the reference light is incident; $\Delta {\varphi _1}$ is the phase difference when the other incident light is incident; ${k_0}$ and ${k_1}$ are the wave numbers of the reference light and the incident light, respectively; ${\theta _0}$ and ${\theta _1}$ are the incident angles of the reference light and the incident light, respectively.

We can call $\delta $ the phase shift factor of CGVH, that is, the phase shift of the diffraction light field between layers when the incident light does not meet the Bragg condition. When we accumulate light fields of multiple layers, it is equivalent to accumulating many light fields with the phase difference $\delta $ and the same amplitude. The summation intensity of the light field is

$$I = {\left| {{A_0} + {A_0}{\textrm{e}^{\textrm{j}\delta }} + {A_0}{\textrm{e}^{\textrm{j}2\delta }} + \cdots + {A_0}{\textrm{e}^{\textrm{j}({n - 1} )\delta }}} \right|^2} = \left\{ {\begin{array}{*{20}{l}} {A_0^2{{\left| {\dfrac{{1 - {\textrm{e}^{\textrm{j}n\delta }}}}{{1 - {\textrm{e}^{\textrm{j}\delta }}}}} \right|}^2},}&{\delta \ne 0}\\ {{n^2}A_0^2,}&{\delta = 0} \end{array}} \right.$$
where ${A_0}$ is the amplitude of each light field; n is the number of CGVH layers.
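To make Eqs. (13)-(15) concrete, the sketch below evaluates a normalized angular selectivity curve under the geometry stated in the next paragraph (520 nm, 45° reference, 7.3 µm layer spacing, 100 layers, α = 0); this is our own reproduction for illustration, not the authors' simulation code.

```python
import numpy as np

def selectivity_intensity(n_layers, delta):
    """Eq. (15), normalized so that the on-Bragg (delta = 0) intensity equals 1."""
    if np.isclose(delta, 0.0):
        return 1.0
    s = (1.0 - np.exp(1j * n_layers * delta)) / (1.0 - np.exp(1j * delta))
    return np.abs(s) ** 2 / n_layers ** 2

# angular selectivity example: vary the incidence angle around the 45-degree reference
wavelength, dz, n = 520e-9, 7.3e-6, 100
k = 2.0 * np.pi / wavelength
theta1 = np.deg2rad(np.linspace(40.0, 50.0, 201))
delta = dz * k * (np.cos(theta1) - np.cos(np.deg2rad(45.0)))   # Eq. (14) with k1 = k0
curve = [selectivity_intensity(n, d) for d in delta]
```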

To validate the above theory, we conduct simulations using the gray scale of the intensity image in Fig. 4(a) as the original image. The pixel size is 3.7µm × 3.7µm × 7.3µm. The reference light wavelength is 520 nm. The angles between the reference light and the x, y, and z axes of the CGVH are 60°, 60°, and 45°. The number of CGVH layers is 100. We concentrate diffraction energy near zero frequency in simulation, so $\alpha $ is taken as 0. Figure 5(a) shows the theoretical curve calculated by Eq. (15) and the simulation value of CGVH diffraction intensity when the incident light angle is varied. Figure 5(b) shows the theoretical curve and simulation value of CGVH diffraction intensity when the incident light wavelength changes. Obviously, the simulation values closely match the theoretical curve, which confirms the correctness of the above theory. It is apparent from the figure that when the angle or wavelength of the incident light is altered, the diffraction intensity of CGVH will decrease rapidly. Therefore, CGVH has angle selectivity and wavelength selectivity. It is worth noting that Eq. (15) is only applicable when the angle and wavelength of the incident light are not significantly different from the reference light. When the difference is large, the diffraction light field of each layer will change greatly, and the summation diffraction energy will be very small.

These two kinds of selectivity can be utilized to realize wavelength multiplexing and angle multiplexing. Wavelength multiplexing can be realized by superimposing the red, green, and blue CGVHs on one CGVH during calculation. In this way, the color image can be reconstructed by illuminating the CGVH with the red, green, and blue lasers simultaneously, as shown in Fig. 5(c). To demonstrate that angle multiplexing can be implemented, we calculate the CGVHs of four pictures using four reference lights from different directions and superimpose them to obtain the angle-multiplexed CGVH. The angles between the four reference lights and the z-axis are 72°, 45°, 36°, and 0°, respectively. When reconstructing, we illuminate the CGVH with the reconstruction light at the same angle as the reference light, and the image recorded at that angle is reconstructed, as shown in Fig. 5(d). There is almost no crosstalk between different reconstructed images.
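Because the modulation is weak and linear, multiplexing reduces to a layer-wise sum of independently calculated CGVHs; a minimal sketch is given below (helper and variable names are hypothetical).

```python
import numpy as np

def multiplex(*cgvhs):
    """Superimpose several CGVHs (each a list of per-layer modulation factors)
    into one multiplexed CGVH by layer-wise addition."""
    return [np.sum(layers, axis=0) for layers in zip(*cgvhs)]

# wavelength multiplexing: one CGVH per color, each computed with its own wavelength
# cgvh_rgb = multiplex(cgvh_red, cgvh_green, cgvh_blue)
# angle multiplexing: one CGVH per reference direction (72, 45, 36 and 0 degrees here)
# cgvh_angle = multiplex(cgvh_72, cgvh_45, cgvh_36, cgvh_0)
```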

Fig. 5. Verification of angle selectivity and wavelength selectivity. (a) Relative diffraction intensity versus incident angle; (b) relative diffraction intensity versus wavelength; (c) reconstructed images of the wavelength-multiplexed CGVH; (d) original and reconstructed images of the angle-multiplexed CGVH.


In addition, angle selectivity can be used to explain some characteristics of the CGVH. Firstly, as mentioned above, the CGVH has no conjugate image. Taking phase modulation as an example, when the reference light is incident on the CGVH, the diffracted light field can be written from Eq. (6) as

$$\begin{aligned} \textrm{j}\Delta \varphi R &= \textrm{j}\alpha {U_{\textrm{Obj}}}{R^ \ast }R + \textrm{j}\alpha U_{\textrm{Obj}}^ \ast {R^2}\\ &= \textrm{j}\alpha ({U_{\textrm{Obj}}}{R^ \ast })R + \textrm{j}\alpha {[(U_{\textrm{Obj}}^{}{R^ \ast }){R^ \ast }]^ \ast } \end{aligned}$$
where $\mathrm{j\Delta }\varphi R$ is the diffracted light field; $\mathrm{\Delta }\varphi $ is the phase modulation, which can be calculated by Eq. (4). Eq. (16) indicates that the diffraction field of the conjugate term $\textrm{j}\alpha U_{\textrm{Obj}}^\ast {R^2}$ equals the conjugate of $({{U_{\textrm{Obj}}}{R^\ast }} ){R^\ast }$, which is generated by ${R^ \ast }$ incident on the reconstruction term ${U_{\textrm{Obj}}}{R^\ast }$. ${R^\ast }$ is the light whose incidence angle is opposite to that of the reference light, and it obviously does not satisfy the Bragg condition. According to the angle selectivity of the CGVH, the energy of the diffraction field $({{U_{\textrm{Obj}}}{R^\ast }} ){R^\ast }$ obtained with the incident light ${R^\ast }$ is far less than that of the diffraction field $({{U_{\textrm{Obj}}}{R^\ast }} )R$ obtained with the incident light R. Therefore, there is no conjugate image in the CGVH reconstruction.

Secondly, angle selectivity can be applied to discuss the influence of the subsequent layers on the diffracted light field of a front layer. In our proposed CGVH model, the diffracted light field generated by the i-th layer is inevitably modulated by the subsequent layers. Although the modulation factor $\alpha $ is very small, the influence of the subsequent layers should be considered when there are many layers. According to angular spectrum theory, the diffraction field of the i-th layer can be regarded as the superposition of a series of plane waves with different propagation directions, with angular spectrum ${A_i}({{f_x},{f_y}} )$. Moreover, the subsequent (i + 1)-th to n-th layers can be regarded as a CGVH with $n - i$ layers. Therefore, the diffraction field of the i-th layer passing through the subsequent layers can be considered as a series of plane waves passing through a CGVH with $n - i$ layers. Based on the angle selectivity of the CGVH, when the spatial frequencies $({{f_x},{f_y}} )$ of these plane waves are significantly different from the spatial frequency of the reference light, the resulting diffraction field energy is negligible and can be disregarded. In this case, we can approximately consider that the diffracted light field of the i-th layer is not affected by the subsequent layers. Otherwise, when the spatial frequencies $({{f_x},{f_y}} )$ of these plane waves are close to the spatial frequency of the reference light, the generated diffracted light field is large and will affect the diffracted light field of the i-th layer. To avoid such influence, when we calculate the CGVH, the spatial frequency $({{f_x},{f_y}} )$ of the recorded light field should not coincide with the spatial frequency of the reference light, meaning that the propagation direction of the recorded light field should differ from that of the reference light. This causes the direction of the reconstructed image to differ from that of the reference light, but the effect is minimal.

3.3 Reconstruction spatial frequency

A significant advantage of CGVH over planar CGH is that it can break through the limitation of the sampling interval and record and reconstruct light fields with higher spatial frequencies. It is well known that the discrete sampling of a planar CGH produces high-order diffraction, which limits its maximum spatial frequency. In a CGVH, each layer has multiple diffraction orders. However, when the diffracted light fields of the layers are coherently superimposed, only the diffraction orders that satisfy the Bragg condition add coherently with high diffraction efficiency, while the other orders have a phase shift

$$\delta = k\Delta z(\cos {\alpha _1} - \cos {\alpha _0})$$
where ${\alpha _0}$ is the diffraction angle of the recorded order and ${\alpha _1}$ is the diffraction angle of the other order. Similar to the discussion on angle selectivity and wavelength selectivity, the light intensity of the other orders can also be calculated by Eq. (15). As shown in Fig. 6(a), when the number of CGVH layers increases, the recorded 0-order superposes coherently, resulting in a gradual increase in its diffraction energy.
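As an illustration of Eq. (17), the sketch below estimates the relative intensity of a non-recorded grating order m, assuming the per-layer order directions follow the ordinary grating equation sin α_m = sin α_0 + mλ/Δx (our assumption) and reusing the selectivity_intensity() helper defined in Section 3.2.

```python
import numpy as np

def order_intensity(m, n_layers, wavelength, dx, dz, alpha0=0.0):
    """Relative intensity of diffraction order m after coherent summation over n_layers."""
    sin_am = np.sin(alpha0) + m * wavelength / dx   # grating equation for one sampled layer
    if abs(sin_am) > 1.0:
        return 0.0                                   # evanescent order
    alpha_m = np.arcsin(sin_am)
    delta = (2.0 * np.pi / wavelength) * dz * (np.cos(alpha_m) - np.cos(alpha0))  # Eq. (17)
    return selectivity_intensity(n_layers, delta)
```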

Fig. 6. (a) The amplitude of each diffraction order as a function of the number of layers; (b) the reconstructed images of different orders and their intensities as functions of the number of layers; (c) reconstructed images at different viewing angles of CGH and CGVH.


However, the diffraction energies of the other orders fluctuate at a low value. In this way, we can concentrate the diffraction energy on a specific order, thus improving the diffraction efficiency of the CGVH. As shown in Fig. 6(b), we record the image information on the +1 order, and only the +1 order can reconstruct the image; the other orders have minimal diffraction energy. Further, owing to the superposition principle of linear optics, we can superimpose multiple independent CGVHs that record light fields of different orders into a composite CGVH. When we reconstruct the composite CGVH, we see the reconstructed image of a single independent CGVH in each order, unaffected by the other independent CGVHs. By superimposing the CGVHs of different angle images of the same object, we can see multi-angle images of the object over a large continuous range, as shown in the second row of Fig. 6(c). In contrast, we can only see similar images in the different orders of a CGH, as shown in the first row of Fig. 6(c). Therefore, CGVH can realize binocular parallax and expand the motion parallax of holographic display. By superimposing the CGVHs of different objects, we can see different images in different directions, as shown in the third row of Fig. 6(c). It should be noted that in the simulation of Fig. 6(c) we independently normalized the images of each angle, so the light intensity at each angle looks similar. In reality, the larger the angle, the lower the diffraction intensity of a single pixel. This makes the image at a large angle darker than the image at a small angle, but the image quality at a large angle is still as good as in the simulation results. In summary, CGVH is not limited by the sampling interval and can record and reconstruct light fields with higher spatial frequencies.

4. Conclusion

We propose a new CGVH for full-color dynamic holographic 3D display, and an iteration-free layered CGVH generation method. CGVH generated by our method can accurately reconstruct full-color 3D objects, and offers several advantages over planar CGH, including higher imaging quality, more concentrated diffraction energy, and higher reconstruction spatial frequency, and can realize angle multiplexing and wavelength multiplexing. These advantages are expected to solve the key shortcomings of planar CGH displays such as small viewing angle, difficulty in achieving motion parallax, poor imaging quality, and low resolution.

As the next step, we plan to fabricate a CGVH and conduct experiments to further verify the feasibility and superiority of CGVH. Direct-write techniques that use ultrashort laser pulses to modify transparent media have been used to create aperiodic volume holograms [25]. However, this method can only produce binary volume holograms and cannot fully meet the requirements of our CGVH. We have designed another feasible experimental method, which is to use holographic printing technology [26] to print each layer of the CGVH onto the recording medium one by one, and then laminate these layers together to form a CGVH. We plan to implement it in our future work, which will pave the way for designing dynamic volume displays capable of loading CGVHs. We anticipate that in the future CGVH can become the preferred 3D display method, with the best display effect, a large viewing angle, and all the depth information required by human eyes, and will be applied to realize dynamic modulation in various optical fields.

Funding

National Natural Science Foundation of China (61975014, 62035003, U22A2079); Beijing Municipal Science & Technology Commission, Administrative Commission of Zhongguancun Science Park (Z211100004821012).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948). [CrossRef]

2. E. Sahin, E. Stoykova, J. Makinen, and A. Gotchev, “Computer-generated holograms for 3D imaging: a survey,” ACM Comput. Surv. 53(2), 1–35 (2021). [CrossRef]  

3. D. Pi, J. Liu, and Y. Wang, “Review of computer-generated hologram algorithms for color dynamic holographic three- dimensional display,” Light: Sci. Appl. 11(1), 231 (2022). [CrossRef]  

4. K. Yamamoto, Y. Ichihashi, T. Senoh, R. Oi, and T. Kurita, “3D objects enlargement technique using an optical system and multiple SLMs for electronic holography,” Opt. Express 20(19), 21137–21144 (2012). [CrossRef]  

5. H. Sasaki, K. Yamamoto, K. Wakunami, Y. Ichihashi, R. Oi, and T. Senoh, “Large size three-dimensional video by electronic holography using multiple spatial light modulators,” Sci. Rep. 4(1), 6177 (2014). [CrossRef]

6. Y. Sando, D. Barada, and T. Yatagai, “Holographic 3D display observable for multiple simultaneous viewers from all horizontal directions by using a time division method,” Opt. Lett. 39(19), 5555–5557 (2014). [CrossRef]  

7. Y. Lim, K. Hong, H. Kim, H. Kim, E. Chang, S. Lee, T. Kim, J. Nam, H. Choo, J. Kim, and J. Hahn, “360-degree tabletop electronic holographic display,” Opt. Express 24(22), 24999–25009 (2016). [CrossRef]  

8. X. Shi, J. Liu, Z. Zhang, Z. Zhao, and S. Zhang, “Expanding eyebox with tunable viewpoints for see-through near-eye display,” Opt. Express 29(8), 11613–11626 (2021). [CrossRef]  

9. J. An, K. Won, Y. Kim, J. Hong, H. Kim, Y. Kim, H. Song, C. Choi, Y. Kim, J. Seo, A. Morozov, H. Park, S. Hong, S. Hwang, and K. Kim, “Slim-panel holographic video display,” Nat. Commun. 11(1), 5568 (2020). [CrossRef]  

10. S. Kim and J. Park, “Optical see-through Maxwellian near-to-eye display with an enlarged eyebox,” Opt. Lett. 43(4), 767 (2018). [CrossRef]  

11. T. Lin, T. Zhan, J. Zou, F. Fan, and S. Wu, “Maxwellian near-eye display with an expanded eyebox,” Opt. Express 28(26), 38616 (2020). [CrossRef]  

12. J. Yang, J. Choi, J. Pi, C. Hwang, G. H. Kim, W. Lee, H. Kim, K. Choi, Y. Kim, and C. Hwang, “High-resolution spatial light modulator on glass for digital holographic display,” Proc. SPIE 10943, 109430K (2019). [CrossRef]  

13. Z. Wang, L. Cao, H. Zhang, and G. Jin, “Three-Dimensional Display Based on Volume Holography,” Chin. J. Laser 42(9), 0909003 (2015). [CrossRef]  

14. E. N. Kamau, C. Falldorf, and R. B. Bergmann, “A new approach to dynamic wave field synthesis using computer generated volume holograms,” IEEE (2013).

15. K. Curtis, A. Pu, and D. Psaltis, “Method for holographic storage using peristrophic multiplexing,” Opt. Lett. 19(13), 993–994 (1994). [CrossRef]  

16. G. A. Rakuljic, V. Leyva, and A. Yariv, “Optical data storage by using orthogonal wavelength-multiplexed volume holograms,” Opt. Lett. 17(20), 1471 (1992). [CrossRef]  

17. R. R. A. Syms, “Practical volume holography,” Clarendon Press (1990).

18. M. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2(1), 28–34 (1993). [CrossRef]  

19. H. Ma, C. Wei, J. Wei, Y. Han, D. Pi, Y. Yang, W. Zhao, Y. Wang, and J. Liu, “Superpixel-based sub-hologram method for real-time color three-dimensional holographic display with large size,” Opt. Express 30(17), 31287–31297 (2022). [CrossRef]  

20. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34(20), 3133–3135 (2009). [CrossRef]  

21. D. Pi, J. Wang, J. Liu, J. Li, Y. Sun, Y. Yang, W. Zhao, and Y. Wang, “Color dynamic holographic display based on complex amplitude modulation with bandwidth constraint strategy,” Opt. Lett. 47(17), 4379 (2022). [CrossRef]  

22. G. P. Agrawal, “Nonlinear fiber optics,” Nonlinear Science at the Dawn of the 21st Century. Springer, Berlin, Heidelberg (2000).

23. S. Liu, D. Wang, and Q. Wang, “Speckle noise suppression method in holographic display using time multiplexing technique,” Opt. Commun. 436, 253–257 (2019). [CrossRef]  

24. T. D. Gerke and R. Piestun, “Aperiodic volume optics,” Nat. Photonics 4(3), 188–193 (2010). [CrossRef]  

25. E. N. Glezer, M. Milosavljevic, L. Huang, R. J. Finlay, and E. Mazur, “Three-dimensional optical storage inside transparent materials,” Opt. Lett. 21(24), 2023–2025 (1996). [CrossRef]  

26. F. K. Bruder, T. Fäcke, R. Hagen, D. Hönel, T. P. Kleinschmidt, E. Orselli, C. Rewitz, T. Rölle, and G. Walze, “Diffractive optics in large sizes: computer-generated holograms (CGH) based on Bayfol HX photopolymer,” Proc. SPIE 9385, 93850C (2015). [CrossRef]
