Expansion of a vertical effective viewing zone for an optical 360° holographic display

Abstract

Cylindrical holography, as a promising 360° display technology, has attracted considerable attention. In a previous study, an optical 360° cylindrical holographic display was achieved in the visible spectrum using a planar spatial light modulator (SLM) and a 45° conical mirror. Although a 360° viewing zone was successfully achieved in the horizontal direction, the vertical viewing zone in that study remained as narrow as that of planar holography, and its expansion is both necessary and feasible because part of the vertical viewing zone is wasted in application scenarios such as tabletop and ceiling displays. In this paper, we propose a method of expanding the vertical effective viewing zone of an optical 360° holographic display by using a conical mirror with a base angle of less than 45°. The proposed method expands the vertical effective viewing zone by shifting the wasted part of the vertical viewing zone into the effective viewing zone, from the base-angle direction toward the top-angle direction of the conical mirror, by up to a factor of two theoretically. The feasibility and effectiveness of the proposed method are demonstrated by optical experiments. We believe that it would be promising in the field of augmented reality.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Since holography can reconstruct the entire optical wavefield of a three-dimensional (3D) scene and provide the human eye with complete parallax and depth information [1-4], the holographic display is a promising solution for augmented and virtual reality (AR/VR) applications. With the rapid development of computer technology, the computer-generated holographic display is becoming the dominant approach, in which computer-generated holograms (CGHs) are calculated by powerful computers instead of being recorded optically, making them more flexible than optical holograms [5-7]. Usually, a spatial light modulator (SLM) is used to load CGHs and reconstruct them optically [8]. However, limited by semiconductor technology, SLMs with smaller pixel pitches are difficult to manufacture. Current commercial SLMs cannot provide a sufficient diffraction angle, resulting in a narrow viewing zone [9,10], which adversely affects the viewing experience of holographic displays in AR/VR applications.

Many attempts have been devoted to expanding the viewing zone of the holographic display [11-13]. Among the reported methods, the cylindrical computer-generated hologram (CCGH) is considered an effective approach [14-23]. Sando et al. first proposed a fast calculation method for CCGHs based on a convolution algorithm in the cylindrical coordinate system [14]. Yamaguchi et al. realized a CCGH viewable over 360° by printing its segmented fringes with a prototype fringe printer and proposed a fast calculation method using segmentation and a lookup table in the horizontal direction [15]. Jackin et al. proposed another fast calculation method for CCGHs based on wave propagation from cylindrical surfaces in the spectral domain [16]. Sando et al. proposed a CCGH calculation method based on the spectral relation between a 3D object and its diffracted wavefront and used a Bessel function expansion to save computing time and memory [17-19]. Zhao et al. proposed a fast CCGH calculation method using a wave-front recording plane (WRP) [20]. Wang et al. proposed a fast calculation method for CCGHs based on a convolution algorithm between two concentric cylindrical surfaces, analyzed the non-constant obliquity factor, and unified the inside-out propagation (IOP) and outside-in propagation (OIP) models through a unified expression of their obliquity factors [21,22]. However, the optical implementation of CCGHs in the visible spectrum remains a major challenge, for two main reasons: no commercial cylindrical SLM is available, and most of the above studies had to use terahertz wavelengths in simulation, according to the sampling theorem, to avoid massive computation and memory consumption. Recently, Han et al. realized an optical 360° cylindrical holographic display in the visible spectrum using a commercial planar SLM and a 45° conical mirror [23]. Although a 360° horizontal viewing zone was successfully achieved in the visible spectrum, the vertical viewing zone in that work remains as narrow as that of planar holography. Because part of the vertical viewing zone is wasted in application scenarios such as tabletop and ceiling displays, its expansion is both necessary and feasible. Therefore, how to make full use of the vertical viewing zone is an issue worth studying.

In this paper, we propose a method of expanding the vertical effective viewing zone of an optical 360° holographic display by using a conical mirror with a base angle of less than 45°. The proposed method expands the vertical effective viewing zone by shifting the wasted part of the vertical viewing zone into the effective viewing zone, from the base-angle direction toward the top-angle direction of the conical mirror, by up to a factor of two theoretically. The entire diffraction process of the proposed method is redefined, and its diffraction calculation model is given, which consists of three stages: plane-to-plane diffraction, plane-to-cone diffraction, and cone-to-cylinder diffraction. For the plane-to-cone diffraction, an approximation based on planar diffraction is proposed to reduce the computation time, and numerical simulations demonstrate the feasibility of this approximation. Moreover, the feasibility and effectiveness of the proposed expansion method are demonstrated by optical experiments.

2. Principle

In previous research [23], an optical 360° holographic display in the visible spectrum was realized by using a 45° conical mirror. Instead of using only the 45° conical mirror, we now use conical mirrors with other base angles to expand the effective viewing zone in the vertical direction, while the 360° horizontal display is retained. The significant difference is that after the planar lightwave is reflected by a conical mirror with a base angle other than 45°, the isophase surface is no longer a cylindrical surface but a conical surface. Figure 1(a) shows the schematic diagram of the proposed method. In the diffraction model of the proposed method, the entire diffraction process contains three stages, as shown in Fig. 1(b). In the first stage, the light is diffracted from the hologram plane to the middle plane, which is at the top of the conical mirror and parallel to the hologram plane. In the second stage, the light is diffracted from the middle plane to the inner conical surface by the reflection of the conical mirror. In the last stage, the light is diffracted from the inner conical surface to the outer object surface. To calculate the hologram, the inverse diffraction at each stage will be discussed in detail in the subsequent sections. In this model, the hologram plane is a circular plane of radius a. The middle plane is a virtual circular plane of radius a, centered at the apex V of the conical mirror. The circle at the base of the conical mirror is called the refocusing plane, and its radius is a. The length of the hypotenuse of the inner conical surface is also a. In addition, because of the good rotational symmetry of this model, most of the theoretical analysis is performed in a cylindrical coordinate system.

Fig. 1. (a) Schematic diagram of proposed method. (b) Diffraction model of proposed method.

2.1 Diffraction from object surface to inner conical surface

The conical diffraction is discussed in this section. As shown in Fig. 2(a), the object surface is a cylinder and the inner conical surface is part of a cone. To facilitate the computation, the origin O’ of this diffraction step is set in the middle of the truncated cone. Q’(r0, θ0, z0) denotes a source point on the object surface, and Q(rq, θq, zq) denotes a destination point on the inner conical surface.

Fig. 2. (a) Conical diffraction calculation model. (b) Side view of the cone.

The wavefield distributions of the outer and inner surfaces are denoted by u’(θ0, z0) and u(θq, zq), respectively. As shown in Fig. 2(b), the radius rq and the height zq of point Q are related as follows:

$${r_q} = \frac{{{r_1} - {r_2}}}{H} \times {z_q} + \frac{{{r_1} + {r_2}}}{2},$$
where r1 and r2 are the upper and lower radius of the inner conical surface, respectively, and H is its height. In addition, the inclination of the inner conical surface is denoted by α, which is a key factor in expanding the vertical effective viewing zone and will be discussed in detail in Subsection 2.5.

In the cylindrical coordinate system, the conical diffraction formula [24] can be written as:

$$u({\theta _q},{z_q}) = \int\!\!\!\int_S {u^{\prime}({\theta _0},{z_0})} \times h({\theta _q} - {\theta _0},{z_0},{z_q}) \, d{\theta _0}d{z_0},$$
where S denotes the entire object surface. And the point spread function (PSF) h(θ, z0, zq) is defined as:
$$h(\theta ,{z_0},{z_q}) = \frac{1}{{j\lambda }}\frac{{\exp (i\frac{{2\pi }}{\lambda }d(\theta ,{z_0},{z_q}))}}{{d(\theta ,{z_0},{z_q})}},$$
where λ is the wavelength. And the propagation distance d(θ, z0, zq) can be calculated by:
$$d(\theta ,{z_0},{z_q}) = \sqrt {r_0^2 + r_q^2 - 2{r_0}{r_q}\cos ({\theta _q} - {\theta _0}) + {{({z_q} - {z_0})}^2}} .$$

The convolution form of Eq. (2) can be written as:

$$u({\theta _q},{z_q}) = \int {u^{\prime}({\theta _0},{z_0})} { \ast _\theta }h(\theta ,{z_0},{z_q})d{z_0},$$
where ${\ast _\theta}$ denotes the one-dimensional convolution integral in the azimuthal direction. Eq. (5) can be written in the form of the fast Fourier transform (FFT):
$$u({\theta _q},{z_q}) = \int {IFF{T_\theta }[FF{T_\theta }(u^{\prime}({\theta _0},{z_0}))} \times FF{T_\theta }(h(\theta ,{z_0},{z_q}))]d{z_0}.$$
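As a concrete illustration of Eqs. (2)-(6), the following Python sketch (our own illustration, not the authors' code) evaluates the conical diffraction by a circular convolution along the azimuthal direction with FFTs, summed over the source rings. The array shapes, variable names, and constant sampling factors are assumptions; the band-limiting of Eqs. (7)-(9) is omitted here and added in the next sketch.

```python
import numpy as np

def conical_diffraction_fft(u_src, r0, r_dst, z_src, z_dst, wavelength):
    """u_src: (Nz0, Ntheta) complex field on the object cylinder of radius r0.
    r_dst, z_dst: radius and height samples of the destination conical surface (Eq. (1)).
    Returns the (Nzq, Ntheta) field on the inner conical surface, following Eq. (6)."""
    n_z0, n_theta = u_src.shape
    theta = np.linspace(-np.pi, np.pi, n_theta, endpoint=False)
    k = 2.0 * np.pi / wavelength
    u_dst = np.zeros((len(z_dst), n_theta), dtype=complex)
    U_src = np.fft.fft(u_src, axis=1)                    # FFT_theta of every source ring
    for iq, (rq, zq) in enumerate(zip(r_dst, z_dst)):
        for i0, z0 in enumerate(z_src):
            # propagation distance d(theta, z0, zq) of Eq. (4)
            d = np.sqrt(r0**2 + rq**2 - 2.0*r0*rq*np.cos(theta) + (zq - z0)**2)
            h = np.exp(1j * k * d) / (1j * wavelength * d)   # PSF of Eq. (3)
            # circular convolution along theta via FFT, then accumulation over z0 (Eq. (6))
            u_dst[iq] += np.fft.ifft(U_src[i0] * np.fft.fft(h))
    return u_dst
```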

In addition, we need to consider the divergence angle of each source point. The half divergence angle β can be defined as the maximum diffraction angle of the grating [25]:

$$\beta = \arcsin (\frac{\lambda }{{2p}}),$$
where p is the sampling pitch. So a filtering function is required to limit the PSF, which is defined as:
$$g(\theta ,{z_0},{z_q}) = \left\{ \begin{array}{l} 1,|{{\theta_q} - {\theta_0}} |\le \beta \textrm{ }and\textrm{ }|{{z_q} - {z_0}} |\le ({r_q} - {r_0}) \times \tan \beta \\ 0, otherwise \end{array} \right..$$
Therefore, the restricted PSF h’(θ, z0, zq) should be written as:
$$h^{\prime}(\theta ,{z_0},{z_q}) = h(\theta ,{z_0},{z_q}) \cdot g(\theta ,{z_0},{z_q}).$$
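A short sketch of the band-limiting step of Eqs. (7)-(9), under the same assumed sampling as above. Note that the axial bound is written here with the absolute radial separation |rq − r0|, which we assume is the intended reading of Eq. (8).

```python
import numpy as np

def restricted_psf(h, theta, z0, zq, rq, r0, pitch, wavelength):
    """Apply the filter g of Eq. (8) to the PSF h sampled over the azimuthal angles theta."""
    beta = np.arcsin(wavelength / (2.0 * pitch))          # half divergence angle, Eq. (7)
    g = (np.abs(theta) <= beta) & (np.abs(zq - z0) <= np.abs(rq - r0) * np.tan(beta))
    return h * g                                          # restricted PSF h', Eq. (9)
```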

2.2 Diffraction from inner conical surface to middle plane

The diffraction process from the inner conical surface to the middle plane is the crucial step in achieving the 360° display in the horizontal direction and expanding the effective viewing zone in the vertical direction; it is analyzed in detail in this section. Figure 3(a) shows the cone-to-plane diffraction calculation model. The base angle of the conical mirror is γ, and the radius of its base is a. Thus, the height of the conical mirror is a · tanγ, as shown in Fig. 3(b). V is the apex of the conical mirror, and O is the center of its base. The middle plane is a circle of radius a centered at the apex V. The length of the hypotenuse and the base angle of the inner conical surface are a and 2γ, respectively. Meanwhile, its inclination α can be represented by π / 2 - 2γ. Q - M - P represents an optical path in this diffraction process.

Fig. 3. (a) Cone-to-plane diffraction calculation model. (b) Side view.

With the point source (PS) method, the wavefield distribution of the middle plane can be accurately calculated:

$${u_p}({r_p},{\theta _p},{z_p}) = \int\!\!\!\int\limits_\Pi {{u_q}({r_q},{\theta _q},{z_q})\frac{{\exp(jk{L_{pq}})}}{{{L_{pq}}}}} \, ds,$$
where $\mathrm{\Pi }$ represents the inner conical surface. P denotes a point on the middle plane, M a point on the conical mirror, and Q a point on the inner conical surface. In the cylindrical coordinate system, they are expressed as P(rp, θp, zp), M(rm, θm, zm), and Q(rq, θq, zq), respectively. On the conical mirror, the equation zm + rm · tanγ = a · tanγ always holds, so the coordinates of point M can be expressed as (rm, θm, (a - rm) · tanγ), and the normal vector at M is $\mathrm{\vec{n}}$(sinγ, θm, cosγ). As can be seen in Fig. 3, the propagation distance Lpq = Lmp + Lmq, with:
$$\begin{array}{l} {L_{mp}} = \sqrt {r_p^2 + r_m^2 - 2{r_p}{r_m}\cos ({\theta _p} - {\theta _m}) + {{({z_p} + ({r_m} - a)\tan \gamma )}^2}} ,\\ {L_{mq}} = \sqrt {r_q^2 + r_m^2 - 2{r_q}{r_m}\cos ({\theta _q} - {\theta _m}) + {{({z_q} + ({r_m} - a)\tan \gamma )}^2}}.\end{array}$$
The coordinates of points P and Q are known. To obtain the propagation distance Lpq, it is also necessary to solve for the coordinates of point M, i.e., for the unknowns rm and θm. According to Fermat's principle:
$$\frac{{\partial {L_{pq}}({r_m},{\theta _m})}}{{\partial {\theta _m}}} = \frac{{{r_p}\sin ({\theta _p} - {\theta _m})}}{{{L_{mp}}}} + \frac{{{r_q}\sin ({\theta _q} - {\theta _m})}}{{{L_{mq}}}} = 0,$$
$$\begin{array}{c} \frac{{\partial {L_{pq}}({r_m},{\theta _m})}}{{\partial {r_m}}} = \frac{{{r_p}\cos ({\theta _p} - {\theta _m}) - ({z_p}\tan \gamma + {r_m}(1 + {{\tan }^2}\gamma ) - a{{\tan }^2}\gamma )}}{{{L_{mp}}}} + \\ \frac{{{r_q}\cos ({\theta _q} - {\theta _m}) - ({z_q}\tan \gamma + {r_m}(1 + {{\tan }^2}\gamma ) - a{{\tan }^2}\gamma )}}{{{L_{mq}}}} = 0. \end{array}$$
The coordinates of point M can be found by solving Eqs. (12) and (13), and the propagation distance Lpq can then be obtained. Consequently, the PS method can be used to calculate the wavefield distribution of the middle plane. However, both solving this system of equations and evaluating the diffraction with the PS method are time-consuming.
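To make this root-finding step concrete, the sketch below (our own illustration, not the authors' implementation) locates the reflection point M by solving Eqs. (12) and (13) with a generic solver and then returns Lpq = Lmp + Lmq. The initial guess and the coordinate convention (z measured from the mirror base) are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import fsolve

def reflection_point(P, Q, a, gamma):
    """P = (rp, thp, zp) on the middle plane, Q = (rq, thq, zq) on the inner cone."""
    rp, thp, zp = P
    rq, thq, zq = Q
    t = np.tan(gamma)

    def lengths(rm, thm):
        # z_p - z_m and z_q - z_m with z_m = (a - r_m) * tan(gamma), as in Eq. (11)
        dz_p = zp + (rm - a) * t
        dz_q = zq + (rm - a) * t
        Lmp = np.sqrt(rp**2 + rm**2 - 2*rp*rm*np.cos(thp - thm) + dz_p**2)
        Lmq = np.sqrt(rq**2 + rm**2 - 2*rq*rm*np.cos(thq - thm) + dz_q**2)
        return Lmp, Lmq

    def equations(x):
        rm, thm = x
        Lmp, Lmq = lengths(rm, thm)
        f1 = rp*np.sin(thp - thm)/Lmp + rq*np.sin(thq - thm)/Lmq              # Eq. (12)
        f2 = ((rp*np.cos(thp - thm) - (zp*t + rm*(1 + t**2) - a*t**2))/Lmp
              + (rq*np.cos(thq - thm) - (zq*t + rm*(1 + t**2) - a*t**2))/Lmq)  # Eq. (13)
        return [f1, f2]

    rm, thm = fsolve(equations, x0=[0.5 * a, 0.5 * (thp + thq)])  # assumed initial guess
    Lmp, Lmq = lengths(rm, thm)
    return rm, thm, Lmp + Lmq                                     # L_pq = L_mp + L_mq
```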

To reduce the computation time, the bottom surface of the conical mirror, called the refocusing plane, can be used to approximate the inner conical surface. In this way, planar diffraction can be used instead of cone-to-plane diffraction. Point $\mathrm{\tilde{Q}}$(a - zq / sin2γ, θq, 0) is the mapping of Q onto the refocusing plane. Although $\mathrm{P\tilde{Q}} = \mathrm{PQ}$ holds when θp = θq, it no longer holds when θp ≠ θq; fortunately, an approximation is still possible because Lpq in the denominator of Eq. (10) varies far more slowly than the phase factor exp(jkLpq). The approximation holds when the approximate propagation distance ${L_{p\tilde{q}}}$ and the actual propagation distance Lpq satisfy:

$$k|{{L_{pq}} - {L_{p\widetilde q}}} |\ll 2\pi .$$
When Eq. (14) is satisfied, the difference between the approximate and actual phases is very small, and the approximation is considered reasonable. Therefore, the diffraction from the inner conical surface to the hologram plane can be approximated as planar diffraction from the refocusing plane to the hologram plane. The feasibility of this approximation is further discussed in Subsection 3.1.
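A minimal numerical check of Eq. (14) might look as follows. Here Lpq would come from the root-finding sketch above, the mapped point $\mathrm{\tilde{Q}}$ follows the expression given in the text, and the tolerance (a phase error below 0.2π) as well as the base-plane z reference are assumptions of this sketch.

```python
import numpy as np

def mapped_distance(P, Q, a, gamma):
    """Straight-line distance from P to the mapped point Q~ = (a - zq/sin(2*gamma), thq, 0)."""
    rp, thp, zp = P                        # zp measured from the mirror base (assumed)
    rq, thq, zq = Q
    r_qt = a - zq / np.sin(2.0 * gamma)    # radius of Q~ on the refocusing plane
    return np.sqrt(rp**2 + r_qt**2 - 2.0*rp*r_qt*np.cos(thp - thq) + zp**2)

def approximation_ok(L_pq, L_pq_tilde, wavelength, tol=0.1):
    """True if the phase error k|L_pq - L_pq~| is well below 2*pi, cf. Eq. (14)."""
    k = 2.0 * np.pi / wavelength
    return k * abs(L_pq - L_pq_tilde) < tol * 2.0 * np.pi
```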

2.3 Diffraction from middle plane to hologram plane

Despite the good rotational symmetry of the model, current commercial SLMs mainly have rectangular pixel grids, which requires converting the cylindrical coordinate system to the Cartesian coordinate system. Although this conversion is mathematically exact, it inevitably produces some errors in discrete calculations.

The diffraction from the middle plane to the hologram plane in the Cartesian coordinate system is shown in Fig. 4, and the planar diffraction is calculated by the angular spectrum method (ASM), which is expressed by:

$$U^{\prime}(x^{\prime},y^{\prime}) = IFFT\{ FFT\{ U(x,y)\} \cdot {H_f}({f_x},{f_y})\} ,$$
where U’(x’, y’) and U(x, y) denote the wavefield distributions of the hologram plane and the middle plane, respectively. FFT and IFFT denote the fast Fourier transform and inverse fast Fourier transform algorithm, respectively. And Hf (fx, fy) is the transfer function given by:
$${H_f}({f_x},{f_y}) = \left\{ \begin{array}{l} \exp (i\frac{{2\pi }}{\lambda }z\sqrt {1 - {{(\lambda {f_x})}^2} - {{(\lambda {f_y})}^2}} ),\,\, if \sqrt {f_x^2 + f_y^2} < \frac{1}{\lambda }\\ 0, \qquad \qquad otherwise \end{array} \right.,$$
where λ is the wavelength, z is the propagation distance, and fx, fy are spatial frequencies.
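The planar step of Eqs. (15)-(16) is the standard band-limited angular spectrum method; a compact sketch (assuming square sampling with pitch `pitch` and omitting the zero padding discussed in Subsection 2.4) is:

```python
import numpy as np

def asm_propagate(U, pitch, wavelength, z):
    """Propagate the field U (ny, nx) over a distance z with the angular spectrum method."""
    ny, nx = U.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX)**2 - (wavelength * FY)**2
    # transfer function of Eq. (16); evanescent components are set to zero
    Hf = np.where(arg > 0,
                  np.exp(1j * (2.0 * np.pi / wavelength) * z * np.sqrt(np.maximum(arg, 0.0))),
                  0.0)
    return np.fft.ifft2(np.fft.fft2(U) * Hf)              # Eq. (15)
```

For the inverse diffraction used in hologram calculation, the same routine can be called with a negative propagation distance z.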

Fig. 4. Planar diffraction calculation model.

2.4 Sampling conditions

Discrete computation is necessary in practice, and discretization should be implemented correctly based on the sampling theorem. In the diffraction from the middle plane to the hologram plane, the transfer function in ASM can only be correctly sampled when [26]:

$$z \le \frac{{(N + {N_p})\varDelta _x^2}}{\lambda }\sqrt {1 - {{(\frac{\lambda }{{2{\varDelta _x}}})}^2}} ,$$
where N and Np represent the sampling number of the input field and the number of padded zeros, respectively. Δx is the sampling interval in the x direction. The same condition needs to be satisfied in the y direction.

In the diffraction from the inner conical surface to the object surface, the Nyquist theorem must be satisfied in both the azimuthal and the vertical directions. Since the spatial frequency of the object function u’(θ0, z0) is low relative to that of h’(θ, z), the maximum spatial frequency depends only on h’(θ, z). Moreover, the factor 1 / (jλd) in h’(θ, z) varies spatially far more slowly than exp(jkd). Thus, the azimuthal and vertical local spatial frequencies can be expressed respectively as:

$$\begin{array}{l} {f_\theta }(\theta ,z) \approx \frac{1}{{2\pi }}\frac{{\partial h^{\prime}(\theta ,z)}}{{\partial \theta }} \approx \frac{1}{\lambda }\frac{\partial }{{\partial \theta }}[d \times g(\theta ,z)],\\ {f_z}(\theta ,z) \approx \frac{1}{{2\pi }}\frac{{\partial h^{\prime}(\theta ,z)}}{{\partial z}} \approx \frac{1}{\lambda }\frac{\partial }{{\partial z}}[d \times g(\theta ,z)]. \end{array}$$
Considering that the half divergence angle β is very small, fθ (θ, z) reaches its maximum when θ = β, z = 0, and rq = r2, while fz (θ, z) reaches its maximum when θ = 0 and z = (r2 - r0) × tanβ:
$${|{{f_\theta }} |_{\max }} = \frac{{{r_0}{r_2}\beta }}{{\lambda ({r_0} - {r_2})}},{|{{f_z}} |_{\max }} = \frac{\beta }{{\lambda \sqrt {1 + {\beta ^2}} }}.$$
Therefore, the minimum numbers of samples Nθ and Nz are:
$$\begin{array}{l} {|{{N_\theta }} |_{\textrm{min}}} = \frac{{2\pi }}{{{{(2{{|{{f_\theta }} |}_{\max }})}^{ - 1}}}} = \frac{{4\pi {r_0}{r_2}\beta }}{{\lambda ({r_0} - {r_2})}},\\ {|{{N_z}} |_{\min }} = \frac{{a\sin 2\gamma }}{{{{(2{{|{{f_z}} |}_{\max }})}^{ - 1}}}} = \frac{{2a\beta \sin 2\gamma }}{{\lambda \sqrt {1 + {\beta ^2}} }}. \end{array}$$
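Plugging in the experimental parameters quoted later in the paper (8 µm pitch, 671 nm, a = 4.32 mm, r0 = 10 mm), the limits of Eqs. (17) and (20) can be evaluated numerically; the padding Np and the lower cone radius r2 in this sketch are assumptions of ours.

```python
import numpy as np

wavelength = 671e-9            # illumination wavelength (m)
pitch = 8e-6                   # SLM pixel pitch (m)
N, Np = 1080, 1080             # input samples and padded zeros per axis (assumed)
a = 4.32e-3                    # base radius of the conical mirror (m)
gamma = np.deg2rad(44.0)       # base angle of the conical mirror
r0, r2 = 10e-3, 2e-3           # object radius and lower radius of the inner cone (r2 assumed)

# Eq. (17): maximum distance for a correctly sampled ASM transfer function
z_max = (N + Np) * pitch**2 / wavelength * np.sqrt(1 - (wavelength / (2 * pitch))**2)

# Eqs. (7), (19), (20): minimum sample numbers for the conical diffraction step
beta = np.arcsin(wavelength / (2 * pitch))
N_theta_min = 4 * np.pi * r0 * r2 * beta / (wavelength * (r0 - r2))
N_z_min = 2 * a * beta * np.sin(2 * gamma) / (wavelength * np.sqrt(1 + beta**2))

print(f"z_max = {z_max*1e3:.0f} mm, N_theta >= {N_theta_min:.0f}, N_z >= {N_z_min:.0f}")
```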

2.5 Expansion of the vertical effective viewing zone

The benefit of using a conical mirror with another base angle is that the effective viewing zone in the vertical direction can be expanded while the horizontal 360° display is preserved. The expansion of the vertical effective viewing zone is illustrated theoretically in this section.

As shown in Fig. 5, in practical application scenarios such as tabletop and ceiling displays, the viewing zone consists of three parts: the expanded part, the overlapping part, and the useless part. The expanded part is the part that can be gained by the proposed method, and the useless part is the part that is hard to observe in practice. There are two main reasons why this part of the viewing zone is considered useless: first, if a 45° conical mirror is used, part of the wavefront is blocked by the table or ceiling, so that part of the viewing zone is wasted; second, the viewing zone below the table is of little interest compared with the zone above it, and the viewing zone above the ceiling is unlikely to be used. When the planar lightwave is reflected by a 45° conical mirror, the isophase surface becomes a cylindrical surface, and its vertical viewing zone angle Φ is determined by the maximum diffraction angle φdiff_max of the SLM:

$$\Phi = 2{\varphi _{diff\_max}} = 2\arcsin (\frac{\lambda }{{2p}}),$$
where λ is the illumination wavelength and p is the pixel pitch of the SLM. The pixel pitch of existing commercial SLMs is typically a few micrometers; for example, the pixel pitch of the Holoeye phase-only SLM (Pluto) is 8 µm. When an SLM with an 8 µm pixel pitch is illuminated with 671 nm red light, the viewing zone angle is about 4.8°. However, in the practical applications above, half of the viewing zone becomes useless, and the effective viewing zone angle ΦE accounts for only half of the viewing zone angle, i.e., it equals the maximum diffraction angle φdiff_max:
$${\Phi _E} = {\varphi _{diff\_max}}.$$
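For reference, the numbers quoted above follow directly from Eqs. (21)-(22); a two-line calculation (parameter values taken from the text) gives:

```python
import numpy as np

wavelength, pitch = 671e-9, 8e-6                      # 671 nm light, 8 um pixel pitch
phi_diff_max = np.degrees(np.arcsin(wavelength / (2 * pitch)))
print(f"Phi   = {2 * phi_diff_max:.1f} deg")          # full vertical viewing zone, Eq. (21)
print(f"Phi_E = {phi_diff_max:.1f} deg")              # effective part only, Eq. (22)
```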

Fig. 5. Expansion of vertical effective viewing zone in practical application scenarios.

If a conical mirror with a base angle γ of less than 45° is used, the isophase surface becomes a conical surface, and the effective viewing zone angle equals the maximum diffraction angle φdiff_max plus the inclination α of the conical surface. Since the inclination α can be represented by π / 2 - 2γ, the effective viewing zone angle after expansion can be expressed as:

$$\Phi _E^{\exp } = \frac{\pi }{2} - 2\gamma + {\varphi _{diff\_max}}.$$
In addition, to obtain an expanded effective viewing zone angle in the above practical applications, inclination α and the maximum diffraction angle φdiff_max should satisfy α ≤ φdiff_max, namely:
$$\frac{\pi }{2} - 2\gamma \le {\varphi _{diff\_max}}.$$
When π / 2 - 2γ = φdiff_max, the effective viewing zone angle can be expanded by up to two times:
$$\Phi _E^{\exp } = 2{\varphi _{diff\_max}} = 2{\Phi _E}.$$

Therefore, a conical mirror with a suitable base angle can be selected to increase the effective viewing zone according to conditions such as the pixel pitch of the SLM or the illumination wavelength. With our proposed method, the effective viewing zone in the vertical direction can be expanded by up to a factor of two.
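A small helper of our own (not part of the paper's code) that applies Eqs. (23)-(25): it returns the base angle at which the inequality of Eq. (24) becomes an equality, i.e. the angle giving the maximal (doubled) effective viewing zone for a given pixel pitch and wavelength.

```python
import numpy as np

def base_angle_for_max_expansion(pitch, wavelength):
    """Base angle gamma (deg) with pi/2 - 2*gamma = phi_diff_max, and the expanded zone (deg)."""
    phi = np.arcsin(wavelength / (2.0 * pitch))       # maximum diffraction angle, Eq. (21)
    gamma = (np.pi / 2.0 - phi) / 2.0                 # equality case of Eq. (24)
    return np.degrees(gamma), np.degrees(2.0 * phi)   # base angle, Phi_E^exp of Eq. (25)

print(base_angle_for_max_expansion(8e-6, 671e-9))     # roughly (43.8, 4.8)
```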

3. Results

3.1 Feasibility of approximating inner conical surface with refocusing plane

As mentioned in Section 2.2, the diffraction from the inner conical surface to the hologram plane can be approximated as the planar diffraction from the refocusing plane to the hologram plane. The feasibility of this approximation will be discussed in this section.

The simulation conditions are as follows: the radius a and the height zp’ of the hologram plane are 4.32 mm and 300 mm, respectively; the wavelength λ is 671 nm; and the resolution of both the hologram plane and the inner conical surface is 64 × 64 (i.e., 64^4 ≈ 17 M rays). As shown in Fig. 6, the horizontal coordinate is the error rate $|{{L_{pq}} - {L_{p\tilde{q}}}}|/\lambda$, and the vertical coordinate is the cumulative percentage of rays whose error rate does not exceed this value, out of the total number of rays.

Fig. 6. Cumulative percentage of rays not exceeding this error rate out of the total number of rays with different base angles.

It can be seen from the red box in Fig. 6 that the error decreases as the base angle of the conical mirror decreases. Taking the 44° conical mirror (2γ = 88°, α = 2°, blue line) as an example, the point (0.05, 79.51%) on the blue line indicates that 79.51% of the rays have an error rate of less than 0.05, while the percentage of rays with an error rate below 0.2 reaches 95.43%, indicating that the majority of the 17 M rays satisfy the approximation condition proposed in Eq. (14).

To further demonstrate the feasibility of this approximate method, 200 images are randomly selected from the MNIST database [27] as target images on the inner conical surface. Holograms are obtained by planar diffraction (the approximate method) and by cone-to-plane diffraction (the PS method), respectively, and both are reconstructed by cone-to-plane diffraction. For the middle plane of the entire diffraction process, we pay more attention to the similarity of the complex amplitudes. Hence, we choose the correlation coefficient (CC) as the evaluation index, expressed as follows:

$$CC = \frac{{\sum\limits_{m = 1}^M {\sum\limits_{n = 1}^N {[{R_a}(m,n) - \overline {{R_a}} ][{R_p}(m,n) - \overline {{R_p}} ]} } }}{{\sqrt {\sum\limits_{m = 1}^M {\sum\limits_{n = 1}^N {{{[{R_a}(m,n) - \overline {{R_a}} ]}^2}\cdot \sum\limits_{m = 1}^M {\sum\limits_{n = 1}^N {{{[{R_p}(m,n) - \overline {{R_p}} ]}^2}} } } } } }},$$
where M and N are the numbers of pixels in the azimuthal and vertical directions, respectively, and (m, n) is the pixel coordinate. Ra and Rp represent the real parts of the reconstructed complex amplitudes obtained with the approximate and PS methods, respectively, and $\overline{{R_a}}$ and $\overline{{R_p}}$ are their averages. In general, a larger CC indicates a better correlation between the two reconstructed complex amplitudes.
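Eq. (26) is the standard zero-mean correlation coefficient; in NumPy it reduces to a few lines (array names are assumptions of this sketch):

```python
import numpy as np

def correlation_coefficient(Ra, Rp):
    """CC of Eq. (26) between the real parts of two reconstructed complex amplitudes."""
    Ra = Ra - Ra.mean()
    Rp = Rp - Rp.mean()
    return np.sum(Ra * Rp) / np.sqrt(np.sum(Ra**2) * np.sum(Rp**2))
```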

As can be seen from Table 1, the real parts of the results reconstructed from the complex-amplitude hologram without a random phase, denoted Complex w/o RP, have a high CC index, i.e., a high degree of similarity. In reality, however, the SLM can only modulate the amplitude or the phase alone, and in order to obtain its maximum viewing zone, random phases need to be added to the object image. Therefore, the CC index of the real parts of the results reconstructed from the phase-only hologram obtained by amplitude truncation with a random phase, denoted Phase-only w/ RP, is more noteworthy. Compared with the results of Complex w/o RP, the CC index of the real parts of the Phase-only w/ RP results decreases by about 0.04. This is because some valuable information is inevitably lost during amplitude truncation. In addition, adding a random phase inevitably introduces some errors compared with a constant phase, as the phase is very sensitive. Nonetheless, the reconstructed results of the PS and approximate methods with Phase-only w/ RP still maintain good similarity.

Table 1. Correlation coefficient of real parts of reconstructed complex amplitudes

To illustrate this similarity more intuitively, we take one set of simulations (2γ = 88°) as an example. As shown in Fig. 7, the digit “3” is selected as the object image. Figures 7(b)-1 and (b)-2 show the amplitudes of the results reconstructed with the approximate and PS methods, respectively, and Figs. 7(c)-1 and (c)-2 show the corresponding real parts. The column pixel-mean curves are plotted in Fig. 7(d), showing the agreement between the real parts of the two reconstructions. The two curves fit well, indicating that the proposed approximation method is feasible.

Fig. 7. Reconstructed results by the phase-only hologram obtained by the amplitude-truncation with the random phase. (a)-1 and (a)-2 are the object image and the hologram obtained by the approximate method, respectively. (b)-1 and (b)-2 are the amplitudes of the reconstructed results with approximate and PS methods, respectively. (c)-1 and (c)-2 are the real parts of the reconstructed results. (d) Column pixel-mean curves of the real parts of the reconstructed results with the approximate and PS methods, respectively.

3.2 Optical experiments

In this section, the feasibility and effectiveness of the proposed expansion method are demonstrated by optical experiments. The optical reconstruction system is shown in Fig. 8. The experimental conditions are as follows: a phase-only SLM (8 µm pixel pitch, 1920 × 1080 pixels) is used; a 671 nm all-solid-state laser is used as the light source; a 4-f filter system is applied to remove the zero-order light and higher-order diffraction images; two conical mirrors with base angles of 45° and 44° are used to reflect the planar lightwave; a 3D-printed cylindrical receiver is used to receive the reconstructed image; and a camera (Nikon D810) is used to capture it.

Fig. 8. Optical reconstruction system.

The optical results of the 360° display are shown in Fig. 9. A letter image (Fig. 9(a)) is used as the object image with a resolution of 3840 × 540. The radius r0 of the outer object surface is 10 mm, the radius a of the base of the conical mirror is 4.32 mm, and the height zp’ of the hologram plane is 300 mm. The reconstructed image is first imaged on a cylindrical receiver and then captured by the camera. The figures show the reconstructed images from four angles: 0°, 90°, 180°, and 270°. Figures 9(b)1-4 and (c)1-4 are the results of the original method and the proposed method using a 45° conical mirror, respectively, and Figs. 9(d)1-4 are the results of the proposed method using a 44° conical mirror. A 360° dynamic holographic video, obtained with the proposed method using the 44° conical mirror, is presented in Visualization 1. The object image is well reconstructed optically by the proposed method with conical mirrors of different base angles, and the images from these viewing angles are consistent with expectations. This demonstrates that the proposed method applies to conical mirrors with different base angles, including the 45° conical mirror; in other words, it is more general and suited to different practical applications.

Fig. 9. Imaging on a cylindrical receiver. (a) object image. (b)1-4 and (c)1-4 are with original method and proposed method using a 45° conical mirror, respectively. (d)1-4 are with proposed method using a 44° conical mirror.

To verify whether the proposed method is capable of expanding the vertical effective viewing zone, a threadlike object and the letter “E” are used as object images. Figure 10 schematically illustrates the experimental setup for the viewing zone measurement. The image is captured by a camera moving vertically along a circular path; the measured distance from the object to the camera is about 400 mm. As shown in Fig. 11, Figs. 11(a)1-5 and (b)1-5 are the results of the original method and the proposed method using a 45° conical mirror, respectively, and Figs. 11(c)1-5 are the results of the proposed method using a 44° conical mirror. The reconstructed image is in the green box, and the change of the calibration target in the red box indicates the change of the capturing angle; the calibration itself has no practical significance and is only used to indicate the shooting angle. Compared with the 45° conical mirror, when a 44° conical mirror is used, the viewing zone changes from −5° (−35 mm) ∼ +5° (+35 mm) to −3° (−21 mm) ∼ +7° (+49.1 mm), an overall shift of 2° (14 mm) in the positive direction. In the practical applications mentioned in Subsection 2.5, only the positive viewing zone is effective. Therefore, compared with the 45° conical mirror, the effective viewing zone angle is expanded by 2° when the 44° conical mirror is used. A more visual sense of this expansion is given in Figs. 11(d)-1 and (d)-2: at the extreme viewing angle position of the 45° conical mirror, the threadlike object can still be clearly seen with the 44° conical mirror. For the letter “E”, as shown in Fig. 12, moving the camera vertically along a circular path and rotating around the middle line of “E” gives the same result. This indicates that with the proposed method, the viewing zone can be shifted to where it is needed, i.e., the effective viewing zone can be expanded.

Fig. 10. Schematic diagram of measurement of viewing zone.

Fig. 11. Threadlike object captured from different viewpoints in vertical direction. (a)1-5 and (b)1-5 are of original method and proposed method using a 45° conical mirror, respectively. (c)1-5 are of proposed method using a 44° conical mirror. (d)1 and (d)2 are captured at angle of view 5° with proposed method using 45° and 44° conical mirrors, respectively.

Fig. 12. Letter “E” captured from different viewpoints in vertical direction. (a)1-5 and (b)1-5 are of original method and proposed method using a 45° conical mirror, respectively. (c)1-5 are of proposed method using a 44° conical mirror. (d)1 and (d)2 are captured at angle of view 5° with proposed method using 45° and 44° conical mirrors, respectively.

In addition, an experimental detail is worth explaining. As the camera moves vertically along the circular path, the captured image changes from short to long and then from long to short over a total range of about 10°, which seems inconsistent with the theoretical viewing zone angle of 4.8°. As shown in Fig. 10, this is because the camera lens has a finite aperture, which increases the angular range over which the object remains visible. While the camera moves, the center line of the camera lens always faces the threadlike object. When the center line of the lens is outside the viewing zone, more of the incident light is blocked by the aperture, so the captured image becomes shorter. Therefore, the true viewing zone of the reconstructed image is the range of views over which the length of the captured image does not change; it is about 5°, in line with the theoretical result, as shown in Figs. 11(c)2-4 and Figs. 12(c)2-4.

To verify whether the proposed method expands the vertical effective viewing zone over the full 360° display, we capture the image from various angles in the horizontal direction. The image is reconstructed using a 44° conical mirror. As shown in Fig. 13, the images “F”, “H”, “L”, and “E” are captured from four horizontal angles of 0°, 90°, 180°, and 270°, and the first and second rows are captured at vertical viewing angles of 4.5° and −0.5°, respectively. The optical results show that, at each horizontal angle, we obtain the same vertical viewing zone as in Figs. 11(c)2-4 and Figs. 12(c)2-4 above.

Fig. 13. Reconstructed image captured from four angles of 0°, 90°, 180°, and 270° in horizontal direction. (a)1-4 and (b)1-4 are captured at angle of view 4.5° and −0.5° in vertical direction, respectively.

The time-multiplexing (TM) method can be used to suppress speckle noise and improve the reconstruction quality, as shown in Fig. 14. Figures 14(a)1-4 are the reconstructed results of the phase-only hologram with a random phase when using a 44° conical mirror. Figures 14(b)1-4 show the results of applying TM with 30 multiplexed frames; it is clear from the green box that the speckle noise is well suppressed. Figures 14(c)-1 and (c)-2 are the reconstructed images captured directly by the camera without and with the TM method, respectively.

Fig. 14. Reconstructed results of phase-only hologram with a random phase (a) without and (b) with TM method (30 multiplexings). (c)1-2 are captured directly by digital camera without and with TM method.

4. Discussion

In the practical applications mentioned in Subsection 2.5, when a 45° conical mirror is used, the vertical viewing zone of the proposed method inherits the viewing zone of the planar hologram, but part of it becomes useless. With the proposed method, the viewing zone can be shifted to where it is needed, thus expanding the effective viewing zone by up to a factor of two. Figure 15 shows the maximum expansion of the effective viewing zone angle for SLMs with different pixel pitches. When an SLM with an 8 µm pixel pitch is illuminated with 671 nm red light and a 43.8° conical mirror is used, the effective viewing zone angle can be expanded from 2.4° to 4.8°; when an SLM with a 3.74 µm pixel pitch is illuminated with 671 nm red light and a 42.45° conical mirror is used, the effective viewing zone angle can be expanded from 5.1° to 10.2°. In addition, the capability of the proposed method can be further enhanced by combining it with planar viewing-zone expansion methods [28-30].
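The two examples above follow from the equality case of Eq. (24); a short sketch reproduces them (values are computed directly, so minor rounding may differ from the figures quoted in the text):

```python
import numpy as np

wavelength = 671e-9
for pitch_um in (8.0, 3.74):
    phi = np.degrees(np.arcsin(wavelength / (2.0 * pitch_um * 1e-6)))
    gamma = (90.0 - phi) / 2.0                        # base angle for maximal expansion
    print(f"pitch {pitch_um} um: gamma = {gamma:.2f} deg, "
          f"Phi_E expands from {phi:.1f} deg to {2.0 * phi:.1f} deg")
```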

Fig. 15. Effective viewing zone angle that can be expanded at most using SLM with different pixel pitches.

5. Conclusion

In this work, a method of expanding the vertical effective viewing zone of an optical 360° holographic display is proposed, based on a conical mirror with a base angle of less than 45°. With this method, in some practical applications, the vertical effective viewing zone can theoretically be expanded by up to a factor of two, and the experimental results demonstrate the feasibility and effectiveness of the proposed method. Moreover, compared with using only a 45° conical mirror, the proposed method is more general: a suitable conical mirror can be chosen to expand the vertical viewing zone according to the application scenario. We believe that the proposed method would be promising in the AR field.

Funding

National Natural Science Foundation of China (62275178, U1933132); Chengdu Science and Technology Program (2022-GH02-00016-HZ).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. F. Yaras, H. Kang, and L. Onural, “State of the Art in Holographic Displays: A Survey,” J. Disp. Technol. 6(10), 443–454 (2010).

2. J. Hong, Y. Kim, H.-J. Choi, J. Hahn, J.-H. Park, H. Kim, S.-W. Min, N. Chen, and B. Lee, “Three-dimensional display technologies of recent interest: principles, status, and issues [Invited],” Appl. Opt. 50(34), H87–H115 (2011).

3. J. Park, H. Kang, E. Stoykova, Y. Kim, S. Hong, Y. Choi, Y. Kim, S. Kwon, and S. Lee, “Numerical reconstruction of a full parallax holographic stereogram with radial distortion,” Opt. Express 22(17), 20776–20788 (2014).

4. D. Zheng, W. Wang, S. Wang, D. Qu, H. Liu, Y. Kong, S. Liu, S. Chen, R. Rupp, and J. Xu, “Real-time dynamic holographic display realized by bismuth and magnesium co-doped lithium niobate,” Appl. Phys. Lett. 114(24), 241903 (2019).

5. Y.-Z. Liu, J.-W. Dong, Y.-Y. Pu, B.-C. Chen, H.-X. He, and H.-Z. Wang, “High-speed full analytical holographic computations for true-life scenes,” Opt. Express 18(4), 3345–3351 (2010).

6. D. Blinder and P. Schelkens, “Phase added sub-stereograms for accelerating computer generated holography,” Opt. Express 28(11), 16924–16934 (2020).

7. Z. Wang, L. M. Zhu, X. Zhang, P. Dai, G. Q. Lv, Q. B. Feng, A. T. Wang, and H. Ming, “Computer-generated photorealistic hologram using ray-wavefront conversion based on the additive compressive light field approach,” Opt. Lett. 45(3), 615–618 (2020).

8. M. Kovachev, R. Ilieva, P. Benzie, G. B. Esmer, L. Onural, J. Watson, and T. Reyhan, “Holographic 3DTV displays using spatial light modulators,” in Three-Dimensional Television: Capture, Transmission, Display, H. M. Ozaktas and L. Onural, eds. (Springer-Verlag, 2007), pp. 529–556.

9. L. Xu, C. Chang, S. Feng, C. Yuan, and S. Nie, “Calculation of computer-generated hologram (CGH) from 3D objects of arbitrary size and viewing angle,” Opt. Commun. 402, 211–215 (2017).

10. J. Hahn, H. Kim, Y. Lim, G. Park, and B. Lee, “Wide viewing angle dynamic holographic stereogram with a curved array of spatial light modulators,” Opt. Express 16(16), 12372–12386 (2008).

11. B. G. Chae, “Wide viewing-angle holographic display based on enhanced-NA Fresnel hologram,” Opt. Express 29(23), 38221–38236 (2021).

12. R. Kang, J. Liu, D. Pi, and X. Duan, “Fast method for calculating a curved hologram in a holographic display,” Opt. Express 28(8), 11290–11300 (2020).

13. D. Wang, N. N. Li, Z. S. Li, C. Chen, B. Lee, and Q. H. Wang, “Color curved hologram calculation method based on angle multiplexing,” Opt. Express 30(2), 3157–3171 (2022).

14. Y. Sando, M. Itoh, and T. Yatagai, “Fast calculation method for cylindrical computer-generated holograms,” Opt. Express 13(5), 1418–1423 (2005).

15. T. Yamaguchi, T. Fujii, and H. Yoshikawa, “Fast calculation method for computer-generated cylindrical holograms,” Appl. Opt. 47(19), D63–D70 (2008).

16. B. Jackin and T. Yatagai, “Fast calculation method for computer-generated cylindrical hologram based on wave propagation in spectral domain,” Opt. Express 18(25), 25546–25555 (2010).

17. Y. Sando, D. Barada, and T. Yatagai, “Fast calculation of computer-generated holograms based on 3-D Fourier spectrum for omnidirectional diffraction from a 3-D voxel-based object,” Opt. Express 20(19), 20962–20969 (2012).

18. Y. Sando, D. Barada, and T. Yatagai, “Hidden surface removal of computer-generated holograms for arbitrary diffraction directions,” Appl. Opt. 52(20), 4871–4876 (2013).

19. Y. Sando, D. Barada, B. Jackin, and T. Yatagai, “Bessel function expansion to reduce the calculation time and memory usage for cylindrical computer-generated holograms,” Appl. Opt. 56(20), 5775–5780 (2017).

20. Y. Zhao, M. Piao, G. Li, and N. Kim, “Fast calculation method of computer-generated cylindrical hologram using wave-front recording surface,” Opt. Lett. 40(13), 3017–3020 (2015).

21. J. Wang, Q. H. Wang, and Y. Hu, “Unified and accurate diffraction calculation between two concentric cylindrical surfaces,” J. Opt. Soc. Am. A 35(1), A45–A52 (2018).

22. J. Wang, Q. H. Wang, and Y. Hu, “Fast diffraction calculation of cylindrical computer generated hologram based on outside-in propagation model,” Opt. Commun. 403, 296–303 (2017).

23. H. Han, J. Wang, Y. Wu, and J. Zhang, “Optical realization of 360° cylindrical holography,” Opt. Express 30(11), 19597–19610 (2022).

24. Z. Zhou, J. Wang, Y. Wu, F. Jin, Z. Zhang, Y. Ma, and N. Chen, “Conical holographic display to expand the vertical field of view,” Opt. Express 29(15), 22931–22943 (2021).

25. H. K. Cao, S. F. Lin, and E. S. Kim, “Accelerated generation of holographic videos of 3-D objects in rotational motion using a curved hologram-based rotational-motion compensation method,” Opt. Express 26(16), 21279–21300 (2018).

26. W. Zhang, H. Zhang, and G. Jin, “Frequency sampling strategy for numerical diffraction calculations,” Opt. Express 28(26), 39916–39932 (2020).

27. Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE 86(11), 2278–2324 (1998).

28. Y. Takaki and Y. Hayashi, “Increased horizontal viewing zone angle of a hologram by resolution redistribution of a spatial light modulator,” Appl. Opt. 47(19), D6–D11 (2008).

29. Y. Z. Liu, X. N. Pang, S. Jiang, and J. W. Dong, “Viewing-angle enlargement in holographic augmented reality using time division and spatial tiling,” Opt. Express 21(10), 12068–12076 (2013).

30. Z. Zeng, H. Zheng, Y. Yu, A. K. Asundi, and S. Valyukh, “Full-color holographic display with increased-viewing-angle [Invited],” Appl. Opt. 56(13), F112–F120 (2017).

Supplementary Material (1)

Visualization 1: A 360° dynamic holographic video.
