
Analysis on the reconstruction error of EPISM based full-parallax holographic stereogram and its improvement with multiple reference planes


Abstract

To reduce the reconstruction error of holographic stereograms fabricated by the effective perspective images’ segmentation and mosaicking method (EPISM), a multiple-reference-plane (MRP) approach is proposed and validated. The reconstruction error of the traditional EPISM is analyzed, and the results indicate that both distortion and blur arise for object points located far away from the reference plane. A new method introducing multiple reference planes is proposed, which divides the 3D scene into several parts along its depth direction and sets a reference plane for each object part. By resynthesizing all the effectively synthetic perspective images referred to the reference planes of their own object parts, the final effectively synthetic perspective image, which is exposed onto one holographic element in a single exposure, is generated. Optical experiments demonstrate the validity of the proposed method: compared with the traditional EPISM, the reconstruction error of the full-parallax holographic stereogram printed by the MRP based EPISM is reduced evidently, while the displayed depth range of the 3D scene is extended.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The holographic stereogram, which combines the merits of both holography and the stereogram, is a promising approach for static 3D display, especially for hologram printing and glasses-free 3D display [1–5]. The key to this display technology is storing a series of coded 2D perspective images in small holographic elements, called hogels for short. Owing to the nature of the holographic stereogram, the data volume is compressed so significantly that sampling a real 3D scene or rendering virtual 3D objects becomes much less time consuming. Besides the lower time cost of sampling or rendering, other flexibilities are also obtained, such as a reduced laser power requirement, a smaller system size, an extended area of the whole hologram, and so on. Smooth motion parallax can also be achieved when the super multi-view (SMV) condition is satisfied [6,7]. The printing technology of holographic stereograms has become one of the research focuses [8–11], and holographic stereograms are widely applied in many fields, such as art, industry, military, and medicine [12–14].

As early as 1967, the first holographic stereogram was realized by Pole, where a 2D lens array was used to capture multiple perspective images and these perspective images were recorded into a plate by holography; by this approach, virtual stereo images of the 3D object could be reconstructed [1]. DeBitetto used a rectangular mask as the optical aperture to expose the perspective images, which enhanced the reconstruction quality of the holographic stereogram significantly [2]. Based on DeBitetto’s work, the two-step exposure of holographic stereograms was proposed by King et al. [3], after which the holographic stereogram could be illuminated by white light. Building on the methods of DeBitetto and King, many efforts were devoted to further improving the printing method and reconstruction quality until the 1990s [2,15,16]. However, digital image processing had not yet been introduced: the perspective images used to fabricate holographic stereograms were captured from real objects by real cameras, so only 3D scenes existing in the real world could be recorded, and the performance of the reconstructed 3D scene was restricted by the geometrical relationship between the real object and the sampling cameras. In 1991, the concept of the Ultragram was proposed by Halle et al. [4], where an infinite camera was used to capture the real/virtual 3D scene and the captured perspective images were further processed by computer to obtain pre-distorted images; the correspondence between the photographic capture, holographic recording, and final viewing geometries was thereby decoupled, which introduced additional flexibility into the fabrication of holographic stereograms while also enhancing the viewing experience. In 1992, a one-step Lippmann printing method was proposed by Yamaguchi [5]; supplemented by digital image processing, virtual 3D models could be printed by this approach, and the reconstructed scene was distortion-free with full parallax. The idea of basic pixel swapping or I-S transformations for direct-write digital holography (DWDH) was proposed by Brotherton-Ratcliffe et al. and can be traced back to as early as 2002 [17–20]; its core idea is the image transformation from the camera film plane to the spatial light modulator (SLM) plane, usually referred to as the “I-to-S” transformation. In 2013, Hong et al. proposed a hogel overlapping method for the holographic printer to enhance the lateral resolution of holographic stereograms [21]: instead of reducing the hogel size, the lateral resolution is enhanced by printing overlapped hogels, which takes advantage of the multiplexing property of the volume hologram. In 2015, the point-source method often used in computer-generated holography was introduced into the printing of holographic stereograms by Zhang et al. [22]; based on their method, the depth cue of the reconstructed 3D scene can be well presented, and the convergence-accommodation conflict can be eliminated. In 2019, Lv et al. proposed the concept of a resolution-priority holographic stereogram by adding a quadratic phase term to the conventional Fourier transform; a multi-plane technique as well as a multi-exposure technique was used to print the hogels, and the fabricated holographic stereogram exhibited high resolution and an enhanced depth range [23].
Due to its excellent performance in the straightforward capture of 3D scenes and its lower computational load, the holographic stereogram has also been put forward for real-time holographic display, especially with the recent development of updatable holographic recording media, such as updatable photorefractive polymers [24,25].

Recently, our group demonstrated a new way of printing full-parallax holographic stereograms, named the effective perspective images’ segmentation and mosaicking method (EPISM) [26], which has been verified as an effective approach to static 3D display owing to its less data-intensive algorithm and one-step printing procedure. The essence of our method is an imitation of two-step holographic stereogram printing, but the generation of the master hologram is implemented virtually by computer; the effective perspective images used to expose the hogels of the transfer hologram are segmented and mosaicked by the proposed method. With EPISM, a holographic stereogram with a floating-out effect can be fabricated from fewer captured perspective images and only one step of holographic exposure, which reduces the time consumption and enhances the printing efficiency significantly. However, during the effective perspective images’ segmentation and mosaicking, the pixel values of some target points are approximated by those of their adjacent points, and this approximation results in an inaccuracy of the synthetic perspective images, which further affects the reconstruction quality, causing image distortion and resolution reduction in particular. Some attempts, such as optimizing the hogel size and the orientation arrangement of the 3D scene [27,28], were made to improve EPISM; although the quality of the optical reconstruction is acceptable, it is still not fully satisfactory, especially for 3D scenes with a large depth range.

In this work, to the best of our knowledge, a multiple-reference-plane (MRP) based EPISM is proposed for the first time to diminish the reconstruction error and extend the reconstructed depth range of holographic stereograms printed by the traditional EPISM. The reconstruction error of the traditional EPISM is analyzed, the principle of the MRP based EPISM is presented, and its detailed implementation is introduced. The experimental results demonstrate that the MRP based EPISM can improve the reconstruction quality of the full-parallax holographic stereogram significantly compared with the traditional method.

2. Analysis on the reconstruction error of EPISM

2.1 Brief idea of EPISM

The original intention of our approach is that the conventional two-step printing method can be compressed into one step by the proposed EPISM while the resolution of the effectively synthetic perspective images remains high enough, which ensures a high reconstruction quality. In this process, a small number of perspective images are used to obtain effectively synthetic perspective images with high resolution; in that respect, EPISM can be understood as a method to generate dense light-field data from a sparse camera array. The basic idea of EPISM can be carried out in three steps, and its principle is illustrated in one dimension for simplicity, as shown in Fig. 1. Firstly, a real pin-hole camera array or an ideal virtual one is used to sample the 3D scene, and the perspective images with full parallax are acquired, as shown in Fig. 1(a). Secondly, the sampled perspective images are further processed using EPISM to generate a series of effectively synthetic perspective images; the core idea is to segment a group of sampled perspective images to produce the segments of effective perspective images and then tile these segments together to get the effectively synthetic perspective images. As shown in Fig. 1(b), the position of the holographic stereogram can be determined according to the geometrical relationship between the holographic stereogram and the reconstructed 3D scene. A reference plane is then set, which is also chosen as the central plane of the 3D scene in the depth direction. $\overline {{C_1}{C_2}} $ denotes the perspective image sampled by camera $m$, and point C is the camera position of the captured image $\overline {{C_1}{C_2}} $. $C{C_u} = C{C_l}$ is half of the sampling interval. Here the geometry on the camera side of the reference plane should be zoomed as a whole to match the size of the hologram; thus the camera sampling interval is a zoomed value. Point O is the central point of hogel $n$. The effective scene segment that $\overline {{C_1}{C_2}} $ contributes to hogel $n$ can then be defined as the image segment $\overline {{E_1}{E_2}} $, which is the intersection of the perspective image $\overline {{C_1}{C_2}} $ sampled by camera $m$ with the ray frustum spanned by point O and camera $m$’s placeholder $\overline {{C_u}{C_l}} $. The image segment $\overline {{E_1}{E_2}} $ is exactly the effective image data range contributed by camera $m$ to hogel $n$. Supposing the field of view (FOV) of hogel $n$ is $\theta $, we can in the same way find all the contributing image segments captured at all the possible camera positions included in the FOV of hogel $n$, and the complete effectively synthetic perspective image $\overline {{O_1}{O_2}} $ used to expose hogel $n$ can be obtained by mosaicking all these effective image segments together. The pixel segmentation and mosaicking of an effective perspective image segment, such as $\overline {{E_1}{E_2}} $, can be carried out according to simple trigonometry. During the effective perspective images’ segmentation and mosaicking, a nonlinear pixel mapping is employed and the pixel values of some target points are approximated by those of their adjacent points [26], which reduces the number of captured perspective images required. A pseudoscopic image conversion is also applied during the segmentation and mosaicking, so that the orthoscopic image can be obtained.
Finally, the effectively synthetic perspective image $\overline {{O_1}{O_2}} $ is loaded on a spatial light modulator, such as an LCD panel, and holographically exposed on hogel $n$ (see Fig. 1(c)). All the hogels can be printed in the same way, and a complete holographic stereogram is thereby fabricated. The detailed principle as well as the segmentation and mosaicking algorithm is described in our previous work [26]; a minimal one-dimensional sketch of the segment geometry is given below.
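To make the similar-triangle bookkeeping concrete, the following Python sketch computes which pixel range of camera $m$'s image contributes to one hogel, under our own simplifying assumptions (a pinhole camera whose image, reprojected onto the reference plane, is centered on the camera and spans a known width there); the function and parameter names are ours, not those of the authors' implementation.

```python
import math

# A minimal one-dimensional sketch of the EPISM segment geometry of Fig. 1(b).
# Assumptions (ours): the camera's perspective image, reprojected onto the
# reference plane, is centred on the camera position and spans
# 2 * half_fov_ref millimetres there with n_pix pixels.

def effective_segment(x_hogel, x_cam, L1, L2, delta_c, half_fov_ref, n_pix):
    """Pixel range of camera x_cam's image contributing to the hogel at
    x_hogel.  L1: camera plane to reference plane distance; L2: reference
    plane to hologram plane distance; delta_c: camera sampling interval."""
    # Rays from the hogel centre O through the edges of the camera's
    # placeholder (width delta_c, centred at x_cam) cross the reference
    # plane at E1 and E2 (similar triangles; O sits on the hologram, the
    # camera plane is L1 + L2 away).
    s = L2 / (L1 + L2)
    e1 = x_hogel + s * ((x_cam - delta_c / 2) - x_hogel)
    e2 = x_hogel + s * ((x_cam + delta_c / 2) - x_hogel)
    # Map the reference-plane interval [e1, e2] to pixel indices.
    to_pix = lambda x: (x - (x_cam - half_fov_ref)) / (2 * half_fov_ref) * n_pix
    p1, p2 = sorted((to_pix(e1), to_pix(e2)))
    return max(0, math.floor(p1)), min(n_pix, math.ceil(p2))

# Example: the segment that the camera at x = 10 mm contributes to the hogel
# at x = 0, with L1 = L2 = 175 mm and delta_c = 5 mm as used later in Sec. 3.
print(effective_segment(0.0, 10.0, 175.0, 175.0, 5.0, 50.0, 1000))  # (437, 463)
```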

Fig. 1. Principal implementation of EPISM. (a) capturing perspective images, (b) segmentation and mosaicking of effective perspective image, (c) exposure of hogel.

Fig. 2. Analysis on the reconstruction error of EPISM.

2.2 Analysis on the reconstruction error

A ray tracing approach can be applied to analyze the causes of the reconstruction error in EPISM. During the reconstruction of the 3D scene, each pixel can be considered as a light ray, both for pixels in the sampled perspective images captured by the cameras and for those in the effectively synthetic perspective images. A pixel in a sampled perspective image corresponds to a light ray incident on the camera, while a pixel in an effectively synthetic perspective image denotes a light ray emitted from a hogel when the holographic stereogram is reconstructed.

As shown in Fig. 2, assume that there is a point ${P_1}$ of the 3D scene that is also located on the reference plane; it emits a ray ${P_1}C$, and ${P_1}C$ is recorded as pixel ${P_1}^\prime $ in the captured perspective image $\overline {{C_1}{C_2}} $. Due to the high resolution of current commercial camera CCDs, the quantization error can be neglected, so pixel ${P_1}^\prime $ is precisely the crossing point between ray ${P_1}C$ and the reference plane, i.e., ${P_1}^\prime $ and ${P_1}$ coincide. After ${P_1}^\prime $ is exposed in hogel $n$, the object point ${P_1}$ is reconstructed as point ${P_1}^{\prime \prime }$, which precisely coincides with the original object point ${P_1}$ as well as with pixel ${P_1}^\prime $. Thus, all the object points located on the reference plane can be reconstructed faithfully, without reconstruction error.

Consider, however, another point ${P_2}$ of the 3D scene that is no longer located on the reference plane but on the farther side of the 3D scene from the camera. The ray ${P_2}C$ meets $\overline {{O_1}{O_2}} $ at point ${P_2}^\prime $; thus, the pixel ${P_2}^\prime $ on $\overline {{O_1}{O_2}} $ represents the ray emitted from point ${P_2}$ of the 3D scene. Nevertheless, after ${P_2}^\prime $ is exposed in hogel $n$, the reconstructed ray $O{P_2}^\prime $ hits the 3D scene at point ${P_2}^{\prime \prime }$, which no longer overlaps ${P_2}$. In this situation, the reconstructed point obviously does not coincide with the original object point, which means that a reconstruction error is introduced.

For object points that are not located on the reference plane but on the closer side of the 3D scene to the camera, such as point ${P_3}$, a similar analysis applies and a similar reconstruction error occurs. To investigate this reconstruction error more quantitatively, we can push the discussion further. As shown in Fig. 3, for simplicity, line OC is supposed to be perpendicular to both the hologram plane and the camera plane. The camera’s sampling interval is denoted as ${\Delta _c}$, the distance between the camera plane and the reference plane is denoted as ${L_1}$, and ${L_2}$ is the distance between the hologram plane and the reference plane. Consider an object point ${P_R}$ located on the farther side of the reference plane with depth offset $\Delta {x_R}$ and at a distance $\Delta {y_R}$ from line OC; ${P_R}^\prime $ is its recorded pixel in the perspective image. For an actual 3D object, the surface usually varies smoothly and the FOV $\theta $ is not too large, so the point ${P_R}^{\prime \prime }$, which is the intersection of line $O{P_R}^\prime $ with the line through ${P_R}$ perpendicular to the optical axis, can be taken as the reconstructed point. Therefore, the position offset $\Delta {v_R}$ between ${P_R}$ and ${P_R}^{\prime \prime }$ can be taken as the reconstruction error,

$$\Delta {v_R} = \frac{{{L_1} + {L_2}}}{{({{L_1} + \Delta {x_R}}){L_2}}}\Delta {x_R}\Delta {y_R} \tag{1}$$
where $\Delta {x_R} < {L_2}$ and $\Delta {y_R} < \frac{{{L_2}({{L_1} + \Delta {x_R}})}}{{2{L_1}({{L_1} + {L_2}})}}{\Delta _c}$. Obviously, when $\Delta {x_R} = 0$ or $\Delta {y_R} = 0$, the reconstruction is precise, without any error. $\Delta {y_R} = 0$ means that the object point lies on line OC, or, more generally, that the paraxial approximation is satisfied. For points on the reference plane ($\Delta {x_R} = 0$) or near it ($\Delta {x_R} \approx 0$), the reconstruction error is also negligible. It can also be seen that the larger either $\Delta {x_R}$ or $\Delta {y_R}$ is, the more significant the reconstruction error $\Delta {v_R}$ becomes. Therefore, to reduce the reconstruction error, the object points should be placed close to both the optical axis and the reference plane.
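For readers who want the intermediate steps, Eq. (1) follows from two applications of similar triangles in the geometry of Fig. 3; a brief derivation sketch (our notation, writing $y'$ and $y''$ for the transverse positions of ${P_R}^\prime$ and ${P_R}^{\prime\prime}$) is:

```latex
% Recording: the ray P_R C crosses the reference plane (distance L_1 from
% the camera plane) at P_R', so
y' = \Delta y_R \,\frac{L_1}{L_1 + \Delta x_R}.
% Replay: the hogel emits the ray O P_R'; at the depth of P_R, a distance
% L_2 - \Delta x_R from the hologram, its transverse position is
y'' = y'\,\frac{L_2 - \Delta x_R}{L_2}
    = \Delta y_R \,\frac{L_1 (L_2 - \Delta x_R)}{(L_1 + \Delta x_R) L_2}.
% The reconstruction error is the transverse offset between P_R and P_R'':
\Delta v_R = \Delta y_R - y''
           = \frac{L_1 + L_2}{(L_1 + \Delta x_R) L_2}\,\Delta x_R\,\Delta y_R .
```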

Fig. 3. Characteristic of reconstruction error for object points not located on the reference plane.

Similarly, for an object point ${P_L}$ located on the closer side of the reference plane to the camera, as shown in Fig. 3, the reconstruction error can be expressed as

$$\Delta {v_L} = \frac{{{L_1} + {L_2}}}{{({{L_1} - \Delta {x_L}}){L_2}}}\Delta {x_L}\Delta {y_L} \tag{2}$$
where $\Delta {x_L} < {L_1}$ and $\Delta {y_L} < \frac{{{L_2}({{L_1} + \Delta {x_L}})}}{{2{L_1}({{L_1} + {L_2}})}}{\Delta _c}$. From Eq. (2), conclusions similar to those of Eq. (1) can be drawn for the characteristics of the reconstruction error. From Fig. 3, it should be noticed that the reconstructed points behind the reference plane (see point ${P_R}^{\prime \prime }$) are closer to the optical axis than their original object points, which means that the part of the reconstructed 3D scene behind the reference plane will exhibit a shrunken effect; correspondingly, there is an expanded effect for the part of the reconstructed 3D scene before the reference plane. These shrunken/expanded distortion effects constitute the reconstruction error and degrade the visual quality of the holographic stereogram. It should also be pointed out that, even for two object points located at mirrored positions about the reference plane, i.e., $\Delta {x_L} = \Delta {x_R} = \Delta x,\Delta {y_L} = \Delta {y_R} = \Delta y$, the absolute values of their reconstruction errors are still not equal, and we have
$$\frac{{\Delta {v_R}}}{{\Delta {v_L}}} = \frac{{{L_1} - \Delta x}}{{{L_1} + \Delta x}} < 1 \tag{3}$$
which demonstrates that the magnitude of the expanded effect before the reference plane is more serious than that of the shrunken effect behind it. It is worth mentioning that if the camera interval is small enough, the effectively synthetic perspective images can be obtained relatively accurately no matter how large the position offset of an object point is. As shown in Fig. 3, if the camera’s sampling interval ${\Delta _c}$ tends to 0, the available object-point volume degenerates from the triangular area into the line OC; the reconstruction error $\Delta {v_R}$ then vanishes regardless of the values of $\Delta {x_R}$ and $\Delta {y_R}$.

A numerical example is shown in Fig. 4, with the typical parameters ${L_1} = {L_2} = 175\textrm{ mm}$ and ${\Delta _c} = 5\textrm{ mm}$; ${\Delta _c} \ll ({{L_1} + {L_2}})$ because sampling with dense viewing points is applied, which eliminates angular view hopping. The right part of the plot gives the reconstruction errors of the object points not located on the reference plane but on the farther side from the camera (the closer side to the hologram), while the left part gives those of the object points on the closer side to the camera (the farther side from the hologram). It can be seen that an object positioned near the reference plane and the optical axis suffers less reconstruction error: the farther an object point lies from the reference plane and the optical axis, the larger the reconstruction error becomes. The results also indicate that points in the expanded area suffer relatively more serious distortion than points in the shrunken area.
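The magnitudes involved are easy to check numerically; the short script below evaluates Eqs. (1)–(3) for the parameters of Fig. 4 (the variable names and the sampled offsets are our own choices):

```python
# A quick numerical check of Eqs. (1)-(3) with the parameters of Fig. 4
# (L1 = L2 = 175 mm, all lengths in millimetres).
L1 = L2 = 175.0

def dv_R(dx, dy):  # Eq. (1): point behind the reference plane
    return (L1 + L2) / ((L1 + dx) * L2) * dx * dy

def dv_L(dx, dy):  # Eq. (2): point before the reference plane
    return (L1 + L2) / ((L1 - dx) * L2) * dx * dy

for dx in (10.0, 50.0, 100.0):
    dy = 2.0  # transverse offset from the optical axis, mm
    print(f"dx={dx:5.1f}  dv_R={dv_R(dx, dy):.3f} mm  "
          f"dv_L={dv_L(dx, dy):.3f} mm  ratio={dv_R(dx, dy)/dv_L(dx, dy):.3f}")
# The printed ratio equals (L1 - dx)/(L1 + dx) < 1, i.e. Eq. (3): the
# expanded side suffers the larger error.
```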

Fig. 4. Numerical example of the reconstruction error. ${L_1} = {L_2} = 175\textrm{ mm}$, ${\Delta _c} = 5\textrm{ mm}$, and $OC \bot {C_u}{C_l}$.

According to the analysis, the reconstruction error does not result from the capturing process of the 3D scene but from the physical nature of the method: the low angular sampling frequency, along with the pixel mapping used to resynthesize the effective perspective images, are the major sources of EPISM’s reconstruction error. The above analysis addresses the reconstruction error of a single hogel only. In addition, the 3D information of an object point may be recorded not in one hogel but in several adjacent hogels, and the reconstruction error of each hogel differs; thus, the visually reconstructed points for a specific object point do not overlap but form a point cluster, which is perceived macroscopically as image blur when viewing the holographic stereogram. These distortions and blurs combine and degrade the reconstruction quality. In fact, some assumptions made in the above formulation are not always true; for example, line OC is not always perpendicular to the hologram plane, and the object’s surface may vary sharply rather than smoothly, which would make the analysis more complicated and difficult. Nevertheless, the above conclusions still hold, and the trends of the reconstruction errors remain true.

2.3 Improving EPISM with multiple reference planes

According to the analysis above, all points located on the reference plane are free of reconstruction error, and object points closer to the reference plane have smaller reconstruction errors. Thus, a criterion for setting the reference plane is to choose a plane such that as many object points as possible lie on or close to it; the central depth plane is therefore often chosen as the reference plane. However, for a real 3D object with a certain depth, it is impossible to make all points close to the reference plane, so the reconstruction error is inevitable for the EPISM based holographic stereogram, especially when the depth range of the 3D scene is large. However, if we divide the 3D object into several parts along its depth direction and set a reference plane for each part to perform the perspective images’ sampling as well as the effective perspective images’ segmentation and mosaicking, effective perspective images can be obtained for each of these reference planes. After resynthesizing these MRP based effective perspective images, the final effectively synthetic perspective image can be obtained to expose the hogel. Since the 3D object is divided into several small-depth parts and each part has its own reference plane, all object points have a relatively small distance offset from their own reference plane, and the reconstruction error can thereby be reduced. The reference plane of each small-depth part can be chosen as its own central depth plane. In essence, the MRP based EPISM divides the 3D scene into several parts along the depth direction and processes each part by EPISM with its own reference plane, which is equivalent to decreasing the depth of the 3D scene. Obviously, by employing the MRP approach, the error caused by the position offset in EPISM can be reduced. The schematic design of the MRP based EPISM is shown in Fig. 5, and a minimal sketch of the depth partition is given below.
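As a concrete illustration of this partition, the following sketch places one reference plane at the central depth of each of $n$ equal slabs; the function names and the equal-interval slab choice are ours (the paper leaves the partition strategy open, see the Conclusion):

```python
# A minimal sketch of the depth partition described above: divide the scene
# depth range into n equal slabs and place one reference plane at the depth
# centre of each slab.
def reference_planes(z_min, z_max, n):
    slab = (z_max - z_min) / n
    bounds = [z_min + k * slab for k in range(n + 1)]   # slab boundaries
    rps = [z_min + (k + 0.5) * slab for k in range(n)]  # central depth planes
    return bounds, rps

bounds, rps = reference_planes(-30.0, 30.0, 3)  # e.g. a 60 mm deep scene
print(bounds)  # [-30.0, -10.0, 10.0, 30.0]
print(rps)     # [-20.0, 0.0, 20.0]
```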

Fig. 5. Principle of the MRP based EPISM. RP: reference plane, ESPI: effectively synthetic perspective image.

The key problem in performing the MRP based EPISM is how to resynthesize these MRP based effective perspective images. Since the 3D scene is divided into different parts along the depth direction, during the reconstruction the part near the hologram (i.e., the part at larger depth from the camera, named the nearer part) should be occluded by the part far away from the hologram (i.e., the part at smaller depth from the camera, named the farther part). Therefore, to implement this occlusion, certain pixels in the effective perspective images of the nearer part should be overwritten by those of the farther part. For simplicity, we take a 3D scene with two reference planes as an example, see Fig. 6. The reference plane near the hologram is denoted as RP$_{\textrm{near}}$ and the farther one as RP$_{\textrm{far}}$, corresponding to the scene parts part$_{\textrm{near}}$ (the crying face) and part$_{\textrm{far}}$ (the smiling face), respectively. For a certain hogel, suppose the effectively synthetic perspective image of part$_{\textrm{far}}$ referred to RP$_{\textrm{far}}$ is ${I_{\textrm{far}}}({i,j})$, where $i$ and $j$ are the pixel indexes; we can then generate a binary foreground mask $M({i,j})$, on which the pixels carrying the scene’s information are set as opaque while the other pixels are set as totally transparent. In practice, 3DStudio MAX and MATLAB 2016a are used to generate the effectively synthetic perspective images, and PNG images with an alpha channel are employed. An alpha value of 0 stands for total transparency while 255 is totally opaque; thus we have

$$M({i,j}) = \left\{ \begin{array}{ll} 0, & {I_{\textrm{far}}}({i,j})\ \textrm{alpha} = 255\\ 1, & {I_{\textrm{far}}}({i,j})\ \textrm{alpha} = 0 \end{array} \right. \tag{4}$$

Fig. 6. Implementation of the MRP based EPISM.

The binary foreground mask $M({i,j})$ multiplies ${I_{\textrm{near}}}({i,j})$, the effectively synthetic perspective image of part$_{\textrm{near}}$ referred to RP$_{\textrm{near}}$, and the foreground data of ${I_{\textrm{far}}}({i,j})$ are added to yield the MRP based effective perspective image ${I_{\textrm{MRP}}}({i,j})$

$${I_{\textrm{MRP}}}({i,j}) = M({i,j}) \times {I_{\textrm{near}}}({i,j}) + [{1 - M({i,j})}] \times {I_{\textrm{far}}}({i,j}) \tag{5}$$
where the sign ${\times}$ denotes point-wise multiplication. $M({i,j}) \times {I_{\textrm{near}}}({i,j})$ represents the effectively synthetic perspective image with the region occluded by ${I_{\textrm{far}}}({i,j})$ removed, and Eq. (5) means that the pixels of ${I_{\textrm{near}}}({i,j})$ occluded by ${I_{\textrm{far}}}({i,j})$ are replaced by the corresponding pixels of ${I_{\textrm{far}}}({i,j})$ to resynthesize the final MRP based effectively synthetic perspective image ${I_{\textrm{MRP}}}({i,j})$.
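In code, Eqs. (4) and (5) amount to a per-pixel alpha test and composite. The sketch below is our own minimal version; it assumes 8-bit RGBA arrays and simply thresholds any intermediate alpha values, a case Eq. (4) does not address:

```python
import numpy as np

def composite(I_near, I_far):
    """Eqs. (4)-(5): overlay the farther part's image onto the nearer
    part's image.  Both are (H, W, 4) uint8 RGBA arrays."""
    alpha_far = I_far[..., 3]
    # Eq. (4): M = 0 where the far image is opaque, 1 where transparent.
    # Intermediate alpha values (soft edges) are thresholded here.
    M = (alpha_far < 128).astype(np.uint8)[..., None]
    # Eq. (5): point-wise composite of the RGB channels.
    rgb = M * I_near[..., :3] + (1 - M) * I_far[..., :3]
    # Keep the union of the two alpha channels so the result can itself be
    # reused as the occluding image in the iteration of Eq. (6) below.
    alpha = np.maximum(alpha_far, I_near[..., 3])
    return np.dstack([rgb, alpha])
```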

Generally, a 3D scene with large depth can be divided into more parts, say $n$ parts part 1, part 2, …, part $n$, along its depth direction, with reference planes RP$_k$ $({k = 1,2, \ldots ,n})$ placed at the central depth of each part. The effectively synthetic perspective image of part $k$ referred to RP$_k$ is then obtained as ${I_k}({i,j})$, and the resynthesis algorithm can be expressed as the following iterative steps: (i) use ${I_1}({i,j})$ and ${I_2}({i,j})$ in place of ${I_{\textrm{far}}}({i,j})$ and ${I_{\textrm{near}}}({i,j})$ respectively, with the binary foreground mask ${M_{1,2}}({i,j})$ generated from Eq. (4), which yields a resynthesized effectively synthetic perspective image ${I_{1 \to 2}}({i,j})$ for part 1 and part 2 according to Eq. (5); (ii) use ${I_{1 \to 2}}({i,j})$ and ${I_3}({i,j})$ in place of ${I_{\textrm{far}}}({i,j})$ and ${I_{\textrm{near}}}({i,j})$ again, with ${M_{1 \to 2,3}}({i,j})$ the binary foreground mask between ${I_{1 \to 2}}({i,j})$ and ${I_3}({i,j})$, which yields a resynthesized effectively synthetic perspective image ${I_{1 \to 3}}({i,j})$ for part 1, part 2, and part 3; (iii) repeating the process of (ii) for the remaining parts part 4, part 5, …, and part $n$, we finally obtain the MRP based effectively synthetic perspective image ${I_{\textrm{MRP}}}({i,j})$ in the following iterative form,

$$\begin{array}{l} {I_{1 \to 2}}({i,j}) = {M_{1,2}}({i,j}) \times {I_2}({i,j}) + [{1 - {M_{1,2}}({i,j})}] \times {I_1}({i,j})\\ {I_{1 \to 3}}({i,j}) = {M_{1 \to 2,3}}({i,j}) \times {I_3}({i,j}) + [{1 - {M_{1 \to 2,3}}({i,j})}] \times {I_{1 \to 2}}({i,j})\\ \textrm{ } \vdots \\ {I_{1 \to k}}({i,j}) = {M_{1 \to ({k - 1}),k}}({i,j}) \times {I_k}({i,j}) + [{1 - {M_{1 \to ({k - 1}),k}}({i,j})}] \times {I_{1 \to ({k - 1})}}({i,j})\\ \textrm{ } \vdots \\ {I_{\textrm{MRP}}}({i,j}) = {M_{1 \to ({n - 1}),n}}({i,j}) \times {I_n}({i,j}) + [{1 - {M_{1 \to ({n - 1}),n}}({i,j})}] \times {I_{1 \to ({n - 1})}}({i,j}) \end{array} \tag{6}$$
where ${I_{1 \to ({k - 1})}}({i,j})$ $({3 \le k \le n})$ is the resynthesized effectively synthetic perspective image for part 1, part 2, …, and part $({k - 1})$, and ${M_{1 \to ({k - 1}),k}}$ is the binary foreground mask between ${I_{1 \to ({k - 1})}}({i,j})$ and ${I_k}({i,j})$.
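Operationally, Eq. (6) is a left fold over the per-part images; a minimal sketch, reusing the hypothetical composite() above, is:

```python
def resynthesize(images):
    """Eq. (6): fold the per-part images I_1 ... I_n into I_MRP.
    `images` is ordered so images[0] is part 1, the part farthest from the
    hologram, which occludes everything behind it."""
    acc = images[0]                # I_1 plays the role of I_far first
    for I_k in images[1:]:         # I_2 ... I_n, successively nearer parts
        acc = composite(I_k, acc)  # acc is the accumulated occluding image
    return acc                     # I_MRP
```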

Figure 7 depicts the geometrical configuration of a sampling example along with the generated effectively synthetic perspective images. As shown in Figs. 7(a) and 7(b), a magic-like disjointed virtual 3D object is chosen as the 3D scene and is tilted 20° to exhibit a better stereoscopic effect. It is divided into three parts, each with its own reference plane, and the sampling is implemented by a virtual camera array. After capturing the full-parallax perspective images, both the traditional EPISM and the MRP based EPISM are performed. Figures 7(c)–7(e) show the effectively synthetic perspective images generated by EPISM with the single reference plane RP1, RP2, and RP3, respectively. It is obvious that the pixel blocks on or near the reference plane show no or little distortion, while the pixel blocks corresponding to the object parts far away from the reference plane show distinct distortions, as marked by the dotted circles; the distortions become more serious the farther the object part lies from the reference plane. The effectively synthetic perspective image resynthesized with the multiple reference planes (RP1, RP2, and RP3) is shown in Fig. 7(f). Compared with the single-reference-plane EPISM, the distortions are eliminated remarkably, and deformation as well as serration is only slightly exhibited around the object parts located away from their sub reference planes. Tiny mosaic cracks appear where the pixels of ${I_{\textrm{near}}}({i,j})$ occluded by ${I_{\textrm{far}}}({i,j})$ are replaced by the corresponding pixels of ${I_{\textrm{far}}}({i,j})$ during the resynthesis of the final MRP based effectively synthetic perspective image ${I_{\textrm{MRP}}}({i,j})$; however, these cracks are small and difficult to perceive unless the number of MRPs is large. An appropriate image evaluation criterion for the effect of the mosaic cracks caused by MRPs on the quality of the reconstructed images remains an open question, and resynthesis methods that eliminate the mosaic cracks are planned as future work.

Fig. 7. (a) configuration of the sampling, (b) geometrical parameters for the generation of effectively synthetic perspective images, (c)–(e) the generated effectively synthetic perspective image based on EPISM with single reference plane RP1, RP2, and RP3 respectively, (f) the effectively synthetic perspective image generated by MRP based EPISM.

3. Experiments and discussions

The experimental setup is illustrated in Fig. 8. A continuous-wave 400 mW, 639 nm single-longitudinal-mode linearly polarized solid-state laser (model MSL-FN-639 @CNI) is employed as the laser source. The laser output is modulated by a mechatronic shutter (model SSH-C2B @Sigma Koki) to control the exposure time. A $\lambda /2$ wave plate and a polarization-dependent beam splitter are used to split the input laser into the object arm and the reference arm; by rotating the $\lambda /2$ wave plate before the beam splitter, the power ratio between the object beam and the reference beam can be adjusted. Another $\lambda /2$ wave plate is inserted in the object arm to rotate the polarization direction of the object beam to match that of the reference beam. A 40× objective lens is used to expand the object beam to illuminate the LCD. An adapted LCD panel (model VVX09F035M20 @Panasonic) is used as the spatial light modulator to load and project the effectively synthetic perspective images. The LCD is 8.9 inch with 1920×1200 pixels, corresponding to a pixel pitch of 0.1 mm; its backlight as well as the polarizer is removed, and only the diffuser is retained. After passing through the LCD, the object beam is projected onto the silver halide plate, which is placed 17.5 cm away from the LCD panel and has a photosensitivity of $\Delta E$ = 1250 μJ/cm² @639 nm. A 5 mm × 5 mm square aperture is positioned before the silver halide plate to block the unexpected object light. An attenuator is inserted in the reference arm to adjust the power of the reference beam, and hence the object/reference power ratio. A spatial filter comprising a 40× objective and a 15 μm pinhole is used to filter out the higher spatial frequencies. A collimating lens with f = 150 mm is placed behind the spatial filter to collimate the reference beam into a planar wave, its focal point coinciding with the pinhole. The reference beam interferes with the object beam at an angle of around 30°. Another 5 mm × 5 mm square aperture is used to block the unexpected reference light; its opening is precisely aligned with that of the opposite aperture, and the silver halide plate is sandwiched between the two, so the hogel size equals the aperture size, i.e., 5 mm × 5 mm. The silver halide plate is installed on a two-dimensional $x - y$ linear stage (model KSA300 @Zolix) that moves the holographic plate to the position of the next hogel after the exposure of the previous one is finished. A time-synchronization system is developed to synchronize the shutter, the LCD, and the motion of the holographic plate; the exposure sequence is sketched below.
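The paper does not detail the synchronization logic, so the following loop is only a schematic of one plausible exposure sequence; every driver object and method name (shutter, lcd, stage and their calls) is a hypothetical placeholder, and the settle and exposure times are illustrative, not measured values.

```python
import time

def print_stereogram(hogel_images, shutter, lcd, stage,
                     hogel_pitch_mm=5.0, exposure_s=0.1, settle_s=0.5):
    """Schematic hogel-by-hogel exposure loop for the setup of Fig. 8.
    hogel_images maps (row, col) -> effectively synthetic perspective image
    (a 16 x 16 grid in the experiments below)."""
    for (row, col), image in sorted(hogel_images.items()):
        stage.move_to(col * hogel_pitch_mm, row * hogel_pitch_mm)
        lcd.display(image)       # load the image onto the SLM
        time.sleep(settle_s)     # let mechanical vibrations damp out
        shutter.open()           # expose this hogel
        time.sleep(exposure_s)   # set by the plate's sensitivity (Delta E)
        shutter.close()
```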

Fig. 8. Experiment setup of the printing of full-parallax holographic stereogram using the MRP based EPISM.

To verify the principle of the MRP based EPISM, a simple virtual 3D model comprising two surfaces, a crying face nearer to the hologram plane and a smiling face farther away from it, is used as the 3D object; the geometrical relationship is shown in Fig. 9(a). For the traditional EPISM with a single reference plane, the central depth plane RP0 is used as the reference plane. 3DStudio MAX is applied to render the perspective images. The distance from the camera plane to RP0 is 175 mm and the camera’s sampling interval ${\Delta _c}$ is 5 mm. In this way, 55×55 perspective images are obtained, each with a resolution of 1000×1000. These perspective images are resynthesized to generate the effectively synthetic perspective images based on EPISM with the single reference plane RP0; the number of effectively synthetic perspective images is 16×16 and their resolution is 1000×1000. The reconstructed images are shown in Figs. 9(b) and 9(c), where the image in Fig. 9(b) is focused on the smiling face and that in Fig. 9(c) on the crying face. It can be seen that the reconstructed 3D scene shows remarkable distortion and blur on either RP1 or RP2, especially the serrated discontinuities arising at the edge areas of the 3D object. In other words, none of the object points located off the single reference plane RP0 are reconstructed exactly.

Fig. 9. (a) Geometrical configuration for the validation, (b) and (c) are reconstructed images focused on the smiling face and crying face with traditional EPISM respectively, and (d) and (e) are reconstructed images focused on the smiling face and crying face with MRP based EPISM respectively.

For the MRP based EPISM, the two planes RP1 and RP2 are used as the multiple reference planes (see Fig. 9(a)). The sampling parameters are the same as before, and two groups of perspective images are obtained; the number of perspective images in each group is 55×55 and the resolution of all perspective images is 1000×1000. Finally, effectively synthetic perspective images of the same number and resolution as in the traditional EPISM are generated. The reconstructed images focused on RP1 and RP2 are shown in Figs. 9(d) and 9(e), respectively. Obviously, sharp, clear, and smooth reconstructed images are realized on the different planes, and the serrated discontinuities that occurred at the edge areas are eliminated and replaced by smooth, continuous edges, which implies that the holographic stereogram printed by the MRP based EPISM has a relatively better reconstruction quality, with fewer distortions and blurs for a large-depth 3D scene in particular. Thus, the experimental results demonstrate the validity of the MRP based EPISM.

Furthermore, the magic-like disjointed virtual 3D object described in Figs. 7(a) and 7(b) is used as a relatively complicated 3D scene to exhibit the potential of the MRP based EPISM. The plane RP2 is selected as the reference plane of the traditional EPISM, and the geometrical configuration between the camera plane and the 3D object is shown in Fig. 7(b). The camera’s sampling interval ${\Delta _c}$ is 5 mm. The number of perspective images is 55×55 and the resolution of each is 1000×1000; the number of effectively synthetic perspective images is 16×16 with a resolution of 1000×1000. The reconstructed images are shown in Fig. 10(a), focused on part 2, and Fig. 10(b), focused on part 1. For comparison, the three planes RP1, RP2, and RP3 are chosen as the multiple reference planes to execute the MRP based EPISM, with the same sampling parameters as before. The MRP based EPISM is implemented on a MATLAB 2016a platform, on a general computer equipped with Windows 10 and an Intel(R) Xeon(R) CPU E5-2620 v4. The memory usage during algorithm execution is about 5.9 GB for both the traditional EPISM and the proposed one. The time consumed to generate one effectively synthetic perspective image is about 0.5 s for the MRP based EPISM, which is about four times that of the traditional one. Reducing the resolution of the perspective images can improve the computational efficiency; for example, if the resolution of the perspective images is decreased to 800×800 while keeping their number unchanged, the time cost to produce an image for the MRP based EPISM is reduced to 0.3 s, which is still about four times that of the single-reference-plane case. Figures 10(c) and 10(d) show the reconstructed images focused on part 2 and part 1, respectively. For the object points located around RP2, i.e., the points on part 2, the reconstructed images have a similar definition for both the traditional EPISM and the MRP based EPISM, as shown in Figs. 10(a) and 10(c). However, for the object points located on part 1, the reconstruction shows relatively significant distortions and blurs under the traditional EPISM (see Fig. 10(b)), since these points are located far away from the single reference plane RP2, while the MRP based EPISM gives a well reconstructed result (see Fig. 10(d)) because the sub reference plane RP1 is placed crossing part 1. To show the well-presented 3D stereo effect, the reconstructed perspective images captured at nine different angles of view are arranged in Figs. 11(a)–11(i), from which we can see that full-parallax reconstruction is achieved along with clear and distortion-free reconstruction quality within the depth range of the 3D scene. Therefore, the MRP approach can be used to reduce the reconstruction error of the EPISM based full-parallax holographic stereogram, especially when the 3D scene’s depth range is large. In other words, the MRP based EPISM is also an efficient way to extend the reconstructed depth range of the 3D scene while retaining good reconstruction quality, as long as the 3D scene is divided into more parts with more reference planes. It should be pointed out that, according to the discussion in Sec. 2.2, the reconstruction error decreases as more reference planes are employed; however, this entails a more complicated resynthesis of the MRP based effectively synthetic perspective image ${I_{\textrm{MRP}}}({i,j})$ and results in additional time consumption.
Therefore, an appropriate number of reference planes should be determined as a compromise between the time cost and the reconstruction quality.

Fig. 10. Reconstructed images focused on part 2 (a) and part 1 (b) with traditional EPISM, and focused on part 2 (c) and part 1 (d) with MRP based EPISM.

Fig. 11. (a)–(i) are the reconstructed perspective images perceived at nine different angles of view.

Compared with the typical depth-image-based rendering (depth-IBR) methods [29–31], the proposed MRP based EPISM does not need to interpolate unsampled light-field information from adjacent rays when the depth information of the 3D objects is known. The effectively synthetic perspective images can be generated from the perspective images directly through a pixel mapping process [26], so the algorithmic complexity is relatively low, and the principle is direct and can be apprehended intuitively. However, since the light-field information along with the depth information is used in the depth-IBR method, especially when assisted by a CNN [32], the accuracy of the reconstructed 3D scene may be much higher for depth-IBR, which is especially beneficial for acquiring dense light-field data of a real 3D scene from a sparse camera array. Of course, if the 3D data is already acquired or the 3D scene is a virtual 3D model, the proposed method is still applicable, although the solution seems somewhat roundabout, since the light rays could be generated accurately from the 3D data. Nevertheless, because of its simple implementation and relatively low time consumption, this method may have potential applications in holographic stereogram based real-time 3D display [24,25] when the 3D data is easy to obtain. Furthermore, if the 3D data is unavailable or difficult to acquire, the perspective images as well as the per-pixel depth data can be captured directly by an RGB-D camera; the effectively synthetic perspective images can then be generated simply by our method without further three-dimensional reconstruction. The pixels in the captured perspective images can be classified into different groups according to the depth data, the MRPs can be set with identical intervals, and the MRP based perspective images can be obtained, as sketched below.
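As a sketch of that last RGB-D idea (our own minimal version; the function name, the equal-interval choice, and the use of NumPy's digitize are assumptions, not the authors' implementation):

```python
import numpy as np

def classify_by_depth(depth_map, n):
    """Group RGB-D pixels into n depth slabs with identical intervals and
    return the per-pixel group index plus the central (reference-plane)
    depth of each slab."""
    z_min, z_max = float(depth_map.min()), float(depth_map.max())
    edges = np.linspace(z_min, z_max, n + 1)
    # Binning against the interior edges gives group indices 0 .. n-1.
    labels = np.digitize(depth_map, edges[1:-1])
    rps = 0.5 * (edges[:-1] + edges[1:])  # one reference plane per slab
    return labels, rps
```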

4. Conclusion

In this work, we analyze the reconstruction error of the EPISM based full-parallax holographic stereogram and propose a multiple-reference-plane approach to reduce the reconstruction error and enhance the reconstruction quality. The core idea is to divide the 3D scene into several parts and set an independent reference plane for each object part. Referred to its own reference plane, an effectively synthetic perspective image is generated for each object part using the traditional EPISM; according to the proposed MRP based algorithm, the final effectively synthetic perspective image required to expose the hogel is then resynthesized. Both the theoretical and the experimental results demonstrate the reduction of the reconstruction error and the enhancement of the reconstruction quality achieved by the proposed MRP based EPISM. This approach is especially appropriate for fabricating resolution-priority full-parallax holographic stereograms of 3D scenes with a large depth size. Setting the reference planes simply at identical intervals ignores the geometric distribution of the object and therefore cannot take the best care of the overall reconstruction; hence, the optimization of the number and positions of the multiple reference planes, as well as some other issues, will be investigated in future work.

Funding

The National Key Research and Development Program of China (2017YFB1104500); National Natural Science Foundation of China (61775240); Foundation for the Author of National Excellent Doctoral Dissertation of the People's Republic of China (201432).

References

1. R. V. Pole, “3-D Imagery and Holograms of Objects Illuminated in White Light,” Appl. Phys. Lett. 10(1), 20–22 (1967). [CrossRef]  

2. D. J. DeBitetto, “Holographic panoramic stereograms synthesized from white light recordings,” Appl. Opt. 8(8), 1740–1741 (1969). [CrossRef]  

3. M. C. King, A. M. Noll, and D. Berry, “A new approach to computer-generated holography,” Appl. Opt. 9(2), 471–475 (1970). [CrossRef]  

4. M. W. Halle, S. A. Benton, M. A. Klug, and J. S. Underkoffler, “Ultragram: a generalized holographic stereogram,” Proc. SPIE 1461, 142–155 (1991). [CrossRef]  

5. M. Yamaguchi, N. Ohyama, and T. Honda, “Holographic three-dimensional printer: new method,” Appl. Opt. 31(2), 217–222 (1992). [CrossRef]  

6. Y. Takaki and N. Nago, “Multi-projection of lenticular displays to construct a 256-view super multi-view display,” Opt. Express 18(9), 8824–8835 (2010). [CrossRef]  

7. Y. Takaki, “High-density directional display for generating natural three-dimensional images,” Proc. IEEE 94(3), 654–663 (2006). [CrossRef]  

8. T. Utsugi and M. Yamaguchi, “Reduction of the recorded speckle noise in holographic 3D printer,” Opt. Express 21(1), 662–674 (2013). [CrossRef]  

9. M. Yamaguchi, “Light-field and holographic three-dimensional displays,” J. Opt. Soc. Am. A 33(12), 2348–2364 (2016). [CrossRef]  

10. P.-A. J. Blanche, C. M. Bigler, J.-W. Ka, and N. N. Peyghambarian, “Fast and continuous recording of refreshable holographic stereograms,” Opt. Eng. 23(4), 1–18 (2017). [CrossRef]  

11. J. Su, X. Yan, Y. Huang, X. Jiang, Y. Chen, and T. Zhang, “Progress in the synthetic holographic stereogram printing technique,” Appl. Sci. 8(6), 851 (2018). [CrossRef]  

12. P. Liu, X. Sun, Y. Zhao, and Z. Li, “Ultrafast volume holographic recording with exposure reciprocity matching for TI/PMMAs application,” Opt. Express 27(14), 19583–19595 (2019). [CrossRef]  

13. Z. Lu and Y. Sakamoto, “Holographic display method for volume data by volume rendering,” Opt. Express 27(2), 543–556 (2019). [CrossRef]  

14. H. Zhang, Y. Zhao, L. Cao, and G. Jin, “Three-dimensional display technologies in wave and ray optics: a review,” Chin. Opt. Lett. 12(6), 060002 (2014). [CrossRef]  

15. D. J. DeBitetto, “Transmission bandwidth reduction of holographic stereograms recorded in white light,” Appl. Phys. Lett. 12(10), 343–344 (1968). [CrossRef]  

16. T. Yatagai, “Stereoscopic approach to 3-D display using computer-generated holograms,” Appl. Opt. 15(11), 2722–2729 (1976). [CrossRef]  

17. D. Brotherton-Ratcliffe and A. Rodin, “Holographic printer,” US patent 7161722 (2002).

18. D. Brotherton-Ratcliffe, A. Rodin, and L. Hrynkiw, “Method of writing a composite 1-step hologram,” US patent 7333252 (2002).

19. H. Bjelkhagen and D. Brotherton-Ratcliffe, Ultra-Realistic Imaging: Advanced Techniques in Analogue and Digital Colour Holography (CRC Press, 2013).

20. H. I. Bjelkhagen and D. Brotherton-Ratcliffe, “Ultrarealistic imaging: The future of display holography,” Opt. Eng. 53(11), 112310 (2014). [CrossRef]  

21. K. Hong, S.-G. Park, J. Yeom, J. Kim, N. Chen, K. Pyun, C. Choi, S. Kim, J. An, and H.-S. Lee, “Resolution enhancement of holographic printer using a hogel overlapping method,” Opt. Express 21(12), 14047–14055 (2013). [CrossRef]

22. H. Zhang, Y. Zhao, L. Cao, and G. Jin, “Fully computed holographic stereogram based algorithm for computer-generated holograms with accurate depth cues,” Opt. Express 23(4), 3901–3913 (2015). [CrossRef]  

23. Z. Wang, G. Lv, Q. Feng, A. Wang, and H. Ming, “Resolution priority holographic stereogram based on integral imaging with enhanced depth range,” Opt. Express 27(3), 2689–2702 (2019). [CrossRef]  

24. S. Tay, P.-A. Blanche, R. Voorakaranam, A. Tunç, W. Lin, S. Rokutanda, T. Gu, D. Flores, P. Wang, and G. Li, “An updatable holographic three-dimensional display,” Nature 451(7179), 694–698 (2008). [CrossRef]  

25. P.-A. Blanche, A. Bablumian, R. Voorakaranam, C. Christenson, W. Lin, T. Gu, D. Flores, P. Wang, W.-Y. Hsieh, and M. Kathaperumal, “Holographic three-dimensional telepresence using large-area photorefractive polymer,” Nature 468(7320), 80–83 (2010). [CrossRef]  

26. J. Su, Q. Yuan, Y. Huang, X. Jiang, and X. Yan, “Method of single-step full parallax synthetic holographic stereogram printing based on effective perspective images’ segmentation and mosaicking,” Opt. Express 25(19), 23523–23544 (2017). [CrossRef]  

27. J. Su, X. Yan, X. Jiang, Y. Huang, Y. Chen, and T. Zhang, “Characteristic and optimization of the effective perspective images’ segmentation and mosaicking (EPISM) based holographic stereogram: an optical transfer function approach,” Sci. Rep. 8(1), 4488 (2018). [CrossRef]  

28. X. Yan, Y. Chen, J. Su, T. Zhang, Z. Chen, S. Chen, and X. Jiang, “Characteristic and improvement on the reconstructed quality of effective perspective images’ segmentation and mosaicking-based holographic stereogram,” Appl. Opt. 58(5), A128–A134 (2019). [CrossRef]  

29. J. Jurik, T. Burnett, M. Klug, and P. Debevec, “Geometry-corrected light field rendering for creating a holographic stereogram,” in IEEE Computer Vision and Pattern Recognition Workshops (CVPR), pp. 9–13 (2012).

30. M. Yamaguchi and K. Wakunami, “13.7 Scanning Vertical Camera Array for Computational Holography,” in Multi-Dimensional Imaging (eds B. Javidi, E. Tajahuerce, and P. Andrés), IEEE Press, Wiley, pp. 315–322 (2014)

31. E. Sahin, S. Vagharshakyan, J. Makinen, R. Bregovic, and A. Gotchev, “Shearlet-domain light field reconstruction for holographic stereogram generation,” in IEEE International Conference on Image Processing (ICIP), pp. 1479–1483 (2016).

32. Y. Li, X. Sang, D. Chen, P. Wang, H. Wang, J. Yuan, K. Wang, and B. Yan, “A Hole-filling Method for DIBR Based on Convolutional Neural Network,” in Conference on Lasers and Electro-Optics/Pacific Rim, paper F1F.5 (2018).
