
Pixel-density enhanced integral three-dimensional display with two-dimensional image synthesis


Abstract

Integral three-dimensional (3D) displays can display naturally viewable 3D images. However, displaying 3D images with high pixel density is difficult because the maximum pixel number is restricted by the number of lenses in the lens array. Therefore, we propose a method for increasing the maximum pixel density of 3D images by optically synthesizing the displayed images of an integral 3D display and a high-definition two-dimensional display using a half mirror. We evaluated the improvements in the 3D image resolution characteristics through simulation analysis of the modulation transfer function. We developed a prototype display system that can display 3D images with a maximum resolution of 4K and demonstrated the effectiveness of the proposed method.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Light field displays have attracted considerable attention in recent years. With this display method, naturally viewable three-dimensional (3D) images with smooth motion parallax can be displayed without the use of special glasses by reproducing light rays from objects. In particular, an integral 3D display [1–5], which is one of the light field display methods, can display 3D images with parallax in both the horizontal and vertical directions. Because of this advantage, it is expected to be applied to a wide range of fields, such as television, medicine, education, and digital signage. Integral 3D displays are based on integral photography [6], a 3D photographic technique proposed by Gabriel Lippmann in 1908. In typical integral 3D displays, elemental images are displayed on a display device, and a lens array consisting of numerous tiny elemental lenses is placed in front of it. Generally, direct-view displays such as liquid crystal displays (LCDs) and organic light-emitting diode displays, or projectors [7–9], are used as display devices for integral 3D displays. In particular, the use of a direct-view display allows the display system to be constructed in a thin form. One of the challenges of integral 3D displays is that a large volume of image information is required to display high-resolution 3D images. The maximum pixel number of an integral 3D image is generally the same as the number of lenses constituting the lens array, which is typically only a few hundredths of the number of pixels in the display device. Therefore, it is difficult to display 3D images with high pixel density when the display system is configured with a single display device and lens array. To increase the pixel density of integral 3D images, display methods using multiple display devices or time-division multiplexing display techniques have been proposed [8–12]. For example, in methods that use multiple projectors [8,9], elemental images are projected onto a lens array in a superimposed manner. More condensed light spots, which become pixels in a 3D image, are generated than the number of lenses, thereby increasing the pixel density of the 3D image. In the method using an electrically controllable mask array [12], the mask array is placed close to the lens array, and both the elemental images displayed on the display device and the aperture positions of the mask array are switched synchronously in a frame-sequential manner. The apertures of the mask array become pixels in a 3D image, thereby increasing the pixel density of the 3D image. However, the pixel-density improvement ratio in these methods depends on either the number of display devices or the multiplicity of time divisions. In prototype display systems based on these conventional methods, the pixel density was increased only several-fold, and no further significant increase was realized.

Layered 3D displays have been studied as a light field display method for displaying 3D images with high pixel density. Layered 3D displays are based on a different principle from integral 3D displays: multiple two-dimensional (2D) images are stacked with gaps between them. A light ray emitted from behind passes through different pixels in the 2D images depending on its emission position and direction of travel. Therefore, the luminance values of all the light rays in the viewing zone can be expressed using the pixel values of the 2D images. A 3D image can be displayed by generating each 2D image through optimization calculations that approximate these luminance values with those of the target light field image to be reproduced. Layered 3D displays are broadly classified into two types, multiplicative and additive, depending on the display system configuration and the method of expressing the luminance values of the reproduced light rays [13]. Multiplicative layered 3D displays consist of multiple transmissive LCDs stacked at a distance from each other [14–19]. In this configuration, the luminance values of all light rays in the viewing zone are expressed as the product of the transmittances of the LCD pixels. Conversely, for additive layered 3D displays, an arrangement that uses multiple holographic optical elements (HOEs) stacked at a distance from each other in combination with multiple projectors has been proposed [20]. The 2D image projected from each projector is imaged on each HOE, and the viewer observes an image in which all the 2D images of the projectors are optically synthesized. The luminance values of all the light rays in the viewing zone are expressed as the sum of the pixel values of the projected 2D images. Compared to multiplicative layered 3D displays, which display 3D images by attenuating the luminance of light rays, additive displays have the advantage of displaying brighter 3D images more easily. Although the reproduced light rays have errors in their luminance values in principle, layered 3D displays have the advantage of displaying 3D images with a high maximum pixel density. For example, when a display system is configured with two LCDs and a 3D image is displayed at the same depth distance as either LCD surface, the upper-limit spatial frequency of the 3D image is the same as that of the LCD [15,17]. However, to improve the resolution characteristics of 3D images at deep depth positions, it is necessary to increase the number of 2D images. Therefore, a method for increasing the number of 2D images using a half mirror was proposed [21]. In addition, display methods that increase the number of apparent 2D images using the time-division multiplexing display technique and special optical elements such as a reflective polarizer or geometric phase lenses have also been devised [22,23].

As described above, the basic system configuration of integral 3D displays is simple, consisting only of a display device and a lens array; however, the maximum pixel density of 3D images is low. In contrast, layered 3D displays can produce 3D images with higher pixel density, but they require multiple 2D images to be stacked at a distance from each other, thereby increasing the complexity of the display system. To solve these problems, a layered 3D display using a directional backlight in combination with a transmissive LCD has been proposed [15]. The directional backlight used in that study consists of an LCD and two orthogonally stacked lenticular sheets, which is equivalent to a low-resolution integral 3D display. The use of a directional backlight can reduce the errors in the luminance values of the reproduced light rays in comparison with typical multiplicative layered 3D displays using multiple LCDs and a uniform backlight. In addition, 3D images can be displayed even with a system configuration consisting of a directional backlight and a single closely placed LCD, which has the advantage of enabling a thin, high-pixel-density 3D display system. This method combines the advantages of integral 3D displays and layered 3D displays. However, since it is based on the principle of multiplicative layered 3D displays, the 3D images become dark unless a high-brightness backlight is used. Moreover, the prototype display system consisted of a lenticular sheet with a lens pitch of 2.54 mm and LCDs with a pixel pitch of 282 µm; hence, the resolution characteristics of the 3D images were not high.

Here we propose a method for displaying 3D images with a high pixel density by optically synthesizing a single 2D image with an integral 3D image. In the proposed method, elemental images and a 2D image are generated in advance based on the principle of additive layered 3D displays. The elemental images are used to display an integral 3D image and are shown on the display device comprising the integral 3D display. The 2D image is used to improve the pixel density of the integral 3D image and is shown on the 2D display. The displayed integral 3D image and 2D image are optically synthesized with a half mirror. Our study offers three main advantages over, and contributions beyond, existing research [15]. First, while the existing research is based on the principle of multiplicative layered 3D displays, the proposed method is based on the principle of additive layered 3D displays, which can display brighter 3D images without the need for a high-brightness backlight. Second, while only the upper-limit spatial frequency of 3D images is discussed in the existing literature, we evaluated the improvement of the modulation transfer function (MTF) of 3D images through simulation analysis and derived a design policy for the display system. Third, we prototyped a display system using an LCD with a pixel pitch of 55.5 µm and realized the display of 3D images with a maximum resolution of 4K. Although the scale of our prototype display system is large because of the half mirror used, a thinner display system could be constructed in the future by applying a 3D/2D conversion technique. In a previous study, a method for increasing the pixel density of 3D images by synthesizing an integral 3D image and a 2D image using the time-division multiplexing display technique was proposed [24]. Although conceptually similar to our study, that research did not describe the details of how the images are generated. Therefore, to the best of our knowledge, we are the first to propose a method for displaying 3D images with high pixel density by optically synthesizing an integral 3D image and a 2D image based on the principle of additive layered 3D displays. In Section 2, the principle of the proposed 3D display method and the method for generating the elemental images and 2D image are explained. In Section 3, the simulation analysis results for the MTF of 3D images are described, and the design policy for the display system is discussed. In Sections 4 and 5, the specifications of the prototype display system and the experimental results of displaying 3D images are presented, respectively. Finally, the summary and prospects of our research are described in Section 6.

2. Pixel-density enhanced integral 3D display

2.1 Basic principle

The basic configuration of the proposed 3D display method is illustrated in Fig. 1(a). The display system consists of an integral 3D display, a 2D display, and a half mirror. The integral 3D display consists of a display device and a lens array made up of numerous tiny elemental lenses arrayed in two dimensions. The lens array is placed at a distance of $f$, the focal length of the lens array, from the display device. The integral 3D display and 2D display are arranged such that each display surface is at an angle of 45° to the surface of the half mirror. $D$ is the distance between the center position of the half mirror and the lens array, and $D + d$ is the distance between the center position of the half mirror and the 2D display. The viewer observes the image in which the displayed images of the integral 3D display and 2D display are optically synthesized by the half mirror. Hereafter, the image displayed by the integral 3D display is referred to as the integral 3D image, the image displayed by the 2D display as the 2D image, and the image optically synthesized by the half mirror as the synthetic 3D image. Further, the image displayed by a general integral 3D display without the proposed method is referred to as the normal integral 3D image.

Fig. 1. Basic principle of proposed display method: (a) basic configuration of display system and (b) enlarged view of configuration optically equivalent to basic configuration.

Figure 1(b) shows an enlarged view of the configuration that is optically equivalent to the basic configuration shown in Fig. 1(a) when the attenuation of luminance due to the half mirror is ignored. $I_{s,t}$ is the image observed from an arbitrary viewpoint $s,t$ when only the integral 3D display is viewed. Each elemental lens constituting the lens array is a pixel of $I_{s,t}$. The luminance value of each pixel in $I_{s,t}$ when observed from a certain viewpoint is determined by the pixel value of the corresponding pixel in the elemental images, as shown in Fig. 1(b). $L$ is the image displayed by the 2D display. For simplicity, the pixel pitches of the 2D image and elemental images were set to be equal. Consider a single light ray that is emitted from position $(i,j)$ in the 2D image $L$ and travels in the direction of an arbitrary viewpoint $s,t$. This light ray passes through the position $(i - \tilde{d}s,\, j - \tilde{d}t)$ in the integral 3D image $I_{s,t}$. The luminance of this light ray is the sum of the luminances of the 2D image and the integral 3D image. The symbol $\tilde{d}$ represents the amount of disparity proportional to $d$, the distance between the lens array and the 2D image. When the lens array and 2D image are placed at the same depth distance, i.e., $d = 0$ mm, $\tilde{d}$ is 0. Let $L(i,j)$ be the luminance value of the 2D image at position $(i,j)$ and $I_{s,t}(m,n)$ be the luminance value of the integral 3D image at position $(i - \tilde{d}s,\, j - \tilde{d}t)$ and viewpoint $s,t$. The luminance value of each light ray that is emitted from position $(i,j)$ and travels in the direction of viewpoint $s,t$ can then be expressed as $L(i,j) + I_{s,t}(m,n)$. Furthermore, consider a target light field image at the same depth position as the lens array, with its pixel pitch set to be the same as that of the 2D image and elemental images. Let $V_{s,t}(i - \tilde{d}s,\, j - \tilde{d}t)$ be the luminance value of the target light field image at position $(i - \tilde{d}s,\, j - \tilde{d}t)$ and viewpoint $s,t$. Under these conditions, a synthetic 3D image whose luminance values approximate those of the target light field image can be reproduced by generating elemental images and a 2D image according to the following equation using the least-squares method:

$$\mathop{\mathrm{argmin}}\limits_{L(i,j),\, I_{s,t}(m,n)} \left[ \sum_{s,t,i,j} \left\{ L(i,j) + I_{s,t}(m,n) - V_{s,t}(i - \tilde{d}s,\; j - \tilde{d}t) \right\}^2 \right]. \tag{1}$$

The pixel size of the 2D image is smaller than that of the integral 3D image. Therefore, the synthetic 3D image has a finer spatial distribution of luminance than a normal integral 3D image, resulting in a higher pixel density in the 3D images. This is the basic principle behind the proposed method. Note that the number of positions $(i,j)$ and viewpoints $s,t$ must be set to a finite number. For example, positions $(i,j)$ can be set as the center position of each pixel in the 2D image, and viewpoints $s,t$ can be set in the directions connecting the center position of each pixel in the elemental images and the center position of the corresponding elemental lens.
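To make the ray correspondence concrete, the following Python sketch evaluates the squared-error objective of Eq. (1) on a toy light field. The array sizes, the disparity value, the wrap-around shift, and keeping the integral image on the same pixel grid as the 2D image are all simplifying assumptions of this sketch, not the paper's parameters.

```python
import numpy as np

# Toy dimensions (assumptions of this sketch, not the paper's parameters).
H, W = 64, 64        # 2D-image pixels (i, j)
S, T = 5, 5          # viewpoints (s, t), centered on the array middle
D_TILDE = 2          # disparity d~ per unit viewpoint index, proportional to d

rng = np.random.default_rng(0)
L = np.zeros((H, W))              # 2D image L(i, j)
I = np.zeros((S, T, H, W))        # integral image I_{s,t}, kept on the same
                                  # pixel grid as L for brevity
V = rng.random((S, T, H, W))      # target light field V_{s,t} (placeholder)

def objective(L, I, V):
    """Sum-of-squared-errors objective of Eq. (1) over all rays (i, j, s, t)."""
    err = 0.0
    for s in range(S):
        for t in range(T):
            # The ray from 2D-image pixel (i, j) toward viewpoint (s, t)
            # meets the integral image at (i - d~s, j - d~t); np.roll
            # realizes that index shift, with wrap-around at the borders.
            ds = D_TILDE * (s - S // 2)
            dt = D_TILDE * (t - T // 2)
            I_shifted = np.roll(I[s, t], (ds, dt), axis=(0, 1))
            err += np.sum((L + I_shifted - V[s, t]) ** 2)
    return err

print(objective(L, I, V))
```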

There are two types of integral 3D displays: depth priority integral imaging (DPII) and resolution priority integral imaging (RPII) [25]. Although the basic configuration of the integral 3D display used in our proposed method is DPII, the proposed method is also applicable to RPII. With the proposed method, elemental images can be generated based on the geometric correspondence between the light rays from the pixels of the elemental images and the light rays output from the elemental lenses. In DPII, the lens array is placed at a distance of $f$, i.e., the focal length of the lens array, from the display device. Light rays from a certain pixel of the elemental images are output as collimated light from the elemental lens toward the direction connecting the pixel position and the center position of the elemental lens. In contrast, in RPII, the lens array is placed farther from the display device than the focal length $f$. Light rays from a certain pixel of the elemental images are output as convergent light from the elemental lens toward the direction connecting the pixel position and the center position of the elemental lens. Based on these geometric correspondences, elemental images can be generated. We chose DPII as the basic configuration for the integral 3D display because its image generation process is easier to implement than that of RPII. In DPII, the angle of the light rays output from the elemental lens is the same regardless of the incident position when the light rays from a certain pixel of the elemental images are incident on the elemental lens. In contrast, in RPII, the angle of the light rays output from the elemental lens changes according to the incident position. In RPII, it is therefore necessary to consider the correspondence between the incident positions and output angles when generating images, which complicates the image generation process.

2.2 Generation method of elemental images and 2D image

In conventional studies, two methods have been devised for generating images for layered 3D displays: convolutional neural network (CNN)-based and analytical methods [13]. Analytical methods can generate images with fewer errors in luminance values, whereas CNN-based methods require less processing time for image generation. Among the analytical methods, the image generation methods for multiplicative layered 3D displays use non-negative tensor factorization, while those for additive ones use the non-negative least-squares method [15,20]. In this study, we developed a dedicated image generation method based on the analytical methods, following the system configuration of the proposed display method. Figure 2 shows the flowchart of the image generation method, in which the error between the luminance values of the synthetic 3D image and the target light field image is gradually reduced by alternately iterating the generation of the elemental images and the 2D image. Initially, multi-viewpoint images are generated from known 3D models as the target light field image. Subsequently, the initial pixel values of the 2D image are set to 0. After this, the process becomes iterative: the elemental images are generated under the condition that the 2D image is known, and the 2D image is then generated under the condition that the elemental images are known. This process is iterated a predetermined number of times. In the experiments in Sections 3 and 5, the process was iterated 30 times. Although the number of iterations required for the error to converge depended on the 3D scene, it tended to converge after approximately 15 iterations; in the experiments, the number of iterations was set to 30, twice that number.

Fig. 2. Flowchart of image generation method of elemental images and 2D image.

The elemental images are generated according to the following equation obtained from Eq. (1):

$$\begin{array}{c} I_{s,t}(m,n) = \dfrac{\sum_{s,t} V_{s,t}(i - \tilde{d}s,\; j - \tilde{d}t)}{N} - L(i,j), \\[4pt] \{\, I_{s,t}(m,n) \;|\; 0 \le I_{s,t}(m,n) \le v_I \,\}, \end{array} \tag{2}$$

where $N$ is the number of viewpoints and $v_I$ is the upper limit of $I_{s,t}(m,n)$. A rounding process is performed each time the elemental images are generated such that $I_{s,t}(m,n)$ falls within the range 0–$v_I$. Similarly, the 2D image is generated according to the following equation:

$$\begin{array}{c} L(i,j) = \dfrac{\sum_{s,t} V_{s,t}(i - \tilde{d}s,\; j - \tilde{d}t)}{N} - \dfrac{\sum_{s,t} I_{s,t}(m,n)}{N}, \\[4pt] \{\, L(i,j) \;|\; 0 \le L(i,j) \le v_L \,\}, \end{array} \tag{3}$$
where ${v_L}$ is the upper limit of $L({i,j} )$. For example, if the luminance values of the integral 3D image and 2D image are the same as the pixel values of the elemental images and 2D image, respectively, and are expressed in eight bits from 0 to 255, then ${v_I}$ and ${v_L}$ should be set to 255.
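As a concrete illustration of this alternation, the following minimal numpy sketch runs the loop of Fig. 2 with the updates of Eqs. (2) and (3). Here `V_mean` stands for the target light field already shifted into the 2D-image frame and averaged over the viewpoints (the summation term shared by both equations), the block-averaging helper is a stand-in for the coarser lens-pitch sampling of the integral 3D image, and the array sizes are toy assumptions rather than the paper's parameters; the clipping plays the role of the rounding process described above.

```python
import numpy as np

V_I = V_L = 255.0   # 8-bit upper limits v_I and v_L
K = 4               # 2D-image pixels per lens pitch (toy assumption)

def lens_pool(img):
    """Average over K x K blocks and upsample back, modeling the coarser
    lens-pitch sampling of the integral 3D image."""
    h, w = img.shape
    coarse = img.reshape(h // K, K, w // K, K).mean(axis=(1, 3))
    return np.repeat(np.repeat(coarse, K, axis=0), K, axis=1)

def generate(V_mean, iters=30):
    """Alternating updates of Fig. 2; 30 iterations were used in the paper."""
    L = np.zeros_like(V_mean)                    # 2D image initialized to 0
    for _ in range(iters):
        # Eq. (2): update the integral image with L fixed, round into [0, v_I].
        I = np.clip(lens_pool(V_mean - L), 0.0, V_I)
        # Eq. (3): update the 2D image with I fixed, round into [0, v_L].
        L = np.clip(V_mean - I, 0.0, V_L)
    return L, I

# Placeholder target: a random light-field average on a 64 x 64 grid.
L, I = generate(np.random.default_rng(0).random((64, 64)) * 255.0)
```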

3. Simulation analysis of MTF of 3D image

3.1 Simulation analysis method

We developed a simulation analysis method to evaluate the improvement in the resolution characteristics of the synthetic 3D image compared with those of the normal integral 3D image. The MTF of 3D images can be analyzed under conditions close to those of actual display systems by considering the effect of the pixel apertures in the elemental images and the 2D image. A flowchart of the analytical process is shown in Fig. 3. Initially, the depth distance, spatial frequency, and phase of a sine wave are set. Subsequently, elemental images and a 2D image are generated using multi-viewpoint images of the sine wave as the target light field image, such that a 3D image of the sine wave is displayed. The synthetic 3D image is then displayed in virtual space, and the reproduced light rays are projected onto a virtual screen placed at the same depth position as the sine wave based on geometrical optics. The modulation factor of the sine wave is calculated from its amplitude by fitting a sine wave to the luminance profile of the light rays projected on the virtual screen using the least-squares method. The same process is repeated while changing the phase of the sine wave a predetermined number of times; the modulation factor is calculated for each phase, and the average over the phases is taken as the modulation factor for that depth distance and spatial frequency. The MTF curve of the 3D images at a certain depth distance can then be obtained by repeating the above process while changing the spatial frequency of the sine wave.

Fig. 3. Flowchart of simulation analysis of MTF curve of 3D image at depth distance z.

As mentioned above, the simulation analysis method includes both the image generation and display processes. Image generation is performed based on the method described in Subsection 2.2. We applied an oversampling process that increases the density of light rays to the image generation and display processes. This allows analysis of the MTF including the effect of the pixel apertures of the 2D image and elemental images. The oversampling process can be explained as follows. For simplicity, the MTF is analyzed in only one direction. Consider light rays that are emitted from positions $i$ in the 2D image and travel in the directions of viewpoints $s$. Each light ray is assumed to travel in a straight line with no width. When the pixel aperture ratio of the 2D image is 0%, the emission positions of the light rays $i$ are the center positions of the pixels in the 2D image. When the pixel aperture ratio of the 2D image is 100%, the density of the emission positions is increased by additionally setting emission positions between the pixel centers. This allows an approximate representation of the continuous emission region. We define these newly set emission positions as the oversampled emission positions $i^{\prime}$. For example, if the pitch of the emission positions before oversampling $i$ is $p_L$ and the oversampling factor is 10, then the pitch of the oversampled emission positions $i^{\prime}$ is $p_L/10$. Similarly, consider the viewpoints $s$. When the pixel aperture ratio of the elemental images is 0%, the light rays emitted from the center position of each pixel in the elemental images travel in the direction connecting each emission position and the center position of the facing elemental lens, as illustrated in Fig. 1(b). These directions define the viewpoints, and the total number of viewpoints is equal to the number of pixels in the elemental image corresponding to one elemental lens. When the pixel aperture ratio of the elemental images is 100%, additional emission positions are set between the pixel centers in the elemental images. In this condition, light rays are reproduced in the directions connecting the newly set emission positions and the center position of the facing elemental lenses. We define these directions as the oversampled viewpoints $s^{\prime}$. Based on this concept of oversampling, the simulation analysis method generates elemental images and a 2D image according to the following equation:

$$\mathop{\mathrm{argmin}}\limits_{L(i),\, I_s(m)} \left[ \sum_{s^{\prime},i^{\prime}} \left\{ L(i) + I_s(m) - V_{s^{\prime}}(i^{\prime} - \tilde{d}s^{\prime}) \right\}^2 \right], \tag{4}$$
where $L(i)$ is the luminance value of the 2D image at position $i^{\prime}$ and $I_s(m)$ is the luminance value of the integral 3D image at position $i^{\prime} - \tilde{d}s^{\prime}$ and viewpoint $s^{\prime}$. According to Eq. (4), the image generation process is performed under the condition that only the pixel density and the viewpoint density of the target light field image are increased, without changing the pixel density of the elemental images and 2D image. Similarly, in the process of projecting light rays onto the virtual screen, the light rays that are emitted from the oversampled emission positions $i^{\prime}$ and travel in the directions of the oversampled viewpoints $s^{\prime}$ are handled. The modulation factor of the sine wave is calculated by fitting a sine wave to the luminance profile on the virtual screen such that the squared errors are minimized, according to the following equation:
$$\begin{array}{c} \mathop{\mathrm{argmin}}\limits_{h_b} \left( \displaystyle\sum_{s^{\prime},i^{\prime}} \left[ P(x(s^{\prime},i^{\prime})) - \frac{h_b v_P}{2} \left\{ \sin(2\pi A_a x(s^{\prime},i^{\prime}) + B_b) + 1 \right\} \right]^2 \right), \\[6pt] x(s^{\prime},i^{\prime}) = i^{\prime} - (\tilde{d} + \tilde{z}) s^{\prime}, \end{array} \tag{5}$$
where $P(x)$ is the luminance value of the reproduced light ray at position $x$ on the virtual screen, $h_b$ is the modulation factor of the sine wave, $v_P$ is the upper limit of $P(x)$, $A_a$ is the $a$-th spatial frequency, $B_b$ is the $b$-th phase, $x(s^{\prime},i^{\prime})$ is the projection position when a light ray that is emitted from position $i^{\prime}$ in the 2D image and travels in the direction of viewpoint $s^{\prime}$ is projected onto the virtual screen, and $\tilde{z}$ is the amount of disparity proportional to $z$, the depth distance of the sine wave. By performing these processes, the MTF of the 3D images is analyzed, including the effect of the pixel apertures of the elemental images and 2D image. Table 1 lists the parameters of the MTF analysis experiments. The aperture ratios of the 2D image, elemental images, and elemental lenses were set to 100%. The numbers of emission positions $i$ and viewpoints $s$ before oversampling were 2000 and 10, respectively. Because each was oversampled by a factor of 10, the total number of light rays was 2 million. The image generation process and the projection of light rays onto the virtual screen were performed with these 2 million light rays.
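Note that once the frequency $A_a$ and phase $B_b$ are fixed, the fit of Eq. (5) is linear in the modulation factor $h_b$ and has a closed-form least-squares solution. The sketch below demonstrates this on a synthetic luminance profile; the positions, frequency, phase, and the test modulation of 0.8 are placeholders, not values from Table 1.

```python
import numpy as np

V_P = 255.0  # upper limit v_P of the luminance on the virtual screen

def modulation_factor(x, P, A, B):
    """Closed-form least-squares h_b of Eq. (5) for fixed A_a and B_b:
    minimizing sum (P - h*q)^2 over h gives h = <P, q> / <q, q>."""
    q = 0.5 * V_P * (np.sin(2.0 * np.pi * A * x + B) + 1.0)
    return float(np.dot(P, q) / np.dot(q, q))

# Synthetic check with a known modulation factor of 0.8. In the actual
# analysis, P comes from the rays projected onto the virtual screen at
# positions x(s', i') = i' - (d~ + z~)s', and the result is averaged
# over several phases B_b.
x = np.linspace(0.0, 10.0, 2001)      # screen positions (mm)
A, B = 0.5, 0.3                       # cycles/mm and rad (test values)
P = 0.8 * 0.5 * V_P * (np.sin(2.0 * np.pi * A * x + B) + 1.0)
print(modulation_factor(x, P, A, B))  # recovers 0.8
```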


Table 1. Parameters of MTF analysis experiments

3.2 Analysis results

Figure 4 shows the analysis results for the MTF of the 3D images. Frequency $\omega_I$ is the Nyquist frequency of the normal integral 3D image and is equal to 0.5 cycles/mm; this value was obtained from $1/(2g)$, where $g$ is the lens pitch of the lens array. Frequency $\omega_L$ is the Nyquist frequency of the 2D image and is equal to 5 cycles/mm; this value was obtained from $1/(2p_L)$, where $p_L$ is the pixel pitch of the 2D image. Frequency $\psi(z)$ is the improvement-limit frequency, defined as the upper-limit spatial frequency at which the MTF of 3D images can be improved with the proposed method; it is expressed by

$$\begin{array}{c} \psi(z) = \min(\omega_L,\; \omega_e(z)), \\[4pt] \omega_e(z) = \dfrac{1}{2|z|\tan\frac{\theta}{2}} = \dfrac{f}{g|z|}, \end{array} \tag{6}$$
where $\theta$ is the viewing angle of the synthetic 3D image and $\theta = 2\tan^{-1}(g/2f)$, as in a typical integral 3D display. Figure 4(a) shows the MTF curves when the 3D images are displayed at a depth distance of 0 mm. It can be observed that the MTF of the synthetic 3D image is significantly improved compared with that of the normal integral 3D image. The MTF at frequency $\omega_I$ is 0.99 for the synthetic 3D image and 0.41 for the normal integral 3D image. The MTF of the synthetic 3D image at frequency $\omega_L$ is 0.41. Although the MTF also improved at frequencies higher than $\omega_L$, it could not be evaluated because the signal folds back beyond the Nyquist frequency. For both the normal integral 3D image and the synthetic 3D image, the curves were almost consistent with the squared sinc function until the MTF dropped from 1 to 0. Figures 4(b) and 4(c) show the MTF curves at depth distances of 5 and 10 mm, respectively. Although the MTF curve of the normal integral 3D image hardly changed, the MTF of the synthetic 3D image decreased as the depth distance increased and approached the MTF curve of the normal integral 3D image. Similarly, the MTF of the synthetic 3D image approached that of the normal integral 3D image as the frequency increased and was almost equal to it at the improvement-limit frequencies $\psi(5)$ and $\psi(10)$. From these results, we confirmed that the MTF was improved up to the improvement-limit frequency.
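For reference, the improvement-limit frequencies quoted above can be reproduced from Eq. (6) with a few lines of code. In this sketch the lens pitch and focal length are back-computed from $\omega_I = 0.5$ cycles/mm, $\omega_L = 5$ cycles/mm, and a 10° viewing angle rather than taken from Table 1, so they are assumptions of the sketch.

```python
import numpy as np

omega_I, omega_L, theta_deg = 0.5, 5.0, 10.0         # cycles/mm, cycles/mm, deg
g = 1.0 / (2.0 * omega_I)                            # lens pitch, from omega_I = 1/(2g)
f = g / (2.0 * np.tan(np.radians(theta_deg) / 2.0))  # from theta = 2*atan(g/2f)

def psi(z_mm):
    """Improvement-limit frequency of Eq. (6): min(omega_L, f / (g |z|))."""
    if z_mm == 0.0:
        return omega_L                               # omega_e(z) diverges at z = 0
    return min(omega_L, f / (g * abs(z_mm)))

for z in (0.0, 5.0, 10.0, 20.0):
    print(f"z = {z:4.1f} mm  ->  psi(z) = {psi(z):.2f} cycles/mm")
```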

Fig. 4. Analysis results of MTF curves for 3D images at different depth distances: depth distances of (a) 0 mm, (b) 5 mm, and (c) 10 mm.

Next, we analyzed the depth range in which the improvement in the MTF of 3D images can be obtained using the proposed method. The graph in Fig. 5(a) shows the relationship between the depth distance and the MTF of the 3D images when the frequency of the sine wave is fixed at $0.5\,\omega_I$, $1.0\,\omega_I$, or $2.0\,\omega_I$. It can be observed that the shorter the depth distance, the greater the improvement in MTF achieved by the proposed method. At frequencies $0.5\,\omega_I$, $1.0\,\omega_I$, and $2.0\,\omega_I$, the MTF improved within approximate depth ranges of $|z| \le$ 20 mm, 10 mm, and 0.5 mm, respectively. It can be observed that the lower the frequency, the wider the depth range in which the MTF improves. The graph in Fig. 5(b) shows the relationship between the viewing angle of the synthetic 3D image and the depth range over which the MTF improves. In this analysis, only the focal length of the lens array $f$ and the pixel pitch of the elemental images $p_I$ were changed, while the lens pitch of the lens array $g$ and the pixel pitch of the 2D image $p_L$ were kept fixed. This changes only the viewing angle while maintaining the spatial resolution of the 3D images and the angular density of the light rays. Under this condition, we analyzed the depth range in which the MTF of the synthetic 3D image improved by more than 1% compared with that of the normal integral 3D image. The viewing angle of the 3D images was calculated as $\theta = 2\tan^{-1}(g/2f)$. Figure 5(b) demonstrates that the narrower the viewing angle, the wider the depth range in which the MTF improves. Moreover, it can be seen that the MTF improves over a wider depth range when the frequency is lower.

Fig. 5. Analysis results of depth range where improvement effect of MTF can be obtained: (a) relationship between depth distance and MTF of 3D images when viewing angle is 10° and (b) relationship between viewing angle and depth range where improvement effect of MTF can be obtained.

The improvement in the MTF with the proposed method decreased as the depth distance of the 3D image increased. In general, the resolution characteristics of integral 3D images at deep depth positions are determined by the angular density of the light rays [26]. With the proposed method, the maximum pixel density of 3D images can be improved; however, the angular density of the light rays cannot. The improvement in the MTF diminishes because, at deeper depth positions, the resolution characteristics are governed more by the angular density of the light rays than by the maximum pixel density. In the simulation analysis experiments, the luminance weights of the integral 3D image and 2D image were set to be the same. Changing these weights affects the MTF: when the weight of the integral 3D image is set larger than that of the 2D image, the MTF of the synthetic 3D image at shallow depth positions decreases; conversely, when the weight of the 2D image is set larger than that of the integral 3D image, the MTF of the synthetic 3D image at deep depth positions decreases. Therefore, it is desirable to adjust the luminances of the integral 3D display and the 2D display to be the same when constructing a display system.

Based on these results, we examined the design policy for the display system. A display device with high pixel density must be used as the 2D display to improve the maximum pixel density of 3D images. Furthermore, the improvement in the resolution characteristics is obtained over a wide depth range by narrowing the viewing angle; this requires a lens array in which the ratio of focal length to lens size is large. The resolution characteristics of 3D images at deeper depth positions are the same as those of the normal integral 3D image. To increase the resolution characteristics at deeper depth positions, it is necessary to configure the integral 3D display using a display device with high pixel density [26]. As described above, it is possible to display 3D images with comprehensively high display characteristics by configuring a display system using a lens array with a long focal length and display devices with high pixel density.

4. Prototype display system

We developed a prototype display system based on the design policy described in Section 3. Figure 6 shows the display system, and Table 2 lists its specifications. We selected a lens array with a honeycomb arrangement, a lens pitch of 1.38 mm, and a focal length of 8.846 mm, assuming individual viewing at a viewing angle of approximately 10°. Two LCDs (ASTRODESIGN, Inc., DM-3409) with 4K resolution (3840 × 2160 pixels) and a display size of 9.6 inches diagonal were used for the integral 3D display and the 2D display.

Fig. 6. Appearance of prototype display system.


Table 2. Specifications of prototype display system

The installation positions were adjusted and the displayed images were calibrated with high accuracy. First, we performed the calibration process for the integral 3D display by aligning the elemental images with the lens array. A pattern image in which the center part of each elemental image was filled with white and the other areas were filled with black was displayed on the display device comprising the integral 3D display. The display position, rotation, and zoom of the pattern image were finely adjusted such that the entire display screen appeared white when viewing the integral 3D display from a distance directly in front, and these adjustment values were recorded. Next, the integral 3D display, 2D display, and half mirror were placed on a breadboard. The half mirror was set at an angle of 45° using a protractor. The positions of the integral 3D display and 2D display were adjusted using a tape measure such that the distance between each display and the half mirror was the same. Finally, the displayed image of the 2D display was aligned with that of the integral 3D display. A 3D image of a dot pattern was displayed at a depth distance of 0 mm by the integral 3D display, and a dot pattern image of the same shape was displayed on the 2D display. The display position, rotation, and zoom of the dot pattern image on the 2D display were finely adjusted such that the dot positions matched while viewing the image optically synthesized by the half mirror, and these adjustment values were recorded. The synthetic 3D image was then displayed by showing the elemental images and the 2D image, geometrically corrected using the recorded adjustment values, on the integral 3D display and 2D display, respectively. A sketch of such a correction is given below.
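The sketch illustrates how recorded adjustment values (translation, rotation, zoom) can be applied as a single affine warp with OpenCV; the function and the numeric values are hypothetical, since the paper does not specify its correction implementation beyond position, rotation, and zoom.

```python
import numpy as np
import cv2

def geometric_correction(img, dx, dy, angle_deg, zoom):
    """Warp an image by recorded display adjustments: rotate and zoom about
    the image center, then translate by (dx, dy) pixels."""
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, zoom)
    M[:, 2] += (dx, dy)          # append the recorded translation offsets
    return cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_LINEAR)

# Hypothetical adjustment values applied to a blank 4K frame.
frame = np.zeros((2160, 3840), dtype=np.uint8)
corrected = geometric_correction(frame, dx=1.5, dy=-0.8, angle_deg=0.1, zoom=1.002)
```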

The luminances of the integral 3D display and 2D display were also calibrated by matching them at approximately 130 cd/m². This was achieved by adjusting the backlight luminance of each LCD while measuring, with a luminance meter, the luminance of each display screen through the half mirror when a white image was displayed. Next, the gamma correction factor was obtained as follows. White, intermediate-gray, and black images were displayed on the LCD in sequence, and the luminance value of each image was measured with the luminance meter. The pixel values of the images were varied in increments of 32 over the range 0–255. The gamma correction factor was calculated by fitting a gamma curve to the measured luminance values such that the squared errors were minimized; a gamma correction factor of 0.54 was obtained. Subsequently, pixel values were corrected using the gamma correction factor in the image generation process for the elemental images and 2D image. The elemental images were generated according to the following equation, which was obtained by applying gamma correction to Eq. (2):

$$\begin{array}{c} I_{s,t}(m,n) = v_I \left\{ \dfrac{T_I(m,n)}{v_I} \right\}^{\gamma}, \\[6pt] T_I(m,n) = \dfrac{\displaystyle\sum_{s,t} \left[ v_I \left\{ \dfrac{V_{s,t}(i - \tilde{d}s,\; j - \tilde{d}t)}{v_I} \right\}^{1/\gamma} \right]}{N} - v_I \left\{ \dfrac{L(i,j)}{v_I} \right\}^{1/\gamma}, \end{array} \tag{7}$$
where $\gamma $ is the gamma correction factor. Similarly, the 2D image was generated according to the following equation, which was obtained after gamma correction was applied to Eq. (3).
$$\begin{array}{c} L(i,j) = v_L \left\{ \dfrac{T_L(i,j)}{v_L} \right\}^{\gamma}, \\[6pt] T_L(i,j) = \dfrac{\displaystyle\sum_{s,t} \left[ v_L \left\{ \dfrac{V_{s,t}(i - \tilde{d}s,\; j - \tilde{d}t)}{v_L} \right\}^{1/\gamma} \right]}{N} - \dfrac{\displaystyle\sum_{s,t} \left[ v_L \left\{ \dfrac{I_{s,t}(m,n)}{v_L} \right\}^{1/\gamma} \right]}{N}. \end{array} \tag{8}$$
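As an aside on the gamma measurement described above, a possible implementation of the curve fit is sketched below: a gamma curve of the form $L(p) = L_{\max}(p/255)^{1/\gamma}$ is fitted to luminances measured at pixel values stepped by 32. The readings here are synthetic placeholders generated with a display exponent of 2.2, so the fitted factor (about 0.45) differs from the 0.54 obtained for the actual LCDs.

```python
import numpy as np
from scipy.optimize import curve_fit

# Pixel values displayed in steps of 32 over the range 0-255, as in the text.
p = np.array([0, 32, 64, 96, 128, 160, 192, 224, 255], dtype=float)
L_meas = 130.0 * (p / 255.0) ** 2.2   # placeholder luminance readings (cd/m^2)

def gamma_curve(p, gamma):
    """Luminance model: white level times (p / 255)^(1 / gamma)."""
    return 130.0 * (p / 255.0) ** (1.0 / gamma)

(gamma_hat,), _ = curve_fit(gamma_curve, p, L_meas, p0=[0.5])
print(gamma_hat)                      # ~0.45 for this synthetic data
```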
The display experiments were conducted using this prototype display system.

5. Experimental results

To verify the display performance of the prototype display system, a 3D image of a wedge chart was displayed at depth distances ranging from 0 to 20 mm. The depth distance is positive in the direction from the lens array surface toward the viewer. Figure 7(a) shows the results of re-photographing the displayed 3D image with a digital camera placed in front of the display system. A magnified image of the tip of each wedge chart is presented at the bottom of the figure. When the synthetic 3D image is displayed at a depth distance of 0 mm, the seven lines of the wedge chart are displayed up to the vicinity of the Nyquist frequency $\omega_L$. Figure 7(b) shows the line profiles at each frequency generated from the captured 3D images. At a depth distance of 0 mm, the line profile of the synthetic 3D image at frequency $\omega_L$ has seven peaks in the center. This means that the prototype display system can display 3D images with a maximum resolution of 4K. However, the amplitude of the line profile waveform was small because the 2D image was resampled by the geometric correction process that aligns the 2D image with the integral 3D image. At a depth distance of 0 mm, seven peaks can be clearly seen in the line profiles at frequencies lower than $\omega_L$. Compared with the line profiles of the normal integral 3D image, it can be confirmed that the pixel density of the 3D image is improved. Examining the line profile of the synthetic 3D image at a depth distance of 5 mm and frequency $\psi(5)$, the amplitude of the waveform is lower than that at a depth distance of 0 mm, although seven peaks are present. The waveform peaks disappear at frequencies higher than $\psi(5)$, while seven peaks can be clearly seen at frequencies lower than $\psi(5)$. Similarly, the line profiles of the synthetic 3D image at a depth distance of 10 mm show no waveform peaks at frequencies higher than $\psi(10)$, whereas seven peaks can be clearly observed at lower frequencies. When the wedge chart was displayed at a depth distance of 20 mm, the shapes of the line profiles of the synthetic 3D image and normal integral 3D image were nearly identical. Thus, the pixel density can be improved up to the improvement-limit frequency $\psi(z)$ for each depth distance. Furthermore, the improvement in pixel density drops as the depth distance increases when compared at the same frequency. This trend is similar to the results of the simulation analysis described in Section 3, and it can be concluded that the improvement in the resolution characteristics of 3D images with the prototype display system is generally in line with theory.

Fig. 7. Results of displaying 3D image of wedge chart with prototype display system: (a) results of re-photographing 3D image and (b) line profile at each depth distance and frequency.

One difference from the simulation analysis results is that, whereas the MTFs of the synthetic 3D image and normal integral 3D image are equal at the improvement-limit frequency $\psi(z)$ in the simulation analysis, the resolution characteristics of the synthetic 3D image are slightly better than those of the normal integral 3D image at $\psi(z)$ in the prototype display system. This is because a lens array with a honeycomb arrangement was used in the prototype display system. The shape of each elemental lens is a hexagon with vertices at the top and bottom, as shown in Fig. 7(a), and each elemental image was generated with the same shape. The viewing zone of the 3D images has the same shape as the elemental image. Because the viewing zone is hexagonal, viewpoints near the horizontal center are the most numerous. In this condition, the image generation process generates elemental images and a 2D image such that the error in the luminance values of the synthetic 3D image is minimized at the central viewpoint. Consequently, the improvement in the horizontal resolution characteristics can be considered higher at the central viewpoint.

Figure 8 shows the results of generating elemental images and 2D images from two different 3D scenes to display 3D images of computer graphics (CG). In the upper 3D scene, the objects are placed such that the woman's eyes and the landscape photo are at depth distances of 0 mm and −40 mm, respectively. In the lower 3D scene, the Stanford dragon is placed on a floor with a checkered texture, with its eyes at a depth distance of 0 mm. Figure 8(a) shows the elemental images for a typical integral 3D display, while Fig. 8(b) shows the elemental images and 2D images generated with the proposed method. As shown in Fig. 8(b), in the elemental images generated by the proposed method, regions at shallow depth positions with high spatial frequencies, viz., the eyes and contours, are darkened, whereas in the generated 2D images these regions are brightened. Figure 9 shows the resulting 3D images displayed on the prototype display system using the aforementioned elemental images. When displaying the normal integral 3D image, the elemental images shown in Fig. 8(a) are displayed on the integral 3D display and a black image is displayed on the 2D display. Comparing the target light field image and the normal integral 3D image, it can be observed that the normal integral 3D image has a lower pixel density and is blurred in the deep depth region. Next, comparing the normal integral 3D image and the synthetic 3D image, the pixel density of the synthetic 3D image is higher than that of the normal integral 3D image in the shallow depth region, such as around the eyes of the woman and the dragon. The pixel density in the deep depth region of the synthetic 3D image is similar to that of the normal integral 3D image: in the deep depth region, the pixel values of the 2D image are low, and the luminance values of the integral 3D image hardly change when the 2D image is optically synthesized. No changes in perspective caused by the optical synthesis were perceived, and the synthetic 3D image was viewed at the correct depth position. Figure 10 shows the results of re-photographing the synthetic 3D image of the CG from different viewpoints. Motion parallax can be confirmed by the change in the positional relationship between the dragon and the checkered pattern. The improvement in pixel density was largely maintained even when the viewpoint was changed. From these results, it was confirmed that the prototype display system can display 3D images with an increased maximum pixel density compared with normal integral 3D images.

Fig. 8. Results of image generation: (a) elemental images for typical integral 3D display and (b) elemental images and 2D image for proposed 3D display method.

Fig. 9. Target light field images and images of re-photographing 3D images of CG displayed by prototype display system.

Fig. 10. Results of re-photographing synthetic 3D image displayed by prototype display system from different viewpoints.

6. Conclusion

We proposed a method for improving the maximum pixel density of 3D images by optically synthesizing the displayed images of an integral 3D display and a 2D display using a half mirror. Experimental results demonstrated the effectiveness of the proposed method. Because it is based on the principle of additive layered 3D displays, the proposed method can display bright 3D images with high pixel density without the use of a high-brightness backlight. One problem with the proposed method is the large size of the display system owing to the use of a half mirror. In the future, we intend to apply 3D/2D conversion techniques using special optical elements, such as liquid crystal lenses, to make the display system thinner. Another problem is the trade-off between the viewing angle and the improvement in the resolution characteristics of the 3D images, which makes it difficult to widen the viewing angle. For individual viewing, eye-tracking techniques can be applied to increase the viewing angle while maintaining the improvement in resolution characteristics. In addition, the prototype display system cannot perform real-time processing from image generation to display, because the amount of image information in the target light field image is large and the image generation process requires time; real-time processing should become possible by developing a CNN-based image generation method. We intend to improve the display characteristics of 3D images by implementing these improvements and to develop a 3D display system that is brighter, thinner, and ultrahigh-definition.

Disclosures

The authors declare no conflicts of interest.

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36(7), 1598–1603 (1997). [CrossRef]  

2. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014). [CrossRef]  

3. Y. Momonoi, K. Yamamoto, Y. Yokote, A. Sato, and Y. Takaki, “Light field Mirage using multiple flat-panel light field displays,” Opt. Express 29(7), 10406–10423 (2021). [CrossRef]  

4. N. Okaichi, H. Sasaki, M. Kano, J. Arai, M. Kawakita, and T. Naemura, “Integral three-dimensional display system with wide viewing angle and depth range using time-division display and eye-tracking technology,” Opt. Eng. 61(1), 013103 (2022). [CrossRef]  

5. B. Javidi, A. Carnicer, J. Arai, T. Fujii, H. Hua, H. Liao, M. Martínez-Corral, F. Pla, A. Stern, L. Waller, Q. H. Wang, G. Wetzstein, M. Yamaguchi, and H. Yamamoto, “Roadmap on 3D integral imaging: sensing, processing, and display,” Opt. Express 28(22), 32266–32293 (2020). [CrossRef]  

6. G. Lippmann, “Épreuves réversibles. Photographies intégrales,” C. R. Acad. Sci. 146, 446–451 (1908).

7. J. Arai, F. Okano, M. Kawakita, M. Okui, Y. Haino, M. Yoshimura, M. Furuya, and M. Sato, “Integral three-dimensional television using a 33-megapixel imaging system,” J. Disp. Technol. 6(10), 422–430 (2010). [CrossRef]  

8. M. Yamasaki, H. Sakai, K. Utsugi, and T. Koike, “High-density light field reproduction using overlaid multiple projection images,” Proc. SPIE 7237, 723709 (2009). [CrossRef]  

9. H. Watanabe, N. Okaichi, H. Sasaki, and M. Kawakita, “Pixel-density and viewing-angle enhanced integral 3D display with parallel projection of multiple UHD elemental images,” Opt. Express 28(17), 24731–24746 (2020). [CrossRef]  

10. H. Sasaki, N. Okaichi, H. Watanabe, M. Kano, M. Miura, M. Kawakita, and T. Mishina, “Color moiré reduction and resolution enhancement of flat-panel integral three-dimensional display,” Opt. Express 27(6), 8488–8503 (2019). [CrossRef]  

11. J. S. Jang and B. Javidi, “Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics,” Opt. Lett. 27(5), 324–326 (2002). [CrossRef]  

12. Y. Oh, D. Shin, B. G. Lee, S. I. Jeong, and H. J. Choi, “Resolution-enhanced integral imaging in focal mode with a time-multiplexed electrical mask array,” Opt. Express 22(15), 17620–17629 (2014). [CrossRef]  

13. K. Maruyama, K. Takahashi, and T. Fujii, “Comparison of layer operations and optimization methods for light field display,” IEEE Access 8, 38767–38775 (2020). [CrossRef]  

14. G. Wetzstein, D. Lanman, W. Heidrich, and R. Raskar, “Layered 3D: tomographic image synthesis for attenuation-based light field and high dynamic range displays,” ACM Trans. Graph. 30(4), 1–12 (2011). [CrossRef]  

15. G. Wetzstein, D. Lanman, M. Hirsch, and R. Raskar, “Tensor displays: compressive light field synthesis using multilayer displays with directional backlighting,” ACM Trans. Graph. 31(4), 1–11 (2012). [CrossRef]  

16. A. Maimone, G. Wetzstein, M. Hirsch, D. Lanman, R. Raskar, and H. Fuchs, “Focus 3D: Compressive accommodation display,” ACM Trans. Graph. 32(5), 1–13 (2013). [CrossRef]  

17. T. Saito, Y. Kobayashi, K. Takahashi, and T. Fujii, “Displaying real-world light fields with stacked multiplicative layers: requirement and data conversion for input multiview images,” J. Disp. Technol. 12(11), 1290–1300 (2016). [CrossRef]  

18. K. Takahashi, Y. Kobayashi, and T. Fujii, “From focal stack to tensor light-field display,” IEEE Trans. on Image Process. 27(9), 4571–4584 (2018). [CrossRef]  

19. K. Matsuura, K. Takahashi, and T. Fujii, “Enhancing angular resolution of layered light-field display by using monochrome layers,” IS&T Int. Symp. Electronic Imaging 33(2), 12-1–12-7 (2021). [CrossRef]  

20. S. Lee, C. Jang, S. Moon, J. Cho, and B. Lee, “Additive light field displays: realization of augmented reality with holographic optical elements,” ACM Trans. Graph. 35(4), 1–13 (2016). [CrossRef]  

21. D. Kim, S. Lee, S. Moon, J. Cho, Y. Jo, and B. Lee, “Hybrid multi-layer displays providing accommodation cues,” Opt. Express 26(13), 17170–17184 (2018). [CrossRef]  

22. N. Y. Jo, H. G. Lim, S. K. Lee, Y. S. Kim, and J. H. Park, “Depth enhancement of multi-layer light field display using polarization dependent internal reflection,” Opt. Express 21(24), 29628–29636 (2013). [CrossRef]  

23. T. Zhan, Y. H. Lee, and S. T. Wu, “High-resolution additive light field near-eye display by switchable Pancharatnam–Berry phase lenses,” Opt. Express 26(4), 4863–4872 (2018). [CrossRef]  

24. P. Y. Chou, J. Y. Wu, S. H. Huang, C. P. Wang, Z. Qin, C. T. Huang, P. Y. Hsieh, H. H. Lee, T. H. Lin, and Y. P. Huang, “Hybrid light field head-mounted display using time-multiplexed liquid crystal lens array for resolution enhancement,” Opt. Express 27(2), 1164–1177 (2019). [CrossRef]  

25. F. Jin, J. S. Jang, and B. Javidi, “Effects of device resolution on three-dimensional integral imaging,” Opt. Lett. 29(12), 1345–1347 (2004). [CrossRef]  

26. H. Hoshino, F. Okano, H. Isono, and I. Yuyama, “Analysis of resolution limitation of integral photography,” J. Opt. Soc. Am. A 15(8), 2059–2065 (1998). [CrossRef]  
