
Dual convolutional neural network for aberration pre-correction and image quality enhancement in integral imaging display

Open Access

Abstract

This paper proposes a method that utilizes a dual neural network model to address the challenges posed by aberration in the integral imaging microlens array (MLA) and the degradation of 3D image quality. The approach involves a cascaded dual convolutional neural network (CNN) model designed to handle aberration pre-correction and image quality restoration tasks. By training these models end-to-end, the MLA aberration is corrected effectively and the image quality of integral imaging is enhanced. The feasibility of the proposed method is validated through simulations and optical experiments, using an optimized, high-quality pre-corrected element image array (EIA) as the image source for 3D display. The proposed method achieves high-quality integral imaging 3D display by alleviating the contradiction between MLA aberration and 3D image resolution reduction caused by system noise without introducing additional complexity to the display system.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Among various implementation forms of light-field 3D display, integral imaging 3D display technology utilizes an MLA to capture and reconstruct 3D scene information [1–4]. By synthesizing the EIA from the element images (EIs) captured by each microlens and employing the MLA for reconstruction, integral imaging enables the creation of high-quality naked-eye 3D stereoscopic images [5,6]. This technology offers a compelling 3D display solution that requires minimal data and relatively simple equipment, making it highly attractive in scientific and industrial arenas. However, using an MLA in the display process introduces inherent aberrations that severely compromise image quality [7]. Consequently, this leads to issues such as 3D image blur and quality degradation [8,9], which need to be addressed to enhance the imaging system's effectiveness.

To mitigate the aberrations caused by microlenses in integral imaging 3D display, optical design [10–14] and image correction methods [15–19] are commonly employed. For instance, [10] uses the third-order aberration values across the entire field of view to establish a function that constrains the initial structure of the optical system and corrects the lens aberrations over the whole field of view. Another method [11] introduces a free-form surface design into the optical system to eliminate wavefront aberrations based on the human eye. However, incorporating complex optical components into MLAs poses significant fabrication challenges and system limitations. To address this, researchers have proposed algorithmic correction techniques. For example, [16] uses electrically adjustable achromatic lenses driven by reinforcement learning to correct chromatic aberrations, while another study [14] develops an algorithm that optimizes the structural function of optical systems based on the relationship between sub-aberration coefficients and zoom system optimization. Furthermore, researchers have increasingly adopted deep learning to enhance image quality in light-field displays [20–25]. For instance, the method proposed in [25] employs a pre-correction CNN to reduce image aberrations, and the approach in [24] combines pre-filtering and joint pre-correction with a CNN according to changes in aberration severity. However, when applied to MLA imaging, pre-correction of individual EIs still suffers from significant edge aberration, and aberration pre-correction introduces additional system noise to a certain extent [7], resulting in a severe loss of 3D image resolution. As a remedy, researchers have proposed deep-learning solutions for addressing image quality issues [26,27].

This paper presents a method for enhancing image quality by utilizing a dual neural network model to address the trade-off between MLA aberration and the image resolution reduction caused by system noise. The approach employs two cascaded CNN models that handle the two tasks of aberration pre-correction and image quality restoration and are jointly trained end-to-end. Using the resulting high-quality pre-corrected EIA as the image source for integral imaging 3D display, the feasibility of the method is validated through simulation and optical experiments. By mitigating the conflict between MLA aberration and 3D image resolution reduction caused by system noise, the proposed approach achieves high-quality integral imaging 3D display without increasing the complexity of the display system.

2. Method

In the conventional integral imaging 3D display process, the aberration of the MLA often degrades the quality of the 3D images. Previous approaches have suggested using pre-filters or CNN-based pre-correction methods to mitigate the impact of image aberration. However, these pre-correction methods cannot eliminate the inherent system noise of the microlens. To address this inherent trade-off, a new approach using a dual CNN model for enhancing image quality is proposed. This method aims to reconcile the conflict between aberration and resolution degradation in integral imaging displays.

Figure 1 illustrates the overall schematic diagram of the proposed method, which involves four key steps: (1) EI digital collection; (2) aberration pre-correction of the MLA; (3) image quality restoration of the pre-corrected EIs; (4) optical reconstruction and 3D display. During the digital acquisition stage, a virtual camera array is used to capture the EIs. In the aberration pre-correction stage, the imaging process of each EI is simulated by calculating the transfer function of the individual microlens. To build a lightweight aberration pre-correction CNN model, this step defines a loss function that minimizes the intensity-distribution error between the original EI and the imaged EI, and the model is optimized using the stochastic gradient descent (SGD) algorithm. In the image quality restoration stage, the present approach constructs a CNN model whose input is the dimensionally stretched pre-corrected EIs together with their corresponding noise levels. Multi-task end-to-end training is performed by connecting the two network models in series. Finally, in the optical reconstruction and 3D display stage, the image-quality-enhanced EIA is loaded onto the LCD to achieve a high-quality 3D integral imaging display.

Fig. 1. Schematic diagram of the proposed method.

2.1 EI imaging simulation based on a single microlens

The optical reconstruction stage of the 3D display utilizes an MLA. However, the inherent optical aberration of the MLA causes the actual light wavefront reaching the observation plane to deviate from the ideal reference wavefront, degrading the quality of the displayed image. To account for the optical aberration and apply aberration pre-correction to the EIs, the imaging process of each EI is simulated by calculating the transfer function of a single microlens, as shown in Fig. 2(a). Taking one microlens unit within the MLA as an example, the wave aberration function is used to model the extent to which the EI is affected by the aberration of the MLA [28,29]. The Zernike polynomial in polar coordinates is employed in this simulation to characterize wavefront aberrations, as indicated in Eq. (1).

$$W(H,\rho) = \sum_{k}\sum_{l}\sum_{m} W_{klm}\,H^{k}\rho^{l}\cos^{m}\varphi$$
where $W_{klm}$ represents the wave aberration coefficients of different orders, $H$ is the normalized field coordinate in the $y$ direction, $\rho$ is the normalized pupil coordinate $\rho = (\rho_x, \rho_y)$ with $\rho_x = \rho\sin\varphi$ and $\rho_y = \rho\cos\varphi$, and $\varphi$ is the azimuth angle of the aperture coordinates. Since a single microlens unit is a rotationally symmetric system, the wave aberration depends only on the rotational invariants $H^{2}$, $\rho^{2}$, and $H\rho\cos\varphi$. Therefore, the above expression for the wave aberration can be simplified as follows.
$$W(H,\rho,\varphi) = \sum_{p}\sum_{n}\sum_{m} W_{klm}\,H^{k}\rho^{l}\cos^{m}\varphi$$
$$k = 2p + m$$
$$l = 2n + m$$

Fig. 2. Polar coordinate system divides the field of view of a single EI into equidistant ring regions. (a) Imaging of a single EI. (b) Field of view division of EI.

By treating the aberration of a single microlens as Seidel aberration, Eq. (5) is obtained by expanding Eq. (2) to fourth-order wave aberration.

$$W(H,\rho,\varphi) = W_{040}\rho^{4} + W_{131}H\rho^{3}\cos\varphi + W_{222}H^{2}\rho^{2}\cos^{2}\varphi + W_{220}H^{2}\rho^{2} + W_{311}H^{3}\rho\cos\varphi + W_{400}H^{4}$$
where the wave aberration coefficients $W_{040}$, $W_{131}$, $W_{222}$, $W_{220}$, and $W_{311}$ represent spherical aberration, coma, astigmatism, field curvature, and distortion, respectively, expressed in units of the wavelength λ, and $W_{400}H^{4}$ is a field-dependent piston term that is independent of the pupil coordinates. For simplicity, since field curvature and distortion do not affect the lens transfer function, only primary spherical aberration, coma, and astigmatism need to be considered in the aberration pre-correction method adopted here. Therefore, the wavefront aberration of the microlens can be further simplified to Eq. (6).
$$W(H,\rho,\varphi) = W_{040}\rho^{4} + W_{131}H\rho^{3}\cos\varphi + W_{222}H^{2}\rho^{2}\cos^{2}\varphi + W_{220}H^{2}\rho^{2} + W_{400}H^{4}$$
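For concreteness, the simplified wavefront of Eq. (6) can be evaluated numerically over a normalized pupil grid. The following Python sketch is illustrative only; the coefficient values are placeholders and are not taken from the paper.

```python
import numpy as np

def seidel_wavefront(H, rho, phi, W040, W131, W222, W220, W400):
    """Simplified fourth-order wave aberration of Eq. (6), in units of the wavelength.

    H   : normalized field coordinate
    rho : normalized pupil radius (0..1)
    phi : pupil azimuth angle in radians
    """
    return (W040 * rho**4                              # primary spherical aberration
            + W131 * H * rho**3 * np.cos(phi)          # coma
            + W222 * H**2 * rho**2 * np.cos(phi)**2    # astigmatism
            + W220 * H**2 * rho**2                     # field curvature
            + W400 * H**4)                             # field-dependent piston

# Example: sample the wavefront at a fixed field point (placeholder coefficients, in waves).
rho, phi = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 2 * np.pi, 128), indexing="ij")
W = seidel_wavefront(0.5, rho, phi, W040=0.25, W131=0.10, W222=0.05, W220=0.05, W400=0.0)
```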

The incident light wave from the LCD plane is regarded as a set of discrete point light sources. To ensure that the aberrations at different field positions have similar image-quality characteristics, and to make the division of the field of view of the rotationally symmetric microlens more uniform, a polar coordinate system is used to divide the field of view of a single EI into equidistant ring regions, as shown in Fig. 2(b). The aberration of each region is characterized by the polar radius and angle of the center point of its sector ring region; for example, the field of view at point M characterizes the field of view of its sector ring region. That is, the wave field distribution of each small region, represented by its incident point, is

$$E(\rho,\varphi,H=0) = \exp\!\left(j\frac{k}{2g}\rho^{2}\right)$$

Propagation from the exit pupil plane to the display plane of the microlens is described by the Fresnel-Kirchhoff diffraction theory, which can be expressed by

$$U(\rho',\varphi',z) = \frac{g}{l}\int_{0}^{1}\!\!\int_{0}^{2\pi} \exp\!\left(j\frac{k}{2g}\rho^{2}\right)\exp\!\left[jW(H,\rho,\varphi)\right]\exp\!\left(jk\frac{\rho^{2}}{2z}\right)\exp\!\left[-jk\frac{\rho\rho'\cos(\varphi-\varphi')}{z}\right]\rho\,d\rho\,d\varphi$$
where $\rho'$ and $\varphi'$ denote the radial and angular coordinates during light propagation, $l$ is the distance from the LCD to the MLA, $g$ is the distance from the MLA to the display plane, and $z$ is the propagation distance, all measured in mm. According to Fourier optics theory [30], the transfer function of the imaging region of the EI after the microlens transformation is
$$\mathrm{PSF}(\rho,\varphi) = \frac{1}{\lambda^{2}g^{2}}\left|\iint p(r,\theta)\exp\!\left[-j\frac{2\pi}{\lambda}W(r,\theta)\right]\exp\!\left[-j2\pi\rho r\cos(\varphi-\theta)\right]r\,dr\,d\theta\right|^{2}$$
where $p(r,\theta)$ represents the exit pupil function of the microlens. To ensure that the aberration at the microlens edge can be effectively corrected, $p(r,\theta) = 1$ is taken over the whole aperture. λ represents the wavelength in nm. In the optical simulation, the superposition of the PSFs at three wavelengths (blue: 486 nm, green: 587 nm, red: 656 nm) is used to describe the intensity of a single object point. Therefore, the optical simulation of the EI with aberrations in the display plane can be expressed as
$$\mathrm{SEI} = \mathrm{IEI} \ast \mathrm{PSF}(\rho,\varphi)$$
where SEI represents the intensity distribution of EI in the display plane after aberration optical simulation, IEI represents the initial intensity distribution of EI, and * represents the convolution operation.
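A minimal numerical sketch of Eqs. (9)–(10) is given below. It approximates the polar-coordinate Fourier integral of Eq. (9) by a 2D FFT of the generalized pupil function on a Cartesian grid; the function names, grid sampling, and cropping choices are assumptions made for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def microlens_psf(wavefront_waves, pupil_mask, out_size=64):
    """Approximate the incoherent PSF of Eq. (9) as |FFT of the generalized pupil function|^2.

    wavefront_waves : wave aberration W sampled over the pupil grid, in units of the wavelength
    pupil_mask      : array of the same shape, 1 inside the exit pupil and 0 outside (p = 1)
    """
    gpf = pupil_mask * np.exp(-1j * 2 * np.pi * wavefront_waves)   # generalized pupil function
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(gpf)))
    psf = np.abs(field) ** 2
    psf /= psf.sum()                                               # normalize total energy
    c = psf.shape[0] // 2
    return psf[c - out_size // 2:c + out_size // 2, c - out_size // 2:c + out_size // 2]

def simulate_ei(ei_rgb, psf_rgb):
    """Eq. (10): SEI = IEI * PSF, applied per color channel (486/587/656 nm PSFs)."""
    return np.stack([fftconvolve(ei_rgb[..., c], psf_rgb[c], mode="same")
                     for c in range(3)], axis=-1)
```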

2.2 Lightweight aberration pre-correction CNN

Based on the analysis in the previous section, the imaging of an original EI by a microlens with aberrations can be simulated as the convolution of the original EI with the PSF, as described in Eq. (9)–(10). To correct the aberrations introduced by the MLA, a virtual camera array first captures the ideal aberration-free original EI based on the pinhole model. The SGD method is then used to correct the aberrations introduced by the MLA, as illustrated in Fig. 3. The initial EI is used as the input to the iterative algorithm, which updates and optimizes the initial EI by backpropagation so as to minimize the image difference between the original EI and the simulated EI. This iterative process yields a pre-corrected EI whose imaging effect is closest to the original EI. To assess the image difference, a loss function comprising the weighted sum of the Structural Similarity Index (SSIM) and the Mean Squared Error (MSE) is employed, as given in Eq. (11).

$$\mathrm{loss}_{abe} = \omega\,\mathrm{loss}_{ssim} + (1-\omega)\,\mathrm{loss}_{mse}$$

Fig. 3. Flow chart of SGD aberration pre-correction method.

where $\mathrm{loss}_{abe}$ represents the aberration pre-correction loss function of the SGD iteration, and $\mathrm{loss}_{ssim}$ and $\mathrm{loss}_{mse}$ represent the SSIM and MSE loss terms, expressed by Eq. (12)–(13), respectively. $\omega$ denotes the weight of the two loss terms and can be set between 0 and 1; the optimization gives the best result when $\omega$ is set to 0.8. In Eq. (12), $n$ represents the total number of image pixels, $Ys_j$ denotes the $j$th pixel value of the aberration-simulated EI, and $Yi_j$ denotes the $j$th pixel value of the original EI. In Eq. (13), $\mu_s$ and $\mu_i$ represent the pixel means of the aberration-simulated EI and the original EI, $\sigma_s$ and $\sigma_i$ their pixel standard deviations, $\sigma_{si}$ their pixel covariance, and $C_1$ and $C_2$ are two constants that prevent the denominator from becoming zero.

$$\mathrm{loss}_{mse} = \frac{1}{n}\sum_{j = 1}^{n}(Ys_{j} - Yi_{j})^{2}$$
$$\mathrm{loss}_{ssim} = \frac{(2\mu_{s}\mu_{i} + C_{1})(2\sigma_{si} + C_{2})}{(\mu_{s}^{2} + \mu_{i}^{2} + C_{1})(\sigma_{s}^{2} + \sigma_{i}^{2} + C_{2})}$$
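As an illustration, a PyTorch sketch of the weighted loss of Eqs. (11)–(13) is given below. Note that Eq. (13) defines the SSIM itself; the sketch assumes the commonly used form 1 − SSIM as the term to be minimized, which is an assumption rather than a statement of the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Single-window SSIM of Eq. (13) between two tensors scaled to [0, 1]."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(unbiased=False), y.var(unbiased=False)
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)
            / ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

def aberration_loss(simulated_ei, original_ei, w=0.8):
    """Eq. (11) with w = 0.8; the SSIM term is taken as (1 - SSIM) so that lower is better."""
    loss_mse = F.mse_loss(simulated_ei, original_ei)        # Eq. (12)
    loss_ssim = 1.0 - ssim_global(simulated_ei, original_ei)
    return w * loss_ssim + (1 - w) * loss_mse
```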

However, using a simple SGD optimization method may trap the pre-corrected image in a local optimum. To address this issue and enhance the generalization capability of the iterative algorithm, a lightweight aberration pre-correction CNN is introduced to extract features during the SGD iterations. The CNN model consists of two convolutional layers and one deconvolution layer, as shown in Fig. 4(b). The process of the aberration pre-correction method incorporating the lightweight CNN is depicted in Fig. 4(a). The intensity-distribution error between the characteristic image of the original EI and the simulated EI is calculated using the loss function of Eq. (11), and backpropagation is then performed by minimizing this first loss function.

Fig. 4. (a) Flowchart of lightweight aberration pre-correction CNN algorithm. (b) Schematic diagram of lightweight CNN structure.

Subsequently, the original EI and simulated EI can be optically reconstructed separately. In order to bring the pre-corrected EI closer to the desired EI image for the real 3D scene, the method evaluates the discrepancy in intensity distribution between the original reconstructed image and the simulated reconstructed image, using another loss function based on Eq. (11). The second loss function is updated to correct the aberration of the entire MLA, rather than a single microlens. Backpropagation is performed again by minimizing the value of the second loss function.

In the SGD optimization process, the proposed approach uses the Adam optimizer to update the feature image of the original EI. Adam is an adaptive learning rate method that combines momentum with per-parameter adaptive learning rates, estimating the adaptive rate from the first-order moment (mean) and the second-order moment (variance) of the gradient. Its strong convergence and generalization performance enhance the capability of the algorithm. Finally, by setting an acceptable error threshold, the aberration pre-corrected EI can converge towards the global optimal solution after SGD optimization.
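The optimization loop of Figs. 3 and 4(a) might look like the following PyTorch sketch, which reuses the aberration_loss helper above. The lightweight CNN (assumed to output a 3-channel image), the PSF kernel layout, the step count, and the learning rate are assumptions for illustration; the sketch treats the pre-corrected EI itself as an optimizable tensor alongside the CNN weights.

```python
import torch
import torch.nn.functional as F

def precorrect_ei(original_ei, psf_kernel, lightweight_cnn, steps=300, lr=1e-2, w=0.8):
    """Iterative aberration pre-correction (Figs. 3 and 4(a)), driven by the Adam optimizer.

    original_ei     : ideal EI tensor of shape (1, 3, H, W), values in [0, 1]
    psf_kernel      : fixed per-channel PSF tensor of shape (3, 1, k, k), k odd
    lightweight_cnn : the two-convolution / one-deconvolution pre-correction CNN (3-channel output)
    """
    pre_ei = original_ei.clone().requires_grad_(True)          # image being optimized
    optimizer = torch.optim.Adam([pre_ei] + list(lightweight_cnn.parameters()), lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        feature = lightweight_cnn(pre_ei)                      # feature extraction inside the loop
        simulated = F.conv2d(feature, psf_kernel, padding=psf_kernel.shape[-1] // 2, groups=3)
        loss = aberration_loss(simulated, original_ei, w)      # first loss function, Eq. (11)
        loss.backward()
        optimizer.step()
        pre_ei.data.clamp_(0, 1)                               # keep the EI a displayable image
    return pre_ei.detach()
```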

2.3 Image quality restoration CNN

During the pre-correction of MLA aberrations in the original EI, the introduction of a lightweight aberration pre-correction CNN helps mitigate the impact of MLA aberration. Still, experimental results indicate that this process does not eliminate the inherent system noise of the microlens, which reduces resolution and compromises the image quality of the 3D display. To tackle this challenge, this study proposes an image quality restoration CNN for the pre-corrected EI, which aims to address the introduced noise and restore the image quality to its optimal state. First, a degradation model for the EI must be established. Typically, the decrease in resolution can be modeled by a bicubic downsampling operation, as illustrated in Eq. (14):

$$LR = (H{R_{ \downarrow s}}) \otimes K + HN$$

In this equation, LR represents a low-resolution image that has undergone image quality degradation, HR represents a high-resolution image that has not experienced any degradation, ↓s symbolizes the downsampling operation with a scale factor of s, K denotes the blur kernel causing the degradation, ${\otimes}$ signifies the convolution operation between the original image and the blur kernel, and HN represents the inherent noise level of the microlens.
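A minimal sketch of this degradation model, following the ordering written in Eq. (14) (downsampling, then blur, then additive noise), is shown below; the kernel shape and noise level are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def degrade(hr, blur_kernel, scale=2, noise_sigma=0.01):
    """Synthesize a degraded EI following Eq. (14): bicubic downsampling, blur, additive noise.

    hr          : clean image tensor of shape (1, C, H, W)
    blur_kernel : per-channel kernel of shape (C, 1, k, k), k odd
    """
    lr = F.interpolate(hr, scale_factor=1.0 / scale, mode="bicubic", align_corners=False)
    lr = F.conv2d(lr, blur_kernel, padding=blur_kernel.shape[-1] // 2, groups=hr.shape[1])
    return lr + noise_sigma * torch.randn_like(lr)             # HN: inherent microlens noise
```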

The underlying principle of image quality restoration based on this degradation model is to simulate the degradation by convolving a given original image with a blur kernel, applying bicubic downsampling with a specific scale factor, and adding an intrinsic noise level to synthesize the degraded image. A nonlinear mapping is then established using a CNN to perform image restoration from the degraded image back to the undamaged image. The CNN adopts a loss function for image quality restoration, which can be expressed as follows, in which $sn$ represents the number of LR/HR patch pairs in each epoch, $f(Y_{HR_i})$ represents the intensity value of the $i$th degraded high-resolution patch, and $Y_{LR_i}$ denotes the intensity value of the corresponding low-resolution patch.

$$\mathrm{loss}_{res} = \frac{1}{s^{2}n^{2}}\sum_{i = 1}^{sn}\left(f(Y_{HR_i}) - Y_{LR_i}\right)^{2}$$

To obtain the input fusion matrix of the CNN, given an input pre-corrected EI of size N × N × C and a noise level of size p × p, the noise level is first reshaped into a one-dimensional vector of size p² × 1. This vector is then projected into a t-dimensional space using Principal Component Analysis (PCA). Subsequently, it is expanded to the size of N × N × 1 using the corresponding multiplicative factor. Thus, the input fusion matrix of the image quality restoration CNN is a multidimensional matrix of size N × N × (C + 1).
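A sketch of this fusion step is shown below. The PCA basis is assumed to have been learned offline, and collapsing the t-dimensional projection to a single stretched N × N × 1 map via its mean is an illustrative assumption, since the paper does not spell out the multiplicative factor.

```python
import numpy as np

def build_fusion_input(pre_ei, noise_kernel, pca_basis):
    """Build the N x N x (C+1) fusion matrix described above.

    pre_ei       : pre-corrected EI of shape (N, N, C)
    noise_kernel : p x p noise-level map
    pca_basis    : t x p^2 PCA projection matrix learned offline (assumed available)
    """
    v = noise_kernel.reshape(-1, 1)                    # reshape to p^2 x 1
    coeffs = pca_basis @ v                             # project into the t-dimensional space
    level = float(coeffs.mean())                       # scalar summary used for the stretch (assumption)
    n = pre_ei.shape[0]
    noise_map = np.full((n, n, 1), level, dtype=pre_ei.dtype)   # expand to N x N x 1
    return np.concatenate([pre_ei, noise_map], axis=-1)         # N x N x (C + 1)
```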

Figure 5 illustrates the schematic of the image quality restoration CNN model. After the initial two convolutional layers, which are responsible for feature extraction and shrinking of the input, the CNN uses a series of cascaded mapping units to establish nonlinear mapping relationships. Each mapping unit consists of a convolutional layer with a 3 × 3 kernel and a batch normalization layer. In the final part of the CNN, a sub-pixel convolutional layer is employed after a deconvolution layer to upscale the input by a scale factor of s. This operation redistributes the information of each pixel to multiple pixels while keeping the number of output channels equal to the number of input channels: feature maps of size N × N × s²C are rearranged into a single EI of size sN × sN × C, so the output image is restored to an EI of size sN × sN × C.
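The upscaling tail described above can be expressed with PyTorch's built-in pixel-shuffle layer, as in the sketch below; the feature width of 64 and the scale factor of 2 are assumptions.

```python
import torch.nn as nn

class SubPixelTail(nn.Module):
    """Tail of the restoration CNN: deconvolution followed by sub-pixel (pixel-shuffle) upscaling."""
    def __init__(self, features=64, channels=3, scale=2):
        super().__init__()
        self.deconv = nn.ConvTranspose2d(features, channels * scale ** 2, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)   # rearranges (C*s^2, N, N) -> (C, sN, sN)

    def forward(self, x):                       # x: feature maps of shape (B, features, N, N)
        return self.shuffle(self.deconv(x))
```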

Fig. 5. Schematic diagram of image quality restoration CNN.

2.4 Joint training of dual neural network model

In order to accomplish the tasks of aberration pre-correction and image quality restoration, the present method builds a series dual CNN model by sequentially connecting the above-mentioned CNNs. Figure 6 illustrates the schematic diagram of the proposed algorithm. An original input EI is first processed by the lightweight aberration pre-correction CNN; a random blur noise level is then introduced into the output, which is passed to the image quality restoration CNN for further refinement. The output of the image quality restoration CNN is the final result of the approach. This series dual CNN model can efficiently address both aberration pre-correction and image quality restoration, thereby improving overall performance.
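The series connection can be expressed as a single PyTorch module, as sketched below; the two sub-networks and the channel layout of the noise-level map are assumed to be defined as in the preceding sketches.

```python
import torch
import torch.nn as nn

class DualCNN(nn.Module):
    """Series connection of the two sub-networks (Fig. 6); both are assumed defined elsewhere."""
    def __init__(self, precorrection_cnn, restoration_cnn):
        super().__init__()
        self.precorrection = precorrection_cnn
        self.restoration = restoration_cnn

    def forward(self, ei, noise_map):
        pre = self.precorrection(ei)                    # aberration pre-correction stage
        fused = torch.cat([pre, noise_map], dim=1)      # append the blur/noise level channel
        return self.restoration(fused)                  # image quality restoration stage
```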

Fig. 6. Schematic diagram of the proposed dual CNN model algorithm.

This method adopts an end-to-end joint training scheme for the dual CNN model, in which the two neural networks are trained and optimized as a whole to jointly complete the two tasks. This training scheme aims to achieve complete end-to-end learning directly from input to output, with the two interdependent networks learning and adjusting their internal weights and parameters together. During joint training, the two networks share a common backpropagation pass for gradient updates and parameter adjustments so as to maximize the performance of the entire dual network structure. The loss functions of Eq. (11) and Eq. (15) are weighted with the coefficient α to obtain the joint loss function, as shown in Eq. (16). During backpropagation, the proposed method uses the loss function of the series dual CNN model to update the feature parameters of both networks simultaneously for end-to-end joint training.

$$\mathrm{loss}_{total} = \alpha\,\mathrm{loss}_{abe} + (1-\alpha)\,\mathrm{loss}_{res}$$
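A single joint update step might look like the sketch below: one backward pass through the weighted loss of Eq. (16) updates both sub-networks. The restoration term is written here as a plain MSE in the spirit of Eq. (15), and the value of α is an assumption since it is not reported.

```python
import torch
import torch.nn.functional as F

def joint_train_step(model, optimizer, ei, noise_map, ideal_ei, hr_target, alpha=0.5):
    """One end-to-end joint update of the DualCNN sketch: both networks receive gradients
    from the shared loss of Eq. (16)."""
    optimizer.zero_grad()
    pre = model.precorrection(ei)
    out = model.restoration(torch.cat([pre, noise_map], dim=1))
    loss_abe = aberration_loss(pre, ideal_ei)          # Eq. (11), from the earlier sketch
    loss_res = F.mse_loss(out, hr_target)              # restoration term in the spirit of Eq. (15)
    loss = alpha * loss_abe + (1 - alpha) * loss_res   # Eq. (16)
    loss.backward()
    optimizer.step()
    return loss.item()
```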

By jointly training these two neural networks, performance indicators can be optimized across the entire system, rather than individually optimizing the performance of each network. This means that they learn together how to best work together to achieve the best results for the entire dual neural network structure. Hence, in order to ensure comprehensive and accurate results, as well as to obtain a genuine evaluation of the proposed method, it is necessary to assess and analyze the overall performance of the entire system.

This study uses the Microsoft COCO2014 dataset and selects a total of 80,000 images from it to train the dual neural network model. The COCO2014 dataset is a widely adopted large-scale image dataset in computer vision, containing diverse real-life scenes with common objects and their contextual information. To prepare the dataset, the images are first resized to a resolution of 104 × 104, which serves as the input for the neural network. The large volume of the input dataset guarantees good image diversity and exposes the network to a wide range of image features, which helps prevent it from becoming overly sensitive to specific training samples. The proposed network model is implemented in PyTorch, and the training and testing have been conducted on NVIDIA RTX 3060 GPUs.

During the training stage, regularization with weight decay is adopted to ensure the stability of the model. The Adam optimizer is employed to optimize the network's loss function and update the weight parameters. The training process uses a batch size of 128 and an initial learning rate of 1 × 10⁻³. After training for 360 epochs, the learning rate drops to 1 × 10⁻⁴, and the training error no longer decreases. This learning-rate schedule allows finer adjustments in the later stages of training, preventing sudden large changes that may lead to overfitting. At this point, the loss function can be considered to be approaching convergence, and the network model corresponding to this training checkpoint is preserved. Figure 7 depicts the loss curve versus the number of epochs.
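The reported settings (Adam with weight decay, batch size 128, learning rate dropping from 1 × 10⁻³ to 1 × 10⁻⁴ around epoch 360) could be configured as in the sketch below; the weight-decay value, total epoch count, and milestone-based scheduler are assumptions, and dual_cnn, train_loader, and joint_train_step refer to the earlier sketches.

```python
import torch

# dual_cnn and train_loader are assumed to be constructed elsewhere (DataLoader with batch_size=128
# over the resized COCO images); the weight-decay value is an assumption.
optimizer = torch.optim.Adam(dual_cnn.parameters(), lr=1e-3, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[360], gamma=0.1)  # 1e-3 -> 1e-4

for epoch in range(400):
    for ei, noise_map, ideal_ei, hr_target in train_loader:
        joint_train_step(dual_cnn, optimizer, ei, noise_map, ideal_ei, hr_target)
    scheduler.step()
```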

Fig. 7. Loss function curve of the proposed dual CNN model.

This study utilizes a series dual CNN model to address the challenges of microlens aberration correction and image quality enhancement. First, a lightweight aberration pre-correction CNN is utilized for correcting microlens aberrations, followed by an image quality restoration CNN to enhance the overall image quality and resolution. This approach effectively tackles the issue of degraded image resolution caused by inherent microlens noise after aberration correction, resulting in enhanced EIA with improved image quality. Furthermore, the end-to-end training of the interconnected network models significantly improves training efficiency. By sharing the feature parameters between the aberration correction and resolution enhancement networks, both tasks are jointly trained, reducing the number of parameters that require updates and enhancing the correlation between these two tasks. This holistic training approach contributes to optimized performance. The reconstructed EIA obtained through the dual neural network model is loaded onto the display panel, which serves as the display image source and utilizes an MLA for fast and high-quality integral imaging 3D display. The method achieves microlens aberration pre-correction and image quality enhancement through a practical dual neural network framework, enabling advances in integral imaging 3D display capabilities.

3. Simulation and experiment

3.1 Simulation and image quality evaluation

The proposed method employs the same optical parameters of MLA for both simulation and experiment, aiming to achieve accurate validation of its effectiveness. The MLA has a lens pitch of 7.47 mm and includes 37 × 21 microlenses, each with a refractive index of 1.49 and a focal length of 29.5 mm. The simulation process involves calculating the convolution between EI and the PSF associated with a specific angle of a single microlens to simulate the imaging process. Subsequently, the individually simulated imaging units are concatenated based on their respective angular positions to generate the final simulated image. This approach provides a comprehensive perspective on the imaging performance, as it incorporates the contributions and interactions of each individual microlens within the system. Taking the 0 degree and 10 degree viewing angles as examples, Fig. 8 shows the PSF matrix of the microlens calculated by Eq. (9).
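The per-microlens simulation described above (convolving each EI with the PSF of its viewing angle and tiling the results into the full image) can be sketched as follows, reusing simulate_ei from the earlier sketch; the dictionary-based bookkeeping is an illustrative assumption.

```python
import numpy as np

def simulate_eia(eis, psfs, rows=21, cols=37):
    """Tile per-microlens simulations into the full simulated EIA (Section 3.1).

    eis  : dict mapping (row, col) -> EI array of shape (h, w, 3)
    psfs : dict mapping (row, col) -> list of three PSFs for that lens/viewing angle
    """
    tiled_rows = []
    for r in range(rows):
        row = [simulate_ei(eis[(r, c)], psfs[(r, c)]) for c in range(cols)]  # per-lens convolution
        tiled_rows.append(np.concatenate(row, axis=1))
    return np.concatenate(tiled_rows, axis=0)
```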

Fig. 8. PSF matrix of microlens at (a) 0-degree, (b) 10-degree.

In order to evaluate the performance of the proposed method in correcting MLA aberrations and minimizing crosstalk, the observing range is expanded in the simulation by adjusting the distance between the display screen and the MLA, and the simulation results reveal potential instances of crosstalk. A simulation angle range of −10° to 10° is selected to capture a comprehensive range of viewing angles and to expose the potential negative effects of crosstalk and aberration for analysis and assessment.

Figure 9(a)-(c) and Fig. 9(d)-(f) present a comparative analysis between the numerical simulation results and ideal display results for 3D models with horizontal and vertical parallax, both before and after dual neural network processing. The parallax effects are demonstrated from −10° to 10° in horizontal and vertical directions. Specifically, Fig. 9(a) and Fig. 9(d) exhibit the display results of the simulations under ideal conditions, while Fig. 9(b) and Fig. 9(e) demonstrate the display results with simulated MLA aberration damage. Finally, Fig. 9(c) and Fig. 9(f) illustrate the display results after the proposed method is applied for aberration correction. The corresponding SSIM is provided on the simulated images. To evaluate the generalization capability of the dual CNN model pre-correction method and the quality of the corrected images, simulations on various 3D models have been conducted. Figure 10 showcases the simulation results for three models: flowers, tanks, and dinosaurs. Specifically, Fig. 10(a)-(c) present the simulation images of the different models under ideal conditions, aberration damage, and aberration pre-correction, respectively. The PSNR for each model at a 0° angle of view is calculated and included as a label on its corresponding simulation image.

Fig. 9. Simulation results of (a) ideal image with a horizontal parallax, (b) uncorrected image with a horizontal parallax, (c) dual CNN corrected image with horizontal parallax, (d) ideal image with vertical parallax, (e) uncorrected image with vertical parallax, (f) dual CNN corrected image with vertical parallax.

Fig. 10. Simulation results of (a) ideal condition, (b) uncorrected images, and (c) dual CNN corrected images under three models of flowers, tanks, and dinosaurs.

In order to obtain more information and observe potential degradation of image quality at the edges, images are simulated under an expanded viewing range. The simulation results indicate that expanding the viewing range may introduce unnecessary crosstalk into the display system; this problem can be addressed through further algorithmic optimization or hardware constraints in optical experiments.

To compare the method with existing approaches for MLA aberration pre-correction in integral imaging 3D displays, the performance of the pre-filter method [23] and the single CNN pre-correction method [24] is evaluated through numerical simulations of the flower model. The performance of the proposed method is assessed by comparing its output to the theoretically calculated ideal image, free from aberrations and noise, using three performance indicators commonly used in image processing: SSIM, PSNR, and VIF. These indicators quantify the disparity between the processed image and the ideal image, with smaller disparities indicating greater restoration. The numerical values of the evaluation indicators demonstrate the improvement in image quality achieved by the proposed method [32–38].
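For reference, SSIM and PSNR against the ideal image can be computed with scikit-image as in the sketch below (recent versions expose the channel_axis argument); VIF is not included in scikit-image and would require a separate IQA implementation.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_against_ideal(processed, ideal):
    """SSIM and PSNR of a processed view against the aberration- and noise-free ideal image,
    with both images scaled to [0, 1]."""
    ssim = structural_similarity(ideal, processed, channel_axis=-1, data_range=1.0)
    psnr = peak_signal_noise_ratio(ideal, processed, data_range=1.0)
    return ssim, psnr
```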

Table 1 summarizes the three image quality evaluation metrics, namely SSIM, PSNR, and VIF [31], at the central perspective for each method. Figure 11(a)-(c) present a comparative analysis of the SSIM, PSNR, and VIF of the three methods across different perspectives, shown as histograms. Furthermore, for the image pre-corrected by the dual neural network relative to the uncorrected image, the line charts in Fig. 11, expressed as percentage increments, illustrate the improvement in the performance metrics. By comparing the SSIM, PSNR, and VIF of the three methods and their incremental percentages, the degree of improvement in image quality, noise removal, and human visual perception achieved by the proposed method over the filtering method and the single neural network method can be quantified. Because the EI is not symmetric across the different perspectives of the flower model, the incremental improvement in the performance indicators varies with the view. The graph clearly shows that the proposed method provides significant improvements in SSIM, PSNR, and VIF, and the image quality over the −10° to 10° viewing angle range outperforms that of the uncorrected image and the images corrected by the other two methods. These results underscore the effectiveness of the approach in aberration correction and image quality improvement.

Fig. 11. Comparison curves of performance indicators (a) SSIM, (b) PSNR, (c) VIF under different perspectives using different methods.

Table 1. Image quality using different methods

The simulation results demonstrate the detrimental effect of MLA aberration on the display quality. In comparison, the proposed dual neural network pre-correction method outperforms other approaches in terms of accurate reconstruction. The method effectively enhances image quality by correcting the aberration and eliminating noise to preserve high resolution. This method effectively resolves the conflict between aberration and resolution in the displayed EIA, thereby elevating the overall quality of the 3D image.

3.2 Experimental results

The experimental setup shown in Fig. 12(a) verifies the feasibility of the proposed method through optical experiments. A 4K liquid crystal display (LCD) screen is used as the display panel for loading the EIAs, where the resolution of a single EI is 104 × 104. The EIAs with and without pre-correction by the dual neural network model proposed in this paper are shown in Fig. 12(b) and Fig. 12(c), respectively. The LCD screen has a resolution of 3840 × 2160 and measures 12.5 inches, with a pixel density of 352 PPI and a pixel size of 0.072 mm. The MLA consists of 37 × 21 microlenses with a lens pitch of 7.47 mm. Each individual microlens has a focal length of 29.5 mm and a refractive index of 1.49. The distance separating the display panel from the MLA is set to 36 mm, and the viewing distance for observers is 1 m.

Fig. 12. (a) Optical experimental device of the integral imaging display system. (b) EIA corrected with dual CNN. (c) EIA with uncorrected aberration.

In Fig. 13(a)-(c), the top row exhibits uncorrected 3D images captured from various viewing angles, while the bottom row shows the corresponding 3D images improved by the dual network model. Comparing the local zoomed-in views of the uncorrected and corrected 3D images, the image quality of the latter is clearly improved and finer details are presented more effectively. The corrected images exhibit enhanced clarity, rendering textures, contours, and edges sharper than their uncorrected counterparts and providing a more realistic and immersive 3D experience. These detailed qualitative analyses show that the proposed method effectively improves the image quality. The enhanced images exhibit improved clarity, color vividness, and visual effects, emphasizing the advantages and specific enhancements achieved by the proposed method in practical applications.

Fig. 13. Multi-view integral imaging 3D display results of (a) flower model, (b) tank model, and (c) dinosaur model.

It is important to note that the proposed dual CNN model proves to be widely applicable, showcasing versatility across various integral imaging display systems. This indicates that the method is capable of being implemented in diverse model scenarios, facilitating improved 3D visualization.

4. Conclusion

This paper presents a novel approach to mitigate MLA aberrations in integral imaging and address the consequent deterioration of 3D image quality using a dual CNN model. The proposed method entails end-to-end training of a cascaded dual neural network model for both aberration pre-correction and image quality restoration, thereby effectively enhancing the image quality of integral imaging 3D displays. To validate the feasibility of the approach, an optimized, high-quality, pre-corrected EIA has been used as the image source of the integral imaging 3D display. The efficacy of the approach is validated through rigorous simulations and optical experiments, which demonstrate its ability to mitigate the trade-off between MLA aberrations and the degradation of 3D image resolution caused by noise present in the microlens. Importantly, this is achieved without introducing any unnecessary complexity to the display system. Consequently, the present method provides a compelling solution for advancing 3D display technology. It can be extended to a broader range of fields, such as generalized light-field displays and holographic 3D displays, opening up further applications in medical imaging and virtual reality education.

Funding

National Key Research and Development Program of China (2021YFB2802300); National Natural Science Foundation of China (NSFC) (61975014, 62035003, U22A2079); Beijing Municipal Science & Technology Commission, Administrative Commission of Zhongguancun Science Park (Z211100004821012).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. Z.-F. Zhao, J. Liu, Z.-Q. Zhang, and L.-F. Xu, “Bionic-compound-eye structure for realizing a compact integral imaging 3D display in a cell phone with enhanced performance,” Opt. Lett. 45(6), 1491–1494 (2020). [CrossRef]  

2. C. Li, J. Liu, H. Ma, J. Li, and S. Cao, “Viewing angle enhancement for integral imaging display using two overlapped panels,” Opt. Express 31(13), 21772–21783 (2023). [CrossRef]  

3. Y. Piao, Z. Rong, M. Zhang, Y. Zhang, and X. Ji, “Deep Learning for Single View Focal Plane Reconstruction in Integral Imaging,” in Digital Holography and Three-Dimensional Imaging (Optica Publishing Group, 2019), paper M3B.2.

4. J. Jung, J. Kim, and B. Lee, “Solution of pseudoscopic problem in integral imaging for real-time processing,” Opt. Lett. 38(1), 76–78 (2013). [CrossRef]  

5. P. Wani, G. Krishnan, T. O’Connor, and B. Javidi, “Information theoretic performance evaluation of 3D integral imaging,” Opt. Express 30(24), 43157–43171 (2022). [CrossRef]  

6. N.-Q. Zhao, J. Liu, and Z.-F. Zhao, “High performance integral imaging 3D display using quarter-overlapped microlens arrays,” Opt. Lett. 46(17), 4240–4243 (2021). [CrossRef]  

7. Y. Li, X. Sang, S. Xing, Y. Guan, S. Yang, D. Chen, L. Yang, and B. Yan, “Real-time optical 3D reconstruction based on Monte Carlo integration and recurrent CNNs denoising with the 3D light field display,” Opt. Express 27(16), 22198–22208 (2019). [CrossRef]  

8. C. Lu, Q. Tian, L. Zhu, R. Gao, H. Yao, F. Tian, Q. Zhang, and X. Xin, “Mitigating the ambiguity problem in the CNN-based wavefront correction,” Opt. Lett. 47(13), 3251–3254 (2022). [CrossRef]  

9. Y. Xu, J. Liu, Z. Lv, L. Xu, and Y. Yang, “Holographic optical elements with a large adjustable focal length and an aberration correction,” Opt. Express 30(18), 33229–33240 (2022). [CrossRef]  

10. X. Chen, X. Zhang, Z. Su, J. Yu, and L. Wang, “Method to construct the initial structure of optical systems based on full-field aberration correction,” Appl. Opt. 62(17), 4571–4882 (2023). [CrossRef]  

11. X. Liu, J. Zhou, L. Wei, L. Feng, J. Jing, X. He, L. Yang, and Y. Li, “Optical design of Schwarzschild imaging spectrometer with freeform surfaces,” Opt. Commun. 480, 126495 (2021). [CrossRef]  

12. A. Y. Yi and T. W. Raasch, “Design and fabrication of a freeform phase plate for high-order ocular aberration correction,” Appl. Opt. 44(32), 6869–6876 (2005). [CrossRef]  

13. E. Lee, Y. Jo, S. Nam, and B. Lee, “View-dependent Distortion Correction Method for a Multiview Lenticular Light Field Display System,” in Frontiers in Optics + Laser Science (Optica Publishing Group, 2022), paper JTu4A.73.

14. S. Pal, “Aberration correction of zoom lenses using evolutionary programming,” Appl. Opt. 52(23), 5724–5732 (2013). [CrossRef]  

15. R. Pandharkar, A. Kirmani, and R. Raskar, “Lens Aberration Correction Using Locally Optimal Mask Based Low Cost Light Field Cameras,” in Imaging Systems (Optica Publishing Group, 2010), paper IMC3.

16. K. Schmidt, N. Guo, W. Wang, J. Czarske, and N. Koukourakis, “Chromatic aberration correction employing reinforcement learning,” Opt. Express 31(10), 16133–16147 (2023). [CrossRef]  

17. Z. Wang, Y. Cai, Y. Liang, D. Dan, B. Yao, and M. Lei, “Aberration correction method based on double-helix point spread function,” J. Biomed. Opt. 24(03), 1–11 (2018). [CrossRef]  

18. Y. Qiu, Z. Zhao, J. Yang, Y. Cheng, Y. Liu, B. R. Yang, and Z. Qin, “Light field displays with computational vision correction for astigmatism and high-order aberrations with real-time implementation,” Opt. Express 31(4), 6262–6280 (2023). [CrossRef]  

19. I. Vishniakou and J. D. Seelig, “Wavefront correction for adaptive optics with reflected light and deep neural networks,” Opt. Express 28(10), 15459–15471 (2020). [CrossRef]  

20. X. Guo, X. Sang, D. Chen, P. Wang, H. Wang, X. Liu, Y. Li, S. Xing, and B. Yan, “Real-time optical reconstruction for a three-dimensional light-field display based on path-tracing and CNN super-resolution,” Opt. Express 29(23), 37862–37876 (2021). [CrossRef]  

21. H. Ren, Q. H. Wang, Y. Xing, M. Zhao, L. Luo, and H. Deng, “Super-multiview integral imaging scheme based on sparse camera array and CNN super-resolution,” Appl. Opt. 58(5), A190–A196 (2019). [CrossRef]  

22. H. Fan, D. Liu, Z. Xiong, and F. Wu, “Two-stage convolutional neural network for light field super-resolution,” in International Conference on Image Processing, 1167–1171 (IEEE, 2017).

23. W. Zhang, X. Sang, X. Gao, X. Yu, B. Yan, and C. Yu, “Wavefront aberration correction for integral imaging with the pre-filtering function array,” Opt. Express 26(21), 27064–27075 (2018). [CrossRef]  

24. X. Su, X. Yu, D. Chen, H. Li, X. Gao, X. Sang, X. Pei, X. Xie, Y. Wang, and B. Yan, “Regional selection-based pre-correction of lens aberrations for light-field displays,” Opt. Commun. 505, 127510 (2022). [CrossRef]  

25. X. Yu, H. Li, X. Sang, X. Su, X. Gao, B. Liu, D. Chen, Y. Wang, and B. Yan, “Aberration correction based on a pre-correction convolutional neural network for light-field displays,” Opt. Express 29(7), 11009–11020 (2021). [CrossRef]  

26. K. Zhang, W. Zuo, and L. Zhang, “Learning a Single Convolutional Super-Resolution Network for Multiple Degradations,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3262–3271 (2018).

27. K. Zhang, W. Zuo, and L. Zhang, “Deep Plug-And-Play Super-Resolution for Arbitrary Blur Kernels,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1671–1681 (2019).

28. Z. Zhao, J. Liu, L. Xu, Z. Zhang, and N. Zhao, “Wave-optics and spatial frequency analyses of integral imaging three-dimensional display systems,” J. Opt. Soc. Am. A 37(10), 1603–1613 (2020). [CrossRef]  

29. A. Karimzadeh, “Integral imaging system optical design with aberration consideration,” Appl. Opt. 54(7), 1765–1769 (2015). [CrossRef]  

30. Z. Wang, O. Baladron-Zorita, C. Hellmann, and F. Wyrowski, “Theory and algorithm of the homeomorphic Fourier transform for optical simulations,” Opt. Express 28(7), 10552–10571 (2020). [CrossRef]  

31. H. R. Sheikh and A. C. Bovik, “Image information and visual quality,” IEEE Trans. Image Process. 15(2), 430–444 (2006).

32. F. Khader, G. Müller-Franzes, S. Tayebi Arasteh, et al., “Denoising diffusion probabilistic models for 3D medical image generation,” Sci. Rep. 13(1), 7303 (2023). [CrossRef]  

33. V. Mannam, Y. Zhang, Y. Zhu, E. Nichols, Q. Wang, V. Sundaresan, S. Zhang, C. Smith, P. W. Bohn, and S. S. Howard, “Real-time image denoising of mixed Poisson–Gaussian noise in fluorescence microscopy images using ImageJ,” Optica 9(4), 335–345 (2022). [CrossRef]  

34. J.-S. Jang, Y.-S. Oh, and B. Javidi, “Spatiotemporally multiplexed integral imaging projector for large-scale high-resolution three-dimensional display,” Opt. Express 12(4), 557–563 (2004). [CrossRef]  

35. Y. Chen, X. Wang, J. Zhang, S. Yu, Q. Zhang, and B. Guo, “Resolution improvement of integral imaging based on time multiplexing sub-pixel coding method on common display panel,” Opt. Express 22(15), 17897–17907 (2014). [CrossRef]  

36. A. Markman, J. Wang, and B. Javidi, “Three-dimensional integral imaging displays using a quick-response encoded elemental image array,” Optica 1(5), 332–335 (2014). [CrossRef]  

37. H. Navarro, J. C. Barreiro, G. Saavedra, M. Martínez-Corral, and B. Javidi, “High-resolution far-field integral-imaging camera by double snapshot,” Opt. Express 20(2), 890–895 (2012). [CrossRef]  

38. M. A. Taylor, T. Nöbauer, A. Pernia-Andrade, F. Schlumm, and A. Vaziri, “Brain-wide 3D light-field imaging of neuronal activity with speckle-enhanced resolution,” Optica 5(4), 345–353 (2018). [CrossRef]  
