
Neural invertible variable-degree optical aberrations correction

Open Access

Abstract

Optical aberrations of optical systems cause significant degradation of imaging quality. Aberration correction by sophisticated lens designs and special glass materials generally incurs a high manufacturing cost and increases the weight of optical systems, so recent work has shifted to aberration correction with deep learning-based post-processing. Although real-world optical aberrations vary in degree, existing methods cannot eliminate variable-degree aberrations well, especially at severe degrees of degradation. In addition, previous methods use a single feed-forward neural network and suffer from information loss in the output. To address these issues, we propose a novel aberration correction method with an invertible architecture by leveraging its information-lossless property. Within the architecture, we develop conditional invertible blocks to allow the processing of aberrations with variable degrees. Our method is evaluated on both a synthetic dataset from physics-based imaging simulation and a real captured dataset. Quantitative and qualitative experimental results demonstrate that our method outperforms compared methods in correcting variable-degree optical aberrations.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Optical aberrations are defects introduced in the design, manufacturing and assembly of camera lenses. These defects cause the incident light to diffuse and fail to focus into a sharp image, producing images with a blurry and dispersive appearance [1]. Despite the rapid development of optical design and digital imaging technologies, image degradation caused by optical aberrations still cannot be completely avoided, especially for lightweight, inexpensive lenses and smartphone camera lenses, whose optical aberrations are relatively significant.

In practice, optical aberrations are mitigated by incorporating an array of lenses, using aspheric lenses, and employing special glass materials for lenses. However, the increase in lens types and lens materials poses challenges to the manufacturing process and raises production costs. Therefore, recent works have shifted aberration correction from sophisticated lens design to post-processing [2].

Currently, two main technical routes have been proposed for correcting optical aberrations in post-processing. One is the model-driven traditional approach, which assumes an image degradation model, uses various natural image priors, optimizes over multiple iterations to find the degradation kernel, and performs deconvolution to obtain the sharp image. However, traditional methods are not robust enough in dealing with spatially varying degradation. The other is the data-driven deep learning approach, which has recently become increasingly popular. These methods utilize training data to train neural networks and recover sharp images from degraded images. However, most of them cannot deal with variable-degree aberrations since they are exclusively designed for a specific degree of degradation. In addition, they leverage a single feed-forward autoencoder architecture and usually suffer from information loss during the encoding and decoding process.

To address the information loss problem, we propose a novel aberration correction method based on invertible neural networks (INNs) to learn the transformation from aberration images to aberration-free images, where the invertible design assures that the neural networks can preserve information, especially the details of images [3,4]. Due to the limited nonlinear transformation ability of invertible neural networks [5], we introduce a feature extraction module to improve the non-linear transformation capability of the INNs. In order to process optical aberrations with variable degrees, we propose enhanced conditional encoding modules, which use the degradation degree of the aberration image as the input. This provides our method with the capability to restore sharp images from input images with variable degradation degrees.

Since images captured by cameras are naturally degraded by optical aberrations, large-scale pairs of real aberrated and aberration-free images are unavailable. To mitigate this issue, we establish an imaging simulation process to synthesize realistic degraded images from sharp reference images. The imaging simulation process incorporates the lens parameters of an optical system and performs physics-based ray tracing to simulate optical aberrations with variable degrees. We leverage this approach to produce large-scale paired datasets for the proposed aberration correction method.

Experimental results show that the proposed method achieves better numerical metrics and visual quality on both synthetic images and real aberration-degraded images. The visual results verify that our method can recover more details through inference along the forward direction, thanks to the invertible design of the architecture. Meanwhile, our method brings the additional benefit of synthesizing aberration images from sharp images along the reverse direction. The contributions of this paper can be summarized as follows:

  • We design an imaging simulation process based on ray tracing and spatial convolution to generate large-scale paired datasets with variable degradation degrees.
  • We propose an invertible neural network architecture for optical aberration correction that can largely alleviate the information loss problem and improve image quality.
  • We introduce conditional encoding modules for the invertible neural network to deal with varying degrees of optical aberrations.

2. Related work

2.1 Optical aberrations correction

Due to the inherent optical aberrations of optical systems, captured images are degraded. This degradation can hardly be avoided entirely by sophisticated optical system design, so recent works turn to post-processing for removing aberrations. Current methods mostly perform the process in two steps: first, estimate the point spread function (PSF) of the target optical system, and then use non-blind deconvolution or deep neural networks to restore the image. Optical aberrations are spatially varying, and the methods for obtaining spatially varying PSFs can be divided into three categories: real shooting-based methods, calibration-based methods and optical simulation-based methods. In [6], the point spread function was directly measured by imaging a pinhole grid pattern in a dark room. The work in [7] calibrated PSFs using random patterns. The work in [8] calculated the PSFs by raytracing and coherent superposition in simulation.

After obtaining the PSFs, some methods use a deconvolution process [1,9] to solve the linear inverse problem. The authors of [10] used a two-step scheme to correct the aberration of a single image, and then used a convolutional neural network to remove the remaining chromatic aberration in the image. However, deconvolution involves a complex iterative process. Owing to the strong fitting ability of deep neural networks, some methods [8,11–13] used autoencoder-based architectures to restore degraded images. The work in [9] designed a PSF-aware neural network, which takes degraded images and PSF images as inputs and generates latent high-quality images by combining a deep prior. The work in [13] proposed a frequency-based adaptive block, which was inserted into the neural network to perform feature-based deconvolution to correct non-uniform blur. However, these networks require calibrated PSFs as input. The authors of [12] proposed an end-to-end neural network to remove the aberrations in the input image. However, their architecture, based on a feed-forward autoencoder, was unable to deal with varying degrees of aberrations. In contrast, our method does not need a complex PSF estimation process, and the proposed conditional invertible neural network can correct degradations with variable degrees.

2.2 Invertible neural networks

The development of invertible neural networks (INNs) can be traced back to nonlinear independent component estimation (NICE), proposed in [5]. It learns a nonlinear bi-directional mapping between input data and a latent space in an unsupervised way, with the forward and reverse processes sharing model parameters. Building on this, RevNet [14] was proposed, which can perform backpropagation without storing activations, greatly reducing the memory consumption of the model. To deal with image-related tasks, the authors of [15] introduced convolutional and multi-scale layers into the coupling model to reduce the computational cost and improve the regularization ability of the model. The work in [16] built a reversible network architecture, i-RevNet, based on RevNet, which retains all information of the input signal in all intermediate representations except the last layer. The article also shows that information loss is not a necessary condition for learning representations that generalize to unfamiliar data. The work in [17] proposed Glow with an effective invertible 1 $\times$ 1 convolution block, which can synthesize and process large images efficiently and realistically.

Because of their information-lossless property and powerful generative ability, INNs have been used in many image restoration tasks. The work in [18] used an INN to learn the reversible bijective transformation between image downscaling and upscaling to achieve information-lossless image rescaling. The authors of [3] designed a reversible neural network for denoising: in the forward process, the noisy image is mapped to a low-resolution image and a latent representation; in the reverse process, a sample from a prior distribution replaces the latent representation to discard the noise. Other image restoration tasks using INNs include image decolorization [19], image hiding [20], etc. However, to the best of our knowledge, no previous work has applied INNs to the task of optical aberration correction.

3. Method

In this section, we first introduce the ray tracing-based simulation method for degraded images in Section 3.1. We then illustrate the overall architecture of our invertible aberration correction neural network in Section 3.2. Finally, in Section 3.3, we elaborate on the composition of the loss function.

3.1 Raytracing based imaging simulation

The main problem of supervised deep learning-based aberration correction is the lack of real paired datasets. Established methods ignore the underlying optical system and simply synthesize the degradation caused by optical aberrations with a Gaussian blur kernel, leading to a large gap between the synthetic paired data and the real data. For optical systems with large fields of view and large apertures, the commonly used Gaussian kernel causes inaccurate simulation since the actual point spread functions (PSFs) vary spatially across the field of view (FOV).

The recent work [8] proposed an imaging simulation method that requires no shooting or registration to solve the above problems and is easy to migrate to different optical systems. We adopt this approach [8] and introduce the distance to the focal plane as one of the simulation inputs to generate synthetic degraded images with variable-degree aberrations.

Our imaging simulation process consists of two steps: first, we calculate patch-wise PSFs with accurate raytracing; second, assuming that the degradation within a local region of a natural image is similar, the sharp image patches are convolved with the patch-wise PSFs to simulate the degradation process. Assuming that the degradation degree of the image patch $I_{degraded}(i,j,d)$ is consistent when shooting at the same object distance, the degradation process can be modeled as follows:

$$I_{degraded}(i,j,d) = I_{sharp}(i,j) \otimes k(i,j,d) + n(i,j)$$
where $(i,j)$ indicates the spatial coordinates of the patch $I_{degraded}(i,j,d)$, $d$ is the distance to the focal plane, $I_{sharp}(i,j)$ is the latent sharp image of $I_{degraded}(i,j,d)$, and $\otimes$ is the convolution operation. $k(i,j,d)$ is the normalized point spread function, representing the energy diffusion caused by the aberrations of the optical system. $n(i,j)$ models the random noise introduced in the imaging process, which can be approximated by the well-established heteroscedastic Gaussian model [21].
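For concreteness, the following is a minimal NumPy sketch of Eq. (1) for a single image patch, assuming the PSF kernel is already available; the noise parameters `sigma_s` and `sigma_r` are illustrative placeholders rather than calibrated values.

```python
import numpy as np
from scipy.signal import fftconvolve

def degrade_patch(patch_sharp, psf, sigma_s=0.01, sigma_r=0.002, rng=None):
    """Sketch of Eq. (1) for one patch: convolve with a normalized PSF, then
    add heteroscedastic Gaussian noise (signal-dependent term + read noise).
    sigma_s and sigma_r are illustrative values, not calibrated parameters."""
    rng = rng or np.random.default_rng()
    psf = psf / psf.sum()                              # keep image energy unchanged
    blurred = fftconvolve(patch_sharp, psf, mode="same")
    noise_std = np.sqrt(sigma_s * np.clip(blurred, 0, None) + sigma_r ** 2)
    return blurred + rng.normal(size=blurred.shape) * noise_std
```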

Point spread function calculation. According to the lens parameters of the optical system used for imaging, the wavefront aberrations and the point spread functions of the optical system are calculated by sequential raytracing. Figure 1 shows the raytracing process and the FOV-dependent PSFs. We define a ray $\mathcal {R}$ = $(x, y, z)$ as follows:

$$\mathcal{R} = \mathcal{O} + t\mathcal{D},t \geq 0,$$
where $\mathcal {O}=(x_0, y_0, z_0)$ is the starting point of the ray $\mathcal {R}$, $\mathcal {D}=(d_x, d_y, d_z)$ is the normalized direction vector of the ray $\mathcal {R}$, $\textit {t}$ is the ray marching distance from the starting point $\mathcal {O}$.

The first step of sequential raytracing is to calculate the intersection point of the ray and the surface. For spherical surfaces, the value of $t$ at the intersection point is solved analytically. For aspheric surfaces, we define the surface with the sagittal height expression as follows:

$$z=\frac{c s^2}{1+\sqrt{1-c^2 s^2}}+M_2 s^2+M_4 s^4+\cdots+M_j s^j$$
where $z$ is the longitudinal coordinate of a point on the surface, $c$ is the curvature of the spherical part, and $s = \sqrt {x^2 + y^2}$ is the distance from the point to the $z$ axis. $M_2$, $M_4$ and $M_j$ are the coefficients of the higher-order terms. The value of $t$ at the intersection can be solved by plugging Eq. (2) into Eq. (3). Since Eq. (3) contains high-order terms, the coordinates of the intersection point can only be calculated numerically by multiple iterations [8].
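As an illustration of this numerical solve, the sketch below finds the intersection of a ray with an even aspheric surface using Newton iterations with a finite-difference derivative; it is a simplified stand-in for the iterative procedure of [8], and the coefficient list and tolerances are assumptions.

```python
import numpy as np

def sag(s, c, coeffs):
    """Sagittal height of Eq. (3): spherical base term plus even polynomial
    terms, with coeffs = [M2, M4, ...] (illustrative)."""
    z = c * s ** 2 / (1.0 + np.sqrt(1.0 - c ** 2 * s ** 2))
    for i, M in enumerate(coeffs):
        z += M * s ** (2 * (i + 1))
    return z

def intersect_aspheric(O, D, c, coeffs, t0=0.0, n_iter=20, eps=1e-6):
    """Solve for t in R = O + t*D such that the ray point lies on the surface
    z = sag(s), using Newton iterations with a finite-difference derivative."""
    t = t0
    for _ in range(n_iter):
        p = O + t * D
        g = p[2] - sag(np.hypot(p[0], p[1]), c, coeffs)
        p2 = O + (t + eps) * D
        g2 = p2[2] - sag(np.hypot(p2[0], p2[1]), c, coeffs)
        t_new = t - g * eps / (g2 - g)
        if abs(t_new - t) < 1e-12:
            break
        t = t_new
    return t, O + t * D
```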

The second step of sequential raytracing is to calculate the refraction direction of the ray. Given the refractive indices $n_1$ and $n_2$ on both sides of the refraction surface and the incident angle $I$ of the incident ray, we use Snell’s law to calculate the direction of the refracted ray.
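A minimal vector form of this refraction step is sketched below; the surface normal is assumed to be a unit vector oriented against the incoming ray.

```python
import numpy as np

def refract(d, n, n1, n2):
    """Snell's law in vector form: refract unit direction d at a surface with
    unit normal n (pointing against the incoming ray), going from index n1
    into index n2. Returns None on total internal reflection."""
    eta = n1 / n2
    cos_i = -np.dot(d, n)
    sin_t2 = eta ** 2 * (1.0 - cos_i ** 2)
    if sin_t2 > 1.0:
        return None                          # total internal reflection
    cos_t = np.sqrt(1.0 - sin_t2)
    return eta * d + (eta * cos_i - cos_t) * n
```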

The wavefront aberration is the deviation of the actual wavefront from the ideal wavefront, expressed as the optical path difference. When calculating the optical path difference for a given ray, the ray starts from the object plane, reaches the image plane, and is then traced reversely from the image plane back to the reference sphere at the exit pupil. The complex pupil function can be constructed by combining the phase information of the optical path difference and the amplitude information formed by the exit pupil. The pupil function can be expressed as:

$$\mathcal{P}(x, y) = A(x, y) e^{j \frac{2\pi}{\lambda} \phi(x, y)}$$
where $(x, y)$ represents the pupil plane coordinates, $A(x, y)$ is the complex amplitude distribution of the exit pupil surface, $\phi (x, y)$ is the optical path difference between the ray at the exit pupil and the chief ray.

The point spread function is the spot formed by the rays from point light sources after passing through the optical system. The amplitude spread function is the Fourier transform of the pupil function $\mathcal {P}(x, y)$. The amplitude spread function can be expressed as:

$$h(u, v) = \int\limits_{-\infty}^{\infty} \int\limits_{-\infty}^{\infty} \mathcal{P}(x, y) e^{{-}j 2 \pi(u x+v y)} d x d y$$
where $(u, v)$ represents the image plane coordinates. The point spread function is the squared magnitude of the amplitude spread function $h(u, v)$.
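In discrete form, Eqs. (4) and (5) amount to building the pupil function on a sampled grid, taking its 2D FFT, and squaring the magnitude; the sketch below makes these steps explicit (sampling, padding and scaling details of the actual simulation are omitted).

```python
import numpy as np

def psf_from_pupil(amplitude, opd, wavelength):
    """Build the pupil function of Eq. (4) from an amplitude map and an
    optical-path-difference map (same units as wavelength), take the 2D FFT
    as a discrete stand-in for Eq. (5), and square the magnitude to obtain
    a normalized PSF."""
    pupil = amplitude * np.exp(1j * 2.0 * np.pi / wavelength * opd)
    asf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(asf) ** 2
    return psf / psf.sum()                   # normalize so energy is preserved
```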

Patch-wise spatial domain convolution. First, we segment the image into 32 $\times$ 32 uniform patches. These image patches are respectively convolved with the PSFs of the corresponding central FOVs in the spatial domain to simulate imaging, as described in Eq. (1). Then, we stitch the degraded image patches together. Finally, we multiply the pixel values of the degraded image by the relative illuminance coefficient at the corresponding FOV, which can be obtained from the pixel position. It should be noted that there are two additional operations in this process: on the one hand, the PSF needs to be normalized in advance to keep the energy of the image unchanged; on the other hand, to ensure that the smoothness of the image is not affected by patch-wise convolution, the edges of the patches need to be interpolated.
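The patch-wise convolution step can be sketched as follows, interpreting the segmentation as a 32 $\times$ 32 grid of patches; edge blending between neighboring patches is omitted here for brevity, and the `psfs` and `illum` arrays are assumed to be indexed by patch row and column.

```python
import numpy as np
from scipy.signal import fftconvolve

def simulate_image(sharp, psfs, illum, grid=32):
    """Patch-wise imaging simulation sketch: split a grayscale image into a
    grid x grid array of patches, convolve each with the PSF of its central
    FOV, scale by the relative illuminance, and stitch the patches back."""
    H, W = sharp.shape
    ph, pw = H // grid, W // grid
    out = np.zeros_like(sharp)
    for r in range(grid):
        for c in range(grid):
            ys, xs = slice(r * ph, (r + 1) * ph), slice(c * pw, (c + 1) * pw)
            k = psfs[r][c] / psfs[r][c].sum()     # normalized PSF for this patch
            out[ys, xs] = illum[r][c] * fftconvolve(sharp[ys, xs], k, mode="same")
    return out
```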

3.2 Invertible aberration correction architecture

We design a conditional invertible neural network to conduct the aberration correction. Figure 2 shows the overall architecture, which consists of a feature extraction module and a conditional invertible module to correct optical aberrations of variable degrees.

Fig. 1. The optical system structure and PSFs of GCO-232005 lens. The upper-left shows the cross-sectional view. The upper-right shows the PSFs of the GCO-232005 lens. The lower-left shows the raytracing process in the 2-dimensional plane. The lower-right shows the PSFs for three representative FOVs, illustrating that the PSF varies with FOV.

Fig. 2. Overview of the proposed conditional invertible neural network with feature extraction for variable-degree optical aberrations correction. In the forward process (black arrow), the input degraded image passes through a conditional invertible neural network composed of a feature extraction module and 12 conditional invertible blocks to obtain a sharp image $\mathcal {G}(\mathbf {Y})$. In the reverse process (red arrow), we feed $\mathcal {G}(\mathbf {Y})$ back into the network to obtain the degraded image $\mathcal {F}(\mathcal {G}(\mathbf {Y}))$. Under the joint forward and reverse process, the information of the image is preserved as much as possible, making details of the restored image clearer.

Feature Extraction Module. The design of an invertible neural network (INN) must ensure strict invertibility, so an INN usually has limited nonlinear transformation ability. Therefore, we add a feature extraction module in front of the INN to improve the nonlinear transformation ability. The feature extraction module is based on multi-scale ResBlocks [22], including up-sampling and down-sampling processes. The details are shown in Fig. 3. It should be noted that the weights of the feature extraction modules for the forward and reverse processes are not shared.

Fig. 3. The detailed architecture of feature extraction module. "Conv7-64" means that the convolution kernel size of this layer is 7 $\times$ 7, and the number of convolution kernels is 64.

Conditional Invertible Module. The conditional invertible module is composed of $k$ conditional invertible blocks, where $k$ is set to 12. Each conditional invertible block consists of a squeeze operation, an invertible 1 $\times$ 1 convolution, a conditional affine coupling layer, and an unsqueeze operation. All of these operations are invertible, so the entire INN is completely invertible. Next, we elaborate on the components of the conditional invertible block.

Squeeze and unsqueeze. The squeeze operation [15] is similar to the convolution operation in a CNN in that it reduces the spatial size of the feature map and increases its number of channels to capture correlation and structure over a greater spatial distance. Unlike a convolution, the squeeze operation rearranges features according to a checkerboard pattern to ensure reversibility, as shown in Fig. 4. The squeeze operation increases the channel dimension while retaining the local correlation of the image. The unsqueeze operation is the inverse of the squeeze operation and recovers the original size of the feature map.
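A minimal PyTorch sketch of this pair of operations is given below; it uses a space-to-depth rearrangement whose channel ordering may differ from the exact checkerboard pattern in Fig. 4, but the lossless round trip is the same.

```python
import torch
import torch.nn.functional as F

def squeeze(x):
    """Space-to-depth rearrangement: (B, C, H, W) -> (B, 4C, H/2, W/2)."""
    return F.pixel_unshuffle(x, downscale_factor=2)

def unsqueeze(x):
    """Exact inverse of squeeze: (B, 4C, H/2, W/2) -> (B, C, H, W)."""
    return F.pixel_shuffle(x, upscale_factor=2)

x = torch.randn(1, 1, 4, 4)
assert torch.equal(unsqueeze(squeeze(x)), x)   # lossless round trip
```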

Fig. 4. The squeeze operation rearranges a 4 $\times$ 4 $\times$ 1 tensor (on the left) into a 2 $\times$ 2 $\times$ 4 tensor (on the right).

Invertible 1 $\times$ 1 convolution. The invertible 1 $\times$ 1 convolution [17] is a learnable convolution. It fuses information between different feature channels. This allows more interaction and fusion between the information from different incoming data flows.
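The following sketch shows the idea in PyTorch: a learnable $C \times C$ matrix acts as a 1 $\times$ 1 convolution in the forward pass, and its matrix inverse realizes the reverse pass. Practical implementations (e.g., the LU-decomposed form in [17]) differ in detail; this plain version is only illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InvConv1x1(nn.Module):
    """Learnable invertible 1x1 convolution: a C x C weight mixes information
    across channels; the reverse pass uses the matrix inverse of the weight."""
    def __init__(self, channels):
        super().__init__()
        w, _ = torch.linalg.qr(torch.randn(channels, channels))  # orthogonal init
        self.weight = nn.Parameter(w)

    def forward(self, x, reverse=False):
        w = torch.inverse(self.weight) if reverse else self.weight
        return F.conv2d(x, w.view(*w.shape, 1, 1))
```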

Conditional affine coupling layers. The schematic diagram of the proposed layers is shown in Fig. 5. The conditional affine coupling layer [23] adds encoded condition variables to the affine coupling layer [5,15] to improve the efficiency of the flow model. Here, the conditional code $h$ represents the aberration degree of the image, specifically the object distance to the focal plane. We encode the distance in binary; the images in our dataset have 101 different distances, so the binary code is 7 bits long. For the $i$-th affine coupling layer, the input $u^i$ is split into $u^i_1$ and $u^i_2$ along the channel dimension, which then undergo the augmented affine transformation [15,18]:

$$\begin{aligned} & u_1^{i},u_2^{i}=Split(u^{i})\\ & u_1^{i+1}=u_1^i \odot \exp \left(\psi\left(u_2^i,h\right)\right)+\phi\left(u_2^i\right) \\ & u_2^{i+1}=u_2^i \odot \exp \left(\rho\left(u_1^{i+1}\right)\right)+\eta\left(u_1^{i+1}\right)\\ & u^{i+1}=Concat(u_1^{i+1},u_2^{i+1}) \end{aligned}$$

Fig. 5. The conditional affine coupling layer for the forward and reverse processes.

Eq. (6) corresponds to the forward process, and the outputs [$u^{i+1}_1$, $u^{i+1}_2$] are concatenated again and passed to the next affine coupling block. In the reverse process, only the addition (+) and multiplication ($\times$) operations are reversed to subtraction (-) and division (/), while the internal transformation functions ($\psi ()$, $\phi ()$, $\rho ()$, $\eta ()$) do not need to be invertible and can be represented by arbitrary neural networks. We employ a multi-scale residual concatenated convolutional block, which is a simplified version of the feature extraction module: specifically, we change the 64 convolution kernels in the first pink block ("Conv7-64") of Fig. 3 to 32 convolution kernels ("Conv7-32"), and we reduce the middle "ResBlock $\times$ 4" to "ResBlock $\times$ 2". When the output $u^{i+1}$ is given, the corresponding reverse process can be expressed as:

$$\begin{aligned} & u_1^{i+1},u_2^{i+1}=Split(u^{i+1})\\ & u_2^i=\left(u_2^{i+1}-\eta\left(u_1^{i+1}\right)\right) \odot \exp \left(-\rho\left(u_1^{i+1}\right)\right) \\ & u_1^i=\left(u_1^{i+1}-\phi\left(u_2^i\right)\right) \odot \exp \left(-\psi\left(u_2^i,h\right)\right)\\ & u^{i}=Concat(u_1^{i},u_2^{i}) \end{aligned}$$
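A compact PyTorch sketch of Eqs. (6) and (7) is shown below. The internal transforms $\psi$, $\phi$, $\rho$, $\eta$ are replaced by small convolutional stacks rather than the reduced multi-scale residual block described above, the 7-bit condition code is broadcast to a feature map before being concatenated to the input of $\psi$, and all layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

def subnet(in_ch, out_ch):
    """Stand-in for the internal transforms psi/phi/rho/eta (they need not be
    invertible); a plain two-layer conv stack is used here for brevity."""
    return nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, out_ch, 3, padding=1))

class CondAffineCoupling(nn.Module):
    """Conditional affine coupling layer implementing Eq. (6) (forward) and
    Eq. (7) (reverse). The per-image condition code h is broadcast spatially
    and concatenated to the input of psi."""
    def __init__(self, channels, cond_ch):
        super().__init__()
        c = channels // 2
        self.psi, self.phi = subnet(c + cond_ch, c), subnet(c, c)
        self.rho, self.eta = subnet(c, c), subnet(c, c)

    def forward(self, u, h):
        u1, u2 = u.chunk(2, dim=1)
        hmap = h[:, :, None, None].expand(-1, -1, u2.shape[2], u2.shape[3])
        u1 = u1 * torch.exp(self.psi(torch.cat([u2, hmap], 1))) + self.phi(u2)
        u2 = u2 * torch.exp(self.rho(u1)) + self.eta(u1)
        return torch.cat([u1, u2], dim=1)

    def reverse(self, v, h):
        v1, v2 = v.chunk(2, dim=1)
        hmap = h[:, :, None, None].expand(-1, -1, v2.shape[2], v2.shape[3])
        v2 = (v2 - self.eta(v1)) * torch.exp(-self.rho(v1))
        v1 = (v1 - self.phi(v2)) * torch.exp(-self.psi(torch.cat([v2, hmap], 1)))
        return torch.cat([v1, v2], dim=1)

# Round-trip check with a 7-bit condition code (hypothetical sizes).
layer = CondAffineCoupling(channels=8, cond_ch=7)
u = torch.randn(1, 8, 16, 16)
h = torch.randint(0, 2, (1, 7)).float()
assert torch.allclose(layer.reverse(layer(u, h), h), u, atol=1e-4)
```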

3.3 Loss function

We optimize the proposed invertible aberration correction neural network end-to-end with the following loss function:

$$\begin{array}{r} \mathcal{L}_{total} = \lambda_{1}\mathcal{L}_{forward}(\mathcal{G}(\mathbf{Y}),\mathbf{X}) + \lambda_{2}\mathcal{L}_{reverse}(\mathcal{F}(\mathcal{G}(\mathbf{Y})),\mathbf{Y}) \\+ \lambda_{3}\mathcal{L}_{edge}(\mathcal{G}(\mathbf{Y}),\mathbf{X}) + \lambda_{4}\mathcal{L}_{perceptual}(\mathcal{G}(\mathbf{Y}),\mathbf{X}) \end{array}$$
where $\lambda _{1}$, $\lambda _{2}$, $\lambda _{3}$ and $\lambda _{4}$ are hyperparameters that control the relative importance of the loss terms. They are empirically set as $\lambda _{1}$ = 1, $\lambda _{2}$ = 0.5, $\lambda _{3}$ = 0.05 and $\lambda _{4}$ = 0.02. $\mathcal {G}$ denotes the forward transformation from degraded images to sharp images, and $\mathcal {F}$ is the inverse process of $\mathcal {G}$. $\mathbf {X}$ is the reference sharp image, and $\mathbf {Y}$ is the input image with optical aberrations.
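As a small sketch, the weighted combination of Eq. (8) can be written as below, where `l1_loss`, `edge_loss` and `perceptual_loss` are assumed to be implemented as in the following paragraphs and are passed in as callables (the names are illustrative).

```python
def total_loss(G_Y, F_G_Y, X, Y, l1_loss, edge_loss, perceptual_loss):
    """Weighted sum of Eq. (8) with the paper's weights (1, 0.5, 0.05, 0.02)."""
    return (1.0  * l1_loss(G_Y, X)            # forward loss, Eq. (9)
          + 0.5  * l1_loss(F_G_Y, Y)          # reverse loss, Eq. (10)
          + 0.05 * edge_loss(G_Y, X)          # edge loss, Eq. (11)
          + 0.02 * perceptual_loss(G_Y, X))   # perceptual loss, Eq. (12)
```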

Forward Loss. This loss is applied in the forward process to eliminate the optical aberrations of the image, such that the output content is close to that of the reference. The $l_1$-norm pixel-level loss is used as it provides better quality than other norms, such as the $l_2$ norm.

$$\mathcal{L}_{forward}(\mathcal{G}(\mathbf{Y}),\mathbf{X})=\|\mathcal{G}(\mathbf{Y})-\mathbf{X}\|_1$$

Reverse Loss. The reverse loss makes the learning process more stable and increases the robustness of the neural network. $\mathcal {G}(\mathbf {Y})$ is the output of the degraded image $\mathbf {Y}$ after passing through the forward network, and $\mathcal {F}(\mathcal {G}(\mathbf {Y}))$ is the degraded image after the reverse process. The reverse loss makes the content of $\mathcal {F}(\mathcal {G}(\mathbf {Y}))$ close to the initial degraded image $\mathbf {Y}$.

$$\mathcal{L}_{reverse}(\mathcal{F}(\mathcal{G}(\mathbf{Y})),\mathbf{Y})=\|\mathcal{F}(\mathcal{G}(\mathbf{Y}))-\mathbf{Y}\|_1$$

Edge Loss. The edge loss takes high-frequency texture and structure information into account and improves the details of the restored images.

$$\mathcal{L}_{edge}(\mathcal{G}(\mathbf{Y}),\mathbf{X}) = \left\|\Delta\left(\mathcal{G}(\mathbf{Y})\right)-\Delta(\mathbf{X})\right\|_1$$
where $\Delta$ denotes the Laplacian operator.
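One way to realize Eq. (11) in PyTorch is sketched below, using a standard 3 $\times$ 3 discrete Laplacian kernel applied per channel; the kernel choice is an assumption, as the exact discretization is not specified here.

```python
import torch
import torch.nn.functional as F

LAPLACIAN = torch.tensor([[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]]).view(1, 1, 3, 3)

def edge_loss(pred, target):
    """Eq. (11): L1 distance between the Laplacian responses of the restored
    image and the reference, computed per channel with a 3x3 kernel."""
    c = pred.shape[1]
    k = LAPLACIAN.to(pred.device, pred.dtype).repeat(c, 1, 1, 1)
    lap = lambda x: F.conv2d(x, k, padding=1, groups=c)
    return F.l1_loss(lap(pred), lap(target))
```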

Perceptual Loss. Perceptual Loss [24] measures the difference between two images by features extracted from the benchmark VGG model [25]. It enhances the perceptual similarity between the generated image and the reference image, thus helping to produce a more realistic image.

$$\mathcal{L}_{perceptual}(\mathcal{G}(\mathbf{Y}),\mathbf{X})=\frac{1}{C_m H_m W_m}\|\varphi_m(\mathbf{X})-\varphi_m(\mathcal{G}(\mathbf{Y}))\|_1$$
where $m$ denotes the $m$-th layer; $C_m$, $H_m$ and $W_m$ stand for the number of channels, height, and width of the feature maps, respectively; $\varphi _m(\mathbf {X})$ is the feature response of the sharp image $\mathbf {X}$ at the $m$-th layer, and $\varphi _m(\mathcal {G}(\mathbf {Y}))$ is the feature response of $\mathcal {G}(\mathbf {Y})$ at the $m$-th layer. In this work, we use the tenth ($m=10$) convolutional layer of the pretrained VGG-19 network to extract features.
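A minimal sketch of Eq. (12) with torchvision's pretrained VGG-19 follows; the feature slice `[:22]` is an assumption about which index corresponds to the tenth convolutional layer and may need adjusting to match the exact layer used here.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

class PerceptualLoss(nn.Module):
    """Eq. (12): L1 distance between VGG-19 feature maps of the restored image
    and the reference, averaged over all feature elements."""
    def __init__(self):
        super().__init__()
        self.features = vgg19(weights="IMAGENET1K_V1").features[:22].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, pred, target):
        return torch.abs(self.features(pred) - self.features(target)).mean()
```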

4. Experiments

This section first introduces the details of the experimental setup and then qualitatively and quantitatively compares our proposed method with state-of-the-art methods on both synthetic data and real data. Finally, we conduct ablation studies and provide an analysis of the results.

4.1 Experimental settings

We use the DIV2K [26] dataset, consisting of 1000 high-quality 2K resolution images. We randomly select a subset of the DIV2K dataset and the ISO 12233 chart, and conduct batch imaging simulation to generate paired synthetic datasets. The target optical system is the GCO 232005 optical lens. In the imaging simulation, we set the object distance to the focal plane from $-125$ mm to $125$ mm with an interval of $2.5$ mm to construct the synthetic dataset, corresponding to 101 different degrees of aberrations. For this optical system, when the object distance to the focal plane is more than 80 mm or less than $-80$ mm, the simulated image can be considered heavily degraded, which poses a great challenge to the aberration correction task.

The synthetic dataset contains 6000 image pairs, which are divided into training, validation and test sets in a 4:1:1 ratio. We implement the proposed method in PyTorch and train the neural networks on two RTX 5000 GPUs with the Adam [27] optimizer ($\beta _1$ = 0.9, $\beta _2$ = 0.999) for a total of 150 epochs. The initial learning rate is $1 {\times } 10^{-4}$ and decays by half every 50 epochs. The training patch size is 256 ${\times }$ 256 and the batch size is 8. We employ random cropping, flipping and rotation to augment the training data as in [28,29]. It takes around 1.5 days to train a model for 150 epochs.
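The optimizer and schedule described above correspond to the following PyTorch setup; the one-layer `model` is only a stand-in for the actual network, and the inner training pass is omitted.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 3, 3, padding=1)   # placeholder for the proposed network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

for epoch in range(150):
    # ... one pass over the 256 x 256 training crops (batch size 8) goes here ...
    scheduler.step()
```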

We also set up an experimental optical system to capture real aberration data. The system consists of the GCO 232005 optical lens and an MER-131-210U3C-L CMOS sensor. As in our simulation, we capture real images with the object distance to the focal plane ranging from $-125$ mm to 125 mm.

4.2 Evaluation on synthetic images

To demonstrate the advantages of the proposed method, we first compare it with five state-of-the-art methods on the synthetic test images: DeblurGANv2 [30], FOV-KPN [12], MIMO-UNet [31], MPRNet [32], and Stripformer [33]. For a fair comparison, all the compared methods adopt the default settings from the original papers. The training/test datasets consist of 4000/1000 degraded images, which are generated by the proposed imaging simulation framework and include 101 degradation degrees. The compared methods are retrained on the synthetic training dataset. We evaluate these methods with commonly used metrics, including Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM) [34], and Learned Perceptual Image Patch Similarity (LPIPS) [35].

Table 1 and Fig. 6 show the quantitative and qualitative results of our method and the compared methods on the synthetic test dataset, respectively. Our method outperforms the previous best method, MIMO-UNet, by 0.34 dB in PSNR, as shown in Table 1. Figure 6 demonstrates that our method can recover images with better quality than all the compared methods. Our method yields sharper results with more details, especially in highly textured regions such as the text on the clothes. Furthermore, our method benefits from the invertible design and the combined forward and reverse losses, which effectively avoid unrealistic information and artifacts in the results.

Fig. 6. Qualitative comparisons on the synthetic dataset. The results are produced by DeblurGANv2 [30], FOV-KPN [12], MIMO-UNet [31], MPRNet [32], Stripformer [33] and our method. Here, "Distance" represents the imaging object distance to the focal plane. The first scene is obtained by cropping the image patch from the ISO 12233 test chart. The second and third scenes are from DIV2K dataset [26].

Table 1. Comparisons of the evaluation metrics on the synthetic dataset and the number of parameters.

4.3 Evaluation on real images

In Fig. 7, we compare our method with the state-of-the-art methods on real test images captured by our experimental camera. In addition to the methods mentioned in Section 4.2, we add the model-driven optimization algorithm [36], referred to as "Dark Channel Prior", for comparison. As can be seen from Fig. 7, the proposed method outperforms all other methods in terms of visual quality. It effectively eliminates the degradation caused by optical aberrations, and the conditional invertible neural network largely retains details of the original image, such as text edges and hair structures. Although DeblurGANv2 recovers images with sharper edges, it introduces unrealistic information and severe noise. It is worth noting that MIMO-UNet [31], which ranks second in quantitative results on the synthetic dataset, cannot handle severe spatially variant degradation. The other methods fail to deal with the optical aberrations well, and the restored images are not sufficiently clear. Overall, our method performs better in resolving optical aberrations on real images.

Fig. 7. Qualitative comparisons on the real test images. The restored results from left to right are produced by DeblurGANv2 [30], FOV-KPN [12], MIMO-UNet [31], MPRNet [32], Stripformer [33], Dark Channel Prior [36] and our method. Here, "Distance" represents the imaging object distance to the focal plane.

To further evaluate the improvement in image quality, we analyze the MTF of the restored images. The second scene of Fig. 7 is used because it contains many text edges. Figure 8 shows the MTF curves of the degraded input, the sharp image restored by our method, and the images restored by the other methods. Our method improves the MTF50 from 0.025 to 0.092 c/p, demonstrating that our method generates images with sharp edges. Although DeblurGANv2 produces a higher MTF50 value, it leads to over-sharpened, unrealistic edges and severe noise that cannot be reflected by the MTF curve. The image restored by Stripformer [33] also suffers from severe noise, which can be seen in the blue background.

Fig. 8. The MTF curve of the real captured image and the corresponding restored sharp image produced by our method and other models.

4.4 Ablation studies

In this section, we evaluate the impact of each component of our method by ablating different parts of the neural network and comparing them with the complete architecture, as shown in Table 2. We conduct the ablation experiments on the synthetic dataset; details about the dataset can be found in Section 4.1. We use the Adam [27] optimizer with a learning rate of 0.0001 to train for 150 epochs, and the learning rate decreases by half every 50 epochs.

Table 2. Quantitative results of ablation studies in terms of PSNR, SSIM and LPIPS on the test dataset.

Analysis of the feature extraction module. To verify the performance of the proposed nonlinear feature extraction module, we conduct a corresponding ablation study on the module, as shown in Table 2. Specifically, we train the proposed method with and without the feature extraction module, respectively, and keep the other training settings the same. Figure 9 shows the visualizations of the two models evaluated on the test dataset, from which we observe that the feature extraction module effectively improves the image quality.

Fig. 9. Ablation study on feature extraction module. This image is from DIV2K dataset [26].

Analysis of the conditional invertible blocks. When the conditional code is removed, the quantitative scores drop significantly, as shown in the second row of Table 2, which means that the conditional code is crucial for image fidelity and perceptual quality. After removing the entire conditional invertible module and keeping only the feature extraction module, as shown in the third row, the aberration correction ability of the network gets worse, which demonstrates the necessity of the conditional invertible module.

Analysis of the FOV encoder [12]. The work in [8] found that adding the field of view (FOV) as an additional input can improve model performance. We also tried adding a FOV encoder [12] in front of the forward process. However, the resulting performance is lower than that of the proposed method without the FOV encoder, as can be seen in Table 2. Thus, we do not include the FOV encoder in the proposed method.

Analysis of the number of conditional invertible blocks. We verify the effect of different numbers of conditional invertible blocks on the optical aberration correction performance of our method in Table 3. Reducing the number $k$ of conditional invertible blocks leads to artifacts in the restored results, and the PSNR and SSIM also decrease significantly, for example when $k=8$. When the number of blocks is increased to 12, the proposed method is able to recover more image details, and the performance of the model is greatly improved. When the number of blocks is further increased to 16, the performance improvement is marginal. However, more invertible blocks mean more parameters and longer inference time, so to better balance model performance and efficiency, we use $k=12$ as the default option.

Table 3. Comparisons on the effect of different numbers of conditional invertible blocks.

Analysis of the proposed loss functions. The loss functions are applied during the training stage to minimize the difference between the restored image and the ground truth image, as described in Section 3.3. We conduct ablation studies on the different loss functions to verify their impact, and the results are shown in Table 4. Removing any of the reverse loss, edge loss, or perceptual loss leads to worse performance. The reverse loss makes the model more stable, while the edge loss and perceptual loss minimize the difference between two images in terms of high-frequency details and perceptual quality, enabling the model to generate sharper images.

Table 4. Comparisons on the effect of the proposed loss functions.

5. Conclusions

In this work, we have proposed an enhanced conditional invertible neural framework to correct variable-degree optical aberrations. The conditional invertible neural network can effectively avoid information loss and restore image details. Meanwhile, to better handle different degrees of aberrations, we embed the degree of degradation into the model as a conditional encoding. Comprehensive experiments verify that our method outperforms compared methods in correcting optical aberrations on both synthetic and real images. Furthermore, our method is quite competitive in terms of model size. The proposed method is promising for integration into ISP pipelines to improve imaging quality.

Funding

Ministry of Science and Technology of the People's Republic of China (2021YFB3601404); Chinese Academy of Sciences (E2RC5901).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. C. J. Schuler, M. Hirsch, S. Harmeling, and B. Schölkopf, “Non-stationary correction of optical aberrations,” in International Conference on Computer Vision, (IEEE, 2011), pp. 659–666.

2. C. J. Schuler, M. Hirsch, S. Harmeling, and B. Schölkopf, “Blind correction of optical aberrations,” in European Conference on Computer Vision, (Springer, 2012), pp. 187–200.

3. Y. Liu, Z. Qin, S. Anwar, P. Ji, D. Kim, S. Caldwell, and T. Gedeon, “Invertible denoising network: A light solution for real noise removal,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, (IEEE, 2021), pp. 13365–13374.

4. S. Zhang, C. Zhang, N. Kang, and Z. Li, “ivpf: Numerical invertible volume preserving flow for efficient lossless compression,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, (IEEE, 2021), pp. 620–629.

5. L. Dinh, D. Krueger, and Y. Bengio, “Nice: Non-linear independent components estimation,” presented at the Third International Conference on Learning Representations, San Diego, CA, USA, 7-9 May. 2015.

6. Y. Shih, B. Guenter, and N. Joshi, “Image enhancement using calibrated lens simulations,” in European Conference on Computer Vision, (Springer, 2012), pp. 42–56.

7. F. Heide, M. Rouf, M. B. Hullin, B. Labitzke, W. Heidrich, and A. Kolb, “High-quality computational imaging through simple lenses,” ACM Trans. Graph. 32(5), 1–14 (2013). [CrossRef]  

8. S. Chen, H. Feng, D. Pan, Z. Xu, Q. Li, and Y. Chen, “Optical aberrations correction in postprocessing using imaging simulation,” ACM Trans. Graph. 40(5), 1–15 (2021). [CrossRef]  

9. X. Li, J. Suo, W. Zhang, X. Yuan, and Q. Dai, “Universal and flexible optical aberration correction using deep-prior based deconvolution,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, (IEEE, 2021), pp. 2613–2621.

10. T. Eboli, J.-M. Morel, and G. Facciolo, “Fast two-step blind optical aberration correction,” in European Conference on Computer Vision, (Springer, 2022), pp. 693–708.

11. Q. Tian, C. Lu, B. Liu, L. Zhu, X. Pan, Q. Zhang, L. Yang, F. Tian, and X. Xin, “DNN-based aberration correction in a wavefront sensorless adaptive optics system,” Opt. Express 27(8), 10765–10776 (2019). [CrossRef]  

12. S. Chen, H. Feng, K. Gao, Z. Xu, and Y. Chen, “Extreme-quality computational imaging via degradation framework,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, (IEEE, 2021), pp. 2632–2641.

13. T. Lin, S. Chen, H. Feng, Z. Xu, Q. Li, and Y. Chen, “Non-blind optical degradation correction via frequency self-adaptive and finetune tactics,” Opt. Express 30(13), 23485–23498 (2022). [CrossRef]  

14. A. N. Gomez, M. Ren, R. Urtasun, and R. B. Grosse, “The Reversible Residual Network: Backpropagation Without Storing Activations,” Advances in Neural Information Processing Systems 30, 2214–2224 (2017).

15. L. Dinh, J. Sohl-Dickstein, and S. Bengio, “Density estimation using real nvp,” presented at the Fourth International Conference on Learning Representations, San Juan, Puerto Rico, 2-4 May. 2016.

16. J.-H. Jacobsen, A. Smeulders, and E. Oyallon, “i-revnet: Deep invertible networks,” presented at the Sixth International Conference on Learning Representations, Vancouver, BC, Canada, 30 April - 3 May. 2018.

17. D. P. Kingma and P. Dhariwal, “Glow: Generative flow with invertible 1×1 convolutions,” Advances in Neural Information Processing Systems 31, 10236–10245 (2018).

18. M. Xiao, S. Zheng, C. Liu, Y. Wang, D. He, G. Ke, J. Bian, Z. Lin, and T.-Y. Liu, “Invertible image rescaling,” in European Conference on Computer Vision, (Springer, 2020), pp. 126–144.

19. R. Zhao, T. Liu, J. Xiao, D. P. Lun, and K.-M. Lam, “Invertible image decolorization,” IEEE Trans. on Image Process. 30, 6081–6095 (2021). [CrossRef]  

20. J. Jing, X. Deng, M. Xu, J. Wang, and Z. Guan, “HiNet: deep image hiding by invertible network,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, (IEEE, 2021), pp. 4733–4742.

21. A. Foi, “Clipped noisy images: Heteroskedastic modeling and practical denoising,” Signal Processing 89(12), 2609–2629 (2009). [CrossRef]  

22. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, (IEEE, 2016), pp. 770–778.

23. L. Ardizzone, C. Lüth, J. Kruse, C. Rother, and U. Köthe, “Guided image generation with conditional invertible neural networks,” arXiv, arXiv:1907.02392 (2019). [CrossRef]  

24. J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in European Conference on Computer Vision, (Springer, 2016), pp. 694–711.

25. K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” presented at the Third International Conference on Learning Representations, San Diego, CA, USA, 7-9 May. 2015.

26. E. Agustsson and R. Timofte, “Ntire 2017 challenge on single image super-resolution: Dataset and study,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, (IEEE, 2017), pp. 126–135.

27. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” presented at the Third International Conference on Learning Representations, San Diego, CA, USA, 7-9 May. 2015.

28. D. Park, D. U. Kang, J. Kim, and S. Y. Chun, “Multi-temporal recurrent neural networks for progressive non-uniform single image deblurring with incremental temporal training,” in European Conference on Computer Vision, (Springer, 2020), pp. 327–343.

29. C. Mou, Q. Wang, and J. Zhang, “Deep generalized unfolding networks for image restoration,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, (IEEE, 2022), pp. 17399–17410.

30. O. Kupyn, T. Martyniuk, J. Wu, and Z. Wang, “Deblurgan-v2: Deblurring (orders-of-magnitude) faster and better,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, (IEEE, 2019), pp. 8878–8887.

31. S.-J. Cho, S.-W. Ji, J.-P. Hong, S.-W. Jung, and S.-J. Ko, “Rethinking coarse-to-fine approach in single image deblurring,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, (IEEE, 2021), pp. 4641–4650.

32. S. W. Zamir, A. Arora, S. Khan, M. Hayat, F. S. Khan, M.-H. Yang, and L. Shao, “Multi-stage progressive image restoration,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, (IEEE, 2021), pp. 14821–14831.

33. F.-J. Tsai, Y.-T. Peng, Y.-Y. Lin, C.-C. Tsai, and C.-W. Lin, “Stripformer: Strip transformer for fast image deblurring,” in European Conference on Computer Vision, (Springer, 2022), pp. 146–162.

34. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

35. R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, “The unreasonable effectiveness of deep features as a perceptual metric,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2018), pp. 586–595.

36. J. Pan, D. Sun, H. Pfister, and M.-H. Yang, “Blind image deblurring using dark channel prior,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2016), pp. 1628–1636.
