Optica Publishing Group

Contrast enhancement method in aero thermal radiation images based on cyclic multi-scale illumination self-similarity and gradient perception regularization

Open Access

Abstract

In aerospace, thermal radiation severely degrades the imaging quality of infrared (IR) detectors, blurring scene information. Existing methods can effectively remove the intensity bias caused by the thermal radiation effect, but they are limited in their ability to enhance contrast and to correct locally or globally dense intensity. To address these limitations, we propose a contrast enhancement method based on a cyclic multi-scale illumination self-similarity and gradient perception regularization solver (CMIS-GPR). First, we propose to correct intensity bias by amplifying gradients. Specifically, we design a gradient perception regularization (GPR) solver that corrects intensity bias by directly decomposing the degraded image into a pair of high-contrast images, which contain no intensity bias and exhibit inverted intensity directions. However, we find that GPR fails in dense intensity areas because the scene gradients there are small. Second, to cope with dense intensity, we regard dense intensity bias as the sum of multiple slight intensity biases. We then construct a cyclic multi-scale illumination self-similarity (CMIS) model that uses multi-scale Gaussian filters and a structural similarity prior to remove the dense intensity layer by layer. Its result acts as a coarse correction for GPR, which need not be overly concerned with whether that result still has intensity residuals. Finally, the coarsely corrected result is input to the GPR module, which further corrects residual intensity bias by enhancing contrast. Extensive experiments on real and simulated data demonstrate the superiority of the proposed method.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Temperature fluctuations within the lens and other mechanical components of an IR detector cause intensity bias in the IR image [1-3]. In addition, temperature effects from external factors, such as targets and the environment, can also degrade the IR image with intensity bias. Such intensity bias is generally called the thermal radiation bias field. In aerospace, IR imaging systems are subjected to high-speed airflow, which leads to uneven temperature rise in the optical window and generates the aerospace thermal radiation effect [4-6]. The pixel values of the output image tend to be large in high temperature areas, or saturate at the center of the heat source, and tend to be small in low temperature areas [7]. Thermal radiation noise tends to exhibit locally smooth intensity and global intensity bias, and it is often regarded as low-frequency noise in the IR field. Overall, the thermal radiation effect severely degrades the imaging quality of the IR system, which creates difficulties for subsequent image processing tasks such as target detection and target tracking [8-11]. Therefore, it is important to counter the thermal radiation effect effectively.

To reduce the impact of thermal radiation effects on imaging quality, the intensity bias can be suppressed by physical means, such as selecting an appropriate detector angle, suitable optical materials, suitable optical lens designs, and reasonable cooling settings. However, these physical implementations are complex and expensive [9,12-15]. Many effective algorithms have been proposed by exploiting the small local intensity bias and large global intensity bias of thermal radiation noise. These methods use sparsity constraints or polynomial fitting to extract the low-frequency noise, and then subtract it from the degraded image to obtain the corrected image [2,4,7,16-20]. These methods can produce clear images by removing the intensity bias from the degraded image, but their performance still has limitations.

These existing methods aim at recovering the ideal image as faithfully as possible. First, even when the corrected image is very clear, the contrast cannot be raised further because the estimated intensity bias is smooth and does not include the scene contours. Specifically, the intensity bias is theoretically smooth, so the estimated intensity bias must be smooth as well. Consequently, the corrected image only becomes clear and gains contrast to a limited extent; the contrast is not enhanced further. In the IR image field, contrast often needs to be enhanced further even when the image carries no thermal radiation effect. Therefore, the estimated smooth intensity bias, though desirable, limits the achievable contrast improvement. In addition, if the parameters are not chosen correctly, estimating a smooth intensity bias easily leaves intensity residuals. Finally, when the intensity bias is slight overall but very strong locally, the degraded image exhibits locally dense intensity. When the intensity bias is dense, regardless of whether it is strong, the whole scene of the degraded image is obscured by the dense intensity. For such dense intensity cases, the robustness of existing fitting-based methods is insufficient. More properties of intensity bias are analyzed in detail in the simulated experiments of section 4.2.1.

In response to the above limitations, we propose a contrast enhancement method by investigating the structural-similarity constraint between the intensity bias and the degraded image, and by exploring a regularization solution technique. The method improves the quality of thermal radiation images in the aero field; the contributions include:

  • We propose the GPR solver, based on a basic structural-similarity correction model, to efficiently correct images with weak intensity bias. The GPR model does not require estimating an intensity bias map, as shown in Fig. 1.
  • We propose the CMIS module, placed in front of GPR, to coarsely correct a dense intensity bias map layer by layer. The CMIS module does not need to be overly concerned about whether the coarsely corrected results have intensity residuals.
  • We integrate the GPR and CMIS modules to effectively handle images with dense intensity bias.
  • Experiments show that the proposed method can cope with both slight intensity bias and dense intensity bias at the same time, which demonstrates its superiority over the comparison algorithms.

2. Related work

There are three types of methods for single-image intensity bias correction: polynomial-based fitting [2,4,7], gradient sparsity-based variational models [7,16,17], and methods combining variational models and polynomial fitting [7,17-20].

The basic mathematical model for intensity bias removal is as follows:

$$S=D+I+n$$
where $S$ is the observed degraded image, $D$ is the clear image, $I$ is the intensity bias map, and $n$ is the noise.
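The additive model can be illustrated with a small simulation. The image size, the Gaussian-shaped bias, and the noise level below are illustrative choices, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 64x64 example: clear scene D, smooth bias I, small noise n.
h, w = 64, 64
D = rng.random((h, w))                                  # stand-in clear image
yy, xx = np.mgrid[0:h, 0:w]
I = 0.5 * np.exp(-((xx - w / 2)**2 + (yy - h / 2)**2) / (2 * 20.0**2))
n = 0.01 * rng.standard_normal((h, w))                  # small sensor noise

S = D + I + n                                           # observed degraded image

# The smooth bias brightens the center of the observed image.
center = (slice(24, 40), slice(24, 40))
assert S[center].mean() > D[center].mean()
```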

In [2], Cao et al. described the intensity bias as the product of model parameters and bivariate polynomial terms. The method used a modified bilateral filter to remove the gradient of objects or noise, thereby optimizing the gradient of the degraded image. The model parameters are obtained from an error function between the derivatives of the intensity bias and the gradient of the degraded image.

Partial differential equation-based models and variational methods have achieved excellent results in the field of image restoration, and these results are widely used in other image-related tasks [21,22]. The fidelity between the clear image and the observed image is often used in regularization models. In [16], Li et al. applied the smoothness of the intensity and the sparsity of the clear image to the problem model, which is as follows:

$$(\hat{D}, \hat{I})=\underset{D, I}{\arg \min } \frac{1}{2}\|S-D-I\|_2^2+\alpha \|\nabla I\|_2^2+\beta \|\nabla D\|_1,$$

In [17], Liu et al. constructed two sub-models for the problem by combining the fidelity term between the clear image and the observed image with the $\ell _p$ norm of the clear image, as in Eqs. (3) and (4). Bivariate polynomial fitting is incorporated into each iteration to estimate the smooth intensity bias. The sub-models are as follows:

$$\min _{D, I} \eta\|S-D-I\|_2^2+\alpha\|\nabla D\|_{p}^p$$
$$\min _I \eta\|S-D-I\|_2^2$$

In [7], Shi et al. constructed a basic regularization model based on a gradient sparsity constraint. Chebyshev polynomial surface fitting is first introduced to fit the intensity bias and to overcome the ill-posed matrix problem of high-order bivariate polynomial fitting. Finally, multi-scale iteration is applied to speed up the iterative optimization. Eq. (5) shows the model formulation, where $Wa$ represents the fitted intensity bias:

$$\begin{aligned} (\hat{D}, \hat{I}) & =\underset{D, I}{\arg \min }\|S-D-I\|_2^2+\alpha\|\nabla S-\nabla D-\nabla I\|_2^2 +\beta \|\nabla D\|_0+\gamma \|\nabla I-\nabla Wa\|_2^2, \end{aligned}$$

In [4], Hong et al. constructed a progressive correction model based on a bilateral filter and Bézier surface fitting, combined with the gradient orientation constraint of the thermal radiation bias field and the corrected image. The method first obtains the initial intensity bias by constraining the gradient orientation of the thermal radiation bias field and the corrected image. Then, a more accurate intensity bias is fitted using Bézier fitting. Finally, the intermediate clear image obtained in the current iteration is used as the degraded image for the next iteration. Equation (6) shows the model formulation, where $R(:)$ represents the gradient orientation constraint:

$$\left\{\begin{array}{l} \hat{I}^i=\arg \min \eta\left\|\hat{D}^{i-1}-\hat{D}^i-\hat{I}^i\right\|_2^2+\alpha R\left(\hat{D}^i\right)+\beta R\left(\hat{I}^i\right), \\ I^i=\arg \min \left\|I^i-\hat{I}^i\right\|=\text{B{\'e}zier}_{m, n}\left(\hat{I}^i\right), \\ \hat{D}^i=\hat{D}^{i-1}-\gamma I^i . \end{array}\right.$$

In our previous study [23], the intensity bias was successfully corrected by constraining the similarity of the intensity bias with the current degraded image and the adjacent-frame degraded image in an IR sequence. However, the intensity bias cannot be removed completely when only its similarity with the current degraded image is constrained in a single-image application. Due to limited space, we describe the above models only briefly; please refer to the related papers for specific details.

The above algorithms obtain clear images by subtracting the estimated smooth intensity bias from the original degraded image, which only makes the image clearer and improves contrast to a certain extent; they do not correct the intensity bias from the perspective of improving contrast, so their ability to improve contrast is limited. In addition, these methods must constrain the smoothness of the intensity bias, so the estimated intensity map does not necessarily match the true one, and the corrected image is prone to carry residual intensity. Last, these methods have limited ability to correct dense intensity. In this paper, we investigate these problems; the method is described in detail in section 3.

3. Methodology

3.1 Base model

We have conducted extensive experiments on a series of IR video sequences with intensity bias. After subtracting between frames, the intensity bias always disappears. This means that the intensity bias between frames can be seen as fixed-pattern noise whose main component is independent of the clear image. As mentioned in section 2, the basic mathematical model of intensity bias is additive. Mathematically, we ignore the noise term and simplify it to the following expression:

$$S=D+I.$$
where $S$ and $D$ are the degraded image and the corrected image without intensity bias, respectively, and $I$ denotes the intensity bias.

In our previous study [23], we proposed a temporal method to remove fixed intensity bias in IR sequences, which demonstrated that the similarity between intensity and degraded images is effective. In this paper, we focus on single image intensity bias correction, so we will only consider the structural-similarity between intensity and degraded image in a single image. Here, we introduce a partial model of [23] as the base model for this paper, its mathematical expression is as follows:

$$\begin{aligned} & \min _D \frac{1}{2}\|S-D\|_2^2+\frac{1}{2}\|S-D-S\|_2^2+\lambda_1\|\nabla(S-D)\|_1 \\ & =\min _D \frac{1}{2}\|S-D\|_2^2+\frac{1}{2}\|D\|_2^2+\lambda_1\|\nabla(S-D)\|_1. \end{aligned}$$
where $\nabla$ denotes the gradient operator matrices along rows and columns. $\lambda _1$ controls the fineness of the intensity map $I$: the larger $\lambda _1$, the smoother $I$ is. In Eq. (8), the first term means $D$ should be similar to $S$ in structure. The second term means we want $I$ to be similar to $S$ in structure. The combination of these two terms produces a good constraint effect according to our previous work [23]. The last term has two aims: one is to control the small textures in $I$, and the other is to facilitate the use of iterative solutions to refine $D$ and $I$.

The above objective function seeks an optimal $I$ that perfectly matches the strength and shape of the ideal intensity bias. However, according to our previous study, although this objective function can correct the intensity to a certain extent with a conventional model solver, there are cases where the extracted intensity map is not smooth enough or the corrected image contains intensity residuals. It is for this reason that inter-frame properties were taken into account in the previous study. Clearly, it is not reliable to continue using this model and its conventional solution for intensity bias correction on a single image.

In addition, as mentioned in the related work section, existing methods cannot raise the image contrast further while removing the intensity bias. In the image enhancement field, clear IR images often need further contrast enhancement to increase their visual impact. Therefore, we intend to explore a method that corrects intensity bias through contrast enhancement, by improving the solver of the objective function in Eq. (8). For easy understanding, we first explain the conventional solution process.

3.1.1 Conventional regularization solver

With the conventional regularization solver, solving Eq. (8) yields a corrected image and a low-frequency image. Equation (8) contains one $\ell _1$ norm term, which is nondifferentiable, so an effective mathematical method is needed. In [22], Liang et al. used the alternating direction method of multipliers (ADMM) to solve a variational model. In our paper, we adopt ADMM to optimize Eq. (8).

First, we introduce an auxiliary variable $C$ and rewrite the objective function as follows:

$$\begin{aligned} \min _D \frac{1}{2}\|S-D\|_2^2+\frac{1}{2}\|D\|_2^2+\lambda_1\|\nabla(S-D)\|_1, \end{aligned}$$
$$\begin{aligned}\text{ s.t. } \quad C=\nabla(S-D). \end{aligned}$$

Then, the augmented Lagrangian function for Eq. (9) is:

$$\begin{gathered} \mathcal{L}(D, C, y)=\frac{1}{2}\|S-D\|_2^2+\frac{1}{2}\|D\|_2^2+\lambda_1\|C\|_1 +(C-\nabla(S-D))^{\top} y+\frac{\rho}{2}\left(\|C-\nabla(S-D)\|_2^2\right), \end{gathered}$$
where $y$ is the Lagrangian dual variable. Our aim is to minimize the subproblems and maximize the dual problem at each iteration $k$.

Step 1: Solving $D^{k+1}$. We omit the terms not related to $D$ in Eq. (11); the objective function in $D^{k+1}$ is then a quadratic programming problem. The $D^{k+1}$ estimate corresponds to minimizing:

$$\begin{aligned} & {D}^{k+1}=\arg \min _{{D}}\left\{\frac{1}{2}\|S-D\|_2^2+\frac{1}{2}\|D\|_2^2\right. \left.+(C-\nabla(S-D))^{\top} y+\frac{\rho}{2}\left(\|C-\nabla(S-D)\|_2^2\right)\right\}. \end{aligned}$$

Converting Eq. (12) to the Fourier domain gives

$${D}^{k+1}=\mathrm{fft}^{-1}\left(\frac{\mathrm{fft}({S})+\mathrm{fft}({S}) \cdot \rho^k \cdot \mathrm{fft}\left(\nabla_{x y}\right)+{f}}{2+\rho^k \cdot \mathrm{fft}\left(\nabla_{x y}\right)}\right),$$
where
$${f}=\mathrm{fft}^*\left(\nabla_x\right) \cdot f_x^k+\mathrm{fft}^*\left(\nabla_y\right) \cdot f_y^k,$$
$$\begin{aligned} & f_x^k=\mathrm{fft}\left(\rho^k\left({C}_{2,1}^{{k}}+\frac{{y}_{2,1}^{{k}}}{\rho^k}\right)\right), \\ & f_y^k=\mathrm{fft}\left(\rho^k\left({C}_{2,2}^{{k}}+\frac{{y}_{2,2}^{{k}}}{\rho^k}\right)\right), \end{aligned}$$
$$\mathrm{fft}\left(\nabla_{x y}\right)=\mathrm{fft}^*\left(\nabla_x\right)\cdot \mathrm{fft}\left(\nabla_x\right)+\mathrm{fft}^*\left(\nabla_y\right) \cdot \mathrm{fft}\left(\nabla_y\right).$$

${C}_{2,1}^{{k}}$ and ${C}_{2,2}^{{k}}$ correspond to the components of $C^{k}$ in the $x$ and $y$ directions, respectively. The operators $\mathrm{fft}$, ${\rm fft}^\ast$ and ${\rm fft}^{-1}$ denote the Fourier transform, the conjugate of the Fourier transform, and the inverse Fourier transform, respectively.
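The closed-form $D$-update can be sketched in NumPy as a direct transcription of Eqs. (12)-(16). The forward-difference gradient kernels with periodic boundaries, and the stacking of $C$ and $y$ as two-channel arrays, are our assumptions; the paper does not specify its discretization:

```python
import numpy as np

def d_update(S, C, y, rho):
    """One D-step of the ADMM solver (sketch of Eqs. (12)-(16)).

    C and y are arrays of shape (2, H, W) holding the x/y components of the
    auxiliary and dual variables; gradients are forward differences with
    periodic boundary, so the whole step is solved in the Fourier domain.
    """
    h, w = S.shape
    # Forward-difference kernels d_x = [-1, 1] and d_y = [-1, 1]^T as OTFs.
    dx = np.zeros((h, w)); dx[0, 0], dx[0, 1] = -1.0, 1.0
    dy = np.zeros((h, w)); dy[0, 0], dy[1, 0] = -1.0, 1.0
    Fx, Fy = np.fft.fft2(dx), np.fft.fft2(dy)
    lap = np.conj(Fx) * Fx + np.conj(Fy) * Fy            # fft(nabla_xy)

    fxk = np.fft.fft2(rho * (C[0] + y[0] / rho))         # f_x^k
    fyk = np.fft.fft2(rho * (C[1] + y[1] / rho))         # f_y^k
    f = np.conj(Fx) * fxk + np.conj(Fy) * fyk            # f

    FS = np.fft.fft2(S)
    D = np.fft.ifft2((FS + rho * FS * lap + f) / (2.0 + rho * lap))
    return D.real

# Sanity check: for a constant image with C = y = 0, the two quadratic terms
# balance and the update returns S / 2.
S = np.full((8, 8), 4.0)
D = d_update(S, np.zeros((2, 8, 8)), np.zeros((2, 8, 8)), rho=1.0)
assert np.allclose(D, 2.0)
```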

Step 2: Solving $C^{k+1}$.

$$\begin{aligned} C^{k+1}=\arg \min _C\left\{\frac{2 \lambda_1}{\rho^k}\|C\|_1\right. \left.+\left\|C-\nabla\left(S-D^{k+1}\right)+\frac{y^k}{\rho^k}\right\|_2^2\right\}, \end{aligned}$$

$C^{k+1}$ is usually solved through a soft-shrinkage operation. Here we get

$$C^{k+1}=\mathcal{T}_{\lambda_1 / \rho^k}\left(\nabla (S-D^{k+1})-y^k / \rho^k\right),$$
where $\mathcal {T}_{\lambda _1 / \rho ^k}$ denotes the soft-shrinkage operator.
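The soft-shrinkage operator has a standard closed form, $\mathcal{T}_{\tau}(v)=\operatorname{sign}(v)\max(|v|-\tau, 0)$, which can be written in a few lines:

```python
import numpy as np

def soft_shrink(v, tau):
    """Soft-thresholding T_tau(v) = sign(v) * max(|v| - tau, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

v = np.array([-2.0, -0.3, 0.0, 0.5, 3.0])
# Values within [-tau, tau] are zeroed; the rest shrink toward zero by tau.
assert np.allclose(soft_shrink(v, 1.0), [-1.0, 0.0, 0.0, 0.0, 2.0])
```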

Step 3: Update $y^{k+1}$ and $\rho ^{k+1}$.

$$\begin{aligned} & y^{k+1}=y^k+\rho^k\left(C^k-\nabla\left(S-D^{k+1}\right)\right), \\ & \rho^{k+1}=p \cdot \rho^k, \end{aligned}$$
where $p$ is the update step for $\rho$.

Step 4: Output corrected image $D$.

The estimated $I$ is obtained by $I=S-D$. With the above solver, the corrected image and the intensity map are extracted. Generally, the corrected image can be readjusted by weighting the intensity map as follows:

$$D=S-g \cdot I.$$

Inverted intensity bias does not occur as long as the range of $g$ is appropriate.

3.1.2 Gradient perception regularization solver

In section 3.1, we noted that the conventional solver yields unsatisfactory results for Eq. (8) and that enhancing contrast is necessary. The purpose of intensity bias correction is to obtain a high-contrast image without brightness bias. Figure 2 shows two images with exposed details and uniform brightness of inverted directions being added together: the result is an image with intensity bias, in which some details are obscured by the intensity. If the reverse process of Fig. 2 can be achieved, we no longer need to correct this intensity bias by estimating it. Inspired by this phenomenon, we conceived that an area with obvious intensity noise could likewise be decomposed into a pair of areas with inverted intensity directions, exposing the details of that area like the input images of Fig. 2. After decomposition, the brightness bias is reduced. Instead of extracting the intensity from the degraded image, we amplify the gradient of the degraded image to correct the bias. For degraded images with weak intensity bias, even though this low-frequency noise reduces the contrast of the image, the gradient at a specific pixel or in its neighborhood still exists. In other words, the gradient of a degraded image can be perceived despite its low contrast. Therefore, we propose a gradient perception regularization (GPR) solver to implement the reverse process of Fig. 2.

The gradient of $S-D^{k+1}$, i.e., $\nabla \left (S-D^{k+1}\right )$, is used to calculate the iteration parameters in Eqs. (17) to (19). We replace the gradient of $S-D^{k+1}$ with the gradient of $D^{k+1}$ in Eqs. (17) to (19). This lets the gradients of $S-D^{k+1}$ and $D^{k+1}$ form an adversarial property, so the fitting parameters calculated by Eqs. (17) to (19) result in a pair of images with inverted intensity directions. After suitable iterations, the degraded image is decomposed into a pair of images that contain no intensity bias, as in Fig. 1. The idea of gradient perception regularization is simple but effective. In section 4.1, we perform detailed ablation experiments for Eq. (8) on the structural-similarity term and compare the results of the proposed GPR solver with those of the conventional regularization solver.
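The modification can be illustrated with a small sketch. This is our reading of the substitution described above, not the authors' code: the only change from the conventional solver is which gradient feeds the $C$-update and the dual update. The forward-difference gradient with periodic boundary is an illustrative assumption:

```python
import numpy as np

def grad(img):
    """Forward differences with periodic boundary, stacked as (2, H, W)."""
    gx = np.roll(img, -1, axis=1) - img
    gy = np.roll(img, -1, axis=0) - img
    return np.stack([gx, gy])

def soft_shrink(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def step_conventional(S, D, y, rho, lam):
    """C-update and dual update (Eqs. (18)-(19)) driven by grad(S - D)."""
    g = grad(S - D)
    C = soft_shrink(g - y / rho, lam / rho)
    y = y + rho * (C - g)
    return C, y

def step_gpr(S, D, y, rho, lam):
    """Same steps, but driven by grad(D): the GPR substitution, which drives
    the pair (D, S - D) toward inverted intensity directions."""
    g = grad(D)
    C = soft_shrink(g - y / rho, lam / rho)
    y = y + rho * (C - g)
    return C, y

rng = np.random.default_rng(1)
S, D = rng.random((8, 8)), rng.random((8, 8))
y0 = np.zeros((2, 8, 8))
C1, _ = step_conventional(S, D, y0, rho=1.0, lam=0.1)
C2, _ = step_gpr(S, D, y0, rho=1.0, lam=0.1)
assert C1.shape == (2, 8, 8) and not np.allclose(C1, C2)
```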


Fig. 1. The real degraded uniform surface is decomposed into a pair of images with inverted intensity characteristics by GPR solver. (a) Uniform surface IR image with intensity bias. (b) Corrected image. (c) Degraded image minus corrected image directly.



Fig. 2. Adding two images with inverted pixel features and uniform intensity yields a low-contrast image with intensity bias. (b) is the visual representation of (a). Pixel colors closer to blue represent smaller pixel values, and colors closer to yellow represent larger pixel values.


Finally, automatic gamma correction [24] is introduced to improve the tone of image $D$:

$$\tilde{D}=D^{\log (h) / \log \left(\frac{1}{n}\sum_{i=1}^n D(i)\right)},$$
where $n$ refers to the number of pixels in the matrix $D$ and $h$ is the expected brightness. After gamma correction, the contrast can be further adjusted by cropping the maximum or minimum pixel values.
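The automatic gamma step amounts to choosing the exponent $\log(h)/\log(\operatorname{mean}(D))$, which maps the mean brightness of $D$ (scaled to $(0,1]$) onto the target $h$. A minimal sketch, with the clipping floor as an added numerical safeguard:

```python
import numpy as np

def auto_gamma(D, h=1 / 2.5):
    """Automatic gamma (Eq. (21)): raise D, scaled to (0, 1], to the power
    log(h) / log(mean(D)), pushing the mean brightness toward h."""
    D = np.clip(D, 1e-6, 1.0)            # avoid log(0) / 0**gamma issues
    gamma = np.log(h) / np.log(D.mean())
    return D ** gamma

img = np.full((4, 4), 0.04)              # dark uniform patch
out = auto_gamma(img, h=1 / 2.5)
assert np.isclose(out.mean(), 1 / 2.5)   # mean brightness mapped to h
```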

When the input degraded images have weak intensity bias, the bias can be corrected directly by the GPR module because their gradients can be perceived. When the intensity bias is dense, the performance of GPR is limited due to the low gradient in local or global areas of the degraded image. As shown in Fig. 3, when the intensity is locally or globally dense, residual intensity or model failure may occur. We improve the applicability of GPR for this case in the next section.


Fig. 3. The limitation of GPR. The left image of (a) shows the degraded image with local dense intensity, which is a drastically varying intensity bias map. The right image of (a) shows the corrected result of the GPR module. The left image of (b) shows the degraded image with global dense intensity bias, and the right image of (b) shows the corrected result of the GPR module.


3.2 Cyclic multi-scale illumination self-similarity

In section 3.1.2, we pointed out that the proposed GPR model cannot correct dense intensity efficiently. In this section, we design a new module to address this problem and place it in front of the GPR module for coarse correction. As mentioned in our previous study [23] and in section 2, using only the similarity constraint between the intensity bias and the degraded image in a single image may leave intensity residuals in the corrected image or scene information in the estimated intensity map. Therefore, more reference information is required.

In the aerial field [25], the mask dodging technique removes uneven illumination using a low-pass filter. Here, we apply multi-scale Gaussian filters to the degraded image to obtain multiple low-frequency images, which can be regarded as rough reference illuminations. The intensity bias in the original degraded image has some similarity with these low-frequency images. According to our previous study, this similarity can be constrained by the $\ell _2$ norm, so the multi-scale illumination can serve as reference information. In addition, when the intensity is very dense, the performance of intensity removal decreases and the possibility of intensity residuals increases. We therefore consider a dense intensity bias map as the sum of multiple slight intensity biases, i.e.

$$I=I_1+I_2+\cdots+I_n.$$

Therefore, the image corrupted by dense intensity bias can be corrected by cyclically removing the slight intensity bias. Summarizing the above scheme, a cyclic multi-scale self-similar model is finally designed:

$$\min _{D^j} \frac{1}{2}\left\|S_{in}^{j-1}-D^j\right\|_2^2+\sum_{m=1}^n \frac{1}{2}\left\|S_{refm}^{j-1}-\left(S_{in}^{j-1}-D^j\right)\right\|_2^2+\lambda_2\left\|\nabla\left(S_{in}^{j-1}-D^j\right)\right\|_1.$$

This model clearly evolves from Eq. (8). $j$ is the loop index and $D^{j}$ is the corrected image obtained in each loop. $S_{in}^{j-1}$ is the input image of the current loop, i.e., the output image $D^{j-1}$ of the previous loop. $n$ is the number of scales, and $S_{refm}^{j-1}$ is the multi-scale illumination obtained from $S_{in}^{j-1}$. $\lambda _2$ is the constraint factor, which constrains the smoothness of the intensity bias extracted in each loop. Since Eq. (23) is very similar to Eq. (8), its solver can easily be obtained following the ADMM process of section 3.1.1, so we do not describe it in detail here.
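The cyclic layer-by-layer scheme can be sketched as follows. This is a crude stand-in for solving Eq. (23) — the actual model solves an ADMM subproblem per loop — and the filter sigmas, loop count, and step size are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_illumination(S, sigmas=(15, 30, 60)):
    """Multi-scale Gaussian reference illuminations S_ref_m
    (the sigma values are illustrative; the paper does not fix them here)."""
    return [gaussian_filter(S, sigma=s) for s in sigmas]

def cmis_coarse(S, loops=3, step=0.5):
    """Very simplified CMIS-style loop: each pass removes a slight bias
    estimated from the multi-scale illuminations, so a dense bias is
    peeled away layer by layer (I = I_1 + ... + I_n, Eq. (22))."""
    D = S.copy()
    for _ in range(loops):
        refs = multiscale_illumination(D)
        bias = np.mean(refs, axis=0)     # rough slight-bias estimate
        bias -= bias.mean()              # preserve overall brightness
        D = D - step * bias
    return D

# A textured image corrupted by a smooth bump: the low-frequency variation
# shrinks after the cyclic coarse correction.
yy, xx = np.mgrid[0:64, 0:64]
bump = np.exp(-((xx - 32)**2 + (yy - 32)**2) / (2 * 12.0**2))
rng = np.random.default_rng(0)
S = 0.1 * rng.random((64, 64)) + bump
D = cmis_coarse(S)
assert gaussian_filter(D, 8).std() < gaussian_filter(S, 8).std()
```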

The CMIS stage acts as a coarse correction, removing as much intensity bias as possible. By combining GPR and CMIS, however, we do not need to care whether the CMIS results have intensity residuals, because the GPR module further corrects the residual intensity bias by improving the contrast (see the edges of Fig. 10(g) and (h)). The whole algorithm flow is shown in Algorithm 1.

4. Experiments

4.1 Experiments on real data

Section 3.1 designed the GPR module to correct weak intensity bias from the perspective of contrast enhancement, correcting the bias by image decomposition. In this section, we perform ablation experiments and qualitative and quantitative analyses of the GPR module on real data.

4.1.1 Experimental details

The experimental details are as follows:

1) Datasets. The experiments for GPR are based on real data; the samples are shown in the first column of Fig. 7. The first two rows of Fig. 7 have 320×256 resolution, the third row is from the Cuprite dataset [26] with 640×512 resolution, and the fourth row is a uniform surface of 640×512 resolution.

2) Comparison methods. The comparison methods are from Cao et al. [2], Liu et al. [17], Shi et al. [7] and Hong et al. [4], respectively.

3) Parameter settings. For GPR, the parameter $k$ in section 3.1.2 needs to be set carefully: too small or too large a value is detrimental to forming the adversarial property between the gradients of $S-D^{k+1}$ and $D^{k+1}$. We use the ICV metric to evaluate the impact of $k$ on the uniformity of intensity. In our practice, the desired results are often obtained when $k$ is set to around 10 (see Fig. 4); here we set it to 10. $\lambda _1$ is the constraint coefficient for constructing the base model; in the GPR module, its value is not an important factor affecting the visual result, and here we set it to 0.1. For aero images, $h$ in Eq. (21) is set to 1/2.5. For the uniform surface, the gamma correction function is not used due to the absence of scene information.


Fig. 4. Iterative analysis of parameter $k$ in section 3.1.2. Data 1 is a complex scene and Data 2 is a uniform surface. Three regions are selected on each data set and their ICV values are tested at different $k$. The larger the ICV, the more uniform the intensity.


4) Metrics. Contrast enhancement performance is evaluated with the boost of the enhancement measure evaluation (EME) [27], the sum of modulus of gray difference (SMD) [28], the product of modulus of gray difference (SMD2) [29], and the inverse coefficient of variation (ICV) [30].

Here, EME is used to measure the contrast of an image, and a larger value tends to represent better image quality. Its mathematical formula is as follows:

$$EME_{k_1, k_2}=\frac{1}{k_1 k_2} \sum_{l=1}^{k_1} \sum_{k=1}^{k_2} 20 \ln \frac{I_{\max ; k, l}}{I_{\min ; k, l}+c},$$
where $k_1$ and $k_2$ are the numbers of blocks along the image rows and columns, respectively; both are set to 8 in our evaluation. $I_{\max ; k, l}$ and $I_{\min ; k, l}$ denote the maximum and minimum pixel values in a sub-block. $c$ is a very small constant used to avoid division by 0; it is usually set to 0.0001.
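A direct implementation of the block-wise EME measure (assuming the image dimensions divide evenly into the $k_1 \times k_2$ grid, as in the 8×8 setting above):

```python
import numpy as np

def eme(img, k1=8, k2=8, c=1e-4):
    """EME (Eq. (24)): mean over k1 x k2 blocks of 20 * ln(max / (min + c))."""
    h, w = img.shape
    bh, bw = h // k1, w // k2
    total = 0.0
    for i in range(k1):
        for j in range(k2):
            block = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            total += 20.0 * np.log(block.max() / (block.min() + c))
    return total / (k1 * k2)

flat = np.full((64, 64), 0.5)                          # no contrast
checker = np.indices((64, 64)).sum(0) % 2 * 0.9 + 0.05 # high contrast
assert eme(checker) > eme(flat)
```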

SMD and SMD2 are used to evaluate the sharpness of an image; larger values tend to represent clearer images. SMD is given by the following formula:

$$SMD(I)=\sum_y\sum_x(|I(x, y)-I(x, y-1)|+|I(x, y)-I(x+1, y)|).$$

SMD2 is denoted by following formula:

$$SMD2(I)=\sum_y\sum_x|I(x, y)-I(x+1, y)||I(x, y)-I(x, y+1)|.$$

Here $I$ refers to the image being evaluated, and $x$ and $y$ refer to the horizontal and vertical pixel positions, respectively. SMD2 is a modified version of SMD, and both are tested to evaluate overall performance.
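Both sharpness metrics are straightforward to compute. A sketch using valid-region sums, since Eqs. (25) and (26) do not specify boundary handling:

```python
import numpy as np

def smd(img):
    """Sum of modulus of gray difference (Eq. (25)), valid-region sums."""
    dy = np.abs(img[:, 1:] - img[:, :-1])   # |I(x, y) - I(x, y-1)|
    dx = np.abs(img[1:, :] - img[:-1, :])   # |I(x, y) - I(x+1, y)|
    return dy.sum() + dx.sum()

def smd2(img):
    """Product form SMD2 (Eq. (26)) over the common valid region."""
    dx = np.abs(img[1:, :-1] - img[:-1, :-1])   # vertical difference
    dy = np.abs(img[:-1, 1:] - img[:-1, :-1])   # horizontal difference
    return (dx * dy).sum()

flat = np.ones((8, 8))
ramp = np.tile(np.arange(8.0), (8, 1))      # varies horizontally only
assert smd(flat) == 0 and smd2(flat) == 0
assert smd(ramp) > 0
assert smd2(ramp) == 0                      # no vertical variation, so product is 0
```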

ICV is used to evaluate the uniformity of intensity, which can be obtained from the following mathematical equation:

$$\mathrm{ICV}=\frac{R_{mean(a)}}{R_{{std(a)}}},$$
where $R_{mean(a)}$ and $R_{{std(a)}}$ are the mean and standard deviation of the pixel intensities in patch $a$. Larger values tend to represent more uniform image intensity. Since the standard deviation of complex scenes is relatively large, it is difficult to evaluate ICV comprehensively. Hong et al. use smooth regions to evaluate ICV; here we select three regions (see Fig. 7) in the uniform surface data for the ICV evaluation.
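The ICV of a patch is simply its mean divided by its standard deviation; a smoother patch therefore scores higher (patch contents below are illustrative):

```python
import numpy as np

def icv(patch):
    """Inverse coefficient of variation (Eq. (27)): mean / std of a patch."""
    return patch.mean() / patch.std()

rng = np.random.default_rng(0)
smooth = 0.5 + 0.01 * rng.standard_normal((16, 16))  # near-uniform patch
noisy  = 0.5 + 0.10 * rng.standard_normal((16, 16))  # strongly varying patch
assert icv(smooth) > icv(noisy)   # more uniform patch scores higher
```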

4.1.2 Ablation experiments

Figure 5 shows the ablation experiment (Eq. (8)) for the structural-similarity term between the intensity map and the degraded image, based on the conventional regularization solver. From Fig. 5(b) and the left image of Fig. 5(c), it can be seen that the similarity between the intensity and the degraded image is valid, which was already confirmed by our previous study and provides a theoretical basis for the multi-scale illumination similarity term in section 3.2. Meanwhile, the estimated intensity bias (right image of Fig. 5(c)) contains scene information because Eq. (8) considers only one piece of reference information (the similarity between the intensity bias and the degraded image). In conclusion, under the conventional solver, it is difficult to get the desired result with insufficient reference information.


Fig. 5. Validation of the structural-similarity term based on the conventional regularization solver for Eq. (8). (a) Original degraded image. (b) Corrected result w/o the structural-similarity term. (c) Results w/ the structural-similarity term. The left image of (c) is the corrected result, and the right image of (c) is the estimated intensity bias.


Figure 6 shows the characteristics of image decomposition under the GPR solver and compares the impact of the structural-similarity term on the GPR results for Eq. (8). It can be seen that with the GPR solver, the degraded image is no longer decomposed into a regular smooth intensity bias and a clear image, but into a pair of images with inverted intensity directions. By this design, we skip the study of the smoothness and validity of the intensity bias and directly obtain a pair of intensity-uniform images. As can be seen from the final corrected images in Fig. 6(a) and (b), the intensity is more uniform when the model incorporates the structural-similarity term. Comparing Fig. 5(c) and Fig. 6(c) shows the difference between the conventional solver and the GPR solver.


Fig. 6. Comparison of (a) w/o structural-similarity term and (b) w/ structural-similarity term based on GPR solver for Eq. (8). (a) and (b) contain two images respectively, which represent a pair of maps with inverted intensity directions obtained by GPR module. The left image represents the $I$ obtained by the GPR module decomposition (Step 4), and the right image is the final corrected image.


The above ablation experiments verify the validity of the similarity between intensity bias and degraded image, and also verify that the GPR module can correct the intensity bias and improve contrast. In the next section, we will verify the validity of the proposed GPR model through qualitative and quantitative tests.

4.1.3 Qualitative and quantitative experiments

In this section, we validate the effectiveness of the GPR module and the comparison methods on real-world data. For the first aero image in Fig. 7, all methods work for intensity bias correction. Our result and Hong's are close in terms of removing the global intensity bias, but the details of our result look sharper. The other three comparison algorithms improve the intensity bias, but the details of their images are not prominent. As mentioned in the related work section, the reason is that the comparison algorithms obtain corrected images by discarding the smooth intensity bias from the degraded images, and thus they aim to restore the original scene. Instead of extracting a smooth intensity bias, we directly treat intensity bias correction as a contrast enhancement problem. The proposed GPR solver decomposes the degraded image into a pair of images with inverted intensity via the idea of gradient magnification. The last image in Fig. 7 shows a uniform surface without any scene, and it can be seen that the proposed method removes the local and global intensity bias better. Next, quantitative metrics are used to verify the improvement in contrast and sharpness.


Fig. 7. Comparison among different methods through real data. (a) Degraded image. (b) Cao et al. [2] (c) Liu et al. [17] (d) Shi et al. [7] (e) Hong et al. [4] (f) GPR.


We evaluate the corrected results with the EME, SMD, SMD2 and ICV metrics; the results are shown in Table 1. EME jointly considers the global and local contrast of an image, and in Table 1 it refers to the contrast boost of the corrected image over the original degraded image. It can be seen that all EME values except ours and that of the third image of Shi et al. are less than 1. The global and local intensity bias of the degraded image is too large, and the comparison algorithms correct it by removing the bias between local/adjacent pixels, so their EME improvement tends to fall below 1. The proposed method instead enhances the gradients of local regions in the image, so its EME improvement tends to exceed 1. As a result, the contrast improvement is prominent.
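For reference, EME can be sketched as a blockwise log-ratio of local maxima to minima. This is a minimal Python/NumPy sketch; the block grid and the constant `c` are illustrative choices, not the settings used for Table 1.

```python
import numpy as np

def eme(img, blocks=(8, 8), c=1e-4):
    """Blockwise EME: mean of 20*ln(max/(min+c)) over a blocks[0] x blocks[1] grid.

    Sketch of the usual definition; grid size and c are illustrative.
    """
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    bh, bw = h // blocks[0], w // blocks[1]
    vals = []
    for i in range(blocks[0]):
        for j in range(blocks[1]):
            block = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            vals.append(20.0 * np.log(block.max() / (block.min() + c)))
    return float(np.mean(vals))
```

A flat image scores near zero, while an image with strong local variation scores higher, which is why a corrected-to-degraded EME ratio above 1 indicates a contrast boost.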


Table 1. Comparison of metrics for real data (Fig. 7)

To further verify the effectiveness of our method, we compare the pixel difference values along the 200th row and the 128th column of the second image of Fig. 7 using a difference operator. The difference curves are shown in Fig. 8. As seen in the zoomed area, the difference values of the proposed algorithm along the 128th column and the 200th row are larger than those of the degraded image and the other comparison algorithms, whose difference values are generally smaller than those of the degraded image. This confirms the above analysis; therefore, the EME metric demonstrates the effectiveness of the GPR module for contrast improvement.
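The row/column comparison of Fig. 8 can be reproduced with a simple first-difference operator. A Python/NumPy sketch follows; note the indices here are 0-based, whereas the text counts rows and columns from 1.

```python
import numpy as np

def profile_differences(img, row=200, col=128):
    """Absolute first differences along one row and one column of an image.

    Larger differences indicate stronger local gradients (higher contrast).
    """
    img = np.asarray(img, dtype=np.float64)
    row_diff = np.abs(np.diff(img[row, :]))   # differences along the chosen row
    col_diff = np.abs(np.diff(img[:, col]))   # differences along the chosen column
    return row_diff, col_diff
```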


Fig. 8. Comparison of pixel difference values in the 200th row and 128th column based on the second image of Fig. 7.


For SMD, Hong et al. achieve the highest value, which confirms the effectiveness of Hong's method. Our SMD is lower than Hong's, but our SMD2 is higher. SMD2 is an improved version of SMD that is more sensitive to gradients; this shows that the proposed method improves the sharpness of degraded images. ICV is used to evaluate the uniformity of brightness. We calculated the average values over three regions (red boxes in Fig. 7), and the proposed GPR model obtains the best value. Combined with Fig. 7, Hong et al. obtain a good result for global bias correction, but their local bias correction is weaker than ours, so their ICV index is lower.
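Hedged Python/NumPy sketches of the three metrics, following the common definitions; the exact region choices and normalizations used for Table 1 may differ.

```python
import numpy as np

def smd(img):
    """Sum of absolute first differences (sharpness; larger is sharper)."""
    img = np.asarray(img, dtype=np.float64)
    return float(np.abs(np.diff(img, axis=1)).sum()
                 + np.abs(np.diff(img, axis=0)).sum())

def smd2(img):
    """Product-of-gradients variant of SMD; more sensitive to gradients."""
    img = np.asarray(img, dtype=np.float64)
    dx = np.abs(np.diff(img, axis=0))[:, :-1]
    dy = np.abs(np.diff(img, axis=1))[:-1, :]
    return float((dx * dy).sum())

def icv(region):
    """Mean/std of a nominally flat region; larger means more uniform intensity."""
    region = np.asarray(region, dtype=np.float64)
    return float(region.mean() / (region.std() + 1e-12))
```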

Overall, the effectiveness of the proposed method can be seen from the above qualitative and quantitative analyses. We also provide Visualization 1 to demonstrate the effect of the proposed GPR module more intuitively.

4.2 Experiments on simulated data

Section 3.2 designed the CMIS module to cope with dense intensity bias. In this section, we test the effectiveness of the CMIS module and the final CMIS-GPR model on simulated images.

4.2.1 Experimental details

The experimental details are as follows:

1) Datasets. The first and third images of Fig. 9 and Fig. 10 were taken by NASA's WB-57 aircraft [4] at 256*256 resolution; the second image is 320*256. We use Gaussian kernels as the basic tool for intensity bias simulation.


Fig. 9. Comparison among different methods on simulated slight intensity bias. (a) Degraded image. (b) Clear image. (c) Cao et al. [2] (d) Liu et al. [17] (e) Shi et al. [7] (f) Hong et al. [4] (g) CMIS. (h) CMIS-GPR.



Fig. 10. Comparison among different methods on simulated dense intensity bias. (a) Degraded image. (b) Clear image. (c) Cao et al. [2] (d) Liu et al. [17] (e) Shi et al. [7] (f) Hong et al. [4] (g) CMIS. (h) CMIS-GPR.


When a Gaussian kernel is used to simulate the intensity bias, we consider the bias to have three main properties. The first is the drasticness of the bias, which is governed by the variance of the Gaussian kernel: the smaller the variance, the more drastically the bias changes. As shown in the first and second intensity biases of Fig. 9, when the variance is small, the intensity is dense in a local area even though the bias is slight. The larger the variance, the more slowly the bias changes, as in the third intensity bias of Fig. 9. The second property is the shape of the intensity bias, which is determined by the center position and variance of the kernel. The third is the thickness of the intensity bias: dense intensity can be obtained by accumulating multiple slight intensity biases. In one image, the three properties may be present at the same time. For convenience, the intensity biases in our simulated experiments are divided into slight and dense ones, which contain different shapes and different variances, respectively. Fig. 10 has the same intensity shapes as Fig. 9, but the thickness of the intensity bias in Fig. 10 is 20 times that in Fig. 9.
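The simulation procedure above can be sketched as follows (Python/NumPy). The centers, sigmas, and amplitudes are illustrative; the 20x thickness of Fig. 10 corresponds to scaling the amplitude or stacking layers.

```python
import numpy as np

def gaussian_bias(shape, center, sigma, amplitude):
    """Simulate one slight intensity-bias layer as a 2-D Gaussian surface.

    Smaller sigma -> more drastic local bias; center/sigma set the shape;
    stacking several layers (or scaling amplitude) simulates dense bias.
    """
    h, w = shape
    y, x = np.mgrid[0:h, 0:w].astype(np.float64)
    cy, cx = center
    return amplitude * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))

def degrade(clear, biases):
    """Degraded image S = clear scene D + sum of bias layers I, clipped to 8 bits."""
    return np.clip(np.asarray(clear, dtype=np.float64) + np.sum(biases, axis=0),
                   0.0, 255.0)
```

For example, a slight bias could be one layer with amplitude 3, and a dense bias the same layer with amplitude 60 (20x), completely obscuring a low-contrast scene.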

2) Parameter settings. For the CMIS module, the parameter $j$ in Eq. (23) must be chosen carefully. In general, a thicker intensity bias requires more loops for good coarse correction, and a slighter one requires fewer. An excessive $j$ may cause the estimated intensity bias to carry scene information, but residual intensity in the coarse result of CMIS is acceptable, because the subsequent GPR module further corrects it. Based on our practice, $j$ is set to 10 in our experiments. $\lambda _2$ constrains the smoothness of the intensity; since varying it produces no significant visual difference in the coarse corrected results, we set it to 0.6. For the multi-scale illumination reference information, we use Gaussian kernels at three scales with sizes of 30*30, 50*50 and 80*80, whose standard deviations are set to 6, 10 and 16, respectively. The result of CMIS is input to the GPR module to obtain the final result (CMIS-GPR).
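A greatly simplified stand-in for the cyclic coarse correction, assuming SciPy's `gaussian_filter` for the three illumination scales. The real module solves the regularized problem of Eq. (23) each cycle; the relaxation factor `step` here is a hypothetical substitute for that solver, used only to show the loop structure of peeling off one slight bias layer per cycle.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_illumination(img, sigmas=(6, 10, 16)):
    """Multi-scale illumination references via Gaussian filtering (three scales)."""
    return [gaussian_filter(img, sigma=s) for s in sigmas]

def cyclic_coarse_correct(degraded, loops=10, step=0.2, sigmas=(6, 10, 16)):
    """Toy CMIS-style loop: each cycle estimates a slight bias layer from the
    mean of the multi-scale illumination references and subtracts a fraction
    of it, so a dense bias is removed as the sum of many slight biases.
    """
    s = np.asarray(degraded, dtype=np.float64)
    for _ in range(loops):
        illum = np.mean(multiscale_illumination(s, sigmas), axis=0)
        slight_bias = step * (illum - illum.min())  # one slight bias layer
        s = s - slight_bias
    return s
```

Residual bias left after the loop is acceptable here, mirroring the design in which GPR handles whatever CMIS leaves behind.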

3) Metrics. Peak signal-to-noise ratio (PSNR) [31] and structural similarity (SSIM) [32] are used to evaluate the simulated data. PSNR is defined via the mean square error (MSE) to measure the fidelity between the corrected image and the ground truth:

$$\text{PSNR}=10 \cdot \log _{10}\left(\frac{\left(2^n-1\right)^2}{\text{MSE}}\right),$$
where $n$ is the bit depth of the image. MSE is defined as:
$$\text{MSE}(x, y)=\frac{1}{m * n} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1}[x(i, j)-y(i, j)]^2,$$
where $x$ and $y$ refer to corrected image and ground truth image, respectively.
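The two definitions translate directly into code (Python/NumPy sketch; identical images are reported as infinite PSNR, and the peak term is squared as in the standard n-bit definition):

```python
import numpy as np

def mse(x, y):
    """Mean square error between two images of the same shape."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    return float(np.mean((x - y) ** 2))

def psnr(x, y, bits=8):
    """PSNR in dB for n-bit images (peak = 2^n - 1); inf for identical images."""
    err = mse(x, y)
    if err == 0.0:
        return float("inf")
    peak = 2 ** bits - 1
    return 10.0 * np.log10(peak ** 2 / err)
```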

SSIM evaluates the similarity between the corrected image and the ground truth; a larger value indicates better image quality. Its mathematical expression is:

$$\text{SSIM}(x, y)=\frac{\left(2 \mu_x \mu_y+C_1\right)\left(2 \sigma_{x y}+C_2\right)}{\left(\mu_x^2+\mu_y^2+C_1\right)\left(\sigma_x^2+\sigma_y^2+C_2\right)},$$
where $\mu _x$ and $\mu _y$, $\sigma _x$ and $\sigma _y$, and $\sigma _{x y}$ are the local means, standard deviations and covariance, respectively, and $C_1$, $C_2$ are small stabilizing constants.
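A single-window sketch of this expression in Python/NumPy; the standard SSIM [32] averages the same statistic over local sliding windows, and the constants follow the usual $K_1=0.01$, $K_2=0.03$ choices, which the paper does not specify.

```python
import numpy as np

def ssim_global(x, y, bits=8, k1=0.01, k2=0.03):
    """Single-window (global) SSIM; 1.0 for identical images."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    L = 2 ** bits - 1
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float((2 * mx * my + c1) * (2 * cov + c2)
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```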

4.2.2 Experiments on slight intensity bias

In this section, we test the performance of all algorithms in removing slight intensity bias. The second column of Fig. 9 shows the original clear images and the first column the simulated degraded images; the simulated images are numbered 1, 2 and 3 in Table 2. The first intensity map is characterized by small pixel values surrounded by large ones, and the second by large pixel values surrounded by small ones. The biases of these two intensity maps vary drastically and are dense in the middle region of the image. The intensity bias of the third image shows high pixel values in the middle region and low values near the edges.


Table 2. Comparison of metrics for simulated data with slight intensity bias (Fig. 9)

Shi et al. and Hong et al. achieve effective results for the first image but fail for the second; the other comparison methods do not work for these degraded images. The CMIS method achieves distinctly effective results for both, which demonstrates that the proposed method has an advantage in correcting local dense intensity. For the third image, the result of Shi et al. is most similar to the clear original image, so it achieves the largest PSNR and SSIM (see Table 2); Hong et al. and our method also obtain effective results. Cao et al. and Liu et al. obtain good results in the real-data experiments because the real images' intensity bias is weak. Inputting the results of CMIS into the GPR module further improves the contrast, as seen in the last column of Fig. 9. Table 2 reports the PSNR and SSIM metrics for the proposed CMIS model and the comparison algorithms; combined with Fig. 9, it shows that the proposed method has advantages in removing slight intensity bias.

4.2.3 Experiments on dense intensity bias

In this section, we perform experiments on degraded images with dense intensity bias; the simulated images are numbered 4, 5 and 6 in Table 3. The thickness of the intensity bias in Fig. 10 is 20 times that in Fig. 9, and Fig. 10(a) shows that the scene is completely obscured by the intensity map. For the first and second images, all the comparison algorithms fail due to the drastic change and denseness of the intensity bias. The proposed CMIS obtains clearer results by cyclically separating slight intensity bias maps from the dense intensity bias map. For the first image, CMIS leaves black intensity at the edge, but this edge effect is corrected after GPR processing, because GPR itself can directly correct weak intensity bias: even if the CMIS-corrected image contains intensity residuals, they are further corrected by the GPR module. For the third image, the simulated intensity bias has a large variance, so the intensity changes slowly. Here the result of Shi et al. recovers the basic outline of the original image but contains many intensity bias residuals, while CMIS clearly gives the best corrected result. These experiments demonstrate that the proposed method has an advantage in correcting global dense intensity.


Table 3. Comparison of metrics for simulated data with dense intensity bias (Fig. 10)

Table 3 shows the quantitative metrics. Combining Table 3 and Fig. 10, it can be seen that the proposed CMIS is useful for removing dense intensity bias. CMIS is designed to remove the dense intensity bias, while GPR corrects the weak intensity bias and enhances the contrast; even if CMIS leaves intensity residuals after correction, the incorporation of GPR further corrects them. Together with the slight-intensity-bias experiments in section 4.2.2 and the ablation experiments in section 4.1.2, this shows that the integration of CMIS and GPR gives the algorithm higher robustness against local dense intensity and global dense intensity. We also provide Visualization 2 to intuitively demonstrate the ability of CMIS-GPR to cope with dense intensity bias.

4.3 Computational time

In this section, we evaluate the computational performance of all the algorithms. All our experiments are run in MATLAB R2015a on a computer with a 3.61-GHz CPU and 32-GB RAM. The average time of all algorithms is measured on 100 frames of real aerial data with weak intensity bias; the whole proposed algorithm (CMIS-GPR) is then tested on 100 frames of data with dense intensity bias. As shown in Table 4, the GPR module takes the least time for weak intensity bias correction. When the image is severely corrupted by dense intensity bias, the computational time may be longer; nevertheless, for dense intensity bias correction the runtime of the proposed algorithm is lower than that of Shi et al. and Hong et al. Therefore, the proposed method has a certain advantage in runtime.
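For reference, a per-frame timing harness of this kind can be sketched as follows (Python here rather than the MATLAB used in the paper; `correct_fn` stands in for any correction routine):

```python
import time

def average_runtime(correct_fn, frames):
    """Average per-frame wall-clock runtime of a correction function."""
    start = time.perf_counter()
    for frame in frames:
        correct_fn(frame)
    return (time.perf_counter() - start) / len(frames)
```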


Table 4. Average runtime for image with 256*320 resolution

5. Conclusion

In this paper, we construct a contrast enhancement method for improving the quality of aero thermal radiation images by proposing a gradient perception regularization module (GPR) and a cyclic multi-scale illumination self-similarity module (CMIS).

Firstly, to address the problem that existing methods can remove the intensity bias to make the image clearer but cannot further boost the contrast, we treat intensity correction as a contrast enhancement problem and design the GPR method. Specifically, the method decomposes an image into a pair of images without intensity bias that exhibit inverted intensity directions. The proposed GPR method differs from previous algorithms in that we no longer focus on estimating the smooth intensity bias. The experimental results show that the GPR module can correct the weak intensity bias of real scenes by boosting the contrast.

Secondly, for local or global dense intensity, the GPR module may fail because the image gradient is too small to be perceived, so we design a coarse correction module in front of GPR. Specifically, to address the intensity residuals or over-extraction of scene information of the base model of Eq. (8), we add reference information by using Gaussian filters to construct multi-scale illumination layers. We then construct a multi-scale illumination self-similarity model based on the similarity prior between the intensity bias and the illumination of degraded images. Next, we treat the dense intensity bias as the sum of multiple slight intensity biases and run the multi-scale illumination self-similarity module cyclically (CMIS) to complete the coarse correction. The corrected result of CMIS is input into the GPR module to cope with dense intensity bias.

Experiments demonstrate that the proposed GPR module can correct weak intensity bias by enhancing contrast, that CMIS can remove large amounts of dense intensity, and that CMIS-GPR can effectively deal with dense intensity bias. Extensive experiments show that the proposed method copes better with local dense intensity and global dense intensity bias than the comparison algorithms (Fig. 9 and Fig. 10). We believe that the proposed method has practical application value.

Finally, the GPR model is essentially a contrast enhancement model, so it inevitably amplifies high-frequency noise. In practical applications, it should be placed after high-frequency noise removal or applied in complex scenes. In the future, we will investigate this limitation to broaden the application of the proposed GPR model.

Funding

The Leading Technology of Jiangsu Basic Research Plan (BK20192003).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are available in Ref. [33].

References

1. J. LaVeigne, G. Franks, K. Sparkman, et al., “LWIR NUC using an uncooled microbolometer camera,” Proc. SPIE 7663, 766306 (2010).

2. Y. Cao and C.-L. Tisse, “Single-image-based solution for optics temperature-dependent nonuniformity correction in an uncooled long-wave infrared camera,” Opt. Lett. 39(3), 646–648 (2014).

3. J. Huang, Y. Ma, F. Fan, et al., “A scene-based nonuniformity correction algorithm based on fuzzy logic,” Opt. Rev. 22(4), 614–622 (2015).

4. H. Hong, J. Liu, Y. Shi, et al., “Progressive nonuniformity correction for aero-optical thermal radiation images via bilateral filtering and Bézier surface fitting,” IEEE Photonics J. 15(2), 1–11 (2023).

5. W. Hui, S. Chen, W. Zhang, et al., “Evaluating imaging quality of optical dome affected by aero-optical transmission effect and aero-thermal radiation effect,” Opt. Express 28(5), 6172–6187 (2020).

6. W. Zhang, L. Ju, Z. Fan, et al., “Optical performance evaluation of an infrared system of a hypersonic vehicle in an aero-thermal environment,” Opt. Express 31(16), 26517–26534 (2023).

7. Y. Shi, J. Chen, H. Hong, et al., “Multi-scale thermal radiation effects correction via a fast surface fitting with Chebyshev polynomials,” Appl. Opt. 61(25), 7498–7507 (2022).

8. Y. Yang, Z. Ren, B. Li, et al., “Infrared and visible image fusion based on infrared background suppression,” Opt. Lasers Eng. 164, 107528 (2023).

9. Y. Shi, H. Hong, X. Hua, et al., “Aero-optic thermal radiation effects correction with a low-frequency prior and a sparse constraint in the gradient domain,” J. Opt. Soc. Am. A 36(9), 1566–1572 (2019).

10. L. Chen, X. Chen, P. Rao, et al., “Space-based infrared aerial target detection method via interframe registration and spatial local contrast,” Opt. Lasers Eng. 158, 107131 (2022).

11. D. Zou and B. Yang, “Infrared and low-light visible image fusion based on hybrid multiscale decomposition and adaptive light adjustment,” Opt. Lasers Eng. 160, 107268 (2023).

12. L. Xu and Y. Cai, “Influence of altitude on aero-optic imaging deviation,” Appl. Opt. 50(18), 2949–2957 (2011).

13. A. DiGiovanni, “Improved IR windows for severe aerothermal environments,” Tech. rep., Technology Assessment and Transfer Inc. (2004).

14. I. Ferralli, T. Blalock, M. Brunelle, et al., “Manufacturing and metrology for IR conformal windows and domes,” Proc. SPIE 10179, 101790M (2017).

15. R. Hodge, P. Raghuraman, and A. Murray, “Window cooling technology program,” J. Spacecr. Rocket. 30(4), 466–476 (1993).

16. L. Li, L. Yan, N. Sang, et al., “Aero-thermal radiation correction via multi-scale bias field estimation,” in 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR) (IEEE, 2015), pp. 246–250.

17. L. Liu and T. Zhang, “Intensity non-uniformity correction of aerothermal images via lp-regularized minimization,” J. Opt. Soc. Am. A 33(11), 2206–2212 (2016).

18. L. Liu, L. Xu, and H. Fang, “Simultaneous intensity bias estimation and stripe noise removal in infrared images using the global and local sparsity constraints,” IEEE Trans. Geosci. Remote Sensing 58(3), 1777–1789 (2020).

19. Z. Li, G. Xu, Y. Cheng, et al., “A structure prior weighted hybrid l2–lp variational model for single infrared image intensity nonuniformity correction,” Optik 229, 165867 (2021).

20. L. Liu and T. Zhang, “Optics temperature-dependent nonuniformity correction via l0-regularized prior for airborne infrared imaging systems,” IEEE Photonics J. 8(5), 1–10 (2016).

21. Y. Chang, L. Yan, H. Fang, et al., “Anisotropic spectral-spatial total variation model for multispectral remote sensing image destriping,” IEEE Trans. on Image Process. 24(6), 1852–1866 (2015).

22. Z. Liang, J. Xu, D. Zhang, et al., “A hybrid l1-l0 layer decomposition model for tone mapping,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 4758–4766.

23. Y. Wang, Y. Wang, T. Liu, et al., “Enhancing infrared imaging systems with temperature-dependent nonuniformity correction via single-frame and inter-frame structural similarity,” Appl. Opt. 62(26), 7075–7082 (2023).

24. P. Babakhani and P. Zarei, “Automatic gamma correction based on average of brightness,” Adv. Comput. Sci. an Int. J. 4, 156–159 (2015).

25. Z. Zhang and S. Zou, “An improved algorithm of mask image dodging for aerial image,” Proc. SPIE 8006, 80060S (2011).

26. B. Rasti and B. Koirala, “SUnCNN: Sparse unmixing using unsupervised convolutional neural network,” IEEE Geosci. Remote Sens. Lett. 19, 1–5 (2021).

27. S. S. Agaian, B. Silver, and K. A. Panetta, “Transform coefficient histogram-based image enhancement algorithms using contrast entropy,” IEEE Trans. on Image Process. 16(3), 741–758 (2007).

28. F. Wu and J. Han, “Study on defect imaging technology of optical elements based on micro-Raman spectroscopy,” Rev. Sci. Instrum. 94(6), 065112 (2023).

29. S. Zhao, Y. He, J. Qin, et al., “A semi-supervised deep learning method for cervical cell classification,” Anal. Cell. Pathol. 2022, 4376178 (2022).

30. G. M. Smith and P. J. Curran, “Methods for estimating image signal-to-noise ratio (SNR),” in Advances in Remote Sensing and GIS Analysis (1999), pp. 61–74.

31. M. Gangadharappa, R. Kapoor, and H. Dixit, “An efficient hierarchical 16-QAM dynamic constellation to obtain high PSNR reconstructed images under varying channel conditions,” IET Commun. 10, 139–147 (2016).

32. Y. Shahriari, R. Fidler, M. M. Pelter, et al., “Electrocardiogram signal quality assessment based on structural image similarity metric,” IEEE Trans. Biomed. Eng. 65(4), 745–753 (2018).

33. Y. Wang, “CMIS-GPR,” GitHub (2023), https://github.com/wangyuro/CMIS-GPR.

Supplementary Material (2)

Visualization 1: visualization for real data with weak intensity bias.
Visualization 2: visualization for simulated dense intensity bias.





