
Dual-polarization defogging method based on frequency division and blind separation of polarization information

Open Access

Abstract

The current advancements in image processing have led to significant progress in polarization defogging methods. However, most existing approaches are not suitable for scenes with targets exhibiting a high degree of polarization (DOP), as they rely on the assumption that the detected polarization information solely originates from the airlight. In this paper, a dual-polarization defogging method combining frequency division and blind separation of polarization information is proposed. To extract the polarization component of the directly transmitted light from the detected polarized signal, blind separation of the overlapped polarized information is performed in the low-frequency domain based on visual perception. Subsequently, after estimating the airlight, a high-quality defogged image can be restored. Extensive experiments conducted on real-world scenes and comparative tests confirm the superior performance of our proposed method compared to other competitive methods, particularly in reconstructing objects with high DOP. This work provides a quantitative approach for estimating the contributions of polarized light from different sources and further expands the application range of polarimetric defogging imaging.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Imaging under severe weather conditions often suffers from poor visibility, low contrast, and degraded color fidelity due to the presence of suspended aerosol and dust particles [1,2]. The captured degraded images adversely impact computer vision-based applications such as security monitoring [3], smart transportation systems [4,5], and maritime search [6]. Therefore, image defogging techniques have become increasingly crucial. Traditional defogging approaches employ image enhancement algorithms or various priors to address the ambiguity of this ill-posed problem in terms of light intensity [7–12]. However, as scene depth and fog concentration increase, the target information gradually becomes submerged in background noise. Consequently, defogging in the single dimension of intensity or color struggles to meet the growing requirements of practical applications. In recent years, the rapid development of deep learning has significantly improved defogging performance. However, learning-based methods rely heavily on datasets, and the lack of realistic paired samples often results in artifacts and color distortion in the defogged outcomes, particularly for scenes characterized by large depth and heavy fog [13–19].

As an alternative strategy, polarization defogging methods use as few as two polarized sub-images to remove the impact of fog and have been widely applied in the fields of dehazing imaging and underwater polarization imaging [20–25]. The representative schemes were proposed by Schechner et al. Based on the assumption that only the airlight is partially linearly polarized and the directly transmitted light is unpolarized, that is, the single polarization assumption, Schechner et al. utilized the polarized difference to separate the target signal from the airlight and then removed the fog's impact [20,26]. Up to now, various polarization defogging methods with different focuses have been proposed, such as the Stokes vector method [21], angle of polarization (PA) method [27], circular polarization [28], spatial frequency division and fusion (SFDF) method [22], and polarization-based image dehazing regularization constraint [23]. While existing polarization defogging methods have shown efficacy, they often fall short when applied to scenes with high DOP. Specifically, for some special scene regions where the object radiance also contributes to the polarization, such as metal architectures, windows, water surfaces, and road surfaces, the assumption fails and results in serious information loss. As Fig. 1 shows, the recovered results of the PA and SFDF methods exhibit unacceptable color distortion and information loss for realistic foggy scene regions with specular dielectric objects of high DOP, such as solar roofs, roads, and lakes, as marked by the red circles in Fig. 1(c1)-(d3). Therefore, it is necessary to consider the polarized characteristics of both the object radiation and the airlight, that is, the dual-polarization assumption.


Fig. 1. The defogging results based on the single polarization assumption for the realistic foggy scenes with specular dielectric objects of high DOP. (a1)-(a3) the foggy images, (b1)-(b3) the DOP images, (c1)-(c3) the defogging images by the PA method, and (d1)-(d3) the defogging images by the SFDF method.


Since the directly transmitted light is mixed with the airlight, it is impossible to directly measure the targets' polarization in foggy conditions. To address this issue, Namer et al. [29] manually recognized the abnormally recovered regions in the airlight map and then re-estimated the polarized property of the airlight by a simple interpolation method. However, the correction process sharply increased the computational cost and was confined to small mirror targets. Fang et al. [30] jointly modeled the polarization effects of the airlight and the object radiance in the imaging process, and proposed a decorrelation-based algorithm to obtain the DOP of the directly transmitted light from the polarized foggy images. Although this work made great progress in polarized foggy imaging theory, the defogging process involved complex calculations of covariances and differential equations. Ling et al. [31] simply used a Gaussian filter to obtain the DOP of the target and achieved a slight improvement in the outcomes.

Herein, we propose a dual-polarization defogging method combining frequency division and blind separation of polarization information. This method assumes that the polarization information of targets is independent of the airlight but correlated with reflection. Therefore, the polarized portion of the directly transmitted light can be effectively blindly extracted in the low-frequency domain based on the visual perception principle, and the airlight can be estimated by subtracting the polarized portion of the target light from the low-frequency information. Extensive experiments on real-world images show that the proposed method not only solves the problem of polarization information loss, but also has significant advantages in recovering the target's structure and texture information in long-distance and dense-fog scenes. In addition, our method outperforms existing dual-polarization methods both in terms of subjective visual effects and objective evaluation of image quality. The proposed method is computationally simple, does not require prior knowledge, and has potential advantages in applications of computer vision and pattern recognition.

2. Dual-polarization-based reconstruction theory

2.1 Contribution of different polarized components on foggy image

The image degradation model indicates that the total intensity of a captured foggy image I contains two components, the attenuated intensity from the scene D and the intensity of the multi-scattered airlight A [2,32]:

$$I = D + A = L \cdot t + {A_\infty } \cdot ({1 - t} )$$
where L represents the fog-free image, ${A_\infty }$ represents the airlight at infinity, $t = {e^{ - \beta (\lambda )d}}$ represents the transmission of the atmospheric scattering medium, d represents the distance between the camera and the target, and $\beta (\lambda )$ represents the extinction coefficient of the scattering medium, which can be treated as a globally invariant parameter if the fog is assumed to be isotropic and uniformly distributed. According to Eq. (1), the fog-free image can be expressed as [2,32]:
$$L = \frac{{I - A}}{{1 - A/{A_\infty }}}$$

Equation (2) shows that the key to recovering the fog-free image is to estimate the airlight parameters A and ${A_\infty }$ accurately. To solve this problem, ordinary polarization defogging methods utilized the polarized difference of the directly transmitted light and airlight. However, these methods are restricted to some special conditions where the directly transmitted light from the target is treated as unpolarized.
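As an illustrative sanity check (not code from the paper), the degradation model of Eq. (1) and its inversion in Eq. (2) can be sketched in NumPy; the function and parameter names here are our own:

```python
import numpy as np

def recover_scene_radiance(I, A, A_inf, eps=1e-6):
    """Invert the degradation model of Eq. (1): I = L*t + A_inf*(1 - t),
    with t = 1 - A/A_inf, giving L = (I - A) / (1 - A/A_inf)  (Eq. (2))."""
    t = 1.0 - A / (A_inf + eps)              # transmission map
    return (I - A) / np.clip(t, eps, 1.0)    # clamp avoids division by zero

# Synthetic check: forward-fog a known radiance, then invert it.
L_true = np.full((4, 4), 0.8)
t = 0.5
A_inf = 1.0
A = A_inf * (1.0 - t)                        # airlight term of Eq. (1)
I = L_true * t + A                           # foggy observation
L_rec = recover_scene_radiance(I, A, A_inf)
```

The round trip recovers the original radiance, confirming the algebra of Eq. (2).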

There is a well-founded conclusion that both the polarization of the airlight and that of the object radiance contribute to the polarization of foggy images [26]. To analytically distinguish the contributions of different polarized components to foggy imaging, the captured polarized light at a single pixel is split into two parts by referring to the condition of the single polarization assumption [20,26], as shown in Fig. 2. $E_A^P$ and $E_D^P$ represent the electric fields of the polarized components of the airlight and the directly transmitted light, respectively. When a polarizer is mounted, the observed polarized intensity changes as a function of the polarizer's orientation angle $\alpha$.


Fig. 2. The polarized components of the directly transmitted light and airlight along the polarizer's orientation angle. The $x$- and $y$-axes represent the polarized directions of 0° and 90° in the Cartesian coordinates of the camera plane, respectively. $E_A^P$ and $E_D^P$ represent the electric fields of the polarized components of the airlight and directly transmitted light, respectively. The electric field projections of the airlight on the $x$- and $y$-axes are denoted by $E_{Ax}^P$ and $E_{Ay}^P$, and those of the directly transmitted light by $E_{Dx}^P$ and $E_{Dy}^P$. $\alpha$ is the orientation angle between the polarizer and the $x$-axis. ${\theta _A}$ and ${\theta _D}$ represent the polarized angles of the airlight and target light relative to the $x$-axis.


For any orientation $\alpha$, the captured polarized intensity at each pixel is expressed as:

$$\begin{aligned} I_\alpha ^p &= {[{E_A^P \cdot \cos ({\theta_A} - \alpha ) + E_D^P \cdot \cos ({\theta_D} - \alpha )} ]^2}\\ &= {A^P} \cdot {\cos ^2}({\theta _A} - \alpha ) + {D^P} \cdot {\cos ^2}({\theta _D} - \alpha ) + 2\sqrt {{A^P} \cdot {D^P}} \cdot \cos ({\theta _A} - \alpha )\cos ({\theta _D} - \alpha ) \end{aligned}$$
where ${A^P}$ and ${D^P}$ represent the polarized components of the airlight and target light, respectively. The captured unpolarized intensity at each pixel is expressed as:
$$I_\alpha ^{UP} = \frac{{A \cdot ({1 - {P_A}} )}}{2} + \frac{{D \cdot ({1 - {P_D}} )}}{2}$$

Therefore, the total intensity of the captured foggy image at each pixel is the superposition of the polarized and unpolarized components:

$$\begin{array}{c} {I_\alpha } = \frac{{A \cdot ({1 - {P_A}} )}}{2} + \frac{{D \cdot ({1 - {P_D}} )}}{2} + {A^P} \cdot {\cos ^2}({\theta _A} - \alpha ) + {D^P} \cdot {\cos ^2}({\theta _D} - \alpha )\\ + 2\sqrt {{A^P} \cdot {D^P}} \cdot \cos ({\theta _A} - \alpha )\cos ({\theta _D} - \alpha ) \end{array}$$

This equation simultaneously takes into account the contributions of the polarized states of the airlight and the target light. Furthermore, when the polarization of the target light is negligible, that is, ${P_D} = 0$ and ${D^P} = 0$, Eq. (5) simplifies to:

$${I_\alpha } = \frac{{A \cdot ({1 - {P_A}} )}}{2} + {A^P} \cdot {\cos ^2}({\theta _A} - \alpha ) + \frac{D}{2}$$

Here, the theoretical formula of the dual-polarization condition reduces to the single polarization assumption [20,26].

Setting $\alpha$ = 0°, 45°, 90°, and 135° in Eq. (5), the Stokes parameters and DOP of the captured polarized foggy image are obtained as:

$$\begin{array}{{c}} {{S_0} = {I_0} + {I_{90}}}\\ {{S_1} = {I_0} - {I_{90}}}\\ {{S_2} = {I_{45}} - {I_{135}}} \end{array},P = \frac{{\sqrt {{{({{S_1}} )}^2} + {{({{S_2}} )}^2}} }}{{{S_0}}}$$
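Eq. (7) maps the four polarized sub-images to the linear Stokes parameters and DOP. A minimal NumPy sketch (helper name is our own; a small eps avoids division by zero):

```python
import numpy as np

def stokes_and_dop(I0, I45, I90, I135, eps=1e-6):
    """Linear Stokes parameters and DOP from four polarized sub-images (Eq. (7))."""
    S0 = I0 + I90
    S1 = I0 - I90
    S2 = I45 - I135
    P = np.sqrt(S1**2 + S2**2) / (S0 + eps)
    return S0, S1, S2, P

# Fully polarized light at 0 deg: I0 = 1, I90 = 0, I45 = I135 = 0.5, so P -> 1.
I0, I45, I90, I135 = (np.full((2, 2), v) for v in (1.0, 0.5, 0.0, 0.5))
S0, S1, S2, P = stokes_and_dop(I0, I45, I90, I135)
```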

To compare the contributions of different polarized components to foggy imaging, Fig. 3(a) shows two simulated curves using Eq. (5). The simulation parameters are set to $D$=0.42, $A$=0.04, ${P_A}$=0.66, ${P_D}$=0 (red curve) or 0.15 (purple curve), ${\theta _A}$=0.99, ${\theta _D}$=0.69, and $\alpha \in [{0,2\pi } ]$, based on experience and practical applications. The red and purple curves represent the light intensity at one pixel of the collected polarized foggy images as a function of $\alpha$ under the single (without ${P_D}$) and dual (with ${P_D}$) polarization assumptions, respectively. The stronger fluctuation of the purple curve demonstrates that the polarization of the object radiation has a significant influence on the foggy image.
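The two simulated curves of Fig. 3(a) can be reproduced directly from Eq. (5) with the stated parameters; this illustrative sketch (our own naming) uses ${A^P} = A \cdot {P_A}$ and ${D^P} = D \cdot {P_D}$:

```python
import numpy as np

def foggy_intensity(alpha, D, A, P_D, P_A, theta_D, theta_A):
    """Per-pixel polarized foggy intensity vs. polarizer angle, Eq. (5)."""
    AP, DP = A * P_A, D * P_D                          # polarized components
    ca, cd = np.cos(theta_A - alpha), np.cos(theta_D - alpha)
    return (A * (1 - P_A) / 2 + D * (1 - P_D) / 2
            + AP * ca**2 + DP * cd**2
            + 2 * np.sqrt(AP * DP) * ca * cd)

alpha = np.linspace(0, 2 * np.pi, 360)
# Parameters from the text; P_D = 0 (single) vs. 0.15 (dual assumption).
I_single = foggy_intensity(alpha, 0.42, 0.04, 0.00, 0.66, 0.69, 0.99)
I_dual   = foggy_intensity(alpha, 0.42, 0.04, 0.15, 0.66, 0.69, 0.99)
```

The peak-to-peak swing of the dual-assumption curve exceeds that of the single-assumption curve, mirroring the stronger fluctuation of the purple curve in Fig. 3(a).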


Fig. 3. (a) Simulated results of varying intensities of the collected polarized foggy images as a function of $\alpha$. (b) Experimental results. (c) Scene division of the intensity image. (d) DOP image, where brighter pixels indicate higher DOP. Polarized foggy images with $\alpha$ = (e) 0°, (f) 45°, (g) 90°, and (h) 135°, respectively, captured by a linear polarized camera (BLACKFLY SBFS-U3-51S5P, FLIR Systems, Inc, U.S.A).


To further experimentally verify the significant contribution of the light's polarized component, a foggy scene shown in Fig. 3(c) was designed, which contains a white paper printed with a Chinese label, a Macbeth color checker, a model truck, and a linear polarized filter wheel (with polarized orientations of 0°, 30°, 60°, 90°, 120°, and 150°). The bright beam shown in Fig. 3(c) is a laser beam used to monitor the fog concentration in real time. Figure 3(d) shows the linear DOP of this scene, which displays high DOP values in the linear polarized filter wheel (ignoring the monitoring laser) and low DOP values elsewhere. Six target areas of interest (as marked in Fig. 3(c)) are selected, among which areas 1-3 represent non-polarized targets and areas 4-6 represent polarized targets. The average intensity of each selected area in the different polarized foggy images (0°, 45°, 90°, and 135°) is calculated and plotted in Fig. 3(b), which shows the measured data and fitting results of the six areas in Fig. 3(e)-(h). In particular, the deviation of the polarized target regions is larger than that of the non-polarized target regions. This is because the deviation in the non-polarized target areas is caused only by the scattering effect of the fog, while the difference in the polarized target areas is the polarization superposition of airlight and object radiance. These experimental results are in good agreement with the simulation in Fig. 3(a) and objectively describe the dual-polarization contributions of the directly transmitted light and airlight to the imaging process.

Referring to Eq. (7), the DOPs of the airlight and the directly transmitted light are expressed as:

$${P_A} = \frac{{\sqrt {{{({{S_{A1}}} )}^2} + {{({{S_{A2}}} )}^2}} }}{{{S_{A0}}}},{P_D} = \frac{{\sqrt {{{({{S_{D1}}} )}^2} + {{({{S_{D2}}} )}^2}} }}{{{S_{D0}}}}$$

Here, ${S_{A0}}$, ${S_{A1}}$, and ${S_{A2}}$ are the Stokes parameters of the airlight, and ${S_{D0}}$, ${S_{D1}}$, and ${S_{D2}}$ are the Stokes parameters of the directly transmitted light. ${S_{D0}}$ and ${S_{A0}}$ correspond to D and A, respectively. Then the following atmospheric scattering model based on dual-polarized characteristics is established:

$$\left\{ {\begin{array}{{c}} {I = A + D = {A_\infty } \cdot ({1 - t} )+ L \cdot t}\\ {A = {A^P} + {A^{UP}}}\\ {D = {D^P} + {D^{UP}}}\\ {t = 1 - \frac{A}{{{A_\infty }}}} \end{array}} \right.$$
where ${A^{UP}}$ and ${D^{UP}}$ are the unpolarized components of the airlight and directly transmitted light, respectively. These parameters are functions of the optical wavelength and thus can be processed independently in each RGB spectral channel. It can be seen from Eq. (9) that the key to solving the model is to estimate A and ${A_\infty }$ in each spectral channel. To this end, the polarization information of the directly transmitted light needs to be extracted first.

2.2 Blind separation of polarization information from polarized foggy images based on visual perception

To blindly separate the polarized information from the polarized foggy images, the frequency-domain intensity distribution characteristics and the visual perception of different light components are used. The intensity distribution characteristics of different light components are described as follows [22]: the fog background mainly contributes to the low-frequency part of the foggy image, while the target's detail features dominate the high-frequency part. Moreover, considering that the nonsubsampled contourlet transform (NSCT) [33] has significant advantages in maintaining the image's texture and in shape feature extraction, this paper uses NSCT to decompose the polarized foggy images into different frequency-domain parts:

$$\{{{I_{\alpha ,low}},I_{\alpha ,high}^{m,n}} \}= NSC{T_{m,n}}({{I_\alpha }} ),\alpha \in [{{0^ \circ },{{45}^ \circ },{{90}^ \circ }\textrm{,13}{\textrm{5}^ \circ }} ],1 \le m \le M,1 \le n \le {2^{{N_m}}}$$

Here, ${I_{\alpha ,low}}$ and $I_{\alpha ,high}^{m,n}$ represent the low- and high-frequency parts of the polarized foggy image with orientation $\alpha$, respectively. m is the number of decomposition levels of the non-subsampled pyramid, and n is the order of the non-subsampled directional filter bank. After decomposing the polarized foggy images $({{I_0},{I_{45}},{I_{90}},{I_{135}}} )$, four low-frequency parts $({{I_{0,low}},{I_{45,low}},{I_{90,low}},{I_{135,low}}} )$ and the corresponding high-frequency parts $({{I_{0,high}},{I_{45,high}},{I_{90,high}},{I_{135,high}}} )$ are obtained.
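A full NSCT implementation is beyond a short sketch. Assuming a simple separable box blur as a stand-in low-pass filter (our assumption, not the paper's method), the low/high split underlying Eq. (10) can be illustrated as:

```python
import numpy as np

def box_blur(img, k=9):
    """Separable box blur used as a crude low-pass filter (NSCT stand-in)."""
    pad, ker = k // 2, np.ones(k) / k
    f = lambda v: np.convolve(np.pad(v, pad, mode='edge'), ker, 'valid')
    return np.apply_along_axis(f, 0, np.apply_along_axis(f, 1, img))

def split_frequencies(img, k=9):
    """The blurred image approximates the fog-dominated low-frequency part;
    the residual holds the target's high-frequency details."""
    low = box_blur(img, k)
    return low, img - low

rng = np.random.default_rng(0)
img = rng.random((64, 64))
low, high = split_frequencies(img)
```

By construction the two parts sum back to the input, and the low-frequency part has a much smaller variance than the original, consistent with its role as the smooth fog background.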

The model of the lightness and color perception of human vision (i.e., the visual perception principle or Retinex theory) [34] describes that sensations of color show a strong correlation with reflectance, because the amount of visible light reaching the eye (detector) depends on the product of reflectance and illumination. In other words, the sensation of color is sensitive to the reflectance of object surfaces, while relatively independent of the non-uniformity of ambient illumination. This theory leverages the human visual system's innate ability to separate object reflectance from ambient illumination, and is used here to extract the polarization information of the object radiation. The low-frequency part of the polarized foggy images in Eq. (10) is decomposed into a reflectance component R and an illumination component L (the detection noise component n is ignored here):

$${I_{\alpha ,low}} = {R_\alpha } \cdot {L_\alpha },\alpha \in [{{0^ \circ },{{45}^ \circ }\textrm{,9}{\textrm{0}^ \circ }\textrm{,13}{\textrm{5}^ \circ }} ]$$

${R_\alpha }$ represents the inherent reflectance of the object after attenuation by the fog, which has a small dynamic range, while ${L_\alpha }$ represents the influence of the fog-affected ambient illumination, which has a large dynamic range. Since there is a dynamic-range difference between ${R_\alpha }$ and ${L_\alpha }$, and the influence of ambient illumination on foggy images can be modeled by a Gaussian function, the reflectance components of the low-frequency parts of the four polarized foggy images $({{R_{0,low}},{R_{45,low}},{R_{90,low}},{R_{135,low}}} )$ can be blindly separated using Eq. (11) and standard mathematical processing [34]. The Stokes parameters and the DOP of the reflectance component are then obtained by reference to Eq. (8):

$$\left\{ {\begin{array}{{c}} {{S_{R0}} = {R_0} + {R_{90}}}\\ {{S_{R1}} = {R_0} - {R_{90}}}\\ {{S_{R2}} = {R_{45}} - {R_{135}}} \end{array}} \right.,{P_R} = \frac{{\sqrt {{{({{S_{R1}}} )}^2} + {{({{S_{R2}}} )}^2}} }}{{{S_{R0}}}}$$
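Under the common single-scale Retinex assumption that the illumination can be approximated by a blurred version of the image, Eqs. (11) and (12) can be sketched as below; the blur kernel, its size, and the synthetic inputs are illustrative assumptions of ours:

```python
import numpy as np

def box_blur(img, k=15):
    """Separable box blur standing in for the Gaussian surround that models
    the fog-affected ambient illumination."""
    pad, ker = k // 2, np.ones(k) / k
    f = lambda v: np.convolve(np.pad(v, pad, mode='edge'), ker, 'valid')
    return np.apply_along_axis(f, 0, np.apply_along_axis(f, 1, img))

def reflectance(I_low, eps=1e-6):
    """Single-scale Retinex: illumination ~ blurred image, R = I / L (Eq. (11))."""
    L = box_blur(I_low) + eps
    return I_low / L

# Reflectance Stokes parameters and DOP of the object radiation (Eq. (12)).
rng = np.random.default_rng(1)
imgs = {a: rng.random((32, 32)) + 0.5 for a in (0, 45, 90, 135)}
R = {a: reflectance(v) for a, v in imgs.items()}
S_R0 = R[0] + R[90]
S_R1 = R[0] - R[90]
S_R2 = R[45] - R[135]
P_R = np.sqrt(S_R1**2 + S_R2**2) / (S_R0 + 1e-6)
```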

Thus the polarized component of the object radiation of the polarized foggy images is estimated:

$${R^P} = R \cdot {P_R}$$
where R and ${P_R}$ represent the object radiation and its DOP, respectively. It should be emphasized that Eq. (13) is derived from the low-frequency part of the polarized foggy image, while the target details are mainly distributed in the high-frequency part. Therefore, ${R^P}$ represents the polarized component of the directly transmitted light in the low-frequency domain, rather than the true polarization information of the detected object. In spite of this, using ${R^P}$ to replace the polarization information of detected objects is still reasonable, because the high DOP of an object mainly comes from the polarizing effect of specular reflection (such as water surfaces, asphalt roads, and metals), and the information contained in this reflection is low-frequency.

2.3 Estimation of A and ${A_\infty }$

Considering that the low-frequency part of the foggy image is mainly ascribed to the fog background, it is legitimately assumed that the unpolarized component of the directly transmitted light in the low-frequency part is very weak compared to the total intensity. Therefore, the total light intensity of the low-frequency part of the foggy image can be estimated from Eqs. (9)–(13) as:

$$\begin{aligned} {I_{low}} &= {D_{low}} + {A_{low}}\\ &= [{{D_{low}} \cdot {P_{D,low}} + {D_{low}} \cdot ({1 - {P_{D,low}}} )} ]+ {A_{low}}\\ & \approx {D_{low}} \cdot {P_{D,low}} + {A_{low}}\\ &\approx {R^P} + {A_{low}} \end{aligned}$$

Consequently, the airlight of the low-frequency part of the foggy image is estimated as:

$${A_{low}} = {I_{low}} - R \cdot {P_R}$$

Here, the airlight in the low-frequency part in Eq. (15) is slightly different from the real airlight of the foggy image. In addition, a completely dehazed image renders the sky region dark and loses the sense of depth of field. Therefore, a bias factor $\gamma$ is introduced to estimate the airlight $A$:

$$A = \frac{{{A_{low}}}}{\gamma }$$

The bias factor in Eq. (16) is the only coefficient that needs to be set manually; in practice, it can be set as a constant, as discussed at the end of this article.
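Eqs. (15) and (16) then reduce to a one-line per-pixel estimate; a hedged sketch (names are ours, and a non-negativity clamp is added for safety, which the paper does not specify):

```python
import numpy as np

def estimate_airlight(I_low, R, P_R, gamma=1.7):
    """A_low = I_low - R * P_R (Eq. (15)); A = A_low / gamma (Eq. (16)).
    gamma is the single manually set bias factor; 1.7 is the value used in
    the paper's controlled experiment."""
    A_low = I_low - R * P_R
    return np.clip(A_low, 0, None) / gamma

I_low = np.full((3, 3), 0.6)
R = np.full((3, 3), 0.5)      # separated object radiation (low frequency)
P_R = np.full((3, 3), 0.2)    # its DOP, i.e. the polarized fraction
A = estimate_airlight(I_low, R, P_R)
```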

Considering the sufficient complexity and richness of the captured real images, the well-known dark channel prior [8] is used to determine the airlight at infinity ${A_\infty }$. In most non-sky patches of fog-free outdoor images, at least one color channel has very low intensity, tending to zero at some pixels (called dark pixels) [8]:

$${J^{dark}}(x )= \mathop {\min }\limits_c \left( {\mathop {\min }\limits_{y \in \varOmega (x )} ({I_{low}^c(y )} )} \right) = 0$$
where ${J^{dark}}$ is the dark channel composed of these dark pixels, $I_{low}^c$ is a color channel of the low-frequency part of the foggy image, and $\varOmega (x )$ is a local patch centered at pixel x. These dark pixels have higher intensity values in the corresponding foggy images due to the influence of fog. Thus the parameter ${A_\infty }$ can be automatically estimated by picking the top 0.1% brightest pixels in the dark channel:
$${A_\infty } = \sum\limits_{{I_{low}} \ge 0.1\%{J^{dark}}} {({{I_{low}}} )}$$
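A minimal dark-channel-prior sketch of Eqs. (17) and (18) follows; the patch size is an assumption, and the selected bright pixels are averaged here (rather than summed as written in Eq. (18)) to keep the estimate bounded:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel minimum over color channels and a local patch (Eq. (17))."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for i in range(mins.shape[0]):
        for j in range(mins.shape[1]):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_A_inf(I_low, patch=3, frac=0.001):
    """Average I_low over the brightest `frac` of dark-channel pixels (Eq. (18))."""
    dc = dark_channel(I_low, patch)
    n = max(1, int(frac * dc.size))
    idx = np.argsort(dc.ravel())[-n:]          # top 0.1% brightest dark pixels
    gray = I_low.mean(axis=2).ravel()
    return gray[idx].mean()

rng = np.random.default_rng(2)
I_low = rng.random((20, 20, 3)) * 0.3
I_low[:2, :2] = 0.9                            # a bright, fog-like patch
A_inf = estimate_A_inf(I_low)
```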

2.4 Image restoration model

Since the polarization of the object radiance cannot be ignored, we use the new atmospheric scattering model based on dual-polarized characteristics presented in Eq. (9), which considers both the polarization effects of the airlight and the object radiance, to remove the effect of fog.

Substituting Eqs. (16) and (18) into Eq. (9), the intensity of the scene radiance in the low-frequency domain ${L_{low}}$ (i.e., the defogged image of the low-frequency part) is expressed as follows:

$${L_{low}} = \frac{{{I_{low}} - {A_{low}}}}{{1 - {{{A_{low}}} / {{A_\infty }}}}}$$

High-frequency sub-bands represent the detailed components of the source images, such as edges, contours, and object boundaries. The most commonly used fusion rule for high-frequency sub-bands is selecting the coefficient with the maximum absolute value. However, this scheme is sensitive to noise and may lose important information, since the selection is based on a single coefficient value without considering neighboring coefficients. Another coefficient selection scheme is based on an activity-level measurement. In the proposed method, the maximum weighted sum-modified Laplacian (WSML) [35] is used to fuse and enhance the high-frequency part of the information. The modified Laplacian (ML) of $I_{\alpha ,high}^{m,n}(x,y)$ is:

$$\begin{aligned} ML[{I_{\alpha ,high}^{m,n}(x,y)} ]&= |{2I_{\alpha ,high}^{m,n}({x,y} )- I_{\alpha ,high}^{m,n}({x - 1,y} )- I_{\alpha ,high}^{m,n}({x + 1,y} )} |\\ &+ |{2I_{\alpha ,high}^{m,n}({x,y} )- I_{\alpha ,high}^{m,n}({x,y - 1} )- I_{\alpha ,high}^{m,n}({x,y + 1} )} |\end{aligned}$$

The WSML of $I_{\alpha ,high}^{m,n}(x,y)$ is:

$$WSML[{I_{\alpha ,high}^{m,n}(x,y)} ]= \sum\limits_{p ={-} 1}^1 {\sum\limits_{q ={-} 1}^1 {\{{w({p + 1,q + 1} )\cdot ML[{I_{\alpha ,high}^{m,n}(x,y)} ]} \}} }$$
where w is the weight matrix, set empirically as:
$$w = \frac{1}{{16}}\left[ {\begin{array}{ccc} 1&2&1\\ 2&4&2\\ 1&2&1 \end{array}} \right]$$

To efficiently enhance the detection target information, the rule of maximizing WSML is used to obtain the fused high-frequency sub-band coefficients:

$$Fused_{_{high}}^{m,n}(x,y) = I_{\alpha \_\max ,high}^{m,n}(x,y),WSML[{I_{\alpha \_\max ,high}^{m,n}(x,y)} ]= \max \{{WSML[{I_{\alpha ,high}^{m,n}(x,y)} ]} \}$$
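The ML computation, the weighted neighborhood sum, and the max-WSML selection rule of Eqs. (20)–(22) can be sketched as follows; border handling by edge replication is our assumption:

```python
import numpy as np

def modified_laplacian(img):
    """ML of Eq. (20), with edge-replicated borders."""
    p = np.pad(img, 1, mode='edge')
    c = p[1:-1, 1:-1]
    return (np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:])
            + np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1]))

W = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0   # weight matrix w

def wsml(img):
    """Weighted sum of ML over a 3x3 neighborhood (Eq. (21))."""
    ml = np.pad(modified_laplacian(img), 1, mode='edge')
    out = np.zeros_like(img)
    for p in range(3):
        for q in range(3):
            out += W[p, q] * ml[p:p + img.shape[0], q:q + img.shape[1]]
    return out

def fuse_high(bands):
    """Pick, per pixel, the coefficient whose WSML activity is maximal (Eq. (22))."""
    stack = np.stack(bands)
    act = np.stack([wsml(b) for b in bands])
    idx = act.argmax(axis=0)
    return np.take_along_axis(stack, idx[None], axis=0)[0]

flat = np.zeros((8, 8))
edgy = np.zeros((8, 8)); edgy[:, 4:] = 1.0   # band with a strong vertical edge
fused = fuse_high([flat, edgy])
```

Near the edge the rule selects the high-activity band, illustrating how the fusion preserves detail-bearing coefficients.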

After polarization defogging of the low-frequency part and fusion processing of the high-frequency parts, the haze-free image is reconstructed by inverting the NSCT [33]:

$$L = NSCT_{m,n}^{\prime}({{L_{low}},Fused_{high}^{m,n}} ),1 \le m \le M,0 \le n \le {2^{{N_m}}}$$

Thus the clear image is reconstructed according to Eq. (23) without loss of spatial resolution or polarization information.

3. Proposed dual-polarization defogging algorithm

Figure 4 shows the flowchart of the proposed scheme, which consists of four steps. The first step is to preprocess and decompose the raw multi-polarization foggy image I. After extracting four polarized sub-images from the foggy image, the Optimized Residual Interpolation (ORI) algorithm [22] is used to obtain four high-resolution polarized sub-images $({{I_0},{I_{45}},{I_{90}},{I_{135}}} )$ to avoid spatial resolution loss. Then the $({{I_0},{I_{45}},{I_{90}},{I_{135}}} )$ are decomposed into four low-frequency images $({{I_{0,low}},{I_{45,low}},{I_{90,low}},{I_{135,low}}} )$ and corresponding high-frequency parts $({{I_{0,high}},{I_{45,high}},{I_{90,high}},{I_{135,high}}} )$ by using Eq. (10). In the second step, the low-frequency parts are processed by the dual-polarization defogging theory as described in Eqs. (11)–(19). To efficiently enhance the detailed components of the detected scene, the third step fuses and enhances the high-frequency parts by using the maximum WSML as described in Eqs. (20)–(22). Finally, the fog-free image L is reconstructed according to Eq. (23). Considering the synchronously amplified noise during the fusion process in the high-frequency domain, guided filtering [36] is optionally used.


Fig. 4. Flowchart of the proposed algorithm.


4. Experimental results

4.1 Test in a controlled environment

To verify the effectiveness of the proposed method, a controlled experiment is designed as shown in Fig. 5(a). A laser power monitoring module is used to monitor the power in real time after attenuation through the fog. To compare the recovery performance for polarimetric information, we select a low-polarized target (car model) and a highly polarized target (linear polarized filter wheel) to test our algorithm. The clear and polarized foggy images are captured by the linear polarized camera (BLACKFLY SBFS-U3-51S5P, FLIR Systems, Inc, U.S.A). Figures 5(b)-(e) show the detected targets and their corresponding DOP images. It can be seen from Fig. 5(d) that the overall polarization information of the car model is relatively weak, except for details such as the windshield and body contour, while the polarized filter wheel shows some highly polarized regions.


Fig. 5. (a) Schematic diagram of the designed controllable experiment, clear image of (b) the car model and (c) the linear polarized filter wheel, the DOP image of (d) the car model, and (e) the linear polarized filter wheel.


The defogging results of the car model and linear polarized filter wheel under different optical thicknesses are compared with those of the SFDF method based on the single polarization assumption, as shown in Fig. 6. We set the bias factor to $\gamma = 1.7$ to achieve the best defogging effect, and set optimal parameters for the SFDF method to generate its most promising results for a fair comparison. Compared with the SFDF method, the defogging image from the proposed method has clearer details, such as the body contour and wheel hub of the car model. In particular, the object detected in fog with an optical thickness of $\tau = 3.15$ is almost completely submerged in fog, and the visual improvement from the proposed method is more significant. In addition, as can be seen from Fig. 6(e1)-(e5) and (f1)-(f5), although the SFDF method eliminates the influence of fog on the linear polarized filter wheel, the highly polarized sector areas (as marked by the white triangles in Fig. 6(e1)-(f5)) are distorted to black, that is, information distortion. Especially for small $\tau$ values, the polarization of the target has a great influence on the foggy image. In contrast, as shown in Fig. 6(f1)-(f5), the visual results of the sector areas from the proposed method are highly consistent with the clear image in Fig. 5(c). These results indicate that the proposed method can accurately restore the polarization information of polarized targets in fog.


Fig. 6. (a1)-(a5) The foggy images, defogging images of (b1)-(b5) the SFDF method and (c1)-(c5) the proposed method of low polarized targets under different optical thicknesses. (d1)-(d5) The fog images, defogging images of (e1)-(e5) the SFDF method and (f1)-(f5) the proposed method of high polarized targets under different optical thicknesses.


4.2 Actual field experiments

To analyze the performance of the proposed method, six typical natural scenes with highly polarized targets (lake, car, tile, solar panel roof, road, and aluminum water tower) with colorful and complex structures, located several kilometers away, are used for comparison. The polarized foggy images are captured by the linear polarized camera (BLACKFLY SBFS-U3-51S5PC-C, FLIR Systems, Inc, U.S.A). The distance between the target and the camera is measured by a remote laser rangefinder, and the detected distances in Fig. 7 range from 553 to 2640 m.

  • A. Comparison of overall defogging effect. The subjective visual effects are shown in Fig. 7, where Fig. 7(a1)-(a6) are the foggy images. Due to the influence of fog, the images look blurry, and the detailed information of distant architectural and mountain targets is submerged in the fog. Some areas, such as the lake surface, solar roof, and aluminum water towers, show highly polarized characteristics, as shown in the corresponding DOP images in Fig. 7(b1)-(b6). Figure 7(c1)-(c6) are the defogging results of the PA method based on the single polarization assumption, Fig. 7(d1)-(d6) are the defogging results of the SFDF method, and Fig. 7(e1)-(e6) are the defogging results of the proposed method ($\gamma$ = 1.25, 1.25, 1.5, 1.5, 1.75, 1.7). For a fair comparison, we set optimal parameters for the competing methods to generate their most promising results. As can be seen from the figure, both the PA and SFDF methods suffer from different degrees of information loss in the highly polarized target areas circled in red. In contrast, the proposed method produces correct, robust, and clear visual effects in all six complex scenes, and both the overall visibility and the clarity of the images are significantly improved.
  • B. Comparison of polarization information recovery. To compare the recovery of polarization information, the local areas marked by the yellow rectangles in Fig. 7 are enlarged, as shown in Fig. 8. There is serious information loss in the highly polarized target regions restored by the PA and SFDF methods. Compared with the foggy images and the defogging methods based on the single polarization assumption, detail features such as solar roofs, building windows, and vehicles on the road are restored with high quality in the defogging images of the proposed method. In particular, the surface shadows and texture details of the aluminum water tower target in Scene 6 are visible, as shown in Fig. 8(d6).
  • C. Comparison of detail recovery. To compare the recovery of remote detail information more intuitively, the local areas marked by the green rectangles in Fig. 7 are enlarged, as shown in Fig. 9. Compared with the PA and SFDF methods, the method proposed in this paper has the best fog removal effect, and window and tree textures can be seen at distances of over a kilometer. Tables 1–3 show the quantitative comparison results for the defogging images in Fig. 9. It can be seen that the index values of the proposed method are the highest in five out of six scenarios. The numerical analysis results are consistent with the subjective evaluation of Fig. 9. These results prove that the proposed method not only effectively solves the problem of polarization information loss, but also has significant advantages in restoring target structure and texture information in far-distance scenes.
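The three no-reference metrics used in Tables 1–3 (standard deviation, entropy, and contrast) can be sketched as below. This is a minimal illustration assuming 8-bit grayscale inputs; since the paper does not spell out its exact contrast formula, a common neighboring-pixel squared-difference contrast is used here, which may differ from the authors' implementation.

```python
import numpy as np

def std_metric(img):
    """Standard deviation of gray levels (higher = wider dynamic range)."""
    return float(np.std(img))

def entropy_metric(img, bins=256):
    """Shannon entropy of the gray-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

def contrast_metric(img):
    """Mean squared gray-level difference between 4-connected neighbors."""
    img = img.astype(np.float64)
    dh = np.diff(img, axis=0) ** 2    # vertical neighbor differences
    dv = np.diff(img, axis=1) ** 2    # horizontal neighbor differences
    return float((dh.sum() + dv.sum()) / (dh.size + dv.size))
```

All three metrics increase as a defogged image recovers more structure, which is why higher table values indicate better restoration.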

Fig. 7. The defogging results of six complex scenes several kilometers away. (a1)-(a6) The foggy images, (b1)-(b6) the DOP images, the defogging images of (c1)-(c6) PA method, (d1)-(d6) SFDF method, and (e1)-(e6) proposed method.

Fig. 8. The enlarged images of the local areas marked by the yellow rectangles in Fig. 7, (a1)-(a6) the foggy images, the defogging images of (b1)-(b6) PA method, (c1)-(c6) SFDF method, and (d1)-(d6) proposed method.

Fig. 9. The enlarged images of the local areas marked by the green rectangles in Fig. 7, (a1)-(a6) the foggy images, the defogging images of (b1)-(b6) PA method, (c1)-(c6) SFDF method, and (d1)-(d6) proposed method.

Table 1. Standard deviation evaluation results of the defogging image quality of six scenes.

Table 2. Entropy evaluation results of the defogging image quality of six scenes.

Table 3. Contrast evaluation results of the defogging image quality of six scenes.

4.3 Comparison with the existing dual-polarization defogging method

To further evaluate the performance of the proposed method, it is compared with the classical single-polarization defogging method [26] and the existing dual-polarization defogging methods [29,30]. The defogging performance is evaluated using both subjective visual effects and objective image quality metrics. The subjective visual results are shown in Fig. 10. Figure 10(a) is a set of orthogonal polarized foggy images. Figure 10(b) is the DOP image, which shows that the lake area has a higher DOP, i.e., this target has significant polarized characteristics. Figure 10(c) is the defogging result based on the single polarization assumption, Fig. 10(d) is the defogging result of Namer et al.'s method, Fig. 10(e) is the defogging result of Fang et al.'s method, and Fig. 10(f) is the defogging result of the proposed method ($\gamma$ = 1.25). Comparison of Fig. 10(a) and (c)-(f) shows that when only the polarized characteristics of airlight are considered in the defogging processing, serious information loss occurs in the lake area, and significant residual fog remains due to the inaccurate transmission estimate. When the polarized characteristics of both airlight and target light are considered, an excellent defogging effect is achieved over the entire scene. To better compare the results, the yellow local areas in Fig. 10(d)-(f) are magnified, as shown in Fig. 10(g)-(i). It can be seen that the target boundary in Fig. 10(g) is rather blurry and difficult to distinguish. In Fig. 10(h), the overall visual effect is biased towards gray-blue, and dark targets are rendered poorly. In contrast, Fig. 10(i) has significant clarity, contrast, and visibility, and the architectural details and clear boundaries between different targets in the scene are visible. In addition, the rich color information of targets such as white roofs, the blue lake, and green grasslands is well restored.

Fig. 10. The comparison results between the method proposed and existing dual-polarization defogging methods, (a) a set of orthogonal polarized foggy images, (b) DOP images, (c) defogging images of single polarization-based method, (d) Namer et al.’s method, (e) Fang et al.’s method, (f) proposed method, and (g) - (i) the enlarged images of the local areas marked by the yellow rectangles in (d) - (f), respectively.

To quantitatively compare with the existing advanced dual-polarization defogging methods, the quantitative comparison results for the defogging images in Fig. 10 are given in Table 4. It can be seen that the index values of the proposed method are the highest, except for standard deviation and contrast, where Fang et al.'s method scores slightly higher. In particular, the entropy, spatial frequency, and average gradient of the defogging image of the proposed method exhibit marked improvements of 18%, 257%, and 130%, respectively. These results are in good agreement with the visual effects in Fig. 10, which verifies that the proposed method not only effectively solves the distortion problem of polarization information through fog, but also has significant advantages in restoring overall visual clarity, structural details, and texture information in distant scenes.
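The spatial-frequency and average-gradient metrics cited above have widely used definitions, sketched below; the exact normalization in the authors' implementation is not stated, so these follow the common conventions and may differ in scale from Table 4.

```python
import numpy as np

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2): RMS of row-wise and column-wise differences."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

def average_gradient(img):
    """Mean magnitude of the local gradient, a common sharpness measure."""
    img = img.astype(np.float64)
    gx = img[:-1, 1:] - img[:-1, :-1]   # horizontal forward difference
    gy = img[1:, :-1] - img[:-1, :-1]   # vertical forward difference
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2)))
```

Both measures grow with recovered edge energy, which is why the reported 257% and 130% gains indicate much sharper restored detail.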

Table 4. Objective evaluation of the comparison results with existing advanced dual-polarization defogging methods.

4.4 Robustness analysis of the proposed method

In the proposed method, accurate determination of the bias factor $\gamma$ is the key to ensuring defogging effectiveness. To explore the influence of the bias factor on the proposed algorithm, we scan the empirical values of $\gamma$ and obtain a series of defogging images. Figure 11 shows the defogging images of scene 1 and scene 2 with $\gamma$ = 1, 1.25, 1.5, 1.75, 2, 2.25, 2.5, and 3. When $\gamma$ = 1, the image is over-defogged, resulting in black artifacts in the sky and dark areas. As the bias factor gradually increases, more of the fog is retained, so the image becomes brighter and brighter, and the clarity of the scene gradually decreases. To quantitatively study the relationship between the bias factor $\gamma$ and the defogging result, the contrast of the defogging images is calculated and plotted in Fig. 12. As $\gamma$ increases, the contrast decreases along a roughly parabolic curve and then levels off after $\gamma$ = 2.25. Because the contrast of a defogged image is mainly determined by near scenes, which carry only slight fog, and a larger bias factor corresponds to a weaker defogging level, a large bias factor is still sufficient to remove the impact of fog in near scenes. As a result, the proposed method is robust to the bias factor $\gamma$ in complex outdoor scenes.
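The $\gamma$ scan described above can be sketched as follows. This is an illustrative simplification, not the authors' full pipeline: `restore` applies a restoration of the form $L = (I - A)/(1 - A/A_\infty)$ with $A_\infty$ taken as $\gamma$ times the brightest airlight value, and image contrast is approximated here by the gray-level standard deviation rather than the paper's contrast metric.

```python
import numpy as np

def restore(I_low, A_low, gamma, eps=1e-6):
    """Restore low-frequency radiance for one bias factor.
    A_inf is scaled by gamma; larger gamma -> weaker defogging."""
    A_inf = gamma * A_low.max()
    # transmission map, clipped away from zero to avoid blow-up
    t = np.clip(1.0 - A_low / (A_inf + eps), 0.05, 1.0)
    return (I_low - A_low) / t

def gamma_scan(I_low, A_low,
               gammas=(1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5, 3.0)):
    """Contrast (std proxy) of the restored image for each bias factor."""
    return {g: float(np.std(restore(I_low, A_low, g))) for g in gammas}
```

On synthetic data this reproduces the qualitative trend in Fig. 12: contrast falls as $\gamma$ grows, because a larger $A_\infty$ yields a larger transmission estimate and hence milder amplification.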

Fig. 11. The defogging images with bias factor $\gamma$ = 1, 1.25, 1.5, 1.75, 2, 2.25, 2.5, and 3: (a1)-(h1) correspond to scene 1 in Fig. 8, and (a2)-(h2) correspond to scene 2 in Fig. 8.

Fig. 12. The relationship between the bias factor $\gamma$ and defogging image quality, corresponding to (a) scene 1 and (b) scene 2 in Fig. 11.

5. Conclusion and discussion

This paper proposes a dual-polarization defogging method connecting frequency division and blind separation of polarization information to address the issue of polarization information distortion. To extract the polarized characteristics of directly transmitted light from the mixed information, we analyze the impact of the polarized characteristics of directly transmitted light and airlight on imaging, and separate the polarization information according to a physical model. On this basis, a corresponding dual-polarization defogging algorithm is proposed. The effectiveness and robustness of the proposed method are analyzed through quantitative and qualitative experimental results. The indoor and complex real-world scene experiments indicate that our method not only robustly and effectively solves the problem of polarization information loss, but also has significant advantages in restoring target structure, texture, and color information in distant and heavily foggy scenes. In conclusion, our method advances the field of image defogging with simple calculations and no requirement for prior knowledge, and it has promising potential for real-world engineering applications in computer vision tasks, particularly in actual scenarios with high DOP.

There are still some shortcomings in the method presented in this paper. The estimated polarization information of the directly transmitted light refers to the target's polarization after attenuation by the fog particles, and is not equivalent to the true polarized characteristics of the scene target. Moreover, the analysis does not cover the extinction differences across different spectral channels.

Funding

National Natural Science Foundation of China (62305062).

Acknowledgments

We are very grateful to the teachers and students who provided help and support for this work, and we also thank the funding project for its support.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

Data underlying the results presented in this paper may be obtained from the corresponding author upon reasonable request.

References

1. R. C. Henry, S. Mahadev, S. Urquijo, et al., “Color perception through atmospheric haze,” J. Opt. Soc. Am. A 17(5), 831–835 (2000).

2. S. G. Narasimhan and S. K. Nayar, “Vision and the atmosphere,” Int. J. Comput. Vis. 48(3), 233–254 (2002).

3. I. Yoon, S. Kim, D. Kim, et al., “Adaptive defogging with color correction in the HSV color space for consumer surveillance system,” IEEE Trans. Consum. Electron. 58(1), 111–116 (2012).

4. P. C. Wu, C. Y. Chang, and C. H. Lin, “Lane-mark extraction for automobiles under complex conditions,” Pattern Recognit. 47(8), 2756–2767 (2014).

5. S. Wang, Q. Li, Z. Cui, et al., “Bandit-based data poisoning attack against federated learning for autonomous driving models,” Expert Syst. Appl. 227, 120295 (2023).

6. Z. Ma, J. Wen, C. Zhang, et al., “An effective fusion defogging approach for single sea fog image,” Neurocomputing 173, 1257–1267 (2016).

7. R. Fattal, “Single image dehazing,” ACM Trans. Graph. 27(3), 1–9 (2008).

8. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011).

9. G. F. Meng, Y. Wang, J. Y. Duan, et al., “Efficient image dehazing with boundary constraint and contextual regularization,” in Proc. IEEE Int. Conf. Comput. Vis. (2013), pp. 617–624.

10. Q. Zhu, J. Mai, and L. Shao, “A fast single image haze removal algorithm using color attenuation prior,” IEEE Trans. Image Process. 24(11), 3522–3533 (2015).

11. D. Berman and S. Avidan, “Non-local image dehazing,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (2016), pp. 1674–1682.

12. V. C. Javier, D. F. Graham, and B. Marcelo, “Physical-based optimization for non-physical image dehazing methods,” Opt. Express 28(7), 9327–9339 (2020).

13. W. Ren, S. Liu, H. Zhang, et al., “Single image dehazing via multi-scale convolutional neural networks,” in Proc. Eur. Conf. Comput. Vis. (2016), pp. 154–169.

14. B. Cai, X. Xu, K. Jia, et al., “DehazeNet: An end-to-end system for single image haze removal,” IEEE Trans. Image Process. 25(11), 5187–5198 (2016).

15. S. Ren, K. He, R. Girshick, et al., “Faster R-CNN: Towards real-time object detection with region proposal networks,” IEEE Trans. Pattern Anal. Mach. Intell. 39(6), 1137–1149 (2017).

16. B. Li, X. Peng, Z. Wang, et al., “AOD-Net: All-in-One dehazing network,” in Proc. IEEE Int. Conf. Comput. Vis. (2017), pp. 4780–4788.

17. Y. Song, J. Li, X. Wang, et al., “Single image dehazing using ranking convolutional neural network,” IEEE Trans. Multimedia 20(6), 1548–1560 (2018).

18. S. Zhao, L. Zhang, Y. Shen, et al., “RefineDNet: A weakly supervised refinement framework for single image dehazing,” IEEE Trans. Image Process. 30, 3391–3404 (2021).

19. Y. Zheng, J. Su, S. Zhang, et al., “Dehaze-AGGAN: Unpaired remote sensing image dehazing using enhanced attention-guide generative adversarial networks,” IEEE Trans. Geosci. Remote Sens. 60, 1–12 (2022).

20. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Instant dehazing of images using polarization,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (2001), pp. 325–332.

21. J. Mudge and M. Virgen, “Real time polarimetric dehazing,” Appl. Opt. 52(9), 1932–1938 (2013).

22. F. Huang, C. Ke, X. Wu, et al., “Polarization dehazing method based on spatial frequency division and fusion for a far-field and dense hazy image,” Appl. Opt. 60(30), 9319–9332 (2021).

23. Z. Liang, X. Y. Ding, Z. Mi, et al., “Effective polarization-based image dehazing with regularization constraint,” IEEE Geosci. Remote Sens. Lett. 19, 1–5 (2022).

24. H. Wang, H. Hu, J. Jiang, et al., “Automatic underwater polarization imaging without background region or any prior,” Opt. Express 29(20), 31283–31295 (2021).

25. H. Wang, J. Li, H. Hu, et al., “Underwater imaging by suppressing the backscattered light based on Mueller matrix,” IEEE Photonics J. 13(4), 1–6 (2021).

26. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Polarization-based vision through haze,” Appl. Opt. 42(3), 511–525 (2003).

27. J. Liang, L. Ren, H. Ju, et al., “Polarimetric dehazing method for dense haze removal based on distribution analysis of angle of polarization,” Opt. Express 23(20), 26146–26157 (2015).

28. W. F. Zhang, J. Liang, and L. Y. Ren, “Haze-removal polarimetric imaging schemes with the consideration of airlight’s circular polarization effect,” Optik 182, 1099–1105 (2019).

29. E. Namer and Y. Y. Schechner, “Advanced visibility improvement based on polarization filtered images,” Proc. SPIE 5888, 588805 (2005).

30. S. Fang, X. Xia, X. Huo, et al., “Image dehazing using polarization effects of objects and airlight,” Opt. Express 22(16), 19523–19538 (2014).

31. F. Ling, Y. Zhang, Z. Shi, et al., “Defogging algorithm based on polarization characteristics and atmospheric transmission model,” Sensors 22(21), 8132 (2022).

32. W. E. K. Middleton and V. Twersky, “Vision through the atmosphere,” Phys. Today 7(3), 21 (1954).

33. A. L. D. Cunha, J. Zhou, and M. N. Do, “The nonsubsampled contourlet transform: theory, design, and applications,” IEEE Trans. Image Process. 15(10), 3089–3101 (2006).

34. Z. Rahman, D. J. Jobson, and G. A. Woodell, “Multi-scale retinex for color image enhancement,” in Proc. IEEE Int. Conf. Image Process. (1996), Vol. 3, pp. 1003–1006.

35. P. Ganasala and V. Kumar, “CT and MR image fusion scheme in nonsubsampled contourlet transform domain,” J. Digit. Imaging 27(3), 407–418 (2014).

36. K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013).


Figures (12)

Fig. 1. The defogging results based on the single polarization assumption for the realistic foggy scenes with specular dielectric objects of high DOP. (a1)-(a3) the foggy images, (b1)-(b3) the DOP images, (c1)-(c3) the defogging image by PA method, and (d) the defogging image by SFDF method.
Fig. 2. The polarized components of directly transmitted light and airlight along the polarizer’s orientation angle. $x$- and $y$- axis represents the polarized directions of 0° and 90° in the Cartesian coordinate of the camera plane, respectively. $E_A^P$ and $E_D^P$ represents the electric field of the polarized components of airlight and directly transmitted light, respectively. The electric field projections of airlight on $x$- and $y$- axes are denoted by $E_{Ax}^P$ and $E_{Ay}^P$, respectively. The electric field projections of directly transmitted light on $x$- and $y$- axes are denoted by $E_{Dx}^P$ and $E_{Dy}^P$, respectively. $\alpha$ means the orientation angle between the polarizer and the $x$- axis of the Cartesian coordinate. ${\theta _A}$ and ${\theta _D}$ represents the polarized angles of airlight and target light related to the $x$- axis of the Cartesian coordinate.
Fig. 3. (a) Simulated results of varying intensities of the collected polarized foggy images as a function of $\alpha$. (b) Experimental results. (c) Scene division of the intensity image. (d) DOP image, the brighter, the higher of DOP. Polarized foggy images with $\alpha$ = (e) 0°, (f) 45°, (g) 90° and (h) 135°, respectively, which were captured by a linear polarized camera (BLACKFLY SBFS-U3-51S5P, FLIR Systems, Inc, U.S.A).
Fig. 4. Flowchart of the proposed algorithm.
Fig. 5. (a) Schematic diagram of the designed controllable experiment, clear image of (b) the car model and (c) the linear polarized filter wheel, the DOP image of (d) the car model, and (e) the linear polarized filter wheel.


Equations (24)

$$I = D + A = Lt + A_\infty(1 - t)$$
$$L = \frac{I - A}{1 - A/A_\infty}$$
$$I_\alpha^P = \left[E_A^P\cos(\theta_A - \alpha) + E_D^P\cos(\theta_D - \alpha)\right]^2 = A^P\cos^2(\theta_A - \alpha) + D^P\cos^2(\theta_D - \alpha) + 2\sqrt{A^P D^P}\cos(\theta_A - \alpha)\cos(\theta_D - \alpha)$$
$$I_\alpha^{UP} = \frac{A(1 - P_A)}{2} + \frac{D(1 - P_D)}{2}$$
$$I_\alpha = \frac{A(1 - P_A)}{2} + \frac{D(1 - P_D)}{2} + A^P\cos^2(\theta_A - \alpha) + D^P\cos^2(\theta_D - \alpha) + 2\sqrt{A^P D^P}\cos(\theta_A - \alpha)\cos(\theta_D - \alpha)$$
$$I_\alpha = \frac{A(1 - P_A)}{2} + A^P\cos^2(\theta_A - \alpha) + \frac{D}{2}$$
$$S_0 = I_0 + I_{90},\quad S_1 = I_0 - I_{90},\quad S_2 = I_{45} - I_{135},\quad P = \frac{\sqrt{S_1^2 + S_2^2}}{S_0}$$
$$P_A = \frac{\sqrt{S_{A1}^2 + S_{A2}^2}}{S_{A0}},\quad P_D = \frac{\sqrt{S_{D1}^2 + S_{D2}^2}}{S_{D0}}$$
$$\begin{cases} I = A + D = A_\infty(1 - t) + Lt \\ A = A^P + A^{UP} \\ D = D^P + D^{UP} \\ t = 1 - A/A_\infty \end{cases}$$
$$\{I_{\alpha,low},\, I_{\alpha,high}^{m,n}\} = \mathrm{NSCT}_{m,n}(I_\alpha),\quad \alpha \in \{0^\circ, 45^\circ, 90^\circ, 135^\circ\},\ 1 \le m \le M,\ 1 \le n \le 2^{N_m}$$
$$I_{\alpha,low} = R_\alpha \cdot L_\alpha,\quad \alpha \in \{0^\circ, 45^\circ, 90^\circ, 135^\circ\}$$
$$S_{R0} = R_0 + R_{90},\quad S_{R1} = R_0 - R_{90},\quad S_{R2} = R_{45} - R_{135},\quad P_R = \frac{\sqrt{S_{R1}^2 + S_{R2}^2}}{S_{R0}}$$
$$R^P = P_R\, R$$
$$I_{low} = D_{low} + A_{low} = \left[D_{low}P_{D,low} + D_{low}(1 - P_{D,low})\right] + A_{low} \approx D_{low}P_{D,low} + A_{low} \approx R^P + A_{low}$$
$$A_{low} = I_{low} - P_R\, R$$
$$A_\infty = A_{\infty,low}\,\gamma$$
$$J^{dark}(x) = \min_c\left(\min_{y \in \Omega(x)}\left(I_{low}^c(y)\right)\right) = 0$$
$$A_{\infty,low} = I_{low}\big|_{\text{top }0.1\%\text{ of }J^{dark}(I_{low})}$$
$$L_{low} = \frac{I_{low} - A_{low}}{1 - A_{low}/A_\infty}$$
$$ML\left[I_{\alpha,high}^{m,n}(x,y)\right] = \left|2I_{\alpha,high}^{m,n}(x,y) - I_{\alpha,high}^{m,n}(x-1,y) - I_{\alpha,high}^{m,n}(x+1,y)\right| + \left|2I_{\alpha,high}^{m,n}(x,y) - I_{\alpha,high}^{m,n}(x,y-1) - I_{\alpha,high}^{m,n}(x,y+1)\right|$$
$$WSML\left[I_{\alpha,high}^{m,n}(x,y)\right] = \sum_{p=-1}^{1}\sum_{q=-1}^{1} w(p+1, q+1)\, ML\left[I_{\alpha,high}^{m,n}(x+p, y+q)\right]$$
$$w = \frac{1}{16}\begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix}$$
$$Fused_{high}^{m,n}(x,y) = I_{\alpha\_\max,high}^{m,n}(x,y),\quad WSML\left[I_{\alpha\_\max,high}^{m,n}(x,y)\right] = \max_\alpha\left\{WSML\left[I_{\alpha,high}^{m,n}(x,y)\right]\right\}$$
$$L = \mathrm{NSCT}_{m,n}^{-1}\left(L_{low},\, Fused_{high}^{m,n}\right),\quad 1 \le m \le M,\ 0 \le n \le 2^{N_m}$$
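The Stokes-vector and DOP computation in the equations above ($S_0 = I_0 + I_{90}$, $S_1 = I_0 - I_{90}$, $S_2 = I_{45} - I_{135}$, $P = \sqrt{S_1^2 + S_2^2}/S_0$) maps directly to array code. A minimal sketch assuming four co-registered intensity images from the 0°, 45°, 90°, and 135° channels of a division-of-focal-plane polarization camera; the `eps` guard against division by zero is our addition, not part of the paper's formulation.

```python
import numpy as np

def stokes_dop(I0, I45, I90, I135, eps=1e-9):
    """Linear Stokes parameters and degree of polarization (DOP)
    from four polarizer orientations (0, 45, 90, 135 degrees)."""
    S0 = I0 + I90          # total intensity
    S1 = I0 - I90          # horizontal/vertical preference
    S2 = I45 - I135        # diagonal preference
    P = np.sqrt(S1 ** 2 + S2 ** 2) / (S0 + eps)
    return S0, S1, S2, P
```

Applied per pixel, `P` reproduces DOP maps like those in Fig. 7(b1)-(b6): highly polarized surfaces (lake, solar roof) yield values near 1, while unpolarized fog yields values near 0.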