
Underwater image enhancement method based on adaptive attenuation-curve prior

Open Access

Abstract

The attenuation (sum of absorption and scattering) caused by the dense and non-uniform water medium generally leads to color degradation and detail loss in underwater imaging. In this study, we describe an underwater image enhancement method based on an adaptive attenuation-curve prior. The method uses color channel transfer (CCT) to preprocess the underwater images; uses light smoothing and wavelength-dependent attenuation to estimate the water light and the attenuation ratio between color channels; and then estimates and refines the initial relative transmission of each channel. Additionally, the method calculates the attenuation factor and saturation constraints of the three color channels and generates an adjusted reverse saturation map (ARSM) to address uneven light intensity, after which the image is restored through the water light and transmission estimates. Furthermore, we applied white balance fusion with globally guided image filtering (G-GIF) to achieve color enhancement and edge detail preservation in the underwater images. Comparison experiments showed that the proposed method obtained better color and de-hazing effects, as well as clearer edge details, relative to current methods.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Underwater activities have been increasing, and the relatively complicated underwater environment causes problems in image acquisition, including dim light, noise, color degradation, and loss of detail. Advanced marine applications and services, such as underwater archaeology, marine life collection, and underwater monitoring, rely heavily on high-quality underwater images, so the restoration and enhancement of such images will directly improve these applications. Underwater optics is one of the most fundamental parts of this effort, and its key challenge is the attenuation of underwater light.

Underwater light attenuation is caused by scattering and absorption [1,2]; because of light refraction and dust-like particles floating in the water, underwater images are always affected by scattering [3,4]. The restoration and/or enhancement of underwater images is mainly achieved through two classes of methods: image-based and physics-based methods [5]. Traditional image enhancement methods, such as white balance [6], histogram equalization [7], and contrast-limited adaptive histogram equalization, can improve image clarity and color saturation; however, these methods are ineffective for underwater images with complex physical characteristics. Park et al. [8] proposed a comprehensive color model based on histogram stretching, and the studies in [9,10] improved the image chromaticity measurement (ICM) method and stretched the input image based on the Rayleigh distribution to preserve detail in the enhanced area. The physics-based methods comprehensively consider the basic physics of light propagation in water and underwater imaging theory. Considering the influence of light degradation and scattering, Song et al. [11] proposed a statistical model of background light combined with transmission map optimization to eliminate underwater haze and improve image clarity. Additionally, Chang et al. [12] proposed a single-frame underwater image restoration model based on depth estimation and transmission compensation, which solved a series of problems caused by light scattering and absorption.

A recent study combined the blue-green channel and the red channel to create a single underwater image restoration method that first restores the blue-green channel by de-hazing and then corrects the red channel using the gray world hypothesis theory [13]. Although this established an adaptive exposure image to solve the problems of overexposure and underexposure, it failed to preserve image details. A previous study proposed a detail-preserving underexposed image enhancement method using optimal weighted multi-exposure fusion, which effectively preserved the color of the image [14,15], whereas another study proposed a super-resolution underwater image enhancement method that optimized the retinex algorithm and then used a neural network to train the Y channel to improve the dynamic range and clarity of the underwater image [16]. However, in both cases, there remained deficiencies in the acquisition of image color features.

A previous study proposed an underwater descattering method based on a convolutional neural network (CNN) and used adaptive bilateral filtering to refine the estimation results, followed by a balancing method to eliminate image color differences [17]. Although this method showed advantages in qualitative and quantitative analyses, flaws remained in the detail and exposure processing of the image. Additionally, image enhancement methods based on deep learning were developed to eliminate image blur and increase color saturation [18-20]. However, the results showed incomplete preservation of edge details.

An image restoration method based on the attenuation-curve prior has been proposed; it relies on the observation that the colors of a clear image can be well approximated by a few hundred distinct colors, and that the pixels in the same color cluster form a power-function curve of RGB values [21]. Another study proposed an underwater scene depth estimation method based on image blur and light absorption that can be applied in an image formation model to restore and enhance underwater images [22]. Additionally, a report proposed a new underwater image restoration strategy involving two different transmission coefficient estimation methods, one based on optical characteristics and the other dependent on image processing knowledge. Subsequent fusion of the two transmission maps produces a final result that is adaptively weighted through their respective saliency maps, with the obtained signal radiance decomposed by point spread function (PSF) deconvolution and color compensation [23].

With the rapid development of deep learning techniques for image restoration and enhancement, deep learning-based underwater image enhancement methods have been widely used. One study applied deep sparse non-negative matrix factorization to estimate the illumination of underwater images, thereby ensuring the constancy of image color [24], and another used a depth map estimated by a CNN to achieve a de-hazing effect and trained it by image equalization [25]. Additionally, a previous study used a super-resolution CNN to address image blur [26], and another study applied a multi-scale structure to predict the depth map of a scene to enhance the color of underwater images [27]. Although challenges remain, such as effectively addressing underwater light scattering and restoring and enhancing the colors and details of underwater images, recent studies focused on establishing underwater image databases have increased convenience and supported research efforts. However, current methods are either inadequate for color enhancement or result in low image definition, especially when edge details in the image are not clear enough, resulting in a blurred image.

The path of reflected light from an object to the camera is shown in Fig. 1. Scattering is caused by the interaction of light with the medium in water and is divided into backscattering and forward scattering. Backscattering describes ambient light that is scattered along the line of sight before finally propagating to the image plane, resulting in large reductions in scene contrast. Forward scattering occurs when part of the reflected light is scattered at a small angle, which easily causes image blur.


Fig. 1. Underwater light propagation scene. Entry into the water from the atmosphere results in gradual attenuation of light. Point x represents the scene point closest to the camera.


In this study, we describe the development of an underwater image enhancement method based on an adaptive attenuation-curve prior. To improve image quality, we first preprocess the underwater images using color channel transfer (CCT), then estimate the transmission of each pixel according to its distribution on the attenuation-curve, and then estimate the attenuation factor to compensate for the transmission. We then generate an adjusted reverse saturation map (ARSM) to address issues with image exposure and artificial lighting and use saturation constraints to adjust the transmission of the three color channels to prevent image oversaturation and reduce the noise of each pixel. To achieve color enhancement and preserve edge detail, we use white balance fusion with globally guided image filtering (G-GIF) based on the best gain factor.

The main contributions of this article are as follows:

  • 1) we developed an underwater image restoration and enhancement method based on adaptive attenuation-curve prior that can simulate the light attenuation process in different underwater scenes and effectively eliminate the effects of image noise, haze, and artificial lighting, thereby allowing image de-blurring, color enhancement, and edge detail preservation;
  • 2) we used CCT as a preprocessing operation for image de-hazing, resulting in improved de-hazing effects relative to previous methods;
  • 3) for image enhancement, white balance fusion G-GIF enhanced image brightness and color while retaining edge details, thereby improving the visual effect;
  • 4) we demonstrated the efficacy of the method in underwater and low-light environments.

2. Related work

2.1 CCT

We applied CCT for the following reasons: 1) to reduce image color blur caused by the harsh underwater environment, light scattering, and attenuation; 2) to reduce distortion of image color caused by underwater de-hazing; and 3) to reduce the limitations of existing de-hazing methods when a single channel is strongly attenuated. Information loss in underwater images is mainly due to selective attenuation, scattering, or a lack of color bands under certain exposure conditions. CCT transmits information from the informative channels to the attenuated channels.

CCT operations are based on color transmission [28] and shift [29]. To compensate for the loss of information in a color channel, the CCT operation is built into the opponent color space to obtain a reference image directly from the input image and use saliency and detail mapping to adjust for opponent color variations in components affected by strong attenuation channels.

2.2 Underwater image frame model

To describe the simplified underwater image frame model [11], object radiation in water is expressed as follows:

$${W_{vert}}(D,\gamma) = {W_{intact}}{e^{ - \delta (\gamma)D}},$$
where $\delta (\gamma)$ represents the spectral absorption coefficient of wavelength $\gamma $ and ${W_{intact}}$ represents the radiance of the complete object. The radiance when the incident light is absorbed vertically is represented by ${W_{vert}}(D,\gamma)$, where D represents the depth of the object relative to the water surface. The total radiance detected by the camera is as follows:
$${I^c}(x)= {W_d} + {W_{fs}} + {W_{bs}}\; ,$$
where x is the pixel coordinate, ${W_d}$ represents the direct component, ${W_{fs}}$ represents the forward-scattered component, and ${W_{bs}}$ represents the backscattered component.

Because objects reflect a certain amount of light, according to Beer–Lambert’s empirical law, the characteristics of the medium in water and the direct irradiance show an exponential decay relationship:

$${W_d}({x,D,\gamma })= {W_{vert}}(D,\gamma){e^{ - \beta (\gamma)d(x)}} = {W_{vert}}(D,\gamma){t_c}(x),$$
$${t_c}(x)= {e^{ - {\beta ^c}d(x)}} = {e^{ - ({{a^c} + {s^c}})d(x)}},\; \; c \in \{{r,g,b} \},$$

In Eq. (4), ${a^c}$ and ${s^c}$ represent the absorption coefficient and the scattering coefficient, respectively; c represents one of the three color channels (RGB); and ${t_c}(x)$ is an exponential decay function, which represents the part of the scene radiation that is not scattered or absorbed but reaches the camera. Therefore, in Eq. (3), ${W_d}$ represents the direct irradiance at position x, ${\beta ^c}$ is the underwater spectral attenuation coefficient, and $d(x)$ is the distance between the camera and the target.

For object reflection, light scattered at a small angle is referred to as the forward-scattered component and is expressed as follows:

$${W_{fs}}({x,D,\gamma })= {W_d}({x,D,\gamma })\ast f({x,\gamma }),$$
where ${W_{fs}}({x,D,\gamma })$ represents the forward-scattered component at position x, the convolution relative to x is represented by the operator $\ast $, and $f({x,\gamma })$ represents the PSF [12]. The backscattered component is the result of the interaction between the light beam and the medium in the water and, under uniform illumination conditions, can be expressed as follows:
$$\begin{aligned}{W_{bs}}({x,\gamma }) &= {B^c}({1 - {e^{ - \beta (\gamma)d(x)}}})\\ &= {B^c}({1 - {t_c}(x)}),\; \; \; \; c \in \{{r,g,b} \}, \end{aligned}$$
where ${W_{bs}}({x,\gamma })$ represents the backscattered component at position x and B represents water light. Because the direct- and forward-scattered components (including the signal) are scattered from the target object, the definition of the signal is as follows:
$$\begin{aligned} X({x,\;D,\gamma }) &= {W_d}({x,D,\gamma })+ {W_{fs}}({x,D,\gamma })\\ &= {W_{vert}}(D,\gamma){e^{ - \beta (\gamma)d(x)}}{\ast} [{\mathrm{\rho }(x)+ f({x,\gamma })} ]\\ &= {J^c}(x){t_c}(x), \end{aligned}$$
where $\mathrm{\rho }(x)$ represents the Dirac delta function [30] and ${J^c}(x)$ represents the restored radiance (scene brightness) at the pixel point x in the same channel c. Inserting Eq. (7) into Eq. (2) returns the following:
$${I^c}(x)= {J^c}(x){t_c}(x)+ ({1 - {t_c}(x)}){B^c},\; c \in \{{r,g,b} \},$$
where the first term on the right represents direct attenuation (i.e., the attenuation of scene brightness in the medium) and the second term represents the water light lift (i.e., the transition from scene brightness to water light). Splitting the distance $d(x)$ into the distance from the camera to the nearest scene point and the distance from the nearest to the farthest scene point, the transmission can be decomposed as follows:
$${t_c}(x)= {e^{ - {\beta ^c}{d_0}}} \cdot {e^{ - {\beta ^c}{d_n}}} = {K_c} \cdot t_c^n(x),\; c \in \{{r,g,b} \},$$

In Eq. (9), ${d_0}$ represents the distance from the camera to the nearest scene point, ${d_n}$ represents the distance from the nearest scene point to the farthest scene point, ${K_c}$ is a constant attenuation factor, and $t_c^n(x)$ describes relative transmission.
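
As a simple illustration of Eqs. (8) and (9), the following sketch (not the authors' code; the numerical values are hypothetical) synthesizes the color captured by the camera from a scene radiance J, the water light B, an attenuation factor K, and a relative transmission $t_c^n$:

```python
# Illustrative sketch of Eqs. (8)-(9); values are hypothetical, not from the paper's data.
import numpy as np

def observed_color(J, B, t):
    """I^c = J^c * t_c + (1 - t_c) * B^c, applied per channel (Eq. (8))."""
    J, B, t = map(np.asarray, (J, B, t))
    return J * t + (1.0 - t) * B

J = np.array([0.80, 0.55, 0.40])      # restored radiance J^c (hypothetical)
B = np.array([0.16, 0.66, 0.58])      # water light B^c (value used later for Fig. 2)
K = 0.9                               # constant attenuation factor K_c (Eq. (9))
t_n = np.array([0.35, 0.62, 0.55])    # relative transmission t_c^n(x) (hypothetical)
I = observed_color(J, B, K * t_n)     # degraded color actually captured
print(I)
```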

The transmission relationship between the blue-green channel and the red channel is as follows:

$${t_{c^{\prime}}}(x)= {e^{ - {\beta ^{c^{\prime}}}d(x)}} = {({{e^{ - {\beta_r}d(x)}}})^{{\beta ^{c^{\prime}}}/{\beta _r}}} = {t_r}{(x)^{{\delta _{c^{\prime}}}}},$$
where $c^{\prime} \in \{{g,b} \}$, ${\delta _g} = {\beta _g}/{\beta _r}$, ${\delta _b} = {\beta _b}/{\beta _r}$, and ${\delta _{c^{\prime}}}$ represents the attenuation ratio between color channels.

The absorption coefficient varies irregularly with the wavelength of light, whereas the scattering coefficient does not change greatly. The quantitative relationship between the scattering coefficients of red, green, and blue light in water is summarized through least squares regression [31]:

$$\frac{{{s^{c^{\prime}}}}}{{{s^r}}} = \frac{{ - 0.00113{R^{c^{\prime}}} + 1.6251}}{{ - 0.00113{R^r} + 1.6251}},\; \; \; c^{\prime} \in \{{g,b} \},$$

The present study used a red wavelength of ${R^r}$ = 620 nm, a green wavelength of ${R^g}$ = 540 nm, and a blue wavelength of ${R^b}$ = 450 nm. According to the inherent optical properties of the water, the attenuation ratio ${\delta _{c^{\prime}}}$ between the color channels is calculated as:

$${\delta _{c^{\prime}}} = \frac{{{\beta ^{c^{\prime}}}}}{{{\beta ^r}}} = \frac{{{s^{c^{\prime}}}}}{{{s^r}}} \cdot \frac{{{B^r}}}{{{B^{c^{\prime}}}}}\; \; ,\; \; \; \;c^{\prime} \in \{{g,b} \},$$
where ${B^c}$ is the water light determined by the ambient light. Although the accurate ratio cannot be obtained from this formula, it can be used to establish a cluster of attenuation curves in the RGB space and to estimate the initial transmission, because the transmission estimation along a curve is stable. However, when calculating the final transmission of the three color channels, the differences between these ratios are very large, and an inaccurate estimation will cause image blurring or oversaturation and can even reduce the contrast of some color channels in the restored image. A new ratio for the restored image is therefore calculated through the saturation constraint. With this in place, we recover the scene brightness by estimating the water light and transmission.
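
The sketch below illustrates how Eqs. (11) and (12) could be combined to obtain the attenuation ratios ${\delta _g}$ and ${\delta _b}$ from the wavelengths given in the text and an estimated water light; it is a minimal illustration rather than the authors' implementation.

```python
# Rough sketch of Eqs. (11)-(12): scattering-coefficient ratios from the wavelengths
# used in the paper, scaled by the water-light ratio. Not the authors' code.
import numpy as np

WAVELENGTHS = {"r": 620.0, "g": 540.0, "b": 450.0}   # nm, as stated in the text

def attenuation_ratios(B):
    """B is the estimated water light [B_r, B_g, B_b]; returns (delta_g, delta_b)."""
    s = lambda lam: -0.00113 * lam + 1.6251           # linear relation of Eq. (11)
    s_ratio_g = s(WAVELENGTHS["g"]) / s(WAVELENGTHS["r"])
    s_ratio_b = s(WAVELENGTHS["b"]) / s(WAVELENGTHS["r"])
    delta_g = s_ratio_g * B[0] / B[1]                  # Eq. (12)
    delta_b = s_ratio_b * B[0] / B[2]
    return delta_g, delta_b

print(attenuation_ratios(np.array([0.16, 0.66, 0.58])))
```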

2.3 Adaptive attenuation-curve prior

There are many influencing factors in the underwater environment, including dust-like particles in the medium that cause changes in the wavelength-dependent attenuation coefficient. Equation (4) suggests that transmission ${t_c}(x)$ depends on the distance and wavelength. We propose a more efficient prior method (adaptive attenuation-curve prior) that can be used in various complex environments, such as atmospheric fog and low-light underwater areas.

We define pixels with similar colors as belonging to the same cluster. For images in a clear atmospheric environment, a previous study [32] verified that hundreds of clusters can represent all of the colors in an image. In an underwater environment, two factors affect image clarity and change the observed image: the distance, $d(x)$, from the camera to the target, which causes differences in the transmittance of each pixel, and the wavelength-dependent attenuation coefficient, ${\beta ^c}$, which results in differences between the three elements of the transmission vector [${t_c}(x)$]. Through Eq. (8), these two factors attenuate the scene radiance, ${J^c}(x)$, of pixels from the same cluster (with similar original colors) to different degrees, leading to different captured colors, ${I^c}(x)$, in the observed image. Because the depth of each pixel differs, the pixel values of a cluster, plotted in the RGB space, form a curve that starts at the original color J (when $t$ = 1) and ends at the water color B (when $t$ = 0). This curve, whose shape is determined by the wavelength-dependent attenuation ratio of Eq. (10), is referred to as the attenuation-curve (Fig. 2).


Fig. 2. Attenuation-curve prior. (a) Clear pixel picture. (b) Clusters corresponding to different positions in the RGB space. (c) Composite image of the same clusters. (d) The corresponding attenuation-curves in RGB space.


Insertion of Eq. (10) into Eq. (8) results in the following:

$$({I^{c^{\prime}}} - {B^{c^{\prime}}}) = {({{I^r} - {B^r}})^{{\delta _{c^{\prime}}}}} \cdot \frac{{{J^{c^{\prime}}} - {B^{c^{\prime}}}}}{{{{({{J^r} - {B^r}})}^{{\delta _{c^{\prime}}}}}}}\; \; ,\; \; \; c^{\prime} \in \{{g,b} \}\; ,$$

We define the RGB coordinate space with the water light as the origin; therefore, the observed pixel intensities of the green or blue channel (${I^{c^{\prime}}} - {B^{c^{\prime}}}$) have a power-function relationship with those of the red channel (${I^r} - {B^r}$). Because B and ${\delta _{c^{\prime}}}$ are constants for an underwater scene, the curve depends only on the scene radiance, J, and different radiances correspond to different curves. Therefore, the degradation process of underwater images can be simulated with attenuation curves and adapted to different underwater environments according to Eq. (12).

Four colors are used in Fig. 2(a) to mark four pixel clusters of an image, and the four clusters, distributed at four different positions, are shown in Fig. 2(b). We applied Eq. (8) and Eq. (12) to synthesize the observed images. Assuming B = [0.16, 0.66, 0.58], Fig. 2(c) shows the synthesized image of the same clusters as in Fig. 2(a); however, because the scene depth and the attenuation coefficients of the three color channels differ, the color of each cluster differs from its original color. The change in the color space is shown in Fig. 2(d); curves of the same color belong to the same cluster. Combining with Eq. (9) returns the power function relationship:

$$({I^{c^{\prime}}} - {B^{c^{\prime}}}) = {({{I^r} - {B^r}})^{{\delta _{c^{\prime}}}}} \cdot \frac{{{K_{c^{\prime}}}({{J^{c^{\prime}}} - {B^{c^{\prime}}}})}}{{{{[{K_r}({{J^r} - {B^r}})]}^{{\delta _{c^{\prime}}}}}}},c^{\prime} \in \{{g,b} \},$$

3. Proposed method

The flow chart of the method in this paper is shown in Fig. 3.


Fig. 3. Flow chart for the proposed method.


3.1 Image preprocessing

The harsh underwater environment, its uneven light distribution, and the scattering medium in water result in acquired underwater images that are full of noise and haze. Directly applying traditional color constancy methods (gray world, gray edge, shades of gray, etc.) to these issues does not achieve a good color enhancement effect. Moreover, because the attenuation of light is treated as achromatic, the model defined in Eq. (8) alone cannot correct color attenuation. Previous studies showed that the use of CCT as an image preprocessing operation for de-hazing significantly improves image quality and color appearance [33].

In this study, we applied the color shift to align the global average and standard deviation of the initial image with those of the reference image. In practical applications, this transfers information between a pair of opponent colors while aligning the global features of the source and reference images, and it is completed in three steps [33,34]. First, the average value of the initial image is subtracted; the image is then rescaled by the ratio of the reference standard deviation to the input standard deviation; finally, the average value of the reference image is added. The color transfer can be expressed in the CIE L*a*b* color space:

$$\begin{aligned} {I_{L\ast }}(x) &= [{{I_{L\ast }}(x)- {{\bar{I}}_{L\ast }}} ]\cdot \sigma _r^{L\ast }/\sigma _s^{L\ast } + \bar{I}_{L\ast }^r\\ {I_{a\ast }}(x) &= [{{I_{a\ast }}(x)- {{\bar{I}}_{a\ast }}} ]\cdot \sigma _r^{a\ast }/\sigma _s^{a\ast } + \bar{I}_{a\ast }^r\\ {I_{b\ast }}(x) &= [{{I_{b\ast }}(x)- {{\bar{I}}_{b\ast }}} ]\cdot \sigma _r^{b\ast }/\sigma _s^{b\ast } + \bar{I}_{b\ast }^r\; , \end{aligned}$$
where ${\bar{I}_{L\ast }}$, ${\bar{I}_{a\ast }}$, and ${\bar{I}_{b\ast }}$ are the average values of each channel ($L\ast $, $a\ast $, and $b\ast $) of the original image, and $\bar{I}_{L\ast }^r$, $\bar{I}_{a\ast }^r$, and $\bar{I}_{b\ast }^r$ are those of the reference image. The parameters $\sigma _r^{L\ast }$, $\sigma _r^{a\ast }$, and $\sigma _r^{b\ast }$ represent the standard deviations of the reference image, and $\sigma _s^{L\ast }$, $\sigma _s^{a\ast }$, and $\sigma _s^{b\ast }$ are the standard deviations of the original image.
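
A minimal sketch of the statistic alignment in Eq. (15) is given below; it assumes the input and reference images have already been converted to the CIE L*a*b* space (for example with skimage.color.rgb2lab) and is not the authors' implementation.

```python
# Sketch of Eq. (15): align per-channel mean and standard deviation of the source
# L*a*b* image with those of the reference L*a*b* image. Not the authors' code.
import numpy as np

def color_shift(src_lab, ref_lab):
    out = np.empty_like(src_lab, dtype=np.float64)
    for ch in range(3):                                   # L*, a*, b*
        s_mean, s_std = src_lab[..., ch].mean(), src_lab[..., ch].std()
        r_mean, r_std = ref_lab[..., ch].mean(), ref_lab[..., ch].std()
        out[..., ch] = (src_lab[..., ch] - s_mean) * (r_std / (s_std + 1e-8)) + r_mean
    return out
```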

Regularly shifting the red-green (and yellow-blue) opponent colors helps compensate for the strong attenuation of the red (or blue) channel. During the transfer process, the mean and variance are modified. This algorithm effectively eliminates unwanted color transmission and tends to compensate for the attenuated color channel. However, the algorithm dims the colors; therefore, the reference image is further adjusted to include the color change introduced by the salient areas and the original details. The reference image is expressed as $R(x)$ and calculated as follows:

$$R(x)= G(x)+ D(x)+ S(x)I(x),$$
where $G(x)$ represents a uniform grayscale image (50%), $D(x)$ represents the detail layer of the original input, and $S(x)$ represents the saliency of the input image, $I(x)$. The use of the input image $I(x)$ is related to the gray world assumption: in the opponent color space, the average brightness of a natural image is close to 0.5, whereas the average value of the opponent color channels is close to zero. We applied the technique described by Ancuti et al. [34] to calculate $S(x)$. Adding the product of the saliency and the original image helps restore the original color of the image. $D(x)$ is obtained by subtracting a Gaussian-blurred version of the input image from the input image.
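
The following sketch assembles a reference image in the spirit of Eq. (16). The saliency map here is a crude global-contrast surrogate, not the technique of Ancuti et al. [34], and the Gaussian blur scale is an arbitrary choice; the sketch is only illustrative.

```python
# Illustrative sketch of Eq. (16): R(x) = G(x) + D(x) + S(x) * I(x).
# The saliency S(x) below is a simple surrogate, not the method of [34].
import numpy as np
from scipy.ndimage import gaussian_filter

def reference_image(I):
    """I: float image in [0, 1], shape (H, W, 3)."""
    G = np.full_like(I, 0.5)                        # uniform 50% gray image G(x)
    blur = gaussian_filter(I, sigma=(3, 3, 0))      # Gaussian-blurred version of I (arbitrary sigma)
    D = I - blur                                    # detail layer D(x)
    lum = I.mean(axis=2)
    S = np.abs(lum - lum.mean())                    # crude global-contrast saliency surrogate
    S = S / (S.max() + 1e-8)
    return G + D + S[..., None] * I                 # Eq. (16)
```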

We evaluate this method in three respects: 1) qualitative comparison; 2) quantitative comparison; and 3) complexity analysis. Before the comparison, we first illustrate the role of the CCT preprocessing used in this article, as shown in Fig. 4.


Fig. 4. The role of CCT as a preprocessing method. (a) Original image and (b) the image after CCT preprocessing. The proposed method (c) without and (d) with CCT preprocessing.


As shown in Fig. 4, CCT preprocessing significantly improved image clarity and color saturation relative to the original image. Moreover, comparative experiments showed that the image obtained from the proposed method showed improved color enhancement and detail preservation, suggesting the important practical significance of CCT as an image preprocessing operation.

3.2 Water light estimation

Water light refers to the uniform background light generated by backscattering [21]. To facilitate accurate estimation of the water light, the degraded image is empirically segmented, and patches whose variation is smaller than a predefined threshold are selected, followed by calculation of the difference between the intensity of the red channel and the intensity of the green or blue channel:

$${D_{c^{\prime}}}(x)= {I^{c^{\prime}}}(x)- {I^r}(x)\; ,\; \; c^{\prime} \in \{{g,b} \},$$
where $x \in S$, S = {all pixels in the selected block}. Because the attenuation speed of red light in water is the fastest, the difference in water light is also very large. Therefore, if the difference between the R-G channel and the R-B channel of a pixel is very large, it can be concluded that the pixel is likely a candidate pixel for water light. To identify the pixels, the index function $\textrm{Ind}({\ast})$ is used to sort the differences between ${D_g}$ and ${D_b}$ in descending order:
$$\textrm{In}{\textrm{d}_{c^{\prime}}}(n)= \textrm{sort}({D_{c^{\prime}}}(x)),\; \; c^{\prime} \in \{{g,b} \},$$
where $\textrm{In}{\textrm{d}_{c^{\prime}}}(n)$ outputs the sorted index (coordinates). The intersection $Z = \textrm{In}{\textrm{d}_\textrm{g}}({n \le N})\cap \textrm{In}{\textrm{d}_\textrm{b}}({n \le N})$ obtains the indices of the maximum values in ${D_g}$ and ${D_b}$, with N representing the parameter that controls the number of elements in the intersection. Therefore, the final estimate of the water light, B, is expressed as follows:
$${B^c} = {I^c}\left( {\textrm{arg}\mathop {\max }\nolimits_{x \in Z} \sum {I^c}(x)} \right),\; c \in \{{r,g,b} \},$$

This method uses the largest pixel in the red channel as the final water light (Fig. 3, small red circle).
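
A rough sketch of the selection procedure of Eqs. (17)-(19) is shown below; for simplicity it scans all pixels rather than the empirically segmented low-variation patches described above, so it should be read as an illustration rather than the authors' code.

```python
# Sketch of the water-light selection of Eqs. (17)-(19), applied to all pixels
# for simplicity. Not the authors' implementation.
import numpy as np

def estimate_water_light(I, N=1000):
    """I: float image (H, W, 3); returns the water light B as an RGB triple."""
    flat = I.reshape(-1, 3)
    D_g = flat[:, 1] - flat[:, 0]                  # I^g - I^r  (Eq. (17))
    D_b = flat[:, 2] - flat[:, 0]                  # I^b - I^r
    idx_g = np.argsort(-D_g)[:N]                   # descending sort (Eq. (18))
    idx_b = np.argsort(-D_b)[:N]
    Z = np.intersect1d(idx_g, idx_b)               # intersection of the two index sets
    if Z.size == 0:
        Z = idx_g                                  # fallback, purely illustrative
    best = Z[np.argmax(flat[Z].sum(axis=1))]       # brightest candidate (Eq. (19))
    return flat[best]
```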

3.3 Transmission estimation

According to Eq. (9), transmission estimation comprises two elements: 1) relative transmission estimation ($t_c^n$) describing estimation of the relative transmission between the farthest scene point and the nearest scene point and 2) the attenuation factor ${K_c}$ estimate, which describes the attenuation from the camera to the nearest scene point. This method is divided into six steps: 1) find the attenuation-curve, 2) estimate $t_c^n$, 3) refine the transmission, 4) adjust the RSM, 5) estimate the attenuation factor, and 6) adjust the transmission of the color channel.

3.3.1 Determining the attenuation-curve

We classified the pixels of the image into the attenuation-curve of the RGB space (Fig. 5); however, it was difficult to directly cluster the pixels into curves according to the RGB coordinates. Therefore, we calculated the logarithm of Eq. (14):

$$\ln |{{I^{c^{\prime}}} - {B^{c^{\prime}}}} |= {\delta _{c^{\prime}}}\ln |{{I^r} - {B^r}} |+ \ln \frac{{{K_{c^{\prime}}}({{J^{c^{\prime}}} - {B^{c^{\prime}}}})}}{{{{[{K_r}({{J^r} - {B^r}})]}^{{\delta _{c^{\prime}}}}}}},\; c^{\prime} \in \{{g,b} \},$$


Fig. 5. Determining the attenuation-curve. (a) The power function relationship between color channels. (b) The linear relationship between color channels. (c) Clustering of the rotating coordinates.


In the RGB space, $\ln |{{I^r} - {B^r}} |$ has a linear relationship with $\ln |{{I^{c^{\prime}}} - {B^{c^{\prime}}}} |$ (i.e., the slope of the straight line is the same, but the intercept is different). The intercept is determined by the brightness of the scene. The clustered curves can be classified into straight lines, and the four curves can be transformed into straight lines by applying the logarithm [Fig. 5(b)].

By rotating the three-dimensional coordinate system so that the r-axis is parallel to the straight-line direction, a KD-Tree is constructed according to the predefined mosaic in the rotated GB plane, and the tree is queried according to the rotated G and B coordinates, thereby effectively clustering pixels. The clusters in the rotated RGB system are shown in Fig. 5(c). The attenuation-curve can be obtained only by rotating the G and B coordinates of the pixel.
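
One way to realize this clustering is sketched below (an illustration, not the authors' code): the pixels are mapped to log coordinates, projected onto the plane orthogonal to the attenuation direction $(1,{\delta _g},{\delta _b})$, and assigned to a predefined grid of centers with a KD-Tree; the grid step is an arbitrary choice.

```python
# Illustrative clustering sketch for Sec. 3.3.1; grid_step is an arbitrary choice.
import numpy as np
from scipy.spatial import cKDTree

def cluster_pixels(I, B, delta_g, delta_b, grid_step=0.1):
    """Assign every pixel of I (H, W, 3) to an attenuation-curve index."""
    logd = np.log(np.abs(I.reshape(-1, 3) - B) + 1e-6)        # ln|I^c - B^c| per channel
    v = np.array([1.0, delta_g, delta_b])
    v /= np.linalg.norm(v)                                    # common slope direction of the lines
    u1 = np.cross(v, np.array([0.0, 0.0, 1.0]))
    u1 /= np.linalg.norm(u1)
    u2 = np.cross(v, u1)                                      # u1, u2 span the rotated "G-B" plane
    coords = np.stack([logd @ u1, logd @ u2], axis=1)         # coordinates orthogonal to v
    # Predefined mosaic of cluster centres covering the observed range.
    g1 = np.arange(coords[:, 0].min(), coords[:, 0].max() + grid_step, grid_step)
    g2 = np.arange(coords[:, 1].min(), coords[:, 1].max() + grid_step, grid_step)
    centres = np.stack(np.meshgrid(g1, g2), axis=-1).reshape(-1, 2)
    _, labels = cKDTree(centres).query(coords)                # nearest centre = curve index
    return labels
```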

3.3.2 Estimating $t_c^n$

We re-expressed the underwater model as follows:

$$|{{I^c}(x)- {B^c}} |= t_c^n(x)\cdot {K_c}|{{J^c}(x)- {B^c}} |,\;x \in L,\; c \in \{{r,g,b} \},$$
where L represents a curve and $x \in L$ represents the pixels on the curve. When $t_c^n(x)\to 1$, the intensity of the pixel, ${I^c}(x)$, approaches the degraded scene brightness ${K_c}{J^c}(x)$; when $t_c^n(x)\to 0$, ${I^c}(x)$ is close to the water light, ${B^c}$. This suggests that the maximum value of $|{{I^c}(x)- {B^c}} |$ is close to ${K_c}|{{J^c}(x)- {B^c}} |$, and the pixel-by-pixel transmission obtained for each curve, $\widetilde {t_c^n}(x)$, is represented as follows:
$$\widetilde {t_c^n}(x)= \frac{{|{{I^c}(x)- {B^c}} |}}{{\mathop {\max }\nolimits_{x \in L} |{{I^c}(x)- {B^c}} |}}\; ,\; c \in \{{r,g,b} \},$$

The maximum value of $|{{I^c}(x)- {B^c}} |$ represents the nearest scene point. Because the brightness of the scene point has been degraded by ${K_c}$, the maximum value of $|{{I^c}(x)- {B^c}} |$ is used to estimate ${K_c}|{{J^c}(x)- {B^c}} |$, resulting in a more accurate outcome.

In summary, ${\delta _g}$ and ${\delta _b}$ can be derived from Eq. (12), and the transmission of the green and blue channels can then be derived from Eq. (10). Therefore, to estimate the transmission of all color channels, we only need to calculate the relative transmission of the red channel, $\widetilde {t_r^n}(x)$.
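
A per-curve version of Eq. (22) for the red channel could look like the following sketch, where labels is the curve index produced by the clustering step (illustrative only):

```python
# Sketch of the per-curve relative-transmission estimate of Eq. (22) for the red
# channel. labels is the per-pixel curve index from the clustering step above.
import numpy as np

def relative_transmission_red(I, B, labels):
    diff = np.abs(I.reshape(-1, 3)[:, 0] - B[0])             # |I^r(x) - B^r|
    t_n = np.empty_like(diff)
    for curve in np.unique(labels):
        on_curve = labels == curve
        t_n[on_curve] = diff[on_curve] / (diff[on_curve].max() + 1e-8)   # Eq. (22)
    return t_n.reshape(I.shape[:2])
```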

3.3.3 Refined transmission

If a curve is short and close to the water light, or the background area is too far away, the value estimated using this method will be inaccurate or possibly wrong. Based on the assumption that depth varies smoothly, we applied a weighted least squares (WLS) filter [35,36]:

$$\mathop {\min }\nolimits_{\hat{t}_r^n} \mathop \sum \nolimits_x {\left[ {\frac{{\hat{t}_r^n(x)- \tilde{t}_r^n(x)}}{{\sigma_r^2(x)}}} \right]^2} + \delta \mathop \sum \nolimits_x \mathop \sum \nolimits_{y \in {N_x}} \frac{{{{[{\hat{t}_r^n(x)- \hat{t}_r^n(y)} ]}^2}}}{{{{\|{I(x)- I(y)} \|}^2}}}\; \; ,$$
where $\hat{t}_r^n(x)$ represents refined transmission, $\delta $ is a regularization parameter used to adjust data and smoothness, ${N_x}$ represents four adjacent values of x, and ${\sigma _r}(x)$ represents the parameter used to measure the hypothetical reliability of $\tilde{t}_r^n(x)$. Here, we set $\frac{1}{{\sigma _r^2(x)}} \in [{0,1} ]$.

For each curve L, we computed the standard deviation $\textrm{st}{\textrm{d}_{x \in L}}({{I^r}(x)})$ of the red channel of the pixels on the curve and then normalized it by the maximum standard deviation over all curves, giving ${\widetilde {\textrm{std}}_{x \in L}}({{I^r}(x)})$. The larger the deviation of a curve, the more reliable is $\tilde{t}_r^n(x)$. Additionally, the larger the number of pixels on a curve, the more reliable the assumption of $\tilde{t}_r^n(x)$. We expressed the counting reliability of each curve as follows:

$$\textrm{coun}{\textrm{t}_{\textrm{reliability}(L)}} = \min \left( {1,\frac{{n(L)}}{{50}}} \right)\; ,$$

Thus, the reliability parameter ${\sigma _r}(x)$ on each curve is obtained:

$$\frac{1}{{{\sigma _r}(x)}} \propto {\widetilde {\textrm{std}}_{x \in L}}({{I^r}(x)})\cdot \textrm{count}\_\textrm{reliability}(L),$$

A previous study showed that a larger $\frac{1}{{{\sigma _r}(x)}}$ increased the reliability of the assumption of $\tilde{t}_r^n(x)$ [35], because the first term of Eq. (23) plays a role in refining the original relative transmission.
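
The reliability weights of Eqs. (24) and (25) could be computed as in the sketch below; the WLS solve of Eq. (23) itself is omitted, and the normalization follows the description above (illustrative, not the authors' code):

```python
# Sketch of the reliability weights of Eqs. (24)-(25) that feed the WLS refinement
# of Eq. (23); the WLS solve itself is omitted.
import numpy as np

def reliability_weights(I_red, labels):
    """Returns 1/sigma_r(x) for every pixel, in [0, 1]."""
    I_red = I_red.ravel()
    weights = np.zeros_like(I_red, dtype=np.float64)
    stds = {c: I_red[labels == c].std() for c in np.unique(labels)}
    std_max = max(stds.values()) + 1e-8
    for c, std in stds.items():
        on_curve = labels == c
        count_rel = min(1.0, on_curve.sum() / 50.0)          # Eq. (24)
        weights[on_curve] = (std / std_max) * count_rel      # Eq. (25), normalized std
    return weights
```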

3.3.4 ARSM

We then address exposure issues, including the effects of artificial light (AL) and uneven lighting. Because an area with higher red channel intensity in an underwater blue-green image, which is typically affected by AL, has lower saturation in the color space, we defined the chromaticity purity of the pixel [11] as follows:

$$\begin{aligned} Sat({{I^c}(x)}) &= \mathop \sum \limits_{i = 1}^M \mathop \sum \limits_{j = 1}^N [1 - \frac{{\textrm{min}({{I^c}(x)})}}{{\textrm{max}({{I^c}(x)})}}]\\ Sat({{I^c}(x)}) &= 1,\; \; \textrm{if}\; \max ({{I^c}(x)})= 0, \end{aligned}$$

A color loses its saturation upon the addition of white light containing energy at all wavelengths. Therefore, saturation in a certain area of an image is reduced by adding white light to the area. In an underwater image, the saturation of a scene without AL will be much greater than that of an artificially illuminated area. This phenomenon can be represented by the following reverse saturation map (RSM), where a high RSM value usually denotes an AL-affected area.

$$Sa{t^{rev}}(x)= 1 - Sat({{I^c}(x)}),$$
where $Sa{t^{rev}}(x)$ is the RSM function, which optimizes the estimated refined transmission to reduce the influence of uneven light and artificial illumination. We introduced a fitting parameter $\mathrm{\lambda } \in [{0,\;1} ]$ as an effective scalar multiplier.
$$Sat_{adj}^{rev}(x)= \mathrm{\lambda }{\ast} Sa{t^{rev}}(x),$$
where $Sat_{adj}^{rev}(x)$ represents the ARSM. To effectively improve light uniformity, $\mathrm{\lambda }$ was set to 0.7.

As shown in Fig. 6, transmission estimation is inaccurate when the light is uneven or artificial light is applied. Moreover, the comparison of RSM and ARSM processing of the original image reveals that the overall brightness of the image is improved by the ARSM.

$$t_c^n(x)= \mathop \sum \nolimits_{i = 1}^M \mathop \sum \nolimits_{j = 1}^N \textrm{max}({t_r^n({i,j}),Sat_{adj}^{rev}({i,j})}),$$


Fig. 6. ARSM. (a) Original image. (b) Inaccurate transmission estimation. (c) RSM and (d) ARSM.


According to ARSM and Eq. (29), transmission estimation was further modified to reduce the intensity of the light while maintaining the intensity of other areas in the image.
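
The saturation, RSM, ARSM, and the correction of Eq. (29) can be summarized per pixel as in the following sketch (the per-pixel form of Eq. (26) is assumed; not the authors' code):

```python
# Sketch of Eqs. (26)-(29): per-pixel saturation, its reverse map (RSM), the
# adjusted reverse map (ARSM, lambda = 0.7), and the element-wise maximum used
# to correct the relative transmission.
import numpy as np

def adjust_with_arsm(I, t_r_n, lam=0.7):
    mx = I.max(axis=2)
    mn = I.min(axis=2)
    sat = np.where(mx > 0, 1.0 - mn / (mx + 1e-8), 1.0)      # Eq. (26), per pixel
    sat_rev = 1.0 - sat                                       # RSM, Eq. (27)
    sat_rev_adj = lam * sat_rev                               # ARSM, Eq. (28)
    return np.maximum(t_r_n, sat_rev_adj)                     # Eq. (29)
```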

3.3.5 Estimating the attenuation factor

We assumed that there exists a brightest or darkest pixel, α, with ${J^c}({x = \alpha })= 1$ or 0, that satisfies the following formulas:

$$q = \left\{ \begin{array}{c} {{B^c},\; \; {B^c} > 1 - {B^c}}\\ {1 - {B^c},{B^c} < 1 - {B^c}} \end{array} \right.\; ,$$
$$\mathop {\max }\nolimits_{c \in \{{r,g,b} \}} \frac{{|{{J^c}(\alpha)- {B^c}} |}}{q} = 1\; ,$$
${K_c}|{{J^c}(\alpha)- {B^c}} |$ for all pixels in the image can be estimated by $\textrm{max}|{{I^c}(x)- {B^c}} |$, where q is a variable that represents the larger of ${B^c}$ and $1 - {B^c}$. Simplifying the three attenuation factors into one factor (namely, ${K_c} = K$) and substituting $K|{{J^c}(\alpha)- {B^c}} |= \textrm{max}|{{I^c}(x)- {B^c}} |$ into Eq. (31) returns the following:
$$K = \mathop {\max }\nolimits_{c \in \{{r,g,b} \}} \frac{{\textrm{max}|{{I^c}(x)- {B^c}} |}}{q}\; ,$$

This yields the relative transmission, $\hat{t}_r^n(x)$, and K, from which the recovered transmission, ${\hat{t}_r}(x)$, is obtained via Eq. (9).
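
A compact sketch of Eqs. (30)-(32) for the single attenuation factor K is given below (illustrative only):

```python
# Sketch of the single attenuation factor K of Eqs. (30)-(32). Not the authors' code.
import numpy as np

def attenuation_factor(I, B):
    flat = I.reshape(-1, 3)
    K = 0.0
    for c in range(3):
        q = max(B[c], 1.0 - B[c])                             # Eq. (30)
        K = max(K, np.abs(flat[:, c] - B[c]).max() / q)       # Eqs. (31)-(32)
    return K
```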

3.3.6 Adjusting color channel transmission

The reliability of the attenuation ratio depends on accurate estimation of the water light, B, from which the transmission of the other channels is derived. Although a smaller ${\delta _g}$ will reduce the range of ${t_g}$ and the contrast of the green channel of the image, a larger ${\delta _g}$ will bring the range of ${t_g}$ closer to that of ${t_r}$ and restore the appearance of dark or saturated pixels. We therefore defined a self-adjusting ratio, $\delta _c^{\prime}$, to adjust the transmission, and used Eq. (8) and Eq. (9) to express the resulting saturation constraint of the restored image as follows:

$$0 \le \frac{{{I^c}(x)- {B^c}}}{{K \cdot \hat{t}_r^n{{(x)}^{\delta _c^{\prime}}}}} + {B^c} \le 1\; ,$$

Leading to

$$\left\{ {\begin{array}{l} {\delta_c^{\prime} \ge \ln \left( {\frac{{{I^c}(x)- {B^c}}}{{K({1 - {B^c}})}}} \right)/\ln ({\hat{t}_r^n(x)})+ {\varepsilon_c}\; ,\; \textrm{if}\; {I^c}(x)> {B^c}}\\ {\delta_c^{\prime} \ge \ln \left( {\frac{{{B^c} - {I^c}(x)}}{{K \cdot {B^c}}}} \right)/\ln ({\hat{t}_r^n(x)})+ {\varepsilon_c}\; ,\; \textrm{if}\; {I^c}(x)< {B^c}} \end{array}} \right.,$$
where $\delta _c^{\prime} \in [{{\delta_c},1} ]$, and ${\delta _c} = \textrm{max}({{\delta_g},{\delta_b}})$. To improve the estimation accuracy, we used ${\varepsilon _c}$ to improve image contrast, where ${\varepsilon _c}$ represents tolerance.

3.4 Scene lighting recovery

After obtaining water light and final transmission estimates, the scene brightness, $J$, is calculated as follows:

$${J^c}(x)= \frac{{{I^c}(x)- {B^c}}}{{\textrm{max}({K \cdot \hat{t}_r^n{{(x)}^{\delta_c^{\prime}}},{t_1}})}} + {t_0}{B^c}\; ,\; c \in \{{r,g,b} \},$$
where ${t_0}$ and ${t_1}$ are constants. As previously described [36], we set ${t_0}$ to $\frac{{2e}}{5}$ and ${t_1}$ to 0.1, which improved image contrast.
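
Putting the pieces together, Eq. (35) could be applied as in the sketch below, where delta holds the adjusted per-channel exponents (1 for the red channel, $\delta _c^{\prime}$ for green and blue) and t0, t1 follow the values stated above (illustrative, not the authors' code):

```python
# Sketch of the radiance recovery of Eq. (35). delta = (1, delta_g', delta_b'),
# t0 and t1 follow the values given in the text; the final clip is an added safeguard.
import numpy as np

def recover_scene(I, B, K, t_r_n, delta, t0=2 * np.e / 5, t1=0.1):
    J = np.empty_like(I, dtype=np.float64)
    for c in range(3):
        t_c = np.maximum(K * t_r_n ** delta[c], t1)           # max(K * t^delta, t1)
        J[..., c] = (I[..., c] - B[c]) / t_c + t0 * B[c]      # Eq. (35)
    return np.clip(J, 0.0, 1.0)
```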

3.5 Image enhancement

At this point in the process, de-noising, de-hazing, and light brightness restoration have been completed, and image clarity and visual effects have been greatly improved; however, color enhancement and edge detail preservation remain inadequate. The proposed image enhancement method uses white balance fusion G-GIF technology based on the best gain factor [37,38]:

$$\left\{ {\begin{array}{c} {{P_o} = \frac{{{P_i}}}{{{\varphi_{max}} \times \left( {\frac{\mu }{{{u_{ref}}}}} \right) + {\varphi_v}}}}\\ {{u_{ref}} = \sqrt {{{({{u_r}})}^2} + {{({{u_g}})}^2} + {{({{u_b}})}^2}} } \end{array}} \right.\; \; ,$$
where ${P_o}$ and ${P_i}$ represent the color-corrected image and the initial underwater image, respectively; ${u_r}$, ${u_g}$, and ${u_b}$ represent the average values of the RGB channels of the initial underwater image; and ${\varphi _{max}}$ is estimated using the maximum value of the RGB channels of the initial image. The value of ${\varphi _v}$ is assigned between 0 and 0.5 to obtain the desired color, with a smaller ${\varphi _v}$ resulting in lower brightness of the corrected image. According to the experimental results, a ${\varphi _v}$ value of 0.26 resulted in an optimal image enhancement effect.
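
A possible reading of Eq. (36) is sketched below; here ${\varphi _{max}}$ is taken as the maximum of the per-channel means of the initial image, which is one interpretation of the description above, and the clipping at the end is an added safeguard (illustrative only):

```python
# Sketch of the gain-factor white balance of Eq. (36); phi_v = 0.26 as reported.
# phi_max is taken here as the maximum of the per-channel means (an assumption).
import numpy as np

def gain_factor_white_balance(P_i, phi_v=0.26):
    u = P_i.reshape(-1, 3).mean(axis=0)                       # per-channel means u_r, u_g, u_b
    u_ref = np.sqrt((u ** 2).sum())                           # Eq. (36), second line
    phi_max = u.max()
    P_o = P_i / (phi_max * (u / u_ref) + phi_v)               # applied per channel
    return np.clip(P_o, 0.0, 1.0)
```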

According to the application of a WLS filter [35] and G-GIF [39], the edge-preservation smoothing filter calculation is as follows:

$$\mathop {\min }\nolimits_\varphi \mathop \sum \nolimits_x \left[ {{{({\varphi (x)- {O^\ast }(x)})}^2} + \vartheta \left( {\frac{{{{\left( {\frac{{\partial \varphi (x)}}{{\partial x}}} \right)}^2}}}{{{{|{{V^h}(x)} |}^\theta } + \epsilon }} + \frac{{{{\left( {\frac{{\partial \varphi (x)}}{{\partial y}}} \right)}^2}}}{{{{|{{V^v}(x)} |}^\theta } + \epsilon }}} \right)} \right]\; ,$$
where $\vartheta $, $\theta $, and $\epsilon $ are all constants, ${O^\ast }$ represents the output image, and $\textrm{V} = ({{V^h},{V^v}})$ is defined as the guiding vector field.

The edge-preserving smoothing filter smooths the image according to the guiding vector field:

$${V^h}(x)= \frac{{\partial {O^\ast }(x)}}{{\partial x}}\; ,\; \; \; {V^v}(x)= \frac{{\partial {O^\ast }(x)}}{{\partial y}}\; ,$$

Similarly, the matrix representation is as follows:

$${({\varphi - {O^\ast }})^T}({\varphi - {O^\ast }})+ \vartheta ({{\varphi^T}D_x^T{B_x}{D_x}\varphi + {\varphi^T}D_y^T{B_y}{D_y}\varphi }),$$
where the matrices ${D_x}$ and ${D_y}$ represent discrete differential operators and the matrices ${B_x}$ and ${B_y}$ are described as:
$${B_x} = \textrm{diag}\left\{ {\frac{1}{{{{|{{V^h}(x)} |}^\theta } + \epsilon }}} \right\}\; ,\; \; {B_y} = \textrm{diag}\left\{ {\frac{1}{{{{|{{V^v}(x)} |}^\theta } + \epsilon }}} \right\}\; ,$$
$$({I + \vartheta ({D_x^T{B_x}{D_x} + D_y^T{B_y}{D_y}})})\varphi = {O^\ast }\; ,$$

Using the previously described fast-separation method allows these calculations to be completed quickly [40]. These processes allow the transformation of underwater images containing noise, haze, uneven light, and blurring into clear and high-quality images.

4. Results and evaluation

To demonstrate the performance of the proposed method, we conducted experimental comparisons with several state-of-the-art methods, including the new underwater dark channel prior (NUDCP) [11], a super-resolution convolutional neural network (SRCNN) [16], a fixed attenuation-curve prior [21], and a method based on blur and absorption [22]. The experimental dataset contained 300 underwater images, and a sample set included 18 underwater images (Fig. 7). We conducted qualitative and quantitative comparisons and a complexity analysis.


Fig. 7. Sample test image. From left to right and then top to bottom: “Image-1” through “Image-18”.


4.1 Qualitative estimation

Figure 8 compares the results of the restoration and enhancement stages of the proposed method. Moreover, we conducted experiments in different underwater scenes and compared the results of this article with existing advanced methods, as shown in Fig. 9 and Fig. 10. To further demonstrate the scope of this research, we applied the method to underwater images with different lighting conditions (including artificial light) and compared the results of the proposed method with those of well-developed optical methods, as shown in Fig. 11.


Fig. 8. Image comparison before and after enhancement using the proposed method. (a) Original image and that (b) before and (c) after enhancement.



Fig. 9. Qualitative comparison of image enhancement between the proposed method and currently used methods. (a) Original image, the resulting image, and corresponding water light generated by different methods. Results for (b) SRCNN [16], (c) fixed attenuation-curve [21], (d) blur and absorption [22], (e) NUDCP [11], and (f) the proposed method.



Fig. 10. Comparison of underwater images with different scenes between the proposed method and currently used methods. (a) Original image and those processed using (b) SRCNN [16], (c) fixed attenuation-curve [21], (d) blur and absorption [22], (e) NUDCP [11], and (f) the proposed method.



Fig. 11. Examples of underwater image enhancement in extreme scenes with uneven lighting conditions and artificial light environments. From left to right are (a) the original images, and the results generated by (b) [1], (c) [3], (d) [4], (e) [19], (f) [20], and (g) the proposed method.


An underwater image restored using preprocessing and the proposed adaptive attenuation-curve prior method is shown in Fig. 8(b), clearly revealing the removal of the noise and haze interference observed in the original image and the restoration of the image outline and brightness; however, the edge details and color saturation of the restored image were not ideal. The image further enhanced by the white balance fusion G-GIF method is shown in Fig. 8(c): the enhancement increased the clarity of image details, produced more realistic colors, and significantly improved the brightness of the image (including locally lit areas). Therefore, the enhancement method in this paper improves the overall brightness, color saturation, and edge detail preservation of the image.

For water light estimation, we established 6 × 6 areas with a TV threshold of 0.9 and assigned 900 clusters to the image and 3600 clusters to the composite image. The value of δ used in Eq. (23) was 0.05.

Image-1 in Fig. 9 is a turbid green underwater image in which the background is too bright and full of haze. We found that the application of the previous methods and the proposed method resulted in different degrees of image improvement. SRCNN [16] removed the haze and turbidity well and resulted in a clearer edge texture, although the image background was not ideal and halation was observed. Both the method based on a fixed attenuation-curve [21] and that based on blur and absorption [22] partially improved but did not eliminate image turbidity, resulting in a slightly clearer image, whereas the blur and absorption method ignored the attenuation associated with the distance from the camera to the nearest scene point and resulted in an image that appeared too green with a background that was too bright. By contrast, NUDCP [11] achieved much better results, with greatly improved image sharpness and outline; however, the background color and brightness were not ideal. The proposed method effectively addressed the shortcomings of all four methods, not only removing image haze and turbidity and retaining clear edge details but also appropriately presenting the image background color and brightness.

Image-2 in Fig. 9 is an underwater image obtained under weak light and blurry conditions. SRCNN [16] improved the clarity of the background, but the overall light intensity was too weak. The fixed attenuation-curve method [21] and the method based on blur and absorption [22] improved the sharpness of the edge contours to a certain extent but did not solve the problem of image blur. Additionally, NUDCP [11] improved image edge detail and color; however, the excessive light intensity caused local distortion, which decreased image quality. By contrast, the proposed method effectively de-hazed and de-blurred the image while also enhancing the colors and preserving the edge details.

Image-3 in Fig. 9 contains a bright foreground and a dark background. For this scene, SRCNN [16] alleviated the problems of blur and a darkened background and effectively preserved the edge details of the image. The fixed attenuation-curve method [21] enhanced the image color and reduced the brightness of the foreground but failed to address the darkened background. The blur and absorption method [22] addressed issues of foreground and background illumination, although a darker water light value could have increased the intensity of the restored image, and the edge details required further improvement. Additionally, NUDCP [11] was unable to improve the bright foreground and dark background. By contrast, the proposed method effectively de-hazed the image, enhanced its color, and preserved the edge detail while adjusting the light intensity. Additional experimental results demonstrating the effectiveness of the method are shown in Fig. 10.

4.2 Quantitative analysis

We then used image entropy, the gradient average (AVG), and the underwater image quality metric (UIQM) [22,41] to further evaluate the performance of the proposed method. The entropy of an image characterizes its statistics according to the average information content; the entropy value intuitively reflects the quality and clarity of an image. The AVG not only reflects the degree of image clarity but also indicates changes in edge texture details; a larger AVG value corresponds to less blur. The UIQM is a linear combination of color, clarity, and contrast and comprises the underwater image chromaticity measurement (UICM), the underwater image sharpness measurement (UISM) [42], and the underwater image contrast measurement (UIConM) [43]; a larger UIQM value suggests better image enhancement performance. The UIQM is calculated as follows:

$$\textrm{UIQM} = \textrm{a}1\ast \textrm{UICM} + \textrm{a}2\ast \textrm{UISM} + \textrm{a}3\ast \textrm{UIConM}\;,$$
where $\textrm{a}1$ is set to 0.0282, $\textrm{a}2$ to 0.2953, and $\textrm{a}3$ to 3.5753. The parameters ${\alpha _L} = {\alpha _R} = 0.1$ when calculating the UICM; for the UISM, the patch size is set to 8 × 8, and the coefficients ${\mathrm{\delta }_1}$, ${\mathrm{\delta }_2}$, and ${\mathrm{\delta }_3}$ are 0.299, 0.587, and 0.114, respectively.
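
For reference, the combination in Eq. (42) amounts to a single weighted sum; the UICM, UISM, and UIConM scores are assumed to be computed elsewhere by their respective metrics:

```python
# Sketch of the UIQM combination of Eq. (42); the three component scores are
# assumed to be computed elsewhere.
def uiqm(uicm, uism, uiconm, a1=0.0282, a2=0.2953, a3=3.5753):
    return a1 * uicm + a2 * uism + a3 * uiconm
```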

The quantified results of the underwater images enhanced using each method are shown in Table 1. The results showed that because SRCNN [16] is better than the other three methods at processing edge details, the entropy and AVG values were slightly higher; however, its ability to address distant background light was less effective relative to the proposed method, resulting in inferior overall image brightness. Moreover, the proposed method outperformed all of the tested methods in terms of image clarity, light comfort, and color saturation, resulting in the highest entropy and AVG values.


Table 1. Comparison of the entropy and AVG of the enhanced underwater images.

Eighteen sample images and 300 underwater images were then used to confirm the effectiveness of the proposed method. We divided the 300 underwater images into three groups (G1, G2, and G3) (Table 2). In the table, Avg. represents the average value obtained. The results showed that the proposed method returned the highest UIQM values for the underwater images, indicating its superiority in image de-hazing, de-blurring, edge detail preservation, and color enhancement.


Table 2. UIQM comparison between methods.

From Fig. 10, we observed that SRCNN [16] achieves a good de-hazing effect and improves the edge details of the image, but the resulting image background tends to be dark. The fixed attenuation-curve method [21] improves the brightness of the background light but does not improve image sharpness. The blur and absorption method [22] achieves de-hazing and color enhancement, but the edge details of the image do not achieve good results. NUDCP [11] has significant effects on image de-hazing and color enhancement, but it does not solve the problem of excessive local light, which can easily cause halos. The method in this paper not only achieves good de-hazing and color enhancement effects but also achieves good results in terms of edge details and light brightness.

In Fig. 11, image 1 shows a scene with an uneven lighting environment, and images 2 and 3 show environments with artificial light. The method of [1] can handle uneven lighting well, but it does not effectively improve the image color; the other methods show different degrees of color distortion and blur. For images 2 and 3, the methods of [3] and [20] can effectively process images in artificial light environments, whereas the other methods suffer from dim light or distortion to varying degrees. Through these comparison experiments across different scenarios, we determined that the proposed method outperforms the other existing methods.

The experimental analysis results are shown in Table 2. SRCNN [16], which uses a deep learning approach, achieves better image sharpness and color saturation than the fixed attenuation-curve method [21]. NUDCP [11] is generally better than the other three comparison methods because it proposed a statistical model of background light combined with transmission map optimization to eliminate underwater haze and improve image clarity. The UIQM value of the method in this paper is the highest because this paper adopts the adaptive attenuation-curve prior and the G-GIF method, which better achieve image de-hazing, color enhancement, edge detail preservation, and light intensity adjustment.

4.3 Complexity analysis

The proposed method estimates each curve in a pixel-by-pixel manner. For an image of m × n pixels, the complexity of the initial transmission estimation is O(m × n). According to previous evaluation and analysis [35], the computational complexity of the refined transmission is also O(m × n), and the KD-Tree provides the fastest pixel clustering (Table 3). Here, we used an Intel Xeon E5-1630 v3 3.7 GHz CPU and MATLAB R2015b for our analyses. The calculation times of the SRCNN [16], fixed attenuation-curve [21], blur and absorption [22], and NUDCP [11] methods were all significantly longer than that of the proposed method when processing underwater images at various resolutions. The comparison values are presented in Table 3.


Table 3. Comparison of the computation time required by each method.

As shown in Table 3, the required calculation time increases as the number of pixels increases. The times required by SRCNN [16] and the fixed attenuation-curve method [21] are relatively long, which is related to the number of runs of these methods. In short, the method in this paper takes the shortest time to process images at different resolutions, so its efficiency is the highest.

4.4 Application to images obtained from a non-underwater environment

Although the proposed method was developed for enhancing underwater images, it can also be applied to enhance images not obtained underwater. To demonstrate its effectiveness for this application, we compared its performance against methods currently applied for defogging [14,39], image restoration [44], and general image enhancement [8].

A detail-preserving multiple-exposure image enhancement method was proposed in [14]: to preserve details and improve the sharpness of the image, an energy equation is used to calculate local contrast and saturation, and a weighted multiple-exposure fusion is then used to generate the final image. A global guided image filtering method composed of a global structure transfer filter and a global edge-preserving smoothing filter was used to study the de-hazing of a single image [39]. A sigmoid function based on the contrast sensitivity of human brightness perception was proposed in [8]: the contrast sensitivity of the human retina is modeled as an exponential function of logarithmic intensity, and the sensitivity model is used as the exponent of Stevens' power law to derive the conversion function. In [44], depth-related color changes are first used to estimate the ambient light; the scene transmission is then estimated by calculating the difference between the observed intensity and the ambient light (the scene ambient-light difference); additionally, adaptive color correction is incorporated into the image formation model (IFM) to eliminate color cast while restoring contrast. After using the same attenuation coefficient for all color channels (${t_r} = {t_g} = {t_b}$) in Eq. (8), the results are shown in Fig. 12.


Fig. 12. Comparison of the ability of the proposed method to enhance images obtained from non-underwater scenes. (a) Original image and that enhanced using (b) [14], (c) [39], (d) [8], (e) [44], and (f) the proposed method.


As shown in Fig. 12, the proposed method achieved excellent image enhancement effects in processing non-underwater low-light images and atmospheric fog images. To further illustrate the effectiveness of the proposed method, we applied three image quality evaluation indicators: structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and lightness order error (LOE). SSIM evaluates the similarity of images according to contrast, exposure, and structure, with higher values denoting better similarity of the enhanced image. PSNR evaluates the error between corresponding pixels and is the most widely used objective evaluation index for images, with a higher value indicating that the processed image is closer to the real image. LOE represents the brightness order error between the original and the enhanced image, with a smaller value denoting a better processing effect. We analyzed a dataset of 400 non-underwater images and calculated the SSIM, PSNR, and LOE values (Table 4).

Table 4. Comparison of SSIM, PSNR, and LOE for different methods.

The effectiveness of the proposed method for performing defogging, detail preservation, and color enhancement of non-underwater images is illustrated in Fig. 12 and Table 4.

5. Discussion

Research on underwater optics plays a vital role in underwater activities and marine engineering [45], and the key problem is how to eliminate the attenuation of underwater light so that clear, high-quality underwater images can be obtained. The method proposed in this paper addresses this problem effectively. First, CCT is used to preprocess the image; experiments show that it provides a good foundation for subsequent operations such as haze and scattering removal. Next, the water light and transmission are estimated: the attenuation ratio between the color channels is obtained, the initial relative transmission is refined, and the attenuation factors and saturation constraints of the three color channels are calculated, so that the attenuation behavior of the image light is captured accurately. The ARSM is used to adjust local regions of excessive light (including artificial light) in the image. Finally, the image is restored from the accurate water-light and transmission estimates; it is simultaneously defogged and de-noised, and the effect of uneven light intensity is corrected.
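A minimal sketch of the ARSM adjustment mentioned above is given below: the per-pixel saturation is computed, reversed, scaled, and fused with the initial transmission. The scale factor lam and the pixel-wise maximum used for the fusion are simplifications of the full procedure, not the exact implementation.

import numpy as np

def adjusted_reverse_saturation_map(I, t_init, lam=0.3):
    # Saturation per pixel: Sat = 1 - min_c(I_c) / max_c(I_c)  (Sat = 1 where max_c = 0);
    # reverse saturation map: 1 - Sat, scaled by lam (the adjusted reverse saturation map);
    # finally fused with the initial transmission t_init by a pixel-wise maximum.
    mx, mn = I.max(axis=2), I.min(axis=2)
    sat = np.where(mx > 0, 1.0 - mn / np.maximum(mx, 1e-6), 1.0)
    arsm = lam * (1.0 - sat)
    return np.maximum(t_init, arsm)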

Section 3.5 of this article implements the image enhancement stage, including brightness processing and the enhancement of color and edge detail. After the processing of the previous stage (before Section 3.5), the problem of excessively bright local light in the image (including that caused by artificial light) is largely resolved. However, experiments show that when the brightness of different areas in the same scene differs greatly, the overall image tends to appear dark; therefore, this paper adopts white balance fusion with G-GIF to address these problems, yielding the best enhancement of local color and edge texture. Compared with the comparison methods discussed in this paper, those methods handle shadows of varying degree less effectively and are weaker at improving image clarity and color saturation. The qualitative and quantitative evaluations both show that the proposed method performs best.
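As an illustration of the kind of operations involved in this stage, the sketch below combines a gray-world white balance with the classic local guided filter. It is only a simplified stand-in under our own assumptions: the white balance fusion and the globally guided image filtering (G-GIF) used in this paper are more elaborate than the local filter shown here.

import numpy as np
from scipy.ndimage import uniform_filter

def gray_world_white_balance(I):
    # Gray-world white balance: scale each channel so its mean matches the global mean.
    means = I.reshape(-1, 3).mean(axis=0)
    return np.clip(I * (means.mean() / np.maximum(means, 1e-6)), 0.0, 1.0)

def guided_filter(guide, src, radius=8, eps=1e-3):
    # Classic local guided filter, used here only as a simplified stand-in
    # for the global structure-transfer and edge-preserving smoothing of G-GIF.
    size = 2 * radius + 1
    mean_g, mean_s = uniform_filter(guide, size), uniform_filter(src, size)
    a = (uniform_filter(guide * src, size) - mean_g * mean_s) / \
        (uniform_filter(guide * guide, size) - mean_g ** 2 + eps)
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)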

6. Conclusions

In summary, we described the development of a method for enhancing underwater images based on an adaptive attenuation-curve prior. In contrast to a fixed attenuation-curve method, the proposed method simulated the light attenuation process for different underwater scenes, used smoothness and light attenuation to estimate the water light, clustered pixels onto attenuation curves according to the prior, and estimated the transmission of each curve. The attenuation factors and saturation constraints of the three color channels were then calculated to eliminate image oversaturation and noise, and the problem of uneven light intensity was addressed through the ARSM. Moreover, we applied white balance fusion with G-GIF based on the best gain factor to achieve color enhancement, edge-detail preservation, and light-intensity adjustment. The qualitative evaluation revealed that the proposed method improved image contrast and adjusted the uniformity of light, and the quantitative analysis indicated that it outperformed the other methods. Our future work will focus on addressing the remaining shortcomings in scenes where the light is too dim or too strong.

Funding

National Key Research and Development Program of China (2017YFC0804406).

Acknowledgments

We thank our teachers and classmates for their encouragement and guidance during the writing of this paper. We also thank the anonymous reviewers for their critical comments on the manuscript.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. Z. Duan, Y. Yuan, J. C. Lu, J. L. Wang, Y. Li, S. Svanberg, and G. Y. Zhao, “Underwater spatially, spectrally, and temporally resolved optical monitoring of aquatic fauna,” Opt. Express 28(2), 2600–2610 (2020). [CrossRef]  

2. T. Cui, Q. Song, J. Tang, and J. Zhang, “Spectral variability of sea surface skylight reflectance and its effect on ocean color,” Opt. Express 21(21), 24929–24941 (2013). [CrossRef]  

3. J. Y. Chiang and Y.-C. Chen, “Underwater image enhancement by wavelength compensation and dehazing,” IEEE Trans. Image Process. 21(4), 1756–1769 (2012). [CrossRef]  

4. J. Wei, Z. P. Lee, M. Lewis, N. Pahlevan, M. Ondrusek, and R. Armstrong, “Radiance transmittance measured at the ocean surface,” Opt. Express 23(9), 11826–11837 (2015). [CrossRef]  

5. Y. Wang, W. Song, G. Fortino, L. Qi, W. Zhang, and A. Liotta, “An experimental-based review of image enhancement and image restoration methods for underwater imaging,” IEEE Access 7, 140233–140251 (2019). [CrossRef]  

6. M. Mathur and N. Goel, “Enhancement of Underwater images using White Balancing and Rayleigh-Stretching,” in 2018 5th International Conference on Signal Processing and Integrated Networks (SPIN), Noida, 924–929 (2018).

7. C. Li, J. Guo, R. Cong, Y. Pang, and B. Wang, “Underwater Image Enhancement by Dehazing With Minimum Information Loss and Histogram Distribution Prior,” IEEE Trans. Image Process. 25(12), 5664–5677 (2016). [CrossRef]  

8. S. Park, Y. Shin, and S. Ko, “Contrast Enhancement Using Sensitivity Model-Based Sigmoid Function,” IEEE Access 7, 161573–161583 (2019). [CrossRef]  

9. Q. Shen, Y. Yao, J. S. Li, F. F. Zhang, S. L. Wang, Y. H. Wu, H. P. Ye, and B. Zhang, “A CIE Color Purity Algorithm to Detect Black and Odorous Water in Urban Rivers Using High-Resolution Multispectral Remote Sensing Images,” IEEE Trans. Geosci. Remote 57(9), 6577–6590 (2019). [CrossRef]  

10. A. S. A. Ghani and N. A. M. Isa, “Automatic system for improving underwater image contrast and color through recursive adaptive histogram modification,” Comput. Electron. Agricult. 141, 181–195 (2017). [CrossRef]  

11. W. Song, Y. Wang, D. Huang, A. Liotta, and C. Perra, “Enhancement of Underwater Images with Statistical Model of Background Light and Optimization of Transmission Map,” IEEE Trans. Broadcast. 66(1), 153–169 (2020). [CrossRef]  

12. H. Chang, C. Cheng, and C. Sung, “Single Underwater Image Restoration Based on Depth Estimation and Transmission Compensation,” IEEE J. Oceanic Eng. 44(4), 1130–1149 (2019). [CrossRef]  

13. C. Li, J. Quo, Y. Pang, S. Chen, and J. Wang, “Single underwater image restoration by blue-green channels dehazing and red channel correction,” in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, 1731–1735 (2016).

14. S. Liu and Y. Zhang, “Detail-Preserving Underexposed Image Enhancement via Optimal Weighted Multi-Exposure Fusion,” IEEE Trans. Consumer Electron. 65(3), 303–311 (2019). [CrossRef]  

15. G. Zibordi and M. Talone, “On the equivalence of near-surface methods to determine the water-leaving radiance,” Opt. Express 28(3), 3200–3214 (2020). [CrossRef]  

16. Y. Li, C. Ma, T. Zhang, J. Li, Z. Ge, Y. Li, and S. Serikawa, “Underwater Image High Definition Display Using the Multilayer Perceptron and Color Feature-Based SRCNN,” IEEE Access 7, 83721–83728 (2019). [CrossRef]  

17. P.-W. Pan, F. Yuan, and E. Cheng, “De-scattering and edge-enhancement algorithms for underwater image restoration,” Frontiers Inf. Technol. Electronic Eng. 20(6), 862–871 (2019). [CrossRef]  

18. H. Lu, D. Wang, Y. Li, J. Li, Z. Li, H. Kim, S. Serikawa, and I. Humar, “CONet: A Cognitive Ocean Network,” IEEE Wireless Commun. 26(3), 90–96 (2019). [CrossRef]  

19. H. Lu, Y. Li, T. Uemura, H. Kim, and S. Serikawa, “Low illumination underwater light field images reconstruction using deep convolutional neural networks,” Future Gener. Comput. Syst. 82, 142–148 (2018). [CrossRef]  

20. X. Yu, X. Xing, H. Zheng, X. Fu, Y. Huang, and X. Ding, “Man-Made Object Recognition from Underwater Optical Images Using Deep Learning and Transfer Learning,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, (Academic, 2018), pp. 1852–1856.

21. Y. Wang, H. Liu, and L.-P. Chau, “Single underwater image restoration using attenuation-curve prior,” in Proc. IEEE Int. Symp. Circuits Syst., May 2017.

22. Y. Peng and P. C. Cosman, “Underwater Image Restoration Based on Image Blurriness and Light Absorption,” IEEE Trans. Image Process. 26(4), 1579–1594 (2017). [CrossRef]  

23. H. Chang, “Single Underwater Image Restoration Based on Adaptive Transmission Fusion,” IEEE Access 8, 38650–38662 (2020). [CrossRef]  

24. X. Liu, G. Zhong, C. Liu, and J. Dong, “Underwater image color constancy based on DSNMF,” IET Image Process. 11(1), 1–12 (2017). [CrossRef]  

25. X. Ding, Y. Wang, J. Zhang, and X. Fu, “Underwater image dehaze using scene depth estimation with adaptive color correction,” in Proc. IEEE OCEANS Aberdeen, Jun. 2017, pp.1–5.

26. X. Ding, Y. Wang, Z. Liang, J. Zhang, and X. Fu, “Towards underwater image enhancement using super-resolution convolutional neural networks,” in Proc. Int. Conf. Internet Multimedia Comput. Service, (Academic, 2017), pp. 479–486.

27. K. Cao, Y.-T. Peng, and P. C. Cosman, “Underwater image restoration using deep networks to estimate background light and scene depth,” in Proc. IEEE Southwest Symp. Image Anal. Interpretation (SSIAI), (Apr. 2018), pp. 1–4.

28. J. Ahn, S. Yasukawa, T. Sonoda, Y. Nishida, K. Ishii, and T. Ura, “An Optical Image Transmission System for Deep Sea Creature Sampling Missions Using Autonomous Underwater Vehicle,” IEEE J. Oceanic Eng. 45(2), 350–361 (2020). [CrossRef]  

29. M. Oliveira, A. D. Sappa, and V. Santos, “A Probabilistic Approach for Color Correction in Image Mosaicking Applications,” IEEE Trans. Image Process. 24(2), 508–523 (2015). [CrossRef]  

30. H. N. Chaupis, “Generalization of the classical delay-and-sum technique by using nonlinear dirac-delta functions,” in 2017 IEEE XXIV International Conference on Electronics, Electrical Engineering and Computing (INTERCON), Cusco, (Aug. 2017), pp. 1–4.

31. X. Zhao, T. Jin, and S. Qu, “Deriving inherent optical properties from background color and underwater image enhancement,” Ocean Eng. 94, 163–172 (2015). [CrossRef]  

32. D. Huang, C. Wang, and J. Lai, “Locally Weighted Ensemble Clustering,” IEEE Trans. Cybernetics 48(5), 1460–1473 (2018). [CrossRef]  

33. C. O. Ancuti, C. Ancuti, C. De Vleeschouwer, and M. Sbetr, “Color Channel Transfer for Image Dehazing,” IEEE Signal Process. Lett. 26(9), 1413–1417 (2019). [CrossRef]  

34. C. O. Ancuti, C. Ancuti, C. De Vleeschouwer, L. Neumann, and R. Garcia, “Color transfer for underwater dehazing and depth estimation,” in 2017 IEEE International Conference on Image Processing (ICIP), Beijing, (July. 2017), pp. 695–699.

35. R. Li, J. Pan, M. He, Z. Li, and J. Tang, “Task-Oriented Network for Image Dehazing,” IEEE Trans. Image Process. 29, 6523–6534 (2020). [CrossRef]  

36. Z. Yu, T. Li, S. Horng, Y. Pan, H. Wang, and Y. Jing, “An Iterative Locally Auto-Weighted Least Squares Method for Microarray Missing Value Estimation,” IEEE Trans. Nano Biosci. 16(1), 21–33 (2017). [CrossRef]  

37. X. Ding, Y. Wang, J. Zhang, and X. Fu, “Underwater image dehaze using scene depth estimation with adaptive color correction,” OCEANS 2017 - Aberdeen, Aberdeen, (Academic, 2017), pp. 1–5.

38. M. Kumar and A. K. Bhandari, “Contrast Enhancement Using Novel White Balancing Parameter Optimization for Perceptually Invisible Images,” IEEE Trans. Image Process. 29, 7525–7536 (2020). [CrossRef]  

39. Z. Li and J. Zheng, “Single Image De-Hazing Using Globally Guided Image Filtering,” IEEE Trans. Image Process. 27(1), 442–450 (2018). [CrossRef]  

40. D. Min, S. Choi, J. Lu, B. Ham, K. Sohn, and M. Do, “Fast global image smoothing based on weighted least squares,” IEEE Trans. Image Process. 23(12), 5638–5653 (2014). [CrossRef]  

41. K. Panetta, C. Gao, and S. Agaian, “Human-visual-system-inspired underwater image quality measures,” IEEE J. Oceanic Eng. 41(3), 541–551 (2016). [CrossRef]  

42. D. Berman, D. Levy, S. Avidan, and T. Treibitz, “Underwater Single Image Color Restoration Using Haze-Lines and a New Quantitative Dataset,” IEEE Trans. Pattern Anal. Mach. Intell., Mar. 2020. [CrossRef]  

43. L. Ke and L. Xujian, “De-Hazing and Enhancement Methods for Underwater and Low-Light Images,” Acta Opt. Sin. 40(19), 1910003 (2020). [CrossRef]  

44. Y. Peng, K. Cao, and P. C. Cosman, “Generalization of the Dark Channel Prior for Single Image Restoration,” IEEE Trans. Image Process. 27(6), 2856–2868 (2018). [CrossRef]  

45. J. Zhang, L. Kou, Y. Yang, F. He, and Z. Duan, “Monte-carlo-based optical wireless underwater channel modeling with oceanic turbulence,” Opt. Commun. 475, 126214 (2020). [CrossRef]  
