
Effective method for low-light image enhancement based on the JND and OCTM models


Abstract

Low-light images typically suffer from dim overall brightness, low contrast, and a narrow dynamic range, which results in image degradation. In this paper, we propose an effective method for low-light image enhancement based on the just-noticeable-difference (JND) and the optimal contrast-tone mapping (OCTM) models. First, the guided filter decomposes the original images into base and detail images. After this filtering, the detail images are processed based on the visual masking model to enhance details effectively. At the same time, the brightness of the base images is adjusted based on the JND and OCTM models. Finally, we propose a new method to generate a sequence of artificial images to adjust the brightness of the output, which preserves image detail better than other single-input algorithms. Experiments demonstrate that the proposed method not only achieves low-light image enhancement, but also outperforms state-of-the-art methods qualitatively and quantitatively.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

With the continuous development of information technology and the widespread use of image processing in many fields, expectations of image quality have risen accordingly. Nonetheless, actual image capture is always affected by weather, light, and other factors, which undoubtedly leads to image degradation [1]. Images acquired under non-ideal lighting have many drawbacks; in particular, images acquired under non-uniform lighting conditions are often characterized by low contrast, low brightness, and a narrow dynamic range. This inevitably causes the texture of the source images to be submerged in dark areas, making the regions of interest difficult to identify. In addition, many related technologies have been widely used in digital photography [2,3], remote sensing [4,5], and machine vision [6,7]. Therefore, the development of image enhancement technologies plays a vital role in solving the above problems and significantly improving image quality.

Limited by low contrast, a narrow dynamic range, color distortion, and other defects, low-light images are difficult to use in tasks such as feature extraction and target recognition. Thus, how to preserve image naturalness and contrast while effectively enhancing detail information is an issue worthy of in-depth study.

To address the above shortcomings, an efficient and effective method for low-light image enhancement based on the just-noticeable-difference (JND) and optimal contrast-tone mapping (OCTM) models is put forward in this article [8,9]. The proposed method is partly inspired by Xu’s [10] and Su’s [11] work. Furthermore, our method avoids the drawbacks of traditional methods and maintains the naturalness of the images without losing detail information. First, the guided filter decomposes the original image into a base image and (multiple) detail image(s). Second, we extract and enhance detail images using the visual masking model, so that noise is not amplified in the processed images. We then adjust the brightness of the base images based on the JND and OCTM models. Finally, to reflect the characteristics of low-light images, an artificial image set is generated by a newly proposed remapping function from a single input image for subsequent image fusion. The remapping function enhances each channel of the input image separately to ensure that the contrast between the processed and original images remains unchanged. In addition, this method allows flexible and effective enhancement of low-light images taken under various conditions. Numerous experiments have shown that the proposed method yields better results than other state-of-the-art algorithms in qualitative and quantitative evaluations.

2. Related works

Up to the present, various low-light image enhancement methods have been intensively studied and applied in many fields. These algorithms fall into four main categories: histogram-equalization (HE)-based methods, Retinex-based methods, multi-scale fusion-based methods, and deep learning-based methods.

The HE-based methods increase the dynamic range and enhance the images’ contrast by adjusting the gray level’s probability density function [12]. A method based on exposure sub-image histogram equalization (ESIHE) was proposed by Singh et al. [13] to enhance low-light images. Unfortunately, the exposure thresholds affect the enhancement rate, which may lead to some distortions such as brightness diffusion in bright areas. Jung’s method [14] enhances contrast by combining the histogram equalization method with a tone mapping method. While this method does a good job of suppressing over-enhancement in smooth areas, it may restore limited sharpness at some edges.

Retinex theory was proposed based on the characteristics of human visual perception [15,16]. Using an image decomposition method based on Retinex theory, Pei et al. [17] not only compensated the illumination layer but also enhanced the reflectance layer, so the processed images are effectively enhanced. A novel low-light image enhancement method based on a Poisson-aware Retinex model was proposed by Kong et al. [18], which was the first to use the Poisson distribution to formulate the fidelity term in the Retinex model; however, it may not lead to an appropriate luminance. Fu et al. [19] proposed a low-light image enhancement method based on a probabilistic variational model for estimating illumination and reflectance maps in the linear domain. This method gives good results in most cases, but noise is amplified while enhancing useful information. A method called LIME was proposed by Guo et al. [6], which first constructs illumination by finding the maximum intensity of each pixel across the red-green-blue channels. However, this method may cause over-enhancement in bright regions.

Multi-scale fusion schemes have shown great potential in many computer vision applications with their powerful information extraction capabilities, and have been widely used in fields such as single-image dehazing and single-image super-resolution [20–23]. Fusion-based techniques are also an essential subset of low-light image enhancement methods. Xu et al. [10] proposed an image enhancement algorithm based on multi-scale fusion schemes. Experimental results have shown that this algorithm can process low-light images under various conditions. Owing to the advantages of the multi-scale fusion framework, the method compensates for details that may be lost by other methods. Hao et al. [24] proposed a method with semi-decoupled decomposition. Although this method can handle various types of low-light images, some under-exposed regions remain in the results.

Unlike traditional low-light image enhancement methods, deep learning-based approaches employ a data-driven strategy to enhance low-light images [25]. Jiang et al. [26] proposed a highly effective unsupervised generative adversarial network, dubbed EnlightenGAN, which effectively improves feature extraction and color preservation. Cai et al. [27] used a convolutional neural network to process decomposed high- and low-frequency images to enhance low-light images. Lv et al. [28] used a multi-branch image enhancement network to extract multi-level features from low-light images, with different subnets for targeted enhancement. A method that obtains the illumination map of the input image by a trainable convolutional neural network was proposed by Li et al. [29]; a Retinex-based method is then used to enhance low-light images. However, some deep learning methods often fail to yield visually natural results.

3. Proposed method

The proposed low-light image enhancement method based on the JND and OCTM models will be described in detail in this section. Moreover, the entire framework of the proposed method is illustrated in Fig. 1.

Fig. 1. Schematic diagram of the proposed low-light image enhancement via JND and OCTM models.

3.1 Image decomposition

Local edge-preserving filters (LEPF) are widely used in image processing algorithms to extract useful information for further processing [30]. The bilateral filter is a common and practical tool used to decompose images in a variety of situations. However, it can produce “gradient reversal” artifacts around edges, which severely affect the quality of the processed images. The guided filter is popular in image processing because of its good edge-preserving property and explicit filter kernel, and its computational complexity is independent of the window size. Importantly, with the help of the guided filter, the structure of the processed images is better preserved. The guided filter can be expressed as:

$${q_i} = {a_k} \times {g_i} + {b_k},\forall i \in {\omega _k}$$

The guided filter assumes a local linear relationship between the output image q and the guide image g [31], where $({a_k},{b_k})$ are the linear constant coefficients in the square window ${\omega _k}$ and r is the side length of the window. To derive these two parameters, a cost function is introduced, as described in [32]. The optimal solution is then obtained by taking partial derivatives as follows:

$${a_k} = \frac{{\frac{1}{{|\omega |}}\sum\limits_{i \in {\omega _k}} {{g_i} \times {I_i} - {\mu _k} \times \overline {{I_k}} } }}{{\sigma _k^2 + \varepsilon }}$$
$${b_k} = \overline {{I_k}} - {a_k} \times {\mu _k}$$

In (2) and (3), ${\mu _k}$ and $\sigma _k^2$ represent the mean and variance of the guide image g in the square window ${\omega _k}$, respectively; $|\omega |$ is the number of pixels in the window, and $\overline {{I_k}}$ is the mean of the input image I in the window. The output image is then given by (4).

$${q_i} = \overline {{a_i}} {g_i} + \overline {{b_i}} = \frac{1}{{|\omega |}}\sum\limits_{k \in {\omega _i}} {({a_k}{g_i} + {b_k})} ,\forall i \in g$$

After the filtering procedure, a smoothed base image ${B_0}$ and a detail image ${D_0}$ can be obtained:

$${B_0} = Guide(I),\textrm{ }{D_0} = I - {B_0}$$

To highlight the details of low-light images, we further decompose ${D_0}$ so that a detail image set is obtained as in (6). All the detail images are then enhanced according to their properties, as described in the following sections.

$$\begin{array}{c} {D_1} = {D_0} - Guide({D_0})\\ {D_2} = {D_1} - Guide({D_1})\\ \ldots \end{array}$$
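For illustration, the decomposition in Eqs. (1)–(6) can be sketched in Python as follows; the window radius r, regularization constant eps, and number of detail levels are illustrative choices rather than values prescribed by the paper.

```python
# Minimal sketch of the guided-filter decomposition (Eqs. (1)-(6)), assuming a
# single-channel float image in [0, 1] that serves as its own guide (g = I).
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, g, r=8, eps=1e-3):
    """Edge-preserving smoothing of I with guide g, Eqs. (1)-(4)."""
    mean = lambda x: uniform_filter(x, size=2 * r + 1)   # box mean over the window w_k
    mu_k = mean(g)                                       # mean of the guide
    I_bar = mean(I)                                      # mean of the input
    var_k = mean(g * g) - mu_k ** 2                      # variance of the guide
    a_k = (mean(g * I) - mu_k * I_bar) / (var_k + eps)   # Eq. (2)
    b_k = I_bar - a_k * mu_k                             # Eq. (3)
    return mean(a_k) * g + mean(b_k)                     # Eq. (4)

def decompose(I, r=8, eps=1e-3, levels=2):
    """Split I into a base image B0 and a list of detail images, Eqs. (5)-(6)."""
    B0 = guided_filter(I, I, r, eps)
    D, details = I - B0, []
    for _ in range(levels):
        details.append(D - guided_filter(D, D, r, eps))
        D = details[-1]
    return B0, details
```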

3.2 Enhancement of detail images based on the visual masking model

The detail images contain many tiny details, which determine the quality of the processed images. However, they also contain considerable noise from the input images, and if a fixed gain factor is applied to the entire image, this noise will undoubtedly be amplified. Psychophysical experiments have shown that the visibility of local texture information is influenced by the interaction between visual stimuli, which is known as the visual masking effect. This effect is a fundamental phenomenon in the perception of the human visual system (HVS), whereby the visibility of one visual component decreases when a different visual component is present. In general, noise is less visible in areas where image transitions are evident, while it is easily observed in smooth areas. In this paper, the visual masking effect is calculated from the brightness contrast (gradient magnitude) and visual regularity (gradient regularity) as follows [33]:

$${V_M}(x) = \frac{{1.84 \cdot {L_c}(x)}}{{{L_c}{{(x)}^2} + {{26}^2}}} \cdot \frac{{0.3 \cdot N{{(x)}^{2.7}}}}{{N{{(x)}^2} + 1}}$$
$$\begin{aligned} {L_c} &= \sqrt {\frac{{G_v^2(x) + G_h^2(x)}}{2}} \\ {G_h} &= I \ast {f_h}\\ {G_v} &= I \ast {f_v} \end{aligned}$$

In (7), ${L_c}(x)$ and $N(x)$ denote the gradient magnitude and the gradient regularity in a local region, respectively, and the gradient magnitude ${L_c}(x)$ is calculated as shown in (8).

In (8), ${\ast}$ denotes the convolution operator; ${G_h}$ and ${G_v}$ represent the horizontal and vertical gradients of image I, while ${f_h}$ and ${f_v}$ are the horizontal and vertical Prewitt gradient operators:

$${f_h} = \left( \begin{array}{ccc} 1/3&1/3&1/3\\ 0&0&0\\ - 1/3&- 1/3&- 1/3 \end{array} \right)$$
$${f_v} = \left( \begin{array}{ccc} 1/3&0&- 1/3\\ 1/3&0&- 1/3\\ 1/3&0&- 1/3 \end{array} \right)$$

The gradient regularity $N(x)$ is calculated as the gradient orientation complexity in local regions [33]. In this paper, the gradient direction complexity is represented by the number of different values quantifying the orientation differences of local regions. Furthermore, we can use the orientation difference between a pixel ${x_c}$ and its neighbor pixel ${x_{c + \delta }}$ to represent the complexity of the pixel, which is shown as:

$$N({x_c}) = {g_o}({x_c}) - {g_o}({x_{c + \delta }})$$
$${g_o}(x) = \arctan \frac{{{G_v}(x)}}{{{G_h}(x)}}$$

In (12), ${g_o}(x)$ is the gradient orientation at pixel x, and the orientation difference is quantized with an interval of $12^\circ$.
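As a concrete illustration, the masking term of Eqs. (7)–(12) can be computed as in the sketch below; the use of arctan2, the absolute orientation difference, and the one-pixel horizontal neighbor offset are implementation assumptions, not details specified in the paper.

```python
# Sketch of the visual-masking map V_M(x) of Eqs. (7)-(12) for a
# single-channel image; constants follow Eq. (7).
import numpy as np
from scipy.ndimage import convolve

f_h = np.array([[ 1,  1,  1],
                [ 0,  0,  0],
                [-1, -1, -1]]) / 3.0                    # Prewitt operator, Eq. (9)
f_v = np.array([[ 1,  0, -1],
                [ 1,  0, -1],
                [ 1,  0, -1]]) / 3.0                    # Prewitt operator, Eq. (10)

def visual_masking(I, delta=1, quant_deg=12.0):
    G_h = convolve(I, f_h)                              # horizontal gradient
    G_v = convolve(I, f_v)                              # vertical gradient
    L_c = np.sqrt((G_v ** 2 + G_h ** 2) / 2.0)          # gradient magnitude, Eq. (8)
    g_o = np.degrees(np.arctan2(G_v, G_h))              # gradient orientation, Eq. (12)
    g_o = np.round(g_o / quant_deg)                     # quantize orientations (12-degree step)
    N = np.abs(g_o - np.roll(g_o, -delta, axis=1))      # orientation difference, Eq. (11)
    return (1.84 * L_c / (L_c ** 2 + 26.0 ** 2)) * \
           (0.3 * N ** 2.7 / (N ** 2 + 1.0))            # masking response, Eq. (7)
```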

We define the noise visibility function as follows:

$$Nv(x) = \frac{1}{{{V_M}(x) \cdot \theta + 1}}$$

In (13), $\theta$ is an adjustment parameter, and $Nv(x)$ is a normalized function with values in [0,1]. Psychophysical experiments have confirmed that noise in flat areas of images may create illusions. Moreover, the contrast sensitivity of the human visual system decreases where image intensity changes sharply [34]. When $Nv(x)$ is close to 0, the orientation difference is significant, that is, the image changes significantly, and noise has little effect on this region. Conversely, when $Nv(x) \approx 1$, the effect of noise on the image is apparent. As mentioned above, a single gain coefficient is not conducive to obtaining high-quality images, so we set different gain coefficients according to the value of $Nv(x)$. Thus, the gain function and the processed detail image can be expressed as (14) and (15):

$$gain(x) = {g_{\min }} + [1 - Nv(x)] \cdot ({g_{\max }} - {g_{\min }})$$
$${D_p} = {D_0} \cdot gain$$
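The noise-adaptive gain of Eqs. (13)–(15) then follows directly; in the sketch below, theta, g_min, and g_max are illustrative values rather than the parameters used in the experiments.

```python
# Sketch of the noise-visibility function and adaptive detail gain, Eqs. (13)-(15).
import numpy as np

def enhance_detail(D0, V_M, theta=16.0, g_min=1.0, g_max=3.0):
    Nv = 1.0 / (V_M * theta + 1.0)                  # Eq. (13): noise visibility in [0, 1]
    gain = g_min + (1.0 - Nv) * (g_max - g_min)     # Eq. (14): larger gain where noise is masked
    return D0 * gain                                # Eq. (15): enhanced detail image D_p
```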

3.3 Adjustment of base images based on the JND and OCTM models

Jayant first proposed the JND model in 1992 [8]; it describes the minimum difference that the HVS can distinguish from the background. To ensure that the processing is perceptually justified, we apply the JND model to highlight the texture of the processed images, so the results are more comfortable for human visual perception. However, there is no exact mathematical model consistent with visual features, as the perceptual mechanism is complex and related to visual psychology [35]. In this paper, we adopt the experimental model of [36] to express the visibility threshold of the JND model, as follows:

$$JND(I(x)) = \left\{ \begin{array}{cc} 17 \cdot (1 - \sqrt {\frac{{I(x)}}{T}} ) + 3,&I(x) \le T\\ \frac{3}{{T + 1}} \cdot (I(x) - T) + 3,&otherwise \end{array} \right.$$
where T represents the cut-off brightness value; we set T to the average gray value of the image to achieve a better effect. The visibility threshold of the JND model is shown in Fig. 2.

Fig. 2. JND thresholds for different gray levels.
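For reference, Eq. (16) can be evaluated with a few lines of code; the sketch below assumes 8-bit gray levels and, as in the paper, sets T to the mean gray level when it is not given.

```python
# Sketch of the luminance JND threshold of Eq. (16).
import numpy as np

def jnd_threshold(I, T=None):
    I = np.asarray(I, dtype=float)
    if T is None:
        T = I.mean()                                # T = Avg, as adopted in this paper
    low = 17.0 * (1.0 - np.sqrt(I / T)) + 3.0       # branch for I(x) <= T
    high = 3.0 / (T + 1.0) * (I - T) + 3.0          # branch for I(x) > T
    return np.where(I <= T, low, high)
```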

Because of the low intensity and narrow dynamic range of low-light images, traditional methods such as histogram modification and gamma correction struggle to enhance them effectively and may result in over- or under-enhancement. The OCTM model addresses this issue well by controlling the degree of enhancement at each gray level to reduce tonal distortion and over-enhancement. Luminance adaptation represents the minimum noticeable gray-level variation that the HVS can perceive, and it provides a practical way to enhance an image on its own terms. Thus, the perceptual contrast enhancement of base images is formulated as follows:

$$\begin{aligned} &{f_b} = \max \sum\limits_{0 \le j \le L} {{p_j}{s_j}} \\ &s.t.\sum\limits_{0 \le j \le L} {{s_j} \le L^{\prime}} \end{aligned}$$
where ${p_j}$ is the probability density of gray level j; ${s_j}$ is the number of output levels assigned to one unit of input at gray level j; L and $L^{\prime}$ are the dynamic ranges before and after processing, respectively. Combining the objective with the constraints, we maximize $\sum\limits_{0 \le j \le L} {{p_j}{s_j}}$ to obtain the processing function ${f_b}$. For better enhancement, we add the following constraints on ${s_j}$:
$$\left\{ \begin{array}{l} {s_j} \ge {\{ 1/JND(j)\}^\gamma }\\ {s_j} \le {\{ JND(j)\}^\gamma } - \beta \end{array} \right.$$

In (18), $\gamma$ is the parameter to adjust the shape of the function, while $\beta = 0.5$ is used to control the enhancement rate.

At this point, the processed base image can be obtained as follows.

$${B_p} = {B_0}(j) \cdot {f_b}(j)$$

And the enhanced image can be obtained as (20).

$${I_p} = {B_p} + \eta \cdot {D_p}$$
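To make the base-image adjustment concrete, the following sketch solves the constrained problem of Eqs. (17)–(18) as a linear program and applies the resulting tone curve; it reuses the jnd_threshold sketch above, follows the cumulative-mapping form of the original OCTM formulation [9], and assumes an 8-bit base image, so it should be read as an outline rather than the exact implementation.

```python
# Hedged sketch of the JND-constrained contrast-tone optimization, Eqs. (17)-(20).
import numpy as np
from scipy.optimize import linprog

def octm_base(B0, gamma=0.6, beta=0.5, L=255, L_out=255):
    idx = np.clip(np.round(B0).astype(int), 0, L)              # 8-bit gray levels
    p = np.bincount(idx.ravel(), minlength=L + 1) / idx.size   # histogram p_j
    jnd = jnd_threshold(np.arange(L + 1, dtype=float), T=float(B0.mean()))
    lo = (1.0 / jnd) ** gamma                                  # lower bound on s_j, Eq. (18)
    hi = np.maximum(jnd ** gamma - beta, lo)                   # upper bound on s_j, Eq. (18)
    res = linprog(c=-p,                                        # maximize sum_j p_j * s_j, Eq. (17)
                  A_ub=np.ones((1, L + 1)), b_ub=[L_out],      # subject to sum_j s_j <= L'
                  bounds=list(zip(lo, hi)), method="highs")
    curve = np.clip(np.cumsum(res.x), 0.0, L_out)              # tone curve built from the s_j
    return curve[idx]                                          # adjusted base image B_p

# The enhanced image of Eq. (20) then follows as I_p = B_p + eta * D_p,
# with eta = 4 as selected in Section 4.1.
```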

Figure 3 shows the enhancement results with different values of T. As can be seen, the index PSNR decreases with the increase of T. Although the image (f) ranks second in PSNR, it retains color naturalness better than the image (e). Therefore, in this paper, we set $T = Avg$.

Fig. 3. Enhancement results with different values of T. ($T = 20\%$ denotes that T is 20% of the maximum gray level of the original image.)

The enhancement results of the building image for different values of $\gamma$ are shown in Fig. 4. As can be seen, the enhancement becomes stronger as $\gamma$ increases. Based on extensive experiments, we set $\gamma = 0.6$ in this paper.

Fig. 4. Base image enhancement results. (a) Original base image. (b) Enhancement at $\gamma = 0.3$. (c) Enhancement at $\gamma = 0.4$. (d) Enhancement at $\gamma = 0.5$. (e) Enhancement at $\gamma = 0.6$. (f) Gain curve for different $\gamma$.

3.4 Brightness adjustment

To further ensure the quality of the processed images, we adjust the brightness in the last step. In this section, we propose a new remapping function to enhance the processed images, thus improving their brightness and contrast. To avoid misunderstanding, images darker than ${I_p}$ are referred to as under-exposed images, while brighter images are referred to as over-exposed images. A set of artificial multi-exposure images, including ${n_u}$ under-exposed images and ${n_o}$ over-exposed images, is generated by this remapping function, whose expression is:

$${I_{art}} = \left\{ \begin{array}{lc} {A_\alpha }^{ - \frac{k}{{({n_u} + {n_o}) \times 10}}} \times {I_p},&0 < k < m\\ {A_\alpha }^{\frac{k}{{{n_u}}}} \times {I_p},&m < k < m + n \end{array} \right.$$

In (21), ${A_\alpha }$ determines the overall enhancement. Since the entropy can reflect the gray level distribution of an image, we apply an entropy-based fusion method to obtain the final image.

To improve the brightness of the final image effectively, we compute the difference between the entropy of each artificial image and that of the enhanced image ${I_p}$, and images with large differences are discarded, as shown in (22). The fusion weight is expressed in (23):

$$\begin{aligned} &Entrop{y_{sum}} = \sum\limits_{y = 1}^{num} {Entrop{y_y}} \\ &|Entrop{y_y} - Entrop{y_{{I_p}}}|< 1 \end{aligned}$$
$${W_{fusion\_y}} = \frac{{Entrop{y_y}}}{{Entrop{y_{sum}}}}$$
where $Entrop{y_y}$ represents the entropy of the $y$th artificial image, $Entrop{y_{{I_p}}}$ represents the entropy of ${I_p}$, and $num$ is the number of adopted artificial images. Finally, the final image is obtained as (24).
$${I_f} = \sum\limits_{y = 1}^{num} {{W_{fusion\_y}} \times {I_{art\_y}}}$$
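A hedged sketch of this brightness-adjustment stage is given below. Because the indices m and n in Eq. (21) are not defined explicitly, they are read here as n_u and n_o; the clipping to [0, 1] and the default n_u = n_o = 3 are likewise assumptions.

```python
# Sketch of artificial-exposure generation and entropy-weighted fusion, Eqs. (21)-(24).
import numpy as np

def shannon_entropy(img, bins=256):
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def brightness_fusion(I_p, A_alpha=3.2, n_u=3, n_o=3):
    # Eq. (21): under-exposed copies (k = 1..n_u) and over-exposed copies (k = n_u+1..n_u+n_o).
    exponents = [-k / ((n_u + n_o) * 10.0) for k in range(1, n_u + 1)] + \
                [k / n_u for k in range(n_u + 1, n_u + n_o + 1)]
    arts = [np.clip(A_alpha ** e * I_p, 0.0, 1.0) for e in exponents]
    # Eq. (22): keep only images whose entropy stays close to that of I_p.
    e_ref = shannon_entropy(I_p)
    kept = []
    for img in arts:
        e = shannon_entropy(img)
        if abs(e - e_ref) < 1.0:
            kept.append((img, e))
    if not kept:                       # fall back to I_p if every candidate is discarded
        return I_p
    e_sum = sum(e for _, e in kept)
    # Eqs. (23)-(24): entropy-weighted fusion of the retained images.
    return sum((e / e_sum) * img for img, e in kept)
```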

3.5 Implementation of the proposed method

The whole procedure is shown in Fig. 5. Images (a), (b), and (c) are the input image, base image, and detail image, respectively. After the filtering procedure, we first enhance the detail image based on the visual masking model, and the details are effectively highlighted in (d). With the help of the JND and OCTM models, the visibility of image (b) is greatly improved, as shown in (e). For better visualization, a sequence of artificial images is generated and fused with the entropy-based method; the result in (h) is superior in image quality.

Fig. 5. The proposed method flow chart on test image. (a) The input image. (b) The base image. (c) The detail image. (d) The enhanced detail image. (e) The enhanced base image. (f) The fusion of (d) and (e). (g) The artificial image set. (h) The final image.

4. Experiments and discussion

In this section, the proposed method is compared with eight state-of-the-art low-light image enhancement algorithms, including LIME [6], LLIE [37], NUM [38], LightenNet [29], ICIP2019 [39], NPE [40], Dong’s [41], and GEN [42]. All experiments are run on a personal computer (CPU: Intel Core i7-7700 @ 3.60 GHz) using MATLAB R2020b. In addition, 612 low-light images provided by SCIE [27], LIME [6], and LLIE [37] are used to test the proposed method’s performance.

4.1 Parameter settings

The parameters $\eta$ and ${A_\alpha }$ in (20) and (21) control the quality of the enhanced images, and the produced artificial multi-exposure images serve as source data for the proposed framework. Therefore, we first investigate the influence of different values of these two parameters on the final results. As shown in Fig. 6, results are obtained with different parameter settings. To reflect the experimental results intuitively, we use entropy to measure the amount of image information. After a preliminary experimental comparison, we found that the acceptable ranges for these two parameters are [2,6] and [2,4], respectively.

Fig. 6. Comparison of the results with different parameters $\eta$ and ${A_\alpha }$.

As can be observed in Fig. 6, the results achieve good visual effects under a wide range of parameter settings. Specifically, the entropy increases steadily as $\eta$ varies from 2 to 4 and decreases gradually as $\eta$ changes from 4 to 6, indicating that a relatively information-rich result is obtained at $\eta = 4$. Similarly, by observing the trend of entropy values for different ${A_\alpha }$, we set the parameters to $\eta = 4$ and ${A_\alpha } = 3.2$, respectively.
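This selection procedure can be reproduced in outline by a simple entropy sweep over the two ranges quoted above; in the sketch below, enhance(I, eta, A_alpha) is a hypothetical wrapper around the full pipeline of Section 3 (not a function defined in the paper), and shannon_entropy is the helper from the previous sketch.

```python
# Illustrative entropy sweep for choosing eta and A_alpha.
import numpy as np

def select_parameters(I, enhance,
                      etas=np.arange(2.0, 6.01, 0.5),
                      A_alphas=np.arange(2.0, 4.01, 0.2)):
    best, best_e = (None, None), -np.inf
    for eta in etas:
        for A_alpha in A_alphas:
            e = shannon_entropy(enhance(I, eta, A_alpha))   # entropy of the enhanced result
            if e > best_e:
                best, best_e = (eta, A_alpha), e
    return best
```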

For a fair comparison, the low-light images used in the following experiments were not used in the parameter determination.

4.2 Comparison with ground truth images

To highlight the effectiveness of our method, a comparison with ground truth images is carried out. Examples of different scenes captured at different exposure levels were downloaded from SCIE [27]. Overall, our method finds a good balance between detail enhancement and brightness adjustment, so the processed images show texture information well.

As can be seen in Fig. 7, the processed images have a luminance range similar to that of the middle-exposed images, but are significantly superior to the other two groups. In terms of detail processing, our approach does a good job of extracting details submerged in darkness and enhancing them in a sensible way. A reasonable combination of detail enhancement and brightness adjustment allows the images to be processed without loss of texture, as shown in Fig. 7(c).

Fig. 7. Comparison with ground truth images.

To obtain a more objective and fair comparison, we introduce four no-reference image quality evaluation indexes: the natural image quality evaluator (NIQE) [43], the blind tone-mapped quality index (BTMQI) [44], the no-reference image quality metric for contrast distortion (NIQMC) [45], and the auto-regressive based image sharpness metric (ARISMC) [46]. Lower NIQE, BTMQI, and ARISMC scores represent better image quality, whereas higher NIQMC scores indicate better image quality in terms of contrast. The results in Table 1 demonstrate that the proposed method is significantly more effective in processing low-light images and in some respects outperforms images taken of the same scene under appropriate exposure.

Table 1. Quantitative performance comparison with ground truth images on the SCIE dataset [27]

4.3 Comparison with state-of-the-art methods

4.3.1 Qualitative comparison

The parameters of the proposed method are set as described in the previous section and produce acceptable results in most cases. The parameters of the eight comparison algorithms are set to their optimal values according to the corresponding literature.

Similar to the proposed method, LLIE also enhances low-light images based on a multi-fusion strategy. Using mathematical morphology, this method decomposes the low-light image into illumination and reflection components. These two parts are then processed by a sigmoid function and adaptive histogram equalization, respectively. Finally, an enhanced image is obtained by a corresponding fusion method. However, it may lead to over-enhancement and blurring of the enhanced image, resulting in a loss of detail. As shown in Fig. 8, LLIE achieves a certain level of enhancement while retaining image information. However, varying degrees of detail are lost in the processed images (the first and third rows in Fig. 8(b)), and this loss of detail seriously affects image quality. In addition, some noise is amplified along with the enhancement of details, as at the corridor entrance in the first row of Fig. 8(b). In contrast, our method avoids over-enhancement and image blurring while producing better visual effects.

Fig. 8. Enhancement results comparison between LLIE and our method.

NUM, ICIP2019, and NPE are based on multi-decomposition, which decomposes an image into high-frequency and low-frequency terms; the two terms are then processed separately with different methods. Although these algorithms can suppress over-enhancement, they often do not provide a good trade-off between detail and naturalness. As shown in Fig. 9, the color distortion of the images processed by NUM is pronounced, and the overall images are severely distorted. The results of ICIP2019 and NPE remain dark; in the fifth row of Fig. 9, it is hard to distinguish the people in the red blocks. In addition, the naturalness of the clouds in the second row is not well preserved in Fig. 9(b) and (c). Our method can effectively enhance the details while preserving naturalness, as seen in Fig. 9(e).

Fig. 9. Results comparison between multi-decomposition methods and our method.

As an algorithm based on Retinex theory, LIME transforms the decomposition problem into an energy minimization problem with relevant prior knowledge and constraints. Although Retinex-based methods perform well in image enhancement, the computational complexity of the minimization problem is a critical issue that cannot be overlooked. LIME effectively enhances the brightness of low-light images, but it also suffers from shortcomings such as over-enhancement, over-exposure, and distortion. As shown in Fig. 10, there is some loss of detail in the buildings and stairs in LIME’s results. By contrast, the proposed method performs better in the processing of texture and brightness.

Fig. 10. Results comparison between LIME and the proposed method.

Dong’s method is effective in enhancing details, and the brightness of the processed image is appropriate. However, this method still has some defects in processing image boundaries, as in the second row of Fig. 11(b). Moreover, Dong’s method may cause blurring, as shown in the first row of Fig. 11(b). In contrast, the abovementioned problems do not appear in the images processed with our method.

Fig. 11. Results comparison between Dong’s and the proposed method.

As shown in Fig. 12(b), the brightness of the images processed by GEN is relatively low. By comparison, our method performs better in processing brightness so that processed images can display details and textures more intuitively.

Fig. 12. Results comparison between GEN and the proposed method.

We also conducted experiments to compare our algorithm with a deep learning-based method. In Fig. 13(b), LightenNet handles brightness well. However, the three images are distorted to varying degrees, and there are some halos in the results; for example, the sky in the first and third rows of Fig. 13(b) seriously detracts from the naturalness. The proposed method performs better in these aspects; the images in Fig. 13(c) retain the original images’ details, textures, and colors well.

Fig. 13. Results comparison between LightenNet and the proposed method.

Figure 14 gives an example of the comparison results of the nine methods mentioned above. There is noticeable color distortion in the results of the LIME, NPE, and NUM methods. Due to the low brightness of the enhanced images, the methods GEN and ICIP2019 are not dominant in this comparison. Although LightenNet can accurately enhance images based on deep learning, many halos in the enhanced images are still shown in Fig. 14(e). By comparison, Dong’s method and LLIE can get satisfactory results, but overall they are still inferior to our method.

Fig. 14. Result comparison between different methods.

4.3.2 Quantitative comparison

The comparison of visual effects does not fully reflect the quality of enhanced images, because a comprehensive assessment of the images is not a trivial task. Therefore, NIQE, BTMQI, NIQMC, and ARISMC are applied to assess these methods quantitatively.

The comparison results are listed in Tables 2–4. We can see that LightenNet and GEN perform well in this comparison; however, they suffer from distortion and low brightness, respectively. By comparison, although our method ranks second in some evaluation indexes, it achieves better overall quantitative performance when all these objective evaluations are considered.

Table 2. Quantitative performance comparison for SCIE [27]

Table 3. Quantitative performance comparison for LIME [6]

Table 4. Quantitative performance comparison for LLIE [37]

4.3.3 Comparison of computational costs

Table 5 lists the average running time of the nine methods on the different datasets. Since the computational efficiency of LightenNet relies on GPU acceleration, it is not included in the CPU-time comparison.

Table 5. Average computation time of the different methods (seconds)

Since we apply the visual masking model in our method, the running time is slightly longer than that of other methods. The computational cost is expected to become more acceptable as the proposed method is further refined.

5. Conclusion

In this paper, we proposed an effective method for low-light image enhancement based on the JND and OCTM models. The guided filter decomposes the original image into a base image and (multiple) detail image(s). To better retain the original textures, we apply the JND and OCTM models to the base image to adjust the brightness while effectively avoiding distortion. Owing to the superior performance of the visual masking model, the proposed method performs well in detail enhancement. To adjust the brightness of the enhanced images, a set of artificial images is generated by the new method proposed in this paper, and a multi-exposure fusion method completes the procedure. Experimental results in both qualitative and quantitative aspects demonstrate that the proposed method performs better than state-of-the-art methods. In future work, the computational efficiency of the proposed method can be further improved, and denoising of the enhanced images is another direction worth exploring.

Funding

National Natural Science Foundation of China (U2141239); National Defense Basic Scientific Research Program of China (JCKY2018208B016).

Acknowledgments

All authors thank the National Natural Science Foundation of China and National Defense Basic Scientific Research Program of China for help identifying collaborators for this work.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. Q. Tian and L. Cohen, “A variational-based fusion model for non-uniform illumination image enhancement via contrast optimization and color correction,” Signal Process. 53, 210–220 (2018). [CrossRef]  

2. C. Hessel and J. Morel, “An extended exposure fusion and its application to single image contrast enhancement,” in Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV), Snowmass, Colorado, 2020, pp. 137–146.

3. M. Devi and K. Shivakumar, “Secured covert color image transmission using secret fragment visible mosaic image and reversible color transformation technique,” in Proceedings of International Conference on Electrical, Mysiri, India, 2016, pp. 47–52.

4. X. Li, R. Feng, X. Guan, H. Shen, and L. Zhang, “Remote Sensing Image Mosaicking: Achievements and Challenges,” IEEE Geosci. Remote Sens. Mag. 7(4), 8–22 (2019). [CrossRef]  

5. X. Li, N. Hui, H. Shen, Y. Fu, and L. Zhang, “A robust mosaicking procedure for high spatial resolution remote sensing images,” ISPRS J. Photogramm. Remote Sens. 109(11), 108–125 (2015). [CrossRef]  

6. X. Guo, Y. Li, and H. Ling, “LIME: low-light image enhancement via illumination map estimation,” IEEE Trans. Image Process. 26(2), 982–993 (2017). [CrossRef]  

7. F. Luthon, B. Beaumesnil, and N. Dubois, “LUX color transform for mosaic image rendering,” in Proceedings of IEEE International Conference on Automation, Quality and Testing, Robotics (AQTR), Cluj-Napoca, 2010, pp. 1–6.

8. N. Jayant, “Signal compression: technology targets and research directions,” IEEE J. Sel. Area. Comm. 10(5), 796–818 (1992). [CrossRef]  

9. X. Wu, “A linear programming approach for optimal contrast-tone mapping,” IEEE Trans. Image Process. 20(5), 1262–1272 (2011). [CrossRef]  

10. Y. Xu, C. Yang, B. Sun, X. Yan, and M. Chen, “A novel multi-scale fusion framework for detail-preserving low-light image enhancement,” Inf. Sci. 548, 378–397 (2021). [CrossRef]  

11. H. Su, L. Yu, and C. Jung, “Joint contrast enhancement and noise reduction of low light image via JND transform,” IEEE Trans. Multimedia. 24, 17–32 (2022). [CrossRef]  

12. S. Yu and H. Zhu, “Low-illumination image enhancement algorithm based on a physical lighting model,” IEEE Trans. Circuits Syst. Video Technol. 29(1), 28–37 (2019). [CrossRef]  

13. K. Singh and R. Kapoor, “Image enhancement via median-mean based sub-image-clipped histogram equalization,” Optik 125(17), 4646–4651 (2014). [CrossRef]  

14. C. Jung and T. Sun, “Optimized Perceptual Tone Mapping for Contrast Enhancement of Images,” IEEE Trans. Circ. Syst. Vid. 27(6), 1161–1170 (2017). [CrossRef]  

15. J. Xu, Y. Hou, D. Ren, L. Liu, F. Zhu, M. Yu, H. Wang, and L. Shao, “STAR: a structure and texture aware Retinex model,” IEEE Trans. Image Process. 29, 5022–5037 (2020). [CrossRef]  

16. X. Ren, W. Yang, W. Cheng, and J. Liu, “LR3M: robust low-light enhancement via low-rank regularized Retinex model,” IEEE Trans. Image Process. 29, 5862–5876 (2020). [CrossRef]  

17. S. Pei and C. Shen, “Color enhancement with adaptive illumination estimation for low-backlighted displays,” IEEE Trans. Multimedia 19(8), 1956–1961 (2017). [CrossRef]  

18. X. Kong, L. Liu, and Y. Qian, “Low-light image enhancement via Poisson noise aware Retinex model,” IEEE Signal Proc. Let. 28, 1540–1544 (2021). [CrossRef]  

19. X. Fu, D. Zeng, Y. Huang, X. Zhang, and X. Ding, “A weighted variational model for simultaneous reflectance and illumination estimation,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, 2016, pp. 2782–2790.

20. A. Galdran, “Image dehazing by artificial multiple-exposure image fusion,” Signal Process. 149, 135–147 (2018). [CrossRef]  

21. J. Liu, D. Xu, W. Yang, M. Fan, and H. Huang, “Benchmarking lowlight image enhancement and beyond,” Int. J. Comput. Vision. 129(4), 1153–1184 (2021). [CrossRef]  

22. P. Guan, J. Qiang, W. Liu, X. Liu, and D. Wang, “U-net-based multiscale feature preserving method for low light image enhancement,” J. Electron. Imaging 30(5), 053011 (2021). [CrossRef]  

23. J. Qin, Y. Huang, and W. Wen, “Multi-scale feature fusion residual network for single image super-resolution,” Neurocomputing 379, 334–342 (2020). [CrossRef]  

24. S. Hao, X. Han, Y. Guo, X. Xu, and M. Wang, “Low-light image enhancement with semi-decoupled decomposition,” IEEE Trans. Multimedia 22(12), 3025–3038 (2020). [CrossRef]  

25. G. Kim and J. Kwon, “Deep illumination-aware dehazing with low-light and detail enhancement,” IEEE Trans. Intell. Transp. 23(3), 2494–2508 (2022). [CrossRef]  

26. Y. Jiang, X. Guo, D. Liu, Y. Cheng, C. Fang, X. Shen, J. Yang, P. Zhou, and Z. Wang, “EnlightenGAN: deep light enhancement without paired supervision,” IEEE Trans. Image Process. 30, 2340–2349 (2021). [CrossRef]  

27. J. Cai, S. Gu, and L. Zhang, “Learning a deep single image contrast enhancer from multi-exposure images,” IEEE Trans. Image Process. 27(4), 2049–2062 (2018). [CrossRef]  

28. F. Lv, F. Lu, and J. Wu, “MBLLEN: low-light image/video enhancement using CNNs,” in Proceedings of British Machine Vision Conference, 2018, pp. 220.

29. C. Li, J. Guo, F. Porikli, and Y. Pang, “LightenNet: a convolutional neural network for weakly illuminated image enhancement,” Pattern Recogn. Lett. 104(3), 15–22 (2018). [CrossRef]  

30. K. He, J. Sun, and X. Tang, “Local Edge-Preserving Multiscale Decomposition for High Dynamic Range Image Tone Mapping,” IEEE Trans. Pattern Anal. 35(6), 1397–1409 (2013). [CrossRef]  

31. B. Gu, W. Li, M. Zhu, and M. Wang, “Guided image filtering,” IEEE Trans. Image Process. 35(6), 1397–1409 (2013).

32. S. Belekos, N. Galatsanos, and A. Katsaggelos, “Maximum a posteriori video super-resolution with a new multichannel image prior,” in Proceedings of 16th European Signal Processing Conference, Lausanne, Switzerland, 2008, pp. 1–5.

33. J. Wu, G. Shi, W. Lin, and C. Kuo, “Enhanced just noticeable difference model with visual regularity consideration,” in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 2016, pp. 1581–1585.

34. Y. Lang, Y. Qian, X. Kong, J. Zhang, Y. Wang, and Y. Cao, “Effective enhancement method of low-light-level images based on the guided filter and multi-scale fusion,” J. Opt. Soc. Am. A 40(1), 1–9 (2023). [CrossRef]  

35. H. Zhang, Q. Zhao, L. Li, Y. Li, and Y. You, “Multi-scale image enhancement based on properties of human visual system,” in Proceedings of 4th International Congress on Image and Signal Processing (CISP 2011), Shanghai, China, 2011, pp. 704–708.

36. C. Chou, “A perceptually tuned subband image coder based on the measure of just-noticeable-distortion profile,” in Proceedings of IEEE International Symposium on Information Theory, Trondheim, Norway, 1994, pp. 420.

37. X. Fu, D. Zeng, Y. Huang, Y. Liao, X. Ding, and J. Paisley, “A fusion-based enhancing method for weakly illuminated images,” Signal Process. 129, 82–96 (2016). [CrossRef]  

38. D. Ngo, S. Lee, and B. Kang, “Nonlinear unsharp masking algorithm,” in Proceedings of International Conference on Electronics, Information, and Communication (ICEIC), Barcelona, Spain, 2020, pp. 6.

39. S. Ghosh and K. Chaudhury, “Fast Bright-Pass Bilateral Filtering for Low-Light Enhancement,” in Proceedings of IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 2019, pp. 205–209.

40. S. Wang, J. Zheng, H. Hu, and B. Li, “Naturalness preserved enhancement algorithm for non-uniform illumination images,” IEEE Trans. Image Process. 22(9), 3538–3548 (2013). [CrossRef]  

41. X. Dong, G. Wang, Y. Pang, W. Li, J. Wen, W. Meng, and Y. Lu, “Fast efficient algorithm for enhancement of low lighting video,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME 2011), Barcelona, Spain, 2011, pp. 6.

42. H. Xu, G. Zhai, X. Wu, and X. Yang, “Generalized Equalization Model for Image Enhancement,” IEEE Trans. Multimedia 16(1), 68–82 (2014). [CrossRef]  

43. A. Mittal, R. Soundararajan, and A. C. Bovik, “Making a ‘completely blind’ image quality analyzer,” IEEE Signal Process. Lett. 20(3), 209–212 (2013). [CrossRef]  

44. K. Gu, S. Wang, G. Zhai, S. Ma, X. Yang, W. Lin, W. Zhang, and W. Gao, “Blind quality assessment of tone-mapped images via analysis of information, naturalness, and structure,” IEEE Trans. Multimedia. 18(3), 432–443 (2016). [CrossRef]  

45. K. Gu, W. Lin, G. Zhai, X. Yang, W. Zhai, and C. Chen, “No-reference quality metric of contrast-distorted images based on information maximization,” IEEE Trans. Cybern. 47(12), 4559–4565 (2017). [CrossRef]  

46. K. Gu, G. Zhai, W. Lin, X. Yang, and W. Zhang, “No-reference image sharpness assessment in autoregressive parameter space,” IEEE Trans. Image Process. 24(10), 3218–3231 (2015). [CrossRef]  


