A low-light image enhancement method based on bright channel prior and maximum colour channel

Low-light image enhancement algorithms have been introduced to improve the visual quality of low-light images that may degrade the performance of many computer vision and multimedia systems designed for high-quality images. However, the existing bright channel prior and maximum colour channel enhancement algorithms introduce halo artifacts and colour distortions while enhancing the images. To overcome these limitations, in this paper, an effective fusion-based low-light image enhancement algorithm is proposed. In the proposed algorithm, the illumination of the low-light image is estimated from both the bright and maximum colour channels to overcome the halo artifact and colour distortion problems. Further, an effective refinement method is utilized to improve the sharpness of the initial enhanced image representing the scene reflectance. Experimental results show that the proposed algorithm outperforms the state-of-the-art algorithms qualitatively and quantitatively. Moreover, the proposed algorithm reduces the halo artifacts and colour distortion and enhances the details while preserving the naturalness.


INTRODUCTION
In low-light conditions, the quality of images captured by imaging devices can be degraded since the sensors receive an insufficient amount of light. This may result in dimmed images in which an observer cannot identify the true colours of the objects. In low light, not only the colours but also the objects themselves become indistinguishable. Using such insufficiently illuminated images with various vision-oriented systems, such as intelligent transportation, traffic monitoring, and medical imaging systems, may degrade the performance of these systems. Therefore, low-light image enhancement algorithms are highly required to deal with the problems of insufficient illumination.
The existing low-light image enhancement algorithms improve the illumination of low-light images, but they may suffer from halo artifacts near the edges and/or colour distortion. To overcome these problems, a fusion-based low-light image enhancement algorithm is proposed in this paper. In the proposed algorithm, the insufficient illumination of the input image is estimated by fusing both the bright channel prior and the maximum colour channel of the input image to overcome the halo artifact and colour distortion problems. The scene reflectance representing the initial enhanced image is then recovered. Finally, an effective refinement method is applied to improve the sharpness of the initial enhanced image. Utilizing the proposed fusion and refinement methods reduces the halo artifacts and colour distortion and enhances the details of the image while preserving its naturalness. These distinctive features of the proposed algorithm are compared with those of the existing algorithms in the experimental results section.

This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. © 2021 The Authors. IET Image Processing published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.
The rest of the paper is organized as follows: In Section 2, the related work is reviewed. In Section 3, the bright channel prior and the maximum colour channel are given. In Section 4, the proposed algorithm is described in detail. In Section 5, the experimental results are presented and analysed. Finally, in Section 6, the paper is concluded.


RELATED WORK
The existing low-light image enhancement algorithms can be categorized into spatial and transform (frequency) domain methods. In the transform domain enhancement algorithms, the input image is decomposed into sub-bands whose coefficients are modified to enhance the image. Among the transform domain-based algorithms, homomorphic filtering [1,2] is popularly used to enhance the illumination of the input image based on the illumination-reflectance model. In [3], an enhancement algorithm was proposed to enhance low-illumination images using the discrete cosine transform (DCT) and a homomorphic filtering technique. The enhancement methods based on homomorphic filtering produce output images with unsatisfactory illumination and poor colour constancy.
In [4], a low-light image enhancement algorithm was proposed based on re-scaling the wavelet coefficients. This algorithm does not require parameter adjustment as in [2]. However, it may introduce noise into the enhanced images and therefore needs an additional denoising algorithm. Provenzi et al. [5] introduced a variational model for image contrast enhancement using perceptually inspired colour correction based on the wavelet transform. However, this method may cause under- or over-enhancement and colour breakage due to a wrong selection of the parameters. In [6], an image enhancement algorithm in the compressed wavelet domain was proposed. In this method, the contrast and the brightness are enhanced by applying linear scaling factors to the wavelet coefficients. Since this algorithm does not require an inverse transform, its computational efficiency is improved. Kim et al. [7] introduced a contrast enhancement algorithm based on entropy scaling in the wavelet domain. Although this algorithm can improve the image quality and correct uneven illumination, it may cause colour distortion. Further, Sun et al. [8,9] proposed an enhancement algorithm based on the dual-tree complex wavelet transform (DTCWT) to enhance the contrast of low-light images without amplifying the noise. In this algorithm, illumination compensation is first performed to preserve the image details. The image is then decomposed into low-pass and high-pass sub-bands using the DTCWT. Noise reduction is performed in the high-pass sub-bands of the DTCWT due to its low redundancy. Although this method improves the visibility and the details of low-light images, it cannot perfectly handle uneven illumination. Guo et al. [10] proposed a pipeline network for low-light image enhancement based on blending the outputs of a convolutional neural network (CNN) with the low-low (LL) sub-band generated from the discrete wavelet transform (DWT).
This method consists of a denoising net and a low-light image enhancement net (LLIE-net) trained on pairs of dark and bright images; therefore, its computation time is high. Although the transform domain-based image enhancement algorithms produce pleasant results, their computational costs are high. Therefore, the spatial domain algorithms are widely used due to their simplicity and effectiveness.
The spatial domain-based enhancement algorithms are applied directly to the images. One of the traditional spatial domain image enhancement algorithms is histogram equalization (HE) [11]. It is popularly used due to its simplicity and effectiveness in enhancing the quality of dimmed images. Pisano et al. [12] proposed contrast-limited adaptive histogram equalization (CLAHE), which extends histogram equalization by using a threshold to improve the image contrast. However, the HE and CLAHE methods usually cause excessive contrast enhancement, which may create visual artifacts such as noise, and hence they produce enhanced images with an unnatural look. Therefore, variational algorithms were proposed to improve the performance of these traditional methods and suppress the over-enhancement problem. In [13], the contextual and variational contrast enhancement (CVCE) was introduced. It generates visually pleasing enhanced images by constructing a 2D histogram of the input image. However, these variational methods focus on the illumination enhancement of low-light images without refining their sharpness. Singh and Kapoor [14] proposed an improved bi-histogram equalization algorithm called exposure-based sub-image histogram equalization (ESIHE). It avoids over-enhancement by using an exposure threshold to divide the histogram into sub-histograms before performing equalization; however, this algorithm is limited to local contrast enhancement. In [15], an adaptive extended piecewise histogram equalization (AEPHE) algorithm was proposed to enhance dark images by dividing the original histogram into a set of piecewise extended histograms. Adaptive histogram equalization is then applied. Finally, a weighted fusion of the equalized histograms is calculated to obtain the enhanced images. However, it is difficult to apply this method to images with non-uniform illumination. Banik et al. [16] introduced a contrast enhancement algorithm to enhance different types of low-light images using histogram equalization and gamma correction. Further, Agrawal et al. [17] proposed a contrast enhancement algorithm based on joint histogram equalization (JHE) that takes into consideration the neighbourhood information surrounding each pixel to reduce the noise in the enhanced images. However, most of the histogram equalization-based enhancement algorithms cause over-enhancement or may produce noisy images, especially when the input images are very dark.
Retinex theory [18] is another spatial domain-based image enhancement technique in which the image is represented as the product of the illumination and the reflectance. In the retinex-based algorithms, the enhanced image is obtained by estimating the illumination component of an input image and then retrieving the reflectance component. The single-scale retinex (SSR) [19] and multi-scale retinex (MSR) [20] algorithms separate the illumination and reflectance by using local Gaussian filters. However, these algorithms suffer from halo artifacts and colour distortions. In the multi-scale retinex with colour restoration (MSRCR) algorithm [21], the colours of images that violate the grey-world assumption are restored. This method can handle the colour distortion problem of MSR; however, it over-enhances the brightness of the images. In [22], the naturalness preserved enhancement (NPE) algorithm was proposed to enhance the details of non-uniformly illuminated images. The NPE algorithm estimates and adjusts the illumination using a bright-pass filter and a bi-log transformation of the image histogram, respectively. Although the NPE algorithm preserves the naturalness and produces outstanding results, its computational cost is high. Fu et al. [23] proposed the bright channel prior (BCP) to estimate the illumination component of low-light images. This algorithm uses a two-stage method to estimate the illumination of the input image. First, it finds the maximum value of the three RGB (red, green, blue) colour channels to estimate the illumination at each pixel individually. The final illumination map is then obtained by imposing a structure prior on the initial illumination map for refinement. Although the aforementioned algorithms improve the illumination of low-light images effectively, they fail to enhance the bright regions in the images since they are sensitive to bright light.
These algorithms also suffer from the halo artifact problem because of the patch-wise bright channel. In [24], a maximum intensity channel image enhancement algorithm (MICIE) was proposed. The MICIE algorithm estimates the illumination of an input image using the maximum colour channel, in which the maximum value of the R, G, and B channels is obtained. A fast joint bilateral filter is then used to refine the illumination component. This algorithm improves the computational time and reduces the halo artifacts, but it suffers from colour distortion. Daway et al. [45] proposed a low-light image enhancement algorithm based on HE and MSRCR. In this method, the illumination component is estimated using the bright channel prior. However, it suffers from over-enhancement, which leads to loss of detail, especially in the bright regions, due to inaccurate illumination estimation. Further, Sun and Guo [25] proposed a brightness enhancement algorithm based on the bright channel prior. However, this method results in noise and halo artifacts in the enhanced images.
To improve the performance of the mentioned algorithms, the multi-deviation fusion-based enhancement (MF) algorithm [26] was proposed to adjust the illumination of low-light images by fusing multiple derivations of the initially estimated illumination map. Although the performance of this method is pleasant, it loses the realistic look in regions with more texture. Fu et al. [27] proposed a weighted variational method to enhance images by simultaneously estimating the illumination and reflectance. However, this method has low computational efficiency and can only process images of limited size. In [28], a low-light image enhancement (LIME) algorithm was proposed to improve the estimated illumination map by using a smoothing model. Furthermore, Ren et al. [29] proposed an enhancement framework combining the traditional retinex and camera response models, in which the enhanced image is obtained by adjusting the exposure of the low-light image. However, these algorithms cause visual artifacts or over-enhancement in some regions of the output images. Shi et al. [30] proposed a night-time low illumination image enhancement method based on a dual channel prior, in which the initial transmission is estimated using the bright channel prior. The potentially erroneous transmission is then corrected using the dark channel prior as a complementary channel. However, this method suffers from over-enhancement at the edges, causing noise in some regions. In [31], a low-light image enhancement algorithm based on a non-uniform illumination prior model was proposed. Singh and Parihar [32] proposed a structure-aware method for illumination estimation. Furthermore, Li et al. [33] proposed an enhancement framework based on a pair of complementary gamma functions (PCGF) combined by image fusion. Although these methods can improve the visual quality of the input image while preserving the edges and details, their computational time is high.
Recently, deep learning has been used for low-light image enhancement by designing a variety of CNN architectures [46-53]. In [46], a weakly illuminated image enhancement algorithm was proposed using a trainable CNN to estimate the illumination map. However, this method produces visual artifacts in the enhanced results. Cai et al. [47] proposed a CNN-based contrast enhancer trained on multi-exposure images. In [48], a Retinex-Net was proposed to enhance the image by adjusting the brightness of the illumination map and applying the BM3D denoising algorithm to the reflectance component. In [50], an underexposed image enhancement algorithm was proposed that learns an image-to-illumination mapping instead of an end-to-end mapping. However, this method suffers from low visibility and often causes colour inconsistency. Ren et al. [51] proposed a hybrid network containing content and edge streams to improve the visibility of low-light images. However, the enhanced images look unnatural. In [52], a low-light image enhancement method was proposed based on a semi-supervised learning strategy. Guo et al. [53] proposed a zero-reference deep curve estimation (Zero-DCE) method to train an end-to-end deep network for low-light image enhancement. However, the performance of this method is limited due to the accumulation of error in the processing pipeline. Although deep learning-based algorithms improve the visibility of low-light images, most of them require pairs of reference and low-light images for training, which are not always available; preparing a large dataset is expensive, besides the complex constraints of the training phase.

IMAGING MODEL AND ILLUMINATION ESTIMATION
According to Retinex theory [18], a low illumination image can be presented as the product of the illumination and reflectance components. The Retinex model, which represents the formation of a low-light image, is written as:

I = L • R,  (1)

where I, L, and R represent the measured image, the light illumination, and the scene reflectance, respectively. The operator • denotes element-wise multiplication. According to this model, if the illumination component L of the input dimmed image I is known, the reflectance component R, which is the goal of the intrinsic image decomposition, can be recovered. According to Equation (1), R = I ⊘ L, where ⊘ is the element-wise division. Therefore, the estimation of L (denoted L̂) is the key to the recovery of R. The problem can thus be simplified to computing I ⊘ L̂, which directly lightens the low-light image and is what this paper concentrates on.
The illumination component can be estimated by various methods. One of these is Max-RGB [24,34], in which the illumination is estimated as the maximum value of the three RGB colour channels. The illumination estimate represented by the maximum colour channel of the input image at each pixel location x can be expressed as:

I_max(x) = max_{c∈{R,G,B}} I^c(x),  (2)

where I^c is colour channel c of the input image I. Although this method can enhance the global illumination, it may suffer from colour distortion.
Another method for estimating the illumination component has been developed to improve the accuracy of the previous method by using the bright channel prior, which converts an arbitrary image from the spatial domain into the brightness domain [23]. This method considers the local consistency of the illumination by taking into account the neighbouring pixels in a small region around the target pixel. In this case, the illumination is estimated by applying the bright channel prior to the input dimmed image as follows:

I_bright(x) = max_{c∈{R,G,B}} ( max_{y∈Ω(x)} I^c(y) ),  (3)

where y is a pixel within the local patch Ω(x) of size w × w centred at x. To obtain the bright channel, the maximum value in each patch is determined for each colour channel, and the maximum over the three channels is then taken. The pixel values of 8-bit input images are normalized into the range [0, 1]. Figure 1 shows the effect of using patch sizes of 3 × 3, 15 × 15, and 30 × 30 on the generated bright channel prior of a low-light image. It can be observed that when the patch size increases, the bright channel becomes brighter because the probability of having more bright pixels in the patch increases. Moreover, when the patch size is too large, such as 30 × 30, the bright areas become wider, which may cause stronger halo artifacts near the edges of the enhanced image. It should be mentioned that when the patch size is 1 × 1, the effect of the bright channel prior is the same as that of the maximum colour channel. In this case, both channels are free from halo artifacts but suffer from colour distortion. In brief, halo artifacts and colour distortion generally occur due to using large and small patch sizes, respectively. These problems have been solved in the proposed algorithm and will be discussed in detail in the following section. The effect of the patch size on the performance of the proposed algorithm will be demonstrated in Section 4.2.
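As an illustration, Equations (2) and (3) amount to a per-pixel channel maximum followed by a sliding window maximum. A minimal Python sketch (assuming an RGB image normalized to [0, 1]; `scipy.ndimage.maximum_filter` stands in for the patch-wise maximum):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def max_colour_channel(img):
    """Equation (2): per-pixel maximum over the R, G, B channels."""
    return img.max(axis=2)

def bright_channel(img, patch=15):
    """Equation (3): maximum over a w x w patch of the maximum colour channel."""
    return maximum_filter(max_colour_channel(img), size=patch)
```

Note that with `patch=1` the two functions coincide, mirroring the 1 × 1 observation above.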
Figure 2 shows the bright channel for two images having different illumination. It is clear that when the illumination of the input image is sufficient, the intensity of the bright channel I_bright(x) tends to one (after normalization). In contrast, the bright channel of the low-light image is much lower, illustrating the extent of the insufficient illumination. Therefore, the bright channel prior-based methods enhance the local consistency, but they suffer from halo artifacts around the edges and their computational times are high.


THE PROPOSED ALGORITHM
Figure 3 shows the flowchart of the proposed fusion-based enhancement algorithm, which consists of two main stages, namely, the fusion-based bright channel estimation and the refinement. These stages are described in the following subsections.

Fusion-based bright channel estimation
As mentioned in the previous section, the illumination estimation schemes based on the bright channel prior produce halo artifacts around the edges due to using a large patch size. Although these halo artifacts can be reduced by using filters such as Gaussian filter, the regions around the edges may still look dark. On the other hand, the schemes based on the maximum colour channel do not produce such halo artifacts but they suffer from the colour distortion occurring due to not considering the neighbouring pixels in the recovered scene. Therefore, a powerful scheme is proposed to overcome the problems associated with the illumination estimation schemes mentioned above.
In the proposed algorithm, the illumination component is estimated based on both the bright channel prior and the maximum colour channel in order to weaken the halo artifacts and the colour distortion produced by the two channels. To achieve this goal, each pixel is adaptively adjusted by calculating a weight representing the strength of the halo artifacts at that pixel. This weight is calculated by normalizing the difference between the bright channel (I_bright) and the maximum colour channel (I_max) as follows:

weight_bright(x) = Norm( I_bright(x) − I_max(x) ),  (4)

where Norm(·) scales the difference into the range [0, 1] and weight_bright(x) represents the strength of the halo artifacts at pixel location x in the bright channel. Based on the values of this weight, the weighted sum of the bright channel I_bright and the maximum colour channel I_max is calculated to accurately estimate the bright channel (I′_bright), which represents the estimated illumination (i.e., I′_bright = L̂):

I′_bright(x) = weight_bright(x) I_max(x) + (1 − weight_bright(x)) I_bright(x).  (5)

On the edges, the strength of the halo artifacts is high and the difference prior (weight) will be close to 1. Hence, according to Equation (5), the illumination component is estimated by giving more weight to I_max and less weight to I_bright. In this case, the proposed fusion mechanism mostly produces results similar to those produced by the maximum colour channel, which are free from halo artifacts. In contrast, homogeneous regions contain no halo artifacts but may contain colour distortion. In this case, the difference prior will be close to 0 and the proposed fusion method mostly produces results similar to those produced by the bright channel prior, reducing the colour distortion in these regions. Therefore, the proposed fusion mechanism weakens the intensities of both the halo artifacts and the colour distortion in the estimated illumination. It should be mentioned that a Gaussian filter [35] is used to smoothen the halo artifacts in the bright channel (I_bright).
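The fusion step can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: min-max scaling is assumed for the normalization in Equation (4) (the paper only states that the difference is scaled to [0, 1]), and the Gaussian smoothing sigma is an assumed value:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_illumination(i_bright, i_max, sigma=2.0):
    """Equations (4)-(5): blend the two channels by halo strength."""
    i_bright = gaussian_filter(i_bright, sigma)  # soften halos, per [35]
    diff = i_bright - i_max
    # Eq. (4): min-max scaling to [0, 1] (assumed normalization)
    w = (diff - diff.min()) / (diff.max() - diff.min() + 1e-12)
    # Eq. (5): w ~ 1 near edges -> favour I_max; w ~ 0 in flat regions -> I_bright
    return w * i_max + (1.0 - w) * i_bright
```

Since the output is a convex combination of the two channels, it stays within their dynamic range.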

Image enhancement and refinement
After estimating the illumination component L̂ using the proposed fusion-based bright channel estimation scheme, the scene reflectance R can be obtained using Equation (1):

R = I ⊘ L̂.  (6)

When the illumination component L̂ is close to zero, the scene reflectance R can become infinite. Therefore, the illumination component should be restricted by a lower bound L̂_min, which is set to 50 as in [36]. Since the image is normalized in this paper, L̂_min is set to 0.19 (i.e., 50 divided by 255). Therefore, the scene reflectance R can be obtained as:

R = I ⊘ max(L̂, L̂_min).  (7)

In the proposed algorithm, the brightness transform function is considered to be a linear function, as in some image enhancement algorithms such as LIME [28]. In this case, the initial enhanced image I′ equals the scene reflectance R (i.e., I′ = R) and can be obtained using Equation (7).
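The bounded recovery of Equation (7) is a clamp followed by an element-wise division; a minimal sketch (the final clip to [0, 1] is an added assumption for display purposes, not stated in the paper):

```python
import numpy as np

L_MIN = 50 / 255  # lower bound on the illumination, ~0.19 (Section 4.2)

def recover_reflectance(img, illum):
    """Equation (7): R = I / max(L_hat, L_min), applied per colour channel."""
    safe = np.maximum(illum, L_MIN)          # avoid division by near-zero
    return np.clip(img / safe[..., None], 0.0, 1.0)
```

For example, a pixel of value 0.1 under illumination 0.5 is brightened to 0.2, and fully dark illumination values are clamped so the result stays finite.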
As mentioned in Section 4.1, in homogeneous regions of low-light images the proposed fusion method produces results approximately similar to those of the bright channel prior. In this case, a proper patch size should be selected since it affects the estimated bright channel and the enhanced image. Figure 4 illustrates the effect of the patch size on the performance of the proposed algorithm. It can be seen in the yellow bounded boxes that when the patch size is small (3 × 3), the enhanced image has oversaturated illumination with noise in some regions due to aliasing artifacts, while using a large patch size such as 30 × 30 produces an enhanced image with insufficient illumination and halo artifacts near the edges. Therefore, a patch size of 15 × 15 is selected in this paper, as it achieves a balance between reducing the colour distortion and reducing the halo artifacts. Hence, a good visual quality, as demonstrated in Figure 4, and a good quantitative assessment can be achieved, as will be discussed in Section 5.2. Figure 5 shows the effect of estimating the illumination using the bright channel prior, the maximum colour channel, and the proposed fusion method on the enhanced images. It is obvious in the enlarged red bounded boxes located below each image in Figure 5(c) that when the bright channel prior is used, there are halo artifacts and dark areas on the boundaries of some objects. These halo artifacts are reduced when the maximum colour channel algorithm and the proposed fusion algorithm are used. Moreover, the maximum colour channel suffers from colour distortion, which is reduced by the proposed fusion algorithm. Hence, the proposed fusion algorithm weakens the intensities of the halo artifacts and the colour distortion, as shown in Figure 5(c) (right image).
To fulfil the visual perception requirements of the human visual system, the sharpness of the initial enhanced image I′ obtained from Equation (7) should be refined. Figure 6 demonstrates the effect of the proposed texture refinement process, which enhances the visibility of the image for the observer, as shown in the red bounded box. To do so, the initial enhanced image is first smoothed using the summed area tables (SATs) method [37]. This method provides a way to filter arbitrarily large rectangular areas of an image in a constant amount of time. The SATs method acts as a low-pass spatial filter, similar to average and median filters.
In the SATs method, each patch in the image is transformed into a SAT and the weighting map of insufficient sharpness is then estimated. Based on the principle of SATs, the sum over the patch Ω(x) for a pixel of the enhanced image within each colour channel c can be calculated as:

sum_Ω(x, c) = Σ_{y∈Ω(x)} I′(y, c).  (8)

The image is traversed only once to reduce the computational complexity. Therefore, the SAT at each pixel is calculated as:

S(x, c) = I′(x, c) + S(x + o₁, c) + S(x + o₂, c) − S(x + o₃, c),  (9)

where o₁, o₂, and o₃ are three direction offsets having the values (−1, 0), (0, −1), and (−1, −1), respectively. For the enhanced image I′, the weighting map W_m for insufficient sharpness, obtained from the path of eight-connected pixels, can be given by:

W_m(x, c) = ( S(a, c) + S(b, c) − S(z, c) − S(t, c) ) / n_Ω,  (10)

where n_Ω is the number of pixels within the patch Ω, and a = (v, v), b = (u, u), z = (v, u), and t = (u, v) correspond to the corner neighbour pixels; u and v are the upper and lower indices of the patch Ω.
After estimating the weighting map W_m, which represents a smoothed (low-pass filtered) version of the enhanced image I′, the details D of the enhanced image can be obtained by:

D(x, c) = I′(x, c) − W_m(x, c).  (11)

The refined enhanced image I′_ref is then obtained by sharpening the details D as:

I′_ref(x, c) = I′(x, c) + σ(x) D(x, c),  (12)

where σ(x) is a scaling factor used to modify the value of each pixel to enhance the details of the image; this scaling factor can thus improve the global contrast of the image. Note that when the value of D(x, c) is negative, the value of the smoothed pixel W_m(x, c) is greater than that of the enhanced pixel I′(x, c). This means the enhanced image already contains the required details and there is no need to refine them, thus avoiding over-enhancement. On the other hand, when the value of D(x, c) is positive, the value of the enhanced image is greater than that of the smoothed image and the image details should be enhanced. A simple automatic setting of the scaling factor σ at each location x is introduced in Equation (13) as a function of the estimated dark channel I_dark of the input image, which can be obtained by the same mechanism used to estimate I_bright: I_dark(x) = min_{c∈{R,G,B}} ( min_{y∈Ω(x)} I^c(y) ).
Step2: Obtain the halo strength by calculating weight bright using Equation (4).
Step3: Estimate the illumination component L̂ based on the proposed fusion-based bright channel estimation scheme using Equation (5).
Step4: Obtain the initial enhanced image I ′ by using Equation (7).
Step5: Calculate the weighting map W m for insufficient sharpness of the initial enhanced image I ′ using Equation (10).
Step6: Extend the sharpness to obtain the refined enhanced image I′_ref using Equation (12).
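The steps above can be sketched end-to-end as follows. This is a minimal Python sketch under several stated assumptions: min-max scaling for the normalization in Equation (4), a uniform box filter standing in for the SAT-based weighting map of Equations (8)-(10), and a fixed scaling factor `sigma_scale` in place of the dark-channel-based σ(x) of Equation (13), which is not reproduced here:

```python
import numpy as np
from scipy.ndimage import maximum_filter, uniform_filter, gaussian_filter

L_MIN = 50 / 255  # lower illumination bound from Section 4.2

def enhance(img, patch=15, sigma_scale=0.5):
    """img: H x W x 3 float array in [0, 1]. Returns the refined enhanced image."""
    # Step 1: maximum colour channel and patch-wise bright channel, Eqs. (2)-(3)
    i_max = img.max(axis=2)
    i_bright = gaussian_filter(maximum_filter(i_max, size=patch), 2.0)
    # Step 2: halo-strength weight, Eq. (4) (min-max normalization assumed)
    diff = i_bright - i_max
    w = (diff - diff.min()) / (diff.max() - diff.min() + 1e-12)
    # Step 3: fused illumination estimate, Eq. (5), with the lower bound applied
    illum = np.maximum(w * i_max + (1 - w) * i_bright, L_MIN)
    # Step 4: initial enhanced image (scene reflectance), Eq. (7)
    enhanced = np.clip(img / illum[..., None], 0, 1)
    # Step 5: weighting map via a box mean (stand-in for the SAT of Eqs. (8)-(10))
    w_m = uniform_filter(enhanced, size=(patch, patch, 1))
    # Step 6: sharpen the details, Eqs. (11)-(12), with a fixed scaling factor
    detail = enhanced - w_m
    return np.clip(enhanced + sigma_scale * detail, 0, 1)
```

Applied to a uniformly dark image, the sketch brightens it toward the reflectance estimate while keeping the output in [0, 1].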
This way of estimating the value of σ, as defined in Equation (13), can enhance the details and the contrast of the image while preserving naturalness. The workflow of the proposed image enhancement algorithm is shown in Algorithm 1.

EXPERIMENTAL RESULTS
Several experiments were performed to evaluate the performance of the proposed algorithm. They were conducted on five low-light image datasets [22,28,38,39,48], which contain images captured under various light conditions such as sunlight, indoor lighting, and moonlight. Most of the images in the datasets have low contrast in some local areas along with serious global illumination variation. The performance of the proposed method was qualitatively and quantitatively compared with several state-of-the-art methods, including exposure-based sub-image histogram equalization (ESIHE) [14], multi-scale retinex with colour restoration (MSRCR) [21], naturalness preserved enhancement (NPE) [22], max intensity channel image enhancement (MICIE) [24], multi-deviation fusion-based enhancement (MF) [26], simultaneous reflection and illumination estimation (SRIE) [27], low-light image enhancement (LIME) [28], low-light enhancement using camera response (LECARM) [29], the dual channel prior-based method (DCP) [30], nature preserving low-light image enhancement (NPLIE) [32], and zero-reference deep curve estimation (Zero-DCE) [53]. All experiments were conducted on a PC with a 2.16 GHz Intel Pentium processor and 4 GB of RAM using Matlab R2015a.

Qualitative evaluation
In this subsection, a subjective experiment for qualitative perceptual quality measurement was conducted to evaluate the efficiency of the proposed low-light image enhancement algorithm compared with the ESIHE, MSRCR, NPE, MICIE, MF, SRIE, LIME, LECARM, DCP and Zero-DCE algorithms. Due to space limitations, the visual qualities of nine test images selected from the datasets are shown in Figures 7-9. Figure 7 shows three low-light images captured under sunlight conditions during the daytime and the enhanced images produced by each algorithm. As shown in this figure, the input images have a bright background and a dark foreground in which certain details in the dark regions cannot be seen clearly. The output of the ESIHE algorithm shows insufficient enhancement due to the inappropriate exposure-threshold value used to enhance the images. Although the lightness is improved in the results of the remaining algorithms, the results of the MSRCR, NPE and MICIE algorithms have visual artifacts and colour distortions, whereas the MF and SRIE outputs are still dim. It can also be seen that the bright areas in the LIME output are saturated and the LECARM output is slightly pale, with details that are not clear enough, whereas the brightness of the DCP output has not been sufficiently enhanced. Further, Zero-DCE fails to restore the real colours of the input image, which makes the output look pale and unnatural. Although the output of the LIME algorithm is brighter than that of the proposed algorithm in some regions, the proposed algorithm still produces more visible details with fewer visual artifacts than LIME and the other algorithms. Figure 8 demonstrates three dimmed colour images captured under indoor lighting conditions and the enhanced images generated by each algorithm.
Since the illumination of indoor scenes is limited, the overall colour features in these images are poor compared with low-light images captured under sunlight conditions. As shown in Figure 8, although the proposed algorithm may not sufficiently brighten the very dark area (in the right column of Figure 8), it brightens the other areas, recovers more object details than the other enhancement methods, and produces the best visual quality. Figure 9 displays three other low-light images captured under moonlight conditions during the night-time and the effect of the enhancement algorithms on the quality of their output images. Since the illumination in the night-time is much lower than during the daytime under sunlight or indoor lighting conditions, the colour features in the original images are weak and invisible. Hence, improving this kind of image becomes very difficult. It can be seen from Figure 9 that the ESIHE algorithm fails to preserve the brightness and its enhancement of the image quality is barely visible, while the MSRCR algorithm produces blurry output. The NPE algorithm causes over-enhancement artifacts, and the colour features of the MICIE output are distorted. In some regions of these images, the visual quality of the MF outputs is not good, while the output images of SRIE have low brightness. The LIME algorithm suffers from over-enhancement, especially in the bright regions, and a grey veil coats the output images of the LECARM algorithm. It is also observed that the results of DCP contain noise in some regions, which makes the image muddled due to the over-enhancement at the edges of objects. Although Zero-DCE improves the brightness of the input image, it fails to restore the details and textures clearly. The proposed algorithm outperforms the other algorithms in enhancing night-time images.
To better illustrate the visible difference between the subjective quality of the images produced by the proposed algorithm and the state-of-the-art algorithms that produce visually close results (LIME, LECARM, DCP and Zero-DCE), the selected red, yellow and green boxes from Figures 7-9 were zoomed in and are shown in Figure 10(a-c). It can be seen from the zoomed-in patches that the proposed algorithm produces more visible details with fewer visual artifacts and less colour distortion than the other algorithms. In summary, compared with the other methods, the proposed method subjectively produces an improvement when enhancing dimmed images. It also achieves a good trade-off between illumination improvement and naturalness preservation because the proposed fusion algorithm and the refinement process extend the sharpness without amplifying the noise.
A user study was also performed to validate the efficiency of the proposed algorithm by scoring the subjective visual quality of the enhanced images. A group of 25 low-light images was randomly selected from the used datasets and enhanced using the various algorithms. The enhanced images were rated by 15 human observers (subjects) invited to independently rate the visual quality of the enhanced images according to specified criteria: brightness recovery, contrast, absence of colour distortion, absence of over-enhancement, and detail preservation. The rating scores of visual quality range from 1 to 5 (worst to best quality). Table 1 illustrates the average opinion scores collected from the subjects. It can be seen that the proposed algorithm achieves the highest score, indicating that its results are the most preferred by the subjects. The scores in Table 1 also support the qualitative assessments.

Quantitative assessments
To support the qualitative superiority of the proposed method, a quantitative analysis is given in this subsection. Six quality metrics were employed for the quantitative assessment of the image enhancement methods: the absolute mean brightness error (AMBE) [40], the lightness order error (LOE) [22], the discrete entropy (DE) [41], the patch-based contrast quality index (PCQI) [42], the measure of enhancement (EME) [43], and the maximum contrast with minimum artefact (MCMA) [44].
The AMBE metric measures the absolute difference between the mean brightness of the input (i) and output (o) images in gray-scale format. The AMBE metric can be formulated as:

AMBE = |E(i) − E(o)|,

where E(·) denotes the mean brightness of an image. A lower AMBE value indicates better brightness preservation, such that the enhancement algorithm preserves the mean brightness of the low-light input image.
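As a concrete illustration, the AMBE definition above can be sketched in a few lines of NumPy (a simplified sketch, not the authors' implementation):

```python
import numpy as np

def ambe(low, enhanced):
    """Absolute Mean Brightness Error between two gray-scale images.

    A lower value means the enhancement better preserves the mean
    brightness of the low-light input image.
    """
    return abs(float(np.mean(low)) - float(np.mean(enhanced)))
```

For example, two uniform images with mean brightness 100 and 130 give an AMBE of 30.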
The second quantitative measure is the LOE metric, which measures the lightness distortion in the enhanced image. The LOE metric can be formulated as:

LOE = (1/m) Σ_x Σ_y U(M(x), M(y)) ⊕ U(M_e(x), M_e(y)),

where m is the number of pixels in the image, M(x) and M_e(x) are the maximum values among the three colour channels at location x of the input image and the enhanced image, respectively, and ⊕ is the exclusive-or operator. If p ≥ q, the value of the function U(p, q) is one; otherwise, the function returns zero. A lower LOE value indicates that the lightness order is better preserved.
Next, the DE metric is used to measure the amount of information in the enhanced image I′_ref and is given by:

DE = −Σ p(I′_ref) log2 p(I′_ref),

where p(I′_ref) is the pixel intensity probability in the enhanced image I′_ref. A higher DE value indicates richer details in the enhanced image I′_ref.

The PCQI metric is also used to provide an accurate estimation of the contrast variations as perceived by humans and is given by:

PCQI = (1/N) Σ q_i · q_c · q_s,

where q_i, q_c and q_s describe the mean intensity comparison, the contrast change and the structural distortion computed over the N local patches of the image, respectively [42]. A higher PCQI value implies less contrast distortion in the enhanced image.

Moreover, the EME metric measures the average contrast in the enhanced image I′_ref. Dividing the image into k1 × k2 blocks, the EME metric can be formulated as:

EME = (1/(k1 k2)) Σ_l Σ_k 20 log( max_{k,l}(I′_ref) / min_{k,l}(I′_ref) ),

where max_{k,l}(I′_ref) and min_{k,l}(I′_ref) are respectively the maximum and minimum intensity values in each block of the enhanced image I′_ref. A higher EME value indicates that the corresponding algorithm enhances the contrast of the input image well.

Finally, the MCMA metric is used to measure the performance of the enhancement algorithm in terms of contrast. This metric detects enhancement side-effects such as information loss or other artifacts like over-enhancement [44]. It combines three sub-measures extracted from the image, P_DRO, P_HSD and P_PU, which represent the dynamic range occupation (DRO), the histogram shape deformation (HSD) and the pixel uniformity (PU), respectively. Note that a lower MCMA value means that the corresponding algorithm improves the contrast with minimum artifacts.

First, to study the effect of various patch sizes on the performance of the proposed algorithm, the average quantitative measures of AMBE, LOE, DE, PCQI, EME and MCMA applied on all the images in the four datasets [22, 28, 38, 39] are given in Table 2.
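To make two of these definitions concrete, the sketch below implements DE and a block-based EME with NumPy. The block count k and the small eps guard against empty or zero-valued blocks are illustrative assumptions, not values prescribed by [41] or [43]:

```python
import numpy as np

def discrete_entropy(img):
    """Discrete entropy of an 8-bit gray-scale image (higher = richer detail)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                       # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

def eme(img, k=8, eps=1e-6):
    """Measure of enhancement: average of 20*log10(max/min) over k x k blocks."""
    h, w = img.shape
    bh, bw = h // k, w // k
    total = 0.0
    for i in range(k):
        for j in range(k):
            block = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].astype(float)
            total += 20.0 * np.log10((block.max() + eps) / (block.min() + eps))
    return total / (k * k)
```

A flat image has DE = 0 and EME = 0, while an image whose pixels split evenly between two intensity levels has DE = 1 bit.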
It is observed that when the patch size is 3 × 3, the proposed algorithm achieves the lowest values of AMBE and LOE, while it achieves the best DE, PCQI, EME and MCMA results when the patch size is set to 15 × 15. It can be concluded that the proposed algorithm with a patch size of 15 × 15 produces better illumination and contrast, while preserving the details and avoiding over-enhancement artifacts, than with the other patch sizes. Second, the efficiency of the proposed fusion algorithm for illumination estimation was compared against the bright channel prior and the maximum colour channel algorithms, with the quantitative assessments given in Table 3. The results in this table illustrate the superiority of the proposed fusion algorithm over both the bright channel and maximum colour channel algorithms for illumination estimation.
Further, to evaluate the efficiency of the proposed algorithm, Table 4 illustrates the average AMBE, LOE, DE, PCQI, EME and MCMA metrics obtained by the proposed and the other enhancement algorithms. It is observed from this table that SRIE achieves the worst score in terms of the AMBE metric because this method produces insufficient improvement of the image illumination. The MICIE method achieves the worst score in terms of the LOE metric because its inaccurate illumination estimation produces a weird colour distortion that makes the enhanced images look unnatural. It can also be seen that MSRCR achieves the worst score in terms of the DE, PCQI and EME metrics because MSRCR increases the overall brightness of the image, which causes over-enhancement in some detailed areas and results in the loss of important details. The ESIHE and MF methods produce the worst scores in terms of the MCMA metric, which indicates that these methods fail to improve the contrast of the input image and produce visual artifacts in the enhanced images. As can be seen, none of the existing methods achieves the best score on all metrics, whereas the proposed algorithm does. It obtains the smallest AMBE and LOE values compared with the other enhancement algorithms. This means that the proposed algorithm can preserve both the brightness and the lightness order of the low-light input images and hence the naturalness of the enhanced images.
Moreover, the proposed algorithm produces the highest average values of the DE, PCQI and EME metrics and the lowest average value of the MCMA metric. These average values of PCQI, DE, EME and MCMA indicate that the proposed algorithm can improve the quality of the images in terms of illumination, sharpness and contrast while avoiding over-enhancement artifacts, owing to the proposed fusion-based mechanism for estimating the illumination of the input image combined with an effective refinement method for the initial enhanced image.
Finally, the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) metrics were used to check the similarity between the enhanced images and their corresponding reference images. Table 5 illustrates the average PSNR and SSIM results of the various enhancement algorithms applied on the images from the LOL dataset [48], which has ground-truth images. It can be seen that the proposed algorithm achieves the highest PSNR and SSIM values among the compared methods, indicating that its enhanced images are closer to their ground-truth images than those produced by the other methods.
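For reference, PSNR can be computed directly from the mean squared error against the ground-truth image; the sketch below assumes 8-bit images with a peak value of 255. SSIM is more involved, and off-the-shelf implementations such as scikit-image's structural_similarity are typically used instead:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB of img against the reference ref."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    if mse == 0:
        return float('inf')   # identical images
    return float(10.0 * np.log10(peak ** 2 / mse))
```

A maximally wrong 8-bit image (every pixel off by 255) gives 0 dB, and higher values indicate outputs closer to the ground truth.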

5.3 Computational time

Table 6 reports the average computational time per image of the various enhancement algorithms. It can be observed that SRIE is the slowest algorithm, whereas Zero-DCE requires the least processing time, which corresponds to its test phase only. However, Zero-DCE fails to preserve the textural details, and its training takes 30 min [53]. Although the proposed algorithm is not the fastest, its results are promising in terms of both qualitative and quantitative assessments.

CONCLUSION
In this paper, an efficient low-light image enhancement algorithm was proposed. The proposed algorithm consists of two main stages, namely, the fusion-based bright channel estimation and the refinement. In the proposed fusion-based algorithm, the bright channel representing the insufficient illumination of the input image is estimated by fusing both the bright channel prior and the maximum colour channel. Further, an effective refinement algorithm was proposed to improve the sharpness of the initial enhanced image. In the refinement algorithm, the details of the initial enhanced image are obtained using the summed area tables (SATs) and scaled using the introduced automatic scaling factor to highlight the details of the enhanced image.
Thus, the global contrast of the image can be improved. The experimental results demonstrated that the proposed algorithm outperforms the state-of-the-art algorithms in terms of both qualitative and quantitative analysis. Moreover, it enhances the illumination and the details of the low-light images while preserving the naturalness. Furthermore, it reduces the colour distortion and halo artifacts compared with the other state-of-the-art methods.
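To illustrate the SAT-based refinement idea summarized in the conclusion, the sketch below extracts a detail layer as the difference between the image and its local mean computed from a summed area table, and adds the layer back with a scaling factor. The fixed window radius r and scale value are placeholder assumptions; the paper derives the scaling factor automatically:

```python
import numpy as np

def box_mean_sat(img, r):
    """Local mean over (2r+1) x (2r+1) windows via a summed area table.

    The image is edge-padded, a 2-D cumulative sum (integral image) is
    built once, and each window sum is then four SAT lookups, i.e. O(1)
    per pixel regardless of the window size.
    """
    padded = np.pad(img.astype(float), r + 1, mode='edge')
    sat = padded.cumsum(axis=0).cumsum(axis=1)
    h, w = img.shape
    d = 2 * r + 1
    s = (sat[d:d + h, d:d + w] - sat[0:h, d:d + w]
         - sat[d:d + h, 0:w] + sat[0:h, 0:w])
    return s / (d * d)

def refine(img, r=2, scale=1.5):
    """Illustrative detail boost: add back the scaled detail layer."""
    return img + scale * (img - box_mean_sat(img, r))
```

On a flat image the detail layer is zero everywhere, so the refinement leaves it unchanged, which is the expected behaviour of a detail-enhancement step.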