Low light image enhancement based on non-uniform illumination prior model

Abstract: Images captured under low-light conditions often have low visibility. To improve visualisation, a novel low-light image enhancement method is presented based on the non-uniform illumination prior model. First, the k-means method is used to process the value channel in the hue-saturation-value (HSV) colour space after colour space conversion of the input image. Then, the initial illumination of segmented scenes is estimated by an improved maximum red-green-blue method. Next, an illumination preservation method is presented to maintain the naturalness of the enhanced image. Furthermore, the non-uniform illumination prior model is proposed to enhance the textural details in the enhanced image. Fast Fourier transformation is used to accelerate the optimisation. Since an adaptive weight is assigned, the proposed method can preserve the edges and textures in bright and edge areas. Experimental analysis shows that the results of the proposed method have less noise, better illumination, improved contrast, and satisfactory naturalness. In addition, the proposed method provides better-quality images in terms of both subjective and objective assessments.


Introduction
With the development of video cameras, the enhancement of low-light images and videos has become an increasing concern. Low-light image enhancement is widely used in various applications, such as video surveillance, medical image enhancement, and military image enhancement. Furthermore, it is common for images to have different visibility in different local areas. It is well known that most images captured under low-light conditions have a narrow dynamic range and suffer from many degradations, such as low visibility, low contrast, and heavy noise. Nevertheless, it is a challenging task to balance the relationship among the contrast, noise, and illumination of low-light images. This is because images obtained under low-light conditions have low signal-to-noise ratios, which means that noise may dominate over image information. In addition, most low-light images are subject to non-uniform illumination, implying that different areas have different illumination changing trends.
Numerous methods of low-light image enhancement have been proposed. The simplest, most intuitive method is histogram equalisation (HE) [1, 2]. HE uniformly redistributes the dynamic range, which causes the output image to fall in the range [0, 1]. Based on this principle, many extended HE methods have been proposed. For example, adaptive extended piecewise HE (AEPHE) [3] divides the original histogram into a group of piecewise histograms; then, adaptive HE is applied to these extended histograms. The final result of AEPHE is produced by a weighted fusion of these equalised histograms. However, for non-uniformly illuminated images, the histograms of different areas differ, and HE-based methods may need to apply different constraints in different local regions, which is difficult to achieve. In addition, these methods neglect the noise hidden in low-light images, especially in dark areas.
Another popular approach to low-light image enhancement is based on the observation in [4] that an inverted low-light image looks like a hazy image, i.e. dehazing methods can be applied to low-light image enhancement, with the enhanced result obtained by inverting the dehazed result. Dong et al. [5] applied an optimised image dehazing algorithm to the inverted video frames and utilised temporal correlations between subsequent video frames to estimate the parameters. This method shows that the combination of inverting low-light videos and applying dehazing is effective for enhancing low-light videos. Li et al. [6] followed this technique, segmenting the observed image into superpixels and denoising the segments via the block-matching three-dimensional (BM3D) method [7]. Then, an adaptive enhancement parameter is adopted in the dark channel prior dehazing process to enlarge contrast and prevent over/under enhancement. Even though these methods can produce reasonable results, their basic model lacks a convincing physical explanation.
Recently, the enormous impact of machine learning and deep learning has led to dramatic improvements in object recognition, detection, tracking, and semantic segmentation. In addition to these high-level vision tasks, machine learning and deep learning have shown powerful capability in low-light image enhancement. Fotiadou et al. [8] proposed a low-light image enhancement method using coupled dictionary learning and the mathematical framework of sparse representations. Lore et al. [9] presented a low-light net that achieves contrast enhancement and denoising using deep autoencoders, identifies signal features from low-light images, and adaptively brightens images without over-amplifying the lighter parts of images with a high dynamic range. Nevertheless, these methods require a large number of sample pairs (patches of low-light images and corresponding patches of daytime images), which increases the complexity; the results of these methods may therefore fail to satisfy the desired naturalness owing to the lack of daytime images.
The retinex theory has been studied extensively in low-light image enhancement. Land [10] proposed the retinex theory to model the image perceived by the human visual system, which decomposes the image into a product of reflectance and illumination. After that, three classical algorithms based on the centre/surround retinex theory were developed: the single-scale retinex [11], the multi-scale retinex [12], and the multi-scale retinex with colour restoration [13]. These methods use Gaussian filtering to estimate and remove the illumination, leaving the reflectance component. However, because the illumination contains the naturalness of the low-light image, the result often appears unnatural or over-enhanced, with halo artefacts near the edges. Wang et al. [14] proposed the naturalness preserved enhancement algorithm (NPE), which splits the observed image into reflectance and illumination by a bright-pass filter, thereby preserving naturalness while enhancing details. Then, Fu et al. [15] used the bright-pass filter to adjust the illumination map. The piecewise-based contrast enhancement method (PCE) proposed in [16] applies a piecewise stretch to the brightness component extracted with the retinex theory in the hue-saturation-value (HSV) space to improve the visibility of the image. This method can serve different applications with global and local illumination estimations.
Variational interpretations of retinex form another set of enhancement algorithms. Variational model-based methods transform the decomposition of illumination and reflectance into an optimisation problem that relies on adding regularisation terms on the illumination and reflectance components. A recent work [17] uses a weighted variational model for simultaneous reflectance and illumination estimation (SRIE). However, this method does not consider the effect of noise, which generates observable halo artefacts. The method proposed in [18], low-light image enhancement via illumination map estimation (LIME), refines the initial illumination map by adding a structure-aware prior and denoises with BM3D as post-processing. However, owing to the weak constraint on reflectance, these methods often amplify the noise buried in the dark areas of the observed image. Although denoising can be applied as a post-processing step, the results then lose many details. Mading Li et al. [19] considered the noise and proposed the structure-revealing low-light image enhancement method (SRLLIE), which simultaneously estimates a structure-revealed reflectance and a smoothed illumination component. Nevertheless, the method has a relatively high computational cost. Furthermore, SRIE, LIME, and SRLLIE are based on the assumption that illumination tends to change smoothly, which may result in an incorrect estimation of illumination because different areas have different illumination changing trends.
In this study, a novel low-light image enhancement method based on the non-uniform illumination prior model (NIPM) is proposed, which can improve contrast, enhance details, and preserve the naturalness of low-light images, especially non-uniform illumination images. The proposed method belongs to the retinex-based strategy. The core concept is to make the illumination map region-aware. First, because different areas of a low-light image have different illumination changing trends, the non-uniform illumination low-light image is split into several clusters by segmenting the image with the k-means algorithm. Then, to capture the rough illumination of the observed image, the initial illumination is estimated by the method in Section 2.2. In the course of processing, illumination preservation also deserves attention. Finally, to reveal the texture details buried in darkness and enhance the contrast of edges, a variational method, NIPM, is proposed to remove the texture details from the illumination. That is to say, the result of the proposed method is the reflectance, which is calculated by the basic retinex model from the observed image and the illumination obtained in the previous steps. Even though the proposed method removes the illumination, the key information in the illumination is preserved. Therefore, the results of the proposed method appear more natural, exhibit no over-enhancement, and show fewer halo effects.
This paper is organised as follows: the details of the proposed method are introduced in Section 2. Section 3 presents the experimental results and compares the proposed method with the state-of-the-art both in subjective and objective assessments. In addition, the parameters included in the proposed method are analysed and the computational time is presented. Finally, the work is concluded in Section 4.

Proposed method
This section describes the proposed method. The general framework of the proposed method is illustrated in Fig. 1, where the four blue boxes contain key ideas of this study. These key ideas are as follows: (i) scene segmentation: the original image is divided into several clusters by the k-means clustering method, which is introduced in Section 2.1. (ii) Initial illumination estimation and illumination preservation: the initial illumination of each cluster is estimated by a modified maximum red-green-blue (maxRGB) algorithm. In addition, the initial illumination is further processed by illumination preservation based on the observation of illumination. They are introduced in Sections 2.2 and 2.3. (iii) NIPM: to preserve the texture details in the enhanced result, the NIPM is proposed to remove the texture details from the initial illumination by the variational method. Then, they are restored in the reflectance component. The enhanced result has texture detail information. It is introduced in Section 2.4.
To further illustrate the idea of the proposed algorithm, the retinex theory is explained. The retinex theory assumes that the captured image is the product of illumination and reflectance, which can be modelled as

I = R ∘ L, (1)

where I is the observed image, R indicates the reflectance component, and L represents the illumination component. The operator ∘ denotes pixel-wise multiplication, and R is the enhanced image recovered from I. This study uses a simplified model that estimates the illumination L only; the enhanced result R is then obtained by the element-wise division R = I/L (in this study, this model is denoted as the basic retinex model). To preserve the hue and saturation of the enhanced image, only the value (V) channel of the observed image in the HSV colour space is processed, as shown in Fig. 1.
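As a minimal sketch of the basic retinex model applied to the V channel (assuming a float RGB image in [0, 1]; the function name and toy inputs are illustrative, not from the paper):

```python
import numpy as np

def enhance_v_channel(rgb, illumination, eps=1e-6):
    """Basic retinex on the HSV value channel: R = V / L.

    rgb:          H x W x 3 float image in [0, 1]
    illumination: H x W illumination map L in (0, 1]
    eps guards against division by zero in dark pixels.
    """
    v = rgb.max(axis=2)                # V channel of HSV is max(R, G, B)
    reflectance = np.clip(v / (illumination + eps), 0.0, 1.0)
    return v, reflectance

# toy example: a dim image under uniform illumination L = 0.25
rgb = np.full((4, 4, 3), 0.2)
v, r = enhance_v_channel(rgb, np.full((4, 4), 0.25))
```

Dividing the dim value 0.2 by the estimated illumination 0.25 brightens the pixel to about 0.8, which is exactly the behaviour the basic retinex model relies on.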

Scene segmentation
As seen in Fig. 2, non-uniform illumination images have different illumination changing trends in different areas, which is due to the property of non-uniform illumination. In Fig. 2, the feathers of the white duck (Fig. 2a) and the sand beach (Fig. 2b) exhibit different illumination from the surrounding areas. This observation motivates the idea of dividing the original image into different scenes according to the illumination information. In other words, the goal of this step is to segment the V channel into a group of scenes. After segmentation, the pixels within the same scene should share similar illumination. The image is divided in a manner similar to data clustering; therefore, the segmentation can be transformed into a clustering problem. There are many mature clustering algorithms; in this study, the k-means method [20] is adopted. The processing step is expressed as follows:

min Σ_{i=1}^{k} Σ_{V(x) ∈ Ω(i)} ∥V(x) − φ_i∥², (2)

where k is the cluster number, Ω(i) is the ith cluster, and φ_i is the ith cluster centre. In this study, the cluster number k is set to 10 based on extensive experiments and qualitative analysis (which is introduced in Section 3.3). The k-means clustering method iterates by minimising the mean square distance from each pixel to the cluster centre. After the lth iteration, the difference between successive objective values is

Δ^(l) = |J^(l) − J^(l−1)|, (3)

where l is the iteration index and J^(l) denotes the value of the objective in (2) after the lth iteration. The iteration procedure stops when it converges; in this study, the convergence criterion is

Δ^(l) < ε, (4)

and the number of iterations is determined by setting ε = 10⁻⁵. As the clustering method optimises the sum of squares within each cluster and the number of iterations is limited, the algorithm finds a local optimum. In this study, the clusters are denoted V_1, V_2, …, V_k (V_m represents the mth cluster of the V channel, with m an integer in [1, k]). Fig. 3 shows the k-means results for the two images in Fig. 2, indicating that the segmentation results precisely describe the spatial distribution of the illumination, which is generally consistent with human visual observation.
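The clustering step above can be sketched with a plain Lloyd's k-means on pixel intensities; the 1-D intensity feature, the even initialisation, and the convergence test on centre movement are assumptions of this sketch, not necessarily the paper's exact choices:

```python
import numpy as np

def kmeans_segment(v, k=10, eps=1e-5, max_iter=100):
    """Cluster the pixels of a value channel by intensity (Lloyd's k-means).

    Returns per-pixel labels and the final cluster centres. The iteration
    stops when the total movement of the centres falls below eps.
    """
    x = v.ravel()
    # initialise centres evenly over the observed intensity range
    centres = np.linspace(x.min(), x.max(), k)
    for _ in range(max_iter):
        # assign each pixel to its nearest centre
        labels = np.argmin(np.abs(x[:, None] - centres[None, :]), axis=1)
        new = np.array([x[labels == i].mean() if np.any(labels == i) else centres[i]
                        for i in range(k)])
        moved = np.abs(new - centres).sum()
        centres = new
        if moved < eps:                       # convergence criterion
            break
    return labels.reshape(v.shape), centres

# two clearly separated illumination levels
v = np.concatenate([np.full(50, 0.1), np.full(50, 0.9)]).reshape(10, 10)
labels, centres = kmeans_segment(v, k=2)
```

On this toy input the two intensity levels are recovered as two scenes, mirroring how the V channel is split into clusters V_1, …, V_k.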

Initial illumination estimation
To initially estimate the illumination in each element V_m, many algorithms choose the maxRGB technique (as shown in [18]):

T_m(x) = max_{c ∈ {R, G, B}} I^c(x), x ∈ Ω_m, (5)

where T_m is the initially estimated illumination of the mth cluster, and c indexes the three red-green-blue (RGB) channels. In this equation, it is assumed that the illumination at each pixel is at least the maximum value among its three channels. Figs. 4a and b show the illumination maps of the images in Fig. 2 estimated by maxRGB. However, this estimation method boosts only the global illumination. As shown, the body of the duck has high illumination intensity, and the sky of the sand beach loses details. In addition, this method may fail for non-uniform illumination images because of illumination changes within the image. Thus, the initial illumination estimation adopted in this study is the following:

T_m(x) = a(x) I^R(x) + b(x) I^G(x) + c(x) I^B(x), x ∈ Ω_m, (6)

where a is the ratio of the R channel to the sum of all channels at each pixel, b is the corresponding ratio of the G channel, c is the corresponding ratio of the B channel, and Ω_m is the area of the mth cluster. With this estimation method, the initial illumination preserves the original colour ratios of the different scenes. The initial illumination of each cluster is obtained by (6), and the entire initial illumination map is obtained by combining the k illumination maps; this study denotes the entire initial illumination map as T_0. Figs. 4c and d present the initially estimated illumination maps of Fig. 2 by this method.
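The maxRGB baseline of [18], applied cluster by cluster, can be sketched as follows; the per-cluster loop mirrors the scene-wise estimation, and the function name is illustrative (the paper's own improved estimator differs, so this is only the baseline being discussed):

```python
import numpy as np

def max_rgb_illumination(rgb, labels):
    """Initial illumination by the maxRGB rule, applied cluster by cluster.

    For every pixel the illumination is taken as the maximum of its three
    colour channels, i.e. T_m(x) = max over c of I^c(x) for x in cluster m.
    """
    t0 = np.zeros(labels.shape)
    for m in np.unique(labels):
        mask = labels == m
        t0[mask] = rgb[mask].max(axis=1)   # max over R, G, B per pixel
    return t0

# toy image where green is the brightest channel everywhere
rgb = np.zeros((2, 2, 3))
rgb[..., 0], rgb[..., 1], rgb[..., 2] = 0.3, 0.5, 0.2
labels = np.zeros((2, 2), dtype=int)       # a single cluster
t0 = max_rgb_illumination(rgb, labels)
```

Every pixel of the toy image receives the green channel's value 0.5, illustrating why maxRGB tends to boost the global illumination uniformly.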

Illumination preservation
Estimating the initial illumination by the method described in Section 2.2 induces overexposure in result R, based on the element-wise division R = V /T 0 . In Fig. 5, overexposure appears in the results after the initial illumination estimation.
Specifically, the details of the feathers in (a) are barely visible; the patterns of the stones are imperceptible, and there are some halos in (b). This is because errors exist in the initial illumination estimation method. Therefore, to maintain the naturalness of illumination in the resulting image, a large number of images are collected, including non-uniform illumination low-light images and well-lit daytime images from the internet and several datasets, such as the night-time image dataset [21]. Next, 2000 non-uniform illumination low-light images are randomly selected, and their initial illumination is estimated using the method explained in Section 2.2. Then, the reflectance is calculated with the basic retinex model. In the reflectance, the phenomenon that some regions are overexposed is obvious. The overexposed regions make the result appear overbright and lose many details, resulting in an unnatural enhanced image. Moreover, the overexposed regions in the reflectance correspond to the regions with higher brightness in the V channel of the original image. To explore this relationship, this study labels the overexposed sites in the V image. Finally, the following relationship is obtained: pixels whose values are more than 1.2 times the average pixel value of the entire original image are likely to be overexposed. In other words, the pixels that satisfy this condition must maintain their original values to preserve naturalness.
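The empirical preservation rule (keep pixels brighter than 1.2 times the mean of V at their original value) can be sketched as below; the factor follows the text, while the function name and the reset-to-V implementation are illustrative assumptions:

```python
import numpy as np

def preserve_illumination(v, t0, factor=1.2):
    """Keep bright pixels at their original value to avoid overexposure.

    Pixels whose V value exceeds `factor` times the mean of the V channel
    are prone to overexposure after the division R = V / T, so their
    illumination is set to the pixel value itself (giving R = 1 there).
    """
    t = t0.copy()
    bright = v > factor * v.mean()
    t[bright] = v[bright]          # R = V / T = 1 at these pixels
    return t

v  = np.array([[0.1, 0.1], [0.1, 0.9]])   # one bright pixel
t0 = np.full((2, 2), 0.5)
t = preserve_illumination(v, t0)
```

Only the bright pixel (0.9 > 1.2 × 0.3) is reset, so dark regions are still brightened by the division while the bright region keeps its natural appearance.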
In this study, the illumination map after the illumination preservation routine is denoted as T̂. As shown in Figs. 6a and b, the illumination map after illumination preservation differs from the initial illumination estimation shown in Figs. 4c and d. In addition, the corresponding reflectance based on R = V/T̂ is significantly different, as shown in Figs. 6c, d and 5. In this way, the saturation of the entire image adapts to the human visual system and the naturalness of the image can be preserved, such as the sky of the sand beach in Fig. 6d.

Non-uniform illumination prior model (NIPM)
From (6), the illumination is estimated in a pixel-wise style, which produces an illumination map similar to the original image in terms of texture details. Thus, the texture details in the enhanced result R are flattened during the division process. It is therefore necessary to make the illumination map piece-wise constant, so that the original texture details in image V are well preserved in the division process. To this end, the improved illumination map must be further refined. In other words, the texture pattern within the same semantic regions must be partitioned and removed, whereas the boundaries between different areas should be preserved. In addition, it is imperative to consider the noise hidden in the dark regions, which has a large effect on the improved illumination map; reducing the impact of noise on the reflectance is therefore essential. To smooth the texture details while accounting for the noise in the dark areas of the non-uniform low-light image, NIPM is proposed using a variational algorithm. NIPM comprises the following four steps. (i) The first step is to divide the illumination map into texture, structure, and noise information. In this study, the noise is assumed to be additive white Gaussian noise. The decomposition is expressed as

T̂ = T_t + T + N, (7)

where T̂ is the illumination map after the preservation step in Section 2.3, T_t is the texture information, T is the structure information, and N is the noise information. Texture and structure are, of course, different. The local variation represents the gradient feature, and its deviation represents the variation correlation within a local patch. Therefore, the deviation of the local variation has surprisingly strong discriminative power in distinguishing texture: texture is weakly correlated, whereas structure is strongly correlated. Texture changes frequently and its gradient fluctuates rapidly; in contrast, structure changes smoothly, and the deviation of its gradient is small.
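The texture/structure discrimination by local variation deviation can be illustrated as follows; the patch size and the use of a plain standard deviation over forward-difference gradients are assumptions of this sketch, not the paper's exact definition:

```python
import numpy as np

def local_variation_deviation(img, patch=3):
    """Deviation of the local variation (gradient) inside each patch.

    Texture regions fluctuate rapidly, so the gradients inside a patch vary
    strongly; smooth structure produces consistent gradients and a small
    deviation.
    """
    gx = np.diff(img, axis=1, append=img[:, -1:])   # forward differences
    gy = np.diff(img, axis=0, append=img[-1:, :])
    grad = np.hypot(gx, gy)                          # local variation
    h, w = grad.shape
    dev = np.zeros_like(grad)
    r = patch // 2
    for i in range(h):
        for j in range(w):
            win = grad[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            dev[i, j] = win.std()                    # deviation in the patch
    return dev

flat = np.full((8, 8), 0.5)                                   # pure structure
noisy = 0.5 + 0.2 * np.random.default_rng(0).standard_normal((8, 8))  # texture-like
```

A flat region yields zero deviation everywhere, while a rapidly fluctuating region yields large deviations, which is the discriminative behaviour described above.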
(ii) The second step is to obtain the structure prior information of a non-uniform illumination image. A guided filter is an effective way to obtain the structure information of an image. In this study, the fast guided filter [22] is used to ensure the validity of the non-uniform illumination prior information. The initially estimated illumination map is filtered by the fast guided filter, and the result is used as the structure prior information, denoted as G in this study.
(iii) The third step is to build the weight map, which is used to restrict the gradient of the texture. Given an input non-uniform illumination low-light image, the proposed method adaptively assigns different weights, because the improved illumination map contains abundant texture in the bright areas and vice versa. Thus, appropriate smoothness is achieved for both bright and dark areas, with broader smoothing carried out in the bright areas. The weight map W_s is defined in terms of the structure prior information G and a coefficient σ that restrains the sparsity of T. After repeated testing, the texture details behave well and the consistency of the original texture is preserved when σ = 5.
(iv) The fourth step is to remove the texture. To preserve the texture details in the reflectance, the following variational formulation is used:

min_{T_t, T, N} ∥T_t + T + N − T̂∥_F² + α∥T − G∥_F² + β∥W_s ∘ ∇T_t∥_F² + γ∥N∥_F², (9)

where α, β, and γ are coefficients that balance the importance of the different terms, ∥·∥_F² represents the squared Frobenius norm, and ∇ is the first-order differential operator. G is the structure prior information introduced in the second step, and W_s is the weight map expounded in the third step. The role of each term in (9) is interpreted below:
• ∥T_t + T + N − T̂∥_F² constrains the fidelity between the preserved illumination T̂ and the recomposed one T_t + T + N;
• ∥T − G∥_F² restrains the difference between the structure prior information G and the expected structure T;
• ∥W_s ∘ ∇T_t∥_F² corresponds to the variation sparsity and considers the piece-wise smoothness of the texture gradient ∇T_t. In this study, the l_2 norm replaces the l_1 norm, because l_1 minimisation is computationally expensive; as an alternative, the l_2 minimisation uses the weight W_s to assign different regularisation strengths in the variation sparsity term to estimate the smoothness of the texture;
• ∥N∥_F² captures the noise information to limit its effect in subsequent processing.

Solution of NIPM
According to (9), the optimisation problem of the NIPM can be defined as

(T_t, T, N) = arg min_{T_t, T, N} ∥T_t + T + N − T̂∥_F² + α∥T − G∥_F² + β∥W_s ∘ ∇T_t∥_F² + γ∥N∥_F². (10)

To minimise the energy function with respect to T_t, T, and N, the proposed method alternately minimises the texture, structure, and noise components. The solver iteratively updates one variable at a time while fixing the others, and each step has a simple closed-form solution. To state the details clearly, the solutions of the sub-problems are provided below for the (t + 1)th iteration (where t is the iteration number, located in the superscript of each variable).

T sub-problem:
Collecting the T-related terms from (10) gives the following problem:

T^(t+1) = arg min_T ∥T_t^(t) + T + N^(t) − T̂∥_F² + α∥T − G∥_F². (11)

As can be seen from (11), the (t + 1)th iterate of T is related to the tth iterates of T_t and N. This is a classic least-squares problem; thus, the solution can be computed by differentiating (11) with respect to T and setting the derivative to 0. By assuming circular boundary conditions, 2D fast Fourier transform (FFT) techniques can be applied to speed up the calculation. Therefore, the solution of this sub-problem is

T^(t+1) = ℱ⁻¹( (ℱ(T̂ − T_t^(t) − N^(t)) + α ℱ(G)) / ((1 + α) I) ), (12)

where ℱ(·) is the 2D FFT operator, ℱ⁻¹(·) stands for the 2D inverse FFT, the division is element-wise, and I is the matrix of the proper size with all entries equal to 1. T is updated until

∥T^(t+1) − T^(t)∥_F / ∥T^(t)∥_F ≤ ε₁.
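Because no spatial operator acts on T in this sub-problem, the system is diagonal and the update decouples pixel-wise; the FFT expression then reduces to a simple closed form. A sketch (variable names are illustrative):

```python
import numpy as np

def solve_structure(t_hat, t_t, n, g, alpha):
    """One closed-form update of the structure component T.

    Minimises ||T_t + T + N - T_hat||^2 + alpha * ||T - G||^2 pixel-wise:
    setting the derivative to zero yields
        T = (T_hat - T_t - N + alpha * G) / (1 + alpha),
    which agrees with the FFT expression because the system is diagonal.
    """
    return (t_hat - t_t - n + alpha * g) / (1.0 + alpha)

t_hat = np.full((2, 2), 0.8)               # preserved illumination
g = np.full((2, 2), 0.6)                   # structure prior
t = solve_structure(t_hat, np.zeros((2, 2)), np.zeros((2, 2)), g, alpha=1.0)
```

With alpha = 1 the update is simply the average of the data term and the prior, 0.7 here, which matches the intuition of balancing fidelity against the structure prior.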

T t sub-problem:
Dropping the terms unrelated to T_t leads to the following optimisation problem:

T_t^(t+1) = arg min_{T_t} ∥T_t + T^(t+1) + N^(t) − T̂∥_F² + β∥W_s ∘ ∇T_t∥_F². (13)

The closed-form solution of (13) can be obtained by differentiating (13) with respect to T_t and setting the derivative to 0:

(1 + β DᵀW_s²D) T_t = T̂ − T^(t+1) − N^(t), (14)

where D contains D_h and D_v, the Toeplitz matrices formed from the discrete gradient operators with forward differences. Similar to the T sub-problem, 2D FFT techniques are applied to this sub-problem. Consequently, the expression is

T_t^(t+1) = ℱ⁻¹( ℱ(T̂ − T^(t+1) − N^(t)) / (I + β Σ_{d ∈ {h, v}} ℱ(D_d)* ∘ ℱ(D_d)) ), (15)

where ℱ(D_d)* denotes the complex conjugate, and T_t is updated until

∥T_t^(t+1) − T_t^(t)∥_F / ∥T_t^(t)∥_F ≤ ε₂. (16)
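The FFT route for this sub-problem can be illustrated on the unweighted variant (W_s ≡ 1, periodic boundaries); with a spatially varying W_s the system is no longer exactly diagonalised by the FFT, so this is a simplified sketch:

```python
import numpy as np

def solve_texture_fft(b, beta):
    """Solve min ||T_t - b||^2 + beta * ||grad T_t||^2 under periodic boundaries.

    With W_s == 1 the normal equations (1 + beta * D^T D) T_t = b are
    diagonalised by the 2D FFT, so one element-wise division solves the system.
    """
    h, w = b.shape
    # transfer functions of the horizontal and vertical forward-difference kernels
    dh = np.zeros((h, w)); dh[0, 0], dh[0, 1] = -1.0, 1.0
    dv = np.zeros((h, w)); dv[0, 0], dv[1, 0] = -1.0, 1.0
    denom = 1.0 + beta * (np.abs(np.fft.fft2(dh)) ** 2
                          + np.abs(np.fft.fft2(dv)) ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(b) / denom))

rng = np.random.default_rng(1)
b = rng.standard_normal((16, 16))          # right-hand side T_hat - T - N
t_t = solve_texture_fft(b, beta=10.0)
```

Only the non-zero frequencies are attenuated (the denominator is 1 at the DC term), so the result is a smoothed field with the same mean as the input, which is exactly the piece-wise smoothing the texture term is meant to achieve.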

N sub-problem:
Preserving the terms related to N gives the N sub-problem as follows:

N^(t+1) = arg min_N ∥T_t^(t+1) + T^(t+1) + N − T̂∥_F² + γ∥N∥_F². (17)

This sub-problem is a classic least-squares problem; thus, the solution is

N^(t+1) = (T̂ − T_t^(t+1) − T^(t+1)) / (1 + γ). (18)

Texture, structure, and noise information are estimated efficiently because large-matrix inversion is avoided by the FFT. The whole iteration is stopped when the convergence criteria fall below the small thresholds ε₁ and ε₂ (10⁻⁷ in practice) or when the maximal number of iterations is reached. The proposed algorithm is summarised in Algorithm 1 (see Fig. 7). Then, the reflectance of the V channel is obtained by R = V/(T + N). Combining R with the H and S components of the original image gives the enhanced HSV image, which is finally transformed to the RGB colour space to produce the final enhancement result.
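The noise update and the final recomposition can be sketched as below; scaling the RGB channels by R/V is a shortcut equivalent (up to clipping) to replacing the V channel and keeping H and S, which is an assumption of this sketch since the paper converts through HSV explicitly:

```python
import numpy as np

def solve_noise(t_hat, t_t, t, gamma):
    """Closed-form update of the noise component N.

    Minimising ||T_t + T + N - T_hat||^2 + gamma * ||N||^2 pixel-wise gives
    N = (T_hat - T_t - T) / (1 + gamma).
    """
    return (t_hat - t_t - t) / (1.0 + gamma)

def recompose(rgb, v, t, n, eps=1e-6):
    """Reflectance R = V / (T + N), then rescale the RGB channels by R / V.

    Multiplying all three channels by the same gain leaves hue and
    saturation unchanged, mimicking the H/S-preserving HSV recombination.
    """
    r = np.clip(v / (t + n + eps), 0.0, 1.0)
    gain = r / (v + eps)
    return np.clip(rgb * gain[..., None], 0.0, 1.0)

rgb = np.full((2, 2, 3), 0.2)
v = rgb.max(axis=2)
n = solve_noise(v, np.zeros((2, 2)), np.full((2, 2), 0.4), gamma=1.0)
out = recompose(rgb, v, np.full((2, 2), 0.4), n)
```

The damping factor 1/(1 + γ) keeps the residual attributed to noise small, so the division uses T + N rather than T alone and the noise is not amplified in the reflectance.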

Experiments and comparison
In this section, the performance of the proposed method is evaluated, and then, the parameters and computational time are analysed, as follows. First, the experiment settings are presented. Next, the proposed method is evaluated by comparing it with the state-of-the-art low light image enhancement methods to illustrate the efficiency and effectiveness of the proposed method. Then, an extensive parameter study is conducted to evaluate the impact of some parameters. Finally, the computational time of the proposed method is discussed.

Experiment settings
To comprehensively evaluate the proposed method, it is tested on images under different illumination conditions. Test images are obtained from the night-time image dataset, Google Images, and the NASA image dataset. Figs. 8 and 9 show the 12 images tested in the experiments. All experiments are performed in MATLAB R2016b on a computer running Windows 10 with an Intel (R) Core (TM) i5-3470 CPU and 8.00 GB of RAM. In the experiments, unless stated otherwise, the parameter k in k-means is 10, and the parameters α, β and γ are set to 0.008, 0.3 and 0.5, respectively.

Quality assessment
In this section, the proposed method is compared with several state-of-the-art methods, including HE, NPE [14], Fu's method (SRIE) [17], and LIME [18]. HE is performed using the MATLAB built-in function histeq. The results of NPE, SRIE, and LIME are generated by the code downloaded from the authors' websites, with parameter settings kept the same as in the original papers. As shown in these figures, the major visual deficiency of non-uniformly illuminated images is that the dark areas suffer from uneven illumination and are perceived with low contrast. HE stretches the narrowly distributed histograms of non-uniform illumination low-light images in an attempt to improve the contrast. The second column of Figs. 10-13 provides the enhanced results of the HE method. However, noise is heavily amplified in the enhanced results, such as the sky areas of the image in Fig. 10b. In addition, this method only enhances the whole image, so local details are likely to be overlooked, and the illumination is not improved to adapt to the human visual system. For instance, the images in Figs. 11b and 13b have low illumination intensity. To be specific, the tree in Fig. 11b is fuzzy, so the leaves cannot be seen clearly. Simultaneously, noticeable artefacts are produced in the flat areas; for example, Fig. 12b has distinct halo effects on the hands of the two astronauts, who are in the bright area. By contrast, the proposed method generates well-detailed images that satisfy the human visual system. NPE attempts to preserve the naturalness of images. It is effective in improving local contrast in dark areas and keeping the lightness order of the image. As the third column of Figs. 10-13 shows, most of the results are satisfactory. However, the deficiency of the method is the halo effects. For instance, the edges of the house in Fig. 11c have heavy artefacts, and the edges of the clothes of every astronaut have some dark areas in Fig. 12c. Furthermore, some details are lost, and undesirable colour distortion appears with NPE, as illustrated by the people in Fig. 10c. In Fig. 13c, the colour of the plimsolls does not maintain the colour sequence of the original image, although the illumination of the plimsolls should differ across areas. As shown, the proposed method successfully preserves the colour sequence of the original image and considers the characteristics of non-uniform illumination.
SRIE argues that logarithmic transformation is not appropriate in the regularisation terms and proposes a weighted variational model. The fourth column of Figs. 10-13 shows the results of SRIE. In general, the enhanced images show sharpened details that look unnatural, such as the house in Fig. 11d. Even though the method shows impressive results in the decomposition of reflectance and illumination, there is not much improvement in the overall illumination of the enhanced images: the bottom part of Fig. 10d has low pixel values, and the cobbled road is dim in Fig. 11d. In addition, as shown in Fig. 11d, SRIE generates observable halo artefacts in some regions, such as the halo around the street lamp in the sky. Moreover, the results fail to improve the visibility of some regions of the input image, as can be observed in the top-right corner of Fig. 12d and the top-left corner of Fig. 13d. Unlike this method, the proposed method produces preferable images in most cases and avoids halo effects. LIME is an effective method that adjusts the illumination of low-light images by adding a structure-aware prior to the initial illumination map obtained by maxRGB. As shown in the fifth column of Figs. 10-13, the enhanced results have better illumination. Nevertheless, owing to the lack of constraint on the reflectance, the method easily over-enhances regions with high intensities, causing a lack of naturalness in the enhanced image, such as the texture of the trees in Fig. 10e and the illumination of the timber pile in Fig. 13e. In addition, halo artefacts appear in the sky of Fig. 11e. In this study, BM3D is used as the post-processing method for LIME, as presented in its original paper. However, as shown in Fig. 12e, denoising makes the astronauts appear fuzzy. Comparatively, the proposed method generates more natural results and considers the influence of noise.
From these examples, it is evident that the proposed method can preserve fidelity, improve illumination, suppress noise, and reduce halo effects.

Objective assessment:
It is difficult to find a widely accepted measure to quantify the quality of an enhanced image. Since the ground truth of the enhanced image is unknown, a blind image quality assessment is used to evaluate the results. Three objective metrics are presented to evaluate the enhanced results.
The first metric is the no-reference image quality metric for contrast distortion (NIQMC) [23], which is based on information maximisation. NIQMC assesses image quality by measuring local details and global histogram of the observed image, and it particularly favours images with higher contrast. For NIQMC, larger values indicate better image contrast. As shown in Table 1, HE has higher NIQMC scores, as NIQMC has a close relationship with the global histogram that HE uses to enhance the image. Meanwhile, the results of HE do not significantly improve the illumination in general, and the results are affected by noise (e.g. the enhanced sidewalk image by HE in Fig. 10b), i.e. HE cannot balance the illumination and contrast, but the proposed method can do it well.
The second metric is the natural image quality evaluator (NIQE) [24], a blind image quality assessment based on statistical regularities of natural, undistorted images. A lower NIQE implies a higher image quality. As shown in Table 2, the proposed method achieves the lowest NIQE value, which means that its results are the most similar to natural images and have the least distortion.
The third metric is the no-reference free-energy-based robust metric (NFERM) [25], which extracts features inspired by the free-energy principle of brain theory and the classical human visual system to measure the distortion of the input image. Smaller NFERM values represent better image quality. From Table 3, the average NFERM value of the proposed method ranks second among the compared methods. Although LIME has lower NFERM scores, its NIQE values are much larger, and its NIQMC values much smaller, than those of the proposed method, implying that the results of LIME lack naturalness and high contrast. As can be observed in the visual comparisons, some of the results produced by LIME appear unnatural (e.g. the house in Fig. 11e). In contrast, the proposed algorithm generally balances illumination, contrast, naturalness, and noise.
In summary, the proposed method generates better visual quality both in subjective and objective assessments. The enhanced images have an improvement in the visibility of texture details, illumination, and naturalness preservation.

Parameter study
The first parameter to be analysed is the number of clusters. To determine a relatively balanced clustering number k, a large number of experiments are conducted on different non-uniform illumination low-light images using different values of k. The enhanced results are compared in terms of the objective assessments, with k varied in the range [1, 15]. Fig. 14 shows the objective assessments after segmentation. It is evident that around k ≈ 10 the largest NIQMC, the smallest NIQE, and the smallest NFERM are achieved simultaneously. When k increases from 10 to 15, the objective assessments tend to be stable, meaning that the enhanced results remain essentially the same even if k continues to increase.
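The clustering step itself can be sketched as a minimal 1-D k-means on the flattened V channel; this is a simplified Python illustration under the assumption of scalar intensities, not the paper's MATLAB implementation:

```python
import numpy as np

def kmeans_1d(values, k, iters=20, seed=0):
    """Minimal 1-D k-means: clusters V-channel intensities into k
    illumination scenes by alternating assignment and mean update."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        # Assign each value to its nearest centre.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # Recompute each centre as the mean of its members.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# Toy V channel: three dark pixels and three bright pixels.
v = np.array([0.05, 0.1, 0.08, 0.85, 0.9, 0.8])
labels, centers = kmeans_1d(v, k=2)
```

On this toy input the dark and bright pixels separate into two scenes, which is the per-scene grouping the initial illumination estimation then operates on.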
The impact of the regularisation parameters α, β, and γ in the proposed model (10) is shown in Fig. 15. Average objective results obtained with different (α, β, γ) triples on all 12 test images are presented, where α takes the values 0.8, 0.08, and 0.008, β is selected from 0.3, 0.03, and 0.003, and γ is varied over 0.5, 5, and 10. To test the impact of each parameter, the value of one parameter is changed while the others are kept fixed. From Fig. 15, when (α, β, γ) is (0.008, 0.3, 0.5), the average NIQE and average NFERM reach their lowest values, while the corresponding average NIQMC is relatively large. In other words, the empirical setting of the regularisation parameters generates satisfactory results in most cases.
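The parameter sweep above amounts to a small grid search; a hypothetical sketch in Python follows, where `evaluate` is a placeholder for enhancing the test set with model (10) and returning the average NIQE (lower is better):

```python
import itertools

alphas = [0.8, 0.08, 0.008]
betas = [0.3, 0.03, 0.003]
gammas = [0.5, 5, 10]

def evaluate(alpha, beta, gamma):
    """Placeholder score standing in for the real pipeline; this toy
    score is minimised at the paper's empirical setting by construction."""
    return abs(alpha - 0.008) + abs(beta - 0.3) + abs(gamma - 0.5)

# Exhaustively score all 27 triples and keep the best one.
best = min(itertools.product(alphas, betas, gammas),
           key=lambda p: evaluate(*p))
```

With the real objective in place of the toy score, the same loop reproduces the study's selection of (0.008, 0.3, 0.5).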

Computational time
Since all calculations of the proposed algorithm are element-wise and the FFT is adopted to avoid large-matrix inversion, the computational time is acceptable. The proposed method takes ∼87.236 s to process one image of size 1155 × 859; for comparison, HE, NPE, SRIE, and LIME require ∼0.042, 36.115, 84.245, and 7.352 s, respectively. The proposed method operates in the HSV domain. Although it requires more time, its results are the best in terms of objective and subjective assessments. Furthermore, as the proposed method is implemented in MATLAB and not well optimised, its computational time could be further reduced by C/C++ programming and advanced computing, such as a graphics processing unit.
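The FFT acceleration can be illustrated on a simplified quadratic model: for min_u ||u − b||² + γ||∇u||² with periodic boundaries, the minimiser is obtained by a single pointwise division in the Fourier domain rather than a large sparse-matrix inversion. The Python sketch below shows the idea under these assumptions; it is not the paper's exact model (10):

```python
import numpy as np

def fft_quadratic_solver(b, gamma):
    """Closed-form FFT solution of min_u ||u - b||^2 + gamma*||grad u||^2
    with periodic boundaries; all work is element-wise plus a few FFTs."""
    h, w = b.shape
    # Frequency responses of the forward-difference operators.
    dx = np.zeros((h, w)); dx[0, 0] = -1; dx[0, 1] = 1
    dy = np.zeros((h, w)); dy[0, 0] = -1; dy[1, 0] = 1
    denom = 1 + gamma * (np.abs(np.fft.fft2(dx)) ** 2
                         + np.abs(np.fft.fft2(dy)) ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(b) / denom))

rng = np.random.default_rng(0)
b = rng.standard_normal((32, 32))   # stand-in for an illumination map
u = fft_quadratic_solver(b, gamma=2.0)
```

The result satisfies the normal equation (I + γDᵀD)u = b exactly, which is why the FFT route replaces the inversion of a matrix whose size is the number of pixels.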

Conclusion
In this study, a novel low-light image enhancement technique based on the NIPM is presented. First, the V channel of the observed image is segmented into several scenes by the k-means method. Then, the initial illumination map of the segmented scenes is estimated by an improved maxRGB method. Next, to maintain the naturalness of the enhanced images, the illumination preservation method, which is based on an analysis of the illumination, is applied to further refine the illumination. Moreover, the NIPM is proposed to preserve the texture details in the enhanced images. An FFT-based algorithm is provided to speed up the solution of the optimisation problem. In addition, a comprehensive experimental analysis of the proposed model is presented in terms of subjective and objective assessments. Comparisons with the other methods show that the proposed method produces better enhanced results with satisfactory visual effects.