A fast level set image segmentation driven by a new region descriptor

To deal with intensity inhomogeneities and to overcome the effect of different types of noise in the image segmentation process, we formulate a new level set function to implement a fast and robust active contour model. The proposed model combines the SBGFRLS model with Legendre polynomials. To ensure segmentation accuracy and to deal as well as possible with the presence of noise and inhomogeneous intensity distributions, we define a local region descriptor for image intensities based on Legendre polynomials; this descriptor replaces the average intensity of the region. The level set function is regularised by a Gaussian filtering process. Experimental results on challenging images demonstrate the efficiency, the robustness and the low computational cost of our model compared with well-known active contour models.

Inasmuch as edge-based models do not assume intensity homogeneity, they can be applied effectively to segment images with intensity inhomogeneities. However, these methods cannot achieve effective segmentation for noisy images or when the image contains weak boundaries. Most region-based models [6,9] rely on region descriptors to guide the curve evolution, assuming homogeneity of intensity. However, defining a region descriptor for images with intensity inhomogeneities is one of the major challenges for these types of methods.
Since the Mumford-Shah model [14] was developed, many region-based active contour methods have been proposed over the last decades. Li et al. [3,7] developed the local binary fitting (LBF) method; subsequently, using local intensity, Zhang et al. developed a new local image fitting method [20] and then proposed the SBGFRLS model [11] for selective and global image segmentation. Wang et al. [17] proposed a local Gaussian distribution fitting (LGD) method that models images by Gaussian distributions with local variances, which improves the LBF model. In another paper [12], Li et al. proposed a new model for image segmentation and bias correction based on a local intensity clustering criterion. In [25], Zhang et al. developed a new local statistical active contour model that improves on the local intensity clustering model. Although these methods can deal with intensity inhomogeneities, they are not designed to handle different types of noise. Furthermore, they are very sensitive to initialisation, their algorithms are time-consuming, and some of them cannot handle images with blurred contours. Therefore, their utility is limited [11].
To contribute to the improvement of image segmentation techniques, a new region-based active contour model is proposed in this paper. This model can handle noisy images and images exhibiting intensity inhomogeneity. The major contributions are as follows:
• We formulate our model based on a modified and simplified version of the C-V model [6] combined with Legendre polynomials.
• To ensure segmentation accuracy, we define a local region descriptor for image intensities using a linear combination of Legendre functions, instead of the average region intensity adopted in the SBGFRLS model.
• We avoid the regularising terms, such as the length of C and the area inside C used in the C-V model, which reduces the computation time.
This leads to the following improvements:
• Elimination of re-initialisation and all the drawbacks that come with it.
• Our method is less sensitive to the initial contour.
• Our method is more computationally efficient, with lower computational cost and fewer iterations.
• Images with weak or blurred boundaries can be effectively segmented within a limited number of iterations.
• Our model can also accurately segment images with different kinds of noise and in the presence of intensity inhomogeneity.
In addition, we use a Gaussian filter to keep the level set function regularised and smooth during the evolution process, and at the same time avoid re-initialisation. Quantitative evaluations using difficult images (real and synthetic) are carried out, as well as comparisons with previous work such as CV [6], SBGFRLS [11], LGD [17], L0MOS [24] and L0RDLSM [21] models.
This paper is organised as follows: in Section 2, we present the proposed model with a brief overview of some famous active contour models used in the area of image segmentation. Section 3 is devoted to implementation and experimental results. We close this article with a conclusion in Section 4.

METHOD
Using the Mumford-Shah model [14], Chan and Vese [6] implemented a piecewise-constant model for image segmentation. The general form of the energy functional of the Mumford-Shah model is defined by

E^MS(u, C) = ∫_Ω (u − I)² dx + μ ∫_{Ω\C} |∇u|² dx + ν|C|,   (1)

where Ω represents the image domain, I the input image, C ⊂ Ω the segmenting contour, and μ and ν are positive constants.
The Chan-Vese (C-V) model minimises the energy functional (2) as a piecewise-constant approximation of the Mumford-Shah problem (1):

E^CV(c₁, c₂, C) = λ₁ ∫_{inside(C)} |I(x) − c₁|² dx + λ₂ ∫_{outside(C)} |I(x) − c₂|² dx,   (2)

where c₁ is the average of the image intensities inside the contour C, and c₂ the average of the image intensities outside C. The values of c₁ and c₂ are re-estimated during the propagation of the curve.
Representing C as the zero level set of a function φ and minimising (2) together with the regularising length and area terms leads to the gradient descent flow

∂φ/∂t = δ(φ) [ μ div(∇φ/|∇φ|) − ν − λ₁ (I(x) − c₁)² + λ₂ (I(x) − c₂)² ],

where φ represents the level set function, δ is the regularised Dirac function, and λ₁ and λ₂ are positive constants that control the force driven by the image data; generally, λ₁ > 0 and λ₂ > 0.
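For concreteness, the two region averages c₁ and c₂ can be computed directly from the sign of the level set function. The following is a minimal NumPy sketch (the function name `cv_region_means` and the toy image are our own illustration, not the authors' code):

```python
import numpy as np

def cv_region_means(image, phi):
    """Average intensity inside (phi > 0) and outside the contour,
    i.e. the c1 and c2 of the Chan-Vese data terms."""
    inside = phi > 0
    c1 = image[inside].mean() if inside.any() else 0.0
    c2 = image[~inside].mean() if (~inside).any() else 0.0
    return c1, c2

# toy image: a bright square on a dark background
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
phi = np.where(img > 0.5, 1.0, -1.0)   # contour placed exactly on the object
c1, c2 = cv_region_means(img, phi)
```

With the contour on the object boundary, c₁ and c₂ are exactly the object and background intensities.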
In [11], Zhang et al. proposed the SBGFRLS active contour model for selective local or global image segmentation, in which they used a novel signed pressure force (spf) function defined as

spf(I(x)) = ( I(x) − (c₁ + c₂)/2 ) / max_Ω | I(x) − (c₁ + c₂)/2 |,  x ∈ Ω.   (3)

The spf function modulates the sign of the pressure force inside and outside the region of interest, so that the contour expands when it is inside the object and shrinks when it is outside. The level set function is then regularised by a Gaussian filtering process after each iteration. The level set evolution of the SBGFRLS model was formulated as follows:

∂φ/∂t = spf(I(x)) · α |∇φ|,  x ∈ Ω,   (4)

where α is a scale parameter tuned according to the image; its role is to control the speed of the level set update.
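The spf computation itself is a one-liner; the NumPy sketch below is our own illustration (a small epsilon is added to the denominator to avoid division by zero):

```python
import numpy as np

def spf(image, c1, c2):
    """Signed pressure force of the SBGFRLS model: positive where the
    intensity is closer to c1 (inside the object), negative where it is
    closer to c2 (outside), normalised to [-1, 1]."""
    d = image - (c1 + c2) / 2.0
    return d / (np.abs(d).max() + 1e-10)

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
s = spf(img, c1=1.0, c2=0.0)   # positive on the square, negative elsewhere
```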
For the SBGFRLS model, the energy functional was formulated using the difference between each pixel and the average intensity of the region.
In our model, we reformulate and generalise the Chan-Vese functional (2) by substituting the constants c₁ and c₂ with two smooth functions c₁^m(x) and c₂^m(x), as presented in [19]:

c₁^m(x) = Σ_{k=0}^{N} a_k P_k(x),  c₂^m(x) = Σ_{k=0}^{N} b_k P_k(x),   (5)

where P_k is a multidimensional Legendre polynomial. To compute the 2D polynomials we use the separable product

P_k(x) = p_i(x₁) p_j(x₂),

where the one-dimensional Legendre polynomial p_k on [−1, 1] is given by

p_k(x) = 2^{−k} Σ_{i=0}^{k} (k choose i)² (x − 1)^{k−i} (x + 1)^i,   (6)

where k is the Legendre polynomial degree. A linear combination of the (m + 1)² basic 2D Legendre functions represents each region of the image.
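The 2D basis can be built as separable products of 1D Legendre polynomials. A sketch using NumPy's `numpy.polynomial.legendre.legval`; the grid rescaling to [−1, 1]² and the function name are our own choices:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_basis_2d(shape, m):
    """Stack of (m + 1)^2 separable 2D Legendre basis functions
    P_k(x) = p_i(x1) * p_j(x2), evaluated on a grid rescaled to [-1, 1]^2."""
    y = np.linspace(-1.0, 1.0, shape[0])
    x = np.linspace(-1.0, 1.0, shape[1])
    basis = []
    for i in range(m + 1):
        for j in range(m + 1):
            ei = np.eye(m + 1)[i]          # coefficient vector selecting p_i
            ej = np.eye(m + 1)[j]          # coefficient vector selecting p_j
            basis.append(np.outer(legval(y, ei), legval(x, ej)))
    return np.stack(basis)                 # shape: ((m+1)^2, h, w)

B = legendre_basis_2d((16, 16), m=1)       # the 4 basis functions for m = 1
```

For m = 1 the first basis function is the constant 1, so the model contains the piecewise-constant C-V descriptor as a special case.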
Let P(x) = (P₀(x), …, P_N(x))^T denote the vector of Legendre polynomials, and let A = (a₀, …, a_N)^T and B = (b₀, …, b_N)^T be the coefficient vectors of the two regions. We can now rewrite the modified and simplified version of (2) in matrix form as

E^CV(A, B, φ) = λ₁ ∫_Ω |I(x) − A^T P(x)|² H(φ) dx + λ₂ ∫_Ω |I(x) − B^T P(x)|² (1 − H(φ)) dx,   (7)

where H denotes the regularised version of the Heaviside function [11]. We minimise the energy functional E^CV(A, B, φ) with respect to φ to obtain the gradient descent flow

∂φ/∂t = δ(φ) [ −λ₁ (I(x) − Â^T P(x))² + λ₂ (I(x) − B̂^T P(x))² ].   (8)

The optimal coefficient vectors Â and B̂ are calculated by [19]

Â = [K]^{−1} ∫_Ω I(x) P(x) H(φ) dx,  B̂ = [L]^{−1} ∫_Ω I(x) P(x) (1 − H(φ)) dx,   (9)

where the Gramian matrices [K] and [L] are obtained as [16]

[K] = ∫_Ω P(x) P(x)^T H(φ) dx,   (10)
[L] = ∫_Ω P(x) P(x)^T (1 − H(φ)) dx.   (11)

With λ₁ = λ₂ = 1, the data term of Equation (8) can be factorised and rewritten as follows [26]:

∂φ/∂t = δ(φ) · 2 (Â^T P(x) − B̂^T P(x)) ( I(x) − (Â^T P(x) + B̂^T P(x))/2 ).   (12)

During the level set evolution, the coefficient 2(Â^T P(x) − B̂^T P(x)) can be approximated as a constant. Therefore, we can omit this coefficient, and Equation (12) can be simplified, adopting the spf formulation of the SBGFRLS model, as follows:

∂φ/∂t = spf_m(I(x)) · α |∇φ|,  spf_m(I(x)) = ( I(x) − (Â^T P(x) + B̂^T P(x))/2 ) / max_Ω | I(x) − (Â^T P(x) + B̂^T P(x))/2 |.   (13)
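The optimal coefficients of Equation (9) amount to a weighted least-squares fit: the Gramian matrix is assembled with the Heaviside weight and a small linear system is solved. Below is a NumPy sketch under our own naming, with a hand-built degree-1 basis so that the snippet is self-contained; on a discrete grid the integrals become weighted sums:

```python
import numpy as np

def fit_region_coeffs(image, weight, basis):
    """A_hat = K^{-1} * integral(I * P * H), with the Gramian
    K = integral(P * P^T * H); `weight` plays the role of H(phi)."""
    P = basis.reshape(basis.shape[0], -1)     # (n_basis, n_pixels)
    w = weight.ravel()
    K = (P * w) @ P.T                         # Gramian matrix
    rhs = (P * w) @ image.ravel()
    return np.linalg.solve(K, rhs)

# degree-1 separable basis (1, x1, x2, x1*x2) on a 16x16 grid
t = np.linspace(-1.0, 1.0, 16)
basis = np.stack([np.ones((16, 16)),
                  np.tile(t[:, None], (1, 16)),
                  np.tile(t[None, :], (16, 1)),
                  np.outer(t, t)])
# an image that is exactly a combination of the basis is recovered exactly
img = 0.5 * basis[0] + 2.0 * basis[1]
A = fit_region_coeffs(img, np.ones((16, 16)), basis)
```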

IMPLEMENTATION AND RESULTS
In this section, the proposed model is validated by several experiments on various difficult images, such as noisy images and images with intensity inhomogeneities, which illustrate the robustness and efficiency of our model compared with some well-known models. The Gaussian filtering process regularises the level set function, making it smooth and its evolution more stable [11]:

φ^{n+1} = G_σ ∗ φ^n,   (14)

where G_σ is a Gaussian kernel with standard deviation σ. The standard deviation σ is a critical parameter that must be carefully tuned for each image: if σ is too small, the evolution of the level set function will be unstable; on the other hand, if σ is too large, an edge leak may occur and the detected contour may be inaccurate.
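The effect of σ can be seen on a deliberately rough level set function: a small σ barely smooths it, while a large σ flattens it (and, on a real image, would let the contour leak across weak edges). A small sketch assuming SciPy's `gaussian_filter`, with values of our own choosing:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
phi = np.sign(rng.standard_normal((32, 32)))   # deliberately rough phi

phi_small = gaussian_filter(phi, sigma=0.5)    # mild regularisation
phi_large = gaussian_filter(phi, sigma=5.0)    # heavy regularisation

# smoothing shrinks the variance of the rough function; a larger sigma
# shrinks it more, which is why sigma must be tuned per image
var_small = phi_small.var()
var_large = phi_large.var()
```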
In the experiments, the parameters are set as follows: Legendre polynomial degree m = 1, regularising constants λ₁ = λ₂ = 1, time step Δt = 1, ε = 3, and σ = 3. In specific cases, some parameters are tuned according to the image.
We summarise the proposed method in the following steps:
1. Normalise the input image.
2. Adjust the parameters and initialise the level set function.
3. Calculate the optimal Â and B̂ using Equation (9).
4. Evolve the level set function according to Equation (8).
5. Convolve the level set function with a Gaussian filter G_σ.
6. Check whether the evolution of the level set function has converged; if not, go back to step 3.
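Putting the steps above together, a compact Python sketch of the whole loop is given below. It assumes NumPy and SciPy, uses a binary Heaviside, a simple box initialisation and an SBGFRLS-style binary reset of φ to keep the front well defined, and adds a small epsilon to the region weights to keep the Gramian invertible. All names and parameter values are our own illustrative choices, not the authors' reference implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from numpy.polynomial.legendre import legval

def legendre_basis(shape, m):
    """(m+1)^2 separable 2D Legendre basis functions on [-1, 1]^2."""
    y = np.linspace(-1.0, 1.0, shape[0])
    x = np.linspace(-1.0, 1.0, shape[1])
    eye = np.eye(m + 1)
    return np.stack([np.outer(legval(y, eye[i]), legval(x, eye[j]))
                     for i in range(m + 1) for j in range(m + 1)])

def fit(image, weight, P):
    """Weighted least-squares coefficients of the region descriptor."""
    Pf = P.reshape(P.shape[0], -1)
    w = weight.ravel()
    K = (Pf * w) @ Pf.T                      # Gramian matrix
    return np.linalg.solve(K, (Pf * w) @ image.ravel())

def segment(image, n_iter=30, m=1, alpha=3.0, sigma=3.0, dt=1.0):
    """Sketch of the proposed algorithm (steps 1-5)."""
    I = (image - image.min()) / (image.max() - image.min() + 1e-10)  # step 1
    P = legendre_basis(I.shape, m)
    phi = np.full(I.shape, -1.0)                                     # step 2
    phi[6:-6, 6:-6] = 1.0
    for _ in range(n_iter):
        H = (phi > 0).astype(float)          # binary Heaviside
        A = fit(I, H + 1e-6, P)              # step 3: inside coefficients
        B = fit(I, 1.0 - H + 1e-6, P)        # step 3: outside coefficients
        c1 = np.tensordot(A, P, axes=1)      # A^T P(x)
        c2 = np.tensordot(B, P, axes=1)      # B^T P(x)
        d = I - (c1 + c2) / 2.0
        spf = d / (np.abs(d).max() + 1e-10)
        gy, gx = np.gradient(phi)
        phi = phi + dt * alpha * spf * np.hypot(gy, gx)              # step 4
        phi = np.where(phi > 0, 1.0, -1.0)   # binary reset (our choice)
        phi = gaussian_filter(phi, sigma)                            # step 5
    return phi > 0                           # final segmentation mask

img = np.zeros((40, 40))
img[12:28, 12:28] = 1.0                      # bright square to recover
mask = segment(img)
```

For brevity, the sketch runs a fixed number of iterations instead of the convergence test of step 6.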

Segmentation of real and synthetic images
We begin the test of the proposed model with a set of noisy images. Figure 1 presents the segmentation process for two real images, and Figure 2 shows two further examples. These results show that our model provides satisfactory segmentation for these kinds of images, which demonstrates its general efficiency.

Performance for different types of noises
In this section, we quantitatively evaluate the efficiency of our model on images with different types of noise using the Jaccard similarity (JS) index, given by

JS(I₁, I₂) = |I₁ ∩ I₂| / |I₁ ∪ I₂|,

where I₁ represents the ground truth and I₂ the segmentation result. We can also evaluate the segmentation accuracy using the Dice similarity coefficient (DSC), defined as

DSC(I₁, I₂) = 2 |I₁ ∩ I₂| / (|I₁| + |I₂|).

Evidently, the more similar I₁ is to I₂, the closer JS and DSC are to 1, and the better the segmentation. We test our model on five artificial images containing the same object in the presence of different types of noise. Figure 3 illustrates the segmentation results, where we used a clean image to which we added Gaussian, Nakagami, Rayleigh and Rician noise, respectively. From these results, we can say that our model succeeds in obtaining efficient results in the presence of different types of noise.
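Both metrics can be computed directly from binary masks; the implementation below is a straightforward NumPy sketch (function names and the toy masks are ours). Note that DSC = 2·JS/(1 + JS), so the two metrics always rank results identically:

```python
import numpy as np

def jaccard(a, b):
    """JS = |A ∩ B| / |A ∪ B| for two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return (a & b).sum() / max((a | b).sum(), 1)

def dice(a, b):
    """DSC = 2 |A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * (a & b).sum() / max(a.sum() + b.sum(), 1)

gt = np.zeros((10, 10), bool); gt[2:8, 2:8] = True     # ground truth, 36 px
seg = np.zeros((10, 10), bool); seg[3:8, 2:8] = True   # result, 30 px
js, ds = jaccard(gt, seg), dice(gt, seg)
```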
To prove the effectiveness and robustness of our model in the presence of different types of noise, we apply it to a set of images in which we gradually increase the noise variance. The first row of Figure 4 presents the results for images with different variances of Gaussian noise, while the second row shows the results for images with different variances of speckle noise. The segmentation results validate the performance and efficiency of our model against different kinds of noise, even when the noise is strong. In Figure 5, we present the quantitative evaluation of the performance of our model using the JS index: for the images with speckle noise, we obtain high values ranging from 0.955 to 0.960, and from 0.93 to 0.96 for those with Gaussian white noise, which demonstrates the robustness and efficiency of our model.
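Test images of this kind can be reproduced in spirit with NumPy: additive white Gaussian noise of a chosen variance, and multiplicative speckle noise of the usual form I·(1 + n). The image and parameter values below are our own illustration, not the exact images of the experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                    # simple test object

def add_gaussian(img, var):
    """Additive white Gaussian noise with the given variance."""
    return img + rng.normal(0.0, np.sqrt(var), img.shape)

def add_speckle(img, var):
    """Multiplicative (speckle) noise: I * (1 + n), n ~ N(0, var)."""
    return img * (1.0 + rng.normal(0.0, np.sqrt(var), img.shape))

noisy_g = add_gaussian(clean, 0.1)
noisy_s = add_speckle(clean, 0.1)
```

Because speckle noise is multiplicative, it leaves the zero-valued background untouched, which is what makes it a different challenge from additive Gaussian noise.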

Robustness to level set initialisations
Using the same metric (JS), we assess the precision of the segmentation and quantitatively evaluate the performance of our model with different initial contours. Figure 6 shows the results of applying our model to a nucleus fluorescence micrograph using various initial contours, covering the four possible cases: in the first case, the initial contour completely surrounds the objects; in the second, it lies inside the objects; in the third, it crosses the objects and the background; and in the last, it lies in the background, far from the objects. We can clearly see that the object boundaries are accurately segmented and that the results are almost identical despite the different initialisations. These results are confirmed by the quantitative evaluation presented in Figure 7, which proves the robustness of our model to different level set initialisations, with fewer iterations and higher computational speed (see Table 1).

Image segmentation in the presence of intensity inhomogeneity
We present in Figure 8 the segmentation results of our model for synthetic images corrupted by intensity inhomogeneity, with the parameters set as follows: m = 1, ε = 1, and σ = 5 for the image in the first row; m = 1, ε = 3, and σ = 3 for the second row; m = 1, ε = 1, and σ = 3 for the third row; and m = 1, ε = 3, and σ = 3 for the last row. Our model accurately extracts the object boundaries in these images: in just seven iterations taking only 0.073 s for the first image, five iterations taking 0.020 s for the second, nine iterations taking 0.037 s for the third, and 11 iterations taking 0.029 s for the fourth. These results demonstrate the robustness and the high speed of our model in the presence of intensity inhomogeneities.

Comparison of our model with L0MOS and L0RDLSM model
In Figure 9, we show the segmentation results of our model compared to those of L0MOS [24] and L0RDLSM [21]. For this purpose, we used four synthetic images: the original image in the first column, together with three noisy images created from it by adding different types of noise (speckle: mean zero, variance 0.2; Gaussian: mean zero, variance 0.3; salt and pepper: 0.2), successively. As Figure 9 shows, the L0MOS model only succeeded in segmenting the first image and failed on the others. In contrast, our model and the L0RDLSM model produced almost equally effective results. However, the results of our model are clearly smoother than those of the L0RDLSM model, and our model is much faster. This shows the performance and the advantages of our model.

Comparison of the segmentation results of our model with the SBGFRLS, LGD, and C-V models
For this comparison, we use the C-V and LGD models. Figure 10 shows the segmentation results for the following images: a synthetic image corrupted by intensity inhomogeneity, a fluorescence microscope image of a nucleus, and an X-ray image of bones. Figure 10 presents the initial contours in blue and, in red, the segmentation results of the C-V model, the LGD model, and our model, respectively. Our model accurately extracts the boundary of the object in the first image, whereas the C-V model misses the object boundaries. For the second image, we note that the C-V model does not succeed in obtaining good results, while the proposed model and the LGD model give precise results. The results for the third image show the great effectiveness of our model compared with the C-V and LGD models.
Then again, our model outperforms the other models in terms of computational time and number of iterations for each image. Table 2 reports these measurements for our model, the LGD model and the traditional C-V model, especially for the second image. As presented in Figure 11, this experiment therefore shows the great performance of our model over the C-V and LGD models in terms of accuracy, computational time and number of iterations.
To quantitatively evaluate segmentation accuracy, our model is compared with LGD and SBGFRLS models in segmenting several noisy images. Figure 11 shows two images among them. For all the images, it can be seen that the proposed model and the other models have obtained satisfactory segmentation results.
The segmentation results are evaluated by JS, DSC (the Dice metric) and MSE (mean square error), which are presented in Table 2 together with the time consumed by each method. Figure 12 shows the bar plot of the JS values for the three models: our model, the LGD model, and the SBGFRLS model. We can conclude from the Jaccard index bar chart (Figure 12) and Table 3 that our model achieves the highest average JS and DSC values with the lowest mean square error in the shortest time among the three methods. This clearly illustrates that our model is very fast and robust to noise, and achieves efficient and precise segmentation results.

CONCLUSION
Using Legendre polynomials for region intensity approximation, we formulated a robust image segmentation method in the level set active contour framework, which demonstrates excellent robustness and accuracy in image segmentation, especially for images with an inhomogeneous intensity distribution and images corrupted by heavy noise.

FIGURE 13 Quantitative evaluation of our model and the SBGFRLS and LGD models using the Jaccard similarity index.
In the numerical implementation, to overcome the traditional problem of re-initialisation, the level set function has been regularised by convolving it with a Gaussian filter, which greatly minimises the time consumed during the evolution process.
The performance of our model has been demonstrated by quantitative evaluations using real and synthetic images, as well as by comparisons with previous work such as the CV, SBGFRLS, LGD, L0MOS, and L0RDLSM models. Experimental results on different kinds of images (i.e. real images and medical images) show that our proposed model is very fast, precise and more robust than other popular models, all in very few iterations.