A new blind image conversion complexity metric for intelligent CMOS image sensors

Many algorithms have been developed for complementary metal–oxide–semiconductor (CMOS) image sensors to speed up the analogue-to-digital (A-to-D) conversion of captured images. However, no objective blind-image quality metric is available to compare and quantify the quality and effectiveness of these speed-up algorithms. In this work, we developed a blind-image quality and complexity metric for this purpose. The proposed metric relies on counting the successive zeros in a code histogram and is called the conversion complexity metric (CCM). The CCM is designed to quantify how complex a captured image is, and to predict how time and power consuming its A-to-D conversion will be, mainly for the integrating (ramp) type A-to-D converters used in column-parallel architectures of a CMOS image sensor (CIS). The proposed metric, CCM, is tested for linearity, monotonicity, and sensitivity to many types of introduced distortion. The CCM is compared with other no-reference and full-reference image quality and complexity metrics. For brightness change distortion, it achieved 99% linearity and 316% sensitivity, providing a computationally efficient blind-image quality metric, which no other metric provides, for CIS to intelligently adjust and optimise on-chip analogue and digital signal processing.


INTRODUCTION
Today, a huge number of people carry mobile phones that have built-in cameras. Millions of images are captured every minute by these cameras, and this number is increasing yearly [1]. Hence, the image sensor and camera markets are huge and very competitive. Cameras are expected to be able to capture high-quality and high-resolution images while intelligently minimising power consumption, especially in battery-powered mobile devices and in future Internet of Things devices. Besides, cameras should be able to respond and dynamically adapt to different scene conditions, providing wide-dynamic-range operation capability. Nowadays, CMOS image sensors (CISs) are at the core of these cameras, and their performance determines the quality and value of the camera. The CIS captures images in the analogue domain in the form of voltage, current, or charge and performs some analogue signal processing (ASP) before converting them into the digital domain by on-chip analogue-to-digital converters (ADCs) for further digital image processing. All on-chip analogue or digital signal processing and conversion operations have to be optimised for low-power and high-speed performance. Many algorithms have been proposed to speed up these operations [2][3][4][5]. However, there is no objective image quality (IQ) metric to compare the effectiveness or quality of these algorithms.

This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. © 2020 The Authors. IET Image Processing published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.
The IQ metric should not only consider the complexity of the images captured and the difficulty of analogue and digital processing and conversion, but also provide intelligent information to processing and conversion algorithms to optimise conversion quality, speed, and power consumption of the image sensor in both analogue and digital domains. It should also be computationally simple and fast to be implemented and should not require large analogue or digital storage capacity (i.e. two frames digital memory).
Image capture, conversion, and processing techniques introduce undesirable effects into digital images, causing a loss of quality and of important information. These effects may occur during any stage of the image reproduction process, such as image capture, conversion, processing, compression, transmission, storage, or retrieval. Hence, assessing image properties (mainly IQ) becomes paramount for judging the effects of the aforementioned processes. Fundamentally, there are two kinds of methods [6] available to evaluate and compare IQ: subjective methods [7,8] and objective methods [9,10]. Subjective methods are built on human perception, quantifying image properties on a scale based on a human observer's judgment, while objective methods are based on explicit numerical and mathematical calculations of image parameters. Both IQ assessment methods try to correlate the index perceived by the human visual system (HVS) with an index obtained from mathematical calculations. To date, over 100 metrics have been developed across both methods to assess IQ, as summarised by Pedersen and Hardeberg [11] and Marius [12]. The common purpose of these metrics is to quantify any distortion that may occur, monitor IQ, optimise processes, produce benchmarks, and identify problem areas [13]. Some IQ metrics are designed for colour images while others are for greyscale images. There are many ways to classify these metrics based on the way and purpose for which they have been developed.
Researchers have classified IQ metrics in different ways [13]. Avcibas et al. [8] classified IQ metrics into six groups according to the information that an image carries: (1) pixel difference-based distortion measures like mean square error (MSE); (2) correlation-based measures; (3) edge-based measures such as edge position displacement and its consistency across different resolution levels; (4) spectral distance-based measures; (5) context-based measures; and (6) HVS-based measures, which depend on the (dis)similarity criteria used in image-based browsing functions. Le Callet and Barba [14] categorised IQ metrics into two different groups. The first group comprises IQ metrics that utilise the HVS model as low-level perception, such as the masking effect and sub-band decomposition, to compute a distortion map. The second group uses little information from the HVS model for the representation of errors and forms prior knowledge of the introduced distortions. Chandler and Hemami [15] classified IQ metrics into three categories: first, metrics that can be obtained mathematically, that is, depending on distortion intensity; second, near-threshold psychophysics-based metrics in which the visual detectability of distortion is considered; and third, overarching-principles-based metrics that extract information or structure. Wang and Bovik [16] divided IQ metrics into three groups. The first grouping is based on the use of a reference image, dividing metrics into no-reference, full-reference, and reduced-reference IQ metrics. Furthermore, Thung and Raveendran [17] sub-divided full-reference IQ metrics into three groups: mathematical, HVS-based, and other. The second grouping is based on the coverage of the IQ metric, as application-specific or general-purpose. The third grouping is based on how the IQ metric is structured, either bottom-up or top-down. These three groupings and their IQ metrics use the original image, a distortion process, and HVS knowledge.
Pedersen and Hardeberg [13] also proposed dividing IQ metrics into four groups. The first group consists of mathematically based metrics that use distortion intensity, such as MSE and peak signal-to-noise ratio. The second group comprises low-level metrics that depend on distortion recognition using contrast sensitivity functions, as used in spatial-CIELAB [18]. The third group is composed of high-level metrics such as the structural similarity metric (SSIM) [19], which depends on structural content, or the visual image fidelity [20]; this group of metrics depends on the statistical properties of the captured scenes. The fourth group consists of other metrics that mix two or more strategies of the above groups, such as the visual signal-to-noise ratio [15], which considers both low- and mid-level scene properties and utilises mathematical models to obtain the IQ metric score in the final stage.
In summary, IQ metrics are classified in the literature based on the answers to the following fundamental questions: Does the given metric require a single image (no-reference) or multiple images (full- or reduced-reference) to compare with? Is the metric objective or subjective? If it is subjective, does the metric target a low or high perception level? If it is objective, what image information (pixel, row, edge, window etc.) does the metric use? Choosing an IQ metric would be a simple task if all these questions were answered fully. However, each application has its unique requirements, and no existing IQ metric may fulfil them all.
For intelligent CIS operation, the metric should be reference independent (single-image), objective, and able to use image information row-by-row. The metric index should also be bounded (i.e. between 0 and 1) and computationally simple, so that it can be implemented in on-chip intelligent operations and learning to adjust the analogue and digital characteristics of CIS electronics.

IQ METRICS FOR CIS ELECTRONICS
The classifications mentioned in the previous section are based on application areas and on points of view about the importance of specific requirements. Among all the IQ metrics mentioned in [13], except for the several investigated in the following sub-sections, none can be used for quantifying the performance requirements of the image processing electronics that convert an image from the analogue to the digital domain. The IQ metrics MSE [21], SSIM [19], histogram flatness measure (HFM) [22], histogram spread (HS) [22], blind/referenceless image spatial quality evaluator (BRISQUE) [23], natural image quality evaluator (NIQE) [24], and perception-based image quality evaluator (PIQUE) [25] are checked below for their suitability for the intelligent operation and evaluation of CIS.

Mean square error
MSE is one of the oldest and most widely used IQ metrics [16] because of its computational simplicity, ease of analytical tractability, and clear physical meaning; it is also mathematically convenient in the context of optimisation [19]. Its index is calculated by summing and averaging the squared differences between the original and processed image element values [13]. Thus, MSE is an objective IQ metric that needs two images (reference and processed) and uses pixel information for its calculations. However, MSE has the drawback that it does not correlate with images as perceived by the HVS [15,16,26,27]. Indeed, some images that have the same MSE are perceived very differently from each other, as outlined in [16] and [19]. Furthermore, MSE does not satisfy the aforementioned metric requirements for intelligent CIS, as it needs two images to quantify IQ and its index is unbounded and frame based.
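As a concrete illustration, the pixel-wise MSE calculation described above can be sketched in a few lines (a minimal pure-Python sketch; the function name and example values are ours):

```python
def mse(original, processed):
    """Mean square error: average of squared pixel differences between two
    equally sized greyscale images (reference and processed)."""
    flat_a = [p for row in original for p in row]
    flat_b = [p for row in processed for p in row]
    assert len(flat_a) == len(flat_b), "images must have the same dimensions"
    return sum((a - b) ** 2 for a, b in zip(flat_a, flat_b)) / len(flat_a)

# Two tiny 2x2 "images": the squared differences are 4, 4, 0, 16.
print(mse([[10, 20], [30, 40]], [[12, 18], [30, 44]]))  # 6.0
```

Note that the result grows without limit as the distortion grows, illustrating why the MSE index is unbounded.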

Structural similarity metric
SSIM is one of the most popular IQ metrics, developed in response to the drawbacks of MSE [19]. It is built on the universal IQ index [28] and takes the HVS model into account to overcome the issues MSE possesses [6]. It quantifies IQ using a combination of luminance, contrast, and structure comparisons between the original and processed images [29]. These comparisons are performed over local image windows, and the SSIM metric is then calculated as the mean over all these local windows. The SSIM index is bounded between 0 and 1; it is a symmetric metric and has a unique maximum. However, the SSIM index cannot be used for intelligent CIS, because it needs a reference image and uses windows of pixels for its calculations. It is more suitable for human perception assessment of still image frames than for local assessments (i.e. row-based) to assist intelligent image capture and digitisation processes in CIS.

Histogram flatness measure and histogram spread
IQ metrics that use histogram distributions, such as HFM and HS [22], have the potential to be used in the intelligent CIS. They quantify the contrast level of images using full-frame image histograms, which makes them reference-independent, or single-image, metrics. The HFM index is calculated as the ratio between the geometric mean and the arithmetic mean of the image histogram intensities. Hence, its index is bounded between 0 and 1, because the geometric mean is always less than or equal to the arithmetic mean of the same data set. As a result, HFM has a low index value for low-contrast images, which have a narrow and peaky histogram, and vice versa. HS is also a single-image metric that uses the histogram of the full image frame. Its index is calculated as the ratio of the quartile distance (the difference between the third quartile and the first quartile of the cumulative histogram) to the full range of the image histogram. Thus, the HS index is bounded between 0 and 0.5 for uniformly distributed images and between 0 and 1.0 for binary images. The HS index is low for narrow and peaky image histograms, which belong to low-contrast images, and vice versa. Although the HFM and the HS are reference-independent and objective metrics that use image histogram information, they are not fully suitable for the intelligent CIS, as they depend on whole-frame histograms and are computationally expensive.
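The two histogram statistics above can be sketched as follows (a minimal sketch under our reading of [22]; the bin conventions and quartile search are our assumptions):

```python
import math

def hfm(hist):
    """Histogram flatness measure: geometric mean / arithmetic mean of the
    histogram intensities. Any empty bin drives the geometric mean to zero."""
    if not all(h > 0 for h in hist):
        geometric = 0.0
    else:
        geometric = math.exp(sum(math.log(h) for h in hist) / len(hist))
    arithmetic = sum(hist) / len(hist)
    return geometric / arithmetic

def hs(hist):
    """Histogram spread: quartile distance of the cumulative histogram
    divided by the full code range of the histogram."""
    total = sum(hist)
    cum, q1, q3 = 0, None, None
    for code, hits in enumerate(hist):
        cum += hits
        if q1 is None and cum >= 0.25 * total:
            q1 = code
        if q3 is None and cum >= 0.75 * total:
            q3 = code
    return (q3 - q1) / (len(hist) - 1)

flat  = [1] * 16                      # well-distributed codes
peaky = [0] * 7 + [16] + [0] * 8      # all pixels share one code
print(hfm(flat), hfm(peaky))   # 1.0 0.0
print(hs(peaky))               # 0.0 (narrow, peaky histogram -> low contrast)
```

As the text notes, both indices fall towards 0 for narrow, peaky (low-contrast) histograms and rise for well-distributed ones.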

Blind/referenceless image spatial quality evaluator
Mittal et al. proposed the BRISQUE metric [23] for the blind IQ assessment of natural scenes based on statistics and training models. It is designed to quantify the lack of naturalness of an image due to any distortion that may be present in it. BRISQUE generates an index value via a model called support vector regression, which is trained on an image database with differential mean opinion score values. Images in this database have well-known distortions for natural images; hence, the BRISQUE index is limited to evaluating IQ for the same types of distortion only, and it cannot compute specific distortion features such as blurring and blocking. Although BRISQUE is a single-image metric, it is unbounded, and the computational complexity of its index makes it unsuitable for CIS intelligent electronics.

Natural image quality evaluator
NIQE was proposed by Mittal et al. [24] to solve the issues surrounding BRISQUE. It uses a model that is not trained on human-rated distorted images, nor even exposed to any distorted image; thus, it is considered a completely blind metric. The NIQE model uses measurable deviations from natural image regularities. The model is built on quality-aware feature collections derived from the statistical features of a simple and successful space-domain natural scene statistic (NSS) model, based on undistorted natural images. NIQE fits these quality-aware features to a multivariate Gaussian (MVG) model and measures the distance between this MVG model and the NSS features extracted from the test image. This measured distance has no limit, so NIQE is an unbounded metric. Despite NIQE being a no-reference metric, it is not applicable to CIS intelligent electronics because it is unbounded and derived from a training model that does not consider how images will be processed through CIS electronics.

Perception-based image quality evaluator
Venkatanath et al. proposed PIQUE [25] to quantify the distortion of an image without the need for any training data, making it an opinion-unaware metric. PIQUE predicts IQ by extracting local image features that help create a fine-grained block-level distortion map. This approach does not need any statistical learning data for IQ assessment; it is based instead on local block/patch-level characteristics of the test image. Each local block has a size of n × n. The PIQUE methodology classifies each given block as distorted or undistorted and assigns a score to each block, from which the overall PIQUE score is then calculated. PIQUE is an unbounded metric designed for specific applications such as feature point extraction, object detection, and compression, not for CIS intelligent electronics.
In summary, none of the existing IQ metrics is suitable for assessing captured IQ or complexity to assist the intelligent operation of CIS. We propose a new metric to close this gap: a histogram-based, no-reference, objective metric, called the conversion complexity metric (CCM), based on a successive zeros histogram, as explained in the next section.

CONVERSION COMPLEXITY METRIC
Since there is no clear definition of what "image complexity" is [30], quantifying it is not an easy undertaking [31]. Thus, we define image complexity from an A-to-D conversion complexity point of view, considering how difficult it is (simple/complex timing, low/high power consumption, low/high speed, small/large silicon footprint etc.) for the image sensor electronics to convert a captured image from the analogue to the digital domain. This is critical for judging and comparing different analogue and digital processing algorithms and choosing which is better for a specific application to intelligently maximise or predict the performance and efficiency of the sensor electronics. As mentioned before, the metric must be reference independent (single image), objective, bounded, and row-by-row based. We start by illustrating the electronic system requirements, then describe the philosophy of the proposed metric, followed by examples and discussion.

IQ and image complexity metric requirements
After a scene image is captured by a CIS, it passes through a long chain of electronics including ASP(s), ADC(s), and finally digital signal processor (DSP). These blocks are integrated into different orders, forms, and shapes by different CIS architectures. Four fundamental architectures can be constructed: column-parallel architecture (CPA), column-series architecture (CSA), pixel-parallel architecture (PPA), and pixel-series architecture (PSA). The captured image data in the form of voltage, current, or charge can be converted and processed in digital form in different stages and locations of these architectures. It may be processed and converted in each pixel in parallel in PPA, or out of the pixels sequentially in PSA, or row-by-row sequentially in CSA, or row-by-row in parallel in CPA [32]. Today, CPAs are widely used in CIS to satisfy the market-driven requirements of high frame rate, large pixel arrays, smaller pixels, and low power consumption. Integrating (ramp) ADC topologies are best fit for CPAs as they provide the best monotonic, low-noise, and power-efficient conversion capabilities with smaller silicon footprints [4,33,34].
Intelligent image sensor operation requires an image complexity metric that relates captured or incoming image pixel, row, or frame information to the performance parameters of the readout electronics in the sensor architecture. We propose a metric designed for the most commonly used CIS architecture, the CPA. In CPA, the image is processed row-by-row, and each column has its own or a shared ASP and ADC. The row decoder selects one row at a time, and analogue signals from this selected row of pixels are transferred to column ASPs for pre-processing in the analogue domain. Then, these signals are transferred to ADCs, and finally, the digitised pixel data is transferred to the DSP or stored in memory for further digital processing. Each column-level circuit reads the next row of pixel data in a pipelined fashion, as the selected row is reset before the next row is accessed, allowing rolling-shutter frame integration operation.
Performance efficiency of CIS readout electronics in different image sensor architectures is affected by the complexity of the scene image. For example, in CSA, if an image has well-distributed grey levels or a wide histogram (we consider this a complex image), its ASP stage consumes more power than when the image or row has a narrow code distribution or a peaky histogram (we consider this less complex). For the same scenario, ADCs in CPA work more power efficiently for non-complex images than for complex ones. The digital power consumed during the data transfer period from ADC to DSP in both architectures is lower for non-complex images. If a complexity metric is available for a row or a full-frame image, the biasing conditions in ASPs or ADCs, the regulation efficiencies of on-chip voltage regulators, and other parts of the readout chain can be tuned for optimum and uniform performance of the CIS. One such example is shown in Figure 1, where a multi-mode ramp ADC [36] is used in a CIS with CPA, and full-chip power consumption was measured from the imager in [36] at 20 frames per second for 1 min while capturing the same scene images. Mode 1 is the standard ADC mode, in which the ADC works normally without any speedup process, so it has the maximum power consumption as it works for the longest time. On the other hand, different speedup techniques can be applied to minimise the ADC conversion time, which in turn minimises the power consumption by shutting down or idling circuit blocks during the saved time. Modes 2 and 3 represent the power consumption after applying two of these speedup techniques. Mode 3 applies a higher speedup than mode 2 and has a shorter ADC period; thus, the power consumption of mode 3 is lower than that of mode 2. This speedup comes at the expense of IQ. Thus, based on the new CCM index, the speedup mode can be determined intelligently.
As the scene content and complexity change from frame to frame, the power consumption of the CIS can be reduced by up to 10%.

CCM new philosophy
Images stored in digital media consist of a two-dimensional array of rows and columns with specific digital numbers representing the intensity of the scene pixels captured by image sensors that have ADCs with a resolution of n bits. Thus, each pixel holds a digital number or code in the range between 0 and (2^n − 1) if binary representation is used. However, in a typical scene image, not all codes appear in each row or frame, as can be seen in the histogram distribution of codes shown in Figure 2. The evaluation of these code histograms (CHs) reveals the fundamental characteristics of images, rows, or columns, i.e. whether the selected row is bright, dark, or well distributed. A CH is constructed by counting the number of pixels in the selected region (row(s), column(s), sub- or full-area(s) of an image) that have the same code values, and assigning this number as the hits for the given code. For example, a dark region will have a CH in which all the hits appear in the lower code range, and the rest of the range will have no hits (zero), and vice versa for bright regions, as seen in Figure 2. If the number of pixels (N) in the selected region is less than the code range of the image (2^n), then some zero hits must appear in the CH of the selected region. Depending on the distribution of the code hits in the histogram, the number of successive zero code hits will vary. For example, if the image is completely dark, all pixels will have the binary code 0, and as a result, code zero (0) will receive N hits. The remaining (2^n − 1) codes will have zero hits, forming one run of (2^n − 1) successive zeros in the CH. The same holds for a completely saturated or bright image: the last code (2^n − 1) will have N hits, and the remaining (2^n − 1) codes will have zero hits, resulting in one run of (2^n − 1) successive zeros in the CH.
On the other hand, for the graded greyscale region, code hits are well distributed, minimising the number of successive zeros. Our proposed metric is based on these numbers of successive zeros in a CH for the given region of the image row.
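The CH and SZH construction described above can be sketched as follows (a minimal sketch; the function names are ours):

```python
def code_histogram(row, n_bits):
    """Count hits per code over the full code range 0 .. 2^n - 1."""
    ch = [0] * (2 ** n_bits)
    for code in row:
        ch[code] += 1
    return ch

def successive_zeros_histogram(ch):
    """Z[j] = number of times a run of exactly j successive zero-hit codes
    occurs in the code histogram (index 0 is unused, since j != 0)."""
    z = [0] * (len(ch) + 1)
    run = 0
    for hits in ch:
        if hits == 0:
            run += 1
        elif run:
            z[run], run = z[run] + 1, 0
    if run:                      # flush a run that reaches the last code
        z[run] += 1
    return z

# Completely dark row (all 16 pixels at code 0), n = 4: code 0 takes all the
# hits and the remaining 2^n - 1 = 15 codes form one run of 15 successive zeros.
dark = [0] * 16
z = successive_zeros_histogram(code_histogram(dark, 4))
print(z[15])  # 1
```

The graded greyscale case follows directly: the more evenly the codes are spread, the shorter and rarer the zero runs become.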
As the number of successive zeros in the CH of a row increases, the analogue signals become easier and faster to convert into the digital domain by row-level ADCs in CIS CPA. That is because, if a code histogram has a large number of successive zeros, the ADC can skip scanning all the codes that do not appear in the CH and save the conversion time for these absent codes, resulting in faster operation and lower power consumption [36]. The opposite is true if few or no zeros appear in the CH, which means that the pixel values are well distributed in the row, and the ADCs will consume more power to convert the analogue content of the row into the digital domain. Indeed, a metric based on the successive zeros in CHs provides valuable information not only for the electronics but also for assessing the complexity of regions of the image (row, column, area, or the full image). Based on this understanding, we call our metric the CCM, because it relies on the successive zeros in the CHs of a given region of the image.
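The time and power argument above can be made concrete with a toy cycle-count model (our simplification, not the measured model of [36]): a conventional single-slope ramp ADC dwells one counter clock on every code in the range, while a skip-capable ramp dwells only on codes that actually occur in the row's CH:

```python
def ramp_conversion_cycles(ch, skip_absent_codes=False):
    """Toy cost model: a plain ramp sweep costs one clock per code in the
    full range; with code skipping, only codes with at least one hit cost
    a clock (skip overhead is ignored in this simplification)."""
    if not skip_absent_codes:
        return len(ch)
    return sum(1 for hits in ch if hits > 0)

# 8-bit dark row: all 64 pixels at code 0.
dark_ch = [64] + [0] * 255
print(ramp_conversion_cycles(dark_ch))                           # 256
print(ramp_conversion_cycles(dark_ch, skip_absent_codes=True))   # 1

# Well-distributed 8-bit row: every code present once -> nothing to skip.
flat_ch = [1] * 256
print(ramp_conversion_cycles(flat_ch, skip_absent_codes=True))   # 256
```

The saved cycles are exactly the zero-hit codes, which is why long runs of successive zeros translate into faster, lower-power conversion.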
Because CCM is a histogram-based metric, it does not need any reference image and uses simple mathematical operations to count the number of successive zeros in the CH and calculate the average image conversion complexity. Although it was developed for assessing the complexity of captured image rows in CMOS CPA imagers, it can also be used as a metric to quantify the quality or complexity of any region [i.e. sub-image area, full image, row(s), or column(s)] of a single image. Its index does not depend on human evaluation and its uncertainty. As presented in the next sections, the CCM index is bounded between 0 and 1 and is inversely proportional to the image conversion complexity, i.e. a row with a uniformly distributed CH has a very small CCM index (difficult to convert), ideally zero, while very dark or very bright rows (easy to convert) have a CCM index near unity.
The CCM index of an image row is calculated as follows.
1. Start reading rows from the image or pixel array (i = 1).
2. Construct the code histogram (CH) of the selected row.
3. Construct the successive zeros histogram (SZH) from the CH.
4. Calculate the row index as

CCM_row(i) = Σ_{j, Z_j ≠ 0} j / (2^n · Z_j), (1)

where j is the index of the SZH representing the number of successive zeros (j ≠ 0), Z_j represents the number of repetitions of runs of j successive zeros, n is the ADC resolution, and n_row is the total number of rows of an image. CCM_min and CCM_max are the minimum and maximum theoretical limits of the CCM index, respectively; they are derived in the following sub-section.
Figure 3 illustrates a simple example of how the CCM works and how its index is calculated. Assume a row of an image (row(i)) that has 16 columns (n_col = 16) with 4-bit resolution (n = 4); thus, the code range is between 0 and 15. Figure 3(a) shows three rows of such data, whose values lie in the code range 0 to (2^n − 1). Figure 3(b) shows the CH and SZH of each row. For example, in row(i), 6 columns (or pixels) have a binary code of 2, 4 pixels have a code of 7, 3 pixels have a code of 4, 2 pixels have a code of 0, and 1 pixel has a code of 14. Except for these five codes, no other code appears in row(i). The CH for row(i) can be constructed from these values, as shown in Figure 3(b). Using the CH, the SZH for row(i) is constructed by counting the runs of successive zeros (j) in the CH. For example, a run of six successive zeros exists once between codes 7 and 14, a run of two successive zeros appears once between codes 4 and 7, and a single zero exists three times in the CH of row(i). Thus, Z_j becomes {3,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0} for j between 1 and 16. Then, the CCM_row(i) index for row(i) is calculated from Equation (1). Finally, given the data for the rest of the image rows, the average over all rows (CCM_image) is calculated according to Equation (3) for the entire image.
The CCM_row index of 0.52 for row(i) indicates that this row has a moderate conversion complexity, as it contains one relatively long run of successive zeros and a small number of single and double runs. A row becomes complex for the processing and conversion electronics if all grey-level codes appear in its CH, as in row(i+1) in Figure 3, which gives a CCM index value of 0.125, while it becomes less complex if all pixels have the same value, as in row(i+2) in Figure 3, which increases the number of successive zeros, resulting in a larger CCM index value of 0.875.
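Putting the steps together, the row index can be sketched end-to-end. Here we assume Equation (1) takes the form CCM_row(i) = Σ_j j / (2^n · Z_j), summed over the run lengths j present in the SZH, a form consistent with the worked value of 0.52 for row(i) and with the theoretical limits derived in the next sub-section:

```python
def ccm_row(row, n_bits):
    """Un-mapped CCM index of one row (assumed form of Equation (1))."""
    full_range = 2 ** n_bits
    ch = [0] * full_range                 # code histogram
    for code in row:
        ch[code] += 1
    z = {}                                # successive zeros histogram
    run = 0
    for hits in ch + [1]:                 # sentinel flushes a trailing run
        if hits == 0:
            run += 1
        else:
            if run:
                z[run] = z.get(run, 0) + 1
            run = 0
    return sum(j / (full_range * z_j) for j, z_j in z.items())

# row(i) from Figure 3: 2 pixels at code 0, 6 at 2, 3 at 4, 4 at 7, 1 at 14.
row_i = [0] * 2 + [2] * 6 + [4] * 3 + [7] * 4 + [14]
print(round(ccm_row(row_i, 4), 2))  # 0.52
```

A completely dark row of the same width gives 15/16 = 0.9375, the n = 4 upper limit derived in the next sub-section.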
The simple example in Figure 3 shows how the proposed algorithm calculates the CCM index value and how to apply it to given image data. However, the theoretical limits of the proposed metric need to be well understood, derived, and calculated to make sure that its value is bounded. Besides, different combinations of the number of columns (n_col) and the bit resolution (n) of the images or ADCs should be considered to create a more generic IQ and complexity metric. This is explained in the next sub-section and results in a mapping function.

Theoretical limits of the CCM index
The CH distribution of natural images is fairly random: some are dark-themed images, while others are brighter or well distributed. Because of this variability, theoretical limits should be established and a mapping function defined to keep the index values bounded. Moreover, to generalise the CCM as an IQ and complexity metric, the relationship between the number of image elements and the bit resolution (n) of the ADCs in image sensors should be considered. The image elements could be selected as a sub-array of pixels or, in our case, the number of columns (n_col) of image pixels of the selected row sampled in the column electronics of CPA CISs. If the number of columns (n_col) (or the number of pixels in the full or sub-array of an image) is less than the code range (2^n), not all codes will appear in the CHs. As a result, at least (2^n − n_col) extra zeros will appear in the row CH. Using these extra zeros during the calculation of the CCM index may result in saturation; nevertheless, they are considered and counted because, fortunately, these extra zeros help the ADCs convert the signals faster. We investigate the CCM upper and lower limits for this case first.
The upper theoretical limit of the CCM index (ideally 1) for (n_col < 2^n) occurs when all column data in the selected row are zero (i.e. a totally dark row of an image). In this case, only one code appears in the CH, at code index 0. The number of occurrences or hits of code 0 equals the number of columns (n_col), and the rest of the histogram indices have zero hits, as shown in Figure 4(a). The number of successive zeros (j) in this case equals (2^n − 1) and appears one time (Z_j = 1), as illustrated in the SZH in Figure 4(b). So, the maximum CCM index value can be calculated using Equation (1) as

CCM_max = (2^n − 1) / 2^n.

The lower theoretical limit of the CCM (ideally 0) can be calculated for (n_col < 2^n) when the CH is well distributed, as in Figure 4(c). In this case, each code appears at most once in the CH. However, because n_col is less than the code range (2^n), there will be extra zeros between CH indices. In the worst case, these extra successive zeros (j) can be calculated as

j = 2^n / n_col − 1, (6)

and this run length (j) is repeated n_col times, so Z_j = n_col. The SZH is obtained from the row CH, as illustrated in Figure 4(d), and the theoretical lower limit of the CCM index is calculated using Equation (1) as

CCM_min = (2^n − n_col) / (2^n · n_col²).

There is another case that needs to be investigated, that is, when the number of columns (n_col) is larger than or equal to the code range (2^n) for an n-bit ADC. The upper theoretical CCM limit when (n_col ≥ 2^n) is the same as for (n_col < 2^n), as explained before and as shown in Figure 4(a,b). For the lower theoretical limit when (n_col ≥ 2^n), in the worst case, all codes in the CH have at least one or a repeated number of hits (R = n_col / 2^n), as shown in Figure 4(e). Thus, no successive zeros appear in the CH (j = 0), and as a result, the successive zeros histogram has no hits, as shown in Figure 4(f). In this case, the lower theoretical limit CCM_min is 0.
Table 1 summarises the upper and lower theoretical limits of the CCM index values for all cases of n_col and the code range (2^n) of the image or ADC. The maximum CCM value depends on the ADC resolution (n) only and is independent of n_col, while the minimum CCM value depends on both the ADC resolution and n_col. Besides, the minimum and maximum CCM values are independent of n_col when (n_col ≥ 2^n). These maximum and minimum theoretical limits correspond to ideal CCM "values" of 1 and 0, respectively. As stated before, a higher CCM index means that the image is less complex, and a lower CCM index means a more complex image.

CCM mapping function for bounded index values
All possible combinations of n_col between 2^1 and 2^16 and resolution (n) between 1 and 16 were tested to find the minimum and maximum ranges of the CCM and verify the theoretical limits. It was found that the upper theoretical CCM limit (which ideally should equal 1) changes between 0.5 and 0.999984 as the resolution varies from 1 to 16 bits; it is independent of n_col, as expected. The lower theoretical limit (which ideally should equal 0) varies between 0.125 (for n = n_col = 2) and 4.65 × 10^−10 (for n = 16, n_col = 32,768) if n_col < 2^n, and is always zero if n_col ≥ 2^n, as expected. Indeed, these checks show that a correction/mapping function is needed to map the calculated CCM index values to the theoretical limits and bound the metric index between 0 and 1.
Using CCM_max and CCM_min from Table 1 and the linear mapping function in Equation (3), the un-mapped CCM index in Equation (2) can be mapped, as illustrated in Equation (8).
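Equations (3) and (8) are not reproduced in this excerpt, but a linear map that sends CCM_min to 0 and CCM_max to 1 takes the standard min–max form; the following is a sketch under that assumption, not the paper's exact Equation (8):

```python
def map_ccm(ccm_raw, ccm_min, ccm_max):
    """Linearly rescale the un-mapped CCM index so the theoretical lower
    limit maps to 0 and the theoretical upper limit maps to 1
    (assumed min-max form of the Equation (3) mapping)."""
    return (ccm_raw - ccm_min) / (ccm_max - ccm_min)
```

With the 16-bit limits quoted above (CCM_min ≈ 0, CCM_max = 0.999984), a raw index at either limit maps to exactly 0 or 1.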

EVALUATION OF THE CCM
The first and most important feature to check and evaluate is the theoretical limits, making sure that the CCM index follows these limits as designed. The derived mapping function in Equation (8) also needs to be verified. After that, the CCM index will be calculated for the standard images and for other images with different resolutions and dimensions, and it will be compared with the available blind and objective metrics using the same images. Finally, the linearity, sensitivity, and monotonicity of the CCM will be verified by changing an image parameter and comparing with other metrics, to check whether it is a valid metric for CIS.

Evaluation of theoretical limits of CCM
Two types of images were synthesised to test the theoretical upper and lower limits of the CCM for the cases n_col < 2^n and n_col ≥ 2^n, as shown in Figure 5. When n_col < 2^n, the upper limit is hit when all columns hold zero or the maximum ADC code, so the upper-limit image for the first case is synthesised as a whole black or white image. Figure 5(a) shows an example of the synthesised test image for the lower theoretical limit when n_col < 2^n (n = 8 and n_col = 128). In this synthesised image, only 128 codes out of the 256 total codes appear, stepped gradually across the available 128 columns; the remaining codes do not appear in the image, so they form the zeros between codes, the (j) value in the row histogram of Figure 4(c). Note that although the image in Figure 5(a) looks very simple and uniform, it is very difficult for a CPA ADC to convert this well-distributed image, because the converter must scan all codes from zero to the maximum code, consuming more power and time.
For the second case, when n_col ≥ 2^n, the upper limit remains the same as in the first case: all pixels hold zero or the maximum ADC code, and the synthesised images are whole black or white images, respectively. For the lower limit, we synthesised a test image such that all available codes appear across the columns; any extra columns may hold zero or any random value, because the CCM index does not depend on the repetition of codes, only on the number of successive zeros between codes in the CH. Synthesising a test image this way fills the row histograms with codes, leaving no zeros between codes to count, which is the most difficult case for the ADC and hits the theoretical lower limit (see Figure 5). The synthesised images are created from two main parameters: the number of columns and the bit depth or, in other words, the resolution. To study the upper and lower limits of the CCM and how general they are, we scanned the ADC resolution from 1 to 16 bits and the number of columns from 2 to 65,536 simultaneously. The mapped outputs for all these combinations were exactly as expected: the CCM is 0 for the lower theoretical limit and 1 for the upper limit in all cases. Note that the CCM index is inversely proportional to image conversion complexity. Hence, the CCM is bounded between 0 and 1 and never exceeds these theoretical limits. Moreover, the mapping function is verified and works correctly.
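The two families of limit images described above can be generated as follows. This is a minimal sketch (the function names are ours, and it assumes row-major greyscale arrays with n_col and 2^n both powers of two, as in the sweep above), not the authors' generator:

```python
import numpy as np

def upper_limit_image(n_rows, n_col, n_bits, white=False):
    """Whole-black (or whole-white) frame: each row produces a single CH
    code, giving one run of 2**n_bits - 1 successive zeros (upper limit)."""
    value = 2 ** n_bits - 1 if white else 0
    return np.full((n_rows, n_col), value, dtype=np.int64)

def lower_limit_image(n_rows, n_col, n_bits):
    """Worst-case (well-distributed) frame for a ramp ADC.
    For n_col < 2**n_bits the columns step through n_col evenly spaced
    codes; for n_col >= 2**n_bits every code appears at least once,
    leaving no successive zeros to skip."""
    codes = 2 ** n_bits
    if n_col < codes:
        row = np.arange(n_col) * (codes // n_col)
    else:
        row = np.arange(n_col) % codes
    return np.tile(row, (n_rows, 1))
```

For n = 8 and n_col = 128 the lower-limit rows step through the codes 0, 2, 4, …, 254, matching the gradient image of Figure 5(a).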

The CCM index for the standard images
The CCM metric is measured for different groups of images with different sizes and resolutions. Standard images like Lena, Barbara, Baboon, and Peppers (shown in Figure 6) formed the first group tested. This group was 512 × 512 pixels in size with an 8-bit resolution. Because no IQ metric has the same purpose as CCM, HFM and HS were selected to evaluate these images too; they are the closest metrics, since they use histograms to calculate the contrast of an image and they are blind metrics. So, any change in these images' histograms should appear in these metrics' indices. MSE and SSIM are not used in this test as they need a reference image to compare with; they will be used in the next test as references for comparing images and comparing metrics. Table 2 summarises the simulated metric values for the first group of standard images. The Lena image is the most complex image for the ADC to convert, while the Baboon is the easiest image in this group from CCM's point of view. All images in this group have a CCM index below 0.5, which means that they all have a high conversion complexity; this makes sense given their well-distributed histogram shapes, which make the conversion of these images hard and complex. The standard deviations across all images are calculated for CCM, HS, and HFM to check how these metrics deviate with respect to each other for different images. It is found that CCM and HS have almost the same standard deviation, while HFM has a different value, which is expected. The creators of HS and HFM [22] concluded that HS is more meaningful than HFM for contrast detection as it takes into consideration both histogram counts and histogram bins. Because HS and CCM have the same standard deviation over the same group of images using the same kind of image information (the CH), the CCM index can be trusted as it follows the same context as HS. So, CCM can track changes in an image histogram correctly.
The second group selected for the test consisted of 1920 × 1200 pixel, 16-bit resolution images, as illustrated in Figure 7. CCM, HS, and HFM are calculated for these images as well; the results are summarised in Table 3. Note that images (d) and (e) have lower conversion complexity (a higher CCM index) for the ADC as they are mostly bright or dark images, respectively. On the other hand, images (a) and (b) look very well distributed; as a result, they are more complex to convert and have lower CCM indices. For image (a), the CH is very well distributed and most of the codes appear in it. This image is therefore very complex for a CPA ADC to convert, as the converter must count through all codes from 0 to 65,535 for every row, making the conversion of this image consume more power and time than for images like (d) or (e). The latter images have somewhat narrow CHs in which not all codes appear, so runs of successive zeros exist instead; the CPA ADC can jump over these empty (successive-zero) codes to save conversion power and time, which supports the theory that successive zeros make an image easier to convert.

Monotonicity, sensitivity, and linearity evaluation
After calculating some IQ metrics for different images, it is important to recalculate these metrics for an image after adding several kinds of distortion and to check whether the metrics respond to the added distortion, and whether they respond monotonically. Not only monotonicity needs to be checked; the sensitivity of a metric is important too. Sensitivity measures how much an IQ metric changes in response to a given amount of linearly introduced distortion. In addition to sensitivity, linearity should be verified, to check how linearly the IQ metric tracks the linear change in distortion.
Several image distortion types available in MATLAB [35] were used to distort an image, namely, a blurring filter, Gaussian white noise, salt-and-pepper noise, and multiplicative noise. In addition, brightness change, an effect especially relevant to our new metric, was added to these distortions.
All of these distortion types are controlled by a specific parameter. For example, the variance parameter controls the blurring filter and the multiplicative noise, while the noise density parameter controls the salt-and-pepper noise. Gaussian white noise is controlled by its variance and mean parameters; to control it with a single parameter, we fixed one parameter while changing the other linearly. For brightness change, we added a fixed number to all the image pixels and increased this number linearly. The scanned parameter is changed 64 times linearly, adding more distortion at each step. The CCM, HS, HFM, BRISQUE, NIQE, PIQUE, SSIM, and MSE are calculated for the Lena image after applying and linearly scanning the aforementioned types of distortion. All these metrics are checked for monotonicity with respect to the distortion increase; Table 4 summarises the results. It is found that CCM and MSE are monotonic for all kinds of distortion. HS, PIQUE, and SSIM are monotonic for most distortion types, while HFM, BRISQUE, and NIQE are non-monotonic for almost every distortion type.
The sensitivity of these IQ metrics is calculated as the percentage difference between the metric values corresponding to the maximum and minimum added distortion, divided by the original metric value. Table 5 summarises the sensitivity values of all metrics with respect to the added distortions. The table is divided into two groups, bounded metrics and unbounded metrics: the bounded group contains the metrics whose index lies between zero and one, while the other group has no such limit. The table is split this way to enable a fair comparison. CCM has the highest sensitivity for all types of distortion except Gaussian white noise with fixed mean and changing variance, where it comes second after SSIM. In the unbounded group, MSE is the most sensitive metric for every distortion change.
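The sensitivity figure defined above can be computed as follows; a minimal sketch, assuming `metric_sweep` holds the metric values over the distortion sweep, ordered from minimum to maximum distortion (the function name and argument layout are ours):

```python
def sensitivity_percent(metric_sweep, original_value):
    """Percentage swing of a metric between the maximum and minimum
    added distortion, relative to the metric value of the original image."""
    swing = abs(metric_sweep[-1] - metric_sweep[0])
    return 100.0 * swing / abs(original_value)
```

For example, a bounded metric moving from 1.0 to 4.0 over the sweep, with an original value of 1.0, has 300% sensitivity.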
For the linearity calculations, the R-square value is used to correlate each metric's outputs with the linearly changed distortion and so quantify how linear the metric is with respect to the linear distortion change. Table 6 summarises the R-squared values of all metrics for the added distortions. CCM has the maximum linearity value of 99% for the first three distortion types, while MSE has the maximum linearity for the other three.
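The R-square figure used here is the ordinary coefficient of determination of a least-squares line fitted to the metric outputs against the distortion parameter; a minimal sketch:

```python
import numpy as np

def r_squared(distortion, metric):
    """R^2 of the best straight-line fit of metric values against the
    linearly scanned distortion parameter."""
    x = np.asarray(distortion, dtype=float)
    y = np.asarray(metric, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = float(np.sum(residuals ** 2))
    ss_tot = float(np.sum((y - y.mean()) ** 2))
    return 1.0 - ss_res / ss_tot
```

A perfectly linear metric response gives R^2 = 1, and any non-linear wobble around the fitted line lowers the value.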

CCM evaluation for brightness change
The CCM theory is based on counting the number of successive zeros in a row histogram. When an image becomes darker or brighter, it becomes less complex for the ADC to convert because the number of successive zeros increases (see Figure 2). Brightness is therefore the most important image parameter for verifying the new theory. The 8-bit, 512 × 512 pixel Lena image was selected for this test because it is the most difficult of the selected standard images. Figure 9 illustrates the original Lena image; its brightness is increased linearly by adding digital values of 40 to 240 to each pixel in Figure 9(b-f), respectively. The brightness increase is performed by simply adding a specific digital number to all pixels, which shifts the histogram towards the maximum value by the added number. This process introduces extra zeros in the lower code range of the row histogram and causes many high-illumination pixels to saturate at the maximum code value. Table 7 summarises the measurements of CCM, HS, HFM, BRISQUE, NIQE, PIQUE, SSIM, and MSE corresponding to the brightness increase of the Lena image. Clearly, CCM has the highest sensitivity to the brightness increase: it changes by 316%, while no other bounded metric exceeds a 100% change for the same brightness change. These results are plotted in Figure 10. CCM has a relatively small response to small brightness increases and then increases linearly as brightness increases further, which is also the typical behaviour of MSE and SSIM at different rates. The rest of the metrics respond non-monotonically and non-linearly to the brightness increase.
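The brightness sweep described above reduces to adding a constant and saturating at the top code; a minimal sketch of that operation (the function name is ours):

```python
import numpy as np

def brighten(image, offset, n_bits=8):
    """Add a fixed digital value to every pixel, saturating at the maximum
    code. The row histograms shift upwards, leaving successive zeros in
    the low-code range and piling saturated pixels at the top code."""
    top = 2 ** n_bits - 1
    return np.minimum(image.astype(np.int64) + offset, top)
```

For an 8-bit image, an offset of 40 empties histogram bins 0-39 entirely, which is exactly the growth in successive zeros that raises the CCM index.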

CONCLUSION
A new metric, CCM, is designed for the assessment of images before processing by intelligent CPA ADC electronics. The CCM index is an important parameter for intelligent electronics to obtain optimum performance, such as maximising the speed-up and minimising the power consumption with minimum image distortion. The CCM is designed to be bounded between 0 and 1; these limits are designed and tested for all possible cases of image resolution and image dimensions. Additionally, a mapping function is created to correct the CCM index for specific cases. The mapped output is tested too, giving the expected result at both limits: 0 for the lower-limit images and 1 for the upper-limit images. The CCM is an independent metric that does not need any reference image for comparison; this independence makes CCM suitable for analogue processing electronics. The CCM uses a simple calculation method based on the histogram of a row, which allows fast processing and minimises hardware implementation complexity. Finally, the CCM is not a human-perception metric: an image that looks very simple and uniform can be very challenging for the analogue electronics to process, consuming more power and time. The CCM is tested in different ways. First, the theoretically mapped output is tested using synthesised images that reach the maximum and minimum limits according to the successive-zeros theory. Second, the CCM is calculated for different standard 8-bit greyscale images, showing the same behaviour as HS, the metric that uses the histogram for calculating image contrast. Third, the CCM is calculated for 16-bit images to verify how general it is; this test confirms that the brightest or darkest images have lower conversion complexity than the well-distributed images in the same group. Fourth, six types of distortion are introduced linearly to an image, and the CCM and other IQ metrics are calculated for each distortion type.
Monotonicity, sensitivity, and linearity are investigated for all available IQ metrics. The CCM results in a monotonic behaviour to linearly scanned distortion with linearity of 99% and 316% sensitivity to brightness change.
The CCM index has been tested for greyscale images; for colour images, however, the CCM index can be calculated for each colour channel, and a weighted combination of the per-channel indices can be used as the global CCM index of the colour image.
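A straightforward realisation of that extension might look as follows. The equal default weights are our assumption; any normalised weighting (e.g. luma-style weights) could be substituted, and the paper does not specify one.

```python
def global_ccm(ccm_per_channel, weights=None):
    """Combine per-channel CCM indices into one global index using a
    normalised weighted average (equal weights by default; the weighting
    scheme is an assumption, not specified in the paper)."""
    if weights is None:
        weights = [1.0] * len(ccm_per_channel)
    total = sum(weights)
    return sum(w * c for w, c in zip(weights, ccm_per_channel)) / total
```

Because each per-channel index is bounded in [0, 1], any normalised weighted average keeps the global index bounded in [0, 1] as well.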