An improved symbol reduction technique based Huffman coder for efficient entropy coding in transform coders

Entropy coding is the essential block of transform coders that losslessly converts the quantized transform coefficients into a bit-stream suitable for transmission or storage. Usually, entropy coders exhibit less compression capability than the lossy coding techniques. Hence, in the past decade, several efforts have been made to improve the compression capability of entropy coding techniques. Recently, a symbol reduction technique (SRT) based Huffman coder was developed to achieve higher compression than the existing entropy coders at a complexity similar to that of the regular Huffman coder. However, the SRT-based Huffman coder is not popular for real-time applications due to its improper negative-symbol handling and additional indexing issues, which restrict its compression gain to at most 10–20% over the regular Huffman coder. Hence, in this paper, an improved SRT (ISRT) based Huffman coder is proposed to properly alleviate the deficiencies of the recent SRT-based Huffman coder and to achieve higher compression gains. The proposed entropy coder is extensively evaluated in terms of compression gain and time complexity. The results show that the proposed ISRT-based Huffman coder provides significant compression gain over the existing entropy coders with lower time consumption.

and CABAC are more efficient but highly complex in nature. Hence, improved entropy coders are essentially required that can provide higher encoding efficiency with low complexity. Recently, one important attempt has been made by Saravanan et al. [17] and Bheshaj et al. [18] in the development of a symbol reduction technique (SRT) based Huffman coding for the JPEG image coder. In this entropy coding technique, the input symbols are grouped prior to Huffman coding to achieve a higher compression ratio than the regular Huffman coding, with a slight increase in complexity. This technique has opened a new direction towards better entropy coding efficiency than the existing entropy coders, but its use in practical applications is restricted by its inability to handle the negative symbols of the source sequence and by its symbol-digit indexing problems. Hence, in this paper, an improved SRT (ISRT) based Huffman coding technique is proposed to efficiently alleviate the negative-symbol handling and symbol-digit indexing problems of the existing SRT-based Huffman coding technique, and to achieve higher compression ratios with less complexity than the popular entropy coders at the same reconstruction quality.
The complete structure of the paper is as follows. A brief description of the fundamentals of entropy coding is given in Section 2. Section 3 presents the details of the regular Huffman coding technique. A brief description and the algorithmic steps of the recent SRT-based Huffman coding technique are given in Section 4. Section 5 presents the detailed development of the proposed ISRT-based Huffman coding technique. Section 6 presents a comparative performance evaluation of the proposed entropy coder, followed by the conclusions of the present work in Section 7.

ENTROPY CODING: DEFINITION AND FUNDAMENTALS
In a very basic sense, the term entropy coding refers to the lossless mapping of an input symbol into a codeword, usually in a binary stream. If the codeword generated by the entropy coder is fixed in length for all input symbols, the entropy coder is known as a fixed-length or fixed-rate entropy coder. Whereas, if the generated codeword varies in length across the input symbols, the entropy coder is termed a variable-length or variable-rate entropy coder. Entropy decoding is the exact inverse of entropy encoding, which restores the original source symbol from the encoded binary codeword. We first define the basic mathematical preliminaries that are essential for analysing the fundamentals of entropy coding, also known as source coding.
Let X = x_1, x_2, x_3, ..., x_n be a sequence of input symbols, where each symbol takes values from a finite alphabet A = a_1, a_2, a_3, ..., a_m. Let p_ij denote the probability that x_i, the i-th symbol in the source sequence, takes the value a_j. Then the self-information I_ij associated with the i-th symbol can be defined as

I_ij = -log2(p_ij) (bits)  (1)

Next, we can also define the entropy of the sequence as

H(X) = -Σ_i Σ_j p_ij log2(p_ij)  (2)

According to Shannon's theorem for source coding [19], the entropy of the source sequence is the lower bound on the average number of bits needed to encode the source symbols without loss of information [20]. Hence, the efficiency of entropy coders is always measured in terms of closeness to the source entropy. Now, the average number of bits L_avg required to represent a symbol is defined as

L_avg = Σ_{k=1}^{L} l(r_k) p_r(r_k)  (3)

where r_k is the discrete random variable or symbol for k = 1, 2, 3, ..., L with associated probability p_r(r_k), and l(r_k) is the number of bits used to represent each value of r_k. The total number of bits required to represent a complete symbol sequence is then the total number of symbols multiplied by L_avg [20]. Furthermore, existing entropy coders are usually compared in terms of gain in compression and reduction in time complexity. Time complexity is simply a measure of the total time required to encode and decode the input source sequence, whereas the compression gain of an entropy coder signifies the increase in compression ratio (CR), which is given by

CR = B_i / B_c  (4)

where B_i and B_c are the sizes of the input and compressed data, respectively [21]. For a good entropy coder, the compression ratio (CR) should be as high as possible and the time complexity as low as possible.
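These definitions can be illustrated with a short Python sketch (not part of the paper); the toy symbol sequence and the byte sizes below are arbitrary examples, not data from the experiments.

```python
from collections import Counter
from math import log2

def entropy(symbols):
    # H = -sum p * log2(p) over the empirical symbol probabilities
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in Counter(symbols).values())

def compression_ratio(b_input, b_compressed):
    # CR = B_i / B_c, the ratio of input size to compressed size
    return b_input / b_compressed

X = [0, 0, 0, 0, 1, 1, 2, 3]     # toy source sequence
H = entropy(X)                   # 1.75 bits/symbol: the coding lower bound
CR = compression_ratio(64, 32)   # e.g. 64 input bits compressed to 32 bits
```

Any lossless code for this toy sequence must therefore spend at least 1.75 bits per symbol on average.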

REGULAR HUFFMAN CODING TECHNIQUE
Huffman coding is a popular variable-length entropy coding technique, which assigns shorter codewords to the more probable symbols and longer codewords to the less probable symbols. The Huffman coding technique constructs a binary tree for assigning binary codewords to the source symbols. Every source symbol of the sequence becomes a leaf of the Huffman tree. The codeword for a source symbol is obtained by following the unique path from the root of the tree to the respective leaf node; each edge is labelled 0 or 1 according to whether the left or the right child is taken next along the path, and these labels are appended to the codeword as the path is traversed. For a proper understanding of the Huffman algorithm, let us consider a source sequence X that contains n symbols with the respective probabilities p_ij as described in the previous section; the basic steps of the Huffman coding technique are then as follows:

Step 1: Initially, sort the probabilities p_ij of the source symbols in descending order, and represent the corresponding symbols as leaf nodes of the Huffman tree.
Step 2: Next, merge the two least probable symbols (usually the last two symbols in the sorted list) to make a new combined node, and calculate its probability by adding the probabilities of the two merged symbols. Now, assign a "1" to the top branch and a "0" to the bottom branch. This branch labelling can be interchanged; the resulting codewords differ, but the codeword lengths, and hence the compression, are the same either way.
Step 3: Now, repeat the procedure of Step 2 until a single node is left with a probability of 1. This node forms the root node of the designed Huffman tree.
Step 4: Finally, obtain the respective codeword for each of the source symbols by sequentially reading the branch labels (digits) from the root node to the corresponding terminal node.
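Steps 1–4 can be sketched compactly in Python with a min-heap; this is an illustrative implementation (using raw symbol counts in place of probabilities, which is equivalent), not code from the paper.

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    counts = Counter(symbols)
    if len(counts) == 1:                     # degenerate one-symbol source
        return {next(iter(counts)): "0"}
    # Heap entries: (weight, tie-breaker, {symbol: partial codeword})
    heap = [(c, i, {s: ""}) for i, (s, c) in enumerate(counts.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:                     # Steps 2-3: merge the two least probable nodes
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]                        # Step 4: the root carries the full codewords

codes = huffman_codes("aaaabbc")             # 'a' gets the shortest codeword
```

For the sequence "aaaabbc", the most frequent symbol 'a' receives a 1-bit codeword while 'b' and 'c' receive 2-bit codewords.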
After encoding all the source symbols, the Huffman coding technique generates a Huffman table for the reconstruction of the original symbols. The generated Huffman table must be available at the decoder side for proper reconstruction of the source symbols. In order to reconstruct the original source sequence X, the Huffman decoder simply inverts the codeword generation process and remaps the code stream to the individual symbols by sequentially moving through the Huffman tree node by node, from the root node to the respective leaf node. The Huffman coding technique usually offers good coding efficiency; however, it cannot provide fractional bitrates as the other entropy coders such as arithmetic, CAVLC, and CABAC coders can. Hence, in order to improve the coding efficiency of the regular Huffman coding technique, SRT-based Huffman coders have been proposed in the past few years [17,18]. A detailed description of the recent SRT-based Huffman coding technique, along with its advantages and practical limitations, is given in the following section.

EXISTING SRT-BASED HUFFMAN CODING TECHNIQUE
Usually, in entropy coding, the number of symbols present in a source sequence plays a crucial role in achieving a good compression ratio. If the number of source symbols is decreased by some mapping process prior to entropy encoding, then the compression ratio immediately increases. This is the basic concept behind the existing SRT-based entropy coders. In 2009, Saravanan et al. [17] proposed the first SRT-based Huffman coding technique to attain higher compression ratios than the simple Huffman coding for the JPEG image compression standard. This entropy coder groups the input symbols in groups of four prior to Huffman coding and hence provides a 10–20% gain in the compression ratio. However, this technique is not able to handle the negative symbols of the source sequence and also requires additional indexing to enlist the digits of each grouped symbol. The index list generated after encoding must be transmitted to the receiver for proper decoding, which reduces the compression ratio depending on the size of the index list. Later, in 2012, Bheshaj et al. [18] proposed a modified SRT (MSRT) based Huffman coder to address the negative-symbol handling issue of the previous SRT-based Huffman coding technique proposed by Saravanan et al. [17]. The basic algorithm of this recent MSRT-based Huffman coding technique is given in Algorithm 1.

Start:
X ← Sequence of source symbols.
index = [ ]; Initialize a null variable index to store the indexes of the negative symbols.
indexdigit = [ ]; Initialize a null variable indexdigit to store the respective digit counts of the source symbols.
l = 1; Counter for negative symbols.
j = 1; Counter for grouped symbols.
for k = 1 : length(X)
    if X(k) < 0; Identify a negative source symbol.
        X(k) = X(k) + 128;
        index(l) = k; Store the index of the negative symbol.
        indexdigit(k) = Number of digits in X(k);
        l = l + 1;
    end if
end for
for k = 1 : 4 : length(X)
    GX(j) = Group of four input symbols X(k : k + 3); Map the group of four input symbols into a single symbol by placing them side by side.
    j = j + 1;
end for
Huff_Code = Huffman_Coding(GX); Encode the grouped symbol sequence GX using the Huffman coding technique described in Section 3.

End of Algorithm:
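The negative-symbol step of Algorithm 1 can be sketched as follows (a Python illustration, not the authors' MATLAB code); it also exposes the flaw discussed in the limitations below: adding a fixed 128 leaves any symbol smaller than -128 negative.

```python
def msrt_shift(X):
    # First loop of Algorithm 1: shift negatives by +128 and record indexes.
    index, shifted = [], []
    for k, x in enumerate(X):
        if x < 0:
            x = x + 128          # fixed offset: insufficient when x < -128
            index.append(k)      # index of the formerly negative symbol
        shifted.append(x)
    return shifted, index

shifted, index = msrt_shift([15, -7, -135])
# -135 + 128 = -7: the third symbol is still negative after the shift
```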
Furthermore, for practical entropy coding applications, the existing MSRT-based Huffman coding technique given in Algorithm 1 is also not popular due to the following limitations: (i) The MSRT-based Huffman coding technique adds 128 to the negative symbols to shift them to the positive side, which is not a good strategy for handling the negative symbols: since negative symbols may have magnitudes larger than 128, the direct addition of 128 can produce symbols of smaller magnitude that are nevertheless still negative, and these remain in the source sequence. (ii) The second and most important problem with the MSRT-based Huffman coding technique is that the two indexes, index and indexdigit, must be transmitted along with the encoded bit-stream to the receiver for proper decoding of the source symbols. This additional data transmission greatly reduces the compression ratio of the encoder; hence, this technique offers only a 20–25% compression gain over the regular Huffman coding, which is lower than expected.

PROPOSED ISRT-BASED HUFFMAN CODING TECHNIQUE
The practical limitations of the recent MSRT-based Huffman coding technique discussed in the previous section restrict its use in real-time entropy coding applications, and hence provide a strong motivation for the development of improved SRT-based entropy coding techniques. Therefore, to properly alleviate the issues related to the existing SRT-based entropy coders, this section presents a new ISRT-based Huffman coding technique. The important contribution of the proposed ISRT-based Huffman coding technique is that it can efficiently handle the negative symbols present in the source sequence and does not require additional indexing to enlist the digits of each grouped symbol. The encoding process of the proposed ISRT-based Huffman coding technique is given in Algorithm 2. The decoding process is given in Algorithm 3, which is the exact inverse of the encoding procedure shown in Algorithm 2.

Start:
X ← Sequence of source symbols.
X_min = abs(min(X)); Absolute value of the maximum negative symbol present in the source sequence.
X_new = X + X_min; Shift the entire source sequence to the positive integer side.
NSG = 2; Number of source symbols to be grouped.
X_newbinary = dec2bin(X_new, 8); Convert each source symbol into an 8-bit binary number.
j = 1; Counter for grouped symbols.
for k = 1 : NSG : length(X)
    temp = Concatenation of rows k to k + NSG - 1 of X_newbinary; Group NSG symbols in the binary domain.
    GX(j) = bin2dec(temp); Convert the grouped symbol from binary to a positive integer.
    j = j + 1; Increment the counter for grouped symbols.
end for
Huff_Code = Huffman_Coding(GX); Encode the grouped symbol sequence GX using the Huffman coding technique described in Section 3.

End of Algorithm:
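A Python sketch of the shift-and-group steps of Algorithm 2 (illustrative, not the authors' MATLAB code; it assumes the shifted symbols fit in 8 bits, i.e. max(X) - min(X) ≤ 255):

```python
def isrt_group(X, NSG=2, bits=8):
    # Shift by |min(X)| so every symbol is non-negative, render each
    # symbol as an 8-bit binary string, and merge NSG adjacent symbols
    # into a single integer by concatenating their bit strings.
    x_min = abs(min(X))
    binary = [format(x + x_min, "0{}b".format(bits)) for x in X]
    grouped = [int("".join(binary[i:i + NSG]), 2)
               for i in range(0, len(binary), NSG)]
    return grouped, x_min

GX, x_min = isrt_group([15, -7, -135, 0])
# x_min = 135; shifted sequence = [150, 128, 0, 135];
# GX = [150*256 + 128, 0*256 + 135] = [38528, 135]
```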
The advantages of the proposed ISRT-based Huffman coding technique over the existing MSRT-based Huffman coder are as follows: (i) The first advantage of the proposed ISRT-based Huffman coding technique is that, in order to handle the negative symbols of the source sequence, the whole sequence is shifted to the positive integer side by adding the absolute value of the minimum (i.e. the most negative) symbol of the source sequence to all the symbols of the sequence. Shifting the whole sequence towards the positive side ensures that all the new symbols are positive, and it also removes the need for indexing the negative symbols. Hence, the proposed ISRT-based entropy coding technique properly alleviates the negative-symbol handling and indexing problems of the MSRT-based entropy coding technique. (ii) The second advantage of the proposed entropy coder is that it does not require additional indexing to indicate the digits of each grouped symbol. Because the proposed entropy coder groups the source symbols in the binary domain with an 8-bit representation for each symbol, at the decoder the combined 16-bit symbol can simply be divided into two parts, and the originally combined symbols are easily isolated. (iii) The third advantage of the proposed entropy coder is that, to reduce the computational complexity, it groups two source symbols instead of the four symbols used in the MSRT-based entropy coder [18].
Furthermore, to properly elaborate the encoding and decoding processes of the proposed ISRT-based Huffman coding technique and its advantages over the existing MSRT-based Huffman coding technique, we now take a practical example of entropy coding and decoding for a random symbol sequence X defined in Equation (5).

Start:
X_min ← Received absolute value of the maximum negative symbol of the source sequence.
NSG = 2; Number of symbols into which each grouped symbol is to be divided.
Rec_GX = Huffman_Decoding(Huff_Code); Decode the received codewords using the regular Huffman decoding.
Rec_GXbinary = dec2bin(Rec_GX, 16); Convert each decoded grouped symbol into a 16-bit binary number.
nn = 1; Counter for ungrouped symbols.
for i = 1 : length(Rec_GXbinary)
    X_dec(1, nn) = bin2dec(Rec_GXbinary(i, 1 : 8)); Extract the first source symbol from the i-th grouped symbol and convert it into a decimal number.
    X_dec(1, nn + 1) = bin2dec(Rec_GXbinary(i, 9 : 16)); Extract the second source symbol from the i-th grouped symbol and convert it into a decimal number.
    nn = nn + NSG; Increment the counter.
end for
X = X_dec - X_min; Restore the shifted source sequence to the original integer values and obtain the original source sequence.

End of Algorithm:
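The ungrouping step of Algorithm 3 can be sketched similarly (again an illustration, under the same 8-bit assumption): each 16-bit grouped symbol is split into two 8-bit halves, and the X_min shift is undone.

```python
def isrt_ungroup(GX, x_min, NSG=2, bits=8):
    decoded = []
    for g in GX:
        # 16 binary digits per grouped symbol when NSG = 2 and bits = 8
        b = format(g, "0{}b".format(bits * NSG))
        for i in range(NSG):
            decoded.append(int(b[i * bits:(i + 1) * bits], 2) - x_min)
    return decoded

X = isrt_ungroup([38528, 135], 135)
# [38528, 135] splits into [150, 128, 0, 135]; subtracting 135
# restores the original sequence [15, -7, -135, 0]
```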
The matrix X shown in Equation (5) is the input to the entropy encoding problem; it contains a total of eight symbols to be encoded and has a maximum positive value of 15 and a most negative value of -135. For this input matrix X, we can easily analyse the improper negative-value handling of the existing MSRT-based Huffman coding technique. According to the existing MSRT-based Huffman coding technique (Algorithm 1, line 8), the predefined number 128 is added to the vector X to shift the entire vector in the positive direction. Applying line 8 of the MSRT algorithm to the input vector X, the resultant vector X_MSRT is shown in Equation (6).

From Equation (6), we can easily observe that two negative elements, -2 and -7, are still present in the shifted vector X_MSRT; that is, the addition of 128 to the input sequence is not able to completely shift the entire vector to the positive side. These two negative values, -2 and -7, lead to the requirement of the two indexes index and indexdigit (Algorithm 1, lines 2 and 3) when encoding and decoding with the MSRT-based Huffman coding technique. Hence, the two important problems of the existing MSRT-based Huffman coding technique discussed in Section 4 are clearly visible in this practical example. Now, we explain the step-by-step encoding and decoding of the input symbol sequence X using the proposed ISRT-based Huffman coding technique and also validate its advantages over the existing MSRT-based Huffman coding technique.

Entropy encoding of source sequence X using proposed ISRT-based Huffman coding technique
Starting from the first line of the proposed entropy encoding technique shown in Algorithm 2, let us store the source sequence in a row vector X. In the second line, the absolute minimum value of the source symbols is obtained using the MATLAB syntax X_min = abs(min(X)), which evaluates to X_min = 135. The next task in the proposed entropy encoder is to shift the entire source sequence X in the positive direction. To perform this task, we follow the MATLAB syntax X_new = X + X_min shown in line three of Algorithm 2, which yields the resultant shifted vector X_new shown in Equation (7).
From Equation (7), we can easily see that the proposed technique properly shifts the entire source vector X in the positive direction by adding the value of X_min to the source vector X. Next, line four of Algorithm 2 indicates that the proposed entropy coding technique combines NSG = 2 symbols during the symbol grouping process. Moving further, the MATLAB syntax X_newbinary = dec2bin(X_new, 8) shown in line five of Algorithm 2 converts each source symbol into an 8-bit binary number and stores it in a new vector named X_newbinary. Following this syntax, we obtain a character vector X_newbinary of size 8 × 8, as shown in Equation (8). Next, line six of Algorithm 2 initializes a counter variable j to 1 to count the grouped symbols during the grouping of the source symbols in the binary domain using the vector X_newbinary obtained in the previous step.
Further, as discussed in the previous section, the proposed entropy coder groups two adjacent source symbols in the binary domain by concatenation. In this context, lines 7 to 11 of Algorithm 2 perform a loop from 1 to the length of vector X_newbinary in steps of NSG. Inside this for loop (line 8), a temporary variable temp is calculated in each iteration; it contains the binary value of a grouped symbol, obtained by concatenating the two adjacent binary values in vector X_newbinary. Next, in line 9, the grouped symbol obtained in each iteration is converted into its equivalent decimal number and saved in a new vector GX, which is the grouped symbol vector whose length is simply half the length of the source vector X. After each iteration, the counter variable j is incremented by 1, as shown in line 10 of Algorithm 2. Finally, after completion of the for loop, the obtained grouped symbol vector GX is shown in Equation (9).
Vector GX shown in Equation (9) is the final grouped symbol vector to be encoded by the regular Huffman coding technique to obtain the binary codeword vector Huff_Code, as described in the last line of Algorithm 2. After regular Huffman coding, the codewords obtained for each of the symbols of vector GX are shown in Table 1.
With the generation of a codeword for each of the symbols in vector GX, the encoding process of the proposed ISRT-based Huffman coding technique is finished, and the number of bits required for transmission is 8. However, as mentioned in Algorithm 2, we also have to transmit one piece of additional information, namely the value of X_min (here X_min = 135), along with the obtained codewords for the proper decoding process. This additional information consumes an 8-bit overhead in the transmission, so the total number of bits to be transmitted after encoding with the proposed entropy coder becomes 16. Although, for this small example, the additional 8 bits look like a high burden for the transmission, in practical cases where the number of source symbols is large this overhead is negligible.
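The bit accounting of this paragraph can be written out explicitly; the 2-bit codeword length per grouped symbol is an assumption consistent with the 8-bit total stated above (Table 1 itself is not reproduced here).

```python
n_grouped = 4               # eight source symbols grouped in pairs
bits_per_codeword = 2       # assumed codeword length per grouped symbol
payload = n_grouped * bits_per_codeword   # 8 bits of Huffman codewords
overhead = 8                # X_min transmitted once as an 8-bit value
total = payload + overhead  # 16 bits in all for this small example
```

As the paragraph notes, the fixed 8-bit overhead is amortized over the whole sequence, so its relative cost vanishes as the number of source symbols grows.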

Entropy decoding process of the proposed ISRT-based Huffman coding technique
In this section, we will present the decoding process of the encoded codeword Huff_Code obtained in the previous section using the proposed entropy decoder described in Algorithm 3.
Let us first store the minimum value (i.e. the absolute value of the maximum negative symbol) of the source vector X, transmitted by the proposed encoder, in a variable X_min, as shown in line 1 of Algorithm 3. For the above example, the received value is X_min = 135. Moving to the next line, unlike line 4 of Algorithm 2, the constant NSG = 2 in line 2 of Algorithm 3 represents the number of symbols into which each grouped symbol is to be divided during the decoding process. Next, in line 3, the received codeword Huff_Code is decoded using the regular Huffman decoding and stored in vector Rec_GX. The decoded vector Rec_GX is shown in Equation (10).
Next, in line 5 of Algorithm 3, a new counter nn is initialized to one to count the number of ungrouped symbols. Moving further, lines 6 to 10 contain a for loop which runs from i = 1 to the length of vector Rec_GXbinary (four times for the above example). In the i-th iteration of the for loop, the first eight binary digits of the i-th symbol of the Rec_GXbinary vector are extracted and their decimal equivalent is stored in the nn-th position of vector X_dec, using line 7. Thereafter, the ninth to sixteenth binary digits are extracted from the same i-th symbol of the vector Rec_GXbinary and their decimal equivalent is stored in the (nn + 1)-th position of vector X_dec, using line 8. After each iteration, the counter variable nn is incremented by NSG to address the remaining ungrouped symbols, as shown in line 9. On completion of the for loop, we obtain the vector X_dec, which is the shifted version of the original ungrouped source symbols, as shown in Equation (12).
Finally, in order to obtain the original source symbols shown in Equation (5), we have to shift the X_dec vector back by the value of X_min transmitted by the encoder, using line 11 of Algorithm 3. The finally decoded vector X is shown in Equation (13).
The final decoded vector X is the same as the original input source symbol vector shown in Equation (5). Consequently, we can conclude that the proposed ISRT-based Huffman coding technique is able to correctly encode and decode practical source symbols, and hence is highly feasible for real-time coding applications.
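The round trip described in the last two subsections can be checked end to end with a small Python sketch (illustrative; Equation (5) is not reproduced here, so a hypothetical eight-symbol sequence with the same extremes, maximum 15 and minimum -135, stands in for it):

```python
def isrt_roundtrip(X, NSG=2, bits=8):
    # Encode: shift by |min(X)|, write 8-bit binaries, concatenate pairs.
    x_min = abs(min(X))
    binary = [format(x + x_min, "0{}b".format(bits)) for x in X]
    grouped = [int("".join(binary[i:i + NSG]), 2)
               for i in range(0, len(binary), NSG)]
    # Decode: split each grouped symbol back into halves, undo the shift.
    decoded = []
    for g in grouped:
        b = format(g, "0{}b".format(bits * NSG))
        decoded.extend(int(b[i * bits:(i + 1) * bits], 2) - x_min
                       for i in range(NSG))
    return decoded

X = [15, -135, 3, -2, 0, 7, -7, 1]   # hypothetical stand-in for Eq. (5)
recovered = isrt_roundtrip(X)        # identical to X: the mapping is lossless
```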

PERFORMANCE EVALUATION OF THE PROPOSED ISRT-BASED HUFFMAN CODING TECHNIQUE
This section presents an extensive evaluation of the entropy coding performance of the proposed ISRT-based Huffman coding technique against the existing entropy coders. In the first part of this section, a comparative entropy coding performance of the proposed entropy coder is presented with a developed test system of the type commonly used in video encoders. The next part of this section presents a comparative real-time practical entropy coding performance of the proposed entropy coder using the JPEG image coder of the Apple iPhone 4/4S models.
In order to present a fair test of the entropy coding performances of the proposed entropy coder and the other existing entropy coders, we need a common testing platform that suits the encoding process of all the important entropy coding techniques. Hence, in the first part of this section, a test system has been developed that comprises DCT transformation, scalar quantization, and the different entropy coding techniques, as shown in Figure 1.

FIGURE 1: Test system developed to evaluate the entropy coding performances of the proposed and the existing entropy coders [22-24].

Note that the developed test system is commonly used for image and video compression; hence, testing the entropy coders with this system leads us towards their realistic performance. The performance of the proposed entropy coder has been compared with the recent MSRT-based Huffman coder, the regular Huffman coding technique, arithmetic coding, and the advanced entropy coders such as Exp-Golomb coding, CAVLC, and CABAC. In particular, since the advanced entropy coders (Exp-Golomb, CAVLC, and CABAC) support block DCT transformation of size 4 × 4, the test system shown in Figure 1 has been designed to satisfy that requirement. Further, the encoding performances of all these entropy coders have been analysed using the developed test system for four standard greyscale test images of different sizes. The test images and their specifications are listed in Table 2, and all the test images are shown in Figure 2. The reason behind using test images of different sizes is to properly analyse the compression gain and the time complexity of the proposed entropy coder on different levels of input complexity.
Moreover, for realistic performance validation of the tested entropy coders, all the experiments have been performed on the MATLAB (version R2014b) software platform and an Intel Core i5, 3.60 GHz CPU-based computer system. After compression and reconstruction of the four test images shown in Figure 2 with the developed test system at different quantization factors (QFs), the obtained values of compression ratio and PSNR for all the tested entropy coders are tabulated in Tables 3-6, respectively.
Note that the bold values in the tables represent the highest compression ratio at the respective compression or quality level. Meanwhile, in order to properly analyse the compression gain of the proposed entropy coder against the existing entropy coders, the compression ratios tabulated in Tables 3-6 are also plotted with respect to the reconstruction qualities (i.e. PSNR values). The compression ratio versus PSNR plots for the four test images are shown in Figures 3-6.
The compression ratio vs PSNR analysis is an important feature that represents the ability of entropy coders to provide compression gain on the same reconstruction quality.
From Tables 3-6 it is clearly evident that, at lower values of QF (i.e. on the lower compression side), the Huffman and arithmetic coders provide better compression ratios than the advanced entropy coders Exp-Golomb, CAVLC, and CABAC. However, at higher values of QF (i.e. on the higher compression side), the advanced entropy coders offer significant gains in compression ratio compared to the Huffman and arithmetic coders. Moreover, from the resultant tables, it is also observable that the recent MSRT-based Huffman coder provides higher compression ratios than the regular Huffman coding at all values of QF for all four test images, with an average gain of about 22.5% in the compression ratio. However, this gain is lower than expected from the MSRT-based Huffman coder because of the need to transmit the additional indexing lists with the encoded bit-stream, which reduces the compression ratio from the expected values. Meanwhile, from the resultant tables, we can also observe that the proposed ISRT-based Huffman coder provides the highest compression ratios at all QF values for all four test images compared to the existing entropy coders. Furthermore, the plots of compression ratios shown in Figures 3-6 provide a good visualization of the comparative compression performance of all the tested entropy coders at different reconstruction qualities. The plots clearly validate that the proposed ISRT-based Huffman coding technique provides higher compression than the state-of-the-art entropy coders at the same reconstruction quality for all four test images. Furthermore, in order to quantify the performance gain of the proposed entropy coder at different complexity levels of the input signal, its average percentage gains in compression ratio against the existing entropy coders have been calculated according to the size of the input images.
The obtained values of the average percentage gains in compression ratio are tabulated in Table 7. From Table 7, it can be easily observed that the proposed entropy coder offers a high compression gain against the existing entropy coders irrespective of the complexity level of the input signal. These significant gains in compression ratio confirm the high reliability of the proposed ISRT-based Huffman coder for practical entropy coding applications. Apart from this, in order to properly justify the high gains in compression ratio obtained by the proposed ISRT-based Huffman coding technique over the existing entropy coders, we now present an example-based comparative description of the working of the proposed ISRT-based Huffman coding against the regular Huffman coding technique.
Let us consider a 4 × 4 block of pixel values as a source matrix X for the entropy coding problem, given in Equation (14). Applying the 4 × 4 DCT to X, the transformed matrix Y is given in Equation (15).
Next, the DCT-transformed matrix Y is quantized with QF = 10 and rounded, preserving the signs of the symbols. The quantized and rounded matrix is given in Equation (16).
Matrix YQR is the final symbol source matrix for entropy coding that has 16 source symbols. Let us first encode the YQR matrix using the regular Huffman coding technique. The regular Huffman coder generates the codeword for each of the symbols as shown in Table 8.
Further, in order to encode the source symbols using the proposed ISRT-based Huffman coder, the matrix YQR given in Equation (16) has first been reshaped into a column vector YQRR, as given in Equation (17). Finally, as per the proposed encoding algorithm of the ISRT-based Huffman coding given in Algorithm 2, the absolute value of the maximum negative element of the YQRR matrix (i.e. 13) has been added to all the elements of this matrix to obtain all-positive source symbols. The modified source symbol matrix YQRRM is given in Equation (18).
Lastly, the above modified source symbol matrix, which contains sixteen positive symbols, is reduced to eight symbols using Algorithm 2. The obtained combined source symbol matrix Z from the proposed ISRT-based Huffman encoder is given in Equation (19). Finally, the codewords obtained from the proposed ISRT-based Huffman coder for each of the symbols of matrix Z are shown in Table 9.
From Tables 8 and 9, it can be easily seen that the proposed ISRT-based Huffman coding technique offers a significant gain in compression ratio over the regular Huffman coder. The quantitative gain in the compression ratio (GCR) can be calculated as

GCR = ((CR_p − CR_r) / CR_r) × 100%,   (20)

where CR_p is the compression ratio obtained from the proposed ISRT-based Huffman coder, and CR_r is the compression ratio obtained from the regular Huffman coder.
Using Equation (20), the GCR of the proposed ISRT-based Huffman coder over the regular Huffman coding is obtained as 158.38%. This example can easily be extended to justify the respective high compression gains of the proposed entropy coder against the other existing entropy coders tabulated in Table 7.
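The gain computation is a simple relative-difference calculation; assuming the usual definition of percentage gain, it can be written as:

```python
def compression_gain(cr_p, cr_r):
    """Percentage gain in compression ratio of the proposed coder
    (cr_p) over the regular Huffman coder (cr_r), as in the relative
    gain GCR = (CR_p - CR_r) / CR_r * 100%."""
    return (cr_p - cr_r) / cr_r * 100.0
```

Under this definition, the reported gain of 158.38% corresponds to the proposed coder achieving roughly 2.58 times the compression ratio of the regular Huffman coder on this example.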

6.1 Real-time practical performance evaluation of the proposed entropy coder using the JPEG image coder of the Apple-iPhone 4/4S model

This subsection presents a detailed comparative real-time and practical entropy coding performance evaluation of the proposed entropy coder using the JPEG image coder employed in the Apple-iPhone 4/4S model [25]. The JPEG image coder is the most popular and widely used tool for practical image coding applications [26]. The basic structure of a JPEG image coder is shown in Figure 7; it contains four main functions:

1. Division of the input image into non-overlapping blocks (NOBs) of size 8 × 8,
2. Decorrelation of the image pixels of each NOB by the forward DCT transformation,
3. Quantization of the individual DCT coefficients of each NOB to achieve compression, and
4. Entropy coding of the quantized DCT coefficients using the Huffman coding technique to generate the compressed bitstream.

FIGURE 7 Basic structure of JPEG image coder [26]
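Steps 2 and 3 of the pipeline above can be sketched in a few lines. This is an illustrative, direct (slow) implementation of the orthonormal 2-D DCT and table-based quantization, not the fast DCT used by practical JPEG coders:

```python
import math

def dct_1d(v):
    """Orthonormal 1-D DCT-II of a length-N vector (direct O(N^2) form)."""
    N = len(v)
    out = []
    for k in range(N):
        s = sum(v[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

def dct_2d(block):
    """Separable 2-D DCT: 1-D DCT on each row, then on each column."""
    rows = [dct_1d(r) for r in block]
    cols = [dct_1d(list(col)) for col in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def quantize(coeffs, qtable):
    """Divide each DCT coefficient by its quantization step and round."""
    return [[round(c / q) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, qtable)]
```

For a constant 4 × 4 block, the 2-D DCT concentrates all the energy in the DC coefficient and every AC coefficient is zero, which is precisely the decorrelation property that makes the subsequent quantization and entropy coding effective.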
In the standard JPEG coder, a default quantization table (DQT) is used, and compressed streams at different bitrates are generated by scaling this DQT. The DQT of the JPEG standard (DQT_JPEG) is shown in Equation (21).
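The DQT scaling technique mentioned above is commonly implemented following the IJG (libjpeg) convention, which maps a quality setting in [1, 100] to a scale factor applied to every table entry. This convention is an assumption of this sketch; the JPEG standard itself does not mandate a particular scaling formula:

```python
def scale_dqt(dqt, quality):
    """Scale a quantization table for a target quality in [1, 100],
    following the widely used IJG (libjpeg) convention: quality 50
    leaves the table unchanged, higher quality shrinks the steps
    (less compression), lower quality enlarges them."""
    quality = max(1, min(100, quality))
    scale = 5000 // quality if quality < 50 else 200 - 2 * quality
    return [[max(1, min(255, (q * scale + 50) // 100)) for q in row]
            for row in dqt]
```

At quality 50 the scale factor is 100, so the default table passes through unchanged; at quality 100 every step collapses to its minimum of 1, yielding near-lossless quantization at a much higher bitrate.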
However, in the past decade, several modifications of the JPEG standard quantization table have been proposed and have found use in practical applications. For instance, the JPEG coder used in the Apple-iPhone 4/4S model is almost the same as the standard JPEG coder except for the quantization table. The quantization table used in the JPEG image coder of the Apple-iPhone 4/4S model is shown in Equation (22).
Note that a higher bitrate signifies a lower amount of compression (i.e. a lower compression ratio), whereas a lower bitrate signifies a higher amount of compression.
From Figures 8 and 9, it is evident that the Apple-iPhone JPEG coder provides better compression than the standard JPEG coder for both test images. Moreover, both figures clearly show that the compression performance of the Apple-iPhone JPEG coder with the proposed entropy coder is much better than that of the others, since it offers higher PSNR values at all bitrates. The PSNR performances shown in Figures 8 and 9 clearly validate the higher compression capability of the proposed ISRT-based Huffman coding technique over the regular Huffman coder for real-time practical applications.
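The PSNR metric used for the comparisons in Figures 8 and 9 is the standard peak signal-to-noise ratio for 8-bit images, computable as:

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel
    sequences; higher PSNR indicates better reconstruction quality."""
    mse = sum((o - r) ** 2
              for o, r in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical signals
    return 10.0 * math.log10(peak * peak / mse)
```

A unit mean-squared error against an 8-bit peak of 255 corresponds to about 48.13 dB, so the differences of a few dB reported between the coders represent substantial quality gaps.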

TIME COMPLEXITY ANALYSIS OF THE PROPOSED ISRT-BASED HUFFMAN CODING TECHNIQUE
The time complexity of an entropy coder is a very important parameter that actually reflects the feasibility of the coder for practical applications. In the case of entropy coding, the time complexity is a measure of total time consumption by the entropy coder for encoding and decoding of the source sequence. This section presents an extensive time complexity analysis for the proposed ISRT-based Huffman coding technique against the existing entropy coders to provide a clear visualization of its real-time application feasibility.
For a proper time complexity evaluation of the entropy coders, the total time consumed by each coder for the encoding and decoding of the source sequence has been determined for all four test images using the test system shown in Figure 1 over different values of the quantization factor. Thereafter, the respective average time requirements over the different input image sizes have been taken for comparison and analysis. The obtained values of the total time requirement for the encoding and decoding processes (TTRFEDP) for the proposed and the other existing entropy coders are tabulated in Table 10.
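The TTRFEDP measurement described above amounts to timing a full encode-decode round trip and averaging over repetitions. A minimal harness for such a measurement (an illustrative sketch, not the paper's test system) could look like:

```python
import time

def time_codec(encode, decode, source, repeats=5):
    """Measure the average total wall-clock time of an encode+decode
    round trip for a codec pair, verifying losslessness on each run."""
    total = 0.0
    for _ in range(repeats):
        t0 = time.perf_counter()
        bitstream = encode(source)
        decoded = decode(bitstream)
        total += time.perf_counter() - t0
        assert decoded == source, "entropy codec must be lossless"
    return total / repeats
```

Averaging over repeated runs (and, as in the text, over several quantization factors and image sizes) reduces the influence of transient system load on the reported timings.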
From Table 10, it is clearly observable that, among all the tested entropy coders, the regular Huffman coder consumes the smallest average time of about 10.507 s for the encoding and decoding of all four test images on the developed test system. This is why, in spite of its low coding efficiency, the Huffman coder is still popular for practical entropy coding applications. On the other hand, the advanced CABAC entropy coder takes the highest average time of about 53.077 s for the encoding and decoding of the four test images. The CAVLC coder, the alternative to CABAC, shows slightly lower time complexity than CABAC but still consumes far more time than the other existing entropy coders. Entropy coders such as the arithmetic coder and the Exp-Golomb coder show moderate time complexity for practical implementations; however, both require more time to encode and decode the input sequence than the regular Huffman coding. The recent MSRT-based Huffman coding consumes on average 66.25% more time than the regular Huffman coding, but its time consumption is still lower than that of the other existing entropy coders. Furthermore, it is worth mentioning that the average time consumption of the proposed ISRT-based Huffman coder is about 14.60 s, which is only 35% higher than that of the regular Huffman coder. Moreover, the time consumption of the proposed entropy coder is smaller than that of the recent MSRT-based Huffman coding and the other state-of-the-art entropy coders. Therefore, the proposed ISRT-based Huffman coder offers low encoding and decoding complexity, and is hence highly suitable for real-time entropy coding applications.

CONCLUSIONS
In this paper, a new ISRT-based Huffman coder has been developed to achieve higher compression ratios with less complexity than the recent and popular entropy coders. The developed entropy coder has been particularly designed to efficiently alleviate the negative symbol handling and additional indexing problems of the recent MSRT-based Huffman coding technique. Further, in order to provide a realistic validation of the coding performance of the entropy coders, a proper test system has been developed that satisfies all the structural requirements for testing simple as well as advanced entropy coders. The comparative compression gain and time complexity of the proposed ISRT-based Huffman coder have been extensively analysed using the developed test system against the popular entropy coders. The obtained results validate that the developed ISRT-based Huffman coding technique provides significant compression gains over the existing entropy coders with lower time consumption.