An improved path splitting strategy on successive cancellation list decoder for polar codes

The performance of the cyclic redundancy check aided successive cancellation list (CA-SCL) decoder for polar codes exceeds that of the Turbo codes and the Low Density Parity Check (LDPC) codes adopted in the Worldwide Interoperability for Microwave Access (WiMAX) proposal. However, the CA-SCL decoder has high computational complexity, resulting in long time delay and high memory cost. To alleviate this problem, an improved path splitting strategy on successive cancellation list (IPSS-SCL) decoder was proposed, which can significantly reduce the average list size. The influence of splitting on the CA-SCL decoder was analysed first, and a new selective splitting criterion using the Gaussian approximation method was proposed according to this analysis. In addition, a path contraction mechanism based on the location estimation of the correct path was proposed to further reduce the average number of unnecessary candidate paths. Compared with existing path splitting strategies, simulation results show that the proposed IPSS-SCL decoding algorithm can significantly reduce the average computational complexity with negligible performance loss.


INTRODUCTION
Polar codes [1] have recently become a major area of interest within the field of channel coding, since they have been proven to achieve the capacity of binary-input discrete memoryless channels (BI-DMCs). The successive cancellation (SC) decoding scheme [1] was proposed as the original polar decoding scheme with low complexity, and it asymptotically achieves the channel capacity over a BI-DMC as the block length goes to infinity.
Unfortunately, the performance of the SC decoder is unsatisfactory in practical cases with finite block lengths. To improve it, a successive cancellation list (SC-List) decoding algorithm [2] was proposed, which approaches the performance of maximum likelihood decoding in the high signal-to-noise ratio (SNR) regime. Furthermore, a cyclic redundancy check (CRC) aided successive cancellation list (CA-SCL) [2] decoding algorithm is comparable in performance with current state-of-the-art LDPC codes. Considering the computational complexity of the CA-SCL algorithm, O(LN log N), an adaptive CA-SCL decoding algorithm was proposed which uses an adaptive list size instead of a constant L [3]. It achieves the same performance with a significantly smaller list size in the high SNR regime. Moreover, a segmented CA-SCL polar decoding scheme [4] was proposed for a better trade-off between performance and complexity, providing a large complexity reduction in the low SNR regime.
Besides, SC-based decoding algorithms also include the depth-first SC stack decoding algorithm [5] and the SC-Flip decoding algorithm [6]. The SC stack algorithm has the same performance as the SC-List algorithm and can approach that of maximum likelihood decoding, but its computational complexity is much lower than that of the SC-List decoder in the high SNR regime. Different from the SC-List and SC stack algorithms, the SC-Flip algorithm [6] was proposed to mitigate the error propagation of the SC algorithm. The SC-Flip decoder preserves the low memory requirements of the basic SC decoder and improves the block error rate (BLER) performance by retrying alternative decisions.
The SC-List decoder achieves superior performance compared to the SC decoder at the price of increased complexity, and the CA-SCL decoder outperforms current state-of-the-art turbo and LDPC codes. However, the computational complexities of the SC-List and CA-SCL decoders are relatively high. Therefore, many researchers have proposed improved algorithms based on SC-List decoding. For example, the fast SC decoder idea [7] was applied to the SC-List algorithm, and a significant reduction in decoding latency was observed without sacrificing the bit error rate performance [8]. In addition, a split-reduced SC-List decoder [9] was proposed which achieves substantial complexity reduction. To the best of our knowledge, most of these works reduce the computational complexity of the SC-List algorithm with almost no performance loss, but they are not suitable for the CA-SCL algorithm.
Obviously, there is no need to split all the decoding paths when performing the CA-SCL algorithm. This is because splitting is unnecessary if the reliability of deciding the unfrozen bit u_i is extremely high, especially for bits transmitted through sufficiently polarized sub-channels. Thus, we focus on the splitting operation at unfrozen bits in order to reduce the computational complexity of the CA-SCL algorithm. The main contributions can be summarized as follows:

• We analysed the effect of path splitting on decoding performance, aiming to find the causes of performance degradation in the selective path splitting process of the CA-SCL decoder. Then, we proposed an improved path splitting strategy that determines whether the path extension operation should be performed, based on the Gaussian approximation (GA). In this way, the growth of the list can be effectively curbed with negligible performance loss.

• A path contraction mechanism was proposed with a dynamically updated threshold. This mechanism decides whether the path deletion operation can be performed at the related information bits after the path competition has finished. Thanks to the contracted list size, the decoding complexity is further reduced without significant performance loss.
The rest of this work is organized as follows. The preliminaries about the polar codes and the path splitting strategy on SC-List (PSS-SCL) decoder [10] are introduced in Section 2. In Section 3, a new path splitting selecting decoder is proposed, including the splitting strategy and the contraction mechanism. Then, the numerical simulation results are demonstrated in Section 4. Finally, conclusions are drawn in Section 5.

Polar codes and SC decoding
A polar code [1] is characterized by a four-tuple (N, K, 𝒜, 𝒜^c), where N = 2^n is the length of the code, K is the number of information bits, 𝒜 ⊂ {1, …, N} is a set indicating the positions of the K information bits, and 𝒜^c = {1, …, N}∖𝒜. After channel polarization, the indices of the K best sub-channels are taken into 𝒜. Bits corresponding to positions i ∈ 𝒜^c are referred to as frozen bits and are set to constant values known at both the encoder and decoder.
For any N = 2^n, the encoded vector (x_1, …, x_N), denoted by x_1^N, is obtained by the formula x_1^N = u_1^N F^{⊗n}, where the source vector u_1^N denotes the row vector (u_1, …, u_N), F^{⊗n} denotes the nth Kronecker power of the matrix F, and

F = [ 1 0 ; 1 1 ].

In this work, the encoded codeword x_1^N is sent over an AWGN channel using binary phase shift keying (BPSK) modulation.
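As a concrete illustration of the encoding formula, the following sketch applies x_1^N = u_1^N F^{⊗n} over GF(2) via the standard butterfly network, which is equivalent to multiplying by the Kronecker power in the natural bit ordering (no bit-reversal permutation); `polar_encode` is an illustrative name, not from the paper.

```python
def polar_encode(u):
    """Encode x = u * F^{(kron)n} over GF(2), with F = [[1, 0], [1, 1]],
    computed by the butterfly network equivalent to the Kronecker power."""
    x = list(u)
    n = len(x)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]  # upper branch adds the lower branch (mod 2)
        step *= 2
    return x
```

For example, with N = 4 the source vector (0, 0, 0, 1) maps to the last row of F^{⊗2}, i.e. the all-ones codeword.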
At the receiver, the SC decoding algorithm [1] is used to obtain the estimate of the source vector, denoted by û_1^N. Based on the previous results û_1^{i−1} as well as the received values y_1^N, the SC decoding algorithm computes the probability of u_i, denoted by W_N^{(i)}(y_1^N, û_1^{i−1} | u_i). Since u_i, i ∈ 𝒜^c are known to the receiver, the real task of SC decoding is to estimate u_i, i ∈ 𝒜. The log-likelihood ratio (LLR) for bit u_i is defined as

L_N^{(i)}(y_1^N, û_1^{i−1}) = ln [ W_N^{(i)}(y_1^N, û_1^{i−1} | u_i = 0) ∕ W_N^{(i)}(y_1^N, û_1^{i−1} | u_i = 1) ].   (1)

Hard decisions are taken according to

û_i = 0 if L_N^{(i)}(y_1^N, û_1^{i−1}) ≥ 0 and i ∈ 𝒜;  û_i = 1 if L_N^{(i)}(y_1^N, û_1^{i−1}) < 0 and i ∈ 𝒜;  û_i = u_i if i ∈ 𝒜^c.   (2)
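The recursive LLR computation behind (1) reduces to two elementary updates at each butterfly stage. The sketch below uses the common min-sum approximation for the check-node (f) update rather than the exact 2·atanh(tanh·tanh) form, an assumption on our part; the function names are ours, not the paper's.

```python
import math

def f_minsum(a, b):
    """Check-node (f) update: min-sum approximation of
    2*atanh(tanh(a/2)*tanh(b/2))."""
    return math.copysign(1.0, a) * math.copysign(1.0, b) * min(abs(a), abs(b))

def g_update(a, b, u_hat):
    """Variable-node (g) update: the partial-sum bit u_hat decides whether
    the contribution of a is added or subtracted."""
    return b + (1 - 2 * u_hat) * a

def hard_decision(llr):
    """Decision rule (2) for an information bit: 0 when the LLR is
    non-negative, 1 otherwise."""
    return 0 if llr >= 0 else 1
```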

SC-List-based decoding algorithms of polar codes
The SC decoder makes a hard decision at each bit position and does nothing to remedy a wrong decision. To overcome this problem, the SC-List decoding algorithm [2] preserves both the 0 and 1 hypotheses at each position i ∈ 𝒜 and splits the decoding path into two parallel decoding threads, continuing in either possible direction. In this work, the SC-List decoder based on a probability measure is called the probability-based SC-List decoder, as opposed to the LLR-based SC-List decoder [11]. Each of the threads is estimated with probability

Pr(û_1^i[l] = u_1^i) = W_N^{(i)}(y_1^N, û_1^{i−1}[l] | û_i[l]),   (3)

where Pr(û_1^i[l] = u_1^i) denotes the probability that the lth path at the ith position is the correct path, and W_N^{(i)}(y_1^N, û_1^{i−1}[l] | u_i) is computed by the SC algorithm. In order to constrain the exponentially growing complexity, the authors in [2] considered a pruning procedure that discards all but the L most likely paths. At the end of the SC-List decoding process, the final output is selected with the following formula:

û_1^N = û_1^N[l*],  l* = argmax_{l ∈ 𝒫} Pr(û_1^N[l] = u_1^N),   (4)

where 𝒫 = {1, …, L} is the set of alive paths. Formula (4) indicates that the SC-List algorithm picks the estimated codeword with the maximum probability.

Figure 1. Full binary tree for polar code (16, 8).

Actually, considering the fact that the path with the maximum probability is not necessarily correct, a cyclic redundancy check aided SC-List (CA-SCL) decoder [2,12] was shown to improve the performance of the SC-List decoder. In the CA-SCL decoder, the path that passes the CRC check is selected as the final output. If more than one path passes the CRC check, the most likely one is picked. Simulation results showed that the CA-SCL algorithm can obtain a 1 dB gain at a BLER of 10^{−5} compared to the SC-List decoder for the polar code (1024, 512) with list size L = 16.
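The pruning rule and the CA-SCL output rule can be sketched as follows. The even-parity check `parity_ok` is only a toy stand-in for the 16-bit CRC of the paper, and all function names are illustrative.

```python
def prune_to_L(paths, L):
    """paths: list of (bits_tuple, prob). Keep the L most probable candidates."""
    return sorted(paths, key=lambda p: p[1], reverse=True)[:L]

def ca_scl_select(paths, crc_ok):
    """CA-SCL output rule: the most probable path that passes the CRC check,
    falling back to the most probable path overall if none passes."""
    passing = [p for p in paths if crc_ok(p[0])]
    pool = passing if passing else paths
    return max(pool, key=lambda p: p[1])[0]

# toy stand-in for a CRC: even parity over the decided bits
parity_ok = lambda bits: sum(bits) % 2 == 0
```

Note how CA-SCL can return a path that is not the most probable one: if the top path fails the CRC, a lower-probability CRC-passing path is chosen instead.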

An existing path splitting selecting strategy for SC-List-based algorithms
In order to reduce the computational complexity of SC-List algorithm, a simple idea is to modify the decoding tree. Instead of splitting the paths at each sub-channel, an alternative method is to split the paths only at unreliable sub-channels.
The critical set [13] contains, with high probability (more than 99% for SC decoding), the erroneous bits caused by channel noise. These erroneous bits induced by channel noise are called noise erroneous bits for simplicity.
Taking the polar code (16, 8) as an example, a full binary tree with 16 leaf nodes is constructed in Figure 1. The frozen bits and the information bits are denoted by white circles and black circles, respectively, at level 4. For each non-leaf node in the tree, if its two descendants are of the same colour, it is marked with that colour; otherwise, it is marked grey. The first leaf node of each rate-1 node is taken into the critical set [7]. As shown in Figure 1, the critical set is 𝒞 = {u_8, u_10, u_11, u_13}.
The critical set was first proposed as a reference for the SC-Flip algorithm, since it contains more than 99% of the indices of channel noise erroneous bits. If the first SC-Flip decoding attempt fails to pass the CRC check, the bits in the critical set are flipped to correct the channel noise erroneous bits. Similarly, SC-List-based decoders can perform the path splitting operation only at the channel noise erroneous bits, since the other information bits are almost infallible according to the properties of polar codes. Thus, the critical set was adopted in the CA-SCL algorithm [10]. Simulation results showed that the computational complexity of the CA-SCL algorithm can be reduced if path splitting is only performed with the critical set-based strategy. As shown in Figure 2, if the critical set is applied to the SC-List algorithm, the computational complexity of the SC-List decoder is slightly reduced with no performance loss. However, the simulation results in Section 4 demonstrate that this method may be unsuitable for the CA-SCL algorithm. This is because the critical set, constructed for SC decoding, cannot accurately capture the channel noise erroneous bits of list decoding, especially for a large list size, leading to performance degradation.
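The critical set construction can be sketched recursively: descend the binary tree and emit the first leaf of every maximal rate-1 (all-information) subtree. The (16, 8) information set {8, 10, 11, 12, 13, 14, 15, 16} used in the example below is our assumption for Figure 1; with this set the function returns {8, 10, 11, 13}, matching the critical set stated in the text.

```python
def critical_set(is_info):
    """Critical set construction [13]: the first leaf of every maximal
    rate-1 (all-information) subtree of the full binary tree.
    is_info: boolean list of length N = 2^n, True at information positions."""
    def rec(lo, hi):                 # covers leaves lo..hi-1 (0-indexed)
        seg = is_info[lo:hi]
        if all(seg):
            return [lo + 1]          # rate-1 node: take its first leaf (1-indexed)
        if not any(seg):
            return []                # rate-0 node: nothing to split here
        mid = (lo + hi) // 2
        return rec(lo, mid) + rec(mid, hi)
    return rec(0, len(is_info))
```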

AN IMPROVED PATH SPLITTING SELECTING STRATEGY AIDED SCL ALGORITHM
To reduce the computational complexity of the CA-SCL decoder with negligible performance loss, we first study the relevant properties of the CA-SCL decoding algorithm, and a new path splitting selecting strategy is proposed in this section. Then, we propose a path contraction mechanism to further reduce the average list size.

Analysis of path splitting selecting strategy
The CA-SCL decoding algorithm differs from the SC-List algorithm in that it picks the final path with the following formula:

û_1^N = argmax_{û_1^N[l_crc]} Pr(û_1^N[l_crc] = u_1^N),   (5)

where Pr(û_1^N[l_crc] = u_1^N) denotes the probability of a path l_crc that passes the CRC check. Equation (5) implies that the CA-SCL decoding algorithm fails for two reasons: either the correct path does not exist in the list, or a wrong path passes the CRC check with a probability larger than that of the correct one, which causes a false alarm. To further evaluate the CA-SCL algorithm and identify which situation dominates the decoding errors, we study the frequency with which the correct path exists in the list and the ratio of unsuccessful decodings caused by false alarms through Monte Carlo simulations with a 16-bit CRC code with g(D) = D^{16} + D^{15} + D^2 + 1; the results are shown in Table 1.
According to the above numerical evaluation, it can be observed that the performance of the CA-SCL algorithm is mainly determined by the probability that the correct path is kept in the list. Therefore, let P_𝒜 denote the probability that the correct path is kept in the list when splitting is performed at every information bit, and let P_Ω denote the corresponding probability when splitting is performed only at the bits of a splitting set Ω, where Ω̄ is the complementary set of Ω and Ω ∪ Ω̄ = 𝒜. Then

P_Ω ≤ P_𝒜.   (6)

According to inequality (6), the performance of the CA-SCL algorithm cannot be improved by a path splitting selecting strategy, because of the reduced path competition. Instead, it must deteriorate if the set Ω is constructed unreasonably. In the PSS-SCL decoder [10], Ω is replaced by the critical set. To further reduce the computational complexity, a threshold γ is considered: if the maximum LLR value among the |𝒫| paths is less than γ, the decoder splits the decoding paths. However, the critical set is not suitable as a path splitting selecting strategy in CA-SCL decoding, since it does not include certain unreliable information bits u_i, which causes significant performance loss.

An improved path splitting selecting strategy
According to the above analysis, when the ith sub-channel is sufficiently polarized, we have

Pr(û_i = u_i | û_1^{i−1} = u_1^{i−1}) ≈ 1,

which means that the path splitting operation can be replaced by the SC decision in each thread at such an ideal sub-channel. To get the optimal set Ω, one would have to examine every possible subset of 𝒜, which is impossible to realize when K is large enough. Therefore, we construct Ω approximately with the GA method [14]. However, considering that Φ^{−1}(x) in [14] is hard to compute, Φ(x) can be approximated by a piecewise function [15]. With the help of the GA, we can obtain the expected LLR, and hence the expected error probability, of every information bit under the design SNR. Then we take the bits with the smallest mean LLR values (i.e. the largest error probabilities) into Ω, Ω = {i_1, …, i_{|Ω|}} ⊂ 𝒜, where |Ω| is the size of the splitting set. |Ω| can be set according to communication requirements; it is fixed to the size of the critical set for comparison. To further reduce the number of splits, a splitting threshold γ_split is considered in our work. For i ∈ Ω, we split the paths at the ith position if the average LLR value of the |𝒫| paths satisfies

(1∕|𝒫|) Σ_{l=1}^{|𝒫|} |L_N^{(i)}[l]| < γ_split.

Based on the observation above, γ_split directly influences the performance of the path splitting criterion. It should be set properly, since an undersized value will lead to dramatic performance deterioration and an oversized one will result in unnecessary splitting. In our work, γ_split is derived from [E(L_N^{(i)})]_min, the smallest mean LLR value among the information bits computed by the GA, together with an empirical margin f. A bit whose average LLR value is lower than this minimum theoretical value has a high probability of being erroneously decoded by the SC algorithm, which means that the path splitting operation should be provided to improve the error correction ability. In this way, we slow down the rapid expansion of the list size, reducing early path replication, sorting and LLR calculations, which significantly lowers the complexity of the CA-SCL decoder with almost no performance loss.
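The GA-based construction of Ω can be sketched as follows. We use the widely cited two-segment approximation of Φ(x) due to Chung et al. as a stand-in (an assumption on our part; reference [15]'s exact piecewise form is not reproduced here) and a bisection inverse; `ga_mean_llrs` and `split_set` are illustrative names.

```python
import math

def phi(x):
    """Two-segment approximation of the GA function phi(x)
    (Chung et al.'s form, assumed here in place of [15]'s)."""
    if x == 0:
        return 1.0
    if x < 10:
        return math.exp(-0.4527 * x ** 0.86 + 0.0218)
    return math.sqrt(math.pi / x) * math.exp(-x / 4) * (1 - 10 / (7 * x))

def phi_inv(y):
    """Invert phi by bisection; phi is monotonically decreasing."""
    lo, hi = 1e-9, 500.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if phi(mid) > y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def ga_mean_llrs(n, sigma2):
    """Mean LLRs of all N = 2^n sub-channels for BPSK over an AWGN
    channel with noise variance sigma2 (the design SNR)."""
    means = [2.0 / sigma2]
    for _ in range(n):
        nxt = []
        for m in means:
            nxt.append(phi_inv(1 - (1 - phi(m)) ** 2))  # worse (f) branch
            nxt.append(2 * m)                           # better (g) branch
        means = nxt
    return means

def split_set(means, info_positions, size):
    """Splitting set Omega: the `size` information positions (0-indexed)
    with the smallest GA mean LLRs."""
    return sorted(info_positions, key=lambda i: means[i])[:size]
```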

Path contraction mechanism
Suppressing the rapid growth of the early paths is not enough to greatly reduce the computational complexity of the CA-SCL algorithm. Since the number of paths doubles when performing a path split, the size |𝒫| of the path set quickly increases to the designed list size L. Moreover, most of the information bits with high channel indices are transmitted over sufficiently polarized sub-channels according to the construction of polar codes, which means they have a high probability of being decoded correctly by the SC algorithm. Therefore, an extra path deletion operation should be considered for the CA-SCL decoder.
To realize the path contraction mechanism, we first analyse which paths can be eliminated in CA-SCL decoding. Obviously, if path l satisfies

Pr(û_1^i[l] = u_1^i) ≪ Pr(û_1^i[l_c] = u_1^i),

where Pr(û_1^i[l_c] = u_1^i) represents the probability of the correct path l_c in the CA-SCL decoder, then path l should be deleted to reduce the computational complexity. However, the correct path l_c is unknown while the CA-SCL algorithm is running. Therefore, instead of the probability of the correct path, we use the probability that the correct path l_c exists in the list to approximate it in the path contraction operation. In [16], the Path Metric Range (PMR) was proposed to measure the probability that the correct path exists in the list for the LLR-based SC-List algorithm [11]. In this work, the concept of the PMR is applied to the probability-based SC-List decoder.

Definition 1.
To measure the probability of the correct path existing in the list, we define the logarithm probability ratio (LPR) of path l at bit u_i for the probability-based SC-List decoder as follows:

LPR_l^{(i)} = ln [ Pr(û_1^i[l_L] = u_1^i) ∕ Pr(û_1^i[l] = u_1^i) ],

where l_L is the first-ranking candidate in the list, that is, the path with the largest probability Pr(û_1^i[l_L] = u_1^i) of being the correct path. The ranking represents the location of the corresponding path in the sorted list. According to the structure of the CA-SCL decoder, a lower-ranking path corresponds to a higher LPR value.
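Definition 1 can be computed directly from the path probabilities; the ratio is scale-invariant, so unnormalized probabilities work equally well. A minimal sketch (`lpr_values` is our name):

```python
import math

def lpr_values(path_probs):
    """LPR of each path per Definition 1: log-ratio of the most probable
    path's probability to this path's probability (scale-invariant)."""
    p_best = max(path_probs)
    return [math.log(p_best / p) for p in path_probs]
```

The most probable path always has LPR 0, and less probable paths have strictly positive LPR.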
An intuitive understanding is that when LPR_l^{(i)} is close to 0 for all l, all candidate paths have similar probability values, so the correct path cannot be identified clearly and is easily deleted after the path competition. In contrast, when the LPR_l^{(i)} values for rankings 2 ≤ l ≤ L go to infinity, the path with the maximum probability is very likely to be the correct one.

Definition 2.
During the CA-SCL decoding process, if the absolute values of the LLRs are large enough for all candidate paths at an information bit u_i, the value of LPR_l^{(i)} will tend to decrease compared to that at bit u_{i−1}. On the contrary, if u_i is a frozen bit and the absolute values of the LLRs are close to zero, a rising trend will appear in the LPR values.

To verify the conclusion above, suppose that the probabilities of the alive paths at u_{i−1} satisfy

Pr(û_1^{i−1}[1] = u_1^{i−1}) ≤ Pr(û_1^{i−1}[2] = u_1^{i−1}) ≤ ⋯ ≤ Pr(û_1^{i−1}[L] = u_1^{i−1}).

We analyse the LPR characteristics of the last-ranking candidate, referred to as LPR_1^{(i)}, by Monte Carlo simulation, and the result is shown in Figure 3; the same behaviour applies to the other rankings l in the list. The LPR_l^{(i)} values of frozen bits and information bits are marked by the red line and the blue line, respectively.
(1) Assume that u_i is an information bit and the absolute LLR values of all paths are large enough. After the path splitting operation, the probabilities of the two extensions corresponding to path l satisfy

Pr(û_1^i[2l−1] = u_1^i) = Pr(û_1^{i−1}[l] = u_1^{i−1}) · e^{λ_l^{(i)}} ∕ (1 + e^{λ_l^{(i)}}),
Pr(û_1^i[2l] = u_1^i) = Pr(û_1^{i−1}[l] = u_1^{i−1}) · 1 ∕ (1 + e^{λ_l^{(i)}}),

where λ_l^{(i)} is the LLR value of the lth path at u_i. Because the LLR values are large, the probability of the less likely extension approaches zero and the surviving extension inherits almost all of the probability of its parent path, so we can obtain

LPR_l^{(i)} ≤ LPR_l^{(i−1)},

where equality holds if and only if λ_l^{(i)} goes to infinity.

(2) If u_i is a frozen bit and the absolute LLR values are close to zero for all surviving paths, the ordering of the path probabilities plays the important role. Since frozen bits are forced to zero, we can obtain

Pr(û_1^i[l] = u_1^i) = Pr(û_1^{i−1}[l] = u_1^{i−1}) ∕ (1 + e^{−λ_l^{(i)}}).   (19)

Equation (19) shows that the probability of a higher-ranking candidate decreases more slowly than that of a lower-ranking one at frozen bits. Hence,

LPR_l^{(i)} ≥ LPR_l^{(i−1)}.

Thus, while u_i is a frozen bit and λ_l^{(i)} is close to zero, LPR_l^{(i)} shows a rising trend. The above analysis illustrates that if the information bits u_i, i ∈ 𝒜 are reliable enough and the frozen bits u_i, i ∈ 𝒜^c are unreliable enough, the LPR values behave as described in Definition 2. Usually, for code rate R ≤ 0.5 the information bits are reliable enough, whereas for R ≥ 0.5 the frozen bits are unreliable enough. As a consequence, when the code rate R = 0.5, this property is easy to observe, as shown in Figure 3. In order to eliminate unnecessary decoding paths, we investigated the variation trend of the correct path's location in CA-SCL decoding. A numerical result of the correct path ranking for the polar code (512, 256) with list size 16 is shown in Figure 4 as an example. Frozen bits and information bits are marked by the red line and the blue line, respectively.
According to Figure 4, we can clearly observe that the survival ranking of the correct path declines when it meets information bits and rises when it meets frozen bits, which shows the same property as the variation trend of the LPR values. Note that some frozen bits with small mean LLR values (e.g. 6.3288) are relatively unreliable compared with the other frozen bits because of insufficient polarization; they may act like information bits, leading to a decline in the ranking of the correct path.
Based on the above analysis, we conjecture that the LPR value LPR_l^{(i)} reflects the relative position of the correct path within {l, l + 1, …, L}, ordered by increasing probability. From this perspective, we can judge whether the lth path is the correct one with the metric LPR_l^{(i)}, 1 ≤ l ≤ L: if LPR_l^{(i)} is large, the relative position of the correct path is high, so path l is unlikely to be the correct one and can be deleted during the decoding process. Therefore, we use a threshold to judge whether path l should be eliminated, and it is defined in Definition 3.

Definition 3.
In order to reduce the computational complexity of the CA-SCL algorithm with no performance loss, the lth path with

LPR_l^{(i)} > (LPR_1^{(i)})_min

should be eliminated. However, we cannot calculate (LPR_1^{(i)})_min accurately in advance. Hence, we replace it by LPR_min, a running estimate of the minimum LPR_1^{(i)} value, which is dynamically updated during the decoding process.
As shown in Figure 5, we take an example of our proposed pruning process. Since LPR_min characterizes the survival probability of the correct path in the whole decoding process, if the distance between path l and path L at u_i is larger than this decisive LPR, we have reason to believe that the lth path cannot be the correct one. Therefore, for the lth path at bit u_i, if

LPR_l^{(i)} > θ,

the lth path should be deleted; otherwise, it should be kept. In practice, LPR_min changes with different SNRs, block lengths and list sizes. To determine it dynamically for CA-SCL decoding, a variable θ is adopted to record the running minimum in the whole decoding process for each individual block. It is first initialized to the γ_split we mentioned in Subsection 3.2. After processing the path eliminations at u_i, the decoder updates the threshold with the rule

θ ← min(θ, max_{l} LPR_l^{(i)}).

This is because, if θ were larger than the maximum of LPR_l^{(i)} at bit u_i, the worst-case ranking of the correct path after the path competition would already be better than the ranking corresponding to the deletion threshold before u_i, which means that the contraction would be insufficient in the following decoding process without narrowing θ. Based on the above analysis, we can dynamically update the deletion threshold for different polar code constructions and transmission conditions, giving our contraction mechanism better robustness.

Algorithm 1. IPSS-SCL decoding algorithm.
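One contraction step, as described above, can be sketched as follows. This is a simplification under our reading of the mechanism: the LPR is computed against the most probable candidate, paths are deleted against θ, and θ is then tightened to the largest surviving LPR; `contract` is an illustrative name.

```python
import math

def contract(paths, theta):
    """One contraction step. paths: list of (path_id, prob).
    Delete paths whose LPR exceeds theta, then tighten theta."""
    p_best = max(p for _, p in paths)
    lprs = [math.log(p_best / p) for _, p in paths]
    survivors = [pp for pp, lpr in zip(paths, lprs) if lpr <= theta]
    kept_lprs = [lpr for lpr in lprs if lpr <= theta]
    theta = min(theta, max(kept_lprs))  # narrow the threshold dynamically
    return survivors, theta
```

In the toy run below, a path whose probability is 60 times smaller than the leader's is pruned, and θ shrinks from its initial value to the gap between the two survivors.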
Moreover, to further reduce the average list size, it has been proven in [9] that performing the SC algorithm alone after a certain bit u_sc keeps the same performance as the SC-List algorithm, because of channel polarization. We adopt this improvement in our decoder together with early CRC checking, and simulation results show that this property also holds for the CA-SCL algorithm. To determine the position sc, representing the beginning bit of SC-only decoding, we consider the last bit in the critical set [13]. Because the bits in the critical set have relatively high error probabilities, after this bit the sub-channels tend to be noiseless and SC decoding can achieve the performance of CA-SCL decoding. Meanwhile, considering that the LPR values decline at information bits, the minimum LPR value may occur in those noiseless sub-channels, so we leave a bit position to record the minimum LPR: the position sc is set to the bit immediately following the last critical-set bit. After u_sc, only the path that passes the CRC check continues to decode using the SC algorithm, and the computational complexity decreases further according to our proposed path contraction mechanism.
In summary, the selective path splitting strategy and the path contraction mechanism we proposed are combined into an improved path splitting strategy on SC-List (IPSS-SCL) decoding algorithm. The details are described in Algorithm 1.

SIMULATION RESULTS AND DISCUSSION
In this section, we evaluate the BLER performance and computational complexity of the IPSS-SCL decoder. Since polar codes have been selected as the short-code standard for the 5G control channel, we focus on polar codes with shorter code lengths. Therefore, we consider polar codes (512, 256) and (256, 128) transmitted using BPSK modulation over the AWGN channel. List sizes are L = 4 and L = 16 unless otherwise noted. Polar codes are constructed using the GA [14]. Meanwhile, the information bits are concatenated with a 16-bit CRC with generator polynomial g(D) = D^{16} + D^{15} + D^2 + 1; the coding rate of the polar codes is fixed to 0.5, while the effective information rate is (K − 16)∕N. The margin f = 3 is used in this work. The number of simulated blocks is 10^6.

BLER performance comparisons
In this section, the BLER performance of several decoding algorithms is compared under the same effective code rate, including the proposed IPSS-SCL decoder, the conventional CA-SCL decoder and the existing path splitting selecting (PSS-SCL) decoder [10]. In Figure 6, we compare the performance of polar codes with different code lengths at list size L = 16. We can observe that the proposed decoder suffers no more than 0.05 dB performance loss compared with the CA-SCL decoder for BLERs ranging from 10^{−3} to 10^{−4}. Moreover, as the code length grows, the performance loss becomes more negligible. This is because the main source of performance loss in our proposed decoder is the path deletion operation, which uses the estimated location of the correct path. As discussed in Section 3, our approximation based on LPR values requires a large enough LLR value for every candidate path at the information bits, which means that smaller LLR values significantly degrade the accuracy of the location estimate. Thus, as the code length grows, larger LLR values are produced in the CA-SCL decoder due to the higher degree of channel polarization, corresponding to a more accurate approximation of the correct path's location. In a similar way, a higher SNR means higher LLR values, resulting in less performance degradation. In addition, compared with the PSS-SCL decoder [10], our proposed decoder provides better error correction performance as the transmission conditions improve, because the location estimation of the correct path becomes more reliable even though a large number of paths are deleted. Figures 6 and 7 indicate that the performance loss of our proposed IPSS-SCL decoder grows with the list size compared to the conventional CA-SCL decoder; at a low list size, our decoder has almost no performance loss.
This is because, if the list size is large enough (such as L = 16), our proposed decoder cannot fill the list due to the path contraction mechanism. A numerical result verifying this conclusion will be shown in Section 4.2. Therefore, the list size has less effect on our algorithm, especially in the high range, which widens the performance gap, since a larger list size can significantly enhance the error correction ability of the CA-SCL decoder. At the same time, our proposed algorithm provides competitive error correction performance in the middle-to-high SNR range with different list sizes compared to the PSS-SCL decoder [10].

Average computational complexity
The main purpose of our proposed decoder is to reduce the computational complexity without significant performance loss. To analyse the computational complexity, we use the running time in simulation to represent the complexity; the running time of the conventional CA-SCL decoder is O(LN log N) [2]. The running time of the additional operations introduced on top of the probability-based SC-List decoder [2] is shown in Table 2, where E_L and L^{(i)} represent the weighted average list size and the actual list size at bit u_i, respectively. To obtain the splitting threshold γ_split, we first calculate [E(L_N^{(i)})]_min by the GA; the complexity of calculating the expectations via the GA method is O(N) [17]. Then, to determine whether the paths need to be extended at an information bit u_i, the comparison between the LLR values and γ_split costs O(L^{(i)}) per bit. After the splitting operation, the LPR values are calculated for the path contraction mechanism using the LLR values, which takes L^{(i)} − 1 time steps per bit. Then, path deletion is performed, which results in another O(L^{(i)}) comparison between the LPR of each candidate and the threshold θ. Finally, the threshold θ has to be updated after the path competition according to Section 3.3; obtaining the maximum LPR_l^{(i)} requires an O(L^{(i)}) search.
For the recursive data updates, it has been proven in [2] that the total running time of the updating process for an entire block with a constant list size L is O(LN log N), where m = log N is the number of layers. However, when the list size varies during the decoding process, the total time becomes proportional to the weighted sum Σ_i n^{(i)} L^{(i)}, where the weight n^{(i)} of bit u_i counts the number of layers updated at that bit; it is initialized to m for the first bit and is otherwise determined by the terminal condition that j ≤ m − 1 as well as ⌊(i − 1)∕2^{j−1}⌋ mod 2 = 0.
Thus, the overall computational complexity of the IPSS-SCL decoder can be expressed as O(E_L N log N) according to the analysis above. Since the average list size E_L directly reflects the complexity of our algorithm, we investigate the evolution of the list size for the polar code (512, 256); the result is shown in Figure 8. The PSS-SCL algorithm [10] splits the paths into two threads when the maximum LLR value of the paths is lower than a constant threshold, so the early growth of the number of alive paths slows down. For the IPSS-SCL decoder, in addition to slowing down the path growth rate, a reasonable pruning operation is also applied; therefore, the list size is limited to a low range for the entire block and the complexity is reduced significantly. However, we can observe in Figure 8 that the largest list size of an L = 16 decoder is no more than 6, which means that the performance of our decoder may not increase when the list size changes from L = 8 to L = 16. This results in the widening performance gap between our proposed decoder and the conventional CA-SCL decoder at a large list size, as mentioned in Section 4.1. Figure 9 compares the weighted average list size of the different algorithms against E_b∕N_0 for different polar codes. We take the frozen bits into account, so the actual list size of the conventional CA-SCL decoder is smaller than the designed value due to the unfilled list at the beginning frozen bits. Note that our proposed algorithm achieves a desirable decline in average list size compared to the CA-SCL decoder and the PSS-SCL decoder [10]: more than a 75% reduction in complexity compared to the CA-SCL algorithm with list size L = 16.
Moreover, as the list size grows, the complexity of the IPSS-SCL decoder grows much more slowly than that of the CA-SCL decoder, which means that the complexity reduction is more significant at higher list sizes. This is because most of the additional paths are redundant in SC-List-based decoders, especially at a high list size; the IPSS-SCL algorithm removes these invalid decoding paths by eliminating the candidates that have a high probability of being erroneously decoded. Figures 10 and 11 show the numerical results of the normalized running time with different list sizes. Note that compared to the CA-SCL decoder and the PSS-SCL decoder, our proposed decoder can reduce the complexity by about 25% at the low list size L = 4. Moreover, as the list size grows, our decoder saves even more decoding complexity, with an over 70% reduction compared to the CA-SCL decoder at list size L = 16, due to the behaviour of the weighted average list size discussed above.
In summary, our proposed decoder achieves a desirable complexity reduction compared to the existing path splitting CA-SCL decoder, especially at high list sizes, without a significant performance loss. This effect is more obvious when the IPSS is applied to a decoder with a shorter code length or a larger code rate, because if the code length decreases or the code rate becomes larger, the threshold will be lower and the computational complexity will be further reduced while the performance is maintained. It is also useful for polar codes with a larger code length due to the early CRC check.

CONCLUSION AND FUTURE WORK
The properties of the CA-SCL algorithm were analysed in this work. To alleviate the complexity problem, we proposed the IPSS-SCL algorithm for polar codes. Based on the path splitting selecting strategy, the proposed decoder prevents the list size from growing too fast. Moreover, the path contraction mechanism deletes the paths that appear to be erroneously decoded, in order to reduce the list size further. Simulation results revealed that the proposed algorithm performs better than the existing critical set-based path splitting algorithm, not only in reducing complexity but also in error correction performance in the medium-to-high SNR region. The reason is that the proposed decoding algorithm keeps the correct path surviving longer during decoding. At the same time, the polarization characteristics of polar codes were taken into consideration; with their aid, the split paths can be shrunk to a single one with almost no error correction performance loss. Future work is to apply this path splitting strategy to the simplified SC algorithm, which reduces the complexity as well as the decoding latency of the original SC algorithm. Several works have proposed SC-List decoding based on the simplified SC algorithm, which reduces SC-List decoding latency. To further reduce the complexity of the SC-List algorithm, the proposed path splitting strategy may be used while maintaining the performance.