An iterative algorithm for discrete Lyapunov matrix equations

Funding information: National Natural Science Foundation of China, Grant/Award Numbers: 61822305, 61690210, 61690212, 61903103; Shenzhen Municipal Basic Research Project for Discipline Layout, Grant/Award Numbers: JCYJ20170811160715620, JCYJ20180507183437860; Fundamental Research Funds for the Central Universities, Grant/Award Number: HIT.BRETIV.201907; Guangdong Natural Science Foundation, Grant/Award Number: 2020A151501109

Abstract

In this paper, an iterative algorithm is established to solve discrete Lyapunov matrix equations. In this algorithm, a tuning parameter is introduced such that the iterative solution can be updated by using a combination of the information in the last step and the previous step. Some conditions for the convergence of the proposed algorithm are given. In addition, an approach is also developed to choose the optimal tuning parameter such that the algorithm achieves its fastest convergence rate. A numerical example is employed to illustrate the effectiveness of the proposed algorithm.


INTRODUCTION
Various types of matrix equations such as Lyapunov, Riccati, and Sylvester equations play an important role in the analysis and design of linear systems [1][2][3][4]. Among these equations, the discrete Lyapunov matrix equation is particularly important. Discrete Lyapunov matrix equations appear in the analysis and design of discrete-time linear systems. It is well known that this matrix equation can be used to check the stability of discrete-time linear systems. This kind of matrix equation is also closely related to the H₂ norm of discrete-time linear systems [5]. In addition, discrete Lyapunov matrix equations can also be utilized to characterize the covariance matrix of the state for a class of stochastic discrete-time systems [6]. In ref. [7], a class of parametric discrete Lyapunov matrix equations was used to design low-gain state feedback controllers for constrained discrete-time systems. Due to the importance of discrete Lyapunov matrix equations, they have received much attention from a considerable number of researchers. In refs. [8] and [9], matrix bounds were presented for the solution of discrete Lyapunov matrix equations. Based on these matrix bounds, eigenvalue bounds can be determined. In ref. [10], upper bounds were given for the solution of discrete Lyapunov matrix equations with stable system matrices. On the explicit solution of discrete Lyapunov matrix equations, some results have also been reported in the literature. In ref. [11], two explicit solutions of a discrete Lyapunov matrix equation were given by an integral and an infinite series, respectively. In ref. [12], the solution was expressed in terms of controllability and observability matrices by using the so-called Kronecker maps as tools.
Many numerical algorithms have been presented to solve discrete Lyapunov matrix equations. In ref. [6], an algorithm was developed by the reduction of the system matrix into a lower Hessenberg form with QR factorization as a tool. In ref. [13], the real Schur factorization was used to establish an algorithm for solving discrete Lyapunov matrix equations. A new updating formula was proposed in ref. [14] to construct the Cholesky factors in Hammarling's algorithm of ref. [13]. A common feature of these numerical algorithms is that some matrix transformations need to be carried out.
Iterative methods can find the solutions of matrix equations, and are widely used in signal processing and system identification. In ref. [15], an iterative algorithm was proposed to solve the matrix equations X = AXB + C and AX + XB = C by applying the hierarchical identification principle and basic gradient-based iterative algorithms for simple matrix equations. By generalizing these methods, a gradient-based iterative algorithm was also presented to solve general Sylvester matrix equations. In ref. [16], a large family of iterative methods was presented for simple matrix-vector equations by extending the well-known Jacobi and Gauss–Seidel iterations. Then, iterative algorithms were proposed to solve coupled matrix equations by combining these iterative methods with the hierarchical identification principle. In ref. [17], least-squares iterative algorithms were first constructed for simple bilateral matrix equations, and then least-squares based iterative algorithms were developed for general linear matrix equations by using the hierarchical identification principle. Convergence conditions were also provided in ref. [17] for these iterative algorithms. In ref. [18], iterative algorithms were also investigated for general coupled matrix equations. Different from the methods in refs. [15–17], a quadratic error index function was constructed, and the algorithm was presented based on the gradient of this error function with respect to the unknown matrices. A necessary and sufficient convergence condition was established in terms of the spectral radius of an augmented matrix related to the original coefficient matrices of the considered matrix equations. In ref. [19], the gradient-based idea was used to solve some matrix equations related to control systems. In addition, the gradient-based method was also applied to solve complex matrix equations with the conjugate of unknown matrices in refs. [20] and [21]. Recently, a modified version of the algorithm in ref. [20] was presented in ref. [22] by the introduction of a relaxation parameter.
In addition, it should be pointed out that the iterative technique has wide applications in the area of system identification. In ref. [23], a normalized iterative algorithm was developed to estimate the unknown parameters of Hammerstein systems. By the use of the decomposition-based hierarchical principle, gradient-based and least-squares-based iterative identification algorithms were presented in ref. [24] for Hammerstein systems. In ref. [25], in order to obtain unique parameter estimates of Hammerstein non-linear systems, the output of the system was expressed as a linear combination of all the system parameters by means of the key-term separation principle. With such a representation, a gradient-based iterative identification algorithm was proposed by replacing the unknown variables in the information vectors with their estimates. Iterative identification algorithms were also constructed for other systems, such as CARARMA systems [26] and output-error systems [27].
Since iterative algorithms are powerful, it is natural to solve discrete Lyapunov matrix equations by using iterative algorithms to approximate the exact solution. A typical example of this kind of method is the celebrated Smith iterative algorithm [28]. A new Smith-type iterative algorithm was developed in ref. [29] to solve the matrix equation X = AXB + C. In addition, a sufficient condition was also given for the algorithm to be convergent. By specialising the results in refs. [29] and [15], some algorithms can also be obtained to solve discrete Lyapunov matrix equations. In ref. [30], a so-called parametric Smith iterative algorithm was developed to solve the discrete Lyapunov matrix equation. In this algorithm, there is a tuning parameter, and the algorithm becomes the Smith algorithm if the tuning parameter is zero. Recently, a new iterative algorithm was also developed in ref. [31] to solve a class of periodic discrete Lyapunov matrix equations. There are two main features of this algorithm. One is that a tuning parameter is introduced based on an identity. The other is that the information in the current iteration step is utilized to update the iterative sequence.
In this paper, we aim to give a new iterative algorithm for solving the discrete Lyapunov matrix equation. It can be seen that the iterative solution is updated by using only the information in the last step when the algorithms in refs. [15, 18, 28] are utilized. In addition, the idea of "using the information in the current iteration step" in ref. [31] cannot be applied to the discrete Lyapunov matrix equation since it is not in a coupled form. However, the information in the previous iteration step is available before the iterative solution is updated. Therefore, it is reasonable to use a combination of the information in the last step and the previous step to update the iterative solution in the current step. Based on this idea, we propose a new iterative algorithm for solving the discrete Lyapunov matrix equation by introducing a tuning parameter. Some conditions are established for the proposed algorithm to be convergent. In addition, a method is also given to choose the optimal tuning parameter such that the algorithm achieves its fastest convergence rate.
Throughout this paper, we use ‖⋅‖₂, ‖⋅‖_F and ρ(⋅) to denote the spectral norm, the Frobenius norm and the spectral radius of a matrix, respectively. For two integers a ≤ b, the notation [a, b] is defined as [a, b] = {a, a + 1, …, b}.

PREVIOUS RESULTS AND THE PROPOSED ALGORITHM
In this paper, we consider the following discrete Lyapunov matrix equation

X = AXAᵀ + Q, (1)

where A ∈ ℝ^(n×n) and Q > 0 are known matrices, and X ∈ ℝ^(n×n) is the matrix to be determined. It is well known that this matrix equation can be used to check the stability of a discrete-time linear system. This is the following standard result.

Lemma 1. The matrix A is Schur stable if and only if, for any positive definite matrix Q, the discrete Lyapunov matrix equation (1) has a unique positive definite solution X.

For the discrete Lyapunov matrix equation (1), there is the celebrated Smith iterative algorithm:

X(m + 1) = AX(m)Aᵀ + Q. (2)

In this algorithm, the iterative solution is updated by using only the estimate obtained in the last step. In fact, the estimate in the previous step is also available, and it can likewise be used to update the iterative sequence. With this idea, we give the following iterative algorithm, with tuning parameter γ, to solve the discrete Lyapunov matrix equation (1):

X(m + 1) = A(γX(m) + (1 − γ)X(m − 1))Aᵀ + Q. (3)

If γ = 1, then the proposed algorithm (3) reduces to the Smith iterative algorithm (2). In ref. [31], a new iterative algorithm was also developed to solve a class of periodic discrete Lyapunov matrix equations by using the information in the current iteration step in view of the coupling structure. The matrix equation (1) is not in a coupled form, and thus the idea in ref. [31] is not applicable. By specialising the result in ref. [15], the following iterative algorithm can be obtained to solve the discrete Lyapunov matrix equation.
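To make the update rules concrete, here is a minimal NumPy sketch of the Smith iteration (2) and of the proposed two-step iteration; the two-step update is written as X(m+1) = A(γX(m) + (1 − γ)X(m−1))Aᵀ + Q, our reading of (3), which reduces to the Smith iteration at γ = 1. The small matrix A below is only an illustrative example.

```python
import numpy as np

def smith(A, Q, iters=100):
    """Smith iteration (2): X(m+1) = A X(m) A^T + Q, starting from X(0) = 0."""
    X = np.zeros_like(Q, dtype=float)
    for _ in range(iters):
        X = A @ X @ A.T + Q
    return X

def two_step(A, Q, gamma, iters=100):
    """Two-step iteration with tuning parameter gamma (our reading of (3)):
    X(m+1) = A (gamma*X(m) + (1-gamma)*X(m-1)) A^T + Q,
    with zero initial conditions X(0) = X(-1) = 0."""
    X_prev = np.zeros_like(Q, dtype=float)   # X(m-1)
    X_curr = np.zeros_like(Q, dtype=float)   # X(m)
    for _ in range(iters):
        X_next = A @ (gamma * X_curr + (1.0 - gamma) * X_prev) @ A.T + Q
        X_prev, X_curr = X_curr, X_next
    return X_curr

# fixed points of both iterations satisfy equation (1): X = A X A^T + Q
A = np.array([[0.5, 0.1], [0.0, 0.3]])   # Schur stable example matrix
Q = np.eye(2)
X = two_step(A, Q, gamma=1.0)
residual = np.linalg.norm(X - A @ X @ A.T - Q)
```

With gamma = 1 the two-step iteration produces exactly the Smith iterates, so the residual of equation (1) vanishes as the iteration converges.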

Lemma 2. Suppose the discrete Lyapunov matrix equation (1) has a unique solution. Then the sequence {X(m)} generated by the following gradient-based iterative algorithm, obtained by specialising the method of ref. [15],

X(m + 1) = X(m) + μAᵀ(Q + AX(m)Aᵀ − X(m))A, (4)

converges to the unique solution if the step size μ is sufficiently small; an explicit admissible range is given in ref. [15].
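A sketch of the gradient-based iteration of Lemma 2, assuming the specialised update X(m+1) = X(m) + μAᵀ(Q + AX(m)Aᵀ − X(m))A obtained from ref. [15]; the step size μ = 1.0 below is chosen only for this small test case and is not a general recommendation.

```python
import numpy as np

def gradient_iteration(A, Q, mu, iters=2000):
    """Gradient-based iteration, read as the specialisation of ref. [15] to
    X = A X A^T + Q:
        X(m+1) = X(m) + mu * A^T (Q + A X(m) A^T - X(m)) A.
    The admissible step-size range is the one of Lemma 2 / ref. [15]."""
    X = np.zeros_like(Q, dtype=float)
    for _ in range(iters):
        R = Q + A @ X @ A.T - X        # residual of equation (1)
        X = X + mu * (A.T @ R @ A)
    return X

# quick check on a small symmetric Schur-stable A (symmetry is used here only
# so that the step-size analysis is transparent)
A = np.array([[0.6, 0.2], [0.2, 0.4]])
Q = np.eye(2)
X = gradient_iteration(A, Q, mu=1.0)
res = np.linalg.norm(Q + A @ X @ A.T - X)
```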

MONOTONICITY AND BOUNDEDNESS OF THE PROPOSED ALGORITHM
In this section, we investigate the monotonicity and boundedness of the proposed iterative algorithm (3). First, the monotonicity of the sequence generated by the algorithm (3) is considered.

Lemma 3. It is assumed that the matrix A is Schur stable. For given positive definite matrix Q, the sequence {X(m)} generated by the algorithm (3) with γ ∈ (0, 1] under zero initial conditions X(0) = 0 and X(−1) = 0 is monotonically non-decreasing, that is, for any integer m ≥ 0, there holds

X(m + 1) ≥ X(m). (5)
Proof. We prove the result of this lemma by the principle of mathematical induction. First, due to the zero initial conditions, it is obvious that X(1) = Q ≥ 0 = X(0). Thus, (5) holds for m = 0. Further, in view of γ ∈ (0, 1], it can be obtained that

X(2) = A(γX(1) + (1 − γ)X(0))Aᵀ + Q = γAQAᵀ + Q ≥ Q = X(1).

This implies that (5) holds for m = 1. Now, it is assumed that (5) holds for m = k − 1 and m = k, k ≥ 1, that is,

X(k) ≥ X(k − 1), X(k + 1) ≥ X(k). (6)

With the induction assumption (6), it follows from the expression of (3) with m = k + 1 and m = k, together with γ ∈ (0, 1], that

X(k + 2) − X(k + 1) = A(γ(X(k + 1) − X(k)) + (1 − γ)(X(k) − X(k − 1)))Aᵀ ≥ 0.

This implies that (5) holds for m = k + 1.
With the previous derivation, it can be concluded that (5) holds for any m ≥ 0 by the principle of mathematical induction. The proof is thus completed. □ Next, the boundedness of the sequence generated by the algorithm (3) is investigated. The result is given in the following lemma.

Lemma 4.
It is assumed that the matrix A is Schur stable. For given positive definite matrix Q, the sequence {X(m)} generated by the algorithm (3) with γ ∈ (0, 1] under zero initial conditions X(0) = 0 and X(−1) = 0 is upper bounded by the solution X of the discrete Lyapunov matrix equation (1). That is, for any integer m ≥ 0, there holds

X(m) ≤ X. (8)

Proof. Since the matrix A is Schur stable, the discrete Lyapunov matrix equation (1) has a unique positive definite solution X. Due to the zero initial conditions, it is known from the proof of the previous lemma that X(0) = 0 ≤ X and X(1) = Q. By combining this with (1), we have

X(1) = Q ≤ AXAᵀ + Q = X.

The previous facts imply that (8) holds for m = 0 and m = 1. Now, it is assumed that (8) holds for m = k − 1 and m = k, k ≥ 1, that is,

X(k − 1) ≤ X, X(k) ≤ X. (9)

With the induction assumption (9), it follows from the expressions of (1) and (3) with m = k and γ ∈ (0, 1] that

X(k + 1) = A(γX(k) + (1 − γ)X(k − 1))Aᵀ + Q ≤ AXAᵀ + Q = X.

By this relation and the previous statement, it can be concluded that the inequality (8) holds for any m ≥ 0 by the mathematical induction principle. □

Based on the obtained properties of monotonicity and boundedness, a convergence result for the proposed algorithm (3) can be given. This is the following theorem.
Theorem 1. It is assumed that the matrix A is Schur stable. For given positive definite matrix Q, the sequence {X(m)} generated by the algorithm (3) with γ ∈ (0, 1] under zero initial conditions X(0) = 0 and X(−1) = 0 converges to the unique solution of the discrete Lyapunov matrix equation (1).

Proof. By Lemmas 3 and 4, the sequence {X(m)} is monotonically non-decreasing and bounded above by the solution X, and hence it has a limit X∞. Taking limits on both sides of (3) gives

X∞ = A(γX∞ + (1 − γ)X∞)Aᵀ + Q,

which can be equivalently written as

X∞ = AX∞Aᵀ + Q.

This implies that X∞ is a solution of the discrete Lyapunov matrix equation (1). In addition, the Lyapunov matrix equation (1) has a unique solution since the matrix A is Schur stable. Thus, X∞ is the unique solution of the matrix equation (1). The proof is thus completed. □

In the above Theorem 1, it has been shown that the proposed iterative algorithm (3) converges to the unique solution of the considered discrete Lyapunov matrix equation under zero initial conditions if the tuning parameter γ is within the interval (0, 1]. In fact, if we focus only on the convergence of the proposed algorithm (3), it is not necessary to require these two conditions. That is to say, the algorithm (3) may be convergent without zero initial conditions, and even if the parameter γ is beyond the interval (0, 1]. Nevertheless, the result in this section is still of interest due to the monotonicity and boundedness of the matrix sequences generated by the algorithm (3).
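The monotonicity and boundedness properties above can be checked numerically. The sketch below assumes the two-step update X(m+1) = A(γX(m) + (1 − γ)X(m−1))Aᵀ + Q for algorithm (3), compares each iterate with its predecessor and with the exact solution in the positive semidefinite ordering, and uses a hypothetical Schur stable test matrix.

```python
import numpy as np

# Empirical check of Lemmas 3-4 and Theorem 1: under zero initial conditions
# and gamma in (0, 1], the iterates are monotonically non-decreasing in the
# positive semidefinite ordering and bounded above by the exact solution X.
A = np.array([[0.6, 0.2], [0.1, 0.4]])   # Schur stable (eigenvalues ~0.67, 0.33)
Q = np.eye(2)
gamma = 0.7

# exact solution via vectorisation: (I - A kron A) vec(X) = vec(Q)
n = A.shape[0]
X_exact = np.linalg.solve(np.eye(n * n) - np.kron(A, A), Q.flatten()).reshape(n, n)

Xp = np.zeros_like(Q)   # X(m-1)
Xc = np.zeros_like(Q)   # X(m)
mono = bounded = True
for _ in range(60):
    Xn = A @ (gamma * Xc + (1 - gamma) * Xp) @ A.T + Q
    mono = mono and bool(np.all(np.linalg.eigvalsh(Xn - Xc) >= -1e-12))
    bounded = bounded and bool(np.all(np.linalg.eigvalsh(X_exact - Xn) >= -1e-12))
    Xp, Xc = Xc, Xn
```

Both flags stay true, and the final iterate agrees with the exact solution obtained from the linear system.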

CONVERGENCE OF THE PROPOSED ALGORITHM
In this section, we further investigate the convergence property of the algorithm (3) without the requirement of Schur stability and zero initial conditions. Moreover, the tuning parameter γ is not required to be confined to the interval (0, 1]. A necessary and sufficient condition is proposed in the following theorem to guarantee the convergence of the algorithm (3) under arbitrary initial conditions.

Theorem 2. Suppose the discrete Lyapunov matrix equation (1) has a unique solution, consider the iterative algorithm (3), and define the matrix G_γ as follows:

G_γ = [ γ(A ⊗ A)   (1 − γ)(A ⊗ A)
        I            0 ].  (13)

The sequence {X(m)} generated by the algorithm (3) under arbitrary initial conditions converges to the solution X of the discrete Lyapunov matrix equation (1) if and only if the parameter γ is chosen to satisfy ρ(G_γ) < 1.
Proof. Taking the vectorisation of both sides of the algorithm (3) gives

vec(X(m + 1)) = γ(A ⊗ A)vec(X(m)) + (1 − γ)(A ⊗ A)vec(X(m − 1)) + vec(Q). (14)

Denote ξ(m) = vec(X(m)) and q = vec(Q). Then the relation (14) can be equivalently expressed as

[ ξ(m + 1) ; ξ(m) ] = G_γ [ ξ(m) ; ξ(m − 1) ] + [ q ; 0 ], (15)

where the stacked vectors are written columnwise. According to the stability theory of discrete-time linear systems, the iteration (15) is convergent for any initial values if and only if the parameter γ satisfies ρ(G_γ) < 1. In this case, the limit ξ∞ of the sequence {ξ(m)} satisfies ξ∞ = (A ⊗ A)ξ∞ + q, from which we have ξ∞ = (I − A ⊗ A)⁻¹q. Further, let χ = vec(X); then the discrete Lyapunov matrix equation (1) can be equivalently expressed as (I − A ⊗ A)χ = q, and thus its unique solution can be given by χ = (I − A ⊗ A)⁻¹q. Hence lim_(m→∞) ξ(m) = χ. With this, the conclusion of this theorem is confirmed. □

In Theorem 2, a necessary and sufficient condition is presented for the parameter γ to guarantee the convergence of the proposed algorithm (3). This condition is characterised by the spectral radius of the 2n² × 2n² matrix G_γ defined in (13). In order to avoid the high dimensionality, another condition is provided in the following theorem.
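The spectral-radius test of Theorem 2 can be carried out directly, at the cost of forming the 2n² × 2n² matrix. The block layout of G_γ below is our reading of the companion matrix (13) for the vectorised two-step recursion.

```python
import numpy as np

def G_matrix(A, gamma):
    """Companion matrix of the vectorised two-step recursion (our reading
    of (13)): [vec X(m+1); vec X(m)] = G [vec X(m); vec X(m-1)] + [vec Q; 0]."""
    K = np.kron(A, A)
    n2 = K.shape[0]
    top = np.hstack([gamma * K, (1.0 - gamma) * K])
    bottom = np.hstack([np.eye(n2), np.zeros((n2, n2))])
    return np.vstack([top, bottom])

def converges(A, gamma):
    """Theorem 2: convergence for arbitrary initial values iff rho(G) < 1."""
    return bool(max(abs(np.linalg.eigvals(G_matrix(A, gamma)))) < 1.0)

A = np.diag([0.5, 0.3])   # small Schur stable test matrix
```

For this test matrix, the Smith choice gamma = 1 passes the test, while a far too large gamma fails it.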

Theorem 3. For the discrete Lyapunov matrix equation (1), the sequence {X(m)} generated by the algorithm (3) under arbitrary initial conditions converges to its solution X if and only if the parameter γ is chosen such that all the roots of the following polynomial equation with respect to s lie inside the unit circle of the complex plane:

det(s²I − γs(A ⊗ A) − (1 − γ)(A ⊗ A)) = 0. (16)

Proof. Let s be an arbitrary eigenvalue of the matrix G_γ defined in (13). Then we have

det(sI − G_γ) = 0. (17)

By simple calculation, it can be derived that

det(sI − G_γ) = det(s²I − γs(A ⊗ A) − (1 − γ)(A ⊗ A)).

By this relation, it is known that (17) holds if and only if (16) is true. Thus, the conclusion of this theorem is immediately obtained from Theorem 2. □

Given a square matrix A, let λ_i, i ∈ [1, n], be its n eigenvalues. Then, by the properties of Kronecker products, all the eigenvalues of the matrix A ⊗ A are λ_iλ_j, i, j ∈ [1, n]. With this result, by using the Jordan form of the matrix A ⊗ A, the following conclusion can be easily obtained from Theorem 3.

Theorem 4. For the discrete Lyapunov matrix equation (1), let λ_i, i ∈ [1, n], be the n eigenvalues of the matrix A. If the parameter γ is chosen such that all the roots of the following n² polynomial equations with respect to s lie inside the unit circle of the complex plane:

s² − γλ_iλ_js − (1 − γ)λ_iλ_j = 0, i, j ∈ [1, n], (18)

then the sequence {X(m)} generated by the algorithm (3) under arbitrary initial conditions converges to the solution X of the discrete Lyapunov matrix equation (1).
In comparison with the result of Theorem 2, Theorem 4 is more convenient for finding a parameter γ such that the algorithm (3) is convergent, since only the polynomial equations of degree 2 in (18) need to be solved.
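The degree-2 test of Theorem 4 only needs the n eigenvalues of A rather than the full 2n² × 2n² companion matrix; the quadratic below is our reading of (18).

```python
import numpy as np

def converges_by_roots(A, gamma):
    """Theorem 4 check: for every eigenvalue product l = lam_i * lam_j of A,
    both roots of s^2 - gamma*l*s - (1-gamma)*l = 0 (our reading of (18))
    must lie inside the unit circle."""
    lams = np.linalg.eigvals(A)
    for li in lams:
        for lj in lams:
            l = li * lj
            roots = np.roots([1.0, -gamma * l, -(1.0 - gamma) * l])
            if max(abs(roots)) >= 1.0:
                return False
    return True

A = np.diag([0.5, 0.3])   # same small Schur stable test matrix as before
```

The verdicts agree with the full spectral-radius test of Theorem 2 while solving only n² quadratics.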
For the case where all the eigenvalues of the matrix A are real, explicit expressions can be given for the range of the tuning parameter γ that guarantees the convergence of the proposed algorithm (3). In the sequel, we investigate this problem. To this end, we need the following result, which is the Schur–Cohn (Jury) stability criterion for quadratics.

Lemma 5. Both roots of the quadratic equation s² + as + b = 0 with real coefficients a and b lie inside the unit circle of the complex plane if and only if |b| < 1 and |a| < 1 + b.

Theorem 5. Consider the discrete Lyapunov matrix equation (1), and suppose that all the eigenvalues λ_i, i ∈ [1, n], of the matrix A are real and positive, with λ₁ the largest one. Then the algorithm (3) under arbitrary initial conditions is convergent if and only if

γ ∈ ( −(1 − λ₁²)/(2λ₁²), 1 + 1/λ₁² ).

Proof. Apply Lemma 5 to each equation in (18), with a = −γλ_iλ_j and b = (γ − 1)λ_iλ_j. All roots lie inside the unit circle if and only if, for all i, j ∈ [1, n],

|(γ − 1)λ_iλ_j| < 1 and |γλ_iλ_j| < 1 + (γ − 1)λ_iλ_j.

Since 0 < λ_iλ_j ≤ λ₁² < 1, the second condition reduces to λ_iλ_j < 1 for γ ≥ 0, which always holds, and to γ > −(1 − λ_iλ_j)/(2λ_iλ_j) for γ < 0, while the first condition gives |γ − 1| < 1/λ_iλ_j. All of these constraints are binding at the largest product λ₁², and since −(1 − λ₁²)/(2λ₁²) > 1 − 1/λ₁², the binding constraints are γ > −(1 − λ₁²)/(2λ₁²) and γ < 1 + 1/λ₁². This is the conclusion. □

In the preceding theorem, we have given an explicit expression for the range of the tuning parameter γ that guarantees convergence when all the eigenvalues of A are positive. Next, we consider the case where all the eigenvalues of A are real and at least one of them is negative.

Theorem 6. Consider the discrete Lyapunov matrix equation (1), and suppose that all the eigenvalues of the matrix A are real, with at least one negative.

The algorithm (3) converges to the solution of the discrete Lyapunov matrix equation (1) under arbitrary initial conditions if and only if the parameter γ satisfies the condition (27).
Proof. Similarly to Theorem 5, the proof of this theorem is based on applying Lemma 5 to the polynomial equations in (18). For pairs with λ_iλ_j > 0, the analysis in the proof of Theorem 5 applies and yields the constraints γ > −(1 − λ_iλ_j)/(2λ_iλ_j) and γ < 1 + 1/λ_iλ_j, which are binding at the largest positive product. For pairs with λ_iλ_j < 0, write μ = −λ_iλ_j > 0. Then b = −(γ − 1)μ, so that the condition |b| < 1 gives |γ − 1| < 1/μ, while the condition |a| < 1 + b gives γ < (1 + μ)/(2μ) for γ > 0 and holds automatically for γ ≤ 0; these constraints are binding at the largest |λ_iλ_j| among the negative products. Intersecting all of the above constraints over i, j ∈ [1, n] yields an interval for γ, which is the condition (27). By Lemma 5, all the roots of the equations in (18) are less than one in modulus, and hence the algorithm (3) under arbitrary initial conditions is convergent, if and only if the parameter γ lies in this interval. □

THE OPTIMAL TUNING PARAMETER
In the previous sections, some convergence conditions have been established for the iterative algorithm (3). In this section, we focus on the choice of the tuning parameter γ such that the algorithm (3) achieves its fastest convergence rate. Before giving the main result, a necessary condition is first given for the convergence of the proposed algorithm (3).

Theorem 7. Suppose the discrete Lyapunov matrix equation (1) has a unique solution. If the sequence {X(m)} generated by the algorithm (3) under arbitrary initial conditions converges to the solution of the discrete Lyapunov matrix equation (1), then the parameter γ satisfies

|γ − 1| < |det A|^(−2/n).

Proof. By simple calculation, it can be derived that the matrix G_γ in (13) satisfies

det G_γ = (−1)^(n²) det((1 − γ)(A ⊗ A)) = (−1)^(n²)(1 − γ)^(n²)(det A)^(2n). (44)

In addition, if the eigenvalues of the matrix G_γ ∈ ℝ^(2n²×2n²) are s₁, s₂, …, s_(2n²), then by matrix theory there holds

|det G_γ| = |s₁||s₂|⋯|s_(2n²)|. (45)

By Theorem 2, if the proposed algorithm (3) converges to the solution of the discrete Lyapunov matrix equation (1) for any initial conditions, then |s_i| < 1 for any i ∈ [1, 2n²]. Combining this with (44) and (45) gives

|1 − γ|^(n²)|det A|^(2n) < 1,

which is equivalent to the stated condition. This gives the conclusion of this theorem. □

By the conclusions of Theorems 4 and 7, the problem of finding the optimal tuning parameter can be converted into an optimization problem over an interval. Let σ(A) = {λ_i, i ∈ [1, n]}, and define c = |det A|^(2/n). Then the optimal tuning parameter can be obtained by solving the following optimization problem:

γ_opt = arg min ρ(G_γ) subject to |γ − 1| < 1/c. (47)

For the case where all the eigenvalues of the matrix A are real, an explicit expression of the optimal tuning parameter can be given. In the sequel, we investigate this problem.
Theorem 8. Suppose the discrete Lyapunov matrix equation (1) has a unique solution, define the matrix G_γ as in (13), and let λ_i, i ∈ [1, n], be the n eigenvalues of the matrix A, with λ₁ of largest modulus. If all the products λ_iλ_j, i, j ∈ [1, n], are real and positive, then the algorithm (3) under arbitrary initial conditions has the fastest convergence rate when the parameter γ is chosen as

γ_opt = 2/(1 + √(1 − λ₁²)). (48)
In this case, the corresponding spectral radius of the matrix G_γ is given by

ρ(G_(γ_opt)) = 1 − √(1 − λ₁²).

Proof. For notational convenience, denote λ = λ_iλ_j and define

g(γ, λ) = (γλ + √(γ²λ² + 4(1 − γ)λ))/2,  h(γ, λ) = √((γ − 1)λ). (49)

By Theorem 4, the eigenvalues of the matrix G_γ are the roots of the equations in (18). Fix a pair i, j and consider the corresponding quadratic. If γ²λ − 4(γ − 1) ≥ 0, which always holds for γ ≤ 1, the two roots are real, namely

s_(1,2) = (γλ ± √(γ²λ² + 4(1 − γ)λ))/2,

and the larger one in modulus is g(γ, λ); moreover, g(γ, λ) is increasing with respect to λ and, since λ < 1, decreasing with respect to γ. If γ²λ − 4(γ − 1) < 0, which is only possible for γ > 1, the two roots are complex conjugate with common modulus h(γ, λ), which is increasing with respect to both γ and λ.

Since all the products λ are positive and bounded above by λ₁², the discriminant equation γ²λ₁² − 4(γ − 1) = 0 has the solution γ_opt = 2/(1 + √(1 − λ₁²)) in the interval (1, 2), which is (48). For γ ∈ (0, γ_opt], the dominant roots are real and ρ(G_γ) = g(γ, λ₁²), which is decreasing in γ; for γ ∈ (1, γ_opt), some pairs with small λ may already have complex roots, but their modulus does not exceed g(γ, λ₁²). For γ ∈ [γ_opt, 1 + 1/λ₁²), the dominant roots are complex and ρ(G_γ) = h(γ, λ₁²), which is increasing in γ, and h(γ, λ₁²) ≥ 1 for γ ≥ 1 + 1/λ₁², a range excluded by the necessary condition. Therefore, the minimum of ρ(G_γ) is attained at γ = γ_opt, where the two dominant roots coincide at γ_optλ₁²/2 and

ρ(G_(γ_opt)) = √((γ_opt − 1)λ₁²) = 1 − √(1 − λ₁²).

The proof of this theorem is thus completed. □

The basic idea of the proof of this theorem is to first give the expression of the spectral radius ρ(G_γ), and then obtain the optimal tuning parameter according to the monotonicity of the spectral radius ρ(G_γ) with respect to γ. This is similar to the proof of Theorem 14 in ref. [31].
Based on the conclusion of Theorem 8, we can give another proof of Theorem 5. In fact, in the proof of Theorem 8 the spectral radius ρ(G_γ) is expressed through the two functions g(γ, λ) and h(γ, λ) defined in (49). In view of their monotonicity, the algorithm (3) is convergent if and only if the dominant real roots lie inside the unit circle and h(γ, λ₁²) < 1. The first condition is binding at the root s = −1 and gives γ > −(1 − λ₁²)/(2λ₁²). The second condition gives γ < 1 + 1/λ₁². This is the conclusion of Theorem 5.
In this section, we have only given the expression of the optimal tuning parameter when all the products λ_iλ_j, i, j ∈ [1, n], are positive and real, where λ_i, i ∈ [1, n], are the eigenvalues of A. The case where at least one product λ_iλ_j is negative can also be investigated along the lines of the proof of Theorem 8.

A NUMERICAL EXAMPLE
Consider the discrete Lyapunov matrix equation (1) with a 5 × 5 Schur stable matrix A whose eigenvalues are 0.3000, 0.4700, 0.5501, 0.6999 and 0.7498, and take the positive definite matrix Q as the identity matrix I. We use the proposed algorithm (3) and the gradient-based algorithm (4) to solve this matrix equation.
Shown in Figure 1 are the convergence curves of the algorithm (3) for different tuning parameters under zero initial conditions. It should be pointed out that the algorithm (3) with γ = 1 is the Smith iterative algorithm (2). Shown in Figure 2 is the convergence curve of the algorithm (4) with μ = 0.8000, for which that algorithm has its fastest convergence rate. From Figure 1, it can be seen that the algorithm with γ = 1.2036 has a faster convergence rate than the other cases. In addition, it can be seen from Figures 1 and 2 that the proposed algorithm has better convergence performance than the existing Smith iterative algorithm if the tuning parameter is appropriately chosen.
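The comparison between the Smith iteration and the optimally tuned algorithm can be reproduced in a few lines. The example's matrix A is not given explicitly in this text, so the sketch below uses a diagonal stand-in with the reported eigenvalues (the convergence theory depends on A only through its spectrum), and assumes the two-step update form of (3).

```python
import numpy as np

# diagonal stand-in with the eigenvalues reported in the example
A = np.diag([0.3000, 0.4700, 0.5501, 0.6999, 0.7498])
Q = np.eye(5)

def run(gamma, iters):
    """Two-step iteration (3) with zero initial conditions; returns the
    Frobenius-norm residual ||X - A X A^T - Q||_F after `iters` steps."""
    Xp = np.zeros_like(Q)
    Xc = np.zeros_like(Q)
    for _ in range(iters):
        Xp, Xc = Xc, A @ (gamma * Xc + (1 - gamma) * Xp) @ A.T + Q
    return np.linalg.norm(Xc - A @ Xc @ A.T - Q)

res_smith = run(1.0, 30)      # Smith iteration (gamma = 1)
res_opt   = run(1.2036, 30)   # optimal tuning parameter from Theorem 8
```

After 30 steps the residual with the optimal parameter is several orders of magnitude below the Smith residual, mirroring the behaviour reported for Figure 1.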

CONCLUSION
In this paper, we have proposed a new iterative algorithm for solving discrete Lyapunov matrix equations. An important feature of this algorithm is the introduction of a tuning parameter so that a combination of the available information in the last step and previous step is used to update the iterative solution of the considered Lyapunov matrix equation. Some convergence conditions are given for this proposed algorithm. In addition, a necessary and sufficient condition for the convergence of the algorithm is presented in terms of the roots of a family of polynomial equations with degree 2. Moreover, an explicit expression is given for the optimal tuning parameter such that the algorithm achieves its fastest convergence rate.

DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author, Ying Zhang, upon reasonable request.

CONFLICT OF INTEREST
No potential conflict of interest was reported by the authors.