Achieving Optimal Output Consensus for Discrete-time Linear Multi-agent Systems with Disturbance Rejection

In this paper, an optimal output consensus problem is studied for discrete-time linear multi-agent systems subject to external disturbances. Each agent is assigned a local cost function known only to itself. Distributed protocols are designed to guarantee an output consensus for these high-order agents while minimizing the aggregate cost, defined as the sum of the local costs. To overcome the difficulties brought by high-order dynamics and external disturbances, we develop an embedded design and constructively present a distributed rule to solve this problem. The proposed control comprises three parts: an optimal signal generator under a directed information graph, an observer-based compensator to reject the disturbances, and a reference tracking controller for the linear agents. It is shown to solve the formulated problem under mild assumptions. A numerical example is also provided to illustrate the effectiveness of the proposed distributed control laws.


Introduction
In recent years, much effort has been devoted to the distributed coordination of multi-agent systems. As one of the most important problems, distributed optimization has drawn growing attention due to its wide applications in power systems and sensor networks [1,2]. In a typical setting of this problem, a network of interconnected nodes is associated with a group of convex functions, while each node only knows one component of these functions. The design goal is to drive all nodes to reach an optimal consensus minimizing the sum of these functions by exchanging information with each other.
There have been plenty of publications on this topic, and many significant results have been reported in the literature. For instance, the authors in [3] investigated the distributed consensus optimization problem through a novel combination of average consensus algorithms with subgradient methods. Extensions with global or local constraints on the decision variables were further studied in [4,5]. Efforts have also been made to derive distributed algorithms with fast convergence rates in [6][7][8]. In parallel with these discrete-time results, continuous-time solvers for distributed optimization were also considered under various conditions in [9][10][11].
However, it is observed that most of the above results are derived only for single-integrator agents from the viewpoint of mathematical programming. In practical applications, the decision variables might be determined by or depend upon physical plants, which cannot be described well by single integrators, e.g., a group of mobile robots achieving rendezvous [12]. In [13], a numerical example was provided to show that direct use of distributed rules for single integrators might fail to achieve the optimal goal for agents with unity relative degree. Thus, we should take these high-order dynamics into account when seeking an optimal consensus in a distributed manner.
Since gradient-based rules are basically nonlinear, achieving optimal (output) consensus might be challenging due to the coupling between the high-order dynamics of the agents and the distributed optimization requirement. To solve this problem, some interesting attempts have been made for continuous-time high-order dynamics. For example, the authors in [14,15] extended existing rules to continuous-time second-order agents by adding some integral terms. Similar ideas were used in [16] to achieve optimal consensus for high-order integrators by bounded controls. In a recent work [17], the authors proposed an embedded control method to solve this problem for linear agents in a modular way. Some nonlinear multi-agent systems were also investigated to achieve such an optimal consensus in [18,19].
However, in contrast to these papers for continuous-time high-order agents, there is still, to the best of our knowledge, no general result on achieving optimal consensus for discrete-time multi-agent systems with non-integrator dynamics.
The objective of this paper is to develop distributed rules for discrete-time high-order agents to achieve an optimal output consensus. To be specific, we assume that the agents are of general linear time-invariant dynamics and each agent is assigned with a local cost function and can exchange information through a communication topology represented by a directed graph. All agents are to be designed to achieve an output consensus and meanwhile minimize the aggregate cost as the sum of local ones. Moreover, we further consider the cases when agents are subject to external disturbances, which are inevitably encountered in practical circumstances.
Based on the aforementioned observations, the contributions of this work can be summarized in at least three aspects.
Firstly, an optimal output consensus problem is formulated and solved for high-order multi-agent systems in discrete time. In contrast to the many optimal consensus results for single integrators, the problem is extended here to multi-agent systems with general linear dynamics, which can be taken as the discrete-time counterpart of existing optimal consensus results for high-order agents in continuous time.
Secondly, we develop novel distributed rules to achieve optimal (output) consensus under weight-balanced directed graphs. Compared with similar results for undirected graphs, this problem is certainly more challenging. Moreover, the proposed algorithm is free of initialization, in contrast to many existing works for digraphs, which makes it more favorable in large-scale multi-agent systems.
Thirdly, the disturbance rejection issue is addressed in achieving optimal consensus for this discrete-time multi-agent system. The considered external disturbances are modeled by an autonomous linear system, which is general enough to cover many typical signals, e.g., step and sinusoidal sequences. An observer-based method is used to reject these harmful disturbances and facilitate our optimal consensus design.
The rest of this paper is organized as follows. We first give some preliminaries about graph notations and convex analysis in Section 2 and then formulate our problem in Section 3. The main design with proofs is presented in Section 4 with a numerical example in Section 5. Finally, some concluding remarks are given in Section 6.

Preliminaries
In this section, we first provide some preliminaries about graph theory [20] and convex analysis [21].

Graph theory
Let R^N be the N-dimensional Euclidean space and col(a_1, . . . , a_N) = [a_1^T, . . . , a_N^T]^T for column vectors a_i (i = 1, . . . , N). For a vector a (or a matrix A), |a| (or |A|) denotes its Euclidean (or spectral) norm. 1_N (or 0_N) denotes the N-dimensional all-one (or all-zero) column vector, and I_N denotes the N × N identity matrix; we may omit the subscript when it is self-evident. For a weighted digraph G with Laplacian L, let Sym(L) = (L + L^T)/2. When G is weight-balanced and strongly connected, the eigenvalues of Sym(L) are real and nonnegative; in this case, we order these eigenvalues as λ_1 = 0 < λ_2 ≤ · · · ≤ λ_N and, for any matrix R whose columns form an orthonormal basis of the subspace orthogonal to 1_N, the matrix R^T Sym(L) R is positive definite with eigenvalues λ_2, . . . , λ_N.

Convex analysis
When the function f : R^m → R is differentiable, it is verified that f is convex if the inequality f(b) ≥ f(a) + ∇f(a)^T (b − a) holds for all a, b ∈ R^m, and strictly convex if this inequality is strict whenever a ≠ b. Moreover, f is said to be l-strongly convex (l > 0) if f(b) ≥ f(a) + ∇f(a)^T (b − a) + (l/2)|b − a|^2 for all a, b ∈ R^m, and its gradient ∇f is said to be l̄-Lipschitz (l̄ > 0) if |∇f(a) − ∇f(b)| ≤ l̄ |a − b| for all a, b ∈ R^m.
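As a quick numerical illustration of these definitions (a sketch with an assumed cost, not one from the paper), the following snippet checks the strong-convexity and Lipschitz-gradient inequalities for f(y) = y² + log(1 + eʸ), which is 2-strongly convex with a 2.25-Lipschitz gradient:

```python
import numpy as np

# Hypothetical local cost: f(y) = y^2 + log(1 + e^y).
# It is l-strongly convex with l = 2, and its gradient is
# l̄-Lipschitz with l̄ = 2.25 (the logistic term adds at most 1/4).
f  = lambda y: y**2 + np.log1p(np.exp(y))
df = lambda y: 2*y + 1.0/(1.0 + np.exp(-y))

l, lbar = 2.0, 2.25
rng = np.random.default_rng(0)
for _ in range(1000):
    a, b = rng.uniform(-5, 5, size=2)
    # strong convexity: f(b) >= f(a) + f'(a)(b - a) + (l/2)|b - a|^2
    assert f(b) >= f(a) + df(a)*(b - a) + 0.5*l*(b - a)**2 - 1e-9
    # Lipschitz gradient: |f'(a) - f'(b)| <= l̄ |a - b|
    assert abs(df(a) - df(b)) <= lbar*abs(a - b) + 1e-9
print("both inequalities verified on 1000 random pairs")
```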

Problem Formulation
Consider a multi-agent system consisting of N discrete-time linear agents of the following form:

x_i(t + 1) = A x_i(t) + B u_i(t) + d_i(t), y_i(t) = C x_i(t), (1)

where x_i(t) ∈ R^{n_x} is the state, u_i(t) ∈ R^{n_u} is the input and y_i(t) ∈ R is the output of agent i. The system matrices (C, A, B) are assumed to be minimal with compatible dimensions. Here, d_i(t) ∈ R^{n_x} represents external disturbances modeled by

w_i(t + 1) = S w_i(t), d_i(t) = E w_i(t), (2)

where w_i ∈ R^{n_w} is the full internal state of the external disturbances. As usual, we assume that S has no eigenvalue inside the unit circle of the complex plane [22]. In fact, the components of w_i corresponding to eigenvalues inside the unit circle converge exponentially to zero and thus in no way affect the design goal.
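To make this disturbance model concrete, the short sketch below simulates an exosystem of the form (2); the rotation matrix S and the matrix E here are illustrative assumptions (eigenvalues of S on the unit circle yield a persistent sinusoidal disturbance):

```python
import numpy as np

# Illustrative exosystem (2): w_i(t+1) = S w_i(t), d_i(t) = E w_i(t).
# S is a rotation matrix (eigenvalues on the unit circle), so the
# disturbance neither decays nor diverges; E routes it into the plant.
theta = 1.0
S = np.array([[np.cos(theta),  np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
E = np.array([[1.0, 0.0],
              [0.0, 0.0]])          # disturbance enters the first state only

w = np.array([1.0, 0.0])            # internal exosystem state
d = []
for t in range(200):
    d.append(E @ w)
    w = S @ w

d = np.array(d)
# first component is cos(theta * t): bounded and non-decaying
print(round(float(d[:, 0].max()), 3), round(float(d[:, 0].min()), 3))
```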
Each agent is assigned a local cost function f_i : R → R. We consider the following optimization problem:

min_{y ∈ R} f(y) = Σ_{i=1}^N f_i(y). (3)

This aggregate objective function f is called the global cost of this multi-agent system.
Here is an assumption to guarantee the solvability of (3).

Assumption 1. For any 1 ≤ i ≤ N, the cost function f_i is l-strongly convex and its gradient ∇f_i is l̄-Lipschitz.
Assumption 1 implies the existence and uniqueness of the optimal point of (3). As usual [10,14], we assume this point is finite and denote it by y*. We aim to develop distributed algorithms driving the outputs of all agents to a consensus on the minimizer of (3) without setting up a centralized working station, which might be expensive and prohibitive in some circumstances.
For this purpose, a weighted digraph G = (V, E, A) is used to describe the information sharing relationships among these agents, with node set V = {1, . . . , N} and weight matrix A ∈ R^{N×N}. If agent i can get the information of agent j, then there is an edge (j, i) in the graph, i.e., a_ij > 0.
The optimal output consensus problem for the discrete-time multi-agent system (1) can be formulated as follows: given agent (1), cost functions f_i(·), graph G and disturbances (2), find a feedback control u_i for agent i using only its own and neighboring information such that all trajectories of the agents are bounded and the associated outputs satisfy lim_{t→+∞} |y_i(t) − y*| = 0.
In this formulation, the agents are required to achieve an output consensus minimizing the aggregate global cost function. When the agents are all single integrators without disturbances, our formulated problem coincides with the well-studied distributed optimization problem [3]. Here, we further consider multi-agent systems with high-order dynamics subject to external disturbances.
To guarantee that any agent's information can reach every other agent through a directed information flow, we make the following assumption, as in many publications [10,14,23].
Assumption 2. G is strongly connected and weight-balanced.
Note that when this optimal output consensus problem is solved, we have lim_{t→∞} y_i(t) = y*, and it is natural for agent i to reach some steady state. Thus, we make another assumption to ensure this point.
Assumption 3. There exist constant matrices X_1, X_2 and U_1, U_2 with compatible dimensions satisfying

X_1 = A X_1 + B U_1, C X_1 = 1,
X_2 S = A X_2 + B U_2 + E, C X_2 = 0.

Assumption 3 is known as the solvability of the regulator equations for set-point regulation and disturbance rejection of discrete-time linear systems [22], and it plays a crucial role in resolving our optimal consensus problem. Under this assumption, when the optimal output consensus is achieved at y* with disturbance rejection, the steady-state state and input of each agent are X_1 y* + X_2 w_i(t) and U_1 y* + U_2 w_i(t), respectively.
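The regulator equations in Assumption 3 are linear in (X_1, U_1) and (X_2, U_2), so they can be solved numerically by vectorization. The sketch below does this for a hypothetical plant and exosystem (all matrices are assumptions chosen for illustration):

```python
import numpy as np

# Hypothetical plant and exosystem data (assumptions, not from the paper)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
E = np.array([[1.0, 0.0], [0.0, 0.0]])
th = 1.0
S = np.array([[np.cos(th), np.sin(th)], [-np.sin(th), np.cos(th)]])
n, m, nw = 2, 1, 2

def vec(M):                       # column-major vectorization
    return M.reshape(-1, order="F")

def unvec(v, r, c):
    return v.reshape(r, c, order="F")

# First pair: X1 = A X1 + B U1,  C X1 = 1
M1 = np.block([[np.eye(n) - A, -B],
               [C,             np.zeros((1, m))]])
sol1 = np.linalg.solve(M1, np.r_[np.zeros(n), 1.0])
X1, U1 = sol1[:n].reshape(n, 1), sol1[n:].reshape(m, 1)

# Second pair: X2 S = A X2 + B U2 + E,  C X2 = 0
# using vec(X2 S) = (S^T kron I_n) vec(X2), vec(A X2) = (I_nw kron A) vec(X2)
M2 = np.block([
    [np.kron(S.T, np.eye(n)) - np.kron(np.eye(nw), A), -np.kron(np.eye(nw), B)],
    [np.kron(np.eye(nw), C),                            np.zeros((nw, m * nw))],
])
sol2 = np.linalg.solve(M2, np.r_[vec(E), np.zeros(nw)])
X2, U2 = unvec(sol2[:n * nw], n, nw), unvec(sol2[n * nw:], m, nw)

# Check both regulator equations
assert np.allclose(X1, A @ X1 + B @ U1) and np.isclose((C @ X1).item(), 1.0)
assert np.allclose(X2 @ S, A @ X2 + B @ U2 + E) and np.allclose(C @ X2, 0.0)
print("regulator equations solved")
```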
To reject the external disturbances, we suppose the following condition holds without loss of generality.
Assumption 4. The cascaded pair ([C 0], [[A, E], [0, S]]) is observable.

This assumption implies that the external disturbances can indeed affect the regulated output y_i. A sufficient condition to ensure this assumption is the observability of (E, S), which can be readily verified by the PBH test.

Main Results
To overcome the difficulties brought by the high-order linear dynamics, we adopt an embedded design in two steps, as in [17], to achieve optimal output consensus for these agents over directed graphs.

Optimal signal generation
To begin with, we consider the optimal consensus problem (3) for a group of virtual agents with the same cost functions f_i and information sharing graph G.
Note that, under Assumptions 1-2, the optimization problem (3) can be reformulated into the following equivalent form:

min_{y ∈ R^N} f̃(y) = Σ_{i=1}^N f_i(y_i), subject to L y = 0,

with y = col(y_1, . . . , y_N), since L y = 0 forces y_1 = · · · = y_N for a strongly connected graph.
There have been some distributed rules to achieve optimal consensus or compute the optimal solution to this problem under directed graphs, e.g., [10] and [24]. However, most of these algorithms require some initialization process. Since there might be disturbances or round-off errors, such an initialization could fail to be fulfilled during the implementation of these rules. Thus, we are more interested in constructing optimal signal generators free of such initializations.
Note that the associated Lagrangian is L(y, Λ) = f̃(y) + Λ^T L y with f̃(y) = Σ_{i=1}^N f_i(y_i). When the graph is undirected and connected, the Laplacian L is symmetric and the optimal point can be readily derived by primal-dual dynamics (e.g., [25]). For digraphs, we lose this symmetry and the original primal-dual method fails to generate the optimal point. Here, we extend the primal-dual dynamics to weight-balanced digraphs by adding a proportional term as follows:
z_i(t + 1) = z_i(t) − γ [α ∇f_i(z_i(t)) + β Σ_j a_ij (z_i(t) − z_j(t)) + Σ_j a_ij (λ_i(t) − λ_j(t))],
λ_i(t + 1) = λ_i(t) + γ αβ Σ_j a_ij (z_i(t) − z_j(t)), (7)

where α, β and γ are positive constants to be specified later. In compact form, with Z = col(z_1, . . . , z_N), Λ = col(λ_1, . . . , λ_N) and ∇f̃(Z) = col(∇f_1(z_1), . . . , ∇f_N(z_N)),

Z(t + 1) = Z(t) − γ [α ∇f̃(Z(t)) + β L Z(t) + L Λ(t)],
Λ(t + 1) = Λ(t) + γ αβ L Z(t). (8)

This algorithm has been partially investigated in [25] for the case α = β = 1; here we add these two tunable parameters to ensure its efficiency over directed graphs.
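To illustrate how such a generator behaves, the following sketch runs a discrete primal-dual iteration of this type on a weight-balanced directed ring; the quadratic costs, the parameter values and the specific update used here are illustrative assumptions rather than the paper's tuned design:

```python
import numpy as np

# Sketch of a generator of form (7) on a weight-balanced directed ring of
# N = 4 nodes. The quadratic costs f_i(y) = (y - c_i)^2 are assumptions
# for illustration; their aggregate minimizer is the average of the c_i.
N = 4
c = np.array([1.0, 2.0, 4.0, 5.0])
grad = lambda z: 2.0 * (z - c)          # stacked gradients of the f_i

# Laplacian of the directed cycle 1 -> 2 -> 3 -> 4 -> 1 (weight-balanced)
L = np.eye(N) - np.roll(np.eye(N), 1, axis=1)

alpha, beta, gamma = 1.0, 1.0, 0.01     # tuned by simulation (cf. Remark 1)
z   = np.zeros(N)                        # primal states: no special initialization
lam = np.zeros(N)                        # dual states

for _ in range(40000):
    z_new = z - gamma * (alpha * grad(z) + beta * (L @ z) + L @ lam)
    lam   = lam + gamma * alpha * beta * (L @ z)
    z = z_new

print(z.round(3))   # every entry should approach mean(c) = 3.0
```

Note the dual update only uses relative information L z, so the iteration tolerates arbitrary initial values, matching the initialization-free property claimed for the generator.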
Under Assumption 1, the function ∇f̃ is l̄-Lipschitz in Z.
Let (Z*, Λ*) be an equilibrium point of system (8). The following lemma shows that the agents achieve an optimal consensus at this equilibrium.

Lemma 1. Under Assumptions 1-2, any equilibrium (Z*, Λ*) of system (8) satisfies Z* = 1_N y*.
To develop effective optimal signal generators, we have to choose appropriate parameters α, β, γ such that the equilibrium of (8) is attractive. Here is a key lemma to ensure this point. Its proof can be found in Appendix.

Lemma 2. Under Assumptions 1-2, the trajectory of z_i(t) along system (7) exponentially converges to the optimal solution y* of problem (3) from any initial value, provided the chosen parameters satisfy condition (9).

Remark 1. Condition (9) is only sufficient for generator (7) to reproduce y*, and it can be conservative. In practice, one may select these parameters by repeated simulations, first increasing α, β and then decreasing γ sequentially.
Remark 2. Compared with existing optimal consensus designs, the proposed generator actually solves a distributed optimization problem under weight-balanced directed graphs. Unlike similar rules in [10,14], the developed algorithm is initialization-free, which makes it more favorable in large-scale networks.

Solvability of optimal consensus problem
With the above optimal signal generator, we are going to solve the associated reference tracking and disturbance rejection problem for agent i with reference z i (t) and disturbance d i (t).
Recalling classical output regulation results [22], a full-information control for each agent to achieve optimal output consensus can be written as

u_i(t) = K x_i(t) + K_1 y* + K_2 w_i(t),

where K is chosen such that A + BK is Schur stable, and K_1 = U_1 − K X_1, K_2 = U_2 − K X_2. In fact, under this full-information control, letting x̄_i(t) = x_i(t) − X_1 y* − X_2 w_i(t) yields the error system

x̄_i(t + 1) = (A + BK) x̄_i(t),

along which the regulated output e_i(t) ≜ y_i(t) − y* = C x̄_i(t) converges to zero as t goes to infinity.
However, the disturbance state w_i(t) is not available to us, and the global optimal solution y* is also unknown due to the distributed nature of the global cost function. Thus, the above full-information control is not applicable to our problem.
In Lemma 2, we have shown that the optimal solution y * can be generated by (7) exponentially fast.
This motivates us to replace y* by the reference signal z_i(t). As for the external disturbances, we estimate them by observer-based methods to complete the whole design.
Under Assumption 4, a full-state Luenberger observer can be constructed to estimate the state and disturbances as follows:

x̂_i(t + 1) = A x̂_i(t) + B u_i(t) + E ŵ_i(t) + L_1 (C x̂_i(t) − y_i(t)),
ŵ_i(t + 1) = S ŵ_i(t) + L_2 (C x̂_i(t) − y_i(t)), (10)

where L_1 and L_2 are gain matrices chosen such that the matrix [[A + L_1 C, E], [L_2 C, S]] is Schur stable. Here is a lemma to guarantee its efficiency; we omit its proof.
Lemma 3. Under Assumption 4, for any 1 ≤ i ≤ N, the signals x̂_i(t) and ŵ_i(t) along the trajectory of system (10) exponentially converge to x_i(t) and w_i(t) as t goes to infinity.
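A minimal numerical sketch of such a Luenberger observer is given below; the scalar plant, the exosystem and the pole locations are assumptions, and the gains are computed by pole placement on the dual system so that the error matrix (written [[A + L_1 C, E], [L_2 C, S]] in the paper's notation) is Schur stable:

```python
import numpy as np
from scipy.signal import place_poles

# Sketch of an observer of form (10): estimate (x_i, w_i) jointly from y_i.
# The scalar plant x+ = 0.5 x + u + w1 and the harmonic exosystem below
# are assumptions for illustration.
a, th = 0.5, 1.0
S = np.array([[np.cos(th), np.sin(th)], [-np.sin(th), np.cos(th)]])
A_aug = np.block([[np.array([[a]]),   np.array([[1.0, 0.0]])],
                  [np.zeros((2, 1)),  S]])
C_aug = np.array([[1.0, 0.0, 0.0]])     # only y = x is measured

# Observer gain by pole placement on the dual (transposed) system;
# in the paper's notation, [L_1; L_2] = -Lg.
Lg = place_poles(A_aug.T, C_aug.T, [0.2, 0.3, 0.4]).gain_matrix.T
F = A_aug - Lg @ C_aug                   # error-dynamics matrix, Schur by design
assert max(abs(np.linalg.eigvals(F))) < 1.0

# Estimation error obeys e(t+1) = F e(t), hence decays geometrically
e = np.array([[1.0], [1.0], [-1.0]])
for _ in range(60):
    e = F @ e
print(float(np.linalg.norm(e)) < 1e-10)
```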
Based on this observer and the signal generator (7), we propose the following control for agent i to achieve the optimal consensus goal:

u_i(t) = K x̂_i(t) + K_1 z_i(t) + K_2 ŵ_i(t), (11)

where the matrices K, K_1, K_2, L_1, L_2 and the parameters α, β, γ are chosen as above. This control law is distributed in the sense that each agent uses only its own and neighboring information.
To show its effectiveness, we derive the error system under control (11) by some mathematical manipulations:

x̄_i(t + 1) = (A + BK) x̄_i(t) + B Ξ_i(t), (12)

where Ξ_i(t) ≜ K (x̂_i(t) − x_i(t)) + K_1 (z_i(t) − y*) + K_2 (ŵ_i(t) − w_i(t)). The term Ξ_i(t) represents the discrepancy between our actual control and the full-information control.
We are now ready to present the main theorem of this paper.

Theorem 1. Under Assumptions 1-4, the optimal output consensus problem for the multi-agent system (1) is solved by the control (11) with parameters chosen as in Lemma 2.
Proof. To prove this theorem, we first claim that there exists a constant c > 0 such that lim sup_{t→∞} |e_i(t)| ≤ c lim sup_{t→∞} |Ξ_i(t)|; in particular, if lim_{t→∞} |Ξ_i(t)| = 0, then lim_{t→∞} e_i(t) = 0. This property is a variant of input-to-output stability of system (12) with input Ξ_i(t) and output e_i(t).
To prove it, we denote G = A + BK for short. Iterating (12) gives

x̄_i(t) = G^t x̄_i(0) + Σ_{s=0}^{t−1} G^{t−1−s} B Ξ_i(s),

and, by Assumption 3, the regulated output is derived as

e_i(t) = C G^t x̄_i(0) + C Σ_{s=0}^{t−1} G^{t−1−s} B Ξ_i(s). (13)
Since G is Schur stable, lim_{t→∞} C G^t x̄_i(0) = 0, so this term can be neglected when estimating the limit superior of {|e_i(t)|} without affecting the conclusion. Without loss of generality, we assume b_i ≜ lim sup_{t→∞} |Ξ_i(t)| is finite.
By the definition of b_i, for any ε > 0 there exists a large enough integer M > 0 such that |Ξ_i(t)| ≤ b_i + ε for any t > M. Without loss of generality, we work with a matrix norm satisfying |G| < 1, which exists by the Schurness of G. Splitting the last term of (13) into two parts gives

|e_i(t)| ≤ |C G^t x̄_i(0)| + |C||B| Σ_{s=0}^{M} |G|^{t−1−s} |Ξ_i(s)| + |C||B| (b_i + ε) Σ_{s=M+1}^{t−1} |G|^{t−1−s},

where the first two terms vanish as t → ∞ since |G|^{t−1−M} → 0. We thus have lim sup_{t→∞} |e_i(t)| ≤ c (b_i + ε) with c = |C||B| / (1 − |G|). Since ε can be chosen arbitrarily small, this further implies lim sup_{t→∞} |e_i(t)| ≤ c lim sup_{t→∞} |Ξ_i(t)|; that is, the claim is correct. With the above claim, we only have to show Ξ_i(t) → 0 as t → ∞ under the control (11), which holds directly by Lemmas 2 and 3. One then concludes that e_i(t) = y_i(t) − y* converges to 0 as t goes to infinity. The proof is thus complete.
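The input-to-output property used in this proof can also be checked numerically: for x̄(t+1) = G x̄(t) + B Ξ(t) with a Schur-stable G (the matrices below are assumptions for illustration), the output e = C x̄ vanishes whenever the mismatch Ξ(t) does:

```python
import numpy as np

# Numeric sanity check (assumed matrices) of the claim in the proof:
# for the error system x̄(t+1) = G x̄(t) + B Ξ(t) with Schur-stable G,
# the output e = C x̄ vanishes whenever Ξ(t) vanishes.
G = np.array([[0.3, 1.0], [0.0, 0.4]])      # Schur stable (eigenvalues 0.3, 0.4)
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

x = np.array([[5.0], [-3.0]])
for t in range(300):
    Xi = 0.9 ** t                            # mismatch decaying to zero
    x = G @ x + B * Xi
e = (C @ x).item()
print(abs(e) < 1e-10)
```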
Remark 3. This theorem can be regarded as a discrete-time counterpart of existing optimal output consensus results for continuous-time agents derived in [17,26]. Compared with the well-studied optimal consensus problem for discrete-time single integrators [3,7,8,25], we extend these results to the more general case of high-order linear multi-agent systems subject to nontrivial disturbances. In particular, by letting f_i(s) = (s − y_i(0))^2, we achieve an output average consensus for these linear agents with disturbance rejection.

Simulations
In this section, we provide a numerical example to illustrate the effectiveness of the preceding designs. Consider a multi-agent system consisting of four agents of the form (1).
The external disturbance d_i(t) is generated by an exosystem of the form (2) with

S = [cos(1) sin(1); −sin(1) cos(1)],

which produces persistent sinusoidal disturbances. Assumption 3 holds for this multi-agent system, and the observability of (E, S) is verified, which implies Assumption 4.
The communication topology among these agents is represented by a directed ring graph, depicted in Fig. 1, and each agent is assigned a local cost function f_i. All these functions are strongly convex with Lipschitz gradients; in fact, Assumption 1 is verified with l = 1 and l̄ = 3. By minimizing f(y) = Σ_{i=1}^4 f_i(y), the global optimal point is found to be y* = 3.24. According to Theorem 1, the associated optimal output consensus problem can be solved by a control of the form (11). For the simulations, we choose α = 1, β = 15, γ = 0.004. The efficiency of the developed signal generator is shown in Figs. 2 and 3, where the optimal point y* is reproduced quickly.
Next, we choose suitable gain matrices K, K_1, K_2, L_1, L_2 in control (11). As shown in Fig. 4, the optimal output consensus of all agents is achieved at the global optimal solution y*. Moreover, we shut down the disturbance rejection part of the controller (i.e., set K_2 = 0) between t = 2000 and t = 2250 and find that the agents fail to achieve a consensus. After this part is restarted, the optimal output consensus is quickly recovered, which verifies the efficiency of our control in rejecting these periodic disturbances.

Conclusions
This paper studied an optimal output consensus problem for discrete-time linear multi-agent systems subject to external disturbances. Following an embedded control design, we employed a primal-dual rule with fixed step size to generate the optimal point under directed information flow, and developed effective observer-based distributed controllers for the agents to achieve optimal output consensus. Future work will include time-varying directed graphs.