MATRIXx™ Xmath Model Reduction Module
April 2004 Edition
Part Number 370755B-01
Important Information

Warranty

The media on which you receive National Instruments software are warranted not to fail to execute programming instructions, due to defects in materials and workmanship, for a period of 90 days from the date of shipment, as evidenced by receipts or other documentation. National Instruments will, at its option, repair or replace software media that do not execute programming instructions if National Instruments receives notice of such defects during the warranty period.
Conventions

The following conventions are used in this manual:

[]  Square brackets enclose optional items—for example, [response]. Square brackets also cite bibliographic references.

»  The » symbol leads you through nested menu items and dialog box options to a final action. The sequence File»Page Setup»Options directs you to pull down the File menu, select the Page Setup item, and select Options from the last dialog box.

This icon denotes a note, which alerts you to important information.
Contents

Chapter 1, Introduction
  Using This Manual
    Document Organization
    Bibliographic References
    Commonly Used Nomenclature
    Conventions

Chapter 2, Additive Error Reduction
  Onepass Algorithm
  Multipass Algorithm
  Discrete-Time Systems
  Impulse Response Error
  Unstable System Approximation

Chapter 4, Frequency-Weighted Error Reduction
  Algorithm
  Additional Background
  Related Functions

Chapter 5, Utilities
  hankelsv( )
1
Introduction

This chapter starts with an outline of the manual and some useful notes. It also provides an overview of the Model Reduction Module, describes the functions in this module, and introduces nomenclature and concepts used throughout this manual.

Using This Manual

This manual describes the Model Reduction Module (MRM), which provides a collection of tools for reducing the order of systems.
• Chapter 5, Utilities, describes three utility functions: hankelsv( ), stable( ), and compare( ).
• Chapter 6, Tutorial, illustrates a number of the MRM functions and their underlying ideas.

Bibliographic References

Throughout this document, bibliographic references are cited with bracketed entries. For example, a reference to [VODM1] corresponds to a paper published by Van Overschee and De Moor. For a table of bibliographic references, refer to Appendix A, Bibliography.

Related Publications

For a complete list of MATRIXx publications, refer to Chapter 2, MATRIXx Publications, Online Help, and Customer Support, of the MATRIXx Getting Started Guide.
As shown in Figure 1-1, functions are provided to handle four broad tasks:
• Model reduction with additive errors
• Model reduction with multiplicative errors
• Model reduction with frequency weighting of an additive error, including controller reduction
• Utility functions

Figure 1-1. MRM Function Groups: additive error model reduction (balmoore, redschur, ophank, truncate, balance, mreduce); multiplicative error model reduction (bst, mulhank); frequency-weighted model reduction (wtbalance, fracred); and utility functions (hankelsv, stable, compare)
Certain restrictions regarding minimality and stability are required of the input data, and are summarized in Table 1-1.

Table 1-1.
• L2 approximation, in which the L2 norm of the impulse response error (or, by Parseval's theorem, the L2 norm of the transfer-function error along the imaginary axis) serves as the error measure
• Markov parameter or impulse response matching, moment matching, covariance matching, and combinations of these, for example, q-COVER approximation
• Controller reduction using canonical interactions, balanced Riccati equations, and certain balanced controller reduction algorithms

Nomenclature
• An inequality or bound is tight if it can be met in practice. For example, 1 + log x − x ≤ 0 is tight because the inequality becomes an equality for x = 1.
• The controllability grammian is also E[x(t)x′(t)] when the system ẋ = Ax + Bw has been excited from time −∞ by zero-mean white noise with E[w(t)w′(s)] = Iδ(t − s).
• The observability grammian can be thought of as measuring the information contained in the output concerning an initial state.
• Suppose the transfer-function matrix corresponds to a discrete-time system, with state variable dimension n. Then the infinite Hankel matrix,

        | CB     CAB    CA²B   ... |
    H = | CAB    CA²B   CA³B   ... |
        | CA²B   CA³B   CA⁴B   ... |
        | ...                      |

has for its singular values the n nonzero Hankel singular values, together with an infinite number of zero singular values. The Hankel singular values of a (stable) all-pass system (or all-pass matrix) are all 1.
Internally Balanced Realizations

Suppose that a realization of a transfer-function matrix has the controllability and observability grammian property that P = Q = Σ for some diagonal Σ. Then the realization is termed internally balanced. Notice that the diagonal entries σi of Σ are square roots of the eigenvalues of PQ, that is, they are the Hankel singular values. Often the entries of Σ are assumed ordered with σi ≥ σi+1.
This is almost the algorithm set out in Section II of [LHPW87]. The one difference (and it is minor) is that in [LHPW87], lower triangular Cholesky factors of P and Q are used in place of UcSc½ and UoSo½ in forming H in step 2. The grammians themselves need never be formed, as these Cholesky factors can be computed directly from A, B, and C in both continuous and discrete time; this, however, is not done in balmoore.
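The grammian and Hankel-singular-value computations described above can be sketched outside Xmath. The following Python fragment (not the balmoore implementation; the system matrices are hypothetical, illustrative values) solves the two Lyapunov equations for the grammians and takes the square roots of the eigenvalues of PQ:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable third-order system (illustrative values only).
A = np.diag([-1.0, -2.0, -5.0])
B = np.array([[1.0], [1.0], [1.0]])
C = np.array([[1.0, 1.0, 1.0]])

# Controllability grammian:  A P + P A' + B B' = 0
P = solve_continuous_lyapunov(A, -B @ B.T)
# Observability grammian:    A' Q + Q A + C' C = 0
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values: square roots of the eigenvalues of P Q
hsv = np.sort(np.sqrt(np.linalg.eigvals(P @ Q).real))[::-1]
print(hsv)
```

As the text notes, a production implementation can work with Cholesky factors of P and Q directly and never form the grammians.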
and also:

    Re λi(A22) < 0  and  Re λi(A11 − A12 A22⁻¹ A21) < 0

Usually, we expect that

    Re λi(A22) ≪ Re λi(A11 − A12 A22⁻¹ A21)

in the sense that the intuitive argument hinges on this, but it is not necessary.
Similar considerations govern the discrete-time problem, where

    x1(k+1) = A11 x1(k) + A12 x2(k) + B1 u(k)
    x2(k+1) = A21 x1(k) + A22 x2(k) + B2 u(k)
       y(k) = C1 x1(k) + C2 x2(k) + D u(k)

can be approximated by:

    x1(k+1) = [A11 + A12(I − A22)⁻¹A21] x1(k) + [B1 + A12(I − A22)⁻¹B2] u(k)
       y(k) = [C1 + C2(I − A22)⁻¹A21] x1(k) + [D + C2(I − A22)⁻¹B2] u(k)

mreduce( ) can carry out singular perturbation.
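The discrete-time residualization formulas above can be sketched directly. This is not the mreduce implementation, only an illustration of the same computation (the function name and example matrices are hypothetical); a useful check is that residualization preserves the DC gain (the response at z = 1):

```python
import numpy as np

def dt_singular_perturbation(A, B, C, D, k):
    """Residualize all but the first k states of a discrete-time system
    (x1 retained, x2 assumed at steady state), per the formulas above."""
    A11, A12 = A[:k, :k], A[:k, k:]
    A21, A22 = A[k:, :k], A[k:, k:]
    B1, B2 = B[:k, :], B[k:, :]
    C1, C2 = C[:, :k], C[:, k:]
    M = np.linalg.inv(np.eye(A22.shape[0]) - A22)   # (I - A22)^-1
    Ar = A11 + A12 @ M @ A21
    Br = B1 + A12 @ M @ B2
    Cr = C1 + C2 @ M @ A21
    Dr = D + C2 @ M @ B2
    return Ar, Br, Cr, Dr

# Small stable example; compare DC gains of full and reduced systems.
A = np.array([[0.5, 0.1, 0.0],
              [0.2, 0.3, 0.1],
              [0.0, 0.1, 0.2]])
B = np.array([[1.0], [0.5], [0.2]])
C = np.array([[1.0, 0.0, 1.0]])
D = np.array([[0.0]])
Ar, Br, Cr, Dr = dt_singular_perturbation(A, B, C, D, 2)

dc_full = C @ np.linalg.inv(np.eye(3) - A) @ B + D
dc_red = Cr @ np.linalg.inv(np.eye(2) - Ar) @ Br + Dr
print(np.allclose(dc_full, dc_red))
```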
nonnegative hermitian for all ω. If Φ is scalar, then Φ(jω) ≥ 0 for all ω. Normally one restricts attention to Φ(·) with lim ω→∞ Φ(jω) < ∞. A key result is that, given a rational, nonnegative hermitian Φ(jω) with lim ω→∞ Φ(jω) < ∞, there exists a rational W(s) where:
• W(∞) < ∞.
• W(s) is stable.
• W(s) is minimum phase, that is, the rank of W(s) is constant in Re[s] > 0. In the scalar case, all zeros of W(s) lie in Re[s] ≤ 0, or in Re[s] < 0 if Φ(jω) > 0 for all ω.
Low Order Controller Design Through Order Reduction

The Model Reduction Module is particularly suitable for achieving low order controller design for a high order plant. This section explains some of the broad issues involved. Most modern controller design methods, for example, LQG and H∞, yield controllers of order roughly comparable with that of the plant.
multiplicative reduction, as described in Chapter 4, Frequency-Weighted Error Reduction, is a sound approach. Chapter 3, Multiplicative Error Reduction, and Chapter 4, Frequency-Weighted Error Reduction, develop these arguments more fully.
2
Additive Error Reduction

This chapter describes additive error reduction, including discussions of truncation of, reduction by, and perturbation of balanced realizations.

Introduction

Additive error reduction focuses on errors of the form

    ‖G(jω) − Gr(jω)‖∞

where G is the originally given transfer function, or model, and Gr is the reduced one.
Truncation of Balanced Realizations

A group of functions can be used to achieve a reduction through truncation of a balanced realization.
A very attractive feature of the truncation procedure is the availability of an error bound. More precisely, suppose that the controllability and observability grammians for [Enn84] are

    P = Q = Σ = | Σ1  0  |        (2-2)
                | 0   Σ2 |

with the diagonal entries of Σ in decreasing order, that is, σ1 ≥ σ2 ≥ ···. Then the key result is

    ‖G(jω) − Gr(jω)‖∞ ≤ 2 tr Σ2

with G, Gr the transfer function matrices of Equation 2-1 and Equation 2-2, respectively.
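The bound 2 tr Σ2 can be checked numerically. The sketch below (hypothetical example matrices, not Xmath code) balances a stable system via a Cholesky/eigenvector construction, truncates to one state, and samples the frequency-response error against the bound:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky

# Illustrative stable SISO system (hypothetical values).
A = np.array([[-1.0, 0.5, 0.0],
              [-0.5, -2.0, 1.0],
              [0.0, -1.0, -4.0]])
B = np.array([[1.0], [0.0], [1.0]])
C = np.array([[1.0, 1.0, 0.0]])

P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Balancing transformation T: T^-1 P T^-T = T' Q T = Sigma.
R = cholesky(P, lower=True)                 # P = R R'
lam, U = np.linalg.eigh(R.T @ Q @ R)        # R' Q R = U diag(lam) U'
lam, U = lam[::-1], U[:, ::-1]              # decreasing order
sigma = np.sqrt(lam)                        # Hankel singular values
T = R @ U @ np.diag(lam ** -0.25)
Ti = np.linalg.inv(T)
Ab, Bb, Cb = Ti @ A @ T, Ti @ B, C @ T

# Truncate to one state and compare against the bound 2*tr(Sigma2).
k = 1
Ar, Br, Cr = Ab[:k, :k], Bb[:k, :], Cb[:, :k]
bound = 2.0 * sigma[k:].sum()

def freq_resp(A, B, C, w):
    return C @ np.linalg.inv(1j * w * np.eye(A.shape[0]) - A) @ B

worst = max(abs(freq_resp(Ab, Bb, Cb, w) - freq_resp(Ar, Br, Cr, w)).max()
            for w in np.logspace(-2, 3, 200))
print(worst <= bound + 1e-9)
```

The sampled error can only underestimate the true infinity norm, so the comparison with the bound must come out true for any stable example.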
proper. So, even if all zeros are unstable, the maximum phase shift when ω moves from 0 to ∞ is (2n − 3)π/2. It follows that if G(jω) remains large in magnitude at frequencies when the phase shift has moved past (2n − 3)π/2, approximation of G by Gr will necessarily be poor.

order model is not one in general obtainable by truncation of an internally-balanced realization of the full order model. Figure 2-1 sets out several routes to a reduced-order realization. In continuous time, a truncation of a balanced realization is again balanced. This is not the case for discrete time, but otherwise it looks the same.
with controllability and observability grammians given by

    P = Q = Σ = | Σ1  0  |
                | 0   Σ2 |

in which the diagonal entries of Σ are in decreasing order, that is, σ1 ≥ σ2 ≥ ···, and such that the last diagonal entry of Σ1 exceeds the first diagonal entry of Σ2.
function matrix. Consider the way the associated impulse response maps inputs defined over (−∞, 0] in L2 into outputs, and focus on the output over [0, ∞). Define the input as u(t) for t < 0, and set v(t) = u(−t). Define the output as y(t) for t > 0. Then the mapping is

    y(t) = ∫₀∞ C e^{A(t+r)} B v(r) dr

if G(s) = C(sI − A)⁻¹B. The norm of the associated operator is the Hankel norm ‖G‖H of G.
Further, the Ĝ which is optimal for Hankel norm approximation is also optimal for this second type of approximation. In Xmath, Hankel norm approximation is achieved with ophank( ). The most comprehensive reference is [Glo84].

balmoore( )

[SysR,HSV,T] = balmoore(Sys,{nsr,bound})

The balmoore( ) function computes an internally-balanced realization of a continuous system and then optionally truncates it to provide a balanced reduced order system, using B.C.
of the balanced system occurs (assuming nsr is less than the number of states).

The actual approximation error for discrete systems also depends on frequency, and can be large at ω = 0. The error bound is almost never tight, that is, the actual error magnitude as a function of ω almost never attains the error bound, so the bound can only be a guide to the selection of the reduced system dimension.
Related Functions

balance(), truncate(), redschur(), mreduce()

truncate( )

SysR = truncate(Sys,nsr,{VD,VA})

The truncate( ) function reduces a system Sys by retaining the first nsr states and throwing away the rest to form a system SysR. If for Sys one has

    A = | A11  A12 |    B = | B1 |    C = | C1  C2 |
        | A21  A22 |        | B2 |

the reduced order system (in both continuous-time and discrete-time cases) is defined by A11, B1, C1, and D.
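The state-retention rule is simple enough to state as code. This is only a sketch of the partitioning truncate( ) performs (the function name and matrices here are illustrative, not the Xmath implementation):

```python
import numpy as np

def truncate(A, B, C, D, nsr):
    """Retain the first nsr states; D is unchanged.
    Unlike singular perturbation (mreduce), plain truncation does not
    preserve the DC gain; it matches behavior at high frequency instead."""
    return A[:nsr, :nsr], B[:nsr, :], C[:, :nsr], D

A = np.diag([-1.0, -2.0, -3.0, -4.0])
B = np.ones((4, 2))
C = np.ones((1, 4))
D = np.zeros((1, 2))
Ar, Br, Cr, Dr = truncate(A, B, C, D, 2)
print(Ar.shape, Br.shape, Cr.shape)   # → (2, 2) (2, 2) (1, 2)
```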
redschur( )

[SysR,HSV,slbig,srbig,VD,VA] = redschur(Sys,{nsr,bound})

The redschur( ) function uses a Schur method (from Safonov and Chiang) to calculate a reduced version of a continuous or discrete system without balancing.
Next, Schur decompositions of WcWo are formed, with the eigenvalues of WcWo in ascending and descending order. These eigenvalues are the squares of the Hankel singular values of Sys, and if Sys is nonminimal, some can be zero.

    V′A Wc Wo VA = Sasc
    V′D Wc Wo VD = Sdes

The matrices VA, VD are orthogonal and Sasc, Sdes are upper triangular.
For the discrete-time case:

    ‖G(e^{jω}) − GR(e^{jω})‖∞ ≤ 2(σ_{nsr+1} + σ_{nsr+2} + ... + σ_{ns})

When {bound} is specified, the error bound just enunciated is used to choose the number of states in SysR so that the bound is satisfied and nsr is as small as possible. If the desired error bound is smaller than 2σ_{ns}, no reduction is made. In the continuous-time case, the error depends on frequency, but is always zero at ω = ∞.
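The order-selection rule driven by {bound} can be sketched as follows (an illustration of the selection logic only; the function name is hypothetical):

```python
import numpy as np

def pick_order(hsv, bound):
    """Smallest nsr with 2*(hsv[nsr] + ... + hsv[-1]) <= bound.
    hsv must be sorted in decreasing order; if bound < 2*hsv[-1],
    no reduction is possible and the full order is returned."""
    hsv = np.asarray(hsv, dtype=float)
    for nsr in range(len(hsv) + 1):
        if 2.0 * hsv[nsr:].sum() <= bound:
            return nsr
    return len(hsv)

print(pick_order([5.0, 1.0, 0.1, 0.01], 2.5))   # 2*(1.0+0.1+0.01)=2.22 → 1
```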
Algorithm

The algorithm does the following. The system Sys and the reduced order system SysR are stable; the system SysU has all its poles in Re[s] > 0. If the transfer function matrices are G(s), Gr(s), and Gu(s), then:
• Gr(s) is a stable approximation of G(s).
• Gr(s) + Gu(s) is a more accurate, but not stable, approximation of G(s), and optimal in a certain sense.

Table 2-1.

Thus, the penalty for not being allowed to include Gu in the approximation is an increase in the error bound, by σ_{ni+1} + ... + σ_{ns}. A number of theoretical developments hinge on bounding the Hankel singular values of Gr(s) and Gu(−s) in terms of those of G(s).
being approximated by a stable Gr(s) with the actual error (as opposed to just the error bound) satisfying:

    ‖G(s) − Gr(s)‖∞ = σ_{ns}

Note: Gr is optimal, that is, there is no other Gr achieving a lower bound.

Onepass Algorithm

The first steps of the algorithm are to obtain the Hankel singular values of G(s) (by using hankelsv( )) and identify their multiplicities. (Stability of G(s) is checked in this process.
and finally:

    Ã = S_B⁻¹(A11 − A12 A22# A21)
    B̃ = S_B⁻¹(B1 − A12 A22# B2)
    C̃ = C1 − C2 A22# A21
    D̃ = D − C2 A22# B2

These four matrices are the constituents of the system matrix of G̃(s), where:

    G̃(s) = Gr(s) + Gu(s)

Digression: This choice is related to the ideas of [Glo84] in the following way; in [Glo84], the complete set is identified of G̃(s) satisfying ‖G(jω) − G̃(jω)‖∞ = σ_{ni}, with G̃ having a stable part
to choose the D matrix of Gr(s), by splitting D̃ between Gr(s) and Gu(s). This is done by using a separate function ophiter( ). Suppose Gu(s) is the unstable output of stable( ), and let K(s) = Gu(−s). By applying the multipass Hankel reduction algorithm, described further below, K(s) is reduced to the constant K0 (the approximation), which satisfies

    ‖K(s) − K0‖∞ ≤ σ1(K) + ... + σ_{ns−ni}(K) ≤ σ_{ni+1}(G) + ...
2. Find a stable order ns − 2 approximation G_{ns−2} of G_{ns−1}(s), with

    ‖G_{ns−1}(jω) − G_{ns−2}(jω)‖∞ = σ_{ns−1}(G_{ns−1})

   ...

3. (Step ns − nsr): Find a stable order nsr approximation of G_{nsr+1}, with

    ‖G_{nsr+1}(jω) − G_{nsr}(jω)‖∞ = σ_{nsr+1}(G_{nsr+1})

Then, because σi(G_{ns−1}) ≤ σi(G) for i < ns, σi(G_{ns−2}) ≤ σi(G_{ns−1}) for i < ns − 1, ...
We use sysZ to denote G(z) and define

    bilinsys = makepoly([-1,a])/makepoly([1,a])

as the mapping from the z-domain to the s-domain. The specification is reversed because this function uses backward polynomial notation. Hankel norm reduction is then applied to H(s) to generate a stable reduced order approximation Hr(s) and an unstable Hu(s) such that:

    ‖H − Hr − Hu‖∞ = σ_{i}
    ‖H − Hr‖∞ = σ_{i} + σ_{ni+1} + ...
It follows by a result of [BoD87] that the impulse response error for t > 0 satisfies:

    ‖g(t) − gr(t)‖₁ ≤ 2(2i − 2 + r)σi(G) + Σ_{j=i+r}^{ns} σj(G)

Evidently, Hankel norm approximation ensures some form of approximation of the impulse response too.

Unstable System Approximation

A transfer function G(s) with stable and unstable poles can be reduced by applying stable( ) to separate G(s) into a stable and unstable part.
3
Multiplicative Error Reduction

This chapter describes multiplicative error reduction, presenting two reasons to consider multiplicative rather than additive error reduction, one general and one specific.

Selecting Multiplicative Error Reduction

The general reason to use multiplicative error reduction is that many specifications are given using decibels; ±1 dB corresponds to a multiplicative error of about 12%.
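The 12% figure is easy to verify: a +1 dB deviation is a magnitude ratio of 10^{1/20}, so the relative (multiplicative) error is 10^{1/20} − 1:

```python
# 1 dB corresponds to a multiplicative error of about 12%:
one_db = 10 ** (1.0 / 20.0) - 1.0   # magnitude ratio for +1 dB, minus 1
print(round(100 * one_db, 1))       # → 12.2 (percent)
```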
Multiplicative Robustness Result

Suppose C stabilizes Ĝ, that ∆ = (G − Ĝ)Ĝ⁻¹ has no jω-axis poles, and that G has the same number of poles in Re[s] ≥ 0 as Ĝ. If for all ω

    ‖∆(jω)‖ ‖Ĝ(jω)C(jω)[I + Ĝ(jω)C(jω)]⁻¹‖ < 1        (3-1)

then C stabilizes G. This result indicates that if a controller C is designed to stabilize a nominal or reduced order model Ĝ, satisfaction of Equation 3-1 ensures that the controller also will stabilize the true plant G.
bandwidth at the expense of being larger outside this bandwidth, which would be preferable. Second, the previously used multiplicative error is (G − Ĝ)Ĝ⁻¹. In the algorithms that follow, the error δ = (G − Ĝ)G⁻¹ appears.
The objective of the algorithm is to approximate a high-order stable transfer function matrix G(s) by a lower-order Gr(s) with either inv(g)(g-gr) or (g-gr)inv(g) minimized, under the condition that Gr is stable and of the prescribed order.

Restrictions

This function has the following restrictions:
• The user must ensure that the input system is stable and nonsingular at s = infinity.
These cases are secured with the keywords right and left, respectively. If the wrong option is requested for a nonsquare G(s), an error message will result. The algorithm has the property that right half plane zeros of G(s) remain as right half plane zeros of Gr(s). This means that if G(s) has n+ zeros in Re[s] > 0, Gr(s) must have degree at least n+; otherwise, given that it has n+ zeros in Re[s] > 0, it would not be proper [Gre88].
2. With G(s) = D + C(sI − A)⁻¹B stable, with DD′ nonsingular and G(jω)G′(−jω) nonsingular for all ω, part of a state variable realization of a minimum phase stable W(s) is determined such that W′(−s)W(s) = G(s)G′(−s) with

    W(s) = D_W + C_W(sI − A_W)⁻¹B_W

The state variable matrices in W(s) are obtained as follows. The controllability grammian P associated with G(s) is first found from AP + PA′ + BB′ = 0, then A_W = A, B_W = PC′ + BD′.
strictly proper stable part of θ(s), as the square roots of the eigenvalues of PQ. Call these quantities νi. The Schur decompositions are

    V′A PQ VA = Sasc
    V′D PQ VD = Sdes

where VA, VD are orthogonal and Sasc, Sdes are upper triangular.

4.
state-variable representation of G. In this case, the user is effectively asking for Gr = G. When the phase matrix has repeated Hankel singular values, they must all be included or all excluded from the model, that is, ν_{nsr} = ν_{nsr+1} is not permitted; the algorithm checks for this. The number of νi equal to 1 is the number of zeros of G(s) in Re[s] > 0, and as mentioned already, these zeros remain as zeros of Gr(s).
Hankel Singular Values of Phase Matrix of Gr

The νi, i = 1, 2, ..., ns have been termed above the Hankel singular values of the phase matrix associated with G. The corresponding quantities for Gr are νi, i = 1, ..., nsr.
which also can be relevant in finding a reduced order model of a plant. The procedure requires G again to be nonsingular at ω = ∞, and to have no jω-axis poles. It is as follows:
1. Form H = G⁻¹. If G is described by state-variable matrices A, B, C, D, then H is described by A − BD⁻¹C, BD⁻¹, −D⁻¹C, D⁻¹. H is square, stable, and of full rank on the jω-axis.
2. Form Hr of the desired order to minimize approximately:

    ‖H⁻¹(H − Hr)‖∞

3. Set Gr = Hr⁻¹.
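The inverse-system realization in step 1 can be checked numerically: its frequency response should multiply with that of G to give the identity. The example matrices below are hypothetical (this is not the Xmath implementation):

```python
import numpy as np

def inverse_system(A, B, C, D):
    """State-space realization of H = G^-1 for square G with D nonsingular:
    (A - B D^-1 C,  B D^-1,  -D^-1 C,  D^-1), as in step 1 above."""
    Di = np.linalg.inv(D)
    return A - B @ Di @ C, B @ Di, -Di @ C, Di

# Hypothetical example: G(jw) * H(jw) should be the identity.
A = np.array([[-1.0, 0.2], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.5]])
D = np.array([[2.0]])
Ah, Bh, Ch, Dh = inverse_system(A, B, C, D)

w = 0.7
G = C @ np.linalg.inv(1j * w * np.eye(2) - A) @ B + D
H = Ch @ np.linalg.inv(1j * w * np.eye(2) - Ah) @ Bh + Dh
print(np.allclose(G @ H, np.eye(1)))
```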
The values of G(s) along the jω-axis are the same as the values of G̃(s) around a circle with diameter defined by [a − j0, b⁻¹ + j0] on the positive real axis, as shown in Figure 3-2.

Figure 3-2. Bilinear Mapping from G(s) to G̃(s) (Case 1)

Also, the values of G̃(s), as shown in Figure 3-3, along the jω-axis are the same as the values of G(s) around a circle with diameter defined by [−b⁻¹ + j0, −a + j0].
Any zero (or rank reduction) on the jω-axis of G(s) becomes a zero (or rank reduction) in Re[s] > 0 of G̃(s), and if G(s) has a zero (or rank reduction) at infinity, this is shifted to a zero (or rank reduction) of G̃(s) at the point b⁻¹ (in Re[s] > 0).
again with a bilinear transformation to secure multiplicative approximations over a limited frequency band.
There is one potential source of failure of the algorithm. Because G(s) is stable, G̃(s) certainly will be, as its poles will be in the left half plane circle on diameter (−ε⁻¹ + j0, 0). If G̃r(s) acquires a pole outside this circle (but still in the left half plane, of course)—and this appears possible in principle—Gr(s) will then acquire a pole in Re[s] > 0. Should this difficulty be encountered, a smaller value of ε should be used.
The conceptual basis of the algorithm can best be grasped by considering the case of scalar G(s) of degree n. Then one can form a minimum phase, stable W(s) with |W(jω)|² = |G(jω)|², and then an all-pass function (the phase function) W⁻¹(−s)G(s). This all-pass function has a mixture of stable and unstable poles, and it encodes the phase of G(jω).
eigenvalues of A − BD⁻¹C with the aid of schur( ). If any real part of the eigenvalues is less than eps, a warning is displayed. Next, a stabilizing solution Q is found for the following Riccati equation:

    QA + A′Q + (C − B′_W Q)′(DD′)⁻¹(C − B′_W Q) = 0

The function singriccati( ) is used; failure of the nonsingularity condition of G(jω) will normally result in an error message.
singular values of F(s) larger than 1 − ε (refer to steps 1 through 3 of the Restrictions section). The maximum order permitted is the number of nonzero eigenvalues of WcWo larger than ε.
4. Let r be the multiplicity of ν_{ns}. The algorithm approximates F(s) = C_W(sI − A)⁻¹B by a transfer function matrix F̂(s) of order ns − r, using Hankel norm approximation. The procedure is slightly different from that used in ophank( ).
Note: The expression F̂p(s) is the strictly proper part of F̂(s). The matrix ν_{ns}⁻¹[F(s) − F̂(s)] is all-pass; this property is not always secured in the multivariable case when ophank( ) is used to find a Hankel norm approximation of F(s).

5.
• Ŵ(s) and Ĝ(s) stand in the same relation as W(s) and G(s), that is:
  – Ŵ′(−s)Ŵ(s) = Ĝ(s)Ĝ′(−s)
  – With P̂Â′_F + Â_F P̂ = −B̂_F B̂′_F, there holds B_Ŵ = P̂C′_Ĝ + B_Ĝ D′_Ĝ, or

        B̂_F D′ + V₁C′ = P̂(DĈ_F + B′_W U₁Σ₁)′ + B̂_F(I − ν_{ns}T′)D′

  – With Q̂Â_F + Â′_F Q̂ = −Ĉ′_F Ĉ_F, there holds C_Ŵ = D_Ĝ⁻¹(C_Ĝ − B′_Ŵ Q̂), or

        (I − ν_{ns}T′)(I − ν_{ns}T)Ĉ_F = [D(I − ν_{ns}T)]⁻¹{DĈ_F + B′_W U₁(Σ₁
Error Bounds

The error bound formula (Equation 3-3) is a simple consequence of iterating (Equation 3-5). To illustrate, suppose there are three reductions G → Ĝ → Ĝ₂ → Ĝ₃, each by degree one.
For mulhank( ), this translates for a scalar system into

    −8.69 Σ_{i=nsr+1}^{ns} νi dB < 20 log₁₀ |Ĝ_{nsr}/G_{ns}| < 8.69 Σ_{i=nsr+1}^{ns} νi dB

and

    phase error < Σ_{i=nsr+1}^{ns} νi radians

The bounds are double for bst( ). The error as a function of frequency is always zero at ω = ∞ for bst( ) (or at ω = 0 if a transformation s → s⁻¹ is used), whereas no such particular property of the error holds for mulhank( ).
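The 8.69 constant in these bounds comes from converting a small relative error into decibels: for small x, 20 log₁₀(1 + x) ≈ (20/ln 10)x:

```python
import math

# For small x, 20*log10(1 + x) ≈ (20 / ln 10) * x ≈ 8.69 * x,
# which is the origin of the 8.69 factor in the bound above.
factor = 20.0 / math.log(10.0)
print(round(factor, 2))   # → 8.69
```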
The values of G(s) along the jω-axis are the same as the values of G̃(s) around a circle with diameter defined by [a − j0, b⁻¹ + j0] on the positive real axis (refer to Figure 3-2). Also, the values of G̃(s) along the jω-axis are the same as the values of G(s) around a circle with diameter defined by [−b⁻¹ + j0, −a + j0].
The error ‖G⁻¹(G − Gr)‖∞ will be overbounded by the error ‖G̃⁻¹(G̃ − G̃r)‖∞, and Gr will contain the same zeros in Re[s] ≥ 0 as G. If there is no zero (or rank reduction) of G(s) at the origin, one can take a = 0 and b⁻¹ = the bandwidth over which a good approximation of G(s) is needed, and at the very least b⁻¹ sufficiently large that the poles of G(s) lie in the circle of diameter [−b⁻¹ + j0, −a + j0].
Multiplicative approximation of G̃(s) (along the jω-axis) corresponds to multiplicative approximation of G(s) around a circle in the right half plane, touching the jω-axis at the origin. For those points on the jω-axis near the circle, there will be good multiplicative approximation of G(jω). If a good approximation of G(s) over an interval [−jΩ, jΩ] is desired, then ε⁻¹ = 5Ω or 10Ω are good choices. Reduction then proceeds as follows:
1. Form G̃(s).
4
Frequency-Weighted Error Reduction

This chapter describes frequency-weighted error reduction problems, including a discussion of controller reduction and fractional representations.
(so that VV* = Φ) is logical. However, a major use of weighting is in controller reduction, which is now described.

Controller Reduction

Frequency-weighted error reduction becomes particularly important in reducing controller dimension. The LQG and H∞ design procedures lead to controllers which have order equal to, or roughly equal to, the order of the plant.
is minimized (and of course is less than 1). Notice that these two error measures are like those of Equation 4-1 and Equation 4-2. The fact that the plant ought to show up in a good formulation of a controller reduction problem is evidenced by the appearance of P in the two weights. It is instructive to consider the shape of the weighting matrix or function P(I + CP)⁻¹. Consider the scalar plant case.
Most of these ideas are discussed in [Enn84], [AnL89], and [AnM89]. The function wtbalance( ) implements weighted reduction, with five choices of error measure, namely EIS, EOS, EM, EMS, and E1 with arbitrary V(jω). The first four are specifically for controller reduction, whereas the last is not aimed specifically at this situation.
Fractional Representations

The treatment of jω-axis or right half plane poles in the above schemes is crude: they are simply copied into the reduced order controller. A different approach comes when one uses a so-called matrix fraction description (MFD) to represent the controller, and controller reduction procedures based on these representations (only for continuous-time) are found in fracred( ). Consider first a scalar controller C(s) = n(s)/d(s).
• Form the reduced controller by interconnecting, using negative feedback, the second output of Gr to the input, that is, set

    Cr(s) = nr/(dr + er)

Nothing has been said as to how d should be chosen—and the end result of the reduction, Cr(s), depends on dr. Nor has the reduction procedure been specified.
Matrix algebra shows that C(s) can be described through a left or right matrix fraction description

    C(s) = D_L⁻¹(s)N_L(s) = N_R(s)D_R⁻¹(s)

with D_L, and related values, all stable transfer function matrices.
The left MFD corresponds to the setup of Figure 4-3.

Figure 4-3. [Block diagram: P(s) in feedback with K_R(sI − A + K_E C)⁻¹ and inputs through B, K_E]
Figure 4-4. Redrawn; Individual Signal Paths as Vector Paths

It is possible to verify that

    (I + P̂G)⁻¹P̂ = [C(sI − A + K_E C)⁻¹B    I − C(sI − A + K_E C)⁻¹K_E]

and accordingly the output weight W = (I + P̂G)⁻¹P̂ can be used in an error measure ‖W(G − Gr)‖.
(Here, the Wi and Vi are submatrices of W, V.) Evidently,

    [D_L  N_L]V = I  and  W[N_R′  D_R′]′ = I

Some manipulation shows that trying to preserve these identities after approximation of D_L, N_L or N_R, D_R suggests use of the error measures ‖W(G − Gr)‖∞ and ‖(H − Hr)V‖∞. For further details, refer to [AnM89] and [LAL90].
• Reduce the order of a transfer function matrix C(s) through frequency-weighted balanced truncation, a stable frequency weight V(s) being prescribed.

The syntax is more accented towards the first use. For the second use, the user should set S = 0, NS = 0. This results (automatically) in SCLR = NSCLR = 0. The user will also select type="input spec". Let Cr(s) be the reduced order approximation of C(s) which is being sought.
This rather crude approach to the handling of the unstable part of a controller is avoided in fracred( ), which provides an alternative to wtbalance( ) for controller reduction, at least for an important family of controllers.

Table 4-2.
3. Compute weighted Hankel singular values σi (described in more detail later). If the order of Cr(s) is not specified a priori, it must be input at this time. Certain values may be flagged as unacceptable for various reasons. In particular, nscr cannot be chosen so that σ_{nscr} = σ_{nscr+1}.
4. Execute the reduction step on the stable part of C(s), based on a modification of redschur( ) to accommodate frequency weighting, and yielding the stable part of Cr(s).
5.
and the observability grammian Q, defined in the obvious way, is written as

    Q = | Qcc   Qcw |
        | Q′cw  Qww |

It is trivial to verify that Qcc Ac + A′c Qcc = −C′c Cc, so that Qcc is the observability grammian of Cs(s) alone, as well as a submatrix of Q. The weighted Hankel singular values of Cs(s) are the square roots of the eigenvalues of PccQcc.
From these quantities the transformation matrices used for calculating Csr(s), the stable part of Cr(s), are defined

    S_lbig = V_lbig V_ebig S_ebig^{−1/2}
    S_rbig = V_rbig V_ebig S_ebig^{−1/2}

and then

    A_CR = S′_lbig A_C S_rbig
    B_CR = S′_lbig B_C
    C_CR = C_C S_rbig
    D_CR = D_C

Just as in unweighted balanced truncation, where the reduced order transfer function matrix is guaranteed stable, the same is guaranteed to be true in weighted balanced truncation when ei
3. Only continuous systems are accepted; for discrete systems, use makecontinuous( ) before calling fracred( ), then discretize the result:
   Sys=fracred(makecontinuous(SysD));
   SysD=discretize(Sys);
Defining and Reducing a Controller
Suppose P(s) = C(sI − A)^(-1)B, and that A − BKR and A − KEC are stable (where KR is a stabilizing state-feedback gain and KE a stabilizing observer gain).
to, for example,

    Er(s) = [KR; C] (sI − Â)^(-1) KE

through, for example, balanced truncation, and then defining

    C(s) = KR (sI − Â)^(-1) KE [I + C (sI − Â)^(-1) KE]^(-1)
         = KR (sI − Â + KE C)^(-1) KE

For the second rationale, consider Figure 4-5.

Figure 4-5. (Block diagram: an observer with gain KE and integrator 1/s produces the state estimate, which is fed back through the gain KR; the loop is closed around the plant P(s), with the estimator dynamics built on A − BKR.)
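The identity above — realizing the controller as an observer with gain KE combined with state feedback KR — can be sketched as a small Python helper. This is an illustration only (not part of the Xmath toolset), and the plant and gains below are hypothetical placeholders chosen so that A − B·KR and A − KE·C are stable.

```python
import numpy as np

def observer_controller(A, B, C, KR, KE):
    """State-space realization of C(s) = KR (sI - A + B*KR + KE*C)^{-1} KE,
    i.e. KR (sI - Ahat + KE*C)^{-1} KE with Ahat = A - B*KR."""
    Ac = A - B @ KR - KE @ C   # controller "A" matrix
    Bc = KE                    # driven by the plant output
    Cc = KR                    # outputs the feedback signal
    Dc = np.zeros((KR.shape[0], KE.shape[1]))
    return Ac, Bc, Cc, Dc

# Tiny illustrative double-integrator data (not the manual's example)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
KR = np.array([[1.0, 2.0]])    # assumed stabilizing state-feedback gain
KE = np.array([[2.0], [1.0]])  # assumed stabilizing observer gain

Ac, Bc, Cc, Dc = observer_controller(A, B, C, KR, KE)
print(np.linalg.eigvals(Ac))   # controller poles (all in the left half-plane)
```

The controller poles are the eigenvalues of A − B·KR − KE·C, which is exactly the denominator matrix appearing in the second form of the identity.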
Controller reduction proceeds by implementing the same connection rule but on reduced versions of the two transfer function matrices. When KE has been defined through Kalman filtering considerations, the spectrum of the signal driving KE in Figure 4-5 is white, with intensity Qyy.
6. Check the stability of the closed-loop system with Cr(s).

When type="left perf" is specified, one works with

    E(s) = KR (sI − A + KE C)^(-1) [B  KE]        (4-11)

which is formed from the numerator and denominator of the MFD in Equation 4-5.
Additional Background
A discussion of the stability robustness measure can be found in [AnM89] and [LAL90]. The idea can be understood with reference to the transfer functions E(s) and Er(s) used in discussing type="right perf". It is possible to argue (through block diagram manipulation) that:
• C(s) stabilizes P(s) when E(s) stabilizes, as a series compensator with unity negative feedback, the augmented plant P̂(s) = [P(s); I].
The four schemes all produce different HSVs; it follows that it may be prudent to try all four schemes for a particular controller reduction. Recall again that their relative sizes are only a guide as to what can be thrown away without incurring much error. There is no simple rule to indicate which of the four schemes will be the most effective at controller reduction. Two rough rules can, however, be formulated.
5 Utilities This chapter describes three utility functions: hankelsv( ), stable( ), and compare( ). The background to hankelsv( ), which calculates Hankel singular values, was presented in Chapter 1, Introduction. Hankel singular values are also calculated in other functions, sometimes by other procedures. A comparison of the procedures is given in the Hankel Singular Values section.
The grammian matrices are defined by solving the equations (in continuous time)

    A Wc + Wc A′ = −B B′
    Wo A + A′ Wo = −C′ C

and, in discrete time,

    Wc − A Wc A′ = B B′
    Wo − A′ Wo A = C′ C

The computations are effected with lyapunov( ), and stability is checked, which is time-consuming. The Hankel singular values are the square roots of the eigenvalues of the product Wc Wo.
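As a sketch of the computation just described — in Python with SciPy's Lyapunov solver standing in for Xmath's lyapunov( ), and with a made-up two-state example system:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative stable system (not from the manual).
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])

# Continuous-time grammians:  A Wc + Wc A' = -B B'   and   Wo A + A' Wo = -C' C
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values: square roots of the eigenvalues of Wc Wo
hsv = np.sort(np.sqrt(np.linalg.eigvals(Wc @ Wo).real))[::-1]
print(hsv)
```

For this symmetric example the two grammians coincide, and the two Hankel singular values differ by well over an order of magnitude — the kind of gap that suggests a first-order approximation would be adequate.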
Doubtful ones are those for which the magnitude of the real part of the eigenvalue is less than or equal to tol for continuous time, or for which the eigenvalue magnitude lies within the range [1 − tol, 1 + tol] for discrete time. A warning is given if doubtful eigenvalues exist. The algorithm then computes a real ordered Schur decomposition of A so that after transformation

    A = [ AS  ASU ]
        [ 0   AU  ]

where the eigenvalues of AS and AU are respectively stable and unstable.
After this last transformation, and with

    B = [ BS ]        C = [ CS  CU ]
        [ BU ]

it follows that

    SysS = [AS, BS; CS, D]    and    SysU = [AU, BU; CU, 0]

By combining the transformation yielding the real ordered Schur form for A with the transformation defined using X, the overall transformation T is readily identified. In case all eigenvalues of A are stable or all are unstable, this is flagged, and T = I.
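The real ordered Schur step underlying stable( ) can be sketched in Python using SciPy's schur with eigenvalue sorting. The matrix below is illustrative, and the subsequent block-diagonalization via the Sylvester-equation matrix X (which fully decouples SysS from SysU) is omitted here.

```python
import numpy as np
from scipy.linalg import schur

# Illustrative A with one stable and one unstable eigenvalue.
A = np.array([[1.0, 2.0],
              [0.0, -3.0]])

# Real ordered Schur decomposition with stable (left-half-plane)
# eigenvalues sorted to the top-left block:
#     Z' A Z = [ AS  ASU ]
#              [ 0   AU  ]
T, Z, ns = schur(A, sort='lhp')   # ns = number of stable eigenvalues
AS = T[:ns, :ns]
AU = T[ns:, ns:]
print(np.linalg.eigvals(AS), np.linalg.eigvals(AU))
```

The count ns returned by the sorted decomposition is what determines how B and C are partitioned into (BS, BU) and (CS, CU).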
6 Tutorial This chapter illustrates a number of the MRM functions and their underlying ideas. A plant and full-order controller are defined, and then the effects of various reduction algorithms are examined. The data for this example is stored in the file mr_disc.xmd in the Xmath demos directory. To follow the example, start Xmath, and then select File»Load from the Xmath Commands menu, or enter the load command with the file specification appropriate to your operating system from the Xmath Commands area.
A minimal realization in modal coordinates is C(sI − A)^(-1)B where:

    A = diag([0 1; 0 0], [-0.015 0.765; -0.765 -0.015], [-0.028 1.410; -1.410 -0.028], [-0.04 1.85; -1.85 -0.04])
    B = [0.026; -0.251; 0.033; -0.886; -4.017; 0.145; 3.604; 0.280]
    C′ = [-0.996; -0.105; 0.261; 0.009; -0.001; -0.043; 0.002; -0.…]
With a state weighting matrix

    Q = 1e-3*diag([2,2,80,80,8,8,3,3]); R = 1;

(and unity control weighting), a state-feedback control gain is determined through a linear-quadratic performance index minimization as:

    [Kr,ev] = regulator(sys,Q,R);

A − B*Kr is stable. Next, with an input noise variance matrix Q = Wt B B′ Wt where

    Wt = diag([0.346, 0.346, 0.024, 0.024, 0.042, 0.042, 0.042, 0.…])
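The regulator( ) call above solves a linear-quadratic problem. A minimal sketch of the same computation in Python, using SciPy's Riccati solver on a hypothetical double-integrator plant (not the manual's 8-state model):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical plant: a double integrator.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)            # state weighting
R = np.array([[1.0]])    # unity control weighting

# Solve the control algebraic Riccati equation and form Kr = R^{-1} B' P,
# mirroring what a call like regulator(sys,Q,R) computes.
P = solve_continuous_are(A, B, Q, R)
Kr = np.linalg.solve(R, B.T @ P)

print(np.linalg.eigvals(A - B @ Kr))  # closed-loop poles: A - B*Kr is stable
```

For this particular plant and weights the optimal gain works out to Kr = [1, √3], and the closed-loop poles of A − B·Kr sit in the open left half-plane, just as the tutorial asserts for its own Kr.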
recovery at low frequencies; consequently, the loop gain rolls off faster at high frequencies than Kr(jωI − A)^(-1)B does, which is desired. Figure 6-2 displays the magnitudes of the plant transfer function, the compensator transfer function, and the loop gain, as well as the constraints; evidently the compensated plant meets the constraints.
Controller Reduction
This section contrasts the effect of unweighted and weighted controller reduction. Unweighted reduction is examined first, through redschur( ) (using balance( ) or balmoore( ) will give similar results). The Hankel singular values of the controller transfer function are:

    6.264×10^-2, 4.901×10^-2, 2.581×10^-2, 2.474×10^-2, 1.545×10^-2, 1.335×10^-2, 9.467×10^-3, 9.466×10^-3

A reduction to order 2 is attempted. The ending Hankel singular values, that is, σ3, σ4, ...
Figures 6-3, 6-4, and 6-5 display the outcome of the reduction. The loop gain is shown in Figure 6-3. The error near the unity gain crossover frequency may not look large, but it is considerably larger than that obtained through frequency weighted reduction methods, as described later. Figure 6-3 also shows the inability to suppress all three plant resonances, in contrast to the full-order controller. Two are such as to cause violation of the specifications.
Generate Figure 6-4:
compare(syscl,sysclr,w,{radians,type=5})
f4=plot({keep,legend=["original","reduced"]})
Figure 6-4.
Generate Figure 6-5:
tvec=0:(140/99):140;
compare(syscl,sysclr,tvec,{type=7})
f5=plot({keep,legend=["original","reduced"]})
Figure 6-5. Step Response with redschur
ophank( )
ophank( ) is next used to reduce the controller, with the results shown in Figures 6-6, 6-7, and 6-8.
Generate Figure 6-6:
[syscr,sysu,hsv]=ophank(sysc,2);
svalsrol = svplot(sys*syscr,w,{radians});
plot(svalsol, {keep})
f6=plot(wc, constr, {keep,!grid, title="Open-loop gain using ophank()"})
Figure 6-6.
Generate Figure 6-7:
syscl = feedback(sysol);
sysolr=sys*syscr;
sysclr=feedback(sysolr);
compare(syscl,sysclr,w,{radians,type=5})
f7=plot({keep,legend=["original","reduced"]})
Figure 6-7. Closed-Loop Gain with ophank
Generate Figure 6-8:
tvec=0:(140/99):140;
compare(syscl,sysclr,tvec,{type=7})
f8=plot({keep,legend=["original","reduced"]})
Figure 6-8. Step Response with ophank
The open-loop gain, closed-loop gain, and step response are all inferior to those obtained with redschur( ). This emphasizes the point that one cannot automatically assume that, because the error bound formula for ophank( ) is more attractive than that for redschur( ), the error itself will be better for ophank( ).
wtbalance
The next command examined is wtbalance with the option "match".
[syscr,sysclr,hsv] = wtbalance(sys,sysc,"match",2)
Recall that this command should promote matching of closed-loop transfer functions. The weighted Hankel singular values are:

    1.486, 4.513×10^-1, 8.420×10^-2, 5.869×10^-2, 1.999×10^-2, 1.382×10^-2, 7.198×10^-3, 6.…
The following function calls produce Figure 6-9:
svalsrol = svplot(sys*syscr,w,{radians})
plot(svalsol, {keep})
f9=plot(wc, constr, {keep,!grid, legend=["reduced","original","constrained"], title="Open-Loop Gain Using wtbalance()"})
Figure 6-9.
Generate Figure 6-10:
syscl = feedback(sysol);
sysolr=sys*syscr;
sysclr=feedback(sysolr);
compare(syscl,sysclr,w,{radians,type=5})
f10=plot({keep,legend=["original","reduced"]})
Figure 6-10. Closed-Loop Gain with wtbalance
Generate Figure 6-11:
tvec=0:(140/99):140;
compare(syscl,sysclr,tvec,{type=7})
f11=plot({keep,legend=["original","reduced"]})
Figure 6-11. Step Response with wtbalance
Figures 6-9, 6-10, and 6-11 are obtained for wtbalance with the option "input spec". Evidently, there is little difference between this and the result with the option "match". One notices marginally better matching in the region of interest (0.1 to 5 rad per second) at the expense of matching at other frequencies.
Generate Figure 6-12:
vtf=poly([-0.1,-10])/poly([-1,-1.4])
[,sysv]=check(vtf,{ss,convert});
svalsv = svplot(sysv,w,{radians});
Figure 6-12. Frequency Response of the Weight V(jω)
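The weight V(s) defined by vtf above can also be evaluated outside Xmath. A Python sketch using scipy.signal with the same pole/zero data as vtf:

```python
import numpy as np
from scipy import signal

# V(s) = (s + 0.1)(s + 10) / ((s + 1)(s + 1.4)), the same zero/pole data
# as vtf = poly([-0.1,-10])/poly([-1,-1.4]) in the text.
num = np.poly([-0.1, -10.0])
den = np.poly([-1.0, -1.4])

w = np.logspace(-3, 3, 400)          # frequencies, rad/s
_, h = signal.freqs(num, den, worN=w)

# |V| -> 0.1*10/(1*1.4) ~ 0.714 at low frequency, and -> 1 at high frequency
print(abs(h[0]), abs(h[-1]))
```

The magnitude climbs between roughly 0.1 and 10 rad/s — the weight emphasizes exactly the band around the loop's unity-gain crossover, which is why it is a sensible choice for "input spec" reduction.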
Generate Figure 6-13:
[syscr,sysclr,hsv] = wtbalance(sys,sysc, "input spec",2,sysv)
svalsrol = svplot(sys*syscr,w,{radians})
plot(svalsol, {keep})
f13=plot(wc,constr,{keep, !grid, legend=["reduced","original","constrained"], title="Open-Loop Gain with wtbal(), \"input spec\""})
Figure 6-13.
Generate Figure 6-14:
syscl = feedback(sysol);
sysolr=sys*syscr;
sysclr=feedback(sysolr);
compare(syscl,sysclr,w,{radians,type=5})
f14=plot({keep,legend=["original","reduced"]})
Figure 6-14. System Singular Values of wtbalance with "input spec"
Generate Figure 6-15:
tvec=0:(140/99):140;
compare(syscl,sysclr,tvec,{type=7})
f15=plot({keep,legend=["original","reduced"]})
Figure 6-15.
fracred
fracred, the next command examined, has four options—"right stab", "left stab", "right perf", and "left perf". The options "left stab", "right perf", and "left perf" all produce instability. Given the relative magnitudes of the Hankel singular values, this is perhaps not surprising. Figures 6-16, 6-17, and 6-18 illustrate the results using "right stab".
Generate Figure 6-17:
syscl = feedback(sysol);
sysolr=sys*syscr;
sysclr=feedback(sysolr);
compare(syscl,sysclr,w,{radians,type=5})
f17=plot({keep,legend=["original","reduced"]})
Figure 6-17.
Generate Figure 6-18:
tvec=0:(140/99):140;
compare(syscl,sysclr,tvec,{type=7})
f18=plot({keep,legend=["original","reduced"]})
Figure 6-18. Step Response with fracred
The end result is comparable to that from wtbalance( ) with option "match". We can create a table to examine the values of the Hankel singular values based on different decomposition approaches.
set precision 3   # Optional: a smaller precision is set here so that
set format fixed  # the table fits in the manual.
hsvtable = [...
  "right stab:", string(hsvrs');
  "left stab:", string(hsvls');
  "right perf:", string(hsvrp');
  "left perf:", string(hsvlp')]?
hsvtable (a rectangular matrix of strings) =

    right stab:  3.308  0.728  0.112  0.078  0.024  0.018  0.011  0.010
    left stab:   1.403  1.331  1.133  1.092  0.965  0.549  0.526  0.313
    right perf:  0.034  0.016  0.013  0.010  0.004  0.004  0.000  0.000
    left perf:   4.907  4.874  3.846  3.781  1.225  1.175  0.505  0.…
A Bibliography [AnJ] BDO Anderson and B. James, “Algorithm for multiplicative approximation of a stable linear system,” in preparation. [AnL89] BDO Anderson and Y. Liu, “Controller reduction: Concepts and approaches,” IEEE Transactions on Automatic Control, Vol. 34, 1989, pp. 802–812. [AnM89] BDO Anderson and J. B. Moore, Optimal Control: Linear Quadratic Methods, Prentice-Hall Inc., Englewood Cliffs, NJ, 1989. [BoD87] S. P. Boyd and J.
[GrA90] M. Green and BDO Anderson, “Generalized balanced stochastic truncation,” Proceedings of the 29th CDC, 1990. [Gre88] M. Green, “Balanced stochastic realization,” Linear Algebra and Applications, Vol. 98, 1988, pp. 211–247. [Gre88a] M. Green, “A relative error bound for balanced stochastic truncation,” IEEE Transactions on Automatic Control, Vol. 33, 1988, pp. 961–965. [HiP90] D.
[SaC88] M. G. Safonov and R. Y. Chiang, “Model reduction for robust control: a Schur relative-error method,” Proceedings of the American Control Conference, 1988, pp. 1685–1690. [Saf87] M. G. Safonov, “Imaginary-axis zeros in multivariable H∞ optimal control,” Modeling, Robustness, and Sensitivity Reduction in Control, (Ed. R. F. Curtain), Springer Verlag, Berlin, 1987. [SCL90] M. G. Safonov, R. Y.
[Doy82] J. C. Doyle, “Analysis of Feedback Systems with Structured Uncertainties,” IEEE Proceedings, November 1982. [DWS82] J. C. Doyle, J. E. Wall, and G. Stein, “Performance and Robustness Analysis for Structured Uncertainties,” Proceedings IEEE Conference on Decision and Control, pp. 629–636, 1982. [FaT88] M. K. Fan and A. L. Tits, “m-form Numerical Range and the Computation of the Structured Singular Value,” IEEE Transactions on Automatic Control, Vol. 33, pp.
[SLH81] M. G. Safonov, A. J. Laub, and G. L. Hartmann, “Feedback Properties of Multivariable Systems: The Role and Use of the Return Difference Matrix,” IEEE Transactions on Automatic Control, Vol. AC-26, February 1981. [SA88] G. Stein and M. Athans, “The LQG/LTR Procedure for Multivariable Control Design,” IEEE Transactions on Automatic Control, Vol. AC-32, No. 2, February 1987, pp. 105–114. [Za81] G.
Technical Support and Professional Services B Visit the following sections of the National Instruments Web site at ni.com for technical support and professional services: • Support—Online technical support resources at ni.
Index

Symbols
*, 1-6
´, 1-6

A
additive error, reduction, 2-1
algorithm
  balanced stochastic truncation (bst), 3-4
  fractional representation reduction, 4-18
  Hankel multi-pass, 2-20
  optimal Hankel norm reduction, 2-15
  stable, 5-2
  weighted balance, 4-12
all-pass transfer function, 1-6, 2-4

C
controller robustness, 4-2
conventions used in the manual, iv

D
diagnostic tools (NI resources), B-1
documentation
  conventions used in the manual, iv
  NI resources, B-1
drivers (NI resources), B-1

E
equality bounds, tigh
G
grammians
  controllability, 1-7
  description of, 1-7
  observability, 1-7

H
Hankel matrix, 1-9
Hankel norm approximation, 2-6
Hankel singular values, 1-8, 3-9, 5-1
hankelsv, 1-5, 5-1
algorithm, multipass, 2-20
help, technical support, B-1

N
National Instruments support and services, B-1
nomenclature, 1-2
nomenclature for MRM, 1-6
numerical conditioning, 2-8

O
ophank, 1-5, 2-14
  discrete-time systems, 2-21
  error formulas, 2-22
  impulse response error, 2-22
  multipass, 2-20
  onepass, 2-18
  unstable sys
stable, 1-5, 5-2
sup, 1-6
support, technical, B-1

T
technical support, B-1
tight equality bounds, 1-7
training and certification (NI resources), B-1
transfer function, allpass, 1-6
troubleshooting (NI resources), B-1
truncate, 1-5, 2-4, 2-11

U
unstable zeros, 2-3

W
Web resources, B-1
wtbalance, 1-5, 4-10