Robust and Optimal Control

Published on February 23, 2014

Author: WenChihPei

Source: slideshare.net



ROBUST AND OPTIMAL CONTROL

KEMIN ZHOU
with JOHN C. DOYLE and KEITH GLOVER

PRENTICE HALL, Englewood Cliffs, New Jersey 07632

TO OUR PARENTS

Contents

Preface
Notation and Symbols
List of Acronyms

1  Introduction
   1.1  Historical Perspective
   1.2  How to Use This Book
   1.3  Highlights of The Book

2  Linear Algebra
   2.1  Linear Subspaces
   2.2  Eigenvalues and Eigenvectors
   2.3  Matrix Inversion Formulas
   2.4  Matrix Calculus
   2.5  Kronecker Product and Kronecker Sum
   2.6  Invariant Subspaces
   2.7  Vector Norms and Matrix Norms
   2.8  Singular Value Decomposition
   2.9  Generalized Inverses
   2.10 Semidefinite Matrices
   2.11 Matrix Dilation Problems*
   2.12 Notes and References

3  Linear Dynamical Systems
   3.1  Descriptions of Linear Dynamical Systems

   3.2  Controllability and Observability
   3.3  Kalman Canonical Decomposition
   3.4  Pole Placement and Canonical Forms
   3.5  Observers and Observer-Based Controllers
   3.6  Operations on Systems
   3.7  State Space Realizations for Transfer Matrices
   3.8  Lyapunov Equations
   3.9  Balanced Realizations
   3.10 Hidden Modes and Pole-Zero Cancelation
   3.11 Multivariable System Poles and Zeros
   3.12 Notes and References

4  Performance Specifications
   4.1  Normed Spaces
   4.2  Hilbert Spaces
   4.3  Hardy Spaces H2 and H∞
   4.4  Power and Spectral Signals
   4.5  Induced System Gains
   4.6  Computing L2 and H2 Norms
   4.7  Computing L∞ and H∞ Norms
   4.8  Notes and References

5  Stability and Performance of Feedback Systems
   5.1  Feedback Structure
   5.2  Well-Posedness of Feedback Loop
   5.3  Internal Stability
   5.4  Coprime Factorization over RH∞
   5.5  Feedback Properties
   5.6  The Concept of Loop Shaping
   5.7  Weighted H2 and H∞ Performance
   5.8  Notes and References

6  Performance Limitations
   6.1  Introduction
   6.2  Integral Relations
   6.3  Design Limitations and Sensitivity Bounds
   6.4  Bode's Gain and Phase Relation
   6.5  Notes and References

7  Model Reduction by Balanced Truncation
   7.1  Model Reduction by Balanced Truncation
   7.2  Frequency-Weighted Balanced Model Reduction
   7.3  Relative and Multiplicative Model Reductions
   7.4  Notes and References

8  Hankel Norm Approximation
   8.1  Hankel Operator
   8.2  All-pass Dilations
   8.3  Optimal Hankel Norm Approximation
   8.4  L∞ Bounds for Hankel Norm Approximation
   8.5  Bounds for Balanced Truncation
   8.6  Toeplitz Operators
   8.7  Hankel and Toeplitz Operators on the Disk*
   8.8  Nehari's Theorem*
   8.9  Notes and References

9  Model Uncertainty and Robustness
   9.1  Model Uncertainty
   9.2  Small Gain Theorem
   9.3  Stability under Stable Unstructured Uncertainties
   9.4  Unstructured Robust Performance
   9.5  Gain Margin and Phase Margin
   9.6  Deficiency of Classical Control for MIMO Systems
   9.7  Notes and References

10 Linear Fractional Transformation
   10.1 Linear Fractional Transformations
   10.2 Examples of LFTs
   10.3 Basic Principle
   10.4 Redheffer Star-Products
   10.5 Notes and References

11 Structured Singular Value
   11.1 General Framework for System Robustness
   11.2 Structured Singular Value
   11.3 Structured Robust Stability and Performance
   11.4 Overview on μ Synthesis
   11.5 Notes and References

12 Parameterization of Stabilizing Controllers
   12.1 Existence of Stabilizing Controllers
   12.2 Duality and Special Problems
   12.3 Parameterization of All Stabilizing Controllers
   12.4 Structure of Controller Parameterization
   12.5 Closed-Loop Transfer Matrix
   12.6 Youla Parameterization via Coprime Factorization*
   12.7 Notes and References

13 Algebraic Riccati Equations
   13.1 All Solutions of A Riccati Equation
   13.2 Stabilizing Solution and Riccati Operator
   13.3 Extreme Solutions and Matrix Inequalities
   13.4 Spectral Factorizations
   13.5 Positive Real Functions
   13.6 Inner Functions
   13.7 Inner-Outer Factorizations
   13.8 Normalized Coprime Factorizations
   13.9 Notes and References

14 H2 Optimal Control
   14.1 Introduction to Regulator Problem
   14.2 Standard LQR Problem
   14.3 Extended LQR Problem
   14.4 Guaranteed Stability Margins of LQR
   14.5 Standard H2 Problem
   14.6 Optimal Controlled System
   14.7 H2 Control with Direct Disturbance Feedforward*
   14.8 Special Problems
   14.9 Separation Theory
   14.10 Stability Margins of H2 Controllers
   14.11 Notes and References

15 Linear Quadratic Optimization
   15.1 Hankel Operators
   15.2 Toeplitz Operators
   15.3 Mixed Hankel-Toeplitz Operators
   15.4 Mixed Hankel-Toeplitz Operators: The General Case*
   15.5 Linear Quadratic Max-Min Problem
   15.6 Notes and References

16 H∞ Control: Simple Case
   16.1 Problem Formulation
   16.2 Output Feedback H∞ Control
   16.3 Motivation for Special Problems
   16.4 Full Information Control
   16.5 Full Control
   16.6 Disturbance Feedforward
   16.7 Output Estimation
   16.8 Separation Theory
   16.9 Optimality and Limiting Behavior
   16.10 Controller Interpretations
   16.11 An Optimal Controller
   16.12 Notes and References

17 H∞ Control: General Case
   17.1 General H∞ Solutions
   17.2 Loop Shifting
   17.3 Relaxing Assumptions
   17.4 H2 and H∞ Integral Control
   17.5 H∞ Filtering
   17.6 Youla Parameterization Approach*
   17.7 Connections
   17.8 State Feedback and Differential Game
   17.9 Parameterization of State Feedback H∞ Controllers
   17.10 Notes and References

18 H∞ Loop Shaping
   18.1 Robust Stabilization of Coprime Factors
   18.2 Loop Shaping Using Normalized Coprime Stabilization
   18.3 Theoretical Justification for H∞ Loop Shaping
   18.4 Notes and References

19 Controller Order Reduction
   19.1 Controller Reduction with Stability Criteria
   19.2 H∞ Controller Reductions
   19.3 Frequency-Weighted L∞ Norm Approximations
   19.4 An Example
   19.5 Notes and References

20 Structure Fixed Controllers
   20.1 Lagrange Multiplier Method
   20.2 Fixed Order Controllers
   20.3 Notes and References

21 Discrete Time Control
   21.1 Discrete Lyapunov Equations
   21.2 Discrete Riccati Equations
   21.3 Bounded Real Functions
   21.4 Matrix Factorizations
   21.5 Discrete Time H2 Control
   21.6 Discrete Balanced Model Reduction
   21.7 Model Reduction Using Coprime Factors
   21.8 Notes and References

Bibliography

Index

Preface

This book is inspired by the recent developments in robust and H∞ control theory, particularly the state-space H∞ control theory developed in the paper by Doyle, Glover, Khargonekar, and Francis [1989] (known as the DGKF paper). We give a fairly comprehensive and step-by-step treatment of the state-space H∞ control theory in the style of DGKF. We also treat robust control problems with unstructured and structured uncertainties. The linear fractional transformation (LFT) and the structured singular value (known as μ) are introduced as the unified tools for robust stability and performance analysis and synthesis. Chapter 1 contains a more detailed chapter-by-chapter review of the topics and results presented in this book.

We would like to thank Professor Bruce A. Francis at the University of Toronto for his helpful comments and suggestions on early versions of the manuscript. As a matter of fact, this manuscript was inspired by his lectures given at Caltech in 1987 and by his masterpiece, A Course in H∞ Control Theory. We are grateful to Professor Andre Tits at the University of Maryland, who has made numerous helpful comments and suggestions that have greatly improved the quality of the manuscript. Professor Jakob Stoustrup, Professor Hans Henrik Niemann, and their students at The Technical University of Denmark have read various versions of this manuscript and have made many helpful comments and suggestions. We are grateful for their help. Special thanks go to Professor Andrew Packard at the University of California, Berkeley for his help during the preparation of the early versions of this manuscript. We are also grateful to Professor Jie Chen at the University of California, Riverside for providing material used in Chapter 6. We would also like to thank Professor Kang-Zhi Liu at Chiba University (Japan) and Professor Tongwen Chen at the University of Calgary for their valuable comments and suggestions. In addition, we would like to thank G. Balas, C. Beck, D. S. Bernstein, G. Gu, W. Lu, J. Morris, M. Newlin, L. Qiu, H. P. Rotstein, and many other people for their comments and suggestions. The first author is especially grateful to Professor Pramod P. Khargonekar at The University of Michigan for introducing him to robust and H∞ control, and to Professor Tryphon Georgiou at the University of Minnesota for encouraging him to complete this work.

Kemin Zhou

John C. Doyle
Keith Glover


Notation and Symbols

R              field of real numbers
C              field of complex numbers
F              field, either R or C
R+             nonnegative real numbers
C− and C̄−      open and closed left-half plane
C+ and C̄+      open and closed right-half plane
C0, jR         imaginary axis
D              unit disk
D̄              closed unit disk
∂D             unit circle

∈              belong to
⊂              subset
∪              union
∩              intersection

□, ◇, ~        end of proof, end of example, end of remark
:=             defined as
≳, ≲           asymptotically greater than, asymptotically less than
≫, ≪           much greater than, much less than

ᾱ              complex conjugate of α ∈ C
|α|            absolute value of α ∈ C
Re(α)          real part of α ∈ C
δ(t)           unit impulse
δij            Kronecker delta function: δii = 1 and δij = 0 if i ≠ j
1+(t)          unit step function

In             n × n identity matrix
[aij]          a matrix with aij as its i-th row, j-th column element
diag(a1, …, an)  an n × n diagonal matrix with ai as its i-th diagonal element
A^T            transpose of A
A*             adjoint operator of A, or complex conjugate transpose of A
A^{−1}         inverse of A
A^+            pseudo inverse of A
A^{−*}         shorthand for (A^{−1})*
det(A)         determinant of A
Trace(A)       trace of A
λ(A)           eigenvalue of A
ρ(A)           spectral radius of A
σ(A)           the set of spectrum (eigenvalues) of A
σ̄(A)           largest singular value of A
σ̲(A)           smallest singular value of A
σi(A)          i-th singular value of A
κ(A)           condition number of A
‖A‖            spectral norm of A: ‖A‖ = σ̄(A)
Im(A), R(A)    image (or range) space of A
Ker(A), N(A)   kernel (or null) space of A
X−(A)          stable invariant subspace of A
X+(A)          antistable invariant subspace of A
Ric(H)         the stabilizing solution of an ARE
g ∗ f          convolution of g and f
⊗              Kronecker product
⊕              direct sum or Kronecker sum
∠              angle
⟨·, ·⟩          inner product
x ⊥ y          orthogonal: ⟨x, y⟩ = 0
D⊥             orthogonal complement of D, i.e., [D  D⊥] is unitary
S⊥             orthogonal complement of subspace S, e.g., H2⊥
L2(−∞, ∞)      time domain Lebesgue space
L2[0, ∞)       subspace of L2(−∞, ∞)
L2(−∞, 0]      subspace of L2(−∞, ∞)
L2+            shorthand for L2[0, ∞)
L2−            shorthand for L2(−∞, 0]
l2+            shorthand for l2[0, ∞)
l2−            shorthand for l2(−∞, 0)
L2(jR)         square integrable functions on C0, including at ∞
L2(∂D)         square integrable functions on ∂D
H2(jR)         subspace of L2(jR) with analytic extension to the rhp

H2(∂D)         subspace of L2(∂D) with analytic extension to the inside of ∂D
H2⊥(jR)        subspace of L2(jR) with analytic extension to the lhp
H2⊥(∂D)        subspace of L2(∂D) with analytic extension to the outside of ∂D
L∞(jR)         functions bounded on Re(s) = 0, including at ∞
L∞(∂D)         functions bounded on ∂D
H∞(jR)         the set of L∞(jR) functions analytic in Re(s) > 0
H∞(∂D)         the set of L∞(∂D) functions analytic in |z| < 1
H∞−(jR)        the set of L∞(jR) functions analytic in Re(s) < 0
H∞−(∂D)        the set of L∞(∂D) functions analytic in |z| > 1
prefix B or B̄  closed unit ball, e.g., B̄H∞ and B̄Δ
prefix Bo      open unit ball
prefix R       real rational, e.g., RH∞ and RH2, etc.
R[s]           polynomial ring
Rp(s)          rational proper transfer matrices
G~(s)          shorthand for G^T(−s) (continuous time)
G~(z)          shorthand for G^T(z^{−1}) (discrete time)
[A B; C D]     shorthand for the state space realization C(sI − A)^{−1}B + D or C(zI − A)^{−1}B + D
Fl(M, Q)       lower LFT
Fu(M, Q)       upper LFT
S(M, N)        star product


List of Acronyms

ARE        algebraic Riccati equation
BR         bounded real
CIF        complementary inner factor
DF         disturbance feedforward
FC         full control
FDLTI      finite dimensional linear time invariant
FI         full information
HF         high frequency
iff        if and only if
lcf        left coprime factorization
LF         low frequency
LFT        linear fractional transformation
lhp or LHP left-half plane Re(s) < 0
LQG        linear quadratic Gaussian
LQR        linear quadratic regulator
LTI        linear time invariant
LTR        loop transfer recovery
MIMO       multi-input multi-output
nlcf       normalized left coprime factorization
NP         nominal performance
nrcf       normalized right coprime factorization
NS         nominal stability
OE         output estimation
OF         output feedback
OI         output injection
rcf        right coprime factorization
rhp or RHP right-half plane Re(s) > 0
RP         robust performance
RS         robust stability
SF         state feedback
SISO       single-input single-output
SSV        structured singular value (μ)
SVD        singular value decomposition

1 Introduction

1.1 Historical Perspective

This book gives a comprehensive treatment of optimal H2 and H∞ control theory and an introduction to the more general subject of robust control. Since the central subject of this book is state-space H∞ optimal control, in contrast to the approach adopted in the famous book by Francis [1987], A Course in H∞ Control Theory, it may be helpful to provide some historical perspective on the state-space H∞ control theory to be presented in this book. This section is not intended as a review of the literature in H∞ theory or robust control, but rather only an attempt to outline some of the work that most closely touches on our approach to state-space H∞. Hopefully our lack of written historical material will be somewhat made up for by the pictorial history of control shown in Figure 1.1. Here we see how the practical but classical methods yielded to the more sophisticated modern theory. Robust control sought to blend the best of both worlds. The strange creature that resulted is the main topic of this book.

The H∞ optimal control theory was originally formulated by Zames [1981] in an input-output setting. Most solution techniques available at that time involved analytic functions (Nevanlinna-Pick interpolation) or operator-theoretic methods [Sarason, 1967; Adamjan et al., 1978; Ball and Helton, 1983]. Indeed, H∞ theory seemed to many to signal the beginning of the end for the state-space methods which had dominated control for the previous 20 years. Unfortunately, the standard frequency-domain approaches to H∞ started running into significant obstacles in dealing with multi-input multi-output (MIMO) systems, both mathematically and computationally, much as the H2 (or LQG) theory of the 1950's had.

Figure 1.1: A picture history of control

Not surprisingly, the first solution to a general rational MIMO H∞ optimal control problem, presented in Doyle [1984], relied heavily on state-space methods, although more as a computational tool than in any essential way. The steps in this solution were as follows:

- parameterize all internally-stabilizing controllers via [Youla et al., 1976];
- obtain realizations of the closed-loop transfer matrix;
- convert the resulting model-matching problem into an equivalent 2 × 2-block general distance or best approximation problem involving mixed Hankel-Toeplitz operators;
- reduce to the Nehari problem (Hankel only);
- solve the Nehari problem by the procedure of Glover [1984].

Both [Francis, 1987] and [Francis and Doyle, 1987] give expositions of this approach, which will be referred to as the "1984" approach.

In a mathematical sense, the 1984 procedure "solved" the general rational H∞ optimal control problem, and much of the subsequent work in H∞ control theory focused on the 2 × 2-block problems, either in the model-matching or general distance forms. Unfortunately, the associated complexity of computation was substantial, involving several Riccati equations of increasing dimension, and the formulae for the resulting controllers tended to be very complicated and have high state dimension. Encouragement came

from Limebeer and Hung [1987] and Limebeer and Halikias [1988], who showed, for problems transformable to 2 × 1-block problems, that a subsequent minimal realization of the controller has state dimension no greater than that of the generalized plant G. This suggested the likely existence of similarly low-dimension optimal controllers in the general 2 × 2 case. Additional progress on the 2 × 2-block problems came from Ball and Cohen [1987], who gave a state-space solution involving three Riccati equations. Jonckheere and Juang [1987] showed a connection between the 2 × 1-block problem and previous work by Jonckheere and Silverman [1978] on linear-quadratic control. Foias and Tannenbaum [1988] developed an interesting class of operators called skew Toeplitz to study the 2 × 2-block problem. Other approaches have been derived by Hung [1989] using an interpolation theory approach, Kwakernaak [1986] using a polynomial approach, and Kimura [1988] using a method based on conjugation.

The simple state-space H∞ controller formulae to be presented in this book were first derived in Glover and Doyle [1988] with the 1984 approach, but using a new 2 × 2-block solution, together with a cumbersome back substitution. The very simplicity of the new formulae and their similarity with the H2 ones suggested a more direct approach. Independent encouragement for a simpler approach to the H∞ problem came from papers by Petersen [1987], Khargonekar, Petersen, and Zhou [1990], Zhou and Khargonekar [1988], and Khargonekar, Petersen, and Rotea [1988]. They showed that for the state-feedback H∞ problem one can choose a constant gain as a (sub)optimal controller. In addition, a formula for the state-feedback gain matrix was given in terms of an algebraic Riccati equation. Also, these papers established connections between H∞-optimal control, quadratic stabilization, and linear-quadratic differential games.
The landmark breakthrough came in the DGKF paper (Doyle, Glover, Khargonekar, and Francis [1989]). In addition to providing controller formulae that are simple and expressed in terms of plant data as in Glover and Doyle [1988], the methods in that paper are a fundamental departure from the 1984 approach. In particular, the Youla parameterization and the resulting 2 × 2-block model-matching problem of the 1984 solution are avoided entirely, replaced by a more purely state-space approach involving observer-based compensators, a pair of 2 × 1-block problems, and a separation argument. The operator theory still plays a central role (as does Redheffer's work [Redheffer, 1960] on linear fractional transformations), but its use is more straightforward. The key to this was a return to simple and familiar state-space tools, in the style of Willems [1971], such as completing the square, and the connection between frequency domain inequalities (e.g., ‖G‖∞ < 1), Riccati equations, and spectral factorizations. This book in some sense can be regarded as an expansion of the DGKF paper.

The state-space theory of H∞ can be carried much further, by generalizing time-invariant to time-varying, infinite horizon to finite horizon, and finite dimensional to infinite dimensional. A flourish of activity has begun on these problems since the publication of the DGKF paper and numerous results have been published in the literature. Not surprisingly, many results in the DGKF paper generalize, mutatis mutandis, to these cases, which are beyond the scope of this book.

1.2 How to Use This Book

This book is intended to be used either as a graduate textbook or as a reference for control engineers. With the second objective in mind, we have tried to balance the breadth and the depth of the material covered in the book. In particular, some chapters have been written to be sufficiently self-contained so that one may jump to those special topics without going through all the preceding chapters; for example, Chapter 13 on algebraic Riccati equations. Some other topics may only require some basic linear system theory; for instance, many readers may find that it is not difficult to go directly to Chapters 9-11. In some cases, we have tried to collect the most frequently used formulas and results in one place for convenience of reference, although they may not have any direct connection with the main results presented in the book. For example, readers may find the matrix formulas collected in Chapter 2 on linear algebra convenient in their research. On the other hand, if the book is used as a textbook, it may be advisable to skip topics like Chapter 2 in the regular lectures and leave them for students to read.

It is obvious that only some selected topics in this book can be covered in a one- or two-semester course. The specific choice of topics depends on the time allotted for the course and the preference of the instructor. The diagram in Figure 1.2 shows roughly the relations among the chapters and should give users some idea for the selection of topics. For example, the diagram shows that the only prerequisite for Chapters 7 and 8 is Section 3.9 of Chapter 3, and, therefore, these two chapters alone may be used as a short course on model reduction. Similarly, one only needs the knowledge of Sections 13.2 and 13.6 of Chapter 13 to understand Chapter 14. Hence one may cover only those related sections of Chapter 13 if time is a factor.
The book is separated roughly into the following subgroups:

- Basic Linear System Theory: Chapters 2-3.
- Stability and Performance: Chapters 4-6.
- Model Reduction: Chapters 7-8.
- Robustness: Chapters 9-11.
- H2 and H∞ Control: Chapters 12-19.
- Lagrange Method: Chapter 20.
- Discrete Time Systems: Chapter 21.

In view of the above classification, one possible choice for a one-semester course on robust control would cover Chapters 4-5 and 9-11, or 4-11, and a one-semester advanced course on H2 and H∞ control would cover (parts of) Chapters 12-19. Another possible choice for a one-semester course on H∞ control may include Chapter 4, parts of Chapter 5 (5.1-5.3, 5.5, 5.7), Chapter 10, Chapter 12 (except Section 12.6), parts of Chapter 13 (13.2, 13.4, 13.6), Chapter 15, and Chapter 16. Although Chapters 7-8 are very much independent of other topics and can, in principle, be studied at any

[Figure 1.2: Relations among the chapters (chapter dependency diagram); the diagram itself is omitted here.]

stage with the background of Section 3.9, they may serve as an introduction to sources of model uncertainties and hence to robustness problems.

Table 1.1 lists several possible choices of topics for a one-semester course:

Robust Control | H∞ Control           | Advanced H∞ Control | Model & Controller Reductions
4              | 4                    | 12                  | 3.9
5              | 5.1-5.3, 5.5, 5.7    | 13.2, 13.4, 13.6    | 7
6*             | 10                   | 14                  | 8
7*             | 12                   | 15                  | 5.4, 5.7
8*             | 13.2, 13.4, 13.6     | 16                  | 10.1
9              | 15                   | 17*                 | 16.1, 16.2
10             | 16                   | 18*                 | 17.1
11             |                      | 19*                 | 19

Table 1.1: Possible choices for a one-semester course (* chapters may be omitted)

A course on model and controller reductions may include only the concept of H∞ control and the H∞ controller formulas, with the detailed proofs omitted, as suggested in the above table.

1.3 Highlights of The Book

The key results in each chapter are highlighted below. Note that some of the statements in this section are not precise; they are true under certain assumptions that are not explicitly stated. Readers should consult the corresponding chapters for the exact statements and conditions.

Chapter 2 reviews some basic linear algebra facts and treats a special class of matrix dilation problems. In particular, we show

\[
\min_X \left\| \begin{bmatrix} X & B \\ C & A \end{bmatrix} \right\|
= \max \left\{ \left\| \begin{bmatrix} C & A \end{bmatrix} \right\|,\
\left\| \begin{bmatrix} B \\ A \end{bmatrix} \right\| \right\}
\]

and characterize all optimal (suboptimal) X.

Chapter 3 reviews some system theoretical concepts: controllability, observability, stabilizability, detectability, pole placement, observer theory, system poles and zeros, and state space realizations. In particular, balanced state space realizations are studied in some detail. We show that for a given stable transfer function G(s) there is a state space realization G(s) = [A B; C D] such that the controllability Gramian P and the observability Gramian Q defined below are equal and diagonal, P = Q = Σ = diag(σ1, σ2, …, σn), where

\[
AP + PA^{*} + BB^{*} = 0
\]

\[
A^{*}Q + QA + C^{*}C = 0 .
\]

Chapter 4 defines several norms for signals and introduces the H2 spaces and the H∞ spaces. The input/output gains of a stable linear system under various input signals are characterized. We show that the H2 and H∞ norms come out naturally as measures of the worst possible performance for many classes of input signals. For example, let

\[
G(s) = \begin{bmatrix} A & B \\ \hline C & 0 \end{bmatrix} \in \mathcal{RH}_\infty,
\qquad g(t) = Ce^{At}B .
\]

Then

\[
\|G\|_\infty = \sup_{u} \frac{\|g * u\|_2}{\|u\|_2}
\qquad \text{and} \qquad
\|G\|_\infty \le \int_0^\infty \|g(t)\|\, dt \le 2 \sum_{i=1}^{n} \sigma_i .
\]

Some state space methods of computing real rational H2 and H∞ transfer matrix norms are also presented:

\[
\|G\|_2^2 = \mathrm{trace}(B^{*}QB) = \mathrm{trace}(CPC^{*})
\]

and

\[
\|G\|_\infty = \max\{\gamma :\ H \text{ has an eigenvalue on the imaginary axis}\},
\qquad
H = \begin{bmatrix} A & BB^{*}/\gamma^2 \\ -C^{*}C & -A^{*} \end{bmatrix} .
\]

Chapter 5 introduces the feedback structure and discusses its stability and performance properties.

[Figure: the standard feedback loop, with plant P̂ and controller K̂ connected in a loop; w1 and w2 are external inputs, and e1 and e2 are the signals entering K̂ and P̂.]

We show that the above closed-loop system is internally stable if and only if

\[
\begin{bmatrix} I & -\hat K \\ -\hat P & I \end{bmatrix}^{-1}
= \begin{bmatrix}
I + \hat K (I - \hat P \hat K)^{-1} \hat P & \hat K (I - \hat P \hat K)^{-1} \\
(I - \hat P \hat K)^{-1} \hat P & (I - \hat P \hat K)^{-1}
\end{bmatrix} \in \mathcal{RH}_\infty .
\]

Alternative characterizations of internal stability using coprime factorizations are also presented.

Chapter 6 introduces some multivariable versions of Bode's sensitivity integral relations and the Poisson integral formula. The sensitivity integral relations are used to study the design limitations imposed by bandwidth constraints and the open-loop unstable poles, while the Poisson integral formula is used to study the design constraints
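The Gramian-based norm computations above can be sketched numerically. The following is a minimal Python/NumPy illustration, not from the book: the helper names and the first-order test system are our own choices, the Lyapunov equations are solved by Kronecker-product vectorization, and the H∞ norm uses bisection on γ based on the imaginary-axis-eigenvalue test stated above (assuming G stable and strictly proper, for real A, B, C so that A* = Aᵀ):

```python
import numpy as np

def lyap(A, W):
    """Solve A X + X A^T + W = 0 via vec(AX + XA^T) = (I (x) A + A (x) I) vec(X)."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    return np.linalg.solve(K, -W.reshape(-1, order="F")).reshape((n, n), order="F")

def h2_norm(A, B, C):
    """||G||_2 = sqrt(trace(B^T Q B)), Q the observability Gramian."""
    Q = lyap(A.T, C.T @ C)   # solves A^T Q + Q A + C^T C = 0
    return float(np.sqrt(np.trace(B.T @ Q @ B)))

def hinf_norm(A, B, C, tol=1e-8):
    """Bisection on gamma: H(gamma) has an imaginary-axis eigenvalue exactly
    when gamma is a singular value of G(jw) at some frequency w."""
    def on_axis(g):
        H = np.block([[A, (B @ B.T) / g**2],
                      [-(C.T @ C), -A.T]])
        return bool(np.any(np.abs(np.linalg.eigvals(H).real) < 1e-9))
    lo, hi = 0.0, 1.0
    while on_axis(hi):               # push the upper bound above the norm
        hi *= 2.0
    while hi - lo > tol:             # bisect down to the norm
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if on_axis(mid) else (lo, mid)
    return hi

# Sanity check on G(s) = 1/(s + 1): ||G||_2 = 1/sqrt(2) and ||G||_inf = 1
A, B, C = np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]])
print(h2_norm(A, B, C), hinf_norm(A, B, C))
```

The controllability Gramian is obtained the same way as `lyap(A, B @ B.T)`, so the dual formula ‖G‖₂² = trace(CPC*) can be checked against the one used here.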

imposed by the non-minimum phase zeros. For example, let $S(s)$ be a sensitivity function, let $p_i$ be the right half plane poles of the open-loop system, and let $\eta_i$ be the corresponding pole directions. Then we show that

$$\int_0^\infty \ln \bar\sigma\left( S(j\omega) \right) d\omega \ge \pi \, \bar\sigma\!\left( \sum_i (\mathrm{Re}\, p_i) \, \eta_i \eta_i^* \right) \ge 0.$$

This inequality shows that the design limitations in multivariable systems are dependent on the directionality properties of the sensitivity function as well as those of the poles (and zeros), in addition to the dependence upon pole (and zero) locations which is known in single-input single-output systems.

Chapter 7 considers the problem of reducing the order of a linear multivariable dynamical system using the balanced truncation method. Suppose

$$G(s) = \left[ \begin{array}{cc|c} A_{11} & A_{12} & B_1 \\ A_{21} & A_{22} & B_2 \\ \hline C_1 & C_2 & D \end{array} \right] \in \mathcal{RH}_\infty$$

is a balanced realization with controllability and observability Gramians

$$P = Q = \Sigma = \mathrm{diag}(\Sigma_1, \Sigma_2)$$
$$\Sigma_1 = \mathrm{diag}(\sigma_1 I_{s_1}, \sigma_2 I_{s_2}, \ldots, \sigma_r I_{s_r})$$
$$\Sigma_2 = \mathrm{diag}(\sigma_{r+1} I_{s_{r+1}}, \sigma_{r+2} I_{s_{r+2}}, \ldots, \sigma_N I_{s_N}).$$

Then the truncated system $G_r(s) = \left[ \begin{array}{c|c} A_{11} & B_1 \\ \hline C_1 & D \end{array} \right]$ is stable and satisfies an additive error bound:

$$\|G(s) - G_r(s)\|_\infty \le 2 \sum_{i=r+1}^N \sigma_i.$$

On the other hand, if $G^{-1} \in \mathcal{RH}_\infty$, and $P$ and $Q$ satisfy

$$PA^* + AP + BB^* = 0$$
$$Q(A - BD^{-1}C) + (A - BD^{-1}C)^* Q + C^* (D^{-1})^* D^{-1} C = 0$$

such that $P = Q = \mathrm{diag}(\Sigma_1, \Sigma_2)$ with $G$ partitioned compatibly as before, then $G_r(s) = \left[ \begin{array}{c|c} A_{11} & B_1 \\ \hline C_1 & D \end{array} \right]$ is stable and minimum phase, and satisfies respectively the following relative and multiplicative error bounds:

$$\left\| G^{-1}(G - G_r) \right\|_\infty \le \prod_{i=r+1}^N \left( 1 + 2\sigma_i \left( \sqrt{1 + \sigma_i^2} + \sigma_i \right) \right) - 1$$

$$\left\| (G - G_r) G^{-1} \right\|_\infty \le \prod_{i=r+1}^N \left( 1 + 2\sigma_i \left( \sqrt{1 + \sigma_i^2} + \sigma_i \right) \right) - 1.$$
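The balanced truncation procedure and its additive bound are easy to check numerically. The sketch below is illustrative only (the three-state system is an assumed example, not one from the book): it solves the two Lyapunov equations via the Kronecker/vec identity of Section 2.5, balances with the standard Cholesky/SVD transformation so that $P = Q = \mathrm{diag}(\sigma_i)$ (the $\sigma_i$ are the Hankel singular values of Chapter 8), truncates, and estimates the error norm on a frequency grid.

```python
import numpy as np

def lyap(A, W):
    # solve A X + X A^T + W = 0 via the Kronecker/vec identity of Section 2.5
    n = A.shape[0]
    L = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    return np.linalg.solve(L, -W.flatten(order="F")).reshape((n, n), order="F")

# assumed 3-state stable example (hypothetical data)
A = np.array([[-1.0, 0.2, 0.0], [0.0, -2.0, 0.3], [0.0, 0.0, -5.0]])
B = np.array([[1.0], [1.0], [1.0]])
C = np.array([[1.0, 1.0, 1.0]])

P = lyap(A, B @ B.T)                 # controllability Gramian
Q = lyap(A.T, C.T @ C)               # observability Gramian
Rc = np.linalg.cholesky(P)
U, s2, _ = np.linalg.svd(Rc.T @ Q @ Rc)
sigma = np.sqrt(s2)                  # Hankel singular values, descending
T = Rc @ U @ np.diag(sigma ** -0.5)  # balancing transformation
Ti = np.linalg.inv(T)
Ab, Bb, Cb = Ti @ A @ T, Ti @ B, C @ T
Pb = lyap(Ab, Bb @ Bb.T)             # equals diag(sigma) in balanced coordinates

r = 1                                # keep only the dominant Hankel singular value
Ar, Br, Cr = Ab[:r, :r], Bb[:r], Cb[:, :r]   # balanced truncation

def freq_resp(A_, B_, C_, w):
    return C_ @ np.linalg.solve(1j * w * np.eye(A_.shape[0]) - A_, B_)

ws = np.logspace(-3, 3, 400)
err = max(abs(freq_resp(Ab, Bb, Cb, w) - freq_resp(Ar, Br, Cr, w)).max() for w in ws)
bound = 2.0 * sigma[r:].sum()        # additive bound: 2 * (sigma_{r+1} + ... + sigma_N)
```

The grid estimate of $\|G - G_r\|_\infty$ comes out below `bound`, as the theorem guarantees.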

Chapter 8 deals with the optimal Hankel norm approximation and its applications in $L_\infty$ norm model reduction. We show that for a given $G(s)$ of McMillan degree $n$ there is a $\hat G(s)$ of McMillan degree $r < n$ such that

$$\left\| G(s) - \hat G(s) \right\|_H = \inf_{\deg \hat G(s) \le r} \left\| G(s) - \hat G(s) \right\|_H = \sigma_{r+1}.$$

Moreover, there is a constant matrix $D_0$ such that

$$\left\| G(s) - \hat G(s) - D_0 \right\|_\infty \le \sum_{i=r+1}^N \sigma_i.$$

The well-known Nehari's theorem is also shown:

$$\inf_{Q \in \mathcal{RH}_\infty^-} \|G - Q\|_\infty = \|G\|_H = \sigma_1.$$

Chapter 9 derives robust stability tests for systems under various modeling assumptions through the use of a small gain theorem. In particular, we show that an uncertain system described below with an unstructured uncertainty $\Delta \in \mathcal{RH}_\infty$ with $\|\Delta\|_\infty < 1$ is robustly stable if and only if the transfer function from $w$ to $z$ has $H_\infty$ norm no greater than 1.

[Figure: the uncertainty $\Delta$ in feedback around the nominal system, with input $w$ and output $z$.]

Chapter 10 introduces the linear fractional transformation (LFT). We show that many control problems can be formulated and treated in the LFT framework. In particular, we show that every analysis problem can be put in an LFT form with some structured $\Delta(s)$ and some interconnection matrix $M(s)$, and every synthesis problem can be put in an LFT form with a generalized plant $G(s)$ and a controller $K(s)$ to be designed.

[Figure: the analysis LFT of $\Delta$ around $M$ and the synthesis LFT of the controller $K$ around the generalized plant $G$.]
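The state space norm computations quoted from Chapter 4 above can be prototyped in a few lines. The sketch below is a minimal illustration under an assumed first-order example $G(s) = 1/(s+1)$ (for which $\|G\|_2 = 1/\sqrt{2}$ and $\|G\|_\infty = 1$): the $H_2$ norm from the controllability Gramian, and the $H_\infty$ norm by bisecting on $\gamma$ with the imaginary-axis eigenvalue test for the Hamiltonian $H$.

```python
import numpy as np

def lyap(A, W):
    # solve A X + X A^T + W = 0 via the Kronecker/vec identity
    n = A.shape[0]
    L = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    return np.linalg.solve(L, -W.flatten(order="F")).reshape((n, n), order="F")

# assumed stable example: G(s) = 1/(s+1)
A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])

P = lyap(A, B @ B.T)                    # controllability Gramian
h2 = np.sqrt(np.trace(C @ P @ C.T))     # ||G||_2 = sqrt(trace(C P C^*))

def has_imag_axis_eig(g):
    # H(gamma) has an imaginary-axis eigenvalue iff gamma <= ||G||_inf
    H = np.block([[A, (B @ B.T) / g**2], [-C.T @ C, -A.T]])
    return np.any(np.abs(np.linalg.eigvals(H).real) < 1e-8)

lo, hi = 0.0, 1.0
while has_imag_axis_eig(hi):            # grow an upper bound for ||G||_inf
    hi *= 2.0
while hi - lo > 1e-6:                   # bisection on gamma
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if has_imag_axis_eig(mid) else (lo, mid)
hinf = 0.5 * (lo + hi)
```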

Chapter 11 considers robust stability and performance for systems with multiple sources of uncertainties. We show that an uncertain system is robustly stable for all $\Delta_i \in \mathcal{RH}_\infty$ with $\|\Delta_i\|_\infty < 1$ if and only if the structured singular value ($\mu$) of the corresponding interconnection model is no greater than 1.

[Figure: multiple uncertainty blocks $\Delta_1, \Delta_2, \Delta_3, \Delta_4$ connected around the nominal system.]

Chapter 12 characterizes all controllers that stabilize a given dynamical system $G(s)$ using the state space approach. The construction of the controller parameterization is done via separation theory and a sequence of special problems: the so-called full information (FI) problems, disturbance feedforward (DF) problems, full control (FC) problems, and output estimation (OE) problems. The relations among these special problems are established: FI and FC are dual, DF and OE are dual, FI is equivalent to DF, and FC is equivalent to OE.

For a given generalized plant

$$G(s) = \begin{bmatrix} G_{11}(s) & G_{12}(s) \\ G_{21}(s) & G_{22}(s) \end{bmatrix} = \left[ \begin{array}{c|cc} A & B_1 & B_2 \\ \hline C_1 & D_{11} & D_{12} \\ C_2 & D_{21} & D_{22} \end{array} \right]$$

we show that all stabilizing controllers can be parameterized as the transfer matrix from $y$ to $u$ below, where $F$ and $L$ are such that $A + LC_2$ and $A + B_2F$ are stable.

[Figure: the observer-based parameterization of all stabilizing controllers — an observer with injection gain $L$ and state feedback $F$ built around the plant $G$, with a free parameter $Q$ connected between the residual $y_1$ and the auxiliary input $u_1$.]

Chapter 13 studies the Algebraic Riccati Equation and the related problems: the properties of its solutions, the methods to obtain the solutions, and some applications. In particular, we study in detail the so-called stabilizing solution and its applications in matrix factorizations. A solution to the following ARE

$$A^*X + XA + XRX + Q = 0$$

is said to be a stabilizing solution if $A + RX$ is stable. Now let

$$H := \begin{bmatrix} A & R \\ -Q & -A^* \end{bmatrix}$$

and let $\mathcal{X}_-(H)$ be the stable $H$-invariant subspace with

$$\mathcal{X}_-(H) = \mathrm{Im} \begin{bmatrix} X_1 \\ X_2 \end{bmatrix}$$

where $X_1, X_2 \in \mathbb{C}^{n \times n}$. If $X_1$ is nonsingular, then $X := X_2 X_1^{-1}$ is uniquely determined by $H$, denoted by $X = \mathrm{Ric}(H)$.

A key result of this chapter is the relationship between the spectral factorization of a transfer function and the solution of a corresponding ARE. Suppose $(A, B)$ is stabilizable and suppose either $A$ has no eigenvalues on the $j\omega$-axis or $P$ is sign definite (i.e., $P \ge 0$ or $P \le 0$) and $(P, A)$ has no unobservable modes on the $j\omega$-axis. Define

$$\Phi(s) = \begin{bmatrix} B^*(-sI - A^*)^{-1} & I \end{bmatrix} \begin{bmatrix} P & S \\ S^* & R \end{bmatrix} \begin{bmatrix} (sI - A)^{-1} B \\ I \end{bmatrix}.$$

Then

$$\Phi(j\omega) > 0 \quad \text{for all } 0 \le \omega \le \infty$$

$\iff$ there exists a stabilizing solution $X$ to

$$(A - BR^{-1}S^*)^* X + X (A - BR^{-1}S^*) - XBR^{-1}B^*X + P - SR^{-1}S^* = 0$$

$\iff$ the Hamiltonian matrix

$$H = \begin{bmatrix} A - BR^{-1}S^* & -BR^{-1}B^* \\ -(P - SR^{-1}S^*) & -(A - BR^{-1}S^*)^* \end{bmatrix}$$

has no $j\omega$-axis eigenvalues. Similarly,

$$\Phi(j\omega) \ge 0 \quad \text{for all } 0 \le \omega \le \infty$$

$\iff$ there exists a solution $X$ to

$$(A - BR^{-1}S^*)^* X + X (A - BR^{-1}S^*) - XBR^{-1}B^*X + P - SR^{-1}S^* = 0$$

such that $\sigma(A - BR^{-1}S^* - BR^{-1}B^*X) \subset \mathbb{C}^-$. Furthermore, there exists an $M \in \mathcal{R}_p$ such that $\Phi = M^* R M$ with

$$M = \left[ \begin{array}{c|c} A & B \\ \hline -F & I \end{array} \right], \qquad F = -R^{-1}(S^* + B^*X).$$

Chapter 14 treats the optimal control of linear time-invariant systems with quadratic performance criteria, i.e., LQR and $H_2$ problems. We consider a dynamical system described by an LFT with

$$G(s) = \left[ \begin{array}{c|cc} A & B_1 & B_2 \\ \hline C_1 & 0 & D_{12} \\ C_2 & D_{21} & 0 \end{array} \right].$$

[Figure: the standard LFT configuration with generalized plant $G$ and controller $K$; signals $w$, $u$, $z$, $y$.]

Define

$$H_2 := \begin{bmatrix} A & 0 \\ -C_1^* C_1 & -A^* \end{bmatrix} - \begin{bmatrix} B_2 \\ -C_1^* D_{12} \end{bmatrix} \begin{bmatrix} D_{12}^* C_1 & B_2^* \end{bmatrix}$$

$$J_2 := \begin{bmatrix} A^* & 0 \\ -B_1 B_1^* & -A \end{bmatrix} - \begin{bmatrix} C_2^* \\ -B_1 D_{21}^* \end{bmatrix} \begin{bmatrix} D_{21} B_1^* & C_2 \end{bmatrix}$$

$$X_2 := \mathrm{Ric}(H_2) \ge 0, \qquad Y_2 := \mathrm{Ric}(J_2) \ge 0$$

$$F_2 := -(B_2^* X_2 + D_{12}^* C_1), \qquad L_2 := -(Y_2 C_2^* + B_1 D_{21}^*).$$

Then the $H_2$ optimal controller, i.e., the controller that minimizes $\|T_{zw}\|_2$, is given by

$$K_{\mathrm{opt}}(s) := \left[ \begin{array}{c|c} A + B_2 F_2 + L_2 C_2 & -L_2 \\ \hline F_2 & 0 \end{array} \right].$$

Chapter 15 solves a max-min problem, i.e., a full information (or state feedback) $H_\infty$ control problem, which is the key to the $H_\infty$ theory considered in the next chapter. Consider a dynamical system

$$\dot x = Ax + B_1 w + B_2 u$$
$$z = C_1 x + D_{12} u, \qquad D_{12}^* \begin{bmatrix} C_1 & D_{12} \end{bmatrix} = \begin{bmatrix} 0 & I \end{bmatrix}.$$

Then we show that $\sup_{w \in B L_{2+}} \min_{u \in L_{2+}} \|z\|_2 < \gamma$ if and only if $H_\infty \in \mathrm{dom(Ric)}$ and $X_\infty = \mathrm{Ric}(H_\infty) \ge 0$ where

$$H_\infty := \begin{bmatrix} A & \gamma^{-2} B_1 B_1^* - B_2 B_2^* \\ -C_1^* C_1 & -A^* \end{bmatrix}.$$

Furthermore, $u = F_\infty x$ with $F_\infty := -B_2^* X_\infty$ is an optimal control.

Chapter 16 considers a simplified $H_\infty$ control problem with the generalized plant $G(s)$ as given in Chapter 14. We show that there exists an admissible controller such that $\|T_{zw}\|_\infty < \gamma$ iff the following three conditions hold:

(i) $H_\infty \in \mathrm{dom(Ric)}$ and $X_\infty := \mathrm{Ric}(H_\infty) \ge 0$;

(ii) $J_\infty \in \mathrm{dom(Ric)}$ and $Y_\infty := \mathrm{Ric}(J_\infty) \ge 0$, where

$$J_\infty := \begin{bmatrix} A^* & \gamma^{-2} C_1^* C_1 - C_2^* C_2 \\ -B_1 B_1^* & -A \end{bmatrix};$$

(iii) $\rho(X_\infty Y_\infty) < \gamma^2$.

Moreover, the set of all admissible controllers such that $\|T_{zw}\|_\infty < \gamma$ equals the set of all transfer matrices from $y$ to $u$ in the feedback interconnection of

$$M_\infty(s) = \left[ \begin{array}{c|cc} \hat A_\infty & -Z_\infty L_\infty & Z_\infty B_2 \\ \hline F_\infty & 0 & I \\ -C_2 & I & 0 \end{array} \right]$$

with a free parameter $Q \in \mathcal{RH}_\infty$, $\|Q\|_\infty < \gamma$, where

$$\hat A_\infty := A + \gamma^{-2} B_1 B_1^* X_\infty + B_2 F_\infty + Z_\infty L_\infty C_2$$
$$F_\infty := -B_2^* X_\infty, \qquad L_\infty := -Y_\infty C_2^*, \qquad Z_\infty := (I - \gamma^{-2} Y_\infty X_\infty)^{-1}.$$

Chapter 17 considers again the standard $H_\infty$ control problem but with some assumptions of the last chapter relaxed. We indicate how the assumptions can be relaxed to accommodate other more complicated problems such as singular control problems. We also consider the integral control in the $H_2$ and $H_\infty$ theory and show how the general $H_\infty$ solution can be used to solve the $H_\infty$ filtering problem. The conventional Youla parameterization approach to the $H_2$ and $H_\infty$ problems is also outlined. Finally, the general state feedback $H_\infty$ control problem and its relations with full information control and differential game problems are discussed.

Chapter 18 first solves a gap metric minimization problem. Let $P = \tilde M^{-1} \tilde N$ be a normalized left coprime factorization. Then we show that

$$\inf_{K \text{ stabilizing}} \left\| \begin{bmatrix} K \\ I \end{bmatrix} (I + PK)^{-1} \begin{bmatrix} I & P \end{bmatrix} \right\|_\infty = \inf_{K \text{ stabilizing}} \left\| \begin{bmatrix} K \\ I \end{bmatrix} (I + PK)^{-1} \tilde M^{-1} \right\|_\infty = \left( 1 - \left\| \begin{bmatrix} \tilde N & \tilde M \end{bmatrix} \right\|_H^2 \right)^{-1/2}.$$

This implies that there is a robustly stabilizing controller for

$$P_\Delta = (\tilde M + \tilde \Delta_M)^{-1} (\tilde N + \tilde \Delta_N)$$

with

$$\left\| \begin{bmatrix} \tilde \Delta_N & \tilde \Delta_M \end{bmatrix} \right\|_\infty < \epsilon$$

if and only if

$$\epsilon \le \sqrt{1 - \left\| \begin{bmatrix} \tilde N & \tilde M \end{bmatrix} \right\|_H^2}.$$

Using this stabilization result, a loop shaping design technique is proposed. The proposed technique uses only the basic concept of loop shaping methods, and then a robust

stabilization controller for the normalized coprime factor perturbed system is used to construct the final controller.

Chapter 19 considers the design of reduced order controllers by means of controller reduction. Special attention is paid to the controller reduction methods that preserve the closed-loop stability and performance. In particular, two $H_\infty$ performance preserving reduction methods are proposed:

a) Let $K_0$ be a stabilizing controller such that $\|F_\ell(G, K_0)\|_\infty < \gamma$. Then $\hat K$ is also a stabilizing controller such that $\|F_\ell(G, \hat K)\|_\infty < \gamma$ if

$$\left\| W_2^{-1} (\hat K - K_0) W_1^{-1} \right\|_\infty < 1$$

where $W_1$ and $W_2$ are some stable, minimum phase and invertible transfer matrices.

b) Let $K_0 = \Theta_{12} \Theta_{22}^{-1}$ be a central $H_\infty$ controller such that $\|F_\ell(G, K_0)\|_\infty < \gamma$, and let $\hat U, \hat V \in \mathcal{RH}_\infty$ be such that

$$\left\| \begin{bmatrix} \gamma^{-1} I & 0 \\ 0 & I \end{bmatrix} \left( \begin{bmatrix} \Theta_{12} \\ \Theta_{22} \end{bmatrix} - \begin{bmatrix} \hat U \\ \hat V \end{bmatrix} \right) \right\|_\infty < 1/\sqrt{2}.$$

Then $\hat K = \hat U \hat V^{-1}$ is also a stabilizing controller such that $\|F_\ell(G, \hat K)\|_\infty < \gamma$. Thus the controller reduction problem is converted to weighted model reduction problems for which some numerical methods are suggested.

Chapter 20 briefly introduces the Lagrange multiplier method for the design of fixed order controllers.

Chapter 21 discusses discrete time Riccati equations and some of their applications in discrete time control. Finally, the discrete time balanced model reduction is considered.
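Much of the machinery surveyed above rests on the $\mathrm{Ric}(\cdot)$ operator of Chapter 13: pick a basis of the stable invariant subspace of the Hamiltonian $H$ and set $X = X_2 X_1^{-1}$. That construction is only a few lines of numpy. The sketch below uses assumed double-integrator LQR data (so $R = -BB^*$ and $Q = I$, with known solution $X = \left[\begin{smallmatrix}\sqrt 3 & 1\\ 1 & \sqrt 3\end{smallmatrix}\right]$); it is an illustration, not the book's algorithm.

```python
import numpy as np

def ric(H):
    """X = X2 X1^{-1} from a basis [X1; X2] of the stable invariant subspace of H."""
    n = H.shape[0] // 2
    vals, vecs = np.linalg.eig(H)
    basis = vecs[:, vals.real < 0]        # n stable eigenvectors span X_-(H)
    X1, X2 = basis[:n, :], basis[n:, :]
    return (X2 @ np.linalg.inv(X1)).real  # real up to rounding for real data

# assumed LQR example: A*X + XA + XRX + Q = 0 with R = -BB*, Q = I
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
R = -B @ B.T
Q = np.eye(2)
H = np.block([[A, R], [-Q, -A.T]])

X = ric(H)
residual = A.T @ X + X @ A + X @ R @ X + Q   # should be ~0
```

The solution is stabilizing: $A + RX$ has all eigenvalues in the open left half plane.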


2 Linear Algebra

Some basic linear algebra facts will be reviewed in this chapter. The detailed treatment of this topic can be found in the references listed at the end of the chapter. Hence we shall omit most proofs and provide proofs only for those results that either cannot be easily found in the standard linear algebra textbooks or are insightful to the understanding of some related problems. We then treat a special class of matrix dilation problems which will be used in Chapters 8 and 17; however, most of the results presented in this book can be understood without the knowledge of the matrix dilation theory.

2.1 Linear Subspaces

Let $\mathbb{R}$ denote the real scalar field and $\mathbb{C}$ the complex scalar field. For the interest of this chapter, let $\mathbb{F}$ be either $\mathbb{R}$ or $\mathbb{C}$ and let $\mathbb{F}^n$ be the vector space over $\mathbb{F}$, i.e., $\mathbb{F}^n$ is either $\mathbb{R}^n$ or $\mathbb{C}^n$. Now let $x_1, x_2, \ldots, x_k \in \mathbb{F}^n$. Then an element of the form $\alpha_1 x_1 + \cdots + \alpha_k x_k$ with $\alpha_i \in \mathbb{F}$ is a linear combination over $\mathbb{F}$ of $x_1, \ldots, x_k$. The set of all linear combinations of $x_1, x_2, \ldots, x_k \in \mathbb{F}^n$ is a subspace called the span of $x_1, x_2, \ldots, x_k$, denoted by

$$\mathrm{span}\{x_1, x_2, \ldots, x_k\} := \{x = \alpha_1 x_1 + \cdots + \alpha_k x_k : \alpha_i \in \mathbb{F}\}.$$

A set of vectors $x_1, x_2, \ldots, x_k \in \mathbb{F}^n$ is said to be linearly dependent over $\mathbb{F}$ if there exist $\alpha_1, \ldots, \alpha_k \in \mathbb{F}$, not all zero, such that $\alpha_1 x_1 + \cdots + \alpha_k x_k = 0$; otherwise the vectors are said to be linearly independent.

Let $S$ be a subspace of $\mathbb{F}^n$. Then a set of vectors $\{x_1, x_2, \ldots, x_k\} \subset S$ is called a basis for $S$ if $x_1, x_2, \ldots, x_k$ are linearly independent and $S = \mathrm{span}\{x_1, x_2, \ldots, x_k\}$. However,

such a basis for a subspace $S$ is not unique, but all bases for $S$ have the same number of elements. This number is called the dimension of $S$, denoted by $\dim(S)$.

A set of vectors $\{x_1, x_2, \ldots, x_k\}$ in $\mathbb{F}^n$ is mutually orthogonal if $x_i^* x_j = 0$ for all $i \ne j$, and orthonormal if $x_i^* x_j = \delta_{ij}$, where the superscript $*$ denotes complex conjugate transpose and $\delta_{ij}$ is the Kronecker delta function with $\delta_{ij} = 1$ for $i = j$ and $\delta_{ij} = 0$ for $i \ne j$. More generally, a collection of subspaces $S_1, S_2, \ldots, S_k$ of $\mathbb{F}^n$ is mutually orthogonal if $x^* y = 0$ whenever $x \in S_i$ and $y \in S_j$ for $i \ne j$. The orthogonal complement of a subspace $S \subset \mathbb{F}^n$ is defined by

$$S^\perp := \{y \in \mathbb{F}^n : y^* x = 0 \text{ for all } x \in S\}.$$

We call a set of vectors $\{u_1, u_2, \ldots, u_k\}$ an orthonormal basis for a subspace $S \subset \mathbb{F}^n$ if the vectors form a basis of $S$ and are orthonormal. It is always possible to extend such a basis to a full orthonormal basis $\{u_1, u_2, \ldots, u_n\}$ for $\mathbb{F}^n$. Note that in this case

$$S^\perp = \mathrm{span}\{u_{k+1}, \ldots, u_n\}$$

and $\{u_{k+1}, \ldots, u_n\}$ is called an orthonormal completion of $\{u_1, u_2, \ldots, u_k\}$.

Let $A \in \mathbb{F}^{m \times n}$ be a linear transformation from $\mathbb{F}^n$ to $\mathbb{F}^m$, i.e.,

$$A : \mathbb{F}^n \longmapsto \mathbb{F}^m.$$

(Note that a vector $x \in \mathbb{F}^m$ can also be viewed as a linear transformation from $\mathbb{F}$ to $\mathbb{F}^m$; hence anything said for the general matrix case is also true for the vector case.) Then the kernel or null space of the linear transformation $A$ is defined by

$$\mathrm{Ker} A = N(A) := \{x \in \mathbb{F}^n : Ax = 0\}$$

and the image or range of $A$ is

$$\mathrm{Im} A = R(A) := \{y \in \mathbb{F}^m : y = Ax, \; x \in \mathbb{F}^n\}.$$

It is clear that $\mathrm{Ker} A$ is a subspace of $\mathbb{F}^n$ and $\mathrm{Im} A$ is a subspace of $\mathbb{F}^m$. Moreover, it can be easily seen that $\dim(\mathrm{Ker} A) + \dim(\mathrm{Im} A) = n$ and $\dim(\mathrm{Im} A) = \dim((\mathrm{Ker} A)^\perp)$. Note that $(\mathrm{Ker} A)^\perp$ is a subspace of $\mathbb{F}^n$. Let $a_i$, $i = 1, 2, \ldots, n$, denote the columns of a matrix $A \in \mathbb{F}^{m \times n}$; then

$$\mathrm{Im} A = \mathrm{span}\{a_1, a_2, \ldots, a_n\}.$$

The rank of a matrix $A$ is defined by

$$\mathrm{rank}(A) = \dim(\mathrm{Im} A).$$

It is a fact that $\mathrm{rank}(A) = \mathrm{rank}(A^*)$, and thus the rank of a matrix equals the maximal number of independent rows or columns.
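The dimension relations above are easy to check numerically. A small sketch via the singular value decomposition (the rank-one matrix below is an arbitrary assumed example):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])          # rank-1 example: second row = 2 * first row

U, s, Vh = np.linalg.svd(A)
tol = s.max() * max(A.shape) * np.finfo(float).eps
rank = int((s > tol).sum())              # dim(Im A) = number of nonzero singular values
nullity = A.shape[1] - rank              # dim(Ker A) = n - rank(A)
null_basis = Vh[rank:].T                 # columns span Ker(A)
```

Here `rank + nullity` recovers $n = 3$, and `A @ null_basis` is (numerically) zero.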
A matrix $A \in \mathbb{F}^{m \times n}$ is said to have full row rank if $m \le n$ and $\mathrm{rank}(A) = m$. Dually, it is said to have full column rank if $n \le m$ and $\mathrm{rank}(A) = n$. A full rank square matrix is called a nonsingular matrix. It is easy

to see that $\mathrm{rank}(A) = \mathrm{rank}(AT) = \mathrm{rank}(PA)$ if $T$ and $P$ are nonsingular matrices with appropriate dimensions.

A square matrix $U \in \mathbb{F}^{n \times n}$ whose columns form an orthonormal basis for $\mathbb{F}^n$ is called a unitary matrix (or an orthogonal matrix if $\mathbb{F} = \mathbb{R}$), and it satisfies $U^* U = I = U U^*$. The following lemma is useful.

Lemma 2.1 Let $D = \begin{bmatrix} d_1 & \ldots & d_k \end{bmatrix} \in \mathbb{F}^{n \times k}$ ($n > k$) be such that $D^* D = I$, so $d_i$, $i = 1, 2, \ldots, k$, are orthonormal. Then there exists a matrix $D_\perp \in \mathbb{F}^{n \times (n-k)}$ such that $\begin{bmatrix} D & D_\perp \end{bmatrix}$ is a unitary matrix. Furthermore, the columns of $D_\perp$, $d_i$, $i = k+1, \ldots, n$, form an orthonormal completion of $\{d_1, d_2, \ldots, d_k\}$.

The following results are standard:

Lemma 2.2 Consider the linear equation

$$AX = B$$

where $A \in \mathbb{F}^{n \times l}$ and $B \in \mathbb{F}^{n \times m}$ are given matrices. Then the following statements are equivalent:

(i) there exists a solution $X \in \mathbb{F}^{l \times m}$;

(ii) the columns of $B$ are in $\mathrm{Im} A$;

(iii) $\mathrm{rank} \begin{bmatrix} A & B \end{bmatrix} = \mathrm{rank}(A)$;

(iv) $\mathrm{Ker}(A^*) \subset \mathrm{Ker}(B^*)$.

Furthermore, the solution, if it exists, is unique if and only if $A$ has full column rank.

The following lemma concerns the rank of the product of two matrices.

Lemma 2.3 (Sylvester's inequality) Let $A \in \mathbb{F}^{m \times n}$ and $B \in \mathbb{F}^{n \times k}$. Then

$$\mathrm{rank}(A) + \mathrm{rank}(B) - n \le \mathrm{rank}(AB) \le \min\{\mathrm{rank}(A), \mathrm{rank}(B)\}.$$

For simplicity, a matrix $M$ with $m_{ij}$ as its entry in the $i$-th row and $j$-th column will sometimes be denoted as $M = [m_{ij}]$ in this book. We will mostly use $I$ as above to denote an identity matrix with compatible dimensions, but from time to time, we will use $I_n$ to emphasize that it is an $n \times n$ identity matrix.

Now let $A = [a_{ij}] \in \mathbb{C}^{n \times n}$; then the trace of $A$ is defined as

$$\mathrm{Trace}(A) := \sum_{i=1}^n a_{ii}.$$

Trace has the following properties:

$$\mathrm{Trace}(\alpha A) = \alpha \, \mathrm{Trace}(A), \qquad \forall \alpha \in \mathbb{C}, \; A \in \mathbb{C}^{n \times n}$$

$$\mathrm{Trace}(A + B) = \mathrm{Trace}(A) + \mathrm{Trace}(B), \qquad \forall A, B \in \mathbb{C}^{n \times n}$$

$$\mathrm{Trace}(AB) = \mathrm{Trace}(BA), \qquad \forall A \in \mathbb{C}^{n \times m}, \; B \in \mathbb{C}^{m \times n}.$$
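Sylvester's inequality and the commutation property of the trace can be spot-checked on random data; the matrices below are arbitrary placeholders, not data from the book.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))   # generically full rank
B = rng.standard_normal((3, 5))

rank = np.linalg.matrix_rank
lower = rank(A) + rank(B) - 3     # n = 3 is the inner dimension
upper = min(rank(A), rank(B))
r = rank(A @ B)                   # should lie between lower and upper

M = rng.standard_normal((3, 4))
N = rng.standard_normal((4, 3))
t1 = np.trace(M @ N)              # Trace(MN)
t2 = np.trace(N @ M)              # Trace(NM): equal although MN is 3x3 and NM is 4x4
```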

2.2 Eigenvalues and Eigenvectors

Let $A \in \mathbb{C}^{n \times n}$; then the eigenvalues of $A$ are the $n$ roots of its characteristic polynomial $p(\lambda) = \det(\lambda I - A)$. This set of roots is called the spectrum of $A$ and is denoted by $\sigma(A)$ (not to be confused with singular values defined later). That is, $\sigma(A) := \{\lambda_1, \lambda_2, \ldots, \lambda_n\}$ if $\lambda_i$ is a root of $p(\lambda)$. The maximal modulus of the eigenvalues is called the spectral radius, denoted by

$$\rho(A) := \max_{1 \le i \le n} |\lambda_i|$$

where, as usual, $|\cdot|$ denotes the magnitude.

If $\lambda \in \sigma(A)$, then any nonzero vector $x \in \mathbb{C}^n$ that satisfies

$$Ax = \lambda x$$

is referred to as a right eigenvector of $A$. Dually, a nonzero vector $y$ is called a left eigenvector of $A$ if

$$y^* A = \lambda y^*.$$

It is a well known (but nontrivial) fact in linear algebra that any complex matrix admits a Jordan Canonical representation:

Theorem 2.4 For any square complex matrix $A \in \mathbb{C}^{n \times n}$, there exists a nonsingular matrix $T$ such that

$$A = T J T^{-1}$$

where

$$J = \mathrm{diag}\{J_1, J_2, \ldots, J_l\}$$
$$J_i = \mathrm{diag}\{J_{i1}, J_{i2}, \ldots, J_{i m_i}\}$$

$$J_{ij} = \begin{bmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i \end{bmatrix} \in \mathbb{C}^{n_{ij} \times n_{ij}}$$

with $\sum_{i=1}^l \sum_{j=1}^{m_i} n_{ij} = n$, and with $\{\lambda_i : i = 1, \ldots, l\}$ as the distinct eigenvalues of $A$.

The transformation $T$ has the following form:

$$T = \begin{bmatrix} T_1 & T_2 & \ldots & T_l \end{bmatrix}$$
$$T_i = \begin{bmatrix} T_{i1} & T_{i2} & \ldots & T_{i m_i} \end{bmatrix}$$
$$T_{ij} = \begin{bmatrix} t_{ij1} & t_{ij2} & \ldots & t_{ij n_{ij}} \end{bmatrix}$$

where $t_{ij1}$ are the eigenvectors of $A$,

$$A t_{ij1} = \lambda_i t_{ij1},$$

and $t_{ijk} \ne 0$, defined by the following linear equations for $k \ge 2$,

$$(A - \lambda_i I) t_{ijk} = t_{ij(k-1)},$$

are called the generalized eigenvectors of $A$. For a given integer $q \le n_{ij}$, the generalized eigenvectors $t_{ijl}$, $\forall l < q$, are called the lower rank generalized eigenvectors of $t_{ijq}$.

Definition 2.1 A square matrix $A \in \mathbb{R}^{n \times n}$ is called cyclic if the Jordan canonical form of $A$ has one and only one Jordan block associated with each distinct eigenvalue. More specifically, a matrix $A$ is cyclic if its Jordan form has $m_i = 1$, $i = 1, \ldots, l$.

Clearly, a square matrix $A$ with all distinct eigenvalues is cyclic and can be diagonalized:

$$A \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix} = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix} \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix}.$$

In this case, $A$ has the following spectral decomposition:

$$A = \sum_{i=1}^n \lambda_i x_i y_i^*$$

where $y_i \in \mathbb{C}^n$ is given by

$$\begin{bmatrix} y_1^* \\ y_2^* \\ \vdots \\ y_n^* \end{bmatrix} = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix}^{-1}.$$

In general, eigenvalues need not be real, and neither do their corresponding eigenvectors. However, if $A$ is real and $\lambda$ is a real eigenvalue of $A$, then there is a real eigenvector corresponding to $\lambda$. In the case that all eigenvalues of a matrix $A$ are real(1), we will denote $\lambda_{\max}(A)$ for the largest eigenvalue of $A$ and $\lambda_{\min}(A)$ for the smallest eigenvalue. In particular, if $A$ is a Hermitian matrix, then there exist a unitary matrix $U$ and a real diagonal matrix $\Lambda$ such that $A = U \Lambda U^*$, where the diagonal elements of $\Lambda$ are the eigenvalues of $A$ and the columns of $U$ are the eigenvectors of $A$.

The following theorem is useful in linear system theory.

Theorem 2.5 (Cayley-Hamilton) Let $A \in \mathbb{C}^{n \times n}$ and denote

$$\det(\lambda I - A) = \lambda^n + a_1 \lambda^{n-1} + \cdots + a_n.$$

Then

$$A^n + a_1 A^{n-1} + \cdots + a_n I = 0.$$

(1) For example, this is the case if $A$ is Hermitian, i.e., $A = A^*$.
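A quick numerical check of the Cayley-Hamilton theorem (the $2 \times 2$ matrix is an assumed example with characteristic polynomial $\lambda^2 - 5\lambda + 6$):

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])  # triangular, eigenvalues 2 and 3
coeffs = np.poly(A)                     # [1, a1, a2] = [1, -5, 6]

# evaluate p(A) = A^2 + a1*A + a2*I by Horner's scheme; it should vanish
pA = np.zeros_like(A)
for c in coeffs:
    pA = pA @ A + c * np.eye(2)
```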

This is obvious if $A$ has distinct eigenvalues, since

$$A^n + a_1 A^{n-1} + \cdots + a_n I = T \, \mathrm{diag}\{\ldots, \; \lambda_i^n + a_1 \lambda_i^{n-1} + \cdots + a_n, \; \ldots\} \, T^{-1} = 0$$

and $\lambda_i$ is an eigenvalue of $A$. The proof for the general case follows from the following lemma.

Lemma 2.6 Let $A \in \mathbb{C}^{n \times n}$. Then

$$(\lambda I - A)^{-1} = \frac{1}{\det(\lambda I - A)} \left( R_1 \lambda^{n-1} + R_2 \lambda^{n-2} + \cdots + R_n \right)$$

and

$$\det(\lambda I - A) = \lambda^n + a_1 \lambda^{n-1} + \cdots + a_n$$

where $a_i$ and $R_i$ can be computed from the following recursive formulas:

$$a_1 = -\mathrm{Trace}(A), \qquad R_1 = I$$
$$a_2 = -\tfrac{1}{2} \mathrm{Trace}(R_2 A), \qquad R_2 = R_1 A + a_1 I$$
$$\vdots$$
$$a_{n-1} = -\tfrac{1}{n-1} \mathrm{Trace}(R_{n-1} A), \qquad R_n = R_{n-1} A + a_{n-1} I$$
$$a_n = -\tfrac{1}{n} \mathrm{Trace}(R_n A), \qquad 0 = R_n A + a_n I.$$

The proof is left to the reader as an exercise. Note that the Cayley-Hamilton Theorem follows from the fact that

$$0 = R_n A + a_n I = A^n + a_1 A^{n-1} + \cdots + a_n I.$$

2.3 Matrix Inversion Formulas

Let $A$ be a square matrix partitioned as follows:

$$A := \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$$

where $A_{11}$ and $A_{22}$ are also square matrices. Now suppose $A_{11}$ is nonsingular; then $A$ has the following decomposition:

$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} = \begin{bmatrix} I & 0 \\ A_{21} A_{11}^{-1} & I \end{bmatrix} \begin{bmatrix} A_{11} & 0 \\ 0 & \Delta \end{bmatrix} \begin{bmatrix} I & A_{11}^{-1} A_{12} \\ 0 & I \end{bmatrix}$$

with $\Delta := A_{22} - A_{21} A_{11}^{-1} A_{12}$, and $A$ is nonsingular iff $\Delta$ is nonsingular. Dually, if $A_{22}$ is nonsingular, then

$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} = \begin{bmatrix} I & A_{12} A_{22}^{-1} \\ 0 & I \end{bmatrix} \begin{bmatrix} \hat\Delta & 0 \\ 0 & A_{22} \end{bmatrix} \begin{bmatrix} I & 0 \\ A_{22}^{-1} A_{21} & I \end{bmatrix}$$

with $\hat\Delta := A_{11} - A_{12} A_{22}^{-1} A_{21}$, and $A$ is nonsingular iff $\hat\Delta$ is nonsingular. The matrix $\Delta$ ($\hat\Delta$) is called the Schur complement of $A_{11}$ ($A_{22}$) in $A$.

Moreover, if $A$ is nonsingular, then

$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}^{-1} = \begin{bmatrix} A_{11}^{-1} + A_{11}^{-1} A_{12} \Delta^{-1} A_{21} A_{11}^{-1} & -A_{11}^{-1} A_{12} \Delta^{-1} \\ -\Delta^{-1} A_{21} A_{11}^{-1} & \Delta^{-1} \end{bmatrix}$$

and

$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}^{-1} = \begin{bmatrix} \hat\Delta^{-1} & -\hat\Delta^{-1} A_{12} A_{22}^{-1} \\ -A_{22}^{-1} A_{21} \hat\Delta^{-1} & A_{22}^{-1} + A_{22}^{-1} A_{21} \hat\Delta^{-1} A_{12} A_{22}^{-1} \end{bmatrix}.$$

The above matrix inversion formulas are particularly simple if $A$ is block triangular:

$$\begin{bmatrix} A_{11} & 0 \\ A_{21} & A_{22} \end{bmatrix}^{-1} = \begin{bmatrix} A_{11}^{-1} & 0 \\ -A_{22}^{-1} A_{21} A_{11}^{-1} & A_{22}^{-1} \end{bmatrix}$$

$$\begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix}^{-1} = \begin{bmatrix} A_{11}^{-1} & -A_{11}^{-1} A_{12} A_{22}^{-1} \\ 0 & A_{22}^{-1} \end{bmatrix}.$$

The following identity is also very useful. Suppose $A_{11}$ and $A_{22}$ are both nonsingular matrices; then

$$(A_{11} - A_{12} A_{22}^{-1} A_{21})^{-1} = A_{11}^{-1} + A_{11}^{-1} A_{12} (A_{22} - A_{21} A_{11}^{-1} A_{12})^{-1} A_{21} A_{11}^{-1}.$$

As a consequence of the matrix decomposition formulas mentioned above, we can calculate the determinant of a matrix by using its sub-matrices. Suppose $A_{11}$ is nonsingular; then

$$\det A = \det A_{11} \det(A_{22} - A_{21} A_{11}^{-1} A_{12}).$$

On the other hand, if $A_{22}$ is nonsingular, then

$$\det A = \det A_{22} \det(A_{11} - A_{12} A_{22}^{-1} A_{21}).$$

In particular, for any $B \in \mathbb{C}^{m \times n}$ and $C \in \mathbb{C}^{n \times m}$, we have

$$\det \begin{bmatrix} I_m & B \\ -C & I_n \end{bmatrix} = \det(I_n + CB) = \det(I_m + BC)$$

and for $x, y \in \mathbb{C}^n$

$$\det(I_n + x y^*) = 1 + y^* x.$$
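The block inversion and determinant identities above can be verified numerically. The blocks below are random placeholders (shifted by $3I$ so that the relevant blocks are safely nonsingular):

```python
import numpy as np

rng = np.random.default_rng(0)
A11 = rng.standard_normal((2, 2)) + 3 * np.eye(2)
A12 = rng.standard_normal((2, 2))
A21 = rng.standard_normal((2, 2))
A22 = rng.standard_normal((2, 2)) + 3 * np.eye(2)
A = np.block([[A11, A12], [A21, A22]])

inv = np.linalg.inv
D = A22 - A21 @ inv(A11) @ A12     # Schur complement of A11 in A
A_inv = np.block([
    [inv(A11) + inv(A11) @ A12 @ inv(D) @ A21 @ inv(A11), -inv(A11) @ A12 @ inv(D)],
    [-inv(D) @ A21 @ inv(A11), inv(D)],
])                                 # block inversion formula

x = rng.standard_normal((3, 1))    # for det(I + x y^T) = 1 + y^T x
y = rng.standard_normal((3, 1))
```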

2.4 Matrix Calculus

Let $X = [x_{ij}] \in \mathbb{C}^{m \times n}$ be a real or complex matrix and let $F(X) \in \mathbb{C}$ be a scalar real or complex function of $X$; then the derivative of $F(X)$ with respect to $X$ is defined as

$$\frac{\partial}{\partial X} F(X) := \left[ \frac{\partial}{\partial x_{ij}} F(X) \right].$$

Let $A$ and $B$ be constant complex matrices with compatible dimensions. Then the following is a list of formulas for the derivatives(2):

$$\frac{\partial}{\partial X} \mathrm{Trace}\{AXB\} = A^T B^T$$
$$\frac{\partial}{\partial X} \mathrm{Trace}\{AX^T B\} = BA$$
$$\frac{\partial}{\partial X} \mathrm{Trace}\{AXBX\} = A^T X^T B^T + B^T X^T A^T$$
$$\frac{\partial}{\partial X} \mathrm{Trace}\{AXBX^T\} = A^T X B^T + AXB$$
$$\frac{\partial}{\partial X} \mathrm{Trace}\{X^k\} = k (X^{k-1})^T$$
$$\frac{\partial}{\partial X} \mathrm{Trace}\{AX^k\} = \left( \sum_{i=0}^{k-1} X^i A X^{k-i-1} \right)^T$$
$$\frac{\partial}{\partial X} \mathrm{Trace}\{AX^{-1}B\} = -(X^{-1} B A X^{-1})^T$$
$$\frac{\partial}{\partial X} \log \det X = (X^T)^{-1}$$
$$\frac{\partial}{\partial X} \det X^T = \frac{\partial}{\partial X} \det X = (\det X)(X^T)^{-1}$$
$$\frac{\partial}{\partial X} \det\{X^k\} = k (\det X^k)(X^T)^{-1}.$$

And finally, the derivative of a matrix $A(\lambda) \in \mathbb{C}^{m \times n}$ with respect to a scalar $\lambda \in \mathbb{C}$ is defined as

$$\frac{dA}{d\lambda} := \left[ \frac{d a_{ij}}{d\lambda} \right]$$

so that all the rules applicable to a scalar function also apply here. In particular, we have

$$\frac{d(AB)}{d\lambda} = \frac{dA}{d\lambda} B + A \frac{dB}{d\lambda}$$

$$\frac{dA^{-1}}{d\lambda} = -A^{-1} \frac{dA}{d\lambda} A^{-1}.$$

(2) Note that transpose rather than complex conjugate transpose should be used in the list even if the involved matrices are complex matrices.
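Any formula in the list can be checked entrywise by finite differences. The sketch below checks the first one, $\partial/\partial X \, \mathrm{Trace}\{AXB\} = A^T B^T$, on random real placeholder matrices; the same pattern checks the others.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 3))   # AXB is then 3x3 for X of shape 4x4
X = rng.standard_normal((4, 4))

grad_analytic = A.T @ B.T         # claimed derivative of F(X) = Trace(AXB)

h = 1e-6
grad_fd = np.zeros_like(X)
for i in range(X.shape[0]):
    for j in range(X.shape[1]):
        E = np.zeros_like(X)
        E[i, j] = h
        # central difference in the (i, j) coordinate
        grad_fd[i, j] = (np.trace(A @ (X + E) @ B) - np.trace(A @ (X - E) @ B)) / (2 * h)
```

Since $F$ is linear in $X$, the central difference agrees with the analytic gradient up to rounding.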

2.5 Kronecker Product and Kronecker Sum

Let $A \in \mathbb{C}^{m \times n}$ and $B \in \mathbb{C}^{p \times q}$; then the Kronecker product of $A$ and $B$ is defined as

$$A \otimes B := \begin{bmatrix} a_{11} B & a_{12} B & \cdots & a_{1n} B \\ a_{21} B & a_{22} B & \cdots & a_{2n} B \\ \vdots & \vdots & & \vdots \\ a_{m1} B & a_{m2} B & \cdots & a_{mn} B \end{bmatrix} \in \mathbb{C}^{mp \times nq}.$$

Furthermore, if the matrices $A$ and $B$ are square with $A \in \mathbb{C}^{n \times n}$ and $B \in \mathbb{C}^{m \times m}$, then the Kronecker sum of $A$ and $B$ is defined as

$$A \oplus B := (A \otimes I_m) + (I_n \otimes B) \in \mathbb{C}^{nm \times nm}.$$

Let $X \in \mathbb{C}^{m \times n}$ and let $\mathrm{vec}(X)$ denote the vector formed by stacking the columns of $X$ into one long vector:

$$\mathrm{vec}(X) := \begin{bmatrix} x_{11} & x_{21} & \cdots & x_{m1} & x_{12} & x_{22} & \cdots & x_{m2} & \cdots & x_{1n} & x_{2n} & \cdots & x_{mn} \end{bmatrix}^T.$$

Then for any matrices $A \in \mathbb{C}^{k \times m}$, $B \in \mathbb{C}^{n \times l}$, and $X \in \mathbb{C}^{m \times n}$, we have

$$\mathrm{vec}(AXB) = (B^T \otimes A) \, \mathrm{vec}(X).$$

Consequently, if $k = m$ and $l = n$, then

$$\mathrm{vec}(AX + XB) = (B^T \oplus A) \, \mathrm{vec}(X).$$

Let $A \in \mathbb{C}^{n \times n}$ and $B \in \mathbb{C}^{m \times m}$, and let $\{\lambda_i, i = 1, \ldots, n\}$ be the eigenvalues of $A$ and $\{\mu_j, j = 1, \ldots, m\}$ be the eigenvalues of $B$. Then we have the following properties:

The eigenvalues of $A \otimes B$ are the $mn$ numbers $\lambda_i \mu_j$, $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, m$.

The eigenvalues of $A \oplus B = (A \otimes I_m) + (I_n \otimes B)$ are the $mn$ numbers $\lambda_i + \mu_j$, $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, m$.

Let $\{x_i, i = 1, \ldots, n\}$ be the eigenvectors of $A$ and let $\{y_j, j = 1, \ldots, m\}$ be the eigenvectors of $B$. Then the eigenvectors of $A \otimes B$ and $A \oplus B$ corresponding to the eigenvalues $\lambda_i \mu_j$ and $\lambda_i + \mu_j$ are $x_i \otimes y_j$.

Using these properties, we can show the following lemma.

Lemma 2.7 Consider the Sylvester equation

$$AX + XB = C \tag{2.1}$$

where $A \in \mathbb{F}^{n \times n}$, $B \in \mathbb{F}^{m \times m}$, and $C \in \mathbb{F}^{n \times m}$ are given matrices. There exists a unique solution $X \in \mathbb{F}^{n \times m}$ if and only if $\lambda_i(A) + \mu_j(B) \ne 0$, $\forall i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, m$. In particular, if $B = A^*$, (2.1) is called the "Lyapunov Equation" and the necessary and sufficient condition for the existence of a unique solution is that

$$\lambda_i(A) + \bar\lambda_j(A) \ne 0, \qquad \forall i, j = 1, 2, \ldots, n.$$

Proof. Equation (2.1) can be written as a linear matrix equation by using the Kronecker product:

$$(B^T \oplus A) \, \mathrm{vec}(X) = \mathrm{vec}(C).$$

Now this equation has a unique solution iff $B^T \oplus A$ is nonsingular. Since the eigenvalues of $B^T \oplus A$ have the form $\lambda_i(A) + \mu_j(B^T) = \lambda_i(A) + \mu_j(B)$, the conclusion follows. $\Box$

The properties of the Lyapunov equations will be studied in more detail in the next chapter.

2.6 Invariant Subspaces

Let $A : \mathbb{C}^n \longmapsto \mathbb{C}^n$ be a linear transformation, $\lambda$ be an eigenvalue of $A$, and $x$ be a corresponding eigenvector, respectively. Then $Ax = \lambda x$ and $A(\alpha x) = \lambda (\alpha x)$ for any $\alpha \in \mathbb{C}$. Clearly, the eigenvector $x$ defines a one-dimensional subspace that is invariant with respect to pre-multiplication by $A$ since $A^k x = \lambda^k x$, $\forall k$. In general, a subspace $S \subset \mathbb{C}^n$ is called invariant for the transformation $A$, or $A$-invariant, if $Ax \in S$ for every $x \in S$. In other words, that $S$ is invariant for $A$ means that the image of $S$ under $A$ is contained in $S$: $AS \subset S$. For example, $\{0\}$, $\mathbb{C}^n$, $\mathrm{Ker} A$, and $\mathrm{Im} A$ are all $A$-invariant subspaces.

As a generalization of the one dimensional invariant subspace induced by an eigenvector, let $\lambda_1, \ldots, \lambda_k$ be eigenvalues of $A$ (not necessarily distinct), and let $x_i$ be the corresponding eigenvectors and the generalized eigenvectors.
Then $S = \mathrm{span}\{x_1, \ldots, x_k\}$ is an $A$-invariant subspace provided that all the lower rank generalized eigenvectors are included. More specifically, let $\lambda_1 = \lambda_2 = \cdots = \lambda_l$ be eigenvalues of $A$, and

let $x_1, x_2, \ldots, x_l$ be the corresponding eigenvector and the generalized eigenvectors obtained through the following equations:

$$(A - \lambda_1 I) x_1 = 0$$
$$(A - \lambda_1 I) x_2 = x_1$$
$$\vdots$$
$$(A - \lambda_1 I) x_l = x_{l-1}.$$

Then a subspace $S$ with $x_t \in S$ for some $t \le l$ is an $A$-invariant subspace only if all lower rank eigenvectors and generalized eigenvectors of $x_t$ are in $S$, i.e., $x_i \in S$, $\forall 1 \le i \le t$. This will be further illustrated in Example 2.1. On the other hand, if $S$ is a nontrivial subspace(3) and is $A$-invariant, then there is $x \in S$ and $\lambda$ such that $Ax = \lambda x$.

An $A$-invariant subspace $S \subset \mathbb{C}^n$ is called a stable invariant subspace if all the eigenvalues of $A$ constrained to $S$ have negative real parts. Stable invariant subspaces will play an important role in computing the stabilizing solutions to the algebraic Riccati equations in Chapter 13.

Example 2.1 Suppose a matrix $A$ has the following Jordan canonical form:

$$A \begin{bmatrix} x_1 & x_2 & x_3 & x_4 \end{bmatrix} = \begin{bmatrix} x_1 & x_2 & x_3 & x_4 \end{bmatrix} \begin{bmatrix} \lambda_1 & 1 & & \\ & \lambda_1 & & \\ & & \lambda_3 & \\ & & & \lambda_4 \end{bmatrix}$$

with $\mathrm{Re}\,\lambda_1 < 0$, $\lambda_3 < 0$, and $\lambda_4 > 0$. Then it is easy to verify that

$$S_1 = \mathrm{span}\{x_1\} \qquad S_{12} = \mathrm{span}\{x_1, x_2\} \qquad S_{123} = \mathrm{span}\{x_1, x_2, x_3\}$$
$$S_3 = \mathrm{span}\{x_3\} \qquad S_{13} = \mathrm{span}\{x_1, x_3\} \qquad S_{124} = \mathrm{span}\{x_1, x_2, x_4\}$$
$$S_4 = \mathrm{span}\{x_4\} \qquad S_{14} = \mathrm{span}\{x_1, x_4\} \qquad S_{34} = \mathrm{span}\{x_3, x_4\}$$

are all $A$-invariant subspaces. Moreover, $S_1$, $S_3$, $S_{12}$, $S_{13}$, and $S_{123}$ are stable $A$-invariant subspaces. However, the subspaces $S_2 = \mathrm{span}\{x_2\}$, $S_{23} = \mathrm{span}\{x_2, x_3\}$, $S_{24} = \mathrm{span}\{x_2, x_4\}$, and $S_{234} = \mathrm{span}\{x_2, x_3, x_4\}$ are not $A$-invariant subspaces since the lower rank generalized eigenvector $x_1$ of $x_2$ is not in these subspaces. To illustrate, consider the subspace $S_{23}$. Then by definition, $Ax_2 \in S_{23}$ if it is an $A$-invariant subspace. Since

$$Ax_2 = \lambda_1 x_2 + x_1,$$

$Ax_2 \in S_{23}$ would require that $x_1$ be a linear combination of $x_2$ and $x_3$, but this is impossible since $x_1$ is independent of $x_2$ and $x_3$.

(3) We will say a subspace $S$ is trivial if $S = \{0\}$.
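Returning to Lemma 2.7 of the previous section: its proof is itself an algorithm, since forming $B^T \oplus A$ turns the Sylvester equation into an ordinary linear system. A minimal sketch (the matrices are assumed examples, chosen so that $\lambda_i(A) + \mu_j(B) \ne 0$ for all $i, j$):

```python
import numpy as np

def solve_sylvester(A, B, C):
    """Solve AX + XB = C via (B^T (+) A) vec(X) = vec(C), as in Lemma 2.7."""
    n, m = A.shape[0], B.shape[0]
    # B^T (+) A = (B^T kron I_n) + (I_m kron A), acting on column-stacked vec(X)
    K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
    x = np.linalg.solve(K, C.flatten(order="F"))
    return x.reshape((n, m), order="F")

A = np.array([[1.0, 2.0], [0.0, 3.0]])     # eigenvalues 1, 3
B = np.array([[-5.0, 0.0], [1.0, -4.0]])   # eigenvalues -5, -4: no zero sums
C = np.array([[1.0, 0.0], [2.0, 1.0]])
X = solve_sylvester(A, B, C)
```

This $O((nm)^3)$ approach is only a didactic sketch; production solvers use the Bartels-Stewart algorithm instead.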

2.7 Vector Norms and Matrix Norms

In this section, we will define vector and matrix norms. Let $X$ be a vector space. A real-valued function $\|\cdot\|$ defined on $X$ is said to be a norm on $X$ if it satisfies the following properties:

(i) $\|x\| \ge 0$ (positivity);

(ii) $\|x\| = 0$ if and only if $x = 0$ (positive definiteness);

(iii) $\|\alpha x\| = |\alpha| \, \|x\|$, for any scalar $\alpha$ (homogeneity);

(iv) $\|x + y\| \le \|x\| + \|y\|$ (triangle inequality)

for any $x \in X$ and $y \in X$. A function is said to be a semi-norm if it satisfies (i), (iii), and (iv) but not necessarily (ii).

Let $x \in \mathbb{C}^n$. Then we define the vector $p$-norm of $x$ as

$$\|x\|_p := \left( \sum_{i=1}^n |x_i|^p \right)^{1/p}, \qquad 1 \le p \le \infty.$$

In particular, when $p = 1, 2, \infty$ we have

$$\|x\|_1 := \sum_{i=1}^n |x_i|$$

$$\|x\|_2 := \sqrt{\sum_{i=1}^n |x_i|^2}$$

$$\|x\|_\infty := \max_{1 \le i \le n} |x_i|.$$

Clearly, norm is an abstraction and extension of our usual concept of length in 3-dimensional Euclidean space. So a norm of a vector is a measure of the vector "length"; for example, $\|x\|_2$ is the Euclidean distance of the vector $x$ from the origin. Similarly, we can introduce some kind of measure for a matrix.

Let $A = [a_{ij}] \in \mathbb{C}^{m \times n}$; then the matrix norm induced by a vector $p$-norm is defined as

$$\|A\|_p := \sup_{x \ne 0} \frac{\|Ax\|_p}{\|x\|_p}.$$

In particular, for $p = 1, 2, \infty$, the corresponding induced matrix norm can be computed as

$$\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^m |a_{ij}| \qquad \text{(column sum)}$$

$$\|A\|_2 = \sqrt{\lambda_{\max}(A^* A)}$$

$$\|A\|_\infty = \max_{1 \le i \le m} \sum_{j=1}^n |a_{ij}| \qquad \text{(row sum)}.$$

The matrix norms induced by vector $p$-norms are sometimes called induced $p$-norms. This is because $\|A\|_p$ is defined by or induced from a vector $p$-norm. In fact, $A$ can be viewed as a mapping from a vector space $\mathbb{C}^n$ equipped with a vector norm $\|\cdot\|_p$ to another vector space $\mathbb{C}^m$ equipped with a vector norm $\|\cdot\|_p$. So from a system theoretical point of view, the induced norms have the interpretation of input/output amplification gains.

We shall adopt the following convention throughout the book for the vector and matrix norms unless specified otherwise: let $x \in \mathbb{C}^n$ and $A \in \mathbb{C}^{m \times n}$; then we shall denote the Euclidean 2-norm of $x$ simply by

$$\|x\| := \|x\|_2$$

and the induced 2-norm of $A$ by

$$\|A\| := \|A\|_2.$$

The Euclidean 2-norm has some very nice properties:

Lemma 2.8 Let $x \in \mathbb{F}^n$ and $y \in \mathbb{F}^m$.

1. Suppose $n \ge m$. Then $\|x\| = \|y\|$ iff there is a matrix $U \in \mathbb{F}^{n \times m}$ such that $x = Uy$ and $U^* U = I$.

2. Suppose $n = m$. Then $|x^* y| \le \|x\| \, \|y\|$. Moreover, the equality holds iff $x = \alpha y$ for some $\alpha \in \mathbb{F}$ or $y = 0$.

3. $\|x\| \le \|y\|$ iff there is a matrix $\Delta \in \mathbb{F}^{n \times m}$ with $\|\Delta\| \le 1$ such that $x = \Delta y$. Furthermore, $\|x\| < \|y\|$ iff $\|\Delta\| < 1$.

4. $\|Ux\| = \|x\|$ for any appropriately dimensioned unitary matrices $U$.

Another often used matrix norm is the so-called Frobenius norm. It is defined as

$$\|A\|_F := \sqrt{\mathrm{Trace}(A^* A)} = \sqrt{\sum_{i=1}^m \sum_{j=1}^n |a_{ij}|^2}.$$

However, the Frobenius norm is not an induced norm.

The following properties of matrix norms are easy to show:

Lemma 2.9 Let $A$ and $B$ be any matrices with appropriate dimensions. Then

1. $\rho(A) \le \|A\|$ (this is also true for the $F$-norm and any induced matrix norm).

2. $\|AB\| \le \|A\| \, \|B\|$. In particular, this gives $\|A^{-1}\| \ge \|A\|^{-1}$ if $A$ is invertible. (This is also true for any induced matrix norm.)

3. $\|UAV\| = \|A\|$, and $\|UAV\|_F = \|A\|_F$, for any appropriately dimensioned unitary matrices $U$ and $V$.

4. $\|AB\|_F \le \|A\| \, \|B\|_F$ and $\|AB\|_F \le \|B\| \, \|A\|_F$.

Note that although pre-multiplication or post-multiplication of a unitary matrix on a matrix does not change its induced 2-norm and $F$-norm, it does change its eigenvalues. For example, let

$$A = \begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}.$$

Then $\lambda_1(A) = 1$, $\lambda_2(A) = 0$. Now let

$$U = \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix};$$

then $U$ is a unitary matrix and

$$UA = \begin{bmatrix} \sqrt{2} & 0 \\ 0 & 0 \end{bmatrix}$$

with $\lambda_1(UA) = \sqrt{2}$, $\lambda_2(UA) = 0$. This property is useful in some matrix perturbation problems, particularly in the computation of bounds for structured singular values which will be studied in Chapter 10.

Lemma 2.10 Let $A$ be a block partitioned matrix with

$$A = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1q} \\ A_{21} & A_{22} & \cdots & A_{2q} \\ \vdots & \vdots & & \vdots \\ A_{m1} & A_{m2} & \cdots & A_{mq} \end{bmatrix} =: [A_{ij}]$$

and let each $A_{ij}$ be an appropriately dimensioned matrix. Then for any induced matrix $p$-norm

$$\|A\|_p \le \left\| \begin{bmatrix} \|A_{11}\|_p & \|A_{12}\|_p & \cdots & \|A_{1q}\|_p \\ \|A_{21}\|_p & \|A_{22}\|_p & \cdots & \|A_{2q}\|_p \\ \vdots & \vdots & & \vdots \\ \|A_{m1}\|_p & \|A_{m2}\|_p & \cdots & \|A_{mq}\|_p \end{bmatrix} \right\|_p. \tag{2.2}$$

Further, the inequality becomes an equality if the $F$-norm is used.

Proof. It is obvious that if the $F$-norm is used, then the right hand side of inequality (2.2) equals the left hand side. Hence only the induced $p$-norm cases, $1 \le p \le \infty$, will be shown. Let a vector $x$ be partitioned consistently with $A$ as

$$x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_q \end{bmatrix}$$

and note that

$$\|x\|_p = \left\| \begin{bmatrix} \|x_1\|_p \\ \|x_2\|_p \\ \vdots \\ \|x_q\|_p \end{bmatrix} \right\|_p.$$

Then

$$\| [A_{ij}] \|_p = \sup_{\|x\|_p = 1} \| [A_{ij}] \, x \|_p = \sup_{\|x\|_p = 1} \left\| \begin{bmatrix} \sum_{j=1}^q A_{1j} x_j \\ \sum_{j=1}^q A_{2j} x_j \\ \vdots \\ \sum_{j=1}^q A_{mj} x_j \end{bmatrix} \right\|_p$$

$$= \sup_{\|x\|_p = 1} \left\| \begin{bmatrix} \left\| \sum_{j=1}^q A_{1j} x_j \right\|_p \\ \left\| \sum_{j=1}^q A_{2j} x_j \right\|_p \\ \vdots \\ \left\| \sum_{j=1}^q A_{mj} x_j \right\|_p \end{bmatrix} \right\|_p \le \sup_{\|x\|_p = 1} \left\| \begin{bmatrix} \sum_{j=1}^q \|A_{1j}\|_p \|x_j\|_p \\ \sum_{j=1}^q \|A_{2j}\|_p \|x_j\|_p \\ \vdots \\ \sum_{j=1}^q \|A_{mj}\|_p \|x_j\|_p \end{bmatrix} \right\|_p$$

$$\le \sup_{\|x\|_p = 1} \left\| \left[ \|A_{ij}\|_p \right] \begin{bmatrix} \|x_1\|_p \\ \|x_2\|_p \\ \vdots \\ \|x_q\|_p \end{bmatrix} \right\|_p \le \left\| \left[ \|A_{ij}\|_p \right] \right\|_p. \qquad \Box$$
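The induced-norm formulas of this section, and the unitary example above, can both be reproduced numerically (the first matrix is an arbitrary assumed example; the second pair is the $A$, $U$ from the text):

```python
import numpy as np

A = np.array([[1.0, -2.0], [3.0, 4.0]])

norm1 = np.abs(A).sum(axis=0).max()                  # max column sum = ||A||_1
norminf = np.abs(A).sum(axis=1).max()                # max row sum = ||A||_inf
norm2 = np.sqrt(np.linalg.eigvalsh(A.T @ A).max())   # sqrt(lambda_max(A* A)) = ||A||_2

# unitary multiplication changes eigenvalues but not the 2-norm or F-norm
M = np.array([[1.0, 0.0], [1.0, 0.0]])               # eigenvalues 1, 0
U = np.array([[1.0, 1.0], [-1.0, 1.0]]) / np.sqrt(2.0)
UM = U @ M                                           # = [[sqrt(2), 0], [0, 0]]
```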

