Linear Control Theory: Structure, Robustness, and Optimization
AUTOMATION AND CONTROL ENGINEERING
A Series of Reference Books and Textbooks

Series Editors

FRANK L. LEWIS, PH.D., FELLOW IEEE, FELLOW IFAC
Professor
Automation and Robotics Research Institute
The University of Texas at Arlington

SHUZHI SAM GE, PH.D., FELLOW IEEE
Professor
Interactive Digital Media Institute
The National University of Singapore

1. Nonlinear Control of Electric Machinery, Darren M. Dawson, Jun Hu, and Timothy C. Burg
2. Computational Intelligence in Control Engineering, Robert E. King
3. Quantitative Feedback Theory: Fundamentals and Applications, Constantine H. Houpis and Steven J. Rasmussen
4. Self-Learning Control of Finite Markov Chains, A. S. Poznyak, K. Najim, and E. Gómez-Ramírez
5. Robust Control and Filtering for Time-Delay Systems, Magdi S. Mahmoud
6. Classical Feedback Control: With MATLAB®, Boris J. Lurie and Paul J. Enright
7. Optimal Control of Singularly Perturbed Linear Systems and Applications: High-Accuracy Techniques, Zoran Gajić and Myo-Taeg Lim
8. Engineering System Dynamics: A Unified Graph-Centered Approach, Forbes T. Brown
9. Advanced Process Identification and Control, Enso Ikonen and Kaddour Najim
10. Modern Control Engineering, P. N. Paraskevopoulos
11. Sliding Mode Control in Engineering, edited by Wilfrid Perruquetti and Jean-Pierre Barbot
12. Actuator Saturation Control, edited by Vikram Kapila and Karolos M. Grigoriadis
13. Nonlinear Control Systems, Zoran Vukić, Ljubomir Kuljača, Dali Donlagić, and Sejid Tesnjak
14. Linear Control System Analysis & Design: Fifth Edition, John D'Azzo, Constantine H. Houpis, and Stuart Sheldon
15. Robot Manipulator Control: Theory & Practice, Second Edition, Frank L. Lewis, Darren M. Dawson, and Chaouki Abdallah
16. Robust Control System Design: Advanced State Space Techniques, Second Edition, Chia-Chi Tsui
17. Differentially Flat Systems, Hebertt Sira-Ramirez and Sunil Kumar Agrawal
18. Chaos in Automatic Control, edited by Wilfrid Perruquetti and Jean-Pierre Barbot
19. Fuzzy Controller Design: Theory and Applications, Zdenko Kovacic and Stjepan Bogdan
20. Quantitative Feedback Theory: Fundamentals and Applications, Second Edition, Constantine H. Houpis, Steven J. Rasmussen, and Mario Garcia-Sanz
21. Neural Network Control of Nonlinear Discrete-Time Systems, Jagannathan Sarangapani
22. Autonomous Mobile Robots: Sensing, Control, Decision Making and Applications, edited by Shuzhi Sam Ge and Frank L. Lewis
23. Hard Disk Drive: Mechatronics and Control, Abdullah Al Mamun, GuoXiao Guo, and Chao Bi
24. Stochastic Hybrid Systems, edited by Christos G. Cassandras and John Lygeros
25. Wireless Ad Hoc and Sensor Networks: Protocols, Performance, and Control, Jagannathan Sarangapani
26. Modeling and Control of Complex Systems, edited by Petros A. Ioannou and Andreas Pitsillides
27. Intelligent Freight Transportation, edited by Petros A. Ioannou
28. Feedback Control of Dynamic Bipedal Robot Locomotion, Eric R. Westervelt, Jessy W. Grizzle, Christine Chevallereau, Jun Ho Choi, and Benjamin Morris
29. Optimal and Robust Estimation: With an Introduction to Stochastic Control Theory, Second Edition, Frank L. Lewis, Lihua Xie, and Dan Popa
30. Intelligent Systems: Modeling, Optimization, and Control, Yung C. Shin and Chengying Xu
31. Optimal Control: Weakly Coupled Systems and Applications, Zoran Gajić, Myo-Taeg Lim, Dobrila Škatarić, Wu-Chung Su, and Vojislav Kecman
32. Deterministic Learning Theory for Identification, Control, and Recognition, Cong Wang and David J. Hill
33. Linear Control Theory: Structure, Robustness, and Optimization, Shankar P. Bhattacharyya, Aniruddha Datta, and Lee H. Keel
Linear Control Theory: Structure, Robustness, and Optimization
Shankar P. Bhattacharyya Texas A&M University College Station, Texas, U.S.A.
Aniruddha Datta Texas A&M University College Station, Texas, U.S.A.
L. H. Keel Tennessee State University Nashville, Tennessee, U.S.A.
Boca Raton London New York
CRC Press is an imprint of the Taylor & Francis Group, an informa business
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2009 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works
Printed in the United States of America on acid-free paper
10 9 8 7 6 5 4 3 2 1

International Standard Book Number-13: 978-0-8493-4063-5 (Hardcover)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data

Datta, Aniruddha, 1963-
Linear control theory : structure, robustness, and optimization / authors, Aniruddha Datta, Lee H. Keel, Shankar P. Bhattacharyya.
p. cm. -- (Automation and control engineering)
A CRC title.
Includes bibliographical references and index.
ISBN 978-0-8493-4063-5 (hardcover : alk. paper)
1. Linear control systems. I. Keel, L. H. (Lee H.) II. Bhattacharyya, S. P. (Shankar P.), 1946- III. Title. IV. Series.
TJ220.D38 2009
629.8'32--dc22
2008052001

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com
and the CRC Press Web site at http://www.crcpress.com
DEDICATION
To my Guru, Ustad (Baba) Ali Akbar Khan, the greatest musician in the world. Baba opened my eyes to Nada Brahma through the divine music of the Seni Maihar Gharana. S.P. Bhattacharyya
To My Beloved Wife Anindita A. Datta
To My Beloved Wife Kuisook L.H. Keel
Contents

PREFACE xvii

I THREE TERM CONTROLLERS 1

1 PID CONTROLLERS: AN OVERVIEW OF CLASSICAL THEORY 3
1.1 Introduction to Control 3
1.2 The Magic of Integral Control 5
1.3 PID Controllers 8
1.4 Classical PID Controller Design 10
1.4.1 The Ziegler-Nichols Step Response Method 10
1.4.2 The Ziegler-Nichols Frequency Response Method 11
1.4.3 PID Settings Using the Internal Model Controller Design Technique 15
1.4.4 Dominant Pole Design: The Cohen-Coon Method 17
1.4.5 New Tuning Approaches 17
1.5 Integrator Windup 19
1.5.1 Setpoint Limitation 20
1.5.2 Back-Calculation and Tracking 20
1.5.3 Conditional Integration 21
1.6 Exercises 21
1.7 Notes and References 24

2 PID CONTROLLERS FOR DELAY-FREE LTI SYSTEMS 25
2.1 Introduction 25
2.2 Stabilizing Set 27
2.3 Signature Formulas 29
2.3.1 Computation of σ(p) 30
2.3.2 Alternative Signature Expression 32
2.4 Computation of the PID Stabilizing Set 33
2.5 PID Design with Performance Requirements 38
2.5.1 Signature Formulas for Complex Polynomials 40
2.5.2 Complex PID Stabilization Algorithm 42
2.5.3 PID Design with Guaranteed Gain and Phase Margins 44
2.5.4 Synthesis of PID Controllers with an H∞ Criterion 44
2.5.5 PID Controller Design for H∞ Robust Performance 49
2.6 Exercises 52
2.7 Notes and References 54

3 PID CONTROLLERS FOR SYSTEMS WITH TIME DELAY 55
3.1 Introduction 55
3.2 Characteristic Equations for Delay Systems 57
3.3 The Padé Approximation and Its Limitations 60
3.3.1 First Order Padé Approximation 62
3.3.2 Higher Order Padé Approximations 65
3.4 The Hermite-Biehler Theorem for Quasi-polynomials 69
3.4.1 Applications to Control Theory 71
3.5 Stability of Systems with a Single Delay 77
3.6 PID Stabilization of First Order Systems with Time Delay 85
3.6.1 The PID Stabilization Problem 86
3.6.2 Open-Loop Stable Plant 87
3.6.3 Open-Loop Unstable Plant 105
3.7 PID Stabilization of Arbitrary LTI Systems with a Single Time Delay 116
3.7.1 Connection between Pontryagin's Theory and the Nyquist Criterion 117
3.7.2 Problem Formulation and Solution Approach 121
3.7.3 Proportional Controllers 123
3.7.4 PI Controllers 126
3.7.5 PID Controllers for an Arbitrary LTI Plant with Delay 128
3.8 Proofs of Lemmas 3.3, 3.4, and 3.5 135
3.8.1 Preliminary Results 135
3.8.2 Proof of Lemma 3.3 139
3.8.3 Proof of Lemma 3.4 141
3.8.4 Proof of Lemma 3.5 141
3.9 Proofs of Lemmas 3.7 and 3.9 144
3.9.1 Proof of Lemma 3.7 144
3.9.2 Proof of Lemma 3.9 145
3.10 An Example of Computing the Stabilizing Set 148
3.11 Exercises 150
3.12 Notes and References 151

4 DIGITAL PID CONTROLLER DESIGN 153
4.1 Introduction 153
4.2 Preliminaries 155
4.3 Tchebyshev Representation and Root Clustering 156
4.3.1 Tchebyshev Representation of Real Polynomials 156
4.3.2 Interlacing Conditions for Root Clustering and Schur Stability 158
4.3.3 Tchebyshev Representation of Rational Functions 159
4.4 Root Counting Formulas 160
4.4.1 Phase Unwrapping and Root Distribution 160
4.4.2 Root Counting and Tchebyshev Representation 161
4.5 Digital PI, PD, and PID Controllers 163
4.6 Computation of the Stabilizing Set 165
4.6.1 Constant Gain Stabilization 165
4.6.2 Stabilization with PI Controllers 168
4.6.3 Stabilization with PD Controllers 169
4.7 Stabilization with PID Controllers 170
4.7.1 Maximally Deadbeat Control 173
4.7.2 Maximal Delay Tolerance Design 175
4.8 Exercises 179
4.9 Notes and References 179

5 FIRST ORDER CONTROLLERS FOR LTI SYSTEMS 181
5.1 Root Invariant Regions 181
5.2 An Example 185
5.3 Robust Stabilization by First Order Controllers 189
5.4 H∞ Design with First Order Controllers 190
5.5 First Order Discrete-Time Controllers 195
5.5.1 Computation of Root Distribution Invariant Regions 196
5.5.2 Delay Tolerance 201
5.6 Exercises 205
5.7 Notes and References 206

6 CONTROLLER SYNTHESIS FREE OF ANALYTICAL MODELS 207
6.1 Introduction 208
6.2 Mathematical Preliminaries 210
6.3 Phase, Signature, Poles, Zeros, and Bode Plots 215
6.4 PID Synthesis for Delay Free Continuous-Time Systems 218
6.5 PID Synthesis for Systems with Delay 222
6.6 PID Synthesis for Performance 224
6.7 An Illustrative Example: PID Synthesis 227
6.8 Model Free Synthesis for First Order Controllers 232
6.9 Model Free Synthesis of First Order Controllers for Performance 237
6.10 Data Based Design vs. Model Based Design 240
6.11 Data-Robust Design via Interval Linear Programming 243
6.11.1 Data Robust PID Design 244
6.12 Computer-Aided Design 251
6.13 Exercises 255
6.14 Notes and References 257

7 DATA DRIVEN SYNTHESIS OF THREE TERM DIGITAL CONTROLLERS 259
7.1 Introduction 259
7.2 Notation and Preliminaries 260
7.3 PID Controllers for Discrete-Time Systems 262
7.4 Data Based Design: Impulse Response Data 270
7.4.1 Example: Stabilizing Set from Impulse Response 273
7.4.2 Sets Satisfying Performance Requirements 276
7.5 First Order Controllers for Discrete-Time Systems 278
7.6 Computer-Aided Design 282
7.7 Exercises 288
7.8 Notes and References 289

II ROBUST PARAMETRIC CONTROL 291

8 STABILITY THEORY FOR POLYNOMIALS 293
8.1 Introduction 293
8.2 The Boundary Crossing Theorem 294
8.2.1 Zero Exclusion Principle 301
8.3 The Hermite-Biehler Theorem 302
8.3.1 Hurwitz Stability 302
8.3.2 Hurwitz Stability for Complex Polynomials 310
8.3.3 Schur Stability 312
8.3.4 General Stability Regions 319
8.4 Schur Stability Test 319
8.5 Hurwitz Stability Test 322
8.5.1 Root Counting and the Routh Table 326
8.5.2 Complex Polynomials 327
8.6 Exercises 328
8.7 Notes and References 332

9 STABILITY OF A LINE SEGMENT 333
9.1 Introduction 333
9.2 Bounded Phase Conditions 334
9.3 Segment Lemma 341
9.3.1 Hurwitz Case 341
9.4 Schur Segment Lemma via Tchebyshev Representation 345
9.5 Some Fundamental Phase Relations 348
9.5.1 Phase Properties of Hurwitz Polynomials 348
9.5.2 Phase Relations for a Segment 356
9.6 Convex Directions 359
9.7 The Vertex Lemma 369
9.8 Exercises 374
9.9 Notes and References 378

10 STABILITY MARGIN COMPUTATION 379
10.1 Introduction 379
10.2 The Parametric Stability Margin 380
10.2.1 The Stability Ball in Parameter Space 380
10.2.2 The Image Set Approach 382
10.3 Stability Margin Computation 384
10.3.1 ℓ2 Stability Margin 387
10.3.2 Discontinuity of the Stability Margin 391
10.3.3 ℓ2 Stability Margin for Time-Delay Systems 392
10.3.4 ℓ∞ and ℓ1 Stability Margins 395
10.4 The Mapping Theorem 396
10.4.1 Robust Stability via the Mapping Theorem 399
10.4.2 Refinement of the Convex Hull Approximation 402
10.5 Stability Margins of Multilinear Interval Systems 405
10.5.1 Examples 407
10.6 Robust Stability of Interval Matrices 416
10.6.1 Unity Rank Perturbation Structure 416
10.6.2 Interval Matrix Stability via the Mapping Theorem 417
10.6.3 Numerical Examples 418
10.7 Robustness Using a Lyapunov Approach 423
10.7.1 Robustification Procedure 427
10.8 Exercises 432
10.9 Notes and References 441

11 STABILITY OF A POLYTOPE 443
11.1 Introduction 443
11.2 Stability of Polytopic Families 444
11.2.1 Exposed Edges and Vertices 445
11.2.2 Bounded Phase Conditions for Checking Robust Stability of Polytopes 448
11.2.3 Extremal Properties of Edges and Vertices 452
11.3 The Edge Theorem 455
11.3.1 Edge Theorem 456
11.3.2 Examples 464
11.4 Stability of Interval Polynomials 470
11.4.1 Kharitonov's Theorem for Real Polynomials 470
11.4.2 Kharitonov's Theorem for Complex Polynomials 478
11.4.3 Interlacing and Image Set 481
11.4.4 Image Set Based Proof of Kharitonov's Theorem 484
11.4.5 Image Set Edge Generators and Exposed Edges 485
11.4.6 Extremal Properties of the Kharitonov Polynomials 487
11.4.7 Robust State Feedback Stabilization 492
11.5 Stability of Interval Systems 498
11.5.1 Problem Formulation and Notation 500
11.5.2 The Generalized Kharitonov Theorem 504
11.5.3 Comparison with the Edge Theorem 514
11.5.4 Examples 515
11.5.5 Image Set Interpretation 521
11.6 Polynomic Interval Families 522
11.6.1 Robust Positivity 524
11.6.2 Robust Stability 528
11.6.3 Application to Controller Synthesis 532
11.7 Exercises 536
11.8 Notes and References 546

12 ROBUST CONTROL DESIGN 549
12.1 Introduction 549
12.2 Interval Control Systems 551
12.3 Frequency Domain Properties 553
12.4 Nyquist, Bode, and Nichols Envelopes 563
12.5 Extremal Stability Margins 573
12.5.1 Guaranteed Gain and Phase Margins 574
12.5.2 Worst Case Parametric Stability Margin 574
12.6 Robust Parametric Classical Design 577
12.6.1 Guaranteed Classical Design 577
12.6.2 Optimal Controller Parameter Selection 588
12.7 Robustness Under Mixed Perturbations 591
12.7.1 Small Gain Theorem 592
12.7.2 Small Gain Theorem for Interval Systems 593
12.8 Robust Small Gain Theorem 602
12.9 Robust Performance 606
12.10 The Absolute Stability Problem 609
12.11 Characterization of the SPR Property 615
12.11.1 SPR Conditions for Interval Systems 617
12.12 The Robust Absolute Stability Problem 625
12.13 Exercises 632
12.14 Notes and References 637

III OPTIMAL AND ROBUST CONTROL 639

13 THE LINEAR QUADRATIC REGULATOR 641
13.1 An Optimal Control Problem 641
13.1.1 Principle of Optimality 642
13.1.2 Hamilton-Jacobi-Bellman Equation 642
13.2 The Finite-Time LQR Problem 645
13.2.1 Solution of the Matrix Riccati Differential Equation 647
13.2.2 Cross Product Terms 647
13.3 The Infinite Horizon LQR Problem 648
13.3.1 General Conditions for Optimality 648
13.3.2 The Infinite Horizon LQR Problem 650
13.4 Solution of the Algebraic Riccati Equation 651
13.5 The LQR as an Output Zeroing Problem 658
13.6 Return Difference Relations 660
13.7 Guaranteed Stability Margins for the LQR 661
13.7.1 Gain Margin 663
13.7.2 Phase Margin 663
13.7.3 Single Input Case 664
13.8 Eigenvalues of the Optimal Closed Loop System 665
13.8.1 Closed-Loop Spectrum 665
13.9 Optimal Dynamic Compensators 667
13.9.1 Dual Compensators 670
13.10 Servomechanisms and Regulators 672
13.10.1 Notation and Problem Formulation 672
13.10.2 Reference and Disturbance Signal Classes 673
13.10.3 Solution of the Servomechanism Problem 673
13.11 Exercises 680
13.12 Notes and References 687

14 SISO H∞ AND l1 OPTIMAL CONTROL 689
14.1 Introduction 689
14.2 The Small Gain Theorem 693
14.3 L-Stability and Robustness via the Small Gain Theorem 704
14.4 YJBK Parametrization of All Stabilizing Compensators (Scalar Case) 709
14.5 Control Problems in the H∞ Framework 716
14.6 H∞ Optimal Control: SISO Case 725
14.6.1 Dual Spaces 729
14.6.2 Inner Product Spaces 733
14.6.3 Orthogonality and Alignment in Noninner Product Spaces 736
14.6.4 The All-Pass Property of H∞ Optimal Controllers 737
14.6.5 The Single-Input Single-Output Solution 742
14.7 l1 Optimal Control: SISO Case 747
14.8 Exercises 755
14.9 Notes and References 757

15 H∞ OPTIMAL MULTIVARIABLE CONTROL 759
15.1 H∞ Optimal Control Using Hankel Theory 759
15.1.1 H∞ and Hankel Operators 759
15.1.2 State Space Computations of the Hankel Norm 763
15.1.3 State Space Computation of an All-Pass Extension 769
15.1.4 H∞ Optimal Control Based on the YJBK Parametrization and Hankel Approximation Theory 771
15.1.5 LQ Return Difference Equality 776
15.1.6 State Space Formulas for Coprime Factorizations 780
15.2 The State Space Solution of H∞ Optimal Control 784
15.2.1 The H∞ Solution 784
15.2.2 The H2 Solution 795
15.3 Exercises 804
15.4 Notes and References 806

A SIGNAL SPACES 807
A.1 Vector Spaces and Norms 807
A.2 Metric Spaces 819
A.3 Equivalent Norms and Convergence 825
A.4 Relations between Normed Spaces 828
A.5 Notes and References 832

B NORMS FOR LINEAR SYSTEMS 833
B.1 Induced Norms for Linear Maps 833
B.2 Properties of Fourier and Laplace Transforms 844
B.2.1 Fourier Transforms 846
B.2.2 Laplace Transforms 848
B.3 Lp/lp Norms of Convolutions of Signals 849
B.3.1 L1 Theory 849
B.3.2 Lp Theory 850
B.4 Induced Norms of Convolution Maps 852
B.5 Notes and References 865

IV EPILOGUE 867

ROBUSTNESS AND FRAGILITY 869
Feedback, Robustness, and Fragility 869
Examples 871
Discussion 883
Notes and References 885

REFERENCES 887

Index 905
PREFACE
This book describes three major areas of Control Engineering Theory.

In Part I we develop results directed at the design of PID and first order controllers for continuous and discrete time linear systems, possibly containing delays. This class of problems is crucially important in applications. The main feature of our results is the computation of complete sets of controllers achieving stability and performance. They are developed for model based as well as measurement based approaches. In the latter case, controller synthesis is based on measured responses only and no identification is required. The results of Part I constitute a modernized version of Classical Control Theory appropriate to the computer-aided design environment of the 21st century.

In Part II we deal with the Robust Stability and Performance of systems under parametric as well as unstructured uncertainty. Several elegant and sharp results, such as Kharitonov's Theorem and its extensions, the Edge Theorem, and the Mapping Theorem, are described. The main thrust of these results is to reduce the verification of stability and performance over the entire uncertainty set to certain extremal test sets, which are points or lines. These results are useful to engineers as aids to robustness analysis and synthesis of control systems.

Part III deals with Optimal Control of linear systems. We develop the standard theories of the Linear Quadratic Regulator (LQR), H∞ and ℓ1 optimal control, and associated results. In the LQR chapter we include results on the servomechanism problem.

We have been using this material successfully in a second graduate level course in Control Systems for some time. It is our opinion that it gives a balanced coverage of elegant mathematical theory and useful engineering oriented results that can serve the needs of a diverse group of students from Electrical, Mechanical, Chemical, Aerospace, and Civil Engineering as well as Computer Science and Mathematics.
It is possible to cover the entire book in a 14-week semester with a judicious choice of reading assignments.

Many of the results described in the book were obtained in collaboration with our graduate students, and it is a pleasure to acknowledge the many contributions of P.M.G. Ferreira, Herve Chapellat, Ming-Tzu Ho, Guillermo J. Silva, Hao Xu, Sandipan Mitra, and Richard Tantaris.

Part I contains material published in the earlier monograph PID Controllers for Time-Delay Systems by Guillermo J. Silva, A. Datta, and S.P. Bhattacharyya, Birkhäuser, 2005. Much of the material of Part II appeared in the earlier book Robust Control: The Parametric Approach by S.P. Bhattacharyya, H. Chapellat, and L.H. Keel, Prentice Hall, 1995.

A.D. would like to thank Professor M. G. Safonov of the University of Southern California for teaching him the basics of H∞ control theory almost two decades ago. Indeed, a lot of the material in Part III of this book is based on a Special Topics course taught by Professor Safonov at USC in the Spring of 1990. The authors would also like to thank Dr. Nripendra Sarkar, Dr. Ranadip Pal, Ms. Rouella Mendonca, and Mr. Ritwik Layek for assistance with LaTeX and figures on several occasions.

A book of this size and scope inevitably has errors, and we welcome corrective feedback from readers. We also apologize in advance for any omissions or inaccuracies in referencing and will try to compensate for this in future editions.

S. P. Bhattacharyya
A. Datta
L. H. Keel

June 23, 2008
College Station, Texas, USA
Part I
THREE TERM CONTROLLERS

In this part we deal with the analysis, synthesis, and design of the important special class of controllers known as three term controllers. Specifically, we cover the design of Proportional-Integral-Derivative (PID) controllers and first order controllers. Each of these controllers has three adjustable parameters, is widely used in the control industry across several technologies and engineering disciplines, and offers the possibility of using 2-D and 3-D graphics as design aids.

In Chapter 1, we briefly review some classical methods of designing PID controllers. This is followed by Chapter 2, which is devoted to computing the complete PID stabilizing set for a linear time invariant (LTI) continuous time plant without time delays, using recent results on root counting. We illustrate how this set is used in computer-aided design to satisfy multiple design specifications. In Chapter 3, the above results are extended to LTI plants cascaded with a delay; these are especially important in the process industries. In Chapter 4, we develop corresponding results for digital PID controllers for discrete time plants, again determining complete sets of controllers satisfying several specifications. In Chapter 5, we cover the design of first order controllers for continuous time and discrete time LTI plants. The stabilizing and performance attaining sets can be displayed using 2-D and 3-D graphics.

Chapters 6 and 7 focus on direct data driven synthesis of controllers. It is shown that complete sets of stabilizing PID and first order controllers for an LTI system can be calculated directly from the frequency response or impulse response data of the plant, without producing an identified analytical model.

The design methods presented in this part have been implemented in LabVIEW and MATLAB. The MATLAB toolbox is in the public domain and can be downloaded from http://procit.chungbuk.ac.kr.
1 PID CONTROLLERS: AN OVERVIEW OF CLASSICAL THEORY
In this chapter we give a quick overview of control theory, explaining why integral feedback control works so well, describing PID (Proportional - Integral - Derivative) controllers, and summarizing some of the classical techniques for PID controller design. This background will also serve to motivate recent breakthroughs on PID control, presented in the subsequent chapters.
1.1 Introduction to Control
Control theory and control engineering deal with dynamic systems such as aircraft, spacecraft, ships, trains, and automobiles, chemical and industrial processes such as distillation columns and rolling mills, electrical systems such as motors, generators, and power systems, and machines such as numerically controlled lathes and robots. In each case the setting of the control problem is represented by the following elements:
1. There are certain dependent variables, called outputs, to be controlled, which must be made to behave in a prescribed way. For instance it may be necessary to assign the temperature and pressure at various points in a process, or the position and velocity of a vehicle, or the voltage and frequency in a power system, to given desired fixed values, despite uncontrolled and unknown variations at other points in the system.
2. Certain independent variables, called inputs, such as voltage applied to the motor terminals, or valve position, are available to regulate and control the behavior of the system. Other dependent variables, such as position, velocity, or temperature, are accessible as dynamic measurements on the system.
3. There are unknown and unpredictable disturbances impacting the system. These could be, for example, the fluctuations of load in a power system, disturbances such as wind gusts acting on a vehicle, external weather conditions acting on an air conditioning plant, or the fluctuating load torque on an elevator motor, as passengers enter and exit.
4. The equations describing the plant dynamics, and the parameters contained in these equations, are not known at all or at best known imprecisely. This uncertainty can arise even when the physical laws and equations governing a process are known well, for instance, because these equations were obtained by linearizing a nonlinear system about an operating point. As the operating point changes so do the system parameters.
These considerations suggest the following general representation of the plant or system to be controlled. In Figure 1.1 the inputs or outputs shown could actually be representing a vector of signals. In such cases the plant is said to be a multivariable plant as opposed to the case where the signals are scalar, in which case the plant is said to be a scalar or monovariable plant.
Figure 1.1 A general plant.
Control is exercised by feedback, which means that the corrective control input to the plant is generated by a device that is driven by the available measurements. Thus, the controlled system can be represented by the feedback or closed-loop system shown in Figure 1.2. The control design problem is to determine the characteristics of the controller so that the controlled outputs can be
1. Set to prescribed values called references;
2. Maintained at the reference values despite the unknown disturbances;
3. Conditions (1) and (2) are met despite the inherent uncertainties and changes in the plant dynamic characteristics.
Figure 1.2 A feedback control system.
The first condition above is called tracking, the second, disturbance rejection, and the third, robustness of the system. The simultaneous satisfaction of (1), (2), and (3) is called robust tracking and disturbance rejection and control systems designed to achieve this are called robust servomechanisms. In the next section we discuss how integral and PID control are useful in the design of robust servomechanisms.
1.2 The Magic of Integral Control
Integral control is used almost universally in the control industry to design robust servomechanisms. Integral action is most easily implemented by computer control. It turns out that hydraulic, pneumatic, electronic, and mechanical integrators are also commonly used elements in control systems. In this section we explain how integral control works in general to achieve robust tracking and disturbance rejection. Let us first consider an integrator as shown in Figure 1.3. The input-output relationship is

y(t) = K ∫_0^t u(τ) dτ + y(0)   (1.1)

or, in differential form,

dy(t)/dt = K u(t)   (1.2)
Figure 1.3 An integrator.
where K is the integrator gain. Now suppose that the output y(t) is constant. It follows from (1.2) that

dy(t)/dt = 0 = K u(t) for all t > 0.   (1.3)
Equation (1.3) proves the following important facts about the operation of an integrator:
1. If the output of an integrator is constant over a segment of time, then the input must be identically zero over that same segment.
2. The output of an integrator changes as long as the input is nonzero.
The simple facts stated above suggest how an integrator can be used to solve the servomechanism problem. If a plant output y(t) is to track a constant reference value r, despite the presence of unknown constant disturbances, it is enough to
(a) attach an integrator to the plant and make the error e(t) = r − y(t) the input to the integrator;
(b) ensure that the closed-loop system is asymptotically stable so that under constant reference and disturbance inputs, all signals, including the integrator output, reach constant steady-state values.
This is depicted in the block diagram shown in Figure 1.4. If the system shown in Figure 1.4 is asymptotically stable, and the inputs r and d (disturbances) are constant, it follows that all signals in the closed loop will tend to constant values. In particular the integrator output v(t) tends to a constant value. Therefore, by the fundamental fact about the operation of an integrator established above, it follows that the integrator input tends to zero. Since we have arranged that this input is the tracking error, it follows that e(t) = r − y(t) tends to zero and hence y(t) tracks r as t → ∞. We emphasize that the steady-state tracking property established above is very robust. It holds as long as the closed loop is asymptotically stable and is
Figure 1.4 Servomechanism.
(1) independent of the particular values of the constant disturbances or references, (2) independent of the initial conditions of the plant and controller, and (3) independent of whether the plant and controller are linear or nonlinear. Thus, the tracking problem is reduced to that of guaranteeing closed-loop stability. In many practical systems stability of the closed-loop system can even be ensured without detailed and exact knowledge of the plant characteristics and parameters; this is known as robust stability. We next discuss how several plant outputs y1, y2, ..., ym can be pinned down to prescribed but arbitrary constant reference values r1, r2, ..., rm in the presence of unknown but constant disturbances d1, d2, ..., dq. The previous argument can be extended to this multivariable case by attaching m integrators to the plant and driving each integrator with its corresponding error input ei = ri − yi, i = 1, ..., m. This is shown in the configuration in Figure 1.5. Once again it follows that as long as the closed-loop system is stable, all signals in the system must tend to constant values and integral action forces the ei(t), i = 1, ..., m to tend to zero asymptotically, regardless of the actual values of the disturbances dj, j = 1, ..., q or the values of the ri, i = 1, ..., m. The existence of steady-state inputs u1, u2, ..., ur that make yi = ri, i = 1, ..., m for arbitrary ri, i = 1, ..., m requires that the plant equations relating yi, i = 1, ..., m to uj, j = 1, ..., r be invertible for constant inputs. In the case of linear time-invariant systems this is equivalent to the requirement that the corresponding transfer matrix have rank equal to m at s = 0. Sometimes this is restated as two conditions: (1) r ≥ m, that is, at least as many control inputs as outputs to be controlled, and (2) G(s) has no transmission zero at s = 0.
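The robustness of integral action is easy to check numerically. The sketch below assumes an illustrative first-order plant, ydot = -a*y + u + d, under pure integral control vdot = K*(r - y) with u = v; the plant, gain, and disturbance values are placeholder assumptions, not from the text.

```python
# Integral control of an assumed first-order plant: ydot = -a*y + u + d,
# with the integrator vdot = K*e driven by the tracking error e = r - y.
# All numerical values are illustrative assumptions.

def simulate(K=2.0, a=1.0, r=5.0, d=0.0, dt=1e-3, t_end=20.0):
    y = 0.0          # plant output
    v = 0.0          # integrator output, used directly as the control u
    for _ in range(int(t_end / dt)):
        e = r - y                     # tracking error, input to the integrator
        v += K * e * dt               # forward-Euler integrator update
        y += (-a * y + v + d) * dt    # plant with constant disturbance d
    return y

# The steady-state output equals r for any constant disturbance.
for d in (-3.0, 0.0, 7.0):
    print(round(simulate(d=d), 3))   # → 5.0 each time
```

The printed output is the same for every disturbance value, exactly as the argument above predicts: the integrator keeps pushing until its input, the tracking error, is zero.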
The architecture of the block diagram of Figure 1.5 is easily modified to handle servomechanism problems for more general classes of reference and disturbance signals such as ramps or sinusoids of specified frequency. The only modification required is that the integrators be replaced by
Figure 1.5 Multivariable servomechanism.
corresponding signal generators of these external signals. See the treatment of this general servo problem in Chapter 13. In general, the addition of an integrator to the plant tends to make the system less stable. This is because the integrator is an inherently unstable device; for instance, its response to a step input, a bounded signal, is a ramp, an unbounded signal. Therefore, the problem of stabilizing the closed loop becomes a critical issue even when the plant is stable to begin with. Since integral action and thus the attainment of zero steady-state error is independent of the particular value of the integrator gain K, we can see that this gain can be used to try to stabilize the system. This single degree of freedom is sometimes insufficient for attaining stability and an acceptable transient response, and additional gains are introduced as explained in the next section. This leads naturally to the PID controller structure commonly used in industry.
1.3 PID Controllers
In the last section we saw that when an integrator is part of an asymptotically stable system and constant inputs are applied to the system, the integrator input is forced to become zero. This simple and powerful principle is the basis for the design of linear, nonlinear, single-input single-output, and multivariable servomechanisms. All we have to do is (1) attach as many integrators as outputs to be regulated, (2) drive the integrators with the tracking errors required to be zeroed, and (3) stabilize the closed-loop system by using any adjustable parameters. As argued in the last section the input zeroing property is independent of the gain cascaded to the integrator. Therefore, this gain can be freely used to attempt to stabilize the closed-loop system and achieve performance specifications such as a good transient response and robust stability. Additional free parameters for stabilization can be obtained, without destroying the input zeroing property, by adding parallel branches to the controller, processing, in addition to the integral of the error, the error itself and its derivative, when it can be obtained. This leads to the PID controller structure shown in Figure 1.6.
Figure 1.6 PID controller.
As long as the closed loop is stable it is clear that the input to the integrator will be driven to zero independent of the values of the gains. Thus, the function of the gains kp, ki, and kd is to stabilize the closed-loop system if possible, to adjust the transient response of the system, and to robustify the stability of the system. In general the derivative can be computed or obtained if the error is varying slowly. Since the response of the differentiator to high-frequency inputs is much higher than its response to slowly varying signals (see Figure 1.7), the derivative term is usually omitted if the error signal is corrupted by high-frequency noise. In such cases the derivative gain kd is set to zero, or equivalently the differentiator is switched off, and the controller is a proportional-integral or PI controller. Such controllers are most common in industry. In subsequent chapters of the book, we present solutions to the problem of stabilization of a linear time-invariant plant by a PID controller. Both delay-free systems and systems with time delay are considered. These solutions uncover the entire set of stabilizing controllers in a computationally tractable way.

Figure 1.7 Response of derivative to signal and noise.

In the rest of this introductory chapter we briefly discuss the classical techniques for PID controller design. Many of them are based on empirical observations.
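The parallel PID structure of Figure 1.6 translates directly into a discrete-time update law. The following is a minimal sketch, not an industrial implementation; the gains and sampling period are placeholder values.

```python
# Discrete-time realization of the parallel PID law of Figure 1.6:
# u = kp*e + ki*∫e dt + kd*de/dt. Gains and dt are illustrative assumptions.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0     # running integral of the error
        self.prev_e = None      # previous error sample, for the derivative

    def update(self, e):
        self.integral += e * self.dt
        # On the first sample there is no previous error, so take deriv = 0.
        deriv = 0.0 if self.prev_e is None else (e - self.prev_e) / self.dt
        self.prev_e = e
        return self.kp * e + self.ki * self.integral + self.kd * deriv

pid = PID(kp=2.0, ki=1.0, kd=0.5, dt=0.1)
print(pid.update(1.0))   # → 2.1 (= 2.0*1 + 1.0*0.1 + 0)
```

Setting kd = 0 in this sketch gives the PI controller mentioned above; a practical version would also filter the derivative term as discussed in the next section.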
1.4 Classical PID Controller Design

1.4.1 The Ziegler-Nichols Step Response Method
The PID controller we are concerned with is implemented as follows:

C(s) = kp + ki/s + kd s   (1.4)

where kp is the proportional gain, ki is the integral gain, and kd is the derivative gain. The derivative term is often replaced by

kd s / (1 + Td s)

where Td is a small positive value that is usually fixed. This circumvents the problem of pure differentiation when the error signals are contaminated by noise. The Ziegler-Nichols step response method is an experimental open-loop tuning method and is applicable to open-loop stable plants. This method first characterizes the plant by two parameters A and L obtained from its step response. A and L can be determined graphically from a measurement of the step response of the plant as illustrated in Figure 1.8. First, the point on the step response curve with the maximum slope is determined and the tangent is drawn. The intersection of the tangent with the vertical axis gives A, while the intersection of the tangent with the horizontal axis gives L.

Figure 1.8 Graphical determination of parameters A and L.

Once A and L are determined, the PID controller parameters are then given in terms of A and L by the following formulas:

kp = 1.2/A,  ki = 0.6/(AL),  kd = 0.6L/A.

These formulas for the controller parameters were selected to obtain an amplitude decay ratio of 0.25, which means that the first overshoot decays to 1/4th of its original value after one oscillation. Extensive experimentation showed that this criterion gives a small settling time.
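The step-response rules reduce to one line per gain. A small helper, assuming A and L have already been read off the tangent construction of Figure 1.8 (the numerical values in the usage line are invented for illustration):

```python
# Ziegler-Nichols step response tuning: PID gains from the graphical
# parameters A and L of Figure 1.8.

def zn_step(A, L):
    return {"kp": 1.2 / A, "ki": 0.6 / (A * L), "kd": 0.6 * L / A}

print(zn_step(A=0.5, L=2.0))   # → {'kp': 2.4, 'ki': 0.6, 'kd': 2.4}
```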
1.4.2 The Ziegler-Nichols Frequency Response Method
The Ziegler-Nichols frequency response method is a closed-loop tuning method. This method first determines the point where the Nyquist curve of the plant G(s) intersects the negative real axis. It can be obtained experimentally in the following way: Turn the integral and differential actions off and set the controller to be in the proportional mode only and close the loop as shown in Figure 1.9. Slowly increase the proportional gain kp until a periodic oscillation in the output is observed. This critical value of kp is called the ultimate gain (ku ). The resulting period of oscillation is referred to as the ultimate period (Tu ). Based on ku and Tu , the Ziegler-Nichols frequency response method
gives the following simple formulas for setting the PID controller parameters:

kp = 0.6 ku,  ki = 1.2 ku/Tu,  kd = 0.075 ku Tu.   (1.5)
Figure 1.9 The closed-loop system with the proportional controller.
This method can be interpreted in terms of the Nyquist plot. Using PID control it is possible to move a given point on the Nyquist curve to an arbitrary position in the complex plane. Now, the first step in the frequency response method is to determine the point

(−1/ku, 0)

where the Nyquist curve of the open-loop transfer function intersects the negative real axis. We will study how this point is changed by the PID controller. Using (1.5) in (1.4), the frequency response of the controller at the ultimate frequency ωu is

C(jωu) = 0.6 ku − j(1.2 ku)/(Tu ωu) + j(0.075 ku Tu ωu) = 0.6 ku (1 + j0.4671)

since Tu ωu = 2π. From this we see that the controller gives a phase advance of 25 degrees at the ultimate frequency. The loop gain is then

G_loop(jωu) = G(jωu)C(jωu) = −0.6(1 + j0.4671) = −0.6 − j0.28.

Thus, the point (−1/ku, 0) is moved to the point (−0.6, −0.28). The distance from this point to the critical point −1 + j0 is almost 0.5. This means that the frequency response method gives a sensitivity greater than 2.
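This calculation is easy to verify numerically for any ku and Tu; the values below are illustrative assumptions.

```python
# Numeric check of the Nyquist-point calculation: with the Ziegler-Nichols
# gains of (1.5), C(j*wu) has about 25 degrees of phase advance and the
# point (-1/ku, 0) moves to roughly (-0.6, -0.28), regardless of ku and Tu.
import cmath
import math

ku, Tu = 4.0, 2.0                 # illustrative ultimate gain and period
wu = 2 * math.pi / Tu             # ultimate frequency, so Tu*wu = 2*pi
kp, ki, kd = 0.6 * ku, 1.2 * ku / Tu, 0.075 * ku * Tu

C = kp + ki / (1j * wu) + kd * 1j * wu   # controller frequency response (1.4)
G = -1.0 / ku                            # plant value at the axis crossing
loop = G * C

print(round(loop.real, 2), round(loop.imag, 2))   # → -0.6 -0.28
print(round(math.degrees(cmath.phase(C)), 1))     # → 25.0
```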
The procedure described above for measuring the ultimate gain and ultimate period requires that the closed-loop system be operated close to instability. To avoid damaging the physical system, this procedure needs to be executed carefully. An alternative method that does not bring the system to the verge of instability was proposed by Åström and Hägglund, using a relay to generate a sustained oscillation for measuring the ultimate gain and ultimate period. This is done by using the relay feedback configuration shown in Figure 1.10. In Figure 1.10, the relay is adjusted to induce a self-sustaining oscillation in the loop.
Figure 1.10 Block diagram of relay feedback.
This relay feedback can be used to determine the ultimate gain and ultimate period. The relay block is a nonlinear element that can be represented by a describing function. This describing function is obtained by applying a sinusoidal signal a sin(ωt) at the input of the nonlinearity and calculating the ratio of the Fourier coefficient of the first harmonic at the output to a. This function can be thought of as an equivalent gain of the nonlinear element. For the case of the relay its describing function is given by

N(a) = 4d/(aπ)

where a is the amplitude of the sinusoidal input signal and d is the relay amplitude. The conditions for the presence of limit cycle oscillations can be derived by investigating the propagation of a sinusoidal signal around the loop. Since the plant G(s) acts as a low pass filter, the higher harmonics produced by the nonlinear relay will be attenuated at the output of the plant. Hence, the condition for oscillation is that the fundamental sine waveform comes back with the same amplitude and phase after traversing the loop. This means that for sustained oscillations at a frequency of ω, we must have

G(jω)N(a) = −1.   (1.6)

This equation can be solved by plotting the Nyquist plot of G(s) and the line −1/N(a). As shown in Figure 1.11, the plot of −1/N(a) is the negative real axis,
so the solution to (1.6) is given by the two conditions:

|G(jωu)| = aπ/(4d) ≜ 1/ku, and arg[G(jωu)] = −π.
Figure 1.11 Nyquist plots of the plant G(jω) and the describing function −1/N(a).
The ultimate gain and ultimate period can now be determined by measuring the amplitude and period of the oscillations. This relay feedback technique is widely used in automatic PID tuning.

REMARK 1.1 Both Ziegler-Nichols tuning methods require very little knowledge of the plant, and simple formulas are given for the controller parameter settings. These formulas were obtained by extensive simulations of many stable and simple plants. The main design criterion of these methods is to obtain a quarter amplitude decay ratio for the load disturbance response. They have been criticized because little emphasis is given to measurement noise, sensitivity to process variations, and setpoint response. Even though these methods provide good rejection of load disturbances, the resulting closed-loop system can be poorly damped and can sometimes have poor stability margins.
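Putting the relay experiment and the tuning formulas together: from the measured oscillation amplitude a and period, the describing-function relation gives ku = 4d/(aπ), after which (1.5) yields the gains. The relay amplitude and measurements below are invented for illustration.

```python
# From a relay experiment to PID gains: ku = 4d/(a*pi) via the describing
# function, Tu = measured period, then the Ziegler-Nichols formulas (1.5).
# d, a, and the period used here are illustrative assumptions.
import math

def relay_estimates(d, a, period):
    ku = 4 * d / (a * math.pi)   # ultimate gain from N(a) at the limit cycle
    return ku, period            # (ku, Tu)

def zn_frequency_response(ku, Tu):
    return {"kp": 0.6 * ku, "ki": 1.2 * ku / Tu, "kd": 0.075 * ku * Tu}

ku, Tu = relay_estimates(d=1.0, a=0.5, period=4.0)
print(round(ku, 4))                      # → 2.5465  (= 8/pi)
print(zn_frequency_response(ku, Tu))
```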
1.4.3 PID Settings Using the Internal Model Controller Design Technique
The internal model controller (IMC) structure has become popular in process control applications. This structure, in which the controller includes an explicit model of the plant, is particularly appropriate for the design and implementation of controllers for open-loop stable systems. The fact that many of the plants encountered in process control happen to be open-loop stable possibly accounts for the popularity of IMC among practicing engineers. In this section, we consider the IMC configuration for a stable plant G(s) as shown in Figure 1.12. The IMC controller consists of a stable IMC parameter Q(s) and a model Ĝ(s) of the plant, which is usually referred to as the internal model. F(s) is the IMC filter chosen to enhance robustness with respect to the modelling error and to make the overall IMC parameter Q(s)F(s) proper. From Figure 1.12 the equivalent feedback controller C(s) is

C(s) = F(s)Q(s) / (1 − F(s)Q(s)Ĝ(s)).
Figure 1.12 The IMC configuration.
The IMC design objective considered in this section is to choose Q(s) to minimize the L2 norm of the tracking error r − y, that is, to achieve an H2 optimal control design. In general, complex models lead to complex IMC H2 optimal controllers. However, it has been shown that, for first-order plants with deadtime and a step command signal, the IMC H2-optimal design results in a controller with a PID structure. This will be clearly borne out by the following discussion. Assume that the plant to be controlled is a first-order model with deadtime:

G(s) = k e^{−Ls}/(1 + Ts).
The control objective is to minimize the L2 norm of the tracking error due to setpoint changes. Using Parseval's Theorem, this is equivalent to choosing Q(s) for which

min ‖[1 − Ĝ(s)Q(s)]R(s)‖_2

is achieved, where R(s) = 1/s is the Laplace transform of the unit step command. Approximating the deadtime with a first-order Padé approximation, we have

e^{−Ls} ≈ (1 − (L/2)s)/(1 + (L/2)s).

The resulting rational transfer function of the internal model Ĝ(s) is given by

Ĝ(s) = [k/(1 + Ts)] · [(1 − (L/2)s)/(1 + (L/2)s)].

Choosing Q(s) to minimize the H2 norm of [1 − Ĝ(s)Q(s)]R(s), we obtain

Q(s) = (1 + Ts)/k.

Since this Q(s) is improper, we choose

F(s) = 1/(1 + λs)

where λ > 0 is a small number. The equivalent feedback controller becomes

C(s) = F(s)Q(s)/(1 − F(s)Q(s)Ĝ(s)) = (1 + Ts)(1 + (L/2)s) / [ks(L + λ + (Lλ/2)s)] ≈ (1 + Ts)(1 + (L/2)s) / [ks(L + λ)].   (1.7)

From (1.7) we can extract the following parameters for a standard PID controller:

kp = (2T + L)/(2k(L + λ)),  ki = 1/(k(L + λ)),  kd = TL/(2k(L + λ)).
Since a first-order Padé approximation was used for the time delay, ensuring the robustness of the design to modeling errors is all the more important. This can be done by properly selecting the design variable λ to achieve an appropriate compromise between performance and robustness. Morari and Zafiriou [157] have proposed that a suitable choice for λ is λ > 0.2T and λ > 0.25L.
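The gain formulas extracted from (1.7) can be packaged as a small helper. The default choice of λ below takes the Morari-Zafiriou guideline at its boundary (λ = max(0.2T, 0.25L)) purely for convenience; that specific choice, and the numbers in the usage line, are assumptions for illustration.

```python
# IMC-derived PID settings for G(s) = k*e^{-Ls}/(1+Ts), from (1.7):
# kp = (2T+L)/(2k(L+lam)), ki = 1/(k(L+lam)), kd = TL/(2k(L+lam)).

def imc_pid(k, T, L, lam=None):
    if lam is None:
        # Guideline: lam > 0.2T and lam > 0.25L; taken at the boundary here.
        lam = max(0.2 * T, 0.25 * L)
    den = k * (L + lam)
    return {"kp": (2 * T + L) / (2 * den),
            "ki": 1.0 / den,
            "kd": T * L / (2 * den)}

print(imc_pid(k=1.0, T=4.0, L=1.0))   # lam = 0.8, so kp = 9/3.6 = 2.5
```

Larger λ detunes the controller in favor of robustness; smaller λ pushes toward the nominal H2-optimal performance.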
REMARK 1.2 The IMC PID design procedure minimizes the L2 norm of the tracking error due to setpoint changes. Therefore, as expected, this design method gives good response to setpoint changes. However, for lag dominant plants the method gives poor load disturbance response because of the pole-zero cancellation inherent in the design methodology.
1.4.4 Dominant Pole Design: The Cohen-Coon Method
Dominant pole design attempts to position a few poles to achieve certain control performance specifications. The Cohen-Coon method is a dominant pole design method. This tuning method is based on the first-order plant model with deadtime:

G(s) = k e^{−Ls}/(1 + Ts).

The key feature of this tuning method is to attempt to locate three dominant poles, a pair of complex poles and one real pole, such that the amplitude decay ratio for the load disturbance response is 0.25 and the integrated error ∫_0^∞ |e(t)| dt is minimized. Thus, the Cohen-Coon method gives good load disturbance rejection. Based on analytical and numerical computation, Cohen and Coon gave the following PID controller parameters in terms of k, T, and L:

kp = 1.35(1 − 0.82b)/(a(1 − b)),
ki = 1.35(1 − 0.82b)(1 − 0.39b)/(aL(1 − b)(2.5 − 2b)),
kd = 1.35L(0.37 − 0.37b)/(a(1 − b))

where

a = kL/T,  b = L/(L + T).
Note that for small b, the controller parameters given by the above formulas are close to the parameters obtained by the Ziegler-Nichols step response method.
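The formulas transcribe directly into code; the usage line also illustrates the small-b observation above, since for b near zero kp approaches 1.35/a. The plant numbers are illustrative assumptions.

```python
# Cohen-Coon settings for G(s) = k*e^{-Ls}/(1+Ts), transcribed from the
# formulas above, with a = kL/T and b = L/(L+T).

def cohen_coon(k, T, L):
    a = k * L / T
    b = L / (L + T)
    kp = 1.35 * (1 - 0.82 * b) / (a * (1 - b))
    ki = 1.35 * (1 - 0.82 * b) * (1 - 0.39 * b) / (a * L * (1 - b) * (2.5 - 2 * b))
    kd = 1.35 * L * (0.37 - 0.37 * b) / (a * (1 - b))
    return kp, ki, kd

# Lag-dominant plant (L << T, so b is small): kp*a should be close to 1.35.
kp, ki, kd = cohen_coon(k=1.0, T=10.0, L=0.1)
print(round(kp * 0.01, 2))   # → 1.35  (a = kL/T = 0.01 here)
```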
1.4.5 New Tuning Approaches
The tuning methods described in the previous subsections are easy to use and require very little information about the plant to be controlled. However, since they do not capture all aspects of desirable PID performance, many other new approaches have been developed. These methods can be classified into three categories.
1.4.5.1 Time Domain Optimization Methods
The idea behind these methods is to choose the PID controller parameters to minimize an integral cost functional. Zhuang and Atherton [215] used an integral criterion with data from a relay experiment. The time-weighted system error integral criterion was chosen as

Jn(θ) = ∫_0^∞ t^n e^2(θ, t) dt
where θ is a vector containing the controller parameters and e(θ, t) represents the error signal. Experimentation showed that for n = 1, the controller obtained produced a step response of desirable form. This gave birth to the integral square time error (ISTE) criterion. Another contribution is due to Pessen [167], who used the integral absolute error (IAE) criterion:

J(θ) = ∫_0^∞ |e(θ, t)| dt.
In order to minimize the above integral cost functions, Parseval’s Theorem can be invoked to express the time functions in terms of their Laplace transforms. Definite integrals of the form encountered in this approach have been evaluated in terms of the coefficients of the numerator and denominator of the Laplace transforms (see [161]). Once the integration is carried out, the parameters of the PID controller are adjusted in such a way as to minimize the integral cost function. Recently Atherton and Majhi [9] proposed a modified form of the PID controller (see Figure 1.13). In this structure an internal proportional-derivative (PD) feedback is used to change the poles of the plant transfer function to more desirable locations and then a PI controller is used in the forward loop. The parameters of the controller are obtained by minimization of the ISTE criterion.
Figure 1.13 PI-PD feedback control structure.
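When closed-form evaluation of the integral is inconvenient, the ISTE cost J1 can also be estimated by simulating the closed loop and accumulating t·e². The sketch below assumes a first-order plant ydot = -y + u under PI control and a coarse grid of gains; the plant, grid, and step sizes are illustrative, not from the text.

```python
# Numerically evaluating the ISTE criterion J1 = ∫ t*e^2 dt for a PI
# controller on an assumed first-order plant ydot = -y + u.

def iste(kp, ki, dt=1e-3, t_end=10.0, r=1.0):
    y, integ, J, t = 0.0, 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = r - y
        integ += e * dt
        u = kp * e + ki * integ       # PI control law
        y += (-y + u) * dt            # forward-Euler plant update
        t += dt
        J += t * e * e * dt           # time-weighted squared error
    return J

# Crude grid search for the ISTE-minimizing gains on this plant.
best = min((iste(kp, ki), kp, ki)
           for kp in (0.5, 1.0, 2.0, 4.0)
           for ki in (0.5, 1.0, 2.0, 4.0))
print(best[1:])   # gains achieving the smallest J1 on this grid
```

The time weighting t penalizes error that persists late in the response, which is why the n = 1 criterion tends to produce well-damped step responses.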
1.4.5.2 Frequency Domain Shaping
These methods seek a set of controller parameters that gives a desired frequency response. Åström and Hägglund [8] proposed the idea of using a set of rules to achieve a desired phase margin specification. In the same spirit, Ho, Hang, and Zhou [102] developed a PID self-tuning method with specifications on the gain and phase margins. Another contribution, by Voda and Landau [199], presented a method to shape the compensated system frequency response.

1.4.5.3 Optimal Control Methods
This new trend has been motivated by the desire to incorporate several control system performance objectives such as reference tracking, disturbance rejection, and measurement noise rejection. Grimble and Johnson [88] incorporated all these objectives into an LQG optimal control problem. They proposed an algorithm to minimize an LQG cost function where the controller structure is fixed to a particular PID industrial form. In a similar fashion, Panagopoulos, Åström, and Hägglund [164] presented a method to design PID controllers that captures demands on load disturbance rejection, setpoint response, measurement noise, and model uncertainty. Good load disturbance rejection was obtained by minimization of the integral control error. Good setpoint response was obtained by using a structure with two degrees of freedom. Measurement noise was dealt with by filtering. Robustness was achieved by requiring a maximum sensitivity of less than a specified value.

1.4.5.4 Design for Multiple Specifications
Recent work based on finding the complete set of stabilizing PID controllers has opened up the possibility of finding sets of controllers achieving multiple specifications. This is described in later chapters of this book.
1.5 Integrator Windup
An important element of the control strategy discussed in Section 1.2 is the actuator, which applies the control signal u to the plant. However, all actuators have limitations that make them nonlinear elements. For instance, a valve cannot be more than fully opened or fully closed. During the regular operation of a control system, it can very well happen that the control variable reaches the actuator limits. When this situation arises, the feedback loop is broken and the system runs as an open loop because the actuator will remain at its limit independently of the process output. In this scenario, if the controller is of the PID type, the error will continue to be integrated. This in turn means that the integral term may become very large, which is commonly
referred to as windup. In order to return to a normal state, the error signal needs to have an opposite sign for a long period of time. As a consequence of all this, a system with a PID controller may give large transients when the actuator saturates. The phenomenon of windup has been known for a long time. It may occur in connection with large setpoint changes or it may be caused by large disturbances or equipment malfunction. Several techniques are available to avoid windup when using an integral term in the controller. We describe some of these techniques in this section.
1.5.1 Setpoint Limitation
The easiest way to avoid integrator windup is to introduce limiters on the setpoint variations so that the controller output will never reach the actuator bounds. However, this approach has several disadvantages: (a) it leads to conservative bounds; (b) it imposes limitations on the controller performance; and (c) it does not prevent windup caused by disturbances.
1.5.2 Back-Calculation and Tracking
This technique is illustrated in Figure 1.14. If we compare this figure to Figure 1.6, we see that the controller has an extra feedback path. This path is generated by measuring the actual actuator output u(t) and forming the error signal es (t) as the difference between the output of the controller v(t) and the signal u(t). This signal es (t) is fed to the input of the integrator through a gain 1/Tt .
Figure 1.14 Controller with antiwindup.
When the actuator is within its operating range, the signal es(t) is zero. Thus, it will not have any effect on the normal operation of the controller. When the actuator saturates, the signal es(t) is different from zero. The normal feedback path around the process is broken because the process input remains constant. However, there is a new feedback path around the integrator due to es(t) ≠ 0 and this prevents the integrator from winding up. The rate at which the controller output is reset is governed by the feedback gain 1/Tt. The parameter Tt can thus be interpreted as the time constant that determines how quickly the integral action is reset. In general, the smaller the value of Tt, the faster the integrator is reset. However, if the parameter Tt is chosen too small, spurious errors can cause saturation of the output, which accidentally resets the integrator. Åström and Hägglund [7] recommend choosing Tt larger than the derivative time kd/kp and smaller than the integral time kp/ki.
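One update step of the back-calculation scheme of Figure 1.14, reduced to a PI controller for brevity, can be sketched as follows. The saturation limits, gains, and Tt are assumed values for illustration.

```python
# One step of a PI controller with back-calculation antiwindup as in
# Figure 1.14: the saturation error es = u - v is fed back into the
# integrator through the gain 1/Tt. All numerical values are assumptions.

def pi_antiwindup_step(e, I, kp=2.0, ki=1.0, Tt=0.5, dt=0.01,
                       u_min=-1.0, u_max=1.0):
    v = kp * e + I                    # controller output before the actuator
    u = min(max(v, u_min), u_max)     # actuator saturation
    es = u - v                        # zero while the actuator is in range
    I += (ki * e + es / Tt) * dt      # integrator driven by ki*e + es/Tt
    return u, I

u, I = pi_antiwindup_step(e=5.0, I=0.0)
print(u)        # → 1.0 (actuator pinned at its upper limit)
print(I < 0.0)  # → True: the reset term pulls the integrator back
```

While the actuator is in range es = 0 and the loop behaves as an ordinary PI controller; in saturation the es/Tt term drains the integrator at a rate set by Tt, exactly as described above.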
1.5.3 Conditional Integration
Conditional integration is an alternative to the back-calculation technique. It simply consists of switching off the integral action when the control is far from the steady state. This means that the integral action is only used when certain conditions are fulfilled, otherwise the integral term is kept constant. We now consider a couple of these switching conditions. One simple approach is to switch off the integral action when the control error e(t) is large. Another one is to switch off the integral action when the actuator saturates. However, both approaches have a disadvantage: the controller may get stuck at a nonzero control error e(t) if the integral term has a large value at the time of switch off. Because of the previous disadvantage, a better approach is to switch off integration when the controller is saturated and the integrator update is such that it causes the control signal to become more saturated. For example, consider that the controller becomes saturated at its upper bound. Integration is then switched off if the control error is positive, but not if it is negative.
1.6 Exercises
1.1 Carry out a Ziegler-Nichols step response design for the plant

P(s) = K e^{−sL} / (1 + sT)

where K = 1, T = 1, L = 1. Find the gain and phase margins of the system.

1.2 Repeat Problem 1.1 with K = 1 and
(a) T = 1 and L ∈ [1, 10], (b) L = 1 and T ∈ [1, 10]. In each case, determine the gain and phase margins and their variations with respect to T and L.

1.3 Prove that any strictly proper first order plant with transfer function

P(s) = K / (1 + sT)

can be stabilized by the Proportional-Integral controller

C(s) = Kp + Ki/s.

(a) Find the stabilizing sets S+ and S− for T < 0 (unstable plant) and T > 0 (stable plant), respectively, in (Kp, Ki) space, and show that

S+ ∩ S− = ∅ and S+ ∪ S− = ℝ².

(b) Determine the subsets of S+ and S− for which the closed-loop characteristic roots are (i) real, (ii) complex. (c) Show that the steady state error to a ramp input can be made arbitrarily small.

1.4 Consider the second order plant

P(s) = K(s − z) / (s² + a1 s + a0) = K(s − z) / ((s − p1)(s − p2)),  K, z > 0
with the feedback controller C(s) = Kp. (a) Prove that stabilization by constant gain is possible if and only if −a1 < a0/z.

1.5 Consider the first order plant with delay

P(s) = K e^{−sL} / (1 + sT),  K, T, L > 0,

with the constant gain feedback controller C(s) = Kp. Use the first order Padé approximation

e^{−sL} = (1 − sL/2) / (1 + sL/2)   (1.12)

and (i) Prove that stabilization is possible if and only if

L < T;   (1.13)
(ii) Find the stabilizing range for Kp; (iii) Prove that the step response has undershoot; (iv) Prove that the minimum steady state error to a unit step input is

L / (2(T − L)).   (1.14)

1.6 Prove that the PID controller

C(s) = (Ki + Kp s + Kd s²) / (s(1 + K0 s))

can stabilize any strictly proper second order plant.

1.7 Consider the PID controller

C(s) = (Ki + Kp s + Kd s²) / s   (1.15)

and the plant with transfer function

P(s) = (b1 s + b0) / (s² + a1 s + a0).   (1.16)

Use the Routh-Hurwitz criterion to obtain an explicit description of the stabilizing set S. Prove that the subsets of S with constant Kp are described by linear inequalities.
1.7 Notes and References
The Ziegler-Nichols methods were first presented in [216]. The alternative method using relay feedback is described in [7]. The relay feedback technique in Section 1.4.2 and its applications to automatic PID tuning can be found in the works of Åström and Hägglund [7, 8]. For a better understanding of describing functions, the book by Khalil [135] is recommended. For a more detailed explanation of the IMC structure and its applications in process control, the reader is referred to [157]. The Cohen-Coon method can be found in [53]. A comprehensive survey of tuning methods for PID controllers and a description of antiwindup techniques can be found in [8]. Needless to say, there is an extensive literature covering all aspects of PID control. We have not attempted to be complete in citing this literature. Instead, we have tried to cite all relevant publications related to the new results given later in this part of the book. In concluding this chapter, we point out that in addition to the approaches discussed above, there are many other approaches for tuning PID controllers [8]. Despite this, for plants of order higher than two, there were no approaches that could be used to determine the set of all stabilizing PID gain values. The principal contribution of this part of the book to the PID literature is the development of a methodology that provides a complete answer to this long-standing open problem, for both delay-free LTI plants and plants with time delay. For the former class of plants, the results were first reported in [97]. In the chapters that follow, we give results for determining, in a computationally efficient way, the complete set of PID controllers that stabilize a given linear time-invariant plant and achieve prescribed levels of performance. These results apply to plants with and without time delay.
2 PID CONTROLLERS FOR DELAY-FREE LTI SYSTEMS
In this chapter, we solve the problem of stabilizing a given but arbitrary linear time-invariant (LTI) continuous-time system, with rational transfer function P(s), by a proportional-integral-derivative (PID) controller. The complete set of stabilizing PID controllers is determined in the PID gain space (kp, ki, kd). This crucial step allows for the attainment of multiple design specifications over the stabilizing set. It is shown that, for a fixed value of the proportional gain kp, the set is a union of convex regions in (ki, kd) space, characterized by sets of linear inequalities. The entire set is then found by sweeping kp over appropriate ranges. The development is based on root-counting and signature results for polynomials, which are developed in this chapter. A related result that deals with complex polynomials is also developed. Using this result, we give solutions to some performance attainment problems using PID control. These include requirements such as guaranteed gain and phase margins, and H∞ norm bounds.
2.1 Introduction
PID controllers are widely used in such diverse applications as process control, rolling mills, aerospace, motion control, pneumatic, hydraulic, electrical and mechanical systems, disc drives, and digital cameras. In most cases, their designs are carried out using ad hoc tuning rules. These rules have been developed over the years based primarily on empirical observations and industrial experience. In part, this state of affairs is due to the fact that the state feedback observer based theory of modern and postmodern control, including H2, H∞, µ and l1 optimal control, cannot be applied to PID control. Indeed, until the results to be described here appeared, it was not known how to even determine whether stabilization of a nominal LTI system was possible using PID controllers. Recently, a number of significant results on PID stabilization have been obtained. These results could assist the industrial practitioner to carry out computer-aided designs of PID controllers with guaranteed stability and performance. Given the widespread use of PID controllers in various industries, the impact of these results could be enormous. Consider the general feedback system shown in Figure 2.1.
Figure 2.1 Feedback control system: the controller C(s) and the plant G(s) in a unity negative feedback loop, with command r(t), plant input u(t), and output y(t).
Here r(t) is the command signal, y(t) is the output, G(s) is the plant to be controlled, and C(s) is the controller to be designed. The controller C(s) will be assumed to be of the PID type so that

C(s) = kp + ki/s + kd s   (2.1)

where kp, ki and kd are the proportional, integral and derivative gains, respectively. In many cases the pure derivative term is not allowed, particularly if the error is measured in a noisy environment. In such cases we may consider the PID controller with transfer function

C(s) = (kp s + ki + kd s²) / (s(1 + sT)),  T > 0   (2.2)

where T is usually fixed at a small positive value. For this chapter, the plant transfer function G(s) will be assumed to be rational so that

G(s) = N(s)/D(s)   (2.3)

where N(s), D(s) are polynomials in the Laplace variable s. With this plant description, the closed-loop characteristic polynomial becomes

δ(s, kp, ki, kd) = sD(s) + (ki + kd s²)N(s) + kp sN(s)   (2.4)

or, with D̂(s) = D(s)(1 + sT),

δ(s, kp, ki, kd) = sD̂(s) + (ki + kd s²)N(s) + kp sN(s)   (2.5)
depending on the particular type of C(s) being discussed. The problem of stabilization using a PID controller is to determine the values of kp , ki and
kd for which the closed-loop characteristic polynomial is Hurwitz, that is, has all its roots in the open left half plane. Since plants with a zero at the origin (N (0) = 0) cannot be stabilized by PID controllers we exclude such plants at the outset. Finding the complete set of stabilizing parameters is an important first step in searching for subsets attaining various design objectives. We will now develop an algorithm for computationally characterizing this set for a given plant with a rational transfer function.
2.2 Stabilizing Set
The set of controllers of a given structure that stabilizes the closed loop is of fundamental importance, since every design must belong to this set and any performance specifications that are imposed must be achieved over this set. Write

k := [kp, ki, kd]   (2.6)

and let

S^o := {k : δ(s, k) is Hurwitz}   (2.7)
denote the set of PID controllers that stabilize the closed loop for the given plant characterized by the transfer function (N(s), D(s)). It is emphasized that, due to the presence of integral action on the error, any controller in S^o already provides asymptotic tracking and disturbance rejection for step inputs. In general, additional design specifications on stability margins and transient response are also required, and subsets representing these must be sought within S^o. The three dimensional set S^o is simply described by (2.7) but not necessarily simple to calculate. For example, a naive application of the Routh-Hurwitz criterion to δ(s, k) will result in a description of S^o in terms of highly nonlinear and intractable inequalities.

Example 2.1 Consider the problem of choosing stabilizing PID gains for the plant G(s) = N(s)/D(s) where

D(s) = s⁵ + 8s⁴ + 32s³ + 46s² + 46s + 17
N(s) = s³ − 4s² + s + 2.

The closed-loop characteristic polynomial is

δ(s, kp, ki, kd) = sD(s) + (ki + kp s + kd s²)N(s)
= s⁶ + (kd + 8)s⁵ + (kp − 4kd + 32)s⁴ + (ki − 4kp + kd + 46)s³ + (−4ki + kp + 2kd + 46)s² + (ki + 2kp + 17)s + 2ki.
Using the Routh-Hurwitz criterion to determine the stabilizing values of kp, ki and kd, we see that the following inequalities must hold:

kd + 8 > 0

kp kd − 4kd² − ki + 12kp − kd + 210 > 0

ki kp kd − 4kp²kd + 16kp kd² − 6kd³ − ki² + 16ki kp + 63ki kd − 48kp² + 48kp kd − 263kd² + 428ki − 336kp − 683kd + 6852 > 0

−4ki²kp kd + 16ki kp²kd − 52ki kp kd² − 6kp³kd + 24kp²kd² − 6kp kd³ − 12kd⁴ + 4ki³ − 64ki²kp − 264ki²kd + 198ki kp² − 9ki kp kd + 1238ki kd² − 72kp³ − 213kp²kd + 957kp kd² − 1074kd³ − 1775ki² + 2127ki kp + 7688ki kd − 3924kp² + 3027kp kd − 11322kd² − 10746ki − 31338kp − 1836kd + 206919 > 0

−6ki³kp kd + 24ki²kp²kd − 84ki²kp kd² − 6ki kp³kd + 60ki kp²kd² − 102ki kp kd³ − 12kp⁴kd + 48kp³kd² − 12kp²kd³ − 24kp kd⁴ + 6ki⁴ − 96ki³kp − 390ki³kd + 294ki²kp² − 285ki²kp kd + 1476ki²kd² − 60ki kp³ + 969ki kp²kd − 1221ki kp kd² − 132ki kd³ − 144kp⁴ − 528kp³kd + 2322kp²kd² − 2250kp kd³ − 204kd⁴ − 2487ki³ + 273ki²kp − 2484ki²kd + 5808ki kp² + 10530ki kp kd + 34164ki kd² − 9072kp³ + 2433kp²kd − 6375kp kd² − 18258kd³ − 92961ki² + 79041ki kp + 184860ki kd − 129384kp² + 47787kp kd − 192474kd² − 549027ki − 118908kp − 31212kd + 3517623 > 0

2ki > 0   (2.8)

Clearly, the above inequalities are highly nonlinear and there is no straightforward method for obtaining a solution.
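Although the inequalities (2.8) are intractable symbolically, any single gain triple is easy to test numerically by assembling δ(s, k) and computing its roots. A sketch using numpy (the helper names are ours):

```python
import numpy as np

# Plant of Example 2.1 (coefficients in descending powers of s)
D = [1, 8, 32, 46, 46, 17]
N = [1, -4, 1, 2]

def delta_coeffs(kp, ki, kd):
    """Coefficients of delta(s) = s*D(s) + (ki + kp*s + kd*s^2)*N(s)."""
    sD = np.polymul(D, [1, 0])          # multiply D(s) by s
    pid = [kd, kp, ki]                  # kd*s^2 + kp*s + ki
    return np.polyadd(sD, np.polymul(pid, N))

def is_hurwitz(coeffs):
    """True iff all roots lie in the open left half plane."""
    return max(r.real for r in np.roots(coeffs)) < 0
```

Testing a candidate k this way is cheap; what the signature approach developed next provides is the complete set, not just a point test.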
In the sequel, we develop a novel and computationally efficient approach to determining S o . This is based on some root counting or signature formulas, which we develop next.
2.3 Signature Formulas
Let p(s) denote a polynomial of degree n with real coefficients and without zeros on the jω axis. Write

p(s) := p_even(s²) + s p_odd(s²)   (2.9)

where p_even(s²) = p0 + p2 s² + · · · and p_odd(s²) = p1 + p3 s² + · · ·,
so that

p(jω) = pr(ω) + j pi(ω)   (2.10)

where pr(ω), pi(ω) are polynomials in ω with real coefficients, with

pr(ω) = p_even(−ω²),   (2.11)
pi(ω) = ω p_odd(−ω²).   (2.12)

DEFINITION 2.1 The standard signum function sgn : ℝ → {−1, 0, 1} is defined by

sgn[x] = −1 if x < 0, 0 if x = 0, 1 if x > 0.
DEFINITION 2.2 Let p(s) be a given polynomial of degree n with real coefficients and without zeros on the imaginary axis. Let C− denote the open left-half plane (LHP), C+ the open right-half plane (RHP), and l and r the numbers of roots of p(s) in C− and C+, respectively. Let ∠p(jω) denote the angle of p(jω) and ∆_{ω1}^{ω2} ∠p(jω) the net change, in radians, in the phase or angle of p(jω) as ω runs from ω1 to ω2.

LEMMA 2.1

∆_0^∞ ∠p(jω) = (π/2)(l − r).   (2.13)

PROOF Each LHP root contributes π and each RHP root contributes −π to the net change in phase of p(jω) as ω runs from −∞ to ∞, and (2.13) follows from the symmetry of the roots about the real axis, since p(s) has real coefficients.

We call l − r the Hurwitz signature of p(s), and denote it as

σ(p) := l − r.   (2.14)
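For a polynomial with known coefficients, σ(p) can be evaluated directly from its computed roots, which is useful for checking the signature formulas developed below (a numpy sketch):

```python
import numpy as np

def signature(coeffs):
    """Hurwitz signature sigma(p) = (# open-LHP roots) - (# open-RHP roots).

    coeffs: real coefficients in descending powers of s; p is assumed
    to have no roots on the imaginary axis.
    """
    roots = np.roots(coeffs)
    l = sum(1 for r in roots if r.real < 0)
    r_ = sum(1 for r in roots if r.real > 0)
    return l - r_
```

In particular, p(s) of degree n is Hurwitz exactly when σ(p) = n.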
2.3.1 Computation of σ(p)
By the previous lemma, the computation of σ(p) amounts to a determination of the total phase change of p(jω). To see how the total phase change may be calculated, consider typical plots of p(jω) as ω runs from 0 to +∞, as in Figure 2.2. We note that the frequencies 0, ω1, ω2, ω3, ω4 are the points where the plot cuts or touches the real axis. In Figure 2.2(a), ω3 is a point where the plot touches but does not cut the real axis.

Figure 2.2 (a) Plot of p(jω) for p(s) of even degree. (b) Plot of p(jω) for p(s) of odd degree.
In Figure 2.2(a), we have

∆_0^∞ ∠p(jω) = ∆_0^{ω1} ∠p(jω) + ∆_{ω1}^{ω2} ∠p(jω) + ∆_{ω2}^{ω3} ∠p(jω) + ∆_{ω3}^{ω4} ∠p(jω) + ∆_{ω4}^∞ ∠p(jω)   (2.15)

where the five terms equal 0, −π, 0, −π and 0, respectively. Observe that

∆_0^{ω1} ∠p(jω) = (π/2) sgn[pi(0+)] { sgn[pr(0)] − sgn[pr(ω1)] }
∆_{ω1}^{ω2} ∠p(jω) = (π/2) sgn[pi(ω1+)] { sgn[pr(ω1)] − sgn[pr(ω2)] }
and

∆_{ω2}^{ω3} ∠p(jω) = (π/2) sgn[pi(ω2+)] { sgn[pr(ω2)] − sgn[pr(ω3)] }
∆_{ω3}^{ω4} ∠p(jω) = (π/2) sgn[pi(ω3+)] { sgn[pr(ω3)] − sgn[pr(ω4)] }
∆_{ω4}^{+∞} ∠p(jω) = (π/2) sgn[pi(ω4+)] { sgn[pr(ω4)] − sgn[pr(∞)] }   (2.16)

with

sgn[pi(ω1+)] = −sgn[pi(0+)]
sgn[pi(ω2+)] = −sgn[pi(ω1+)] = +sgn[pi(0+)]
sgn[pi(ω3+)] = +sgn[pi(ω2+)] = +sgn[pi(0+)]
sgn[pi(ω4+)] = −sgn[pi(ω3+)] = −sgn[pi(0+)]   (2.17)

and note also that 0, ω1, ω2, ω4 are the real zeros of pi(ω) of odd multiplicity, whereas ω3 is a real zero of even multiplicity. From these relations, it is evident that (2.15) may be rewritten, skipping the terms involving ω3, the root of even multiplicity, so that

∆_0^∞ ∠p(jω) = ∆_0^{ω1} ∠p(jω) + ∆_{ω1}^{ω2} ∠p(jω) + ∆_{ω2}^{ω4} ∠p(jω) + ∆_{ω4}^∞ ∠p(jω)
= (π/2) { sgn[pi(0+)] (sgn[pr(0)] − sgn[pr(ω1)])
− sgn[pi(0+)] (sgn[pr(ω1)] − sgn[pr(ω2)])
+ sgn[pi(0+)] (sgn[pr(ω2)] − sgn[pr(ω4)])
− sgn[pi(0+)] (sgn[pr(ω4)] − sgn[pr(∞)]) }.   (2.18)

Equation (2.18) can be rewritten as

∆_0^∞ ∠p(jω) = (π/2) sgn[pi(0+)] { sgn[pr(0)] − 2 sgn[pr(ω1)] + 2 sgn[pr(ω2)] − 2 sgn[pr(ω4)] + sgn[pr(∞)] }.   (2.19)
In the case of Figure 2.2(b), that is, when p(s) is of odd degree, we have

∆_0^∞ ∠p(jω) = ∆_0^{ω1} ∠p(jω) + ∆_{ω1}^{ω2} ∠p(jω) + ∆_{ω2}^{ω3} ∠p(jω) + ∆_{ω3}^{+∞} ∠p(jω)   (2.20)

and ∆_0^{ω1} ∠p(jω), ∆_{ω1}^{ω2} ∠p(jω), ∆_{ω2}^{ω3} ∠p(jω) are as before, whereas

∆_{ω3}^∞ ∠p(jω) = (π/2) sgn[pi(ω3+)] sgn[pr(ω3)].   (2.21)
We also have, as before,

sgn[pi(ωj+)] = (−1)^j sgn[pi(0+)],  j = 1, 2, 3.   (2.22)
Combining (2.20) - (2.22), we have, finally, for Figure 2.2(b),

∆_0^∞ ∠p(jω) = (π/2) sgn[pi(0+)] { sgn[pr(0)] − 2 sgn[pr(ω1)] + 2 sgn[pr(ω2)] − 2 sgn[pr(ω3)] }.   (2.23)
We can now easily generalize the above formulas for the signature, based on Lemma 2.1.

THEOREM 2.1 Let p(s) be a polynomial of degree n with real coefficients, without zeros on the imaginary axis. Write p(jω) = pr(ω) + j pi(ω) and let ω0, ω1, ω2, · · · , ω_{l−1} denote the real nonnegative zeros of pi(ω) with odd multiplicities, with ω0 = 0. Then:

If n is even,

σ(p) = sgn[pi(0+)] { sgn[pr(0)] + 2 Σ_{j=1}^{l−1} (−1)^j sgn[pr(ωj)] + (−1)^l sgn[pr(∞)] }.

If n is odd,

σ(p) = sgn[pi(0+)] { sgn[pr(0)] + 2 Σ_{j=1}^{l−1} (−1)^j sgn[pr(ωj)] }.
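Theorem 2.1 is straightforward to check numerically: build pr and pi, find the nonnegative real zeros of pi, and evaluate the alternating sum. The numpy sketch below is restricted to the generic case in which those zeros are simple.

```python
import numpy as np

def signature_thm21(coeffs):
    """sigma(p) via Theorem 2.1, for a real polynomial p (descending
    coefficients) with no imaginary-axis zeros, assuming the nonnegative
    real zeros of p_i(omega) are simple (the generic case)."""
    n = len(coeffs) - 1
    # substitute s = j*omega: the coefficient of s^k picks up a factor j^k
    cw = [coeffs[i] * (1j) ** (n - i) for i in range(n + 1)]
    pr = np.array([z.real for z in cw])      # p_r, descending powers of omega
    pi = np.array([z.imag for z in cw])      # p_i, descending powers of omega
    pi_roots = np.roots(np.trim_zeros(pi, 'f'))
    omegas = [0.0] + sorted(w.real for w in pi_roots
                            if abs(w.imag) < 1e-9 and w.real > 1e-9)
    s0 = np.sign(np.polyval(pi, 1e-6))       # sgn[p_i(0+)]
    total = np.sign(np.polyval(pr, 0.0))
    total += 2 * sum((-1) ** j * np.sign(np.polyval(pr, omegas[j]))
                     for j in range(1, len(omegas)))
    if n % 2 == 0:                           # add the sgn[p_r(inf)] term
        total += (-1) ** len(omegas) * np.sign(pr[0])
    return int(s0 * total)
```

Cross-checking this against a direct root count for a few polynomials is a useful sanity test of the formula.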
2.3.2 Alternative Signature Expression
In the previous subsection, we gave expressions for the signature of a polynomial p(s) in terms of the signs of the real part of p(jω) at the zeros of the imaginary part. Here we dualize these formulas, that is, we develop signature expressions in terms of the signs of the imaginary part at the zeros of the real part. Let v(s) denote a polynomial of degree n with real coefficients without jω axis zeros. Write, as before,

v(s) = v_even(s²) + s v_odd(s²)

so that v(jω) = vr(ω) + j vi(ω) with

vr(ω) = v_even(−ω²),  vi(ω) = ω v_odd(−ω²).
Let 0 < ω1 < ω2 < · · · < ω_{l−1} denote the real positive distinct zeros of vr(ω) of odd multiplicities, and let ωl = ∞. Observe that

∆_{ωj}^{ω_{j+1}} ∠v(jω) = (π/2) sgn[vr(ωj+)] { sgn[vi(ω_{j+1})] − sgn[vi(ωj)] },  j = 1, 2, · · · , l − 2.   (2.24)

When n is odd,

∆_{ω_{l−1}}^∞ ∠v(jω) = (π/2) sgn[vr(ω_{l−1}+)] { sgn[vi(∞)] − sgn[vi(ω_{l−1})] }   (2.25)

and for n even, we have

∆_{ω_{l−1}}^∞ ∠v(jω) = −(π/2) sgn[vr(ω_{l−1}+)] sgn[vi(ω_{l−1})].   (2.26)

Also, it is easily verified that in both cases, even and odd,

sgn[vr(ω_{j+1}+)] = −sgn[vr(ωj+)].   (2.27)
Combining (2.24) - (2.27) with

∆_0^∞ ∠v(jω) = (l − r)(π/2) = σ(v)(π/2),

we have the alternative signature formulas given below.

THEOREM 2.2
If n is even,

σ(v) = sgn[vr(0)] { 2 sgn[vi(ω1)] − 2 sgn[vi(ω2)] + · · · + (−1)^{l−2} 2 sgn[vi(ω_{l−1})] }.

If n is odd,

σ(v) = sgn[vr(0)] { 2 sgn[vi(ω1)] − 2 sgn[vi(ω2)] + · · · + (−1)^{l−2} 2 sgn[vi(ω_{l−1})] + (−1)^{l−1} sgn[vi(∞)] }.
2.4 Computation of the PID Stabilizing Set
Consider the plant with rational transfer function

P(s) = N(s)/D(s)

with the PID feedback controller

C(s) = (kp s + ki + kd s²) / (s(1 + sT)),  T > 0.
The closed-loop characteristic polynomial is

δ(s) = sD(s)(1 + sT) + (kp s + ki + kd s²) N(s).   (2.28)

We form the new polynomial

ν(s) := δ(s)N(−s)   (2.29)

and note that the even-odd decomposition of ν(s) is of the form

ν(s) = ν_even(s², ki, kd) + s ν_odd(s², kp).   (2.30)
The polynomial ν(s) exhibits the parameter separation property, namely, that kp appears only in the odd part and ki, kd only in the even part. This will facilitate the computation of the stabilizing set using signature concepts. Let deg[D(s)] = n, deg[N(s)] = m ≤ n, and let z+ and z− denote the number of RHP and LHP zeros of the plant, respectively, that is, zeros of N(s). We assume, as a convenient technical assumption, that the plant has no jω axis zeros.

THEOREM 2.3 The closed-loop system is stable if and only if

σ(ν) = n − m + 2 + 2z+.   (2.31)
PROOF Closed-loop stability is equivalent to the requirement that the n + 2 zeros of δ(s) lie in the open LHP. This is equivalent to

σ(δ) = n + 2   (2.32)

and, since σ is additive over products and σ(N(−s)) = z+ − z−, to

σ(ν) = n + 2 + z+ − z− = n + 2 + z+ − (m − z+) = (n − m) + 2 + 2z+.

Based on this, we can develop the following procedure to calculate S^o, the stabilizing set:
a) First, fix kp = kp*, and let 0 < ω1 < ω2 < · · · < ω_{l−1} denote the real, positive, finite frequencies which are zeros of

ν_odd(−ω², kp*) = 0   (2.33)

of odd multiplicities. Let ω0 := 0 and ωl := ∞.

b) Write

j = sgn[ν_odd(0+, kp*)]

and determine strings of integers i0, i1, · · · such that:

If n + m is even,

j { i0 − 2i1 + 2i2 + · · · + (−1)^{l−1} 2i_{l−1} + (−1)^l il } = n − m + 2 + 2z+   (2.34)

If n + m is odd,

j { i0 − 2i1 + 2i2 + · · · + (−1)^{l−1} 2i_{l−1} } = n − m + 2 + 2z+   (2.35)

c) Let I1, I2, I3, · · · denote distinct strings {i0, i1, · · ·} satisfying (2.34) or (2.35). Then the stabilizing sets in (ki, kd) space, for kp = kp*, are given by the linear inequalities

{ν_even(−ωt², ki, kd)} it > 0   (2.36)

where the it range over each of the strings I1, I2, · · ·.
d) For each string Ij, (2.36) generates a convex stability set Sj(kp*), and the complete stabilizing set for fixed kp* is the union of these convex sets:

S(kp*) = ∪j Sj(kp*).   (2.37)
e) The complete stabilizing set in (kp, ki, kd) space can be found by sweeping kp over the real axis and repeating the calculations (2.33) - (2.37).

From (2.34) and (2.35), we can see that the range of sweeping can be restricted to those values such that the number of roots l − 1 can satisfy (2.34) or (2.35) in the most favorable case. For n + m even, this requires that 2 + 2(l − 1) ≥ n − m + 2 + 2z+, or

l − 1 ≥ (n − m + 2z+)/2   (2.38)

and for n + m odd, we need 1 + 2(l − 1) ≥ n − m + 2 + 2z+, or

l − 1 ≥ (n − m + 1 + 2z+)/2.   (2.39)

Thus, kp needs to be swept over those ranges where (2.33) is satisfied with l − 1 given by (2.38) or (2.39).
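Step b) is a small combinatorial search over sign strings. It can be sketched as follows; the function name and interface are ours, not the book's.

```python
from itertools import product

def admissible_strings(l, target, j, n_plus_m_even=True):
    """Enumerate integer strings i_t in {-1, 1} satisfying the signature
    condition: relation (2.34) when n + m is even (string i_0, ..., i_l,
    including the (-1)^l i_l term) or relation (2.35) when n + m is odd
    (string i_0, ..., i_{l-1})."""
    length = l + 1 if n_plus_m_even else l
    found = []
    for s in product((-1, 1), repeat=length):
        total = s[0] + 2 * sum((-1) ** t * s[t] for t in range(1, l))
        if n_plus_m_even:
            total += (-1) ** l * s[l]
        if j * total == target:
            found.append(s)
    return found
```

Since each i_t takes only two values, the search space is 2^(l+1) at worst, which is negligible for the values of l arising from (2.38) and (2.39).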
REMARK 2.1 If the PID controller with pure derivative action is used (T = 0), it is easy to see that the signature requirement for stability becomes σ(ν) = n − m + 1 + 2z+.

The following example illustrates the detailed calculations involved in determining the stabilizing (kp, ki, kd) gain values.

Example 2.2 Consider the problem of determining stabilizing PID gains for the plant P(s) = N(s)/D(s) where

N(s) = s³ − 2s² − s − 1
D(s) = s⁶ + 2s⁵ + 32s⁴ + 26s³ + 65s² − 8s + 1.

In this example we use the PID controller with T = 0. The closed-loop characteristic polynomial is

δ(s, kp, ki, kd) = sD(s) + (ki + kd s²)N(s) + kp sN(s).

Here n = 6 and m = 3, and

N_e(s²) = −2s² − 1,  N_o(s²) = s² − 1,
D_e(s²) = s⁶ + 32s⁴ + 65s² + 1,  D_o(s²) = 2s⁴ + 26s² − 8.

Therefore,

N(−s) = −2s² − 1 − s(s² − 1)

and we obtain

ν(s) = δ(s, kp, ki, kd)N(−s)
= s²(−s⁸ − 35s⁶ − 87s⁴ + 54s² + 9) + (ki + kd s²)(−s⁶ + 6s⁴ + 3s² + 1) + s[(−4s⁸ − 89s⁶ − 128s⁴ − 75s² − 1) + kp(−s⁶ + 6s⁴ + 3s² + 1)]

so that
ν(jω, kp, ki, kd) = p1(ω) + (ki − kd ω²) p2(ω) + j [q1(ω) + kp q2(ω)]

where

p1(ω) = ω¹⁰ − 35ω⁸ + 87ω⁶ + 54ω⁴ − 9ω²
p2(ω) = ω⁶ + 6ω⁴ − 3ω² + 1
q1(ω) = −4ω⁹ + 89ω⁷ − 128ω⁵ + 75ω³ − ω
q2(ω) = ω⁷ + 6ω⁵ − 3ω³ + ω.
We find that z+ = 1, so that the signature requirement on ν(s) for stability is σ(ν) = n − m + 1 + 2z+ = 6. Since the degree of ν(s) is even, we see from the signature formulas that q(ω) must have at least two positive real roots of odd multiplicity. The range of kp such that q(ω, kp) has at least 2 real, positive, distinct, finite zeros with odd multiplicities was determined to be (−24.7513, 1), which is the allowable range for kp. For a fixed kp ∈ (−24.7513, 1), for instance kp = −18, we have

q(ω, −18) = q1(ω) − 18q2(ω) = −4ω⁹ + 71ω⁷ − 236ω⁵ + 129ω³ − 19ω.

Then the real, nonnegative, distinct finite zeros of q(ω, −18) with odd multiplicities are

ω0 = 0,  ω1 = 0.5195,  ω2 = 0.6055,  ω3 = 1.8804,  ω4 = 3.6848.

Also define ω5 = ∞. Since sgn[q(0+, −18)] = −1, it follows that every admissible string I = {i0, i1, i2, i3, i4, i5} must satisfy

{i0 − 2i1 + 2i2 − 2i3 + 2i4 − i5} · (−1) = 6.

Hence, the admissible strings are

I1 = {−1, −1, −1, 1, −1, 1}
I2 = {−1, 1, 1, 1, −1, 1}
I3 = {−1, 1, −1, −1, −1, 1}
I4 = {−1, 1, −1, 1, 1, 1}
I5 = {1, 1, −1, 1, −1, −1}.

For I1, it follows that the stabilizing (ki, kd) values corresponding to kp = −18 must satisfy the string of inequalities:

p1(ω0) + (ki − kd ω0²) p2(ω0) < 0
p1(ω1) + (ki − kd ω1²) p2(ω1) < 0
p1(ω2) + (ki − kd ω2²) p2(ω2) < 0
p1(ω3) + (ki − kd ω3²) p2(ω3) > 0
p1(ω4) + (ki − kd ω4²) p2(ω4) < 0
p1(ω5) + (ki − kd ω5²) p2(ω5) > 0
Substituting for ω0, ω1, ω2, ω3, ω4 and ω5 in the above expressions, we obtain

ki < 0
ki − 0.2699kd < −4.6836
ki − 0.3666kd < −10.0797
ki − 3.5358kd > 3.912
ki − 13.5777kd < 140.2055.   (2.40)

The set of values of (ki, kd) for which (2.40) holds can be solved by linear programming and is denoted by S1. For I2, we have

ki < 0
ki − 0.2699kd > −4.6836
ki − 0.3666kd > −10.0797
ki − 3.5358kd > 3.912
ki − 13.5777kd < 140.2055.   (2.41)

The set of values of (ki, kd) for which (2.41) holds can also be solved by linear programming and is denoted by S2. Similarly, we obtain

S3 = ∅ for I3,  S4 = ∅ for I4,  S5 = ∅ for I5.
Then, the stabilizing set of (ki , kd ) values when kp = −18 is given by S(−18) = ∪x=1,2,···,5 Sx = S1 ∪ S2 . The set S(−18) and the corresponding S1 and S2 are shown in Figure 2.3. By sweeping over different kp values within the interval (−24.7513, 1) and repeating the above procedure at each stage, we can generate the set of stabilizing (kp , ki , kd ) values. This set is shown in Figure 2.4.
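The computations of this example are easy to reproduce numerically. The sketch below recovers the zeros of q(ω, −18) and then confirms, through the closed-loop roots, that a sample point satisfying the I2 inequalities (our choice, (ki, kd) = (−2, −8)) is indeed stabilizing.

```python
import numpy as np

D = [1, 2, 32, 26, 65, -8, 1]          # D(s), descending powers of s
N = [1, -2, -1, -1]                    # N(s)
kp = -18.0

# q(w, kp) = q1(w) + kp*q2(w), both in descending powers of w
q1 = [-4, 0, 89, 0, -128, 0, 75, 0, -1, 0]
q2 = [0, 0, 1, 0, 6, 0, -3, 0, 1, 0]
q = np.polyadd(q1, kp * np.array(q2))
pos_zeros = sorted(w.real for w in np.roots(q)
                   if abs(w.imag) < 1e-9 and w.real > 1e-9)

def delta(ki, kd):
    """delta(s) = s*D(s) + (ki + kp*s + kd*s^2)*N(s) for the fixed kp."""
    return np.polyadd(np.polymul(D, [1, 0]), np.polymul([kd, kp, ki], N))
```

Running this reproduces the four positive frequencies quoted in the example, and the sample point inside S2 yields a characteristic polynomial with all roots in the open left half plane.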
2.5 PID Design with Performance Requirements
Control system performance can be specified by requirements on closed-loop stability margins, such as guarantees on gain or phase margins and time-delay margins, as well as time domain performance specifications such as low overshoot and fast settling time. Sometimes frequency domain inequalities, or equivalently an H∞ norm constraint on a closed-loop transfer function G(s) = N(s)/D(s):

‖G(s)‖∞ < γ   (2.42)
Figure 2.3 The stabilizing set of (ki, kd) values when kp = −18 (the regions S1 and S2 in the (ki, kd) plane).
Figure 2.4 The stabilizing set of (kp, ki, kd) values.
may be imposed. It is known (see Lemma 12.4) that the above condition is equivalent to Hurwitz stability of the complex polynomial family

γD(s) + e^{jθ}N(s),  θ ∈ [0, 2π].   (2.43)
Similarly, the gain and phase margin problems can be shown to be equivalent to Hurwitz stability of families of real and complex polynomials, respectively. In our PID design problem, the polynomials D(s) and N(s) will have the PID gains embedded in them, and the set of parameters achieving specifications is given by those achieving simultaneously the stabilization of the complex polynomial family as well as the real closed-loop characteristic polynomial. It turns out that the set of PID gains achieving stabilization of a complex polynomial family, and therefore attaining the specifications, can be found by an extension of the algorithm given for the real case. Towards this end, consider a complex polynomial of the form

c(s, kp, ki, kd) = L(s) + (kd s² + kp s + ki) M(s)   (2.44)

where L(s) and M(s) are given complex polynomials. The results on PID stabilization presented in Section 2.4 can be extended to the stabilization of (2.44). The algorithm, described below, is similar to the stabilization algorithm given for the real case. We will, therefore, not write the algorithm in detail but only point out the differences in the formulas and steps from those of the real case. We then show through examples how many PID performance or design problems can be converted into stabilization problems of complex polynomial families of the form (2.44) and solved using this algorithm.
2.5.1
Signature Formulas for Complex Polynomials
Consider the polynomial c(s) with complex coefficients and suppose that c(s) has no jω axis zeros and has l and r open LHP and open RHP roots, respectively. As before, we have the total phase change

∆_{−∞}^{+∞} ∠c(jω) = π(l − r)   (2.45)

and we define the signature

σ(c) := l − r.   (2.46)

To compute the signature, write

c(jω) = p(ω) + jq(ω)   (2.47)
where p(ω) and q(ω) are polynomials with real coefficients. Let ω1, ω2, · · · , ω_{l−1} denote the real zeros of q(ω) of odd multiplicities, with ω0 = −∞, ωl = +∞ and

ω0 < ω1 < · · · < ω_{l−1} < ωl.   (2.48)

Write

j− = sgn[q(−∞)],  j+ = sgn[q(∞)]   (2.49)

and

ik = sgn[p(ωk)],  for k = 0, 1, · · · , l.   (2.50)

THEOREM 2.4
(a) If deg[p] > deg[q],

σ(c) = (1/2) j− { i0 − 2i1 + 2i2 + · · · + (−1)^{l−1} 2i_{l−1} + (−1)^l il }   (2.51)
     = (1/2) j+ (−1)^{l−1} { i0 − 2i1 + · · · + (−1)^{l−1} 2i_{l−1} + (−1)^l il }.   (2.52)

(b) If deg[q] ≥ deg[p],

σ(c) = j− { −i1 + i2 − i3 + · · · + (−1)^{l−1} i_{l−1} }   (2.53)
     = j+ (−1)^{l−1} { −i1 + i2 − i3 + · · · + (−1)^{l−1} i_{l−1} }.   (2.54)
PROOF In case (a), the complex plane plot of c(jω) approaches the real axis as |ω| → ∞. Thus,

∆_{−∞}^{+∞} ∠c(jω) = ∆_{−∞}^{ω1} ∠c(jω) + ∆_{ω1}^{ω2} ∠c(jω) + · · · + ∆_{ω_{l−1}}^{+∞} ∠c(jω)   (2.55)

and

∆_{−∞}^{ω1} ∠c(jω) = (π/2) j− (i0 − i1)   (2.56)
∆_{ωk}^{ω_{k+1}} ∠c(jω) = (π/2) j− (−1)^k (ik − i_{k+1}),  k = 0, 1, · · · , l − 1.   (2.57)

Substituting (2.56), (2.57) into (2.55) and using (2.45), (2.46), we get (2.51); (2.52) follows from

j− = j+ (−1)^{l−1}.   (2.58)

In case (b),

∆_{ω1}^{ω2} ∠c(jω) = −j− (i1 − i2)(π/2)
∆_{ω2}^{ω3} ∠c(jω) = +j− (i2 − i3)(π/2)
...
∆_{ω_{l−2}}^{ω_{l−1}} ∠c(jω) = (−1)^{l−2} j− (i_{l−2} − i_{l−1})(π/2)   (2.59)

whereas

∆_{−∞}^{ω1} ∠c(jω) + ∆_{ω_{l−1}}^{+∞} ∠c(jω) = j− { −i1 + (−1)^{l−1} i_{l−1} }(π/2).   (2.60)
Combining (2.59) and (2.60) with (2.45) and (2.46), we obtain (2.53); (2.54) again follows from

j− = j+ (−1)^{l−1}.   (2.61)
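Case (a) of Theorem 2.4 can be verified numerically against direct root counting. In the numpy sketch below, ±10⁶ stands in for ±∞ and the real zeros of q are assumed simple.

```python
import numpy as np

def sigma_roots(coeffs):
    """l - r for a (possibly complex) polynomial, by direct root computation."""
    roots = np.roots(coeffs)
    return (sum(1 for z in roots if z.real < 0)
            - sum(1 for z in roots if z.real > 0))

def sigma_thm24a(coeffs):
    """Theorem 2.4, case (a) (deg[p] > deg[q]): formula (2.51)."""
    n = len(coeffs) - 1
    cw = [coeffs[k] * (1j) ** (n - k) for k in range(n + 1)]  # c(j*w) in w
    p = np.array([z.real for z in cw])
    q = np.array([z.imag for z in cw])
    zeros = sorted(w.real for w in np.roots(np.trim_zeros(q, 'f'))
                   if abs(w.imag) < 1e-9)
    j_minus = np.sign(np.polyval(q, -1e6))
    i = ([np.sign(np.polyval(p, -1e6))]
         + [np.sign(np.polyval(p, w)) for w in zeros]
         + [np.sign(np.polyval(p, 1e6))])
    l = len(zeros) + 1
    total = i[0] + 2 * sum((-1) ** k * i[k] for k in range(1, l)) + (-1) ** l * i[l]
    return int(round(0.5 * j_minus * total))
```

For example, c(s) = (s + 1)(s + 1 + j) has both roots in the open LHP, so both routes should return σ(c) = 2.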
2.5.2 Complex PID Stabilization Algorithm
Now we consider a complex polynomial of the form

c(s, kp, ki, kd) = L(s) + (kd s² + kp s + ki) M(s)   (2.62)

where L(s) and M(s) are two given complex polynomials. Let (Lr(s), Li(s)) denote polynomials such that Lr(jω) is purely real and Li(jω) is purely imaginary, with similar notation for M(s). Write L(s) and M(s) in terms of this real-imaginary decomposition:

L(s) = Lr(s) + Li(s)
M(s) = Mr(s) + Mi(s)

and define

M*(s) := Mr(s) − Mi(s)

and

ν(s) := c(s, kp, ki, kd) M*(s).

Also let n, m be the degrees of c(s, kp, ki, kd) and M(s), respectively. Evaluating the polynomial ν(s) at s = jω, we obtain

ν(jω) = c(jω, kp, ki, kd) M*(jω) = p(ω, ki, kd) + j q(ω, kp)   (2.63)
where

p(ω, ki, kd) = p1(ω) + (ki − kd ω²) p2(ω)   (2.64)
q(ω, kp) = q1(ω) + kp q2(ω)   (2.65)

with

p1(ω) = Lr(jω)Mr(jω) − Li(jω)Mi(jω)
p2(ω) = Mr²(jω) − Mi²(jω)
q1(ω) = (1/j)[Li(jω)Mr(jω) − Lr(jω)Mi(jω)]
q2(ω) = ω[Mr²(jω) − Mi²(jω)].   (2.66)

Note that ν(s) exhibits the parameter separation property as in the real case. The polynomials p1(ω), p2(ω), q1(ω), q2(ω) have real coefficients, so the previous theorem on the signature of complex polynomials can be used. The condition for Hurwitz stability of the complex polynomial c(s) of degree n is
equivalent to the signature condition σ(ν) = n + σ(M*). The calculation of the stabilizing gains kp, ki, kd proceeds as in the real case.

Complex PID Stabilization Algorithm:

• Step 1: Compute p1(ω), p2(ω), q1(ω), q2(ω) from (2.64) - (2.66).

• Step 2: The admissible ranges of kp are such that q(ω, kp) has at least

|n − (l(M(s)) − r(M(s)))| real, distinct finite zeros with odd multiplicities, if deg[q] ≥ deg[p];
|n − (l(M(s)) − r(M(s)))| − 1 such zeros, if deg[p] > deg[q].

The resulting ranges of kp are the only ranges of kp for which stabilizing (ki, kd) values may exist.

• Step 3: For fixed kp = kp*, solve for the real, distinct finite zeros of q(ω, kp*) with odd multiplicities, denote them by ω1 < ω2 < · · · < ω_{l−1}, and let ω0 = −∞ and ωl = ∞.

• Step 4: Find sequences of integers i0, i1, i2, · · · , il, with it ∈ {−1, 1} for t = 0, 1, · · · , l, such that

n − (l(M(s)) − r(M(s))) = (1/2) { i0 · (−1)^{l−1} + 2 Σ_{r=1}^{l−1} ir · (−1)^{l−1−r} − il } · sgn[q(∞, kp)], if deg[p] > deg[q];
n − (l(M(s)) − r(M(s))) = (1/2) { 2 Σ_{r=1}^{l−1} ir · (−1)^{l−1−r} } · sgn[q(∞, kp)], if deg[q] ≥ deg[p].   (2.67)

• Step 5: The stabilizing sets in (ki, kd) space are given by

p(ωt, ki, kd) it > 0

where the it are taken from the admissible strings satisfying the signature condition for stability.

• Step 6: Repeat the above steps by updating kp in the admissible ranges.

We now give some application examples of PID performance design using the complex stabilization algorithm.
2.5.3 PID Design with Guaranteed Gain and Phase Margins
In this subsection, we consider the problem of designing PID controllers that achieve prespecified gain and phase margins for a given plant. Towards this end, let Am and θm denote the desired upper gain and phase margins, respectively. From the definitions of the upper gain and phase margins, it follows that the PID gain values (kp, ki, kd) achieving gain margin Am and phase margin θm must satisfy the following conditions:

(1) sD(s) + A(kd s² + kp s + ki)N(s) is Hurwitz for all A ∈ [1, Am]; and
(2) sD(s) + e^{−jθ}(kd s² + kp s + ki)N(s) is Hurwitz for all θ ∈ [0, θm].
Thus, the problem to be solved is reduced to the problem of simultaneous stabilization of two families of polynomials. The algorithm of Section 2.4 can now be used to solve these simultaneous stabilization problems. The following example illustrates the procedure.

Example 2.3 Consider the plant G(s) = N(s)/D(s) where

N(s) = 2s − 1 and D(s) = s⁴ + 3s³ + 4s² + 7s + 9.

In this example, we consider the problem of determining all (kp, ki, kd) gain values that provide a gain margin Am ≥ 3.0 and a phase margin θm ≥ 40°. A given set of (kp, ki, kd) values will meet these specifications if and only if the following conditions hold:

(1) s(s⁴ + 3s³ + 4s² + 7s + 9) + A(kd s² + kp s + ki)(2s − 1) is Hurwitz for all A ∈ [1, 3.0];
(2) s(s⁴ + 3s³ + 4s² + 7s + 9) + e^{−jθ}(kd s² + kp s + ki)(2s − 1) is Hurwitz for all θ ∈ [0°, 40°].

Again, the procedure for determining the set of (kp, ki, kd) values is similar to that presented in Section 2.4 and therefore a detailed description is omitted. The resulting set is sketched in Figure 2.5.
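Condition (1) can be checked pointwise on a grid of A values. The sketch below does this for the plant of Example 2.3; the sample gains are our own illustration, not taken from the book's computed set, and as the check reveals they stabilize the nominal loop but do not achieve the required gain margin.

```python
import numpy as np

N = [2, -1]                        # 2s - 1
D = [1, 3, 4, 7, 9]                # s^4 + 3s^3 + 4s^2 + 7s + 9

def stable_for_gain(kp, ki, kd, A):
    """Is s*D(s) + A*(kd*s^2 + kp*s + ki)*N(s) Hurwitz?"""
    delta = np.polyadd(np.polymul(D, [1, 0]),
                       np.polymul([A * kd, A * kp, A * ki], N))
    return max(r.real for r in np.roots(delta)) < 0

def meets_gain_margin(kp, ki, kd, Am, grid=200):
    """Grid check of condition (1) over A in [1, Am]."""
    return all(stable_for_gain(kp, ki, kd, A)
               for A in np.linspace(1.0, Am, grid))
```

A grid check like this verifies a candidate point; the signature-based algorithm is what delivers the whole set in Figure 2.5.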
2.5.4
Synthesis of PID Controllers with an H∞ Criterion
Let us consider the problem of synthesizing PID controllers for which the closed-loop system is internally stable and the H∞-norm of a certain closed-loop transfer function is less than a prescribed level. In particular, the following closed-loop transfer functions are considered:
• The sensitivity function:
S(s) = 1/(1 + C(s)G(s)).   (2.68)
PID CONTROLLERS FOR DELAY-FREE LTI SYSTEMS
Figure 2.5 The set of (kp , ki , kd ) values for which the resulting closed-loop system achieves a gain margin Am ≥ 3.0 and a phase margin θm ≥ 40◦ .
• The complementary sensitivity function:
T(s) = C(s)G(s)/(1 + C(s)G(s)).   (2.69)
• The input sensitivity function:
U(s) = C(s)/(1 + C(s)G(s)).   (2.70)
It is known that various performance and robustness specifications can be captured by using the H∞-norm of weighted versions of the transfer functions (2.68)–(2.70). It can be verified that when C(s) is a PID controller, the transfer functions (2.68)–(2.70) can all be represented in the following general form:
Tcl(s, kp, ki, kd) = [A(s) + (kd s^2 + kp s + ki)B(s)] / [sD(s) + (kd s^2 + kp s + ki)N(s)]   (2.71)
where A(s) and B(s) are some real polynomials. For the transfer function Tcl(s, kp, ki, kd) and a given number γ > 0, the standard H∞ performance specification usually takes the form:
‖W(s)Tcl(s, kp, ki, kd)‖∞ < γ   (2.72)
where W(s) is a stable frequency-dependent weighting function that is selected to capture the desired design objectives at hand. Suppose the weighting function is
W(s) = Wn(s)/Wd(s)
where Wn(s) and Wd(s) are coprime polynomials and Wd(s) is Hurwitz. Define the polynomials δ(s, kp, ki, kd) and φ(s, kp, ki, kd, γ, θ) as follows:
δ(s, kp, ki, kd) ≜ sD(s) + (ki + kp s + kd s^2)N(s)   (2.73)
and
φ(s, kp, ki, kd, γ, θ) ≜ sWd(s)D(s) + (1/γ)e^{jθ}Wn(s)A(s) + (kd s^2 + kp s + ki)[Wd(s)N(s) + (1/γ)e^{jθ}Wn(s)B(s)].
We can now establish the following relationship between H∞ synthesis using PID controllers and simultaneous stabilization of a complex polynomial family: For a given γ > 0, there exist PID gain values (kp, ki, kd) such that ‖W(s)Tcl(s, kp, ki, kd)‖∞ < γ if and only if the following conditions hold: (1) δ(s, kp, ki, kd) is Hurwitz; (2) φ(s, kp, ki, kd, γ, θ) is Hurwitz for all θ in [0, 2π); (3) |W(∞)Tcl(∞, kp, ki, kd)| < γ. The above equivalence can be used to determine stabilizing (kp, ki, kd) values such that the H∞-norm of a certain closed-loop transfer function is less than a prescribed level. This is illustrated using the following example.
Example 2.4 Consider the plant G(s) = N(s)/D(s) where
N(s) = s − 1, D(s) = s^2 + 0.8s − 0.2,
and the PID controller
C(s) = (kd s^2 + kp s + ki)/s.
In this example, we consider the problem of determining all stabilizing PID gain values for which ‖W(s)T(s, kp, ki, kd)‖∞ < 1, where T(s, kp, ki, kd) is the complementary sensitivity function
T(s, kp, ki, kd) = (kd s^2 + kp s + ki)(s − 1) / [s(s^2 + 0.8s − 0.2) + (kd s^2 + kp s + ki)(s − 1)]
and the weight W(s) is chosen as a high pass transfer function:
W(s) = (s + 0.1)/(s + 1).
We know that (kp, ki, kd) values meeting the H∞ performance constraint exist if and only if the following conditions hold:
(1) δ(s, kp, ki, kd) = s(s^2 + 0.8s − 0.2) + (kd s^2 + kp s + ki)(s − 1) is Hurwitz;
(2) φ(s, kp, ki, kd, 1, θ) = s(s + 1)(s^2 + 0.8s − 0.2) + (kd s^2 + kp s + ki)[(s + 1)(s − 1) + e^{jθ}(s + 0.1)(s − 1)] is Hurwitz for all θ in [0, 2π);
(3) |W(∞)T(∞, kp, ki, kd)| = |kd/(kd + 1)| < 1.
The set of all (kp, ki, kd) values for which the H∞ performance specifications are met is precisely the set of values for which conditions (1), (2), and (3) are satisfied. To search for such values, we fix kp and determine all the values of (ki, kd) for which conditions (1), (2), and (3) hold. For condition (1), with a fixed kp, for instance kp = −0.35, by setting L(s) = s(s^2 + 0.8s − 0.2) and M(s) = s − 1, and using the algorithm of Section 2.4, we obtain the set of (ki, kd) values for which the closed-loop system is stable. This set is denoted by S(1,−0.35) and is sketched in Figure 2.6. Now fixing kp = −0.35 and any θ ∈ [0, 2π), setting
L(s) = s(s + 1)(s^2 + 0.8s − 0.2), M(s, θ) = (s + 1)(s − 1) + e^{jθ}(s + 0.1)(s − 1),
and using the complex stabilization algorithm of Section 2.5.2, we can again solve linear programming problems to determine the set of (ki, kd) values. Let this
Figure 2.6 The set S(1,−0.35) .
set be denoted by S(2,−0.35,θ). By keeping kp fixed, sweeping over θ ∈ [0, 2π), and using the complex stabilization algorithm of Section 2.5.2 at each stage, we can determine the set of (ki, kd) values for which condition (2) is satisfied. This set is denoted by S(2,−0.35) and is given by S(2,−0.35) = ∩θ∈[0,2π) S(2,−0.35,θ). The set S(2,−0.35) is sketched in Figure 2.7. Let S(3,−0.35) be the set of (ki, kd) values satisfying condition (3); this set is given by S(3,−0.35) = {(ki, kd) : ki ∈ R, kd > −0.5}. Then for kp = −0.35, the set of (ki, kd) values for which ‖W(s)T(s, kp, ki, kd)‖∞ < 1 is denoted by S(−0.35) and is given by S(−0.35) = ∩i=1,2,3 S(i,−0.35). In this case, we have S(−0.35) = S(2,−0.35). Now, using root loci concepts, it was determined that a necessary condition for the existence of stabilizing
Figure 2.7 The set S(2,−0.35) = ∩θ∈[0,2π) S(2,−0.35,θ) .
(ki, kd) values is that kp ∈ (−0.5566, −0.2197). Then, by sweeping over this range of kp and repeating the above procedure, we obtained the stabilizing set of (kp, ki, kd) values for which ‖W(s)T(s, kp, ki, kd)‖∞ < 1. This set is sketched in Figure 2.8.
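The three conditions of Example 2.4 lend themselves to a direct numerical membership test. The sketch below (assuming numpy) grids θ instead of solving the linear programs of Section 2.5.2, so it is an approximate check of whether a given (kp, ki, kd) satisfies conditions (1)–(3):

```python
import numpy as np

def hurwitz(c, tol=1e-9):
    """All roots of the (possibly complex) polynomial in the open LHP?"""
    return np.max(np.roots(c).real) < -tol

def hinf_ok(kp, ki, kd, n_theta=64):
    """Approximate test of conditions (1)-(3) of Example 2.4."""
    sD = np.array([1.0, 0.8, -0.2, 0.0])            # s*(s^2 + 0.8s - 0.2)
    K = np.array([kd, kp, ki])                      # kd s^2 + kp s + ki
    if not hurwitz(np.polyadd(sD, np.polymul(K, [1.0, -1.0]))):
        return False                                # condition (1) fails
    if kd <= -0.5:
        return False                                # condition (3): |kd/(kd+1)| >= 1
    sWdD = np.polymul([1.0, 1.0], sD)               # s*(s+1)*(s^2+0.8s-0.2)
    WdN = np.polymul([1.0, 1.0], [1.0, -1.0])       # (s+1)(s-1)
    WnN = np.polymul([1.0, 0.1], [1.0, -1.0])       # (s+0.1)(s-1)
    for th in np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False):
        inner = np.polyadd(WdN, np.exp(1j * th) * WnN)
        if not hurwitz(np.polyadd(sWdD.astype(complex), np.polymul(K, inner))):
            return False                            # condition (2) fails at this theta
    return True
```

Because θ is sampled, a `True` answer is only as reliable as the grid; the exact boundary comes from the sweep described in the text.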
2.5.5
PID Controller Design for H∞ Robust Performance
This subsection is devoted to the problem of synthesizing PID controllers for robust performance. In particular, we focus on the following robust performance specification:
‖ |W1(s)S(s)| + |W2(s)T(s)| ‖∞ < 1   (2.74)
where
W1(s) = NW1(s)/DW1(s) and W2(s) = NW2(s)/DW2(s)
are stable weighting functions, and S(s) and T(s) are the sensitivity and the complementary sensitivity functions respectively. As before, let δ(s, kp, ki, kd)
Figure 2.8 The set of stabilizing (kp, ki, kd) values for which ‖W(s)T(s, kp, ki, kd)‖∞ < 1.
denote the closed-loop characteristic polynomial
δ(s, kp, ki, kd) ≜ sD(s) + (ki + kp s + kd s^2)N(s).   (2.75)
We define the complex polynomial ψ(s, kp, ki, kd, θ, φ) by
ψ(s, kp, ki, kd, θ, φ) ≜ sDW1(s)DW2(s)D(s) + e^{jθ}sNW1(s)DW2(s)D(s) + (kd s^2 + kp s + ki)[DW1(s)DW2(s)N(s) + e^{jφ}DW1(s)NW2(s)N(s)].
The problem of synthesizing PID controllers for robust performance can be converted into the problem of determining values of (kp, ki, kd) for which the following conditions hold: (1) δ(s, kp, ki, kd) is Hurwitz; (2) ψ(s, kp, ki, kd, θ, φ) is Hurwitz for all θ ∈ [0, 2π) and for all φ ∈ [0, 2π); (3) |W1(∞)S(∞)| + |W2(∞)T(∞)| < 1.
Example 2.5 Consider the plant G(s) = N(s)/D(s) where
N(s) = s − 15 and D(s) = s^2 + s − 1.
Then the sensitivity function and complementary sensitivity function are:
S(s, kp, ki, kd) = s(s^2 + s − 1) / [s(s^2 + s − 1) + (kd s^2 + kp s + ki)(s − 15)],
T(s, kp, ki, kd) = (kd s^2 + kp s + ki)(s − 15) / [s(s^2 + s − 1) + (kd s^2 + kp s + ki)(s − 15)].
The weighting functions are chosen as:
W1(s) = 0.2/(s + 0.2) and W2(s) = (s + 0.1)/(s + 1).
We know that stabilizing (kp, ki, kd) values meeting the performance specification (2.74) exist if and only if the following conditions hold:
(1) δ(s, kp, ki, kd) = s(s^2 + s − 1) + (kd s^2 + kp s + ki)(s − 15) is Hurwitz;
(2) ψ(s, kp, ki, kd, θ, φ) = s(s + 0.2)(s + 1)(s^2 + s − 1) + e^{jθ}(0.2)s(s + 1)(s^2 + s − 1) + (kd s^2 + kp s + ki)[(s + 0.2)(s + 1)(s − 15) + e^{jφ}(s + 0.2)(s + 0.1)(s − 15)] is Hurwitz for all θ ∈ [0, 2π) and for all φ ∈ [0, 2π);
(3) |W1(∞)S(∞, kp, ki, kd)| + |W2(∞)T(∞, kp, ki, kd)| = |kd/(kd + 1)| < 1.
The procedure for determining the set of (kp, ki, kd) values satisfying conditions (1), (2), and (3) is similar to that presented in the previous example. Using root loci, it was determined that a necessary condition for the existence of stabilizing (ki, kd) values is that kp ∈ (−0.5079, −0.1155). For any fixed kp ∈ (−0.5079, −0.1155), we use the algorithm of Section 2.4 to determine the set of (ki, kd) values satisfying conditions (1) and (2). Condition (3) gives the admissible set of (ki, kd) values as {(ki, kd) : ki ∈ R, kd > −0.5}. Then for a fixed kp, we obtain the set of all (ki, kd) values for which ‖ |W1(s)S(s, kp, ki, kd)| + |W2(s)T(s, kp, ki, kd)| ‖∞ < 1 by taking the intersection of the sets of (ki, kd) values satisfying conditions (1), (2), and (3). Thus, by sweeping over kp ∈ (−0.5079, −0.1155) and repeating the above procedure, we obtain the set of (kp, ki, kd) values for which ‖ |W1(s)S(s, kp, ki, kd)| + |W2(s)T(s, kp, ki, kd)| ‖∞ < 1. This set is sketched in Figure 2.9.
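Condition (3) involves only the behavior at s = ∞; it can be sanity-checked numerically by evaluating |W1(s)S(s)| + |W2(s)T(s)| for the plant and weights of Example 2.5 at a large |s|, where the sum approaches |kd/(kd + 1)|. The gain values below are arbitrary illustrative choices:

```python
def W1(s): return 0.2 / (s + 0.2)
def W2(s): return (s + 0.1) / (s + 1.0)
def G(s):  return (s - 15.0) / (s * s + s - 1.0)

def robust_level(kp, ki, kd, s):
    """|W1(s)S(s)| + |W2(s)T(s)| for the PID loop of Example 2.5."""
    loop = ((kd * s * s + kp * s + ki) / s) * G(s)      # C(s)G(s)
    return abs(W1(s) / (1.0 + loop)) + abs(W2(s) * loop / (1.0 + loop))

# As s grows, the level tends to |kd/(kd + 1)|, i.e., condition (3).
level_at_inf = robust_level(-0.3, 0.02, 0.4, 1e7)
```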
Figure 2.9 The set of (kp, ki, kd) values for which ‖ |W1(s)S(s, kp, ki, kd)| + |W2(s)T(s, kp, ki, kd)| ‖∞ < 1.
2.6
Exercises
2.1 Determine the PID stabilizing sets for the following plants using the controller
(kp s + ki + kd s^2)/s.
(a) P(s) = 1/(s^2 + 3s + 4)
(b) P(s) = (s − 1)/(s + 1)^2
(c) P(s) = (s − 1)/[(s − 2)(s + 3)]
(d) P(s) = (s − 1)/(s^4 + 3s^2 + 2s + 1)
(e) P(s) = (s − 1)/(s^4 + 3s^2 − 2s + 1)
In each case, determine if stabilization is possible, the range of admissible kp, and some typical stabilizing sets in (ki, kd) space.
2.2 Repeat Problem 2.1 with the controller
(kp s + ki + kd s^2)/(s(1 + sT))
where T = 0.1.
2.3 Repeat Problems 2.1 and 2.2 with the following performance specifications: (a) gain margin of 2, (b) phase margin of 45 degrees, (c) H∞ norm of the closed-loop transfer function less than or equal to 1.4.
2.4 Consider the PID controller with the transfer function
C(s) = (Ki + Kp s + Kd s^2)/s
and the plant with transfer function
P(s) = K(s − z)/(s^2 + a1 s + a0).
Use signature methods to develop explicit conditions describing the stabilizing set S. Consider the cases when z > 0 and z < 0.
2.5 The settling time Ts of a control system can be defined to be the time required for the step response of the system to reach and remain within 2% of its steady state value. Show that this specification corresponds to shifting the closed-loop poles to the left of the line
s = −σ = −5/Ts
and that the PID controllers attaining this specification can be obtained by Hurwitz stabilization of the closed-loop system in the ŝ = s + σ plane. Apply this to the plant in Example 2.5 in this chapter to determine all PID controllers attaining a settling time of 5 seconds or less.
2.6 (Extension of the parameter separation property). Consider the controller parametrized as
C(s) = [Ne(s^2, x1) + sNo(s^2, x2)] / [De(s^2, x3) + sDo(s^2, x4)]
connected to an LTI plant, and assume that the parameter vectors xi, i = 1, 2, 3, 4, appear linearly in the polynomial coefficients. Prove that the stabilizing set can be described by linear inequalities in xi for fixed parameters xj = x∗j, j ≠ i, for i = 1, 2, 3, 4. Hint: Multiply the characteristic polynomial by N(−s), D(−s), sN(−s) or sD(−s), as needed to achieve parameter separation.
2.7
Notes and References
The PID stabilization algorithm presented in this chapter was developed in [96, 97]. The root counting and signature formulas on which this algorithm is based were developed in [163, 98, 62, 125]. They constitute a generalization of the Hermite-Biehler Theorem to the case of root distribution determination of real polynomials which are not necessarily Hurwitz. This generalization reported in [163, 62, 125] provides an analytical expression for the difference between the numbers of roots of a polynomial in the open left-half and open right-half planes. The synthesis and design methods discussed in this chapter are based on [95, 99, 100, 101].
3 PID CONTROLLERS FOR SYSTEMS WITH TIME DELAY
In this chapter we discuss the stability analysis and PID control of linear time-invariant systems containing a time delay. The main result is a description and computation of the PID stabilizing set using some of Pontryagin's results on the root distribution of quasipolynomials and their connection to the Nyquist criterion. For first order plants the stabilizing set is computed in quasi-closed form. For higher order plants the computation is algorithmic but yields the complete set.
3.1
Introduction
The PID controller is by far the most common control algorithm used in process control applications which contain time delays. The Japan Electric Measuring Instrument Manufacturers’ Association conducted a survey of the state of process control systems in 1989. According to the survey more than 90% of the control loops were of the PID type. The popularity of the PID controller can be attributed to its different characteristic features: it provides feedback, it has the ability to eliminate steady state offsets through integral action, and it can anticipate the future through derivative action. PID controllers come in many different forms. In some instances, the controller can be found as a stand-alone system in boxes for one or a few control loops. In other instances, PID control is combined with logic, sequential machines, transmitters, and simple function blocks to build complicated automation systems. These kinds of systems are often used for energy production, transportation, and manufacturing. Indeed, the PID controller can be considered to be the bread and butter of control engineering. The general empirical observation is that most industrial processes can be controlled reasonably well with PID control provided that the demands on the performance specifications are not too high. A PID controller is sufficient when the process has dominant dynamics of first or second order. Most of the time, there are no significant benefits gained by using a more complex controller for such processes. With the derivative action, improved damping is
provided. Hence, a higher proportional gain can be used to speed up the transient response. An example of this is temperature control inside a chamber. However, tuning of the derivative action should be carefully done because it can amplify high-frequency noise. Because of this, most of the commercially available PID controllers have a limitation on the gain of the derivative term. In the previous chapter we have seen how the classical Hermite-Biehler Theorem can be generalized and used to solve the problem of finding the set of all the stabilizing PID controllers for a linear time-invariant system described by a rational transfer function. However, when the system under study involves time delays, the Hermite-Biehler Theorem for polynomials cannot be used. The problem of controlling systems containing delays occurs frequently in process control where transportation lags are present. Time delays also occur due to computation time, communication time in remote control, and congestion as in traffic or internet control. Time delays are often destabilizing factors and few results are available for controller synthesis for time-delay systems. Linear time-invariant systems with delays give rise to characteristic functions known as quasi-polynomials. Pontryagin was one of the first researchers to study these quasi-polynomials. He derived necessary and sufficient conditions for the roots of a given quasi-polynomial to have negative real parts. Furthermore, he used such conditions to study the stability of certain classes of quasi-polynomials. These and some other preliminary results are described in this chapter and used to study the stability of systems with time delays. Even though these results cannot be used to easily check stability, they can form the basis of useful strategies for solving some fixed order and fixed-structure stabilization problems for systems with time delays. The chapter is organized as follows. 
Section 3.2 introduces the concept of characteristic equations for delay systems. In Section 3.3, we discuss the Padé approximation of a pure time delay by a rational transfer function and point out some of its limitations. This is done mainly to motivate the need for working with quasi-polynomials while dealing with systems containing time delays. In Section 3.4, we present an extension of the Hermite-Biehler Theorem applicable to quasi-polynomials, due to Pontryagin, along with some other useful tools for studying the stability of quasi-polynomials. Section 3.5 shows how to apply the previous results to the analysis of systems with time delay. Section 3.6 deals with a first order system and the computation of the PID stabilizing set. Section 3.7 deals with PID stabilization of an arbitrary LTI system with a single time delay and computes the entire stabilizing set.
3.2
Characteristic Equations for Delay Systems
Delays are present in a system when a signal or physical variable originating in one part of a system becomes available in another part after a lapse of time. For example, the change of flow rate in a pipeline becomes known downstream after a lapse of time determined by the length of the pipe and velocity of flow. Delays can also happen due to the time associated with the transmission of information to remote locations and in digital control systems due to the time involved in computing control signals from measured data, when the controller complexity or order are high. The block in Figure 3.1 can represent time delay:
Figure 3.1 Delay representation: input u(t), delay T, output y(t) = u(t − T).
In a dynamic feedback system where delay is present, the system equation may take the form
ẏ(t) + ay(t − T) = u(t).   (3.1)
The block diagram representation of (3.1) is depicted in Figure 3.2.
Figure 3.2 A feedback system with delay.
If there is a delay in the input, the system equation may take the form
ẏ(t) + ay(t) = u(t − T)   (3.2)
with the block diagram depicted in Figure 3.3 or, if the delay is within the loop,
ẏ(t) = −ay(t − T) + u(t − T)   (3.3)
with the block diagram depicted in Figure 3.4.
Figure 3.3 Input delay.
Figure 3.4 Delay within the loop.
A higher order system with multiple delays might be represented by the equation
ÿ(t) + a1 ẏ(t − T1) + a0 y(t − T0) = u(t)   (3.4)
with the corresponding block diagram depicted in Figure 3.5. The system (3.4) can be represented in state variable form by introducing y(t) = x1(t), ẏ(t) = x2(t) and writing
ẋ1(t) = x2(t)
ẋ2(t) = −a0 x1(t − T0) − a1 x2(t − T1) + u(t),
that is,
ẋ(t) = A0 x(t) + A1 x(t − T0) + A2 x(t − T1) + Bu(t)   (3.5)
with
A0 = [0 1; 0 0], A1 = [0 0; −a0 0], A2 = [0 0; 0 −a1], B = [0; 1].
Figure 3.5 Multiple delays.
More generally, a linear time-invariant system with l distinct delays T1, . . ., Tl may be represented in state-space form as
ẋ(t) = A0 x(t) + ∑_{i=1}^{l} Ai x(t − Ti) + Bu(t).   (3.6)
To discuss the stability of systems such as (3.1)–(3.6), it is usual to examine the solutions y(t) with u(t) ≡ 0 and study the behavior of y(t) as t → ∞. For this purpose consider, for example, the system (3.4) with u(t) ≡ 0, and let y(t) = e^{st} be a proposed solution of
ÿ(t) + a1 ẏ(t − T1) + a0 y(t − T0) ≡ 0.   (3.7)
Then we have
(s^2 + a1 e^{−sT1} s + a0 e^{−sT0}) e^{st} ≡ 0,
so that s must satisfy
s^2 + a1 s e^{−sT1} + a0 e^{−sT0} = 0.   (3.8)
Equation (3.8) is the characteristic equation of (3.4) or (3.7), and the location of its roots (or zeros) determines the stability of the system represented by (3.4). In particular, if any roots lie in the closed right-half plane, the system is unstable as the solution grows without bound. The characteristic equation associated with (3.6) can be shown to be
δ(s) := det( sI − A0 − ∑_{i=1}^{l} e^{−sTi} Ai ) = P0(s) + ∑_{k=1}^{m} Pk(s) e^{−Lk s}   (3.9)
where Lk, k = 1, 2, . . . , m, are sums of the Ti and
P0(s) = s^n + ∑_{i=0}^{n−1} ai s^i   (3.10)
Pk(s) = ∑_{i=0}^{n−1} (bk)i s^i.   (3.11)
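As a sanity check of (3.9), the determinant form can be evaluated numerically for the second order system (3.5) and compared with the directly derived characteristic function (3.8); the two agree identically in s. A sketch, assuming numpy and illustrative coefficient and delay values:

```python
import numpy as np

a0, a1 = 2.0, 0.5          # illustrative coefficients
T0, T1 = 1.0, 0.3          # illustrative delays
A_0 = np.array([[0.0, 1.0], [0.0, 0.0]])
A_T0 = np.array([[0.0, 0.0], [-a0, 0.0]])   # multiplies x(t - T0)
A_T1 = np.array([[0.0, 0.0], [0.0, -a1]])   # multiplies x(t - T1)

def delta_det(s):
    """delta(s) from the determinant form (3.9) applied to (3.5)."""
    M = s * np.eye(2) - A_0 - np.exp(-s * T0) * A_T0 - np.exp(-s * T1) * A_T1
    return np.linalg.det(M)

def delta_direct(s):
    """The characteristic function (3.8) obtained from the scalar equation."""
    return s**2 + a1 * s * np.exp(-s * T1) + a0 * np.exp(-s * T0)
```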
We note that in (3.9) there is no delay term associated with the highest order derivative. Such systems are referred to as retarded delay systems. When the highest-derivative term contains delays we may have an equation such as
ÿ(t − T2) + α1 ẏ(t − T1) + α0 y(t − T0) = u(t)   (3.12)
with characteristic equation
e^{−sT2} s^2 + α1 e^{−sT1} s + α0 e^{−sT0} = 0.   (3.13)
Such systems (with delays in the highest-derivative terms) are called neutral delay systems. In both retarded and neutral delay systems, stability is equivalent to the condition that all the roots of the characteristic equation lie in the open left-half plane (LHP). There are important differences between the nature of the roots of characteristic equations for retarded and neutral delay systems. In a retarded system there can only be a finite number of right-half plane (RHP) roots, a condition that does not hold for all neutral systems. The stability of retarded systems is equivalent to the absence of closed RHP roots. For neutral systems certain root chains can approach the imaginary axis from the LHP and thus destroy stability. This can be avoided by insisting that all roots lie strictly to the left of a line Re[s] = −α, for α > 0. The fact that retarded systems have a finite number of RHP roots means that one can count the number of roots crossing into the RHP through the stability boundary and keep track of the number of RHP roots as some parameters vary. This has significant implications for stability analysis. Finally, we mention that in equations such as (3.8), (3.9), or (3.13), the delays may be integer multiples of a common positive number τ. In such cases, the delays are said to be commensurate and the characteristic equation takes the form
δ(s) = a0(s) + a1(s)e^{−τs} + a2(s)e^{−2τs} + · · · + ak(s)e^{−kτs}
where ai(s), i = 0, 1, . . . , k, are polynomials. Thus, δ(s) is a polynomial in the two variables s and v := e^{−sτ}. There are extensive results on such quasipolynomials by Pontryagin and others, which will be used in this chapter. In the present chapter we will deal only with the case of a single delay in the feedback loop, representing delay in control action or delayed measurements. Even for this simple case, the stability problem from the synthesis point of view is complex and challenging, as we shall see.
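For commensurate delays, δ(s) is conveniently evaluated as a polynomial in s and v = e^{−τs}. A minimal evaluator sketch follows; the coefficient polynomials in the example instance are illustrative choices:

```python
import cmath

def quasipoly(a_polys, tau):
    """Return delta(s) = sum_k a_k(s) e^{-k tau s} for commensurate delays;
    a_polys[k] holds the coefficients of a_k(s), highest power first."""
    def horner(c, s):
        acc = 0j
        for ck in c:
            acc = acc * s + ck
        return acc
    def delta(s):
        v = cmath.exp(-tau * s)
        return sum(horner(a, s) * v**k for k, a in enumerate(a_polys))
    return delta

# delta(s) = s^2 + (0.5 s + 2) e^{-s}: an instance of (3.8) with T1 = T0 = 1
d = quasipoly([[1.0, 0.0, 0.0], [0.5, 2.0]], tau=1.0)
```

Evaluating δ(jω) along the imaginary axis in this form underlies the stability tests developed later in the chapter.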
3.3
The Padé Approximation and Its Limitations
The Padé approximation is often used to approximate a pure time delay by a rational transfer function. A logical approach would be to try to use this
approximation for the stability analysis and design of controllers for time-delay systems; since this approximation reduces a time-delay system to one with a rational transfer function, for P, PI, and PID controllers the results presented in Chapter 2 could then be employed. In this section we show via examples that PI and PID controllers that stabilize a system obtained by such an approximation of the time delay may actually be destabilizing for the true system. This will constitute one of the motivations for developing a new theory for the study of PID controllers for time-delay systems. There is usually some qualitative agreement for small values of the time delay, but the behavior may be very different for large values of the time delay. Furthermore, these examples will also show that the qualitative behavior improves with increasing order of the Padé approximant, but at the expense of greater algebraic complexity. Consider a simple first order model with time delay described by the following transfer function:
G(s) = k e^{−sL}/(1 + sT).   (3.14)
Here k represents the steady-state gain, T is the time constant, and L is the time delay of the system. We consider the feedback control system of Figure 2.1 where the controller transfer function C(s) is
C(s) = kp + ki/s + kd s = (ki + kp s + kd s^2)/s   (3.15)
and the plant G(s) being stabilized is described by (3.14). For a given delay free plant of arbitrary order, the results in Chapter 2 allow us to characterize all stabilizing PID controllers. Clearly these results cannot be directly applied to the PID stabilization of the plant model (3.14) since it is not a rational transfer function. This difficulty is usually overcome by approximating the time-delay term by a properly chosen Padé approximation. The Padé approximation for the term e^{−sL} is given by
e^{−sL} ≈ Nr(sL)/Dr(sL)
where
Nr(sL) = ∑_{k=0}^{r} [(2r − k)!/(k!(r − k)!)] (−sL)^k
Dr(sL) = ∑_{k=0}^{r} [(2r − k)!/(k!(r − k)!)] (sL)^k
and r represents the order of the approximation. For example, the third order Padé approximation (r = 3) is given by
N3(sL)/D3(sL) = (−L^3 s^3 + 12L^2 s^2 − 60Ls + 120)/(L^3 s^3 + 12L^2 s^2 + 60Ls + 120).
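The coefficient formulas for Nr(sL) and Dr(sL) translate directly into code. A sketch returning the coefficients highest power first:

```python
from math import factorial

def pade_delay(r, L=1.0):
    """Coefficients (highest power of s first) of the r-th order Pade
    approximant N_r(sL)/D_r(sL) of e^{-sL}."""
    num, den = [], []
    for k in range(r, -1, -1):
        c = factorial(2 * r - k) / (factorial(k) * factorial(r - k))
        num.append(c * (-L) ** k)
        den.append(c * L ** k)
    return num, den
```

For L = 1, `pade_delay(3)` reproduces the third order approximant quoted above, and `pade_delay(1)` gives the first order approximant (2 − Ls)/(2 + Ls) used in the next subsection.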
Since this approximation is of finite order, the results of Chapter 2 can be used to characterize all stabilizing PID controllers for the resulting approximate rational transfer function model. Thus, we are now in a position to compare the stabilizing sets of PID parameters for Padé approximants of different orders. This is carried out in the next two subsections.
3.3.1
First Order Padé Approximation
The first order Padé approximation of the time-delay term is
e^{−sL} ≈ (2 − Ls)/(2 + Ls).
Using this approximation in (3.14), we obtain the following rational transfer function Gm(s):
Gm(s) = k(−Ls + 2)/[(T s + 1)(Ls + 2)].
With the PID controller given by (3.15), the closed-loop characteristic polynomial becomes
δ(s, kp, ki, kd) = s(T s + 1)(Ls + 2) + k(ki + kp s + kd s^2)(−Ls + 2)
which can be rewritten as
δ(s, kp, ki, kd) = (T s^2 + s)(Ls + 2) + (kd′ s^2 + ki′)(−Ls + 2) + kp′ s(−Ls + 2)
where kd′ = k kd, ki′ = k ki, kp′ = k kp. Now by using the results of Chapter 2 we can obtain an analytical characterization of all stabilizing PID controllers for Gm(s). The stabilizing (kp, ki, kd) values must satisfy the following inequalities:
ki > 0,
ki − [4(1 + k kp)/(L(4T + L − k kp L))] kd < 2(1 + k kp)(2T + L − k kp L)/(kL(4T + L − k kp L)),
kd < T/k,   (3.16)
and
−1/k < kp < (1/k)(1 + 4T/L).   (3.17)
Note that the set of inequalities (3.16) has a special structure. For a fixed kp , (3.16) becomes a set of linear inequalities in terms of ki , kd and the admissible set of (ki , kd ) values is given by the triangle shown in Figure 3.6.
ki Figure 3.6 The stabilizing set of (ki , kd ) values for a fixed kp .
The coordinates of the vertices P1, P2, and P3 of this triangular stabilizing region are
P1 = (0, (−2T − L + k kp L)/(2k)),
P2 = (0, T/k),
P3 = (2(1 + k kp)/(kL), T/k).   (3.18)
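The vertex formulas (3.18) translate into a small helper; a sketch (the values k = T = L = 1, kp = 0 used below are arbitrary illustrative choices):

```python
def pade1_vertices(k, T, L, kp):
    """Vertices (3.18) of the triangular (ki, kd) stabilizing region for a
    fixed kp, under the first order Pade approximation of the delay."""
    P1 = (0.0, (-2.0 * T - L + k * kp * L) / (2.0 * k))
    P2 = (0.0, T / k)
    P3 = (2.0 * (1.0 + k * kp) / (k * L), T / k)
    return P1, P2, P3
```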
Now the question of interest is whether the first order Padé approximant accurately captures the actual set of stabilizing PID parameters for the original time-delay system. As the following example illustrates, the set in Figure 3.6 can contain controller parameter values that lead to an unstable closed-loop system. Thus, the first order Padé approximation can prove inadequate for determining the set of stabilizing PID parameters for a time-delay system.
Example 3.1 Consider the plant
G(s) = 1.6667 e^{−0.2475s}/(1 + 2.9036s).
The first order Padé approximation yields
Gm(s) = 1.6667(−0.1238s + 1)/[(1 + 2.9036s)(0.1238s + 1)].
Using (3.17) we obtained kp ∈ (−0.6, 28.7555) as the necessary condition to be satisfied by any stabilizing kp value. We fixed kp at 8.4467, which is the value suggested by the Ziegler-Nichols step response method. Using (3.18) we obtained the set of (ki, kd) values that stabilize Gm(s) for this fixed value of kp. This set is sketched in Figure 3.7. We now set the PID controller parameters to kp = 8.4467, ki = 60, kd = 1.5, denoted by * in Figure 3.7. Notice that this point is contained inside the region depicted in Figure 3.7.
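The claim that (ki, kd) = (60, 1.5) lies inside the triangle of Figure 3.7 can be verified from the vertices (3.18): the region is bounded by ki > 0, kd < T/k, and the edge joining P1 and P3. A sketch:

```python
def in_pade1_triangle(kp, ki, kd, k, T, L):
    """Test (ki, kd) against the triangle of Figure 3.6: ki > 0, kd < T/k,
    and (ki, kd) above the edge joining P1 and P3 of (3.18)."""
    p1_kd = (-2.0 * T - L + k * kp * L) / (2.0 * k)   # kd-coordinate of P1
    p3_ki = 2.0 * (1.0 + k * kp) / (k * L)            # ki-coordinate of P3
    slope = (T / k - p1_kd) / p3_ki                   # slope of edge P1-P3
    return ki > 0.0 and kd < T / k and kd > p1_kd + slope * ki

# Example 3.1 data: k = 1.6667, T = 2.9036, L = 0.2475, kp = 8.4467
inside = in_pade1_triangle(8.4467, 60.0, 1.5, 1.6667, 2.9036, 0.2475)
```

As the rest of the example shows, membership in this triangle guarantees stability only for the approximated plant Gm(s), not for the true delay system.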
Figure 3.7 The set of stabilizing (ki , kd ) values for Example 3.1.
However, as the simulation in Figure 3.8 shows, this set of values leads to an unstable closed-loop system with the true delay. Here, the input r(t) to the system is a unit step applied at t = 5 seconds, and y(t) represents the output of the system. It is clear from this that some stabilizing (ki, kd) values obtained on the basis of a Padé approximation can lead to a closed-loop system that in reality is unstable.
Figure 3.8 Step response of the system in Example 3.1.
3.3.2
Higher Order Padé Approximations
When higher order Padé approximations are used, it is no longer possible to obtain analytical expressions for the stabilizing PID gain values. However, a computational characterization is still possible by making use of the results of Chapter 2. We now define approximate stabilizing sets as the sets generated by approximating the time delay of the system with a Padé approximation and then using the results in Chapter 2. As the following examples show, using a second order Padé approximation still fails to adequately capture the set of stabilizing PID gain values; that is, it contains controller parameter values that lead to an unstable closed-loop system. However, as higher order Padé approximations are used, the situation seems to improve, since the approximate sets tend to converge toward a set that appears to be the true stabilizing set.
Example 3.2 Consider again the plant used in Example 3.1. We now approximate the time-delay term using the second, third, and fifth order Padé approximations. For each case we obtain the rational transfer function approximation Gim(s),
where i = 2, 3, 5 indicates the order of the approximation. For instance,
G3m(s) = (−1.6667s^3 + 80.8097s^2 − 1632.52s + 13192.1)/(2.9036s^4 + 141.781s^3 + 2892.54s^2 + 23961.7s + 7915.09).
As in Example 3.1, the controller parameter kp is set to 8.4467. Applying the results of Chapter 2 to the transfer functions Gim(s), we obtain the sets of all stabilizing (ki, kd) values for kp = 8.4467. The set corresponding to G2m(s) is sketched in Figure 3.9 with solid lines. The sets corresponding to G3m(s) and G5m(s) are both sketched with dashed lines and are essentially overlapping with each other.
Figure 3.9 The set of stabilizing (ki , kd ) values for Example 3.2.
As in Example 3.1 we can take values inside the region corresponding to G2m(s) but outside G3m(s) (such as the one denoted by *) and show that the corresponding closed-loop system is unstable. Thus, we conclude that while the second order Padé approximation fails to adequately capture the actual stabilizing set in the (ki, kd)-plane, the third and fifth order Padé approximations apparently do a better job. We examine next a system with a larger time delay.
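The effect of the approximation order can be explored by forming the closed-loop characteristic polynomial for the r-th order Padé model of the plant in Example 3.1 and testing it for Hurwitz stability. The sketch below (assuming numpy; a numerical root test, not the exact procedure of Chapter 2) confirms, for instance, that the gains of Example 3.1 stabilize the first order model while gains outside the triangle do not:

```python
import numpy as np
from math import factorial

def pade(r, L):
    """Pade numerator/denominator coefficients, highest power of s first."""
    ks = range(r, -1, -1)
    c = [factorial(2 * r - k) / (factorial(k) * factorial(r - k)) for k in ks]
    num = [ck * (-L) ** k for ck, k in zip(c, ks)]
    den = [ck * L ** k for ck, k in zip(c, ks)]
    return np.array(num), np.array(den)

def closed_loop_hurwitz(r, kp, ki, kd, k=1.6667, T=2.9036, L=0.2475):
    """Hurwitz test of s*Dg(s) + k*(kd s^2 + kp s + ki)*Ng(s), where
    k*Ng/Dg is the r-th order Pade model of the plant in Example 3.1."""
    n, d = pade(r, L)
    Dg = np.polymul([T, 1.0], d)                    # (1 + Ts) * D_r(sL)
    delta = np.polyadd(np.polymul([1.0, 0.0], Dg),
                       k * np.polymul([kd, kp, ki], n))
    return np.max(np.roots(delta).real) < -1e-9
```

Comparing the verdicts across r illustrates the convergence behavior discussed above.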
Example 3.3 Consider the following first order plant with deadtime:
G(s) = e^{−10s}/(1 + s).
Let us approximate the time delay term using the first, second, third, fifth, seventh, and ninth order Pad´e approximations. As in the previous example, we obtain the rational transfer function approximations Gim (s), i = 1, 2, 3, 5, 7, 9. Next, we set the controller parameter kp to 0.5 and compute the set of stabilizing (ki ,kd ) values using the results in Chapter 2. Figure 3.10 shows the 1 2 stabilizing controller sets Cm , Cm obtained for G1m (s) and G2m (s) in solid and dashed lines, respectively.
Figure 3.10 The set of stabilizing (ki, kd) values for Example 3.3 (ki on the horizontal axis, kd on the vertical axis; solid: stabilizing controller set C_m^1; dashed: stabilizing controller set C_m^2).
Figure 3.11 shows the stabilizing sets C_m^3, C_m^5 obtained for G_m^3(s) and G_m^5(s) in solid and dashed lines, respectively. The sets corresponding to G_m^7(s) and G_m^9(s) are similar and are represented with a dash-dotted line. For higher order Padé approximations of the time delay, the set of (ki, kd) values seems to converge toward a possible true set. As in previous examples, we can show that for lower order approximations we obtain sets that contain
Figure 3.11 The set of stabilizing (ki, kd) values for Example 3.3 (ki on the horizontal axis, kd on the vertical axis; solid: C_m^3; dashed: C_m^5; dash-dotted: C_m^7).
controller gain values that lead to unstable behavior of the closed-loop system. From the previous examples we can make the following observations:

1. For small values of the time delay, the approximate sets converge readily to the possible true sets. However, the convergence becomes more difficult as the value of the time delay increases.

2. The convergence of the approximate set to a possible true set improves with increased order of the Padé approximation.

We conclude that the Padé approximation is far from being a satisfactory tool for ensuring the stability of the resulting control design. The main problem lies in the fact that it is not a priori clear what order of approximation will yield a stabilizing set of controller parameters that accurately approximates the true set. The previous examples also showed that by increasing the order of the approximation, the approximate set can be made to closely approach the possible true set, but at the cost of greater algebraic complexity. These considerations motivate us to attempt a direct study of the stabilization problem for time delay systems without approximation.
3.4 The Hermite-Biehler Theorem for Quasi-polynomials
The stabilization of delay free systems is relatively easy to study because the number of roots of their characteristic equations is finite. However, when time delays are introduced, this ease of analysis disappears: the number of roots is no longer finite, making the establishment of stability quite a difficult task. To complicate matters, it can be shown that the Hermite-Biehler Theorem for Hurwitz polynomials does not carry over to arbitrary functions F(s) of the complex variable s. We will now study functions of the form F(s) = f(s, e^s), where f(s, t) is a polynomial in two variables; such a function F(s) is called a quasi-polynomial. As discussed earlier, such quasi-polynomials appear in characteristic equations of time delay systems. Before presenting the results, we introduce some preliminary definitions. Let f(s, t) be a polynomial in two variables with real or complex coefficients defined as follows:

f(s, t) = Σ_{h=0}^{M} Σ_{k=0}^{N} a_hk s^h t^k.
DEFINITION 3.1 f(s, t) is said to have a principal term if there exists a nonzero coefficient a_hk where both indices have maximal values. Without loss of generality, we will denote the principal term as a_MN s^M t^N. This means that for each other term a_hk s^h t^k with a_hk ≠ 0, we have either M > h, N > k; or M = h, N > k; or M > h, N = k. For example, f(s, t) = 3s + t^2 does not have a principal term, but the polynomial f(s, t) = s^2 + t + 2s^2 t does. We now state the first result of Pontryagin.

THEOREM 3.1 If the polynomial f(s, t) does not have a principal term, then the function F(s) = f(s, e^s) has an infinite number of zeros with arbitrarily large positive real parts.

If f(s, t) does have a principal term, the main result of Pontryagin is to show that the Hermite-Biehler Theorem extends to the class of functions F(s) = f(s, e^s). Introduce the following definition for interlacing.

DEFINITION 3.2
Let F(s) = f(s, e^s), where f(s, t) is a polynomial with a principal term, and write F(jω) = Fr(ω) + jFi(ω), where Fr(ω) and Fi(ω) represent, respectively, the real and imaginary parts of F(jω). Let ωr1, ωr2, ωr3, ..., denote the real roots of Fr(ω), and let ωi1, ωi2, ωi3, ..., denote the real roots of Fi(ω), both arranged in ascending order of magnitude. Then we say that the roots of Fr(ω) and Fi(ω) interlace if they satisfy the following property:

ωr1 < ωi1 < ωr2 < ωi2 < · · · .

In this definition we have

Fr(ω) = gr(ω, cos(ω), sin(ω)),  Fi(ω) = gi(ω, cos(ω), sin(ω)),

where gr(ω, u, v) and gi(ω, u, v) are polynomials. After these preliminaries, we present the generalization of the Hermite-Biehler Theorem to the quasi-polynomial F(s) = f(s, e^s).

THEOREM 3.2 Let F(s) = f(s, e^s), where f(s, t) is a polynomial with a principal term, and write F(jω) = Fr(ω) + jFi(ω), where Fr(ω) and Fi(ω) represent, respectively, the real and imaginary parts of F(jω). If all the roots of F(s) lie in the open LHP, then the roots of Fr(ω) and Fi(ω) are real, simple, interlacing, and

Fi′(ω)Fr(ω) − Fi(ω)Fr′(ω) > 0     (3.19)

for each ω in (−∞, ∞), where Fr′(ω) and Fi′(ω) denote the first derivatives with respect to ω of Fr(ω) and Fi(ω), respectively. Moreover, in order that all the roots of F(s) lie in the open LHP, it is sufficient that one of the following conditions be satisfied:

1. All the roots of Fr(ω) and Fi(ω) are real, simple, and interlacing, and the inequality (3.19) is satisfied for at least one value of ω;

2. All the roots of Fr(ω) are real and for each root ω = ωr, condition (3.19) is satisfied, that is, Fi(ωr)Fr′(ωr) < 0;

3. All the roots of Fi(ω) are real and for each root ω = ωi, condition (3.19) is satisfied, that is, Fi′(ωi)Fr(ωi) > 0.

We need to point out here that condition (3.19) is analogous to the monotonic phase increase property. Moreover, we see that this property has to hold for each ω in (−∞, ∞).
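Definitions 3.1 and 3.2 are both mechanical enough to check by machine. The sketch below is illustrative (the helper functions are ours, not from the text): it tests a two-variable coefficient array for a principal term and a pair of sorted root lists for the interlacing pattern ωr1 < ωi1 < ωr2 < ωi2 < · · ·:

```python
def has_principal_term(a):
    """a[h][k] holds the coefficient of s^h t^k in f(s, t).
    True if a[M][N] is nonzero, with M and N the largest powers of s
    and t that actually occur (Definition 3.1)."""
    support = [(h, k) for h, row in enumerate(a)
               for k, c in enumerate(row) if c != 0]
    M = max(h for h, _ in support)
    N = max(k for _, k in support)
    return (M, N) in support

def interlacing(roots_r, roots_i):
    """True if the sorted root lists satisfy wr1 < wi1 < wr2 < wi2 < ...
    (Definition 3.2); a sketch for lists of near-equal length."""
    merged = [x for pair in zip(roots_r, roots_i) for x in pair]
    ok_len = abs(len(roots_r) - len(roots_i)) <= 1
    return ok_len and all(x < y for x, y in zip(merged, merged[1:]))

# f(s,t) = 3s + t^2: coefficients a[1][0]=3, a[0][2]=1 -> no principal term
print(has_principal_term([[0, 0, 1], [3, 0, 0]]))    # False
# f(s,t) = s^2 + t + 2 s^2 t: a[2][1]=2 -> principal term exists
print(has_principal_term([[0, 1], [0, 0], [1, 2]]))  # True
```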
3.4.1 Applications to Control Theory
Many problems in process control engineering involve time delays. As discussed in Section 3.2, these time delays can lead to dynamic models with characteristic equations of the form

δ(s) = d(s) + e^{−sL1} n1(s) + e^{−sL2} n2(s) + · · · + e^{−sLm} nm(s)     (3.20)

where d(s) and ni(s), i = 1, 2, ..., m, are polynomials with real coefficients, and

(A1) deg[d(s)] = q and deg[ni(s)] ≤ q for i = 1, 2, ..., m;
(A2) 0 < L1 < L2 < · · · < Lm;
(A3) Li = αi L1, i = 2, ..., m, where the αi are nonnegative integers (commensurate delays).

Based on Pontryagin's results, Theorem 3.2 can be developed to study the stability of this class of quasi-polynomials. Instead of (3.20) we can consider the quasi-polynomial

δ*(s) = e^{sLm} δ(s)     (3.21)
⇒ δ*(s) = e^{sLm} d(s) + e^{s(Lm−L1)} n1(s) + e^{s(Lm−L2)} n2(s) + · · · + nm(s).

Notice that the new quasi-polynomial δ*(s) is of the form f(s, e^s) since, in view of (A3), the system exhibits commensurate delays, that is, delays that are related by integers. Since e^{sLm} does not have any finite zeros in the complex plane, the zeros of δ(s) are identical to those of δ*(s). Furthermore, in view of (A1) and (A2), the quasi-polynomial δ*(s) has a principal term since the coefficient of the term containing the highest powers of s and e^s is nonzero. As mentioned before, the quasi-polynomial δ*(s) has an infinite number of roots. However, any bounded region of the complex plane contains only a finite number of its roots. Roots that are far from the origin can be assigned to a finite number of asymptotic chains. The geometry of these chains has been carefully studied in the past, and it determines the following classes of quasi-polynomials:

1. Retarded-type quasi-polynomials (or delay-type quasi-polynomials): This first class consists of quasi-polynomials whose asymptotic chains go deep into the open LHP.

2. Neutral-type quasi-polynomials: This second class consists of quasi-polynomials that, along with delay-type chains, contain at least one asymptotic chain of roots in a vertical strip of the complex plane.

3. Forestall-type quasi-polynomials: This last class consists of quasi-polynomials with at least one asymptotic chain that goes deep into the open RHP.
It turns out that any quasi-polynomial of either delay or neutral type has a principal term, and vice versa: every quasi-polynomial with a principal term belongs to one of these two classes. It then follows that our quasi-polynomial δ*(s) in (3.21) is either of the delay or of the neutral type. For quasi-polynomials of either delay or neutral type, stability is defined as follows.

DEFINITION 3.3 A delay-type quasi-polynomial is said to be stable if and only if all its roots have negative real parts.

DEFINITION 3.4 A neutral-type quasi-polynomial is said to be stable if there exists a positive number σ such that the real parts of all its roots are less than −σ.

The reason why we have a stronger condition for the stability of neutral-type quasi-polynomials is that we need to exclude, for this case, the possibility of an asymptotic chain of roots converging to the imaginary axis. However, notice that we can always make a shift of the independent variable s → s̄ = s + σ, so that the negativity of the real parts of the roots of the quasi-polynomial δ̄*(s̄) = δ*(s̄ − σ) with respect to s̄ would imply the stability of the original time invariant system with delays. With these definitions of stability in hand, the stability of the system with characteristic equation (3.20) is equivalent to the condition that all the zeros of δ*(s) be in the open LHP. It follows that Theorem 3.2, restated below specifically for commensurate delays, can be used to test the stability of δ*(s).

THEOREM 3.3 Let δ*(s) be given by (3.21), and write

δ*(jω) = δr(ω) + jδi(ω),

where δr(ω) and δi(ω) represent, respectively, the real and imaginary parts of δ*(jω). Under conditions (A1), (A2), and (A3), δ*(s) is stable if and only if

1. δr(ω) and δi(ω) have only simple, real roots and these interlace;

2. δi′(ωo)δr(ωo) − δi(ωo)δr′(ωo) > 0 for some ωo in (−∞, ∞),

where δr′(ω) and δi′(ω) denote the first derivatives with respect to ω of δr(ω) and δi(ω), respectively. In the rest of this section, we will make use of this theorem to provide solutions to the P, PI, and PID stabilization problems for systems with time delay. Notice that the second condition establishes that the phase increase
property needs to be checked at a single real frequency ωo, provided Condition 1 is true.

REMARK 3.1 In Theorem 3.3 above, the requirement in Condition 1 that δr(ω) and δi(ω) have only simple, real roots is not superfluous. This is borne out by the following example.

Example 3.4 Consider a first order system given by

G(s) = 1 / (2s + 1)
and a PI controller arranged in the standard unity feedback closed-loop configuration. Recall that the PI controller has the transfer function

C(s) = kp + ki/s = (kp s + ki) / s.
If the parameters of the PI controller are kp = 1.8 and ki = 0.2, then the characteristic equation of the closed-loop system is δ(s) = 2s^2 + 2.8s + 0.2, and it is stable. Let us now consider the same first order model but with a time delay of 10 seconds:

G(s) = e^{−10s} / (2s + 1).

As in the delay free case, we consider the same PI controller parameters: kp = 1.8 and ki = 0.2. With these values, the characteristic equation of the closed-loop system is given by

δ(s) = 2s^2 + s + (1.8s + 0.2)e^{−10s} = 0.

For analyzing the stability, we consider

δ*(s) = e^{10s} δ(s) = (2s^2 + s)e^{10s} + 1.8s + 0.2.

Thus, the real and imaginary parts of δ*(jω) are given by

δr(ω) = 0.2 − ω sin(10ω) − 2ω^2 cos(10ω)
δi(ω) = ω[1.8 + cos(10ω) − 2ω sin(10ω)].

Using these expressions we can check whether the quasi-polynomial δ*(s) satisfies the interlacing property. Figure 3.12 shows the plot of the real and imaginary parts of δ*(jω).
Figure 3.12 Plot of the real and imaginary parts of δ*(jω) for Example 3.4 (solid: real part; dashed: imaginary part; ω in rad/sec).
It is clear from this plot that the roots of the real and imaginary parts interlace for all ω > 0. Notice that the interlacing condition needs to be checked only up to a finite frequency. This follows from the fact that the phasor of (1.8s + 0.2)/(2s^2 + s), evaluated at s = jω, tends to zero as ω tends to +∞. This ensures that the quasi-polynomial δ*(s) has the monotonic phase property for sufficiently large ω. Therefore, the interlacing condition needs to be verified only over a low frequency range. Since interlacing holds for all ω, we might be tempted to think that it is unnecessary to check whether the roots of δr(ω) and δi(ω) are all real. Next we check the monotonic phase property at ωo = 0. At this frequency we have δi(ωo) = 0. Thus, in Theorem 3.3 we have

δi′(ωo)δr(ωo) = (2.8)(0.2) > 0,

which indicates that the monotonic phase property holds. However, as Figure 3.13 shows, the system is unstable. In this figure, we have sketched the system response to a unit step input occurring at t = 5 seconds. The previous example illustrates the case of a time delay system that satisfies the interlacing and monotonic phase properties but fails to be stable.
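The apparent interlacing in Figure 3.12 can be probed numerically. The sketch below is illustrative, using only the expressions for δr and δi derived above: it locates sign-change roots on a grid over 0 < ω < 2 and checks that the merged, sorted roots alternate between the two functions:

```python
import math

def delta_r(w):
    return 0.2 - w * math.sin(10 * w) - 2 * w * w * math.cos(10 * w)

def delta_i(w):
    return w * (1.8 + math.cos(10 * w) - 2 * w * math.sin(10 * w))

def sign_change_roots(f, lo, hi, steps=100000):
    """Approximate real roots of f in (lo, hi) via grid sign changes."""
    roots, prev = [], f(lo)
    for k in range(1, steps + 1):
        w = lo + (hi - lo) * k / steps
        cur = f(w)
        if prev == 0 or prev * cur < 0:
            roots.append(w)
        prev = cur
    return roots

roots_r = sign_change_roots(delta_r, 1e-6, 2.0)
roots_i = sign_change_roots(delta_i, 1e-6, 2.0)
# Labels should alternate r, i, r, i, ... if the roots interlace on this range.
labels = [tag for _, tag in sorted([(w, 'r') for w in roots_r] +
                                   [(w, 'i') for w in roots_i])]
print(all(a != b for a, b in zip(labels, labels[1:])))
```

On this range the alternation does hold, consistent with the figure; the failure in this example comes instead from the existence of non-real roots, which a plot over real ω cannot reveal.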
Figure 3.13 Time response of the closed-loop system for Example 3.4 (output y(t) and unit step reference r(t) versus time in seconds).
The reason for this behavior lies in the nature of the roots (zeros) of δr(ω) and δi(ω): they are not all real. Thus, a crucial step in applying Theorem 3.3 to check stability is to ensure that δr(ω) and δi(ω) have only real roots. Such a property can be checked by using the following key result, also due to Pontryagin.

THEOREM 3.4 Let M and N denote the highest powers of s and e^s, respectively, in δ*(s). Let η be an appropriate constant such that the coefficients of the terms of highest degree in δr(ω) and δi(ω) do not vanish at ω = η. Then, for the equation δr(ω) = 0 or δi(ω) = 0 to have only real roots, it is necessary and sufficient that in each of the intervals

−2lπ + η ≤ ω ≤ 2lπ + η,  l = lo, lo + 1, lo + 2, ...

δr(ω) or δi(ω), respectively, have exactly 4lN + M real roots for sufficiently large lo.

We will now show how Theorem 3.4 can be used to determine the nature of the roots of δr(ω) or δi(ω) in Example 3.4.
First let us make the change of variables ŝ = 10s. Then the expression for δ*(s) can be rewritten as

δ̂*(ŝ) = (0.02ŝ^2 + 0.1ŝ)e^{ŝ} + 0.18ŝ + 0.2.

We see that for the new quasi-polynomial in ŝ, M = 2 and N = 1. Also, the real and imaginary parts of δ̂*(jω̂) are given by

δ̂r(ω̂) = 0.2 − 0.1ω̂ sin(ω̂) − 0.02ω̂^2 cos(ω̂)
δ̂i(ω̂) = ω̂[0.18 + 0.1 cos(ω̂) − 0.02ω̂ sin(ω̂)].

We now focus on the imaginary part of δ̂*(jω̂). From the previous expressions we can compute the roots of δ̂i(ω̂) = 0, that is,

ω̂[0.18 + 0.1 cos(ω̂) − 0.02ω̂ sin(ω̂)] = 0.

Then

ω̂ = 0, or 0.18 + 0.1 cos(ω̂) − 0.02ω̂ sin(ω̂) = 0.     (3.22)

From this we see that one root of the imaginary part is ω̂o = 0. The positive real roots of (3.22) are

ω̂1 = 8.0812, ω̂2 = 8.8519, ω̂3 = 13.5896, ω̂4 = 15.4332, ω̂5 = 19.5618, ω̂6 = 21.8025, ...
Next we choose η = π/4 to satisfy the requirement imposed by Theorem 3.4 that sin(η) ≠ 0. Figure 3.14 shows the root distribution of δ̂i(ω̂) and will enable us to apply Theorem 3.4 to this example. From this figure we see that δ̂i(ω̂) has only one real root in the interval [0, 2π − π/4] = [0, 7π/4]: the root at the origin. Since δ̂i(ω̂) is an odd function of ω̂, it follows that in the interval [−7π/4, 7π/4], δ̂i(ω̂) will have only one real root. Also observe from the same figure that δ̂i(ω̂) has no real roots in the interval [7π/4, 9π/4]. Thus, δ̂i(ω̂) has only one real root in the interval

[−2π + π/4, 2π + π/4],

which does not sum up to 4N + M = 6 for lo = 1. Let us now take lo = 2, so that the requirement on the number of real roots is 8N + M = 10. From Figure 3.14 we see that in the interval

[−4π + π/4, 4π + π/4]
Figure 3.14 Root distribution of δ̂i(ω̂) (roots ω̂0 = 0, ω̂1, ..., ω̂6 marked against the interval endpoints 2lπ ± π/4, l = 1, 2, 3; frequency in rad/sec).
the function δ̂i(ω̂) has only five real roots. Following the same procedure for l = 3, 4, ..., we see that the number of real roots of δ̂i(ω̂) in the interval

[−2lπ + π/4, 2lπ + π/4]

is always less than 4lN + M = 4l + 2. Hence, from Theorem 3.4 we conclude that the roots of δ̂i(ω̂) are not all real.
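The root count required by Theorem 3.4 can be checked numerically for this example. A sketch (grid-based sign-change counting; N = 1, M = 2 and η = π/4 as derived above):

```python
import math

def delta_i_hat(w):
    return w * (0.18 + 0.1 * math.cos(w) - 0.02 * w * math.sin(w))

def count_real_roots(f, lo, hi, steps=200000):
    """Count sign changes of f on a fine grid over [lo, hi]."""
    n, prev = 0, f(lo)
    for k in range(1, steps + 1):
        cur = f(lo + (hi - lo) * k / steps)
        if prev * cur < 0:
            n += 1
        prev = cur
    return n

M, N, eta = 2, 1, math.pi / 4
for l in (1, 2, 3):
    found = count_real_roots(delta_i_hat, -2*l*math.pi + eta, 2*l*math.pi + eta)
    required = 4 * l * N + M
    print(l, found, required, found < required)
```

Each interval comes up short of the required 4lN + M real roots, matching the conclusion that δ̂i has non-real roots.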
3.5 Stability of Systems with a Single Delay
In the previous section, we presented a general stability criterion for systems with time delay. This criterion gives necessary and sufficient conditions under which the roots of the characteristic equation of the closed-loop system have negative real parts. In this section, we consider an alternative analysis for the case of time-delay systems with a single delay. Such systems have the following characteristic equation:

δ(s) = d(s) + n(s)e^{−sL}     (3.23)
where d(s) and n(s) are polynomials with real coefficients, deg[d(s)] = q, deg[n(s)] = p, q ≥ p, and L > 0 is the time delay of the system. Moreover, we assume that any common factors of d(s) and n(s) have been removed. The condition for stability is that all the roots of the characteristic equation (3.23) lie in the open left half of the complex s-plane. Thus, in our case, the basic problem of stability is that of determining the range (or ranges) of values of L for which this occurs. One way to answer this stability question is to develop a systematic procedure to analyze the behavior of the roots of (3.23) as L increases from 0 to ∞. In this section, we introduce one such procedure, due to Walton and Marshall, which consists of three basic steps. The first step is to examine the stability of (3.23) for L = 0 and determine the number of roots, if any, of δ(s) = 0 not lying in the open LHP. The second step considers the case of an infinitesimally small positive L. For this value there will be an infinite number of new roots, and it is necessary to find out where in the complex plane these roots have arisen. The third and final step is to find positive values of L, if any, at which there are roots of δ(s) = 0 lying on the imaginary axis, and then to determine whether these roots merely touch the axis or cross from one half plane to the other with increasing L. Roots crossing from left to right are labeled destabilizing, and those crossing from right to left are labeled stabilizing. We will now use this procedure to study the movement of the roots of δ(s) = 0 with increasing L > 0. In particular, we will determine for which values of L these roots do not all lie in the open LHP, that is, the regions of instability. To explicitly show the dependence of the characteristic equation on the time delay L, we rewrite the characteristic equation as

δ(s, L) = d(s) + n(s)e^{−Ls} = 0.     (3.24)
Step 1. We start by examining the stability at L = 0, that is, we have

δ(s, 0) = d(s) + n(s) = 0.

This is a delay free problem to which any of the classical methods may be applied. If the system is found to be unstable, it will then be necessary to determine how many zeros lie in the open RHP or on the imaginary axis.

Step 2. In this step we increment L from 0 to an infinitesimally small and positive number. In this situation the number of roots changes from finite to infinite, and we need to determine where in the complex plane these new roots arise. Notice that for an infinitesimally small L, the new roots must come in at infinity; otherwise, the expression e^{−Ls} would be approximately equal to unity and there would not be any new roots. Consequently, if p < q, (3.24) can be satisfied for large s if and only if e^{−Ls} is large, that is, Re(s) < 0.
Thus, in this case the new roots all lie in the open LHP. The case where p = q involves more details and will be presented later in this section.

Step 3. In this step we have to consider potential crossing points on the imaginary axis. By taking complex conjugates of the quantities involved in the definition of δ(s, L), it follows that if δ(s, L) = 0 has a root at s = jω, then it also has a root at s = −jω. This implies that the roots cross or touch the imaginary axis in conjugate pairs, and therefore it suffices to consider positive values of ω. A special case is s = 0, and this will be analyzed later in this section. Substituting s = ±jω in (3.24) we obtain

d(jω) + n(jω)e^{−jLω} = 0
d(−jω) + n(−jω)e^{jLω} = 0.     (3.25)

Elimination of the exponential terms yields

d(jω)d(−jω) − n(jω)n(−jω) = 0.     (3.26)

The expression on the left-hand side of this equation is a polynomial in ω^2 and, for convenience, we denote it by

W(ω^2) = d(jω)d(−jω) − n(jω)n(−jω).     (3.27)
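For polynomials given by their coefficients, W(ω^2) in (3.27) can be formed symbolically: writing d(jω) = E_d(x) + jω O_d(x) with x = ω^2 and E_d, O_d built from the even and odd coefficients of d gives d(jω)d(−jω) = E_d^2 + x O_d^2, a polynomial in x. A sketch (helper names are ours):

```python
def poly_mul(a, b):
    """Multiply polynomials given by ascending coefficient lists."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def mag_sq(p):
    """p(jw)p(-jw) = |p(jw)|^2 as a polynomial in x = w^2 (ascending)."""
    even = [c * (-1) ** m for m, c in enumerate(p[0::2])]  # real part, in x
    odd = [c * (-1) ** m for m, c in enumerate(p[1::2])]   # imag part / w, in x
    e2 = poly_mul(even, even)
    o2 = [0.0] + poly_mul(odd, odd) if odd else [0.0]      # extra factor of x
    n = max(len(e2), len(o2))
    e2 += [0.0] * (n - len(e2)); o2 += [0.0] * (n - len(o2))
    return [u + v for u, v in zip(e2, o2)]

def W(d, n):
    """W(x) = d(jw)d(-jw) - n(jw)n(-jw), x = w^2, ascending coefficients."""
    a, b = mag_sq(d), mag_sq(n)
    m = max(len(a), len(b))
    a += [0.0] * (m - len(a)); b += [0.0] * (m - len(b))
    return [u - v for u, v in zip(a, b)]

# d(s) = s, n(s) = 2 (Example 3.5 below): W(x) = x - 4
print(W([0.0, 1.0], [2.0]))  # -> [-4.0, 1.0]
```

The real nonnegative roots of this polynomial in x = ω^2 are then the only candidates for imaginary-axis crossings.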
It should be clear that only the real, nonnegative zeros of W(ω^2) are of interest since only these can lead to potential crossing points s = ±jω. As a consequence, if there are no positive roots of W(ω^2) = 0 (notice that this is a function of ω^2), then there are no values of L for which δ(jω, L) = 0. This leads us to the following important remark.

REMARK 3.2 If p < q and W(ω^2) has no positive real roots, then there is no change in stability; that is, if the system is stable at L = 0, then it will be stable for all L ≥ 0, whereas if it is unstable for L = 0, then it will be unstable for all L ≥ 0.

If there is a real value of ω such that d(jω) + n(jω)e^{−jLω} = 0, then

e^{−jLω} = −d(jω)/n(jω).     (3.28)

Notice that n(jω) ≠ 0; otherwise, from (3.26), d(jω) would also be zero, which is not possible since we assumed that any common factors of d(s) and n(s) had been removed. Equation (3.28) will yield real positive values of L if and only if

|−d(jω)/n(jω)| = 1,
which is indeed true from (3.26). Thus, for any real ω ≠ 0 satisfying W(ω^2) = 0, there exist real positive L such that δ(jω, L) = 0, and these are given by

cos(Lω) = Re(−d(jω)/n(jω)),  sin(Lω) = Im(d(jω)/n(jω)).     (3.29)

From these expressions it follows that if Lo denotes the smallest value of L (for a particular value of ω) satisfying this, then

L = Lo + 2πk/ω,  k = 0, 1, 2, ...

are also solutions. Hence, for each ω satisfying W(ω^2) = 0, there is an infinite number of values of L at which the roots cross the imaginary axis. We now consider the special case s = 0. In this case, instead of (3.25) and (3.26) we have only one equation:

d(0) + n(0) = 0 ⇒ d(0) + e^{−L·0} n(0) = 0     (3.30)
for all finite L. Thus, if (3.30) is satisfied, the system is unstable for all values of L, and for our analysis this solution can be ignored. Once we have found a value of L at which there is a root of the characteristic equation (3.24) on the imaginary axis, we need to determine its behavior for slightly smaller and slightly larger values of L. This means that we need to find out whether the root crosses the imaginary axis, and in which direction, or whether it merely touches the imaginary axis. We can achieve this by considering the root s of the characteristic equation δ(s, L) = 0 as an explicit function of L, that is, s = f(L), and analyzing the expression Re(ds/dL) evaluated at the root s = jω. Then, we have

• If Re(ds/dL) > 0, then the root crosses the imaginary axis from left to right, that is, it is destabilizing.

• If Re(ds/dL) < 0, then the root crosses the imaginary axis from right to left, that is, it is stabilizing.

• If Re(ds/dL) = 0, then it is necessary to consider higher order derivatives.
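Given a root ω of W(ω^2) = 0, the delays at which ±jω lie on the imaginary axis follow from (3.29) by solving e^{−jLω} = −d(jω)/n(jω) for the phase. A sketch (the function is illustrative, not from the text):

```python
import cmath, math

def crossing_delays(d, n, w, kmax=3):
    """First kmax delays L > 0 at which s = jw solves d(s) + n(s)e^{-Ls} = 0.
    d, n: ascending coefficient lists; w must satisfy W(w^2) = 0."""
    djw = sum(c * (1j * w) ** k for k, c in enumerate(d))
    njw = sum(c * (1j * w) ** k for k, c in enumerate(n))
    z = -djw / njw                  # the required value of e^{-jLw}
    L0 = -cmath.phase(z) / w        # principal solution of -Lw = arg(z)
    if L0 <= 0:
        L0 += 2 * math.pi / w
    return [L0 + 2 * math.pi * k / w for k in range(kmax)]

# d(s) = s, n(s) = 2, crossing at w = 2 (Example 3.5): L = pi/4, 5pi/4, ...
print(crossing_delays([0.0, 1.0], [2.0], 2.0)[0] / math.pi)  # -> 0.25
```

This reproduces the crossing delays computed by hand in the examples below.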
If we differentiate equation (3.24) with respect to L, we obtain

ds/dL = s n(s)e^{−Ls} / [d′(s) + n′(s)e^{−Ls} − n(s)L e^{−Ls}],

where n′(s) and d′(s) denote the first derivatives with respect to s of n(s) and d(s), respectively. From (3.24), we can rewrite this expression as

ds/dL = −s [d′(s)/d(s) − n′(s)/n(s) + L]^{−1}.
81
and find the sign of the real part: !−1 ′ ′ d (jω) n (jω) − +L d(jω) n(jω) !!# ′ ′ d (jω) n (jω) − +L d(jω) n(jω)
1 since sgn[Re(a + jb)] = sgn[Re( a+jb )]. Then, !!# " ′ ′ n (jω) d (jω) 1 S = sgn Re − . jω n(jω) d(jω)
If we consider
′
′
n (jω) d (jω) − = a(ω) + jb(ω) n(jω) d(jω) then Re
Thus,
1 b(ω) (a(ω) + jb(ω)) = jω ω 1 = Im (a(ω) + jb(ω)) . ω "
S = sgn Im
1 ω
′
′
n (jω) d (jω) − n(jω) d(jω)
!!#
which is independent of L. This implies that even though there is an infinite number of values of L associated with each value of ω that make δ(jω, L) = 0, the behavior of the roots at these points will always be the same. Hence, we may classify solutions of W (ω 2 ) as • stabilizing if S = −1 or • destabilizing if S = +1. Since at s = jω we have W (ω 2 ) = 0 then from (3.27) we have d(jω)d(−jω) = n(jω)n(−jω). Thus, !!# " ′ ′ 1 n (jω)n(−jω) d (jω) S = sgn Im − ω d(jω)d(−jω) d(jω) " !!# ′ ′ 1 n (jω)n(−jω) − d (jω)d(−jω) = sgn Im ω d(jω)d(−jω) ′ ′ 1 = sgn Im n (jω)n(−jω) − d (jω)d(−jω) ω
82
THREE TERM CONTROLLERS
since d(jω)d(−jω) = |d(jω)|2 > 0. Now, using the property Im(z) = any complex number z, we have ′ 1 ′ S = sgn n (jω)n(−jω) − n(jω)n (−jω)− 2jω ′ ′ d (jω)d(−jω) + d(jω)d (−jω) which finally leads us to
′
S = sgn[W (ω 2 )]
z−¯ z 2j ,
for
(3.31) 2
in which the prime denotes differentiation with respect to ω . Hence, we can use (3.31) to determine whether a root is destabilizing or stabilizing. Step 2: Special Case. We mentioned earlier that the case q = p in Step 2 involved more details. In this case it is possible for all the roots of the characteristic equation to lie in the open LHP but for the system to be unstable. For example, consider the following system: s + 1 + se−Ls = 0 . Detailed calculations show that the new roots for infinitesimally small L > 0 are just in the LHP but the system is not stable in the sense of Definition 3.4. This shows that we will need to further analyze the case where q = p. Toward this end, suppose that s = σ + jω is a new root of δ(s, L) = 0 for infinitesimally small L. As mentioned earlier, since L is infinitesimally small, the new root must come in at infinity. Since q = p, (3.24) can be satisfied for large s = σ + jω if and only if e−(σ+jω)L is a real number. This happens if e−jωL equates to unity, or cos(ωL) − j sin(ωL) = 1. This implies that ωL = 2lπ, l = 0, 1, ..., and since L is infinitesimally small, we thus conclude that |ω| >> |σ|. Hence, we now have d(s) d(jω) −Lσ . e = ≈ (3.32) n(s) s=σ+jω n(jω) From this expression we conclude that σ > 0 if and only if |d(jω)| < |n(jω)| or, equivalently, W (ω 2 ) < 0 for large ω. Thus, we conclude that the system is unstable for q = p if W (ω 2 ) < 0 for large ω. For the case of stability in the sense of Definition 3.4 we require that the new roots lie to the left of the line Re(s) = α for some α < 0. From (3.32), this occurs if d(jω) aq = >1 lim cp ω→∞ n(jω) where aq and cp denote the coefficient of the highest powers in s of polynomials d(s) and n(s), respectively. Moreover, the new roots will lie in the LHP if W (ω 2 ) > 0 for large ω. Thus, we conclude that the system is stable in the sense of Definition 3.4 for q = p if W (ω 2 ) > 0 for large ω and this occurs if and only if |aq | > |cp |. 
We can now summarize the previous discussion as follows.
PID CONTROLLERS FOR SYSTEMS WITH TIME DELAY
83
• Step 1. Examine the stability at L = 0. • Step 2. Consider an infinitesimally small positive L. If q > p, all the new roots will lie in the open LHP and this step can be omitted. If q = p, the location of the new roots is determined by the sign of W (ω 2 ) for large ω. • Step 3. Determine the positive roots of W (ω 2 ) = 0, the corresponding positive values of L, and the nature (stabilizing or destabilizing) of these roots. If there are no repeated roots, then the stabilizing and destabilizing roots alternate. For example, if the largest root is destabilizing, we can then label the roots in descending order as destabilizing, stabilizing, and so on. The same procedure can be used for the corresponding values of L in order to determine for what values of L all the roots of δ(s, L) = 0 lie in the open LHP. Next we present several examples that will clarify the concepts introduced in this section. Example 3.5 Let δ(s, L) = s + 2e−Ls . Then, (1) δ(s, 0) = s + 2, so the system is stable for L = 0. (2) Since q = 1 > p = 0, we skip this step. ′ (3) d(s) = s and n(s) = 2, so W (ω 2 ) = ω 2 − 4. Thus, W (ω 2 ) = 1 which is positive. We conclude that there is only one positive root of W (ω 2 ) at 4. ′ Since S = sgn[W (ω 2 )] = 1, then this root is destabilizing. The corresponding values of L are given by (3.29): jω jω cos(Lω) = Re − = 0 , sin(Lω) = Im =1. 2 2 Solving for L we obtain L = (4k + 1)
π , k = 0, 1, 2, . . . . 4
This means that at L = π4 , two roots of δ(s, L) = 0 cross from left to right of the imaginary axis. Then, at L = 5π 4 , two more cross from left to right of the imaginary axis and so on. We conclude that the only region of stability is 0 ≤ L < π4 . Example 3.6 Consider δ(s, L) = (s + 1) + (s + 3)e−Ls . Then, (1) δ(s, 0) = 2s + 4, so the system is stable for L = 0. (2) Since q = p = 1, we need to consider the behavior of W (ω 2 ) for large ω 2 . We have W (ω 2 ) = (jω + 1)(−jω + 1) − (jω + 3)(−jω + 3) = −8 .
84
THREE TERM CONTROLLERS
Thus, since W (ω 2 ) < 0 for large ω 2 , an infinite number of new roots occur in the RHP and the system is unstable for all L > 0. Notice however that the system is stable for L = 0. Example 3.7 Let δ(s, L) = s2 + s + 4 + 2e−Ls . Then, (1) δ(s, 0) = s2 + s + 6, so the system is stable for L = 0. (2) Since q = 2 > p = 0, we skip this step. (3) d(s) = s2 + s + 4 and n(s) = 2, so W (ω 2 ) is given by ′
W (ω 2 ) = ω 4 − 7ω 2 + 12 ⇒ W (ω 2 ) = 2ω 2 − 7 . The positive roots of W (ω 2 ) are ω12 = 4 and ω22 = 3. The corresponding values of L satisfy (3.29), that is, cos(Lω) = 0.5ω 2 − 2 , sin(Lω) = 0.5ω . ′
ω12 = 4 is the larger of the two roots and S = sgn[W (4)] = 1, so this root is destabilizing. The corresponding values of L are given by cos(Lω1 ) = 0 , sin(Lω1 ) = 1 and hence L1 = (k + 1/4)π, k = 0, 1, 2, . . . . On the other hand, ω22 = 3 is the smallest of the two roots of W (ω 2 ) and ′ S = sgn[W (3)] = −1, so this root is stabilizing. The corresponding values of L are given by √ 2π 2π/3 + 2kπ √ 3L = + 2kπ ⇒ L2 = . 3 3 Thus, at L1 = 0.25π, 1.25π, 2.25π, . . ., a pair of roots crosses from the LHP into the RHP, whereas at L = 0.3849π, 1.5396π, 2.6943π, . . ., a pair of roots crosses from the RHP into the LHP. We can summarize these root crossings as follows: 1. At L = 0.25π, two roots move from the LHP to the RHP and then back to the LHP at L = 0.3849π. 2. At L = 1.25π, a second pair of roots crosses into the RHP and then crosses back at L = 1.5396π. 3. At L = 2.25π, a third pair crosses into the right and then crosses back at L = 2.6943π. This succession of stable and unstable regions must eventually cease since 2π ω2 , 2π the interval between successive stabilizing values, is greater than ω1 , that of destabilizing values. When it does, permanent instability will occur. This
PID CONTROLLERS FOR SYSTEMS WITH TIME DELAY
occurs at L = 6.25π, where a pair of roots crosses from left to right, but then at L = 7.25π another pair crosses from left to right before the next stabilizing crossing can occur. Thus, the roots have come to accumulate in the RHP. Moreover, since there can never be two consecutive stabilizing crossings (since 2π/ω1 < 2π/ω2), no more stability intervals are possible for L > 6.25π and permanent instability occurs. We conclude that there is stability for
L ∈ (0, 0.25π) ∪ (0.3849π, 1.25π) ∪ (1.5396π, 2.25π) ∪ (2.6943π, 3.25π) ∪ (3.8490π, 4.25π) ∪ (5.0037π, 5.25π) ∪ (6.1584π, 6.25π).
These are the so-called stability windows of the time delay system. This example shows that even low order systems with time delay can exhibit complicated behavior. As we can see, the addition of delay to a system may produce a stabilizing effect, which may contradict intuition. Moreover, the presence of these stability windows constitutes another reason why a Padé approximation of the time delay may not be adequate for analyzing the stability of time delay systems.

REMARK 3.3 The results of this section have a nice interpretation in terms of Nyquist plots (see Exercise 3.4).
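The crossing pattern of Example 3.7 lends itself to a direct numerical check. The sketch below assumes only the two crossing formulas derived above — L1 = (k + 1/4)π destabilizing and L2 = (2π/3 + 2kπ)/√3 stabilizing — merges the two sequences in order, and tracks the number of RHP root pairs; the stability windows are exactly the intervals where that count is zero.

```python
import math

# Example 3.7: delta(s, L) = s^2 + s + 4 + 2 e^{-Ls}.
# Destabilizing crossings (omega1^2 = 4): L = (k + 1/4) * pi
# Stabilizing crossings  (omega2^2 = 3): L = (2*pi/3 + 2*k*pi) / sqrt(3)
def crossings(l_max, n=60):
    dest = [(k + 0.25) * math.pi for k in range(n)]
    stab = [(2 * math.pi / 3 + 2 * k * math.pi) / math.sqrt(3) for k in range(n)]
    events = [(L, +1) for L in dest] + [(L, -1) for L in stab]
    return sorted(e for e in events if e[0] <= l_max)

def stability_windows(l_max):
    """Sweep L upward from 0, tracking the number of RHP root pairs."""
    windows, rhp_pairs, start = [], 0, 0.0
    for L, step in crossings(l_max):
        if rhp_pairs == 0 and step == +1:
            windows.append((start, L))   # a stable interval ends here
        rhp_pairs += step
        if rhp_pairs == 0:
            start = L                    # a stable interval begins here
    return windows

for lo, hi in stability_windows(7 * math.pi):
    print(f"({lo / math.pi:.4f} pi, {hi / math.pi:.4f} pi)")
```

The printed intervals reproduce the seven windows listed above; past L = 6.25π the RHP count never returns to zero, confirming permanent instability.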
3.6
PID Stabilization of First Order Systems with Time Delay
Over the last four decades, several methods have been developed for setting the parameters of the PID controller. Some of these methods are based on characterizing the dynamic response of the plant to be controlled with a first order model with time delay. It is interesting to note that even though most of these tuning techniques provide satisfactory results, the set of all stabilizing PID controllers for these first order models with time delay remained unknown until recently. Since this is the basic set in which every design must reside, it is important to determine it. This is the motivation for this section: to provide a complete solution to the problem of characterizing the set of all PID controllers that stabilize a given first order plant with time delay. In Section 3.6.1 we present the formal statement of the PID stabilization problem. In Section 3.6.2 we present the solution to the problem when the system to be controlled is open-loop stable. Section 3.6.3 contains a similar
result for the case of an open-loop unstable plant. We also provide here a necessary and sufficient condition on the time delay for the existence of stabilizing PID controllers. Simulations and examples are provided to illustrate the applicability of the results.
3.6.1
The PID Stabilization Problem
In this subsection we again study the problem of stabilizing a first order system with time delay using a PID controller. Our feedback control system is as shown in Figure 2.1, where
G(s) = [k/(1 + Ts)] e^{−Ls}   (3.33)
is the plant to be controlled and C(s) is the PID controller:
C(s) = kp + ki/s + kd s
where kp is the proportional gain, ki is the integral gain, and kd is the derivative gain. Our objective is to analytically determine the set of controller parameters (kp, ki, kd) for which the closed-loop system is stable.

We first analyze the system without the time delay, that is, L = 0. In this case the closed-loop characteristic equation of the system is given by
δ(s) = (T + kkd)s² + (1 + kkp)s + kki.
Since this is a second order polynomial, closed-loop stability is equivalent to all the coefficients having the same sign. Assuming that the steady state gain k of the plant is positive, these conditions are
kp > −1/k, ki > 0 and kd > −T/k   (3.34)
or
kp < −1/k, ki < 0 and kd < −T/k.   (3.35)
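As a quick numerical illustration of condition (3.34) — a sketch, where k = 1 and T = 2 match the plant of Example 3.8 below and the gains are arbitrary picks satisfying the inequalities — one can verify that the delay-free closed-loop polynomial is indeed Hurwitz:

```python
import cmath

# Delay-free closed loop: delta(s) = (T + k*kd) s^2 + (1 + k*kp) s + k*ki.
# With k, T > 0 and kp > -1/k, ki > 0, kd > -T/k, all three coefficients
# are positive, so both roots lie in the open left half plane.
def delay_free_roots(k, T, kp, ki, kd):
    a, b, c = T + k * kd, 1 + k * kp, k * ki
    disc = cmath.sqrt(b * b - 4 * a * c)
    return [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]

roots = delay_free_roots(k=1.0, T=2.0, kp=0.8, ki=0.5, kd=0.3)
print([round(r.real, 4) for r in roots])
```

Flipping the sign of ki (so that neither (3.34) nor (3.35) holds) produces a root with positive real part, as the coefficient test predicts.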
A minimal requirement for any control design is that the delay free closed-loop system be stable. Consequently, it will be henceforth assumed in this section that the PID gains used to stabilize the plant with delay always satisfy one of the conditions (3.34) or (3.35). Next consider the case where the time delay of the plant model is different from zero. The closed-loop characteristic equation of the system is then δ(s) = (kki + kkp s + kkd s2 )e−Ls + (1 + T s)s . As before, we can make use of Theorems 3.3 and 3.4 to solve the stability problem and find the set of stabilizing PID controllers.
We start by rewriting the quasi-polynomial δ(s) as
δ*(s) = e^{Ls} δ(s) = kki + kkp s + kkd s² + (1 + Ts)s e^{Ls}.
Substituting s = jω, we have δ*(jω) = δr(ω) + jδi(ω), where
δr(ω) = kki − kkd ω² − ω sin(Lω) − Tω² cos(Lω)
δi(ω) = ω[kkp + cos(Lω) − Tω sin(Lω)].
The following sections separately treat the two cases of an open-loop stable plant and an open-loop unstable plant.
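These expansions are easy to cross-check numerically. The sketch below evaluates δ*(jω) directly as a complex number and compares its real and imaginary parts with the formulas for δr(ω) and δi(ω); the sample parameter values are arbitrary.

```python
import cmath
import math

# Cross-check of the expansions: evaluate
#   delta*(jw) = k*ki + k*kp*s + k*kd*s^2 + (1 + T*s)*s*e^{L*s}  at s = j*w
# and compare with the stated real/imaginary parts.
def delta_star(w, k, T, L, kp, ki, kd):
    s = 1j * w
    return k * ki + k * kp * s + k * kd * s * s + (1 + T * s) * s * cmath.exp(L * s)

def delta_r(w, k, T, L, ki, kd):
    return k * ki - k * kd * w ** 2 - w * math.sin(L * w) - T * w ** 2 * math.cos(L * w)

def delta_i(w, k, T, L, kp):
    return w * (k * kp + math.cos(L * w) - T * w * math.sin(L * w))

d = delta_star(0.7, 1.0, 2.0, 4.0, 0.8, 0.5, 0.3)
print(d.real - delta_r(0.7, 1.0, 2.0, 4.0, 0.5, 0.3),
      d.imag - delta_i(0.7, 1.0, 2.0, 4.0, 0.8))
```

Both differences vanish to machine precision, confirming that kp enters only the imaginary part while ki and kd enter only the real part.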
3.6.2
Open-Loop Stable Plant
If the system is open-loop stable, then T > 0 in (3.33). Furthermore, we make the standing assumption that k > 0 and L > 0. From the expressions for δr(ω) and δi(ω), it is clear that the controller parameter kp only affects the imaginary part of δ*(jω), whereas the parameters ki and kd affect the real part of δ*(jω). Moreover, these three controller parameters appear affinely in δr(ω) and δi(ω). These facts are exploited in applying Theorems 3.3 and 3.4 to determine the range of stabilizing PID gains. Before stating the main result of this section, we present a few preliminary results that will be useful in solving the PID stabilization problem.

LEMMA 3.1 The imaginary part of δ*(jω) has only simple real roots if and only if
−1/k < kp < (1/k)[(T/L) α1 sin(α1) − cos(α1)]   (3.36)
where α1 is the solution of the equation
tan(α) = −Tα/(T + L)
in the interval (0, π).

PROOF With the change of variables z = Lω, the real and imaginary parts of δ*(jω) can be expressed as
δr(z) = kki − (kkd/L²)z² − (1/L)z sin(z) − (T/L²)z² cos(z)   (3.37)
δi(z) = (z/L)[kkp + cos(z) − (T/L)z sin(z)].   (3.38)
From (3.38) we can compute the roots of the imaginary part, that is, δi(z) = 0. This gives us the following equation:
(z/L)[kkp + cos(z) − (T/L)z sin(z)] = 0.
Then either z = 0 or
kkp + cos(z) − (T/L)z sin(z) = 0.   (3.39)
From this it is clear that one root of the imaginary part is zo = 0. The other roots are difficult to find analytically, since we would need to solve (3.39) in closed form. However, we can plot the terms involved in (3.39) and graphically examine the nature of the solution. Let us denote the positive real roots of (3.39) by zj, j = 1, 2, ..., arranged in increasing order of magnitude. There are now four different cases to consider.

Case 1: kp < −1/k. In this case, we sketch [kkp + cos(z)]/sin(z) and (T/L)z to obtain the plots shown in Figure 3.15.

Figure 3.15 Plot of the terms involved in (3.39) for kp < −1/k.

Case 2: −1/k < kp < 1/k. In this case, we graph [kkp + cos(z)]/sin(z) and (T/L)z to obtain the plots shown in Figure 3.16.

Figure 3.16 Plot of the terms involved in (3.39) for −1/k < kp < 1/k.

Case 3: kp = 1/k. In this case, we sketch kkp + cos(z) and (T/L)z sin(z) to obtain the plots shown in Figure 3.17.

Figure 3.17 Plot of the terms involved in (3.39) for kp = 1/k.

Case 4: 1/k < kp. In this case, we sketch [kkp + cos(z)]/sin(z) and (T/L)z to obtain the plots shown in Figures 3.18(a) and 3.18(b). The plot in Figure 3.18(a) corresponds to the case where 1/k < kp < ku, where ku is the largest number such that the plot of [kkp + cos(z)]/sin(z) intersects the line (T/L)z twice in the interval (0, π). The plot in Figure 3.18(b) corresponds to the case where kp ≥ ku and the plot of [kkp + cos(z)]/sin(z) does not intersect the line (T/L)z twice in the interval (0, π).

Figure 3.18 Plot of the terms involved in (3.39) for 1/k < kp.

Let us now use the results from Section 3.4.1 to check if δi(z) has only real roots. Substituting s1 = Ls in the expression for δ*(s), we see that for the new quasi-polynomial in s1, M = 2 and N = 1. Next, we choose η = π/4 to
satisfy the requirement that sin(η) ≠ 0. From Figures 3.16 through 3.18(a) we see that in each of these cases, that is, for −1/k < kp < ku, δi(z) has three real roots in the interval [0, 2π − π/4] = [0, 7π/4], including a root at the origin. Since δi(z) is an odd function of z, it follows that in the interval [−7π/4, 7π/4], δi(z) will have five real roots. Also observe from Figures 3.16 through 3.18(a) that δi(z) has a real root in the interval (7π/4, 9π/4]. Thus δi(z) has 4N + M = 6 real roots in the interval [−2π + π/4, 2π + π/4]. Moreover, it can be shown using Figures 3.16 through 3.18(a) that δi(z) has two real roots in each of the intervals [2lπ + π/4, 2(l + 1)π + π/4] and [−2(l + 1)π + π/4, −2lπ + π/4] for l = 1, 2, .... It follows that δi(z) has exactly 4lN + M real roots in [−2lπ + π/4, 2lπ + π/4] for −1/k < kp < ku. Hence, from Theorem 3.4, we conclude that for −1/k < kp < ku, δi(z) has only real roots. Also note that the cases kp < −1/k and kp ≥ ku, corresponding to Figures 3.15 and 3.18(b), respectively, do not merit any further consideration since, using Theorem 3.4, we can easily argue that in these cases not all the roots of δi(z) will be real, thereby ruling out closed-loop stability.

It only remains to determine the upper bound ku on the allowable value of kp. From the definition of ku, it follows that if kp = ku the plot of [kku + cos(z)]/sin(z) intersects the line (T/L)z only once in the interval (0, π). Let us denote by α1 the value of z for which this intersection occurs. Then we know that for z = α1 ∈ (0, π) we have
[kku + cos(α1)]/sin(α1) = (T/L)α1.   (3.40)
Moreover, at z = α1, the line (T/L)z is tangent to the plot of [kku + cos(z)]/sin(z). Thus
(d/dz){[kku + cos(z)]/sin(z)}|_{z=α1} = T/L
⇒ 1 + kku cos(α1) = −(T/L) sin²(α1).   (3.41)
Eliminating kku between (3.40) and (3.41), we conclude that α1 ∈ (0, π) can be obtained as a solution of the following equation:
tan(α1) = −Tα1/(T + L).
Once α1 is determined, the parameter ku can be obtained using (3.40):
ku = (1/k)[(T/L) α1 sin(α1) − cos(α1)].
This completes the proof of the lemma.

From (3.37), for z ≠ 0, the real part δr(z) can be rewritten as
δr(z) = (k/L²) z² [−kd + m(z)ki + b(z)]   (3.42)
where
m(z) ≜ L²/z²
b(z) ≜ −[L/(kz)][sin(z) + (T/L)z cos(z)].
LEMMA 3.2 For each value of kp in the range given by (3.36), the necessary and sufficient conditions on ki and kd for the roots of δr(z) and δi(z) to interlace are the following infinite set of inequalities:
ki > 0
kd > m1 ki + b1
kd < m2 ki + b2
kd > m3 ki + b3
kd < m4 ki + b4   (3.43)
...
where the parameters mj and bj for j = 1, 2, 3, ... are given by
mj ≜ m(zj)   (3.44)
bj ≜ b(zj).   (3.45)
PROOF From condition 1 of Theorem 3.3, the roots of δr(z) and δi(z) have to interlace for the quasi-polynomial δ*(s) to be stable. Thus, we evaluate δr(z) at the roots of the imaginary part δi(z). For zo = 0, using (3.37) we obtain
δr(zo) = kki.   (3.46)
For zj, where j = 1, 2, 3, ..., using (3.42) we obtain
δr(zj) = (k/L²) zj² [−kd + m(zj)ki + b(zj)].   (3.47)
Interlacing of the roots of δr (z) and δi (z) is equivalent to δr (zo ) > 0 (since Lemma 3.1 implies that kp is necessarily greater than − k1 , which in view of the stability requirements (3.34) for the delay free case implies that ki > 0), δr (z1 ) < 0, δr (z2 ) > 0, δr (z3 ) < 0, and so on. Using this fact and (3.46) – (3.47) we obtain δr (zo ) > 0 ⇒ ki > 0 δr (z1 ) < 0 ⇒ kd > m1 ki + b1
δr(z2) > 0 ⇒ kd < m2 ki + b2
δr(z3) < 0 ⇒ kd > m3 ki + b3
δr(z4) > 0 ⇒ kd < m4 ki + b4
...
Thus, intersecting all these regions in the ki–kd space, we obtain the set of (ki, kd) values for which the roots of δr(z) and δi(z) interlace for a given fixed value of kp. Notice that all these regions are half planes with their boundaries being lines with positive slopes mj. This completes the proof of the lemma.

Example 3.8 Consider the transfer function (3.33) with the following plant parameters: k = 1, T = 2 seconds, and L = 4 seconds. Then the quasi-polynomial δ*(s) is given by
δ*(s) = ki + kp s + kd s² + (2s² + s)e^{4s}.
From Lemma 3.1 we need to find the solution of the following equation:
tan(α) = −(1/3)α.
Solving this equation in the interval (0, π) we get α1 = 2.4556. Then, from (3.36), the imaginary part of δ*(jω) has only simple real roots if and only if
−1 < kp < 1.5515.
We now set the controller parameter kp to 0.8, which is inside the previous range. For this kp value, (3.39) takes the form
0.8 + cos(z) − 0.5z sin(z) = 0.
We next compute some of the positive real roots of this equation and arrange them in increasing order of magnitude:
z1 = 1.5806, z2 = 3.2602, z3 = 6.7971, z4 = 9.4669.
Using (3.44) and (3.45) we now calculate the parameters mj and bj for j = 1, ..., 4:
m1 = 6.4044, b1 = −2.5110
m2 = 1.5053, b2 = 2.1311
m3 = 0.3463, b3 = −2.0309
m4 = 0.1785, b4 = 2.0160.
From Lemma 3.2, interlacing of the roots of the real and imaginary parts of δ*(jω) occurs for kp = 0.8 if and only if the following set of inequalities is satisfied:
ki > 0
kd > 6.4044ki − 2.5110
kd < 1.5053ki + 2.1311
kd > 0.3463ki − 2.0309
kd < 0.1785ki + 2.0160
...
The boundaries of these regions are illustrated in Figure 3.19. Notice that the boundaries corresponding to z2, z4, ..., converge to the line kd = T/k = 2, whereas the boundaries corresponding to z1, z3, ..., converge to the line kd = −T/k = −2.
Figure 3.19 Boundaries of the regions of Example 3.8.
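The bound of Lemma 3.1 used in Example 3.8 can be reproduced with a few lines of bisection. This is a sketch: multiplying tan(α) = −Tα/(T + L) through by cos(α) gives g(α) = (T + L) sin(α) + Tα cos(α), whose root for T, L > 0 lies in (π/2, π), where the tangent is negative.

```python
import math

# Lemma 3.1 bound for Example 3.8 (k = 1, T = 2, L = 4):
# alpha1 solves tan(a) = -T*a/(T + L) in (0, pi); then
# ku = (1/k) * [ (T/L)*alpha1*sin(alpha1) - cos(alpha1) ].
def alpha1(T, L, tol=1e-12):
    # bisection on g(a) = (T + L) sin(a) + T a cos(a), sign + -> - on (pi/2, pi)
    lo, hi = math.pi / 2, math.pi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        g = (T + L) * math.sin(mid) + T * mid * math.cos(mid)
        if g > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

k, T, L = 1.0, 2.0, 4.0
a1 = alpha1(T, L)
ku = ((T / L) * a1 * math.sin(a1) - math.cos(a1)) / k
print(a1, ku)
```

The computed values agree with α1 = 2.4556 and the upper limit 1.5515 quoted in the example.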
As pointed out in the proof of Lemma 3.2 and in Example 3.8, the inequalities given by (3.43) represent half planes in the space of ki and kd. Their boundaries are given by lines with the following equations:
kd = mj ki + bj for j = 1, 2, 3, ....
The focus of the remainder of this section will be to show that this intersection is nonempty. We will also determine the intersection of this countably infinite number of half planes in a computationally tractable way. To this end, let us denote by vj the ki-coordinate of the intersection of the line kd = mj ki + bj, j = 1, 2, 3, ..., with the line kd = −T/k. From (3.44) and (3.45) it is not difficult to show that
vj = [zj/(kL)][sin(zj) + (T/L)zj(cos(zj) − 1)].   (3.48)
In a similar fashion, let us now denote by wj the ki-coordinate of the intersection of the line kd = mj ki + bj, j = 1, 2, 3, ..., with the line kd = T/k. Using (3.44) and (3.45) it can once again be shown that
wj = [zj/(kL)][sin(zj) + (T/L)zj(cos(zj) + 1)].   (3.49)
We now state three important technical lemmas that will allow us to develop an algorithm for solving the PID stabilization problem of an open-loop stable plant (T > 0). These lemmas describe the behavior of the parameters bj, vj, and wj, j = 1, 2, 3, ..., for different values of the parameter kp inside the range proposed by Lemma 3.1. The proofs of these lemmas are long and will be omitted here. They are given in Section 3.8.

LEMMA 3.3 If −1/k < kp < 1/k then
(i) bj < bj+2 < −T/k for odd values of j
(ii) bj > T/k and bj → T/k as j → ∞ for even values of j
(iii) 0 < vj < vj+2 for odd values of j.
LEMMA 3.4 If kp = 1/k then
(i) bj = −T/k for odd values of j
(ii) bj = T/k for even values of j.
LEMMA 3.5 If 1/k < kp < (1/k)[(T/L)α1 sin(α1) − cos(α1)], where α1 is the solution of the equation
tan(α) = −Tα/(T + L)
in the interval (0, π), then
(i) bj > bj+2 > −T/k for odd values of j
(ii) bj < bj+2 < T/k for even values of j
(iii) wj > wj+2 > 0 for even values of j
(iv) b1 < b2, w1 > w2.
E(ωo) = δi′(ωo)δr(ωo) − δi(ωo)δr′(ωo) > 0 for some ωo in (−∞, ∞). Let us take ωo = 0, so zo = 0. Thus, δi(zo) = 0 and
δr(zo) = kki. We also have
δi′(z) = (kkp/L) + [(1/L) − (T/L²)z²]cos(z) − [(1/L) + (2T/L²)]z sin(z)
⇒ E(zo) = [(kkp + 1)/L](kki).
Recall that k > 0 and L > 0. Thus, for
ki > 0 and kp > −1/k
or
ki < 0 and kp < −1/k
we have E(zo) > 0. Notice that from these conditions we can safely discard kp = −1/k from the set of kp values for which a stabilizing PID controller can be found.
Step 2. Next we check condition 1 of Theorem 3.3, that is, δr(z) and δi(z) have only simple real roots and these interlace. From Lemma 3.1 we know that the roots of δi(z) are all real if and only if the parameter kp lies inside the range
(−1/k, (1/k)[(T/L)α1 sin(α1) − cos(α1)])
where α1 is the solution of the equation
tan(α) = −Tα/(T + L)
in the interval (0, π). From the proof of Lemma 3.2, we see that interlacing of the roots of δr(z) and δi(z) leads to the following set of inequalities:
ki > 0
kd > m1 ki + b1
kd < m2 ki + b2
kd > m3 ki + b3
kd < m4 ki + b4
...
We now show that for −1/k < kp < ku, where ku = (1/k)[(T/L)α1 sin(α1) − cos(α1)], all these regions have a nonempty intersection. Notice first that the slopes mj of the boundary lines of these regions decrease with zj. Moreover, in the limit we have
lim_{j→∞} mj = 0.
Using this fact we have the following observations:
1. When −1/k < kp < 1/k, the intersection is given by the trapezoid T sketched in Figure 3.20(a). This region can be found using the properties stated in Lemma 3.3.
2. When kp = 1/k, the intersection is given by the triangle Δ sketched in Figure 3.20(b). This region can be found using the properties stated in Lemma 3.4.
3. When 1/k < kp < ku, the intersection is given by the quadrilateral Q sketched in Figure 3.20(c). This region can be found using the properties stated in Lemma 3.5.
For values of kp in (−1/k, ku), the interlacing property and the fact that the roots of δi(z) are all real can be used in Theorem 3.4 to guarantee that δr(z) also has only real roots. Thus, for values of kp inside this range there is a solution to the PID stabilization problem for a first order open-loop stable plant with time delay. For values of kp outside this range the aforementioned problem does not have a solution. This completes the proof of the theorem.

In view of Theorem 3.5, we propose an algorithm to determine the set of stabilizing parameters for the plant (3.33) with T > 0.

Algorithm for Determining Stabilizing PID Parameters
• Step 1: Initialize kp = −1/k and step = [1/(N + 1)](ku + 1/k), where N is the desired number of points;
• Step 2: Increase kp as follows: kp = kp + step;
• Step 3: If kp < ku then go to Step 4. Else, terminate the algorithm;
• Step 4: Find the roots z1 and z2 of (3.39);
• Step 5: Compute the parameters mj and bj, j = 1, 2, associated with the previously found zj by using (3.44) and (3.45);
• Step 6: Determine the stabilizing region in the ki–kd space using Figure 3.20;
• Step 7: Go to Step 2.
Two examples that illustrate the procedure involved in solving the PID stabilization problem using the results of this section are given below.

Example 3.9 Consider the PID controller design for the first order process with deadtime using the Ziegler-Nichols step response method. The process model is given as
G(s) = [k/(Ts + 1)] e^{−Ls}   (3.56)
where k = 0.1, T = 0.01 seconds, and L = 0.1 seconds. Using the Ziegler-Nichols step response method, we obtain the controller parameter values kp = 1.2, ki = 6.0, and kd = 0.06. We now use the results of this section to determine the set of all stabilizing (ki, kd) values when kp is kept fixed at 1.2 (the Ziegler-Nichols value). First we compute the roots z1 and z2 of the imaginary part δi(z) of the characteristic equation of the closed-loop system, that is, we solve the following equation:
0.12 + cos(z) − 0.1z sin(z) = 0.
The roots obtained are z1 = 1.537 and z2 = 4.204. Then from Figure 3.20 we only need to compute the boundary line corresponding to z1. This line is given by kd = 0.00423ki − 0.6535. Thus, the set of stabilizing (ki, kd) values when kp = 1.2 is the one sketched in Figure 3.21.
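Steps 4 and 5 of the algorithm, applied to this example, can be sketched as follows. This is a minimal implementation under stated assumptions: a sign-change scan over (0, 2π] suffices to bracket z1 and z2, bisection refines them, and (3.44)–(3.45) give the boundary lines; Step 6 still requires the region construction of Figure 3.20.

```python
import math

# Steps 4-5 (sketch): bracket the roots of (3.39) by a sign-change scan,
# refine by bisection, then form the boundary lines kd = mj*ki + bj.
def boundary_lines(k, T, L, kp, z_max=2 * math.pi, steps=2000):
    f = lambda z: k * kp + math.cos(z) - (T / L) * z * math.sin(z)
    roots, prev = [], 1e-9
    for i in range(1, steps + 1):
        z = z_max * i / steps
        if f(prev) * f(z) < 0:              # bracketed a root
            lo, hi = prev, z
            for _ in range(100):            # bisection refinement
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
        prev = z
    lines = []
    for zj in roots:
        m = L * L / zj ** 2                                   # (3.44)
        b = -(L / (k * zj)) * (math.sin(zj) + (T / L) * zj * math.cos(zj))  # (3.45)
        lines.append((zj, m, b))
    return lines

# Example 3.9 data: k = 0.1, T = 0.01 s, L = 0.1 s, kp = 1.2
for zj, m, b in boundary_lines(0.1, 0.01, 0.1, 1.2):
    print(f"z = {zj:.3f}:  kd = {m:.5f} ki + {b:.4f}")
```

With the data above it recovers z1 ≈ 1.537, z2 ≈ 4.204 and the boundary line kd = 0.00423ki − 0.6535 quoted in the example.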
Figure 3.21 The stabilizing region of (ki, kd) when kp = 1.2 for Example 3.9.
From Figure 3.21, it is clear that the PID controller obtained by the Ziegler-Nichols step response method (marked in the figure) is very close to the stability boundary. This example thus shows that a PID controller design obtained using the Ziegler-Nichols step response method may suffer from "fragility." Also shown in Figure 3.21 is the circle of largest radius inscribed inside the stabilizing region. This circle has a radius of r = 0.1 and its center can be placed
anywhere on the line kd = 0, 0 < ki < 130.26. By choosing the (ki, kd) value at the center of this circle, we can obtain the largest l2 parametric stability margin in the space of ki and kd, thereby alleviating the controller fragility problem.

Example 3.10 Let us revisit the design problem presented in Example 3.9. The plant parameters are k = 0.1, T = 0.01 seconds, and L = 0.1 seconds. We will now use a different approach to solve this problem. We first approximate the deadtime of (3.56) by the first order Padé approximation introduced in Section 3.3. The approximated process model is given by
G1m(s) = [k/(Ts + 1)] · [(−(L/2)s + 1)/((L/2)s + 1)]
       = 0.1(−0.05s + 1)/[(0.01s + 1)(0.05s + 1)].
Since G1m(s) is a rational transfer function, we use the PID controller design procedure presented in Chapter 2. Using this procedure we obtained the set of all stabilizing (kp, ki, kd) values. The set of all stabilizing (ki, kd) values corresponding to kp = 1.2 is sketched in Figure 3.22 with a continuous line. We next compare this set with the one obtained in Example 3.9. This latter set is superimposed on Figure 3.22 using a dashed line. As we can see from this figure, the set obtained by the Padé approximation includes settings of the PID controller that lead to an unstable closed-loop system. This shows, once again, that the Padé approximation may indeed be unsatisfactory when designing a PID controller for systems with deadtime.

Finally, let us use the results of this section to determine the entire set of stabilizing PID parameters. The range of kp values specified by Theorem 3.5 is given by
−10 < kp < 10.4048.
By sweeping over this range and using the algorithm presented earlier, we obtain the stabilizing set of (kp, ki, kd) values sketched in Figure 3.23.

Example 3.11 Consider the PID stabilization problem for a plant described by the differential equation
dy(t)/dt = −0.5y(t) + 0.5u(t − 4).
This process can also be described by the transfer function
G(s) = [k/(1 + Ts)] e^{−Ls}
Figure 3.22 The stabilizing region of (ki, kd) when kp = 1.2 for Example 3.10.
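The discrepancy visible in Figure 3.22 is consistent with the frequency response of the approximation: the first order Padé term (1 − Ls/2)/(1 + Ls/2) matches e^{−Ls} closely only at low frequencies. A quick check (sketch) for L = 0.1:

```python
import cmath

# Mismatch between the exact delay e^{-Ls} and its first order Pade
# approximation (1 - Ls/2)/(1 + Ls/2), evaluated on the imaginary axis.
def delay_error(w, L):
    s = 1j * w
    pade = (1 - L * s / 2) / (1 + L * s / 2)
    return abs(cmath.exp(-L * s) - pade)

L = 0.1
for w in (1.0, 10.0, 100.0):
    print(w, delay_error(w, L))
```

Both terms have unit magnitude on the jω axis, so the error is purely a phase error: negligible at ω = 1 rad/s but large at ω = 100 rad/s, which is where the spurious "stable" Padé settings of Figure 3.22 come from.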
Figure 3.23 The stabilizing region of (kp, ki, kd) values for the PID controller in Example 3.10.
with the parameters: k = 1, T = 2 sec, and L = 4 sec. Since the system is open-loop stable, we use Theorem 3.5 to find the range of kp values for which a solution to the PID stabilization problem exists. We first compute the parameter α1 ∈ (0, π) satisfying the following equation tan(α) = −0.3333α . Solving this equation we obtain α1 = 2.4557. Thus, from (3.50) the range of kp values is given by −1 < kp < 1.5515 . We now sweep over the above range of kp values and determine the stabilizing set of (ki , kd ) values at each stage using the previous algorithm. These regions are sketched in Figure 3.24.
Figure 3.24 The stabilizing region of (kp, ki, kd) values for the PID controller in Example 3.11.
Any PID gains selected from these regions will result in closed-loop stability and any gains outside will result in instability. Now, consider the following performance specifications: 1. Settling time ≤ 60 secs; 2. Overshoot ≤ 20%.
We can obtain the transient responses of the closed-loop system for the (kp, ki, kd) values inside the regions depicted in Figure 3.24. In general, we also need some tolerance around the controller parameters; that is, we want the controller to be robust to perturbations in its own parameters, or nonfragile. Thus, we only consider PID gains lying inside the following box defined in the parameter space:
0.1 ≤ kp ≤ 1, 0.1 ≤ ki ≤ 0.3 and 0.5 ≤ kd ≤ 1.5.
By searching over this box, several (kp, ki, kd) values are found to meet the desired performance specifications. We arbitrarily set the controller parameters to kp = 0.3444, ki = 0.1667, kd = 0.8333. Figure 3.25 shows the step response of the resulting closed-loop system. It is clear from the figure that the closed-loop system is stable, the output y(t) tracks the step input signal, and the performance specifications are met.
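The closed-loop step response can be approximated with a forward Euler simulation and a delay buffer. This is a sketch, not the simulation used for Figure 3.25; as one common variant, the derivative here acts on the measured output rather than on the error, to avoid differentiating the reference step — a choice that leaves the characteristic equation, and hence stability, unchanged.

```python
import math

# Forward Euler simulation (sketch) of Example 3.11:
#   plant  T y' = -y + k u(t - L),  k = 1, T = 2, L = 4
#   PID with derivative on the measurement: u = kp e + ki int(e) - kd y'
def step_response(kp, ki, kd, k=1.0, T=2.0, L=4.0, t_end=200.0, dt=0.01):
    n_delay = int(round(L / dt))
    u_buf = [0.0] * n_delay          # history of u, initially zero
    y, integ, y_prev = 0.0, 0.0, 0.0
    ys = []
    for _ in range(int(t_end / dt)):
        e = 1.0 - y                  # unit step reference
        integ += e * dt
        dy = (y - y_prev) / dt       # backward-difference derivative of y
        y_prev = y
        u = kp * e + ki * integ - kd * dy
        u_delayed = u_buf.pop(0)     # u(t - L)
        u_buf.append(u)
        y += dt * (-y + k * u_delayed) / T
        ys.append(y)
    return ys

ys = step_response(kp=0.3444, ki=0.1667, kd=0.8333)
print(round(ys[-1], 3))
```

With the gains selected above, the simulated output settles near 1, consistent with the stability regions and specifications discussed here.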
Figure 3.25 Time response of the closed-loop system for Example 3.11, comparing the design obtained using the entire stabilizing set with the Cohen-Coon and Ziegler-Nichols designs.
The figure also shows the responses of the closed-loop systems for the case of a PID controller designed using the Cohen-Coon method (kp = 0.9180, ki = 0.1456, kd = 0.9845) and the Ziegler-Nichols method (kp = 0.6, ki = 0.075, kd = 1.2). Notice that in these cases also the system is stable and achieves setpoint following. However, the responses are much more oscillatory.
Although the design presented above is essentially an optimization search by gridding, the fact that the algorithm of this section can be used to confine the search to the stabilizing set makes the design problem orders of magnitude easier.
3.6.3
Open-Loop Unstable Plant
We consider here the PID stabilization of an unstable first order plant with delay. In this case, T < 0 in (3.33). Furthermore, let us assume that k > 0 and L > 0. The same procedure used in the last section will be used here to solve the problem of stabilizing an open-loop unstable plant using a PID controller. In other words, we will find the set of all stabilizing (kp, ki, kd) values by repeatedly using Theorem 3.3. Before stating the main result of this section, we will present a few preliminary results leading up to it. As the next lemma shows, the range of kp values for which stabilization is possible can be determined exactly.

LEMMA 3.6 For |T/L| > 0.5, the imaginary part of δ*(jω) has only simple real roots if and only if
(1/k)[(T/L) α1 sin(α1) − cos(α1)] < kp < −1/k   (3.57)
where α1 is the solution of the equation
tan(α) = −Tα/(T + L)
in the interval (0, π). In the special case of |T/L| = 1, we have α1 = π/2. For |T/L| ≤ 0.5, the roots of the imaginary part of δ*(jω) are not all real.
PROOF As in the proof of Lemma 3.1, we make use of the change of variables z = Lω. With this change of variables, the real and imaginary parts of δ*(jω) can be expressed as
δr(z) = kki − (kkd/L²)z² − (1/L)z sin(z) − (T/L²)z² cos(z)   (3.58)
δi(z) = (z/L)[kkp + cos(z) − (T/L)z sin(z)].   (3.59)
We can compute the roots of the imaginary part from (3.59), that is, δi(z) = 0. This gives us the following equation:
(z/L)[kkp + cos(z) − (T/L)z sin(z)] = 0.
Then either z = 0 or
kkp + cos(z) − (T/L)z sin(z) = 0.   (3.60)
From this expression, one root of the imaginary part is zo = 0. As in Section 3.6.2, we will plot the terms involved in (3.60) and graphically examine the nature of the solution. Let us denote the positive real roots of (3.60) by zj, j = 1, 2, ..., arranged in increasing order of magnitude. There are now four different cases to consider.

Case 1: kp < −1/k. In this case, we sketch [kkp + cos(z)]/sin(z) and (T/L)z. It can be shown (see Lemma 3.7) that if T/L ≥ −0.5, then the curves [kkp + cos(z)]/sin(z) and (T/L)z do not intersect at all in the interval (0, π), regardless of the value of kp in (−∞, −1/k). As will be shortly shown, in such a case the roots of the imaginary part are not all real. Accordingly, let us focus on the case T/L < −0.5, in which case there are again two possibilities, depending on the value of kp. These two possibilities are shown in Figures 3.26(a) and 3.26(b). The plot in Figure 3.26(a) corresponds to the case where kl < kp < −1/k, where kl is the smallest number such that the plot of [kkp + cos(z)]/sin(z) intersects the line (T/L)z twice in the interval (0, π). The plot in Figure 3.26(b) corresponds to the case where kp ≤ kl and the plot of [kkp + cos(z)]/sin(z) does not intersect the line (T/L)z twice in the interval (0, π).

Figure 3.26 Plot of the terms involved in (3.60) for kp < −1/k.

Case 2: −1/k < kp < 1/k. As in the previous case, we graph [kkp + cos(z)]/sin(z) and (T/L)z to obtain the plots shown in Figure 3.27.

Figure 3.27 Plot of the terms involved in (3.60) for −1/k < kp < 1/k.

Case 3: kp = 1/k. In this case, we sketch kkp + cos(z) and (T/L)z sin(z) to obtain the plots shown in Figure 3.28.

Figure 3.28 Plot of the terms involved in (3.60) for kp = 1/k.

Case 4: 1/k < kp. In this case, we sketch [kkp + cos(z)]/sin(z) and (T/L)z to obtain the plots shown in Figure 3.29.

Figure 3.29 Plot of the terms involved in (3.60) for 1/k < kp.

We will now use Theorem 3.4 to check if δi(z) has only real roots. Substituting s1 = Ls in the expression for δ*(s), we see that for the new quasi-polynomial in s1, M = 2 and N = 1. Next, we choose η = π/4 to satisfy the requirement that sin(η) ≠ 0. From Figure 3.26(a), we see that in this case, that is, for kl < kp < −1/k, δi(z) has three real roots in the interval [0, 2π − π/4] = [0, 7π/4], including a root at the origin. Since δi(z) is an odd function of z, it follows that in the interval [−7π/4, 7π/4], δi(z) will have five real roots. Also observe from Figure 3.26(a) that δi(z) has a real root in the interval (7π/4, 9π/4]. Thus, δi(z) has 4N + M = 6 real roots in the interval [−2π + π/4, 2π + π/4]. Moreover, it can be shown using Figure 3.26(a) that δi(z) has two real roots in each of the intervals [2lπ + π/4, 2(l + 1)π + π/4] and
[−2(l + 1)π + π/4, −2lπ + π/4] for l = 1, 2, .... It follows that δi(z) has exactly 4lN + M real roots in [−2lπ + π/4, 2lπ + π/4] for kl < kp < −1/k. Hence, from Theorem 3.4, we conclude that for kl < kp < −1/k, δi(z) has only real roots. Also note that the cases kp ≤ kl and kp > −1/k, corresponding to Figures 3.26(b) through 3.29, respectively, do not merit any further consideration since, using Theorem 3.4, we can easily argue that in these cases not all the roots of δi(z) will be real. The same argument can also be used in conjunction with Lemma 3.7 to conclude that for all the roots of δi(z) to be real, the condition T/L < −0.5 must necessarily be satisfied.

We now need to determine the lower bound kl on the allowable value for kp. From the definition of kl, it follows that for kp = kl the plot of [kkl + cos(z)]/sin(z) intersects the line (T/L)z only once in the interval (0, π). Let us denote by α1 the value of z for which this intersection occurs. Then we know that for z = α1 ∈ (0, π) we have
[kkl + cos(α1)]/sin(α1) = (T/L)α1.   (3.61)
Now, at z = α1, the line (T/L)z is tangent to the plot of (kkl + cos(z))/sin(z). Thus,
$$\left.\frac{d}{dz}\left[\frac{k k_l + \cos(z)}{\sin(z)}\right]\right|_{z=\alpha_1} = \frac{T}{L} \;\Longrightarrow\; 1 + k k_l \cos(\alpha_1) = -\frac{T}{L}\sin^2(\alpha_1). \qquad (3.62)$$
Eliminating kkl between (3.61) and (3.62), we conclude that α1 ∈ (0, π) can be obtained as a solution of the following equation:
$$\left(\frac{T}{L} + 1\right)\sin(\alpha_1) = -\frac{T}{L}\,\alpha_1\cos(\alpha_1).$$
If T/L ≠ −1, then this expression can be rewritten as follows:
$$\tan(\alpha_1) = -\frac{T}{T+L}\,\alpha_1. \qquad (3.63)$$
If T/L = −1, then α1 = π/2. In either case, the parameter kl is given, from (3.61), by
$$k_l = \frac{1}{k}\left[\frac{T}{L}\,\alpha_1\sin(\alpha_1) - \cos(\alpha_1)\right]$$
and this completes the proof.

In Lemma 3.6, it was stated that if T/L ≥ −0.5, then the roots of the imaginary part δi(z) are not all real. The following lemma forms the basis of this claim. The proof is given in Section 3.9.
LEMMA 3.7 If −0.5 ≤ T/L < 0, then the curves (kkp + cos(z))/sin(z) and (T/L)z do not intersect in the interval (0, π), regardless of the value of kp in (−∞, −1/k).

For z ≠ 0, the real part δr(z) can be rewritten as
$$\delta_r(z) = \frac{k z^2}{L^2}\left[-k_d + m(z)\,k_i + b(z)\right] \qquad (3.64)$$
where
$$m(z) \stackrel{\Delta}{=} \frac{L^2}{z^2}, \qquad b(z) \stackrel{\Delta}{=} -\frac{L}{kz}\left[\sin(z) + \frac{T}{L}\,z\cos(z)\right].$$
LEMMA 3.8 For each value of kp in the range given by (3.57), the necessary and sufficient conditions on ki and kd for the roots of δr(z) and δi(z) to interlace are the following infinite set of inequalities:

ki < 0
kd < m1 ki + b1
kd > m2 ki + b2
kd < m3 ki + b3
kd > m4 ki + b4
⋮          (3.65)

where the parameters mj and bj for j = 1, 2, 3, ... are given by
$$m_j \stackrel{\Delta}{=} m(z_j) \qquad (3.66)$$
$$b_j \stackrel{\Delta}{=} b(z_j). \qquad (3.67)$$
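Numerically, the parameters mj, bj (and the wj introduced in (3.70) below) are easy to tabulate once the positive roots zj of (3.60) are located. The sketch below uses the plant data of Example 3.12 later in this chapter (k = 1, T = −4, L = 0.8) with an arbitrarily chosen kp = −2 inside (kl, −1/k), and checks the defining property of wj, namely that the point (wj, T/k) lies on the boundary line kd = mj ki + bj:

```python
import math

k, T, L, kp = 1.0, -4.0, 0.8, -2.0   # Example 3.12 data; kp chosen in (k_l, -1/k)

def f(z):
    # positive roots of f are the z_j of (3.60): k*kp + cos z = (T/L) z sin z
    return k * kp + math.cos(z) - (T / L) * z * math.sin(z)

def m(z):   # m(z) = L^2 / z^2
    return L**2 / z**2

def b(z):   # b(z) = -(L/(k z)) [sin z + (T/L) z cos z]
    return -(L / (k * z)) * (math.sin(z) + (T / L) * z * math.cos(z))

def w(z):   # w_j as in (3.70)
    return (z / (k * L)) * (math.sin(z) + (T / L) * z * (math.cos(z) + 1.0))

# locate the first two positive roots by scanning for sign changes, then bisecting
roots, prev = [], 0.05
while len(roots) < 2:
    cur = prev + 0.01
    if f(prev) * f(cur) < 0:
        lo, hi = prev, cur
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            (lo, hi) = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
        roots.append(0.5 * (lo + hi))
    prev = cur

for zj in roots:
    # (w_j, T/k) must lie on the line kd = m_j * ki + b_j
    assert abs(m(zj) * w(zj) + b(zj) - T / k) < 1e-9
```

The final assertion holds identically in exact arithmetic, since wj is by definition the ki-coordinate where the line kd = mj ki + bj meets kd = T/k.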
PROOF From condition 1 of Theorem 3.3, the roots of δr(z) and δi(z) have to interlace in order for the quasi-polynomial δ*(s) to be stable. Thus, we evaluate the real part δr(z) at the roots of the imaginary part δi(z). For zo = 0, using (3.58) we obtain
$$\delta_r(z_o) = k k_i. \qquad (3.68)$$
For zj, where j = 1, 2, 3, ..., using (3.64) we obtain
$$\delta_r(z_j) = \frac{k z_j^2}{L^2}\left[-k_d + m(z_j)\,k_i + b(z_j)\right]. \qquad (3.69)$$
Interlacing the roots of δr(z) and δi(z) is equivalent to δr(zo) < 0 (since Lemma 3.6 implies that kp is necessarily less than −1/k, which, in view of the stability requirements (3.35) for the delay-free case, implies that ki < 0), δr(z1) > 0, δr(z2) < 0, δr(z3) > 0, and so on. Using this fact and (3.68)–(3.69) we obtain

δr(zo) < 0 ⇒ ki < 0
δr(z1) > 0 ⇒ kd < m1 ki + b1
δr(z2) < 0 ⇒ kd > m2 ki + b2
δr(z3) > 0 ⇒ kd < m3 ki + b3
δr(z4) < 0 ⇒ kd > m4 ki + b4
⋮

Thus, intersecting all these regions in the ki–kd space, we obtain the set of (ki, kd) values for which the roots of δr(z) and δi(z) interlace for a given fixed value of kp. Notice that all these regions are half planes whose boundaries are lines with positive slopes mj. This completes the proof of the lemma.

The inequalities given by (3.65) represent half planes in the space of ki and kd. Their boundaries are given by the lines kd = mj ki + bj for j = 1, 2, 3, .... The focus of the remainder of this section will be to determine the intersection of this countably infinite number of half planes in a computationally tractable way. To this end, let us denote by wj the ki-coordinate of the intersection of the line kd = mj ki + bj, j = 1, 2, 3, ..., with the line kd = T/k. Using (3.66) and (3.67), it can be shown that
$$w_j = \frac{z_j}{kL}\left[\sin(z_j) + \frac{T}{L}\,z_j\big(\cos(z_j) + 1\big)\right]. \qquad (3.70)$$
We now state a lemma that describes the behavior of the parameters bj defined in (3.67) and wj defined in (3.70) for kp ∈ (kl, −1/k). The proof of this lemma is long and technical and is relegated to Section 3.9.

LEMMA 3.9 If
$$\frac{1}{k}\left[\frac{T}{L}\,\alpha_1\sin(\alpha_1) - \cos(\alpha_1)\right] < k_p < -\frac{1}{k}$$
where α1 is the solution of the equation tan(α) = −Tα/(T+L) in the interval (0, π), or α1 = π/2 if |T/L| = 1, then

(i) bj < bj+2 < −T/k for odd values of j;
(ii) bj > bj+2 > T/k for even values of j;
(iii) wj < wj+2 < 0 for even values of j;
(iv) b1 > b2, w1 < w2.

We are now ready to state the main result of this section.

THEOREM 3.6 A necessary and sufficient condition for the existence of a stabilizing PID controller for the open-loop unstable plant (3.33) is |T/L| > 0.5. If this condition is satisfied, then the range of kp values for which a given open-loop unstable plant, with transfer function G(s) as in (3.33), can be stabilized using a PID controller is given by
$$\frac{1}{k}\left[\frac{T}{L}\,\alpha_1\sin(\alpha_1) - \cos(\alpha_1)\right] < k_p < -\frac{1}{k} \qquad (3.71)$$
where α1 is the solution of the equation
$$\tan(\alpha) = -\frac{T}{T+L}\,\alpha \qquad (3.72)$$
in the interval (0, π). In the special case of |T/L| = 1, we have α1 = π/2. For kp values outside this range, there are no stabilizing PID controllers. Moreover, the complete stabilizing region is given by (see Figure 3.30): for each kp in (kl := (1/k)[(T/L)α1 sin(α1) − cos(α1)], −1/k), the cross-section of the stabilizing region in the (ki, kd) space is the quadrilateral Q. The parameters mj, bj and wj, j = 1, 2, necessary for determining the boundary of Q are as defined in the statement of Theorem 3.5.

PROOF To ensure the stability of the quasi-polynomial δ*(s), we need to check the two conditions given in Theorem 3.3.

Step 1. We first check condition 2 of Theorem 3.3:
$$E(\omega_o) = \delta_i'(\omega_o)\delta_r(\omega_o) - \delta_i(\omega_o)\delta_r'(\omega_o) > 0$$
Figure 3.30 The stabilizing region of (ki, kd) for kl < kp < −1/k: the quadrilateral Q, with boundary lines kd = m1 ki + b1 and kd = m2 ki + b2 and the points (w1, T/k) and (w2, T/k) marked.
for some ωo in (−∞, ∞). Again we take ωo = 0, so zo = 0. Thus, δi(zo) = 0 and δr(zo) = kki, and we have
$$E(z_o) = \frac{k k_p + 1}{L}\,(k k_i).$$
Recall k > 0 and L > 0. Thus, if we pick

ki > 0 and kp > −1/k, or ki < 0 and kp < −1/k,

we have E(zo) > 0. Notice that from these conditions we can safely discard kp = −1/k from the set of kp values for which a stabilizing PID controller can be found.

Step 2. The second step is to check condition 1 of Theorem 3.3: δr(z) and δi(z) have only simple real roots and they interlace. From Lemma 3.6 we know that for T/L < −0.5, the roots of δi(z) are all real if and only if the parameter kp is inside the range
$$\left(\frac{1}{k}\left[\frac{T}{L}\,\alpha_1\sin(\alpha_1) - \cos(\alpha_1)\right],\; -\frac{1}{k}\right),$$
where α1 is the solution of the equation tan(α) = −Tα/(T+L) in the interval (0, π). Since for T/L ≥ −0.5 the roots of the imaginary part are not all real, we conclude that for the existence of a stabilizing PID controller, the condition T/L < −0.5 must necessarily be satisfied. From Lemma 3.8, interlacing the roots of δr(z) and δi(z) leads to the following set of inequalities:

ki < 0
kd < m1 ki + b1
kd > m2 ki + b2
kd < m3 ki + b3
kd > m4 ki + b4
⋮

We now show that for kl < kp < −1/k all these regions have a nonempty intersection. Notice first that the slopes mj of the boundary lines of these regions decrease with zj. Moreover, in the limit, we have
$$\lim_{j\to\infty} m_j = 0.$$
Using this fact and Lemma 3.9, we get the intersection shown in Figure 3.30. Finally, we note that for values of kp in the range (kl, −1/k), the interlacing property and the fact that all the roots of δi(z) are real can be used in Theorem 3.4 to guarantee that δr(z) also has only real roots. Thus, for values of kp inside this range there is a solution to the PID stabilization problem for a first order open-loop unstable plant with time delay.

REMARK 3.4 It is not difficult to see that when T/L ≤ −1 there is always a solution to (3.72) in the interval (0, π). However, when T/L > −1 we have two situations to consider. These situations are illustrated in Figure 3.31, where the terms tan(α) and −Tα/(T+L) involved in (3.72) are plotted. Figure 3.31(a) corresponds to the case −1 < T/L < −0.5, and Figure 3.31(b) corresponds to the case −0.5 ≤ T/L < 0. As can be seen from Figure 3.31(a), there is always a solution to (3.72) in the interval (0, π). However, in the case of Figure 3.31(b), there is no solution in this open interval. Thus, for this situation, the parameter α1 does not exist and neither does kl. However, as pointed out earlier, this corresponds to the case where no stabilizing PID controller exists.

A similar algorithm to the one presented in the previous section can now be developed to solve the PID stabilization problem of an open-loop unstable
Figure 3.31 Cases involved in determining the parameter α1 when T/L > −1: the terms tan(α) and −Tα/(T+L) are plotted in (a) for −1 < T/L < −0.5 and in (b) for −0.5 ≤ T/L < 0.
plant. We only need to sweep the parameter kp over the interval proposed by Theorem 3.6 and use Figure 3.30 to find the stabilizing region of (ki, kd) values at each admissible value of kp.

Example 3.12 Consider a process described by the differential equation
$$\frac{dy(t)}{dt} = 0.25\,y(t) - 0.25\,u(t - 0.8).$$
This process can be described by the transfer function G(s) in (3.56) with the following parameters: k = 1, T = −4, and L = 0.8 seconds. Since the system is open-loop unstable, we use Theorem 3.6 to find the range of kp values for which a solution to the PID stabilization problem exists. Since |T/L| = 5 > 0.5, we can proceed to compute α1 ∈ (0, π) satisfying the following equation:
$$\tan(\alpha) = -1.25\,\alpha.$$
Solving this equation we obtain α1 = 1.9586. Thus, from (3.71) the range of kp values is given by
$$-8.6876 < k_p < -1.$$
We now sweep over the above range of kp values and determine the stabilizing set of (ki , kd ) values at each stage. These regions are sketched in Figure 3.32.
Figure 3.32 The stabilizing region of (kp , ki , kd ) values for the PID controller in Example 3.12.
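The numbers in Example 3.12 can be cross-checked with a few lines of standard-library Python. This is only a sketch; the bisection bracket (π/2, π) is justified by Remark 3.4, since here T/L = −5 ≤ −1 guarantees a solution of (3.72) in (0, π):

```python
import math

k, T, L = 1.0, -4.0, 0.8          # plant data of Example 3.12

# alpha_1 solves tan(alpha) = -T/(T+L)*alpha, i.e. tan(alpha) = -1.25*alpha here
g = lambda a: math.tan(a) + (T / (T + L)) * a
lo, hi = math.pi / 2 + 1e-9, math.pi - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) > 0:
        lo = mid
    else:
        hi = mid
alpha1 = 0.5 * (lo + hi)

# lower bound k_l of the stabilizing kp range, as in (3.71)
k_l = (1.0 / k) * ((T / L) * alpha1 * math.sin(alpha1) - math.cos(alpha1))

print(alpha1, k_l)   # roughly 1.9586 and -8.6876
```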
3.7 PID Stabilization of Arbitrary LTI Systems with a Single Time Delay
In the last section, we presented some neat and quasi-closed form solutions for the PID stabilization problem for a first order plant with delay. In this section, we turn our attention to the PID stabilization of an arbitrary order LTI plant cascaded with a delay. For this general case, we will develop an approach based on the Nyquist criterion and Boundary Crossing methods to find the PID stabilizing set.
3.7.1 Connection between Pontryagin’s Theory and the Nyquist Criterion
In 1946, Tsypkin proposed a method to extend the Nyquist criterion to deal with time delay systems. In this section, we show that, unless care is taken, this may lead to misleading conclusions. This allows us to connect the Nyquist approach with Pontryagin’s results.

Example 3.13 Given a system with nominal open-loop transfer function
$$G(s) = \frac{2s + 1}{s + 2}, \qquad (3.73)$$
we can draw its Nyquist plot, as shown in Figure 3.33. The closed-loop system is stable with unity negative feedback, and the plot intersects the unit circle at ω0 = 1. Thus, from the graph, using Tsypkin’s result, the closed-loop system apparently should tolerate a time delay up to L0 = (π + arg[G(jω0)])/ω0 = 3.7851. However, when we add a 1 second delay to the nominal transfer function, the closed-loop system becomes unstable, as shown in Figure 3.34.
Figure 3.33 Nyquist plot of a simple system.
Figure 3.34 Simulation of the system with 1 sec delay.
In this section, we use Pontryagin’s theorems to derive conditions under which a modified generalized Nyquist Criterion can be used to correctly analyze the stability of a system. Let h(z, t) be a polynomial in the two variables z and t with constant coefficients,
$$h(z, t) = \sum_{m,n} a_{mn}\, z^m t^n. \qquad (3.74)$$
The term a_{rs} z^r t^s is called the principal term of the polynomial if a_{rs} ≠ 0 and the exponents r and s each attain their maximum; that is, for each other term a_{mn} z^m t^n in (3.74) with a_{mn} ≠ 0, either r > m, s > n, or r = m, s > n, or r > m, s = n. We can also write (3.74) as
$$h(z, t) = \chi_r^{(s)}(t)\,z^r + \chi_{r-1}^{(s)}(t)\,z^{r-1} + \cdots + \chi_1^{(s)}(t)\,z + \chi_0^{(s)}(t), \qquad (3.75)$$
where the $\chi_j^{(s)}(t)$, j = 0, 1, 2, ..., r, are polynomials in t with degree at most equal to s. We will use the following result of Pontryagin’s to clarify conditions under which the Nyquist Criterion can be used to study the stability of systems with time delay.
THEOREM 3.7 Let H(z) = h(z, e^z), where h(z, t) is a polynomial with nonzero principal term a_{rs} z^r t^s. If the function $\chi_r^{(s)}(e^z)$, which denotes the coefficient of z^r, has roots in the open right-half plane, then the function H(z) has an unbounded set of zeros in the open right-half plane. If all the zeros of the function $\chi_r^{(s)}(e^z)$ lie in the open left-half plane, then the function H(z) has no more than a bounded set of zeros in the open right-half plane.

We note that Theorem 3.7 does not address the situation when $\chi_r^{(s)}(e^z)$ has zero(s) on the imaginary axis. We will look into this more deeply. Let us look at the distribution of the zeros of H(z) as |z| → ∞. As |z| → ∞, H(z) = 0 can be approximated by $\chi_r^{(s)}(e^z) = 0$. That means the roots of $\chi_r^{(s)}(e^z) = 0$ determine the zeros of H(z) at infinity. Those roots form certain chains, and they go deep into the left-half plane, the right-half plane, or go to infinity within strips with finite real parts. Thus, if $\chi_r^{(s)}(e^z)$ has zeros on the imaginary axis, H(z) has root chains that approach the imaginary axis at infinity. The following theorem gives us the conditions which should be satisfied when using the Nyquist Criterion with the conventional Nyquist contour (the contour consisting of the imaginary axis and a semicircle of arbitrarily large radius in the right-half of the complex plane).

THEOREM 3.8 (Connection between Pontryagin’s Results and the Nyquist Criterion) Suppose we are given a unity feedback system with an open-loop transfer function
$$G(s) = G_0(s)e^{-Ls} = \frac{N(s)}{D(s)}\,e^{-Ls} \qquad (3.76)$$
where N(s) and D(s) are real polynomials of degree m and n respectively and L is a fixed delay. Then we have the following conclusions:

1. If n < m, or n = m and |bn/an| ≥ 1, where an, bn are the leading coefficients of D(s) and N(s) respectively, the Nyquist Criterion is not applicable and the system is unstable according to Pontryagin’s theorems.
2. If n > m, or n = m and |bn/an| < 1, the Nyquist Criterion is applicable and we can use it to check the stability of the closed-loop system.

PROOF
The characteristic equation of the closed-loop system is
$$\delta(s) = D(s) + N(s)e^{-Ls}. \qquad (3.77)$$
Multiply (3.77) by e^{Ls} to obtain
$$\delta^*(s) = \delta(s)e^{Ls} = D(s)e^{Ls} + N(s), \qquad (3.78)$$
and let z = Ls to obtain
$$\delta^*(z) = D_z(z)e^z + N_z(z). \qquad (3.79)$$
Note that neither of the above operations affects the number of RHP roots of the original equation for L > 0. With
$$D(s) = a_n s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0, \quad N(s) = b_m s^m + b_{m-1}s^{m-1} + \cdots + b_1 s + b_0, \qquad (3.80)$$
we have
$$D_z(z) = a_n L^{-n} z^n + a_{n-1}L^{-n+1}z^{n-1} + \cdots + a_1 L^{-1} z + a_0, \quad N_z(z) = b_m L^{-m} z^m + b_{m-1}L^{-m+1}z^{m-1} + \cdots + b_1 L^{-1} z + b_0. \qquad (3.81)$$
Now we discuss the possible stability of (3.79) in the following three cases.

1. deg[Dz(z)] < deg[Nz(z)], that is, n < m. In this case, δ*(z) does not have a principal term. According to Theorem 3.1, it has an unbounded number of RHP roots. The Nyquist Criterion is inapplicable, but we already know that δ*(z) is unstable.

2. deg[Dz(z)] > deg[Nz(z)], that is, n > m. δ*(z) has the principal term a_n L^{-n} z^n e^z. The coefficient of z^n is
$$\chi_n^{(1)}(e^z) = \frac{a_n}{L^n}\,e^z, \qquad (3.82)$$
and the solution of $\chi_n^{(1)}(e^z) = 0$ is z = −∞, which lies in the LHP. So, by Theorem 3.7, δ*(z) can only have a bounded set of RHP zeros. This bounded set is also a finite set, and the Nyquist Criterion can be used for stability analysis.

3. deg[Dz(z)] = deg[Nz(z)], that is, n = m. δ*(z) has the principal term a_n L^{-n} z^n e^z in this case too. However, the coefficient of z^n is
$$\chi_n^{(1)}(e^z) = \frac{a_n}{L^n}\,e^z + \frac{b_n}{L^n}. \qquad (3.83)$$
To make $\chi_n^{(1)}(e^z) = 0$, we must have e^z = −bn/an. Let z = x + jy with x, y ∈ R; then e^x e^{jy} = −bn/an. The solutions are:

• Case 1: bn/an > 0. Then e^x = |bn/an|, e^{jy} = −1, so that
$$x = \ln\left|\frac{b_n}{a_n}\right|, \quad y = 2k\pi + \pi, \ k \in \mathbb{Z}. \qquad (3.84)$$
• Case 2: bn/an < 0. Then e^x = |bn/an|, e^{jy} = 1, so that
$$x = \ln\left|\frac{b_n}{a_n}\right|, \quad y = 2k\pi, \ k \in \mathbb{Z}. \qquad (3.85)$$

Depending on the value of |bn/an|, we arrive at different conclusions:

(a) If |bn/an| > 1, then x > 0; that is, $\chi_n^{(1)}$ has RHP zeros. So δ*(z) has an unbounded set of RHP zeros. Again, the Nyquist Criterion is inapplicable, but the closed-loop system is unstable.

(b) If |bn/an| < 1, then x < 0; that is, $\chi_n^{(1)}$ has only LHP zeros. So δ*(z) has no more than a bounded and finite set of RHP zeros, and closed-loop stability is determinable from the Nyquist Criterion.
(c) If |bn/an| = 1, then x = 0; that is, $\chi_n^{(1)}$ has zeros on the imaginary axis. So δ*(z) has root chains approaching the imaginary axis, and it is unstable. The Nyquist Criterion is inapplicable in this case.

Since δ*(z) has the same number of RHP zeros as δ(s) for fixed L > 0, from the above analysis we can see that in cases (1), (3a) and (3c), δ(s) is unstable, while in cases (2) and (3b), δ(s) has no more than a bounded set of zeros in the RHP, hence it is possibly stable. So, only in cases (2) and (3b) can the Nyquist Criterion be used to ascertain possible stability. Thus, Tsypkin’s results and the proof of the Generalized Nyquist Criterion are valid only for these two cases.

REMARK 3.5 It is appropriate to point out that most likely Tsypkin assumed the plant to be strictly proper, though he did not state it explicitly in the literature. Attaching a PID controller to a proper or strictly proper plant opens up the very real possibility of ending up with an improper or a proper open-loop transfer function. This is the reason that the above investigation had to be undertaken. The above clarification sets the stage for determining all stabilizing P, PI and PID controllers for plants with time delay.
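The degree and leading-coefficient test of Theorem 3.8 is mechanical enough to automate. A minimal sketch (the function name is ours, not from the text), applied to the plant of Example 3.13:

```python
def nyquist_applicability(num_deg, den_deg, bn, an):
    """Classify G0(s) = N(s)/D(s) per Theorem 3.8: can the Nyquist
    Criterion be applied once a delay e^{-Ls} (L > 0) is attached?"""
    if num_deg > den_deg or (num_deg == den_deg and abs(bn / an) >= 1):
        return "inapplicable: unstable for any delay L > 0"
    return "applicable"

# Example 3.13: G0(s) = (2s+1)/(s+2), so n = m = 1 and |bn/an| = 2 >= 1.
# This is why a 1 second delay destabilizes the loop despite the naive
# delay-margin estimate of 3.7851 seconds.
print(nyquist_applicability(1, 1, 2.0, 1.0))
```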
3.7.2 Problem Formulation and Solution Approach
Consider a given LTI plant with time delay L,
$$P(s) = P_0(s)e^{-Ls} = \frac{N(s)}{D(s)}\,e^{-Ls} \qquad (3.86)$$
and a controller with a unity feedback fixed-structure, C(s, k), where k is the vector of adjustable parameters of the controller. The problem of interest is to find the complete set of k’s which can stabilize the system for any L ∈ [0, L0 ].
The approach developed in this chapter to solve this problem involves the following steps:

1. Find the complete set of k’s which stabilize the delay-free plant P0(s) and denote this set by S0.
2. Define the set SN, which is the set of k’s such that C(s, k)P0(s) is an improper transfer function or
$$\lim_{s\to\infty} |C(s, k)P_0(s)| \ge 1.$$
Note that the elements in SN make the closed-loop system unstable once the delay is introduced (Theorem 3.8). Exclude SN from S0 and denote the new set by S1, that is, S1 = S0\SN.
3. Compute the set SL:
$$S_L = \{k \mid k \notin S_N \text{ and } \exists\, L \in [0, L_0],\ \omega \in \mathbb{R}, \text{ s.t. } C(j\omega)P_0(j\omega)e^{-jL\omega} = -1\}.$$
From this definition, SL is the set of k’s which make C(s, k)P(s) have a minimal destabilizing delay that is less than or equal to L0.
4. The set $S_R \stackrel{\Delta}{=} S_1\backslash S_L$ is the solution to our problem.

THEOREM 3.9 The set of controllers C(s, k) denoted by SR is the complete set of controllers in the unity feedback configuration that stabilize the plant P(s) with delay L from 0 up to L0.

PROOF For any k0 ∈ SR, since SR ⊆ S1 ⊆ S0, we have k0 ∈ S0; that is, there is no RHP pole when the controller C(s, k0) is applied to the plant P(s) with L = 0. Since k0 ∉ SN, as L increases there is no unbounded RHP pole (Theorem 3.8), and the only possible RHP poles are poles that come from the LHP by crossing the imaginary axis. However, since k0 ∉ SL, we know that there are no boundary-crossing poles. So the system does not have RHP poles for L ranging from 0 to L0 and is, therefore, stable for those L’s.

For any k1 ∉ SR, it must fall into one or more of the following categories:

1. k1 ∉ S0, which means the controller cannot even stabilize the delay-free plant (L = 0).
2. k1 ∈ SN; the closed-loop system is unstable with any amount of delay (Theorem 3.8).
3. k1 ∈ SL , some poles are on the imaginary axis for certain L1 ≤ L0 . These poles will either go into RHP or return to LHP. However, the stability at that L1 has already been destroyed. We can see from the above analysis that SR is exactly the complete set of stabilizing controller parameters that we are looking for. In this section, we apply this general method to the special case of PID controllers to find all the PID controllers which can stabilize a given plant with time delay up to a certain value.
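For a scalar gain, the bookkeeping SR = (S0\SN)\SL reduces to subtracting unions of intervals on the real line. A small sketch (endpoint open/closed distinctions are ignored; the numerical endpoints are those of Example 3.14 below):

```python
def subtract(a, b):
    """Set difference of unions of intervals, ignoring whether endpoints
    are open or closed. a, b: lists of (lo, hi) pairs with lo < hi."""
    out = a
    for blo, bhi in b:
        nxt = []
        for lo, hi in out:
            if bhi <= lo or blo >= hi:      # no overlap: keep unchanged
                nxt.append((lo, hi))
            else:                            # clip away the overlapping part
                if lo < blo:
                    nxt.append((lo, blo))
                if bhi < hi:
                    nxt.append((bhi, hi))
        out = nxt
    return out

INF = float("inf")
S0 = [(-0.4093, 1.0)]                       # delay-free stabilizing gains
SL = [(0.4473, INF),                        # boundary-crossing gains
      (-0.6025, -0.4135),
      (-INF, -1.3691)]
print(subtract(S0, SL))                     # -> [(-0.4093, 0.4473)]
```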
3.7.3 Proportional Controllers
Let us consider using proportional controllers to stabilize an arbitrary plant with time delay. We will then extend the result to PI and PID controllers. For a proportional controller, we have
$$C(s) = k_p \qquad (3.87)$$
and the plant is
$$P(s) = P_0(s)e^{-Ls} = \frac{N(s)}{D(s)}\,e^{-Ls}. \qquad (3.88)$$
Our objective is to find all the kp’s which stabilize P(s) with time delay L ∈ [0, L0]. To implement the method proposed in Section 3.7.2, the key is to find SL. The Nyquist curve of the system crossing (−1, 0) is equivalent to C(jω)P0(jω)e^{−jLω} = −1 for certain L and ω. This, in turn, is equivalent to the following two conditions:
$$\arg[k_p P_0(j\omega)] - L\omega = 2h\pi - \pi, \quad h \in \mathbb{Z}, \qquad (3.89)$$
$$|k_p P_0(j\omega)| = 1. \qquad (3.90)$$
Here the argument function arg(·) ∈ [−π, π) by convention. Also, we only need to consider ω > 0 since the Nyquist plot for ω < 0 is symmetric. We are only interested in the minimal positive L which satisfies (3.89), so the phase condition (3.89) can be rewritten as
$$\arg[k_p P_0(j\omega)] - L\omega = -\pi. \qquad (3.91)$$
Note that the same reasoning also applies to the PI and PID cases, to be considered later. The two conditions above yield
$$L(\omega, k_p) = \frac{\arg[k_p P_0(j\omega)] + \pi}{\omega}, \qquad (3.92)$$
$$k_p(\omega) = \pm\frac{1}{|P_0(j\omega)|}. \qquad (3.93)$$
For kp > 0, we have
$$L(\omega, k_p) = L(\omega) = \frac{\arg[P_0(j\omega)] + \pi}{\omega}. \qquad (3.94)$$
Solve L(ω) ≤ L0 to get a set of ω’s, say Ω+. From the magnitude condition (3.90), we can get the set of positive kp’s corresponding to Ω+; call this set $S_L^+$. This set consists of all the positive kp’s that make the system have poles on the imaginary axis for some L ≤ L0. Similarly, for kp < 0, we will have a set Ω− and a corresponding set $S_L^-$. The combination of $S_L^+$ and $S_L^-$ is the complete set SL, that is, $S_L = S_L^+ \cup S_L^-$. The above discussion leads to the following steps for computing SR.

1. Compute the delay-free stabilizing kp set, S0, say, by the Routh–Hurwitz Criterion.
2. Find SN:
   • If deg[N(s)] > deg[D(s)], SN = R, which means SR = ∅.
   • If deg[N(s)] < deg[D(s)], SN = ∅.
   • If deg[N(s)] = deg[D(s)], SN = {kp | |kp| ≥ |an/bn|}, where an, bn are the leading coefficients of D(s) and N(s) respectively.
3. Compute S1 = S0\SN.
4. Compute SL according to the analysis in this section.
5. Compute SR = S1\SL.

We next present a numerical example to illustrate the above steps.

Example 3.14 Find all proportional controllers that stabilize the plant
$$P(s) = \frac{s^2 + 3s - 2}{s^3 + 2s^2 + 3s + 2}\,e^{-Ls} \qquad (3.95)$$
with delay up to L0 = 1.8. For the delay free plant, the stabilizing kp range S0 = (−0.4093, 1). Since deg[N (s)] = 2 < 3 = deg[D(s)], SN = ∅ and S1 = S0 . For kp > 0, Ω+ = [1.5129, +∞) (see Figure 3.35). The corresponding SL+ = [0.4473, +∞) (see Figure 3.36). For kp < 0, Ω− = [0.7359, 1.3312] ∪ [2.6817, +∞) (see Figure 3.37).
Figure 3.35 L(ω) vs. ω for kp > 0 (L(ω) crosses L0 = 1.8 at ω1+ = 1.5129).
Figure 3.36 |kp(ω)| vs. ω (marked values: kp1+ = 0.4473 at ω1+ = 1.5129; |kp1−| = 0.6025 at ω1− = 0.7359; |kp|min = 0.4135 at ωmin = 1.2135; ω2− = 1.3312; |kp3−| = 1.3691 at ω3− = 2.6817).
Figure 3.37 L(ω) vs. ω for kp < 0.

So, the stabilizing kp for the plant with time delay up to 1.8 is
$$S_R = S_1\backslash S_L = (-0.4093, 1)\backslash\big([0.4473, +\infty) \cup [-0.6025, -0.4135] \cup (-\infty, -1.3691]\big) = (-0.4093, 0.4473).$$
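The boundary values ω1+ = 1.5129 and kp1+ = 0.4473 quoted above can be recovered directly from (3.94) and (3.90). A sketch using only the standard library:

```python
import cmath
import math

def P0(s):
    # nominal plant of Example 3.14, Eq. (3.95) without the delay
    return (s**2 + 3*s - 2) / (s**3 + 2*s**2 + 3*s + 2)

def L_of(w):
    # minimal destabilizing delay for kp > 0, Eq. (3.94)
    return (cmath.phase(P0(1j * w)) + math.pi) / w

# L(w) decreases through L0 = 1.8 on [1, 2] (see Figure 3.35): bisect L(w) = 1.8
lo, hi = 1.0, 2.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if L_of(mid) > 1.8:
        lo = mid
    else:
        hi = mid
w1 = 0.5 * (lo + hi)
kp1 = 1.0 / abs(P0(1j * w1))   # magnitude condition (3.90)

print(w1, kp1)   # about 1.5129 and 0.4473, matching Figures 3.35-3.36
```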
3.7.4 PI Controllers
For a PI controller
$$C(s) = k_p + \frac{k_i}{s} = \frac{k_p s + k_i}{s}$$
the open-loop transfer function becomes
$$G(s) = C(s)P(s) = C(s)P_0(s)e^{-Ls} = G_0(s)e^{-Ls}$$
where
$$G_0(s) = C(s)P_0(s) = \frac{k_p s + k_i}{s}\cdot\frac{N(s)}{D(s)} = (k_p s + k_i)\cdot\frac{N(s)}{sD(s)} = (k_p s + k_i)\cdot R_0(s),$$
with $R_0(s) \stackrel{\Delta}{=} \frac{N(s)}{sD(s)}$. The magnitude and phase conditions
$$\arg[(k_i + jk_p\omega)R_0(j\omega)] - L\omega = -\pi, \qquad |(k_i + jk_p\omega)R_0(j\omega)| = 1$$
can be written as
$$L(\omega, k_p, k_i) = \frac{\arg[(k_i + jk_p\omega)R_0(j\omega)] + \pi}{\omega} \qquad (3.96)$$
$$k_i = \pm\sqrt{\frac{1}{|R_0(j\omega)|^2} - k_p^2\omega^2}. \qquad (3.97)$$
We can first fix kp and define
$$M(\omega) = \frac{1}{|R_0(j\omega)|^2} - k_p^2\omega^2. \qquad (3.98)$$
Thus,
$$k_i = \pm\sqrt{M(\omega)}. \qquad (3.99)$$
Note that since ki ∈ R, only those ω’s with M(ω) ≥ 0 need consideration when we compute SL. Substituting (3.99) into (3.96), we have
$$L(\omega) = \frac{\arg\{[\pm\sqrt{M(\omega)} + jk_p\omega]R_0(j\omega)\} + \pi}{\omega}. \qquad (3.100)$$
Before proceeding further, we need to introduce some notation. For a given set in the controller parameter space, if one of the controller parameters appears as a subscript, then the new set represents the subset of the original one with that parameter fixed at some value. For example, $S_{R,k_p}$ is the subset of SR with kp fixed at some value. Based on the above discussion, the following steps can be used for computing SR:

1. Compute S0.
2. Find SN:
   • If deg[N(s)] > deg[D(s)], SN = R², which means SR = ∅.
   • If deg[N(s)] < deg[D(s)], SN = ∅.
   • If deg[N(s)] = deg[D(s)], SN = {(kp, ki) | kp, ki ∈ R and |kp| ≥ |an/bn|}, where an, bn are the leading coefficients of D(s) and N(s) respectively.
3. Compute S1 = S0\SN.
4. For a fixed kp, find $S_{R,k_p}$:
   • First determine the sets Ω+ and $S_{L,k_p}^+$:
$$\Omega^+ = \left\{\omega \;\Big|\; \omega > 0,\ M(\omega) \ge 0,\ \text{and } L(\omega) = \frac{\arg\{[\sqrt{M(\omega)} + jk_p\omega]R_0(j\omega)\} + \pi}{\omega} \le L_0\right\}$$
$$S_{L,k_p}^+ = \left\{k_i \;\Big|\; k_i \notin S_{N,k_p} \text{ and } \exists\,\omega \in \Omega^+ \text{ s.t. } k_i = \sqrt{M(\omega)}\right\}.$$
   • Next determine the sets Ω− and $S_{L,k_p}^-$:
$$\Omega^- = \left\{\omega \;\Big|\; \omega > 0,\ M(\omega) \ge 0,\ \text{and } L(\omega) = \frac{\arg\{[-\sqrt{M(\omega)} + jk_p\omega]R_0(j\omega)\} + \pi}{\omega} \le L_0\right\}$$
$$S_{L,k_p}^- = \left\{k_i \;\Big|\; k_i \notin S_{N,k_p} \text{ and } \exists\,\omega \in \Omega^- \text{ s.t. } k_i = -\sqrt{M(\omega)}\right\}.$$
   Compute $S_{L,k_p} = S_{L,k_p}^+ \cup S_{L,k_p}^-$ and $S_{R,k_p} = S_{1,k_p}\backslash S_{L,k_p}$.
5. By sweeping over kp, we obtain the complete set of PI controllers that stabilize all plants with delay up to L0:
$$S_R = \bigcup_{k_p} S_{R,k_p}. \qquad (3.101)$$
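A quick sanity check of the PI construction: for any ω with M(ω) ≥ 0, the choice ki = ±√M(ω) places the loop gain exactly on the unit circle, by (3.97). A sketch (plant (3.95) from Example 3.14 is reused purely for illustration; kp = 0.2 and ω = 1 are arbitrary choices of ours):

```python
import math

def R0(w):
    # R0(jw) = N(jw) / (jw * D(jw)) for the illustrative plant (3.95)
    s = 1j * w
    N = s**2 + 3*s - 2
    D = s**3 + 2*s**2 + 3*s + 2
    return N / (s * D)

def M(w, kp):
    # M(w) = 1/|R0(jw)|^2 - (kp*w)^2, as in (3.98)
    return 1.0 / abs(R0(w))**2 - (kp * w)**2

kp, w = 0.2, 1.0
assert M(w, kp) >= 0          # this frequency is admissible for this kp
ki = math.sqrt(M(w, kp))
# by construction the loop gain sits exactly on the unit circle at this w:
gain = abs((ki + 1j * kp * w) * R0(w))
assert abs(gain - 1.0) < 1e-12
```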
3.7.5 PID Controllers for an Arbitrary LTI Plant with Delay
For a PID controller
$$C(s) = k_p + \frac{k_i}{s} + k_d s = \frac{k_d s^2 + k_p s + k_i}{s} \qquad (3.102)$$
the open-loop transfer function becomes
$$G(s) = C(s)P(s) = C(s)P_0(s)e^{-Ls} = G_0(s)e^{-Ls} \qquad (3.103)$$
where
$$G_0(s) = C(s)P_0(s) = \frac{k_d s^2 + k_p s + k_i}{s}\cdot\frac{N(s)}{D(s)} = (k_d s^2 + k_p s + k_i)\cdot\frac{N(s)}{sD(s)} = (k_d s^2 + k_p s + k_i)\cdot R_0(s), \qquad (3.104)$$
with $R_0(s) \stackrel{\Delta}{=} \frac{N(s)}{sD(s)}$.
The phase and magnitude conditions
$$\arg[(k_i - k_d\omega^2 + jk_p\omega)R_0(j\omega)] - L\omega = -\pi \qquad (3.105)$$
and
$$|(k_i - k_d\omega^2 + jk_p\omega)R_0(j\omega)| = 1 \qquad (3.106)$$
can be further reduced to
$$L(\omega, k_p, k_i, k_d) = \frac{\pi + \arg\{[(k_i - k_d\omega^2) + jk_p\omega]\cdot R_0(j\omega)\}}{\omega} \qquad (3.107)$$
$$k_i - k_d\omega^2 = \pm\sqrt{\frac{1}{|R_0(j\omega)|^2} - (k_p\omega)^2}.$$
Similar to the PI case, for fixed kp we define
$$M(\omega) = \frac{1}{|R_0(j\omega)|^2} - (k_p\omega)^2. \qquad (3.108)$$
Then
$$k_i - k_d\omega^2 = \pm\sqrt{M(\omega)}. \qquad (3.109)$$
Like the PI case, we only need to consider ω’s with M(ω) ≥ 0 when we compute SL. Substituting (3.109) into (3.107), we have
$$L(\omega, k_p, k_i, k_d) = L(\omega) = \frac{\pi + \arg\{[\pm\sqrt{M(\omega)} + jk_p\omega]\cdot R_0(j\omega)\}}{\omega}. \qquad (3.110)$$
The following steps can be used for computing SR:

1. Compute S0.
2. Find SN:
   • If deg[N(s)] > deg[D(s)] − 1, SN = R³, which means SR = ∅.
   • If deg[N(s)] < deg[D(s)] − 1, SN = ∅.
   • If deg[N(s)] = deg[D(s)] − 1, SN = {(kp, ki, kd) | kp, ki, kd ∈ R and |kd| ≥ |an/bn−1|}, where an, bn−1 are the leading coefficients of D(s) and N(s) respectively.
3. Compute S1 = S0 \SN . 4. For a fixed kp , determine the set SR,kp as follows: + • First determine the sets Ω+ and SL,k : p
Ω =
+ SL,k = p
+
ω | ω > 0 and M (ω) ≥ 0 and p π + arg{[ M (ω) + jkp ω] · R0 (jω)} L(ω) = ≤ L0 ω (ki , kd ) | (ki , kd ) ∈ / SN,kp and ∃ ω ∈ Ω+ p such that ki − kd ω 2 = M (ω) .
+ Note that SL,k is a set of straight lines in the (ki , kd ) space. p − • Next determine the sets Ω− and SL,k : p
ω|ω > 0 and M (ω) ≥ 0 and p π + arg{[− M (ω) + jkp ω] · R0 (jω)} L(ω) = ≤ L0 ω = (ki , kd )|(ki , kd ) ∈ / SN,kp and ∃ ω ∈ Ω− p 2 such that ki − kd ω = − M (ω) .
Ω− =
− SL,k p
+ − Compute SL,kp = SL,k ∪ SL,k and SR,kp = S1,kp \SL,kp . p p
5. By sweeping over kp , we will have the complete set of PID controllers that stabilize all plants with delay up to L0 : [ (3.111) SR = SR,kp . kp
We next present two examples to demonstrate the above steps. The first example shows how the approach of this chapter can be used to recover the results of Section 3.6 for first order systems.
131
PID CONTROLLERS FOR SYSTEMS WITH TIME DELAY
Example 3.15 Determine all PID controllers that stabilize a first order plant with time delay up to L0 . To this end, consider the first order plant with time delay: P (s) =
k e−Ls , L ∈ [0, L0 ]. Ts + 1
The stabilizing PID parameters for the delay free plant are: 1 T 1 T S0 = (kp , ki , kd ) | kp > − , ki > 0, kd > − or kp < − , ki < 0, kd < − k k k k Since deg[D(s)] − deg[N (s)] = 1, T SN = (kp , ki , kd ) | kp , ki , kd ∈ R and |kd | ≥ . k
Without loss of generality, let us assume that k > 0. Then 1 T T S1 = S0 \SN = (kp , ki , kd ) | kp > − , ki > 0, > kd > − k k k
for T > 0, and T T 1 S1 = {(kp , ki , kd )|kp < − , ki < 0, < kd < − } k k k for T < 0. A detailed analysis of this example verifies the results previously obtained for a first order plant, using Pontryagin’s method and the results obtained are as follows. For T > 0, with different kp values, the stabilizing regions of (ki , kd ) take on different but simple shapes: • For − k1 < kp ≤ k1 , SR,kp is a trapezoid (see Figure 3.38(a)). • For kp > k1 , SR,kp is a quadrilateral (see Figure 3.38(b)(c)). Similar results can also be obtained for T < 0. Example 3.16 Use a PID controller to stabilize the plant P (s) =
s5
+
8s4
s3 − 4s2 + s + 2 e−Ls + 32s3 + 46s2 + 46s + 17
(3.112)
with L up to L0 = 1, that is, for all L ∈ [0, 1]. Fix kp = 1. First, we can use the method proposed in Chapter 2 to get the stabilizing ki , kd values for the delay free plant, S0,kp , shown in Figure 3.39.
132
THREE TERM CONTROLLERS 2
k
d
1
T/k
0
(a)
−1
−T/k
−2
0
1
2
3
4 k
5
6
7
8
i
2
T/k
k
d
1 0
(b)
−1
−T/k
−2
0
1
2
3
k
d
2
4 ki
5
6
7
8
1
T/k
0
(c)
−1
−T/k
−2
0
1
2
3
4 k
5
6
7
8
i
Figure 3.38 First order plant: stabilizing region of(ki , kd ) with different kp .
Since deg[D(s)] − deg[N(s)] > 1, SN = ∅ and S1 = S0. For ki − kdω² = √M(ω) > 0, the set of ω where L(ω) ≤ L0 is Ω+ = [0.524825, 0.742302] ∪ [2.57318, +∞) (see Figure 3.40). Also, we can find the corresponding values of √M(ω) (see Figure 3.41) and $S_{L,k_p}^+$, that is, the straight lines defined by ki − kdω² = √M(ω) for ω ∈ Ω+. For ki − kdω² = −√M(ω) < 0, Ω− = [1.35894, 1.8659] ∪ [4.37326, +∞) (see Figure 3.42). Then we can get $S_{L,k_p}^-$. Finally, we can exclude $S_{L,k_p}^+$ and $S_{L,k_p}^-$ from $S_{1,k_p}$ to get $S_{R,k_p}$ (see Figure 3.43).

In this section, we first clarified the conditions under which the Nyquist Criterion can be applied to time delay systems. Based on this clarification, a method to compute the set of all P, PI and PID controllers that stabilize a given plant with time delay was obtained. The procedure is simple and easy to apply. With this PID stabilizing set in hand, further optimization (design) can be undertaken while meeting the stability constraint.
133
PID CONTROLLERS FOR SYSTEMS WITH TIME DELAY
6
4
2
kd
0
−2
−4
−6
−8
0
1
2
3
4
5
6
7
ki
Figure 3.39 Stabilizing region of(ki , kd ) with kp = 1 for delay free plant.
50 40 30 20 10 1
2
3
4
5
Figure 3.40 p L(ω) vs. ω with ki − kd ω 2 = M (ω).
6
134
THREE TERM CONTROLLERS
200 150 100 50 1
2
3
4
5
6
5
6
Figure 3.41 p M (ω) vs. ω with kp = 1.
120 100 80 60 40 20 1
2
3
4
Figure 3.42 p L(ω) vs. ω with ki − kd ω 2 = − M (ω).
Figure 3.43 Stabilizing region SR,kp of (ki, kd) with kp = 1 for the plant with delay up to 1.
3.8 Proofs of Lemmas 3.3, 3.4, and 3.5

3.8.1 Preliminary Results
We begin by making the following observations, which follow from the proof of Lemma 3.1.

REMARK 3.6 As we can see from Figures 3.16 through 3.18(a), for kp ∈ (−1/k, ku), the odd roots of (3.39), i.e., zj where j = 1, 3, 5, ..., get closer to (j − 1)π as j increases. So in the limit for odd values of j we have

lim_{j→∞} cos(zj) = 1.

Moreover, since the cosine function is monotonically decreasing between (j − 1)π and jπ for odd values of j, in view of the previous observation we have

cos(z1) < cos(z3) < cos(z5) < ··· .
REMARK 3.7 From Figure 3.16 and Figure 3.18(a) we see that for kp ∈ (−1/k, 1/k) ∪ (1/k, ku), the even roots of (3.39), i.e., zj where j = 2, 4, 6, ..., get closer to (j − 1)π as j increases. So in the limit for even values of j we have

lim_{j→∞} cos(zj) = −1.

We also see in Figure 3.16 that these roots approach (j − 1)π from the right, whereas in Figure 3.18(a) we see that they approach (j − 1)π from the left. Since the cosine function is monotonically decreasing between (j − 2)π and (j − 1)π (j = 2, 4, 6, ...) and is monotonically increasing between (j − 1)π and jπ (j = 2, 4, 6, ...), we have

cos(z2) > cos(z4) > cos(z6) > ···

for kp ∈ (−1/k, 1/k) ∪ (1/k, ku). In the particular case of Figure 3.17, i.e., kp = 1/k, we see that cos(z2) = cos(z4) = cos(z6) = ··· = −1.

Before proving Lemmas 3.3, 3.4, and 3.5, we first state and prove the following technical lemmas that will simplify the subsequent analysis.

LEMMA 3.10 Consider the function E1 : Z⁺ × Z⁺ → R defined by

E1(m, n) := bm − bn

where m and n are natural numbers and bj, j = m, n are as defined in (3.45). Then, for zm, zn ≠ lπ, l = 0, 1, 2, ..., E1(m, n) can be equivalently expressed as

E1(m, n) = L²[1 − (kkp)²][cos(zm) − cos(zn)] / (kT zm zn sin(zm) sin(zn)).

PROOF We will first show that for zj ≠ lπ, j = 1, 2, 3, ..., the following identity holds:

sin(zj) + (T/L) zj cos(zj) = [1 + kkp cos(zj)] / sin(zj).    (3.113)

For zj ≠ lπ, from (3.39) we obtain

sin(zj) + (T/L) zj cos(zj) = sin(zj) + [(kkp + cos(zj))/sin(zj)] cos(zj) = [1 + kkp cos(zj)] / sin(zj).

From (3.45) we can rewrite E1(m, n) as follows:

E1(m, n) = −(L/(kzm))[sin(zm) + (T/L) zm cos(zm)] + (L/(kzn))[sin(zn) + (T/L) zn cos(zn)]

⇒ −(k/L) E1(m, n) = [1 + kkp cos(zm)]/(zm sin(zm)) − [1 + kkp cos(zn)]/(zn sin(zn))  [using (3.113)]

= {zn sin(zn)[1 + kkp cos(zm)] − zm sin(zm)[1 + kkp cos(zn)]} / (zm zn sin(zm) sin(zn)).

Since zj, j = 1, 2, 3, ..., satisfy (3.39), we can rewrite the previous expression as follows:

−(kT/L²) E1(m, n) = {[kkp + cos(zn)][1 + kkp cos(zm)] − [kkp + cos(zm)][1 + kkp cos(zn)]} / (zm zn sin(zm) sin(zn))

= [(kkp)² − 1][cos(zm) − cos(zn)] / (zm zn sin(zm) sin(zn)).

Thus, we finally obtain

E1(m, n) = L²[1 − (kkp)²][cos(zm) − cos(zn)] / (kT zm zn sin(zm) sin(zn)).
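The closed form just derived is easy to sanity check numerically. The sketch below locates two roots of (3.39) by bisection and compares bm − bn computed directly from (3.45) against the expression of Lemma 3.10; the parameter values k, kp, T, L are illustrative assumptions with |kkp| < 1.

```python
import math

# Assumed illustrative parameters with |k*kp| < 1 and T, L > 0
k, kp, T, L = 1.0, 0.2, 1.0, 1.0

def g(z):
    # Equation (3.39) rearranged: k*kp + cos(z) - (T/L) z sin(z) = 0
    return k*kp + math.cos(z) - (T/L)*z*math.sin(z)

def bisect(lo, hi):
    # simple bisection for a root of g in [lo, hi] (sign change assumed)
    flo = g(lo)
    for _ in range(200):
        mid = 0.5*(lo + hi)
        if flo*g(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, g(mid)
    return 0.5*(lo + hi)

z1 = bisect(0.1, math.pi - 0.1)            # root in (0, pi)
z2 = bisect(math.pi + 0.1, 1.5*math.pi)    # root in (pi, 3*pi/2)

def b(z):
    # (3.45): b_j = -(L/(k z)) [sin(z) + (T/L) z cos(z)]
    return -(L/(k*z))*(math.sin(z) + (T/L)*z*math.cos(z))

direct = b(z1) - b(z2)
closed = (L**2*(1 - (k*kp)**2)*(math.cos(z1) - math.cos(z2))
          / (k*T*z1*z2*math.sin(z1)*math.sin(z2)))
assert abs(direct - closed) < 1e-9
```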
Before stating the next lemma, we recall here for convenience the standard signum function sgn : R → {−1, 0, 1} already introduced previously:

sgn[x] = −1 if x < 0,  0 if x = 0,  1 if x > 0.

LEMMA 3.11 Consider the function E2 : Z⁺ × Z⁺ → R defined by

E2(m, n) := vm − vn

where m and n are natural numbers and vj, j = m, n are as defined in (3.48). If kp ≠ 1/k and zm, zn ≠ lπ, l = 1, 2, 3, ..., then

sgn[E2(m, n)] = sgn[T] · sgn[cos(zm) − cos(zn)].

PROOF First, since zj, j = 1, 2, 3, ..., satisfies (3.39), we can rewrite vj as follows:

vj = (zj/(kL)){sin(zj) + [(kkp + cos(zj))/sin(zj)](cos(zj) − 1)}  [since zj ≠ lπ]

⇒ vj = zj(1 − kkp)[1 − cos(zj)] / (kL sin(zj)).    (3.114)

Using (3.114) the function E2(m, n) can be equivalently expressed as

E2(m, n) = zm(1 − kkp)[1 − cos(zm)]/(kL sin(zm)) − zn(1 − kkp)[1 − cos(zn)]/(kL sin(zn))

⇒ (kL/(1 − kkp)) E2(m, n) = zm[1 − cos(zm)]/sin(zm) − zn[1 − cos(zn)]/sin(zn).

Once more we use the fact that zj, j = 1, 2, 3, ..., satisfies (3.39):

(kL/(1 − kkp)) E2(m, n) = (L/T){[kkp + cos(zm)][1 − cos(zm)]/sin²(zm) − [kkp + cos(zn)][1 − cos(zn)]/sin²(zn)}

⇒ (kT/(1 − kkp)) E2(m, n) = [kkp + cos(zm)]/[1 + cos(zm)] − [kkp + cos(zn)]/[1 + cos(zn)]  [since sin²(x) = 1 − cos²(x)]

= (1 − kkp)[cos(zm) − cos(zn)] / ([1 + cos(zm)][1 + cos(zn)]).

Thus, the function E2(m, n) is given by

E2(m, n) = (1 − kkp)²[cos(zm) − cos(zn)] / (kT [1 + cos(zm)][1 + cos(zn)]).
Since kp ≠ 1/k, then (1 − kkp)² > 0. Also, since zm, zn ≠ lπ, l = 1, 2, 3, ..., then 1 + cos(zm) > 0 and 1 + cos(zn) > 0. Thus, from the previous expression for E2(m, n) it is clear that

sgn[E2(m, n)] = sgn[T] · sgn[cos(zm) − cos(zn)].

This completes the proof of the lemma.

LEMMA 3.12 Consider the function E3 : Z⁺ × Z⁺ → R defined by

E3(m, n) := wm − wn

where m and n are natural numbers and wj, j = m, n are as defined in (3.49). If kp ≠ −1/k and zm, zn ≠ lπ, l = 1, 2, 3, ..., then

sgn[E3(m, n)] = sgn[T] · sgn[cos(zm) − cos(zn)].

PROOF As in the previous proof, we use the fact that zj, j = 1, 2, 3, ..., satisfies (3.39). Thus, wj can be rewritten as follows:

wj = (zj/(kL)){sin(zj) + [(kkp + cos(zj))/sin(zj)](cos(zj) + 1)}  [since zj ≠ lπ]

⇒ wj = zj(1 + kkp)[1 + cos(zj)] / (kL sin(zj)).    (3.115)

Following the same procedure used in the proof of Lemma 3.11 we obtain

E3(m, n) = (1 + kkp)²[cos(zm) − cos(zn)] / (kT [1 − cos(zm)][1 − cos(zn)]).

Since kp ≠ −1/k, then (1 + kkp)² > 0. Also, since zm, zn ≠ lπ, l = 1, 2, 3, ..., then 1 − cos(zm) > 0 and 1 − cos(zn) > 0. Thus, from the previous expression for E3(m, n) it is clear that

sgn[E3(m, n)] = sgn[T] · sgn[cos(zm) − cos(zn)].

This completes the proof of the lemma.
3.8.2 Proof of Lemma 3.3
(i) First, we show that bj < −T/k for odd values of j. Recall from Figure 3.16 that zj is either in the first or second quadrant for odd values of j. Thus, sin(zj) > 0 for j = 1, 3, 5, ... . For −1/k < kp < 1/k and cos(zj) < 1 we have

cos(zj)(kkp − 1) > kkp − 1
⇒ 1 + kkp cos(zj) > (T/L) zj sin(zj)  [using (3.39)]
⇒ [1 + kkp cos(zj)]/sin(zj) > (T/L) zj  [since sin(zj) > 0]
⇒ sin(zj) + (T/L) zj cos(zj) > (T/L) zj  [using (3.113)]
⇒ bj = −(L/(kzj))[sin(zj) + (T/L) zj cos(zj)] < −T/k.

Next, we show that bj < bj+2 for odd values of j. Since in this case, i.e., for kp ∈ (−1/k, 1/k), zj ≠ lπ for odd values of j, from Lemma 3.10 we have

E1(j, j + 2) := bj − bj+2 = L²[1 − (kkp)²][cos(zj) − cos(zj+2)] / (kT zj zj+2 sin(zj) sin(zj+2)).

Since −1/k < kp < 1/k, then 1 − (kkp)² > 0. We also know that zj > 0 and sin(zj) > 0 for odd values of j. Then, from the previous expression for E1(j, j + 2) and recalling that T > 0, we have

sgn[E1(j, j + 2)] = sgn[cos(zj) − cos(zj+2)].

From Remark 3.6 we know that cos(zj) < cos(zj+2). Then,

sgn[E1(j, j + 2)] = −1 ⇒ E1(j, j + 2) = bj − bj+2 < 0
and bj < bj+2 for odd values of j. Thus, we have shown that

bj < bj+2 < −T/k for odd values of j.
(ii) We now show that bj > T/k for even values of j. From Figure 3.16 we see that zj is either in the third or fourth quadrant for even values of j. Thus, sin(zj) < 0 in this case. Since cos(zj) > −1 and −1 < kkp < 1 we have

cos(zj)(1 + kkp) > −(1 + kkp)
⇒ 1 + kkp cos(zj) > −(T/L) zj sin(zj)  [using (3.39)]
⇒ [1 + kkp cos(zj)]/sin(zj) < −(T/L) zj  [since sin(zj) < 0]
⇒ sin(zj) + (T/L) zj cos(zj) < −(T/L) zj  [using (3.113)]
⇒ bj = −(L/(kzj))[sin(zj) + (T/L) zj cos(zj)] > T/k.

Note from Figure 3.16 that as j → ∞, zj → (j − 1)π. Then bj → T/k.

(iii) It only remains for us to show the properties of the parameter vj when j takes on odd values. From (3.114) we have

vj = zj(1 − kkp)[1 − cos(zj)] / (kL sin(zj)).

Since −1 < kkp < 1, then 1 − kkp > 0. Also note that 1 − cos(zj) > 0. Moreover, when j takes on odd values then sin(zj) > 0. Thus, we conclude that vj > 0 for odd values of j.

We now make use of Lemma 3.11 to determine the sign of the quantity E2(j, j + 2) := vj − vj+2. Since kp ≠ 1/k and zj ≠ lπ for odd values of j, the conditions in Lemma 3.11 are satisfied and we obtain

sgn[E2(j, j + 2)] = sgn[T] · sgn[cos(zj) − cos(zj+2)].

We mentioned earlier that for odd values of j we have cos(zj) < cos(zj+2), and we also have T > 0 for an open-loop stable plant. Then sgn[E2(j, j + 2)] = −1, so that E2(j, j + 2) = vj − vj+2 < 0. Thus, we conclude that 0 < vj < vj+2 for odd values of j. This completes the proof of the lemma.
3.8.3 Proof of Lemma 3.4
(i) We first consider the case of odd values of j. The proof follows from substituting (3.113) into (3.45), since zj ≠ lπ for odd values of j:

bj = −(L/(kzj)) [1 + kkp cos(zj)]/sin(zj)  [using (3.113)]
   = −(L/(kzj)) [1 + cos(zj)]/sin(zj)  [since kkp = 1]
   = −T/k  [using (3.39)].

(ii) For even values of j, from Figure 3.17 we see that zj = (j − 1)π. Then sin(zj) = 0 and cos(zj) = −1 in this case. Thus from (3.45) we conclude that bj = T/k for even values of j. This completes the proof of this lemma.
3.8.4 Proof of Lemma 3.5

First we make a general observation regarding the roots zj, j = 1, 2, 3, ... when the parameter kp is inside the interval (1/k, (1/k)[(T/L)α1 sin(α1) − cos(α1)]). From Figure 3.18(a) we see that these roots lie either in the first or second quadrant. Then

sin(zj) > 0 for j = 1, 2, 3, ... .    (3.116)

(i) We now consider the case of odd values of j. Since kp > 1/k and cos(zj) < 1 we have

cos(zj)(kkp − 1) < kkp − 1
⇒ 1 + kkp cos(zj) < (T/L) zj sin(zj)  [using (3.39)]
⇒ sin(zj) + (T/L) zj cos(zj) < (T/L) zj  [using (3.113) and (3.116)]
⇒ bj = −(L/(kzj))[sin(zj) + (T/L) zj cos(zj)] > −T/k.

We now show that bj > bj+2. Since zj ≠ lπ for odd values of j, from Lemma 3.10 we have

E1(j, j + 2) = bj − bj+2 = L²[1 − (kkp)²][cos(zj) − cos(zj+2)] / (kT zj zj+2 sin(zj) sin(zj+2)).

Since kp > 1/k, then 1 − (kkp)² < 0. We also know that zj > 0 and sin(zj) > 0. Then from the previous expression for E1(j, j + 2) we have

sgn[E1(j, j + 2)] = −sgn[cos(zj) − cos(zj+2)].
From Remark 3.6 we have that cos(zj) < cos(zj+2). Then

sgn[E1(j, j + 2)] = 1 ⇒ E1(j, j + 2) = bj − bj+2 > 0

and bj > bj+2 for odd values of j. Thus, we have shown that

bj > bj+2 > −T/k for odd values of j.

(ii) We now consider the case of even values of the parameter j. Since cos(zj) > −1 and kp > 1/k we have

[cos(zj) + 1](1 + kkp) > 0
⇒ 1 + kkp cos(zj) > −(T/L) zj sin(zj)  [using (3.39)]
⇒ sin(zj) + (T/L) zj cos(zj) > −(T/L) zj  [using (3.113) and (3.116)]
⇒ bj = −(L/(kzj))[sin(zj) + (T/L) zj cos(zj)] < T/k.

We now show that bj < bj+2 for this case. We know that zj ≠ lπ for even values of j. Then from Lemma 3.10 we have

E1(j, j + 2) = bj − bj+2 = L²[1 − (kkp)²][cos(zj) − cos(zj+2)] / (kT zj zj+2 sin(zj) sin(zj+2)).

Once more, since kp > 1/k, then 1 − (kkp)² < 0. We also know that zj > 0 and sin(zj) > 0. Then from the previous expression for E1(j, j + 2) we have

sgn[E1(j, j + 2)] = −sgn[cos(zj) − cos(zj+2)].

From Remark 3.7 we have that cos(zj) > cos(zj+2) for even values of the parameter j. Using this fact we obtain

sgn[E1(j, j + 2)] = −1 ⇒ E1(j, j + 2) = bj − bj+2 < 0

and bj < bj+2 for even values of j. Thus, we have shown that

bj < bj+2 < T/k for even values of j.

(iii) We will now study the properties of the parameter wj. From (3.115) we have

wj = zj(1 + kkp)[1 + cos(zj)] / (kL sin(zj)).

Since kp > 1/k, then 1 + kkp > 0. Also notice that 1 + cos(zj) > 0. Thus, since sin(zj) > 0 we conclude that wj > 0 for even values of the parameter
j. We now invoke Lemma 3.12 and evaluate the function E3(m, n) at m = j, n = j + 2: E3(j, j + 2) = wj − wj+2. Since kp ≠ −1/k and zj ≠ lπ for even values of j, we have

sgn[E3(j, j + 2)] = sgn[T] · sgn[cos(zj) − cos(zj+2)].

We know from Remark 3.7 that cos(zj) > cos(zj+2) for even values of j, and also that T > 0. Then sgn[E3(j, j + 2)] = 1, so that E3(j, j + 2) = wj − wj+2 > 0. Thus, we have shown that wj > wj+2 > 0 for even values of j.

(iv) We show first that b1 < b2. Since z1, z2 ≠ lπ, from Lemma 3.10 we have

E1(1, 2) = b1 − b2 = L²[1 − (kkp)²][cos(z1) − cos(z2)] / (kT z1 z2 sin(z1) sin(z2)).

We know that sin(z1) > 0 and sin(z2) > 0. Moreover, since kp > 1/k we obtain the following:

sgn[E1(1, 2)] = −sgn[cos(z1) − cos(z2)].

As we can see from Figure 3.18(a), both z1 and z2 are in the interval (0, π) and z1 < z2. Since the cosine function is monotonically decreasing in (0, π), cos(z1) > cos(z2). Thus,

sgn[E1(1, 2)] = −1 ⇒ E1(1, 2) = b1 − b2 < 0.

Hence, we have b1 < b2. Finally we show that w1 > w2. To do so, we invoke Lemma 3.12 and evaluate the function E3(m, n) at m = 1, n = 2: E3(1, 2) = w1 − w2. Since kp ≠ −1/k and z1, z2 ∉ {0, π}, the conditions in Lemma 3.12 are satisfied and we obtain

sgn[E3(1, 2)] = sgn[T] · sgn[cos(z1) − cos(z2)].

We already pointed out that cos(z1) > cos(z2) and since T > 0 we have

sgn[E3(1, 2)] = 1 ⇒ E3(1, 2) = w1 − w2 > 0.

Thus, w1 > w2 and this completes the proof of the lemma.
3.9 Proofs of Lemmas 3.7 and 3.9

3.9.1 Proof of Lemma 3.7
Let us define the function f : (0, π) × R → R by

f(z, kp) := [kkp + cos(z)] / sin(z).

Consider kp1, kp2 ∈ R such that kp1 < kp2. Then for any z ∈ (0, π) we have

kkp1 + cos(z) < kkp2 + cos(z)
⇒ [kkp1 + cos(z)]/sin(z) < [kkp2 + cos(z)]/sin(z)  [since sin(z) > 0]
⇒ f(z, kp1) < f(z, kp2).

Thus, for any fixed z ∈ (0, π), f(z, kp) is monotonically increasing with respect to kp. Hence, for kp < −1/k we have

f(z, kp) < f(z, −1/k)  ∀z ∈ (0, π).

This means that if the line (T/L)z does not intersect the curve f(z, −1/k) in z ∈ (0, π), then it will not intersect any other curve f(z, kp) in z ∈ (0, π). Observe that ∀z ∈ (0, π)

f(z, −1/k) = [−1 + cos(z)]/sin(z) = −tan(z/2).

Accordingly, define a continuous extension of f(z, −1/k) to [0, π) by

f1(z, −1/k) = −tan(z/2).

Clearly, the curve f1(z, −1/k) intersects the line (T/L)z at z = 0. This is depicted in Figure 3.44. Also note that the slope of the tangent to f1(z, −1/k) at z = 0 is given by

df1/dz |_{z=0} = −(1/2) sec²(z/2) |_{z=0} = −1/2.

If this slope is less than or equal to T/L then we are guaranteed that no further intersections will take place in the interval (0, π). Since f1(z, −1/k) = f(z, −1/k) on (0, π), it follows that if −0.5 ≤ T/L, then the curve f(z, −1/k) will not intersect the line (T/L)z in the interval (0, π). This completes the proof.
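The geometric claim can be spot checked numerically; the sample ratios T/L below are assumptions. Since tan(x) ≥ x on [0, π/2), we have −tan(z/2) ≤ −z/2 ≤ (T/L)z whenever T/L ≥ −0.5, so the curve never rises above the line on (0, π).

```python
import math

# Whenever -0.5 <= T/L, the curve -tan(z/2) stays at or below the line
# (T/L) z on (0, pi); sample a few assumed ratios and grid points.
for TL in (-0.5, -0.3, 0.0, 1.0):
    for i in range(1, 1000):
        z = math.pi * i / 1000.0
        assert -math.tan(z / 2) <= TL * z
```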
Figure 3.44 Plot of the curve f1(z, −1/k) and the line (T/L)z.

3.9.2 Proof of Lemma 3.9
We begin by making the following observations, which follow from the proof of Lemma 3.6.

REMARK 3.8 From Figure 3.26(a) we see that the odd roots of (3.60), i.e., zj where j = 1, 3, 5, ..., get closer to (j − 1)π as j increases. Since the cosine function is monotonically decreasing between (j − 1)π and jπ for odd values of j, in view of the previous observation we have

cos(z1) < cos(z3) < cos(z5) < ··· .

REMARK 3.9 From Figure 3.26(a) we see that the even roots of (3.60), i.e., zj where j = 2, 4, 6, ..., get closer to (j − 1)π as j increases. Since the cosine function is monotonically decreasing between (j − 2)π and (j − 1)π for even values of j, and because of the previous observation, we have

cos(z2) > cos(z4) > cos(z6) > ··· .

From Figure 3.26(a) we see that the roots zj, j = 1, 2, 3, ... lie either in the first or in the second quadrant when the parameter kp is inside the interval (kl, −1/k). Thus,

sin(zj) > 0 for j = 1, 2, 3, ... .    (3.117)
(i) First we analyze the case of odd values of j. Since kp < −1/k and cos(zj) < 1, we have

cos(zj)(kkp − 1) > kkp − 1
⇒ 1 + kkp cos(zj) > (T/L) zj sin(zj)  [using (3.60)]
⇒ sin(zj) + (T/L) zj cos(zj) > (T/L) zj  [using (3.113) and (3.117)]
⇒ bj = −(L/(kzj))[sin(zj) + (T/L) zj cos(zj)] < −T/k.

We now show that bj < bj+2. Since zj ≠ lπ for odd values of j, from Lemma 3.10 we have

E1(j, j + 2) = bj − bj+2 = L²[1 − (kkp)²][cos(zj) − cos(zj+2)] / (kT zj zj+2 sin(zj) sin(zj+2)).

Since kp < −1/k, then 1 − (kkp)² < 0. We also know that zj > 0, sin(zj) > 0, and T < 0. Then from the previous expression for E1(j, j + 2) we have

sgn[E1(j, j + 2)] = sgn[cos(zj) − cos(zj+2)].

From Remark 3.8 we have cos(zj) < cos(zj+2). Thus,

sgn[E1(j, j + 2)] = −1 ⇒ E1(j, j + 2) = bj − bj+2 < 0

and bj < bj+2 for odd values of j. Thus, we have shown that

bj < bj+2 < −T/k for odd values of j.

(ii) We now consider the case of even values of the parameter j. Since kp < −1/k and cos(zj) > −1, we have

cos(zj)(kkp + 1) < −(kkp + 1)
⇒ 1 + kkp cos(zj) < −(T/L) zj sin(zj)  [using (3.60)]
⇒ sin(zj) + (T/L) zj cos(zj) < −(T/L) zj  [using (3.113) and (3.117)]
⇒ bj = −(L/(kzj))[sin(zj) + (T/L) zj cos(zj)] > T/k.
We now show that bj > bj+2 for this case. Again from Lemma 3.10, since zj ≠ lπ for even values of j, we have

E1(j, j + 2) = bj − bj+2 = L²[1 − (kkp)²][cos(zj) − cos(zj+2)] / (kT zj zj+2 sin(zj) sin(zj+2)).

Since kp < −1/k, then 1 − (kkp)² < 0. We also know that zj > 0, sin(zj) > 0, and T < 0. Then from the previous expression for E1(j, j + 2) we have

sgn[E1(j, j + 2)] = sgn[cos(zj) − cos(zj+2)].

From Remark 3.9 we have cos(zj) > cos(zj+2) for even values of the parameter j. Using this fact we obtain

sgn[E1(j, j + 2)] = 1 ⇒ E1(j, j + 2) = bj − bj+2 > 0

and bj > bj+2 for even values of j. Thus, we have shown that

bj > bj+2 > T/k for even values of j.

(iii) We will now study the properties of the parameter wj. From (3.115) we have

wj = zj(1 + kkp)[1 + cos(zj)] / (kL sin(zj)).

Since kp < −1/k, then 1 + kkp < 0. Moreover, we know that zj > 0, cos(zj) > −1, and sin(zj) > 0. Thus, we conclude that wj < 0 for even values of the parameter j. We now evaluate the function E3(m, n) defined in Lemma 3.12 at m = j, n = j + 2: E3(j, j + 2) = wj − wj+2. Since kp ≠ −1/k and zj ≠ lπ for even values of j, then using Lemma 3.12, we have

sgn[E3(j, j + 2)] = sgn[T] · sgn[cos(zj) − cos(zj+2)].

We know from Remark 3.9 that cos(zj) > cos(zj+2) for even values of j. Since T < 0, it follows that sgn[E3(j, j + 2)] = −1, i.e., E3(j, j + 2) = wj − wj+2 < 0. Thus, we have shown that wj < wj+2 < 0 for even values of j.

(iv) First we show that b1 > b2. Since z1, z2 ≠ lπ, we have from Lemma 3.10

E1(1, 2) = b1 − b2 = L²[1 − (kkp)²][cos(z1) − cos(z2)] / (kT z1 z2 sin(z1) sin(z2)).
We know that sin(z1) > 0 and sin(z2) > 0. Moreover, since kp < −1/k and T < 0, we obtain the following:

sgn[E1(1, 2)] = sgn[cos(z1) − cos(z2)].

From Figure 3.26(a), it is clear that both z1 and z2 are in the interval (0, π) and z1 < z2. Since the cosine function is monotonically decreasing in (0, π), cos(z1) > cos(z2), and we get

sgn[E1(1, 2)] = 1 ⇒ E1(1, 2) = b1 − b2 > 0.

Hence, we have b1 > b2. We finally show that w1 < w2 by evaluating E3(m, n) at m = 1, n = 2: E3(1, 2) = w1 − w2. Since kp ≠ −1/k and z1, z2 ≠ lπ, l = 0, 1, from Lemma 3.12 we obtain

sgn[E3(1, 2)] = sgn[T] · sgn[cos(z1) − cos(z2)].

We already pointed out that cos(z1) > cos(z2). Since T < 0 we have

sgn[E3(1, 2)] = −1 ⇒ E3(1, 2) = w1 − w2 < 0.

Thus, w1 < w2 and this completes the proof.
3.10 An Example of Computing the Stabilizing Set
Let us consider a plant P(s) given by

P(s) = N(s)/D(s)    (3.118)

which is to be stabilized by a PID controller of structure

C(s) = Kp + Ki/s + Kd s    (3.119)

in a unity feedback loop. Consider a delay of L0 in the loop, so that the plant can be described as

P(s) = [N(s)/D(s)] e^{−L0 s}.    (3.120)

The aim is to find the stabilizing values of Kp, Ki, and Kd which will stabilize the plant with delay L0. In this case, for a fixed Kp, first we evaluate the stabilizing set without time delay and call this set SN. Next we find the set
of all Ki − Kd lines, as ω goes from −∞ to ∞, for which the system will be unstable for the given delay, and denote this by SL,Kp. Subtracting SL,Kp from SN, we obtain the stabilizing set. As an example, consider a plant with delay of 1 unit described as:

P(s) = [(s³ − 4s² + s + 2)/(s⁵ + 8s⁴ + 32s³ + 46s² + 46s + 17)] e^{−s}.    (3.121)

First, we obtain the stabilizing set for this system without any delay. The stabilizing set for Kp = 1 is as shown in Figure 3.45. Next, we draw all the Ki − Kd lines which form the set SL,Kp. The region for which the plant is stable with a delay of 1 unit is given by SN \ SL,Kp, as shown by the shaded region in Figure 3.45.
Figure 3.45 Stabilizing set for the system with time delay (Kp = 1): the Ki − Kd stabilizing set without delay and, shaded, the stabilizing set for delay = 1 second.
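One way to cross-check the shaded region numerically is to replace the delay e^{−s} by a rational Padé approximant and test Hurwitz stability on a grid of (Ki, Kd) values. This is only an approximation near the region's boundary; the (5, 5) Padé order and the grid ranges below are assumptions, not values from the text.

```python
import math
import numpy as np

def pade_delay(n=5):
    # (n, n) Pade approximant of e^{-s}; returns (num, den) in descending powers
    c = [math.factorial(2*n - k)*math.factorial(n)
         / (math.factorial(2*n)*math.factorial(k)*math.factorial(n - k))
         for k in range(n + 1)]
    num = [(-1)**k * c[k] for k in range(n + 1)]       # ascending powers of s
    return np.array(num[::-1]), np.array(c[::-1])

Np = np.array([1.0, -4.0, 1.0, 2.0])                   # s^3 - 4s^2 + s + 2
Dp = np.array([1.0, 8.0, 32.0, 46.0, 46.0, 17.0])
Nd, Dd = pade_delay()

def stable_with_delay(Kp, Ki, Kd):
    # characteristic polynomial: s D(s) Dd(s) + (Kd s^2 + Kp s + Ki) N(s) Nd(s)
    char = np.polyadd(np.polymul(np.polymul([1.0, 0.0], Dp), Dd),
                      np.polymul(np.polymul([Kd, Kp, Ki], Np), Nd))
    return bool(np.all(np.roots(char).real < 0))

# Approximate stabilizing set for Kp = 1 (grid ranges assumed)
region = [(Ki, Kd) for Ki in np.arange(0.25, 6.0, 0.25)
                   for Kd in np.arange(-2.0, 4.0, 0.25)
                   if stable_with_delay(1.0, Ki, Kd)]
```

Because a Padé approximant only matches the delay's phase over a limited frequency band, points of `region` near the boundary should be confirmed with the exact method of this section.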
3.11 Exercises
3.1 Determine the stabilizing set of controllers

C(s) = kp + ki/s + kd s

for the plants

G(s) = [K/(1 + sT)] e^{−sL}

where
(a) K = 1, T = 1, L ∈ [0, 0.5]
(b) K = 1, T = −1, L ∈ [0, 0.5]

3.2 Repeat the previous problem with K ∈ [0.5, 1.5] and T ∈ [0.5, 1.5] in part (a) and T ∈ [−1.5, −0.5] in part (b).

3.3 Consider the following plants with L ∈ [0, 1]:

(a) P(s) = [1/(s² + 3s + 4)] e^{−sL}
(b) P(s) = [(s − 1)/(s² + 3s + 4)] e^{−sL}
(c) P(s) = [(s − 1)/(s + 1)²] e^{−sL}
(d) P(s) = [(s − 1)/(s⁴ + 3s² + 2s + 1)] e^{−sL}
(e) P(s) = [(s − 1)/(s⁴ + 3s² − 2s + 1)] e^{−sL}

In each case, find the set of stabilizing controllers of the form

C(s) = kp + ki/s + kd s

using the methods of this and the last chapter. In particular, find the admissible ranges of kp. For distinct values in this range, calculate the stability sets in ki-kd space.

3.4 (Nyquist interpretation of the results of Section 3.5) Consider the closed-loop system consisting of the rational plant G(s) with a single delay. Let

Ω := {ω1, ω2, ···}
denote the positive frequencies where |G(jω)| = 1, Ω⁺ ⊂ Ω those frequencies where d|G(jω)|/dω > 0, Ω⁻ ⊂ Ω those frequencies where d|G(jω)|/dω < 0, and Ω⁰ ⊂ Ω those frequencies where d|G(jω)|/dω = 0. Define, for ωk ∈ Ω:

Lk(n) = ∠G(jωk)/ωk + (2n − 1)π/ωk, for ∠G(jωk) > π
Lk(n) = ∠G(jωk)/ωk + (2n + 1)π/ωk, for ∠G(jωk) < π.

Prove that with |G(∞)| < 1 the delays Lk(n) corresponding to ωk ∈ Ω⁺ (Ω⁻) are stabilizing (destabilizing) in the sense that, with increasing L, two roots cross from the RHP to the LHP (LHP to the RHP), and the delays corresponding to ωk ∈ Ω⁰ are touching, that is, there is no change in stability.
3.12 Notes and References
The use of the Padé approximation and its effect on stability and cost functional evaluation of time delay systems is treated in detail in the book by Marshall, Gorecki, Korytowski, and Walton [153]. Early discussions on the pitfalls of replacing the exponential term by a rational transfer function can be found in the work of Choksy [52] and Marshall [152]. For a complete description of Pontryagin's results, the reader is referred to his original paper [170]. Extensions of these results for a certain class of quasi-polynomials can be found in the book by Bellman and Cooke [20]. A more detailed treatment of the geometry of the chains introduced in Section 3.4 can be found in [20]. Applications of Pontryagin's results introduced in this chapter can be found in [112, 152, 170, 191]. The procedure presented in Section 3.5 for a single delay was developed by Walton and Marshall in 1987 and for more details the reader is referred to [153] and [152]. An important and complete account of recent advances in the study of time delay systems can be found in the book by Gu, Kharitonov, and Chen [90]. This work presents in detail important methods and tools developed for the study of time delay systems. The methods are organized into three categories: (a) frequency domain tools, (b) time domain methods, and (c) input-output stability formulation. In all cases, the robust stability of linear time-invariant delay systems is also discussed in a coherent and systematic manner. The characterization of all stabilizing PID controllers for a given first order plant with time delay was developed by Silva, Datta, and Bhattacharyya [179]. To the best of the authors' knowledge, this is the first time that such a characterization was provided in the literature. An excellent account of PID theory and design for first order plants with time delay can be found in [8]. The results on PID stabilization of arbitrary order plants with a delay were
developed by Xu, Datta, and Bhattacharyya [203]. The results presented in Section 3.3 are taken from [178]. The importance of PID controllers in modern industry is documented in the survey conducted by the Japan Electric Measuring Instrument Manufacturers’ Association [204].
4 DIGITAL PID CONTROLLER DESIGN
In many applications digital controllers are used to provide feedback control action on a continuous time plant whose inputs and outputs are sampled signals. When step input references are to be tracked and step disturbances rejected it is often necessary to use integral control in this discrete time setting. In this chapter we give a solution to the problem of designing a digital PID (Proportional-Integral-Derivative) controller for a given but arbitrary linear time invariant discrete time plant. By using the Tchebyshev representation of a discrete time transfer function and some results on root counting with respect to the unit circle we show how the digital PID stabilizing gains can be determined by solving sets of linear inequalities in two unknowns for a fixed value of the third parameter. By sweeping or gridding over this parameter, the entire set of stabilizing gains can be recovered. This solution is attractive because it answers the question of whether there exists a stabilizing solution or not and in case stabilization is possible the entire set of gains is determined constructively. This can be very useful in designing to satisfy multiple specifications as we demonstrate by examples. Using this characterization of the stabilizing set we present solutions to two design problems: a) Maximally deadbeat design - where we use the controller parameters to squeeze the closed-loop characteristic roots into a circle of smallest possible radius within the unit circle, b) Maximal delay tolerance - where we determine, for the given plant the maximal loop delay that can be tolerated under PID control. In each case the set of controllers attaining the specifications is calculated. Illustrative examples are included.
4.1 Introduction
Consider the sampled data or discrete-time feedback control system shown below (see Figure 4.1). When the reference and disturbance are arbitrary discrete-time step inputs it is possible to show that the tracking error e[n] converges to zero provided the digital controller C(z) includes a discrete time integrator and the closed-loop is stable. The latter condition reduces to the requirement that the closed-loop characteristic polynomial be Schur stable, or have all its roots within the unit circle. The set of controller gains rendering
Figure 4.1 A discrete-time feedback control system.
the closed-loop characteristic polynomial Schur stable is the stabilizing set. Our objective is to develop methods to compute this set. We proceed as follows. First the complex plane image of a real polynomial or rational function over a circle of radius ρ centered at the origin, is determined and expressed in terms of Tchebyshev polynomials of the first and second kinds. A formula is developed for root counting with respect to circular regions in terms of this Tchebyshev representation. This formula is a generalization of Hermite-Biehler type results for Schur stability. Using these results, we show how the PID controller can be reparametrized so that the stabilizing set is obtained as the solution of sets of linear inequalities in two variables for a fixed value of the third variable. By sweeping or gridding over the third variable, the complete stabilizing set can be determined constructively. The solution shows that the stabilizing set for any discrete-time linear time invariant (DTLTI) plant, when it is nonempty, consists of unions of convex polygons in the space of the PID gains. We show by examples how several performance specifications can be simultaneously satisfied once these stabilizing sets are found. Using this computation we solve two design problems. The first problem is related to deadbeat control wherein one places all closed-loop characteristic roots at the origin so that the transients are zeroed out in a finite number of steps. In general deadbeat control is not possible using PID’s and a reasonable goal is to place the closed-loop characteristic roots as close to the origin as possible so that the transient error decays quickly. Such designs have been advocated in the literature on sampled data control systems. We show how the stabilization solution obtained by us can be exploited to give a constructive determination of such “maximally” deadbeat designs. 
The second problem involves the determination of the maximum delay in the loop that a given plant under PID control can be made to tolerate. We show how our solution can also be extended to determine this maximum delay for a given DTLTI plant.
4.2 Preliminaries
We will consider a discrete time control system consisting of a single input single output (SISO) DTLTI plant described by its z-domain transfer function G(z) and the unity feedback DTLTI controller C(z) (see Figure 4.2):
Figure 4.2 A unity feedback system.
The transfer functions shown above are rational functions and we write

G(z) = N(z)/D(z),    C(z) = NC(z)/DC(z).
The characteristic polynomial of the closed-loop system is

Π(z) := DC(z)D(z) + NC(z)N(z)

and a necessary and sufficient condition for stability of the closed-loop control system is that the characteristic roots, namely the zeros of Π(z), have magnitude less than unity. This condition is commonly referred to as Schur stability of Π(z). The stabilization problem can be stated as that of determining C(z) so that for the given G(z), the closed-loop characteristic polynomial Π(z) is Schur. For a fixed structure controller, such as a PID controller, C(z) is characterized by a set of gains K (three gains in the case of PID) and this gain vector K must be chosen to stabilize Π(z) if possible. An important problem in design is the determination of the entire set S of stabilizing gains in a constructive way. A useful characterization of S should allow the designer to test the feasibility of imposing various performance constraints and checking their attainability with the controller parameters. Thus, if P represents the set of controller parameter values K attaining a performance specification, the designer should be able to constructively determine S ∩ P, the subset of S attaining specifications. To attain a set of independent specifications one would need to determine the intersection of S with each of the corresponding sets Pi and determine the common intersection if it is nonempty. We shall show how the stabilizing set S and the performance specification sets S ∩ Pi
can be constructively determined for the case of digital PID controllers and two different specifications. This solution depends on certain root counting formulas which we need to develop. These formulas in turn depend on the Tchebyshev representation discussed next.
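The Schur stability condition described above can be sketched directly in code: form Π(z) = DC(z)D(z) + NC(z)N(z) and check that every root has magnitude below one. The first order example plant and gain values are assumptions for illustration, not from the text.

```python
import numpy as np

def schur_stable(NC, DC, N, D):
    # Pi(z) = DC(z) D(z) + NC(z) N(z); coefficient lists in descending powers
    pi = np.polyadd(np.polymul(DC, D), np.polymul(NC, N))
    return bool(np.all(np.abs(np.roots(pi)) < 1.0))

# Example: G(z) = 1/(z - 0.5) with a pure gain controller C(z) = K
print(schur_stable([0.2], [1.0], [1.0], [1.0, -0.5]))   # K = 0.2: root at 0.3 -> True
print(schur_stable([1.6], [1.0], [1.0], [1.0, -0.5]))   # K = 1.6: root at -1.1 -> False
```

A PID version would simply supply the second order numerator of C(z) as `NC` for each candidate gain triple.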
4.3 Tchebyshev Representation and Root Clustering
The stabilization results to be developed later in this chapter require us to determine the complex plane image of polynomials and rational functions on a circle of radius ρ centered at the origin.
4.3.1 Tchebyshev Representation of Real Polynomials
Let us consider a polynomial P(z) = an z^n + ··· + a0 with real coefficients. The image of P(z) evaluated on the circle Cρ of radius ρ, centered at the origin, is

{P(z) : z = ρe^{jθ}, 0 ≤ θ ≤ 2π}.    (4.1)

As the coefficients ai are real, P(ρe^{jθ}) and P(ρe^{−jθ}) are complex conjugates, and so it suffices to determine the image of the upper half of the circle:

{P(z) : z = ρe^{jθ}, 0 ≤ θ ≤ π}.    (4.2)

Since

z^k |_{z=ρe^{jθ}} = ρ^k (cos kθ + j sin kθ),    (4.3)

we have

P(ρe^{jθ}) = (an ρ^n cos nθ + ··· + a1 ρ cos θ + a0) + j(an ρ^n sin nθ + ··· + a1 ρ sin θ) = R̄(ρ, θ) + j Ī(ρ, θ).    (4.4)
It is well known that cos kθ and sin kθ / sin θ can be written as polynomials in cos θ using Tchebyshev polynomials. Write u = −cos θ. Then as θ runs from 0 to π, u runs from −1 to +1. Now

    e^{jθ} = cos θ + j sin θ = −u + j√(1 − u^2)    (4.5)

and we write

    cos kθ =: c_k(u)  and  sin kθ / sin θ =: s_k(u)    (4.6)

where c_k(u) and s_k(u) are real polynomials in u, known as the Tchebyshev polynomials of the first and second kind, respectively. It is easy to show that

    s_k(u) = −c_k′(u)/k,  k = 1, 2, ...    (4.7)

and that the Tchebyshev polynomials satisfy the recursive relation

    c_{k+1}(u) = −u c_k(u) − (1 − u^2) s_k(u),  k = 1, 2, ...    (4.8)

From (4.7) and (4.8), we can determine c_k(u) and s_k(u) for all k. Now

    (ρe^{jθ})^k = ρ^k cos kθ + jρ^k sin kθ    (4.9)

and so we define the generalized Tchebyshev polynomials

    c_k(u, ρ) = ρ^k c_k(u),  s_k(u, ρ) = ρ^k s_k(u),  k = 0, 1, 2, ...    (4.10)

and note that

    s_k(u, ρ) = −(1/k) d[c_k(u, ρ)]/du,  k = 1, 2, ...
    c_{k+1}(u, ρ) = −ρu c_k(u, ρ) − (1 − u^2) ρ s_k(u, ρ),  k = 1, 2, ...    (4.11)
The generalized Tchebyshev polynomials are displayed in the table below for k = 1, ..., 5:

    k    c_k(u, ρ)                        s_k(u, ρ)
    1    −ρu                              ρ
    2    ρ^2 (2u^2 − 1)                   −2ρ^2 u
    3    ρ^3 (−4u^3 + 3u)                 ρ^3 (4u^2 − 1)
    4    ρ^4 (8u^4 − 8u^2 + 1)            ρ^4 (−8u^3 + 4u)
    5    ρ^5 (−16u^5 + 20u^3 − 5u)        ρ^5 (16u^4 − 12u^2 + 1)
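The table entries can be regenerated mechanically: since c_k(u) = cos kθ and s_k(u) = sin kθ/sin θ with u = −cos θ, both families satisfy the classical Chebyshev three-term recurrence f_{k+1}(u) = −2u f_k(u) − f_{k−1}(u). A minimal sketch in Python with NumPy (the function name is ours, not from any library):

```python
import numpy as np
from numpy.polynomial import polynomial as P

def tcheb_polys(k_max):
    """Coefficient arrays (ascending powers of u) for c_k(u) = cos(k*theta)
    and s_k(u) = sin(k*theta)/sin(theta), with u = -cos(theta).
    Both satisfy the three-term recurrence f_{k+1} = -2u*f_k - f_{k-1}."""
    c = [np.array([1.0]), np.array([0.0, -1.0])]   # c_0 = 1, c_1 = -u
    s = [np.array([0.0]), np.array([1.0])]         # s_0 = 0, s_1 = 1
    for k in range(1, k_max):
        c.append(P.polysub(P.polymul([0.0, -2.0], c[k]), c[k - 1]))
        s.append(P.polysub(P.polymul([0.0, -2.0], s[k]), s[k - 1]))
    return c, s

# The generalized polynomials of (4.10) follow by scaling: c_k(u, rho) = rho**k * c[k].
```

Running this reproduces the table above, e.g. c_3(u) = −4u^3 + 3u and s_4(u) = −8u^3 + 4u.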
With this notation,

    P(ρe^{jθ}) = R(u, ρ) + j√(1 − u^2) T(u, ρ) =: Pc(u, ρ)    (4.12)

where

    R(u, ρ) = a_n c_n(u, ρ) + a_{n−1} c_{n−1}(u, ρ) + ... + a_1 c_1(u, ρ) + a_0    (4.13)
    T(u, ρ) = a_n s_n(u, ρ) + a_{n−1} s_{n−1}(u, ρ) + ... + a_1 s_1(u, ρ).    (4.14)

R(u, ρ) and T(u, ρ) are polynomials in u and ρ. The complex plane image of P(z), as z traverses the upper half of the circle Cρ, can be obtained by evaluating Pc(u, ρ) as u runs from −1 to +1.

LEMMA 4.1
For a fixed ρ > 0,
(a) if P(z) has no roots on the circle of radius ρ > 0, then R(u, ρ) and T(u, ρ) have no common roots for u ∈ [−1, 1] and R(±1, ρ) ≠ 0;
(b) if P(z) has 2m roots at z = −ρ (z = +ρ), then R(u, ρ) and T(u, ρ) have m roots each at u = +1 (u = −1);
(c) if P(z) has 2m − 1 roots at z = −ρ (z = +ρ), then R(u, ρ) and T(u, ρ) have m and m − 1 roots, respectively, at u = +1 (u = −1);
(d) if P(z) has q_i pairs of complex roots at z = −ρu_i ± jρ√(1 − u_i^2), for u_i ≠ ±1, then R(u, ρ) and T(u, ρ) each have q_i real roots at u = u_i.

PROOF To prove (a), note that P(ρe^{jθ}) ≠ 0 for θ ∈ [0, π] and therefore Pc(u, ρ) ≠ 0 for u ∈ [−1, +1]; hence the result. The statements in (b)-(d) may be verified by direct calculation.

When the circle of interest is the unit circle, that is ρ = 1, we will write Pc(u, 1) = Pc(u) and also

    R(u, 1) =: R(u),  T(u, 1) =: T(u)

for notational simplicity.
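Equations (4.12)-(4.14) can also be checked pointwise without constructing R and T symbolically: evaluate P at z = −ρu + jρ√(1 − u^2) and divide the imaginary part by √(1 − u^2). A small Python/NumPy sketch (the function name is ours):

```python
import numpy as np

def tcheb_image(a, rho, u):
    """Evaluate P(z) at z = rho*e^{j*theta} with u = -cos(theta), i.e. at
    z = -rho*u + j*rho*sqrt(1-u^2), and split off R(u, rho) and T(u, rho)
    so that P = R + j*sqrt(1-u^2)*T as in (4.12).
    `a` lists the coefficients [a_0, a_1, ..., a_n] in ascending powers."""
    z = -rho * u + 1j * rho * np.sqrt(1.0 - u * u)
    val = np.polyval(a[::-1], z)           # np.polyval expects descending order
    T = val.imag / np.sqrt(1.0 - u * u)    # valid for u strictly inside (-1, 1)
    return val.real, T
```

For P(z) = z^2 and ρ = 1 this reproduces R = c_2(u) = 2u^2 − 1 and T = s_2(u) = −2u from the table above.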
4.3.2 Interlacing Conditions for Root Clustering and Schur Stability
The formulas of the last section can be used to derive conditions for root clustering in circular regions, that is, for the roots to lie strictly within a circle of radius ρ. For Schur stability we simply take ρ = 1. As before, let P(z) be a real polynomial of degree n and

    P(ρe^{jθ}) = R̄(θ, ρ) + j Ī(θ, ρ) = R(u, ρ) + j√(1 − u^2) T(u, ρ),  where u = −cos θ,    (4.15)

where R(u, ρ) and T(u, ρ) are real polynomials in u of degree n and n − 1, respectively, for fixed ρ.

THEOREM 4.1
P(z) has all its zeros strictly within Cρ if and only if
(a) R(u, ρ) has n real distinct zeros r_i, i = 1, 2, ..., n in (−1, 1);
(b) T(u, ρ) has n − 1 real distinct zeros t_j, j = 1, 2, ..., n − 1 in (−1, 1);
(c) the zeros r_i and t_j interlace:
    −1 < r_1 < t_1 < r_2 < t_2 < ... < t_{n−1} < r_n < +1.
PROOF Let

    α_j = cos^{−1}(−t_j),  j = 1, 2, ..., n−1,  i.e., t_j = −cos α_j with α_j ∈ (0, π),

and set α_0 = 0, α_n = π. Similarly, let

    β_i = cos^{−1}(−r_i),  i = 1, 2, ..., n,  β_i ∈ (0, π).

Then (α_0, α_1, ..., α_n) are the n + 1 zeros of Ī(θ, ρ) and (β_1, β_2, ..., β_n) are the n zeros of R̄(θ, ρ). Condition (c) means that the α_i and β_j satisfy

    0 = α_0 < β_1 < α_1 < β_2 < ... < α_{n−1} < β_n < α_n = π.    (4.16)

Conditions (a)-(c) imply that the plot of P(ρe^{jθ}) for θ ∈ [0, π] turns counterclockwise through exactly 2n quadrants, and this condition is equivalent to P(z) having n zeros inside the circle Cρ.

REMARK 4.1 The three conditions given in the above theorem may be referred to as interlacing conditions on R(u, ρ) and T(u, ρ). By setting ρ = 1 we obtain conditions for Schur stability in terms of interlacing of the zeros of R(u) and T(u). This constitutes a Hermite-Biehler theorem for Schur stability.
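Theorem 4.1 with ρ = 1 can be rendered numerically: build R(u) and T(u) from (4.13)-(4.14), find their real zeros in (−1, 1), and check the counts and the interlacing. The sketch below (Python/NumPy assumed; root finding via np.roots is adequate for modest degrees but not robust for ill-conditioned polynomials) uses function names of our own:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def real_roots_in_open_interval(coeffs_asc, lo=-1.0, hi=1.0):
    r = np.roots(coeffs_asc[::-1])                  # np.roots wants descending order
    r = r.real[np.abs(r.imag) < 1e-9]
    return np.sort(r[(r > lo) & (r < hi)])

def schur_by_interlacing(a):
    """Schur test for a real polynomial a_0 + a_1 z + ... + a_n z^n via the
    interlacing conditions (a)-(c) of Theorem 4.1 with rho = 1."""
    n = len(a) - 1
    # c_k and s_k by the three-term recurrence; then R, T as in (4.13)-(4.14).
    c = [np.array([1.0]), np.array([0.0, -1.0])]
    s = [np.array([0.0]), np.array([1.0])]
    for k in range(1, n):
        c.append(P.polysub(P.polymul([0.0, -2.0], c[k]), c[k - 1]))
        s.append(P.polysub(P.polymul([0.0, -2.0], s[k]), s[k - 1]))
    R = np.zeros(n + 1)
    T = np.zeros(n + 1)
    for k in range(n + 1):
        R[: len(c[k])] += a[k] * c[k]
        T[: len(s[k])] += a[k] * s[k]
    rr = real_roots_in_open_interval(R)
    tt = real_roots_in_open_interval(T)
    if len(rr) != n or len(tt) != n - 1:
        return False
    merged = np.empty(2 * n - 1)                    # r_1, t_1, r_2, t_2, ..., r_n
    merged[0::2] = rr
    merged[1::2] = tt
    return bool(np.all(np.diff(merged) > 0))        # strict interlacing
```

For z^2 − 0.25 (roots ±0.5) the test passes; for z − 2 it fails because R has no zero in (−1, 1).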
4.3.3 Tchebyshev Representation of Rational Functions
Let Q(z) be a ratio of two real polynomials P1(z) and P2(z). We compute the image of Q(z) on Cρ and write it as the corresponding Tchebyshev representation Qc(u, ρ) as follows. Let

    Pi(z)|_{z=−ρu+jρ√(1−u^2)} = Ri(u, ρ) + j√(1 − u^2) Ti(u, ρ),  for i = 1, 2.    (4.17)

Then, multiplying numerator and denominator by the conjugate of the denominator,

    Q(z)|_{z=−ρu+jρ√(1−u^2)} = [R1 + j√(1 − u^2) T1][R2 − j√(1 − u^2) T2] / ([R2 + j√(1 − u^2) T2][R2 − j√(1 − u^2) T2])
                             = R(u, ρ) + j√(1 − u^2) T(u, ρ) =: Qc(u, ρ)    (4.18)

where

    R(u, ρ) = [R1 R2 + (1 − u^2) T1 T2] / [R2^2 + (1 − u^2) T2^2]
    T(u, ρ) = [T1 R2 − R1 T2] / [R2^2 + (1 − u^2) T2^2]

(all Ri, Ti evaluated at (u, ρ)). Note that R(u, ρ) and T(u, ρ) are rational functions of the real variable u, which runs from −1 to +1. This representation will be needed in a later section on the solution of the maximally deadbeat problem.
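The identity (4.18) is easy to verify numerically: the R(u, ρ) and T(u, ρ) assembled from the Ri, Ti must reproduce the real and imaginary parts of Q evaluated on Cρ. A Python/NumPy sketch (function name ours):

```python
import numpy as np

def rational_tcheb(p1, p2, rho, u):
    """Tchebyshev representation (4.18) of Q = P1/P2 on C_rho: returns
    (R, T) with Q = R + j*sqrt(1-u^2)*T, both rational in u.
    p1, p2 are ascending coefficient lists; u strictly inside (-1, 1)."""
    root = np.sqrt(1.0 - u * u)
    z = -rho * u + 1j * rho * root
    v1 = np.polyval(p1[::-1], z)
    v2 = np.polyval(p2[::-1], z)
    R1, T1 = v1.real, v1.imag / root
    R2, T2 = v2.real, v2.imag / root
    denom = R2**2 + (1.0 - u * u) * T2**2          # strictly positive off the zeros of P2
    R = (R1 * R2 + (1.0 - u * u) * T1 * T2) / denom
    T = (T1 * R2 - R1 * T2) / denom
    return R, T
```

Comparing against a direct evaluation of Q(z) at the same point confirms the decomposition.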
4.4 Root Counting Formulas
In this section we develop some formulas for counting the root distribution with respect to the circle Cρ , for real polynomials and real rational functions. These formulas will be necessary for our solution of the stabilization problem but are also of independent interest. We begin by relating root distribution to phase unwrapping.
4.4.1 Phase Unwrapping and Root Distribution
Let φP(θ) := Arg P(ρe^{jθ}) denote the phase of the polynomial P(z) evaluated at z = ρe^{jθ}, and let Δ_{θ1}^{θ2}[φP(θ)] denote the net change in, or unwrapped, phase of P(ρe^{jθ}) as θ increases from θ1 to θ2. Similar notation applies to a rational function Q(z) with Tchebyshev representation Qc(u, ρ): let φ_{Qc}(u) = Arg Qc(u, ρ) denote the phase of Qc(u, ρ) and Δ_{u1}^{u2}[φ_{Qc}(u)] the net change in its unwrapped phase as u increases from u1 to u2.

LEMMA 4.2
Let the real polynomial P(z) have i roots in the interior of the circle Cρ and no roots on the circle. Then

    Δ_0^π[φP(θ)] = iπ.

PROOF From geometric considerations it is easily seen that each interior root contributes 2π to Δ_0^{2π}[φP(θ)], and therefore, because of the symmetry of the roots about the real axis, the interior roots contribute iπ to Δ_0^π[φP(θ)].
We state the corresponding result for a rational function. The proof is similar to that of the previous lemma and is omitted.

LEMMA 4.3
Let Q(z) = P1(z)/P2(z), where the real polynomials P1(z) and P2(z) have i1 and i2 roots, respectively, in the interior of the circle Cρ and no roots on the circle. Then

    Δ_0^π[φQ(θ)] = π(i1 − i2) = Δ_{−1}^{+1}[φ_{Qc}(u)].
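Lemma 4.2 can be checked numerically by sampling the phase on the upper half circle and unwrapping it; a Python/NumPy sketch (the grid size is an arbitrary choice of ours):

```python
import numpy as np

def net_phase_change(a, rho=1.0, npts=20001):
    """Unwrapped phase change of P(rho*e^{j*theta}) as theta runs 0 -> pi
    (Lemma 4.2): the result should equal pi * (number of roots inside C_rho).
    `a` lists coefficients in ascending powers; no roots may lie on C_rho."""
    theta = np.linspace(0.0, np.pi, npts)
    vals = np.polyval(a[::-1], rho * np.exp(1j * theta))
    phase = np.unwrap(np.angle(vals))
    return phase[-1] - phase[0]
```

For (z − 0.5)(z + 0.2), with two roots inside the unit circle, the computed change is 2π; for z − 2 it is 0.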
4.4.2 Root Counting and Tchebyshev Representation
In this section we first develop formulas relating the number of roots of a real polynomial inside a circle Cρ to its Tchebyshev representation. Let us begin with a real polynomial P(z) and its Tchebyshev representation

    Pc(u, ρ) = R(u, ρ) + j√(1 − u^2) T(u, ρ)

as developed before. Henceforth, let t_1, ..., t_k denote the real distinct zeros of T(u, ρ) of odd multiplicity, for u ∈ (−1, 1), ordered as follows:

    −1 < t_1 < t_2 < ... < t_k < +1.

Suppose also that T(u, ρ) has p zeros at u = −1, and let f^{(i)}(x_0) denote the i-th derivative of f(x) evaluated at x = x_0. Let us also define

    sgn[x] = −1 if x < 0;  0 if x = 0;  +1 if x > 0.

We begin with:

THEOREM 4.2
Let P(z) be a real polynomial with no roots on the circle Cρ, and suppose that T(u, ρ) has p zeros at u = −1. Then the number of roots i of P(z) in the interior of the circle Cρ is given by

    i = (1/2) sgn[T^{(p)}(−1, ρ)] { sgn[R(−1, ρ)] + 2 Σ_{j=1}^{k} (−1)^j sgn[R(t_j, ρ)] + (−1)^{k+1} sgn[R(+1, ρ)] }.    (4.19)

PROOF Recall that

    P(ρe^{jθ}) = R̄(θ, ρ) + j Ī(θ, ρ)
and define θ_i, i = 1, ..., k through

    t_i = −cos θ_i,  θ_i ∈ [0, π].

Let θ_0 := 0, t_0 := −1 and θ_{k+1} := π, and note that the θ_i, i = 0, 1, ..., k+1 are zeros of Ī(θ, ρ). The proof depends on the following elementary and easily verified facts, which we state first. (In the following, θ_i^+ denotes a point immediately to the right of θ_i.)

(a) Δ_0^π[φ(θ)] = iπ
(b) Δ_0^π[φ(θ)] = Δ_0^{θ_1}[φ(θ)] + Δ_{θ_1}^{θ_2}[φ(θ)] + ... + Δ_{θ_k}^π[φ(θ)]
(c) Δ_{θ_i}^{θ_{i+1}}[φ(θ)] = (π/2) sgn[Ī(θ_i^+, ρ)] { sgn[R̄(θ_i, ρ)] − sgn[R̄(θ_{i+1}, ρ)] },  i = 0, 1, ..., k
(d) sgn[Ī(θ_i^+, ρ)] = −sgn[Ī(θ_{i+1}^+, ρ)],  i = 0, 1, ..., k
(e) sgn[Ī(0^+, ρ)] = sgn[T^{(p)}(−1, ρ)]
(f) sgn[R̄(θ_i, ρ)] = sgn[R(t_i, ρ)],  i = 0, 1, ..., k.

Using (a)-(f), we have

    iπ = Δ_0^π[φ(θ)]
       = Δ_0^{θ_1}[φ(θ)] + ... + Δ_{θ_k}^π[φ(θ)],  by (a) and (b)
       = (π/2) { sgn[Ī(0^+, ρ)](sgn[R̄(0, ρ)] − sgn[R̄(θ_1, ρ)]) + ... + sgn[Ī(θ_k^+, ρ)](sgn[R̄(θ_k, ρ)] − sgn[R̄(π, ρ)]) },  by (c)
       = (π/2) sgn[Ī(0^+, ρ)] { (sgn[R̄(0, ρ)] − sgn[R̄(θ_1, ρ)]) − (sgn[R̄(θ_1, ρ)] − sgn[R̄(θ_2, ρ)]) + ... + (−1)^k (sgn[R̄(θ_k, ρ)] − sgn[R̄(π, ρ)]) },  by (d)
       = (π/2) sgn[T^{(p)}(−1, ρ)] { sgn[R̄(0, ρ)] − 2 sgn[R̄(θ_1, ρ)] + 2 sgn[R̄(θ_2, ρ)] + ... + (−1)^k 2 sgn[R̄(θ_k, ρ)] + (−1)^{k+1} sgn[R̄(π, ρ)] },  by (e)
       = (π/2) sgn[T^{(p)}(−1, ρ)] { sgn[R(−1, ρ)] − 2 sgn[R(t_1, ρ)] + 2 sgn[R(t_2, ρ)] + ... + (−1)^k 2 sgn[R(t_k, ρ)] + (−1)^{k+1} sgn[R(+1, ρ)] },  by (f)
from which the result follows.

The result derived above can now be extended to the case of rational functions. Let Q(z) = P1(z)/P2(z), where Pi(z), i = 1, 2 are real polynomials. Let

    Ri(u, ρ) + j√(1 − u^2) Ti(u, ρ),  i = 1, 2

denote the Tchebyshev representations of Pi(z), i = 1, 2, and let Qc(u, ρ) denote the Tchebyshev representation of Q(z) on the circle Cρ. Let R(u, ρ), T(u, ρ) be defined by

    R(u, ρ) = R1(u, ρ)R2(u, ρ) + (1 − u^2)T1(u, ρ)T2(u, ρ)
    T(u, ρ) = T1(u, ρ)R2(u, ρ) − R1(u, ρ)T2(u, ρ).

Suppose that T(u, ρ) has p zeros at u = −1, and let t_1, ..., t_k denote the real distinct zeros of T(u, ρ) of odd multiplicity, ordered as follows:

    −1 < t_1 < t_2 < ... < t_k < +1.

THEOREM 4.3
Let Q(z) = P1(z)/P2(z), where Pi(z), i = 1, 2 are real polynomials with i1 and i2 zeros, respectively, inside the circle Cρ and no zeros on it. Then

    i1 − i2 = (1/2) sgn[T^{(p)}(−1, ρ)] { sgn[R(−1, ρ)] + 2 Σ_{j=1}^{k} (−1)^j sgn[R(t_j, ρ)] + (−1)^{k+1} sgn[R(+1, ρ)] }.    (4.20)

PROOF The proof is based on the representation of Qc(u, ρ) developed in (4.18). Since the denominator of (4.18) is strictly positive for u ∈ [−1, +1], the phase unwrapping can be computed from the numerator. The rest of the proof is similar to the proof for the polynomial case and is omitted.
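Theorem 4.2 translates directly into an algorithm: form R and T, locate the zeros of T in (−1, 1), and evaluate the signature (4.19). The sketch below (Python/NumPy; the function name is ours) covers the generic case p = 0 with simple zeros of T:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def roots_inside(a, rho=1.0):
    """Count zeros of a_0 + a_1 z + ... + a_n z^n strictly inside |z| < rho
    by the signature formula (4.19).  Assumes no zeros on C_rho, T(-1) != 0
    (p = 0), and simple real zeros of T in (-1, 1)."""
    n = len(a) - 1
    c = [np.array([1.0]), np.array([0.0, -1.0])]
    s = [np.array([0.0]), np.array([1.0])]
    for k in range(1, n):
        c.append(P.polysub(P.polymul([0.0, -2.0], c[k]), c[k - 1]))
        s.append(P.polysub(P.polymul([0.0, -2.0], s[k]), s[k - 1]))
    R = np.zeros(n + 1)
    T = np.zeros(n + 1)
    for k in range(n + 1):
        R[: len(c[k])] += a[k] * rho**k * c[k]
        T[: len(s[k])] += a[k] * rho**k * s[k]
    r = np.roots(T[::-1])
    t = np.sort(r.real[(np.abs(r.imag) < 1e-9) & (np.abs(r.real) < 1.0)])
    pts = np.concatenate(([-1.0], t, [1.0]))
    sg = np.sign(np.polyval(R[::-1], pts))
    total = sg[0] + (-1) ** (len(t) + 1) * sg[-1]
    total += sum(2 * (-1) ** (j + 1) * sg[j + 1] for j in range(len(t)))
    return int(round(0.5 * np.sign(np.polyval(T[::-1], -1.0)) * total))
```

For (z − 0.5)(z + 0.2)(z − 2) the formula yields 2, matching the two roots inside the unit circle; shrinking ρ below 0.5 for z^2 − 0.25 drives the count to 0.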
4.5 Digital PI, PD, and PID Controllers
In this section we give general parametrizations of PI, PD, and PID controllers in transfer function form. These will be used in the sequel to compute the stabilizing set.
1. For PI controllers, we have

    C(z) = KP + KI T · z/(z − 1) = [(KP + KI T)z − KP] / (z − 1)
         = (KP + KI T) [z − KP/(KP + KI T)] / (z − 1).

Thus, we can use

    C(z) = K1(z − K2)/(z − 1)    (4.21)

where

    KP = K1 K2  and  KI = (K1 − K1 K2)/T.    (4.22)

2. For PD controllers, we have

    C(z) = KP + (KD/T) · (z − 1)/z = [(KP T + KD)z − KD] / (Tz)
         = (KP + KD/T) [z − (KD/T)/(KP + KD/T)] / z.

So we can use

    C(z) = K1(z − K2)/z    (4.23)

where

    KP = K1 − K1 K2  and  KD = K1 K2 T.    (4.24)

3. The general formula of a discrete PID controller, using backward differences to preserve causality, is

    C(z) = KP + KI T · z/(z − 1) + (KD/T) · (z − 1)/z
         = [(KP + KI T + KD/T) z^2 + (−KP − 2KD/T) z + KD/T] / [z(z − 1)].

We use the representation

    C(z) = (K2 z^2 + K1 z + K0) / [z(z − 1)]    (4.25)

where

    KP = −K1 − 2K0,  KI = (K0 + K1 + K2)/T,    (4.26)

and

    KD = K0 T.    (4.27)
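The maps (4.25)-(4.27) are linear and easily inverted; a small sketch (Python, function names ours) that is convenient when moving between the (K2, K1, K0) design space and the physical PID gains:

```python
def pid_to_K(Kp, Ki, Kd, T):
    """(KP, KI, KD) -> (K2, K1, K0) for C(z) = (K2 z^2 + K1 z + K0)/(z(z-1)),
    obtained by matching coefficients in (4.25)."""
    K0 = Kd / T
    K1 = -Kp - 2.0 * Kd / T
    K2 = Kp + Ki * T + Kd / T
    return K2, K1, K0

def K_to_pid(K2, K1, K0, T):
    """Inverse map, equations (4.26)-(4.27)."""
    Kp = -K1 - 2.0 * K0
    Ki = (K0 + K1 + K2) / T
    Kd = K0 * T
    return Kp, Ki, Kd
```

The two maps are exact inverses of each other, which is a quick consistency check on (4.26)-(4.27).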
4.6 Computation of the Stabilizing Set
The results of the previous sections on Tchebyshev representations, root counting and clustering, and the representations of digital PID controllers are used here to develop constructive techniques for computing the stabilizing set. The main idea is to construct a polynomial or rational function such that the controller parameters are separated as much as possible in the real and imaginary parts. By applying the root counting formulas to this function we can often “linearize” the problem. We emphasize that other root counting formulas such as Jury’s test applied to these problems result in difficult nonlinear problems, which are often impossible to solve. In this section we outline how the technique works for P, PI, and PD controllers, and present a complete development along with an example for PID controllers in the next section.
4.6.1 Constant Gain Stabilization
In this subsection, we apply the previous results to the problem of constant gain stabilization of a digital control system. Consider the control system shown in Figure 4.3, wherein the plant is represented by its discrete time transfer function P(z) = N(z)/D(z), with N(z), D(z) polynomials with real coefficients, degree D(z) = n, and degree N(z) ≤ n.
Figure 4.3 A closed-loop system with constant gain.
The closed-loop system is stable if and only if the characteristic polynomial, denoted by δ(z), is Schur stable. Here

    δ(z) = D(z) + K N(z)

and therefore our problem is to determine all values of K that render δ(z) Schur. To proceed, write the Tchebyshev representations of D(z) and N(z) as

    D(e^{jθ}) = RD(u) + j√(1 − u^2) TD(u)
    N(e^{jθ}) = RN(u) + j√(1 − u^2) TN(u),

respectively. Note also that

    N(e^{−jθ}) = RN(u) − j√(1 − u^2) TN(u)

and that

    N(z^{−1}) = Nr(z)/z^l

where Nr(z) is the reverse polynomial and l is the degree of N(z). Now

    δ(z)N(z^{−1}) = D(z)N(z^{−1}) + K N(z)N(z^{−1})

and therefore

    [δ(z)Nr(z)/z^l]|_{z=e^{jθ}}
      = [RD(u) + j√(1 − u^2) TD(u)][RN(u) − j√(1 − u^2) TN(u)] + K[RN^2(u) + (1 − u^2)TN^2(u)]
      = RD(u)RN(u) + (1 − u^2)TD(u)TN(u) + K[RN^2(u) + (1 − u^2)TN^2(u)]   (=: R(K, u))
        + j√(1 − u^2) [TD(u)RN(u) − RD(u)TN(u)]   (=: j√(1 − u^2) T(u))
      = R(K, u) + j√(1 − u^2) T(u).

We emphasize that the imaginary part of the above expression has been rendered independent of K as a result of multiplying δ(z) by N(z^{−1}). This is what we mean by parameter separation. The ground has now been prepared for the application of the root counting formulas developed earlier. For this, let t_i, i = 1, 2, ..., k denote the real zeros of odd multiplicity of the fixed T(u), for u ∈ (−1, +1), and set t_0 = −1, t_{k+1} = +1. Write

    sgn[R(K, t_j)] = x_j,  j = 0, 1, ..., k+1    (4.28)

and note that each x_j can be either +1, −1 or 0. We call a particular choice of [x_0, x_1, ..., x_{k+1}] a string. Let iδ, iNr denote the number of zeros of δ(z) and Nr(z) inside the unit circle. For simplicity, assume that N(z) has no unit circle zeros, and therefore neither does Nr(z). Application of the formula of Lemma 4.3 now gives

    iδ + iNr − l = (1/2) sgn[T^{(p)}(−1)] { sgn[R(K, −1)] + 2 Σ_{j=1}^{k} (−1)^j sgn[R(K, t_j)] + (−1)^{k+1} sgn[R(K, +1)] }.
For closed-loop stability we need iδ = n. Using this in conjunction with the above formula, wherein we know iNr and l, yields the sets of strings corresponding to stability. Call this the set of feasible strings. Each feasible string gives a set of linear inequalities in K, and the complete set of stabilizing gains is obtained by solving such sets of linear inequalities over the set of feasible strings. The procedure is best illustrated by an example.

Example 4.1
Consider the plant

    P(z) = N(z)/D(z) = (z^4 + 1.93z^3 + 2.2692z^2 + 0.1443z − 0.7047) / (z^5 − 0.2z^4 − 3.005z^3 − 3.9608z^2 − 0.0985z + 1.2311).

Then

    RD(u) = −16u^5 − 1.6u^4 + 32.02u^3 − 6.3216u^2 − 13.9165u + 4.9919
    TD(u) = 16u^4 + 1.6u^3 − 24.02u^2 + 7.1216u + 3.9065
    RN(u) = 8u^4 − 7.72u^3 − 3.4616u^2 + 5.6457u − 1.9739
    TN(u) = −8u^3 + 7.72u^2 − 0.5384u − 1.7857

and

    T(u) = TD(u)RN(u) − RD(u)TN(u) = −11.2752u^4 + 7.5669u^3 + 16.7782u^2 − 14.1655u + 1.203.

The roots of T(u) of odd multiplicity lying in (−1, 1) are 0.0963 and 0.8358. We also have

    R(K, u) = 11.2752u^5 + 12.1307u^4 − 40.6359u^3 − 7.1779u^2 + 40.8322u − 16.8293
              + K(−11.2752u^4 + 9.7262u^3 + 15.0696u^2 − 20.3653u + 7.085).

Since iδ = 5 is required for stability, and iNr = 2 and l = 4, we must have

    sgn[T^{(p)}(−1)] { sgn[R(K, −1)] − 2 sgn[R(K, 0.0963)] + 2 sgn[R(K, 0.8358)] − sgn[R(K, 1)] } = 6.    (4.29)

Since sgn[T^{(p)}(−1)] = +1, the only feasible string is

    sgn[R(K, −1)] = 1,  sgn[R(K, 0.0963)] = −1,  sgn[R(K, 0.8358)] = 1,  sgn[R(K, 1)] = −1.
This translates into the following set of inequalities:

    R(K, −1) = −23.348 + 21.5185K > 0  ⇒  K > 1.085
    R(K, 0.0963) = −12.998 + 5.2709K < 0  ⇒  K < 2.466
    R(K, 0.8358) = −0.9232 + 0.7673K > 0  ⇒  K > 1.2032
    R(K, 1) = −0.4050 + 0.2403K < 0  ⇒  K < 1.6854.

Therefore, the closed-loop system is stable for 1.2032 < K < 1.6854.

REMARK 4.2 In the above example, we have x_j, j = 0, 1, 2, 3 (see (4.28)). Each x_j may assume the value +1 or −1, since 0 is excluded as we are testing for stability. This leads to 2^4 = 16 possible strings which may satisfy (4.29). In this example, only one of the 16 possible strings satisfies (4.29). In general, it is easy to devise a sorting algorithm to pick out the feasible strings.
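The interval found above can be cross-checked by brute force: form δ(z) = D(z) + K N(z) for sample values of K and test the root magnitudes directly. A Python/NumPy sketch (coefficients in NumPy's descending order; the function name is ours):

```python
import numpy as np

N = [1.0, 1.93, 2.2692, 0.1443, -0.7047]            # numerator of P(z)
D = [1.0, -0.2, -3.005, -3.9608, -0.0985, 1.2311]   # denominator of P(z)

def is_schur(K):
    """True if delta(z) = D(z) + K*N(z) has all roots inside the unit circle."""
    delta = np.polyadd(D, np.polymul([K], N))
    return bool(np.max(np.abs(np.roots(delta))) < 1.0)
```

Sampling K confirms the computed range: K = 1.5 stabilizes, while K = 0 and K = 1.7 do not (at K ≈ 1.6854 a root crosses the unit circle at z = −1, since D(−1) + K N(−1) = −0.8262 + 0.4902K vanishes there).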
4.6.2 Stabilization with PI Controllers
Consider a unity feedback discrete time system with

    Plant:  P(z) = N(z)/D(z)
    PI Controller:  C(z) = K1(z − K2)/(z − 1).

The characteristic polynomial is

    δ(z) = (z − 1)D(z) + K1(z − K2)N(z).    (4.30)

Writing the Tchebyshev representations of D(z), N(z) and N(z^{−1}), we have

    D(z)|_{z=e^{jθ}} =: RD(u) + j√(1 − u^2) TD(u),  u = −cos θ    (4.31)
    N(z)|_{z=e^{jθ}} =: RN(u) + j√(1 − u^2) TN(u)    (4.32)
    N(z^{−1})|_{z=e^{jθ}} =: RN(u) − j√(1 − u^2) TN(u).    (4.33)

To achieve parameter separation, we calculate

    δ(z)N(z^{−1})|_{z=e^{jθ}, u=−cos θ}
      = (−u − 1 + j√(1 − u^2)) [RD(u) + j√(1 − u^2) TD(u)][RN(u) − j√(1 − u^2) TN(u)]
        + [K1(−u + j√(1 − u^2)) − K1 K2] [RN^2(u) + (1 − u^2)TN^2(u)].
Then

    δ(z)N(z^{−1})|_{u=−cos θ} = (−u − 1 + j√(1 − u^2))(P1(u) + j√(1 − u^2) P2(u)) + jK1√(1 − u^2) P3(u) − K1(u + K2)P3(u)

where

    P1(u) = RD(u)RN(u) + (1 − u^2)TD(u)TN(u)
    P2(u) = RN(u)TD(u) − TN(u)RD(u)
    P3(u) = RN^2(u) + (1 − u^2)TN^2(u).

Therefore,

    [δ(z)Nr(z)/z^l]|_{z=e^{jθ}, u=−cos θ} = R(u, K1, K2) + j√(1 − u^2) T(u, K1)    (4.34)

where

    R(u, K1, K2) = −(u + 1)P1(u) − (1 − u^2)P2(u) − K1(u + K2)P3(u)
    T(u, K1) = P1(u) − (u + 1)P2(u) + K1 P3(u).

For a fixed value of K1, we calculate the real distinct zeros t_i of T(u, K1) of odd multiplicity for u ∈ (−1, 1):

    −1 < t_1 < t_2 < ... < t_k < +1.

Let iδ, iNr be the number of zeros of δ(z) and Nr(z) inside the unit circle, respectively. Then we have

    iδ + iNr − l = (1/2) sgn[T^{(p)}(−1)] { sgn[R(−1, K1, K2)] + 2 Σ_{j=1}^{k} (−1)^j sgn[R(t_j, K1, K2)] + (−1)^{k+1} sgn[R(+1, K1, K2)] }.    (4.35)

For closed-loop stability, we need iδ = n + 1, since δ(z) here has degree n + 1. Using this in conjunction with the above formula, we obtain the set of strings of sign patterns for the real part corresponding to stability. These lead to linear inequalities in K2, for each fixed K1.
4.6.3 Stabilization with PD Controllers
Now let

    Plant:  P(z) = N(z)/D(z)
    PD Controller:  C(z) = K1(z − K2)/z.
The characteristic polynomial becomes

    δ(z) = zD(z) + K1(z − K2)N(z).    (4.36)

We have

    δ(z)N(z^{−1})|_{z=e^{jθ}, u=−cos θ} = R(u, K1, K2) + j√(1 − u^2) T(u, K1)    (4.37)

where

    R(u, K1, K2) = −uP1(u) − (1 − u^2)P2(u) − K1(u + K2)P3(u)    (4.38)
    T(u, K1) = K1 P3(u) + P1(u) − uP2(u).    (4.39)

We see that parameter separation has again been achieved: K1 appears only in the imaginary part, and for fixed K1 the real part is linear in K2. Thus, the application of the root counting formulas will yield linear inequalities in K2, for fixed K1. The solution can be completed by sweeping over the range of values of K1 for which an adequate number of real roots t_k exist to attain the root count required for stability.
4.7 Stabilization with PID Controllers
We now consider a discrete time plant as before, with the PID controller

    C(z) = (K2 z^2 + K1 z + K0) / [z(z − 1)].    (4.40)

The characteristic polynomial becomes

    δ(z) = z(z − 1)D(z) + (K2 z^2 + K1 z + K0)N(z).    (4.41)

Multiplying the characteristic polynomial by z^{−1}N(z^{−1}), we have

    z^{−1}δ(z)N(z^{−1}) = (z − 1)D(z)N(z^{−1}) + (K2 z + K1 + K0 z^{−1})N(z)N(z^{−1}).    (4.42)

Using the Tchebyshev representations given in (4.31), (4.32), (4.33), and the facts that

    z = e^{jθ} = −u + j√(1 − u^2)    (4.43)
    z^{−1} = e^{−jθ} = −u − j√(1 − u^2),    (4.44)
we have

    z^{−1}δ(z)N(z^{−1}) = −(u + 1)P1(u) − (1 − u^2)P2(u) − [(K0 + K2)u − K1] P3(u)
                          + j√(1 − u^2) [−(u + 1)P2(u) + P1(u) + (K2 − K0)P3(u)]
                        = R(u, K0, K1, K2) + j√(1 − u^2) T(u, K0, K2).    (4.45)

Now let

    K3 := K2 − K0    (4.46)

and rewrite R(u, K0, K1, K2) and T(u, K0, K2) as follows:

    R(u, K1, K2, K3) = −(u + 1)P1(u) − (1 − u^2)P2(u) − [(2K2 − K3)u − K1] P3(u)    (4.47)
    T(u, K3) = P1(u) − (u + 1)P2(u) + K3 P3(u).    (4.48)
We observe the parameter separation achieved above: K3 appears only in the imaginary part, and K1, K2, K3 appear linearly in the real part. Thus, applying the root counting formulas to the rational function on the left and imposing the stability requirement yields linear inequalities in the parameters for fixed K3. The solution is completed by sweeping over the range of K3 for which an adequate number of real roots t_k exist. We illustrate with an example.

Example 4.2

    G(z) = 1/(z^2 − 0.25).

Then

    RD(u) = 2u^2 − 1.25,  TD(u) = −2u,  RN(u) = 1,  TN(u) = 0

and

    P1(u) = 2u^2 − 1.25,  P2(u) = −2u,  P3(u) = 1.

Recall (4.42). Since G(z) is of order 2 and C(z), the PID controller, is of order 2, the number of roots of δ(z) inside the unit circle is required to be 4 for stability. From Theorem 4.3,

    i1 − i2 = (iδ + iNr) − (l + 1)

where iδ and iNr are the numbers of roots of δ(z) and the reverse polynomial of N(z) inside the unit circle, respectively, and l is the degree of N(z). Since the required iδ is 4, iNr = 0, and l = 0, i1 − i2 is required to be 3.

To illustrate the example in detail, we first fix K3 = 1.3. Then the real roots of T(u, K3) in (−1, 1) are −0.4736 and −0.0264. Furthermore, sgn[T(−1)] = 1, and i1 − i2 = 3 requires that

    (1/2) sgn[T(−1)] { sgn[R(−1)] − 2 sgn[R(−0.4736)] + 2 sgn[R(−0.0264)] − sgn[R(1)] } = 3.

Only one admissible string satisfies this equation, namely

    sgn[R(−1)] = 1,  sgn[R(−0.4736)] = −1,  sgn[R(−0.0264)] = 1,  sgn[R(1)] = −1.

From this string, we have the following set of linear inequalities:

    −1.3 + K1 + 2K2 > 0
    −0.9286 + K1 + 0.9472K2 < 0
    1.1286 + K1 + 0.0528K2 > 0
    −0.2 + K1 − 2K2 < 0.

This set of inequalities characterizes the stability region in (K1, K2) space for the fixed K3 = 1.3. By repeating this procedure over the range of K3 for which T(u, K3) has at least two real roots, we obtain the stability region shown on the left in Figure 4.4. Consider the following relation:

    [KP]   [ −2   −1    0  ] [K0]   [ −1   −2    2  ] [K1]
    [KI] = [ 1/T  1/T  1/T ] [K1] = [ 1/T  2/T −1/T ] [K2]
    [KD]   [  T    0    0  ] [K2]   [  0    T   −T  ] [K3]

Using this relation, we can determine, for a fixed T, the stabilizing region in (KP, KI, KD) space, as shown on the right in Figure 4.4.

REMARK 4.3 An alternative approach to determining the stabilizing set is via D-decomposition. In this approach, one sets δ(e^{jθ}, KP, KI, KD) = 0 and
Figure 4.4 Stability regions in (K1, K2, K3) space (left) and (KP, KI, KD) space (right).
determines the corresponding solution surfaces in the (KP, KI, KD) space. These surfaces partition this space into disjoint open regions, each with a fixed number of roots in the interior of the unit circle. The stabilizing regions then have to be picked out by testing an arbitrary point from each region. Our approach, on the other hand, directly determines the stabilizing regions. It is worthwhile to point out that (4.47) and (4.48) are useful even in D-decomposition, as they show that the stability boundaries in (K1, K2) space are given by straight lines for a fixed K3.
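The four inequalities of Example 4.2 can be spot-checked by computing closed-loop roots directly: with K3 = 1.3 and K0 = K2 − K3, any (K1, K2) satisfying all four inequalities must make δ(z) = z(z − 1)(z^2 − 0.25) + K2 z^2 + K1 z + K0 Schur. A Python/NumPy sketch (function name ours):

```python
import numpy as np

def schur_pid(K1, K2, K3=1.3):
    """Closed-loop Schur test for G = 1/(z^2 - 0.25) under the PID controller
    C = (K2 z^2 + K1 z + K0)/(z(z-1)) with K0 = K2 - K3 (K3 fixed)."""
    K0 = K2 - K3
    # z(z-1)(z^2 - 0.25) = z^4 - z^3 - 0.25 z^2 + 0.25 z  (descending order)
    delta = np.polyadd([1.0, -1.0, -0.25, 0.25, 0.0], [K2, K1, K0])
    return bool(np.max(np.abs(np.roots(delta))) < 1.0)
```

For instance, (K1, K2) = (−0.3, 1.0) satisfies all four inequalities and is indeed stabilizing, while (2.0, 0.0) violates the last one and is not.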
4.7.1 Maximally Deadbeat Control
An important design technique in digital control is deadbeat control, wherein one places all closed-loop poles at the origin. If this is used in conjunction with integral control, the tracking error is zeroed out in a finite number of sampling steps. Deadbeat control requires, in general, that we be able to control all the poles of the system. However, such a pole placement design is in general not possible when a lower order controller is used. Thus, we are motivated to design a PID controller that places the closed-loop poles as close to the origin as possible. The transient response of such a system will decay faster than that of any other design, and therefore the fastest possible convergence of the error under PID control will be achieved. The design scheme to be developed will attempt to place the closed-loop poles in a circle of minimum radius ρ. Let Sρ denote the set of PID controllers achieving such a closed-loop root cluster. We show below how Sρ can be
computed for fixed ρ. The minimum value of ρ can be found by determining the value ρ* for which Sρ* = ∅ but Sρ ≠ ∅ for ρ > ρ*. Now let us again consider the PID controller

    C(z) = (K2 z^2 + K1 z + K0) / [z(z − 1)]    (4.49)
and the characteristic polynomial

    δ(z) = z(z − 1)D(z) + (K2 z^2 + K1 z + K0)N(z).    (4.50)

Note that

    D(z)|_{z=−ρu+jρ√(1−u^2)} = RD(u, ρ) + j√(1 − u^2) TD(u, ρ)    (4.51)
    N(z)|_{z=−ρu+jρ√(1−u^2)} = RN(u, ρ) + j√(1 − u^2) TN(u, ρ)    (4.52)

and

    N(ρ^2 z^{−1})|_{z=−ρu+jρ√(1−u^2)} = N(z)|_{z=−ρu−jρ√(1−u^2)} = RN(u, ρ) − j√(1 − u^2) TN(u, ρ).    (4.53)

We now evaluate

    ρ^2 z^{−1} δ(z) N(ρ^2 z^{−1}) = ρ^2 z^{−1} [z(z − 1)D(z) + (K2 z^2 + K1 z + K0)N(z)] N(ρ^2 z^{−1})    (4.54)

over the circle Cρ:

    ρ^2 z^{−1} δ(z) N(ρ^2 z^{−1})|_{z=−ρu+jρ√(1−u^2)}
      = −ρ^2(ρu + 1)P1(u, ρ) − ρ^3(1 − u^2)P2(u, ρ) − [(K0 + K2 ρ^2)ρu − K1 ρ^2] P3(u, ρ)
        + j√(1 − u^2) [ρ^3 P1(u, ρ) − ρ^2(ρu + 1)P2(u, ρ) + (K2 ρ^2 − K0)ρ P3(u, ρ)]    (4.55)

where

    P1(u, ρ) = RD(u, ρ)RN(u, ρ) + (1 − u^2)TD(u, ρ)TN(u, ρ)    (4.56)
    P2(u, ρ) = RN(u, ρ)TD(u, ρ) − TN(u, ρ)RD(u, ρ)    (4.57)
    P3(u, ρ) = RN^2(u, ρ) + (1 − u^2)TN^2(u, ρ).    (4.58)

By letting

    K3 := K2 ρ^2 − K0,    (4.59)
we have

    ρ^2 z^{−1} δ(z) N(ρ^2 z^{−1})|_{z=−ρu+jρ√(1−u^2)}
      = −ρ^2(ρu + 1)P1(u, ρ) − ρ^3(1 − u^2)P2(u, ρ) − [(2K2 ρ^2 − K3)ρu − K1 ρ^2] P3(u, ρ)
        + j√(1 − u^2) [ρ^3 P1(u, ρ) − ρ^2(ρu + 1)P2(u, ρ) + K3 ρ P3(u, ρ)].    (4.60)

To determine the set of controllers achieving root clustering inside a circle of radius ρ we proceed as before: fix K3, use the root counting formulas of Section 4.4 to develop linear inequalities in K1 and K2, and sweep over the requisite range of K3. This procedure is then repeated as ρ decreases, until the set of stabilizing PID parameters just disappears. The following example illustrates this procedure.
Example 4.3
We consider the same plant used in Example 4.2. Figure 4.5 (left) shows the stabilizing set in the PID gain space at ρ = 0.275. For a smaller value of ρ, the stabilizing region in PID parameter space disappears: no PID controller can push all closed-loop poles inside a circle of radius smaller than 0.275. We select a point inside the region: K0 = 0.0048, K1 = −0.3195, K2 = 0.6390, and K3 = 0.0435. From the relationship in (4.59) (with T = 1, as the displayed numbers imply), we have

    [KP]   [ −1       −2ρ^2      2  ] [K1]   [0.3099]
    [KI] = [ 1/T  (ρ^2 + 1)/T  −1/T ] [K2] = [0.3243]
    [KD]   [  0       ρ^2 T     −T  ] [K3]   [0.0048]

Figure 4.5 (right) shows the closed-loop poles, which lie inside the circle of radius ρ = 0.275. The roots are

    0.2500 ± j0.1118  and  0.2500 ± j0.0387.

We select several sets of stabilizing PID parameters from the set obtained in Example 4.2 (ρ = 1) and compare their step responses. Figure 4.6 shows that the maximally deadbeat design produces a nearly deadbeat response.
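The selected gains of Example 4.3 can be verified by computing the closed-loop spectral radius directly; a Python/NumPy sketch for this plant (function name ours):

```python
import numpy as np

def spectral_radius(K0, K1, K2):
    """Largest closed-loop pole magnitude for G = 1/(z^2 - 0.25) under
    C = (K2 z^2 + K1 z + K0)/(z(z-1))."""
    delta = np.polyadd([1.0, -1.0, -0.25, 0.25, 0.0], [K2, K1, K0])
    return float(np.max(np.abs(np.roots(delta))))
```

The maximally deadbeat gains give a radius of about 0.274 < 0.275, while an arbitrary stabilizing point from Example 4.2 gives a much larger radius, which explains the slower responses in Figure 4.6.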
Figure 4.5 Stability regions with ρ = 0.275 (left). Closed-loop poles of the selected PID gains (right).

Figure 4.6 Maximally deadbeat design (left). Arbitrary stabilization (right).

4.7.2 Maximal Delay Tolerance Design

In some control systems an important design parameter is the delay tolerance of the loop, that is, the maximum delay that can be inserted into the loop without destabilizing it. In digital control a delay of k sampling instants is represented by z^{−k}. We use this to determine the maximum delay that a control loop under PID control can be designed to tolerate. This gives the limit of delay tolerance achievable for the given plant under PID control. Let the plant be

    G(z) = N(z)/D(z).    (4.61)

We consider the problem of finding the maximum delay L* such that the plant can be stabilized by a PID controller. In other words, we seek the maximum value of L* such that the stabilizing PID gain set that simultaneously stabilizes the set of plants
    z^{−L} G(z) = N(z) / (z^L D(z)),  for L = 0, 1, ..., L*    (4.62)

is not empty. Let S_i be the set of PID gains that stabilizes the plant z^{−i}G(z). Then it is clear that

    ∩_{i=0}^{L} S_i  stabilizes z^{−i}G(z) for all i = 0, 1, ..., L.    (4.63)
We illustrate this computation with an example.

Example 4.4
Consider the plant used in Example 4.2. Figure 4.7 (left) shows the stabilizing PID gains when there is no delay (i.e., L = 0). The right figure shows the stabilizing PID gains for L = 0, 1. As seen in the figure, the size of the set is reduced as the delay increases. In many systems, the set disappears beyond some finite value of L*; this is the maximum delay that can be stabilized by any PID controller. To illustrate better, we fix K3 = 1. Figure 4.8 shows that the stability region shrinks as the required delay tolerance increases; for simultaneous stabilization over L = 0, 1, 2, 3 the region vanishes.
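The sets S_i of Example 4.4 can be explored by brute force: a delay of L steps contributes a factor z^L to the plant denominator, so δ_L(z) = z^L · z(z − 1)(z^2 − 0.25) + K2 z^2 + K1 z + K0, and a gain triple belongs to ∩ S_i exactly when every δ_i, i ≤ L, is Schur. A Python/NumPy sketch (function names ours):

```python
import numpy as np

def stabilizes_with_delay(K0, K1, K2, L):
    """True if the PID gains stabilize z^{-L} G(z) for G = 1/(z^2 - 0.25):
    delta_L(z) = z^L * z(z-1)(z^2 - 0.25) + K2 z^2 + K1 z + K0."""
    base = np.polymul([1.0, -1.0, -0.25, 0.25, 0.0], [1.0] + [0.0] * L)
    delta = np.polyadd(base, [K2, K1, K0])
    return bool(np.max(np.abs(np.roots(delta))) < 1.0)

def max_simultaneous_delay(K0, K1, K2, L_cap=10):
    """Largest L such that the gains stabilize z^{-i} G(z) for all i <= L
    (returns -1 if even L = 0 fails)."""
    L = -1
    while L + 1 <= L_cap and stabilizes_with_delay(K0, K1, K2, L + 1):
        L += 1
    return L
```

For example, the maximally deadbeat gains of Example 4.3 stabilize the undelayed loop, while a gain triple outside the stabilizing set fails already at L = 0.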
Figure 4.7 Stability region for delayed systems: L = 0 (left) and L = 0, 1 (right).

Figure 4.8 Stability region for delayed systems with K3 = 1: L = 0; L = 0, 1; L = 0, 1, 2; the region is empty for L = 0, 1, 2, 3.
4.8 Exercises
4.1 For the plants with transfer functions

    (a) P(z) = z / (z^2 − z + 1),
    (b) P(z) = (z − 2) / (z^2 − z + 1),
    (c) P(z) = (z − 0.2) / (2z^3 − z^2 + z − 1),

determine the complete PI and PID controller stabilizing sets using the methods of this chapter.

4.2 Solve the above problem with the additional requirement that the gain margin be at least 3 db.

4.3 To the problem above now add the requirement that the phase margin be at least 25 degrees, and compute the controller sets achieving the gain margin and phase margin specifications simultaneously.

4.4 To the gain margin and phase margin specifications already stated above, add the requirements that (i) the closed-loop system tolerate at least two units of delay and (ii) the unit ramp response have a steady state error less than 10% in magnitude, and recompute the controller sets achieving all specifications.
4.9 Notes and References
The main results of this chapter are taken from Keel, Rego, and Bhattacharyya [133], and Keel and Bhattacharyya [126]. An alternative approach to digital PID controller synthesis was developed in [202], where the bilinear transformation was used in conjunction with the continuous time PID synthesis method of Chapter 2. The design methods presented in this chapter have been implemented as algorithms in LabVIEW and should be commercially available as a part of LabVIEW's Control Design Toolkit (2008).
5 FIRST ORDER CONTROLLERS FOR LTI SYSTEMS
In classical control theory the so-called lag or lead controller is often used to reshape the loop frequency response to provide improved stability margins. The design methods used are ad hoc and often based on trial and error, but they are nevertheless valuable because first order controllers are so widely used; indeed, they are next to PID controllers in importance and usage. In this chapter we give a solution to the problem of finding all first order controllers that stabilize a given linear time-invariant (LTI) continuous time or discrete time plant and attain various performance specifications. These sets of controllers can be displayed in the 2-D or 3-D parameter spaces of the controllers. The main tool used is Neimark's D-decomposition, which was originally introduced in 1949 in the Russian control literature.
5.1 Root Invariant Regions
Consider an arbitrary LTI plant and a first order controller (see Figure 5.1).
[Figure 5.1 A unity feedback system with controller C(s) and plant P(s).]
Let
Plant: P(s) := N(s)/D(s)   (5.1)
Controller: C(s) := (x1 s + x2)/(s + x3).   (5.2)
We shall assume that the plant P(s) is stabilizable by a controller of some order, not necessarily first order. Let us use the standard even-odd decomposition of polynomials:
N(s) := Ne(s²) + s No(s²)   (5.3)
D(s) := De(s²) + s Do(s²).   (5.4)
The characteristic polynomial of the closed-loop system is
δ(s) = D(s)(s + x3) + N(s)(x1 s + x2)
= [De(s²) + s Do(s²)](s + x3) + [Ne(s²) + s No(s²)](x1 s + x2)
= [s² Do(s²) + x3 De(s²) + x2 Ne(s²) + x1 s² No(s²)] + s[De(s²) + x3 Do(s²) + x2 No(s²) + x1 Ne(s²)].
With s = jω, we have
δ(jω) = −ω² No(−ω²) x1 + Ne(−ω²) x2 + De(−ω²) x3 − ω² Do(−ω²)
 + jω [Ne(−ω²) x1 + No(−ω²) x2 + Do(−ω²) x3 + De(−ω²)].
The complex root boundary is given by
δ(jω) = 0,  ω ∈ (0, +∞)   (5.5)
and the real root boundary is given by
δ(0) = 0,  δn+1 = 0   (5.6)
where δn+1 denotes the leading coefficient of δ(s). Thus,
−ω² No(−ω²) x1 + Ne(−ω²) x2 + De(−ω²) x3 − ω² Do(−ω²) = 0   (5.7)
ω [Ne(−ω²) x1 + No(−ω²) x2 + Do(−ω²) x3 + De(−ω²)] = 0.   (5.8)
Note that at ω = 0, (5.8) is trivially satisfied and (5.7) becomes
Ne(0) x2 + De(0) x3 = 0   (5.9)
which coincides with the condition δ(0) = 0. The condition δn+1 = 0 translates to
dn + x1 nn = 0   (5.10)
where dn, nn denote the coefficients of s^n in D(s) and N(s), respectively. For ω > 0 we have
−ω² No(−ω²) x1 + Ne(−ω²) x2 + De(−ω²) x3 − ω² Do(−ω²) = 0
Ne(−ω²) x1 + No(−ω²) x2 + Do(−ω²) x3 + De(−ω²) = 0.
Rewrite the above in matrix form as
[ ω² No(−ω²)   −Ne(−ω²) ] [ x1 ]   [ De(−ω²) x3 − ω² Do(−ω²) ]
[ Ne(−ω²)       No(−ω²) ] [ x2 ] = [ −Do(−ω²) x3 − De(−ω²)   ]   (5.11)
and denote the coefficient matrix on the left by A(ω).
We now consider the case when |A(ω)| ≠ 0 for all ω > 0; the case |A(ω)| = 0 will be discussed later. Then
|A(ω)| = ω² No²(−ω²) + Ne²(−ω²) > 0,  ∀ ω > 0.   (5.12)
Therefore, for every x3, (5.11) has a unique solution (x1, x2) at each ω > 0, obtained by inverting A(ω). In other words,
x1(ω) = { [No(−ω²)De(−ω²) − Ne(−ω²)Do(−ω²)] x3 − ω² No(−ω²)Do(−ω²) − Ne(−ω²)De(−ω²) } / |A(ω)|   (5.13)
x2(ω) = { −[Ne(−ω²)De(−ω²) + ω² No(−ω²)Do(−ω²)] x3 + ω² Ne(−ω²)Do(−ω²) − ω² No(−ω²)De(−ω²) } / |A(ω)|.   (5.14)
For a fixed value of x3, let ω run from 0 to ∞. The above equations trace out a curve in the x1-x2 plane corresponding to the complex root boundary. These curves, along with the straight lines (5.9) and (5.10), partition the parameter space into a set of open root distribution invariant regions. By sweeping over x3, we find these regions.
To complete our discussion, let us consider the possibility |A(ω)| = 0 for some ω ≠ 0. We will show that the assumption of stabilizability of the plant rules out this possibility. Let
|A(ω)| = ω² No²(−ω²) + Ne²(−ω²) = 0   (5.15)
for some ω ≠ 0. Since ω² No²(−ω²), Ne²(−ω²) ≥ 0, (5.15) holds if and only if
No(−ω²) = Ne(−ω²) = 0.   (5.16)
From (5.11) it follows that
De(−ω²) x3 − ω² Do(−ω²) = 0,  −Do(−ω²) x3 − De(−ω²) = 0   (5.17)
and therefore
ω² Do²(−ω²) + De²(−ω²) = 0.   (5.18)
Since ω² Do²(−ω²), De²(−ω²) ≥ 0, (5.18) holds if and only if
Do(−ω²) = De(−ω²) = 0.   (5.19)
From (5.16) and (5.19), it follows that (5.15) has a solution for ω ≠ 0 if and only if D(s) and N(s) have a common factor s² + ω², and this is ruled out by the assumption of stabilizability of the plant. Therefore, the case |A(ω)| = 0 need not be considered. The following example illustrates these computations.
Example 5.1 Consider the following 13th order plant transfer function P(s) = N(s)/D(s) where
N(s) = s^10 + 2s^9 + 3s^8 + 4s^7 + 10s^6 + 5s^5 + s^4 − 7s^3 + 4s² + s + 23
D(s) = s^13 + 9s^12 + 40s^11 + 111s^10 + 203s^9 + 115s^8 − 203s^7 + 60s^6 + 25s^5 + s^4 − 18s^3 + 21s² + 2s + 7
and the first order controller C(s) = (x1 s + x2)/(s + x3).
The characteristic polynomial evaluated at s = jω becomes δ(jω) = δr (ω) + jωδi (ω) where δr (ω) = −ω 14 + (40 + 9x3 ) ω 12 + (−203 − 2x1 − x2 − 111x3 ) ω 10 + (−203 + 4x1 + 3x2 + 115x3 ) ω 8 + (−25 − 5x1 − 10x2 − 60x3 ) ω 6 + (−18 − 7x1 + x2 + x3 ) ω 4 + (−2 − x1 − 4x2 − 21x3 ) ω 2 + (23x2 + 7x3 ) δi (ω) = (9 + x3 ) ω 12 + (−111 − x1 − 40x3 ) ω 10 + (115 + 3x1 + 2x2 + 203x3 ) ω 8 + (−60 − 10x1 − 4x2 + 203x3 ) ω 6 + (1 + x1 + 5x2 + 25x3 ) ω 4 + (−21 − 4x1 + 7x2 + 18x3 ) ω 2 + (7 + 23x1 + x2 + 2x3 ) . We now have For ω = 0, 23x2 + 7x3 = 0,
For ω > 0,
x1(ω) = [−9ω^22 + 120ω^20 − 181ω^18 − 452ω^16 + 1429ω^14 − 1738ω^12 + 3355ω^10 − 2931ω^8 + 1586ω^6 − 142ω^4 + 504ω² − 161] / |A(ω)|
 + [9ω^22 − 120ω^20 + 280ω^18 − 805ω^16 + 406ω^14 − 1341ω^12 + 3501ω^10 − 3319ω^8 + 1289ω^6 − 225ω^4 + 539ω² − 154] · x3 / |A(ω)|
x2(ω) = [−9ω^24 + 120ω^22 − 280ω^20 + 805ω^18 − 406ω^16 + 1341ω^14 − 3501ω^12 + 3319ω^10 − 1289ω^8 + 225ω^6 − 539ω^4 + 154ω²] / |A(ω)|
 + [−27ω^22 + 396ω^20 − 1257ω^18 + 2596ω^16 − 1527ω^14 + 1042ω^12 − 2951ω^10 + 3303ω^8 − 1364ω^6 + 86ω^4 − 518ω² + 161] · x3 / |A(ω)|
where
|A(ω)| = ω^20 − 2ω^18 + 13ω^16 − 26ω^14 + 102ω^12 − 117ω^10 + 281ω^8 − 409ω^6 + 76ω^4 − 183ω² + 529.
In this example, we choose x3 = 0 (integral control). Figure 5.2 shows how these functions partition the controller parameter space. The numbers shown in the figure indicate the number of RHP roots of the polynomial δ(s) when (x1, x2) is taken from the indicated region. In particular, there exists no stabilizing region for this specific value of x3 in this example.
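As a numerical aside, the boundary formulas (5.13)-(5.14) are easy to evaluate directly. The sketch below is our own illustrative code (function names are ours, NumPy assumed), not part of the text: it computes the unique point (x1, x2) on the complex root boundary for a fixed x3 and a given ω.

```python
import numpy as np

def even_odd(p):
    """Split p(s) (coefficients highest degree first) into (pe, po) with
    p(s) = pe(s^2) + s*po(s^2); pe and po are returned highest degree
    first, as polynomials in the variable s^2."""
    asc = np.asarray(p, dtype=float)[::-1]   # ascending powers of s
    return asc[0::2][::-1], asc[1::2][::-1]

def boundary_point(num, den, x3, w):
    """Evaluate (5.13)-(5.14): the unique (x1, x2) placing a closed-loop
    root at s = jw, for a fixed controller pole parameter x3."""
    Ne, No = even_odd(num)
    De, Do = even_odd(den)
    u = -w**2                                # Ne, No, De, Do are evaluated at -w^2
    ne, no = np.polyval(Ne, u), np.polyval(No, u)
    de, do = np.polyval(De, u), np.polyval(Do, u)
    detA = w**2 * no**2 + ne**2              # |A(w)| of (5.12), positive
    x1 = ((no*de - ne*do)*x3 - w**2*no*do - ne*de) / detA
    x2 = ((-ne*de - w**2*no*do)*x3 + w**2*ne*do - w**2*no*de) / detA
    return x1, x2
```

Sweeping w over a grid in (0, ∞) for each fixed x3 traces the complex root boundary curve; adding the lines (5.9) and (5.10) yields the root invariant partition described above.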
5.2 An Example
It is clear from the development of the previous section that the stabilizing regions in the controller parameter space can be easily selected. We illustrate with an example.
Example 5.2 Consider the following 8th order plant:
P(s) := N(s)/D(s) = (3s^7 + 99s^6 + 1320s^5 + 9255s^4 + 37287s^3 + 88656s² + 120420s + 75600) / (s^8 + 18s^7 + 131s^6 + 625s^5 + 2017s^4 + 4753s^3 + 7896s² + 8919s + 5670)
[Figure 5.2 Root invariant regions (Example 5.1).]
and a first order controller C(s) = (x1 s + x2)/(s + x3).
Then we have Ne (s2 ) = 99s6 + 9255s4 + 88656s2 + 75600 No (s2 ) = 3s6 + 1320s4 + 37287s2 + 120420 De (s2 ) = s8 + 131s6 + 2017s4 + 7896s2 + 5670 Do (s2 ) = 18s6 + 625s4 + 4753s2 + 8919 and δ(jω) = δr (ω) + jωδi (ω) where δr (ω) = (18 + 3x1 + x3 ) ω 8 + (−625 − 1320x1 − 99x2 − 131x3 ) ω 6 + (4753 + 37287x1 + 9255x2 + 2017x3 ) ω 4 + (−8919 − 120420x1 − 88656x2 − 7896x3 ) ω 2 + 75600x2 + 5670x3 δi (ω) = ω 8 + (−131 − 99x1 − 3x2 − 18x3 ) ω 6
+ (2017 + 9255x1 + 1320x2 + 625x3 ) ω 4 + (−7896 − 88656x1 − 37287x2 − 4753x3 ) ω 2 + (5670 + 75600x1 + 120420x2 + 8919x3 ) . We now have the two conditions for ω = 0 and ω > 0: For ω = 0, 75600x2 + 5670x3 = 0, For ω > 0,
x1(ω) = [45ω^14 + 3411ω^12 − 9681ω^10 + 634077ω^8 − 1899129ω^6 − 69813ω^4 + 25591140ω² − 428652000] / |A(ω)|
 + [−3ω^14 − 69ω^12 + 12207ω^10 − 159585ω^8 + 220167ω^6 − 6387621ω^4 − 12203946ω² + 8505000] · x3 / |A(ω)|
x2(ω) = [3ω^16 + 69ω^14 − 12207ω^12 + 159585ω^10 − 220167ω^8 + 6387621ω^6 + 12203946ω^4 − 8505000ω²] / |A(ω)|
 + [−153ω^14 + 47859ω^12 − 3011169ω^10 + 62911227ω^8 − 526622253ω^6 + 1809907839ω^4 − 2173643100ω² + 428652000] · x3 / |A(ω)|
where
|A(ω)| = 9ω^14 + 1881ω^12 + 133632ω^10 + 4048713ω^8 + 52237809ω^6 + 279041256ω^4 + 1096189200ω² + 5715360000.
Figure 5.3 depicts the stability region when x3 = 0.2; it shows two distinct stability regions. By sweeping x3 from −0.7 to 0.5, we obtained the three dimensional region shown in Figure 5.4. The figure shows that the two regions coalesce for larger values of x3; for smaller values of x3 the bounded region shrinks and eventually vanishes, while the unbounded region grows. Thus, even though the number of controller parameters is the same as for PID controllers, the stability region in the first order controller parameter space is quite complicated and completely different from that of PID controllers.
[Figure 5.3 Stability region for x3 = 0.2 (Example 5.2).]
[Figure 5.4 Stability region for −0.7 ≤ x3 ≤ 0.5 (Example 5.2).]
5.3 Robust Stabilization by First Order Controllers
The computation described in the previous section extends in a straightforward way to determine the set of first order controllers that simultaneously stabilize a number of plants Pi(s): explicit solutions for the stability boundary are found for each plant, the stabilizing regions are intersected for fixed x3, and the process is repeated as x3 is swept. To determine the robust stabilizability of a continuum of plants by first order controllers, we consider specifically an interval plant family P(s) = N(s)/D(s) where N(s) and D(s) are interval polynomials. Consider the Generalized Kharitonov Theorem of Chapter 11 (Theorem 11.13), which deals with a polynomial family of the form:
δ(s) = F1(s)D(s) + F2(s)N(s)   (5.20)
where D(s) and N(s) are interval polynomials and
Fi(s) = s^{ti} (ai s + bi) Ui(s) Qi(s),  i = 1, 2   (5.21)
where ti ≥ 0 is an arbitrary integer, ai and bi are arbitrary real numbers, Ui(s) is an anti-Hurwitz polynomial, and Qi(s) is an even or odd polynomial. For robust stability of such a family it is enough that F(s) = (F1(s), F2(s)) stabilizes the finite set of vertex polynomials ∆v(s):
∆v(s) := {δv(s) = F1(s)Di(s) + F2(s)Nj(s), i, j = 1, 2, 3, 4}   (5.22)
where Di(s) and Nj(s) are the Kharitonov polynomials of D(s) and N(s), respectively. As seen, the problem of robust stabilization with first order controllers is a special case of this result with
F1(s) = s + x3 and F2(s) = x1 s + x2.   (5.23)
For the given interval plant
P(s) := {P(s) = N(s)/D(s) : N(s) ∈ N(s), D(s) ∈ D(s)},
let its Kharitonov vertices be
Pv(s) := {Ni(s)/Dj(s) : Ni(s) ∈ KN(s), Dj(s) ∈ KD(s)}
where KN(s) and KD(s) are the sets of Kharitonov polynomials of N(s) and D(s), respectively. Let Rk be the controller parameter space region consisting of all first order stabilizing controllers for the kth vertex system Pk(s). Then
the set of first order controllers that robustly stabilize the interval system P(s) is given by
R := ∩k Rk.   (5.24)
Example 5.3 Consider the following interval plant:
P(s) = N(s)/D(s) = (n7 s^7 + n6 s^6 + n5 s^5 + n4 s^4 + n3 s^3 + n2 s² + n1 s + n0) / (d8 s^8 + d7 s^7 + d6 s^6 + d5 s^5 + d4 s^4 + d3 s^3 + d2 s² + d1 s + d0)
where the upper and lower bounds of the coefficients of the interval polynomials D(s) and N(s) are
D+ = {2, 18.1, 131.1, 625.1, 2017.1, 4753.1, 7896.1, 8919.1, 5670.1}
D− = {1, 17.9, 130.9, 624.9, 2016.9, 4752.9, 7895.9, 8918.9, 5669.9}
N+ = {3.1, 99.1, 1320.1, 9255.1, 37287.1, 88656.1, 120420.1, 75600.1}
N− = {2.9, 98.9, 1319.9, 9254.9, 37286.9, 88655.9, 120419.9, 75599.9}
We now construct the 16 vertex systems:
Pv(s) = {Pl(s) = Ni(s)/Dj(s) : Ni(s) ∈ KN(s), Dj(s) ∈ KD(s)}.
By repeating the steps of Example 5.2 for all 16 systems, we can find stabilizing compensators for each of them. Figure 5.5 displays these for the case x3 = 0.5. Clearly, the intersection of these 16 regions is the region for robust stabilization. This is shown in Figures 5.6 and 5.7.
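A brute-force rendering of this vertex test can be sketched as follows. This is illustrative code of our own (function names are ours; the per-vertex stability check uses numerical root finding on the closed-loop polynomial rather than a boundary construction); coefficient bounds are given in ascending order.

```python
import numpy as np

def kharitonov(lower, upper):
    """Four Kharitonov polynomials of an interval polynomial.
    Bounds are ascending (a0, ..., an); each result is returned
    highest degree first, as expected by np.roots."""
    lo, hi = np.asarray(lower, float), np.asarray(upper, float)
    out = []
    # coefficient patterns low-low-high-high etc., repeating with period 4
    for pat in ("llhh", "hhll", "hllh", "lhhl"):
        c = [lo[k] if pat[k % 4] == "l" else hi[k] for k in range(len(lo))]
        out.append(np.array(c[::-1]))
    return out

def robustly_stabilizes(x1, x2, x3, n_lo, n_hi, d_lo, d_hi):
    """Check (x1, x2, x3) against all 16 vertex plants Ni/Dj of (5.22):
    delta(s) = (s + x3) Dj(s) + (x1 s + x2) Ni(s) must be Hurwitz."""
    for N in kharitonov(n_lo, n_hi):
        for D in kharitonov(d_lo, d_hi):
            delta = np.polyadd(np.convolve([1.0, x3], D),
                               np.convolve([x1, x2], N))
            if np.max(np.roots(delta).real) >= 0:
                return False
    return True
```

Gridding (x1, x2) for each fixed x3 and keeping the points where this check passes approximates the intersection R = ∩k Rk; the boundary-based construction of the text is of course far more efficient.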
5.4 H∞ Design with First Order Controllers
With the stabilizing set in hand, one can set out to impose performance specifications such as gain margin, phase margin, and H∞ norm constraints on various closed-loop transfer functions. The gain margin and phase margin specifications can be imposed by replacing the plant by KP(s) and e^{−jθ}P(s), respectively. We treat the H∞ specification in some detail below. Consider the standard feedback configuration shown in Figure 5.1 with the plant and controller as defined in (5.1) and (5.2). Again, the plant is assumed
[Figure 5.5 Stabilizing regions for all 16 vertex systems.]
[Figure 5.6 Region for robust stability when x3 = 0.5.]
[Figure 5.7 Region for robust stability when x3 = 0.5 (magnified).]
to be SISO, LTI, proper, and coprime. Now for a given closed-loop transfer function T(s) and positive scalar γ, define the H∞ design criterion to be:
‖W(s)T(s)‖∞ < γ   (5.25)
where W(s) = Wn(s)/Wd(s) is a stable, coprime, frequency-dependent weighting function. Here we consider for specificity the complementary sensitivity function
T(s) = (x1 s + x2) N(s) / [(s + x3) D(s) + (x1 s + x2) N(s)].
The sensitivity and input sensitivity functions can be handled similarly. The objective is to determine the region in the parameter space for which the closed-loop system is stable and the above H∞ criterion is satisfied. Specifically, the objective is to determine values of x1, x2, and x3 (if any) for the controller C(s) such that
‖ (Wn(s)/Wd(s)) · (x1 s + x2)N(s) / [(s + x3)D(s) + (x1 s + x2)N(s)] ‖∞ < γ
and the closed-loop characteristic polynomial
δ(s) := (s + x3)D(s) + (x1 s + x2)N(s)
is Hurwitz. To proceed we need the following lemma from Chapter 12 (Lemma 12.4).
LEMMA 5.1 Let F(s) = NF(s)/DF(s) be a stable and proper rational function, where NF(s) and DF(s) are polynomials with deg[DF(s)] =: q, and let nq, dq denote the coefficients of s^q in NF(s) and DF(s), respectively. Define
φ(s) := DF(s) + (1/γ) e^{jθ} NF(s).
Then for a given γ > 0, ‖F(s)‖∞ < γ if and only if
1. |nq| < γ|dq| and
2. φ(s) is Hurwitz for all θ in [0, 2π).
To apply the above result to our problem, set
NF(s) = Wn(s)(x1 s + x2)N(s)
DF(s) = Wd(s)[(s + x3)D(s) + (x1 s + x2)N(s)].
Let
Wn(s) = am s^m + am−1 s^{m−1} + · · · + a1 s + a0
Wd(s) = bm s^m + bm−1 s^{m−1} + · · · + b1 s + b0
N(s) = αn s^n + αn−1 s^{n−1} + · · · + α1 s + α0
D(s) = βn s^n + βn−1 s^{n−1} + · · · + β1 s + β0
where bm, βn ≠ 0. Then
nq = αn am x1,  dq = bm(βn + αn x1).   (5.26)
Using the previous result, the problem can be reformulated as follows: determine the region in the controller parameter space such that
1. δ(s) is Hurwitz;
2. |nq| < γ|dq|; and
3. φ(s) is Hurwitz for all θ in [0, 2π).
The problem of determining the controller to satisfy the above simultaneous stabilization conditions can be solved as before by the D-decomposition technique with fixed x3. We illustrate with an example.
Example 5.4 Let
G(s) = (s − 1)/(s² + 0.8s − 0.2) and C(s) = (x1 s + x2)/(s + x3).
The closed-loop transfer function is
T(s) = (x1 s + x2)(s − 1) / [(s + x3)(s² + 0.8s − 0.2) + (x1 s + x2)(s − 1)].
We arbitrarily choose x3 = 2.5, γ = 1, and the weighting function to be the high pass transfer function
W(s) = (s + 0.1)/(s + 1).
In this example, we consider the problem of determining the set of all stabilizing controllers C(s) such that ‖W(s)T(s)‖∞ < 1 for x3 = 2.5. The stability region is shown in Figure 5.8.
[Figure 5.8 Stability region for x3 = 2.5.]
Next, we note that the plant is strictly proper, so that |nq| < γ|dq| for all x1, x2, and x3. Finally, we must determine the region for which φ(s) is Hurwitz for all θ ∈ [0, 2π). For this example we have
φ(s) = (s + 1)[(s + x3)(s² + 0.8s − 0.2) + (x1 s + x2)(s − 1)] + e^{jθ}(s + 0.1)(x1 s + x2)(s − 1).
Using the substitution e^{jθ} = α + jβ,
φ(jω) = ω^4 + βx1 ω^3 + (−0.6 + 0.9αx1 − x2 − αx2 − 1.8x3) ω² + (0.1βx1 + 0.9βx2) ω − (1 + 0.1α)x2 − 0.2x3
 + j[(−1.8 − x1 − αx1 − x3) ω^3 + (0.9βx1 − βx2) ω² + (−0.2 − x1 − 0.1αx1 − 0.9αx2 + 0.6x3) ω − 0.1βx2].   (5.27)
First, we consider the real root boundaries. The real root boundary for the origin is obtained by setting φ(0) = 0:
−(1 + 0.1α)x2 − 0.2x3 = 0,  −0.1βx2 = 0.
Thus there exists a real root boundary at
x2 = −0.2x3 / (1 + 0.1α).
Since the plant is strictly proper, there is no real root boundary at infinity. The complex root boundary is characterized by φ(jω) = 0. The stability region for x3 = 2.5 and the curves and lines for θ = 0, π/8, 2π/8, 3π/8, · · ·, 2π are shown in Figure 5.9. The lightly shaded region is the stability region; all points (x1, x2) in the darkly shaded region satisfy the H∞ constraint. Note that in general it is necessary to sweep over θ; however, it is clear from this figure that it may only be necessary to plot a finite set of values. It is necessary to sweep over ω and, to obtain the 3D set, over x3 as well. In addition, while the arbitrarily chosen values for x3 and γ resulted in a nonempty solution set in this example, there is no guarantee that this will be the case.
5.5 First Order Discrete-Time Controllers
In this section, we develop a solution to the problem of finding all first order discrete time controllers
C(z) = (x1 z + x2)/(z + x3)
[Figure 5.9 Stability and H∞ region.]
that stabilize a given linear time-invariant (LTI) discrete time plant P (z) of order n. The method is again based on Neimark’s D-decomposition. This computation is facilitated by using the Tchebyshev representation of polynomials evaluated on the unit circle introduced in Chapter 4.
5.5.1 Computation of Root Distribution Invariant Regions
Consider an arbitrary LTI discrete time plant and a discrete time first order controller (see Figure 5.10) given by
Plant: P(z) = N(z)/D(z)
Controller: C(z) = (x1 z + x2)/(z + x3) =: Nc(z)/Dc(z).
We shall assume that the plant P(z) is rational, proper, and stabilizable by a controller of some order, not necessarily first order. The characteristic polynomial is
δ(z) = Dc(z)D(z) + Nc(z)N(z) = (z + x3)D(z) + (x1 z + x2)N(z).
[Figure 5.10 A unity feedback system.]
We will need to determine the image of a polynomial P(z) = an z^n + an−1 z^{n−1} + · · · + a2 z² + a1 z + a0 with real coefficients, evaluated on the unit circle. Setting
z^k = e^{jkθ} = cos kθ + j sin kθ   (5.28)
and
u := −cos θ   (5.29)
we have
z = e^{jθ} = −u + j√(1 − u²).   (5.30)
Let ck(u) = cos kθ and sk(u) = sin kθ / sin θ, where ck(u) and sk(u) are the Tchebyshev polynomials of the first and second kind, respectively. With this notation we have
P(e^{jθ}) = RP(u) + j√(1 − u²) TP(u)
where
RP(u) = an cn(u) + an−1 cn−1(u) + · · · + a1 c1(u) + a0
TP(u) = an sn(u) + an−1 sn−1(u) + · · · + a1 s1(u)
are real polynomials of degree n and n − 1, respectively. Now consider the characteristic polynomial
δ(z) = (z + x3)D(z) + (x1 z + x2)N(z).
Using the previous notation, we can write
D(e^{jθ}) := RD(u) + j√(1 − u²) TD(u)   (5.31)
N(e^{jθ}) := RN(u) + j√(1 − u²) TN(u)   (5.32)
THREE TERM CONTROLLERS
and p 1 − u2 + x3 p x1 ejθ + x2 = −x1 u + j 1 − u2 x1 + x2 .
(5.34)
Π(u) = Πr (u) + jΠi (u)
(5.35)
ejθ + x3 = −u + j
(5.33)
The characteristic polynomial evaluated on the unit circle then becomes
where Πr (u) = RD (u) (x3 − u) − TD (u) 1 − u2 +RN (u) (x2 − ux1 ) − 1 − u2 x1 TN (u) p Πi (u) = 1 − u2 [RD (u) + TD (u) (x3 − u) +x1 RN (u) + TN (u) (x2 − ux1 )]
(5.36)
The stability boundary for complex roots is obtained by setting Π(u) = 0,
u ∈ (−1, 1)
(5.37)
and the stability boundaries for real roots are obtained by setting Π(−1) = 0,
Π(1) = 0.
Thus, the complex root boundary from the Boundary Crossing Conditions is given by Πr (u) = 0 Πi (u) = 0
(5.38) (5.39)
Note that at u = 1 and u = −1, (5.39) is trivially satisfied, that is holds for all x1 , x2 , and x3 . At u = 1, (5.38) becomes RD (1) (x3 − 1) + RN (1) (x2 − x1 ) = 0
(5.40)
which for a given x3 is a straight line in the (x1 , x2 ) plane. Similarly, at u = −1, RD (−1) (x3 + 1) + RN (−1) (x2 + x1 ) = 0 (5.41) which for fixed x3 is√a straight line. For −1 < u < 1, 1 − u2 > 0 thus (5.38) and (5.39) become x1 (u − x3 ) RD (u) + 1 − u2 TD (u) A(u) = x2 (u − x3 ) TD (u) − RD (u) where A(u) =
u2 − 1 TN (u) − uRN (u) RN (u) − uTN (u)
RN (u) . TN (u)
(5.42)
FIRST ORDER CONTROLLERS FOR LTI SYSTEMS Now,
199
2 |A(u)| = TN2 (u) u2 − 1 − RN (u).
Thus, |A(u)| ≤ 0, ∀ u ∈ [−1, +1]. It is easily shown that |A(u)| = 0 is incompatible with stabilizability of the plant. Therefore, for every x3 , (5.42) has a unique solution x1 and x2 at each u ∈ (−1, 1). Solving this we get Y (u) (u − x3 ) + 1 − u2 TD (u)TN (u) + RD (u)RN (u) x1 (u) = |A(u)| Y (u) (1 − ux3 ) + x3 RN (u)RD (u) + 1 − u2 TN (u)TD (u) x2 (u) = |A(u)| where Y (u) = RD (u)TN (u) − RN (u)TD (u). The above two equations trace out a curve in the x1 - x2 plane representing the complex root space boundary, for fixed x3 , as u runs from -1 to +1. This curve along with the lines in (5.40) and (5.41) partition the controller parameter space into regions with a fixed root distribution with respect to the unit circle roots. By sweeping over x3 , we can identify the three dimensional stability region for a given plant, if one exists. Example 5.5 Consider the following 6th order discrete-time plant P (z) =
24z 5 + 72z 4 + 19z 3 + 81z 2 + 84z + 95 76z 6 + 42z 5 + 56z 4 + 59z 3 + 24z 2 + z + 15
and the first order discrete time controller C(z) = (x1 z + x2)/(z + x3). The unit circle image Π(u) of the characteristic polynomial can now be found. Choosing x3 = 0.75, the lines corresponding to u = −1 and u = 1 are
u = −1: x2 = −x1 − (273/375)(1 + x3) = −x1 − 1.274
u = 1: x2 = x1 + (69/121)(1 − x3) = x1 + 0.14256.
For −1 < u < 1, we have the curve given parametrically by
x1(u) = [462080u^6 − 505248u^5 − 217368u^4 + 303488u^3 + 13524u² − 38088u − 3014.25] / |A(u)|
x2(u) = [222400u^5 − 233616u^4 − 54554u^3 + 86446u² − 3246u − 4143.5] / |A(u)|
where |A(u)| = 72960u5 − 141696u4 − 12824u3 + 79380u2 + 2856u − 15317. Figure 5.11 illustrates how the curve and lines partition the controller parameter space for a fixed x3 = 0.75. Despite the simple controller structure, the behavior of the curve is extremely complicated. The numbers written inside each region indicate the respective number of roots outside the unit circle. The shaded region indicates the set of controller parameters that stabilizes the given plant.
[Figure 5.11 Root invariant regions with first order controller (Example 5.5).]
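The boundary solution of (5.42) is straightforward to evaluate numerically. The sketch below is our own illustrative code (function names ours; it reuses the Chebyshev recurrence identities ck(u) = Tk(−u), sk(u) = U_{k−1}(−u) and NumPy is assumed): it computes (x1(u), x2(u)) for fixed x3 and confirms that the characteristic polynomial then has a root at z = e^{jθ}.

```python
import numpy as np

def tcheby(a, u):
    """(R, T) with A(e^{j*th}) = R + j*sqrt(1-u^2)*T, u = -cos(th);
    a = (a0, ..., an) in ascending order."""
    x, n = -u, len(a) - 1
    T, U = [1.0, x], [1.0, 2.0*x]
    for k in range(2, n + 1):
        T.append(2.0*x*T[-1] - T[-2])
        U.append(2.0*x*U[-1] - U[-2])
    R = sum(a[k]*T[k] for k in range(n + 1))
    S = sum(a[k]*U[k - 1] for k in range(1, n + 1))
    return R, S

def discrete_boundary_point(num, den, x3, u):
    """Unique (x1, x2) placing a closed-loop root on the unit circle,
    from the solution of (5.42); num/den are ascending coefficients."""
    RN, TN = tcheby(num, u)
    RD, TD = tcheby(den, u)
    detA = (u*u - 1.0)*TN*TN - RN*RN          # |A(u)|, nonpositive on [-1, 1]
    Y = RD*TN - RN*TD
    x1 = (Y*(u - x3) + (1 - u*u)*TD*TN + RD*RN) / detA
    x2 = (Y*(1 - u*x3) + x3*(RN*RD + (1 - u*u)*TN*TD)) / detA
    return x1, x2
```

Sweeping u over (−1, 1) for each fixed x3 traces the curve of Example 5.5; adding the lines (5.40) and (5.41) completes the partition.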
Figure 5.12 depicts the three dimensional stability region in the controller parameter space. The two dimensional sections of the region are for −1.25 ≤ x3 ≤ 1.375. The figure shows that the region gets smaller as x3 approaches +1.375 and −1.25.
5.5.1.1 Some Extensions
The solution of the stabilizing region obtained in the last section allows for some useful extensions. Firstly, it is possible to repeat the calculation for a
[Figure 5.12 Stabilizing first order controller parameter region (Example 5.5).]
given number of plants Pi(z) and obtain the common stabilizing region by intersection. By setting
Pi(z) = P(z)/z^i,  i = 1, 2, ..., l
and solving the simultaneous stabilization problem for this special set, one can design controllers that tolerate a delay of l sampling instants in the closed loop. The maximal tolerable delay is the largest l for which such a simultaneously stabilizing controller exists; no controller simultaneously stabilizes l + 1 delays. Also, by constraining x3 to lie in the range (−1, +1), one can solve the stabilization problem with a stable controller; by constraining x2/x1 to lie in the range (−1, +1), we can obtain minimum phase controllers. We also show how to approximate deadbeat response. These extensions are illustrated in the following sections.
5.5.2 Delay Tolerance
Delay tolerance can be accomplished by solving the problem of simultaneously stabilizing the systems P(z), z^{−1}P(z), z^{−2}P(z), · · ·, z^{−q}P(z).
Example 5.6 Consider the plant given in the previous example with x3 = −0.25. Figure 5.13(a) depicts the stability region for q = 0 (the nominal plant with no delay). Figures 5.13(b), 5.13(c), and 5.13(d) show how the stability region shrinks as q is increased to 1, 2, and 3, respectively.
[Figure 5.13 Stabilizing first order controller parameter regions: no delay; with one delay; with one and two delays; with one, two, and three delays.]
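A direct numerical check of delay tolerance can be sketched as follows (our own illustrative code, function name ours): for P_i(z) = P(z)/z^i the closed-loop characteristic polynomial is δ_i(z) = (z + x3) z^i D(z) + (x1 z + x2) N(z), and simultaneous stabilization requires all roots of every δ_i, i = 0, ..., q, to lie inside the unit circle.

```python
import numpy as np

def tolerates_delays(num, den, x1, x2, x3, q):
    """True if C(z) = (x1 z + x2)/(z + x3) simultaneously stabilizes
    P(z), z^-1 P(z), ..., z^-q P(z); num/den highest degree first."""
    den = np.asarray(den, dtype=float)
    for i in range(q + 1):
        shifted = np.concatenate([den, np.zeros(i)])   # z^i * D(z)
        delta = np.polyadd(np.convolve([1.0, x3], shifted),
                           np.convolve([x1, x2], num))
        if np.max(np.abs(np.roots(delta))) >= 1.0:     # Schur stability test
            return False
    return True
```

Gridding (x1, x2) with this check for increasing q reproduces the shrinking regions of Figure 5.13; the largest q for which the region is nonempty is the maximal tolerable delay.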
5.5.2.1 Maximally Deadbeat Design
A deadbeat system often gives a desirable design. Such a system has all poles at the origin and provides convergence to the steady state in n steps. In our problem, since the controller is constrained to be of first order, it will in general be impossible to attain deadbeat control. In such a case, it may be worthwhile to place the roots as close to the origin as possible. Such a design approach, called maximally deadbeat, is developed here. This problem can be solved by obtaining the Tchebyshev representation of the characteristic equation on a circle of radius ρ. By reducing the value of ρ from 1, we can determine the value ρ = ρ* at which the set of stabilizing first order controllers just becomes empty. The Tchebyshev representation of a polynomial P(z) = an z^n + an−1 z^{n−1} + · · · + a1 z + a0 on a circle of radius ρ can be written
P(ρe^{jθ}) = RP(u, ρ) + j√(1 − u²) TP(u, ρ)
where
RP(u, ρ) = an cn(u, ρ) + · · · + a1 c1(u, ρ) + a0
TP(u, ρ) = an sn(u, ρ) + · · · + a1 s1(u, ρ)
and ck(u, ρ) = ρ^k ck(u), sk(u, ρ) = ρ^k sk(u). Following a similar procedure as before, we have the stability boundary
x1(u) = [Y(u, ρ)(ρu − x3) + ρ(1 − u²)TD(u, ρ)TN(u, ρ) + ρRD(u, ρ)RN(u, ρ)] / |A(u, ρ)|
x2(u) = [ρY(u, ρ)(ρ − ux3) + ρx3(RN(u, ρ)RD(u, ρ) + (1 − u²)TN(u, ρ)TD(u, ρ))] / |A(u, ρ)|
where
|A(u, ρ)| = −ρ[(1 − u²)TN²(u, ρ) + RN²(u, ρ)]
Y(u, ρ) = RD(u, ρ)TN(u, ρ) − RN(u, ρ)TD(u, ρ).
Example 5.7 Consider the plant-controller pair of Example 5.5. The "stabilizing regions" for x3 = −0.25 and various values of ρ are shown in Figure 5.14. The stabilizing region vanishes for ρ = 0.8. To illustrate the approximate deadbeat property, we selected two points (indicated with asterisks) from Figure 5.14:
1. (x1, x2) = (0.3249, −0.4737) from the stability region for ρ = 1 and
2. (x1, x2) = (0.0518, −0.1184) from the stability region for ρ = 0.9.
Thus, for the first and second points,
C1(z) = (0.3249z − 0.4737)/(z − 0.25),  C2(z) = (0.0518z − 0.1184)/(z − 0.25).
As shown in Figure 5.15, the step response for the point from the ρ = 0.9 region converges much faster than that for the point from the ρ = 1 region.
[Figure 5.14 Root invariant region with first order controller for ρ = 1, 0.95, 0.9, and 0.8 (Example 5.7).]
[Figure 5.15 Step responses (Example 5.7) for the ρ = 1 and ρ = 0.9 controllers.]
5.6 Exercises
5.1 Determine all first order controllers stabilizing the plant
P(s) = (s − 1)/(s² − s + 1).
5.2 Determine all first order discrete time controllers stabilizing the plant
P(z) = (z − 1)/(z² − z − 1).
5.3 Repeat the above two problems with
(a) a gain margin specification of at least 3 dB
(b) a phase margin specification of at least 25 degrees
(c) simultaneous satisfaction of specifications (a) and (b).
5.4 In Examples 5.1, 5.2, and 5.5, determine the stabilizing subsets when the controller is constrained to be
(a) stable
(b) minimum phase
(c) both stable and minimum phase.
5.5 Consider the proper plant
P(s) = N(s)/D(s) = (αn s^n + αn−1 s^{n−1} + · · · + α1 s + α0) / (βn s^n + βn−1 s^{n−1} + · · · + β1 s + β0),
the proper weighting function
W(s) = Wn(s)/Wd(s) = (am s^m + am−1 s^{m−1} + · · · + a1 s + a0) / (bm s^m + bm−1 s^{m−1} + · · · + b1 s + b0),
and the norm condition
‖W(s)T(s)‖∞ = ‖ (Wn(s)/Wd(s)) · (x1 s + x2)N(s) / [(s + x3)D(s) + (x1 s + x2)N(s)] ‖∞ < γ.
Let nq and dq be the leading coefficients of the numerator and denominator polynomials of W(s)T(s), respectively, and let
ρ1 := γ bm βn / [αn(am − γ bm)],  ρ2 := −γ bm βn / [αn(am + γ bm)].
(a) Show that the condition |nq| < γ|dq| in Lemma 5.1 is equivalent to the following bounds on x1:
min{ρ1, ρ2} < x1 < max{ρ1, ρ2}, if γ < |am/bm|
x1 < min{ρ1, ρ2} or x1 > max{ρ1, ρ2}, if γ > |am/bm|
x1 > −βn/(2αn), if γ = am/bm and αn βn > 0
x1 < −βn/(2αn), if γ = am/bm and αn βn < 0.
(b) Show that |nq| < γ|dq| is sufficient for the complex polynomial φ(s) to be degree invariant for every θ ∈ [0, 2π).
5.7 Notes and References
The results presented here were developed in [181, 182, 183, 184].
6 CONTROLLER SYNTHESIS FREE OF ANALYTICAL MODELS
The focus of this chapter is on direct data driven synthesis and design of controllers. We show that the complete set of stabilizing PID and first order controllers for a finite dimensional linear time-invariant (LTI) plant, possibly cascaded with a delay, can be calculated directly from the frequency response (Nyquist/Bode) data P(ω) for ω ∈ [0, ∞), without the need to produce an identified analytical model. It is also shown that complete sets achieving guaranteed levels of performance measures such as gain margin, phase margin, and H∞ norms can likewise be calculated directly from Nyquist/Bode data alone. The solutions have important new features. For example, it is not necessary to know the order of the plant or even the number of left-half plane or right-half plane poles or zeros. The solution also identifies, in the case of PID controllers, an exact low frequency band over which the plant data must be known with accuracy and beyond which the plant information may be rough or approximate. These constitute important new guidelines for identification when the latter is to be used for control design. The model free approach to control synthesis and design developed here is an attractive complement to modern and postmodern model based design methods, which require complete information on the plant and generally produce a single optimal controller. A discussion is included, with an illustrative example, of the sharp differences that can occur between model free and model based approaches when computing sets of stabilizing controllers. For example, it is shown that the identified model of a high order system can be non-PID stabilizable whereas the original data indicates it is PID stabilizable. The results given here could be a significant improvement over classical loop-shaping approaches, since we obtain complete sets of controllers achieving the design specifications.
The approach can also enhance fuzzy and neural methods, which are model free but cannot guarantee stability and performance. Finally, these results open the door to adaptive, model-free, fixed order designs for real-world systems. The possibilities for computer-aided design are illustrated using LabVIEW VIs.
THREE TERM CONTROLLERS

6.1 Introduction
In 1932, the Nyquist criterion was introduced as a means of predicting the stability of a closed-loop system based on frequency response measurements made on the open-loop system. This technique was later developed by Bode and others into a graphical design approach that reshapes the open-loop frequency response with a simple cascaded compensator to achieve prescribed closed-loop stability margins. This essentially constitutes the model-free part of classical control theory. It is routinely taught in undergraduate control courses and is widely used in industry.

In 1960, Kalman introduced a model-based approach to control by advocating the use of state space models, state feedback control, and quadratic optimization. The clear advantage of this approach over the ad hoc approaches of classical control is that stability and optimality can be guaranteed. The basic features of this theory are (1) that an observer or unknown input observer is needed to reconstruct the state required for feedback, and (2) that computation of the optimal state feedback gains and the observer requires a state space model of the plant. In this approach, the controller order is invariably high, typically equal to the order of the "generalized" plant, and frequently one has to apply identification techniques to input-output data to produce the state space model required to initiate the design.

There has been a recent resurgence of interest in fixed and low order controllers, mainly because high order controllers are rarely implemented and hardly any theory exists for low order controllers. The fixed order control problem almost by definition excludes the state feedback-observer paradigm that is so successful when the controller order is unconstrained. Indeed, the design of a simple controller is a much more difficult problem than the design of a high order complex controller.
In the previous chapters we described results on the design of PID and first order controllers, which are generically important in many industries and are implemented in electrical, mechanical, hydraulic, fluidic, and pneumatic systems. It is important to note that those approaches are model based. The purpose of this chapter is to show that, at least for three term controllers, synthesis and design can be carried out directly from frequency response measurements on the plant without constructing a state space or transfer function model. The main results show that complete sets achieving stability and various performance specifications can be obtained from Nyquist/Bode data without constructing an identified model. It is emphasized that our solution requires knowledge of neither the order of the system nor the numbers of LHP or RHP poles or zeros, and no identification of the plant is needed. It will be seen in the sequel that, in the case of PID controllers, the solution specifies an exact "low frequency band" where the plant frequency response must be known accurately and beyond which rough data or measurements suffice.
These features have important implications in real-world control engineering, where models are often unavailable, measurements can be made only over a restricted range of frequencies, and guarantees of various performance specifications must still be made. The results given are valid for stable and unstable LTI systems, possibly containing a delay, and specifically deal with PID and first order controllers. They can be extended to general three term controllers with minor modifications. In general, a higher order controller has more than three adjustable parameters; in such cases the three term theory can be applied by fixing various sets of parameters and leaving three free terms. An important advantage of dealing with three design terms is that 2-D and 3-D graphics can be used to display the resulting sets of controllers in parameter space.

In practice, the frequency response data can be readily obtained for stable plants by direct measurement, and the theory given here can therefore be applied to stable plants without constructing a transfer function or state space model. For unstable plants, frequency response data can be obtained if a feedback compensator that stabilizes the plant is known. In that case, measurements can be made on the stable closed-loop system and the plant frequency response data extracted from it by "dividing out" the known compensator. This kind of procedure is also necessary in the identification of unstable systems and in determining the Nyquist plot of unstable systems.

By synthesis we mean that the complete set of controllers of the prescribed type (PID or first order) achieving stability can be computed. This is the first essential step toward the design of systems achieving multiple performance specifications. We may also consider this the model-free, fixed order version of the well-known YJBK parametrization of all stabilizing controllers.
We also show how to compute the complete set of controllers achieving prescribed design specifications. By design we mean that the sets corresponding to several performance objectives can be intersected, so that multiple objectives are satisfied simultaneously. In this framework, the performance measures that can be handled analytically include guaranteed gain and phase margins as well as $H_\infty$ norm specifications. The computations involved in most cases are linear programming or solutions of linear equations with a sweeping parameter; simultaneous satisfaction of multiple performance criteria simply amounts to the simultaneous solution of larger sets of linear inequalities. The results provide an alternative to model-based control while at the same time overcoming the limitations of classical control theory. In this sense, they offer a combination of the best of the classical and modern approaches.
6.2 Mathematical Preliminaries
In this section, we develop some notation and technical results which will be used later. First consider a real rational function
\[
R(s) = \frac{A(s)}{B(s)} \tag{6.1}
\]
where $A(s)$ and $B(s)$ are polynomials with real coefficients and of degrees $m$ and $n$, respectively. We assume that $A(s)$ and $B(s)$ have no zeros on the $j\omega$-axis. Let $z_R^+$, $p_R^+$ ($z_R^-$, $p_R^-$) denote the numbers of open right-half plane (RHP) (open left-half plane (LHP)) zeros and poles of $R(s)$. Also let $\Delta_0^{\infty}\angle R(j\omega)$ denote the net change in phase of $R(j\omega)$ as $\omega$ runs from $0$ to $+\infty$. Then we have
\[
\Delta_0^{\infty}\angle R(j\omega) = \frac{\pi}{2}\left[z_R^- - z_R^+ - \left(p_R^- - p_R^+\right)\right]. \tag{6.2}
\]
This formula follows from the fact that each LHP zero and each RHP pole contributes $+\frac{\pi}{2}$ to the net phase change, whereas each RHP zero and LHP pole contributes $-\frac{\pi}{2}$. For convenience, we define the (Hurwitz) signature of $R(s)$ as
\[
\sigma(R) := z_R^- - z_R^+ - \left(p_R^- - p_R^+\right). \tag{6.3}
\]
Write
\[
R(j\omega) = R_r(\omega) + jR_i(\omega) \tag{6.4}
\]
where $R_r(\omega)$ and $R_i(\omega)$ are rational functions in $\omega$ with real coefficients. It is easy to see that $R_r(\omega)$ and $R_i(\omega)$ have no real poles for $\omega \in (-\infty, +\infty)$, since $R(s)$ has no imaginary axis poles. To compute the net change in phase, that is, the left-hand side of (6.2), it is convenient to develop formulas in terms of $R_r(\omega)$ and $R_i(\omega)$. Note that $\omega_0 = 0$ is always a zero of $R_i(\omega)$ since $R(s)$ is real. Let
\[
0 = \omega_0 < \omega_1 < \omega_2 < \cdots < \omega_{l-1} \tag{6.5}
\]
denote the real, finite nonnegative zeros of $R_i(\omega) = 0$ of odd multiplicities, let
\[
\operatorname{sgn}[x] = \begin{cases} +1 & \text{if } x > 0 \\ 0 & \text{if } x = 0 \\ -1 & \text{if } x < 0 \end{cases} \tag{6.6}
\]
and define $\omega_l = \infty^-$. Define, for a real function $f(t)$,
\[
f\!\left(t_0^-\right) := \lim_{t \to t_0,\ t < t_0} f(t), \qquad f\!\left(t_0^+\right) := \lim_{t \to t_0,\ t > t_0} f(t). \tag{6.7}
\]
LEMMA 6.1 (Real Hurwitz Signature Lemma)
For $n - m$ even,
\[
\sigma(R) = \left\{\operatorname{sgn}\left[R_r(\omega_0)\right] + 2\sum_{j=1}^{l-1}(-1)^j\operatorname{sgn}\left[R_r(\omega_j)\right] + (-1)^l\operatorname{sgn}\left[R_r(\omega_l)\right]\right\}\cdot(-1)^{l-1}\operatorname{sgn}\left[R_i(\infty^-)\right].
\]
For $n - m$ odd,
\[
\sigma(R) = \left\{\operatorname{sgn}\left[R_r(\omega_0)\right] + 2\sum_{j=1}^{l-1}(-1)^j\operatorname{sgn}\left[R_r(\omega_j)\right]\right\}\cdot(-1)^{l-1}\operatorname{sgn}\left[R_i(\infty^-)\right].
\]
PROOF The calculation of the signature is based on the total phase change of a frequency dependent function as the frequency ranges from $0$ to $\infty$. The latter in turn can be broken up into a summation of phase changes over a disjoint partition of the frequency axis. Note first that
\[
\Delta_0^{\infty}\angle R(j\omega) = \frac{\pi}{2}\,\sigma(R) \tag{6.8}
\]
and
\[
\Delta_0^{\infty}\angle R(j\omega) = \Delta_{\omega_0 = 0}^{\omega_1}\angle R(j\omega) + \cdots + \Delta_{\omega_{l-1}}^{\omega_l = \infty^-}\angle R(j\omega). \tag{6.9}
\]
For the case when $n - m$ is even, the plot of $R(j\omega)$ approaches the negative or positive real axis as $\omega$ approaches $\infty$. Thus, we have
\[
\Delta_{\omega_k}^{\omega_{k+1}}\angle R(j\omega) = \frac{\pi}{2}\left\{\operatorname{sgn}\left[R_r(\omega_k)\right] - \operatorname{sgn}\left[R_r(\omega_{k+1})\right]\right\}\cdot\operatorname{sgn}\left[R_i\!\left(\omega_{k+1}^-\right)\right], \tag{6.10}
\]
for $k = 0, \cdots, l - 2$, and
\[
\Delta_{\omega_{l-1}}^{\omega_l = \infty^-}\angle R(j\omega) = \frac{\pi}{2}\left\{\operatorname{sgn}\left[R_r(\omega_{l-1})\right] - \operatorname{sgn}\left[R_r(\omega_l)\right]\right\}\cdot\operatorname{sgn}\left[R_i(\infty^-)\right]. \tag{6.11}
\]
From (6.8) and (6.9),
\[
\begin{aligned}
\sigma(R) &= \left\{\operatorname{sgn}\left[R_r(\omega_0)\right] - \operatorname{sgn}\left[R_r(\omega_1)\right]\right\}\operatorname{sgn}\left[R_i\!\left(\omega_1^-\right)\right]\\
&\quad + \left\{\operatorname{sgn}\left[R_r(\omega_1)\right] - \operatorname{sgn}\left[R_r(\omega_2)\right]\right\}\operatorname{sgn}\left[R_i\!\left(\omega_2^-\right)\right] + \cdots\\
&\quad + \left\{\operatorname{sgn}\left[R_r(\omega_{l-2})\right] - \operatorname{sgn}\left[R_r(\omega_{l-1})\right]\right\}\operatorname{sgn}\left[R_i\!\left(\omega_{l-1}^-\right)\right]\\
&\quad + \left\{\operatorname{sgn}\left[R_r(\omega_{l-1})\right] - \operatorname{sgn}\left[R_r(\omega_l)\right]\right\}\operatorname{sgn}\left[R_i(\infty^-)\right].
\end{aligned}
\]
Since
\[
\operatorname{sgn}\left[R_i\!\left(\omega_i^-\right)\right] = -\operatorname{sgn}\left[R_i\!\left(\omega_{i+1}^-\right)\right]
\qquad\text{and}\qquad
\operatorname{sgn}\left[R_i\!\left(\omega_{l-1}^-\right)\right] = -\operatorname{sgn}\left[R_i(\infty^-)\right],
\]
we have
\[
\begin{aligned}
\operatorname{sgn}\left[R_i\!\left(\omega_{l-2}^-\right)\right] &= (-1)^2\operatorname{sgn}\left[R_i(\infty^-)\right],\\
\operatorname{sgn}\left[R_i\!\left(\omega_{l-3}^-\right)\right] &= (-1)^3\operatorname{sgn}\left[R_i(\infty^-)\right],\\
&\ \ \vdots\\
\operatorname{sgn}\left[R_i\!\left(\omega_2^-\right)\right] &= (-1)^{l-2}\operatorname{sgn}\left[R_i(\infty^-)\right],\\
\operatorname{sgn}\left[R_i\!\left(\omega_1^-\right)\right] &= (-1)^{l-1}\operatorname{sgn}\left[R_i(\infty^-)\right],
\end{aligned} \tag{6.12}
\]
so that
\[
\begin{aligned}
\sigma(R) &= \left\{\operatorname{sgn}\left[R_r(\omega_0)\right] - 2\operatorname{sgn}\left[R_r(\omega_1)\right] + 2\operatorname{sgn}\left[R_r(\omega_2)\right] + \cdots\right.\\
&\qquad \left. + (-1)^{l-1}2\operatorname{sgn}\left[R_r(\omega_{l-1})\right] + (-1)^l\operatorname{sgn}\left[R_r(\omega_l)\right]\right\}\cdot(-1)^{l-1}\operatorname{sgn}\left[R_i(\infty^-)\right]\\
&= \left\{\operatorname{sgn}\left[R_r(\omega_0)\right] + 2\sum_{j=1}^{l-1}(-1)^j\operatorname{sgn}\left[R_r(\omega_j)\right] + (-1)^l\operatorname{sgn}\left[R_r(\omega_l)\right]\right\}(-1)^{l-1}\operatorname{sgn}\left[R_i(\infty^-)\right].
\end{aligned}
\]
For the case when $n - m$ is odd, the plot of $R(j\omega)$ approaches the negative or positive imaginary axis as $\omega$ approaches $\infty$. Thus, we have
\[
\Delta_{\omega_k}^{\omega_{k+1}}\angle R(j\omega) = \frac{\pi}{2}\left\{\operatorname{sgn}\left[R_r(\omega_k)\right] - \operatorname{sgn}\left[R_r(\omega_{k+1})\right]\right\}\cdot\operatorname{sgn}\left[R_i\!\left(\omega_{k+1}^-\right)\right],
\]
for $k = 0, \cdots, l - 2$, and
\[
\Delta_{\omega_{l-1}}^{\omega_l = \infty^-}\angle R(j\omega) = \frac{\pi}{2}\operatorname{sgn}\left[R_r(\omega_{l-1})\right]\operatorname{sgn}\left[R_i(\infty^-)\right].
\]
Then, using (6.12), we have
\[
\begin{aligned}
\sigma(R) &= \left\{\operatorname{sgn}\left[R_r(\omega_0)\right] - \operatorname{sgn}\left[R_r(\omega_1)\right]\right\}\operatorname{sgn}\left[R_i\!\left(\omega_1^-\right)\right]\\
&\quad + \left\{\operatorname{sgn}\left[R_r(\omega_1)\right] - \operatorname{sgn}\left[R_r(\omega_2)\right]\right\}\operatorname{sgn}\left[R_i\!\left(\omega_2^-\right)\right] + \cdots\\
&\quad + \left\{\operatorname{sgn}\left[R_r(\omega_{l-2})\right] - \operatorname{sgn}\left[R_r(\omega_{l-1})\right]\right\}\operatorname{sgn}\left[R_i\!\left(\omega_{l-1}^-\right)\right]\\
&\quad + (-1)^{l-1}\operatorname{sgn}\left[R_r(\omega_{l-1})\right]\operatorname{sgn}\left[R_i(\infty^-)\right]\\
&= \left\{\operatorname{sgn}\left[R_r(\omega_0)\right] - 2\operatorname{sgn}\left[R_r(\omega_1)\right] + \cdots + (-1)^{l-1}2\operatorname{sgn}\left[R_r(\omega_{l-1})\right]\right\}(-1)^{l-1}\operatorname{sgn}\left[R_i(\infty^-)\right]\\
&= \left\{\operatorname{sgn}\left[R_r(\omega_0)\right] + 2\sum_{j=1}^{l-1}(-1)^j\operatorname{sgn}\left[R_r(\omega_j)\right]\right\}\cdot(-1)^{l-1}\operatorname{sgn}\left[R_i(\infty^-)\right].
\end{aligned}
\]
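Lemma 6.1 lends itself to direct numerical evaluation on sampled frequency data. The following sketch is our own illustration, not part of the text: it detects the odd-multiplicity positive zeros of $R_i$ as sign changes on a dense grid (assumed fine enough and extending into the asymptotic range) and checks the formula against two rational functions whose signatures are known from their pole-zero counts.

```python
import numpy as np

def hurwitz_signature(w, R, rel_deg):
    """Evaluate the signature formula of Lemma 6.1 from samples R = R(jw)
    on a grid w[0] = 0 < w[1] < ... that reaches the asymptotic range."""
    Rr, Ri = R.real, R.imag
    sg = np.sign(Ri)
    # odd-multiplicity positive zeros of Ri appear as sign changes;
    # the mandatory zero at w = 0 is excluded
    flips = [k for k in range(1, len(w) - 1) if sg[k] * sg[k + 1] < 0]
    # signs of Rr at w0 = 0, at the interior zeros, and at wl = infinity^-
    signs = [np.sign(Rr[0])] + [np.sign(Rr[k]) for k in flips] + [np.sign(Rr[-1])]
    l = len(signs) - 1
    j_inf = np.sign(Ri[-1])
    total = signs[0] + 2 * sum((-1)**t * signs[t] for t in range(1, l))
    if rel_deg % 2 == 0:          # n - m even: the term at wl participates
        total += (-1)**l * signs[l]
    return int(total * (-1)**(l - 1) * j_inf)

# two rational functions with known pole/zero counts (our test cases)
w = np.linspace(0.0, 50.0, 200001)
s = 1j * w
R1 = (s + 1) * (s - 2) / ((s + 3) * (s + 4))   # sigma = 1 - 1 - (2 - 0) = -2
R2 = (s - 1) / ((s + 1) * (s + 2))             # sigma = 0 - 1 - (2 - 0) = -3
print(hurwitz_signature(w, R1, 0), hurwitz_signature(w, R2, 1))   # -2 -3
```

The grid density matters only for separating adjacent zeros of $R_i$; the signs themselves are robust to sampling error away from the zeros.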
Now consider a complex rational function
\[
Q(s) = \frac{D(s)}{E(s)} \tag{6.13}
\]
where $D(s)$ and $E(s)$ are polynomials with complex coefficients of degrees $n$ and $m$, respectively. As before, we assume that $D(s)$ and $E(s)$ have no zeros on the $j\omega$-axis. Let $\Delta_{-\infty}^{+\infty}\angle Q(j\omega)$ denote the net change in phase of $Q(j\omega)$ as $\omega$ runs from $-\infty$ to $+\infty$. Also let $z_Q^+$, $p_Q^+$ ($z_Q^-$, $p_Q^-$) denote the numbers of RHP (LHP) zeros and poles of $Q(s)$. Then we have
\[
\Delta_{-\infty}^{+\infty}\angle Q(j\omega) = \pi\left[z_Q^- - z_Q^+ - \left(p_Q^- - p_Q^+\right)\right]. \tag{6.14}
\]
This easily follows from the fact that each LHP zero (RHP zero) and each RHP pole (LHP pole) contributes $+\pi$ ($-\pi$) to the net phase change; summing over all poles and zeros we obtain the formula given. Analogous to the real case, we define the signature of the complex rational function $Q(s)$:
\[
\sigma(Q) := z_Q^- - z_Q^+ - \left(p_Q^- - p_Q^+\right). \tag{6.15}
\]
Write
\[
Q(j\omega) = Q_r(\omega) + jQ_i(\omega) \tag{6.16}
\]
where $Q_r(\omega)$ and $Q_i(\omega)$ are rational functions with real coefficients. Moreover, $Q_r(\omega)$ and $Q_i(\omega)$ have no real poles for $\omega \in (-\infty, +\infty)$. It is easy to show that the numerators of $Q_r(\omega)$ and $Q_i(\omega)$ are generically of the same degree. Indeed, if this is not so, multiplying $Q(s)$ by an arbitrary complex number $\alpha + j\beta$ renders this condition true without changing the poles, zeros, or signature of $Q(s)$. Therefore, we henceforth assume this to be true. Let $\omega_1, \cdots, \omega_{l-1}$, ordered as
\[
-\infty =: \omega_0 < \omega_1 < \cdots < \omega_{l-1} < +\infty =: \omega_l,
\]
denote the real, distinct, finite zeros of $Q_i(\omega) = 0$ with odd multiplicities.
LEMMA 6.2 (Complex Hurwitz Signature Lemma)
\[
\sigma(Q) = \left[\sum_{j=1}^{l-1}(-1)^{l-1-j}\operatorname{sgn}\left[Q_r(\omega_j)\right]\right]\cdot\operatorname{sgn}\left[Q_i(\infty^-)\right].
\]
PROOF As before, we compute the total phase change as a sum over a disjoint partition of the frequency axis. Begin by noting that
\[
\pi\sigma(Q) = \Delta_{-\infty}^{+\infty}\angle Q(j\omega). \tag{6.17}
\]
To evaluate the right-hand side of (6.17), write
\[
\Delta_{-\infty}^{+\infty}\angle Q(j\omega) = \Delta_{-\infty}^{\omega_1}\angle Q(j\omega) + \sum_{i=1}^{l-2}\Delta_{\omega_i}^{\omega_{i+1}}\angle Q(j\omega) + \Delta_{\omega_{l-1}}^{+\infty}\angle Q(j\omega). \tag{6.18}
\]
Let
\[
Q(j\omega) = Q_r(\omega) + jQ_i(\omega) := \frac{A_r(\omega)}{B(\omega)} + j\frac{A_i(\omega)}{B(\omega)}
\]
where $A_r(\omega)$, $A_i(\omega)$, $B(\omega)$ are polynomials with real coefficients and
\[
B(\omega) \neq 0, \qquad \omega \in (-\infty, \infty). \tag{6.19}
\]
We assume that $A_r(\omega)$ and $A_i(\omega)$ are of equal degree. If this fails, it can be restored by multiplying $Q(s)$ by almost any complex number without changing its signature. Introduce
\[
A(\omega) := A_r(\omega) + jA_i(\omega) \tag{6.20}
\]
and note that, in view of (6.19),
\[
\Delta_{-\infty}^{+\infty}\angle Q(j\omega) = \Delta_{-\infty}^{\omega_1}\angle A(\omega) + \sum_{k=1}^{l-2}\Delta_{\omega_k}^{\omega_{k+1}}\angle A(\omega) + \Delta_{\omega_{l-1}}^{+\infty}\angle A(\omega). \tag{6.21}
\]
The finite zeros of $Q_i(\omega)$ are identical to the zeros of $A_i(\omega)$, and therefore
\[
\Delta_{\omega_k}^{\omega_{k+1}}\angle Q(j\omega) = \frac{\pi}{2}\operatorname{sgn}\left[\dot{A}_i(\omega_k)\right]\cdot\left\{\operatorname{sgn}\left[A_r(\omega_k)\right] - \operatorname{sgn}\left[A_r(\omega_{k+1})\right]\right\}, \tag{6.22}
\]
for $k = 1, 2, \cdots, l - 2$. Also, we have
\[
\Delta_{-\infty}^{\omega_1}\angle A(\omega) + \Delta_{\omega_{l-1}}^{+\infty}\angle A(\omega) = \frac{\pi}{2}\operatorname{sgn}\left[\dot{A}_i(\omega_1)\right]\operatorname{sgn}\left[A_r(\omega_1)\right] + \frac{\pi}{2}\operatorname{sgn}\left[\dot{A}_i(\omega_{l-1})\right]\operatorname{sgn}\left[A_r(\omega_{l-1})\right]. \tag{6.23}
\]
The proof of the lemma is completed by noting that
\[
\begin{aligned}
\operatorname{sgn}\left[\dot{Q}_i(\omega_{l-1})\right] &= \operatorname{sgn}\left[Q_i(\infty^-)\right], &\qquad \operatorname{sgn}\left[A_r(\omega_k)\right] &= \operatorname{sgn}\left[Q_r(\omega_k)\right],\\
\operatorname{sgn}\left[\dot{Q}_i(\omega_k)\right] &= \operatorname{sgn}\left[\dot{A}_i(\omega_k)\right], &\qquad \operatorname{sgn}\left[\dot{Q}_i(\omega_k)\right] &= (-1)^{l-1-k}\operatorname{sgn}\left[Q_i(\infty^-)\right]
\end{aligned} \tag{6.24}
\]
for $k = 1, 2, \cdots, l - 1$, and substituting (6.21)–(6.24) in (6.18).
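Lemma 6.2 can also be exercised numerically. The sketch below is our own, not part of the text: it samples $Q(j\omega)$ on a grid running over $(-\infty, +\infty)$ (truncated to a range that reaches the asymptotic regime), and the premultiplication by $1 + j$ enforces the equal-degree assumption on the numerators of $Q_r$ and $Q_i$.

```python
import numpy as np

def complex_signature(w, Q):
    """Evaluate the formula of Lemma 6.2 from samples Q = Q(jw) on a grid
    from large negative to large positive w. The numerators of Qr and Qi
    must have equal degree; premultiply Q by a generic complex number
    if they do not."""
    Qr, Qi = Q.real, Q.imag
    sg = np.sign(Qi)
    idx = np.where(sg[:-1] * sg[1:] < 0)[0]   # odd-multiplicity zeros w1..w_{l-1}
    l = len(idx) + 1
    j_inf = np.sign(Qi[-1])
    total = sum((-1)**(l - 1 - t) * np.sign(Qr[i])
                for t, i in enumerate(idx, start=1))
    return int(total * j_inf)

# Q(s) = (s - (1 - j))/(s + 2): one RHP zero, one LHP pole, sigma = -2.
# The factor (1 + j) equalizes the numerator degrees of Qr and Qi.
w = np.linspace(-50.0, 50.0, 200000)   # even count: no sample lands exactly on w = 0
Q = (1 + 1j) * (1j * w - (1 - 1j)) / (1j * w + 2)
print(complex_signature(w, Q))   # -2
```

Without the $1 + j$ factor, $Q_i$ here has a lower-degree numerator than $Q_r$ and the formula returns the wrong value, which is exactly why the equal-degree normalization appears in the proof.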
6.3 Phase, Signature, Poles, Zeros, and Bode Plots
Let $P$ denote an LTI plant and $P(s)$ its rational transfer function, with $z^+$, $p^+$ ($z^-$, $p^-$) denoting the numbers of RHP (LHP) zeros and poles, and $n$ ($m$) the denominator (numerator) degree. Let the relative degree be denoted $r_P := n - m$. As defined earlier, the signature of $P$ is
\[
\sigma(P) = z^- - z^+ - \left(p^- - p^+\right). \tag{6.25}
\]

LEMMA 6.3
\[
r_P = -\frac{1}{20}\lim_{\omega\to\infty}\frac{dP_{db}(\omega)}{d(\log_{10}\omega)} \tag{6.26}
\]
\[
\sigma(P) = \frac{2}{\pi}\,\Delta_0^{\infty}\angle P(j\omega) \tag{6.27}
\]
where $P_{db}(\omega) := 20\log_{10}|P(j\omega)|$.

PROOF Equation (6.26) states that the relative degree is the high frequency slope of the Bode magnitude plot, and (6.27) states that the signature can be found from the net change in phase in the phase plot. Assuming that $P(s)$ has no $j\omega$-axis poles and zeros, we can also write
\[
\sigma(P) = -(n - m) - 2\left(z^+ - p^+\right) \tag{6.28}
\]
or
\[
\sigma(P) = -r_P - 2\left(z^+ - p^+\right). \tag{6.29}
\]
Therefore, $z^+ - p^+$ can be determined from the Bode plot of $P$. In particular, if $P(s)$ is stable, the Bode plot can often be obtained experimentally by measuring the frequency response of the system; the above relations with $p^+ = 0$ then determine $z^+$ from the Bode plot data. Now suppose that $P$ is an unstable LTI plant whose rational transfer function is unknown to us, and assume that $P$ has no imaginary axis poles and zeros. We assume, however, that a known feedback controller $C(s)$ stabilizes $P$ and that the closed-loop frequency response can be measured; it is denoted by $G(j\omega)$ for $\omega \in [0, \infty)$ (see Figure 6.1).
Figure 6.1 Frequency response measurement on an unstable plant: the unknown plant $P$ in unity feedback with the known controller $C(s)$.
Then
\[
P(j\omega) = \frac{G(j\omega)}{C(j\omega)\left(1 - G(j\omega)\right)} \tag{6.30}
\]
is the computed frequency response of the unstable plant. The next result shows that knowledge of $C(s)$ and $G(j\omega)$ is sufficient to determine the numbers $z^+$ and $p^+$, that is, the numbers of RHP zeros and poles of the plant. Let $z_c^+$ denote the number of RHP zeros of $C(s)$.

THEOREM 6.1
\[
z^+ = \frac{1}{2}\left[-r_P - r_C - 2z_c^+ - \sigma(G)\right] \tag{6.31}
\]
\[
p^+ = \frac{1}{2}\left[\sigma(P) - \sigma(G) - r_C - 2z_c^+\right]. \tag{6.32}
\]

PROOF We have
\[
G(s) = \frac{P(s)C(s)}{1 + P(s)C(s)}
\]
and, since $G(s)$ is stable,
\[
\sigma(G) = \left(z^- + z_c^-\right) - \left(z^+ + z_c^+\right) - (n + n_c) = -r_P - r_C - 2z_c^+ - 2z^+,
\]
which implies (6.31). From (6.29) applied to $P(s)$, we have
\[
p^+ = z^+ + \frac{\sigma(P)}{2} + \frac{r_P}{2}. \tag{6.33}
\]
Substituting (6.31) in (6.33), we obtain (6.32).
REMARK 6.1 In the above theorem, we assume that a stabilizing controller $C(s)$ is known and that the corresponding closed-loop frequency response $G(j\omega)$, $\omega \in [0, \infty)$, can be measured. Thus, $P(j\omega)$ can be computed from (6.30). Then $r_P$ and $\sigma(G)$ can be found by applying the results of Lemma 6.3 to $P(j\omega)$ and $G(j\omega)$, respectively, while $r_C$ and $z_c^+$ are known since $C(s)$ is known. Therefore, $z^+$ and $p^+$ can be found.
REMARK 6.2 In the above analysis we have assumed, for simplicity, that the plant has no imaginary axis poles and zeros. When such poles and zeros are present, their numbers may be known from physical considerations, or their numbers and locations may be ascertained from the experimentally determined or computed Bode plot: at an imaginary axis zero (pole) of multiplicity $k$ away from the origin, the magnitude plot is zero (infinity) and the phase plot undergoes an instantaneous change of $k\pi$; if such poles or zeros occur at the origin, there is a corresponding phase shift of $\frac{k\pi}{2}$ at zero frequency. Once identified, we can lump these poles and zeros with the controller and proceed with the design procedure. It is straightforward to establish that, in this case, the relations (6.31) and (6.32) must be modified to
\[
z^+ = \frac{1}{2}\left[-r_P - r_C - 2z_c^+ - z_c^i - \sigma(G)\right] \tag{6.34}
\]
\[
p^+ = \frac{1}{2}\left[\sigma(P) - \sigma(G) - r_C - 2z_c^+ - z_c^i + z^i - p^i\right] \tag{6.35}
\]
where $z^i$, $p^i$ ($z_c^i$, $p_c^i$) denote the numbers of imaginary axis zeros and poles of the plant (controller).
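Lemma 6.3 and Theorem 6.1 together give a concrete numerical recipe. The following sketch is our own construction (the plant and controller are hypothetical): closed-loop data for the known stabilizing controller $C(s) = 2$ are generated from $P(s) = 1/(s-1)$, and the RHP pole/zero counts of the "unknown" plant are recovered from the data alone.

```python
import numpy as np

def sig_from_phase(H):
    """(2/pi) x net phase change of H(jw) as w runs 0 -> infinity (Lemma 6.3)."""
    ph = np.unwrap(np.angle(H))
    return int(round(2.0 * (ph[-1] - ph[0]) / np.pi))

def rel_degree(w, H):
    """Relative degree from the high-frequency slope of the Bode magnitude plot."""
    db = 20.0 * np.log10(np.abs(H))
    slope = (db[-1] - db[-2]) / (np.log10(w[-1]) - np.log10(w[-2]))
    return int(round(-slope / 20.0))

# hypothetical experiment: C(s) = 2 is known to stabilize the unknown plant
# (actually P(s) = 1/(s - 1)); only G(jw) is "measured"
w = np.logspace(-3, 4, 20000)
G = 2.0 / (1j * w + 1.0)            # closed-loop frequency response data
P = G / (2.0 * (1.0 - G))           # plant response recovered via (6.30)

rP, rC, zc_plus = rel_degree(w, P), 0, 0     # r_C and zc+ known from C(s) = 2
z_plus = (-rP - rC - 2 * zc_plus - sig_from_phase(G)) // 2                # (6.31)
p_plus = (sig_from_phase(P) - sig_from_phase(G) - rC - 2 * zc_plus) // 2  # (6.32)
print(z_plus, p_plus)   # 0 RHP zeros, 1 RHP pole
```

The grid is logarithmic so that the high-frequency slope and the net phase change can both be estimated from the same samples; on measured data one would of course smooth before differencing.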
6.4 PID Synthesis for Delay-Free Continuous-Time Systems
In this section, we consider the synthesis and design of PID controllers for a continuous-time LTI plant with underlying transfer function $P(s)$ having $n$ poles and $m$ zeros (see Figure 6.2).
Figure 6.2 A unity feedback system with a PID controller.
We assume that the only information available to the designer is:

1. knowledge of the frequency response magnitude and phase, equivalently $P(j\omega)$, $\omega \in [0, \infty)$, if the plant is stable;

2. knowledge of a stabilizing controller and the corresponding closed-loop frequency response $G(j\omega)$, otherwise.

Such assumptions are reasonable for most systems. We also make the technical assumption that the plant has no $j\omega$-axis poles or zeros, so that the magnitude, its inverse, and the phase are well defined for all $\omega \ge 0$. As we have seen from the discussion in the last section, the numbers and locations of RHP poles and zeros can be found from the above data for any LTI plant and either "divided out" or lumped with the controller. Write
\[
P(j\omega) = |P(j\omega)|e^{j\phi(\omega)} = P_r(\omega) + jP_i(\omega) \tag{6.36}
\]
where $|P(j\omega)|$ denotes the magnitude and $\phi(\omega)$ the phase of the plant at the frequency $\omega$. Let the PID controller be of the form
\[
C(s) = \frac{K_i + K_p s + K_d s^2}{s(1 + sT)}, \qquad T > 0, \tag{6.37}
\]
where $T$ is assumed to be fixed and small. We now present results for developing our procedure for determining the stabilizing set.

LEMMA 6.4
Let
\[
F(s) := s(1 + sT) + \left(K_i + K_p s + K_d s^2\right)P(s)
\]
and
\[
\bar{F}(s) := F(s)P(-s).
\]
Then closed-loop stability is equivalent to
\[
\sigma\!\left(\bar{F}(s)\right) = n - m + 2z^+ + 2. \tag{6.38}
\]
PROOF Closed-loop stability is equivalent to the condition that all zeros of $F(s)$ lie in the LHP. This in turn is equivalent to the condition
\[
\sigma(F(s)) = n + 2 - \left(p^- - p^+\right).
\]
Now consider the rational function
\[
\bar{F}(s) = F(s)P(-s),
\]
and note that
\[
\sigma\!\left(\bar{F}(s)\right) = \sigma(F(s)) + \sigma(P(-s)).
\]
Therefore, the stability condition becomes
\[
\sigma\!\left(\bar{F}(s)\right) = n + 2 - \left(p^- - p^+\right) + z^+ - z^- - \left(p^+ - p^-\right) = n + 2 + z^+ - z^- = n - m + 2z^+ + 2.
\]
Write
\[
\begin{aligned}
\bar{F}(j\omega) &= j\omega(1 + j\omega T)P(-j\omega) + \left(K_i + j\omega K_p - \omega^2 K_d\right)P(j\omega)P(-j\omega)\\
&= \underbrace{\left(K_i - K_d\omega^2\right)|P(j\omega)|^2 - \omega^2 T P_r(\omega) + \omega P_i(\omega)}_{\bar{F}_r(\omega, K_i, K_d)} + j\omega\underbrace{\left[K_p|P(j\omega)|^2 + P_r(\omega) + \omega T P_i(\omega)\right]}_{\bar{F}_i(\omega, K_p)}\\
&= \bar{F}_r(\omega, K_i, K_d) + j\omega\,\bar{F}_i(\omega, K_p).
\end{aligned}
\]
THEOREM 6.2
Let $\omega_1 < \omega_2 < \cdots < \omega_{l-1}$ denote the distinct frequencies of odd multiplicities which are solutions of
\[
\bar{F}_i(\omega, K_p^*) = 0, \tag{6.39}
\]
or, equivalently,
\[
K_p^* = -\frac{P_r(\omega) + \omega T P_i(\omega)}{|P(j\omega)|^2} = -\frac{\cos\phi(\omega) + \omega T\sin\phi(\omega)}{|P(j\omega)|} =: g(\omega), \tag{6.40}
\]
for fixed $K_p = K_p^*$. Let $\omega_0 = 0$ and $\omega_l = \infty$, and let $j := \operatorname{sgn}\left[\bar{F}_i(\infty^-, K_p^*)\right]$. Determine strings of integers $I = [i_0, i_1, i_2, \cdots, i_l]$, with $i_t \in \{+1, -1\}$, such that:

For $n - m$ even,
\[
\left[i_0 - 2i_1 + 2i_2 + \cdots + (-1)^{l-1}2i_{l-1} + (-1)^l i_l\right](-1)^{l-1}j = n - m + 2z^+ + 2. \tag{6.41}
\]

For $n - m$ odd,
\[
\left[i_0 - 2i_1 + 2i_2 + \cdots + (-1)^{l-1}2i_{l-1}\right](-1)^{l-1}j = n - m + 2z^+ + 2. \tag{6.42}
\]

Then, for the fixed $K_p = K_p^*$, the $(K_i, K_d)$ values corresponding to closed-loop stability are given by
\[
\bar{F}_r(\omega_t, K_i, K_d)\,i_t > 0, \tag{6.43}
\]
where the $i_t$'s are taken from strings satisfying (6.41) or (6.42), and the $\omega_t$'s are taken from the solutions of (6.39).

PROOF By Lemma 6.4, the stability condition has been reduced to the signature condition in (6.38). The theorem follows by applying Lemma 6.1 to compute the signature of $\bar{F}(s)$.

The following result clarifies the range over which the parameter $K_p$ must be swept.

THEOREM 6.3
For the function $g(\omega)$ in (6.40), determined completely by the plant data $P(j\omega)$ and $T$:

(1) A necessary condition for PID stabilization is that there exist $K_p$ such that the equation
\[
K_p = g(\omega) \tag{6.44}
\]
has at least $k$ distinct roots of odd multiplicities, where
\[
k \ge \frac{n - m + 2z^+ + 2}{2} - 1 \quad \text{if } n - m \text{ is even}, \qquad
k \ge \frac{n - m + 2z^+ + 3}{2} - 1 \quad \text{if } n - m \text{ is odd}.
\]

(2) There exists a unique range $\boldsymbol{\omega} = (\omega_{\min}, \omega_{\max})$ over which condition (1) occurs, and this determines the range of $\omega$ to be swept.
(3) Every $K_p$ belonging to the stabilizing set of PID parameters lies in the range $K_p \in \left(K_p^{\min}, K_p^{\max}\right)$, where
\[
K_p^{\min} := \min_{\omega \in \boldsymbol{\omega}} g(\omega), \qquad K_p^{\max} := \max_{\omega \in \boldsymbol{\omega}} g(\omega),
\]
with $\boldsymbol{\omega} = (\omega_{\min}, \omega_{\max})$.

REMARK 6.3 The function $g(\omega)$ is well defined due to the assumption that the plant either has no $j\omega$-axis zeros or that these have been lumped with the controller. The frequency $\omega_{\max}$ can be selected as any frequency after which $g(\omega)$ continues to grow monotonically. This determines the range of frequencies over which the $P(j\omega)$ data of the plant must be known accurately. Note that the ranges $(\omega_{\min}, \omega_{\max})$ and $(K_p^{\min}, K_p^{\max})$ can consist of multiple segments.

The computation of the stabilizing set implied by the above results is summarized in the following procedure.

Computation of PID Stabilizing Set from Nyquist/Bode Data

The complete set of stabilizing PID gains can be computed by the following procedure.

For stable systems: The available data is the frequency response of the plant $P(j\omega)$.

0.1 Determine the relative degree of the plant $r_P = n - m$ from the high frequency slope of the Bode magnitude plot of $P(j\omega)$.

0.2 Let $\Delta_0^{\infty}[\phi(\omega)]$ denote the net change of phase of $P(j\omega)$ for $\omega \in [0, \infty)$. Determine $z^+$ from
\[
\Delta_0^{\infty}[\phi(\omega)] = -\frac{\pi}{2}\left[(n - m) + 2z^+\right],
\]
which follows from (6.28) with $p^+ = 0$.
For unstable systems: The available data are a stabilizing controller transfer function $C(s)$ and the frequency response $G(j\omega)$ of the corresponding stable closed-loop system.

0.1 Compute the frequency response $P(j\omega)$ from (6.30).

0.2 Determine the relative degree $r_P$ of the plant from the high frequency slope of the Bode magnitude plot of $P(j\omega)$.

0.3 Determine $z_c^+$ and $r_C$ from $C(s)$.

0.4 Compute $\sigma(G)$ by applying (6.27) to $G(j\omega)$.

0.5 Compute $z^+$ using (6.31) in Theorem 6.1.

0.6 Compute $g(\omega)$ using (6.40) and the data.
1. Fix $K_p = K_p^*$, solve (6.40), and let $\omega_1 < \omega_2 < \cdots < \omega_{l-1}$ denote the distinct frequencies of odd multiplicities which are solutions of (6.40).

2. Set $\omega_0 = 0$, $\omega_l = \infty$, and $j = \operatorname{sgn}\left[\bar{F}_i(\infty^-, K_p^*)\right]$. Determine all strings of integers $i_t \in \{+1, -1\}$ such that:

For $n - m$ even,
\[
\left[i_0 - 2i_1 + 2i_2 + \cdots + (-1)^{l-1}2i_{l-1} + (-1)^l i_l\right](-1)^{l-1}j = n - m + 2z^+ + 2. \tag{6.45}
\]

For $n - m$ odd,
\[
\left[i_0 - 2i_1 + 2i_2 + \cdots + (-1)^{l-1}2i_{l-1}\right](-1)^{l-1}j = n - m + 2z^+ + 2. \tag{6.46}
\]

3. For the fixed $K_p = K_p^*$ chosen in Step 1, solve for the stabilizing $(K_i, K_d)$ values from
\[
\left[K_i - K_d\omega_t^2 + \frac{\omega_t\sin\phi(\omega_t) - \omega_t^2 T\cos\phi(\omega_t)}{|P(j\omega_t)|}\right]i_t > 0, \tag{6.47}
\]
for $t = 0, 1, \cdots, l$.

4. Repeat the previous three steps, updating $K_p$ over the prescribed ranges. The ranges over which $K_p$ must be swept are determined from the requirement that (6.45) or (6.46) be satisfied for at least one string of integers, as in Theorem 6.3.

We emphasize that all computations are based on the data $P(j\omega)$; knowledge of the transfer function $P(s)$ is not required. For well-posedness of the loop, it is necessary that
\[
K_d \neq -\frac{T}{P(\infty)}.
\]
For strictly proper plants, $P(\infty) = 0$ and this constraint vanishes.
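The whole procedure can be exercised numerically. The sketch below is our own (the plant, the grid, and the function names are our choices, with a hypothetical plant whose frequency response samples play the role of measured data): it implements the signature test of Theorem 6.2 for the odd $n - m$ case and classifies candidate PID gains using only the data.

```python
import numpy as np

# Frequency-response "data" generated from a hypothetical stable plant
# P(s) = 1/((s+1)(s+2)(s+3)); in practice these samples would be measured.
T = 0.01
w = np.linspace(1e-4, 100.0, 400001)
jw = 1j * w
P = 1.0 / ((jw + 1.0) * (jw + 2.0) * (jw + 3.0))
target = 3 + 2 * 0 + 2     # n - m + 2 z+ + 2, with n - m = 3 and z+ = 0 from the data

def is_stabilizing(Kp, Ki, Kd):
    """Signature test of Theorem 6.2 (n - m odd here), from data only."""
    Fi = Kp * np.abs(P)**2 + P.real + w * T * P.imag           # Fbar_i(w, Kp)
    idx = np.where(np.sign(Fi[:-1]) * np.sign(Fi[1:]) < 0)[0]  # roots of (6.39)
    wt = np.concatenate(([w[0]], w[idx]))                      # w0 ~ 0 and w1..w_{l-1}
    Pt = np.concatenate(([P[0]], P[idx]))
    Fr = (Ki - Kd * wt**2) * np.abs(Pt)**2 - wt**2 * T * Pt.real + wt * Pt.imag
    sg, l, j = np.sign(Fr), len(wt), np.sign(Fi[-1])
    sigma = (sg[0] + 2 * sum((-1)**t * sg[t] for t in range(1, l))) * (-1)**(l - 1) * j
    return sigma == target

print(is_stabilizing(10.0, 5.0, 1.0))    # True
print(is_stabilizing(10.0, -1.0, 1.0))   # False
```

Sweeping `Kp` over the range given by Theorem 6.3 and collecting, for each value, the feasible $(K_i, K_d)$ half-planes reproduces the complete stabilizing set; the classification above can be cross-checked against the closed-loop characteristic roots when a model happens to be available.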
6.5 PID Synthesis for Systems with Delay
In this section, we show how the previous results can be extended to systems with delay. Consider the finite dimensional LTI plant $P_L$ with a cascaded delay in Figure 6.3. Here $P_0$ represents an LTI delay-free system with a proper transfer function; the transfer functions of $P_0$ and $P_L$ are denoted $P_0(s)$ and $P_L(s)$, respectively. We assume that frequency response measurements can be made at terminals "a" and "b," that is, on the delay system $P_L$. Thus, the data we have are
\[
P_L(j\omega) = e^{-j\omega L}P_0(j\omega) = m_L(\omega)e^{j\phi_L(\omega)}, \qquad 0 \le \omega < \infty.
\]
Figure 6.3 A plant with cascaded delay.
Write $P_0(j\omega) = m_0(\omega)e^{j\phi_0(\omega)}$. Therefore,
\[
m_0(\omega) = m_L(\omega), \tag{6.48}
\]
and
\[
\phi_L(\omega) = \phi_0(\omega) - \omega L, \qquad 0 \le \omega < \infty. \tag{6.49}
\]
It is clear that $m_0(\omega)$ and $\phi_0(\omega)$ can be determined from the data $m_L(\omega)$ and $\phi_L(\omega)$ on the system with embedded delay $L$ when $L$ is known. If $L$ is unknown, it can be determined from the high frequency slope of the phase. Now let $C(s, k)$ denote the PID controller
\[
C(s, k) = \frac{K_i + K_p s + K_d s^2}{s(1 + sT)}, \qquad k = [K_i, K_p, K_d].
\]
Let $S_0$ denote the set of stabilizing PID controllers for the delay-free plant:
\[
S_0 = \{k : C(s, k) \text{ stabilizes } P_0\}.
\]
We have seen in the previous section how $S_0$ can be found when $P_0(j\omega)$ is known; it follows from (6.49) that $S_0$ can be determined from $P_L(j\omega)$, the data for the embedded delay system. We denote by $S_L$ the set of PID controllers stabilizing the plant $P_0$ with cascaded delay ranging from $0$ to $L$ seconds. Following the results in Chapter 5, introduce the sets
\[
S_\infty = \left\{k : \left|C(s, k)P_0(s)\right|_{s=\infty} \ge 1\right\},
\]
\[
S_B = \left\{k : C(j\omega, k) = -\frac{e^{j\omega L_0}}{P_0(j\omega)} \ \text{for some } \omega \in [0, \infty),\ 0 \le L_0 \le L\right\},
\]
where $L_0$ denotes a delay value at which an imaginary axis characteristic root occurs. The following theorem is obtained from the results of Chapter 5.

THEOREM 6.4
The set $S_L$ can be found from
\[
S_L = S_0 \setminus (S_\infty \cup S_B). \tag{6.50}
\]
The set $S_\infty$ consists of those PID gains for which the Nyquist plot approaches points outside the unit circle as $s \to \infty$. The set $S_B$ consists of those PID gains for which an imaginary axis characteristic root occurs for some delay not exceeding $L$. The relationship (6.50) states that excluding $S_\infty$ and $S_B$ from $S_0$ yields the stabilizing set $S_L$ for the system with cascaded delay up to $L$ seconds. The computation of $S_0$ has been described in the previous section. The set $S_\infty$ is easy to calculate; in fact,
\[
S_\infty = \left\{k : |K_d| \ge \frac{T}{|P_0(\infty)|}\right\}.
\]
To determine $S_B$, let
\[
\theta(\omega, k) := \angle\,\frac{K_i - K_d\omega^2 + j\omega K_p}{j\omega(1 + j\omega T)}.
\]
Then the conditions defining $S_B$ can be written as the magnitude and phase conditions
\[
K_i - K_d\omega^2 = \pm\sqrt{\frac{\omega^2\left(1 + \omega^2 T^2\right)}{m_0^2(\omega)} - \omega^2 K_p^2}, \tag{6.51}
\]
\[
\theta(\omega, k) \ge \pi - \phi_0(\omega) - \omega L, \qquad \omega \in [0, \infty), \tag{6.52}
\]
so that
\[
S_B = \{k : k \text{ satisfies (6.51) and (6.52) for some } \omega \in [0, \infty)\}.
\]
Note that (6.51) represents a straight line in $(K_i, K_d)$ space for each fixed $K_p$ and $\omega$. The calculation of the set $S_B$ is tedious but straightforward if one sweeps over the frequency variable.
PID Synthesis for Performance
As we have already seen in Chapter 2 many performance attainment problems can be cast as simultaneous stabilization of the plant P (s), and families of real and complex plants. For example: 1. The problem of achieving a gain margin is equivalent to simultaneously stabilizing the plant P (s) and the family of real plants P c (s) = {KP (s) : K ∈ [Kmin , Kmax ]} . 2. The problem of achieving prescribed phase margin θm is equivalent to simultaneously stabilizing the plant P (s) and the family of complex plants P c (s) = e−θ P (s) : θ ∈ [0, θm ] .
CONTROLLER SYNTHESIS FREE OF ANALYTICAL MODELS
225
3. The problem of achieving an H∞ norm specification on the sensitivity function S(s), that is, kW (s)S(s)k∞ < γ is equivalent to simultaneously stabilizing the plant P (s) and the family of complex plants # ) (" 1 c P (s) : θ ∈ [0, 2π] . P (s) = 1 + γ1 eθ W (s) 4. The problem of achieving an H∞ norm specification on the complementary sensitivity function T (s), that is, kW (s)T (s)k∞ < γ, is equivalent to simultaneously stabilizing the plant P (s) and the family of complex plants 1 P c (s) = P (s) 1 + eθ W (s) : θ ∈ [0, 2π] . γ Based on the above, we consider the problem of stabilizing a complex LTI plant with transfer function P c (s) using PID control. Let nc and mc denote the numerator and denominator degrees of P c (s) and zc+ (p+ c ) the number of RHP zeros and poles. Note that in each of the above cases zc+ (p+ c ) are uniquely determined by the numbers of RHP zeros and poles of the real plant P (s). As in the real case, we assume that the only information available to the designer is “Knowledge of the frequency response magnitude and phase of the real plant, equivalently, P (ω), ω ∈ (0, ∞).” Suppose the complex plant with transfer function P c (s) is in a closed-loop with the PID controller C(s) =
K i + K p s + K d s2 , s(1 + sT )
T > 0.
Consider the complex rational function F c (s) := s(1 + sT ) + Ki + Kp s + Kd s2 P c (s).
Closed-loop stability is equivalent to the condition that the zeros of F c (s) lie in the LHP. This in turn is equivalent to the condition + σ(F c (s)) = nc + 2 − p− c − pc . Now consider the rational function
F¯ c (s) = F c (s)P c ∗ (−s), where P c ∗ (−s) is obtained by replacing the numerator and denominator coefficients of P c (s) by their conjugates and replacing “s” by “−s.” Note that σ F¯ c (s) = σ (F c (s)) + σ (P c ∗ (−s)) . It is easy to verify that
σ (P c ∗ (−s)) = σ (P c (−s)) .
226
THREE TERM CONTROLLERS
Therefore, the stability condition becomes σ F¯ c (s) = nc − mc + 2zc+ + 2. To compute σ F¯ c (s) , we write F¯ c (ω) = ω(1 + ωT )P c∗ (−ω)
+ Ki + ωKp − ω 2 Kd P c (ω)P c∗ (−ω).
Write
P c (ω) = Prc (ω) + Pic (ω), where Prc (ω) and Pic (ω) are real rational functions, then P c ∗ (−ω) = Prc (ω) − Pic (ω). Then
F¯ c (ω) = (Ki − Kd ω 2 )P c (ω)P c∗ (−ω) − ω 2 T Prc (ω) + ωPic (ω) | {z } F¯rc (ω,Ki ,Kd )
c
=
+ (Kp P (ω)P |
F¯rc (ω, Ki , Kd )
+
c∗
(−ω) + Prc (ω) + ωT Pic (ω)) {z } F¯ic (ω,Kp )
F¯ic (ω, Kp ).
THEOREM 6.5 Let ω1 < · · · < ωl−1 denote the distinct frequencies of odd multiplicities which are solutions of F¯ic (ω, Kp∗ ) = 0, (6.53) for Kp = Kp∗ . Let ω0 = −∞ and ωl = ∞ and j = sgn F¯ic (∞− , Kp∗ ) . Define strings of integers {i1 , i2 , · · · , il−1 } with it ∈ {+1, −1} such that l−1 X
(−1)l−1−r ir · j = nc − mc + 2zc+ + 2.
(6.54)
r=1
For the fixed Kp = Kp∗ , the (Ki , Kd ) stabilizing values for the complex plant P c are those satisfying F¯rc (ωt , Ki , Kd )it > 0, (6.55) where it ’s are taken from the strings satisfying (6.54) and ωt ’s are the roots of (6.53). The complete set of stabilizing PID gains for a given complex LTI plant can be found from the frequency response data P c (ω) and the knowledge
CONTROLLER SYNTHESIS FREE OF ANALYTICAL MODELS
227
c of the number p+ c of RHP poles of P (s). These in turn can be found from knowledge of the real plant data P (ω). This leads to the procedure parallel to the real stabilization case. Note that (6.53) can be written as
Prc (ω) + ωT Pic(ω) P c (ω)P c ∗ (−ω) P c (ω) + ωT Pic(ω) =− r =: g c (ω). |P c (ω)|2
Kp∗ = −
(6.56)
For the fixed Kp = Kp∗ , the stabilizing (Ki , Kd ) are determined by the following expression: ωt cos φc (ωt ) − ωt2 T sin φc (ωt ) 2 K i − K d ωt + it > 0, (6.57) |P c (ωt )|2 for t = 0, 1, · · ·. Note that for the class of performance problems mentioned, P c (s) is of the form P c (s) = W c (s)P (s), where W c (s) is a weight chosen by the designer. Therefore, P c ∗ (−s) = W c ∗ (−s)P (−s), and thus P c ∗ (−ω) is known from the knowledge of P (ω). It is also important to note that all calculations above use only the data P (ω) of the real plant. Finally, we need to intersect the stabilizing set with the performance set. For a fixed Kp , this amounts to generating linear inequalities corresponding to stability from the previous section and corresponding to performance described above, and solving them simultaneously. In general, these sets are disconnected and a union of convex components. Multiple performance specifications can be handled in a similar manner.
6.7
An Illustrative Example: PID Synthesis
To illustrate the main result, we take a set of frequency response data points for a stable plant:

P(ω) := {P(jω), ω ∈ (0, 10) sampled every 0.01}.

The Nyquist and Bode plots are shown in Figures 6.4 and 6.5. The high frequency slope of the Bode magnitude plot is −40 dB/decade, and thus n − m = 2. The total change of phase is −540 degrees, so

−6 (π/2) = − (π/2) [(n − m) − 2(p⁺ − z⁺)],
THREE TERM CONTROLLERS

Figure 6.4 Nyquist plot of the plant.

Figure 6.5 Bode plot of the plant.
and since the plant is stable, p⁺ = 0, giving z⁺ = 2. The required signature for stability can now be determined and is

σ(F̄(s)) = (n − m) + 2z⁺ + 2 = (2) + 2(2) + 2 = 8.

Since n − m is even, we must have

[i0 − 2i1 + 2i2 − 2i3 + · · · + (−1)^l i_l] (−1)^{l−1} j = 8,

where

j = sgn[F̄_i(∞⁻, Kp)] = −sgn[ lim_{ω→∞} g(ω) ] = −1,

and it is clear that at least four terms are required to satisfy the above; in other words, l ≥ 4. From Figure 6.6 it is easy to see that (6.40) has at most three positive frequencies as solutions, and therefore we have

i0 − 2i1 + 2i2 − 2i3 + i4 = 8.

Also i4 = sgn[F̄_r(∞⁻, Ki, Kd)] = 1, independent of Ki and Kd. This means that Kp must be chosen so that F̄_i(ω, Kp*) = 0 has exactly three positive real zeros. This gives the feasible range of Kp values shown in Figure 6.6, which depicts the function

g(ω) = − (cos φ(ω) + ωT sin φ(ω)) / |P(jω)|.   (6.58)

The feasible range of Kp is such that the horizontal line at height Kp intersects the graph of g(ω) three times; this range is shown in Figure 6.6. In Figure 6.6 we also observe that the frequency range over which the plant data must be accurately known for PID control is [0, 8.2]. We now fix Kp* = 1 and compute the set of ω's satisfying

− (cos φ(ω) + ωT sin φ(ω)) / |P(jω)| = 1.

To find this set, we plot the function g(ω) as shown in Figure 6.7. The three positive frequencies ω1, ω2, ω3 are required to compute the stability set in (Ki, Kd) space. From the plot we found the solutions {ω1, ω2, ω3} = {0.742, 1.865, 7.854}, with ω0 = 0. This leads to the requirement

i0 − 2i1 + 2i2 − 2i3 = 7,

giving the only feasible string F = {i0, i1, i2, i3} = {1, −1, 1, −1}. Thus, we have the following set of linear inequalities for stability:

Ki > 0,
−3.8114 + Ki − 0.5506 Kd < 0,
12.2106 + Ki − 3.4782 Kd > 0,
−457.0235 + Ki − 61.6853 Kd < 0.
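The frequencies ω_t and the resulting inequalities can be computed from sampled data alone. A minimal Python sketch, using a hypothetical stable plant P(s) = 1/(s+1)³ (an assumption, not the plant of this example) and T = 0:

```python
import math, cmath

def g_of_w(P, w, T=0.0):
    """g(w) = -(cos(phi) + w*T*sin(phi))/|P(jw)|, as in (6.58)."""
    p = P(w)
    return -(math.cos(cmath.phase(p)) + w * T * math.sin(cmath.phase(p))) / abs(p)

def crossings(P, kp, ws, T=0.0):
    """Odd-multiplicity roots of g(w) = kp, located by a sign change between
    adjacent samples and refined by bisection."""
    f = lambda w: g_of_w(P, w, T) - kp
    roots = []
    for a, b in zip(ws, ws[1:]):
        if f(a) * f(b) < 0:
            for _ in range(60):
                m = 0.5 * (a + b)
                a, b = (a, m) if f(a) * f(m) < 0 else (m, b)
            roots.append(0.5 * (a + b))
    return roots

P = lambda w: 1.0 / (1j * w + 1.0) ** 3   # hypothetical plant data source
ws = [0.01 * k for k in range(1, 2001)]   # frequency grid (0, 20]
wt = crossings(P, 1.0, ws)                # roots of g(w) = Kp* for Kp* = 1
# Each root w in wt contributes one linear boundary Ki - Kd*w**2 + const = 0
# in the (Ki, Kd) plane, cf. (6.75); the string i_t selects the feasible side.
```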
Figure 6.6 Graph of the function g(ω) in (6.58), indicating the necessary range of ω and the feasible range of Kp.
Figure 6.7 Finding the set of ω’s satisfying (6.40) with Kp = 1.
The complete set of stabilizing PID gains for Kp* = 1 is given in Figure 6.8. This set is determined by finding the strings of integers {i0, i1, i2, i3} satisfying the stability conditions (6.45) and (6.46); the corresponding linear inequalities (6.47) then determine the stabilizing set shown in (Ki, Kd) space. The point marked with "*" was used as a test point to verify stability.
Figure 6.8 The complete set of stabilizing PID gains in (Ki , Kd ) space when Kp = 1.
By sweeping Kp we obtain the entire set of stabilizing PID gains, shown in Figure 6.9. This set is determined by sweeping over the feasible range of Kp determined in Figure 6.6 and, for each fixed Kp* in this range, solving the corresponding linear inequalities given by (6.47) in (Ki, Kd) space. Note that the range of Kp over which the search needs to be carried out is also evident from Figure 6.6, as discussed; it is Kp ∈ [−8.5, 4.2]. We next consider the problem of achieving an H∞ norm specification on the complementary sensitivity function T(s), that is,

‖W(s)T(s)‖∞ < 1
where

W(s) = (s + 0.1) / (s + 1).
This set is obtained by solving the PID stabilization problem for the family of complex plants corresponding to the H∞ norm specification, for fixed Kp = 1,
Figure 6.9 Entire set of stabilizing PID gains.
and intersecting the resulting stabilizing set with the stabilizing set for the real plant (see Figure 6.10). By sweeping Kp over the feasible ranges and generating the linear inequalities (6.57) in (Ki, Kd) space corresponding to stability and performance, we obtain the entire set of stabilizing PID gains satisfying the given H∞ specification, shown in Figure 6.11.
6.8
Model Free Synthesis for First Order Controllers
Consider the feedback configuration with an LTI plant and a first order controller as shown in Figure 6.12. We consider the synthesis and design of first order controllers of the form

C(s) = (x1 s + x2) / (s + x3)   (6.59)
for an LTI plant for which we know only the frequency response data P(jω), ω ∈ [0, ∞), and the number, p⁺, of RHP poles of the plant. We also assume that the plant has no jω axis poles or zeros. Suppose that the plant transfer
Figure 6.10 The complete set of stabilizing PID gains satisfying H∞ norm specification when Kp = 1.
Figure 6.11 The entire PID parameter set satisfying the prescribed H∞ norm specification.
Figure 6.12 A unity feedback system.

function is P(s) and

P(jω) = Pr(ω) + jPi(ω) := |P(jω)| (cos φ(ω) + j sin φ(ω)),

where φ(ω) = ∠P(jω). Consider the real rational function

F(s) := (s + x3) + (s x1 + x2)P(s)
(6.60)
For closed-loop stability of the plant with a first order controller, it is necessary and sufficient that

σ(F(s)) = n + 1 − (p⁻ − p⁺).   (6.61)

Let
F¯ (s) := F (s)P (−s).
(6.62)
The stability condition can be restated as

σ(F̄(s)) = n + 1 + (z⁺ − z⁻) = n − m + 2z⁺ + 1.   (6.63)

Now

F̄(s) = (s + x3)P(−s) + (s x1 + x2)P(s)P(−s)   (6.64)

so that

F̄(jω, x1, x2, x3) = x2|P(jω)|² + ωPi(ω) + x3Pr(ω)   [ =: F̄r(ω, x2, x3) ]
  + j { ω (x1|P(jω)|² + Pr(ω)) − x3Pi(ω) }   [ =: F̄i(ω, x1, x3) ]
It is easy to show that the curves F̄r(ω, x2, x3) = 0 and F̄i(ω, x1, x3) = 0 for 0 ≤ ω < ∞, along with F̄(0, x1, x2, x3) = 0 and F̄(∞, x1, x2, x3) = 0, partition the (x1, x2, x3) parameter space into signature invariant regions. By plotting these curves and selecting a test point from each region, we can determine the stability regions, namely those with signature equal to n − m + 2z⁺ + 1. Thus, we have the following procedure.

Computation of First Order Stabilizing Set for Continuous-Time Systems

1. Determine the relative degree n − m from the high frequency slope of the
Bode magnitude plot.

2. Let Δ₀^∞[φ(ω)] denote the net change of phase of P(jω) for ω ∈ [0, ∞). Determine z⁺ from knowledge of p⁺ and

Δ₀^∞[φ(ω)] = − (π/2) [(n − m) − 2(p⁺ − z⁺)].   (6.65)
3. Plot the curves below in the (x1, x2) plane for a fixed x3:

x3 + x2 P(0) = 0,   (6.66)

x1(ω) = (1/|P(jω)|) (x3 sin φ(ω)/ω − cos φ(ω)),  for 0 < ω < ∞,
x2(ω) = −(1/|P(jω)|) (x3 cos φ(ω) + ω sin φ(ω)),  for 0 < ω < ∞,   (6.67)

1 + P(∞) x1 = 0.   (6.68)
4. The curves x1(ω) and x2(ω) partition the (x1, x2) plane into disjoint signature invariant regions. The stabilizing regions are those for which F̄(s) has signature n − m + 2z⁺ + 1.

The procedure follows from the preceding discussion. Equations (6.67) are just

F̄r(ω, x2, x3) = 0 and F̄i(ω, x1, x3) = 0.   (6.69)

The signature associated with a region can be found by picking an arbitrary point in the region and using the formulas in Lemma 6.1. An alternative way to check the signature is to pick any one test point from each region and draw the Nyquist plot of C(jω)P(jω): stabilizing regions correspond to those points which give p⁺ counterclockwise encirclements of the −1 + j0 point.

Example 6.1 For illustration, we have collected the frequency domain (Nyquist-Bode) data of a stable plant:

P(ω) = {P(jω) : ω ∈ (0, 10) sampled every 0.01}.

The Nyquist plot of the plant is shown in Figures 6.13 and 6.14. From the data P(ω), we have P(0) = 13.333 and P(∞) = 0, so the straight line (6.68) is not applicable. After fixing x3 = 0.2, the data points representing the straight line in (6.66) and the curves in (6.67) are depicted in Figure 6.15. By testing a point in each signature invariant region, we obtained the stabilizing regions shown in Figure 6.15.
Figure 6.13 Nyquist plot of P(jω).
Figure 6.14 Nyquist plot of P(jω) (area magnified).
Figure 6.15 Stabilizing regions for x3 = 0.2.
6.9
Model Free Synthesis of First Order Controllers for Performance
As discussed in Section 6.6 on PID synthesis for performance, control system performance specifications expressed as gain margin, phase margin, and H∞ norm requirements result in the problem of stabilizing a family of complex plants P^c(s). This problem can be solved by procedures completely analogous to those of the real case dealt with above. By intersecting the real and complex stabilizing sets, we obtain the set of controllers achieving stability as well as performance. We illustrate the procedure in the example below.

Example 6.2 We consider an example with frequency response

P(ω) = {P(jω) : ω ∈ (−10, 10) sampled every 0.01}.

We also know that the plant has one RHP pole. We first find the stabilizing region in the controller parameter space; as in the previous example, we fix x3, here x3 = 2.5. This region is shown in Figure 6.16.
Figure 6.16 Stabilizing regions for x3 = 2.5.
We now consider the problem of determining the entire set of first order stabilizing controllers satisfying the required closed-loop performance, described by the requirement on the H∞ norm of the weighted complementary sensitivity function:

|W(jω)T(jω)| < γ  for all ω,   (6.70)

that is, ‖W(s)T(s)‖∞ < γ. As shown above, this is equivalent to the problem of simultaneously stabilizing the complex family corresponding to (6.70) as well as the original plant P(s). In this problem, we let γ = 1. The stabilizing sets for the complex plant families P^c(ω, θ), for θ = 0, π/3, 2π/3, π, 4π/3, 5π/3, 2π, are superimposed on the stabilizing region for the real plant shown in Figure 6.16; the result is plotted in Figure 6.17. To verify, we selected a number of points inside the performance region, constructed the corresponding controllers, and plotted the Nyquist plots of W(s)T(s), as shown in Figure 6.18. These points are marked "*" in Figure 6.17. We observe from the Nyquist plots in Figure 6.18 that every test controller satisfies the H∞ performance requirement.
Figure 6.17 First order controllers satisfying H∞ performance.
Figure 6.18 Nyquist plots of W (s)T (s) with selected controllers.
6.10
Data Based Design vs. Model Based Design
In this section, we discuss some differences between model based design and the data based designs described here. In model based design, mathematical models are obtained from the laws of physics that describe the dynamic behavior of the system to be controlled. On the other hand, the most common way of obtaining mathematical models of engineering systems is through a system identification process. Let us assume that the frequency domain data is obtained by exciting a plant, with an unknown rational transfer function, by sinusoidal signals. In theory, a system identification procedure should be able to determine the unknown rational transfer function exactly. In this ideal situation there would be no distinction between model based and data based synthesis methods. However, typical system identification procedures can fail to find an exact rational function even if exact (or perfect) data is available. This is especially true when the order of the plant is high. The following example illustrates that this in turn can lead to drastic differences in control design.

Example 6.3 Let us assume that the frequency domain data P(jω) shown below is obtained from a 20th order plant. Note that the plant is unstable with 2 RHP poles. Mathematical models of 20th, 10th, 7th, and 4th order were then obtained by a system identification process applied to P(jω). Figure 6.19 shows the Bode plots of the four identified models along with the frequency domain data collected from the 20th order plant considered here. It is observed that the Bode plots are almost identical, except that the 4th order identified model is relatively crude. We now compute the stabilizing PID parameter regions of each of these systems. Figure 6.20 shows the stabilizing regions in the PID controller parameter space. We can make the following observations.
For convenience, let us denote by G20(s), G10(s), G7(s), and G4(s) the 20th, 10th, 7th, and 4th order models identified, respectively.

1. The models G20(s) and G4(s) are found to be not PID stabilizable for the chosen Kp = 5. In other words, the stabilizing region in PID parameter space for the given plant is empty with Kp = 5. This is consistent with the data driven case, as is evident from Figure 6.21: the signature condition requires that the line representing Kp = 5 intersect the graph of g(jω) a minimum of 3 times, and Figure 6.21 shows that this necessity cannot be satisfied for Kp > 4.5. This is not unexpected for the model G4(s), since there is some difference between the identified and actual Bode (frequency response) plots.
Figure 6.19 Frequency domain data and the Bode plots of the 20th, 10th, 7th, and 4th order identified models.
Figure 6.20 Stabilizing regions (model based designs with G7(s) and G10(s), and the data based design) for Kp = 5.
Figure 6.21 Kp vs ω plot for the identified systems.
2. We have verified that G20(s) is not PID stabilizable for any value of Kp. This may seem surprising, since the Bode magnitude and phase plots of G20(s) are indistinguishably close to the data P(jω). In fact, G20(s) has additional RHP poles and zeros over those in the original plant model.

3. Stabilizing regions are found for G10(s) and G7(s). These regions overlap, but are not the same. This suggests that selection of controllers might be done inside the intersection of the stabilizing regions for G10(s) and G7(s).

4. The stabilizing region obtained from the data based method given here differs from those for G10(s) and G7(s). Thus, a reasonable selection of controller may be made inside the intersection of the stabilizing regions for G10(s) and G7(s) and the region obtained from the data based method.

5. In general, the accuracy of system identification depends on the accuracy of the data considered. On the other hand, the data based approach will work effectively as long as the roots of g(jω) = Kp* are found reliably.
In practice, experimentally obtained frequency domain data always contains noise and measurement errors. As discussed above, the stability regions determined by the model based design and the data based design will generally be different. Although the accuracy of the regions depends on each particular
case, the data based design gives a useful alternative to model based design methods, and in general the two complement each other. In particular, the data based viewpoint gives new guidelines for identification when identification is to be used for controller design. In the next section we discuss a simple approach to data-robust design.
6.11
Data-Robust Design via Interval Linear Programming
In this section, we evaluate the PID controller parameters assuming some uncertainty in the measurement of the frequency response data. We translate this robust stabilization problem under data uncertainty into a linear programming problem with interval coefficients. The set thus obtained will robustly stabilize the set of plants represented by the given data. Write the plant frequency response as

P(jω) = |P(jω)| e^{jφ(ω)} = Pr(ω) + jPi(ω) = |P(jω)| cos φ(ω) + j|P(jω)| sin φ(ω)   (6.71)
where |P(jω)| denotes the magnitude and φ(ω) the phase of the plant at the frequency ω. Let the PID controller to be designed be of the form

C(s) = (Ki + Kp s + Kd s²) / (s(1 + sT)),  T > 0   (6.72)
where T is assumed to be fixed and small. As before, define

F(s) := s(1 + sT) + (Ki + Kp s + Kd s²)P(s)

and

F̄(s) := F(s)P(−s).

Then

F̄(jω) = F(jω)P(−jω) = F̄r(ω, Ki, Kd) + jω F̄i(ω, Kp).

The algorithm designed previously is as follows.
1. Fix Kp = Kp*, solve

Kp* = − (Pr(ω) + ωT Pi(ω)) / |P(jω)|² = − (cos φ(ω) + ωT sin φ(ω)) / |P(jω)| =: g(ω)   (6.73)
and let ω1 < ω2 < · · · < ω_{l−1} denote the distinct frequencies of odd multiplicity which are solutions of (6.73).

2. Set ω0 = 0, ωl = ∞, and j = sgn[F̄i(∞⁻, Kp*)]. Determine all strings of integers i_t ∈ {+1, −1} such that:

For n − m even: [i0 − 2i1 + · · · + (−1)^{l−1} 2i_{l−1} + (−1)^l i_l] (−1)^{l−1} j = r_P + 2z⁺ + 2,
For n − m odd:  [i0 − 2i1 + · · · + (−1)^{l−1} 2i_{l−1}] (−1)^{l−1} j = r_P + 2z⁺ + 2.   (6.74)
3. For the fixed Kp = Kp* chosen in Step 1, solve for the stabilizing (Ki, Kd) values from

(Ki − Kd ω_t² + (ω_t sin φ(ω_t) − ω_t² T cos φ(ω_t)) / |P(jω_t)|) i_t > 0   (6.75)

for t = 0, 1, · · · , l.

4. Repeat the previous three steps by updating Kp over prescribed ranges. The ranges over which Kp must be swept are determined from the requirement that (6.74) be satisfied for at least one string of integers.
6.11.1
Data Robust PID Design
The motivation for robust design comes from the fact that in reality there is uncertainty in the measured data P(jω). Thus, equations (6.73) and (6.75) have uncertainties. We can convert (6.75) into an inequality with interval coefficients and proceed toward the solution with the help of the following theorem.

THEOREM 6.6 Consider the interval inequality

[y − mx − c] i > 0   (6.76)

where m ∈ [m⁻, m⁺] and c ∈ [c⁻, c⁺] are the slope and intercept of the straight line equation above, each an interval varying from a minimum to a maximum value, and i ∈ {−1, 1}, so that i = 1 means y > mx + c and i = −1 means y < mx + c. Then the region which satisfies all the inequalities of (6.76) is the intersection of the regions described by

[y − m⁻x − c⁻] i > 0,
[y − m⁻x − c⁺] i > 0,
[y − m⁺x − c⁻] i > 0,
[y − m⁺x − c⁺] i > 0.   (6.77)

PROOF Let us consider the "m-c" plane. The intervals of m and c form a rectangle in this plane. Let us consider a fixed m. As can be seen
from Figure 6.22, the area which satisfies all the inequalities y > mx + c is bounded by setting c = c⁺.
Figure 6.22 Top: "m-c" region. Bottom: "x-y" plot with varying slope m and intercept c.
Now let us vary m. It can be seen from the figure that as m varies from m⁻ to m⁺, the area described by (6.76) with i = 1 is bounded by the lines y = m⁻x + c⁺ and y = m⁺x + c⁺. Similarly, it can be shown that when i = −1, the inequality y < mx + c is bounded by the lines y = m⁻x + c⁻ and y = m⁺x + c⁻. This shows that it is enough to evaluate the inequalities at the vertices of the m-c rectangle in order to determine the solution set of (6.76).

Now let us find the intervals for the coefficients of (6.75). We illustrate this with the help of an example. Consider the frequency response P(jω) shown in Figure 6.23. For the sake of simplicity, we also assume that we know that the plant has
Figure 6.23 Bode plot with uncertainty bound.
2 RHP zeros. Let us assume an uncertainty of ±20% around the real and imaginary parts of the response. We also assume that the plant has no jω axis zeros in this example; if there are jω axis zeros, we can slightly perturb the plant to get rid of them, and if they are unavoidable, we can lump them with the controller. We denote the maxima and minima of the real and imaginary parts by Prmax(ω), Prmin(ω), Pimax(ω), and Pimin(ω), respectively. We now compute g(ω)max for the above data as follows:

g(ω)max = max[ g(ω, Prmax(ω), Pimax(ω)), g(ω, Prmax(ω), Pimin(ω)), g(ω, Prmin(ω), Pimax(ω)), g(ω, Prmin(ω), Pimin(ω)) ]   (6.78)

for 0 ≤ ω ≤ ∞, where g(ω) is evaluated from (6.73) with T = 0.001 sec. Similarly we can find g(ω)min. These are shown in Figure 6.24. Now, the high frequency slope is −20 dB/decade; therefore r_P = 1. The RHS of equation (6.74) is given by

r_P + 2z⁺ + 2 = 1 + 2(2) + 2 = 7   (6.79)

where z⁺ = 2 was obtained from the fact that the plant has 2 RHP zeros as mentioned above. We see that Kp = 5 cuts the g(ω) plot at 3 frequencies.
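The vertex evaluation in (6.78) is immediate to code. A Python sketch; the data point Pr = 0.5, Pi = −0.5 at ω = 1 is hypothetical, not a value from this example:

```python
def g_parts(w, pr, pi, T=0.001):
    """g(w) of (6.73) written in terms of the real/imaginary parts of P(jw)."""
    return -(pr + w * T * pi) / (pr ** 2 + pi ** 2)

def g_bounds(w, pr, pi, rel=0.20, T=0.001):
    """Evaluate g at the four vertex combinations of +/- rel uncertainty on
    the real and imaginary parts, as in (6.78), and return (min, max)."""
    vals = [g_parts(w, a, b, T)
            for a in (pr * (1.0 - rel), pr * (1.0 + rel))
            for b in (pi * (1.0 - rel), pi * (1.0 + rel))]
    return min(vals), max(vals)

lo, hi = g_bounds(1.0, 0.5, -0.5)   # hypothetical data point
```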
Figure 6.24 g(ω) plot. The Kp = 5 line is also shown.
Using equation (6.74) for the case where n − m is odd, we see that the only possible string satisfying the equation is

F = {i0, i1, i2, i3} = {1, −1, 1, −1}.   (6.80)
Let the roots be ω1, ω2, and ω3. From Figure 6.24, we observe that ω_t⁻ ≤ ω_t ≤ ω_t⁺, and for a fixed ω_t we define Pr(ω_t)⁻ := min(Pr(ω_t)), Pr(ω_t)⁺ := max(Pr(ω_t)), Pi(ω_t)⁻ := min(Pi(ω_t)), Pi(ω_t)⁺ := max(Pi(ω_t)). Now we evaluate the following bounds by reading off values from the Pr(ω) and Pi(ω) graphs shown in Figures 6.25 and 6.26.
Figure 6.25 “Pr (ω)” and “Pi (ω)” plot.
Figure 6.26 Zoomed version of Figure 6.25 for ω1 .
Pr(ω_t)min := min_{ω_t⁻ ≤ ω_t ≤ ω_t⁺} Pr(ω_t)⁻,
Pr(ω_t)max := max_{ω_t⁻ ≤ ω_t ≤ ω_t⁺} Pr(ω_t)⁺,
Pi(ω_t)min := min_{ω_t⁻ ≤ ω_t ≤ ω_t⁺} Pi(ω_t)⁻,
Pi(ω_t)max := max_{ω_t⁻ ≤ ω_t ≤ ω_t⁺} Pi(ω_t)⁺.   (6.81)
Let us define the constant coefficient b_t of (6.75) as

b_t := (−ω_t Pi(ω_t) + ω_t² T Pr(ω_t)) / |P(jω_t)|².   (6.82)
Now we compare (6.76) with (6.75) and (6.82). We observe that we can equate m = [(ω_t⁻)², (ω_t⁺)²] and c = [b_t⁻, b_t⁺], where b_t⁻ and b_t⁺ are the minimum and maximum values of b_t at ω_t. These b_t⁻ and b_t⁺ depend on the quadrant in which ω_t lies in the "Pr-Pi" graph, shown in Figure 6.27.
Figure 6.27 Nyquist plot of the given plant.
For example, if ω_t lies in the first quadrant, i.e., Pr > 0 and Pi > 0, then

b_t⁺ = (−ω_t⁻ Pi(ω_t)min + (ω_t⁺)² T Pr(ω_t)max) / ((Pr(ω_t)min)² + (Pi(ω_t)min)²),

b_t⁻ = (−ω_t⁺ Pi(ω_t)max + (ω_t⁻)² T Pr(ω_t)min) / ((Pr(ω_t)max)² + (Pi(ω_t)max)²).   (6.83)
For the given example,

ω_t⁻ = [0, 1.3790, 3.0052, 85.1289],  ω_t⁺ = [0, 1.6174, 3.4297, 108.3767],
b_t⁻ = [0, 1.7982, −47.6582, 1574.0669],  b_t⁺ = [0, 20.1039, −4.6160, 13934.0711].
Using equation (6.75) and Theorem 6.6, we solve the following inequalities:

[ki − (ω_t⁻)² kd − b_t⁻] i_t > 0,  [ki − (ω_t⁻)² kd − b_t⁺] i_t > 0,
[ki − (ω_t⁺)² kd − b_t⁻] i_t > 0,  [ki − (ω_t⁺)² kd − b_t⁺] i_t > 0,   (6.84)

where t = 0, 1, 2, 3. Solving all the above inequalities, we obtain the region that robustly stabilizes the given plant for Kp = 5, shown shaded in Figure 6.28.
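With the interval data above, robust feasibility of a candidate (Ki, Kd) point reduces to checking the four vertex inequalities of (6.84) for each t. A Python sketch using the ω_t± and b_t± values and the string (6.80) of this example:

```python
# Interval data from the example: w_t in [wm[t], wp[t]], b_t in [bm[t], bp[t]].
wm = [0.0, 1.3790, 3.0052, 85.1289]
wp = [0.0, 1.6174, 3.4297, 108.3767]
bm = [0.0, 1.7982, -47.6582, 1574.0669]
bp = [0.0, 20.1039, -4.6160, 13934.0711]
it = [1, -1, 1, -1]                      # the string (6.80)

def robustly_feasible(ki, kd):
    """All four vertex inequalities of (6.84) for every t (Theorem 6.6)."""
    for t in range(4):
        for w2 in (wm[t] ** 2, wp[t] ** 2):
            for b in (bm[t], bp[t]):
                if (ki - w2 * kd - b) * it[t] <= 0.0:
                    return False
    return True
```

With these numbers a point such as (Ki, Kd) = (2, 0.5) passes all sixteen checks, while points violating any one vertex inequality are rejected.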
Figure 6.28 “Ki -Kd ” plot. All possible inequality regions are shown. The shaded area is the intersection of all regions.
6.12

Computer-Aided Design
In this section we show some possibilities for computer-aided design using the above theory. The algorithm for the design of a PID controller from the frequency response data of the system has been programmed in LabVIEW due to its user-friendly graphical environment. The Virtual Instrument (VI) has a front panel that is displayed to the user and a block diagram, where the computations are performed. The inputs to the LabVIEW program are the frequency response data and the number of RHP poles of the system. Given these inputs, the entire range of Kp that can stabilize the system is displayed, and as the user scrolls through the stabilizing range of Kp, the entire stabilizing ranges of Ki and Kd are displayed. The file containing the frequency response data from a stable system (number of right-half plane poles equal to zero) is fed into the program through the file path box located at the top left-hand side of the VI, as shown in Figure 6.29.
Figure 6.29 Front panel of VI page 1 showing stabilizing sets of Kp , Ki , and Kd .
T is set to a very small number. The plot of g(ω) versus frequency generated from the inputs, shown in Figure 6.29, is used to compute the ω values associated with a particular value of Kp. Using the values of ω obtained above, linear inequalities are solved by the program to compute stabilizing sets of Ki and Kd for a fixed Kp. As the selected Kp is changed, the Ki-Kd region changes dynamically to show the new stabilizing ranges of Ki and Kd for the selected Kp. The Ki-Kd graph shown in Figure 6.30 gives the stabilizing region for Kp = 1. The stabilizing set of Kp, Ki, and Kd for the given system is shown in Figure 6.31. Each point in the stable range of Ki and Kd for a specific Kp has performance indices, as shown in Figure 6.32. The performance indices change dynamically as the cursor moves over the stable range of Ki and Kd for a particular Kp. The subsets within these stabilizing ranges that achieve or exceed the desired minimum performance indices can be computed by the program. The front panel of the VI that performs this operation is shown in Figure 6.33 and the final result is shown in Figure 6.34. The set of points generated satisfies multiple performance indices simultaneously. The performance indices used in the program are: gain margin ≥ 3 dB, phase margin ≥ 45°, overshoot ≤ 30%.
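The margin computations behind these performance indices can also be done directly on sampled open-loop data. A coarse Python sketch; the open loop L(s) = 4/(s+1)³ is a hypothetical stand-in, and crossovers are taken at the nearest grid sample without interpolation:

```python
import math, cmath

def margins(ws, Ldata):
    """Gain margin (dB) and phase margin (deg) from sampled open-loop data."""
    # Unwrap the phase so it can pass continuously through -180 degrees.
    ph, acc, prev = [], cmath.phase(Ldata[0]), cmath.phase(Ldata[0])
    for v in Ldata:
        d = cmath.phase(v) - prev
        if d > math.pi:
            d -= 2.0 * math.pi
        elif d < -math.pi:
            d += 2.0 * math.pi
        acc += d
        prev = cmath.phase(v)
        ph.append(acc)
    gm_db = pm_deg = None
    for k in range(len(ws) - 1):
        if gm_db is None and ph[k] > -math.pi >= ph[k + 1]:              # phase crossover
            gm_db = -20.0 * math.log10(abs(Ldata[k + 1]))
        if pm_deg is None and abs(Ldata[k]) >= 1.0 > abs(Ldata[k + 1]):  # gain crossover
            pm_deg = 180.0 + math.degrees(ph[k + 1])
    return gm_db, pm_deg

ws = [0.001 * k for k in range(1, 20001)]
Ldata = [4.0 / (1j * w + 1.0) ** 3 for w in ws]   # hypothetical open loop
gm, pm = margins(ws, Ldata)                       # roughly 6 dB and 27 deg here
```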
Figure 6.30 Front panel of the VI page 2 showing stabilizing sets of Kp , Ki , and Kd .
Figure 6.31 3D graph of stabilizing sets.
Figure 6.32 Front panel of VI showing performance indices for specific Kp , Ki , and Kd .
Figure 6.33 Front panel of VI that satisfies multiple performance index specifications.
Figure 6.34 LabVIEW generated result for multiple performance indices.
6.13

Exercises
The frequency response P (jω) of a plant with one RHP zero is given and its Bode and Nyquist plots are shown below (see Figures 6.35 and 6.36).
Figure 6.35 Bode plot of the plant to be controlled.
Figure 6.36 Nyquist plot of the plant to be controlled.
6.1 Find the relative degree n − m of the plant.

6.2 Suppose that we want to determine the set of PID controllers with T = 0 (ideal PID controllers) that stabilizes the given plant. We first need to determine the required signature for stability. What is the required signature of F̄(s) for stability?

6.3 For the stability region in PID parameter space to exist, what is the minimum number of solutions that must satisfy g(ω) = Kp for each fixed Kp?

6.4 g(ω) is given in Figure 6.37. What is the admissible range(s) of Kp values? Also determine j = sgn[F̄i(∞⁻, Kp)].
Figure 6.37 Graph of the function g(ω).
6.5 Determine, if possible, i_l = sgn[F̄r(∞⁻, Ki, Kd)].
6.6 Write down all possible strings that satisfy the required signature.

6.7 Fix Kp = −18 and determine the frequency range over which the frequency response must be accurately measured to determine the entire set of stabilizing PID controllers.

6.8 Using the P(jω) data given below, construct the sets of linear inequalities that determine the stability region.

TABLE 6.1 Frequency response data P(jω)

ω = 0 : 0.1 : 10      ω = 11 : 0.1 : 20     ω = 21 : 0.1 : 30     ω = 31 : 0.1 : 40
-1.0000               0.0147 - j0.0464      0.0477 + j0.0126      0.0171 + j0.0202
-0.3255 + j1.0473     0.0192 - j0.0509      0.0422 + j0.0157      0.0158 + j0.0203
0.3179 + j0.2366      0.0272 - j0.0540      0.0374 + j0.0177      0.0146 + j0.0204
0.1558 + j0.0338      0.0382 - j0.0538      0.0333 + j0.0188      0.0136 + j0.0205
0.0832 - j0.0071      0.0503 - j0.0484      0.0298 + j0.0195      0.0126 + j0.0207
0.0489 - j0.0206      0.0600 - j0.0376      0.0268 + j0.0199      0.0117 + j0.0209
0.0307 - j0.0274      0.0646 - j0.0238      0.0243 + j0.0201      0.0109 + j0.0212
0.0205 - j0.0323      0.0638 - j0.0105      0.0221 + j0.0202      0.0100 + j0.0215
0.0152 - j0.0368      0.0595 + j0.0001      0.0202 + j0.0202      0.0092 + j0.0220
0.0134 - j0.0415      0.0536 + j0.0076      0.0185 + j0.0202      0.0084 + j0.0225
6.9 Using the sets of inequalities, graphically determine the stability region for Kp = −18. Sweep Kp over the admissible range to determine the entire 3D set.

6.10 From the stabilizing set found, determine the subset that provides a gain margin Am ≥ 1.5 and a phase margin θm ≥ 40°.
6.14
Notes and References
The results reported here closely follow [127, 128, 132].
7 DATA DRIVEN SYNTHESIS OF THREE TERM DIGITAL CONTROLLERS
This chapter presents a method for determining all stabilizing digital PID and first order controllers from measured data alone. The only information required is the frequency response (Nyquist-Bode data) together with the number of open-loop unstable poles, or the impulse response of the system. In particular, no identification of a plant model is required. Examples are given for illustration.
7.1
Introduction
In much of industrial practice, controllers are designed without an analytical model of the plant. Instead, the design is based on other available information, typically the time or frequency response of the plant. For stable plants, engineers use sinusoidal inputs to obtain the frequency response of the linear time-invariant system; for unstable plants, the plant is first stabilized and then a sinusoidal input is applied to obtain the frequency response. Indeed, this approach is as popular as plant model based approaches in classical control design. In the present chapter we consider digital three term controllers designed for plants where the only information available is the frequency response or impulse response data. We show how complete sets achieving stability and performance can be determined from this data alone, without identifying a plant model. Examples are included for illustration. These techniques have been implemented in an interactive computer-aided design tool, which is also illustrated.
7.2
Notation and Preliminaries
Consider the rational function
    Q(z) = P1(z)/P2(z)    (7.1)
where Pi(z), i = 1, 2 are real polynomials, assumed to be devoid of zeros on the unit circle. Let iz (ip) denote the number of zeros (poles) of Q(z) located inside the unit circle, and let Δ0^π ∠Q(e^{jθ}) denote the net change in phase of Q(e^{jθ}) as θ runs from 0 to π. Then
    Δ0^π ∠Q(e^{jθ}) = π (iz − ip).    (7.2)
This follows from the fact that each zero (pole) inside the unit circle contributes +π (−π) to the net phase change, whereas zeros and poles outside the unit circle contribute nothing. As before, define the signature of Q(z) as
    σ(Q) := iz − ip.    (7.3)
To evaluate Q(z) on the unit circle we use the Tchebyshev representation as in Chapter 4. Set
    u := −cos θ,    v := √(1 − u²)    (7.4)
and
    z = e^{jθ} = −u + jv.    (7.5)
Then
    z^k = e^{jkθ} = cos kθ + j sin kθ    (7.6)
where
    cos kθ := ck(u)  and  sin kθ / sin θ = sin kθ / v := sk(u)    (7.7)
and ck(u) and sk(u), the Tchebyshev polynomials, are real polynomials in u satisfying
    sk(u) = −(1/k) dck(u)/du,  k = 1, 2, ···
    ck+1(u) = −u ck(u) − v² sk(u),  k = 1, 2, ··· .
Let
    Pi(z)|z=−u+jv = Ri(u) + jv Ti(u),  i = 1, 2
denote the Tchebyshev representations of Pi(z), where Ri(u) and Ti(u) are real polynomials in u. Then it is easy to see that
    Q(z)|z=−u+jv = Rq(u) + jv Tq(u) =: Q̄(u)    (7.8)
where
    Rq(u) = R(u)/D(u),    Tq(u) = T(u)/D(u)    (7.9)
and
    R(u) = R1(u)R2(u) + v² T1(u)T2(u)
    T(u) = T1(u)R2(u) − R1(u)T2(u)    (7.10)
    D(u) = R2²(u) + v² T2²(u).
Since D(u) > 0 for all u ∈ [−1, 1], the zeros of Tq(u) are identical to those of T(u). Let sgn[·] denote the usual sign function; then sgn[Rq(·)] = sgn[R(·)]. Suppose that T(u) has p zeros at u = −1, and let t1, ···, tk denote the real distinct zeros of T(u) of odd multiplicity, ordered as −1 < t1 < t2 < ··· < tk < +1. We now recall the following signature formula for a rational function from Chapter 4.

THEOREM 7.1
Let Q(z) be a real rational function with iz zeros and ip poles inside the unit circle C and no poles or zeros on the unit circle. Then
    σ(Q) := iz − ip = (1/2) sgn[Tq^(p)(−1)] { sgn[Rq(−1)] + 2 Σ_{j=1}^{k} (−1)^j sgn[Rq(tj)] + (−1)^{k+1} sgn[Rq(+1)] }.
In subsequent sections, we first show how to use the signature formula of Theorem 7.1, without any form of analytical model, to compute all digital PID controllers stabilizing the plant. The design of first order controllers without analytical models is treated similarly. Let the plant transfer function be the rational function
    P(z) = N(z)/D(z).    (7.11)

Assumption 1
1. The plant is controllable and observable, that is, the polynomials N(z) and D(z) are coprime.
2. The plant has no poles on the unit circle.
3. The only information available to the designer is:
A. the frequency response magnitude and phase, i.e., P(e^{jθ}) for θ ∈ [0, 2π],    (7.12)
B. the number of unstable poles of the plant, that is, poles of the plant located outside the unit circle, and
C. the relative degree r of the plant.

Consider a stable plant as shown in Figure 7.1.
Figure 7.1 Stable linear time-invariant discrete-time system: a sinusoidal input u(t) = sin ωt is sampled with period T and applied to P(z), producing y(kT).
Then it is easy to see that the steady state value of the discrete-time output is
    yss(kT) = |P(e^{jωT})| sin(kωT + ∠P(e^{jωT})).
The frequency response of the plant can thus be determined from the measurement:
    P(e^{jωT}) = M e^{jθ}    (7.13)
where
    M := |P(e^{jωT})|  and  θ := ∠P(e^{jωT}).    (7.14)
Clearly this amounts to knowledge of the complex function
    P(z)|z=−u+jv = Rp(u) + jv Tp(u) := Pc(u),
that is, knowledge of the graphs of the rational functions Rp(u) and Tp(u) evaluated for u ∈ [−1, +1].
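The conversion from measured frequency response samples to the pair (Rp(u), Tp(u)) takes only a few lines. Below is an illustrative sketch, not part of the text's tool: the function name is our own, and the demo plant P(z) = 1/z², whose response is e^{−2jθ}, is assumed only to make the check self-contained.

```python
import numpy as np

def tchebyshev_data(theta, P_vals):
    # Convert frequency response samples P(e^{j*theta}), theta in (0, pi),
    # to Tchebyshev coordinates: P = Rp(u) + j*v*Tp(u) with
    # u = -cos(theta), v = sin(theta) = sqrt(1 - u^2).
    u = -np.cos(theta)
    v = np.sin(theta)          # strictly positive on (0, pi)
    Rp = P_vals.real           # real part is Rp(u)
    Tp = P_vals.imag / v       # imaginary part is v*Tp(u)
    return u, Rp, Tp

# demo on P(z) = 1/z^2, for which Rp(u) = 2u^2 - 1 and Tp(u) = 2u
theta = np.linspace(0.1, np.pi - 0.1, 200)
u, Rp, Tp = tchebyshev_data(theta, np.exp(-2j * theta))
```

No model identification is involved: the samples are used pointwise, exactly as the text prescribes.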
7.3
PID Controllers for Discrete-Time Systems
Consider the feedback system shown in Figure 7.2. The general discrete-time PID controller is
    C(z) = KP + KI T z/(z − 1) + KD (z − 1)/(T z) = (K2 z² + K1 z + K0) / (z(z − 1))
Figure 7.2 Unity feedback system with controller C(z) and plant P(z).
where
    KP = −K1 − 2K0,    KI = (K0 + K1 + K2)/T,    KD = K0 T.    (7.15)
The closed-loop characteristic polynomial is
    δ(z) := z(z − 1)D(z) + (K2 z² + K1 z + K0) N(z).
Closed-loop stability requires that σ(δ) = n + 2. With
    Π(z) := δ(z)/D(z) = z(z − 1) + (K0 + K1 z + K2 z²) P(z),    (7.16)
closed-loop stability requires that
    σ(Π) = n + 2 − ip    (7.17)
where ip is the number of poles of P(z) located inside the unit circle. Finally, introduce
    ν(z) = z⁻¹ P(z⁻¹) Π(z) = (z − 1)P(z⁻¹) + (K0 z⁻¹ + K1 + K2 z) P(z)P(z⁻¹).
It will turn out that the solution to the PID stabilization problem can be conveniently obtained in terms of the signature of ν(z), because its real and imaginary parts exhibit parameter separation.

LEMMA 7.1
The net change of phase of P(e^{jθ}) for θ ∈ [0, π] is
    Δ0^π ∠P(e^{jθ}) = −π [r + (oz − op)]    (7.18)
where oz and op are the numbers of zeros and poles of P(z) located outside the unit circle, respectively, and r is the relative degree of the plant P(z).

PROOF  Let m and n be the degrees of N(z) and D(z), respectively. Then
    Δ0^π ∠P(e^{jθ}) = π (iz − ip) = π [(m − oz) − (n − op)] = π [−(n − m) − (oz − op)] = −π [r + (oz − op)].    (7.19)
THEOREM 7.2
Let P(z) be the plant with relative degree r, and let the PID controller be
    C(z) = (K2 z² + K1 z + K0) / (z(z − 1)).
Then the closed-loop system is stable if and only if σ(ν) = r + oz + 1.

PROOF  Note that
    P(z⁻¹) = N(z⁻¹)/D(z⁻¹) = z^{−m} Nr(z) / (z^{−n} Dr(z)) := z^{n−m} Pr(z) = z^r Pr(z)
where m and n are the degrees of N(z) and D(z), respectively, and Nr(z) and Dr(z) are the reverse polynomials of N(z) and D(z). Also note that
    σ(Pr) = oz − op    (7.20)
    σ(P(z⁻¹)) = σ(z^r Pr) = r + oz − op    (7.21)
where oz and op are the numbers of zeros and poles of P(z) located outside the unit circle. Then
    σ(ν) = σ(z⁻¹ P(z⁻¹) Π(z)) = −1 + r + oz − op + n + 2 − ip
         = r + oz + 1 + n − (op + ip) = r + oz + 1,    (7.22)
since op + ip = n.
In what follows, we show that all K0, K1, and K2 values satisfying the above signature (stability) condition can be found directly from the frequency response data; the transfer function of the system is not required. To evaluate σ(ν), write
    P(z)|z=−u+jv = (RN(u) + jv TN(u)) / (RD(u) + jv TD(u))
               = [RN(u)RD(u) + v² TN(u)TD(u)] / [RD²(u) + v² TD²(u)]
                 + jv [RD(u)TN(u) − TD(u)RN(u)] / [RD²(u) + v² TD²(u)]
               = Rp(u) + jv Tp(u).
Consequently,
    P(z⁻¹)|z=−u+jv = Rp(u) − jv Tp(u).
Now
    ν(z)|z=−u+jv = [(z − 1)P(z⁻¹) + (K0 z⁻¹ + K1 + K2 z) P(z)P(z⁻¹)]|z=−u+jv
                = (−u − 1 + jv)(Rp(u) − jv Tp(u)) + [K0(−u − jv) + K1 + K2(−u + jv)] m²(u)
where m(u) = |P(z)|z=−u+jv|. Then
    ν(z) = Rν(u, K0, K1, K3) + jv Tν(u, K3)
where K3 := K2 − K0 and
    Rν(u, K0, K1, K3) = −(u + 1)Rp(u) + (1 − u²)Tp(u) + K1 m²(u) − u(2K0 + K3) m²(u)
    Tν(u, K3) = Rp(u) + (u + 1)Tp(u) + K3 m²(u).
For fixed K3 = K3*, we may solve
    K3* = [−Rp(u) − (u + 1)Tp(u)] / m²(u) =: g(u),    (7.23)
determine the real roots t1, t2, ··· of (7.23) of odd multiplicity contained in the open interval (−1, +1),
    t0 = −1 < t1 < t2 < ··· < tl < tl+1 = +1,
and develop linear inequalities corresponding to stability as follows. Let
    I^j = { i0^j, i1^j, ···, il^j, il+1^j }
denote a string with it^j ∈ {0, 1, −1}, and let k ∈ {+1, −1} be such that
    (k/2) [ i0^j − 2 i1^j + 2 i2^j − ··· + (−1)^{l+1} il+1^j ] = r + oz + 1.    (7.24)
For each string I^j satisfying (7.24), we have the set of inequalities
    sgn[Tν(−1⁺)] · k > 0    (7.25)
    sgn[Rν(tt, K0, K1, K3*)] · it^j > 0,  t = 0, 1, ···, l + 1,    (7.26)
which is a set of linear inequalities in (K0, K1) space for fixed K3*. By constructing these inequalities for each string satisfying (7.24), we obtain the stabilizing set for K3 = K3*. By sweeping over K3 we can generate the complete set. The range of K3 to be swept is determined by the requirement that (7.23) should have at least r + oz roots.
We now illustrate that the above signature relationship corresponding to stability can be computed without knowing the plant transfer function P(z), from knowledge of the frequency response alone.

Example 7.1
To illustrate, we take a set of frequency data points from the plant used in Example 4.2, and assume that the following information is available.
Available Information:
1. Frequency domain data P(e^{jωT}) for ω ∈ [0, 2π/T], sampled with T = 0.01.
2. The plant is stable; in other words, the number of unstable poles of the plant is op = 0.
3. The relative degree of the plant is r = 2.
The Nyquist plot of the plant P(z) is shown in Figure 7.3. From Figure 7.3 and Lemma 7.1,
    Δ0^π ∠P(e^{jθ}) = −π [r + (oz − op)] = −2π.
Figure 7.3 Nyquist plot of P(e^{jθ}).
Therefore,
    oz = 2 − r + op = 2 − 2 + 0 = 0.
Using Theorem 7.2, the stability requirement is equivalent to σ(ν) = 2 + 0 + 1 = 3. Then Theorem 7.1 requires that
    (1/2) sgn[T(−1)] { sgn[R(−1)] − 2 sgn[R(t1)] + 2 sgn[R(t2)] − sgn[R(1)] } = 3
where the ti are the zeros of g(u) in (7.23) for fixed K3. It is easy to see that at least two zeros ti are required, and that the only feasible string of sign sequences is:

    sgn of:   T(−1)   R(−1)   R(t1)   R(t2)   R(1)
              1       1       −1      1       −1
The feasible range of K3 values corresponds to the requirement of two zeros in T(u). We now plot the right-hand side of (7.23). Using the relationship u = −cos θ, we have the set of Nyquist data points on the u axis:
    P(e^{jωT})|ωT=θ = P(e^{jθ})|u=−cos θ = Rp(u) + jv Tp(u).
Using (7.23), we now plot (see Figure 7.4)
    K3* = [−Rp(u) − (1 + u)Tp(u)] / m²(u).
Using (7.26), we construct the set of linear inequalities for each K3 value. For example, at K3 = 1.3, it is found from Figure 7.4 that t1 = −0.4736,
t2 = −0.0264.
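The roots ti for a fixed K3 can be located numerically from the sampled data by scanning g(u) − K3 for sign changes. A minimal sketch follows; the plant 1/z², used only to make the check self-contained, has g(u) = −4u² − 2u + 1, so at K3 = 1 the roots are −0.5 and 0.

```python
import numpy as np

def roots_for_K3(u, g, K3):
    # Locate the odd-multiplicity roots of g(u) = K3 in (-1, 1) by scanning
    # the sampled curve for sign changes, refining by linear interpolation.
    d = g - K3
    roots = []
    for i in range(len(u) - 1):
        if d[i] == 0.0:
            roots.append(u[i])
        elif d[i] * d[i + 1] < 0.0:   # sign change brackets a root
            roots.append(u[i] - d[i] * (u[i + 1] - u[i]) / (d[i + 1] - d[i]))
    return roots

# demo: build g(u) from frequency samples of the plant P(z) = 1/z^2
theta = np.linspace(0.06, np.pi - 0.05, 2000)
u = -np.cos(theta)
P = np.exp(-2j * theta)
Rp, Tp, m2 = P.real, P.imag / np.sin(theta), np.abs(P)**2
g = (-Rp - (u + 1.0) * Tp) / m2     # right-hand side of (7.23)
ts = roots_for_K3(u, g, 1.0)
```

Only even-multiplicity roots (tangencies) escape a pure sign-change scan, and these do not enter the signature count.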
Figure 7.4 Finding the feasible range of K3: the plot of g(u) over u ∈ [−1, 1], with the feasible region of K3 values marked.
Then the set of linear inequalities corresponding to K3 = 1.3 is
    T(−1) = 1
    R(−1) = −2.3111 + 1.7778 K1 + 3.5556 K2 > 0
    R(−0.4736) = −0.6939 + 0.7473 K1 + 0.7078 K2 < 0
    R(−0.0264) = 0.7226 + 0.6403 K1 + 0.0338 K2 > 0
    R(1) = −0.3556 + 1.7778 K1 − 3.5556 K2 < 0.
By sweeping K3 over (−0.7, 1.4), we obtain the stabilizing PID parameter region shown in Figure 7.5, which is identical to the region obtained in Example 4.2.
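For a fixed K3, each admissible string thus yields a small set of half-plane constraints, and a candidate point can be tested against them directly. A sketch using the K3 = 1.3 coefficients quoted above (the encoding of the inequalities is our own):

```python
import numpy as np

# rows (a0, a1, a2, sense): the inequality is sense*(a0 + a1*K1 + a2*K2) > 0,
# i.e. sense = +1 encodes ">" and sense = -1 encodes "<"
ineqs = [
    (-2.3111, 1.7778,  3.5556, +1),
    (-0.6939, 0.7473,  0.7078, -1),
    ( 0.7226, 0.6403,  0.0338, +1),
    (-0.3556, 1.7778, -3.5556, -1),
]

def feasible(K1, K2):
    # True if (K1, K2) satisfies every inequality of the string
    return all(s * (a0 + a1 * K1 + a2 * K2) > 0 for a0, a1, a2, s in ineqs)

# trace the stabilizing polygon for this fixed K3 on a coarse grid
K1g = np.linspace(-2.0, 2.0, 201)
K2g = np.linspace(-2.0, 2.0, 201)
region = [(k1, k2) for k1 in K1g for k2 in K2g if feasible(k1, k2)]
```

Repeating this over a sweep of K3 values assembles the 3D set of Figure 7.5.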
Figure 7.5 Stabilizing PID parameter region.
REMARK 7.1  The procedure shown in Figure 7.4 also tells us that accurate data is needed only up to the frequency where u ≈ 0.5, that is, up to
    ω = cos⁻¹(−u)/T.
To obtain the necessary data, it is not necessary to excite the system beyond this frequency.
7.4
Data Based Design: Impulse Response Data
Let us consider a stable discrete time system. Let the output of the system in response to an impulse signal be: y[k] = [m0 , m1 , m2 , · · · , mk , · · ·]
(7.27)
where k is a nonnegative integer. These values are also known as the Markov parameters. The response approaches zero as k increases, because the system is stable. Taking the z-transform of the above sequence,
    Y(z) = m0 + m1 z⁻¹ + m2 z⁻² + ··· + mk z⁻ᵏ + ··· .
(7.28)
We know that in the z-domain the impulse response of a system is the system transfer function. Therefore, we can approximate the plant transfer function by Y(z) truncated to the first n + 1 coefficients:
    Y(z) ≈ Pn(z)
where
    Pn(z) = m0 + m1 z⁻¹ + m2 z⁻² + ··· + mn z⁻ⁿ = (m0 zⁿ + m1 z^{n−1} + ··· + mn) / zⁿ.    (7.29)
LEMMA 7.2
The relative degree r of a system is the number of leading zeros in its impulse response.
PROOF  Expanding the transfer function as a power series in z⁻¹, the number of leading zero coefficients equals n − m = r, where m is the degree of the numerator and n is the degree of the denominator.
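Lemma 7.2 and the truncation (7.29) translate directly into code: count the leading zero Markov parameters and form Pn(z). A sketch (function names are our own; the sequence is the one used in Section 7.4.1):

```python
import numpy as np

def relative_degree(markov):
    # Lemma 7.2: the relative degree equals the number of leading zeros
    # in the impulse response
    r = 0
    for m in markov:
        if m != 0:
            break
        r += 1
    return r

def Pn_polys(markov, n):
    # Truncated plant Pn(z) = (m0 z^n + ... + mn) / z^n, returned as
    # numerator and denominator coefficient arrays in descending powers of z
    num = np.array(markov[: n + 1], dtype=float)
    den = np.zeros(n + 1)
    den[0] = 1.0
    return num, den

y = [0, 0, 1, 0, 0.25, 0, 0.0625]
num, den = Pn_polys(y, 3)
```

For the data above this gives r = 2 and Pn(z) = z/z³ = 1/z², matching the example worked out below.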
Now suppose this plant is stabilized by a PID controller:
    C(z) = KP + KI T z/(z − 1) + KD (z − 1)/(T z) = (K2 z² + K1 z + K0) / (z(z − 1))    (7.30)
where
    KP = −K1 − 2K0,    KI = (K0 + K1 + K2)/T,    KD = K0 T.    (7.31)
We now use the technique described in Chapter 4 to find the stabilizing set as follows. Let
    Pn(z) = N(z)/D(z).    (7.32)
Therefore, the characteristic equation is
    δ(z) := z(z − 1)D(z) + (K2 z² + K1 z + K0) N(z).    (7.33)
Multiplying by z⁻¹ N(z⁻¹), we get
    z⁻¹ δ(z) N(z⁻¹) = (z − 1)D(z)N(z⁻¹) + (K2 z + K1 + K0 z⁻¹) N(z)N(z⁻¹).    (7.34)
Using Tchebyshev representations we have
    N(z)|z=−u+j√(1−u²) = RN(u) + j√(1 − u²) TN(u)
    D(z)|z=−u+j√(1−u²) = RD(u) + j√(1 − u²) TD(u)
and
    z⁻¹ δ(z) N(z⁻¹) = −(u + 1)P1(u) − (1 − u²)P2(u) − [(2K2 − K3)u − K1] P3(u)
                      + j√(1 − u²) [−(u + 1)P2(u) + P1(u) + K3 P3(u)]
                    = R(u, K1, K2, K3) + j√(1 − u²) T(u, K3)
where
    P1(u) = RD(u)RN(u) − (1 − u²)TD(u)TN(u)
    P2(u) = TN(u)RD(u) − RN(u)TD(u)
    P3(u) = RN(u)² + (1 − u²)TN(u)²
    K3 := K2 − K0.
Note that we use the parameters K1, K2, and K3 instead of KP, KI, and KD without any loss of flexibility, as the two sets are related by the simple coordinate transformation
    [KP]   [−2   −1   0  ] [K0]   [−2   −1   0  ] [0  1  −1] [K1]
    [KI] = [1/T  1/T  1/T] [K1] = [1/T  1/T  1/T] [1  0   0] [K2]
    [KD]   [T    0    0  ] [K2]   [T    0    0  ] [0  1   0] [K3].
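The coordinate transformation above can be coded as a one-line map in each direction. A sketch (function names are our own; K3 = K2 − K0 is the definition used in Section 7.3):

```python
def to_textbook_gains(K1, K2, K3, T):
    # Recover (KP, KI, KD) from the design parameters (K1, K2, K3),
    # using K0 = K2 - K3 and the relations (7.31)
    K0 = K2 - K3
    KP = -K1 - 2.0 * K0
    KI = (K0 + K1 + K2) / T
    KD = K0 * T
    return KP, KI, KD

def from_textbook_gains(KP, KI, KD, T):
    # Inverse map back to (K1, K2, K3)
    K0 = KD / T
    K1 = -KP - 2.0 * K0
    K2 = KI * T - K0 - K1
    return K1, K2, K2 - K0
```

Since the transformation is invertible, designing in (K1, K2, K3) loses nothing; the round trip recovers the original parameters.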
Now, paralleling Theorem 7.2, we have the following.

THEOREM 7.3
Let the plant be Pn(z) = N(z)/D(z) with relative degree r, and let the PID controller be
    C(z) = (K2 z² + K1 z + K0) / (z(z − 1)).
Then the closed-loop system is stable if and only if
    σ(z⁻¹ δ(z) N(z⁻¹)) = r + oz + 1
where
    δ(z) := z(z − 1)D(z) + (K2 z² + K1 z + K0) N(z)
and oz is the number of zeros of N(z) outside the unit circle.

Substituting the Tchebyshev representation of z⁻¹ δ(z) N(z⁻¹) for Q(z) and using Theorems 7.1 and 7.3, we have
    r + oz + 1 = (1/2) sgn[T^(p)(−1)] { sgn[R(−1)] + 2 Σ_{j=1}^{k} (−1)^j sgn[R(tj)] + (−1)^{k+1} sgn[R(+1)] }.    (7.35)
Next we construct sequences of numbers i0, i1, ···, ik, ik+1, each taking the value −1 or +1, such that
    2 sgn[T^(p)(−1)] (r + oz + 1) = i0 + 2 Σ_{j=1}^{k} (−1)^j ij + (−1)^{k+1} ik+1.
Let I = [i0, i1, ···, ik, ik+1] be one such sequence. Then we evaluate the following inequalities, which have K1 and K2 as unknowns once K3 is fixed:
    [R(u)|u=−1] · i0 > 0
    [R(u)|u=tj] · ij > 0,  j = 1, ···, k
    [R(u)|u=+1] · ik+1 > 0.
These inequalities give the stabilizing regions in K1–K2 space for fixed K3. As we sweep K3, we get the entire stabilizing set.
7.4.1
Example: Stabilizing Set from Impulse Response
For illustration, we take the impulse response of a stable system. The first 10 points of the impulse response are
    y[n] = [0, 0, 1, 0, 0.25, 0, 0.0625, 0.015625, 0, 0.00390625],  n = 0, 1, ···, 9,
with sampling time T = 0.001 sec. Let us consider data points up to n = 3. Writing the equivalent z-transform, we obtain
    Pn(z) = 0 + 0·z⁻¹ + 1·z⁻² + 0·z⁻³ = 1/z².
The signature required for stability is r + oz + 1 = 2 + 0 + 1 = 3, where r is obtained using Lemma 7.2 and oz is the number of zeros outside the unit circle of the approximated plant. Now we find the stabilizing set for this plant. Let us fix K3 = 1.2. Then the real roots of T(u, K3) in (−1, 1) are −0.3618 and −0.1382. Furthermore, sgn[T(−1)] = 1. Using (7.35),
    3 = (1/2) sgn[T(−1)] { sgn[R(−1)] − 2 sgn[R(−0.3618)] + 2 sgn[R(−0.1382)] − sgn[R(+1)] }.
Here only one sequence of signs satisfies this equation:
    sgn[R(−1)] = 1,  sgn[R(−0.3618)] = −1,  sgn[R(−0.1382)] = 1,  sgn[R(+1)] = −1,
so that i0 − 2i1 + 2i2 − i3 = 6.
From this sequence, we obtain the following inequalities:
    K1 + 2K2 > 1.2
    K1 + 0.7236 K2 < 0.5919
    K1 + 0.2674 K2 > −0.3913
    K1 − 2K2 < 0.8.
Solving these inequalities, we obtain the region in K1–K2 space shown in Figure 7.6. As we sweep K3, we get the entire set shown in Figure 7.7.
Figure 7.6 The stabilizing set in K1–K2 space when K3 = 1.2.
Continuing with n = 5, n = 7, and n = 10, we obtain the respective stabilizing sets in a similar way. The results for all these approximations with K3 = 1.2 are shown in Figure 7.8. We observe that as n increases, we get closer and closer to the actual stabilizing region, and that the sets for n = 10 and the actual system almost coincide. For more than 10 samples, for example 20 points, the area remains exactly the same; the thin line in Figure 7.8 is barely visible because it coincides with n = 10. This shows the convergence of the stabilizing regions with respect to the number of Markov parameters used.
Figure 7.7 The 3D stabilizing set for the given example.

Figure 7.8 Stabilizing set at K3 = 1.2 for n = 3, 5, 7, 10 and the actual set.
7.4.2
Sets Satisfying Performance Requirements
We can also obtain the subsets of the PID stabilizing sets found above that achieve given performance specifications. Here we discuss the gain margin, phase margin, and overshoot of the closed loop. For the gain margin and the phase margin, we form the approximated open-loop system
    Gn(z) = C(z) Pn(z).    (7.36)
From the frequency response of this rational function, we can obtain the gain and phase margins. To obtain the time response, and hence the overshoot, we close the loop with unity feedback:
    GCL(z) = Gn(z) / (1 + Gn(z)).    (7.37)
We then compute the step response of this rational function, and we can also compute the overshoot corresponding to various points of the set. Figure 7.9, Figure 7.10, and Figure 7.11 show the subsets achieving gain margins greater than 1 db, phase margins greater than 20 degrees, and overshoot less than 20% for n = 5, n = 7, and n = 10, respectively.
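The frequency-domain checks can be sketched numerically as follows. The margin estimates below simply pick the nearest grid samples to the two crossover points (a real tool would interpolate); the demo loop L(z) = 0.75/(z − 0.5) is assumed only to make the check concrete.

```python
import numpy as np

def margins(L_vals):
    # Rough gain margin (db) and phase margin (degrees) from open-loop
    # frequency response samples on a dense grid over (0, pi).
    mag = np.abs(L_vals)
    ph = np.unwrap(np.angle(L_vals))
    i = np.argmin(np.abs(ph + np.pi))     # phase crossover (-180 degrees)
    gm_db = -20.0 * np.log10(mag[i])
    j = np.argmin(np.abs(mag - 1.0))      # gain crossover (|L| = 1)
    pm_deg = np.degrees(ph[j] + np.pi)
    return gm_db, pm_deg

theta = np.linspace(1e-3, np.pi, 20000)
L_vals = 0.75 / (np.exp(1j * theta) - 0.5)
gm_db, pm_deg = margins(L_vals)
```

Scanning the stabilizing set with such a routine, and keeping the points whose margins exceed the given thresholds, produces performance subsets like those in Figures 7.9–7.11.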
Figure 7.9 The shaded region indicates a gain margin greater than 1db, phase margin greater than 20 degrees and overshoot less than 20% for that region. This was obtained for the approximation with n = 5.
Figure 7.10 The shaded region indicates a gain margin greater than 1 db, phase margin greater than 20 degrees and overshoot less than 20% for that region. This was obtained for the approximation with n = 7.
Figure 7.11 The shaded region indicates a gain margin greater than 1 db, phase margin greater than 20 degrees and overshoot less than 20% for that region. This was obtained for approximation with n = 10.
7.5
First Order Controllers for Discrete-Time Systems
Let the plant and controller transfer functions be
    P(z) = N(z)/D(z),    C(z) = (x1 z + x2) / (z + x3).    (7.38)
In Chapter 5, it was shown that the entire set of first order stabilizing controllers for a given discrete-time LTI plant can be characterized in (x1, x2, x3) space by at most two straight lines and one curve, with analytic expressions given in terms of the plant transfer function coefficients. In this section, we give equivalent expressions that rely only on frequency domain data points of the plant instead of its analytical model. Consider the frequency response of the discrete-time plant P:
    P(z)|z=−u+jv = Rp(u) + jv Tp(u).    (7.39)
Note that Rp(u) and Tp(u) for −1 ≤ u ≤ 1 are immediately available from the frequency response data points P(e^{jθ}), θ ∈ [0, π]. Consider the real rational function
    F(z) = (z + x3) + (z x1 + x2) P(z).    (7.40)

THEOREM 7.4
Let P(z) be a plant with op unstable poles, and let the first order controller be
    C(z) = (x1 z + x2) / (z + x3).
Then the closed-loop system is stable if and only if σ(F) = op + 1.

PROOF  Let n be the order of the plant P(z) and ip the number of stable poles of P(z). Then closed-loop stability requires that
    σ(F) = n + 1 − ip = (op + ip) + 1 − ip = op + 1.    (7.41)
We can evaluate
    F(z)|z=−u+jv = (−u + x3 + jv) + (−u x1 + x2 + jv x1)(Rp(u) + jv Tp(u))
                = (x3 − u) + Rp(u) x2 − [u Rp(u) + v² Tp(u)] x1
                  + jv { [Rp(u) − u Tp(u)] x1 + Tp(u) x2 + 1 }.
The signature invariant regions are therefore bounded by the solutions of
    [ −(u Rp(u) + v² Tp(u))   Rp(u) ] [x1(u)]   [ −(x3 − u) ]
    [   Rp(u) − u Tp(u)       Tp(u) ] [x2(u)] = [ −1        ].
Denote the coefficient matrix by A(u). Since
    det[A(u)] = −(Rp²(u) + v² Tp²(u)) = −|P(e^{jθ})|² ≠ 0  for all θ,
the solution is
    x1(u) = [(x3 − u) Tp(u) − Rp(u)] / |P(e^{jθ})|²
    x2(u) = −[(−u x3 + u² + v²) Tp(u) + x3 Rp(u)] / |P(e^{jθ})|²
          = −[(1 − u x3) Tp(u) + x3 Rp(u)] / |P(e^{jθ})|².
The two straight lines representing the real root crossings can be obtained from the expression of F(z) by letting u = −1 and u = +1, equivalently θ = 0 and θ = π:
    (x3 + 1) + P(e^{j0})(x1 + x2) = 0    (7.42)
    (x3 − 1) + P(e^{jπ})(x2 − x1) = 0.    (7.43)
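The boundary curve can be evaluated pointwise straight from the data. A sketch follows (the plant 1/z², for which Rp(u) = 2u² − 1 and Tp(u) = 2u, is assumed only so the check is self-contained); it also verifies that the computed (x1(u), x2(u)) annihilate both the real and imaginary parts of F.

```python
import numpy as np

def boundary_curve(u, Rp, Tp, x3):
    # Complex root-crossing boundary (x1(u), x2(u)) for the first order
    # controller, from the closed-form expressions in the text
    v2 = 1.0 - u**2
    mag2 = Rp**2 + v2 * Tp**2          # |P(e^{j*theta})|^2
    x1 = ((x3 - u) * Tp - Rp) / mag2
    x2 = -((1.0 - u * x3) * Tp + x3 * Rp) / mag2
    return x1, x2

# demo data for P(z) = 1/z^2
u = np.linspace(-0.99, 0.99, 101)
Rp, Tp = 2 * u**2 - 1, 2 * u
x1, x2 = boundary_curve(u, Rp, Tp, 0.75)
# on the boundary, both parts of F(z)|_{z=-u+jv} must vanish
v2 = 1.0 - u**2
reF = (0.75 - u) + x2 * Rp - x1 * (u * Rp + v2 * Tp)
imF = x1 * (Rp - u * Tp) + x2 * Tp + 1.0
```

Sweeping u through the data grid traces the curve, and the two straight lines (7.42)–(7.43) complete the partition of the (x1, x2) plane for the chosen x3.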
Example 7.2
To illustrate the method, we take a set of frequency domain data points from the plant used in Example 5.5.
Available Information:
1. Frequency domain data P(e^{jωT}) for ω ∈ [0, π/T], sampled with T = 0.01.
2. The plant is stable, that is, op = 0.
At x3 = 0.75, Figure 7.12 is obtained. Note that each separated region in Figure 7.12 corresponds to a fixed number of unstable poles of the closed-loop system. To identify the stabilizing region, we arbitrarily select a point from each region and plot the corresponding Nyquist plot, that is,
    [(x1 z + x2)/(z + x3)] P(e^{jωT}) |z=e^{jωT}.
Figure 7.13 shows the Nyquist plots with selected controllers from the four specified regions.
Figure 7.12 Root invariant regions with first order controller (Regions 1–4 in the x1–x2 plane).
Figure 7.13 Nyquist plots with selected controllers from Regions 1–4.
Figure 7.14 All stabilizing first order controllers.
The Nyquist plot with a controller from Region 1 shows that the number of encirclements of the −1 point is N = 2. Since op = 0, the corresponding closed-loop system has 2 poles outside the unit circle. Similarly, the closed-loop systems with controllers from Regions 3 and 4 have 2 and 3 poles outside the unit circle, respectively. This test leads to the conclusion that Region 2 is the only stabilizing region. By sweeping over x3, we obtain the entire set of first order stabilizing controllers for the given plant, shown in Figure 7.14.
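The encirclement test used above to identify the stabilizing region can itself be automated from sampled Nyquist data, by accumulating the winding of L + 1 about the origin. A sketch (the two sample loops are synthetic, assumed only for the check):

```python
import numpy as np

def encirclements_of_minus_one(L_vals):
    # Net clockwise encirclements of the -1 point by a closed Nyquist
    # curve sampled densely in L_vals (winding number of L + 1 about 0)
    ang = np.unwrap(np.angle(L_vals + 1.0))
    turns = (ang[-1] - ang[0]) / (2.0 * np.pi)
    return int(round(-turns))          # clockwise counted as positive

theta = np.linspace(0.0, 2.0 * np.pi, 2000)
loop_around = -1.0 + 0.5 * np.exp(-1j * theta)   # circles -1 once, clockwise
loop_away = 0.5 * np.exp(1j * theta)             # never encircles -1
```

With op known, the closed-loop unstable pole count for each region follows from the Nyquist criterion, so the one stabilizing region can be picked out without plotting.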
7.6
Computer-Aided Design
The algorithm for the design of a discrete-time PID controller from the frequency response data of a stable system has been programmed in LabVIEW. The Virtual Instrument (VI) has a front panel that is displayed to the user and a block diagram, where the computations are performed. The inputs to the LabVIEW program are the frequency response data of the stable system, the sampling time, and the number of samples to be used in the design. Given these inputs, the entire range of K3 that can stabilize the system is displayed, and as the user scrolls through this range, the stabilizing set in K1 and K2 is displayed. When specific controller parameter values are chosen, performance indices such as gain margin, phase margin, rise time, overshoot, peak time, and the pole-zero constellation of the closed-loop system can be displayed. Additionally, the entire 3-D stabilizing set can be displayed. Furthermore, when guaranteed performance constraints are specified, the subset achieving or exceeding the desired minimum performance criteria can also be displayed. Once particular controller parameter values are chosen based on performance criteria, they can be converted to Kp, Ki, and Kd values through a simple linear transformation. The two examples described below illustrate these capabilities.
Figure 7.15 Front panel of data based PID controller design in Example 7.3.

Figure 7.16 Performance parameters for Example 7.3.

Note that all the performance indicators are arranged so that higher cursor values correspond to better performance. This helps the user understand how the system behaves when browsing through K1–K2 values. The pole-zero constellation corresponding to the above controller parameters is shown in Figure 7.17, and the entire 3-D stabilizing region in Figure 7.18. On specifying performance criteria such as gain margin > 4 db, phase margin > 25°, and overshoot < 50%, the subset achieving the required criteria is shown in Figure 7.19.
Figure 7.17 Pole zero map for Example 7.3.
Figure 7.18 3D stabilizing set for Example 7.3.

Figure 7.19 Performance subset for Example 7.3.

Example 7.4
The frequency response of another stable system, obtained from a file, is shown in Figure 7.20. Carrying out steps similar to the previous example, the K1–K2 stabilizing set is obtained for K3 = 1.5. The subset satisfying the criterion gain margin > 2 db is shown in Figure 7.21. When the constraint overshoot < 50% is added, the set shrinks as shown in Figure 7.22. When a third condition, phase margin > 14°, is imposed, the set shrinks further, as shown in Figure 7.23. This illustrates the fact that as more conditions are imposed, the resultant set achieving all specifications is a subset of the previous set.
Figure 7.20 Front panel of data based PID controller design in Example 7.4.
The method given here shows that complete stabilizing regions can be constructed for three term controllers without analytical state space or transfer function models provided we know the frequency response or impulse response and the number of unstable poles. The inclusion of performance requirements leads to complex stabilization problems for families of plants which can be solved in like manner. We note that an alternative to the results given here is to apply a model based control design to a mathematical model identified from the frequency domain data. Thus, under the assumption of perfect identification, availability of frequency domain data is equivalent to that of a mathematical model. However, identification involves approximation and assumptions on the system order. Because of this, designs by the two approaches are in general not equivalent and the resulting controllers will have different properties. These issues are subject to further investigation. We believe that the proposed method is a good complement to the existing model based design methods. The extension of these concepts to higher order and MIMO controllers is a topic of future research.
Figure 7.21 Subset satisfying gain margin > 2db for the plant in Example 7.4.
Figure 7.22 Subset satisfying gain margin > 2db and overshoot < 50% for the plant in Example 7.4.
Figure 7.23 Subset satisfying gain margin > 2db, phase margin > 14◦ and overshoot < 50% for the plant in Example 7.4.
7.7
Exercises
7.1 Let the impulse response of a stable discrete-time LTI system be
    y[k] = [0, 0, 1, 0, 0.25, 0, 0.0625, 0.015625, 0, 0.00390625, ···].
Determine the stabilizing regions in the PID parameter space using
(a) the first 3 terms of the data y[k],
(b) the first 5 terms of the data y[k],
(c) the first 10 terms of the data y[k].

7.2 Suppose the unit step response of a stable discrete-time LTI system is measured:
    ystep[k] = [0, 0, 1, 1, 1.25, 1.25, 1.3125, 1.3125, 1.328125, 1.328125, 1.328125, 1.33203125, ···].
Find the impulse response from the above data. Then find the stabilizing sets using 5, 7, and 10 terms of this sequence. Finally, impose the specifications (a) gain margin ≥ 3 db and (b) phase margin ≥ 20 degrees, and recompute the sets.
7.8
Notes and References
The results given here are taken from [129, 132, 140].
Part II
ROBUST PARAMETRIC CONTROL

This part of the book contains results on robust stability and performance of control systems containing several uncertain parameters. For LTI systems, stability is characterized by the root clustering of characteristic polynomials in prescribed stability regions such as the left-half plane and the unit circle. This leads to the problem of robustness of root clustering under parameter uncertainty, which is the main problem studied here. In Chapter 8, some fundamental stability results for polynomials are derived. These include the Boundary Crossing Theorem, the classical Hermite-Biehler Theorem, and elementary proofs of the Routh and Jury criteria. In Chapter 9, line segments of polynomials are considered and several stability criteria are derived. Chapter 10 deals with the computation of parametric stability margins, that is, of the radii of maximal stability balls in parameter space. In Chapter 11, we consider polytopes of polynomials and develop some sharp results for testing the stability of such families, including the Edge Theorem, Kharitonov's Theorem, the Generalized Kharitonov Theorem, and some extensions. In Chapter 12, we consider mixed uncertainty problems where parametric as well as unstructured perturbations are present. Several extremal results are derived and their application to control design is demonstrated.
8 STABILITY THEORY FOR POLYNOMIALS
In this chapter we introduce the Boundary Crossing Theorem for polynomials. Although intuitively obvious, this theorem, used systematically, can lead to many useful and nontrivial results in stability theory. In fact it plays a fundamental role in most of the results on robust stability. We illustrate its usefulness here by using it to give extremely simple derivations of the HermiteBiehler Theorem and of the Routh and Jury tests.
8.1
Introduction
This chapter develops some results on stability theory for a given fixed polynomial. This theory has been extensively studied and has a vast body of literature. Instead of giving a complete account of all existing results we concentrate on a few fundamental results which will be used extensively in the remainder of this book to deal with stability problems related to families of polynomials. These results are presented in a unified and elementary fashion and the approach consists of a systematic use of the following fact: Given a parametrized family of polynomials and any continuous path in the parameter space leading from a stable to an unstable polynomial, then, the first unstable point that is encountered in traversing this path corresponds to a polynomial whose unstable roots lie on the boundary (and not in the interior) of the instability region in the complex plane. The above result, called the Boundary Crossing Theorem, is established rigorously in the next section. The proof follows simply from the continuity of the roots of a polynomial with respect to its coefficients. The consequences of this result, however, are quite far reaching, and this is demonstrated in the subsequent sections by using it to give simple derivations of the classical Hermite-Biehler Theorem, the Routh test for left-half plane stability and the Jury test for unit disc stability. The purpose of this chapter is to give a simple exposition of these fundamental results which makes them particularly easy to learn and understand. Moreover the Boundary Crossing Theorem will play an important role in later
ROBUST PARAMETRIC CONTROL
chapters dealing with Kharitonov’s Theorem and its generalization, the stability of families of polynomials, and in the calculation of stability margins for control systems. Many results of stability theory extend far beyond the realm of polynomials. The Hermite-Biehler Theorem, in particular, extends to a vast class of entire functions. In the last section of this chapter some of these extensions are briefly overviewed, with an emphasis on those results which are more directly related to problems in control theory.
8.2 The Boundary Crossing Theorem
We begin with the well-known Principle of the Argument of complex variable theory. Let C be a simple closed contour in the complex plane and w = f(z) a function of the complex variable z, which is analytic on C. Let Z and P denote the number of zeros and poles, respectively, of f(z) contained in C. Let ∆C arg[f(z)] denote the net change of argument (angle) of f(z) as z traverses the contour C.

THEOREM 8.1 (Principle of the Argument)

∆C arg[f(z)] = 2π(Z − P).    (8.1)
An important consequence of this result is the well-known theorem of Rouché.

THEOREM 8.2 (Rouché's Theorem)
Let f(z) and g(z) be two functions which are analytic inside and on a simple closed contour C in the complex plane. If

|g(z)| < |f(z)|    (8.2)

for any z on C, then f(z) and f(z) + g(z) have the same number (multiplicities included) of zeros inside C.

PROOF Since f(z) cannot vanish on C, because of (8.2), we have

∆C arg[f(z) + g(z)] = ∆C arg[f(z)(1 + g(z)/f(z))]
                    = ∆C arg[f(z)] + ∆C arg[1 + g(z)/f(z)].    (8.3)
STABILITY THEORY FOR POLYNOMIALS

Moreover, since

|g(z)/f(z)| < 1

for all z ∈ C, the variable point

w = 1 + g(z)/f(z)

stays in the disc |w − 1| < 1 as z describes the curve C. Therefore, w cannot wind around the origin, which means that

∆C arg[1 + g(z)/f(z)] = 0.    (8.4)

Combining (8.3) and (8.4), we find that

∆C arg[f(z) + g(z)] = ∆C arg[f(z)].

Since f(z) and g(z) are analytic in and on C the theorem now follows as an immediate consequence of the argument principle. Note that the condition |g(z)| < |f(z)| on C implies that neither f(z) nor f(z) + g(z) may have a zero on C.

Theorem 8.2 is just one formulation of Rouché's Theorem but it is sufficient for our purposes. The next theorem is a simple application of Rouché's Theorem. It is, however, most useful since it applies to polynomials.

THEOREM 8.3
Let

P(s) = p0 + p1 s + · · · + pn s^n = pn ∏_{j=1}^{m} (s − sj)^{tj},  pn ≠ 0,    (8.5)

Q(s) = (p0 + ǫ0) + (p1 + ǫ1)s + · · · + (pn + ǫn)s^n,    (8.6)

and consider a circle Ck, of radius rk, centered at sk, which is a root of P(s) of multiplicity tk. Let rk be fixed in such a way that

0 < rk < min |sk − sj|,  for j = 1, 2, · · · , k − 1, k + 1, · · · , m.    (8.7)

Then there exists a positive number ǫ such that |ǫi| ≤ ǫ, for i = 0, 1, · · · , n, implies that Q(s) has precisely tk zeros inside the circle Ck.

PROOF P(s) is nonzero and continuous on the compact set Ck and therefore it is possible to find δk > 0 such that

|P(s)| ≥ δk > 0,  for all s ∈ Ck.    (8.8)
On the other hand, consider the polynomial R(s), defined by

R(s) = ǫ0 + ǫ1 s + · · · + ǫn s^n.    (8.9)

If s belongs to the circle Ck, then

|R(s)| ≤ Σ_{j=0}^{n} |ǫj| |s|^j ≤ Σ_{j=0}^{n} |ǫj| (|s − sk| + |sk|)^j ≤ ǫ Σ_{j=0}^{n} (rk + |sk|)^j =: ǫ Mk.    (8.10)

Thus, if ǫ is chosen so that ǫ Mk < δk, then |R(s)| < |P(s)| for all s on Ck, and it follows from Rouché's Theorem that Q(s) = P(s) + R(s) has the same number of zeros inside Ck as P(s), that is, precisely tk.

COROLLARY Applying Theorem 8.3 simultaneously to circles C1, · · · , Cm chosen as in (8.7) around the distinct roots s1, · · · , sm of P(s), there exists ǫ > 0 such that for any set of numbers {ǫ0, · · · , ǫn} satisfying |ǫi| ≤ ǫ, for i = 0, 1, · · · , n, Q(s) has precisely tj zeros inside each of the circles Cj. Note that in this case, Q(s) always has t1 + t2 + · · · + tm = n zeros and must therefore remain of degree n, so that necessarily ǫ < |pn|.

The above theorem and corollary lead to our main result, the Boundary Crossing Theorem. Let us consider the complex plane C and let S ⊂ C be any given open set. We know that S, its boundary ∂S together with the interior U^o of the closed set U = C − S form a partition of the complex plane, that is

S ∪ ∂S ∪ U^o = C,    S ∩ U^o = S ∩ ∂S = ∂S ∩ U^o = ∅.    (8.11)
Assume moreover that each one of these three sets is non-empty. These assumptions are very general. In stability theory one might choose for S the open left-half plane C − (for continuous-time systems) or the open unit disc D (for discrete-time systems) or suitable subsets of these, as illustrated in Figure 8.1. Consider a family of polynomials P (λ, s) satisfying the following assumptions.
Figure 8.1 Some typical stability regions.
Assumption 2 P(λ, s) is a family of polynomials of
1) fixed degree n (invariant degree),
2) continuous with respect to λ on a fixed interval I = [a, b].
In other words, a typical element of P(λ, s) can be written as

P(λ, s) = p0(λ) + p1(λ)s + · · · + pn(λ)s^n,    (8.12)

where p0(λ), p1(λ), · · · , pn(λ) are continuous functions of λ on I and where pn(λ) ≠ 0 for all λ ∈ I. From the results of Theorem 8.3 and its corollary, it is immediate that in general, for any open set O, the set of polynomials of degree n that have all their roots in O is itself open. In the case above, if for some t ∈ I, P(t, s) has all its roots in S, then it is always possible to find a positive real number α such that

for all t′ ∈ (t − α, t + α) ∩ I, P(t′, s) also has all its roots in S.    (8.13)
This leads to the following fundamental result.

THEOREM 8.4 (Boundary Crossing Theorem)
Under Assumption 2, suppose that P(a, s) has all its roots in S whereas P(b, s) has at least one root in U. Then, there exists at least one ρ in (a, b] such that:
a) P(ρ, s) has all its roots in S ∪ ∂S, and
b) P(ρ, s) has at least one root in ∂S.

PROOF To prove this result, let us introduce the set E of all real numbers t belonging to (a, b] and satisfying the following property:

P: for all t′ ∈ (a, t), P(t′, s) has all its roots in S.    (8.14)
By assumption, we know that P(a, s) itself has all its roots in S, and therefore as mentioned above, it is possible to find α > 0 such that

for all t′ ∈ [a, a + α) ∩ I, P(t′, s) also has all its roots in S.    (8.15)

From this it is easy to conclude that E is not empty since, for example, a + α/2 belongs to E. Moreover, from the definition of E the following property is obvious:

t2 ∈ E, and a < t1 < t2, implies that t1 itself belongs to E.    (8.16)

Given this, it is easy to see that E is an interval and if

ρ := sup_{t∈E} t    (8.17)

then it is concluded that E = (a, ρ].
A) On the one hand it is impossible that P(ρ, s) has all its roots in S. If this were the case then necessarily ρ < b, and it would be possible to find an α > 0 such that ρ + α < b and

for all t′ ∈ (ρ − α, ρ + α) ∩ I, P(t′, s) also has all its roots in S.    (8.18)

As a result, ρ + α/2 would belong to E and this would contradict the definition of ρ in (8.17).
B) On the other hand, it is also impossible that P(ρ, s) has even one root in the interior of U, because a straightforward application of Theorem 8.3 would grant the possibility of finding an α > 0 such that

for all t′ ∈ (ρ − α, ρ + α) ∩ I, P(t′, s) has at least one root in U^o,    (8.19)

and this would contradict the fact that ρ − ǫ belongs to E for ǫ small enough. From A) and B) it is thus concluded that P(ρ, s) has all its roots in S ∪ ∂S, and at least one root in ∂S.

The above result is in fact very intuitive and just states that in going from one open set to another open set disjoint from the first, the root set of a continuous family of polynomials P(λ, s) of fixed degree must intersect at some intermediate stage the boundary of the first open set. If P(λ, s) loses degree over the interval [a, b], that is if pn(λ) in (8.12) vanishes for some values of λ, then the Boundary Crossing Theorem does not hold.

Example 8.1
Consider the Hurwitz stability of the polynomial

a1 s + a0  where  p := [a0  a1].
Figure 8.2 Degree loss on C1, no loss on C2 (Example 8.1).
Referring to Figure 8.2, we see that the polynomial is Hurwitz stable for p = p0 . Now let the parameters travel along the path C1 and reach the unstable point p1 . Clearly no polynomial on this path has a jω root for finite ω and thus boundary crossing does not occur. However, observe that the assumption of constant degree does not hold on this path because the point of intersection between the path C1 and the a0 axis corresponds to a polynomial where loss of degree occurs. On the other hand, if the parameters travel along the path C2 and reach the unstable point p2 , there is no loss of degree along the path C2 and indeed a polynomial on this path has s = 0 as a root at a0 = 0 and thus boundary crossing does occur. We illustrate this point in Figure 8.3(a). Along the path C2 , where no loss of degree occurs, the root passes through the stability boundary (jω axis). However, on the path C1 the polynomial passes from stable to unstable without its root passing through the stability boundary. The above example shows that the invariant degree assumption is important. Of course we can eliminate the assumption regarding invariant degree and modify the statement of the Boundary Crossing Theorem to require that any path connecting Pa (s) and Pb (s) contains a polynomial which has a root on the boundary or which drops in degree. If degree dropping does occur, it is always possible to apply the result on subintervals over which pn (λ) has a constant sign. In other words if the family of polynomials P (λ, s) does not have a constant degree then of course Theorem 8.4 cannot be directly applied but that does not complicate the analysis terribly and similar results can be
derived.

Figure 8.3 (a) Root locus corresponding to the path C2. (b) Root locus corresponding to the path C1 (Example 8.1).

The following result gives an example of a situation where the assumption on the degree can be relaxed. As usual let S be the stability region of interest.

THEOREM 8.5
Let {Pn(s)} be a sequence of stable polynomials of bounded degree and assume that this sequence converges to a polynomial Q(s). Then the roots of Q(s) are contained in S ∪ ∂S.

In words the above theorem says that the limit of a sequence of stable polynomials of bounded degree can only have unstable roots which are on the boundary of the stability region.

PROOF By assumption, there exists an integer N such that degree[Pn] ≤ N for all n ≥ 0. Therefore, we can write for all n,

Pn(s) = p0,n + p1,n s + · · · + pN,n s^N.    (8.20)
Since the sequence {Pn(s)} converges to Q(s) then Q(s) itself has degree less than or equal to N so that we can also write

Q(s) = q0 + q1 s + · · · + qN s^N.    (8.21)

Moreover

lim_{n→+∞} pk,n = qk, for k = 0, 1, · · · , N.    (8.22)
Now, suppose that Q(s) has a root s∗ which belongs to U^o. We show that this leads to a contradiction. Since U^o is open, one can find a positive number r such that the disc C centered at s∗ and of radius r is included in U^o. By Theorem 8.3, there exists a positive number ǫ, such that for |ǫi| ≤ ǫ, for i = 0, 1, · · · , N, the polynomial

(q0 + ǫ0) + (q1 + ǫ1)s + · · · + (qN + ǫN)s^N    (8.23)

has at least one root in C ⊂ U^o. Now, according to (8.22) it is possible to find an integer n0 such that

n ≥ n0 =⇒ |pk,n − qk| < ǫ, for k = 0, 1, · · · , N.    (8.24)

But then (8.24) implies that for n ≥ n0,

(q0 + p0,n − q0) + (q1 + p1,n − q1)s + · · · + (qN + pN,n − qN)s^N = Pn(s)    (8.25)

has at least one root in C ⊂ U^o, and this contradicts the fact that Pn(s) is stable for all n.
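The root-continuity argument underlying these proofs is easy to see numerically. The following sketch (the particular polynomials, path, and grid are chosen here for illustration; they are not from the text) connects a stable polynomial to an unstable one along a convex path of fixed degree and locates the parameter value at which a root reaches the imaginary axis:

```python
import numpy as np

# Path of fixed-degree polynomials P(lam, s) = (1 - lam) Q(s) + lam R(s)
# from a stable Q to an unstable R; by the Boundary Crossing Theorem a
# root must cross the stability boundary at some intermediate lam.
Q = np.array([1.0, 3.0, 2.0])    # s^2 + 3s + 2, roots -1, -2 (stable)
R = np.array([1.0, -3.0, 2.0])   # s^2 - 3s + 2, roots +1, +2 (unstable)

def max_real_root(c):
    """Largest real part among the roots of polynomial c (highest power first)."""
    return max(r.real for r in np.roots(c))

lams = np.linspace(0.0, 1.0, 2001)
sigma = np.array([max_real_root((1 - lam) * Q + lam * R) for lam in lams])

# sigma varies continuously from negative to positive, so it passes through
# zero: the first unstable polynomial on the path has a boundary root.
crossing = lams[np.argmin(np.abs(sigma))]
print(f"root reaches the jw-axis near lambda = {crossing:.3f}")
```

Here the crossing occurs at λ = 1/2, where the path passes through s^2 + 2, whose roots ±j√2 lie exactly on the boundary (the jω axis).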
8.2.1 Zero Exclusion Principle
The Boundary Crossing Theorem can be applied to a family of polynomials to detect the presence of unstable polynomials in the family. Suppose that δ(s, p) denotes a polynomial whose coefficients depend continuously on the parameter vector p ∈ IR^l which varies in a set Ω ⊂ IR^l and thus generates the family of polynomials

∆(s) := {δ(s, p) : p ∈ Ω}.    (8.26)
We are given a stability region S and would like to determine if the family ∆(s) contains unstable polynomials. Let us assume that there is at least one stable polynomial δ(s, pa) in the family and that every polynomial in the family has the same degree. Then if δ(s, pb) is an unstable polynomial, it follows from the Boundary Crossing Theorem that on any continuous path connecting pa to pb there must exist a point pc such that the polynomial δ(s, pc) contains roots on the stability boundary ∂S. If such a path can be constructed entirely inside Ω, that is, if Ω is pathwise connected, then the point pc lies in Ω. In this case the presence of unstable polynomials in the family is equivalent to the presence of polynomials in the family with boundary roots. If s∗ is a root of a polynomial in the family it follows that δ(s∗, p) = 0 for some p ∈ Ω and this implies that 0 ∈ ∆(s∗). Therefore, the presence of unstable elements in ∆(s) can be detected by generating the complex plane image set ∆(s∗) of the family at s∗ ∈ ∂S, sweeping s∗ along the stability boundary ∂S, and checking if the zero exclusion condition 0 ∉ ∆(s∗) is violated for some s∗ ∈ ∂S. This is stated formally as an alternative version of the Boundary Crossing Theorem.
THEOREM 8.6 (Zero Exclusion Principle)
Assume that the family of polynomials (8.26) is of constant degree, contains at least one stable polynomial, and Ω is pathwise connected. Then the entire family is stable if and only if

0 ∉ ∆(s∗),  for all s∗ ∈ ∂S.
The Zero Exclusion Principle can be used to derive both theoretical and computational solutions to many robust stability problems. It is systematically exploited in Part II of the book to derive various results on robust parametric stability. In the rest of this chapter however we restrict attention to the problem of stability determination of a fixed polynomial and demonstrate the power of the Boundary Crossing Theorem in tackling some classical stability problems.
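A minimal computational sketch of the zero exclusion test follows. The interval family, box bounds, and grid resolutions below are chosen purely for illustration and do not come from the text; for this simple family the image set at s = jω happens to be an axis-aligned rectangle, which makes the membership test trivial.

```python
import numpy as np

# Interval family delta(s, p) = s^2 + p1*s + p0, p0 in [1, 3], p1 in [2, 4].
# Constant degree, connected box, and the member with (p0, p1) = (2, 3) is
# stable, so by Theorem 8.6 the family is stable iff 0 is excluded from the
# image set delta(jw, box) for every boundary point s = jw.
p0_rng = (1.0, 3.0)
p1_rng = (2.0, 4.0)

def image_contains_zero(w, n=50):
    """Sample the image set delta(jw, p) over the box and test whether 0
    could lie in it (here the image is an axis-aligned rectangle:
    Re = p0 - w^2 depends only on p0, Im = p1*w only on p1)."""
    re = [p0 - w**2 for p0 in np.linspace(*p0_rng, n)]
    im = [p1 * w for p1 in np.linspace(*p1_rng, n)]
    return min(re) <= 0.0 <= max(re) and min(im) <= 0.0 <= max(im)

# sweep the stability boundary s = jw (w >= 0 suffices for a real family)
violations = [w for w in np.linspace(0.0, 5.0, 501) if image_contains_zero(w)]
print("zero exclusion holds:", len(violations) == 0)
```

Since p1 ≥ 2 forces Im ≠ 0 for ω > 0, and p0 ≥ 1 forces Re ≠ 0 at ω = 0, zero is excluded everywhere and the whole family is Hurwitz.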
8.3 The Hermite-Biehler Theorem
We first present the Hermite-Biehler Theorem, sometimes referred to as the Interlacing Theorem. For the sake of simplicity we restrict ourselves to the case of polynomials with real coefficients. The corresponding result for complex polynomials will be stated separately. We deal with the Hurwitz case first and then the Schur case.
8.3.1 Hurwitz Stability
Consider a polynomial of degree n,

P(s) = p0 + p1 s + p2 s^2 + · · · + pn s^n.    (8.27)

P(s) is said to be a Hurwitz polynomial if and only if all its roots lie in the open left-half of the complex plane. We have the two following properties.

Property 1 If P(s) is a real Hurwitz polynomial then all its coefficients are nonzero and have the same sign, either all positive or all negative.

PROOF Follows from the fact that P(s) can be factored into a product of first and second degree real Hurwitz polynomials for which the property obviously holds.

Property 2 If P(s) is a Hurwitz polynomial of degree n, then arg[P(jω)], also called the phase of P(jω), is a continuous and strictly increasing function of ω on (−∞, +∞). Moreover the net increase in phase from −∞ to +∞ is

arg[P(+j∞)] − arg[P(−j∞)] = nπ.    (8.28)
PROOF If P(s) is Hurwitz then we can write

P(s) = pn ∏_{i=1}^{n} (s − si),  with si = ai + jbi, and ai < 0.    (8.29)

Then we have

arg[P(jω)] = arg[pn] + Σ_{i=1}^{n} arg[jω − ai − jbi]
           = arg[pn] + Σ_{i=1}^{n} arctan((ω − bi)/(−ai))    (8.30)

and thus arg[P(jω)] is a sum of a constant plus n continuous, strictly increasing functions. Moreover each of these n functions has a net increase of π in going from ω = −∞ to ω = +∞, as shown in Figure 8.4.
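Property 2 is easy to confirm numerically. The sketch below (the cubic and the frequency grid are arbitrary choices, not from the text) evaluates the phase of P(jω) for P(s) = (s + 1)^3 and checks the strictly increasing phase and the net increase of nπ:

```python
import numpy as np

# Phase of the Hurwitz polynomial P(s) = (s + 1)^3 along s = jw.
n = 3
w = np.linspace(-200.0, 200.0, 400001)
phase = np.unwrap(np.angle((1.0 + 1j * w) ** n))  # arg P(jw), unwrapped

monotone = bool(np.all(np.diff(phase) > 0))       # strictly increasing phase
net = phase[-1] - phase[0]                        # approaches n*pi = 3*pi
print(monotone, net / np.pi)
```

The net increase approaches 3π as the frequency window grows; each factor (jω + 1) contributes one arctan term with a net increase of π, exactly as in (8.30).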
Figure 8.4 Monotonic phase increase property for Hurwitz polynomials.
The even and odd parts of a real polynomial P(s) are defined as:

P^even(s) := p0 + p2 s^2 + p4 s^4 + · · ·
P^odd(s) := p1 s + p3 s^3 + p5 s^5 + · · · .

Define

P^e(ω) := P^even(jω) = p0 − p2 ω^2 + p4 ω^4 − · · ·    (8.31)

P^o(ω) := P^odd(jω)/(jω) = p1 − p3 ω^2 + p5 ω^4 − · · · .    (8.32)

P^e(ω) and P^o(ω) are both polynomials in ω^2 and as an immediate consequence their root sets will always be symmetric with respect to the origin of the complex plane. Suppose now that the degree of the polynomial P(s) is even, that is n = 2m, m > 0. In that case we have

P^e(ω) = p0 − p2 ω^2 + p4 ω^4 − · · · + (−1)^m p2m ω^{2m}
P^o(ω) = p1 − p3 ω^2 + p5 ω^4 − · · · + (−1)^{m−1} p2m−1 ω^{2m−2}.    (8.33)

DEFINITION 8.1 A real polynomial P(s) satisfies the interlacing property if
a) p2m and p2m−1 have the same sign.
b) All the roots of P^e(ω) and P^o(ω) are real and distinct and the m positive roots of P^e(ω) together with the m − 1 positive roots of P^o(ω) interlace in the following manner:

0 < ωe,1 < ωo,1 < ωe,2 < · · · < ωe,m−1 < ωo,m−1 < ωe,m.    (8.34)
If, on the contrary, the degree of P(s) is odd, then n = 2m + 1, m ≥ 0, and

P^e(ω) = p0 − p2 ω^2 + p4 ω^4 − · · · + (−1)^m p2m ω^{2m}
P^o(ω) = p1 − p3 ω^2 + p5 ω^4 − · · · + (−1)^m p2m+1 ω^{2m}    (8.35)

and the definition of the interlacing property, for this case, is then naturally modified to
a) p2m+1 and p2m have the same sign.
b) All the roots of P^e(ω) and P^o(ω) are real and the m positive roots of P^e(ω) together with the m positive roots of P^o(ω) interlace in the following manner:

0 < ωe,1 < ωo,1 < · · · < ωe,m−1 < ωo,m−1 < ωe,m < ωo,m.    (8.36)
An alternative description of the interlacing property is as follows: P (s) = P even (s) + P odd (s) satisfies the interlacing property if and only if a) the leading coefficients of P even (s) and P odd (s) are of the same sign, and b) all the zeroes of P even (s) = 0 and of P odd (s) = 0 are distinct, lie on the imaginary axis and alternate along it.
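The interlacing property is straightforward to test numerically. The following sketch (the root-finding tolerances are implementation choices) extracts the positive roots of P^e and P^o for the ninth-degree Hurwitz polynomial that reappears in Example 8.4 and checks the odd-degree condition (8.36):

```python
import numpy as np

# P(s) = s^9 + 11s^8 + 52s^7 + 145s^6 + 266s^5 + 331s^4 + 280s^3
#        + 155s^2 + 49s + 6;  coeffs[k] multiplies s^k.
coeffs = [6, 49, 155, 280, 331, 266, 145, 52, 11, 1]

def positive_roots(c):
    """Positive roots in w of c[0] - c[1] w^2 + c[2] w^4 - ...
    (treated as a polynomial in u = w^2)."""
    signed = [(-1) ** k * ck for k, ck in enumerate(c)]
    u = np.roots(signed[::-1])                  # np.roots wants highest power first
    u = np.sort(u[np.abs(u.imag) < 1e-7].real)  # keep (numerically) real roots
    return np.sqrt(u[u > 1e-12])

we = positive_roots(coeffs[0::2])   # positive roots of P^e
wo = positive_roots(coeffs[1::2])   # positive roots of P^o

# n = 9 is odd (m = 4): expect m positive roots each, ordered as in (8.36)
merged = np.empty(len(we) + len(wo))
merged[0::2], merged[1::2] = we, wo
interlaced = bool(np.all(np.diff(merged) > 0))
print(len(we), len(wo), interlaced)
```

Since the polynomial is Hurwitz, both counts equal m = 4 and the merged sequence is strictly increasing, i.e. the roots interlace.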
We can now enunciate and prove the following theorem. THEOREM 8.7 (Interlacing or Hermite-Biehler Theorem) A real polynomial P (s) is Hurwitz if and only if it satisfies the interlacing property. PROOF To prove the necessity of the interlacing property consider a real Hurwitz polynomial of degree n, P (s) = p0 + p1 s + p2 s2 + · · · + pn sn . Since P (s) is Hurwitz it follows from Property 1 that all the coefficients pi have the same sign, thus part a) of the interlacing property is already proven and one can assume without loss of generality that all the coefficients are positive. To prove part b) it is assumed arbitrarily that P (s) is of even degree so that n = 2m. Now, we also know from Property 2 that the phase of P (jω) strictly increases from −nπ/2 to nπ/2 as ω runs from −∞ to +∞. Due to the fact that the roots of P (s) are symmetric with respect to the real axis it is also true that arg(P (jω)) increases from 0 to +nπ/2 = mπ as ω goes from 0 to +∞. Hence, as ω goes from 0 to +∞, P (jω) starts on the positive real axis (P (0) = p0 > 0), circles strictly counterclockwise around the origin mπ radians before going to infinity, and never passes through the origin since P (jω) 6= 0 for all ω. As a result it is very easy to see that the plot of P (jω) has to cut the imaginary axis m times so that the real part of P (jω) becomes zero m times as ω increases, at the positive values ωR,1 , ωR,2 , · · · , ωR,m .
(8.37)
Similarly the plot of P (jω) starts on the positive real axis and cuts the real axis another m − 1 times as ω increases so that the imaginary part of P (jω) also becomes zero m times (including ω = 0) at 0, ωI,1 , ωI,2 , · · · , ωI,m−1
(8.38)
before growing to infinity as ω goes to infinity. Moreover since P (jω) circles around the origin we obviously have 0 < ωR,1 < ωI,1 < ωR,2 < ωI,2 < · · · < ωR,m−1 < ωI,m−1 < ωR,m .
(8.39)
Now the proof of necessity is completed by simply noticing that the real part of P(jω) is nothing but P^e(ω), and the imaginary part of P(jω) is ωP^o(ω). For the converse assume that P(s) satisfies the interlacing property and suppose for example that P(s) is of degree n = 2m and that p2m, p2m−1 are both positive. Consider the roots of P^e(ω) and P^o(ω), where the superscript p indicates roots associated with P(s):

0 < ω^p_{e,1} < ω^p_{o,1} < · · · < ω^p_{e,m−1} < ω^p_{o,m−1} < ω^p_{e,m}.    (8.40)
From this, P^e(ω) and P^o(ω) can be written as

P^e(ω) = p2m ∏_{i=1}^{m} (ω^2 − (ω^p_{e,i})^2)

P^o(ω) = p2m−1 ∏_{i=1}^{m−1} (ω^2 − (ω^p_{o,i})^2).
Now, consider a polynomial Q(s) that is known to be stable, of the same degree 2m and with all its coefficients positive. For example, take Q(s) = (s + 1)^{2m}. In any event, write Q(s) = q0 + q1 s + q2 s^2 + · · · + q2m s^{2m}. Since Q(s) is stable, it follows from the first part of the theorem that Q(s) satisfies the interlacing property, so that Q^e(ω) has m positive roots ω^q_{e,1}, · · · , ω^q_{e,m} and Q^o(ω) has m − 1 positive roots ω^q_{o,1}, · · · , ω^q_{o,m−1}, and

0 < ω^q_{e,1} < ω^q_{o,1} < · · · < ω^q_{e,m−1} < ω^q_{o,m−1} < ω^q_{e,m}.    (8.41)

Therefore, we can also write

Q^e(ω) = q2m ∏_{i=1}^{m} (ω^2 − (ω^q_{e,i})^2)

Q^o(ω) = q2m−1 ∏_{i=1}^{m−1} (ω^2 − (ω^q_{o,i})^2).
Consider now the polynomial Pλ(s) := Pλ^even(s) + sPλ^odd(s) defined by

Pλ^e(ω) := [(1 − λ)q2m + λp2m] ∏_{i=1}^{m} [ω^2 − ((1 − λ)(ω^q_{e,i})^2 + λ(ω^p_{e,i})^2)]

Pλ^o(ω) := [(1 − λ)q2m−1 + λp2m−1] ∏_{i=1}^{m−1} [ω^2 − ((1 − λ)(ω^q_{o,i})^2 + λ(ω^p_{o,i})^2)].

Obviously, the coefficients of Pλ(s) are polynomial functions in λ which are therefore continuous on [0, 1]. Moreover, the coefficient of the highest degree term in Pλ(s) is (1 − λ)q2m + λp2m and always remains positive as λ varies from 0 to 1. For λ = 0 we have P0(s) = Q(s) and for λ = 1, P1(s) = P(s). Suppose now that P(s) is not Hurwitz. From the Boundary Crossing Theorem it is then clear that there necessarily exists some λ in (0, 1] such that Pλ(s) has a root on the imaginary axis. However, Pλ(s) has a root on the imaginary axis if and only if Pλ^e(ω) and Pλ^o(ω) have a common real root. But, obviously, the roots of Pλ^e(ω) satisfy

(ω^λ_{e,i})^2 = (1 − λ)(ω^q_{e,i})^2 + λ(ω^p_{e,i})^2,    (8.42)

and those of Pλ^o(ω),

(ω^λ_{o,i})^2 = (1 − λ)(ω^q_{o,i})^2 + λ(ω^p_{o,i})^2.    (8.43)

Now, take any two roots of Pλ^e(ω) in (8.42). If i < j, from (8.40), (ω^p_{e,i})^2 < (ω^p_{e,j})^2, and similarly from (8.41), (ω^q_{e,i})^2 < (ω^q_{e,j})^2, so that (ω^λ_{e,i})^2 < (ω^λ_{e,j})^2 for every λ ∈ [0, 1]. The same argument applied to (8.42) and (8.43) jointly shows that the numbers ω^λ_{e,1}, ω^λ_{o,1}, · · · , ω^λ_{e,m} remain in the strict interlacing order of (8.40) and (8.41) for every λ, since each is a convex combination of correspondingly ordered quantities. Hence Pλ^e(ω) and Pλ^o(ω) can have no common real root for any λ ∈ [0, 1], and this contradiction completes the proof.

Figure 8.6 Interlacing fails for non-Hurwitz polynomials (Example 8.3).
Write as usual P(jω) = P^e(ω) + jωP^o(ω) and let S(ω) and T(ω) denote arbitrary continuous positive functions on 0 ≤ ω < ∞. Let

x(ω) := P^e(ω)/S(ω),  y(ω) := P^o(ω)/T(ω).

LEMMA 8.1
A real polynomial P(s) is Hurwitz if and only if the frequency plot z(ω) := x(ω) + jy(ω) moves strictly counterclockwise and goes through n quadrants in turn.

PROOF The Hermite-Biehler Theorem and the monotonic phase property of Hurwitz polynomials show that the plot of P(jω) must go through n quadrants if and only if P(s) is Hurwitz. Since the signs of P^e(ω) and x(ω), ωP^o(ω) and y(ω) coincide for ω > 0, the lemma is true.
Although the P(jω) plot is unbounded, the plot of z(ω) can always be bounded by choosing the functions T(ω) and S(ω) appropriately. For example T(ω) and S(ω) can be chosen to be polynomials with degrees equal to that of P^e(ω) and P^o(ω) respectively. A similar result can be derived for the complex case. Lemma 8.1 is illustrated with the following example.

Example 8.4
Taking the same polynomial as in Example 8.2:

P(s) = s^9 + 11s^8 + 52s^7 + 145s^6 + 266s^5 + 331s^4 + 280s^3 + 155s^2 + 49s + 6

and writing P(jω) := P^e(ω) + jωP^o(ω) we have

P^e(ω) = 11ω^8 − 145ω^6 + 331ω^4 − 155ω^2 + 6
P^o(ω) = ω^8 − 52ω^6 + 266ω^4 − 280ω^2 + 49.

We choose

S(ω) = ω^8 + ω^6 + ω^4 + ω^2 + 1
T(ω) = ω^8 + ω^6 + ω^4 + ω^2 + 1.

The function z(ω) in Figure 8.7 turns strictly counterclockwise and goes through nine quadrants and this shows that the polynomial P(s) is Hurwitz according to Lemma 8.1. It will be shown in Chapter 10 that the weighting functions S(ω) and T(ω) can be suitably chosen to extend this frequency domain criterion to verify robust Hurwitz stability of an ℓp ball in coefficient space.
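A direct numerical version of this quadrant count (the frequency grid and cutoff below are arbitrary choices) evaluates the phase gained by P(jω) for the polynomial of Example 8.4:

```python
import numpy as np

# P(jw) for the degree-9 Hurwitz polynomial of Example 8.4 must wind
# counterclockwise through n = 9 quadrants as w sweeps 0 -> infinity,
# i.e. its phase must increase by n*pi/2.
coeffs = [1, 11, 52, 145, 266, 331, 280, 155, 49, 6]  # highest power first
n = len(coeffs) - 1

w = np.linspace(0.0, 500.0, 500001)
phase = np.unwrap(np.angle(np.polyval(coeffs, 1j * w)))

quadrants = (phase[-1] - phase[0]) / (np.pi / 2)
print(f"quadrants traversed ~ {quadrants:.2f}")
```

The same count can be read off the bounded plot z(ω) = x(ω) + jy(ω), since dividing by the positive weights S(ω) and T(ω) changes neither sign pattern.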
8.3.2 Hurwitz Stability for Complex Polynomials
The Hermite-Biehler Theorem for complex polynomials is given below. Its proof is a straightforward extension of that of the real case and will not be given. Let P(s) be a complex polynomial

P(s) = (a0 + jb0) + (a1 + jb1)s + · · · + (an−1 + jbn−1)s^{n−1} + (an + jbn)s^n.    (8.44)

Define

PR(s) = a0 + jb1 s + a2 s^2 + jb3 s^3 + · · ·
PI(s) = jb0 + a1 s + jb2 s^2 + a3 s^3 + · · ·

and write

P(jω) = P^r(ω) + jP^i(ω),    (8.45)
Figure 8.7 Frequency plot of z(ω) (Example 8.4).
where

P^r(ω) := PR(jω) = a0 − b1 ω − a2 ω^2 + b3 ω^3 + · · · ,
P^i(ω) := (1/j) PI(jω) = b0 + a1 ω − b2 ω^2 − a3 ω^3 + · · · .    (8.46)
The Hermite-Biehler Theorem for complex polynomials can then be stated as follows. THEOREM 8.8 The complex polynomial P (s) in (8.44) is a Hurwitz polynomial if and only if, 1) an−1 an + bn−1 bn > 0, 2) The zeros of P r (ω) and P i (ω) are all simple and real and interlace, as ω runs from −∞ to +∞. Note that condition 1) follows directly from the fact that the sum of the
roots of the polynomial P(s) in (8.44) is equal to

− (an−1 + jbn−1)/(an + jbn) = − [an−1 an + bn−1 bn + j(bn−1 an − an−1 bn)] / (an^2 + bn^2),
so that if P (s) is Hurwitz, then the real part of the above complex number must be negative.
8.3.3 Schur Stability
In fact it is always possible to derive results similar to the interlacing theorem with respect to any stability region S which has the property that the phase of a stable polynomial evaluated along the boundary of S increases monotonically. In this case the stability of the polynomial with respect to S is equivalent to the interlacing of its real and imaginary parts evaluated along the boundary of S. Here we concentrate on the case where S is the open unit disc. This is the stability region for discrete-time systems.

DEFINITION 8.2 A polynomial

P(z) = pn z^n + pn−1 z^{n−1} + · · · + p1 z + p0

is said to be a Schur polynomial if all its roots lie in the open unit disc of the complex plane. A necessary condition for Schur stability is |pn| > |p0| (see Property 3).
A Frequency Plot for Schur Stability

P(z) can be written as

P(z) = pn (z − z1)(z − z2) · · · (z − zn)    (8.47)

where the zi's are the n roots of P(z). If P(z) is Schur, all these roots are located inside the unit disc |z| < 1, so that when z varies along the unit circle, z = e^{jθ}, the argument of P(e^{jθ}) increases monotonically. For a Schur polynomial of degree n, P(e^{jθ}) has a net increase of argument of 2nπ, and thus the plot of P(e^{jθ}) encircles the origin n times. This can be used as a frequency domain test for Schur stability.

Example 8.5
Consider the stable polynomial

P(z) = 2z^4 − 3.2z^3 + 1.24z^2 + 0.192z − 0.1566
     = 2(z + 0.3)(z − 0.5 + 0.2j)(z − 0.5 − 0.2j)(z − 0.9)
Figure 8.8 Plot of P (ejθ ) (Example 8.5).
Let us evaluate P(z) when z varies along the unit circle. The plot obtained in Figure 8.8 encircles the origin four times, which shows that this fourth order polynomial is Schur stable.

A simplification can be made by considering the reversed polynomial z^n P(1/z):

z^n P(1/z) = p0 z^n + p1 z^{n−1} + · · · + pn = pn (1 − z1 z)(1 − z2 z) · · · (1 − zn z).    (8.48)

z^n P(1/z) becomes zero at z = 1/zi, i = 1, · · · , n. If P(z) is Schur the zi's have modulus less than one, so that the 1/zi are located outside the unit disc. If we let z = e^{jθ} vary along the unit circle the net increase of argument of e^{jnθ} P(e^{−jθ}) must therefore be zero. This means that for Schur stability of P(z) it is necessary and sufficient that the frequency plot e^{jnθ} P(e^{−jθ}) of the reverse polynomial not encircle the origin.

Example 8.6
Consider the polynomial in the previous example. The plot of z^n P(1/z) when z describes the unit circle is shown in Figure 8.9.
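The two encirclement tests are easy to carry out numerically. The sketch below (the sampling density is an arbitrary choice) computes the winding numbers of both plots for the Schur polynomial of Example 8.5:

```python
import numpy as np

# Direct plot P(e^{j theta}) must encircle the origin n times; the reverse
# plot e^{j n theta} P(e^{-j theta}) must not encircle it at all.
coeffs = [2, -3.2, 1.24, 0.192, -0.1566]  # highest power first
n = len(coeffs) - 1

theta = np.linspace(0.0, 2 * np.pi, 200001)
z = np.exp(1j * theta)

def windings(vals):
    """Net encirclements of the origin by a closed curve of samples."""
    total = np.unwrap(np.angle(vals))[-1] - np.angle(vals[0])
    return round(float(total) / (2 * np.pi))

direct = windings(np.polyval(coeffs, z))
reverse = windings(np.exp(1j * n * theta) * np.polyval(coeffs, np.conj(z)))
print(direct, reverse)
```

For this Schur polynomial the direct plot winds 4 times and the reverse plot 0 times, in agreement with the two criteria above.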
Figure 8.9 Plot of e^{j4θ} P(e^{−jθ}) (Example 8.6).
As seen, the plot does not encircle the origin and thus we conclude that P(z) is stable. We see that when using the plot of P(z) we must verify that the plot of P(e^{jθ}) encircles the origin the correct number of times n, whereas using the reverse polynomial R(z) = z^n P(1/z) we need only check that the plot of R(e^{jθ}) excludes the origin. This result holds for real as well as complex polynomials. For a real polynomial, it is easy to see from the above that the stability of P(z) is equivalent to the interlacing of the real and imaginary parts of P(z) evaluated along the upper-half of the unit circle. Writing P(e^{jθ}) = R(θ) + jI(θ) we have:

R(θ) = pn cos(nθ) + · · · + p1 cos(θ) + p0

and

I(θ) = pn sin(nθ) + · · · + p1 sin(θ).
LEMMA 8.2
A real polynomial P(z) is Schur with |pn| > |p0| if and only if
a) R(θ) has exactly n zeros in [0, π],
b) I(θ) has exactly n + 1 zeros in [0, π], and
c) the zeros of R(θ) and I(θ) interlace.

Example 8.7
Consider the polynomial

P(z) = z^5 + 0.2z^4 + 0.3z^3 + 0.4z^2 + 0.03z + 0.02.
As seen in Figure 8.10 the polynomial P (z) is Schur since Re[P (ejθ )] and Im[P (ejθ )] have respectively 5 and 6 distinct zeros for θ ∈ [0, π], and the zeros of Re[P (ejθ )] interlace with the zeros of Im[P (ejθ )].
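The zero counts of Lemma 8.2 can be verified numerically for this example. The sketch below (the grid resolution is an arbitrary implementation choice) detects sign changes of R(θ) and I(θ) on (0, π) and appends the two endpoint zeros of I:

```python
import numpy as np

# Example 8.7, n = 5: Lemma 8.2 predicts 5 zeros of R and 6 zeros of I on
# [0, pi] (theta = 0 and theta = pi included), interlacing.
p = [0.02, 0.03, 0.4, 0.3, 0.2, 1.0]           # p[k] multiplies z^k

theta = np.linspace(0.0, np.pi, 100001)[1:-1]  # open interval (0, pi)
R = sum(pk * np.cos(k * theta) for k, pk in enumerate(p))
I = sum(pk * np.sin(k * theta) for k, pk in enumerate(p))

def zeros_of(f):
    """Grid locations where f changes sign."""
    return theta[np.nonzero(np.signbit(f[:-1]) != np.signbit(f[1:]))[0]]

zR = zeros_of(R)
zI = np.concatenate(([0.0], zeros_of(I), [np.pi]))  # I vanishes at both endpoints
print(len(zR), len(zI))

# interlacing: exactly one zero of R between consecutive zeros of I
interlaced = all(np.sum((zR > a) & (zR < b)) == 1 for a, b in zip(zI[:-1], zI[1:]))
```

The counts come out as 5 and 6 with one zero of R strictly inside each gap between consecutive zeros of I, confirming the Schur stability claimed in the example.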
Figure 8.10 Re[P (ejθ )] and Im[P (ejθ )](Schur case) (Example 8.7).
Example 8.8 Consider the polynomial P (z) = z 5 + 2z 4 + 0.3z 3 + 0.4z 2 + 0.03z + 0.02. Since Re[P (ejθ )] and Im[P (ejθ )] each do not have 2n = 10 distinct zeros for 0 ≤ θ < 2π, as shown in Figure 8.11, the polynomial P (z) is not Schur.
Figure 8.11 Re[P (ejθ )] and Im[P (ejθ )] (non-Schur case) (Example 8.8).
These conditions can in fact be further refined to the interlacing on the unit circle of the two polynomials Ps(z) and Pa(z) which represent the symmetric and antisymmetric parts of the real polynomial P(z) = Ps(z) + Pa(z):

Ps(z) = (1/2) [P(z) + z^n P(1/z)],  and  Pa(z) = (1/2) [P(z) − z^n P(1/z)].
THEOREM 8.9
A real polynomial P(z) is Schur if and only if Ps(z) and Pa(z) satisfy the following:
a) Ps(z) and Pa(z) are polynomials of degree n with leading coefficients of the same sign.
b) Ps(z) and Pa(z) have only simple zeros which belong to the unit circle.
c) The zeros of Ps(z) and Pa(z) interlace on the unit circle.

PROOF Let P(z) = p0 + p1 z + p2 z^2 + · · · + pn z^n. The condition a) is equivalent to pn^2 − p0^2 > 0 which is clearly necessary for Schur stability (see Property 3). Now we apply the bilinear transformation of the unit circle into the left-half plane and use the Hermite-Biehler Theorem for Hurwitz stability. It is known that the bilinear mapping

z = (s + 1)/(s − 1)
maps the open unit disc into the open left-half plane. It can be used to transform a polynomial P(z) into P̂(s) as follows:

(s − 1)^n P((s + 1)/(s − 1)) = P̂(s).
Write Pˆ (s) = pˆ0 + pˆ1 s + · · · + pˆn−1 sn−1 + pˆn sn where each pˆi is a function which depends on the coefficients of P (z). It follows that if the transformation is degree preserving then P (z) is Schur stable if and only if Pˆ (s) is Hurwitz stable. It is easy to verify that the transformation described above is degree preserving if and only if pˆn =
i=n X
pi = P (1) 6= 0
i=0
and that this holds is implied by condition c). The transformation of P (z) into Pˆ (s) is a linear transformation T . That is, Pˆ (s) is the image of P (z) under the mapping T . Then T P (z) = Pˆ (s) may be written explicitly as (s − 1)n P
s+1 s−1
= T P (z) = Pˆ (s).
For example, for n = 4, expressing P(z) and P̂(s) in terms of their coefficient vectors,

[p̂0]   [ 1  −1   1  −1   1] [p0]
[p̂1]   [−4   2   0  −2   4] [p1]
[p̂2] = [ 6   0  −2   0   6] [p2]
[p̂3]   [−4  −2   0   2   4] [p3]
[p̂4]   [ 1   1   1   1   1] [p4]
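The coefficient map T is easy to generate for any n by expanding (s + 1)^i (s − 1)^{n−i}. The following sketch (an assumed illustration using numpy's polynomial helpers, not code from the book) rebuilds the n = 4 matrix and confirms on Example 8.7 that the transform of a Schur polynomial is Hurwitz.

```python
import numpy as np
from numpy.polynomial import polynomial as npoly

def bilinear_matrix(n):
    """Matrix M with [phat_0, ..., phat_n]^T = M [p_0, ..., p_n]^T for
    Phat(s) = (s - 1)^n P((s + 1)/(s - 1))."""
    cols = []
    for i in range(n + 1):
        # coefficient vector (low to high) of (s + 1)^i (s - 1)^{n-i}
        cols.append(npoly.polymul(npoly.polypow([1.0, 1.0], i),
                                  npoly.polypow([-1.0, 1.0], n - i)))
    return np.column_stack(cols)

M4 = bilinear_matrix(4)
print(M4.astype(int))        # bottom row is all ones: phat_n = P(1)

# Example 8.7: a Schur P(z) maps to a Hurwitz Phat(s) (degree preserved
# because P(1) != 0)
p = np.array([0.02, 0.03, 0.4, 0.3, 0.2, 1.0])      # p0 ... p5
phat = bilinear_matrix(5) @ p
print(np.roots(phat[::-1]).real.max() < 0)
```

The row of ones in the last position reproduces the degree-preservation condition p̂n = P(1) ≠ 0 from the proof.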
Consider the symmetric and antisymmetric parts of P(z), and their transformed images, given by T Ps(z) and T Pa(z) respectively. A straightforward computation shows that
T Ps(z) = P̂^even(s),  T Pa(z) = P̂^odd(s),  for n even,
and
T Ps(z) = P̂^odd(s),  T Pa(z) = P̂^even(s),  for n odd.
The conditions b) and c) now follow immediately from the interlacing property for Hurwitz polynomials applied to P̂(s). The functions Ps(z) and Pa(z) are easily evaluated as z traverses the unit circle. Interlacing may be verified from a plot of the zeros of these functions as in Figure 8.12.
Figure 8.12 Interlacing of the symmetric and antisymmetric parts of a polynomial on the unit circle.
8.3.4 General Stability Regions
The Hermite-Biehler interlacing theorem holds for any stability region S which has the property that the phase of any polynomial which is stable with respect to S varies monotonically along the boundary ∂S. The left-half plane and the unit circle satisfy this criterion. Obviously there are many other regions of practical interest for which this property holds. For example, the regions shown in Figure 8.1.(c) and (d) also satisfy this condition. In the next two sections we display another application of the Boundary Crossing Theorem by using it to derive the Jury and Routh tables for Schur and Hurwitz stability respectively.
8.4 Schur Stability Test
The problem of checking the stability of a discrete-time system reduces to determining whether or not the roots of the characteristic polynomial of the system lie strictly within the unit disc, that is, whether or not the characteristic polynomial is a Schur polynomial. In this section we develop a simple test procedure for this problem based on the Boundary Crossing Theorem. The procedure turns out to be equivalent to Jury's test for Schur stability. The development here is given generally for complex polynomials and of course it applies to real polynomials as well. Now, let P(z) = p0 + p1 z + ··· + pn z^n be a polynomial of degree n. The following is a simple necessary condition.

Property 3
A necessary condition for P(z) to be a Schur polynomial is that |pn| > |p0|.

Indeed, if P(z) has all its roots z1, ···, zn inside the unit circle, then the product of these roots is given by
(−1)^n ∏_{i=1}^{n} zi = p0/pn,
hence
|p0/pn| = ∏_{i=1}^{n} |zi| < 1.
Now, consider a polynomial P (z) of degree n,
P (z) = p0 + p1 z + · · · + pn z n .
Let z̄ denote the conjugate of z and define
Q(z) = z^n P̄(1/z) = p̄0 z^n + p̄1 z^{n−1} + ··· + p̄n−1 z + p̄n,
R(z) = (1/z) [ P(z) − (p0/p̄n) Q(z) ],   (8.49)
where P̄ denotes the polynomial whose coefficients are the conjugates of those of P (for a real polynomial the bars can be ignored).
It is easy to see that R(z) is always of degree less than or equal to n − 1. The following key lemma allows the degree of the test polynomial to be reduced without losing stability information. LEMMA 8.3 If P (z) satisfies |pn | > |p0 |, we have the following equivalence P (z) is a Schur polynomial ⇐⇒ R(z) is a Schur polynomial. PROOF
First notice that obviously,
R(z) is a Schur polynomial ⇐⇒ zR(z) is a Schur polynomial.
Now consider the family of polynomials
Pλ(z) = P(z) − λ (p0/p̄n) Q(z),  where λ ∈ [0, 1].
It can be seen that P0(z) = P(z), and P1(z) = zR(z). Moreover, the coefficient of degree n of Pλ(z) is
pn − λ |p0|^2/p̄n,
and it satisfies
|pn − λ |p0|^2/p̄n| ≥ |pn| − λ |p0/p̄n| |p0| > |pn| − |p0| > 0,
so that Pλ(z) remains of fixed degree n. Assume now by way of contradiction that one of these two polynomials P0(z) or P1(z) is stable whereas the other one is not. Then, from the Boundary Crossing Theorem it can be concluded that there must exist a λ in [0, 1] such that Pλ(z) has a root on the unit circle at the point z0 = e^{jθ}, θ ∈ [0, 2π), that is
Pλ(z0) = P(z0) − λ (p0/p̄n) z0^n P̄(1/z0) = 0.   (8.50)
But for any complex number z on the unit circle, z̄ = 1/z, so that P̄(1/z0) = P̄(z̄0) is the conjugate of P(z0), and therefore (8.50) implies that
Pλ(z0) = P(z0) − λ (p0/p̄n) z0^n P̄(z̄0) = 0.   (8.51)
Taking the complex conjugate of this last expression it is deduced that
P̄(z̄0) − λ (p̄0/pn) z̄0^n P(z0) = 0.   (8.52)
Therefore, from (8.51) and (8.52), after using the fact that |z0| = 1,
P(z0) (1 − λ^2 |p0|^2/|pn|^2) = 0.   (8.53)
By assumption λ^2 |p0|^2/|pn|^2 < 1, and therefore (8.53) implies that
P(z0) = 0.   (8.54)
But then this implies that
P(z0) = P̄(1/z0) = 0,
and therefore (see (8.49))
R(z0) = 0.   (8.55)
But (8.54) and (8.55) contradict the assumption that one of the two polynomials P(z) and zR(z) is stable, and this concludes the proof of the lemma.
The above lemma leads to the following procedure for successively reducing the degree and testing for stability.

ALGORITHM 8.1 (Schur stability for real or complex polynomials)
1) Set P^(0)(z) = P(z),
2) Verify |pn^(i)| > |p0^(i)|,
3) Construct
P^(i+1)(z) = (1/z) [ P^(i)(z) − (p0^(i)/p̄n^(i)) z^n P̄^(i)(1/z) ],
4) Go back to 2) until you either find that 2) is violated (P(z) is not Schur) or until you reach P^(n−1)(z) (which is of degree 1) in which case condition 2) is also sufficient and P(z) is a Schur polynomial.

It can be verified by the reader that this procedure leads precisely to the Jury stability test.
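Algorithm 8.1 translates directly into a few lines of numerical code. The sketch below (an assumed numpy implementation for illustration, not from the book) performs the reduction of (8.49); the conjugations are redundant for real coefficients. The results are cross-checked against directly computed root moduli.

```python
import numpy as np

def is_schur(coeffs):
    """Algorithm 8.1; coeffs = [p0, p1, ..., pn], real or complex."""
    p = np.asarray(coeffs, dtype=complex)
    while len(p) > 2:
        if not abs(p[-1]) > abs(p[0]):       # necessary condition |pn| > |p0|
            return False
        q = np.conj(p[::-1])                 # Q(z): reversed, conjugated P
        r = p - (p[0] / np.conj(p[-1])) * q  # the constant term cancels
        p = r[1:]                            # divide by z: degree drops by one
    return abs(p[-1]) > abs(p[0])            # condition is sufficient at degree 1

# Examples 8.7 and 8.8
print(is_schur([0.02, 0.03, 0.4, 0.3, 0.2, 1.0]))   # Schur
print(is_schur([0.02, 0.03, 0.4, 0.3, 2.0, 1.0]))   # not Schur

# agreement with brute-force root computation on constructed examples
for roots in ([0.5, -0.3 + 0.4j, 0.2 - 0.7j], [1.2, 0.5, -0.3]):
    c_low = np.poly(roots)[::-1]             # low-to-high coefficients
    print(is_schur(c_low) == (np.abs(np.array(roots)).max() < 1))
```

Each pass removes one degree, so the whole test costs O(n^2) arithmetic operations, the same as building the Jury table.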
Example 8.9
Consider a real polynomial of degree 3 in the variable z,
P(z) = z^3 + az^2 + bz + c.
According to the algorithm, we form the polynomial
P^(1)(z) = (1/z) [ P(z) − c z^3 P(1/z) ] = (1 − c^2)z^2 + (a − bc)z + (b − ac),
and then
P^(2)(z) = (1/z) [ P^(1)(z) − ((b − ac)/(1 − c^2)) z^2 P^(1)(1/z) ]
         = [((1 − c^2)^2 − (b − ac)^2)/(1 − c^2)] z + (a − bc)(1 − (b − ac)/(1 − c^2)).
On the other hand, the Jury table is given by
c        b        a        1
1        a        b        c
c^2 − 1  cb − a   ca − b
ca − b   cb − a   c^2 − 1
(c^2 − 1)^2 − (ca − b)^2    (cb − a)[(c^2 − 1) − (ca − b)]
Here, the first two lines of this table correspond to the coefficients of P(z), the third and fourth lines to those of P^(1)(z) and the last one to a constant times P^(2)(z), and the tests to be carried out are exactly the same.
8.5 Hurwitz Stability Test
We now turn to the problem of left-half plane or Hurwitz stability for real polynomials and develop an elementary test procedure for it based on the Interlacing Theorem and therefore on the Boundary Crossing Theorem. This procedure turns out to be equivalent to Routh's well-known test. Let P(s) be a real polynomial of degree n, and assume that all the coefficients of P(s) are positive:
P(s) = p0 + p1 s + ··· + pn s^n,  pi > 0 for i = 0, ···, n.
Remember that P(s) can be decomposed into its even and odd parts as
P(s) = P^even(s) + P^odd(s).
Now, define the polynomial Q(s) of degree n − 1 by:
If n = 2m:     Q(s) = [P^even(s) − (p2m/p2m−1) s P^odd(s)] + P^odd(s),
If n = 2m + 1: Q(s) = [P^odd(s) − (p2m+1/p2m) s P^even(s)] + P^even(s),   (8.56)
that is, in general, with µ = pn/pn−1,
Q(s) = pn−1 s^{n−1} + (pn−2 − µpn−3) s^{n−2} + pn−3 s^{n−3} + (pn−4 − µpn−5) s^{n−4} + ··· .   (8.57)
Then the following key result on degree reduction is obtained.

LEMMA 8.4
If P(s) has all its coefficients positive,
P(s) is stable ⇐⇒ Q(s) is stable.

PROOF Assume, for example, that n = 2m, and use the interlacing theorem.
(a) Assume that P(s) = p0 + ··· + p2m s^{2m} is stable and therefore satisfies the interlacing theorem. Let
0 < ωe,1 < ωo,1 < ωe,2 < ωo,2 < ··· < ωe,m−1 < ωo,m−1 < ωe,m
be the interlacing roots of P^e(ω) and P^o(ω). One can easily check that (8.56) implies that Q^e(ω) and Q^o(ω) are given by
Q^e(ω) = P^e(ω) + µω^2 P^o(ω),  µ = p2m/p2m−1,
Q^o(ω) = P^o(ω).
From this it is already concluded that Q^o(ω) has the required number of positive roots, namely the m − 1 roots of P^o(ω): ωo,1, ωo,2, ···, ωo,m−1. Moreover, due to the form of Q^e(ω), it can be deduced that
Q^e(0) = P^e(0) > 0,
Q^e(ωo,1) = P^e(ωo,1) < 0,
⋮
Q^e(ωo,m−2) = P^e(ωo,m−2), which has the sign of (−1)^{m−2},
Q^e(ωo,m−1) = P^e(ωo,m−1), which has the sign of (−1)^{m−1}.
Hence, it is already established that Q^e(ω) has m − 1 positive roots ω′e,1, ω′e,2, ···, ω′e,m−1 that do interlace with the roots of Q^o(ω). Since moreover Q^e(ω) is of degree m − 1 in ω^2, these are the only positive roots it can have. Finally, it has been seen that the sign of Q^e(ω) at the last root ωo,m−1 of Q^o(ω) is that of (−1)^{m−1}. But the highest coefficient of Q^e(ω) is nothing but q2m−2 (−1)^{m−1}. From this q2m−2 must be strictly positive, as q2m−1 = p2m−1 is; otherwise Q^e(ω) would again have a change of sign between ωo,m−1 and +∞, which would result in the contradiction of Q^e(ω) having m positive roots (whereas it is a polynomial of degree only m − 1 in ω^2). Therefore, Q(s) satisfies the interlacing property and is stable if P(s) is.
(b) Conversely assume that Q(s) is stable. Write
P(s) = [Q^even(s) + µ s Q^odd(s)] + Q^odd(s),  µ = p2m/p2m−1.
By the same reasoning as in a) it can be seen that P^o(ω) already has the required number m − 1 of positive roots, and that P^e(ω) already has m − 1 roots in the interval (0, ωo,m−1) that interlace with the roots of P^o(ω). Moreover, the sign of P^e(ω) at ωo,m−1 is the same as (−1)^{m−1} whereas the term p2m s^{2m} in P(s) makes the sign of P^e(ω) at +∞ that of (−1)^m. Thus, P^e(ω) has an mth positive root, ωe,m > ωo,m−1, so that P(s) satisfies the interlacing property and is therefore stable.
The above lemma shows how the stability of a polynomial P(s) can be checked by successively reducing its degree as follows.

ALGORITHM 8.2 (Hurwitz stability for real polynomials)
1) Set P^(0)(s) = P(s),
2) Verify that all the coefficients of P^(i)(s) are positive,
3) Construct P^(i+1)(s) = Q(s) according to (8.57),
4) Go back to 2) until you either find that 2) is violated (P(s) is not Hurwitz) or until you reach P^(n−2)(s) (which is of degree 2) in which case condition 2) is also sufficient (P(s) is Hurwitz).

The reader may verify that this procedure is identical to Routh's test since it generates the Routh table. The proof also shows the known property that for a stable polynomial not only the first column but the entire Routh table must consist only of positive numbers.

Example 8.10
Consider a real polynomial of degree 4,
P(s) = s^4 + as^3 + bs^2 + cs + d.
Following the algorithm above we form the polynomials
µ = 1/a,  P^(1)(s) = as^3 + (b − c/a)s^2 + cs + d,
and then
µ = a^2/(ab − c),  P^(2)(s) = (b − c/a)s^2 + (c − a^2 d/(ab − c))s + d.
Considering that at each step only the even or the odd part of the polynomial is modified, we need only verify the positivity of the following set of coefficients:
1          b          d
a          c
b − c/a    d
c − a^2 d/(ab − c)
d
But this is just the Routh table for this polynomial.
Note that a lemma similar to Lemma 8.4 could be derived where the assumption that all the coefficients of P(s) are positive is replaced by the assumption that only the two highest degree coefficients pn−1 and pn are positive. The corresponding algorithm would then exactly lead to checking that the first column of the Routh table is positive. However, since the algorithm requires that the entire table be constructed, it is more efficient to check that every new coefficient is positive.
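Algorithm 8.2 is equally short in code. The sketch below is an assumed pure-Python illustration (not from the book): it applies the reduction formula (8.57) and declares P(s) Hurwitz when every coefficient generated along the way, i.e. every entry of the Routh table, is positive.

```python
import numpy as np

def reduce_once(p):
    """One step of (8.57); p = [p0, ..., pn], low to high."""
    n = len(p) - 1
    mu = p[n] / p[n - 1]
    q = []
    for k in range(n):                       # degrees 0 .. n-1 of Q(s)
        if (n - 1 - k) % 2 == 0:             # these coefficients are kept
            q.append(p[k])
        else:                                # these are reduced by mu * p_{k-1}
            q.append(p[k] - mu * (p[k - 1] if k >= 1 else 0.0))
    return q

def is_hurwitz(coeffs):
    """Algorithm 8.2 for real polynomials, low-to-high coefficients."""
    p = [float(c) for c in coeffs]
    while True:
        if any(c <= 0 for c in p):
            return False
        if len(p) <= 3:                      # degree <= 2: positivity suffices
            return True
        p = reduce_once(p)

print(is_hurwitz([1, 4, 6, 4, 1]))           # (s + 1)^4: Hurwitz
print(is_hurwitz([1, 4, 1, 1, 1]))           # a negative entry appears

# cross-check the failing case with the roots of s^4 + s^3 + s^2 + 4s + 1
print(np.roots([1, 1, 1, 4, 1]).real.max() > 0)
```

Note the coefficient ordering: the helper uses low-to-high lists, while numpy's `roots` expects highest degree first.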
8.5.1 Root Counting and the Routh Table
The Routh table is known to serve as a root counting procedure. To see how this occurs, we introduce the line segment of polynomials Q(λ, s) defined below. Note that µ := pn/pn−1 where P(s) = pn s^n + pn−1 s^{n−1} + ··· + p1 s + p0.

• n = 2m:
Q(λ, s) = [P^even(s) − λµ s P^odd(s)] + P^odd(s),   (8.58)
where the bracketed term is Q^even(λ, s) and Q^odd(λ, s) = P^odd(s).

• n = 2m + 1:
Q(λ, s) = P^even(s) + [P^odd(s) − λµ s P^even(s)],   (8.59)
where Q^even(λ, s) = P^even(s) and the bracketed term is Q^odd(λ, s).

More generally,
Q(λ, s) = (pn − λµpn−1) s^n + pn−1 s^{n−1} + (pn−2 − λµpn−3) s^{n−2} + pn−3 s^{n−3} + (pn−4 − λµpn−5) s^{n−4} + ···
and we see, from (8.56), that
Q(0, s) = P(s),  Q(1, s) = Q(s),   (8.60)
and Q(λ, s) has degree n for λ ∈ [0, 1), and loses degree only at λ = 1. If we assume that P(jω) ≠ 0 for ω ∈ [0, ∞), it follows from (8.58) and (8.59) that Q(λ, jω) ≠ 0 for λ ∈ [0, 1) and ω ∈ [0, ∞). Thus, the root distribution about the imaginary axis of Q(λ, s) remains invariant and equal to that of P(s). As λ → 1, the leading coefficient
pn − λµpn−1 = pn − λ (pn/pn−1) pn−1
of Q(λ, s) tends to zero. As a result, one root of Q(λ, s) tends to infinity along the positive or negative real axis. It is easy to see that for large |s| we may approximate
Q(λ, s) ≈ (pn − λµpn−1) s^n + pn−1 s^{n−1} ≈ s^{n−1} [(pn − λµpn−1) s + pn−1],   (8.61)
so that n − 1 roots of Q(λ, s) are finite while the one root tending to infinity can be estimated by
s* = −pn−1/(pn − λµpn−1) = −pn−1/(pn(1 − λ)).   (8.62)
Equation (8.62) demonstrates that the real root that is lost in Q(λ, s) as λ → 1 tends to infinity along the negative real axis if pn and pn−1 have the same sign, and along the positive real axis if pn and pn−1 are of opposite sign. This is exactly Routh's theorem: the number of sign changes in the first column of the Routh table equals the number of RHP roots.
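The root-counting property is easy to confirm numerically. The sketch below is a standard textbook Routh construction (assumed here for illustration; it omits the zero-pivot special cases): it counts sign changes in the first column and compares with the number of open right-half plane roots.

```python
import numpy as np

def routh_first_column(coeffs_high):
    """First column of the Routh table; zero pivots are not handled."""
    n = len(coeffs_high) - 1
    width = n // 2 + 1
    table = np.zeros((n + 1, width + 1))      # one extra zero column as padding
    table[0, :len(coeffs_high[0::2])] = coeffs_high[0::2]
    table[1, :len(coeffs_high[1::2])] = coeffs_high[1::2]
    for i in range(2, n + 1):
        for j in range(width):
            table[i, j] = (table[i - 1, 0] * table[i - 2, j + 1]
                           - table[i - 2, 0] * table[i - 1, j + 1]) / table[i - 1, 0]
    return table[:, 0]

def rhp_count(coeffs_high):
    """Number of sign changes in the first column."""
    s = np.sign(routh_first_column(coeffs_high))
    return int(np.sum(s[:-1] * s[1:] < 0))

# (s - 2)(s - 3)(s + 1) = s^3 - 4s^2 + s + 6: two RHP roots
c = [1.0, -4.0, 1.0, 6.0]
print(rhp_count(c), int(np.sum(np.roots(c).real > 0)))
print(rhp_count([1.0, 3.0, 3.0, 1.0]))        # (s + 1)^3: no sign changes
```

For the first example the first column is 1, −4, 2.5, 6, giving two sign changes, in agreement with the two unstable roots at s = 2 and s = 3.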
8.5.2 Complex Polynomials
A similar algorithm can be derived for checking the Hurwitz stability of complex polynomials. The proof, which is very similar to the real case, is omitted and a precise description of the algorithm is given below. Let P(s) be a complex polynomial of degree n,
P(s) = (a0 + jb0) + (a1 + jb1)s + ··· + (an−1 + jbn−1)s^{n−1} + (an + jbn)s^n,
where an + jbn ≠ 0. Let
T(s) = (1/(an + jbn)) P(s).
Thus, T(s) can be written as
T(s) = (c0 + jd0) + (c1 + jd1)s + ··· + (cn−1 + jdn−1)s^{n−1} + s^n,
and notice that
cn−1 = (an−1 an + bn−1 bn)/(an^2 + bn^2).
Assume that cn−1 > 0, which is a necessary condition for P(s) to be Hurwitz (see Theorem 8.8). As usual write
T(s) = TR(s) + TI(s),
where
TR(s) = c0 + jd1 s + c2 s^2 + jd3 s^3 + ··· ,
TI(s) = jd0 + c1 s + jd2 s^2 + c3 s^3 + ··· .
Now define the polynomial Q(s) of degree n − 1 by:
If n = 2m:     Q(s) = [TR(s) − (1/c2m−1) s TI(s)] + TI(s),
If n = 2m + 1: Q(s) = [TI(s) − (1/c2m) s TR(s)] + TR(s),
that is, in general, with µ = 1/cn−1,
Q(s) = [cn−1 + j(dn−1 − µdn−2)]s^{n−1} + [(cn−2 − µcn−3) + jdn−2]s^{n−2} + [cn−3 + j(dn−3 − µdn−4)]s^{n−3} + ··· .
Now, exactly as in the real case, the following lemma can be proved.

LEMMA 8.5
If P(s) satisfies an−1 an + bn−1 bn > 0, then
P(s) is Hurwitz stable ⇐⇒ Q(s) is Hurwitz stable.

The corresponding algorithm is as follows.

ALGORITHM 8.3 (Hurwitz stability for complex polynomials)
1) Set P^(0)(s) = P(s),
2) Verify that the last two coefficients of P^(i)(s) satisfy
an−1^(i) an^(i) + bn−1^(i) bn^(i) > 0,
3) Construct
T^(i)(s) = (1/(an^(i) + jbn^(i))) P^(i)(s),
4) Construct P^(i+1)(s) = Q(s) as above,
5) Go back to 2) until you either find that 2) is violated (P(s) is not Hurwitz) or until you reach P^(n−1)(s) (which is of degree 1) in which case condition 2) is also sufficient (P(s) is Hurwitz).
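A direct transcription of Algorithm 8.3 follows (an assumed numpy sketch, not from the book): the polynomial is renormalized to monic form at each step, the necessary condition cn−1 > 0 is checked, and one degree is removed using the even/odd split of T(s).

```python
import numpy as np

def is_hurwitz_complex(coeffs):
    """Algorithm 8.3; coeffs = [p0, ..., pn], complex, pn != 0."""
    p = np.asarray(coeffs, dtype=complex)
    while True:
        n = len(p) - 1
        t = p / p[-1]                        # monic T(s): t[k] = c_k + j d_k
        if t[n - 1].real <= 0:               # necessary condition c_{n-1} > 0
            return False
        if n == 1:
            return True                      # condition is sufficient at degree 1
        k = np.arange(n + 1)
        TR = np.where(k % 2 == 0, t.real, 1j * t.imag)  # c0, j d1 s, c2 s^2, ...
        TI = t - TR                                     # j d0, c1 s, j d2 s^2, ...
        drop = TI if n % 2 == 0 else TR      # the part multiplied by s in Q(s)
        shifted = np.zeros(n + 1, dtype=complex)
        shifted[1:] = drop[:-1]              # s * drop (its top coefficient is 0)
        q = t - (1.0 / t[n - 1].real) * shifted
        p = q[:-1]                           # leading term cancels: degree n - 1

# cross-checks against directly computed roots
stable = np.poly([-1.0, -2.0 - 1.0j, -0.5 + 2.0j])[::-1]
unstable = np.poly([0.5 - 1.0j, -2.0, -3.0])[::-1]
print(is_hurwitz_complex(stable), is_hurwitz_complex(unstable))
```

For real coefficients the routine reduces to the Routh-style test of Algorithm 8.2, since all dk vanish.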
8.6 Exercises
The purpose of Problems 1–5 is to illustrate some possible uses of the Boundary Crossing Theorem and of the Interlacing (Hermite-Biehler) Theorem. In all the problems the word stable means Hurwitz-stable, and all stable polynomials are assumed to have positive coefficients. The following standard notation is also used: Let P(s) be any polynomial. Denote by P^even(s) the even part of P(s), and by P^odd(s) its odd part, that is
P(s) = (p0 + p2 s^2 + p4 s^4 + ···) + (p1 s + p3 s^3 + p5 s^5 + ···),
where the first group of terms is P^even(s) and the second is P^odd(s). Also denote
P^e(ω) := P^even(jω) = p0 − p2 ω^2 + p4 ω^4 + ···
P^o(ω) := P^odd(jω)/(jω) = p1 − p3 ω^2 + p5 ω^4 + ··· .
Also, for any polynomial Q(t) of the variable t, the notation Q′(t) designates the derivative of Q(t) with respect to t.

8.1 Suppose that the polynomial P(s) = P^even(s) + P^odd(s) is stable. Prove that the following two polynomials are also stable:
Q(s) = P^even(s) + [P^even]′(s) = p0 + 2p2 s + p2 s^2 + 4p4 s^3 + ··· ,
R(s) = P^odd(s) + [P^odd]′(s) = p1 + p1 s + 3p3 s^2 + p3 s^3 + ··· .
Hint: In both cases use the Hermite-Biehler (Interlacing) Theorem. First, check that part a) of the theorem is trivially satisfied. To prove part b) of the theorem for Q(s), show that −ωQ^o(ω) = [P^e]′(ω). To conclude, use the fact that for any continuous function f(t), if f(a) = f(b) = 0 for some real numbers a < b, and if f is differentiable on the interval [a, b], then there exists a real number c such that a < c < b and f′(c) = 0 (Rolle's Theorem). Proceed similarly to prove part b) of the theorem for R(s).

8.2 Suppose that
P1(s) = P^even(s) + P1^odd(s),
P2(s) = P^even(s) + P2^odd(s)
are two stable polynomials with the same 'even' part. Show that the polynomial Qλ,µ(s) defined by
Qλ,µ(s) = P^even(s) + λP1^odd(s) + µP2^odd(s)
is stable for all λ > 0 and µ > 0. Hint: You can use the Boundary Crossing Theorem directly. In doing so, check that Qλ,µ^o(ω) = λP1^o(ω) + µP2^o(ω), and use the fact that the sign of Pi^o(ω) alternates at the positive roots of P^e(ω), and does not depend on i.

8.3 Suppose that P(s) is a stable polynomial:
P(s) = p0 + p1 s + p2 s^2 + ··· + pn s^n, n ≥ 1.
Write as usual:
P (jω) = P e (ω) + jωP o (ω).
(a) Show that the polynomial Qλ(s) associated with Qλ^e(ω) = P^e(ω) − λP^o(ω) and Qλ^o(ω) = P^o(ω) is stable for all λ satisfying 0 ≤ λ
0. Prove that if P(s) satisfies part b) of the interlacing condition, but violates part a) in the sense that pn pn−1 < 0, then P(s) is completely unstable, that is, P(s) has all its roots in the open right-half plane. Give the Schur counterpart of this result. Hint: Consider the polynomial Q(s) = P(−s).

8.6 Show, by using the Boundary Crossing Theorem, that the set Hn^+ consisting of nth degree Hurwitz polynomials with positive coefficients is connected. A similar result holds for Schur stable polynomials and in fact for any stability region S.

8.7 Write s = σ + jω and let the stability region be defined by S := {s : σ < ω^2 − 1}. Consider the parametrized family of polynomials
p(s, λ) = s^3 + (10 − 14λ)s^2 + (65λ^2 − 94λ + 34)s + (224λ^2 − 102λ^3 − 164λ + 40), λ ∈ [0, 1].
Verify that p(s, 0) is stable and p(s, 1) is unstable with respect to S. Use the Boundary Crossing Theorem to determine the range of values of λ for which the family is stable and the points on the stability boundary through which the roots cross over from stability to instability. Hint: Consider a point (σ, ω) on the stability boundary and impose the condition for this point to be a root of p(s, λ) in the form of two polynomial equations in ω with coefficients which are polynomial functions of λ. Now
use the eliminant to obtain a polynomial equation in λ, the roots of which determine the possible values of λ for which boundary crossing may occur.

8.8 Use Algorithm 8.1 to check that the following complex polynomial is a Schur polynomial:
P(z) = 32z^4 + (8 + 32j)z^3 + (−16 + 4j)z^2 − (2 + 8j)z + 2 − j.
Use Algorithm 8.3 to check that the following complex polynomial is a Hurwitz polynomial:
P(s) = s^4 + 6s^3 + (14 + j)s^2 + (15 + 3j)s + 2j + 6.

8.9 Show, using the Hermite-Biehler Theorem, that the polynomial P(s) + jQ(s), with P(s) and Q(s) real polynomials, has no zeros in the lower half plane Im s ≤ 0 if and only if i) P(s) and Q(s) have only simple real zeros which interlace, and ii) Q′(s0)P(s0) − P′(s0)Q(s0) > 0 for some point s0 on the real axis. Hint: Use the monotonic phase property.

8.10 Consider the plant-controller pair
P(s) = (s − 1)/(s^3 + 2s^2 + 3s + 2),  C(s) = (x1 s + x2)/(s + x3)
and let x := [x1 , x2 , x3 ]. The characteristic polynomial can be written as δ(s, x). Select 4 specific frequencies ω1 < ω2 < ω3 < ω4 and force δ(jωi , x) to lie in the ith quadrant for i = 1, 2, 3, 4. Write down the linear programming problem and determine the corresponding convex set of stabilizing controllers. Repeat the above with several sets of frequencies and determine an approximation of the stabilizing set as the union of the corresponding convex sets. 8.11 (Controller synthesis using Mikhailov’s criterion) Let P (s, x) denote the nth degree real characteristic polynomial of a control system, with the unknown (controller) parameter x appearing affinely in the coefficients. If 0 < ω1 < ω2 < · · · < ωn < ∞ denotes a selection of n “frequencies”, prove that by forcing the Mikhailov plot P (jω, x) to pass through the ith quadrant at ωi , i = 1, 2, · · · , n, a set of linear inequalities in x can be written down, describing a convex subset of the stabilizing set (inner approximation).
8.7 Notes and References
The material of section 8.2 is mainly based on Marden [150] and Dieudonné [68]. In particular the statement of Theorem 8.2 (Rouché's Theorem) follows Marden's book very closely. The Boundary Crossing Theorem and the unified proof of the Hermite-Biehler Theorem, Routh and Jury tests based on the Boundary Crossing Theorem were developed by Chapellat, Mansour, and Bhattacharyya [51] and the treatment given here closely follows this reference. The stability theory for a single polynomial bears countless references, going back to the last century. For a modern exposition of stability theory, the best reference remains Gantmacher [84] and to a lesser extent Marden [150]. The Hermite-Biehler Theorem for Hurwitz polynomials can be found in the book of Guillemin [91]. The corresponding theorem for the discrete-time case is stated in Bose [37] where earlier references are also given. The complex case was treated in Bose and Shi [40]. Jury's test is described in [107]. Vaidyanathan and Mitra [194] have given a unified network interpretation of classical stability results. The technique of determining stabilizing sets in Exercise 8.10 was proposed in [145]. Exercise 8.11 is the main idea developed in the paper by Malik, Swaroop, and Bhattacharyya [145].
9 STABILITY OF A LINE SEGMENT
In this chapter we develop some results on the stability of a line segment of polynomials. A complete analysis of this problem for both the Hurwitz and Schur cases is given and the results are summarized as the Segment Lemma. We also prove the Vertex Lemma and the Real and Complex Convex Direction Lemmas which give certain useful conditions under which the stability of a line segment of polynomials can be ascertained from the stability of its endpoints. These results are based on some fundamental properties of the phase of Hurwitz polynomials and segments which are also proved.
9.1 Introduction
In the previous chapter, we discussed the stability of a fixed polynomial by using the Boundary Crossing Theorem. In this chapter we focus on the problem of determining the stability of a line segment joining two fixed polynomials which we refer to as the endpoints. This line segment of polynomials is a convex combination of the two endpoints. This kind of problem arises in robust control problems containing a single uncertain parameter, such as a gain or a time constant, when stability of the system must be ascertained for the entire interval of uncertainty. We give some simple solutions to this problem for both the Hurwitz and Schur cases and collectively call these results the Segment Lemma. In general, the stability of the endpoints does not guarantee that of the entire segment of polynomials. For example, consider the segment joining the two polynomials
P1(s) = 3s^4 + 3s^3 + 5s^2 + 2s + 1 and P2(s) = s^4 + s^3 + 5s^2 + 2s + 5.
It can be checked that both P1(s) and P2(s) are Hurwitz stable and yet the polynomial at the midpoint,
(P1(s) + P2(s))/2,
has a root at s = j. However, if the polynomial representing the difference of the endpoints assumes certain special forms, the stability of the endpoints does indeed guarantee stability of the entire segment.
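The counterexample just given is easy to verify numerically (an assumed numpy check, not part of the book):

```python
import numpy as np

p1 = [3.0, 3.0, 5.0, 2.0, 1.0]   # 3s^4 + 3s^3 + 5s^2 + 2s + 1 (high to low)
p2 = [1.0, 1.0, 5.0, 2.0, 5.0]   # s^4 + s^3 + 5s^2 + 2s + 5

# both endpoints are Hurwitz ...
print(np.roots(p1).real.max() < 0, np.roots(p2).real.max() < 0)

# ... yet the midpoint polynomial 2s^4 + 2s^3 + 5s^2 + 2s + 3 has a root at s = j
mid = 0.5 * (np.array(p1) + np.array(p2))
print(np.abs(np.roots(mid) - 1j).min())     # distance to j: numerically zero
```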
9.2 Bounded Phase Conditions
Let S be an open set in the complex plane representing the stability region and let ∂S denote its boundary. Suppose δ1(s) and δ2(s) are polynomials (real or complex) of degree n. Let
δλ(s) := λδ1(s) + (1 − λ)δ2(s)
and consider the following one parameter family of polynomials:
[δ1(s), δ2(s)] := {δλ(s) : λ ∈ [0, 1]}.
This family will be referred to as a segment of polynomials. We shall say that the segment is stable if and only if every polynomial on the segment is stable. This property is also referred to as strong stability of the pair (δ1(s), δ2(s)). We begin with a lemma which follows directly from the Boundary Crossing Theorem. Let φδi(s0) denote the argument of the complex number δi(s0).

LEMMA 9.1 (Bounded Phase Lemma)
Let δ1(s) and δ2(s) be stable with respect to S and assume that the degree of δλ(s) is n for all λ ∈ [0, 1]. Then the following are equivalent:
a) the segment [δ1(s), δ2(s)] is stable with respect to S,
b) δλ(s*) ≠ 0 for all s* ∈ ∂S and λ ∈ [0, 1],
c) |φδ1(s*) − φδ2(s*)| ≠ π radians for all s* ∈ ∂S,
d) the complex plane plot of δ1(s*)/δ2(s*), for s* ∈ ∂S, does not cut the negative real axis.
PROOF The equivalence of a) and b) follows from the Boundary Crossing Theorem. The equivalence of b) and c) is best illustrated geometrically in Figure 9.1. In words this simply states that whenever δλ(s*) = 0 for some λ ∈ [0, 1] the phasors δ1(s*) and δ2(s*) must line up with the origin with their endpoints on opposite sides of it. This is expressed by the condition |φδ1(s*) − φδ2(s*)| = π radians.
The equivalence of b) and d) follows from the fact that if λδ1 (s∗ ) + (1 − λ)δ2 (s∗ ) = 0 then
δ1 (s∗ ) 1−λ = − . δ2 (s∗ ) λ As λ varies from 0 to 1, the right-hand side of the above equation generates the negative real axis. Hence, δλ (s∗ ) = 0 for some λ ∈ [0, 1] if and only if δ1 (s∗ ) is negative and real. δ2 (s∗ )
This Lemma essentially states that the entire segment is stable provided the end points are, the degree remains invariant and the phase difference between the endpoints evaluated along the stability boundary is bounded by π. This condition will be referred to as the Bounded Phase Condition. We illustrate this result with some examples. Example 9.1 (Real Polynomials) Consider the following feedback system shown in Figure 9.2. Suppose that
336
ROBUST PARAMETRIC CONTROL
we want to check the robust Hurwitz stability of the closed loop system for α ∈ [2, 3].
+ −6
-
s+α s3 + 2αs2 + αs − 1
-
Figure 9.2 Feedback system (Example 9.1).
We first examine the stability of the two endpoints of the characteristic polynomial δ(s, α) = s3 + 2αs2 + (α + 1)s + (α − 1). We let δ1 (s) := δ(s, α)|α=2 = s3 + 4s2 + 3s + 1 δ2 (s) := δ(s, α)|α=3 = s3 + 6s2 + 4s + 2 Then δλ (s) = λδ1 (s) + (1 − λ)δ2 (s). We check that the endpoints δ1 (s) and δ2 (s) are stable. Then we verify the bounded phase condition, namely that the phase difference |φδ1 (jω)−φδ2 (jω)| between these endpoints never reaches 180◦ as ω runs from 0 to ∞. Thus, the segment is robustly Hurwitz stable. This is shown in Figure 9.3. The condition d) of Lemma 9.1 can also be easily verified by drawing the polar plot of δ1 (jω)/δ2 (jω).
Example 9.2 (Complex Polynomials) Consider the Hurwitz stability of the line segment joining the two complex polynomials: δ1 (s) = s4 + (8 − j)s3 + (28 − j5)s2 + (50 − j3)s + (33 + j9) δ2 (s) = s4 + (7 + j4)s3 + (46 + j15)s2 + (165 + j168)s + (−19 + j373). We first verify that the two endpoints δ1 (s) and δ2 (s) are stable. Then we plot φδ1 (jω) and φδ2 (jω) with respect to ω (Figure 9.4). As we can see, the Bounded Phase Condition is satisfied, that is the phase difference |φδ1 (jω) − φδ2 (jω)| never reaches 180◦ , so we conclude that the given segment [δ1 (s), δ2 (s)] is stable. We can also use the condition d) of Lemma 9.1. As shown in Figure 9.5, the plot of δ1 (jω)/δ2 (jω) does not cut the negative real axis of the complex plane. Therefore, the segment is stable.
337
STABILITY OF A LINE SEGMENT
300 φδ (jω) 1
250
φδ (jω)−φ (jω)
DEGREE
200
1
φδ (jω)
δ
2
2
150
100
50
0
0
5
10
15 ω
20
25
30
Figure 9.3 Phase difference of the endpoints of a stable segment (Example 9.1).
Figure 9.4 Phase difference vs. ω for a complex segment (Example 9.2).
Figure 9.5 A stable segment: δ1(jω)/δ2(jω) ∩ IR− = ∅ (Example 9.2).
Example 9.3 (Schur Stability)
Let us consider the Schur stability of the segment joining the two polynomials
δ1(z) = z^5 + 0.4z^4 − 0.33z^3 + 0.058z^2 + 0.1266z + 0.059,
δ2(z) = z^5 − 2.59z^4 + 2.8565z^3 − 1.4733z^2 + 0.2236z − 0.0121.
First we verify that the roots of both δ1(z) and δ2(z) lie inside the unit circle. In order to check the stability of the given segment, we simply evaluate the phases of δ1(z) and δ2(z) along the stability boundary, namely the unit circle. Figure 9.6 shows that the phase difference φδ1(e^{jθ}) − φδ2(e^{jθ}) reaches 180° at around θ = 0.81 radians. Therefore, we conclude that there exists an unstable polynomial along the segment.
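The same sweep on the unit circle exposes the crossing in Example 9.3 (an assumed numpy sketch using the coefficient data of the example): the wrapped phase difference climbs to 180° near θ ≈ 0.81, so some interior polynomial of the segment must lose Schur stability.

```python
import numpy as np

d1 = [1.0, 0.4, -0.33, 0.058, 0.1266, 0.059]          # delta_1(z), high to low
d2 = [1.0, -2.59, 2.8565, -1.4733, 0.2236, -0.0121]   # delta_2(z)

# both endpoints are Schur
print(np.abs(np.roots(d1)).max() < 1, np.abs(np.roots(d2)).max() < 1)

theta = np.linspace(0.0, np.pi, 400001)
z = np.exp(1j * theta)
diff = np.abs(np.angle(np.polyval(d1, z) / np.polyval(d2, z)))
print(diff.max() > 3.1)                                # reaches 180 degrees
print(diff[(theta > 0.7) & (theta < 0.9)].max() > 3.1) # near theta* = 0.81
```

By Lemma 9.1 the segment therefore contains a non-Schur polynomial, even though both endpoints are Schur.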
Quasi-polynomial Segments
Lemma 9.1 can be extended to a more general class of segments. In particular, let δ1(s), δ2(s) be quasi-polynomials of the form
δ1(s) = a s^n + Σ_i e^{−sTi} ai(s)
Figure 9.6 An unstable segment (Example 9.3).
δ2(s) = b s^n + Σ_i e^{−sHi} bi(s)   (9.1)
where Ti , Hi ≥ 0, ai (s), bi (s) have degrees less than n and a and b are arbitrary but nonzero and of the same sign. The Hurwitz stability of δ1 (s), δ2 (s) is then equivalent to their roots being in the left-half plane. LEMMA 9.2 Lemma 9.1 holds for Hurwitz stability of the quasi-polynomials of the form specified in (9.1). The proof of this lemma follows from the fact that Lemma 9.1 is equivalent to the Boundary Crossing Theorem which applies to Hurwitz stability of quasipolynomials δi (s) of the form given as established in Chapter 3. Example 9.4 (Quasi-polynomials) Let us consider the Hurwitz stability of the line segment joining the following pair of quasi-polynomials: δ1 (s) = (s2 + 3s + 2) + e−sT1 (s + 1) + e−sT2 (2s + 1)
340
ROBUST PARAMETRIC CONTROL δ2 (s) = (s2 + 5s + 3) + e−sT1 (s + 2) + e−sT2 (2s + 1)
where T1 = 1 and T2 = 2. We first check the stability of the endpoints by examining the frequency plots of δ1∗ (jω) :=
δ1 (jω) (jω + 1)2
and δ2∗ (jω) :=
δ2 (jω) . (jω + 1)2
Using the Principle of the Argument (equivalently, the Nyquist stability criterion) the condition for δ1 (s) (or δ2 (s)) having all its roots in the left-half plane is simply that the plot should not encircle the origin since the denominator term (s + 1)2 does not have right-half plane roots. Figure 9.7 shows that both endpoints δ1 (s) and δ2 (s) are stable.
0.5
0
−0.5
Imag
−1
−1.5 δ∗ (jω) 1
−2 δ∗ (jω)
−2.5
2
−3
−3.5 −1
0
1
2
3
4
5
6
Real
Figure 9.7 Stable quasi-polynomials (Example 9.4).
We generate the polar plot of δ1 (jω)/δ2 (ω) (Figure 9.8). As this plot does not cut the negative real axis the stability of the segment [δ1 (s), δ2 (s)] is guaranteed by condition d) of Lemma 9.1. In the next section we focus specifically on the Hurwitz and Schur cases. We show how the frequency sweeping involved in using these results can always
341
STABILITY OF A LINE SEGMENT
0.8
0.6
Imag
0.4
0.2
0
−0.2
−0.4 −0.2
0
0.2
0.4 Real
0.6
0.8
1
Figure 9.8 Stable segment of quasi-polynomials:
δ1 (jω) δ2 (jω)
∩ IR− = (Example 9.4).
be avoided by isolating and checking only those frequencies where the phase difference can potentially reach 180◦.
9.3 9.3.1
Segment Lemma Hurwitz Case
In this subsection we are interested in strong stability of a line segment of polynomials joining two Hurwitz polynomials. We start by introducing a simple lemma which deals with convex combinations of two real polynomials, and finds the conditions under which one of these convex combinations can have a pure imaginary root. Recall the even-odd decomposition of a real polynomial δ(s) and the notation δ(jω) = δ e (ω) + jωδ o (ω) where δ e (ω) and δ o (ω) are real polynomials in ω 2 .
342
ROBUST PARAMETRIC CONTROL
LEMMA 9.3 Let δ1 (·) and δ2 (·) be two arbitrary real polynomials (not necessarily stable). Then there exists λ ∈ [0, 1] such that (1−λ)δ1 (·)+λδ2 (·) has a pure imaginary root jω, with ω > 0 if and only if e δ1 (ω)δ2o (ω) − δ2e (ω)δ1o (ω) = 0 δ e (ω)δ2e (ω) ≤ 0 1o δ1 (ω)δ2o (ω) ≤ 0 PROOF that
Suppose first that there exists some λ ∈ [0, 1] and ω > 0 such (1 − λ)δ1 (jω) + λδ2 (jω) = 0.
(9.2)
δi (jω) = δieven (jω) + δiodd (jω) for i = 1, 2. = δie (ω) + jωδio (ω),
(9.3)
We can write,
Thus, taking (9.3) and the fact that ω > 0 into account, (9.2) is equivalent to (1 − λ)δ1e (ω) + λδ2e (ω) = 0 (9.4) (1 − λ)δ1o (ω) + λδ2o (ω) = 0. But if (9.4) holds then necessarily δ1e (ω)δ2o (ω) − δ2e (ω)δ1o (ω) = 0,
(9.5)
and since λ and 1 − λ are both nonnegative, (9.4) also implies that δ1e (ω)δ2e (ω) ≤ 0 and δ1o (ω)δ2o (ω) ≤ 0,
(9.6)
and therefore this proves that the condition is necessary. For the converse there are two cases: c1) Suppose that δ1e (ω)δ2o (ω) − δ2e (ω)δ1o (ω) = 0,
δ1e (ω)δ2e (ω) ≤ 0,
δ1o (ω)δ2o (ω) ≤ 0,
for some ω ≥ 0, but that we do not have δ1e (ω) = δ2e (ω) = 0, then λ=
δ1e (ω) e δ1 (ω) − δ2e (ω)
satisfies (9.4), and one can check easily that λ is in [0, 1]. c2) Suppose now that δ1e (ω)δ2o (ω) − δ2e (ω)δ1o (ω) = 0, and δ1e (ω) = δ2e (ω) = 0.
(9.7)
STABILITY OF A LINE SEGMENT
343
Then we are left with, δ1o (ω)δ2o (ω) ≤ 0. Here again, if we do not have δ1o (ω) = δ2o (ω) = 0, then the following value of λ satisfies (9.4) λ=
δ1o (ω) . δ1o (ω) − δ2o (ω)
If δ1o (ω) = δ2o (ω) = 0, then from (9.7) we conclude that both λ = 0 and λ = 1 satisfy (9.4) and this completes the proof. Based on this we may now state the Segment Lemma for the Hurwitz case. LEMMA 9.4 (Hurwitz Segment Lemma) Let δ1 (s), δ2 (s) be real Hurwitz polynomials of degree n with leading coefficients of the same sign. Then the line segment of polynomials [δ1 (s), δ2 (s)] is Hurwitz stable if and only there exists no real ω > 0 such that 1) δ1e (ω)δ2o (ω) − δ2e (ω)δ1o (ω) = 0 2) δ1e (ω)δ2e (ω) ≤ 0 3) δ1o (ω)δ2o (ω) ≤ 0.
(9.8)
PROOF The proof of this result again follows from the Boundary Crossing Theorem. We note that since the two polynomials δ1 (s) and δ2 (s) are of degree n with leading coefficients of the same sign, every polynomial on the segment is of degree n. Moreover, no polynomial on the segment has a real root at s = 0 because in such a case δ1 (0)δ2 (0) ≤ 0, and this along with the assumption on the sign of the leading coefficients, contradicts the assumption that δ1 (s) and δ2 (s) are both Hurwitz. Therefore, an unstable polynomial can occur on the segment if and only if a segment polynomial has a root at s = jω with ω > 0. By the previous lemma this can occur if and only if the conditions (9.8) hold. If we consider the image of the segment [δ1 (s), δ2 (s)] evaluated at s = jω, we see that the conditions (9.8) of the Segment Lemma are the necessary and sufficient condition for the line segment [δ1 (jω), δ2 (jω)] to pass through the origin of the complex plane. This in turn is equivalent to the phase difference condition |φδ1 (jω) − φδ2 (jω)| = 180◦. We illustrate this in Figure 9.9. Example 9.5 Consider the robust Hurwitz stability problem of the feedback system shown in Figure 9.10.
344
ROBUST PARAMETRIC CONTROL δ2 (jω ) ∗
δ2o (ω ∗ )
6
δ1e (ω ∗ )δ2e (ω ∗ ) < 0 δ1o (ω ∗ )δ2o (ω ∗ ) < 0 δ1e (ω ∗ )
δ2e (ω ∗ )
δ1o (ω ∗ ) δ1e (ω ∗ )
=
δ2o (ω ∗ ) δ2e (ω ∗ )
m δ1e (ω ∗ )δ2o (ω ∗ ) − δ2e (ω ∗ )δ1o (ω ∗ ) = 0
δ1o (ω ∗ ) δ1 (jω ∗ )
⇔ |φδ1 (jω ∗ ) − φδ2 (jω ∗ )| = 180◦
Figure 9.9 Segment lemma: geometric interpretation.
+ −6
-
s3
s+α + 2αs2 + αs − 1
-
Figure 9.10 Feedback system (Example 9.5).
The characteristic polynomial is: δ(s, α) = s3 + 2αs2 + (α + 1)s + (α − 1). We have already verified the stability of the endpoints δ1 (s) := δ(s, α)|α=2 = s3 + 4s2 + 3s + 1 δ2 (s) := δ(s, α)|α=3 = s3 + 6s2 + 4s + 2. Robust stability of the system is equivalent to that of the segment δλ (s) = λδ1 (s) + (1 − λ)δ2 (s), To apply the Segment Lemma we compute the real positive roots of the polynomial δ1e (ω)δ2o (ω) − δ2e (ω)δ1o (ω) = (−4ω 2 + 1)(−ω 2 + 4) − (−6ω 2 + 2)(−ω 2 + 3) = 0.
345
STABILITY OF A LINE SEGMENT
This equation has no real root in ω and thus there is no jω root on the line segment. Thus, from the Segment Lemma, the segment [δ1 (s), δ2 (s)] is stable and the closed loop system is robustly stable.
20
15
10
Imag
5
0
−5
−10
−15
−20 −20
−15
−10
−5
0 Real
5
10
15
20
Figure 9.11 Image set of a stable segment (Example 9.5).
Although frequency sweeping is unnecessary we have plotted in Figure 9.11 the image set of the line segment. As expected this plot avoids the origin of the complex plane for all ω.
9.4
Schur Segment Lemma via Tchebyshev Representation
In this section, we use the Tchebyshev representation (see Section 4.3 in Chapter 4) to develop a segment lemma for Schur stability. Let P1 (z), P2 (z) be two real Schur stable polynomials of degree n. The question of interest is: Is
346
ROBUST PARAMETRIC CONTROL
the line segment of polynomials S joining P1 (z) and P2 (z) Schur stable? By the Boundary Crossing Theorem, S is not Schur if and only if some polynomial in S has a unit circle root. To test this possibility, set z = ejθ and let λP1 ejθ + (1 − λ)P2 ejθ = 0 (9.9) for some θ ∈ [0, π]. Using the notation
PiC (u) = Ri (u) + j
p 1 − u2 Ti (u), i = 1, 2
for the Tchebyshev representations of Pi (z), i = 1, 2 we can write (9.9) as h i h i p p λ R1 (u) + j 1 − u2 T1 (u) + (1 − λ) R2 (u) + j 1 − u2 T2 (u) = 0, for u ∈ [−1, 1] or separating real and imaginary parts,
λR1 (u) + (1 − λ)R2 (u) = 0 p p 2 λ 1 − u T1 (u) + (1 − λ) 1 − u2 T2 (u) = 0.
(9.10) (9.11)
λR1 (+1) + (1 − λ)R2 (+1) = 0
(9.12)
λR1 (−1) + (1 − λ)R2 (−1) = 0.
(9.13)
Since (9.11) holds at u = ±1 regardless of λ, we have, for roots to occur at z = −1 or z = +1:
Then (9.12) has a solution with λ ∈ (0, 1) if and only if R1 (+1)R2 (+1) ≤ 0
(9.14)
and (9.13) has a solution with λ ∈ (0, 1) if and only if R1 (−1)R2 (−1) ≤ 0.
(9.15)
We can replace the inequalities in (9.14) and (9.15) by strict inequalities because Ri (±1) 6= 0 for i = 1, 2 since Pi (z) are Schur. For u ∈ (−1, 1), we can write (9.10) and (9.11) as λR1 (u) + (1 − λ)R2 (u) = 0, λT1 (u) + (1 − λ)T2 (u) = 0,
λ ∈ [0, 1] u ∈ (−1, 1).
(9.16) (9.17)
A solution to (9.16) and (9.17) exists if and only if R1 (u)T2 (u) − R2 (u)T1 (u) = 0
(9.18)
has at least one solution u∗ ∈ (−1, 1) such that sgn [R1 (u∗ )] sgn [R2 (u∗ )] < 0.
(9.19)
STABILITY OF A LINE SEGMENT
347
To summarize, we have the following lemma. LEMMA 9.5 The segment S is Schur stable if and only if (a) R1 (+1)R2 (+1) > 0, (b) R1 (−1)R2 (−1) > 0, (c) for every real u ∈ (−1, 1) such that R1 (u)T2 (u) − R2 (u)T1 (u) = 0 sgn [R1 (u)] sgn [R2 (u)] > 0. Example 9.6 Consider the following two Schur stable polynomials: P1 (z) = z 4 + 1.3z 3 + 0.24z 2 − 0.298z − 0.136 P2 (z) = z 4 − 0.7z 3 + 0.42z 2 + 0.32z − 0.24. Then their respective Tchebyshev representations are as follows: R1 (u) = 8u4 − 5.2u3 − 7.52u2 + 4.198u + 0.624 T1 (u) = −8u3 + 5.2u2 + 3.52u − 1.598 and R2 (u) = 8u4 + 2.8u3 − 7.1600u2 − 2.42u + 0.3400 T2 (u) = −8u3 − 2.8u2 + 3.16u + 1.02. Using Lemma 9.5, we check the following conditions. (a) R1 (+1)R2 (+1) = 0.1591 > 0 (b) R1 (−1)R2 (−1) = 1.6848 > 0 (c) R1 (u)T2 (u) − R2 (u)T1 (u) = −0.832u3 − 0.8432u2 + 1.1898u + 1.1798. The roots of this polynomial are 1.1899, −1.25, and −0.9534. Since there is only one root in u ∈ (−1, 1), we evaluate sgn [R1 (−0.9534)] sgn [R2 (−0.9534)] = (+1)(+1) > 0 Since all the conditions in Lemma 9.5 are satisfied, the segment joining P1 (z) and P2 (z) is Schur stable. REMARK 9.1 It is easy to see that Schur stability is lost along the segment if the end point polynomials are of differing degrees or of equal degree with leading coefficients of opposite sign.
348
9.5
ROBUST PARAMETRIC CONTROL
Some Fundamental Phase Relations
In this section, we develop some auxiliary results that will aid us in establishing the Convex Direction Lemma and the Vertex Lemma, which deal with conditions under which vertex stability implies segment stability. The above results depend heavily on some fundamental formulas for the rate of change of phase with respect to frequency for fixed Hurwitz polynomials and for a segment of polynomials. These are derived in this section.
9.5.1
Phase Properties of Hurwitz Polynomials
Let δ(s) be a real or complex polynomial and write δ(jω) = p(ω) + jq(ω)
(9.20)
where p(ω) and q(ω) are real functions. Also let X(ω) := and ϕδ (ω) := tan−1
q(ω) p(ω)
q(ω) = tan−1 X(ω). p(ω)
(9.21)
(9.22)
Let Im[x] and Re[x] denote the imaginary and real parts of the complex number x. LEMMA 9.6 If δ(s) is a real or complex Hurwitz polynomial dX(ω) > 0, dω
for all ω ∈ [−∞, +∞].
(9.23)
Equivalently Im
PROOF property
1 dδ(jω) >0 δ(jω) dω
for all ω ∈ [−∞, +∞].
(9.24)
A Hurwitz polynomial satisfies the monotonic phase increase dϕδ (ω) 1 dX(ω) = > 0 for all ω ∈ [−∞, +∞] dω 1 + X 2 (ω) dω
and this implies dX(ω) > 0 for all ω ∈ [−∞, +∞]. dω
349
STABILITY OF A LINE SEGMENT The formula
dϕδ (ω) 1 dδ(jω) = Im dω δ(jω) dω
(9.25)
follows from the relations
1 dq(ω) 1 dδ(jω) dp(ω) = +j (9.26) δ(jω) dω p(ω) + jq(ω) dω dω h i h i dq(ω) dq(ω) dp(ω) p(ω) dp(ω) + q(ω) + j p(ω) − q(ω) dω dω dω dω = p2 (ω) + q 2 (ω) and dϕδ (ω) = dω
h i dp(ω) p(ω) dq(ω) − q(ω) dω dω p2 (ω) + q 2 (ω)
.
(9.27)
We shall see later that inequality (9.23) can be strengthened when δ(s) is a real Hurwitz polynomial. Now let δ(s) be a real polynomial of degree n. We write: δ(s) = δ even (s) + δ odd (s) = h(s2 ) + sg(s2 )
(9.28)
where h and g are real polynomials in s2 . Then δ(jω) = h(−ω 2 ) + jωg(−ω 2 ) = ρδ (ω)ejϕδ (ω) .
(9.29)
We associate with the real polynomial δ(s) the two auxiliary even degree complex polynomials δ(s) := h(s2 ) + jg(s2 ) (9.30) and
¯ := h(s2 ) − js2 g(s2 ) δ(s)
(9.31)
¯ δ(s) is antiand write formulas analogous to (9.29) for δ(jω) and δ(jω). Hurwitz if and only if all its zeros lie in the open right-half plane (Re[s] > 0). Let t = s2 be a new complex variable. LEMMA 9.7 Consider h(t) + jg(t) = 0
(9.32)
h(t) − jtg(t) = 0
(9.33)
and as equations in the complex variable t. If δ(s) = h(s2 ) + sg(s2 ) is Hurwitz and degree δ(s) ≥ 2 each of these equations has all its roots in the lower-half of
350
ROBUST PARAMETRIC CONTROL
the complex plane (Im[t] ≤ 0). When degree [δ(s)] = 1 (9.33) has all its roots in Im[t] ≤ 0. PROOF The statement regarding the case when degree[δ(s)] = 1 can be directly checked. We therefore proceed with the assumption that degree[δ(s)] > 1. Let δ(s) = a0 + a1 s + · · · + an sn . (9.34) Since δ(s) is Hurwitz we can assume without loss of generality that ai > 0, i = 0, · · · , n. We have 2
h(−ω 2 ) = a0 + a2 (−ω 2 ) + a4 (−ω 2 ) + · · · h i 2 ωg(−ω 2 ) = ω a1 + a3 (−ω 2 ) + a5 (−ω 2 ) + · · · .
(9.35)
As s runs from 0 to +j∞, −ω 2 runs from 0 to −∞. We first recall the Hermite-Biehler Theorem of Chapter 8. According to this Theorem if δ(s) is Hurwitz stable, all the roots of the two equations h(t) = 0
and g(t) = 0
(9.36)
are distinct, real and negative. Furthermore, the interlacing property holds and the maximum of the roots is one of h(t) = 0. For the rest of the proof we will assume that δ(s) is of odd degree. A similar proof will hold for the case that δ(s) is of even degree. Let degree[δ(s)] = 2m + 1 with m ≥ 1. Note that the solutions of h(t) + jg(t) = 0 (9.37) are identical with the solutions of g(t) = j. h(t)
(9.38)
Let us denote the roots of h(t) = 0 by λ1 , λ2 , · · ·, λm where λ1 < λ2 < · · · < λm . The sign of h(t) changes alternately in each interval ]λi , λi+1 [, (i = 1, · · ·, m − 1). g(t) If h(t) is expressed by partial fractions as g(t) c1 c2 cm = + + ···+ , h(t) t − λ1 t − λ2 t − λm
(9.39)
then each ci , i = 1, · · · , m should be positive. This is because when t = −ω 2 g(t) passes increasingly (from left to right) through λi , the sign of h(t) changes from − to +. This follows from the fact that g(t) has just one root in each interval and a0 > 0, a1 > 0. If we suppose Im[t] ≥ 0, (9.40)
351
STABILITY OF A LINE SEGMENT then Im
ci ≤0 t − λi
i = 1, · · · , m
and consequently we obtain X g(t) ci Im = Im ≤ 0. h(t) t − λi
(9.41)
(9.42)
1≤i≤m
Such a t cannot satisfy the relation in (9.38). This implies that the equation h(t) + jg(t) = 0
(9.43)
in t has all its roots in the lower-half of the complex plane Im[t] ≤ 0. We can treat [h(t) − jtg(t)] similarly. This lemma leads to a key monotonic phase property. LEMMA 9.8 If δ(s) is Hurwitz and of degree ≥ 2 then dϕδ >0 dω
(9.44)
dϕδ¯ > 0. dω
(9.45)
and
PROOF jg(−ω 2 )
From the previous lemma we have by factorizing h(−ω 2 ) + h(−ω 2 ) + jg(−ω 2 ) = an (−ω 2 − α1 ) · · · (−ω 2 − αm )
(9.46)
with some α1 , α2 , · · · , αm whose imaginary parts are negative. Now m X arg h(−ω 2 ) + jg(−ω 2 ) = arg(−ω 2 − αi ).
(9.47)
i=1
When (−ω 2 ) runs from 0 to (−∞), each component of the form arg(−ω2 − αi ) is monotonically increasing. Consequently arg h(−ω 2 ) + jg(−ω 2 ) is monotonically increasing as (−ω 2 ) runs from 0 to (−∞). In other words, 2 2 arg h(s ) + jg(s ) is monotonically increasing as s(= jω) runs from 0 to j∞. This proves (9.44); (9.45) is proved in like manner. The dual result is given without proof.
352
ROBUST PARAMETRIC CONTROL
LEMMA 9.9 If δ(s) is anti-Hurwitz
and
dϕδ 0.
(9.51)
In (9.50) and (9.51) equality holds only when degree [δ(s)] = 1. PROOF from
The equivalence of the two conditions (9.50) and (9.51) follows dϕδ (ω) 1 dX(ω) = dω 1 + X 2 (ω) dω
and X(ω) 1 1 X(ω) = 1 + X 2 (ω) ω 1 + X 2 (ω) ω h2 −ω 2 g −ω 2 = 2 h (−ω 2 ) + ω 2 g 2 (−ω 2 ) h (−ω 2 )
STABILITY OF A LINE SEGMENT 2 1 = cos (ϕδ (ω)) tan (ϕδ (ω)) ω 1 = cos (ϕδ (ω)) sin (ϕδ (ω)) ω sin (2ϕδ (ω)) . = 2ω
353
We now prove (9.51). The fact that equality holds in (9.51) in the case where δ(s) has degree equal to 1 can be easily verified directly. We therefore proceed with the assumption that degree[δ(s)] ≥ 2. From Lemma 9.6, we know that dX(ω) >0 dω so that d(h(−ω2 )) −ω 2 − ωg −ω 2 dω h2 (−ω 2 ) g −ω 2 h −ω 2 + ω g˙ −ω 2 h −ω 2 − ω h˙ −ω 2 g −ω 2 = h2 (−ω 2 ) 2 2 g˙ −ω h −ω 2 − h˙ −ω 2 g −ω 2 g −ω +ω > 0. = h (−ω 2 ) h2 (−ω 2 ) {z } | {z } |
dX(ω) = dω
d(ωg(−ω 2 )) h dω
X(ω) ω
d dω
„
g(−ω2 )
h(−ω2 )
«
From Lemma 9.8 we have dϕδ > 0 and dω
dϕδ¯ >0 dω
where δ(s) = h s2 + jg s2
¯ = h s2 − js2 g s2 . and δ(s)
First consider
dϕδ = dω Since
1
1+
d 2 2 dω g(−ω ) h(−ω 2 )
1 1+
we have
d dω
g(−ω 2 ) h(−ω 2 )
g(−ω 2 ) h(−ω 2 )
2 > 0
g(−ω 2 ) h(−ω 2 )
> 0.
> 0.
354
ROBUST PARAMETRIC CONTROL
Thus, for ω > 0 we have X(ω) d dX(ω) = +ω dω ω dω
! g −ω 2 X(ω) > . 2 h (−ω ) ω
(9.52)
Now consider dϕδ¯ = dω
1
1+
Here, we have d dω
2
2
ω g −ω h (−ω 2 )
!
d 2 2 2 dω ω g(−ω )
ω 2 g −ω 2 h (−ω 2 )
h(−ω 2 )
!
> 0.
h i 2 2 2 2 ˙ h −ω ω h −ω g ˙ −ω − g −ω g −ω = ω 2 + h (−ω 2 ) h2 (−ω 2 ) " !# g −ω 2 X(ω) d =ω 2 +ω > 0. ω dω h (−ω 2 )
2
With ω > 0, it follows that X(ω) d 2 +ω ω dω and therefore
! g −ω 2 >0 h (−ω 2 )
! g −ω 2 X(ω) X(ω) d >− +ω . ω dω h (−ω 2 ) ω {z } |
(9.53)
dX(ω) dω
Combining (9.52) and (9.53), we have, when degree[δ(s)] ≥ 2, dX(ω) X(ω) > , for all ω > 0. dω ω A useful technical result can be derived from the above Theorem. LEMMA 9.10 Let ω0 > 0 and the constraint ϕδ (ω0 ) = θ
(9.54)
be given. The infimum value of dϕδ (ω) dω ω=ω0
(9.55)
355
STABILITY OF A LINE SEGMENT
taken over all real Hurwitz polynomials δ(s) of a prescribed degree satisfying (9.54) is given by sin (2θ) (9.56) 2ω0 . The infimum is actually attained only when 0 < θ < one, by polynomials of the form
π 2
and δ(s) is of degree
δ(s) = K(s tan θ + ω0 ).
(9.57)
For polynomials of degree (> 1) the infimum can be approximated to arbitrary accuracy. PROOF The lower bound on the infimum given in (9.56) is an immediate consequence of (9.50) in Theorem 9.1. The fact that (9.57) attains the bound (9.56) can be checked directly. To prove that the infimum can be approximated to arbitrary accuracy it suffices to construct a sequence of Hurwitz polynomials δk (s) of prescribed degree n each satisfying (9.54) and such that sin (2θ) dϕδk (ω) . lim = (9.58) k→∞ dω ω=ω0 2ω0 For example when 0 < θ < π2 we can take 1 δk (s) = K s tan θ + ω0 + (ǫk s + 1)n−1 , k
k = 1, 2, . . . .
(9.59)
where ǫk > 0 is adjusted to satisfy the constraint (9.54) for each k: ϕδk (ω0 ) = θ.
(9.60)
It is easy to see that ǫk → 0 and (9.58) holds. A similar construction can be carried out for other values of θ provided that the degree n is large enough that the the constraint (9.60) can be satisfied; in particular n ≥ 4 is always sufficient for arbitrary θ. REMARK 9.2 In the case of complex Hurwitz polynomials the lower bound on the rate of change of phase with respect to ω is zero and the corresponding statement is that dϕδk (ω) (9.61) dω ω=ω0 can be made as small as desired by choosing complex Hurwitz polynomials δk (s) satisfying the constraint (9.60).
These technical results are useful in certain constructions related to convex directions.
356
9.5.2
ROBUST PARAMETRIC CONTROL
Phase Relations for a Segment
Consider now a line segment λδ1 (s) + (1 − λ)δ2 (s), λ ∈ [0, 1] generated by the two real polynomials δ1 (s) and δ2 (s) of degree n with leading coefficients and constant coefficients of the same sign. The necessary and sufficient condition for a polynomial in the interior of this segment to acquire a root at s = jω0 is that λ0 δ1e (ω0 ) + (1 − λ0 )δ2e (ω0 ) = 0 λ0 δ1o (ω0 ) + (1 − λ0 )δ2o (ω0 ) = 0
(9.62)
for some λ0 ∈ (0, 1). Since the segment is real and the constant coefficients are of the same sign it is sufficient to verify the above relations for ω0 > 0. Therefore, the above equations are equivalent to λ0 δ1e (ω0 ) + (1 − λ0 )δ2e (ω0 ) = 0 λ0 ω0 δ1o (ω0 )
+ (1 −
λ0 )ω0 δ2o (ω0 )
= 0.
(9.63) (9.64)
and also to λ0 δ1e (ω0 ) + (1 − λ0 )δ2e (ω0 ) = 0 λ0 ω02 δ1o (ω0 ) + (1 − λ0 )ω02 δ2o (ω0 ) = 0
(9.65)
since ω0 > 0. Noting that δ 1 (jω) = δ1e (ω) + jδ1o (ω) = ρδ1 (ω)ejϕδ1 (ω) δ 2 (jω) = δ2e (ω) + jδ2o (ω) = ρδ2 (ω)ejϕδ2 (ω) δ¯1 (jω) = δ1e (ω) + jω 2 δ1o (ω) = ϕδ¯1 (ω)ejϕδ¯1 (ω) δ¯2 (jω) = δ e (ω) + jω 2 δ o (ω) = ρ ¯ (ω)ejϕδ¯2 (ω) 2
2
(9.66)
δ2
we can write (9.62), (9.64), and (9.65), respectively in the equivalent forms
and
λ0 δ1 (jω0 ) + (1 − λ0 )δ2 (jω0 ) = 0
(9.67)
λ0 δ 1 (jω0 ) + (1 − λ0 )δ 2 (jω0 ) = 0
(9.68)
λ0 δ¯1 (jω0 ) + (1 − λ0 )δ¯2 (jω0 ) = 0.
(9.69)
Now let δ1 (jω) = δ1e (ω) + jωδ1o (ω) = ρδ1 (ω)ejϕδ1 (ω) δ2 (jω) = δ2e (ω) + jωδ2o (ω) = ρδ2 (ω)ejϕδ2 (ω)
(9.70)
δ0 (s) := δ1 (s) − δ2 (s)
(9.71)
δ 0 (jω) := δ 1 (jω) − δ 2 (jω) δ¯0 (jω) := δ¯1 (jω) − δ¯2 (jω).
(9.72)
357
STABILITY OF A LINE SEGMENT We now state a key technical lemma.
LEMMA 9.11 Let δ1 (s), δ2 (s) be real polynomials of degree n with leading coefficients of the same sign and assume that λ0 ∈ (0, 1) and ω0 > 0 satisfy (9.65)–(9.69). Then dϕδ0 dϕδ1 dϕδ2 = λ0 + (1 − λ0 ) (9.73) dω ω=ω0 dω ω=ω0 dω ω=ω0 dϕδ 2 dϕδ 1 dϕδ 0 = λ + (1 (9.74) − λ ) 0 0 dω ω=ω0 dω ω=ω0 dω ω=ω0
and
PROOF
dϕδ¯0 dϕδ¯2 dϕδ¯1 = λ0 + (1 − λ0 ) . dω ω=ω0 dω ω=ω0 dω ω=ω0
We prove only (9.73) in detail. If
δ(jω) = p(ω) + jq(ω) then tan ϕδ (ω) = Let q(ω) ˙ :=
dq(ω) dω
q(ω) . p(ω)
(9.76)
(9.77)
and differentiate (9.77) with respect to ω to get
(1 + tan2 ϕδ (ω)) and
(9.75)
p(ω)q(ω) ˙ − q(ω)p(ω) ˙ dϕδ = 2 dω p (ω)
dϕδ p(ω)q(ω) ˙ − q(ω)p(ω) ˙ = . dω p2 (ω) + q 2 (ω)
(9.78)
(9.79)
We apply the formula in (9.79) to δ0 (jω) = (p1 (ω) − p2 (ω)) + j(q1 (ω) − q2 (ω))
(9.80)
(p1 − p2 )(q˙1 − q˙2 ) − (q1 − q2 )(p˙1 − p˙ 2 ) dϕδ0 = . dω (p1 − p2 )2 + (q1 − q2 )2
(9.81)
λ0 p1 (ω0 ) + (1 − λ0 )p2 (ω0 ) = 0
(9.82)
λ0 q1 (ω0 ) + (1 − λ0 )q2 (ω0 ) = 0.
(9.83)
to get
Using (9.67) and Since δ1 (s) and δ2 (s) are Hurwitz λ0 6= 0, and λ0 6= 1, so that p1 (ω0 ) − p2 (ω0 ) = −
p2 (ω0 ) p1 (ω0 ) = λ0 1 − λ0
(9.84)
358
ROBUST PARAMETRIC CONTROL
and
q2 (ω0 ) q1 (ω0 ) = . λ0 1 − λ0 Substituting these relations in (9.81) we have 1 1 1 1 dϕδ0 1−λ0 p1 q˙1 + λ0 p2 q˙2 − 1−λ0 q1 p˙ 1 − λ0 q2 p˙ 2 = 2 2 dω ω=ω0 (p1 − p2 ) + (q1 − q2 ) ω=ω0 1 1 (p1 q˙1 − q1 p˙1 ) λ0 (p2 q˙2 − q2 p˙ 2 ) + = 1−λ0 p2 +q2 2 +q 2 p 1 1 2 2 q1 (ω0 ) − q2 (ω0 ) = −
ω=ω0
(1−λ0 )2
p1 q˙1 − q1 p˙ 1 = (1 − λ0 ) p2 + q 2 1
1
ω=ω0
λ20
ω=ω0
p2 q˙2 − q2 p˙ 2 + λ0 p2 + q 2 2
(9.85)
2
ω=ω0
dϕδ2 dϕδ1 = (1 − λ0 ) + λ . 0 dω ω=ω0 dω ω=ω0
(9.86)
This proves (9.73). The proofs of (9.74) and (9.75) are identical starting from (9.68) and (9.69), respectively. The relation (9.73) holds for complex segments also. Suppose that δi (s), i = 1, 2 are complex Hurwitz polynomials of degree n and consider the complex segment δ2 (s) + λδ0 (s), λ ∈ [0, 1] with δ0 (s) = δ1 (s) − δ2 (s). The condition for a polynomial in the interior of this segment to have a root at s = jω0 is δ2 (jω0 ) + λ0 δ0 (jω0 ) = 0,
λ0 ∈ (0, 1).
(9.87)
It is straightforward to derive from the above, just as in the real case that dϕδ2 dϕδ1 dϕδ0 = λ0 + (1 − λ0 ) . (9.88) dω ω=ω0 dω ω=ω0 dω ω=ω0
The relationship (9.88) can be stated in terms of Xi (ω) := tan ϕδi (ω),
i = 0, 1, 2.
(9.89)
Using the fact that dϕδi (ω) 1 dXi (ω) = , dω (1 + Xi2 (ω)) dω
i = 0, 1, 2
(9.90)
(9.88) can be written in the equivalent form 1 dX0 (ω) = (1 + X02 (ω)) dω ω=ω0 1 1 dX2 (ω) dX1 (ω) λ0 + (1 − λ0 ) . (1 + X22 (ω)) dω ω=ω0 (1 + X12 (ω)) dω ω=ω0
(9.91)
STABILITY OF A LINE SEGMENT
359
Geometric reasoning (the image set of the segment at s = jω0 passes through the origin) shows that |X0 (ω)|ω=ω0 = |X1 (ω)|ω=ω0 = |X2 (ω)|ω=ω0 .
(9.92)
Using (9.92) in (9.91) we obtain the following result. LEMMA 9.12 Let [λδ1 (s) + (1 − λ)δ2 (s)], λ ∈ [0, 1] be a real or complex segment of polynomials. If a polynomial in the interior of this segment, corresponding to λ = λ0 has a root at s = jω0 then dX2 (ω) dX0 (ω) dX1 (ω) = λ0 + (1 − λ0 ) . (9.93) dω ω=ω0 dω ω=ω0 dω ω=ω0
These auxiliary results will help us to establish the Convex Direction and Vertex Lemmas in the following sections.
9.6
Convex Directions
It turns out that it is possible to give necessary and sufficient conditions on δ0 (s) under which strong stability of the pair (δ2 (s), δ0 (s) + δ2 (s)) will hold for every δ2 (s) and δ0 (s) + δ2 (s) that are Hurwitz. This is accomplished using the notion of convex directions. As before let δ1 (s) and δ2 (s) be polynomials of degree n. Write δ0 (s) := δ1 (s) − δ2 (s) and let δλ (s) = λδ1 (s) + (1 − λ)δ2 (s) = δ2 (s) + λδ0 (s)
(9.94)
and let us assume that the degree of every polynomial on the segment {δλ (s) : λ ∈ [0, 1]} is n. Now, the problem of interest is: Give necessary and sufficient conditions on δ0 (s) under which stability of the segment in (9.94) is guaranteed whenever the endpoints are Hurwitz stable? A polynomial δ0 (s) satisfying the above property is called a convex direction. There are two distinct results on convex directions corresponding to the real and complex cases. We begin with the complex case. Complex Convex Directions In the complex case we have the following result.
360
ROBUST PARAMETRIC CONTROL
LEMMA 9.13 (Complex Convex Direction Lemma) Let δλ (s) : λ ∈ [0, 1] be a complex segment of polynomials of degree n defined as in (9.94). The complex polynomial δ0 (s) is a convex direction if and only if dϕδ0 (ω) ≤0 (9.95) dω for every frequency ω ∈ IR such that δ0 (jω) 6= 0. Equivalently dX0 (ω) ≤0 dω
(9.96)
for every frequency ω ∈ IR such that δ0 (jω) 6= 0. PROOF The equivalence of the two conditions (9.95) and (9.96) is obvious. Suppose now that (9.96) is true. In the first place if ω0 is such that δ0 (jω0 ) = 0, it follows that δ2 (jω0 ) + λ0 δ0 (jω0 ) 6= 0 for any real λ0 ∈ [0, 1] as this would contradict the fact that δ2 (s) is Hurwitz. Now from Lemma 9.12, we see that the segment has a polynomial with a root at s = jω0 only if dX2 (ω) dX0 (ω) dX1 (ω) = λ + (1 . (9.97) − λ ) 0 0 dω ω=ω0 dω ω=ω0 dω ω=ω0
Since δ1 (s) and δ2 (s) are Hurwitz it follows from Lemma 9.6 that dXi (ω) > 0, dω
ω ∈ IR;
i = 1, 2.
(9.98)
and λ0 ∈ (0, 1). Therefore, the right-hand side of (9.97) is strictly positive whereas the left-hand side is nonpositive by hypothesis. This proves that there cannot exist any ω0 ∈ IR for which (9.97) holds. The stability of the segment follows from the Boundary Crossing Theorem (Chapter 8). The proof of necessity is based on showing that if the condition (9.95) fails to hold it is possible to construct a Hurwitz polynomial p2 (s) such that the endpoint p1 (s) = p2 (s)+ δ0 (s) is Hurwitz stable and the segment joining them is of constant degree but contains unstable polynomials. The proof is omitted as it is similar to the real case, which is proved in detail in the next lemma. It suffices to mention that when ω = ω ∗ is such that dϕδ0 (ω) >0 (9.99) dω ω=ω∗
we can take
p1 (s) = (s − jω ∗ )t(s) + µδ0 (s)
(9.100)
p2 (s) = (s − jω ∗ )t(s) − µδ0 (s)
(9.101)
STABILITY OF A LINE SEGMENT
361
where t(s) is chosen to be a complex Hurwitz polynomial of degree greater than the degree of δ0 (s), and satisfying the conditions: |X0 (ω ∗ )| = |Xt (ω ∗ )| dϕδ0 (ω) dϕt (ω) > > 0. dω ω=ω∗ dω ω=ω∗
(9.102)
The existence of such t(s) is clear from Remark 9.2 following Lemma 9.10. The proof is completed by noting that pi (s) are Hurwitz, the segment joining p1 (s) and p2 (s) is of constant degree (for small enough |µ|), but the segment polynomial 12 (p1 (s) + p2 (s)) has s = jω ∗ as a root. Real Convex Directions The following Lemma gives the necessary and sufficient condition for δ0 (s) to be a convex direction in the real case. LEMMA 9.14 (Real Convex Direction Lemma) Consider the real segment {δλ (s) : λ ∈ [0, 1]} of degree n. The real polynomial δ0 (s) is a convex direction if and only if dϕδ0 (ω) sin (2ϕδ0 (ω)) (9.103) ≤ dω 2ω
is satisfied for every frequency ω > 0 such that δ0 (jω) 6= 0. Equivalently dX0 (ω) X0 (ω) (9.104) ≤ dω ω for every frequency ω > 0 such that δ0 (jω) 6= 0.
PROOF The equivalence of the conditions (9.104) and (9.103) has already been shown (see the proof of Theorem 9.1) and so it suffices to prove (9.104). If degree δi (s) = 1 for i = 1, 2, degree δ0 (s) ≤ 1 and (9.104) holds. In this case it is straightforward to verify from the requirements that the degree along the segment is 1 and δi (s), i = 1, 2 are Hurwitz, that no polynomial on the segment has a root at s = 0. Hence, such a segment is stable by the Boundary Crossing Theorem. We assume henceforth that degree[δi (s)] ≥ 2 for i = 1, 2. In general the assumption of invariant degree along the segment (the leading coefficients of δi (s) are of the same sign) along with the requirement that δi (s), i = 1, 2 are Hurwitz imply that the constant coefficients of δi (s), i = 1, 2 are also of the same sign. This rules out the possibility of any polynomial in the segment having a root at s = 0. Now if ω0 > 0 is such that δ0 (jω0 ) = 0, and δ2 (jω0 ) + λ0 δ0 (jω0 ) = 0 for some real λ0 ∈ (0, 1) it follows that δ2 (jω0 ) = 0. However, this would
362
ROBUST PARAMETRIC CONTROL
contradict the fact that δ2 (s) is Hurwitz. Thus such a jω0 also cannot be a root of any polynomial on the segment. To proceed let us first consider the case where δ0 (s) = as + b with b 6= 0. Here (9.104) is again seen to hold. From Lemma 9.11 it follows that s = jω0 is a root of a polynomial on the segment only if for some λ0 ∈ (0, 1) dϕδ 0 dϕδ 1 dϕδ 2 − λ ) = λ + (1 . (9.105) 0 0 dω ω=ω0 dω ω=ω0 dω ω=ω0 In the present case we have
a b
(9.106)
and therefore the left-hand side of (9.105) dϕδ 0 = 0. dω ω=ω0
(9.107)
so that the right-hand side of (9.105) dϕδ 2 dϕδ 1 + (1 − λ0 ) > 0. λ0 dω ω=ω0 dω ω=ω0
(9.109)
tan ϕδ 0 (ω) =
Since δi (s), i = 1, 2 are Hurwitz and of degree ≥ 2 we have from Lemma 9.8 that dϕδi > 0 (9.108) dω ω=ω0
This contradiction shows that such a jω0 cannot be a root of any polynomial on the segment, which must therefore be stable. We now consider the general case where degree[δ0 (s)] ≥ 2 or δ0 (s) = as. From Lemma 9.12 we see that the segment has a polynomial with a root at s = jω0 only if dX0 (ω) dX2 (ω) dX1 (ω) = λ0 + (1 − λ0 ) . (9.110) dω ω=ω0 dω ω=ω0 dω ω=ω0 Since δ1 (s) and δ2 (s) are Hurwitz it follows from Theorem 9.1 that we have dXi (ω) Xi (ω) > , ω > 0; i = 1, 2 (9.111) dω ω and λ0 ∈ (0, 1). Furthermore, we have
|X0 (ω)|ω=ω0 = |X1 (ω)|ω=ω0 = |X2 (ω)|ω=ω0
(9.112)
so that the right-hand side of (9.110) satisfies dX1 (ω) dX2 (ω) λ0 + (1 − λ0 ) dω ω=ω0 dω ω=ω0
(9.113)
STABILITY OF A LINE SEGMENT X1 (ω) X2 (ω) X0 (ω0 ) . > λ0 + (1 − λ0 ) = ω ω=ω0 ω ω=ω0 ω0 On the other hand the left-hand side of (9.110) satisfies X0 (ω0 ) dX0 (ω) . ≤ dω ω=ω0 ω0
363 (9.114)
(9.115)
This contradiction proves that there cannot exist any ω0 ∈ IR for which (9.110) holds. Thus, no polynomial on the segment has a root at s = jω0 , ω0 > 0 and the stability of the entire segment follows from the Boundary Crossing Theorem of Chapter 8. This completes the proof of sufficiency. The proof of necessity requires us to show that if the condition (9.103) fails there exists a Hurwitz polynomial r2 (s) such that r1 (s) = r2 (s) + δ0 (s) is also Hurwitz stable but the segment joining them is not. Suppose then that δ0 (s) is a given polynomial of degree n and ω ∗ > 0 is such that δ0 (jω ∗ ) 6= 0 but dϕδ0 (ω) sin (2ϕδ0 (ω)) (9.116) > dω 2ω
for some ω ∗ > 0. It is then possible to construct a real Hurwitz polynomial t(s) of degree ≥ n − 2 such that 2
r1 (s) := (s2 + ω ∗ )t(s) + µδ0 (s)
(9.117)
and 2
r2 (s) := (s2 + ω ∗ )t(s) − µδ0 (s)
(9.118)
are Hurwitz and have leading coefficients of the same sign for sufficiently small |µ|. It suffices to choose t(s) so that sin 2ϕδ0 (ω ∗ ) sin 2ϕt (ω ∗ ) dϕδ0 dϕt . > > (9.119) = dω ω=ω∗ dω ω=ω∗ 2ω ∗ 2ω ∗
The fact that such t(s) exists is guaranteed by Lemma 9.10. It remains to prove that ri (s), i = 1, 2 can be made Hurwitz stable by choice of µ. For sufficiently small |µ|, n − 2 of the zeros of ri (s), i = 1, 2 are close to those of t(s), and hence in the open left-half plane while the remaining two zeros are close to ±jω ∗ . To prove that that the roots lying close to ±jω ∗ are in the open left-half plane we let s(µ) denote the root close to jω ∗ and analyze the behavior of the real part of s(µ) for small values of µ. We already know that Re[s(µ)]|µ=0 = 0. (9.120) We will establish the fact that Re[s(µ)] has a local maximum at µ = 0 and this together with (9.120) will show that Re[s(µ)] is negative in a neighborhood
364
ROBUST PARAMETRIC CONTROL
of µ = 0, proving that ri(s), i = 1, 2 are stable. To prove that Re[s(µ)] has a local maximum at µ = 0 it suffices to establish that

d Re[s(µ)]/dµ |µ=0 = 0   (9.121)

and

d² Re[s(µ)]/dµ² |µ=0 < 0.   (9.122)
Now since s(µ) is a root of r1(s) we have

r1(s(µ)) = (s(µ) − jω*) u(s(µ)) + µ δ0(s(µ)) = 0   (9.123)

where

u(s) := (s + jω*) t(s).   (9.124)

By differentiating (9.123) with respect to µ we get

ds(µ)/dµ = − δ0(s(µ)) / [ u(s(µ)) + (s(µ) − jω*) du(s(µ))/ds(µ) + µ dδ0(s(µ))/ds(µ) ]   (9.125)

and hence
ds(µ)/dµ |µ=0 = − δ0(jω*)/u(jω*) = − δ0(jω*)/(2jω* t(jω*)).   (9.126)

From the fact that

sin 2ϕδ0(ω*)/(2ω*) = sin 2ϕt(ω*)/(2ω*)   (9.127)
(see (9.119)) it follows that δ0(jω*) and t(jω*) have arguments that are equal or differ by π radians, so that

δ0(jω*)/t(jω*)   (9.128)

is purely real. Therefore, we have

d Re[s(µ)]/dµ |µ=0 = 0.
To complete the proof we need to establish that

d² Re[s(µ)]/dµ² |µ=0 < 0.
By differentiating (9.125) once again with respect to µ we can obtain the second derivative. After some calculation we get:

d²s(µ)/dµ² |µ=0 = − (j δ0²(jω*))/(2(ω*)² t²(jω*)) [ (1/u(jω*)) du(jω)/dω |ω=ω* − (1/δ0(jω*)) dδ0(jω)/dω |ω=ω* ].
Using the fact that

δ0(jω*)/t(jω*)

is purely real and the formulas (see (9.25))

Im[ (1/u(jω*)) du(jω)/dω |ω=ω* ] = dϕu/dω |ω=ω*   (9.129)

Im[ (1/δ0(jω*)) dδ0(jω)/dω |ω=ω* ] = dϕδ0/dω |ω=ω*   (9.130)

we get

d² Re[s(µ)]/dµ² |µ=0 = − (1/(2(ω*)²)) (δ0²(jω*)/t²(jω*)) [ dϕδ0/dω |ω=ω* − dϕu/dω |ω=ω* ].   (9.131)
Now

dϕu/dω |ω=ω* = dϕt/dω |ω=ω*   (9.132)

and by construction

dϕδ0/dω |ω=ω* > dϕt/dω |ω=ω*.
Once again using the fact that

δ0(jω*)/t(jω*)

is real, we finally have from (9.131):

d² Re[s(µ)]/dµ² |µ=0 < 0.   (9.133)
This proves that the real part of s(µ) is negative for µ in the neighborhood of µ = 0 and therefore r1(s) must be stable as claimed. An identical argument shows that r2(s) is stable. The proof is now completed by the fact that ri(s), i = 1, 2 are Hurwitz and the segment joining them is of constant degree, but the segment polynomial (1/2)(r1(s) + r2(s)) has s = jω* as a root. Thus, δ0(s) is not a convex direction.

We illustrate the usefulness of convex directions by some examples.

Example 9.7
Consider the line segment joining the following two endpoints, which are Hurwitz:

δ1(s) := s⁴ + 12.1s³ + 8.46s² + 11.744s + 2.688
δ2(s) := 2s⁴ + 9s³ + 12s² + 10s + 3.

We first verify the stability of the segment by using the Segment Lemma. The positive real roots of the polynomial

δ1e(ω)δ2o(ω) − δ2e(ω)δ1o(ω) = 0,
that is,

(ω⁴ − 8.46ω² + 2.688)(−9ω² + 10) − (2ω⁴ − 12ω² + 3)(−12.1ω² + 11.744) = 0,

are 2.1085, 0.9150, and 0.3842.
However, none of these ω's satisfy the conditions 2) and 3) of the Segment Lemma (Lemma 9.4). Thus, we conclude that the entire line segment is Hurwitz. Next, we apply the Real Convex Direction Lemma to the difference polynomial

δ0(s) := δ2(s) − δ1(s) = s⁴ − 3.1s³ + 3.54s² − 1.744s + 0.312

so that

δ0(jω) = (ω⁴ − 3.54ω² + 0.312) + j(3.1ω³ − 1.744ω) =: δ0r(ω) + jδ0i(ω)

and the two functions that need to be evaluated are

dϕδ0(ω)/dω = [δ0r(ω) dδ0i(ω)/dω − δ0i(ω) dδ0r(ω)/dω] / [(δ0r(ω))² + (δ0i(ω))²]
= [(ω⁴ − 3.54ω² + 0.312)(9.3ω² − 1.744) − (4ω³ − 7.08ω)(3.1ω³ − 1.744ω)] / [(ω⁴ − 3.54ω² + 0.312)² + (3.1ω³ − 1.744ω)²]

and

sin(2ϕδ0(ω))/(2ω) = δ0r(ω)δ0i(ω) / [ω((δ0r(ω))² + (δ0i(ω))²)]
= [(ω⁴ − 3.54ω² + 0.312)(3.1ω² − 1.744)] / [(ω⁴ − 3.54ω² + 0.312)² + (3.1ω³ − 1.744ω)²].
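The comparison between the two functions above can also be checked numerically. A minimal pure-Python sketch (the frequency grid and tolerance are my own choices, not from the text):

```python
# Numerical check of the Real Convex Direction condition for Example 9.7:
# delta0(s) = s^4 - 3.1 s^3 + 3.54 s^2 - 1.744 s + 0.312.

def dphi(w):
    """d(phi_delta0)/d(omega) from the quotient-rule formula above."""
    r = w**4 - 3.54*w**2 + 0.312          # delta0_r(omega)
    i = 3.1*w**3 - 1.744*w                # delta0_i(omega)
    num = r*(9.3*w**2 - 1.744) - (4*w**3 - 7.08*w)*i
    return num / (r*r + i*i)

def sin2phi_over_2w(w):
    r = w**4 - 3.54*w**2 + 0.312
    i = 3.1*w**3 - 1.744*w
    return r*i / (w*(r*r + i*i))

# Rantzer's condition: dphi(w) <= |sin(2 phi)/2w| for all sampled w > 0.
ws = [0.01 + 0.01*k for k in range(400)]
convex = all(dphi(w) <= abs(sin2phi_over_2w(w)) + 1e-9 for w in ws)
print(convex)   # True on this grid: delta0 is a convex direction
```

Here the derivative dϕδ0/dω turns out to be negative for every ω > 0, so the condition holds with room to spare on the whole grid.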
These two functions are depicted in Figure 9.12. Since the second function dominates the first for each ω, the plots show that δ0(s) is a convex direction. Consequently, the line segment joining the given δ1(s) and δ2(s) is Hurwitz. Furthermore, since δ0(s) is a convex direction, we know, in addition, that every line segment of the form δ(s) + λδ0(s), λ ∈ [0, 1], is Hurwitz for an arbitrary Hurwitz polynomial δ(s) of degree 4 with positive leading coefficient, provided δ(s) + δ0(s) is stable. This additional information is not furnished by the Segment Lemma.

Example 9.8
Consider the Hurwitz stability of the segment joining the following two polynomials:

δ1(s) = 1.4s⁴ + 6s³ + 2.2s² + 1.6s + 0.2
Figure 9.12 δ0(s) is a convex direction (Example 9.7). (The plot shows the curves dϕδ0(ω)/dω and |sin(2ϕδ0(ω))/2ω| versus ω.)
δ2(s) = 0.4s⁴ + 1.6s³ + 2s² + 1.6s + 0.4.

Since δ1(s) and δ2(s) are stable, we can apply the Segment Lemma to check the stability of the line segment [δ1(s), δ2(s)]. First we compute the roots of the polynomial equation:

δ1e(ω)δ2o(ω) − δ2e(ω)δ1o(ω)
= (1.4ω⁴ − 2.2ω² + 0.2)(−1.6ω² + 1.6) − (0.4ω⁴ − 2ω² + 0.4)(−6ω² + 1.6) = 0.

There is one positive real root, ω ≈ 6.53787. We proceed to check the conditions 2) and 3) of the Segment Lemma (Lemma 9.4):

(1.4ω⁴ − 2.2ω² + 0.2)(0.4ω⁴ − 2ω² + 0.4)|ω≈6.53787 > 0
(−6ω² + 1.6)(−1.6ω² + 1.6)|ω≈6.53787 > 0.

Thus, we conclude that the segment [δ1(s), δ2(s)] is stable. Now let us apply the Real Convex Direction Lemma to the difference polynomial

δ0(s) = δ1(s) − δ2(s) = s⁴ + 4.4s³ + 0.2s² − 0.2.
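The Segment Lemma computation above can be reproduced numerically. A pure-Python sketch (the substitution x = ω² and the bisection bracket are my own choices):

```python
# Segment Lemma computation for Example 9.8: find the positive real root of
# delta1e*delta2o - delta2e*delta1o = 0 and check conditions 2) and 3).

def f(x):
    A = 1.4*x**2 - 2.2*x + 0.2      # delta1e at omega^2 = x
    B = -1.6*x + 1.6                # delta2o
    C = 0.4*x**2 - 2.0*x + 0.4      # delta2e
    D = -6.0*x + 1.6                # delta1o
    return A*B - C*D

# Bisection for the single positive root (bracket chosen by inspection).
lo, hi = 40.0, 45.0
for _ in range(60):
    mid = 0.5*(lo + hi)
    if f(lo)*f(mid) <= 0:
        hi = mid
    else:
        lo = mid
omega = (0.5*(lo + hi))**0.5
print(round(omega, 5))              # close to 6.53787, as in the text

# Conditions 2) and 3) at this root: both products are positive, so no
# polynomial on the segment has a jw root and the segment is stable.
x = omega**2
cond2 = (1.4*x**2 - 2.2*x + 0.2)*(0.4*x**2 - 2.0*x + 0.4) > 0
cond3 = (-6.0*x + 1.6)*(-1.6*x + 1.6) > 0
print(cond2 and cond3)              # True
```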
We have

δ0(jω) = (ω⁴ − 0.2ω² − 0.2) + j(−4.4ω³) =: δ0r(ω) + jδ0i(ω).

The two functions that need to be evaluated are

dϕδ0(ω)/dω = [(ω⁴ − 0.2ω² − 0.2)(−13.2ω²) − (4ω³ − 0.4ω)(−4.4ω³)] / [(ω⁴ − 0.2ω² − 0.2)² + (−4.4ω³)²]

and

sin(2ϕδ0(ω))/(2ω) = [(ω⁴ − 0.2ω² − 0.2)(−4.4ω²)] / [(ω⁴ − 0.2ω² − 0.2)² + (−4.4ω³)²].
These two functions are depicted in Figure 9.13. Since the second function does not dominate the first at each ω we conclude that δ0 (s) is not a convex direction.
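The failure of the dominance can be confirmed at a single sample frequency; ω = 1 is my own choice (the plots show failure on a whole frequency band):

```python
# Failure of the convex direction condition for Example 9.8 at omega = 1.
w = 1.0
r = w**4 - 0.2*w**2 - 0.2        # delta0_r(1) = 0.6
i = -4.4*w**3                    # delta0_i(1) = -4.4
den = r*r + i*i                  # = 19.72

dphi = (r*(-13.2*w**2) - (4*w**3 - 0.4*w)*i) / den   # = 7.92/19.72, about 0.402
s2w = r*i / (w*den)                                  # about -0.134

print(dphi > abs(s2w))   # True: the condition dphi <= |sin(2 phi)/2w| fails
```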
Figure 9.13 δ0(s) is a nonconvex direction (Example 9.8). (The plot shows dϕδ0(ω)/dω lying above |sin(2ϕδ0(ω))/2ω|.)
REMARK 9.3 This example reinforces the fact that the segment joining δ1(s) and δ2(s) can be stable even though δ0(s) = δ1(s) − δ2(s) is not a convex direction. On the other hand, even though this particular segment is stable, there exists at least one Hurwitz polynomial δ(s) of degree 4 such that δ(s) and δ(s) + δ0(s) are Hurwitz but the segment [δ(s), δ(s) + δ0(s)] is not.
9.7
The Vertex Lemma
The conditions given by the Convex Direction Lemmas are frequency dependent. It is possible to give frequency-independent conditions on δ0(s) under which Hurwitz stability of the vertices implies stability of every polynomial on the segment [δ1(s), δ2(s)]. In this section we first consider various special forms of the difference polynomial δ0(s) for which this is possible. In each case we use Lemma 9.11 and Hurwitz stability of the vertices to contradict the hypothesis that the segment has unstable polynomials. We then combine the special cases to obtain the general result. This main result is presented as the Vertex Lemma.

We shall assume throughout this subsection that each polynomial on the segment [δ1(s), δ2(s)] is of degree n. This will be true if and only if δ1(s) and δ2(s) are of degree n and their leading coefficients are of the same sign. We shall assume this without loss of generality.

We first consider real polynomials of the form δ0(s) = sᵗ(as + b)P(s), where t is a nonnegative integer and P(s) is odd or even. Suppose, arbitrarily, that t is even and P(s) = E(s), an even polynomial. Then

δ0(s) = sᵗE(s)b + sᵗ⁺¹E(s)a =: δ0even(s) + δ0odd(s).   (9.134)
Defining δ̄0(jω) as before, we see that

tan ϕδ̄0(ω) = a/b   (9.135)

so that

dϕδ̄0/dω = 0.   (9.136)

From Lemma 9.11 (i.e., (9.74)), we see that

λ0 dϕδ̄1/dω |ω=ω0 + (1 − λ0) dϕδ̄2/dω |ω=ω0 = 0   (9.137)
and from Lemma 9.8 we see that if δ1(s) and δ2(s) are Hurwitz, then

dϕδ̄2/dω > 0   (9.138)

and

dϕδ̄1/dω > 0   (9.139)

so that (9.137) cannot be satisfied for λ0 ∈ [0, 1]. An identical argument works when t is odd. The case when P(s) = O(s) is an odd polynomial can be handled similarly by using (9.75) in Lemma 9.11. The details are left to the reader. Thus, we are led to the following result.

LEMMA 9.15
If δ0(s) = sᵗ(as + b)P(s), where t ≥ 0 is an integer, a and b are arbitrary real numbers, and P(s) is an even or odd polynomial, then stability of the segment [δ1(s), δ2(s)] is implied by that of the endpoints δ1(s), δ2(s).

We can now prove the following general result.

LEMMA 9.16 (Hurwitz Vertex Lemma)
a) Let δ1(s) and δ2(s) be real polynomials of degree n with leading coefficients of the same sign and let

δ0(s) = δ1(s) − δ2(s) = A(s) sᵗ (as + b) P(s)   (9.140)
where A(s) is anti-Hurwitz, t ≥ 0 is an integer, a, b are arbitrary real numbers, and P(s) is even or odd. Then stability of the segment [δ1(s), δ2(s)] is implied by that of the endpoints δ1(s), δ2(s).

b) When δ0(s) is not of the form specified in a), stability of the endpoints is not sufficient to guarantee that of the segment.

PROOF a) Write

A(s) = Aeven(s) + Aodd(s)   (9.141)

and let

Ā(s) := Aeven(s) − Aodd(s).   (9.142)

Since A(s) is anti-Hurwitz, Ā(s) is Hurwitz. Now consider the segment [Ā(s)δ1(s), Ā(s)δ2(s)], which is Hurwitz if and only if [δ1(s), δ2(s)] is
Hurwitz. But
Ā(s)δ0(s) = Ā(s)δ1(s) − Ā(s)δ2(s) = [(Aeven(s))² − (Aodd(s))²] sᵗ (as + b) P(s) =: T(s) sᵗ (as + b) P(s).   (9.143)

Since T(s) is an even polynomial, we may use Lemma 9.15 to conclude that the segment [Ā(s)δ1(s), Ā(s)δ2(s)] is Hurwitz if and only if Ā(s)δ1(s) and Ā(s)δ2(s) are. Since Ā(s) is Hurwitz, it follows that the segment [δ1(s), δ2(s)] is Hurwitz if and only if the endpoints δ1(s) and δ2(s) are.

b) We prove this part by means of the following example. Consider the segment

δλ(s) = (2 + 14λ)s⁴ + (5 + 14λ)s³ + (6 + 14λ)s² + 4s + 3.5.   (9.144)
Now set λ = 0 and λ = 1; then we have

δλ|λ=0 = δ1(s) = 2s⁴ + 5s³ + 6s² + 4s + 3.5
δλ|λ=1 = δ2(s) = 16s⁴ + 19s³ + 20s² + 4s + 3.5

and consequently,

δ0(s) = δ2(s) − δ1(s) = 14s⁴ + 14s³ + 14s² = 14s²(s² + s + 1).

It can be verified that the two endpoints δ1(s) and δ2(s) are Hurwitz. Notice that since (s² + s + 1) is Hurwitz with a pair of complex conjugate roots, δ0(s) cannot be partitioned into the form of (9.140). Therefore, we conclude that when δ0(s) is not of the form specified in (9.140), stability of the endpoints is not sufficient to guarantee that of the segment.
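The counterexample can be verified with a Routh-Hurwitz test. A minimal sketch (the simple pivot handling below is enough for these particular polynomials):

```python
# Endpoints of (9.144) are Hurwitz, yet the midpoint lambda = 0.5 is not.

def hurwitz(coeffs):
    """True iff the real polynomial (descending coefficients) is Hurwitz."""
    first_col = [coeffs[0]]
    row1, row2 = list(coeffs[0::2]), list(coeffs[1::2])
    while row2:
        if row2[0] == 0:
            return False                          # degenerate pivot
        first_col.append(row2[0])
        nxt = [row1[k+1] - row1[0]*(row2[k+1] if k+1 < len(row2) else 0.0)/row2[0]
               for k in range(len(row1) - 1)]
        row1, row2 = row2, nxt
    # Hurwitz iff no sign change in the first column of the Routh array
    return all(x > 0 for x in first_col) or all(x < 0 for x in first_col)

def delta(lam):
    return [2 + 14*lam, 5 + 14*lam, 6 + 14*lam, 4.0, 3.5]

print(hurwitz(delta(0.0)), hurwitz(delta(1.0)))   # True True  (endpoints)
print(hurwitz(delta(0.5)))                        # False      (midpoint)
```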
REMARK 9.4 We remark that the form of δ0(s) given in (9.140) is a real convex direction.

Example 9.9
Suppose that the transfer function of a plant containing an uncertain parameter is written in the form:

P(s) = P2(s) / (P1(s) + λP0(s))
where the uncertain parameter λ varies in [0, 1], and the degree of P1(s) is greater than those of P0(s) and P2(s). Suppose that a unity feedback controller is to be designed so that the plant output follows step and ramp inputs and rejects sinusoidal disturbances of radian frequency ω0. Let us denote the controller by

C(s) = Q2(s)/Q1(s).

A possible choice of Q1(s) which will meet the tracking and disturbance rejection requirements is

Q1(s) = s²(s² + ω0²)(as + b)

with Q2(s) being of degree 5 or less. The stability of the closed loop requires that the segment

δλ(s) = Q2(s)P2(s) + Q1(s)(P1(s) + λP0(s))

be Hurwitz stable. The corresponding difference polynomial δ0(s) is

δ0(s) = Q1(s)P0(s).

With Q1(s) of the form shown above, it follows that δ0(s) is of the form specified in the Vertex Lemma if P0(s) is anti-Hurwitz, even, odd, or a product thereof. Thus, in such a case robust stability of the closed loop would be equivalent to the stability of the two vertex polynomials

δ1(s) = Q2(s)P2(s) + Q1(s)P1(s)
δ2(s) = Q2(s)P2(s) + Q1(s)P1(s) + Q1(s)P0(s).

Let ω0 = 1, a = 1, and b = 1, and

P1(s) = s² + s + 1,
P0(s) = s(s − 1),   P2(s) = s² + 2s + 1,
Q2(s) = s⁵ + 5s⁴ + 10s³ + 10s² + 5s + 1.

Since P0(s) = s(s − 1) is the product of an odd polynomial and an anti-Hurwitz polynomial, the conditions of the Vertex Lemma are satisfied and robust stability is equivalent to that of the two vertex polynomials

δ1(s) = 2s⁷ + 9s⁶ + 24s⁵ + 38s⁴ + 37s³ + 22s² + 7s + 1
δ2(s) = 3s⁷ + 9s⁶ + 24s⁵ + 38s⁴ + 36s³ + 22s² + 7s + 1.
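The vertex polynomials above can be reproduced by direct polynomial multiplication; a self-contained sketch with coefficients in descending order:

```python
# delta1 = Q2*P2 + Q1*P1 and delta2 = delta1 + Q1*P0, using the data of
# Example 9.9 (all coefficient lists are descending in powers of s).

def conv(a, b):
    out = [0]*(len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai*bj
    return out

def add(a, b):
    n = max(len(a), len(b))
    a = [0]*(n - len(a)) + a
    b = [0]*(n - len(b)) + b
    return [x + y for x, y in zip(a, b)]

Q1 = conv(conv([1, 0, 0], [1, 0, 1]), [1, 1])   # s^2 (s^2 + 1)(s + 1)
Q2 = [1, 5, 10, 10, 5, 1]                       # (s + 1)^5
P0, P1, P2 = [1, -1, 0], [1, 1, 1], [1, 2, 1]   # s(s-1), s^2+s+1, (s+1)^2

d1 = add(conv(Q2, P2), conv(Q1, P1))
d2 = add(d1, conv(Q1, P0))
print(d1)   # [2, 9, 24, 38, 37, 22, 7, 1]
print(d2)   # [3, 9, 24, 38, 36, 22, 7, 1]
```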
Q2 (s) s5 + 5s4 + 10s3 + 10s2 + 5s + 1 = Q1 (s) s2 (s2 + 1)(s + 1)
STABILITY OF A LINE SEGMENT
373
robustly stabilizes the closed loop system and provides robust asymptotic tracking and disturbance rejection. The Vertex Lemma can easily be extended to the case of Schur stability. LEMMA 9.17 (Schur Vertex Lemma) a) Let δ1 (z) and δ2 (z) be polynomials of degree n with δ1 (1) and δ2 (1) nonzero and of the same sign, and with leading coefficients of the same sign. Let δ0 (z) = δ1 (z) − δ2 (z) = A(z)(z − 1)t1 (z + 1)t2 (az + b)P (z)
(9.145)
where A(z) is antiSchur, t1 , t2 ≥ 0 are integers, a, b are arbitrary real numbers, and P (z) is symmetric or antisymmetric. Then Schur stability of the segment [δ1 (z), δ2 (z)] is implied by that of the endpoints δ1 (z), δ2 (z). b) When δ0 (z) is not of the form specified in a), Schur stability of the endpoints is not sufficient to guarantee that of the segment. PROOF The proof is based on applying the bilinear transformation and using the corresponding results for the Hurwitz case. Let P (z) be any polynomial and let s+1 n ˆ P (s) := (s − 1) P . s−1 If P (z) is of degree n, so is Pˆ (s) provided P (1) 6= 0. Now apply the bilinear transformation to the polynomials δ0 (z), δ1 (z) and δ2 (z) to get δˆ0 (s), δˆ1 (s) and δˆ2 (s), where δˆ0 (s) = δˆ1 (s) − δˆ2 (s). The proof consists of showing that under the assumption that δ0 (z) is of the form given in (9.145), δˆ0 (s), δˆ1 (s), and δˆ2 (s) satisfy the conditions of the Vertex Lemma for the Hurwitz case. Since δ1 (1) and δ2 (1) are of the same sign, δλ (1) = λδ1 (1) + (1 − λ)δ2 (1) 6= 0 for λ ∈ [0, 1]. This in turn implies that δˆλ (s) is of degree n for all λ ∈ [0, 1]. A straightforward calculation shows that t1 ˆ δˆ0 (s) = A(s)2 (2s)t2 (cs + d)Pˆ (s)
which is precisely the form required in the Vertex Lemma for the Hurwitz case. Thus, the segment δˆλ (s) cannot have a jω root and the segment δλ (z) cannot have a root on the unit circle. Therefore, Schur stability of δ1 (z) and δ2 (z) guarantees that of the segment.
9.8
Exercises
9.1 Consider the standard unity feedback control system given in Figure 9.14
Figure 9.14 A unity feedback system (F(s) followed by G(s) in a unity feedback loop).
where

G(s) := (s + 1)/(s²(s + p)),   F(s) = (s − 1)/(s(s + 3)(s² − 2s + 1.25))
and the parameter p varies in the interval [1, 5].

(a) Verify the robust stability of the closed-loop system. Is the Vertex Lemma applicable to this problem?
(b) Verify your answer by the s-plane root locus (or Routh-Hurwitz criteria).

9.2 Rework the problem in Exercise 9.1 by transforming via the bilinear transformation to the z plane, and using the Schur version of the Segment or Vertex Lemma. Verify your answer by the z-plane root locus (or Jury's test).

9.3 The closed-loop characteristic polynomial of a missile of mass M flying at constant speed is:

δ(s) = (−83,200 + 108,110K − 9,909.6K(1/M))
  + (−3,328 + 10,208.2K + 167.6(1/M)) s
  + (−1,547.79K(1/M) + 1,548 − 877.179K + 6.704(1/M) − 2.52497K(1/M)) s²
  + (64 − 24.1048K + 0.10475(1/M)) s³ + s⁴,

where the nominal value of M is M⁰ = 1. Find the range of K for robust stability if 1/M ∈ [1, 4].
Answer: K = [−0.8, 1.2].

9.4 For the feedback system shown in Figure 9.15
Figure 9.15 Feedback control system (gain K in cascade with the plant n(s, α)/d(s, α), unity feedback).
where

n(s, α) = s² + (3 − α)s + 1
d(s, α) = s³ + (4 + α)s² + 6s + 4 + α.

Partition the (K, α) plane into stable and unstable regions. Show that the stable region is bounded by 5 + K(3 − α) > 0 and K + α + 4 > 0.

9.5 Consider the feedback system shown in Figure 9.16.
Figure 9.16 Feedback control system (delay e^{−sT}, gain K, and plant P(s) in cascade, unity feedback).
Let

P(s) = n(s)/d(s) = (6s³ + 18s² + 30s + 25)/(s⁴ + 6s³ + 18s² + 30s + 25).

Determine the robust stability of the system for T = 0.1 sec with 0 < K < 1.
Hint: Check that P0(s) = d(s) and P1(s) = d(s) + e^{−sT} n(s) are stable and that the plot of P0(jω)/P1(jω) does not cut the negative real axis.
9.6 Consider the system given in Figure 9.17
Figure 9.17 Feedback control system (gain K in cascade with n(z)/d(z), unity feedback).
and let

n(z) = (z − (1 + j)/4)(z − (1 − j)/4)(z + 1/2)
d(z) = (z + 3/4)(z − 1/2)(z − (−j − 1)/2)(z − (j − 1)/2).

Find the range of stabilizing K using the Schur Segment Lemma.
Answer: K < −1.53 and K > −0.59.

9.7 Show that δ0(s) given in (9.140) is a convex direction.

9.8 Show that the following polynomials are convex directions.

(a) δ0(s) = (s − r1)(s + r2)(s − r3)(s + r4) ··· (s + (−1)ᵐ rm) where 0 < r1 < r2 < r3 < ··· < rm.
(b) δ0(s) = (s + r1)(s − r2)(s + r3)(s − r4) ··· (s − (−1)ᵐ rm) where 0 ≤ r1 ≤ r2 ≤ r3 ≤ ··· ≤ rm.

9.9 Is the following polynomial a convex direction?

δ0(s) = s⁴ − 2s³ − 13s² + 14s + 24
9.10 Consider the two Schur polynomials:

P1(z) = z⁴ + 2.5z³ + 2.56z² + 1.31z + 0.28
P2(z) = z⁴ + 0.2z³ + 0.17z² + 0.052z + 0.0136

Check the Schur stability of the segment joining these two polynomials by using:

(a) the Schur Segment Lemma
(b) the Bounded Phase Lemma

9.11 Consider the feedback control system shown in Figure 9.18
Figure 9.18 Feedback control system (gain K in cascade with n(s, α)/d(s, α), unity feedback).
where

n(s, α) = s + α
d(s, α) = s³ + (2α)s² + αs − 1

and α ∈ [2, 3]. Partition the K axis into robustly stabilizing and nonstabilizing regions.
Answer: Stabilizing for K ∈ (−4.5, 0.33) only.

9.12 Repeat Exercise 9.11 when there is a time delay of 1 sec in the feedback loop.

9.13 Consider a feedback system with plant transfer function G(s) and controller transfer function C(s):

G(s) = N(s)/D(s),   C(s) = K.   (9.146)
Show that if N(s) is a convex direction, there exists at most one segment of stabilizing gains K.

9.14 Carry out the construction of the polynomial δk(s) required in the proof of Lemma 9.10 for the angle θ lying in the second, third, and fourth quadrants.
9.9
Notes and References
The Segment Lemma for the Hurwitz case was derived by Chapellat and Bhattacharyya [45]. An alternative result on segment stability involving the Hurwitz matrix has been given by Bialas [32]. Bose [36] has given analytical tests for the Hurwitz and Schur stability of convex combinations of polynomials. The Convex Direction Lemmas for the real and complex cases and Lemma 9.10 are due to Rantzer [172]. The results leading up to the Vertex Lemma were developed by various researchers: the monotonic phase properties given in Lemmas 9.8 and 9.9 are due to Mansour and Kraus [149]. An alternative proof of Lemma 9.7 is given in Mansour [147]. In Bose [38] monotonicity results for Hurwitz polynomials are derived from the point of view of reactance functions, and it is stated that Theorem 9.1 follows from Tellegen's Theorem in network theory. The direct proof of Theorem 9.1 given here is due to Keel and Bhattacharyya [122]. Lemma 9.15 is due to Hollot and Yang [105], who first proved the vertex property of first order compensators. Mansour and Kraus gave an independent proof of the same lemma [149], and Petersen [168] dealt with the anti-Hurwitz case. The unified proof of the Vertex Lemma given here, based on Lemma 9.11, was first reported in Bhattacharyya [23] and Bhattacharyya and Keel [25]. The vertex result given in Exercise 9.8 was proved by Kang [111] using the alternating Hurwitz minor conditions. The polynomial used in Exercise 9.9 is taken from Barmish [11]. Vertex results for quasi-polynomials have been developed by Kharitonov and Zhabko [139].
10 STABILITY MARGIN COMPUTATION
In this chapter we develop procedures to determine maximal stability regions in the space of coefficients and the space of parameters of a polynomial. The central idea used is the Boundary Crossing Theorem and its alternative version, the Zero Exclusion Theorem, of Chapter 8. We begin by calculating the largest ℓ2 stability ball centered at a given point in the space of parameters. We calculate the radius of the largest stability ball in the space of real parameters under the assumption that these uncertain parameters enter the characteristic polynomial coefficients linearly or affinely. This radius serves as a quantitative measure of the real parametric stability margin for control systems. Both ellipsoidal and polytopic uncertainty regions are considered.

We also deal with robust stability problems where the uncertain interval parameters appear multilinearly in the characteristic polynomial coefficients. We introduce the Mapping Theorem, which reveals a fundamental property of the image set of such systems. This property allows us to effectively approximate the image set evaluated at an arbitrary point in the complex plane by overbounding it with a union of convex polygons; moreover, the accuracy of this approximation can be increased as much as desired. When the parameters appear in the matrices in a unity rank perturbation structure, the characteristic polynomial of the matrix is a multilinear function of the parameters. This allows us to use the Mapping Theorem to develop a computational algorithm based on calculating the phase difference over the vertices of the parameter set.

Next we introduce some Lyapunov based methods for parameter perturbations in state space systems. A stability region in parameter space can be calculated using this technique, and a numerical procedure for enlarging this region by adjusting the controller parameters is described. We illustrate this algorithm with an example.
The last part of the chapter describes some results on matrix stability radius for the real and complex cases and for some special classes of matrices.
10.1
Introduction
At the first level of detail, every feedback system is composed of at least two subsystems, namely a controller and a plant, connected in a feedback loop. The
characteristic polynomial coefficients of such a system are functions of plant parameters and controller parameters. Although both sets of parameters influence the coefficients, their natures are quite different. The plant contains parameters that are subject to uncontrolled variations depending on the physical operating conditions, disturbances, modeling errors, etc. The controller parameters, on the other hand, are often fixed during the operation of the system. However, at the design stage, they are also uncertain parameters to be chosen.

Robust stability problems deal with the preservation of stability of a closed-loop system under perturbations of uncertain parameters. We first develop some methods to calculate the largest regions of stability when the parameters appear linearly or affinely in the characteristic polynomial coefficients. We also deal with robust stability problems where the uncertain interval parameters appear multilinearly in the characteristic polynomial coefficients. We introduce the Mapping Theorem, which reveals a fundamental property of the image set of such systems. This property allows us to effectively approximate the image set evaluated at an arbitrary point in the complex plane by overbounding it with a union of convex polygons; moreover, the accuracy of this approximation can be increased as much as desired. A computationally efficient solution to the robust stability problem can then be obtained by replacing the multilinear interval family with a test set consisting of a polytopic family. We also show how various worst case stability and performance margins over the interval parameter set can be estimated from this polytopic test set. These include gain and phase margins, H∞ norms, absolute stability under sector bounded nonlinear feedback, and guaranteed time-delay tolerance. Finally, we describe some robust parametric results formulated specifically for state space models.
10.2
The Parametric Stability Margin

10.2.1
The Stability Ball in Parameter Space
In this section we give a useful characterization of the parametric stability margin in the general case. This can be done by finding the largest stability ball in parameter space, centered at a "stable" nominal parameter value p0. Let S ⊂ ℂ denote, as usual, an open set which is symmetric with respect to the real axis. S denotes the stability region of interest. In continuous-time systems S may be the open left-half plane or a subset thereof. For discrete-time systems S is the open unit circle or a subset of it. Now let p be a vector of real parameters,

p = [p1, p2, ···, pl]ᵀ.
The characteristic polynomial of the system is denoted by

δ(s, p) = δn(p)sⁿ + δn−1(p)sⁿ⁻¹ + ··· + δ0(p).
The polynomial δ(s, p) is a real polynomial with coefficients that depend continuously on the real parameter vector p. We suppose that for the nominal parameter p = p0, δ(s, p0) := δ⁰(s) is stable with respect to S (has its roots in S). Write

Δp := p − p0 = [p1 − p1⁰, p2 − p2⁰, ···, pl − pl⁰]

to denote the perturbation in the parameter p from its nominal value p0. Now let us introduce a norm ‖·‖ in the space of the parameters p and introduce the open ball of radius ρ

B(ρ, p0) = {p : ‖p − p0‖ < ρ}.   (10.1)

The hypersphere of radius ρ is defined by

S(ρ, p0) = {p : ‖p − p0‖ = ρ}.   (10.2)

With the ball B(ρ, p0) we associate the family of uncertain polynomials:

Δρ(s) := {δ(s, p0 + Δp) : ‖Δp‖ < ρ}.   (10.3)
DEFINITION 10.1 The real parametric stability margin in parameter space is defined as the radius, measured in some norm and denoted ρ*(p0), of the largest ball centered at p0 for which δ(s, p) remains stable whenever p ∈ B(ρ*(p0), p0).

This stability margin then tells us how much we can perturb the original parameter p0 and yet remain stable. Our first result is a characterization of this maximal stability ball. To simplify notation we write ρ* instead of ρ*(p0).

THEOREM 10.1
With the assumptions as above, the parametric stability margin ρ* is characterized by:

a) There exists a largest stability ball B(ρ*, p0) centered at p0, with the property that:
a1) For every p′ within the ball, the characteristic polynomial δ(s, p′) is stable and of degree n.
a2) At least one point p′′ on the hypersphere S(ρ*, p0) itself is such that δ(s, p′′) is unstable or of degree less than n.
b) Moreover, if p′′ is any point on the hypersphere S(ρ*, p0) such that δ(s, p′′) is unstable, then the unstable roots of δ(s, p′′) can only be on the stability boundary.

The proof of this theorem is based on continuity of the roots on the parameter p. This theorem gives the first simplification for the calculation of the parametric stability margin ρ*. It states that to determine ρ* it suffices to calculate the minimum "distance" of p0 from the set of those points p which endow the characteristic polynomial with a root on the stability boundary, or which cause loss of degree. This last calculation can be carried out using the complex plane image of the family of polynomials Δρ(s) evaluated along the stability boundary. We will describe this in the next section.

The parametric stability margin or distance to instability is measured in the norm ‖·‖, and therefore the numerical value of ρ* will depend on the specific norm chosen. We will consider, in particular, the weighted ℓp norms. These are defined as follows. Let w = [w1, w2, ···, wl] with wi > 0 be a set of positive weights.

ℓ1 norm:  ‖Δp‖₁ʷ := w1|Δp1| + ··· + wl|Δpl|
ℓ2 norm:  ‖Δp‖₂ʷ := (w1²Δp1² + ··· + wl²Δpl²)^{1/2}
ℓp norm:  ‖Δp‖ₚʷ := (|w1Δp1|^p + ··· + |wlΔpl|^p)^{1/p}
ℓ∞ norm:  ‖Δp‖∞ʷ := max_i wi|Δpi|

We will write ‖Δp‖ to refer to a generic weighted norm when the weight and type of norm are unimportant.
10.2.2
The Image Set Approach
The parametric stability margin may be calculated by using the complex plane image of the polynomial family Δρ(s) evaluated at each point on the stability boundary ∂S. This is based on the following idea. Suppose that the family has constant degree n and δ(s, p0) is stable but Δρ(s) contains an unstable polynomial. Then the continuous dependence of the roots on p and the Boundary Crossing Theorem imply that there must also exist a polynomial in Δρ(s) that has a root at a point s* on the stability boundary ∂S. In this case the complex plane image set Δρ(s*) must contain the origin of the complex plane. This suggests that to detect the presence of instability in a family of polynomials of constant degree, we generate the
image set of the family at each point of the stability boundary and determine if the origin is included in or excluded from this set.

THEOREM 10.2 (Zero Exclusion Principle)
For given ρ ≥ 0 and p0, suppose that the family of polynomials Δρ(s) is of constant degree and δ(s, p0) is stable. Then every polynomial in the family Δρ(s) is stable with respect to S if and only if the complex plane image set Δρ(s*) excludes the origin for every s* ∈ ∂S.

PROOF This is simply a consequence of the continuity of the roots of δ(s, p) on p and the Boundary Crossing Theorem (Chapter 8).

In fact, the above can be used as a computational tool to determine the maximum value ρ* of ρ for which the family is stable. If δ(s, p0) is stable, it follows that there always exists an open stability ball around p0, since the stability region S is itself open. Therefore, for small values of ρ the image set Δρ(s*) will exclude the origin for every point s* ∈ ∂S. As ρ is increased from zero, a limiting value ρ0 may be reached where some polynomial in the corresponding family Δρ0(s) loses degree or a polynomial in the family acquires a root s* on the stability boundary. From Theorem 10.1 it is clear that this value ρ0 is equal to ρ*, the stability margin. In case the limiting value ρ0 is never achieved, the stability margin ρ* is infinity.

An alternative way to determine ρ* is as follows. Fixing s* at a point on the boundary of S, let ρ0(s*) denote the limiting value of ρ such that 0 ∈ Δρ(s*):

ρ0(s*) := inf {ρ : 0 ∈ Δρ(s*)}.

Then, we define

ρb := inf_{s* ∈ ∂S} ρ0(s*).

In other words, ρb is the limiting value of ρ for which some polynomial in the family Δρ(s) acquires a root on the stability boundary ∂S. Also let the limiting value of ρ for which some polynomial in Δρ(s) loses degree be denoted by ρd:

ρd := inf {ρ : δn(p0 + Δp) = 0, ‖Δp‖ < ρ}.
We have established the following theorem.

THEOREM 10.3
The parametric stability margin is

ρ* = min {ρb, ρd}.

REMARK 10.1 We note that this theorem remains valid even when the stability region S is not connected. For instance, one may construct the
stability region as a union of disconnected regions Si surrounding each root of the nominal polynomial. In this case the stability boundary must also consist of the union of the individual boundary components ∂Si. The functional dependence of the coefficients δi on the parameters p is also not restricted in any way except for the assumption of continuity.

The above theorem shows that the problem of determining ρ* can be reduced to the following steps:

A) determine the "local" stability margin ρ0(s*) at each point s* on the boundary of the stability region,
B) minimize the function ρ0(s*) over the entire stability boundary and thus determine ρb,
C) calculate ρd, and set
D) ρ* = min{ρb, ρd}.

In general the determination of ρ* is a difficult nonlinear optimization problem. However, the breakdown of the problem into the steps described above exploits the structure of the problem and has the advantage that the local stability margin calculation ρ0(s*), with s* frozen, can be simple. In particular, when the parameters enter linearly into the characteristic polynomial coefficients, this calculation can be done in closed form. It reduces to a least squares problem for the ℓ2 case, and to equally simple linear programming or vertex problems for the ℓ∞ or ℓ1 cases. The dependence of ρb on s* is in general highly nonlinear, but this part of the minimization can be easily performed computationally because sweeping the boundary ∂S is a one-dimensional search. In the next section, we describe and develop this calculation in greater detail.
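The steps above can be sketched for a toy affine family; the example δ(s, p) = s² + p1 s + p2 with nominal p0 = (3, 2), the unweighted ℓ2 norm, and the frequency grid are all my own choices, not from the text:

```python
# Steps A)-D) for delta(s, p) = s^2 + p1 s + p2, p0 = (3, 2), l2 norm.
# At each boundary point s* = jw, the Re/Im parts of (10.6) give a 2 x l
# linear system A*dp = b; its minimum-norm solution yields rho0(w).

def rho_of_w(w):
    # delta0(jw) = (2 - w^2) + j 3w ; a1(jw) = jw ; a2(jw) = 1
    b = [w*w - 2.0, -3.0*w]                  # right-hand side -delta0(jw)
    A = [[0.0, 1.0], [w, 0.0]]               # rows: Re and Im of [a1, a2]
    g11 = A[0][0]**2 + A[0][1]**2
    g22 = A[1][0]**2 + A[1][1]**2
    if g22 < 1e-12:                          # Im row vanishes (w = 0)
        y = b[0]/g11
        dp = [A[0][0]*y, A[0][1]*y]
    else:                                    # the two rows are orthogonal here
        y1, y2 = b[0]/g11, b[1]/g22
        dp = [A[0][0]*y1 + A[1][0]*y2, A[0][1]*y1 + A[1][1]*y2]
    return (dp[0]**2 + dp[1]**2)**0.5

rho_b = min(rho_of_w(0.05*k) for k in range(101))   # sweep w in [0, 5]
rho_d = float('inf')                                # leading coefficient fixed
rho_star = min(rho_b, rho_d)
print(round(rho_star, 4))   # 2.0: a root reaches s = 0 when p2 drops to 0
```

Here the minimum over the boundary is attained at ω = 0, where the perturbation Δp = (0, −2) drives the constant coefficient to zero.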
10.3 Stability Margin Computation
We develop explicit formulas for the parametric stability margin in the case in which the characteristic polynomial coefficients depend linearly on the uncertain parameters. In such cases we may write without loss of generality

δ(s, p) = a1(s)p1 + · · · + al(s)pl + b(s)    (10.4)

where ai(s) and b(s) are real polynomials and the parameters pi are real. As before, we write p for the vector of uncertain parameters, p0 denotes the nominal parameter vector and ∆p the perturbation vector. In other words

p = [p1, p2, · · · , pl],
p0 = [p01, p02, · · · , p0l],
∆p = [p1 − p01, p2 − p02, · · · , pl − p0l] = [∆p1, ∆p2, · · · , ∆pl].

Then the characteristic polynomial can be written as

δ(s, p0 + ∆p) = δ(s, p0) + a1(s)∆p1 + · · · + al(s)∆pl =: δ0(s) + ∆δ(s, ∆p).    (10.5)
Now let s∗ denote a point on the stability boundary ∂S. For s∗ ∈ ∂S to be a root of δ(s, p0 + ∆p) we must have

δ(s∗, p0) + a1(s∗)∆p1 + · · · + al(s∗)∆pl = 0.    (10.6)

We rewrite this equation introducing the weights wi > 0:

δ(s∗, p0) + (a1(s∗)/w1) w1∆p1 + · · · + (al(s∗)/wl) wl∆pl = 0.    (10.7)

Obviously, the minimum ‖∆p‖w norm solution of this equation gives us ρ(s∗), the calculation involved in step A in the last section:

ρ(s∗) = inf { ‖∆p‖w : δ(s∗, p0) + (a1(s∗)/w1) w1∆p1 + · · · + (al(s∗)/wl) wl∆pl = 0 }.

Similarly, corresponding to loss of degree we have the equation

δn(p0 + ∆p) = 0.    (10.8)

Letting ain denote the coefficient of the nth degree term in the polynomial ai(s), i = 1, 2, · · · , l, the above equation becomes

a1n p01 + a2n p02 + · · · + aln p0l + a1n∆p1 + a2n∆p2 + · · · + aln∆pl = 0,    (10.9)

where the first sum equals δn(p0). We can rewrite this after introducing the weights wi > 0:

δn(p0) + (a1n/w1) w1∆p1 + (a2n/w2) w2∆p2 + · · · + (aln/wl) wl∆pl = 0.    (10.10)

The minimum norm ‖∆p‖w solution of this equation gives us ρd.

We consider the above equations in some more detail. Recall that the polynomials are assumed to be real. The equation (10.10) is real and can be rewritten in the form

[a1n/w1 · · · aln/wl] [w1∆p1, · · · , wl∆pl]ᵀ = −δn(p0),    (10.11)

which we abbreviate as An tn = bn, with An the row vector of weighted coefficients, tn the weighted perturbation vector, and bn = −δn(p0).
In (10.7), two cases may occur depending on whether s∗ is real or complex. If s∗ = sr where sr is real, we have the single equation

[a1(sr)/w1 · · · al(sr)/wl] [w1∆p1, · · · , wl∆pl]ᵀ = −δ0(sr),    (10.12)

abbreviated as A(sr) t(sr) = b(sr).

Let xr and xi denote the real and imaginary parts of a complex number x, i.e., x = xr + jxi with xr, xi real. Using this notation, we will write ak(s∗) = akr(s∗) + jaki(s∗) and δ0(s∗) = δr0(s∗) + jδi0(s∗). If s∗ = sc where sc is complex, (10.7) is equivalent to two real equations, written compactly as

A(sc) t(sc) = b(sc),    (10.13)

where A(sc) is the 2 × l matrix whose first row is [a1r(sc)/w1 · · · alr(sc)/wl] and whose second row is [a1i(sc)/w1 · · · ali(sc)/wl], t(sc) = [w1∆p1, · · · , wl∆pl]ᵀ, and b(sc) = [−δr0(sc), −δi0(sc)]ᵀ.
These equations completely determine the parametric stability margin in any norm. Let t∗(sc), t∗(sr), and t∗n denote the minimum norm solutions of (10.13), (10.12), and (10.11), respectively. Thus,

‖t∗(sc)‖ = ρ(sc),    (10.14)
‖t∗(sr)‖ = ρ(sr),    (10.15)
‖t∗n‖ = ρd.    (10.16)

If any of the above equations (10.11)–(10.13) does not have a solution, the corresponding value of ρ(·) is set equal to infinity. Let ∂Sr and ∂Sc denote the real and complex subsets of ∂S: ∂S = ∂Sr ∪ ∂Sc, and define

ρr := inf_{sr ∈ ∂Sr} ρ(sr),    ρc := inf_{sc ∈ ∂Sc} ρ(sc).    (10.17)

Therefore, ρb = inf{ρr, ρc}. We now consider the specific case of the ℓ2 norm.
10.3.1 ℓ2 Stability Margin
In this section we suppose that the length of the perturbation vector ∆p is measured by a weighted ℓ2 norm; that is, the minimum ℓ2 norm solutions of the equations (10.11), (10.12), and (10.13) are desired. Consider first (10.13). Assuming that A(sc) has full row rank (= 2), the minimum norm solution vector t∗(sc) can be calculated as follows:

t∗(sc) = Aᵀ(sc) [A(sc)Aᵀ(sc)]⁻¹ b(sc).    (10.18)

Similarly, if (10.12) and (10.11) are consistent (i.e., A(sr) and An are nonzero vectors), we can calculate the solutions as

t∗(sr) = Aᵀ(sr) [A(sr)Aᵀ(sr)]⁻¹ b(sr),    (10.19)
t∗n = Aᵀn [An Aᵀn]⁻¹ bn.    (10.20)

If A(sc) has less than full rank the following two cases can occur.
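As a quick numerical sanity check of (10.18)–(10.20) — a sketch with hypothetical data, not from the text — NumPy's least-squares solver returns exactly this minimum-norm solution when the system is underdetermined and consistent.

```python
import numpy as np

def min_norm_solution(A, b):
    """Minimum l2-norm solution of A t = b for full-row-rank A, as in (10.18)-(10.20)."""
    A = np.atleast_2d(A)
    return A.T @ np.linalg.solve(A @ A.T, np.atleast_1d(b))

# hypothetical 2 x 3 system standing in for A(sc) t(sc) = b(sc)
A = np.array([[1.0, 2.0, -1.0],
              [0.0, 1.0,  3.0]])
b = np.array([4.0, -2.0])

t_star = min_norm_solution(A, b)
t_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)  # lstsq also returns the min-norm solution

assert np.allclose(A @ t_star, b)    # the equation is satisfied
assert np.allclose(t_star, t_lstsq)  # both solutions agree
```

This equivalence is what makes the boundary sweep in the examples below a one-liner per frequency point.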
Case 1: rank A(sc) = 0. In this case the equation is inconsistent since b(sc) ≠ 0 (otherwise δ0(sc) = 0 and δ0(s) would not be stable with respect to S since sc lies on ∂S). In this case (10.13) has no solution and we set ρ(sc) = ∞.

Case 2: rank A(sc) = 1. In this case the equation is consistent if and only if

rank [A(sc), b(sc)] = 1.

If the above rank condition for consistency is satisfied, we replace the two equations (10.13) by a single equation and determine the minimum norm solution of this latter equation. If the rank condition for consistency does not hold, equation (10.13) cannot be satisfied and we again set ρ(sc) = ∞.

Example 10.1 (ℓ2 Schur Stability Margin) Consider the discrete time control system with the controller and plant specified respectively by their transfer functions:

C(z) = (z + 1)/z²,    G(z, p) = [(−0.5 − 2p0)z + (0.1 + p0)] / [z² − (1 + 0.4p2)z + (0.6 + 10p1 + 2p0)].

The characteristic polynomial of the closed-loop system is

δ(z, p) = z⁴ − (1 + 0.4p2)z³ + (0.1 + 10p1)z² − (0.4 + p0)z + (0.1 + p0).

The nominal value of p0 = [p00, p01, p02] = [0, 0.1, 1]. The perturbation is denoted as usual by the vector ∆p = [∆p0, ∆p1, ∆p2].
The polynomial is Schur stable for the nominal parameter p0. We compute the ℓ2 stability margin of the polynomial with weights w1 = w2 = w3 = 1. Rewrite

δ(z, p0 + ∆p) = (−z + 1)∆p0 + 10z²∆p1 − 0.4z³∆p2 + (z⁴ − 1.4z³ + 1.1z² − 0.4z + 0.1)

and note that the degree remains invariant (= 4) for all perturbations, so that ρd = ∞. The stability region is the unit circle. For z = 1 to be a root of δ(z, p0 + ∆p) (see (10.12)), we must have

[0  10  −0.4] [∆p0, ∆p1, ∆p2]ᵀ = −0.4,

that is, A(1) t(1) = b(1). Thus,

ρ(1) = ‖t∗(1)‖2 = ‖Aᵀ(1)[A(1)Aᵀ(1)]⁻¹ b(1)‖2 = 0.04.

Similarly, for the case of z = −1 (see (10.12)), we have

[2  10  0.4] [∆p0, ∆p1, ∆p2]ᵀ = −4

and ρ(−1) = ‖t∗(−1)‖2 = 0.3919. Thus ρr = 0.04. For the case in which δ(z, p0 + ∆p) has a root at z = e^{jθ}, θ ≠ 0, θ ≠ π, using (10.13) we have A(θ) t(θ) = b(θ) with

A(θ) = [ −cos θ + 1   10 cos 2θ   −0.4 cos 3θ
         −sin θ       10 sin 2θ   −0.4 sin 3θ ],

b(θ) = − [ cos 4θ − 1.4 cos 3θ + 1.1 cos 2θ − 0.4 cos θ + 0.1
           sin 4θ − 1.4 sin 3θ + 1.1 sin 2θ − 0.4 sin θ       ].

Thus,

ρ(e^{jθ}) = ‖t∗(θ)‖2 = ‖Aᵀ(θ)[A(θ)Aᵀ(θ)]⁻¹ b(θ)‖2.

Figure 10.1 shows the plot of ρ(e^{jθ}). Therefore, the ℓ2 parametric stability margin is ρc = 0.032 = ρb = ρ∗.
Figure 10.1 ρ(e^{jθ}) versus θ (Example 10.1); the minimum value ρ = 0.032 is marked.
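The boundary sweep of Example 10.1 is easy to reproduce numerically. Below is a minimal sketch (assuming NumPy; the grid density is an arbitrary choice). It uses the fact that np.linalg.lstsq returns the minimum-norm solution of (10.13), which also covers the real boundary points θ = 0, π where one row of A(θ) vanishes.

```python
import numpy as np

def rho(theta):
    """Local l2 stability margin rho(e^{j*theta}) for Example 10.1, w1 = w2 = w3 = 1."""
    # A(theta), b(theta): real and imaginary parts of delta(e^{j*theta}, p0 + dp) = 0
    A = np.array([
        [1 - np.cos(theta), 10*np.cos(2*theta), -0.4*np.cos(3*theta)],
        [   -np.sin(theta), 10*np.sin(2*theta), -0.4*np.sin(3*theta)],
    ])
    b = -np.array([
        np.cos(4*theta) - 1.4*np.cos(3*theta) + 1.1*np.cos(2*theta) - 0.4*np.cos(theta) + 0.1,
        np.sin(4*theta) - 1.4*np.sin(3*theta) + 1.1*np.sin(2*theta) - 0.4*np.sin(theta),
    ])
    t, *_ = np.linalg.lstsq(A, b, rcond=None)  # minimum-norm solution of A t = b
    return np.linalg.norm(t)

# step B: sweep the unit circle (a one-dimensional search over theta)
thetas = np.linspace(0.0, np.pi, 5001)
rho_b = min(rho(t) for t in thetas)
print(rho_b)  # close to 0.032, the value reported in Example 10.1
```

Since the problem data are real, sweeping θ ∈ [0, π] suffices; ρ(e^{−jθ}) = ρ(e^{jθ}).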
Example 10.2 Consider the continuous time control system with the plant

G(s, p) = [2s + 3 − (1/3)p1 − (5/3)p2] / [s³ + (4 − p2)s² + (−2 − 2p1)s + (−9 + (5/3)p1 + (16/3)p2)]

and the proportional integral (PI) controller

C(s) = 5 + 3/s.

The characteristic polynomial of the closed-loop system is

δ(s, p) = s⁴ + (4 − p2)s³ + (8 − 2p1)s² + (12 − 3p2)s + (9 − p1 − 5p2).

We see that the degree remains invariant under the given set of parameter variations and therefore ρd = ∞. The nominal values of the parameters are p0 = [p01, p02] = [0, 0]. Then

∆p = [∆p1, ∆p2] = [p1, p2].
The polynomial is stable for the nominal parameter p0. Now we want to compute the ℓ2 stability margin of this polynomial with weights w1 = w2 = 1. We first evaluate δ(s, p) at s = jω:

δ(jω, p0 + ∆p) = (2ω² − 1)∆p1 + (jω³ − 3jω − 5)∆p2 + ω⁴ − 4jω³ − 8ω² + 12jω + 9.

For the case of a root at s = 0 (see (10.12)), we have

[−1  −5] [∆p1, ∆p2]ᵀ = −9,

that is, A(0) t(0) = b(0). Thus,

ρ(0) = ‖t∗(0)‖2 = ‖Aᵀ(0)[A(0)Aᵀ(0)]⁻¹ b(0)‖2 = 9√26/26.

For a root at s = jω, ω > 0, using the formula given in (10.13), we have with w1 = w2 = 1

[ 2ω² − 1     −5
     0      ω² − 3 ] [∆p1, ∆p2]ᵀ = [ −ω⁴ + 8ω² − 9,  4ω² − 12 ]ᵀ,    (10.21)

that is, A(jω) t(jω) = b(jω). Here, we need to determine if there exists any ω for which the rank of the matrix A(jω) drops. It is easy to see from the matrix A(jω) that the rank drops when ω = 1/√2 and ω = √3.

rank A(jω) = 1: For ω = 1/√2, we have rank A(jω) = 1 and rank [A(jω), b(jω)] = 2, and so there is no solution to (10.21). Thus,

ρ(j/√2) = ∞.

For ω = √3, rank A(jω) = rank [A(jω), b(jω)] = 1, and we do have a solution to (10.21), namely of the single equation

[5  −5] [∆p1, ∆p2]ᵀ = 6.

Consequently,

ρ(j√3) = ‖t∗(j√3)‖2 = ‖Aᵀ(j√3)[A(j√3)Aᵀ(j√3)]⁻¹ b(j√3)‖2 = 3√2/5.

rank A(jω) = 2: In this case (10.21) has a solution (which happens to be unique), and its length is

ρ(jω) = ‖t∗(jω)‖2 = ‖Aᵀ(jω)[A(jω)Aᵀ(jω)]⁻¹ b(jω)‖2 = √[ ((ω⁴ − 8ω² − 11)/(2ω² − 1))² + 16 ].
Figure 10.2 shows the plot of ρ(jω) for ω > 0. The values of ρ(0) and ρ(j√3) are also shown in Figure 10.2.
Figure 10.2 ρ(jω) (Example 10.2); the values ρ(0) and ρ(j1.7321) are marked.
Therefore,

ρ(j√3) = ρb = 3√2/5 = ρ∗

is the stability margin.
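Example 10.2 can also be checked numerically. The sketch below (assuming NumPy) handles the rank-deficient frequencies by comparing rank A(jω) with rank [A(jω), b(jω)], as prescribed in Cases 1 and 2 above. It is valid for ω > 0; the point s = 0 uses the single real equation (10.12) instead.

```python
import numpy as np

def rho(omega):
    """Local l2 margin rho(j*omega) for Example 10.2 (valid for omega > 0)."""
    A = np.array([[2 * omega**2 - 1, -5.0],
                  [0.0,              omega**2 - 3]])
    b = np.array([-(omega**4 - 8 * omega**2 + 9), 4 * omega**2 - 12])
    # Cases 1 and 2: if rank [A, b] exceeds rank A, the equation is inconsistent
    if np.linalg.matrix_rank(np.column_stack([A, b]), tol=1e-9) > \
       np.linalg.matrix_rank(A, tol=1e-9):
        return np.inf
    t, *_ = np.linalg.lstsq(A, b, rcond=None)  # minimum-norm solution
    return np.linalg.norm(t)

print(rho(1 / np.sqrt(2)))  # inf: rank A = 1 but rank [A, b] = 2
print(rho(np.sqrt(3)))      # 3*sqrt(2)/5, the discontinuous global minimum
print(rho(1.0))             # sqrt(340), agreeing with the rank-2 closed form
```

The rank tolerances are hypothetical choices; in exact arithmetic the rank drops occur only at ω = 1/√2 and ω = √3.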
10.3.2 Discontinuity of the Stability Margin
In the last example we notice that the function ρ(jω) has a discontinuity at ω = ω∗ = √3. The reason for this discontinuity is that in the neighborhood of ω∗, the minimum norm solution of (10.13) is given by the formula for the rank 2 case. On the other hand, at ω = ω∗, the minimum norm solution is given by the formula for the rank 1 case. Thus, the discontinuity of the function ρ(jω) is due to the drop of rank from 2 to 1 of the coefficient matrix A(jω) at ω∗. Furthermore, we have seen that if the rank of A(jω∗) drops from 2 to 1 but the
rank of [A(jω ∗ ), b(jω ∗ )] does not also drop, then the equation is inconsistent at ω ∗ and ρ(jω ∗ ) is infinity. In this case, this discontinuity in ρ(jω) does not cause any problem in finding the global minimum of ρ(jω). Therefore, the only values of ω ∗ that can cause a problem are those for which the rank of [A(jω ∗ ), b(jω ∗ )] drops to 1. Given the problem data, the occurrence of such a situation can be predicted by setting all 2 × 2 minors of the matrix [A(jω), b(jω)] equal to zero and solving for the common real roots if any. These frequencies can then be treated separately in the calculation. Therefore, such discontinuities do not pose any problem from the computational point of view. Since the parameters for which rank dropping occurs lie on a proper algebraic variety, any slight and arbitrary perturbation of the parameters will dislodge them from this variety and restore the rank of the matrix. If the parameters correspond to physical quantities such arbitrary perturbations are natural and hence such discontinuities should not cause any problem from a physical point of view either.
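For Example 10.2 this prediction can be carried out explicitly: set all 2 × 2 minors of [A(jω), b(jω)] to zero and look for common real roots. In the sketch below the minor polynomials were expanded by hand from (10.21) (the factored forms appear in the comments).

```python
import numpy as np

# 2x2 minors of [A(jw), b(jw)] for Example 10.2, as polynomials in w (highest degree first);
# columns: a1 = (2w^2-1, 0), a2 = (-5, w^2-3), b = (-(w^4-8w^2+9), 4w^2-12)
m12 = np.array([2, 0, -7, 0, 3])           # det[a1 a2] = (2w^2-1)(w^2-3)
m13 = np.array([8, 0, -28, 0, 12])         # det[a1 b]  = 4(2w^2-1)(w^2-3)
m23 = np.array([1, 0, -11, 0, 13, 0, 33])  # det[a2 b]  = (w^2-3)(w^4-8w^2-11)

def real_roots(coeffs, tol=1e-8):
    r = np.roots(coeffs)
    return r.real[np.abs(r.imag) < tol]

# frequencies at which ALL minors vanish, i.e. rank [A, b] drops to 1
common = [w for w in real_roots(m12)
          if all(abs(np.polyval(m, w)) < 1e-6 for m in (m13, m23))]
print(sorted(common))  # only w = ±sqrt(3): the single frequency needing separate treatment
```

At ω = 1/√2 the first two minors vanish but m23 does not, which is exactly the inconsistent case where ρ(jω) = ∞ and no special treatment is needed.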
10.3.3 ℓ2 Stability Margin for Time-Delay Systems
The results given above for determining the largest stability ellipsoid in parameter space for polynomials can be extended to quasi-polynomials. This extension is useful when parameter uncertainty is present in systems containing time delays. As before, we deal with the case where the uncertain parameters appear linearly in the coefficients of the quasi-polynomial. Let us consider real quasi-polynomials

δ(s, p) = p1Q1(s) + p2Q2(s) + · · · + plQl(s) + Q0(s)    (10.22)

where

Qi(s) = s^{ni} + Σ_{k=1}^{ni} Σ_{j=1}^{mi} a^{i}_{kj} s^{ni−k} e^{−τ^{i}_{j} s},    i = 0, 1, · · · , l,    (10.23)

and we assume that n0 > ni, i = 1, 2, · · · , l, and that all parameters occurring in the equations (10.22) and (10.23) are real. Control systems containing time delays often have characteristic equations of this form (see Example 10.3). The uncertain parameter vector is denoted p = [p1, p2, · · · , pl]. The nominal value of the parameter vector is p = p0, the nominal quasi-polynomial is δ(s, p0) = δ0(s), and p − p0 = ∆p denotes the deviation or perturbation from the nominal. The parameter vector is assumed to lie in the ball of radius ρ centered at p0:

B(ρ, p0) = {p : ‖p − p0‖2 < ρ}.    (10.24)

The corresponding set of quasi-polynomials is:

∆ρ(s) := {δ(s, p0 + ∆p) : ‖∆p‖2 < ρ}.    (10.25)
Recall the discussion in Chapter 8 regarding the Boundary Crossing Theorem applied to this class of quasi-polynomials. From this discussion and the fact that in the family (10.22) the e^{−st} terms are associated only with the lower degree terms, it follows that it is legitimate to say that each quasi-polynomial defined above is Hurwitz stable if all its roots lie inside the left-half of the complex plane. As before we shall say that the family is Hurwitz stable if each quasi-polynomial in the family is Hurwitz. We observe that the “degree” of each quasi-polynomial in ∆ρ(s) is the same since n0 > ni, and therefore stability can be lost only by a root crossing the jω axis. Accordingly, for every −∞ < ω < ∞ we can introduce a set in the parameter space

Π(ω) = {p : δ(jω, p) = 0}.

This set corresponds to quasi-polynomials that have jω as a root. Of course for some particular ω this set may be empty. If Π(ω) is nonempty we can define the distance between Π(ω) and the nominal point p0:

ρ(ω) = inf_{p ∈ Π(ω)} ‖p − p0‖.

If Π(ω) is empty for some ω we set the corresponding ρ(ω) := ∞. We also note that since all coefficients in (10.22) and (10.23) are assumed to be real, Π(ω) = Π(−ω) and accordingly ρ(ω) = ρ(−ω).

THEOREM 10.4 The family of quasi-polynomials ∆ρ(s) is Hurwitz stable if and only if the quasi-polynomial δ0(s) is stable and

ρ < ρ∗ = inf_{0 ≤ ω < ∞} ρ(ω)

(degree loss cannot occur since n0 > ni).
10.3.4 ℓ∞ and ℓ1 Stability Margins
If ∆p is measured in the ℓ∞ or ℓ1 norm, we face the problem of determining the corresponding minimum norm solution of a linear equation of the type At = b at each point on the stability boundary. Problems of this type can always be converted to a suitable linear programming problem. For instance, in the ℓ∞ case, this problem is equivalent to the following optimization problem:

Minimize β subject to the constraints:

At = b,
−β ≤ ti ≤ β,    i = 1, 2, · · · , l.    (10.26)

This is a standard linear programming (LP) problem and can therefore be solved by existing, efficient algorithms.
For the ℓ1 case, we can similarly formulate a linear programming problem by introducing the nonnegative variables yi+ ≥ 0, yi− ≥ 0 with

ti := yi+ − yi−,    i = 1, 2, · · · , l,

so that |ti| = yi+ + yi− at the optimum. Then we have the LP problem:

Minimize Σ_{i=1}^{l} (yi+ + yi−)    (10.27)

subject to

At = b,
yi+ ≥ 0, yi− ≥ 0,    i = 1, 2, · · · , l.

We do not elaborate further on this approach. The reason is that the ℓ∞ and ℓ1 cases are special cases of polytopic families of perturbations. We deal with this general class next.
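Before moving on, note that for a real boundary point the constraint At = b is a single equation aᵀt = b, and both LPs then have well-known closed-form values via dual norms: the minimum ℓ∞ norm is |b|/‖a‖₁ and the minimum ℓ1 norm is |b|/‖a‖∞. A sketch, checked against the data of Example 10.1 at z = −1:

```python
import numpy as np

def linf_margin(a, b):
    # min ||t||_inf subject to a.t = b (the LP (10.26)) equals |b| / ||a||_1
    return abs(b) / np.sum(np.abs(a))

def l1_margin(a, b):
    # min ||t||_1 subject to a.t = b (the LP (10.27)) equals |b| / ||a||_inf,
    # attained by perturbing only the component with the largest coefficient
    return abs(b) / np.max(np.abs(a))

# Example 10.1 at z = -1: A(-1) = [2, 10, 0.4], b(-1) = -4
a, b = np.array([2.0, 10.0, 0.4]), -4.0
print(linf_margin(a, b))  # 4/12.4
print(l1_margin(a, b))    # 4/10 = 0.4
```

For complex boundary points the constraint has two rows and the general LPs (10.26)–(10.27) are needed; the closed forms above cover only the single-equation case.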
10.4 The Mapping Theorem
The Mapping Theorem deals with a family of polynomials which depend multilinearly on a set of interval parameters. We refer to such a family as a multilinear interval polynomial. The Mapping Theorem shows us that the image set of such a family is contained in the convex hull of the image of the vertices. We state and prove this below.

Let p = [p1, p2, · · · , pl] denote a vector of real parameters. Consider the polynomial

δ(s, p) := δ0(p) + δ1(p)s + δ2(p)s² + · · · + δn(p)sⁿ    (10.28)

where the coefficients δi(p) are multilinear functions of p, i = 0, 1, · · · , n. The vector p lies in an uncertainty set

Π := {p : pi− ≤ pi ≤ pi+, i = 1, 2, · · · , l}.    (10.29)

The corresponding set of multilinear interval polynomials is denoted by

∆(s) := {δ(s, p) : p ∈ Π}.    (10.30)

Let V denote the vertices of Π, i.e.,

V := {p : pi = pi+ or pi = pi−, i = 1, 2, · · · , l},    (10.31)

and let

∆V(s) := {δ(s, p) : p ∈ V} := {v1(s), v2(s), · · · , vk(s)}    (10.32)

denote the set of vertex polynomials. Let ∆̄(s) denote the convex hull of the vertex polynomials {v1(s), v2(s), · · · , vk(s)}:

∆̄(s) := { Σ_{i=1}^{k} λi vi(s) : 0 ≤ λi ≤ 1, Σ_{i=1}^{k} λi = 1 }.

The intersection of the sets ∆(s) and ∆̄(s) contains the vertex polynomials ∆V(s). The Mapping Theorem deals with the image of ∆(s) at s = s∗. Let T(s∗) denote the complex plane image of a set T(s) evaluated at s = s∗ and let co P denote the convex hull of a set of points P in the complex plane.

THEOREM 10.5 (Mapping Theorem) Under the assumption that the δi(p) are multilinear functions of p,

∆(s∗) ⊂ co ∆V(s∗) = ∆̄(s∗)    (10.33)
for each s∗ ∈ ℂ.

PROOF For convenience we suppose that there are two uncertain parameters p := [p1, p2] and the uncertainty set Π is the rectangle ABCD shown in Figure 10.5(a). Fixing s = s∗, we obtain δ(s∗, p) which maps Π to the complex plane. Let A′, B′, C′, D′ denote respectively the complex plane images of the vertices A, B, C, D under this mapping. Figures 10.5(b,c,d,e) show various configurations that can arise under this mapping. Now consider an arbitrary point I in Π and its complex plane image I′. The theorem is proved if we establish that I′ is a convex combination of the complex numbers A′, B′, C′, D′. We note that starting at an arbitrary vertex, say A, of Π we can reach I by moving along straight lines which are either edges of Π or are parallel to an edge of Π. Thus, as shown in Figure 10.5(a), we move from A to E along the edge AB, and then from E to I along EF which is parallel to the edge AD. Because δ(s∗, p) is multilinear in the pi it follows that the complex plane images of AB, EF, and CD, which are parallel to edges of Π, are straight lines, respectively A′B′, E′F′, C′D′. Moreover, E′ lies on the straight line A′B′, F′ lies on the straight line C′D′, and I′ lies on the straight line E′F′. Therefore, I′ lies in the convex hull of A′B′C′D′. The same reasoning works in higher dimensions. Any point in Π can be reached from a vertex by moving one coordinate at a time. In the image set this corresponds, because of multilinearity, to moving along straight lines joining vertices or convex combinations of vertices. By such movements we
Figure 10.5 Proof of the mapping theorem.
can never escape the convex hull of the vertices. The second equality holds by definition.

We point out that the Mapping Theorem does not hold if Π is not an axis-parallel box, since the image of an edge of an arbitrary polytope under a multilinear mapping is in general not a straight line. For the same reason, the Mapping Theorem will also not hold when the dependency on the parameters is polynomial rather than multilinear.

Example 10.4 Consider the multilinear interval polynomial

δ(s, p) = p1Q1(s) + p2Q2(s) + p1p2Q3(s) + Q4(s)

with

Q1(s) = −6s + 2, Q2(s) = −5s − 1, Q3(s) = 10s + 3, Q4(s) = 7s + 5.

The parameter p varies inside the box Π depicted in Figure 10.6(a). The image set ∆(s∗) of the family at s∗ = j1 is shown in Figure 10.6(b), together with its convex hull. This shows that

∆(s∗) ⊂ co ∆V(s∗) = ∆̄(s∗).

The four corners of the polygon in Figure 10.6(b) are the vertices of the image.
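The containment (10.33) can be verified numerically for Example 10.4. In the sketch below the uncertainty box is an assumption (the text specifies Π only through Figure 10.6(a); we take [0, 1] × [0, 1]); the Mapping Theorem guarantees the assertion for any axis-parallel box.

```python
import numpy as np

# Example 10.4 data at s* = j
s = 1j
Q1, Q2, Q3, Q4 = -6*s + 2, -5*s - 1, 10*s + 3, 7*s + 5

def image(p1, p2):
    return p1*Q1 + p2*Q2 + p1*p2*Q3 + Q4

# vertex images (box ASSUMED to be [0,1] x [0,1])
verts = [image(a, b) for a in (0.0, 1.0) for b in (0.0, 1.0)]

# order the vertex images counterclockwise around their centroid
c = sum(verts) / len(verts)
poly = sorted(verts, key=lambda v: np.angle(v - c))

def in_hull(z, poly, tol=1e-9):
    """True if complex point z lies inside the CCW convex polygon poly."""
    return all((np.conj(q - p) * (z - p)).imag >= -tol
               for p, q in zip(poly, poly[1:] + poly[:1]))

# sample the multilinear image set over the box: every point must lie in the hull
samples = [image(a, b) for a in np.linspace(0, 1, 21) for b in np.linspace(0, 1, 21)]
assert all(in_hull(z, poly) for z in samples)  # the Mapping Theorem containment
```

The same check with the box corners moved to any other axis-parallel positions should also pass, while a non-axis-parallel polytope may violate it.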
This theorem shows us that the image set of the multilinear family ∆(s) can always be approximated by overbounding it with the image of the polytopic family ∆̄(s). This approximation is extremely useful. In the next section we show how it allows us to determine robust stability and stability margins quantitatively.
10.4.1 Robust Stability via the Mapping Theorem
As we have seen in earlier chapters, the robust stability of a parametrized family of polynomials can be determined by verifying that the image set evaluated at each point of the stability boundary excludes the origin. The Mapping Theorem shows us that the image set of a multilinear interval polynomial family is contained in the convex hull of the vertices. Obviously a sufficient condition for the entire image set to exclude zero is that the convex hull exclude zero. Since the image set ∆(s∗) is overbounded by the convex hull of ∆V(s∗), this suggests that the stability of the multilinear set ∆(s) can be guaranteed by solving the easier problem of verifying the stability of the polytopic set ∆̄(s). We develop this idea in this section.
Figure 10.6 Parameter space, image set, and convex hull (Example 10.4).
As usual, let S, an open set in the complex plane, be the stability region. Introduce the set of edges E(s) of the polytope ∆̄(s):

E(s) := {λvi(s) + (1 − λ)vj(s) : vi(s), vj(s) ∈ ∆V(s), λ ∈ [0, 1]}.    (10.34)

To proceed, we make some standing assumptions about the families ∆(s) and ∆̄(s).

Assumption 3
1) Every polynomial in ∆(s) and ∆̄(s) is of the same degree.
2) 0 ∉ ∆̄(s0) for some s0 ∈ ∂S.
3) At least one polynomial in ∆(s) is stable.

THEOREM 10.6 Under the above assumptions, ∆(s) is stable with respect to S if ∆̄(s), and equivalently E(s), is stable with respect to S.

PROOF Since the degree remains invariant, we may apply the Boundary Crossing Theorem (Chapter 8). Thus, ∆(s) can be unstable if and only if 0 ∈ ∆(s∗) for some s∗ ∈ ∂S. By assumption, there exists s0 ∈ ∂S such that 0 ∉ ∆̄(s0) = co E(s0).
By the Mapping Theorem,

∆(s∗) ⊂ ∆̄(s∗) = co E(s∗).

Therefore, by continuity of the image set in s, 0 ∈ ∆(s∗) must imply the existence of s̄ ∈ ∂S such that 0 ∈ E(s̄). This contradicts the stability of ∆̄(s) and of E(s).

This theorem is just a statement of the fact that the origin can enter the image set ∆(s∗) only by piercing one of the edges E(s̄). Nevertheless, the result is rather remarkable in view of the fact that the set ∆̄(s) does not necessarily contain, nor is it contained in, the set ∆(s). In fact the inverse image of E(s) in the parameter space will in general include points outside Π. The theorem works because the image set ∆(s∗) is overbounded by ∆̄(s∗) at every point s∗ ∈ ∂S. This in turn happens because the Mapping Theorem guarantees that ∆(s∗) is “concave,” or bulges inward.

The test set ∆̄(s) and its edges E(s) are linearly parametrized families of polynomials. Thus, the above theorem allows us to test a multilinear interval family using all the techniques available in the linear case.

Example 10.5 Consider the multilinear interval polynomial family ∆(s):

δ(s, p) = p1Q1(s) + p2Q2(s) + p1p2Q3(s)

where p1 ∈ [1, 2], p2 ∈ [3, 4] and

Q1(s) = s⁴ + 4.3s³ + 6.2s² + 3.5s + 0.6,
Q2(s) = s⁴ + s³ + 0.32s² + 0.038s + 0.0015,
Q3(s) = s⁴ + 3.5s³ + 3.56s² + 1.18s + 0.12.

The edges of the polytopic set ∆̄(s) are

E(s) = {Ei(s), i = 1, 2, · · · , 6}
with each Ei(s) a segment joining a pair of the four vertex polynomials. Writing

v(ε1, ε2)(s) = p1^{ε1} Q1(s) + p2^{ε2} Q2(s) + p1^{ε1} p2^{ε2} Q3(s),    ε1, ε2 ∈ {−, +},

the six edges are the segments

λ vα(s) + (1 − λ) vβ(s),    λ ∈ [0, 1],

where {vα, vβ} runs over the six distinct pairs drawn from {v(−,−), v(+,−), v(−,+), v(+,+)} — the four box edges and the two diagonals. The stability of E(s) can be tested simply by applying the Segment Lemma (Chapter 9). Here, all edges in E(s) are found to be stable.
10.4.2 Refinement of the Convex Hull Approximation
Since the stability of the set ∆̄(s), or of E(s), is only a sufficient condition for the stability of ∆(s), it can happen that E(s) is unstable while ∆(s) is stable. In this case the sufficient condition given by the theorem can be tightened by introducing additional vertices. We illustrate this in the accompanying figures. Consider the multilinear polynomial

Q0(s) + p1Q1(s) + p2Q2(s) + p1p2Q3(s)

where

Q0(s) = s⁴ + s³ + 2s² + s + 2,
Q1(s) = 2s⁴ + 3s³ + 4s² + s + 1,
Q2(s) = 1.5s⁴ + 1.5s³ + 3s² + s + 0.5,
Q3(s) = 3s⁴ + 0.5s³ + 1.5s² + 2s + 2,

and the parameter p varies within the box shown in Figure 10.7. As shown in Figure 10.7 one may introduce additional vertices in the parameter space. The corresponding image sets at ω = 0.85 are shown in Figures 10.8, 10.9, and 10.10. We see that this amounts to decomposing the parameter box as a union of smaller boxes:

Π = ∪_{i=1}^{t} Πi.    (10.35)

If Vi and Ei(z) denote the corresponding vertices and edges, we have

∆(z∗) ⊂ ∪_{i=1}^{t} ∆̄Vi(z∗) = ∪_{i=1}^{t} co Ei(z∗),    (10.36)

and therefore the stability of the set of line segments ∪_{i=1}^{t} Ei(z) would imply that of ∆(z).
Figure 10.7 Additional vertices in parameter space.
Figure 10.8 Image set and convex hull (ω = 0.85).
Figure 10.9 Image set and convex hull (2 partitions) (ω = 0.85).
Figure 10.10 Image set and convex hull (4 partitions) (ω = 0.85).
We can see from Figures 10.9 and 10.10 that the nonconvex set ∪_{i=1}^{t} co Ei(z∗) approximates ∆(z∗) more closely as the number t of polytopes Πi increases. It is clear, therefore, that the sufficient condition given here can be improved to any desired level of accuracy by checking the stability of smaller convex hulls.
10.5 Stability Margins of Multilinear Interval Systems
In this section we consider system transfer functions which are ratios of multilinear interval polynomials. We call such systems multilinear interval systems. Our objective is to analyze feedback control systems containing such multilinear interval plants. In addition to determining robust stability, we are interested in calculating various types of stability and performance margins for such systems. Let M(s) denote a transfer function which depends on the uncertain, interval, independent parameters q and r:

M(s) = M(s, q, r) = N(s, q)/D(s, r).    (10.37)

We assume that N(s, q) and D(s, r) are multilinear interval polynomials and q and r lie in axis-parallel boxes Q and R respectively. Let

N(s) := {N(s, q) : q ∈ Q},
D(s) := {D(s, r) : r ∈ R},

and write

M(s) := { N(s, q)/D(s, r) : q ∈ Q, r ∈ R } =: N(s)/D(s).

Now, suppose that the multilinear interval plant we have described is part of a control system. For robustness studies, it is important that we obtain an assessment of the worst case stability margins and performance measures of the system over the uncertainty set Π := Q × R. We shall show here that by using the Mapping Theorem we can replace the family M(s) by a polytopic family M̄(s) which has the property that any worst case stability or performance margin that is calculated for the family M̄(s) serves as a corresponding guaranteed margin for the family M(s). The advantage that we gain is that worst case stability and performance margins can be evaluated easily and exactly for the family M̄(s) since it is polytopic.

Construction of M̄(s): Let VQ denote the vertex set of Q and NV(s) the corresponding vertex polynomials:

NV(s) := {N(s, q) : q ∈ VQ}.
Let N̄(s) denote the convex hull of NV(s):

N̄(s) := {λNi(s) + (1 − λ)Nj(s) : Ni(s), Nj(s) ∈ NV(s), λ ∈ [0, 1]}.

In an identical manner we can define the sets VR, DV(s), and D̄(s). Now we can introduce the transfer function sets

MV(s) := NV(s)/DV(s)   and   M̄(s) := N̄(s)/D̄(s).

From the Mapping Theorem we know that at every point s∗ ∈ ℂ the image sets of N(s) and D(s) contain the images of the vertices and are overbounded by the images of the polytopic families N̄(s) and D̄(s) respectively:

NV(s∗) ⊂ N(s∗) ⊂ N̄(s∗),
DV(s∗) ⊂ D(s∗) ⊂ D̄(s∗).

As usual we assume that 0 ∉ D̄(s∗) so that the above sets are well-defined. From the above it follows that the image set of M(s) contains the image of the vertices MV(s) and is also overbounded by the image of M̄(s).

LEMMA 10.1
MV(s∗) ⊂ M(s∗) ⊂ M̄(s∗).

This result suggests that by replacing the multilinear interval family M(s) by the polytopic family M̄(s) in a control system we can calculate a lower bound on the worst case stability margin or performance margin of the system. To state this more precisely, suppose that M(s) ∈ M(s) is part of a feedback control system. We will assume that the order of the closed-loop system, i.e., the degree of the characteristic polynomial, remains invariant over M(s) ∈ M(s) and also over M(s) ∈ M̄(s). As usual let S denote a stability region that is relevant to the system under consideration.

THEOREM 10.7 Under the assumption of invariant order, robust stability of the control system holds with M(s) ∈ M(s) if it holds with M(s) ∈ M̄(s).

The proof of this result follows immediately from the image set bounding property of the set M̄(s) evaluated at points s∗ on the stability boundary ∂S, which is stated in the Lemma above. The result holds for arbitrary stability regions, with Hurwitz and Schur stability being particular cases.
Suppose now that we are dealing with a continuous time control system and that we have verified the robust Hurwitz stability of the system with M(s) ∈ M̄(s). The next important issue is to determine the performance of the system in some suitably defined meaningful way. The usual measures of performance are gain margin, phase margin, time-delay tolerance, H∞ stability or performance margins, parametric stability radius in the parameter p, and nonlinear sector-bounded stability margins as in the Lur’e or Popov problems treated in Chapter 12. Let µ refer to one of these performance measures and let us assume that increasing values of µ reflect better performance. Let µ∗ denote the worst case value of µ over the set M(s) ∈ M(s), let µ̲ denote the worst case value of µ over M(s) ∈ M̄(s), and let µ̄ denote the worst case value of µ over the vertex set M(s) ∈ MV(s). Then it is obviously true that

µ̲ ≤ µ∗ ≤ µ̄.

Since M̄(s) is a polytopic set, we can calculate µ̲ exactly. As in the Mapping Theorem, this lower bound can be increased by subdividing the box Π into smaller boxes. On the other hand, µ̄ can be calculated very easily because it is the minimum of µ evaluated over the vertex points. By subdividing the box Π, this upper bound can be decreased and the gap between the upper and lower bounds can be reduced to arbitrarily small values by introducing additional vertices to refine the convex hull approximation. Thus the worst case performance over the parameter set Π, also known as robust performance, can be accurately determined. In the following section we illustrate this procedure for estimating worst case performance in greater detail with numerical examples.
10.5.1 Examples
We illustrate the procedure for calculating robust stability and performance margins by examples.

Parametric Stability Margin

For a given uncertainty set Π, we can verify robust stability using the Mapping Theorem. In applications, it is of interest to know how large Π can be without losing stability. This problem can also be solved using the convex hull approximation. We illustrate this with an example.

Example 10.6 Consider the discrete time feedback system shown in Figure 10.11, with

P(z) := P1(z)/P2(z) = (z − z0)/[(z − p1)(z − p2)]

and

C(z) := C1(z)/C2(z) = (z + 1.29)/(z − 0.5),

and let the set of system parameters p = [z0, p1, p2] vary in a box whose size
Figure 10.11 Discrete-time feedback system (Example 10.6).
varies with a dilation parameter ǫ as follows:

z0 ∈ [z0−, z0+] = [0.5 − ǫ, 0.5 + ǫ],
p1 ∈ [p1−, p1+] = [1.0 − ǫ, 1.0 + ǫ],
p2 ∈ [p2−, p2+] = [−1.0 − ǫ, −1.0 + ǫ].
The characteristic polynomial of the system is

δ(z, p) = z³ + (0.5 − p1 − p2)z² + (1.29 + 0.5p1 + 0.5p2 + p1p2 − z0)z − 0.5p1p2 − 1.29z0.

We see that the coefficients are multilinear functions of p. Therefore, Theorem 10.7 may be applied. We verify that the following member of the family is Schur stable:

δ(z, p = [0.5, 1, −1]) = z³ + 0.5z² − 0.21z − 0.145.

We would like to determine the largest sized box for which we have closed-loop stability. This can be regarded as the worst case parametric stability margin ǫ∗ of the family. Using a simple bisection algorithm, we find that a lower bound on the parametric stability margin ǫ∗ is 0.125. On the other hand, we can get an upper bound from the vertices; this also gives 0.125, which matches the lower bound and is therefore the true value in this case.
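The vertex-based upper bound of Example 10.6 can be reproduced in a few lines. This is a sketch: bisection on ǫ, checking Schur stability of the eight vertex polynomials only (which is what yields the upper bound), and assuming the transition is monotone in ǫ.

```python
import numpy as np

def char_poly(z0, p1, p2):
    """Closed-loop characteristic polynomial of Example 10.6 (highest degree first)."""
    return np.array([1.0,
                     0.5 - p1 - p2,
                     1.29 + 0.5*p1 + 0.5*p2 + p1*p2 - z0,
                     -0.5*p1*p2 - 1.29*z0])

def schur(c):
    return np.max(np.abs(np.roots(c))) < 1.0

def all_vertices_schur(eps):
    nominal = (0.5, 1.0, -1.0)                     # (z0, p1, p2)
    corners = [(a, b, c) for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)]
    return all(schur(char_poly(*(n + s*eps for n, s in zip(nominal, signs))))
               for signs in corners)

lo, hi = 0.0, 0.5   # stable at eps = 0; some vertex is unstable by eps = 0.5
while hi - lo > 1e-4:
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if all_vertices_schur(mid) else (lo, mid)
print(lo)  # close to 0.125, the vertex upper bound (here equal to eps*)
```

The matching lower bound would additionally require checking the edges of the polytopic overbounding family, per Theorem 10.6.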
Robust Gain, Phase, Time-Delay and H∞ Stability Margins

In this section we focus on a control system example and show how robust performance, measured in each of the above ways over a multilinear parameter set, can be estimated from the polytopic overbounding method discussed earlier.

Example 10.7   Consider the control system block diagram shown in Figure 10.12. Let

F(s) := F1(s)/F2(s) = (s + 1)/(s + 2)   and   P(s) := P1(s)/P2(s) = (s + p1)/(s² + p2s + 4)
STABILITY MARGIN COMPUTATION

Figure 10.12 A multilinear interval control system (Example 10.7).
Q(s) := Q1(s)/Q2(s) = (s + p3)/(s³ + 3s² + p4s + 0.1)
where

p1 ∈ [2.9, 3.1] =: [p1⁻, p1⁺]    p2 ∈ [1.9, 2.1] =: [p2⁻, p2⁺]
p3 ∈ [4.9, 5.1] =: [p3⁻, p3⁺]    p4 ∈ [1.9, 2.1] =: [p4⁻, p4⁺].
The family of characteristic polynomials is

∆(s, p) = {F2(s)P2(s)Q2(s) + F1(s)P1(s)Q1(s) : p ∈ Π}

where the parameter-space box is given as

Π := {p : pi ∈ [pi⁻, pi⁺], i = 1, 2, 3, 4}.

Now we construct the 16 vertex polynomials corresponding to the vertex set V of Π:

∆V(s) = {v(s, p) : p ∈ V} = {vi(s), i = 1, 2, · · · , 16}.

The vertex polynomials vi(s) can be written down by setting p to a vertex value. For example,

v1(s) = v(s, p1⁺, p2⁺, p3⁺, p4⁺)
      = (s + 2)(s² + p2⁺s + 4)(s³ + 3s² + p4⁺s + 0.1) + (s + 1)(s + p1⁺)(s + p3⁺)
      =: V1D(s) + V1N(s)

where V1D(s) denotes the first (denominator) product and V1N(s) the second (numerator) product.
Similarly, we can write v2(s), · · · , v16(s).
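The 16 vertex polynomials can be generated mechanically. A small sketch (assuming numpy) that builds them from the data above and checks that each is Hurwitz:

```python
import itertools
import numpy as np

# Vertex values of p1, p2, p3, p4 from Example 10.7
boxes = [(2.9, 3.1), (1.9, 2.1), (4.9, 5.1), (1.9, 2.1)]

def vertex_poly(p1, p2, p3, p4):
    # v(s) = F2(s)P2(s)Q2(s) + F1(s)P1(s)Q1(s) at one vertex
    vD = np.polymul(np.polymul([1, 2], [1, p2, 4]), [1, 3, p4, 0.1])
    vN = np.polymul(np.polymul([1, 1], [1, p1]), [1, p3])
    return np.polyadd(vD, vN)

verts = [vertex_poly(*p) for p in itertools.product(*boxes)]
hurwitz = [bool(np.all(np.roots(v).real < 0)) for v in verts]
print(len(verts), all(hurwitz))
```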
(a) To determine the worst-case upper gain margin L∗ over the parameter set, we replace the vertex polynomials as follows:

Vi(s, L) = ViD(s) + (1 + L)ViN(s),   i = 1, 2, · · · , 16.

Now find the maximum phase difference over the entire set of vertex polynomials ∆V(jω) for a fixed ω with a fixed L:

Φ∆V(ω, L) = sup_{i, i≠k} arg[Vi(jω, L)/Vk(jω, L)] − inf_{i, i≠k} arg[Vi(jω, L)/Vk(jω, L)]
where k is an arbitrary integer, 1 ≤ k ≤ 16, and i = 1, 2, · · · , 16. Then a lower bound L on the worst-case gain margin is obtained by determining the largest real positive value of L such that all the edges connecting the vertices in ∆V(s) remain Hurwitz. Equivalently, by the Bounded Phase Condition we have

L = inf_{L≥0} { L : sup_ω Φ∆V(ω, L) = 180° }.
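This bounded-phase computation can be evaluated numerically. The sketch below (assuming numpy; the frequency and gain grids are discretizations chosen here for illustration, so the result only approximates the value L ≈ 0.658 found in the text) rebuilds the vertex polynomials of Example 10.7 and scans for the smallest L at which the phase spread reaches 180°:

```python
import itertools
import numpy as np

def vertex_pair(p1, p2, p3, p4):
    # (ViD, ViN) for one vertex of the parameter box
    vD = np.polymul(np.polymul([1, 2], [1, p2, 4]), [1, 3, p4, 0.1])
    vN = np.polymul(np.polymul([1, 1], [1, p1]), [1, p3])
    return vD, vN

pairs = [vertex_pair(*p) for p in itertools.product(
    (2.9, 3.1), (1.9, 2.1), (4.9, 5.1), (1.9, 2.1))]

s = 1j * np.linspace(0.01, 10.0, 4000)   # frequency grid

def max_phase_spread(L):
    # sup over omega of the phase spread of Vi(jw, L) = ViD + (1+L) ViN
    vals = np.array([np.polyval(np.polyadd(vD, (1 + L) * np.asarray(vN)), s)
                     for vD, vN in pairs])
    rel = np.angle(vals / vals[0])       # phases relative to the first vertex
    return np.degrees((rel.max(axis=0) - rel.min(axis=0)).max())

L_low = next((L for L in np.linspace(0.0, 1.0, 501)
              if max_phase_spread(L) >= 180.0), None)
print(L_low)
```

Replacing the factor (1 + L) by e^{jθ} or e^{−jωT} gives, in the same way, the phase margin and time-delay margin bounds of parts (b) and (c) below.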
Figure 10.13 shows the maximum phase difference over the vertex set at each frequency ω for L = 0.

Figure 10.13 Φ∆V(ω, L) vs. ω (L = 0) (Example 10.7(a)).
The figure shows that the maximum phase difference over vertices Φ∆V(ω, L) does not reach 180° for any ω. This means that a larger value of L is allowed. Figure 10.14 shows the plot of the maximum phase difference over the vertices and over all frequencies as a function of L. From this we find that for L = 0.658 there exists a frequency ω that results in Φ∆V(ω, L) = 180°. Therefore, L ≈ 0.658. On the other hand, we can determine an upper bound on L∗ by finding the smallest gain margin over the vertex set. This turns out to be L̄ = 0.658, which is the same as L, and therefore the exact margin in this case.

Figure 10.14 sup_ω Φ∆V(ω, L) vs. L (Example 10.7(a)).

(b) To determine the worst-case phase margin θ∗ we modify each vertex polynomial as follows:

Vi(s, θ) = ViD(s) + e^{jθ}ViN(s),
i = 1, 2, · · · , 16.
Then a lower bound θ on θ∗ can be obtained by finding the largest value of θ such that the Bounded Phase Condition is satisfied. Equivalently,

θ = inf_{θ≥0} { θ : sup_ω Φ∆V(ω, θ) = 180° }.
Figure 10.15 shows the plot of the maximum phase difference over the vertices and over all ω as a function of θ. From this, we find that for θ = 31.6273° there exists an ω that results in Φ∆V(ω, θ) = 180°. Thus, θ ≈ 31.6273°. On the other hand, from the vertex systems we also get an upper bound θ̄ ≈ 31.6273°, which is the same as θ, and hence 31.6273° is the true margin in this case.

Figure 10.15 sup_ω Φ∆V(ω, θ) vs. θ (Example 10.7(b)).
(c) To determine the worst-case time-delay margin T∗, we replace each vertex polynomial as follows:

Vi(s, T) = ViD(s) + e^{−jωT}ViN(s),   i = 1, 2, · · · , 16.

A lower bound T on T∗ is obtained by selecting the largest value of T such that the Bounded Phase Condition is satisfied. Equivalently,

T = inf_{T≥0} { T : sup_ω Φ∆V(ω, T) = 180° }.
Figure 10.16 shows that at T = 0.56 there exists an ω that results in Φ∆V(ω, T) = 180°. Thus we have T = 0.56 sec.

Figure 10.16 sup_ω Φ∆V(ω, T) vs. T (Example 10.7(c)).
(d) To determine the worst-case H∞ stability margin, let us consider the case of additive perturbation where the H∞ uncertainty block is connected around the two cascaded interval plants as shown in Figure 10.17. By the usual small-gain argument, the closed loop then remains stable for all additive perturbations ∆ with

∥∆∥∞ < [ sup_{p∈Π} ∥F(s)(1 + F(s)P(s)Q(s))⁻¹∥∞ ]⁻¹

and the worst-case H∞ stability margin can again be estimated by evaluating this norm over the vertex systems.

Consider now the robust stability of a matrix M(p) whose entries depend on a parameter vector p ranging over a box Π. Two problems arise: (Problem 1) verifying the stability of M(p) for every p ∈ Π, and (Problem 2) finding the largest dilation ǫ > 0 of the box Π for which stability is preserved. These problems may be effectively solved by using the fact that the characteristic polynomial of the matrix is a multilinear function of the parameters. Problem 1 can be solved by the following algorithm:
Step 1: Determine the eigenvalues of the matrix M(p) with p fixed at each vertex of Π. From these, generate the characteristic polynomials corresponding to the vertices of Π.

Step 2: Verify the stability of the line segments connecting the vertex characteristic polynomials. This may be done by checking the Bounded Phase Condition or the Segment Lemma.

We remark that the procedure outlined above does not require the determination of the characteristic polynomial as a function of the parameter p. It is enough to know that the function is multilinear. To determine the maximum value of ǫ which solves the second problem, we may simply repeat the previous steps for incremental values of ǫ. In fact, an upper bound ǭ can be found as that value of ǫ for which one of the vertices first becomes unstable. A lower bound ǫ can then be determined, as the value of ǫ for which a segment joining the vertices becomes unstable, as follows:

Step 1: Set ǫ = ǭ/2.

Step 2: Check the maximal phase difference of the vertex polynomials over the parameter box corresponding to ǫ.

Step 3: If the maximal phase difference is less than π radians, then increase ǫ to ǫ + (ǭ − ǫ)/2, for example, and repeat Step 2.

Step 4: If the maximal phase difference is π radians or greater, then decrease ǫ to ǫ − (ǭ − ǫ)/2 and repeat Step 2.

Step 5: The iteration stops when the incremental or decremental step becomes small enough. This gives a lower bound ǫ and an upper bound ǭ. If ǫ and ǭ are not close enough, we can refine the iteration by partitioning the interval uncertainty set into smaller boxes as in Section 10.4.

The following examples illustrate this algorithm.
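As a concrete sketch of the ǫ-iteration (assuming numpy), consider the interval matrix of Example 10.8 below, A(p) = [p1 p2; p3 0] with nominal p0 = [−3, −2, 1]; bisection on vertex eigenvalues recovers the upper bound ǭ:

```python
import itertools
import numpy as np

def vertices_hurwitz(eps):
    # All vertex matrices of the eps-box around p0 = [-3, -2, 1] Hurwitz stable?
    boxes = [(-3 - eps, -3 + eps), (-2 - eps, -2 + eps), (1 - eps, 1 + eps)]
    return all(np.all(np.linalg.eigvals([[p1, p2], [p3, 0.0]]).real < 0)
               for p1, p2, p3 in itertools.product(*boxes))

lo, hi = 0.0, 3.0          # some vertex is unstable well before eps = 3
for _ in range(40):
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if vertices_hurwitz(mid) else (lo, mid)
print(round(lo, 4))        # eps_bar = 1.0
```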
10.6.3 Numerical Examples
Example 10.8   Consider the interval matrix

A(p) = [ p1  p2 ; p3  0 ]

where p0 = [p1⁰, p2⁰, p3⁰] = [−3, −2, 1]
and

p1 ∈ [p1⁻, p1⁺] = [−3 − ǫ, −3 + ǫ],  p2 ∈ [p2⁻, p2⁺] = [−2 − ǫ, −2 + ǫ],  p3 ∈ [p3⁻, p3⁺] = [1 − ǫ, 1 + ǫ].

The problem is to find the maximum value ǫ∗ so that the matrix A(p) remains stable for all ǫ ∈ [0, ǫ∗]. Although the solution to this simple problem can be worked out analytically, we work through the steps in some detail to illustrate the calculations involved. The characteristic polynomial of the matrix is

δ(s, p) = s(s − p1) − p2p3.

In general this functional form is not required, since only the vertex characteristic polynomials are needed and they can be found from the eigenvalues of the corresponding vertex matrices. Let us now compute the upper bound for ǫ. We have eight vertex polynomials parametrized by ǫ:

∆V(s) := {δi(s, p) : p ∈ V}

where

V := {(p1⁺, p2⁺, p3⁺), (p1⁺, p2⁻, p3⁺), (p1⁺, p2⁺, p3⁻), (p1⁺, p2⁻, p3⁻), (p1⁻, p2⁺, p3⁺), (p1⁻, p2⁻, p3⁺), (p1⁻, p2⁺, p3⁻), (p1⁻, p2⁻, p3⁻)}.

We found that the vertex polynomial δ3(s) has a jω root at ǫ = 1. Thus we set ǭ = 1. Using the multilinear version of GKT, ∆(s) is robustly Hurwitz stable if and only if the following sets are Hurwitz stable:

L1 = { s[λ(s − p1⁻) + (1 − λ)(s − p1⁺)] − p2⁺p3⁺ : λ ∈ [0, 1] }
L2 = { s[λ(s − p1⁻) + (1 − λ)(s − p1⁺)] − p2⁻p3⁺ : λ ∈ [0, 1] }
L3 = { s[λ(s − p1⁻) + (1 − λ)(s − p1⁺)] − p2⁺p3⁻ : λ ∈ [0, 1] }
L4 = { s[λ(s − p1⁻) + (1 − λ)(s − p1⁺)] − p2⁻p3⁻ : λ ∈ [0, 1] }
M5 = { s(s − p1⁻) − [λ1p2⁻ + (1 − λ1)p2⁺][λ2p3⁻ + (1 − λ2)p3⁺] : (λ1, λ2) ∈ [0, 1] × [0, 1] }
M6 = { s(s − p1⁺) − [λ1p2⁻ + (1 − λ1)p2⁺][λ2p3⁻ + (1 − λ2)p3⁺] : (λ1, λ2) ∈ [0, 1] × [0, 1] }.

The Li, i = 1, 2, 3, 4, are line segments of polynomials, so we rewrite them as follows:

L1 = { λ(s² − p1⁻s − p2⁺p3⁺) + (1 − λ)(s² − p1⁺s − p2⁺p3⁺) : λ ∈ [0, 1] }
L2 = { λ(s² − p1⁻s − p2⁻p3⁺) + (1 − λ)(s² − p1⁺s − p2⁻p3⁺) : λ ∈ [0, 1] }
L3 = { λ(s² − p1⁻s − p2⁺p3⁻) + (1 − λ)(s² − p1⁺s − p2⁺p3⁻) : λ ∈ [0, 1] }
L4 = { λ(s² − p1⁻s − p2⁻p3⁻) + (1 − λ)(s² − p1⁺s − p2⁻p3⁻) : λ ∈ [0, 1] }.
Now we need to generate the set of line segments that constructs the convex hull of the image sets of M5 and M6. This can be done by connecting every pair of vertex polynomials. The vertex set corresponding to M5 is

M5V(s) := { M5 : (λ1, λ2) ∈ {(0, 0), (0, 1), (1, 0), (1, 1)} }.

If we connect every pair of these vertex polynomials, we have the line segments:

L5 = s(s − p1⁻) − [λp2⁺p3⁺ + (1 − λ)p2⁺p3⁻] = λ(s² − p1⁻s − p2⁺p3⁺) + (1 − λ)(s² − p1⁻s − p2⁺p3⁻)
L6 = s(s − p1⁻) − [λp2⁺p3⁺ + (1 − λ)p2⁻p3⁺] = λ(s² − p1⁻s − p2⁺p3⁺) + (1 − λ)(s² − p1⁻s − p2⁻p3⁺)
L7 = s(s − p1⁻) − [λp2⁻p3⁺ + (1 − λ)p2⁻p3⁻] = λ(s² − p1⁻s − p2⁻p3⁺) + (1 − λ)(s² − p1⁻s − p2⁻p3⁻)
L8 = s(s − p1⁻) − [λp2⁺p3⁻ + (1 − λ)p2⁻p3⁻] = λ(s² − p1⁻s − p2⁺p3⁻) + (1 − λ)(s² − p1⁻s − p2⁻p3⁻)
L9 = s(s − p1⁻) − [λp2⁺p3⁻ + (1 − λ)p2⁻p3⁺] = λ(s² − p1⁻s − p2⁺p3⁻) + (1 − λ)(s² − p1⁻s − p2⁻p3⁺)
L10 = s(s − p1⁻) − [λp2⁺p3⁺ + (1 − λ)p2⁻p3⁻] = λ(s² − p1⁻s − p2⁺p3⁺) + (1 − λ)(s² − p1⁻s − p2⁻p3⁻).

Similarly, for M6 we have the line segments:

L11 = s(s − p1⁺) − [λp2⁺p3⁺ + (1 − λ)p2⁺p3⁻] = λ(s² − p1⁺s − p2⁺p3⁺) + (1 − λ)(s² − p1⁺s − p2⁺p3⁻)
L12 = s(s − p1⁺) − [λp2⁺p3⁺ + (1 − λ)p2⁻p3⁺] = λ(s² − p1⁺s − p2⁺p3⁺) + (1 − λ)(s² − p1⁺s − p2⁻p3⁺)
L13 = s(s − p1⁺) − [λp2⁻p3⁺ + (1 − λ)p2⁻p3⁻] = λ(s² − p1⁺s − p2⁻p3⁺) + (1 − λ)(s² − p1⁺s − p2⁻p3⁻)
L14 = s(s − p1⁺) − [λp2⁺p3⁻ + (1 − λ)p2⁻p3⁻] = λ(s² − p1⁺s − p2⁺p3⁻) + (1 − λ)(s² − p1⁺s − p2⁻p3⁻)
L15 = s(s − p1⁺) − [λp2⁺p3⁻ + (1 − λ)p2⁻p3⁺] = λ(s² − p1⁺s − p2⁺p3⁻) + (1 − λ)(s² − p1⁺s − p2⁻p3⁺)
L16 = s(s − p1⁺) − [λp2⁺p3⁺ + (1 − λ)p2⁻p3⁻] = λ(s² − p1⁺s − p2⁺p3⁺) + (1 − λ)(s² − p1⁺s − p2⁻p3⁻).
The total number of line segments joining vertex pairs in the parameter space is 28. However, the actual number of segments we checked is 16. This saving is due to the multilinear version of GKT, which reduces the set to be checked. By testing these segments we find that they are all stable for ǫ < 1. Since ǫ = 1 corresponds to the instability of the vertex polynomial δ3(s), we conclude that the exact value of the stability margin is ǫ = 1. The stability check of the segments can be carried out using the Segment Lemma.

We can also solve this problem by using the Bounded Phase Condition. The set of vertices at ǫ = 1 is ∆V(s) = {Vi, i = 1, 2, · · · , 8} where

V1(s) = s² − p1⁺s − p2⁺p3⁺ = s² + 2s + 2
V2(s) = s² − p1⁺s − p2⁻p3⁺ = s² + 2s + 6
V3(s) = s² − p1⁺s − p2⁺p3⁻ = s² + 2s
V4(s) = s² − p1⁺s − p2⁻p3⁻ = s² + 2s
V5(s) = s² − p1⁻s − p2⁺p3⁺ = s² + 4s + 2
V6(s) = s² − p1⁻s − p2⁻p3⁺ = s² + 4s + 6
V7(s) = s² − p1⁻s − p2⁺p3⁻ = s² + 4s
V8(s) = s² − p1⁻s − p2⁻p3⁻ = s² + 4s.
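As noted above, these vertex polynomials need not be formed symbolically; they can be read off from the eigenvalues (equivalently, np.poly) of the vertex matrices. A quick numerical check at ǫ = 1 (assuming numpy) reproduces the list and its six distinct members:

```python
import itertools
import numpy as np

eps = 1.0
p1v, p2v, p3v = (-3 - eps, -3 + eps), (-2 - eps, -2 + eps), (1 - eps, 1 + eps)

# Characteristic polynomial s^2 - p1 s - p2 p3 of each vertex matrix
polys = [np.poly([[p1, p2], [p3, 0.0]])
         for p1, p2, p3 in itertools.product(p1v, p2v, p3v)]

# e.g. the vertex (p1+, p2+, p3-) = (-2, -1, 0) gives s^2 + 2s
print(sorted({tuple(np.round(p, 6)) for p in polys}))
```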
From the vertex set we see that the difference polynomials Vi(s) − Vj(s) are either constant, first order, anti-Hurwitz, or of the form cs, and each of these forms satisfies the conditions of the Vertex Lemma in Chapter 9. Thus, the stability of the vertices implies that of the edges, and the first encounter with instability can only occur on a vertex. This implies that the smallest value already found, ǫ = 1, for which a vertex becomes unstable, is the correct value of the margin. This conclusion can also be verified by checking the phases of the vertices. Since there are duplicated vertices, we simply plot the phase differences of the six distinct vertices of ∆(jω) with ǫ = 1. Figure 10.20 shows the maximum phase difference plot as a function of frequency. This plot shows that the maximal phase difference never reaches 180 degrees, confirming once again that ǫ = 1 is indeed the true margin.

REMARK 10.5   The phase of a vertex which touches the origin cannot be determined, and only the phase difference over the vertex set is meaningful. The phase condition can therefore only be used to determine whether a line segment, excluding its endpoints, intersects the origin. Thus, the stability of all the vertices must be verified independently.

Example 10.9   Let

dx/dt = (A + BKC)x
Figure 10.20 Maximum phase differences Φ∆V (ω) (in degrees) of vertex polynomials (Example 10.8).
where

dx/dt = ( [−1 0 0; 0 −2 0; 0 0 −3] + [1 0; 0 1; 1 1] [−1 + k1  0; 0  −1 + k2] [0 1 0; 1 0 1] ) x

k1 ∈ [k1⁻, k1⁺] = [−ǫ, ǫ],  k2 ∈ [k2⁻, k2⁺] = [−ǫ, ǫ].

We first find all the vertex polynomials:

∆V(s) := {δi(s, ki) : ki ∈ V, i = 1, 2, 3, 4}

where

V := { (k1⁺, k2⁺), (k1⁻, k2⁻), (k1⁻, k2⁺), (k1⁺, k2⁻) }.
We found that the minimum value of ǫ such that a vertex polynomial just becomes unstable is 1.75. Thus, ǭ = 1.75. Then we proceed by checking either the phase condition or the Segment Lemma. If the Segment Lemma is applied, one must verify the stability of the six segments joining the four vertices in ∆V(s). If the phase condition is applied, one must evaluate the phases of the four vertex polynomials and find the maximum phase difference at each frequency to observe whether it reaches 180°. Again, the calculation shows that the smallest value of ǫ that results in a segment becoming unstable is 1.75. Thus, ǫ = 1.75, and this value is the true margin.

The algorithm can also be applied to the robust stability problem for non-Hurwitz regions. The following example illustrates the discrete-time case.

Example 10.10   Consider the discrete-time system

x(k + 1) = [−0.5  0  k2; 1  0.50  −1; k1  k1  0.3] x(k).

For the nominal values k1⁰ = k2⁰ = 0, the system is Schur stable. We want to determine the maximum value ǫ∗ so that for all parameters lying in the range

k1 ∈ (−ǫ∗, ǫ∗),  k2 ∈ (−ǫ∗, ǫ∗)

the system remains Schur stable. Using the procedure, we find the upper bound ǭ = 0.2745, which is the minimum value of ǫ resulting in a vertex polynomial just becoming unstable. Figure 10.21 shows that the maximum phase difference over all vertices at each θ ∈ [0, 2π) with ǫ = ǭ is less than 180°. Thus, we conclude from the Mapping Theorem that the exact parametric stability margin of this system is 0.2745.

In the next section we describe a Lyapunov function based approach to parameter perturbations in state space systems which avoids calculation of the characteristic polynomial.
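The vertex bound ǭ = 0.2745 of Example 10.10 can be reproduced numerically (a sketch assuming numpy; Schur stability of each vertex is tested via the spectral radius):

```python
import itertools
import numpy as np

def A(k1, k2):
    return np.array([[-0.5, 0.0,  k2],
                     [ 1.0, 0.5, -1.0],
                     [  k1,  k1,  0.3]])

def vertices_schur(eps):
    # All four vertex matrices with (k1, k2) in {-eps, +eps}^2 Schur stable?
    return all(np.max(np.abs(np.linalg.eigvals(A(k1, k2)))) < 1.0
               for k1, k2 in itertools.product((-eps, eps), repeat=2))

lo, hi = 0.0, 1.0
for _ in range(40):
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if vertices_schur(mid) else (lo, mid)
print(round(lo, 4))   # eps_bar = 0.2746 (the text truncates to 0.2745)
```

Here the characteristic polynomial factors as (z + 0.5)(z² − 0.8z + 0.15 + k1(1 − k2)), so the first vertex to lose Schur stability does so at ǫ = (−1 + √2.4)/2 ≈ 0.2746, consistent with the bisection result.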
10.7 Robustness Using a Lyapunov Approach
Suppose that the plant equations in state space form are

ẋ = Ax + Bu
y = Cx.     (10.39)

The controller, of order t, is described by

ẋc = Acxc + Bcy
u = Ccxc + Dcy.     (10.40)
Figure 10.21 Φ∆V(θ) vs. θ (Example 10.10).
The closed-loop system equation is

[ẋ; ẋc] = [A + BDcC  BCc; BcC  Ac] [x; xc]
         = ( [A 0; 0 0t] + [B 0; 0 It] [Dc Cc; Bc Ac] [C 0; 0 It] ) [x; xc]     (10.41)

with the blocks denoted At := [A 0; 0 0t], Bt := [B 0; 0 It], Kt := [Dc Cc; Bc Ac], and Ct := [C 0; 0 It].
Now (10.40) is a stabilizing controller if and only if At + BtKtCt is stable. We consider the compensator order to be fixed at each stage of the design process and therefore drop the subscript t. Consider then the problem of robustification of A + BKC by choice of K when the plant matrices are subject to parametric uncertainty.

Let p = [p1, p2, · · · , pr] denote a parameter vector consisting of physical parameters that enter the state-space description linearly. This situation occurs frequently, since the state equations are often written based on physical considerations. In any case, combinations of primary parameters can always be defined so that the resulting dependence of A, B, C on p is linear. We also assume that the nominal model (10.39) has been determined with the nominal value p0 of p. This allows us to treat p purely as a perturbation with nominal value p0 = 0. Finally, since the perturbation enters at different locations, we consider that A + BKC perturbs to

A + BKC + Σ_{i=1}^r piEi

for given matrices Ei which prescribe the structure of the perturbation. We now state a result that calculates the radius of a spherical stability region in the parameter space p ∈ IR^r. Let the nominal asymptotically stable system be

ẋ(t) = Mx(t) = (A + BKC)x(t)     (10.42)

and the perturbed equation be

ẋ(t) = ( M + Σ_{i=1}^r piEi ) x(t)     (10.43)

where the pi, i = 1, · · · , r, are perturbations of parameters of interest and the Ei, i = 1, · · · , r, are matrices determined by the structure of the parameter perturbations. Let Q > 0 be a positive definite symmetric matrix, σmin(Q) its minimum singular value, and let P denote the unique positive definite symmetric solution of

M^T P + PM + Q = 0.     (10.44)

THEOREM 10.8
The system (10.43) is stable for all pi satisfying

Σ_{i=1}^r |pi|² < σ²min(Q) / Σ_{i=1}^r µi²     (10.45)

where µi := ∥Ei^T P + PEi∥2.
PROOF   Under the assumption that M is asymptotically stable with the stabilizing controller K, choose as a Lyapunov function

V(x) = x^T Px     (10.46)

where P is the symmetric positive definite solution of (10.44). Since M is an asymptotically stable matrix, the existence of such a P is guaranteed by Lyapunov's Theorem. Note that V(x) > 0 for all x ≠ 0 and V(x) → ∞ as ∥x∥ → ∞. We require V̇(x) ≤ 0 for all trajectories of the system, to ensure the stability of (10.43). Differentiating (10.46) along solutions of (10.43) yields

V̇(x) = ẋ^T Px + x^T Pẋ = x^T (M^T P + PM)x + x^T ( Σ_{i=1}^r piEi^T P + Σ_{i=1}^r piPEi ) x.     (10.47)

Substituting (10.44) into (10.47) we have

V̇(x) = −x^T Qx + x^T ( Σ_{i=1}^r piEi^T P + Σ_{i=1}^r piPEi ) x.     (10.48)

The stability requirement V̇(x) ≤ 0 is equivalent to

x^T ( Σ_{i=1}^r piEi^T P + Σ_{i=1}^r piPEi ) x ≤ x^T Qx.     (10.49)

Using the so-called Rayleigh principle,

σmin(Q) ≤ (x^T Qx)/(x^T x) ≤ σmax(Q)   for all x ≠ 0,     (10.50)

we have

σmin(Q) x^T x ≤ x^T Qx.     (10.51)

Thus, (10.49) is satisfied if

x^T ( Σ_{i=1}^r piEi^T P + Σ_{i=1}^r piPEi ) x ≤ σmin(Q) x^T x.     (10.52)

Since

| x^T ( Σ_{i=1}^r pi(Ei^T P + PEi) ) x | ≤ ∥x∥2² ∥ Σ_{i=1}^r pi(Ei^T P + PEi) ∥2 ≤ ∥x∥2² Σ_{i=1}^r |pi| ∥Ei^T P + PEi∥2,

(10.52) is satisfied if

Σ_{i=1}^r |pi| ∥Ei^T P + PEi∥2 ≤ σmin(Q).     (10.53)

Let µi := ∥Ei^T P + PEi∥2 = σmax(Ei^T P + PEi). Then (10.53) can be rewritten as

Σ_{i=1}^r |pi|µi = [ |p1| |p2| · · · |pr| ] [ µ1 µ2 · · · µr ]^T ≤ σmin(Q)     (10.54)

which, by the Cauchy–Schwarz inequality, is satisfied if

∥p∥2² ∥µ∥2² ≤ σ²min(Q).     (10.55)

Using the facts that ∥p∥2² = Σ_{i=1}^r |pi|² and ∥µ∥2² = Σ_{i=1}^r |µi|², we obtain

Σ_{i=1}^r |pi|² ≤ σ²min(Q) / Σ_{i=1}^r µi².     (10.56)

This theorem determines, for the given stabilizing controller K, the quantity

ρ²(K, Q) := σ²min(Q) / Σ_{i=1}^r µi² = σ²min(Q) / Σ_{i=1}^r ∥Ei^T P + PEi∥2²     (10.57)

which determines the range of perturbations for which stability is guaranteed; ρ(K, Q) is therefore the radius of a stability hypersphere in parameter space.
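Theorem 10.8 translates directly into a few lines of numerical linear algebra. The sketch below (assuming numpy; the Lyapunov equation (10.44) is solved via its Kronecker-product form) computes ρ(K, Q) for a simple diagonal example where the bound happens to be tight:

```python
import numpy as np

def stability_radius(M, E_list, Q):
    # Solve M^T P + P M = -Q via vec(M^T P + P M) = (I (x) M^T + M^T (x) I) vec(P)
    n = M.shape[0]
    I = np.eye(n)
    K = np.kron(I, M.T) + np.kron(M.T, I)   # vec is column-stacking ('F' order)
    P = np.linalg.solve(K, -Q.flatten('F')).reshape((n, n), order='F')
    mu = [np.linalg.norm(E.T @ P + P @ E, 2) for E in E_list]  # spectral norms
    smin = np.linalg.svd(Q, compute_uv=False).min()
    return smin / np.sqrt(np.sum(np.square(mu)))  # rho(K, Q), eq. (10.57)

M = np.diag([-1.0, -2.0])                # nominal closed-loop matrix
E1 = np.array([[1.0, 0.0], [0.0, 0.0]])  # perturbation structure: M + p1*E1
rho = stability_radius(M, [E1], np.eye(2))
print(rho)   # 1.0: here M + p1*E1 is indeed stable exactly for |p1| < 1
```

In this example the perturbation acts on a decoupled eigenvalue, so the bound is exact; in general ρ is only a (possibly conservative) lower bound on the true parametric stability margin, as the comparison in Example 10.11 illustrates.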
10.7.1 Robustification Procedure
Using the index obtained in (10.57), we now give an iterative design procedure to obtain the optimal controller K∗ so that (10.57) is as large as possible. For a given K, the largest stability hypersphere we can obtain is

max_Q ρ²(K, Q) = max_Q σ²min(Q) / Σ_{i=1}^r µi²     (10.58)

and the problem of designing a robust controller with respect to structured parameter perturbations can be formulated as follows: find K to maximize (10.58), i.e.,

max_K max_Q ρ²(K, Q) = max_K max_Q σ²min(Q) / Σ_{i=1}^r µi²     (10.59)

subject to all the eigenvalues of A + BKC lying in the open left-half plane, i.e., λ(A + BKC) ⊂ C⁻. Equivalently,

max_{K,Q} ρ²(K, Q) = max_{K,Q} σ²min(Q) / Σ_{i=1}^r µi²     (10.60)

subject to

λ(A + BKC) ⊂ C⁻.

Thus, the following constrained optimization problem is formulated. For the given (A, B, C) with the nominal stabilizing controller K, define

(A + BKC)^T P + P(A + BKC) = −Q := −L^T L     (10.61)

and the optimization problem

min_{K,L} J := min_{K,L} Σ_{i=1}^r ∥Ei^T P + PEi∥2² / σ²min(L^T L)     (10.62)

subject to

Jc := max_{λ ∈ λ(A+BKC)} Re[λ] < 0.

Note that the positive definite matrix Q has been replaced, without loss of generality, by L^T L; for any square full-rank matrix L, L^T L is positive definite symmetric. This replacement also reduces computational complexity. A gradient-based descent procedure to optimize the design parameters K and L can be devised. The gradients of J with respect to K and L are given below. Before we state this result, we consider a slightly more general class of perturbations by letting

A = A0 + Σ_{i=1}^r piAi,   B = B0 + Σ_{i=1}^r piBi     (10.63)

and

Ei = Ai + BiKC.     (10.64)

Then we get M = A0 + B0KC.
THEOREM 10.9 Let J be defined as in (10.62) and let (10.63) and (10.64) hold. Then
a)

∂J/∂L = (2/σ³min(L^T L)) L { σmin(L^T L) V^T − [ Σ_{i=1}^r σ²max(Ei^T P + PEi) ] (umvm^T + vmum^T) }

where V satisfies

(A0 + B0KC)V + V(A0 + B0KC)^T = − Σ_{i=1}^r σmax(Ei^T P + PEi) [ Ei(uaivai^T + vaiuai^T) + (uaivai^T + vaiuai^T)Ei^T ],     (10.65)

vai and uai are the left and right singular vectors corresponding to σmax(Ei^T P + PEi), respectively, and vm and um are the left and right singular vectors corresponding to σmin(L^T L).

b)

∂J/∂K = (2/σ²min(L^T L)) { Σ_{i=1}^r σmax(Ei^T P + PEi) Bi^T P (vaiuai^T + uaivai^T) + B^T PV^T } C^T     (10.66)

c)

∂Jc/∂Kij = Re [ (v^T B0 (∂K/∂Kij) Cw) / (v^T w) ]     (10.67)
where v and w are the left and right eigenvectors of (A0 + B0KC) corresponding to λmax, the eigenvalue with the largest real part. The proof of this theorem is omitted. The gradient can be computed by solving the two equations (10.65) and (10.66). A gradient-based algorithm for enlarging the radius of the stability hypersphere ρ(K, Q) by iteration on (K, Q) can be devised using these gradients. This procedure is somewhat ad hoc, but it can nevertheless be useful.

Example 10.11   A VTOL helicopter is described by the linearized dynamic equation

d/dt [x1; x2; x3; x4] = [ −0.0366   0.0271   0.0188  −0.4555 ] [x1; x2; x3; x4] + [ 0.4422   0.1761 ] [u1; u2]
                        [  0.0482  −1.0100   0.0024  −4.0208 ]                    [     p3  −7.5922 ]
                        [  0.1002       p1  −0.7070       p2 ]                    [ −5.5200   4.4900 ]
                        [       0        0        1        0 ]                    [       0        0 ]

y = [0 1 0 0] [x1; x2; x3; x4]

where

x1 = horizontal velocity, knots
x2 = vertical velocity, knots
x3 = pitch rate, degrees/sec
x4 = pitch angle, degrees
u1 = collective pitch control
u2 = longitudinal cyclic pitch control.
The given dynamic equation is computed for typical loading and flight conditions of the VTOL helicopter at the airspeed of 135 knots. As the airspeed changes, all the elements of the first three rows of both matrices also change. The most significant changes take place in the elements p1, p2, and p3. Therefore, in the following all the other elements are assumed to be constants. The following bounds on the parameters are given:

p1 = 0.3681 + ∆p1,  |∆p1| ≤ 0.05
p2 = 1.4200 + ∆p2,  |∆p2| ≤ 0.01
p3 = 3.5446 + ∆p3,  |∆p3| ≤ 0.04.

Let ∆p = [∆p1, ∆p2, ∆p3] and compute max ∥∆p∥2 = 0.0648; a robust design must therefore achieve

ρ > max ∥∆p∥2 = 0.0648.     (10.68)

The eigenvalues of the open-loop plant are

λ(A) = { 0.27579 ± j0.25758, −0.2325, −2.072667 }.

The nominal stabilizing controller is given by

K0 = [ −1.63522 ]
     [  1.58236 ].
Starting with this nominal stabilizing controller, we performed the robustification procedure. For this step the initial value is chosen as

L0 = [  1.0  0.0  −0.50  0.06 ]
     [  0.5  1.0  −0.03  0.00 ]
     [ −0.1  0.4   1.00  0.14 ]
     [  0.2  0.6  −0.13  1.50 ].

The nominal values give the stability margin

ρ0 = 0.02712 < 0.0648 = ∥∆p∥2

which does not satisfy the requirement (10.68). After 26 iterations we have

ρ∗ = 0.12947 > 0.0648 = ∥∆p∥2

which does satisfy the requirement. The robust stabilizing 0th-order controller computed is

K∗ = [ −0.996339890 ]
     [  1.801833665 ]

and the corresponding optimal L∗, P∗ and the closed-loop eigenvalues are

L∗ = [  0.51243  0.02871  −0.13260   0.05889 ]
     [ −0.00040  0.39582  −0.07210  −0.35040 ]
     [  0.12938  0.08042   0.51089  −0.01450 ]
     [ −0.07150  0.34789  −0.02530   0.39751 ]

P∗ = [  2.00394  −0.38940  −0.50010  −0.49220 ]
     [ −0.38940   0.36491   0.46352   0.19652 ]
     [ −0.50010   0.46352   0.61151   0.29841 ]
     [ −0.49220   0.19652   0.29841   0.98734 ]

λ(A + BK∗C) = { −18.396295, −0.247592 ± j1.2501375, −0.0736273 }.

This example can also be solved by the Mapping Theorem technique previously described. With the controller K∗ obtained by the robustification procedure given above, we obtained a stability margin of ǫ∗ = 1.257568, which is much greater than the value obtained by the Lyapunov stability based method. In fact, the Lyapunov stability based method gives ρ∗ = 0.12947, which is equivalent to ǫ = 0.07475. This comparison shows that the Mapping Theorem based technique gives a much better approximation of the stability margin than the Lyapunov-based technique.
10.8 Exercises
10.1 Consider polynomials with the uncertain parameters being the coefficients. Calculate the radius of the Hurwitz stability ball in the coefficient space for each of the polynomials
(a) (s + 1)³
(b) (s + 2)³
(c) (s + 3)³
(d) (s + 1)(s² + s + 1)
(e) (s + 2)(s² + s + 1)
(f) (s + 1)(s² + 2s + 2)
(g) (s + 2)(s² + 2s + 2)
considering both the cases where the leading coefficient is fixed and where it is subject to perturbation.

10.2 Derive a closed-form expression for the radius of the Hurwitz stability ball for the polynomial a2s² + a1s + a0.

10.3 Calculate the radius of the Schur stability ball in the coefficient space for the polynomials
(a) z³(z + 0.5)³
(b) (z − 0.5)³
(c) z(z² + z + 0.5)
(d) z(z + 0.5)(z − 0.5)
(e) z(z² − z + 0.5)
(f) z²(z + 0.5)
considering both the cases where the leading coefficient is fixed and where it is subject to perturbation.

10.4 Consider the feedback system shown in Figure 10.22. The characteristic polynomial of the closed-loop system is
δ(z) = (α0 + β0) + (α1 + β1)z + β2z² = δ0 + δ1z + δ2z²
Figure 10.22 Feedback control system with forward transfer function (α0 + α1z)/(β0 + β1z + β2z²).

with nominal value

δ⁰(z) = 1/2 − z + z².

Find ǫmax so that with δi ∈ [δi⁰ − ǫ, δi⁰ + ǫ] the closed-loop system is robustly Schur stable. Answer: ǫmax = 0.17.
10.5 Consider the feedback system shown in Figure 10.23.

Figure 10.23 Feedback control system with forward transfer function (n0 + n1z)/(d0 + d1z + d2z²).

Assume that the nominal values of the coefficients of the transfer function are
[n0⁰, n1⁰] = [−1, 2] and [d0⁰, d1⁰, d2⁰] = [−1, −2, 8].
Now let ni ∈ [ni⁰ − ǫ, ni⁰ + ǫ] and dj ∈ [dj⁰ − ǫ, dj⁰ + ǫ], and find ǫmax for robust Schur stability. Answer: ǫmax = 1.2.

10.6 Consider the standard unity feedback control system with transfer functions G(s) and C(s) in the forward path. Let
G(s) = (p0 + p1s)/(s²(s + q0))  and  C(s) = 1.
Determine the stability margin in the space of parameters (p0 , p1 , q0 ) assuming the nominal value (p00 , p01 , q00 ) = (1, 1, 2).
10.7 Calculate the weighted ℓ1, ℓ2, and ℓ∞ stability margins in the coefficient space for the polynomial δ(s) = s⁴ + 6s³ + 13s² + 12s + 4, choosing the weights proportional to the magnitude of each nominal value.

10.8 For the nominal polynomial δ(s) = s³ + 5s² + (3 − j)s + 6 + j, find the largest number ǫmax such that all polynomials with coefficients centered at the nominal value and of radius ǫmax are Hurwitz stable.

10.9 Consider a feedback control system with the plant transfer function
G(s) =
(n0 + n1s + · · · + nn−1s^{n−1} + nns^n)/(d0 + d1s + · · · + dn−1s^{n−1} + dns^n)
and the controller transfer function
C(s) = (b0 + b1s + · · · + bm−1s^{m−1} + bms^m)/(a0 + a1s + · · · + am−1s^{m−1} + ams^m).
Let δ(s) be the closed-loop characteristic polynomial with coefficient vector
δ := [δ0 δ1 · · · δn+m−1 δn+m]^T
and let x := [b0 b1 · · · bm a0 a1 · · · am]^T. Then we can write δ = Mp x, where Mp is the (n + m + 1) × 2(m + 1) Sylvester-type matrix built from the plant coefficients: one block of m + 1 successively shifted copies of [d0, d1, · · · , dn] and one block of m + 1 successively shifted copies of [n0, n1, · · · , nn], arranged so that δ = Mp x holds. Prove that x stabilizes the plant if there exists δ such that
d(δ, Mp) < ρ(δ)
where ρ(δ) is the Euclidean radius of the largest stability hypersphere centered at δ and d(δ, Mp) is the Euclidean distance between δ and the subspace spanned by the columns of Mp.
10.10 In a standard unity feedback control system, the plant is
G(s) = (a1s − 1)/(s² + b1s + b0).
The parameters a1, b0 and b1 have nominal values a1⁰ = 1, b1⁰ = −1, b0⁰ = 2 and are subject to perturbation. Consider the constant controller C(s) = K. Determine the range of values SK of K for which the nominal system remains Hurwitz stable. For each value of K ∈ SK find the ℓ2 stability margin ρ(K) in the space of the three parameters subject to perturbation. Determine the optimally robust compensator value K ∈ SK.

10.11 Repeat Exercise 10.10 using the ℓ∞ stability margin.

10.12 Consider the polynomial
s⁴ + s³(12 + p1 + p3) + s²(47 + 10.75p1 + 0.75p2 + 7p3 + 0.25p4) + s(32.5p1 + 7.5p2 + 12p3 + 0.5p4) + 18.75p1 + 18.75p2 + 10p3 + 0.5p4.
Determine the ℓ2 Hurwitz stability margin in the parameter space [p1, p2, p3, p4].

10.13 The nominal polynomial (i.e., with ∆pi = 0) in Exercise 4.3 has roots −5, −5, −1 − j, −1 + j. Let the stability boundary consist of two circles of radius 0.25 centered at −1 + j and −1 − j respectively, and a circle of radius 1 centered at −5. Determine the maximal ℓ2 stability ball for this case.

10.14 Repeat Exercise 10.13 with the ℓ∞ stability margin.

10.15 The transfer function of a plant in a unity feedback control system is given by
G(s) = 1/(s³ + d2(p)s² + d1(p)s + d0(p))
where p = [p1, p2, p3] and
d2(p) = p1 + p2,  d1(p) = 2p1 − p3,  d0(p) = 1 − p2 + 2p3.
The nominal parameter vector is p0 = [3, 3, 1] and the controller proposed is
C(s) = Kc (1 + 0.667s)/(1 + 0.0667s)
with the nominal value of the gain being Kc = 15. Compute the weighted ℓ∞ Hurwitz stability margin with weights [w1, w2, w3] = [1, 1, 0.1].

10.16 In Exercise 10.15, assume that Kc is the only adjustable design parameter and let ρ(Kc) denote the weighted ℓ∞ Hurwitz stability margin with weights as in Exercise 10.15. Determine the range of values of Kc for which all perturbations within the unit weighted ℓ∞ ball are stabilized, i.e., for which ρ(Kc) > 1.

10.17 Repeat Exercises 10.15 and 10.16 with the weighted ℓ2 stability margin.

10.18 In Exercises 10.16 and 10.17, determine the optimally robust controllers in the weighted ℓ2 or weighted ℓ∞ stability margins, over the range of values of Kc which stabilizes the nominal plant.

10.19 Consider a complex plane polygon P with consecutive vertices z1, z2, · · · , zN, N > 2 (with zN+1 := z1). Show that a point z is excluded from (does not belong to) P if and only if
min_{1≤i≤N} Im[(z − zi)/(zi+1 − zi)] < 0.
Use this result to develop a test for stability of a polytopic family.

10.20 In a discrete-time control system the transfer function of the plant is
G(z) =
(z − 2)/((z + 1)(z + 2)).
Determine the transfer function of a first-order controller
C(z) = (α0 + α1z)/(z + β0)
which results in a closed-loop system that is deadbeat, i.e., the characteristic roots are all at the origin. Determine the largest ℓ2 and ℓ∞ stability balls centered at this controller in the space of controller parameters α0, α1, β0 for which closed-loop Schur stability is preserved.

10.21 Consider the Hurwitz stability of the family of polynomials
δ(s, p) = p1s(s + 9.5)(s − 1) − p2(6.5s + 0.5)(s − 2)
where p1 ∈ [1, 1.1] and p2 ∈ [1.2, 1.25]. Show that the family is robustly stable. Determine the worst-case stability margins over the given uncertainty set using the ℓ2 and then the ℓ∞ norm.

10.22 For the system given in Example 10.7, estimate the size of the sector containing nonlinear gains for which the entire multilinear family of closed-loop systems remains robustly absolutely stable. Estimate the sectors using respectively the
(a) Lur'e criterion,
(b) Popov criterion,
(c) Circle criterion.
10.23 In the feedback system shown below in Figure 10.24.
Figure 10.24 Feedback system (Exercise 10.23).
Let

G(s) = (s + α)/(s + β), C1(s) = (s + 2)/(s + 3), C3(s) = 2/(s + 4)

with the nominal values α0 = 1, β0 = −5.
(a) Find a controller C2(s) that stabilizes the nominal closed-loop system.
(b) With the controller obtained in (a), let α ∈ [1 − ǫ, 1 + ǫ] and β ∈ [−5 − ǫ, −5 + ǫ]. Find the maximum value of ǫ for which the closed-loop system remains robustly stable.
10.24 Consider the feedback system shown in Figure 10.25.
Figure 10.25 Feedback system (Exercise 10.24).
Let

C(s) = 1/(s + 1), P1(s) = p0/(s + q0), P2(s) = p1/(s + q1)
and let the parameters range over q0 ∈ [0.5, 1.5], p0 ∈ [0.5, 1.5], q1 ∈ [1.5, 2.5], p1 ∈ [0.5, 1.5].
(a) Verify that the closed-loop system is robustly stable.
(b) Construct the polytopic system M̄(s).
(c) Using the polytopic system M̄(s), estimate the maximum guaranteed gain margin measured at the loop breaking point “x.”
(d) Likewise estimate the maximum guaranteed phase margin at “x.”
10.25 In the feedback system of Exercise 10.24, suppose we want to expand the range of allowable parameter variations by letting the parameters vary as q0 ∈ [0.5 − ǫ, 1.5 + ǫ],
p0 ∈ [0.5 − ǫ, 1.5 + ǫ],
q1 ∈ [1.5 − ǫ, 2.5 + ǫ],
p1 ∈ [0.5 − ǫ, 1.5 + ǫ].
What is the maximum value of ǫ for which the polytopic system M̄(s) does not lose robust stability?
10.26 Suppose that unstructured additive uncertainty is introduced into the system in Exercise 10.24 as shown in Figure 10.26:
Figure 10.26 Feedback system with additive H∞ uncertainty (Exercise 10.26).
Estimate the maximum additive H∞ uncertainty that the closed-loop system can tolerate, by using the polytopic system M̄(s).
Figure 10.27 Feedback system with multiplicative H∞ uncertainty (Exercise 10.27).
10.27 Suppose that unstructured perturbation is applied to the system in Exercise 10.3 in a multiplicative form as shown below in Figure 10.27. Find the maximum multiplicative H∞ stability margin possessed by the closed-loop system.
10.28 Suppose that a time-delay block e^{−sT} is inserted at “x” in the feedback system of Exercise 10.23 (see Figure 10.25). Find a lower bound on the time delay that can be robustly tolerated by the closed-loop system, using the polytopic system M̄(s).
10.29 Consider the feedback system shown below in Figure 10.28.
Figure 10.28 Feedback system (Exercise 10.29).
Let

C(s) = 1/(s + 1), P1(s) = (s + β0)/(α1 s + α0), P2(s) = (s + δ0)/(γ1 s + γ0)

and

α1 ∈ [1, 2], β0 ∈ [1, 2], γ1 ∈ [1, 5], α0 ∈ [1, 2], δ0 ∈ [1, 3], γ0 ∈ [6, 10].

(a) Verify that the closed-loop system is robustly stable.
(b) Construct the polytopic system M̄(s).
(c) Using the polytopic system M̄(s), find the maximum guaranteed gain margin measured at the loop breaking point “x.”
(d) Similarly estimate the maximum guaranteed phase margin measured at the point “x.”
10.30 In the feedback system of Exercise 10.29, suppose we want to expand the range of allowable parameter variations by letting the parameters vary as α1 ∈ [1 − ǫ, 2 + ǫ], β0 ∈ [1 − ǫ, 2 + ǫ], γ1 ∈ [1 − ǫ, 5 + ǫ], α0 ∈ [1 − ǫ, 2 + ǫ], δ0 ∈ [1 − ǫ, 3 + ǫ], γ0 ∈ [6 − ǫ, 10 + ǫ]. Calculate the maximum value of ǫ for which the polytopic system M̄(s) remains robustly stable.
10.31 Consider the feedback system shown below (Figure 10.29):
Figure 10.29 Feedback system (Exercise 10.31).
Let

C(s) = 2, P1(s) = (s + β0)/(s^2 + α1 s + α0), P2(s) = −(s + δ0)/(s + γ0)

with α1 ∈ [3, 6], α0 ∈ [1, 2], δ0 ∈ [1, 2], γ0 ∈ [4, 6], β0 ∈ [3, 5].
(a) Check whether the closed-loop system is robustly stable.
(b) Construct the polytopic system M̄(s).
(c) Using the polytopic system M̄(s), estimate the maximum guaranteed gain margin at the loop breaking point “x.”
(d) Estimate the maximum guaranteed phase margin at “x.”
10.32 Consider the following transfer function matrix:

$$G(s) = \begin{bmatrix} \frac{1}{s+1} & \frac{1}{s+2} \\ \frac{1}{s+3} & \frac{1}{s+4} \end{bmatrix}.$$
Suppose that this plant is perturbed by feedback perturbations as shown in Figure 10.30:
Figure 10.30 Feedback system.
Compute the complex matrix stability radius with respect to entries of ∆. 10.33 In Exercise 10.32, suppose that all the entries of ∆ perturb within the interval [−ǫ, +ǫ]. Compute a stability radius ǫmax such that the closed-loop system remains stable. Hint: The characteristic equation of the closed-loop system is multilinear in the parameters. One can apply the Mapping Theorem based technique to this characteristic polynomial.
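As a worked illustration of the deadbeat design asked for in Exercise 10.20: forcing the closed-loop characteristic polynomial (z + 1)(z + 2)(z + β0) + (z − 2)(α1 z + α0) to equal z^3 yields three linear equations in (α0, α1, β0). A numerical sketch (variable names are ours, not from the text):

```python
import numpy as np

# Deadbeat condition: coefficients of z^2, z^1, z^0 must all vanish in
#   (z+1)(z+2)(z+b0) + (z-2)(a1*z + a0).
# z^2:        a1 + b0 = -3
# z^1:  a0 - 2*a1 + 3*b0 = -2
# z^0:      -2*a0 + 2*b0 =  0
A = np.array([[0.0,  1.0, 1.0],
              [1.0, -2.0, 3.0],
              [-2.0, 0.0, 2.0]])
rhs = np.array([-3.0, -2.0, 0.0])
a0, a1, b0 = np.linalg.solve(A, rhs)

# Verify: the closed-loop polynomial collapses to z^3.
plant = np.polymul([1.0, 3.0, 2.0], [1.0, b0])   # (z+1)(z+2)(z+b0)
ctrl = np.polymul([1.0, -2.0], [a1, a0])         # (z-2)(a1*z + a0)
charpoly = np.polyadd(plant, ctrl)
```

The solution α0 = β0 = −4/3, α1 = −5/3 places all three closed-loop roots at the origin; the ℓ2 and ℓ∞ stability balls asked for in the exercise would then be computed around this point in (α0, α1, β0)-space.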
10.9 Notes and References
The ℓ2 stability hypersphere in coefficient space was first calculated by Soh, Berger, and Dabke [180]. Here we have endeavored to present this result for both the Hurwitz and Schur cases in its greatest generality. Exercise 10.9 is taken from Bhattacharyya, Keel, and Howze [27]. The problem of calculating the ℓ2 stability margin in parameter space for Hurwitz stability in the linear case was solved in Biernacki, Hwang, and Bhattacharyya [33], Chapellat, Bhattacharyya, and Keel [48], Chapellat and Bhattacharyya [47].
The problem of computing the real stability margin has been dealt with by Hinrichsen and Pritchard [94, 93, 92], Tesi and Vicino [187, 185], Vicino, Tesi, and Milanese [196], DeGaston and Safonov [65], and Sideris and Sánchez Peña [177]. Real parametric stability margins for discrete time control systems were calculated in Aguirre, Chapellat, and Bhattacharyya [1]. The calculation of the ℓ2 stability margin in time-delay systems (Theorem 10.4) is due to Kharitonov [138]. The reduction of the ℓ∞ problem to a linear programming problem was pointed out by Tesi and Vicino in [187]. The problem of discontinuity of the parametric stability margin was highlighted by Barmish, Khargonekar, Shi, and Tempo [14], and Example 10.2 is adapted from [14]. A thorough analysis of the problem of discontinuity of the parametric stability margin has been carried out by Vicino and Tesi in [195]. This idea was explored in Fu and Barmish [81] for polytopes of polynomials and by Fu, Olbrot, and Polis [82] for polytopic systems with time delay. Exercise 10.19 is taken from Kogan [141]. The image set referred to here is also called the value set in the literature in this field. The Mapping Theorem was stated and proved in the 1963 book of Zadeh and Desoer [208]. It was effectively used in parametric stability margin calculations by DeGaston and Safonov [65] and Sideris and Sánchez Peña [177]. The mixed uncertainty stability margin calculations given in Section 10.4.1 were developed by Keel and Bhattacharyya [119, 120]. Vicino, Tesi, and Milanese [196] gave an algorithm for calculating parametric stability margins in the case of nonlinearly correlated parameter dependence. In Hollot and Xu [104], Polyak [169], and Anderson, Kraus, Mansour, and Dasgupta [6], conditions under which the image set of a multilinear interval polynomial reduces to a polygon are investigated.
In 1980, Patel and Toda [165] gave a sufficient condition for the robust stability of interval matrices using unstructured perturbations. Numerous results have followed since then. Most of these follow-up results treated the case of structured perturbations because of its practical importance over the unstructured perturbation case. Some of these works are found in Yedavalli [205], Yedavalli and Liang [206], Martin [154], Zhou and Khargonekar [214], Keel, Bhattacharyya, and Howze [130], Sezer and Šiljak [176], Leal and Gibson [142], Foo and Soh [78], and Tesi and Vicino [186]. Theorem 10.8 and the formula for the gradients given in Section 10.7.1 are proved in Keel, Bhattacharyya, and Howze [130]. Most of the cited works employed either the Lyapunov equation or norm inequalities and provided sufficient conditions with various degrees of inherent conservatism. Using robust eigenstructure assignment techniques, Keel, Lim, and Juang [131] developed a method to robustify a controller such that the closed-loop eigenvalues remain inside circular regions centered at the nominal eigenvalues while allowing the maximum parameter perturbation. The algorithm for determining the stability of an interval matrix is reported in Keel and Bhattacharyya [117]. Some of the difficulties of dealing with general forms of interval matrices, along with various results, are discussed in Mansour [146].
11 STABILITY OF A POLYTOPE
In this chapter, we deal with robust stability problems where the characteristic polynomial family to be tested is a polytopic family. We first establish the stability detecting property of exposed edges and some extremal properties of edges and vertices. Then we deal with the robust stability of a polytopic family of polynomials with respect to an arbitrary region. The Edge Theorem shows that the root space of the entire affine linear family can be obtained from the root set of the exposed edges. Since the exposed edges are one-parameter sets of polynomials, this theorem effectively and constructively reduces the problem of determining the root space under multiple parameter uncertainty to a set of one-parameter root locus problems. The subsequent section presents Kharitonov’s Theorem on robust Hurwitz stability of interval polynomials, dealing with both the real and complex cases. This elegant result forms the basis for many of the results on robustness under parametric uncertainty. Finally, we study the Hurwitz stability of a family of polynomials which consists of a linear combination, with fixed polynomial coefficients, of interval polynomials. The Generalized Kharitonov Theorem given here provides a constructive solution to this problem by reducing it to the Hurwitz stability of a prescribed set of extremal line segments. The number of line segments in this test set is independent of the dimension of the parameter space. Under special conditions on the fixed polynomials this test set reduces to a set of vertex polynomials. This test set has many important extremal properties that are useful in control systems. We conclude with an extension of Kharitonov-type results for polynomials containing parameters appearing polynomially, and give an application to fixed-order multivariable controller synthesis.
11.1 Introduction
Many control systems contain various uncertain or unknown parameters that vary in intervals. Such systems may have characteristic polynomials that vary in a polytope in the coefficient space. A polytopic family of polynomials can be thought of as the convex hull of a finite number of points (polynomials).
Mathematically, this can be represented as the family

P(s) = λ1 P1(s) + · · · + λn Pn(s)

where the Pi(s) are fixed real polynomials and the λi are real with λi ≥ 0 and Σ λi = 1. An alternative representation of a polytopic family is

P(s) = a1 Q1(s) + a2 Q2(s) + · · · + am Qm(s)

where each real parameter ai varies independently in the interval [ai^-, ai^+]. In other words, the parameter vector a := [a1, · · ·, am] varies in the hypercube

A := {a : ai^- ≤ ai ≤ ai^+, i = 1, · · ·, m}.

In some problems, a polytopic family may arise because the system characteristic polynomial

δ(s, p) := δ0(p) + δ1(p)s + · · · + δn(p)s^n

has coefficients δi(p) which are affine functions of the parameter vector p. If p varies within a hypercube, it generates a polytopic family of characteristic polynomials. In control problems the elements of p could be physical parameters belonging to the plant or design parameters belonging to the controller.
11.2 Stability of Polytopic Families

In this section we deal with the case where each component pi of the real parameter vector p := [p1, p2, · · ·, pl]^T can vary independently of the other components. In other words, we assume that p lies in an uncertainty set which is box-like:

Π := {p : pi^- ≤ pi ≤ pi^+, i = 1, 2, · · ·, l}.   (11.1)

Represent the system characteristic polynomial

δ(s) := δ0 + δ1 s + δ2 s^2 + · · · + δn s^n   (11.2)

by the vector δ := [δ0, δ1, · · ·, δn]^T. We assume that each component δi of δ is a linear function of p. To be explicit,

δ(s, p) := δ0(p) + δ1(p)s + δ2(p)s^2 + · · · + δn(p)s^n
        := p1 Q1(s) + p2 Q2(s) + · · · + pl Ql(s) + Q0(s).

By equating coefficients of like powers of s we can write this as δ = T p + b, where δ, p and b are column vectors, and

T : IR^l −→ IR^{n+1}

is a linear map. In other words, δ is a linear (or affine) transformation of p. Now introduce the coefficient set

∆ := {δ : δ = T p + b, p ∈ Π}   (11.3)

of the set of polynomials

∆(s) := {δ(s, p) : p ∈ Π}.   (11.4)

11.2.1 Exposed Edges and Vertices
To describe the geometry of the set ∆ (equivalently ∆(s)) we introduce some basic facts regarding polytopes. Note that the set Π is in fact an example of a special kind of polytope. In general, a polytope in n-dimensional space is the convex hull of a set of points called generators in this space. A set of generators is minimal if removal of any point from the set alters the convex hull. A minimal set of generators is unique and constitutes the vertex set of the polytope. Consider the special polytope Π, which we call a box. The vertices V of Π are obtained by setting each pi to pi^- or pi^+:

V := {p : pi = pi^- or pi = pi^+, i = 1, 2, · · ·, l}.   (11.5)

The exposed edges E of the box Π are defined as follows. For fixed i,

Ei := {p : pi^- ≤ pi ≤ pi^+, pj = pj^- or pj^+, for all j ≠ i}   (11.6)

then

E := ∪_{i=1}^{l} Ei.   (11.7)

We use the notational convention

∆ = T Π + b   (11.8)

as an alternative to (11.3).

LEMMA 11.1
Let Π be a box and T a linear map; then ∆ is a polytope. If ∆V and ∆E denote the vertices and exposed edges of ∆, and V and E denote the vertices and exposed edges of Π, we have

∆V ⊂ T V + b,  ∆E ⊂ T E + b.
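Lemma 11.1 can be exercised numerically: build T and b for a two-parameter family, map the box vertices, and confirm that any member of the family is the corresponding convex combination of the mapped vertices. The family below is an illustrative choice of ours, not one from the text:

```python
import numpy as np
from itertools import product

# Illustrative two-parameter family (not from the text):
# delta(s, p) = p1*Q1(s) + p2*Q2(s) + Q0(s), coefficients in ascending powers of s.
Q0 = np.array([2.0, 2.0, 8.0, 1.0])   # 2 + 2s + 8s^2 + s^3
Q1 = np.array([2.0, 3.0, 0.0, 0.0])   # 2 + 3s
Q2 = np.array([0.0, 5.0, 1.0, 0.0])   # 5s + s^2
T = np.column_stack([Q1, Q2])         # delta = T p + b
b = Q0

box = [(1.0, 2.0), (9.0, 11.0)]       # ranges of p1 and p2
vertices = [np.array(v) for v in product(*box)]
vertex_images = [T @ v + b for v in vertices]   # the set T V + b

# Any p in the box is a convex combination of the box vertices, and the
# affine map delta = T p + b carries that combination to the identical
# convex combination of the vertex images.
lam = np.array([0.1, 0.2, 0.3, 0.4])
p = sum(l * v for l, v in zip(lam, vertices))
delta_p = T @ p + b
combo = sum(l * img for l, img in zip(lam, vertex_images))
```

This is only a sanity check of the affine structure; the containment statements of the lemma concern which mapped vertices and edges survive as vertices and exposed edges of ∆.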
This lemma shows us that the polynomial set ∆(s) is a polytopic family and that the vertices and exposed edges of ∆ can be obtained by mapping the vertices and exposed edges of Π. Since Π is an axis parallel box its vertices and exposed edges are easy to identify. For an arbitrary polytope in n dimensions, it is difficult to distinguish the “exposed edges” computationally. This lemma is therefore useful even though the mapped edges and vertices of Π contain more elements than only the vertices and exposed edges of ∆. These facts are now used to characterize the image set of the family ∆(s). Fixing s = s∗, we let ∆(s∗) denote the image set of the points δ(s∗, p) in the complex plane obtained by letting p range over Π:

∆(s∗) := {δ(s∗, p) : p ∈ Π}.   (11.9)

Introduce the vertex polynomials:

∆V(s) := {δ(s, p) : p ∈ V}

and the edge polynomials:

∆E(s) := {δ(s, p) : p ∈ E}.

Their respective images at s = s∗ are

∆V(s∗) := {δ(s∗, p) : p ∈ V}   (11.10)

and

∆E(s∗) := {δ(s∗, p) : p ∈ E}.   (11.11)

The set ∆(s∗) is a convex polygon in the complex plane whose vertices and exposed edges originate from the vertices and edges of Π. More precisely, we have the following lemma which describes the geometry of the image set in this polytopic case. Let co(·) denote the convex hull of the set (·).

LEMMA 11.2
1) ∆(s∗) = co(∆V(s∗)),
2) The vertices of ∆(s∗) are contained in ∆V(s∗),
3) The exposed edges of ∆(s∗) are contained in ∆E(s∗).

This lemma states that the vertices and exposed edges of the complex plane polygon ∆(s∗) are contained in the images at s∗ of the mapped vertices and edges of the box Π. We illustrate this in Figure 11.1. The above results will be useful in determining the robust stability of the family ∆(s) with respect to an open stability region S. In fact they are the key to establishing the important result that the stability of a polytopic family can be determined from that of its edges.
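A quick numerical probe of Lemma 11.2 (on an illustrative two-parameter family of our own choosing): the image set ∆(s∗) is the convex hull of the four vertex images, and every corner of that hull is one of the vertex images.

```python
import numpy as np
from itertools import product

def hull_vertices(points):
    """Andrew's monotone chain; returns the vertices of the convex hull."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        out = []
        for p in seq:
            while len(out) >= 2:
                (x1, y1), (x2, y2) = out[-2], out[-1]
                # pop while the new point does not make a strict left turn
                if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:
                    out.pop()
                else:
                    break
            out.append(p)
        return out
    lower = half(pts)
    upper = half(list(reversed(pts)))
    return lower[:-1] + upper[:-1]

# Illustrative family: delta(s, p) = p1*(2 + 3s) + p2*(5s + s^2) + (2 + 2s + 8s^2 + s^3)
def delta(s, p1, p2):
    return p1 * (2 + 3 * s) + p2 * (5 * s + s ** 2) + (2 + 2 * s + 8 * s ** 2 + s ** 3)

s_star = 1j * 2.0
corners = list(product((1.0, 2.0), (9.0, 11.0)))
images = [delta(s_star, *c) for c in corners]
pts = [(z.real, z.imag) for z in images]
hull = hull_vertices(pts)
```

Because the map p → δ(s∗, p) is affine and the box is a rectangle, the image set here is a parallelogram, so all four vertex images appear as hull corners.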
Figure 11.1 Vertices and edges of Π, ∆, and ∆(s∗).
Assumption 4
Assume that 1) the family of polynomials ∆(s) is of constant degree, and 2) there exists at least one point s0 ∈ ∂S such that 0 ∉ ∆(s0).

THEOREM 11.1
Under the above assumptions ∆(s) is stable with respect to S if and only if ∆E(s) is stable with respect to S.

PROOF Following the image set approach described in Section 10.2.2, it is clear that under the assumption of constant degree and the existence of at least one stable polynomial in the family, the stability of ∆(s) is guaranteed if the image set ∆(s∗) excludes the origin for every s∗ ∈ ∂S. Since ∆(s0) excludes the origin, it follows that ∆(s∗) excludes the origin for every s∗ ∈ ∂S if and only if the edges of ∆(s∗) exclude the origin. Here we have implicitly used the assumption that the image set moves continuously with respect to s∗. From Lemma 11.2, this is equivalent to the condition that ∆E(s∗) excludes the origin for every s∗ ∈ ∂S. This condition is finally equivalent to the stability of the set ∆E(s).

Theorem 11.1 has established that to determine the stability of ∆(s), it suffices to check the stability of the exposed edges. The stability verification of
a multiparameter family is therefore reduced to that of a set of one-parameter families. In the next subsection we elaborate on this approach.
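A direct, if brute-force, way to use Theorem 11.1 numerically is to grid each exposed edge of the box and test Hurwitz stability pointwise. A sketch on a small illustrative family (our own choice, not from the text):

```python
import numpy as np
from itertools import product

def hurwitz_stable(coeffs_desc):
    """Check that all roots lie in the open left half plane."""
    return bool(np.all(np.roots(coeffs_desc).real < 0))

# Illustrative family: delta(s, p) = s^3 + p1*s^2 + p2*s + 1.
def delta_coeffs(p1, p2):
    return [1.0, p1, p2, 1.0]

def edges(box):
    """Yield (sweeping index, fixed corner values of the other parameters)."""
    n = len(box)
    for i in range(n):
        others = [box[j] for j in range(n) if j != i]
        for corner in product(*others):
            yield i, corner

def edges_stable(box, ngrid=51):
    """Grid every exposed edge of the box and test stability pointwise."""
    for i, corner in edges(box):
        for lam in np.linspace(0.0, 1.0, ngrid):
            pi = box[i][0] + lam * (box[i][1] - box[i][0])
            p = list(corner)
            p.insert(i, pi)
            if not hurwitz_stable(delta_coeffs(*p)):
                return False
    return True

stable_box = edges_stable([(2.0, 4.0), (1.0, 3.0)])     # p1*p2 >= 2 > 1: stable
unstable_box = edges_stable([(0.5, 0.9), (0.5, 0.9)])   # p1*p2 < 1: unstable
```

For the cubic s^3 + p1 s^2 + p2 s + 1 the Hurwitz condition is p1 p2 > 1 (with positive coefficients), which is what the two box choices above exercise; gridding only approximates the continuum edge test, so in practice one refines the grid or uses a segment stability criterion.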
11.2.2 Bounded Phase Conditions for Checking Robust Stability of Polytopes

From a computational point of view it is desirable to introduce some further simplification into the problem of verifying the stability of a polytope of polynomials. Note that from Theorem 10.2 (Zero Exclusion Principle), to verify robust stability we need to determine whether or not the image set ∆(s∗) excludes the origin for every point s∗ ∈ ∂S. This computation is particularly easy since ∆(s∗) is a convex polygon. In fact, a convex polygon in the complex plane excludes the origin if and only if the angle subtended at the origin by its vertices is less than π radians (180°). Consider the convex polygon P in the complex plane with vertices [v1, v2, · · ·] := V. Let p0 be an arbitrary point in P and define

φ_{vi} := arg(vi / p0).   (11.12)

We adopt the convention that every angle lies between −π and +π. Now define

φ^+ := sup_{vi ∈ V} φ_{vi},   0 ≤ φ^+ ≤ π
φ^- := inf_{vi ∈ V} φ_{vi},   −π < φ^- ≤ 0.

The angle subtended at the origin by P is given by

Φ_P := φ^+ − φ^-.   (11.13)

P excludes the origin if and only if Φ_P < π. Applying this fact to the convex polygon ∆(s∗), we can easily conclude the following.

THEOREM 11.2 (Bounded Phase Theorem)
Under the assumptions
a) every polynomial in ∆(s) is of the same degree (δn(p) ≠ 0, p ∈ Π),
b) at least one polynomial in ∆(s) is stable with respect to S,
the set of polynomials ∆(s) is stable with respect to the open stability region S if and only if

Φ_{∆V(s∗)} < π, for all s∗ ∈ ∂S.   (11.14)
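The test in Theorem 11.2 is easy to implement: a finite set of complex points (the vertices of the convex polygon) subtends an angle less than π at the origin exactly when all the points fit in an open half plane through the origin, i.e. when the largest circular gap between their arguments exceeds π. A minimal sketch, with illustrative point sets of our own:

```python
import numpy as np

def subtended_angle_excludes_origin(points, tol=1e-12):
    """True iff the convex hull of `points` excludes the origin,
    i.e. the angle subtended at the origin is < pi."""
    ang = np.sort(np.angle(points))
    # circular gaps between consecutive arguments
    gaps = np.diff(ang, append=ang[0] + 2 * np.pi)
    return bool(np.max(gaps) > np.pi + tol)

# Origin inside the hull: square centered at 0.
inside = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])
# Origin excluded: square shifted into the first quadrant.
outside = np.array([1 + 1j, 2 + 1j, 2 + 2j, 1 + 2j])
```

The gap formulation avoids choosing the reference point p0 of (11.12) explicitly; it gives the same exclusion verdict for a convex polygon.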
The computational burden imposed by this theorem is that we need to evaluate the maximal phase difference across the vertices of Π. The condition (11.14) will be referred to conveniently as the Bounded Phase Condition. We illustrate Theorems 11.1 and 11.2 in the example below.

Example 11.1
Consider the standard feedback control system with the plant

G(s) = (s + a)/(s^2 + bs + c)

where the parameters vary within the ranges:

a ∈ [1, 2] = [a^-, a^+], b ∈ [9, 11] = [b^-, b^+], c ∈ [15, 18] = [c^-, c^+].

The controller is

C(s) = (3s + 2)/(s + 5).

We want to examine whether this controller robustly stabilizes the plant. The closed-loop characteristic polynomial is:

δ(s) = a(3s + 2) + b(s^2 + 5s) + c(s + 5) + (s^3 + 8s^2 + 2s).

The polytope of polynomials whose stability is to be checked is:

∆(s) = {δ(s) : a ∈ [a^-, a^+], b ∈ [b^-, b^+], c ∈ [c^-, c^+]}.
We see that the degree is invariant over the uncertainty set. From Theorem 11.1 this polytope is stable if and only if the exposed edges are. Here, we write the 12 edge polynomials ∆E(s), with λ ∈ [0, 1]:

δE1(s) = [λa^- + (1 − λ)a^+](3s + 2) + b^-(s^2 + 5s) + c^-(s + 5) + (s^3 + 8s^2 + 2s)
δE2(s) = [λa^- + (1 − λ)a^+](3s + 2) + b^-(s^2 + 5s) + c^+(s + 5) + (s^3 + 8s^2 + 2s)
δE3(s) = [λa^- + (1 − λ)a^+](3s + 2) + b^+(s^2 + 5s) + c^-(s + 5) + (s^3 + 8s^2 + 2s)
δE4(s) = [λa^- + (1 − λ)a^+](3s + 2) + b^+(s^2 + 5s) + c^+(s + 5) + (s^3 + 8s^2 + 2s)
δE5(s) = a^-(3s + 2) + [λb^- + (1 − λ)b^+](s^2 + 5s) + c^-(s + 5) + (s^3 + 8s^2 + 2s)
δE6(s) = a^-(3s + 2) + [λb^- + (1 − λ)b^+](s^2 + 5s) + c^+(s + 5) + (s^3 + 8s^2 + 2s)
δE7(s) = a^+(3s + 2) + [λb^- + (1 − λ)b^+](s^2 + 5s) + c^-(s + 5) + (s^3 + 8s^2 + 2s)
δE8(s) = a^+(3s + 2) + [λb^- + (1 − λ)b^+](s^2 + 5s) + c^+(s + 5) + (s^3 + 8s^2 + 2s)
δE9(s) = a^-(3s + 2) + b^-(s^2 + 5s) + [λc^- + (1 − λ)c^+](s + 5) + (s^3 + 8s^2 + 2s)
δE10(s) = a^-(3s + 2) + b^+(s^2 + 5s) + [λc^- + (1 − λ)c^+](s + 5) + (s^3 + 8s^2 + 2s)
δE11(s) = a^+(3s + 2) + b^-(s^2 + 5s) + [λc^- + (1 − λ)c^+](s + 5) + (s^3 + 8s^2 + 2s)
δE12(s) = a^+(3s + 2) + b^+(s^2 + 5s) + [λc^- + (1 − λ)c^+](s + 5) + (s^3 + 8s^2 + 2s).
The robust stability of ∆(s) is equivalent to the stability of the above 12 edges. From Theorem 11.2, it is also equivalent to the condition that ∆(s) has at least one stable polynomial, that 0 ∉ ∆(jω) for at least one ω, and that the maximum phase difference over the vertex set is less than 180° at every frequency ω (bounded phase condition). At ω = 0 we have

0 ∉ ∆(j0) = 2a + 5c = 2[1, 2] + 5[15, 18].

Also it may be verified that the center point of the box is stable. Thus we examine the maximum phase difference, evaluated at each frequency, over the following eight vertex polynomials:

δv1(s) = a^-(3s + 2) + b^-(s^2 + 5s) + c^-(s + 5) + (s^3 + 8s^2 + 2s)
δv2(s) = a^-(3s + 2) + b^-(s^2 + 5s) + c^+(s + 5) + (s^3 + 8s^2 + 2s)
δv3(s) = a^-(3s + 2) + b^+(s^2 + 5s) + c^-(s + 5) + (s^3 + 8s^2 + 2s)
δv4(s) = a^-(3s + 2) + b^+(s^2 + 5s) + c^+(s + 5) + (s^3 + 8s^2 + 2s)
δv5(s) = a^+(3s + 2) + b^-(s^2 + 5s) + c^-(s + 5) + (s^3 + 8s^2 + 2s)
δv6(s) = a^+(3s + 2) + b^-(s^2 + 5s) + c^+(s + 5) + (s^3 + 8s^2 + 2s)
δv7(s) = a^+(3s + 2) + b^+(s^2 + 5s) + c^-(s + 5) + (s^3 + 8s^2 + 2s)
δv8(s) = a^+(3s + 2) + b^+(s^2 + 5s) + c^+(s + 5) + (s^3 + 8s^2 + 2s).
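The vertex check of Example 11.1 can be reproduced numerically by evaluating the eight vertex polynomials along a frequency grid and applying the half-plane (bounded phase) test at each ω; on a grid the condition holds everywhere, consistent with the conclusion the example draws from Figure 11.2. A sketch (grid choices are ours):

```python
import numpy as np
from itertools import product

def vertex_value(w, a, b, c):
    # delta(jw) for the Example 11.1 characteristic polynomial
    s = 1j * w
    return a*(3*s + 2) + b*(s**2 + 5*s) + c*(s + 5) + (s**3 + 8*s**2 + 2*s)

def excludes_origin(points, tol=1e-9):
    ang = np.sort(np.angle(points))
    gaps = np.diff(ang, append=ang[0] + 2*np.pi)
    return bool(np.max(gaps) > np.pi + tol)

corners = list(product([1, 2], [9, 11], [15, 18]))
omegas = np.linspace(0.01, 10.0, 1000)
all_ok = all(
    excludes_origin(np.array([vertex_value(w, *v) for v in corners]))
    for w in omegas
)
```

Together with 0 ∉ ∆(j0) and the stability of the box center, `all_ok` being true on the grid is the numerical counterpart of the bounded phase condition holding at every frequency.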
Figure 11.2 Maximum phase differences of vertices (Example 11.1).
Figure 11.2 shows that the maximum phase difference over all vertices does not reach 180° at any ω. Therefore, we conclude that the controller C(s) robustly stabilizes the plant G(s).

Example 11.2
Continuing with the previous example, suppose that we now want to expand the range of parameter perturbations and determine the largest size box of parameters stabilized by the given controller. We define a variable sized parameter box as follows:

a ∈ [a^- − ǫ, a^+ + ǫ], b ∈ [b^- − ǫ, b^+ + ǫ], c ∈ [c^- − ǫ, c^+ + ǫ].
We can easily determine the maximum value of ǫ for which robust stability is preserved by simply applying the Bounded Phase Condition while increasing the value of ǫ. Figure 11.3 shows the maximum phase difference over the vertex set (attained at some frequency) plotted as a function of ǫ.
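The sweep just described can be sketched as a bisection on ǫ over a finite frequency grid. Because the grid is finite this only estimates the critical value; the number quoted in the text comes from the book's own computation, and the grid and iteration counts below are our choices:

```python
import numpy as np
from itertools import product

def robust(eps, w):
    """Bounded phase (origin exclusion) test on an omega grid `w`
    for the Example 11.1 family with box expanded by eps."""
    ranges = [(1 - eps, 2 + eps), (9 - eps, 11 + eps), (15 - eps, 18 + eps)]
    s = 1j * w
    base = s**3 + 8*s**2 + 2*s
    vals = np.array([a*(3*s + 2) + b*(s**2 + 5*s) + c*(s + 5) + base
                     for a, b, c in product(*ranges)])        # shape (8, nw)
    ang = np.sort(np.angle(vals), axis=0)
    gaps = np.diff(ang, axis=0, append=ang[:1] + 2*np.pi)
    # origin excluded at a frequency iff the largest circular gap > pi
    return bool(np.all(gaps.max(axis=0) > np.pi + 1e-9))

w = np.linspace(0.01, 10.0, 1500)
lo, hi = 0.0, 10.0          # robust at 0; assumed lost well before eps = 10
for _ in range(30):
    mid = 0.5 * (lo + hi)
    if robust(mid, w):
        lo = mid
    else:
        hi = mid
eps_max = lo
```

On a grid this returns an estimate of the critical ǫ; refining the frequency grid tightens it.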
Figure 11.3 Maximum phase differences of vertices vs. ǫ (Example 11.2).
We find that at ǫ = 6.5121, the phase difference over the vertex set reaches
180° at ω = 1.3537. Therefore, we conclude that the maximum value of ǫ is 6.5120. This means that the controller C(s) can robustly stabilize the family of plants G(s, p) where the ranges of the parameters are

a ∈ [−5.512, 8.512], b ∈ [2.488, 17.512], c ∈ [8.488, 24.512].

11.2.3 Extremal Properties of Edges and Vertices
In this subsection we deal with the problem of finding the worst case stability margin over a polytope of stable polynomials. This value of the worst case stability margin is a measure of the robust performance of the system or associated controller. As mentioned earlier, this is a formidably difficult problem in the general case. In the linear case treated in this chapter, it is feasible to determine robust performance by exploiting the image set boundary generating property of the exposed edges. Indeed, for such families it is shown below that the worst case stability margins occur in general over the exposed edges. In special cases they can occur over certain vertices. Consider the polytopic family of polynomials

δ(s, p) = p1 Q1(s) + p2 Q2(s) + · · · + pl Ql(s) + Q0(s)   (11.15)

with associated uncertainty set Π as defined in (11.1) and the exposed edges E. Specifically, let

Ei := {p : pi^- ≤ pi ≤ pi^+, pj = pj^- or pj^+, for all j ≠ i}   (11.16)

then

E := ∪_{i=1}^{l} Ei.   (11.17)

As before, we assume that the degree of the polynomial family remains invariant as p varies over Π. Suppose that the stability of the family with respect to a stability region S has been proved. Then, with each point p ∈ Π we can associate a stability ball of radius ρ(p) in some norm ||p||. A natural question to ask at this point is: What is the minimum value of ρ(p) as p ranges over Π? It turns out that the minimum value of ρ(p) is attained over the exposed edge set E.

THEOREM 11.3

inf_{p ∈ Π} ρ(p) = inf_{p ∈ E} ρ(p).   (11.18)
PROOF From the property that the edge set generates the boundary of the image set ∆(s∗), we have

inf_{p ∈ Π} ρ(p) = inf {||∆p|| : δ(s, p + ∆p) unstable for p ∈ Π}
              = inf {||∆p|| : δ(s∗, p + ∆p) = 0, s∗ ∈ ∂S, p ∈ Π}
              = inf {||∆p|| : δ(s∗, p + ∆p) = 0, s∗ ∈ ∂S, p ∈ E}
              = inf_{p ∈ E} ρ(p).
This theorem is useful for determining the worst case parametric stability margin over an uncertainty box. Essentially it again reduces a multiparameter optimization problem to a set of one-parameter optimization problems. Further simplification can be obtained when the stability of the polytope can be guaranteed from the vertices. In this case we can find the worst case stability margin by evaluating ρ(p) over the vertex set. For example, for the case of Hurwitz stability we have the following result which depends on the Vertex Lemma of Chapter 9.

THEOREM 11.4
Let δ(s) be a polytope of real Hurwitz stable polynomials of degree n of the form (11.15) with Qi(s) of the form

Qi(s) = s^{ti} (ai s + bi) Ai(s) Pi(s)   (11.19)

where ti ≥ 0 are integers, ai and bi are arbitrary real numbers, Ai(s) are anti-Hurwitz, and Pi(s) are even or odd polynomials. Then

inf_{p ∈ Π} ρ(p) = inf_{p ∈ V} ρ(p).   (11.20)
The proof utilizes the Vertex Lemma of Chapter 9 but is otherwise identical to the argument used in the previous case and is therefore omitted. A similar result holds for the Schur case and can be derived using the corresponding Vertex Lemma from Chapter 9. We illustrate the usefulness of this result in determining the worst case performance evaluation and optimization of controllers in the examples below.

Example 11.3
Consider the plant

G(s) = (p2 s + 1)/(p1 s + p0)

where the parameters p = [p0 p1 p2] vary in

Π := {p0 ∈ [2, 4], p1 ∈ [4, 6], p2 ∈ [10, 15]}.

This plant is robustly stabilized by the controller

C(s) = K (s + 5)/(s(s − 1))
when 0.5 ≤ K ≤ 2. The characteristic polynomial is

δ(s, p, K) = s^2(s − 1)p1 + s(s − 1)p0 + Ks(s + 5)p2 + K(s + 5)

and

δ(jω, p, K) = −jω^3 p1 − ω^2(p0 − p1 + p2 K) + jω(5Kp2 − p0 + K) + 5K.

This leads to

$$\underbrace{\begin{bmatrix} -\omega^2 & \omega^2 & -\omega^2 K \\ -\omega & -\omega^3 & 5\omega K \end{bmatrix}}_{A(\omega,K)} \begin{bmatrix} \Delta p_0 \\ \Delta p_1 \\ \Delta p_2 \end{bmatrix} = \underbrace{\begin{bmatrix} \omega^2 p_0 - \omega^2 p_1 + \omega^2 K p_2 - 5K \\ \omega p_0 + \omega^3 p_1 - 5\omega K p_2 - \omega K \end{bmatrix}}_{b(\omega,p,K)}$$

and the ℓ2 stability margin around a parameter point p is

ρ∗(p, K) = min_ω ρ(ω, p, K) = min_ω || A^T(ω, K) [A(ω, K) A^T(ω, K)]^{-1} b(ω, p, K) ||_2.
To determine the worst case ℓ2 stability margin over Π for fixed K = K0 we apply Theorem 11.3. This tells us that the worst case stability margin occurs on one of the edges ΠE of Π. The characteristic polynomial of the closed-loop system is

δ(s, p) = s^2(s − 1) p1 + s(s − 1) p0 + K0 s(s + 5) p2 + K0(s + 5)

with Q1(s) = s^2(s − 1), Q2(s) = s(s − 1), and Q3(s) = K0 s(s + 5), which shows that the vertex condition of Theorem 11.4 is satisfied, so that the worst case stability margin occurs on one of the vertices of Π. Therefore, the worst case ℓ2 stability margin for K = K0 is

ρ∗(K0) = min_{p ∈ ΠV} ρ∗(p, K0).
From Figure 11.4, we find that the maximum worst case ℓ2 stability margin is 5.8878 when K = 2 .
455
STABILITY OF A POLYTOPE
7
6
5
ρ∗(K)
4
3
2
1
0 0.5
1
1.5
2
K
Figure 11.4 K vs. ρ∗ (K) = minp∈ΠV ρ∗ (p, K) (Example 11.3).
11.3
The Edge Theorem
In the last section, we proved the stability detecting property of exposed edges. Actually, the edges provide much more information of the roots of such families. This is established in the Edge Theorem. Let us consider a family of nth degree real polynomials whose typical element is given by δ(s) = δ0 + δ1 s + · · · + δn−1 sn−1 + δn sn .
(11.21)
As usual, we identify Pn the vector space of all real polynomials of degree less than or equal to n with IRn+1 , and we will identify the polynomial in (11.21) with the vector δ := [δn , δn−1 , · · · , δ1 , δ0 ]T . (11.22) Let Ω ⊂ IRn+1 be an m-dimensional polytope, that is, the convex hull of a finite number of points. As a polytope, Ω is a closed-bounded set and therefore it is compact. We make the assumption that all polynomials in Ω have the same degree:
456
ROBUST PARAMETRIC CONTROL
Assumption 5 The sign of δn is constant over Ω, either always positive or always negative. Assuming for example that this sign is always positive, and using the fact that Ω is compact, it is always possible to find ∆ > 0 such that, δn > ∆, for every δ ∈ Ω.
(11.23)
A supporting hyperplane H is an affine set of dimension n such that Ω∩H 6= ∅, and such that every point of Ω lies on just one side of H. The exposed sets of Ω are those (convex) sets Ω ∩ H where H is a supporting hyperplane. The one-dimensional exposed sets are called exposed edges, whereas the twodimensional exposed sets are the exposed faces. Before proceeding we need to introduce the notion of root space. Consider any W ⊂ Ω. Then R(W ) is said to be the root space of W if, R(W ) = {s : δ(s) = 0, for some δ ∈ W } .
(11.24)
Finally, recall that the boundary of an arbitrary set S of the complex plane is designated by ∂S. We can now enunciate and prove the Edge Theorem.
11.3.1
Edge Theorem
THEOREM 11.5 (Edge Theorem) Let Ω ⊂ IRn+1 be a polytope of polynomials which satisfies Assumption 5. Then the boundary of R(Ω) is contained in the root space of the exposed edges of Ω. To prove the theorem we need two lemmas. LEMMA 11.3 If a real sr belongs to R(Ω), then there exists an exposed edge E of Ω such that sr ∈ R(E), and if a complex number sc belongs to R(Ω), then there exists an exposed face F of Ω such that sc ∈ R(F ). PROOF Consider an arbitrary δ in Ω, and suppose that sr is a real root of δ(s). We know that the set of all polynomials having sr among their roots is a vector space Psr of dimension n. Let af f (Ω) denote the affine hull of Ω, that is, the smallest affine subspace containing Ω. Now, assume that m = dim[af f (Ω)] ≥ 2. Then we have that, dim [Psr ∩ af f (Ω)] ≥ 1, and this implies that this set Psr ∩ af f (Ω) must pierce the relative boundary of Ω. This relative boundary however, is the union of some m − 1 dimensional
457
STABILITY OF A POLYTOPE
polytopes which are all exposed sets of Ω. Therefore, at least one of these boundary polytopes Ωm−1 satisfies, sr ∈ R(Ωm−1 ). If dim[af f (Ωm−1 )] ≥ 2, we see that we can repeat the preceding argument and ultimately we will find a one-dimensional boundary polytope Ω1 for which sr ∈ R(Ω1 ). But Ω1 is just an exposed edge of Ω, so that sr does indeed belong to the root space of the exposed edges of Ω. For the case of a complex root sc , it suffices to know that the set of all real polynomials having sc among their roots is a vector space Psc of dimension n − 1. As a consequence the same reasoning as above holds, yielding eventually an exposed face Ω2 of Ω for which sc ∈ R(Ω2 ). We illustrate this lemma in Figures 11.5, 11.6, and 11.7 with a three dimensional polytope Ω (see Figure 11.5). Here Psr is a subspace of dimension 2 and cuts the edges of Ω (see Figure 11.6). Psc is of dimension 1 and must penetrate a face of Ω (see Figure 11.7). δ2
δ
δ1
δ0 Figure 11.5 Polytope Ω.
458
ROBUST PARAMETRIC CONTROL δ2
δ
Psr δ1
δ0 Figure 11.6 Psr cuts edges of Ω. δ2
δ
Psr
δ1
δ0 Psc
Figure 11.7 penetrates a face of Ω.
The conclusion of this first lemma is that, if pF is the number of exposed faces, then

R(Ω) = ∪_{i=1}^{pF} R(Fi).   (11.25)
The next lemma focuses on an exposed face. Let F be an exposed face of Ω and denote by ∂F its relative boundary. Since F is a compact set and because of Assumption 5 on Ω, we know from Chapter 9 that R(F) is itself a closed set. We have the following.

LEMMA 11.4 ∂R(F) ⊂ R(∂F).
PROOF Let s∗ be an arbitrary element of ∂R(F); we want to show that s∗ is also an element of R(∂F). Since ∂F is the union of exposed edges of Ω, it follows from Lemma 11.3 that if s∗ is real then s∗ ∈ R(∂F). Now assume that s∗ is complex. Since R(F) is a closed set, ∂R(F) ⊂ R(F), so that it is possible to find δ∗ ∈ F with δ∗(s∗) = 0. We can write

δ∗(s) = (s² + αs + β)(d_{n−2}s^{n−2} + · · · + d1 s + d0)   (11.26)

where α = −2Re(s∗) and β = |s∗|². Let aff(F) be the affine hull of F. Since F is two-dimensional, it is possible to write aff(F) = {δ∗ + Vλ : λ ∈ ℝ²}, where V is some full-rank (n + 1) × 2 matrix. On the other hand, an arbitrary element of the vector space of real polynomials with a root at s∗ can be written as

P∗(s) = (s² + αs + β)((μ_{n−2} + d_{n−2})s^{n−2} + · · · + (μ1 + d1)s + (μ0 + d0)),   (11.27)

or, more generally,

Ps∗ = {δ∗ + Wμ : μ = [μ_{n−2}, · · · , μ1, μ0]ᵀ ∈ ℝ^{n−1}},   (11.28)

where W is the (n + 1) × (n − 1) matrix

      ⎡ 1  0  · · ·  0 ⎤
      ⎢ α  1  · · ·  0 ⎥
      ⎢ β  α  · · ·  0 ⎥
W  =  ⎢ 0  β  · · ·  0 ⎥   (11.29)
      ⎢ ⋮  ⋮   ⋱    ⋮ ⎥
      ⎢ 0  0  · · ·  1 ⎥
      ⎢ 0  0  · · ·  α ⎥
      ⎣ 0  0  · · ·  β ⎦
The intersection between aff(F) and Ps∗ contains all λ, μ satisfying δ∗ + Vλ = δ∗ + Wμ, or equivalently,

[V, −W][λᵀ, μᵀ]ᵀ = 0.   (11.30)
Two possibilities have to be considered.

A. [V, −W] does not have full rank. In this case, the space of solutions to (11.30) is of dimension 1 or 2. If it is of dimension one, then the intersection aff(F) ∩ Ps∗ is a straight line which must intersect ∂F at a point δ̂. Since δ̂ ∈ Ps∗, δ̂(s∗) = 0, which implies that s∗ ∈ R(∂F). If the dimension is two, then aff(F) ⊂ Ps∗, and for any δ̂ ∈ ∂F we have δ̂(s∗) = 0, so that clearly s∗ ∈ R(∂F).

B. [V, −W] has full rank. In this case the intersection aff(F) ∩ Ps∗ reduces to δ∗. We now prove that δ∗ ∈ ∂F, and this is where the fact that s∗ ∈ ∂R(F) is utilized. Indeed, s∗ ∈ ∂R(F) implies the existence of a sequence of complex numbers sn such that sn ∉ R(F) for all n and sn → s∗ as n → +∞. In particular this implies that

−2Re(sn) → α and |sn|² → β as n → +∞.   (11.31)
As usual, let Psn be the vector space of all real polynomials with a root at sn. An arbitrary element of Psn can be expressed as

P(s) = δ∗(s) + (s² − 2Re(sn)s + |sn|²)(μ_{n−2}s^{n−2} + · · · + μ1 s + μ0)
        + (−(2Re(sn) + α)s + (|sn|² − β))(d_{n−2}s^{n−2} + · · · + d1 s + d0),

or, equivalently,

Psn = {δ∗ + Wn μ + νn : μ = [μ_{n−2}, · · · , μ1, μ0]ᵀ ∈ ℝ^{n−1}},

where

       ⎡ 1         0         · · ·  0        ⎤
       ⎢ −2Re(sn)  1         · · ·  0        ⎥
       ⎢ |sn|²     −2Re(sn)  · · ·  0        ⎥
Wn  =  ⎢ 0         |sn|²     · · ·  0        ⎥   (11.32)
       ⎢ ⋮         ⋮          ⋱    ⋮        ⎥
       ⎢ 0         0         · · ·  1        ⎥
       ⎢ 0         0         · · ·  −2Re(sn) ⎥
       ⎣ 0         0         · · ·  |sn|²    ⎦

and νn is the coefficient vector of the second correction term above, that is,

νn = −(2Re(sn) + α)[d_{n−2}, d_{n−3}, · · · , d0, 0]ᵀ + (|sn|² − β)[0, d_{n−2}, · · · , d1, d0]ᵀ.   (11.33)
Clearly,

Wn → W and νn → 0 as n → +∞.   (11.34)
Now, since det(·) is a continuous function and since det[V, −W] ≠ 0, there must exist n1 such that det[V, −Wn] ≠ 0 for n ≥ n1. Also, for every n, the intersection between Psn and aff(F) consists of all λ, μ that satisfy δ∗ + Wn μ + νn = δ∗ + Vλ, or equivalently,

[V, −Wn][λᵀ, μᵀ]ᵀ = νn.   (11.35)

For n ≥ n1, the system (11.35) has a unique solution,

[λnᵀ, μnᵀ]ᵀ = [V, −Wn]^{−1} νn.   (11.36)
From (11.36) we deduce that [λnᵀ, μnᵀ] → 0 as n → +∞. We now show that δ∗ belongs to ∂F. Consider an arbitrary open neighborhood in aff(F),

BF(δ∗, ε) = {δ ∈ aff(F) : ‖δ − δ∗‖ < ε}.

We must show that BF(δ∗, ε) contains at least one vector not contained in F. To do so, consider the intersection between Psn and aff(F), that is, the vector δn = δ∗ + Vλn. This vector belongs to aff(F), and since λn goes to 0, it belongs to BF(δ∗, ε) for n sufficiently large. Moreover, the polynomial corresponding to this vector has a root at sn, and we know that sn does not belong to R(F). Hence, it must be the case that δn does not belong to F, and this completes the proof of the lemma.

Figures 11.8 and 11.9 illustrate this lemma. The sequence sn converges to s∗ ∈ ∂R(F) from outside of R(F). The corresponding subspaces Psn converge to Ps∗ from outside F. Thus Ps∗ must touch an edge of F.

Figure 11.8 The sequence sn ∉ R(F) converges to s∗ ∈ ∂R(F).

Figure 11.9 The sequence Psn (Psn ∩ F = ∅) converges to Ps∗.

We can now complete the proof of the Edge Theorem.

PROOF (Edge Theorem, Theorem 11.5) From (11.25) and Lemma 11.4 we have

∂R(Ω) = ∂(∪_{i=1}^{pF} R(Fi)) ⊂ ∪_{i=1}^{pF} ∂R(Fi) ⊂ ∪_{i=1}^{pF} R(∂Fi).

The ∂Fi are precisely the exposed edges of Ω, and this proves the theorem.

Let us now consider an arbitrary simply connected domain of the complex plane, that is, a subset of the complex plane in which every simple (without self-crossings) closed contour encloses only points of the set. We can state the following corollary.

COROLLARY 11.1 If Γ ⊂ ℂ is a simply connected domain, then for any polytope Ω satisfying Assumption 5, R(Ω) is contained in Γ if and only if the root space of all the exposed edges of Ω is contained in Γ.
Exposed Edges

In general, a polytope is defined by its vertices, and it is not immediately clear how to determine which are the exposed edges of Ω. However, the exposed edges are among the pairwise convex combinations of the vertices of Ω, and therefore it is enough to check those. In the representation

P := {P(s) : P(s) = a1 Q1(s) + a2 Q2(s) + · · · + am Qm(s), a ∈ A},

where a = [a1, a2, · · · , am], the exposed edges of the polytope P are obtained from the exposed edges of the hypercube A to which a belongs. This can be done by fixing all the ai except one, say ak, at a vertex value (ai or āi), letting ak vary in the interval [ak, āk], and repeating this for k = 1, · · · , m. In general, the number of line segments in the coefficient space generated in this way exceeds the number of exposed edges of P; nevertheless, the procedure captures all the exposed edges. We note that, within the assumptions required by this result, stability verification amounts to checking the root locations of line segments of polynomials of the form

Pλ(s) = (1 − λ)P1(s) + λP2(s),   λ ∈ [0, 1].   (11.37)

The root-locus technique can be used for this purpose. Alternatively, the Segment Lemma given in Chapter 9 can also be used when the boundary of the domain Γ of interest can be parametrized easily. This theorem is the best result that one can expect at this level of generality because, as we have shown in Chapter 9, a line segment joining two stable polynomials is not necessarily stable. To reiterate, consider the simple polytope consisting of the segment joining the two points

P1(s) = 3s⁴ + 3s³ + 5s² + 2s + 1 and P2(s) = s⁴ + s³ + 5s² + 2s + 5.

It can be checked that both P1(s) and P2(s) are Hurwitz stable, and yet the polynomial P1(s) + P2(s) has a root at s = j. We illustrate the Edge Theorem with some examples.
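The segment check in (11.37) is easy to carry out numerically. A minimal sketch (the helper name, sampling grid, and tolerance are our own choices), which also reproduces the counterexample just given:

```python
import numpy as np

def is_hurwitz(coeffs):
    """True when all roots of the polynomial (coefficients highest power
    first) lie strictly in the open left half plane."""
    return bool(np.all(np.roots(coeffs).real < 0))

P1 = np.array([3, 3, 5, 2, 1], dtype=float)   # 3s^4 + 3s^3 + 5s^2 + 2s + 1
P2 = np.array([1, 1, 5, 2, 5], dtype=float)   # s^4 + s^3 + 5s^2 + 2s + 5
print(is_hurwitz(P1), is_hurwitz(P2))         # True True: both endpoints stable

# Sample P_lambda = (1 - lam) P1 + lam P2 over [0, 1] and record unstable points.
unstable = [lam for lam in np.linspace(0, 1, 201)
            if not is_hurwitz((1 - lam) * P1 + lam * P2)]
print(len(unstable) > 0)                      # True: the segment leaves the stable set

# At lam = 0.5 the polynomial is (P1 + P2)/2, which has a root at s = j.
mid = 0.5 * (P1 + P2)
print(min(abs(r - 1j) for r in np.roots(mid)) < 1e-6)   # True
```

Note that a sampled sweep can miss thin instability windows; the Segment Lemma of Chapter 9 gives an exact test.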
11.3.2 Examples
Example 11.4 Consider the interval control system in Figure 11.10:
Figure 11.10 A gain feedback system (Example 11.4).
Let

G(s) = (δ2 s² + δ0)/(s(s² + δ1))

and assume that K = 1. Then the characteristic polynomial of this family of systems is the interval polynomial

δ(s) = s³ + δ2 s² + δ1 s + δ0

where δ2 ∈ [6, 8], δ1 ∈ [14, 18], δ0 ∈ [9.5, 10.5].
The three variable coefficients form a box with 12 edges in the coefficient space. By the Edge Theorem, the boundary of the root space of the interval polynomial family can be obtained by plotting the root loci along the exposed edges of the box. The root loci of the edges are shown in Figure 11.11. Since the entire root space of the set of characteristic polynomials is found to be in the LHP, the family of feedback systems is robustly stable.

Figure 11.11 Root space for K = 1 (Example 11.4).

We remark that the robust stability of this system could also have been checked by determining whether the Kharitonov polynomials are stable. However, the Edge Theorem has given us considerably more information by generating the entire root set. From this set, depicted in Figure 11.11, we can evaluate the performance of the system in terms of such useful quantities as the worst-case damping ratio, stability degree (minimum distance of the root set to the imaginary axis), largest damped and undamped natural frequencies, etc.

The movement of the entire root space with respect to the gain K can be studied systematically by repeatedly applying the Edge Theorem for each K. Figure 11.12 shows the movement of the root space for various gains K. It shows that the root space approaches the imaginary axis as the gain K approaches the value 5. The root sets of the Kharitonov polynomials are properly contained in the root space for small values of K. However, as K approaches the value where the family is just about to become unstable, the roots of the Kharitonov polynomials move out to the right-hand boundary of the root set. These roots are therefore the "first" set of roots of the system to cross the imaginary axis.

Figure 11.12 Root spaces for various K (K = 1, 1.6, 3, 5) (Example 11.4).

Example 11.5 Let us consider the unity feedback discrete-time control system with forward transfer function:

G(z) = (δ1 z + δ0)/(z²(z + δ2)).

The characteristic polynomial is δ(z) = z³ + δ2 z² + δ1 z + δ0. Suppose that the coefficients vary in the intervals

δ2 ∈ [0.042, 0.158], δ1 ∈ [−0.058, 0.058], δ0 ∈ [−0.06, 0.056].
The boundary of the root space of the family can be generated by drawing the root loci along the 12 exposed edges of the box in coefficient space. The root space is inside the unit disc as shown in Figure 11.13. Hence, the entire family is Schur stable.
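The 12 exposed edges invoked here can be enumerated mechanically, and the root loci replaced by a sampled sweep; a rough sketch for this discrete-time family (the helper name and grid sizes are ours):

```python
import itertools
import numpy as np

# Coefficient intervals of delta(z) = z^3 + d2 z^2 + d1 z + d0 (Example 11.5):
bounds = [(0.042, 0.158), (-0.058, 0.058), (-0.06, 0.056)]   # d2, d1, d0

def edges_of_box(bounds):
    """Endpoint pairs for each exposed edge of the box: fix all coordinates
    at vertex values and let one coordinate run over its interval."""
    n = len(bounds)
    for k in range(n):
        others = [bounds[i] for i in range(n) if i != k]
        for corner in itertools.product(*others):
            lo, hi = list(corner), list(corner)
            lo.insert(k, bounds[k][0])
            hi.insert(k, bounds[k][1])
            yield np.array(lo), np.array(hi)

edges = list(edges_of_box(bounds))
print(len(edges))   # 12: a 3-D box has 12 exposed edges

# Sample each edge and record the largest root magnitude; a value below 1
# means every sampled polynomial is Schur stable.
max_mag = max(
    np.abs(np.roots([1.0, *((1 - t) * lo + t * hi)])).max()
    for lo, hi in edges for t in np.linspace(0.0, 1.0, 50)
)
print(max_mag < 1.0)   # True
```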
Example 11.6 Consider the interval plant

G(s) = (s + a)/(s² + bs + c)

where a ∈ [1, 2], b ∈ [9, 11], c ∈ [15, 18]. The controller is

C(s) = (3s + 2)/(s + 5).

The closed-loop characteristic polynomial is
δ(s) = (s² + bs + c)(s + 5) + (s + a)(3s + 2)
     = a(3s + 2) + b(s² + 5s) + c(s + 5) + (s³ + 8s² + 2s).

The boundary of the root space of δ(s) can be obtained by plotting the root loci along the 12 exposed edges. It can be seen from Figure 11.14 that the family δ(s) is stable, since the root space is in the left-half plane. Hence, the given compensator robustly stabilizes the interval plant. From the root set generated we can evaluate the performance of the controller in terms of the worst-case damping ratio, the minimum stability degree, and the maximum frequency of oscillation.

Figure 11.13 Root space of δ(z) (Example 11.5).

Figure 11.14 Root loci of the edges (Example 11.6).

The Edge Theorem has many useful applications. For instance, it can be effectively used to determine the coprimeness of two polytopic families of polynomials, as shown in the following example.

Example 11.7 Consider the two polynomials

δA(s) = p0 δA0(s) + p1 δA1(s) + p2 δA2(s)
δB(s) = q0 δB0(s) + q1 δB1(s) + q2 δB2(s)

where

δA0(s) = 0.2s⁴ + 2s³ + 100s² + 600s + 5000
δA1(s) = 0.3s⁴ + 8s³ + 200s² + 1000s + 15000
δA2(s) = 0.5s⁴ + 2s³ + 115s² + 998s + 18194
δB0(s) = 0.1s⁴ + 3s³ + 50s² + 500s + 1000
δB1(s) = 0.3s⁴ + 3s³ + 50s² + 500s + 2000
δB2(s) = 0.6s⁴ + 3s³ + 88.5s² + 190.3s + 2229.1

and the nominal value of the parameters is

p⁰ = [p0 p1 p2 q0 q1 q2] = [1 1 1 1 1 1].

Figure 11.15 shows the roots of the two polynomials at the nominal parameter p = p⁰. The roots of δA(s) and δB(s) are labeled in the figure as "A" and "B," respectively. Clearly, these two polynomials are coprime, as the root sets are disjoint. Now suppose that the parameters p and q perturb in interval sets. We define perturbation boxes for the parameters p and q as follows:

Πp := {[pi − ω1 ε, pi + ω1 ε], i = 0, 1, 2}
Πq := {[qi − ω2 ε, qi + ω2 ε], i = 0, 1, 2}

where [ω1 ω2] = [1 5]. Suppose that we want to determine the maximum value of ε such that these two families of polynomials remain coprime. This can be accomplished by examining the root spaces for increasing values of ε. We observe that the root spaces touch each other at ε = 0.14. As shown in Figure 11.16, certain polynomials in the δA(s) and δB(s) families share common roots at the "*" locations. Therefore, at this point the families cease to be coprime.
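The coprimeness test at the nominal parameter reduces to checking that the two root sets are disjoint; a quick numerical sketch (the tolerance is an arbitrary choice of ours):

```python
import numpy as np

# delta_A and delta_B at the nominal parameters p0 = p1 = p2 = q0 = q1 = q2 = 1:
dA = (np.array([0.2, 2, 100, 600, 5000])
      + np.array([0.3, 8, 200, 1000, 15000])
      + np.array([0.5, 2, 115, 998, 18194]))
dB = (np.array([0.1, 3, 50, 500, 1000])
      + np.array([0.3, 3, 50, 500, 2000])
      + np.array([0.6, 3, 88.5, 190.3, 2229.1]))

rA, rB = np.roots(dA), np.roots(dB)
# Minimum distance between the two root sets; a strictly positive gap means
# the nominal polynomials share no root, i.e., they are coprime.
gap = min(abs(a - b) for a in rA for b in rB)
print(gap > 1e-6)   # True: the root sets are disjoint at the nominal point
```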
Figure 11.15 Roots of δA(s) and δB(s) (Example 11.7).

Figure 11.16 Root space of δA(s) and δB(s) for ε = 0.14 (Example 11.7).
11.4 Stability of Interval Polynomials
In this section, we continue our discussion of polytopic families, restricting our attention to a very special class of polytopes. This special class consists of polynomial families where each coefficient varies in an interval, and is known as an interval polynomial.
11.4.1 Kharitonov's Theorem for Real Polynomials
Consider now the set I(s) of real polynomials of degree n of the form

δ(s) = δ0 + δ1 s + δ2 s² + δ3 s³ + δ4 s⁴ + · · · + δn sⁿ

where the coefficients lie within given ranges,

δ0 ∈ [x0, y0], δ1 ∈ [x1, y1], · · · , δn ∈ [xn, yn].

Write δ := [δ0, δ1, · · · , δn] and identify a polynomial δ(s) with its coefficient vector δ. Introduce the hyper-rectangle, or box, of coefficients

∆ := {δ : δ ∈ ℝ^{n+1}, xi ≤ δi ≤ yi, i = 0, 1, · · · , n}.   (11.38)

We assume that the degree remains invariant over the family, so that 0 ∉ [xn, yn]. Such a set of polynomials is called a real interval family, and we loosely refer to I(s) as an interval polynomial. Kharitonov's Theorem provides a surprisingly simple necessary and sufficient condition for the Hurwitz stability of the entire family.

THEOREM 11.6 (Kharitonov's Theorem) Every polynomial in the family I(s) is Hurwitz if and only if the following four extreme polynomials are Hurwitz:

K1(s) = x0 + x1 s + y2 s² + y3 s³ + x4 s⁴ + x5 s⁵ + y6 s⁶ + · · · ,
K2(s) = x0 + y1 s + y2 s² + x3 s³ + x4 s⁴ + y5 s⁵ + y6 s⁶ + · · · ,   (11.39)
K3(s) = y0 + x1 s + x2 s² + y3 s³ + y4 s⁴ + x5 s⁵ + x6 s⁶ + · · · ,
K4(s) = y0 + y1 s + x2 s² + x3 s³ + y4 s⁴ + y5 s⁵ + x6 s⁶ + · · · .

The box ∆ and the vertices corresponding to the Kharitonov polynomials are shown in Figure 11.17. The proof that follows allows for the interpretation of Kharitonov's Theorem as a generalization of the Hermite-Biehler Theorem
Figure 11.17 The box ∆ and the four Kharitonov vertices.
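Computationally, the theorem replaces a test over the whole box ∆ by four polynomial stability tests. A sketch (the function names are ours; as data we reuse the degree-3 intervals of Example 11.4, with the leading coefficient fixed at 1):

```python
import numpy as np

def kharitonov(x, y):
    """Four Kharitonov polynomials (ascending coefficients) for bounds
    x[i] <= delta_i <= y[i]."""
    patterns = {  # lower (0) / upper (1) bound choice, repeating with period 4
        "K1": [0, 0, 1, 1], "K2": [0, 1, 1, 0],
        "K3": [1, 0, 0, 1], "K4": [1, 1, 0, 0],
    }
    return {name: [x[i] if p[i % 4] == 0 else y[i] for i in range(len(x))]
            for name, p in patterns.items()}

def is_hurwitz(ascending):
    """Stability test by the roots of the polynomial (ascending coefficients)."""
    return bool(np.all(np.roots(ascending[::-1]).real < 0))

# Example 11.4: delta(s) = s^3 + d2 s^2 + d1 s + d0, coefficients delta_0..delta_3.
x = [9.5, 14.0, 6.0, 1.0]    # lower bounds
y = [10.5, 18.0, 8.0, 1.0]   # upper bounds (leading coefficient fixed at 1)

K = kharitonov(x, y)
print({name: is_hurwitz(c) for name, c in K.items()})   # all four are Hurwitz
```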
for Hurwitz polynomials. We start by introducing two symmetric lemmas that will lead us naturally to the proof of the theorem.

LEMMA 11.5 Let

P1(s) = P^even(s) + P1^odd(s)
P2(s) = P^even(s) + P2^odd(s)

denote two stable polynomials of the same degree with the same even part P^even(s) and differing odd parts P1^odd(s) and P2^odd(s) satisfying

P1^o(ω) ≤ P2^o(ω),   for all ω ∈ [0, ∞].   (11.40)

Then

P(s) = P^even(s) + P^odd(s)

is stable for every polynomial P(s) with odd part P^odd(s) satisfying

P1^o(ω) ≤ P^o(ω) ≤ P2^o(ω),   for all ω ∈ [0, ∞].   (11.41)
PROOF Since P1(s) and P2(s) are stable, P1^o(ω) and P2^o(ω) both satisfy the interlacing property with P^e(ω). In particular, P1^o(ω) and P2^o(ω) are not only of the same degree, but the sign of their highest coefficient is also the same, since it is in fact the same as that of the highest coefficient of P^e(ω).
Given this, it is easy to see that P^o(ω) cannot satisfy (11.41) unless it also has this same degree and the same sign for its highest coefficient. Then, the condition in (11.41) forces the roots of P^o(ω) to interlace with those of P^e(ω). Therefore, according to the Hermite-Biehler Theorem, P^even(s) + P^odd(s) is stable. We remark that Lemma 11.5, as well as its dual, Lemma 11.6 given below, are special cases of the Vertex Lemma developed in Chapter 9 and follow immediately from it. We illustrate Lemma 11.5 in the example below (see Figure 11.18).
Figure 11.18 P^e(ω) and (P1^o(ω), P2^o(ω)) (Example 11.8).
Example 11.8 Let

P1(s) = s⁷ + 9s⁶ + 31s⁵ + 71s⁴ + 111s³ + 109s² + 76s + 12
P2(s) = s⁷ + 9s⁶ + 34s⁵ + 71s⁴ + 111s³ + 109s² + 83s + 12.

Then

P^even(s) = 9s⁶ + 71s⁴ + 109s² + 12
P1^odd(s) = s⁷ + 31s⁵ + 111s³ + 76s
P2^odd(s) = s⁷ + 34s⁵ + 111s³ + 83s.
Figure 11.18 shows that P^e(ω) and the tube bounded by P1^o(ω) and P2^o(ω) satisfy the interlacing property. Thus, we conclude that every polynomial P(s) with odd part P^odd(s) satisfying P1^o(ω) ≤ P^o(ω) ≤ P2^o(ω) for all ω ∈ [0, ∞] is stable. For example, the dotted line shown inside the tube represents

P^odd(s) = s⁷ + 32s⁵ + 111s³ + 79s.
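Lemma 11.5 can be confirmed numerically for this example (a sketch; `is_hurwitz` is our helper): P1 and P2 are stable, and so is the polynomial built from the same even part and the intermediate odd part shown as the dotted line.

```python
import numpy as np

def is_hurwitz(coeffs):
    """True when all roots (coefficients highest power first) lie in Re(s) < 0."""
    return bool(np.all(np.roots(coeffs).real < 0))

P1 = [1, 9, 31, 71, 111, 109, 76, 12]
P2 = [1, 9, 34, 71, 111, 109, 83, 12]
print(is_hurwitz(P1), is_hurwitz(P2))   # True True

# Same even part, with the odd part s^7 + 32 s^5 + 111 s^3 + 79 s, which lies
# between the odd parts of P1 and P2:
P = [1, 9, 32, 71, 111, 109, 79, 12]
print(is_hurwitz(P))   # True, as Lemma 11.5 predicts
```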
The dual of Lemma 11.5 is:

LEMMA 11.6 Let

P1(s) = P1^even(s) + P^odd(s)
P2(s) = P2^even(s) + P^odd(s)

denote two stable polynomials of the same degree with the same odd part P^odd(s) and differing even parts P1^even(s) and P2^even(s) satisfying

P1^e(ω) ≤ P2^e(ω),   for all ω ∈ [0, ∞].   (11.42)

Then

P(s) = P^even(s) + P^odd(s)

is stable for every polynomial P(s) with even part P^even(s) satisfying

P1^e(ω) ≤ P^e(ω) ≤ P2^e(ω),   for all ω ∈ [0, ∞].   (11.43)
We are now ready to prove Kharitonov's Theorem.

PROOF (Kharitonov's Theorem) The Kharitonov polynomials, repeated below for convenience, are four specific vertices of the box ∆:

K1(s) = x0 + x1 s + y2 s² + y3 s³ + x4 s⁴ + x5 s⁵ + y6 s⁶ + · · · ,
K2(s) = x0 + y1 s + y2 s² + x3 s³ + x4 s⁴ + y5 s⁵ + y6 s⁶ + · · · ,   (11.44)
K3(s) = y0 + x1 s + x2 s² + y3 s³ + y4 s⁴ + x5 s⁵ + x6 s⁶ + · · · ,
K4(s) = y0 + y1 s + x2 s² + x3 s³ + y4 s⁴ + y5 s⁵ + x6 s⁶ + · · · .
These polynomials are built from two different even parts, K_max^even(s) and K_min^even(s), and two different odd parts, K_max^odd(s) and K_min^odd(s), defined below:

K_max^even(s) := y0 + x2 s² + y4 s⁴ + x6 s⁶ + y8 s⁸ + · · · ,
K_min^even(s) := x0 + y2 s² + x4 s⁴ + y6 s⁶ + x8 s⁸ + · · · ,

and

K_max^odd(s) := y1 s + x3 s³ + y5 s⁵ + x7 s⁷ + y9 s⁹ + · · · ,
K_min^odd(s) := x1 s + y3 s³ + x5 s⁵ + y7 s⁷ + x9 s⁹ + · · · .

The Kharitonov polynomials in (11.39) or (11.44) can be rewritten as:

K1(s) = K_min^even(s) + K_min^odd(s),
K2(s) = K_min^even(s) + K_max^odd(s),   (11.45)
K3(s) = K_max^even(s) + K_min^odd(s),
K4(s) = K_max^even(s) + K_max^odd(s).
The motivation for the subscripts "max" and "min" is as follows. Let δ(s) be an arbitrary polynomial with its coefficients lying in the box ∆ and let δ^even(s) be its even part. Then

K_max^e(ω) = y0 − x2 ω² + y4 ω⁴ − x6 ω⁶ + y8 ω⁸ + · · · ,
δ^e(ω) = δ0 − δ2 ω² + δ4 ω⁴ − δ6 ω⁶ + δ8 ω⁸ + · · · ,
K_min^e(ω) = x0 − y2 ω² + x4 ω⁴ − y6 ω⁶ + x8 ω⁸ + · · · ,

so that

K_max^e(ω) − δ^e(ω) = (y0 − δ0) + (δ2 − x2)ω² + (y4 − δ4)ω⁴ + (δ6 − x6)ω⁶ + · · ·

and

δ^e(ω) − K_min^e(ω) = (δ0 − x0) + (y2 − δ2)ω² + (δ4 − x4)ω⁴ + (y6 − δ6)ω⁶ + · · · .

Therefore,

K_min^e(ω) ≤ δ^e(ω) ≤ K_max^e(ω),   for all ω ∈ [0, ∞].   (11.46)

Similarly, if δ^odd(s) denotes the odd part of δ(s), and δ^odd(jω) = jωδ^o(ω), it can be verified that

K_min^o(ω) ≤ δ^o(ω) ≤ K_max^o(ω),   for all ω ∈ [0, ∞].   (11.47)
Thus δ(jω) lies in an axis-parallel rectangle I(jω) as shown in Figure 11.19.

Figure 11.19 Axis parallel rectangle I(jω).

To proceed with the proof of Kharitonov's Theorem, we note that necessity of the condition is trivial: if all the polynomials with coefficients in the box ∆ are stable, then the Kharitonov polynomials must also be stable, since their coefficients lie in ∆. For the converse, assume that the Kharitonov polynomials are stable, and let δ(s) = δ^even(s) + δ^odd(s) be an arbitrary polynomial belonging to the family I(s), with even part δ^even(s) and odd part δ^odd(s). We conclude, from Lemma 11.5 applied to the stable polynomials K3(s) and K4(s) in (11.45), that

K_max^even(s) + δ^odd(s) is stable.   (11.48)
Similarly, from Lemma 11.5 applied to the stable polynomials K1(s) and K2(s) in (11.45), we conclude that

K_min^even(s) + δ^odd(s) is stable.   (11.49)

Now, since (11.46) holds, we can apply Lemma 11.6 to the two stable polynomials

K_max^even(s) + δ^odd(s) and K_min^even(s) + δ^odd(s)

to conclude that δ^even(s) + δ^odd(s) = δ(s) is stable. Since δ(s) was an arbitrary polynomial of I(s), we conclude that the entire family of polynomials I(s) is stable, and this concludes the proof of the theorem.

REMARK 11.1 The Kharitonov polynomials can also be written with the highest order coefficient as the first term:

K̂1(s) = xn sⁿ + y_{n−1} s^{n−1} + y_{n−2} s^{n−2} + x_{n−3} s^{n−3} + x_{n−4} s^{n−4} + · · · ,
K̂2(s) = xn sⁿ + x_{n−1} s^{n−1} + y_{n−2} s^{n−2} + y_{n−3} s^{n−3} + x_{n−4} s^{n−4} + · · · ,
K̂3(s) = yn sⁿ + x_{n−1} s^{n−1} + x_{n−2} s^{n−2} + y_{n−3} s^{n−3} + y_{n−4} s^{n−4} + · · · ,
K̂4(s) = yn sⁿ + y_{n−1} s^{n−1} + x_{n−2} s^{n−2} + x_{n−3} s^{n−3} + y_{n−4} s^{n−4} + · · · .
REMARK 11.2 The assumption regarding invariant degree of the interval family can be relaxed. In that case some additional polynomials need to be tested for stability; this is dealt with in Exercise 11.13.

REMARK 11.3 The assumption inherent in Kharitonov's Theorem that the coefficients perturb independently is crucial to the working of the theorem. In the examples below we have constructed some control problems where this assumption is satisfied. Obviously, in many real-world problems this assumption fails to hold, since the characteristic polynomial coefficients perturb interdependently through other primary parameters. Even in these cases, however, Kharitonov's Theorem can give useful and computationally simple answers by overbounding the actual perturbations by an axis-parallel box ∆ in coefficient space.

REMARK 11.4 As remarked above, Kharitonov's Theorem gives conservative results when the characteristic polynomial coefficients perturb interdependently. The Edge Theorem and the Generalized Kharitonov Theorem were developed precisely to deal nonconservatively with such dependencies.

Example 11.9 Consider the problem of checking the robust stability of the feedback system shown in Figure 11.20.
Figure 11.20 Feedback system (Example 11.9).
The plant transfer function is

G(s) = (δ1 s + δ0)/(s²(δ4 s² + δ3 s + δ2))

with the coefficients bounded as

δ4 ∈ [x4, y4], δ3 ∈ [x3, y3], δ2 ∈ [x2, y2], δ1 ∈ [x1, y1], δ0 ∈ [x0, y0].

The characteristic polynomial of the family is

δ(s) = δ4 s⁴ + δ3 s³ + δ2 s² + δ1 s + δ0.
The associated even and odd polynomials for Kharitonov's test are as follows:

K_min^even(s) = x0 + y2 s² + x4 s⁴,   K_max^even(s) = y0 + x2 s² + y4 s⁴,
K_min^odd(s) = x1 s + y3 s³,          K_max^odd(s) = y1 s + x3 s³.

The Kharitonov polynomials are:

K1(s) = x0 + x1 s + y2 s² + y3 s³ + x4 s⁴,
K2(s) = x0 + y1 s + y2 s² + x3 s³ + x4 s⁴,
K3(s) = y0 + x1 s + x2 s² + y3 s³ + y4 s⁴,
K4(s) = y0 + y1 s + x2 s² + x3 s³ + y4 s⁴.

The problem of checking the Hurwitz stability of the family is therefore reduced to that of checking the Hurwitz stability of these four polynomials. This in turn reduces to checking that the coefficients have the same sign (positive, say; otherwise multiply δ(s) by −1) and that the following inequalities hold:

K1(s) Hurwitz:  y2 y3 > x1 x4  and  x1 y2 y3 > x1² x4 + y3² x0,
K2(s) Hurwitz:  y2 x3 > y1 x4  and  y1 y2 x3 > y1² x4 + x3² x0,
K3(s) Hurwitz:  x2 y3 > x1 y4  and  x1 x2 y3 > x1² y4 + y3² y0,
K4(s) Hurwitz:  x2 x3 > y1 y4  and  y1 x2 x3 > y1² y4 + x3² y0.
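These vertex inequalities are straightforward to automate. A sketch with made-up numeric bounds standing in for the symbolic xi, yi (the function name and the sample values are ours):

```python
def interval_quartic_hurwitz(x, y):
    """Kharitonov test for delta(s) = d4 s^4 + ... + d0 with d_i in [x[i], y[i]],
    via the four vertex polynomials and the quartic Hurwitz inequalities:
    all coefficients positive, a2*a3 > a1*a4, and a1*a2*a3 > a1^2*a4 + a3^2*a0."""
    K = [  # (a0, a1, a2, a3, a4) for K1..K4
        (x[0], x[1], y[2], y[3], x[4]),
        (x[0], y[1], y[2], x[3], x[4]),
        (y[0], x[1], x[2], y[3], y[4]),
        (y[0], y[1], x[2], x[3], y[4]),
    ]
    return all(
        min(a) > 0
        and a[2] * a[3] > a[1] * a[4]
        and a[1] * a[2] * a[3] > a[1] ** 2 * a[4] + a[3] ** 2 * a[0]
        for a in K
    )

# Hypothetical bounds [x_i, y_i] for delta_0 .. delta_4, chosen for illustration:
x = [0.9, 1.8, 4.5, 5.5, 0.8]
y = [1.1, 2.2, 5.5, 6.5, 1.2]
print(interval_quartic_hurwitz(x, y))   # True for these sample bounds
```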
Example 11.10 Consider the control system shown in Figure 11.21.
Figure 11.21 Feedback system with controller (Example 11.10).
The plant is described by the rational transfer function G(s) with numerator and denominator coefficients varying independently in prescribed intervals. We refer to such a family of transfer functions G(s) as an interval plant. In the present example we take

G(s) := { G(s) = (n2 s² + n1 s + n0)/(s³ + d2 s² + d1 s + d0) :
          n0 ∈ [1, 2.5], n1 ∈ [1, 6], n2 ∈ [1, 7],
          d2 ∈ [−1, 1], d1 ∈ [−0.5, 1.5], d0 ∈ [1, 1.5] }.
The controller is a constant gain, C(s) = k, that is to be adjusted, if possible, to robustly stabilize the closed-loop system. More precisely, we are interested in determining the range of values of the gain k ∈ [−∞, +∞] for which the closed-loop system is robustly stable, i.e., stable for all G(s) ∈ G(s). The characteristic polynomial of the closed-loop system is:

δ(k, s) = s³ + (d2 + kn2)s² + (d1 + kn1)s + (d0 + kn0)
        = s³ + δ2(k)s² + δ1(k)s + δ0(k).

Since the parameters di, nj, i = 0, 1, 2, j = 0, 1, 2, vary independently, it follows that for each fixed k, δ(k, s) is an interval polynomial. Using the bounds given to describe the family G(s), we get the following coefficient bounds for positive k:

δ2(k) ∈ [−1 + k, 1 + 7k],
δ1(k) ∈ [−0.5 + k, 1.5 + 6k],
δ0(k) ∈ [1 + k, 1.5 + 2.5k].

Since the leading coefficient is +1, the remaining coefficients must all be positive for the polynomial to be Hurwitz. This leads to the constraints:

(a) −1 + k > 0,   −0.5 + k > 0,   1 + k > 0.

From Kharitonov's Theorem applied to third-order interval polynomials, it can easily be shown that, in addition to positivity of the coefficients, it suffices to check the Hurwitz stability of only the third Kharitonov polynomial K3(s). In this example we therefore have that the entire family is Hurwitz if and only if, in addition to the constraints (a), we have:

(−0.5 + k)(−1 + k) > 1.5 + 2.5k.

From this it follows that the closed-loop system is robustly stabilized if and only if

k ∈ (2 + √5, +∞].
To complete our treatment of this important result we state Kharitonov’s Theorem for polynomials with complex coefficients in the next section.
11.4.2 Kharitonov's Theorem for Complex Polynomials
Consider the set I∗(s) of all complex polynomials of the form

δ(s) = (α0 + jβ0) + (α1 + jβ1)s + · · · + (αn + jβn)sⁿ   (11.50)
with

α0 ∈ [x0, y0], α1 ∈ [x1, y1], · · · , αn ∈ [xn, yn]   (11.51)

and

β0 ∈ [u0, v0], β1 ∈ [u1, v1], · · · , βn ∈ [un, vn].   (11.52)
This is a complex interval family of polynomials of degree n, which includes the real interval family studied earlier as a special case. It is natural to consider the generalization of Kharitonov's Theorem for the real case to this family. The Hurwitz stability of complex interval families will also arise naturally in studying the extremal H∞ norms of interval systems in Chapter 10. Complex polynomials also arise in the study of phase margins of control systems and in time-delay systems. Kharitonov extended his result for the real case to the above case of complex interval families. We assume as before that the degree remains invariant over the family. Introduce two sets of complex polynomials as follows:

K1+(s) := (x0 + ju0) + (x1 + jv1)s + (y2 + jv2)s² + (y3 + ju3)s³ + (x4 + ju4)s⁴ + (x5 + jv5)s⁵ + · · · ,
K2+(s) := (x0 + jv0) + (y1 + jv1)s + (y2 + ju2)s² + (x3 + ju3)s³ + (x4 + jv4)s⁴ + (y5 + jv5)s⁵ + · · · ,   (11.53)
K3+(s) := (y0 + ju0) + (x1 + ju1)s + (x2 + jv2)s² + (y3 + jv3)s³ + (y4 + ju4)s⁴ + (x5 + ju5)s⁵ + · · · ,
K4+(s) := (y0 + jv0) + (y1 + ju1)s + (x2 + ju2)s² + (x3 + jv3)s³ + (y4 + jv4)s⁴ + (y5 + ju5)s⁵ + · · · ,

and

K1−(s) := (x0 + ju0) + (y1 + ju1)s + (y2 + jv2)s² + (x3 + jv3)s³ + (x4 + ju4)s⁴ + (y5 + ju5)s⁵ + · · · ,
K2−(s) := (x0 + jv0) + (x1 + ju1)s + (y2 + ju2)s² + (y3 + jv3)s³ + (x4 + jv4)s⁴ + (x5 + ju5)s⁵ + · · · ,   (11.54)
K3−(s) := (y0 + ju0) + (y1 + jv1)s + (x2 + jv2)s² + (x3 + ju3)s³ + (y4 + ju4)s⁴ + (y5 + jv5)s⁵ + · · · ,
K4−(s) := (y0 + jv0) + (x1 + jv1)s + (x2 + ju2)s² + (y3 + ju3)s³ + (y4 + jv4)s⁴ + (x5 + jv5)s⁵ + · · · .

THEOREM 11.7 The family of polynomials I∗(s) is Hurwitz if and only if the eight Kharitonov polynomials K1+(s), K2+(s), K3+(s), K4+(s), K1−(s), K2−(s), K3−(s), K4−(s) are all Hurwitz.
PROOF The necessity of the condition is obvious because the eight Kharitonov polynomials are in I∗(s). The proof of sufficiency follows again from the Hermite-Biehler Theorem for complex polynomials (Theorem 8.8). Observe that the Kharitonov polynomials in (11.53) and (11.54) are composed of the following extremal polynomials. For the "positive" Kharitonov polynomials define:

R_max+(s) := y0 + ju1 s + x2 s² + jv3 s³ + y4 s⁴ + · · ·
R_min+(s) := x0 + jv1 s + y2 s² + ju3 s³ + x4 s⁴ + · · ·
I_max+(s) := jv0 + y1 s + ju2 s² + x3 s³ + jv4 s⁴ + · · ·
I_min+(s) := ju0 + x1 s + jv2 s² + y3 s³ + ju4 s⁴ + · · ·

so that

K1+(s) = R_min+(s) + I_min+(s)
K2+(s) = R_min+(s) + I_max+(s)
K3+(s) = R_max+(s) + I_min+(s)
K4+(s) = R_max+(s) + I_max+(s).

For the "negative" Kharitonov polynomials we have

R_max−(s) = y0 + jv1 s + x2 s² + ju3 s³ + y4 s⁴ + · · ·
R_min−(s) = x0 + ju1 s + y2 s² + jv3 s³ + x4 s⁴ + · · ·
I_max−(s) = jv0 + x1 s + ju2 s² + y3 s³ + jv4 s⁴ + · · ·
I_min−(s) = ju0 + y1 s + jv2 s² + x3 s³ + ju4 s⁴ + · · ·

and

K1−(s) = R_min−(s) + I_min−(s)
K2−(s) = R_min−(s) + I_max−(s)
K3−(s) = R_max−(s) + I_min−(s)
K4−(s) = R_max−(s) + I_max−(s).

R_max±(jω) and R_min±(jω) are real, and I_max±(jω) and I_min±(jω) are imaginary. Let Re[δ(jω)] := δ^r(ω) and Im[δ(jω)] := δ^i(ω) denote the real and imaginary parts of δ(s) evaluated at s = jω. Then we have:
δ^r(ω) = α0 − β1 ω − α2 ω² + β3 ω³ + · · · ,
δ^i(ω) = β0 + α1 ω − β2 ω² − α3 ω³ + · · · .

It is easy to verify that

R_min+(jω) ≤ δ^r(ω) ≤ R_max+(jω),
I_min+(jω)/j ≤ δ^i(ω) ≤ I_max+(jω)/j,   for all ω ∈ [0, ∞],   (11.55)

and

R_min−(jω) ≤ δ^r(ω) ≤ R_max−(jω),
I_min−(jω)/j ≤ δ^i(ω) ≤ I_max−(jω)/j,   for all ω ∈ (−∞, 0].   (11.56)
The proof of the theorem is now completed as follows. The stability of the four positive Kharitonov polynomials guarantees interlacing of the "real tube" (bounded by R_max+(jω) and R_min+(jω)) with the "imaginary tube" (bounded by I_max+(jω) and I_min+(jω)) for ω ≥ 0. The relation in (11.55) then guarantees that the real and imaginary parts of an arbitrary polynomial in I∗(s) are forced to interlace for ω ≥ 0. Analogous arguments, using the bounds in (11.56) and the "negative" Kharitonov polynomials, force interlacing for ω ≤ 0. Thus, by the Hermite-Biehler Theorem for complex polynomials, δ(s) is Hurwitz. Since δ(s) was arbitrary, it follows that each and every polynomial in I∗(s) is Hurwitz.

REMARK 11.5 In the complex case the real and imaginary parts of δ(jω) are polynomials in ω and not ω², and therefore it is necessary to verify the interlacing of the roots on the entire imaginary axis and not only on its positive part. This is the reason why there are twice as many polynomials to check in the complex case.
11.4.3 Interlacing and Image Set
In this section we interpret Kharitonov’s Theorem in terms of the interlacing property or Hermite-Biehler Theorem and also in terms of the complex plane image of the set of polynomials I(s), evaluated at s = jω for each ω ∈ [0, ∞]. In Chapter 8, we have seen that the Hurwitz stability of a single polynomial δ(s) = δ even (s) + δ odd (s) is equivalent to the interlacing property δ odd (jω) . In considering the Hurwitz staof δ e (ω) = δ even (jω) and δ o (ω) = jω bility of the interval family I(s) we see that the family is stable if and only if every element satisfies the interlacing property. In view of Kharitonov’s Theorem it must therefore be true that verification of the interlacing property for the four Kharitonov polynomials guarantees the interlacing property of every member of the family. This point of view is expressed in the following version of Kharitonov’s Theorem. Let ωemax (ωemin ) denote the positive i i e e max min roots of Kmax (ω)(Kmin (ω)) and let ωoi (ωoi ) denote the positive roots of o o Kmax (ω)(Kmin (ω)). THEOREM 11.8 (Interlacing Statement of Kharitonov’s Theorem) The family I(s) contains only stable polynomials if and only if e e o o 1) The polynomials Kmax (ω), Kmin (ω), Kmax (ω), Kmin (ω) have only real
roots and the set of positive roots interlace as follows:

0 < ω_{e_1}^{min} < ω_{e_1}^{max} < ω_{o_1}^{min} < ω_{o_1}^{max} < ω_{e_2}^{min} < ω_{e_2}^{max} < ω_{o_2}^{min} < ω_{o_2}^{max} < ···,
2) K_max^e(0), K_min^e(0), K_max^o(0), K_min^o(0) are nonzero and of the same sign.
This theorem is illustrated in Figure 11.22, which shows how the interlacing of the odd and even tubes implies the interlacing of the odd and even parts of each polynomial in the interval family.
Figure 11.22 Interlacing odd and even tubes.
We illustrate this interlacing property of interval polynomials with an example.

Example 11.11
Consider the interval family

δ(s) = s^7 + δ_6 s^6 + δ_5 s^5 + δ_4 s^4 + δ_3 s^3 + δ_2 s^2 + δ_1 s + δ_0

where
δ_6 ∈ [9, 9.5], δ_5 ∈ [31, 31.5], δ_4 ∈ [71, 71.5], δ_3 ∈ [111, 111.5],
δ_2 ∈ [109, 109.5], δ_1 ∈ [76, 76.5], δ_0 ∈ [12, 12.5].
Then

K_max^e(ω) = −9ω^6 + 71.5ω^4 − 109ω^2 + 12.5
K_min^e(ω) = −9.5ω^6 + 71ω^4 − 109.5ω^2 + 12
K_max^o(ω) = −ω^6 + 31.5ω^4 − 111ω^2 + 76.5
K_min^o(ω) = −ω^6 + 31ω^4 − 111.5ω^2 + 76.
We can verify the interlacing property of these polynomials (see Figure 11.23).
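This verification can also be carried out numerically. The sketch below (assuming NumPy is available; the helper names are ours) treats each tube boundary as a cubic in x = ω^2, extracts its positive roots in ω, and checks the alternation of even and odd root pairs required by Theorem 11.8:

```python
import numpy as np

def positive_roots(c):
    """Positive roots in omega of a polynomial written in x = omega^2
    (coefficients in descending powers of x)."""
    xs = [r.real for r in np.roots(c) if abs(r.imag) < 1e-9 and r.real > 0]
    return sorted(np.sqrt(x) for x in xs)

# The four tube boundaries of Example 11.11 as cubics in x = omega^2.
e_max = positive_roots([-9.0, 71.5, -109.0, 12.5])   # K^e_max
e_min = positive_roots([-9.5, 71.0, -109.5, 12.0])   # K^e_min
o_max = positive_roots([-1.0, 31.5, -111.0, 76.5])   # K^o_max
o_min = positive_roots([-1.0, 31.0, -111.5, 76.0])   # K^o_min

# Pair the i-th roots of the max/min boundaries; Theorem 11.8 requires
# the even pairs and odd pairs to alternate along the positive omega axis.
def pairs(a, b):
    return [(min(x, y), max(x, y)) for x, y in zip(a, b)]

chain = [w for e, o in zip(pairs(e_max, e_min), pairs(o_max, o_min))
         for w in (*e, *o)]
interlaced = all(u < v for u, v in zip(chain, chain[1:]))
print(interlaced)   # True: the even and odd tubes interlace
```

Each boundary polynomial here has three positive real roots, so the merged chain contains twelve frequencies that must be strictly increasing.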
Figure 11.23 Interlacing property of an interval polynomial (Example 11.11).
The illustration in Figure 11.23 shows how all polynomials with even parts bounded by K_max^e(ω) and K_min^e(ω) and odd parts bounded by K_max^o(ω) and K_min^o(ω) on the imaginary axis satisfy the interlacing property when the Kharitonov polynomials are stable. Figure 11.23 also shows that the interlacing property for a single stable polynomial corresponding to a point δ in coefficient space generalizes to the box ∆ of stable polynomials as the requirement of "interlacing" of the odd and even "tubes." This interpretation is
useful; for instance, it can be used to show that for polynomials of degree less than six, fewer than four Kharitonov polynomials need to be tested for robust stability (see Exercise 11.4).

Image Set Interpretation

It is instructive to interpret Kharitonov's Theorem in terms of the evolution of the complex plane image of I(s) evaluated along the imaginary axis. Let I(jω) denote the set of complex numbers δ(jω) obtained by letting the coefficients of δ(s) range over ∆:

I(jω) := {δ(jω) : δ ∈ ∆}.

Now it follows from the relations in (11.46) and (11.47) that I(jω) is a rectangle in the complex plane with the corners K^1(jω), K^2(jω), K^3(jω), K^4(jω) corresponding to the Kharitonov polynomials evaluated at s = jω. This is shown in Figure 11.24. As ω runs from 0 to ∞ the rectangle I(jω) varies in shape, size, and location, but its sides always remain parallel to the real and imaginary axes of the complex plane. We illustrate this by using a numerical example.

Example 11.12
Consider the interval polynomial of Example 11.11. The image set of this family is calculated for various frequencies. These frequency dependent rectangles are shown in Figure 11.24.
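The rectangle geometry is easy to reproduce. The following sketch evaluates the four Kharitonov polynomials of Example 11.11 at s = jω and confirms that the corners always form an axis-parallel rectangle:

```python
# Kharitonov polynomials of the interval polynomial of Example 11.11,
# coefficients listed in ascending powers of s.
K1 = [12.0, 76.0, 109.5, 111.5, 71.0, 31.0, 9.5, 1.0]
K2 = [12.0, 76.5, 109.5, 111.0, 71.0, 31.5, 9.5, 1.0]
K3 = [12.5, 76.0, 109.0, 111.5, 71.5, 31.0, 9.0, 1.0]
K4 = [12.5, 76.5, 109.0, 111.0, 71.5, 31.5, 9.0, 1.0]

def evaluate(coeffs, s):
    return sum(c * s**k for k, c in enumerate(coeffs))

for w in (0.0, 0.5, 1.0, 1.5):
    corners = [evaluate(K, 1j * w) for K in (K1, K2, K3, K4)]
    # Axis-parallel rectangle: K1/K2 share the even coefficients (same
    # real part) and K1/K3 share the odd coefficients (same imaginary part).
    assert abs(corners[0].real - corners[1].real) < 1e-9
    assert abs(corners[0].imag - corners[2].imag) < 1e-9
    re = [c.real for c in corners]
    im = [c.imag for c in corners]
    print(f"w = {w}: Re in [{min(re):.2f}, {max(re):.2f}], "
          f"Im in [{min(im):.2f}, {max(im):.2f}]")
```

Because the real part of δ(jω) depends only on the even coefficients and the imaginary part only on the odd ones, the four corners can never tilt the rectangle.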
11.4.4 Image Set Based Proof of Kharitonov's Theorem
We give an alternative proof of Kharitonov's Theorem based on analysis of the image set. Suppose that the family I(s) is of degree n and contains at least one stable polynomial. Then stability of the family I(s) can be ascertained by verifying that no polynomial in the family has a root on the imaginary axis. This follows immediately from the Boundary Crossing Theorem of Chapter 8. Indeed, if some element of I(s) has an unstable root then there must also exist a frequency ω* and a polynomial with a root at s = jω*. The case ω* = 0 is ruled out since this would contradict the requirement that K_max^e(0) and K_min^e(0) are of the same sign. Thus, it is only necessary to check that the rectangle I(jω*) excludes the origin of the complex plane for every ω* > 0. Suppose that the Kharitonov polynomials are stable. By the monotonic phase property of Hurwitz polynomials it follows that the corners K^1(jω), K^2(jω), K^3(jω), K^4(jω) of I(jω) start on the positive real axis (say), turn strictly counterclockwise around the origin, and do not pass through it as ω runs from 0 to ∞. Now suppose by contradiction that 0 ∈ I(jω*) for some ω* > 0. Since I(jω) moves continuously with respect to ω and the origin lies outside of I(0), it follows that there exists ω_0 ≤ ω* for which the origin just begins to enter the set I(jω_0). We now consider this limiting situation in which the
Figure 11.24 Image sets of interval polynomial (Kharitonov boxes) (Example 11.12).
origin lies on the boundary of I(jω0 ) and is just about to enter this set as ω increases from ω0 . This is depicted in Figure 11.25. The origin can lie on one of the four sides of the image set rectangle, say AB. The reader can easily verify that in each of these cases the entry of the origin implies that the phase angle (argument) of one of the corners, A or B on the side through which the entry takes place, decreases with increasing ω at ω = ω0 . Since the corners correspond to Kharitonov polynomials which are Hurwitz stable, we have a contradiction with the monotonic phase increase property of Hurwitz polynomials.
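The zero-exclusion condition at the heart of this proof can be turned into a direct numerical test. The sketch below checks, on a finite frequency grid (a heuristic stand-in for the continuum argument), that the rectangle I(jω) of the interval polynomial of Example 11.11 never contains the origin:

```python
# Coefficient intervals (lower, upper) of the interval polynomial of
# Example 11.11, indexed by the power of s.
bounds = [(12.0, 12.5), (76.0, 76.5), (109.0, 109.5), (111.0, 111.5),
          (71.0, 71.5), (31.0, 31.5), (9.0, 9.5), (1.0, 1.0)]

def rectangle(w):
    """Ranges of the real and imaginary parts of I(jw)."""
    re_lo = re_hi = im_lo = im_hi = 0.0
    for k, (lo, hi) in enumerate(bounds):
        t = (-1.0) ** (k // 2) * w**k   # (jw)^k equals t (k even) or j*t (k odd)
        a, b = sorted((lo * t, hi * t))
        if k % 2 == 0:
            re_lo += a; re_hi += b
        else:
            im_lo += a; im_hi += b
    return re_lo, re_hi, im_lo, im_hi

# Grid check of zero exclusion for w > 0.
excluded = all(not (rl <= 0.0 <= rh and il <= 0.0 <= ih)
               for rl, rh, il, ih in (rectangle(0.01 * n) for n in range(1, 1001)))
print(excluded)   # True: the rectangle never touches the origin on this grid
```

Since each coefficient enters either the real or the imaginary part alone, the extreme values of both parts are attained at interval endpoints, which is exactly why the image set is the Kharitonov rectangle.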
11.4.5 Image Set Edge Generators and Exposed Edges
The interval family I(s), or equivalently the coefficient set ∆, is a polytope and therefore its stability is equivalent to that of its exposed edges. There are in general (n + 1)2^n such exposed edges. However, from the image set arguments given above, it is clear that stability of the family I(s) is, in fact, also equivalent to that of the four polynomial segments [K^1(s), K^2(s)], [K^1(s), K^3(s)], [K^2(s), K^4(s)], and [K^3(s), K^4(s)]. This follows from the previous continuity arguments and the fact that these polynomial segments generate the boundary of the image set I(jω) for each ω. We now observe that each of the differences K^2(s) − K^1(s), K^3(s) − K^1(s), K^4(s) − K^2(s), and K^4(s) − K^3(s) is either an even or an odd polynomial. It follows then from the Vertex Lemma of Chapter 8 that these segments are Hurwitz stable if and only if the endpoints K^1(s), K^2(s), K^3(s), and K^4(s) are. These arguments serve as yet another proof of Kharitonov's Theorem. They serve to highlight 1) the important fact that it is only necessary to check stability of that subset of polynomials which generates the boundary of the image set and 2) the role of the Vertex Lemma in reducing the stability tests to that of fixed polynomials.

Figure 11.25 Alternative proof of Kharitonov's Theorem.
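As an illustration, the following sketch samples the four Kharitonov segments of the interval polynomial of Example 11.11 and confirms Hurwitz stability along each segment by computing roots with NumPy. This is a sampled check of segment stability, not a substitute for the Vertex Lemma argument:

```python
import numpy as np

def hurwitz(asc):
    """True if the polynomial (ascending coefficients) has all roots in Re s < 0."""
    return max(np.roots(asc[::-1]).real) < 0

# Kharitonov polynomials of Example 11.11, ascending powers of s.
K1 = [12.0, 76.0, 109.5, 111.5, 71.0, 31.0, 9.5, 1.0]
K2 = [12.0, 76.5, 109.5, 111.0, 71.0, 31.5, 9.5, 1.0]
K3 = [12.5, 76.0, 109.0, 111.5, 71.5, 31.0, 9.0, 1.0]
K4 = [12.5, 76.5, 109.0, 111.0, 71.5, 31.5, 9.0, 1.0]

# The four edge-generating segments and a convex-combination sweep.
segments = [(K1, K2), (K1, K3), (K2, K4), (K3, K4)]
ok = all(hurwitz([(1 - t) * a + t * b for a, b in zip(p, q)])
         for p, q in segments
         for t in np.linspace(0.0, 1.0, 21))
print(ok)   # True: every sampled point on the four segments is Hurwitz
```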
11.4.6 Extremal Properties of the Kharitonov Polynomials
In this section we derive some useful extremal properties of the Kharitonov polynomials. Suppose that we have proved the stability of the family of polynomials δ(s) = δ_0 + δ_1 s + δ_2 s^2 + ··· + δ_n s^n, with coefficients in the box ∆ = [x_0, y_0] × [x_1, y_1] × ··· × [x_n, y_n]. Each polynomial in the family is stable. A natural question that arises now is the following: Which point in ∆ is closest to instability? The stability margin of this point is in a sense the worst case stability margin of the interval system. It turns out that a precise answer to this question can be given in terms of the parametric stability margin as well as in terms of the gain margins of an associated interval system. We first deal with the extremal parametric stability margin problem.

Extremal Parametric Stability Margin Property

We consider a stable interval polynomial family. It is therefore possible to associate with each polynomial of the family the largest stability ball centered around it. Write δ = [δ_0, δ_1, ···, δ_n], and regard δ as a point in R^{n+1}. Let ‖δ‖_p denote the p norm in R^{n+1} and let this be associated with δ(s). The set of polynomials which are unstable of degree n or of degree less than n is denoted by U. Then the radius of the stability ball centered at δ is

ρ(δ) = inf_{u ∈ U} ‖δ − u‖_p.
We thus define a mapping from ∆ to the set of all positive real numbers:

ρ : ∆ → R^+ \ {0}
    δ(s) ↦ ρ(δ).
A natural question to ask is the following: Is there a point in ∆ which is the nearest to instability? Or, stated in terms of functions: Does the function ρ have a minimum, and is there a precise point in ∆ where it is reached? The answer to that question is given in the following theorem. In the discussion to follow we drop the subscript p from the norm since the result holds for any norm chosen.

THEOREM 11.9 (Extremal Property of the Kharitonov Polynomials)
The function

ρ : ∆ → R^+ \ {0}
    δ(s) ↦ ρ(δ)

has a minimum which is reached at one of the four Kharitonov polynomials associated with ∆.

PROOF Let K^i(s), i = 1, 2, 3, 4 denote the four Kharitonov polynomials. Consider the four radii associated with these four extreme polynomials, and let us assume for example that

ρ(K^1) = min{ρ(K^1), ρ(K^2), ρ(K^3), ρ(K^4)}.    (11.57)

Let us now suppose, by way of contradiction, that some polynomial γ(s) in the box is such that

ρ(γ) < ρ(K^1).    (11.58)
For convenience we will denote ρ(γ) by ρ_γ, and ρ(K^1) by ρ_1. By definition, there is at least one polynomial situated on the hypersphere S(γ(s), ρ_γ) which is unstable or of degree less than n. Let β(s) be such a polynomial. Since β(s) is on S(γ(s), ρ_γ), there exists α = [α_0, α_1, ···, α_n] with ‖α‖ = 1 such that

β(s) = (γ_0 + α_0 ρ_γ) + (γ_1 + α_1 ρ_γ)s + ··· + (γ_n + α_n ρ_γ)s^n    (11.59)

(α_0, α_1, ···, α_n can be positive or nonpositive here). But by (11.57), ρ_1 is the smallest of the four extreme radii, and by (11.58) ρ_γ is less than ρ_1. As a consequence, the four new polynomials

δ_n^1(s) = (x_0 − |α_0|ρ_γ) + (x_1 − |α_1|ρ_γ)s + (y_2 + |α_2|ρ_γ)s^2 + (y_3 + |α_3|ρ_γ)s^3 + ···
δ_n^2(s) = (x_0 − |α_0|ρ_γ) + (y_1 + |α_1|ρ_γ)s + (y_2 + |α_2|ρ_γ)s^2 + (x_3 − |α_3|ρ_γ)s^3 + ···
δ_n^3(s) = (y_0 + |α_0|ρ_γ) + (x_1 − |α_1|ρ_γ)s + (x_2 − |α_2|ρ_γ)s^2 + (y_3 + |α_3|ρ_γ)s^3 + ···
δ_n^4(s) = (y_0 + |α_0|ρ_γ) + (y_1 + |α_1|ρ_γ)s + (x_2 − |α_2|ρ_γ)s^2 + (x_3 − |α_3|ρ_γ)s^3 + ···

are all stable because

‖δ_n^i − K^i‖ = ρ_γ < ρ_i,    i = 1, 2, 3, 4.
By applying Kharitonov's Theorem, we thus conclude that the new box

∆_n = [x_0 − |α_0|ρ_γ, y_0 + |α_0|ρ_γ] × ··· × [x_n − |α_n|ρ_γ, y_n + |α_n|ρ_γ]    (11.60)

contains only stable polynomials of degree n. The contradiction now clearly follows from the fact that β(s) in (11.59) certainly belongs to ∆_n, and yet it is unstable or of degree less than n. This proves the theorem.

The above result tells us that, over the entire box, one is closest to instability at one of the Kharitonov corners, say K^i(s). It is clear that if we take the box ∆_n constructed in the above proof (11.60) and replace ρ_γ by ρ(K^i), the resulting box is larger than the original box ∆. This fact can be used to develop an algorithm that enlarges the stability box to its maximum limit. We leave the details to the reader but give an illustrative example.

Example 11.13
Consider the system given in Example 11.9:

G(s) = (δ_2 s^2 + δ_1 s + δ_0) / (s^3(δ_6 s^3 + δ_5 s^2 + δ_4 s + δ_3))

with the coefficients being bounded as follows:

δ_0 ∈ [300, 400], δ_1 ∈ [600, 700], δ_2 ∈ [450, 500], δ_3 ∈ [240, 300],
δ_4 ∈ [70, 80], δ_5 ∈ [12, 14], δ_6 ∈ [1, 1].

We wish to verify whether the system is robustly stable, and if it is, we would like to calculate the smallest value of the stability radius in the space of δ = [δ_0, δ_1, δ_2, δ_3, δ_4, δ_5, δ_6] as these coefficients range over the uncertainty box. The characteristic polynomial of the closed-loop system is

δ(s) = δ_6 s^6 + δ_5 s^5 + δ_4 s^4 + δ_3 s^3 + δ_2 s^2 + δ_1 s + δ_0.

Since all coefficients of the polynomial perturb independently, we can apply Kharitonov's Theorem. This gives us the following four polynomials to check:

K^1(s) = 300 + 600s + 500s^2 + 300s^3 + 70s^4 + 12s^5 + s^6
K^2(s) = 300 + 700s + 500s^2 + 240s^3 + 70s^4 + 14s^5 + s^6
K^3(s) = 400 + 600s + 450s^2 + 300s^3 + 80s^4 + 12s^5 + s^6
K^4(s) = 400 + 700s + 450s^2 + 240s^3 + 80s^4 + 14s^5 + s^6.

Since all four Kharitonov polynomials are Hurwitz, we proceed to calculate the worst case stability margin.
From the result established above (Theorem 11.9) we know that this occurs at one of the Kharitonov vertices, and thus it suffices to determine the stability radius at these four vertices. It is assumed that [α_0, α_1, α_2, α_3, α_4, α_5, α_6] = [43, 33, 25, 15, 5, 1.5, 1]. We can compute the stability radius by simply applying the techniques of Chapter 8 to each of these fixed Kharitonov polynomials and taking the minimum value of the stability margin. We illustrate this calculation using two different norm measures, ℓ_2 and ℓ_∞, for which we have

ρ_2(δ) = 1 and ρ_∞(δ) = 0.4953.
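The prerequisite step of this example, Hurwitz stability of the four vertex polynomials, is easy to reproduce with a root computation (a numerical check, not the Chapter 8 margin machinery itself):

```python
import numpy as np

# Kharitonov vertex polynomials of Example 11.13, ascending powers of s.
vertices = [
    [300, 600, 500, 300, 70, 12, 1],   # K^1
    [300, 700, 500, 240, 70, 14, 1],   # K^2
    [400, 600, 450, 300, 80, 12, 1],   # K^3
    [400, 700, 450, 240, 80, 14, 1],   # K^4
]

# Spectral abscissa (largest root real part) of each vertex polynomial.
abscissas = [max(np.roots(c[::-1]).real) for c in vertices]
assert all(a < 0 for a in abscissas)   # every vertex polynomial is Hurwitz
print("all four Kharitonov vertex polynomials are Hurwitz")
```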
Extremal Gain Margin for Interval Systems

Let us now consider the standard unity feedback control system shown below in Figure 11.26.
Figure 11.26 A feedback system.
We assume that the system represented by G(s) contains parameter uncertainty. In particular, let us assume that G(s) is a proper transfer function which is a ratio of polynomials n(s) and d(s), the coefficients of which vary in independent intervals. Thus, the polynomials n(s) and d(s) vary in respective independent interval polynomial families N(s) and D(s). We refer to such a family of systems as an interval system. Let

G(s) := {G(s) = n(s)/d(s) : n(s) ∈ N(s), d(s) ∈ D(s)}

represent the interval family of systems in which the open-loop transfer function lies. We assume that the closed-loop system containing the interval family G(s) is robustly stable. In other words, we assume that the characteristic polynomial of the closed-loop system given by

Π(s) = d(s) + n(s)
is of invariant degree n and is Hurwitz for all (n(s), d(s)) ∈ N(s) × D(s). Let

Π(s) = {Π(s) = n(s) + d(s) : n(s) ∈ N(s), d(s) ∈ D(s)}.

Then robust stability means that every polynomial in Π(s) is Hurwitz and of degree n. This can in fact be verified constructively. Let K_N^i(s), i = 1, 2, 3, 4 and K_D^j(s), j = 1, 2, 3, 4 denote the Kharitonov polynomials associated with N(s) and D(s) respectively. Now introduce the positive set of Kharitonov systems G_K^+(s) associated with the interval family G(s) as follows:

G_K^+(s) := {K_N^i(s)/K_D^i(s) : i = 1, 2, 3, 4}.
THEOREM 11.10
The closed-loop system of Figure 11.26 containing the interval plant G(s) is robustly stable if and only if each of the positive Kharitonov systems in G_K^+(s) is stable.

PROOF We need to verify that Π(s) remains of degree n and Hurwitz for all (n(s), d(s)) ∈ N(s) × D(s). It follows from the assumption of independence of the families N(s) and D(s) that Π(s) is itself an interval polynomial family. It is easy to check that the Kharitonov polynomials of Π(s) are the four polynomials K_N^i(s) + K_D^i(s), i = 1, 2, 3, 4. Thus, the family Π(s) is stable if and only if the subset K_N^i(s) + K_D^i(s), i = 1, 2, 3, 4 are all stable and of degree n. The latter in turn is equivalent to the stability of the feedback systems obtained by replacing G(s) by each element of G_K^+(s).

In classical control, gain margin is commonly regarded as a useful measure of robust stability. Suppose the closed-loop system is stable. The gain margin at the loop breaking point p is defined to be the largest value k* of k ≥ 1 for which closed-loop stability is preserved with G(s) replaced by kG(s) for all k ∈ [1, k*). Suppose now that we have verified the robust stability of the interval family of systems G(s). The next question of interest is: What is the gain margin of the system at the loop breaking point p? To be more precise: What is the worst case gain margin of the system at the point p as G(s) ranges over G(s)? An exact answer to this question can be given as follows.

THEOREM 11.11
The worst case gain margin of the system at the point p over the family G(s) is the minimum gain margin corresponding to the positive Kharitonov systems G_K^+(s).
PROOF Consider the characteristic polynomial Π(s) = d(s) + kn(s) corresponding to the open-loop system kG(s). For each fixed value of k this is an interval family. For positive k the Kharitonov polynomials of this family are K_D^i(s) + kK_N^i(s), i = 1, 2, 3, 4. Therefore, the minimum value of the gain margin over the set G(s) is in fact attained over the subset G_K^+(s).

REMARK 11.6 A similar result can be stated for the case of a positive feedback system by introducing the set of negative Kharitonov systems (see Exercise 5.9).
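Theorem 11.11 reduces the worst case gain margin computation to four one-parameter problems. The sketch below applies it to a hypothetical interval plant (the numerator and denominator intervals are invented for illustration); each Kharitonov system's margin is found by bisection on k:

```python
import numpy as np

def hurwitz(asc):
    """True if the polynomial (ascending coefficients) is Hurwitz stable."""
    return max(np.roots(asc[::-1]).real) < 0

# Hypothetical interval plant, invented for illustration:
#   numerator:   n0,                        n0 in [1, 2]
#   denominator: s^3 + a2 s^2 + a1 s + a0,  a2 in [5,6], a1 in [4,5], a0 in [1,2].
# Positive Kharitonov systems pair the i-th Kharitonov polynomials of N and D.
KN = [[1.0], [1.0], [2.0], [2.0]]
KD = [[1.0, 4.0, 6.0, 1.0], [1.0, 5.0, 6.0, 1.0],
      [2.0, 4.0, 5.0, 1.0], [2.0, 5.0, 5.0, 1.0]]

def gain_margin(n, d, k_hi=100.0, iters=60):
    """Largest k* with d(s) + k n(s) Hurwitz on [1, k*), by bisection
    (assumes stability at k = 1 and instability at k_hi)."""
    lo, hi = 1.0, k_hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        closed = [c + mid * (n[j] if j < len(n) else 0.0)
                  for j, c in enumerate(d)]
        lo, hi = (mid, hi) if hurwitz(closed) else (lo, mid)
    return lo

worst = min(gain_margin(n, d) for n, d in zip(KN, KD))
print(round(worst, 3))   # 9.0 for this data, attained at the third system
```

For a third-order loop s^3 + a2 s^2 + a1 s + (a0 + k n0) the margin has the closed form (a2 a1 − a0)/n0, so the bisection result can be checked by hand.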
11.4.7 Robust State Feedback Stabilization
In this subsection we give an application of Kharitonov's Theorem to robust stabilization. We consider the following problem: Suppose that you are given a set of n nominal parameters

a_0^0, a_1^0, ···, a_{n−1}^0,

together with a set of prescribed uncertainty ranges ∆a_0, ∆a_1, ···, ∆a_{n−1}, and that you consider the family I_0(s) of monic polynomials

δ(s) = δ_0 + δ_1 s + δ_2 s^2 + ··· + δ_{n−1} s^{n−1} + s^n,

where

δ_0 ∈ [a_0^0 − ∆a_0/2, a_0^0 + ∆a_0/2], ···, δ_{n−1} ∈ [a_{n−1}^0 − ∆a_{n−1}/2, a_{n−1}^0 + ∆a_{n−1}/2].

To avoid trivial cases assume that the family I_0(s) contains at least one unstable polynomial. Suppose now that you can use a vector of n free parameters k = (k_0, k_1, ···, k_{n−1}) to transform the family I_0(s) into the family I_k(s) described by

δ(s) = (δ_0 + k_0) + (δ_1 + k_1)s + (δ_2 + k_2)s^2 + ··· + (δ_{n−1} + k_{n−1})s^{n−1} + s^n.

The problem of interest, then, is the following: Given the perturbation ranges ∆a_0, ∆a_1, ···, ∆a_{n−1} fixed a priori, find, if possible, a vector k so that the new family of polynomials I_k(s) is entirely Hurwitz stable. This problem arises, for example, when one applies state-feedback control to a single-input system where the matrices A, b are in controllable companion form and the coefficients of the characteristic polynomial of A are subject to bounded perturbations. The answer to this problem is always affirmative and is precisely given in Theorem 11.12. Before stating it, however, we need to prove the following lemma.
LEMMA 11.7
Let n be a positive integer and let P(s) be a stable polynomial of degree n − 1:

P(s) = p_0 + p_1 s + ··· + p_{n−1} s^{n−1}, with all p_i > 0.

Then there exists α > 0 such that

Q(s) = P(s) + p_n s^n = p_0 + p_1 s + ··· + p_{n−1} s^{n−1} + p_n s^n

is stable if and only if p_n ∈ [0, α).
PROOF To be absolutely rigorous there should be four different proofs depending on whether n is of the form 4r, 4r + 1, 4r + 2, or 4r + 3. We give the proof when n is of the form 4r; one can check that only slight changes are needed if n is of the form 4r + j, j = 1, 2, 3. If n = 4r, r > 0, we can write

P(s) = p_0 + p_1 s + ··· + p_{4r−1} s^{4r−1},

and the even and odd parts of P(s) are given by

P_even(s) = p_0 + p_2 s^2 + ··· + p_{4r−2} s^{4r−2},
P_odd(s) = p_1 s + p_3 s^3 + ··· + p_{4r−1} s^{4r−1}.

Let us also define

P^e(ω) := P_even(jω) = p_0 − p_2 ω^2 + p_4 ω^4 − ··· − p_{4r−2} ω^{4r−2},
P^o(ω) := P_odd(jω)/(jω) = p_1 − p_3 ω^2 + p_5 ω^4 − ··· − p_{4r−1} ω^{4r−2}.

P(s) being stable, we know by the Hermite-Biehler Theorem that P^e(ω) has precisely 2r − 1 positive roots ω_{e,1}, ω_{e,2}, ···, ω_{e,2r−1}, that P^o(ω) also has 2r − 1 positive roots ω_{o,1}, ω_{o,2}, ···, ω_{o,2r−1}, and that these roots interlace in the following manner:

0 < ω_{e,1} < ω_{o,1} < ω_{e,2} < ω_{o,2} < ··· < ω_{e,2r−1} < ω_{o,2r−1}.

It can also be checked that P^e(ω_{o,j}) < 0 if and only if j is odd, and P^e(ω_{o,j}) > 0 if and only if j is even, that is,

P^e(ω_{o,1}) < 0, P^e(ω_{o,2}) > 0, ···, P^e(ω_{o,2r−2}) > 0, P^e(ω_{o,2r−1}) < 0.    (11.61)

Let us denote

α = min_{j odd} [−P^e(ω_{o,j}) / (ω_{o,j})^{4r}].    (11.62)
By (11.61) we know that α is positive. We can now prove the following:

Q(s) = P(s) + p_{4r} s^{4r} is stable if and only if p_{4r} ∈ [0, α).

Q(s) is certainly stable when p_{4r} = 0. Let us now suppose that

0 < p_{4r} < α.    (11.63)

Q^o(ω) and Q^e(ω) are given by

Q^o(ω) = P^o(ω) = p_1 − p_3 ω^2 + p_5 ω^4 − ··· − p_{4r−1} ω^{4r−2},
Q^e(ω) = P^e(ω) + p_{4r} ω^{4r} = p_0 − p_2 ω^2 + p_4 ω^4 − ··· − p_{4r−2} ω^{4r−2} + p_{4r} ω^{4r}.

We are going to show that Q^e(ω) and Q^o(ω) satisfy the Hermite-Biehler Theorem provided that p_{4r} remains within the bounds defined by (11.63). First, we know the roots of Q^o(ω) = P^o(ω). Then we have Q^e(0) = p_0 > 0, and also

Q^e(ω_{o,1}) = P^e(ω_{o,1}) + p_{4r}(ω_{o,1})^{4r}.

But by (11.62) and (11.63) we have

Q^e(ω_{o,1}) < P^e(ω_{o,1}) + [−P^e(ω_{o,1})/(ω_{o,1})^{4r}](ω_{o,1})^{4r} = 0.

Thus, Q^e(ω_{o,1}) < 0. Then we have
Q^e(ω_{o,2}) = P^e(ω_{o,2}) + p_{4r}(ω_{o,2})^{4r}.

But by (11.61) we know that P^e(ω_{o,2}) > 0, and therefore we also have Q^e(ω_{o,2}) > 0. Pursuing the same reasoning we can prove in exactly the same way that the following inequalities hold:

Q^e(0) > 0, Q^e(ω_{o,1}) < 0, Q^e(ω_{o,2}) > 0, ···, Q^e(ω_{o,2r−2}) > 0, Q^e(ω_{o,2r−1}) < 0.    (11.64)

From this we conclude that Q^e(ω) has precisely 2r − 1 roots in the open interval (0, ω_{o,2r−1}), namely

ω'_{e,1}, ω'_{e,2}, ···, ω'_{e,2r−1},

and that these roots interlace with the roots of Q^o(ω):

0 < ω'_{e,1} < ω_{o,1} < ω'_{e,2} < ω_{o,2} < ··· < ω'_{e,2r−1} < ω_{o,2r−1}.    (11.65)

Moreover, we see in (11.64) that Q^e(ω_{o,2r−1}) < 0,
and since p_{4r} > 0, we also obviously have Q^e(+∞) > 0. Therefore, Q^e(ω) has a final positive root ω'_{e,2r} which satisfies

ω_{o,2r−1} < ω'_{e,2r}.    (11.66)

From (11.65) and (11.66) we conclude that Q^o(ω) and Q^e(ω) satisfy the Hermite-Biehler Theorem and therefore Q(s) is stable. To complete the proof of this lemma, notice that Q(s) is obviously unstable if p_{4r} < 0 since we have assumed that all the p_i are positive. Moreover, it can be shown that for p_{4r} = α, the polynomial P(s) + αs^{4r} has a pure imaginary root and therefore is unstable. Now, it is impossible that P(s) + p_{4r}s^{4r} be stable for some p_{4r} > α, because otherwise we could use Kharitonov's Theorem and say

P(s) + (α/2)s^{4r} and P(s) + p_{4r}s^{4r} both stable ⟹ P(s) + αs^{4r} stable,

which would be a contradiction. This completes the proof when n = 4r. For the sake of completeness, let us make precise that in general we have

α = min_{j odd} [−P^e(ω_{o,j})/(ω_{o,j})^{4r}]       if n = 4r,
α = min_{j even} [−P^o(ω_{e,j})/(ω_{e,j})^{4r+1}]    if n = 4r + 1,
α = min_{j even} [P^e(ω_{o,j})/(ω_{o,j})^{4r+2}]     if n = 4r + 2,
α = min_{j odd} [P^o(ω_{e,j})/(ω_{e,j})^{4r+3}]      if n = 4r + 3.

The details of the proof for the other cases are omitted. We can now state the following theorem to answer the question raised at the beginning of this section.

THEOREM 11.12
For any set of nominal parameters {a_0^0, a_1^0, ···, a_{n−1}^0}, and for any set of positive numbers ∆a_0, ∆a_1, ···, ∆a_{n−1}, it is possible to find a vector k such that the entire family I_k(s) is stable.

PROOF
The proof is constructive.
Step 1: Take any stable polynomial R(s) of degree n − 1. Let ρ(R(·)) be the radius of the largest stability hypersphere around R(s). It can be checked from the formulas of Chapter 10 that for any positive real number λ we have ρ(λR(·)) = λρ(R(·)). Thus, it is possible to find a positive real λ such that, with P(s) = λR(s),

ρ(P(·)) > sqrt[(∆a_0/2)^2 + (∆a_1/2)^2 + ··· + (∆a_{n−1}/2)^2].    (11.67)

Denote P(s) = p_0 + p_1 s + p_2 s^2 + ··· + p_{n−1} s^{n−1}, and consider the four following Kharitonov polynomials of degree n − 1:

P^1(s) = (p_0 − ∆a_0/2) + (p_1 − ∆a_1/2)s + (p_2 + ∆a_2/2)s^2 + ···,
P^2(s) = (p_0 − ∆a_0/2) + (p_1 + ∆a_1/2)s + (p_2 + ∆a_2/2)s^2 + ···,
P^3(s) = (p_0 + ∆a_0/2) + (p_1 − ∆a_1/2)s + (p_2 − ∆a_2/2)s^2 + ···,    (11.68)
P^4(s) = (p_0 + ∆a_0/2) + (p_1 + ∆a_1/2)s + (p_2 − ∆a_2/2)s^2 + ···.
We conclude that these four polynomials are stable since ‖P^i(s) − P(s)‖ < ρ(P(s)).

Step 2: Now, applying Lemma 11.7, we know that we can find four positive numbers α_1, α_2, α_3, α_4 such that P^j(s) + p_n s^n is stable for 0 ≤ p_n < α_j, j = 1, 2, 3, 4.
Let us select a single positive number α such that the polynomials

P^j(s) + αs^n    (11.69)

are all stable. If α can be chosen equal to 1 (that is, if the four α_j are greater than 1) then we choose α = 1; otherwise we multiply everything by 1/α, which is greater than 1, and we know from (11.69) that the four polynomials

K^j(s) = (1/α)P^j(s) + s^n

are stable. But the four polynomials K^j(s) are nothing but the four Kharitonov polynomials associated with the family of polynomials

δ(s) = δ_0 + δ_1 s + ··· + δ_{n−1} s^{n−1} + s^n,

where

δ_0 ∈ [(1/α)p_0 − (1/α)∆a_0/2, (1/α)p_0 + (1/α)∆a_0/2], ···,
δ_{n−1} ∈ [(1/α)p_{n−1} − (1/α)∆a_{n−1}/2, (1/α)p_{n−1} + (1/α)∆a_{n−1}/2],

and therefore this family is entirely stable.

Step 3: It suffices now to choose the vector k such that

k_i + a_i^0 = (1/α)p_i, for i = 0, 1, ···, n − 1.
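The three-step construction can be carried out numerically. The sketch below reproduces it for the data of Example 11.14 that follows, with the Lemma 11.7 bound α_j located by bisection instead of the closed-form expression (NumPy assumed; helper names are ours):

```python
import numpy as np

def hurwitz(asc):
    """True if the polynomial (ascending coefficients) is Hurwitz stable."""
    return max(np.roots(asc[::-1]).real) < 0

# Data of Example 11.14: nominal coefficients a_i^0 and ranges delta a_i.
a  = [1, 1, 2, -3, 2, -1]
da = [3, 5, 2, 1, 7, 5]
h  = [d / 2 for d in da]

# Step 1: P(s) = 6 (s+1)^5 and its Kharitonov corners P^1..P^4, formed
# by the sign pattern (-, -, +, +, ...) repeating with period 4.
p = [6, 30, 60, 60, 30, 6]
patterns = [(-1, -1, +1, +1, -1, -1), (-1, +1, +1, -1, -1, +1),
            (+1, -1, -1, +1, +1, -1), (+1, +1, -1, -1, +1, +1)]
P = [[pi + s * hi for pi, hi, s in zip(p, h, pat)] for pat in patterns]

# Step 2: alpha_j = supremum of stable leading coefficients (Lemma 11.7),
# found here by bisection on the degree-n coefficient.
def alpha(poly, hi=10.0):
    lo, up = 0.0, hi
    for _ in range(60):
        mid = 0.5 * (lo + up)
        lo, up = (mid, up) if hurwitz(poly + [mid]) else (lo, mid)
    return lo

alphas = [alpha(Pj) for Pj in P]          # cf. the alpha_j in Example 11.14
assert min(alphas) > 1                    # so alpha = 1 suffices
assert all(hurwitz(Pj + [1.0]) for Pj in P)

# Step 3: with alpha = 1, k_i = p_i - a_i^0.
k = [pi - ai for pi, ai in zip(p, a)]
print(k)   # [5, 29, 58, 63, 28, 7]
```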
REMARK 11.7 It is clear that in step 1 one can determine the largest box around R(·) with sides proportional to ∆a_i. The dimensions of such a box are also enlarged by the factor λ when R(·) is replaced by λR(·). This change does not affect the remaining steps of the proof.

Example 11.14
Suppose that our nominal polynomial is

s^6 − s^5 + 2s^4 − 3s^3 + 2s^2 + s + 1, that is, (a_0^0, a_1^0, a_2^0, a_3^0, a_4^0, a_5^0) = (1, 1, 2, −3, 2, −1),

and suppose that we want to handle the following set of uncertainty ranges:

∆a_0 = 3, ∆a_1 = 5, ∆a_2 = 2, ∆a_3 = 1, ∆a_4 = 7, ∆a_5 = 5.

Step 1: Consider the following stable polynomial of degree 5:

R(s) = (s + 1)^5 = 1 + 5s + 10s^2 + 10s^3 + 5s^4 + s^5.

The calculation of ρ(R(·)) gives ρ(R(·)) = 1. On the other hand, we have

sqrt[(∆a_0/2)^2 + (∆a_1/2)^2 + ··· + (∆a_5/2)^2] = 5.31.

Taking therefore λ = 6, we have that

P(s) = 6 + 30s + 60s^2 + 60s^3 + 30s^4 + 6s^5
has a radius ρ(P(·)) = 6, which is greater than 5.31. The four polynomials P^j(s) are given by

P^1(s) = 4.5 + 27.5s + 61s^2 + 60.5s^3 + 26.5s^4 + 3.5s^5,
P^2(s) = 4.5 + 32.5s + 61s^2 + 59.5s^3 + 26.5s^4 + 8.5s^5,
P^3(s) = 7.5 + 27.5s + 59s^2 + 60.5s^3 + 33.5s^4 + 3.5s^5,
P^4(s) = 7.5 + 32.5s + 59s^2 + 59.5s^3 + 33.5s^4 + 8.5s^5.

Step 2: The application of Lemma 11.7 gives the following values:

α_1 ≃ 1.360, α_2 ≃ 2.667, α_3 ≃ 1.784, α_4 ≃ 3.821,

and therefore we can choose α = 1, so that the four polynomials

K^1(s) = 4.5 + 27.5s + 61s^2 + 60.5s^3 + 26.5s^4 + 3.5s^5 + s^6,
K^2(s) = 4.5 + 32.5s + 61s^2 + 59.5s^3 + 26.5s^4 + 8.5s^5 + s^6,
K^3(s) = 7.5 + 27.5s + 59s^2 + 60.5s^3 + 33.5s^4 + 3.5s^5 + s^6,
K^4(s) = 7.5 + 32.5s + 59s^2 + 59.5s^3 + 33.5s^4 + 8.5s^5 + s^6,

are stable.

Step 3: We just have to take

k_0 = p_0 − a_0^0 = 5, k_1 = p_1 − a_1^0 = 29, k_2 = p_2 − a_2^0 = 58,
k_3 = p_3 − a_3^0 = 63, k_4 = p_4 − a_4^0 = 28, k_5 = p_5 − a_5^0 = 7.

11.5 Stability of Interval Systems
For many control systems, Kharitonov's theorem has a conservatism associated with the independence of variation of all characteristic polynomial coefficients. We illustrate this in the example below.

Example 11.15
Consider the plant

G(s) = n_p(s)/d_p(s) = s/(1 − s + αs^2 + s^3), where α ∈ [3.4, 5],

with nominal value α_0 = 4. It is easy to check that the controller C(s) = 3/(s + 1) stabilizes the nominal plant, yielding the nominal closed-loop characteristic polynomial

δ_4(s) = 1 + 3s + 3s^2 + 5s^3 + s^4.

To determine whether C(s) also stabilizes the family of perturbed plants, we observe that the characteristic polynomial of the system is

δ_α(s) = 1 + 3s + (α − 1)s^2 + (α + 1)s^3 + s^4.

In the space (δ_2, δ_3), the coefficients of s^2 and s^3 describe the segment [R_1, R_2] shown in Figure 11.27.
Figure 11.27 A box in parameter space is transformed into a segment in coefficient space (Example 11.15).
The only way to apply Kharitonov's theorem here is to enclose this segment in the box B defined by the two "real" points R_1 and R_2 and two "artificial" points A_1 and A_2, and to check the stability of the Kharitonov polynomials, which correspond to the characteristic polynomial evaluated at the four corners of B. But

δ_{A_1}(s) = 1 + 3s + 2.4s^2 + 6s^3 + s^4
is unstable because its third Hurwitz determinant H_3 is

H_3 = det [ 6    3    0
            1    2.4  1
            0    6    3 ] = −1.8 < 0.

Therefore, using Kharitonov's theorem here does not allow us to conclude the stability of the entire family of closed-loop systems. And yet, if one checks the values of the Hurwitz determinants along the segment [R_1, R_2], one finds

H = [ 1 + α   3       0       0
      1       α − 1   1       0
      0       1 + α   3       0
      0       1       α − 1   1 ]

and

H_1 = 1 + α,
H_2 = α^2 − 4,
H_3 = 2α^2 − 2α − 13,
H_4 = H_3,

all positive for α ∈ [3.4, 5]. This example demonstrates that Kharitonov's theorem provides only sufficient conditions, which may sometimes be too conservative for control problems. An alternative that we have in this type of situation is to apply the Edge Theorem, since the parameters of the plant are within a box, which is, of course, a particular case of a polytope. However, we shall see that the solution given by the Edge Theorem, in general, requires us to carry out many redundant checks. Moreover, the Edge Theorem is not a generalization of Kharitonov's Theorem. An appropriate generalization of Kharitonov's Theorem would be expected to produce a test set that would enjoy the economy and optimality of the Kharitonov polynomials, without any unnecessary conservatism. Motivated by these considerations, we formulate the problem of generalizing Kharitonov's Theorem in the next subsection. Before proceeding to the main results, we introduce some notation and notational conventions with a view towards streamlining the presentation.
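These determinant computations are easily reproduced. The sketch below (assuming NumPy) evaluates the leading principal minors of the Hurwitz matrix along the segment α ∈ [3.4, 5] and at the artificial corner A_1:

```python
import numpy as np

# delta_alpha(s) = 1 + 3s + (alpha - 1)s^2 + (alpha + 1)s^3 + s^4
def hurwitz_minors(alpha):
    a0, a1, a2, a3, a4 = 1.0, 3.0, alpha - 1.0, alpha + 1.0, 1.0
    H = np.array([[a3, a1, 0.0, 0.0],
                  [a4, a2, a0, 0.0],
                  [0.0, a3, a1, 0.0],
                  [0.0, a4, a2, a0]])
    return [np.linalg.det(H[:k, :k]) for k in range(1, 5)]

# Along the segment all four minors stay positive (a grid check)...
segment_stable = all(min(hurwitz_minors(al)) > 0
                     for al in np.linspace(3.4, 5.0, 161))

# ...but at the artificial corner A1, (delta2, delta3) = (2.4, 6), H3 < 0.
A1 = np.array([[6.0, 3.0, 0.0], [1.0, 2.4, 1.0], [0.0, 6.0, 3.0]])
H3_A1 = np.linalg.det(A1)
print(segment_stable, round(float(H3_A1), 1))   # True -1.8
```

Since H_2 = α^2 − 4 and H_3 = 2α^2 − 2α − 13 are both increasing on [3.4, 5], checking the left endpoint already decides positivity on the whole segment.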
11.5.1 Problem Formulation and Notation
We will be dealing with polynomials of the form

δ(s) = F_1(s)P_1(s) + F_2(s)P_2(s) + ··· + F_m(s)P_m(s).    (11.70)

Write

F(s) := (F_1(s), F_2(s), ···, F_m(s))    (11.71)
P(s) := (P_1(s), P_2(s), ···, P_m(s))    (11.72)
and introduce the notation

⟨F(s), P(s)⟩ := F_1(s)P_1(s) + F_2(s)P_2(s) + ··· + F_m(s)P_m(s).    (11.73)

We will say that F(s) stabilizes P(s) if δ(s) = ⟨F(s), P(s)⟩ is Hurwitz stable. Note that throughout this chapter, stable will mean Hurwitz stable unless otherwise stated. The polynomials F_i(s) are assumed to be fixed real polynomials, whereas the P_i(s) are real polynomials with coefficients varying independently in prescribed intervals. An extension of the results to the case where the F_i(s) are complex polynomials or quasi-polynomials will also be given in a later section. Let d°(P_i) be the degree of P_i(s):

P_i(s) := p_{i,0} + p_{i,1}s + ··· + p_{i,d°(P_i)} s^{d°(P_i)}    (11.74)

and

p_i := [p_{i,0}, p_{i,1}, ···, p_{i,d°(P_i)}].    (11.75)

For a positive integer n, let n := {1, 2, ···, n}, so that i ∈ m means i ∈ {1, ···, m}. Each P_i(s) belongs to an interval family P_i(s) specified by the intervals

p_{i,j} ∈ [α_{i,j}, β_{i,j}], i ∈ m, j = 0, ···, d°(P_i).    (11.76)

The corresponding parameter box is

Π_i := {p_i : α_{i,j} ≤ p_{i,j} ≤ β_{i,j}, j = 0, 1, ···, d°(P_i)}.    (11.77)

Write P(s) := [P_1(s), ···, P_m(s)] and introduce the family of m-tuples

P(s) := P_1(s) × P_2(s) × ··· × P_m(s).    (11.78)

Let

p := [p_1, p_2, ···, p_m]    (11.79)

denote the global parameter vector and let

Π := Π_1 × Π_2 × ··· × Π_m    (11.80)

denote the global parameter uncertainty set. Now let us consider the polynomial (11.70) and rewrite it as δ(s, p) or δ(s, P(s)) to emphasize its dependence on the parameter vector p or the m-tuple P(s). We are interested in determining the Hurwitz stability of the set of polynomials

∆(s) := {δ(s, p) : p ∈ Π} = {⟨F(s), P(s)⟩ : P(s) ∈ P(s)}.    (11.81)

We call this a linear interval polynomial and adopt the convention

∆(s) = F_1(s)P_1(s) + F_2(s)P_2(s) + ··· + F_m(s)P_m(s).    (11.82)

We shall make the following standing assumptions about this family.
ROBUST PARAMETRIC CONTROL
Assumption 6 a1) Elements of p perturb independently of each other. Equivalently, Π is an axis parallel rectangular box. a2) Every polynomial in ∆(s) is of the same degree. The above assumptions will allow us to use the usual results such as the Boundary Crossing Theorem and the Edge Theorem to develop the solution. It is also justified from a control system viewpoint where loss of the degree of the characteristic polynomial also implies loss of bounded-input boundedoutput stability. Henceforth, we will say that ∆(s) is stable if every polynomial in ∆(s) is Hurwitz stable. An equivalent statement is that F (s) stabilizes every P (s) ∈ P(s). The solution developed below constructs an extremal set of line segments ∆E (s) ⊂ ∆(s) with the property that the stability of ∆E (s) implies stability of ∆(s). This solution is constructive because the stability of ∆E (s) can be checked, for instance by a set of root locus problems. The solution will be efficient since the number of elements of ∆E (s) will be independent of the dimension of the parameter space Π. The extremal subset ∆E (s) will be generated by first constructing an extremal subset PE (s) of the m-tuple family P(s). The extremal subset PE (s) is constructed from the Kharitonov polynomials of Pi (s). We describe the construction of ∆E (s) next. Construction of the Extremal Subset The Kharitonov polynomials corresponding to each Pi (s) are Ki1 (s) = αi,0 + αi,1 s + βi,2 s2 + βi,3 s3 + · · · Ki2 (s) = αi,0 + βi,1 s + βi,2 s2 + αi,3 s3 + · · · Ki3 (s) = βi,0 + αi,1 s + αi,2 s2 + βi,3 s3 + · · · Ki4 (s) = βi,0 + βi,1 s + αi,2 s2 + αi,3 s3 + · · · and we denote them as: Ki (s) := {Ki1 (s), Ki2 (s), Ki3 (s), Ki4 (s)}.
(11.83)
For each Pi(s) we introduce four line segments joining pairs of Kharitonov polynomials, as defined below:

Si(s) := { [Ki1(s), Ki2(s)], [Ki1(s), Ki3(s)], [Ki2(s), Ki4(s)], [Ki3(s), Ki4(s)] }. (11.84)

These four segments are called Kharitonov segments. They are illustrated in Figure 11.28 for the case of a polynomial of degree 2. For each l ∈ {1, · · · , m} let us define

PlE(s) := K1(s) × · · · × Kl−1(s) × Sl(s) × Kl+1(s) × · · · × Km(s).
(11.85)
STABILITY OF A POLYTOPE

Figure 11.28 The four Kharitonov segments.
A typical element of PlE(s) is

( K1^{j1}(s), K2^{j2}(s), · · · , K_{l−1}^{j_{l−1}}(s), (1 − λ)Kl1(s) + λKl2(s), K_{l+1}^{j_{l+1}}(s), · · · , Km^{jm}(s) )   (11.86)

with λ ∈ [0, 1]. This can be rewritten as

(1 − λ)( K1^{j1}(s), K2^{j2}(s), · · · , K_{l−1}^{j_{l−1}}(s), Kl1(s), K_{l+1}^{j_{l+1}}(s), · · · , Km^{jm}(s) )
+ λ( K1^{j1}(s), K2^{j2}(s), · · · , K_{l−1}^{j_{l−1}}(s), Kl2(s), K_{l+1}^{j_{l+1}}(s), · · · , Km^{jm}(s) ).   (11.87)

Corresponding to the m-tuple PlE(s), introduce the polynomial family

∆lE(s) := {< F(s), P(s) > : P(s) ∈ PlE(s)}.
(11.88)
The set ∆lE (s) is also described as ∆lE (s) = F1 (s)K1 (s) + · · · + Fl−1 (s)Kl−1 (s) + Fl (s)Sl (s) + Fl+1 (s)Kl+1 (s) + · · · + Fm (s)Km (s).
(11.89)
A typical element of ∆lE(s) is the line segment of polynomials

F1(s)K1^{j1}(s) + F2(s)K2^{j2}(s) + · · · + F_{l−1}(s)K_{l−1}^{j_{l−1}}(s) + Fl(s)[(1 − λ)Kl1(s) + λKl2(s)] + F_{l+1}(s)K_{l+1}^{j_{l+1}}(s) + · · · + Fm(s)Km^{jm}(s)   (11.90)
with λ ∈ [0, 1]. The extremal subset of P(s) is defined by

PE(s) := ∪_{l=1}^{m} PlE(s).
(11.91)
The corresponding generalized Kharitonov segment polynomials are

∆E(s) := ∪_{l=1}^{m} ∆lE(s)
= {< F (s), P (s) > : P (s) ∈ PE (s)}.
(11.92)
The set of m-tuples of Kharitonov polynomials is denoted PK(s) and referred to as the Kharitonov vertices of P(s):

PK(s) := K1(s) × K2(s) × · · · × Km(s) ⊂ PE(s).
(11.93)
The corresponding set of Kharitonov vertex polynomials is ∆K (s) := {< F (s), P (s) > : P (s) ∈ PK (s)}.
(11.94)
A typical element of ∆K(s) is

F1(s)K1^{j1}(s) + F2(s)K2^{j2}(s) + · · · + Fm(s)Km^{jm}(s).
(11.95)
The set PE(s) is made up of one-parameter families of polynomial vectors. It is easy to see that there are m·4^m such segments in the most general case, where there are four distinct Kharitonov polynomials for each Pi(s). The parameter space subsets corresponding to PlE(s) and PE(s) are denoted by Πl and

ΠE := ∪_{l=1}^{m} Πl,   (11.96)

respectively. Similarly, let ΠK denote the vertices of Π corresponding to the Kharitonov polynomials. Then, we also have

∆E(s) := {δ(s, p) : p ∈ ΠE}
(11.97)
∆K (s) := {δ(s, p) : p ∈ ΠK }
(11.98)
The set PK(s) in general has 4^m distinct elements when each Pi(s) has four distinct Kharitonov polynomials. Thus, ∆K(s) is a discrete set of polynomials, ∆E(s) is a set of line segments of polynomials, ∆(s) is a polytope of polynomials, and

∆K(s) ⊂ ∆E(s) ⊂ ∆(s).   (11.99)

With these preliminaries, we are ready to state the Generalized Kharitonov Theorem (GKT).
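The counts just stated (4^m Kharitonov vertices and m·4^m extremal segments in the most general case) can be reproduced by enumerating index combinations. The sketch below labels vertices and segments with generic index tuples rather than actual polynomials; the function names are our own:

```python
from itertools import product

def kharitonov_vertices(m):
    """Index tuples (j1,...,jm), ji in {1,2,3,4}, labeling P_K(s)."""
    return list(product(range(1, 5), repeat=m))

def extremal_segments(m):
    """Labels for the segments of P_E(s): for each position l, fix
    Kharitonov vertices at the other m-1 positions and take one of the
    four Kharitonov segments of (11.84) at position l."""
    segments = ["12", "13", "24", "34"]  # vertex pairs joined in (11.84)
    out = []
    for l in range(m):
        for fixed in product(range(1, 5), repeat=m - 1):
            for seg in segments:
                out.append((l, fixed, seg))
    return out

assert len(kharitonov_vertices(2)) == 4**2      # 16 Kharitonov vertices
assert len(extremal_segments(2)) == 2 * 4**2    # m*4^m = 32 segments
assert len(extremal_segments(3)) == 3 * 4**3    # 192, cf. Table 11.1
```

Note the segment count depends only on m, not on how many coefficients of each Pi(s) are uncertain.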
11.5.2
The Generalized Kharitonov Theorem
Let us say that F (s) stabilizes a set of m-tuples if it stabilizes every element in the set. We can now enunciate the Generalized Kharitonov Theorem. THEOREM 11.13 (Generalized Kharitonov Theorem (GKT)) For a given m-tuple F (s) = (F1 (s), · · · , Fm (s)) of real polynomials:
1) F(s) stabilizes the entire family P(s) of m-tuples if and only if F(s) stabilizes every m-tuple segment in PE(s). Equivalently, ∆(s) is stable if and only if ∆E(s) is stable.

2) Moreover, if the polynomials Fi(s) are of the form

Fi(s) = s^{ti}(ai s + bi)Ui(s)Qi(s)

where ti ≥ 0 is an arbitrary integer, ai and bi are arbitrary real numbers, Ui(s) is an anti-Hurwitz polynomial, and Qi(s) is an even or odd polynomial, then it is enough that F(s) stabilizes the finite set of m-tuples PK(s), or equivalently, that the set of Kharitonov vertex polynomials ∆K(s) is stable.

3) Finally, stabilizing the finite set PK(s) is not sufficient to stabilize P(s) when the polynomials Fi(s) do not satisfy the conditions in 2). Equivalently, stability of ∆K(s) does not imply stability of ∆(s) when the Fi(s) do not satisfy the conditions in 2).

The strategy of the proof is to construct an intermediate polytope ∆I(s) of dimension 2m such that

∆E(s) ⊂ ∆I(s) ⊂ ∆(s).
(11.100)
In the first lemma we shall show that the stability of ∆E(s) implies the stability of ∆I(s). The next two lemmas will be used recursively to show further that the stability of ∆I(s) implies the stability of ∆(s). Recall that Kharitonov polynomials are built from even and odd parts as follows:

Ki1(s) = Ki^{even,min}(s) + Ki^{odd,min}(s)
Ki2(s) = Ki^{even,min}(s) + Ki^{odd,max}(s)
Ki3(s) = Ki^{even,max}(s) + Ki^{odd,min}(s)
Ki4(s) = Ki^{even,max}(s) + Ki^{odd,max}(s),   (11.101)

where
Ki^{even,min}(s) = αi,0 + βi,2 s² + αi,4 s⁴ + · · ·
Ki^{even,max}(s) = βi,0 + αi,2 s² + βi,4 s⁴ + · · ·
Ki^{odd,min}(s) = αi,1 s + βi,3 s³ + αi,5 s⁵ + · · ·
Ki^{odd,max}(s) = βi,1 s + αi,3 s³ + βi,5 s⁵ + · · · .
(11.102)
Now introduce the polytope ∆I(s):

∆I(s) := { Σ_{i=1}^{m} Fi(s)[(1 − λi)Ki^{even,min}(s) + λi Ki^{even,max}(s) + (1 − µi)Ki^{odd,min}(s) + µi Ki^{odd,max}(s)] : λi ∈ [0, 1], µi ∈ [0, 1] }.   (11.103)
LEMMA 11.8 ∆I(s) is stable if and only if ∆E(s) is stable.

PROOF It is clear that the stability of ∆I(s) implies the stability of ∆E(s), since ∆E(s) ⊂ ∆I(s). To prove the converse, we note that the degree of all polynomials in ∆I(s) is the same (see Assumption a2). Moreover, the exposed edges of ∆I(s) are obtained by setting 2m − 1 coordinates of the set (λ1, µ1, λ2, µ2, · · · , λm, µm) to 0 or 1 and letting the remaining one range over [0, 1]. It is easy to see that this family of line segments is nothing but ∆E(s). Therefore, by the Edge Theorem it follows that the stability of ∆E(s) implies the stability of ∆I(s).

We shall also need the following two symmetric lemmas.

LEMMA 11.9 Let B^e(s) be the family of real even polynomials

B(s) = b0 + b2 s² + b4 s⁴ + · · · + b_{2p} s^{2p},

where:
b0 ∈ [x0 , y0 ], b2 ∈ [x2 , y2 ], · · · , b2p ∈ [x2p , y2p ],
and define

K1(s) = x0 + y2 s² + x4 s⁴ + · · ·
K2(s) = y0 + x2 s² + y4 s⁴ + · · · .

Let also A(s) and C(s) be two arbitrary but fixed real polynomials. Then,

A) A(s) + C(s)B(s) is stable for every polynomial B(s) in B^e(s) if and only if the segment [A(s) + C(s)K1(s), A(s) + C(s)K2(s)] is stable.

B) Moreover, if C(s) = s^t(as + b)U(s)R(s), where t ≥ 0, a and b are arbitrary real numbers, U(s) is an anti-Hurwitz polynomial, and R(s) is an even or odd polynomial, then A(s) + C(s)B(s) is Hurwitz stable for every polynomial B(s) in B^e(s) if and only if A(s) + C(s)K1(s) and A(s) + C(s)K2(s) are Hurwitz stable.

PROOF Let us assume that A(s) + C(s)B(s) is stable for every polynomial B(s) in [K1(s), K2(s)], that is, for every polynomial of the form

B(s) = (1 − λ)K1(s) + λK2(s),
λ ∈ [0, 1].
Let us now assume, by way of contradiction, that A(s) + C(s)P(s) is unstable for some polynomial P(s) in B^e(s). Then we know that there must also exist a polynomial Q(s) in B^e(s) such that A(s) + C(s)Q(s) has a root at the origin or a pure imaginary root. Let us first dispose of the case of a polynomial Q(s) in the box B^e(s) such that

A(0) + C(0)Q(0) = 0.
(11.104)
Indeed, since Q(0) = q0 belongs to [x0 , y0 ], it can be written q0 = (1 − λ)x0 + λy0 ,
for some λ in [0, 1].
Then (11.104) would imply

A(0) + C(0)((1 − λ)x0 + λy0) = A(0) + C(0)((1 − λ)K1(0) + λK2(0)) = 0,

which would contradict our assumption that A(s) + C(s)B(s) is stable. Suppose now that A(s) + C(s)Q(s) has a pure imaginary root jω for some ω > 0. If this is true, then we have

A^e(ω) + C^e(ω)Q(ω) = 0
A^o(ω) + C^o(ω)Q(ω) = 0.   (11.105)

Notice here that since Q(s) is an even polynomial, we simply have Q(jω) = Q^e(ω) = Q^even(jω) := Q(ω). Now, (11.105) implies that for this particular value of ω we have

A^e(ω)C^o(ω) − A^o(ω)C^e(ω) = 0.
(11.106)
On the other hand, consider the two polynomials B1(s) = A(s) + C(s)K1(s) and B2(s) = A(s) + C(s)K2(s). We can write, for i = 1, 2,

Bi^e(ω) = (A + CKi)^e(ω) = A^e(ω) + C^e(ω)Ki(ω)

and

Bi^o(ω) = (A + CKi)^o(ω) = A^o(ω) + C^o(ω)Ki(ω).

Thinking then of using the Segment Lemma, we compute

B1^e(ω)B2^o(ω) − B2^e(ω)B1^o(ω) = [A^e(ω) + C^e(ω)K1(ω)][A^o(ω) + C^o(ω)K2(ω)] − [A^e(ω) + C^e(ω)K2(ω)][A^o(ω) + C^o(ω)K1(ω)],
which can be written as

B1^e(ω)B2^o(ω) − B2^e(ω)B1^o(ω) = (K2(ω) − K1(ω))(A^e(ω)C^o(ω) − A^o(ω)C^e(ω)),

and therefore, because of (11.106),

B1^e(ω)B2^o(ω) − B2^e(ω)B1^o(ω) = 0.
(11.107)
Moreover, assume without loss of generality that

C^e(ω) ≥ 0 and C^o(ω) ≤ 0,
(11.108)
and remember that due to the special form of K1 (s) and K2 (s) we have K1 (ω) ≤ Q(ω) ≤ K2 (ω),
for all ω ∈ [0, +∞).
Then we conclude from (11.105) and (11.108) that

B1^e(ω) = A^e(ω) + C^e(ω)K1(ω) ≤ 0 ≤ B2^e(ω) = A^e(ω) + C^e(ω)K2(ω),
B2^o(ω) = A^o(ω) + C^o(ω)K2(ω) ≤ 0 ≤ B1^o(ω) = A^o(ω) + C^o(ω)K1(ω).   (11.109)

But if we put together equations (11.107) and (11.109), we see that

B1^e(ω)B2^o(ω) − B2^e(ω)B1^o(ω) = 0
B1^e(ω)B2^e(ω) ≤ 0
B1^o(ω)B2^o(ω) ≤ 0.
We see therefore from the Segment Lemma that some polynomial on the segment [B1(s), B2(s)] has a jω root, which again contradicts our original assumption that A(s) + C(s)B(s) is stable. This concludes the proof of part A. To prove part B, let us assume that C(s) is of the form specified and that B1(s) = A(s) + C(s)K1(s) and B2(s) = A(s) + C(s)K2(s) are both Hurwitz stable. Then

B1(s) − B2(s) = s^t(as + b)U(s)R(s)(K1(s) − K2(s)).
(11.110)
Since K1(s) − K2(s) is even, we conclude from the Vertex Lemma that the segment [B1(s), B2(s)] is Hurwitz stable. This proves part B.

The dual lemma is stated without proof.

LEMMA 11.10 Let B^o(s) be the family of real odd polynomials

B(s) = b1 s + b3 s³ + b5 s⁵ + · · · + b_{2p+1} s^{2p+1},
where:
b1 ∈ [x1 , y1 ], b3 ∈ [x3 , y3 ],
· · · , b2p+1 ∈ [x2p+1 , y2p+1 ],
and define

K1(s) = x1 s + y3 s³ + x5 s⁵ + · · ·
K2(s) = y1 s + x3 s³ + y5 s⁵ + · · · .

Let also D(s) and E(s) be two arbitrary but fixed real polynomials. Then

a) D(s) + E(s)B(s) is stable for every polynomial B(s) in B^o(s) if and only if the segment [D(s) + E(s)K1(s), D(s) + E(s)K2(s)] is Hurwitz stable.

b) Moreover, if E(s) = s^t(as + b)U(s)R(s), where t ≥ 0, a and b are arbitrary real numbers, U(s) is an anti-Hurwitz polynomial, and R(s) is an even or odd polynomial, then D(s) + E(s)B(s) is stable for every polynomial B(s) in B^o(s) if and only if D(s) + E(s)K1(s) and D(s) + E(s)K2(s) are Hurwitz stable.

PROOF (GKT, Theorem 11.13) Since ∆E(s) ⊂ ∆(s), it is only necessary to prove that the stability of ∆E(s) implies that of ∆(s). Therefore, let us assume that ∆E(s) is stable, or equivalently that F(s) stabilizes PE(s). Now consider an arbitrary m-tuple of polynomials in P(s),

P(s) = (P1(s), · · · , Pm(s)).

Our task is to prove that F(s) stabilizes this P(s). For the sake of convenience we divide the proof into four steps.

Step 1 Write as usual

Pi(s) = Pi,even(s) + Pi,odd(s),
i = 1, · · · , m.
Since ∆E(s) is stable, it follows from Lemma 11.8 that ∆I(s) is stable. In other words,

Σ_{i=1}^{m} Fi(s)[(1 − λi)Ki^{even,min}(s) + λi Ki^{even,max}(s) + (1 − µi)Ki^{odd,min}(s) + µi Ki^{odd,max}(s)]   (11.111)

is Hurwitz stable for all possible (λ1, µ1, λ2, µ2, · · · , λm, µm), all in [0, 1].
Step 2 In this step we show that stability of ∆I(s) implies the stability of ∆(s). In equation (11.111) set

D(s) = Σ_{i=1}^{m−1} Fi(s)[(1 − λi)Ki^{even,min}(s) + λi Ki^{even,max}(s) + (1 − µi)Ki^{odd,min}(s) + µi Ki^{odd,max}(s)] + Fm(s)[(1 − λm)Km^{even,min}(s) + λm Km^{even,max}(s)]

and E(s) = Fm(s). We know from (11.111) that

D(s) + E(s)[(1 − µm)Km^{odd,min}(s) + µm Km^{odd,max}(s)]

is stable for all µm in [0, 1]. But Km^{odd,min}(s) and Km^{odd,max}(s) play exactly the role of K1(s) and K2(s) in Lemma 11.10, and therefore we conclude that D(s) + E(s)Pm,odd(s) is stable. In other words,

Σ_{i=1}^{m−1} Fi(s)[(1 − λi)Ki^{even,min}(s) + λi Ki^{even,max}(s) + (1 − µi)Ki^{odd,min}(s) + µi Ki^{odd,max}(s)] + Fm(s)[(1 − λm)Km^{even,min}(s) + λm Km^{even,max}(s)] + Fm(s)Pm,odd(s)   (11.112)

is stable, and remains stable for all possible values (λ1, µ1, λ2, µ2, · · · , λm), all in [0, 1], since we fixed them arbitrarily. Proceeding, we can now set

A(s) = Σ_{i=1}^{m−1} Fi(s)[(1 − λi)Ki^{even,min}(s) + λi Ki^{even,max}(s) + (1 − µi)Ki^{odd,min}(s) + µi Ki^{odd,max}(s)] + Fm(s)Pm,odd(s)

and C(s) = Fm(s). Then we know by (11.112) that

A(s) + C(s)[(1 − λm)Km^{even,min}(s) + λm Km^{even,max}(s)]

is stable for all λm in [0, 1]. But, here again, Km^{even,min}(s) and Km^{even,max}(s) play exactly the role of K1(s) and K2(s) in Lemma 11.9, and hence we conclude that A(s) + C(s)Pm,even(s) is stable. That is,

Σ_{i=1}^{m−1} Fi(s)[(1 − λi)Ki^{even,min}(s) + λi Ki^{even,max}(s) + (1 − µi)Ki^{odd,min}(s) + µi Ki^{odd,max}(s)] + Fm(s)Pm,odd(s) + Fm(s)Pm,even(s)

or finally

Σ_{i=1}^{m−1} Fi(s)[(1 − λi)Ki^{even,min}(s) + λi Ki^{even,max}(s) + (1 − µi)Ki^{odd,min}(s) + µi Ki^{odd,max}(s)] + Fm(s)Pm(s)
is stable, and this is true for all possible values (λ1, µ1, λ2, µ2, · · · , λm−1, µm−1), all in [0, 1].
The same reasoning can be carried out by induction until one reaches the point where F1 (s)P1 (s) + F2 (s)P2 (s) + · · · + Fm (s)Pm (s), is found to be stable. Since P (s) = (P1 (s), · · · , Pm (s)) , was an arbitrary element of P(s), this proves that F (s) stabilizes P(s). Equivalently, ∆(s) is stable. This concludes the proof of part I of the Theorem. Step 3
To prove part II, observe that a typical segment of ∆E(s) is

δλ(s) := F1(s)K1^{j1}(s) + · · · + Fl(s)[λ Kl^{jl}(s) + (1 − λ)Kl^{il}(s)] + · · · + Fm(s)Km^{jm}(s).

The endpoints of this segment are

δ1(s) = F1(s)K1^{j1}(s) + · · · + Fl(s)Kl^{jl}(s) + · · · + Fm(s)Km^{jm}(s)
δ2(s) = F1(s)K1^{j1}(s) + · · · + Fl(s)Kl^{il}(s) + · · · + Fm(s)Km^{jm}(s).

The difference between the endpoints of this segment is

δ0(s) := δ1(s) − δ2(s) = Fl(s)[Kl^{jl}(s) − Kl^{il}(s)].

If Fl(s) is of the form s^t(as + b)U(s)R(s), where t ≥ 0, a and b are arbitrary real numbers, U(s) is anti-Hurwitz, and R(s) is even or odd, then so is δ0(s), since Kl^{jl}(s) − Kl^{il}(s) is either even or odd. Therefore, by the Vertex Lemma, stability of the segment [δ1(s), δ2(s)] for λ ∈ [0, 1] is implied by the stability of the vertices. We complete the proof of part II by applying this reasoning to every segment in ∆E(s).

Step 4
We prove part III by giving a counter example. Consider P (s) = (1.5 − s − s2 , 2 + 3s + γs2 ), where γ ∈ [2, 16],
and F (s) = (1, 1 + s + s2 ). Then δγ (s) := F1 (s)P1 (s) + F2 (s)P2 (s) = 3.5 + 4s + (4 + γ)s2 + (3 + γ)s3 + γs4 .
Here PK(s) consists of the two 2-tuples

(P1(s), P2′(s)) = (1.5 − s − s², 2 + 3s + 2s²)

and

(P1(s), P2′′(s)) = (1.5 − s − s², 2 + 3s + 16s²).
The corresponding polynomials of ∆K(s) are

δ2(s) = 3.5 + 4s + 6s² + 5s³ + 2s⁴,
δ16(s) = 3.5 + 4s + 20s² + 19s³ + 16s⁴.

The Hurwitz matrix for δγ is

      | 3+γ   4     0     0   |
H =   | γ     4+γ   3.5   0   |
      | 0     3+γ   4     0   |
      | 0     γ     4+γ   3.5 |

and the Hurwitz determinants are

H1 = 3 + γ
H2 = γ² + 3γ + 12
H3 = 0.5γ² − 9γ + 16.5
H4 = 3.5 H3.
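These determinants are easy to check numerically. A sketch using numpy (the matrix entries are those of the Hurwitz matrix for δγ displayed in the text):

```python
import numpy as np

def hurwitz_minors(gamma):
    """Leading principal minors H1..H4 of the Hurwitz matrix of
    delta_gamma(s) = gamma*s^4 + (3+gamma)s^3 + (4+gamma)s^2 + 4s + 3.5."""
    H = np.array([
        [3 + gamma, 4.0,       0.0,       0.0],
        [gamma,     4 + gamma, 3.5,       0.0],
        [0.0,       3 + gamma, 4.0,       0.0],
        [0.0,       gamma,     4 + gamma, 3.5],
    ])
    return [np.linalg.det(H[:k, :k]) for k in range(1, 5)]

# H3 = 0.5*gamma^2 - 9*gamma + 16.5 is positive at the endpoints of
# [2, 16] but negative in the interior, e.g. H3(10) = -23.5:
for g in (2.0, 16.0):
    assert hurwitz_minors(g)[2] > 0
assert hurwitz_minors(10.0)[2] < 0
```

This reproduces the counterexample's point: the two vertex polynomials are Hurwitz, yet the interior of the segment is not.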
Now one can see that H1 , H2 are positive for all values of γ in [2, 16]. However, H3 and H4 are positive for γ = 2, or γ = 16, but negative when, for example, γ = 10. Therefore, it is not enough to check the stability of the extreme polynomials δγ (s) corresponding to the couples of polynomials in PK and one must check the stability of the entire segment (P1 (s), (λP2′ (s) + (1 − λ)P2′′ (s))) ,
λ ∈ [0, 1],
which is the only element in PE for this particular example. This completes the proof.

An alternative way to prove Step 2 is to show that if ∆(s) contains an unstable polynomial, then the polytope ∆I(s) contains a polynomial with a jω root. This contradicts the conclusion reached in Step 1. This approach to the proof is sketched below.

11.5.2.1
Alternative Proof of Step 2 of GKT (Theorem 11.13)
If F(s) stabilizes every m-tuple segment in PE(s), we conclude from Step 1 that every polynomial of the form

β(s) = Σ_{i=1}^{m} Fi(s)[(1 − λi)Ki^{even,min}(s) + λi Ki^{even,max}(s) + (1 − µi)Ki^{odd,min}(s) + µi Ki^{odd,max}(s)]   (11.113)

is stable for all possible (λ1, µ1, λ2, µ2, · · · , λm, µm), λi ∈ [0, 1], µi ∈ [0, 1].
To complete the proof of part I we have to prove that the stability of these polynomials implies the stability of every polynomial in ∆(s). If every polynomial in (11.113) is stable, F(s) will fail to stabilize the entire family P(s) if and only if, for at least one m-tuple R(s) := (R1(s), R2(s), · · · , Rm(s)) in P(s), the corresponding polynomial δ(s) = F1(s)R1(s) + F2(s)R2(s) + · · · + Fm(s)Rm(s) has a root at jω∗ for some ω∗ ≥ 0. This last statement is a consequence of the Boundary Crossing Theorem (Chapter 8). In other words, for this ω∗ we would have

δ(jω∗) = F1(jω∗)R1(jω∗) + F2(jω∗)R2(jω∗) + · · · + Fm(jω∗)Rm(jω∗) = 0.   (11.114)

Consider now one of the polynomials Ri(s). We can decompose Ri(s) into its even and odd parts, Ri(s) = Ri^even(s) + Ri^odd(s), and we know that on the imaginary axis Ri^even(jω) and (1/j)Ri^odd(jω) are, respectively, the real and imaginary parts of Ri(jω). Then the associated extremal polynomials Ki^{even,min}(s), Ki^{even,max}(s), Ki^{odd,min}(s), Ki^{odd,max}(s) satisfy the inequalities

Ki^{even,min}(jω) ≤ Ri^even(jω) ≤ Ki^{even,max}(jω),   for all ω ∈ [0, ∞)
(1/j)Ki^{odd,min}(jω) ≤ (1/j)Ri^odd(jω) ≤ (1/j)Ki^{odd,max}(jω),   for all ω ∈ [0, ∞).   (11.115)

Using (11.115) we conclude that we can find λi ∈ [0, 1] and µi ∈ [0, 1] such that

Ri^even(jω∗) = (1 − λi)Ki^{even,min}(jω∗) + λi Ki^{even,max}(jω∗)
(1/j)Ri^odd(jω∗) = (1 − µi)(1/j)Ki^{odd,min}(jω∗) + µi (1/j)Ki^{odd,max}(jω∗).   (11.116)

From (11.116) we deduce that we can write

Ri(jω∗) = (1 − λi)Ki^{even,min}(jω∗) + λi Ki^{even,max}(jω∗) + (1 − µi)Ki^{odd,min}(jω∗) + µi Ki^{odd,max}(jω∗).   (11.117)
However, substituting (11.117) for every i = 1, · · · , m into (11.114), we eventually get

Σ_{i=1}^{m} Fi(jω∗)[(1 − λi)Ki^{even,min}(jω∗) + λi Ki^{even,max}(jω∗) + (1 − µi)Ki^{odd,min}(jω∗) + µi Ki^{odd,max}(jω∗)] = 0,

but this contradicts the fact that every polynomial β(s) in (11.113) is stable, as proved in Step 1.

REMARK 11.8 One can immediately see that in the particular case m = 1 and F1(s) = 1, the GKT (Theorem 11.13) reduces to Kharitonov's Theorem, because F1(s) = 1 is even and thus part II of the theorem applies.
11.5.3
Comparison with the Edge Theorem
The problem addressed in the Generalized Kharitonov Theorem (GKT) deals with a polytope, and therefore it can also be solved by using the Edge Theorem. This would require us to check the stability of the exposed edges of the polytope of polynomials ∆(s). GKT, on the other hand, requires us to check the stability of the segments ∆E(s). In general these two sets are quite different. Consider the simplest case of an interval polynomial containing three variable parameters. The 12 exposed edges and 4 extremal segments are shown in Figure 11.28. While two of the extremal segments are also exposed edges, the other two extremal segments lie on the exposed faces and are not edges at all. More importantly, the number of exposed edges depends exponentially on the number of uncertain parameters (the dimension of p ∈ Π). The number of extremal segments, on the other hand, depends only on m (the number of uncertain polynomials). To compare these numbers, consider for instance that each uncertain polynomial Pi(s) has q uncertain parameters. Table 11.1 shows the number of exposed edges and the number of segments PE(s) for various values of m and q. We can see that the number of exposed edges grows exponentially with the number of parameters, whereas the number of extremal segments remains constant for a fixed m.

REMARK 11.9 In some situations, not all the coefficients of the polynomials are necessarily going to vary. In such cases, the number of extremal segments to be checked would be smaller than the maximum theoretical number, m·4^m. With regard to the vertex result given in part II, it can happen that some Fi(s) satisfy the conditions given in part II whereas other Fi(s) do not. Suppose Fl(s) satisfies the vertex conditions in part II. Then we can replace the stability check of the segments corresponding to PlE(s) by the stability check of the corresponding vertices.
TABLE 11.1
Number of exposed edges vs. number of extremal segments

  m    q    Exposed Edges    Extremal Segments
  2    2    32               32
  2    3    80               32
  2    4    192              32
  3    4    24,576           192

11.5.4
Examples
Example 11.16 Consider the plant

G(s) = P1(s)/P2(s) = (s³ + αs² − 2s + β)/(s⁴ + 2s³ − s² + γs + 1)

where α ∈ [−2, −1], β ∈ [0.5, 1], γ ∈ [0, 1]. Let C(s) = F1(s)/F2(s)
denote the compensator. To determine if C(s) robustly stabilizes the given set of plants, we must verify the Hurwitz stability of the family of characteristic polynomials ∆(s) defined as

F1(s)(s³ + αs² − 2s + β) + F2(s)(s⁴ + 2s³ − s² + γs + 1)

with α ∈ [−2, −1], β ∈ [0.5, 1], γ ∈ [0, 1]. To construct the generalized Kharitonov segments, we start with the Kharitonov polynomials. There are two distinct Kharitonov polynomials associated with P1(s),

K11(s) = K12(s) = 0.5 − 2s − s² + s³
K13(s) = K14(s) = 1 − 2s − 2s² + s³,

and also two distinct Kharitonov polynomials associated with P2(s),

K21(s) = K23(s) = 1 − s² + 2s³ + s⁴
K22(s) = K24(s) = 1 + s − s² + 2s³ + s⁴.
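The coincidences among the four Kharitonov polynomials occur because most coefficients of P1(s) and P2(s) are fixed; this can be confirmed mechanically. A sketch (coefficients stored in ascending powers of s, with the period-4 min/max patterns of (11.83); the helper is our own):

```python
def kharitonov(lower, upper):
    # 'l' picks the lower coefficient bound, 'h' the upper;
    # patterns repeat with period 4 in the coefficient index.
    patterns = ["llhh", "lhhl", "hllh", "hhll"]
    return [[lower[i] if p[i % 4] == "l" else upper[i]
             for i in range(len(lower))] for p in patterns]

# P1(s) = beta - 2s + alpha*s^2 + s^3, alpha in [-2,-1], beta in [0.5,1]
K1 = kharitonov([0.5, -2, -2, 1], [1, -2, -1, 1])
# P2(s) = 1 + gamma*s - s^2 + 2s^3 + s^4, gamma in [0,1]
K2 = kharitonov([1, 0, -1, 2, 1], [1, 1, -1, 2, 1])

assert K1[0] == K1[1] == [0.5, -2, -1, 1]  # K11 = K12 = 0.5 - 2s - s^2 + s^3
assert K1[2] == K1[3] == [1, -2, -2, 1]    # K13 = K14 = 1 - 2s - 2s^2 + s^3
assert K2[0] == K2[2] == [1, 0, -1, 2, 1]  # K21 = K23
assert K2[1] == K2[3] == [1, 1, -1, 2, 1]  # K22 = K24
```

So each Pi(s) contributes only two distinct vertices here, which is why the extremal set below has just four plant segments.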
The set P1E(s) therefore consists of the two plant segments

[λ1 K11(s) + (1 − λ1)K13(s)] / K21(s),   λ1 ∈ [0, 1]
[λ2 K11(s) + (1 − λ2)K13(s)] / K22(s),   λ2 ∈ [0, 1].

The set P2E(s) consists of the two plant segments

K11(s) / [λ3 K21(s) + (1 − λ3)K22(s)],   λ3 ∈ [0, 1]
K13(s) / [λ4 K21(s) + (1 − λ4)K22(s)],   λ4 ∈ [0, 1].

Thus, the extremal set PE(s) consists of the following four plant segments:

[0.5(1 + λ1) − 2s − (1 + λ1)s² + s³] / [1 − s² + 2s³ + s⁴],   λ1 ∈ [0, 1]
[0.5(1 + λ2) − 2s − (1 + λ2)s² + s³] / [1 + s − s² + 2s³ + s⁴],   λ2 ∈ [0, 1]
[0.5 − 2s − s² + s³] / [1 + λ3 s − s² + 2s³ + s⁴],   λ3 ∈ [0, 1]
[1 − 2s − 2s² + s³] / [1 + λ4 s − s² + 2s³ + s⁴],   λ4 ∈ [0, 1].

Therefore, we can verify robust stability by checking the Hurwitz stability of the set ∆E(s), which consists of the following four polynomial segments:

F1(s)[0.5(1 + λ1) − 2s − (1 + λ1)s² + s³] + F2(s)[1 − s² + 2s³ + s⁴]
F1(s)[0.5(1 + λ2) − 2s − (1 + λ2)s² + s³] + F2(s)[1 + s − s² + 2s³ + s⁴]
F1(s)[0.5 − 2s − s² + s³] + F2(s)[1 + λ3 s − s² + 2s³ + s⁴]
F1(s)[1 − 2s − 2s² + s³] + F2(s)[1 + λ4 s − s² + 2s³ + s⁴]

with λi ∈ [0, 1], i = 1, 2, 3, 4.
In other words, any compensator that stabilizes the family of plants P(s) must stabilize the four one-parameter families of extremal plants PE(s). If we had used the Edge Theorem, it would have been necessary to check the 12 line segments of plants corresponding to the exposed edges of ∆(s). If the compensator polynomials Fi(s) satisfy the "vertex conditions" in part II of GKT, it is enough to check that they stabilize the plants corresponding to the four Kharitonov vertices. This corresponds to checking the Hurwitz stability of the four fixed polynomials

F1(s)[1 − 2s − 2s² + s³] + F2(s)[1 − s² + 2s³ + s⁴]
F1(s)[1 − 2s − 2s² + s³] + F2(s)[1 + s − s² + 2s³ + s⁴]
F1(s)[0.5 − 2s − s² + s³] + F2(s)[1 + s − s² + 2s³ + s⁴]
F1(s)[0.5 − 2s − s² + s³] + F2(s)[1 − s² + 2s³ + s⁴].
Example 11.17 (Stable Example) Consider the interval plant and controller pair

G(s) = P1(s)/P2(s) = (a1 s + a0)/(b2 s² + b1 s + b0) and C(s) = F1(s)/F2(s) = (s² + 2s + 1)/(s⁴ + 2s³ + 2s² + s)

where the plant parameters vary as follows: a1 ∈ [0.1, 0.2], a0 ∈ [0.9, 1.1], b2 ∈ [0.9, 1.0], b1 ∈ [1.8, 2.0], b0 ∈ [1.9, 2.1]. The characteristic polynomial of the closed-loop system is δ(s) = F1(s)P1(s) + F2(s)P2(s). From GKT, the robust stability of the closed-loop system is equivalent to that of the set of 32 generalized Kharitonov segments. To construct these segments, we begin with the Kharitonov polynomials of the interval polynomials P1(s) and P2(s), respectively:

K11(s) = 0.9 + 0.1s,
K12(s) = 0.9 + 0.2s,
K13(s) = 1.1 + 0.1s,
K14(s) = 1.1 + 0.2s
and K21 (s) = 1.9 + 1.8s + s2 , K23 (s) = 2.1 + 1.8s + 0.9s2 ,
K22 (s) = 1.9 + 2s + s2 , K24 (s) = 2.1 + 2s + 0.9s2 .
Then the corresponding generalized Kharitonov segments are

F1(s)K1^i(s) + F2(s)[λ K2^j(s) + (1 − λ)K2^k(s)]

and

F1(s)[λ K1^i(s) + (1 − λ)K1^j(s)] + F2(s)K2^k(s),

where the indices i, j, k range over {1, 2, 3, 4}. For example, two such segments are

F1(s)K11(s) + F2(s)[λ K21(s) + (1 − λ)K22(s)]
  = (s² + 2s + 1)(0.9 + 0.1s) + (s⁴ + 2s³ + 2s² + s)[λ(1.9 + 1.8s + s²) + (1 − λ)(1.9 + 2s + s²)]

and

F1(s)[λ K11(s) + (1 − λ)K12(s)] + F2(s)K21(s)
  = (s² + 2s + 1)[λ(0.9 + 0.1s) + (1 − λ)(0.9 + 0.2s)] + (s⁴ + 2s³ + 2s² + s)(1.9 + 1.8s + s²)

for λ ∈ [0, 1]. The stability of these 32 segments can be checked in a number of ways, such as the Segment Lemma, the Bounded Phase Conditions, or the Zero Exclusion Theorem.

Figure 11.29 Image set of generalized Kharitonov segments (Example 11.17).

In Figure 11.29 we show the evolution of the image sets with frequency. We see that the origin is excluded from the image sets at all frequencies. In addition, since at least one element (a vertex polynomial) in the family is Hurwitz, the entire family is Hurwitz. Thus, we conclude that the controller C(s) robustly stabilizes the interval plant.

Example 11.18 (Unstable Example) Consider the interval plant and controller pair

G(s) = P1(s)/P2(s) = (a1 s + a0)/(b2 s² + b1 s + b0) and C(s) = F1(s)/F2(s) = (s² + 2s + 1)/(s⁴ + 2s³ + 2s² + s),

where the plant parameters vary in intervals as follows: a1 ∈ [0.1, 0.2], a0 ∈ [0.9, 1.5], b2 ∈ [0.9, 1.1], b1 ∈ [1.8, 2.0], b0 ∈ [1.9, 2.1]. The characteristic polynomial of the closed-loop system is δ(s) = F1(s)P1(s) + F2(s)P2(s).
From GKT, the robust stability of the closed-loop system is equivalent to that of the set of 32 generalized Kharitonov segments. We construct this set as in the previous example. The image sets of these segments are displayed as a function of frequency in Figure 11.30.
Figure 11.30 Image set of generalized Kharitonov segments (Example 11.18).
From this figure, we see that the origin is included in the image set at some frequencies. Thus, we conclude that the controller C(s) does not robustly stabilize the given family of plants G(s).

Example 11.19 (Vertex Example) Let us consider the plant and controller

G(s) = P1(s)/P2(s) = (a2 s² + a1 s + a0)/(b2 s² + b1 s + b0) and C(s) = F1(s)/F2(s) = (3s + 5)(s² + 1)/(s²(s − 5))
520
ROBUST PARAMETRIC CONTROL
Then the characteristic polynomial of the closed-loop system is δ(s) = F1 (s)P1 (s) + F2 (s)P2 (s). In this particular problem, we observe that F1 (s) : (1st order)(even) and F2 (s) : st (anti − Hurwitz). This satisfies the vertex condition of GKT. Thus, the stability of the 16 vertex polynomials is equivalent to that of the closed-loop system. Since all the roots of the 16 vertex polynomials F1 (s)K1i (s) + F2 (s)K2j (s),
i = 1, 2, 3, 4;
j = 1, 2, 3, 4
lie in the left half of the complex plane, we conclude that the closed-loop system is robustly stable. Figure 11.31 confirms this fact.
4
1
x 10
0 −1 −2
Imag
−3 −4 −5 −6 −7 −8 −2
−1
0
1
2
3 Real
4
5
6
7 4
x 10
Figure 11.31 Image set of generalized Kharitonov segments (Example 11.19).
521
STABILITY OF A POLYTOPE
11.5.5
Image Set Interpretation
The Generalized Kharitonov Theorem has an appealing geometric interpretation in terms of the image set ∆(jω) of ∆(s). Recall that in Step 1 of the proof, the stability of ∆(s) was reduced to that of the 2m parameter polytope ∆I (s). It is easy to see that even though ∆I (s) is in general a proper subset of ∆(s), the image sets are in fact equal: ∆(jω) = ∆I (jω). This follows from the fact, established in this chapter, that each of the m interval polynomials Pi (s) in ∆(s) can be replaced by a 2-parameter family as far as its jω evaluation is concerned. This proves that regardless of the dimension of the parameter space Π, a linear interval problem with m terms can always be replaced by a 2m parameter problem. Of course in the rest of the Theorem we show that this 2m parameter problem can be further reduced to a set of one-parameter problems. In fact, ∆(jω) is a convex polygon in the complex plane and it may be described in terms of its vertices or exposed edges. Let ∂∆(jω) denote the exposed edges of ∆(jω) and ∆V (jω) denote its vertices. Then it is easy to establish the following. LEMMA 11.11 1) ∂∆(jω) ⊂ ∆E (jω) PROOF
2) ∆V (jω) ⊂ ∆K (jω)
Observe that ∆(jω) is the sum of complex plane sets as follows:
∆(jω) = F1 (jω)P1 (jω) + F2 (jω)P2 (jω) + · · · + Fm (jω)Pm (jω). Each polygon Fi (jω)Pi (jω) is a rectangle with vertex set Fi (jω)Ki (jω) and edge set Fi (jω)Si (jω). Since the vertices of ∆(jω) can only be generated by the vertices of Fi (jω)Pi (jω), we immediately have 2). To establish 1) we note that the boundary of the sum of two complex plane polygons can be generated by summing over all vertex-edge pairs with the vertices belonging to one and the edges belonging to the other. This fact used recursively to add m polygons shows that one has to sum vertices from m − 1 of the sets to edges of the remaining set and repeat this over all possibilities. This leads to 1). The vertex property in 2) allows us to check robust stability of the family ∆(s) by using the phase conditions for a polytopic family described in Chapter 10. More precisely, define δ(λ) φδ (λ) := arg . (11.118) δ0 (λ)
522
ROBUST PARAMETRIC CONTROL
and with δ0 (jω) ∈ ∆(jω), φ+ (jω) :=
sup
φδi (jω),
0 ≤ φ+ ≤ π
φδi (jω),
− π < φ− ≤ 0
δi (jω)∈∆K (jω)
φ− (jω) :=
inf
δi (jω)∈∆K (jω)
and Φ∆K (jω) := φ+ (jω) − φ− (jω).
(11.119)
THEOREM 11.14 Assume that ∆(s) has at least one polynomial which is stable, then the entire family is stable if and only if Φ∆K (jω) < π for all ω. Example 11.20 Consider the plant controller pair of Example 11.17. We first check the stability of an arbitrary point in the family, say, one of the Kharitonov vertices. Next for each ω, we evaluate the maximum phase difference over the following 16 Kharitonov vertices F1 (s)K1i (s) + F2 (s)K2j (s), i = 1, 2, 3, 4; j = 1, 2, 3, 4 The result is plotted in Figure 11.32. It shows that the maximum phase difference over these vertices never reaches 180◦ at any frequency. Thus, we conclude that the system is robustly stable which agrees with the conclusion reached using the image set plot shown in Figure 11.29. Example 11.21 For the plant controller pair of Example 11.18, we evaluate the maximum phase difference over the Kharitonov vertices at each frequency. The plot is shown in Figure 11.33. This graph shows that the maximal phase difference reaches 180◦ showing that the family is not stable. This again agrees with the analysis using the image sets given in Figure 11.30.
11.6
Polynomic Interval Families
In this section, we develop results for the stability analysis of polynomials containing interval parameters that appear polynomially in the coefficients. We begin by discussing the robust positivity of a function.
523
STABILITY OF A POLYTOPE
50 45 40 35
DEGREE
30 25 20 15 10 5 0
0
0.5
1
1.5
2
2.5 ω
3
3.5
4
4.5
5
Figure 11.32 Maximum phase difference of Kharitonov vertices (Example 11.20).
200 180 160 140
DEGREE
120 100 80 60 40 20 0
0
0.1
0.2
0.3
0.4
ω
0.5
0.6
0.7
0.8
0.9
Figure 11.33 Maximum phase difference of Kharitonov vertices (Example 11.21).
524
11.6.1
ROBUST PARAMETRIC CONTROL
Robust Positivity
Let x = (x1 , x2 , · · · , xl )
(11.120)
be a real vector, f (x) a real polynomial function of x, and consider the problem of determining if f (x) is positive for all x ∈ B, where B is the box: + B = x : x− i = 1, 2, · · · , l . (11.121) i ≤ xi ≤ xi ,
A related problem is the following: if f(x) is not robustly positive over B, determine subsets B⁺ of B over which it is positive. Without loss of generality, we can assume that B lies in the first orthant, with xi ≥ 0 for i = 1, 2, ..., l. Indeed, if B̂ is an arbitrary box

B̂ = {x̂ : x̂i⁻ ≤ x̂i ≤ x̂i⁺, i = 1, 2, ..., l},  (11.122)

we can introduce the change of coordinates

x̂i = ai xi + bi  (11.123)

with

ai = (x̂i⁺ − x̂i⁻)/(xi⁺ − xi⁻),  i = 1, 2, ..., l  (11.124)
bi = (xi⁺x̂i⁻ − xi⁻x̂i⁺)/(xi⁺ − xi⁻),  i = 1, 2, ..., l  (11.125)

to transform the box B̂ in (11.122) to B in (11.121). By choosing x⁻ in the first orthant (xi⁻ ≥ 0, i = 1, 2, ..., l), the box B̂ is relocated to the first orthant. With B situated in the first orthant, we can make the sign definite decomposition

f(x) = f⁺(x) − f⁻(x)  (11.126)

where f⁺(x) and f⁻(x) are uniquely determined polynomial functions of x with positive coefficients. Identify the following vertices of B:

x⁻ := (x1⁻, x2⁻, ..., xl⁻)  (11.127)
x⁺ := (x1⁺, x2⁺, ..., xl⁺).  (11.128)

Example 11.22 Consider the function

f̂(x̂) = 2 + 3x̂1x̂2 + 3x̂1³ − 4x̂1x̂2 − x̂1²

and the box
B̂ = {x̂ : x̂1 ∈ [−1, 1], x̂2 ∈ [−1, 2]}.
Using the transformation

x̂1 = 2x1 − 1,  x̂2 = 3x2 − 1,

B̂ is transformed into the new box

B = {x : x1 ∈ [0, 1], x2 ∈ [0, 1]}

and the corresponding function is

f(x) = −3 − 6x1x2 + 24x1 + 3x2 + 24x1³ − 40x1²

so that

f⁺(x) = 24x1 + 3x2 + 24x1³
f⁻(x) = 3 + 6x1x2 + 40x1²

and x⁻ = [0, 0], x⁺ = [1, 1].

Based on the above sign definite decomposition, we have the following.

LEMMA 11.12 For all x ∈ B, the following inequalities hold:

f⁺(x⁻) ≤ f⁺(x) ≤ f⁺(x⁺)
f⁻(x⁻) ≤ f⁻(x) ≤ f⁻(x⁺).

The function f(x) can be represented in the (f⁻, f⁺) plane by associating f(x) with the point (f⁻(x), f⁺(x)), as shown in Figure 11.34. Consider the rectangle formed by the four points in the (f⁻, f⁺) plane:

A = (f⁻(x⁻), f⁺(x⁻)),  B = (f⁻(x⁻), f⁺(x⁺)),  C = (f⁻(x⁺), f⁺(x⁺)),  D = (f⁻(x⁺), f⁺(x⁻)).
From Lemma 11.12, we have the following results.
LEMMA 11.13 For every x ∈ B, the point (f⁻(x), f⁺(x)) lies inside the rectangle ABCD (Figure 11.35).

LEMMA 11.14 For all x ∈ B,

f(x) > 0, if f⁺(x⁻) − f⁻(x⁺) > 0,
f(x) < 0, if f⁺(x⁺) − f⁻(x⁻) < 0.
Figure 11.34 The (f⁻(·), f⁺(·)) representation: a point f(x) is plotted at (f⁻(x), f⁺(x)); the line L, where f(·) = 0, separates the region f(·) > 0 from the region f(·) < 0.
Figure 11.35 The rectangle ABCD, which contains the image f(B).
PROOF Follows from Lemmas 11.12 and 11.13 and the three possible relationships, shown as cases (I), (II), and (III) in Figure 11.36, between the line L of Figure 11.34 and the rectangle ABCD.

Figure 11.36 Three possible relationships between the line L and ABCD.
Recursive Algorithm

In case (III) of Figure 11.36, B and D lie on opposite sides of L:

f⁺(x⁺) − f⁻(x⁻) > 0
f⁺(x⁻) − f⁻(x⁺) < 0  (11.129)

and it is not possible to conclude robust positivity or negativity. In this case, the box B can be decomposed into smaller boxes Bk, k = 1, 2, ..., m, so that

B = ∪_{k=1}^m Bk  (11.130)

and the above test applied to each Bk. This can be repeated recursively to generate subsets B⁺ and B⁻ of B such that f(x) > 0 for all x ∈ B⁺ and
f(x) < 0 for all x ∈ B⁻. In general, B⁺ and B⁻ are unions of boxes but are not necessarily box-like or even connected. If a number of functions fi(x) are required to be robustly positive, one can determine the corresponding sets Bi⁺ and form the intersection ∩i Bi⁺.
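The recursive scheme just described can be sketched in a few lines. The code below is our own minimal implementation (not from the text): it represents f⁺ and f⁻ by dictionaries of positive coefficients and certifies sub-boxes of Example 11.22's box on which f(x) > 0.

```python
import math

# Sketch of the recursive positivity test based on the sign definite
# decomposition f = f+ - f-.  Each part is a dict {exponent tuple: positive
# coefficient} over a box contained in the first orthant.

def evaluate(terms, x):
    """Evaluate sum of c * x1**e1 * ... * xl**el over the terms."""
    return sum(c * math.prod(xi ** e for xi, e in zip(x, exps))
               for exps, c in terms.items())

def classify(fp, fm, box):
    """Lemma 11.14 on one box: '+' (f > 0), '-' (f < 0) or '?' (undecided)."""
    lo = [b[0] for b in box]
    hi = [b[1] for b in box]
    if evaluate(fp, lo) - evaluate(fm, hi) > 0:   # f+(x-) - f-(x+) > 0
        return '+'
    if evaluate(fp, hi) - evaluate(fm, lo) < 0:   # f+(x+) - f-(x-) < 0
        return '-'
    return '?'

def positive_boxes(fp, fm, box, tol=1e-2):
    """Recursively bisect the widest edge, collecting boxes where f > 0."""
    verdict = classify(fp, fm, box)
    if verdict == '+':
        return [box]
    if verdict == '-' or max(b[1] - b[0] for b in box) < tol:
        return []                      # negative, or too small to decide
    i = max(range(len(box)), key=lambda k: box[k][1] - box[k][0])
    mid = 0.5 * (box[i][0] + box[i][1])
    left = box[:i] + [(box[i][0], mid)] + box[i + 1:]
    right = box[:i] + [(mid, box[i][1])] + box[i + 1:]
    return positive_boxes(fp, fm, left, tol) + positive_boxes(fp, fm, right, tol)

# Example 11.22: f(x) = -3 - 6 x1 x2 + 24 x1 + 3 x2 + 24 x1**3 - 40 x1**2
f_plus = {(1, 0): 24.0, (0, 1): 3.0, (3, 0): 24.0}
f_minus = {(0, 0): 3.0, (1, 1): 6.0, (2, 0): 40.0}
subboxes = positive_boxes(f_plus, f_minus, [(0.0, 1.0), (0.0, 1.0)])
```

Every box returned carries a guarantee from Lemma 11.14: f is strictly positive on all of it, not merely at sampled points.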
11.6.2
Robust Stability
Consider now the polynomial family

P := {P(s, x) : x ∈ B}  (11.131)

where, without loss of generality, B is a box in the first orthant. A typical element of the family is

P(s) = a0(x) + a1(x)s + a2(x)s² + a3(x)s³ + a4(x)s⁴ + ···

where the ai(x) are polynomial functions of x for i = 0, 1, ..., n and therefore admit the decomposition

ai(x) = ai⁺(x) − ai⁻(x).  (11.132)

We assume throughout that

an(x) ≠ 0 for all x ∈ B.  (11.133)

Since x ∈ B and B is in the first orthant, the above decomposition is sign definite:

ai⁺(x), ai⁻(x) > 0 for all x ∈ B.

Now define

Peven⁺(s², x) := a0⁺(x) − a2⁻(x)s² + a4⁺(x)s⁴ − ···
Peven⁻(s², x) := a0⁻(x) − a2⁺(x)s² + a4⁻(x)s⁴ − ···
sPodd⁺(s², x) := s(a1⁺(x) − a3⁻(x)s² + a5⁺(x)s⁴ − ···)
sPodd⁻(s², x) := s(a1⁻(x) − a3⁺(x)s² + a5⁻(x)s⁴ − ···)
Peven(s², x) := Peven⁺(s², x) − Peven⁻(s², x)
sPodd(s², x) := sPodd⁺(s², x) − sPodd⁻(s², x).

Finally, let

P̄even(s²) := Peven⁺(s², x⁺) − Peven⁻(s², x⁻)
P_even(s²) := Peven⁺(s², x⁻) − Peven⁻(s², x⁺)
sP̄odd(s²) := sPodd⁺(s², x⁺) − sPodd⁻(s², x⁻)
sP_odd(s²) := sPodd⁺(s², x⁻) − sPodd⁻(s², x⁺).  (11.134)
THEOREM 11.15 (Robust Stability of Polynomic Interval Families)
The family P is robustly Hurwitz stable if the following four fixed polynomials are Hurwitz stable:

P1(s) = P_even(s²) + sP_odd(s²)
P2(s) = P_even(s²) + sP̄odd(s²)
P3(s) = P̄even(s²) + sP̄odd(s²)  (11.135)
P4(s) = P̄even(s²) + sP_odd(s²).

To prove the theorem, we require the following. Let co{v1, v2, ..., vk} denote the convex hull of the complex plane points v1, v2, ..., vk.

LEMMA 11.15

{P(jω, x) : x ∈ B} ⊂ co{P1(jω), P2(jω), P3(jω), P4(jω)}.

PROOF
We have

P(jω, x) = Peven(−ω², x) + jωPodd(−ω², x),  x ∈ B

where

Peven(−ω², x) = Peven⁺(−ω², x) − Peven⁻(−ω², x)
Podd(−ω², x) = Podd⁺(−ω², x) − Podd⁻(−ω², x).

The real part is bounded as follows:

P_even(−ω²) = Peven⁺(−ω², x⁻) − Peven⁻(−ω², x⁺) ≤ Peven(−ω², x) ≤ Peven⁺(−ω², x⁺) − Peven⁻(−ω², x⁻) = P̄even(−ω²).  (11.136)

Similarly, the imaginary part is bounded:

P_odd(−ω²) = Podd⁺(−ω², x⁻) − Podd⁻(−ω², x⁺) ≤ Podd(−ω², x) ≤ Podd⁺(−ω², x⁺) − Podd⁻(−ω², x⁻) = P̄odd(−ω²).  (11.137)

This is depicted in Figure 11.37.
We require another technical lemma before giving the proof of Theorem 11.15.
Figure 11.37 A bounded image set: for each ω, P(jω, x) lies in the rectangle A′B′C′D′ whose sides are [P_even(−ω²), P̄even(−ω²)] along the Peven axis and [P_odd(−ω²), P̄odd(−ω²)] along the Podd axis.
LEMMA 11.16 The convex combinations

λ1P1(s) + (1 − λ1)P2(s),
λ2P2(s) + (1 − λ2)P3(s),
λ3P3(s) + (1 − λ3)P4(s),
λ4P4(s) + (1 − λ4)P1(s),  λi ∈ [0, 1]

are Hurwitz stable if and only if P1(s), P2(s), P3(s), P4(s) are Hurwitz stable.

PROOF By the Vertex Lemma, the above segments are Hurwitz stable since in each case the endpoints share either the same even part or the same odd part.

We now give the proof of Theorem 11.15.

PROOF (Theorem 11.15) Consider an arbitrary polynomial P(s, x*) in the family P with x* ∈ B. It is clear from Lemma 11.15 that the image P(jω, x*) is contained in the rectangle with vertices Pi(jω) for every ω. Since the vertex polynomials and, by Lemma 11.16, the edges joining them are Hurwitz stable, the rectangle passes through n quadrants as ω runs from 0 to ∞, and so does the image P(jω, x*). Therefore, P(s, x*) is Hurwitz and the theorem is proved.
Example 11.23 (Kharitonov's Theorem) Consider the interval family of polynomials

P(s) = x0 + x1s + x2s² + x3s³ + x4s⁴ + x5s⁵ + x6s⁶ + x7s⁷ + ···

where 0 < xi⁻ ≤ xi ≤ xi⁺. Note that, using the previous notation, ai = xi = ai⁺ and ai⁻ = 0. Then

Peven⁺(s²) = x0 + x4s⁴ + x8s⁸ + ···
Peven⁻(s²) = −x2s² − x6s⁶ − x10s¹⁰ − ···
sPodd⁺(s²) = x1s + x5s⁵ + x9s⁹ + ···
sPodd⁻(s²) = −x3s³ − x7s⁷ − x11s¹¹ − ···

and

P̄even(s²) = Peven⁺(s², x⁺) − Peven⁻(s², x⁻) = x0⁺ + x2⁻s² + x4⁺s⁴ + x6⁻s⁶ + x8⁺s⁸ + ···
P_even(s²) = Peven⁺(s², x⁻) − Peven⁻(s², x⁺) = x0⁻ + x2⁺s² + x4⁻s⁴ + x6⁺s⁶ + x8⁻s⁸ + ···
sP̄odd(s²) = sPodd⁺(s², x⁺) − sPodd⁻(s², x⁻) = x1⁺s + x3⁻s³ + x5⁺s⁵ + x7⁻s⁷ + x9⁺s⁹ + ···
sP_odd(s²) = sPodd⁺(s², x⁻) − sPodd⁻(s², x⁺) = x1⁻s + x3⁺s³ + x5⁻s⁵ + x7⁺s⁷ + x9⁻s⁹ + ···

Therefore,

P1(s) = P_even(s²) + sP_odd(s²) = x0⁻ + x1⁻s + x2⁺s² + x3⁺s³ + x4⁻s⁴ + ···
P2(s) = P_even(s²) + sP̄odd(s²) = x0⁻ + x1⁺s + x2⁺s² + x3⁻s³ + x4⁻s⁴ + ···
P3(s) = P̄even(s²) + sP̄odd(s²) = x0⁺ + x1⁺s + x2⁻s² + x3⁻s³ + x4⁺s⁴ + ···
P4(s) = P̄even(s²) + sP_odd(s²) = x0⁺ + x1⁻s + x2⁻s² + x3⁺s³ + x4⁺s⁴ + ···

Therefore, Kharitonov's theorem has been recovered from Theorem 11.15.
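The coefficient pattern derived in Example 11.23 — lower, lower, upper, upper, repeating with period four, with a complementary pattern for each vertex — can be generated mechanically. The helper below is our own sketch (not from the text), returning the four Kharitonov polynomials as coefficient lists in ascending powers of s:

```python
def kharitonov(x_lo, x_hi):
    """Four Kharitonov polynomials for coefficient bounds x_lo[i] <= x_i <= x_hi[i]."""
    patterns = ["llhh",   # K1: x0- x1- x2+ x3+ x4- x5- ...
                "lhhl",   # K2: x0- x1+ x2+ x3- x4- x5+ ...
                "hhll",   # K3: x0+ x1+ x2- x3- x4+ x5+ ...
                "hllh"]   # K4: x0+ x1- x2- x3+ x4+ x5- ...
    return [[x_lo[i] if pat[i % 4] == "l" else x_hi[i]
             for i in range(len(x_lo))]
            for pat in patterns]
```

For instance, kharitonov([1, 2, 3, 4, 5], [2, 3, 4, 5, 6]) returns [1, 2, 4, 5, 5] as its first vertex, matching the P1 pattern above.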
11.6.3
Application to Controller Synthesis
It is best to illustrate the use of Theorem 11.15 in controller synthesis by examples.

Example 11.24 Consider the feedback system with plant and controller transfer function matrices

G(s) = [ (s − 5)/((s + 1)(s + 2))    s/((s + 1)(s + 2))
         (s + 6)/((s + 3)(s + 4))    (s − 7)/((s + 3)(s + 4)) ]

and

C(s) = [ K1  0
         0   K2 ].

We now have G(s) = Dg(s)⁻¹Ng(s) and C(s) = Nc(s)Dc(s)⁻¹ where

Dg(s) = [ (s + 1)(s + 2)   0
          0                (s + 3)(s + 4) ],   Ng(s) = [ s − 5   s
                                                         s + 6   s − 7 ]

and

Dc(s) = [ 1  0
          0  1 ],   Nc(s) = [ K1  0
                              0   K2 ].

The characteristic polynomial of the closed-loop system is

P(s, K1, K2) = s⁴ + (10 + K1 + K2)s³ + (35 + 2K1 − 4K2)s²
               + (50 − 18K1K2 − 23K1 − 19K2)s + (24 + 35K1K2 − 60K1 − 14K2).

We begin searching for the set of stabilizing parameters (K1, K2) inside the box (K1, K2) ∈ [0, 1] × [0, 1]. Let K1 ∈ [K1⁻, K1⁺] and K2 ∈ [K2⁻, K2⁺], and denote x := [K1 K2]. We now have

Peven⁺(s², x) = s⁴ − 4K2s² + (24 + 35K1K2)
Peven⁻(s², x) = −(35 + 2K1)s² + (60K1 + 14K2)
sPodd⁺(s², x) = 50s
sPodd⁻(s², x) = −(10 + K1 + K2)s³ + (18K1K2 + 23K1 + 19K2)s.
Thus,

P1(s) = s⁴ + (10 + K1⁻ + K2⁻)s³ + (35 + 2K1⁻ − 4K2⁺)s² + (50 − 18K1⁻K2⁻ − 23K1⁻ − 19K2⁻)s + (24 + 35K1⁺K2⁺ − 60K1⁻ − 14K2⁻)

P2(s) = s⁴ + (10 + K1⁺ + K2⁺)s³ + (35 + 2K1⁻ − 4K2⁺)s² + (50 − 18K1⁺K2⁺ − 23K1⁺ − 19K2⁺)s + (24 + 35K1⁺K2⁺ − 60K1⁻ − 14K2⁻)

P3(s) = s⁴ + (10 + K1⁻ + K2⁻)s³ + (35 + 2K1⁺ − 4K2⁻)s² + (50 − 18K1⁻K2⁻ − 23K1⁻ − 19K2⁻)s + (24 + 35K1⁻K2⁻ − 60K1⁺ − 14K2⁺)

P4(s) = s⁴ + (10 + K1⁺ + K2⁺)s³ + (35 + 2K1⁺ − 4K2⁻)s² + (50 − 18K1⁺K2⁺ − 23K1⁺ − 19K2⁺)s + (24 + 35K1⁻K2⁻ − 60K1⁺ − 14K2⁺).

As stated above, stability of these four polynomials is a sufficient condition for robust stability of the feedback system. The search for the stabilizing region is carried out by bisecting the box whenever the sufficient condition fails. We finally obtain the stabilizing region within the given box as (K1, K2) ∈ [0, 0.399] × [0, 1]. Figure 11.38 shows that the box with its four corners at Pi(jω), i = 1, 2, 3, 4, clearly contains the image set at a fixed frequency. Figure 11.39 illustrates the evolution of the image set over a range of frequencies.

Example 11.25 Consider the plant and controller transfer function matrices

G(s) = [ (s − a)/((s + 1)(s + 2))    s/((s + 1)(s + 2))
         (s + 6)/((s + 3)(s + 4))    (s − b)/((s + 3)(s + 4)) ]

and
C(s) = [ K1  0
         0   K2 ].
Then, we have the characteristic polynomial of the closed-loop system
P(s, K1, K2, a, b) = s⁴ + (10 + K1 + K2)s³ + (35 + 3K2 − K2b + 7K1 − K1a)s²
                     + (50 + 2K2 + 12K1 − K1K2b − K1K2a − 6K1K2 − 3K2b − 7K1a)s
                     + (24 + K1K2ab − 2K2b − 12K1a).
Figure 11.38 Image set at ω = 0.5 for K1 ∈ [0, 1], K2 ∈ [0, 1]: the rectangle ABCD with corners at Pi(jω) contains the image set in the (Pr, Pi) plane.
Figure 11.39 Image set for a range of frequencies, ω = 0 to 2.4, with K1 ∈ [0, 0.3125], K2 ∈ [0, 0.25].
The objective of the design is to find the region of stabilizing controller parameters (K1, K2) inside the box (K1, K2) ∈ [0, 1] × [0, 1] so that the closed-loop system is robustly stable under all plant parameter variations a ∈ [a⁻, a⁺] and b ∈ [b⁻, b⁺]. We now have

Peven⁺(s², x) = s⁴ − (K2b + K1a)s² + (24 + K1K2ab)
Peven⁻(s², x) = −(35 + 3K2 + 7K1)s² + (2K2b + 12K1a)
sPodd⁺(s², x) = (2K2 + 12K1 + 50)s
sPodd⁻(s², x) = −(K1 + K2 + 10)s³ + (K1K2b + 3K2b + 7K1a + K1K2a + 6K1K2)s.

Thus,

P1(s) = s⁴ + (K1⁺ + K2⁺ + 10)s³ + (35 + 3K2⁺ + 7K1⁺ − K2⁻b⁻ − K1⁻a⁻)s²
        + (2K2⁻ + 12K1⁻ + 50 − K1⁺K2⁺b⁺ − 3K2⁺b⁺ − 7K1⁺a⁺ − K1⁺K2⁺a⁺ − 6K1⁺K2⁺)s
        + (24 + K1⁻K2⁻a⁻b⁻ − 2K2⁺b⁺ − 12K1⁺a⁺)

P2(s) = s⁴ + (K1⁻ + K2⁻ + 10)s³ + (35 + 3K2⁺ + 7K1⁺ − K2⁻b⁻ − K1⁻a⁻)s²
        + (2K2⁺ + 12K1⁺ + 50 − K1⁻K2⁻b⁻ − 3K2⁻b⁻ − 7K1⁻a⁻ − K1⁻K2⁻a⁻ − 6K1⁻K2⁻)s
        + (24 + K1⁻K2⁻a⁻b⁻ − 2K2⁺b⁺ − 12K1⁺a⁺)

P3(s) = s⁴ + (K1⁻ + K2⁻ + 10)s³ + (35 + 3K2⁻ + 7K1⁻ − K2⁺b⁺ − K1⁺a⁺)s²
        + (2K2⁺ + 12K1⁺ + 50 − K1⁻K2⁻b⁻ − 3K2⁻b⁻ − 7K1⁻a⁻ − K1⁻K2⁻a⁻ − 6K1⁻K2⁻)s
        + (24 + K1⁺K2⁺a⁺b⁺ − 2K2⁻b⁻ − 12K1⁻a⁻)

P4(s) = s⁴ + (K1⁺ + K2⁺ + 10)s³ + (35 + 3K2⁻ + 7K1⁻ − K2⁺b⁺ − K1⁺a⁺)s²
        + (2K2⁻ + 12K1⁻ + 50 − K1⁺K2⁺b⁺ − 3K2⁺b⁺ − 7K1⁺a⁺ − K1⁺K2⁺a⁺ − 6K1⁺K2⁺)s
        + (24 + K1⁺K2⁺a⁺b⁺ − 2K2⁻b⁻ − 12K1⁻a⁻).

Keeping fixed the box of plant parameters that represents the robustness requirement, the algorithm bisects the controller parameter box whenever the sufficient condition fails. After several iterations, we obtain the box (K1, K2) ∈ [0, 0.333] × [0, 1], shown in Figure 11.40, which robustly stabilizes the closed-loop system under the plant parameter perturbations (a, b) ∈ [4, 6] × [6, 8].
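The vertex construction used in these examples is easy to automate. The sketch below (our own illustration, not the book's code) assembles the four extremal polynomials for the characteristic polynomial of Example 11.24 on a given (K1, K2) box and tests them with the standard Hurwitz criterion for a monic quartic. The certified sub-box [0, 0.1] × [0, 0.1] is our own choice; the full unit box is shown to fail the sufficient test, which is why the bisection search is needed.

```python
def hurwitz4(a3, a2, a1, a0):
    """Hurwitz test for the monic quartic s^4 + a3 s^3 + a2 s^2 + a1 s + a0."""
    return min(a3, a2, a1, a0) > 0 and a3 * a2 * a1 > a1 * a1 + a3 * a3 * a0

def vertex_polynomials(k1, k2):
    """Four extremal polynomials (a3, a2, a1, a0) for Example 11.24's family,
    built from the sign definite decomposition on the box k1 x k2."""
    (k1m, k1p), (k2m, k2p) = k1, k2
    # bounds on the even part: (s^2 coefficient, constant term)
    ev_hi = (35 + 2 * k1m - 4 * k2p, 24 + 35 * k1p * k2p - 60 * k1m - 14 * k2m)
    ev_lo = (35 + 2 * k1p - 4 * k2m, 24 + 35 * k1m * k2m - 60 * k1p - 14 * k2p)
    # bounds on the odd part: (s^3 coefficient, s coefficient)
    od_hi = (10 + k1m + k2m, 50 - 18 * k1m * k2m - 23 * k1m - 19 * k2m)
    od_lo = (10 + k1p + k2p, 50 - 18 * k1p * k2p - 23 * k1p - 19 * k2p)
    return [(od[0], ev[0], od[1], ev[1])
            for ev in (ev_lo, ev_hi) for od in (od_lo, od_hi)]

# A small cell that passes the sufficient condition ...
certified = all(hurwitz4(*p) for p in vertex_polynomials((0.0, 0.1), (0.0, 0.1)))
# ... and the full unit box, which does not (hence the bisection search).
full_box = all(hurwitz4(*p) for p in vertex_polynomials((0.0, 1.0), (0.0, 1.0)))
```

Embedding this test in a bisection loop over (K1, K2) cells reproduces the kind of search that yields the stabilizing regions quoted in the examples.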
Figure 11.40 Parameter spaces: the plant parameter perturbation box (a, b) ∈ [4, 6] × [6, 8] and, in the controller parameter space, the stabilizing box (K1, K2) ∈ [0, 0.333] × [0, 1].
11.7
Exercises
11.1 Consider the feedback control system shown in Figure 11.41, whose forward transfer function is

(α2s² + α1s + α0)/(s⁴ + β3s³ + β2s² + β1s + β0).

Figure 11.41 A feedback system (Exercise 11.1).
The parameters αi , βj vary in the following ranges: α0 ∈ [2, 6], α1 ∈ [0, 2], α2 ∈ [−1, 3], and β0 ∈ [4, 8], β1 ∈ [0.5, 1.5], β2 ∈ [2, 6], β3 ∈ [6, 14]. Determine if the closed-loop system is Hurwitz stable or not for this class of perturbations. 11.2 For the system in Exercise 11.1, determine the largest box with the same center and shape (i.e., the ratios of the lengths of the sides are prespecified) as prescribed in the problem, for which the closed-loop remains stable.
11.3 Show by an example that Kharitonov's Theorem does not hold when the stability region is the shifted half plane Re[s] ≤ −α, α > 0.

11.4 Show that for interval polynomials of degree less than six it suffices to test fewer than four polynomials in applying Kharitonov's test. Determine for each degree the number of polynomials to be checked, in addition to the condition that the signs of the coefficients are the same. Hint: Consider the interlacing tubes corresponding to the interval family for each degree.

11.5 Consider the Hurwitz stability of an interval family where the only coefficient subject to perturbation is δk, the coefficient of s^k, for an arbitrary k ∈ {0, 1, 2, ..., n}. Show that the δk axis is partitioned into at most one stable segment and one or two unstable segments.

11.6 Apply the result of Exercise 11.5 to the Hurwitz polynomial δ(s) = s⁴ + δ3s³ + δ2s² + δ1s + δ0 with nominal parameters δ3⁰ = 4, δ2⁰ = 10, δ1⁰ = 12, δ0⁰ = 5. Suppose that all coefficients except δ3 remain fixed and δ3 varies as δ3⁻ ≤ δ3 ≤ δ3⁺. Determine the largest interval (δ3⁻, δ3⁺) for which δ(s) remains Hurwitz. Repeat this for each of the coefficients δ2, δ1, and δ0.

11.7 Consider the Hurwitz polynomial δ(s) = s⁴ + δ3s³ + δ2s² + δ1s + δ0 with nominal parameters δ⁰ = [δ3⁰, δ2⁰, δ1⁰, δ0⁰] given by δ3⁰ = 4, δ2⁰ = 10, δ1⁰ = 12, δ0⁰ = 5. Suppose that the coefficients vary independently within a weighted ℓ∞ box of size ρ given by

∆ρ := {δ : δi⁰ − ρwi ≤ δi ≤ δi⁰ + ρwi, i = 0, 1, 2, 3}

with weights wi ≥ 0. Find the maximal value of ρ for which stability is preserved, assuming that wi = δi⁰.
11.8 Consider an interval family and show that if the Kharitonov polynomials are completely unstable (i.e., their roots are all in the closed right-half plane) then the entire family is completely unstable.
11.9 Consider the positive feedback control system shown in Figure 11.42.
Figure 11.42 A positive feedback system with forward block G(s) (Exercise 11.9).
Let G(s) belong to a family of interval systems G(s) as in Section 11.4.1. Show that the closed-loop system is robustly stable (stable for all G(s) ∈ G(s)) if and only if it is stable for each system in the set of negative Kharitonov systems

G_K⁻(s) = { K_N^{5−i}(s)/K_D^i(s) : i = 1, 2, 3, 4 }.

Prove that if the system is robustly stable, the minimum gain margin over the interval family G(s) is attained over the subset G_K⁻(s).

11.10 Let the state space representation of the system (A, b) be

A = [ 0   1   0   0
      0   0   1   0
      0   0   0   1
      a0  a1  a2  a3 ],    b = [ 0
                                 0
                                 0
                                 1 ]
where
a0 ∈ [0, 1.5],  a1 ∈ [−1.5, 2],  a2 ∈ [0, 1],  a3 ∈ [1, 2].

Find the state feedback control law that robustly stabilizes the closed-loop system.

11.11 Consider the Hurwitz stable interval polynomial family

δ(s) = δ3s³ + δ2s² + δ1s + δ0
δ3 ∈ [1.5, 2.5],  δ2 ∈ [2, 6],  δ1 ∈ [4, 8],  δ0 ∈ [0.5, 1.5].

Determine the worst case parametric stability margin ρ(δ) over the parameter box in the ℓ2 and ℓ∞ norms.

11.12 For the interval polynomial

δ(z) = δ3z³ + δ2z² + δ1z + δ0
δ3 ∈ [1 − ε, 1 + ε],  δ2 ∈ [−1/4 − ε, −1/4 + ε],  δ1 ∈ [−3/4 − ε, −3/4 + ε],  δ0 ∈ [3/16 − ε, 3/16 + ε],

determine the maximal value of ε for which the family is Schur stable.

11.13 Consider the interval family I(s) of real polynomials

δ(s) = δ0 + δ1s + δ2s² + δ3s³ + δ4s⁴ + ··· + δns^n

where the coefficients lie within given ranges, δ0 ∈ [x0, y0], δ1 ∈ [x1, y1], ..., δn ∈ [xn, yn]. Suppose now that xn = 0 and that xi > 0, i = 0, 1, 2, ..., n − 1. Show that the Hurwitz stability of the family can be determined by checking, in addition to the usual four Kharitonov polynomials

K̂1(s) = xn s^n + y_{n−1} s^{n−1} + y_{n−2} s^{n−2} + x_{n−3} s^{n−3} + x_{n−4} s^{n−4} + ···,
K̂2(s) = xn s^n + x_{n−1} s^{n−1} + y_{n−2} s^{n−2} + y_{n−3} s^{n−3} + x_{n−4} s^{n−4} + ···,
K̂3(s) = yn s^n + x_{n−1} s^{n−1} + x_{n−2} s^{n−2} + y_{n−3} s^{n−3} + y_{n−4} s^{n−4} + ···,
K̂4(s) = yn s^n + y_{n−1} s^{n−1} + x_{n−2} s^{n−2} + x_{n−3} s^{n−3} + y_{n−4} s^{n−4} + ···,

the following two additional polynomials:

K̂5(s) = x_{n−1} s^{n−1} + x_{n−2} s^{n−2} + y_{n−3} s^{n−3} + y_{n−4} s^{n−4} + ···
K̂6(s) = y_{n−1} s^{n−1} + x_{n−2} s^{n−2} + x_{n−3} s^{n−3} + y_{n−4} s^{n−4} + ···

Hint: Note that K̂5(s) and K̂6(s) can be obtained from K̂3(s) and K̂4(s), respectively, by setting yn = 0. Now use the argument that, for a given polynomial q(s) of degree n − 1, the family {s^n + μq(s), μ ∈ [1/yn, ∞)} is Hurwitz stable if and only if q(s) and yn s^n + q(s) are Hurwitz stable.
11.14 Prove Lemma 11.7 for the case n = 4r + j, j = 1, 2, 3.

11.15 Using the Edge Theorem, check the robust Hurwitz stability of the following family of polynomials. Also show the root cluster of the family:

δ(s) := s³ + (a + 3b)s² + cs + d

where a ∈ [1, 2], b ∈ [0, 3], c ∈ [10, 15], and d ∈ [9, 14].

11.16 Consider the plant G(s) and the controller C(s):

G(s) := (s + 1)/(s² − s − 1),   C(s) := (as + b)/(s + c).
First, choose the controller parameters {a0, b0, c0} so that the closed-loop system has its characteristic roots at −1 ± j1 and −10. Now, for

a ∈ [a0 − ε/2, a0 + ε/2],  b ∈ [b0 − ε/2, b0 + ε/2],  c ∈ [c0 − ε/2, c0 + ε/2],

find the maximum value εmax of ε that robustly maintains closed-loop stability. Find the root set of the system when the parameters range over a box with sides εmax/2.

11.17 Repeat Exercise 11.16 with the additional requirement that the dominant pair of roots remain inside circles of radii 0.5 centered at −1 ± j1.

11.18 Consider the discrete time plant G(z) and the controller C(z):

G(z) := (z − 1)/(z² + 2z + 3),   C(z) := (az + b)/(z + c).
Choose the controller parameters {a0, b0, c0} so that deadbeat control is achieved, namely all the closed-loop poles are placed at z = 0. Using the Edge Theorem, find the maximum range of the controller parameters so that the closed-loop poles remain inside the circle of radius 0.5 centered at the origin. Assume that the controller parameters are bounded by the same amount, i.e.,

a ∈ [a0 − ε, a0 + ε],  b ∈ [b0 − ε, b0 + ε],  c ∈ [c0 − ε, c0 + ε].

Find the root set of the system for the parameters {a, b, c} varying in the box

a ∈ [a0 − ε/2, a0 + ε/2],  b ∈ [b0 − ε/2, b0 + ε/2],  c ∈ [c0 − ε/2, c0 + ε/2].

11.19 Consider the polynomials s² + a1s + a0 and s² + b1s + b0, where [a1⁰, a0⁰] = [2, 2] and [b1⁰, b0⁰] = [4, 8].
Now find the maximum value εmax of ε so that the families remain coprime as [a1, a0] varies over the box [a1⁰ − ε, a1⁰ + ε] × [a0⁰ − ε, a0⁰ + ε] and [b1, b0] varies independently over the box [b1⁰ − ε, b1⁰ + ε] × [b0⁰ − ε, b0⁰ + ε].

11.20 Repeat Exercise 11.19, this time verifying coprimeness over the right-half plane.
11.21 Consider a unity feedback system with the plant G(s) and controller C(s) given as

G(s) = (s + b0)/(s² + a1s + a0)   and   C(s) = (s + 1)/(s + 2).

Assume that the plant parameters vary independently as: a0 ∈ [2, 4],
a1 ∈ [2, 4],
b0 ∈ [1, 3].
Determine the root space of the family of closed-loop polynomials using the Edge Theorem.

11.22 Consider the two polynomials

A(s) = a2s² + a1s + a0
B(s) = b3s³ + b2s² + b1s + b0

where the nominal values of the parameters are a0⁰ = 2, a1⁰ = 2, a2⁰ = 1, b0⁰ = 2.5, b1⁰ = 7, b2⁰ = 4.5, b3⁰ = 1. Suppose the parameter perturbations are:

ai ∈ [ai⁰ − ε, ai⁰ + ε],  i = 0, 1, 2
bj ∈ [bj⁰ − ε, bj⁰ + ε],  j = 0, 1, 2, 3.
Find the maximum value of ε for which the two polynomial sets remain coprime. Answer: εmax = 0.25.

11.23 Let

A(s) = a3s³ + a2s² + a1s + a0
B(s) = b3s³ + b2s² + b1s + b0

and
[a0⁰, a1⁰, a2⁰, a3⁰, b0⁰, b1⁰, b2⁰, b3⁰] = [100, 100, 10, 3, 1, 3, 3, 3].
Assume that all the coefficients of the above two polynomials are allowed to perturb independently. Find the maximum value of ε so that the two polynomial families remain coprime when

ai ∈ [ai⁰ − ε, ai⁰ + ε],  i = 0, 1, 2, 3
bj ∈ [bj⁰ − ε, bj⁰ + ε],  j = 0, 1, 2, 3.

Answer: εmax = 0.525
11.24 Repeat Exercise 11.23 with the requirement that the families remain coprime over the right half of the complex plane.
11.25 Consider the polynomial s² + a1s + a0 and let the coefficients (a1, a0) vary in the convex hull of the points (0, 0), (0, R), (R², 0), (R², 2R). Show that the root space of this set is the intersection with the left-half plane of the circle of radius R centered at the origin. Describe also the root space of the convex hull of the points (0, 0), (0, 2R), (R², 0), (R², 2R).

11.26 In a unity feedback system the plant transfer function is

G(s) = (α2s² + α1s + α0)/(s³ + β2s² + β1s + β0).
The nominal values of the plant parameters are α2⁰ = 1, α1⁰ = 5, α0⁰ = −2, β2⁰ = −3, β1⁰ = −4, β0⁰ = 6. Determine a feedback controller of second order that places the five closed-loop poles at −1, −2, −3, −2 + 2j, −2 − 2j. Suppose that the parameters of G(s) are subject to perturbation as follows: α1 ∈ [3, 7],
α0 ∈ [−3, −1],
β1 ∈ [−6, −2],
β0 ∈ [5, 7].
Determine if the closed loop is robustly stable with the controller designed as above for the nominal system.

11.27 Consider the two masses connected by a spring and a damper as shown in Figure 11.43.
Figure 11.43 Mass-spring-damper system: the force u acts on mass M (position d), which is coupled through the spring k and damper b to mass m (position y).
Assuming that there is no friction between the masses and the ground, we have the following dynamic equations:

M d̈ + b(ḋ − ẏ) + k(d − y) = u
m ÿ + b(ẏ − ḋ) + k(y − d) = 0.

The transfer functions of the system are

y(s)/u(s) = ((b/m)s + k/m) / [M s² (s² + (1 + m/M)((b/m)s + k/m))]

d(s)/u(s) = (s² + (b/m)s + k/m) / [M s² (s² + (1 + m/M)((b/m)s + k/m))].

The feedback controller
C(s) = (nc1(s)/dc(s)) y(s) + (nc2(s)/dc(s)) d(s)
with dc(s) = s² + β1s + β0, nc1(s) = δ1s + δ0, nc2(s) = γ1s + γ0, is to be designed so that the closed loop is stabilized. With nominal parameters m = 1, M = 2, b = 2, k = 3, determine the controller parameters so that the closed-loop poles for the nominal system are all at −1. Determine if the closed loop remains stable when the parameters b and k suffer perturbations in their magnitudes of 50%. Determine the largest ℓ∞ box centered at the above nominal parameters in the (b, k) parameter space for which closed-loop stability is preserved with the controller designed. Also, use the Boundary Crossing Theorem to plot the entire stability region in this parameter space.

11.28 In a unity feedback system,
G(s) = (s − z0)/(3s³ − p2s² + s + p0)   and   C(s) = (α0 + α1s + α2s²)/(s² + β1s + β0)
represent the transfer functions of the plant and controller, respectively. The nominal values of the parameters [z0, p0, p2] are given by [z0⁰, p0⁰, p2⁰] = [1, 1, 2]. Find the controller parameters so that the closed-loop poles are placed at [−1, −2, −3, −2 − j, −2 + j]. Determine if the resulting closed-loop system remains stable if the parameters [z0, p0, p2] are subject to a 50% variation in their numerical values centered about the nominal, i.e., z0 ∈ [0.5, 1.5], p0 ∈ [0.5, 1.5], p2 ∈ [1, 3].

11.29 In the previous problem, let the plant parameters have their nominal values and assume that the nominal controller has been designed to place
the closed-loop poles as specified in the problem. Determine the largest ℓ∞ ball in the controller parameter space x = [α0, α1, α2, β0, β1], centered at the nominal value calculated above, for which the closed-loop system with the nominal plant remains stable.

11.30 In a unity feedback system with
G(s) = (s + z0)/(s² + p1s + p0)   and   C(s) = (α2s² + α1s + α0)/(s(s + β)),
assume that the nominal values of the plant parameters are (z0⁰, p0⁰, p1⁰) := (1, 1, 1). Choose the controller parameters (α0, α1, α2, β) so that the closed-loop system is stabilized at the nominal plant parameters. Check if your controller robustly stabilizes the family of closed-loop systems under plant parameter perturbations of 20%.

11.31 Consider the unity feedback configuration with the plant and controller

G(s) = (s + b0)/(s² + a1s + a0)   and   C(s) = (s + 1)/(s + 2)

where
a1 ∈ [2, 4],
b0 ∈ [1, 3].
Is this closed-loop system robustly stable? 11.32 Consider the unity feedback system shown in Figure 11.44 r
+ − 6
e
-
F1 F2
u -
y -
P1 P2
Figure 11.44 Feedback control system.
where
F1 (s) 2s2 + 4s + 3 = 2 F2 (s) s + 3s + 4
and
P1 (s) s2 + a 1 s + a 0 = P2 (s) s(s2 + b1 s + b0 )
with a01 = −2,
a00 = 1,
b00 = 2,
b01 = 1.
545
STABILITY OF A POLYTOPE Now let a0 ∈ [1 − ǫ, 1 + ǫ], b0 ∈ [2 − ǫ, 2 + ǫ] a1 ∈ [−2 − ǫ, −2 + ǫ], b1 ∈ [1 − ǫ, 1 + ǫ] Find ǫmax for which the system is robustly stable. Answer: ǫmax = 0.175 11.33 Referring to the system given in Figure 11.44 with 2s2 + 4s + 3 F1 (s) = 2 F2 (s) s + 3s + 4
P1 (s) s2 + a 1 s + a 0 = . P2 (s) s(s2 + b1 s + b0 )
and
Let the nominal system be P10 (s) s2 − 2s + 7 = . 0 P2 (s) s(s2 + 8s − 0.25) Suppose that the parameters vary within intervals: a1 ∈ [−2 − ǫ, −2 + ǫ], b1 ∈ [8 − ǫ, 8 + ǫ],
a0 ∈ [7 − ǫ, 7 + ǫ] b0 ∈ [−0.25 − ǫ, −0.25 + ǫ].
Find the maximum value of ǫ for robust stability of the family using GKT. Answer: ǫmax = 0.23 11.34 For the same configuration in Figure 11.44 with F1 (s) 26 + 27s = F2 (s) −17 + 2s
and
P1 (s) s + a0 = 2 P2 (s) s + b1 s + b0
with a0 ∈ [−1.5, −0.5],
b1 ∈ [−2.5, −1.5],
b0 ∈ [−1.5, −0.5]
Show that the family of closed-loop systems is unstable using GKT. 11.35 For the same configuration in Figure 11.44 with F1 (s) 26 + 27s = F2 (s) −17 + 2s
and
P1 (s) s + a0 = 2 P2 (s) s + b1 s + b0
with a0 ∈ [−1.5, −0.5],
b1 ∈ [−2.5, −1.5],
b0 ∈ [−1.5, −0.5]
Show that the family of closed-loop systems is unstable using GKT. 11.36 Let C 0 (s) =
Nc0 (s) Dc0 (s)
546
ROBUST PARAMETRIC CONTROL
be a rational compensator that stabilizes an LTI plant P without jω poles. Consider the interval family of controllers {Nc (s), Dc (s)} containing {Nc0 (s), Dc0 (s)}). Prove that each member of such a family stabilizes P if and only if the phase difference between the points j i KD (jω) + KN (jω)P (jω),
i = 1, 2, 3, 4; j = 1, 2, 3, 4
j i is less than π for all ω. Note that KN (s) and KD (s) are the Kharitonov polynomials associated with N (s) and D(s).
11.37 Consider the unstable batch reactor process 1.38 −0.2077 6.715 −5.676 0 0 −0.5814 −4.29 0 0.675 0 x(t) + 5.679 x(t) ˙ = 1.067 1.136 −3.146 u(t) 4.273 −6.654 5.893 0.048 4.273 1.343 −2.104 1.136 0 010 0 y(t) = x(t). 1 0 1 −1 The control objectives are stabilization and step input tracking and are to be achieved by two PI controllers # " sKpi + Kii Ei (s) Ui (s) = s where ei (t) = ri (t) − yi (t),
i = 1, 2.
(a) Determine the closed-loop characteristic polynomial as a function of (Kpi , Kii ) for i = 1, 2. (b) Determine nominal stabilizing values (Kpi,0 , Kii,0 ), for i = 1, 2. For example, (Kp1,0 , Ki1,0 , Kp2,0 , Ki2,0 ) = (2, 2, − 5, − 8). (c) Using Theorem 11.15, determine a (large) box of stabilizing controller parameters around the nominal.
11.8
Notes and References
The Edge Theorem is due to Bartlett, Hollot, and Lin [17]. We note that the weaker and more obvious result in Corollary 11.1, that is, the stability detecting property of the exposed edges, is often referred to, loosely, in the
STABILITY OF A POLYTOPE
547
literature as the Edge Theorem. In fact as we have seen in Chapter 10, Corollary 11.1 applies to complex polytopic polynomial and quasi-polynomial families. However, the root space boundary generating property does not necessarily hold in these more general situations. The interval polynomial problem was originally posed by Faedo [75] who attempted to solve it using the Routh-Hurwitz conditions. Some necessary and some sufficient conditions were obtained by Faedo and the problem remained open until Kharitonov gave a complete solution. Kharitonov first published his theorem for real polynomials in 1978 [136], and then extended it to the complex case in [137]. The papers of Bialas [31] and Barmish [10] are credited with introducing this result to the Western literature. Several treatments of this theorem are available in the literature. Among them we can mention Bose [34], Yeung and Wang [207] and Minnichelli, Anagnost, and Desoer [156] and Chapellat and Bhattacharyya [45]. A system-theoretic proof of Kharitonov’s Theorem for the complex case was given by Bose and Shi [41] using complex reactance functions. That the set I(jω) is a rectangle was first pointed out by Dasgupta [60] and hence it came to be known as Dasgupta’s rectangle. The proof in Minnichelli et al. is based on the image set analysis given in Section 11.4.3. The proof in Chapellat and Bhattacharyya [45] is based on the Segment Lemma. Mansour and Anderson [148] have proved Kharitonov’s Theorem using the second method of Lyapunov. The computational approach to enlarging the ℓ∞ box described in Exercise 11.7 was first reported in Barmish [10]. The extremal property of the Kharitonov polynomials, Theorem 11.9 was first proved by Chapellat and Bhattacharyya [44] and the robust stabilization result of Theorem 11.12 is adapted from Chapellat and Bhattacharyya [47]. 
Rantzer [171] studied the problem of characterizing stability regions in the complex plane for which it is true that stability of all the vertices of an interval family guarantee that of the entire family. He showed that such regions D, called Kharitonov regions, are characterized by the condition that D as well as 1/D are both convex. Meressi, Chen, and Paden [155] have applied Kharitonov’s Theorem to mechanical systems. Mori and Kokame [159] dealt with the modifications required to extend Kharitonov’s Theorem to the case where the degree can drop, i.e., xn = 0 (see Exercise 11.12). Kharitonov’s Theorem was generalized by Chapellat and Bhattacharyya [46] for the control problem and this generalization was called the GKT. Various other generalizations of the Kharitonov’s Theorem have been reported. In [35] Bose generalized Kharitonov’s Theorem in another direction and showed that the scattering Hurwitz property of a set of bivariate interval polynomials could be established by checking only a finite number of extreme bivariate polynomials. Multidimensional interval polynomials were also studied by Basu [19]. Barmish [12] has reported a generalization of the so-called four polynomial concept of Kharitonov. The Generalized Kharitonov Theorem was proved by Chapellat and Bhattacharyya in [46], where it was called the Box Theorem. The vertex condition that was given in [46] dealt only with the case where the Fi (s) were even or
548
ROBUST PARAMETRIC CONTROL
odd. The more general vertex condition given here in part II of Theorem 11.13 is based on the Vertex Lemma. In Bhattacharyya [22] and Bhattacharyya and Keel [26] a comprehensive survey of the applications of this theorem were given. In the latter papers this result was referred to as GKT. A special case of the vertex results of part II of GKT is reported by Barmish, Hollot, Kraus, and Tempo [13]. A discrete time counterpart of GKT was developed by Katbab and Jury [113] but in this case the stability test set that results consists of manifolds rather than line segments. The robust positivity results of Section 11.6 are due to Elizondo-Gonzales [74] and Theorem 11.15 is due to Keel and Bhattacharyya [116]. The result of Exercise 11.34 was obtained in [114]. It suggests how a box of stabilizing controllers may be generated only from the plant frequency response measurement data P (jω). Exercise 11.35 is taken from Rosenbrock [173].
12 ROBUST CONTROL DESIGN
In this chapter we develop some useful frequency domain properties of systems containing uncertain parameters. The Generalized Kharitonov Theorem of the last chapter introduced a set of one parameter extremal plants which completely characterize the frequency domain behavior of linear interval systems. We show here that this extremal set can be used to exactly calculate the uncertainty template at each frequency as well as the Bode, Nyquist, and Nichols envelopes of the system. We also prove that the worst case gain, phase, and parametric stability margins of control systems containing such a plant occur over this extremal set. The utility of these tools in robust classical control design is illustrated by examples. Next, we consider control systems containing parametric as well as unstructured uncertainty. Parametric uncertainty is modelled as usual by interval systems or linear interval systems. Two types of unstructured feedback perturbations are considered. First, we deal with unstructured uncertainty modelled as H∞ norm bounded perturbations. A robust version of the Small Gain Theorem is developed for interval systems. An exact calculation of both the worst case parametric and unstructured stability margins is given, and the maximum values of various H∞ norms over the parameter set are determined. It is shown that these occur on the same extremal subset which appears in the Generalized Kharitonov Theorem of Chapter 11. This solves the important problem of determining robust performance when it is specified in terms of H∞ norms. Next, we deal with unstructured perturbations consisting of a family of nonlinear sector bounded feedback gains perturbing interval or linear interval systems. Extremal results for this robust version of the classical Absolute Stability problem are given. The constructive solution to this problem is also based on the extremal systems introduced in the Generalized Kharitonov Theorem.
12.1 Introduction
Frequency response methods play a fundamental role in the fields of control, communications, and signal processing. Classical control focuses on the frequency domain properties of control systems and has developed design methods based on simple but powerful graphical tools such as the Nyquist plot, Bode plots, and Nichols Chart. These techniques are well known and are popular with practicing engineers. However, they were developed for a fixed nominal system and in general are inapplicable when several uncertain parameters are present. In these situations it is necessary to evaluate the frequency domain behavior of the entire family of systems in order to effectively carry out analysis and design. A brute force approach to this problem (grid the uncertainty set) can be avoided by assuming a certain amount of structure for the perturbations, even if such an assumption introduces some conservatism. In this chapter we shall consider the class of linear interval systems where the uncertain parameters lie in intervals and appear linearly in the numerator and denominator coefficients of the transfer functions. For example, the family of transfer functions

G(s) = (4s3 + α2s2 + α1s + 5) / (s4 + 10β3s3 + β2s2 + (β1 + 2γ1)s)
where α2 , α1 , β3 , β2 , β1 , γ1 vary in independent intervals is a linear interval system containing six interval parameters. In this example, the uncertainty template G(jω) at each frequency ω is a complex plane set generated by the parameter vector ranging over the six dimensional parameter box. With the results to be developed in this chapter we will be able to replace G(s) by a subset of systems GE (s). This extremal subset will allow us to constructively generate the exact boundary of the uncertainty template by means of a set of one parameter problems. These extremal systems will allow us to exactly calculate the boundaries of the Bode, Nyquist, and Nichols plots of all transfer functions in the control system. They also can be used to calculate the worst case gain, phase, and parametric stability margins over the uncertain set of parameters. The utility of these concepts in control system design is illustrated by giving examples which robustify classical design techniques by incorporating parametric uncertainty. Robustness of stability in the presence of unstructured uncertainty is an important and well developed subject in control system analysis. In the 1950s an important robustness problem called the absolute stability problem was formulated and studied. In this problem, also known as the Lur’e or Popov problem, a fixed linear system is subjected to perturbations consisting of all possible nonlinear feedback gains lying in a sector. In the 1980s a similar problem was studied by modelling the perturbations as H∞ norm bounded perturbations of a fixed linear system. We begin by considering an interval plant connected to a fixed feedback controller and develop the appropriate mathematical machinery for this system. The generalized Kharitonov segments introduced in Chapter 11 serve to define the extremal systems. Using these systems, we calculate the boundaries of the image sets of various system transfer functions evaluated at s = jω. 
These include the characteristic polynomial, open- and closed-loop transfer
functions, sensitivity and complementary sensitivity and disturbance transfer functions. We also evaluate the worst case stability margins using these extremal systems. These results depend on some simple geometric facts regarding the sum and quotients of complex plane sets. We then generalize these results to the larger class of linear interval systems using essentially the same geometric ideas.
12.2 Interval Control Systems
Consider the feedback system shown in Figure 12.1 with
Figure 12.1 A unity feedback interval control system.
F(s) := F1(s)/F2(s),   G(s) := N(s)/D(s).  (12.1)
We suppose that F(s) is fixed but G(s) contains uncertain real parameters which appear as the coefficients of N(s) and D(s). Write

D(s) := a0 + a1s + a2s2 + a3s3 + · · · + an−1sn−1 + ansn
N(s) := b0 + b1s + b2s2 + b3s3 + · · · + bm−1sm−1 + bmsm  (12.2)
where ak ∈ [a_k^-, a_k^+] for k ∈ n := {0, 1, · · · , n} and bk ∈ [b_k^-, b_k^+] for k ∈ m. Let us define the interval polynomial sets

D(s) := {D(s) = a0 + a1s + a2s2 + · · · + ansn : ak ∈ [a_k^-, a_k^+], for k ∈ n}
N(s) := {N(s) = b0 + b1s + b2s2 + · · · + bmsm : bk ∈ [b_k^-, b_k^+], for k ∈ m}
and the corresponding set of interval systems:

G(s) := { N(s)/D(s) : (N(s), D(s)) ∈ N(s) × D(s) }.  (12.3)
We refer to the unity feedback system in Figure 12.1 as an interval control system. For simplicity, we will use the notational convention

G(s) = N(s)/D(s)  (12.4)
to denote the family (12.3). The characteristic polynomial of the system is

δ(s) := F1(s)N(s) + F2(s)D(s)  (12.5)

and the set of system characteristic polynomials can be written as

∆(s) := F1(s)N(s) + F2(s)D(s).  (12.6)
The control system is robustly stable if each polynomial in ∆(s) is of the same degree and is Hurwitz. This is precisely the type of robust stability problem dealt with in the Generalized Kharitonov Theorem (GKT) of the last chapter, where we showed that Hurwitz stability of the control system over the set G(s) could be reduced to testing over the much smaller extremal set of systems GE(s). Following the notation of the last chapter, let KN(s) and KD(s) denote the Kharitonov polynomials associated with N(s) and D(s), and let SN(s) and SD(s) denote the corresponding sets of Kharitonov segments. Recall that these segments are pairwise convex combinations of Kharitonov polynomials sharing a common even or odd part. Define the extremal subsets, using the above notational convention:

GE(s) := KN(s)/SD(s) ∪ SN(s)/KD(s)   (extremal systems)  (12.7)

GK(s) := KN(s)/KD(s)   (Kharitonov systems).  (12.8)
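As a computational aside (a sketch not taken from the text), the Kharitonov polynomials entering GK(s) can be generated directly from the coefficient bounds via the standard sign-pattern construction; the function name and the sample bounds below (those of Example 12.1 later in this chapter) are our own choices for illustration:

```python
def kharitonov(lo, hi):
    """Four Kharitonov polynomials (coefficient lists, ascending powers)
    of the interval polynomial with lower bounds lo and upper bounds hi.
    Each pattern picks the lower (-) or upper (+) bound and repeats with
    period 4 in the coefficient index."""
    patterns = ["--++", "-++-", "+--+", "++--"]
    return [[hi[k] if p[k % 4] == "+" else lo[k] for k in range(len(lo))]
            for p in patterns]

# interval polynomial with a0 in [1,2], a1 in [2,3], a2 in [2,3]
K1, K2, K3, K4 = kharitonov([1, 2, 2], [2, 3, 3])
```

The four returned coefficient lists correspond to the polynomials Kd1(s), ..., Kd4(s) of Example 12.1.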
We shall say that F(s) satisfies the vertex condition if the polynomials Fi(s) are of the form

Fi(s) := s^{ti}(ai s + bi)Ui(s)Ri(s),  i = 1, 2  (12.9)
where ti are nonnegative integers, ai, bi are arbitrary real numbers, Ui(s) is an anti-Hurwitz polynomial, and Ri(s) is an even or odd polynomial. We recall the result given by GKT.

THEOREM 12.1
The control system of Figure 12.1 is robustly stable, that is, stable for all G(s) ∈ G(s), if and only if it is stable for all G(s) ∈ GE(s). If in addition F(s) satisfies the vertex condition, robust stability holds if the system is stable for each G(s) ∈ GK(s).
The GKT thus reduces the problem of verifying robust stability over the multiparameter set G(s) to a set of one parameter stability problems over GE (s) in general, and under the special conditions on F (s) stated, to the vertex set GK (s). In the rest of this chapter we shall show that the systems GE (s) and GK (s) enjoy many other useful boundary and extremal properties. They can be constructively used to carry out frequency response calculations in control system analysis and design. In fact, it will turn out that most of the important system properties such as worst case stability and performance margins over the set of uncertain parameters can be determined by replacing G(s) ∈ G(s) by the elements of G(s) ∈ GE (s). In some special cases one may even replace G(s) by the elements of GK (s). The results are first developed for interval plants for the sake of simplicity.
12.3 Frequency Domain Properties
In order to carry out frequency response analysis and design incorporating robustness with respect to parameter uncertainty, we need to be able to determine the complex plane images of various parametrized sets. In this section we will develop some computationally efficient procedures to generate such sets. We shall first consider the complex plane images of ∆(s) and G(s) at s = jω. These sets, called uncertainty templates, are denoted ∆(jω) and G(jω). Since N(s) and D(s) are interval families, N(jω) and D(jω) are axis-parallel rectangles in the complex plane. F1(jω)N(jω) and F2(jω)D(jω) are likewise rotated rectangles in the complex plane. Thus, ∆(jω) is the complex plane sum of two rectangles whereas G(jω) is the quotient of two rectangles. We assume here that 0 ∉ D(jω). If this assumption fails to hold we can always "indent" the jω axis to exclude those values of ω which violate the assumption. Therefore, throughout this chapter we will make the standing assumption that the denominator of any quotient excludes zero. The next lemma will show us how to evaluate the sum and quotient of two complex plane polygons Q1 and Q2 with vertex sets V1 and V2, and edge sets E1 and E2, respectively. Let ∂(·) denote the boundary of the complex plane set (·).

LEMMA 12.1
(a) ∂(Q1 + Q2) ⊂ (V1 + E2) ∪ (E1 + V2)

(b) ∂(Q1/Q2) ⊂ (V1/E2) ∪ (E1/V2).
PROOF (a) From simple complex plane geometry, the sum of two straight line segments is generated by adding vertex-segment pairs; thus the result is true in this case. For the general case it is known that ∂(Q1 + Q2) ⊂ ∂Q1 + ∂Q2. Then, without loss of generality, we let ∂Q1 be an edge and ∂Q2 be another edge and use the previous argument. This proves part (a) of the lemma.

(b) First, we establish that z ∈ ∂(Q1/Q2) if and only if 0 ∈ ∂(Q1 − zQ2). Indeed, if z0 ∈ ∂(Q1/Q2), then for every ε > 0 the open disc |z − z0| < ε contains points z such that z ∉ Q1/Q2. Thus, 0 ∉ Q1 − zQ2. However, 0 ∈ Q1 − z0Q2 since z0 ∈ Q1/Q2. By continuity of Q1 − zQ2 with respect to z at z0, it follows that 0 ∈ ∂(Q1 − z0Q2). Now suppose conversely that 0 ∈ ∂(Q1 − z0Q2). Then for every ε > 0 the disc of radius ε centered at the origin contains points q such that q ∉ Q1 − z0Q2. By continuity of the mapping z −→ Q1 − zQ2 with respect to z, the inverse image z of the point q is close to z0. But this point z ∉ Q1/Q2 since 0 ∉ Q1 − zQ2. However, z can be chosen to be arbitrarily close to z0. Since z0 ∈ Q1/Q2, it follows that z0 ∈ ∂(Q1/Q2). Now,

z ∈ ∂(Q1/Q2) ⇐⇒ 0 ∈ ∂(Q1 − zQ2) ⇐⇒ 0 ∈ (V1 − zE2) ∪ (E1 − zV2) ⇐⇒ z ∈ (V1/E2) ∪ (E1/V2).  (12.10)
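Lemma 12.1(a) translates directly into a boundary-sampling procedure: enumerate the vertex-edge sums (V1 + E2) ∪ (E1 + V2). A minimal sketch, with two arbitrarily chosen rectangles standing in for Q1 and Q2 (the function and data are ours, not the book's):

```python
def vertex_edge_sums(V1, V2, steps=50):
    """Candidate boundary points of Q1 + Q2 per Lemma 12.1(a):
    (V1 + E2) union (E1 + V2). Polygons are given as ordered lists of
    complex vertices; each edge is sampled at steps+1 points."""
    def edges(V):
        return [(V[i], V[(i + 1) % len(V)]) for i in range(len(V))]
    pts = []
    for V, W in ((V1, V2), (V2, V1)):   # V contributes vertices, W edges
        for v in V:
            for a, b in edges(W):
                pts += [v + a + (b - a) * t / steps for t in range(steps + 1)]
    return pts

# axis-parallel rectangles [0,1]x[0,1] and [0,2]x[0,1] as complex sets;
# their sum is the rectangle [0,3]x[0,2], whose corners must appear
R1 = [0, 1, 1 + 1j, 1j]
R2 = [0, 2, 2 + 1j, 1j]
pts = vertex_edge_sums(R1, R2)
```

The extreme candidate points recover the corners of the sum rectangle, as Lemma 12.1(a) predicts.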
This lemma shows us the interesting fact that the boundaries of sums and quotients of two polygons can be determined by sums and quotients of the corresponding vertex-edge pairs. Figure 12.2 illustrates that the boundary of the sum of two four-sided polygons is obtained by generating the sum of all vertex-edge pairs. Similarly, Figure 12.3 shows the sum of two general polygons. It can be shown that the inverse image of a line segment which excludes the origin is an arc of a circle passing through the origin. This is shown in Figure 12.4. Therefore the inverse image of a polygon is bounded by such arcs, as shown in Figure 12.5. To determine ∆(jω) and G(jω) we note that the vertices of N(jω) and D(jω) correspond to the Kharitonov polynomials whereas the edges correspond to the Kharitonov segments. The set of points KN(jω) are therefore
Figure 12.2 Illustration of sum of two polygons.
Figure 12.3 Sum of two polygons.
Figure 12.4 Line and inverse of line.
Figure 12.5 Polygon and the inverse of polygon.
the vertices of N(jω), and the four lines SN(jω) are the edges of N(jω). F1(jω)N(jω) is also a polygon with vertices F1(jω)KN(jω) and edges F1(jω)SN(jω). Similarly, F2(jω)KD(jω) and F2(jω)SD(jω) are the vertices and edges of the polygon F2(jω)D(jω). The jω image of the extremal systems GE(s) defined earlier exactly coincides with these vertex-edge pairs. Let

(N(s) × D(s))E := (KN(s) × SD(s)) ∪ (SN(s) × KD(s)).  (12.11)

Recall that the extremal systems are

GE(s) := { N(s)/D(s) : (N(s), D(s)) ∈ (N(s) × D(s))E } = KN(s)/SD(s) ∪ SN(s)/KD(s)  (12.12)

and define

∆E(s) := {F1(s)N(s) + F2(s)D(s) : (N(s), D(s)) ∈ (N(s) × D(s))E}.  (12.13)

We can now state an important result regarding the boundary of image sets.
THEOREM 12.2 (Boundary Generating Property)

a) ∂∆(jω) ⊂ ∆E(jω)

b) ∂G(jω) ⊂ GE(jω)

PROOF The proof of this theorem follows immediately from Lemma 12.1 and the observation regarding the vertices and edges of N(jω) and D(jω).

a) From Lemma 12.1,

∂∆(jω) ⊂ (F1(jω)KN(jω) + F2(jω)SD(jω)) ∪ (F1(jω)SN(jω) + F2(jω)KD(jω)) = ∆E(jω)

(see (12.11) and (12.13)).

b) ∂G(jω) ⊂ (KN(jω)/SD(jω)) ∪ (SN(jω)/KD(jω)) = GE(jω) (see (12.12)).
Example 12.1
Consider the problem of determining the frequency template of the interval plant

G(s) = n(s)/d(s) = (b1s + b0)/(a2s2 + a1s + a0)

where the parameters vary as follows:

a0 ∈ [1, 2], a1 ∈ [2, 3], a2 ∈ [2, 3], b0 ∈ [1, 2], b1 ∈ [2, 3].

The Kharitonov polynomials of d(s) and n(s) are:

Kd1(s) = 3s2 + 2s + 1
Kd2(s) = 3s2 + 3s + 1
Kd3(s) = 2s2 + 2s + 2
Kd4(s) = 2s2 + 3s + 2

and

Kn1(s) = 2s + 1
Kn2(s) = 3s + 1
Kn3(s) = 2s + 2
Kn4(s) = 3s + 2.
Thus, the boundary of the entire frequency domain template is obtained by the frequency evaluation of the following 32 systems:

Kni(s) / (λKdj(s) + (1 − λ)Kdk(s))   and   (λKnj(s) + (1 − λ)Knk(s)) / Kdi(s),

for i = 1, 2, 3, 4; (j, k) ∈ {(1, 2), (1, 3), (2, 4), (3, 4)}; λ ∈ [0, 1]. Figure 12.6 shows the template G(jω) at ω = 1.
Figure 12.6 Frequency domain template G(jω) (Example 12.1).
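The 32 extremal systems above are easy to sweep numerically. The following pure-Python sketch (our own code, with the Kharitonov polynomials of the example and the segment pairs sharing a common even or odd part) generates the template boundary points at a chosen frequency:

```python
def polyval(c, s):
    """Evaluate a polynomial with ascending coefficients c at s."""
    return sum(ck * s**k for k, ck in enumerate(c))

# Kharitonov polynomials of d(s) and n(s) from Example 12.1
Kd = [[1, 2, 3], [1, 3, 3], [2, 2, 2], [2, 3, 2]]
Kn = [[1, 2], [1, 3], [2, 2], [2, 3]]
SEG = [(0, 1), (0, 2), (1, 3), (2, 3)]   # segment pairs (1,2),(1,3),(2,4),(3,4)

def template_boundary(w=1.0, steps=100):
    """Sample the 32 one-parameter extremal systems at s = jw; by
    Theorem 12.2 these points cover the boundary of G(jw)."""
    s = 1j * w
    pts = []
    for i in range(4):
        for j, k in SEG:
            for t in range(steps + 1):
                lam = t / steps
                dseg = lam * polyval(Kd[j], s) + (1 - lam) * polyval(Kd[k], s)
                nseg = lam * polyval(Kn[j], s) + (1 - lam) * polyval(Kn[k], s)
                pts.append(polyval(Kn[i], s) / dseg)   # Kn_i over segment of Kd
                pts.append(nseg / polyval(Kd[i], s))   # segment of Kn over Kd_i
    return pts

pts = template_boundary()
```

Note that every Kharitonov vertex system Kni/Kdj appears among the sampled points, since each segment endpoint (λ = 0 or 1) reduces to a Kharitonov polynomial.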
12.3.0.1 Closed Loop Transfer Functions
Referring now to the control system in Figure 12.1, we consider the following transfer functions of interest in analysis and design problems:

y(s)/u(s) = G(s),   u(s)/e(s) = F(s)  (12.14)

T^o(s) := y(s)/e(s) = F(s)G(s)  (12.15)

T^e(s) := e(s)/r(s) = 1/(1 + F(s)G(s))  (12.16)

T^u(s) := u(s)/r(s) = F(s)/(1 + F(s)G(s))  (12.17)

T^y(s) := y(s)/r(s) = F(s)G(s)/(1 + F(s)G(s)).  (12.18)

As G(s) ranges over the uncertainty set G(s) the transfer functions T^o(s), T^y(s), T^u(s), T^e(s) range over corresponding uncertainty sets To(s), Ty(s), Tu(s), and Te(s), respectively. In other words,

To(s) := {F(s)G(s) : G(s) ∈ G(s)}  (12.19)

Te(s) := {1/(1 + F(s)G(s)) : G(s) ∈ G(s)}  (12.20)

Tu(s) := {F(s)/(1 + F(s)G(s)) : G(s) ∈ G(s)}  (12.21)

Ty(s) := {F(s)G(s)/(1 + F(s)G(s)) : G(s) ∈ G(s)}.  (12.22)

We will now show that the boundary generating property of the extremal subsets shown in Theorem 12.2 carries over to each of the system transfer functions listed above. In fact, we will show that the boundary of the image set at s = jω, the Nyquist plot and Bode plot boundaries of each of the above sets are all generated by the subset GE(s). Introduce the subsets of (12.19)-(12.22) obtained by replacing G(s) by GE(s):

ToE(s) := {F(s)G(s) : G(s) ∈ GE(s)}

TeE(s) := {1/(1 + F(s)G(s)) : G(s) ∈ GE(s)}

TuE(s) := {F(s)/(1 + F(s)G(s)) : G(s) ∈ GE(s)}

TyE(s) := {F(s)G(s)/(1 + F(s)G(s)) : G(s) ∈ GE(s)}.

The main result can now be stated.

THEOREM 12.3
For every ω ≥ 0,

(a) ∂To(jω) ⊂ ToE(jω)  (12.23)
(b) ∂Te(jω) ⊂ TeE(jω)  (12.24)
(c) ∂Tu(jω) ⊂ TuE(jω)  (12.25)
(d) ∂Ty(jω) ⊂ TyE(jω)  (12.26)
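A practical consequence of Theorem 12.3 is that worst-case closed-loop magnitudes over the whole family can be found by sweeping only the extremal plants. A hypothetical sketch, assuming F(s) = 1 and the interval plant of Example 12.1 (both choices are ours, made purely for illustration):

```python
def polyval(c, s):
    """Evaluate a polynomial with ascending coefficients c at s."""
    return sum(ck * s**k for k, ck in enumerate(c))

# Kharitonov polynomials of d(s) and n(s) from Example 12.1
Kd = [[1, 2, 3], [1, 3, 3], [2, 2, 2], [2, 3, 2]]
Kn = [[1, 2], [1, 3], [2, 2], [2, 3]]
SEG = [(0, 1), (0, 2), (1, 3), (2, 3)]   # Kharitonov segment index pairs

def worst_sensitivity(w, steps=200):
    """max |1/(1 + F G)| at s = jw over the extremal plants GE, with F = 1."""
    s, best = 1j * w, 0.0
    for i in range(4):
        for j, k in SEG:
            for t in range(steps + 1):
                lam = t / steps
                d = lam * polyval(Kd[j], s) + (1 - lam) * polyval(Kd[k], s)
                n = lam * polyval(Kn[j], s) + (1 - lam) * polyval(Kn[k], s)
                for G in (polyval(Kn[i], s) / d, n / polyval(Kd[i], s)):
                    best = max(best, abs(1 / (1 + G)))
    return best

wc = worst_sensitivity(1.0)
```

Since the Kharitonov vertex systems are endpoints of the extremal sweep, their sensitivity values can never exceed the extremal maximum, which gives a simple sanity check.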
The proof will require the following technical lemma.

LEMMA 12.2
Let D be a closed set in the complex plane with 0 ∉ D. Then

∂(1/D) = 1/∂D.  (12.27)

PROOF Let 1/d0 ∈ ∂(1/D); then every open disc centered at 1/d0 contains points 1/d ∉ 1/D.

1/kL + Re[G(jω)] > 0,  for all ω ∈ IR.
We illustrate this with an example.

Example 12.13
Let us consider the following stable transfer function

G(s) = (3.1s + 3.2)/(s4 + 1.1s3 + 24.5s2 + 2.5s + 3.5).  (12.44)
Figure 12.42 shows that the Lur'e gain kL is obtained from the minimum real value of G(jω). From Figure 12.42 we obtain kL = 0.98396. Using the Lur'e gain, Figure 12.43 shows that 1/kL + G(s) is SPR.
Figure 12.42 G(jω) : Lur’e problem (Example 12.13).
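The Lur'e gain computation of Example 12.13 can be reproduced with a simple frequency sweep (a numerical sketch of ours; the grid and range are assumptions, not the book's method):

```python
def G(w):
    """G(jw) for the plant of Example 12.13, equation (12.44)."""
    s = 1j * w
    return (3.1 * s + 3.2) / (s**4 + 1.1 * s**3 + 24.5 * s**2 + 2.5 * s + 3.5)

# Lur'e gain: kL = -1 / min_w Re G(jw), the minimum being negative here
ws = [k * 0.001 for k in range(1, 20001)]      # w in (0, 20]
min_re = min(G(w).real for w in ws)
kL = -1.0 / min_re
```

The sweep recovers a value close to the book's kL = 0.98396, and by construction 1/kL + Re G(jω) is nonnegative on the grid.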
Figure 12.43 SPR property of 1/kL + G(s) (Example 12.13).

We now impose the further restriction that the nonlinearity φ is time-invariant and enunciate the Popov criterion.

THEOREM 12.14 (Popov Criterion)
If G(s) is a stable transfer function, and φ is a time-invariant nonlinearity which belongs to the sector [0, kP], then a sufficient condition for absolute stability is that there exist a real number q such that

1/kP + Re[(1 + jqω)G(jω)] > 0,  for all ω ∈ IR.  (12.45)
This theorem has a graphical interpretation which is illustrated in the next example.

Example 12.14
Consider the transfer function used in Example 12.13. To illustrate the Popov criterion, we need the Popov plot

G̃(jω) = Re[G(jω)] + jωIm[G(jω)].

As shown in Figure 12.44, the limiting value of the Popov gain kP is obtained by selecting a straight line in the Popov plane such that the Popov plot of G̃(jω) lies below this line. From Figure 12.44 we obtain kP = 2.5. We remark that the Lur'e gain corresponds to the case q = 0 in the Popov plot.

In addition to the Lur'e and Popov criteria, there is another useful result
Figure 12.44 Popov criterion (Example 12.14).
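The Popov condition (12.45) can also be scanned numerically over q. Since Re[(1 + jqω)G(jω)] = Re G(jω) − qω Im G(jω), and q = 0 recovers the Lur'e condition, the resulting gain can only improve on kL. A rough sketch of ours (the grids are assumptions; the book's kP = 2.5 comes from the graphical construction of Figure 12.44):

```python
def G(w):
    """G(jw) for the plant of Example 12.13, equation (12.44)."""
    s = 1j * w
    return (3.1 * s + 3.2) / (s**4 + 1.1 * s**3 + 24.5 * s**2 + 2.5 * s + 3.5)

ws = [k * 0.002 for k in range(1, 10001)]      # w in (0, 20]
gs = [G(w) for w in ws]

def popov_gain(q):
    """Largest kP with 1/kP + Re[(1+jqw)G(jw)] > 0 on the grid."""
    m = max(q * w * g.imag - g.real for w, g in zip(ws, gs))
    return 1.0 / m if m > 0 else float("inf")

kL = popov_gain(0.0)                           # q = 0 is the Lur'e gain
kP = max(popov_gain(0.1 * i) for i in range(21))   # search q in [0, 2]
```

Because q = 0 is included in the search, the computed Popov gain is never smaller than the Lur'e gain.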
in robust stability of nonlinear systems known as the Circle Criterion. It is assumed that the nonlinearity is time-invariant and lies in the sector [k1, k2]:

0 ≤ k1 ≤ φ(σ)/σ ≤ k2.  (12.46)

Introduce the complex plane circle C centered on the negative real axis and cutting it at the points −1/k1 and −1/k2.
614
ROBUST PARAMETRIC CONTROL
see that the smallest circle centered at −1 touches the G(jω) locus and cuts 1 1 the negative real axis at − = −0.3 and − = −1.7. This gives the k2 k1 absolute stability sector [0.59, 3.33].
1 0.5
−1/k1
−1/k2
0 −1 −0.5
Imag
−1 −1.5 G(jω) −2 −2.5 −3 −3.5 −4 −2
−1
0
1
2
3
Real
Figure 12.45 Circle criterion (Example 12.15).
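The geometric claim of Example 12.15 can be cross-checked by measuring the distance from the Nyquist locus to the circle center −1 (a sketch of ours; the grid is an assumption, and the book's radius 0.7 follows from the intercepts −0.3 and −1.7):

```python
def G(w):
    """G(jw) for the plant of Example 12.13, equation (12.44)."""
    s = 1j * w
    return (3.1 * s + 3.2) / (s**4 + 1.1 * s**3 + 24.5 * s**2 + 2.5 * s + 3.5)

ws = [k * 0.001 for k in range(1, 20001)]    # w in (0, 20]
d_min = min(abs(G(w) + 1) for w in ws)       # distance of locus from -1

# sector endpoints implied by the circle intercepts -1/k1 and -1/k2
k1, k2 = 1 / 1.7, 1 / 0.3                    # approximately [0.59, 3.33]
```

The computed minimum distance should sit near the circle radius 0.7, confirming that the locus only touches, and does not enter, the circle.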
In a later section we will consider the robust version of the absolute stability problem by letting the transfer function G(s) vary over a family G(s). In this case it will be necessary to determine the infimum value of the stability sectors as G(s) ranges over the family G(s). We will see that in each case these stability sectors can be found from the extremal systems GE (s) in a constructive manner. We develop some preliminary results on the SPR property that will aid in this calculation.
12.11 Characterization of the SPR Property
The importance of the SPR property in robustness can be seen from the fact that a unity feedback system containing a forward transfer function which is SPR has infinite gain margin and at least 90° phase margin. Our first result is a stability characterization of proper stable real transfer functions satisfying the SPR property. More precisely, let

G(s) = N(s)/D(s)  (12.47)
be a real proper transfer function with no poles in the closed right-half plane.

THEOREM 12.16
G(s) is SPR if and only if the following three conditions are satisfied:
a) Re[G(0)] > 0,
b) N(s) is Hurwitz stable,
c) D(s) + jαN(s) is Hurwitz stable for all α in IR.

PROOF Let us first assume that G(s) is SPR and let us show that conditions b) and c) are satisfied, since condition a) is clearly true in that case. Consider the family of polynomials:

P := {Pα(s) = D(s) + jαN(s) : α ∈ IR}.

Every polynomial in this family has the same degree as that of D(s). Since this family contains a stable element, namely P0(s) = D(s), it follows from the continuity of the roots of a polynomial (Boundary Crossing Theorem of Chapter 8) that P will contain an unstable polynomial if and only if it also contains an element with a root at jω for some ω ∈ IR. Assume that for some α0 and ω in IR we had

Pα0(jω) = D(jω) + jα0N(jω) = 0.

We write

D(jω) = De(ω) + jωDo(ω)  (12.48)

with similar notation for N(jω). Separating the real and imaginary parts of Pα0(jω), we deduce that

De(ω) − α0ωNo(ω) = 0 and ωDo(ω) + α0Ne(ω) = 0.
But this implies necessarily that

Ne(ω)De(ω) + ω2No(ω)Do(ω) = 0,

that is, Re[G(jω)] = 0, and this contradicts the fact that G(s) is SPR. Thus, c) is also true. Since c) is true it now implies that N(s) + jβD(s) is Hurwitz stable for all β ≠ 0. Therefore, letting β tend to 0 we see that N(s) is a limit of Hurwitz polynomials of bounded degree. Rouché's theorem immediately implies that the unstable roots of N(s), if any, can only be on the jω-axis. However, if N(s) has a root on the jω-axis then Re[G(jω)] = 0 at this root and again this contradicts the fact that G(s) is SPR.

To prove the converse, we use the fact that a) and b) hold and reason by contradiction. Since a) holds it follows by continuity that G(s) is not SPR if and only if for some ω ∈ IR, ω ≠ 0, we have Re[G(jω)] = 0, or equivalently

Ne(ω)De(ω) + ω2No(ω)Do(ω) = 0.  (12.49)

Now, assume that at this particular ω, N(s) satisfies Ne(ω) ≠ 0 and No(ω) ≠ 0. From (12.49) we then conclude that

−ωDo(ω)/Ne(ω) = De(ω)/(ωNo(ω)) = α0  (12.50)

and therefore

De(ω) − α0ωNo(ω) = 0, and ωDo(ω) + α0Ne(ω) = 0  (12.51)
so that [D + jα0N](jω) = 0, contradicting c). On the other hand, assume for example that Ne(ω) = 0. Since N(s) is stable, we deduce that No(ω) ≠ 0, and from (12.49) we also have Do(ω) = 0. Therefore, (12.50) is still true with

α0 = De(ω)/(ωNo(ω)),

and we reach the same contradiction with c).
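Condition a) of Theorem 12.16, and the SPR definition itself, are easy to probe numerically. A minimal sketch using an arbitrarily chosen SPR transfer function G(s) = (s + 1)/(s + 2) (this example is our assumption, not from the text):

```python
def re_G(w):
    """Re[G(jw)] for G(s) = (s+1)/(s+2); in closed form this equals
    (2 + w^2)/(4 + w^2), which is positive and minimized at w = 0."""
    s = 1j * w
    return ((s + 1) / (s + 2)).real

ws = [k * 0.01 for k in range(5001)]   # w in [0, 50]
m = min(re_G(w) for w in ws)           # sampled infimum of Re G(jw)
```

Here the sampled infimum is 0.5 (attained at ω = 0, where condition a) is checked), so the sampled SPR condition Re G(jω) > 0 holds throughout.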
12.11.1 SPR Conditions for Interval Systems
Now consider the following family G(s) of transfer functions

G(s) = A(s)/B(s),

where A(s) belongs to a family of real interval polynomials A(s), and B(s) belongs to a family of real interval polynomials B(s), defined as follows:

A(s) = {A(s) : A(s) = a0 + a1s + · · · + apsp, and ai ∈ [αi, βi], for all i = 0, · · · , p}
B(s) = {B(s) : B(s) = b0 + b1s + · · · + bnsn, and bj ∈ [γj, δj], for all j = 0, · · · , n}.

Let K_A^i(s), i = 1, 2, 3, 4 and K_B^i(s), i = 1, 2, 3, 4 denote the Kharitonov polynomials associated with A(s) and B(s), respectively. We call G(s) a family of interval plants, and the Kharitonov systems associated with G(s) are naturally defined to be the 16 plants of the following set:

GK(s) := { K_A^i(s)/K_B^j(s) : i, j ∈ {1, 2, 3, 4} }.
We assume that the interval family G(s) is stable. Let γ be any given real number. We want to find necessary and sufficient conditions under which it is true that for all G(s) in G(s):

Re[G(jω)] + γ > 0,  for all ω ∈ IR.  (12.52)
In other words we ask the question: under what conditions is G(s) + γ SPR for all G(s) in G(s)? The answer to this question is given in the following lemma.

LEMMA 12.6
Equation (12.52) is satisfied by every element in G(s) if and only if it is satisfied for the 16 Kharitonov systems in GK(s).

PROOF For an arbitrary A(s) in A(s) and an arbitrary B(s) in B(s) we can write:

Re[A(jω)/B(jω)] + γ > 0
⇐⇒ (Ae(ω) + γBe(ω))Be(ω) + ω2(Ao(ω) + γBo(ω))Bo(ω) > 0.  (12.53)
The expression on the left of this last inequality is linear in Ae(ω) and Ao(ω), and thus from the facts (see Chapter 8)

K_A^{e,min}(ω) := K_A^{even,min}(jω) ≤ Ae(ω) ≤ K_A^{e,max}(ω) := K_A^{even,max}(jω)
K_A^{o,min}(ω) := K_A^{odd,min}(jω)/(jω) ≤ Ao(ω) ≤ K_A^{o,max}(ω) := K_A^{odd,max}(jω)/(jω),

it is clear that it is enough to check (12.53) when A(s) is fixed and equal to one of the 4 Kharitonov polynomials associated with A(s). To further explain this point, let A(s) and B(s) be arbitrary polynomials in A(s) and B(s) respectively, and suppose that at a given ω we have Be(ω) > 0 and Bo(ω) > 0. Then the expression in (12.53) is obviously bounded below by

(K_A^{e,min}(ω) + γBe(ω))Be(ω) + ω2(K_A^{o,min}(ω) + γBo(ω))Bo(ω),

which corresponds to A(s) = K_A^1(s). Now, since A(s) is a fixed polynomial, we deduce from Theorem 12.16 that the following is true:

Re[A(jω)/B(jω)] + γ > 0, for all ω ∈ IR, and for all B(s) ∈ B(s),

if and only if the following three conditions are satisfied:
1) Re[A(0)/B(0)] + γ > 0, for all B(s) ∈ B(s),
2) A(s) + γB(s) is Hurwitz stable for all B(s) ∈ B(s),
3) B(s) + (jα/(1 + jαγ))A(s) is Hurwitz stable for all α ∈ IR and all B(s) ∈ B(s).
Note that in condition 3) above we have used the fact that

Re[A(jω)/B(jω)] + γ > 0 ⇐⇒ Re[(A(jω) + γB(jω))/B(jω)] > 0,

and therefore condition c) of Theorem 12.16 can be written as B(s) + jα(A(s) + γB(s)) stable for all α ∈ IR, which is, of course, equivalent to

B(s) + (jα/(1 + jαγ))A(s) is Hurwitz stable for all α ∈ IR.
The family of polynomials defined by condition 2) is a real interval family so that by using Kharitonov’s theorem for real polynomials, we deduce that condition 2) is equivalent to:
2′) A(s) + γK_B^1(s), A(s) + γK_B^2(s), A(s) + γK_B^3(s), A(s) + γK_B^4(s) are Hurwitz stable.

The polynomials defined in 3) form a complex interval family for every α, and thus Kharitonov's theorem for complex polynomials applies and 3) is equivalent to:

3′) K_B^1(s) + (jα/(1 + jαγ))A(s), K_B^2(s) + (jα/(1 + jαγ))A(s), K_B^3(s) + (jα/(1 + jαγ))A(s), K_B^4(s) + (jα/(1 + jαγ))A(s)

stable for all α ∈ IR. Also 1) is equivalent to

1′) Re[A(0)/K_B^1(0)] + γ > 0 and Re[A(0)/K_B^3(0)] + γ > 0.
Thus, by using Theorem 12.16 in the other direction, we conclude that when A(s) is fixed,

Re[A(jω)/B(jω)] + γ > 0, for all ω ∈ IR, and for all B(s) ∈ B(s)

if and only if

Re[A(jω)/K_B^k(jω)] + γ > 0, for all ω ∈ IR, and for all k ∈ {1, 2, 3, 4},

and this concludes the proof of the lemma.

As a consequence of Lemma 12.6 we have the following result.

THEOREM 12.17
Given a proper stable family G(s) of interval plants, the minimum of Re[G(jω)] over all ω and over all G(s) in G(s) is achieved at one of the 16 Kharitonov systems in GK(s).

PROOF First, since G(s) is proper it is clear that this overall minimum is finite. Assume for the sake of argument that the minimum of Re[G(jω)] over
all ω and over the 16 Kharitonov systems is γ0, but that some plant G*(s) in G(s) satisfies

inf_{ω∈IR} Re[G*(jω)] = γ1 < γ0.  (12.54)
Take any γ satisfying γ1 < γ < γ0. By assumption we have that

inf_{ω∈IR} Re[G(jω)] − γ > 0,  (12.55)
whenever G(s) is one of the 16 Kharitonov systems. By Lemma 12.6 this implies that (12.55) is true for all G(s) in G(s), and this obviously contradicts (12.54). We now look more carefully at the situation where one only needs to check that every plant G(s) in G(s) has the SPR property. In other words, we are interested in the special case in which γ = 0. A line of reasoning similar to that of Theorem 12.17 would show that here again it is enough to check the 16 Kharitonov systems. However, a more careful analysis shows that it is enough to check only 8 systems and we have the following result. THEOREM 12.18 Every plant G(s) in G(s) is SPR if and only if it is the case for the 8 following plants: G1 (s) =
2 KA (s) K 3 (s) K 1 (s) K 4 (s) , G2 (s) = A , G3 (s) = A , G4 (s) = A 1 1 2 2 (s) , KB (s) KB (s) KB (s) KB
G5 (s) =
1 KA (s) K 4 (s) K 2 (s) K 3 (s) , G6 (s) = A , G7 (s) = A , G8 (s) = A 3 3 4 4 (s) . KB (s) KB (s) KB (s) KB
PROOF Using Definition 12.1 and Theorem 12.16, it is easy to see that every transfer function A(s) G(s) = B(s) in the family is SPR if and only if the following three conditions are satisfied: 1) A(0)B(0) > 0 for all A(s) ∈ A(s) and all B(s) ∈ B(s), 2) A(s) is Hurwitz stable for all A(s) ∈ A(s), 3) B(s) + jαA(s) is stable for all A(s) ∈ A(s), all B(s) ∈ B(s), and all α ∈ IR. By Kharitonov’s theorem for real polynomials, it is clear that condition 2) is equivalent to: 1 2 3 4 2′ ) KA (s), KA (s), KA (s), KA (s) are Hurwitz stable.
621
ROBUST CONTROL DESIGN
Now, the simplification over Theorem 12.17 stems from the fact that here in condition 3), even if A(s) is not a fixed polynomial, we still have to deal with a complex interval family since γ = 0. Hence, using Kharitonov’s theorem for complex polynomials (see Chapter 8), we conclude that 3) is satisfied if and only if: 3′ ) 1 2 KB (s) + jαKA (s), 2 1 KB (s) + jαKA (s),
1 3 KB (s) + jαKA (s), 2 4 KB (s) + jαKA (s),
1 3 (s) + jαKA (s), KB 2 4 KB (s) + jαKA (s),
3 4 (s), KB (s) + jαKA 4 3 KB (s) + jαKA (s)
are Hurwitz stable for all α in IR. Note that you only have to check these eight polynomials whether α is positive or negative. As for condition 1) it is clear that it is equivalent to: 2 1 3 1 1 3 4 3 1′ ) KA (0)KB (0) > 0, KA (0)KB (0) > 0, KA (0)KB (0) > 0, KA (0)KB (0) > 0.
Once again using Theorem 12.16 in the other direction we can see that conditions 1′ ), 2′ ) and 3′ ) are precisely equivalent to the fact that the eight transfer functions specified in Theorem 12.18 satisfy the SPR property. As a final remark on the SPR property we see that when the entire family is SPR as in Theorem 12.18, there are two cases. On one hand, if the family is strictly proper then the overall minimum is 0. On the other hand, when the family is proper but not strictly proper, then the overall minimum is achieved at one of the 16 Kharitonov systems even though one only has to check eight plants to verify the SPR property for the entire family. In fact, the minimum need not be achieved at one of these eight plants as the following example shows. Example 12.16 Consider the following stable family G(s) of interval systems whose generic element is given by 1 + αs + βs2 + s3 G(s) = γ + δs + ǫs2 + s3 where α ∈ [1, 2], β ∈ [3, 4], γ ∈ [1, 2], δ ∈ [5, 6], ǫ ∈ [3, 4]. GK (s) consists of the following 16 rational functions, r1 (s) =
1 + s + 3s2 + s3 , 1 + 5s + 4s2 + s3
r2 (s) =
1 + s + 3s2 + s3 , 1 + 6s + 4s2 + s3
622
ROBUST PARAMETRIC CONTROL r3 (s) = r5 (s) = r7 (s) = r9 (s) = r11 (s) = r13 (s) = r15 (s) =
1 + s + 3s2 + s3 , 2 + 5s + 3s2 + s3 1 + s + 4s2 + s3 , 1 + 5s + 4s2 + s3 1 + s + 4s2 + s3 , 2 + 5s + 3s2 + s3 1 + 2s + 3s2 + s3 , 1 + 5s + 4s2 + s3 1 + 2s + 3s2 + s3 , 2 + 5s + 3s2 + s3 1 + 2s + 4s2 + s3 , 1 + 5s + 4s2 + s3 1 + 2s + 4s2 + s3 , 2 + 5s + 3s2 + s3
1 + s + 3s2 + s3 , 2 + 6s + 3s2 + s3 1 + s + 4s2 + s3 r6 (s) = , 1 + 6s + 4s2 + s3 1 + s + 4s2 + s3 r8 (s) = , 2 + 6s + 3s2 + s3 1 + 2s + 3s2 + s3 r10 (s) = , 1 + 6s + 4s2 + s3 1 + 2s + 3s2 + s3 r12 (s) = , 2 + 6s + 3s2 + s3 1 + 2s + 4s2 + s3 r14 (s) = , 1 + 6s + 4s2 + s3 1 + 2s + 4s2 + s3 r16 (s) = . 2 + 6s + 3s2 + s3 r4 (s) =
The corresponding minima of their respective real parts along the imaginary axis are given by, inf Re [r1 (jω)] = 0.1385416,
ω∈IR
inf Re [r3 (jω)] = 0.0764526,
ω∈IR
inf Re [r5 (jω)] = 0.1540306,
ω∈IR
inf Re [r7 (jω)] = 0.0602399,
ω∈IR
inf Re [r9 (jω)] = 0.3467740,
ω∈IR
inf Re [r11 (jω)] = 0.3011472,
ω∈IR
inf Re [r13 (jω)] = 0.3655230,
ω∈IR
inf Re [r15 (jω)] = 0.2706398,
ω∈IR
inf Re [r2 (jω)] = 0.1134093,
ω∈IR
inf Re [r4 (jω)] = 0.0621581,
ω∈IR
inf Re [r6 (jω)] = 0.1262789,
ω∈IR
inf Re [r8 (jω)] = 0.0563546,
ω∈IR
inf Re [r10 (jω)] = 0.2862616,
ω∈IR
inf Re [r12 (jω)] = 0.2495148,
ω∈IR
inf Re [r14 (jω)] = 0.3010231,
ω∈IR
inf Re [r16 (jω)] = 0.2345989 .
ω∈IR
Therefore, the entire family is SPR and the minimum is achieved at r8 (s). However, r8 (s) corresponds to 1 KA (s) 4 KB (s) which is not among the eight rational functions of Theorem 12.18. 12.11.1.1
Complex Rational Functions
It is possible to extend the above results to the case of complex rational functions. In the following we give the corresponding results and sketch the small differences in the proofs. The SPR property for a complex rational function is again given by Definition 12.1. Thus, a proper complex rational
623
ROBUST CONTROL DESIGN function G(s) =
N (s) D(s)
is SPR if 1) G(s) has no poles in the closed right-half plane, 2) Re [G(jω)] > 0 for all ω ∈ IR, or equivalently, Re [N (jω)] Re [D(jω)] + Im [N (jω)] Im [D(jω)] > 0, for all ω ∈ IR As with real rational functions, the characterization given by Theorem 12.16 is true and we state this below. THEOREM 12.19 The complex rational function G(s) is SPR if and only if the conditions a),b), and c) of Theorem 12.16 hold. PROOF The proof is similar to that for Theorem 12.16. However, there is a slight difference in proving that the SPR property implies part c) which is, D(s) + jαN (s) is Hurwitz for all α ∈ IR. (12.56) To do so in the real case, we consider the family of polynomials: P := {Pα (s) = D(s) + jαN (s) : α ∈ IR} and we start by arguing that this family of polynomials has constant degree. This may not be true in the complex case when the rational function is proper but not strictly proper. To prove that (12.56) is nevertheless correct we first observe that the same proof carries over in the strictly proper case. Let us suppose now that N (s) and D(s) have the same degree p and their leading coefficients are np = nrp + jnip , dp = drp + jdip . Then it is easy to see that the family P does not have constant degree if and only if drp nrp + dip nip = 0. (12.57) Thus, if G(s) is SPR and (12.57) is not satisfied then again the same proof works and (12.56) is true. Now, let us assume that G(s) is SPR, proper but not strictly proper, and that N ′ (s) , where N ′ (s) = N (s) + γD(s). Gγ (s) = G(s) + γ = D(s)
624
ROBUST PARAMETRIC CONTROL
It is clear that Gγ (s) is still SPR, and it can be checked that it is always proper and not strictly proper. Moreover, (12.57) cannot hold for N ′ (s) and D(s) since in that case, r
i
drp n′ p + dip n′ p = drp (nrp + γdrp ) + dip (nip + γdip ) = γ((drp )2 + (dip )2 ) > 0. Thus, we conclude that for all α ∈ IR, D(s) + jα(N (s) + γD(s)) is Hurwitz stable.
(12.58)
Now letting γ go to 0, we see that D(s) + jαN (s) is a limit of Hurwitz polynomials of bounded degree and therefore Rouch´e’s theorem implies that the unstable roots of D(s) + jαN (s), if any, can only be on the jω−axis. However, since G(s) is SPR this cannot happen since, Re [D(jω)] − αIm [N (jω)] = 0 d(jω) + jαn(jω) = 0 =⇒ Im [D(jω)] + αRe [N (jω)] = 0, and these two equations in turn imply that Re [N (jω)] Re [D(jω)] + Im [N (jω)] Im [D(jω)] = 0, a contradiction. Now consider a family G(s) of proper complex interval rational functions G(s) =
A(s) B(s)
where A(s) belongs to a family of complex interval polynomials A(s), and B(s) belongs to a family of complex interval polynomials B(s). The Kharitonov polynomials for such a family are 8 extreme polynomials. We refer the reader to Chapter 8 for the definition of these polynomials. The Kharitonov systems associated with G(s) are the 64 rational functions in the set ( ) i KA (s) GK (s) = : i, j ∈ {1, 2, 3, 4, 5, 6, 7, 8} . j KB (s) Similar to the real case we have the following theorem. THEOREM 12.20 Given a proper stable family G(s) of complex interval rational functions, the minimum of Re [G(jω)] over all ω and over all G(s) in G(s) is achieved at one of the 64 Kharitonov systems. The proof is identical to that for the real case and is omitted.
ROBUST CONTROL DESIGN
625
One may also consider the problem of only checking that the entire family is SPR, and here again a stronger result holds in that case. THEOREM 12.21 Every rational function G(s) in G(s) is SPR if and only if it is the case for the 16 following rational functions: G1 (s) =
2 K 3 (s) K 1 (s) K 4 (s) KA (s) , G2 (s) = A , G3 (s) = A , G4 (s) = A 1 1 2 2 (s) , KB (s) KB (s) KB (s) KB
G5 (s) =
1 KA (s) K 4 (s) K 2 (s) K 3 (s) , G6 (s) = A , G7 (s) = A , G8 (s) = A 3 3 4 4 (s) . KB (s) KB (s) KB (s) KB
G9 (s) = G13 (s) =
6 K 7 (s) K 5 (s) K 8 (s) KA (s) , G10 (s) = A , G11 (s) = A , G12 (s) = A 5 5 6 6 (s) , KB (s) KB (s) KB (s) KB 5 K 8 (s) K 6 (s) K 7 (s) KA (s) , G14 (s) = A , G15 (s) = A , G16 (s) = A 7 7 8 8 (s) . KB (s) KB (s) KB (s) KB
The proof is the same as for Theorem 12.18 and is omitted.
12.12
The Robust Absolute Stability Problem
We now extend the classical absolute stability problem by allowing the linear system G(s) to lie in a family of systems G(s) containing parametric uncertainty. Thus, we are dealing with a robustness problem where parametric uncertainty as well as sector bounded nonlinear feedback gains are simultaneously present. For a given class of nonlinearities lying in a prescribed sector the closed-loop system will be said to be robustly absolutely stable if it is absolutely stable for every G(s) ∈ G(s). In this section we will give a constructive procedure to calculate the size of the stability sector using the Lur’e, Popov, or Circle Criterion when G(s) is an interval system or a linear interval system. In each case we shall see that an appropriate sector can be determined by replacing the family G(s) by the extremal set GE (s). Specifically we deal with the Lur’e problem. However, it will be obvious from the boundary generating properties of the set GE (jω) that identical results will hold for the Popov sector and the Circle Criterion with time-invariant nonlinearities. First consider the Robust Lur’e problem where the forward loop element G(s) shown in Figure 12.46 lies in an interval family G(s) and the feedback loop contains as before a time-varying sector bounded nonlinearity φ lying in the sector [0, k]. As usual, let GK (s) denote the transfer functions of the Kharitonov systems associated with the family G(s).
626
ROBUST PARAMETRIC CONTROL
−
-
G(s)
6
φ
Figure 12.46 Robust absolute stability problem.
THEOREM 12.22 (Absolute Stability for Interval Systems) The feedback system in Figure 12.46 is absolutely stable for every G(s) in the the interval family G(s) of stable proper systems, if the time-varying nonlinearity φ belongs to the sector [0, k] where k = ∞, if inf inf Re [G(jω)] ≥ 0, GK ω∈IR
otherwise k 0 G∈GK ω k By Theorem 12.13, the absolute stability of the closed-loop system follows. We can extend this absolute stability result to feedback systems. Consider a feedback system in which a fixed controller C(s) stabilizes each plant G(s) belonging to a family of linear interval systems G(s). Let GE (s) denote the extremal set for this family G(s). Now suppose that the closed-loop system is subject to nonlinear sector bounded feedback perturbations. Refer to Figure 12.47. Our task is to determine the size of the sector for which absolute stability is preserved for each G(s) ∈ G(s). The solution to this problem is given below.
627
ROBUST CONTROL DESIGN
φ
− ?
−6
-
C(s)
-
G(s)
-
Figure 12.47 Closed-loop system with nonlinear feedback perturbations.
THEOREM 12.23 (Absolute Stability of Interval Control Systems) Given the feedback system in Figure 12.47 where G(s) belongs to a linear interval family G(s), the corresponding nonlinear system is absolutely stable if the nonlinearity φ belongs to the sector [0, k], where k > 0 must satisfy: h i −1 k = ∞, if inf inf Re C(jω)G(jω) (1 + C(jω)G(jω)) ≥ 0, GE ω∈IR
otherwise, k 0). The matrix M is symmetric positive semidefinite, M ≥ 0. Let V (x(t), t) denote the value function. Then (13.9), (13.10), (13.11), and (13.12) imply that ∂V (x(t), t) − ∂t ∂V T T T (A(t)x(t) + B(t)u(t)) (13.22) = inf x (t)Q(t)x(t) + u (t)R(t)u(t) + ∂x u(t) The optimal control u∗ (t) must satisfy (13.13) and therefore 2R(t)u∗ (t) + B T (t) so that
∂V =0 ∂x
1 ∂V u∗ (t) = − R−1 (t)B T (t) . 2 ∂x
(13.24)
(13.25)
646
OPTIMAL AND ROBUST CONTROL
Substituting (13.25) back into (13.22), we obtain: ∂V T 1 ∂V T ∂V ∂V = xT (t)Q(t)x(t)+ A(t)x(t)− B(t)R−1 (t)B T (t) (13.26) ∂t ∂x 4 ∂x ∂x as the partial differential equation to be satisfied by V (x, t) subject to the boundary condition V (x(T ), T ) = xT (T )M x(T ). (13.27) −
To solve (13.26) and (13.27), we make the reasonable guess that a V function that is quadratic in x might work, and propose V (x(t), t) = xT (t)P (t)x(t)
(13.28)
where P (t) is a symmetric matrix, as a candidate solution. Then (13.26) becomes −xT (t)P˙ (t)x(t) = xT (t)Q(t)x(t) + 2xT (t)P (t)A(t)x(t) −xT (t)P (t)B(t)R−1 (t)B T (t)P (t)x(t) (13.29) and (13.27) becomes
xT (T )P (T )x(T ) = xT (T )M x(T ).
(13.30)
Since 2P (t)A(t) = P (t)A(t) + AT (t)P (t) + P (t)A(t) − AT (t)P (t) | {z } | {z } symmetric
and
(13.31)
antisymmetric
xT (t)S(t)x(t) = −xT (t)S(t)x(t)
(13.32)
for S(t) antisymmetric, we can rewrite (13.29) as −xT (t)P˙ (t)x(t) = xT (t) Q(t) + P (t)A(t) + AT (t)P (t)
−P (t)B(t)R−1 (t)B T (t)P (t) x(t). (13.33)
It is now clear that we have obtained a solution of (13.22) and (13.23), the sufficient conditions for optimality, if P (t) can be chosen to satisfy:
−P˙ (t) = Q(t) + P (t)A(t) + AT (t)P (t) − P (t)B(t)R−1 (t)B T (t)P (t) (13.34) for t ∈ [t0 , T ] with P (T ) = M.
(13.35)
If a solution P (t) to (13.34) and (13.35) can be found, the optimal control is given, from (13.25), by u∗ (t) = − R−1 (t)B T (t)P (t) x(t) | {z }
(13.36)
K(t)
which represents a time-varying state feedback control law. To implement this control law, P (t) must be precomputed and stored starting from the boundary condition (13.35), and satisfying (13.34) for t ∈ [t0 , T ].
647
LINEAR QUADRATIC REGULATOR
13.2.1
Solution of the Matrix Ricatti Differential Equation
The solution of the nonlinear matrix differential equation in (13.34), known as the Ricatti differential equation, can be obtained by considering the associated linear matrix differential equation: ˙ X(t) A(t) −B(t)R−1 (t)B T (t) X(t) = (13.37) −Q(t) −AT (t) Y (t) Y˙ (t) X(T ) I = (13.38) Y (T ) M and setting P (t) = Y (t)X −1 (t).
(13.39)
The verification that (13.39) satisfies (13.34) is straightforward and is left to the reader. The solution of (13.37) can be represented in terms of its state transition matrix Φ(t, T ) appropriately partitioned I X(t)] Φ11 (t, T ) Φ12 (t, T ) (13.40) = M Y (t) Φ21 (t, T ) Φ22 (t, T ) {z } | Φ(t,T )
so that
−1
P (t) = [Φ21 (t, T ) + Φ22 (t, T )M ] [Φ11 (t, T ) + Φ12 (t, T )M ]
13.2.2
.
(13.41)
Cross Product Terms
A slightly more general LQR problem can be formulated by considering the performance index Z T Q S x I= [xT uT ] dt + θ(x(T ), T ) ST R u t0 Z T T T T = x Qx + 2x Su + u Ru dt + θ(x(T ), T ). t0
To reduce this to the standard problem, note that xT Qx + uT Ru + 2xT Su T = u + R−1 S T x R u + R−1 S T x + xT Q − SR−1 S T x.
Assume that
Q − SR−1 S T ≥ 0 and define u¯ := u + R−1 S T x.
648 Then
OPTIMAL AND ROBUST CONTROL x(t) ˙ = A − BR−1 S T x(t) + B u ¯ Z T I= xT (Q − SR−1 S T )x + u ¯T R¯ u dt + θ(x(T ), T ) t0
so that the optimal u¯(t) is u¯(t) = −R−1 B T P (t)x(t) = −K(t)x(t) T −P˙ = P A − BR−1 S T + A − BR−1 S T P + Q − SR−1 S T −P BR−1 B T P. Therefore, the optimal control u(t) is
u(t) = −R−1 (t) B T (t)P (t) + S T (t) x(t)
and the optimal performance is
xT (t0 )P (t0 )x(t0 ).
13.3
The Infinite Horizon LQR Problem
We first consider a general infinite horizon optimal control problem before specializing to the linear quadratic case.
13.3.1
General Conditions for Optimality
For the infinite horizon problem, the cost functional assumes the form Z ∞ I(x(0), u[0, ∞)) = ψ(x(t), u(t))dt. (13.42) 0
The system dynamics are described by x(t) ˙ = f (x(t), u(t)),
x(0) given
(13.43)
and we seek feedback controls of the form u(t) = µ(x(t)) which minimize the cost I in (13.42) which can be rewritten as Z ∞ I(x(0), µ) = ψ(x(t), µ(x(t)))dt. 0
(13.44)
(13.45)
649
LINEAR QUADRATIC REGULATOR
The closed-loop optimal system is also required to be asymptotically stable. This means that we require lim x(t) = 0,
t→∞
for all x(0).
(13.46)
Thus, we introduce the family Ω of admissible control laws µ(·) which are continuous functions of x(t), for which (13.43) has a unique and continuously differentiable solution x(t) and for which (13.46) holds. The optimal control problem is to find a control law µ∗ (x(t)) ∈ Ω that minimizes (13.42): Z ∞ µ∗ = Arg inf ψ(x(t), µ(x(t)))dt. (13.47) µ∈Ω
0
Let us now consider the value function approach for this problem and define: V (x(0)) := inf I(x(0), µ).
(13.48)
µ∈Ω
This suggests a search for the general value function V (x(t)) with the given problem being the evaluation of V at x = x(0). The following result provides the basis for this approach. THEOREM 13.2 If there exists a control law u = µ∗ (x(t)) and a continuously differentiable V (x) such that, 0 ≤ V (x) ≤ xT T x
for some T = T T > 0
for all x
(a) ∂V T f (x(t)µ∗ (x(t))) + ψ(x(t), µ∗ (x(t))) = 0, ∂x
for all x
(13.49)
(b) ∂V T f (x(t), u) + ψ(x(t), u(t)) ≥ 0 ∂x that is
for all x, u
(13.50)
∂V ∂V ∗ J x, , u ≥ J x, , µ (x) = 0. ∂x ∂x
then µ∗ (x(t)) is the optimal control minimizing (13.42). PROOF
Let u(t) = µ∗ (x(t)). Then dV ∂V T ∂V x(t, x(0), µ∗ ) = f (x, µ∗ ) + dt ∂x ∂t |{z} =0
∂V T = f (x, µ∗ ) ∂x = −ψ(x, µ∗ (x)) by (13.49).
(13.51)
650
OPTIMAL AND ROBUST CONTROL
Integrating from 0 to τ , V (x(τ )) − V (x(0)) = −
Z
τ
ψ(x, µ∗ (x))dt
(13.52)
0
Since V (x(τ )) ≤ xT (τ )T x(τ ) and x(τ ) → 0, it follows that lim V (x(τ )) = 0
(13.53)
V (x(0)) = I(x(0), µ∗ ).
(13.54)
τ →∞
and therefore Now let u(t) = µ(x(t)) be an arbitrary admissible control law and let x(t, x(0), µ) denote the corresponding solution of (13.43). Integrating (13.50) with u(t) = µ(x(t)), we obtain V (x(τ )) − V (x(0)) =
Z
τ
0
∂V T f (x, µ(x))dt ≥ − ∂x
or V (x(0)) ≤ V (x(τ )) +
Z
Z
τ
ψ(x(t), µ(x(t)))dt
0
(13.55)
τ
ψ(x(t), µ(x(t)))dt.
(13.56)
0
Letting τ → ∞ and using (13.53) and (13.54), we obtain I(x(0), µ∗ ) ≤ I(x(0), µ)
(13.57)
V (x(0)) = inf I(x(0), µ).
(13.58)
so that µ∈Ω
In the next subsection we apply these results to the infinite horizon LQR.
13.3.2
The Infinite Horizon LQR Problem
The infinite horizon LQR problem considers the linear time-invariant plant x(t) ˙ = Ax(t) + Bu(t) and the time-invariant quadratic cost functional Z ∞ T I(x(0), u[0, ∞)) = x (t)Qx(t) + uT (t)Ru(t) dt
(13.59)
(13.60)
0
with
Q = QT ≥ 0
and R = RT > 0.
(13.61)
651
LINEAR QUADRATIC REGULATOR
Following the approach of the previous theorem, we search for functions V (x) and µ∗ (x) satisfying (13.49) and (13.50), the sufficient conditions for optimality: ∂V T T (Ax + Bµ∗ (x)) + xT Qx + (µ∗ (x)) Rµ∗ (x) = 0 (13.62) ∂x and for arbitrary admissible µ(x(t)) ∂V T (Ax + Bµ) + xT Qx + µRµ ≥ 0 ∂x
(13.63)
It is easy to see that (13.62) and (13.63) are satisfied by V (x) = xT P x
(13.64)
µ∗ (x) = −R−1 B T P x
(13.65)
AT P + P A + Q − P BR−1 B T P = 0.
(13.66)
and provided The above equation is known as the Algebraic Ricatti Equation (ARE). In the next section we discuss its solution and thus the solution of the optimal LQR.
13.4
Solution of the Algebraic Riccati Equation
The main result of this section gives conditions for the existence of stabilizing solutions of the ARE T AT P + P A − P BR−1 B T P + C | {zC} = 0
(13.67)
Q
and the computation of these solutions.
THEOREM 13.3 If (A, B) is stabilizable and (C, A) is detectable then the Algebraic Riccati Equation (ARE) has a unique positive semidefinite solution P , the optimal control for the corresponding LQR problem is u∗ (t) = −R−1 B T P x(t) and (A − BR−1 B T P ) is stable. To prove this result, we develop some auxiliary machinery. First, the ARE in (13.67) can be written in the following three equivalent ways: A −BR−1 B T I [−P I] =0 (13.68) P −Q −AT
652
OPTIMAL AND ROBUST CONTROL (A − BK)T P + P (A − BK) = −Q − P BR−1 B T P
A −BR−1 B T −Q −AT | {z }
(13.69)
T
K=R B P I I = (A − BK) P P −1
(13.70)
H
The matrix H is called the associated Hamiltonian. Let σ(H) denote the spectrum or eigenvalue set of H. LEMMA 13.1 σ(H) is symmetric about the imaginary axis. PROOF
H is similar to −H T since with 0 −I J= , I 0
we have J −1 HJ = −H T . Therefore, if λ ∈ σ(H), −λ∗ is also in σ(H). Therefore, if H has no jω eigenvalues, it has n eigenvalues each in the open left-half plane and open right-half plane, respectively. Suppose H has no jω eigenvalues. Let the n dimensional stable eigenspace of H be denoted X1 V = Im X2 | {z } 2n×n
where X1 , X2 are in general complex. Since H is real, it is possible to choose a basis so that X1 , X2 are rendered real. LEMMA 13.2 Suppose that H (a) has no jω eigenvalues, and (b) has a stable eigenspace with X1 real and invertible and X2 real, then P = X2 X1−1 is a solution of the ARE and (A − BR−1 B T P ) is stable.
653
LINEAR QUADRATIC REGULATOR PROOF
We have, for some matrix H− , A −BR−1 B T X1 X1 = H− X2 X2 −Q −AT
where σ(H− ) is in the open left-half plane. Then I I A −BR−1 B T = X1 H− X1−1 −Q −AT X2 X1−1 X2 X1−1 and multiplying (13.72) on the left by −X2 X1−1 I , we get
so that
−X2 X1−1 I
A −BR−1 B T −Q −AT
I =0 X2 X1−1
(13.71)
(13.72)
(13.73)
P = X2 X1−1 solves the ARE from (13.68). Also A − BR−1 B T X2 X1−1 = X1 H− X1−1 | {z } P
so that
σ A − BR−1 B T P = σ(H− ).
Under an arbitrary change of basis X1 X1 T V = Im T = Im X2 X2 T
(13.74)
where T is invertible, and so the solution P is unchanged since: P = (X2 T ) (X1 T )−1 = X2 X1−1 . Thus, under the assumptions of Lemma 13.2, the solution P = X2 X1−1 is unique and is stabilizing, that is, σ(A − BR−1 B T P ) is in the open left-half plane. Let U ∗ denote the conjugate transpose of U . LEMMA 13.3 Under the assumptions of Lemma 13.2, except that X1 , X2 may now be complex, we have:
654
OPTIMAL AND ROBUST CONTROL
(a) X1∗ X2 is Hermitian, that is, (X1∗ X2 )∗ = X1∗ X2 (b) P = X2 X1−1 is Hermitian, that is P = P ∗ . PROOF
We know that
X1 H X2 Premultiply by [X1∗ [X1∗
X1 = H− . X2
X2∗ ]J to get X1 = [X1∗ X2∗ ] JH X2
X2∗ ] J
X1 H− . X2
(13.75)
Now
0 −I JH = I 0
A −BR−1 B T −Q −AT
Q AT = A −BR−1 B T
is symmetric. Therefore, both sides of (13.75) are Hermitian so that X2 ∗ (−X1∗ X2 + X2∗ X1 ) H− = H− [X1∗ X2∗ ] −X1 | {z } |{z}
(13.76)
A3
X3
∗ = −H− [−X1∗ X2 + X2∗ X1 ] .
(13.77)
Equation (13.77) is of the type X3 A3 + A∗3 X3 = 0
(13.78)
and since σ(A3 ) ⊂CI − , the unique solution is X3 = 0. Therefore, X1∗ X2 = X2∗ X1
(13.79)
P = X2 X1−1
(13.80)
and is Hermitian since P = X1−1 is Hermitian.
∗
(X1∗ X2 ) X1−1
(13.81)
LEMMA 13.4 If H has no jω eigenvalues, then X1 is invertible if and only if (A, B) is stabilizable. PROOF If X1 is invertible P = X2 X1−1 is a stabilizing solution so A − BR−1 B T P is stable implying that (A, B) must be stabilizable. Conversely, let X1 X1 H = H− ; X2 X2
655
LINEAR QUADRATIC REGULATOR
we now need to show that X1 must be invertible. Arguing by contradiction, suppose that there exists x 6= 0 such that X1 x = 0 or x∗ X1∗ = 0
or x ∈ KerX1 .
(13.82)
Since AX1 − BR−1 B T X2 = X1 H−
(13.83)
multiplying on the left by x∗ X2∗ and on the right by x, we get −x∗ X2∗ BR−1 B T X2 x = x∗ X2∗ X1 H− x = x∗ X1∗ X2 H− x = 0
(13.84)
using (13.79). Therefore, x∗ X2∗ BR−1 B T X2 x = 0
(13.85)
B T X2 x = 0.
(13.86)
X 1 H− x = 0
(13.87)
or From (13.83), we have proving that H− (KerX1 ) ⊂ KerX1 . If KerX1 6= 0, there exists x 6= 0 such that x ∈ KerX1 and H− x = λx
(13.88)
−QX1 − AT X2 = X2 H−
(13.89)
with λ ∈ CI − . Using postmultiplying by x above and using (13.88), we obtain AT + λI X2 x = 0
(13.90)
or
x∗ X2∗ (A + λ∗ I) = 0.
(13.91)
Combining (13.86) and (13.91) we obtain, x∗ X2∗ [A − (−λ∗ ) I
B] = 0.
(13.92)
Since −λ∗ ∈ RHP, it follows from the stabilizability of (A, B) that x∗ X2∗ = 0, and hence X1 x = 0. X2 X1 However has rank n and so x = 0. Thus, KerX1 = 0 that is X1 is X2 invertible.
656
OPTIMAL AND ROBUST CONTROL
LEMMA 13.5 H has no jω axis eigenvalues and X1 is nonsingular if and only if (A, B) is stabilizable and (C, A) has no jω axis unobservable eigenvalues. Also the solution P = X2 X1−1 is positive semidefinite. PROOF It has already been shown that if H has no jω axis eigenvalues, then X1 is nonsingular if and only if (A, B) is stabilizable. It remains to prove that H has no jω eigenvalues if and only if (C, A) has no jω unobservable eigenvalues. Suppose, by contradiction, that jω0 is an eigenvalue of H, with eigenvector x 6= 0. z Then
A −BR−1 B T T −C C −AT
x x = jω0 z z
(13.93)
and (A − jω0 I) x = BR−1 B T z T
− (A + jω0 I) z = C T Cx. Then z ∗ (A − jω0 I) x = z ∗ BR−1 B T z ≥ 0 (real) x∗ (A + jω0 I)T z = −x∗ C T Cx ≤ 0 (real). Therefore, since the left-hand sides are equal, we must have BT z = 0 Cx = 0 and so z T [A − (−jω0 )I B] = 0 A − jω0 I x = 0. C
(13.94) (13.95)
Since (A, B) is stabilizable, z = 0 and since (C, A) has no jω unobservable modes x = 0. Thus, jω0 cannot be an eigenvalue of H. To prove that P = X2 X1−1 is positive semidefinite, note that P satisfies the Lyapunov equation (A − BK)T P + P (A − BK) = −Q − P BR−1 B T P
657
LINEAR QUADRATIC REGULATOR
with A − BK stable and thus the standard solution of the Lyapunov equation yields Z ∞ T P = e(A−BK) t Q + P BR−1 B T P e(A−BK)t dt 0
which is positive semidefinite since Q + P BR−1 B T P is positive semidefinite.
PROOF (Theorem 13.3) Since (A, B) is stabilizable and (C, A) is detectable H clearly has no jω eigenvalues, and X1 is invertible. Suppose now that, X2 X1−1 = P is not stabilizing, then A − BR−1 B T P x = λx, Re[λ] ≥ 0, x 6= 0. Since
(A − BK)T P + P (A − BK) + P BR−1 B T P + Q = 0 premultiplying by x∗ and postmultiplying by x, we get (λ + λ∗ )x∗ P x + x∗ P BR−1 B T P x + x∗ C T Cx = 0 so that B T P x = 0,
Cx = 0
(13.96)
Cx = 0.
(13.97)
and therefore Ax = λx,
By detectability of (C, A), we must have x = 0 so that λ with Re[λ] ≥ 0 cannot be an eigenvalue of A − BK. It is now easy to prove optimality, that is we claim that u∗ = −Kx minimizes I. Note that Z ∞ d T x P x dt = xT P x|t→∞ − xT P x|t=0 . dt 0
Let
u(t) = −Kx(t) + v(t) so that x(t) ˙ = (A − BK)x(t) + Bv(t) and assume that v(t) is such that Z ∞ v T (t)v(t)dt < ∞, 0
and x(t) → 0 as t → ∞. Now Z ∞ I − xT (0)P x(0) = xT (A − BK)T P + P (A − BK) x 0 +xT Q + P BR−1 B T P x + v T Rv dt.
658
OPTIMAL AND ROBUST CONTROL
Since (A − BK)T P + P (A − BK) = −Q − P BR−1 B T P we have I = xT (0)P x(0) +
Z
(13.98)
∞
v T Rvdt
0
which is minimized by setting v = 0.
13.5
The LQR as an Output Zeroing Problem
In this section we develop some geometric solvability conditions for the general LQR problem by regarding it as a problem of zeroing the output of a linear system. This existence condition will show that the unstable eigenspace of the open-loop system must be covered by the sum of the controllable subspace and the subspace unobservable from the performance index. Consider the linear time-invariant (LTI) plant of order n, x(t) ˙ = Ax(t) + Bu(t) with the performance index Z ∞ I= x(t)T DT Dx(t) + u(t)T Ru(t) dt
(13.99)
(13.100)
0
and introduce the following subspaces of the state space X . Xu (A) the unstable eigenspace of A θ the unobservable subspace corresponding to the pair (D, A) C the controllable subspace of the pair (A, B). If α(λ) the minimal polynomial of A is factored as α(λ) = αs (λ)αu (λ)
with αs (λ) having zeros in the open LHP and αu (λ) having zeros in the closed RHP, we have
and with Im[B] = B,
Xu (A) = Kernel [αu (A)] i θ = ∩n−1 i=0 Kernel DA
(13.101) (13.102)
C = B + AB + · · · + An−1 B.
(13.103)
659
LINEAR QUADRATIC REGULATOR THEOREM 13.4 There exists a state feedback control u(t) = F x(t)
(13.104)
that minimizes I=
Z
∞
0
if and only if
x(t)T DT Dx(t) + u(t)T Ru(t) dt Xu (A) ⊂ θ + C
(13.105)
(13.106)
PROOF Note that (13.101), (13.102), and (13.103) are A-invariant subspaces. We introduce the factor space X¯ := X /θ
(13.107)
T : X → X¯
(13.108)
Tx = x ¯.
(13.109)
and the canonical projection
given by ¯ A, ¯ and B ¯ defined by Then there exist maps D, ¯ T A = AT ¯ = D DT ¯ T B =: B
(13.110) (13.111) (13.112)
where A¯ is the induced map in the factor space as shown in the commutative diagram below: A X
X
T
T X /θ
A¯
X /θ
Figure 13.1 A commutative diagram.
660
OPTIMAL AND ROBUST CONTROL
Now (13.99) implies ¯x(t) + Bu(t) ¯ x¯˙ (t) = A¯ ¯ x(t) y(t) = D¯ and (13.100) can be rewritten Z ∞ ¯ T D¯ ¯ x(t) + u(t)T Ru(t) dt. I= x¯(t)T D
(13.113)
(13.114)
0
¯ A) ¯ is observable and thus an optimal control It is easy to verify that (D, u(t) = F¯ x ¯(t) ¯ B) ¯ is stabilizable, that is, minimizing (13.100) exists if and only if (A, ¯ ⊂ C¯ X¯u (A)
(13.115)
¯ is the unstable eigenspace of A¯ and C¯ is the controllable subspace where X¯u (A) ¯ B). ¯ It is now straightforward to prove that (13.115) is equivalent to of (A, (13.106). REMARK 13.1 It is emphasized that the optimal control zeroes the output y asymptotically with u(t) tending asymptotically to zero. However, the optimal control can only stabilize the dynamics in the factor space and does not necessarily stabilize the original system.
13.6
Return Difference Relations
In this section we derive some loop transfer function properties for LQR systems known as return difference relations, which are helpful in subsequent sections to establish stability and robustness properties. We use the following standard LQR relations, restated below for convenience: Z ∞ T I= x (t)Qx(t) + uT (t)Ru(t) dt (13.116) 0
x(t) ˙ = Ax(t) + Bu(t)
(13.117)
0 = AT P + P A + Q − P BR−1 B T P u(t) = −R−1 B T P x(t) = −Kx(t)
(13.118) (13.119)
where K = R−1 B T P.
(13.120)
661
LINEAR QUADRATIC REGULATOR From the ARE we have P (sI − A) + (−sI − AT )P − Q = −P BR−1 RR−1 B T P = −K T RK.
Multiplying the above on the left by B T (−sI − AT )−1 and on the right by (sI − A)−1 B, we get using (P B = K T R) B T −sI − AT
= BT
−1
K T R + RK T (sI − A)−1 B −1 T +B T −sI − AT K RK(sI − A)−1 B −1 −sI − AT Q(sI − A)−1 B.
Adding R to both sides we obtain −1 KRK T (sI − A)−1 B R + B T −sI − AT −1 KR + RK T (sI − A)−1 B +B T −sI − AT −1 = R + B T −sI − AT Q(sI − A)−1 B or, finally
h −1 T i I + B T −sI − AT K R I + K(sI − A)−1 B −1 = R + B T −sI − AT Q(sI − A)−1 B
(13.121)
which is known as Kalman’s Return Difference Identity.
13.7
Guaranteed Stability Margins for the LQR
The state feedback LQR enjoys some exceptional guaranteed or universal stability margins. We derive these here using Kalman’s Return Difference relations established in the last section. Consider the multivariable system x(t) ˙ = Ax(t) + Bu(t) u(t) = −Kx(t) + v(t)
(13.122)
where K has been determined using LQR theory so that A − BK is stable. This corresponds to the block diagram (see Figure 13.2) with the loop transfer function L(s) = K(sI − A)−1 B (13.123) and u(s) = [I + L(s)]−1 v(s).
(13.124)
662
OPTIMAL AND ROBUST CONTROL v(t)
ℓ
+
u(t)
x(t) (sI − A)−1 B
− K Figure 13.2 A state feedback loop.
The stability margins of the above system can be determined by inserting a perturbation matrix ∆ at the loop breaking point ℓ shown in Figure 13.2, and determining the maximal “size” of ∆ that preserves closed-loop stability. We assume for simplicity that ∆ is a complex constant invertible matrix. The loop gain of the perturbed system is L(s)∆. It is easily verified that I + L(s)∆ = ∆−1 − I (I + L(s))−1 + I (I + L(s))∆. (13.125) Now let N (F ) denote the net change in the argument of the scalar function F (s) as s traverses the Nyquist D-contour, consisting of the imaginary axis along with a half circle of infinite radius enclosing the entire right half of the complex plane. From (13.125) N (|I + L(s)∆|) = N (∆−1 − I)(I + L(s))−1 + I + N |(I + L(s)|) (13.126) since N (|∆|) = 0. We will, therefore, have
if
N (|I + L(s)∆|) = N (|I + L(s)|)
(13.127)
N (∆−1 − I)(I + L(s))−1 + I = 0.
(13.128)
Note that (13.127) is a necessary and sufficient condition for stability of the perturbed closed loop since the nominal loop transfer function L(s) corresponds to a stable closed-loop system. Let σ ¯ [U ] and σ[U ] denote the maximum and minimum singular values of the matrix U . A sufficient condition for (13.128), and therefore for stability of the perturbed system, is given in terms of singular values as: σ ¯ (∆−1 − I)(I + L(jω))−1 < 1, for all ω (13.129) or
or
σ ¯ ∆−1 − I · σ ¯ (I + L(jω))−1 < 1, σ ¯ ∆−1 − I < σ[I + L(jω)],
for all ω
(13.130)
for all ω.
(13.131)
663
LINEAR QUADRATIC REGULATOR
From Kalman’s Return Difference relation (13.121), we already know that [I + L(jω)]∗ R[I + L(jω)] ≥ R.
(13.132)
With R = ρI, we obtain [I + L(jω)]∗ [I + L(jω)] ≥ I
(13.133)
which can be expressed in terms of singular values as σ[I + L(jω)] ≥ 1.
(13.134)
From (13.131) and (13.134), it follows that σ ¯ ∆−1 − I < 1
(13.135)
is a sufficient condition for closed-loop stability of the perturbed system.
13.7.1
Gain Margin
Let
K1 ..
∆=
. Kr
,
where the Ki are real. Then (13.135) becomes 1 Ki − 1 < 1, for i = 1, 2, · · · , r
(13.136)
(13.137)
and this is equivalent to
1 < Ki < ∞, 2
for i = 1, 2, · · · , r.
(13.138)
Thus, the LQR has a gain margin in each input channel that ranges from to ∞.
13.7.2
1 2
Phase Margin
In this case, let
∆=
,
(13.139)
for i = 1, 2, · · · , r
(13.140)
ejθ1
so that (13.135) is equivalent to −jθ e i − 1 < 1,
..
. e
jθr
664
OPTIMAL AND ROBUST CONTROL
or 2
(cos θi − 1) + sin2 θi < 1,
for i = 1, 2, · · · , r
(13.141)
or cos θi >
1 , 2
for i = 1, 2, · · · , r.
(13.142)
The condition (13.142) is equivalent to a phase margin of 60o , that is −60o < θi < 60o
(13.143)
in each input channel of the LQR.
13.7.3
Single Input Case
In the single input case, R = r, K = k a row vector, B = b a column vector, and the Return Difference Identity becomes:
so that
1 + k(sI − A)−1 b 2 r = r + bT −sI − AT −1 Q(sI − A)b 1 + k(jωI − A)−1 b ≥ 1
Figure 13.3 A feedback system.
Relation (13.144) means that the Nyquist plot g(jω) = k(jωI − A)−1 b stays outside a circle of radius 1 centered at −1 + j0 as depicted. This clearly results in the gain and phase margins established above.
Figure 13.4 Nyquist plot and the unit circle centered at −1.
13.8 Eigenvalues of the Optimal Closed Loop System
It is possible to determine the eigenvalues of the closed-loop optimal LQR from the matrices defining the performance index, without computing the feedback control as we show below.
13.8.1 Closed-Loop Spectrum
Recall the definition of the Hamiltonian,
    H = [A, −BR^{-1}B^T; −Q, −A^T].     (13.145)
Since ẋ(t) = (A − BK)x(t), the closed-loop eigenvalues are the eigenvalues of A − BK. Using the ARE, it is easily verified that
    [I, 0; P, −I](sI − H) = [sI − (A − BK), BR^{-1}B^T; 0, −sI − (A − BK)^T][I, 0; −P, I]
so that
    det[sI − H] = (−1)^n det[sI − (A − BK)] det[−sI − (A − BK)^T] = det[sI − (A − BK)] det[sI + (A − BK)].     (13.146)
The LHP eigenvalues of H therefore consist of the closed-loop eigenvalues of the optimal system.
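This relationship between the Hamiltonian spectrum and the closed-loop poles is easy to check numerically. A sketch for the double integrator with Q = I and R = 1 (an illustrative choice; SciPy's ARE solver is assumed):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator, Q = I, R = 1.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # here K = [1, sqrt(3)]

# Hamiltonian (13.145).
H = np.block([[A, -B @ np.linalg.solve(R, B.T)],
              [-Q, -A.T]])

cl = np.sort_complex(np.linalg.eigvals(A - B @ K))
lhp = np.sort_complex([e for e in np.linalg.eigvals(H) if e.real < 0])
assert np.allclose(cl, lhp, atol=1e-6)   # LHP eigenvalues of H = closed-loop poles
```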
It is now quite natural to ask: how do the closed-loop eigenvalues of the optimal system behave with respect to, say, ρ, where R = ρR0 for fixed R0 and ρ increases or decreases? Let
    αk(s) := det[sI − (A − BK)]
    α0(s) := det[sI − A]     (13.147)
and with Q = C^T C let
    P(s) := C(sI − A)^{-1}B.     (13.148)
From (13.146), we have
    αk(s)αk(−s) = (−1)^n det[sI − H].     (13.149)
Now using the identities
    det[X, Y; Z, W] = det[X] det[W − ZX^{-1}Y],  det[X] ≠ 0     (13.150)
and
    det[In + UV] = det[Im + VU],     (13.151)
we have
    det[sI − H] = det[sI − A] det[sI + A^T − C^T C(sI − A)^{-1}BR^{-1}B^T]
                = (−1)^n det[sI − A] det[−sI − A^T + C^T C(sI − A)^{-1}BR^{-1}B^T]
                = (−1)^n det[sI − A] det[−sI − A^T] det[I + C^T C(sI − A)^{-1}BR^{-1}B^T(−sI − A^T)^{-1}]
                = (−1)^n det[sI − A] det[−sI − A^T] det[I + C(sI − A)^{-1}BR^{-1}B^T(−sI − A^T)^{-1}C^T].     (13.152)
Therefore, using (13.147) - (13.149), we have
    αk(s)αk(−s) = α0(s)α0(−s) det[I + P(s)R^{-1}P^T(−s)].     (13.153)
When R = ρI and ρ is large, it is seen from (13.153) that the closed-loop eigenvalues of the optimal system approach the LHP open-loop eigenvalues and the reflection of the open-loop RHP eigenvalues about the imaginary axis. On the other hand as ρ ↓ 0 the closed-loop eigenvalues approach the LHP zeros of P (s)P T (−s) or approach ∞ in the LHP.
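The two limits can be illustrated on a scalar example; a sketch (the plant ẋ = x + u, with its open-loop pole at +1, is a hypothetical choice, and SciPy is assumed):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Scalar plant xdot = x + u, Q = 1, R = rho.
A = np.array([[1.0]])
B = np.array([[1.0]])
Q = np.array([[1.0]])

def closed_loop_pole(rho):
    P = solve_continuous_are(A, B, Q, np.array([[rho]]))
    K = (B.T @ P) / rho
    return (A - B @ K)[0, 0]

# Expensive control: the pole tends to -1, the mirror image of the
# RHP open-loop pole at +1.
assert abs(closed_loop_pole(1e6) + 1.0) < 1e-2
# Cheap control: the pole runs off to -infinity along the real axis.
assert closed_loop_pole(1e-6) < -100.0
```

Here the closed-loop pole is −√(1 + 1/ρ), matching the general statement above.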
13.9 Optimal Dynamic Compensators
Consider the controllable and observable system
    ẋ(t) = Ax(t) + Bu(t)     (13.154)
with measurements
    y(t) = Cx(t)     (13.155)
where x(t) is an n-vector, u(t) is an r-vector, and y(t) is an m-vector, with m < n. It is easily seen that implementation of state feedback control using the measurements y(t) would require that we obtain derivatives of y(t). In most practical systems, it is impossible to carry out pure differentiation and one must employ dynamic compensators as approximate differentiators. If optimal state feedback is computed for (13.154) using LQR theory and dynamic compensation is introduced as an afterthought, the optimality of the closed-loop system is inevitably degraded. This is also obviously true when an observer is used to obtain the compensator. An intelligent approach to avoid this pitfall is to anticipate the use of dynamic compensation of a suitable order and introduce an equal number of integrators to augment the plant state, so that state feedback control in the augmented system can be exactly implemented by a proper dynamic compensator. In this way, the system that is implemented is optimal and there is no uncontrollable loss of performance from that designed. We describe the details of this approach below. We assume that the system is observable and let q denote an integer such that
    rank[C; CA; CA²; ⋮ ; CA^q] = n.     (13.156)
Write
    u̇(t) = u1(t)     (13.157)
and let
    u̇1(t) = u2(t)
    u̇2(t) = u3(t)
    u̇3(t) = u4(t)
    ⋮
    u̇_{q−1}(t) = v(t)     (13.158)
denote a string of rq integrators. The augmented system consisting of (13.154) and (13.158) has state vector
    xa(t) := [x(t); u(t); u1(t); ⋮ ; u_{q−1}(t)],     (13.159)
input v(t), and state equation
    ẋa(t) = Aa xa(t) + Ba v(t)     (13.160)
where
    Aa = [A, B, 0, ⋯ , 0; 0, 0, I, ⋯ , 0; ⋮ ; 0, 0, ⋯ , 0, I; 0, 0, ⋯ , 0, 0],  Ba = [0; 0; ⋮ ; 0; I].     (13.161)
Optimal control of (13.160) using LQR theory requires that we specify a performance index
    Ia = ∫₀^∞ [xa(t)^T Qa xa(t) + v(t)^T Ra v(t)] dt     (13.162)
with Ra symmetric positive definite and Qa symmetric positive semi-definite with (Qa, Aa) observable. The optimal state feedback control can then be solved from the corresponding ARE, and can be written in the form
    v(t) = Ka xa(t)     (13.163)
(13.163)
or in the expanded form v(t) = K0 x(t) + L0 u(t) + L1 u1 (t) + · · · + Lq−1 uq−1 (t).
(13.164)
It is easy to see that (Aa , Ba ) is controllable and this along with the observability of (Qa , Aa ) implies, as we have seen before, that (Aa + Ba Ka ) is stable. From (13.156), it follows that there exists P = [P0 P1 · · · Pq ]
such that
    [P0 P1 ⋯ Pq][C; CA; CA²; ⋮ ; CA^q] = K0.     (13.165)
Now observe that
    y(t) = Cx(t)
    ẏ(t) = CAx(t) + CBu(t)
    ÿ(t) = CA²x(t) + CABu(t) + CBu1(t)     (13.166)
    ⋮
    y^{(q)}(t) = CA^q x(t) + CA^{q−1}Bu(t) + CA^{q−2}Bu1(t) + ⋯ + CBu_{q−1}(t)
so that the optimal control (13.164) may be rewritten as
    d^q u(t)/dt^q = P0 y(t) + P1 dy(t)/dt + ⋯ + Pq d^q y(t)/dt^q + Q0 u(t) + Q1 du(t)/dt + ⋯ + Q_{q−1} d^{q−1}u(t)/dt^{q−1}     (13.167)
where
    Q_{q−1} = L_{q−1} − Pq CB
    Q_{q−2} = L_{q−2} − Pq CAB − P_{q−1}CB
    ⋮     (13.168)
    Q1 = L1 − Pq CA^{q−2}B − ⋯ − P2 CB
    Q0 = L0 − Pq CA^{q−1}B − ⋯ − P1 CB.
Thus, (13.167) corresponds to the optimal compensator transfer function matrix given by
    u(s) = Q^{-1}(s)P(s)y(s)     (13.169)
where
    Q(s) = s^q I − Q_{q−1}s^{q−1} − Q_{q−2}s^{q−2} − ⋯ − Q0
    P(s) = Pq s^q + P_{q−1}s^{q−1} + ⋯ + P0.     (13.170)
A state space realization of (13.169) and (13.170) is:
    ż(t) = [0, 0, ⋯ , 0, Q0; I, 0, ⋯ , 0, Q1; 0, I, ⋯ , 0, Q2; ⋮ ; 0, 0, ⋯ , I, Q_{q−1}] z(t) + [P0 + Q0 Pq; P1 + Q1 Pq; ⋮ ; P_{q−1} + Q_{q−1} Pq] y(t)     (13.171)
    u(t) = [0, 0, ⋯ , 0, I] z(t) + Pq y(t).
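A minimal numerical sketch of the augmentation idea for the double integrator ÿ = u with one appended integrator u̇ = v (so q = 1 here; the weights Qa and Ra are illustrative choices, and SciPy is assumed):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Augmented state xa = [y, ydot, u]; input v = udot.
Aa = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [0.0, 0.0, 0.0]])
Ba = np.array([[0.0], [0.0], [1.0]])
Qa = np.diag([1.0, 0.0, 0.0])      # penalize y^2; (Qa, Aa) is observable
Ra = np.array([[1.0]])

P = solve_continuous_are(Aa, Ba, Qa, Ra)
Ka = np.linalg.solve(Ra, Ba.T @ P)

# State feedback v = -Ka xa is exactly implementable as a proper
# first-order compensator, and the augmented closed loop is stable.
eigs = np.linalg.eigvals(Aa - Ba @ Ka)
assert max(e.real for e in eigs) < 0
```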
13.9.1 Dual Compensators
A dual procedure for optimal dynamic compensator design is described next. For this, we consider the "dual plant"
    dx̃(t)/dt = A^T x̃(t) + C^T ũ(t)     (13.172)
    ỹ(t) = B^T x̃(t).     (13.173)
Let
    rank[B AB ⋯ A^p B] = n,     (13.174)
introduce
    dũ(t)/dt := ũ1(t)
    dũ1(t)/dt := ũ2(t)
    ⋮     (13.175)
    dũ_{p−1}(t)/dt := ṽ(t)
and consider the augmented system
    dx̃a(t)/dt = Ãa x̃a(t) + B̃a ṽ(t)     (13.176)
with quadratic performance index
    Ĩ = ∫₀^∞ [x̃a(t)^T Q̃a x̃a(t) + ṽ(t)^T R̃ ṽ(t)] dt     (13.177)
where (Q̃a, Ãa) is observable, R̃ = R̃^T is positive definite, and
    Ãa = [A^T, C^T, 0, ⋯ , 0; 0, 0, I, ⋯ , 0; ⋮ ; 0, 0, ⋯ , 0, I; 0, 0, ⋯ , 0, 0],  B̃a = [0; 0; ⋮ ; 0; I].     (13.178)
The dual plant is observable and controllable, since the original plant in (13.154) and (13.155) is controllable and observable. Thus, (Ãa, B̃a) is controllable and therefore the optimal control
    ṽ(t) = K̃a^T x̃a(t),     (13.179)
written in expanded form as
    ṽ(t) = K̃0^T x̃(t) + L0^T ũ(t) + L1^T ũ1(t) + ⋯ + L_{p−1}^T ũ_{p−1}(t)     (13.180)
is stabilizing, that is,
    Ãa + B̃a K̃a^T     (13.181)
is stable. Now introduce the "dual" compensator
    d^p ũ(t)/dt^p = P0^T ũ(t) + P1^T dũ(t)/dt + ⋯ + P_{p−1}^T d^{p−1}ũ(t)/dt^{p−1} + Q0^T ỹ(t) + Q1^T dỹ(t)/dt + ⋯ + Qp^T d^p ỹ(t)/dt^p     (13.182)
with transfer function
    ũ(s) = [P̃(s)^T]^{-1} Q̃(s)^T ỹ(s)     (13.183)
where
    P̃(s)^T = s^p I − s^{p−1}P_{p−1}^T − ⋯ − P0^T
    Q̃(s)^T = Q0^T + Q1^T s + ⋯ + Qp^T s^p.     (13.184)
We take the "dual" of (13.183) and (13.184), that is, transpose them, to finally obtain the dynamic compensator transfer function matrix for the original plant
    u(s) = Q̃(s)P̃^{-1}(s)y(s).     (13.185)
A state space realization of (13.185) is:
    ẇ(t) = [0, I, 0, ⋯ , 0; 0, 0, I, ⋯ , 0; ⋮ ; 0, 0, ⋯ , 0, I; P0, P1, ⋯ , P_{p−1}] w(t) + [0; 0; ⋮ ; 0; I] y(t)     (13.186)
    u(t) = [Q0 + Qp P0, Q1 + Qp P1, ⋯ , Q_{p−1} + Qp P_{p−1}] w(t) + Qp y(t)
where the Qi, Pj are calculated from
    [B AB ⋯ A^p B][Q0; Q1; ⋮ ; Qp] = K̃0     (13.187)
and
    P_{p−1} = L_{p−1} − CBQp
    P_{p−2} = L_{p−2} − CABQp − CBQ_{p−1}
    ⋮     (13.188)
    P0 = L0 − CA^{p−1}BQp − CA^{p−2}BQ_{p−1} − ⋯ − CBQ1.
It is straightforward to prove that the plant in (13.154) and (13.155) with the compensator in (13.186) has closed-loop spectrum identical to the eigenvalues of Ãa + B̃a K̃a^T.
13.10 Servomechanisms and Regulators
We have seen that the LQR theory essentially provides a method of computing a state feedback control that stabilizes a state space system. It is natural to want to extend this rather limited scope of the LQR theory to a more realistic and broader class of control problems. To this end, we have already demonstrated in the previous section how optimal output feedback stabilizing compensators can be designed using LQR theory. In this section, we show that a central problem of control theory, namely the tracking and disturbance rejection problem, can be solved by formulating it as a suitably redefined regulator problem. This is a significant application of regulator theory to a very broad class of realistic control problems which we describe next.
13.10.1 Notation and Problem Formulation
We consider the system or plant
    ẋp(t) = Ap xp(t) + Bp up(t) + Ep d(t)     (13.189)
    y(t) = Cp xp(t) + Dp up(t) + Fp d(t)     (13.190)
where xp(·), up(·), y(·), and d(·) represent the n, ni, n0, and q dimensional state, input, output, and disturbance signals. Let r(·) denote a reference signal vector of dimension n0 and
    e(t) := r(t) − y(t)     (13.191)
the tracking error. The servomechanism problem is to find feedback generated control signals up(t) such that y(t) tracks r(t) and rejects d(t), that is,
    lim_{t→∞} e(t) = 0.     (13.192)
The above condition must hold for prescribed classes of reference and disturbance signals and arbitrary plant and controller initial conditions.
13.10.2 Reference and Disturbance Signal Classes
The typical servomechanism requires, for example, the tracking and disturbance rejection of arbitrary steps, ramps, and sinusoids. To represent such classes of signals, we assume that the references and disturbances satisfy the differential equations below. With
    D := d/dt
consider the signals determined by the differential equations
    (D^t + β_{t−1}D^{t−1} + ⋯ + β1 D + β0) ri(t) = 0     (13.193)
    (D^t + β_{t−1}D^{t−1} + ⋯ + β1 D + β0) dj(t) = 0     (13.194)
for i = 1, 2, ⋯ , n0, j = 1, 2, ⋯ , q. Let
    β(D) := D^t + β_{t−1}D^{t−1} + ⋯ + β1 D + β0.     (13.195)
The class of all reference signals to be tracked and all disturbance signals to be rejected are generated by placing all possible initial conditions in (13.193) and (13.194), respectively. We make the standing assumption that the roots of β(s) = 0 (13.196) lie in the closed RHP so that the reference and disturbance signals are all unstable exponentials or persistent signals such as steps, ramps, and sinusoids. In the following subsection, we show how the above problem can be solved as a regulator or stabilization problem for a suitably augmented plant.
13.10.3 Solution of the Servomechanism Problem
Let us note that if a linear time-invariant feedback system achieves tracking and disturbance rejection as defined by (13.192), it must satisfy the following requirements: (a) Each reference signal and each disturbance signal must be blocked or decoupled from any component of the error signal, asymptotically, that is, in the steady state. (b) The closed-loop system must be asymptotically stable. Translating (a) and (b) into specifications on transfer functions, we see that (a) is equivalent to the requirement that the reference to error and disturbance to error transfer functions possess zeros precisely at the same locations and with the same multiplicities as the roots of β(s) = 0. The condition (b) is,
674
OPTIMAL AND ROBUST CONTROL
of course, the usual one of requiring the closed-loop eigenvalues to lie in the open LHP, that is, of a regulator or stabilization problem. We show below how condition (a) can be satisfied by solving a standard regulator problem for an appropriately augmented system or plant. Introduce the so-called state space internal model x˙ im (t) = M xim (t) + mei (t)
(13.197)
where ei(t) is the i-th component of e(t), i = 1, 2, ⋯ , n0, (M, m) is controllable, and
    |sI − M| = β(s).     (13.198)
Then, with
    xm(t) := [x¹m(t); x²m(t); ⋮ ; x^{n0}m(t)],
we have
    ẋm(t) = diag[M, M, ⋯ , M] xm(t) + diag[m, m, ⋯ , m] e(t) =: Am xm(t) + Bm e(t)     (13.199)
which we call the Internal Model. Consider now the augmented system consisting of the plant and the Internal Model:
    [ẋp(t); ẋm(t)] = [Ap, 0; −Bm Cp, Am][xp(t); xm(t)] + [Bp; −Bm Dp] up(t) + [Ep; −Bm Fp] d(t) + [0; Bm] r(t)     (13.200)
    e(t) = [−Cp, 0][xp(t); xm(t)] − Dp up(t) − Fp d(t) + I r(t)     (13.201)
where the state matrix, input matrix, and state in (13.200) are denoted A, B, and x(t), respectively. We can now consider a state feedback control law for (13.200) of the form
    up(t) = −(Kp xp(t) + Km xm(t)) = −[Kp Km][xp(t); xm(t)] =: −K x(t).     (13.202)
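The augmentation (13.200) with the control law (13.202) can be sketched numerically for step references, where β(D) = D and the Internal Model is one integrator per error channel. A minimal sketch (the first-order plant below is hypothetical, and SciPy is assumed):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical plant xp' = -xp + up, y = xp; internal model xm' = e = r - y.
Ap, Bp, Cp = -1.0, 1.0, 1.0
A = np.array([[Ap, 0.0], [-Cp, 0.0]])
B = np.array([[Bp], [0.0]])

P = solve_continuous_are(A, B, np.eye(2), np.array([[1.0]]))
K = B.T @ P                     # R = 1
Ae = A - B @ K
assert max(np.linalg.eigvals(Ae).real) < 0   # the regulator stabilizes

# The r-to-e transfer function must vanish at s = 0 (a blocking zero
# of beta(s) = s), for any stabilizing K.
Be = np.array([[0.0], [1.0]])   # r enters through the internal model
Ce = np.array([[-Cp, 0.0]])     # e = r - y
De = np.array([[1.0]])
H0 = Ce @ np.linalg.solve(-Ae, Be) + De      # value at s = 0
assert abs(H0[0, 0]) < 1e-9
```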
We will show that, provided (13.202) stabilizes (13.200), or equivalently stabilizes (A − BK), the zero placement condition (a) will automatically be satisfied, thus achieving asymptotic tracking and disturbance rejection.

LEMMA 13.6
(A, B) is stabilizable if
(i) (Ap, Bp) is stabilizable, and     (13.203)
(ii) rank[λI − Ap, Bp; −Cp, Dp] = n + n0     (13.204)
for all λ satisfying β(λ) = 0.

PROOF  Let
    ρ(µ) := rank[µI − A  B].     (13.205)
Since
    [sI − A  B] = [sI − Ap, 0, Bp; Bm Cp, sI − Am, −Bm Dp],     (13.206)
it follows from the stabilizability of (Ap, Bp), (13.203), that when µ ∈ C⁺, µ ∉ σ(Am),
    ρ(µ) = n + n0 t.     (13.207)
Now write
    [sI − A  B] = [In, 0, 0; 0, −Bm, sI − Am][sI − Ap, 0, Bp; −Cp, 0, Dp; 0, I_{n0 t}, 0]     (13.208)
and consider µ ∈ σ(Am). Let
    x := rank[µI − Ap, Bp; −Cp, Dp],     (13.209)
note that (Am, Bm) is controllable, and apply Sylvester's inequality for the rank of a product of matrices to (13.208), to get
    (n + n0 t) + (x + n0 t) − (n + n0 + n0 t) ≤ ρ(µ) ≤ n + n0 t.     (13.210)
It follows from condition (ii), (13.204), that x = n + n0 so that ρ(µ) = n + n0 t. Therefore, ρ(µ) = n + n0 t for all µ ∈ C⁺, proving that (A, B) is stabilizable.

We can now state the solution to the servomechanism problem.
THEOREM 13.5
If the plant is stabilizable, that is, (Ap, Bp) is stabilizable and (13.204) holds, then there exists a solution to the servomechanism problem of the form
    up(t) = −Kp xp(t) − Km xm(t)     (13.211)
where
    ẋm(t) = Am xm(t) + Bm e(t)     (13.212)
and (A − BK) is stable (see Figure 13.5).

Figure 13.5 State feedback servomechanism.
PROOF  By Lemma 13.6, there exists K such that (A − BK) is stable. Now consider the closed-loop error system
    [ẋp(t); ẋm(t)] = [Ap − Bp Kp, −Bp Km; −Bm(Cp − Dp Kp), Am + Bm Dp Km][xp(t); xm(t)] + [Ep, 0; −Bm Fp, Bm][d(t); r(t)]     (13.213)
    e(t) = [−(Cp − Dp Kp), Dp Km][xp(t); xm(t)] + [−Fp, I][d(t); r(t)]
with state x(t) = [xp(t) xm(t)]^T, input [d(t) r(t)]^T, output e(t), and state space representation [Ae, Be, Ce, De]. Let hij(s) denote the transfer function relating ei(s) to the j-th component of the input [d(t) r(t)]^T. For a matrix S, let S_{i·}, S_{·j}, and S_{ij} denote its i-th row, j-th column, and ij-th element. With this notation,
    hij(s) = (Ce)_{i·}(sI − Ae)^{-1}(Be)_{·j} + (De)_{ij} = det[sI − Ae, (Be)_{·j}; −(Ce)_{i·}, (De)_{ij}] / det[sI − Ae].     (13.214)
The numerator of (13.214) is, in expanded form,

    det [ sI − Ap + Bp Kp        Bp Km           X            ]
        [ m(Cp − Dp Kp)_{1·}                     m(De)_{1j}   ]
        [ m(Cp − Dp Kp)_{2·}        Ã            m(De)_{2j}   ]
        [        ⋮                                   ⋮        ]
        [ m(Cp − Dp Kp)_{n0·}                    m(De)_{n0 j} ]
        [ (Cp − Dp Kp)_{i·}      −(Dp Km)_{i·}   (De)_{ij}    ]     (13.215)

where X denotes the upper part of (Be)_{·j} and
    Ã = diag[sI − M, sI − M, ⋯ , sI − M] − [m(Dp Km)_{1·}; m(Dp Km)_{2·}; ⋮ ; m(Dp Km)_{n0·}].
Premultiplying the matrix in (13.215) by
    T = [In; I_t; ⋱ ; −m ← (i + 1)-th block row; ⋱ ; I_t; 1],     (13.216)
that is, the identity matrix modified so that the (i + 1)-th block row subtracts m times the last row, the terms m(Cp − Dp Kp)_{i·}, −m(Dp Km)_{i·}, and m(De)_{ij} are cancelled and the (i + 1)-th block row of the numerator matrix reduces to
    [0 ⋯ 0, sI − M, 0 ⋯ 0],
so that only the corresponding diagonal block of Ā = diag[sI − M, ⋯ , sI − M] survives in that row. The factor det[sI − M] can therefore be extracted from the determinant, and finally,
    hij(s) = β(s) nij(s) / |sI − Ae| = |sI − M| · nij(s) / |sI − Ae|     (13.217)
for some polynomial nij (s). It follows that each entry of the error transfer matrix has β(s) as a factor of the numerator and the stable polynomial |sI − Ae | = |sI − A + BK| as the denominator. Thus, the condition in (13.192) is satisfied for every reference input and every disturbance input satisfying (13.193) and (13.194), respectively. REMARK 13.2 The special property that β(s) is a factor of each and every entry of the error transfer function is described by labelling the zeros of β(s) as blocking zeros of the error transfer function. REMARK 13.3 The condition in (13.204) requires that ni ≥ n0 , that is, there are at least as many control inputs as outputs to be controlled. When ni ≥ n0 , it is equivalent to the condition that the plant transfer function evaluated at s = λ, Gp (λ) := Cp (λI − Ap )−1 Bp + Dp
(13.218)
have rank n0 . REMARK 13.4 It is easy to see that under small perturbations of the plant model [Ap , Bp , Cp , Dp ] and [Kp Km ] the property that β(s) is a factor of the numerator of each entry of the error transfer function is preserved. The latter is determined only by the Internal Model controller. Thus, the system functions as a servomechanism as long as Ae remains stable and the Internal Model controller is accurate and fixed. REMARK 13.5 The state feedback control law in (13.211) may be unimplementable if xp (t) is inaccessible as a measurement signal or otherwise unavailable for feedback. The state xm (t) of the Internal Model will most often
be available since the Internal Model is a synthetic device specifically built for control. In the case that (13.211) is unimplementable, we can introduce the output feedback stabilizing controller:
    ẋs(t) = As xs(t) + Bs [e(t); xm(t)]
    up(t) = Cs xs(t) + Ds [e(t); xm(t)].     (13.219)
The ability of the controller in (13.219) to stabilize the closed-loop system is determined by the stabilizability and detectability of (A, B, C̄) where
    C̄ = [Cp, 0; 0, I_{n0 t}].     (13.220)
Lemma 13.6 has already given conditions for stabilizability. For detectability, observe that
    [sI − A; C̄] = [sI − Ap, 0; Bm Cp, sI − Am; Cp, 0; 0, I_{n0 t}]
so that (C̄, A) is detectable if and only if (Cp, Ap) is detectable. The proof that the controller in (13.219) also assigns |sI − M| = β(s) as a numerator polynomial to each entry of the error transfer function is similar to the state feedback case and is left to the reader (Exercise 13.14). The output feedback servomechanism is shown in Figure 13.6.

Figure 13.6 Output feedback servomechanism.
We have arrived at the following important result. THEOREM 13.6 If the plant [Ap , Bp , Cp , Dp ] is stabilizable and detectable and condition (13.204) is satisfied, then a solution to the servomechanism problem exists and is of the
form shown in Figure 13.6. The solution is insensitive to perturbations of the plant and stabilizing controller parameters as long as they do not destabilize the closed loop. Tracking and disturbance rejection are not preserved if the Internal Model part of the controller is perturbed by even a vanishingly small amount.

REMARK 13.6 As a final remark, we point out that the design of the stabilizing part of the controller can always be carried out using optimal LQR theory and the technique of the last section for designing optimal output feedback controllers.
13.11 Exercises
13.1 Consider the system
    ẋ(t) = u(t),  x(0) = x0
and the performance index
    I = ∫₀^T [x²(t) + u²(t) + (1/2)x⁴(t)] dt.
Write down the Hamilton-Jacobi-Bellman equation for the optimal system.
Answer:
    H = φ + (∂V/∂x) f = x²(t) + u²(t) + (1/2)x⁴(t) + (∂V/∂x) u(t)
    2u*(t) + ∂V/∂x = 0
    H* = x²(t) + (1/2)x⁴(t) − (1/4)(∂V/∂x)²
    ∂V/∂t + x²(t) + (1/2)x⁴(t) − (1/4)(∂V/∂x)² = 0,  V(x(T), T) = 0     (HJB)
    ẋ(t) = −(1/2) ∂V/∂x     (Optimal System)
13.2 Determine the optimal control for
    ẋ(t) = u(t)
and
    I = ∫₀^T [x²(t) + u²(t)] dt.
Discuss what happens when T tends to ∞.
Answer: u*(t) = −P(t)x(t) where
    Ṗ(t) = P²(t) − 1,  P(T) = 0,  P(t) = (e^{2(T−t)} − 1)/(e^{2(T−t)} + 1).
When T → ∞, P(t) → 1 and u*(t) = −x(t).

13.3 Determine the optimal control for
    ẋ(t) = u(t)
and
    I = (1/2) s x²(2) + (1/2) ∫₀² u²(t) dt,  s > 0.
Discuss the behavior of the solution as s → 0.

13.4 Find the optimal control for the double integrator ÿ(t) = u(t) with
    I = ∫₀^∞ [y²(t) + u²(t)] dt.

13.5 Design an LQ optimal first order controller for the double integrator plant in Problem 13.4 using the performance index
    I = ∫₀^∞ [y²(t) + (u̇(t))²] dt.
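The closed-form P(t) in the answer to Exercise 13.2 can be checked against its Riccati equation numerically (standard library only; the finite-difference step is an illustrative choice):

```python
import math

T = 5.0  # arbitrary horizon

def P(t):
    # P(t) = (e^{2(T-t)} - 1)/(e^{2(T-t)} + 1) = tanh(T - t)
    z = math.exp(2.0 * (T - t))
    return (z - 1.0) / (z + 1.0)

assert abs(P(T)) < 1e-12                         # terminal condition P(T) = 0
for t in (0.0, 1.0, 2.5, 4.9):
    h = 1e-6
    Pdot = (P(t + h) - P(t - h)) / (2.0 * h)     # central difference
    assert abs(Pdot - (P(t) ** 2 - 1.0)) < 1e-6  # Pdot = P^2 - 1
```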
13.6 Given the state space system for the double integrator
    ẋ(t) = [0, 1; 0, 0] x(t) + [0; 1] u(t)
choose a quadratic cost functional with R = 1 and Q = I and show that for finite T,
    K = [K1 K2]
where
    K1 = (cosh √3(t − T) − 1)/(2 + cosh √3(t − T)),  K2 = −√3 sinh √3(t − T)/(2 + cosh √3(t − T)).
Show that when T tends to ∞ the gain tends to K = [1 √3].
13.7 The inverted pendulum cart shown in Figure 13.7 is linearized about the straight-up position and has the following equations when the disturbance d is perpendicular to the pendulum: d(t) y(t) m x(t) L θ
u(t) M Figure 13.7 The inverted pendulum.
0 1
0
0
0
0
1 1 0 0 − m g 0 − M M M x(t) + u(t) + d(t) x(t) ˙ = 0 0 0 0 1 0 M +m 1 M +m 0 0 g 0 − ML ML mM L Assume that d(t) = 0,
M = 2 kg,
m = 1 kg,
L = 0.5 m,
g = 9.18 m/s2
683
LINEAR QUADRATIC REGULATOR
Find the infinite horizon LQR controller when Q = diag[1 1 10 10] and R = [1] and R = [10]. Plot θ and u(t) for both cases when x(0) = [0 0 1 0]T . 13.8 The system shown in Figure 13.8 is described by the equations: x1 (t)
x2 (t) K
m1
m2
u(t)
d(t) Figure 13.8 Two-mass spring system.
0
0
1 0
0 0 0 0 1 x˙ 1 (t) x1 (t) 0 x˙ 2 (t) x (t) 2 + k k x˙ 3 (t) = − 0 0 x3 (t) 1 m1 m1 x˙ 4 (t) x4 (t) m1 k k 0 − 0 0 m2 m2
0
0 u(t) + d(t) 0 1 m2
where x1 (t) is the position of body 1 (m) x2 (t) is the position of body 2 (m) x3 (t) is the velocity of body 1 (m/s) x4 (t) is the velocity of body 2 (m/s) u(t) is the control-force input (N) d(t) is a disturbance-force input (N) Assume m1 = m2 = k = 1, d(t) = 0. (1) Design a time-invariant LQR controller with Q1 = diag[1 0 0 0] and R = [1].
(2) Compare the transient behavior of the state components x1(t) and x2(t) and the control effort u(t), given the initial condition x(0) = [1 0 0 0]^T and the control law above, with that obtained if Q1 were changed to Q2 = diag[10 0 0 0].

13.9 Find the optimal control for the double integrator ÿ(t) = u(t) and
    I = ∫₀^∞ [y²(t) + ρu²(t)] dt,  ρ > 0.
Determine the closed-loop eigenvalues as a function of ρ. Discuss the behavior of the eigenvalues as ρ → 0 (cheap control) and ρ → ∞ (expensive control).

13.10 Design an LQ optimal first order compensator for the double integrator ÿ(t) = u(t) using the cost functional
    I = ∫₀^∞ [y²(t) + ρ(u̇(t))²] dt.
Determine the compensator and closed-loop spectrum as a function of ρ and their behavior as ρ → 0 and ρ → ∞. Also determine the gain and phase margins of the closed-loop system at the input to the plant and study their behavior as ρ → 0 and ρ → ∞.
Z
∞
0
Show that the optimal control is
qx2 (t) + ρu2 (t) dt.
u∗ (t) = −Kx(t) where K=
q a2 +
q ρb2
b
+a
685
LINEAR QUADRATIC REGULATOR and the closed-loop eigenvalue is s
λ = − a2 +
qb2 . ρ
Discuss the behavior of the system as ρ → 0 and ρ → ∞ for fixed q. 13.12 Show that the optimal control problem for x˙ 1 (t) = x2 (t), with I=
Z
∞
0
results in
x˙ 2 (t) = u(t)
x21 (t) + qx22 (t) + ρu2 (t) dt u∗ (t) = −Kx(t)
with K=
1 √ , ρ
s
q √ ! q+2 ρ 1 √ = √ 1, q + 2 ρ ρ ρ
and the closed-loop eigenvalues are q q 1 √ √ λ1 , λ2 = √ − q+2 ρ± q−2 ρ . 2 ρ Discuss the behavior of the closed-loop system as ρ → 0 and ρ → ∞ for fixed q. 13.13 Prove that if (A, B) is controllable and (C, A) is observable, then the ARE has a unique positive definite solution P . Hint: Argue by contradiction and suppose there exists x(0) 6= 0 such that Z ∞ x(t)T CC T x(t) + u(t)T Ru(t) dt. 0 = xT (0)P x(0) = 0
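The formulas in Exercise 13.12 admit a quick spot check (the values q = 5, ρ = 4 are chosen arbitrarily so that the closed-loop eigenvalues come out real):

```python
import math
import numpy as np

q, rho = 5.0, 4.0
sr = math.sqrt(rho)
K1 = 1.0 / sr
K2 = math.sqrt((q + 2.0 * sr) / rho)

# Reconstruct the ARE solution entries from K = (1/rho)[P12, P22] and
# check the (1,1) and (2,2) entries of A'P + PA - P B R^{-1} B' P + Q = 0.
P12, P22 = rho * K1, rho * K2
assert abs(1.0 - P12 ** 2 / rho) < 1e-12
assert abs(q + 2.0 * P12 - P22 ** 2 / rho) < 1e-12

# Closed-loop eigenvalues of A - BK = [0, 1; -K1, -K2] match the formula.
lam = sorted(np.linalg.eigvals(np.array([[0.0, 1.0], [-K1, -K2]])).real)
pred = sorted((-math.sqrt(q + 2 * sr) + sgn * math.sqrt(q - 2 * sr)) / (2.0 * sr)
              for sgn in (1.0, -1.0))
assert np.allclose(lam, pred, atol=1e-9)
```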
13.14 Verify that the stabilizing controller (13.219) along with the Internal Model controller assigns the zeros of β(s) as blocking zeros to the error transfer function.

13.15 Consider the plant
    ẋp(t) = Ap xp(t) + Bp up(t) + Ep d(t)
    y(t) = Cp xp(t)
with
    Ap = [−1, 0, 1, 0; 0, −2, 1, 1; 0, 0, 0, 0; 0, 0, 0, −1],  Bp = [0, 0; 0, 0; 1, 0; 0, 1],
    Cp = [1, 0, 0, 0; 0, 1, 0, 0],  Ep = [1; 0; 0; 0].
Design an output feedback controller to make y1(t) track arbitrary steps and ramps, y2(t) track arbitrary steps and reject arbitrary constant disturbances d(t), using optimal LQR theory and the results in this chapter. Calculate the closed-loop error transfer function and verify that the controller assigns the correct blocking zeros to this transfer function.

13.16 Let G(s) = C(sI − A)^{-1}B with (A, B, C) a minimal realization and suppose that there exists a positive definite solution P satisfying
    A^T P + PA + C^T C + PBB^T P/γ² = 0.
Prove that
    ‖G(s)‖∞ ≤ γ  (‖G(jω)‖₂ ≤ γ, ∀ω).
Hint: Use the Return Difference Identity.

13.17 Let G(s) = C(sI − A)^{-1}B with (A, B, C) a minimal realization. Prove that
    ‖G(s)‖∞ ≤ γ  (‖G(jω)‖₂ ≤ γ, ∀ω ∈ ℝ)
implies that the Hamiltonian matrix
    H = [A, BB^T/γ²; −C^T C, −A^T]
cannot have imaginary axis eigenvalues. This suggests a bisection algorithm to determine ‖G(s)‖∞.
Hint: Consider [I − G(−s)^T G(s)/γ²]^{-1}.

13.18 Extend the results of the previous two exercises to the case G(s) = C(sI − A)^{-1}B + D.
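The bisection algorithm suggested in Exercise 13.17 can be sketched as follows (the tolerance and the imaginary-axis detection threshold are illustrative choices):

```python
import numpy as np

def hinf_norm(A, B, C, tol=1e-6):
    """Bisection on gamma: gamma < ||G||_inf exactly when the Hamiltonian
    has an eigenvalue on the imaginary axis."""
    def has_imag_eig(gamma):
        H = np.block([[A, (B @ B.T) / gamma ** 2],
                      [-C.T @ C, -A.T]])
        return bool(np.any(np.abs(np.linalg.eigvals(H).real) < 1e-8))

    lo, hi = 1e-8, 1.0
    while has_imag_eig(hi):          # grow hi until it is an upper bound
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if has_imag_eig(mid):
            lo = mid
        else:
            hi = mid
    return hi

# Check on G(s) = 1/(s + 1), whose H-infinity norm is 1.
A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])
assert abs(hinf_norm(A, B, C) - 1.0) < 1e-4
```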
13.12 Notes and References
The LQR was introduced by Kalman in the now historic 1960 paper [108]. The return difference relations were developed in the 1964 paper [109]. Our treatment of the LQR theory, which is now a standard topic, has closely followed and liberally borrowed from the excellent treatments in Wonham [200], Dorato, Abdallah, and Cerone [69], Green and Limebeer [87], and Zhou, Doyle, and Glover [213]. The existence condition for the LQR, obtained by treating it as a problem of zeroing the output, was obtained by Bhattacharyya [21]. The approach to optimal compensation was initiated by Pearson in [166] and pole placement and dual compensators were discussed in Brasch and Pearson [42], where an upper bound on the order of stabilizing compensators was established. The servomechanism problem was solved in Bhattacharyya and Pearson [28, 29] in 1970 and 1972. In [28], the single output problem was solved and the internal model was introduced. In [29], the multivariable servomechanism problem was solved by formulating it as a problem of zeroing the output of a nonstate-stabilizable system and using the results of Bhattacharyya, Pearson, and Wonham [30]. Necessary and sufficient conditions for output feedback tracking and disturbance rejection with internal stability were obtained in [29] under the assumption that signal modes are disjoint from plant zeros. These results were subsequently refined in Wonham and Pearson [201], where the references and disturbances were allowed to have poles at the system zeros. In [79], Francis, Sebakhy, and Wonham introduced the Internal Model Principle, wherein the necessity of error feedback, and hence of internal models, was established for the existence of a solution that is insensitive to plant and stabilizing controller parameters. A controller incorporating the Internal Model had been presented for the multivariable case by Davison in [63, 64], where the term "robust servomechanism" was used to refer to the above mentioned insensitivity.
In Howze and Bhattacharyya [106] it was shown that error feedback is not necessary if insensitivity to only plant parameters is required. The definition of blocking zeros and the fact that Internal Models assign “blocking zeros” to the error transfer function were established in Ferreira [76] and Ferreira and Bhattacharyya [77]. A clear and self-contained treatment of the servomechanism problem is given in Desoer and Wang [67] where it is stated “we leave it to the science historian to describe fairly the history of the subject.” The command “lqr” in Matlab can be used to carry out the computations for the linear quadratic regulator.
14 SISO H∞ AND l1 OPTIMAL CONTROL
In this chapter we formulate and solve the single-input single-output H∞ and l1 optimal control problems. Both of these problems are focused on designing a controller to minimize the induced norm of a closed-loop transfer function. The mathematical machinery needed for both the problem formulations and their solutions are developed starting from the basics, and the connection of the mathematics to the physical control problems of interest is highlighted in detail.
14.1 Introduction
During the last several decades, H2 , H∞ , and L1 optimal control have emerged as elegant mathematical techniques for designing optimal controllers. The distinguishing feature between these approaches is the choice of the norm. In optimal control, there is usually a performance index which seeks to mathematically capture the tradeoffs that are involved in the control design. The performance index must of necessity be a scalar. Consequently, we need for vectors the analogue of the magnitude or absolute value of a scalar. Such an analogue is provided by the norm of a vector, which can be thought of as its length. The aim of this chapter and the next several ones is to provide a unified treatment of the seemingly diverse optimal control methodologies that have been proposed in the literature. In the process, we introduce the reader to many mathematical results which may be useful in other fields as well. As an introduction to the typical tradeoffs that are involved in designing a control system, consider a single-input single-output control system as shown in Figure 14.1. Here r, y, e, dy and dn represent the command signal, the output, the tracking error, the output disturbance and the sensor noise respectively, while P and C represent the plant and the controller respectively.
Figure 14.1 A typical control system.
From this figure, the tracking error e satisfies∗
    e = [1/(1 + PC)][r].     (14.1)
The loop gain L, which is the gain around the loop, is defined by L = PC, while the Sensitivity function S is defined by S = 1/(1 + PC). Clearly, to make S and hence e small, we have to make L large. From the figure, it also follows that
    y = [−PC/(1 + PC)][dn].
Defining the Complementary Sensitivity function T by T = PC/(1 + PC), we see that to attenuate the effect of the sensor noise dn on the output, one would have to make T small, which implies that the loop gain L would have to be made small. Thus, we have already run into the first tradeoff involved in designing a feedback control system. Now let us examine how the output disturbance dy propagates to the output of the plant. Once again from the figure we have
    y = [1/(1 + PC)][dy]     (14.2)
from which we conclude that a large loop gain will result in better output disturbance rejection. Even assuming that we are interested only in good tracking and good output disturbance rejection and do not really care about sensor noise rejection, in

∗ In the following three equations, the lower case letters refer to time-domain signals while the upper case letters refer to transfer functions. Also, the notation z = H[u] denotes that the time-domain signal z is the convolution of the impulse response of H with the time-domain signal u.
SISO H∞ AND l1 OPTIMAL CONTROL
691
most situations it is not possible to keep on increasing the loop gain. This is because in most situations, a large loop gain would destabilize the closed-loop system. To see this, consider a typical Nyquist plot shown in Figure 14.2.
Figure 14.2 A typical Nyquist plot.
From this figure, it is clear that if one tries to increase the loop gain by increasing the parameter K, then beyond a certain value of K the −1 + j0 point will be encircled in a clockwise direction by the Nyquist plot, resulting in closed-loop instability.

The first requirement for any control system is Stability. Once this minimal requirement is met, one can focus on meeting the performance objectives. The latter may be specified in terms of tracking, output disturbance rejection, sensor noise rejection, and so on. In general, it will be impossible to completely model any physical system and, therefore, the control system design must be robust, or to some extent insensitive, to the presence of uncertainties or modeling errors. Thus, in addition to satisfying the stability and performance requirements for the nominal plant, that is, the plant in the absence of any modeling errors, the control design must also be able to guarantee stability and performance in the presence of a class of modeling errors. These properties are referred to as robust stability and robust performance.

Different classes of uncertainty have been considered in the controls literature, and different control design techniques are particularly well suited for handling a particular class of uncertainty. Plant uncertainty can be classified into two broad classes: structured uncertainty and unstructured uncertainty. Structured uncertainty includes parametric uncertainty, which usually takes the form of uncertainty in the coefficients of the transfer function or transfer function matrix defining a linear system. In addition, block diagonal matrix perturbations, with each block representing unstructured uncertainty, are referred to as structured uncertainty to capture the fact that there is a block diagonal structure. Techniques for handling parametric uncertainty are presented in detail in Part II of this book. Adaptive Control, which involves the design of nonlinear, time-varying controllers to cope with parametric uncertainty, is beyond the scope of this book and is, therefore, not discussed here. Structured uncertainty of the block diagonal type is typically handled using µ-synthesis. Unstructured uncertainty, on the other hand, is usually specified as norm-bounded perturbations that do not have any structure. A typical situation is illustrated in Figure 14.3, where the nominal plant P in Figure 14.1 has been replaced by the nominal plant P together with an additive perturbation ∆. The uncertainty here is referred to as unstructured since no assumption is made about the structure of ∆. Instead, the only property characterizing ∆ is its norm, evaluated by considering it to be a linear map from its input to its output. From the
Figure 14.3 A feedback control system with perturbed plant. [The loop of Figure 14.1 with an additive perturbation ∆ around the plant P; d_n denotes the sensor noise.]
preceding discussion, it is clear that in order to study norm-based optimal control and robustness in the presence of unstructured uncertainty, one must be familiar with norms. Accordingly, in Appendix A we introduce vector spaces and norms and build up the machinery that will subsequently be used in Appendix B to define the norms of linear systems when considered as mappings from their inputs to their outputs. Both these appendices are quite detailed, with many illustrative examples, and readers lacking prior exposure to this material are strongly advised to digest the contents of these appendices before proceeding to the rest of this chapter.
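The sensitivity/complementary-sensitivity tradeoff described earlier in this section can be checked numerically. The sketch below assumes an illustrative first-order plant P(s) = 1/(s + 1) and a proportional controller C(s) = K (neither appears in the text); it evaluates S and T on a frequency grid, confirms S + T = 1 pointwise, and shows that raising the loop gain shrinks |S| while necessarily keeping |T| near 1:

```python
import numpy as np

# Illustrative example (assumed, not from the text): P(s) = 1/(s+1), C(s) = K.
def S(s, K):
    # Sensitivity S = 1/(1 + P C)
    return 1.0 / (1.0 + K / (s + 1.0))

def T(s, K):
    # Complementary sensitivity T = P C/(1 + P C)
    return (K / (s + 1.0)) / (1.0 + K / (s + 1.0))

w = np.logspace(-2, 2, 200)          # frequency grid (rad/s)
s = 1j * w
for K in (1.0, 10.0, 100.0):
    # S + T = 1 pointwise, so shrinking |S| necessarily grows |T|
    assert np.allclose(S(s, K) + T(s, K), 1.0)
    print(f"K={K:6.1f}  max|S|={np.abs(S(s, K)).max():.3f}  "
          f"max|T|={np.abs(T(s, K)).max():.3f}")
```

Increasing K improves low-frequency tracking and disturbance rejection (smaller |S|) but cannot reduce |T| at the same frequencies, which is exactly the tradeoff the text identifies.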
14.2 The Small Gain Theorem
Many robustness results in automatic control are based on the Small Gain Theorem. Consider the standard feedback interconnection shown in Figure 14.4.

Figure 14.4 A standard feedback system. [Operators H1 (forward path) and H2 (feedback path), with exogenous inputs u1, u2, errors e1, e2, and outputs y1, y2.]
The small gain theorem provides sufficient conditions under which the boundedness of the exogenous signals u1 and u2 (in some sense) guarantees the boundedness of the signals e1, e2, y1, y2 (in the same sense). The small gain theorem, however, does not answer any questions about whether the feedback interconnection is well posed and whether the solutions to the system exist and are unique. To formally state the small gain theorem, we need to introduce a general framework and the notions of truncations, extended spaces, and causality. The general framework that we now introduce is valid for continuous-time systems, discrete-time systems, distributed parameter systems, and lumped parameter systems. Let

T : a subset of R+ (typically T = R+ or Z+),
V : a normed space with norm ‖·‖ (typically, V = R, R^n, C, or C^n),
F : the set of all functions mapping T into V, that is, {f : T → V}.

The function space F is a natural linear space over C (or R) under pointwise addition and scalar multiplication, which are defined as follows:

(f + g)(t) = f(t) + g(t), ∀ f, g ∈ F, ∀ t ∈ T
(αf)(t) = αf(t), ∀ f ∈ F, ∀ t ∈ T, ∀ α ∈ C (or R).
For each T ∈ T, let P_T be the map of F into F such that, with f_T = P_T f, we have

f_T(t) = f(t) for t ≤ T, and f_T(t) = θ_v for t > T  (t, T ∈ T)    (14.3)
where θ_v is the zero vector in V.

Example 14.1 Show that P_T is a linear map.
Solution: Now

P_T(αf + βg)(t) = (αf + βg)(t) if t ≤ T, and 0 otherwise
               = αf(t) + βg(t) if t ≤ T, and 0 otherwise
               = αP_T(f)(t) + βP_T(g)(t), ∀ t ≥ 0.

Thus, P_T(αf + βg) = αP_T f + βP_T g, which shows that P_T is a linear map. Note that P_T² = P_T, so P_T is a projection map on F. We say that f_T is obtained by truncating f at T. Introduce a norm ‖·‖ on F; this defines a normed linear subspace L of the linear space F:
L = {f : T → V | ‖f‖ < ∞}    (14.4)

(Typically ‖f‖ = ∫₀^∞ ‖f(t)‖ dt or ‖f‖ = Σ_{k=0}^∞ ‖f(k)‖.) Associated with the normed space L is the extended space L_e defined by

L_e = {f : T → V | ∀ T ∈ T, ‖f_T‖ < ∞}.    (14.5)
We shall impose throughout the following requirements on the norm ‖·‖ that is used to define L and L_e: (i) ∀ f ∈ L_e, the map T ↦ ‖f_T‖ is monotonically increasing; and (ii) ∀ f ∈ L, ‖f_T‖ → ‖f‖ as T → ∞. With these two assumptions, L and L_e can be reinterpreted as follows: f belongs to L if and only if the real-valued function T ↦ ‖f_T‖ is bounded on T; and f ∈ L_e if and only if T ↦ ‖f_T‖ maps T into R. If V is either R or R^n, and if ‖·‖ is an L_p norm, we denote L by L_p or L_p^n; similarly, L_e is denoted by L_pe or L_pe^n. For the discrete-time case, we write l_p, l_p^n, l_pe, and l_pe^n.

Example 14.2
(a) Take L to be L_∞. Let f : t → e^{t²}. Is f ∈ L_∞? Is f ∈ L_∞e?
(b) Give examples of sequences in l_1e but not in l_1; in l_∞e but not in l_∞.
Solution: (a) f ∉ L_∞ since |f(t)| = e^{t²} → ∞ as t → ∞. However, f ∈ L_∞e since ‖f_T‖_∞ = e^{T²} < ∞ for all finite T.
(b) The sequence x = (1, 1/2, 1/3, 1/4, · · ·) is in l_1e but not in l_1. The sequence x = (1, 2, 3, 4, 5, · · ·) is in l_∞e but not in l_∞.

Example 14.3 Consider the restriction of P_T to L and call it still P_T. Show that, in terms of induced norms, ‖P_T‖ ≤ 1. Thus, P_T is a bounded linear map, and its operator norm is at most equal to 1.
Solution: Now

(P_T x)(t) = x(t) for 0 ≤ t ≤ T, and 0 otherwise,

so that ‖P_T x‖ = ‖x_T‖. Since the map T ↦ ‖x_T‖ is monotonically increasing and, ∀ x ∈ L, ‖x_T‖ → ‖x‖ as T → ∞, we must have

‖x_T‖ ≤ ‖x‖, ∀ x ∈ L, ∀ T ∈ [0, ∞).

Thus ‖P_T x‖ ≤ ‖x‖ ∀ x ∈ L, which gives ‖P_T‖ ≤ 1 (using the definition of the induced norm).
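The truncation map and the two requirements imposed on the norm can be exercised concretely. The following finite-dimensional sketch (sequences of fixed length with the l1 norm; all specific choices are assumptions for illustration) verifies that P_T is a linear projection, that T ↦ ‖f_T‖ is monotonically increasing, and that ‖f_T‖ ≤ ‖f‖, as in Example 14.3:

```python
import numpy as np

# Finite-dimensional sketch of the truncation operator P_T on sequences
# f : Z+ -> R, represented by their first N samples (an assumption made
# for illustration; the text allows arbitrary T in R+ or Z+).
def truncate(f, T):
    fT = f.copy()
    fT[T + 1:] = 0.0          # f_T(t) = f(t) for t <= T, zero afterwards
    return fT

rng = np.random.default_rng(0)
f = rng.normal(size=20)
g = rng.normal(size=20)

for T in range(20):
    fT = truncate(f, T)
    # projection property: P_T^2 = P_T
    assert np.array_equal(truncate(fT, T), fT)
    # linearity: P_T(2f + 3g) = 2 P_T f + 3 P_T g
    assert np.allclose(truncate(2*f + 3*g, T), 2*fT + 3*truncate(g, T))
    # ||f_T||_1 is monotonically increasing in T and bounded by ||f||_1
    if T > 0:
        assert np.abs(truncate(f, T - 1)).sum() <= np.abs(fT).sum()
    assert np.abs(fT).sum() <= np.abs(f).sum()
print("truncation checks passed")
```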
Most models used in system theory are nonanticipative, so we define nonanticipative maps.

DEFINITION 14.1 Let H : L_e → L_e; H is said to be causal (nonanticipative) if and only if

P_T H P_T = P_T H, ∀ T ∈ T.    (14.6)
Example 14.4 Let H1, H2 : L_e → L_e be nonanticipative. Show that H1H2 is also nonanticipative.
Solution: Since H1 and H2 are nonanticipative,

P_T H1 P_T = P_T H1    (14.7)
P_T H2 P_T = P_T H2.    (14.8)

We need to show that P_T H1H2 P_T = P_T H1H2. Now

P_T H1H2 P_T = P_T H1 P_T H2 P_T (using (14.7))
             = P_T H1 P_T H2 (using (14.8))
             = P_T H1H2 (using (14.7) again).

Thus P_T H1H2 P_T = P_T H1H2, which shows that H1H2 is also nonanticipative.

Example 14.5 Consider an alternative definition of a nonanticipative map. Again let H : L_e → L_e; then H is said to be nonanticipative if and only if, ∀ T ∈ T and ∀ x, y ∈ L_e,

P_T x = P_T y ⇒ P_T Hx = P_T Hy.

Show that this definition is equivalent to the earlier one.
Solution: (a) Given P_T H P_T = P_T H, we show that P_T x = P_T y ⇒ P_T Hx = P_T Hy. Now

P_T H P_T x = P_T Hx
P_T H P_T y = P_T Hy.

Thus, if P_T x = P_T y, the left-hand sides of the two equations above are equal, so P_T Hx = P_T Hy, which is what we were required to prove.
(b) Given P_T x = P_T y ⇒ P_T Hx = P_T Hy, we show that P_T H P_T = P_T H. Now

P_T(P_T x) = P_T x, ∀ x ∈ L_e.

Thus, if we define y = P_T x, then we have P_T y = P_T x, so that

P_T Hy = P_T Hx ⇒ P_T H P_T x = P_T Hx, ∀ x ∈ L_e ⇒ P_T H P_T = P_T H,

and the proof is complete.
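Definition 14.1 can also be checked numerically. The sketch below builds two discrete-time convolution operators, one causal and one that looks two steps into the future (both illustrative choices, not from the text), and verifies the identity P_T H P_T = P_T H for the causal one and its failure for the anticipative one:

```python
import numpy as np

def truncate(x, T):
    xT = x.copy(); xT[T + 1:] = 0.0
    return xT

def make_H(h, offset):
    # (Hx)[n] = sum_k h[k] x[n - k + offset]; offset > 0 makes H peek at
    # future inputs, so the operator is anticipative.
    def H(x):
        full = np.convolve(h, x)
        y = full[offset:offset + len(x)]
        return np.pad(y, (0, len(x) - len(y)))
    return H

rng = np.random.default_rng(1)
x = rng.normal(size=30)
h = np.array([1.0, -0.5, 0.25])

H_causal = make_H(h, offset=0)      # uses only past and present inputs
H_noncausal = make_H(h, offset=2)   # uses inputs two steps in the future

T = 10
# Definition 14.1 holds for the causal operator ...
assert np.allclose(truncate(H_causal(truncate(x, T)), T),
                   truncate(H_causal(x), T))
# ... and fails for the anticipative one
assert not np.allclose(truncate(H_noncausal(truncate(x, T)), T),
                       truncate(H_noncausal(x), T))
print("causality checks passed")
```

This mirrors Example 14.6: a convolution operator is nonanticipative exactly when its impulse response vanishes on the negative axis.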
Example 14.6 Let h ∈ L_1(R) and u ∈ L_2(R). Show that the map H : L_2(R) → L_2(R) defined by H : u ↦ Hu = h ∗ u,

(Hu)(t) = ∫_{−∞}^{∞} h(t − τ) u(τ) dτ, t ∈ R,    (14.9)

is nonanticipative if and only if h(t) = 0 almost everywhere on (−∞, 0).
Solution: We will use the equivalent characterization of a nonanticipative map given in the last example: H : L_e → L_e is nonanticipative if and only if, ∀ T ∈ T and ∀ x, y ∈ L_e, P_T x = P_T y ⇒ P_T Hx = P_T Hy. Let T > 0 be arbitrary. Let u1, u2 ∈ L_2e be any two signals such that P_T u1 = P_T u2. Then we must show that P_T Hu1 = P_T Hu2. Now, from (14.9),

P_T Hu1 − P_T Hu2 = P_T ∫_{−∞}^{∞} h(t − τ) u1(τ) dτ − P_T ∫_{−∞}^{∞} h(t − τ) u2(τ) dτ
= P_T ∫_{−∞}^{∞} h(t − τ)[u1(τ) − u2(τ)] dτ (using the fact that P_T is a linear map)
= P_T [ ∫_{−∞}^{T} h(t − τ)[u1(τ) − u2(τ)] dτ + ∫_{T}^{∞} h(t − τ)[u1(τ) − u2(τ)] dτ ]
= P_T ∫_{T}^{∞} h(t − τ)[u1(τ) − u2(τ)] dτ (since P_T u1 = P_T u2 implies that the first integral is zero),

which equals ∫_{T}^{∞} h(t − τ)[u1(τ) − u2(τ)] dτ for t ≤ T, and 0 for t > T. Thus, for H to be causal we must have

∫_{T}^{∞} h(t − τ)[u1(τ) − u2(τ)] dτ = 0, ∀ t ≤ T, ∀ T ≥ 0.

Since the above relationship must hold for any u1, u2 satisfying P_T u1 = P_T u2, we can always choose u1, u2 such that u1(τ) ≠ u2(τ) a.e. for τ ∈ (T, ∞). This means that

h(t − τ) = 0 a.e., ∀ t ≤ T and τ ∈ (T, ∞) ⇔ h(t) = 0 a.e. on (−∞, 0).
Example 14.7 Let H1, H2 : L_e → L_e. If H1 is a linear map and if (I + H1H2)^{−1} and (I + H2H1)^{−1} are well-defined maps from L_e into L_e, show that

H1(I + H2H1)^{−1} = (I + H1H2)^{−1} H1.    (14.10)

Solution: Let x ∈ L_e be arbitrary. Then

H1(I + H2H1)x = H1(x + H2H1x) = H1(x) + H1H2H1x (since H1 is linear) = (I + H1H2)H1x.

Hence,

H1(I + H2H1)x = (I + H1H2)H1x, ∀ x ∈ L_e.    (14.11)

Now let y ∈ L_e be given. Define x = (I + H2H1)^{−1} y. Clearly x ∈ L_e since we are given that (I + H2H1)^{−1} : L_e → L_e. Thus, from (14.11), we obtain

H1 y = (I + H1H2) H1 (I + H2H1)^{−1} y.

Now, operating on both sides by (I + H1H2)^{−1}, we get

(I + H1H2)^{−1} H1 y = H1 (I + H2H1)^{−1} y,

and this is true ∀ y ∈ L_e. Therefore H1(I + H2H1)^{−1} = (I + H1H2)^{−1} H1, as desired.

Example 14.8

Figure 14.5 A standard feedback system. [The system of Figure 14.4 with u2 = 0.]
Consider the feedback system shown in Figure 14.5, which is obtained by setting u2 = 0 in the feedback system shown in Figure 14.4. Suppose that (I + H2H1)^{−1} is a well-defined map from L_e into L_e; then show that

y1 = H1(I + H2H1)^{−1} u1, ∀ u1 ∈ L_e.

Solution: From the figure, we have

e1 = u1 − H2H1e1    (14.12)
y1 = H1e1.    (14.13)

From (14.12), we get (I + H2H1)e1 = u1, so that e1 = (I + H2H1)^{−1} u1. Substituting this in (14.13), we obtain

y1 = H1(I + H2H1)^{−1} u1, ∀ u1 ∈ L_e,
which is the desired relationship.

We are now ready to state and prove the Small Gain Theorem.

THEOREM 14.1 (Small Gain Theorem)
Consider the feedback system shown in Figure 14.4, which is reproduced in Figure 14.6 for ease of presentation.

Figure 14.6 A standard feedback system.

Let H1, H2 : L_e → L_e. Let e1, e2 ∈ L_e and define u1 and u2 by

u1 = e1 + H2e2
u2 = e2 − H1e1.

Suppose that there are constants β1, β2, γ1 ≥ 0, γ2 ≥ 0 such that

‖(H1e1)_T‖ ≤ γ1‖e1T‖ + β1, ∀ T ∈ T
‖(H2e2)_T‖ ≤ γ2‖e2T‖ + β2, ∀ T ∈ T.

Under these conditions, if γ1γ2 < 1, then

(i) ‖e1T‖ ≤ (1 − γ1γ2)^{−1} (‖u1T‖ + γ2‖u2T‖ + β2 + γ2β1), ∀ T ∈ T    (14.14)
    ‖e2T‖ ≤ (1 − γ1γ2)^{−1} (‖u2T‖ + γ1‖u1T‖ + β1 + γ1β2), ∀ T ∈ T    (14.15)

(ii) if, in addition, ‖u1‖, ‖u2‖ < ∞, then e1, e2, y1, y2 have finite norms, and the norms of the errors, namely ‖e1‖ and ‖e2‖, are bounded by the right-hand sides of (14.14) and (14.15), provided all subscripts T are dropped.

PROOF
Now

u1 = e1 + H2e2    (14.16)
u2 = e2 − H1e1    (14.17)
‖(H1e1)_T‖ ≤ γ1‖e1T‖ + β1    (14.18)
‖(H2e2)_T‖ ≤ γ2‖e2T‖ + β2.    (14.19)

From (14.16), ∀ T ∈ T, we have e1T = u1T − (H2e2)_T. Since all vectors belong to L_e,

‖e1T‖ ≤ ‖u1T‖ + ‖(H2e2)_T‖    (14.20)
      ≤ ‖u1T‖ + γ2‖e2T‖ + β2, ∀ T ∈ T (using (14.19)).    (14.21)

We have a similar calculation from (14.17):

‖e2T‖ ≤ ‖u2T‖ + ‖(H1e1)_T‖    (14.22)
      ≤ ‖u2T‖ + γ1‖e1T‖ + β1, ∀ T ∈ T.    (14.23)

Hence, using the fact that γ2 ≥ 0,

‖e1T‖ ≤ ‖u1T‖ + γ2[‖u2T‖ + γ1‖e1T‖ + β1] + β2

or

(1 − γ1γ2)‖e1T‖ ≤ ‖u1T‖ + γ2‖u2T‖ + β2 + γ2β1,

so that

‖e1T‖ ≤ (‖u1T‖ + γ2‖u2T‖ + β2 + γ2β1)(1 − γ1γ2)^{−1}.    (14.24)

The rest of the proof follows immediately.
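The bound (14.14) can be sanity-checked numerically. The sketch below uses two illustrative causal operators, each a gain composed with a one-step delay (so γ_i equals the gain and β_i = 0, with γ1γ2 < 1; these choices are assumptions, not from the text), solves the feedback equations by fixed-point iteration, and verifies that ‖e1‖ satisfies the small gain bound:

```python
import numpy as np

# H_i = gain g_i composed with a one-step delay, acting on l2 sequences;
# its l2 gain is exactly g_i and its bias beta_i is 0.
def H(g):
    return lambda x: g * np.concatenate(([0.0], x[:-1]))

g1, g2 = 0.6, 0.9          # loop gain g1*g2 = 0.54 < 1
H1, H2 = H(g1), H(g2)

rng = np.random.default_rng(2)
u1 = rng.normal(size=50)
u2 = rng.normal(size=50)

# Solve e1 = u1 - H2 e2, e2 = u2 + H1 e1 by fixed-point iteration;
# g1*g2 < 1 makes the loop a contraction, so the iteration converges.
e1 = np.zeros(50); e2 = np.zeros(50)
for _ in range(200):
    e1 = u1 - H2(e2)
    e2 = u2 + H1(e1)

norm = lambda x: np.sqrt(np.sum(x**2))
bound = (norm(u1) + g2*norm(u2)) / (1.0 - g1*g2)   # RHS of (14.14)
assert norm(e1) <= bound + 1e-9
print(f"||e1|| = {norm(e1):.3f} <= small-gain bound = {bound:.3f}")
```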
Given an operator H1 : L_e → L_e, suppose there are real numbers β̄1 and γ̄1 such that

‖(H1x)_T‖ ≤ γ̄1‖x_T‖ + β̄1, ∀ x ∈ L_e, ∀ T ∈ T.    (14.25)

Clearly, γ̄1 is not uniquely defined by the above inequality. Intuitively, it is clear that we are interested in the smallest γ̄1 that “works.” More precisely, we say the gain of H1 is the number γ(H1) defined by

γ(H1) = inf { γ̄1 ∈ R+ | ∃ β̄1 such that inequality (14.25) holds }.
This infimum will often be denoted by γ(H1) or γ1. With this terminology, the above theorem can be interpreted to state that if γ(H1), the gain of H1, and γ(H2), the gain of H2, have a product smaller than 1, then, provided a solution exists, any bounded input pair (u1, u2) produces a bounded output pair, and the map (u1, u2) → (y1, y2) also has finite gain. The statement of the small gain theorem requires that the norms of the truncations of the outputs of the operators H1 and H2 can be bounded using the norms of the corresponding truncations of their respective inputs. The following example shows that for causal H1 and H2, any bound that may be valid in the absence of truncations will also work when truncations are used.

Example 14.9 Show that if H1 is causal, then

‖H1x‖ ≤ γ̃1‖x‖ + β1, ∀ x ∈ L    (14.26)

implies

‖(H1x)_T‖ ≤ γ̃1‖x_T‖ + β1.    (14.27)

Solution: Now

‖(H1x)_T‖ = ‖P_T(H1x)‖    (14.28)
          = ‖P_T H1 P_T x‖ (by causality)    (14.29)
          ≤ ‖H1 P_T x‖ (since ‖P_T‖ ≤ 1 by Example 14.3)    (14.30)
          = ‖H1 x_T‖.    (14.31)

But from (14.26) it follows that

‖H1 x_T‖ ≤ γ̃1‖x_T‖ + β1, ∀ x ∈ L (substituting x = x_T into (14.26)).    (14.32)

Combining (14.31) and (14.32), we obtain ‖(H1x)_T‖ ≤ γ̃1‖x_T‖ + β1, and this completes the proof.
The following example shows that if u2 ≡ 0 then, in the statement of the Small Gain Theorem, there is no need to separate the gains of H1 and H2. In this case, once the “loop gain” is smaller than 1, the closed-loop system has finite gain.

Example 14.10 For the feedback system shown in Figure 14.5, suppose H1, H2 : L_e → L_e. Let e1 ∈ L_e and define u1 by

u1 = e1 + H2e2
e2 = H1e1.

Suppose that there are constants γ21, γ1, β21, and β1, with γ21 ≥ 0, γ1 ≥ 0, such that

‖(H2H1e1)_T‖ ≤ γ21‖e1T‖ + β21
‖(H1e1)_T‖ ≤ γ1‖e1T‖ + β1,  ∀ T ∈ T.

Under these conditions, if γ21 < 1, then show that

‖e1T‖ ≤ [1/(1 − γ21)] [‖u1T‖ + β21]
‖y1T‖ ≤ [γ1/(1 − γ21)] [‖u1T‖ + β21] + β1.

Solution: Now

e1 = u1 − H2e2    (14.33)
e2 = H1e1    (14.34)
‖(H2H1e1)_T‖ ≤ γ21‖e1T‖ + β21    (14.35)
‖(H1e1)_T‖ ≤ γ1‖e1T‖ + β1.    (14.36)

From (14.33) and (14.34), we have e1 = u1 − H2H1e1, so that

‖e1T‖ ≤ ‖u1T‖ + ‖(H2H1e1)_T‖    (14.37)
      ≤ ‖u1T‖ + γ21‖e1T‖ + β21 (using (14.35)),

which gives (1 − γ21)‖e1T‖ ≤ ‖u1T‖ + β21, so that

‖e1T‖ ≤ [1/(1 − γ21)] [‖u1T‖ + β21] (since γ21 < 1).    (14.38)

Also, y1 = H1e1 implies

‖y1T‖ = ‖(H1e1)_T‖ ≤ γ1‖e1T‖ + β1 (by (14.36))
      ≤ [γ1/(1 − γ21)] [‖u1T‖ + β21] + β1 (using (14.38)).

Thus

‖y1T‖ ≤ [γ1/(1 − γ21)] [‖u1T‖ + β21] + β1.    (14.39)

The inequalities (14.38) and (14.39) are the desired relationships.
14.3 L-Stability and Robustness via the Small Gain Theorem

The small gain theorem gives us an intuitive feel for the stability of a feedback system since it gives us sufficient conditions under which the “boundedness” of the exogenous signals guarantees the “boundedness” of all the closed-loop signals. To formalize this notion, we introduce the definition of L-stability.

DEFINITION 14.2 The feedback system shown in Figure 14.6 is said to be L-stable if (i) u1, u2 ∈ L ⇒ y1, y2, e1, e2 ∈ L; and (ii) ‖e_i‖ ≤ k(‖u1‖ + ‖u2‖), ‖y_i‖ ≤ k(‖u1‖ + ‖u2‖), i = 1, 2, where the constant k does not depend on u1, u2.

If the norm used to define the L space happens to be the L_p norm, then the corresponding L-stability is referred to as L_p-stability. We next study the L_p-stability of the feedback system shown in Figure 14.7, which arises in many control applications. From Figure 14.7, we obtain e = r − Hy and y = Gr − GHy. Truncating signals on both sides of the second equation and taking L_p-norms, we obtain

‖y_T‖_p ≤ ‖(Gr)_T‖_p + ‖(GHy)_T‖_p
⇒ ‖y_T‖_p ≤ ‖GH‖_A · ‖y_T‖_p + ‖G‖_A · ‖r_T‖_p (using Theorem B.11).
Figure 14.7 A feedback system (G(s), H(s) are open-loop stable transfer functions).

Thus, the closed-loop system is L_p-stable ∀ p ∈ [1, ∞] if

‖GH‖_A < 1.    (14.40)

Furthermore, if (14.40) is satisfied then we have

‖y_T‖_p ≤ [1/(1 − ‖GH‖_A)] · ‖G‖_A · ‖r_T‖_p.    (14.41)
The calculation of ‖·‖_A is difficult. Also, the condition (14.40) cannot be easily verified in the frequency domain. If, however, we choose p = 2, then

‖y_T‖_2 ≤ ‖GH‖_{H∞} · ‖y_T‖_2 + c.    (14.42)

Thus, the closed-loop system is L_2-stable if

max_ω |GH(jω)| < 1.    (14.43)

The condition (14.43) can be easily checked in the frequency domain. However, we can now conclude only L_2-stability. In the real world, we would like to be able to establish L_∞-stability by verifying a frequency domain condition. This is made possible by the technique of exponential weighting, which is discussed next. The technique of exponential weighting is based on the following two facts, which can be easily verified.

Fact 1: If y(t) = g(t) ∗ u(t), then y(t)e^{at} = g(t)e^{at} ∗ u(t)e^{at}.
Fact 2: If L[g(t)] = G(s), then L[g(t)e^{at}] = G(s − a).

By weighting all the signals in Figure 14.7 with e^{at} and making use of the two facts above, we obtain the exponentially weighted system shown in Figure 14.8. In this figure, the exponential e is denoted by the letter ǫ in order to avoid any potential confusion with the error signal e. We can now state the following theorem, which basically says that L_2-stability with exponential weighting implies L_∞-stability.

THEOREM 14.2 If the exponentially weighted system in Figure 14.8 is L_2-stable for some a > 0, then the original system in Figure 14.7 is L_∞-stable; that is, r ∈ L_∞ ⇒ y ∈ L_∞ and ‖y‖_∞ ≤ c‖r‖_∞, for some c > 0 which is independent of r.
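Fact 1 has an exact discrete-time counterpart that is easy to verify: if y[n] = (g ∗ u)[n], then y[n] rⁿ = (g[n] rⁿ) ∗ (u[n] rⁿ) with r = e^a. The following sketch (illustrative random data, not from the text) checks this identity:

```python
import numpy as np

# Discrete-time analogue of Fact 1: weighting commutes with convolution
# when every factor is weighted, since
#   r^n * sum_k g[k] u[n-k] = sum_k (g[k] r^k)(u[n-k] r^{n-k}).
rng = np.random.default_rng(3)
g = rng.normal(size=8)     # impulse response (illustrative)
u = rng.normal(size=12)    # input signal (illustrative)
y = np.convolve(g, u)

a = 0.3
r = np.exp(a)
n_y = np.arange(len(y)); n_g = np.arange(len(g)); n_u = np.arange(len(u))

lhs = y * r**n_y                               # y[n] e^{an}
rhs = np.convolve(g * r**n_g, u * r**n_u)      # (g e^{an}) * (u e^{an})
assert np.allclose(lhs, rhs)
print("weighted convolution identity verified")
```

Fact 2 is the Laplace-domain statement of the same shift: multiplying an impulse response by e^{at} moves every pole of G(s) to the right by a, which is why the weighted loop in Figure 14.8 uses G(s − a) and H(s − a).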
Figure 14.8 Exponentially weighted feedback system. [Figure 14.7 with G(s), H(s) replaced by G(s − a), H(s − a) and the signals r, e, y weighted by ǫ^{at}.]
PROOF Now, L_2-stability of the exponentially weighted system implies that there exists c > 0 such that

‖(y e^{aτ})_t‖_2 ≤ c‖(r e^{aτ})_t‖_2, ∀ t > 0.    (14.44)

Let r ∈ L_∞. We must show that y ∈ L_∞. Now, from Figure 14.7, we have

y = Gr − GHy = Gr − G1 y, where G1 = GH,

so that

|y(t)| ≤ ‖g‖_1 · ‖r‖_∞ + ∫₀^t |g1(t − τ)||y(τ)| dτ (since G1(s) is stable)
      = ‖g‖_1 · ‖r‖_∞ + e^{−at} ∫₀^t |g1(t − τ)e^{a(t−τ)}| · |y(τ)e^{aτ}| dτ
      ≤ ‖g‖_1 · ‖r‖_∞ + e^{−at} · ‖g1 e^{aτ}‖_2 · ‖(y e^{aτ})_t‖_2
        (using the Cauchy–Schwarz inequality and assuming g1(t)e^{at} ∈ L_2)
      ≤ ‖g‖_1 · ‖r‖_∞ + e^{−at} · ‖g1 e^{aτ}‖_2 · c‖(r e^{aτ})_t‖_2 (using (14.44)).    (14.45)

Now

‖(r e^{aτ})_t‖_2 = [ ∫₀^t r²(τ) e^{2aτ} dτ ]^{1/2}    (14.46)
              ≤ ‖r‖_∞ [ e^{2at} ∫₀^t e^{−2a(t−τ)} dτ ]^{1/2}    (14.47)
              ≤ ‖r‖_∞ · e^{at} · [1/(2a)]^{1/2} (since a > 0).    (14.48)–(14.49)

Therefore, from (14.45),

|y(t)| ≤ ‖g‖_1 ‖r‖_∞ + c‖g1 e^{aτ}‖_2 · [1/(2a)]^{1/2} ‖r‖_∞    (14.50)
⇒ ‖y‖_∞ ≤ ‖g‖_1 ‖r‖_∞ + c‖g1 e^{aτ}‖_2 · [1/(2a)]^{1/2} ‖r‖_∞,    (14.51)

so the given system is L_∞-stable.

The following lemma provides a sufficient frequency domain condition for ascertaining the L_2-stability of the exponentially weighted system shown in Figure 14.8.

LEMMA 14.1 A sufficient condition for the L_2-stability of the exponentially weighted system in Figure 14.8 is ‖GH(s − a)‖_{H∞} < 1.

PROOF
Now, from Figure 14.8,

y e^{at} = G(s − a)[r e^{at}] − GH(s − a)[y e^{at}]
⇒ ‖(y e^{at})_T‖_{L2} ≤ ‖G(s − a)‖_{H∞} · ‖(r e^{at})_T‖_{L2} + ‖GH(s − a)‖_{H∞} · ‖(y e^{at})_T‖_{L2}.

If ‖GH(s − a)‖_{H∞} < 1, then

‖(y e^{at})_T‖_{L2} ≤ [ ‖G(s − a)‖_{H∞} / (1 − ‖G(s − a)H(s − a)‖_{H∞}) ] ‖(r e^{at})_T‖_{L2}.

Thus, if ‖GH(s − a)‖_{H∞} < 1, that is, if the shifted H∞-norm of the loop transfer function is less than 1, then the exponentially weighted system is L_2-stable.

An important application of the small gain theorem is in the derivation of robustness conditions for a nominally stable closed-loop system that is being perturbed by norm-bounded uncertainties. To illustrate the steps involved, we derive the robustness condition for one particular type of uncertainty called additive uncertainty. Towards this end, consider the feedback control system shown in Figure 14.9. Here P0(s) represents the nominal plant and C(s) the feedback controller. Due to modelling errors, the actual plant is the sum of the nominal plant P0(s) and an additive perturbation† ∆a(s). Let us assume that ∆a(s) is stable. Now the controller C(s) is usually designed based on P0(s) so that

P0(s)C(s) / (1 + P0(s)C(s))

† The choice of an additive perturbation is without any loss of generality; indeed, similar conditions can be developed for other types of perturbations such as multiplicative perturbations [197].
Figure 14.9 A feedback system with additive plant uncertainty.
is a stable transfer function, by design. The problem of interest now is to find conditions on ∆a(s) to guarantee that the closed-loop system remains stable. In what follows, we show that such a condition can be developed quite easily by using small gain type arguments. From Figure 14.9, we have

y = P0(s)u + ∆a(s)u
u = C(s)[r − y]
⇒ u = C(s)[r − P0(s)[u] − ∆a(s)[u]]

or

[1 + P0(s)C(s)][u] = C(s)r − C(s)∆a(s)[u]

or

u = [C(s)/(1 + P0(s)C(s))][r] − [C(s)/(1 + P0(s)C(s))]∆a(s)[u].

Thus,

‖u_t‖_2 ≤ c‖r_t‖_2 + ‖[C(s)/(1 + P0(s)C(s))]∆a(s)‖_{H∞} · ‖u_t‖_2.

So the given system is L_2-stable provided

‖[C(s)/(1 + P0(s)C(s))]∆a(s)‖_{H∞} < 1.

where γ > 0 is some constant. This problem can be converted into a standard H∞ control problem by absorbing the factor 1/γ in Ty1u1. Solving this standard problem, and progressively reducing γ at each stage, we will ultimately reach a stage when the standard H∞ control problem can no longer be solved. At that stage, we can say that the terminal value of γ is very close to the minimal H∞ norm. This technique is called the γ-iteration technique.

We next present two examples to show how the closed-loop disturbance rejection problem and the robust stability problem can be accommodated within the H∞ framework.

Example 14.11 (Closed-Loop Disturbance Response for the Single-Input Single-Output Plant Case) Consider the feedback control configuration shown in Figure 14.15. Here d is a disturbance that enters the system at the plant output. Let yCL and yOL denote the disturbance responses at the plant output in the closed-loop and open-loop configurations. Then clearly,
yCL = [1/(1 + GF)] d    (14.85)
yOL = d.    (14.86)

Figure 14.15 Disturbance attenuation using feedback control.
Thus

yCL = [1/(1 + GF)] yOL,    (14.87)

so that

‖yCL‖_2 ≤ ‖1/(1 + GF)‖_∞ · ‖yOL‖_2.    (14.88)

Thus, by using feedback control and making

‖1/(1 + GF)‖_∞ < 1,

we can attenuate the effect of disturbances (plant uncertainty) on the plant output. Here, the attenuation is measured in terms of the energy in the signal before and after feedback. Note that Ter = Tyd = 1/(1 + GF) = S, the Sensitivity function. Thus, good tracking goes hand in hand with good disturbance rejection, and both require that the Sensitivity function be small.

Example 14.12 (Robust Stability for the Single-Input Single-Output Plant Case) Consider the feedback control system shown in Figure 14.16. Here the nominal plant Go(s) is perturbed by a multiplicative perturbation ∆m(s).
Figure 14.16 Stability robustness of feedback control. [Unity-feedback loop with controller F and nominal plant G0 perturbed multiplicatively by ∆m; v and w denote the signals at the perturbation's terminals.]
OPTIMAL AND ROBUST CONTROL
Note that in the absence of ∆m (s), Tyr =
Go F = T, the Complementary Sensitivity Function. 1 + Go F (14.89)
Furthermore, the transfer function seen by ∆m (s), Tvw can be computed as follows: v = −Go F (v + w) (14.90) (1 + Go F )v = −Go F w so that Tvw = −
(14.91)
Go F = −T. 1 + Go F
(14.92)
Assuming that ∆m (s) is stable, it follows from the small gain theorem that if the controller F (s) stabilizes the nominal plant Go (s), then the closed-loop system with the multiplicative perturbation will continue to be stable provided k∆m (s)T (s)k∞ < 1. Thus, choosing F (s) to make T (s) small will enhance the robustness of the design. Since T +S =1 both T and S cannot be simultaneously made small. This brings us to a fundamental tradeoff in feedback design. However, fortunately, good tracking and good disturbance rejection are required in the low frequency range while good stability margin is required in the high frequency range where there is maximum model uncertainty. Thus, the tradeoff between performance and robustness can be achieved by assigning frequency dependent weights to the Sensitivity and Complementary Sensitivity functions. Suppose it is desired that σ ¯ [S(jω)] ≤ |W1 (jω)|, ∀ω (14.93) where W1 (s) is a stable and minimum phase weighting function chosen by the designer to have a small magnitude response at low frequencies. Since the H∞ norm is the frequency domain supremum of the largest singular value of a transfer function matrix, it follows that the requirement (14.93) can be satisfied by imposing the H∞ -norm constraint
1
S (14.94)
W1 ≤ 1. ∞ Similarly, if it is desired that
σ ¯ [T (jω)] ≤ |W2 (jω)|,
∀ω
(14.95)
721
SISO H∞ AND l1 OPTIMAL CONTROL
where W2 (s) is an appropriately chosen weight, then we arrive at the H∞ norm constraint
1
(14.96)
W2 T ≤ 1. ∞ It can be shown that for any two matrices A and B having the same number of columns, n√ o √ A max{¯ σ (A), σ ¯ (B)} ≤ σ ¯ ≤ max 2¯ σ (A), 2¯ σ (B) . (14.97) B
Thus, to within a factor of 3dB, the requirements (14.94) and (14.96) can be combined into the single requirement:
1
W S
11 ≤ 1. (14.98)
W2 T ∞
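Inequality (14.97), which justifies combining the two weighted requirements at a cost of at most 3 dB, can be verified numerically on random matrices with the same number of columns (an illustrative check, not from the text):

```python
import numpy as np

# sigma_bar(M) = largest singular value of M
sig = lambda M: np.linalg.svd(M, compute_uv=False)[0]

rng = np.random.default_rng(4)
for _ in range(100):
    A = rng.normal(size=(3, 4))
    B = rng.normal(size=(2, 4))
    stacked = sig(np.vstack([A, B]))     # sigma_bar of [A; B]
    lo = max(sig(A), sig(B))
    hi = np.sqrt(2.0) * lo               # max{sqrt(2) sig(A), sqrt(2) sig(B)}
    # inequality (14.97)
    assert lo <= stacked + 1e-12 and stacked <= hi + 1e-12
print("inequality (14.97) verified on 100 random samples")
```

The upper bound is tight when A = B: stacking two copies of the same matrix scales the largest singular value by exactly √2, which is the 3 dB factor mentioned in the text.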
Figure 14.17 shows how the sensitivity and complementary sensitivity requirements (14.93), (14.95) can be converted into a standard H∞ problem involving an augmented plant P(s) = (Pij(s)), i, j = 1, 2.

Figure 14.17 Converting sensitivity and complementary sensitivity requirements into a standard H∞ problem. [Augmented plant with weighted outputs y1 = [y1a; y1b] obtained by passing e through 1/W1 and y through 1/W2, exogenous input u1, control input u2, measured output y2, and controller F closing the loop from y2 to u2.]
In addition to the disturbance rejection and robust stability problems outlined above, many control problems involving plant uncertainty can be put into the canonical robust control framework shown in Figure 14.18.

Figure 14.18 Canonical robust control configuration. [The nominal plant P0(s) in feedback with the controller F(s) and with the uncertain gains ∆; commands and disturbances enter the nominal plant and errors leave it.]

Here ∆ is a block diagonal matrix of perturbations (stable transfer function matrices),

∆ = diag(∆1, ∆2, ∆3, . . . , ∆n),

as shown in Figure 14.19, with each block satisfying the H∞-norm constraint ‖∆i‖_∞ < 1, that is, σ̄[∆i(jω)] < 1, ∀ ω.

Figure 14.19 Structure of the perturbation.
The following example shows how uncertainty in the transfer function coefficients of a plant can be modeled using the configuration of Figure 14.18.

Example 14.13 Consider the single-input single-output feedback control system shown in Figure 14.20, where the plant

(s + c)/(s² + as + b)

is in a unity-feedback loop with the controller F(s) and the plant parameters a, b, c are uncertain.

Figure 14.20 Feedback control system with uncertain plant parameters.
Specifically, a ≃ 0.5 ± 10%, b ≃ 1.0 ± 20%, c ≃ 2.0 ± 10%. By first realizing the nominal plant, that is, the plant with a = 0.5, b = 1.0, c = 2.0, using summers, multipliers and integrators, and then introducing the coefficient perturbations, we obtain the configuration shown in Figure 14.21.

Figure 14.21 A canonical realization of the feedback system of Figure 14.20. [State-variable realization built from integrators 1/s and the nominal gains 0.5, 1.0 and 2.0, with perturbation blocks ∆a, ∆b, ∆c entering through the scale factors 1/10, 1/5 and 1/10 that encode the percentage uncertainties.]
It is now clear that by pulling out ∆a , ∆b and ∆c and defining ∆ = diag(∆a , ∆b , ∆c ), the feedback system in Figure 14.21 can be put into the form of the configuration shown in Figure 14.18, where each component of ∆
satisfies the required H∞ norm constraint. Once this is done, the small gain theorem can be used to conclude that if the nominal feedback system is stable, then the perturbed feedback system is stable provided

‖∆‖_∞ · ‖T‖_∞ < 1.    (14.99)

Although the small gain theorem provides only a sufficient condition for stability, it can be shown that when no constraints are placed on the structure of ∆, the small gain condition is in fact both necessary and sufficient for robust stability in the following sense. If the small gain condition (14.99) is violated, then one can find a stable rational ∆(s) with

‖∆(s)‖_∞ = 1/‖T‖_∞

which destabilizes the loop. In other words, the size of the smallest destabilizing ∆ is given by

‖∆(s)‖_∞ = 1/‖T(s)‖_∞.    (14.100)

Another way to look at this relationship is that when the structure of ∆(s) is not constrained, the supremum of the largest singular value of T equals the reciprocal of the size of the smallest destabilizing stable ∆. Note, however, that if one were to restrict the perturbations to be block diagonal, as in the canonical robust control problem, then the necessity disappears and the condition (14.99) becomes conservative. In this case, in the spirit of (14.100), one may define the Structured Singular Value µ to be

µ(T) = 1 / (size of the smallest destabilizing and diagonal ∆).

The exact computation of µ is very difficult. However, a lot of research has gone into calculating approximations to µ, and one such technique is called the D–K Iteration.

From Figure 14.18 and the small gain theorem, it is clear that once the feedback system has been put into the canonical robust control framework of Figure 14.18, the robust stability of the closed-loop system can be ensured by solving a standard H∞ problem, that is, choosing a stabilizing controller F(s) for the nominal plant to make the H∞ norm of a particular transfer function matrix less than one. The same technique can also be extended to solve the so-called performance robustness problem, as demonstrated below. Suppose we want to guarantee robust performance, that is, ‖Ty1u1‖_∞ ≤ 1 for all ∆1, ∆2, · · ·, ∆n satisfying ‖∆i‖_∞ ≤ 1 (i = 1, 2, 3, · · ·, n). Introducing a fictitious ∆n+1 connecting y1 to u1, as shown in Figure 14.22, with ‖∆n+1‖_∞ ≤ 1, and defining ỹ1 = [y∆ᵀ, y1ᵀ]ᵀ, ũ1 = [u∆ᵀ, u1ᵀ]ᵀ, one can solve the standard H∞ problem (find a stabilizing F(s) to make ‖Tỹ1,ũ1‖_∞ ≤ 1) to obtain a solution
Figure 14.22 Converting the robust performance problem to an equivalent robust stability problem. [The canonical configuration of Figure 14.18 with the block-diagonal uncertainty diag(∆1, . . . , ∆n) augmented by the fictitious block ∆n+1 connecting y1 to u1.]

to the performance robustness problem. This shows that the performance robustness problem can be reduced to the robust stability problem for a new system with additional fictitious uncertainty.

Having shown that several control problems can be cast within the H∞ framework, we are now ready to present the solution to the single-input single-output H∞ optimal control problem. This will be carried out in the next section.
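The relationship (14.100) rests on computing ‖T‖∞. The following sketch approximates the computation by gridding the jω-axis for an illustrative loop G0(s)F(s) = 4/(s(s + 2)) (an assumed example, not from the text), for which the closed loop is a standard second-order system with damping ζ = 0.5 and known resonant peak 2/√3:

```python
import numpy as np

# Illustrative loop L(s) = G0(s)F(s) = 4/(s(s+2)); the complementary
# sensitivity is T(s) = L/(1+L) = 4/(s^2 + 2s + 4), a second-order
# system with wn = 2, zeta = 0.5.
def T(s):
    L = 4.0 / (s * (s + 2.0))
    return L / (1.0 + L)

w = np.logspace(-3, 3, 5000)
Tinf = np.abs(T(1j * w)).max()       # grid approximation of ||T||_Hinf

# Per (14.100), the smallest destabilizing unstructured Delta has size
# 1/||T||_Hinf.
print(f"||T||_Hinf ~ {Tinf:.3f}; smallest destabilizing "
      f"||Delta||_Hinf ~ {1.0/Tinf:.3f}")
assert abs(Tinf - 2.0/np.sqrt(3.0)) < 1e-3   # closed-form peak for zeta=0.5
```

Since ‖T‖∞ ≈ 1.155 > 1 here, any stable unstructured perturbation of H∞ norm ≈ 0.866 or larger could destabilize this particular loop, while all strictly smaller ones are tolerated.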
14.6  H∞ Optimal Control: SISO Case
Recall from Theorem 14.3 that if the plant P is expressed as the ratio of two stable rational transfer functions np and dp which are coprime, that is, P = np/dp, and x, y are stable rational transfer functions satisfying the Bezout identity x·np + y·dp = 1, then all stabilizing controllers C(s) are given by

C = (x + q·dp)/(y − q·np)
where q is any stable rational transfer function satisfying y − q·np ≠ 0. Consequently, we have

S = 1/(1 + PC) = 1/(1 + (np/dp)(nc/dc)) = dp·dc/(dp·dc + np·nc) = dp(y − q·np)

and

T = PC/(1 + PC) = (np/dp)(nc/dc)/(1 + (np/dp)(nc/dc)) = np·nc/(dp·dc + np·nc) = np(x + q·dp).

These two expressions imply that every RHP zero of the plant is also a RHP zero of T, and every RHP pole of the plant is also a RHP zero of S. In other words, T = 0 at every RHP zero of the plant, and S = 0 at every RHP pole of the plant. Since S + T = 1, it follows that (i) S = 1 at every RHP zero of the plant; and (ii) T = 1 at every RHP pole of the plant. Since T is also the transfer function from the command signal to the output, it is also clear that no stabilizing feedback controller can get rid of the RHP zeros of the plant.

The following theorem, which is an immediate consequence of the expression for S derived above, finds use in weighted sensitivity minimization problems.

THEOREM 14.4
Let S0(s) = (1 + P(s)C0(s))⁻¹ for some stabilizing controller C0(s). Then every S(s) realizable by a stabilizing controller may be written as

S(s) = S0(s) + np(s)q(s)dp(s)    (14.101)

for some stable transfer function q(s).

Since the sensitivity function S is required to be small only in the low-frequency range, making ‖S‖∞ ≤ 1 is not interesting from a practical point of view. Consequently, we focus on the weighted sensitivity function W(s)S(s) and try to design a stabilizing controller C(s) to either make ‖W(s)S(s)‖∞ ≤ 1 or minimize ‖W(s)S(s)‖∞. The following theorem, which is a result from complex variable theory, plays an important role in solving some H∞ norm minimization problems.

THEOREM 14.5 (Maximum Modulus Theorem)
Let G(s) be analytic in some closed region D as shown in Figure 14.23. Then |G(s)| takes its maximum value on the boundary ∂D of D.

COROLLARY 14.3
If G(s) is stable, then

‖G(s)‖∞ ≜ sup_ω |G(jω)| ≥ |G(s0)|    (14.102)
Figure 14.23 A closed region D in the complex plane.
for any s0 ∈ RHP. In particular, for G(s) = W(s)S(s) (which is stable provided the controller is stabilizing and W(s) is stable), we have

‖W(s)S(s)‖∞ ≥ |W(zi)S(zi)|  ∀ zi ∈ RHP.    (14.103)
Note that if zi happens to be a right-half plane zero of the plant, then S(zi) = 1, which means that ‖W(s)S(s)‖∞ ≥ |W(zi)|. Thus, the magnitude of the weighting function evaluated at each right-half plane plant zero provides a lower bound on the achievable H∞ norm of the weighted sensitivity function. If the controller C(s) can be designed to make ‖W(s)S(s)‖∞ equal to the largest of these lower bounds, we will in fact have obtained a solution to the weighted sensitivity minimization problem. This is illustrated using the following example.

Example 14.14
Minimize the weighted sensitivity W(s)S(s) for the plant P(s) = (s − z1)/d(s), where z1 > 0 and d(s) is stable. Thus, the plant has one RHP zero and no RHP poles.

Solution: We are interested in finding a stabilizing controller C(s) to solve the problem

min_{C(s)} ‖W(s)S(s)‖∞.

In view of Corollary 14.3, we have

‖W(s)S(s)‖∞ ≥ |W(z1)|.    (14.104)
So let us try

S(s) = W(z1)/W(s)    (14.105)

which yields W(s)S(s) = W(z1). This shows that the particular choice of S(s) does satisfy (14.104) and, in fact, the lower bound for ‖W(s)S(s)‖∞ is attained. It only remains to show that the sensitivity function given in (14.105) is realizable. This is indeed the case since S(z1) = 1, thereby implying that the S(s) given in (14.105) does satisfy the constraint imposed by the sole RHP zero at s = z1. Next, one can solve (1 + PC)⁻¹ = W(z1)/W(s) to obtain

C(s) = [(W(s) − W(z1))/W(z1)] · [d(s)/(s − z1)]

as the optimal controller. The frequency response plot of the optimal weighted sensitivity function is shown in Figure 14.24.
Figure 14.24 Frequency response plot of optimal weighted sensitivity function.
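The conclusion of Example 14.14 is easy to check numerically. The data below are hypothetical (z1 = 1 and a stable, minimum-phase weight W(s) = (s + 2)/(s + 10)); the check is that the optimal weighted sensitivity sits flat at the lower bound |W(z1)|:

```python
import numpy as np

# Hypothetical data for Example 14.14: one RHP plant zero z1 and a weight W
z1 = 1.0
W = lambda s: (s + 2.0) / (s + 10.0)   # stable, minimum phase

# Optimal sensitivity from (14.105): S(s) = W(z1)/W(s), so W(s)S(s) = W(z1)
S = lambda s: W(z1) / W(s)

w = np.logspace(-2, 3, 1000)
mag = np.abs(W(1j * w) * S(1j * w))    # |W(jw)S(jw)| on a frequency grid

# Flat response at |W(z1)| = 3/11, and the RHP-zero constraint S(z1) = 1 holds
print(mag.min(), mag.max(), W(z1), S(z1))
```

The flat magnitude plot is exactly the all-pass behavior discussed next.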
In the preceding example we observed that the frequency response of the optimal cost function is constant over all frequencies. This is a generic property of H∞ optimal controllers and to understand the reason behind it, we will have to introduce the notion of duality in optimization problems. To build up the necessary mathematical background, in the next three subsections we focus on dual spaces, inner product spaces, and the notions of orthogonality and alignment in normed spaces that do not have an inner product.
14.6.1  Dual Spaces
DEFINITION 14.7 A mapping T : X → Y, where X and Y are any two vector spaces, is called an operator. An operator T : X → R is called a functional and is usually denoted by the letter f.

We will consider linear functionals. Given a linear functional f defined on a normed linear space X, we define

‖f‖ = induced norm of f    (14.106)
    = sup_{x≠0} |f(x)|/‖x‖    (14.107)
    = sup_{‖x‖≤1} |f(x)|.    (14.108)

From (14.107), we also have

|f(x)| ≤ ‖f‖·‖x‖  ∀ x ∈ X.    (14.109)
Note that the above definitions are really not new. We have already encountered them in Appendix B; here they have simply been specialized to the case where the range of the mapping is the real line.

DEFINITION 14.8 A linear functional f is said to be bounded if ‖f‖ < ∞.

The set of all bounded linear functionals on X forms a normed linear space. This space is called the normed dual space of X and is denoted by X*.

Notation: Let x ∈ X and x* ∈ X*. Then x*(x) denotes the value of the functional x* evaluated at x. For reasons that will become clear later, we denote x*(x) by < x, x* >.

To get a better feel for dual spaces, we now consider some common Banach spaces and identify their duals.

14.6.1.1  Duals of Some Common Banach Spaces

Dual of Eⁿ: In the n-dimensional Euclidean space Eⁿ, each vector is an n-tuple of real scalars, that is, x = (ξ1, ξ2, ..., ξn)ᵀ with norm ‖x‖ = (Σ_{i=1}^{n} ξi²)^{1/2}. Let f be any bounded linear functional on Eⁿ and let ηi = f(ei), where ei is the ith basis vector. Then for any x = (ξ1, ξ2, ..., ξn)ᵀ we have

f(x) = f(Σ_{i=1}^{n} ξi ei)    (14.110)
     = Σ_{i=1}^{n} ξi f(ei)  (since f is linear)    (14.111)
     = Σ_{i=1}^{n} ξi ηi.    (14.112)

Thus,

|f(x)| ≤ Σ_{i=1}^{n} |ξi||ηi|    (14.113)
       ≤ (Σ_{i=1}^{n} |ξi|²)^{1/2} (Σ_{i=1}^{n} |ηi|²)^{1/2}  (by the Cauchy-Schwarz Inequality)    (14.114)
       = ‖x‖ (Σ_{i=1}^{n} |ηi|²)^{1/2}.    (14.115)

Hence,

‖f‖ ≤ (Σ_{i=1}^{n} |ηi|²)^{1/2}.    (14.116)

Choosing ξi = ηi, we can get equality, and ‖f‖ = (Σ_{i=1}^{n} ηi²)^{1/2}. Thus, corresponding to each f, ∃ y = (η1, η2, ..., ηn)ᵀ; moreover, ‖f‖ = ‖y‖ (Euclidean norm). Furthermore, for any vector y = (η1, η2, ..., ηn)ᵀ, define f(x) = Σ_{i=1}^{n} ξi ηi which, in view of the earlier analysis, is a bounded linear functional with ‖f‖ = ‖y‖. Thus, to every f there corresponds a y in Eⁿ and vice versa. In this sense, the dual of Eⁿ is Eⁿ.

Next we discuss the dual of lp, 1 ≤ p < ∞.

THEOREM 14.6
Every bounded linear functional f on lp, 1 ≤ p < ∞, is representable uniquely in the form

f(x) = Σ_{i=1}^{∞} ηi ξi    (14.117)

where y = {ηi} is an element of lq. Furthermore, every element of lq defines a member of (lp)* in this way, and we have

‖f‖ = ‖y‖q = (Σ_{i=1}^{∞} |ηi|^q)^{1/q} if 1 < p < ∞;  sup_k |ηk| if p = 1.    (14.118)
PROOF Suppose f is a bounded linear functional on lp. Define the element ei ∈ lp, i = 1, 2, 3, ..., as the sequence that is identically zero except for a 1 in the ith component. Define ηi = f(ei). Then for any x = {ξi} ∈ lp, we have

f(x) = f(Σ_{i=1}^{∞} ξi ei) = Σ_{i=1}^{∞} ξi ηi  (by the continuity of f).    (14.119)

Suppose first that 1 < p < ∞. For a given positive integer N, define the vector xN ∈ lp having components

ξi = |ηi|^{q/p} sgn(ηi) for i ≤ N,  ξi = 0 for i > N.    (14.120)

Then

‖xN‖ = (Σ_{i=1}^{N} |ηi|^q)^{1/p}    (14.121)

and

f(xN) = Σ_{i=1}^{N} |ηi|^{q/p + 1} = Σ_{i=1}^{N} |ηi|^q.    (14.122)

Thus,

|f(xN)| = Σ_{i=1}^{N} |ηi|^q    (14.123)
        = (Σ_{i=1}^{N} |ηi|^q)^{1 − 1/p} · (Σ_{i=1}^{N} |ηi|^q)^{1/p}    (14.124)
        = (Σ_{i=1}^{N} |ηi|^q)^{1/q} · ‖xN‖.    (14.125)

Thus,

‖f‖ ≥ (Σ_{i=1}^{N} |ηi|^q)^{1/q}  ∀ N    (14.126)

⇒ y = {ηi} ∈ lq since ‖f‖ < ∞, and ‖f‖ ≥ ‖y‖q.    (14.127)
Also,

f(x) = Σ_{i=1}^{∞} ξi ηi    (14.128)

⇒ |f(x)| ≤ (Σ_{i=1}^{∞} |ξi|^p)^{1/p} · (Σ_{i=1}^{∞} |ηi|^q)^{1/q}    (14.129)
          = ‖x‖p ‖y‖q    (14.130)
⇒ ‖f‖ ≤ ‖y‖q.    (14.131)

From (14.127) and (14.131),

‖f‖ = ‖y‖q.    (14.132)

Suppose now that y = {ηi} is an element of lq. If x = {ξi} ∈ lp, then

f(x) = Σ_{i=1}^{∞} ξi ηi

is a bounded linear functional on lp since, by the Hölder Inequality,

|f(x)| ≤ Σ_{i=1}^{∞} |ξi ηi| ≤ ‖x‖p ‖y‖q    (14.133)
⇒ ‖f‖ ≤ ‖y‖q.    (14.134)

Since f(ei) = ηi in this case, it follows from the previous analysis (that is, the steps leading up to (14.127)) that ‖y‖q ≤ ‖f‖. Therefore, ‖f‖ = ‖y‖q.

For p = 1, q = ∞, define xN by ξi = 0 for i ≠ N and ξN = sgn ηN (here ηi = f(ei)). Then ‖xN‖ ≤ 1 and f(xN) = |ηN|. But

f(xN) ≤ ‖f‖·‖xN‖    (14.135)
      ≤ ‖f‖    (14.136)
⇒ ‖f‖ ≥ |ηN|.    (14.137)

Thus, the sequence y = {ηi} is bounded by ‖f‖. Hence,

‖y‖∞ ≤ ‖f‖.    (14.138)

But

f(x) = Σ_{i=1}^{∞} ξi ηi    (14.139)

⇒ |f(x)| ≤ ‖y‖∞ ‖x‖1    (14.140)
⇒ ‖f‖ ≤ ‖y‖∞.    (14.141)
From (14.138) and (14.141), it follows that

‖f‖ = ‖y‖∞.    (14.142)

Similarly, ∀ y = {ηi} ∈ l∞, f(x) = Σ_{i=1}^{∞} ηi ξi defines an element f of (l1)* with ‖f‖ = ‖y‖. Hence, (l1)* = l∞.

REMARK 14.1 Note that Theorem 14.6 does not hold for p = ∞, that is, (l∞)* ≠ l1. In fact, it can be shown that c0* = l1 where c0 = {x ∈ l∞ : |x(k)| → 0 as k → ∞}.

We next state the result analogous to Theorem 14.6 for continuous-time functions.

THEOREM 14.7
If p and q are conjugate exponents, then (Lp)* = Lq for 1 ≤ p < ∞. Furthermore, the dual of L∞ is not L1. In fact, (C0)* = L1 where C0 = {x ∈ L∞ : |x(t)| → 0 as t → ∞}.
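The norm attainment at the heart of Theorem 14.6 can be reproduced numerically on a truncated, hypothetical sequence: the extremal choice (14.120) makes the ratio |f(x)|/‖x‖p equal ‖y‖q.

```python
import numpy as np

# Conjugate exponents 1/p + 1/q = 1 and a hypothetical y in lq (truncated)
p, q = 3.0, 1.5
y = np.array([2.0, -1.0, 0.5, 0.25])

# Extremal sequence from the proof, eq. (14.120): x_i = |y_i|^(q/p) sgn(y_i)
x = np.abs(y) ** (q / p) * np.sign(y)

ratio = np.sum(x * y) / np.sum(np.abs(x) ** p) ** (1.0 / p)  # |f(x)| / ||x||_p
norm_q = np.sum(np.abs(y) ** q) ** (1.0 / q)                 # ||y||_q

print(ratio, norm_q)   # equal: the Hoelder bound ||f|| <= ||y||_q is attained
```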
14.6.2  Inner Product Spaces
DEFINITION 14.9 An inner product space is a linear vector space X together with an inner product defined on X × X. Corresponding to each pair of vectors x, y ∈ X, the inner product < x, y > of x and y is a scalar. The inner product satisfies the following axioms:

(i) < x, y > equals the complex conjugate of < y, x >;
(ii) < x + y, z > = < x, z > + < y, z >;
(iii) < λx, y > = λ < x, y >;
(iv) < x, x > ≥ 0, and < x, x > = 0 if and only if x = 0.

Some examples of inner products are:

(i) In Rⁿ: < x, y > = xᵀy
(ii) In L2: < x, y > = ∫_{−∞}^{∞} x(t)y(t) dt
A question that naturally comes to mind is whether the inner product defined in (ii) above is a well-defined quantity or not. In other words, does x, y ∈ L2 imply that the integral in (ii) is finite? An affirmative answer to this question follows from the next theorem, whose proof can be found in any book on functional analysis.

THEOREM 14.8 (Cauchy-Schwarz Inequality)
In an inner product space X with ‖x‖ ≜ √< x, x >, we have

|< x, y >| ≤ ‖x‖·‖y‖  ∀ x, y ∈ X.    (14.143)
Furthermore, equality holds if and only if x = λy or y = 0.

The following example shows that the function √< x, x > does define a vector norm.

Example 14.15
Use the Cauchy-Schwarz Inequality and the properties of the inner product to show that the function ‖x‖ ≜ √< x, x > is indeed a vector norm.

Solution:
(i) ‖x‖ ≥ 0, and ‖x‖ = 0 if and only if x = 0. This is true because < x, x > ≥ 0 and < x, x > = 0 if and only if x = 0.

(ii) Writing conj(·) for complex conjugation,

‖αx‖ = √< αx, αx >
     = √(α < x, αx >)  (since < λx, y > = λ < x, y >)
     = √(α conj(< αx, x >))  (since < x, y > = conj(< y, x >))
     = √(α ᾱ < x, x >)
     = |α| √< x, x >
     = |α| ‖x‖.

Thus, ‖αx‖ = |α|·‖x‖.

(iii) Triangle Inequality: Now

‖x + y‖² = < x + y, x + y >
         = < x, x > + < y, y > + < x, y > + < y, x >
         = ‖x‖² + ‖y‖² + < x, y > + conj(< x, y >)
         = ‖x‖² + ‖y‖² + 2 Re[< x, y >]
         ≤ ‖x‖² + ‖y‖² + 2 |< x, y >|
         ≤ ‖x‖² + ‖y‖² + 2 ‖x‖‖y‖  (using the Cauchy-Schwarz Inequality)
         = (‖x‖ + ‖y‖)².

Taking square roots on both sides, we have ‖x + y‖ ≤ ‖x‖ + ‖y‖.

Thus, all three norm properties are satisfied by ‖x‖ ≜ √< x, x >, so that this function is indeed a vector norm.
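The three norm properties established in Example 14.15 can be spot-checked numerically with the standard complex inner product on Cⁿ, a hypothetical finite-dimensional stand-in for the spaces above:

```python
import numpy as np

# Standard complex inner product <x, y> = sum_i x_i * conj(y_i)
rng = np.random.default_rng(0)
x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
y = rng.standard_normal(5) + 1j * rng.standard_normal(5)
alpha = 2.0 - 3.0j

inner = lambda a, b: np.vdot(b, a)          # np.vdot conjugates its first argument
norm = lambda v: np.sqrt(inner(v, v).real)  # ||v|| = sqrt(<v, v>)

print(np.isclose(norm(alpha * x), abs(alpha) * norm(x)))   # homogeneity
print(norm(x + y) <= norm(x) + norm(y) + 1e-12)            # triangle inequality
print(abs(inner(x, y)) <= norm(x) * norm(y) + 1e-12)       # Cauchy-Schwarz
```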
DEFINITION 14.10 An inner product space, which is a complete metric space (with respect to the norm obtained from the inner product), is called a Hilbert Space. Note that every Hilbert space is a Banach space but the converse is not true. Hilbert Spaces generalize many of our geometrical insights for two and three dimensions. (for instance, the shortest distance from a point to a straight line is the perpendicular.) DEFINITION 14.11 In an inner product space two vectors x and y are said to be orthogonal if < x, y >= 0. We denote this by x ⊥ y. A vector is said to be orthogonal to a set S (written as x ⊥ S) if x ⊥ s ∀ s ∈ S. Example 14.16 Prove the Pythagorean Theorem in an inner product space, that is, show that x ⊥ y ⇒ kx + yk2 = kxk2 + kyk2 .
Solution: Now

‖x + y‖² = < x + y, x + y > = < x, x > + < x, y > + < y, x > + < y, y > = ‖x‖² + ‖y‖²

(since < x, y > = < y, x > = 0, as x ⊥ y). This proves the Pythagorean Theorem.

DEFINITION 14.12 If S is a set in an inner product space X, then the orthogonal complement of S, denoted by S⊥, is defined by S⊥ = {y ∈ X : < x, y > = 0 ∀ x ∈ S}.
14.6.3  Orthogonality and Alignment in Noninner Product Spaces
Recall from the notation following Definition 14.8 that < x, x* > = x*(x), where x ∈ X and x* ∈ X*. Also, by the definition of ‖x*‖,

< x, x* > ≤ ‖x*‖·‖x‖.    (14.144)
Note that if we have an inner product space, then in the Cauchy-Schwarz Inequality

< x, y > = ‖x‖‖y‖    (14.145)

if and only if x = λy, that is, x is aligned with y. This notion of alignment between two vectors can be extended to vector spaces without an inner product using the following definition:

DEFINITION 14.13 Let X be a normed linear space and X* its dual. Then x ∈ X is said to be aligned with x* ∈ X* if and only if < x, x* > = ‖x*‖‖x‖.

Example 14.17
Characterize the alignment condition between l1 and l∞.

Solution: From Theorem 14.6, (l1)* = l∞. Furthermore, if x ∈ l1 and y ∈ l∞, then

< x, y > = Σ_{i=1}^{∞} xi yi.
Also,

‖x‖1 = Σ_{i=1}^{∞} |xi|,   ‖y‖∞ = sup_i |yi|.

We want to determine conditions under which < x, y > = ‖x‖·‖y‖, that is,

Σ_{i=1}^{∞} xi yi = (sup_i |yi|) · Σ_{i=1}^{∞} |xi|.
This will clearly happen if
(i) xi = 0 whenever |yi| < ‖y‖∞, and
(ii) xi yi ≥ 0 ∀ i = 1, 2, 3, ...,

which is the required alignment condition.

The notion of orthogonality between two vectors can also be extended to vector spaces without an inner product by using the following definition.

DEFINITION 14.14 Let X be a normed linear space and X* be its normed dual. Then x ∈ X is said to be orthogonal to x* ∈ X* if < x, x* > = 0. This is denoted by x ⊥ x*.

Using the above notion of orthogonality, we can now define the orthogonal complement of a set S.

DEFINITION 14.15 Let S be a subset of a normed linear space X. Then the orthogonal complement S⊥ of S is defined by

S⊥ = {x* ∈ X* : < x, x* > = 0 ∀ x ∈ S}.
(14.146)
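The l1/l∞ alignment condition of Example 14.17 can be checked on concrete, hypothetical finite sequences (trailing zeros standing in for the l1 tail):

```python
import numpy as np

y = np.array([3.0, -3.0, 1.0, 2.0])   # ||y||_inf = 3, attained at indices 0 and 1
x = np.array([0.5, -1.5, 0.0, 0.0])   # x_i = 0 where |y_i| < 3, and x_i*y_i >= 0

lhs = np.sum(x * y)                            # <x, y>
rhs = np.sum(np.abs(x)) * np.max(np.abs(y))    # ||x||_1 * ||y||_inf

print(lhs, rhs)   # both 6.0: x and y are aligned
```

Flipping the sign of a single nonzero entry of x breaks condition (ii) and the equality with it.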
14.6.4  The All-Pass Property of H∞ Optimal Controllers
In this subsection, we establish the all-pass property of H∞ optimal controllers. This makes use of the following two theorems, which are consequences of the Hahn-Banach Theorem.

THEOREM 14.9
Let x be an element in a real normed linear space X and let d denote its distance from the subspace M. Then

d = inf_{m∈M} ‖x − m‖ = max { < x, x* > : ‖x*‖ ≤ 1, x* ∈ M⊥ }    (14.147)

where the maximum on the right is achieved for some x0* ∈ M⊥ with ‖x0*‖ = 1.
If the infimum on the left is achieved for some m0 ∈ M, then x0* is aligned with x − m0, that is,

< x − m0, x0* > = ‖x0*‖‖x − m0‖.    (14.148)
The above theorem is a generalization, to vector spaces without inner products, of the following transparent result from two-dimensional Euclidean geometry.
Figure 14.25 Physical intuition behind Theorem 14.9.
With respect to Figure 14.25, suppose that we are interested in finding the vector m0 in the subspace M such that ‖x − m0‖ is minimized. Note that for any vector y in the orthogonal complement of M,

xᵀy = |x|·|y| cos φ = |y| Pr[x]

where Pr[x] is the projection of x onto the orthogonal complement of M. If, in addition, |y| ≤ 1, then it follows that xᵀy ≤ Pr[x], with equality being attained when |y| = 1. Thus,

Pr[x] = max { xᵀy : |y| ≤ 1, y ∈ M⊥ }.

Since Pr[x] is also equal to ‖x − m0‖, we have

min_{m∈M} ‖x − m‖ = max { < x, y > : ‖y‖ ≤ 1, y ∈ M⊥ }.
This fairly straightforward result in Euclidean space is what Theorem 14.9 generalizes.

THEOREM 14.10
Let M be a subspace in a real normed linear space X. Let x* ∈ X* be a distance d from M⊥. Then

d = min_{m*∈M⊥} ‖x* − m*‖ = sup { < x, x* > : x ∈ M, ‖x‖ ≤ 1 }    (14.149)

where the minimum on the left is achieved for some m0* ∈ M⊥. If the supremum on the right is achieved for some x0 ∈ M, then x* − m0* is aligned with x0, that is,

< x0, x* − m0* > = ‖x0‖‖x* − m0*‖.    (14.150)
We now use Theorem 14.10 to establish the all-pass property of H∞ optimal controllers. Recall from Theorem 14.7 that

(L1)* = L∞ (in the frequency domain).    (14.151)

In other words, ∀ F ∈ (L1)*, ∃ Y ∈ L∞ such that

< X, F > = < X, Y > = ∫_{−∞}^{∞} X(jω)Y(jω) dω    (14.152)

∀ X ∈ L1. Here L1 and L∞ denote L1(−∞, ∞) and L∞(−∞, ∞) respectively. Define

H1 = {Y(s) : Y(s) is analytic in Re[s] ≥ 0 and ‖Y‖1 < ∞}

and

H∞ = {Y(s) : Y(s) is analytic in Re[s] ≥ 0 and ‖Y‖∞ < ∞}.

Then we have the following lemma.

LEMMA 14.3
Let H1 and H∞ be as already defined. Then H∞ = (H1)⊥.
PROOF Since H1 is a subspace of L1, every bounded linear functional on H1 can be represented by

< X, Y > = ∫_{−∞}^{∞} X(jω)Y(jω) dω    (14.153)

where Y ∈ L∞. We show below that < X, Y > = 0 if and only if Y ∈ H∞. To see this, let us evaluate the integral in (14.153) using the contour Γ (the imaginary axis closed by a semicircle of infinite radius in the right-half plane, which contains no poles) shown in Figure 14.26.

Figure 14.26 Evaluating (14.153) using a contour integral.
From (14.153), we have

< X, Y > = ∫_{−∞}^{∞} X(jω)Y(jω) dω    (14.154)
         = ∮_Γ X(z)Y(z) dz.    (14.155)

Using Cauchy's Residue Theorem, it follows that ∮_Γ X(z)Y(z) dz = 0 if and only if Y is analytic in Re[s] ≥ 0, which is equivalent to saying that Y ∈ H∞. Thus, (H1)⊥ = H∞.

Recall from Theorem 14.4 that if S0(s) is the sensitivity function corresponding to some stabilizing controller, then every realizable sensitivity function is given by

S(s) = S0(s) + np(s)q(s)dp(s),  q(s) ∈ H∞    (14.156)
where P(s) = np(s)/dp(s) is a coprime factorization of the plant P(s). Suppose P(s) has no poles or zeros on the imaginary axis, and let zi be the RHP zeros and pi the RHP poles. Define

Bz(s) = ∏_{zi∈RHP} (−s + zi)/(s + z̄i)    (14.157)

and

Bp(s) = ∏_{pi∈RHP} (−s + pi)/(s + p̄i).    (14.158)
The functions Bz(s) and Bp(s) are referred to as Blaschke products or all-pass functions and have a flat magnitude frequency response. The sensitivity function S(s) in (14.156) can now be expressed in terms of Bz(s) and Bp(s) as follows:

S(s) = S0(s) + Bz(s)q̃(s)Bp(s)    (14.159)

where q̃(s) = Bz⁻¹(s)np(s)q(s)dp(s)Bp⁻¹(s). Note that Bz⁻¹(s)np(s) and dp(s)Bp⁻¹(s) are both stable and minimum phase. Without any loss of generality, let us assume that the weighting function W(s) is both stable and minimum phase. Then

min_{q∈H∞} ‖W(s)S(s)‖∞
  = min_{q∈H∞} ‖W(s)[S0(s) + np(s)q(s)dp(s)]‖∞
  = min_{q̃∈H∞} ‖W(s)[S0(s) + Bz(s)q̃(s)Bp(s)]‖∞
    (since Bz⁻¹(s)np(s) and dp(s)Bp⁻¹(s) are both stable and minimum phase)
  = min_{q̃∈H∞} ‖W(s)S0(s) + Bz(s)W(s)q̃(s)Bp(s)‖∞
  = min_{W̃∈H∞} ‖W(s)S0(s) + Bz(s)W̃(s)Bp(s)‖∞
    (since W(s), W⁻¹(s) are stable)
  = min_{W̃∈H∞} ‖Bz⁻¹WS0Bp⁻¹ + W̃‖∞
    (since Bz, Bp are all-pass, multiplication by Bz⁻¹ and Bp⁻¹ does not affect the L∞ norm)
  = min_{W̃∈H∞} ‖Bz⁻¹WS0Bp⁻¹ − (−W̃)‖∞.

Defining Y = Bz⁻¹WS0Bp⁻¹ and V = −W̃, we have

min_{q∈H∞} ‖W(s)S(s)‖∞ = min_{V∈H∞} ‖Y − V‖∞
  = sup { ∫_{−∞}^{∞} X(jω)Y(jω) dω : X ∈ H1, ‖X‖1 ≤ 1 }  (using Theorem 14.10).    (14.160)
Furthermore, it can be shown that the supremum on the right is achieved for some X0 ∈ H1, so that we must have

∫_{−∞}^{∞} X0(jω)[Y(jω) − V0(jω)] dω = ‖X0‖1 · ‖Y − V0‖∞  (alignment condition).

But this can happen only if Y(jω) − V0(jω) = µ sgn[X0(jω)] for almost all ω, where µ > 0 is some constant. In other words, the following two conditions must be satisfied:

(i) X0(jω)[Y(jω) − V0(jω)] ≥ 0 ∀ ω;
(ii) |Y(jω) − V0(jω)| = ‖Y − V0‖∞ ∀ ω.

Thus, if W(s)S(s) is the optimal weighted sensitivity function, then |W(jω)S(jω)| = c ∀ ω, so that the optimal cost function in an H∞ design has a flat frequency response. This is usually referred to as the all-pass property of H∞ optimal designs. This result is summarized in the following theorem.

THEOREM 14.11
If Ty1u1(s) is the solution to an optimal H∞ control problem, then

σ̄(Ty1u1(jω)) = ‖Ty1u1(s)‖∞ ≜ sup_ω σ̄(Ty1u1(jω))  ∀ ω.    (14.161)

In other words, the optimal Bode magnitude plot of σ̄(Ty1u1) is flat ∀ ω, as shown in Figure 14.27.
14.6.5  The Single-Input Single-Output Solution
For single-input single-output (SISO) systems, the above result provides an explicit form for the optimal cost function.

COROLLARY 14.4
If Ty1u1 is SISO, then the H∞ optimal Ty1u1 is a Blaschke product (that is, a stable SISO all-pass). In this case, Ty1u1 takes the form

Ty1u1 = c ∏_i (−s + si)/(s + s̄i)    (14.162)
Figure 14.27 The all-pass property of H∞ optimal controllers.
where Re[si] > 0.

The form (14.162) of the H∞-optimal closed-loop transfer function can be used to develop a solution to the H∞ optimization problem. To see how this can be done, consider the weighted sensitivity minimization problem

min_{C(s)} ‖W(s)S(s)‖∞.    (14.163)

Recall that using the YJBK parametrization, we established the following two facts: (i) the sensitivity function S(s) = 0 at every plant RHP pole pi, and (ii) the sensitivity function S(s) = 1 at every plant RHP zero zi. Consequently, if W(s) is stable, W(s)S(s) = 0 at every RHP pole pi. Define

Bp(s) = ∏_{Re[pi]>0} (−s + pi)/(s + p̄i),    (14.164)

that is, Bp(s) is the Blaschke product corresponding to the plant right-half plane poles. Then, since the optimal W(s)S(s) is known to be a Blaschke product and must satisfy W(s)S(s) = 0 at every RHP plant pole, we can take the optimal W(s)S(s) to be of the form

W(s)S(s) = Bp(s) D ∏_{i=1}^{r−1} (−s + ci)/(s + c̄i)

where the constants D, ci must be chosen to satisfy the interpolation constraints imposed on W(s)S(s) by the r RHP zeros b1, b2, ..., br. Define

X(s) = W(s)S(s)    (14.165)

so that

X̃(s) ≜ X(s)/Bp(s) = D ∏_{i=1}^{r−1} (−s + ci)/(s + c̄i).    (14.166)
Now since

S(bj) = 1  ∀ j = 1, 2, 3, ..., r,    (14.167)

it follows that

X̃(bj) = X(bj)/Bp(bj) = W(bj)S(bj)/Bp(bj)    (14.168)
      = W(bj)/Bp(bj).    (14.169)

Define

θj = W(bj)/Bp(bj).    (14.170)

From (14.166), (14.169), and (14.170), it follows that the constants D and ci must be chosen to guarantee that

D ∏_{i=1}^{r−1} (ci − bj)/(c̄i + bj) = θj,  j = 1, 2, 3, ..., r.    (14.171)
We now consider two cases that can arise.

Case (I): All θj are equal, or r = 1. Choose

D = θ1 = W(b1)/Bp(b1) = a constant    (14.172)

so that X̃(s) = W(b1)/Bp(b1) and

X(s) = [W(b1)/Bp(b1)] · Bp(s).    (14.173)

Clearly, for this X(s),

X(b1) = W(b1)    (14.174)

so that the interpolation constraint is satisfied. Also, for this X(s),

‖X(s)‖∞ = |W(b1)/Bp(b1)|,    (14.175)

and for any realizable X(s),

X(s)Bp⁻¹(s) ∈ H∞    (14.176)
and

‖X(s)Bp⁻¹(s)‖∞ ≥ |X(b1)Bp⁻¹(b1)|  (by the maximum modulus theorem),    (14.177)

that is,

‖X(s)‖∞ ≥ |X(b1)/Bp(b1)| = |W(b1)/Bp(b1)|  (since X(b1) = W(b1)).    (14.178)

Thus, the X(s) given in (14.173) solves the H∞ optimal control problem. Recall that in the case of Example 14.14, the plant had one RHP zero and no RHP poles. Thus, Bp(s) = 1 and X(s) = W(b1), which is exactly the solution that we obtained earlier.

Case (II): r > 1 and not all θj are equal. Here, X(s)/Bp(s) is not a constant and |θi| ≤ ‖XBp⁻¹‖∞ ∀ i = 1, 2, ..., r. In this case, the theory depends on a special transformation, which reduces the number of interpolation constraints by one in each application. The starting point is the invertible mapping u ↔ x:
u = M²(x − θ)/(M² − θ̄x),  x = M²(u + θ)/(M² + θ̄u),  |θ| < |M|    (14.179)
where M is some constant. The above mapping is analytic in x and establishes a 1:1 correspondence between points in the disks |x| ≤ |M| and |u| ≤ |M| of the complex x and u planes respectively. Furthermore, points on the boundary are mapped into points on the boundary, and points in the interior into points in the interior, that is,

|x| < |M| ↔ |u| < |M| and |x| = |M| ↔ |u| = |M|.    (14.180)
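The boundary and interior preservation stated in (14.180) can be verified numerically for hypothetical values of M and θ:

```python
import numpy as np

M = 2.0
theta = 0.7 - 0.4j                     # |theta| < |M|
u = lambda x: M**2 * (x - theta) / (M**2 - np.conj(theta) * x)   # eq. (14.179)

phi = np.linspace(0.0, 2.0 * np.pi, 400)
x_bd = M * np.exp(1j * phi)            # points with |x| = |M|
x_in = 0.5 * M * np.exp(1j * phi)      # points with |x| < |M|

print(np.max(np.abs(np.abs(u(x_bd)) - M)))   # ~0: boundary maps onto boundary
print(np.max(np.abs(u(x_in))) < M)           # True: interior maps into interior
```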
Let M be fixed, and for any |θ| < |M| let Uθ : H∞ → H∞ denote the mapping x(·) ↦ u(·) satisfying the equation

u(s) = M²[x(s) − θ]/(M² − θ̄x(s)).    (14.181)

The transformation (14.181) maps the ball of radius |M| of H∞ into itself, and the set of all-pass functions of norm |M| into itself. Moreover, if x(·) ∈ H∞ satisfies an interpolation constraint x(b) = θ, then u(s) has a zero at s = b. Equation (14.181) is now modified by division by (b − s)(b̄ + s)⁻¹, thereby removing the zero at b but keeping u(·) in H∞, to obtain the transformation

Uθ,b : H∞ → H∞,  x(·) ↦ x1(·)    (14.182)
given by

x1(s) = [M²(x(s) − θ)/(M² − θ̄x(s))] · [(b̄ + s)/(b − s)].    (14.183)

Note that this division by (b − s)/(b̄ + s) leaves H∞ norms invariant (it does not change jω-axis magnitudes). Equation (14.183) provides a correspondence between the set of all-pass functions x(s) of norm |M| that assume the value θ at b, and the set of all-pass functions x1(s) of norm |M| (whose value at b is arbitrary). Recall that

X̃(s) = X(s)/Bp(s)    (14.184)

and

X̃(bj) = θj  ∀ j = 1, 2, ..., r.    (14.185)
Suppose X̃(s) is the minimum H∞-norm solution, with H∞ norm = |M|. Define

X̃1(s) = Uθ1,b1(X̃)(s) = [M²(X̃(s) − θ1)/(M² − θ̄1X̃(s))] · [(b̄1 + s)/(b1 − s)].    (14.186)

Then ‖X̃1(s)‖∞ = |M| subject to the constraints

X̃1(bj) = Uθ1,b1(θj),  j = 2, 3, 4, ..., r.

We continue this process until either all remaining constraints are equal or only one constraint remains. In this way, we finally reach

‖X̃k−1(s)‖∞ = |M|    (14.187)

subject to

X̃k−1(bk) = Uθk−1,bk−1 Uθk−2,bk−2 ··· Uθ1,b1(θk) ≜ θk^(k−1).

Thus, the original problem has now been reduced to Case (I), for which the solution is

X̃k−1(s) = θk^(k−1).    (14.188)

Also,

|M| = |θk^(k−1)|.

Now we can use the inverse transformations to get the desired X̃(s), and then X(s) can be determined using

X(s) = X̃(s)Bp(s).    (14.189)
This completes our discussion of the single-input single-output weighted sensitivity minimization problem. In determining the solvability of this problem, the following mathematical question is of interest: given ρ > 0 and complex numbers zi, wi, i = 1, 2, ..., n, is it possible to make ‖T(s)‖∞ < ρ subject to the interpolation constraints

T(zi) = wi ?    (14.190)

The answer to this question can be obtained by constructing a special matrix. Using the zi's and wi's, construct the matrix

[N(ρ)]i,j = (ρ² − wi w̄j)/(zi + z̄j)    (14.191)
14.7
l1 Optimal Control: SISO Case
In this section, we formulate and solve the l1 optimal control problem. The main difference between the H∞ optimal control problems and their corresponding l1 /L1 counterparts is that in the H∞ optimal control problems, we are interested in measuring the worst-case signal amplification/attenuation in terms of energy whereas in the l1 /L1 optimal control problems, we are interested in measuring the worst case signal amplification/attenuation in terms of the peak signal excursions about the zero value. The discrete-time l1 optimal control problem is easier to solve than its continuous-time L1 counterpart and consequently, here we will formulate and solve the single-input single-output l1 optimal control problem. Given a sequence h ∈ l1 define the z-transform of h as H(z) =
∞ X i=0
h(i)z i .
(14.192)
748
OPTIMAL AND ROBUST CONTROL
Note that this definition of the z-transform is different from the one found in most textbooks where z −1 is the backward shift operator. Here z is the backward shift operator and, with this definition, the stability region becomes the outside of the unit disk. As we will see shortly, the l1 optimal control problem can be solved by reducing it to a dual problem whose solution can be obtained via linear programming subject to constraints. By making the interior of the unit disk the unstable region, a potentially countably infinite number of constraints collapses to a finite number, thereby facilitating the solution. Consider the discrete-time unity feedback control system shown in Figure 14.28. r
e +
C(z)
u
P (z)
y
−
Figure 14.28 Discrete-time feedback control system.
Here P (z) is the plant to be controlled; C(z) is the cascade compensator; y(k), u(k) are the output and input signals of the plant; and r(k) is the command signal. The discrete-time counterparts of the continuous-time YJBK parametrization results of Section 14.4 can be developed by essentially mimicking the steps used earlier. The main difference is that in the discrete-time case, a stable transfer function will be one which has all its poles outside the closed unit disk. N (z) Let P (z) = Dpp (z) where Np (z), Dp (z) are two stable rational transfer functions that are coprime. Furthermore, suppose that X(z), Y (z) are two stable rational transfer functions such that the Bezout identity X(z)Np (z) + Y (z)Dp (z) = 1 holds. Then, as in the continuous-time case, it can be shown that the sensitivity function S(z) and the complementary sensitivity function T (z) are given by: S(z) = Dp (z)[Y (z) − Q(z)Np (z)] T (z) = Np (z) [X(z) + Q(z)Dp (z)] where Q(z) is any stable rational transfer function, that is q ∈ l1 . Now consider the operator H : l∞ 7→ l∞ given by H(x) = h ∗ x
(14.193)
749
SISO H∞ AND l1 OPTIMAL CONTROL
where h ∈ l1 . Then, by the discrete-time counterpart of Theorem B.8, the induced norm of this operator as a mapping from l∞ to l∞ is given by kHkA =
∞ X
|h(i)|
i=0
= khk1 . Just as the H∞ norm measures the worst case energy amplification from the input to the output of a system, the l1 norm measures the worst case magnitude amplification from the input to the output. As such, several control problems can be posed as l1 norm minimization problems. To get a better feel for the solution to the l1 optimal control problem using duality, consider the problem of minimizing kT (z)kA using a stabilizing controller. Since T (z) is given by (14.193), it follows that this problem is equivalent to the problem min kH(z) − G(z)Q(z)kA
Q(z)∈A
where H(z) = Np (z)X(z) and G(z) = −Np (z)Dp (z). Without any loss of generality, let us assume that G(z) has n distinct zeros inside the unit disc (that is unstable zeros) a 1 , a2 , · · · , an . Define K = GQ. Then K can be any stable rational function such that K(ai ) = 0, i = 1, 2, · · · , n. Hence, the above problem becomes equivalent to finding a stable function K, with K(ai ) = 0, ∀ i = 1, 2, · · · , n such that kH − KkA is minimized. Let ∞ X K(z) = ki z i . (14.194) i=0
Then K(a) = 0 if and only if
Σ_{i=0}^{∞} ki a^i = 0   (14.195)
⇔ ⟨k, ar⟩ = 0 and ⟨k, ai⟩ = 0   (14.196)
where k = [k0, k1, k2, ···], ar = Re[1, a, a², a³, ···] and ai = Im[0, a, a², a³, ···]. Define arj = ar with a = aj and aij = ai with a = aj. Let S be a subset of l1 defined in the following way:
S = {k ∈ l1 | ⟨k, arj⟩ = 0 and ⟨k, aij⟩ = 0 ∀ j = 1, 2, ···, n}.   (14.197)
Then the above minimization problem can be stated as
inf_{k∈S} ‖H − K‖_A = inf_{k∈S} ‖h − k‖_{l1}.   (14.198)
OPTIMAL AND ROBUST CONTROL
This optimization problem is difficult to solve, except in some special cases. Indeed, it cannot be solved in the z-domain and one would have to take inverse z-transforms and work in the time domain. However, its dual problem is much easier to solve, as we now show. Using Theorem 14.9, we have
inf_{k∈S} ‖h − k‖_{l1} = max_{h∗ ∈ S⊥, ‖h∗‖∞ ≤ 1} ⟨h, h∗⟩.   (14.199)
Recall from Theorem 14.6 that l1∗ = l∞. Thus, h∗ ∈ l1∗ can be replaced by r ∈ l∞ and we have
inf_{k∈S} ‖h − k‖_{l1} = max_{r ∈ S⊥, ‖r‖∞ ≤ 1} ⟨h, r⟩.   (14.200)
From the definition of S in (14.197), it is clear that S⊥ is the set of all linear combinations of the aij, arj, j = 1, 2, ···, n, so that every r ∈ S⊥ may be represented as
r = Σ_{j=1}^{n} αj arj + Σ_{j=1}^{n} αj+n aij   (14.201)
for some scalars α1, α2, ···, α2n. Hence, for all r ∈ S⊥, we have
⟨h, r⟩ = h(0)r(0) + h(1)r(1) + ···
= h(0) Σ_{j=1}^{n} αj + h(1)[Σ_{j=1}^{n} αj Re[aj] + Σ_{j=1}^{n} αj+n Im[aj]] + h(2)[Σ_{j=1}^{n} αj Re[a²j] + Σ_{j=1}^{n} αj+n Im[a²j]] + ···
= Σ_{j=1}^{n} αj (h(0) + h(1)Re[aj] + h(2)Re[a²j] + ···) + Σ_{j=1}^{n} αj+n (h(1)Im[aj] + h(2)Im[a²j] + ···)
= Σ_{j=1}^{n} αj Re[H(aj)] + Σ_{j=1}^{n} αj+n Im[H(aj)].   (14.202)
Also from (14.201), note that if r ∈ S⊥ then
r = Σ_{j=1}^{n} αj Re[1, aj, a²j, a³j, ···] + Σ_{j=1}^{n} αj+n Im[0, aj, a²j, a³j, ···]   (14.203)
so that
‖r‖∞ = sup_l | Σ_{j=1}^{n} αj Re[(aj)^l] + Σ_{j=1}^{n} αj+n Im[(aj)^l] |.   (14.204)
Thus, ‖r‖∞ ≤ 1 if and only if
−1 ≤ Σ_{j=1}^{n} αj Re[(aj)^l] + Σ_{j=1}^{n} αj+n Im[(aj)^l] ≤ 1 ∀ l = 0, 1, 2, ···   (14.205)
Thus, from (14.200), (14.202) and (14.205), we obtain
inf_{k∈S} ‖h − k‖_{l1} = max_{αj} [Σ_{j=1}^{n} αj Re[H(aj)] + Σ_{j=1}^{n} αj+n Im[H(aj)]]   (14.206)
subject to
−1 ≤ Σ_{j=1}^{n} αj Re[(aj)^l] + Σ_{j=1}^{n} αj+n Im[(aj)^l] ≤ 1, l = 0, 1, 2, ···   (14.207)
Thus, the original l1 norm minimization problem (14.198) has now been reduced to a (dual) linear programming (LP) problem subject to an infinite number of constraints. Note, however, that since |aj| < 1, these constraints are automatically satisfied for all l greater than some finite N. Hence, we only have a finite number of active constraints. Note that the lth constraint is exactly the lth component of r. Hence, the r̃ that solves the dual problem has the following properties:
1. ‖r̃‖∞ = 1 (since the optimal r̃ occurs on the unit sphere).
2. |r̃l| < 1 for all l > N, for some N.
We will now use the solution to the dual problem to construct the solution to the original (primal) problem. To this end, let k̃ ∈ S be the solution to the optimization problem (14.198) and define b = h − k̃. Also let µ denote the value of the maximal cost that results from the solution of the dual problem (14.206), (14.207). Then from the alignment condition for (14.200), b ∈ l1 must be aligned with r̃ ∈ l∞. Thus, from Example 14.17, it follows that
• (i) bi = 0 whenever |r̃i| ≠ 1; and
• (ii) bi r̃i ≥ 0.
In addition, b must satisfy the following two properties:
• (iii) Σ_{i=0}^{∞} |bi| = µ (since the primal and dual problems yield the same value for the optimal cost)
• (iv) h − b ∈ S, that is, K̃(ai) = 0 ∀ i = 1, 2, ···, n, that is, H(ai) − B(ai) = 0, i = 1, 2, ···, n, that is, Σ_{i=0}^{∞} bi (aj)^i = H(aj) for j = 1, 2, ···, n (admissibility conditions).
Conditions (i), (ii), (iii) and (iv) above characterize the solution to the original optimization problem. The following example illustrates the role played by these conditions in solving an l1 norm minimization problem.
Example 14.18 We want to solve
min_{K∈A} ‖H − K‖_A   (14.208)
subject to K(jx) = 0 and K(−jx) = 0, where 0 < x < 1 is fixed but arbitrary.
Solution: Here a1 = jx and a2 = −jx.
Step 1 (finding the minimum norm): Let Hr ≜ Re[H(jx)] and Hi ≜ Im[H(jx)]. We want to solve (14.206), (14.207), that is
max_{α1,α2,α3,α4} [α1 Re[H(jx)] + α2 Re[H(−jx)] + α3 Im[H(jx)] + α4 Im[H(−jx)]]   (14.209)
subject to −1 ≤ α1 + α2 ≤ 1 and −1 ≤ (α3 − α4)x ≤ 1. Note that Re[H(jx)] = Re[H(−jx)] and Im[H(jx)] = −Im[H(−jx)] (assuming H has real coefficients). Redefining ᾱ1 = α1 + α2, ᾱ2 = α3 − α4, the optimization problem (14.209) becomes
max_{ᾱ1,ᾱ2} (ᾱ1 Hr + ᾱ2 Hi)   (14.210)
subject to
−1 ≤ ᾱ1 ≤ 1
−1 ≤ ᾱ2 x ≤ 1.
The solution to this constrained maximization problem is
ᾱ1 = sgn[Hr], ᾱ2 = sgn[Hi]/x
and the corresponding maximal cost is given by
µ = |Hr| + |Hi|/x.
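The truncated dual LP can be solved directly. The sketch below uses scipy.optimize.linprog in the reduced variables ᾱ1, ᾱ2 with illustrative values of x, Hr, Hi (these numbers are assumptions, not taken from the text); truncating the constraints at l = N is justified because x^l → 0:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: H(jx) = Hr + j*Hi with 0 < x < 1.
x, Hr, Hi = 0.5, 0.3, -0.8
N = 60  # constraints for l > N are inactive since x**l -> 0

# Constraint l of (14.205) in reduced variables:
# |abar1 * Re[(jx)^l] + abar2 * Im[(jx)^l]| <= 1.
rows = []
for l in range(N + 1):
    z = (1j * x) ** l
    rows.append([z.real, z.imag])
A = np.array(rows)
A_ub = np.vstack([A, -A])
b_ub = np.ones(2 * (N + 1))

# linprog minimizes, so negate the objective abar1*Hr + abar2*Hi.
res = linprog(c=[-Hr, -Hi], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None)])
mu = -res.fun
print(mu, abs(Hr) + abs(Hi) / x)  # both 1.9 for these numbers
```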
Step 2 (construction of the optimal solution): We first note that
r = α1 Re[1, jx, −x², −jx³, ···] + α2 Re[1, −jx, −x², jx³, ···] + α3 Im[0, jx, −x², −jx³, ···] + α4 Im[0, −jx, −x², jx³, ···]
= ᾱ1 [1, 0, −x², 0, ···] + ᾱ2 [0, x, 0, −x³, ···]
= [ᾱ1, ᾱ2 x, −x²ᾱ1, −ᾱ2 x³, ···]
Thus, the extremal r̃ is given by
r̃ = [sgn[Hr], sgn[Hi], −x² sgn[Hr], −x² sgn[Hi], ···]   (14.211)
Thus, to satisfy conditions (i), (ii) and (iii), we arrive at
b = {Hr, Hi/x, 0, 0, ···, 0}.   (14.212)
Condition (iv) is also satisfied by the above b, since
B(jx) = Hr + jHi = H(jx) ⇒ K̃(jx) = 0   (14.213)
and B(−jx) = Hr − jHi = H(−jx) ⇒ K̃(−jx) = 0.   (14.214)
Hence, k̃ = h − b, where b is given by (14.212), is indeed the k̃ that solves the minimization problem (14.208).
Example 14.19 Consider the feedback control system shown in Figure 14.28, where the plant P(z) is given by
P(z) = z/(z² + x²), 0 < x < 1.   (14.215)
Suppose we wish to design a controller C(z) to minimize the ‖·‖_A norm of the complementary sensitivity function T(z). Let
Np(z) = z, Dp(z) = z² + x²   (14.216)
X(z) = −z/x², Y(z) = 1/x²   (14.217)
so that the Bezout identity
X(z)Np(z) + Y(z)Dp(z) = 1   (14.218)
holds. Thus, P(z) = Np(z)/Dp(z) is a coprime factorization of P(z) and, as in the continuous-time case, all realizable complementary sensitivities T(z) are given by
T(z) = Np(z)[X(z) + Q(z)Dp(z)] = z[−z/x² + Q(z)(z² + x²)]   (14.219)
where Q(z) ranges over all transfer functions with poles outside the unit disk. Furthermore,
‖T(z)‖_A = ‖−z/x² + (z² + x²)Q(z)‖_A
(since ‖zH(z)‖_A = ‖H(z)‖_A). Define
K(z) = −(z² + x²)Q(z) = −(z + jx)(z − jx)Q(z)
and H(z) = −z/x². Then we want to solve the problem
min_{K∈A} ‖H(z) − K(z)‖_A
subject to K(jx) = 0 and K(−jx) = 0, where 0 < x < 1. But we have already solved this problem in Example 14.18. In the notation of that example, we now have
Hr = Re[−jx/x²] = 0   (14.220)
Hi = Im[−jx/x²] = −1/x   (14.221)
⇒ b = {0, −1/x², 0, ···, 0}   (14.222)
and B(z) = −z/x².   (14.223)
Thus,
K̃(z) = H(z) − B(z) = −z/x² + z/x² = 0 ⇒ Q(z) = 0.   (14.224)
Thus, the optimal compensator is given by
C(z) = X(z)/Y(z) = −z   (14.225)
which is a pure time delay.
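The algebra of this example is easy to spot-check numerically. The sketch below (with an arbitrary illustrative choice of x) verifies the Bezout identity (14.218) at sample points and that C = X/Y reduces to the pure delay −z:

```python
import numpy as np

x = 0.5  # any 0 < x < 1 works

# Coprime factors and Bezout pair from Example 14.19 (z acts as the delay
# variable here, so "stable" means poles outside the closed unit disk).
Np = lambda z: z
Dp = lambda z: z**2 + x**2
X  = lambda z: -z / x**2
Y  = lambda z: 1 / x**2

# Check the Bezout identity X*Np + Y*Dp = 1 at a few sample points.
for z in [0.1, -0.7, 0.3 + 0.4j]:
    assert abs(X(z) * Np(z) + Y(z) * Dp(z) - 1) < 1e-12

# With the optimal Q = 0, the compensator C = X/Y is a pure delay.
z = 0.3 + 0.4j
print(X(z) / Y(z))  # equals -z
```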
14.8 Exercises
14.1 Find a parametrization of all stabilizing compensators for the plants
(a) 1/(s − 1), (b) 1/(s − 1)², (c) 1/((s − 1)(s − 2)), (d) (s − 2)/((s − 1)(s − 3)), (e) ((s − 2)(s + 1))/((s − 1)(s − 3)).
14.2 In the compensators obtained in Problem 14.1, attempt to determine (a) the constant gain compensators, (b) the first order compensators, and (c) the PID compensators.
14.3 (Additive uncertainty) Consider the nominal plant P0(s) and the family of plants P(s), with the same number of RHP poles as P0(s) and satisfying, for a given r(s),
|P(jω) − P0(jω)| ≤ |r(jω)|, ∀ ω.
Prove that a sufficient condition for robust stability of the feedback system with the controller C(s) is
‖(1 + P0(s)C(s))⁻¹ C(s) r(s)‖∞ < 1.
14.4 (Multiplicative uncertainty) Consider the plant P0(s) and the family of plants P(s) = (1 + m(s))P0(s) with
|m(jω)| < |r(jω)|, ∀ ω,
where r(s) is stable and minimum phase. Prove that a sufficient condition for robust stability with the controller C(s) is
‖P0(s)C(s)(1 + P0(s)C(s))⁻¹ r(s)‖∞ < 1.
14.5 For the plant
P0(s) = (s − 2)/(s − 12)
with multiplicative uncertainty specified by
m(s) = (1/3) · ((s + 1)/(s + 2)) · ((s + 6)²/(s² + 2s + 37)),
find a robustly stabilizing controller.
14.6 For the plant
P0(s) = 1/((s − 1)(s − 2)(s − 3))
with additive uncertainty defined by
r(s) = (s + 0.1)/(10s + 1),
find a robustly stabilizing controller.
14.7 For the plant
P0(s) = 1/((s − 1)(s − 2))
with additive perturbations specified by r(s) = r > 0, determine robustly stabilizing compensators as a function of r, and the maximum value of r for which robust stabilization is possible.
14.8 For the plant
P0(s) = ((s − 1)(s − 4))/((s − 2)(s − 3)),
find a controller C(s) such that ‖W(s)S(s)‖∞ is minimized, with
W(s) = (s + 1)/(8s + 1)
and S(s) the error (sensitivity) transfer function.
14.9 Solve
inf_{Q(s)∈H∞} ‖T1(s) − T2(s)Q(s)‖∞
for
T1(s) = (s + 1)/(10s + 1), T2(s) = ((s − 1)(s − 2))/(s + 1)².
14.9 Notes and References
The original introduction of the Small Gain Theorem in the controls literature is due to Zames [210]. The treatment of the small gain theorem presented in this chapter follows the book by Desoer and Vidyasagar [66]. The single-input single-output YJBK parametrization presented here follows the book by Vidyasagar [197]. The standard material on dual spaces, inner product spaces, and orthogonality and alignment in non-inner product spaces is adapted from the book by Luenberger [143]. The use of H∞ norms to quantify feedback system performance and robustness is due to Zames [211]. The solution to the single-input single-output H∞ optimization problem presented here is due to Zames and Francis [212]. The solution to the multi-input multi-output H∞ problem via matrix interpolation theory was developed by Chang and Pearson [43]. The l1 optimal control problem was first formulated by Vidyasagar [198], and a few special cases were solved by him. However, the complete solution using the notion of duality is due to Dahleh and Pearson [56]. In this chapter, we have confined ourselves only to the single-input single-output problem in discrete time. Solutions to the MIMO problem for continuous and discrete time can be found in [58] and [57] respectively. An authoritative reference devoted to l1 optimal control and related topics is [55]. Exercises 14.5, 14.6, 14.7, 14.8, and 14.9 are taken from Dorato, Fortuna, and Muscato [70].
15 H∞ OPTIMAL MULTIVARIABLE CONTROL
In this chapter, we present the solution to the H∞ optimal control problem in the multivariable case. Two distinct approaches to the problem are considered, one based on the theory of Hankel approximations and using the YJBK parametrization, and the other based on a game-theoretic approach and using state space models. The approach based on Hankel approximation theory requires a considerable amount of background mathematical machinery, all of which is developed here in a fairly self-contained fashion. The game theoretic approach, on the other hand, is based on a simple completion of squares.
15.1 H∞ Optimal Control Using Hankel Theory
In this section, we present a solution to the multi-input multi-output H∞ optimal control problem using the theory of Hankel operators. As we will see, the H∞ optimal control problem is closely related to the theory of Hankel approximation, and results from the latter can be used to provide a solution to the former. Although it is possible to entirely skip the material in this section and proceed to the next one to arrive at the H∞ solution for the multivariable case, we believe that the material in this section provides a much more motivated development of the theory and is, therefore, of considerable pedagogical value. In fact, it would be fair to say that the elegant state-space solution presented in the next section emerged only after years of researching the material presented here.
15.1.1 H∞ and Hankel Operators
Consider a stable linear time-invariant system having impulse response g(t) and frequency response G(jω). Then G(jω) = F(g(t)), where F denotes the Fourier transform. Recall that we already proved the following in Appendix B.
LEMMA 15.1
gain(g) ≜ sup_u ‖g ∗ u‖_{L2} / ‖u‖_{L2} = ‖G‖∞.   (15.1)
We now define the positive and negative time projections on L2(−∞, ∞).
DEFINITION 15.1 The positive-time projection P+ : L2 → L2 is defined by
(P+x)(t) = x(t) for t ≥ 0, and 0 for t < 0,   (15.2)
and the negative-time projection P− is defined by
(P−x)(t) = 0 for t ≥ 0, and x(t) for t < 0.   (15.3)
Clearly P− = I − P+. It is also clear from the above definitions that for x ∈ L2(−∞, ∞),
‖P+x‖_{L2} ≤ ‖x‖_{L2}   (15.4)
and
‖P−x‖_{L2} ≤ ‖x‖_{L2}.   (15.5)
Thus
{P−x | x ∈ L2} ⊂ L2.   (15.6)
DEFINITION 15.2 For a given stable linear time-invariant system with impulse response g(t) and frequency response G(jω), the Hankel operator ΓG : L2 → L2 is defined by
ΓG u ≜ P+[g ∗ (P−u)]   (15.7)
DEFINITION 15.3 The Hankel norm of G, ‖G‖_Hankel, is defined by
‖G‖_Hankel ≜ gain(ΓG)   (15.8)
= sup_{u≠0} ‖ΓG u‖_{L2} / ‖u‖_{L2}   (15.9)
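For a causal discrete-time impulse response g, the action of ΓG can be sketched in a few lines: feed only past input samples and keep only the future part of the convolution. The impulse response below is an illustrative choice, not one from the text:

```python
import numpy as np

# Causal impulse response g(k), k >= 0, of a stable discrete-time system.
g = 0.5 ** np.arange(8)               # g(k) = 0.5**k (illustrative)

def hankel_op(g, u_past):
    """Gamma_G u = P+[g * (P- u)]: u_past[i] is the input at time i - len(u_past)
    (all strictly negative times); return the output samples at k >= 0."""
    n = len(u_past)
    y = np.convolve(g, u_past)        # full convolution on the shifted axis
    return y[n:]                      # keep only times k >= 0

u_past = np.array([1.0, -1.0, 2.0])   # u(-3), u(-2), u(-1)
print(hankel_op(g, u_past))           # first entry: g(1)*2 + g(2)*(-1) + g(3)*1
```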
The following theorem establishes a relationship between the Hankel and H∞ norms.
THEOREM 15.1
The H∞ norm is an upper bound on the Hankel norm:
‖G‖_Hankel ≤ ‖G‖∞.   (15.10)
PROOF
‖G‖_Hankel = sup_{u∈L2} ‖P+GP−u‖_{L2} / ‖u‖_{L2}
≤ sup_{u∈L2} ‖P+GP−u‖_{L2} / ‖P−u‖_{L2}   (using (15.5))
= sup_{u∈P−(L2)} ‖P+Gu‖_{L2} / ‖u‖_{L2}
≤ sup_{u∈P−(L2)} ‖Gu‖_{L2} / ‖u‖_{L2}   (using (15.4))
≤ sup_{u∈L2} ‖Gu‖_{L2} / ‖u‖_{L2}   (using (15.6))
= ‖G‖∞.
Although we have defined the Hankel operator using a continuous-time system, for illustrative purposes it is more appropriate to focus on a discrete-time system since, in that case, the convolution operation can be represented by an infinite matrix of impulse responses. Towards this end, let us consider a single-input single-output discrete-time system with impulse response g(k), input u(k) and output y(k):
y(n) = Σ_{k=−∞}^{∞} g(n − k)u(k)   (15.11)
Then the input and output signals can be related using an infinite matrix of impulse responses: stacking u = [···, u(−1), u(0), u(1), ···] and y = [···, y(−1), y(0), y(1), ···], we have y = T u, where the infinite matrix T has (i, j) entry g(i − j), so that every diagonal of T is constant.   (15.12)
Notice that the infinite matrix in (15.12) has constant diagonal entries. Such a matrix is called a Toeplitz matrix. The Hankel operator here is characterized by the entries in the lower left quarter of the infinite matrix of impulse responses. This is because the Hankel operator maps the negative-time projection of the input to the positive-time projection of the resulting output.
REMARK 15.1 Notice that if g has an anticausal part (that is, g(t) ≠ 0 for t < 0), then this anticausal part does not affect ΓG or ‖G‖_Hankel. However, by adjusting the anticausal part, the H∞ norm of the resulting system can be made to equal the Hankel norm. This is stated in the following theorem.
THEOREM 15.2
Given any G(s), there exists an anticausal∗ Q(s) such that
‖G‖_Hankel = ‖G + Q‖_Hankel = ‖G + Q‖∞ = min_{Q̃(−s)∈H∞} ‖G + Q̃‖∞
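A finite section of this lower-left block can be built explicitly; its largest singular value approximates the Hankel norm, which by Theorem 15.1 must not exceed the H∞ norm. The sketch below uses g(k) = 0.5^k (an illustrative choice) with scipy.linalg.hankel:

```python
import numpy as np
from scipy.linalg import hankel

a = 0.5
k = np.arange(1, 41)
g = a ** k                       # g(1), g(2), ..., g(40)

# Finite section of the Hankel block: H[i, j] = g(i + j + 1) maps the
# past input samples to the future output samples.
H = hankel(g[:20], g[19:39])
hankel_norm = np.linalg.svd(H, compute_uv=False)[0]

# H-infinity norm of G(e^{jw}) = 1/(1 - a e^{-jw}) by gridding the circle.
w = np.linspace(0, np.pi, 2001)
hinf = np.max(np.abs(1.0 / (1.0 - a * np.exp(-1j * w))))

print(hankel_norm, hinf)  # about 0.6667 and 2.0: Hankel norm <= H-inf norm
```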
Recall from (14.160) that
min_{q∈H∞} ‖W(s)S(s)‖∞ = min_{V(s)∈H∞} ‖Y(s) − V(s)‖∞ for some Y ∈ L∞
= min_{V(s)∈H∞} ‖Y(−s) − V(−s)‖∞
(since, for transfer functions with real coefficients, ‖Y(−s)‖∞ = ‖Y(s)‖∞, etc., where ‖·‖∞ denotes the L∞ norm). Now identify G(s) = Y(−s), Q̃(s) = V(−s), so that the above problem becomes
min_{Q̃(−s)∈H∞} ‖G(s) − Q̃(s)‖∞ = ‖G(s)‖_Hankel (by Theorem 15.2) = ‖Y(−s)‖_Hankel.
Thus, if one can compute the Hankel norm ‖Y(−s)‖_Hankel, then this is equal to the minimal H∞ norm. Thus, the Hankel problem and the H∞ optimal control problems are closely linked. Indeed, several solutions to the H∞ control problems have been developed based entirely on Hankel approximation theory. We next proceed to develop computational algorithms for calculating
∗ Note that Q anticausal means that q(t) = F⁻¹[Q(jω)] = 0 ∀ t > 0 or, equivalently, Q(s) is analytic in Re[s] ≤ 0.
the Hankel norm. To simplify the notation, in the following presentation, we will drop the subscript G from ΓG , the context making the particular subscript obvious.
15.1.2 State Space Computations of the Hankel Norm
Suppose G(s) is a stable, strictly proper, causal transfer function having a state space representation (A, B, C). Then
G(s) = C(sI − A)⁻¹B
and the state and output equations are
ẋ = Ax + Bu
y = Cx
respectively. Define the mappings
C : L2(−∞, 0] → Rⁿ, O : Rⁿ → L2[0, ∞)
by
Cu = ∫_{−∞}^{0} e^{−At}Bu(t) dt ∀ u ∈ L2(−∞, 0]
and
Ox = Ce^{At}x ∀ x ∈ Rⁿ.
Then we can state and prove the following lemma.
LEMMA 15.2
The Hankel operator Γ : L2(−∞, 0] → L2[0, ∞) may be decomposed as
(Γu)(t) = OCu.   (15.13)
PROOF Now
(Γu)(t) = P+ ∫_{−∞}^{∞} Ce^{A(t−τ)}B(P−u)(τ) dτ
= Ce^{At} ∫_{−∞}^{0} e^{−Aτ}Bu(τ) dτ
= OCu.
Thus, the Hankel operator Γ can be factored as
Γ = OC.   (15.14)
Calculation of the Hankel norm using the above factorization involves the use of adjoint operators. Consequently, we next introduce adjoint operators and their properties.
DEFINITION 15.4 Let T : X → Y be a bounded linear operator. Then the adjoint operator T∗ : Y∗ → X∗ is the operator which satisfies (see notation following Definition 14.8)
⟨Tx, y∗⟩ = ⟨x, T∗y∗⟩ ∀ y∗ ∈ Y∗ and x ∈ X.   (15.15)
If X and Y are Hilbert spaces, then X∗ = X and Y∗ = Y, so the defining relationship for the adjoint becomes
⟨Tx, y⟩_Y = ⟨x, T∗y⟩_X, ∀ x ∈ X, ∀ y ∈ Y   (15.16)
where T∗ : Y → X and ⟨·,·⟩_X, ⟨·,·⟩_Y denote the inner products on X and Y respectively.
Example 15.1 Suppose P is a matrix representing the operator P : Cⁿ → Cᵐ. The inner products on Cⁿ and Cᵐ are defined by ⟨x, z⟩_{Cⁿ} = x̄ᵀz and ⟨y, w⟩_{Cᵐ} = ȳᵀw, where x̄ denotes the conjugate of x. Determine the adjoint of P.
Solution: Now
⟨Px, y⟩_{Cᵐ} = (P̄x̄)ᵀy = x̄ᵀP̄ᵀy = ⟨x, P̄ᵀy⟩_{Cⁿ}.
Thus, by definition, P∗ = P̄ᵀ. So the adjoint of a matrix P is obtained by taking its conjugate transpose.
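The defining property of the adjoint is easy to verify numerically for a random complex matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))  # P: C^2 -> C^3
x = rng.normal(size=2) + 1j * rng.normal(size=2)
y = rng.normal(size=3) + 1j * rng.normal(size=3)

inner = lambda u, v: np.vdot(u, v)   # <u, v> = conj(u)^T v
P_star = P.conj().T                  # adjoint = conjugate transpose

# Defining property of the adjoint: <Px, y> = <x, P* y>.
print(np.isclose(inner(P @ x, y), inner(x, P_star @ y)))  # True
```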
Example 15.2 Suppose C : L2(−∞, 0] → Rⁿ is defined by
Cu = ∫_{−∞}^{0} e^{−Aτ}Bu(τ) dτ.
Also suppose the inner products on L2(−∞, 0] and Rⁿ are defined by
⟨u1, u2⟩_{L2} ≜ ∫_{−∞}^{0} u1ᵀ(t)u2(t) dt
and ⟨x1, x2⟩_{Rⁿ} ≜ x1ᵀx2 respectively. Determine the adjoint of C.
Solution: Now
⟨Cu, x⟩_{Rⁿ} = (∫_{−∞}^{0} e^{−Aτ}Bu(τ) dτ)ᵀ x
= ∫_{−∞}^{0} uᵀ(τ)Bᵀe^{−Aᵀτ}x dτ
= ⟨u, Bᵀe^{−Aᵀt}x⟩_{L2}.
Thus, by definition, C∗x = Bᵀe^{−Aᵀt}x ∀ x ∈ Rⁿ.
Example 15.3 Suppose O : Rⁿ → L2[0, ∞) is defined by
Ox = Ce^{At}x.
Also suppose that the inner products on L2[0, ∞) and Rⁿ are defined by
⟨u1, u2⟩_{L2} ≜ ∫_{0}^{∞} u1ᵀ(t)u2(t) dt and ⟨x1, x2⟩_{Rⁿ} ≜ x1ᵀx2.
Determine the adjoint of O.
Solution: Now
⟨Ox, y⟩_{L2} = ⟨Ce^{At}x, y⟩_{L2}
= ∫_{0}^{∞} xᵀe^{Aᵀt}Cᵀy(t) dt
= xᵀ ∫_{0}^{∞} e^{Aᵀt}Cᵀy(t) dt
= ⟨x, ∫_{0}^{∞} e^{Aᵀt}Cᵀy(t) dt⟩_{Rⁿ}
= ⟨x, O∗y⟩_{Rⁿ}.
Thus, by definition, O∗y = ∫_{0}^{∞} e^{Aᵀt}Cᵀy(t) dt ∀ y ∈ L2[0, ∞).
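Both adjoint computations can be checked by numerical quadrature. The sketch below uses the scalar case A = −1, B = 1 with a truncated time axis and an arbitrary test input (all of these choices are illustrative, not from the text):

```python
import numpy as np

# Scalar illustration with A = -1, B = 1, so e^{-At} = e^{t} on t <= 0.
t = np.linspace(-30.0, 0.0, 300001)   # truncate (-inf, 0] at t = -30
dt = t[1] - t[0]
integral = lambda f: np.sum(f) * dt   # simple Riemann sum

u = np.exp(t)                         # a test input in L2(-inf, 0]
x = 2.0                               # a test vector in R^1

Cu = integral(np.exp(t) * u)          # C u = int e^{-At} B u(t) dt
Cstar_x = np.exp(t) * x               # (C* x)(t) = B^T e^{-A^T t} x

lhs = Cu * x                          # <Cu, x> in R
rhs = integral(u * Cstar_x)           # <u, C* x> in L2(-inf, 0]
print(lhs, rhs)  # both approximately 1.0, confirming <Cu, x> = <u, C* x>
```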
DEFINITION 15.5 Let T : X → Y be a linear operator where X and Y are inner product spaces. Then
R(T) ≜ range(T) = {y | y = Tx for some x ∈ X}   (15.17)
η(T) ≜ null(T) = {x ∈ X | Tx = 0}.   (15.18)
Fact: If X and Y are Hilbert spaces and Y is finite dimensional (that is, R(T) is finite dimensional), then R(T) = R(TT∗). Since C : L2(−∞, 0] → Rⁿ, it follows that R(C) is finite dimensional and, therefore, R(C) = R(CC∗).
The reachability and observability Grammian matrices play an important role in the calculation of Hankel norms. Given a state space triple (A, B, C) with A ∈ Rⁿˣⁿ stable, the reachability Grammian P ∈ Rⁿˣⁿ is defined by
P ≜ ∫_{−∞}^{0} e^{−At}BBᵀe^{−Aᵀt} dt   (15.19)
while the observability Grammian Q ∈ Rⁿˣⁿ is defined by
Q ≜ ∫_{0}^{∞} e^{Aᵀt}CᵀCe^{At} dt.   (15.20)
It can be easily verified that P = CC∗ and Q = O∗O. Furthermore, P and Q can be obtained using the following lemma.
LEMMA 15.3
The Grammians P and Q satisfy the following Lyapunov equations, and hence can be determined by solving them:
AP + PAᵀ + BBᵀ = 0   (15.21)
AᵀQ + QA + CᵀC = 0.   (15.22)
PROOF We first consider (15.21). Now
AP + PAᵀ + BBᵀ = ∫_{−∞}^{0} Ae^{−At}BBᵀe^{−Aᵀt} dt + ∫_{−∞}^{0} e^{−At}BBᵀe^{−Aᵀt}Aᵀ dt + BBᵀ
= −∫_{−∞}^{0} (d/dt)[e^{−At}BBᵀe^{−Aᵀt}] dt + BBᵀ
= −e^{−At}BBᵀe^{−Aᵀt} |_{−∞}^{0} + BBᵀ
= −BBᵀ + BBᵀ (since A is a stable matrix)
= 0.
Thus, (15.21) holds. A similar calculation can be used to verify (15.22).
Example 15.4 Compute ‖Γ‖ using the Grammians P and Q.
Solution: Now
‖Γ‖² = sup_u (‖Γu‖_{L2[0,∞)} / ‖u‖_{L2(−∞,0]})²
= sup_u (‖OCu‖_{L2[0,∞)} / ‖u‖_{L2(−∞,0]})²   (using Lemma 15.2)
= sup_{x∈Rⁿ} (‖OCC∗x‖_{L2[0,∞)} / ‖C∗x‖_{L2(−∞,0]})²   (since R(C) = R(CC∗))
= sup_{x∈Rⁿ} ⟨OCC∗x, OCC∗x⟩_{L2[0,∞)} / ⟨C∗x, C∗x⟩_{L2(−∞,0]}
= sup_{x∈Rⁿ} ⟨CC∗x, O∗OCC∗x⟩_{Rⁿ} / ⟨CC∗x, x⟩_{Rⁿ}
= sup_{x∈Rⁿ} xᵀ(CC∗)ᵀO∗O CC∗x / xᵀ(CC∗)ᵀx
= sup_{x∈Rⁿ} xᵀPQPx / xᵀPx   (since CC∗ = P and O∗O = Q)
= sup_{x̃∈Rⁿ} x̃ᵀP^{1/2}QP^{1/2}x̃ / x̃ᵀx̃   (with x̃ = P^{1/2}x)
= λmax[P^{1/2}QP^{1/2}]
= λmax[P^{1/2}P^{1/2}Q]   (since λmax(AB) = λmax(BA))
= λmax(PQ).
From the previous example,
‖G(s)‖_Hankel = ‖ΓG‖ = [λmax(PQ)]^{1/2}.   (15.23)
768
OPTIMAL AND ROBUST CONTROL
Next we state the multivariable counterpart of Theorem 15.2. THEOREM 15.3 (Nehari, 1957) Let G(s) be a given transfer function matrix. Then (a) kG(s)kHankel = min kG + Q∗ k∞ Q∈H∞
T
where Q (s) = Q (−s) ∗
and is, therefore, anticausal. (b) The minimizing Q∗opt makes G + Q∗opt all pass, that is [G + Q∗opt ]∗ [G + Q∗opt ] = I Thus, the all pass nature of the H∞ optimal solution that we established for the scalar case in Chapter 14 extends to the multivariable case too. Even without relying on the above theorem, it is possible to give a constructive state-space proof that there always exists a stable Q(s) such that kG + Q∗ k∞ = kG + Q∗ kHankel = kGkHankel
(15.24) (15.25)
where Q∗(s) = Qᵀ(−s). In other words, every G(s) has an anticausal all-pass extension Q∗ such that the H∞ norm equals the Hankel norm. To get a physically intuitive feel for what is happening, we return to the SISO discrete-time case and the infinite matrix of impulse responses. As shown in Figure 15.1, an anticausal part is added to G to make the resulting transfer function matrix all-pass, while leaving the Hankel norm unaltered. The Hankel approximation approach to solving the H∞ optimal control problem involves the following steps: (i) obtain a YJBK parametrization of all stabilizing controllers so that the closed-loop transfer function matrix whose H∞ norm is to be minimized becomes affine in the YJBK parameter; (ii) convert the H∞ norm minimization problem to an equivalent Hankel approximation problem; (iii) solve for the optimal Hankel approximant, which by Nehari's Theorem must make the resulting optimal transfer function matrix all-pass; and (iv) use the optimal Hankel approximant to recover the optimal YJBK parameter and, therefore, the optimal controller. Although the Hankel approach to H∞ is based on the transfer function approach and makes use of the YJBK parametrization, all the necessary computations can be carried out using state-space techniques. In the next subsection, we first present a state-space characterization of all-pass transfer function matrices and then use it to solve for an anticausal all-pass extension of a given square, stable transfer function matrix.
[The figure depicts the infinite impulse-response matrix: adding an anticausal block Q to the causal block G leaves the Hankel operator unchanged, ΓG = ΓG+Q, while making G + Q all-pass.]
Figure 15.1 Pictorial representation of the all-pass extension.
15.1.3 State Space Computation of an All-Pass Extension
The following theorem, stated without proof, provides a set of sufficient conditions which, if satisfied, guarantee that a given transfer function matrix Ge(s) is all-pass.
THEOREM 15.4 (All-pass)
Let Ge(s) = Ce(sI − Ae)⁻¹Be + De. If there exist matrices Pe and Qe satisfying
(i) AePe + PeAeᵀ + BeBeᵀ = 0
(ii) AeᵀQe + QeAe + CeᵀCe = 0
(iii) PeQe = I
(iv) DeᵀCe + BeᵀQe = 0
(v) DeBeᵀ + CePe = 0
(vi) DeᵀDe = I
then Ge(s) is all-pass, that is, Geᵀ(−s)Ge(s) = I.
REMARK 15.2 Note that Ge(s) in the above theorem is not required to be stable and, although the Grammians Pe and Qe are not defined, the corresponding Lyapunov equations (i) and (ii) will have solutions Pe and Qe which will be indefinite but still satisfy (iii).
Suppose G(s) = C(sI − A)⁻¹B ∈ Cⁿˣⁿ is stable. The preceding theorem can be used to solve for an all-pass extension of G(s), that is, to find an anticausal Q(s) such that G(s) − Q(s) is all-pass. Before presenting the result as a theorem, let us first introduce some associated notation. Let Ge(s) ≜ G(s) − Q(s) have a state-space representation (Ae, Be, Ce, De), that is,
Ge(s) = Ce(sI − Ae)⁻¹Be + De.
Suppose Q(s) has the state space representation
Q(s) = Ĉ(sI − Â)⁻¹B̂ + D̂.
Then a possible choice for (Ae, Be, Ce, De) is given by
[Ae Be; Ce De] = [A 0 B; 0 Â B̂; C −Ĉ D̂]   (15.26)
where Â, B̂, Ĉ, D̂ are as defined below. Let P and Q satisfy the equations
AP + PAᵀ + BBᵀ = 0, QA + AᵀQ + CᵀC = 0
respectively, and define
R ≜ PQ − I, Pe ≜ [P I; I QR⁻¹], Qe ≜ [Q −Rᵀ; −R RP].   (15.27)
Furthermore, for a given D̂, suppose that B̂, Ĉ, and Â are given by the following expressions:
B̂ = (R⁻¹)ᵀ(QB + CᵀD̂)   (15.28)
Ĉ = CP + D̂Bᵀ   (15.29)
Â = R⁻ᵀ(Aᵀ + QAP − CᵀD̂Bᵀ).   (15.30)
It can be verified that, for any D̂ satisfying D̂∗D̂ = I and Â, B̂, Ĉ given by (15.28)–(15.30), the Ge given by (15.26) satisfies the all-pass theorem, that is, conditions (i) through (vi) of Theorem 15.4, and hence
Ge(s) ≜ G(s) − Q(s) is all-pass, that is, Geᵀ(−s)Ge(s) = I.
Using (i) and (ii) of Theorem 15.4, we can show that
ÂP̂ + P̂Âᵀ + B̂B̂ᵀ = 0, Q̂Â + ÂᵀQ̂ + ĈᵀĈ = 0
where P̂ = QR⁻¹ = Q(PQ − I)⁻¹ and Q̂ = RP = (PQ − I)P. By Lyapunov theory, Â is antistable if and only if λmax(PQ) < 1. Furthermore, from Example 15.4, note that
‖G‖²_Hankel = λmax(PQ).
So, if λmax(PQ) > 1, then by Nehari's Theorem, no solution exists, that is, no all-pass extension can be found. The above ideas are formalized in the following theorem.
THEOREM 15.5
G(s) has an antistable all-pass extension Q(s) if and only if λmax(PQ) < 1. Moreover, one such extension is
Q(s) = Ĉ(sI − Â)⁻¹B̂ + D̂
where D̂ is any unitary matrix and Â, B̂, Ĉ are given by Equations (15.27)–(15.30).
H∞ Optimal Control Based on the YJBK Parametrization and Hankel Approximation Theory
From the material presented in the last three subsections, including the connection between the H∞ norm minimization and Hankel approximation problems, it is clear that, in principle, one can solve the multi-input multi-output standard H∞ problem by combining the results of Theorem 15.5 with Nehari’s Theorem. In this subsection, we present the necessary implementation details when using state space techniques. Here it should be pointed out that although the calculations for the H∞ optimal control solution based on Hankel theory can be carried out using state-space techniques, this should not be confused with the state-space solution to H∞ to be presented in the next section. Indeed, the two approaches are quite distinct and while the Hankel approach makes use of the YJBK parametrization, the state-space approach completely bypasses the need for it. As before, let us start with an augmented plant shown in the standard H∞ configuration in Figure 15.2. Our objective is to find a stabilizing F such that kTy1 u1 k∞ < 1. From Figure 15.2, we have y1 = P11 u1 + P12 u2 y2 = P21 u1 + P22 u2 u 2 = F y2 .
772
OPTIMAL AND ROBUST CONTROL u1
y1
u2
P11
P12
P21
P22
y2
F
Figure 15.2 The standard H∞ configuration.
Thus, Ty1 u1 = P11 + P12 F (I − P22 F )−1 P21
(15.31)
We now proceed to determine Ty1 u1 using state space calculations. To do so, we first start with a state-space representation (A, [B1 , B2 ], [C1T , C2T ]T , D) for the augmented plant P which is usually expressed in the packed matrix notation as: A B1 B2 P = C1 D11 D12 . C2 D21 D22 The next step is to carry out a coprime factorization for P22 , that is determine stable rational transfer function matrices Nr , Dr , Nl , Dl such that P22 = Nr Dr−1 = Dl−1 Nl and (Nr , Dr ) are right coprime and (Dl , Nl ) are left coprime. State space formula for determining stable rational transfer function matrices Nr , Dr , Nl , Dl , Ur , Vr , Ul , Vl satisfying the matrix equation Vr Ur Dr −Ul I0 = (15.32) −Nl Dl Nr Vl 0I are developed in Section 15.1.6. Notice that two out of the four equations resulting from (15.32) are the Bezout identities guaranteeing the right coprimeness of (Nr , Dr ) and the left coprimeness of (Dl , Nl ). Thus, (15.32) is a doubly coprime factorization. With these coprime factorizations in hand, the multi-input multi-output Youla parametrization tells us that all stabilizing controllers F for P22 can be represented as −1
F = −(Ul + Dr Q)(Vl − Nr Q) −1
= −(Vr − QNl )
(Ur + QDl )
(15.33)
773
H∞ OPTIMAL MULTIVARIABLE CONTROL
where Q is any stable rational transfer function matrix. Substituting (15.33) into (15.31) and making use of (15.32), we obtain Ty1 u1 = T11 + T12 QT21 where ∆
∆
∆
T11 = (P11 − P12 Ul Dl P21 ), T12 = (−P12 Dr ), T21 = (Dl P21 ). Our original standard H∞ problem now becomes: find a stable Q to make kT11 + T12 QT21 k∞ < 1. To solve this problem, we consider two different cases: Case (I): T12 and T21 are square: If T12 and T21 are square, we can factorize them into an inner (all pass and stable factor) and an outer (minimum phase and stable) factor using inner-outer factorization. The details of this factorization technique will be discussed in the next subsection. For the time being, it is sufficient to note that since the outer factors are stable and minimum phase, they can always be absorbed in Q. Consequently, without any loss of generality, we can assume that T12 and T21 are square and inner. Then min kT11 + T12 QT21 k∞
Q∈H∞
∗ ∗ = min kT12 T11 T21 + Qk∞ .
(15.34)
Q∈H∞
Since replacing s by −s does not alter the magnitudes and norms evaluated on the jω-axis, we replace s by −s in the above expres∗ ∗ ˜ sion and identify G(s) = T12 T11 T21 (−s) and Q(s) = −Q(−s) to obtain ∗ ∗ min kT12 T11 T21 + Qk∞ =
Q∈H∞
min
˜ Q(−s)∈H ∞
kG(s) −
˜ k∞ Q(s) | {z }
antistable
= kG(s)kHankel (by Nehari’s Theorem) ∗ ∗ = kT12 T11 T21 (−s)kHankel .
˜ Now, we can use Theorem 15.5 to get the optimal Q(s). This, in turn, can be used to determine the optimal Q(s) and then the optimal H∞ controller. The H∞ optimization problem here, namely (15.34) is called the one block problem. For the single-input single-output case, the required inner-outer factorizations can be carried out using Blaschke products as was done in Section 14.6.5. In the multivariable case, the inner outer factorization is carried out using the LQ return difference equality, to be discussed in the next subsection.
774
OPTIMAL AND ROBUST CONTROL
Case (II): T12 , T21 are not square: Here T12 has more rows than columns (the number of control inputs is less than the number of outputs that we are trying to keep small) and/or T21 has more columns than rows (the number of exogenous inputs is more than the number of measurements). In this case, we can augment T12 with extra ⊥ ⊥ columns T12 such that T12 T12 is square and inner. Similarly, T21 ⊥ we can augment T21 with extra rows T21 such that ⊥ is square T21 ⊥ ⊥ and inner. T12 , T21 are said to be co-inner with respect to T12 and ⊥ ⊥ T21 respectively. Introducing T12 , T21 we can write T21 Q0 ⊥ T11 + T12 QT21 = T11 + [T12 T12 ] ⊥ 0 0 T21 ⇒ kTy1 u1 k∞
Q0 T21 ⊥
= T11 + T12 T12 ⊥ 0 0 T21 ∞
∗
T12 ∗ ∗⊥ Q0
= + ∗⊥ T11 T21 T21
T12 0 0 ∞
R11 R12 Q0
+ =
R21 R22 0 0 ∞
∗ ∗ ∗ ∗⊥ ∗⊥ ∗ where R11 = T12 T11 T21 , R12 = T12 T11 T21 , R21 = T12 T11 T21 and ∗⊥ ∗⊥ R22 = T12 T11 T21 . Thus
R11 R12 Q0
min kTy1 u1 k∞ = min + (15.35) R21 R22 0 0 ∞ Q∈H∞ Q∈H∞
The H∞ optimization problem in (15.35) above is called a four block problem. It is clear that if only T12 or T21 but not both were nonsquare, then instead of a four block problem, we would have had a two-block one. Equation (15.35) also provides the justification for referring to the optimization problem in (15.34) as the one-block problem.
In case (I) above, we showed how the one block problem could be solved using Theorem 15.5. Our strategy for solving two and four block problems would be to reduce them to equivalent one block problems which can then be treated as in Case (I). The reduction of four block problems to two block problems and then to one block ones is made possible by a technique that makes use of Spectral Factorization. To see how this technique works, suppose that we have a two block problem
‖ [R11 + Q; R21] ‖∞ ≤ 1    (15.36)

H∞ OPTIMAL MULTIVARIABLE CONTROL

and we wish to reduce it to an equivalent one block one. Clearly

(15.36) ⇔ [(R11 + Q)*  R21*] [R11 + Q; R21] ≤ I
⇔ I − (R11 + Q)*(R11 + Q) − R21* R21 ≥ 0
⇔ (I − R21* R21) − (R11 + Q)*(R11 + Q) ≥ 0.    (15.37)

If the above holds, then I − R21* R21 ≥ 0, so that I − R21* R21 can be factored as

M* M = I − R21* R21.    (15.38)

Equation (15.38) is called a Spectral Factorization of R21(s) and M(s) is called a spectral factor. Suppose it is possible to obtain a spectral factor M such that M(s) is stable and minimum phase. Then (15.37) can be rewritten as

M* M − (R11 + Q)*(R11 + Q) ≥ 0
⇔ (M*)⁻¹ [M* M − (R11 + Q)*(R11 + Q)] M⁻¹ ≥ 0,
that is, I − [(R11 + Q)M⁻¹]* (R11 + Q)M⁻¹ ≥ 0
⇔ ‖R11 M⁻¹ + Q̃‖∞ ≤ 1    (15.39)

where Q̃ ≜ Q M⁻¹. Since M is stable and minimum phase, it is clear that finding a stable Q̃ to make ‖R11 M⁻¹ + Q̃‖∞ ≤ 1 is equivalent to finding a stable Q to solve the two block problem (15.36). But the one block problem (15.39) can be solved as in Case (I). Thus, we have shown how one can use Spectral Factorization to reduce a two block problem to a one block one. A similar technique can be used to reduce a four block problem to a two block one. It only remains to show how the Spectral Factorization in (15.38) can be carried out. This will be demonstrated once we have revisited the LQ Return Difference Equality in the next subsection. However, before doing that, let us first summarize the steps involved in the state-space H∞ solution using Hankel theory:

1. Get a coprime factorization of P22 to obtain Ty1u1 = T11 + T12 Q T21. By a specific choice (dictated by the LQ return difference equality) of state feedback gain and observer gain (given by solutions of Riccati Equations), we can make T12, T21 inner.
2. If T12, T21 are not square, then the same choice of gains gives co-inner factors T12⊥, T21⊥ so that Ty1u1 = T11 + [T12  T12⊥] [Q, 0; 0, 0] [T21; T21⊥].
3. Reduce the 4-block/2-block problem to a 1-block problem using Spectral Factorization(s).
4. Solve the Hankel approximation problem using Theorem 15.5.
5. Calculate the resulting controller from Q(s).

The coprime factorization required in Step 1 above can be carried out using state-space formulas. To make the presentation self-contained, these formulas are derived in Section 15.1.6. Note that implementing Steps 1 and 2 above involves inner-outer factorization while implementing Step 3 involves the use of spectral factorization. Both these kinds of factorizations are facilitated by the LQ Return Difference Equality, which is revisited in the next subsection.
15.1.5
LQ Return Difference Equality
Consider the LQ (Linear Quadratic) Optimal Control Problem: Minimize over u(·)

∫₀^∞ [xᵀ(t)  uᵀ(t)] [Q, N; Nᵀ, R] [x(t); u(t)] dt    (15.40)

subject to

ẋ = Ax + Bu.

Let us assume that A, B, Q = Qᵀ, N, R = Rᵀ are such that the problem is well-posed. Then, as in Chapter 13, it can be shown that the optimal solution is a state feedback u = −Fx where

F = R⁻¹(N + PB)ᵀ

and P satisfies the Algebraic Riccati Equation

[−P  I] H [I; P] = 0

where

H = [A − BR⁻¹Nᵀ, −BR⁻¹Bᵀ; −(Q − NR⁻¹Nᵀ), −(A − BR⁻¹Nᵀ)ᵀ]
is called the associated Hamiltonian matrix. The block diagram for the resulting feedback system can be obtained as follows. First,

ẋ = Ax + Bu ⇒ X(s) = (sI − A)⁻¹ B U(s).

Also,

u = −Fx ⇒ U(s) = −F X(s).

These two relationships yield the feedback interconnection shown in Figure 15.3.

[Figure 15.3: Feedback structure resulting from LQ optimal control — B and (sI − A)⁻¹ in the forward path from U(s) to X(s), with the gain F fed back negatively.]

From this figure, the loop transfer function is

L(s) = F (sI − A)⁻¹ B

and the Return Difference ≜ difference between the injected signal and what we are getting back in return (prior to closing the loop) = I + L(s). The return difference resulting from a linear quadratic optimal control design satisfies a particular relationship. This relationship is called the LQ Return Difference Equality and was derived in Section 13.6 for the special case of N = 0, that is, no cross terms in the performance index. In the next example, we derive the LQ Return Difference Equality for the more general case, since it is this more general version that is needed for carrying out inner-outer factorizations and spectral factorizations.

Example 15.5 Prove that if L(s) is the loop transfer function resulting from the solution to the optimal control problem (15.40), then

[I + L(s)]* R [I + L(s)] = [((sI − A)⁻¹B)*  I] [Q, N; Nᵀ, R] [(sI − A)⁻¹B; I]
where G*(s) ≜ Gᵀ(−s).

Solution: The Riccati Equation is

[−P  I] H [I; P] = 0.

Substituting

H = [A − BR⁻¹Nᵀ, −BR⁻¹Bᵀ; −(Q − NR⁻¹Nᵀ), −(A − BR⁻¹Nᵀ)ᵀ]

we obtain

−PA + PBR⁻¹Nᵀ + PBR⁻¹BᵀP − Q + NR⁻¹Nᵀ − AᵀP + (BR⁻¹Nᵀ)ᵀP = 0.

Using F = R⁻¹(N + PB)ᵀ, we obtain

−PA + PBF − Q + NF − AᵀP = 0.

Adding Ps and −Ps, using the fact that PB + N = (RF)ᵀ, and grouping terms, we get

P(sI − A) + (RF)ᵀF − Q + (sI − A)* P = 0.

Now multiplying from the left by Bᵀ[(sI − A)*]⁻¹ and from the right by (sI − A)⁻¹B, it follows that

Bᵀ[(sI − A)*]⁻¹ P B + Bᵀ[(sI − A)*]⁻¹ FᵀRF (sI − A)⁻¹ B − Bᵀ[(sI − A)*]⁻¹ Q (sI − A)⁻¹ B + Bᵀ P (sI − A)⁻¹ B = 0.

Using F = R⁻¹(N + PB)ᵀ, that is, PB = (RF)ᵀ − N and BᵀP = RF − Nᵀ, we obtain

Bᵀ[(sI − A)*]⁻¹ [(RF)ᵀ − N] + Bᵀ[(sI − A)*]⁻¹ FᵀRF (sI − A)⁻¹ B − Bᵀ[(sI − A)*]⁻¹ Q (sI − A)⁻¹ B + (RF − Nᵀ)(sI − A)⁻¹ B = 0

⇒ L*(s)R − Bᵀ[(sI − A)*]⁻¹ N + L*(s)R L(s) − Bᵀ[(sI − A)*]⁻¹ Q (sI − A)⁻¹ B + R L(s) − Nᵀ(sI − A)⁻¹ B = 0

⇒ [I + L(s)]* R [I + L(s)] = R + Bᵀ[(sI − A)*]⁻¹ N + Bᵀ[(sI − A)*]⁻¹ Q (sI − A)⁻¹ B + Nᵀ(sI − A)⁻¹ B.
The above expression may be rewritten as

[I + L(s)]* R [I + L(s)] = [((sI − A)⁻¹B)*  I] [Q, N; Nᵀ, R] [(sI − A)⁻¹B; I],

which is the LQ Return Difference Equality.

The next two examples demonstrate how the LQ Return Difference Equality can be used to carry out Spectral Factorization and Inner-Outer Factorization respectively.

Example 15.6 (Spectral Factorization) Suppose G(s) = C(sI − A)⁻¹B + D is stable. To determine the spectral factor of G(s), take

[Q, N; Nᵀ, R] = −[Cᵀ; Dᵀ][C  D] + [0, 0; 0, I]

and solve the LQ optimal state feedback problem. If L(s) denotes the loop transfer function matrix corresponding to the optimal state feedback, then from the LQ Return Difference Equality, it follows that

[I + L(s)]* R [I + L(s)] = I − G*(s)G(s).

Since (I + L(s))⁻¹ is stable, it follows that R^{1/2}[I + L(s)] is a minimum phase spectral factor of G(s).

Example 15.7 (Inner-Outer Factorization) Again suppose that G(s) = C(sI − A)⁻¹B + D is stable. To determine the inner and outer factors of G(s), take
[Q, N; Nᵀ, R] = [Cᵀ; Dᵀ][C  D]

and solve the LQ optimal state feedback problem. If L(s) denotes the loop transfer function matrix corresponding to the optimal state feedback, then from the LQ Return Difference Equality, it follows that

[I + L(s)]* R [I + L(s)] = G*(s)G(s)

and hence I = θ*θ, where

θ(s) = G(s)[R^{1/2}(I + L(s))]⁻¹ = G(s)(I + L(s))⁻¹ R^{−1/2}.

Thus, G(s) has the inner-outer factorization

G(s) = θ(s)M(s)

where M(s) = R^{1/2}(I + L(s)) is stable and minimum phase (that is, outer) and θ(s) = G(s)(I + L(s))⁻¹ R^{−1/2} is inner.
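As a numerical sketch of Example 15.7 (my own illustration, not from the text; it assumes SciPy's `solve_continuous_are`, which accepts the cross weight N as its `s` argument), the code below computes the inner factor of the non-minimum-phase plant G(s) = (s − 1)/(s + 2) and checks that it has unit magnitude on the jω-axis:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# G(s) = C (sI - A)^{-1} B + D = (s - 1)/(s + 2): stable, non-minimum phase
A = np.array([[-2.0]]); B = np.array([[1.0]])
C = np.array([[-3.0]]); D = np.array([[1.0]])

# LQ data from the inner-outer recipe: Q = C'C, R = D'D, N = C'D
Q = C.T @ C; R = D.T @ D; N = C.T @ D
P = solve_continuous_are(A, B, Q, R, s=N)      # stabilizing ARE solution
F = np.linalg.solve(R, B.T @ P + N.T)          # F = R^{-1}(N + PB)'

def tf(Cm, Am, Bm, Dm, s):
    """Evaluate C (sI - A)^{-1} B + D at a complex frequency s."""
    return Cm @ np.linalg.solve(s * np.eye(Am.shape[0]) - Am, Bm) + Dm

s0 = 0.7j
G = tf(C, A, B, D, s0)
L = tf(F, A, B, np.zeros_like(D), s0)          # loop gain F (sI - A)^{-1} B
theta = G @ np.linalg.inv(np.eye(1) + L) / np.sqrt(R)   # inner factor
print(abs(theta[0, 0]))  # ≈ 1 at every frequency: theta(s) = (s - 1)/(s + 1)
```

Here the stabilizing solution is P = 2, so F = −1, and the outer factor works out to M(s) = R^{1/2}(I + L(s)) = (s + 1)/(s + 2): stable, minimum phase, and satisfying θ(s)M(s) = G(s) as claimed.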
15.1.6
State Space Formulas for Coprime Factorizations
The YJBK parametrization of all stabilizing controllers requires a coprime factorization of the plant and the solution of a Bezout Identity. In this subsection, we show how both can be obtained using state space techniques. The following example presents a preliminary result that will be repeatedly used in our development.

Example 15.8 Given that G(s) = C(sI − A)⁻¹B + D is a square transfer function matrix with D nonsingular, derive a state-space expression for G⁻¹(s).

Solution: Now

ẋ = Ax + Bu    (15.41)
y = Cx + Du    (15.42)

⇒ Y(s) = [C(sI − A)⁻¹B + D] U(s).

From (15.42),

u = −D⁻¹Cx + D⁻¹y

which when substituted in (15.41) yields ẋ = Ax + B(−D⁻¹Cx + D⁻¹y), that is,

ẋ = (A − BD⁻¹C)x + BD⁻¹y.    (15.43)

Thus

G⁻¹(s) = C̃(sI − Ã)⁻¹B̃ + D̃

where Ã = A − BD⁻¹C, B̃ = BD⁻¹, C̃ = −D⁻¹C and D̃ = D⁻¹.

Suppose that we are given the plant

ẋ = Ax + Bu
y = Cx.
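A quick numerical check of this inversion formula (my own illustration, assuming NumPy) builds (Ã, B̃, C̃, D̃) for a small example and confirms G(s)G⁻¹(s) = I at a test frequency:

```python
import numpy as np

A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[2.0]])          # square and nonsingular

# State-space data of G^{-1}(s) from Example 15.8
Dinv = np.linalg.inv(D)
At = A - B @ Dinv @ C          # A~ = A - B D^{-1} C
Bt = B @ Dinv                  # B~ = B D^{-1}
Ct = -Dinv @ C                 # C~ = -D^{-1} C
Dt = Dinv                      # D~ = D^{-1}

def tf(Cm, Am, Bm, Dm, s):
    return Cm @ np.linalg.solve(s * np.eye(Am.shape[0]) - Am, Bm) + Dm

s0 = 0.3 + 1.1j
prod = tf(C, A, B, D, s0) @ tf(Ct, At, Bt, Dt, s0)
print(prod)  # → [[1.+0.j]] up to rounding: G(s0) G^{-1}(s0) = I
```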
Taking Laplace Transforms and setting initial conditions to zero, we obtain

Y(s) = C(sI − A)⁻¹ B U(s)
⇒ Tyu = C(sI − A)⁻¹ B.

Now consider the feedback

u = −Fx + r

so that the closed-loop state equation becomes

ẋ = Ax − BFx + Br.

Taking Laplace Transforms and setting initial conditions to zero, we have

X(s) = (sI − A + BF)⁻¹ B R(s).

Thus,

Tyr = C(sI − A + BF)⁻¹ B

and

U(s) = −F(sI − A + BF)⁻¹ B R(s) + R(s) = [I − F(sI − A + BF)⁻¹ B] R(s)
⇒ Tur = I − F(sI − A + BF)⁻¹ B = [I + F(sI − A)⁻¹ B]⁻¹   (using Example 15.8).

Since

Tyu = Tyr (Tur)⁻¹,

if we define

Nr = C(sI − A + BF)⁻¹ B    (15.44)
Dr = I − F(sI − A + BF)⁻¹ B    (15.45)

then we have

P(s) = Tyu = Nr(s) Dr⁻¹(s).    (15.46)

As long as the matrix (A − BF) has all eigenvalues in the open left-half plane, Nr(s) and Dr(s) will be stable rational transfer function matrices. Thus (15.44), (15.45) and (15.46) do represent a stable factorization of the plant.
We will later show that this representation is also right coprime. Note that the stable factors Nr(s) and Dr(s) can be conveniently represented as follows using the packed matrix notation:

[Nr; Dr] = [A − BF, B; C, 0; −F, I].    (15.47)

Suppose now that the state x of the plant is not available, so a state feedback of the form u = −Fx has to be implemented as

u = −Fx̂    (15.48)

where x̂ is the estimate of x obtained from a Luenberger observer

x̂˙ = Ax̂ + Bu + H(y − Cx̂).    (15.49)

From (15.48), (15.49), we obtain

x̂˙ = (A − BF − HC)x̂ + Hy
⇒ X̂(s) = (sI − A + BF + HC)⁻¹ H Y(s)
⇒ U(s) = −F(sI − A + BF + HC)⁻¹ H Y(s).

Thus, the transfer function matrix of the equivalent output feedback compensator is given by

C(s) = F(sI − A + BF + HC)⁻¹ H.

Recognizing that every stabilizing compensator, when properly factored, yields a solution to the appropriate Bezout identity, we now seek a stable factorization of this compensator; that is, we want to find stable Ul(s) and Vl(s) such that C(s) = Ul(s)Vl⁻¹(s). Defining C̃ = F, Ã = A − BF − HC, B̃ = H, F̃ = −C, and using the factorization result that we just developed using full state feedback (see (15.47)), it follows that the factors Ul(s), Vl(s) are given by

[Ul; Vl] = [A − BF, H; F, 0; C, I].    (15.50)

Furthermore, (15.47) and (15.50) can be represented more compactly as:

[Nr, Vl; Dr, −Ul] = [A − BF, B, H; C, 0, I; −F, I, 0].    (15.51)
Since (A − BF) is a stable matrix, it follows that Nr, Dr, Vl, Ul are all stable. To show that Nr(s), Dr(s) are right coprime and Ul(s), Vl(s) are right coprime, we must show that there exist stable transfer function matrices Ur(s), Vr(s), Nl(s), Dl(s) such that the following two Bezout identities hold:

Ur Nr + Vr Dr = I    (15.52)
Nl Ul + Dl Vl = I.    (15.53)

It turns out that it is possible to find stable Ur, Vr, Nl, Dl which not only satisfy (15.52) and (15.53) but also satisfy

Nr(s) Dr⁻¹(s) = Dl⁻¹(s) Nl(s)    (15.54)
Ul(s) Vl⁻¹(s) = Vr⁻¹(s) Ur(s).    (15.55)

If (15.52), (15.53), (15.54), and (15.55) are all satisfied, then it is clear that the plant P(s) has a right coprime factorization Nr(s)Dr⁻¹(s) and a left coprime factorization Dl⁻¹(s)Nl(s); similarly, the output feedback controller C(s) would have a right coprime factorization Ul(s)Vl⁻¹(s) and a left coprime factorization Vr⁻¹(s)Ur(s). Note that Nr, Dr, Vl and Ul have already been determined from (15.51); hence, our task now is to determine Nl, Dl, Vr, Ur to satisfy (15.52) – (15.55). Observe that (15.52) – (15.55) can be compactly written as

[Ur, Vr; Dl, −Nl] [Nr, Vl; Dr, −Ul] = [I, 0; 0, I].    (15.56)

Thus

[Ur, Vr; Dl, −Nl] = [Nr, Vl; Dr, −Ul]⁻¹ = [A − HC, H, B; F, 0, I; −C, I, 0]    (15.57)

(using Example 15.8).
As long as the observer gain H is chosen to make (A − HC) stable, the Ur, Vr, Dl, Nl obtained from (15.57) will be stable. Thus, any stabilizing F and H used in (15.51) and (15.57) provide coprime factorizations of the plant along with solutions to all the Bezout identities. However, by making use of the LQ Return Difference Equality and choosing F and H in a special way, we can make T12, T21 in Section 15.1.4 inner and also get co-inner factors in the process. A serious drawback of the approach that we have presented in this section for solving the multi-input multi-output H∞ optimal control problem is the so-called degree inflation. Pre- and post-multiplication by inner factors increases the order of the transfer function matrices involved in the optimization problem, as do the spectral factorizations used to reduce four block/two block problems to one block ones. The net result is that the final solution is of high order with hidden pole-zero cancellations, which are difficult to detect and, therefore, cannot be easily removed. This is referred to in the literature as the problem of degree inflation. A considerable amount of effort was expended by several researchers to come up with alternative approaches that do not have this drawback. These efforts finally culminated in the state-space solution that we will present in the next section.
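The packed formulas (15.47)–(15.57) are easy to exercise numerically. The sketch below (my own illustration, assuming NumPy) evaluates the packed models at one frequency for a scalar unstable plant P(s) = 1/(s − 1) and verifies the compact Bezout relation (15.56):

```python
import numpy as np

# Unstable scalar plant P(s) = 1/(s - 1): A = B = C = 1
A, B, C = 1.0, 1.0, 1.0
F, H = 2.0, 2.0            # A - BF = A - HC = -1 (both stable)

def eval_packed(Ap, Bp, Cp, Dp, s):
    """Evaluate a packed model C (sI - A)^{-1} B + D at s."""
    return Cp @ np.linalg.solve(s * np.eye(Ap.shape[0]) - Ap, Bp) + Dp

s0 = 1.0j
# (15.51): [Nr Vl; Dr -Ul] has packed data (A - BF, [B H], [C; -F], [0 I; I 0])
right = eval_packed(np.array([[A - B * F]]), np.array([[B, H]]),
                    np.array([[C], [-F]]), np.array([[0.0, 1.0], [1.0, 0.0]]), s0)
# (15.57): [Ur Vr; Dl -Nl] has packed data (A - HC, [H B], [F; -C], [0 I; I 0])
left = eval_packed(np.array([[A - H * C]]), np.array([[H, B]]),
                   np.array([[F], [-C]]), np.array([[0.0, 1.0], [1.0, 0.0]]), s0)
print(left @ right)   # → the 2x2 identity, i.e. the Bezout identities (15.56) hold
```

Here Nr = 1/(s + 1) and Dr = (s − 1)/(s + 1), so Nr Dr⁻¹ = 1/(s − 1) recovers the plant, as (15.46) requires.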
15.2
The State Space Solution of H∞ Optimal Control
In this section, we present the state-space solution to the standard H∞ control problem, which is sometimes referred to as the DGKF (Doyle-Glover-Khargonekar-Francis) solution. In H∞ optimal control, the design objective is to find a controller to minimize the worst-case energy amplification of certain signals of interest. Thus, the H∞ control problem can be viewed as a min-max problem where the controller objective is to minimize the cost function (or the performance index) while the uncertainties, such as external disturbances, are assumed to be the worst-case inputs. Such min-max problems can be studied within the framework of game theory and the state-space solution does precisely that. As we will see in this section, by using a game theoretic approach, the state-space solution bypasses the need for the YJBK parametrization. The result is obtained by a simple completion of squares and involves the solution of two Riccati Equations. To keep the presentation simple, we will impose certain restrictions to be satisfied by the augmented plant. These restrictions can be relaxed, but at the expense of making the formulas more complicated. It turns out that the state-space approach to H∞ presented here can be extended to the H2 optimal control problem in a fairly transparent, albeit not so rigorous, fashion and leads to the recovery of well-known results from the H2 optimal control literature, provided one sets all initial conditions to zero throughout. In this section, we will also be presenting this extension applicable to the H2 case. For the entire subsequent development, we use the following notation:

P = Ric [A, −R; −Q, −Aᵀ]

means that P satisfies the Algebraic Riccati Equation

0 = [−P  I] [A, −R; −Q, −Aᵀ] [I; P]

with A − RP stable. In general, the Algebraic Riccati Equation has several solutions. However, as shown in Theorem 13.3, under appropriate assumptions, it has a unique stabilizing solution P, that is, a P that makes the matrix A − RP stable.
15.2.1
The H∞ Solution
We will develop the state-space solution to H∞ in two parts. First, we will consider the simpler case where all states are measured. Thereafter, we will extend the results to observer-reconstructed-state H∞ .
15.2.1.1
Full State Feedback H∞
Suppose that the full state x is available as y2 as shown in Figure 15.4.

[Figure 15.4: The standard plant controller setup — the augmented plant G(s), with packed matrix [A, B1, B2; C1, 0, D12; I, 0, 0], maps the exogenous input u1 and the control u2 to the regulated output y1 and the measurement y2; the controller F(s) closes the loop from y2 to u2.]

Here the packed (A, B, C, D) matrix representation describes the augmented plant while F(s) represents the feedback controller to be designed. The equations describing the augmented plant are:

ẋ = Ax + B1u1 + B2u2    (15.58)
y1 = C1x + D12u2    (15.59)
y2 = x.    (15.60)

For simplicity, we make the following assumptions:

D12ᵀC1 = 0    (15.61)
D11 = 0    (15.62)
D12ᵀD12 = I    (15.63)
D22 = 0.    (15.64)

In this section, we will focus on the general problem

‖y1‖_{L2[0,∞)} < γ ‖u1‖_{L2[0,∞)}  ∀ u1,

where γ > 0 is some constant, and refer to it as the standard H∞ control problem. (Note that the H∞ norm of a transfer function matrix captures the
input-output behavior of the system and the initial condition has no role in its definition.) Clearly, this problem can be equivalently expressed as

(‖y1‖_{L2})² − γ² (‖u1‖_{L2})² < 0  ∀ u1.

Let

J(x, u1, u2) ≜ (‖y1‖_{L2[0,∞)})² − γ² (‖u1‖_{L2[0,∞)})² = ∫₀^∞ (y1ᵀy1 − γ²u1ᵀu1) dt.    (15.65)

Here J is the infinite horizon cost and the goal is to make J negative. In a typical game theoretic problem, there are two adversarial players, one trying to maximize the cost while the other tries to minimize it. The situation here is similar, with the exogenous input u1 trying to maximize J while the control input u2 tries to minimize J, so that the optimization problem becomes

min_{u2} max_{u1} J(x, u1, u2).

We will start with the cost (15.65) and employ a simple completion of squares to arrive at a solution to the min-max problem. From (15.65),

J = ∫₀^∞ (y1ᵀy1 − γ²u1ᵀu1) dt
= ∫₀^∞ [(C1x + D12u2)ᵀ(C1x + D12u2) − γ²u1ᵀu1] dt   (using (15.59))
= ∫₀^∞ (xᵀC1ᵀC1x + u2ᵀu2 − γ²u1ᵀu1) dt   (using (15.61) and (15.63)).

Now for J to be well defined, we must have J = lim_{T→∞} J_T where

J_T ≜ ∫₀^T (xᵀC1ᵀC1x + u2ᵀu2 − γ²u1ᵀu1) dt.

Let P(t) = Pᵀ(t) be an arbitrary symmetric matrix and consider

xᵀ(T)P(T)x(T) − xᵀ(0)P(0)x(0) + J_T
= ∫₀^T (d/dt)(xᵀPx) dt + ∫₀^T (xᵀC1ᵀC1x + u2ᵀu2 − γ²u1ᵀu1) dt
= ∫₀^T [ẋᵀPx + xᵀṖx + xᵀPẋ + xᵀC1ᵀC1x + u2ᵀu2 − γ²u1ᵀu1] dt
= ∫₀^T [(Ax + B1u1 + B2u2)ᵀPx + xᵀṖx + xᵀP(Ax + B1u1 + B2u2)
+ xᵀC1ᵀC1x + u2ᵀu2 − γ²u1ᵀu1] dt   (using (15.58))

= ∫₀^T { xᵀ[Ṗ + AᵀP + PA + P(B1B1ᵀ/γ² − B2B2ᵀ)P + C1ᵀC1]x
+ (u2 + B2ᵀPx)ᵀ(u2 + B2ᵀPx) − (γu1 − B1ᵀPx/γ)ᵀ(γu1 − B1ᵀPx/γ) } dt

(completing the squares to get rid of cross terms).
Since P(t) is arbitrary, to simplify the above expression, we choose P(t) to be the solution of the Riccati Differential Equation

−Ṗ = AᵀP + PA + P(B1B1ᵀ/γ² − B2B2ᵀ)P + C1ᵀC1,  P(T) = 0.

Also let us define

u1*(x) ≜ (B1ᵀP/γ²) x,  u2*(x) ≜ −B2ᵀPx.

Then

J_T = x(0)ᵀP(0)x(0) + ‖u2 − u2*(x)‖²_{L2[0,T]} − γ² ‖u1 − u1*(x)‖²_{L2[0,T]}.    (15.66)

Thus, if the state x is available for measurement, the cost-minimizing u2(t) is simply

u2 = u2*(x) = −B2ᵀPx,

the cost-maximizing disturbance u1 is

u1 = u1*(x) = (B1ᵀP/γ²) x

and the optimal cost is J_T = x(0)ᵀP(0)x(0). Furthermore, if P̄ = lim_{T→∞} P(t) exists, then using arguments similar to those used for the infinite horizon LQR problem in Section 13.3, it can be shown that P̄ satisfies the Algebraic Riccati Equation

0 = AᵀP + PA + P(B1B1ᵀ/γ² − B2B2ᵀ)P + C1ᵀC1

or equivalently

P = Ric [A, B1B1ᵀ/γ² − B2B2ᵀ; −C1ᵀC1, −Aᵀ].
Summarizing, we now have the following Theorem:

THEOREM 15.6 (Full State Feedback H∞)
Let

G(s) = [A, B1, B2; C1, 0, D12; I, 0, 0]    (15.67)
D12ᵀC1 = 0    (15.68)
D12ᵀD12 = I.    (15.69)

Then a solution to the standard H∞ control problem ‖Ty1u1‖∞ ≤ γ is F(s) = F = −B2ᵀP where P is a solution to the Algebraic Riccati Equation

P = Ric [A, B1B1ᵀ/γ² − B2B2ᵀ; −C1ᵀC1, −Aᵀ].

Some of the assumptions in the above theorem can be relaxed. However, in the process, the formulas become more complicated. The following corollary demonstrates this for assumption (15.68).

COROLLARY 15.1 (Full State Feedback without Assumption (15.68))
Let everything be as in Theorem 15.6, except that condition (15.68) does not necessarily hold, that is, D12ᵀC1 ≠ 0. Then a solution to the standard H∞ problem (‖Ty1u1‖∞ ≤ γ) is

u2 = Fx,  F = −(D12ᵀC1 + B2ᵀP)

where P is a solution to the Algebraic Riccati Equation

P = Ric [A − B2D12ᵀC1, B1B1ᵀ/γ² − B2B2ᵀ; −C1ᵀ(I − D12D12ᵀ)C1, −(A − B2D12ᵀC1)ᵀ].    (15.70)

PROOF Note that the change of variables u2 = ũ2 − D12ᵀC1x does not in any way affect the map from u1 to y1. Moreover, the new control input ũ2 results in

G̃(s) = [A − B2D12ᵀC1, B1, B2; (I − D12D12ᵀ)C1, 0, D12; I, 0, 0]

and this G̃(s) satisfies the conditions of Theorem 15.6. Hence, it follows that the optimal ũ2 = −B2ᵀPx where P satisfies (15.70). Thus, the optimal u2 is given by

u2 = −(D12ᵀC1 + B2ᵀP)x.
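Because the quadratic term B1B1ᵀ/γ² − B2B2ᵀ is sign-indefinite, the H∞ Riccati equation of Theorem 15.6 is most conveniently solved through the stable eigenspace of its Hamiltonian matrix (the construction of Lemma 13.2, recalled later in this section). The sketch below (my own illustration, not from the text; it assumes NumPy) does this for a scalar plant and then steps γ down by bisection, as suggested in Remark 15.4 below, until the stabilizing nonnegative solution is about to disappear:

```python
import numpy as np

def hinf_care(A, B1, B2, C1, gamma):
    """Stabilizing P >= 0 for A'P + PA + P(B1B1'/g^2 - B2B2')P + C1'C1 = 0,
    or None if no such solution exists for this gamma."""
    n = A.shape[0]
    R = B1 @ B1.T / gamma**2 - B2 @ B2.T           # sign-indefinite term
    Ham = np.block([[A, R], [-C1.T @ C1, -A.T]])
    w, V = np.linalg.eig(Ham)
    idx = np.where(w.real < 0)[0]
    if len(idx) != n:
        return None                                # eigenvalues on the jw-axis
    P1, P2 = V[:n, idx], V[n:, idx]                # stable eigenspace basis
    if abs(np.linalg.det(P1)) < 1e-12:
        return None
    P = np.real(P2 @ np.linalg.inv(P1))
    stab = np.all(np.linalg.eigvals(A + R @ P).real < 0)   # stabilizing
    psd = np.all(np.linalg.eigvalsh((P + P.T) / 2) >= -1e-9)
    return P if (stab and psd) else None

A = np.array([[1.0]]); B1 = np.array([[1.0]])
B2 = np.array([[1.0]]); C1 = np.array([[1.0]])

# Bisect gamma between a failing lower bound and a working upper bound
lo, hi = 0.1, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if hinf_care(A, B1, B2, C1, mid) is not None else (mid, hi)
print(hi)  # ≈ 1.0: smallest gamma for which the state-feedback ARE is solvable
```

For this scalar example the solvability boundary can be checked by hand: the ARE 2P + P²(1/γ² − 1) + 1 = 0 has a stabilizing P ≥ 0 exactly for γ > 1, which is what the bisection recovers.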
REMARK 15.3 Note that in both the above theorems, the introduction of a nonzero D21 does not affect the solution since the state x, and not the output y2, is used to generate the control input u2.

REMARK 15.4 The existence of a stabilizing solution P to the Riccati equations in the above theorems depends on the value of γ. Typically, one could start with a large value of γ (for which a stabilizing solution exists as long as the subplant G22 is stabilizable) and step down γ using a bisection algorithm until a stabilizing solution to the appropriate Riccati Equation ceases to exist. It is clear that using such an approach, one could get very close to the minimal value of ‖Ty1u1‖∞.

15.2.1.2
Observer Reconstructed Achievable State Feedback H∞
Here we solve the H∞ problem when the state x is not directly measured. The net result of our endeavors will be that we will arrive at a two-Riccati solution to the general H∞ control problem. This procedure involves a number of steps. The first step is to show that an augmented plant G(s) for the standard H∞ problem can be replaced by an equivalent one G̃(s) in which D12 = I. This is done in the following lemma.

LEMMA 15.4
Consider the following two plants:

G(s) ≜ [A, B1, B2; C1, 0, D12; C2, D21, 0],
G̃(s) ≜ [A + B1B1ᵀP/γ², B1, B2; B2ᵀP, 0, I; C2, D21, 0].

Assume that the conditions of Theorem 15.6 hold, that is, D12ᵀC1 = 0 and D12ᵀD12 = I. In addition, suppose B1D21ᵀ = 0 and

P = Ric [A, B1B1ᵀ/γ² − B2B2ᵀ; −C1ᵀC1, −Aᵀ].

Then a control law u2 = F(s)[y2] solves the standard H∞ problem for G(s) if and only if it solves it for the plant G̃(s).

PROOF
Define

ũ1 = u1 − (B1ᵀP/γ²) x
ỹ1 = u2 + B2ᵀPx.

Then, from (15.66), we have

J_T = ‖y1‖²_{L2[0,T]} − γ² ‖u1‖²_{L2[0,T]} = x(0)ᵀP(0)x(0) + ‖ỹ1‖²_{L2[0,T]} − γ² ‖ũ1‖²_{L2[0,T]}.

Since the initial condition x(0) does not affect the H∞ norm (which is an input-output property), by setting x(0) = 0, we conclude that ‖Ty1u1‖∞ ≤ γ iff ‖Tỹ1ũ1‖∞ ≤ γ. So, solving the H∞ problem for

G(s) ≜ [A, B1, B2; C1, 0, D12; C2, D21, 0]

is equivalent to solving the H∞ problem for the plant G̃(s) corresponding to ỹ1, ũ1. Now let us derive the packed matrix representation for G̃(s). Substituting u1 = ũ1 + (B1ᵀP/γ²) x into the state and output equations

ẋ = Ax + B1u1 + B2u2
y1 = C1x + D12u2
y2 = C2x + D21u1

we obtain the new state and output equations

ẋ = Ax + B1(ũ1 + (B1ᵀP/γ²) x) + B2u2 = (A + B1B1ᵀP/γ²)x + B1ũ1 + B2u2
ỹ1 = B2ᵀPx + u2
y2 = C2x + D21(ũ1 + (B1ᵀP/γ²) x) = C2x + D21ũ1   (since D21B1ᵀ = (B1D21ᵀ)ᵀ = 0)

from which it follows that

G̃(s) = [A + B1B1ᵀP/γ², B1, B2; B2ᵀP, 0, I; C2, D21, 0]
H∞ OPTIMAL MULTIVARIABLE CONTROL is the plant corresponding to y˜1 , u˜1 . Lemma 15.4 has a dual result which is the following: LEMMA 15.5 Consider
CT C A B1 B2 A + Q 1γ 2 1 QC2T B2 ˜ G(s) = C1 0 D12 & G(s) = C1 0 D12 C2 D21 0 C2 I 0
T T T where B1 D21 = 0, D21 D21 = I, D12 C1 = 0 and " # C1T C1 T AT − C C 2 2 γ2 Q = Ric −B1 B1T −A
Then a control law u2 = F (s)[y2 ] solves the standard H∞ control problem for ˜ G(s) if and only if it solves it for the plant G(s). PROOF Noting that kG(s)k∞ = kGT (s)k∞ , the result follows directly from Lemma 15.4 by taking the transpose of everything, applying Lemma 15.4, and then transposing all of the resultant equations. LEMMA 15.6 (Observer-Based H∞ Feedback) Consider the plant A H B2 G(s) = C1 0 D12 C2 I 0
T T and suppose that A − HC2 is stable, and that D12 C1 = 0 and D12 D12 = I. ∆ T Let F = −B2 P be the optimal full state feedback u2 = F x given by Theorem 15.6. Then the observer-based feedback u2 = K(s)[y2 ] given below solves the standard H∞ problem for G(s):
u2 = −B2T P x ˆ ˙x ˆ = Aˆ x + B2 u2 + H(y2 − C2 xˆ) # " HH T T A − B B 2 2 γ2 P = Ric −C1T C1 −AT that is K(s) = −B2T P (sI − A + B2 B2T P + HC2 ) PROOF
Define the state observation error e=x−x ˆ.
−1
H
792
OPTIMAL AND ROBUST CONTROL
Now the plant state equation is x˙ = Ax + Hu1 + B2 u2
(15.71)
while the observed state equation is x ˆ˙ = (A − HC2 )ˆ x + B2 u2 + Hy2 or x ˆ˙ = (A − HC2 )ˆ x + B2 u2 + H(C2 x + u1 ) or x ˆ˙ = (A − HC2 )ˆ x + B2 u2 + HC2 x + Hu1 .
(15.72)
Subtracting (15.72) from (15.71), we obtain e˙ = (A − HC2 )e. Noting that (A − HC2 ) is stable and e is not excited by u1 , it follows that if x(0) = x ˆ(0), then e(t) ≡ 0 ∀ t and hence x(t) = xˆ(t) ∀ t. The H∞ norm being an input-output property, nonzero initial conditions will not affect its value. Therefore, one may substitute xˆ for x in the control law u2 = −B2T P x without affecting Ty1 u1 or its H∞ norm. Combining Lemmas 15.5 and 15.6 one has a solution to the standard H∞ control problem which is stated formally in the following lemma. LEMMA 15.7 (A 2-Ricatti Solution to the Standard H∞ Control Problem) Let A B1 B2 G(s) = C1 0 D12 C2 D21 0
T T T T with D12 D12 = I, D21 D21 = I, D12 C1 = 0 and B1 D21 = 0. Define # " T C1 C1 T ∆ AT γ 2 − C2 C2 . Q = Ric T −B1 B1 −A
Let A˜ H B2 ∆ ˜ G(s) = C1 0 D12 C2 I 0
C T C1 ∆ ∆ where A˜ = A + Q 1 2 , H = QC2T γ
and let F = B2T P˜ where "
# HH T T ˜ A ( − B B ) 2 2 γ2 P˜ = Ric −C1T C1 −A˜T C1T C1 C2T C2 T A + Q Q Q − B B 2 2 2 2 ∆ γ γ = Ric T . C1T C1 T −C1 C1 −(A + Q γ 2 )
H∞ OPTIMAL MULTIVARIABLE CONTROL
793
Then the observer-based control law u2 = K(s)[y2 ] solves the standard H∞ control problem with −1 K(s) = −B2T P˜ sI − A˜ + B2 B2T P˜ + HC2 H −1 C1T C1 T ˜ T T ˜ = −B2 P sI − A − Q 2 + B2 B2 P + QC2 C2 QC2T γ or, equivalently u2 = −B2T P˜ x ˆ ˙xˆ = Aˆ ˜x + B2 u2 + H(y2 − C2 x ˆ) T C C1 = (A + Q 1 2 )ˆ x + B2 u2 + QC2T (y2 − C2 x ˆ) γ PROOF 15.6.
The proof is completed by using Lemma 15.5 followed by Lemma
The solution in Lemma 15.7 is stated in terms of the Ricatti solution P˜ which is different from the Ricatti solution P used for the full state feedback case in Theorem 15.6. We next show that P˜ is related to P . We first note that the P˜ Ricatti Hamiltonian is similar to the P Ricatti Hamiltonian, that is # " C1T C1 C2T C2 T B1 B1T T Q − B B A + Q Q 2 2 2 2 A − B B γ γ −1 2 2 2 γ T T = T CT C −C1T C1 −AT −C1T C1 −(A + Q 1 2 1 ) γ
(15.73)
where
I − γQ2 T = , 0I Q I γ2 T −1 = 0I and the fact that QAT + AQ + Q(
C1T C1 − C2T C2 )Q + B1 B1T = 0 γ2 B BT
is used to simplify the upper right entry to 1γ 2 1 −B2 B2T . Thus the two Hamiltonians will have the same eigenvalues and their corresponding eigenvectors will also be related through the nonsingular matrix T . Recall from Lemma 13.2 in Chapter 13 that if A −R P = Ric −Q −AT
794
OPTIMAL AND ROBUST CONTROL
then P may be computed as P = P2 P1−1 where of
P1 is any matrix whose columns form a basis for the stable eigenspace P2
A −R . −Q −AT
Furthermore, since the two Hamiltonian matrices are similar, we have P1 − γQ2 P2 I − γQ2 P1 P˜1 P1 = . =T = P2 P2 P˜2 0I P2 −1 Thus P˜ = P˜2 P˜1
= P2 (P1 −
−1 Q P2 ) 2 γ
−1 Q −1 = P2 I − 2 P2 P1 P1 γ −1 Q = P2 P1−1 I − 2 P2 P1−1 γ −1 QP =P I− 2 . γ For the above expression to be well defined, ρ(QP ), the spectral radius of QP −1 must be less than γ 2 . Under this condition, P˜ can be replaced by P (I − QP γ2 ) in Lemma 15.7 to obtain −1 C T C1 QP −1 QP K(s) = −B2T P (I − 2 ) (sI − A − Q 1 2 + B2 B2T P I − 2 γ γ γ +QC2T C2 )−1 QC2T
or equivalently QP −1 ) x ˆ γ2 C T C1 x ˆ˙ = (A + Q 1 2 )ˆ x + B2 u2 + QC2T (y2 − C2 x ˆ). γ
u2 = −B2T P (I −
Restating Lemma 15.7 in terms of P , we have the following main result. THEOREM 15.7 (2-Ricatti Solution to the Standard H∞ Problem) Let A B1 B2 G(s) = C1 0 D12 C2 D21 0
H∞ OPTIMAL MULTIVARIABLE CONTROL
795
T T T T with D12 D12 = I, D21 D21 = I, D12 C1 = 0 and B1 D21 = 0. Let # " T B1 B1 − B2 B2T A γ2 P = Ric T T −C1 C1 −A # Two Ricattis " C1T C1 T T A γ 2 − C2 C2 Q = Ric T −B B −A 1
F = B2T P H = QC2T
1
(Optimal H∞ state feedback and its dual)
and ρ(QP ) < γ 2 . Then an H∞ control law for which kTy1 u1 k∞ ≤ γ is given (in observerreconstructed state-feedback form) by −1 QP u2 = −F I − 2 x ˆ (15.74) γ C T C1 x ˆ˙ = A + Q 1 2 x ˆ + B2 u2 + QC2T (y2 − C2 xˆ). γ REMARK 15.5 From the above theorem, it is clear that for the existence of a stabilizing controller such that kTy1 u1 k∞ < γ, we must have (i) P stabilizing, (ii) Q stabilizing, and (iii) ρ(QP ) < γ 2 . Thus, one can progressively step down γ using a bisection algorithm until one of these three conditions breaks down. At that stage, one would be very close to having a controller that is H∞ optimal. REMARK 15.6 The solution presented here in Theorem 15.7 is a particular solution. It is possible to also obtain all solutions to the standard H∞ control problem. Moreover, the simplifying assumptions that we have made at the very outset have resulted in simpler expressions which are much easier to present and interpret. More general expressions, which subsume the current ones, are available in the H∞ control literature. Since the matrix Q appears in the expression (15.74), it is clear that in observer-based H∞ optimal control, the control and state estimation are not decoupled from each other. This is in contrast to H2 optimal control where such decoupling exists. This will be demonstrated in the next subsection.
15.2.2
The H2 Solution
In this subsection, we will present a solution to the H2 optimal control problem using an approach similar to the one that was used in the last subsection for
796
OPTIMAL AND ROBUST CONTROL
the standard H∞ optimal control problem. Towards this end, let
A B1 B2 G(s) = C1 0 D12 C2 D21 0 T T T T with D12 D12 = I, D21 D21 = I, D12 C1 = 0 and B1 D21 = 0. Furthermore, let us define AT −C2T C2 ∆ Q = Ric −B1 B1T −A
A −B2 B2T P = Ric . −C1T C1 −AT ∆
We will show that the optimal H2 solution is given by u2 = −B2T P x ˆ where x ˆ˙ = Aˆ x + B2 u2 + QC2T (y2 − C2 xˆ). This will be carried out in two stages. First, we will solve the H2 optimal control problem using full state feedback. Thereafter, we will extend the results to observer reconstructed state feedback. 15.2.2.1
Full State Feedback H2
Suppose that the full state x is available as y2 as shown in the Figure 15.5.
A
B1
B2
C1
0
D12
I
0
0
u1
y1
u2
y2 = x = state
G(s) “PLANT”
F (s)
Figure 15.5 The standard plant controller setup.
H∞ OPTIMAL MULTIVARIABLE CONTROL
797
Here, the packed (A, B, C, D) matrix representation describes the augmented plant while F (s) represents the feedback controller to be designed. The equations describing the augmented plant are: x˙ = Ax + B1 u1 + B2 u2 y1 = C1 x + D12 u2 y2 = x. For simplicity, we make the following assumptions which are a subset of the assumptions introduced at the beginning of this subsection: T C1 = 0 D12 D11 = 0 T D12 = I D12 D22 = 0.
Now we want to minimize kTy1 u1 k2 . To carry out this minimization, we first consider the case when Ty1 u1 is scalar. Then y1 = Ty1 u1 (s)[u1 ] ⇒
Z
∞
y1T (t)y1 (t)dt
=
Z
∞
|Y1 (jω)|
2 dω
2π (By Parseval’s Theorem) Z ∞ dω = |Ty1 u1 (jω)|2 2π −∞ (assuming U1 (jω) = 1 ∀ ω that is u1 (t) = δ(t)) = kTy1 u1 k2 . 2 Thus, to minimize kTy1 u1 k2 , one could define the cost J = ky1 kL2 [0,∞) and minimize J over u2 , that is solve the problem minu2 J(x0 , u2 ). We note that in contrast to the H∞ case, u1 here is a fixed signal, namely an impulse function. An important point to note here is that the H2 norm is defined for a transfer function matrix and thus would not depend on the possibly nonzero initial conditions. In an effort to assign a physical significance to it, one assumes that the system is driven by one or more impulsive inputs u1 as discussed in Appendix B. Unlike H∞ which is a Banach space, H2 is a Hilbert space and, therefore, the H2 norm minimization problem can be easily solved in the transfer function domain by using the YJBK parametrization followed by results on orthogonal projection in a Hilbert space. Such a solution is derived in the frequency domain and, therefore, one does not have to worry about nonzero initial conditions, which do not appear at all. Our objective here is a little different. We would like to demonstrate that the well known H2 solution can be derived along the lines of the state-space 0
−∞
H∞ solution presented earlier. To do so, we have converted a transfer function domain performance index into a time domain performance index. As we will see, as a consequence, initial condition terms appear in the time domain performance index, possibly affecting the rigor of the optimality arguments. Recognizing that such initial condition terms have only appeared as an artifact of our simplistic approach and have nothing to do with the frequency domain H2 norm, we will set them equal to zero as we go along. Now

J = ‖y1‖²_{L2[0,∞)}
  = ∫₀^∞ (C1x + D12u2)ᵀ(C1x + D12u2) dt
  = ∫₀^∞ (xᵀC1ᵀC1x + u2ᵀu2) dt

(since D12ᵀD12 = I and D12ᵀC1 = 0).
Let us consider the finite time interval cost

J_T(x0, u2) ≜ ∫₀^T L(x(t), u2(t)) dt

where

L(x, u2) = xᵀC1ᵀC1x + u2ᵀu2.

Clearly, for J to be well defined, we must have J = lim_{T→∞} J_T. We will now show that J_T can be somewhat simplified via completion of squares. Let P(t) = Pᵀ(t) be an arbitrary symmetric matrix and consider the function

xᵀ(T)P(T)x(T) − x0ᵀP(0)x0 + J_T(x0, u2)
 = ∫₀^T d/dt (xᵀPx) dt + ∫₀^T L(x, u2) dt
 = ∫₀^T [ ẋᵀPx + xᵀPẋ + xᵀṖx + xᵀC1ᵀC1x + u2ᵀu2 ] dt
 = ∫₀^T [ (Ax + B1u1 + B2u2)ᵀPx + xᵀṖx + xᵀP(Ax + B1u1 + B2u2) + xᵀC1ᵀC1x + u2ᵀu2 ] dt
 = ∫₀^T [ xᵀ(Ṗ + AᵀP + PA − PB2B2ᵀP + C1ᵀC1)x + (u2 + B2ᵀPx)ᵀ(u2 + B2ᵀPx) + xᵀPB1u1 + u1ᵀB1ᵀPx ] dt.

Since P(t) is arbitrary, to simplify the above expression, we choose P(t) to be the solution of the Riccati Differential Equation

−Ṗ = PA + AᵀP − PB2B2ᵀP + C1ᵀC1,   P(T) = 0.
Note u1(t) = δ(t); with the above choice of P(t), we obtain:

LEMMA 15.8 (Completion of Squares)

J_T = x0ᵀP(0)x0 + 2x0ᵀP(0)B1 + ‖u2 − u2*‖²_{L2[0,T]}

where u2* = −B2ᵀPx.

From Lemma 15.8, it is clear that if the state x is available for measurement, and the initial condition x0 is assumed to be zero, then the cost minimizing u2(t) is simply u2 = u2* = −B2ᵀPx. Moreover, if P̄ = lim_{T→∞} P(t) exists, then it is well known that P̄ satisfies the Algebraic Riccati Equation (ARE)

0 = PA + AᵀP − PB2B2ᵀP + C1ᵀC1

or equivalently

P = Ric [ A        −B2B2ᵀ
          −C1ᵀC1   −Aᵀ ].
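The convergence of the Riccati Differential Equation solution to the stabilizing ARE solution can be checked numerically by integrating backward in time. The sketch below uses illustrative scalar data A = B2 = C1 = 1 (our own choice, not from the text), for which the scalar ARE 0 = 2p − p² + 1 has the stabilizing root p = 1 + √2.

```python
import numpy as np
from scipy.integrate import solve_ivp

# -dP/dt = P A + A^T P - P B2 B2^T P + C1^T C1, with P(T) = 0.
# In the reversed time variable s = T - t this becomes
# dP/ds = P A + A^T P - P B2 B2^T P + C1^T C1, with P at s = 0 equal to 0.
A  = np.array([[1.0]])     # illustrative data, not from the text
B2 = np.array([[1.0]])
C1 = np.array([[1.0]])
n, T = A.shape[0], 20.0

def rde(s, p):
    P = p.reshape(n, n)
    dP = P @ A + A.T @ P - P @ B2 @ B2.T @ P + C1.T @ C1
    return dP.ravel()

sol = solve_ivp(rde, (0.0, T), np.zeros(n * n), rtol=1e-10, atol=1e-12)
P0 = sol.y[:, -1].reshape(n, n)   # this is P(t = 0)

p_are = 1.0 + np.sqrt(2.0)        # stabilizing root of 0 = 2p - p^2 + 1
print(P0[0, 0], p_are)
```

For a long enough horizon T, P(0) is indistinguishable from the ARE solution, which is exactly the limit P̄ invoked in the text.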
Summarizing the above discussion, we have the following theorem.

THEOREM 15.8 (Full State Feedback H2)
Let

G(s) = [ A   B1  B2
         C1  0   D12
         I   0   0 ]

D12ᵀC1 = 0   (15.75)
D12ᵀD12 = I.   (15.76)

Then the solution to the H2 problem is F(s) = −B2ᵀP where P is a solution to the Algebraic Riccati Equation

P = Ric [ A        −B2B2ᵀ
          −C1ᵀC1   −Aᵀ ].

Now the discussion leading up to the above theorem was valid only for a scalar Ty1u1. However, as we now show, the theorem statement holds even
when Ty1u1 is multi-input multi-output (MIMO). For the MIMO case, we solve the following problem:

min_{u2} Σ_{i=1}^n ∫₀^∞ y1iᵀ y1i dt   (15.77)

where y1i = output with u1i = ei δ(t) and ei is the ith basis vector. Now, by the arguments preceding Theorem 15.8, the control input

u2 = −B2ᵀPx   (15.78)

minimizes each of the terms in the above summation, independent of which basis vector ei is used for synthesizing the input u1. Thus, the input (15.78) is also optimal for the cost in (15.77). Note that (15.77) is equivalent to

min_{u2} (1/2π) ∫_{−∞}^{∞} trace[ Ty1u1*(jω) Ty1u1(jω) ] dω = min_{u2} ‖Ty1u1(s)‖2².
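The trace formula above is what makes the H2 norm computable by finite linear algebra: for a stable, strictly proper transfer matrix C(sI − A)⁻¹B, it reduces to a Lyapunov equation in the observability Gramian. A minimal sketch (the function name and the scalar test system are our own):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A, B, C):
    """H2 norm of the stable, strictly proper system C (sI - A)^{-1} B:
    ||G||_2^2 = trace(B^T Lo B), where Lo solves A^T Lo + Lo A + C^T C = 0."""
    Lo = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian
    return float(np.sqrt(np.trace(B.T @ Lo @ B)))

# Hand-checkable case: G(s) = 1/(s+1), for which
# ||G||_2^2 = (1/2*pi) * integral of 1/(1+w^2) dw = 1/2.
val = h2_norm(np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]]))
print(val)
```

The same Gramian identity is the time-domain counterpart of the frequency-domain trace integral in (15.77).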
Thus, Theorem 15.8 provides the H2 optimal full state feedback controller even for the MIMO case. The next corollary shows that the assumption (15.75) can be relaxed at the expense of making the formulas a little more complicated.

COROLLARY 15.2 (Full State Feedback Without the Assumption D12ᵀC1 = 0)
Let everything be as in Theorem 15.8, except that condition (15.75) does not necessarily hold, that is,

D12ᵀC1 ≠ 0.

Then the solution to the H2 optimal control problem is

F(s) = −D12ᵀC1 − B2ᵀP

where P is a solution to the Algebraic Riccati Equation

P = Ric [ A − B2D12ᵀC1          −B2B2ᵀ
          −C1ᵀ(I − D12D12ᵀ)C1   −(A − B2D12ᵀC1)ᵀ ].

PROOF  Noting that the change of variables u2 = ũ2 − D12ᵀC1x results in

G̃(s) ≜ [ A − B2D12ᵀC1      B1  B2            [ Ã   B̃1  B̃2
          (I − D12D12ᵀ)C1  0   D12      =      C̃1  0   D̃12
          I                0   0 ]             I    0   0 ]
and this G̃(s) satisfies the conditions of Theorem 15.8, it follows that ũ2 = −B2ᵀPx and hence

u2 = −(D12ᵀC1 + B2ᵀP)x.
15.2.2.2 Observer-Reconstructed State Feedback H2 Optimal Control
We now solve the H2 problem when the state x is not directly measured. As in the H∞ case, this can be done by developing and applying a series of lemmas. The first of these lemmas, which is stated next, shows that a plant G(s) for the H2 problem can be replaced by an equivalent plant G̃(s) which has a square D12 matrix, that is, D12 = I.

LEMMA 15.9
Consider the following two plants:

G(s) ≜ [ A   B1   B2
         C1  0    D12
         C2  D21  0 ]

G̃(s) ≜ [ A     B1   B2
          B2ᵀP  0    I
          C2    D21  0 ]
Then, a control law u2 = F (s)[y2 ] solves the standard H2 problem for G(s) if ˜ and only if it solves it for the “squared down plant” G(s). PROOF
The proof follows directly from Lemma 15.8 by letting y˜1 = u2 + B2T P x, u˜1 = u1 y˜2 = y2 , u˜2 = u2
˜ which when substituted into the equations for G(s), yields G(s). Note that 2 J = ky1 kL2 [0,∞) 2 = ky˜1 kL2 [0,∞) + xT0 P¯ x0 + 2xT0 P¯ B1
Thus, assuming x0 = 0, it follows that ‖y1‖_{L2[0,∞)} is minimized if and only if ‖ỹ1‖_{L2[0,∞)} is minimized. Lemma 15.9 has a dual result, which is the following:

LEMMA 15.10
Consider

G(s) = [ A   B1   B2
         C1  0    D12
         C2  D21  0 ]

and

G̃(s) ≜ [ A   QC2ᵀ  B2
          C1  0     D12
          C2  I     0 ]

where B1D21ᵀ = 0, D21D21ᵀ = I, D12ᵀC1 = 0 and

Q = Ric [ Aᵀ       −C2ᵀC2
          −B1B1ᵀ   −A ].
Then a control law u2 = F(s)[y2] solves the H2 optimal control problem for G̃(s) if and only if it solves it for the plant G(s).

PROOF  Noting that ‖G(s)‖2 = ‖Gᵀ(s)‖2, the result follows directly from Lemma 15.9 by taking the transpose of everything, applying Lemma 15.9, and then transposing all of the resultant equations.

The following lemma shows that observer-based H2 optimal control is possible for plants with D21 = I.

LEMMA 15.11 (Observer-Based H2 Feedback)
Consider the plant

G(s) = [ A   H  B2
         C1  0  D12
         C2  I  0 ]

and suppose that A − HC2 is stable, and that D12ᵀC1 = 0 and D12ᵀD12 = I. Let F ≜ −B2ᵀP be the optimal full state feedback u2 = Fx given by Theorem 15.8. Then the observer-based feedback u2 = K(s)[y2] given below solves the H2 problem for G(s):
u2 = −B2ᵀP x̂
x̂˙ = A x̂ + B2 u2 + H(y2 − C2 x̂)

P = Ric [ A        −B2B2ᵀ
          −C1ᵀC1   −Aᵀ ],

that is,

K(s) = −B2ᵀP (sI − A + B2B2ᵀP + HC2)⁻¹ H.

PROOF  Let e = x − x̂. Then

ė = (Ax + Hu1 + B2u2) − (A x̂ + B2u2 + H(C2x + u1 − C2 x̂))

or ė = (A − HC2)e. Noting that e is not excited by u1, it follows that if x(0) = x̂(0), then e(t) ≡ 0 ∀ t and hence x(t) = x̂(t) ∀ t. Therefore, one may substitute x̂ for x in the control law u2 = −B2ᵀPx without affecting Ty1u1 or its H2 norm. Combining Lemmas 15.10 and 15.11, one has a solution to the H2 optimal control problem:

LEMMA 15.12 (A 2-Riccati Solution to the H2 Optimal Control Problem)
Let

G(s) = [ A   B1   B2
         C1  0    D12
         C2  D21  0 ]
with D12ᵀD12 = I, D21D21ᵀ = I, D12ᵀC1 = 0 and B1D21ᵀ = 0. Define

Q ≜ Ric [ Aᵀ       −C2ᵀC2
          −B1B1ᵀ   −A ].

Let

G̃(s) ≜ [ A   H  B2
          C1  0  D12
          C2  I  0 ]

where H = QC2ᵀ, and let F = −B2ᵀP where

P = Ric [ A        −B2B2ᵀ
          −C1ᵀC1   −Aᵀ ].

Then the observer-based control law u2 = K(s)[y2] solves the H2 optimal control problem with

K(s) = −B2ᵀP (sI − A + B2B2ᵀP + HC2)⁻¹ H
or equivalently

u2 = −B2ᵀP x̂   (15.79)
x̂˙ = A x̂ + B2 u2 + H(y2 − C2 x̂)
    = A x̂ + B2 u2 + QC2ᵀ(y2 − C2 x̂).   (15.80)
PROOF The proof follows directly by using Lemma 15.10 followed by Lemma 15.11.
REMARK 15.7 Notice from (15.79) and (15.80) that in the case of H2 optimal control, the optimal controller is obtained by just replacing the state in the full state feedback control by the reconstructed state obtained from the observer. Thus, in this case, there is a separation between the design of the stabilizing feedback (characterized by the value of P ) and the observer (characterized by the value of Q). This is an important difference between H2 and H∞ optimal control.
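The separation noted in Remark 15.7 can be observed numerically: with the observer-based H2 controller, the closed-loop spectrum is exactly the union of the state-feedback spectrum eig(A + B2F) and the observer spectrum eig(A − HC2). The sketch below uses a hypothetical double-integrator plant with unit weights (our own illustrative data, not from the text).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical plant: double integrator; C1^T C1 = I and B1 B1^T = 2I
# stand in for the weighting/noise data of the standard problem.
A  = np.array([[0.0, 1.0], [0.0, 0.0]])
B2 = np.array([[0.0], [1.0]])
C2 = np.array([[1.0, 0.0]])

P = solve_continuous_are(A, B2, np.eye(2), np.eye(1))            # control ARE
Q = solve_continuous_are(A.T, C2.T, 2.0 * np.eye(2), np.eye(1))  # dual (filter) ARE
F = -B2.T @ P        # u2 = F xhat, as in (15.79)
H = Q @ C2.T         # observer gain, as in (15.80)

# Plant + controller in the joint state (x, xhat):
Acl = np.block([[A,      B2 @ F],
                [H @ C2, A + B2 @ F - H @ C2]])
cl  = np.sort_complex(np.linalg.eigvals(Acl))
sep = np.sort_complex(np.concatenate([np.linalg.eigvals(A + B2 @ F),
                                      np.linalg.eigvals(A - H @ C2)]))
print(cl)
print(sep)
```

Because the control and estimation designs are decoupled (unlike the H∞ case, where Q enters the control law (15.74)), the two printed spectra coincide.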
15.3
Exercises
15.1 Solve the H∞ optimal control problem for the generalized plant

[ A   B1   B2         [  1   0   −1
  C1  D11  D12    =      0   0   0.2
  C2  D21  D22 ]        −1   1   0  ]

with γ = 1.

15.2 Solve the H∞ optimal control problem for the following generalized
plant; find the optimum γ.
[ A   B1   B2         [ 1  0  1  2  0  0  1
  C1  D11  D12    =     0  1  0  1  0  0  0
  C2  D21  D22 ]        0  0  0  0  0  0  1
                        0  0  0  0  0  0  0
                        1  0  0  0  0  0  0
                        0  2  0  0  0  0  0
                        1  0  0  0  1  0  0
                        0  1  0  0  0  1  0 ]
15.3 Solve the H∞ optimal control problem for the following generalized plant; find the optimum γ:

[ A   B1   B2
  C1  D11  D12    =
  C2  D21  D22 ]

  0 1 0 1 1 0 0 0 2 1 −1 0 0 0 −1 1 1 0 2 0 0
  0 0 0 0 0 0 0
  0 0 0 0 0 0
  1 1 0 0 0 0 0 0
  −1 2 0 0 0 0 1 1 0 0 0 1 0 0 0 1 0 0 0 1
  0 1 0 1 0 0 0 0
  1 0 0 1 0 1 1 0 0 1 0 0 0 0 0 0 0 0
15.4 (MIMO robust stability under additive uncertainty). Let P(s) = P0(s) + ∆P(s) where

‖∆P(jω)‖2 < |rA(jω)| ∀ ω

for a stable rational rA(s), and suppose that P(s) has an invariant number of RHP poles. Prove that C(s) robustly stabilizes all plants in the above family if and only if

‖ C(s) [I + P0(s)C(s)]⁻¹ rA(s) ‖∞ ≤ 1.

15.5 (MIMO robust stability under multiplicative uncertainty). In Exercise 15.4, let ∆P(s) = M(s)P0(s) where

‖M(jω)‖2 < |rM(jω)| ∀ ω

for a stable rational rM(s). Prove that C(s) robustly stabilizes all plants in the above family if and only if

‖ P0(s)C(s) [I + P0(s)C(s)]⁻¹ rM(s) ‖∞ ≤ 1.
15.4
Notes and References
The connection between Hankel approximation theory and the H∞ optimal control problem was pointed out by Safonov and Verma [175]. The proofs of Theorems 15.4 and 15.5 are due to Glover [85] (proofs of Theorems 5.1 and 6.3, respectively). The details about how to choose the state feedback gain F and the observer gain H to make T12, T21 inner while obtaining co-inner factors in the process can be found in [174]. The state-space formulas for obtaining coprime factorizations are adapted from [160]. The use of the LQ Return Difference Equality for carrying out inner-outer and spectral factorizations is due to Doyle [71]. The material in Section 15.2 is adapted from [73]. The simplifying assumptions made about the augmented plant have allowed us to obtain neat formulas for the solution to the standard H∞ control problem in terms of the solutions of two Riccati equations. Relaxing these assumptions leads to more complicated formulas, as derived in [86]. A geometric treatment of the multivariable model matching problem was given in Ohm, Howze, and Bhattacharyya [162]. The results of Exercises 15.4 and 15.5 are in Vidyasagar [197]. The H∞ control design methods presented in this chapter can be carried out by using the “hinf” command in MATLAB.
A SIGNAL SPACES
A.1
Vector Spaces and Norms
A norm is a yardstick for measuring the size of a vector. Therefore, it is appropriate that before we define a norm, we introduce vector spaces.

DEFINITION A.1 A vector space X is a set of elements called vectors together with two operations. The first operation is addition (+), which associates with any two vectors x, y ∈ X a vector x + y ∈ X, the sum of x and y. The second operation is scalar multiplication, which associates with any vector x ∈ X and any scalar α, from a field F, a vector αx, the scalar multiple of x by α. The set X and the operations of addition and scalar multiplication are assumed to satisfy the following axioms:
1. x + y = y + x ∀ x, y ∈ X (Commutative Law)
2. (x + y) + z = x + (y + z) ∀ x, y, z ∈ X (Associative Law)
3. There is a null vector Θ in X such that x + Θ = x for all x in X
4. α(x + y) = αx + αy ∀ x, y ∈ X and scalar α (Distributive Law)
5. (α + β)x = αx + βx ∀ x ∈ X and all scalars α, β (Distributive Law)
6. (αβ)x = α(βx) ∀ x ∈ X and all scalars α, β (Associative Law)
7. 0x = Θ, 1x = x.

DEFINITION A.2 Let X be a vector space over the field K (typically the set of all real scalars R or the set of all complex scalars C). Denote the zero vector in X by 0. We say that the function ‖·‖ is a norm on X if and only if it satisfies the following three conditions:
(i) ‖x‖ ≥ 0; ‖x‖ = 0 if and only if x = 0
(ii) ‖αx‖ = |α| ‖x‖ ∀ α ∈ K, x ∈ X
(iii) ‖x + y‖ ≤ ‖x‖ + ‖y‖ (triangle inequality).
REMARK A.1 Given a linear space X, there may be many possible norms on X. Given the vector space X and a norm ‖·‖, the pair (X, ‖·‖) is called a normed linear space.

Example A.1
Let X = Cⁿ, that is, x = (x1, x2, ..., xn) where xi ∈ C ∀ i = 1, 2, ..., n. Show that the following are norms on X:

(i) ‖x‖1 ≜ Σ_{i=1}^n |xi|
(ii) ‖x‖p ≜ (Σ_{i=1}^n |xi|^p)^{1/p} where 1 ≤ p < ∞
(iii) ‖x‖∞ ≜ max_i |xi|

Solution:
(i) To show that ‖x‖1 ≜ Σ_{i=1}^n |xi| is a norm on Cⁿ:
(a) x = 0 ⇒ ‖x‖1 = Σ_{i=1}^n |xi| = 0. Also,

‖x‖1 = 0 ⇒ Σ_{i=1}^n |xi| = 0 ⇒ xi = 0 ∀ i = 1, 2, ···, n ⇒ x = 0.

Thus, ‖x‖1 ≥ 0 and ‖x‖1 = 0 if and only if x = 0.
(b) ‖αx‖1 = Σ_{i=1}^n |αxi| = |α| Σ_{i=1}^n |xi| = |α| ‖x‖1.
(c) ‖x + y‖1 = Σ_{i=1}^n |xi + yi| ≤ Σ_{i=1}^n (|xi| + |yi|) = Σ_{i=1}^n |xi| + Σ_{i=1}^n |yi| = ‖x‖1 + ‖y‖1,
so that the Triangle Inequality is satisfied. Hence, ‖·‖1 is a vector norm on Cⁿ.

(ii) To show that ‖x‖p ≜ (Σ_{i=1}^n |xi|^p)^{1/p}, where 1 < p < ∞, is a norm on Cⁿ:
(a) Clearly ‖x‖p = (Σ_{i=1}^n |xi|^p)^{1/p} ≥ 0. Also, x = 0 ⇒ ‖x‖p = 0, and

‖x‖p = 0 ⇒ (Σ_{i=1}^n |xi|^p)^{1/p} = 0 ⇒ Σ_{i=1}^n |xi|^p = 0 ⇒ xi = 0 ∀ i = 1, 2, ···, n ⇒ x = 0.

Thus, ‖x‖p ≥ 0 and ‖x‖p = 0 if and only if x = 0.
(b) ‖αx‖p = (Σ_{i=1}^n |αxi|^p)^{1/p} = (|α|^p Σ_{i=1}^n |xi|^p)^{1/p} = |α| (Σ_{i=1}^n |xi|^p)^{1/p} = |α| ‖x‖p.
(c) Using Minkowski's inequality (see Lemma A.2 for a more general version applicable to sequences), it follows that ‖x + y‖p ≤ ‖x‖p + ‖y‖p, so that the triangle inequality is satisfied. Hence, ‖·‖p is a vector norm on Cⁿ.

(iii) To show that ‖x‖∞ ≜ max_i |xi| is a vector norm on Cⁿ:
(a) Clearly ‖x‖∞ = max_i |xi| ≥ 0. Also, x = 0 ⇒ ‖x‖∞ = 0, and

‖x‖∞ = 0 ⇒ max_i |xi| = 0 ⇒ xi = 0 ∀ i = 1, 2, ···, n ⇒ x = 0.

Thus, ‖x‖∞ ≥ 0 and ‖x‖∞ = 0 if and only if x = 0.
(b) ‖αx‖∞ = max_i |αxi| = |α| max_i |xi| = |α| ‖x‖∞.
(c) ‖x + y‖∞ = max_i |xi + yi| ≤ max_i (|xi| + |yi|) ≤ max_i |xi| + max_i |yi| = ‖x‖∞ + ‖y‖∞,
so that the triangle inequality is satisfied. Hence, ‖·‖∞ is a vector norm on Cⁿ.

Example A.2
In R², draw, for p = 1, 2, ∞, the set
E = { x ∈ R² | ‖x‖p = 1 }.

Case (I): p = 1. ‖x‖1 = |x1| + |x2|, so ‖x‖1 = 1 ⇒ |x1| + |x2| = 1, which gives the set shown in Figure A.1.

Figure A.1 The set E for p = 1.

Case (II): p = 2. ‖x‖2 = (|x1|² + |x2|²)^{1/2}, so ‖x‖2 = 1 gives rise to the unit circle shown in Figure A.2.

Figure A.2 The set E for p = 2.

Case (III): p = ∞. ‖x‖∞ = max(|x1|, |x2|), so ‖x‖∞ = 1 ⇒ max(|x1|, |x2|) = 1, which gives the square shown in Figure A.3.

Figure A.3 The set E for p = ∞.
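Properties (a)-(c), verified symbolically in Example A.1, can also be spot-checked numerically with NumPy's built-in p-norms; the random sampling script below is our own illustration.

```python
import numpy as np

# Verify the three defining properties of a norm on random vectors in C^5
# for p = 1, 2 and infinity.
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
    y = rng.standard_normal(5) + 1j * rng.standard_normal(5)
    a = complex(rng.standard_normal(), rng.standard_normal())
    for p in (1, 2, np.inf):
        nx, ny = np.linalg.norm(x, p), np.linalg.norm(y, p)
        assert nx >= 0                                           # property (a)
        assert np.isclose(np.linalg.norm(a * x, p), abs(a) * nx) # property (b)
        assert np.linalg.norm(x + y, p) <= nx + ny + 1e-12       # property (c)
checked = True
print("norm properties hold on 100 random samples")
```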
Example A.3
Let X be the space of infinite sequences of complex numbers: x = (ξ1, ξ2, ...) with ξi ∈ C for i = 1, 2, .... Note that X is a vector space. Frequently used norms on appropriate proper subsets of X are:

‖x‖1 ≜ Σ_{i=1}^∞ |ξi|
‖x‖p ≜ (Σ_{i=1}^∞ |ξi|^p)^{1/p}, 1 ≤ p < ∞
‖x‖∞ ≜ sup_{i≥1} |ξi|

In the above example, we have used the sup of a set of real numbers. Therefore, it is appropriate that we formally define the sup.

DEFINITION A.3 Supremum (Sup)
The supremum of a set of real numbers bounded from above is its Least Upper Bound. In the same spirit, the Infimum (Inf) of a set of real numbers bounded from below is its Greatest Lower Bound.

When the maximum of a set of real numbers exists, the sup is necessarily equal to the maximum. However, the definition of the sup provides a tight upper bound even in cases where no maximum exists. This is illustrated by the following example.
Example A.4

Figure A.4 Plot of f(t) = 1 − e⁻ᵗ vs. t.
Consider the function f(t) = 1 − e⁻ᵗ defined for all t ≥ 0, shown in Figure A.4. This function could model the response of a typical RC circuit with time constant equal to one. Clearly, this function does not have a maximum value, but its sup or least upper bound is 1. Note that any number smaller than 1 cannot be an upper bound for this function, since we can always find a time t such that the function value exceeds that number. Thus, the sup of 1 provides the tightest possible upper bound on the values assumed by this function for all t ≥ 0. In defining ‖·‖∞ above, we used the sup instead of the maximum, since the latter may not be well defined for an infinite set of positive numbers. In defining the norms on the space of infinite sequences X, we have put in the caveat that the norms are defined on appropriate subsets of X. By this we mean subsets of X on which the corresponding norm is finite. The subset of X on which ‖·‖1 is finite is called l1. Similarly, we can define the spaces lp and l∞.

Exercise A.1.1 Show that l1, lp, l∞ are all vector spaces. Also show that ‖·‖1, ‖·‖p, ‖·‖∞ are bona fide vector norms on l1, lp, l∞, that is, they satisfy the three defining properties of a vector norm.
The following two lemmas establish useful properties for norms defined on lp, p ∈ [1, ∞]:

LEMMA A.1 (Hölder's Inequality for Sequences)
Let x = (x1, x2, ...), y = (y1, y2, ...) be sequences in X. Let p, q be real numbers in the interval [1, ∞] with 1/p + 1/q = 1. If p and q satisfy this relationship, they are called conjugate exponents. If x ∈ lp and y ∈ lq, with p and q being conjugate exponents, then

Σ_k |xk yk| < ∞  and  Σ_k |xk yk| ≤ ‖x‖p · ‖y‖q.

LEMMA A.2 (Minkowski's Inequality)

‖x + y‖p ≤ ‖x‖p + ‖y‖p ∀ p ∈ [1, ∞].

The Minkowski inequality is useful for demonstrating that the lp norm satisfies the triangle inequality.

Example A.5
Let X = the space of all complex-valued sequences. Find some x ∈ X for which ‖x‖1 = ∞ but ‖x‖∞ = 1. Also find some x ∈ X for which ‖x‖∞ = ∞. (This example justifies why ‖·‖1, ‖·‖∞, etc. must be defined on appropriate subsets of X.)
Solution:
(i) Let x = (1, 1/2, 1/3, 1/4, 1/5, ···). Then ‖x‖1 = 1 + 1/2 + 1/3 + 1/4 + ··· = ∞, but ‖x‖∞ = 1.
(ii) Let x = (1, 2, 3, 4, ···). Then ‖x‖∞ = ∞.
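The two sequences in this solution are easy to probe numerically; truncations of the harmonic sequence keep the ∞-norm pinned at 1 while their 1-norms grow like ln N, which is why ‖·‖1 must be restricted to l1:

```python
import numpy as np

sups, ones = [], []
for N in (10, 1000, 10 ** 6):
    x = 1.0 / np.arange(1, N + 1)    # truncation of (1, 1/2, 1/3, ...)
    sups.append(np.max(np.abs(x)))   # sup norm of the truncation
    ones.append(np.sum(np.abs(x)))   # 1-norm of the truncation
print(sups)   # [1.0, 1.0, 1.0] — the sup norm never moves
print(ones)   # grows like ln N + 0.5772: roughly 2.93, 7.49, 14.39
```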
Example A.6
Let X = C^{n×n} be the set of all n × n matrices with elements in C. It is easy to verify that X is a vector space. The following are norms on C^{n×n}:

‖A‖a ≜ max_{i,j} |aij|
‖A‖b ≜ Σ_{i,j=1}^n |aij|
‖A‖s ≜ (Σ_{i,j=1}^n |aij|²)^{1/2}   ← Frobenius norm
‖A‖1 ≜ max_j Σ_{i=1}^n |aij|   (Maximum Column Sum)
‖A‖2 ≜ max_i [λi(A*A)]^{1/2}   (where λi(M) denotes the ith eigenvalue of M and M* denotes the complex conjugate transpose of M)
‖A‖∞ ≜ max_i Σ_{j=1}^n |aij|   (Maximum Row Sum)
Here matrices are looked upon as elements of a linear space. We next consider normed vector spaces whose elements are functions of time. Intuitively, it appears that by using integrals instead of the summations that we used in the lp (p ∈ [1, ∞)) norm definitions we should be able to define analogous norms for functions of time. This intuition is correct except that the integral that we will be using in the definition is not the usual Riemann integral and the usual supremum of the function over all time will not be used in the ∞-norm definition. Instead, the p-norm, p ∈ [1, ∞) will be defined using a Lebesgue integral and the supremum will be replaced by the essential supremum. Before defining these norms, we introduce the Lebesgue integral and the notion of the essential supremum in a very intuitive way. A rigorous treatment of these topics belongs to a course on measure theory and is beyond the scope of the current text. To introduce the Lebesgue integral, we need to introduce the concept of the Lebesgue measure. Roughly speaking, the Lebesgue measure generalizes the notion of the length of an interval. In R, a point is a set of measure zero since its length is 0. Moreover, it is a fact that any set that is a countable union of measure zero sets will also have measure zero. Recall from introductory calculus that the Riemann integral of a function f (t) can be defined as the limit of a sum and is evaluated by summing up the areas under the curve by using vertical strips of infinitesimal width along the domain of the function. This is illustrated in Figure A.5. Now consider a function f (t) defined on
[0, 1] as follows:

f(t) = 0 if t is rational, 1 otherwise.   (A.1)
If we accept the premise that between any two real numbers, there is a rational number, it is clear that for such a function, one cannot define the Riemann integral since it is impossible to come up with the infinitesimal-width vertical strips needed to define the Riemann sums. However, in this case, the function f (t) assumes two values, zero and one, and so the range of the function is a finite set. Consequently, if we could come up with a mechanism for characterizing the measures of the two sets where the function assumes the two different values, the integral could be very easily evaluated by multiplying each value by the measure of the set associated with it and then summing the resulting quantities. The Lebesgue integral makes this notion mathematically precise. When the Riemann integral exists, it is in fact equal to the Lebesgue integral. However, there are functions which are not Riemann integrable but which do have a Lebesgue integral. For instance, the function f (t) that we have just defined has a Lebesgue integral on [0, 1] whose value is equal to one. To see this, we note that since the rationals on [0, 1] are countable, the measure of that set is zero and consequently the function f (t) is 1 on a set of measure one.
Figure A.5 Evaluating the Riemann integral as the limit of a sum.
For a Riemann integrable function, it is easy to see that changing the value of the function at a single point does not alter the area under the curve and,
therefore, the value of the integral. The same is true for a Lebesgue integral since a point is considered to be a set of measure zero. Indeed, altering the value of the function at a countable number of points has no effect on the Lebesgue integral since the countable union of zero measure sets also has measure zero. Thus, in integration theory, sets of measure zero are not important. Consequently, if we are looking at the supremum of a function over a set, we might as well look at the supremum after throwing out the sets of measure zero. This brings us to the notion of the essential supremum which is the supremum of that function except possibly on sets of measure zero. The following example illustrates the distinction between the supremum of a function and its essential supremum. Example A.7 Consider two functions f (t) and g(t) defined as follows: f (t) = sin t for t ≥ 0 and g(t) = 1 for t > 0 and g(0) = 2. These functions are shown in Figures A.6 and A.7 respectively.
Figure A.6 Plot of f (t) versus t.
Clearly,

sup_{t≥0} f(t) = 1   (A.2)
= max_{t≥0} f(t) = ess.sup_{t≥0} f(t).   (A.3)

On the other hand, sup_{t≥0} g(t) = 2 but ess.sup_{t≥0} g(t) = 1. Note that, with respect to Figure A.7, the set on which g(t) > ess.sup_{t≥0} g(t) (the point {0})
Figure A.7 Plot of g(t) versus t.
is a set of measure zero. We are now ready to define the p-norms for Lebesgue integrable functions of time.

Example A.8
Let X = { f : R → R | f locally (Lebesgue) integrable }. Frequently used norms on appropriate proper subsets of X are as follows:

‖x‖1 ≜ ∫ |x(t)| dt
‖x‖p ≜ ( ∫ |x(t)|^p dt )^{1/p}, 1 ≤ p < ∞

… ∀ ǫ > 0 ∃ N(ǫ) such that n, m ≥ N(ǫ) ⇒ d(xn, xm) < ǫ.

The following theorem establishes the relationship between Cauchy sequences and convergence.

THEOREM A.1
Every convergent sequence is a Cauchy sequence, but the converse is not true.
PROOF
(a) Let {xn} be a convergent sequence with limit x. Then, given ǫ > 0, ∃ N(ǫ) such that

n ≥ N ⇒ d(xn, x) < ǫ/2.

Hence, for n, m ≥ N,

d(xn, xm) ≤ d(xn, x) + d(x, xm) < ǫ

so that {xn} is a Cauchy sequence.
(b) For the converse, consider the metric space X = (0, 1] with d(x, y) = |x − y|, and the sequence xn = 1/n. Then

d(xn, xm) = | 1/n − 1/m |.

Thus, d(xn, xm) can be made arbitrarily small by choosing m, n large enough ⇒ {xn} is a Cauchy sequence. However, lim_{n→∞} xn does not exist since 0 ∉ X.
824
OPTIMAL AND ROBUST CONTROL
will converge to the true solution. If such a problem is set in a Banach space and the sequence of approximations is carefully chosen to be a Cauchy sequence, the approximations are guaranteed to converge to the true solution. Such a technique can be used to establish the existence of solutions of ordinary differential equations under appropriate conditions and finds application in many other areas where the solution is a fixed point of an appropriate mapping. For some of the small gain results presented in Chapter 14 of this book, stronger results, guaranteeing the existence of the solution, can be obtained if the problem is set in a Banach space. This is the main reason why we have introduced Lp spaces of (Lebesgue integrable) functions instead of working with the space of all continuous (Riemann integrable) functions. Lp -spaces are Banach spaces since they are complete with respect to the appropriate Lp norm. The proof of that is a deep result that belongs to a course on measure theory and is beyond the scope of the current text. However, the following example shows that the space of all continuous functions with the k.k1 defined on it is not a Banach space since we can find a Cauchy sequence which does not converge. Example A.13 Let X be the space of continuous functions on [0, 1] with norm defined by R1 kxk = 0 |x(t)|dt. It can be easily verified that X is a normed linear space. We will show that X is not Rcomplete. 1 Now for x, y ∈ X, d(x, y) = 0 |x(t) − y(t)|dt ⇒ d(xn , xm ) =
Z
1
|xn (t) − xm (t)|dt
0
For n ≥ 2, let xn (t) be defined by
0, xn (t) = nt + (1 − 21 n), 1,
0 ≤ t ≤ 12 − n1 1 1 1 2 − n < t≤ 2 1 2 < t
This sequence of functions is sketched in Figure A.8. Here d(xn , xm ) is the area between the lines corresponding to xn and xm and, therefore, from the figure, it is clear that d(xn , xm ) → 0 as n, m → ∞ ⇒ {xn } is a Cauchy sequence. There is however no continuous function to which this sequence converges ⇒ X is not complete.
Figure A.8 Sequence of functions xn (t) for Example A.13.
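The Cauchy property of this ramp sequence can be checked by quadrature (our own illustration): for m > n the area between xn and xm works out to 1/(2n) − 1/(2m), which shrinks as n, m grow.

```python
import numpy as np
from scipy.integrate import quad

def x_n(t, n):
    """The ramp function of Example A.13."""
    if t <= 0.5 - 1.0 / n:
        return 0.0
    if t <= 0.5:
        return n * t + 1.0 - n / 2.0
    return 1.0

def d(n, m):
    """L1 distance between x_n and x_m on [0, 1]."""
    pts = [0.5 - 1.0 / n, 0.5 - 1.0 / m, 0.5]   # breakpoints of the integrand
    val, _ = quad(lambda t: abs(x_n(t, n) - x_n(t, m)), 0.0, 1.0, points=pts)
    return val

print(d(2, 4), 1 / 4 - 1 / 8)   # both equal 0.125
print(d(50, 100))               # 0.005: distances shrink, the sequence is Cauchy
```

The pointwise limit is the discontinuous unit step at t = 1/2, which is exactly why the limit escapes the space of continuous functions.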
A.3
Equivalent Norms and Convergence
Two norms are said to be equivalent if every sequence that converges in terms of one of the norms converges in terms of the other. This notion of equivalence is consistent with the definition of equivalent norms given in Definition A.4.

DEFINITION A.10 A Finite Dimensional Vector Space is any vector space that can be spanned by a finite number of elements, for example Rⁿ, Cⁿ, etc. (dimension = n). Infinite Dimensional vector spaces are those vector spaces which are not finite dimensional, for example C[0, 1], Lp[a, b].

Fact: In a finite dimensional vector space, all norms are equivalent. The following example shows that the 1, 2 and ∞-norms on R² are equivalent.

Example A.14
In R², x = [x1, x2]ᵀ and three possible norms are ‖x‖∞, ‖x‖2, ‖x‖1, defined by

‖x‖1 = |x1| + |x2|
‖x‖∞ = max(|x1|, |x2|)
‖x‖2 = (x1² + x2²)^{1/2}
It is easy to verify that

(1/2) ‖x‖1 ≤ ‖x‖∞ ≤ ‖x‖1  and  ‖x‖2 ≤ ‖x‖1 ≤ √2 ‖x‖2

so that all three norms are equivalent.

Example A.15
Let

A = [ 0.9  10⁴
      0    0.9 ].

Calculate ‖A‖a, ‖A‖b, ‖A‖s, ‖A‖1, ‖A‖2, ‖A‖∞. Show that Aᵏ → θ as k → ∞.
Solution:

‖A‖a ≜ max_{i,j} |aij| = 10⁴
‖A‖b ≜ Σ_{i,j=1}^n |aij| = 10001.8
‖A‖s ≜ (Σ_{i,j=1}^n |aij|²)^{1/2} ≈ 10000
‖A‖1 ≜ max_j Σ_{i=1}^n |aij| = 10000.9
‖A‖∞ ≜ max_i Σ_{j=1}^n |aij| = 10000.9
‖A‖2 ≜ max_i [λi(A*A)]^{1/2} ≈ 10⁴

To show that Aᵏ → θ as k → ∞: when k = 2,

A² = A·A = [ 0.9  10⁴ ] [ 0.9  10⁴ ]  =  [ (0.9)²  0.9 × 10⁴ × 2
             0    0.9     0    0.9         0        (0.9)²       ].

We can use induction to show that

Aᵏ = [ (0.9)ᵏ   (0.9)ᵏ⁻¹ × k × 10⁴
       0         (0.9)ᵏ ].

Clearly, (0.9)ᵏ → 0 as k → ∞. We can also argue that (0.9)ᵏ⁻¹ × k × 10⁴ → 0 as k → ∞. Thus, as k → ∞, Aᵏ → θ.
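These six numbers, and the eventual decay of Aᵏ, can be reproduced with NumPy's built-in matrix norms:

```python
import numpy as np

A = np.array([[0.9, 1e4], [0.0, 0.9]])

norm_a   = np.max(np.abs(A))           # 10000.0  (max entry)
norm_b   = np.sum(np.abs(A))           # 10001.8  (entry sum)
norm_s   = np.linalg.norm(A, "fro")    # ~10000   (Frobenius)
norm_1   = np.linalg.norm(A, 1)        # 10000.9  (max column sum)
norm_2   = np.linalg.norm(A, 2)        # ~10000   (largest singular value)
norm_inf = np.linalg.norm(A, np.inf)   # 10000.9  (max row sum)
print(norm_a, norm_b, norm_s, norm_1, norm_2, norm_inf)

# Every norm of A is enormous, yet both eigenvalues equal 0.9, so A^k -> 0:
# the polynomial factor k * 10^4 eventually loses to (0.9)^k.
decay = np.linalg.norm(np.linalg.matrix_power(A, 500), np.inf)
print(decay)
```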
In infinite dimensional spaces, norms are not necessarily equivalent, as shown by the following example.

Example A.16
Let X be the set of all real sequences and consider the sequences xi ∈ X defined by

x1 = (1, 0, 0, ...)
x2 = (1, 2⁻¹, 0, ...)
...
xn = (1, 2⁻¹, 3⁻¹, ..., n⁻¹, 0, ...).

Then ‖xk‖∞ = 1 ∀ k ∈ Z+ and ‖xk‖1 → ∞ as k → ∞.

So far, we have been discussing different kinds of norms: norms on n-dimensional real and complex spaces, and norms on spaces of sequences and functions. Furthermore, in Appendix B, we will be introduced to norms of linear maps. Thus, it is appropriate at this stage to establish the following standard notation:

|·| denotes norms on Rⁿ or Cⁿ
‖·‖ denotes norms on sequence and function spaces, for example lp, Lp
‖·‖ also denotes induced norms of linear maps (to be introduced in Appendix B).
∆
kxkp =
∞ X
|ξi |
i=1
∆
kxk∞ = sup |ξi | i≥1
p
! p1
, 1 ≤ p |f |dt ≥ |f | dt, ∀ p ∈ [1, ∞] Ic
Ic
The conclusion follows from these two observations. The relationship among L1, L2, and L∞ is shown in the Venn diagram of Figure A.9.

Figure A.9 Relationship between L1, L2, and L∞.
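The placements of the functions f1, f2, f4 marked in Figure A.9 can be confirmed by numerical quadrature (our own verification, anticipating Example A.18):

```python
import numpy as np
from scipy.integrate import quad

f2 = lambda t: 1.0 / (1.0 + t)   # in L2 and L-infinity, not in L1
f4 = lambda t: np.exp(-t)        # in L1, L2 and L-infinity

# L2 norm of f2: the exact value is 1.
l2_f2 = np.sqrt(quad(lambda t: f2(t) ** 2, 0.0, np.inf)[0])

# f2 is not in L1: its integral over [0, T] grows like ln(1 + T).
l1_f2_partials = [quad(f2, 0.0, T, limit=200)[0] for T in (10.0, 1e3, 1e6)]

# f4 is in L1 with unit norm.
l1_f4 = quad(f4, 0.0, np.inf)[0]

print(l2_f2, l1_f4)      # both equal 1 to quadrature accuracy
print(l1_f2_partials)    # roughly 2.40, 6.91, 13.82 — unbounded growth
```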
Example A.18 Consider functions mapping R+ into R and defined by f1 : t 7→ 1; f2 : t 7→
1 ; 1+t
f4 : t 7→ e−t
Show that (i) f1 ∈ L∞ , f1 ∈ / L 2 , f1 ∈ / L1 . (ii) f2 ∈ L2 ∩ L∞ , f2 ∈ / L1 . (iii) f4 ∈ L1 ∩ L2 ∩ L∞ . Solution: (i) f1 (t) = 1 Clearly, f1 ∈ L∞ and f1 ∈ / L 2 , f1 ∈ / L1 . (follows by evaluating the appropriate integrals).
(ii) $f_2(t) = \dfrac{1}{1+t}$:
\[
\|f_2\|_2 = \left[\int_0^\infty \left(\frac{1}{1+t}\right)^2 dt\right]^{1/2}
= \left[\left.-\frac{1}{1+t}\right|_0^\infty\right]^{1/2} = 1 < \infty,
\]
so $f_2 \in L_2$, and clearly $f_2 \in L_\infty$. However, $\int_0^T \frac{dt}{1+t} = \ln(1+T) \to \infty$ as $T \to \infty$, so $f_2 \notin L_1$.

(iii) $f_4(t) = e^{-t}$: evaluating the corresponding integrals shows that $f_4 \in L_1 \cap L_2 \cap L_\infty$.

Fact 1: If $f$ has a Laplace transform with abscissa of convergence $s_0$, then the map $s \mapsto \hat f(s)$ is analytic in $\mathrm{Re}[s] > \mathrm{Re}[s_0]$.

Fact 2: If $f \in L_1(\mathbb{R}_+)$ or $L_2(\mathbb{R}_+)$ or $L_\infty(\mathbb{R}_+)$, then the map $s \mapsto \hat f(s)$ is analytic in $\mathrm{Re}[s] > 0$.

Recall our earlier definition for the convolution of two time functions defined for $t \ge 0$:
\[
y(t) = (h * u)(t) = \int_0^t h(t-\tau)\,u(\tau)\,d\tau \quad \forall\, t \ge 0. \tag{B.31}
\]
In this context, we state the following two facts:

1. Any linear time-invariant system can be represented by a convolution with a kernel, which is a distribution (or a generalized function).
2. Furthermore, convolution allows lumped-parameter systems, distributed-parameter systems, and systems with transportation lags all to be considered under the same framework. This is a feature that is not readily achievable using realization-based state-space approaches.

NORMS FOR LINEAR SYSTEMS

In (B.31), the simplest case occurs when $h(\cdot)$ is the impulse response of a linear, time-invariant, nonanticipative system whose transfer function $\hat h(s)$ is a rational function of $s$, with $|\hat h(s)| \to 0$ as $|s| \to \infty$. In other words, $\hat h(s)$ is strictly proper. Note that, under these conditions,

(A) $\hat h(s)$ has all its poles in the open left-half plane if and only if $h \in L_1$.

If (A) holds, then

(i) $h$ decays exponentially; that is, for some $h_m < \infty$ and some $\alpha > 0$,
\[
|h(t)| \le h_m e^{-\alpha t} \quad \forall\, t \in \mathbb{R}_+ \tag{B.32}
\]
(ii) $\dot h(t) = h(0^+)\delta(t) + h_1(t)$, where $h_1(\cdot)$ decays exponentially; $h(0^+)$ is possibly zero. Indeed $h(0^+) = \lim_{s\to\infty} s\hat h(s)$ (Initial Value Theorem).
(iii) If $u \in L_1$ then $y \in L_1 \cap L_\infty$ and $\dot y \in L_1$; furthermore, $y$ is continuous and $y(t) \to 0$ as $t \to \infty$. If $u \in L_2$ then $y \in L_2 \cap L_\infty$ and $\dot y \in L_2$; furthermore, $y$ is continuous and $y(t) \to 0$ as $t \to \infty$.
(iv) For $1 \le p \le \infty$, $u \in L_p \Rightarrow y, \dot y \in L_p$; furthermore, $y$ is continuous.

Statements (i) and (ii) above are obvious, while statements (iii) and (iv) require a couple of theorems that we will prove in the next section.
B.3 Lp/lp Norms of Convolutions of Signals

B.3.1 L1 Theory
THEOREM B.4 Let $u, w : \mathbb{R}_+ \to \mathbb{R}$. If $u, w \in L_1$ then $u * w \in L_1$ and $\|u * w\|_1 \le \|u\|_1 \cdot \|w\|_1$.

PROOF Now
\[
|(w*u)(t)| = \left|\int_0^t w(t-\tau)u(\tau)\,d\tau\right| \le \int_0^t |w(t-\tau)|\,|u(\tau)|\,d\tau.
\]
Let us calculate $\|w*u\|_1$:
\[
\int_0^\infty |(w*u)(t)|\,dt \le \int_0^\infty \int_0^t |w(t-\tau)|\,|u(\tau)|\,d\tau\,dt. \tag{B.33}
\]
Interchanging the order of integration on the right-hand side, we obtain the other iterated integral
\[
\int_0^\infty \int_\tau^\infty |w(t-\tau)|\,|u(\tau)|\,dt\,d\tau.
\]

OPTIMAL AND ROBUST CONTROL

This integral evaluates out to
\[
\int_0^\infty |u(\tau)| \left[\int_\tau^\infty |w(t-\tau)|\,dt\right] d\tau \tag{B.34}
\]
\[
= \|w\|_1\,\|u\|_1 < \infty. \tag{B.35}
\]
Since the integral in (B.34) is finite, it follows by Tonelli's Theorem on interchanging the order of integration [66] that the integral in (B.33) is also finite and is equal to the integral in (B.34). Thus, $\|w*u\|_1 \le \|w\|_1 \cdot \|u\|_1$.

The following example shows that the above result for the convolution of two scalar-valued functions can be extended to the convolution of a matrix-valued function with a vector-valued function.

Example B.10 Let $W : \mathbb{R}_+ \to \mathbb{R}^{n\times n}$ and $u : \mathbb{R}_+ \to \mathbb{R}^n$. If all elements of $W$ and of $u$ are in $L_1$, then
\[
\|W * u\|_1 \le \|W\|_1 \cdot \|u\|_1 \tag{B.36}
\]
where $\|u\|_1 = \int_0^\infty |u(t)|\,dt$ and $\|W\|_1 = \int_0^\infty |W(t)|\,dt$, with $|u(t)|$ designating some $\mathbb{R}^n$ norm of $u(t)$ and $|W(t)|$ the corresponding induced matrix norm on $\mathbb{R}^{n\times n}$.

Solution: Now
\[
\|W*u\|_1 = \int_0^\infty \left|\int_0^t W(t-\tau)u(\tau)\,d\tau\right| dt
\le \int_0^\infty \int_0^t |W(t-\tau)u(\tau)|\,d\tau\,dt
\le \int_0^\infty \int_0^t |W(t-\tau)|\,|u(\tau)|\,d\tau\,dt
= \big\|\, |W| * |u| \,\big\|_1
\le \big\|\,|W|\,\big\|_1 \cdot \big\|\,|u|\,\big\|_1
= \|W\|_1 \cdot \|u\|_1
\]
(the next-to-last step by Theorem B.4, the last by the definitions of $\|u\|_1$ and $\|W\|_1$).
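Theorem B.4 is easy to verify numerically on a grid. The following sketch (an illustration, not from the text; the functions $w$ and $u$ are chosen arbitrarily) approximates the convolution and both $L_1$ norms by Riemann sums:

```python
import numpy as np

# Discrete check of ||w * u||_1 <= ||w||_1 * ||u||_1 (Theorem B.4)
# for w(t) = e^{-t} and u(t) = e^{-2t} sin(5t) on [0, 20).
dt = 1e-3
t = np.arange(0, 20, dt)
w = np.exp(-t)
u = np.exp(-2 * t) * np.sin(5 * t)

conv = np.convolve(w, u)[: len(t)] * dt          # (w * u)(t) on the same grid
lhs = np.sum(np.abs(conv)) * dt                  # ||w * u||_1
rhs = (np.sum(np.abs(w)) * dt) * (np.sum(np.abs(u)) * dt)  # ||w||_1 ||u||_1
assert lhs <= rhs + 1e-9
```

The same bound holds componentwise in the matrix case of Example B.10, since the scalar inequality is applied to $|W|*|u|$.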
B.3.2 Lp Theory

THEOREM B.5 Let $u, w : \mathbb{R}_+ \to \mathbb{R}$ and $p \in [1, \infty]$. If $u \in L_p$ and $w \in L_1$, then
\[
\|w*u\|_p \le \|w\|_1 \cdot \|u\|_p \tag{B.37}
\]
PROOF First, let us consider $1 < p < \infty$. Then we have
\[
|(w*u)(t)| \le \int_0^t |w(t-\tau)|^{1/p}\,|u(\tau)| \cdot |w(t-\tau)|^{1/q}\,d\tau \tag{B.38}
\]
where $q$ is the conjugate exponent of $p$, that is, $\frac{1}{p} + \frac{1}{q} = 1$. By Hölder's Inequality,
\[
|(w*u)(t)| \le \left[\int_0^t |w(t-\tau)|\,|u(\tau)|^p\,d\tau\right]^{1/p} \left[\int_0^t |w(t-\tau)|\,d\tau\right]^{1/q} \tag{B.39}
\]
Note that the last factor is bounded by $(\|w\|_1)^{1/q}$. Taking $L_p$ norms of both sides,
\[
\|w*u\|_p \le \|w\|_1^{1/q} \left[\int_0^\infty \int_0^t |w(t-\tau)|\,|u(\tau)|^p\,d\tau\,dt\right]^{1/p}
\le \|w\|_1^{1/q}\,\|w\|_1^{1/p}\,\|u\|_p \quad (\text{by Theorem B.4})
= \|w\|_1 \cdot \|u\|_p \quad \left(\text{since } \tfrac{1}{p} + \tfrac{1}{q} = 1\right).
\]
For $p = 1$, we revert to Theorem B.4, and for $p = \infty$ the proof is immediate, since we can pull the essential supremum of $u$ out of the convolution integral.

Theorems B.4 and B.5 also extend to the discrete convolution of sequences. These discrete versions are next stated without proof.

THEOREM B.6 If $a$ and $b$ are $l_1$ sequences, then $a*b \in l_1$ and $\|a*b\|_1 \le \|a\|_1 \cdot \|b\|_1$.
THEOREM B.7 Let $a$ and $b$ be sequences and $p \in [1, \infty]$. If $a \in l_p$ and $b \in l_1$, then $\|a*b\|_p \le \|a\|_p \cdot \|b\|_1$.

Having introduced Theorems B.4 through B.7, we now have the tools needed for calculating the induced norms of convolution maps. These calculations are carried out in the next section.
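For finitely supported sequences, Theorems B.6 and B.7 can be spot-checked directly; `numpy.convolve` computes exactly the discrete convolution $(a*b)_k = \sum_j a_j b_{k-j}$. A minimal sketch (the sequences are arbitrary, not from the text):

```python
import numpy as np

a = np.array([1.0, -2.0, 0.5, 3.0])
b = np.array([0.3, 0.4, -0.1])
c = np.convolve(a, b)            # discrete convolution (a * b)

# Theorem B.6: ||a*b||_1 <= ||a||_1 ||b||_1
assert np.sum(np.abs(c)) <= np.sum(np.abs(a)) * np.sum(np.abs(b)) + 1e-12

# Theorem B.7 with p = 2: ||a*b||_2 <= ||a||_2 ||b||_1
assert np.linalg.norm(c) <= np.linalg.norm(a) * np.sum(np.abs(b)) + 1e-12

# Theorem B.7 with p = infinity: ||a*b||_inf <= ||a||_inf ||b||_1
assert np.max(np.abs(c)) <= np.max(np.abs(a)) * np.sum(np.abs(b)) + 1e-12
```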
B.4 Induced Norms of Convolution Maps
Let $(E, \|\cdot\|_\infty) = L_\infty(\mathbb{R}_+) = \{f : \mathbb{R}_+ \to \mathbb{R} \mid \|f\|_\infty < \infty\}$. Let $H$ be a linear map defined on $E$ in terms of an integrable function $h : \mathbb{R} \to \mathbb{R}$:
\[
H : u \mapsto Hu \triangleq h * u \quad \forall\, u \in L_\infty \tag{B.40}
\]
that is,
\[
(Hu)(t) = \int_0^t h(t-\tau)\,u(\tau)\,d\tau \quad \forall\, t \in \mathbb{R}_+. \tag{B.41}
\]

THEOREM B.8 Assume that $\|h\|_1 = \int_0^\infty |h(t)|\,dt < \infty$. Then we have

(a) $H : L_\infty \to L_\infty$
(b) $\|H\|$, the induced norm of the linear map $H$, is given by $\|H\|_\infty = \|h\|_1$; that is, $\|h*u\|_\infty \le \|h\|_1 \cdot \|u\|_\infty$ for all $u \in L_\infty$, and $\|h*u\|_\infty$ can be made arbitrarily close to $\|h\|_1 \cdot \|u\|_\infty$ by an appropriate choice of $u$.

PROOF We start by calculating the induced norm of $H$. We drop the subscript $\infty$ throughout; for example, $\|u\|$ denotes the $L_\infty$ norm of $u : \mathbb{R}_+ \to \mathbb{R}$. There are three norms in this proof: the absolute value of real numbers, for example $|u(t)|$; the norm on $L_\infty$, for example $\|u\|$; and the induced norm on $L(E,E)$, namely $\|H\|$. Now
\[
\|H\| = \sup_{\|u\|_\infty = 1} \|h*u\|_\infty
= \sup_{\|u\|_\infty = 1} \sup_{t\ge0} |(h*u)(t)|
= \sup_{\|u\|_\infty = 1} \sup_{t\ge0} \left|\int_0^t h(t-\tau)u(\tau)\,d\tau\right|
\le \sup_{\|u\|_\infty = 1} \sup_{t\ge0} \int_0^t |h(t-\tau)|\,|u(\tau)|\,d\tau.
\]
Since $\|u\|_\infty = 1$,
\[
\|H\| \le \sup_{t\ge0} \int_0^t |h(t-\tau)|\,d\tau \le \int_0^\infty |h(t')|\,dt' \quad (\text{putting } t-\tau = t').
\]
Hence,
\[
\|H\| \le \int_0^\infty |h(t')|\,dt' = \|h\|_1. \tag{B.42}
\]
Thus, $H$ is a bounded linear map from $L_\infty$ into $L_\infty$. Also, $\|h\|_1$ is an upper bound on the induced norm of the map $H : L_\infty \to L_\infty$. Let us now show that $\|h\|_1$ is indeed the induced norm. Consider a sequence of inputs $u_1, u_2, u_3, \cdots$ with $\|u_t\| = 1$, where for $t = 1, 2, 3, \cdots$ we define
\[
u_t : \tau \mapsto u_t(\tau) = \mathrm{sgn}[h(t-\tau)], \quad \tau \in \mathbb{R},\ t \in \mathbb{Z}_+ \tag{B.43}
\]
and we take $h(t) = 0$ for $t < 0$. Consider now the value at time $t$ of the output due to $u_t(\cdot)$, that is,
\[
(h*u_t)(t) = \int_0^t |h(t-\tau)|\,d\tau \le \|h*u_t\|_\infty, \quad t = 1, 2, 3, \cdots \tag{B.44}
\]
where $\|\cdot\|_\infty$ denotes the norm on $E$. Hence, for $t = 1, 2, 3, \cdots$, we have, using (B.42) and (B.44),
\[
\int_0^t |h(\tau)|\,d\tau \le \|h*u_t\|_\infty \le \|H\| \le \int_0^\infty |h(\tau)|\,d\tau. \tag{B.45}
\]
Letting $t \to \infty$, these inequalities imply that $\|H\| = \|h\|_1$.

THEOREM B.9 Let $E = L_2(\mathbb{R}_+) = \{f : \mathbb{R}_+ \to \mathbb{R} \mid \|f\|_2 < \infty\}$. Let $H : u \mapsto Hu$, where
\[
(Hu)(t) = \int_0^t h(t-\tau)\,u(\tau)\,d\tau, \quad \forall\, t \in \mathbb{R}_+. \tag{B.46}
\]
If the linear map $H$ is defined by (B.46), where $h \in L_1$, then

(a) $H : L_2 \to L_2$

(b) $\|H\|_2$, the induced norm of the linear map $H \in L(L_2, L_2)$, is given by
\[
\|H\|_2 = \max_{\omega\in\mathbb{R}} |\hat h(j\omega)|. \tag{B.47}
\]

PROOF Since $h \in L_1$, its Fourier transform $\mathcal{F}(h) = \hat h(j\omega)$ is uniformly continuous on $\mathbb{R}$ and tends to 0 as $|\omega| \to \infty$ (by Theorem B.1, parts (a) and (c)). Since $h \in L_1$ and $u \in L_2$, we have $h*u \in L_2$ (by Theorem B.5).
Now
\[
(\|Hu\|_2)^2 = \|h*u\|_2^2 = \int_0^\infty (h*u)(t)\,\overline{(h*u)(t)}\,dt
= \frac{1}{2\pi}\int_{-\infty}^\infty \widehat{(h*u)}(j\omega)\,\big[\widehat{(h*u)}(j\omega)\big]^*\,d\omega
\]
(by Parseval's Theorem)
\[
= \frac{1}{2\pi}\int_{-\infty}^\infty |\hat h(j\omega)|^2\,|\hat u(j\omega)|^2\,d\omega
\]
(using the Convolution Theorem for Fourier Transforms). Furthermore, by Parseval,
\[
\|u\|_2 = 1 \iff \|\hat u\|_2 = (2\pi)^{1/2} \iff \frac{1}{2\pi}\int_{-\infty}^\infty |\hat u(j\omega)|^2\,d\omega = 1. \tag{B.48}
\]
Hence, for all $u$ such that $\|u\|_2 = 1$,
\[
\|Hu\|_2^2 \le \max_{\omega\in\mathbb{R}} |\hat h(j\omega)|^2. \tag{B.49}
\]
Thus, the induced norm satisfies
\[
\|H\|_2 \le \max_{\omega\in\mathbb{R}} |\hat h(j\omega)|. \tag{B.50}
\]
Note that since $\omega \mapsto |\hat h(j\omega)|$ is continuous on $\mathbb{R}$ and tends to 0 as $|\omega| \to \infty$, the maximum exists. We are now going to show that $\|H\|_2$ is actually equal to the right-hand side of (B.50). Recall from Example B.9 that for $\lambda > 0$,
\[
\mathcal{F}\big(e^{-\lambda t^2}\big) = \left(\frac{\pi}{\lambda}\right)^{1/2} e^{-\omega^2/4\lambda}. \tag{B.51}
\]
Furthermore, using the modulation property of the Fourier Transform [110],
\[
\mathcal{F}\big[e^{-\lambda t^2}\cos\omega_0 t\big] = \frac{1}{2}\left(\frac{\pi}{\lambda}\right)^{1/2}\left[e^{-(\omega-\omega_0)^2/4\lambda} + e^{-(\omega+\omega_0)^2/4\lambda}\right]. \tag{B.52}
\]
As $\lambda \to 0$, this expression tends to $\pi[\delta(\omega-\omega_0) + \delta(\omega+\omega_0)]$, where $\delta(\cdot)$ denotes the Dirac delta (generalized) function. Pick $\omega_0$ to be the abscissa of the maximum of $\omega \mapsto |\hat h(j\omega)|$; for each $\lambda$ pick a normalization $n(\lambda)$ such that
\[
u_\lambda(t) = n(\lambda)\left[e^{-\lambda t^2}\cos\omega_0 t\right] \tag{B.53}
\]
has unit norm. Since $\omega \mapsto |\hat h(j\omega)|$ is continuous, we see that $\|h*u_\lambda\|_2 \to \max_\omega |\hat h(j\omega)|$ as $\lambda \to 0$. Consequently, we have shown that
\[
\|H\|_2 = \max_\omega |\hat h(j\omega)|. \tag{B.54}
\]
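The maximization in (B.54) is straightforward to carry out numerically from frequency response data. The following sketch (an illustration, not from the text) takes $h(t) = e^{-t}$, so $\hat h(j\omega) = 1/(j\omega + 1)$ and the maximum is 1 at $\omega = 0$; it also checks that a low-frequency input of unit $L_2$ norm nearly attains the bound:

```python
import numpy as np

# max_w |hhat(jw)| for hhat(jw) = 1/(jw + 1): the maximum is 1, at w = 0.
w = np.linspace(-100, 100, 200001)
h_hat = 1.0 / (1j * w + 1.0)
assert np.isclose(np.max(np.abs(h_hat)), 1.0)

# Near-extremal input: u(t) = sqrt(2a) e^{-at} has ||u||_2 = 1 and is
# concentrated near w = 0; ||h*u||_2 -> 1 as a -> 0 (here a = 0.05).
dt, T, a = 1e-2, 100.0, 0.05
t = np.arange(0, T, dt)
u = np.sqrt(2 * a) * np.exp(-a * t)
y = np.convolve(np.exp(-t), u)[: len(t)] * dt
ratio = np.sqrt(np.sum(y ** 2) * dt)          # approximately ||h*u||_2 / ||u||_2
assert ratio > 0.9
```

For this pair the exact value of $\|h*u\|_2$ is $1/\sqrt{1+a} \approx 0.976$, so the computed ratio is close to, but strictly below, the induced norm, as the theorem predicts.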
The norm defined in (B.54) above is called the H∞ norm of the operator $H$ or of the corresponding transfer function $\hat h(s)$.

THEOREM B.10 Let $E_1 = L_2(\mathbb{R}) = \{f : \mathbb{R} \to \mathbb{R} \mid \|f\|_2 < \infty\}$ and $E_2 = L_\infty(\mathbb{R})$. Let $H$ be a linear map defined on $L_2$ by $H : u \mapsto Hu$, where
\[
(Hu)(t) = \int_{-\infty}^\infty h(t-\tau)\,u(\tau)\,d\tau \quad \forall\, t \in \mathbb{R}. \tag{B.55}
\]
We assume that $h \in L_2$. Under this condition,

(a) $H : L_2 \to L_\infty$

(b) $\|H\|$, the induced norm of the linear map $H \in L(L_2, L_\infty)$, is given by
\[
\|H\| = \left[\frac{1}{2\pi}\int_{-\infty}^\infty |\hat h(j\omega)|^2\,d\omega\right]^{1/2}. \tag{B.56}
\]
PROOF Now
\[
|y(t)| = \left|\int_{-\infty}^\infty h(t-\tau)u(\tau)\,d\tau\right| \le \int_{-\infty}^\infty |h(t-\tau)|\,|u(\tau)|\,d\tau
\]
\[
\Rightarrow \|y\|_\infty \le \|h\|_2\,\|u\|_2 \quad (\text{using the Cauchy--Schwarz Inequality})
= \left[\frac{1}{2\pi}\int_{-\infty}^\infty |\hat h(j\omega)|^2\,d\omega\right]^{1/2} \|u\|_2 \quad (\text{using Parseval's Theorem}).
\]
Hence,
\[
\|H\| \le \left[\frac{1}{2\pi}\int_{-\infty}^\infty |\hat h(j\omega)|^2\,d\omega\right]^{1/2}. \tag{B.57}
\]
Thus, $H$ is a bounded linear map from $L_2$ to $L_\infty$. Also,
\[
\left[\frac{1}{2\pi}\int_{-\infty}^\infty |\hat h(j\omega)|^2\,d\omega\right]^{1/2} \triangleq \|H\|_{H_2}
\]
is an upper bound on the induced norm of the map $H : L_2 \to L_\infty$. Let us now show that it is indeed the induced norm. Consider a sequence of inputs $u_1, u_2, u_3, \cdots$, where for $t = 1, 2, 3, \cdots$ we define
\[
u_t : \tau \mapsto u_t(\tau) = h(t-\tau), \quad \tau \in \mathbb{R},\ t \in \mathbb{Z}_+.
\]
Consider now the value at time $t$ of the output due to $u_t(\cdot)$:
\[
(h*u_t)(t) = \int_{-\infty}^\infty |h(t-\tau)|^2\,d\tau \tag{B.58}
\]
\[
\Rightarrow |(h*u_t)(t)| = \left[\int_{-\infty}^\infty |h(t-\tau)|^2\,d\tau\right]^{1/2}\left[\int_{-\infty}^\infty |h(t-\tau)|^2\,d\tau\right]^{1/2} \tag{B.59}
\]
\[
= \|h\|_2 \cdot \|u_t\|_2 \tag{B.60}
\]
\[
= \left[\frac{1}{2\pi}\int_{-\infty}^\infty |\hat h(j\omega)|^2\,d\omega\right]^{1/2} \|u_t\|_2. \tag{B.61}
\]
Thus,
\[
\|h*u_t\|_\infty \ge \left[\frac{1}{2\pi}\int_{-\infty}^\infty |\hat h(j\omega)|^2\,d\omega\right]^{1/2} \|u_t\|_2,
\quad\text{so that}\quad
\|H\| \ge \left[\frac{1}{2\pi}\int_{-\infty}^\infty |\hat h(j\omega)|^2\,d\omega\right]^{1/2}. \tag{B.62}
\]
From (B.57) and (B.62), it follows that
\[
\|H\| = \left[\frac{1}{2\pi}\int_{-\infty}^\infty |\hat h(j\omega)|^2\,d\omega\right]^{1/2}. \tag{B.63}
\]
The norm defined in (B.63) above is called the H2 norm of the operator $H$ or of the corresponding transfer function $\hat h(s)$, and is usually denoted by $\|\cdot\|_{H_2}$ or by $\|\cdot\|_2$. In general, it is not an induced norm, although, by allowing non-causal inputs and for the single-input single-output case, we were able to demonstrate otherwise.

NORMS FOR LINEAR SYSTEMS (continued)

The three theorems above hold when the input and output signals and the impulse response are all scalar-valued functions of time. It is possible to extend the first two theorems to the case where the input and output are vector-valued functions of time. In this case, the impulse response will be given by an impulse response matrix. Accordingly, in Examples B.11, B.12, and B.13 to follow, let $u : \mathbb{R}_+ \to \mathbb{R}^n$ be locally integrable, and let $H$ be the matrix impulse response, so that $H : \mathbb{R}_+ \to \mathbb{R}^{n\times n}$. Assume throughout that the elements of $H$, namely the $h_{ij}(\cdot)$'s, are in $L_1$ for $i, j = 1, 2, \cdots, n$. Denote by $\bar H$ the linear operator defined by
\[
\bar H : u \mapsto \bar Hu \triangleq H * u, \quad\text{where}\quad (H*u)(t) = \int_0^\infty H(t-\tau)\,u(\tau)\,d\tau. \tag{B.64}
\]
We will establish the following induced norms.
Example B.11 For $u \in L_\infty^n(\mathbb{R}_+)$ define $\|u\|_\infty = \max_i \sup_{t\ge0} |u_i(t)|$. Show that the induced norm of $\bar H$ is
\[
\|\bar H\|_\infty = \max_i \int_0^\infty \sum_{j=1}^n |h_{ij}(\tau)|\,d\tau \quad (\text{maximum row sum}). \tag{B.65}
\]

Solution: Now
\[
\bar Hu = \int_0^\infty H(t-\tau)\,u(\tau)\,d\tau
\]
\[
\Rightarrow \|\bar Hu\|_\infty
= \max_i \sup_{t\ge0} \left|\int_0^\infty \sum_{j=1}^n h_{ij}(t-\tau)\,u_j(\tau)\,d\tau\right|
\le \max_i \sup_{t\ge0} \int_0^\infty \sum_{j=1}^n |h_{ij}(t-\tau)|\,|u_j(\tau)|\,d\tau
\le \max_i \sup_{t\ge0} \int_0^\infty \sum_{j=1}^n |h_{ij}(t-\tau)|\,d\tau \cdot \|u\|_\infty.
\]
Now substitute $\tau' = t - \tau$ in the above integral to obtain
\[
\int_0^\infty \sum_{j=1}^n |h_{ij}(t-\tau)|\,d\tau = \int_{-\infty}^t \sum_{j=1}^n |h_{ij}(\tau')|\,d\tau'.
\]
Thus,
\[
\sup_{t\ge0} \int_0^\infty \sum_{j=1}^n |h_{ij}(t-\tau)|\,d\tau
= \sup_{t\ge0} \int_{-\infty}^t \sum_{j=1}^n |h_{ij}(\tau')|\,d\tau'
= \int_{-\infty}^\infty \sum_{j=1}^n |h_{ij}(\tau')|\,d\tau'
= \int_0^\infty \sum_{j=1}^n |h_{ij}(\tau')|\,d\tau'
\]
(since $H$ is defined only on $\mathbb{R}_+$). Hence,
\[
\|\bar Hu\|_\infty \le \max_i \int_0^\infty \sum_{j=1}^n |h_{ij}(\tau)|\,d\tau \cdot \|u\|_\infty
\ \Rightarrow\
\|\bar H\|_\infty \le \max_i \int_0^\infty \sum_{j=1}^n |h_{ij}(\tau)|\,d\tau. \tag{B.66}
\]
Now, suppose that the right-hand side is maximized for $i = k$. Consider the sequence of inputs $u_1, u_2, \cdots$, with $\|u_t\|_\infty = 1$, where for $t = 1, 2, 3, \cdots$ we define
\[
u_t : \tau \mapsto u_t(\tau) =
\begin{bmatrix} \mathrm{sgn}\{h_{k1}(t-\tau)\} \\ \vdots \\ \mathrm{sgn}\{h_{kn}(t-\tau)\} \end{bmatrix}.
\]
Consider now the value at time $t$ of the $k$th component of the output due to $u_t(\cdot)$. Then
\[
(H*u_t)(t)\big|_{k\text{th component}} = \int_0^\infty \sum_{j=1}^n |h_{kj}(t-\tau)|\,d\tau.
\]
But
\[
(H*u_t)(t)\big|_{k\text{th}} \le \sup_{t\ge0} \big|(H*u_t)(t)\big|_{k\text{th}}
\le \max_i \sup_{t\ge0} \big|(H*u_t)(t)\big|_{i\text{th}}
= \|H*u_t\|_\infty = \|\bar Hu_t\|_\infty
\le \|\bar H\|_\infty\,\|u_t\|_\infty = \|\bar H\|_\infty
\]
(since $\|u_t\|_\infty = 1$). Thus,
\[
\|\bar H\|_\infty \ge \int_0^\infty \sum_{j=1}^n |h_{kj}(t-\tau)|\,d\tau
= \int_{-\infty}^t \sum_{j=1}^n |h_{kj}(\tau')|\,d\tau'
= \int_0^t \sum_{j=1}^n |h_{kj}(\tau')|\,d\tau'
\]
(since $H$ is defined only on $\mathbb{R}_+$). Since the above inequality holds for all $t \ge 0$, we have
\[
\|\bar H\|_\infty \ge \int_0^\infty \sum_{j=1}^n |h_{kj}(\tau)|\,d\tau
= \max_i \int_0^\infty \sum_{j=1}^n |h_{ij}(\tau)|\,d\tau.
\]
Combining this lower bound with (B.66), we obtain
\[
\|\bar H\|_\infty = \max_i \int_0^\infty \sum_{j=1}^n |h_{ij}(\tau)|\,d\tau. \tag{B.67}
\]

Example B.12 For $u \in L_2^n(\mathbb{R}_+)$ define $\|u\|_2$ by
\[
\|u\|_2^2 = \int_0^\infty \sum_{i=1}^n |u_i(t)|^2\,dt.
\]
Show that
\[
\|\bar H\|_2 = \max_\omega \left\{\max_i \lambda_i\left[\hat H^*(j\omega)\hat H(j\omega)\right]\right\}^{1/2} \tag{B.68}
\]
where $\lambda_i(M)$ denotes the $i$th eigenvalue of the Hermitian matrix $M$.

Solution: Now
\[
\|\bar Hu\|_2^2 = \int_0^\infty \sum_{i=1}^n \left|\sum_{j=1}^n \int_0^\infty h_{ij}(t-\tau)\,u_j(\tau)\,d\tau\right|^2 dt
= \int_0^\infty \sum_{i=1}^n \left|\sum_{j=1}^n (h_{ij}*u_j)(t)\right|^2 dt \tag{B.69}
\]
\[
= \frac{1}{2\pi}\int_{-\infty}^\infty \sum_{i=1}^n \left|\sum_{k=1}^n \hat h_{ik}(j\omega)\,\hat u_k(j\omega)\right|^2 d\omega
\]
(using Parseval's Theorem and the Convolution Theorem for Fourier Transforms)
\[
= \int_{-\infty}^\infty \left\|\hat H(j\omega)\,\hat U(j\omega)\right\|_2^2\,\frac{d\omega}{2\pi}
\le \int_{-\infty}^\infty \lambda_{\max}\left[\hat H^*(j\omega)\hat H(j\omega)\right] \|\hat U(j\omega)\|_2^2\,\frac{d\omega}{2\pi}
\]
(using the fact that the largest singular value of a matrix is the induced Euclidean norm)
\[
\le \max_\omega \max_i \lambda_i\left[\hat H^*(j\omega)\hat H(j\omega)\right] \cdot \|u\|_2^2.
\]
Hence,
\[
\|\bar H\|_2 \le \max_\omega \left\{\max_i \lambda_i\left[\hat H^*(j\omega)\hat H(j\omega)\right]\right\}^{1/2}. \tag{B.70}
\]
To get equality in the above expression, suppose that $\max_\omega \max_i \lambda_i\left[\hat H^*(j\omega)\hat H(j\omega)\right]$ occurs at $\omega = \pm\omega_0$. For each $\lambda > 0$, choose
\[
u_\lambda(t) = n(\lambda)\,e^{-\lambda t^2}\cos(\omega_0 t)\,\bar e
\]
where $\bar e$ is the (unit) eigenvector of $\hat H^*(j\omega_0)\hat H(j\omega_0)$ corresponding to the largest eigenvalue and $n(\lambda)$ is an appropriate normalization to make
\[
\int_0^\infty \sum_{i=1}^n |u_{\lambda i}(t)|^2\,dt = 1. \tag{B.71}
\]
Then, as $\lambda \to 0$, $\hat U_\lambda(j\omega) \to \pi\left[\delta(\omega-\omega_0) + \delta(\omega+\omega_0)\right]\bar e$. Thus
\[
\|\bar Hu_\lambda\|_2^2 = \int_{-\infty}^\infty \|\hat H(j\omega)\,\hat U_\lambda(j\omega)\|_2^2\,\frac{d\omega}{2\pi}
= \int_{-\infty}^\infty \hat U_\lambda^*(j\omega)\,\hat H^*(j\omega)\hat H(j\omega)\,\hat U_\lambda(j\omega)\,\frac{d\omega}{2\pi}.
\]
As $\lambda \to 0$, the right-hand side tends to $\lambda_{\max}\left[\hat H^*(j\omega_0)\hat H(j\omega_0)\right]$. Thus,
\[
\|\bar Hu_\lambda\|_2^2 \to \lambda_{\max}\left[\hat H^*(j\omega_0)\hat H(j\omega_0)\right]
\]
as $\lambda \to 0$. Since $\|u_\lambda\|_2 = 1$, it follows that
\[
\|\bar H\|_2 = \max_\omega \left\{\max_i \lambda_i\left[\hat H^*(j\omega)\hat H(j\omega)\right]\right\}^{1/2}. \tag{B.72}
\]
Example B.13 For $u \in L_1^n(\mathbb{R}_+)$ define $\|u\|_1$ by
\[
\|u\|_1 = \int_0^\infty \sum_{i=1}^n |u_i(t)|\,dt.
\]
Show that
\[
\|\bar H\|_1 = \max_j \int_0^\infty \sum_{i=1}^n |h_{ij}(t)|\,dt \quad (\text{maximum column sum}).
\]

Solution: Now
\[
\|\bar Hu\|_1 = \int_0^\infty \sum_{i=1}^n \left|\sum_{j=1}^n \int_0^\infty h_{ij}(t-\tau)\,u_j(\tau)\,d\tau\right| dt
\le \int_0^\infty \sum_{i=1}^n \sum_{j=1}^n \int_0^\infty |h_{ij}(t-\tau)|\,|u_j(\tau)|\,d\tau\,dt
= \sum_{j=1}^n \sum_{i=1}^n \int_0^\infty \left[\int_0^\infty |h_{ij}(t-\tau)|\,dt\right] |u_j(\tau)|\,d\tau.
\]
Substituting $t - \tau = \tau'$ in the inner integral, we have $dt = d\tau'$; when $t = 0$ then $\tau' = -\tau$, and when $t = \infty$ then $\tau' = \infty$. So
\[
\|\bar Hu\|_1 \le \sum_{j=1}^n \sum_{i=1}^n \int_0^\infty \left[\int_{-\tau}^\infty |h_{ij}(\tau')|\,d\tau'\right] |u_j(\tau)|\,d\tau \tag{B.73}
\]
\[
= \sum_{j=1}^n \left[\int_0^\infty \sum_{i=1}^n |h_{ij}(\tau')|\,d\tau'\right] \int_0^\infty |u_j(\tau)|\,d\tau
\quad (\text{since } H \text{ is defined only on } \mathbb{R}_+) \tag{B.74}
\]
\[
\le \max_j \int_0^\infty \sum_{i=1}^n |h_{ij}(\tau')|\,d\tau' \cdot \|u\|_1 \tag{B.75}
\]
\[
\Rightarrow \|\bar H\|_1 \le \max_j \int_0^\infty \sum_{i=1}^n |h_{ij}(t)|\,dt. \tag{B.76}
\]
Suppose that the maximum column sum on the right-hand side of (B.76) occurs for $j = k$. Choose
\[
u_\lambda(t) = \frac{1}{\lambda}\,e^{-t/\lambda}\,v_k
\]
where $v_k$ is the $k$th basis vector in $\mathbb{R}^n$. Then
\[
\|u_\lambda\|_1 = \int_0^\infty \frac{1}{\lambda}\,e^{-t/\lambda}\,dt = 1 \quad \forall\, \lambda > 0.
\]
Also $u_\lambda(t) \to \delta(t)\,v_k$ as $\lambda \to 0$. Now
\[
\|\bar Hu_\lambda\|_1 = \int_0^\infty \sum_{i=1}^n \left|\sum_{j=1}^n \int_0^\infty h_{ij}(t-\tau)\,u_{\lambda j}(\tau)\,d\tau\right| dt
= \int_0^\infty \sum_{i=1}^n \left|\int_0^\infty h_{ik}(t-\tau)\,\frac{1}{\lambda}e^{-\tau/\lambda}\,d\tau\right| dt
\to \int_0^\infty \sum_{i=1}^n |h_{ik}(t)|\,dt \quad \text{as } \lambda \to 0.
\]
Thus, as $\lambda \to 0$, $\|\bar Hu_\lambda\|_1 \to \max_j \int_0^\infty \sum_{i=1}^n |h_{ij}(t)|\,dt$. But
\[
\|\bar Hu_\lambda\|_1 \le \|\bar H\|_1 \cdot \|u_\lambda\|_1 = \|\bar H\|_1.
\]
Hence,
\[
\|\bar H\|_1 \ge \max_j \int_0^\infty \sum_{i=1}^n |h_{ij}(t)|\,dt. \tag{B.77}
\]
Combining (B.76) and (B.77),
\[
\|\bar H\|_1 = \max_j \int_0^\infty \sum_{i=1}^n |h_{ij}(t)|\,dt. \tag{B.78}
\]
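The maximum column sum (B.78), and the fact that a near-impulse input applied to the worst column nearly attains it, can be checked numerically. A sketch (an illustration, not from the text), reusing the same kind of 2×2 example with column sums $1 + 0.5 = 1.5$ and $0 + 2 = 2$:

```python
import numpy as np

dt = 2e-3
t = np.arange(0, 20, dt)
H = np.zeros((len(t), 2, 2))
H[:, 0, 0] = np.exp(-t)
H[:, 1, 0] = np.exp(-2 * t)
H[:, 1, 1] = 2 * np.exp(-t)

# (B.78): L_1-induced norm = maximum column sum of int_0^inf |h_ij| dt.
col_sums = np.sum(np.abs(H), axis=(0, 1)) * dt   # columns: 1.5 and 2.0
norm_1 = np.max(col_sums)
assert np.isclose(norm_1, 2.0, atol=1e-2)

# Near-impulse input (1/lam) e^{-t/lam} v_k applied to the worst column (k = 2).
lam = 0.02
u = np.exp(-t / lam) / lam                       # approximates delta(t)
y0 = np.convolve(H[:, 0, 1], u)[: len(t)] * dt   # output component 1
y1 = np.convolve(H[:, 1, 1], u)[: len(t)] * dt   # output component 2
attained = (np.sum(np.abs(y0)) + np.sum(np.abs(y1))) * dt
assert attained > 0.95 * norm_1
```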
For $\bar H$ defined by (B.64), one can define the H2 norm of $\hat H(s)$ by
\[
\|H\|_2 \triangleq \left[\frac{1}{2\pi}\int_{-\infty}^\infty \operatorname{trace}\left[\hat H^*(j\omega)\hat H(j\omega)\right] d\omega\right]^{1/2}.
\]
This matrix generalization of the norm in (B.63) is not an induced norm. The physical significance of the H2 norm is that it arises by combining the outputs of several experiments, as follows. Let $y = \bar Hu$ and let $e_i$ denote the $i$th standard basis vector in $\mathbb{R}^n$. Apply the impulse input $\delta(t)e_i$, $i = 1, 2, \cdots, n$ ($\delta$ is the unit impulse) in $n$ separate experiments to the system $\bar H$, and denote the output by $y_i(t)$. Then
\[
(\|H\|_2)^2 = \sum_{i=1}^n (\|y_i\|_2)^2 \quad (L_2\text{-norms}).
\]
The calculation of induced norms of convolution maps in this section was based on Theorems B.4 and B.5, both of which required that the function $w$ (impulse response) be absolutely integrable on $[0, \infty)$, that is, $w \in L_1(\mathbb{R}_+)$. It is possible to extend Theorems B.4 and B.5 to the case where $w \notin L_1(\mathbb{R}_+)$ but is the sum of an $L_1(\mathbb{R}_+)$ function and a sequence of impulses at discrete instants of time. To formalize this notion, let $\mathcal{A}$ have elements of the form
\[
g(t) = \begin{cases} g_a(t) + \sum_{i=0}^\infty g_i\,\delta(t-t_i) & t \ge 0 \\ 0 & t < 0 \end{cases} \tag{B.79}
\]
where $g_a \in L_1(\mathbb{R}_+)$; $g_i \in \mathbb{R}$ for all $i$ with $\sum_{i=0}^\infty |g_i| < \infty$; and $t_0 = 0$, $t_i > 0$ for all $i \ge 1$. On $\mathcal{A}$, addition is defined pointwise, and scalar multiplication by real numbers is defined in the usual manner. The product of two elements $f, g \in \mathcal{A}$ is defined to be their convolution. More precisely, let $g$ be given by (B.79) and $f$ be given by
\[
f(t) = f_a(t) + \sum_{k=0}^\infty f_k\,\delta(t-t_{k'}), \quad t \ge 0. \tag{B.80}
\]
Then the convolution $f * g$ is the function
\[
(f*g)(t) = (f_a*g_a)(t) + \sum_{t_i \le t} g_i\,f_a(t-t_i)
+ \sum_{t_{k'} \le t} f_{k'}\,g_a(t-t_{k'})
+ \sum_{t_i + t_{k'} \le t} g_i\,f_{k'}\,\delta\left[t - (t_i + t_{k'})\right]. \tag{B.81}
\]
With the convolution defined as above, it can be shown that (i) $f*g = g*f$ and (ii) $f, g \in \mathcal{A} \Rightarrow f*g \in \mathcal{A}$. These two properties imply that $\mathcal{A}$ is a commutative algebra. Let us define a norm on $\mathcal{A}$ by
\[
\|g\|_{\mathcal{A}} = \|g_a\|_1 + \sum_{i=0}^\infty |g_i|. \tag{B.82}
\]
Then, it can be shown that
\[
\|f*g\|_{\mathcal{A}} \le \|f\|_{\mathcal{A}}\,\|g\|_{\mathcal{A}},
\]
which is the analogue of Theorem B.4 when $f, g \notin L_1$. Also note that $\|\delta\|_{\mathcal{A}} = 1$. We next state the analogue of Theorem B.5 when $w \in \mathcal{A}$.

THEOREM B.11 Let $f \in L_p(\mathbb{R}_+)$, where $1 \le p \le \infty$, and let $w \in \mathcal{A}$. Then $f*w \in L_p$ and
\[
\|f*w\|_p \le \|f\|_p\,\|w\|_{\mathcal{A}}.
\]

The above bound is sharp for $p = \infty$. For $p = 2$, we can obtain a sharper bound, given by the following theorem.

THEOREM B.12 Let $f \in L_2(\mathbb{R}_+)$ and $w \in \mathcal{A}$. Then $f*w \in L_2$ and
\[
\|f*w\|_2 \le \sup_\omega |\hat w(j\omega)| \cdot \|f\|_2.
\]
Clearly, Theorem B.12 represents an extension of Theorem B.9 to the case $w \notin L_1$ but $w \in \mathcal{A}$. Notice that in this case one has to use the supremum of $|\hat w(j\omega)|$: since $w$ does not belong to $L_1$, $|\hat w(j\omega)|$ may not have the earlier nice properties that guaranteed the existence of a maximum. When the impulse response is in $\mathcal{A}$, we can define the input-output relationship by
\[
y = Hu = h*u \tag{B.83}
\]
where the convolution is given by (B.81). Furthermore, we have
\[
\|y\|_\infty \le \|h\|_{\mathcal{A}}\,\|u\|_\infty. \tag{B.84}
\]
Here $\|h\|_{\mathcal{A}}$ is the induced operator norm. However, it is difficult to calculate this quantity, since doing so involves taking the inverse Laplace transform and integrating; moreover, it cannot be readily computed using frequency response data. On the other hand, for $L_2$ norms on the input and output signals, we have the following relationship:
\[
\|y\|_2 \le \sup_\omega |\hat h(j\omega)|\,\|u\|_2. \tag{B.85}
\]
Moreover, the quantity $\sup_\omega |\hat h(j\omega)|$ can be easily computed using frequency response data. This creates a situation where, in practice, we may be interested in establishing properties of the $L_\infty$ norms of certain signals but wish to infer them using $L_2$ norm properties. Such a situation arises in Theorem 14.2 of this book.
B.5 Notes and References
The material in this appendix, which is quite standard in the controls literature, is adapted from Desoer and Vidyasagar [66]. In order to make the appendix suitable for self-study, many of the exercises from that reference have been included here as worked-out examples.
Part IV
EPILOGUE
ROBUSTNESS AND FRAGILITY
In previous chapters we have described several mathematically elegant approaches to the design of robust and optimal controllers developed in the control literature. Control theory is meant to be applied to the design of real world systems and the effectiveness of various design methods must ultimately be assessed based on their performance on real systems. In this Epilogue, we raise a cautionary note by pointing out some potential pitfalls of the previously described methods. We show by examples that optimum and robust controllers, designed by using the H2 , H∞ , l1 and µ formulations, can produce extremely fragile controllers, in the sense that vanishingly small perturbations of the coefficients of the designed controller can destabilize the closed-loop control system. The examples show that this fragility also usually manifests itself as extremely poor gain and phase margins of the closed-loop system. The calculations given here should alert those contemplating the design of real systems using these methods and also draw attention to the larger issue of controller sensitivity which may be important in other nonoptimal design techniques as well.
Feedback, Robustness, and Fragility

We begin by describing an example motivated by the discovery of the feedback amplifier by H.S. Black in 1926. Suppose an amplifier with a nominal gain $A_0$ of 100 is built with components which are not reliable. Assuming that the components can vary by 50%, the amplifier gain $A$ varies between 50 and 150:
\[
50 \le A \le 150
\]

Figure 1 Open-loop unreliable system.
To obtain a reliable device with the same unreliable components, we construct the feedback system of Figure 2, where $\bar A$ is built with the same unreliable components and has the nominal value $\bar A_0 = 10{,}000$, and $\beta = \frac{1}{100}$.

Figure 2 Closed-loop reliable system.

The closed-loop gain is given by
\[
A_c = \frac{\bar A}{1 + \bar A\beta} \tag{1}
\]
and has the nominal value
\[
\bar A_c^0 = \frac{\bar A_0}{1 + \bar A_0\beta} = \frac{10{,}000}{1 + 100} = 99.0099. \tag{2}
\]
With 50% variation in $\bar A_0$, the closed-loop gain varies between
\[
\frac{5{,}000}{1 + 50} = 98.04 \ \le\ A_c \ \le\ \frac{15{,}000}{1 + 150} = 99.34 \tag{3}
\]
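The calculation in (1) through (3) can be reproduced in a few lines. A minimal sketch (an illustration of the computation above, not from the text):

```python
# Closed-loop gain A_c = A_bar / (1 + A_bar * beta), as in equation (1).
def closed_loop_gain(A_bar, beta=1.0 / 100.0):
    return A_bar / (1.0 + A_bar * beta)

# Nominal forward gain 10,000 gives the closed-loop gain 99.0099 of equation (2).
assert abs(closed_loop_gain(10_000) - 99.0099) < 1e-3

# A 50% component variation (5,000 to 15,000) barely moves the closed-loop
# gain, as in equation (3).
lo, hi = closed_loop_gain(5_000), closed_loop_gain(15_000)
assert abs(lo - 98.04) < 0.01 and abs(hi - 99.34) < 0.01
```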
In other words, the open-loop gain variation interval [5,000, 15,000] is mapped to a closed-loop gain variation interval of only [98.04, 99.34]. This illustrates the principle that feedback systems can drastically reduce the effect of large variations, provided the feedback device has an accurate gain. However, if this latter gain varies by 50%, the closed-loop gain also varies by 50%. To summarize, the feedback device must be built with reliable components if it is to overcome the effects of unreliable components elsewhere in the system.

In general, this principle carries over to the case of a plant-controller feedback loop with dynamic elements. One should expect that robustness against plant perturbations can only be obtained at the cost of tighter accuracy requirements on controllers.

An implicit assumption inherent to most controller design methodologies is that the controller that is designed will be implemented exactly. Relatively speaking, this assumption is valid, in the sense that plant uncertainty is clearly the most significant type of uncertainty in a control system, while controllers are generally implemented with high-precision hardware. On the other hand, it is necessary that any controller that is part of a closed-loop system be able to tolerate some uncertainty in its coefficients. There are at least two reasons for this. First, controller implementation is subject to the imprecision inherent in analog-to-digital and digital-to-analog conversion, finite word length, finite-resolution measuring instruments, and roundoff errors in numerical computations. Thus, it is required that there exist a nonzero (although possibly small) margin of tolerance around the controller designed. Second, every paper design requires readjustment, because no scalar index can capture all the performance requirements of a real-world control system. This means that any useful design procedure should generate a controller which also has sufficient room for readjustment of its coefficients. This translates to the requirement that an adequate stability and performance margin be available around the transfer function coefficients, or other characterizing implementation parameters, of the designed nominal controller.

With the above background as motivation, we study in this chapter the parametric stability margin of several controller designs from the published literature, obtained by using the H2, H∞, l1, and µ approaches. In each of the examples treated, we obtain the somewhat surprising conclusion that the parametric stability margin of the controller designed is vanishingly small.
This means that extremely small perturbations of the coefficients of the controller designed will succeed in destabilizing the loop; in other words the controller itself is fragile and so is the control system. We also compute the gain and phase margins of the systems designed and observe that these too are generally very poor. At the end, we briefly discuss some of the issues raised by the calculations presented in the chapter. It is obvious that it would be unwise to place a controller that is fragile with respect to perturbations of its coefficients in an actual control system, without further precautions and analysis.
Examples

In this section we analyze several examples of optimal designs taken from the control literature from the standpoint of controller sensitivity. In other words, we determine how much perturbation the coefficients of the controller can undergo without destroying closed-loop stability. Each example is a single-input single-output system, and the controller is designed to optimize some closed-loop performance function. In Examples 1, 3, and 6, the procedure consists of fixing the nominal plant model, parametrizing all proper feedback controllers that stabilize the nominal model through the YJBK parameter $Q(s)$, which is only required to be stable and proper, and optimally selecting $Q(s)$. In cases where the optimal $Q(s)$ turns out to be improper, it is divided by a factor $(s\tau + 1)^k$ to make it proper, and $\tau$ is chosen to be suitably small and positive. For our analysis we compute, in each case, the parametric stability margin around the coefficients of the designed optimal controller, as well as the gain and phase margins of the closed-loop system.

Example 1 (H∞-Based Optimum Gain Margin Controller [72]) This example uses the YJBK parametrization and the machinery of the H∞ Model Matching Problem to optimize the upper gain margin. The plant to be controlled is
\[
P(s) = \frac{s-1}{s^2 - s - 2}
\]
and the controller, designed to give an upper gain margin of 3.5 (the closed loop is stable for the gain interval [1, 3.5]), is obtained by optimizing the H∞ norm of a complementary sensitivity function. The controller found is
\[
C(s) = \frac{q_6^0 s^6 + q_5^0 s^5 + q_4^0 s^4 + q_3^0 s^3 + q_2^0 s^2 + q_1^0 s + q_0^0}{p_6^0 s^6 + p_5^0 s^5 + p_4^0 s^4 + p_3^0 s^3 + p_2^0 s^2 + p_1^0 s + p_0^0}
\]
where

q₆⁰ = 379        p₆⁰ = 3
q₅⁰ = 39383      p₅⁰ = −328
q₄⁰ = 192306     p₄⁰ = −38048
q₃⁰ = 382993     p₃⁰ = −179760
q₂⁰ = 383284     p₂⁰ = −314330
q₁⁰ = 192175     p₁⁰ = −239911
q₀⁰ = 38582      p₀⁰ = −67626.
The poles of this nominal controller are:
\[
174.70,\quad -65.99,\quad -1.86,\quad -1.04,\quad -0.98 \pm j0.03
\]
and the poles of the closed-loop system are:
\[
-0.4666 \pm j14.2299,\quad -5.5334 \pm j11.3290,\quad -1.0000 \pm j0.0002,\quad -1.0002,\quad -0.9998,
\]
and this verifies that the controller is indeed stabilizing. The Nyquist plot of $P(s)C(s)$ is shown in Figure 3 and verifies that the desired upper gain margin is achieved. On the other hand, we see from Figure 3
that the lower gain margin and phase margin are:

Gain Margin = [1, 0.9992],  Phase Margin = [0, 0.1681] degrees.

Figure 3 Nyquist plot of P(s)C(s).
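The stabilizing property of the nominal design, and the lightly damped closed-loop pair near $-0.47 \pm j14.23$ that underlies the tiny margins, can be verified directly from the coefficients above. A minimal sketch (an illustration, not from the text), forming the characteristic polynomial $d_P d_C + n_P n_C$ of the unity-feedback loop:

```python
import numpy as np

num_P = np.array([1.0, -1.0])                     # s - 1
den_P = np.array([1.0, -1.0, -2.0])               # s^2 - s - 2
num_C = np.array([379, 39383, 192306, 382993, 383284, 192175, 38582], float)
den_C = np.array([3, -328, -38048, -179760, -314330, -239911, -67626], float)

char_poly = np.polyadd(np.polymul(den_P, den_C), np.polymul(num_P, num_C))
poles = np.roots(char_poly)
assert np.all(poles.real < 0)                     # the nominal loop is stable
assert np.any(np.abs(poles.imag) > 14)            # the lightly damped pair
```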
This means, roughly, that a reduction in gain of one part in one thousand will destabilize the closed-loop system! Likewise, a vanishingly small phase perturbation is destabilizing. To continue with our analysis, let us consider the transfer function coefficients of the controller to be a parameter vector $p$ with its nominal value being
\[
p^0 = \left[\, q_6^0 \ \cdots \ q_0^0 \quad p_6^0 \ \cdots \ p_0^0 \,\right]
\]
and let $\Delta p$ be the vector representing perturbations in $p$. We compute the $l_2$ parametric stability margin around the nominal point. This comes out to be $\rho = 0.15813903109631$. The normalized ratio of change in controller coefficients required to destabilize the closed loop is
\[
\frac{\rho}{\|p^0\|_2} = 2.103407115900516 \times 10^{-7}.
\]
This shows that a change in the controller coefficients of less than 1 part in a million destabilizes the closed loop. This controller is anything but robust; in fact, we are certainly justified in labeling it a fragile controller. In order to verify this rather surprising result, we construct the destabilizing controller whose parameters are obtained by setting $p = p^0 + \Delta p$ and are:

q6 = 379.000285811        p6 = 3.158134748
q5 = 39382.999231141      p5 = −327.999718909
q4 = 192305.999998597     p4 = −38048.000776386
q3 = 382993.000003775     p3 = −179760.000001380
q2 = 383284.000000007     p2 = −314329.999996188
q1 = 192174.999999982     p1 = −239910.999999993
q0 = 38582.000000000      p0 = −67626.000000018
The closed-loop poles of the system with this controller are:
\[
0.000 \pm j14.2717,\quad -5.5745 \pm j10.9187,\quad -1.0044,\quad -1.0067 \pm j0.0158,\quad -0.9820,
\]
which shows that the roots cross over to the right-half plane at $\omega = 14.27$; the perturbed controller is indeed destabilizing.

Example 2 (An Arbitrary Controller) For the sake of comparison, we continue with the previous example and try a first-order controller with the same plant
\[
P(s) = \frac{s-1}{s^2 - s - 2}.
\]
We design a pole placement controller placing the closed-loop poles on a circle of radius $\sqrt{2}$, spaced equidistantly in the left-half plane. The transfer function of this controller is
\[
C(s) = \frac{q_1^0 s + q_0^0}{s + p_0^0}
\]
where $q_1^0 = 11.44974739$, $q_0^0 = 11.24264066$, $p_0^0 = -7.03553383$. Introduce the parameter vector $p$ corresponding to the controller coefficients, with nominal value $p^0 = [\,q_1^0 \ q_0^0 \ p_0^0\,]$. We compute the $l_2$ parametric stability margin for this controller to be $\rho = 1.26491100827916$.
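The pole placement claim is easy to verify: the closed-loop characteristic polynomial should factor with roots at $-\sqrt{2}$ and $-1 \pm j$, all on the circle of radius $\sqrt{2}$. A minimal sketch (an illustration, not from the text):

```python
import numpy as np

num_P = np.array([1.0, -1.0])
den_P = np.array([1.0, -1.0, -2.0])
num_C = np.array([11.44974739, 11.24264066])
den_C = np.array([1.0, -7.03553383])             # s + p_0^0 with p_0^0 negative

char_poly = np.polyadd(np.polymul(den_P, den_C), np.polymul(num_P, num_C))
poles = np.roots(char_poly)
assert np.allclose(np.abs(poles), np.sqrt(2), atol=1e-6)   # on the circle
assert np.all(poles.real < 0)                               # and stable
```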
The normalized ratio of change in controller coefficients required to destabilize the loop is
\[
\frac{\rho}{\|p^0\|_2} = 0.07219317556675087,
\]
which is to be compared to the previous value of about $2 \times 10^{-7}$. This controller can tolerate a change in coefficient values of 7.2%, compared to the value of about $10^{-4}$% for the optimum controller. The Nyquist plot of the system with this controller is shown in Figure 4 and gives the lower gain and phase margins:

Gain Margin = [1, 0.79403974451346],  Phase Margin = [0, −9.88729274575800] degrees.
Figure 4 Nyquist plot of P (s)C(s).
The system can tolerate a gain reduction of about 21%. This is an improvement over the previous controller by a factor of about 20,000! The phase margin is improved by a factor of about 60. We have already shown the drastic improvement in the parametric stability margin. Therefore, this nonoptimal controller is far less fragile than the optimal controller, on all counts.

Example 3 (H∞ Robust Controller [72]) The following example designs an optimal H∞ robust controller that minimizes $\|W_2(s)T(s)\|_\infty$, where $T(s)$ is the complementary sensitivity function and the weight $W_2(s)$ is chosen as the high-pass function
\[
W_2(s) = \frac{s + 0.1}{s + 1}.
\]
The plant transfer function is
\[
P(s) = \frac{s-1}{s^2 + 0.5s - 0.5}
\]
and the optimally robust controller found is
\[
C(s) = \frac{-124.5s^3 - 364.95s^2 - 360.45s - 120}{s^3 + 227.1s^2 + 440.7s + 220}.
\]
The poles of the closed-loop system are:

−99.99999999999994, −1.00000484275516, −0.99999757862242 ± j 0.00000419348343, −0.10000000000000,

and therefore the controller does stabilize the nominal plant. For the purposes of our analysis we first took the controller coefficients as a parameter vector p and found the parametric stability margin (the l₂ norm of the smallest destabilizing perturbation ∆p) to be ρ = 8.94427190999916. The normalized ratio of change in controller coefficients required for destabilization is

ρ/‖p⁰‖₂ = 0.01167214151733,

which shows that the controller, which by design is maximally robust with respect to H∞ perturbations, is quite fragile with respect to controller coefficient perturbations. To continue our analysis the Nyquist plot of the system with this controller is drawn in Figure 5. From this we obtain the lower gain and phase margins:

Gain Margin = [1, 0.91666666666667]
Phase Margin = [0, 12.91170327761722] degrees

which are quite poor and would probably be unacceptable in a real system.
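Nominal stability of this H∞ design can be confirmed by rooting the closed-loop characteristic polynomial (a sketch using numpy; the coefficients are those printed above):

```python
import numpy as np

# Plant (s - 1)/(s^2 + 0.5s - 0.5) and the optimally robust controller.
num_P, den_P = [1, -1], [1, 0.5, -0.5]
num_C = [-124.5, -364.95, -360.45, -120]
den_C = [1, 227.1, 440.7, 220]

# Closed-loop characteristic polynomial: den_P * den_C + num_P * num_C.
char_poly = np.polyadd(np.polymul(den_P, den_C), np.polymul(num_P, num_C))
poles = np.roots(char_poly)
print(poles)  # clustered near -100, a near-triple root at -1, and -0.1
```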
Figure 5 Nyquist plot of P(s)C(s).
Example 4 (µ Based Design [83]) This example examines a robust controller for an electromagnetic suspension system designed by using the µ-synthesis technique. The plant transfer function is

P(s) = −36.27/(s³ + 45.69s² − 4480.9636s − 204735.226884)

and the controller designed to tolerate prescribed structured plant perturbations is given as

C(s) = (q₆⁰s⁶ + q₅⁰s⁵ + q₄⁰s⁴ + q₃⁰s³ + q₂⁰s² + q₁⁰s + q₀⁰)/(s⁷ + p₆⁰s⁶ + p₅⁰s⁵ + p₄⁰s⁴ + p₃⁰s³ + p₂⁰s² + p₁⁰s + p₀⁰)
where

q₆⁰ = −5.220000000000000 × 10⁸
q₅⁰ = −1.190629800000000 × 10¹¹
q₄⁰ = −1.089211902480000 × 10¹³
q₃⁰ = −5.104622252074320 × 10¹⁴
q₂⁰ = −1.285270261841830 × 10¹⁶
q₁⁰ = −1.629532689765926 × 10¹⁷
q₀⁰ = −7.937217972339767 × 10¹⁷

p₆⁰ = 1.468170000000000 × 10³
p₅⁰ = 8.153914724000001 × 10⁵
p₄⁰ = 2.268680248018680 × 10⁸
p₃⁰ = 1.818763428483511 × 10¹⁰
p₂⁰ = 5.698409038920188 × 10¹¹
p₁⁰ = 6.284542925855980 × 10¹²
p₀⁰ = 6.227740485023126 × 10¹¹.

The poles of the closed-loop system are:

−557.780980781351, −424.062740635737, −127.987138247396 ± j 32.750959185172, −67.791476256437, −66.940000009033, −45.689999998661, −42.997168378168, −26.311678722911 ± j 9.297227877312
which verifies that the closed-loop system is nominally stable. For our analysis we first took the controller coefficient vector as a parameter vector and determined the l₂ parametric stability margin around the nominal controller. This parametric stability margin is found to be ρ = 1.17938672900662 × 10³. The normalized ratio of change in controller coefficients required to destabilize the closed loop is

ρ/‖p⁰‖₂ = 1.455352715525003 × 10⁻¹⁵,

which indicates that closed-loop stability is very fragile with respect to controller parameter perturbations. To continue, we draw the Nyquist plot of the plant P(s) with this controller C(s). This is shown in Figure 6, which shows that

Gain Margin = [0.5745485, 2.585314]
Phase Margin = [0, ±24.0555] degrees.

Comparing this with the previous fragility with respect to coefficient perturbations reminds us that good gain and phase margins are not necessarily reliable indicators of robustness. However, poor gain and/or phase margins are accurate indicators of fragility!

Example 5 (ℓ1 Optimal Control [55]) The given plant, which is a discrete-time model of an X-29 aircraft, has transfer function

P(z) = (n₃z³ + n₂z² + n₁z + n₀)/(z⁴ + d₃z³ + d₂z² + d₁z + d₀)

with

n₀ = 4.535291991023538 × 10⁻³
n₁ = −1.294020546545127 × 10⁻²
n₂ = 2.996941752535731 × 10⁻³
n₃ = 4.522895843871860 × 10⁻³

d₀ = 0.1888756028372526
d₁ = −1.235443807952025
d₂ = 2.900568259828338
d₃ = −2.871701572273939.
Figure 6 Nyquist plot of P(s)C(s).
The optimal controller designed to minimize the ℓ1 norm of a disturbance transfer function is

C(z) = (q₆z⁶ + q₅z⁵ + q₄z⁴ + q₃z³ + q₂z² + q₁z + q₀)/(p₆z⁶ + p₅z⁵ + p₄z⁴ + p₃z³ + p₂z² + p₁z + p₀)

where

q₀ = −4.1763 × 10
q₁ = 2.18478 × 10²
q₂ = −2.93168 × 10²
q₃ = −1.21311 × 10²
q₄ = 3.43 × 10²
q₅ = 8.068 × 10
q₆ = −1.86872 × 10²

p₀ = 1
p₁ = 1.9728
p₂ = −1.7703
p₃ = −3.0297
p₄ = −8.446 × 10⁻¹
p₅ = 1.489 × 10⁻¹
p₆ = 2.5231.
The poles of the closed-loop system are:

0.96739955458900, 0.90462192702824, 0.71656397178049, −0.46249566183404, 0.51179070536440, 0.42346983479152, −0.03470824179828 ± j 0.08323893024208, 0.07832845106251 ± j 0.06923045644543,

and therefore the controller does stabilize the nominal closed-loop system. For our analysis we took the controller transfer function coefficients as a parameter vector and computed the l₂ parametric stability margin around the nominal controller. Here we found that ρ = 0.01796866210822. The normalized ratio of change in controller coefficients required to destabilize the closed loop is

ρ/‖p⁰‖₂ = 3.231207641519448 × 10⁻⁵.

This shows how fragile the system is with respect to controller parameter perturbations: perturbations of less than 1 part in 10,000 will destabilize the closed loop. Next, the Nyquist plot of P(z)C(z) is drawn in Figure 7. This gives the closed-loop gain and phase margins:

Gain Margin = [0.5463916, 1.38153348]
Phase Margin = [0, ±23.03169] degrees.
Example 6 (H₂ Optimal Design [72]) The plant transfer function is

P(s) = (−s + 1)/(s² + s + 2)

and the optimal controller is determined by minimizing a weighted H₂ norm of the disturbance transfer function, ‖W(s)P(s)S(s)‖₂, where S(s) is the sensitivity function. In this example the optimal YJBK parameter Q(s) is improper and a suboptimal controller is picked after dividing Q(s) by the factor (sτ + 1)ᵏ with τ = 0.01 and k = 2. The controller designed is

C(s) = (q₆s⁶ + q₅s⁵ + q₄s⁴ + q₃s³ + q₂s² + q₁s + q₀)/(p₆s⁶ + p₅s⁵ + p₄s⁴ + p₃s³ + p₂s² + p₁s)

where

q₆ = 1.0002, q₅ = 3.0406, q₄ = 8.1210, q₃ = 13.2010, q₂ = 15.2004, q₁ = 12.08, q₀ = 4.0,
p₆ = 0.0001, p₅ = 1.0205, p₄ = 2.1007, p₃ = 5.1403, p₂ = 6.06, p₁ = 2.0.
Figure 7 Nyquist plot of P(z)C(z).
The poles of the closed-loop system are:

−99.99999999999979 ± j 0.00000525738646, −0.49999991692171 ± j 1.32287570012498, −0.50000008307830 ± j 1.32287561093960, −0.99999999999999 ± j 0.00000012181916,

which verifies nominal closed-loop stability. To proceed with our analysis we took the controller coefficients as a parameter vector and computed the l₂ parametric stability margin around the nominal. Here we found that ρ = 1.101592322015807 × 10⁻⁴. The normalized ratio of change in controller coefficients required to destabilize the closed loop is

ρ/‖p⁰‖₂ = 3.737066131643626 × 10⁻⁶,

which again shows that the controller is extremely fragile. To continue we draw the Nyquist plot of the system with this controller in Figure 8. It shows that the gain and phase margins are:

Gain Margin = [1, 1.02000161996515]
Phase Margin = [0, 6.86646258505442] degrees
which is again rather poor.
Figure 8 Nyquist plot of P(s)C(s) (Zoomed Figure).
In this particular example, we found that the poles of the closed-loop system appear to be extremely sensitive with respect to controller coefficient changes. Consider the following perturbation vector:

∆p = [−0.32108925047466, −0.00907625963783, 0.00174367712816, 0.00004928868247, −0.00000946904996, −0.00000026766277, 0.00000005142150, −0.99999651057041, 0.33210864299116, 0.00543048713498, −0.00180351800614, −0.00002949029363, 0.00000979401515]ᵀ × 10⁻⁴,

whose l₂ norm is ‖∆p‖₂ = 1.101592322015807 × 10⁻⁴, which equals the parametric stability margin for this problem. If we add this perturbation to the
nominal controller coefficients, we have the following closed-loop poles with the perturbed controller:

−58362838.6564950, −49.0701313, −1.0093632, −0.9910823, −0.5053510 ± j 1.3259448, −0.4947516 ± j 1.3200071.

This shows that one pole is far in the left-half plane. Now we select another small perturbation:

∆p = [−0.3210910254683, −0.0090763098117, 0.0017436867673, 0.0000492889549, −0.0000094691023, −0.0000002676642, 0.0000000514218, −1.0000020385893, 0.3321104789004, 0.0054305171549, −0.0018035279761, −0.0000294904567, 0.0000097940693]ᵀ × 10⁻⁴
whose l₂ norm is ‖∆p‖₂ = 1.101598411660255 × 10⁻⁴, which is slightly larger than the previous one. If we add this perturbation to the nominal controller coefficients, we obtain the following new set of closed-loop poles:

99899133.5177260, −49.0700608, −1.0093632, −0.9910823, −0.5053510 ± j 1.3259448, −0.4947516 ± j 1.3200071,

which includes an RHP pole at about 100 × 10⁶! This example shows that a slight perturbation in the coefficients of the optimal controller can result in very large perturbations of the closed-loop poles. This is due to the drop in degree of the characteristic polynomial under a perturbation in the highest coefficient: the leading denominator coefficient p₆ = 0.0001 is nearly zero, so a perturbation of the order of 10⁻⁴ can nearly annihilate it and send one closed-loop pole toward infinity.
Discussion

The calculations presented above show that H∞, H₂, µ, and ℓ1 designs can lead to fragile controllers. This means that very small perturbations of the
controller coefficients can result in instability. This fragility usually also shows up as extremely small gain and/or phase margins of the closed-loop system. Moreover, these margins were calculated at the nominal plant; the worst case margins over the set of uncertain plants would certainly be even poorer! The extremely small size of the parametric margin around the controller coefficients obtained in these examples means that there is practically no freedom left to readjust or tune the controller. Thus, at least in the examples presented here, the control engineer who opts for such an optimal design is forced to either accept a fragile design or reject it altogether. These calculations therefore raise a cautionary note regarding the role, in practical applications, of optimal and robust designs as developed over the last few decades. It is important to understand the fundamental reasons for controller fragility if we are to successfully overcome it. We make some speculative comments on this issue below. Modern control theory results in higher order controllers. As we know, the stability regions in the parameter space of higher order systems have "instability holes," and the optimization algorithm can stuff the controller parameters into tight spots close to these holes, since no margin with respect to the controller is asked for in the optimization procedure. Thus, in a sense, the design transfers the sensitivity from the plant to the controller. Of course, the fragility shown here may not be limited to the techniques discussed here, nor to high order controllers. For example, Example 4.8 (page 194) of the book [24] shows that one may also obtain extremely sensitive low order controllers. That example shows that the controller that maximizes the plant parameter perturbation tolerance actually puts the closed-loop poles on the boundary of the specified D-stability region.
This means that while the closed-loop poles can only move inward within the D-stability region under plant parameter perturbations, even a small perturbation in the controller parameters sends them flying out of the D-stability region. One view of optimal control is that the fixed plant is a constraint on the optimization process. There are "good" and "bad" plants, and some nonminimum phase plants are so bad that no compensator can produce a robust system. Some of the examples given here are indeed bad plants, and this can lead to inherently poor performance regardless of the design methodology used. Finally, the above discussion suggests that there are a number of open areas in controller design that need to be looked into more deeply. The most important of these are the design of low order, robust, nonfragile controllers achieving multiple performance specifications, and the design of controllers based on measured data rather than models. The results presented in Part 1 are preliminary steps toward addressing the first problem area. The 2008 paper [115] addresses the design of data based controllers.
Notes and References

This epilogue is based on, and closely follows, the 1997 paper [124] by Keel and Bhattacharyya, where the issue of controller fragility was first pointed out. Since then there has been considerable discussion of the potential fragility of complex systems, but no effective general procedure to overcome the controller fragility problem has appeared, suggesting that the problem may be inherent to high order and optimal designs.
REFERENCES
[1] Aguirre, G., Chapellat, H. and Bhattacharyya, S. P. Stability margins for discrete-time uncertain systems. In Proceedings of the 1989 IEEE Conference on Decision and Control (Tampa, FL, December 1989). [2] Ahmad, S. S. and Keel, L. H. Robust lead-lag compensation for uncertain linear systems. In Proceedings of the 1992 IEEE Symposium on Circuits and Systems (San Diego, CA, 1992), pp. 2716 – 2719. [3] Ahmad, S. S., Keel, L. H. and Bhattacharyya, S. P. Computer aided robust control design for interval control system. In Proceedings of the IEEE Symposium on Computer Aided Control System Design (Napa, CA, 1992), pp. 82 – 89. [4] Ahmad, S. S., Keel, L. H. and Bhattacharyya, S. P. Robust PID control and lead-lag compensator for linear interval systems. In Robustness of Dynamic Systems with Parameter Uncertainties, M. Mansour, S. Balemi, and W. Truöl, Eds. Birkhäuser, Berlin, 1992, pp. 251 – 260. [5] Aizerman, M. A. and Gantmacher, F. R. Absolute Stability of Regulator Systems. Holden-Day, San Francisco, CA, 1964. Russian Edition, 1963. [6] Anderson, B. D. O., Kraus, F., Mansour, M. and Dasgupta, S. Easily testable sufficient conditions for the robust stability of systems with multiaffine parameter dependence. In Robustness of Dynamic Systems with Parameter Uncertainties, M. Mansour, S. Balemi, and W. Truöl, Eds. Birkhäuser, Berlin, 1992, pp. 81 – 92. [7] Åström, K. J. and Hägglund, T. Automatic tuning of simple regulators with specifications on phase and amplitude margins. Automatica 20 (1984), 645 – 651. [8] Åström, K. J. and Hägglund, T. PID Controllers: Theory, Design, and Tuning. Instrument Society of America, Research Triangle Park, NC, 1995. [9] Atherton, D. P. and Majhi, S. Limitations of PID controllers. In Proceedings of the 1999 American Control Conference (San Diego, CA, June 1999).
[10] Barmish, B. R. Invariance of strict Hurwitz property of polynomials with perturbed coefficients. IEEE Transactions on Automatic Control AC - 29, 10 (1984), 935 – 936. [11] Barmish, B. R. New tools for robustness analysis. In Proceedings of the 27th IEEE Conference on Decision and Control (Austin, TX, December 1988). [12] Barmish, B. R. A generalization of Kharitonov's four polynomial concept for robust stability problems with linearly dependent coefficient perturbations. IEEE Transactions on Automatic Control 34, 2 (February 1989), 157 – 165. [13] Barmish, B. R., Hollot, C. V., Kraus, F. J. and Tempo, R. Extreme point results for robust stabilization of interval plants with first order compensators. IEEE Transactions on Automatic Control 37, 6 (June 1992), 707 – 714. [14] Barmish, B. R., Khargonekar, P. P., Shi, Z. and Tempo, R. Robustness margin need not be a continuous function of the problem data. Systems & Control Letters 15 (1989), 371 – 381. [15] Bartlett, A. C. Nyquist, Bode, and Nichols plots of uncertain systems. In Proceedings of the 1990 American Control Conference (San Diego, CA, 1990). [16] Bartlett, A. C. Vertex results for the steady state analysis of uncertain systems. IEEE Transactions on Automatic Control 37, 11 (November 1992), 1758 – 1762. [17] Bartlett, A. C., Hollot, C. V. and Lin, H. Root location of an entire polytope of polynomials: It suffices to check the edges. Mathematics of Control, Signals, and Systems 1 (1988), 61 – 71. [18] Bartlett, A. C., Tesi, A. and Vicino, A. Frequency response of uncertain systems with interval plants. IEEE Transactions on Automatic Control 38, 6 (June 1993), 929 – 933. [19] Basu, S. On boundary implications of stability and positivity properties of multidimensional systems. IEEE Proceedings 78, 4 (1990), 614 – 626. [20] Bellman, R. and Cooke, K. L. Differential-Difference Equations. Academic Press, London, UK, 1963. [21] Bhattacharyya, S. P. Output regulation with bounded energy.
IEEE Transactions on Automatic Control AC-18, 4 (1973), 381 – 383. [22] Bhattacharyya, S. P. Robust parametric stability: The role of the CB segments. In Control of Uncertain Dynamic Systems, S. P. Bhattacharyya and L. H. Keel, Eds. CRC Press, Littleton, MA, September 1991.
[23] Bhattacharyya, S. P. Vertex results in robust stability. Tech. rep., TCSP Report, Texas A&M University, April 1991. [24] Bhattacharyya, S. P., Chapellat, H. and Keel, L. H. Robust Control: The Parametric Approach. Prentice Hall PTR, Upper Saddle River, NJ, 1995. [25] Bhattacharyya, S. P. and Keel, L. H., Eds. Control of Uncertain Dynamic Systems. CRC Press, Littleton, MA, 1991. [26] Bhattacharyya, S. P. and Keel, L. H. Robust stability and control of linear and multilinear interval systems. In Control and Dynamic Systems, C. T. Leondes, Ed., vol. 51. Academic Press, New York, NY, 1992, pp. 31 – 78. [27] Bhattacharyya, S. P., Keel, L. H. and Howze, J. W. Stabilization of linear systems with fixed order controllers. Linear Algebra and Its Applications 98 (1988), 57 – 76. [28] Bhattacharyya, S. P. and Pearson, J. B. On the linear servomechanism problem. International Journal of Control 12, 5 (1970), 795 – 806. [29] Bhattacharyya, S. P. and Pearson, J. B. On error systems and the servomechanism problem. International Journal of Control 15, 6 (1972), 1041 – 1062. [30] Bhattacharyya, S. P., Pearson, J. B. and Wonham, W. M. On zeroing the output of a linear system. Information and Control 2 (1972), 135 – 142. [31] Bialas, S. A necessary and sufficient condition for stability of interval matrices. International Journal of Control 37 (1983), 717 – 722. [32] Bialas, S. A necessary and sufficient condition for the stability of convex combinations of stable polynomials and matrices. Bulletin of Polish Academy of Science 39, 9 - 10 (1985), 473 – 480. [33] Biernacki, R. M., Hwang, H. and Bhattacharyya, S. P. Robust stabilization of plants subject to structured real parameter perturbations. IEEE Transactions on Automatic Control AC - 32, 6 (June 1987), 495 – 506. [34] Bose, N. K. A system-theoretic approach to stability of sets of polynomials. Contemporary Mathematics 47 (1985), 25 – 34. [35] Bose, N. K. Robust multivariable scattering Hurwitz interval polynomials. 
Linear Algebra and Its Applications 98 (1988), 123 – 136. [36] Bose, N. K. Test of Hurwitz and Schur properties of convex combination of complex polynomials. IEEE Transactions on Automatic Control 36, 9 (1989), 1245 – 1247.
[37] Bose, N. K. Digital Filters. Elsevier-Science North-Holland, Krieger Publishing Co., New York, 1993. [38] Bose, N. K. Argument conditions for Hurwitz and Schur polynomials from network theory. IEEE Transactions on Automatic Control 39, 2 (February 1994), 345 – 346. [39] Bose, N. K. and Delansky, J. F. Boundary implications for interval positive rational functions. IEEE Transactions on Circuits and Systems CAS - 36 (1989), 454 – 458. [40] Bose, N. K. and Shi, Y. Q. Network realizability theory approach to stability of complex polynomials. IEEE Transactions on Automatic Control 34, 2 (1987), 216 – 218. [41] Bose, N. K. and Shi, Y. Q. A simple general proof of Kharitonov's general stability criterion. IEEE Transactions on Circuits and Systems CAS - 34 (1987), 1233 – 1237. [42] Brasch, F. M. and Pearson, J. B. Pole placement using dynamic compensators. IEEE Transactions on Automatic Control AC - 15, 1 (February 1970), 34 – 43. [43] Chang, B. C. and Pearson, J. B. Optimal disturbance reduction in linear multivariable systems. IEEE Transactions on Automatic Control AC - 29 (October 1984), 880 – 887. [44] Chapellat, H. and Bhattacharyya, S. P. Calculation of maximal stability domains using an optimal property of Kharitonov polynomials. In Analysis and Optimization of Systems, Lecture Notes in Control and Information Sciences. Springer-Verlag, 1988, pp. 22 – 31. [45] Chapellat, H. and Bhattacharyya, S. P. An alternative proof of Kharitonov's theorem. IEEE Transactions on Automatic Control AC - 34, 4 (April 1989), 448 – 450. [46] Chapellat, H. and Bhattacharyya, S. P. A generalization of Kharitonov's theorem: Robust stability of interval plants. IEEE Transactions on Automatic Control AC - 34, 3 (March 1989), 306 – 311. [47] Chapellat, H. and Bhattacharyya, S. P. Robust stability and stabilization of interval plants. In Robustness in Identification and Control, M. Milanese, R. Tempo, and A. Vicino, Eds. Plenum, New York, 1989, pp. 207 – 229.
[48] Chapellat, H., Bhattacharyya, S. P. and Keel, L. H. Stability margin for Hurwitz polynomials. In Proceedings of the 27th IEEE Conference on Decision and Control (Austin, TX, December 1988), pp. 1392 – 1398.
[49] Chapellat, H., Dahleh, M. and Bhattacharyya, S. P. Robust stability under structured and unstructured perturbations. IEEE Transactions on Automatic Control AC - 35, 10 (October 1990), 1100 – 1108. [50] Chapellat, H., Dahleh, M. and Bhattacharyya, S. P. On robust nonlinear stability of interval control systems. IEEE Transactions on Automatic Control AC - 36, 1 (January 1991), 59 – 67. [51] Chapellat, H., Mansour, M. and Bhattacharyya, S. P. Elementary proofs of some classical stability criteria. IEEE Transactions on Education 33, 3 (1990), 232 – 239. [52] Choksy, N. H. Time-lag systems - A bibliography. IRE Transactions on Automatic Control 5, 1 (1960), 66 – 70. [53] Cohen, G. H. and Coon, G. A. Theoretical consideration of retarded control. Transactions of the American Society of Mechanical Engineers 76 (1953), 827 – 834. [54] Dahleh, M., Tesi, A. and Vicino, A. On the robust Popov criterion for interval Lur’e system. IEEE Transactions on Automatic Control 38, 9 (September 1993), 1400 – 1405. [55] Dahleh, M. A. and Diaz-Bobillo, I. J. Control of Uncertain Systems: A Linear Programming Approach. Prentice Hall PTR, Englewood Cliffs, NJ, 1995. [56] Dahleh, M. A. and Pearson, J. B. l1 optimal feedback controllers for discrete-time systems. In Proceedings of the American Control Conference (Seattle, WA, 1986). [57] Dahleh, M. A. and Pearson, J. B. ℓ1 optimal feedback controllers for MIMO discrete-time systems. IEEE Transactions on Automatic Control AC - 32, 4 (April 1987), 314 – 322. [58] Dahleh, M. A. and Pearson, J. B. l1 optimal compensators for continuous-time systems. IEEE Transactions on Automatic Control AC - 32 (1987), 889 – 895. [59] Dasgupta, S. A Kharitonov like theorem for systems under nonlinear passive feedback. In Proceedings of the 26th IEEE Conference on Decision and Control (Los Angeles, CA, December 1987), pp. 2062 – 2063. [60] Dasgupta, S. Kharitonov’s theorem revisited. Systems & Control Letters 11 (1988), 381 – 384. [61] Dasgupta, S. 
and Bhagwat, A. S. Conditions for designing strictly positive real transfer functions for adaptive output error identification. IEEE Transactions on Circuits and Systems CAS - 34 (1987), 731 – 736.
[62] Datta, A., Ho, M. T. and Bhattacharyya, S. P. Structure and Synthesis of PID Controllers. Springer-Verlag, London, UK, 2000. [63] Davison, E. J. The output control of linear time-invariant multivariable systems with unmeasurable arbitrary disturbances. IEEE Transactions on Automatic Control AC-17, 5 (1972), 621 – 630. [64] Davison, E. J. The robust control of a servomechanism problem for linear time-invariant systems. IEEE Transactions on Automatic Control AC-21, 1 (1976), 25 – 34. [65] deGaston, R. R. E. and Safonov, M. G. Exact calculation of the multiloop stability margin. IEEE Transactions on Automatic Control AC - 33, 2 (February 1988), 156 – 171. [66] Desoer, C. A. and Vidyasagar, M. Feedback Systems: Input-Output Properties. Academic Press, New York, NY, 1975. [67] Desoer, C. A. and Wang, Y. T. Linear time-invariant robust servomechanism problem: A self-contained exposition. Control and Dynamic Systems 16 (1980), 81 – 129. [68] Dieudonné, J. Éléments d'analyse, Tome 1: Fondements de l'analyse moderne. Gauthier-Villars, Paris, 1969. [69] Dorato, P., Abdallah, C. and Cerone, V. Linear Quadratic Control: An Introduction. Macmillan Publishing Co., New York, NY, 1994. [70] Dorato, P., Fortuna, L. and Muscato, G. Robust Control for Unstructured Perturbations: An Introduction. Springer-Verlag, 1992. Lecture Notes in Control and Information Sciences. [71] Doyle, J. C. Lecture notes in advances in multivariable control. Tech. rep., ONR/Honeywell Workshop, 1984. Minneapolis, MN. [72] Doyle, J. C., Francis, B. A. and Tannenbaum, A. R. Feedback Control Theory. Macmillan Publishing Company, New York, NY, 1992. [73] Doyle, J. C., Glover, K., Khargonekar, P. P. and Francis, B. A. State-space solutions to standard H2 and H∞ control problems. IEEE Transactions on Automatic Control AC - 34, 8 (August 1989), 831 – 847. [74] Elizondo-Gonzales, C. Necessary and sufficient conditions for robust positivity of polynomic functions via sign decomposition.
In Proceedings of the 3rd IFAC Symposium on Robust Control Design (ROCOND) (Prague, Czech Republic, 2000). [75] Faedo, S. A new stability problem for polynomials with real coefficients. Ann. Scuola Norm. Sup. Pisa Sci. Fis. Mat. Ser. 3 - 7 (1953), 53 – 63.
[76] Ferreira, P. M. G. The servomechanism problem and the method of the state space in the frequency domain. International Journal of Control 23, 2 (1976), 245 – 255. [77] Ferreira, P. M. G. and Bhattacharyya, S. P. On blocking zeros. IEEE Transactions on Automatic Control AC - 22, 2 (1977), 258 – 259. [78] Foo, Y. K. and Soh, Y. C. Stability analysis of a family of matrices. IEEE Transactions on Automatic Control 35, 11 (November 1990), 1257 – 1259. [79] Francis, B. A., Sebakhy, O. A. and Wonham, W. M. Synthesis of multivariable regulators: The internal model principle. Applied Mathematics and Optimization 1 (1974), 64 – 86. [80] Fu, M. Computing the frequency response of linear systems with parametric perturbation. Systems & Control Letters 15 (1990), 45 – 52. [81] Fu, M. and Barmish, B. R. Polytopes and polynomials with zeros in a prescribed set. IEEE Transactions on Automatic Control AC-34 (1989), 544 – 546. [82] Fu, M., Olbrot, A. W. and Polis, M. P. Robust stability for time-delay systems: The Edge theorem and graphical tests. IEEE Transactions on Automatic Control 34, 8 (August 1989), 813 – 820. [83] Fujita, M., Namerikawa, T., Matsumura, F. and Uchida, K. µ-synthesis of an electromagnetic suspension system. IEEE Transactions on Automatic Control 40, 3 (1995), 530 – 536. [84] Gantmacher, F. R. The Theory of Matrices, Vol. 2. Chelsea Publishing Company, New York, N.Y., 1959. [85] Glover, K. All optimal Hankel-norm approximations of linear multivariable systems and their L∞-error bounds. International Journal of Control 39 (1984), 1115 – 1193. [86] Glover, K., Limebeer, D. J. N., Doyle, J. C., Kasenally, E. M. and Safonov, M. G. A characterization of all solutions to the four block general distance problem. SIAM Journal on Control and Optimization 29 (1991), 283 – 324. [87] Green, M. and Limebeer, D. Linear Robust Control. Prentice Hall PTR, Englewood Cliffs, NJ, 1995. [88] Grimble, M. J. and Johnson, M. A.
Algorithm for PID controller tuning using LQG cost minimization. In Proceedings of the 1997 American Control Conference (Albuquerque, NM, June 1997). [89] Grujić, L. T. and Petkovski, D. On robustness of Lur'e systems with multiple nonlinearities. Automatica 23 (1987), 327 – 334.
[90] Gu, K., Kharitonov, V. L. and Chen, J. Stability of Time-Delay Systems. Birkhäuser, Boston, MA, 2003. [91] Guillemin, E. A. The Mathematics of Circuit Analysis. Wiley Publishing Co., New York, 1949. [92] Hinrichsen, D. and Pritchard, A. J. New robustness results for linear systems under real perturbations. In Proceedings of the 27th IEEE Conference on Decision and Control (1988). [93] Hinrichsen, D. and Pritchard, A. J. An application of state space methods to obtain explicit formulae for robustness measures of polynomials. In Robustness in Identification and Control, M. Milanese, R. Tempo, and A. Vicino, Eds. Plenum, New York, 1989, pp. 183 – 206. [94] Hinrichsen, D. and Pritchard, A. J. A robustness measure for linear systems under structured real parameter perturbations. Tech. rep., Institut für Dynamische Systeme, Bremen, Germany, 1991. Report No. 184. [95] Ho, M. T. Synthesis of H∞ PID controllers. In Proceedings of the 40th IEEE Conference on Decision and Control (Orlando, FL, December 2001), pp. 255 – 260. [96] Ho, M. T., Datta, A. and Bhattacharyya, S. P. Control system design using low order controllers: Constant gain, PI, and PID. In Proceedings of the 1997 American Control Conference (1997), pp. 571 – 578. [97] Ho, M. T., Datta, A. and Bhattacharyya, S. P. A linear programming characterization of all stabilizing PID controllers. In Proceedings of the 1997 American Control Conference (1997), pp. 3922 – 3928. [98] Ho, M. T., Datta, A. and Bhattacharyya, S. P. Generalizations of the Hermite-Biehler theorem. Linear Algebra and Its Applications 302-303 (1999), 135 – 153. [99] Ho, M. T., Datta, A. and Bhattacharyya, S. P. Robust and non-fragile PID controller design. International Journal of Robust and Nonlinear Control 11 (2001), 681 – 708. [100] Ho, M. T. and Lin, C. Y. PID controller design for robust performance. In Proceedings of the 41st IEEE Conference on Decision and Control (Seville, Spain, December 2002), pp. 1063 – 1067. [101] Ho, M.
T., Silva, G. J., Datta, A. and Bhattacharyya, S. P. Real and complex stabilization: Stability and performance. In Proceedings of the 2004 American Control Conference (Boston, MA, June 2004), pp. 4126 – 4138.
[102] Ho, W., Hang, C. and Zhou, J. Self-tuning PID control for a plant with underdamped response with specification on gain and phase margins. IEEE Transactions on Control Systems Technology 5, 4 (1997), 446 – 452. [103] Hollot, C. V. and Tempo, R. On the Nyquist envelope of an interval plant family. IEEE Transactions on Automatic Control 39, 2 (February 1994), 391 – 396. [104] Hollot, C. V. and Xu, Z. L. When is the image of a multilinear function a polytope? A conjecture. In Proceedings of the 28th IEEE Conference on Decision and Control (Tampa, FL, 1989), pp. 1890 – 1891. [105] Hollot, C. V. and Yang, F. Robust stabilization of interval plants using lead or lag compensators. Systems & Control Letters 14 (1990), 9 – 12. [106] Howze, J. W. and Bhattacharyya, S. P. Robust tracking, error feedback and two degrees of freedom controllers. IEEE Transactions on Automatic Control 42, 7 (1997), 980 – 984. [107] Jury, E. I. Sampled-Data Control Systems. Wiley Publishing Co., New York, 1958. [108] Kalman, R. E. Contribution to the theory of optimal control. Bol. Soc. Matem. Mexico (1960), 102 – 119. [109] Kalman, R. E. When is a linear control system optimal? ASME Transactions Series D (Journal of Basic Engineering) (1964), 51 – 60. [110] Kamen, E. W. and Heck, B. S. Fundamentals of Signals and Systems. Prentice Hall, Upper Saddle River, NJ, 1997. [111] Kang, H. I. Extreme point results for robustness of control systems. PhD thesis, Department of Electrical and Computer Engineering, University of Wisconsin, Madison, Wisconsin, U.S.A., 1992. [112] Karmarkar, J. S. and Šiljak, D. D. Stability analysis of systems with time delay. Proceedings of the IEE 117, 7 (1970), 1421 – 1424. [113] Katbab, A. and Jury, E. I. Robust Schur-stability of control systems with interval plants. International Journal of Control 51, 6 (1990), 1343 – 1352. [114] Keel, L. and Bhattacharyya, S. Data based interval controller design. In Proceedings of the 2007 IEEE Conference on Decision and Control (New Orleans, LA, 2007). [115] Keel, L. and Bhattacharyya, S. Controller synthesis free of analytical models: Three term controllers. IEEE Transactions on Automatic Control 53, 9 (2008). To appear.
In Proceedings of the 2007 IEEE Conference on Decision and Control (New Orleans, LA, 2007). [115] Keel, L. and Bhattacharyya, S. Controller synthesis free of analytical models: Three term controllers. IEEE Transactions on Automatic Control 53, 9 (2008). To appear.
[116] Keel, L. and Bhattacharyya, S. Fixed order multivariable controller synthesis: A new algorithm. In Proceedings of the 2008 IEEE Conference on Decision and Control (Cancun, Mexico, 2008).
[117] Keel, L. H. and Bhattacharyya, S. Robust stability of interval matrices: A computational approach. International Journal of Control 62, 6 (1995), 1491–1506.
[118] Keel, L. H. and Bhattacharyya, S. P. Frequency domain design of interval controllers. In Control of Uncertain Dynamic Systems, S. P. Bhattacharyya and L. H. Keel, Eds. CRC Press, Littleton, MA, September 1991.
[119] Keel, L. H. and Bhattacharyya, S. P. Parametric stability margin for multilinear interval control systems. In Proceedings of the 1993 American Control Conference (San Francisco, CA, June 1993).
[120] Keel, L. H. and Bhattacharyya, S. P. Stability margin for multilinear interval systems via phase conditions: A unified approach. In Proceedings of the 1993 American Control Conference (San Francisco, CA, June 1993).
[121] Keel, L. H. and Bhattacharyya, S. P. Control system design for parametric uncertainty. International Journal of Robust and Nonlinear Control 4, 1 (1994), 87–100.
[122] Keel, L. H. and Bhattacharyya, S. P. Phase properties of Hurwitz polynomials and segments. Tech. rep., Tennessee State University, November 1994. ISE Report No. ACS-94-2.
[123] Keel, L. H. and Bhattacharyya, S. P. Robust parametric classical control design. IEEE Transactions on Automatic Control 39, 7 (1994), 1524–1530.
[124] Keel, L. H. and Bhattacharyya, S. P. Robust, optimal, or fragile? IEEE Transactions on Automatic Control 42, 8 (1997), 1098–1105.
[125] Keel, L. H. and Bhattacharyya, S. P. A generalization of Mikhailov’s criterion with applications. In Proceedings of the 2000 American Control Conference (Chicago, IL, June 2000).
[126] Keel, L. H. and Bhattacharyya, S. P. Root counting, phase unwrapping, stability and stabilization of discrete time systems. Linear Algebra and its Applications 351–352 (2002), 501–518.
[127] Keel, L. H. and Bhattacharyya, S. P. Direct synthesis of first order controllers from frequency response measurements. In Proceedings of the American Control Conference (Portland, OR, June 8–10, 2005).
[128] Keel, L. H. and Bhattacharyya, S. P. PID controller synthesis free of analytical models. In Proceedings of the 16th IFAC World Congress (Prague, Czech Republic, July 4–8, 2005).
[129] Keel, L. H. and Bhattacharyya, S. P. Data driven synthesis of three term digital controllers. In Proceedings of the American Control Conference (Minneapolis, MN, June 14–16, 2006).
[130] Keel, L. H., Bhattacharyya, S. P. and Howze, J. W. Robust control with structured perturbations. IEEE Transactions on Automatic Control AC-33, 1 (January 1988), 68–78.
[131] Keel, L. H., Lim, K. B. and Juang, J. N. Robust eigenvalue assignment with maximum tolerance to system uncertainties. AIAA Journal of Guidance, Control, and Dynamics 14, 3 (May–June 1991), 615–620.
[132] Keel, L. H., Mitra, S. and Bhattacharyya, S. P. Data driven synthesis of three term digital controllers. SICE Journal of Control, Measurement, and System Integration 1, 2 (2008), 102–110.
[133] Keel, L. H., Rego, J. I. and Bhattacharyya, S. P. A new approach to digital PID controller design. IEEE Transactions on Automatic Control 48, 4 (2003), 687–692.
[134] Keel, L. H., Shaw, J. and Bhattacharyya, S. P. Robust control of interval systems. In Robust Control. Springer-Verlag, Tokyo, Japan, June 1993.
[135] Khalil, H. K. Nonlinear Systems. MacMillan, New York, 1992.
[136] Kharitonov, V. L. Asymptotic stability of an equilibrium position of a family of systems of linear differential equations. Differentsial’nye Uravneniya 14 (1978), 2086–2088. Translation in Differential Equations 14 (1979), 1483–1485.
[137] Kharitonov, V. L. The Routh-Hurwitz problem for families of polynomials and quasipolynomials. Izvestiya Akademii Nauk Kazakhskoi SSR, Seriya fiziko-matematicheskaya 26 (1979), 69–79.
[138] Kharitonov, V. L. Interval stability of quasipolynomials. In Control of Uncertain Dynamic Systems, S. P. Bhattacharyya and L. H. Keel, Eds. CRC Press, Littleton, MA, September 1991.
[139] Kharitonov, V. L. and Zhabko, A. P. Robust stability of time-delay systems. IEEE Transactions on Automatic Control 39 (1994), 2388–2397.
[140] Kim, Y. C., Keel, L. H. and Bhattacharyya, S. P. Computer aided control system design: Multiple design objectives. In Proceedings of the European Control Conference (Kos, Greece, July 2–5, 2007).
[141] Kogan, J. Robust Stability and Convexity. Springer-Verlag, New York, NY, 1994.
[142] Leal, M. A. and Gibson, J. S. A first-order Lyapunov robustness method for linear systems with uncertain parameters. IEEE Transactions on Automatic Control 35, 9 (September 1990), 1068–1070.
[143] Luenberger, D. G. Optimization by Vector Space Methods. John Wiley and Sons, New York, 1969.
[144] Lur’e, A. I. On Some Nonlinear Problems in the Theory of Automatic Control. H. M. Stationery Office, London, 1957. Russian edition, 1951.
[145] Malik, W., Swaroop, D. and Bhattacharyya, S. P. A linear programming approach to the synthesis of fixed order controllers. IEEE Transactions on Automatic Control 53 (2008). To appear.
[146] Mansour, M. Robust stability of interval matrices. In Proceedings of the 28th IEEE Conference on Decision and Control (Tampa, FL, December 1989).
[147] Mansour, M. Robust stability in systems described by rational functions. In Control and Dynamic Systems, C. T. Leondes, Ed., vol. 51. Academic Press, New York, NY, 1992, pp. 79–128.
[148] Mansour, M. and Anderson, B. D. O. Kharitonov’s theorem and the second method of Lyapunov. In Robustness of Dynamic Systems with Parameter Uncertainties, M. Mansour, S. Balemi, and W. Truöl, Eds. Birkhäuser, Berlin, 1992, pp. 3–12.
[149] Mansour, M. and Kraus, F. J. Argument conditions for Hurwitz and Schur stable polynomials and the robust stability problem. Tech. rep., ETH Zürich, 1990.
[150] Marden, M. Geometry of Polynomials. American Mathematical Society, Providence, RI, 1966.
[151] Marquez, H. J. and Diduch, C. P. On strict positive realness of interval plants. IEEE Transactions on Circuits and Systems 40, 8 (1993), 551–552.
[152] Marshall, J. E. Control of Time Delay Systems. Peter Peregrinus, London, UK, 1979.
[153] Marshall, J. E., Gorecki, H., Korytowski, A. and Walton, K. Time-Delay Systems: Stability and Performance Criteria with Applications. Ellis Horwood, New York, 1992.
[154] Martin, J. M. State-space measures for stability robustness. IEEE Transactions on Automatic Control AC-32, 6 (June 1987), 509–512.
[155] Meressi, T., Chen, D. and Paden, B. Application of Kharitonov’s theorem to mechanical systems. IEEE Transactions on Automatic Control 38, 3 (March 1993), 488–491.
[156] Minnichelli, R. J., Anagnost, J. J. and Desoer, C. A. An elementary proof of Kharitonov’s stability theorem with extensions. IEEE Transactions on Automatic Control AC-34, 9 (1989), 995–998.
[157] Morari, M. and Zafiriou, E. Robust Process Control. Prentice Hall PTR, Englewood Cliffs, NJ, 1989.
[158] Mori, T. and Barnett, S. On stability tests for some classes of dynamical systems with perturbed coefficients. IMA Journal of Mathematical Control and Information 5 (1988), 117–123.
[159] Mori, T. and Kokame, H. Stability of interval polynomials with vanishing extreme coefficients. In Recent Advances in Mathematical Theory of Systems, Control, Networks, and Signal Processing I. Mita Press, Tokyo, Japan, 1992, pp. 409–414.
[160] Nett, C., Jacobson, C. and Balas, M. A connection between state-space and doubly coprime fractional representations. IEEE Transactions on Automatic Control AC-29 (1984), 831–832.
[161] Newton, G., Gould, L. A. and Kaiser, J. F. Analytical Design of Linear Feedback Controls. John Wiley, New York, NY, 1957.
[162] Ohm, D., Howze, J. W. and Bhattacharyya, S. P. Structural synthesis of multivariable controllers. Automatica 21, 1 (January 1985), 35–55.
[163] Ozguler, A. B. and Kocan, A. A. An analytic determination of stabilizing feedback gains. Tech. rep., Report 321, Institut für Dynamische Systeme, Universität Bremen, September 1994.
[164] Panagopoulos, H., Åström, K. J. and Hägglund, T. Design of PID controllers based on constrained optimization. In Proceedings of the 1999 American Control Conference (San Diego, CA, June 1999).
[165] Patel, R. V. and Toda, M. Quantitative measures of robustness for multivariable systems. In Proceedings of the American Control Conference (San Francisco, CA, May 1980).
[166] Pearson, J. B. Compensator design for dynamic optimization. International Journal of Control 9 (1968), 473.
[167] Pessen, D. W. A new look at PID controller tuning. ASME Journal of Dynamic Systems, Measurement and Control 116 (1994), 553–557.
[168] Petersen, I. R. A new extension to Kharitonov’s theorem. In Proceedings of the IEEE Conference on Decision and Control (Los Angeles, CA, December 1987).
[169] Polyak, B. T. Robustness analysis for multilinear perturbations. In Robustness of Dynamic Systems with Parameter Uncertainties, M. Mansour, S. Balemi, and W. Truöl, Eds. Birkhäuser, Berlin, 1992, pp. 93–104.
[170] Pontryagin, L. S. On the zeros of some elementary transcendental functions. American Mathematical Society Translations 2 (1955), 95–110.
[171] Rantzer, A. Kharitonov’s weak theorem holds if and only if the stability region and its reciprocal are convex. International Journal of Robust and Nonlinear Control (1992).
[172] Rantzer, A. Stability conditions for polytopes of polynomials. IEEE Transactions on Automatic Control AC-37 (January 1992), 79–89.
[173] Rosenbrock, H. H. Computer-Aided Control System Design. Academic Press, New York, NY, 1974.
[174] Safonov, M. G., Jonckheere, E. A., Verma, M. and Limebeer, D. J. N. Synthesis of positive real multivariable feedback systems. International Journal of Control 45 (1987), 817–842.
[175] Safonov, M. G. and Verma, M. S. L∞ optimization and Hankel approximation. IEEE Transactions on Automatic Control AC-30, 3 (March 1985), 279–280.
[176] Sezer, M. E. and Šiljak, D. D. A note on robust stability bounds. IEEE Transactions on Automatic Control 34, 11 (November 1989), 1212–1215.
[177] Sideris, A. and Sánchez Peña, R. S. Fast computation of the multivariable stability margin for real interrelated uncertain parameters. IEEE Transactions on Automatic Control 34, 12 (December 1989), 1272–1276.
[178] Silva, G. J., Datta, A. and Bhattacharyya, S. P. Controller design via Padé approximation can lead to instability. In Proceedings of the 40th IEEE Conference on Decision and Control (Orlando, FL, 2001).
[179] Silva, G. J., Datta, A. and Bhattacharyya, S. P. PI stabilization of first-order systems with time-delay. Automatica 37 (2001), 2025–2031.
[180] Soh, C. B., Berger, C. S. and Dabke, K. P. On the stability properties of polynomials with perturbed coefficients. IEEE Transactions on Automatic Control AC-30, 10 (October 1985), 1033–1036.
[181] Tantaris, R. N., Keel, L. H. and Bhattacharyya, S. P. Stabilization of discrete time systems by first order controllers. IEEE Transactions on Automatic Control 48, 5 (2003), 858–861.
[182] Tantaris, R. N., Keel, L. H. and Bhattacharyya, S. P. H∞ design with first order controllers. IEEE Transactions on Automatic Control 51, 8 (2006), 1343–1347.
[183] Tantaris, R. N., Keel, L. H. and Bhattacharyya, S. P. Stabilization of continuous-time systems by first order controllers. In Proceedings of the 10th IEEE Mediterranean Conference on Control and Automation (Lisbon, Portugal, July 9–12, 2002).
[184] Tantaris, R. N., Keel, L. H. and Bhattacharyya, S. P. Gain/phase margin design with first order controllers. In Proceedings of the 2003 American Control Conference (Denver, CO, June 4–6, 2003).
[185] Tesi, A. and Vicino, A. A new fast algorithm for robust stability analysis of linear control systems with linearly correlated parametric uncertainty. Systems & Control Letters 13 (1989), 321–329.
[186] Tesi, A. and Vicino, A. Robust stability of state-space models with structured uncertainties. IEEE Transactions on Automatic Control 35, 2 (February 1990), 191–195.
[187] Tesi, A. and Vicino, A. Robustness analysis for linear dynamical systems with linearly correlated parameter uncertainties. IEEE Transactions on Automatic Control 35, 2 (February 1990), 186–190.
[188] Tesi, A. and Vicino, A. Kharitonov segments suffice for frequency response analysis of plant-controller families. In Control of Uncertain Dynamic Systems, S. P. Bhattacharyya and L. H. Keel, Eds. CRC Press, Littleton, MA, September 1991.
[189] Tesi, A. and Vicino, A. Robust absolute stability of Lur’e control systems in parameter space. Automatica 27 (1991), 147–151.
[190] Tesi, A. and Vicino, A. Robust strict positive realness: New results for interval plant plus controller families. In Proceedings of the 30th IEEE Conference on Decision and Control (Brighton, UK, December 1991), pp. 421–426.
[191] Thompson, W. M., Vacroux, A. G. and Hoffman, C. H. Application of Pontryagin’s time lag stability criterion to force-reflecting servomechanisms. In Proceedings of the 9th Joint Automatic Control Conference (1968).
[192] Tsypkin, Y. Z. and Polyak, B. T. Robust absolute stability of continuous systems. In Robustness of Dynamic Systems with Parameter Uncertainties, M. Mansour, S. Balemi, and W. Truöl, Eds. Birkhäuser, Berlin, 1992, pp. 113–124.
[193] Šiljak, D. D. Polytopes of nonnegative polynomials. In Proceedings of the 1989 American Control Conference (Pittsburgh, PA, June 1989).
[194] Vaidyanathan, P. and Mitra, S. K. A unified structural interpretation of some well-known stability test procedures for linear systems. Proceedings of the IEEE 75 (April 1987), 478–497.
[195] Vicino, A. and Tesi, A. Regularity condition for the stability margin problem with linear dependent perturbations. SIAM Journal on Control and Optimization 33, 5 (May 1995).
[196] Vicino, A., Tesi, A. and Milanese, M. Computation of nonconservative stability perturbation bounds for systems with nonlinearly correlated uncertainties. IEEE Transactions on Automatic Control AC-35, 7 (July 1990), 835–841.
[197] Vidyasagar, M. Control System Synthesis: A Factorization Approach. MIT Press, Cambridge, MA, 1985.
[198] Vidyasagar, M. Optimal rejection of persistent bounded disturbances. IEEE Transactions on Automatic Control AC-31, 6 (June 1986).
[199] Voda, A. A. and Landau, I. D. A method of the auto-calibration of PID controllers. Automatica 31, 1 (1995), 41–53.
[200] Wonham, W. M. Linear Multivariable Control: A Geometric Approach, 3rd Edition. Springer-Verlag, New York, 1985.
[201] Wonham, W. M. and Pearson, J. B. Regulation and internal stabilization in linear multivariable systems. SIAM Journal on Control 12 (1974), 5–8.
[202] Xu, H., Datta, A. and Bhattacharyya, S. P. Computation of all stabilizing PID gains for digital control systems. IEEE Transactions on Automatic Control 46, 4 (2001), 647–652.
[203] Xu, H., Datta, A. and Bhattacharyya, S. P. PID stabilization of LTI plants with time-delay. In Proceedings of the 42nd IEEE Conference on Decision and Control (Maui, HI, December 9–12, 2003), pp. 4038–4043.
[204] Yamamoto, S. and Hashimoto, I. Present status and future needs: The view from Japanese industry. In Proceedings of the 4th International Conference on Chemical Process Control (Tokyo, Japan, December 1991), Springer-Verlag.
[205] Yedavalli, R. K. Improved measures of stability for linear state space models. IEEE Transactions on Automatic Control AC-30, 6 (June 1985), 577–579.
[206] Yedavalli, R. K. and Liang, Z. Reduced conservatism in stability robustness bounds by state transformation. IEEE Transactions on Automatic Control AC-31, 9 (September 1986), 863–865.
[207] Yeung, K. S. and Wang, S. S. A simple proof of Kharitonov’s theorem. IEEE Transactions on Automatic Control 32 (April 1987), 822–823.
[208] Zadeh, L. A. and Desoer, C. A. Linear Systems Theory. McGraw Hill, New York, 1963.
[209] Zames, G. Functional analysis applied to nonlinear feedback systems. IEEE Transactions on Circuit Theory (September 1963), 392–404.
[210] Zames, G. On the input-output stability of time-varying nonlinear feedback systems, Parts I and II. IEEE Transactions on Automatic Control AC-11 (1966), 228–238 and 465–476.
[211] Zames, G. Feedback and optimal sensitivity: Model reference transformations, multiplicative seminorms, and approximate inverses. IEEE Transactions on Automatic Control AC-26 (April 1981), 301–320.
[212] Zames, G. and Francis, B. A. Feedback, minimax sensitivity, and optimal robustness. IEEE Transactions on Automatic Control AC-28 (May 1983), 585–601.
[213] Zhou, K., Doyle, J. C. and Glover, K. Robust and Optimal Control. Prentice Hall PTR, Englewood Cliffs, NJ, 1995.
[214] Zhou, K. and Khargonekar, P. P. Stability robustness bounds for linear state space models with structured uncertainty. IEEE Transactions on Automatic Control AC-32 (1987), 621–623.
[215] Zhuang, M. and Atherton, D. P. Automatic tuning of optimum PID controllers. IEE Proceedings-D 140, 3 (1993), 216–224.
[216] Ziegler, J. G. and Nichols, N. B. Optimum settings for automatic controllers. Transactions of the American Society of Mechanical Engineers 64 (1942), 759–768.
Index
H2 norm, 856, 862
H∞ model matching problem, 872
H∞ norm-bounded uncertainty, 592
H∞ norm constraint, 190
H∞ optimal control, 759, 784
H∞ optimal control problem, 718
L1, 818
L1 theory, 849
L2 gain, 716
L2 stability, 705
Lp, 818
Lp theory, 850
L∞, 818
L∞ stability, 705
ℓ1 optimal control, 878
ℓ2 stability margin, 387
ℓ1 stability margin, 396
ℓ∞ stability margin, 395
γ-iteration, 718
µ-synthesis, 692, 877
l1, 813
l1 optimal control, 747
lp, 813
l∞, 813
z-transform, 845
L-stability, 704
2-Riccati solution, 792, 803
absolute stability, 609, 610, 614
adaptive control, 692
additive perturbation, 608, 635
additive uncertainty, 707
adjoint operator, 764
admissible set, 51
algebra, 836
algebraic Riccati equation, 651, 776, 787
algorithm, 19, 55, 95
alignment, 728, 736
all-pass, 769
all-pass extension, 770
all-pass property, 737, 742
alternative signature formula, 33
amplitude decay ratio, 11, 17
anti-Hurwitz, 189
antiwindup, 24
arbitrary admissible control law, 650
asymptotic tracking, 675
augmented plant, 715
augmented system, 668
automatic PID tuning, 14, 24
Banach space, 823
Bezout identity, 710, 780
bisection algorithm, 795
Blaschke product, 741, 773
blocking zero, 678
Bode magnitude and phase envelope, 636
Bode magnitude plot, 215
boundary crossing theorem, 297
boundary property, 564
bounded degree, 300
bounded linear map, 695, 840, 853
bounded phase condition, 335, 449
bounded phase lemma, 334
bounded phase theorem, 448
canonical robust control framework, 722
Cauchy-Schwarz inequality, 706, 734, 832
Cauchy sequence, 822
causal, 695
causality, 693
co-inner, 774, 776
Cohen-Coon method, 17
commutative algebra, 863
complementary sensitivity, 690, 720
complete metric space, 823
complex convex direction, 359
complex root boundary, 182, 198
complex stabilization, 48
computer-aided design, 251, 282
conjugate exponent, 831
continuous linear map, 840
control error, 19, 21
control signal, 19, 21
controller gain value, 68
controller sensitivity, 869
convergence, 822
convex direction, 334, 359
convex hull, 445
convex hull approximation, 402
convex polygon, 446
convex stability set, 35
convolution, 844
coprime factorization, 710, 772, 775, 780
cost function, 644
D-decomposition, 181, 194, 196
damping, 55
data based design, 240, 270
data driven synthesis, 207
deadbeat control, 154
deadtime, 99, 101
degree dropping, 299
degree inflation, 783
derivative action, 36, 55
describing function, 13
detectability, 679
digital controller, 153
discontinuity, 391
discrete metric, 820
discrete time integrator, 153
disturbance rejection, 5, 27, 673, 690, 718
DK iteration, 724
dominant pole, 17
dual compensator, 671
dual problem, 750
dual space, 728
dynamic compensator, 667
edge polynomial, 446, 449
edge theorem, 455, 500
eigenspace, 652
eigenvalue, 652, 656, 665
equivalent norm, 825
error signal, 18, 20
essential supremum, 815, 817
exogenous input, 709
exponential weighting, 705
exposed edge, 445
extended Le space, 694
extended space, 693
extremal Bode angle value, 633
extremal Bode magnitude value, 633
extremal set, 625
extremal subset, 560, 564, 566
extremal system, 557, 564, 566
feasible string, 167
feedback, 4, 6, 13
feedback amplifier, 869
finite dimensional vector space, 825
finite-time LQR problem, 645
first order controller, 181
first order discrete time controller, 195
four block problem, 774
Fourier transform, 846
fragile controller, 869
fragility, 869
frequency domain template, 559
frequency response, 12, 19
frequency response data, 207
frequency response measurement, 208
Frobenius norm, 815
full state feedback H2, 796
full state feedback H∞, 785
functional, 729
fundamental tradeoff, 720
gain, 702
gain margin, 491
game theory, 784
generalized Kharitonov theorem, 504
generator, 445
guaranteed gain margin, 579
guaranteed phase margin, 579
guaranteed stability margin, 573
Hahn-Banach theorem, 737
Hamilton-Jacobi-Bellman equation, 641, 642
Hamiltonian, 652
Hamiltonian matrix, 777
Hankel approximation, 759
Hankel approximation theory, 762
Hankel norm, 760
Hankel operator, 759, 760
Hermite-Biehler theorem, 302
Hermite-Biehler theorem for complex polynomial, 310
Hermitian, 654
Hermitian matrix, 841
Hilbert space, 735, 764
Hölder's inequality, 814, 831
Hurwitz polynomial, 302
Hurwitz segment lemma, 343
identified analytical model, 207
identified model, 207
image set, 452
IMC, 15
impulse input data, 270
impulse response, 270, 844
induced norm, 729, 838, 857
infimum, 702
infimum (inf), 812
infinite dimensional vector space, 825
infinite horizon cost, 786
infinite horizon LQR problem, 650
infinite horizon optimal control problem, 648
initial value theorem, 849
inner
inner product, 733
inner product space, 728
inner-outer factorization, 773, 779
integral action, 8, 21, 27, 55
integral control, 5
integrated error, 17
interlacing property, 304
internal model, 674
Internal Model Control (IMC) structure, 713
internal model controller, 15
interpolation constraint, 743
interval inequality, 244
interval linear programming, 243
interval polynomial, 470, 482
interval system, 190
invariant degree, 299
invariant embedding, 642
inverse Fourier transform, 846
iterated integral, 849
Jury table, 322
Kharitonov polynomial, 189, 483
Kharitonov vertex polynomial, 504
Kharitonov's theorem, 470, 481
LabVIEW, 251
lag or lead controller, 181
Laplace transform, 845, 848
largest singular value, 842
Lebesgue integral, 815
Lebesgue measure, 815
left coprime, 772
linear inequality, 25, 35
linear map, 833
linear programming, 38, 748
linear quadratic regulator, 641
load disturbance, 14, 17
locally integrable, 829
loop frequency response, 181
loop gain, 690, 703
loop transfer function, 190, 777
low frequency band, 207
LQ return difference equality, 773, 777, 783
Luenberger observer, 782
Lur'e criterion, 610
Lur'e gain, 611
Lyapunov equation, 656, 766, 769
Lyapunov function, 423
mapping theorem, 396
Markov parameter, 270
matrix interpolation theory, 747
maximal delay, 201
maximal delay tolerance, 153, 175
maximally deadbeat control, 173
maximally deadbeat design, 153, 202
maximum column sum, 815
maximum modulus theorem, 726
maximum of column sum, 840
maximum of row sum, 842
maximum row sum, 815, 857
mean value theorem, 643
metric, 819
metric space, 820
minimum norm solution, 386
Minkowski's inequality, 809, 814, 832
mixed perturbation, 591
model based approach, 207
model based design, 240
model free approach, 207
model uncertainty, 19
monotonic phase increase property, 307
multilinear dependence, 416
multilinear interval plant, 405
multilinear interval polynomial, 396
multiple design specification, 25
multiplicative perturbations, 707
negative-time projection, 760
Nehari's theorem, 768
Nevanlinna matrix, 747
Nevanlinna-Pick interpolation, 747
noise, 9, 19, 56
nonanticipative, 695
nonlinear, 4, 19
nonlinear element, 13
norm, 689, 807
norm bounded perturbation, 692
normed linear space, 808
Nyquist criterion, 208
Nyquist plot, 12
Nyquist/Bode data, 207
observability grammian, 766
observer reconstructed state feedback H2, 801
observer-based H∞ feedback, 791
one block problem, 773
operator, 729, 838
operator norm, 838
optimal H∞ robust controller, 876
optimal control, 15, 19
optimal control problem, 641
optimal design, 15
optimality, 643
optimum and robust controller, 869
orthogonal, 735
orthogonal complement, 736
orthogonality, 728
oscillation, 11, 13
outer, 773
output feedback, 672, 679
overshoot, 11, 38
packed matrix notation, 772
Padé approximation, 56, 64, 85, 101
parameter separation, 34
parametric stability margin, 101, 380, 575, 876
parametric uncertainty, 691
Parseval's theorem, 848
performance attainment problem, 224
performance index, 647, 658, 689
performance robustness problem, 724
performance specification, 38, 47, 55, 181
phase margin, 19
PID controller, 27
PID controller design, 10
PID stabilization, 85, 114
polytopic family, 444
Popov criterion, 611
positive-time projection, 760
principle of argument, 294
principle of optimality, 642
process model, 99
projection map, 694
Pythagorean theorem, 735
quadratic cost functional, 650
quasi-polynomial, 56, 69, 72, 87
Rayleigh principle, 426
reachability grammian, 766
real convex direction, 361
real parametric stability margin, 381
real root boundary, 182
reference, 4, 673
relay experiment, 18
relay feedback, 24
return difference, 777
return difference identity, 661
return difference relation, 660
Riccati differential equation, 647, 787
Riccati equation, 784
Riemann integral, 815
Riemann sum, 816
Riemann-Lebesgue lemma, 846
right coprime, 772
robust absolute stability, 625
robust absolute stability sector, 630
robust control, 708
robust performance, 452, 606, 691
robust positivity, 524
robust servomechanism, 5
robust small gain theorem, 603
robust stability, 7, 151, 189, 691, 718
robust stabilization, 189
robustness, 5
root clustering, 158
Rouché's theorem, 294, 616
Routh table, 326
Routh-Hurwitz criterion, 27
sample signal, 153
Schur stability, 155, 312
Schur stable, 153
segment, 334
segment lemma, 341, 508
sensitivity, 690, 719
servomechanism, 672
setpoint, 14, 20
settling time, 11, 38
shifted H∞-norm, 707
signature, 25, 220, 261
signature formula, 29
simultaneous stabilization, 194, 201
small gain theorem, 592, 693, 700
small gain theorem for interval system, 593
spectral factorization, 774, 779
spectral radius, 794
SPR property, 609
square, 773
stability, 691
stability ball, 380, 381, 452
stability margin, 27, 181
stabilizing PID controller, 25, 56, 85
stabilizing set, 25, 27, 101, 165
standard H∞ control problem, 718
state feedback control, 672
strict positive realness, 609
structured singular value, 724
structured uncertainty, 691
supremum (sup), 812
Tchebyshev polynomial, 154, 197, 260
Tchebyshev representation, 153, 196, 260, 345
terminal penalty term, 642
time-delay system, 61, 77
Toeplitz matrix, 762
Tonelli's theorem, 850
tracking, 5, 27, 673
tracking error, 153
transfer function, 845
transient response, 27
triangle inequality, 807
truncation, 693
two-Riccati solution, 789
ultimate gain, 11
ultimate period, 11
uncertainty, 4
uncertainty box, 453
uncertainty template, 553
union of convex region, 25
unity rank perturbation, 416
unstructured stability margin, 604
unstructured uncertainty, 691
value function, 645
vector space, 807
vertex, 445
vertex lemma, 348, 369
vertex polynomial, 446
vertex system, 189
Virtual Instrument, 251
weighted sensitivity minimization, 717, 726
windup, 20
worst case H∞ stability margin, 598
worst case margin, 573
worst case parametric stability margin, 453, 574, 599
worst case stability margin, 452
YJBK parametrization, 709, 748, 768, 872
z-transform, 270, 747
zero exclusion principle, 302, 448
zero measure set, 817
zeroing the output, 658
Ziegler-Nichols frequency response method, 11
Ziegler-Nichols step response method, 10, 64, 100
S. P. Bhattacharyya was born in Yangon, Myanmar on June 23, 1946. He obtained the B.Tech. degree from IIT Bombay in 1967 and the M.S. and Ph.D. degrees from Rice University in 1969 and 1971. From 1971 to 1980 he established the graduate program in automatic control at the Federal University, Rio de Janeiro, Brazil, and served as the Head of the Department of Electrical Engineering from 1978 to 1980. He has held an NRC-NASA Resident Research Associateship, a Senior Fulbright Lectureship and the Boeing-Welliver Faculty Fellowship. At present he is the Robert M. Kennedy Professor of Electrical Engineering at Texas A&M University. Prof. Bhattacharyya's contributions to control theory span over 40 years and include the first solution of the linear multivariable servomechanism problem, the theory of robust and unknown input observers, a pole assignment algorithm based on Sylvester's equation, the computation of the parametric stability margin, generalizations of Kharitonov's theorem and the Hermite-Biehler theorem, the demonstration of the fragility of optimal and high order controllers, the synthesis of PID and fixed order controllers and, most recently, a new approach to model- and identification-free, measurement-driven controller synthesis. He has authored or coauthored 6 books and over 200 journal and conference papers. Shankar Bhattacharyya is also a performing artist and has played concerts of North Indian classical music on the Sarode in several countries.

A. Datta is the J. W. Runyon, Jr. '35 Professor II of Electrical and Computer Engineering at Texas A&M University. He obtained the B.Tech. degree from the Indian Institute of Technology, Kharagpur in 1985, the M.S. degree from Southern Illinois University at Carbondale in 1987 and the Ph.D. degree from the University of Southern California in 1991. His areas of interest include control systems and genomic signal processing, and he has published 5 books and over 100 journal and conference articles in these areas.

L. H. Keel received the B.S. degree from Korea University in 1978, and the M.S. and Ph.D. degrees in Electrical Engineering from Texas A&M University in 1983 and 1986, respectively. He is Professor in the Department of Electrical & Computer Engineering and also in the Center of Excellence in Information Systems at Tennessee State University. In 2003 and 2004, he was a NASA summer faculty fellow at NASA Marshall Space Flight Center, working on flight control problems of re-entry vehicles. His research interests include robust control, system identification, structure and control, and computer-aided design. He has authored or coauthored over 150 technical papers and three books in the field of control systems.