Advances in Computers

VOLUME 56

EDITED BY
MARVIN V. ZELKOWITZ
Department of Computer Science and Institute for Advanced Computer Studies, University of Maryland, College Park, Maryland

ACADEMIC PRESS
An imprint of Elsevier Science
Amsterdam Boston London New York Oxford Paris San Diego San Francisco Singapore Sydney Tokyo
This book is printed on acid-free paper.

Copyright © 2002, Elsevier Science (USA), except Chapter 5. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher.

The appearance of the code at the bottom of the first page of a chapter in this book indicates the Publisher's consent that copies of the chapter may be made for personal or internal use of specific clients. This consent is given on the condition, however, that the copier pay the stated per copy fee through the Copyright Clearance Center, Inc. (222 Rosewood Drive, Danvers, Massachusetts 01923), for copying beyond that permitted by Sections 107 or 108 of the U.S. Copyright Law. This consent does not extend to other kinds of copying, such as copying for general distribution, for advertising or promotional purposes, for creating new collective works, or for resale. Copy fees for pre-2002 chapters are as shown on the title pages. If no fee code appears on the title page, the copy fee is the same as for current chapters. ISSN#/2002 $35.00.

Explicit permission from Academic Press is not required to reproduce a maximum of two figures or tables from an Academic Press chapter in another scientific or research publication provided that the material has not been credited to another source and that full credit to the Academic Press chapter is given.

Academic Press, An imprint of Elsevier Science, 84 Theobald's Road, London WC1X 8RR, http://www.academicpress.com
Academic Press, An imprint of Elsevier Science, 525 B Street, Suite 1900, San Diego, California 92101-4495, USA, http://www.academicpress.com

ISBN 0-12-012156-5

A catalogue record for this book is available from the British Library.

Typeset by Devi Information Systems, Chennai, India. Printed and bound in Great Britain by MPG Books Ltd, Bodmin, Cornwall.

02 03 04 05 06 07 MP 9 8 7 6 5 4 3 2 1
Contents

CONTRIBUTORS
PREFACE
Software Evolution and the Staged Model of the Software Lifecycle
Keith H. Bennett, Vaclav T. Rajlich, and Norman Wilde

1. Introduction
2. Initial Development
3. Evolution—The Key Stage
4. Servicing
5. Phase-Out and Closedown
6. Case Studies
7. Software Change and Comprehension
8. Sustaining Software Value
9. Future Directions: Ultra Rapid Software Evolution
10. Conclusions
Acknowledgments
References
Embedded Software
Edward A. Lee

1. What is Embedded Software?
2. Just Software on Small Computers?
3. Limitations of Prevailing Software Engineering Methods
4. Actor-Oriented Design
5. Examples of Models of Computation
6. Choosing a Model of Computation
7. Heterogeneous Models
8. Component Interfaces
9. Frameworks Supporting Models of Computation
10. Conclusions
Acknowledgments
References
Empirical Studies of Quality Models in Object-Oriented Systems
Lionel C. Briand and Jürgen Wüst

1. Introduction
2. Overview of Existing Studies
3. Data Analysis Methodology
4. Summary of Results
5. Conclusions
Appendix A
Appendix B: Glossary
References
Software Fault Prevention by Language Choice: Why C Is Not My Favorite Language
Richard J. Fateman

1. Introduction and Background
2. Why Use C?
3. Why does Lisp Differ from C?
4. Root Causes of Flaws: A Lisp Perspective
5. Arguments against Lisp, and Responses
6. But Why Is C Used by Lisp Implementors?
7. Conclusion
Appendix 1: Cost of Garbage Collection
Appendix 2: Isn't C Free?
Acknowledgments and Disclaimers
References
Quantum Computing and Communication
Paul E. Black, D. Richard Kuhn, and Carl J. Williams

1. Introduction
2. The Surprising Quantum World
3. The Mathematics of Quantum Mechanics
4. Quantum Computing
5. Quantum Communication and Cryptography
6. Physical Implementations
7. Conclusions
Appendix
References
Exception Handling
Peter A. Buhr, Ashif Harji, and W. Y. Russell Mok

1. Introduction
2. EHM Objectives
3. Execution Environment
4. EHM Overview
5. Handling Models
6. EHM Features
7. Handler Context
8. Propagation Models
9. Propagation Mechanisms
10. Exception Partitioning
11. Matching
12. Handler Clause Selection
13. Preventing Recursive Resuming
14. Multiple Executions and Threads
15. Asynchronous Exception Events
16. Conclusions
Appendix: Glossary
References
Breaking the Robustness Barrier: Recent Progress on the Design of the Robust Multimodal System
Sharon Oviatt

1. Introduction to Multimodal Systems
2. Robustness Issues in the Design of Recognition-Based Systems
3. Future Directions: Breaking the Robustness Barrier
4. Conclusion
Acknowledgments
References
Using Data Mining to Discover the Preferences of Computer Criminals
Donald E. Brown and Louise F. Gunderson

1. Introduction
2. The Target Selection Process of Criminals
3. Predictive Modeling of Crime
4. Discovering the Preferences of the Agents
5. Methodology
6. Testing with Synthetic Data
7. Conclusions
References
AUTHOR INDEX
SUBJECT INDEX
CONTENTS OF VOLUMES IN THIS SERIES
Contributors

Keith H. Bennett is a full professor and former chair in the Department of Computer Science at the University of Durham. His research interests include new software architectures that support evolution. Bennett received a Ph.D. in computer science from the University of Manchester. He is a chartered engineer and a Fellow of the British Computer Society and IEE. Contact him at keith.bennett@durham.ac.uk.

Paul E. Black is a computer scientist in the Information Technology Laboratory of the National Institute of Standards and Technology (NIST). He has published papers on software configuration control, networks and queuing analysis, formal methods, testing, and software verification. He has nearly 20 years of industrial experience in developing software for IC design and verification, assuring software quality, and managing business data processing. Black earned an M.S. in Computer Science from the University of Utah, and a Ph.D. from Brigham Young University.

Lionel C. Briand is a professor with the Department of Systems and Computer Engineering, Carleton University, Ottawa, Canada, where he founded the Software Quality Engineering Laboratory (http://www.sce.carleton.ca/Squall/Squall.htm). Lionel has been on the program, steering, and organization committees of many international IEEE conferences and the editorial boards of several scientific journals. His research interests include object-oriented analysis and design, inspections and testing in the context of object-oriented development, quality assurance and control, project planning and risk analysis, and technology evaluation.

Donald Brown is a Professor and Chair of the Department of Systems Engineering, University of Virginia. Prior to joining the University of Virginia, Dr. Brown served as an officer in the U.S. Army and later worked at Vector Research, Inc. on projects in medical information processing and multisensor surveillance systems. He is currently a Fellow at the National Institute of Justice Crime Mapping Research Center. Dr. Brown is a Fellow of the IEEE and a past President of the IEEE Systems, Man, and Cybernetics Society. He is the recipient of the Outstanding Contribution Award from that society and the IEEE Millennium Medal. He is also a past Chairman of the Operations Research Society of America
Technical Section on Artificial Intelligence, and he is the recipient of the Outstanding Service Award from that society. Dr. Brown received his B.S. degree from the U.S. Military Academy, West Point, M.S. and M.Eng. degrees in operations research from the University of California—Berkeley, and a Ph.D. degree in industrial and operations engineering from the University of Michigan—Ann Arbor. His research focuses on data fusion, decision support, and predictive modeling with applications to security and safety. His email address is brown@virginia.edu.

Peter A. Buhr received B.Sc. Hons/M.Sc. and Ph.D. degrees in computer science from the University of Manitoba in 1976, 1978, and 1985, respectively. He is currently an Associate Professor in the Department of Computer Science, University of Waterloo, Canada. His research interests include concurrency, concurrent profiling/debugging, persistence, and polymorphism. He is the principal designer and implementer for the μSystem project, a thread library for C; the μC++ project, extending C++ with threads; and the MVD project, a collection of software tools to monitor, visualize, and debug concurrent μC++ programs. Dr. Buhr is a member of the Association for Computing Machinery.

Richard J. Fateman received a B.S. degree in physics and mathematics from Union College, Schenectady, NY, and a Ph.D. degree in applied mathematics from Harvard University, Cambridge, MA, in 1966 and 1971, respectively. From 1971 to 1974 he taught in the Department of Mathematics at the Massachusetts Institute of Technology, where he also participated in research on symbolic computation and the Macsyma system. Since 1974 he has been at the University of California at Berkeley, where he served as Associate Chair for Computer Science of the Department of Electrical Engineering and Computer Sciences from 1987 to 1990. His research interests include the design and analysis of symbolic mathematical algorithms and systems, implementation of programming languages, and the design of computer environments for scientific programming. He has also done research and teaching in document image analysis. Further details may be found at http://www.cs.berkeley.edu/~fateman.

Louise Gunderson is a research assistant in the Department of Systems Engineering, University of Virginia. Prior to joining the University of Virginia, Ms. Gunderson worked for the U.S. Environmental Protection Agency as an Enforcement Specialist. Ms. Gunderson received a B.A. degree in chemistry from the University of California—Berkeley, a B.A. degree in biology from the University of Colorado—Denver, and an M.S. degree in environmental science from the University of Colorado—Denver. She is currently a Ph.D. candidate in the Department of Systems Engineering, University of Virginia. Her interests involve the
modeling and simulation of natural and artificial systems. Her email address is [email protected].

Ashif Harji received BMath and MMath degrees in computer science from the University of Waterloo in 1997 and 1999, respectively. He is currently a Ph.D. student at the University of Waterloo, Waterloo, Canada. His research interests include concurrency, real-time, scheduling, and number theory.

D. Richard Kuhn is a computer scientist in the Information Technology Laboratory of the National Institute of Standards and Technology (NIST). His primary technical interests are in software testing and assurance, and information security. Before joining NIST in 1984, he worked as a systems analyst with NCR Corporation and the Johns Hopkins University Applied Physics Laboratory. He received an M.S. in computer science from the University of Maryland at College Park, and an M.B.A. from the College of William and Mary.

Edward A. Lee is a Professor in the Electrical Engineering and Computer Science Department at University of California—Berkeley. His research interests center on design, modeling, and simulation of embedded, real-time computational systems. He is director of the Ptolemy project at UC—Berkeley. He is co-author of four books and numerous papers. His B.S. is from Yale University (1979), his M.S. from MIT (1981), and his Ph.D. from UC—Berkeley (1986). From 1979 to 1982 he was a member of technical staff at Bell Telephone Laboratories in Holmdel, NJ, in the Advanced Data Communications Laboratory. He is a cofounder of BDTI, Inc., where he is currently a Senior Technical Advisor, is cofounder of Agile Design, Inc., and has consulted for a number of other companies. He is a Fellow of the IEEE, was an NSF Presidential Young Investigator, and won the 1997 Frederick Emmons Terman Award for Engineering Education.

W. Y. Russell Mok received the BCompSc and MMath degrees in computer science from Concordia University (Montreal) in 1994 and University of Waterloo in 1998, respectively. He is currently working at Algorithmics, Toronto. His research interests include object-oriented design, software patterns, and software engineering.

Sharon Oviatt is a Professor and Co-Director of the Center for Human-Computer Communication (CHCC) in the Department of Computer Science at Oregon Health and Sciences University. Previously she has taught and conducted research at the Artificial Intelligence Center at SRI International, and the Universities of Illinois, California, and Oregon State. Her research focuses on human-computer interaction, spoken language and multimodal interfaces, and mobile and highly interactive systems. Examples of recent work involve the development of novel design
concepts for multimodal and mobile interfaces, robust interfaces for real-world field environments and diverse users (children, accented speakers), and conversational interfaces with animated software "partners." This work is funded by grants and contracts from the National Science Foundation, DARPA, ONR, and corporate sources such as Intel, Motorola, Microsoft, and Boeing. She is an active member of the international HCI and speech communities, has published over 70 scientific articles, and has served on numerous government advisory panels, editorial boards, and program committees. Her work is featured in recent special issues of Communications of the ACM, Human-Computer Interaction, and IEEE Multimedia. Further information about Dr. Oviatt and CHCC is available at http://www.cse.ogi.edu/CHCC.

Vaclav T. Rajlich is a full professor and former chair in the Department of Computer Science at Wayne State University. His research interests include software change, evolution, comprehension, and maintenance. Rajlich received a Ph.D. in mathematics from Case Western Reserve University. Contact him at rajlich@wayne.edu.

Norman Wilde is a full professor of computer science at the University of West Florida. His research interests include software maintenance and comprehension. Wilde received a Ph.D. in mathematics and operations research from the Massachusetts Institute of Technology. Contact him at nwilde@uwf.edu.

Carl J. Williams is a research physicist in the Quantum Processes Group, Atomic Physics Division, Physics Laboratory of the National Institute of Standards and Technology (NIST). Before joining NIST in 1998, he worked as a systems analyst for the Institute of Defense Analyses and was a research scientist studying atomic and molecular scattering and molecular photodissociation at the James Franck Institute of the University of Chicago. He is an expert in the theory of ultra-cold atomic collisions, does research on the physics of quantum information processors, and coordinates quantum information activities at NIST. Williams received his Ph.D. from the University of Chicago in 1987.

Jürgen Wüst received the degree Diplom-Informatiker (M.S.) in computer science from the University of Kaiserslautern, Germany, in 1997. He is currently a researcher at the Fraunhofer Institute for Experimental Software Engineering (IESE) in Kaiserslautern, Germany. His research activities and industrial activities include software measurement, software product evaluation, and object-oriented development techniques.
Preface
Advances in Computers, continually published since 1960, is the oldest series still in publication to provide an annual update to the rapidly changing information technology scene. Each volume provides six to eight chapters describing how the software, hardware, or applications of computers are changing. In this volume, the 56th in the series, eight chapters describe many of the new technologies that are changing the use of computers during the early part of the 21st century.

In Chapter 1, "Software Evolution and the Staged Model of the Software Lifecycle" by K. H. Bennett, V. T. Rajlich, and N. Wilde, the authors describe a new view of software maintenance. It is well known that maintenance consumes a major part of the total lifecycle budget for a product; yet maintenance is usually considered an "end game" component of development. In this chapter, the authors view software maintenance as consisting of a number of stages, some of which can start during initial development. This provides a very different perspective on the lifecycle. The chapter introduces a new model of the lifecycle that partitions the conventional maintenance phase in a much more useful, relevant, and constructive way.

Chapter 2, "Embedded Software" by E. A. Lee, explains why embedded software is not just software on small computers, and why it therefore needs fundamentally new views of computation. Time, concurrency, liveness, robustness, continuums, reactivity, and resource management are all part of the computation of a program; yet prevailing abstractions of programs leave out these "nonfunctional" aspects.

Object-oriented system design is a current approach toward developing quality systems. However, how do we measure the quality of such systems? In order to measure a program's quality, quality models that quantitatively describe how internal structural properties relate to relevant external system qualities, such as reliability or maintainability, are needed. In Chapter 3, "Empirical Studies of Quality Models in Object-Oriented Systems" by L. C. Briand and J. Wüst, the authors summarize the empirical results that have been reported so far with modeling external system quality based on structural design properties. They perform a critical review of existing work in order to identify lessons learned regarding the way these studies are performed and reported.
Chapter 4, "Software Fault Prevention by Language Choice: Why C is Not My Favorite Language" by R. Fateman, presents an opposing view of the prevaihng sentiment in much of the software engineering world. Much of software design today is based on object-oriented architecture using C-I-+ or Java as the implementation language. Both languages are derived from the C language. However, is C an appropriate base in which to write programs? Dr. Fateman argues that it is not and that a LISP structure is superior. How fast can computers ultimately get? As this is being written, a clock speed of around 2 GHz is available. This speed seems to double about every 18 months ("Moore's Law"). However, are we reaching the limits on the underlying "silicon" structure the modem processor uses? One of the more exciting theoretical developments today is the concept of quantum computing, using the quantum states of atoms to create extremely fast computers, but quantum effects are extremely fragile. How can we create reliable machines using these effects? In "Quantum Computing and Communication" by P. E. Black, D. R. Kuhn, and C. J. WiUiams, the authors describe the history of quantum computing, and describe its applicability to solving modern problems. In particular, quantum computing looks like it has a home in modern cryptography, the ability to encode information so that some may not decipher its contents or so that other codebreakers may decipher its contents. In Chapter 6, "Exception Handling" by P. A. Buhr, A. Harji, and W. Y. R. Mok, the authors describe the current status of exception handling mechanisms in modem languages. Exception handling in languages like PL/I (e.g., the Oncondition) and C (e.g., the Throw statement) can be viewed as add-on features. The authors argue it is no longer possible to consider exception handling as a secondary issue in a language's design. Exception handling is a primary feature in language design and must be integrated with other major features, including advanced control flow, objects, coroutines, concurrency, real-time, and polymorphism. In Chapter 7, "Breaking the Robustness Barrier: Recent Progress on the Design of the Robust Multimodal Systems" by S. Oviatt, goes into the application of applying multimodal systems, two or more combined user input modes, in user interface design. Multimodal interfaces have developed rapidly during the past decade. This chapter specifically addresses the central performance issue of multimodal system design techniques for optimizing robustness. It reviews recent demonstrations of multimodal system robustness that surpass that of unimodal recognition systems, and also discusses future directions for optimizing robustness further through the design of advanced multimodal systems. The final chapter, "Using Data Mining to Discover the Preferences of Computer Criminals" by D. E. Brown and L. F. Gunderson, discusses the ability to predict a
new class of crime, the "cyber crime." With our increased dependence on global networks of computers, criminals are increasingly "hi-tech." The ability to detect illegal intrusion in a computer system makes it possible for law enforcement to both protect potential victims and apprehend perpetrators. However, warnings must be as specific as possible, so that systems that are not likely to be under attack do not shut off necessary services to their users. This chapter discusses a methodology for data-mining the output from intrusion detection systems to discover the preferences of attackers.

I hope that you find these articles of interest. If you have any suggestions for future chapters, I can be reached at mvz@cs.umd.edu.

MARVIN ZELKOWITZ
College Park, Maryland
Software Evolution and the Staged Model of the Software Lifecycle

K. H. BENNETT
Research Institute for Software Evolution
University of Durham
Durham DH1 3LE
United Kingdom
[email protected]

V. T. RAJLICH
Department of Computer Science
Wayne State University
Detroit, MI 48202
USA
vtr@cs.wayne.edu

N. WILDE
Department of Computer Science
University of West Florida
Pensacola, FL 32514
USA
[email protected]
Abstract

Software maintenance is concerned with modifying software once it has been delivered and has entered user service. Many studies have shown that maintenance is the dominant lifecycle activity for most practical systems; thus maintenance is of enormous industrial and commercial importance. Over the past 25 years or so, a conventional view of software development and maintenance has been accepted in which software is produced, delivered to the user, and then enters a maintenance stage. A review of this approach and the state of the art in research and practice is given at the start of the chapter. In most lifecycle models, software maintenance is lumped together as one phase at the end. In the experience of the authors, based on how maintenance is really undertaken (rather than how it might or should be done), software
maintenance actually consists of a number of stages, some of which can start during initial development. This provides a very different perspective on the lifecycle. In the chapter, we introduce a new model of the lifecycle that partitions the conventional maintenance phase in a much more useful, relevant, and constructive way. It is termed the staged model. There are five stages through which the software and the development team progress. A project starts with an initial development stage, and we then identify an explicit evolution stage. Next is a servicing stage, comprising simple tactical activities. Later still, the software moves to a phase-out stage in which no more work is done on the software other than to collect revenue from its use. Finally the software has a close-down stage. The key point is that software evolution is quite different and separate from servicing, from phase-out, and from close-down, and this distinction is crucial in clarifying both the technical and business consequences. We show how the new model can provide a coherent analytic approach to preserving software value. Finally, promising research areas are summarized.

1. Introduction
   1.1 Background
   1.2 Early Work
   1.3 Program Comprehension
   1.4 Standards
   1.5 Iterative Software Development
   1.6 The Laws of Software Evolution
   1.7 Stage Distinctions
   1.8 The Business Context
   1.9 Review
   1.10 The Stages of the Software Lifecycle
2. Initial Development
   2.1 Introduction
   2.2 Software Team Expertise
   2.3 System Architecture
   2.4 What Makes Architecture Evolvable?
3. Evolution—The Key Stage
   3.1 Introduction
   3.2 Software Releases
   3.3 Evolutionary Software Development
4. Servicing
   4.1 Software Decay
   4.2 Loss of Knowledge and Cultural Change
   4.3 Wrapping, Patching, Cloning, and Other "Kludges"
   4.4 Reengineering
5. Phase-Out and Closedown
6. Case Studies
   6.1 The Microsoft Corporation
   6.2 The VME Operating System
   6.3 The Y2K Experience
   6.4 A Major Billing System
   6.5 A Small Security Company
   6.6 A Long-Lived Defense System
   6.7 A Printed Circuits Program
   6.8 Project PET
   6.9 The FASTGEN Geometric Modeling Toolkit
   6.10 A Financial Management Application
7. Software Change and Comprehension
   7.1 The Miniprocess of Change
   7.2 Change Request and Planning
   7.3 Change Implementation
   7.4 Program Comprehension
8. Sustaining Software Value
   8.1 Staving off the Servicing Stage
   8.2 Strategies during Development
   8.3 Strategies during Evolution
   8.4 Strategies during Servicing
9. Future Directions: Ultra Rapid Software Evolution
10. Conclusions
Acknowledgments
References
1. Introduction

1.1 Background
What is software maintenance? Is it different from software evolution? Why isn't software designed to be easier to maintain? What should we do with legacy software? How do we make money out of maintenance? Many of our conventional ideas are based on analyses carried out in the 1970s, and it is time to rethink these for the modern software industry.

The origins of the term maintenance for software are not clear, but it has been used consistently over the past 25 years to refer to post-initial delivery work. This view is reflected in the IEEE definition of software maintenance [1] essentially as a post-delivery activity:

The process of modifying a software system or component after delivery to correct faults, improve performance or other attributes, or adapt to a changed environment. [1, p. 46]
Implicit in this definition is the concept of a software lifecycle, which is defined as:

The period of time that begins when a software product is conceived and ends when the software is no longer available for use. The software life cycle typically includes a concept phase, requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and, sometimes, retirement phase. Note: These phases may overlap or be performed iteratively. [1, p. 68]

A characteristic of established engineering disciplines is that they embody a structured, methodical approach to developing and maintaining artifacts [2, Chaps. 15 and 27]. Software lifecycle models are abstract descriptions of the structured methodical development and modification process, typically showing the main stages in producing and maintaining executable software. The idea began in the 1960s with the waterfall model [3]. A lifecycle model, implicit or explicit, is the primary abstraction that software professionals use for managing and controlling a software project, to meet budget, timescale, and quality objectives, with understood risks, and using appropriate resources. The model describes the production of deliverables such as specifications and user documentation, as well as the executable code. The model must be consistent with any legal or contractual constraints within a project's procurement strategy. Thus, it is not surprising that lifecycle models have been given primary attention within the software engineering community. A good overview of software lifecycle models is given in [2], and a very useful model is the spiral model of Boehm [4], which envisages software production as a continual iterative development process. However, crucially, this model does not address the loss of knowledge which, in the authors' experience, accompanies the support of long-lived software systems and which vitally constrains the tasks that can be performed. Our aim was to create a lifecycle model that would be useful for the planning, budgeting, and delivery of evolving systems, and that would take into account this loss of knowledge. Our new model is called the staged model.

The aim of this chapter is to describe the new staged model [5]. We provide a broad overview of the state of the art in software maintenance and evolution. The emphasis is mainly on process and methods (rather than technology), since this is where the main developments have occurred, and is of most relevance to this chapter. There is much useful material available on software maintenance management, including very practical guides [6]. We start from the foundations established within the international standards community. We then briefly revisit previous research work, as an understanding of these results is essential. Program comprehension is identified as a key component; interestingly, very few
textbooks on software engineering and even on software maintenance mention the term, so our review of the state of the art addresses the field to include this perspective. The new model and our view of research areas are influenced by program comprehension more than other aspects. The staged model is presented, and evidence drawn from case studies. Practical implications are then described, and finally, research directions are presented.
1.2 Early Work
In a very influential study, Lientz and Swanson [7,8] undertook a questionnaire survey in the 1970s, in which they analyzed then-current maintenance practices. Maintenance changes to software were categorized into:

• perfective (changes to the functionality),
• adaptive (changes to the environment),
• corrective (the correction of errors), and
• preventive (improvements to avoid future problems).

This categorization has been reproduced in many software engineering text books and papers (e.g., Sommerville [9], McDermid [2], Pressman [10], Warren [11]), and the study has been repeated in different application domains, in other countries, and over a period of 20 years (see, for example, the Ph.D. thesis of Foster [12], who analyzes some 30 studies of this type). However, the basic analysis has remained essentially unchanged, and it is far from clear what benefits this view of maintenance actually brings.

Implicit in the Lientz and Swanson model are two concepts:

• That software undergoes initial development, it is delivered to its users, and then it enters a maintenance phase.
• That the maintenance phase is uniform over time in terms of the activities undertaken, the process and tools used, and the business consequences.

These concepts have also suggested a uniform set of research problems to improve maintenance; see for example [2, Chap. 20]. One common message emerging from all these surveys is the very substantial proportion of lifecycle costs that are consumed by software maintenance, compared to software development. The figures range from 50 to 90% of the complete lifecycle cost [7]. The proportion for any specific system clearly depends on the application domain and the successful deployment of the software (some long-lived software is now over 40 years old!).
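To make the categorization concrete, the short sketch below tallies a hypothetical change log against the four Lientz and Swanson categories; the log entries and all names in the code are our own illustration, not data from the original survey.

```python
from enum import Enum
from collections import Counter

class MaintenanceType(Enum):
    PERFECTIVE = "change to functionality"
    ADAPTIVE = "change to environment"
    CORRECTIVE = "correction of an error"
    PREVENTIVE = "improvement to avoid future problems"

# Hypothetical change log; the requests and their tags are invented for illustration.
change_log = [
    ("add export-to-CSV report", MaintenanceType.PERFECTIVE),
    ("port to a new operating system release", MaintenanceType.ADAPTIVE),
    ("fix crash on empty input file", MaintenanceType.CORRECTIVE),
    ("restructure module to ease future changes", MaintenanceType.PREVENTIVE),
]

def effort_profile(log):
    """Count change requests per Lientz-and-Swanson category."""
    return Counter(kind for _, kind in log)

for kind, count in effort_profile(change_log).items():
    print(f"{kind.name.lower():>10}: {count}")
```

The surveys discussed above report effort fractions rather than request counts, but the breakdown has the same shape.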
The balance of lifecycle costs is subject to commercial pressures. It may be possible to discount the purchase price of a new software system if charges can be recovered later through higher maintenance fees. The vendor may be the only organization capable (i.e., having the knowledge and expertise) of maintaining the system. Depending on the contractual arrangement between the producer and the consumer, and on the expectations for the maintenance phase, the incentive during development to produce maintainable software may vary considerably. Software maintenance is thus not entirely a technical problem.

In the 1990s, the practice of outsourcing software maintenance became widespread. A customer company subcontracts all the support and maintenance of a purchased system to a specialist subcontractor. This has raised a set of new commercial issues, for example the potential risk of subcontracting out key company systems, and the difficulty of later recalling maintenance back in house or moving it to another subcontractor if the first does not perform acceptably.

It is important to recall that it is not simply the software application that evolves. Long-lived software may well outlast the environment within which it was produced. In the military domain, software has sometimes lasted longer than the hardware on which it was cross-compiled (presenting major problems if the software has to be modified). Software tools are often advocated for software maintenance, but these may also evolve (and disappear from the market) at a faster rate than the software application under maintenance.
1.3 Program Comprehension
Program comprehension is that activity by which software engineers come to an understanding of the behavior of a software system using the source code as the primary reference. Studies suggest that program comprehension is the major activity of maintenance, absorbing around 50% of the costs [2, Chap. 20; 13]. Program comprehension requires understanding of the user domain that the software serves as well as software engineering and programming knowledge of the program itself. Further details are given in Section 7.

The authors believe that comprehension plays a major role in the software lifecycle. During the early stages, the development team builds group understanding, and the system architects have a strategic understanding of the construction and operation of the system at all levels. At later stages, this knowledge is lost as developers disperse and the complexity of the software increases, making it more difficult to understand. Knowledge appears impossible to replace, once lost, and this forms the basis for our new model.
1.4 Standards
Software maintenance has been included within more general software engineering standardization initiatives. For example, the IEEE has published a comprehensive set of standards [14], of which Std. 1219 on maintenance forms a coherent part. The IEEE standard defines seven steps in a software maintenance change:

• Problem/modification identification, classification, and prioritization,
• Analysis and understanding (including ripple effects),
• Design,
• Implementation,
• Regression/system testing,
• Acceptance testing,
• Delivery.

Underpinning the standard is a straightforward iterative perspective of software maintenance: a change request is reviewed and its cost estimated; it is implemented; and then validation is carried out. The International Standards Organization (ISO) has also published a software maintenance standard [15]. This is in the context of Std. ISO/IEC 12207, which addresses how an agreement should be drawn up between a software acquirer and supplier (in which maintenance is included). The standard places considerable emphasis on planning.
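As a rough illustration (ours, not the standard's), the seven steps can be treated as an ordered checklist that every change request passes through before delivery; the class and field names below are invented for the sketch.

```python
# A minimal sketch of the IEEE Std. 1219 steps as an ordered checklist.
# The ChangeRequest class and its fields are illustrative names only.
IEEE_1219_STEPS = [
    "identification/classification/prioritization",
    "analysis and understanding (including ripple effects)",
    "design",
    "implementation",
    "regression/system testing",
    "acceptance testing",
    "delivery",
]

class ChangeRequest:
    def __init__(self, title: str):
        self.title = title
        self.completed = []          # steps finished so far, in order

    def complete_next_step(self) -> str:
        """Mark the next step in the sequence as done and return its name."""
        step = IEEE_1219_STEPS[len(self.completed)]
        self.completed.append(step)
        return step

    @property
    def delivered(self) -> bool:
        return len(self.completed) == len(IEEE_1219_STEPS)

cr = ChangeRequest("correct rounding error in invoice totals")
while not cr.delivered:
    print("done:", cr.complete_next_step())
```

The point of the sketch is only the ordering; in practice a request may of course be rejected or rescheduled at any step.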
1.5 Iterative Software Development
The iterative nature of the software lifecycle was noted as early as the 1970s by several authors. Wirth [16] proposed Stepwise Refinement, where functionality is introduced into the program in successive iterations. Basili and Turner [17] described another process in which functionality is added to the program in successive iterative steps. Large software projects of that time already followed iterative scenarios. A notable project was the development of the IBM OS operating system. The experience of that project was described in [18,19]. These authors noted that a software lifecycle is inherently iterative, that a substantial proportion of the functionality is added iteratively, and that initial development is simply the initialization stage of this process. See also [20].
1.6 The Laws of Software Evolution
The evolution of a software system conforms to laws, which are derived from empirical observations of several large systems [21-23]:

1. Continuing change. A program that is used and that, as an implementation of its specification, reflects some other reality undergoes continuing change or becomes progressively less useful.
2. Increasing complexity. As an evolving program is continuously changed, its complexity, reflecting deteriorating structure, increases unless work is done to maintain or reduce it.

The laws are only too apparent to anyone who has maintained an old, heavily changed system. Lehman also categorized software into three types, as follows [24]:

S-type software: this has a rigid specification (S means static) which does not change and which is well understood by the user. The specification defines the complete set of circumstances to which it applies. Examples are offered by many mathematical computations. It is therefore reasonable to prove that the implementation meets the specification.

P-type software: this has a theoretical solution, but the implementation of the solution is impractical or impossible. The classic example is offered by a program to play chess, where the rules are completely defined, but do not offer a practical solution to an implementation (P means practical). Thus we must develop an approximate solution that is practical to implement.

E-type software: this characterizes almost all software in everyday use, and reflects the real-world situation that change is inevitable (E means embedded, in the sense of embedded in the real world). The solution involves a model of the abstract processes involved, which includes the software. Thus the system is an integral part of the world it models, so change occurs both because the world changes and because the software is part of that world.

For E-type systems, Lehman holds the view that (as in any feedback system) the feed-forward properties such as development technologies, environments, and methods are relatively unimportant, and global properties of the maintenance process are insensitive to large variations in these factors. In contrast, he argues that the feedback components largely determine the behavior. So issues like testing, program understanding, inspection, and error reports are crucial to a well-understood stable process. This work may provide a theoretical perspective on why program comprehension is so important. He also amplified the concept of E-type software in his principle of uncertainty: the outcome of software system
operation in the real world is inherently uncertain with the precise area of uncertainty also unknowable. This work to establish a firm scientific underpinning for software evolution continues in the FEAST project [25-28].
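As a hedged illustration of the S-type/E-type distinction (our example, not Lehman's), the function below has a static, complete specification and can in principle be verified once and for all, whereas an E-type system such as a billing application embeds assumptions about a changing world and admits no such fixed check.

```python
# S-type example: greatest common divisor. The specification is fixed and
# complete, so the implementation can in principle be proved correct.
def gcd(a: int, b: int) -> int:
    while b:
        a, b = b, a % b
    return abs(a)

# Lightweight check against the specification for a few sample inputs.
for a, b in [(12, 18), (7, 13), (0, 5)]:
    g = gcd(a, b)
    assert a % g == 0 and b % g == 0          # g divides both arguments

# An E-type program, by contrast, models part of the real world (tax rules,
# user expectations), so no fixed assertion can capture "correct" behaviour
# permanently; the program must change as the world it models changes.
```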
1.7 Stage Distinctions
Sneed [29] and Lehner [30,31] are among the few authors to have observed that the maintenance phase is not uniform. Sneed classified systems into three categories. Throw-away systems typically have a lifetime of less than two years and are neither evolved nor maintained. Static systems are implemented in a well-defined area and, after being developed, change at a rate of less than 10% a year, remaining relatively static after development. Finally, evolutionary systems undergo substantial change after the initial development and last many years. The stages of these systems are initial development, evolution (called "further development" by Sneed), "maintenance" (i.e., servicing), and "demise" (i.e., phase-out).

Lehner [30] used this model and investigated the lifecycle of 13 business systems from Upper Austria to confirm the existence of the stages. He found that some systems belong to the category of static systems, where there is a very short period of evolution after the initial development and then the maintenance work decreases very dramatically. Other systems consume substantial effort over many years. Lehner confirmed a clear distinction between evolution (called "growth" in his paper) and servicing (called "saturation"), where the maintenance effort is substantially lower [31]. He thus refuted earlier opinions that the evolution and growth of software can continue indefinitely, and confirmed Sneed's earlier observation about the distinctions between the several stages, through observation of long-term data from several systems.
1.8 The Business Context
Some of the problems of the traditional lifecycle model stem from recent trends in the business context of software development. Different categories of software application are subjected to radically different kinds of business pressures. Most software engineering techniques available today for software specification, design, and verification have been presented as conventional supply-side methods, driven by technological advance. Such methods may work well for systems with rigid boundaries of concern, such as embedded systems, which may be characterized as risk-averse. In such domains, users have become familiar with long periods between requests for new features and their release in new versions (the so-called "applications backlog").
However, such techniques break down for applications where system boundaries are not fixed and are subject to constant urgent change. These applications are typically found in emergent organizations—"organizations in a state of continual process change, never arriving, always in transition" [32]. Examples include e-businesses as well as more traditional companies that continually need to reinvent themselves to gain competitive advantage [33]. A stockbroker, for example, may have a need to introduce a new service overnight; the service may only exist for another 24 hours before it is replaced by an updated version. In such organizations we have a demand-led approach to the provision of software services, addressing delivery mechanisms and processes which, when embedded in emergent organizations, give a software solution in emergent terms—one with continual change. The solution never ends and neither does the provision of software. The user demand is for change in "Internet time" and the result is sometimes termed engineering for emergent solutions.

Yet a third category is provided by so-called "legacy systems," which have been defined [34] as "systems which are essential to our organization but we don't know what to do with them." They pose the epitome of the maintenance challenge, because for the great majority, remedial action has never taken place, so whatever structure originally existed has long since disappeared. Legacy systems have been extensively addressed in the literature (see, e.g., [35,36]). The main conclusion is that there is no magic silver bullet; the future of each system needs to be analyzed, planned, and implemented based on both technical and business drivers, and taking into account existing and future staff expertise.

Finally, as software systems become larger and more complex, organizations find that it does not make sense to develop in-house all the software they use. Commercial off-the-shelf (COTS) components are becoming a larger part of most software projects. Selecting and managing such software represents a considerable challenge, since the user becomes dependent on the supplier, whose business and products may evolve in unexpected ways. Technical concerns about software's capabilities, performance, and reliability may become legal and contractual issues, and thus even more difficult to resolve.

It appears that many organizations are now or soon will be running, at the same time, a mixture of embedded or legacy software, with added COTS components, and interfaced to new e-business applications. The processes and techniques used in each category clash, yet managers need somehow to make the whole work together to provide the services that clients demand.
1.9 Review
We began by drawing on currently available evidence. Many ideas are now sufficiently mature that process standards have been defined or are emerging, at
least for large-scale, embedded, risk-averse systems. Empirical studies have shown that program comprehension is a crucial part of software maintenance, yet it is an activity that is difficult to automate and relies on human expertise. The increasing business stress on time to market in emergent organizations is increasing the diversity of software types that must be managed, with different processes being appropriate for each type. We conclude that:

• Human expertise during the maintenance phase represents a crucial dimension that cannot be ignored. At the moment, some of the hardest software engineering evolution tasks (such as global ripple analysis) need senior engineers fully to comprehend the system and its business role.
• We need to explore maintenance to reflect how it is actually done, rather than prescriptively how we would like it to be done.

The major contribution of this chapter is to propose that maintenance is not a single uniform phase, the final stage of the conventional lifecycle, but comprises several distinct stages and is in turn distinct from evolution. The stages are not only technically distinct, but also require a different business perspective. Our work is motivated by the fact that the Lientz and Swanson approach does not accord with modern industrial practice, based on analysis of a number of case studies. Knowing the fraction of effort spent on various activities during the full lifecycle does not help a manager to plan those activities, make technical decisions about multisourced component-based software, or address the expertise requirements for a team. The conventional analysis has not, over many years, justified the production of more maintainable software despite the benefits that should logically accrue. Especially, the conventional model does not sensibly apply to the many modern projects which are heavily based on COTS technology. Finally, it is only a technical model and does not include business issues.

We also had other concerns that the conventional Lientz and Swanson model did not elucidate. For example, there are few guidelines to help an organization assess whether a reverse engineering project would be commercially successful, despite the large amount of research and development in this field, and it was not clear why this was the case. The skills of staff involved in post-delivery work seem very important, but the knowledge needed, both by humans and in codified form, has not been clearly defined, despite a large number of projects which set out to recapture such knowledge.

Our motivation for defining a new perspective came from the very evident confusion in the area. A brief examination of a number of Web sites and papers concerned with software engineering, software evolution, and software maintenance also illustrated the confusion between terms such as maintenance and
evolution (with a completely uniform maintenance phase), and the almost universal acceptance of the Lientz and Swanson analysis. We found this situation inadequate for defining a clear research agenda that would be of benefit to industry. For these reasons, we shall largely restrict our use of the term "software maintenance" from now on in this chapter to historical discussions.

We have established a set of criteria for judging the success of our new perspective:

• It should support modern industrial software development, which stresses time to delivery and rapid change to meet new user requirements.
• It should help with the analysis of COTS-type software.
• It should be constructive and predictive—we can use it to help industry to recognize and plan for stage changes.
• It should clarify the research agenda—each stage has very different activities and requires very different techniques to achieve improvement.
• It should be analytic—we can use it to explain and clarify observed phenomena, e.g., that reverse engineering from code under servicing is very hard.
• It should be used to model business activity as well as technical activity.
• It should transcend as far as possible particular technologies and application domains (such as retail, defense, embedded, etc.) while being applicable to modern software engineering approaches.
• It should also transcend detailed business models and support a variety of product types. On one hand we have the shrink-wrap model where time to market, etc., are important considerations; at the other extreme we have customer-tailored software where the emphasis may be on other attributes like security, reliability, and ease of evolution.
• It should be supported by experimental results from the field.
• It should help to predict and plan, rather than simply be descriptive.

Our perspective is amplified below and is called the staged model.
1.10 The Stages of the Software Lifecycle
The basis of our perspective is that software undergoes several distinctive stages during its life. The stages are as follows:

• Initial development—the first functioning version of the system is developed.
• Evolution—if initial development is successful, the software enters the stage of evolution, where engineers extend its capabilities and functionality, possibly in major ways. Changes are made to meet new needs of its users, or because the requirements themselves had not been fully understood and needed to be given precision through user experience and feedback. The managerial decision to be made during this stage is when and how software should be released to the users (alpha, beta, commercial releases, etc.).
• Servicing—the software is subjected to minor defect repairs and very simple changes in function (we note that this term is used by Microsoft in referring to service packs for minor software updates).
• Phase-out—no more servicing is being undertaken, and the software's owners seek to generate revenue from its use for as long as possible. Preparation for migration routes is made.
• Closedown—the software is withdrawn from the market, and any users are directed to a replacement system if this exists.

The simplest variant of the staged software lifecycle is shown in Fig. 1. In the following sections, we describe each of these stages in more detail.
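Before turning to the individual stages, the ordering can be read as a simple one-way state machine; the sketch below is our paraphrase of the model and of Fig. 1, not code from the chapter. Note that there is no transition back from servicing to evolution, which is the point developed in the rest of the chapter.

```python
from typing import Optional

# A minimal sketch of the staged model as a one-way state machine.
# Stage names follow the chapter; the transition table is our reading of Fig. 1.
NEXT_STAGE = {
    "initial development": "evolution",   # first running version released
    "evolution": "servicing",             # loss of evolvability
    "servicing": "phase-out",             # servicing discontinued
    "phase-out": "close-down",            # switch-off
    "close-down": None,                   # end of life
}

def advance(stage: str) -> Optional[str]:
    """Move to the successor stage; the model allows no return to an earlier stage."""
    return NEXT_STAGE[stage]

stage: Optional[str] = "initial development"
while stage is not None:
    print(stage)
    stage = advance(stage)
```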
2. Initial Development

2.1 Introduction
The first stage is initial development, when the first version of the software is developed from scratch. This stage has been well described in the software engineering literature and there are very many methods, tools, and textbooks that address it in detail (for example, see [2,9,10,37]). The stage is also addressed by a series of standards by IEEE and ISO, or by domain- or industry-specific standards (for example, in the aerospace sector). Such initial development very rarely now takes place starting from a "green field" situation since there may be an inheritance of old legacy software, as well as external suppliers of new COTS components. Over the past 30 years, since the recognition of software engineering as a discipline [38], a great deal of attention has been paid to the process of initial development of reliable software within budget and to predictable timescales. Software project managers welcomed the earliest process model, called the waterfall model, because it offered a means to make the initial development process more visible and auditable through identifiable deliverables. Since there is such an extensive
[Figure: stages flow downward: Initial development → (first running version) → Evolution (with a loop of evolution changes) → (loss of evolvability) → Servicing (with a loop of servicing patches) → (servicing discontinued) → Phase-out → (switch-off) → Close-down.]

FIG. 1. The simple staged model.
literature dealing with initial development, we will cover only selected aspects of it.
2.2 Software Team Expertise
From the point of view of the future stages, several important foundations are laid during initial development. The first foundation is that the expertise of the software engineering team and in particular of the system architects is established. Initial development is the stage during which the team learns about the domain and the problem. No matter how much previous experience had been accumulated before the project started, new knowledge will be acquired during initial development. This experience is of indispensable value in that it will make future evolution of the software possible. So this aspect—the start of team learning—characterizes the first stage. Despite the many attempts to document and record such team learning, much of it is probably tacit—it is the sort of experience that is extremely difficult to record formally.
2.3 System Architecture
Another important result and deliverable from initial development is the architecture of the system, i.e., the components from which the system is built, their interactions, and their properties. The architecture will either facilitate or hinder the changes that will occur during evolution and it will either withstand those changes or break down under their impact. It is certainly possible to document architecture, and standard approaches to architectures (e.g., [39]) provide a framework. In practice, one of the major problems for architectural integrity during initial development is "requirements creep." If the requirements of the software system are not clear, or if they change as the software is developed, then a single clear view of the architecture is very difficult to sustain. Numerous approaches to ameliorating this problem have been devised, such as rapid application development, prototyping, and various management solutions, such as the Chief Programmer team, and (more recently) extreme programming [40]. The approach chosen can be strongly influenced by the form of legal contract between the vendor and customer which may induce either a short- or long-term view of the trade-off between meeting a customer's immediate needs and maintaining a clean software architecture.
2.4 What Makes Architecture Evolvable?
Thus for software to be easily evolved, it must have an appropriate architecture, and the team of engineers must have the necessary expertise. For example, in long-lived systems such as the ICL VME operating system, almost all subcomponents have been replaced at some stage or another. Yet despite this, the overall system has retained much of its architectural integrity [41]. In our experience, the evolution of architecture needs individuals of very high expertise, ability, and leadership. There may be financial pressures to take technical shortcuts in order to deliver changes very quickly (ignoring the problem that these conflict with the architectural demands). Without the right level of human skill and understanding it may not be realized that changes are seriously degrading the software structure until it is too late. There is no easy answer or "prescription" to making an architecture easily evolvable. Inevitably there is a trade-off between gains now, and gains for the future, and the process is not infallible. A pragmatic analysis of software systems which have stood the test of time (e.g., VME, or UNIX) typically shows the original design was undertaken by one, or a few, highly talented individuals. Despite a number of attempts, it has proved very difficult to establish contractually what is meant by maintainable or evolvable software and to define processes
that will produce software with these characteristics. At a basic level, it is possible to insist on the adoption of a house style of programming, to use IEEE or ISO standards in the management and technical implementation, to use modern tools, to document the software, and so on. Where change can be foreseen at design time, it may be possible to parameterize functionality. These techniques may be necessary, but experience shows that they are not sufficient. The problem may be summarized easily: a successful software system will be subjected to changes over its lifetime that the original designers and architects cannot even conceive of. It is therefore not possible to plan for such change, and certainly not possible to create a design that will accommodate it. Thus, some software will be able to evolve, but other systems will have an architecture that is at cross-purposes with a required change. To force the change may introduce technical and business risks and create problems for the future.
3. Evolution—The Key Stage
3.1 Introduction
The evolution stage is characterized as an iterative addition, modification, or deletion of nontrivial software functionality (program features). This stage represents our first major difference from the traditional model. The usual view is that software is developed and then passed to the maintenance team. However, in many of the case studies described later, we find that this is not the case. Instead, the software is released to customers, and assuming it is successful, it begins to stimulate enthusiastic users. (If it is not successful, then the project is cancelled!) It also begins to generate income and market share. The users provide feedback and requests for new features. The project team is living in an environment of success, and this encourages the senior designers to stick around and support the system through a number of releases. In terms of team learning, it is usually the original design team that sees the new system through its buoyant early days. Of course, errors will be detected during this stage, but these are scheduled for correction in the next release. During the evolution stage, the continued availability of highly skilled staff makes it possible to sustain architectural integrity. Such personnel would seem to be essential. Unfortunately we note that making this form of expertise explicit (in a textbook, for example) has not been successful despite a number of projects concerned with "knowledge-based software engineering." The increase in size, complexity, and functionality of software is partly the result of the learning process in the software team. Cusumano and Selby reported that a feature set during each iteration may change by 30% or more, as a direct
result of the learning process during the iteration [42]. Brooks also comments that there is a substantial "learning curve" in building a successful new system [18]. Size and complexity increases are also caused by customers' requests for additional functionality, and market pressures add further to growth, since it may be necessary to match features of the competitor's product. In some domains, such as the public sector, legislative change can force major evolutionary changes, often at short notice, that were never anticipated when the software was first produced. There is often a continuous stream of such changes.
3.2 Software Releases
There are usually several releases to customers during the software evolution stage. The time of each release is based on both technical and business considerations. Managers must take into account various conflicting criteria, which include time to market or time to delivery, stability of software, fault rate reports, etc. Moreover, the release can consist of several steps, including alpha and beta releases. Hence the release, which is the traditional boundary between software development and software maintenance, can be a blurred and, to a certain degree, arbitrary milestone. For software with a large customer base, it is customary to produce a sequence of versions. These versions coexist among the users and are independently serviced, mostly to provide bug fixes. This servicing may take the form of patches or minor releases, so that a specific copy of the software in the hands of a user may have both a version number and a release number. The releases rarely implement substantial new functionality; that is left to the next version. This variant of the staged lifecycle model for this situation is shown in Fig. 2.
3.3 Evolutionary Software Development
The current trend in software engineering is to minimize the process of initial development, making it into only a preliminary development of a skeletal version or of a prototype of the application. Full development then consists of several iterations, each adding certain functionality or properties to the already existing software system. In this situation, software evolution largely replaces initial development, which then becomes nothing more than the first among several equal iterations. The purpose of evolutionary development is to minimize requirements risks. As observed earlier, software requirements are very often incomplete because of the difficulties in eliciting them. The users are responsible for providing a complete set of accurate requirements, but often provide less than that, because of the lack of knowledge or plain omissions. On top of that, the requirements change
during development, because the situation in which the software operates changes. There is also a process of learning by both users and implementers, and that again contributes to changing requirements. Because of this, a complete set of requirements is impossible or unlikely in many situations, so one-step implementation of large software carries a substantial risk. Evolutionary software development that is divided into incremental steps lessens the risk because it allows the users to see and experience the incomplete software after each iteration.

FIG. 2. The versioned staged model. (For each version, the figure shows evolution, then servicing with patches, then phase-out and close-down, with evolution of the next version branching off from the evolution of the current one.)

One of the well-known and well-described processes of evolutionary software development is the Unified Software Development Process [43]. This process describes in detail how software is to be developed in incremental iterations. Each
incremental iteration adds new functionality or a new property (e.g., security, effectiveness) to the already existing software. This gradual increase in requirements lessens the risk involved, because each iteration provides fresh feedback about the progress of the project. The Unified Software Development Process describes a number of activities and specifies the documents to be produced during the iterations. However, Booch reports that a major criticism leveled at the Unified Software Development Process and similar approaches is that the resulting processes are rigid, require extensive documentation and many steps, and consequently are too expensive in time for many modern businesses [44].

An emerging alternate approach for systems that require rapid evolution is the agile method, an example of which is Extreme Programming (XP) [40]. XP almost abolishes the initial development phase. Instead, programmers work closely with customers to develop a set of "stories" describing desired features of the new software. Then a series of releases is implemented, with typically only a few weeks between releases. The customer defines the next release by choosing the stories to implement. Programmers take the stories and define more fine-grained tasks, with one programmer taking responsibility for each. Test cases are defined before programming begins. An interesting aspect of XP is that the responsible programmer signs up a partner for the task; all work is done in pairs with both working at the same workstation. Thus, knowledge is shared between at least two programmers, and some self-checking is built in without requiring organized walkthroughs or inspections. Pairs are broken up and reformed for different tasks so experience can be distributed. There is little documentation of code or design, although considerable care is taken to maintain tests that can be rerun in the future.

Agile methods seem to discard all the software engineering experience of the past 20 years and place their reliance purely on the retention of expert team personnel for as long as the software needs to evolve. They thus gain valuable time, but perhaps at considerable risk. It remains to be seen whether this kind of methodology will be viable beyond the short term or whether managers and stockholders will instead discover that their critical applications have suddenly made unplanned and costly transitions to servicing.
4. Servicing
4.1 Software Decay
As previously mentioned, software enters the servicing stage as human expertise and/or architectural integrity are lost. Servicing has been alternatively called
"saturation" [30,31], "aging software," "decayed software," "maintenance proper," and "legacy software." During this stage, it is difficult and expensive to make changes, and hence changes are usually limited to the minimum. At the same time, the software still may have a "mission critical" status; i.e., the user organization may rely on the software for services essential to its survival. Code decay (or aging) was discussed in [45] and empirical evidence for it was summarized in [46]. The symptoms of code decay include: — excessively complex (bloated) code, i.e., code that is more complex than it needs to be, — vestigial code that supports features no longer used or required, — frequent changes to the code, — history of faults in the code, — delocalized changes are frequent, i.e., changes that affect many parts of the code, — programmers use "kludges," i.e., changes done in an inelegant or inefficient manner, for example, clones or patches, — numerous dependencies in the code. As the number of dependencies increases, the secondary effects of change become more frequent and the possibility of introducing an error into software increases.
4.2 Loss of Knowledge and Cultural Change
In order to understand a software system, programmers need many kinds of knowledge. The programmers must understand the domain of the application in detail. They must understand the objects of the domain, their properties, and their relationships. They must understand the business process that the program supports, as well as all activities and events of that process. They also must understand the algorithms and data structures that implement the objects, events, and processes. They must understand the architecture of the program and all its strengths and weaknesses, including the imperfections by which the program differs from an ideal. This knowledge may be partially recorded in program documentation, but usually it is of such a size and complexity that a complete recording is impractical. A great part of it usually is not recorded and has the form of individuals' experiences or groups' oral tradition.

This knowledge is constantly at risk. Changes in the code make knowledge obsolete. As the symptoms of decay proliferate, the code becomes more and more complicated, and larger and deeper knowledge is necessary in order to understand it. At the same time, there is usually a turnover of programmers on the project. Turnover may have different causes, including the natural turnover of the programmers for their personal reasons, or the needs of other projects that force
managers to reassign programmers to other work. Based on the success of the project, team members are promoted, moved to other projects, and generally disperse. The team expertise to support strategic changes and evolution of the software is thus lost; new staff members joining the team have a much more tactical perspective (e.g., at code level) of the software. Evolvability is lost and, accidentally or by design, the system slips into servicing. However, management that is aware of the decline may anticipate the eventual transition and plan for it. Typically, the current software is moved to the servicing stage, while the senior designers initiate a new project to release a radically new version (often with a new name, a new market approach, etc.).

A special instance of the loss of knowledge is cultural change in software engineering [47]. Software engineering has almost a half-century of tradition, and there are programs still in use that were created more than 40 years ago. These programs were created in a context of completely different properties of hardware, languages, and operating systems. Computers were slower and had much smaller memories, often requiring elaborate techniques to deal with these limitations. Program architectures in use were also different; modern architectures using techniques such as object orientation were rare at that time. The programmers who created these programs are very often unavailable. Current programmers who try to change these old programs face a double problem: not only must they recover the knowledge that is necessary for that specific program, but they also must recover the knowledge of the culture within which it and similar programs were created. Without that cultural understanding they may be unable to make the simplest changes in the program.
4.3 Wrapping, Patching, Cloning, and Other "Kludges"
During servicing, it is difficult and expensive to make changes, and hence changes are usually limited to the minimum. The programmers must also use unusual techniques for changes, the so-called "kludges." One such technique is wrapping. With wrapping, software is treated as a black box and changes are implemented as wrappers in which the original functionality is changed into a new one by modifying the inputs to and outputs from the old software. Obviously, only changes of a limited kind can be implemented in this way. Moreover, each such change further degrades the architecture of the software and pushes it deeper into the servicing stage.

Another kind of change that frequently is employed during servicing is termed cloning. If programmers do not fully understand the program, then instead of finding where a specific functionality is already implemented, they create another implementation. Thus a program may end up having several implementations of
identical or nearly identical stacks or other data structures, several implementations of identical or almost identical algorithms, etc.

Sometimes programmers intentionally create clones out of fear of secondary effects of a change. As an example, let us assume that function foo() requires a change, but foo() may be called from other parts of the code so that a change in foo() may create secondary effects in those parts. Since knowledge of the program in the servicing stage is low, the programmers choose a "safe" technique: they copy-and-paste foo(), creating an intentional clone foo1(). Then they update foo1() so that it satisfies the new requirements, while the old foo() still remains in use by other parts of the program. Thus there are no secondary effects in the places where foo() is called. While programmers solve their immediate problem in this way, they negatively impact the program architecture and make future changes harder.

The presence of a growing number of clones in code is a significant symptom of code decay during servicing. Several authors have proposed methods of detecting clones automatically using substring matching [48] or subtree matching [49]. Software managers could consider tracking the growth of clones as a measure of code decay, and consider remedial action if the system seems to be decaying too rapidly [50,51].

Servicing patches are fragments of the code, very often in binary form, that are used to distribute bug fixes in a widely distributed software system.
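To make the cloning kludge concrete, the following minimal C++ sketch illustrates the foo()/foo1() scenario described above; the function bodies and the discount rule are hypothetical and serve only as an illustration.

    #include <iostream>

    // Original function, called from many places in the program.
    double foo(double price) {
        return price * 0.95;   // long-standing 5% discount
    }

    // Intentional clone: foo() was copy-and-pasted and then modified to satisfy
    // a new requirement. The old foo() is left untouched so that its existing
    // callers cannot be affected by the change.
    double foo1(double price) {
        return price * 0.90;   // new 10% discount, used only by the new feature
    }

    int main() {
        std::cout << foo(100.0) << '\n';   // legacy call sites still use foo()
        std::cout << foo1(100.0) << '\n';  // only the new code calls foo1()
    }

The immediate problem is solved without touching the callers of foo(), but the two functions now evolve independently, which is exactly the kind of decay symptom described above.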
4.4 Reengineering
In the servicing stage, it is difficult to reverse the situation and return to the stage of evolution. That would require regaining the expertise necessary for evolution, recapturing the architecture, restructuring the software, or all of these. Both restructuring and regaining expertise are slow and expensive processes, with many risks involved, and new staff may have to be recruited with appropriate and relevant skills. As analyzed by Olsem [52], the users of a legacy system build their work routines and expectations based on the services it provides and are thus very sensitive about any disruption of routine. Their tolerance of changes may be much smaller than the tolerance displayed by the users of brand new systems. Thus, user rigidity also makes reengineering a very risky proposition. In order to minimize the risk and the disruption of user routine, Olsem advocates incremental reengineering, where the system is reengineered one part at a time. The new parts temporarily coexist with the old parts and old parts are replaced one-by-one, without interruption of the service. A case study of such reengineering was published in [53].
This approach to reengineering avoids disruption of the user's routines, but it also preserves the interfaces among the parts and hence the overall architecture of the system. If the architecture is also obsolete, the process provides only partial relief. This impacts the business case for reengineering, since the benefits returned compared to the investment required may be difficult to justify. In the worst case, we are spending resources for little or no benefit.

A further difficulty with reengineering of widely used software is the problem of distribution. Getting the new version out to all users can be expensive or impossible, so the burden of servicing the old version may persist. This problem will surely become even greater as software is increasingly introduced into more consumer goods such as mobile phones. Once object-level code has been released in such devices, it is all but impossible to go back to the evolutionary stage. Based on our experience, complete reengineering as a way of stepping back from servicing to evolution is very rare and expensive, so that entrance into the servicing stage is for all practical purposes irreversible.
5. Phase-Out and Closedown
At some stage the system is essentially frozen and no further changes are allowed. This stage, which we call phase-out, has also been called "decline" [31]. Help desk personnel may still be in place to assist users in running the system, but change requests are no longer honored. Users must work around any remaining defects instead of expecting them to be fixed. Finally, the system may be completely withdrawn from service and even this basic level of staffing is no longer provided. The exact course of phase-out and closedown will depend on the specific system and the contractual obligations in place. Sometimes a system in phase-out is still generating revenue, but in other cases (such as most shrink-wrap software) the user has already paid for it. In this second case, the software producer may be much less motivated to provide support. In a survey by Tamai and Torimitsu [54], an investigation was undertaken of the life span of software in Japan. The survey dealt with software from several application areas such as manufacturing, financial services, construction, and mass media. It found that for software larger than 1 million lines of code, the average life was 12.2 years with a standard deviation of 4.2 years. The lifetime varied more widely for smaller software. Tamai and Torimitsu's work also classified the causes of the closedowns in the following way. Hardware and/or system change caused the closedown in 18%
of the cases. New technology was the reason in 23.7% of the cases. A need to satisfy new user requirements (that the old system was unable to satisfy) was the cause in 32.8% of the cases. Finally, deterioration of software maintainability was the culprit in 25.4% of the cases. We can speculate that at the end of the lifetime, the software was in the phase-out stage and in most of the cases, there was an event (hardware change, new technology, new requirements) that pushed software into closedown. Only in 25.4% of the cases did closedown occur naturally as a free management decision, without any precipitating event from the outside.

There are a number of issues related to software shutdown. Contracts should define the legal responsibilities in this phase. In some cases, such as outsourced software in which one company has contracted with another to develop the system, the relationships may be quite complex. Final ownership and retention of the system, its source code, and its documentation should be clearly defined. Frequently, system data must be archived and access must be provided to it. Examples of such data are student transcripts, birth certificates, and other long-lived data. The issues of data archiving and long-term access must be solved before the system is shut down.
6. Case Studies
Our new lifecycle model was derived from involvement with and observation of real industrial and commercial software development projects in a number of domains. We then abstracted from the particular experiences and practices of these projects, in order to draw our new perspective. Lehner [30,31] has provided empirical evidence that the activities of "maintenance" change during the lifecycle of a project. However, other than this, very little data have been collected, and our evidence is gleaned from published case studies and personal practical experience. The experience from these projects is summarized in the rest of this section.
6.1 The Microsoft Corporation
The description of the Microsoft development process, as given by Cusumano and Selby [42], illustrates the techniques and processes used for high-volume mass market shrink-wrapped software. In particular, we can draw on the following evidence:
1. At Microsoft, there is a substantial investment in the initial development stage, before revenue is generated from sales. This includes testing.
2. The division between initial development and evolution is not sharp; the technique of using beta releases to gain experience from customers is widely used.
3. Microsoft quite explicitly tries to avoid the traditional maintenance phase. It is realized that with such a large user base, this is logistically impossible. Object code patches (service packs) are released to fix serious errors, but not for feature enhancement.
4. The development of the next release happens while the existing release is still achieving major market success. Thus Windows 98 was developed while Windows 95 was still on its rising curve of sales. Microsoft did not wait until Windows 95 sales started to decline to start development, and to do so would have been disastrous. Market strategy is based upon a rich (and expanding) set of features. As soon as Windows 98 reached the market, sales of Windows 95 declined very rapidly. Shortcomings and problems in Windows 95 were taken forward for rectification in Windows 98; they were not addressed by maintenance of Windows 95.
5. Microsoft does not support old versions of software, which have been phased out, but it does provide transition routes from old to new versions.
6. Organizational learning is becoming evident through the use of shared software components. Interestingly, Microsoft has not felt the need for substantial documentation, indicating that the tacit knowledge is retained effectively in design teams.
We conclude that evolution represents Microsoft's main activity, and servicing, by choice, a very minor activity.
6.2 The VME Operating System
This system has been implemented on ICL (and other) machines for the past 30 years or so, and has been written up by International Computers Ltd. [41]. It tends to follow the classical X.Y release form, where X represents a major release (evolution) and Y represents minor changes (servicing). In a similar way to Microsoft, major releases tend to represent market-led developments incorporating new or better facilities. The remarkable property of VME is the way in which its original architectural attributes have remained over such a long period, despite the huge evolution in the facilities. It is likely that none of the original source code from the early 1970s still is present in the current version. Yet its architectural integrity is clearly preserved. We can deduce that:
1. There was a heavy investment in initial development, which has had the effect of a meticulous architectural design. The system has been evolved
by experts with many years of experience, but who also have been able to sustain architectural integrity.
2. Each major release is subject to servicing, and eventually that release is phased-out and closed down.
3. Reverse engineering is not used from one major release to another; evolution is accomplished by team expertise and an excellent architecture.
6.3 The Y2K Experience
An excellent example of software in its servicing stage and its impact has been provided by the "Y2K" episode. It was caused by a widespread convention that limited the representation of a year in a date to the last two digits; for example, the year 1997 was represented by two digits "97." Based on this convention, when January 1, 2000 was reached, the year represented as "00" would be interpreted by computers as 1900, with all accompanying problems such misrepresentation could cause.

The origin of the two-digit convention goes back to the early programs of the 1950s when the memory space was at a premium and hence to abbreviate the year to its two final digits seemed reasonable, while the problems this would cause seemed very remote. Even as the time was moving closer toward the fateful January 1, 2000, programmers continued to use the entrenched convention, perhaps out of inertia and habit.

The extent of the problem became obvious in the late 1990s and a feverish attempt to remedy the problem became widespread. Articles in the popular press and by pessimists predicted that the programs would not be updated on time, that the impacts of this failure would be catastrophic, disrupting power supplies, goods distribution, financial markets, etc. At the height of the panic, the president of the USA appointed a "Y2K czar" with an office close to the White House whose role was to coordinate the efforts to fix the Y2K problem (and if this did not succeed, to deal with the ensuing chaos). Similar appointments were made in other countries, such as the UK.

Fortunately, the dire Y2K predictions did not materialize. Many old programs were closed down. The programs that could not be closed down and needed to be repaired were indeed fixed, mostly by a technique called "windowing," which is a variant of wrapping. The two-digit dates are reinterpreted by moving the "window" from years 1900-1999 to a different period, for example, 1980-2080. In the new window, "99" is still interpreted as 1999, but "00", "01", ..., are now interpreted as 2000, 2001, etc. This worked well (for the time being) and has postponed the problem to the time when the new "window" will run out. The Y2K czar quietly closed his office and left town. There were, in fact, very few reported problems.
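A minimal sketch of the windowing repair described above, assuming a window running from 1980 to 2079 (in the spirit of the 1980-2080 window mentioned in the text); the function name and pivot choice are illustrative only.

    #include <iostream>

    // Reinterpret a two-digit year using a sliding "window". With a pivot of
    // 1980, values 80-99 map to 1980-1999 and values 00-79 map to 2000-2079,
    // so existing two-digit data keeps working until the window itself runs out.
    int windowedYear(int twoDigitYear, int pivot = 1980) {
        int century = (twoDigitYear >= pivot % 100) ? 1900 : 2000;
        return century + twoDigitYear;
    }

    int main() {
        std::cout << windowedYear(99) << '\n';  // 1999
        std::cout << windowedYear(0)  << '\n';  // 2000
        std::cout << windowedYear(79) << '\n';  // 2079
    }

Such a fix leaves the stored two-digit data and most of the surrounding code untouched, which is precisely why it was attractive for systems deep in the servicing stage.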
However, the whole Y2K effort had a staggering cost: it is estimated that, worldwide, about 45% of all applications were modified and 20% were closed down, at a cost of between $375 and $750 billion [55]. From the viewpoint of our model, the Y2K problem was caused by the fact that many legacy systems were in the servicing stage, and although Y2K rectification would be only a routine change during the evolution stage, it was a hard or very hard change during the servicing stage. At heart, the problem was caused by a design decision (a key data representation choice) and changes in design are very hard to achieve successfully during servicing. The reason why the Y2K problem caught so many managers by surprise is the fact that the difference between the evolution and servicing stages was not well understood.
6.4 A Major Billing System
This 20-year-old system generates revenue for its organization, and is of strategic importance. However, the marketplace for the organization's products has changed rapidly in recent years, and the billing system can no longer keep up with market-led initiatives (such as new products). Analysis shows that this system has slid from evolution into servicing without management realizing it, the key designers have left, the architectural integrity has been lost, changes take far too long to implement, and revalidation is a nightmare; it is a classical legacy system. The only solution (at huge expense) is to replace it.
6.5 A Small Security Company
A small company has a niche market in specialized hardware security devices. The embedded software is based around Microsoft's latest products. The products must use rapidly changing hardware peripherals, and the company must work hard to keep ahead of the competition in terms of the sophistication of the product line. The software therefore consists of COTS components (e.g., special device drivers), locally written components, some legacy code, and glue written in a variety of languages (e.g., C, C++, BASIC). The system was not planned in this way, but has evolved into this form because of "happenstance." The software is the source of major problems. New components are bought, and must work with the old legacy. Powerful components are linked via very low-level code. Support of locally written components is proving very hard. From our perspective, we have a software system in which some parts are in initial development, some are in evolution, others are in servicing, while others are ready for phase-out. There is no sustained architectural design.
The existing (Lientz and Swanson) type analysis sheds little light on this problem. Our model allows each component and connector to be assessed in terms of its stage. This should then allow the company to develop a support plan. For example, a component being serviced can have a dependency on another component being evolved.
6.6 A Long-Lived Defense System
A different type of case study is represented by a long-lived, embedded, defense system which is safety related. This was developed initially many years ago (in Assembler) and needs to be continually updated to reflect changes in the supporting hardware. In classic terms, this system would be thought of as being in the maintenance phase, but according to our analysis, it is still being evolved, yet surely but inexorably slipping into servicing:
1. The software is still core to the organization, and will be for many years. Failure of the software in service would be a disaster.
2. Many experts with in-depth knowledge of the software (and hardware) are working on the system. They understand the architecture and are Assembler experts. The software is being changed to meet quite radical new requirements. It is free from ad hoc patches, and consistent documentation is being produced, although structurally it is decaying. Comprehensive test procedures are in place and are used rigorously. The system engineers understand the impact of local changes on global behavior. Mature, well-understood processes are employed.
3. Conversely, some experts have recently left the organization, and this loss of expertise, accompanied by the structural decay mentioned above, is a symptom of serious software decay. Reverse engineering is not considered feasible, partly because of the lack of key expertise. If the process of decay reaches a certain point, it is likely that the system will be developed again ab initio.
6.7 A Printed Circuits Program
One of the co-authors was for several years the manager of a software department. One of the projects in that department was a program that designed printed circuit boards, and was used by customers within the same institution. Because of the critical nature of the product, it had to be constantly evolved as new requirements appeared and had to be satisfied. The original developers were evolving the program, but at the same time they were some of the most qualified developers in the department.
Since there was a backlog of other projects that required high expertise, and the difference between evolution and servicing was not understood at the time, the manager tried several times to transfer the evolution responsibility to different people. However, all attempts to train new programmers to take over the evolution task and relieve the original developers turned out to be unsuccessful. In all instances, the new trainees were able to do only very limited tasks and were unable to make strategic changes in the program. At that time, this inability to transfer a "maintenance" task proved to be baffling to the manager. In hindsight, the expertise needed for evolution was equivalent to, or perhaps even greater than, the expertise needed to create the whole program from scratch. It proved more cost-effective to assign the new programmers to the new projects and to leave the experienced developers to evolve the printed circuit program.
6.8 Project PET
This case study is an example of an attempted reengineering project. PET is a CAD tool developed by a car company [56,57] to support the design of the mechanical components (transmission, engine, etc.) of a car. It is implemented in C++, and every mechanical component is modeled as a C++ class. The mechanical component dependency is described by a set of equations that constitute a complex dependency network. Whenever a parameter value is changed, an inference algorithm traverses the entire network and recalculates the values of all dependent parameters. PET consists of 120,000 lines of C++ code and is interfaced with other CAD software, including 3-D modeling software.

After the initial implementation, there was a massive stage of evolution where, in our estimate, more than 70% of the current functionality was either radically changed or newly introduced. The evolution was driven mostly by the user requests. All changes to PET were performed as quickly as possible in order to make the new functionality available. This situation prevented conceptual changes to the architecture, and the architecture progressively deteriorated. Also the original architecture was not conceived for changes of this magnitude. As a result, the architecture has drastically deteriorated to the point where the requested evolutionary changes are becoming increasingly difficult. The symptoms of deterioration include the introduction of clones into the code and the misplacement of code into the wrong classes. During a code review we identified 10% of the PET code as clones. Because of code deterioration, the evolvability of the PET software has been decreasing and some evolutionary changes are becoming very hard. An example of a hard change is a modification to the inferencing algorithms. As mentioned above, the program uses inferencing by which the relationships between the mechanical components are maintained. The program would greatly benefit from
an introduction of a commercially available component for inferencing that contains more powerful inferencing algorithms, but the current architecture with misplaced code and clones does not make that change feasible. Because of this, the changes done to the software have the character of patches that further corrode the architecture. Recently a decision was made to move PET software into a servicing stage, with work performed by a different group of people, and to stop all evolutionary changes. While PET will be serviced and should meet the needs of the users in this situation, a new version of PET will be developed from scratch, embodying all the expertise gained from the old PET evolution. The attempt to reengineer the old version of PET has been abandoned.
6.9 The FASTGEN Geometric Modeling Toolkit
FASTGEN is a collection of Fortran programs used by the U.S. Department of Defense to model the interactions between weapons (such as bombs or missiles) and targets (such as tanks or airplanes). Targets are modeled as large collections of triangles, spheres, donuts, and other geometric figures, and ray-tracing programs compute the effects of the explosion of a weapon. FASTGEN was originally developed in the late 1970s by one contractor, and has since been modified many times by other agencies and contractors at different sites ranging from California to Florida. Originally developed primarily for mainframe computers, it has been ported to supercomputer platforms such as CDC and Cray, to the Digital Equipment VAX, and, in the 1990s, to PC and Unix workstations. A study of CONVERT, one of the FASTGEN programs, illustrates the impact of the original architecture on program comprehension and evolvability [58]. The original code was poorly modularized with large, noncohesive subroutines and heavy use of global data. The program still contains several optimizations that were important for the original mainframe environment, but that now make comprehension very difficult. For example, records are read and written in arbitrary batches of 200 at a time; in the original environment, input/output could cause the program to be swapped out of memory, so it was much more efficient to read many records before doing computations. Current versions of the program preserve this complex batching logic that is now obscure and irrelevant. FASTGEN is now in a late servicing stage, bordering on phase-out.
6.10 A Financial Management Application
This application dates from the 1970s, when it was implemented on DEC PDP computers. Recently it has been ported to PC/Windows machines. It is
financially critical to its users. The software is modest in size (around 10,000 lines of code). Prior to the port, the software was stable and had evolved very little. In Lehman's terms this was an S-type system, with very little evolution, and very long lived. During the port, it had been decided to modify the code, preserving the original architecture as far as possible. Unfortunately, this had the following effects:
(a) On the PDP series, different peripheral drivers (magnetic tapes, paper tape, disks, etc.) had very different interfaces. These differences were not well hidden, and impacted much of the application code. In the PC implementation, Windows has a much cleaner unified view of access to disks, CDs, etc. (i.e., byte vectors). Yet the original PDP peripheral code structure was retained, because the designers of the port could not be sure of correctly handling all side effects if the structure were changed. As a result, the code is much longer than it needs to be, with much redundancy and unwarranted complexity.
(b) Even worse, the application needs to run in real time. The real time model employed in the original language has been retained in the port, yet the model for the new application language has been added. The result is a labyrinthine real time program structure that is extremely hard to comprehend.
This application has now slipped to the end of the servicing stage and only the simplest changes are possible. The expertise does not exist to reengineer it. If major changes are needed, the system will have to be rewritten.
7. Software Change and Comprehension
7.1 The Miniprocess of Change
During both the evolution and the servicing stages, a software system goes through a series of changes. In fact, both evolution and servicing consist of repeated change, and hence understanding the process of software change is the key to understanding these stages and the problems of the whole software lifecycle. Accordingly in this section we look at the process of change in more detail, decomposing change into its constituent tasks. A particularly important task is program comprehension, because it consumes most of the programmer's time, and its success dominates what can or cannot be accomplished by software change. The tasks comprising software change are listed in [14] (see the Introduction). They are summarized in the miniprocess of change. In order to emphasize tasks
that we consider important, we divide them differently from the standard and group them into the miniprocess in the following way:
• Change request: the new requirements for the system are proposed.
• Change planning: to analyze the proposed changes.
  o Program comprehension: understand the target system.
  o Change impact analysis: analyze the potential change and its scope.
• Change implementation: the change is made and verified.
  o Restructuring (re-factoring) for change.
  o Initial change.
  o Change propagation: make secondary changes to keep the entire system consistent.
  o Validation and verification: to ensure that the system after the change meets the new requirement and that the old requirements have not been adversely impacted by the change.
  o Redocumentation: to project the change into all documentation.
• Delivery.
These tasks are discussed in more detail in this section.
7.2 Change Request and Planning
The users of the system usually originate the change requests (or maintenance requests). These requests have the form of fault reports or requests for enhancements. Standard practice is to have a file of requests (backlog) that is regularly updated. There is a submission deadline for change requests for the next release. After the deadline, the managers decide which particular requests will be implemented in that release. All requests that are submitted after the deadline or the requests that did not make it into the release will have to wait for the following release. Even this superficial processing of change requests requires some understanding of the current system so that the effort required may be estimated. It is a common error to underestimate drastically the time required for a software change and thus the time to produce a release. For small changes, it suffices to find the appropriate location in the code and replace the old functionality with the new one. However, large incremental changes
require implementation of new domain concepts. Consider a retail "point-of-sale" application for handling bar code scanning and customer checkout. The application would need to deal with several forms of payment, such as cash and credit cards. An enhancement to handle check payments would involve a new concept, related to the existing payment methods but sufficiently different to require additional data structures, processing for authorization, etc. There will be quite a lot of new code, and care is needed to maintain consistency with existing code to avoid degrading the system architecture.

Concepts that are dependent on each other must be implemented in the order of their dependency. For example, the concept "tax" is dependent on the concept "item" because different items may have different tax rates and tax without an item is meaningless. Therefore, the implementation of "item" must precede the implementation of "tax." If several concepts are mutually dependent, they must be implemented in the same incremental change. Mutually independent concepts can be introduced in arbitrary order, but it is advisable to introduce them in the order of importance to the user. For example, in the point-of-sale program it is more important to deal correctly with taxes than to support several cashiers. An application with correct support for taxes is already usable in stores with one cashier. The opposite order of incremental changes would postpone the usability of the program.

Change planning thus requires the selection of domain concepts to be introduced or further developed. It also requires finding in the old code the location where these concepts should be implemented so that they properly interact with the other already present concepts. Obviously these tasks require a deep understanding of the software and of its problem domain.
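A minimal C++ sketch of the "item before tax" ordering described above; the class names and the tax handling are hypothetical and serve only to illustrate how one incremental change builds on the previous one.

    #include <vector>

    // First incremental change: introduce the "item" concept.
    struct Item {
        double price;
        double taxRate;   // field added by the second increment, once items exist
    };

    // Second incremental change: introduce the "tax" concept. It can only be
    // implemented after Item, because a tax rate without an item is meaningless.
    double totalWithTax(const std::vector<Item>& sale) {
        double total = 0.0;
        for (const Item& item : sale)
            total += item.price * (1.0 + item.taxRate);
        return total;
    }

With this ordering the program is already usable after the first increment, and later, independent increments such as support for several cashiers can be layered on top in order of importance to the user.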
7.3 Change Implementation
Implementation of the software change requires several tasks, often with some looping and repetition. If the proposed change has a large impact on the architecture, there may be a preliminary restructuring of the program to maintain cleanness of design. In an object-oriented program, for example, this may involve refactoring to move data or functions from one class to another [59]. The actual change may occur in any one of several ways. For small changes, obsolete code is replaced by a new code. For large incremental changes, new code is written and then "plugged" into the existing system. Several new classes implementing a new concept may be written, tested, and interfaced with the old classes already in the code. Very often the change will propagate; that is, it will require secondary changes. In order to explain change propagation, we must understand that software consists
of entities (classes, objects, functions, etc.) and their dependencies. A dependency between entities A and B means that entity B provides certain services, which A requires. A function call is an example of a dependency among functions. Different programming languages or operating systems may provide different kinds of entities and dependencies. A dependency of A on B is consistent if the requirements of A are satisfied by what B provides.

Dependencies can be subtle and of many kinds. The effect may be at the code level; for example, a module under change may use a global variable in a new way, so all uses of the global variable must be analyzed (and so on). Dependencies can also occur via nonfunctional requirements or business rules. For example, in a real time system, alteration of code may affect the timing properties in subtle ways. For this reason, the analysis of a change, and the determination of which code to alter often cannot easily be compartmentalized. Senior maintenance engineers need a deep understanding of the whole system and how it interacts with its environment to determine how a required change should be implemented while hopefully avoiding damage to the system architecture. The business rules may be extremely complex (e.g., the "business rules" that address the navigation and flight systems in an on-board safety critical flight control system); in an old system, any documentation on such rules has probably been lost, and determining the rules retrospectively can be an extremely time-consuming and expensive task (for example, when the domain expert is no longer available).

Implementation of a change in software thus starts with a change to a specific entity of the software. After the change, the entity may no longer fit with the other entities of the software, because it may no longer provide what the other entities require, or it may now require different services from the entities it depends on. The dependencies that no longer satisfy the require-provide relationships are called inconsistent dependencies (inconsistencies for short), and they may arise whenever a change is made in the software. In order to reintroduce consistency into software, the programmer keeps track of the inconsistencies and the locations where the secondary changes are to be made. The secondary changes, however, may introduce new inconsistencies, etc. The process in which the change spreads through the software is sometimes called the ripple effect of the change [60,61]. The programmer must guarantee that the change is correctly propagated, and that no inconsistency is left in the software. An unforeseen and uncorrected inconsistency is one of the most common sources of errors in modified software.

A software system consists not just of code, but also of documentation. Requirements, designs, test plans, and user manuals can be quite extensive and they often are also made inconsistent by the change. If the documentation is to be useful in the future, it must also be updated.
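The notion of an inconsistent dependency and the resulting ripple effect can be illustrated with a tiny, hypothetical C++ fragment; the function names and the added parameter are invented for the example.

    // Before the change, callers depended on this interface:
    //     double computeTotal(double price, double taxRate);

    // The change: a new requirement adds a per-customer discount parameter.
    double computeTotal(double price, double taxRate, double discount) {
        return price * (1.0 + taxRate) * (1.0 - discount);
    }

    // Every existing call such as computeTotal(100.0, 0.07) is now an
    // inconsistent dependency: the caller no longer supplies what
    // computeTotal() requires. A secondary change is needed at each call
    // site, and those changes may in turn break further code; this is the
    // ripple effect described in the text.
    double checkoutOneItem(double price) {
        return computeTotal(price, 0.07, 0.0);   // secondary change applied
    }

Keeping track of such broken require-provide relationships until none remain is what the propagation step has to guarantee.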
Obviously the modified software system needs to be validated and verified. The most commonly used technique is regression testing, in which a set of system tests is conserved and rerun on the modified system. The regression test set needs to have fairly good coverage of the existing system if it is to be effective. It grows over time as tests are added for each new concept and feature. The regression test set will also need to be rerun many times over the life of the project. Regression testing is thus not cheap, so it is highly desirable to automate the running of the tests and the checking of the output. Testing is, however, not enough to guarantee that consistency has been maintained. Inspections can be used at several points in the change miniprocess to confirm that the change is being introduced at the right point, that the resulting code meets standards, and that documentation has indeed been updated for consistency.

It is evident that clear understanding of the software system is essential at all points during change implementation. Refactoring requires a vision of the architecture and of the division of responsibilities between modules or classes. Change propagation analysis requires tracing the dependencies of one entity on another, and may require knowledge of subtle timing or business rule dependencies. Documentation updating can be among the most knowledge-demanding tasks since it requires an awareness of the multiple places in each document where any particular concept is described. Even testing requires an understanding of the test set, its coverage, and of where different concepts are tested.

In the previous paragraphs we have described what should be done as part of each software change. Given the effort required to "do it right" it is not surprising to discover that, in practice, some of these tasks are skipped or slighted. In each such case, a trade-off is being made between the immediate cost or time and an eventual long-term benefit. As we will discuss in Section 8, it is not necessarily irrational to choose the immediate over the long-term, but all such decisions need to be taken with full awareness of the potential consequences.

As we have described earlier, program comprehension is a key determinant of the lifecycle of any specific software product. To understand why this is so, we need to understand what it means to understand a program, why that understanding is difficult, and how it fits into the cycle of software change.
7.4 Program Comprehension
Program comprehension is carried out by human engineers with the aim of understanding source code, documentation, test suites, design, etc. It is typically a gradual process of building up understanding, which can then be used further to explain the construction and operation of the program. So program comprehension is the activity of understanding how a program is constructed and its
underlying intent. The engineer requires precise knowledge of the data items in the program, the way these items are created, and their relationships [62]. Various surveys have shown that the central activity in maintenance is understanding the source code. Chapin and Lau [63] describe program comprehension as the most skilled and labor-intensive part of software maintenance, while Oman [64] states that the key to effective software maintenance is program comprehension. Thus it is a human-intensive activity that incurs considerable costs. An early survey of the field is [13]; see also von Mayrhauser [65]. The understanding can then be used for:
• Maintenance and evolution (e.g., [66]),
• Reverse engineering (e.g., [13]),
• Learning and training,
• Redocumentation (e.g., [67,68]),
• Reuse (e.g., [69]),
• Testing and validation (e.g., [70]).
The field has prompted several theories derived from empirical investigation of the behavior of programmers. There are three fundamental views, see Storey [71]:
• Comprehension is undertaken in a top-down way, from requirements to implementation [72,73],
• Comprehension is undertaken in a bottom-up way, starting with the source code, and deducing what it does and how it does it [74], and
• Comprehension is undertaken opportunistically [75,76].
All three may be used at different times, even by a single engineer. It is encouraging to note that much work on comprehension has been supported by empirical work to gain understanding of what engineers actually do in practice (see, for example, [66,76,77]). To support comprehension, a range of tools has been produced and some of these present information about the program, such as variable usage, call graphs, etc., in a diagrammatic or graphical form. Tools divide into two types:
• Static analysis tools, which provide information to the engineer based only on the source code (and perhaps documentation),
• Dynamic analysis tools, which provide information as the program executes.
More recent work is using virtual reality and much more sophisticated visualization metaphors to help understanding [78].
The work on an integrated metamodel [65] has drawn together into a single framework the work on cognition of large software systems. It is based on four components:
• Top-down structures,
• Situation model,
• Program model, and
• The knowledge base.
It combines the top-down perspective with the bottom-up approach (i.e., situation and program models). The knowledge base addresses information concerned with the comprehension task, and is incremented as new and inferred knowledge is determined. The model is not prescriptive, and different approaches to comprehension may be invoked during the comprehension activity. All authors agree that program comprehension is a human-oriented and time-intensive process, requiring expertise in the programming language and environment, deep understanding of the specific code and its interactions, and also knowledge of the problem domain, the tasks the software performs in that domain, and the relationships between those tasks and the software structure.

As mentioned earlier, locating concepts in the code is a program comprehension task that is very important during the phase of change planning. Change requests are very often formulated as requests to change or introduce implementation of specific domain concepts, and the very first task is to find where these concepts are found in the code. A usual assumption behind the concept location task is that the user does not have to understand the whole program, but only the part that is relevant to the concepts involved in the change.

In a widely cited paper, Biggerstaff et al. [79] presented a technique of concept location in the program based on the similarity of identifiers used in the program and the names of the domain concepts. When trying to locate a concept in the code, the programmer looks for the variables, functions, classes, etc., with a name similar to the name of the concept. For example, when trying to locate the implementation of breakpoints in a debugger, the programmer looks for variables with identifiers breakpoint, Breakpoint, break-point, brkpt, etc. Text pattern matching tools like "grep" are used for this purpose. Once the appropriate identifier is found, the programmer reads and comprehends the surrounding code in order to locate all code related to the concept being searched.

Another technique of concept or feature location is based on analysis of program execution traces [80]. The technique requires instrumentation of the program so that it can be determined which program branches were executed for a given set of input data. Then the program is executed for two sets of data: data set A with the feature and data set B without the feature. The feature is most probably
located in the branches that were executed for data set A but were not executed for data set B.

Another method of concept location is based on static search of code [81]. The search typically starts in function main() and the programmer tries to find the implementation of the concept there. If it cannot be located there, it must be implemented in one of the subfunctions called from main(); hence the programmer decides which subfunction is the most likely to implement the concept. This process is recursively repeated (with possible backtracks) until the concept is found.

As remarked earlier, we believe that the comprehensibility of a program is a key part of software quality and evolvability, and that research in program comprehension is one of the key frontiers of research in software evolution and maintenance.
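As a concrete illustration of the identifier-matching approach to concept location described above, the following C++ sketch normalizes identifier names and reports candidates; the identifier list and concept name are hypothetical, and a real tool would extract the identifiers from the source code itself.

    #include <cctype>
    #include <iostream>
    #include <string>
    #include <vector>

    // Strip punctuation and case so that "break_point" and "BreakPoint"
    // both normalize to "breakpoint".
    static std::string normalize(const std::string& s) {
        std::string out;
        for (char c : s)
            if (std::isalnum(static_cast<unsigned char>(c)))
                out += static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
        return out;
    }

    int main() {
        std::vector<std::string> identifiers = {
            "BreakPoint", "break_point", "brkpt", "parseExpr", "symbolTable"
        };
        std::string concept = "breakpoint";

        for (const std::string& id : identifiers)
            if (normalize(id).find(concept) != std::string::npos)
                std::cout << "candidate: " << id << '\n';
        // Prints BreakPoint and break_point; "brkpt" is missed, which shows why
        // such heuristics are only a starting point and the surrounding code
        // must still be read and comprehended.
    }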
8. Sustaining Software Value

8.1 Staving off the Servicing Stage
One of the goals of our staged model of the software lifecycle is to aid software managers in thinking about the systems they control and in planning their futures. It is clear from our argument so far that a software system subtly loses much of its value to its owners when it makes the transition from the evolution stage to the servicing stage. A software system in the evolution stage is routinely adapted to changing organizational needs and can thus make a considerable contribution to the organization's mission and/or revenues. When the system transitions into servicing, only the simplest changes can be made; the software is a less valuable asset and may actually become a constraint on the organization's success. Thus, most software managers will want to stave off the servicing stage as long as possible.

There are a number of strategies that can be adopted to sustain software value, but unfortunately all of them produce their benefits in the long term while requiring an expenditure of effort or time in the short term. A software manager must seek an appropriate trade-off between the immediate budget and time pressures of doing business and the potential long-term benefits of increased software value. The appropriate choice of strategies will obviously not be the same for all companies. An e-business that must change almost daily to survive will focus on rapid change, whereas the owners of an embedded system with stable requirements but life-critical consequences of failure may be able to focus on long-term quality.
Thus this section is not prescriptive, but merely tries to identify some of the issues that a software manager or chief architect should consider. We list some of the strategies and techniques that have been proposed, categorizing them by their stage in the lifecycle. Unfortunately, there seems to be very little published analysis that would aid a software manager in estimating the costs and benefits. Research into the actual effectiveness of each would seem to be a priority.
8.2 Strategies during Development
The key decisions during development are those that determine the architecture of the new system and the team composition. These decisions are, of course, interrelated; as has been mentioned, many of the more famously evolvable systems such as Unix and VME were the product of a very few highly talented individuals. Advice to "hire a genius" is not misplaced, but it is difficult to follow in practice.

In the current state of the art, there is probably little that can be done to design an architecture to permit any conceivable change. However, it is possible to address systematically those potential changes that can be anticipated, at least in general terms. For instance, it is well known that changes are extremely common in the user interfaces to systems, to operating systems, and to hardware, while the underlying data and algorithms may be relatively stable. During initial development, a roughly prioritized list of the anticipated changes can be a very useful guide to architectural design [82].

Once possible changes are identified, the main architectural strategy to use is information hiding of those components or constructs most likely to change. Software modules are structured so that design decisions, such as the choice of a particular kind of user interface or a specific operating system, are concealed within one small part of the total system, a technique described by Parnas since the early 1970s [83]. If the anticipated change becomes necessary in the future, only a few modules need to be modified.

The emergence of object-oriented languages in the 1990s has provided additional mechanisms for designing to cope with anticipated changes. These languages provide facilities such as abstract classes and interfaces, which can be subclassed to provide new kinds of object that are then used by the rest of the program without modification. Designers can also make use of object-oriented design patterns, many of which are intended to provide flexibility to allow for future software enhancements [84]. For example, the Abstract Factory pattern provides a scheme for constructing a family of related objects that interact, such as in a user interface toolkit. The pattern shows how new object classes can be added, say to provide an alternate look-and-feel, with minimal change to the existing code.
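As a concrete illustration of the design-for-change mechanisms just described, the sketch below shows the Abstract Factory pattern in Java. The widget and factory names are hypothetical; the point is that client code depends only on the abstract interfaces, so an alternate look-and-feel can later be added as one new factory and its products, with minimal change to existing code.

```java
// Sketch of the Abstract Factory pattern discussed above: the client builds a
// user interface only through the factory interface, so an alternate
// look-and-feel can be added as a new factory without touching client code.
// All class names here are illustrative.
interface Button { void paint(); }
interface Scrollbar { void paint(); }

interface WidgetFactory {
    Button createButton();
    Scrollbar createScrollbar();
}

class MotifButton implements Button {
    public void paint() { System.out.println("Motif button"); }
}
class MotifScrollbar implements Scrollbar {
    public void paint() { System.out.println("Motif scrollbar"); }
}
class MotifWidgetFactory implements WidgetFactory {
    public Button createButton() { return new MotifButton(); }
    public Scrollbar createScrollbar() { return new MotifScrollbar(); }
}

// Client code depends only on the abstract interfaces.
class Dialog {
    private final Button ok;
    private final Scrollbar bar;
    Dialog(WidgetFactory factory) {
        this.ok = factory.createButton();
        this.bar = factory.createScrollbar();
    }
    void render() { ok.paint(); bar.paint(); }
}
```

A second factory, say a hypothetical PlainWidgetFactory, could later be added and passed to Dialog without any change to Dialog itself.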
In both traditional and object-oriented systems, architectural clarity and consistency can greatly facilitate program comprehension. Similar features should always be implemented in the same way, and if possible by similarly named components. For example, one of the authors once studied a test coverage monitoring system that provided coverage of basic blocks, of decisions, and of several kinds of data flows [85]. The code for each kind of monitoring was carefully segregated into source files, with the code to display blocks in a file called bdisp.c, that to display decisions in ddisp.c, and so on. Within each file, corresponding functions were named the same; for example, each file had a display() function to handle its particular kind of display. This consistency made it very easy for a maintainer to hypothesize where particular concepts were likely to be located, and greatly speeded up understanding of the program architecture, even in the absence of design documentation.

On the other hand, inconsistency in design and naming can lead future maintainers into errors in understanding. Polymorphic object-oriented systems are particularly susceptible to this kind of error, since many member functions may have the same name. They thus cannot easily be distinguished by a maintainer reading the code. If they do not perform precisely the same task, or if one version has side effects not shared by others, then the maintainer may be seriously misled in reading the code [86].

The best way to achieve architectural consistency is probably to leave the basic design in the hands of a very small team who work together very closely. Temptations to rotate personnel between projects should probably be strongly resisted at this stage. Other programmers may later maintain the consistent design if they are encouraged to study the structure of existing code before adding their own contributions. Consistency may then be further enforced by code inspections or walkthroughs.

If the project architecture depends on purchased COTS components, as so many modern projects do, then particular care is needed. First, it would obviously be dangerous to depend heavily on a component that is already in the servicing or phase-out stages. It is thus important to understand the true status of each component, which may require information that the component supplier is reluctant to give. Second, the impact of the possible changes on the COTS component should be considered. For instance, if changes to the hardware platform are anticipated, will the COTS supplier, at reasonable cost, evolve his product to use the new hardware? If not, an information hiding design might again be advisable to facilitate the possible substitution of a different COTS component in the future.

Programming environments and software tools that generate code may create problems similar to those of COTS. Many such environments assume implicitly that all future changes will take place within the environment. The generated code
may be incomprehensible for all practical purposes. Unfortunately, experience indicates that environments, and the companies that produce them, often have much shorter lifetimes than the systems developed using them.

Finally, there are well-known coding practices that can greatly facilitate program comprehension and thus software change. In Section 2.4 we mentioned the use of IEEE or ISO standards, the enforcement of a house coding style to guarantee uniform layout and commenting, and an appropriate level of documentation to match the criticality of the project. One coding technique that should probably be used more often is to insert instrumentation into the program to aid in debugging and future program comprehension. Experienced programmers have used such instrumentation for years to record key events, interprocess messages, etc. Unfortunately, instrumentation is rarely mandated, and is more often introduced ad hoc only after a project has fallen into trouble [87]. If used systematically, it can be a great aid to understanding the design of a complex system "as-built" [88] (a minimal sketch of this kind of instrumentation is given at the end of this section).

We should mention again that all the above techniques involve a trade-off between evolvability and development time. The study of potential changes takes time and analysis. Design to accommodate change may require more complex code, which impacts both time and later program comprehension. (At least one project found it desirable to remove design patterns that had been introduced to provide unused flexibility [89].) COTS components and programming environments can greatly speed up initial development, but with serious consequences for future evolvability. Design consistency and coding standards are difficult to enforce unless almost all code is inspected, a time-consuming and thus far from universal practice. In the rush to get a product to market, a manager must be careful about decisions that sacrifice precious time against future benefit.
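The following is a minimal sketch of the kind of systematic instrumentation discussed above, using the standard java.util.logging package; the component and event names are illustrative. The value lies less in the mechanism than in applying it uniformly from the start, so that the recorded events can later be used to reconstruct the as-built design.

```java
import java.util.logging.Logger;

/**
 * Sketch of lightweight, systematic instrumentation: key events and
 * interprocess messages are recorded in one uniform format so that the
 * "as-built" behavior of the system can be reconstructed later.
 * The event names are illustrative.
 */
public class Instrumentation {
    private static final Logger LOG = Logger.getLogger("trace");

    /** Record a significant event together with the component that raised it. */
    public static void event(String component, String what) {
        LOG.info(() -> System.nanoTime() + " " + component + " " + what);
    }

    public static void main(String[] args) {
        event("scheduler", "task START coder");
        event("network",   "msg SENT frame=42");
        event("scheduler", "task END coder");
    }
}
```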
8.3 Strategies during Evolution
During the evolution phase, the goal must be to delay servicing as long as possible by preserving a clean architecture and by facilitating program comprehension. As previously mentioned, most modern projects transition to the evolution phase with at least some key members of the original development team in place. There are, of course, circumstances when this may not be possible and a transition to a new team is unavoidable. Such a transition risks an almost immediate slide into servicing because of the difficulties of program comprehension. If a new team is absolutely essential, however, then there are some steps that can be taken, such as having developers on-site for some months to aid the transition. Software managers placed in this situation may want to consult some of the papers by Pigoski and others that discuss experiences in software transition [90-92].
If it has not already been done, the system should now be placed under configuration management control. From here on, different customers will have different versions, and it is essential to have a mechanism for tracking the revisions of each file that went into making up each version. Without such a mechanism it will be very difficult to interpret problem reports arriving from the field. Change control may be formal or informal, depending on the process used, but change control procedures should be well defined, and it is highly desirable to have one person designated as configuration manager, with responsibility for change control and version tracking.

At this stage, if not earlier, the project needs to have a clear strategy for program comprehension. That strategy can include combinations of at least three elements:
• Team knowledge, carried by team members who commit to long-term participation in the project,
• Written documentation of specifications, design, configurations, tests, etc., or its equivalent in the repository of a software tool, and
• Reverse engineering tools to recover design information from the system itself.

As evolution starts, team composition may change somewhat, often with some shrinkage. If the project manager intends to rely mainly on team knowledge for program comprehension, this is a good point to take inventory of the available knowledge and to try to avoid overconcentration in one or two team members. As previously mentioned, agile development methods such as Extreme Programming often use pair programming, in which two programmers work together on each task [40]. As indicated by the printed circuits program case described in Section 6.7, it is probably unrealistic to expect new programmers to undertake major changes alone, but it may be possible to get them to work in a pair with a more experienced programmer.

It is desirable to avoid the well-known "guru" phenomenon, in which one person is the only expert in an important part of the project. The guru often tends to adopt that part as his territory, and makes it difficult for anyone else to work with that code. Once this situation is established, it can be very difficult to manage. A guru problem is obviously dangerous for the future of the project and can also have a bad impact on team morale.

If the manager decides to emphasize written documentation for program comprehension, then there will be considerable overhead to revise and update all such documentation as the system evolves. It is likely that any design documentation available at the beginning of evolution will represent the system the developers intended to build, which may differ substantially from what was actually built.
A useful technique is incremental redocumentation, in which documentation is generated or updated as modules are modified. A trainee programmer may be assigned to do the write-up, based on notes or an interview with the experienced programmer who actually made the modification, thus reducing the cost and schedule impact of the redocumentation. One of the authors has given a case study showing how this strategy was applied to a large C++ system [93].

The manager needs to establish a tricky balance between programmers' desire to restructure or rewrite code and the economics of the project. Unless they wrote it themselves, programmers will almost always complain of the quality of the code they encounter! Some such complaints are certainly well founded, as restructuring and refactoring may be needed to maintain a clean architecture. However, if all such requests are approved, the burden of coding, inspecting, and especially testing will become unsustainable.

Key decisions in the evolution phase concern the creation of new versions and the transition to servicing. If the versioned staged model of Fig. 2 is followed, the new version will probably start development well before the old one passes into servicing. Management should make a conscious decision as to the paths to be followed, based on judgments about the state of the system and the demands of the market.
8.4 Strategies during Servicing

The transition into servicing implies that further change will be relatively minor, perhaps involving bug fixes and peripheral new features. It is important to understand that the transition is largely irreversible, since essential knowledge and architectural integrity have probably been lost. If the product is to be reengineered, it is likely that the best strategy will be simply to try to reproduce its black-box behavior rather than to study and reuse its current code or design [94].

Often there may be a transition to a new maintenance team as servicing begins. Expectations about what can be accomplished by such a team should be kept modest to avoid impossible commitments. Configuration management must continue in place to be able to understand reports from the field. Strategies such as opportunistic redocumentation may still be desirable, but fixes that degrade the code may be tolerated, since the basic strategy is to minimize cost while maintaining revenue in the short run.

Finally, the servicing stage is the time to make and implement a plan for phase-out. Migration paths to a new version in development should be provided. The main issue is often the need to recover and reformat vital organizational data so that it can be used in the new version.
9. Future Directions: Ultra Rapid Software Evolution
It is possible to inspect each activity of the staged software model and determine how it may be speeded up. Certainly, new technology to automate parts may be expected, supported by tools (for example, in program comprehension, testing, etc.). However, it is very difficult to see that such improvements will lead to a radical reduction in the time to evolve a large software system. This prompted us to believe that a new and different way is needed to achieve "ultra rapid evolution"; we term this "evolution in Internet time." It is important to stress that such ultra rapid evolution does not imply poor quality, or software that is simply hacked together without thought. The real challenge is to achieve very fast change yet provide very-high-quality software. Strategically, we plan to achieve this by bringing the evolution process much closer to the business process. The generic problem of ultra rapid evolution is seen as one of the grand challenges for software engineering (see [95,96]).

The staged model allows us to address a large system built out of many parts (and so on recursively). Each part may be in one of the five stages (although we would expect the main stress to be on the first three stages). This has been ignored in previous research. The integration mechanism is market-led, not simply a technical binding, and requires the representation of nontechnical and nonfunctional attributes of the parts. The new perspective offered by the staged model has been a crucial step in developing a serviceware approach.

For software evolution, it is useful to categorize contributing factors into those which can rapidly evolve and those which cannot; see Table I.

TABLE I
CONTRIBUTING FACTORS OF SOFTWARE EVOLUTION

Fast moving                          Slow moving
Software requirements                Software functionality
Marketplaces                         Skills bases
Organizations                        Standards
Emergent companies                   Companies with rigid boundaries
Demand led                           Supply led
Competitive pressures                Long-term contracts
Supply chain delivery                Software technology
Risk taking                          Risk averse
New business processes               Software process evolution
Near-business software               Software infrastructure

We concluded that a "silver bullet," which would somehow transform software into something that could be changed (or could change itself) far more quickly than at present, was not viable. Instead, we take the view that software is actually
hard to change, and thus that change takes time to accomplish. We needed to look for other solutions.

Let us now consider a very different scenario. We assume that our software is structured into a large number of small components that exactly meet the user's needs and no more. Suppose now that a user requires an improved component C. The traditional approach would be to raise a change request with the vendor of the software, and wait for several months for this to be (possibly) implemented and the modified component integrated. In our solution, the user disengages component C, and searches the marketplace for a replacement D that meets the new needs. When this is found, it replaces C, and is used in the execution of the application. Of course, this assumes that the marketplace can provide the desired component. However, it is a well-established property of marketplaces that they can spot trends, and make new products available when they are needed. The rewards for doing so are very strong and the penalties for not doing so are severe. Note that any particular component supplier can (and probably will) use traditional software maintenance techniques to evolve their components. The new dimension is that they must work within a demand-led marketplace. Therefore, if we can find ways to disengage an existing component and bind in a new one (with enhanced functionality and other attributes) ultra rapidly, we have the potential to achieve ultra rapid evolution in the target system.

This concept led us to conclude that the fundamental problem with slow evolution was a result of software that is marketed as a product, in a supply-led marketplace. By removing the concept of ownership, we have instead a service, i.e., something that is used, not owned. Thus, we generalized the component-based solution to the much more generic service-based software in a demand-led marketplace [97].

This service-based model of software is one in which services are configured to meet a specific set of requirements at a point in time, executed, and disengaged—the vision of instant service. A service is used rather than owned [98]; it may usefully be considered to comprise a communications protocol together with a service behavior. Services are composed from smaller ones (and so on recursively), procured and paid for on demand. A service is not a mechanized process; it involves humans managing supplier-consumer relationships. This is a radically new industry model, which could function within markets ranging from a genuine open market (requiring software functional equivalence) to a keiretsu market, where there is only one supplier and one consumer, both working together with access to each other's information systems to optimize the service to each other.

This strategy potentially enables users to create, compose, and assemble a service by bringing together a number of suppliers to meet needs at a specific
point in time. An analogy is selling cars: today manufacturers do not sell cars from a premanufactured stock with given color schemes, features, etc.; instead, customers configure their desired car from a series of options and only then is the final product assembled. This is only possible because the technology of production has advanced to a state where assembly of the final car can be undertaken sufficiently quickly. Software vendors attempt to offer a similar model of provision by offering products with a series of configurable options. However, this offers extremely limited flexibility—consumers are not free to substitute functions with those from another supplier, since the software is subject to binding, which configures and links the component parts, making it very difficult to perform substitution.

The aim of this research is to develop the technology that will enable binding to be delayed until immediately before the point of execution of a system. This will enable consumers to select the most appropriate combination of services required at any point in time. However, late binding comes at a price, and for many consumers, issues of reliability, security, cost, and convenience may mean that they prefer to enter into contractual agreements to have some early binding for critical or stable parts of a system, leaving more volatile functions to late binding and thereby maximizing competitive advantage. The consequence is that any future approach to software development must be interdisciplinary, so that nontechnical issues, such as supply contracts, terms and conditions, and error recovery, are addressed and built into the new technology.

A truly service-based role for software is far more radical than current approaches, in that it seeks to change the very nature of software. To meet users' needs of evolution, flexibility, and personalization, an open marketplace framework is necessary in which the most appropriate versions of software products come together, and are bound and executed as and when needed. At the extreme, the binding that takes place prior to execution is disengaged immediately after execution in order to permit the "system" to evolve for the next point of execution. Flexibility and personalization are achieved through a variety of service providers offering functionality through a competitive marketplace, with each software provision being accompanied by explicit properties of concern for binding (e.g., dependability, performance, quality, license details, etc.).

A component is simply a reusable software executable. Our serviceware clearly includes the software itself, but in addition has many nonfunctional attributes, such as cost and payment, trust, brand allegiance, legal status and redress, and security. Binding requires us to negotiate across all such attributes (as far as possible electronically) to establish a binding, at the extreme just before execution.
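A minimal sketch of this style of late, attribute-aware binding is given below. It assumes a hypothetical marketplace holding competing offers, each carrying nonfunctional attributes (here just cost and reliability); the consumer states its requirements and a provider is bound immediately before execution, so a different provider can be selected next time without any change to the consumer. All names and attribute values are illustrative.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Sketch of late, attribute-aware binding in the spirit of the service-based
// model described above. Names and attribute values are illustrative.
interface Service {
    String invoke(String request);
}

record Offer(Service service, double cost, double reliability) {}

class Marketplace {
    private final List<Offer> offers;
    Marketplace(List<Offer> offers) { this.offers = offers; }

    /** Bind the cheapest offer that meets the reliability requirement. */
    Optional<Service> bind(double maxCost, double minReliability) {
        return offers.stream()
                .filter(o -> o.cost() <= maxCost && o.reliability() >= minReliability)
                .min(Comparator.comparingDouble(Offer::cost))
                .map(Offer::service);
    }
}

class Consumer {
    public static void main(String[] args) {
        Marketplace market = new Marketplace(List.of(
                new Offer(req -> "supplier C: " + req, 0.05, 0.99),
                new Offer(req -> "supplier D: " + req, 0.03, 0.999)));
        // Binding happens immediately before execution and is discarded after,
        // so a different supplier may be chosen at the next point of execution.
        market.bind(0.04, 0.995)
              .ifPresent(s -> System.out.println(s.invoke("convert invoice")));
    }
}
```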
Requirements for software need to be represented in such a way that an appropriate service can be discovered on the network. The requirements must therefore convey both the description and the intention of the desired service. Given the highly dynamic nature of software supplied as a service, the maintainability of the requirements representation becomes an important consideration. However, the aim of the architecture is not to prescribe such a representation, but to support whatever conventions users and service suppliers prefer.

Automated negotiation is another key issue for research, particularly in areas where nonnumeric terms are used, e.g., legal clauses. Such clauses do not lend themselves to offer/counteroffer and similar approaches. In relation to this, the structure and definition of profiles and terms need much work, particularly where terms are related in some way (e.g., performance and cost). We also need insight into the issue of when to select a service and when to enter negotiations for a service. It is in this area that multidisciplinary research is planned. We plan to concentrate research in these areas, and to use as far as possible available commercial products for the software infrastructure. Finally, many issues need to be resolved concerning mutual performance monitoring and claims of legal redress should they arise.
10. Conclusions
We have presented a new staged model of the software lifecycle, motivated by the need to formulate an abstraction that is supported partly by published empirical evidence and partly by the authors' field experiences. In particular, we have observed how the "project knowledge" is progressively lost over the lifecycle, and the enormous implications this has for our ability to support the software successfully. We have argued that by understanding the staged model, a manager can better plan and resource a project, and in particular avoid it slipping irreversibly into servicing. We also indicated some future directions based on this approach.

ACKNOWLEDGMENTS
Keith Bennett thanks members of the Pennine Research Group at Durham, UMIST, and Keele Universities for the collaborative research which has led to the author's input to this chapter (in particular Malcolm Munro, Paul Layzell, Linda Macauley, Nicolas Gold, Pearl Brereton, and David Budgen). He also thanks the Leverhulme Trust, BT, and EPSRC for generous support, and Deborah Norman for help in preparing the chapter. Vaclav Rajlich thanks Tony Mikulec from Ford Motor Co. for generous support of research in software maintenance. Also discussions with Franz Lehner while visiting University of Regensburg, and with Harry Sneed on several occasions influenced the author's thinking about this area.
Norman Wilde thanks the Software Engineering Research Center (SERC) for its support over the past 15 years; its industrial partners have taught him most of what he knows about software maintenance. More recently, the US Air Force Office of Scientific Research, under Grant F49620-99-1-0057, has provided an opportunity to study the FASTGEN system mentioned as one of the case studies.

REFERENCES
[1] IEEE (1990). Standard Glossary of Software Engineering Terminology, Standard IEEE Std 610.12-1990. IEEE, Los Alamitos, CA. Also IEEE Software Engineering, IEEE Standards Collection. IEEE, New York, 1994.
[2] McDermid, J. A. (Ed.) (1991). The Software Engineer's Reference Book. Butterworth-Heinemann, London.
[3] Royce, W. W. (1970). "Managing the development of large software systems." Proc. IEEE WESCON 1970, pp. 1-9. IEEE, New York. [Reprinted in Thayer, R. H. (Ed.). IEEE Tutorial on Software Engineering Project Management.]
[4] Boehm, B. W. (1988). "A spiral model of software development and enhancement." IEEE Computer, May, 61-72.
[5] Rajlich, V. T., and Bennett, K. H. (2000). "A staged model for the software lifecycle." IEEE Computer, 33, 66-71.
[6] Pigoski, T. M. (1997). Practical Software Maintenance: Best Practices for Managing Your Software Investment. Wiley, New York.
[7] Lientz, B., and Swanson, E. B. (1980). Software Maintenance Management: A Study of the Maintenance of Computer Application Software in 487 Data Processing Organisations. Addison-Wesley, Reading, MA.
[8] Lientz, B., Swanson, E. B., and Tompkins, G. E. (1978). "Characteristics of applications software maintenance." Communications of the ACM, 21, 466-471.
[9] Sommerville, I. (1995). Software Engineering. Addison-Wesley, Reading, MA.
[10] Pressman, R. S. (1996). Software Engineering. McGraw-Hill, New York.
[11] Warren, I. (1999). The Renaissance of Legacy Systems. Springer-Verlag, London.
[12] Foster, J. R. (1993). Cost Factors in Software Maintenance, Ph.D. Thesis. Computer Science Department, University of Durham.
[13] Robson, D. J., Bennett, K. H., Munro, M., and Cornelius, B. J. (1991). "Approaches to program comprehension." Journal of Systems and Software, 14, 79-84. [Reprinted in Arnold, R. (Ed.) (1992). Software Re-engineering. IEEE Computer Society Press, Los Alamitos, CA.]
[14] IEEE. Standard for Software Maintenance, p. 56. IEEE, Los Alamitos, CA.
[15] International Standards Organisation (1999). International Standard Information Technology: Software Maintenance, ISO/IEC 14764:1999. International Standards Organisation.
[16] Wirth, N. (1971). "Program development by stepwise refinement." Communications of the ACM, 14.
[17] Basili, V. R., and Turner, A. J. (1975). "Iterative enhancement: A practical technique for software development." IEEE Transactions on Software Engineering, 1, 390-396. (An updated version was published as Auerbach Report 14-01-05, 1978, and in Tutorial on Software Maintenance, IEEE Computer Society Press, Los Alamitos, CA, 1982.)
[18] Brooks, F. The Mythical Man-Month: Essays on Software Engineering. Addison-Wesley, Reading, MA.
[19] Lehman, M. M., and Belady, L. A. (1976). "A model of large program development." IBM Systems Journal, 15, 225-252.
[20] Burch, E., and Kunk, H. (1997). "Modeling software maintenance requests: A case study." Proc. IEEE International Conference on Software Maintenance, pp. 40-47. IEEE Computer Society Press, Los Alamitos, CA.
[21] Lehman, M. M. (1980). "Programs, lifecycles, and the laws of software evolution." Proceedings of the IEEE, 68, 1060-1076.
[22] Lehman, M. M. (1984). "Program evolution." Information Processing Management, 20, 19-36.
[23] Lehman, M. M. (1985). Program Evolution. Academic Press, London.
[24] Lehman, M. M. (1989). "Uncertainty in computer application and its control through the engineering of software." Journal of Software Maintenance, 1, 3-28.
[25] Lehman, M. M., and Ramil, J. F. (1998). "Feedback, evolution and software technology—Some results from the FEAST project, keynote lecture." Proceedings 11th International Conference on Software Engineering and its Application, Vol. 1, Paris, 8-10 Dec., pp. 1-12.
[26] Ramil, J. F., Lehman, M. M., and Kahen, G. (2000). "The FEAST approach to quantitative process modelling of software evolution processes." Proceedings PROFES'2000, 2nd International Conference on Product Focused Software Process Improvement, Oulu, Finland, 20-22 June (F. Bomarius and M. Oivo, Eds.), Lecture Notes in Computer Science 1840, pp. 311-325. Springer-Verlag, Berlin. This paper is a revised version of the report: Kahen, G., Lehman, M. M., and Ramil, J. F. (2000). "Model-based assessment of software evolution processes." Research Report 2000/4, Department of Computing, Imperial College.
[27] Lehman, M. M., Perry, D. E., and Ramil, J. F. (1998). "Implications of evolution metrics on software maintenance." International Conference on Software Maintenance (ICSM'98), Bethesda, Maryland, Nov. 16-24, pp. 208-217.
[28] Ramil, J. F., and Lehman, M. M. (1999). "Challenges facing data collection for support and study of software evolution processes," position paper. ICSE 99 Workshop on Empirical Studies of Software Development and Evolution, Los Angeles, May 18.
[29] Sneed, H. M. (1989). Software Engineering Management (I. Johnson, Transl.). Ellis Horwood, Chichester, West Sussex, pp. 20-21. Original German ed. Software Management, Rudolf Mueller Verlag, Köln, 1987.
[30] Lehner, F. (1989). "The software lifecycle in computer applications." Long Range Planning, Vol. 22, No. 5, pp. 38-50. Pergamon Press, Elmsford, NY.
[31] Lehner, F. (1991). "Software lifecycle management based on a phase distinction method." Microprocessing and Microprogramming, Vol. 32, pp. 603-608. North-Holland, Amsterdam.
[31] Lehner, F. (1991). "Software lifecycle management based on a phase distinction method." Microprocessing and Microprogramming, Vol. 32, pp. 603-608. NorthHolland, Amsterdam. [32] Truex, D. P., Baskerville, R., and Klein, H. (1999). "Growing systems in emergent organizations." Commun. ACM 42, 117-123. [33] Cusumano, M., and Yoffe, D. (1998). Competing on Internet Time-Lessons from Netscape and Its Battle with Microsoft. Free Press (Simon & Schuster), New York. [34] Bennett, K. H. (1995). "Legacy systems: Coping with success." IEEE Software 12, 19-23. [35] Henderson, P. (Ed.) (2000). Systems Engineering for Business Process Change. Springer-Verlag, Berlin. [36] UK EPSRC (1999). "Systems Engineering for Business Process Change." Available at h t t p : / / w w w . s t a f f . e c s . s o t o n . a c . u k / ~ p h / s e b p c . [37] Pfleeger, S. L., and Menezes, W. (2000). "Technology transfer: Marketing technology to software practitioners." IEEE Software 17, 27-33. [38] Naur, P., and Randell, B. (Eds.) (1968). "Software engineering concepts and techniques," NATO Science Committee. Proc. NATO Conferences, Oct. 7-11, Garmisch, Germany. Petrocelli/Charter, New York. [39] Shaw, M., and Garland, D. (1996). Software Architectures. Prentice-Hall, Englewood
cuffs, NJ. [40] Beck, K. (1999). "Embracing change with extreme programming." IEEE Computer 32, 70-77. [41] International Computers Ltd., "The Architecture of Open VME," ICL publication ref. 55480001. ICL, Stevenage, Herts, UK, 1994. [42] Cusumano, M. A., and Selby, R. W. (1997). Microsoft Secrets. HarperCollins, New York. [43] Jacobson, I., Booch, G., and Rumbaugh, J. (1999). The Unified Software Development Process. Addison-Wesley, Reading, MA. [44] Booch, G. (2001). "Developing the future." Commun. ACM, 44, 119-121. [45] Pamas, D. L. (1994). "Software aging." Proceedings 16th International Conference on Software Engineering, pp. 279-287. IEEE Computer Society Press, Los Alamitos, CA. [46] Eick, S. G., Graves, T. L., Karr, A. R, Marron, J. S., and Mockus, A. (2001). "Does code decay? Assessing evidence from change management data." IEEE Transactions on Software Engineering, 27, 1-12. [47] Rajlich, V., Wilde, N., Buckellew, M., and Page, H. (2001). "Software cultures and evolution." IEEE Computer, 34, 24-28. [48] Johnson, J. H. (1994). "Substring matching for clone detection and change tracking." Proceedings IEEE International Conference on Software Maintenance, Victoria, Canada, Sept., pp. 120-126.
[49] Baxter, I. D., Yahin, A., Moura, L., Sant'Anna, M., and Bier, L. (1998). "Clone detection using abstract syntax trees." IEEE International Conference on Software Maintenance, pp. 368-377.
[50] Lague, B., Proulx, D., Mayrand, J., Merlo, E. M., and Hudepohl, J. (1997). "Assessing the benefits of incorporating function clone detection in a development process." IEEE International Conference on Software Maintenance, pp. 314-321.
[51] Burd, E., and Munro, M. (1997). "Investigating the maintenance implications of the replication of code." IEEE International Conference on Software Maintenance, pp. 322-329.
[52] Olsem, M. R. (1998). "An incremental approach to software systems reengineering." Software Maintenance: Research and Practice, 10, 181-202.
[53] Canfora, G., De Lucia, A., and Di Lucca, G. (1999). "An incremental object-oriented migration strategy for RPG legacy systems." International Journal of Software Engineering and Knowledge Engineering, 9, 5-25.
[54] Tamai, T., and Torimitsu, Y. (1992). "Software lifetime and its evolution process over generations." Proc. IEEE International Conference on Software Maintenance, pp. 63-69.
[55] Kappelman, L. A. (2000). "Some strategic Y2K blessings." IEEE Software, 17, 42-46.
[56] Fanta, R., and Rajlich, V. (1998). "Reengineering object-oriented code." Proc. IEEE International Conference on Software Maintenance, pp. 238-246.
[57] Fanta, R., and Rajlich, V. "Removing clones from the code." Journal of Software Maintenance, 1, 223-243.
[58] Wilde, N., Buckellew, M., Page, H., and Rajlich, V. (2001). "A case study of feature location in unstructured legacy Fortran code." Proceedings CSMR'01, pp. 68-76. IEEE Computer Society Press, Los Alamitos, CA.
[59] Fowler, M. (1999). Refactoring: Improving the Design of Existing Code. Addison-Wesley, Reading, MA.
[60] Yau, S. S., Collofello, J. S., and MacGregor, T. (1978). "Ripple effect analysis of software maintenance." Proc. IEEE COMPSAC, pp. 60-65.
[61] Rajlich, V. (2000). "Modeling software evolution by evolving interoperation graphs." Annals of Software Engineering, 9, 235-248.
[62] Ogando, R. M., Yau, S. S., Liu, S. S., and Wilde, N. (1994). "An object finder for program structure understanding in software maintenance." Journal of Software Maintenance: Research and Practice, 6, 261-283.
[63] Chapin, N., and Lau, T. S. (1996). "Effective size: An example of USE from legacy systems." Journal of Software Maintenance: Research and Practice, 8, 101-116.
[64] Oman, P. (1990). "Maintenance tools." IEEE Software, 7, 59-65.
[65] Von Mayrhauser, A., and Vans, A. M. (1995). "Program comprehension during software maintenance and evolution." IEEE Computer, 28, 44-55.
[66] Littman, D. C., Pinto, J., Letovsky, S., and Soloway, E. (1986). "Mental models and software maintenance." Empirical Studies of Programmers (E. Soloway and S. Iyengar, Eds.), pp. 80-98. Ablex, Norwood, NJ.
[67] Basili, V. R., and Mills, H. D. (1982). "Understanding and documenting programs." IEEE Transactions on Software Engineering, 8, 270-283.
[68] Younger, E. J., and Bennett, K. H. (1993). "Model-based tools to record program understanding." Proceedings of the IEEE 2nd International Workshop on Program Comprehension, July 8-9, Capri, Italy, pp. 87-95. IEEE Computer Society Press, Los Alamitos, CA.
[69] Standish, T. A. (1984). "An essay on software reuse." IEEE Transactions on Software Engineering, 10, 494-497.
[70] Weiser, M., and Lyle, J. (1986). "Experiments on slicing-based debugging aids." Empirical Studies of Programmers (E. Soloway and S. Iyengar, Eds.), pp. 187-197. Ablex, Norwood, NJ.
[71] Storey, M. A. D., Fracchia, F. D., and Muller, H. A. (1997). "Cognitive design elements to support the construction of a mental model during software visualization." Proceedings of the 5th IEEE International Workshop on Program Comprehension, May 28-30, pp. 17-28.
[72] Brooks, R. (1983). "Toward a theory of comprehension of computer programs." International Journal of Man-Machine Studies, 18, 542-554.
[73] Soloway, E., and Ehrlich, K. (1984). "Empirical studies of programming knowledge." IEEE Transactions on Software Engineering, 10, 595-609.
[74] Pennington, N. (1987). "Stimulus structures and mental representations in expert comprehension of computer programs." Cognitive Psychology, 19, 295-341.
[75] Letovsky, S. (1987). "Cognitive processes in program comprehension." Journal of Systems and Software, 7, 325-339.
[76] Von Mayrhauser, A., Vans, A. M., and Howe, A. E. (1997). "Program understanding behaviour during enhancement of large-scale software." Journal of Software Maintenance: Research and Practice, 9, 299-327.
[77] Shneiderman, B., and Mayer, R. (1979). "Syntactic/semantic interactions in programmer behaviour: A model and experimental results." International Journal of Computer and Information Sciences, 8, 219-238.
[78] Knight, C., and Munro, M. (1999). "Comprehension with[in] virtual environment visualisations." Proceedings IEEE 7th International Workshop on Program Comprehension, May 5-7, pp. 4-11.
[79] Biggerstaff, T., Mitbander, B., and Webster, D. (1994). "Program understanding and the concept assignment problem." Communications of the ACM, 37, 72-83.
[80] Wilde, N., and Scully, M. (1995). "Software reconnaissance: Mapping program features to code." Journal of Software Maintenance: Research and Practice, 7, 49-62.
[81] Chen, K., and Rajlich, V. (2000). "Case study of feature location using dependency graph." Proc. International Workshop on Program Comprehension, pp. 241-249. IEEE Computer Society Press, Los Alamitos, CA.
[82] Hager, J. A. (1989). "Developing maintainable systems: A full life-cycle approach." Proceedings Conference on Software Maintenance, Oct. 16-19, pp. 271-278. IEEE Computer Society Press, Los Alamitos, CA.
[83] Parnas, D. L. (1972). "On the criteria to be used in decomposing systems into modules." Communications of the ACM, 15, 1053-1058.
[84] Gamma, E., Helm, R., Johnson, R., and Vlissides, J. (1995). Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Reading, MA.
[85] Wilde, N., and Casey, C. (1996). "Early field experience with the software reconnaissance technique for program comprehension." Proceedings International Conference on Software Maintenance—ICSM'96, pp. 312-318. IEEE Computer Society Press, Los Alamitos, CA.
[86] Wilde, N., and Huitt, R. (1992). "Maintenance support for object-oriented programs." IEEE Transactions on Software Engineering, 18, 1038-1044.
[87] Wilde, N., and Knudson, D. (1999). "Understanding embedded software through instrumentation: Preliminary results from a survey of techniques," Report SERC-TR-85-F, Software Engineering Research Center, Purdue University. Available at http://www.cs.uwf.edu/~wilde/publications/TecRpt85F_ExSum.html.
[88] Wilde, N., Casey, C., Vandeville, J., Trio, G., and Hotz, D. (1998). "Reverse engineering of software threads: A design recovery technique for large multi-process systems." Journal of Systems and Software, 43, 11-17.
[89] Wendorff, P. (2001). "Assessment of design patterns during software reengineering: Lessons learned from a large commercial project." Proceedings Fifth European Conference on Software Maintenance and Reengineering—CSMR'01, pp. 77-84. IEEE Computer Society Press, Los Alamitos, CA.
[90] Pigoski, T. M., and Sexton, J. (1990). "Software transition: A case study." Proceedings Conference on Software Maintenance, pp. 200-204. IEEE Computer Society Press, Los Alamitos, CA.
[91] Vollman, T. (1990). "Transitioning from development to maintenance." Proceedings Conference on Software Maintenance, pp. 189-199. IEEE Computer Society Press, Los Alamitos, CA.
[92] Pigoski, T. M., and Cowden, C. A. (1992). "Software transition: Experience and lessons learned." Proceedings Conference on Software Maintenance, pp. 294-298. IEEE Computer Society Press, Los Alamitos, CA.
[93] Rajlich, V. (2000). "Incremental redocumentation using the web." IEEE Software, 17, 102-106.
[94] Bollig, S., and Xiao, D. (1998). "Throwing off the shackles of a legacy system." IEEE Computer, 31, 104-109.
[95] Bennett, K. H., Layzell, P. J., Budgen, D., Brereton, O. P., Macaulay, L., and Munro, M. (2000). "Service-based software: The future for flexible software." IEEE APSEC2000, The Asia-Pacific Software Engineering Conference, Singapore, 5-8 December. IEEE Computer Society Press, Los Alamitos, CA.
[96] Bennett, K. H., Munro, M., Brereton, O. P., Budgen, D., Layzell, P. J., Macaulay, L., Griffiths, D. G., and Stannet, C. (1999). "The future of software." Communications of the ACM, 42, 78-84.
[97] Bennett, K. H., Munro, M., Gold, N. E., Layzell, P. J., Budgen, D., and Brereton, O. P. (2001). "An architectural model for service-based software with ultra rapid evolution." Proc. IEEE International Conference on Software Maintenance, Florence, to appear.
[98] Lovelock, C., Vandermerwe, S., and Lewis, B. (1996). Services Marketing. Prentice-Hall Europe, Englewood Cliffs, NJ. ISBN 013095991X.
Embedded Software

EDWARD A. LEE
Department of Electrical Engineering and Computer Science
University of California—Berkeley
518 Cory Hall
Berkeley, CA 94720-1770
USA
[email protected]
Abstract

The science of computation has systematically abstracted away the physical world. Embedded software systems, however, engage the physical world. Time, concurrency, liveness, robustness, continuums, reactivity, and resource management must be remarried to computation. Prevailing abstractions of computational systems leave out these "nonfunctional" aspects. This chapter explains why embedded software is not just software on small computers, and why it therefore needs fundamentally new views of computation. It suggests component architectures based on a principle called "actor-oriented design," where actors interact according to a model of computation, and describes some models of computation that are suitable for embedded software. It then suggests that actors can define interfaces that declare dynamic aspects that are essential to embedded software, such as temporal properties. These interfaces can be structured in a "system-level type system" that supports the sort of design-time and run-time type checking that conventional software benefits from.
1. What is Embedded Software?
2. Just Software on Small Computers?
2.1 Timeliness
2.2 Concurrency
2.3 Liveness
2.4 Interfaces
2.5 Heterogeneity
2.6 Reactivity
3. Limitations of Prevailing Software Engineering Methods
3.1 Procedures and Object Orientation
3.2 Hardware Design
3.3 Real-Time Operating Systems
3.4 Real-Time Object-Oriented Models
4. Actor-Oriented Design
4.1 Abstract Syntax
4.2 Concrete Syntaxes
4.3 Semantics
4.4 Models of Computation
5. Examples of Models of Computation
5.1 Dataflow
5.2 Time Triggered
5.3 Synchronous/Reactive
5.4 Discrete Events
5.5 Process Networks
5.6 Rendezvous
5.7 Publish and Subscribe
5.8 Continuous Time
5.9 Finite State Machines
6. Choosing a Model of Computation
7. Heterogeneous Models
8. Component Interfaces
8.1 On-line Type Systems
8.2 Reflecting Program Dynamics
9. Frameworks Supporting Models of Computation
10. Conclusions
Acknowledgments
References
1. What is Embedded Software?
Deep in the intellectual roots of computation is the notion that software is the realization of mathematical functions as procedures. These functions map a body of input data into a body of output data. The mechanism used to carry out the procedure is not nearly as important as the abstract properties of the function. In fact, we can reduce the mechanism to seven operations on a machine (the famous Turing machine) with an infinite tape capable of storing zeros and ones [1]. This mechanism is, in theory, as good as any other mechanism, and therefore, the significance of the software is not affected by the mechanism.

Embedded software is not like that. Its principal role is not the transformation of data, but rather the interaction with the physical world. It executes on machines that are not, first and foremost, computers. They are cars, airplanes, telephones, audio equipment, robots, appliances, toys, security systems, pacemakers, heart
monitors, weapons, television sets, printers, scanners, climate control systems, manufacturing systems, and so on. Software with a principal role of interacting with the physical world must, of necessity, acquire some properties of the physical world. It takes time. It consumes power. It does not terminate (unless it fails). It is not the idealized procedures of Alan Turing.

Computer science has tended to view this physicality of embedded software as messy. Consequently, the design of embedded software has not benefited from the richly developed abstractions of the 20th century. Instead of using object modeling, polymorphic type systems, and automated memory management, engineers write assembly code for idiosyncratic digital signal processors (DSPs) that can do finite impulse response filtering in one (deterministic) instruction cycle per tap.

The engineers that write embedded software are rarely computer scientists. They are experts in the application domain with a good understanding of the target architectures they work with. This is probably appropriate. The principal role of embedded software is interaction with the physical world. Consequently, the designer of that software should be the person who best understands that physical world. The challenge to computer scientists, should they choose to accept it, is to invent better abstractions for that domain expert to do her job.

Today's domain experts may resist such help. In fact, their skepticism is well warranted. They see Java programs stalling for one-third of a second to perform garbage collection and update the user interface, and they envision airplanes falling out of the sky. The fact is that the best-of-class methods offered by computer scientists today are, for the most part, a poor match to the requirements of embedded systems.

At the same time, however, these domain experts face a serious challenge. The complexity of their applications (and the consequent size of their programs) is growing rapidly. Their devices now often sit on a network, wireless or wired. Even some programmable DSPs now run a TCP/IP protocol stack, and the applications are getting much more dynamic, with downloadable customization and migrating code. Meanwhile, reliability standards for embedded software remain very high, unlike those for general-purpose software. At a minimum, the methods used for general-purpose software require considerable adaptation for embedded software. At a maximum, entirely new abstractions that embrace physicality and deliver robustness are needed.
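To make the DSP example above concrete, the sketch below shows a finite impulse response (FIR) filter in plain Java. On a programmable DSP, each iteration of the inner loop (one tap) corresponds to a single deterministic multiply-accumulate instruction, which is precisely the timing property that general-purpose languages and processors do not expose. The code is an illustration of the computation, not of DSP programming practice.

```java
/**
 * A plain-Java sketch of a finite impulse response (FIR) filter: each output
 * sample is a weighted sum of the most recent input samples. On a DSP, each
 * tap of the inner loop maps to one deterministic multiply-accumulate cycle.
 */
public class FirFilter {
    private final double[] coefficients;  // one coefficient per tap
    private final double[] delayLine;     // circular buffer of recent inputs
    private int newest = 0;

    public FirFilter(double[] coefficients) {
        this.coefficients = coefficients.clone();
        this.delayLine = new double[coefficients.length];
    }

    /** Accepts one input sample and returns one output sample. */
    public double step(double input) {
        delayLine[newest] = input;
        double acc = 0.0;
        int n = delayLine.length;
        for (int tap = 0; tap < n; tap++) {
            acc += coefficients[tap] * delayLine[(newest - tap + n) % n];
        }
        newest = (newest + 1) % n;
        return acc;
    }
}
```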
2. Just Software on Small Computers?
An arrogant view of embedded software is that it is just software on small computers. This view is naive. Timeliness, concurrency, liveness, reactivity, and
heterogeneity need to be an integral part of the programming abstractions. They are essential to the correctness of a program. It is not sufficient to realize the right mapping from input data to output data.
2.1 Timeliness
Time has been systematically removed from theories of computation. "Pure" computation does not take time, and has nothing to do with time. It is hard to overemphasize how deeply rooted this is in our culture. So-called "real-time" operating systems often reduce the characterization of a component (a process) to a single number, its priority. Even most "temporal" logics talk about "eventually" and "always," where time is not a quantifier, but rather a qualifier [2]. Attempts to imbue object-oriented design with real-time are far from satisfactory [3].

Much of the problem is that computation does take time. Computer architecture has been tending toward making things harder for the designers of embedded systems. A large part of the (architectural) performance gain in modern processors comes from statistical speedups such as elaborate caching schemes, speculative instruction execution, dynamic dispatch, and branch prediction. These techniques compromise the reliability of embedded systems. In fact, most embedded processors, such as programmable DSPs and microcontrollers, do not use these techniques. I believe that these techniques have such a big impact on average-case performance that they are indispensable. However, software practitioners will have to find abstractions that regain control of time, or embedded system designers will continue to refuse to use these processors.

The issue is not just that execution takes time. Even with infinitely fast computers, embedded software would still have to deal with time, because the physical processes with which it interacts evolve over time.
2.2 Concurrency
Embedded systems rarely interact with only a single physical process. They must simultaneously react to stimulus from a network and from a variety of sensors, and at the same time retain timely control over actuators. This implies that embedded software is concurrent.

In general-purpose software practice, management of concurrency is primitive. Threads or processes, semaphores, and monitors [4] are the classic tools for managing concurrency, but I view them as comparable to assembly language in abstraction. They are very difficult to use reliably, except by operating system experts. Only trivial designs are completely comprehensible (to most engineers). Excessively conservative rules of thumb dominate (such as: always grab locks in the same order [5]). Concurrency theory has much to offer that has not made its
way into widespread practice, but it probably needs adaptation for the embedded system context. For instance, many theories reduce concurrency to "interleavings," which trivialize time by asserting that all computations are equivalent to sequences of discrete, timeless operations. Embedded systems engage the physical world, where multiple things happen at once. Reconciling the sequentiality of software and the concurrency of the real world is a key challenge in the design of embedded systems.

Classical approaches to concurrency in software (threads, processes, semaphore synchronization, monitors for mutual exclusion, rendezvous, and remote procedure calls) provide a good foundation, but are insufficient by themselves. Complex compositions are simply too hard to understand.

An alternative view of concurrency that seems much better suited to embedded systems is implemented in synchronous/reactive languages [6] such as Esterel [7], which are used in safety-critical real-time applications. In Esterel, concurrency is compiled away. Although this approach leads to highly reliable programs, it is too static for some networked embedded systems. It requires that mutations be handled more as incremental compilation than as process scheduling, and incremental compilation for these languages proves to be challenging. We need an approach somewhere in between that of Esterel and that of today's real-time operating systems, with the safety and predictability of Esterel and the adaptability of a real-time operating system.
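The "always grab locks in the same order" rule of thumb mentioned above can be made concrete with a small Java sketch. The account-transfer scenario is hypothetical; the point is that imposing a single global ordering on lock acquisition (here, by account id) eliminates the circular wait that two opposite-direction transfers could otherwise create.

```java
/**
 * Illustration of the "always grab locks in the same order" rule of thumb.
 * If two threads transfer money in opposite directions and each locks "its
 * own" account first, they can deadlock; imposing a global order on the
 * locks (here, by account id) removes that possibility.
 */
public class Account {
    private final int id;     // assumed unique per account
    private long balance;

    public Account(int id, long balance) { this.id = id; this.balance = balance; }

    public static void transfer(Account from, Account to, long amount) {
        // Lock the account with the smaller id first, regardless of direction.
        Account first  = from.id < to.id ? from : to;
        Account second = from.id < to.id ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }
}
```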
2.3 Liveness
In embedded systems, liveness is a critical issue. Programs must not terminate or block waiting for events that will never occur. In the Turing view of computation, all nonterminating programs fall into an equivalence class that is implicitly deemed to be a class of defective programs. In embedded computing, however, terminating programs are defective. The term "deadlock" pejoratively describes premature termination of such systems. It is to be avoided at all costs.

In the Turing paradigm, given a sufficiently rich abstraction for expressing procedures, it is undecidable whether those procedures halt. This undecidability has been inconvenient because we cannot identify programs that fail to halt. Now it should be viewed as inconvenient because we cannot identify programs that fail to keep running. Moreover, correctness cannot be viewed as getting the right final answer. It must take into account the timeliness of a continuing stream of partial answers, as well as other "nonfunctional" properties.

A key part of the prevailing computation paradigm is that software is defined by the function it computes. The premise is that the function models everything interesting about the software. Even for the portions of embedded software that terminate (and hence have an
associated "computable function"), this model is a poor match. A key feature of embedded software is its interaction with physical processes, via sensors and actuators. Nonfunctional properties include timing, power consumption, fault recovery, security, and robustness.
2.4 Interfaces
Software engineering has experienced major improvements over the past decade or so through the widespread use of object-oriented design. Object-oriented design is a component technology, in the sense that a large complicated design is composed of pieces that expose interfaces that abstract their own complexity.

The use of interfaces in software is not new. It is arguable that the most widely applied component technology based on interfaces is procedures. Procedures are finite computations that take predefined arguments and produce final results. Procedure libraries are marketable component repositories, and have provided an effective abstraction for complex functionality. Object-oriented design aggregates procedures with the data that they operate on (and renames the procedures "methods").

Procedures, however, are a poor match for many embedded system problems. Consider, for example, a speech coder for a cellular telephone. It is artificial to define the speech coder in terms of finite computations. It can be done, of course. However, a speech coder is more like a process than a procedure. It is a nonterminating computation that transforms an unbounded stream of input data into an unbounded stream of output data. Indeed, a commercial speech coder component for cellular telephony is likely to be defined as a process that expects to execute on a dedicated signal processor. There is no widely accepted mechanism for packaging the speech coder in any way that it can safely share computing resources with other computations.

Processes, and their cousin, threads, are widely used for concurrent software design. Processes can be viewed as a component technology, where a multitasking operating system or multithreaded execution engine provides the framework that coordinates the components. Process interaction mechanisms, such as monitors, semaphores, and remote procedure calls, are supported by the framework. In this context, a process can be viewed as a component that exposes at its interface an ordered sequence of external interactions.

However, as a component technology, processes and threads are extremely weak. A composition of two processes is not a process (it no longer exposes at its interface an ordered sequence of external interactions). Worse, a composition of two processes is not a component of any sort that we can easily characterize. It is for this reason that concurrent programs built from processes or threads are so hard to get right. It is very difficult to talk about the properties of the aggregate
because we have no ontology for the aggregate. We don't know what it is. There is no (understandable) interface definition.

Object-oriented interface definitions work well because of the type systems that support them. Type systems are one of the great practical triumphs of contemporary software. They do more than any other formal method to ensure correctness of (practical) software. Object-oriented languages, with their user-defined abstract data types and their relationships between these types (inheritance, polymorphism), have had a big impact on both the reusability of software (witness the Java class libraries) and the quality of software. Combined with design patterns [8] and object modeling [9], type systems give us a vocabulary for talking about larger structure in software than lines of code and procedures. However, object-oriented programming talks only about static structure. It is about the syntax of procedural programs, and says nothing about their concurrency or dynamics. For example, it is not part of the type signature of an object that the initialize() method must be called before the fire() method. Temporal properties of an object (method x() must be invoked every 10 ms) are also not part of the type signature. For embedded software to benefit from a component technology, that component technology will have to include dynamic properties in interface definitions.
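As a minimal illustration of this gap, consider the following sketch (the interface and method names are invented for this example, not taken from any particular framework). The static type records the signatures, but nothing in it prevents a client from violating the ordering and timing constraints that live only in comments.

```java
// Hypothetical component interface, for illustration only. The static type
// says nothing about ordering or timing constraints.
public interface HypotheticalActor {
    void initialize();   // must be called before fire() -- not expressible in the type
    void fire();         // must be invoked, say, every 10 ms -- also not expressible
}

// The compiler happily accepts a client that violates both constraints:
class CarelessClient {
    static void run(HypotheticalActor actor) {
        actor.fire();    // type-checks, even though initialize() was never called
    }
}
```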
2.5 Heterogeneity
Heterogeneity is an intrinsic part of computation in embedded systems. They mix computational styles and implementation technologies. First, such systems are often a mixture of hardware and software designs, so that the embedded software interacts with hardware that is specifically designed to interact with it. Some of this hardware has continuous-time dynamics, which is a particularly poor match to prevailing computational abstractions. Embedded systems also mix heterogeneous event handling styles. They interact with events occurring irregularly in time (alarms, user commands, sensor triggers, etc.) and regularly in time (sampled sensor data and actuator control signals). These events have widely different tolerances for timeliness of reaction. Today, they are intermingled in real-time software in ad hoc ways; for example, they might all be abstracted as periodic events, and rate-monotonic principles [10] might be used to assign priorities.

Perhaps because of the scientific training of most engineers and computer scientists, the tendency is to seek a grand unified theory, the common model that subsumes everything as a special case, and that can, in principle, explain it all. We find it anathema to combine multiple programming languages, despite the fact that this occurs in practice all the time. Proponents of any one language are sure, absolutely sure, that their language is fully general. There is no need for
any other, and if only the rest of the world would understand its charms, they would switch to using it. This view will never work for embedded systems, since languages are bound to fit better or worse for any given problem.
2.6 Reactivity
Reactive systems are those that react continuously to their environment at the speed of the environment. Harel and Pnueli [11] and Berry [12] contrast them with interactive systems, which react with the environment at their own speed, and transformational systems, which simply take a body of input data and transform it into a body of output data. Reactive systems have real-time constraints, and are frequently safety-critical, to the point that failures could result in loss of human life. Unlike transformational systems, reactive systems typically do not terminate (unless they fail).

Robust distributed networked reactive systems must be capable of adapting to changing conditions. Service demands, computing resources, and sensors may appear and disappear. Quality of service demands may change as conditions change. The system is therefore continuously being redesigned while it operates, and all the while it must not fail.

A number of techniques have emerged for providing more robust support for reactive system design than what is provided by real-time operating systems. The synchronous languages, such as Esterel [7], Lustre [13], Signal [14], and Argos [15], are reactive and have been used for applications where validation is important, such as safety-critical control systems in aircraft and nuclear power plants. Lustre, for example, is used by Schneider Electric and Aerospatiale in France. Use of these languages is rapidly spreading in the automotive industry, and support for them is beginning to appear in commercial EDA (electronic design automation) software.

Reactive systems must typically react simultaneously to multiple sources of stimulus. Thus, they are concurrent. The synchronous languages manage concurrency in a very different way than that found in real-time operating systems. Their mechanism makes much heavier use of static (compile-time) analysis of concurrency to guarantee behavior. However, compile-time analysis of concurrency has a serious drawback: it compromises modularity and precludes adaptive software architectures.
3. Limitations of Prevailing Software Engineering Methods
Construction of complex embedded software would benefit from component technology. Ideally, these components are reusable, and embody valuable
expertise in one or more aspects of the problem domain. The composition must be meaningful, and ideally, a composition of components yields a new component that can be used to form other compositions. To work, these components need to be abstractions of the complex, domain-specific software that they encapsulate. They must hide the details, and expose only the essential external interfaces, with well-defined semantics.
3.1 Procedures and Object Orientation
A primary abstraction mechanism of this sort in software is the procedure (or in object-oriented culture, a method). Procedures are terminating computations. They take arguments, perform a finite computation, and return results. The real world, however, does not start, execute, complete, and return. Object orientation couples procedural abstraction with data to get data abstraction. Objects, however, are passive, requiring external invocation of their methods. So-called "active objects" are more of an afterthought, still requiring a model of computation to have any useful semantics. The real world is active, more like processes than objects, but with a clear and clean semantics that is firmly rooted in the physical world.

So while object-oriented design has proven extremely effective in building large software systems, it has little to offer to address the specific problems of the embedded system designer. A sophisticated component technology for embedded software will talk more about processes than procedures, but we must find a way to make these processes compositional, and to control their real-time behavior in predictable and understandable ways. It will talk about concurrency and the models of computation used to regulate interaction between components. And it will talk about time.
3.2 Hardware Design
Hardware design, of course, is more constrained than software by the physical world. It is instructive to examine the abstractions that have worked for hardware, such as synchronous design. The synchronous abstraction is widely used in hardware to build large, complex, and modular designs, and has recently been applied to software [6], particularly for designing embedded software.

Hardware models are conventionally constructed using hardware description languages such as Verilog and VHDL; these languages realize a discrete-event model of computation that makes time a first-class concept, information shared by all components. Synchronous design is done through a stylized use of these languages. Discrete-event models are often used for modeling complex systems,
particularly in the context of networking, but have not yet (to my knowledge) been deployed into embedded system design. Conceptually, the distinction between hardware and software, from the perspective of computation, has only to do with the degree of concurrency and the role of time. An application with a large amount of concurrency and a heavy temporal content might as well be thought of using hardware abstractions, regardless of how it is implemented. An application that is sequential and has no temporal behavior might as well be thought of using software abstractions, regardless of how it is implemented. The key problem becomes one of identifying the appropriate abstractions for representing the design.
3.3 Real-Time Operating Systems
Most embedded systems, as well as many emerging applications of desktop computers, involve real-time computations. Some of these have hard deadlines, typically involving streaming data and signal processing. Examples include communication subsystems, sensor and actuator interfaces, audio and speech processing subsystems, and video subsystems. Many of these require not just real-time throughput, but also low latency. In general-purpose computers, these tasks have historically been delegated to specialized hardware, such as SoundBlaster cards, video cards, and modems. In embedded systems, these tasks typically compete for resources. As embedded systems become networked, the situation gets much more complicated, because the combination of tasks competing for resources is not known at design time.

Many such embedded systems incorporate a real-time operating system, which offers specialized scheduling services tuned to real-time needs, in addition to standard operating system services such as I/O. The schedules might be based on priorities, using for example the principles of rate-monotonic scheduling [10,16], or on deadlines. There remains much work to be done to improve the match between the assumptions of the scheduling principle (such as periodicity, in the case of rate-monotonic scheduling) and the realities of embedded systems. Because the match is not always good today, many real-time embedded systems contain hand-built, specialized microkernels for task scheduling. Such microkernels, however, are rarely sufficiently flexible to accommodate networked applications, and as the complexity of embedded applications grows, they will be increasingly difficult to design. The issues are not simple. Unfortunately, current practice often involves fine tuning priorities until a particular implementation seems to work. The result is fragile systems that fail when anything changes.

A key problem in scheduling is that most techniques are not compositional. That is, even if assurances can be provided for an individual component, there are no systematic mechanisms for providing assurances to the aggregate of two
components, except in trivial cases. A chronic problem with priority-based scheduling, known as priority inversion, is one manifestation of this problem. Priority inversion occurs when processes interact, for example, by using a monitor to obtain exclusive access to a shared resource. Suppose that a low-priority process has access to the resource, and is preempted by a medium-priority process. Then a high-priority process preempts the medium-priority process and attempts to gain access to the resource. It is blocked by the low-priority process, but the low-priority process is blocked by the presence of an executable process with higher priority, the medium-priority process. By this mechanism, the high-priority process cannot execute until the medium-priority process completes and allows the low-priority process to relinquish the resource.

Although there are ways to prevent priority inversion (priority inheritance and priority ceiling protocols, for example), the problem is symptomatic of a deeper failure. In a priority-based scheduling scheme, processes interact both through the scheduler and through the mutual exclusion mechanism (monitors) supported by the framework. These two interaction mechanisms together, however, have no coherent compositional semantics. It seems like a fruitful research goal to seek a better mechanism.
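The scenario can be sketched in ordinary Java, with the caveat that Java thread priorities are merely hints to the underlying scheduler, so this is a conceptual illustration of the interaction rather than a reliable reproduction of the failure; all names are invented for the example.

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch of the priority-inversion scenario described above.
public class PriorityInversionSketch {
    private static final ReentrantLock resource = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        Thread low = new Thread(() -> {
            resource.lock();                     // low-priority task holds the shared resource
            try { busyWork(2000); } finally { resource.unlock(); }
        });
        Thread medium = new Thread(() -> busyWork(2000));   // never touches the resource
        Thread high = new Thread(() -> {
            resource.lock();                     // blocked by "low", which is starved by "medium"
            try { System.out.println("high finally ran"); } finally { resource.unlock(); }
        });
        low.setPriority(Thread.MIN_PRIORITY);
        medium.setPriority(Thread.NORM_PRIORITY);
        high.setPriority(Thread.MAX_PRIORITY);

        low.start();
        Thread.sleep(100);    // let "low" grab the lock first
        medium.start();       // on a single processor, starves "low" while it holds the lock
        high.start();         // now waits on the lock held by "low"
    }

    private static void busyWork(long millis) {
        long end = System.currentTimeMillis() + millis;
        while (System.currentTimeMillis() < end) { /* spin */ }
    }
}
```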
3.4 Real-Time Object-Oriented Models
Real-time practice has recently been extended to distributed component software in the form of real-time CORBA and related models [17] and real-time object-oriented modeling (ROOM) [18]. CORBA is fundamentally a distributed object-oriented approach based on remote procedure calls. Built upon this foundation of remote procedure calls are various services, including an event service that provides a publish-and-subscribe semantics. Real-time CORBA extends this further by associating priorities with event handling, and leveraging real-time scheduling for processing events in a timely manner.

Real-time CORBA, however, is still based on prevailing software abstractions. Thus, for effective real-time performance, a programmer must specify various numbers, such as worst-case and typical execution times for procedures, cached and not. These numbers are hard to know precisely. Real-time scheduling is then driven by additional parameters such as periodicity, and then tweaked with semantically weak parameters called "importance" and "criticality." These parameters, taken together, amount to guesses, as their actual effect on system behavior is hard to predict except by experimentation.
4. Actor-Oriented Design
Object-oriented design emphasizes inheritance and procedural interfaces. We need an approach that, like object-oriented design, constructs complex
applications by assembling components, but emphasizes concurrency and communication abstractions, and admits time as a first-class concept. I suggest the term actor-oriented design for a refactored software architecture, where instead of objects, the components are parameterized actors with ports. Ports and parameters define the interface of an actor. A port represents an interaction with other actors, but unlike a method, does not have call-return semantics. Its precise semantics depends on the model of computation, but conceptually it represents signaling between components.

There are many examples of actor-oriented frameworks, including Simulink (from The MathWorks), LabVIEW (from National Instruments), Easy 5x (from Boeing), SPW (the Signal Processing Worksystem, from Cadence), and CoCentric System Studio (from Synopsys). The approach has not been entirely ignored by the software engineering community, as evidenced by ROOM [18] and some architecture description languages (ADLs, such as Wright [19]). Hardware design languages, such as VHDL, Verilog, and SystemC, are all actor-oriented. In the academic community, active objects and actors [20,21], timed I/O automata [22], Polis and Metropolis [23], Giotto [24], and Ptolemy and Ptolemy II [25] all emphasize actor orientation.

Agha uses the term "actors," which he defines to extend the concept of objects to concurrent computation [26a]. Agha's actors encapsulate a thread of control and have interfaces for interacting with other actors. The protocols used for this interface are called interaction patterns, and are part of the model of computation. My use of the term "actors" is broader, in that I do not require the actors to encapsulate a thread of control, but I share with Agha the notion of interaction patterns, which I call the "model of computation." Agha argues that no model of concurrency can or should allow all communication abstractions to be directly expressed. He describes message passing as akin to "gotos" in their lack of structure. Instead, actors should be composed using an interaction policy. These more specialized interaction policies will form models of computation.
4.1 Abstract Syntax
It is useful to separate syntactic issues from semantic issues. An abstract syntax defines how a design can be decomposed into interconnected components, without being concerned with how a design is represented on paper or in a computer file (that is the concern of the concrete syntax). An abstract syntax is also not concerned with the meaning of the interconnections of components, nor even what a component is. A design is a set of components and relationships among them, where the relationships conform to this abstract syntax. Here, we describe the abstract syntax using informal diagrams that illustrate these sets and relations
by giving use cases, although formalizing the abstract syntax is necessary for precision.

Consider the diagram in Fig. 1. This shows three components (actors), each with one port, and an interconnection between these ports mediated by a relation. This illustrates a basic abstract syntax. The abstract syntax says nothing about the meaning of the interconnection, but merely that it exists. To be useful, the abstract syntax is typically augmented with hierarchy, where an actor is itself an aggregate of actors. It can be further elaborated with such features as ports supporting multiple links and relations representing multiple connections. An elaborate abstract syntax of this type is described in [25].
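A minimal sketch of such an abstract syntax in Java might look as follows; the class names and fields are invented for illustration and are not the Ptolemy II API. Note that the structure records only which ports are linked through which relations, and says nothing about what the links mean.

```java
import java.util.ArrayList;
import java.util.List;

// Actors contain ports, and relations mediate links between ports (cf. Fig. 1).
class Actor {
    final String name;
    final List<Port> ports = new ArrayList<>();
    Actor(String name) { this.name = name; }
}

class Port {
    final Actor container;
    final List<Relation> links = new ArrayList<>();     // a port may support multiple links
    Port(Actor container) { this.container = container; container.ports.add(this); }
    void link(Relation relation) { links.add(relation); relation.linkedPorts.add(this); }
}

class Relation {
    final List<Port> linkedPorts = new ArrayList<>();    // a relation may connect many ports
}
```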
4.2 Concrete Syntaxes
The abstract syntax may be associated with any number of concrete syntaxes. For instance, an XML schema might be used to provide a textual representation of a structure [26b]. A visual editor may provide a diagrammatic syntax, like that shown in Fig. 2. Actor-oriented design does not require visual syntaxes. However, visual depictions of systems have always held a strong human appeal, making them extremely effective in conveying information about a design. Many of the methods described in this chapter can use such depictions to completely and formally specify models. Visual syntaxes can be every bit as precise and complete as textual syntaxes, particularly when they are judiciously combined with textual syntaxes.

Visual representations of models have a mixed history. In circuit design, schematic diagrams were once routinely used to capture all of the essential information needed to implement some systems. Today, schematics are usually replaced by text in hardware description languages such as VHDL or Verilog. In other contexts, visual representations have largely failed, for example, flowcharts for capturing the behavior of software.
FIG. 1. Abstract syntax of actor-oriented designs.
FIG. 2. An example of a visual concrete syntax. This is the visual editor for Ptolemy II [25] called Vergil, designed by Steve Neuendorffer.
Recently, a number of innovative visual formalisms, including visual dataflow, hierarchical concurrent finite state machines, and object models, have been garnering support. The UML visual language for object modeling, for example, has been receiving a great deal of practical use [3,27].
4.3 Semantics
A semantics gives meaning to components and their interconnection. It states, for example, that a component is a process, and a connection represents communication between processes. Alternatively, a component may be a state, and a connection may represent a transition between states. In the former case, the semantics may restrict how the communication may occur. These semantic models can be viewed as architectural patterns [28], although for the purposes of this chapter, I will call them models of computation. One of my objectives here is to codify a few of the known models of computation that are useful for embedded software design. Consider a family of models of computation where components are producers or consumers of data (or both). In this case, the ports acquire the property of being inputs, outputs, or both. Consider for example the diagram in Fig. 3.
FIG. 3. Producer-consumer communication mechanism.
This diagram has two actors, one producer and one consumer. The diagram suggests a port that is an output by showing an outgoing arrow, and an input by showing an ingoing arrow. It also shows a simplified version of the Ptolemy II data transport mechanism [25]. The producer sends a token t (which encapsulates user data) via its port by calling a send() method on that port. This results in a call to the put() method of the receiver in the destination port. The destination actor retrieves the token by calling get() on the port. This mechanism, however, is polymorphic, in the sense that it does not specify what it means to call put() or get(). This depends on the model of computation.

A model of computation may be very broad or very specific. The more constraints there are, the more specific it is. Ideally, this specificity comes with benefits. For example, Unix pipes do not support feedback structures, and therefore cannot deadlock. Common practice in concurrent programming is that the components are threads that share memory and exchange objects using semaphores and monitors. This is a very broad model of computation with few benefits. In particular, it is hard to talk about the properties of an aggregate of components because an aggregate of components is not a component in the framework. Moreover, it is difficult to analyze a design in such a model of computation for deadlock or temporal behavior.

A model of computation is often deeply ingrained in the human culture of the designers that use it. It fades out of the domain of discourse. It can be argued that the Turing sequentiality of computation is so deeply ingrained in contemporary computer science culture that we no longer realize just how thoroughly we have banished time from computation. In a more domain-specific context, users of modeling languages such as Simulink rarely question the suitability of the semantics to their problem at hand. To such users, it does not "have semantics," it just "is." The key challenge in embedded software research is to invent or identify models of computation with properties that match the application domain well. One of the requirements is that time be central to the model.
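The pattern can be sketched as follows, with invented names; the essential point is that send() merely delegates to put() on a receiver supplied by the destination side, so the communication semantics are determined entirely by which receiver implementation the model of computation installs.

```java
// Simplified sketch of the polymorphic data-transport pattern of Fig. 3.
interface Receiver<T> {
    void put(T token);   // invoked by the producing side via send()
    T get();             // invoked by the consuming actor
}

class OutputPort<T> {
    private Receiver<T> remote;                  // receiver owned by the destination port
    void setRemoteReceiver(Receiver<T> receiver) { this.remote = receiver; }
    void send(T token) { remote.put(token); }    // semantics determined by the receiver
}
```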
4.4 Models of Computation
A model of computation can be thought of as the "laws of physics" that govern component interactions. It is the programmer's model, or the conceptual
framework within which larger designs are constructed by composing components. Design of embedded software will require models of computation that support concurrency. In practice, concurrency seriously complicates system design. No universal model of computation has yet emerged for concurrent computation (although some proponents of one approach or another will dispute this). By contrast, for sequential computation, von Neumann provided a wildly successful universal abstraction. In this abstraction, a program consists of a sequence of transformations of the system state. In distributed systems, it is difficult to maintain a global notion of "system state," an essential part of the von Neumann model, since many small state transformations are occurring simultaneously, in arbitrary order.

In networked embedded systems, communication bandwidth and latencies will vary over several orders of magnitude, even within the same system design. A model of computation that is well suited to small latencies (e.g., the synchronous hypothesis used in digital circuit design, where computation and communication take "zero" time) is usually poorly suited to large latencies, and vice versa. Thus, practical designs will almost certainly have to combine techniques.

It is well understood that effective design of concurrent systems requires one or more levels of abstraction above the hardware support. A hardware system with a shared memory model and transparent cache consistency, for example, still requires at least one more level of abstraction in order to achieve determinate distributed computation. A hardware system based on high-speed packet-switched networks could introduce a shared-memory abstraction above this hardware support, or it could be used directly as the basis for a higher level of abstraction. Abstractions that can be used include the event-based model of Java Beans, semaphores based on Dijkstra's P/V systems [29], guarded communication [30], rendezvous, synchronous message passing, active messages [31], asynchronous message passing, streams (as in Kahn process networks [32]), dataflow (commonly used in signal and image processing), synchronous/reactive systems [6], Linda [33], and many others. These abstractions partially or completely define a model of computation.

Applications are built on a model of computation, whether the designer is aware of this or not. Each possibility has strengths and weaknesses. Some guarantee determinacy, some can execute in bounded memory, and some are provably free from deadlock. Different styles of concurrency are often dictated by the application, and the choice of model of computation can subtly affect the choice of algorithms. While dataflow is a good match for signal processing, for example, it is a poor match for transaction-based systems, control-intensive sequential decision making, and resource management.

It is fairly common to support models of computation with language extensions or entirely new languages. Occam, for example, supports synchronous
message passing based on guarded communication [30]. Esterel [7], Lustre [13], Signal [14], and Argos [15] support the synchronous/reactive model. These languages, however, have serious drawbacks. Acceptance is slow, platforms are limited, support software is limited, and legacy code must be translated or entirely rewritten.

An alternative approach is to explicitly use models of computation for coordination of modular programs written in standard, more widely used languages. The system-level specification language SystemC for hardware systems, for example, uses this approach (see http://systemc.org). In other words, one can decouple the choice of programming language from the choice of model of computation. This also enables mixing such standard languages in order to maximally leverage their strengths. Thus, for example, an embedded application could be described as an interconnection of modules, where modules are written in some combination of C, Java, and VHDL. Use of these languages permits exploiting their strengths. For example, VHDL provides FPGA targeting for reconfigurable hardware implementations. Java, in theory, provides portability, migratability, and a certain measure of security. C provides efficient execution. The interaction between modules could follow any of several principles, e.g., those of Kahn process networks [32]. This abstraction provides a robust interaction layer with loosely synchronized communication and support for mutable systems (in which subsystems come and go). It is not directly built into any of the underlying languages, but rather interacts with them as an application interface. The programmer uses them as a design pattern [8] rather than as a language feature.

Larger applications may mix more than one model of computation. For example, the interaction of modules in a real-time, safety-critical subsystem might follow the synchronous/reactive model of computation, while the interaction of this subsystem with other subsystems follows a process networks model. Thus, domain-specific approaches can be combined.
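As a rough illustration of this coordination style, the following sketch composes two independently written Java modules using Kahn-style channels, here approximated by blocking FIFO queues (a bounded buffer stands in for the conceptually unbounded Kahn channel). The names are illustrative and tied to no particular framework.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Two ordinary host-language modules coordinated by a FIFO channel with
// blocking reads, in the spirit of Kahn process networks.
public class KahnCoordinationSketch {
    public static void main(String[] args) {
        BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(1024);

        Thread producer = new Thread(() -> {          // one module, written in plain Java
            try {
                for (int i = 0; i < 10; i++) channel.put(i * i);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread consumer = new Thread(() -> {          // another independently written module
            try {
                for (int i = 0; i < 10; i++) System.out.println(channel.take());
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start();
        consumer.start();
    }
}
```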
5. Examples of Models of Computation
There are many models of computation, each dealing with concurrency and time in different ways. In this section, I outline some of the most useful models for embedded software. All of these will lend a semantics to the same abstract syntax shown in Fig. 1.
5.1 Dataflow
In dataflow models, actors are atomic (indivisible) computations that are triggered by the availability of input data. Connections between actors represent the
flow of data from a producer actor to a consumer actor. Examples of commercial frameworks that use dataflow models are SPW (the Signal Processing Worksystem, from Cadence) and LabVIEW (from National Instruments).

Synchronous dataflow (SDF) is a particularly restricted special case with the extremely useful property that deadlock and boundedness are decidable [34-38]. Boolean dataflow (BDF) is a generalization that sometimes yields to deadlock and boundedness analysis, although fundamentally these questions remain undecidable [39]. Dynamic dataflow (DDF) uses only run-time analysis, and thus makes no attempt to statically answer questions about deadlock and boundedness [40-42]. A small but typical example of an embedded software application modeled using SDF is shown in Fig. 4. That example shows a sound synthesis algorithm that consists of four actors in a feedback loop. The algorithm synthesizes the sound of a plucked string instrument, such as a guitar, using the well-known Karplus-Strong algorithm.
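For readers unfamiliar with the algorithm, the following plain-Java sketch computes what the SDF model of Fig. 4 expresses as a dataflow graph: a delay line in a feedback loop with a two-tap lowpass filter and a gain slightly below one. It is only an illustration of the computation, not of SDF scheduling itself.

```java
import java.util.Random;

// Karplus-Strong plucked-string synthesis: noise-filled delay line with a
// lowpass filter and gain in the feedback path.
public class KarplusStrongSketch {
    public static double[] pluck(double sampleRate, double frequency, int numSamples) {
        int delayLength = (int) Math.round(sampleRate / frequency);
        double[] delayLine = new double[delayLength];
        Random random = new Random();
        for (int i = 0; i < delayLength; i++) {
            delayLine[i] = random.nextDouble() * 2 - 1;   // excite the "string" with noise
        }
        double[] output = new double[numSamples];
        int index = 0;
        for (int n = 0; n < numSamples; n++) {
            output[n] = delayLine[index];
            int next = (index + 1) % delayLength;
            // two-tap averaging (lowpass) filter and gain slightly below one
            delayLine[index] = 0.996 * 0.5 * (delayLine[index] + delayLine[next]);
            index = next;
        }
        return output;
    }
}
```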
5.2 Time Triggered
Some systems with timed events are driven by clocks, which are signals with events that are repeated indefinitely with a fixed period. A number of software frameworks and hardware architectures have evolved to support this highly regular style of computation.
FIG. 4. A synchronous dataflow model implemented in the SDF domain (created by Stephen Neuendorffer) of Ptolemy II [25]. The model implements the Karplus-Strong algorithm for generating a plucked-string musical instrument sound, and uses the audio library created by Brian Vogel.
The time-triggered architecture (TTA) [43] is a hardware architecture supporting such models. The TTA takes advantage of this regularity by statically scheduling computations and communications among distributed components.

In hardware design, cycle-driven simulators stimulate computations regularly according to the clock ticks. This strategy matches synchronous hardware design well, and yields highly efficient simulations for certain kinds of designs. In the Scenic system [44], for example, components are processes that run indefinitely, stall to wait for clock ticks, or stall to wait for some condition on the inputs (which are synchronous with clock ticks). Scenic also includes a clever mechanism for modeling preemption, an important feature of many embedded systems. Scenic has evolved into the SystemC specification language for system-level hardware design (see http://systemc.org).

The Giotto programming language [24] provides a time-triggered software abstraction which, unlike the TTA or cycle-driven simulation, is hardware independent. It is intended for embedded software systems where periodic events dominate. It combines with finite-state machines (see below) to yield modal models that can be quite expressive. An example of a helicopter controller in Giotto is described in [45].

Discrete-time models of computation are closely related. These are commonly used for digital signal processing, where there is an elaborate theory that handles the composition of subsystems. This model of computation can be generalized to support multiple sample rates. In either case, a global clock defines the discrete points at which signals have values (at the ticks).
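The programming model, though not the static scheduling that a true time-triggered architecture would derive, can be suggested with a periodic release of a task; the code below is a sketch using a standard Java scheduled executor, with invented task content.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Time-triggered flavor: the task is released strictly periodically by a clock.
// The program runs until externally terminated, as an embedded loop would.
public class TimeTriggeredSketch {
    public static void main(String[] args) {
        ScheduledExecutorService clock = Executors.newSingleThreadScheduledExecutor();
        Runnable controlTask = () -> {
            // read sensors, compute the control law, write actuators (placeholder)
            System.out.println("tick at " + System.nanoTime());
        };
        // release the task every 10 ms, starting immediately
        clock.scheduleAtFixedRate(controlTask, 0, 10, TimeUnit.MILLISECONDS);
    }
}
```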
5.3 Synchronous/Reactive
In the synchronous/reactive (SR) model of computation [6], connections between components represent data values that are aligned with global clock ticks, as with time-triggered approaches. However, unlike time-triggered and discrete-time approaches, there is no assumption that all (or even most) signals have a value at each time tick. This model efficiently deals with concurrent models with irregular events. The components represent relations between input and output values at each tick, allowing for absences of value, and are usually partial functions with certain technical restrictions to ensure determinacy. Sophisticated compiler techniques yield extremely efficient execution that can reduce all concurrency to a sequential execution. Examples of languages that use the SR model of computation include Esterel [7], Signal [14], and Lustre [46].

An example of an application for which the synchronous/reactive model is ideally suited is the management of a token-ring protocol for media access control, described in [9]. In this application, a token circulates in a round-robin fashion among users of a communication medium. When a user makes a request for
access, if the user has the token, access is granted immediately. If not, then access may still be granted if the current holder of the token does not require access. The SR realization of this protocol yields predictable, deterministic management of access. This application benefits from the SR semantics because it includes instantaneous dialog and convergence to a fixed point (which determines who gets access when there is contention). SR models are excellent for applications with concurrent and complex control logic. Because of the tight synchronization, safety-critical real-time applications are a good match. However, also because of the tight synchronization, some applications are overspecified in the SR model, which thus limits the implementation alternatives and makes distributed systems difficult to model. Moreover, in most realizations, modularity is compromised by the need to seek a global fixed point at each clock tick.
5.4 Discrete Events
In discrete-event (DE) models of computation, the connections represent sets of events placed on a time line. An event consists of a value and time stamp. This model of computation is popular for specifying hardware and for simulating telecommunications systems, and has been realized in a large number of simulation environments, simulation languages, and hardware description languages, including VHDL and Verilog. Like SR, there is a globally consistent notion of time, but unlike SR time has a metric, in that the time between events has significance. DE models are often used in the design of communication networks. Figure 2 above gives a very simple DE model that is typical of this usage. That example constructs packets and routes them through a channel model. In this case, the channel model has the feature that it may reorder the packets. A sequencer is used to reconstruct the original packet order. DE models are also excellent descriptions of concurrent hardware, although increasingly the globally consistent notion of time is problematic. In particular, it overspecifies (or overmodels) systems where maintaining such a globally consistent notion is difficult, including large VLSI chips with high clock rates, and networked distributed systems. A key weakness is that it is relatively expensive to implement in software, as evidenced by the relatively slow simulators.
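At its core, a DE kernel maintains a global event queue ordered by time stamp, as in the following deliberately simplified sketch (real DE simulators add topological ordering of components, careful handling of simultaneous events, and much more).

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Minimal discrete-event loop: events carry a value and a time stamp and are
// processed in globally consistent time order.
public class DiscreteEventSketch {
    record Event(double time, String value) {}

    public static void main(String[] args) {
        PriorityQueue<Event> eventQueue =
                new PriorityQueue<>(Comparator.comparingDouble(Event::time));
        eventQueue.add(new Event(2.5, "packet B"));
        eventQueue.add(new Event(1.0, "packet A"));
        eventQueue.add(new Event(1.0, "alarm"));      // simultaneous events are allowed

        double currentTime = 0.0;
        while (!eventQueue.isEmpty()) {
            Event e = eventQueue.poll();              // earliest time stamp first
            currentTime = e.time();
            System.out.println("t=" + currentTime + ": " + e.value());
        }
    }
}
```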
5.5 Process Networks
A common way of handling concurrency in software is to use components that are processes or threads communicating by asynchronous, buffered message passing. The sender of the message need not wait for the receiver to be ready to receive
the message. There are several variants of this technique, but I focus on one that ensures determinate computation, namely Kahn process networks [32]. In a Kahn process network (PN) model of computation, the connections represent sequences of data values (tokens), and the components represent functions that map input sequences into output sequences. Certain technical restrictions on these functions are necessary to ensure determinacy, meaning that the sequences are fully specified. Dataflow models are a special case of process networks that construct processes as sequences of atomic actor firings [47]. PN models are excellent for signal processing [48]. They are loosely coupled, and hence relatively easy to parallelize or distribute. They can be implemented efficiently in both software and hardware, and hence leave implementation options open. A key weakness of PN models is that they are awkward for specifying complicated control logic. Control logic is specified by routing data values.
5.6 Rendezvous
In synchronous message passing, the components are processes, and processes communicate in atomic, instantaneous actions called rendezvous. If two processes are to communicate, and one reaches the point first at which it is ready to communicate, then it stalls until the other process is ready to communicate. "Atomic" means that the two processes are simultaneously involved in the exchange, and that the exchange is initiated and completed in a single uninterruptable step. Examples of rendezvous models include Hoare's communicating sequential processes (CSP) [30] and Milner's calculus of communicating systems (CCS) [49]. This model of computation has been realized in a number of concurrent programming languages, including Lotos and Occam.

Rendezvous models are particularly well matched to applications where resource sharing is a key element, such as client-server database models and multitasking or multiplexing of hardware resources. A key weakness of rendezvous-based models is that maintaining determinacy can be difficult. Proponents of the approach, of course, cite the ability to model nondeterminacy as a key strength.

Rendezvous models and PN both involve threads that communicate via message passing, synchronously in the former case and asynchronously in the latter. Neither model intrinsically includes a notion of time, which can make it difficult to interoperate with models that do include a notion of time. In fact, message events are partially ordered, rather than totally ordered as they would be were they placed on a time line. Both models of computation can be augmented with a notion of time to promote interoperability and to directly model temporal properties (see, for example, [50]). In the Pamela system [51], threads assume that time does not advance while they are active, but can advance when they stall on inputs, outputs, or explicitly indicate
that time can advance. By this vehicle, additional constraints are imposed on the order of events, and determinate interoperability with timed models of computation becomes possible. This mechanism has the potential of supporting low-latency feedback and configurable hardware.
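The rendezvous semantics can be illustrated with a zero-capacity channel: in the sketch below, put() and take() each block until the other party arrives, so the exchange is atomic from the program's point of view. This shows only the communication primitive, not the richer composition operators of CSP or CCS.

```java
import java.util.concurrent.SynchronousQueue;

// A SynchronousQueue has no capacity, so every transfer is a rendezvous.
public class RendezvousSketch {
    public static void main(String[] args) {
        SynchronousQueue<String> channel = new SynchronousQueue<>();

        Thread sender = new Thread(() -> {
            try { channel.put("hello"); }                 // blocks until a receiver is ready
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread receiver = new Thread(() -> {
            try { System.out.println(channel.take()); }   // blocks until a sender is ready
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        sender.start();
        receiver.start();
    }
}
```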
5.7 Publish and Subscribe
In publish-and-subscribe models, connections between components are via named event streams. A component that is a consumer of such streams registers an interest in the stream. When a producer produces an event to such a stream, the consumer is notified that a new event is available. It then queries a server for the value of the event. Linda is a classic example of a fully elaborated publish-and-subscribe mechanism [52]. It has recently been reimplemented in JavaSpaces, from Sun Microsystems. An example of a distributed embedded software application using JavaSpaces is shown in Fig. 5.
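A bare-bones broker conveys the structure of the model; the sketch below is illustrative only and is not the Linda or JavaSpaces API (in particular, it pushes the event value directly to subscribers rather than having them query a server for it).

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Consumers register interest in a named event stream; producers publish to
// the stream and every registered consumer is notified with the event value.
public class PubSubSketch {
    private final Map<String, List<Consumer<Object>>> subscribers = new ConcurrentHashMap<>();

    public void subscribe(String streamName, Consumer<Object> listener) {
        subscribers.computeIfAbsent(streamName, k -> new CopyOnWriteArrayList<>()).add(listener);
    }

    public void publish(String streamName, Object event) {
        subscribers.getOrDefault(streamName, List.of())
                   .forEach(listener -> listener.accept(event));
    }

    public static void main(String[] args) {
        PubSubSketch broker = new PubSubSketch();
        broker.subscribe("tilt-sensor", value -> System.out.println("driver got " + value));
        broker.publish("tilt-sensor", 0.42);   // notifies every subscriber of the stream
    }
}
```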
5.8 Continuous Time
Physical systems can often be modeled using coupled differential equations. These have a natural representation in the abstract syntax of Fig. 1, where the connections represent continuous-time signals (functions of the time continuum). The components represent relations between these signals. The job of an execution environment is to find a fixed point, i.e., a set of functions of time that satisfy all the relations.

Differential equations are excellent for modeling the physical systems with which embedded software interacts. Joint modeling of these physical systems and the software that interacts with them is essential to developing confidence in a design of embedded software. Such joint modeling is supported by actor-oriented modeling frameworks such as Simulink, Saber, VHDL-AMS, and Ptolemy II. A Ptolemy II continuous-time model is shown in Fig. 6.
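What such an execution environment must do can be suggested by a fixed-step solver for a single system, here a damped spring-mass system integrated with the forward Euler method; production continuous-time solvers use adaptive-step, higher-order methods and handle algebraic loops, so this is only a sketch of the idea.

```java
// Forward-Euler integration of m*x'' + c*x' + k*x = 0.
public class ContinuousTimeSketch {
    public static void main(String[] args) {
        double m = 1.0, c = 0.2, k = 4.0;     // mass, damping, spring constant
        double x = 1.0, v = 0.0;              // initial position and velocity
        double dt = 0.001;                    // integration step size
        for (double t = 0.0; t < 10.0; t += dt) {
            double a = (-c * v - k * x) / m;  // acceleration from the equations of motion
            x += v * dt;                      // integrate position
            v += a * dt;                      // integrate velocity
        }
        System.out.println("position after 10 s: " + x);
    }
}
```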
5.9 Finite State Machines
All of the models of computation considered so far are concurrent. It is often useful to combine these concurrent models hierarchically with finite-state machines (FSMs) to get modal models. FSMs are different from any of the models we have considered so far in that they are strictly sequential. A component in this model is called a state or mode, and exactly one state is active at a time. The connections between states represent transitions, or transfer of control between states. Execution is a strictly ordered sequence of state transitions.
FIG. 5. A distributed embedded application using JavaSpaces combined with SDF to realize a publish-and-subscribe model of computation. The upper left model reads sensor data from a tilt sensor and publishes the data on the network. The lower model subscribes to the sensor data and uses it to drive the Lego robot at the upper right. This example was built by Jie Liu and Xiaojun Liu.
Transition systems are a more general version, in that a given component may represent more than one system state (and there may be an infinite number of components).

FSM models are excellent for describing control logic in embedded systems, particularly safety-critical systems. FSM models are amenable to in-depth formal analysis, using for example model checking, and thus can be used to avoid surprising behavior. Moreover, FSMs are easily mapped to either hardware or software implementations.

FSM models have a number of key weaknesses. First, at a very fundamental level, they are not as expressive as the other models of computation described here. They are not sufficiently rich to describe all partial recursive functions. However, this weakness is acceptable in light of the formal analysis that becomes possible. Many questions about designs are decidable for FSMs and undecidable for other models of computation.
FIG. 6. A nonlinear feedback system (a Lorenz attractor) modeled in the continuous-time (CT) domain in Ptolemy II. The CT director uses a sophisticated ordinary differential equation solver to execute the model, which exhibits the chaotic behavior plotted at the right. This model and the CT domain were created by Jie Liu.
Another key weakness is that the number of states can get very large even in the face of only modest complexity. This makes the models unwieldy. The latter problem can often be solved by using FSMs in combination with concurrent models of computation. This was first noted by Harel, who introduced the Statecharts formalism. Statecharts combine synchronous/reactive modeling with FSMs [53a]. Statecharts have been adopted by UML for modeling the dynamics of software [3,27]. FSMs have also been combined with differential equations, yielding the so-called hybrid systems model of computation [53b].

FSMs can be hierarchically combined with a huge variety of concurrent models of computation. We call the resulting formalism "*charts" (pronounced "starcharts") where the star represents a wildcard [54]. Consider the model shown in Fig. 7. In that figure, component B is hierarchically refined by another model consisting of three components, c, d, and e. These latter three components are states of a state machine, and the connections between them are state transitions. States c and e are shown refined to concurrent models themselves.
FIG. 7. Hierarchical composition of an FSM with concurrent models of computation.
The interpretation is that while the FSM is in state c, component B is in fact defined by component H. While it is in state e, component B is defined by a composition of F and G. In the figure, square boxes depict components in a concurrent model of computation, while circles depict states in a state machine. Despite the different concrete syntax, the abstract syntax is the same: components with interconnections.

If the concurrent model of computation is SR, then the combination has Statechart semantics. If it is continuous time, then the combination has hybrid systems semantics. If it is PN, then the combination is similar to the SDL language [55]. If it is DE, then the combination is similar to Polis [23]. A hybrid system example implemented in Ptolemy II is shown in Fig. 8.
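A toy rendering of a modal model, loosely inspired by the two modes of the spring-mass example of Fig. 8, might look as follows; the guard, mode names, and refinements are invented stand-ins, and a real modal model would refine each state with a full concurrent submodel rather than a Runnable.

```java
import java.util.Map;

// Two-state machine whose active state selects which refinement defines the
// component's behavior at each step.
public class ModalModelSketch {
    enum Mode { SEPARATE, TOGETHER }

    public static void main(String[] args) {
        Map<Mode, Runnable> refinements = Map.of(
                Mode.SEPARATE, () -> System.out.println("two masses oscillate independently"),
                Mode.TOGETHER, () -> System.out.println("masses oscillate as one"));

        Mode mode = Mode.SEPARATE;
        for (int step = 0; step < 4; step++) {
            refinements.get(mode).run();          // the active state's refinement executes
            boolean guard = (step == 1);          // stand-in for a guard such as abs(force) > stickiness
            if (guard) {
                mode = (mode == Mode.SEPARATE) ? Mode.TOGETHER : Mode.SEPARATE;  // take the transition
            }
        }
    }
}
```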
6. Choosing a Model of Computation
The rich variety of models of computation outlined above can be daunting to a designer faced with having to select among them. Most designers today do not face this choice because they get exposed to only one or two. This is changing, however, as the level of abstraction and domain-specificity of design practice both rise. We expect that sophisticated and highly visual user interfaces will be needed to enable designers to cope with this heterogeneity.
FIG. 8. Hybrid system model in Ptolemy II, showing a hierarchical composition of a finite state machine (FSM) model and two continuous-time (CT) models. This example models a physical spring-mass system with two modes of operation. In the Separate mode, it has two masses on springs oscillating independently. In the Together mode, the two masses are stuck together, and oscillate together with two springs. The model was created by Jie Liu and Xiaojun Liu.
An essential difference between concurrent models of computation is their modeling of time. Some are very explicit by taking time to be a real number that advances uniformly, and placing events on a time line or evolving continuous signals along the time line. Others are more abstract and take time to be discrete. Others are still more abstract and take time to be merely a constraint imposed by causality. This latter interpretation results in time that is partially ordered, and explains much of the expressiveness in process networks and rendezvous-based models of computation. Partially ordered time provides a mathematical framework for formally analyzing and comparing models of computation [56,57].

Many researchers have thought deeply about the role of time in computation. Benveniste and Le Guernic observe that in certain classes of systems, "the nature of time is by no means universal, but rather local to each subsystem, and consequently multiform" [14]. Lamport observes that a coordinated notion of time cannot be exactly maintained in distributed systems, and shows that a partial ordering is sufficient [58]. He gives a mechanism in which messages in an asynchronous system carry time stamps and processes manipulate these time stamps. We
can then talk about processes having information or knowledge at a consistent cut, rather than "simultaneously." Fidge gives a related mechanism in which processes that can fork and join increment a counter on each event [59]. A partial ordering relationship between these lists of times is determined by process creation, destruction, and communication. If the number of processes is fixed ahead of time, then Mattern gives a more efficient implementation by using "vector time" [60]. All of this work offers ideas for modeling time.

How can we reconcile this multiplicity of views? A grand unified approach to modeling would seek a concurrent model of computation that serves all purposes. This could be accomplished by creating a melange, a mixture of all of the above. For example, one might permit each connection between components to use a distinct protocol, where some are timed and some not, and some are synchronous and some not, as done for example in ROOM [18] and SystemC 2.0 (http://systemc.org). This offers rich expressiveness, but such a mixture may prove extremely complex and difficult to understand, and synthesis and validation tools would be difficult to design. In my opinion, such richly expressive formalisms are best used as foundations for more specialized models of computation. This, in fact, is the intent in SystemC 2.0 [61].

Another alternative would be to choose one concurrent model of computation, say the rendezvous model, and show that all the others are subsumed as special cases. This is relatively easy to do, in theory. Most of these models of computation are sufficiently expressive to be able to subsume most of the others. However, this fails to acknowledge the strengths and weaknesses of each model of computation. Process networks, for instance, are very good at describing the data dependencies in a signal processing system, but not as good at describing the associated control logic and resource management. Finite-state machines are good at modeling at least simple control logic, but inadequate for modeling data dependencies and numeric computation. Rendezvous-based models are good for resource management, but they overspecify data dependencies. Thus, to design interesting systems, designers need to use heterogeneous models.

Certain architecture description languages (ADLs), such as Wright [19] and Rapide [28], define a model of computation. The models are intended for describing the rich sorts of component interactions that commonly arise in software architecture. Indeed, such descriptions often yield good insights about design, but sometimes, the match is poor. Wright, for example, which is based on CSP, does not cleanly describe asynchronous message passing (it requires giving detailed descriptions of the mechanisms of message passing). I believe that what we really want are architecture design languages rather than architecture description languages. That is, their focus should not be on describing current practice, but rather on improving future practice. Wright, therefore, with its strong commitment
to CSP, should not be concerned with whether it cleanly models asynchronous message passing. It should instead take the stand that asynchronous message passing is a bad idea for the designs it addresses.
7. Heterogeneous Models
Figure 7 shows a hierarchical heterogeneous combination of models of computation. A concurrent model at the top level has a component that is refined into a finite-state machine. The states in the state machine are further refined into a concurrent model of computation. Ideally, each concurrent model of computation can be designed in such a way that it composes transparently with FSMs, and, in fact, with other concurrent models of computation. In particular, when building a realization of a model of computation, it would be best if it did not need to be jointly designed with the realizations that it can compose with hierarchically. This is a challenging problem. It is not always obvious what the meaning should be of some particular hierarchical combination. The semantics of various combinations of FSMs with various concurrency models are described in [54]. In Ptolemy II [25], the composition is accomplished via a notion called domain polymorphism.

The term "domain polymorphism" requires some explanation. First, the term "domain" is used in the Ptolemy project to refer to an implementation of a model of computation. This implementation can be thought of as a "language," except that it does not (necessarily) have the traditional textual syntax of conventional programming languages. Instead, it abides by a common abstract syntax that underlies all Ptolemy models. The term "domain" is a fanciful one, coming from the speculative notion in astrophysics that there are regions of the universe where the laws of physics differ. Such regions are called "domains." The model of computation is analogous to the laws of physics.

In Ptolemy II, components (called actors) in a concurrent model of computation implement an interface consisting of a suite of action methods. These methods define the execution of the component. A component that can be executed under the direction of any of a number of models of computation is called a domain polymorphic component. The component is not defined to operate with a particular model of computation, but instead has a well-defined behavior in several, and can be usefully used in several. It is domain polymorphic, meaning specifically that it has a well-defined behavior in more than one domain, and that the behavior is not necessarily the same in different domains. For example, the AddSubtract actor (shown as a square with a + and −) appears in Fig. 8, where it adds or subtracts continuous-time signals, and in Fig. 5, where it adds or subtracts streams.
In Ptolemy II, an application (which is called a "model") is constructed by composing actors (most of which are domain polymorphic), connecting them, and assigning a domain. The domain governs the interaction between components and the flow of control. It provides the execution semantics to the assembly of components. The key to hierarchically composing multiple models of computation is that an aggregation of components under the control of a domain should itself define a domain polymorphic component. Thus, the aggregate can be used as a component within a different model of computation. In Ptolemy II, this is how finite-state machine models are hierarchically composed with other models to get hybrid systems, Statechart-like models, and SDL-like models.

Domain polymorphic components in Ptolemy II simply need to implement a Java interface called Executable. This interface defines three phases of execution: an initialization phase, which is executed once; an iteration phase, which can be executed multiple times; and a termination phase, which is executed once. The iteration itself is divided into three phases also. The first phase, called prefire, can examine the status of the inputs and can abort the iteration or continue it. The prefire phase can also initiate some computation, if appropriate. The second phase, called fire, can also perform some computation, if appropriate, and can produce outputs. The third phase, called postfire, can commit any state changes for the component that might be appropriate. To get hierarchical mixtures of domains, a domain must itself implement the Executable interface to execute an aggregate of components. Thus, it must define an initialization, iteration, and termination phase, and within the iteration phase, it must define the same three phases of execution.

The three-phase iteration has proven suitable for a huge variety of models of computation, including synchronous dataflow (SDF) [37], discrete events (DE) [62], discrete time (DT) [63], finite-state machines (FSM) [54], continuous time (CT) [64], synchronous/reactive (SR), and Giotto (a time-triggered domain) [24]. All of these domains can be combined hierarchically. Some domains in Ptolemy II have fixed-point semantics, meaning that in each iteration, the domain may repeatedly fire the components until a fixed point is found. Two such domains are continuous time (CT) [64] and synchronous/reactive (SR) [65,66]. The fact that a state update is committed only in the postfire phase of an iteration makes it easy to use domain-polymorphic components in such a domain.

Ptolemy II also has domains for which this pattern does not work quite as well. In particular, in the process networks (PN) domain [67] and communicating sequential processes (CSP) domain, each component executes in its own thread. These domains have no difficulty executing domain polymorphic components. They simply wrap in a thread a (potentially) infinite sequence of iterations.
However, aggregates in such domains are harder to encapsulate as domain polymorphic components, because it is hard to define an iteration for the aggregate. Since each component in the aggregate has its own thread of execution, it can be tricky to define the boundary points between iterations. This is an open issue that the Ptolemy project continues to address, and to which there are several candidate solutions that are applicable for particular problems.
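As a minimal sketch of the three-phase iteration pattern described above, the following Java fragment shows a simplified Executable-style interface and a domain-polymorphic accumulator that produces outputs in fire but commits state only in postfire. The interface, the Accumulator actor, and the RoundRobinDirector are illustrative inventions, not the actual Ptolemy II classes, whose signatures are richer.

```java
// Simplified sketch of the three-phase iteration contract discussed above.
// These types are illustrative; they are not the real Ptolemy II classes.
interface Executable {
    void initialize();          // executed once, before any iteration
    boolean prefire();          // may inspect inputs; returning false aborts the iteration
    void fire();                // may compute and produce outputs
    boolean postfire();         // commits state changes; false requests no further iterations
    void wrapup();              // executed once, after the last iteration
}

// A domain-polymorphic accumulator: it produces outputs in fire() but only
// commits its state in postfire(), so a fixed-point domain can safely fire
// it repeatedly within one iteration.
class Accumulator implements Executable {
    private double state = 0.0;        // committed state
    private double pendingInput = 0.0; // tentative value observed in this iteration
    private final java.util.Queue<Double> inputs = new java.util.ArrayDeque<>();
    private final java.util.List<Double> outputs = new java.util.ArrayList<>();

    void send(double token) { inputs.add(token); }
    java.util.List<Double> received() { return outputs; }

    public void initialize() { state = 0.0; outputs.clear(); }

    public boolean prefire() {             // continue the iteration only if input is available
        if (inputs.isEmpty()) return false;
        pendingInput = inputs.peek();
        return true;
    }

    public void fire() {                   // produce an output without changing committed state
        outputs.add(state + pendingInput);
    }

    public boolean postfire() {            // commit: consume the input and update the state
        state += inputs.remove();
        return true;
    }

    public void wrapup() { /* release resources, if any */ }
}

// A trivially simple "domain": it runs a component as a sequence of iterations.
class RoundRobinDirector {
    static void run(Executable actor, int iterations) {
        actor.initialize();
        for (int i = 0; i < iterations; i++) {
            if (actor.prefire()) { actor.fire(); if (!actor.postfire()) break; }
        }
        actor.wrapup();
    }
}
```

Because fire does not mutate the committed state, a fixed-point domain such as CT or SR could safely invoke it several times within a single iteration before postfire commits the result.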
8. Component Interfaces
The approach described in the previous section is fairly ad hoc. The Ptolemy project has constructed domains to implement various models of computation, most of which have entire research communities centered on them. It has then experimented with combinations of models of computation and, through trial and error, has identified a reasonable design for a domain polymorphic component interface definition. Can this ad hoc approach be made more systematic? I believe that type system concepts can be extended to make it more systematic. Type systems in modern programming languages, however, do not go far enough. Several researchers have proposed extending the type system to handle such issues as array bounds overruns, which are traditionally left to the run-time system [68]. However, many issues are still not dealt with. For example, the fact that prefire is executed before fire in a domain polymorphic component is not expressed in the type system.

At its root, a type system constrains what a component can say about its interface, and how compatibility is ensured when components are composed. Mathematically, type system methods depend on a partial order of types, typically defined by a subtyping relation (for user-defined types such as classes) or in more ad hoc ways (for primitive types such as double or int). They can be built from the robust mathematics of partial orders, leveraging, for example, fixed-point theorems to ensure convergence of type checking, type resolution, and type inference algorithms. With this very broad interpretation of type systems, all we need is that the properties of an interface be given as elements of a partial order, preferably a complete partial order (CPO) or a lattice [18].

I suggest first that dynamic properties of an interface, such as the conventions in domain polymorphic component design, can be described using nondeterministic automata, and that the pertinent partial ordering relation is the simulation relation between automata. Preliminary work in this direction is reported in [69], which uses a particular automaton model called interface automata [29]. The result is called a behavioral-type system. Behavioral-level types can be used without modifying the underlying languages, but rather by overlaying on standard languages design patterns that make
these types explicit. Domain polymorphic components are simply those whose behavioral-level types are polymorphic. Note that there is considerable precedent for such augmentations of the type system. For example, Lucassen and Gifford introduce state into functions using the type system to declare whether functions are free of side effects [70]. Martin-Löf introduces dependent types, in which types are indexed by terms [71]. Xi uses dependent types to augment the type system to include array sizes, and uses type resolution to annotate programs that do not need dynamic array bounds checking [68]. The technique uses singleton types instead of general terms [72] to help avoid undecidability. While much of the fundamental work has been developed using functional languages (especially ML [73]), there is no reason that I can see that it cannot be applied to more widely accepted languages.
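As a toy rendering of the behavioral-type idea, the sketch below encodes the calling convention of the previous section (initialize, then prefire before fire, then postfire, and finally wrapup) as a small automaton and checks concrete call traces against it. The class and method names are hypothetical; a real behavioral-type system based on interface automata would establish a simulation relation between automata rather than check individual traces.

```java
import java.util.*;

// A toy "behavioral type": a finite automaton over method names that encodes
// the legal call order initialize (prefire fire+ postfire)* wrapup.
class BehavioralType {
    private final Map<String, Map<String, String>> delta = new HashMap<>();
    private final String start;

    BehavioralType(String start) { this.start = start; }

    void allow(String from, String event, String to) {
        delta.computeIfAbsent(from, k -> new HashMap<>()).put(event, to);
    }

    // Checks whether a concrete call trace is accepted by this behavioral type.
    boolean accepts(List<String> trace) {
        String state = start;
        for (String event : trace) {
            Map<String, String> out = delta.getOrDefault(state, Map.of());
            if (!out.containsKey(event)) return false;   // illegal call in this state
            state = out.get(event);
        }
        return true;
    }

    public static void main(String[] args) {
        BehavioralType t = new BehavioralType("created");
        t.allow("created", "initialize", "idle");
        t.allow("idle", "prefire", "prefired");
        t.allow("prefired", "fire", "fired");
        t.allow("fired", "fire", "fired");        // fixed-point domains may fire repeatedly
        t.allow("fired", "postfire", "idle");
        t.allow("idle", "wrapup", "done");

        System.out.println(t.accepts(List.of("initialize", "prefire", "fire", "postfire", "wrapup"))); // true
        System.out.println(t.accepts(List.of("initialize", "fire")));                                  // false
    }
}
```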
8.1 On-Line Type Systems
Static support for type systems gives the compiler responsibility for the robustness of software [74]. This is not adequate when the software architecture is dynamic. The software needs to take responsibility for its own robustness [75]. This means that algorithms that support the type system need to be adapted to be practically executable at run time. ML is an early and well-known realization of a "modern type system" [1,76,77]. It was the first language to use type inference in an integrated way [78], where the types of variables are not declared, but are rather inferred from how they are used. The compile-time algorithms here are elegant, but it is not clear to me whether run-time adaptations are practical. Many modern languages, including Java and C++, use declared types rather than type inference, but their extensive use of polymorphism still implies a need for fairly sophisticated type checking and type resolution. Type resolution allows for automatic (lossless) type conversions and for optimized run-time code, where the overhead of late binding can be avoided.

Type inference and type checking can be reformulated as the problem of finding the fixed point of a monotonic function on a lattice, an approach due to Dana Scott [79]. The lattice describes a partial order of types, where the ordering relationship is the subtype relation. For example, Double is a subtype of Number in Java. A typical implementation reformulates the fixed point problem as the solution of a system of equations [49] or of inequalities [80]. Reasonably efficient algorithms have been identified for solving such systems of inequalities [81], although these algorithms are still primarily viewed as part of a compiler, and not part of a run-time system. Iteration to a fixed point, at first glance, seems too costly for on-line real-time computation. However, there are several languages based on such iteration that
are used primarily in a real-time context. Esterel is one of these [7]. Esterel compilers synthesize run-time algorithms that converge to a fixed point at each clock of a synchronous system [14]. Such synthesis requires detailed static information about the structure of the application, but methods have been demonstrated that use less static information [65]. Although these techniques have not been proposed primarily in the context of a type system, I believe they can be adapted.
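To make the fixed-point formulation concrete, the following deliberately simplified sketch resolves types over a four-element lattice by repeatedly applying the least upper bound of the constraints until nothing changes; monotonicity on a finite lattice guarantees termination. The lattice, the constraint form, and all names are assumptions for illustration, not a description of any particular compiler or of the cited algorithms.

```java
import java.util.*;

// Fixed-point type resolution over a totally ordered toy lattice:
// UNKNOWN <= INT <= DOUBLE <= GENERAL.
class TypeResolver {
    enum Type { UNKNOWN, INT, DOUBLE, GENERAL }

    static Type lub(Type a, Type b) { return a.ordinal() >= b.ordinal() ? a : b; }

    // Constraint "lhs >= rhs": the type of variable lhs must be at least the type of rhs.
    record Constraint(String lhs, String rhs) {}

    static Map<String, Type> resolve(Set<String> vars, Map<String, Type> declared, List<Constraint> cs) {
        Map<String, Type> type = new HashMap<>();
        for (String v : vars) type.put(v, declared.getOrDefault(v, Type.UNKNOWN));

        boolean changed = true;               // iterate to the least fixed point;
        while (changed) {                     // a finite lattice and monotone updates guarantee termination
            changed = false;
            for (Constraint c : cs) {
                Type t = lub(type.get(c.lhs()), type.get(c.rhs()));
                if (t != type.get(c.lhs())) { type.put(c.lhs(), t); changed = true; }
            }
        }
        return type;
    }

    public static void main(String[] args) {
        Set<String> vars = Set.of("x", "y", "sum");
        Map<String, Type> declared = Map.of("x", Type.INT, "y", Type.DOUBLE);
        List<Constraint> cs = List.of(new Constraint("sum", "x"), new Constraint("sum", "y"));
        System.out.println(resolve(vars, declared, cs));  // sum is inferred to be DOUBLE
    }
}
```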
8.2 Reflecting Program Dynamics
Object-oriented programming promises software modularization, but has not completely delivered. The type system captures only static, structural aspects of software. It says little about the state trajectory of a program (its dynamics) and about its concurrency. Nonetheless, it has proved extremely useful, and through the use of reflection, is able to support distributed systems and mobile code.

Reflection, as applied in software, can be viewed as having an on-line model of the software within the software itself. In Java, for example, this is applied in a simple way. The static structure of objects is visible through the Class class and the classes in the reflection package, which includes Method, Constructor, and various others. These classes allow Java code to dynamically query objects for their methods, determine on-the-fly the arguments of the methods, and construct calls to those methods. Reflection is an integral part of Java Beans, mobile code, and CORBA support. It provides a run-time environment with the facilities for stitching together components with relatively intolerant interfaces.

However, static structure is not enough. The interfaces between components involve more than method templates, including such properties as communication protocols. To get adaptive software in the context of real-time applications, it will also be important to reflect the program state. Thus, we need reflection on the program dynamics. In embedded software, this could be used, for example, to systematically realize fault detection, isolation, and recovery (FDIR). That is, if the declared dynamic properties of a component are violated at run time, run-time type checking can detect it. For example, suppose a component declares as part of its interface definition that it must execute at least once every 10 ms. Then a run-time type checker will detect a violation of this requirement. The first question becomes at what granularity to do this. Reflection intrinsically refers to a particular abstracted representation of a program. For example, in the case of static structure, Java's reflection package does not include finer granularity than methods.

Process-level reflection could include two critical facets: communication protocols and process state. The former would capture in a type system such properties as whether the process uses rendezvous, streams, or events to communicate with
other processes. By contrast, Java Beans defines this property universally for all applications using Java Beans. That is, the event model is the only interaction mechanism available. If a component needs rendezvous, it must implement that on top of events, and the type system provides no mechanism for the component to assert that it needs rendezvous. For this reason, Java Beans seems unlikely to be very useful in applications that need stronger synchronization between processes, and thus it is unlikely to be used much beyond user interface design.

Reflecting the process state could be done with an automaton that simulates the program. (We use the term "simulates" in the technical sense of automata theory.) That is, a component or its run-time environment can access the "state" of a process (much as an object accesses its own static structure in Java), but that state is not the detailed state of the process; rather, it is the state of a carefully chosen automaton that simulates the application. Designing that automaton is then similar (conceptually) to designing the static structure of an object-oriented program, but it represents dynamics instead of static structure. Just as we have object-oriented languages to help us develop object-oriented programs, we would need state-oriented languages to help us develop the reflection automaton. These could be based on Statecharts, but would be closer in spirit to UML's state diagrams in that they would not be intended to capture all aspects of behavior. This is analogous to the object model of a program, which does not capture all aspects of the program structure (associations between objects are only weakly described in UML's static structure diagrams). Analogous to object-oriented languages, which are primarily syntactic overlays on imperative languages, a state-oriented language would be a syntactic overlay on an object-oriented language. The syntax could be graphical, as is now becoming popular with object models (especially UML).

Well-chosen reflection automata would add value in a number of ways. First, an application may be asked, via the network, or based on sensor data, to make some change in its functionality. How can it tell whether that change is safe? The change may be safe when it is in certain states, and not safe in other states. It would query its reflection automaton, or the reflection automaton of some gatekeeper object, to determine how to react. This could be particularly important in real-time applications. Second, reflection automata could provide a basis for verification via such techniques as model checking. This complements what object-oriented languages offer. Their object model indicates safety of a change with respect to data layout, but they provide no mechanism for determining safety based on the state of the program.

When a reflection automaton is combined with concurrency, we get something akin to Statecharts' concurrent, hierarchical FSMs, but with a twist. In Statecharts, the concurrency model is fixed. Here, any concurrency model can be used. We
call this generalization "*charts," pronounced "starcharts," where the star represents a wildcard suggesting the flexibility in concurrency models [54]. Some variations of Statecharts support concurrency using models that are different from those in the original Statecharts [15,82]. As with Statecharts, concurrent composition of reflection automata provides the benefit of a compact representation of a product automaton that potentially has a very large number of states. In this sense, aggregates of components remain components, where the reflection automaton of the aggregate is the product automaton of the components, but the product automaton never needs to be explicitly represented. Ideally, reflection automata would also inherit cleanly. Interface theories are evolving that promise to explain exactly how to do this [29].

In addition to application components being reflective, it will probably be beneficial for components in the run-time environment to be reflective. The run-time environment is whatever portion of the system outlives all application components. It provides such services as process scheduling, storage management, and specialization of components for efficient execution. Because it outlives all application components, it provides a convenient place for reflecting aspects of the application that transcend a single component or an aggregate of closely related components.
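The following sketch gives a flavor of what process-level reflection might look like: a component exposes a coarse reflection automaton (its current abstract state and the states in which reconfiguration is safe) together with a declared timing property, and a monitor uses this reflected information at run time. Both interfaces are hypothetical; they are not part of Java's reflection package or of any existing framework.

```java
import java.util.Set;

// Hypothetical reflective interface: a component exposes an abstraction of its
// own dynamics (current abstract state, states in which changes are safe) and
// a declared timing property.
interface ReflectsDynamics {
    String currentState();                 // state of the coarse reflection automaton
    Set<String> statesSafeForChange();     // states in which reconfiguration is allowed
    long maxIterationPeriodMillis();       // declared: must execute at least this often
}

// A run-time monitor that uses the reflected information, e.g., for FDIR.
class DynamicsMonitor {
    private long lastIteration = System.currentTimeMillis();

    void iterationCompleted(ReflectsDynamics c) {
        long now = System.currentTimeMillis();
        if (now - lastIteration > c.maxIterationPeriodMillis()) {
            // Declared dynamic property violated: report the fault so that
            // isolation/recovery logic can react.
            System.err.println("Timing violation in state " + c.currentState());
        }
        lastIteration = now;
    }

    boolean safeToReconfigure(ReflectsDynamics c) {
        return c.statesSafeForChange().contains(c.currentState());
    }
}
```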
9. Frameworks Supporting Models of Computation
In this context, a framework is a set of constraints on components and their interaction, and a set of benefits that derive from those constraints. This is broader than, but consistent with the definition of frameworks in object-oriented design [83]. By this definition, there are a huge number of frameworks, some of which are purely conceptual, cultural, or even philosophical, and some of which are embodied in software. Operating systems are frameworks where the components are programs or processes. Programming languages are frameworks where the components are language primitives and aggregates of these primitives, and the possible interactions are defined by the grammar. Distributed component middleware such as CORBA [17] and DCOM are frameworks. Synchronous digital hardware design principles are a framework. Java Beans form a framework that is particularly tuned to user interface construction. A particular class library and policies for its use is a framework [83]. For any particular application domain, some frameworks are better than others. Operating systems with no real-time facilities have limited utility in embedded systems, for example. In order to obtain certain benefits, frameworks impose constraints. As a rule, stronger benefits come at the expense of stronger constraints. Thus, frameworks may become rather specialized as they seek these benefits.
The drawback with specialized frameworks is that they are unlikely to solve all the framework problems for any complex system. To avoid giving up the benefits of specialized frameworks, designers of these complex systems will have to mix frameworks heterogeneously. Of course, a framework within which to heterogeneously mix frameworks is needed. The design of such a framework is the purpose of the Ptolemy project [25]. Each domain, which implements a model of computation, offers the designer a specialized framework, but domains can be mixed hierarchically using the concept of domain polymorphism.

A few other research projects have also heterogeneously combined models of computation. The Gravity system and its visual editor Orbit, like Ptolemy, provide a framework for heterogeneous models [84]. A model in a domain is called a facet, and heterogeneous models are multi-facetted designs [85]. Jourdan et al. have proposed a combination of Argos, a hierarchical finite-state machine language, with Lustre [13], which has a more dataflow flavor, albeit still within a synchronous/reactive concurrency framework [86]. Another interesting integration of diverse semantic models is done in Statemate [87], which combines activity charts with statecharts. This sort of integration has more recently become part of UML. The activity charts have some of the flavor of a process network.
10. Conclusions

Embedded software requires a view of computation that is significantly different from the prevailing abstractions in computation. Because such software engages the physical world, it has to embrace time and other nonfunctional properties. Suitable abstractions compose components according to a model of computation. Models of computation with stronger formal properties tend to be more specialized. This specialization limits their applicability, but this limitation can be ameliorated by hierarchically combining heterogeneous models of computation. System-level types capture key features of components and their interactions through a model of computation, and promise to provide robust and understandable composition technologies.

ACKNOWLEDGMENTS
This chapter distills the work of many people who have been involved in the Ptolemy Project at Berkeley. Most notably, the individuals who have directly contributed ideas are Shuvra S. Bhattacharyya, John Davis II, Johan Eker, Chamberlain Fong, Christopher Hylands, Jörn Janneck, Jie Liu, Xiaojun Liu, Stephen Neuendorffer, John Reekie, Farhana Sheikh, Kees Vissers, Brian K. Vogel, Paul Whitaker, and Yuhong Xiong. The Ptolemy Project is supported by the Defense Advanced Research Projects Agency (DARPA), the MARCO/DARPA Gigascale Silicon Research Center (GSRC), the State of
California MICRO program, and the following companies: Agilent Technologies, Cadence Design Systems, Hitachi, and Philips.

REFERENCES
[1] Turing, A. M. (1936). "On computable numbers with an application to the Entscheidungsproblem." Proceedings of the London Mathematical Society, 42, 230-265.
[2] Manna, Z., and Pnueli, A. (1991). The Temporal Logic of Reactive and Concurrent Systems. Springer-Verlag, Berlin.
[3] Douglass, B. P. (1998). Real-Time UML. Addison-Wesley, Reading, MA.
[4] Dijkstra, E. (1968). "Cooperating sequential processes." Programming Languages (F. Genuys, Ed.). Academic Press, New York.
[5] Lea, D. (1997). Concurrent Programming in Java: Design Principles and Patterns. Addison-Wesley, Reading, MA.
[6] Benveniste, A., and Berry, G. (1991). "The synchronous approach to reactive and real-time systems." Proceedings of the IEEE, 79, 1270-1282.
[7] Berry, G., and Gonthier, G. (1992). "The Esterel synchronous programming language: Design, semantics, implementation." Science of Computer Programming, 19, 87-152.
[8] Gamma, E., Helm, R., Johnson, R., and Vlissides, J. (1994). Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Reading, MA.
[9] Edwards, S. A., and Lee, E. A. (2001). "The semantics and execution of a synchronous block-diagram language." Technical Memorandum UCB/ERL, University of California—Berkeley. Available at http://ptolemy.eecs.berkeley.edu/publications.
[10] Liu, C., and Layland, J. (1973). "Scheduling algorithms for multiprogramming in a hard-real-time environment." Journal of the ACM, 20, 46-61.
[11] Harel, D., and Pnueli, A. (1985). "On the development of reactive systems." Logic and Models for Verification and Specification of Concurrent Systems. Springer-Verlag, Berlin.
[12] Berry, G. (1989). "Real time programming: Special purpose or general purpose languages." Information Processing (G. Ritter, Ed.), Vol. 89, pp. 11-17. Elsevier Science, Amsterdam.
[13] Halbwachs, N., Caspi, P., Raymond, P., and Pilaud, D. (1991). "The synchronous data flow programming language LUSTRE." Proceedings of the IEEE, 79, 1305-1319.
[14] Benveniste, A., and Le Guernic, P. (1990). "Hybrid dynamical systems theory and the SIGNAL language." IEEE Transactions on Automatic Control, 35, 525-546.
[15] Maraninchi, F. (1991). "The Argos language: Graphical representation of automata and description of reactive systems." Proceedings of the IEEE Workshop on Visual Languages, Kobe, Japan, Oct.
[16] Klein, M. H., Ralya, T., Pollak, B., Obenza, R., and Harbour, M. G. (1993). A Practitioner's Handbook for Real-Time Analysis: Guide to Rate Monotonic Analysis for Real-Time Systems. Kluwer Academic, Norwell, MA.
[17] Ben-Natan, R. (1995). CORBA: A Guide to Common Object Request Broker Architecture. McGraw-Hill, New York.
[18] Selic, B., Gullekson, G., and Ward, P. (1994). Real-Time Object-Oriented Modeling. Wiley, New York.
[19] Allen, R., and Garlan, D. (1994). "Formalizing architectural connection." Proceedings of the 16th International Conference on Software Engineering (ICSE 94), pp. 71-80. IEEE Computer Society Press, Los Alamitos, CA.
[20] Agha, G. A. (1990). "Concurrent object-oriented programming." Communications of the ACM, 33, 125-141.
[21] Agha, G. A. (1986). Actors: A Model of Concurrent Computation in Distributed Systems. MIT Press, Cambridge, MA.
[22] Lynch, N. A. (1996). Distributed Algorithms. Morgan Kaufmann, San Mateo, CA.
[23] Chiodo, M., Giusto, P., Hsieh, H., Jurecska, A., Lavagno, L., and Sangiovanni-Vincentelli, A. (1994). "A formal methodology for hardware/software co-design of embedded systems." IEEE Micro, 14, 26-36.
[24] Henzinger, T. A., Horowitz, B., and Kirsch, C. M. (2001). "Giotto: A time-triggered language for embedded programming." Proceedings of EMSOFT 2001, Tahoe City, CA, Lecture Notes in Computer Science 2211, pp. 166-184. Springer-Verlag, Berlin.
[25] Davis II, J., Hylands, C., Kienhuis, B., Lee, E. A., Liu, J., Liu, X., Muliadi, L., Neuendorffer, S., Tsay, J., Vogel, B., and Xiong, Y. (2001). "Heterogeneous concurrent modeling and design in Java." Technical Memorandum UCB/ERL M01/12, Department of Electrical Engineering and Computer Science, University of California—Berkeley. Available at http://ptolemy.eecs.berkeley.edu/publications.
[26a] Agha, G. A. (1997). "Abstracting interaction patterns: A programming paradigm for open distributed systems." Formal Methods for Open Object-Based Distributed Systems, IFIP Transactions (E. Najm and J.-B. Stefani, Eds.). Chapman and Hall, London.
[26b] Lee, E. A., and Neuendorffer, S. (2000). "MoML—A modeling markup language in XML, Version 0.4." Technical Memorandum UCB/ERL M00/12, University of California—Berkeley. Available at http://ptolemy.eecs.berkeley.edu/publications.
[27] Eriksson, H.-E., and Penker, M. (1998). UML Toolkit. Wiley, New York.
[28] Luckham, D. C., and Vera, J. (1995). "An event-based architecture definition language." IEEE Transactions on Software Engineering, 21, 717-734.
[29] de Alfaro, L., and Henzinger, T. A. (2001). "Interface theories for component-based design." Proceedings of EMSOFT 2001, Tahoe City, CA, Lecture Notes in Computer Science 2211, pp. 148-165. Springer-Verlag, Berlin.
[30] Hoare, C. A. R. (1978). "Communicating sequential processes." Communications of the ACM, 21, 666-677.
[31] von Eicken, T., Culler, D. E., Goldstein, S. C., and Schauser, K. E. (1992). "Active messages: A mechanism for integrated communications and computation." Proceedings of the 19th International Symposium on Computer Architecture, Gold Coast, Australia. Also available as Technical Report TR UCB/CSD 92/675, Computer Science Division, University of California—Berkeley.
[32] Kahn, G. (1974). "The semantics of a simple language for parallel programming." Proceedings of the IFIP Congress 74. North-Holland, Amsterdam.
[33] Carriero, N., and Gelernter, D. (1989). "Linda in context." Communications of the ACM, 32, 444-458.
[34] Bhattacharyya, S. S., Murthy, P. K., and Lee, E. A. (1996). Software Synthesis from Dataflow Graphs. Kluwer Academic, Norwell, MA.
[35] Karp, R. M., and Miller, R. E. (1966). "Properties of a model for parallel computations: Determinacy, termination, queueing." SIAM Journal, 14, 1390-1411.
[36] Lauwereins, R., Wauters, P., Ade, M., and Peperstraete, J. A. (1994). "Geometric parallelism and cyclo-static dataflow in GRAPE-II." Proceedings of the 5th International Workshop on Rapid System Prototyping, Grenoble, France.
[37] Lee, E. A., and Messerschmitt, D. G. (1987). "Synchronous data flow." Proceedings of the IEEE, 75, 1235-1245.
[38] Lee, E. A., and Messerschmitt, D. G. (1987). "Static scheduling of synchronous data flow programs for digital signal processing." IEEE Transactions on Computers, 36, 24-35.
[39] Buck, J. T. (1993). "Scheduling dynamic dataflow graphs with bounded memory using the token flow model." Technical Report UCB/ERL 93/69, Ph.D. Dissertation, Department of Electrical Engineering and Computer Science, University of California—Berkeley. Available at http://ptolemy.eecs.berkeley.edu/publications.
[40] Jagannathan, R. (1992). "Parallel execution of GLU programs." Presented at the 2nd International Workshop on Dataflow Computing, Hamilton Island, Queensland, Australia.
[41] Kaplan, D. J., et al. (1987). "Processing Graph Method Specification Version 1.0," unpublished memorandum. Naval Research Laboratory, Washington, DC.
[42] Parks, T. M. (1995). "Bounded scheduling of process networks." Technical Report UCB/ERL-95-105, Ph.D. Dissertation, Department of Electrical Engineering and Computer Science, University of California—Berkeley. Available at http://ptolemy.eecs.berkeley.edu/publications.
[43] Kopetz, H., Holzmann, M., and Elmenreich, W. (2000). "A universal smart transducer interface: TTP/A." 3rd IEEE International Symposium on Object-Oriented Real-Time Distributed Computing (ISORC 2000).
[44] Liao, S., Tjiang, S., and Gupta, R. (1997). "An efficient implementation of reactivity for modeling hardware in the scenic design environment." Proceedings of the Design Automation Conference (DAC 97), Anaheim, CA.
[45] Koo, T. J., Liebman, J., Ma, C., and Sastry, S. S. (2001). "Hierarchical approach for design of multi-vehicle multi-modal embedded software." Proceedings of EMSOFT
2001, Tahoe City, CA, Lecture Notes in Computer Science 2211, pp. 344-360. Springer-Verlag, Berlin.
[46] Caspi, P., Pilaud, D., Halbwachs, N., and Plaice, J. A. (1987). "LUSTRE: A declarative language for programming synchronous systems." Conference Record of the 14th Annual ACM Symposium on Principles of Programming Languages, Munich, Germany.
[47] Lee, E. A., and Parks, T. M. (1995). "Dataflow process networks." Proceedings of the IEEE, 83, 773-801.
[48] Lieverse, P., Van Der Wolf, P., Deprettere, E., and Vissers, K. (2001). "A methodology for architecture exploration of heterogeneous signal processing systems." Journal of VLSI Signal Processing, 29, 197-207.
[49] Milner, R. (1978). "A theory of type polymorphism in programming." Journal of Computer and System Sciences, 17, 348-375.
[50] Reed, G. M., and Roscoe, A. W. (1988). "A timed model for communicating sequential processes." Theoretical Computer Science, 58, 249-261.
[51] van Gemund, A. J. C. (1993). "Performance prediction of parallel processing systems: The PAMELA methodology." Proceedings of the 7th International Conference on Supercomputing, Tokyo.
[52] Ahuja, S., Carriero, N., and Gelernter, D. (1986). "Linda and friends." Computer, 19, 26-34.
[53a] Harel, D. (1987). "Statecharts: A visual formalism for complex systems." Science of Computer Programming, 8, 231-274.
[53b] Henzinger, T. A. (1996). "The theory of hybrid automata." Proceedings of the 11th Annual Symposium on Logic in Computer Science, pp. 278-292. IEEE Computer Society Press, Los Alamitos, CA. Invited tutorial.
[54] Girault, A., Lee, B., and Lee, E. A. (1999). "Hierarchical finite state machines with multiple concurrency models." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 18, 742-760.
[55] Saracco, S., Smith, J. R. W., and Reed, R. (1989). Telecommunications Systems Engineering Using SDL. North-Holland-Elsevier, Amsterdam.
[56] Lee, E. A., and Sangiovanni-Vincentelli, A. (1998). "A framework for comparing models of computation." IEEE Transactions on Computer-Aided Design, 17, 1217-1229.
[57] Trotter, W. T. (1992). Combinatorics and Partially Ordered Sets. Johns Hopkins Univ. Press, Baltimore, MD.
[58] Lamport, L. (1978). "Time, clocks, and the ordering of events in a distributed system." Communications of the ACM, 21, 558-565.
[59] Fidge, C. J. (1991). "Logical time in distributed systems." Computer, 24, 28-33.
[60] Mattern, F. (1989). "Virtual time and global states of distributed systems." Parallel and Distributed Algorithms (M. Cosnard and P. Quinton, Eds.), pp. 215-226. North-Holland, Amsterdam.
[61] Swan, S. (2001). "An introduction to system level modeling in SystemC 2.0," draft report. Cadence Design Systems.
[62] Lee, E. A. (1999). "Modeling concurrent real-time processes using discrete events." Annals of Software Engineering, Special Volume on Real-Time Software Engineering, 7, 25-45.
[63] Fong, C. (2001). "Discrete-time dataflow models for visual simulation in Ptolemy II." Memorandum UCB/ERL M01/9, Electronics Research Laboratory, University of California—Berkeley. Available at http://ptolemy.eecs.berkeley.edu/publications.
[64] Liu, J. (1998). "Continuous time and mixed-signal simulation in Ptolemy II." UCB/ERL Memorandum M98/74, Department of Electrical Engineering and Computer Science, University of California—Berkeley. Available at http://ptolemy.eecs.berkeley.edu/publications.
[65] Edwards, S. A. (1997). "The specification and execution of heterogeneous synchronous reactive systems." Technical Report UCB/ERL M97/31, Ph.D. thesis, University of California—Berkeley. Available at http://ptolemy.eecs.berkeley.edu/publications.
[66] Whitaker, P. (2001). "The simulation of synchronous reactive systems in Ptolemy II." Master's Report, Memorandum UCB/ERL M01/20, Electronics Research Laboratory, University of California—Berkeley. Available at http://ptolemy.eecs.berkeley.edu/publications.
[67] Goel, M. (1998). "Process networks in Ptolemy II." UCB/ERL Memorandum M98/69, University of California—Berkeley. Available at http://ptolemy.eecs.berkeley.edu/publications.
[68] Xi, H., and Pfenning, F. (1998). "Eliminating array bound checking through dependent types." Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI '98), Montreal, pp. 249-257.
[69] Lee, E. A., and Xiong, Y. (2001). "System-level types for component-based design." Proceedings of EMSOFT 2001, Tahoe City, CA, Lecture Notes in Computer Science 2211, pp. 237-253. Springer-Verlag, Berlin.
[70] Lucassen, J. M., and Gifford, D. K. (1988). "Polymorphic effect systems." Proceedings of the 15th ACM Symposium on Principles of Programming Languages, pp. 47-57.
[71] Martin-Löf, P. (1980). "Constructive mathematics and computer programming." Logic, Methodology, and Philosophy of Science VI, pp. 153-175. North-Holland, Amsterdam.
[72] Hayashi, S. (1991). "Singleton, union, and intersection types for program extraction." Proceedings of the International Conference on Theoretical Aspects of Computer Science (A. R. Meyer, Ed.), pp. 701-730.
[73] Ullman, J. D. (1994). Elements of ML Programming. Prentice-Hall, Englewood Cliffs, NJ.
[74] Cardelli, L., and Wegner, P. (1985). "On understanding types, data abstraction, and polymorphism." ACM Computing Surveys, 17, 471-522.
[75] Laddaga, R. (1998). "Active software." Position paper for the St. Thomas Workshop on Software Behavior Description.
[76] Gordon, M. J., Milner, R., Morris, L., Newey, M., and Wadsworth, C. P. (1978). "A metalanguage for interactive proof in LCF." Conference Record of the 5th Annual ACM Symposium on Principles of Programming Languages, pp. 119-130. Assoc. Comput. Mach., New York.
[77] Wikstrom, A. (1988). Standard ML. Prentice-Hall, Englewood Cliffs, NJ.
[78] Hudak, P. (1989). "Conception, evolution, and application of functional programming languages." ACM Computing Surveys, 21, 359-411.
[79] Scott, D. (1970). "Outline of a mathematical theory of computation." Proceedings of the 4th Annual Princeton Conference on Information Sciences and Systems, pp. 169-176.
[80] Xiong, Y., and Lee, E. A. (2000). "An extensible type system for component-based design." 6th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, Berlin, Lecture Notes in Computer Science 1785, pp. 20-37. Springer-Verlag, Berlin.
[81] Rehof, J., and Mogensen, T. (1996). "Tractable constraints in finite semilattices." Third International Static Analysis Symposium, Lecture Notes in Computer Science 1145, pp. 285-301. Springer-Verlag, Berlin.
[82] von der Beeck, M. (1994). "A comparison of statecharts variants." Proceedings of Formal Techniques in Real Time and Fault Tolerant Systems, Lecture Notes in Computer Science 863, pp. 128-148. Springer-Verlag, Berlin.
[83] Johnson, R. E. (1997). "Frameworks = (Components + Patterns)." Communications of the ACM, 40, 39-42.
[84] Abu-Ghazaleh, N., Alexander, P., Dieckman, D., Murali, R., and Penix, J. (1998). "Orbit—A framework for high assurance system design and analysis," TR 211/01/98/ECECS, University of Cincinnati.
[85] Alexander, P. (1998). "Multi-facetted design: The key to systems engineering." Proceedings of the Forum on Design Languages (FDL-98).
[86] Jourdan, M., Lagnier, F., Maraninchi, F., and Raymond, P. (1994). "A multiparadigm language for reactive systems." Proceedings of the 1994 International Conference on Computer Languages, Toulouse, France.
[87] Harel, D., Lachover, H., Naamad, A., Pnueli, A., Politi, M., Sherman, R., Shtull-Trauring, A., and Trakhtenbrot, M. (1990). "STATEMATE: A working environment for the development of complex reactive systems." IEEE Transactions on Software Engineering, 16, 403-414.
Empirical Studies of Quality Models in Object-Oriented Systems

LIONEL C. BRIAND
Software Quality Engineering Laboratory
Systems and Computer Engineering
Carleton University
1125 Colonel By Drive
Ottawa, K1S 5B6, Canada
[email protected]

JURGEN WUST
Fraunhofer IESE
Sauerwiesen 6
67661 Kaiserslautern, Germany
Abstract

Measuring structural design properties of a software system, such as coupling, cohesion, or complexity, is a promising approach toward early quality assessments. To use such measurement effectively, quality models that quantitatively describe how these internal structural properties relate to relevant external system qualities such as reliability or maintainability are needed. This chapter's objective is to summarize, in a structured and detailed fashion, the empirical results reported so far with modeling external system quality based on structural design properties in object-oriented systems. We perform a critical review of existing work in order to identify lessons learned regarding the way these studies are performed and reported. Constructive guidelines for facilitating the work of future studies are also provided, thus facilitating the development of an empirical body of knowledge.
1. Introduction
2. Overview of Existing Studies
   2.1 Classification of Studies
   2.2 Measurement
   2.3 Survey of Studies
   2.4 Discussion
3. Data Analysis Methodology
   3.1 Descriptive Statistics
   3.2 Principal Component Analysis
   3.3 Univariate Regression Analysis
   3.4 Prediction Model Construction
   3.5 Prediction Model Evaluation
4. Summary of Results
   4.1 Correlational Studies
   4.2 Controlled Experiments
5. Conclusions
   5.1 Interrelationship between Design Measures
   5.2 Indicators of Fault-Proneness
   5.3 Indicators of Effort
   5.4 Predictive Power of Models
   5.5 Cross-System Application
   5.6 Cost-Benefit Model
   5.7 Advanced Data Analysis Techniques
   5.8 Exploitation of Results
   5.9 Future Research Directions
Appendix A
Appendix B: Glossary
References

1. Introduction
As object-oriented programming languages and development methodologies moved forward, a significant research effort was also dedicated to defining specific quality measures and building quality models based on those measures. Quality measures of object-oriented code or design artifacts usually involve analyzing the structure of these artifacts with respect to the interdependencies of classes and components as well as their internal elements (e.g., inner classes, data members, methods). The underlying assumption is that such measures can be used as objective measures to predict various external quality aspects of the code or design artifacts, e.g., maintainability and reliability. Such prediction models can then be used to help decision-making during development. For example, we may want to predict the fault-proneness of components in order to focus validation and verification effort, thus finding more defects for the same amount of effort. Furthermore, as predictive measures of fault-proneness, we may want to consider the coupling, or level of dependency, between classes.
A large number of quality measures have been defined in the literature. Most of them are based on plausible assumptions, but one key question is to determine whether they are actually useful, significant indicators of any relevant, external quality attribute [1a]. We also need to investigate how they can be applied in practice, and whether they lead to cost-effective models in a specific application context. Although numerous empirical studies have been performed and reported in order to address the above-mentioned questions, it is difficult to synthesize the current body of knowledge and identify future research directions. One of the main reasons is the large variety of measures investigated and the lack of consistency and rigor in the reporting of results. This chapter's objective is to summarize, in a structured and detailed fashion, the results that have been reported so far. Overall, although not all the results are easy to interpret, there is enough consistency across studies to identify a number of strong conclusions. We also perform a critical review of existing work in order to identify lessons learned regarding the way these studies are performed and reported. Constructive guidelines for facilitating the work of future studies are also provided, thus facilitating the development of an empirical body of knowledge. Section 2 summarizes existing studies and their main characteristics. Section 3 describes the most important principles and techniques regarding the analysis of software quality data and structural measurement. A recommended analysis procedure is also provided. Section 4 summarizes, in great detail, the results of the studies discussed in Section 2. These results are discussed and conclusions are provided in Section 5.
2. Overview of Existing Studies
This section presents a first overview of the existing studies relating OO design measurement and system quality, and highlights their commonalities and differences. A comparison of their results is performed in Section 4.
2.1 Classification of Studies

Despite a large number of papers regarding the quality measurement of object-oriented systems, the number of articles that empirically investigate the relationship between design properties and various external quality attributes is relatively small. These studies fall into two categories:
1. Correlational studies. These are studies which by means of univariate or multivariate regression analysis try to demonstrate a statistical relationship
between one or more measures of a system's structural properties (as independent variables) and an external system quality (as a dependent variable).
2. Controlled experiments. These are studies that control the structural properties of a set of systems (independent variables, mostly related to the use of the OO inheritance mechanism), and measure the performance of subjects undertaking software development tasks in order to demonstrate a causal relationship between the two. So far such studies have mostly been performed with students and have focused on the impact of inheritance on maintenance tasks.
Correlational studies are by far more numerous as they are usually the only option in industrial settings. Outside these two categories, published empirical work typically falls into two further categories:
3. Application of a set of design measures to one or more systems, with a discussion of the obtained distributions of the measures within one system, or a comparison of distributions across two or more systems, e.g., [1b-4]. For instance, [4] develop two versions of a brewery control system to identical specifications, one following a data-driven approach [5], the other a responsibility-driven approach [6]. They apply the set of design measures by Chidamber and Kemerer [7] to the two resulting systems. They find the system resulting from the responsibility-driven approach to display more desirable structural properties. They conclude the responsibility-driven approach to be more effective for the production of maintainable, extensible, and reusable software. Conclusions in such studies are of course only supported when a relationship of the design measures used with the afore-mentioned system qualities is established. Considered in isolation, such studies are not suitable for demonstrating the usefulness of the structural measures, or for drawing conclusions from their measurement.
4. Application of a set of design measures to one or more systems and investigation of the relationships between these design measures, by investigating pairwise correlations and performing factor analysis (e.g., [1,8,9]).
Besides empirical studies, the literature is concerned with the following topics:
5. Definition of new sets of measures (e.g., [3,7,10-17]).
6. Definition of measurement frameworks for one or more structural properties, which provide guidelines on how these properties can, in principle, be measured [13,18-20].
7. Criticism/theoretical analysis of existing measures and measurement frameworks; in particular, there is an interest in defining, for measures of various structural properties, necessary mathematical properties these
measures must possess in order for them to be valid for the properties involved [21-24]. Our discussions in this article will focus on categories (1) and (2), with a strong emphasis on the former as these studies are by far the most numerous.
2.2 Measurement
In this section, we provide some examples of measures for object-oriented designs, to give the reader new to the field an impression of what measures of OO structural properties usually are about. We summarize here the measures by Chidamber and Kemerer ([3], in the following referred to as C&K). As we will see, these are the measures having received the widest attention in empirical studies, and they will be frequently mentioned in subsequent sections.

Chidamber and Kemerer define a suite of six measures (CBO, RFC, LCOM, DIT, NOC, WMC) to quantify the coupling, cohesion, inheritance relationships, and complexity of a class in an OO system:

• CBO (Coupling between objects)—A count of the number of noninheritance-related couples to other classes. An object of a class is coupled to another if methods of one class use methods or attributes of the other.
• RFC (Response for class)—RFC = |RS|, where RS is the response set of the class. The response set can be expressed as RS = {M} ∪ (∪i {Ri}), where {Ri} is the set of methods called by method Mi, and {M} is the set of all methods in the class. The response set of a class is the set of methods that can potentially be executed in response to a message received by an object of that class.
• LCOM (Lack of cohesion in methods)—Consider a class C1 with methods M1, M2, ..., Mn. Let {Ii} be the set of instance variables used by method Mi. There are n such sets {I1}, ..., {In}. Let P = {(Ii, Ij) | Ii ∩ Ij = ∅} and Q = {(Ii, Ij) | Ii ∩ Ij ≠ ∅}. If all n sets {I1}, ..., {In} are empty, then let P = ∅. Then

  LCOM = |P| − |Q| if |P| > |Q|, and 0 otherwise.

• DIT (Depth in inheritance tree)—The depth of a class in the inheritance tree is the maximum length from the node to the root of the tree.
• NOC (Number of children)—The number of classes derived from a given class.
• WMC (Weighted method complexity)—Consider a class Ci with methods M1, M2, ..., Mn. Let c1, c2, ..., cn be the complexity of the methods. Then

  WMC = c1 + c2 + ... + cn.

The complexities ci were intentionally left undefined. Two versions of WMC were suggested and are frequently used:
• In [14,25], ci is defined as McCabe's cyclomatic complexity of method Mi [26].
• In [27], each ci is set to 1. In other words, this version of WMC counts the (noninherited) methods of the class.
The Appendix provides short definitions for all measures mentioned in this chapter.
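To make these definitions concrete, the following sketch (in Java, assuming each method of a class is summarized by the set of instance variables it uses) computes LCOM and the method-counting version of WMC exactly as defined above. It is only an illustration of the formulas, not a measurement tool; in particular, it ignores inherited members and the method-complexity variant of WMC.

```java
import java.util.*;

// Computes LCOM and the method-counting version of WMC for a single class,
// given each method's set of used instance variables.
class CKMeasures {
    static int lcom(List<Set<String>> usedVars) {
        int p = 0, q = 0;
        for (int i = 0; i < usedVars.size(); i++) {
            for (int j = i + 1; j < usedVars.size(); j++) {
                Set<String> inter = new HashSet<>(usedVars.get(i));
                inter.retainAll(usedVars.get(j));
                if (inter.isEmpty()) p++; else q++;
            }
        }
        return p > q ? p - q : 0;          // LCOM = |P| - |Q| if |P| > |Q|, else 0
    }

    static int wmcUnitWeights(List<Set<String>> methods) {
        return methods.size();             // each method has complexity c_i = 1
    }

    public static void main(String[] args) {
        // Three methods: m1 uses {a, b}, m2 uses {b}, m3 uses {c}.
        List<Set<String>> methods = List.of(Set.of("a", "b"), Set.of("b"), Set.of("c"));
        System.out.println("LCOM = " + lcom(methods));           // P = {(m1,m3),(m2,m3)}, Q = {(m1,m2)} -> 1
        System.out.println("WMC  = " + wmcUnitWeights(methods)); // 3
    }
}
```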
2.3 Survey of Studies
This section is divided into two distinct subsections. The first presents correlational studies, whereas the second focuses on experiments. The studies are summarized in terms of their settings, dependent variable, independent variables, and analysis techniques.
2.3.1 Correlational Studies
Table I provides for each study a brief description of:
• The external system quality of interest that was used as the dependent variable of the study.
• The measures of structural properties used as independent variables of the study.
• A brief characterization of the system(s) from which the data sets were obtained (language, size, application domain). The systems are described the first time they are reported and are denoted by acronyms in the remainder of the table.
• The types of analyses that were performed. Most importantly, this includes
  • univariate analysis (what modeling technique was used, if any),
  • multivariate analysis (what modeling technique was used, if any),
  • what kind of validation of the multivariate model was performed, if any.
TABLE I
OVERVIEW OF CORRELATIONAL STUDIES
[Table summarizing, for each study: reference; dependent variable; independent variables; data set (language, size, application domain); univariate analysis; multivariate analysis; model evaluation.]
Without delving into the details of the studies in Table I, we can draw a number of observations: • Choice of dependent variable. The dependent variables investigated are either fault-proneness (probability of fault detection) or the number of faults or changes in a class, effort for various development activities, or expert opinion/judgment about the psychological complexity of a class. • Fault-proneness or the number of defects detected in a class is the most frequently investigated dependent variable. Sources of faults are from either unit/system testing or field failures. This choice of dependent variable is by far the most common in the literature. One reason is that using fault-proneness as a dependent variable is an indirect way of looking at reliability, which is an important external quality to consider. Another explanation is that the collection of fault data (including the assignment of faults to classes) is less difficult than collecting other data related to classes (e.g., effort) and this makes it a convenient choice for investigating the impact of structural measures on the cognitive complexity of classes. • Less frequently investigated is effort for various activities: either total development effort, rework effort/functional enhancements, or effort to fix faults. For some studies, effort for individual classes was collected, which, in practice, is a difficult undertaking. Other studies collected system/project-wide effort, which is easier to account for but leads to other practical problems. If systems become the unit of analysis then it becomes difficult to obtain enough data to perform multivariate analysis. • Two studies [40,58] used the likelihood or number of ripple effects on other classes when a change is performed to a class. The goal was to provide a model to support impact analysis. These studies are not described in the remainder of this chapter as they are the only ones of their kind and more studies are needed to confirm the trends observed. • In the absence of hard quality data obtained from development projects, subjective data are sometimes used. For instance, the following have been used: expert opinion about the perceived complexity or cohesion of a class, and preference ranking between design alternatives. There are a number of problems associated with the use of subjective measurement. Determining what constitutes an "expert" is one. Moreover, it is a priori unclear as to which degree experts' judgment correlates with any external system quality attribute. Eliciting expert opinion is a difficult undertaking and must be carefully
planned to provide meaningful results, and the procedures used must be properly reported. Although this is outside the scope of this article, some abundant literature exists on the subject [60]. An interesting question that, to our knowledge, has not been investigated in depth to date is whether structural measures can perform as well as or better than experts in predicting quality attributes such as fault-proneness.
• Choice of independent variables. Existing measures receive a varying amount of attention in the empirical studies. The measures by Chidamber and Kemerer [3] were investigated the most. One reason is that this was one of the first publications on the measurement of object-oriented systems. The relative difficulty of collecting more complex measures through static analyzers may also be an explanation. Last, the profusion of papers proposing new measures, using a different terminology and formalism, has made any selection of meaningful measures a difficult undertaking. Some recently published measurement frameworks [18,19] may help choose appropriate measures based on their properties. A careful selection of measures, based on a clear rationale, is indispensable to maintain the complexity of the data analysis within reasonable limits and lower the chances of finding significant relationships by chance [61]. However, in the early stage of investigation, it is common for studies to investigate large numbers of alternatives, as they tend to be exploratory.
• Building prediction models. Only about half of the studies employ some kind of multivariate analysis in order to build an accurate prediction model for the dependent variable. The remaining studies only investigate the impact of individual measures on system quality, but not their combined impact. Depending on the measurement scale of the dependent variable, different regression techniques are being used. Furthermore, a number of detailed technical issues regarding the data analysis can be observed and are discussed in Section 3. One noticeable pattern is the number of studies that only investigate linear relationships between structural measures and the dependent variable of interest. Although there is no rationale to support this, data sets are often not large enough to investigate nonlinear relationships or interactions. In addition, because of the lack of supporting theory, it is often difficult to know what to search for. Exploratory techniques, such as regression trees or MARS [62], have been used in some studies to determine nonlinearities and variable interactions, with some degree of success [31,33].
• Evaluating prediction models. From the studies that perform multivariate analysis, only half of these perform some kind of cross validation [63], where the prediction performance of the multivariate prediction model in a
relevant application context is investigated. The other studies only provide a measure of the goodness-of-fit of the prediction model (e.g., R²). As a consequence, the potential benefits of using such prediction models are not always clear, especially from a practical standpoint. Very few studies attempt to build a model on a system and apply it to another one, within one environment. As studies move away from exploration and investigate the practical applications of measurement-based models, cross-system predictions will require more attention. One practical difficulty is obtaining consistent data from different projects of comparable nature.
• Data sets. Data sets with fault or effort data at the class level are rare. As a consequence, these data sets tend to be repeatedly used for various studies, for example, investigating different sets of measures, or using different modeling techniques. On the one hand, this allows for better comparison between studies, but it is also detrimental to building an increased body of knowledge, as replications of individual studies in many different contexts rarely take place. Instead, we find a large number of different studies using a small number of data sets.
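As an illustration of the kind of cross validation referred to above, the sketch below evaluates an arbitrary binary fault-proneness predictor with k-fold cross validation and reports correctness (the proportion of classes predicted fault-prone that actually contain faults) and completeness (the proportion of faulty classes that are predicted fault-prone). The trainer and predictor are left abstract, so any modeling technique (e.g., logistic regression) could be plugged in; the types and names are assumptions for illustration.

```java
import java.util.*;
import java.util.function.*;

// k-fold cross validation for a binary fault-proneness predictor.
// A "trainer" builds a predictor from a training set; the predictor maps a
// vector of design measures to a fault-proneness decision.
class CrossValidation {
    record Observation(double[] measures, boolean faulty) {}

    static double[] correctnessCompleteness(
            List<Observation> data,
            Function<List<Observation>, Predicate<double[]>> trainer,
            int folds, long seed) {
        List<Observation> shuffled = new ArrayList<>(data);
        Collections.shuffle(shuffled, new Random(seed));

        int tp = 0, fp = 0, fn = 0;
        for (int k = 0; k < folds; k++) {
            List<Observation> train = new ArrayList<>(), test = new ArrayList<>();
            for (int i = 0; i < shuffled.size(); i++) {
                (i % folds == k ? test : train).add(shuffled.get(i));
            }
            Predicate<double[]> model = trainer.apply(train);  // fit on the other folds
            for (Observation o : test) {                       // predict on the held-out fold
                boolean predicted = model.test(o.measures());
                if (predicted && o.faulty()) tp++;
                else if (predicted && !o.faulty()) fp++;
                else if (!predicted && o.faulty()) fn++;
            }
        }
        double correctness = (tp + fp == 0) ? 0 : (double) tp / (tp + fp);
        double completeness = (tp + fn == 0) ? 0 : (double) tp / (tp + fn);
        return new double[] { correctness, completeness };
    }
}
```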
2.3.2 Experiments
Table II provides an overview of the controlled experiments investigating the relationship between structural design properties and system quality in object-oriented systems. For each study, we state the literature source, a characterization of the dependent and independent variables investigated, the systems used for the experiment, and the participants involved. The rightmost column indicates what experimental design was employed and the analysis techniques used to test the research hypotheses. For an introduction to experimental designs, see, e.g., [64]. Controlled experiments are far fewer in number than correlational studies. The studies mostly investigate aspects of system understandability and maintainability as dependent variables, and usage of inheritance as the independent variable. Also, we see that the controlled experiments are usually performed in a university setting with student subjects. The qualitative results of these experiments will be summarized in Section 4.2.
TABLE II
OVERVIEW OF CONTROLLED EXPERIMENTS
Each entry lists the dependent variable; the independent variables; the systems and subjects; and the experimental design and analysis technique.
• Reusability (subjective perception of); C&K, LOC, methods, attributes, meaningfulness of variable names (subjective measure); 2 systems, 3 and 2 classes, one designed to be reusable, the other not; ad hoc comparison.
• Understandability, correctness, completeness, modification rate of impact analysis; procedural vs OO design, adherence to common principles of good design; 2 x 2 systems (30 pages reqs & design), 13 student subjects, 2 runs; 2 x 2 factorial design, ANOVA, paired t-test.
• Understandability, correctness, completeness, modification rate of impact analysis; adherence to common principles of good design; 2 x 2 systems (30 pages reqs & design), 31 student subjects, 2 runs; 2 x 2 factorial design, ANOVA, paired t-test.
• Maintainability, understandability; DIT (flat vs deep inheritance structure); 4 versions of a university admin IS system with 2 x 0, 1 x 3, 1 x 5 levels of inheritance, 4 x 12 student subjects; 4 x 12 between-subject design, χ² to compare groups.
• Understandability, modifiability, "debugability" (time, correctness, completeness to perform these tasks); flat vs deep inheritance structure; 2 groups (5 and 6 students); within-subject design, 2 groups, three different tasks.
• Maintainability (time for maintenance task); flat vs deep inheritance structure; 3 x 2 systems, C++, 400-500 LOC, 31 student subjects, 3 runs; blocked design, 1 internal replication, Wilcoxon signed rank and rank sum tests.

2.4 Discussion

From Tables I and II, we can see there is a large number of studies that have already been reported. The great majority of them are correlational studies. One of the reasons is that it is difficult to perform controlled experiments in industrial settings. Moreover, preparing the material for such experiments (e.g., alternative
functional designs) is usually costly. With correlational studies, actual systems and design documents can be used. Another important observation is that the analysis procedures that are followed throughout the correlational studies vary a great deal. To some extent, some variation is to be expected, as alternative analysis procedures are possible, but many of the studies are actually not optimal in terms of the techniques being used. For instance, [28] overfits the data and performs a great number of statistical tests without using appropriate techniques for repeated testing. We therefore need to facilitate a comparison of studies, to ensure that the data analysis is complete and properly reported. Only then will it be possible to build upon every study and develop a body of knowledge that will allow us to determine how to use structural measurement to build quality models of object-oriented software. Section 3 provides a detailed procedure that was first used (with minor differences) in a number of articles [29-31,33]. Such a procedure will make the results of a study more interpretable, and thus easier to compare, and make the analysis more likely to yield accurate prediction models.
3. Data Analysis Methodology
Recall that our focus here is to explain the relationships between structural measures of object-oriented designs and external quality measures of interest. In this section, because we focus on data analysis procedures and multivariate modeling, we will refer to these measures as independent and dependent variables, respectively. For the sake of brevity they will be denoted as IVs and DVs. Our goal here is not to paraphrase books on quantitative methods and statistics, but rather to map clearly the problems we face onto the techniques that exist. We also provide clear, practical justifications for the techniques we suggest should be used. At a high level, the procedure we have used [29-31,33] consists of the following steps.
1. Descriptive statistics [70]. An analysis of the frequency distributions of the IVs will help to explain some of the results observed in subsequent steps and is also crucial for explaining differences across studies.
2. Principal component analysis (PCA) [71]. In the investigation of measures of structural properties, it is common to have much collinearity between measures capturing similar underlying phenomena. PCA is a standard technique for determining the dimensions captured by our IVs. PCA will help us better interpret the meaning of our results in subsequent steps.
3. Univariate analysis. Univariate regression analysis looks at the relationships between each of the IVs and the DV under study. This is a first step
to identify types of IVs significantly related to the DV and thus potential predictors to be used in the next step.
4. Prediction model construction (multivariate analysis). Multivariate analysis also looks at the relationships between IVs and the DV, but considers the former in combination, as covariates in a multivariate model, in order to better explain the variance of the DV and ultimately obtain accurate predictions. To measure the prediction accuracy, different modeling techniques (e.g., OLS [72], logistic regression, Poisson regression [73]) have specific measures of goodness-of-fit of the model.
5. Prediction model evaluation. In order to get an estimate of the predictive power of the multivariate prediction models that is more realistic than goodness-of-fit, we need to apply models to data sets other than those from which they were derived. A set of procedures known as cross-validation [63] should be carried out. Typically, such a procedure consists of dividing the data set into V pieces and using them in turn as test data sets, using the remainder of the data set to fit the model. This is referred to as V-cross-validation and allows the analyst to get a realistic accuracy prediction even when a data set of limited size is available. Based on the results of the cross-validation, the benefit of using the model in a usage scenario should then be demonstrated.
The above procedure is aimed at making studies and future replications repeatable and comparable across different environments. In the following, we describe and motivate each step in more detail.
3.1 Descriptive Statistics
Within each case study, the distribution (mean, median, and interquartile ranges) and variance (standard deviation) of each measure are examined. Low-variance measures do not differentiate classes very well and therefore are not likely to be useful predictors. The range and distribution of a measure determine the applicability of subsequent regression analysis techniques. Analyzing and presenting the distribution of measures is important for the comparison of different case studies. (Note that one strong conclusion that comes from our experience of analyzing data and building models is that we will only be able to draw credible conclusions regarding what design measures to use if we are able to replicate studies across a large number of environments and compare their results.) It allows us to determine whether the data collected across studies stem from similar populations. If not, this information will likely be helpful to explain different findings across studies. Also, this analysis will identify measures with potential outlying values, which will be important in the subsequent regression analyses. Univariate and multivariate outlier analyses are discussed in their respective sections.
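As a minimal sketch of this step in Python with pandas (the measure names and values here are hypothetical), one can tabulate the distribution of each measure and flag measures with (near-)zero variance:

    import pandas as pd

    # Hypothetical data: one row per class, one column per design measure.
    data = pd.DataFrame({
        "WMC":   [5, 7, 3, 12, 7, 9],
        "CBO":   [2, 2, 2, 3, 2, 2],
        "LCOM5": [0.4, 0.5, 0.1, 0.9, 0.5, 0.3],
    })

    # Mean, standard deviation, median, and interquartile range per measure.
    stats = data.describe(percentiles=[0.25, 0.5, 0.75]).T
    stats["iqr"] = stats["75%"] - stats["25%"]
    print(stats[["mean", "std", "50%", "iqr"]])

    # Measures with (near-)zero variance differentiate classes poorly and are
    # unlikely to be useful predictors.
    low_variance = stats.index[stats["std"] < 1e-6].tolist()
    print("Low-variance measures:", low_variance)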
3.2 Principal Component Analysis
It is common to see groups of variables in a data set that are strongly correlated. These variables are likely to capture the same underlying property of the object to be measured. PCA is a standard technique for identifying the underlying, orthogonal dimensions (which correspond to properties that are directly or indirectly measured) that explain relations between the variables in the data set. For example, analyzing a data set using PCA may lead to the conclusion that all your measures come down to measuring some aspect of class size and import coupling.

Principal components (PCs) are linear combinations of the standardized IVs. The sum of the squares of the weights in each linear combination is equal to 1. PCs are calculated as follows. The first PC is the linear combination of all standardized variables that explains a maximum amount of variance in the data set. The second and subsequent PCs are linear combinations of all standardized variables, where each new PC is orthogonal to all previously calculated PCs and captures a maximum variance under these conditions. Usually, only a subset of all variables shows large weights and therefore contributes significantly to the variance of each PC. To better identify these variables, the loadings of the variables in a given PC can be considered. The loading of a variable is its correlation with the PC. The variables with high loadings help identify the dimension the PC is capturing, but this usually requires some degree of interpretation. In other words, one assigns a meaning or property to a PC based on the variables that show a high loading. For example, one may decide that a particular PC mostly seems to capture the size of a class.

In order to further ease interpretation of the PCs, we consider the rotated components. This is a technique where the PCs are subjected to an orthogonal rotation in the sample space. As a result, the resulting principal components (referred to as rotated components) show a clearer pattern of loadings, where the variables have either a very low or a high loading, thus showing either a negligible or a significant impact on the PC. There exist several strategies to perform such a rotation, the varimax rotation being the most frequently used strategy in the literature.

For a set of n measures there are, at most, n orthogonal PCs, which are calculated in decreasing order of the variance they explain in the data set. Associated with each PC is its eigenvalue, which is a measure of the variance of the PC. Usually, only a subset of the PCs is selected for further analysis (interpretation, rotated components, etc.). A typical stopping rule that we also use in our studies is that only PCs whose eigenvalue is larger than 1.0 are selected. See [71] for more details on PCA and rotated components.

We do not consider the PCs themselves for use as independent variables in the prediction model. Although this is often done with ordinary least-squares
(OLS) regression, in the context of logistic regression this has been shown to result in models with a suboptimal goodness-of-fit (when compared to models built using the measures directly), and it is not current practice. In addition, principal components are always specific to the particular data set on which they have been computed, and may not be representative of other data sets. A model built using principal components is not likely to be applicable across different systems.

Still, it is interesting to interpret the results from regression analyses (see next sections) in the light of the results from PCA, e.g., to determine from which PCs the measures found to be significant stem. This shows which dimensions are the main drivers of fault-proneness, and may help explain why this is the case. Regarding replicated studies, it is interesting to see which dimensions are also observable in PCA results from other systems, and to find possible explanations for differences in the results, e.g., a different design methodology. We would expect to see consistent trends across systems for the strong PCs that explain a large percentage of the data set variance and can be readily interpreted. From such observations, we can also derive recommendations regarding which measures appear to be redundant and need not be collected, without losing a significant amount of design information.

As an example of an application of PCA, and of the types of conclusions we can draw from it, Table III shows the rotated components obtained from cohesion measures applied to the system in [33]. The measures mostly capture two orthogonal dimensions (the rotated components PC1 and PC2) in the sample space formed by all measures. Those two dimensions capture 81.5% of the variance in the data set.

TABLE III
ROTATED COMPONENTS FOR COHESION MEASURES (FROM [33])

                 PC1       PC2
  Eigenvalue:    4.440     3.711
  Percent:       44.398    37.108
  CumPercent:    44.398    81.506
  LCOM1          0.084     0.980
  LCOM2          0.041     0.983
  LCOM3         -0.218     0.929
  LCOM4         -0.604     0.224
  LCOM5         -0.878     0.057
  Coh            0.872    -0.113
  Co             0.820     0.139
  LCC            0.869     0.320
  TCC            0.945     0.132
  ICH            0.148     0.927

Analyzing the definitions of the measures with high loadings in PC1 and PC2 yields the following interpretations of the cohesion dimensions:
• PC1: Measures LCOM5, Coh, Co, LCC, TCC are all normalized cohesion measures, i.e., measures that have a notion of maximum cohesion.
• PC2: Measures LCOM1-LCOM3 and ICH are nonnormalized cohesion measures, which have no upper bound.

As discussed in [18], many of the cohesion measures are based on similar ideas and principles. Differences in the definitions are often intended to improve on shortcomings of other measures (e.g., behavior of the measure in some pathological cases). The results show that these variations, based on careful theoretical consideration, do not make a substantial difference in practice. By and large, the measures investigated here capture either normalized or nonnormalized cohesion, measures of the latter category having been shown to be related to the size of the class in past studies [29,30].
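As a concrete, minimal sketch of how such an analysis can be scripted in Python (the measure values here are randomly generated and hypothetical, and the varimax routine is a generic textbook implementation rather than the exact procedure used in the studies cited above):

    import numpy as np

    def varimax(loadings, max_iter=100, tol=1e-6):
        """Orthogonal varimax rotation of a loading matrix."""
        p, k = loadings.shape
        rotation = np.eye(k)
        var_sum = 0.0
        for _ in range(max_iter):
            rotated = loadings @ rotation
            u, s, vt = np.linalg.svd(
                loadings.T @ (rotated ** 3
                              - rotated @ np.diag(np.diag(rotated.T @ rotated)) / p))
            rotation = u @ vt
            if s.sum() < var_sum * (1 + tol):
                break
            var_sum = s.sum()
        return loadings @ rotation

    # X: n classes x m design measures (hypothetical values).
    X = np.random.default_rng(0).lognormal(size=(100, 6))
    Z = (X - X.mean(axis=0)) / X.std(axis=0)              # standardize
    eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
    order = np.argsort(eigval)[::-1]                      # decreasing variance
    eigval, eigvec = eigval[order], eigvec[:, order]

    keep = eigval > 1.0                                   # eigenvalue-greater-than-1 rule
    loadings = eigvec[:, keep] * np.sqrt(eigval[keep])    # correlations of measures with PCs
    rotated = varimax(loadings)
    print("Variance explained (%):", 100 * eigval[keep] / eigval.sum())
    print("Rotated loadings:\n", np.round(rotated, 2))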
3.3 Univariate Regression Analysis
Univariate regression is performed for each individual IV against the DV, in order to determine whether the measure is a potentially useful predictor. Univariate regression analysis is conducted for two purposes:
• to test the hypotheses that the IVs have a significant statistical relationship with the DV, and
• to screen out measures that are not significantly related to the DV and not likely to be significant predictors in multivariate models.

Only measures significant at a significance level of, say, α = 0.25 [74] should be considered for the subsequent multivariate analysis. Note that an IV may be significantly related to the DV for various reasons: it may capture a causal relationship, or the relationship may be the result of a confounding effect with another IV. Because of the repeated testing taking place during univariate analysis, there is a nonnegligible chance of obtaining a spurious relationship by chance. Although a number of techniques exist to deal with repeated testing (e.g., Bonferroni [61]), this is not an issue here, as we are not trying to demonstrate or provide evidence for a causal relationship. Our goal is to preselect a number of potential predictors for multivariate analysis, which will in turn tell us which IVs seem to be useful predictors. Causality cannot really be demonstrated in this context; only a careful definition of the design measures used as IVs, along with plausible mechanisms to explain causality, can be provided.

The choice of modeling technique for univariate analysis (and also for the multivariate analysis that follows) is mostly driven by the nature of the DV: its distribution, measurement scale, and whether it is continuous or discrete. Examples from the literature include:
• Logistic regression to predict the likelihood of an event occurring, e.g., fault detection [29,48] (a minimal sketch of a univariate logistic regression follows this list).
• Ordinary least-squares regression, often combined with monotonic transformations (logarithmic, quadratic) of the IVs and/or DV, to predict interval/ratio scale DVs [33,43].
• Negative binomial regression (of which Poisson regression is a special case) to predict discrete DVs that have low averages and whose distribution is skewed to the right [75].
• Parametric and nonparametric measures of correlation (Spearman ρ, Pearson r) are sometimes used. However, they can only provide a rough picture; they do not account for nonlinearities and are not comparable to the multivariate modeling techniques we present below.
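As a minimal sketch of the first option in Python with statsmodels (the fault indicator and measure values are hypothetical), one univariate logistic regression is fitted per measure, and its coefficient and p-value reported:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    measures = pd.DataFrame({"WMC": rng.poisson(8, 200),
                             "CBO": rng.poisson(3, 200),
                             "DIT": rng.integers(0, 5, 200)})
    faulty = (rng.random(200) < 0.2).astype(int)      # 1 if the class contained a fault

    for name in measures.columns:
        X = sm.add_constant(measures[name].astype(float))
        result = sm.Logit(faulty, X).fit(disp=0)
        coef, pval = result.params[name], result.pvalues[name]
        # Keep the measure for multivariate analysis if significant at alpha = 0.25.
        print(f"{name}: coef={coef:.3f}, p={pval:.3f}, keep={pval < 0.25}")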
3.3.1 Univariate Outliers
Outliers are data points located in an empty part of the sample space [76]. Inclusion or exclusion of outliers can have a large influence on the analysis results. It is important that conclusions drawn are not solely dependent on a few outlying observations; otherwise, the resulting prediction models are unstable and cannot be reliably used. When comparing results across replicated studies, it is particularly crucial to ensure that differences in observed trends are not due to singular, outlying data points. For this reason it is necessary to identify outliers, test their influence, and possibly remove them to obtain stable results.

For univariate analysis, all observations must be checked for outlying values in the distribution of any one of the measures used in the study. The influence of each identified observation is tested: an outlier is influential if the significance of the relationship between the measure and the DV depends on the absence or presence of the outlier. Such influential outliers should not be considered in the univariate analysis results. Outliers may be detected from scatterplots, and their influence systematically tested. For many regression techniques, specific diagnostics for automatically identifying outliers have been proposed, e.g., Cook's distance for OLS [76] and Pregibon's beta for logistic regression [77].
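These diagnostics are available in standard statistical packages. A minimal sketch using Cook's distance for an OLS model in Python with statsmodels follows (the data are hypothetical; an analogous test would be run with the diagnostics of whichever regression technique is actually used, e.g., Pregibon's beta for logistic regression):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    size = rng.lognormal(mean=2.0, size=100)                 # hypothetical size measure
    effort = 3.0 * size + rng.normal(scale=5.0, size=100)    # hypothetical DV
    size[0], effort[0] = 200.0, 20.0                         # inject an outlying class

    model = sm.OLS(effort, sm.add_constant(size)).fit()
    cooks_d, _ = model.get_influence().cooks_distance

    # A common rule of thumb: observations with Cook's distance > 4/n deserve a closer look.
    suspects = np.where(cooks_d > 4.0 / len(size))[0]
    print("Potentially influential observations:", suspects)

    # Test their influence: refit without the suspects and check whether the
    # significance of the relationship changes.
    keep = np.setdiff1d(np.arange(len(size)), suspects)
    refit = sm.OLS(effort[keep], sm.add_constant(size[keep])).fit()
    print("p-value with all data:", model.pvalues[1], "without suspects:", refit.pvalues[1])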
3.4 Prediction Model Construction
Multivariate regression is performed to build prediction models of the DV. This analysis is conducted to determine how well we can predict the DV when the design measures are used in combination. For the selection of measures to be used in the model, the following strategy must be employed:
• Select an appropriate number of independent variables in the model. Overfitting a model increases the standard error of the model's prediction, making the model more dependent on the data set it is based on and thus less generalizable.
• Reduce multicollinearity [78], i.e., avoid independent variables that are highly correlated. High multicollinearity results in large standard errors for regression coefficient estimates and may affect the predictive power of the model. It also makes the estimate of the impact of one IV on the DV difficult to derive from the model.
3.4.1 Stepwise Selection Process
Often, the validation studies described here are exploratory in nature; that is, we do not have a strong theory that tells us which variables should be included in the prediction model. In this situation, a stepwise selection process, in which prediction models are built step by step and each step consists of one variable entering or leaving the model, can be used.

The two major stepwise selection processes used for regression model fitting are forward selection and backward elimination [74]. The general forward selection procedure starts with a model that includes the intercept only. Based on certain statistical criteria, variables are selected one at a time for inclusion in the model, until a stopping criterion is fulfilled. Similarly, the general backward elimination procedure starts with a model that includes all independent variables. Variables are selected one at a time to be deleted from the model, until a stopping criterion is fulfilled.

When investigating a large number of independent variables, the initial model in a backward selection process would contain too many variables and could not be interpreted in a meaningful way. In that case, we use a stepwise forward selection procedure to build the prediction models. In each step, all variables not already in the model are tested: the most significant variable is selected for inclusion in the model. If this causes a variable already in the model to become not significant (at some level of significance α_Exit), it is deleted from the model. The process stops when adding the best variable no longer improves the model significantly (at some significance level α_Enter < α_Exit).

A procedure commonly used to reduce the number of independent variables, in order to make the use of a backward selection process possible, is to preselect variables using the results from principal component analysis: the highest-loading variables for each principal component are selected, and the backward selection process then runs on this reduced set of variables. In our studies [29,30], within the context of logistic regression, this strategy showed the goodness-of-fit of the models thus
obtained to be poorer than the models obtained from the forward stepwise procedure, hence favoring the use of the latter. The choice of significance levels for measures to enter and exit the model is an indirect means of controlling the number of variables in the final model. A rule of thumb for the number of covariates is to have at least 10 data points per independent variable.
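A bare-bones sketch of such a forward stepwise procedure for logistic regression, based on Wald p-values, is shown below (Python with statsmodels; the thresholds and the data are hypothetical, and the sanity checks discussed in the next subsection are omitted):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def forward_stepwise(y, X, alpha_enter=0.05, alpha_exit=0.10):
        """Forward stepwise selection for logistic regression based on Wald p-values."""
        selected = []
        while True:
            candidates = [c for c in X.columns if c not in selected]
            if not candidates:
                break
            # p-value of each candidate when added to the current model.
            pvals = {}
            for c in candidates:
                fit = sm.Logit(y, sm.add_constant(X[selected + [c]])).fit(disp=0)
                pvals[c] = fit.pvalues[c]
            best = min(pvals, key=pvals.get)
            if pvals[best] >= alpha_enter:
                break                                  # no remaining variable improves the model
            selected.append(best)
            # Drop previously included variables that became non-significant.
            fit = sm.Logit(y, sm.add_constant(X[selected])).fit(disp=0)
            selected = [c for c in selected if c == best or fit.pvalues[c] < alpha_exit]
        return selected

    rng = np.random.default_rng(3)
    X = pd.DataFrame(rng.poisson(5, size=(200, 4)), columns=["WMC", "CBO", "RFC", "DIT"])
    y = (rng.random(200) < 1 / (1 + np.exp(2 - 0.3 * X["CBO"]))).astype(int)
    print("Selected covariates:", forward_stepwise(y, X.astype(float)))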
3.4.1.1 Criticism of stepwise selection heuristics. Stepwise selection procedures have been criticized for a couple of reasons: (1) the inclusion of noise variables in the presence of multicollinearity (clearly an issue with our design measures), and (2) the number of variables selected is a function of the sample size and is often too large. This casts doubt on the trustworthiness of a model built in such a fashion. In [36], the authors state that "variables selected through such a procedure cannot be construed as the best object-oriented metrics, nor even as good predictors of the DV."

However, many IVs can typically be replaced by other related IVs (i.e., confounded measures, belonging to the same principal component in PCA) without a significant loss of fit. In addition, our studies show that trends between design measures and system quality frequently vary across systems [30,31], and a prediction model built from one system is likely to be representative of only a small number of systems developed in the same environment. The particular measures selected for a model are not of much general importance. Therefore, the goal of building multivariate models using stepwise heuristics is not to determine what the "best" metrics are, or whether they are the only or best predictors. The most we can hope for is that the properties/dimensions (i.e., principal components) captured by the measures are relevant, are frequently represented in the predictive models, and can explain most of the variance in the DV. In short, our aim here is only to obtain an optimal predictive model, as defined in Section 3.5.

Stepwise variable selection is a standard technique frequently used in the literature. It is certainly true that the output from such a stepwise selection heuristic cannot be blindly relied upon. It is necessary to perform a number of sanity checks on the resulting model: (1) the number of covariates is reasonable considering the size of the data set, (2) the degree of collinearity among covariates is acceptable [78], and (3) no outlier is overinfluential with respect to the selection of covariates. If violations of these principles are detected, they can be amended by (1) adjusting inclusion/exclusion thresholds, (2) removing covariates, or (3) dismissing data points. We think the results obtained from a model that passes these checks, and also performs reasonably well in the subsequent model evaluation (see Section 3.5), are trustworthy at least in that they indicate the order of magnitude of the benefits that we can expect to achieve from a prediction model built in the same fashion in any given environment.
3.4.2 Capturing Nonlinear or Local Trends and Interactions
When analyzing and modeling the relationship between IVs and DVs, one of the main issues is that relationships between these variables can be complex (nonlinear) and involve interaction effects (the effect of one variable depends on the value of one or more other variables). Because we currently know little about what to expect, and because such relationships are also expected to vary from one organization and family of systems to another, identifying nonlinear relationships and interaction effects is usually a rather complex, exploratory process. Data mining techniques such as CART regression tree analysis [33,79] make no functional-form assumption about the relationship between the IVs and the DV. In addition, the tree construction process naturally explores interaction effects. Another recent technique, MARS (multivariate adaptive regression splines) [62], attempts to approximate complex relationships by a series of linear regressions on different intervals of the independent variable ranges and automatically searches for interactions. Both techniques can be combined with traditional regression modeling [33].
3.4.2.1 Hybrid models with regression trees. By adapting some of the recommendations in [80], traditional regression analysis and regression trees can be combined into a hybrid model as follows:
• Run a regression tree analysis, with some restriction on the minimum number of observations in each terminal node (in order to ensure that samples will be large enough for the next steps to be useful).
• Add dummy variables (binary) to the data set by assigning observations to terminal nodes in the regression tree, i.e., assign 1 to the dummy variable for observations falling in its corresponding terminal node. There are as many dummy variables as terminal nodes in the tree.
• Together with the IVs based on design measures, the dummy variables can be used as additional covariates in the stepwise regression.

This procedure takes advantage of the modeling power of regression analysis while still using the specific interaction structures that regression trees can uncover and model. As shown in [33], such properties may significantly improve the predictive power of multivariate models.
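A simplified sketch of this hybrid approach in Python, using scikit-learn for the tree and statsmodels for the regression (data, column names, and the minimum node size are hypothetical, and the final stepwise selection is replaced here by a single logistic regression on all covariates):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(4)
    X = pd.DataFrame(rng.poisson(5, size=(300, 3)), columns=["WMC", "CBO", "RFC"]).astype(float)
    y = (rng.random(300) < 1 / (1 + np.exp(3 - 0.2 * X["CBO"] * (X["WMC"] > 6)))).astype(int)

    # Step 1: grow a tree with a minimum number of observations per terminal node.
    tree = DecisionTreeClassifier(min_samples_leaf=30, random_state=0).fit(X, y)

    # Step 2: one dummy variable per terminal node (1 if the observation falls into it).
    leaf_ids = tree.apply(X)
    dummies = pd.get_dummies(pd.Series(leaf_ids, name="node"), prefix="node").astype(float)

    # Step 3: use the dummies as additional covariates next to the design measures
    # (one dummy is dropped to avoid collinearity with the intercept).
    covariates = pd.concat([X, dummies.iloc[:, :-1]], axis=1)
    hybrid = sm.Logit(y, sm.add_constant(covariates)).fit(disp=0)
    print(hybrid.summary())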
3.4.2.2 Multivariate adaptive regression splines (MARS). As previously discussed, building quality models based on structural design measures is an exploratory process. MARS is a novel statistical method that has been shown to be useful in helping to specify appropriate regression models in an exploratory context. The technique is presented in [62] and is supported by a
recent tool developed by Salford Systems (available at www.salford-systems.com). At a high level, MARS attempts to approximate complex relationships by a series of linear regressions on different intervals of the independent variable ranges (i.e., subregions of the independent variable space). It is very flexible, as it can adapt to any functional form, and it is thus suitable for exploratory data analysis. Search algorithms find the appropriate intervals on which to run independent linear regressions, for each independent variable, and identify interactions while avoiding overfitting the data. Although these algorithms are complex and outside the scope of this paper, MARS is based on a number of simple principles. MARS identifies optimal basis functions based on the IVs, and these basis functions are then used as candidate covariates to be included in the regression model. When we are building, for example, a classification model (such as a fault-proneness model), we use MARS in two steps: (1) use the MARS algorithms to identify relevant basis functions, and (2) refit the model with logistic regression, using the basis functions as covariates [33]. Our experience has shown that MARS was helpful in building more accurate predictive models [31,33].
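The sketch below only mimics the key idea in plain Python rather than using the MARS tool itself: hinge basis functions max(0, x - t) are generated at a few fixed candidate knots (simple quantiles here, whereas MARS searches for knots and interactions automatically and prunes them), and a logistic regression is then refit on the linear term plus the hinges, as in step (2) above. The data, measure name, and knot choices are hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def hinge_basis(x, knots):
        """Hinge functions max(0, x - t), one per knot t (a piecewise-linear spline basis)."""
        return pd.DataFrame({f"h({x.name}-{t:.1f})": np.maximum(0.0, x - t) for t in knots})

    rng = np.random.default_rng(5)
    wmc = pd.Series(rng.lognormal(2.0, 0.5, 400), name="WMC")
    # Hypothetical nonlinear relationship: risk increases only beyond a size threshold.
    y = (rng.random(400) < 1 / (1 + np.exp(4 - 0.5 * np.maximum(0, wmc - 10)))).astype(int)

    knots = np.quantile(wmc, [0.25, 0.5, 0.75])                  # candidate knots
    basis = pd.concat([wmc, hinge_basis(wmc, knots)], axis=1)    # linear term + hinges
    model = sm.Logit(y, sm.add_constant(basis)).fit(disp=0)      # step (2): refit with logistic regression
    print(model.summary())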
3.4.3 Multivariate Outliers
Just as univariate analysis results are susceptible to univariate outliers, multivariate models can be strongly influenced by the absence or presence of individual observations. Our set of n independent variables spans an n-dimensional sample space. To identify multivariate outliers in this sample space, we calculate, for each data point, the Mahalanobis Jackknife [81] distance from the sample space centroid. The Mahalanobis distance is a measure that takes correlations between measures into account. Multivariate outliers are data points with a large distance from the sample space centroid. Again, a multivariate outlier may be overinfluential and therefore be removed, if the significance of any of the n variables in the model depends on the absence or presence of the outlier. A subtle point occurs when dismissing an outlier causes one or more covariates in the model resulting from a stepwise selection heuristic to become insignificant. In that case, our strategy is to rerun the stepwise selection heuristic from scratch, excluding the outlier from the beginning. More detailed information on outlier analysis can be found in [76].
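A minimal sketch of this check in Python (the data are hypothetical, and the plain Mahalanobis distance from the full-sample centroid is used rather than the jackknifed variant, which would leave each candidate observation out when estimating the centroid and covariance matrix):

    import numpy as np

    rng = np.random.default_rng(6)
    X = rng.multivariate_normal([5, 3], [[4, 2], [2, 3]], size=100)   # n classes x 2 measures
    X[0] = [25, -10]                                                  # inject a multivariate outlier

    center = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - center
    # Squared Mahalanobis distance of each observation from the sample centroid.
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

    # Flag the observations with the largest distances for an influence test.
    print("Most distant observations:", np.argsort(d2)[::-1][:5])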
3.4.4 Test for Multicollinearity

Multivariate models should be tested for multicollinearity. In severe cases, multicollinearity results in inflated standard errors for the estimated coefficients,
which renders predicted values of the model unreliable. The presence of multicollinearity also makes the interpretation of the model difficult, as the impact of individual covariates on the dependent variable can no longer be judged independently of other covariates. According to [74], tests for multicollinearity used in least-squares regression are also applicable in the context of logistic regression. They recommend the test suggested by Belsley et al. [78], which is based on the conditional number of the correlation matrix of the covariates in the model. This conditional number can conveniently be defined in terms of the eigenvalues of the principal components introduced in Section 3.2. Let X_1, ..., X_n be the covariates of our model. We perform a principal component analysis on these variables, and let l_max be the largest and l_min the smallest eigenvalue of the principal components. The conditional number is then defined as λ = sqrt(l_max / l_min). A large conditional number (i.e., a large discrepancy between the minimum and maximum eigenvalues) indicates the presence of multicollinearity. A series of experiments showed that the degree of multicollinearity is harmful, and corrective actions should be taken, when the conditional number exceeds 30 [78].
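This test is straightforward to script; a minimal sketch in Python (the covariate values are hypothetical):

    import numpy as np

    rng = np.random.default_rng(7)
    x1 = rng.normal(size=200)
    x2 = 0.95 * x1 + 0.05 * rng.normal(size=200)     # nearly collinear with x1
    x3 = rng.normal(size=200)
    covariates = np.column_stack([x1, x2, x3])

    # Eigenvalues of the correlation matrix of the covariates in the model.
    eigenvalues = np.linalg.eigvalsh(np.corrcoef(covariates, rowvar=False))
    conditional_number = np.sqrt(eigenvalues.max() / eigenvalues.min())
    print(f"Conditional number: {conditional_number:.1f}",
          "-> corrective action advised" if conditional_number > 30 else "-> acceptable")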
3.4.5 Evaluating Goodness of Fit

The purpose of building multivariate models is to predict the DV as accurately as possible. Different regression techniques provide specific measures of a model's goodness-of-fit, for instance, R² for OLS, or likelihood-based measures for techniques estimated by maximum likelihood, such as logistic regression. While these allow, to some degree, for comparison of accuracy between studies, such measures are abstract mathematical artifacts that do not illustrate very well the potential benefits of using the prediction model for decision-making. We provide below a quick summary of goodness-of-fit measures that users of prediction models tend to use to evaluate the practicality of using a model. There are two main cases that must be dealt with in practice: (1) classification (such as classifying components as fault-prone or not), and (2) predicting a continuous DV on an interval or ratio scale. We will use an example of each category to illustrate practical measures of goodness-of-fit.

3.4.5.1 Classifying fault-proneness. To evaluate the model's goodness-of-fit, we can apply the prediction model to the classes of the data set from which we derived the model. (This is, of course, an optimistic way to assess a model; this is why the term goodness-of-fit is used, as opposed to predictive power. This issue will be addressed in Section 3.5.) A class is classified fault-prone if its predicted
probability to contain a fault is higher than a certain threshold p0. Assume we use this prediction to select classes to undergo inspection. Further assume that inspections are 100% effective, i.e., all faults in a class are found during inspection. We then compare the predicted fault-proneness of classes to their actual fault-proneness, and use the following measures of the goodness-of-fit of the prediction model:
• Completeness: Completeness, in this context, is defined as the number of faults in classes classified as fault-prone, divided by the total number of faults in the system. It is a measure of the percentage of faults that would have been found if we used the prediction model to drive inspections. Low completeness indicates that, despite the use of the classification model, many faults are not detected. These faults would then slip to subsequent development phases, where they are more expensive to correct. We can always increase the completeness of our prediction model by lowering the threshold p0 used to classify classes as fault-prone (π > p0). This causes more classes to be classified as fault-prone; thus completeness increases. However, the number of classes incorrectly classified as fault-prone also increases. It is therefore important to consider the correctness of the prediction model.
• Correctness: Correctness is the number of classes correctly classified as fault-prone, divided by the total number of classes classified as fault-prone. Low correctness means that a high percentage of the classes classified as fault-prone do not actually contain a fault. We want correctness to be high, as inspection of classes that do not contain faults is an inefficient use of resources.

These definitions of completeness and correctness have straightforward, practical interpretations. They can be used in other application contexts where a classification model is required. A drawback of these measures is that they depend on a particular classification threshold. The choice of threshold is system-dependent and, to a large degree, arbitrary. To achieve comparability between studies and models, we can, however, employ a consistent strategy for threshold selection, such as using the prior probability (proportion) of fault-prone classes, or selecting the threshold p0 so as to balance the number of actual faulty and predicted fault-prone classes. Plotting the correctness and completeness curves as a function of the selected threshold p0 is also a good, common practice [29], as shown in Fig. 1. As an example, we show in Table IV the fault-proneness classification results from a model (the "linear" logistic regression model) built in [31]. The model identifies 19 out of 144 classes as fault-prone (i.e., 13% of all classes). Of these, 14 actually are faulty (74% correctness), and these contain 82 out of 132 faults (62% completeness).
FIG. 1. Correctness/completeness graph (for "linear" model in [31]).

TABLE IV
FAULT-PRONENESS CLASSIFICATION RESULTS (LINEAR MODEL IN [31])

                            Predicted
                            π < 0.5           π ≥ 0.5           Σ
  Actual   No fault         108               5                 113
           Fault            17 (50 faults)    14 (82 faults)    31 (132 faults)
           Σ                125               19                144
The above figures are based on a cutoff value of π = 0.5 for predicting fault-prone/not fault-prone classes, and the table only gives a partial picture, as other cutoff values are possible. Figure 1 shows the correctness and completeness numbers (vertical axis) as a function of the threshold π (horizontal axis). Standard measures of goodness-of-fit used in the context of logistic regression models are sensitivity, specificity, and the area under the receiver-operator curve (ROC) [82]. Sensitivity is the fraction of observed positive outcome cases correctly classified (i.e., the fraction of faulty classes correctly classified fault-prone, which is similar to completeness as defined above). Specificity is the fraction of observed negative outcome cases correctly classified (i.e., the fraction of nonfaulty classes correctly classified not fault-prone). Calculating sensitivity and specificity also requires the selection of a particular threshold p. The receiver-operator curve is a graph of sensitivity versus 1 - specificity as the threshold p is varied. The area under the ROC is a common measure of the goodness-of-fit of the model: a large area under the ROC indicates that high values for both
sensitivity and specificity can be achieved. The advantage of the area under the ROC is that this measure does not necessitate the selection of a particular threshold. The drawback is that its interpretation (the probability that a randomly selected faulty class has a predicted fault-proneness higher than that of a randomly selected nonfaulty class) does not translate immediately into the context of a practical application of the model.
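These measures are easy to compute once predicted probabilities are available; the sketch below (Python; the per-class fault counts and predicted probabilities are hypothetical) implements the definitions given above for a chosen threshold p0:

    import numpy as np

    def classification_measures(faults, predicted_prob, p0=0.5):
        """Completeness, correctness, sensitivity, and specificity at threshold p0."""
        faults = np.asarray(faults)                  # actual number of faults per class
        predicted = np.asarray(predicted_prob) > p0  # classes classified fault-prone
        actual = faults > 0                          # classes that actually contain faults

        completeness = faults[predicted].sum() / faults.sum()
        correctness = (predicted & actual).sum() / max(predicted.sum(), 1)
        sensitivity = (predicted & actual).sum() / actual.sum()
        specificity = (~predicted & ~actual).sum() / (~actual).sum()
        return completeness, correctness, sensitivity, specificity

    faults = np.array([0, 0, 3, 0, 1, 0, 5, 0, 0, 2])
    prob = np.array([0.1, 0.4, 0.8, 0.2, 0.3, 0.1, 0.9, 0.6, 0.2, 0.7])
    print(classification_measures(faults, prob, p0=0.5))

Plotting the same quantities over a range of p0 values yields curves such as those in Fig. 1; the area under the ROC could be obtained from the same inputs with a library routine such as scikit-learn's roc_auc_score.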
3.4.5.2 Predicting development effort. We use development effort here as an example of the prediction of continuous, interval/ratio scale DVs. In the area of effort estimation, the most commonly used measures of prediction accuracy are the absolute relative error (ARE) and the magnitude of relative error (MRE) of the effort prediction. If eff is the actual effort (e.g., for a class or system) and êff the predicted effort, then ARE = |eff - êff| and MRE = |eff - êff| / eff. The percentage (or absolute value in terms of person-hours) by which a predicted effort is off, on average, is immediately apparent to a practitioner and can be used to decide whether the model can be of any practical help. ARE and MRE measures can readily be used in contexts other than effort estimation.
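A short illustration of these definitions (Python; the effort values are hypothetical, in person-hours):

    import numpy as np

    actual = np.array([120.0, 40.0, 75.0])       # actual effort per class (person-hours)
    predicted = np.array([100.0, 55.0, 70.0])    # predicted effort per class

    are = np.abs(actual - predicted)             # absolute relative error, per the definition above
    mre = np.abs(actual - predicted) / actual    # magnitude of relative error
    print("mean ARE:", are.mean(), "mean MRE:", mre.mean())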
3.4.6 The Impact of Design Size

The size of an artifact (e.g., a class design) is a necessary part of any model predicting a property (e.g., fault-proneness) of this artifact. This is mostly justified by the fact that size determines, to some extent, many of its external properties, such as fault-proneness or effort. On the one hand, we want our predictive models to account for size. However, in many cases, e.g., in the case of fault-proneness models, and for practical reasons, we need them to capture more than size effects. Using again our inspection example, a model that systematically identifies larger classes as more fault-prone would a priori be less useful: the predicted fault-prone classes would be likely to cover a larger part of the system, and the model could not help focus inspection and testing efforts very well. In our studies [29,30,33], we compare (1) models built from size measures only and (2) models allowing all measures (size, coupling, cohesion, inheritance) to enter the model. With these models, we seek answers to the following questions:
• Are coupling, cohesion, and inheritance (CCI) measures complementary predictors of the DV as compared to size measures alone?
• How much more accurate is a model that includes the more difficult-to-collect coupling, cohesion, and inheritance measures? (Such measures usually require the use of complex static analyzers.) If it is not
significantly better, then the additional effort of calculating these more expensive measures instead of some easily collected size measures would not be justified.

When the DV is class fault-proneness and the measures are collected based on design information, the results [30,31] so far have shown that:
• Models including CCI measures clearly outperform models based on size measures only. Even though they may be related to size, CCI measures therefore capture information related to fault-proneness that cannot be explained by size alone.
• There is no significant difference between models based on CCI measures only and models based on both CCI and size measures. This indicates that all size aspects that have a bearing on fault-proneness are also accounted for by the set of CCI measures investigated. In other words, the CCI measures are not just complementary to the size measures; they subsume them.

When the DV is effort [33], however, it appears that size accounts for most of the variation in effort, and more sophisticated CCI measures do not help to substantially improve the model's predictive capability. In the model building strategy proposed by El Emam et al. [36], a size measure is forced on the predictive model by default. Measures that are confounded with size are not considered for inclusion in the model. This is an alternative strategy, and which one to use depends on the purpose. If you want to build an optimal prediction model and determine which measures are useful predictors, then the procedure we outlined above is fine. If your goal is to demonstrate that a given measure is related to fault-proneness, or any other DV, and that this relationship cannot be explained by size effects, then the procedure in [36] is appropriate.
3.5 Prediction Model Evaluation

We discussed above the notion of goodness-of-fit and practical ways to measure it. However, although such measures are useful for comparing models built on a given data set, they present two limitations:
• They are optimistic, since we must expect the model's predictive accuracy to deteriorate when it is applied to data sets different from the one it was built on.
• They still do not provide information that can be used directly to assess whether a model can be useful in given circumstances.

These two issues are addressed by the next two subsections.
3.5.1 Cross-Validation
One of the commonly encountered problems in software engineering is that our data sets are usually of limited size, i.e., a few hundred observations when we are lucky. Dividing the available data into a modeling set and a test set is usually difficult, as it implies that either the test set is going to be too small to obtain representative and reliable results, or the modeling set is going to be too small to build a refined predictive model. One reasonable compromise is to use a cross-validation procedure. To get an impression of how well the model performs when applied to different data sets, i.e., its prediction accuracy, a cross-validation should be carried out. Depending on the availability and size of the data set, various cross-validation techniques can be used:
• V-cross-validation [63] is what we used in our studies [29,30,33]. For V-cross-validation, the n data points of each data set are randomly split into V partitions of roughly equal size (n/V). For each partition, we refit the model using all data points not included in the partition, and then apply the resulting model to the data points in the partition. We thus obtain for all n data points a predicted probability of their fault-proneness (or predicted development effort).
• Leave-one-out cross-validation, a special case of V-cross-validation where each partition contains a single data point, is used for very small data sets.
• For larger data sets, one can randomly partition the data set into a fit/modeling data partition (usually 2/3 of all observations) used to fit the model and a test data partition (all remaining observations).

The ideal situation is where separate data sets, derived from different systems stemming from similar environments, are available. The prediction model is built from one system and used in turn to make predictions for the other system. This is the most effective demonstration of the practical use of a prediction model. Typically, models are built on past systems and used to predict properties of new systems or their components. System factors may affect the predictive power of a model and, therefore, it is important to validate the model under conditions that resemble as closely as possible its usage conditions. Reference [31] reports on such a study where the authors introduce a cost-effectiveness model for fault-proneness models. This is described further in the next section.
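A minimal sketch of V-cross-validation for a fault-proneness model (Python with scikit-learn, V = 10; the design measures and fault data are hypothetical, and a plain logistic regression stands in for whatever modeling technique was actually selected) obtains a predicted probability for every class from a model fitted without it:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(8)
    X = rng.poisson(5, size=(200, 4)).astype(float)            # design measures
    y = (rng.random(200) < 1 / (1 + np.exp(2 - 0.3 * X[:, 1]))).astype(int)

    predicted = np.empty(len(y))
    for fit_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
        model = LogisticRegression(max_iter=1000).fit(X[fit_idx], y[fit_idx])
        # Predicted fault-proneness for the held-out partition only.
        predicted[test_idx] = model.predict_proba(X[test_idx])[:, 1]

    # These out-of-sample probabilities can now be fed into the completeness/
    # correctness or cost-benefit evaluations described in this section.
    print(predicted[:10].round(2))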
3.5.2 Cost-Benefit Model for Class Fault-Proneness Prediction

Goodness-of-fit or predictive power does not give the potential users of a model a direct means to assess whether the model can be practically useful to them.
We need to develop cost-benefit models that are based on realistic assumptions and that use parameters that can be either measured or estimated. Although it is difficult to further specify general rules to build such models in our context, we will use an example to illustrate the principles to follow: how can we determine whether a fault-proneness model would be economically viable if used to drive inspections?

The first step is to identify all the parameters that the model will be based on. At the same time, list all the assumptions on which the model will be based regarding these parameters. Such assumptions are usually necessary to help simplify the cost-benefit model. Some of these assumptions will inevitably be specific to an environment and can very well be unrealistic in others. What we present here is based on a study reported in [31]:
• All classes predicted as fault-prone are inspected.
• Usually, an inspection does not find all faults in a class. We assume an average inspection effectiveness e, 0 ≤ e ≤ 1, where e = 1 means that all faults in inspected classes are detected.
• Faults not discovered during inspection (faults that slipped through, faults in classes not inspected) later cause costs for isolating and fixing them. The average cost of a fault when not detected during inspection is fc.
• The cost of inspecting a class is assumed to be proportional to the size of the class.

In general, in order to estimate the benefits of a model, we need a comparison baseline that represents what could be achieved without the use of the model. In our example, we assume a simple model that ranks the classes by their size and selects the n largest classes for inspection. The number n is chosen so that the total size of the selected classes is roughly the same as the total size of the classes selected by the fault-proneness model based on design (size and CCI) measures. It is thus ensured that we compare models where the investment, the cost of inspections, is the same or similar and can be factored out.

For the specification of the model, we need some additional definitions. Let c_1, ..., c_N denote the N classes in the system. For i = 1, ..., N, let
• f_i be the number of actual faults in class i,
• p_i indicate whether class i is predicted fault-prone by the model, i.e., p_i = 1 if class i is predicted fault-prone, p_i = 0 otherwise, and
• s_i denote the size of class i (measured in terms of the number of methods, although other measures of size are possible). The inspection cost is ic · s_i, where ic is the cost of inspecting one size unit.
The next step is to quantify the gains and losses due to using the model. In our example, they are all expressed below in effort units, i.e., the effort saved and the effort incurred, assuming inspections are performed on code.

Gain (effort saved):

    g_m = defects covered and found
    g_m = e · fc · Σ_i (f_i · p_i)

Cost (effort incurred):

    c_m = direct inspection cost + defects not covered + defects that escape
    c_m = ic · Σ_i (s_i · p_i) + fc · Σ_i (f_i · (1 - p_i)) + (1 - e) · fc · Σ_i (f_i · p_i)

In the same way, we express the cost and gain of using the size-ranking model (baseline) to select the n largest classes, so that their cumulative size is equal or close to Σ_i (s_i · p_i), the size of the classes selected by the predictive model. (We may not be able to get the exact same size, but we should be sufficiently close so that we can perform the forthcoming simplifications. This is usually not difficult, as the sizes of the classes composing a system usually represent a small percentage of the system size. In practice, we can therefore make such an approximation and find an adequate set of n largest classes.) For i = 1, ..., N, let p'_i = 1 if class i is among those n largest classes, and p'_i = 0 otherwise:

    g_s = e · fc · Σ_i (f_i · p'_i)
    c_s = ic · Σ_i (s_i · p'_i) + fc · Σ_i (f_i · (1 - p'_i)) + (1 - e) · fc · Σ_i (f_i · p'_i)

We now want to assess the difference in cost and gain when using the fault-proneness model rather than the size-ranking model, which is our comparison baseline:

    Δgain = g_m - g_s = e · fc · (Σ_i (f_i · p_i) - Σ_i (f_i · p'_i))
    Δcost = c_m - c_s = ic · (Σ_i (s_i · p_i) - Σ_i (s_i · p'_i))
                        + fc · (Σ_i (f_i · (1 - p_i)) - Σ_i (f_i · (1 - p'_i)))
                        + (1 - e) · fc · (Σ_i (f_i · p_i) - Σ_i (f_i · p'_i))

We select n, and therefore p', so that Σ_i (s_i · p_i) - Σ_i (s_i · p'_i) ≈ 0 (the inspected classes are of roughly equal total size in both situations). We can thus, as an approximation, drop the first term from the Δcost equation. This also eliminates the inspection cost ic from the equation, and with it the need to make assumptions about the ratio of fc to ic for calculating values of Δcost. With this simplification, we have

    Δcost = fc · (Σ_i (f_i · (1 - p_i)) - Σ_i (f_i · (1 - p'_i))) + (1 - e) · fc · (Σ_i (f_i · p_i) - Σ_i (f_i · p'_i)).
By doing the multiplications and adding summands, it is easily shown that

    Δcost = -e · fc · (Σ_i (f_i · p_i) - Σ_i (f_i · p'_i)) = -Δgain.

The benefit of using the prediction model to select classes for inspection, instead of selecting them according to their size, is

    benefit = Δgain - Δcost = 2Δgain = 2 · e · fc · (Σ_i (f_i · p_i) - Σ_i (f_i · p'_i)).

Thus, the benefit of using the fault-proneness model is proportional to the number of faults it detects above what the size-based model can find (if inspection effort is about equal to that of the baseline model, as is the case here). The factor 2 arises because the difference between not finding a fault and having to pay fc, and finding a fault and not having to pay fc, is 2fc.

Once such a model is developed, the parameters e and fc are estimated in a given environment, and we can determine, for a given range of e values, the benefits (in effort units) of using a fault-proneness model as a function of fc, the cost of a defect slipping through inspections. Based on such information, one may decide whether using a predictive model for driving inspections can bring practical benefits. As an example, Fig. 2 shows the benefit graph for two models, referred to as the "linear" and "MARS" models. The benefit of using the linear or MARS model to select classes for inspection, over a simple size-based selection of classes, is plotted as a function of the number n of classes selected for inspection. The benefit is expressed in multiples of fc, assuming an inspection effectiveness e = 80%.

FIG. 2. Benefit graph for linear (thin line) and MARS (thick line) models from [31]; benefit (in multiples of fc) versus the number of classes inspected (0 to 140).

Besides the economic viability of a model, such a figure effectively demonstrates
the advantages of one model over the other. It also helps to identify ranges for the number of selected classes, n, at which the model usage has its greatest payoff. To decide whether a prediction model is worth using, some additional costs and constraints may also be accounted for, such as, in our example, the cost of deploying the model: automation and training.
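The computation itself is a few lines of code once fault counts, class sizes, and model predictions are available. The sketch below (Python; all inputs are hypothetical) constructs the size-ranking baseline p' by selecting the largest classes up to roughly the same cumulative size as the model's selection, and then evaluates benefit = 2 · e · fc · (Σ f_i·p_i - Σ f_i·p'_i):

    import numpy as np

    rng = np.random.default_rng(9)
    n = 50
    size = rng.integers(5, 60, n).astype(float)        # s_i: class size (e.g., number of methods)
    faults = rng.poisson(0.4, n)                       # f_i: actual faults per class
    p = (rng.random(n) < 0.25).astype(int)             # p_i: classes predicted fault-prone

    # Baseline p': largest classes first, until their cumulative size reaches
    # (approximately) the size of the classes selected by the model.
    budget = (size * p).sum()
    p_baseline = np.zeros(n, dtype=int)
    for i in np.argsort(size)[::-1]:
        if (size * p_baseline).sum() + size[i] > budget:
            break
        p_baseline[i] = 1

    e, fc = 0.8, 10.0                                  # inspection effectiveness, cost per escaped fault
    delta_gain = e * fc * ((faults * p).sum() - (faults * p_baseline).sum())
    benefit = 2 * delta_gain
    print(f"Faults covered: model={int((faults * p).sum())}, baseline={int((faults * p_baseline).sum())}")
    print(f"Benefit (effort units): {benefit:.1f}")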
4. Summary of Results
This section presents a summary of the empirical results reported in the studies of Section 2. It attempts to identify consistent trends across the results reported and discuss inconsistencies when they arise.
4.1 Correlational Studies
We focus here on correlational studies where the dependent variable is related to some measure of fault-proneness. The reason is that this is the only dependent variable for which a large enough number of studies exists and hence a cross-examination of results is possible.
4.1.1 Univariate Analysis of Fault-Proneness
Tables V to VIII show the results from univariate analysis in studies using fault-proneness or the number of faults as a dependent variable, for size, coupling, cohesion, and inheritance measures, respectively. The table for size measures also includes measures presented as "complexity" measures, which in practice are often strongly correlated to simple size measures. The inheritance measures capture various properties of inheritance, such as its depth or level of overloading. Each column provides the results for one study, each row the results for one measure. The aim is to facilitate comparison of results where the same measure is investigated in several studies. In each table, the row "Tech." indicates the modeling technique used in each study, which is either univariate logistic regression (denoted LR) or Spearman ρ (denoted rho). For studies that investigate more than one system, the row "System" identifies the name of the system each column pertains to. In the body of the table, the semantics of the entries is as follows:
• ++: measure has a positive significant relationship at the 0.01 level
• +: measure has a positive significant relationship at the 0.05 level
• O: measure has no significant relationship at the 0.05 level
• -: measure has a negative significant relationship at the 0.05 level
• --: measure has a negative significant relationship at the 0.01 level
• na: the measure was considered, but showed no or little variation (measure not significant and not meaningful in the respective system).

Although it may be desirable to provide more detailed results (e.g., regression coefficients, exact p-values) in the table, we chose this more compact summary for the following reasons:
• In isolation, the magnitude of a regression coefficient is not meaningful, since this depends on the range and variation of the measure in the data sample.
• These coefficients are not applicable in other environments and not really useful to report here.

TABLE V
SUMMARY OF UNIVARIATE ANALYSIS RESULTS FOR SIZE MEASURES
(Studies [29], [30], [31], [43], [39], [36], [37], [56], and [57]; techniques LR and Spearman rho; where a study covers several systems, columns are labeled by system, e.g., CCM, EFT and A, B, C. Measures covered: ATTRIB, STATES, EVNT, READS, WRITES, DELS, RWD, LOC, LOC_B, LOC_H, WMC-ss, WMC-1/NMA/NMImp, WMC-CC, NOMA, AMC, Stmts, NAImp, NMpub, NMNpub, NumPara, NAInh, NMInh, TotPrivAtrib, TotMethod.)
to smooth out random noise. Communication, storage, and processing measures and compares bits along the way to detect and correct small errors before they accumulate to cause incorrect results, distorted messages, or even system crashes. However, measuring a quantum mechanical system causes the quantum system to change. An important breakthrough came in 1996 when Andrew Steane, and independently, Richard Calderbank and Peter Shor, discovered methods of encoding quantum bits, or "qubits," and measuring group properties so that small errors can be corrected. These ingenious methods use collective measurement to identify characteristics of a group of qubits, for example, parity. Thus, it is conceivable to compensate for an error in a single qubit while preserving the information encoded in the collective quantum state. Although a lot of research and engineering remain, today we see no theoretical obstacles to quantum computation and quantum communication. In this article, we review quantum computing and communications, current status, algorithms, and problems that remain to be solved. Section 2 gives the reader a narrative tutorial on quantum effects and major theorems of quantum mechanics. Section 3 presents the "Dirac" or "ket" notation for quantum mechanics and mathematically restates many of the examples and results of the preceding section. Section 4 goes into more of the details of how a quantum computer might be built and explains some quantum computing algorithms, such as Shor's for factoring, Deutsch's for function characterization, and Grover's for searching, and error correcting schemes. Section 5 treats quantum communication and cryptography. We end with an overview of physical implementations in Section 6.
2. The Surprising Quantum World
Subatomic particles act very differently from things in the everyday world. Particles can have a presence in several places at once. Also, two well-separated particles may have intertwined fates, and the observation of one of the particles will cause this remarkable behavior to vanish. Quantum mechanics describes these and other physical phenomena extraordinarily well. We begin with a simple experiment that you can do for a few dollars' worth of equipment. Begin with a beam of light passing through a polarizer, as in Fig. 1. A typical beam, such as from the sun or a flashlight, has its intensity reduced by half. Suppose we add another polarizer after the first. As we rotate the polarizer, the beam brightens and dims until it is gone (real polarizers are not perfect, of course, so a little light always passes), as depicted in Fig. 2.
FIG. 1. Polarizer dims beam by half.
FIG. 2. Two orthogonal polarizers extinguish the beam.
Leaving the two polarizers at the minimum, add a third polarizer between them, as shown in Fig. 3. As we rotate it, we can get some light to pass through! How can adding another filter increase the light getting through? Although it takes extensive and elaborate experiments to prove that the following explanation is accurate, we assure you it is. Classical interpretations of these results are misleading at best. To begin the explanation, photons have a characteristic called "polarization." After passing through polarizer #1, all the photons of the light beam are polarized in the same direction as the polarizer. If a polarizer is set at right angles to polarizer #1, the chance of a photon getting through both polarizers is 0, that is, no light gets through. However, when the polarizer in the middle is diagonal to polarizer #1, half the photons pass through the first two polarizers. More importantly, the photons are now oriented diagonally. Half the diagonally oriented photons can now pass through the final polarizer. Because of their relative orientations, each polarizer lets half the photons through, so a total of 1/8 passes through all three polarizers.
2.1 Sidebar: Doing the Polarization Experiment Yourself
You can do the polarization experiment at home with commonly available materials costing a few dollars. You need a bright beam of light. This could be sunlight shining through a hole, a flashlight, or a laser pointer.
FIG. 3. A third polarizer can partially restore the beam!
For polarizers you can use the lenses from a pair of polarizing sunglasses. You can tell if sunglasses are polarizing by holding two pairs, one behind the other, and looking through the left (or right) lenses in series. Rotate one pair of sunglasses relative to the other while keeping the lenses in line. If the scene viewed through the lenses darkens and lightens as one pair is rotated, they are polarizing. You can also buy gray light-polarizing glasses or plastic sheets on the World Wide Web. Carefully free the lenses. One lens can be rigidly attached to a support, but the others must be able to rotate. Shine the light through polarizer #1. Put polarizer #2 in the beam well after polarizer #1. Rotate #2 until the least amount of light comes through. Now put polarizer #3 between #1 and #2. Rotate it until the final beam of light is its brightest. By trying different combinations of lenses and rotations, you can verify that the lenses are at 45° and 90° angles from each other.
2.2 Returning to the Subject at Hand
After we develop the mathematics, we will return to this example in Section 3.3 and show how the results can be derived. The mathematical tool we use is quantum mechanics. Quantum mechanics describes the interactions of electrons, photons, neutrons, etc. at atomic and subatomic scales. It does not explain general relativity, however. Quantum mechanics makes predictions on the atomic and subatomic scale that are found to be extremely accurate and precise. Experiments support this theory to better accuracy than any other physical theory in the history of science. The effects we see at the quantum level are very different from those we see in the everyday world. So, it should not come as a surprise that a different mathematics is used. This section presents fundamental quantum effects and describes some useful laws that follow from them.
2.3 The Four Postulates of Quantum Mechanics
Quantum mechanics is mathematically very well defined and is a framework for defining physical systems. This powerful framework defines what may and may not happen in quantum mechanical systems. Quantum mechanics itself does not give the details of any one particular physical system. Some analogies may help. Algebraic groups have well-defined properties, such as that operations are closed. Yet, the definition of a group does not detail the group of rotations in 3-space or addition on the integers. Likewise, the rules for a role-playing game limit what is and is not allowed, but don't describe individuals or scenarios. Quantum mechanics consists of four postulates [3, pp. 80-94].
Postulate 1. Any isolated quantum system can be completely mathematically characterized by a state vector in a Hilbert space. A Hilbert space is a complex vector space with an inner product. Experiments show there is no need for other descriptions, since all the interactions, such as momentum transfer, electric fields, and spin conservation, can be included within the framework. The postulates of quantum mechanics, by themselves, do not tell us what the appropriate Hilbert space is for a particular system. Rather, physicists work long and hard to determine the best approximate model for their system. Given this model, their experimental results can be described by a vector in this appropriate Hilbert space. The notation we will explain in Section 3 cannot express all possible situations, such as if we wish to track our incomplete knowledge of a physical system, but suffices for this paper. There are more elaborate mathematical schemes that can represent as much quantum information as we need.

Postulate 2. The time evolution of an isolated quantum system is described by a unitary transformation. Physicists use the term "time evolution" to express that the state of a system is changing solely due to the passage of time; for instance, particles are moving or interacting. If the quantum system is completely isolated from losses to the environment or influences from outside the system, any evolution can be captured by a unitary matrix expressing a transformation on the state vector. Again, pure quantum mechanics doesn't tell us what the transformation is, but provides the framework into which experimental results must fit. The corollary is that isolated quantum systems are reversible.

Postulate 3. Only certain sets of measurements can be done at any one time. Measuring projects the state vector of the system onto a new state vector. This is the so-called collapse of the system. From a mathematical description of the set of measurements, one can determine the probability of a state yielding each of the measurement outcomes. One powerful result is that arbitrary quantum states cannot be measured with arbitrary accuracy. No matter how delicately done, the very first measurement forever alters the state of the system. We discuss this in more detail in Section 2.6. The measurements in a set, called a "basis," are a description of what can be observed. Often quantum systems can be described with many different, but related, bases. Analogously, positions in the geometric plane may be given as pairs of distances from the origin along orthogonal, or perpendicular, axes, such as X and Y. However, positions may also be given as pairs of distances along the diagonal lines X = Y and X = -Y, which form an equally valid set of orthogonal axes. A simple rotation transforms between coordinates in either basis.
Polar coordinates provide yet another alternative set of coordinates. Although it may be easier to work with one basis or another, it is misleading to think that the coordinates in one basis are the coordinates of a position, to the exclusion of others.

Postulate 4. The state space of a composite system is the tensor product of the state spaces of the constituent systems. Herein lies a remarkable opportunity for quantum computing. In the everyday world, the composite state space is the Cartesian product of the constituent spaces. However, quantum mechanical systems can become very complicated very fast. The negative view is to realize how much classical computation we need to simulate even simple systems of, say, 10 particles. The positive view is to wonder if this enormously rich state space might be harnessed for very powerful computations.
2.4 Superposition
As can be seen from the polarization experiment above, very tiny entities may behave very differently from macroscopic things. An everyday solid object has a definite position, velocity, etc., but at the quantum scale, particle characteristics are best described as blends or superpositions of base values. When measured, we get a definite value. However, between measurement events, any consistent mathematical model must allow for the potential or amplitude of several states at once. Another example may provide a more intuitive grasp.
2.4.1 Young's Double-Slit Experiment
In 1801, using only a candle for a light source, Thomas Young performed an experiment whose results can only be explained if light acts as a wave. Young shined the light through two parallel slits onto a surface, as shown in Fig. 4, and saw a pattern of light and dark bands. The wavy line on the right graphs the result; light intensity is the horizontal axis, increasing to the right. This is the well-known interference effect: waves, which cancel and reinforce each other, produce this pattern.

FIG. 4. Young's double-slit experiment.

Imagine, in contrast, a paintball gun pointing at a wall in which two holes have been drilled, beyond which is a barrier, as shown in Fig. 5.

FIG. 5. Paintballs fired at a wall.

The holes are just big enough for a single paintball to get through, although the balls may ricochet from the sides of the holes. The gun shoots at random angles, so only a few of the paintballs get through. If one of the holes is covered up, the balls that get through will leave marks on the barrier, with most of the marks concentrated opposite the hole and others scattered in a bell curve (P1) to either side of the hole, as shown in the figure. If only the second hole is open, a similar pattern (P2) emerges
on the barrier immediately beyond the hole. If both holes are open, the patterns simply add. The paint spots are especially dense where the two patterns overlap, resulting in a bimodal distribution curve that combines P1 and P2. No interference is evident. What happens when electrons are fired at two small slits, as in Fig. 6?

FIG. 6. Double-slit experiment with electrons.

Surprisingly, they produce the same wave pattern of Fig. 4. That is, the probability of an electron hitting the barrier at a certain location varies in a pattern of alternating high and low, rather than a simple bimodal distribution. This occurs even when electrons are fired one at a time. Similar experiments have been done with atoms and even large molecules of carbon-60 ("buckyballs"), all demonstrating
wave-like behavior of matter. So something "wave-like" must be happening at small scales.
2.4.2 Explaining the Double-Slit Experiment
How do we explain these results? If a wave passes through the slits, we can expect interference, canceling or reinforcing, resulting in a pattern of light and dark lines. But how can individual electrons, atoms, or molecules, fired one at a time, create interference patterns? A desperate classical explanation might be that the particles split, with one part passing through each hole, but this is not the case: if detectors are placed at H1 or at H2 or in front of the barrier, only one particle is ever registered at a time. (Remarkably, if a detector is placed at H1 or H2, the pattern follows Fig. 5. More about this effect later.) The quantum mechanical explanation is that particles may be in a "superposition" of locations. That is, an electron is in a combination of state "at H1" and "at H2." An everyday solid object has a definite position, mass, electric charge, velocity, etc., but at the quantum scale, particle characteristics are best described as blends or superpositions of base values. When measured, we always get a definite value. However, between measurement events, any consistent mathematical model must potentially allow for an arbitrary superposition of many states. This behavior is contrary to everyday experience, of course, but thousands of experiments have verified this fact: a particle can be in a superposition of several states at the same time. When measured, the superposition collapses into a single state, losing any information about the state before measurement. The photons in
the beam-and-filters experiment are in a superposition of polarizations. When polarizer #1 tests the photon for vertical or horizontal polarization, either the photon emerges polarized vertically or it doesn't emerge. No information about prior states is maintained. It is not possible to determine whether it had been vertical, diagonal, or somewhere in between. Since vertical polarization is a superposition, or combination, of diagonal polarizations, some of the vertically polarized photons pass through the middle polarizer and emerge polarized diagonally. Half of the now-diagonally polarized photons will pass through the final, horizontal polarizer.
2.5 Randomness
In the beam-and-filters experiment, some photons randomly emerge polarized while others do not emerge at all. This unpredictability is not a lack of knowledge. It is not that we are missing some full understanding of the state of the photons. The random behavior is truly a part of nature. We cannot, even in principle, predict which of the photons will emerge. This intrinsic randomness may be exploited to generate cryptographic keys or events that are not predictable, but it also means that the unpredictability of some measurements is not merely an annoying anomaly to be reduced by better equipment, but an inherent property in quantum computation and information. Even though an individual measurement may be unpredictable, the statistical properties are well defined. Therefore, we may take advantage of the randomness or unpredictability in individual outcomes. We can make larger or more energetic systems that are more predictable, but then the quantum properties, which may be so useful, disappear, too.
2.6 Measurement
In quantum mechanics, measuring a system is not an objective, external activity; it is a significant step that changes the system. A measurement is always with regard to two or more base values. In the photon polarization experiment, the bases are orthogonal directions: vertical and horizontal, two diagonals, 15° and 105°, etc. The basis for other systems may be in terms of momentum, position, energy level, or other physical quantities. When a quantum system is measured, it collapses into one of the measurement bases. No information about previous superpositions remains. We cannot predict into which of the bases a system will collapse; however, given a known state of the system, we can predict the probability of collapsing into each basis state.
2.7 Entanglement
Even more surprising than superposition, quantum theory predicts that entities may have correlated fates. That is, the result of a measurement on one photon or atom leads instantaneously to a correlated result when an entangled photon or atom is measured. For a more intuitive grasp of what we mean by "correlated results," imagine that two coins could be entangled (there is no known way of doing this with coins, of course). Imagine one is tossing a coin. Careful records show it comes up "heads" about half the time and "tails" half the time, but any one result is unpredictable. Tossing another coin has similar, random results, but surprisingly, the records of the coin tosses show a correlation! When one coin comes up heads, the other coin comes up tails and vice versa. We say that the state of the two coins is entangled. Before the measurement (the toss), the outcome is unknown, but we know the outcomes will be correlated. As soon as either coin is tossed (measured), the fate of tossing the other coin is sealed. We cannot predict in advance what an individual coin will do, but their results will be correlated: once one is tossed, there is no uncertainty about the other. This imaginary coin tossing is only to give the reader a sense of entanglement. Although one might come up with a classical explanation for these results, multitudes of ingenious experiments have confirmed the existence of entanglement and ruled out any possible classical explanation. Over several decades, physicists have continually refined these experiments to remove loopholes in measurement accuracy or subtle assumptions. All have confirmed the predictions of quantum mechanics. With actual particles any measurement collapses uncertainty in the state. A real experiment would manufacture entangled particles, say by bringing particles together and entangling them or by creating them with entangled properties. For instance, we can "downconvert" one higher energy photon into two lower energy photons which leave in directions not entirely predictable. Careful experiments show that the directions are actually a superposition, not merely a random, unknown direction. However, since the momentum of the higher energy photon is conserved, the directions of the two lower energy photons are entangled. Measuring one causes both photons to collapse into one of the measurement bases. However, once entangled, the photons can be separated by any distance, at any two points in the universe; yet measuring one will result in a perfectly correlated measurement for the other. Even though measurement brings about a synchronous collapse regardless of the separation, entanglement doesn't let us transmit information. We cannot force the result of a measurement any more than we can force the outcome of tossing a fair coin (without interference).
2.8 Reversibility
Postulate 2 of quantum mechanics says that the evolution of an isolated system is reversible. In other words, any condition leading to an action also may bring about the reverse action in time-reversed circumstances. If we watch a movie of a frictionless pendulum, we cannot tell whether the movie is being shown backwards. In either case, the pendulum moves according to the laws of momentum and gravity. If a beam of photons is likely to move an electron from a lower to a higher energy state, the beam is also likely to move an electron from the higher energy state to the lower one. (In fact, this is the "stimulated emission" of a laser.) This invertible procession of events is referred to as "unitary evolution." To preserve superposition and entanglement, we must use unitary evolutions. An important consequence is that operations should be reversible. Any operation that loses information or is not reversible cannot be unitary, and may lose superposition and entanglement. Thus, to guarantee that a quantum computation step preserves superposition and entanglement, it must be reversible. Finding the conjunction of A AND B is not reversible: if the result is false, we do not know whether A was false, B was false, or both A and B were false. Thus a standard conjunction destroys superpositions and entanglements. However, suppose we set another bit, C, previously set to false, to the conjunction of A AND B, and keep the values of both A and B. This computation is reversible. Given any resulting state of A, B, and C, we can determine the state before the computation. Likewise all standard computations can be done reversibly, albeit with some extra bits. We revisit reversible computations in Section 4.1.1.
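To make the reversible-AND idea concrete, here is a small Python sketch of our own (the function name and framing are ours, not the chapter's): the extra bit C starts at 0 and is flipped exactly when A and B are both 1, and applying the same operation twice recovers the starting values, so no information is lost.

    def reversible_and(a, b, c):
        """Keep a and b; flip c exactly when a AND b is 1 (a Toffoli-style gate)."""
        return a, b, c ^ (a & b)

    # The gate is its own inverse, so no information is lost.
    for a in (0, 1):
        for b in (0, 1):
            state = reversible_and(a, b, 0)             # (a, b, a AND b)
            assert state[2] == (a & b)
            assert reversible_and(*state) == (a, b, 0)  # running it again restores the inputs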
2.9 The Exact Measurement "Theorem"
Although quantum mechanics seems strange, it is a very consistent theory. Seemingly reasonable operations are actually inconsistent with the theory as a whole. For instance, one might wish to harness entanglement for faster-than-light or even instantaneous communication. Unfortunately, any measurement or observation collapses the state. Also unfortunately, it is impossible to tell with local information whether the observation preceded or followed the collapse: the observation gives the same random result in either case. Communicating with the person holding the other entangled particle, to determine some correlation, can only be done classically, that is, no faster than the speed of light. So entanglement cannot be used to transmit information faster than light and violate relativity. If we could exactly measure the entire quantum state of a particle, we could determine whether it were in a superposition. Alice and Bob could begin with two pairs of particles; let us call them the "T" pair, T1 and T2, and the "F" pair, F1 and F2. They manipulate them so T1 and T2 are entangled with each other
and F1 and F2 are entangled with each other. Bob then takes T1 and F1 far away from Alice. If exact measurement were possible, Bob could continuously measure his particle T1 to see if it has collapsed into a definite state. To instantly communicate a "1," Alice observes her member of the "T" pair, T2, causing it to collapse. Because the "T" pair was entangled, Bob's particle, T1, simultaneously collapses into a definite state. Bob detects the collapse of T1, and writes down a "1." Similarly, a "0" bit could be transmitted instantly using the "F" pair if, indeed, exact measurement were possible. In fact, if we were able to exactly measure an unknown quantum state, it would lead to many inconsistencies.
2.10 The No-Cloning Theorem

One might be tempted to evade the impossibility of exact measurement by making many exact copies of particles and measuring the copies. If we could somehow manage to have an unlimited supply of exact copies, we could measure them and experimentally build up an exact picture of the quantum state of the original particle. However, the "No-Cloning Theorem" proves we cannot make an exact copy of an unknown quantum state. In Section 3.6 we prove a slightly simplified version of the theorem. What about setting up an apparatus, say with polarizers, laser beams, magnetic fields, etc., which produces an unlimited number of particles, all in the same quantum state? We could make unlimited measurements in various bases, and measure the state to arbitrary accuracy. Indeed, this is what experimental physicists do. But it is a measurement of the result of a process, not the measurement of a single, unknown state. Alternatively, if we could exactly measure an unknown quantum state, we could prepare as many particles as we wished in that state, effectively cloning. So the lack of exact measurement foils this attempt to clone, and the lack of cloning closes this route to measurement, maintaining the consistency of quantum mechanics.
3. The Mathematics of Quantum Mechanics
The ugly truth is that general relativity and quantum mechanics are not consistent. That is, our current formulations of general relativity and quantum mechanics give different predictions for extreme cases. We assume there is a "Theory of Everything" that reconciles the two, but it is still very much an area of thought and research. Since relativity is not needed in quantum computing, we ignore this problem. Let us emphasize that thousands of experiments that have been done throughout the world in the last 100 years are consistent with quantum mechanics.
We present a succinct notation and mathematics commonly used to formally express the notions of quantum mechanics. Although this formalization cannot express all the nuances, it is enough for this introductory article. More complete notations are given in various books on quantum mechanics.
3.1 Dirac or Ket Notation
We can represent the state of quantum systems in "Dirac" or "ket"^ notation. ("Ket" rhymes with "let.") A qubit is a quantum system with two discrete states. These two states can be expressed in ket notation as |0⟩ and |1⟩. An arbitrary quantum state is often written |Ψ⟩. State designations can be arbitrary symbols. For instance, we can refer to the polarization experiment in Section 2 using the bases |↑⟩ and |→⟩ for vertical and horizontal polarization and |↗⟩ and |↘⟩ for the two orthogonal diagonal polarizations. (Caution: although we use an up-arrow for vertical, "up" and "down" polarization are the same thing: they are both vertical polarization. Likewise be careful not to misinterpret the right or diagonal arrows.) A quantum system consisting of two or more quantum states is the tensor product of the separate states in some fixed order. Suppose we have two photons, P1 and P2, where P1 has the state |P1⟩, and P2 has the state |P2⟩. We can express the state of the joint system as |P1⟩ ⊗ |P2⟩, or we can express it as |P2⟩ ⊗ |P1⟩. The particular order doesn't matter as long as it is used consistently. For brevity, the tensor product operator is implicit between adjacent states. The above two-photon system is often written |P1 P2⟩. Since the order is typically implicit, the ket is usually written without indices, thus, |PP⟩. Ket "grouping" is associative; therefore a single ket may be written as multiple kets for clarity: |0⟩|0⟩|0⟩, |0⟩|00⟩, and |00⟩|0⟩ all mean |0₁0₂0₃⟩. Bases are written in the same notation using kets. For example, four orthogonal bases of a two-qubit system are |00⟩, |01⟩, |10⟩, and |11⟩. Formally, a ket is just a column vector.
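As a concrete illustration of kets as column vectors and of composite systems as tensor products, here is a brief numpy sketch of our own (the variable names are ours, not the chapter's):

    import numpy as np

    ket0 = np.array([[1.0], [0.0]])   # |0>
    ket1 = np.array([[0.0], [1.0]])   # |1>

    # The joint state |0>|1> is the tensor (Kronecker) product of the column vectors.
    ket01 = np.kron(ket0, ket1)
    print(ket01.ravel())              # [0. 1. 0. 0.] -- the basis vector |01>

    # Ket grouping is associative: (|0>|0>)|0> equals |0>(|0>|0>).
    assert np.array_equal(np.kron(np.kron(ket0, ket0), ket0),
                          np.kron(ket0, np.kron(ket0, ket0)))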
3.2 Superpositions and Measurements
Superpositions are written as a sum of states, each with an "amplitude," which may be a complex number. For instance, if an electron has a greater probability of going through the top slit in Fig. 6, its position might be √(1/4)|H1⟩ + √(3/4)|H2⟩. The polarization of a photon in an equal superposition of vertical and horizontal polarizations may be written as 1/√2|↑⟩ + 1/√2|→⟩. In general, a two-qubit system is in the state a|00⟩ + b|01⟩ + c|10⟩ + d|11⟩, where a, b, c, and d are complex amplitudes.

^The name comes from "bracket." P. A. M. Dirac developed a shorthand "bracket" notation to express the inner product of state vectors, ⟨Φ|Ψ⟩. In most cases the column vector, or right-hand side, can be used alone. Being the second half of a bracket, it is called a ket.
The norm squared of the amplitude of a state is the probability of measuring the system in that state. The general two-qubit system from above will be measured in state |00⟩ with probability |a|². Similarly, the system will be measured in states |01⟩, |10⟩, or |11⟩ with probabilities |b|², |c|², and |d|², respectively. Amplitudes must be used instead of probabilities to reflect quantum interference and other phenomena. Because a measurement always finds a system in one of the basis states, the probabilities sum to 1. (The requirement that they sum to 1 is a reflection of the basic conservation laws of physics.) Hence the sum of the norm squared amplitudes must always be 1, too. Amplitudes that nominally do not sum to 1 are understood to be multiplied by an appropriate scaling factor to "normalize" them so they do sum to 1.

A measurement collapses the system into one of the bases of the measurement. The probability of measuring the system in, or equivalently, collapsing the system into, any one particular basis state is the norm squared of its amplitude. Hence, for the location distribution √(1/4)|H1⟩ + √(3/4)|H2⟩, the probability of finding an electron at location H1 is |√(1/4)|² = 1/4, and the probability of finding an electron at H2 is |√(3/4)|² = 3/4. After measurement, the electron is either in the state |H1⟩, that is, at H1, or in the state |H2⟩, that is, at H2, and there is no hint that the electron ever had any probability of being anywhere else. If measurements are done at H1 or H2, the interference disappears, resulting in the simple bimodal distribution shown in Fig. 5.
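A short numeric sketch of our own (the names are illustrative, not the chapter's) of the rule that probabilities are the norm squared of the amplitudes, using the electron-location state above:

    import numpy as np

    state = np.array([np.sqrt(1/4), np.sqrt(3/4)])   # sqrt(1/4)|H1> + sqrt(3/4)|H2>
    probs = np.abs(state) ** 2
    print(probs)                                     # [0.25 0.75]
    assert np.isclose(probs.sum(), 1.0)              # the amplitudes are normalized

    # Simulating repeated measurements: each collapses to H1 or H2 at random,
    # with frequencies approaching 1/4 and 3/4.
    rng = np.random.default_rng(0)
    outcomes = rng.choice(["H1", "H2"], size=10_000, p=probs)
    print((outcomes == "H1").mean())                 # roughly 0.25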
3.3 The Polarization Experiment, Again
Just as geometric positions may be equally represented by different coordinate systems, quantum states may be expressed in different bases. A vertically polarized photon's state may be written as |↑⟩. It may just as well be written as a superposition of the two diagonal bases, 1/√2|↗⟩ + 1/√2|↘⟩. Likewise a diagonally polarized photon |↘⟩ may be viewed as being in a superposition of vertical and horizontal polarizations, 1/√2|↑⟩ + 1/√2|→⟩. If the polarization is |↗⟩, the superposition is 1/√2|↑⟩ - 1/√2|→⟩; note the sign change. In both cases, the amplitudes squared, |1/√2|² and |-1/√2|², still sum to 1.

We can now express the polarization experiment formally. The first polarizer "measures" in some basis, which we can call |↑⟩ and |→⟩. Regardless of previous polarization, the measurement leaves photons in either |↑⟩ or |→⟩, but only passes photons that are, say, |↑⟩. If the incoming beam is randomly polarized, half the photons collapse into, or are measured as, |↑⟩ and are passed, which agrees with the observation that the intensity is halved.
A second polarizer, tilted at an angle θ to the first, "measures" in a tilted basis |↗θ⟩ and |↘θ⟩. Photons in state |↑⟩ can also be considered to be in the superposition cos θ|↗θ⟩ + sin θ|↘θ⟩. The second polarizer measures photons in the tilted basis, and passes only those collapsing into |↗θ⟩. Since the chance of a photon collapsing into that state is cos²θ, the intensity of the resultant beam decreases to 0 as the polarizer is rotated to 90°. (To double-check consistency, note that the probability of seeing either state is cos²θ + sin²θ = 1.) With polarizer #2 set at a right angle, it measures with the same basis as polarizer #1, that is, |↑⟩ and |→⟩, but only passes photons with state |→⟩.

When polarizer #3 is inserted, it is rotated to a 45° angle. The vertically polarized, that is, |↑⟩, photons from polarizer #1 can be considered to be in the superposition cos 45°|↗45°⟩ + sin 45°|↘45°⟩ = 1/√2|↗⟩ + 1/√2|↘⟩. So they have a |1/√2|² = 1/2 chance of collapsing into state |↗⟩ and being passed. These photons encounter polarizer #2, where they can be considered to be in the superposition cos 45°|↑⟩ + sin 45°|→⟩ = 1/√2|↑⟩ + 1/√2|→⟩. So they again have a |1/√2|² = 1/2 chance of collapsing, now into state |→⟩, and being passed. Thus, the chance of an arbitrary photon passing through all three polarizers is 1/2 × 1/2 × 1/2 = 1/8, agreeing with our observation.
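The 1/8 result can be checked with a few lines of Python (our sketch, not the authors'): each polarizer passes a photon with probability cos² of the angle between the photon's polarization and the polarizer's axis.

    import numpy as np

    def pass_prob(photon_angle_deg, polarizer_angle_deg):
        theta = np.radians(polarizer_angle_deg - photon_angle_deg)
        return np.cos(theta) ** 2

    p = 0.5                  # a randomly polarized beam loses half its photons at #1 (0 degrees)
    p *= pass_prob(0, 45)    # polarizer #3 at 45 degrees; survivors are now polarized at 45 degrees
    p *= pass_prob(45, 90)   # polarizer #2 at 90 degrees
    print(p)                 # 0.125, i.e., 1/8 of the original beam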
3.4 Expressing Entanglement
In the Dirac or ket notation, the tensor product, ⊗, distributes over addition, e.g., |0⟩ ⊗ (1/√2|0⟩ + 1/√2|1⟩) = 1/√2|00⟩ + 1/√2|01⟩. Another example is that the tensor product of equal superpositions is an equal superposition of the entire system:

    (1/√2(|0⟩ + |1⟩)) ⊗ (1/√2(|0⟩ + |1⟩)) = 1/2(|00⟩ + |01⟩ + |10⟩ + |11⟩).

Note that the square of each amplitude gives a 1/4 chance of each outcome, which is what we expect. If a state cannot be factored into products of simpler states, it is "entangled." For instance, neither 1/2|00⟩ + √(3/4)|11⟩ nor 1/√2(|Heads Tails⟩ + |Tails Heads⟩) can be factored into a product of states. The latter state expresses the entangled coin tossing we discussed in Section 2.7. When we toss the coins (do a measurement), we have equal chances of getting |Heads Tails⟩ (heads on the first coin
and tails on the second) or |Tails Heads⟩ (tails on the first coin and heads on the second). If we observe the coins separately, they appear to be completely classical, fair coins: heads or tails appear randomly. However, the records of the two coins are correlated: when one comes up heads, the other comes up tails and vice versa.
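Once a measurement basis is fixed, the statistics of the entangled coin state can be simulated classically. The following sketch (ours, not the chapter's) samples 1/√2(|Heads Tails⟩ + |Tails Heads⟩) and shows that each coin alone looks fair while the pair is always anti-correlated.

    import numpy as np

    amplitudes = {("H", "T"): 1 / np.sqrt(2), ("T", "H"): 1 / np.sqrt(2)}
    outcomes = list(amplitudes)
    probs = [abs(a) ** 2 for a in amplitudes.values()]

    rng = np.random.default_rng(1)
    samples = [outcomes[i] for i in rng.choice(len(outcomes), size=1000, p=probs)]

    print(sum(c1 == "H" for c1, _ in samples) / len(samples))  # about 0.5: coin 1 alone looks fair
    assert all(c1 != c2 for c1, c2 in samples)                 # but the two coins never match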
3.5 Unitary Transforms
Postulate 2 states that all transformations of an isolated quantum system are unitary. In particular, they are linear. If a system undergoes decoherence or collapse because of some outside influence, the transformation is not necessarily unitary, but when an entire system is considered in isolation from any other influence, all transformations are unitary.
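As a quick numeric check of our own of what "unitary" buys us, the following sketch verifies that a typical single-qubit operation, here a Hadamard (used again in Section 4.2.4), is unitary, preserves the norm of a state vector, and is reversible:

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # a Hadamard gate
    assert np.allclose(H.conj().T @ H, np.eye(2))     # unitary: the conjugate transpose inverts it

    state = np.array([0.6, 0.8])                      # amplitudes with norm 1
    assert np.isclose(np.linalg.norm(H @ state), 1.0) # unitary evolution preserves the norm
    assert np.allclose(H @ (H @ state), state)        # H is its own inverse, so the evolution reverses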
3.6 Proof of No-Cloning Theorem
With Postulate 2, we can prove a slightly simplified version of the No-Cloning Theorem. (A comprehensive version allows for arbitrary ancillary or "work" qubits.) We begin by formalizing the theorem. We hypothesize that there is some operation, U, which exactly copies an arbitrary quantum state, Ψ, onto another particle. Its operation would be written as

    U|Ψ⟩|0⟩ = |Ψ⟩|Ψ⟩.

Does this hypothetical operator have a consistent definition for a state that is a superposition? In Dirac notation, what is the value of U(a|0⟩ + b|1⟩)|0⟩? Recall that the tensor product distributes over superposition. One derivation is to distribute the tensor product first, then distribute the clone operation, and finally perform the hypothetical clone operation:

    U(a|0⟩ + b|1⟩)|0⟩ = U(a|0⟩|0⟩ + b|1⟩|0⟩)
                      = Ua|0⟩|0⟩ + Ub|1⟩|0⟩
                      = a|0⟩a|0⟩ + b|1⟩b|1⟩
                      = a²|00⟩ + b²|11⟩.

However, if we evaluate the clone operation first and then distribute, we get

    U(a|0⟩ + b|1⟩)|0⟩ = (a|0⟩ + b|1⟩)(a|0⟩ + b|1⟩)
                      = a|0⟩a|0⟩ + a|0⟩b|1⟩ + b|1⟩a|0⟩ + b|1⟩b|1⟩
                      = a²|00⟩ + ab|01⟩ + ab|10⟩ + b²|11⟩.
The derivations are different! The mathematics should be consistent unless we're trying something impossible, like dividing by 0. Since the only questionable step was assuming the existence of a cloning operation, we conclude that a general cloning operation is inconsistent with the laws of quantum mechanics. Note that if a is 0 or b is 0, the two derivations do give the same result, but a and b are amplitudes (like probabilities) of states in the superposition. If one or the other is 0, there was actually no superposition to begin with, and this proof doesn't apply. In fact, in the absence of arbitrary superposition, we can clone. If we know that a particle is either in state |0⟩ or in state |1⟩, we can simply measure the particle. We then set any number of other particles to that same state, effectively copying the state of the particle. In this case we know something about the original state of the particle. So this "loophole" does not invalidate the theorem that we cannot clone a completely unknown state. In Section 5.6 we explain how we can move, or "teleport," an unknown state to a distant particle, but the state on the original particle is destroyed in the process. So we still end up with just one instance of a completely unknown state.
4. Quantum Computing
We have seen that phenomena and effects at quantum scales can be quite different from those we are used to. The richness of these effects tantalizes us with the possibility of far faster computing, when we manage to harness these effects. But how can we turn these effects into gates and computers? How fast might they solve problems? Are these merely theoretical ideals, like a frictionless surface or noiseless measurement, or is there hope of building an actual device? This section discusses how quantum effects can be harnessed to create gates, assesses the potential for quantum algorithms, and outlines ways of dealing with imperfect operations and devices.
4.1 Quantum Gates and Quantum Computers

Digital computers, from microprocessors to supercomputers, from the tiny chips running your wristwatch or microwave to continent-spanning distributed systems that handle worldwide credit card transactions, are built of thousands or millions of simple gates. Each gate does a single logical operation, such as producing a 1 if all its inputs are 1 (otherwise, producing a 0 if any input is 0) or inverting a 1 to a 0 and a 0 to a 1. From these simple gates, engineers build more complex circuits that add or multiply two numbers, select a location in memory, or choose which instructions to do next depending on the result of an operation.
From these circuits, engineers create more and more complex modules until we have computers, CD players, aircraft navigation systems, laser printers, and cell phones. Although computer engineers still must deal with significant concerns, such as transmitting signals at gigahertz rates, getting a million gates to function in exact lockstep, or storing ten billion bits without losing a single one, conceptually once we can build simple gates, the rest is "merely" design. Quantum computing appears to be similar: we know how to use quantum effects to create quantum gates or operations, we have ideas about combining gates into meaningful modules, and we have a growing body of work about how to do quantum computations reliably, even with imperfect components. Researchers are optimistic because more work brings advances in both theory and practice.

In classical computer design, one basic gate is the AND gate. However, as we described in Section 2.8, an AND gate is not reversible. A basic, reversible quantum gate is the "controlled-not" or CNOT gate. It is represented as in Fig. 7. The two horizontal lines, labeled |φ⟩ and |ψ⟩, represent two qubits. The gate is the vertical line with connections to the qubits. The top qubit, labeled |ψ⟩ and connected with the dot, is the gate's control. The bottom qubit, labeled |φ⟩ and connected with ⊕, is the "data." The data qubit is inverted if the control qubit is 1. If the control is 0, the data qubit is unchanged. Table I shows the operation of CNOT. Typically we consider the inputs to be on the left (the |φ⟩ and |ψ⟩), and the outputs to be on the right. Since CNOT is reversible, it is not unreasonable to consider the right-hand side (the |φ′⟩ and |ψ′⟩) the "inputs" and the left-hand side the "outputs"! That is, we can run the gate "backwards." The function is still completely determined: every possible "input" produces exactly one "output." So far, this is just the classical exclusive-OR gate. What happens when the control is a superposition? The resultant qubits are entangled. In the following, we apply a CNOT to the control qubit, an equal superposition of |0ψ⟩ and |1ψ⟩ (we use the subscript ψ to distinguish the control qubit), and the data qubit, |0⟩:

    CNOT(1/√2(|0ψ⟩ + |1ψ⟩) ⊗ |0⟩) = 1/√2(CNOT|0ψ0⟩ + CNOT|1ψ0⟩)
                                  = 1/√2(|0ψ0⟩ + |1ψ1⟩).
FIG. 7. A CNOT gate.
TABLE I
FUNCTION OF THE CNOT GATE

    |ψ⟩   |φ⟩   |ψ′⟩   |φ′⟩
    |0⟩   |0⟩   |0⟩    |0⟩
    |0⟩   |1⟩   |0⟩    |1⟩
    |1⟩   |0⟩   |1⟩    |1⟩
    |1⟩   |1⟩   |1⟩    |0⟩
What does this mean? One way to understand it is to measure the control qubit. If the result of the measurement is 0, the state has collapsed to |0ψ0⟩, so we will find the data qubit to be 0. If we measure a 1, the state collapsed to |1ψ1⟩, and the data qubit is 1. We could measure the data qubit first and get much the same result. These results are consistent with Table I.

So how might we build a CNOT gate? We review several possible implementations in Section 6, but sketch one here. Suppose we use the state of the outermost electron of a sodium atom as a qubit. An electron in the ground state is a logical 0, and an excited electron is a logical 1. An appropriate pulse of energy will flip the state of the qubit. That is, it will excite an outer electron in the ground state, and "discharge" an excited electron. To make a CNOT gate, we arrange a coupling between two atoms such that if the outer electron of the control atom is excited, the outer electron of the data atom flips when we apply a pulse. If the control atom is not excited, the pulse has no effect on the data atom. As can be guessed from this description, the notion of wires and gates, as represented schematically in Fig. 7, might not be used in an actual quantum computer. Instead, different tuned and selected minute energy pulses may cause qubits to interact and change their states.

A more easily used quantum gate is the controlled-controlled-not or C2NOT gate. It has two control qubits and one data qubit, as represented schematically in Fig. 8. It is similar to the CNOT: the data qubit is inverted if both the control qubits are 1. If either is 0, the data qubit is unchanged.
FIG. 8. A C2NOT gate.

We can easily make a reversible version of the classical AND gate. To find A AND B, use A and B as
the controls and use a constant 0 as the data. If A and B are both 1, the 0 is flipped to a 1. Otherwise, it remains 0. Many other basic quantum gates have been proposed [3, Chap. 4]. Using these gates as building blocks, useful functions and entire modules have been designed. In short, we can conceptually design complete quantum computing systems. In practice there are still enormous, and perhaps insurmountable, engineering tasks before realistic quantum computing is available. For instance, energy pulses are never perfect, electrons don't always flip when they are supposed to, and stray energy may corrupt data. Section 4.3 explains a possible approach to handling such errors.
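The CNOT and C2NOT behavior described above can be checked numerically. The sketch below is ours, not the chapter's: it writes both gates as permutation matrices (the basis ordering and variable names are our choices), reproduces the entangled output for a superposed control, and verifies that the C2NOT (Toffoli) gate computes A AND B reversibly.

    import numpy as np

    # Two-qubit basis order |control data>: |00>, |01>, |10>, |11>.
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=float)
    ket0 = np.array([1.0, 0.0])
    ket1 = np.array([0.0, 1.0])
    plus = (ket0 + ket1) / np.sqrt(2)                 # 1/sqrt(2)(|0> + |1>)

    # A superposed control and |0> data give 1/sqrt(2)(|00> + |11>), as in the text.
    print(CNOT @ np.kron(plus, ket0))                 # [0.707 0.    0.    0.707]

    # C2NOT (Toffoli) swaps |110> and |111>; with data 0 it leaves A AND B in the data qubit.
    TOFFOLI = np.eye(8)
    TOFFOLI[[6, 7]] = TOFFOLI[[7, 6]]
    for a in (0, 1):
        for b in (0, 1):
            state = np.zeros(8)
            state[(a << 2) | (b << 1)] = 1.0          # the basis state |a b 0>
            out_index = int(np.argmax(TOFFOLI @ state))
            assert out_index & 1 == (a & b)           # the data bit now holds a AND b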
4.2 Quantum Algorithms
The preceding section outlines plans to turn quantum effects into actual gates and, eventually, into quantum computers. But how much faster might quantum computers be? After all, last year's laptop computer seemed fast until this year's computer arrived. To succinctly address this, we introduce some complexity theory. To help answer whether one algorithm or computer is actually faster, we count the number of basic operations a program executes, not (necessarily) the execution, or elapsed "wall clock," time. Differences in elapsed time may be due to differences in the compiler, a neat programming trick, memory caching, or the presence of other running programs. We want to concentrate on fundamental differences, if any, rather than judging a programming competition. In measuring algorithm performance we must consider the size of the input. A computer running a program to factor a 10,000-digit number shouldn't be compared with a different computer that is only factoring a 10-digit number. So we will compare performance in terms of the size of the problem or input. We expect that larger problems take longer to solve than smaller instances of the same problem. Hence, we express performance as a function of the problem size, e.g., f(n). We will see that performance functions fall into theoretically neat and practically useful "complexity classes."
4.2.1 Complexity Classes
What are some typical complexity classes? Consider the problem of finding a name in a telephone book. If one takes a dumb but straightforward method where we check every entry, one at a time, from the beginning, the expected average number of checks is n/2 for a telephone book with n names. (This is called "sequential search.") Interestingly, if one checks names completely at random,
even allowing accidental rechecks of names, the expected average number of checks is still n/2. Since telephone books are sorted by name, we can do much better. We estimate where the name will be, and open the book to that spot. Judging from the closeness to the name, we extrapolate again where the name will be and skip there. (This is called "extrapolation search.") This search is much faster, and on average takes some constant multiple of the logarithm of the number of names, or c log n. Although it takes a little longer to estimate and open the book than just checking the next name, as n gets large, those constant multipliers don't matter. For large values of n the logarithm is so much smaller than n itself, it is clear that extrapolation search is far faster than linear search. (When one is close to the name, that is, when n is small, one switches to linear searching, since the time to do a check and move to the next name is much smaller.) This is a mathematically clear distinction. Since almost any reasonable number times a logarithm is eventually smaller than another constant times the number, we'll ignore constant multiples (in most cases) and just indicate what "order" they are. We say that linear search is O(n), read "big-Oh of n," and extrapolation search is O(log n), read "big-Oh of log n." Since logarithms in different bases only differ by a constant multiple, we can (usually) ignore the detail of the logarithm's base.

Other common problems take different amounts of time, even by these high-level comparisons. Consider the problem of finding duplicates in an unordered list of names. Comparing every name to every other name takes some multiple of n², or O(n²) time. Even if we optimize the algorithm and only compare each name to those after it, the time is still a multiple of n². Compared with O(log n) or even O(n), finding duplicates will be much slower than finding a name, especially for very large values of n. It turns out we can sort names in time proportional to n log n. Checking for duplicates in a sorted list only takes O(n), so sorting and then checking takes O(cn log n + dn), for some constants c and d. Since the n log n term is significantly larger than the n term for large n, we can ignore the lone n and say this method is O(n log n), much faster than the O(n²) time above.

Although the difference between these methods is significant, both are still polynomial, meaning the run time is a polynomial of the size. That is, they are both O(n^k) for some constant k. We find a qualitative difference in performance between polynomial algorithms and algorithms with exponential run time, that is, algorithms that are O(k^n) for some constant k. Polynomial algorithms are generally practical to run; even for large problems, the run time doesn't increase too much, whereas exponential algorithms are generally intractable. Even seemingly minor increases in the problem size can make the computation completely impractical.
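The difference between these orders of growth is easy to see numerically; the toy sketch below is ours (with an arbitrary constant for the extrapolation search) and simply evaluates the two cost formulas for increasingly large telephone books.

    import math

    def sequential_checks(n):
        return n / 2                 # O(n): scan from the beginning

    def extrapolation_checks(n, c=1.0):
        return c * math.log2(n)      # O(log n): repeatedly jump to the estimated spot

    for n in (1_000, 1_000_000, 1_000_000_000):
        print(n, sequential_checks(n), round(extrapolation_checks(n), 1))
    # The gap widens dramatically as n grows, which is exactly what big-Oh notation captures.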
There is a mathematical reason for separating polynomial from exponential algorithms. A polynomial of a polynomial is still a polynomial. Thus, even having polynomial algorithms use other polynomial algorithms still results in a polynomial algorithm. Moreover, any exponential function eventually grows faster than any polynomial function.
4.2.2 A Formal Definition of Big-Oh Complexity
For the curious reader, we formally define big-Oh notation. We say f(n) = O(g(n)) if there are two positive constants k and n₀ such that |f(n)| < k·g(n) for all n > n₀. The constants k and n₀ must not depend on n. Informally, there is a constant k such that for large values of n (beyond n₀), k·g(n) is greater than f(n). From this definition, we see that constant multipliers are absorbed into the k. Also, lower order terms, such as dn, are eventually dominated by higher order terms, like cn log n or n². Because a "faster" algorithm may have such large constants or lower order terms, it may perform worse than a "slower" algorithm for realistic problems. If we are clearly only solving small problems, it may, in fact, be better to use the "slower" algorithm, especially if the slower algorithm is simpler. However, experience shows that big-Oh complexity is usually an excellent measure for comparing algorithms.
4.2.3 Shor's Factoring Algorithm
We can now succinctly compare the speed of computations. The security of a widely used encryption scheme, RSA, depends on the presumption that finding the factors of large numbers is intractable. After decades of work, the best classical factoring algorithm is the Number Field Sieve [4]. With it, factoring an n-digit number takes roughly e^(n^(1/3)) steps (more precisely, e^(O(n^(1/3) (log n)^(2/3))) steps), which is exponential in n. What does this mean? Suppose you use RSA to encrypt messages, and your opponent buys fast computers to break your code. Multiplication by the Schönhage-Strassen algorithm [5] takes O(n log n log log n) steps. Using a key eight times longer means multiplications, and hence encrypting and decrypting time, take at most 24 times longer to run, for n > 16. However, the time for your opponent to factor the numbers, and hence break the code, increases to e^((8n)^(1/3)) = e^(2·n^(1/3)) = (e^(n^(1/3)))². In other words, the time to factor is squared. It doesn't matter whether the time is in seconds or days: factoring is exponential. Without too much computational overhead you can increase the size of your key beyond the capability of any
conceivable computer your opponent could obtain. At least, that was the case until 1994.

In 1994, Peter Shor invented a quantum algorithm for factoring numbers that takes O(n² log n log log n) steps [2]. This is polynomial, and, in fact, isn't too much longer than the naive time to multiply. So if you can encrypt, a determined opponent can break the code, as long as a quantum computer is available. With this breakthrough, the cryptography community in particular became very interested in quantum computing.

Shor's algorithm, like most factoring algorithms, uses "a standard reduction of the factoring problem to the problem of finding the period of a function" [6]. What is the period of, say, the function 3^n mod 14? Values of the function for increasing exponents are 3¹ = 3, 3² = 9, 3³ = 27 or 13 mod 14, 3⁴ = 11 mod 14, 3⁵ = 5 mod 14, and 3⁶ = 1 mod 14. Since the function has the value 1 when n = 6, the period of 3^n mod 14 is 6.

There are five main steps to factor a composite number N with Shor's algorithm.

1. If N is even or there are integers a and b > 1 such that N = a^b, then 2 or a are factors.
2. Pick a positive integer, m, which is relatively prime to N.
3. Using a quantum computer, find the period of m^x mod N, that is, the smallest positive integer P such that m^P = 1 mod N.
4. For number theoretic reasons, if P is odd or if m^(P/2) + 1 = 0 mod N, start over again with a new m at step 2.
5. Compute the greatest common divisor of m^(P/2) - 1 and N. This number is a divisor of N.

For a concrete example, let N = 323, which is 19 × 17. N is neither even nor the power of an integer. Suppose we choose 4 for m in step 2. Since 4 is relatively prime to 323, we continue to step 3. We find that the period, P, is 36, since 4³⁶ = 1 mod 323. We do not need to repeat at step 4 since 36 is not odd and 4^(36/2) + 1 = 305 + 1 ≠ 0 mod 323. In step 5 we compute the greatest common divisor of 4^(36/2) - 1 and 323, which is 19. Thus we have found a factor of 323.

The heart of Shor's algorithm is quantum period finding; the same underlying techniques also give a quantum Fourier transform and a fast way of finding discrete logarithms. These are exponentially faster than their classical counterparts.
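The classical scaffolding of the five steps is easy to express in code. In the sketch below (ours, not the authors'), the period is found by brute force; in Shor's algorithm, that one step, and only that step, is done by the quantum computer in polynomial time.

    from math import gcd

    def find_period(m, N):
        """Smallest positive P with m**P = 1 (mod N), found by brute force."""
        P, value = 1, m % N
        while value != 1:
            value = (value * m) % N
            P += 1
        return P

    def shor_classical_part(N, m):
        if gcd(m, N) != 1:
            return gcd(m, N)                    # lucky guess: m already shares a factor with N
        P = find_period(m, N)                   # step 3: quantum in the real algorithm
        if P % 2 == 1 or pow(m, P // 2, N) == N - 1:
            return None                         # step 4: start over with another m
        return gcd(pow(m, P // 2) - 1, N)       # step 5: a nontrivial divisor of N

    print(find_period(3, 14))                   # 6, as in the text
    print(shor_classical_part(323, 4))          # 19, a factor of 323 = 19 x 17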
4.2.4 Deutsch's Function Characterization Problem
To more clearly illustrate quantum computing's potential speedup, let's examine a contrived, but simple problem first presented and solved by Deutsch [7].
Suppose we wish to find out whether an unknown Boolean unary function is constant, either 0 or 1, or not. Classically, we must apply the function twice, once with a 0 input and once with a 1. If the outputs are both 0 or both 1, it is constant; otherwise, it is not. A single classical application of the function, say applying a 1, can't give us enough information. However, a single quantum application of the function can. Using a superposition of 0 and 1 as the input, one quantum computation of the function yields the answer.

The solution of Cleve et al. [8] to Deutsch's problem uses a common quantum computing operation, called a Hadamard,^ which converts |0⟩ into the superposition 1/√2(|0⟩ + |1⟩) and |1⟩ into the superposition 1/√2(|0⟩ - |1⟩). The algorithm is shown schematically in Fig. 9. The Hadamard is represented as a box with an "H" in it. The function to be characterized is a box labeled "U_f."

FIG. 9. Solution to Deutsch's function characterization problem (the first qubit emerges as |f(0) ⊕ f(1)⟩).

^Also called the Walsh transform, Walsh-Hadamard transform, or discrete Fourier transformation over Z_2^n.

To begin, we apply a Hadamard to a |0⟩ and another Hadamard to a |1⟩:

    H|0⟩ H|1⟩ = 1/2(|0⟩ + |1⟩)(|0⟩ - |1⟩)
              = 1/2(|0⟩(|0⟩ - |1⟩) + |1⟩(|0⟩ - |1⟩)).

To be reversible, the function, U_f, takes a pair of qubits, |x⟩|y⟩, and produces the pair |x⟩|y ⊕ f(x)⟩. The second qubit is the original second qubit, y, exclusive-or'd with the function applied to the first qubit, f(x). We apply the function one time to the result of the Hadamards, and then apply another Hadamard to the first qubit, not the "result" qubit. Below, the "I" represents the identity; that is, we do nothing to the second qubit:

    (H ⊗ I) U_f 1/2(|0⟩(|0⟩ - |1⟩) + |1⟩(|0⟩ - |1⟩))
        = (H ⊗ I) 1/2(U_f|0⟩(|0⟩ - |1⟩) + U_f|1⟩(|0⟩ - |1⟩))
        = (H ⊗ I) 1/2(|0⟩(|0 ⊕ f(0)⟩ - |1 ⊕ f(0)⟩) + |1⟩(|0 ⊕ f(1)⟩ - |1 ⊕ f(1)⟩))
        = 1/2(H|0⟩|0 ⊕ f(0)⟩ - H|0⟩|1 ⊕ f(0)⟩ + H|1⟩|0 ⊕ f(1)⟩ - H|1⟩|1 ⊕ f(1)⟩)
        = 1/(2√2)((|0⟩ + |1⟩)|0 ⊕ f(0)⟩ - (|0⟩ + |1⟩)|1 ⊕ f(0)⟩ + (|0⟩ - |1⟩)|0 ⊕ f(1)⟩ - (|0⟩ - |1⟩)|1 ⊕ f(1)⟩).

Case analysis and algebraic manipulations reduce this equation to (details are in the Appendix)

        = 1/√2 |f(0) ⊕ f(1)⟩(|0⟩ - |1⟩) = |f(0) ⊕ f(1)⟩ ⊗ 1/√2(|0⟩ - |1⟩).

We now measure the first qubit. If it is 0, the function is a constant. If we measure a 1, the function is not a constant. Thus, we can compute a property of a function's range using only one function application. Although contrived, this example shows that using quantum entanglement and superposition, we can compute some properties faster than is possible with classical means. This is part of the lure of quantum computing research.
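For readers who want to trace the algebra, here is a small numpy sketch of our own that builds the circuit directly: the matrix U_f sends |x⟩|y⟩ to |x⟩|y ⊕ f(x)⟩, and measuring the first qubit distinguishes constant from non-constant functions with a single application of f. The function and variable names are ours.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    I = np.eye(2)

    def U_f(f):
        """The 4x4 matrix sending basis state |x>|y> to |x>|y XOR f(x)>."""
        U = np.zeros((4, 4))
        for x in (0, 1):
            for y in (0, 1):
                U[2 * x + (y ^ f(x)), 2 * x + y] = 1
        return U

    def deutsch(f):
        state = np.kron(H, H) @ np.kron([1, 0], [0, 1])   # Hadamards on |0> and |1>
        state = np.kron(H, I) @ (U_f(f) @ state)          # one call to U_f, then H on qubit 1
        prob_one = state[2] ** 2 + state[3] ** 2          # probability of measuring 1 on qubit 1
        return "constant" if np.isclose(prob_one, 0) else "not constant"

    print(deutsch(lambda x: 0), deutsch(lambda x: 1))     # constant constant
    print(deutsch(lambda x: x), deutsch(lambda x: 1 - x)) # not constant not constant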
4.2.5 Grover's Search Algorithm
The requirement of searching for information is simple: find a certain value in a set of values with no ordering. For example, does the name "John Smith" occur in a set of 1000 names? Since there is no order to the names, the best classical solution is to examine each name, one at a time. For a set of N names, the expected run time is O(N/2): on average we must examine half the names. There are classical methods for speeding up the search, such as sorting the names and doing a binary search, or using parallel processing or associative memory. Sorting requires us to assign an order to the data, which may be hard if we are searching data such as pictures or audio recordings. To accommodate every possible search, say by last name or by first name, we would need to create separate sorted indices into the data, requiring O(N log N) preliminary computation and O(N) extra storage. Parallel processing and associative memory take O(N) resources. Thus these classical methods speed up query time by taking time earlier or using more resources.

In 1996 Grover presented a quantum algorithm [9,10] to solve the general search problem in O(√N log N) time. The algorithm proceeds by repeatedly enhancing the amplitude of the position in which the name occurs. Database searching, especially for previously unindexed information, is becoming more important in business operations, such as data mining. However, Grover's algorithm might have an impact that reaches even farther. Although we
present the algorithm in terms of looking for names, the search can be adapted to any recognizable pattern. Solutions to problems currently thought to take more than polynomial time, that is, O(k^n), may be solvable in polynomial time. A typical problem in this group is the Traveling Salesman Problem, which is finding the shortest route that visits every point in a set. This problem occurs in situations such as finding the best routing of trucks between pick-up and drop-off points, airplanes between cities, and the fastest path of a drill head making holes in a printed circuit board. The search algorithm would initialize all possible solutions, and then repeatedly enhance the amplitude of the best solution. No published quantum algorithm exists to solve the Traveling Salesman Problem, or any other NP-complete problem, in polynomial time. However, improvements like Grover's hint that it may be possible.
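To give a feel for the amplitude-enhancement loop, here is a toy simulation of our own for N = 16 unordered items with the sought item at an arbitrary index; after about (π/4)√N iterations the marked amplitude dominates.

    import numpy as np

    N, marked = 16, 3
    state = np.full(N, 1 / np.sqrt(N))           # start in an equal superposition over all items

    iterations = int(round(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state[marked] *= -1                      # oracle: flip the sign of the marked amplitude
        state = 2 * state.mean() - state         # "inversion about the mean"

    print(iterations, state[marked] ** 2)        # 3 iterations; probability of success about 0.96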
4.2.6 Quantum Simulation
Since a quantum system is the tensor product of its component systems, the amount of information needed to completely describe an arbitrary quantum system increases exponentially with its size. This means that classical simulation of quantum systems with even a few dozen qubits challenges the fastest supercomputers. Researching protein folding to discover new drugs, evaluating different physical models of the universe, understanding new superconductors, or designing quantum computers may take far more classical computer power than could reasonably be expected to exist on Earth in the next decade. However, since quantum computers can represent an exponential amount of information, they may make such investigations tractable.
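A back-of-the-envelope calculation (ours) makes the exponential blow-up concrete: a full classical description of n qubits needs 2^n complex amplitudes, here taken as 16 bytes each.

    for n in (10, 20, 30, 40, 50):
        bytes_needed = (2 ** n) * 16
        print(f"{n} qubits: {bytes_needed / 2**30:,.1f} GiB")
    # 30 qubits already needs 16 GiB of memory; 50 qubits needs roughly 16 million GiB.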
4.3 Quantum Error Correction
One of the most serious problems for quantum information processing is that of decoherence, the tendency for quantum superpositions to collapse into a single, definite, classical state. As we have seen, the power of quantum computing derives in large part from the ability to take advantage of the unique properties of quantum mechanics: superposition and entanglement. The qubits that compose a quantum computer must interact with other components of the system in order to perform a computation, and this interaction inevitably leads to errors. To prevent the state of qubits from degrading to the point that quantum computations fail requires that errors be either prevented or corrected. In classical systems, errors are prevented to some degree by making the ratio of system size to error deviation very large. Error correction methods are well known in conventional computing systems, and have been used for decades. Classical
error correction uses various types of redundancy to isolate and then correct errors. Multiple copies of a bit or signal can be compared, with the assumption that errors are improbable enough that faulty bits or signals are never more likely than valid ones; e.g., if three bits are used to encode a one-bit value, and two of three bits match, then the third is assumed to be faulty. In quantum systems it is not possible to measure qubit values without destroying the superposition that quantum computing needs, so at first there was doubt that quantum error correction would ever be feasible. This is natural, especially considering the no-cloning theorem (Section 2.10). Not only can qubits not be measured exactly, they cannot even be copied arbitrarily by any conceivable scheme for detecting and correcting errors. It is perhaps surprising, then, that quantum error correction is not only possible, but also remarkably effective. The challenge in quantum error correction is to isolate and correct errors without disturbing the quantum state of the system. It is in fact possible to use some of the same ideas employed for classical error correction in a quantum system; the trick is to match the redundancy to the type of errors likely to occur in the system. Once we know what kinds of errors are most likely, it is possible to design effective quantum error correction mechanisms.
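For comparison, the classical redundancy argument can be sketched in a few lines of Python; the per-bit error probability used here is an arbitrary illustration, not a figure from the text.

```python
import random

def encode(bit):
    return [bit] * 3                       # triple-redundancy repetition code

def noisy(codeword, p):
    return [b ^ (random.random() < p) for b in codeword]   # flip each bit with prob. p

def decode(codeword):
    return 1 if sum(codeword) >= 2 else 0  # majority vote

p = 0.05                                   # assumed per-bit error probability
trials = 100_000
errors = sum(decode(noisy(encode(0), p)) != 0 for _ in range(trials))
print(f"raw error rate ~{p}, decoded error rate ~{errors / trials:.4f}")
# Expected decoded error rate: 3p^2 - 2p^3 = 0.00725 for p = 0.05
```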
4.3.1 Single-Bit-Flip Errors
To see how this is possible, consider a simple class of errors: single-bit errors that affect qubits independently. (In reality, of course, more complex problems occur, but this example illustrates the basic technique.) Consider a single qubit, a two-state system with bases |0⟩ and |1⟩. We will use a simple "repetition code"; that is, we represent a logical zero with three zero qubits, |0_L⟩ = |000⟩, and a logical one with three ones, |1_L⟩ = |111⟩. An arbitrary qubit in this system, written as a superposition a|0_L⟩ + b|1_L⟩, becomes a|000⟩ + b|111⟩ with repetition coding. Since we assume the system is stable in all ways except perhaps for single-bit flips, there may be either no error or one of three qubits flipped, as shown in Table II.
TABLE II
BIT-FLIP ERRORS, SYNDROMES, AND CORRECTIVES

Error              Error state          Syndrome   Correction
No error           a|000⟩ + b|111⟩      |000⟩      None
qubit 1 flipped    a|100⟩ + b|011⟩      |110⟩      X⊗I⊗I
qubit 2 flipped    a|010⟩ + b|101⟩      |101⟩      I⊗X⊗I
qubit 3 flipped    a|001⟩ + b|110⟩      |011⟩      I⊗I⊗X

Suppose, for example, that the channel leaves the encoded qubit in the superposition |ψ⟩ = √0.8 (a|100⟩ + b|011⟩) + √0.2 (a|010⟩ + b|101⟩), that is, with amplitude √0.8 the first qubit has been flipped and with amplitude √0.2 the second has. The error state is then augmented with |000⟩ and the syndrome extraction, S, applied:

    S(|ψ⟩ ⊗ |000⟩) = S(√0.8 (a|100000⟩ + b|011000⟩) + √0.2 (a|010000⟩ + b|101000⟩))
                   = √0.8 (a|100110⟩ + b|011110⟩) + √0.2 (a|010101⟩ + b|101101⟩)
                   = √0.8 (a|100⟩ + b|011⟩) ⊗ |110⟩ + √0.2 (a|010⟩ + b|101⟩) ⊗ |101⟩.

Now we measure the last three qubits. This measurement collapses them to |110⟩ with 80% probability or |101⟩ with 20% probability. Since they are entangled with
the repetition coding bits, the coding bits partially collapse, too. The final state is (a|100⟩ + b|011⟩) ⊗ |110⟩ with 80% probability or (a|010⟩ + b|101⟩) ⊗ |101⟩ with 20% probability. If we measured 1, 1, 0, the first collapse took place, and we apply X⊗I⊗I to a|100⟩ + b|011⟩, producing a|000⟩ + b|111⟩, the original coding. On the other hand, if we measured 1, 0, 1, we apply I⊗X⊗I to a|010⟩ + b|101⟩. In either case, the system is restored to the original condition, a|000⟩ + b|111⟩, without ever measuring (or disturbing) the repetition bits themselves. This error correction model works only if no more than one of the three qubits experiences an error. With an error probability of p, the chance of either no error or one error is (1 - p)³ + 3p(1 - p)² = 1 - 3p² + 2p³. This method improves system reliability if the chance of an uncorrectable error, which is 3p² - 2p³, is less than the chance of a single error, p; in other words, if p < 0.5.
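The threshold p < 0.5 can be checked directly; a minimal sketch with arbitrary sample values of p:

```python
# Compare the probability of an uncorrectable error under the three-qubit
# bit-flip code (3p^2 - 2p^3) with the unencoded single-qubit error rate p.
for p in (0.01, 0.1, 0.3, 0.5, 0.6):
    uncorrectable = 3 * p**2 - 2 * p**3
    better = uncorrectable < p
    print(f"p = {p:4.2f}: uncorrectable = {uncorrectable:.4f}  "
          f"{'improvement' if better else 'no improvement'}")
```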
4.3.3 From Error Correction to Quantum Fault Tolerance
The replication code given above is simple, but has disadvantages. First, it only corrects "bit flips," that is, errors in the state of a qubit. It cannot correct "phase errors," such as the change of sign in 1/√2(|0⟩ + |1⟩) to 1/√2(|0⟩ - |1⟩). Second, a replication code wastes resources. The code uses three actual qubits to encode one logical qubit. Further improvements in reliability take significantly more resources. More efficient codes can correct arbitrary bit or phase errors while using a sublinear number of additional qubits. One such coding scheme is group codes. Since the odds of a single qubit being corrupted must be low (or else error correction wouldn't work at all), we can economize by protecting a group of qubits at the same time, rather than protecting the qubits one at a time. In 1996 Ekert and Macchiavello pointed out [11] that such codes were possible and showed a lower bound: to protect l logical qubits from up to t errors, they must be encoded in the entangled state of at least n physical qubits, where n satisfies

    Σ_{i=0}^{t} 3^i C(n, i) ≤ 2^(n-l),

with C(n, i) the binomial coefficient.
An especially promising approach is the use of "concatenated" error correcting codes [12,13]. In this scheme, a single logical qubit is encoded as several qubits, but in addition the code qubits themselves are also encoded, forming a hierarchy of encodings. The significance is that if the probability of error for an individual qubit can be reduced below a certain threshold, then quantum computations can be carried out to an arbitrary degree of accuracy. A new approach complements error correction. Fault tolerant quantum computing avoids the need to actively decode and correct errors by computing directly
on encoded quantum states. Instead of computing with gates and qubits, fault tolerant designs use procedures that execute encoded gates on encoded states that represent logical qubits. Although many problems remain to be solved in the physical implementation of fault tolerant quantum computing, this approach brings quantum computing a little closer to reality.
5. Quantum Communication and Cryptography
Quantum computing promises a revolutionary advance in computational power, but applications of quantum mechanics to communication and cryptography may have equally spectacular results, and practical implementations may be available much sooner. In addition, quantum communication is likely to be just as essential to quantum computing as networking is to today's computer systems. Most observers expect quantum cryptography to be the first practical application for quantum communications and computing.
5.1 Why Quantum Cryptography Matters

Cryptography has a long history of competition between code makers and code breakers. New encryption methods appear routinely, and many are quickly cracked through lines of attack that their creators never considered. During the first and second World Wars, both sides were breaking codes that the other side considered secure. More significantly, a code that is secure at one time may fall to advances in technology. The most famous example of this may be the World War II German Enigma code. Some key mathematical insights made it possible to break Enigma messages encrypted with poorly selected keys, but only with an immense amount of computation. By the middle of the war, Enigma messages were being broken using electromechanical computers developed first by Polish intelligence and later by faster British devices built under the direction of Alan Turing. Although the Germans improved their encryption machines, Joseph Desch, at NCR Corporation, developed code breaking devices 20 times faster than Turing's, enabling the US Navy's Op-20-G to continue cracking many Enigma messages. Today, an average personal computer can break Enigma encryption in seconds. A quantum computer would have the same impact on many existing encryption algorithms. Much of modern cryptography is based on exploiting extremely hard mathematical problems, for which there are no known efficient solutions. Many modern cipher methods are based on the difficulty of factoring (see Section 4.2.3) or computing discrete logarithms for large numbers (e.g., over 100 digits). The
best algorithms for solving these problems are exponential in the length of input, so a brute force attack would require literally billions of years, even on computers thousands of times faster than today's machines. Quantum computers factoring large numbers or solving discrete logarithms would make some of the most widely used encryption methods obsolete overnight. Although quantum computers are not expected to be available for at least the next decade, the very existence of a quantum factoring algorithm makes classical cryptography obsolete for some applications. It is generally accepted that a new encryption method should protect information for 20 to 30 years, given expected technological advances. Since it is conceivable that a quantum computer will be built within the next two to three decades, algorithms based on factoring or discrete logarithms are, in that sense, obsolete already. Quantum cryptography, however, offers a solution to the problem of securing codes against technological advances.
5.2 Unbreakable Codes
An encrypted message can always be cryptanalyzed by brute force methods: trying every key until the correct one is found. There is, however, one exception to this rule. A cipher developed in 1917 by Gilbert Vernam of AT&T is truly unbreakable. A Vernam cipher, or "one-time pad," uses a key with a random sequence of letters in the encryption alphabet, equal in length to the message to be encrypted. A message, M, is encrypted by adding, modulo the alphabet length, each letter of the key K to the corresponding letter of M, i.e., Cᵢ = Mᵢ ⊕ Kᵢ, where C is the encrypted message, or ciphertext, and ⊕ is modular addition (see Table III). To decrypt, the process is reversed. The Vernam cipher is unbreakable because there is no way to determine a unique match between encrypted message C and key K. Since the key is random and the same length as the message, an encrypted message can decrypt to any text at all, depending on the key that is tried. For example, consider the ciphertext "XEC." Since keys are completely random, all keys are equally probable. So it is just as likely that the key is "UDI," which decrypts to "CAT," or "TPV," which
TABLE III
ONE-TIME PAD

Text      Random key    Ciphertext
C (3)     ⊕ U (21)      X (24)
A (1)     ⊕ D (4)       E (5)
T (20)    ⊕ I (9)       C (3)
decrypts to "DOG." There is no way to prove which is the real key, and therefore no way to know the original message. Although it is completely secure, the Vernam cipher has serious disadvantages. Since the key must be the same length as the message, a huge volume of key material must be exchanged by sender and recipient. This makes it impractical for high-volume applications such as day-to-day military communication. However, Vernam ciphers may be used to transmit master keys for other encryption schemes. Historically, Vernam ciphers have been used by spies sending short messages, using pads of random keys that could be destroyed after each transmission, hence the common name "one-time pad." An equally serious problem is that if the key is ever reused, it becomes possible to decrypt two or more messages that were encrypted under the same key. A spectacular example of this problem is the post-war decryption of Soviet KGB and GRU messages by U.S. and British intelligence under the code name VENONA. Soviet intelligence had established a practice of reusing one-time pads after a period of years. British intelligence analysts noticed a few matches in ciphers from a large volume of intercepted Soviet communications [14]. Over a period of years, British and U.S. cryptanalysts working at Arlington Hall in Virginia gradually decrypted hundreds of Soviet messages, many of them critical in revealing Soviet espionage against U.S. atomic weapons research in the 1940s and early 1950s. Still another problem with implementing a Vernam cipher is that the key must be truly random. Using text from a book, for example, would not be secure. Similarly, using the output of a conventional cipher system, such as DBS, results in an encryption that is only as secure as the cipher system, not an unbreakable one-time pad system. Pseudo-random number generator programs may produce sequences with correlations or the entire generation algorithm may be discovered; both these attacks have been successfully used. Thus while the Vernam cipher is in theory unbreakable, in practice it becomes difficult and impractical for most applications. Conventional cryptosystems, on the other hand, can be broken but are much more efficient and easier to use.
5.3 Quantum Cryptography
Quantum cryptography offers some potentially enormous advantages over conventional cryptosystems, and may also be the only way to secure communications against the power of quantum computers. With quantum methods, it becomes possible to exchange keys with the guarantee that any eavesdropping to intercept the key is detectable with arbitrarily high probability. If the keys are used as one-time pads, complete security is assured. Although special purpose classical
hardware can generate keys that are truly random, it is easy to use the collapse of quantum superpositions to generate truly random keys. This eliminates one of the major drawbacks to using one-time pads. The ability to detect the presence of an eavesdropper is in itself a huge advantage over conventional methods. With ordinary cryptography, there is always a risk that the key has been intercepted. Quantum key distribution eliminates this risk using properties of quantum mechanics to reveal the presence of any eavesdropping.
5.3.1 Quantum Key Distribution

The first significant communications application proposed using quantum effects is quantum key distribution, which solves the problem of communicating a shared cryptographic key between two parties with complete security. Classical solutions to the key distribution problem all carry a small, but real, risk that the encrypted communications used for sharing a key could be decrypted by an adversary. Quantum key distribution (QKD) can, in theory, make it impossible for the adversary to intercept the key communication without revealing his presence. The security of QKD relies on the physical effects that occur when photons are measured. As discussed in Section 3.3, a photon polarized in a given direction will not pass through a filter whose polarization is perpendicular to the photon's polarization. At any angle other than perpendicular, the photon may or may not pass through the filter, with a probability that depends on the difference between the direction of polarization of the photon and the filter. At 45°, the probability of passing through the filter is 50%. The filter is effectively a measuring device. According to the measurement postulate of quantum mechanics, measurements in a 2-dimensional system are made according to an orthonormal basis.* Measuring the state transforms it into one or the other of the basis vectors. In effect, the photon is forced to "choose" one of the basis vectors with a probability that depends on how far its angle of polarization is from the two basis vectors. For example, a diagonally polarized photon measured according to a vertical/horizontal basis will be in a state of either vertical or horizontal polarization after measurement. Furthermore, any polarization angle can be represented as a linear combination, a|↑⟩ + b|→⟩, of orthogonal (i.e., perpendicular) basis vectors. For QKD, two bases are used: rectilinear, with basis vectors ↑ and →, and diagonal, with basis vectors ↗ and ↘.

* Recall from linear algebra that a basis for a vector space is a set of vectors that can be used in linear combination to produce any vector in the space. A set of k vectors is necessary and sufficient to define a basis for a k-dimensional space. A commonly used basis for a 2-dimensional vector space is (1,0) and (0,1).
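Assuming the standard cos² measurement rule (which reproduces the 50% figure quoted above for 45°), the pass probability can be tabulated with a few lines of Python; the angles are arbitrary examples.

```python
import math

def pass_probability(photon_angle_deg, filter_angle_deg):
    """Probability that a polarized photon is measured along the filter's axis."""
    theta = math.radians(photon_angle_deg - filter_angle_deg)
    return math.cos(theta) ** 2

for photon in (0, 30, 45, 90):
    print(f"photon at {photon:2d} deg vs. vertical filter: "
          f"P(pass) = {pass_probability(photon, 0):.2f}")
```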
Measuring photons in these polarizations according to the basis vectors produces the results shown in Table IV. These results are the basis of a key distribution protocol, BB84, devised by Bennett and Brassard [15]. Many other QKD protocols have been devised since, using similar ideas.

TABLE IV
PHOTON MEASUREMENT WITH DIFFERENT BASES

Polarization    Basis    Result
↑               +        ↑
→               +        →
↗               +        ↑ or → (50/50)
↘               +        ↑ or → (50/50)
↑               ×        ↗ or ↘ (50/50)
→               ×        ↗ or ↘ (50/50)
↗               ×        ↗
↘               ×        ↘

Suppose two parties, Alice and Bob, wish to establish a shared cryptographic key. An eavesdropper, Eve, is known to be attempting to observe their communication; see Fig. 10. How can the key be shared without Eve intercepting it? Traditional solutions require that the key be encrypted under a previously shared key, which carries the risk that the communication may be decrypted by cryptanalytic means, or that the previously shared key may have been compromised. Either way, Eve may read the message and learn Alice and Bob's new key. QKD provides a method for establishing the shared key that guarantees either that the key will be perfectly secure or that Alice and Bob will learn that Eve is listening and therefore not use the key.

FIG. 10. Quantum key distribution. (Alice and Bob are linked by a public channel and a quantum channel; Eve has access to both.)

The BB84 QKD protocol takes advantage of the properties shown in Table IV. The protocol proceeds as follows: Alice and Bob agree in advance on a representation for 0 and 1 bits in each basis. For example, they may choose → and ↗ to represent 0, and ↑ and ↘ to represent 1. Alice sends Bob a stream of polarized photons, choosing randomly between ↑, →, ↗, and ↘ polarizations. When receiving a photon, Bob chooses randomly between + and × bases. When
the transmission is complete, Bob sends Alice the sequence of bases he used to measure the photons. This communication can be completely public. Alice tells Bob which of the bases were the same ones she used. This communication can also be public. Alice and Bob discard the measurements for which Bob used a different basis than Alice. On average, Bob will guess the correct basis 50% of the time, and will therefore get the same polarization as Alice sent. The key is then the interpretation of the sequence of remaining photons as 0's and 1's. Consider the example in Table V. Eve can listen to the messages between Alice and Bob about the sequences of bases they use and learn the bases that Bob guessed correctly. However, this tells her nothing about the key, because Alice's polarizations were chosen randomly. If Bob guessed + as the correct basis, Eve does not know whether Alice sent a → (0) or a ↑ (1) polarized photon, and therefore knows nothing about the key bit the photon represents. What happens if Eve intercepts the stream sent by Alice and measures the photons? On average, Eve will guess the correct basis 50% of the time, and the wrong basis 50% of the time, just as Bob does. However, when Eve measures a photon, its state is altered to conform to the basis Eve used, so Bob will get the wrong result in approximately half of the cases where he and Alice have chosen the same basis. Since they chose the same basis half the time, Eve's measurement adds an error rate of 25%. Consider the elaborated example in Table VI. We discuss the error rates of real systems and how the error rate is determined in Section 5.4.
TABLE V
DERIVING A NEW KEY
(Rows: polarization sent by Alice; basis used by Bob; Bob's result; resulting key bits, kept only where Bob's basis matches Alice's.)
TABLE VI
QUANTUM KEY DISTRIBUTION WITH EAVESDROPPING
(Rows: polarization sent by Alice; basis used by Eve; Eve's result; basis used by Bob; Bob's result; key, with errors introduced where Eve measured in the wrong basis.)
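The sifting and eavesdropper-detection statistics summarized in Tables V and VI can be reproduced with a classical simulation that tracks only bits and bases; the photon count is arbitrary, and this sketch ignores noise, loss, and multi-photon pulses.

```python
import random

def measure(bit, prepared_basis, measurement_basis):
    # Same basis: the result is the prepared bit.  Different basis: 50/50,
    # and the photon is effectively re-prepared in the measurement basis.
    return bit if prepared_basis == measurement_basis else random.randint(0, 1)

def bb84(n_photons, eavesdrop):
    alice_bits  = [random.randint(0, 1) for _ in range(n_photons)]
    alice_bases = [random.choice("+x") for _ in range(n_photons)]
    photons = list(zip(alice_bits, alice_bases))

    if eavesdrop:                          # Eve measures and resends every photon
        eve_bases = [random.choice("+x") for _ in range(n_photons)]
        photons = [(measure(b, pb, eb), eb)
                   for (b, pb), eb in zip(photons, eve_bases)]

    bob_bases   = [random.choice("+x") for _ in range(n_photons)]
    bob_results = [measure(b, pb, mb) for (b, pb), mb in zip(photons, bob_bases)]

    # Sifting: keep only positions where Bob happened to use Alice's basis.
    kept = [i for i in range(n_photons) if bob_bases[i] == alice_bases[i]]
    errors = sum(bob_results[i] != alice_bits[i] for i in kept)
    return len(kept), errors / len(kept)

random.seed(0)
for eavesdrop in (False, True):
    kept, error_rate = bb84(20_000, eavesdrop)
    print(f"Eve listening: {eavesdrop!s:5}  sifted bits: {kept}  "
          f"error rate: {error_rate:.2%}")
```

Without Eve the sifted key is error-free; with Eve intercepting and resending, the error rate among sifted bits is close to the 25% figure derived above.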
5.3.2 Generating Random Keys
Properly implemented, the BB84 protocol guarantees that Alice and Bob share a key that can be used either as a one-time pad, or as a key for a conventional cryptographic algorithm. In either case, real security is only available if the key is truly random. Any source of nonrandomness is a potential weakness that might be exploited by a cryptanalyst. This is one reason that ordinary pseudorandom number generator programs, such as those used for simulations, are hard to use for cryptography. Some conventional cryptosystems rely on special purpose hardware to generate random bits, and elaborate tests [16] are used to ensure randomness. One of the interesting aspects of quantum cryptography is that it provides a way to ensure a truly random key as well as allowing for detection of eavesdropping. Recall from Section 3.2 that for a superposition a|0⟩ + b|1⟩, the probability of a measurement result of 0 is a², and of 1 is b². Therefore, when a series of qubits each in the superposition 1/√2(|0⟩ + |1⟩) is measured, 0 and 1 results occur with equal probability. Measuring a series of particles in this state therefore establishes a truly random binary sequence.
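As a quick illustration of why measuring such superpositions yields uniformly random bits, the Born-rule statistics can be sampled classically; a real quantum random number generator would of course measure physical qubits rather than call a pseudorandom generator.

```python
import numpy as np

rng = np.random.default_rng()
amplitudes = np.array([1, 1]) / np.sqrt(2)       # the state (|0> + |1>)/sqrt(2)
probabilities = np.abs(amplitudes) ** 2          # Born rule: |a|^2 and |b|^2

bits = rng.choice([0, 1], size=1000, p=probabilities)
print("fraction of 1s:", bits.mean())            # ~0.5: a uniformly random bit stream
```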
5.4 Prospects and Practical Problems
Although in theory the BB84 protocol can produce a guaranteed secure key, a number of practical problems remain to be solved before quantum cryptography can fulfill its promise. BB84 and other quantum protocols are idealized, but current technology is not yet close enough to the idealized description to implement quantum protocols as practical products. As of 2002, QKD has not been demonstrated over a distance of more than 50 km, but progress has been steady [17]. Commercial products using quantum protocols may be available by 2005, if problems in generating and detecting single photons can be overcome. Single-photon production is one of the greatest challenges for quantum communication. To prevent eavesdropping, transmission of one photon per time slot is needed. If multiple photons are produced in a time slot, it is possible for an adversary to count the number of photons without disturbing their quantum state. Then, if multiple photons are present, one can be measured while the others are allowed to pass, revealing key information without betraying the presence of the adversary. Current methods of generating single photons typically have an efficiency of less than 15%, leaving plenty of opportunity for Eve. One method of dealing with noise problems is to use privacy amplification techniques. Whenever noise is present, it must be assumed that Eve could obtain partial information on the key bits, since it is not possible for Alice and Bob to
know for certain whether the error rate results from ordinary noise or from Eve's intrusion. Privacy amplification distills a long key, about which Eve is assumed to have partial information, down to a much shorter key that eliminates Eve's information to an arbitrarily low level. For privacy amplification, the first part of the protocol works exactly as before: Alice sends Bob qubits over a quantum channel, then the two exchange information over a public channel about which measurement bases they used. As before, they delete the qubits for which they used different measurement bases. Now, however, they also must delete bit slots in which Bob should have received a qubit, but didn't, either due to Eve's intrusion or dark counts at Bob's detector. Bob transmits the location of dark counts to Alice over the public channel. Next, Alice and Bob publicly compare small parts of their raw keys to estimate the error rate, then delete these publicly disclosed bits from their key, leaving the tentative final key. If the error rate exceeds a predetermined error threshold, indicating possible interception by Eve, they start over from the beginning to attempt a new key. If the error rate is below the threshold, they remove any remaining errors from the rest of the raw key, to produce the reconciled key by using parity checks of subblocks of the tentative final key. To do this, they partition the key into blocks of length l such that each block is unlikely to contain more than one error. They each compute parity on all blocks and publicly compare results, throwing away the last bit of each compared block. If parity does not agree for a block, they divide the block into two, then compare parity on the subblocks, continuing in this binary search fashion until the faulty bit is found and deleted. This step is repeated with different random partitions until it is no longer efficient to continue. After this process, they select randomly chosen subsets of the remaining key, computing parity and discarding faulty bits and the last bit of each partition as before. This process continues for some fixed number of times to ensure with high probability that the key contains no error. Because physical imperfections are inevitable in any system, it must be assumed that Eve may be able to obtain at least partial information. Eavesdropping may occur, even with significantly improved hardware, either through multiple-photon splitting or by intercepting and resending some bits, but not enough to reveal the presence of the eavesdropper. To overcome this problem, Bennett et al. [18] developed a privacy amplification procedure that distills a secure key by removing Eve's information with an arbitrarily high probability. During the privacy amplification phase of the protocol, Eve's information is removed. The first step in privacy amplification is for Alice and Bob to use the error rate determined above to compute an upper bound, k, on the number of bits in the remaining key that could be known to Eve. Using the number of bits in the
remaining, reconciled key, n, and an adjustable security parameter s, they select n - k - s subsets of the reconciled key. The subset selection is done publicly, but the contents of the subsets are kept secret. Alice and Bob then compute parity on the subsets they selected, using the resulting parities as the final secret key. On average, Eve now has less than 2⁻ˢ/ln 2 bits of information about the final key. Even if a reliable method of single photon production is developed, errors in transmission are as inevitable with quantum as with classical communication. Because quantum protocols rely on measuring the error rate to detect the presence of an eavesdropper, it is critical that the transmission medium's contribution to the error rate be as small as possible. If transmission errors exceed 25%, secure communication is not possible, because a simple man-in-the-middle attack, measuring all bits and passing them on, will not be detected.
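A rough sketch of the two post-processing steps just described: parity-based binary search over a block assumed to contain a single error, followed by privacy amplification using parities of publicly chosen random subsets. The block size, subset-selection rule, and parameters are arbitrary simplifications, not the exact procedure of [18].

```python
import random

def locate_error(alice_block, bob_block):
    """Binary search on parities to find the single position where the blocks differ."""
    lo, hi = 0, len(alice_block)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if sum(alice_block[lo:mid]) % 2 != sum(bob_block[lo:mid]) % 2:
            hi = mid            # the mismatch is in the first half
        else:
            lo = mid            # otherwise it is in the second half
    return lo

def privacy_amplify(key, n_out, seed):
    """Distill n_out bits as parities of publicly chosen random subsets of the key."""
    rng = random.Random(seed)   # the subset choices themselves may be public
    out = []
    for _ in range(n_out):
        subset = [i for i in range(len(key)) if rng.random() < 0.5]
        out.append(sum(key[i] for i in subset) % 2)
    return out

alice = [random.randint(0, 1) for _ in range(16)]
bob = alice.copy()
bob[9] ^= 1                     # one transmission error in Bob's copy
i = locate_error(alice, bob)
bob[i] ^= 1                     # corrected
print("reconciled:", alice == bob)
print("final key:", privacy_amplify(alice, 8, seed=42))
```

In the real protocol the parities are exchanged publicly (so a bit is sacrificed per comparison), whereas here both blocks are visible to the program only for brevity.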
5.5 Dense Coding
As discussed in Section 2.6, a qubit can produce only one bit of classical information. Surprisingly, it is possible to communicate two bits of information using only one qubit and an EPR pair in a quantum technique known as dense coding. Dense coding takes advantage of entanglement to double the information content of the physically transmitted qubit. Initially, Alice and Bob must each have one of the entangled particles of an EPR pair:

    Ψ₀ = 1/√2(|00⟩ + |11⟩).

To communicate two bits, Alice represents the possible bit combinations as 0 through 3. Using the qubit in her possession, she then executes one of the transformations in Table VII. After the transformation, Alice sends her qubit to Bob. Now that Bob has both qubits, he can use a controlled-NOT (prior to this, Alice and Bob could apply transformations only to their individual particles).
TABLE VII
DENSE CODING, PHASE 1

Bits   Transform           New state
00     Ψ₀ = (I⊗I)Ψ₀        1/√2(|00⟩ + |11⟩)
01     Ψ₁ = (X⊗I)Ψ₀        1/√2(|10⟩ + |01⟩)
10     Ψ₂ = (Z⊗I)Ψ₀        1/√2(|00⟩ - |11⟩)
11     Ψ₃ = (Y⊗I)Ψ₀        1/√2(|01⟩ - |10⟩)
The controlled-NOT makes it possible to factor out the second bit, while the first remains entangled, as shown in Table VIII. Note that after the controlled-NOT, it is possible to read off the values of the initial bits by treating 1/√2(|0⟩ + |1⟩) as 0 and 1/√2(|0⟩ - |1⟩) as 1. All that remains is to reduce the first qubit to a classical value by executing a Hadamard transform, as shown in Table IX. The dense coding concept can also be implemented using three qubits in an entangled state known as a GHZ state [19,20]. With this procedure, Alice can communicate three bits of classical information by sending two qubits. Using local operations on the two qubits, Alice is able to prepare the GHZ particles in any of the eight orthogonal GHZ states. Through entanglement, her operations affect the entire three-qubit system, just as her operation on one qubit of an entangled pair changes the state of the two qubits in the pair. Similar to two-qubit dense coding, Bob measures his qubit along with the qubits received from Alice to distinguish one of the eight possible states encoded by three bits.

TABLE VIII
DERIVING THE SECOND BIT

State                 C-NOT result          First bit           Second bit
1/√2(|00⟩ + |11⟩)     1/√2(|00⟩ + |10⟩)     1/√2(|0⟩ + |1⟩)     |0⟩
1/√2(|10⟩ + |01⟩)     1/√2(|11⟩ + |01⟩)     1/√2(|0⟩ + |1⟩)     |1⟩
1/√2(|00⟩ - |11⟩)     1/√2(|00⟩ - |10⟩)     1/√2(|0⟩ - |1⟩)     |0⟩
1/√2(|01⟩ - |10⟩)     1/√2(|01⟩ - |11⟩)     1/√2(|0⟩ - |1⟩)     |1⟩

TABLE IX
DERIVING THE FIRST BIT

First bit           H(First bit)
1/√2(|0⟩ + |1⟩)     1/√2(1/√2(|0⟩ + |1⟩) + 1/√2(|0⟩ - |1⟩)) = 1/2(|0⟩ + |1⟩ + |0⟩ - |1⟩) = |0⟩
1/√2(|0⟩ - |1⟩)     1/√2(1/√2(|0⟩ + |1⟩) - 1/√2(|0⟩ - |1⟩)) = 1/2(|0⟩ + |1⟩ - |0⟩ + |1⟩) = |1⟩
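The encode and decode steps of Tables VII-IX can be verified with a small state-vector calculation; this is only a simulation sketch, and Y is taken as ZX so that the states match Table VII exactly (the true Pauli Y differs only by a global phase).

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
Y = Z @ X                                    # ZX convention, matching Table VII
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2), shared by Alice and Bob

encodings = {(0, 0): I, (0, 1): X, (1, 0): Z, (1, 1): Y}
for bits, U in encodings.items():
    state = np.kron(U, I) @ bell             # Alice transforms only her qubit
    state = CNOT @ state                     # Bob: controlled-NOT (Table VIII) ...
    state = np.kron(H, I) @ state            # ... then Hadamard on the first qubit (Table IX)
    measured = divmod(int(np.argmax(np.abs(state))), 2)
    print(bits, "->", measured)              # Bob recovers both of Alice's bits
```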
5.6 Quantum Teleportation

As shown in Section 3.6, it is impossible to clone, or copy, an unknown quantum state. However, the quantum state can be moved to another location using classical communication. The original quantum state is reconstructed exactly at the receiver, but the original state is destroyed. The No-Cloning theorem thus
holds because in the end there is only one state. Quantum teleportation can be considered the dual of the dense coding procedure: dense coding maps a quantum state to two classical bits, while teleportation maps two classical bits to a quantum state. Neither process would be possible without entanglement. Initially, Alice has a qubit in an unknown state, i.e., Φ = a|0⟩ + b|1⟩.
As with dense coding, Alice and Bob each have one of an entangled pair of qubits:

    Ψ₀ = 1/√2(|00⟩ + |11⟩).

The combined state is

    Φ ⊗ Ψ₀ = a|0⟩ ⊗ 1/√2(|00⟩ + |11⟩) + b|1⟩ ⊗ 1/√2(|00⟩ + |11⟩)
           = 1/√2(a|0⟩|00⟩ + a|0⟩|11⟩) + 1/√2(b|1⟩|00⟩ + b|1⟩|11⟩)
           = 1/√2(a|000⟩ + a|011⟩ + b|100⟩ + b|111⟩).

At this point, Alice has the first two qubits and Bob has the third. Alice applies a controlled-NOT next:

    (CNOT ⊗ I)(Φ ⊗ Ψ₀) = 1/√2(a|000⟩ + a|011⟩ + b|110⟩ + b|101⟩).

Applying a Hadamard, H ⊗ I ⊗ I, to the first qubit gives

    1/2 [ |00⟩(a|0⟩ + b|1⟩) + |01⟩(a|1⟩ + b|0⟩) + |10⟩(a|0⟩ - b|1⟩) + |11⟩(a|1⟩ - b|0⟩) ].

Alice now measures her two qubits and sends the two resulting classical bits to Bob. Depending on which result Alice reports, Bob applies the correction shown below to his qubit, recovering the original state a|0⟩ + b|1⟩.

Alice measures   Bob's state       Correction   Result
|00⟩             a|0⟩ + b|1⟩       I            a|0⟩ + b|1⟩
|01⟩             a|1⟩ + b|0⟩       X            a|0⟩ + b|1⟩
|10⟩             a|0⟩ - b|1⟩       Z            a|0⟩ + b|1⟩
|11⟩             a|1⟩ - b|0⟩       Y            a|0⟩ + b|1⟩
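The derivation can be checked numerically by simulating the circuit for all four measurement outcomes; the input amplitudes a and b are arbitrary test values, and Y is again taken as ZX.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

a, b = 0.6, 0.8                                  # arbitrary unknown state a|0> + b|1>
phi = np.array([a, b])
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # EPR pair shared by Alice and Bob
state = np.kron(phi, bell)                       # qubits: (unknown, Alice's EPR, Bob's EPR)

state = np.kron(CNOT, I) @ state                 # CNOT on Alice's two qubits
state = np.kron(np.kron(H, I), I) @ state        # Hadamard on Alice's first qubit

corrections = {(0, 0): I, (0, 1): X, (1, 0): Z, (1, 1): Z @ X}
for m1 in (0, 1):                                # enumerate Alice's measurement outcomes
    for m2 in (0, 1):
        idx = 4 * m1 + 2 * m2                    # amplitudes of |m1 m2 0> and |m1 m2 1>
        bob = np.array([state[idx], state[idx + 1]])
        bob = corrections[(m1, m2)] @ bob
        bob /= np.linalg.norm(bob)
        print((m1, m2), "->", np.round(bob, 3))  # always [0.6, 0.8]
```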
6.1 General Properties Required to Build a Quantum Computer
We will first describe briefly the general physical traits that any specific physical implementation must have. Currently, there is a big gap between demonstrations in the laboratory and generally useful devices. Moreover, most proposed laboratory implementations and the experiments carried out to date fail to completely satisfy all of the general characteristics. Which system ultimately satisfies all these constraints at a level required to build a true extensible quantum processor is not known. Here, we will comment on the general properties needed to build a useful quantum computing device.
6.1.1 Well-Characterized Qubits
The first requirement is that the qubits chosen must be well characterized. This requires that each individual qubit must have a well-defined set of quantum states that will make up the "qubit." So far, we have assumed that we had a two-level, or binary, quantum system. In reality, most quantum systems have more than two levels. In principle we could build a quantum computer whose individual elements or qubits consist of systems with four, five, ten, or any number of levels. The different levels we use may be a subset of the total number of levels in individual elements. Whatever the level structure of the qubit, we require that the levels being used have well-defined properties, such as energy. Moreover, a superposition of the levels of the qubit must minimize decoherence by hindering energy from moving into or out of the qubit. This general constraint requires that each individual qubit have the same internal level structure, regardless of its local external environment. This also requires that the qubit is well isolated from its environment to hinder energy flow between the environment and the qubit. Isolating information from the environment is easier for classical bits than for quantum bits. A classical bit or switch is in either the state "0" or the state "1," that is, on or off. Except in special cases, such as communication, CCDs, or magnetic disks, we engineer classical systems to be in one of these two possibilities, never in between. In those special cases interactions are kept to a few tens of devices. A quantum system or qubit is inherently a much more delicate system. Although we may prepare the system in some excited state |1⟩, most quantum systems will decay to |0⟩ or an arbitrary superposition because of interactions with the environment. Interaction with the environment must be controllable to build a large quantum processor. Moreover, in some proposed physical implementations, individual qubits may have a slightly different internal level structure resulting from either the manufacturing process or the interaction of the qubit with its
environment. This slight difference in level structure must be compensated for and should not change during the computation. The physical nature of the qubit may be any one of a number of properties, such as electron spin, nuclear spin, photon polarization, the motional or trapping state of a neutral atom or ion, or the flux or charge in a superconducting quantum interference device (SQUID). For instance, in the particular case of ion traps, it is the ground state of the hyperfine states that result from coupling the electron and nuclear spin together; further, it is only a specific pair of the magnetic sublevels of those hyperfine states. In a quantum dot, one again uses the complex structure of the device to come up with two states to act as the qubit. In this case, it corresponds more to an excitation of an electron or an electron-hole pair.
6.1.2 Scalable Qubit Arrays and Gates
This requirement is a logical extension of the previous requirement. Since individual qubits must be able to interact to build quantum gates, they must be held in some type of replicated trap or matrix. In the very early days of computing, mechanical switches or tubes in racks held information instead of microscopic transistors etched in silicon and mounted in integrated circuits. Thus, the specific nature of the supporting infrastructure depends on the nature of the qubit. Regardless of the matrix or environment holding individual qubits, it is essential that we can add more qubits without modifying the properties of the previous qubits and having to reengineer the whole system. Because quantum systems are so sensitive to their environment, scalability is not trivial. Scalability also requires that qubits are stable over periods that are long compared to both single-qubit operations and two-qubit gates. In other words, the states of the qubits must not decohere on a time scale comparable to one- and two-qubit operations. This is increasingly important in larger quantum computers, where larger portions of the computer must wait for some parts to finish.
6.1.3 Stability and Speed
Physical implementations of a qubit are based on different underlying physical effects. In general these physical effects have very different decoherence times. For example, nuclear spin relaxation can be from one-tenth of a second to a year, whereas the decoherence time is more like 10⁻³ s in the case of electron spin, and shorter still for a quantum dot or for an electron in certain solid state implementations. Although one might conclude that nuclear spins are the best, decoherence time is not the only concern.
A qubit must interact with external agencies so two qubits can interact. The stronger the external interactions, the faster two-qubit gates could operate. Because of the weak interaction of the nuclear spin with its environment, gates will likely take from 10⁻³ to 10⁻⁶ s to operate, giving "clock speeds" of from 1 kHz to 1 MHz. The interaction for electron spin is stronger, so gates based on electron spin qubits may operate in 10⁻⁶ to 10⁻⁸ s, or at 1 to 100 MHz. Thus, electron spin qubits are likely to be faster, but less stable. Since quantum error correction, presented in Section 4.3, requires gates like those used for computations, error correction only helps if we can expect more than about 10,000 operations before decoherence, that is, an error. Reaching this level of accuracy with a scalable method is the current milestone. Therefore, it is really the ratio of decoherence time to gate operation time, or operations until decoherence, that is the near term goal. Dividing decoherence times by operation times, we may have from 10² to 10¹³ nuclear spin operations before a decoherence, or between 10³ and 10⁵ electron spin operations before a decoherence. Although decoherence and operation times are very different, the number of operations may be similar. This is not surprising since, in general, the weaker the underlying interactions, the slower the decoherence and the slower the one- and two-qubit operations. We see that many schemes offer the possibility of more than 10,000 operations between decoherence events. The primary problem is engineering the systems to get high gate speeds and enough operations before decoherence. It is not clear which physical implementation will provide the best qubits and gates. If we examine the state of the art in doing one-qubit operations while controlling decoherence, ions in an ion trap and single nuclear spins look the most promising. However, the weak interaction with the environment, and thus potentially with other qubits, makes two-qubit gates significantly slower than in some solid-state implementations.
6.1.4 Good Fidelity
Fidelity is a measurement of the decoherence or decay of many qubits relative to one- and two-qubit gate times. Another aspect of fidelity is that when we perform an operation such as a CNOT, we do not expect to do it perfectly, but nearly perfectly. As an example, when we flip a classical bit from "0" to "1," it either succeeds or fails; there is no in between. Even at a detailed view, it is relatively straightforward to strengthen classical devices to add more charge, increase voltage, etc., driving the value to a clean "0" or "1," charged or uncharged, on or off state. When we flip a quantum bit, we intend to exchange the amplitudes of the "0" and "1" states: a|0⟩ + b|1⟩ → b|0⟩ + a|1⟩. Since most physical effects we use are continuous values, the result of the operation is likely to have a small
distortion: a|0⟩ + b|1⟩ → b′|0⟩ + e^(iε)a′|1⟩. The error, ε, is nearly zero, and the primed quantities are almost equal to the unprimed quantities, but they are not perfectly equal. In this case, we require that the overlap between our expected result and the actual result be such that the net effect is to have a probability of error for a gate operation on the order of 10⁻⁴.
6.1.5 Universal Family of Unitary Transformations
In general, to build a true quantum computer it is only necessary to be able to perform an arbitrary one-qubit operation and almost any single two-qubit gate. If one can do arbitrary single-qubit operations and almost any single two-qubit gate, one can combine these operations to perform single-qubit operations, such as the Hadamard, and multiqubit operations, such as a CNOT or C²NOT gate (see Section 4.1) [21]. From these, we know we can construct any Boolean function.
6.1.6 Initialize Values
Another important operation is the initialization of all the qubits into a well-defined and well-characterized initial state. This is essential if one wishes to perform a specific algorithm, since the initial state of the system must typically be known and be unentangled. Initializing the qubits corresponds to putting a quantum system into a completely coherent state, which basically requires removing all thermal fluctuations and reducing the entropy (lack of order) of the system to 0. This is an extremely difficult task.
6.1.7 Readout
Another important requirement is the ability to reliably read resultant qubits. In many experimental situations, this is a technically challenging problem because one needs to detect a quantum state of a system that has been engineered to only weakly interact with its environment. However, this same system at "readout time" must interact sufficiently strongly that we can ascertain whether it is in the state |0⟩ or |1⟩, while simultaneously ensuring that the result is not limited by our measurement or detection efficiency. Readout, along with single-qubit gates, implies we need to be able to uniquely address each qubit.
6.1.8 Types of Qubits
We must be able to store quantum information for relatively long times: the equivalent of main memory, or RAM, in classical computers. There are two general possibilities: material qubits, such as atoms or electrons, and "flying"
qubits, or photons. Each has its own strengths and weaknesses. Material qubits can have decoherence times on the order of days, while photons move very fast and interact weakly with their environment. A quantum memory will likely be made of material systems consisting of neutral atoms or ions held in microtraps, solid state materials involving electron or nuclear spins, or artificial atoms like quantum dots. These material qubits are ideal for storing information if decoherence can be controlled. For example, single ions can be coherently stored for several days. However, manipulating individual photons or trying to build a two-qubit gate using photons appears quite difficult. A quantum processor is likely to use material qubits, too, to build the equivalent of registers and to interact well with the quantum memory.
6.1.9 Communication
When we wish to transmit or move quantum information, we typically want to use photons: they travel very fast and interact weakly with their environment. To build a successful quantum communication system will likely require the ability to move quantum information between material qubits and photons. This is another relatively difficult task, but several experiments have been successfully performed. However, different implementations of material qubits will likely need different solutions to moving entangled or superposed information from particles to photons and back.
6.2 Realizations
Just as there are many subatomic properties that may be exploited for quantum effects, realizations range from brand new technologies to decades-old technologies harnessed and adapted for quantum computing. These technologies can be categorized into two basic classes. The first is a top-down approach, in which we take existing technology from the materials science and solid state fields and adapt it to produce quantum systems. This top-down approach involves creative ideas such as implanting single-ion impurities in silicon, designing very uniform quantum dots whose electronic properties are well characterized and controllable, using superconducting quantum interference devices, and several others. The second, contrasting approach is a bottom-up approach. The idea here is to start with a good, natural qubit, such as an atom or ion, and trap the particle in a benign environment. This latter concept provides very good single qubits but leaves open the question of scalability, especially when one begins to examine the mechanical limits of current traps. The benefit of this approach is excellent, uniform, decoherence-free qubits with great readout and initialization capabilities.
The hard problem will be scaling these systems and making the gate operations fast. The top-down approach suffers from decoherence in some cases or a dramatic failure of uniformity in the individual qubits: "identical qubits" are not truly uniform, decoherence-free individual qubits. The bottom-up approach has the benefit of good quality qubits and starting with a basic understanding of decoherence processes. Below we will briefly discuss some of these possible technologies.
6.2.1 Charged Atoms in an Ion Trap
The one system that has had great success is ions in an ion trap. Dave Wineland's group at NIST, Boulder has

• entangled four ions,
• shown exceedingly long coherence times for a single qubit,
• demonstrated high-efficiency readout,
• initialized four atoms into their ground state, and
• multiplexed atoms between two traps.

They have also shown violations of Bell's inequalities [22] and had many other successes. Their remarkable success and leadership of this effort blazes new frontiers in the experimental approaches to quantum computation, and their progress shows no signs of slowing. Ions of beryllium are held single file. Laser pulses flip individual ions. To implement a CNOT gate, the motion of the ions "sloshing" back and forth in the trap is coupled to the electron levels. That is, if ions are in motion, the electron level is flipped. Otherwise the electron level is unchanged. This is the operation described abstractly in Section 4.1.
6.2.2 Neutral Atoms in Optical Lattices or Microtraps
Several groups are attempting to repeat the ion trap success using neutral atoms in optical lattices, where a trapping potential results from intersecting standing waves of light from four or more laser beams in free space, or in micro-magnetic or micro-optical traps. These efforts are just getting seriously underway and appear to have many of the advantages of the ion scheme. One major difference is that the atoms interact less strongly with each other than the ions do. This could lead to less decoherence, but it is also likely to lead to slower two-qubit gate operations because of the weaker interactions. Much of the promise of the neutral atom
approach is based on the remarkable advances made in the past two decades in laser cooling of atoms and the formation of neutral atom Bose-Einstein condensates, where thousands of atoms are "condensed into" a single quantum state at temperatures of a few nanokelvins, or billionths of a degree above absolute zero. These advances have allowed scientists to manipulate large numbers of atoms in extremely controlled and exotic ways. The tools, techniques, and understandings developed over these past two decades may prove very useful in these current attempts to create quantum gates with these systems.
6.2.3 Solid State

Many different approaches fall under the realm of solid state. One general approach is to use quantum dots, so-called artificial atoms, as qubits. If it can be done in a controlled and decoherence-free way, then one has the advantages of the atom and ion approach while having the controlled environment and assumed scalability that comes with solid state material processing. Another variant is embedding single-atom impurities, such as ³¹P, in silicon. The ³¹P nuclear spin serves as a qubit while basic semiconductor technology is used to build the required scalable infrastructure. Alternative approaches based on excitons in quantum dots or electronic spins in semiconductors are also being investigated. The primary difficulty of these approaches is building the artificial atoms or implanting the ³¹P impurities precisely where required.
6.2.4 NMR

Nuclear magnetic resonance (NMR) has shown some remarkable achievements in quantum computing. However, it is widely believed that the current NMR approach will not scale to systems with more than 15 or 20 qubits. NMR uses ingenious series of radio-frequency pulses to manipulate the nuclei of atoms in molecules. Although all isolated atoms of a certain element resonate at the same frequency, their interactions with other atoms in a molecule cause slight changes in resonance. The NMR approach is extremely useful in coming up with a series of pulses to manipulate relatively complex systems of atoms in a molecule in situations where individual qubit rotations or gates might appear problematic. Thus, this work provides useful insight into how to manipulate complex quantum systems. Low-temperature solid state NMR is one possible way forward. As in the previous section, a single-atom impurity, such as ³¹P in silicon, is a qubit, but NMR attempts to perform single-site addressability, detection, and manipulation on nuclear spins.
6.2.5 Photon
Photons are clearly the best way to transmit information, since they move at the speed of light and do not strongly interact with their environment. This near-perfect characteristic for quantum communication makes photons problematic for quantum computation. In fact, early approaches to using photons for quantum computation suffered from a requirement of exponential numbers of optical elements and resources as one scaled the system. A second problem was that creating conditional logic for two-qubit gates appeared very difficult, since two photons do not interact strongly even in highly nonlinear materials; in fact, most nonlinear phenomena involving light fields result only at high intensity. Recently, new approaches for doing quantum computation with photons have appeared that depend on using measurement in a "dual-rail" approach to create entanglement. This removes many of the constraints of early approaches and provides an alternative way to create quantum logic. Experimental efforts using this approach are just beginning. The approach will still have to solve the technically challenging problems caused by the high-speed motion of its qubits, a benefit in communication and a possible benefit in computational speed, and by the lack of highly efficient single-photon detectors essential to the success of this approach.
6.2.6 Optical Cavity Quantum Electrodynamics
Other atomic-type approaches involve strongly coupling atoms or ions to photons using high-finesse optical cavities. A similar type of approach may be possible using tailored quantum dots, ring resonators, or photonic materials. One advantage of these types of approaches is the ability to move quantum information from photons to material qubits and back. This type of technology appears to be essential anyway since material qubits (e.g., atoms, ions, electrons) are best for storing quantum information while photons, i.e., flying qubits, are best for transmitting quantum information. It is possible that these approaches may provide very fast quantum processors as well. Numerous efforts to investigate these schemes are underway.
6.2.7 Superconducting Qubits
Superconducting quantum interference devices (SQUIDs) can provide two types of qubits: flux-based qubits, corresponding to bulk quantum circulation of current, or charge-based qubits, based on the charges responsible for superconductivity. SQUID-based science has been a field of investigation for several decades but has only recently shown an ability to observe Rabi flopping, a key experiment that shows the ability to
do single-qubit operations. This approach to quantum computation has great potential but also will have to overcome numerous technical difficulties. One major issue is the need to operate a bulk system at liquid helium temperatures. In summary, numerous physical approaches to quantum computing have been proposed and many are under serious research. Which of these approaches will ultimately be successful is not clear. In the near term, the ions and atomic systems will likely show the most progress, but the final winner will be the system that meets all of the technical requirements. This system may not even be among those listed above. What is important is that each of these approaches is providing us with an increased understanding of complex quantum systems and their coupling to the environment. This knowledge is essential to tackling the broad range of technical barriers that will have to be overcome to bring this exciting, perhaps revolutionary, field to fruition.
7. Conclusions
It will be at least a decade, and probably longer, before a practical quantum computer can be built. Yet the introduction of principles of quantum mechanics into computing theory has resulted in remarkable results already. Perhaps most significantly, it has been shown that there are functions that can be computed on a quantum computer that cannot be efficiently computed with a conventional computer (i.e., a classical Turing machine). This astonishing result has changed the understanding of computing theory that has been accepted for more than 50 years. Similarly, the application of quantum mechanics to information theory has shown that the accepted Shannon limit on the information carrying capacity of a bit can be exceeded. The field of quantum computing has produced a few algorithms that vastly exceed the performance of any conventional computer algorithm, but it is unclear whether these algorithms will remain rare novelties, or if quantum methods can be applied to a broad range of computing problems. The future of quantum communication is less uncertain, but a great deal of work is required before quantum networks can enter mainstream computing. Regardless of the future of practical quantum information systems, the union of quantum physics and computing theory has developed into a rich field that is changing our understanding of both computing and physics.
Appendix

The jump in the derivation of the answer to Deutsch's function characterization problem in Section 4.2.4 started after the application of the Hadamard, leaving us with the equation
    1/2√2 [ (|0⟩ + |1⟩)|0 ⊕ f(0)⟩ - (|0⟩ + |1⟩)|1 ⊕ f(0)⟩ + (|0⟩ - |1⟩)|0 ⊕ f(1)⟩ - (|0⟩ - |1⟩)|1 ⊕ f(1)⟩ ].

The easiest way to follow the result is to do case analysis. Here we have four cases: each possible result of f(0) and f(1). The exclusive-or operation is 0 ⊕ a = a and 1 ⊕ a = ā (the complement of a).

Case I: f(0) = 0, f(1) = 0

    1/2√2 [ (|0⟩ + |1⟩)|0⟩ - (|0⟩ + |1⟩)|1⟩ + (|0⟩ - |1⟩)|0⟩ - (|0⟩ - |1⟩)|1⟩ ]

Distributing the second qubit across the superposition of the first yields

    = 1/2√2 (|0⟩|0⟩ + |1⟩|0⟩ - |0⟩|1⟩ - |1⟩|1⟩ + |0⟩|0⟩ - |1⟩|0⟩ - |0⟩|1⟩ + |1⟩|1⟩).

Collecting and canceling like terms, we get

    = 1/2√2 (2|0⟩|0⟩ - 2|0⟩|1⟩).

We now factor out the 2 and the first qubit:

    = 1/√2 |0⟩(|0⟩ - |1⟩).

Generalizing, we get the final result for this case:

    = 1/√2 |f(0) ⊕ f(1)⟩(|0⟩ - |1⟩).

Case II: f(0) = 0, f(1) = 1

    1/2√2 [ (|0⟩ + |1⟩)|0⟩ - (|0⟩ + |1⟩)|1⟩ + (|0⟩ - |1⟩)|1⟩ - (|0⟩ - |1⟩)|0⟩ ]
    = 1/2√2 (|0⟩|0⟩ + |1⟩|0⟩ - |0⟩|1⟩ - |1⟩|1⟩ + |0⟩|1⟩ - |1⟩|1⟩ - |0⟩|0⟩ + |1⟩|0⟩).

The reader can verify that collecting, canceling, and factoring gives

    = 1/√2 |1⟩(|0⟩ - |1⟩).

This generalizes to Case II's final result:

    = 1/√2 |f(0) ⊕ f(1)⟩(|0⟩ - |1⟩).

Case III: f(0) = 1, f(1) = 0

    1/2√2 [ (|0⟩ + |1⟩)|1⟩ - (|0⟩ + |1⟩)|0⟩ + (|0⟩ - |1⟩)|0⟩ - (|0⟩ - |1⟩)|1⟩ ]
    = 1/2√2 (|0⟩|1⟩ + |1⟩|1⟩ - |0⟩|0⟩ - |1⟩|0⟩ + |0⟩|0⟩ - |1⟩|0⟩ - |0⟩|1⟩ + |1⟩|1⟩)
    = 1/√2 |1⟩(|1⟩ - |0⟩)
    = 1/√2 |f(0) ⊕ f(1)⟩(|1⟩ - |0⟩).

Case IV: f(0) = 1, f(1) = 1

    1/2√2 [ (|0⟩ + |1⟩)|1⟩ - (|0⟩ + |1⟩)|0⟩ + (|0⟩ - |1⟩)|1⟩ - (|0⟩ - |1⟩)|0⟩ ]
    = 1/2√2 (|0⟩|1⟩ + |1⟩|1⟩ - |0⟩|0⟩ - |1⟩|0⟩ + |0⟩|1⟩ - |1⟩|1⟩ - |0⟩|0⟩ + |1⟩|0⟩)
    = 1/√2 |0⟩(|1⟩ - |0⟩)
    = 1/√2 |f(0) ⊕ f(1)⟩(|1⟩ - |0⟩).

We compute the second qubit to be |0⟩ - |1⟩ in Cases I and II, and |1⟩ - |0⟩ in Cases III and IV. This extra multiplication by -1 is called a "global phase." A global phase is akin to rotating a cube in purely empty space: without a reference, it is merely a mathematical artifact and has no physical meaning. Thus, all the cases result in

    1/√2 |f(0) ⊕ f(1)⟩(|0⟩ - |1⟩).
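The case analysis can also be checked mechanically by simulating Deutsch's circuit for each of the four possible one-bit functions f; the basis ordering |x⟩|y⟩ (with x the query qubit) is an implementation choice.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

def U_f(f):
    """Oracle |x>|y> -> |x>|y XOR f(x)> as an explicit 4x4 permutation matrix."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

functions = {"f=0": lambda x: 0, "f=1": lambda x: 1,
             "f=x": lambda x: x, "f=not x": lambda x: 1 - x}

for name, f in functions.items():
    state = np.kron(np.array([1, 0]), np.array([0, 1]))   # |0>|1>
    state = np.kron(H, H) @ state                         # Hadamard both qubits
    state = U_f(f) @ state                                 # apply the oracle
    state = np.kron(H, I) @ state                          # Hadamard the first qubit
    p_first_is_1 = state[2] ** 2 + state[3] ** 2           # P(first qubit = 1) = f(0) XOR f(1)
    print(f"{name:8s}  P(first qubit measures 1) = {p_first_is_1:.0f}")
```

The constant functions give probability 0 and the balanced functions give probability 1, as the case analysis concludes.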
Exception Handling

PETER A. BUHR, ASHIF HARJI, AND W. Y. RUSSELL MOK
Department of Computer Science
University of Waterloo
Waterloo, Ontario N2L 3G1, Canada
{pabuhr,asharji,wyrmok}@uwaterloo.ca

Abstract

It is no longer possible to consider exception handling as a secondary issue in a language's design, or even worse, as a mechanism added after the fact via a library approach. Exception handling is a primary feature in language design and must be integrated with other major features, including advanced control flow, objects, coroutines, concurrency, real-time, and polymorphism. Integration is crucial as there are both obvious and subtle interactions between exception handling and other language features. Unfortunately, many exception handling mechanisms work only with a subset of the language features and in the sequential domain. A comprehensive design analysis is presented for an easy-to-use and extensible exception-handling mechanism within a concurrent, object-oriented environment. The environment includes language constructs with separate execution stacks, e.g., coroutines and tasks, so the exception environment is significantly more complex than the normal single-stack situation. The pros and cons of various exception features are examined, along with feature interaction with other language mechanisms. Both exception termination and resumption models are examined in this environment, and previous criticisms of the resumption model, a feature commonly missing in modern languages, are addressed.

This chapter is an extended version of the paper "Advanced Exception Handling Mechanisms" in IEEE Transactions on Software Engineering, 26(9), 820-836, September 2000. © 2000 IEEE. Portions reprinted with permission.

1. Introduction
2. EHM Objectives
3. Execution Environment
4. EHM Overview
5. Handling Models
   5.1 Nonlocal Transfer
   5.2 Termination
   5.3 Retry
   5.4 Resumption
6. EHM Features
   6.1 Catch-Any and Reraise
   6.2 Derived Exceptions
   6.3 Exception Parameters
   6.4 Bound Exceptions and Conditional Handling
   6.5 Exception List
7. Handler Context
   7.1 Guarded Block
   7.2 Lexical Context
8. Propagation Models
   8.1 Dynamic Propagation
   8.2 Static Propagation
9. Propagation Mechanisms
10. Exception Partitioning
    10.1 Derived Exception Implications
11. Matching
12. Handler Clause Selection
13. Preventing Recursive Resuming
    13.1 Mesa Propagation
    13.2 VMS Propagation
14. Multiple Executions and Threads
    14.1 Coroutine Environment
    14.2 Concurrent Environment
    14.3 Real-Time Environment
15. Asynchronous Exception Events
    15.1 Communication
    15.2 Nonreentrant Problem
    15.3 Disabling Asynchronous Exceptions
    15.4 Multiple Pending Asynchronous Exceptions
    15.5 Converting Interrupts to Exceptions
16. Conclusions
Appendix: Glossary
References

1. Introduction
Substantial research has been done on exceptions but there is little agreement on what an exception is. Attempts have been made to define exceptions in terms
of errors but an error itself is also ill-defined. Instead of struggling to define what an exception is, this discussion examines the entire process as a control flow mechanism, and an exception is a component of an exception-handling mechanism (EHM) that specifies program behavior after an exception has been detected. The control flow generated by an EHM is supposed to make certain programming tasks easier, in particular, writing robust programs. Robustness results because exceptions are an active rather than a passive phenomenon, forcing programs to react immediately when exceptions occur. This dynamic redirection of control flow indirectly forces programmers to think about the consequences of exceptions when designing and writing programs. Nevertheless, exceptions are not a panacea and are only as good as the programmer using them. The strongest definition we are prepared to give for an exception is an event that is known to exist but which is ancillary to an algorithm or execution. Because it is ancillary, the exception may be forgotten or ignored without penalty in the majority of cases, e.g., an arithmetic overflow, which is a major source of errors in programs. In other situations, the exception always occurs but with a low frequency, e.g., encountering end-of-file when reading data. Essentially, a programmer must decide on the level of frequency that moves an event from the algorithmic norm to an exceptional case. Once this decision is made, the mechanism to deal with the exceptional event is best moved out of the normal algorithmic code and handled separately. It is this mechanism that constitutes an EHM. Even with the availability of EHMs, the common programming techniques used to handle exceptions are return codes and status flags. The return code technique requires each routine to return a value on its completion. Different values indicate whether a normal or rare condition has occurred during the execution of a routine. Alternatively, or in conjunction with return codes, is the status flag technique, which uses a shared variable to indicate the occurrence of a rare condition. Setting a status flag indicates a rare condition has occurred; the value remains as long as it is not overwritten by another condition. Both techniques have noticeable drawbacks. First, and foremost, the programmer is required to explicitly test the return values or status flags; hence, an error is discovered and subsequently handled only when checks are made. Without timely checking, a program is allowed to continue after an error, which can lead to wasted work at the very least, or an erroneous computation leading to failure at the very worst. Second, these tests are located throughout the program, reducing readability and maintainability. Third, as a routine can encounter many different errors, it may be difficult to determine if all the necessary error cases are handled. Finally, removing, changing, or adding return or status values is difficult as the testing is coded inline. The return code technique often encodes exception values among normal returned values, which artificially enlarges the range of valid values independent of the computation. Hence, changing a value representing
an exception into a normal return value or vice versa can result in interactions between the exception handling and normal code, where the two cases should be independent. The status flag technique uses a shared variable that precludes its use in a concurrent environment as it can change unpredictably. Fortunately, modern EHM techniques are slowly supplanting return codes and flags, even though EHMs have been available for more than two decades. A general framework is presented for exception handling, along with an attempt to compose an ideal EHM, with suggested solutions to some outstanding EHM problems. In constructing the framework, a partial survey of existing EHMs is necessary to compare and contrast approaches.
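The drawbacks of the return-code technique can be seen in a few lines of C++. The sketch below is ours, not the chapter's; the routine and status names are illustrative. The caller must remember to test every result, and forgetting the test lets execution continue silently after the error.

    #include <cstdio>

    // Return-code style error handling: exceptional values are mixed into the
    // normal calling protocol and every caller must test for them explicitly.
    enum Status { OK, RANGE_ERROR };

    Status divide( int a, int b, int &result ) {
        if ( b == 0 ) return RANGE_ERROR;        // rare condition encoded as a return value
        result = a / b;
        return OK;
    }

    int main() {
        int r;
        if ( divide( 6, 0, r ) != OK ) {         // omit this test and the error goes unnoticed
            std::printf( "divide failed\n" );
        }
    }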
2. EHM Objectives
The failure of return codes and status flags as an informal EHM suggests the need for a formal EHM supported by the programming language, which must:

1. alleviate multiple testing for the occurrence of rare conditions throughout the program, and at the location where the test must occur, be able to change control flow without requiring additional testing and transfers,
2. provide a mechanism to prevent an incomplete operation from continuing, and
3. be extensible to allow adding, changing, and removing exceptions.

The first objective targets readability and programmability by eliminating checking of return codes and flags, and removing the need to pass fix-up routines or have complex control logic within normal code to deal with exceptional cases. The second objective provides a transfer from the exception point that disallows returning, directing control flow away from an operation where local information is possibly corrupted; i.e., the operation is nonresumable. The last objective targets extensibility, easily allowing change in the EHM, and these changes should have minimal effects on existing programs using them. Two existing EHMs illustrate the three objectives:

Unix signal mechanism. On encountering a rare condition, a signal (interrupt) is generated, which preempts execution and calls a handler routine to deal with the condition, suspending prior execution; when the handler routine returns, prior execution continues. This change of control flow does not require the programmer's involvement or testing any error codes, as there is (usually) no explicit call to the signal handler in the user program. Using a special jump facility, longjmp, the handler routine can prevent an incomplete operation from continuing, and possibly terminate multiple active blocks between the signal handler and the transfer point (see Section 5.1 for details of this mechanism).
Extensibility is quite limited, as most signals are predefined and unavailable to programmers. If a library uses one of the few user available signals, all clients must agree on the signal's definition, which may be impossible. Ada exception mechanism. On encountering a rare condition, an exception is raised in Ada terminology, and control flow transfers to a sequence of statements to handle the exception. This change of control flow does not require the programmer's involvement or testing any error codes. The operation encountering the rare condition cannot be continued, and possibly multiple active blocks between the raise point and the statements handling the exception are terminated. A new exception can be declared as long as there is no name conflict in the flat exception name-space; hence the mechanism is reasonably extensible. A good EHM should strive to be orthogonal with other language features; i.e., the EHM features should be able to be used in any reasonable context without obscure restrictions. Nevertheless, specific implementation and optimization techniques for some language constructs can impose restrictions on other constructs, particularly the EHM.
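The signal mechanism just outlined can be sketched in a few lines of C/C++. This is our illustration, not from the chapter: POSIX behavior is assumed, SIGFPE is chosen only for the example, and production code would prefer sigsetjmp/siglongjmp and respect async-signal-safety rules.

    #include <csetjmp>
    #include <csignal>
    #include <cstdio>

    // The handler uses a nonlocal transfer to abandon the interrupted operation.
    static std::jmp_buf env;

    void on_signal( int ) {
        std::longjmp( env, 1 );                  // prevent the incomplete operation from continuing
    }

    int main() {
        std::signal( SIGFPE, on_signal );        // install the handler; no explicit call in user code
        if ( setjmp( env ) == 0 ) {
            std::raise( SIGFPE );                // stand-in for a hardware-detected rare condition
            std::printf( "not reached\n" );
        } else {
            std::printf( "operation abandoned by the signal handler\n" );
        }
    }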
3. Execution Environment
The structure of the execution environment has a significant effect on an EHM; e.g., a concurrent, object-oriented environment requires a more complex EHM than a sequential non-object-oriented environment. The execution model described in [1] is adopted for this discussion; it identifies three elementary execution properties: 1. Execution is the state information needed to permit independent execution. It includes local and global data, current execution location, and routine activation records (i.e., the runtime stack) of a sequential computation. From the exception perspective, an execution is the minimal language unit in which an exception can be raised. In simple sequential programming languages, there is only one execution, which makes exception handling simple and straightforward. More complex languages allow multiple executions but each is executed sequentially (called coroutines). In this case, only one exception can occur at a time but there are multiple independent units in which an exception can occur. Interaction among these units with respect to exceptions now becomes an issue. 2. Thread is execution of code that occurs independently of and possibly concurrently with another execution; thread execution is sequential as it changes an execution's state. Multiple threads provide concurrent execution; multiple CPUs provide parallel execution of threads. A context switch is a change in the execution/thread binding.
A thread performs normal and exceptional execution as it changes an execution's state. A thread may execute on its own execution or that of a coroutine, so any concurrent system is complex from an exception perspective. That is, multiple exceptions can be raised simultaneously within each execution being changed by different threads.

3. Mutual exclusion is serializing execution of an operation on a shared resource. Mutual exclusion is a concern only when there are multiple threads, as these threads may attempt to simultaneously change the same data. In this case, the data can become corrupted if the threads do not coordinate read and write operations on the data. From the exception perspective, the occurrence of simultaneous exceptions may result in simultaneous access of shared data, either in the implementation of exceptions in the runtime system or at the user level with respect to data that is part of the exception. In either case, the data must be protected from simultaneous access via mutual exclusion.

The first two properties are fundamental; i.e., it is impossible to create them from simpler constructs in a programming language. Only mutual exclusion can be generated using basic control structures and variables (e.g., Dekker's algorithm), but software algorithms are complex and inefficient. Thus, these three properties must be supplied via the programming language. Table I shows the different constructs possible when an object possesses different execution properties; each of the eight entries in the table is discussed below.

TABLE I
ELEMENTARY EXECUTION PROPERTIES

    Object properties                  Object's member routine properties
    Thread    Execution-state          No mutual exclusion       Mutual exclusion
    no        no                       1 object                  2 monitor
    no        yes                      3 coroutine               4 coroutine-monitor
    yes       no                       5 (rejected)              6 (rejected)
    yes       yes                      7 (rejected)              8 task
Case 1 is an object (or a routine not a member of an object) using the caller's execution and thread to change its state. For example,

    class foo {
        void mem(...) { ... }
    };
    foo f;
    f.mem(...);        // caller's execution and thread
The call f.mem(...) creates an activation record (stack frame) on the runtime stack containing the local environment for member routine mem, i.e., local variables and state. This activation record is pushed on the stack of the execution associated with the thread performing the call. Since this kind of object provides no mutual exclusion, it is normally accessed only by a single thread.

Case 2 is like Case 1 but deals with concurrent access by ensuring mutual exclusion for the duration of each computation by a member routine, called a monitor [1a]. For example,

    monitor foo {
        void mem(...) { ... }
    };
    foo f;
    f.mem(...);        // caller's execution/thread and mutual exclusion
The call f.mem(...) works as in Case 1, with the additional effect that only one thread can be active in the monitor at a time. The implicit mutual exclusion is a fundamental aspect of a monitor and is part of the programming language.

Case 3 is an object that has its own execution-state but no thread. Such an object uses its caller's thread to advance its own execution and usually, but not always, returns the thread back to the caller. This abstraction is a coroutine [1b]. For example,

    coroutine foo {
        void mem(...) { ... }
    };
    foo f;
    f.mem(...);        // f's execution and caller's thread
The call f.mem(...) creates an activation record (stack frame) on f's runtime stack and the calling thread performs the call. In this case, the thread "context switches" from its execution to the execution of the coroutine. When the call returns, the thread context switches from the coroutine's execution back to its execution.

Case 4 is like Case 3 but deals with the concurrent access problem by ensuring mutual exclusion, called a coroutine-monitor. For example,

    cormonitor foo {
        void mem(...) { ... }
    };
    foo f;
    f.mem(...);        // f's execution/caller's thread and mutual exclusion
The call f.mem(...) works as in Case 3, with the additional effect that only one thread can be active in the coroutine-monitor at a time.

Cases 5 and 6 are objects with a thread but no execution-state. Both cases are rejected because the thread cannot be used to provide additional concurrency. That is, the object's thread cannot execute on its own since it does not have an execution, so it cannot perform any independent actions.

Case 7 is an object that has its own execution and thread. Because it has both properties it is capable of executing on its own; however, it lacks mutual exclusion, so access to the object's data via calls to its member routine is unsafe, and therefore, this case is rejected.

Case 8 is like Case 7 but deals with the concurrent access problem by implicitly ensuring mutual exclusion, called a task. For example,

    task foo {
        void mem(...) { ... }
    };
    foo f;
    f.mem(...);        // choice of execution/thread and mutual exclusion
The call f. m e m (...) works as in Case 4, except there are two threads associated with the call, the caller's and the task's. Therefore, one of the two threads must block during the call, called a rendezvous. The key point is that an execution supplies a stack for routine activation records, and exceptional control-flow traverses this stack to locate a handler, often terminating activation records as it goes. When there is only one stack, it is straightforward to define consistent and well-formed semantics. However, when there are multiple stacks created by instances of coroutines and/or tasks, the EHM semantics can and should become more sophisticated, resulting in more complexity. For example, assume a simple environment composed of nested routine calls. When an exception is raised, the current stack is traversed up to its base activation-record looking for a handler. If no handler is found, it is reasonable to terminate the program, as no more handlers exist. Now, assume a complex environment composed of coroutines and/or tasks. When an exception is raised, the current coroutine/task stack is traversed up to its base activation-record looking for a handler. If no handler is found, it is possible to continue propagating the exception from the top of the current stack to another coroutine or task stack. The choice for selecting the point of continuation depends on the particular EHM
strategy. Hence, the complexity and design of the execution environment significantly affects the complexity and design of its EHM.
4. EHM Overview
An event is an exception instance, and is raised by executing a language or system operation, which need not be available to programmers, e.g., only the runtime system may raise predefined exceptions, such as hardware exceptions. Raising an exception indicates an abnormal condition the programmer cannot or does not want to handle via conventional control flow. As mentioned, what conditions are considered abnormal is programmer or system determined. The execution raising the event is the source execution. The execution that changes control flow due to a raised event is the faulting execution; its control flow is routed to a handler. With multiple executions, it is possible to have an exception raised in a source execution different from the faulting execution. Propagating an exception directs the control flow of the faulting execution to a handler, and requires a propagation mechanism to locate the handler. The chosen handler is said to have caught (catch) the event when execution transfers there; a handler may deal with one or more exceptions. The faulting execution handles an event by executing a handler associated with the raised exception. It is possible that another exception is raised or the current exception is reraised while executing the handler. A handler is said to have handled an event only if the handler returns. Unlike returning from a routine, there may be multiple return mechanisms for a handler (see Section 5). For a synchronous exception, the source and faulting execution are the same; i.e., the exception is raised and handled by the same execution. It is usually difficult to distinguish raising and propagating in the synchronous case, as both happen together. For an asynchronous exception, the source and faulting execution are usually different, e.g., raise E in Ex raises exception E from the current source execution to the faulting execution E x. Unlike a synchronous exception, raising an asynchronous exception does not lead to the immediate propagation of the event in the faulting execution. In the Unix example, an asynchronous signal can be blocked, delaying propagation in the faulting execution. Rather, an asynchronous exception is more like a nonblocking direct communication from the source to the faulting execution. The change in control flow in the faulting execution is the result of delivering the exception event, which initiates the propagation of the event in the faulting execution. While the propagation in the faulting execution can be carried out by the source, faulting or even another execution (see Section 14.1), for the moment, assumes the source raises the event and the faulting execution propagates and handles it.
Goodenough's seminal paper on exception handling suggests a handler can be associated with programming units as small as a subexpression and as large as a routine [2, pp. 686-687]. Between these extremes is associating a handler with a language's notion of a block, i.e., the facility that combines multiple statements into a single unit, as in Ada [3], Modula-3 [4], and C++ [5]. While the granularity of a block is coarser than an expression, our experience is that fine-grained handling is rare. As well, having handlers, which may contain arbitrarily complex code, in the middle of an expression can be difficult to read. In this situation, it is usually just as easy to subdivide the expression into multiple blocks with necessary handlers. Finally, handlers in expressions or for routines may need a mechanism to return results to allow execution to continue, which requires additional constructs [2, p. 690]. Therefore, this discussion assumes handlers are only associated with language blocks. In addition, a single handler can handle several different kinds of exceptions and multiple handlers can be bound to one block. Syntactically, the set of handlers bound to a particular block is the handler clause, and a block with handlers becomes a guarded block, e.g.,

    try {                    // introduce new block: guarded block
        raise E1;            // synchronous exception
    } catch( E1 ) {          // may handle multiple exceptions
        // handler1
    } catch( E2 ) {          // multiple handlers
        // handler2
    }
The propagation mechanism also determines the order that the handler clauses bound to a guarded block are searched. A block with no handler clause is an unguarded block. An exception may propagate from any block. In summary, an EHM = Exceptions + Raise + Propagation + Handlers, where exceptions define the kinds of events that can be generated, raise generates the exception event and finds the faulting execution, propagation finds the handler in the faulting execution, and handlers catch the raised event during propagation.
5. Handling Models
Yemini and Berry [6, p. 218] identify five exception handling models: nonlocal transfer, two forms of termination, retry, and resumption. An EHM can provide multiple models.
5.1 Nonlocal Transfer
Local transfer exists in all programming languages, implicitly or explicitly (see Fig. 1). Implicit local-transfer occurs in selection and looping constructs in the form of hidden goto statements, which transfer to lexically fixed locations. Explicit local-transfer occurs with a goto statement, which also transfers to lexically fixed locations. In both cases, the transfer points are known at compile time, and hence, are referred to as statically or lexically scoped transfer points. Dynamically scoped transfer points are also possible, called nonlocal transfer. In Fig. 2, the label variable L contains both a point of transfer and a pointer to an activation record on the stack containing the transfer point; therefore, a label value is not static. The nonlocal transfer in f using the goto directs control flow first to the specified activation record and then to the location in the code associated with the activation record. A consequence of the transfer is that blocks activated between the goto and the label value are terminated; terminating these
    Implicit Transfer:

    if (...) {          // false => transfer to else
        ...
    } else {            // transfer after else
        ...
    }
    while (...) {       // false => transfer after while
        ...
    }                   // transfer to start of while

    Explicit Transfer:

        if ( ! ... ) goto L1;
        ...
        goto L2;
    L1: ...
    L2: ;
    L3: if ( ! ... ) goto L4;
        ...
        goto L3;
    L4: ;

    FIG. 1. Statically scoped transfer.
    label L;
    void f() { goto L; }
    void g() { f(); }
    void h() {
        L = L1;
        f();
      L1:
        L = L2;
        g();
      L2: ;
    }
    FIG. 2. Dynamically scoped transfer; the accompanying diagrams show the activation records (stack) during each goto L, with the label value directing control first to L1 and then to L2 in h's activation record.
blocks is called stack unwinding. In the example, the first nonlocal transfer from f transfers to the static label L1 in the activation record for h, terminating the activation record for f. The second nonlocal transfer from f transfers to the static label L2 in the activation record for h, terminating the activation records for f and g. PL/I [7] is one of a small number of languages (Beta [8], C [9]) supporting nonlocal transfer among dynamic blocks through the use of label variables. The C routines setjmp and longjmp are a simplified nonlocal transfer, where setjmp sets up a dynamic label variable and longjmp performs the nonlocal transfer. An EHM can be constructed using a nonlocal transfer mechanism by labeling code to form handlers and terminating operations with nonlocal transfers to labels in prior blocks. For example, in Fig. 3, the PL/I program (left example) assigns to the label variable E1 a label in the scope of procedure TEST and then calls procedure F. Procedure F executes a transfer (GOTO) to the label variable, transferring out of F, through any number of additional scope levels, to the label L1 in TEST. Inside the handler, the same action is performed but the transfer point is changed to L2. Similarly, the C program (right example) uses setjmp to store the current execution context in variable E1, which is within the scope of the call to setjmp; setjmp returns a zero value, and a call is made to routine f. Routine f executes a transfer (longjmp) to the execution-context variable, transferring out of f, through any number of additional scope levels, back within the saved scope of setjmp, which returns a nonzero value. Inside the handler, the same action is performed but the transfer point is changed to the second call of setjmp. The key point is that the transfer point for the GOTO
    PL/I

    TEST: PROC OPTIONS(MAIN);
        DCL E1 LABEL;
        F: PROC;
            GOTO E1;
        END;
        E1 = L1;
        CALL F;
        RETURN;
        L1: /* HANDLER 1 */
        E1 = L2;
        CALL F;
        RETURN;
        L2: /* HANDLER 2 */
    END;

    C

    jmp_buf E1;
    void f( void ) {
        longjmp( E1, 1 );
    }
    int main() {
        if ( setjmp( E1 ) == 0 ) {
            f();
        } else {                        /* handler 1 */
            if ( setjmp( E1 ) == 0 ) {
                f();
            } else {                    /* handler 2 */
            }
        }
    }

    FIG. 3. Nonlocal transfer.
or longjmp is unknown statically; it is determined by the dynamic value of the label or execution-context variable. Unfortunately, nonlocal transfer is too general, allowing branching to almost anywhere (the structured programming problem). This lack of discipline makes programs less maintainable and error-prone [10, p. 102]. More importantly, an EHM is essential for sound and efficient code generation by a compiler (as for concurrency [11]). If a compiler is unaware of exception handling (e.g., setjmp/longjmp in C), it may perform code optimizations that invalidate the program, needing bizarre concepts like the volatile declaration qualifier. Because of these problems, nonlocal transfer is unacceptable as an EHM. However, nonlocal transfer is essential in an EHM; otherwise it is impossible to achieve the first two EHM objectives in Section 2, i.e., alleviating explicit testing and preventing return to a nonresumable operation. A restricted form of nonlocal transfer appears next.
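The optimization hazard mentioned above can be seen in a few lines of C/C++; the sketch is ours, not from the chapter. A local variable modified between setjmp and longjmp has an indeterminate value after the jump unless it is declared volatile, precisely because the compiler cannot see the nonlocal transfer.

    #include <csetjmp>
    #include <cstdio>

    static std::jmp_buf env;

    int main() {
        volatile int i = 0;                 // without volatile, i's value is indeterminate after the jump
        if ( setjmp( env ) == 0 ) {         // first return: direct call to setjmp
            i = 1;                          // modified between setjmp and longjmp
            std::longjmp( env, 1 );         // nonlocal transfer back into setjmp
        } else {                            // second return: via longjmp
            std::printf( "i = %d\n", i );   // reliably prints 1 only because i is volatile
        }
    }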
5.2 Termination
In the termination model, control flow transfers from the raise point to a handler, terminating intervening blocks on the runtime stack (like nonlocal transfer). When the handler completes, control flow continues as if the incomplete operation in the guarded block terminated without encountering the exception. Hence, the handler acts as an alternative operation for its guarded block. This model is the most popular, appearing in Ada, C++, ML [12], Modula-3, and Java [13]. The difference between nonlocal transfer and termination is that termination is restricted in the following ways:

• Termination cannot be used to create a loop, i.e., cause a backward branch in the program, which means only looping constructs can be used to create a loop. This restriction is important to the programmer since all the situations that result in repeated execution of statements in a program are clearly delineated by the looping constructs.

• Since termination always transfers out of containing blocks, it cannot be used to branch into a block. This restriction is important for languages allowing declarations within the bodies of blocks. Branching into the middle of a block may not create the necessary local variables or initialize them properly.

Yemini and Berry (also in CLU [21, p. 547]) divide termination into one level and multiple levels. That is, control transfers from the signaller to the immediate caller (one level) or from the signaller to any nested caller (multiple levels). However, this artificial distinction largely stems from a desire to support exception lists (see Section 6.5).
    Ada

    procedure main is
        E1 : exception;
        procedure f is
        begin
            raise E1;
        end f;
    begin
        f;
    exception
        when E1 =>   -- handler
    end main;

    C++

    void f() { throw 0; }
    int main() {
        try {
            f();
        } catch( int ) {
            // handler
        }
    }

    FIG. 4. Termination.
For example, in Fig. 4, the Ada program (left example) declares an exception E1 in the scope of procedure main and then calls procedure f. Procedure f executes a raise of the exception, transferring out of f, through any number of additional scope levels, to the handler at the end of main. The C++ program (right example) does not declare an exception label; instead, an object type is used as the label, and the type is inferred from an object specified at the raise point; in this case, throw 0 implies an exception label of int. Routine f executes a raise (throw) of the exception, transferring out of f, through any number of additional scope levels, to the handler at the end of the try statement in main. Note that termination achieves the first two EHM objectives in Section 2, without the drawbacks of nonlocal transfer. (Interestingly, the C++ approach seems to provide additional generality because any type can be an exception; i.e., there is no special exception type in the language. However, in practice, this generality is almost never used. First, using a type like int as an exception is dangerous because there is no exceptional meaning for this type. That is, one library routine can raise int to mean one thing and another routine can raise int to mean another; a handler catching int may have no idea about the meaning of the exception. To prevent this ambiguity, users create specific types describing the exception, e.g., overflow, underflow, etc. Second, these specific types are very rarely used both in normal computations and for raising exceptions, so the sole purpose of these types is for raising unambiguous exceptions. In essence, C++ programmers ignore the generality available in the language and follow a convention of creating explicit exception types. Therefore, having a specific exception type in a programming language is not a restriction, and it provides additional documentation, discrimination among conventional and exception types, and provides the compiler with exact knowledge about type usage rather than having to infer it from the program. Hence, a specific exception type is used in this discussion.)
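The convention just described is easy to illustrate in standard C++; the type and routine names below are ours, chosen only for the example.

    #include <climits>
    #include <iostream>

    struct Overflow {};                                     // specific exception type: its only purpose is to signal overflow

    int add( int a, int b ) {
        if ( b > 0 && a > INT_MAX - b ) throw Overflow();   // unambiguous: can only mean arithmetic overflow
        // throw 0;                                         // legal C++, but an int carries no exceptional meaning
        return a + b;
    }

    int main() {
        try {
            add( INT_MAX, 1 );
        } catch( Overflow ) {                               // catches only this overflow type, not every thrown int
            std::cout << "integer overflow" << std::endl;
        }
    }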
Termination is often likened to a reverse routine call, particularly when argument/parameters are available (see Section 6.3). Raise acts like a call and the handler acts like a routine, but control flows down the stack rather than up. In detail, the differences between routine call and termination are: • A routine call is statically bound, whereas a termination handler is dynamically bound. (Routine pointers and virtual routines, which are just routine pointers, are dynamically bound.) That is, the routine name of a call selects its associated code at compile time based on the lexical scope of the program, whereas the handler name of a raise selects its associated handler code at runtime based on the dynamic blocks on the stack (as in many versions of LISP). • A routine returns to the dynamic location of its call, whereas a termination handler returns to its associated lexical block. That is, a routine returns one stack frame to the caller's stack frame, whereas a handler returns to the lexical context of the guarded block it is associated with. A side effect of returning to a guarded-block's lexical context maybe stack unwinding if a raise is in another stack frame. Understanding the notions of static versus dynamic name binding and static versus dynamic transfer points are key to understanding exceptions.
5.3 Retry
The retry model combines the termination model with special handler semantics, i.e., restart the failed operation, creating an implicit loop in the control flow. There must be a clear beginning for the operation to be restarted. The beginning of the guarded block is usually the restart point and there is hardly any other sensible choice. The left example of Fig. 5 shows a retry handler by extending the C-I-+ exception mechanism; the example calculates a normalized sum for a set of numbers, ignoring negative values. The exception is raised using termination semantics, and the retry handler completes by jumping to the start of the try block. The handler is supposed to remove the abnormal condition so the operation can complete during retry. Mesa [14], Exceptional C [15], and Eiffel [16] provide retry semantics through a retry statement only available in the handler body. As mentioned, establishing the operation restart point is essential; reversing lines 5 and 6 in the figure generates a subtle error with respect to the exception but not normal execution, i.e., the s u m counter is not reset on retry. This error can be difficult to discover because control flow involving propagation may occur infrequently. In addition, when multiple handlers exist in the handler clause, these handlers must use the same restart point, which may make retrying more difficult to use in some cases.
    FIG. 5. Retry (left) versus simulation using a loop and termination (right), for a routine float nsum( int n, float a[] ) that computes a normalized sum ignoring negative values; in both versions sum is reset to 0 at the start of the try block, and the simulation wraps its try block in an infinite loop.
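Because only the opening lines of Fig. 5 survive in this copy, the following C++ sketch is ours rather than the chapter's figure. It simulates a retry handler with an infinite loop plus a termination handler, using a hypothetical NegValue exception that carries the offending index; note that the reset of sum sits inside the restarted block, which is exactly the point made about reversing lines 5 and 6 of the figure.

    struct NegValue { int index; };                  // hypothetical exception carrying the offending position

    float nsum( int n, float a[] ) {
        float sum = 0;
        int cnt = n;
        for ( ;; ) {                                 // infinite loop = implicit retry point
            try {
                sum = 0;                             // state reset belongs inside the restarted block
                for ( int i = 0; i < n; i += 1 ) {
                    if ( a[i] < 0 ) throw NegValue{ i };
                    sum += a[i];
                }
                break;                               // completed without an exception
            } catch( NegValue e ) {                  // termination handler standing in for "retry"
                a[e.index] = 0;                      // repair: neutralize the negative value ...
                cnt -= 1;                            // ... drop it from the count, then restart
            }
        }
        return cnt > 0 ? sum / cnt : 0;
    }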
For example, consider a guarded block T whose handler H handles exception R, denoted T(H(R)):

    try {                    // try block T
        ... resume R; ...
    } catch( R ) ...         // handler H for R: handles R
The try block resumes R. Handler H is called by the resume, and the blocks on the call-stack are ...
    ... → T(H(R)) → H(R)     (stack top)
Then H resumes exception R again, which finds the handler just above it at T(H(R)) and calls handler H(R) again, and this continues until the runtime stack overflows. Recursive resuming is similar to infinite recursion, and can be difficult to discover both at compile time and at runtime because of the dynamic choice of a handler. Asynchronous resuming compounds the difficulty because it can cause recursive resuming where it is impossible for synchronous resuming, because the asynchronous event can be delivered at any time. MacLaren briefly discusses the recursive resuming problem in the context of PL/I [10, p. 101], and the problem exists in Exceptional C and μSystem. Mesa made an attempt to solve this problem but its solution is often criticized as incomprehensible. Two solutions are discussed in Section 13.
8.2 Static Propagation
Knudsen proposed a static propagation mechanism [25,26], with the intention of resolving the dynamic propagation problems, using a handler based on Tennent's sequel construct [28, p. 108]. A sequel is a routine, including parameters; however,
when a sequel terminates, execution continues at the end of the block in which the sequel is declared rather than after the sequel call. Thus, handling an exception with a sequel adheres to the termination model. However, propagation is along the lexical hierarchy, i.e., static propagation, because of static name-binding. Hence, for each sequel call, the handling action is known at compile time. As mentioned, a termination handler is essentially a sequel as it continues execution after the end of the guarded block; the difference is the dynamic name-binding for termination handlers. Finally, Knudsen augments the sequel with virtual and default sequels to deal with controlled cleanup, but points out that mechanisms suggested in Section 6.1 can also be used [26, p. 48]. Static propagation is feasible for monolithic programs (left example in Fig. 10). However, it fails for modular (library) code as the static context of the module and user code are disjoint; e.g., if stack is separately compiled, the sequel call in push no longer knows the static blocks containing calls to push. To overcome this problem, a sequel can be made a parameter of stack (right example in Fig. 10). In static propagation, every exception called during a routine's execution is known statically, i.e., the static context and/or sequel parameters form the equivalent of an exception list (see Section 6.5). However, when sequels become part of a class's or routine's type signature, reuse is inhibited, as for exception lists. Furthermore, declarations and calls now have potentially many additional arguments, even if parameter defaults are used, which results in additional execution cost on every call. Interestingly, the dynamic handler selection issue is resolved only for monolithic programs; when sequels are passed as arguments, the selection becomes dynamic; i.e., the call does not know statically which handler is chosen, but it does eliminate the propagation search. Finally, there is no recursive resuming because there is no special resumption capability; resumption is achieved by explicitly passing fix-up routines and using normal routine call,
    Monolithic

    {                                         // new block
        sequel StackOverflow(...) { ... }
        class stack {
            void push( int i ) { ... StackOverflow(...); ... }
        };
        stack s;
        ... s.push( 3 );                      // overflow ?
    }                                         // sequel transfers here

    Separate Compilation

    class stack {                             // separately compiled
        stack( sequel overflow(...) ) { ... }
        void push( int i ) { ... overflow(...); ... }
    };

    {                                         // separately compiled
        sequel StackOverflow(...) { ... }
        stack s( StackOverflow );
        ... s.push( 3 );                      // overflow ?
    }                                         // sequel transfers here

    FIG. 10. Sequel compilation structure.
which is available in most languages. However, passing fix-up routines has the same problems as passing sequel routines. Essentially, if users are willing to explicitly pass sequel arguments, they are probably willing to pass fix-up routines. Finally, Knudsen shows several examples where static propagation provides syntax and semantics superior to traditional dynamic EHMs (e.g., CLU/Ada). However, with advanced language features, like generics and overloading, and advanced EHM features, it is possible to achieve almost equivalent syntax and semantics in the dynamic case. For these reasons, static propagation seldom appears in an EHM; instead, most EHMs use the more powerful and expressive dynamic propagation.
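The "handler as a parameter" idea in Fig. 10 can be approximated in ordinary C++ by passing a fix-up routine when the object is created; the sketch below is ours and, unlike a real sequel, the fix-up simply returns to push rather than terminating the block in which it is declared.

    #include <functional>
    #include <iostream>

    // The client supplies the handling action up front, so push never searches for a handler.
    class stack {
        std::function<void()> overflow;                 // fix-up routine supplied by the client
        int data[2];
        int top = 0;
      public:
        explicit stack( std::function<void()> f ) : overflow( f ) {}
        void push( int i ) {
            if ( top == 2 ) { overflow(); return; }     // handling action known at the call
            data[top++] = i;
        }
    };

    int main() {
        stack s( [] { std::cout << "stack overflow" << std::endl; } );
        s.push( 1 ); s.push( 2 );
        s.push( 3 );                                    // triggers the supplied fix-up
    }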
9. Propagation Mechanisms
Propagating directs control flow of the faulting execution to a handler; the search for a handler proceeds through the blocks, guarded and unguarded, on the call (or lexical) stack. Different implementation actions occur during the search depending on the kind of propagation, where the kinds of propagation are terminating and resuming, and both forms can coexist in a single EHM. Terminating or throwing propagation means control does not return to the raise point. The unwinding associated with terminating normally occurs during propagation, although this is not required; unwinding can occur when the handler is found, during the handler's execution, or on its completion. However, there is no advantage to delaying unwinding for termination, and doing so results in problems (see Sections 7.2 and 13) and complicates most implementations. Resuming propagation means control returns to the point of the raise; hence, there is no stack unwinding. However, a handler may determine that control cannot return, and needs to unwind the stack, i.e., change the resume into a terminate. This capability is essential to prevent unsafe resumption, and mechanisms to accomplish it are discussed below. Three approaches for associating terminating or resuming propagation with an exception are possible:

1. At the declaration of the exception, as in

    terminate E1;               // specific declaration
    resume E2;
    try {
        ... raise E1;           // generic raise
        ... raise E2;
    } catch( E1 ) ...           // generic handler
      catch( E2 ) ...
Associating the propagation mechanism at exception declaration means the raise and handler can be generic. With this form, there is a partitioning of exceptions, as in Goodenough [2] with ESCAPE and NOTIFY, μSystem [20] with exceptions and interventions, and Exceptional C [15] with exceptions and signals.

2. At the raise of the exception event, as in

    exception E;                // generic declaration
    try {
        ... terminate E;        // specific raise
        ... resume E;
    } catch( E ) ...            // generic handler
Associating the propagation mechanism at the raise means the declaration and handler can be generic. With this form, an exception can be used in either form; i.e., exception E can imply termination or resumption depending on the raise. The generic handler catching the exception must behave according to the kind of handler model associated with the exception event. As a result, it is almost mandatory to have a facility in the handler to determine the kind of exception as different actions are usually taken for each.

3. At the handler, as in

    exception E;                // generic declaration
    try {
        ... raise E;            // generic raise
    } terminate( E ) ...        // specific handler

    try {
        ... raise E;            // generic raise
    } resume( E ) ...           // specific handler
Associating the propagation mechanism at the handler means the declaration and raise can be generic. With this form, an exception can be used in either form; i.e., exception E can imply termination or resumption depending on the handler. However, it is ambiguous to have the two handlers appear in the same handler clause for the same exception. Interestingly, the choice of handling model can be further delayed using an unwind statement available only in the handler to trigger stack unwinding, as in

    exception E;                // generic declaration
    try {
        ... raise E;            // generic raise
    } catch( E ) {              // generic handler
        if (...) {
            ...
            unwind;             // => termination
        } else {
            ...                 // => resumption
        }
    }
In this form, a handler implies resumption unless an unwind is executed. The unwind capability in VMS [29, Chap. 4] and any language with nonlocal transfer can support this approach. Both schemes have implications with respect to the implementation because stack unwinding must be delayed, which can have an effect on other aspects of the EHM. Unfortunately, this approach violates the EHM objective of preventing an incomplete operation from continuing; i.e., it is impossible at the raise point to ensure control flow does not return. As a result, this particular approach is rejected. Continuing with the first two approaches, if an exception can be overloaded, i.e., be both a terminating and resuming exception, combinations of the first two forms of handler-model specification are possible, as in

    terminate E;                // overload
    resume E;
    try {
        ... terminate E;
        ... resume E;
    } catch( E ) ...            // generic handler: either terminate or resume

    exception E;                // generic declaration
    try {
        ... terminate E;
        ... resume E;
    } terminate( E ) ...        // overload
      resume( E ) ...
In both cases, the kind of handler model for the exception is specified at the raise and fixed during the propagation. In the left example, exception E is overloaded at the declaration and the generic handler catching the exception must behave according to the kind of handler model associated with the exception event. As mentioned, it is almost mandatory to have a facility in the handler to determine the kind of exception. In general, it is better software engineering to partition the handler code for each kind of handler model. In the right example, the generic exception is made specific at the raise and the overloaded handlers choose the appropriate kind. In this form, the handler code is partitioned for each kind of
handler model. However, unlike the previous scheme, the exception declaration does not convey how the exception may be used in the program. Finally, it is possible to specify the handler model in all three locations, as in

    terminate E;                // overload
    resume E;
    try {
        ... terminate E;
        ... resume E;
    } terminate( E ) ...        // overload
      resume( E ) ...
The EHM in μSystem [20] uses all three locations to specify the handler model. While pedantic, the redundancy of this format helps in reading the code because the declaration specifies the kind of exception (especially when the exception declaration is part of an interface). As well, it is unnecessary to have a mechanism in the handler to determine the kind of raised exception. Finally, in an EHM where terminating and resuming coexist, it is possible to partially override their semantics by raising events within a handler, as in

    try {
        ... resume E1; ...
    } catch( E1 ) terminate E2;

    try {
        ... terminate E1; ...
    } catch( E1 ) resume E2;
In the left example, the terminate overrides the resuming and forces stack unwinding, starting with the stack frame of the handler (frame on the top of the stack), followed by the stack frame of the block that originally resumed the exception. In the right example, the resume cannot override the terminate because the stack frames are already unwound, so the new resume starts with the handler stack frame.
10. Exception Partitioning
As mentioned, associating the propagation mechanism at exception declaration results in exception partitioning into terminating and resuming exceptions. Without partitioning, i.e., generic exception declarations, every exception becomes dual as it can be raised with either form of handler model. However, an exception declaration should reflect the nature of the abnormal condition causing the event being raised. For example, Unix signals SIGBUS and SIGTERM always lead to termination of an operation, and hence, should be declared as termination-only. Indeed, having termination-only and resume-only exceptions removes the mistake of using the wrong kind of raise and/or handler.
However, having a dual exception is also useful. While overloading an exception name allows it to be treated as a dual, few languages allow overloading of variables in a block. Alternatively, an exception can be declared as d u a l . Both forms of making an exception dual have the following advantages. First, encountering an abnormal condition can lead to resuming or terminating an exception depending on the particular context. Without dual exceptions, two different exceptions must be declared, one being terminate-only and the other resume-only. These two exceptions are apparently unrelated without a naming convention; using a single dual exception is simpler. Second, using a dual exception instead of resume-only for some abnormal conditions allows a resumed event to be terminated when no resuming handler is found. This effect can be achieved through a default resuming handler that raises a termination exception. The problem is that terminate-only and resume-only exceptions lack the flexibility of dual, and flexibility improves reusability. This observation does not imply all exceptions should be dual, only that dual exceptions are useful.
10.1 Derived Exception Implications
With derived exceptions and partitioned exceptions, there is the issue of deriving one kind of exception from another, e.g., terminate from resume, called heterogeneous derivation. If the derivation is restricted to exceptions of the same kind it is called homogeneous derivation. Homogeneous derivation is straightforward and easy to understand. Heterogeneous derivation is complex but more flexible because it allows deriving from any kind of exception. With heterogeneous derivation, it is possible to have all exceptions in one hierarchy. The complexity with heterogeneous derivation comes from the following heterogeneous derivations:

    parent     terminate   resume      terminate   resume      dual        dual
    derived    resume      terminate   dual        dual        terminate   resume
    option             1                       2                       3
In option 1, the kind of exception is different when the derived exception is raised and the parent is caught. If a resume-only exception is caught by a terminate-only handler, it could unwind the stack, but that invalidates resumption at the raise point. If a terminate-only exception is caught by a resume-only handler, it could resume the event, but that invalidates the termination at the raise point. In option 2, problems occur when the dual exception attempts to perform an unwind or resume on an exception of the wrong kind, resulting in the option 1 problems. In option 3, there is neither an obvious problem nor an advantage if the dual exception is caught by the more specific parent. In most cases, it seems that heterogeneous
derivation does not simplify programming and may confuse programmers; hence, it is a questionable feature.
11. Matching
In Section 9, either the exception declaration or the raise fixes the handler model for an exception event. The propagation mechanism then finds a handler matching both the kind and exception. However, there is no requirement the kind must match; only the exception must match, which leads to four possible situations in an EHM:

                              termination       resumption
    terminating handler       1. matching       3. unmatching
    resuming handler          2. unmatching     4. matching
Up to now, matching has been assumed between handling model and propagation mechanism; i.e., termination matches with terminating and resumption with resuming. However, the other two possibilities (options 2 and 3) must be examined to determine whether there are useful semantics. In fact, this discussion parallels that for heterogeneous derivation. In option 2, when a termination exception is raised, the stack is immediately unwound and the operation cannot be resumed. Therefore, a resuming handler handling a termination exception cannot resume the terminated operation. This semantics is misleading and difficult to understand, possibly resulting in an error long after the handler returns, because an operation raising a termination exception expects a handler to provide an alternative for its guarded block, and a resuming handler catching an exception expects the operation raising it to continue. Therefore, unmatching semantics for a termination exception is largely an unsafe feature. In option 3, when an exception is resumed, the stack is not unwound so a terminating handler has four possibilities. First, the stack is not unwound and the exception is handled with the resumption model; i.e., the termination is ignored. Second, the stack is unwound only after the handler executes to completion. Third, the stack is unwound by executing a special statement during execution of the handler. Fourth, the stack is unwound after finding the terminating handler but before executing it. The first option is unsafe because the terminating handler does not intend to resume, and therefore, it does not correct the problem before returning to the raise point. The next two options afford no benefit as there is no advantage to delaying unwinding for termination, and doing so results in problems (see Sections 7.2 and 13) and complicates most implementations. These problems can be avoided by the fourth option, which unwinds the stack before executing the handler, essentially handling the resumed exception as a
termination exception. It also simplifies the task of writing a terminating handler because a programmer does not have to be concerned about unwinding the stack explicitly, or any esoteric issues if the stack is unwound inside or after the terminating handler. Because of its superiority over the other two options favoring termination, the last option is the best unmatching semantics for a resumption exception (but it is still questionable). With matching semantics, it is possible to determine what model is used to handle a raised exception (and the control flow) by knowing either how an exception is raised or which handler is chosen. Abstracting the resumption and the termination model is done in a symmetric fashion. The same cannot be said about unmatching semantics. In particular, it is impossible to tell whether a resumed exception is handled with the resumption model without knowing the handler catching it, but a termination exception is always handled with the termination model. Hence, terminating and resuming are asymmetric in unmatching semantics. Without knowing the handling model used for a resumed exception, it becomes more difficult to understand the resuming mechanism for unmatching semantics than to understand the terminating and resuming mechanism for matching semantics. Therefore, unmatching semantics is inferior to matching and a questionable feature in an EHM.
12. Handler Clause Selection
The propagation mechanism determines how handler clauses are searched to locate a handler. It does not specify which handler in a handler clause is chosen if there are multiple handlers capable of catching the exception. For example, a handler clause can handle both a derived and a base exception. This section discusses issues about two orthogonal criteria—matching and specificity—for choosing a handler among those capable of handling a raised exception in a handler clause. Matching criteria (see Section 11) selects a handler matching with the propagation mechanism, e.g.,

    try {
        resume E;
    } terminate( E ) ...
      resume( E ) ...                  // matching
Matching only applies for an EHM with the two distinct propagation mechanisms and handler partitioning. Specificity criteria selects the most specific eligible handler within a handler clause using the following ordering rules:
1. The exception is derived from another exception (see Section 6.2):

       terminate B;
       terminate D : B;
       try {
           ... terminate D; ...
       } terminate( D ) ...            // more specific
         terminate( B ) ...
2. The exception is bound to an object rather than to a class (see Section 6.4):

       try {
           ... f.read(); ...
       } terminate( f.file_err ) ...       // more specific
         terminate( file.file_err ) ...
3. The exception is bound to the same object and derived from another exception:

       class foo {
           terminate B;
           terminate D : B;
           void m() { ... terminate D; ... }
       };
       foo f;
       try {
           ... f.m(); ...
       } terminate( f.D ) ...          // more specific
         terminate( f.B ) ...
In this case, it may be infeasible to tell which handler in a handler clause is more specific:

    try {
        ...
    } terminate( D ) ...               // equally specific
      terminate( f.B ) ...
Here, there is a choice between a derived exception and a bound, base exception, which could be said to be equally specific.
A language designer must set priorities among these orthogonal criteria. In addition, the priority of handling a termination exception is orthogonal to that of a resumed one. Dynamic propagation (see Section 8.1) uses closeness; i.e., select a handler closest to the raise point on the stack, to first locate a possible set of eligible handlers in a handler clause. Given a set of eligible handlers, matching should have the highest priority, when applicable, because matching semantics is safe, consistent, and comprehensible (see Section 11). A consequence of matching is a terminating handler hierarchy for termination exceptions and a resuming handler hierarchy for resumed ones. With separate handler hierarchies, it is reasonable for an exception to have both a default terminating and resuming handler (see Section 6.3 concerning default handlers). It is still possible for a default resuming handler to override resuming (see Section 9) and raise a termination exception in the terminating hierarchy. Overriding does not violate mandatory matching because of the explicit terminating raise in the handler. If there is no default handler in either case, the runtime system must take some appropriate action, usually terminating the execution.

Specificity is good, but comes after matching; e.g., if specificity is selected before matching in

    try {
        ... terminate D; ...           // D is derived from B
    } terminate( B ) ...               // matching
      resume( D ) ...                  // specificity
then the handler resume( D ) is chosen, not that for terminate( B ), which violates handler matching. The only exception to these rules is when two handlers in the same handler clause are equally specific, requiring additional criteria to resolve the ambiguity. The most common one is the position of a handler in a handler clause, e.g., select the first equally matching handler found in the handler-clause list. Whatever this additional criterion is, it should be applied to resolve ambiguity only after using the other criteria.
13. Preventing Recursive Resuming
Recursive resuming (see Section 8.1.3) is the only legitimate criticism of resuming propagation. The mechanisms in Mesa [14, p. 143] and VMS [29, pp. 90-92] represent the two main approaches for solving this problem. The rest of this section looks at these two solutions.
13.1 Mesa Propagation
Mesa propagation prevents recursive resuming by not reusing an unhandled handler bound to a specific called block; i.e., once a handler for a block is entered, it is marked as unhandled and not used again. The propagation mechanism always starts from the top of the stack to find an unmarked handler for a resume exception.^ However, this unambiguous semantics is often described as confusing. The following program demonstrates how Mesa solves recursive resuming:

    void test() {
        try {                              // T1(H1(R2))
            try {                          // T2(H2(R1))
                try {                      // T3(H3(R2))
                    resume R1;
                } catch( R2 ) resume R1;   // H3(R2)
            } catch( R1 ) resume R2;       // H2(R1)
        } catch( R2 ) ...                  // H1(R2)
    }
The following stack is generated at the point when exception R1 is resumed from the innermost try block:

    test -> T1(H1(R2)) -> T2(H2(R1)) -> T3(H3(R2)) -> H2(R1)
The potential infinite recursion occurs because H2(R1) resumes R2, and there is resuming handler H3(R2), which resumes R1, while handler H2(R1) is still on the stack. Hence, handler body H2(R1) calls handler body H3(R2) and vice versa with no case to stop the recursion. Mesa prevents the infinite recursion by marking an unhandled handler, i.e., a handler that has not returned, as ineligible (shown here with an asterisk), resulting in

    test -> T1(H1(R2)) -> T2(*H2(R1)) -> T3(H3(R2)) -> H2(R1)
Now, H2(R1) resumes R2, which is handled by H3(R2):

    test -> T1(H1(R2)) -> T2(*H2(R1)) -> T3(H3(R2)) -> H2(R1) -> H3(R2)
Therefore, when H3(R2) resumes R1, no infinite recursion occurs as the handler for R1 in T2(H2(R1)) is marked ineligible.

^This semantics was determined with test programs and discussions with Michael Plass and Alan Freier at Xerox PARC.
However, the confusion with the Mesa semantics is that there is now no handler for R1, even though the nested try blocks appear to deal with this situation. In fact, looking at the static structure, a programmer might incorrectly assume there is an infinite recursion between handlers H2(R1) and H3(R2), as they resume one another. This confusion has resulted in a reticence by language designers to incorporate resuming facilities in new languages. In detail, the Mesa semantics has the following negative attributes:

• Resuming an exception in a block and in one of its handlers can call different handlers, even though the block and its handlers are in the same lexical scope. For instance, in the above example, an exception generated in a guarded block is handled by handlers at or below the block on the stack, but an exception generated in a handler body can be handled by handlers above it on the stack. Clearly, lexical scoping does not reflect the difference in semantics.

• Abstraction implies a routine should be treated as a client of routines it calls directly or indirectly, and have no access to the implementations it uses. However, if resuming from a resuming handler is a useful feature, some implementation knowledge about the handlers bound to the stack above it must be available to successfully understand how to make corrections, thereby violating abstraction.

• Finally, exceptions are designed for communicating abnormal conditions from callee to caller. However, resuming an exception inside a resuming handler is like an abnormal condition propagating from caller to callee because of the use of handlers above it on the stack.
13.2 VMS Propagation
The VMS propagation mechanism solves the recursive resuming problem, but without the Mesa problems. This mechanism is then extended to cover asynchronous exceptions, which neither Mesa nor VMS have. Before looking at the VMS mechanism, the concept of consequent events is defined, which helps to explain why the semantics of the VMS mechanism are desirable.
13.2.1 Consequent Events
Raising an exception synchronously implies an abnormal condition has been encountered. A handler can catch an event and then raise another synchronous event if it encounters another abnormal condition, resulting in a second synchronous exception. The second event is considered a consequent event of the first.
More precisely, every synchronous event is an immediate consequent event of the most recent exception being handled in the execution (if there is one). For example, in the previous Mesa resuming example, the consequence sequence is R1, R2, and R1. Therefore, a consequent event is either the immediate consequent event of an event or the immediate consequent event of another consequent event. The consequence relation is transitive, but not reflexive. Hence, synchronous events propagated when no other events are being handled are the only nonconsequent events. An asynchronous exception is not a consequent event of other exceptions propagated in the faulting execution because the condition resulting in the event is encountered by the source execution, and in general, not related to the faulting execution. Only a synchronous event raised after an asynchronous event is delivered can be a consequent event of the asynchronous event.
13.2.2 Consequential Propagation
The VMS propagation mechanism is referred to as consequential propagation, based on the premise that if a handler cannot handle an event, it should not handle its consequent events, either. Conceptually, the propagation searches the execution stack in the normal way to find a handler, but marks as ineligible all handlers inspected, including the chosen handler. Marks are cleared only when an event is handled, so any consequent event raised during handling also sees the marked handlers. Practically, all resuming handlers at each level are marked when resuming an event; however, stack unwinding eliminates the need for marking when raising a termination exception. Matching (see Section 11) eliminates the need to mark terminating handlers because only resuming handlers catch resume events. If the resuming handler overrides the propagation by raising a termination exception, the stack is unwound normally from the current handler frame.

How does consequential propagation make a difference? Given the previous Mesa runtime stack

    test -> T1(H1(R2)) -> T2(H2(R1)) -> T3(H3(R2)) -> H2(R1)
consequential propagation marks all handlers between the raise of R1 in T3(H3(R2)) to T2(H2(R1)) as ineligible (shown with asterisks):

    test -> T1(H1(R2)) -> T2(*H2(R1)) -> T3(*H3(R2)) -> H2(R1)
Now, H2(R1) resumes R2, which is handled by H1(R2) instead of H3(R2):

    test -> T1(H1(R2)) -> T2(*H2(R1)) -> T3(*H3(R2)) -> H2(R1) -> H1(R2)
Like Mesa, recursive resuming is eliminated, but consequential propagation does not result in the confusing resumption of R1 from H3(R2). In general, consequential propagation eliminates recursive resuming because a resuming handler marked for a particular event cannot be called to handle its consequent events. As well, propagating a synchronous resumption event out of a handler does not call a handler bound to a stack frame between the handler and the handler body, which is similar to a termination event propagated out of a guarded block because of stack unwinding. Consequential propagation does not preclude all infinite recursion with respect to propagation, as in

    void test() {
        try {                          // T(H(R))
            ... resume R; ...
        } catch( R ) test();           // H(R)
    }
Here, each call of test creates a new try block to handle the next recursion, resulting in an infinite number of handlers:

    test -> T(H(R)) -> H(R) -> test -> T(H(R)) -> H(R) -> test -> ...
As a result, there is always an eligible handler to catch the next event in the recursion. Consequential propagation is not supposed to handle this situation as it is considered an error with respect to recursion, not propagation. Finally, consequential propagation does not affect termination propagation because marked resuming handlers are simply removed during stack unwinding. Hence, the application of consequential propagation is consistent with either terminating or resuming. As well, because of handler partitioning, a terminating handler for the same event bound to a prior block of a resuming handler is still eligible, as in

    void test() {
        dual R;                        // terminate and resume
        try {                          // T(r(R),t(R))
            ... resume R; ...
        } terminate( R ) ...           // t(R)
          resume( R ) terminate R;     // r(R)
    }
Here, the resume of R in the try block is first handled by r(R), resulting in the call stack

    test -> T(r(R),t(R)) -> r(R)

While r(R) is marked ineligible, the terminating handler, t(R), for the same try block is still eligible. The handler r(R) then terminates the exception R, and the stack is unwound starting at the frame for handler r(R) to the try block where the exception is caught by handler t(R), resulting in the call stack

    test -> t(R)
The try block is effectively gone because the scope of the handler does not include the try block (see Section 7.1).

All handlers are considered unmarked for a propagated asynchronous event because an asynchronous event is not a consequent event. Therefore, the propagation mechanism searches every handler on the runtime stack. Hence, a handler ineligible to handle an event and its consequent events can be chosen to handle a newly arrived asynchronous event, reflecting its lack of consequentiality.

In summation, consequential propagation is better than other existing propagation mechanisms because:

• it supports terminating and resuming propagation, and the search for a handler occurs in a uniformly defined way,

• it prevents recursive resuming and handles synchronous and asynchronous exceptions according to a sensible consequence relation among exceptions, and

• the context of a handler closely resembles its guarded block with respect to lexical location; in effect, an event propagated out of a handler is handled as if the event is directly propagated out of its guarded block.
14. Multiple Executions and Threads
The presence of multiple executions and multiple threads has an impact on an EHM. In particular, each execution has its own stack on which threads execute, and different threads can carry out the various operations associated with handling an exception. For example, the thread of the source execution delivers an exception to the faulting execution; the thread of the faulting execution propagates and handles it.
14.1 Coroutine Environment
Coroutines represent the simplest execution environment where the source execution can be different from the faulting execution, but the thread of a single task executes both source and faulting execution. In theory, either execution can
propagate the event, but in practice, only the faulting execution is reasonable. Assume the source execution propagates the event in

    try {                              // T1
        try {                          // T2
            Ex1 (suspended)
        } catch( E2 ) ...              // H1(E2)
    } catch( E1 ) ...                  // H2(E1)
and execution Ex1 is suspended in the guarded region of try block T2. While suspended, a source execution Ex2 raises and propagates an asynchronous exception E1 in Ex1, which directs control flow of Ex1 to handler H2(E1), unwinding the stack in the process. While Ex1 is still suspended, a third source execution Ex3 raises and propagates another asynchronous exception E2 (Ex2 and Ex3 do not have to be distinct). Hence, control flow of Ex1 goes to another handler determined in the dynamic context, further unwinding the stack. The net effect is that neither of the exceptions is handled by any handler in the program fragment. The alternative approach is for the faulting execution, Ex1, to propagate the exceptions. Regardless of which order Ex1 raises the two arriving events, at least a handler for one of the events is called. Therefore, only the faulting execution should propagate an exception in an environment with multiple executions.
14.2 Concurrent Environment

Concurrency represents the most complex execution environment, where the separate source and faulting executions are executed by threads of different tasks. In theory, either execution can propagate an event, but in practice, only the faulting execution is reasonable. If the source execution propagates the event, it must change the faulting execution, including the runtime stack and program counter. Consequently, the runtime stack and the program counter become shared resources between these tasks, making a task's execution dependent on another task's execution in a direct way, i.e., not through communication. To avoid corrupting an execution, locking is now required. Hence, an execution must lock and unlock its runtime stack before and after each execution time-slice. Obviously, this approach generates a large amount of superfluous locking to deal with a situation that occurs rarely. Therefore, it is reasonable to allow only the faulting execution to propagate an exception in an environment with multiple tasks.
14.3 Real-Time Environment

In the design and implementation of real-time programs, various timing constraints are guaranteed through the use of scheduling algorithms, as well as an EHM. Exceptions are extremely crucial in real-time systems, e.g., deadline expiry
or early/late starting exceptions, as they allow a system to react to abnormal situations in a timely fashion. Hecht and Hecht [30] demonstrated, through various empirical studies, that the introduction of even the most basic fault-tolerance mechanisms into a real-time system drastically improves its reliability. The main conflict between real-time and an EHM is the need for constant-time operations and the dynamic choice of a handler [31]. As pointed out in Section 8.1.2, the dynamic choice of a handler is crucial to an EHM, and therefore, it may be impossible to resolve this conflict. At best, exceptions may only be used in restricted ways in real-time systems when a bound can be established on call stack depth and the number of active handlers, which indirectly puts a bound on propagation.
15. Asynchronous Exception Events
The protocol for communicating asynchronous events among coroutines and tasks is examined.
15.1 Communication
Because only the faulting execution should propagate an event and directly alter control flow, the source execution only needs to deliver the event to the faulting execution. This requires a form of direct communication not involving shared objects. In essence, an event is transmitted from the source to the faulting execution. There are two major categories of direct communication: blocking and nonblocking. In the first, the sender blocks until the receiver is ready to receive the event; in the second, the sender does not block.
15.1.1 Source Execution Requirement
Using blocking communication, the source execution blocks until the faulting execution executes a complementary receive. However, an execution may infrequently (or never) check for incoming exception events. Hence, the source can be blocked for an extended period of time waiting for the faulting execution to receive the event. Therefore, blocking communication is rejected. Only nonblocking communication allows the source execution to raise an exception on one or more executions without suffering an extended delay.
15.1.2 Faulting Execution Requirement
Nonblocking communication for exceptions is different from ordinary nonblocking communication. In the latter case, a message is delivered only after
the receiver executes some form of receive. The former requires the receiver to receive an exception event without explicitly executing a receive because an EHM should preclude checking for an abnormal condition. The programmer is required to set up a handler only to handle the rare condition. From the programmer's perspective, the delivery of an asynchronous exception should be transparent. Therefore, the runtime system of the faulting execution must poll for the arrival of asynchronous exceptions, and propagate them on arrival. The delivery of asynchronous exceptions must be timely, but not necessarily immediate.

There are two polling strategies: implicit polling and explicit polling. Implicit polling is performed by the underlying system. (Hardware interrupts involve implicit polling because the CPU automatically polls for the event.) Explicit polling requires the programmer to insert explicit code to activate polling.

Implicit polling alleviates programmers from polling, and hence, provides an apparently easier interface to programmers. On the other hand, implicit polling has its drawbacks. First, infrequent implicit polling can delay the handling of asynchronous exceptions; polling too frequently can degrade the runtime efficiency. Without specific knowledge of a program, it is difficult to have the right frequency for implicit polling. Second, implicit polling suffers the nonreentrant problem (see Section 15.2).

Explicit polling gives a programmer control over when an asynchronous exception can be raised. Therefore, the programmer can delay or even completely ignore pending asynchronous exceptions. Delaying and ignoring asynchronous exceptions are both undesirable. The other drawback of explicit polling is that a programmer must worry about when to and when not to poll, which is equivalent to explicitly checking for exceptions.

Unfortunately, an EHM with asynchronous exceptions needs to employ both implicit and explicit polling. Implicit polling simplifies using the EHM and reduces the damage a programmer can do by ignoring asynchronous exceptions. However, the frequency of implicit polling should be low to avoid unnecessary loss of efficiency. Explicit polling allows programmers to have additional polling when it is necessary. The combination of implicit and explicit polling gives a balance between programmability and efficiency. Finally, certain situations can require implicit polling be turned off, possibly by a compiler or runtime switch, e.g., in low-level system code where execution efficiency is crucial or real-time programming to ensure deadlines.
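As a rough sketch of what an explicit polling point looks like (the names pending, poll, and longComputation below are invented for illustration and do not belong to any particular EHM), a long-running computation periodically gives the runtime a chance to propagate any delivered asynchronous exceptions:

    #include <deque>
    #include <functional>

    // Hypothetical per-execution queue of delivered but not yet propagated events.
    static std::deque<std::function<void()>> pending;

    // Explicit polling point: propagate any pending asynchronous events.
    // Under implicit polling, the runtime would insert equivalent checks itself.
    void poll() {
        while ( !pending.empty() ) {
            std::function<void()> raise = pending.front();
            pending.pop_front();
            raise();                           // propagate, e.g., by throwing
        }
    }

    void longComputation() {
        for ( int i = 0; i < 1000000; i += 1 ) {
            // ... useful work ...
            if ( i % 1024 == 0 ) poll();       // programmer-chosen polling frequency
        }
    }

The tension described above is visible here: the polling modulus directly trades delivery latency against runtime overhead.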
15.2 Nonreentrant Problem
Asynchronous events introduce a form of concurrency into sequential execution because delivery is nondeterministic with implicit polling. The event delivery can be considered as temporarily stealing a thread to execute the handler. As a
result, it is possible for a computation to be interrupted while in an inconsistent state, a handler to be found, and the handler to recursively call the inconsistent computation, called the nonreentrant problem. For example, while allocating memory, an execution is suspended by delivery of an asynchronous event, and the handler for the exception attempts to allocate memory. The recursive entry of the memory allocator may corrupt its data structures. The nonreentrant problem cannot be solved by locking the computation because either the recursive call deadlocks, or if recursive locks are used, reenters and corrupts the data. To ensure correctness of a nonreentrant routine, an execution must achieve the necessary mutual exclusion by blocking delivery, and consequently the propagation of asynchronous exceptions, hence temporarily precluding delivery. Hardware interrupts are also implicitly polled by the CPU. The nonreentrant problem can occur if the interrupt handler enables the interrupt and recursively calls the same computation as has been interrupted. However, because hardware interrupts can happen at times when asynchronous exceptions cannot, it is more difficult to control delivery.
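The shape of the problem can be shown with a small sketch (ordinary C++; the asynchronous delivery point is marked with a comment because standard C++ has no asynchronous exceptions, and the bump-pointer allocator is invented for illustration, with bounds checking omitted):

    #include <cstddef>

    static char heap[65536];
    static std::size_t allocNext = 0;          // allocator state

    void* allocate(std::size_t n) {
        std::size_t start = allocNext;         // read allocator state
        // <-- if an asynchronous event is delivered here and its handler
        //     also calls allocate(), both calls hand out the same storage
        allocNext = start + n;                 // update allocator state
        return &heap[start];
    }

Blocking delivery around the body of allocate provides the mutual exclusion described above; a lock cannot, because the recursive call is made by the same thread.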
15.3 Disabling Asynchronous Exceptions
Because of the nonreentrant problem, facilities to disable asynchronous exceptions must exist. There are two aspects to disabling: the specific event to be disabled and the duration of disabling. (This discussion is also applicable to hardware interrupts and interrupt handlers.)
15.3.1 Specific Event
Without derived exceptions, only the specified exception is disabled; with derived exceptions, the exception and all its descendants can be disabled. Disabling an individual exception but not its descendants, called individual disabling, is tedious as a programmer must list all the exceptions being disabled, and it does not complement the exception hierarchy. If a new derived exception should be treated as an instance of its ancestors, the exception must be disabled wherever its ancestor is disabled. Individual disabling does not automatically disable the descendants of the specified exceptions, and therefore, introducing a new derived exception requires modifying existing code to prevent it from activating a handler bound to its ancestor.

The alternative, hierarchical disabling, disables an exception and its descendants. The derivation becomes more restrictive because a derived exception also inherits the disabling characteristics of its parent. Compared to individual disabling, hierarchical disabling is more complex to implement and usually has a higher runtime cost. However, the improvement in programmability makes hierarchical disabling attractive.
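The semantic difference can be modeled with a small sketch (the exception names, the parent table, and both query functions are invented for illustration; a real EHM would work on exception types rather than strings):

    #include <map>
    #include <set>
    #include <string>

    // Hand-built exception hierarchy: child -> parent (the root maps to "").
    static const std::map<std::string, std::string> parent = {
        { "Async",    ""       },
        { "IOFail",   "Async"  },
        { "DiskFail", "IOFail" },
    };

    static std::set<std::string> disabled;      // currently disabled exceptions

    // Individual disabling: only an exact match blocks delivery, so disabling
    // "IOFail" does not block a newly added descendant such as "DiskFail".
    bool individuallyDisabled(const std::string& e) {
        return disabled.count(e) > 0;
    }

    // Hierarchical disabling: an exception is blocked if it or any ancestor
    // is disabled, so new descendants are covered automatically.
    bool hierarchicallyDisabled(std::string e) {
        while ( !e.empty() ) {
            if ( disabled.count(e) ) return true;
            auto p = parent.find(e);
            e = (p == parent.end()) ? std::string() : p->second;
        }
        return false;
    }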
A different approach is to use priorities instead of hierarchical disabling, allowing a derived exception to override its parent's priority when necessary. Selective disabling can be achieved by disabling exceptions of priority lower than or equal to a specified value. This selective disabling scheme trades off the programmability and extensibility of hierarchical disabling for lower implementation and runtime costs. However, the problem with priorities is assigning priority values. Introducing a new exception requires an understanding of its abnormal nature plus its priority compared to other exceptions. Hence, defining a new exception requires an extensive knowledge of the whole system with respect to priorities, which makes the system less maintainable and understandable.

It is conceivable to combine priorities with hierarchical disabling; a programmer specifies both an exception and a priority to disable an asynchronous exception. However, the problem of maintaining consistent priorities throughout the exception hierarchy still exists. In general, priorities are an additional concept that increases the complexity of the overall system without significant benefit. Therefore, hierarchical disabling with derived exceptions seems the best approach in an extensible EHM. Note that multiple derivation (see Section 6.2) only complicates hierarchical disabling, and the same arguments can be used against hierarchical disabling with multiple derivation.
15.3.2 Duration
The duration for disabling could be specified by a time duration, but normally the disabling duration is specified by a region of code that cannot be interrupted. There are several mechanisms available for specifying the region of uninterruptable code.

One approach is to supply explicit routines to turn on and off the disabling for particular asynchronous exceptions. However, the resulting programming style is like using a semaphore for locking and unlocking, which is a low-level abstraction. Programming errors result from forgetting a complementary call and are difficult to debug.

An alternative is a new kind of block, called a protected block, which specifies a list of asynchronous events to be disabled across the associated region of code. On entering a protected block, the list of disabled asynchronous events is modified, and subsequently enabled when the block exits. The effect is like entering a guarded block so disabling applies to the block and any code dynamically accessed via that block, e.g., called routines.

An approach suggested for Java [32] associates the disabling semantics with an exception named AIE. If a member routine includes this exception in its exception list, interrupts are disabled during execution of the member; hence, the member body is the protected block. However, this approach is poor language
design because it associates important semantics with a name, AIE, and makes this name a hidden keyword. The protected block seems the simplest and most consistent in an imperative language with nested blocks. Regardless of how asynchronous exceptions are disabled, all events (except for special system events) should be disabled initially for an execution; otherwise, an execution cannot install handlers before asynchronous events begin arriving.
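In a language without a dedicated protected block, the same block-structured behavior is often approximated with a scope-based guard, as in the following sketch (disableAsync and enableAsync stand in for hypothetical runtime calls; they are not part of any real EHM):

    #include <set>
    #include <string>
    #include <utility>

    static std::set<std::string> disabledEvents;    // stand-in for the runtime's disabled set

    void disableAsync(const std::string& e) { disabledEvents.insert(e); }
    void enableAsync(const std::string& e)  { disabledEvents.erase(e); }

    // Ties disabling to a block, like a protected block, so the complementary
    // enable cannot be forgotten on any exit path from the region.
    struct Protected {
        std::string event;
        explicit Protected(std::string e) : event(std::move(e)) { disableAsync(event); }
        ~Protected() { enableAsync(event); }
    };

    void update() {
        Protected p("IOFail");    // delivery of IOFail suppressed for this block
        // ... code that must not be interrupted by IOFail ...
    }                             // automatically re-enabled when the block exits

Unlike a pair of explicit calls, the guard cannot be left unbalanced, which addresses the forgotten-complementary-call problem noted above.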
15.4 Multiple Pending Asynchronous Exceptions
Since asynchronous events are not serviced immediately, there is the potential for multiple events to arrive between two polls for events. There are several options for dealing with these pending asynchronous events. If asynchronous events are not queued, there can be only one pending event. New events must be discarded after the first one arrives, or overwritten as new ones arrive, or overwritten only by higher priority events. However, the risk of losing an asynchronous event makes a system less robust; hence queuing events is usually superior. If asynchronous events are queued, there are multiple pending events and several options for servicing them. The order of arrival (first-in, first-out, FIFO) can be chosen to determine the service order for handling pending events. However, a strict FIFO delivery order may be unacceptable, e.g., an asynchronous event to stop an execution from continuing erroneous computation can be delayed for an extended period of time in a FIFO queue. A more flexible semantics for handling pending exceptions is user-defined priorities. However, Section 15.3 discusses how a priority scheme reduces extensibility, making it inappropriate in an environment emphasizing code reuse. Therefore, FIFO order seems acceptable for its simplicity in understanding and low implementation cost. However, allowing a pending event whose delivery is disabled to prevent delivering other pending events seems undesirable. Hence, an event should be able to be delivered before earlier events if the earlier events are disabled. This out-of-order delivery has important implications on the programming model of asynchronous exceptions. A programmer must be aware of the fact that two exceptions having the same source and faulting execution may be delivered out-of-order (when the first is disabled but not the second). This approach may seem unreasonable, especially when causal ordering is proved to be beneficial in distributed programming. However, out-of-order delivery is necessary for urgent events. Currently, the most adequate delivery scheme remains as an open problem, and the answer may only come with more experience.
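A minimal sketch of this delivery rule (the queue, the disabled set, and the string-based event names are all invented for illustration) scans the FIFO queue for the oldest pending event whose delivery is currently enabled:

    #include <deque>
    #include <optional>
    #include <set>
    #include <string>

    static std::set<std::string> disabledAsync;    // events whose delivery is disabled
    static std::deque<std::string> pendingAsync;   // pending events in arrival (FIFO) order

    // Deliver the oldest pending event that is not disabled; a later event can
    // therefore be delivered before an earlier, disabled one (out-of-order delivery).
    std::optional<std::string> nextDeliverable() {
        for ( auto it = pendingAsync.begin(); it != pendingAsync.end(); ++it ) {
            if ( disabledAsync.count(*it) == 0 ) {
                std::string e = *it;
                pendingAsync.erase(it);
                return e;
            }
        }
        return std::nullopt;                       // everything pending is disabled
    }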
15.5 Converting Interrupts to Exceptions

As mentioned, hardware interrupts can occur at any time, which significantly complicates the nonreentrant problem. One technique that mitigates the problem is to convert interrupts into language-level asynchronous events, which are then controlled by the runtime system. Some interrupts target the whole program, like abort execution, while some target individual executions that compose a program, like completion of a specific thread's I/O operation. Each interrupt handler raises an appropriate asynchronous exception to the particular faulting execution or to some system execution for program faults. However, interrupts must still be disabled when enqueueing and dequeuing the asynchronous events to avoid the possibility of corrupting the queue by another interrupt or the execution processing the asynchronous events.

By delivering interrupts through the EHM, the nonreentrant problem is avoided and interrupts are disabled for the minimal time. Furthermore, interrupts do not usually have all the capabilities of an EHM, such as parameters; hence, interrupts are not a substitute for a general EHM. Finally, the conversion also simplifies the interface within the language. The interrupts can be completely hidden within the EHM, and programmers only need to handle abnormal conditions at the language level, which improves portability across systems. However, for critical interrupts and in hard real-time systems, it may still be necessary to have some control over interrupts if they require immediate service; i.e., software polling is inadequate.

One final point about programming interrupt handlers is that raising a synchronous exception within an interrupt handler is meaningful only if it does not propagate outside of the handler. The reason is that the handler executes on an arbitrary execution stack, and hence, there is usually no relationship between the interrupt handler and the execution. Indeed, Ada 95 specifies that propagating an event from an interrupt handler has no effect.
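A sketch of the conversion (the event names, the fixed-size ring, and both functions are invented for illustration; a real runtime would route events to the appropriate faulting execution rather than a single global queue, and would handle overflow):

    // Fixed-size ring of pending asynchronous events: no allocation, so it is
    // safe to touch from an interrupt handler.
    enum class Event { None, DiskComplete, Timer };
    static Event ring[64];
    static unsigned head = 0, tail = 0;

    // Hypothetical interrupt handler: do the minimum, i.e., convert the interrupt
    // into a language-level asynchronous event. Interrupts are assumed disabled
    // while it runs, so the queue cannot be corrupted by a nested interrupt.
    void onDiskInterrupt() {
        ring[tail % 64] = Event::DiskComplete;
        tail += 1;
    }

    // Called by the runtime at polling points, again with interrupts briefly
    // disabled; the dequeued event is then propagated by the EHM as usual.
    Event dequeueEvent() {
        if ( head == tail ) return Event::None;
        Event e = ring[head % 64];
        head += 1;
        return e;
    }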
16. Conclusions
Static and dynamic name binding, and static and dynamic transfer points can be combined to form the following different language constructs:

                          name binding
    transfer point    static          dynamic
    static            sequel          termination
    dynamic           routine call    resumption
These four constructs succinctly cover all the kinds of control flow associated with routines and exceptions.
Raising, propagating, and handling an exception are the three core control-flow mechanisms of an EHM. There are two useful handling models: termination and resumption. For safety, an EHM should provide matching propagation mechanisms: terminating and resuming. Handlers should be partitioned with respect to the handling models to provide better abstraction. Consequential propagation solves the recursive resuming problem and provides consistent propagation semantics with termination, making it the best choice for an EHM with resumption. As a result, the resumption model becomes attractive and can be introduced into existing termination-only EHMs. Exception parameters, homogeneous derivation of exceptions, and bound/conditional handling all improve programmability and extensibility. In a concurrent environment, an EHM must provide some disabling facilities to solve the nonreentrant problem. Hierarchical disabling is best in terms of programmability and extensibility. An EHM based on the ideas presented here has been implemented in µC++ [21], providing feedback on correctness.
Appendix: Glossary

asynchronous exception is when a source execution raises an exception event in a different faulting execution, e.g., raise E in Ex raises exception E from the current source execution to the faulting execution Ex.

bound exception is when an exception event is bound to a particular object, rather than to a class of objects or no object.

catch is the selection of a handler in a handler clause during propagation to deal with an exception event.

closeness is when an event is handled by a handler closest to the block where propagation of the event starts.

conditional handling is when the handler for an exception event also depends on a predicate for selection.

consequent event is when a handler catches an event and then raises another synchronous event due to an abnormal condition, so the second event is a consequence of the first.

consequential propagation assumes that if a handler cannot handle an event, it should not handle its consequent events, either.

default handler is a handler called if the faulting execution does not find a handler during propagation.

delivery is the arrival of an exception event at a faulting execution, which initiates propagation of the event within the faulting execution.

dual is a kind of exception that can be associated with both termination and resumption.
dynamic propagation is propagation that searches the dynamic scopes (call stack) to find a handler.

event is an exception instance generated at a raise and caught by a handler.

exception is an event that is known to exist but which is ancillary to an algorithm or execution.

exception list is part of a routine's signature specifying which exceptions may propagate out of a routine to its caller.

exception parameter is the ability to pass data from the raise in the source execution to the handler in the faulting execution so the handler can analyze why an exception is raised and how to deal with it.

exception partitioning occurs when exceptions are explicitly divided into different kinds, e.g., terminating and resuming.

execution is the state information needed to permit independent execution, and the minimal language unit in which an exception can be raised.

explicit polling is when arrival and propagation of asynchronous exceptions require the programmer to insert explicit code.

failure exception is a system exception raised if and only if an exception is raised that is not part of the routine's interface.

faulting execution is the execution (process, task, coroutine) affected by an exception event; its control flow is routed to a handler.

guarded block is a programming language block with handlers.

handled is when the handler for an exception returns.

handler is a sequence of statements dealing with one or more exceptions.

handler clause is the set of handlers bound to a guarded block.

handler hierarchies is when different kinds of handlers are organized into separate hierarchies for various purposes.

handles is the execution of a handler in a handler clause associated with a raised exception.

heterogeneous derivation is when different kinds of exceptions can be derived from one another, e.g., terminating from resuming or vice versa.

hierarchical disabling is when an individual exception is disabled and all of its hierarchical descendants are implicitly disabled.

homogeneous derivation is when different kinds of exceptions can only be derived from exceptions of the same kind, e.g., terminating from terminating or resuming from resuming.

implicit polling is when arrival and propagation of asynchronous exceptions are performed by the underlying system.

individual disabling is when an individual exception is disabled but not its hierarchical descendants.
marking is flagging handlers as ineligible during propagation so they cannot be considered again should propagation reencounter them.

matching is when the handling model and propagation mechanism are the same; i.e., termination matches with terminating and resumption with resuming.

multiple derivation is the ability to derive an exception from multiple exceptions, which is similar to multiple inheritance of classes.

mutual exclusion is serializing execution of an operation on a shared resource.

nonlocal transfer is a transfer, usually via a goto, to a dynamically scoped location, where any activation records on the call stack between the transfer and the specified label are terminated.

nonreentrant problem is when a computation is interrupted asynchronously while in an inconsistent state, and the handler for the asynchronous interrupt invokes the same computation.

nonresumable is an operation that cannot be restarted or continued; i.e., the operation must be terminated.

propagating is the directing of control flow within a faulting execution from the raise to a handler.

propagation mechanism is the algorithm used to locate an appropriate handler.

protected block is a lexical block specifying a list of asynchronous events to be disabled during execution of the block.

raise causes control flow to transfer up the lexical or dynamic scopes of the language until it is caught by a handler.

recursive resuming is the potential for infinite recursion resulting from the presence of resuming handlers in previous scopes during propagation.

resuming propagation is when propagation returns to the point of the raise.

return code is a value encoded among normal returned values or a separate value returned from a routine call indicating additional information about the routine's computation.

sequel is a routine, including parameters, which upon returning, continues execution at the end of the block in which the sequel is declared rather than after the sequel call.

source execution is the execution (process, task, coroutine) raising an exception event.

specificity selects the most specific eligible handler within a handler clause using ordering rules.

stack unwinding is the terminating of blocks, including activation records, between the raise point and the handler.

static propagation is propagation that searches the lexical scopes (static nesting) to find a handler.
status flag is a shared (global) variable indicating the occurrence of a rare condition, e.g., errno in UNIX. Setting a status flag indicates a rare condition has occurred; the value remains as long as it is not overwritten by another condition.

synchronous exception is when the source and faulting executions are the same; i.e., the exception is raised and handled by the same execution.

terminating propagation is when propagation does not return to the raise point.

thread is execution of code that occurs independently of and possibly concurrently with another execution; thread execution is sequential as it changes an execution's state.

throwing propagation see "terminating propagation."

unguarded block is a programming language block without handlers.
REFERENCES
[1] Buhr, P. A., Ditchfield, G., Stroobosscher, R. A., Younger, B. M., and Zarnke, C. R. (1992). "µC++: Concurrency in the object-oriented language C++." Software—Practice and Experience, 22(2), 137-172.
[1a] Hoare, C. A. R. (1974). "Monitors: An operating system structuring concept." Communications of the ACM, 17(10), 549-557.
[1b] Marlin, C. D. (1980). Coroutines: A Programming Methodology, a Language Design and an Implementation, volume 95 of Lecture Notes in Computer Science, Ed. by G. Goos and J. Hartmanis. Springer-Verlag, Berlin.
[2] Goodenough, J. B. (1975). "Exception handling: Issues and a proposed notation." Communications of the ACM, 18(12), 683-696.
[3] Intermetrics, Inc. (1995). Annotated Ada Reference Manual, international standard ISO/IEC 8652:1995(E) with COR.1:2000 ed. Language and Standards Libraries.
[4] Cardelli, L., Donahue, J., Glassman, L., Jordan, M., Kalsow, B., and Nelson, G. (1988). "Modula-3 report." Technical Report 31, Systems Research Center, 130 Lytton Avenue, Palo Alto, California 94301.
[5] Stroustrup, B. (1997). The C++ Programming Language, third ed. Addison-Wesley, Reading, MA.
[6] Yemini, S., and Berry, D. M. (1985). "A modular verifiable exception-handling mechanism." ACM Transactions on Programming Languages and Systems, 7(2), 214-243.
[7] International Business Machines (1981). OS and DOS PL/I Reference Manual, first ed. Manual GC26-3977-0.
[8] Madsen, O. L., Møller-Pedersen, B., and Nygaard, K. (1993). Object-Oriented Programming in the BETA Programming Language. Addison-Wesley, Reading, MA.
[9] Kernighan, B. W., and Ritchie, D. M. (1988). The C Programming Language, Prentice Hall Software Series, second ed. Prentice Hall, Englewood Cliffs, NJ.
[10] MacLaren, M. D. (1977). "Exception handling in PL/I." SIGPLAN Notices, 12(3), 101-104. Proceedings of an ACM Conference on Language Design for Reliable Software, March 28-30, 1977, Raleigh, NC.
[11] Buhr, P. A. (1995). "Are safe concurrency libraries possible?" Communications of the ACM, 38(2), 117-120.
[12] Milner, R., and Tofte, M. (1991). Commentary on Standard ML. MIT Press, Cambridge, MA.
[13] Gosling, J., Joy, B., and Steele, G. (1996). The Java Language Specification. Addison-Wesley, Reading, MA.
[14] Mitchell, J. G., Maybury, W., and Sweet, R. (1979). "Mesa language manual." Technical Report CSL-79-3, Xerox Palo Alto Research Center.
[15] Gehani, N. H. (1992). "Exceptional C or C with exceptions." Software—Practice and Experience, 22(10), 827-848.
[16] Meyer, B. (1992). Eiffel: The Language. Prentice Hall Object-Oriented Series. Prentice-Hall, Englewood Cliffs, NJ.
[17] Drew, S. J., and Gough, K. J. (1994). "Exception handling: Expecting the unexpected." Computer Languages, 20(2).
[18] Liskov, B. H., and Snyder, A. (1979). "Exception handling in CLU." IEEE Transactions on Software Engineering, 5(6), 546-558.
[19] Stroustrup, B. (1994). The Design and Evolution of C++. Addison-Wesley, Reading, MA.
[20] Buhr, P. A., Macdonald, H. I., and Zarnke, C. R. (1992). "Synchronous and asynchronous handling of abnormal events in the µSystem." Software—Practice and Experience, 22(9), 735-776.
[21] Buhr, P. A., and Stroobosscher, R. A. (2001). "µC++ annotated reference manual, version 4.9." Technical report, Department of Computer Science, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada. ftp://plg.uwaterloo.ca/pub/uSystem/uC++.ps.gz.
[22] Koenig, A., and Stroustrup, B. (1990). "Exception handling for C++." Journal of Object-Oriented Programming, 3(2), 16-33.
[23] Cargill, T. A. (1990). "Does C++ really need multiple inheritance?" In USENIX C++ Conference Proceedings, pp. 315-323, San Francisco, CA. USENIX Association.
[24] Mok, W. Y. R. (1997). "Concurrent abnormal event handling mechanisms." Master's thesis, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada. ftp://plg.uwaterloo.ca/pub/uSystem/MokThesis.ps.gz.
[25] Knudsen, J. L. (1984). "Exception handling—A static approach." Software—Practice and Experience, 14(5), 429-449.
[26] Knudsen, J. L. (1987). "Better exception handling in block structured systems." IEEE Software, 4(3), 40-49.
[27] Motet, G., Mapinard, A., and Geoffroy, J. C. (1996). Design of Dependable Ada Software. Prentice-Hall, Englewood Cliffs, NJ.
[28] Tennent, R. D. (1977). "Language design methods based on semantic principles." Acta Informatica, 8(2), 97-112. Reprinted in [33].
[29] Kenah, L. J., Goldenberg, R. E., and Bate, S. F. (1988). VAX/VMS Internals and Data Structures Version 4.4. Digital Press.
[30] Hecht, H., and Hecht, M. (1986). "Software reliability in the systems context." IEEE Transactions on Software Engineering, 12(1), 51-58.
[31] Lang, J., and Stewart, D. B. (1998). "A study of the applicability of existing exception-handling techniques to component-based real-time software technology." ACM Transactions on Programming Languages and Systems, 20(2), 274-301.
[32] Real Time for Java Experts Group (1999). http://www.rtj.org.
[33] Wasserman, A. I. (Ed.) (1980). Tutorial: Programming Language Design. Computer Society Press.
Breaking the Robustness Barrier: Recent Progress on the Design of Robust Multimodal Systems

SHARON OVIATT
Center for Human Computer Communication
Computer Science Department
Oregon Graduate Institute of Science and Technology
20,000 N.W. Walker Road
Beaverton, Oregon 97006
USA
[email protected]
Abstract

Cumulative evidence now clarifies that a well-designed multimodal system that fuses two or more information sources can be an effective means of reducing recognition uncertainty. Performance advantages have been demonstrated for different modality combinations (speech and pen, speech and lip movements), for varied tasks (map-based simulation, speaker identification), and in different environments (noisy, quiet). Perhaps most importantly, the error suppression achievable with a multimodal system, compared with a unimodal spoken language one, can be in excess of 40%. Recent studies also have revealed that a multimodal system can perform in a more stable way than a unimodal one across varied real-world users (accented versus native speakers) and usage contexts (mobile versus stationary use). This chapter reviews these recent demonstrations of multimodal system robustness, distills general design strategies for optimizing robustness, and discusses future directions in the design of advanced multimodal systems. Finally, implications are discussed for the successful commercialization of promising but error-prone recognition-based technologies during the next decade.
1. Introduction to Multimodal Systems
   1.1 Types of Multimodal System
   1.2 Motivation for Multimodal System Design
   1.3 Long-Term Directions: Multimodal-Multisensor Systems That Model Biosensory Perception
2. Robustness Issues in the Design of Recognition-Based Systems
   2.1 Recognition Errors in Unimodal Speech Systems
   2.2 Research on Suppression of Recognition Errors in Multimodal Systems
   2.3 Multimodal Design Strategies for Optimizing Robustness
   2.4 Performance Metrics as Forcing Functions for Robustness
3. Future Directions: Breaking the Robustness Barrier
4. Conclusion
Acknowledgments
References
1. Introduction to Multimodal Systems
Multimodal systems process two or more combined user input modes—such as speech, pen, gaze, manual gestures, and body movements—in a coordinated manner with multimedia system output. This class of systems represents a new direction for computing, and a paradigm shift away from conventional windows-icons-menus-pointing device (WIMP) interfaces. Multimodal interfaces aim to recognize naturally occurring forms of human language and behavior, which incorporate at least one recognition-based technology (e.g., speech, pen, vision). The development of novel multimodal systems has been enabled by the myriad input and output technologies currently becoming available, including new devices and improvements in recognition-based technologies. Multimodal interfaces have developed rapidly during the past decade, with steady progress toward building more general and robust systems [1,2]. Major developments have occurred in the hardware and software needed to support key component technologies incorporated within multimodal systems, and in techniques for integrating parallel input streams. The array of multimodal applications also has expanded rapidly, and currently ranges from map-based and virtual reality systems for simulation and training, to person identification/verification systems for security purposes, to medical and Web-based transaction systems that eventually will transform our daily lives [2-4]. In addition, multimodal systems have diversified to include new modality combinations, including speech and pen input, speech and lip movements, speech and manual gesturing, and gaze tracking and manual input [5-9].

This chapter specifically addresses the central performance issue of multimodal system design techniques for optimizing robustness. It reviews recent demonstrations of multimodal system robustness that surpass that of unimodal recognition systems, and also discusses future directions for optimizing robustness further through the design of advanced multimodal systems. Currently, there are two types of system that are relatively mature within the field of multimodal research, ones capable of processing users' speech and pen-based input, and others based
on speech and lip movements. Both types of system process two recognition-based input modes that are semantically rich, and have received focused research and development attention. As we will learn in later sections, the presence of two semantically rich input modes is an important prerequisite for suppression of recognition errors. The present chapter will focus on a discussion of these two types of multimodal system.
1.1 Types of Multimodal System
Since the appearance of Bolt's "Put That There" [10] demonstration system, which processed speech in parallel with touch-pad pointing, a variety of new multimodal systems have emerged. Most of the early multimodal systems processed simple mouse or touch-pad pointing along with speech input [11-16]. However, contemporary multimodal systems that process two parallel input streams, each of which is capable of conveying rich semantic information, have now been developed. These multimodal systems recognize two natural forms of human language and behavior, for which two recognition-based technologies are incorporated within a more powerful bimodal user interface. To date, systems that combine either speech and pen input [2,17] or speech and lip movements [1,7,18] are the predominant examples of this new class of multimodal system. In both cases, the keyboard and mouse have been abandoned. For speech and pen systems, spoken language sometimes is processed along with complex pen-based gestural input involving hundreds of different symbolic interpretations beyond pointing [2]. For speech and lip movement systems, spoken language is processed along with corresponding human lip movement information during the natural audio-visual experience of spoken interaction. In both cases, considerable work has been directed toward quantitative modeling of the integration and synchronization characteristics of the two input modes being processed, and innovative new time-sensitive architectures have been developed to process these rich forms of patterned input in a robust manner. Recent reviews of the cognitive science underpinnings, natural language processing and integration techniques, and architectural features used in these two types of multimodal system have been summarized elsewhere (see Benoit et al. [1], Oviatt et al. [2], and Oviatt [19]). Multimodal systems designed to recognize speech and pen-based gestures first were prototyped and studied in the early 1990s [20], with the original QuickSet system prototype built in 1994. The QuickSet system is an agent-based, collaborative multimodal system that runs on a hand-held PC [6]. As an example of a multimodal pen/voice command, a user might add three air landing strips to a map by saying "airplane landing strips facing this way (draws arrow NW), facing this way (draws arrow NE), and facing this way (draws arrow SE)." Other systems
of this type were built in the late 1990s, with examples including the Human-Centric Word Processor, Portable Voice Assistant, QuickDoc, and MVIEWS [2,21-23]. In most cases, these multimodal systems jointly interpreted speech and pen input based on a frame-based method of information fusion and a late semantic fusion approach, although QuickSet uses a statistically ranked unification process and a hybrid symbolic/statistical architecture [24]. Other very recent speech and pen multimodal systems also have begun to adopt unification-based multimodal fusion and hybrid processing approaches [25,26], although some of these newer systems still are limited to pen-based pointing. In comparison with the multimodal speech and lip movement literature, research and system building on multimodal speech and pen systems has focused more heavily on diversification of applications and near-term commercialization potential.

In contrast, research on multimodal speech and lip movements has been driven largely by cognitive science interest in intersensory audio-visual perception, and the coordination of speech output with lip and facial movements [5,7,27-36]. Among the contributions of this literature has been a detailed classification of human lip movements (visemes), and the viseme-phoneme mappings that occur during articulated speech. Actual systems capable of processing combined speech and lip movements have been developed during the 1980s and 1990s, and include the classic work by Petajan [37], Brooke and Petajan [38], and others [39-43]. Additional examples of speech and lip movement systems and applications have been detailed elsewhere [1,7]. The quantitative modeling of synchronized phoneme/viseme patterns that has been central to this multimodal literature recently has been used to build animated characters that generate text-to-speech output with coordinated lip movements for new conversational interfaces [28,44].

In contrast with the multimodal speech and pen literature, which has adopted late integration and hybrid approaches to processing dual information, speech and lip movement systems sometimes have been based on an early feature-level fusion approach. Although very few existing multimodal interfaces currently include adaptive processing, researchers in this area have begun exploring adaptive techniques for improving system robustness during noise [45-47]. This is an important future research direction that will be discussed further in Section 2.2.2.

As multimodal interfaces gradually evolve toward supporting more advanced recognition of users' natural activities in context, including the meaningful incorporation of vision technologies, they will begin to support innovative directions in pervasive interface design. New multimodal interfaces also will expand beyond rudimentary bimodal systems to ones that incorporate three or more input modes, qualitatively different modes, and more sophisticated models of multimodal interaction. This trend already has been initiated within biometrics research, which has combined recognition of multiple behavioral input modes (e.g., speech,
handwriting, gesturing, and body movement) with physiological ones (e.g., retinal scans, fingerprints) in an effort to achieve reliable person identification and verification in challenging field conditions [4,48].
1.2 Motivation for Multimodal System Design
The growing interest in multimodal interface design is inspired largely by the goal of supporting more flexible, transparent, and powerfully expressive means of human-computer interaction. Users have a strong preference to interact multimodally in many applications, and their performance is enhanced by it [2]. Multimodal interfaces likewise have the potential to expand computing to more challenging applications, to a broader spectrum of everyday users, and to accommodate more adverse usage conditions such as mobility. As this chapter will detail, multimodal interfaces also can function in a more robust and stable manner than unimodal systems involving a single recognition-based technology (e.g., speech, pen, vision).
1.2.1 Universal Access and Mobility
A major motivation for developing more flexible multimodal interfaces has been their potential to expand the accessibility of computing to more diverse and nonspecialist users. There are large individual differences in people's ability and preference to use different modes of communication, and multimodal interfaces are expected to increase the accessibility of computing for users of different ages, skill levels, and cultures, as well as for users with sensory, motor, or intellectual impairments. In part, an inherently flexible multimodal interface provides people with interaction choices that can be used to circumvent personal limitations. This is becoming increasingly important, since U.S. legislation effective June 2001 now requires that computer interfaces demonstrate accessibility in order to meet federal procurement regulations [49,50]. Such interfaces also permit users to alternate input modes, which can prevent overuse of and damage to any individual modality during extended computing tasks (R. Markinson, University of California at San Francisco Medical School, 1993). Another increasingly important advantage of multimodal interfaces is that they can expand the usage contexts in which computing is viable, including natural field settings and during mobility. In particular, they permit users to switch between modes as needed during the changing conditions of mobile use. Since input modes can be complementary along many dimensions, their combination within a multimodal interface provides broader utility across varied and changing usage contexts. For example, a person with a multimodal pen/voice interface may use
hands-free speech input for voice dialing a car cell phone, but switch to pen input to avoid speaking a financial transaction in a public setting.
1.2.2 Error Avoidance and Resolution
Of special relevance to this chapter, multimodal interface design frequently manifests improved error handling, in terms of both error avoidance and graceful recovery from errors [43,51-55]. There are user- and system-centered reasons why multimodal systems facilitate error recovery, when compared with unimodal recognition-based interfaces. First, in a multimodal speech and pen interface, users will select the input mode that they judge less error prone for particular lexical content, which tends to lead to error avoidance [51]. For example, they may prefer speedy speech input, but will switch to pen input to communicate a foreign surname. Secondly, users' language often is simplified when interacting multimodally. In one study, a user added a boat dock to an interactive map by speaking "Place a boat dock on the east, no, west end of Reward Lake." When using multimodal pen/voice input, the same user completed the same action with [draws rectangle] "Add dock." Multimodal utterances generally were documented to be briefer, and to contain fewer disfluencies and complex locative descriptions, compared with a speech-only interface [56]. This can result in substantially reducing the complexity of natural language processing that is needed, thereby reducing recognition errors [57]. Thirdly, users have a strong tendency to switch modes after a system recognition error, which tends to prevent repeat errors and to facilitate error recovery. This error resolution occurs because the confusion matrices differ for any given lexical content for the two recognition technologies involved [52].

In addition to these user-centered reasons for better error avoidance and resolution, there also are system-centered reasons for superior error handling. A well-designed multimodal architecture with two semantically rich input modes can support mutual disambiguation of signals. For example, Fig. 1 illustrates mutual disambiguation from a user's log during an interaction with the QuickSet multimodal system. In this example, the user said "zoom out" and drew a checkmark. Although the lexical phrase "zoom out" only was ranked fourth on the speech n-best list, the checkmark was recognized correctly by the gesture recognizer, and the correct semantic interpretation "zoom out" was recovered successfully (i.e., ranked first) on the final multimodal n-best list. As a result, the map interface
FIG. 1. QuickSet user interface during multimodal command to "zoom out," illustrating mutual disambiguation with the correct speech interpretation pulled up on its n-best list to produce a correct final multimodal interpretation.
zoomed out correctly, and no errors were ever experienced by the user. This recovery of the correct interpretation was achievable within the multimodal architecture because inappropriate signal pieces are discarded or "weeded out" during the unification process, which imposes semantic, temporal, and other constraints on what can be considered "legal" multimodal interpretations [2,6]. In this particular example, the three alternatives ranked higher on the speech n-best list only could have integrated with circle or question mark gestures, which were not present on the n-best gesture list. As a result, these alternatives could not form a legal integration and were discarded.

Using the QuickSet architecture, which involves late semantic integration and unification [2,6,24], it has been demonstrated empirically that a multimodal system can support mutual disambiguation of speech and pen input during semantic interpretation [53,58,59]. As a result, such a system yields a higher overall rate of correct utterance interpretations than spoken language processing alone. This performance improvement is the direct result of the disambiguation between signals that can occur in a well-designed multimodal system, because each mode provides context for interpreting the other during integration. To achieve optimal disambiguation of meaning, a multimodal interface ideally should be designed to include complementary input modes, and each mode should provide duplicate functionality such that users can accomplish their goals using either one.
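To make the unification idea concrete, the sketch below fuses two toy n-best lists under a hand-written compatibility table. All hypotheses, scores, and constraints are invented for illustration; they do not reflect QuickSet's actual recognizers, grammar, or scoring.

```python
# Toy illustration of mutual disambiguation through n-best unification.
# Hypotheses, scores, and the compatibility table are invented for illustration only.
from itertools import product

speech_nbest = [("zoom in", 0.40), ("pan out", 0.25),
                ("room out", 0.20), ("zoom out", 0.15)]   # correct phrase ranked fourth
gesture_nbest = [("checkmark", 0.70), ("arrow", 0.30)]    # gesture recognized correctly

# "Legal" multimodal combinations: which gestures each spoken command may unify with.
compatible = {
    "zoom in":  {"circle"},
    "pan out":  {"question mark"},
    "room out": {"circle"},
    "zoom out": {"checkmark"},
}

def fuse(speech, gesture):
    """Keep only combinations that satisfy the compatibility constraints, then rank them."""
    legal = []
    for (s, ps), (g, pg) in product(speech, gesture):
        if g in compatible.get(s, set()):
            legal.append((s, g, ps * pg))   # naive score combination for this sketch
    return sorted(legal, key=lambda item: item[2], reverse=True)

for phrase, gesture, score in fuse(speech_nbest, gesture_nbest):
    print(f"{phrase:8s} + {gesture:12s} score={score:.3f}")
# Only "zoom out" + "checkmark" survives unification, so the interpretation that was
# ranked fourth on the speech n-best list is pulled up to first in the fused result.
```

In this miniature version of the example above, the higher-ranked speech alternatives can only unify with circle or question mark gestures that never appear on the gesture n-best list, so they are discarded and the correct interpretation wins by default.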
Parallel error suppression also has been observed in multimodal speech and lip movement systems, although the primary focus has been on demonstrating improvements during noise. During the audio-visual perception of speech and lip movements, enhancement of multimodal speech recognition has been demonstrated over audio-only processing for human listeners [5,30,32,34,60] and also for multimodal speech and lip movement systems [3,39,43,45,61-65]. In this literature, key complementarities have been identified between acoustic speech and corresponding lip movements, which jointly supply unique information for accurately recognizing phonemes. More detailed research findings on the error suppression capabilities and mechanisms of multimodal systems will be reviewed in Section 2.2.
1.3 Long-Term Directions: Multimodal-Multisensor Systems That Model Biosensory Perception
The advent of multimodal interfaces based on recognition of human speech, gaze, gesture, and other natural behavior represents only the beginning of a progression toward computational interfaces capable of relatively human-like sensory perception. Such interfaces eventually will interpret continuous input from a large number of different visual, auditory, tactile, and other input modes, which will be recognized as users engage in everyday activities. The same system will track and incorporate information from multiple sensors on the user's interface and surrounding physical environment in order to support intelligent adaptation to the user, task, and usage environment. This type of advanced multimodal-multisensor interface will be integrated within a flexible architecture in which information from different input modes or sensors can be actively recruited when it is relevant to the accurate interpretation of an ongoing user activity. The flexible collection of information essentially will permit dynamic reconfiguration of future multimodal-multisensor interfaces, especially when key information is incomplete or discordant, or at points when the user's activity changes. Adaptive multimodal-multisensor interfaces that incorporate a broad range of information have the potential to achieve unparalleled robustness, and to support new functionality. They also have the potential to perform flexibly as multifunctional and personalized mobile interfaces. At their most evolved endpoint, this new class of interfaces will become capable of relatively human-like sensory-perceptual capabilities, including self-diagnostic functions.

The long-term research direction of designing robust multimodal-multisensor interfaces will be guided in part by biological, neurophysiological, and psychological evidence on the organization of intelligent sensory perception [66]. Coordinated sensory perception in humans and animals is active, purposeful, and able to achieve remarkable robustness through multimodality [5,30,32,34,67,68]. In
fact, robustness generally is achieved by integrating information from many different sources, whether different input modes, or different kinds of data from the same mode (e.g., brightness, color). During fusion of perceptual information, for example, the primary benefits include improved robustness, the extraction of qualitatively new perceptions (e.g., binocular stereo, depth perception), and compensation for perceptual disturbance (e.g., eye movement correction of perturbations induced by head movement). In biological systems, input also is dynamically recruited from relevant sensory neurons in a way that is both sensitive to the organism's present context, and informed by prior experience [69-71]. When orienting to a new stimulus, the collection of input sampled by an organism can be reconfigured abruptly. Since numerous information sources are involved in natural sensory perception, discordant or potentially faulty information can be elegantly resolved by recalibration or temporary suppression of the "offending" sensor [72-74]. In designing future architectures for multimodal interfaces, important insights clearly can be gained from biological and cognitive principles of sensory integration, intersensory perception, and their adaptivity during purposeful activity.

As a counterpoint, designing robust multimodal interfaces also requires a computational perspective that is informed by the implementation of past fusion-based systems. Historically, such systems often have involved conservative applications for which errors are considered costly and unacceptable, including biometrics, military, and aviation tasks [4,75,76]. However, fusion-based systems also have been common within the fields of robotics and speech recognition [3,7,18,24,39,43,47,77]. Although discussion of these many disparate literatures is beyond the scope of this chapter, nonetheless examination of the past application of fusion techniques can provide valuable guidance for the design of future multimodal-multisensor interfaces. In the present chapter, discussion will focus on research involving multimodal systems that incorporate speech recognition.
2. Robustness Issues in the Design of Recognition-Based Systems
As described in the Introduction, state-of-the-art multimodal systems now are capable of processing two parallel input streams that each convey rich semantic information. The two predominant types of such a system both incorporate speech processing, with one focusing on multimodal speech and pen input [2,17], and the other on multimodal speech and lip movements [1,7,18]. To better understand the comparative robustness issues associated with unimodal versus multimodal system design, Section 2.1 will summarize the primary error handling problems with unimodal recognition of an acoustic speech stream. Although spoken
language systems support a natural and powerfully expressive means of interaction with a computer, it is still the case that high error rates and fragile error handling pose the main interface design challenges that limit the commercial potential of this technology. For comparison, Section 2.2 will review research on the relative robustness of multimodal systems that incorporate speech. Section 2.3 then will summarize multimodal design strategies for optimizing robustness, and Section 2.4 will discuss the performance metrics used as forcing functions for achieving robustness.
2.1 Recognition Errors in Unimodal Speech Systems
Spoken language systems involve recognition-based technology that by nature is probabilistic and therefore subject to misinterpretation. Benchmark error rates reported for speech recognition systems still are too high to support many applications [78], and the time that users spend resolving errors can be substantial and frustrating. Although speech technology often performs adequately for read speech, for adult native speakers of a language, or when speaking under idealized laboratory conditions, current estimates indicate a 20-50% decrease in recognition rates when speech is delivered during natural spontaneous interactions, by a realistic range of diverse speakers (e.g., accented, child), or in natural field environments. Word error rates (WERs) are well known to vary directly with speaking style, such that the more natural the speech delivery the higher the recognition system's WER. In a study by Weintraub et al. [79], speakers' WERs increased from 29% during carefully read dictation, to 38% during a more conversationally read delivery, to 53% during natural spontaneous interactive speech. During spontaneous interaction, speakers typically are engaged in real tasks, and this generates variability in their speech for several reasons. For example, frequent miscommunication during a difficult task can prompt a speaker to hyperarticulate during their repair attempts, which leads to durational and other signal adaptations [80]. Interpersonal tasks or stress also can be associated with fluctuating emotional states, giving rise to pitch adaptations [81]. Basically, the recognition rate degrades whenever a user's speech style departs in some way from the training data upon which a recognizer was developed. Some speech adaptations, like hyperarticulation, can be particularly difficult to process because the signal changes often begin and end very abruptly, and they may only affect part of a longer utterance [80]. In the case of speaker accents, a recognizer can be trained to recognize an individual accent, although it is far more difficult to recognize varied accents successfully (e.g., Asian, European, African, North American), as might be required for an automated public telephone service. In the case of heterogeneous accents, it can be infeasible to specifically tailor an
application to minimize highly confusable error patterns in a way that would assist in supporting robust recognition [53]. The problem of supporting adequate recognition rates for diverse speaker groups is due partly to the need for corpus collection, language modeling, and tailored interface design with different user groups. For example, recent research has estimated that children's speech is subject to recognition error rates that are two to five times higher than adult speech [82-85]. The language development literature indicates that there are specific reasons why children's speech is harder to process than that of adults. Not only is it less mature, children's speech production is inherently more variable at any given stage, and it also is changing dynamically as they develop [86,87].

In addition to the many difficulties presented by spontaneous speech, speaker stylistic adaptations, and diverse speaker groups, it is widely recognized that laboratory assessments overestimate the recognition rates that can be supported in natural field settings [88-90]. Field environments typically involve variable noise levels, social interchange, multitasking and interruption of tasks, increased cognitive load and human performance errors, and other sources of stress, which collectively produce 20-50% drops in speech recognition accuracy. In fact, environmental noise currently is viewed as one of the primary obstacles to widespread commercialization of spoken language technology [89,91].

During field use and mobility, there actually are two main problems that contribute to degradation in system accuracy. The first is that noise itself contaminates the speech signal, making it harder to process. Stationary noise sources often can be modeled and processed successfully, when they can be predicted (e.g., road noise in a moving car). However, many noises in natural field environments are nonstationary ones that either change abruptly or involve variable phase-in/phase-out noise as the user moves. Natural field environments also present qualitatively different sources of noise that cannot always be anticipated and modeled. Speech technology has special difficulty handling abrupt onset and nonstationary sources of environmental noise.

The second key problem, which has been less well recognized and understood, is that people speak differently under noisy conditions in order to make themselves understood. During noise, speakers have an automatic normalization response called the "Lombard effect" [92], which causes systematic speech modifications that include increased volume, reduced speaking rate, and changes in articulation and pitch [58,91,93-95]. The Lombard effect not only occurs in human adults, but also in young children, primates, quail, and essentially all animals [96-98]. From an interface design standpoint, it is important to realize that the Lombard effect essentially is reflexive. As a result, it has not been possible to eliminate it through instruction or training, or to suppress it selectively when noise is introduced [99].
Although speech originally produced in noise actually is more intelligible to a human listener, a system's recognition accuracy instead degrades when it must process Lombard speech [91]. To summarize, current estimates indicate a 20-50% decrease in recognition rate performance when attempts are made to process natural spontaneous speech, or speech produced by a wider range of diverse speakers in real-world field environments. Unfortunately, this is precisely the kind of realistic speech that must be recognized successfully before widespread commercialization can occur. During the development of modern speech technology there generally has been an overreliance on hidden Markov modeling, and a relatively singular focus on recognizing the phonetic features of acoustic speech. Until very recently, the speech community also has focused quite narrowly on unimodal speech processing. Finally, speech recognition research has depended very heavily on the word error rate as a forcing function for advancing its technology. Alternative perspectives on the successful development of robust speech technology will be discussed throughout this chapter.
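For reference when interpreting the percentages quoted in this section and the next, the word error rate and the relative error reduction reported across studies are conventionally defined as follows; this is the standard formulation rather than a definition taken from any particular study cited here.

\[
\mathrm{WER} = \frac{S + D + I}{N}, \qquad
\text{relative error reduction} = \frac{E_{\text{baseline}} - E_{\text{improved}}}{E_{\text{baseline}}},
\]

where \(S\), \(D\), and \(I\) are the numbers of substituted, deleted, and inserted words in the recognizer output, \(N\) is the number of words in the reference transcript, and \(E\) denotes the total error rate of each system being compared. For example, under this definition, cutting a hypothetical 10% error rate to 5.9% would be a 41% relative reduction of the kind reported in Section 2.2.1.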
2.2 Research on Suppression of Recognition Errors in Multimodal Systems
A different approach to resolving the impasse created by recognition errors is to design a more flexible multimodal interface that incorporates speech as one of the input options. In the past, skeptics have claimed that a multimodal system incorporating two error-prone recognition technologies would simply compound errors and yield even greater unreliability. However, as introduced earlier, cumulative data now clarify that a system which fuses two or more input modes can be an effective means of reducing recognition uncertainty, thereby improving robustness [39,43,53,58]. Furthermore, performance advantages have been demonstrated for different modality combinations (speech and pen, speech and lip movements), for varied tasks (map-based simulation, speaker identification), and in different environments (noisy mobile, quiet stationary). Perhaps most importantly, the error suppression achievable with a multimodal system, compared with an acoustic-only speech system, can be very substantial in noisy environments [39,45,58,62,64,65]. Even in environments not degraded by noise, the error suppression in multimodal systems can exceed 40%, compared with a traditional speech system [53]. Recent studies also have revealed that a multimodal architecture can support mutual disambiguation of input signals, which stabilizes the system's performance in a way that can minimize or even close the recognition rate gap between nonnative and native speakers [53], and between mobile and stationary system use [58]. These results indicate that a well-designed multimodal system not only can
perform more robustly overall than a unimodal system, but also can perform more reliably across varied real-world users and usage contexts. In the following sections, research findings that compare the robustness of multimodal speech processing with parallel unimodal speech processing will be summarized. Relevant studies on this topic will be reviewed from the multimodal literature on speech and pen systems and on speech and lip movement systems.
2.2.1 Robustness of Multimodal Speech and Pen Systems

The literature on multimodal speech and pen systems recently has demonstrated error suppression ranging between 19 and 41% for speech processed within a multimodal architecture [53,58]. In two recent studies involving over 4600 multimodal commands, these robustness improvements also were documented to be greater for diverse user groups (e.g., accented versus native speakers) and challenging usage contexts (noisy mobile contexts versus quiet stationary use), as introduced above. That is, multimodal speech and pen systems typically show a larger performance advantage precisely for those users and usage contexts in which speech-only systems typically fail. Although recognition rates degrade sharply under the different kinds of conditions discussed in Section 2.1, nonetheless new multimodal pen/voice systems that improve robustness for many of these challenging forms of speech can be designed.

Research on multimodal speech and pen systems also has introduced the concept of mutual disambiguation (see Section 1.2 for definition and illustration). This literature has documented that a well-integrated multimodal system that incorporates two semantically rich input modes can support significant levels of mutual disambiguation between incoming signals. That is, a synergistic multimodal system can be designed in which each input mode disambiguates partial or ambiguous information in the other mode during the recognition process. Due to this capacity for mutual disambiguation, the performance of each error-prone mode potentially can be stabilized by the alternate mode whenever challenging usage conditions arise.

2.2.1.1 Accented Speaker Study. In a recent study, eight native speakers of English and eight accented speakers who represented different native languages (e.g., Mandarin, Tamil, Spanish, Turkish, Yoruba) each communicated 100 commands multimodally to the QuickSet system while using a hand-held PC. Sections 1.1 and 1.2 described the basic QuickSet system, and Fig. 2 illustrates its interface. With QuickSet, all participants could use multimodal speech and pen input to complete map-based simulation exercises. During testing, users accomplished a variety of tasks such as adding objects to a map (e.g., "Backburn zone" (draws irregular rectangular area)), moving objects (e.g., "Jeep follow this route"
FIG. 2. Diverse speakers completing commands multimodally using speech and gesture, which often would fail for a speech system due to varied accents.
(draws line)), and so forth. Details of the QuickSet system's signal and language processing, integration methods, and symbolic/statistical hybrid architecture have been summarized elsewhere [2,6,24]. In this study, data were collected on over 2000 multimodal commands, and the system's performance was analyzed for the overall multimodal recognition rate, recognition errors occurring within each system component (i.e., speech versus gesture recognition), and the rate of mutual disambiguation between speech and pen input during the integration process. When examining the rate of mutual disambiguation, all cases were assessed in which one or both recognizers failed to determine the correct lexical interpretation of the users' input, although the correct choice effectively was "retrieved" from lower down on an individual recognizer's n-best list to produce a correct final multimodal interpretation. The rate of mutual disambiguation per subject (MD_j) was calculated as the percentage of all their scorable integrated commands (N_j) in which the rank of the correct lexical choice on the multimodal n-best list (R_i^MM) was lower than the average rank of the correct lexical choice on the speech and gesture n-best lists (R_i^S and R_i^G), minus the number of commands in which the rank of the correct choice on the multimodal n-best list was higher than its average rank on the speech and gesture n-best lists.
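The displayed equation that originally followed this definition did not survive extraction. A reconstruction that is consistent with the verbal description above, with the notation assumed here (R_i^MM, R_i^S, and R_i^G denoting the rank of the correct interpretation on the multimodal, speech, and gesture n-best lists for command i), is:

\[
MD_j \;=\; \frac{100}{N_j}\sum_{i=1}^{N_j}\left[\,\mathbf{1}\!\left(R_i^{MM} < \tfrac{R_i^{S}+R_i^{G}}{2}\right) \;-\; \mathbf{1}\!\left(R_i^{MM} > \tfrac{R_i^{S}+R_i^{G}}{2}\right)\right],
\]

where \(\mathbf{1}(\cdot)\) equals 1 when its condition holds and 0 otherwise, so commands whose correct interpretation is pulled up during integration count positively and those pushed down count negatively.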
MD was calculated both at the signal processing level (i.e., based on rankings in the speech and gesture signal n-best lists), and at the parse level after natural language processing (i.e., based on the spoken and gestural parse n-best lists). Scorable commands included all those that the system integrated successfully, and that contained the correct lexical information somewhere in the speech, gesture, and multimodal n-best lists. All significant MD results reported in this section (2.2.1) replicated across both signal and parse-level MD.

The results of this study confirmed that a multimodal architecture can support significant levels of mutual disambiguation, with one in eight user commands recognized correctly due to mutual disambiguation. Table Ia confirms that the speech recognition rate was much poorer for accented speakers (-9.5%), as would be expected, although their gesture recognition rate averaged slightly but significantly better (+3.4%). Table Ib reveals that the rate of mutual disambiguation (MD) was significantly higher for accented speakers (+15%) than for native speakers of English (+8.5%)—by a substantial 76%. As a result, Table Ia shows that the final multimodal recognition rate for accented speakers no longer differed significantly from the performance of native speakers.

TABLE Ia
DIFFERENCE IN RECOGNITION RATE PERFORMANCE OF ACCENTED SPEAKERS, COMPARED WITH NATIVE ONES, DURING SPEECH, GESTURE, AND MULTIMODAL PROCESSING

Type of language processing     % Performance difference for accented speakers
Speech                          -9.5*
Gesture                         +3.4*
Multimodal                      —

* Significant difference present.

TABLE Ib
MUTUAL DISAMBIGUATION (MD) RATE AND RATIO OF MD INVOLVING SPEECH SIGNAL PULL-UPS FOR NATIVE AND ACCENTED SPEAKERS

Type of MD metric               Native speakers     Accented speakers
Signal MD rate                  8.5%                15.0%*
Ratio of speech pull-ups        0.35                0.65*

* Significant difference present.
The main factor responsible for closing this performance gap between groups was the higher rate of mutual disambiguation for accented speakers. Overall, a 41% reduction was revealed in the total error rate for spoken language processed within the multimodal architecture, compared with spoken language processed as a stand-alone [53]. Table Ib also reveals that speech recognition was the more fragile mode for accented speakers, with two-thirds of all mutual disambiguation involving pull-ups of their failed speech signals. However, the reverse was true for native speakers, with two-thirds of the mutual disambiguation in their case involving retrieval of failed ambiguous gesture signals. These data emphasize that there often are asymmetries during multimodal processing as to which input mode is more fragile in terms of reliable recognition. When one mode is expected to be less reliable, as is speech for accented speakers or during noise, then the most strategic multimodal design approach is to supplement the error-prone mode with an alternative one that can act as a natural complement and stabilizer by promoting mutual disambiguation.

Table II reveals that although single-syllable words represented just 40% of users' multimodal commands in these data, they nonetheless accounted for 58.2% of speech recognition errors. Basically, these brief monosyllabic commands were especially error prone because of the minimal amount of acoustic signal information available for the speech recognizer to process. These relatively fragile monosyllabic commands also accounted for 84.6% of the cases in which a failed speech interpretation was pulled up during the mutual disambiguation process, which was significantly greater than the rate observed for multisyllabic utterances [53].

TABLE II
RELATION BETWEEN SPOKEN COMMAND LENGTH, THE PRESENCE OF SPEECH RECOGNITION ERRORS, AND THE PERCENTAGE OF MULTIMODAL COMMANDS WITH MUTUAL DISAMBIGUATION (MD) INVOLVING A SPEECH SIGNAL PULL-UP

                                 1 syllable     2-7 syllables
% Total commands in corpus       40             60
% Speech recognition errors      58.2           41.8
% MD with speech pull-ups        84.6*          15.4

* Significant difference present between monosyllabic and multisyllabic commands.

2.2.1.2 Mobile Study. In a second study, 22 users interacted multimodally using the QuickSet system on a hand-held PC. Each user completed half of 100 commands in a quiet room (42 dB) while stationary, and the other half while mobile in a moderately noisy natural setting (40-60 dB), as illustrated in Fig. 3.
FIG. 3. Mobile user with a hand-held PC in a moderately noisy cafeteria, who is completing commands multimodally that often fail for a speech system.
Testing was replicated across microphones representing opposite quality, including a high-quality, close-talking, noise-canceling microphone, and also a low-quality, built-in microphone without noise cancellation. Over 2600 multimodal utterances were evaluated for the multimodal recognition rate, recognition errors occurring within each component recognizer, and the rate of mutual disambiguation between signals.

The results indicated that one in seven utterances was recognized correctly because of mutual disambiguation occurring during multimodal processing, even though one or both of the component recognizers failed to interpret the user's intended meaning. Table IIIa shows that the speech recognition rate was degraded when speakers were mobile in a moderately noisy environment, compared with when they were stationary in a quiet setting (-10%). However, their gesture recognition rate did not decline significantly during mobility, perhaps because pen input involved brief one- to three-stroke gestures. Table IIIb reveals that the rate of mutual disambiguation in the mobile condition (+16%) also averaged substantially higher than the same user's stationary rate (+9.5%). As a result, Table IIIa confirms a significant narrowing of the gap between mobile and stationary recognition rates (to -8.0%) during multimodal processing, compared with spoken language processing alone. In fact, 19-35% relative reductions in the total error rate (for noise-canceling versus built-in microphones, respectively) were observed when speech was processed within the multimodal architecture [58]. Finally, the general pattern of results obtained in this mobile study replicated across opposite types of microphone technology.

TABLE IIIa
DIFFERENCE IN RECOGNITION RATE PERFORMANCE IN MOBILE ENVIRONMENT, COMPARED WITH STATIONARY ONE, FOR SPEECH, GESTURE, AND MULTIMODAL PROCESSING

Type of language processing     % Performance difference when mobile
Speech                          -10.0*
Gesture                         —
Multimodal                      -8.0*

* Significant difference present.

TABLE IIIb
MUTUAL DISAMBIGUATION (MD) RATE AND RATIO OF MD INVOLVING SPEECH SIGNAL PULL-UPS IN STATIONARY AND MOBILE ENVIRONMENTS

Type of MD metric               Stationary      Mobile
Signal MD rate                  9.5%            16.0%*
Ratio of speech pull-ups        .26             .34*

* Significant difference present.

When systems must process speech in natural contexts that involve variable levels of noise, and qualitatively different types of noise (e.g., abrupt onset, phase-in/phase-out), the problem of supporting robust recognition is extremely difficult. Even when it is feasible to collect realistic mobile training data and to model many qualitatively different sources of noise, speech processing during abrupt shifts in noise (and the corresponding Lombard adaptations that users make) simply is a challenging problem. As a result, mobile speech processing remains an unsolved problem for traditional speech recognition. In the face of such challenges, a multimodal architecture that supports mutual disambiguation potentially can provide
greater stability and a more viable long-term avenue for managing errors in emerging mobile interfaces. This theme also is central to the performance advantages identified for multimodal speech and lip movement systems, which are described in Section 2.2.2.

One unique aspect of this mobile study was its focus on testing during actual use of an implemented multimodal system while users were mobile in a natural field environment. Such performance testing was possible because of the state of development of multimodal speech and pen systems, which now are beginning to transition into commercial applications. It also was possible because of the emerging research infrastructure now becoming available for collecting mobile field data [58]. In addition, this mobile study was unique in its examination of performance during naturalistic noisy conditions, especially the inclusion of nonstationary noise. As a result, the present data provide information on the expected performance advantages of multimodal systems in moderately noisy field settings, with implications for the real-world commercialization of new mobile interfaces.

In summary, in both of the studies described in this section, even though one or both of the component recognizers failed to identify users' intended meaning, the architectural constraints imposed by the multimodal system's unification process ruled out incompatible speech and pen signal integrations. These unification constraints effectively pruned recognition errors from the n-best lists of the component recognizers, which resulted in the retrieval of correct lexical information from lower down on their lists, producing a correct final multimodal interpretation. This process suppressed many errors that would have occurred, such that users never experienced them. It also had an especially large impact on reducing the speech recognition errors that otherwise were so prevalent for accented speakers and in noisy environments.
2.2.2 Robustness of Multimodal Speech and Lip Movement Systems

During the natural audio-visual perception of speech, human listeners typically observe a speaker's lip and facial movements while attending to speech. Furthermore, their accurate interpretation of speech is well known to be superior during multimodal speech perception, compared with acoustic-only speech processing [5,30,32,34]. In noisy environments, which include most natural field environments, visual information about a speaker's lip movements can be particularly valuable for the accurate interpretation of speech. However, there also are large individual and cultural differences in the information available in visible lip movements, as well as in people's ability and tendency to lip-read [7]. For example, the hearing impaired, elderly, and nonnative speakers all typically rely
more heavily on visual lip movements when they attend to speech, so for these populations accurate interpretation can depend critically on combined audio-visual processing [100,101]. The cognitive science literature generally has provided a good foundation for understanding many aspects of the design and expected value of multimodal speech and lip movement systems.

In many of the multimodal speech and lip movement systems developed during the 1980s and 1990s, error suppression also has been observed [3,37,39,43,45,61-65,102]. This literature has investigated the use of visually derived information about a speaker's lip movements (visemes) to improve recognition of acoustic speech (phonemes). The primary focus of this research has been on demonstrating robustness improvement during the audio-visual processing of speech during noise, compared with acoustic-only speech processing, with demonstrations of a larger boost in robustness as the noise level increases and speech recognition errors rise. Robustness improvements for multimodal speech and lip movement systems that have been reported under noise-free conditions actually have been relatively small when they occur at all, typically with less than a 10% relative error reduction [102]. In fact, sometimes a performance penalty occurs during the audio-visual processing of noise-free speech, largely as a consequence of adopting approaches designed to handle speech in noise [103]. On the other hand, robustness improvements of over 50% relative error reduction frequently have been documented under noisy conditions [39,45,46,61,65,102].

2.2.2.1 Profile of Typical Study. In typical studies exploring performance enhancement in multimodal speech and lip movement systems, researchers have compared different approaches for audio-only, visual-only, and audio-visual system processing. Typically, testing has been done on a limited single-speaker corpus involving read materials such as nonsense words or digits [39,43]. Artificial stationary noise (e.g., white noise) then is added to generate conditions representing a range of different signal-to-noise ratio (SNR) decibel levels, for example, graduated intervals between -5 and +25 dB. Most assessments have been performed on isolated-word speaker-dependent speech systems [39], although more recent studies now are beginning to examine continuous speech recognition as well [45]. The most common goal in these studies has been a basic demonstration of whether word error rates for audio-visual speech processing fall below those for audio-only and video-only processing, preferably at all levels of additive noise. Frequently, different integration strategies for audio-visual processing also are compared in detail. As described previously, the most common result has been to find the largest enhancements of audio-visual performance at the most degraded noise levels, and modest or no enhancement in a noise-free context.

Unlike studies on multimodal speech and pen systems, research on the performance of multimodal speech and lip movement systems has not focused on the
mutual disambiguation of information that can occur between two rich input modes, but rather on the bootstrapping of speech recognition under noisy conditions. In addition, studies conducted in this area have not involved testing with fully implemented systems in actual noisy field settings. They likewise have been limited to testing on stationary noise sources (for a recent exception, see DuPont and Luettin's research [45]), rather than the more realistic and challenging nonstationary sources common during mobile use. Future research in this area will need to include more realistic test conditions before results can be generalized to situations of commercial import.

More recent research in this area now is beginning to train systems and evaluate multimodal integration techniques on increasingly large multiparty corpora, and also to develop multimodal audio-visual systems for a variety of potential applications (e.g., speech recognition, speaker recognition, speech event detection) [3,45]. Currently, researchers in this area are striving to develop new integration techniques that can support general robustness advantages across the spectrum of noise conditions, from extremely adverse environments with SNR ranging 0 to -22 dB, to quiet environments with SNR ranging 20-30 dB. One goal is to develop multimodal integration techniques that yield generally superior robustness in widely varied and potentially changing environment conditions, such as those anticipated during mobile use. A second goal is to demonstrate larger improvements for audio-visual processing over audio-only in noise-free environments, which has been relatively elusive to date [104]. Late-integration fusion (i.e., "decision-level") and hybrid integration techniques, such as those used in multimodal speech and pen systems, generally have become viewed as good avenues for achieving these robustness goals [3,43,45,62,65,102].

Recent work also has begun to focus on audio-visual robustness gains achievable through adaptive processing, in particular various techniques for stream weight estimation [45,63,64]. For example, a recent experiment by Potamianos and Neti [64] of IBM-Watson reported over a 20% relative error reduction based on an n-best stream likelihood dispersion measure. Further work on adaptive multimodal processing is an important research direction in need of additional attention. Issues as basic as determining the key criteria and strategies needed to accomplish intelligent adaptation in natural field settings still are very poorly understood. In general, early attempts to adapt multimodal audio-visual processing based on simple engineering concepts will need to be superseded by empirically validated strategies. For example, automated dynamic weighting of the audio and visual input modes as a function of SNR estimates [46,47] is known to be problematic because it fails to take into account the impact of users' Lombard adaptations (for discussion, see Oviatt's research [105]).
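To illustrate the decision-level, stream-weighted fusion idea discussed above, the sketch below combines per-word audio and visual log-likelihoods with a single stream weight. The candidate words, scores, and weight values are invented, and a fixed weight is used purely for illustration; as noted above, real adaptive systems estimate such weights from quantities like the n-best likelihood dispersion rather than from SNR alone.

```python
# Toy decision-level (late) fusion of audio and visual streams with a stream weight.
# All log-likelihoods and weight values are invented for illustration only.

def fuse_streams(audio_loglik, visual_loglik, audio_weight):
    """score(word) = w * logP_audio(word) + (1 - w) * logP_visual(word)."""
    scores = {w: audio_weight * audio_loglik[w] + (1.0 - audio_weight) * visual_loglik[w]
              for w in audio_loglik}
    return max(scores, key=scores.get), scores

# Hypothetical candidates differing mainly in place of articulation, which is
# easier to see on the lips than to hear in noise.
audio = {"bet": -2.2, "debt": -2.0, "get": -2.1}    # noisy audio slightly favors "debt"
visual = {"bet": -0.8, "debt": -2.6, "get": -2.9}   # visible lip closure favors "bet"

best_quiet_weighting, _ = fuse_streams(audio, visual, audio_weight=0.95)
best_noise_weighting, _ = fuse_streams(audio, visual, audio_weight=0.5)
print(best_quiet_weighting)   # "debt" -- trusting the degraded audio too heavily
print(best_noise_weighting)   # "bet"  -- down-weighting audio lets the visual stream correct it
```

The interesting design question, and the one this literature treats as open, is how the weight itself should be chosen and adapted as conditions change.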
Like the literature on multimodal speech and pen interaction, research in this area has identified key complementarities between the audio speech signal and corresponding visible speech movements [29,33,106]. For example, place of articulation is difficult to discriminate auditorally for consonants, but easy to distinguish visually from the position of the teeth, tongue, and lips. Natural feature-level complementarities also have been identified between visemes and phonemes for vowel articulation, with vowel rounding better conveyed visually, and vowel height and backness better revealed auditorally [29,33]. Some speech and lip movement systems have developed heuristic rules incorporating information about the relative confusability of different kinds of phonemes within their audio and visual processing components [107]. Future systems that incorporate phoneme-level information of this kind are considered a potentially promising avenue for improving robustness. In particular, research on the misclassification of consonants and vowels by audio-visual systems has emphasized the design recommendation that the visual component be weighted more heavily when discriminating place and manner of articulation, but less heavily when determining voicing [65]. Research by Silsbee and colleagues [65] has indicated that when consonant versus vowel classification tasks are considered separately, although no robustness enhancement occurs for audio-visual processing of consonants during noise-free conditions, an impressive 61% relative error reduction is obtained for vowels [65]. These results underscore the potential value of applying cognitive science findings to the design of future adaptive systems.

Finally, like the literature on multimodal speech and pen systems, in this research area brief spoken monosyllables have been associated with larger magnitude robustness gains during audio-visual processing, compared to multisyllabic utterances [108]. This is largely because monosyllables contain relatively impoverished acoustic information, and therefore are subject to higher rates of speech recognition errors. This finding in the speech and lip movement literature basically is parallel to the higher rate of mutual disambiguation reported for monosyllables in the multimodal speech and pen literature [53]. As will be discussed in Section 2.3, this replicated finding suggests that monosyllables may represent one of the targets of opportunity for future multimodal system design.
2.3 Multimodal Design Strategies for Optimizing Robustness
From the emerging literature on multimodal system performance, especially the error suppression achievable with such systems, there are several key concepts that surface as important for their design. The following are examples of fertile
research strategies known to be relevant to improving the robustness of future multimodal systems:

• Increase the number of input modes interpreted within the multimodal system. This principle is effective because it supports the supplementation and disambiguation of partial or conflicting information that may be present in any individual input mode. Current bimodal systems largely are successful due to their elementary fusion of information sources. However, according to this general principle, future multimodal systems could optimize robustness further by combining additional information sources—for example, three or more input modes. How much additional robustness gain can be expected as a function of incorporating additional sources of information is an issue that remains to be evaluated in future research.

• Combine input modes that represent semantically rich information sources. In order to design multimodal systems that support mutual disambiguation, a minimum of two semantically rich input modes is required. Both types of multimodal system discussed in this chapter process two semantically rich input modes, and both have demonstrated enhanced error suppression compared with unimodal processing. In contrast, multimodal systems that combine only one semantically rich input mode (e.g., speech) with a second that is limited in information content (e.g., mouse, touch, or pen input only for selection) cannot support mutual disambiguation. However, even these more primitive multimodal systems can support disambiguation of the rich input mode to some degree by the more limited one. For example, when pointing to select an interface entity or input field, the natural language processing can be constrained to a reduced set of viable interpretations, thereby improving the accuracy of spoken language recognition [109].

• Increase the heterogeneity of input modes combined within the multimodal system. In order to bootstrap the joint potential of two input modes for collecting the relevant information needed to achieve mutual disambiguation of partial or conflicting information during fusion, one strategy is to sample from a broad range of qualitatively different information sources. In the near term, the most likely candidates for new modes to incorporate within multimodal systems involve vision-based recognition technologies. Specific goals and strategies for achieving increased heterogeneity of information, and how successfully they may optimize overall multimodal system robustness, is a topic that needs to be addressed in future research. One specific strategy for achieving heterogeneity is described in the next section.

• Integrate maximally complementary input modes. One goal in the design of multimodal systems is to combine modes into a well-integrated system. If
designed opportunistically, such a system should integrate complementary modalities to yield a highly synergistic blend in which the strengths of each mode can be capitalized upon and used to overcome weaknesses in the other [11]. As discussed earlier, in the multimodal speech and lip movement literature, natural feature-level complementarities already have been identified between visemes and phonemes [29,33]. In multimodal speech and pen research, the main complementarity involves visual-spatial semantic content. Whereas visual-spatial information is uniquely and clearly indicated via pen input, the strong descriptive capabilities of speech are better suited for specifying temporal and other nonspatial information [56,110]. In general, this design approach promotes the philosophy of using modalities to their natural advantage, and it also represents a strategy for combining modes in a manner that can generate mutual disambiguation. In fact, achieving multimodal performance gains of the type described earlier in this chapter is well known to depend in part on successful identification of the unique semantic complementarities of a given pair of input modes. As discussed in Section 2.2.1, when one mode is expected to be less reliable (e.g., speech for accented speakers or during noise), then the most strategic multimodal design approach is to supplement the error-prone mode with a second one that can act as a natural complement and stabilizer in promoting mutual disambiguation. Future research needs to explore asymmetries in the reliability of different input modes, as well as the main complementarities that exist between modes that can be leveraged during multimodal system design.

• Develop multimodal processing techniques that retain information. In addition to the general design strategies outlined above, it also is important to develop multimodal signal processing, language processing, and architectural techniques that retain information and make it available during decision-level fusion. For example, alternative interpretations should not be pruned prematurely from each of the component recognizers' n-best lists. Excessive pruning of n-best list alternatives (i.e., by setting probability estimate thresholds too high) could result in eliminating the information needed for mutual disambiguation to occur. This is because the correct partial information must be present on each recognizer's n-best list in order for the correct final multimodal interpretation to be formed during unification (a small illustration of this point appears after the lists below).

The following are research strategies that are known to be relevant for successfully applying multimodal system design to targets of opportunity in which the
greatest enhancement of robustness is likely to be demonstrated over unimodal system design:

• Apply multimodal system design to brief information segments for which robust recognition is known to be unreliable. As outlined in Sections 2.2.1 and 2.2.2, brief segments of information are the most fragile and subject to error during recognition (e.g., monosyllabic acoustic content during speech recognition). They also are selectively improved during multimodal processing in which additional information sources are used to supplement interpretation.

• Apply multimodal system design to challenging user groups and usage environments for which robust recognition is known to be unreliable. When a recognition-based component technology is known to be selectively faulty for a given user group or usage environment, then a multimodal interface can be used to stabilize errors and improve the system's average recognition accuracy. As discussed earlier, accented speakers and noisy mobile environments are more prone to precipitate speech recognition errors. In such cases, a multimodal interface that processes additional information sources can be crucial in disambiguating the error-prone speech signal, sometimes recovering performance to levels that match the accuracy of nonrisk conditions. Further research needs to continue investigating other potential targets of opportunity that may benefit selectively from multimodal processing, including especially complex task applications, error-prone input devices (e.g., laser pointers), and so forth.

In discussing the above strategies, there is a central theme that emerges. Whenever information is too scant or ambiguous to support accurate recognition, a multimodal interface can provide an especially opportune solution to fortify robustness. Furthermore, the key design strategies that contribute to the enhanced robustness of multimodal interfaces are those that add greater breadth and richness to the information sources that are integrated within a given multimodal system. Essentially, the broader the information collection net cast, the greater the likelihood missing or conflicting information will be resolved, leading to successful disambiguation of user input during the recognition process.
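As the small illustration promised in the "retain information" strategy above, the fragment below contrasts an aggressively thresholded n-best list with a permissive one. The hypotheses, scores, and threshold values are invented; the point is only that a hypothesis pruned before fusion can never be pulled up during fusion.

```python
# Toy illustration: aggressive confidence thresholding can discard the very hypothesis
# that mutual disambiguation would otherwise recover during multimodal fusion.
# All hypotheses, scores, and thresholds are invented for illustration only.

speech_nbest = [("zoom in", 0.40), ("pan out", 0.25),
                ("room out", 0.20), ("zoom out", 0.15)]   # correct phrase ranked fourth

def prune(nbest, threshold):
    """Keep only hypotheses whose confidence score meets the threshold."""
    return [(hyp, score) for hyp, score in nbest if score >= threshold]

strict = prune(speech_nbest, threshold=0.25)   # drops "zoom out" before fusion can use it
loose = prune(speech_nbest, threshold=0.05)    # retains it for the unification stage

print([hyp for hyp, _ in strict])   # ['zoom in', 'pan out'] -- correct choice already lost
print([hyp for hyp, _ in loose])    # correct choice still available for mutual disambiguation
```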
2.4 Performance Metrics as Forcing Functions for Robustness
In the past, the speech community has relied almost exclusively on the assessment of WER to calibrate the performance accuracy of spoken language systems. This metric has served as the basic forcing function for comparing and iterating
spoken language systems. In particular, WER was used throughout the DARPA-funded Speech Grand Challenge research program [78] to compare speech systems at various funded sites. Toward the end of this research program, it was widely acknowledged that although metrics are needed as a forcing function, nonetheless reliance on any single metric can be risky and counterproductive to the promotion of high-quality research and system building. This is because a singular focus on developing technology to meet the demands of any specific metric essentially encourages the research community to adopt a narrow and conservative set of design goals. It also tends to encourage relatively minor iterative algorithmic adaptations during research and system development, rather than a broader and potentially more productive search for innovative solutions to the hardest problems. When innovative or even radically different strategies are required to circumvent a difficult technical barrier, then new performance metrics can act as a stimulus and guide in advancing research in the new direction. Finally, in the case of the speech community's overreliance on WER, one specific adverse consequence was the general disincentive to address many essential user-centered design issues that could have reduced errors and improved error handling in spoken language systems.

During the development of multimodal systems, one focus of early assessments clearly has been on the demonstration of improved robustness over unimodal speech systems. To track this, researchers have calculated an overall multimodal recognition rate, although often summarized at the utterance level and with additional diagnostic information about the performance of the system's two component recognizers. This has provided a global assessment tool for indexing the average level of multimodal system accuracy, as well as the basic information needed for comparative analysis of multimodal versus unimodal system performance. However, as an alternative approach to traditional speech processing, multimodal research also has begun to adopt new and more specialized metrics, such as a given system's rate of mutual disambiguation. This concept has been valuable for assessing the degree of error suppression achievable in multimodal systems. It also has provided a tool for assessing each input mode's ability to disambiguate errors in the other mode. This latter information has assisted in clarifying the relative stability of each mode, and also in establishing how effectively two modes work together to supply the complementary information needed to stabilize system performance. In this respect, the mutual disambiguation metric has significant diagnostic capabilities beyond simply summarizing the average level of system accuracy. As part of exploratory research, the mutual disambiguation metric also is beginning to be used to define in what circumstances a particular input mode is effective
at stabilizing the performance of a more fragile mode. In this sense, it is playing an active role in exploring user-centered design issues relevant to the development of new multimodal systems. It also is elucidating the dynamics of error suppression. In the future, other new metrics that reflect concepts of central importance to the development of emerging multimodal systems will be needed.
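To make the preceding discussion concrete, the following minimal Python sketch illustrates, under stated assumptions, how utterance-level robustness statistics of this general kind might be tallied from scored interaction logs. The field names and the bookkeeping are illustrative assumptions, not the formal definition of the mutual disambiguation metric used in this research program.

def robustness_summary(utterances):
    # Each utterance is a dict of boolean outcomes: 'speech_ok' and
    # 'gesture_ok' record whether each component recognizer's top choice
    # was correct; 'fused_ok' records whether the final multimodal
    # interpretation was correct.
    n = len(utterances)
    speech_errors = sum(not u["speech_ok"] for u in utterances)
    fused_errors = sum(not u["fused_ok"] for u in utterances)
    # Utterances recovered by fusion: at least one component recognizer
    # failed, yet the integrated multimodal interpretation was correct.
    recovered = sum(
        u["fused_ok"] and not (u["speech_ok"] and u["gesture_ok"])
        for u in utterances
    )
    return {
        "speech_error_rate": speech_errors / n,
        "multimodal_error_rate": fused_errors / n,
        "recovered_by_fusion_rate": recovered / n,
    }

sample = [
    {"speech_ok": False, "gesture_ok": True, "fused_ok": True},
    {"speech_ok": True, "gesture_ok": True, "fused_ok": True},
    {"speech_ok": False, "gesture_ok": False, "fused_ok": False},
    {"speech_ok": True, "gesture_ok": False, "fused_ok": True},
]
print(robustness_summary(sample))

A summary of this form gives both the comparative error rates discussed above and a rough, diagnostic view of how often fusion rescues a failing component recognizer.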
3. Future Directions: Breaking the Robustness Barrier
The computer science community is just beginning to understand how to design innovative, well-integrated, and robust multimodal systems. To date, most multimodal systems remain bimodal, and recognition technologies related to several human senses (e.g., haptics, smell) have yet to be well represented within multimodal interfaces. As with past multimodal systems, the design and development of new types of multimodal system that include such modes will not be achievable through intuition alone. Rather, it will depend on knowledge of the usage and natural integration patterns that typify people's combined use of various input modes. This means that the successful design of new multimodal systems will continue to require guidance from cognitive science on the coordinated human perception and production of natural modalities. In this respect, multimodal systems only can flourish through multidisciplinary cooperation and teamwork among those working on different component technologies. The multimodal research community also could benefit from far more cross-fertilization among researchers representing the main subareas of multimodal expertise, especially those working in the more active areas of speech and pen and speech and lip movement research. Finally, with multimodal research projects and funding expanding in Europe, Japan, and elsewhere, the time is ripe for more international collaboration in this research area. To achieve commercialization and widespread dissemination of multimodal interfaces, more general, robust, and scalable multimodal architectures will be needed, which now are beginning to emerge. Most multimodal systems have been built during the past decade, and they are research-level systems. However, in several cases they now have developed beyond the prototype stage, and are being integrated with other software at academic and federal sites, or are beginning to appear as newly shipped products [2,19]. Future research will need to focus on developing hybrid symbolic/statistical architectures based on large corpora and refined fusion techniques in order to optimize multimodal system robustness. Research also will need to develop new architectures capable of flexibly coordinating numerous multimodal-multisensor system components to support new directions in adaptive processing. To transcend the
robustness barrier, research likewise will need to explore new natural language, dialogue processing, and statistical techniques for optimizing mutual disambiguation among the input modes combined within new classes of multimodal systems.

As multimodal interfaces gradually progress toward supporting more robust and human-like perception of users' natural activities in context, they will need to expand beyond rudimentary bimodal systems to ones that incorporate three or more input modes. Like biological systems, they should be generalized to include input from qualitatively different and semantically rich information sources. This increase in the number and heterogeneity of input modes can effectively broaden the reach of advanced multimodal systems, and provide them with access to the discriminative information needed to reliably recognize and process users' language, actions, and intentions in a wide array of different situations. Advances of this kind are expected to contribute to a new level of robustness or hybrid vigor in multimodal system performance. This trend already has been initiated within the field of biometrics research, which is combining recognition of multiple behavioral modes with physiological ones to achieve reliable person identification and verification under challenging field conditions. To support increasingly pervasive multimodal interfaces, these combined information sources ideally must include data collected from a wide array of sensors as well as input modes, and from both active and passive forms of user input.

Very few existing multimodal systems that involve speech recognition currently include any adaptive processing. With respect to societal impact, the shift toward adaptive multimodal interfaces is expected to provide significantly enhanced usability for a diverse range of everyday users, including young and old, experienced and inexperienced, able-bodied and disabled. Such interfaces also will be far more personalized and appropriately responsive to the changing contexts induced by mobility than interfaces of the past. With respect to robustness, adaptivity to the user, ongoing task, dialogue, environmental context, and input modes will collectively generate constraints that can greatly improve system reliability. In the future, adaptive multimodal systems will require active tracking of potentially discriminative information, as well as flexible incorporation of additional information sources during the process of fusion and interpretation. In this respect, future multimodal interfaces and architectures will need to be able to engage in flexible reconfiguration, such that specific types of information can be integrated as needed when adverse conditions arise (e.g., noise), or if the confidence estimate for a given interpretation falls too low. The successful design of future adaptive multimodal systems could benefit from a thoughtful examination of the models already provided by biology and cognitive science on intelligent adaptation during perception, as well as from the literature on robotics.
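As a purely illustrative sketch of the kind of flexible reconfiguration described above, in which additional information sources are folded in only when adverse conditions arise or when interpretation confidence falls too low, one might write something like the following. The thresholds, source names, and control flow are assumptions, not a description of any existing system.

CONFIDENCE_FLOOR = 0.6   # assumed threshold below which extra modes are recruited
NOISE_CEILING = 0.3      # assumed level above which conditions count as "adverse"

def interpret(primary_mode, optional_modes, noise_level):
    # primary_mode and each optional mode return (hypothesis, confidence).
    hypothesis, confidence = primary_mode()
    if confidence >= CONFIDENCE_FLOOR and noise_level <= NOISE_CEILING:
        return hypothesis  # favorable conditions: the primary mode suffices
    # Adverse conditions or low confidence: recruit additional modes and
    # keep the highest-confidence interpretation among all candidates.
    candidates = [(hypothesis, confidence)]
    candidates.extend(mode() for mode in optional_modes)
    return max(candidates, key=lambda hc: hc[1])[0]

speech = lambda: ("open map", 0.4)       # degraded in noise
gesture = lambda: ("open map", 0.8)
gaze = lambda: ("close window", 0.5)
print(interpret(speech, [gesture, gaze], noise_level=0.7))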
4. Conclusion
In summary, a well-designed multimodal system that fuses two or more information sources can be an effective means of reducing recognition uncertainty. Performance advantages have been demonstrated for different modality combinations (speech and pen, speech and lip movements), as well as for varied tasks and different environments. Furthermore, the average error suppression achievable with a multimodal system, compared with a unimodal spoken language one, can be very substantial. These findings indicate that promising but error-prone recognition-based technologies are increasingly likely to be embedded within multimodal systems in order to achieve commercial viability during the next decade. Recent research also has demonstrated that multimodal systems can perform more stably for challenging real-world user groups and usage contexts. For this reason, they are expected to play an especially central role in the emergence of mobile interfaces, and in the design of interfaces for every-person universal access. In the long term, adaptive multimodal-multisensor interfaces are viewed as a key avenue for supporting far more pervasive interfaces with entirely new functionality not supported by computing of the past.

ACKNOWLEDGMENTS
I thank the National Science Foundation for their support over the past decade, which has enabled me to pursue basic exploratory research on many aspects of multimodal interaction, interface design, and system development. The preparation of this chapter has been supported by NSF Grant IRI-9530666 and NSF Special Extension for Creativity (SEC) Grant IIS-9530666. This work also has been supported by Contracts DABT63-95-C-007 and N66001-99-D-8503 from DARPA's Information Technology and Information Systems Office, and Grant N00014-99-1-0377 from ONR. I also thank Phil Cohen and others in the Center for Human-Computer Communication for many insightful discussions, and Dana Director, Rachel Coulston, and Kim Tice for expert assistance with manuscript preparation.

REFERENCES
[1] Benoit, C., Martin, J. C., Pelachaud, C., Schomaker, L., and Suhm, B. (2000). "Audio-visual and multimodal speech-based systems." Handbook of Multimodal and Spoken Dialogue Systems: Resources, Terminology and Product Evaluation (D. Gibbon, I. Mertins, and R. Moore, Eds.), pp. 102-203. Kluwer Academic, Boston.
[2] Oviatt, S. L., Cohen, P. R., Wu, L., Vergo, J., Duncan, L., Suhm, B., Bers, J., Holzman, T., Winograd, T., Landay, J., Larson, J., and Ferro, D. (2000). "Designing the user interface for multimodal speech and gesture applications: State-of-the-art systems and research directions." Human-Computer Interaction, 15, 4, 263-322.
[Reprinted in Human-Computer Interaction in the New Millennium (J. Carroll, Ed.), Chap. 19, pp. 421-456. Addison-Wesley, Reading, MA, 2001.]
[3] Neti, C., Iyengar, G., Potamianos, G., Senior, A., and Maison, B. (2000). "Perceptual interfaces for information interaction: Joint processing of audio and visual information for human-computer interaction." Proceedings of the International Conference on Spoken Language Processing, Beijing, 3, 11-14.
[4] Pankanti, S., Bolle, R. M., and Jain, A. (Eds.) (2000). "Biometrics: The future of identification." Computer, 33, 2, 46-80.
[5] Benoit, C., and Le Goff, B. (1998). "Audio-visual speech synthesis from French text: Eight years of models, designs and evaluation at the ICP." Speech Communication, 26, 117-129.
[6] Cohen, P. R., Johnston, M., McGee, D., Oviatt, S., Pittman, J., Smith, I., Chen, L., and Clow, J. (1997). "Quickset: Multimodal interaction for distributed applications." Proceedings of the Fifth ACM International Multimedia Conference, pp. 31-40. ACM Press, New York.
[7] Stork, D. G., and Hennecke, M. E. (Eds.) (1996). Speechreading by Humans and Machines. Springer-Verlag, New York.
[8] Turk, M., and Robertson, G. (Eds.) (2000). "Perceptual user interfaces." Communications of the ACM (special issue on Perceptual User Interfaces), 43, 3, 32-70.
[9] Zhai, S., Morimoto, C., and Ihde, S. (1999). "Manual and gaze input cascaded (MAGIC) pointing." Proceedings of the Conference on Human Factors in Computing Systems (CHI'99), pp. 246-253. ACM Press, New York.
[10] Bolt, R. A. (1980). "Put-that-there: Voice and gesture at the graphics interface." Computer Graphics, 14, 3, 262-270.
[11] Cohen, P. R., Dalrymple, M., Moran, D. B., Pereira, F. C. N., Sullivan, J. W., Gargan, R. A., Schlossberg, J. L., and Tyler, S. W. (1989). "Synergistic use of direct manipulation and natural language." Proceedings of the Conference on Human Factors in Computing Systems (CHI'89), pp. 227-234. ACM Press, New York. [Reprinted in Readings in Intelligent User Interfaces (Maybury and Wahlster, Eds.), pp. 29-37. Morgan Kaufmann, San Francisco.]
[12] Kobsa, A., Allgayer, J., Reddig, C., Reithinger, N., Schmauks, D., Harbusch, K., and Wahlster, W. (1986). "Combining deictic gestures and natural language for referent identification." Proceedings of the 11th International Conference on Computational Linguistics, Bonn, Germany, pp. 356-361.
[13] Neal, J. G., and Shapiro, S. C. (1991). "Intelligent multimedia interface technology." Intelligent User Interfaces (J. W. Sullivan and S. W. Tyler, Eds.), pp. 11-43. ACM Press, New York.
[14] Seneff, S., Goddeau, D., Pao, C., and Polifroni, J. (1996). "Multimodal discourse modeling in a multi-user multi-domain environment." Proceedings of the International Conference on Spoken Language Processing (T. Bunnell and W. Idsardi, Eds.), Vol. 1, pp. 192-195. University of Delaware and A. I. duPont Institute.
[15] Siroux, J., Guyomard, M., Multon, F., and Remondeau, C. (1995). "Modeling and processing of the oral and tactile activities in the Georal tactile system."
Proceedings of the International Conference on Cooperative Multimodal Communication, Theory & Applications, Eindhoven, Netherlands.
[16] Wahlster, W. (1991). "User and discourse models for multimodal communication." Intelligent User Interfaces (J. W. Sullivan and S. W. Tyler, Eds.), Chap. 3, pp. 45-67. ACM Press, New York.
[17] Oviatt, S. L., and Cohen, P. R. (2000). "Multimodal systems that process what comes naturally." Communications of the ACM, 43, 3, 45-53.
[18] Rubin, P., Vatikiotis-Bateson, E., and Benoit, C. (Eds.) (1998). Speech Communication (special issue on audio-visual speech processing), 26, 1-2.
[19] Oviatt, S. L. (2002). "Multimodal interfaces." Handbook of Human-Computer Interaction (J. Jacko and A. Sears, Eds.). Lawrence Erlbaum, Mahwah, NJ.
[20] Oviatt, S. L., Cohen, P. R., Fong, M. W., and Frank, M. P. (1992). "A rapid semi-automatic simulation technique for investigating interactive speech and handwriting." Proceedings of the International Conference on Spoken Language Processing, University of Alberta, Vol. 2, pp. 1351-1354.
[21] Bers, J., Miller, S., and Makhoul, J. (1998). "Designing conversational interfaces with multimodal interaction." DARPA Workshop on Broadcast News Understanding Systems, pp. 319-321.
[22] Cheyer, A. (1998). "MVIEWS: Multimodal tools for the video analyst." Proceedings of the International Conference on Intelligent User Interfaces (IUI'98), pp. 55-62. ACM Press, New York.
[23] Waibel, A., Suhm, B., Vo, M. T., and Yang, J. (1997). "Multimodal interfaces for multimedia information agents." Proceedings of the International Conference on Acoustics, Speech and Signal Processing (IEEE-ICASSP), Vol. 1, pp. 167-170. IEEE Press, Menlo Park, CA.
[24] Wu, L., Oviatt, S., and Cohen, P. (1999). "Multimodal integration: A statistical view." IEEE Transactions on Multimedia, 1, 4, 334-342.
[25] Bangalore, S., and Johnston, M. (2000). "Integrating multimodal language processing with speech recognition." Proceedings of the International Conference on Spoken Language Processing (ICSLP'2000) (B. Yuan, T. Huang, and X. Tang, Eds.), Vol. 2, pp. 126-129. Chinese Friendship Publishers, Beijing.
[26] Denecke, M., and Yang, J. (2000). "Partial information in multimodal dialogue." Proceedings of the International Conference on Spoken Language Processing (ICSLP'2000) (B. Yuan, T. Huang, and X. Tang, Eds.), pp. 624-633. Chinese Friendship Publishers, Beijing.
[27] Bernstein, L., and Benoit, C. (1996). "For speech perception by humans or machines, three senses are better than one." Proceedings of the International Conference on Spoken Language Processing, 3, 1477-1480.
[28] Cohen, M. M., and Massaro, D. W. (1993). "Modeling coarticulation in synthetic visible speech." Models and Techniques in Computer Animation (N. M. Thalmann and D. Thalmann, Eds.), pp. 139-156. Springer-Verlag, Berlin.
[29] Massaro, D. W., and Stork, D. G. (1998). "Sensory integration and speechreading by humans and machines." American Scientist, 86, 236-244.
[30] McGrath, M., and Summerfield, Q. (1985). "Intermodal timing relations and audiovisual speech recognition by normal-hearing adults." Journal of the Acoustical Society of America, 77, 2, 678-685.
[31] McGurk, H., and MacDonald, J. (1976). "Hearing lips and seeing voices." Nature, 264, 746-748.
[32] McLeod, A., and Summerfield, Q. (1987). "Quantifying the contribution of vision to speech perception in noise." British Journal of Audiology, 21, 131-141.
[33] Robert-Ribes, J., Schwartz, J. L., Lallouache, T., and Escudier, P. (1998). "Complementarity and synergy in bimodal speech: Auditory, visual, and audio-visual identification of French oral vowels in noise." Journal of the Acoustical Society of America, 103, 6, 3677-3689.
[34] Sumby, W. H., and Pollack, I. (1954). "Visual contribution to speech intelligibility in noise." Journal of the Acoustical Society of America, 26, 212-215.
[35] Summerfield, A. Q. (1992). "Lipreading and audio-visual speech perception." Philosophical Transactions of the Royal Society of London, Series B, 335, 71-78.
[36] Vatikiotis-Bateson, E., Munhall, K. G., Hirayama, M., Lee, Y. V., and Terzopoulos, D. (1996). "The dynamics of audiovisual behavior in speech." Speechreading by Humans and Machines: Models, Systems and Applications (D. G. Stork and M. E. Hennecke, Eds.), NATO ASI Series, Series F: Computer and Systems Sciences 150, pp. 221-232. Springer-Verlag, Berlin.
[37] Petajan, E. D. (1984). Automatic Lipreading to Enhance Speech Recognition, Ph.D. thesis, University of Illinois at Urbana-Champaign.
[38] Brooke, N. M., and Petajan, E. D. (1986). "Seeing speech: Investigations into the synthesis and recognition of visible speech movements using automatic image processing and computer graphics." Proceedings of the International Conference on Speech Input and Output: Techniques and Applications, 258, 104-109.
[39] Adjoudani, A., and Benoit, C. (1995). "Audio-visual speech recognition compared across two architectures." Proceedings of the Eurospeech Conference, Madrid, Spain, Vol. 2, pp. 1563-1566.
[40] Bregler, C., and Konig, Y. (1994). "Eigenlips for robust speech recognition." Proceedings of the International Conference on Acoustics Speech and Signal Processing (IEEE-ICASSP), Vol. 2, pp. 669-672.
[41] Goldschen, A. J. (1993). Continuous Automatic Speech Recognition by Lipreading, Ph.D. thesis, Department of Electrical Engineering and Computer Science, George Washington University.
[42] Silsbee, P. L., and Su, Q. (1996). "Audiovisual sensory integration using Hidden Markov Models." Speechreading by Humans and Machines: Models, Systems and Applications (D. G. Stork and M. E. Hennecke, Eds.), NATO ASI Series, Series F: Computer and Systems Sciences 150, pp. 489-504. Springer-Verlag, Berlin.
[43] Tomlinson, M. J., Russell, M. J., and Brooke, N. M. (1996). "Integrating audio and visual information to provide highly robust speech recognition." Proceedings of the International Conference on Acoustics Speech and Signal Processing (IEEE-ICASSP), pp. S21-S24.
[44] Cassell, J., Sullivan, J., Prevost, S., and Churchill, E. (Eds.) (2000). Embodied Conversational Agents. MIT Press, Cambridge, MA.
[45] Dupont, S., and Luettin, J. (2000). "Audio-visual speech modeling for continuous speech recognition." IEEE Transactions on Multimedia, 2, 3, 141-151.
[46] Meier, U., Hürst, W., and Duchnowski, P. (1996). "Adaptive bimodal sensor fusion for automatic speechreading." Proceedings of the International Conference on Acoustics, Speech and Signal Processing (IEEE-ICASSP), pp. 833-836. IEEE Press, Menlo Park, CA.
[47] Rogozan, A., and Deléglise, P. (1998). "Adaptive fusion of acoustic and visual sources for automatic speech recognition." Speech Communication, 26, 1-2, 149-161.
[48] Choudhury, T., Clarkson, B., Jebara, T., and Pentland, A. (1999). "Multimodal person recognition using unconstrained audio and video." Proceedings of the 2nd International Conference on Audio-and-Video-based Biometric Person Authentication, Washington, DC, pp. 176-181.
[49] Lee, J. (2001). "Retooling products so all can use them." New York Times, June 21.
[50] Jorge, J., Heller, R., and Guedj, R. (Eds.) (2001). Proceedings of the NSF/EC Workshop on Universal Accessibility and Ubiquitous Computing: Providing for the Elderly, Alcacer do Sal, Portugal, 22-25 May. Available at http://immi.inesc.pt/alcacer01/procs/papers-list.html.
[51] Oviatt, S. L., and van Gent, R. (1996). "Error resolution during multimodal human-computer interaction." Proceedings of the International Conference on Spoken Language Processing, Vol. 2, pp. 204-207. University of Delaware Press.
[52] Oviatt, S. L., Bernard, J., and Levow, G. (1998). "Linguistic adaptation during error resolution with spoken and multimodal systems." Language and Speech (special issue on prosody and conversation), 41, 3-4, 419-442.
[53] Oviatt, S. L. (1999). "Mutual disambiguation of recognition errors in a multimodal architecture." Proceedings of the Conference on Human Factors in Computing Systems (CHI'99), pp. 576-583. ACM Press, New York.
[54] Rudnicky, A., and Hauptman, A. (1992). "Multimodal interactions in speech systems." Multimedia Interface Design, Frontier Series (M. Blattner and R. Dannenberg, Eds.), pp. 147-172. ACM Press, New York.
[55] Suhm, B. (1998). Multimodal Interactive Error Recovery for Non-conversational Speech User Interfaces, Ph.D. thesis, Karlsruhe University, Germany.
[56] Oviatt, S. L. (1997). "Multimodal interactive maps: Designing for human performance." Human-Computer Interaction (special issue on multimodal interfaces), 12, 93-129.
[57] Oviatt, S. L., and Kuhn, K. (1998). "Referential features and linguistic indirection in multimodal language." Proceedings of the International Conference on Spoken Language Processing, ASSTA Inc., Sydney, Australia, Vol. 6, pp. 2339-2342.
[58] Oviatt, S. L. (2000). "Multimodal system processing in mobile environments." Proceedings of the Thirteenth Annual ACM Symposium on User Interface Software Technology (UIST 2000), pp. 21-30. ACM Press, New York.
[59] Oviatt, S. L. (2000). "Taming recognition errors with a multimodal architecture." Communications of the ACM (special issue on conversational interfaces), 43, 9, 45-51.
[60] Erber, N. P. (1975). "Auditory-visual perception of speech." Journal of Speech and Hearing Disorders, 40, 481-492.
[61] Bregler, C., Omohundro, S. M., Shi, J., and Konig, Y. (1996). "Towards a robust speechreading dialog system." Speechreading by Humans and Machines: Models, Systems and Applications (D. G. Stork and M. E. Hennecke, Eds.), NATO ASI Series, Series F: Computer and Systems Sciences 150, pp. 409-423. Springer-Verlag, Berlin.
[62] Brooke, M. (1996). "Using the visual component in automatic speech recognition." Proceedings of the International Conference on Spoken Language Processing, Vol. 3, pp. 1656-1659.
[63] Nakamura, S., Ito, H., and Shikano, K. (2000). "Stream weight optimization of speech and lip image sequence for audio-visual speech recognition." Proceedings of the International Conference on Spoken Language Processing (ICSLP 2000) (B. Yuan, T. Huang, and X. Tang, Eds.), Vol. 3, pp. 20-24. Chinese Friendship Publishers, Beijing.
[64] Potamianos, G., and Neti, C. (2000). "Stream confidence estimation for audiovisual speech recognition." Proceedings of the International Conference on Spoken Language Processing (ICSLP 2000) (B. Yuan, T. Huang, and X. Tang, Eds.), Vol. 3, pp. 746-749. Chinese Friendship Publishers, Beijing.
[65] Silsbee, P. L., and Bovik, A. C. (1996). "Computer lipreading for improved accuracy in automatic speech recognition." IEEE Transactions on Speech and Audio Processing, 4, 5, 337-351.
[66] Murphy, R. R. (1996). "Biological and cognitive foundations of intelligent sensor fusion." IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 26, 1, 42-51.
[67] Lee, D. (1978). "The functions of vision." Modes of Perceiving and Processing Information (H. L. Pick and E. Saltzman, Eds.), pp. 159-170. Wiley, New York.
[68] Pick, H. L., and Saltzman, E. (1978). "Modes of perceiving and processing information." Modes of Perceiving and Processing Information (H. L. Pick, Jr., and E. Saltzman, Eds.), pp. 1-20. Wiley, New York.
[69] Pick, H. (1987). "Information and effects of early perceptual experience." Contemporary Topics in Developmental Psychology (N. Eisenberg, Ed.), pp. 59-76. Wiley, New York.
[70] Stein, B., and Meredith, M. (1993). The Merging of the Senses. MIT Press, Cambridge, MA.
[71] Welch, R. B. (1978). Perceptual Modification: Adapting to Altered Sensory Environments. Academic Press, New York.
[72] Bower, T. G. R. (1974). "The evolution of sensory systems." Perception: Essays in Honor of James J. Gibson (R. B. MacLeod and H. L. Pick, Jr., Eds.), pp. 141-153. Cornell University Press, Ithaca, NY.
[73] Freedman, S. J., and Rekosh, J. H. (1968). "The functional integrity of spatial behavior." The Neuropsychology of Spatially-Oriented Behavior (S. J. Freedman, Ed.), pp. 153-162. Dorsey Press, Homewood, IL.
[74] Lackner, J. R. (1981). "Some aspects of sensory-motor control and adaptation in man." Intersensory Perception and Sensory Integration (R. D. Walk and H. L. Pick, Eds.), pp. 143-173. Plenum, New York.
[75] Hall, D. L. (1992). Mathematical Techniques in Multisensor Data Fusion. Artech House, Boston.
[76] Pavel, M., and Sharma, R. K. (1997). "Model-based sensor fusion for aviation." Proceedings of SPIE, 30m, 169-176.
[77] Hager, G. D. (1990). Task-Directed Sensor Fusion and Planning: A Computational Approach. Kluwer Academic, Boston.
[78] Martin, A., Fiscus, J., Fisher, B., Pallett, D., and Przybocki, M. (1997). "System descriptions and performance summary." Proceedings of the Conversational Speech Recognition Workshop/DARPA Hub-5E Evaluation. Morgan Kaufmann, San Mateo, CA.
[79] Weintraub, M., Taussig, K., Hunicke, K., and Snodgrass, A. (1997). "Effect of speaking style on LVCSR performance." Proceedings of the Conversational Speech Recognition Workshop/DARPA Hub-5E Evaluation. Morgan Kaufmann, San Mateo, CA.
[80] Oviatt, S. L., MacEachern, M., and Levow, G. (1998). "Predicting hyperarticulate speech during human-computer error resolution." Speech Communication, 24, 87-110.
[81] Banse, R., and Scherer, K. (1996). "Acoustic profiles in vocal emotion expression." Journal of Personality and Social Psychology, 70, 3, 614-636.
[82] Aist, G., Chan, P., Huang, X., Jiang, L., Kennedy, R., Latimer, D., Mostow, J., and Yeung, C. (1998). "How effective is unsupervised data collection for children's speech recognition?" Proceedings of the International Conference on Spoken Language Processing, ASSTA Inc., Sydney, Vol. 7, pp. 3171-3174.
[83] Das, S., Nix, D., and Picheny, M. (1998). "Improvements in children's speech recognition performance." Proceedings of the International Conference on Acoustics, Speech and Signal Processing, Vol. 1, pp. 433-436. IEEE Press, Menlo Park, CA.
[84] Potamianos, A., Narayanan, S., and Lee, S. (1997). "Automatic speech recognition for children." European Conference on Speech Communication and Technology, 5, 2371-2374.
[85] Wilpon, J. G., and Jacobsen, C. N. (1996). "A study of speech recognition for children and the elderly." Proceedings of the International Conference on Acoustic, Speech and Signal Processing (ICASSP'96), pp. 349-352.
[86] Lee, S., Potamianos, A., and Narayanan, S. (1997). "Analysis of children's speech: Duration, pitch and formants." European Conference on Speech Communication and Technology, Vol. 1, pp. 473-476.
[87] Yeni-Komshian, G., Kavanaugh, J., and Ferguson, C. (Eds.) (1980). Child Phonology, Vol. I: Production. Academic Press, New York.
[88] Das, S., Bakis, R., Nadas, A., Nahamoo, D., and Picheny, M. (1993). "Influence of background noise and microphone on the performance of the IBM TANGORA speech recognition system." Proceedings of the IEEE International Conference on Acoustic Speech Signal Processing, Vol. 2, pp. 71-74.
[89] Gong, Y. (1995). "Speech recognition in noisy environments." Speech Communication, 16, 261-291.
[90] Lockwood, P., and Boudy, J. (1992). "Experiments with a non-linear spectral subtractor (NSS), Hidden Markov Models and the projection for robust speech recognition in cars." Speech Communication, 11, 2-3, 215-228.
[91] Junqua, J. C. (1993). "The Lombard reflex and its role on human listeners and automatic speech recognizers." Journal of the Acoustical Society of America, 93, 1, 510-524.
[92] Lombard, E. (1911). "Le signe de l'élévation de la voix." Annales des Maladies de l'Oreille, du Larynx, du Nez et du Pharynx, 37, 101-119.
[93] Hanley, T. D., and Steer, M. D. (1949). "Effect of level of distracting noise upon speaking rate, duration and intensity." Journal of Speech and Hearing Disorders, 14, 363-368.
[94] Schulman, R. (1989). "Articulatory dynamics of loud and normal speech." Journal of the Acoustical Society of America, 85, 295-312.
[95] van Summers, W. V., Pisoni, D. B., Bernacki, R. H., Pedlow, R. I., and Stokes, M. A. (1988). "Effects of noise on speech production: Acoustic and perceptual analyses." Journal of the Acoustical Society of America, 84, 917-928.
[96] Potash, L. M. (1972). "A signal detection problem and a possible solution in Japanese quail." Animal Behavior, 20, 192-195.
[97] Sinnott, J. M., Stebbins, W. C., and Moody, D. B. (1975). "Regulation of voice amplitude by the monkey." Journal of the Acoustical Society of America, 58, 412-414.
[98] Siegel, G. M., Pick, H. L., Olsen, M. G., and Sawin, L. (1976). "Auditory feedback in the regulation of vocal intensity of preschool children." Developmental Psychology, 12, 255-261.
[99] Pick, H. L., Siegel, G. M., Fox, P. W., Garber, S. R., and Kearney, J. K. (1989). "Inhibiting the Lombard effect." Journal of the Acoustical Society of America, 85, 2, 894-900.
[100] Fuster-Duran, A. (1996). "Perception of conflicting audio-visual speech: An examination across Spanish and German." Speechreading by Humans and Machines: Models, Systems and Applications (D. G. Stork and M. E. Hennecke, Eds.), NATO ASI Series, Series F: Computer and Systems Sciences 150, pp. 135-143. Springer-Verlag, Berlin.
[101] Massaro, D. W. (1996). "Bimodal speech perception: A progress report." Speechreading by Humans and Machines: Models, Systems and Applications (D. G. Stork and M. E. Hennecke, Eds.), NATO ASI Series, Series F: Computer and Systems Sciences 150, pp. 79-101. Springer-Verlag, Berlin.
[102] Hennecke, M. E., Stork, D. G., and Prasad, K. V. (1996). "Visionary speech: Looking ahead to practical speechreading systems." Speechreading by
Humans and Machines: Models, Systems and Applications (D. G. Stork and M. E. Hennecke, Eds.), NATO ASI Series, Series F: Computer and Systems Sciences 150, pp. 331-349. Springer-Verlag, Berlin.
[103] Haton, J. P. (1993). "Automatic recognition in noisy speech." New Advances and Trends in Speech Recognition and Coding. NATO Advanced Study Institute.
[104] Senior, A., Neti, C. V., and Maison, B. (1999). "On the use of visual information for improving audio-based speaker recognition." Proceedings of Auditory-Visual Speech Processing (AVSP), 108-111.
[105] Oviatt, S. L. (2000). "Multimodal signal processing in naturalistic noisy environments." Proceedings of the International Conference on Spoken Language Processing (ICSLP'2000) (B. Yuan, T. Huang, and X. Tang, Eds.), Vol. 2, pp. 696-699. Chinese Friendship Publishers, Beijing.
[106] Summerfield, Q. (1987). "Some preliminaries to a comprehensive account of audio-visual speech perception." Hearing by Eye: The Psychology of Lip-reading (B. Dodd and R. Campbell, Eds.), pp. 3-51. Lawrence Erlbaum, London.
[107] Petajan, E. D. (1987). "An improved automatic lipreading system to enhance speech recognition." Tech. Rep. 11251-871012-11ITM, AT&T Bell Labs.
[108] Iverson, P., Bernstein, L., and Auer, E. (1998). "Modeling the interaction of phonemic intelligibility and lexical structure in audiovisual word recognition." Speech Communication, 26, 1-2, 45-63.
[109] Oviatt, S. L., Cohen, P. R., and Wang, M. Q. (1994). "Toward interface design for human language technology: Modality and structure as determinants of linguistic complexity." Speech Communication, 15, 3-4, 283-300.
[110] Oviatt, S. L., DeAngeli, A., and Kuhn, K. (1997). "Integration and synchronization of input modes during multimodal human-computer interaction." Proceedings of the Conference on Human Factors in Computing Systems (CHI'97), pp. 415-422. ACM Press, New York.
Using Data Mining to Discover the Preferences of Computer Criminals
DONALD E. BROWN AND LOUISE F. GUNDERSON
Department of Systems and Information Engineering
University of Virginia
Olsson 114A, 115 Engineer's Way
Charlottesville, Virginia 22904
USA
[email protected], [email protected]
Abstract
The ability to predict criminal incidents is vital for all types of law enforcement agencies. This ability makes it possible for law enforcement to both protect potential victims and apprehend perpetrators. However, for those in charge of preventing computer attacks, this ability has become even more important. While some responses are possible to these attacks, most of them require that warnings of possible attacks go out in "cyber time." However, it is also imperative that warnings be as specific as possible, so that systems that are not likely to be under attack do not shut off necessary services to their users. This chapter discusses a methodology for data-mining the output from intrusion detection systems to discover the preferences of attackers. These preferences can then be communicated to other systems, which have features similar to these discovered preferences. This approach has two theoretical bases. One is judgment analysis, which comes from the cognitive sciences arena, and the other is data mining and pattern recognition. Judgment analysis is used to construct a mathematical formulation for this decision to choose a specific target. This formulation allows clustering to be used to discover the preferences of the criminals from the data. One problem is posed by the fact that many criminals may have the same preferences or one criminal may have more than one set of preferences; thus, an individual criminal cannot be identified by this method. Instead we refer to the discovered preferences as representing agents. Another problem is that, while all of the agents are operating in the same event space, they may not all be using the same feature set to choose their targets. In order to discover these agents and their preferences, a salience weighting methodology has been developed. This method, when applied to the events caused
by attackers, allows for the discovery of the preferences for the features in the environment used by each of the discovered agents to select a target. Once the target preference of the agents has been discovered, this knowledge can be used to create a system for the prediction of future targets. In order to construct this system, one would use the output of existing intrusion detection systems. This data would be used by automated data-mining software to discover the preferences of the attackers and to warn machines with similar attributes. Because this entire process is automatic, the sites could be warned in "cyber time."
1. Introduction 344
2. The Target Selection Process of Criminals 346
2.1 Rational Choice Theory 346
2.2 Routine Activity Hypothesis 347
2.3 Victim Profiling 348
3. Predictive Modeling of Crime 348
3.1 Previous Models 348
3.2 Multiagent Modeling 350
4. Discovering the Preferences of the Agents 352
4.1 Clustering 352
4.2 Judgment Analysis 353
4.3 Applying the Judgment Analysis Model to Criminal Preference 356
5. Methodology 358
5.1 Cluster-Specific Salience Weighting 358
5.2 Using the Discovered Agents in a Multiagent Model 361
6. Testing with Synthetic Data 364
7. Conclusions 369
References 370

1. Introduction
In the past few years, computer networks, including those that constitute the Internet, have become vitally important to the world economy. While the number of users is difficult to estimate, the number of hosts has grown from 4 in 1969 to 72,398,092 in January 2000 [1]. According to one estimate, worldwide e-commerce generated $132 billion in revenues in 2000 [2]. The increased use of these networks has created a new venue for criminals. In addition, the proliferation of free hacking/cracking software has changed the nature of computer crime from an endeavor that required computer expertise to one that can be practised by a computer novice [3]. For these reasons, the number of computer crimes has increased dramatically. The CERT Coordination Center at Carnegie Mellon has
documented an exponential growth in the number of incidents reported to them in the past decade, from 252 in 1990 to 21,756 in 2000 [4]. For this discussion, the emphasis will be on denial of service (DOS) attacks. A DOS attack is an attempt to prevent legitimate users of a service from using that service. The most common type of attack is one that consumes (scarce) resources. This type of attack may include [5]:

• Consumption of bandwidth, for example, by generating a large number of packets directed at a network.
• Consumption of disk space, for example, by spamming or e-mail bombing.
• Consumption of network connections, for example, by sending requests for a connection with an incorrect (spoofed) IP address.

The nature of this type of attack places some constraints on the techniques that can be used to protect vulnerable systems. First is the speed with which the attack proceeds. This speed requires that warnings of possible attacks go out in "cyber time." Another constraint is that warnings be as specific as possible, so that systems not likely to be under attack do not shut off necessary services to their users. The methodology described in this paper is based on the identification of the target preferences of the attackers, in order to predict the targets that they will attack next. Fundamentally, this approach develops a model of the criminals' decision-making process. We discover these criminal preferences in much the same way that Internet businesses are discovering customer preferences: by observing and analyzing behavior on the Web. Figure 1 shows the basic components of this preference discovery approach. We observe criminal incidents in time and across the network topology. Each of these incidents is
FIG. 1. Graphical depiction of the preference discovery approach (axes: time, network topology, feature space).
mapped to a feature space that contains attributes of the sites attacked and the type of attack. We cluster these incidents in feature space. More formally, we develop a density estimate for the decision surfaces across feature space. This surface then becomes the basis for modeling the decision behavior of the criminals toward future attacks. Once the target preferences have been discovered, then a model of criminal behavior can be developed. In order to develop a method for discovering target preference, the existing criminological literature on preferences must be examined. This is done in Section 2. Three theories of criminal choice will be discussed: rational choice, routine activity, and victim profiling. Taken together, these theories show that the choice behavior of criminals is nonrandom. In Section 3, some of the types of models, including the multiagent model, used in modeling the prediction of criminal activity are discussed. In Section 4, the theoretical basis for the preference discovery methodology is discussed. In Section 5, the preference discovery methodology is discussed in more detail. Section 6 gives some of the results of the methodology with simulated data.
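As a concrete illustration of the mapping step just described, the short Python sketch below encodes an observed incident as a numeric vector in feature space, combining attributes of the attacked site with the type of attack. The particular features and encodings are assumptions chosen for illustration, not the feature set used in this chapter's experiments.

ATTACK_TYPES = {"bandwidth": 0, "disk": 1, "connection": 2}

def incident_to_vector(incident):
    # incident: dict describing one attack and the site it targeted.
    return [
        incident["site_size"],            # e.g., number of pages, normalized
        incident["military_share"],       # fraction of military business
        incident["political_score"],      # affiliation on a numeric spectrum
        float(incident["has_firewall"]),  # presence of defenses such as a firewall
        ATTACK_TYPES[incident["attack_type"]],
    ]

example = {
    "site_size": 0.8,
    "military_share": 0.1,
    "political_score": 0.9,
    "has_firewall": False,
    "attack_type": "bandwidth",
}
print(incident_to_vector(example))

Vectors of this kind are the inputs to the clustering and density-estimation steps discussed in the remainder of the chapter.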
2. The Target Selection Process of Criminals

2.1 Rational Choice Theory
The rational choice theory developed by Cornish and Clarke suggests that the choice of a target is based on a rational decision process. This choice is based on a rough cost-benefit analysis between the costs and benefits posed by a specific target [6]. This model has been proposed for the target selection process of shoplifters [7], commercial burglars, and robbers [8]. It has also been extended to cover computer and network crime [9]. Let us take for example the choice to target a series of sites for a denial of service attack. In order to assess the value of these targets, the possible rewards of attacking these targets (the bragging rights of having disrupted a particular site or sites or the political value of the disruption) are balanced against the possible costs (the humiliation of getting caught or of not successfully disrupting the site, perhaps after bragging about one's ability to do so). However, the criminal only has limited time and incomplete information with which to make this targeting decision [10]. This means that they must use proxies to assess the probability of the rewards or costs for a specific target. For example, let us consider a group that is firmly committed to disrupting the U.S. economy. The value of their reward will be related to the content of the site. For them, disrupting the "Mom's Little Apple Shop" site will result in less value than disrupting the "Big E-Commerce" site. Their proxies for the value of the site will be the "importance" of the site and the size of the market share that uses the site. This implies that the attacker
will have a specific preference for sites with features that indicate a small cost and a large reward. This analysis would also be used in the choice of method, for example, the use of a gun in committing a crime [11]. However, an individual's choices may be limited both by their economic and social situation and by the nature of the crime. For example, consider an individual with poor literacy skills and no high school diploma. This person may have trouble getting legal jobs that can compete with the illegal work available to them. So the educational attributes of the individual will limit their legal choices. However, the properties of the offense will also narrow their set of criminal choices. For example, this individual would have difficulty in committing an act of embezzlement at the local bank, but might have no difficulty in taking up burglary as a profession. Since these attributes will structure the decision-making process of the criminal, they are called choice-structuring properties [12]. In the case of computer crime, the type of tool that can be used will be determined by the choice-structuring property of the attacker's ability. An unskilled attacker would need to use a program written by another person, whereas skilled programmers could write their own attack tools. Each of these attack tools will have a set of site types that it works best with. Therefore, these tools will structure the attacker's choice of targets.

The rational criminal hypothesis implies that criminals will have a strong preference for targets with certain features. These preferences will be determined by the weighted benefits of attacking the target, the cost of attacking the target, and the type of tools they have available for attacking the target.
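The weighted cost-benefit reasoning sketched in this section can be made concrete with a small, purely hypothetical scoring function; the proxy features and weights below are invented for illustration and are not values taken from the criminological literature cited here.

def target_value(site, reward_weights, cost_weights):
    # Perceived reward minus perceived cost, each computed from weighted
    # proxy features the attacker can actually observe.
    reward = sum(w * site[f] for f, w in reward_weights.items())
    cost = sum(w * site[f] for f, w in cost_weights.items())
    return reward - cost

reward_weights = {"importance": 0.7, "market_share": 0.3}
cost_weights = {"defenses": 0.8, "response_speed": 0.2}

sites = {
    "Mom's Little Apple Shop": {"importance": 0.1, "market_share": 0.05,
                                "defenses": 0.2, "response_speed": 0.3},
    "Big E-Commerce": {"importance": 0.9, "market_share": 0.8,
                       "defenses": 0.6, "response_speed": 0.7},
}
for name, site in sites.items():
    print(name, round(target_value(site, reward_weights, cost_weights), 3))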
2.2 Routine Activity Hypothesis
The routine activity hypothesis was developed in an effort to explain the fact that some occupations and settings have disproportionately high victimization rates [13,14]. According to the routine activity hypothesis, a criminal incident requires three things:

1. A motivated offender,
2. A suitable target (or victim), and
3. The absence of a motivated guardian [15].

It has since been demonstrated that some parameters can have a major effect on the probability of crime. For example, students who have dogs, jobs, and extra locks are significantly less likely to experience a major larceny. On the other hand, students who live in noisy neighborhoods, belong to many organizations, or eat out often are significantly more likely to be victimized [16]. Some places also have higher crime levels. The crime rates near taverns, bars, liquor stores, and bus depots are higher than those in areas farther away [17,18].
While this theory has not been explicitly used for computer crime, let us consider how it would play out for the example discussed above. In this example we consider a group that is firmly committed to disrupting the U.S. economy. This group would be the motivated offender. A site whose damage would cause major economic disruption would be a suitable target. This would result in a preference dependent upon the features of the sites, where targets that are more "important" and have a larger market share are more suitable. For example, the "Mom's Little Apple Shop" site might be a less attractive target than the "Big E-Commerce" site. The presence of a motivated guardian would be represented by the presence of a firewall or a quick response to the attack. This would result in a preference for targets that appeared to be relatively undefended.

The routine activity hypothesis again implies that criminals will have a strong preference for targets with certain features. These preferences will be determined by their motivation to attack the target, their perception of the suitability of the target, and the absence of a motivated guardian.
2.3 Victim Profiling
Criminal profiling is a method of answering the basic question "What kind of person committed this crime?" This may include a psychological assessment, a social assessment, and strategies for interviewing suspects [19]. Criminal profiling is generally used in the case of violent crimes. Victim profiling is one of the methods of criminal profiling. In victim profiling, the question changes from "What are the characteristics of the attacker?" to "What need in the attacker does this particular victim satisfy?" [20]. In the case of victims of violent crime, this can be a complex and time-intensive process. It involves analysis of the physical and lifestyle features of the victim and analysis of the crime scene to determine what features of the victim and/or scene made it an attractive target. In many cases this may not be determinable from the available evidence. However, in the case of computer crime, the characteristics of the attacked site are completely available to the investigator in a way not possible with other types of crime. In fact, most of the characteristics of the victim that the attacker observes are the same characteristics that the investigator can observe. While it is not possible to read the mind of the attacker, the features of the victim are plain to see.
3. Predictive Modeling of Crime

3.1 Previous Models
Recently a number of researchers have begun to expand existing approaches to predictive modeling of criminal activity. This section provides a brief overview of this work and shows its relation to our proposal.
Kelly [21] has explored the relationship between public order crime and more serious crime. This work attempts to discover the strength of the relationship in the "broken windows" hypothesis. The approach uses log-linear models on a lattice. The predictor variables are public order crimes at previous time instances and at specific lattice or areal locations. The response variables are the felony or serious crimes at the same locations. This work is designed to increase understanding and, based on the results, to possibly direct police activities toward public order crimes. This is clearly an important contribution. However, the method is not designed to predict criminal activity in space and time, as is our proposed research.

The work of Rengert [22] is similar in that it is designed to study the emergence of drug markets. His model explores factors of accessibility, susceptibility, and opportunity. As with the work by Kelly, Rengert's model is designed to increase our understanding of drug markets and inform policymakers. Again it is not intended as an operational tool.

Rogerson [23] has developed methods for detecting changes in crime rates between areas. The approach uses methods from statistical process control (cumulative sum statistic in a spatial context) with assumptions of independence between time instances. He is also interested in models of the displacement of crime from one location to another, particularly in response to police actions. This last concern is relevant to the work we propose here. His approach differs from ours in that he uses a priori models of displacement in response to police actions. For example, his models assume a known utility function for criminals and use inputs such as the probability of arrest given the police action. Again these contributors to the criminals' utility function are assumed known.

Olligschlaeger [24] developed an approach to forecasting calls for service using chaotic cellular forecasting. In this approach he organized the data into grid cells with monthly time units. He then used summary statistics on the calls for service in surrounding grid cells to predict calls for service in a center grid cell for the next month. He used a back-propagation neural network to actually perform the prediction at each cell. In tests of the method he showed forecasting accuracy better than that obtained from conventional statistical methods.

Gorr and Olligschlaeger [25] have explored more traditional time series methods for predicting crime in grid cells. In particular they have looked at Holt-Winters exponential smoothing and classical decomposition. These methods inherently look at the past crime data to predict future events. Both this work and the chaotic cellular forecasting work differ from our proposed approach in that they use only past criminal event data in their models. As a result they cannot directly address how changes affect the prediction of crime.

Liu and Brown [26] developed an approach to criminal event prediction using criminal preference discovery. As inputs to the process they take past data of
criminal activity, e.g., breaking and entering. They compute a density estimate in a high-dimensional feature space, where the features consist of all possible attributes relevant to the criminal in selecting his targets. Example features include distance to major highways, distance to schools, income, and type of housing. They then map this density back into geographic space to compute a threat surface, or regions with high probability for future criminal events. Testing shows this method outperforms extrapolation from previous hot spots.

However, all of these methods make the underlying assumption that all criminals have the same preference for the features in an environment and that the environment is not changing. In the next section, multiagent modeling is considered. This method explicitly considers that different criminal agents in an environment have different preferences and that the environment may not be stable over time.
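A rough sketch of the density-estimation idea behind this line of work (in the spirit of Liu and Brown [26]), under stated assumptions, is shown below: past incidents are represented as feature vectors, a kernel density estimate is fit over them, and candidate targets falling in regions of high estimated density are treated as higher threat. The library, bandwidth, and features are illustrative choices, not the published implementation.

import numpy as np
from sklearn.neighbors import KernelDensity

# Feature vectors of past incidents (site size, % military business,
# political affiliation score).
past_incidents = np.array([
    [0.9, 0.1, 0.8],
    [0.7, 0.2, 0.9],
    [0.8, 0.3, 0.6],
    [0.2, 0.9, 0.3],
])

kde = KernelDensity(kernel="gaussian", bandwidth=0.25).fit(past_incidents)

# Candidate targets: a higher (log-)density means the candidate lies
# closer to the demonstrated preferences, i.e., higher estimated threat.
candidates = np.array([
    [0.85, 0.15, 0.75],
    [0.10, 0.10, 0.10],
])
threat = kde.score_samples(candidates)   # log-densities
for site, score in zip(candidates, threat):
    print(site, round(float(score), 3))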
3.2 Multiagent Modeling
While there are many approaches to simulating human societies, one of the most promising approaches involves the use of multiagent models, also known as distributed artificial intelligence models [27]. In this type of model, agents (defined below) are created in an environment, generally spatial, that contains both other agents and objects with which the agents can interact [28]. While there is no universal definition of an agent, an agent is generally defined as having four properties [29]:

• Autonomy—the agent does not need to be externally directed. In general the agent has a set of rules to generate behaviors.
• Reactivity—the agent can perceive its surroundings and react to them.
• Social ability—the agent can interact with other agents in the model.
• Proactivity—the agent can initiate its own goal-directed behaviors.

There are major advantages to this distributed approach for the modeling of human criminal behavior:

• Criminals do not all have the same target preferences. A multiagent model allows for the construction of heterogeneous agents, with different frequencies of attack and target preferences. The proactivity of an agent allows the agents to interact with the existing targets in their environment.
• Criminals can communicate about methods of attack and possible targets. A multiagent model allows for the simulation of that communication process.
• Most criminal behavior takes place in a geographic setting. Because multiagent models have an environmental (physical) component, they can
explicitly simulate the physical distribution of a problem. While this is less important in computer crime, it is of paramount importance in the modeling of traditional criminal activity.

While multiagent modeling has not yet been used to simulate criminal activity, it has been used in a wide variety of simulations of human behavior. Below is a partial list:

• Simulation of the behavior of recreational users of forest lands [30],
• Simulation of changes in the Anasazi culture in northeast Arizona [31],
• Simulation of the movement of individuals in a city [32], and
• Simulation of the effects of the organization of a fishing society on natural resources [33].

These simulations demonstrate the power of this approach for the modeling of human activity. However, not all types of multiagent models have the same predictive ability. In order to be accurate, the preferences of the agents must be accurately assessed. One distinction in this type of model is between "weak" and "strong" social simulations [27]. In a "weak" social simulation, the modeler has determined the relevant features in the model and the preferences and behaviors of the agents. While this type of model, if correctly constructed, can yield interesting insights about the behavior of cultures, it cannot be used as a predictive model. In a "strong" social simulation, the agent preferences and behaviors are derived from the preferences and behaviors of the humans in the environment being studied. A form of the "strong" social simulation is one using a "calibrated agent." Gimblett et al. use these calibrated agents in their work on recreational simulation [34]. They collected survey data from recreation users of the area of study. Then the results of the surveys were used to construct calibrated agents that have preferences and behaviors resembling those of the humans in the environment.

However, in the case of computer criminals, the use of surveys is not possible. The first problem is that since most computer criminals are not identified, the population of identified computer criminals would be a biased sample of the entire population. The second problem is, even if one could find an unbiased population, why would a group of hackers tell the truth in a survey? Common sense tells us that the truth content of the responses would be low. Therefore it is necessary to find a way to discover the agents and their preferences from the event data, so as to correctly calibrate the multiagent model. This discovery method is discussed in the next section.
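To fix ideas, a minimal and deliberately simplified attacker agent, carrying its own target preferences and attack frequency, might be sketched as follows. The preference representation, scoring rule, and attack-probability mechanism are assumptions for illustration only, not a specification of the model developed later in the chapter.

import random

class AttackerAgent:
    def __init__(self, preferences, attack_probability):
        self.preferences = preferences            # feature -> preferred value
        self.attack_probability = attack_probability

    def suitability(self, target):
        # Reactivity: score a perceived target against this agent's own
        # preferences (closer match = higher score).
        return -sum((target[f] - v) ** 2 for f, v in self.preferences.items())

    def step(self, targets):
        # Autonomy/proactivity: decide whether to attack this time step,
        # and if so, which of the available targets to attack.
        if random.random() > self.attack_probability:
            return None
        return max(targets, key=self.suitability)

random.seed(1)
agent = AttackerAgent({"military_share": 0.9, "site_size": 0.8}, 0.5)
targets = [
    {"military_share": 0.1, "site_size": 0.2},
    {"military_share": 0.8, "site_size": 0.9},
]
print(agent.step(targets))

A heterogeneous population of such agents, each calibrated from discovered preferences rather than from surveys, is the kind of "strong" social simulation the following sections work toward.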
4. Discovering the Preferences of the Agents

4.1 Clustering
The data-mining technique proposed for discovering the agents and identifying their target preference is clustering. Clustering is the practice of grouping objects according to perceived similarities [35,36]. Clustering is generally used when no classes have been defined a priori for the data set [37, p. 287]. Clustering is often used in the analysis of social systems [38]. Clustering has also been used in a wide array of classification problems, in fields as diverse as medicine, market research, archeology, and social services [36, pp. 8,9]. In this discussion the term algorithm will be used for a specific clustering method, while procedure will be used for the entire process, which may include standardization of the variables in the data set, selection of the appropriate number of clusters produced, or other associated manipulations of the data set.

Because clustering is a multiobjective methodology, there is no single clustering procedure that can be regarded as appropriate for most situations. Instead the applicability of a specific clustering procedure must be evaluated by the results that it produces in a specific situation [37, p. 311]. Many clustering algorithms have been created, and each variation has advantages and disadvantages when applied to different types of data or when searching for different cluster "shapes." A partial list of some possible clustering algorithms follows.

1. Methods in which the number of clusters is chosen a priori. In these methods, a criterion for measuring the adequacy of the partitioning of the objects into the selected number of disjoint classes is chosen. Then a set of transforms is selected to allow for the changing of one partition into another partition. The partitions are then modified until none of the transforms will improve the criterion chosen for measuring the adequacy of the partitioning. Some examples of this type of algorithm are k-means algorithms and simulated annealing algorithms [36, pp. 41-45] (a minimal sketch of this approach follows the list).

2. Methods based on mixture models. In these methods the data are considered as coming from a mixture of sources, where each source has a conditional density function [35]. These methods can be used for data sets in which the clusters overlap. Some examples of this type of algorithm include Bayesian classification systems and mode separation [37, pp. 316-318].

3. Hierarchical clustering methods. In hierarchical methods, the number of clusters is not predetermined. Rather, a series of partitions is created, starting with a single cluster containing all of the observations and ending with a single cluster for each observation. This series of partitions can be displayed in two dimensions as a dendrogram. The number of clusters is determined by the use of a heuristic, called a stopping rule.
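The sketch below illustrates the first class of methods above, a k-means-style partitioning of incident feature vectors into a preselected number of clusters. The data values and the use of scikit-learn are illustrative assumptions, not the procedure used in this chapter's experiments.

import numpy as np
from sklearn.cluster import KMeans

incidents = np.array([
    [0.9, 0.1, 0.8],    # large, low-military, "democratic" sites
    [0.8, 0.2, 0.7],
    [0.85, 0.15, 0.9],
    [0.2, 0.9, 0.3],    # small, military-heavy sites
    [0.3, 0.8, 0.2],
    [0.25, 0.85, 0.25],
])

# Partition the incidents into two disjoint clusters and report the
# resulting labels and cluster centers.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(incidents)
print("cluster labels:", km.labels_)
print("cluster centers:", np.round(km.cluster_centers_, 2))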
For all methods of clustering, the inclusion of irrelevant features can mask the true structure of the data [36, pp. 23-26]. This makes it important to select only the relevant features for determining the number of agents and their preferences. For the problem of identifying attacking agents, at first blush it seems that it would only be necessary to cluster the attacks along some major attributes, like the number of pages at a site or its political affiliation. However, different criminals can and do place different weights on the same proxies [39]. For clustering algorithms, these preference differences can be expressed as cluster-specific salience weightings, where for each cluster the features have a different salience for the individual [40]. Models that take these preference differences between individuals into account have been developed for recreational fishing [41] and transportation choice [42]. However, these papers use a classification method based on survey data of the individuals involved in the activity to determine their preferences. This is clearly not feasible for criminal activity. The discussion below describes a clustering process designed to discover clusters of this type from existing event data.

For any criminal event, the environment presents a large number of features that can be used directly, or as proxies for other hidden features, and each criminal will select his own feature set. The nature of these target preferences suggests a method for separating them. If a criminal cares about a specific feature, the values of that feature will be constrained, with the tightness of the constraint corresponding to the degree to which he cares about it. However, if he is indifferent to a specific feature, then the values of that feature will be unconstrained. Thus, the distribution of the events caused by a specific criminal will depend on the salience weighting of each feature to that criminal. This is shown graphically in Fig. 2. In this figure, Cluster A represents the attacks of an individual who cares about all three features of potential targets: the size of the site, the percentage of military business done at the site, and the political affiliation of the site (measured on a spectrum from communist to democratic). The cluster formed by his attacks will be spheroidal in the space formed by these three attributes. Cluster B represents the attacks of an individual who cares about only two of these features: the percentage of military business done at the site and the political affiliation of the site. Since he does not care about the size of the site, his attacks are uniformly distributed across this feature. This results in a cluster that is cylindrical in the space formed by all three attributes, but circular in the space formed by the two attributes the criminal cares about.
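The geometric intuition behind Fig. 2 can be reproduced with a few lines of code. The sketch below assumes only NumPy, uses entirely hypothetical feature scales, and simply generates events for two simulated attackers to show that a feature's variance is small only where the attacker has a preference.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Features: [site size, % military business, political affiliation] (hypothetical 0-10 scales).
# Cluster A: preferences on all three features -> tight, spheroidal cloud.
cluster_a = rng.normal(loc=[3.0, 8.0, 2.0], scale=0.4, size=(n, 3))

# Cluster B: preferences on military business and affiliation only;
# indifference to site size makes that coordinate uniform (a "cylindrical" cloud).
cluster_b = np.column_stack([
    rng.uniform(0.0, 10.0, n),    # site size: unconstrained
    rng.normal(2.0, 0.4, n),      # % military business: constrained
    rng.normal(9.0, 0.4, n),      # political affiliation: constrained
])

# The per-feature variance reveals which features are salient to each attacker.
print("Cluster A variances:", cluster_a.var(axis=0).round(2))  # all small
print("Cluster B variances:", cluster_b.var(axis=0).round(2))  # large, small, small
```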
4.2 Judgment Analysis
One method of assessing human preferences that can be used for this problem is judgment analysis. Judgment analysis is an a posteriori method of assessing how a decision maker formed a judgment [43]. This theory comes from the field
[FIG. 2. Graphical depiction of two preferences (axes: size of the site, percentage of military business, political affiliation); Clusters A and B are shown.]
of cognitive psychology and is based on the work of Egon Brunswik, who viewed the decision-maker as being embedded in an ecology from which he received cues as to the true state of things [44]. These cues are probabilistically related to the actual state of events. Judgment theory is concerned with the weighting that the individual places on the cues in his environment. One of the major strengths of this theory, in the context of predicting computer crime, is that it does not require that the cognitive process of the criminal be modeled. Rather, the weights that the criminal places on the cues in his environment will be derived from the events caused by that criminal. This theory has also been used to construct a cognitive continuum between analytic and intuitive thought [45]. Brunswik's original theory and its extensions have been used in such domains as meteorological forecasting [46], social welfare judgments [47], the understanding of risk judgments [48], and medical decision making [49]. In judgment analysis, the judgment process is represented by the lens model [43]. To discuss this model, let us consider the simple example of estimating the distance to a child's building block lying on a table. In this model, the actual distance to the block is an environmental (distal) variable $(y_e)$. The observer has a series of observable (proximal) cues $(c_i)$ relating to this distal variable, such as the size of the retinal representation of the block, the differences in the image in the right and left eyes, and the blurring of the image. These cues have a correlation to the actual state (ecological validity). The subject weights the cues and uses a function of these weighted cues to make a judgment as to the true
state $(y_s)$. This cue weighting has a correlation to the relationship of the cues to the actual state (cue utilization validity). The actual achievement (performance) in the judgment task can be used to update the weights placed on the cues in future judgment tasks. This model is described by

$$y_s = \sum_{i=1}^{n} w_i x_i,$$

where $y_s$ is the judgment of the condition of target $s$, $y_e$ the actual environmental condition of the target, $n$ the total number of cues available to the judgment maker, $x_i$ the value represented by cue $i$, where $i$ goes from 1 to $n$, and $w_i$ the weighting of cue $i$. This model is shown graphically in Fig. 3.

[FIG. 3. Lens model: a distal (environmental) variable is perceived through proximal cues, which the subject weights to form a judgment; achievement relates the judgment to the true state.]

This model does not capture the motivation of the individual. In the case of crime analysis, if the motivation is not considered, then all of the possible cues available to the attacker must be considered. This significantly increases the difficulty involved in the construction of the model. However, the model can be extended in a way that allows a smaller subset of cues to be considered. This extension uses the rational choice theory and the routine choice hypothesis discussed above. If these theories are used, then only the cues that could be considered to have a significant effect on the criminal's perception of the risks or the benefits of the crime, the suitability of the target, or the presence of a guardian must be considered. This is a subset significantly smaller than that of all the possible cues. While the computer criminal's venue is different from that
of the types of criminals for whom these models were developed, his decision process can be described using these models. Let us look again at the example of the computer terrorists discussed above. Some of the cues that they might use to determine the "importance" of a site might be:

• The number of hits that the site gets (a measure of its relative importance),
• For a commercial site, the total value of commodities sold at the site (a measure of the economic importance of the site),
• The number of sites that point to this site (a measure of relative importance),
• The type of firewalls employed by the host,
• The type and level of encryption used at the site, and
• The size of the company or government behind the site.

This extension results in the hierarchical judgment design model [43]. In this model, the attackers use the weighted cues to assess the value of the risk or benefit (value), which can be considered either a first-order judgment or a second-order cue. These second-order cues are then used to make the second-order judgment as to the "best" target. This is shown graphically in Fig. 4; one way of encoding such cues is sketched after the figure.

[FIG. 4. Hierarchical lens model: first-order cues feed a first-order judgment, which serves as a second-order cue for the second-order judgment; achievement relates the judgment to the outcome.]
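As an illustration only, the snippet below shows one way the cues listed above could be represented as a feature vector for a candidate site. All field names and values are hypothetical and are not drawn from any real data set.

```python
from dataclasses import dataclass

@dataclass
class SiteCues:
    """Hypothetical cue vector for one candidate target site."""
    hits_per_day: float          # relative importance of the site
    commodity_value: float       # economic importance (commercial sites)
    inbound_links: int           # number of sites pointing to this site
    firewall_strength: float     # proxy for the type of firewalls employed
    encryption_level: float      # type and level of encryption in use
    organization_size: float     # size of the company or government behind it

    def as_vector(self) -> list:
        """Return the cues in a fixed order for use in a judgment model."""
        return [self.hits_per_day, self.commodity_value, self.inbound_links,
                self.firewall_strength, self.encryption_level,
                self.organization_size]

# Example: one hypothetical target.
site = SiteCues(hits_per_day=1.2e5, commodity_value=3.0e6, inbound_links=450,
                firewall_strength=0.7, encryption_level=0.4, organization_size=0.9)
print(site.as_vector())
```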
4.3 Applying the Judgment Analysis Model to Criminal Preference
Let us consider the above-mentioned group of attackers. They have a number of sites to choose to attack, with their judgment of the value of each site represented by a weighted sum

$$y_s = \sum_{i=1}^{m} w_i V_i,$$

where $y_s$ is the judgment of the condition of target $s$, $m$ the total number of risks/benefits perceived by the judgment maker, $V_i$ the risk/benefit represented by value $i$, where $i$ goes from 1 to $m$, and $w_i$ the weighting of the risk/benefit represented by value $i$. The perceived risk/benefit of a target is derived from the weighted sum of the cues pertaining to that risk/benefit. This results in
$$V_i = \sum_{j=1}^{n_i} w_j x_{ij},$$

where $n_i$ is the number of cues for value $i$, $x_{ij}$ the value of cue $j$ for value $i$, and $w_j$ the weighting of cue $j$. If we define $w_{ij} = w_i w_j$, then the judgment can be rewritten as

$$y_s = w_{11}x_{11} + w_{12}x_{12} + \cdots + w_{1n_1}x_{1n_1} + w_{21}x_{21} + w_{22}x_{22} + \cdots + w_{2n_2}x_{2n_2} + \cdots + w_{m1}x_{m1} + w_{m2}x_{m2} + \cdots + w_{mn_m}x_{mn_m}.$$
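To make the bookkeeping concrete, the sketch below evaluates this hierarchical judgment for two candidate sites. The cue groupings, weights, and values are hypothetical, and the code simply implements the weighted sums above; it is not the authors' software.

```python
import numpy as np

# Hypothetical second-order structure: two risks/benefits ("values"),
# each judged from its own first-order cues.
#   value 0: perceived benefit (cues: hits per day, inbound links)
#   value 1: perceived risk    (cues: firewall strength, encryption level)
value_weights = np.array([1.0, -0.8])      # w_i: benefit attracts, risk deters

cue_weights = [np.array([0.6, 0.4]),       # w_j for the benefit cues
               np.array([0.7, 0.3])]       # w_j for the risk cues

# Cue values per candidate site, already scaled to [0, 1].
sites = {
    "site-a": [np.array([0.9, 0.8]), np.array([0.2, 0.1])],
    "site-b": [np.array([0.4, 0.5]), np.array([0.9, 0.8])],
}

def judgment(cues_per_value):
    """y_s = sum_i w_i * V_i, with V_i = sum_j w_j * x_ij."""
    values = [float(w @ x) for w, x in zip(cue_weights, cues_per_value)]  # V_i
    return float(value_weights @ np.array(values))                        # y_s

for name, cues in sites.items():
    print(name, round(judgment(cues), 3))
# A higher judgment marks the site this hypothetical attacker would prefer.
```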
The distribution of the values of the cues in the targets available to the criminal must be considered. First, there must be some divergence in the values of an attribute [50]. For example, all computer attackers attack sites that are hosted on computers. Therefore, the fact that a site is on a computer (as opposed to a book, or a sign, or a billboard) gives no information as to the cue weighting of the attacker. All of the cues used in this analysis must have some divergence. Second, the distribution of the values of the cues must be examined. Values that are relatively unusual, i.e., that represent a low-probability event, should carry a higher weight than values that are less unusual [51]. For example, a hacker who prefers to attack sites with Teletubbies would be more unusual than a hacker who prefers to hit stock brokerage sites, simply because there are more sites devoted to
the art of stock brokerage than devoted to the art of the Teletubby. This means that before the analysis of real data can proceed, the data must be adjusted to reflect the prior probabilities.

Given that these two assumptions are met, for features for which the criminal shows no preference $w_{ij} = 0$, and for features for which the criminal shows a preference $w_{ij}$ may be large. This term then becomes representative of the salience of the feature to the criminal, and is termed the salience weighting of the feature. For an interval feature, as the salience weighting approaches zero, since the probability of any value will be the same as the probability of any other value, the distribution of the events in the feature space will be uniform. If the feature is categorical, then the events will be uniformly distributed among the categories. For a nonzero salience weighting, the events will be grouped around the maximum preference, with the tightness of the grouping being proportional to the strength of the salience. This means that the events caused by a specific criminal will have a smaller variance along the feature axis for which that criminal has a relatively large salience weighting. So, prospect theory suggests that the events caused by a specific criminal should have the following characteristics:

• A relatively small variance along the axes for which the criminal has a relatively large salience weighting, and
• A relatively large variance along the axes for which the criminal has a relatively small salience weighting.

Since each of the criminals can have a different salience weighting for each of the features, it should be possible to discover the preferences of individual criminals.
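One simple way to turn this observation into numbers, sketched below under the assumption that the features have already been standardized and adjusted for their prior probabilities, is to treat the inverse of a cluster's variance along each axis as a rough estimate of the relative salience of that feature. This is only an illustration of the idea, not the estimator used in this chapter.

```python
import numpy as np

def relative_salience(cluster_events, eps=1e-6):
    """Rough per-feature salience estimate for one criminal's events:
    features with small variance (tightly constrained) get large weights,
    features with large variance (ignored by the criminal) get small ones."""
    variances = np.asarray(cluster_events).var(axis=0)
    weights = 1.0 / (variances + eps)
    return weights / weights.sum()          # normalize to sum to 1

# Hypothetical events for one attacker (columns: three standardized features).
rng = np.random.default_rng(2)
events = np.column_stack([rng.normal(0.0, 0.1, 100),     # strongly constrained
                          rng.normal(0.0, 1.0, 100),     # weakly constrained
                          rng.uniform(-2.0, 2.0, 100)])  # ignored
print(relative_salience(events).round(3))
```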
5. Methodology

5.1 Cluster-Specific Salience Weighting

5.1.1 Overview
As mentioned above, the point of the cluster-specific salience weighting (CSSW) methodology is to identify the attacking agents and their preferences. The cue data available are the features of the sites that are attacked. These features can include:

• The type of business done at the site,
• The number of pages at the site,
• The amount of business done at the site, and
• The political affiliations of the site.

The resulting clusters represent the attacking agents, and the salient features represent the preferences of the agents. CSSW is used to identify these clusters and the features salient to those clusters. In order to do this, the software clusters the events in a space defined by all possible features. If any resulting cluster has a variance less than a predetermined "cutoff" level in all of the features, then it is considered to represent an agent and is removed from the data set. The remaining events are clustered in all possible subsets of the features until either no events or no features remain.
5.1.2 Picking a Clustering Method and Stopping Rule
The first problem in constructing the cluster-specific salience weighting is to choose a clustering algorithm. Different clustering algorithms have different properties and problems. For this analysis, the following properties are important:

• The algorithm should not be biased against forming lenticular clusters. This is important because the elongation of the clusters yields valuable information about the cluster-specific salience weight.
• The algorithm should be fast. The number of analyses required by a data set with many possible features requires that the algorithm be fast.
• The clusters resulting from the algorithm should be independent of the order of the observations.

The need for the resulting clusters to be observation-order independent suggests an agglomerative hierarchical method. However, some agglomerative hierarchical methods, namely centroid clustering and Ward's method, tend to be biased against lenticular shapes [35], so single-linkage clustering (also called hierarchical nearest-neighbor clustering), which does not impose any shape on the resulting clusters, was chosen. It should be noted, however, that other clustering methods, notably mixture models, could be used in place of this hierarchical method.

After the selection of the appropriate clustering algorithm, the next problem is the selection of an appropriate number of clusters. Milligan and Cooper tested 30 stopping rules on nonoverlapping clusters [52]. They found that the best stopping criterion was the Calinski and Harabasz index [53]. This stopping rule uses the variance ratio criterion (VRC), which is the ratio of the between-group sum of squares (BGSS) and the within-group sum of squares (WGSS):

$$\mathrm{VRC} = \frac{\mathrm{BGSS}/(k-1)}{\mathrm{WGSS}/(n-k)},$$

where $k$ is the number of clusters and $n$ the number of observations.
The VRC is calculated for increasing numbers of clusters. The first number of clusters for which the VRC shows a local maximum (or at least a rapid rate of increase) is chosen as the appropriate number of clusters.
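A minimal sketch of this selection step is shown below. It assumes SciPy and scikit-learn are available (scikit-learn's calinski_harabasz_score computes the VRC above) and uses single-linkage clustering; the synthetic data and the range of k scanned are illustrative placeholders.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import calinski_harabasz_score

rng = np.random.default_rng(3)
# Synthetic events from three hypothetical attackers.
events = np.vstack([rng.normal(m, 0.3, size=(60, 3))
                    for m in ([0, 0, 0], [4, 4, 0], [0, 4, 4])])

# Single-linkage (hierarchical nearest-neighbor) dendrogram.
tree = linkage(events, method="single")

# Compute the VRC for an increasing number of clusters and keep the
# first local maximum as the chosen number of clusters.
vrc = {}
for k in range(2, 10):
    labels = fcluster(tree, t=k, criterion="maxclust")
    vrc[k] = calinski_harabasz_score(events, labels)

chosen = next((k for k in range(3, 9) if vrc[k] > vrc[k - 1] and vrc[k] > vrc[k + 1]),
              max(vrc, key=vrc.get))  # fall back to the global maximum
print(chosen, {k: round(v, 1) for k, v in vrc.items()})
```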
5.1.3 Implementing the CSSW Methodology
Below is a brief description of the CSSW methodology. This method could be employed with many different clustering methods and stopping rules.

1. A cutoff variance (v) is chosen for all dimensions, where the number of dimensions is n.
2. A cutoff number (s) is chosen for the smallest number of points in a cluster.
3. A cutoff number (m) is chosen for the number of local maxima to be tested.
4. The observations are clustered in all dimensions, and the VRC is calculated for all possible numbers of clusters.
5. The first local maximum is chosen.
6. The within-cluster variance is calculated, for each cluster with more than s points, in all of the dimensions.
7. If a cluster is identified for which the variance is less than v for all n variables, this cluster is identified and removed from the data set.
8. If no such cluster is identified, then the next local maximum is investigated, until the number of maxima reaches m. All identified clusters are removed from the data set.
9. The remaining data are clustered in all possible subsets of n − 1 variables.
10. The process is repeated until the number of events is less than the smallest number of points allowed in a cluster or there are no remaining features to be tested.

This method is shown graphically below, and a code sketch of the procedure follows the figures. Figure 5 shows the events caused by three agents: Agent A has a preference in x1, x2, and x3; Agent B has a preference in x1 and x2; and Agent C has a preference in x2 and x3. If the events are clustered in x1, x2, and x3, the cluster that contains the events caused by Agent A can be removed. Then the remaining events are clustered in x1 and x2. The cluster that contains the events caused by Agent B can be removed (see Fig. 6). Then the remaining events are clustered in x1 and x3, but no cluster can be removed (see Fig. 7). Finally, the remaining events are clustered in x2 and x3. The cluster that contains the events caused by Agent C can be removed (see Fig. 8). This simple example shows how CSSW can be used to separate the clusters and to determine the feature weighting for each of them.
[FIG. 5. Results of first clustering, in the space of x1 (volume of business), x2 (value of highest priced item for sale), and x3 (distance from New York); the events of Agents A, B, and C are shown.]
[FIG. 6. Results of second clustering, in x1 and x2; the cluster of Agent B is identified.]
[FIG. 7. Results of third clustering, in x1 and x3; no compact cluster appears.]
[FIG. 8. Results of final clustering, in x2 and x3; the cluster of Agent C is identified.]
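The loop in steps 1-10 can be sketched roughly as follows. This is a simplified reading of the method rather than the authors' implementation: it assumes SciPy and scikit-learn, uses single-linkage clustering with the Calinski-Harabasz score as the VRC, examines only the first local maximum rather than the first m, and treats the cutoffs v and s as given.

```python
import itertools
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import calinski_harabasz_score

def choose_k(data, k_max=8):
    """Number of clusters at the first local maximum of the VRC."""
    scores = {}
    for k in range(2, min(k_max, len(data) - 1)):
        labels = fcluster(linkage(data, method="single"), t=k, criterion="maxclust")
        scores[k] = calinski_harabasz_score(data, labels)
    ks = sorted(scores)
    for a, b, c in zip(ks, ks[1:], ks[2:]):
        if scores[b] > scores[a] and scores[b] > scores[c]:
            return b                      # first local maximum (step 5)
    return max(scores, key=scores.get)    # fall back to the global maximum

def cssw(events, features, v=0.25, s=10):
    """Sketch of cluster-specific salience weighting.
    Returns (events of one agent, salient feature names) pairs."""
    agents = []
    remaining = np.asarray(events, dtype=float)
    # Steps 9-10: work through feature subsets from largest to smallest.
    for size in range(len(features), 1, -1):
        for subset in itertools.combinations(range(len(features)), size):
            if len(remaining) <= s:
                return agents
            data = remaining[:, list(subset)]
            labels = fcluster(linkage(data, method="single"),
                              t=choose_k(data), criterion="maxclust")
            keep = np.ones(len(remaining), dtype=bool)
            for c in np.unique(labels):
                members = labels == c
                # Steps 6-7: a cluster with more than s points whose variance is
                # below the cutoff v in every clustered dimension is an agent.
                if members.sum() > s and np.all(data[members].var(axis=0) < v):
                    agents.append((remaining[members],
                                   [features[i] for i in subset]))
                    keep &= ~members
            remaining = remaining[keep]
    return agents
```

Called on data like the synthetic three-agent example sketched earlier, with features ["x1", "x2", "x3"], this loop should first peel off an agent cluster that is compact in all three dimensions and then recover the remaining agents in the two-dimensional subsets, mirroring the sequence of Figs. 5-8.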
5.2 Using the Discovered Agents in a Multiagent Model
To construct a multiagent model of the Internet, a simulation of the Internet must first be created. To create this simulation, the features to be considered are chosen. As mentioned above, some features could be the political affiliation of the site, the size of the site, or the type of business done at the site. Since it is clearly impossible to model the entire Internet, a subset of sites of interest to the modeler could be chosen. Each of the sites is identified by the values of its features and by a label, which is the address of the particular site. An interesting characteristic of the Internet is that multiple sites will have different addresses but the same (or very similar) vectors of features. Then the agents and their preferences must be created. It is possible, if the modeler has no attack data, to create the agents a priori, by considering the
preferences of an imaginary attacker. However, for this case we assume that the modeler has previous attack data. The feature vectors of the sites that have been attacked will be extracted and clustered using the CSSW method described above. This will result in the discovery of the number of agents and their preferences (or lack of them). Once the agents and their preferences have been identified, a new round of attacks can be simulated. This direct simulation gives the user the chance to experiment with potential changes and to see the effects in a synthetic Internet. This is shown graphically in Fig. 9. However, this methodology can also be used to create a protective system. Systems administrators do have some options open to them after an attack is started. One option is to shorten the "wait" time on a SYN connection. This decreases the severity of the attack, but it also decreases the accessibility of the
[FIG. 9 (referenced above): attack data are stored in an attack database of events (x1, x2, ..., xp, address), from which the attacking agents (Agent 1, Agent 2, ...) and their preferences are discovered.]