Improving Air Safety through Organizational Learning


For Rodrigo and Víctor, with the wish that they stay together for many years.

Improving Air Safety through Organizational Learning
Consequences of a Technology-led Model

JOSÉ SÁNCHEZ-ALARCOS BALLESTEROS
Quasar Aviation, Madrid, Spain

© José Sánchez-Alarcos Ballesteros 2007

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise without the prior permission of the publisher.

José Sánchez-Alarcos Ballesteros has asserted his moral right under the Copyright, Designs and Patents Act, 1988, to be identified as the author of this work.

Published by
Ashgate Publishing Limited
Gower House, Croft Road
Aldershot, Hampshire GU11 3HR
England

Ashgate Publishing Company
Suite 420, 101 Cherry Street
Burlington, VT 05401-4405
USA

Ashgate website: http://www.ashgate.com

British Library Cataloguing in Publication Data
Ballesteros, Jose Sanchez-Alarcos
Improving air safety through organizational learning : consequences of a technology-led model
1. Aircraft accidents – Prevention  2. Organizational learning
I. Title
363.1'24

Library of Congress Cataloging-in-Publication Data
Ballesteros, Jose Sanchez-Alarcos, 1957–
Improving air safety through organizational learning : consequences of a technology-led model / by Jose Sanchez-Alarcos Ballesteros.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-7546-4912-0
1. Aeronautics, Commercial--Safety measures.  2. Safety education.  3. Organizational learning.  I. Title
TL553.5.B2575 2007
363.12'40683--dc22
2007005505

Printed and bound in Great Britain by MPG Books Ltd, Bodmin, Cornwall.

Contents

Preface

1  Commercial Aviation: A High-Risk Activity
   Air safety as a model of successful learning
   Limitations to success in organizational learning
   Reductions in the rate of improvements in safety levels
   The reduction in the rate of learning as a consequence of a model

2  Event Analysis as an Improvement Tool
   Potential and uses of event analysis
   Life cycle of information on events
   Limitations of event-based learning

3  Safety in Commercial Aviation: Risk Factors
   Classification of risk factors
   Analysis of risk factors
   Summary of the treatment of risk factors

4  Explanation of the Reduction in the Rate of Learning in Complex Environments
   Systems that learn from errors and systems that avoid errors
   Barriers to organizational learning
   Organizational paradigms and their role in learning
   Paradigms and modes of action
   Adjustment of the different organizational paradigms to the needs of learning in air safety

5  Organizational Learning in Air Safety: Lessons for the Future
   Difficulties for the change in organizational paradigm
   Change of organizational paradigm

6  Meaning and Trust as Keys to Organizational Learning
   Meaning and its role in organizational learning
   Role of trust in organizational learning

7  The Future of Improvements in Air Safety
   Criteria for the definition of an organizational learning model
   Determinant factors of learning ability
   Alternative learning model

8  Conclusions
   Limitations of the current learning model
   Changes in the relationships between variables within the system
   Future lines of development
   Final conclusions

References

Index

Preface

Writing a book is not an easy task. It is easy to start with a very well-defined plan but a moment arrives when the book decides to fly alone. The author can choose between following the book or trying to control it. I would like to explain why and how this book was started, when it started to fly on its own and the surprises that the author encountered during the whole process.

I have been flying for more than 20 years, starting with microlights and progressing to gliders and light planes. However, I have never flown professionally. My job has been in the field of business management, as a consultant and as a Human Resources teacher. When having a PhD became important in order to remain active in any principal business school, I decided to get one in an easy way. Since I had many contacts in aviation, it should have been easy to get information about air safety – as a model of excellence – and, after that, I could export the findings and methodology to the field of business management.

The first surprise came when the air safety model turned out not to be so perfect and, moreover, for many years improvement rates had been decreasing. From that moment, the idea of easily putting together a thesis became impossible, at least if one tries to be honest about the results without hiding the ones that counteract one's interests!

The analysis of this fact brought me to the second surprise: there is no fundamental difference between the model in air safety and the one in business management. Simply, the first one is under more pressure and, hence, has been forced to advance further. That was good news. It is like having a map: knowing where many businesses are going to run into problems because air safety – the more advanced model – has already arrived there.

From this situation arose the third surprise. As a consultant and professor in the Human Resources and Knowledge Management fields, I was very glad that I could analyse many situations from a very privileged point of view, offering original and valid solutions to many problems in the field of organizational learning. The surprise was precisely that this idea was not embraced with the enthusiasm that I could have expected from the business management field. Instead, the enthusiasm came from … the aviation field.

A few months before presenting the manuscript to Ashgate, I spoke about my main findings at an ICAO meeting and they were very well received. Since then, I have been more encouraged by the aviation field than by the business management field, even though my initial objective was the application of the air safety model to business management. I have found friends and interesting people in the professional aviation field who are really involved in safety, and it is a real pleasure to work with them. Perhaps the difference in behaviour between these two fields explains why the air safety model is more advanced than many business management models.

In this book, you are going to find many descriptions of accidents and attempts to establish valid conclusions from them to improve the air safety model. Many people ask me whether, after this kind of research, I would keep flying, even as a passenger. Of course I would! There are, however, two differences if I compare the "age of innocence" before my research with the present:

• My fear is no longer synchronised with that of the rest of the passengers. I have found myself in situations where people were scared while I was reading the newspaper, and vice versa.
• I am now fully convinced that flying is safe not just because of the wonderful technological designs, but because of the people who operate planes, fix or check them and control their flights.

I think that, after reading this book, you are going to know why and, if I have been able to explain it well enough, you will share this opinion.

Last but not least, this work would not have been possible without the patience of my family, especially my wife Elena.

Chapter 1

Commercial Aviation: A High-Risk Activity

The very characteristics of commercial aviation – highly complex systems and the grave potential of their contingencies – impose upon the organizations that work in this field the quality of high-risk organizations. Weick (1987) defined these types of organization as those in which reliability is a more pressing problem than efficiency.

Many activities share this concept of high-risk organization (aviation, maritime transportation, the nuclear and petrochemical industries …). All of these activities have one point in common: a lack of reliability can lead directly to loss of life, ecological disaster and, in general, physical consequences.

Under any possible definition of the word risk – one that strictly refers to physical risk, or a broader definition that includes all types of risk – it is clear that commercial aviation can be included under the heading of high risk. As proof of this fact it is sufficient to analyse the following Ten Commandments of Flight Risk1:

Ten Commandments of Flight Risk

1. Commercial jets, at their cruise level, move at speeds close to the speed of sound. Any impact with another object, moving or otherwise, represents a disaster and the time available to react is very limited.
2. Aircraft travel at a distance of between nine and 12 km from the ground. The atmosphere there is not breathable and, to the risk of asphyxia, we must add that of impact due to an uncontrolled loss of altitude.
3. The external temperature at the cruise level of jets is around -50°C. Exposure to these temperatures would not permit survival beyond a few minutes.
4. External atmospheric pressure is very low. To maintain a comfortable pressure inside the aircraft, the sheet metal that separates the interior from the exterior is subjected to significant stress. This manifests itself as metal fatigue.

5. On long flights, half the take-off weight of the aircraft is fuel. In case of impact, this involves risk of fire or explosion and, in the case of depletion, grave risk of accident.
6. The engines work under conditions of high pressures and temperatures, with the associated risk of mechanical failure, explosion or fire.
7. Large aircraft, under almost any meteorological condition, take off at speeds close to 300 km/h and land at speeds close to 200 km/h.
8. Aircraft are subject to all sorts of meteorological conditions that affect visibility, cause structural impact, electrical discharges or the build-up of ice on external surfaces.
9. Aircraft cover large distances over all types of geographical zones. At many geographic points, an aircraft can find itself more than three flight hours2 away from the nearest airport.
10. Congestion on some flight routes3 or in the terminal zones of large airports involves the risk of collision.

It is significant that, despite the facts pointed out in the Commandments defining aviation as a high-risk activity, insurance companies do not increase their premiums for frequent flyers, because they consider that the incurred risk does not justify an increase in premium.4

At the same time, the Commandments present a special feature that deserves to be noted: activities exist that begin as low risk but become high risk as they evolve. Amongst the ten risks highlighted, only number 2 is a risk intrinsic to flight. Even so, the risk arises from the technology that is used. The remaining nine types are a result of the evolution of aviation towards higher efficiency and would never have been applicable to the pioneers of flight. Naturally, the risks that have come as a result of evolution should not hide the fact that other risks faced by the pioneers have disappeared.

1 Own definition.
2 Equally, in the crossing of the Atlantic as in that of the Pacific, there exist routes where this phenomenon occurs and which, occasionally, are flown with twin-engine aircraft.
3 It has recently been concluded that the precision of navigation systems is sufficient for two aircraft flying with 1,000 ft of vertical separation in opposite directions not to represent a risk of collision, and this has resulted in changes to the regulations on aircraft separation.
4 This same indicator – the behaviour of insurance companies – has been used in the opposite sense by Beck (2002a) as proof of the existence of risks in other environments whose magnitude makes them uninsurable.


There are eyewitness reports from the 1930s – such as wings breaking due to turbulence, catastrophes caused by simply stopping an engine, or loss of control due to lack of visibility – that describe risks that, happily, have disappeared.

Evolution, however, has introduced efficiency and this has changed the risk scale: if risk is defined as the seriousness of an event multiplied by the probability of it occurring, evolution has modified both factors, reducing the probability and increasing the seriousness. Beck, an author critical of systemic models, nevertheless expresses this phenomenon in systemic terms, indicating that the unforeseen consequences of functional differentiation can no longer be controlled by greater functional differentiation. In fact, the very idea of controllability, certainty or safety collapses.

Air safety as a model of successful learning

There is a clear contradiction between the danger intrinsic to commercial aviation, as explained in the Decalogue of flight risk, and the low number of accidents, quantified by the NTSB as 0.103 serious accidents per million hours flown.5 The only possible explanation for this apparent contradiction is the accumulated result of intense learning, which has led to a significant improvement in the level of safety. A review of the history of commercial aviation permits us to check that this learning has actually taken place and that the consequent improvement in the safety level has been the result.

Amongst the various factors that have applied pressure towards improvement, Charles Perrow highlighted several differential elements of commercial aviation with respect to other high-risk activities:

1. The possibility of an air accident with several hundred persons in the aircraft, and the consequent public impact, represents a constant nightmare for manufacturers and operators, of aircraft as much as of airports or air traffic. This has forced an active search for improvements driven not directly by external pressures, but rather as a precaution against situations that, once they occur, have an impact that is difficult to calculate. The pressure exerted by professional syndicates – fundamentally pilots − can also be thought of as internal: they protest if there are any undue risks and carry out their own safety studies.

2. Regulatory bodies are in a difficult position. On the one hand, they should have a supervisory role; on the other, given the economic and social impact of air transport, they cannot limit themselves to putting hurdles in place but must facilitate the development of the activity.

5 Data referring to commercial transport of passengers in the United States.

3. Users tend to avoid travelling by air when a large accident occurs, or avoid specific models of airplane6 if their accident rate exceeds that which should correspond to them statistically. It is also the case that political and business elites have a personal interest in air safety, as they are often frequent flyers themselves.

In addition to the pressure factors towards improvement, Perrow points to commercial aviation as privileged where opportunities for learning are concerned. The opportunities for learning have arisen as much from its own accumulated experience as through activities not directly linked to commercial aviation. Thus, it has been possible to learn lessons from the military and space fields that are useful for the design of new models of airplane,7 without incurring the costs and risks that a complete in-house development would represent. Frequently, the same manufacturers work for civil and military aviation. The latter can have different requirements, and the learning obtained through the different military aviation programmes can be translated easily to the civil field.

These reasons, pressures and opportunities could justify, on their own, the success of air safety via continuous learning and improvement. Nevertheless, there is another element, which will be covered thoroughly in the following chapters. Information, to be useful, must flow with relative freedom within the system. This fluidity represents one of the basic elements of learning. Military and space experience would not count for much if its results remained in the field where they originated. The same can be said of the results of accident investigations in the commercial field itself.

Limitations to success in organizational learning

Until now, we can conclude that commercial aviation is a high-risk activity and, despite this, has achieved levels of safety that constitute a success. Some shadow, however, darkens this panorama. Since 1975, as can be seen in Figure 1.1, the improvement in safety rates has slowed to the point of remaining almost constant. Perrow attributes this fact to causes that will be addressed separately: the necessity of efficiency and an increase in complexity.

6 The existence of a set of serious accidents involving the DC-10, and the implication of the manufacturer in dubious practices intended to avoid the hardening of regulations, led to the practical death of this model of airplane.
7 The introduction of 'fly-by-wire', meaning that the pilot gives instructions to an information system instead of directly controlling the airplane, was first produced in military aviation and, only when it had been proven, was it installed in the Concorde, much later in Airbus and finally in Boeing.


Figure 1.1  Annual evolution of air disasters between 1959 and 2005

Need for efficiency

The definition of a high-risk organization puts reliability as a more pressing problem than efficiency. Nevertheless, that hierarchy of reasons does not always hold. Once an acceptable level of reliability is reached, the pressures towards increased efficiency in operations can impose themselves. The action of the low-cost airlines, and the impact of their existence and practices on the traditional operators, are significant in the search for efficiency.

As technology improves, the potential for safety increases, but this potential is not fully used for improvements in safety. The demands of efficiency – that is, improvements in speed, altitude, manoeuvrability, fuel consumption, reduction in training cycles and the possibility of operating in any meteorological conditions, among others – are always present. These demands claim for themselves part or all of the new technological capacity. In this way, a technological improvement that could provide increased safety can be diverted from this objective towards increased efficiency.

Fischhoff et al. (1999) point out that the behaviour of the different actors is radically different in their level of requirements, according to their perception of risk. Hale and Baram (1998) expressed this very idea via their equilibrium curve between risk and efficiency, represented in Figure 1.2.

Figure 1.2  Risk vs efficiency equilibrium. Source: Hale and Baram, 1998

For Hale and Baram, the simplest line of action, or law of least resistance, leads towards a reduction in the standard of action, which increases the level of risk. This reduction is corrected via the establishment of improvements in procedures, and the dynamic equilibrium between these two tendencies produces a curve along which the behaviour of organizations oscillates.

In aviation, the margin for degradation of action is small. If users perceive risk, be it associated with a model of airplane, an airline or commercial aviation as a whole, they will flee. Perrow (1999) and Ranter and Lujan (2001) addressed the consequences of the public impact of an accident. They analysed the impact of a series of accidents involving the DC-10 – including ones whose causes were identified and corrected − and its premature withdrawal from the market.

Fischhoff (1999) tackled the problem of acceptable risk under the heading of apparently simple solutions, pointing out that it would be tempting to claim that no risk will be tolerated but that, from the perspective of decision-making, it is necessary to ask whether absolute safety is possible. Even if it were materially possible, the cost could make commercial aviation unviable. Fischhoff himself offered the answer to this problem. The perception of an acceptable level of risk is a condition for being able to act in the marketplace. However, once an operator is perceived as being of acceptable risk, most passengers would tolerate a small increase in risk in exchange for a great reduction in cost.

As a consequence, the demand for safety on the part of the public is not always the same: if the perceived risk is not considered acceptable, an acceptable level of safety represents an objective to achieve. Once the perception of high risk is eliminated, trade-offs between safety and efficiency become admissible. Under these conditions, the perception of a low level of risk is crucial for the support of commercial aviation.


Maintaining this situation is the cost of admission, but once an operator – be it an airline, a manufacturer or a specific airplane − has reached it, there begins the territory of compromise decisions between safety and efficiency. Reason (1997) illustrates the idea that operators are in a permanent search for equilibrium with the following figure, where we can observe the possible lines of action:

Figure 1.3  The lifespan of a hypothetical organization through the production–protection space. Source: Reason, 1997, with permission

The argument used by Reason is similar to that used by Fischhoff when he pointed out that it would be simplistic to announce that no risk would ever be tolerated. An excess of zeal in the search for safety would drive an operator to bankruptcy without even covering 100 per cent of the possible events. On the other hand, a lack of zeal in safety would lead to catastrophic effects. By definition, in commercial aviation all operators move within the central zone; otherwise, either the regulatory bodies would have impeded their activity or they would not have been economically viable enough to maintain themselves in the market.

Luhmann offers a related explanation, although it complicates an attractively simple model: the acceptability of risk does not depend so much on the level of perfection achieved as on its origin, and this origin will have a determining role in that acceptability. For Luhmann (1993), the concept of risk applies when the loss is attributed to a decision, whereas when uncontrollable external factors intervene, he uses the concept of danger. Upon introducing this distinction, there follows the requirement that the possible origins of an event be clearly differentiated. However, the average user does not have the necessary information and must trust the specialists' criteria.

In summary, the perception of acceptable or unacceptable risk will be conditioned more by the credit that users give to the specialists than by their own capacity to evaluate. However, as Beck (2002) points out, this is not a static situation.


Confidence in the criteria of the specialists can be lost, with catastrophic consequences, when evidence appears that the collateral effects of products or industrial processes are endangering the basic requirements of life. That can unleash the collapse of markets, destroy political confidence and economic capital, and undermine belief in the superior rationality of the experts.

Both points of view explain the mobility of the level considered acceptable, depending on different contingencies. The equilibrium point searched for by the experts is not arrived at exclusively by taking into account risk calculations and comparing them with a predetermined level of acceptability. The level of acceptability is also determined by the origin attributed to an event and the possible perception of inadequate decisions or conduct in its management.

The perception of inadequate behaviour would result in the destruction of trust pointed out by Beck and in the requirement for extraordinary control measures.8 In the best of cases, these control measures could harm efficiency, with the resulting effects on functionality. In the worst, they can be incapable of limiting the impact of the loss of confidence and insufficient to prevent users abandoning an operator, a model of airplane or air transport itself. In this way, a loss of confidence can occur when an event happens that calls into question the internal functioning of the organization.

This invites us to treat the idea of an acceptable level of risk with a little caution, since it is quite variable and can change almost instantly in relation to the perception of the origin of the risk. The loss of confidence and the almost instant redefinition of the acceptable level of risk that it brings represent, therefore, a multiplying effect whose magnitude is difficult to calculate. The movie WRZ, dedicated to the accident that occurred in Buenos Aires on 31 August 1999 to a Boeing 737 belonging to LAPA, demonstrates the large number of irregularities that occurred in the case, with the consequent effects – penal as well as the bankruptcy of the operator.

Along similar lines, one can interpret the comment by Michael O'Leary, CEO of Ryanair (Crearon, 2005), who, when asked about the risks to his company, pointed out that there are only two: that they make a foolish mistake, or that a significant low-cost company suffers an accident. O'Leary is conscious that an eventual accident could instantly redefine the level of acceptability and leave some operators out of the game.

8 The state of North American airports following 11 September 2001 demonstrates this phenomenon. Many passengers have stopped flying, except in exceptional situations, and the volume of videoconferences and similar resources has increased spectacularly. At the same time, airports such as Miami International, traditionally used as transit points, are avoided by international travellers who want to escape the passport and baggage controls added since that date and the delays they cause.


This is, therefore, a game of highly unstable equilibriums. The operators try always to stay above the acceptable level, maintaining at the same time a level of efficiency that permits them to compete. This effort has as its backdrop the acceptability to users not only of the risk but also of its origin, and the consequent danger of a breach of confidence in the activity, with the associated consequences.

Increases in complexity

The growing complexity of the system makes possible the emergence of new types of events through the appearance of systemic risk. Once a certain level of complexity is reached, the capacity to improve remains at a marginal level, since modifications entail, as well as the solution to the problem, the capacity to cause new and unknown events.

Perrow noted as a positive point for air safety the great number of flights performed, in addition to the experimentation coming from the military and space fields and its effect on learning. This learning has materialized as an increase in the technological and regulatory load, which at the same time has introduced a level of complexity sufficient to create self-generated events. In an airplane, events of the systemic type are particularly easy: functionally unrelated systems can be physically close and interact in unforeseen ways. Some of the many examples of unforeseen interactions follow:







• The detachment of an engine from an American Airlines DC-10 caused the retraction of the surfaces that provide lift at low speeds and, hence, the destruction of the airplane. It can be considered an aggravating factor that, with the information available, the pilots interpreted the situation as an engine failure and followed the standard operating procedure (SOP). This required flying at a speed very close to the stall speed. Upon retraction of the high-lift surfaces, the stall speed increased and it was the pilots, involuntarily and following the SOP for an engine failure, who placed the airplane in the situation that caused its destruction.
• A grave failure in the pressurization system of a Japan Airlines Boeing 747 caused the total loss of hydraulic fluid, resulting in loss of control of the airplane and, therefore, its destruction. The violent rupture of the rear pressure bulkhead caused the breakage of the hydraulic lines.
• An explosion in the tail engine of a DC-10 caused the loss of the hydraulic systems, and with it the loss of conventional control, giving rise to a serious accident. Interestingly, in this case the pilots came up with a solution that did not previously exist, arising from a profound knowledge of the behaviour of the airplane.
• A tyre blow-out was the cause of the accident of a Concorde in Paris. It led to an engine failure and the perforation of a fuel tank. Furthermore, the nebulized fuel escaping from the perforated tank was ignited by the afterburner, starting a fire. Finally, the fire and a second engine failure led to the total destruction of the airplane.




• A severe pressurization failure in a Learjet caused the death of its occupants because the automatic pilot maintained the airplane at an altitude where survival was impossible without pressurization. During the accident investigation, options such as adding a pressure sensor to the automatic pilot were proposed: in the event of no activity in the cabin and the pressure dropping below the levels required for survival, the automatic pilot would make the airplane descend until the atmospheric pressure permitted the survival and recovery of the occupants. This option was rejected because an uncontrolled flight under these conditions could result in a collision with another aircraft or with high ground.

Luhmann (1996) describes this situation as hypercomplexity, defined as a state which occurs when each element of the system attempts to introduce an element of optimization from its own perspective. This concept, used in one way or another by all the authors who have developed their work in the field of systemics, can explain the appearance of new risks derived from development itself.

The effects of the increase in complexity on the appearance of new events are widely known. Morecroft and Sterman (1994) pointed out that linear and short-term reasoning is badly equipped for an environment of multiple feedback loops, stocks and flows, time delays and non-linear behaviour of the variables. Senge (1990), for his part, refers to complexity and its consequences, indicating that the majority of systems analyses concentrate on the complexity of detail and not on dynamic complexity. Because of this, simulations with thousands of variables and complex levels of detail prevent us from seeing patterns and interactions.

In short, individual action, without contemplating the possible impact on the system as a whole, does not lead to rationalization but instead, once a certain level of complexity is reached, to confusion. It is true that learning models that have provided good results in the past are still in use. Nevertheless, past success is not sufficient to guarantee future success; as Peter Drucker said, success makes obsolete the factors that made it possible. What is needed is a critical analysis of the learning model used and of its adaptation – or lack of it − to the present day.

Reductions in the rate of improvements in safety levels

As has been pointed out, commercial aviation has experienced spectacular improvements since its inception. The strong reduction in the rate of improvement after 1975 could be surprising, however. The excellent results up to 1975 in the reduction of disasters (see Figure 1.1), shown in the study made by the Boeing Commercial Airplanes Group, sufficiently demonstrate commercial aviation's capacity for improvement where safety is concerned. However, after significant development, there seems to be a strong reduction in the rate of improvement, which requires an explanation. Once the causes are known, the tools necessary for recovering the previous rate of learning will be known as well.


Naturally, this recovery will only happen if safety continues to be a priority in aviation. Senge (1990) gives an initial clue: as the limits of capacity of a system are reached, the adequate behaviour does not consist of redoubling the effort, but of acting on the limits themselves.

The reduction in the rate of improvement, depicted graphically in Boeing's statistics, was expressed by The White House Commission for the Improvement of Air Safety (1997) in the following terms:

Commercial aviation is the safest mode of transportation. That record has been established, not just through government regulation, but through the work of everyone involved in aviation − manufacturers, airlines, airport operators, and a highly skilled and dedicated workforce. Their combined efforts have produced a fatal accident rate of 0.3 per million departures in the United States. The accident rate for commercial aviation declined dramatically between 1950 and 1970. But, over the last two decades, that rate has remained low, but flat. Heading into the next century, the overall goal of aviation safety programs is clear: to bring that rate down even lower.

This puts into question air safety as an ideal learning model. It can therefore be supposed that something has changed and has affected commercial aviation's capacity for learning. The consequent reduction in improvement, however, takes place in a context that makes it a paradoxical phenomenon.

Throughout the history of commercial aviation, as will be seen in later chapters, learning has preferentially materialized in the form of technological improvements or operational procedures. Neither of these channels has reduced its rate of activity. On the contrary, since 1975 there have been significant advances as much in the precision of navigation as in the reliability of devices. Additionally, information technology has permitted the automation of different tasks, improved the readability of indicators and avoided peak overloads of information for the pilot. To illustrate this, Wells (2001), writing at a date well after the reduction in the rate of learning, cites improvements in the precision of inertial navigation systems, the introduction of satellite navigation systems, improvements in the capacity for operation without visibility, improvements in materials, increases in the reliability of engines, and the introduction of sophisticated information systems in flight cabins as elements that have, or appear to have, visibly improved.

Regulatory development has not slowed either. Modifications have occurred alongside important trends towards harmonization between regulations arising from different sources and towards improving regulators' control over operators and manufacturers.

In summary, the pressure towards improvement has not been reduced. A constant evolution has occurred, as much in the technological aspects as in regulatory development. However, this evolution has not been reflected to a similar magnitude in air safety itself. There is, then, a paradoxical behaviour, since the product – the level of safety − has not evolved at the same rate as the factors on which it seems to depend.


The reduction in the rate of learning as a consequence of a model

The factors that have led to the discrepancy between the accelerated pace of technological evolution and regulation on the one hand, and the pace at which improvements effectively occur on the other, are not exclusive to commercial aviation. Nevertheless, commercial aviation demonstrates these problems in a more visible way than other fields. The predictions of traffic increase from The White House Commission point to a strong increase in the absolute number of accidents in these terms:

Focusing on the accident rate is critical because of the projected increases in traffic. Unless that rate is reduced, the actual number of accidents will grow as traffic increases … Boeing projects that unless the global accident rate is reduced, by the year 2015, an airliner will crash somewhere in the world almost weekly.
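The arithmetic implicit in this projection can be made explicit with a simple expression; the figures that follow are purely illustrative and are not Boeing's or the Commission's actual inputs:

$$N = r \times D$$

where $N$ is the expected number of accidents per year, $r$ the accident rate per departure and $D$ the number of departures per year. A rate in the region of one serious accident per million departures, applied to a projected traffic volume of several tens of millions of departures per year, yields several tens of accidents per year, that is, close to one per week. Because the absolute count scales linearly with traffic, keeping it constant while traffic doubles requires halving the rate, which is why the goal is framed in terms of the rate rather than the number of accidents.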

This scenario results in strong pressure towards improvement. The different actors attempt to avoid the forecast situation at all costs; however, it seems that avoiding it demands a substantial improvement that could be beyond the possibilities of the current learning model.

It is true that technology and procedures have played an important role in the achievement of improvements. It is worth asking, however, whether an improvement model whose basic tools are technology and procedures imposes insurmountable limits on learning capacity. If it were so, the natural question is: why have alternatives not been sought out?

Decisions that could involve changes in the learning model are difficult because they affect organizations' cultures. In this sense, Peter Drucker criticizes the habit of managers of maintaining the attitudes and behaviour that led to the original success, in the belief that they will also lead to success in the future. Air safety could, following Drucker, be relying on a model that has worked in the past but has reached its limit.

It is possible that we are insisting on using an outdated model, but this is not the only possible explanation for the reduction in the rate of improvement. The advances achieved in air safety could suggest that the reasons to keep on improving have lost importance because a satisfactory level of equilibrium between air safety and the cost of obtaining it has been reached. Another possibility, supported by the idea that the situation is satisfactory and that other necessities have to be covered, is that improvements relevant to safety could have been diverted from their original objective towards the achievement of greater efficiency.

Both possibilities, which are complementary, would mean the exploitation, in the short term, of a favourable situation in terms of safety. Nevertheless, faced with the forecast increase in air traffic and the associated increase in disasters, it does not seem that taking advantage of a perception of acceptable risk can be considered the trend that will mark the future of the sector.


The experts in quality could explain very simply the reduction in the rate of improvement: when a system has reached a high level of perfection, marginal improvement has a growing cost. If the level reached is considered satisfactory, there would be a clear justification for not incurring ever-increasing costs.

This is a simple and apparently unobjectionable explanation. However, if, as forecast, we can expect one serious air accident per week in the year 2015, even if the relative figure were the same, the absolute figures would be unacceptable due to the public impact that an interminable series of serious accidents would have. To avoid this, the objective of improving the levels of safety by a factor of five was established. Nevertheless, if the system that permits the improvement of safety levels finds itself at its marginal improvement levels – that is, those where the cost of a small improvement is very high – it becomes necessary to find an alternative system.

The current system, based on technological and regulatory improvement, has been useful. However, the current situation allows an easy analogy with the functioning of an engine. More power can be extracted from an engine by introducing a turbocompressor and increasing the pressure. When the overpressure is at its limits and even more power is wanted, the solution cannot consist of introducing yet more pressure: radical design changes are required. Where safety is concerned, this is the situation in commercial aviation. If a higher rate of learning is needed and the current model is incapable of providing it (because its learning strategy increases its complexity even more), it is necessary to change towards a less complex model. However, the complexity has appeared as a response to certain needs and cannot simply be eliminated, but rather requires alternatives.


Chapter 2

Event Analysis as an Improvement Tool

Air safety has shown a high capacity for learning and improvement, the rate of which slowed towards 1975. Understanding this phenomenon, and the search for alternative ways to improve, requires an understanding of the learning process itself.

There are visible improvements that can be considered the end of a learning cycle, that is, the point at which what has been learned is ready to be used. Nevertheless, an initiating element is also necessary in the learning process. In commercial aviation, the element that initiates the process is the occurrence of an event.

In the initial stages of aviation, learning was aimed at avoiding the repetition of events such as the one that initiated the process. Events are understood as deviations from the expected action. Such deviations demonstrate shortcomings and permit the implementation of the improvements necessary to avoid their occurrence in the future. Since the beginning of commercial aviation, the identification of the causes of an event has been a basic tool of improvement. As a consequence, the identification of deviations and their causes constitutes a vital part of the learning process.

The role of event analysis in the improvement of safety will be studied from a double perspective:

1. The potential and uses of event analysis.
2. The event-based learning cycle.

Potential and uses of event analysis

The potential seriousness of an event has grown at the same rate as the size of airplanes. Because of this, event investigation occupies a key place in the whole safety system. A single event can be the origin of important technological or procedural changes. Significant examples of this path of learning are the accident of a Concorde, which resulted in the replacement of its fuel tanks with ones capable of withstanding the impact of tyre debris launched at high speed, or the requirement to ask for confirmation before firing the extinguishers in an engine fire; on one occasion, an accident occurred in which, by error, the extinguishers were applied to an engine that was working properly.1

1 Experience shows that certain accidents occur in situations of exceptional tension and, despite the procedure, the Binter accident in Málaga, on 29 August 2001, occurred for exactly the reason described: shutdown of the engine that was functioning correctly.


The role of event analysis as a learning tool has ample recognition, as much in the technical field as in the philosophical field or even that of clinical psychology. Maturana (1995) points out that the failures of machines constructed by man are more revealing about their effective operation than the descriptions we make of them when they do not fail. From another point of view, Maturana's affirmation is an exemplary description of the clinical method of investigation, by which conclusions are obtained about normal functioning using the abnormal as the starting point.

In the philosophical field, Karl Popper (1993) contributed his falsifiability principle, according to which what makes a theory or statement scientific is its ability to eliminate or exclude the occurrence of possible events referring to the same phenomenon. The occurrence of an event that, during the design phase, was considered impossible demonstrates that the theory or statement on which this belief rests fails.

Therefore, event analysis is especially useful for highlighting parts of a process that do not function adequately, so long as there is sufficient information for its reconstruction. In the case of commercial aviation, the level of destruction that can occur in a large accident makes the gathering of information difficult and requires the adoption of specific measures.

On occasions, the investigation of an event has led to the production of high-quality experimental designs, at heightened cost and even with a certain risk, in the search for the cause of an accident. Possibly the most representative case is the investigation of the explosions at cruising altitude of the de Havilland Comet airplanes – the first commercial jets − to determine where and why the structural failure occurred. In the first phase of the investigation, flights were carried out at the altitude at which the explosions had occurred, taking technicians with parachutes and oxygen masks as passengers. However, on those flights the airplane was not pressurized and nothing abnormal could be observed, as it was the pressurization that was the cause of the accidents. Once it was suspected that the failure was due to the pressure exerted on the fuselage by the pressurization system,2 the airplane was kept on the ground and repeatedly filled with water under pressure and then emptied.

2 A simple explanation of the phenomenon would be the following: suppose that we repeatedly inflate and deflate a balloon inside a bottle. If the material or the thickness of the bottle is not sufficient, a point will be reached when the pressure of the balloon breaks the bottle. The Comet had large windows that permitted the passengers better views than on current airplanes, but those windows took strength away from the structure until it yielded. Most probably, an experiment like the one detailed would be unnecessary today, as a computer simulation could be done instead.


This action reproduced in the experiment the same structural failure, due to metal fatigue, as had occurred on the airplanes in flight, and gave rise to the corresponding modifications.

Other experiments of great interest have been performed on the effects of wake turbulence – caused by the movement of a large airplane – and on the formation of ice on the external surfaces of the airplane, as a result of accidents involving Boeing 737 airplanes and another two involving Avions de Transport Régional (ATR) airplanes (NTSB, 1996). In the first case, tests were carried out in flight with two airplanes of different size, where the smaller – a Boeing 737 − was deliberately introduced into the wake left by the larger to check the effects on its controllability. The experiment permitted the elimination of this possibility and, years later, in one of the longest investigations into an event, it was found that the cause was a failure in the design of the airplane. Very similar to this experiment was the one carried out to determine the effect of ice on ATR airplanes. One airplane in flight released frozen water onto the ATR to check the effects on it and whether this could explain the sudden loss of control that occurred in the accidents. In this case, the experiment led to the conclusion that the accumulation of ice on the ailerons resulted in a sudden and difficult-to-control deflection of those surfaces.

Smaller experiments can be found in accident investigation reports, such as in the case of an Iberia accident which occurred at Boston airport (NTSB, 1974), and are carried out frequently, reproducing on simulators the parameters of a flight just as they were obtained from the recording device aboard the airplane.

The importance of reconstructing an event led to the construction of devices that permit the necessary information to survive, even in the case of a high level of destruction. Over time, this field has advanced to the point that there are virtually no unexplained accidents in commercial aviation.

A more useful change has occurred relating to the potential seriousness of events. Whilst at first the ultimate objective of an investigation consisted of establishing what had happened, as the level of accumulated learning grew and the potential seriousness also grew, the objective began to be the avoidance of events ab initio, more than the avoidance of their repetition. This change, as we shall see later, has had a great impact on the capacity for learning.


The search for the failure

A learning model centred on the correction of an identified failure is coherent in an environment where there are various factors, each with sufficient destructive potential to cause a serious accident on its own, without the need for other concurrent factors. In the early days of commercial aviation, there were numerous single causes with the potential to cause an accident. In that environment, the subsequent analysis was a conceptually simple task, but one that was instrumentally difficult. The main difficulty was the capacity to reconstruct an event; the lack of technical means for reconstructing the facts could make the investigation difficult even though the factors involved in an accident might be few.

As the environment became more complex, a second action model emerged, derived from the fact that there are no longer single causes capable of causing an accident.

Breakage of causal chains

Currently, accidents occur as the result of complex interactions between different variables. Reason's (1997) approach, reflected in Figure 2.1, posits the existence of holes in a set of defensive barriers; if, at a given moment, those holes align, they expose the contradiction between the ideal and the reality and permit an accident to occur.

Figure 2.1  The ideal and the reality for defences-in-depth. Source: Reason (1997), with permission


The near non-existence of single causes, and the complexity and diversity of causal chains, make the probability of repetition of a specific sequence of events very remote. For this reason, an approach to safety aimed specifically at avoiding the repetition of a specific event could today represent a useless objective.

As a consequence, the situation has changed. It is no longer about trying to prevent a new appearance of a sequence of events whose probability is extremely low. In its place, investigation is aimed at the breaking of causal chains, in the knowledge that these are complex and that it is the concurrence of various such chains that causes an accident. This objective – breakage of causal chains instead of the avoidance of repetition − has led to the appearance of systemic models of investigation, attesting to the fact that in complex systems not only is it more probable that unforeseen interdependencies will arise from the failure of one part of the system, but also that those who operate or direct the system, because of their training and the specialization of their tasks, are probably less capable of anticipating, perceiving or even diagnosing the interdependence before the incident turns into an accident.

Establishment of legal responsibility

As well as the change of focus in analysis due to aviation's own evolution, there are other requirements. The increase in the size and speed of airplanes, among other factors, has raised the potential severity of a single event. A disaster caused by a design or maintenance failure in the airplane, an incorrect instruction from an air traffic controller, an instruction incorrectly understood by a pilot, or an incorrect or badly executed decision in any phase of the flight gives rise to significant compensation claims. It is therefore necessary to establish who bears responsibility for the damage caused, and this objective, like that of improvement, also requires a careful reconstruction of the facts.

A model case of reconstruction is that of Egypt Air (NTSB, 2002), where a Boeing 767 crashed into the Atlantic Ocean for reasons unknown at the time. The analysis of the flight recordings and the remains gave rise to two opposing interpretations. The airline insisted on a technical failure, based on the fact that another airplane of the same model, manufactured at the same time and operated by Lauda Air, had been completely destroyed in a previous accident due to the spontaneous operation of the thrust-reversal mechanism3 on one of the two engines.

3 This mechanism is used during landing and consists of reversing the flow of the engine exhaust, using it to brake the airplane.

The investigations carried out via the recordings demonstrated a set of facts that left no doubt about the voluntary origin of the event:


1. The first officer was alone in the cockpit at the moment the airplane began an almost vertical descent.
2. When the captain entered the cockpit and attempted to recover the normal flight position, the action of the first officer was contrary to the recovery (a fact corroborated by the position in which the elevators were found) and, finally, he cut off the fuel to the engines.

The possible impact on passengers of the idea of suicidal pilots in the airline led it to attempt to argue for a technical failure, a hypothesis easily refuted by the data from the flight recorders.

The establishment of legal responsibility is, apparently, foreign to the learning process. Nevertheless, the use of event analysis to establish responsibility can have an antagonistic effect on the investigative process and on the very capacity to respond to an event. In the first place, the potential informants about an event may, at the same time, have contributed by their actions to the situation of risk. If this is the case, the incentive to report truthfully is scarce. This is one of the reasons why it is important to have the means to reconstruct an event even where there is little collaboration on the part of the actors.

In addition, the system of penalties associated with the establishment of responsibility has led to growing pressure towards regulatory compliance. Schlenker et al. (1994) have used the model known as the triangle of responsibility, according to which people are considered responsible in a given situation so long as three conditions hold: there is a set of clear regulations applicable to the event; the actors are bound by those regulations in terms of their role; and the actors have control over the event. Due to the growing complexity of commercial aviation, the regulations can encounter situations for which they were not created. Nevertheless, even in these situations, the pressure towards compliance persists.

Life cycle of information on events

As pointed out, the damage caused by the most serious events – and, because of this, those requiring an explanation with the utmost urgency − has forced the development of recording devices and channels of information, and an ample use of statistical information to make decisions about risk. Because of this, there are various phases that define the life cycle of the information about an event. In the first place, the gathering of information: in serious events this takes as its base the installed recording devices, and in less serious ones it relies on the collaboration of the actors. Secondly, the information is distributed to those actors interested in the causes of the event and, lastly, corrective action is taken. Each of the phases of the information life cycle is described below.


Information gathering phase

The analysis of an event is only feasible if there is sufficient information on the facts that generated it, and if fidelity in the gathering process is guaranteed. To guarantee that the information exists, devices and operational procedures have been developed whose specific goal is to permit the reconstruction of the actions preceding an event. In this way, its causes can be determined. Today, thanks to this, events in commercial aviation that are not fully explained are very rare.

Reading any accident report shows that, to produce it, recordings covering the conversations in the cockpit and the radio transmissions have been used. In addition, reports draw on recordings of engine power parameters, position, and so on. Lastly, the analysis of the remains, together with everything previously mentioned, tends to leave little margin for doubt.

The recording devices aboard an airplane, also known as 'black boxes', have two components. The first, the flight data recorder, maintains a recording of the flight data – heading, speed, flight level, vertical acceleration and microphone use, among other things. The second, obligatory in airplanes since 1966, is the cockpit voice recorder, which weighs nine kilograms, records all sounds in the cockpit and can withstand an impact of 30,000 kg.

The gathering of information has permitted its use as much through direct analysis as through feeding the parameters of a crashed flight into a simulator; observation of the behaviour of the simulator can provide the necessary information. Additionally, the gathering of data for conversion into statistical information also forms part of the same process, even if in this case it is directed more at making risk-based decisions than at the causal analysis of a specific event.

In this sense, it is worth highlighting the objective stated by McIntyre (2002), which presupposes the prior existence of statistical information, with reference to American regulations: the acceptable probability of a catastrophic risk as a consequence of continued flight must be less than one in one billion. In this way, decisions regarding the redundancy of vital devices, or the authorization to use a device, are taken on the basis of probability calculations, which in turn are constructed on information derived from events. Two cases of the use of statistics for making risk-based decisions follow.

Two cases of the use of statistics for making risk-based decisions follow.

The first case is illustrated by the DC-10 and known as United 232. Three independent hydraulic systems were installed in the DC-10 under the assumption that the probability of three systems failing at the same time was below the probability admissible for catastrophic risk of one in one billion. In this case, the calculation was performed at the design stage, and the gathering of event information could serve to adjust the estimated probability. The case of United 232, which will be covered at length, showed that the three systems could fail at the same time and forced the radical modification of the assumption.

The second case consists of the authorization for transoceanic flight of twin-engine airplanes. This authorization is based on the requirement for technical measures and statistical records that check the real reliability of their engines; with this objective in mind, even small deviations from normality are recorded, the analysis of which might reveal a tendency for failure of a specific engine. Nowadays, there are numerous large-capacity airplanes equipped with only two engines and authorized to make transoceanic flights; an engine failure over the ocean means the airplane must remain in flight on a single engine for a period that can exceed three hours, with a large number of people on board. This type of decision depends on a constant gathering of data which, eventually, could require its revision. Curiously, the system can be a victim of its own success: due to the reliability achieved, it is difficult to accumulate a sample large enough to determine the real reliability of a single engine subjected to the extra effort required to keep a twin-engine airplane in flight. Faced with this fact, the certifying authorities have opted to require that, in these circumstances – a twin powered by a single engine after the failure of the other − the working engine must operate without exceeding its normal operating parameters for pressures and temperatures.

Lastly, alongside recording devices and the gathering of data for statistical processing, there is the gathering of voluntary reports. There are two unavoidable problems with recording devices. First, their analysis is laborious and, as a result, they are only used when the importance of an event requires its reconstruction. And second, on their own, the data and the resulting analyses are not communicated to where they are needed – the devices simply provide a medium for storage.

The use of recording devices is, therefore, exceptional in character. Different situations with potentially serious consequences would go unnoticed and would not generate learning without voluntary action on the part of those involved. This voluntary action would consist of communicating risk situations that, having concluded happily, have not given rise to an investigation. The transmission of these types of cases is a valuable instrument. Nevertheless, possible penalties if the informant was, at the same time, the cause of the situation would lead to the loss of information. The importance of this problem has been acknowledged by the regulatory bodies and has given rise to specific programmes, in Europe as well as in the United States. The idea of seeking improvement rather than responsibility underlies the concept of risk management programmes. This approach consists of abandoning the idea of determining culpability and penalties in favour of a non-punitive approach to errors with a view to preventing future errors. Systems of voluntary reporting function under this concept. An error or omission can be freely acknowledged without consequences. In this way, a similar action that would have serious consequences on a different occasion can be avoided.

Information distribution phase

As a result, thanks both to the availability of technical means and to the incentives aimed at gathering information on risk situations, information on events and near-events in commercial aviation is widely distributed. This represents an undoubted benefit from the prevention point of view. It must be highlighted, however, that the information distributed inside the system on potential or real events is much broader than that distributed among users.4 This is a double-edged sword:

1. It's positive because it avoids creating alarm as a competitive practice. In addition, all those involved can benefit from the experience of others through the flow of information among them.
2. It's negative because, by not competing in safety, there could be effective reductions in the level of safety by agreement among operators. An equilibrium between risks and benefits can be socially acceptable when it has been reached via a public process, but is unacceptable if it has been reached through an agreement between regulators and the regulated.

Fischhoff (1998) defines an ideal information policy, pointing out that, conceptually, determining which information is necessary to communicate is easy. In the first place, it is necessary to describe which decisions will have to be made by people.

4 On this point, the transparency of some regulatory organizations, which allow public access to their reports via the Internet, should be highlighted. Nevertheless, the interpretation of a report habitually requires specialized knowledge that is usually foreign to the general public.

Secondly, it is necessary to determine what information is required to make those decisions. And thirdly, it is necessary to know what those persons already know and identify shortfalls in the information already available. Behaviour inside the system seems to fit this description. However, when the receiver of the information is not within a system composed of manufacturers, operators, regulators and other specialized professionals, the same criteria are not used. The information necessary to make decisions, that is, the information relating to particular operators or manufacturers, is lacking. As the following cases demonstrate, the information directed at users appears to be aimed more at pacifying them than at providing objective data.

A person responsible for courses for people with a fear of flying affirmed that a large airplane with all four engines stopped could cover some 200 km. The affirmation gives a false image via a half-truth: it's true that the glide capacity of a large airplane with the engines shut off would allow it to cover that distance if the glide commences at its cruise level. However, from this affirmation the following erroneous conclusion could be formed: a total stoppage of all four engines would be a trivial incident. The energy necessary for the handling of the control surfaces comes, in most models of airplane, from the engines in operation; the total loss of power could, for this reason, make control impossible. This fact, however, is omitted from the information supplied to passengers.

Another doubtful affirmation is made by airplane manufacturers about the engines. They allege that twin-engine airplanes are safer than older four- or three-engine airplanes. This affirmation leaves another factor unmentioned: age and technological development. A long-range twin-engine airplane can come from more advanced technology than a previous-generation four-engine airplane. In addition, the technologically less advanced airplane is older and subject to events derived from its age. The correct comparison, if there is an attempt to really evaluate the impact on safety of the number of engines, should be made between airplanes of similar age and technology and with a different number of engines.

The disappearance of the flight engineer in most modern fleets apparently has no impact on safety, although it has generated considerable debate, especially amongst pilots. This debate has hardly reached the outside world; however, in extreme situations a third crew member devoted to all the mechanical elements would leave the pilots free to concentrate on piloting and navigation. Take as an example the case of a Swissair MD-11 airplane in 1998 and its accident in Halifax after an incident of smoke in the cockpit. The two people in the cockpit had to attempt to land at an unfamiliar airport, which requires gathering information regarding position, runways, radio frequencies, and so on. At the same time, they had to jettison fuel because they exceeded the maximum landing weight, and try to find the origin of the emergency. They also had to maintain control of the airplane at all times. There is no basis to affirm that the accident could have been avoided with a third crew member, as there were other factors, such as the misidentification of the problem in the first place, but the mere description of the tasks suggests that this hypothetical third crew member could have been very valuable.

The different cases illustrate that the public image sought is one that shows everything is under control and that there's nothing to worry about, save price and comfort. For those professionally involved in commercial aviation, a vision that excludes the user from the loops of information is comfortable so long as it is supported by publicly acceptable safety levels. Nevertheless, even in that case, it faces the risk of a sharp loss of confidence in the event of a serious accident with a significant public impact. Wells (2001) calculated the public impact of an accident considering that it has a direct relation to the number of victims, such that the loss perceived by society is proportional to the square of the number of deaths in a single accident. So, 100 deaths in one accident cause the same social impact as 10,000 deaths spread over individual accidents.

The efforts made to give an image of great safety can be justified by the attempt to compensate for the negative, difficult-to-control effect of a serious accident and its associated public impact. However, an excess of zeal that borders on disinformation can, eventually, increase the negative effect of a large accident because of the loss of confidence it would generate. The opacity towards the exterior illustrated by cases such as those described represents a risk of loss of confidence in the face of a serious accident that could be attributed to an irresponsible decision. Despite this, opacity has, as compensation, the possibility of great fluidity inside the system, with the consequent speed in taking corrective action.
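Wells's square-law estimate above can be made concrete with a short sketch; the scale of the result is arbitrary and only the comparison between scenarios is meaningful.

    # Illustration of Wells's (2001) estimate that perceived social loss grows
    # with the square of the number of deaths in a single accident.
    # The scale is arbitrary; only the comparison between scenarios matters.

    def perceived_impact(deaths_per_accident, accidents=1):
        """Perceived loss under the square-law assumption, summed over accidents."""
        return accidents * deaths_per_accident ** 2

    one_large = perceived_impact(deaths_per_accident=100, accidents=1)       # 10,000
    many_small = perceived_impact(deaths_per_accident=1, accidents=10_000)   # 10,000
    print(one_large, many_small)  # the same perceived impact, as stated in the text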

Mauriño et al. (1997), quoting the analysis of the Challenger space shuttle accident, extract the following paragraph from the conclusions: All organizations are, to varying degrees, self-bounded communities. Physical structure, reinforced by norms and laws protecting privacy, insulates them from other organizations in the environment. The nature of transactions further protects them from outsiders by releasing only selected bits of information in complex and difficult-to-monitor forms. Thus, although organizations engage in exchange with others, they retain elements of autonomy that mask organizational behaviour.

As a generic description, the idea that a boundary arises between organizations is valid. In commercial aviation, however, one problem exists that was not present in the case of the Challenger. It's not a matter of a single organization, or a main organization surrounded by subcontractors, but of organizations in competition. The decision to behave as a block when deciding, for example, what information to communicate to the exterior has risks. In exchange, it permits the avoidance of concealment within a professional environment with a very clear boundary.

Information utilization phase: generation of new abilities

Once the cause of an accident is identified, the immediate action is corrective in character if the necessary technology exists, or preventive if the event is one that cannot be confronted successfully. On some occasions, both types of action are linked and technology is used as a form of prevention. One example is the installation of meteorological radar aboard airplanes. The radar does not eliminate the risk posed by a storm but serves to avoid its most active zones, hence avoiding the risk. Nevertheless, given that today practically all accidents are multicausal, the advance in the level of learning generated a model of action aimed more at the identification of critical points than at the avoidance of a specific type of event. The investigation and subsequent recommendations after the Los Rodeos accident are representative of this idea. The magnitude of the accident, which involved the collision of two Boeing 747 airplanes as one of them was attempting to take off, could make one think about the introduction of radical changes throughout the whole of the system. Instead of this, seeking to break causal chains, the following recommendations were made:

1. Placing of great emphasis on the importance of exact compliance with instructions and clearances.
2. Use of standard, concise and unequivocal aeronautical language.
3. Avoidance of the word 'take off' in the ATC clearance and adequate time separation between the ATC clearance and the take off clearance.

This accident can be used as an example of the concatenation of circumstances that tends to occur in a serious accident. The set of relevant facts that coincided in time is detailed below. It is worth highlighting that it would have been sufficient for just one of them not to occur for the accident to have been avoided. This circumstance, on the other hand, can validate the actions following the analysis and put in context the apparent weakness of the recommendations:

1. Closure of Gran Canaria airport due to the explosion of a bomb and the threat of a second.
2. Marginal meteorological conditions at Tenerife airport, where the accident occurred.
3. Obstruction of the exit taxiway on the ramp (congested due to the closure of Las Palmas) by the KLM airplane. The Pan Am airplane was ready for take off earlier, but had to wait as a result of not having enough room to exit, which caused the temporal coincidence.
4. Great difference in status between the captain and the first officer in the KLM airplane, meaning that the latter did not maintain a vigilant attitude over the actions of the captain.
5. Flight time for the crew brushing the legal limits and the need to take off quickly or cancel the flight, with all the problems that would represent.
6. Mistaken intersection taken by the Pan Am airplane when leaving the active runway.
7. Temporal coincidence between the radio transmissions of the tower and the Pan Am airplane, which interfered with each other and were not well received in the KLM airplane. The content of the transmissions ('Wait and I will advise' from the tower and 'We are still on the runway' from the Pan Am airplane) means that the reception of either one would have indicated without doubt that the take off should have been immediately suspended.
8. Use of the reduced-power take off technique which, in exchange for reduced fuel consumption and extended engine life, lengthens the take off run of the airplane. The importance of this factor can be appreciated knowing that the first impact was between the landing gear of one airplane (KLM) and an engine of the other (Pan Am) or, put another way, at the moment of impact, the KLM airplane was already airborne.

9. Use of a non-standard phrase, not understood by either the control tower or the captain of the Pan Am airplane, regarding what they were actually doing ('We are now at take off'), which could be interpreted as 'We are now at the take off point' or 'We are now taking off'.
10. Lack of recent practice on the part of the KLM captain as pilot-in-command in actual flight, having lost familiarity with the type of incidents that can arise and increasing the levels of anxiety associated with such incidents.

Similar conclusions with respect to the interaction between circumstances and the need to find a critical factor could be obtained from the case of TWA-800, in which a TWA Boeing 747 exploded in flight shortly after take off. The circumstances that led to the accident are as follows:

1. A hot day and a delay in the flight forced the air conditioning to be kept running for a long time.
2. The motor for the air-conditioning system was close to a fuel tank and heated its interior.
3. The fuel tank was empty. Nevertheless, the gases present in the tank because of its regular use become explosive at a lower temperature than the liquid fuel. Therefore, an empty fuel tank represents a greater risk of explosion than a full one.
4. A short-circuit in an instrument caused an electrical spark which ignited the explosive atmosphere in the fuel tank, causing the explosion of the airplane.

Contrary to the previous case, there was no operational failure here but a design failure that, through a set of exceptional circumstances – which had not occurred in more than 25 years of operation of this model of airplane – had the occasion to emerge. In consequence, the only action possible would be the modification of the design.

In this way, the aim is to determine which actions can achieve the greatest results in the task of preventing the future occurrence of events similar to the one that caused the investigation. Hale (1997) points out that the analysis and learning process generated models, tools and estimates about which some consensus existed for the first two 'ages' of safety, centred on avoiding technological and human failure, respectively. A third age – into which we might be entering and where the principal concern is complex sociotechnical and safety management systems – still finds itself in the early stages of development. Despite this, it becomes indispensable in the face of the shortcomings of the older approaches.

Both cases are examples of how the convergence of a set of small occurrences can result in a large accident. This situation justifies, in the taking of corrective measures, taking into consideration a systemic principle: cause and effect need not be proportional in magnitude.

The requirement to bear in mind the interaction between variables isn't the only problem that arises when exploiting the information gained from event analysis. There are also complications derived from the moment at which a corrective action is identified. Thus, Einhorn and Hogarth (1999) distinguish between what they call thinking backward and thinking forward. Thinking backward – a basic task in the analysis of an event – is defined as a primarily intuitive task, diagnostic in character, which requires a capacity for judgement in the search for clues and the establishment of apparently unconnected links, such as the testing of possible causal chains or the search for analogies that might help. Thinking forward, a task performed when attempting to prevent an event, is different. It does not depend on intuition but on a mathematical approach. Whoever needs to make the decision should group and weigh up a series of variables, and then make a prediction. With a strategy or with a rule, evaluating the exactness of each factor and combining the elements of information, the person making the decision reaches a single, integrated prediction.
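A caricature of the thinking-forward modality is a fixed rule that weighs a list of variables chosen in advance and combines them into a single prediction. The sketch below uses invented weights and factors; its only point is that any variable or interaction left out of the rule – the kind of emergent coupling discussed in the next section – simply does not exist for it.

    # Caricature of 'thinking forward': a fixed rule that weighs pre-selected
    # variables and returns a single, integrated prediction. Weights and factors
    # are invented for illustration only.

    RISK_WEIGHTS = {
        "crew_fatigue": 0.4,
        "weather_severity": 0.3,
        "traffic_density": 0.2,
        "equipment_age": 0.1,
    }

    def predicted_risk(factors):
        """Weighted combination of the pre-selected factors, each scored 0..1.
        Anything not named in RISK_WEIGHTS is silently ignored."""
        return sum(weight * factors.get(name, 0.0)
                   for name, weight in RISK_WEIGHTS.items())

    scenario = {"crew_fatigue": 0.2, "weather_severity": 0.9,
                "traffic_density": 0.5, "equipment_age": 0.1}
    print(f"predicted risk: {predicted_risk(scenario):.2f}")
    # A factor, or an interaction between factors, absent from the rule
    # contributes nothing to the prediction, however much it matters in reality.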

Limitations of event-based learning

The two methods described – thinking backward and thinking forward – are both used. In event analysis, it can be said that the analysis of information is carried out in the thinking-backward mode but, once conclusions are obtained, the taking of measures is carried out in the thinking-forward mode. If we return to the idea of hypercomplexity that Luhmann highlights as a limit to learning, it can be related to the two modes of thinking just mentioned. Doubt over the feasibility of thinking forward in a hypercomplex environment arises immediately. Perrow pointed out that the problem with complex organizations is that functionally unrelated units can establish an unforeseen interaction due to their proximity. A thought forward, especially if taken from the point of view of a specific specialization, can solve one problem but cause another of similar or greater seriousness.

A safety device directly caused the accident of an Israeli Jumbo at Schiphol airport in 1992. The function of this device was to ensure that, in case of losing an engine in flight, the separation would be clean, without taking with it part of the leading edge of the wing and making flight impossible. To this end, a component called a 'fuse' was installed which, in case of extreme torsion, would yield. That way, in exchange for the loss of an engine, the shape of the wing and its aerodynamic characteristics would be preserved. From the point of view of the preservation of the aerodynamic qualities of the airplane, the solution appeared correct. In this specific case, the accident – with numerous victims due to the crash into a populated zone − occurred because the safety device had a fissure, which reduced its resistance; the device then behaved according to its design. The force generated by the engines at take off power was sufficient to make it yield, and the airplane lost two engines.

Beck (2002) describes these situations as ones of manufactured uncertainty. This type of uncertainty supposes that not only is the knowledge base incomplete but, surprisingly, the increase in knowledge frequently results in more uncertainty. So, the mathematical approach involved in Einhorn and Hogarth's (1999) 'thinking forward' modality becomes progressively more difficult and an inadequate approach. The possibility of emergent relationships between variables, not considered a priori, prevents them from being incorporated into the design; hence, operation can become hard or impossible once an uncontemplated event appears.

In addition, as well as the two modalities already mentioned, which refer to the past or the future, there is a third way of acting. It involves action in the present time which, like the two previous ones, has its own working rules. Reason (1990) groups his analysis of human abilities under the heading Design of a fallible machine, referring primarily to the limited capacity for attention and, hence, to deductive processes that are also limited, since they operate on the data perceived by that limited capacity. Under these conditions, the human operators find themselves needing to try to respond to a situation based on their belief about how things work. This belief is derived in part from their own experience and allows them to reject possibilities and operate with those considered most probable.

However, since not all the necessary data are available, they use heuristic reasoning, very much separated in its functioning from models of formal logic. Hypothetically, it would be possible to have a system using thinking-forward resources applied to the present situation and without the limitation of the low calculation speed of the human operator. However, this hypothetical system only links the relevant variables if they have previously been linked in its programming, and it reveals itself to be useless before an unforeseen situation. An illustration of this point is provided by the next case and the results obtained through the use of a simulator.

Flight simulators are used in accident investigations, as mentioned, to know the behaviour of the airplane they simulate. Despite the experience acquired with these systems, there are still situations like the one that occurred in a runway overrun by an Iberia Boeing 747 in Buenos Aires.5 The airplane suffered an engine failure an instant before reaching the take off decision speed. Applying the brakes at maximum power resulted in blowing out 16 tyres but, contrary to predictions, the plane did not have room to stop on the runway, overrunning it by some 60 metres. The use of the flight data recorder to test in a simulator why the airplane did not brake did not clarify the matter since, in the simulator tests, given the brake pressure used and the moment at which it was applied, the airplane systematically stopped before the end of the runway. The solution appeared when a variable not introduced into the simulator was identified. The continuous application of maximum brake power, as well as blowing out the tyres, fused the braking devices, making them useless. The simulator had data regarding hydraulic pressure during braking and its corresponding stopping-time calculation, but had not included the physical deterioration of the device and its resulting non-functioning.

5 Data provided by the captain in command of the airplane, D. José Luis Acebes de Villegas.
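The Buenos Aires case can be restated as a modelling problem: a simulation only reproduces the variables built into it. The following sketch uses invented numbers and a deliberately crude model; its sole purpose is to show how a stopping-distance calculation that omits the deterioration of the brake units will 'prove' that the airplane should have stopped.

    # Toy stopping-distance model with invented numbers: a simulator that does not
    # include the physical deterioration of the brake units concludes that the
    # airplane stops on the runway, while the real airplane did not.

    RUNWAY_REMAINING_M = 1500.0   # illustrative distance available from the abort point
    DECISION_SPEED_MPS = 80.0     # illustrative speed at which the take off was rejected
    MAX_BRAKING_MPS2 = 3.0        # illustrative deceleration at maximum brake pressure

    def stopping_distance(speed, decel, brakes_fail_after_s=None):
        """Distance to stop at constant deceleration. Returns None if the brakes
        fuse (stop working) before the airplane has come to a halt."""
        time_to_stop = speed / decel
        if brakes_fail_after_s is None or brakes_fail_after_s >= time_to_stop:
            return speed ** 2 / (2 * decel)
        return None

    for label, failure in (("simulator model (brakes always work)", None),
                           ("with brake deterioration after 15 s", 15.0)):
        d = stopping_distance(DECISION_SPEED_MPS, MAX_BRAKING_MPS2, failure)
        if d is None:
            print(f"{label}: braking lost while still rolling -> overrun")
        else:
            where = "on" if d <= RUNWAY_REMAINING_M else "beyond"
            print(f"{label}: stops in {d:.0f} m ({where} the remaining runway)")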

Analysis, whether backward or forward, is carried out on the basis of mental models coming from a dominant logic. This leads to the inclusion in the analysis of some variables and the exclusion of others. The mental models resulting from this selection of variables can represent a limiting factor for the gathering of results. Both elements – types of analysis and mental models − are closely related since, normally, analyses are not only carried out at different times but are also carried out by different persons with their specific mental models. In this way, design – a thinking-forward activity – is executed by engineers with their mental model. The management of the event is handled by pilots, who also have their own mental model, different from the previous one. Finally, the investigation – a thinking-backward activity – is usually carried out by specialized officials, with the technical support of the previous two and with their own mental model. So then, as well as the differences imposed by the moment at which a decision is made, the different actors impose their own mental models, which emphasize or limit the contribution of different factors as a function of each model.

Returning to the conclusions of the investigation of Los Rodeos, there is a surprising lack of reference to the fact that the captain of the KLM airplane was a member of the firm's management and, as such, his principal recent flight experience had occurred in a simulator, where take off authorizations or the lodging of passengers and crew don't exist, despite their contribution to the anxiety of the pilot and, in this case, to his hurry to take off. According to flight records, the captain had performed less than 260 hours of flight per annum over the past six years – less than half the normal amount. It should be considered, as well, that the Boeing 747 is mostly dedicated to long-haul flights. Because of this, if the number of hours flown already seems scant, it is more so if taken in the context of long-haul flights: the conclusion points to an unusually low number of operations. Later, criticisms would appear over the fact that the so-called 'management pilots' were never questioned in their roles as pilots even though their actual flight experience was considerably lower than that of other pilots without management positions. According to Beaty (1995), this fact is seen as a natural privilege for those in positions of authority. The culmination is that the pilot with the least recent practice in the fleet is often the one who flies the important figures or makes the important flights.

In summary, the events are analysed from a dominant logic. This functions like an information filter and, if the dominant logic is of a technical type, most of the learning
acquired will take the form of regulatory or technological development. That imposes specific advantages and disadvantages linked to that type of development model. The dominant logic will determine a mental model, variable in different collectives and which can often be unaware of the fact that the form of reasoning is different depending on temporal factors. The same clues are not available when an event is analysed and when it is caused or when there is an attempt to anticipate a future event. The analysis of an event is done under the model of thinking backward, which responds to a systemic model limited by the dominant logic itself. The generation of new regulations and systems is done under the model of thinking forward, which tries to plan for all possible contingencies. Lastly, the reaction during the event is done in the thought modality of limited rationale. Of all of these, especially for a technical dominant logic, the most fragile is that which imposes the making of decisions in a situation of limited rationale. For this reason, as much in event analysis as in the generation of regulations and systems, they set off to search for predesigned solutions. This way, they avoid the risk to the operator of having to tackle contingencies in a state of limited rationale. The highlighted proposal is itself a product of the technical dominant logic. In fact, even in a situation of limited rationale, the operators have at their disposal contextual information that a regulation or technological design does not have. Technological protection of the flight envelope, which can act out of context, is one example. The apparent fragility of human functioning remains in question by recent studies on cerebral functioning. It has been observed that the number of cerebral connections towards the exterior of the brain is much greater than the number of connections towards the interior. The conclusion they derive from this fact is that the general principles, foundation for the construction of any system or regulation, out of context, are a creation following the comparison with an enormous amount of situations already lived. From here would come the reasoning labelled as heuristic. First, it is attempted to apply a general model and if this were insufficient, a more primary resource would be used: the comparison of the current situation with individual experiences. In this way, the resultant decisions are aligned with context. Obviously, predesigned general solutions don’t have this ‘emergency resource’ at their disposal, which the operator does have at the moment of taking action. Predesigned solutions only function before foreseen contingencies and, by definition, in a complex environment not all contingencies can be foreseen. Consequently, predesigned solutions shouldn’t be converted into sole resource, exclusive of the system. Events, for all of this, represent a basic component of learning. The lines that this follows, however, depend on the organizational cultures and the perception these have of the situation. To say that events occur because the risk exists for events to occur would be an obvious tautology. It can be more productive to classify the possible risks in factors that have given rise to specific forms of learning. These risk factors and the ways in which the air safety system has learned from the events related to them will be analysed in the next chapter.

Chapter 3

Safety in Commercial Aviation: Risk Factors

In the previous chapter, it was seen how events have been a useful source of learning but their interpretation is always carried out under a mental model that introduces its own bias. This mental model determines which data to pay attention to and, in doing so, can introduce limitations to the learning process itself. Events happen because there are risks. These risks can be classified within a set of factors, a task that will be tackled next, analysing each one of them. As a general rule, the risk factors and their evolution are indicative of a learning model that gathers information with a high degree of confidence and communicates it broadly within the system and with few distortions. Nevertheless, the way in which this information is communicated and stored responds to a model of dominant culture with a strong technological burden. This culture marks the types of conclusions obtained and the actions then carried out. Along this line, Croadsell (2001) points out that when the system is perturbed by the appearance of new learning, this finds a place in relation to what is already there. The same idea, although adding a critical nuance, was expressed by Reason: there is a habit of trying to technologically fix problems created by technology. Two relevant facts already highlighted can explain some behaviours of the system as a whole:

1. Information relative to air safety is not communicated to the exterior with the same transparency as within the system. A boundary is drawn, inside of which information is communicated with relative fluidity.
2. There is a set of pressures where the search for efficiency and safety often play complementary roles: improvements in safety, following the line drawn by the dominant culture, reach a limit where they lose relevance because the system's organization does not have sufficient capacity to use them. The pressure towards efficiency introduces operational changes that permit the use of these theoretically irrelevant abilities. In doing so, new risks appear that require new safety measures to be taken, taking another step in the evolution of the system.

In the coming sections, after the classification of the risk factors, an analysis follows of how the reduction of risk has been sought in each one of the factors that can generate it.

Classification of risk factors

Commercial aviation is, obviously, an activity subject to risk. The analysis of each of the risk factors has permitted an evolution aimed at reducing its impact. The whole system has learned to avoid unexpected situations that could require improvisation and the generation of solutions on the part of the operator. The risk in commercial aviation has been classified according to its generating factors, but many classifications have been produced over time. Before initiating an analysis of risk factors, it's necessary to select which classification of factors will be used, since there are various options:

1. The accident investigation manuals (FAA, 2000) are very specific and aimed at technical personnel. Their specific nature as instruction manuals for the execution of the right operations in the right order limits their usefulness when attempting to carry out a global analysis of the activity.
2. The accident classification of the safety study performed by Boeing Commercial Airplanes Group (2006a) contemplates six factors as primary causes of accidents – crew, airplane, meteorology, maintenance, airports/control and others. This classification is useful for the purposes of a statistical study but can be insufficient if the objective is the analysis of the functioning of a dynamic environment where causal chains are complex.

As a consequence, a review is required of the different classifications that are neither strictly technical nor orientated towards statistical information, having analysed the following possibilities:

1. The first not strictly technical model that overcame the classic man–machine dichotomy – intended to serve a more analytical than descriptive point of view − was developed by T.P. Wright in 1946 and introduced the concepts of man–machine–environment as generic causes.
2. In 1965, through the work of the University of South Carolina (Wells, 2001), the inclusion of management arose as the evolution of this model, since many accidents can have had organizational factors as causes. One example is the Dryden accident (Mauriño, 1997) in 1989, which represented an inflection point in accident investigations. The Dryden case is an accident that, under the usual parameters, would have been routinely attributed to human error; in extremely low temperature conditions, a pilot with broad experience did not activate the wing de-icing mechanisms and the airplane was incapable of taking off. Nevertheless, the investigator assigned to the case began his work with the question: why does an experienced pilot make such a basic mistake?

The search for an answer led him to analyse organizational factors such as the merging of two companies, control by regulatory bodies, training and maintenance practices and, as a whole, some elements of the Canadian transport system. While in the initial model this factor would have been integrated into the environmental factor, its explicit separation contributes to the clarification of the causes.

3. Lastly, in the Flight Safety Foundation environment, and coming from the military, the mission factor was included in the model in 1976; that is, what is the objective of a specific flight? The inclusion of this factor gave rise to the model denominated 5-M, corresponding to the initials of the five factors analysed.1 Once again, a factor is extracted that was previously included in the environment and is relevant enough to affect safety, as reflected by the fact that different safety ratios correspond to different types of operations.

Once the different models were considered, it was decided to use the classification prior to 1976, that is, before the mission variable was included. That was done because the objectives of the current work refer specifically to commercial aviation. As a consequence, the mission variable is not discriminating, since all of the cases analysed share the same value for this variable. To configure the whole model, a fifth factor, called system complexity, would nevertheless be included.

It should be borne in mind that the 5-M model corresponds to the year 1976 and its predecessor – omitting the mission variable − to 1965. The growth of air traffic in volume, complexity of procedures and technological development has led to the occurrence of events not directly attributable to any of the classification factors but to an exceptional combination of several of them. As Perrow pointed out, we have created designs so complicated that we cannot predict all the possible interactions amongst the inevitable failures; safety devices are added that are fooled, avoided or annulled by hidden routes inside the systems. This same phenomenon was also noted by Morecroft and Sterman (1994) under the name of the non-linear relation, to which they attribute two principal characteristics:

1. Disproportion between the magnitude of causes and effects
2. Production of results that do not respond to independent causal variables but to some form of interrelation between them.

It was pointed out in the previous chapter that the non-linear relation is visible in numerous accidents, some of which have been described to illustrate the phenomenon. Because of this, as well as dealing with individual risk factors, it's necessary to include one factor in its own right among them: creative interaction. This possible interaction has been called system complexity.

1 Man, Machine, Medium, Mission and ‘Management’.

Therefore, the model that will be used to classify the sources of risk will be derived from the 5-M model, changing the mission variable into system complexity to get the following form:

1. Human factor
2. Technological factor
3. Environment
4. Management
5. System complexity.

Each one of these variables will be treated as a generator of risk for aviation. As a consequence of these risks, the system will apply solutions that, generally, will be analysed under the same headings. There will be situations where the solution increases the complexity of the system and, as a secondary effect, reduces the risk of one factor while increasing that of another. This situation occurs, for example, when a risk derived from the human factor is reduced by a technological solution that introduces its own source of risk.

Analysis of risk factors

Next, each of the risk factors and the learning strategies used in its reduction will be analysed. Various phenomena worthy of being emphasized are observed in this analysis:

• Any classification is destined to clarify. Nevertheless, in a single event there can be various risk factors acting jointly, and the boundaries between them can become diffuse.
• There is not a separate problem–solution approach for each one of the factors; in fact, there are factors identified as sources of risk but, when searching for solutions, these can be applied without intervening in the risk factor itself.

In this way, a human error can be solved by implementing a specific technological resource and, in fact, the improvement in the technological factor should be analysed through a historical analysis of the technological improvements generated and their contributions to safety.

• Systemic risk will cover situations that don't fit any of the prior factors. Its inclusion contributes to a greater precision of the classification although, given the interaction between factors, we will continue to find points of intersection between them.

Human factor

According to a study by Boeing Commercial Airplanes Group (2006a), 55 per cent of the accidents in its study up to 2005 were caused by human error. Nevertheless, most accidents have multiple causes. It's the convergence of independent causes that can lead to situations that induce errors. Additionally, there are two reasons, analysed next, which show that Boeing's figure should be viewed with caution:

1. Attributing the cause of 55 per cent of accidents to the human factor can give the false impression that, if the human factor were eliminated, accidents would be reduced by 55 per cent.
2. Human error often appears superimposed on other failures that, evidently, also have a role in the cause of the accident.

A pilot's inadequate behaviour when faced with an engine failure can give rise to an accident. However, even when it is clearly determined that the behaviour was not adequate, it seems clear that the accident would not have occurred if, previously, there had not been an engine failure. From a technical mentality, it could be affirmed that the engine failure also comes from a human failure, be it in maintenance or in design. If this were so, it would be necessary to conclude that 100 per cent of accidents occur because of human error. Of course, this affirmation would contribute nothing, since it eliminates all possibility of discrimination where origin is concerned.

The social impact of an accident prevents the visibility of the other side of the human factor, that is, the number of potential accidents avoided thanks to people's correct intervention. It is easy to determine how many accidents have occurred where human error has been a primary cause, but it is not possible to know how many accidents have been avoided thanks to people's actions. This idea is expressed by Villarie (1994), pointing out that it is not possible to adequately measure the results of air safety since no one can count the fires that never started, the aborted take offs that don't happen, or the engine failures and the forced landings that never occurred. In the words of Dekker (2006), the same idea is expressed with, 'you can't count errors'.

The second difficulty in evaluating the real impact of the human factor in an environment of evolving technology and procedures is that the basic function of persons is to confront failures or insufficiencies in the technology or the procedures and maintain, even in that situation, flight safety. An accident demonstrates that the human operators, understood in their function as the last barrier, have failed. From this point of view, many accidents could be attributed to human factors where other factors have contributed. The presence of the human factor as one of the causes in a large number of accidents has given rise to different approaches to its treatment.
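The caution about the 55 per cent figure can be illustrated with invented numbers: when accidents are multicausal, the percentages attributed to each factor overlap and cannot be read as the reduction that would follow from eliminating that factor.

    # Invented figures illustrating why '55 per cent caused by human error' cannot
    # be read as 'eliminating human error would remove 55 per cent of accidents':
    # accidents are multicausal, so factor attributions overlap.

    accidents = 100
    cited = {                       # how many of the 100 accidents cite each factor
        "human error": 55,
        "technical failure": 45,
        "weather": 30,
        "organization": 25,
    }
    human_error_alone = 12          # accidents where human error was the only cited factor

    print(f"citations add up to {sum(cited.values())} per cent of accidents")  # > 100
    print(f"accidents with human error as the sole cited factor: {human_error_alone}")
    # Only in those 12 accidents is it even arguable that removing human error alone
    # would have prevented the event; in the remaining 43, other failures had to
    # occur first and would still require an explanation.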

A classic approach to this treatment is that used by Searle (1980) in dividing actions into previously deliberated actions and intentional actions without a previous decision process. The training and socialization processes are directed at achieving a situation where human operators make the best decisions within their reach in deliberate actions. However, not all errors of human origin stem from incorrect decisions; some are incorrect executions of valid decisions, that is, intentional actions without a previous decision process. The search for a reduction in this type of error has centred its efforts on the search for instruments that are clearer in their readability and that do not induce errors, giving rise to an ergonomic approach. The principal supposition in cognitive ergonomics is the difference between the task, as defined by an organization – what people supposedly have to do – and the activity observed in real work environments – what people really do. In consequence, what we would try to achieve is the operator acting in a way that conforms to what is defined by the organization. For this, it is necessary for the operator to know how to do so and want to do so, and not to commit involuntary errors in the execution. For the purposes of the analysis of the human factor as a risk factor, the key element would be intent. Under the name of the psychological approach are included all those intentional actions that can include errors of judgement, errors of ability or voluntary violations of regulations, and under the ergonomic approach, all those factors derived from an environment that can induce those errors.

Psychological approach

This approach is centred on human limitations and on the problems that can arise in relating and communicating. In its heyday, it received strong backing as a consequence of accidents attributable to communication problems between crew members. Two well-known cases show the importance that this approach can take:

The Los Rodeos accident in 1977 involved two people with a great difference in their status within the KLM company. While the captain was in the upper management of the company, the first officer had less than 100 hours on the type of airplane and had recently been examined by that same captain to convert to the type of airplane in which they carried out the flight. The accident occurred despite the first officer of the KLM airplane being conscious at all times of the incorrect action of a captain whom he didn't dare correct a second time. On the first attempt at take off by the captain, the first officer pointed out that they did not have permission to take off and the captain again reduced engine power, ordering the first officer to request authorization. When, having made the request, he received instructions on headings and flight levels to maintain after take off, the
captain incorrectly interpreted this as authorization for take off. In this case, when the captain again applied power to the engines for take off, the first officer didn’t dare to correct him. Instead, he radioed that they were taking off; however, he did so using an incorrect expression in English, which the tower controller interpreted as: the airplane was stopped and ready for take off. The Pan American pilot, before the confusing expression was used, radioed that they remained on the runway. However, his transmission interfered with the controller’s answer to the KLM airplane instructing it to wait for take off. The interference between the two radio transmissions2 prevented this last opportunity to avoid the accident. A similar situation could have happened in a Trident airplane in 1971. As opposed to the previous case, in this one it is necessary to hypothesise because cockpit voice recorders were not being used at the time. During that time there was a pilot’s strike and the captain of the Trident was, as in the previous case, a manager of the company and had been in a heated argument before the flight. The autopsy of the pilot revealed a chest angina that could have led him to some absurd action or order, not questioned for reasons of status. The examination of the remains of the airplane would reveal that the hyper lift devices were withdrawn prematurely when the airplane still had not gained enough velocity to fly without them. This was the failure that caused the accident; the potential seriousness of this type of error had led the manufacturers of the airplane to install an alarm to warn of this situation but, as was commented subsequently, the number of false alarms made by the device led to, as

2 The radio used for communication in airplanes functions in half-duplex fashion, meaning only one station can transmit, but all those on the same frequency can listen to the transmission. This mode of operation is, of itself, a safety measure, since by listening to the radio one can detect situations of potential danger. In this case, however, when two stations transmitted at the same time – the Pan Am pilot and the controller − neither was received.

habit, the alarm being ignored.3 In this case, the alarm functioned correctly and the pilot limited himself to activating the switch so that it would stop sounding. The psychological approach has manifested itself in a growing interest in CRM4 training and in the selection procedures. In good part, the content of CRM programmes can be comparable to programmes aimed at improving individual abilities of communication and teamwork. Occasionally, there are specific procedures added that require the explicit agreement between crew to carry out the chosen action. Because of the American Airlines accident in Cali, criteria were established over the use of control elements of the airplane, which require that both pilots are aware of the actions carried out by each with the end of working as a team. With an identical aim, there are procedures, such as extinguishing an engine fire, where the pilot applying the extinguishers requests the other pilot to confirm that the extinguishers are really going to be applied to the burning engine and not another; although, as already shown, this has not prevented the same type of accident from occurring again. Procedures can be improved and the occurrence of events of different levels of seriousness is a source of constant improvement by gathering and analysing information. Nevertheless, the CRM philosophy also raises the adequacy or inadequacy of attitudes, whose detection is difficult a priori. The importance of maintaining the adequate attitude and the need to detect this by indirect means gives managers a complex dilemma: •



If meanings are extracted from inconsequential actions, professional careers can be harmed, seriously and without reason, based on indicators of doubtful value. If, on the other hand, indicative value is only attributed to serious actions, inadequate persons may be permitted to be in charge of an activity with vital and prominent risk.

The recording of simulation sessions and their subsequent analysis can also be an insufficient response. Although it cannot be guaranteed that the prescribed conduct
3 Safety devices whose ‘alarmist’ character leads to their being ignored deserve a special mention. The Avianca accident in Mejorada del Campo (Madrid) in 1983 is partially attributable to this factor.
4 Crew Resource Management.

will take place in real flight, this is conveniently exhibited when the operators know they are being observed. Although the physical environment is reproduced in simulators with extreme perfection, the action criteria in a simulator and in real flight are different. They are even more so if someone is conscious that their habitual attitude could harm them, meaning they will tend to modify it in the exercise. In the simulator training there is a different emphasis in the strict observation of regulations to that in real flight, where the need to comply with a planned activity can superimpose itself. It is worth remembering the KLM captain in Los Rodeos and his role as an expert in the simulator and, above all, how that broad experience did not prevent a situation of anxiety due to factors not present in training: maximum permitted activity in the crew; the possibility of having to disembark the passengers on an island without available lodging, and so on, which led to making an intranscendental mistake in a simulator but a vital one in real flight. An analogous situation to that which occurred in training takes place in the selection phases. The elimination of all those candidates that do not maintain the adequate5 attitudes before situations of risk involves the detection of these attitudes. This is not always feasible, especially when the candidate knows this is precisely what is being looked for. The use of tests or trained psychologists can reduce the risk but, at the same time, introduce a new one: the elimination of valid candidates – with the resulting harm to these − as consequence of an eventual aversion to all type of risk on the part of the selector. The selector and the pilot have a point in common: their behaviour is easy to evaluate once the consequences of their actions have been seen. However, during their work they may not have available all possible information as they try to apply their ability to the situation. The selector has available the possibility of avoiding all types of remotely visible risk. This, however, involves the unjust elimination of valid candidates. Because of all of this, the use of behaviour indicators becomes complex since there is a phenomenon of adaptation on the part of the subject being evaluated. The search for adequate behaviour in crew has led to important investments in the simulation of the physical environment in which this conduct occurs. This way, it is possible to achieve mechanical dexterity or improve mental models.

5 An excellent description of the risk-attitudes particular to the pioneers of supersonic flight and the space race can be found in the book The Right Stuff by Tom Wolfe, which was later made into a movie of the same name.

Nevertheless, it is more difficult to reproduce real-flight significance than to reproduce the situation itself. In a simulator exercise, decisions are evaluated in terms of their impact on safety with respect to the established procedures and the demonstration of dexterity in the handling of the airplane. In real flight, all of the parameters can coincide with those of simulated flight except for the source of the pressure. This, in real flight, refers to the pressures of the operation itself more than to technical factors. For example, the diversion of an airplane to an alternative airport is an economic decision as well as a technical one, as it means significant costs for the operator and passengers. In consequence, the decision in a simulator, if the introduced parameters so advise, will be of a technical character and will consist of landing at an alternative airport. In real flight – paradoxically, given that this is the situation in which the risk exists − it cannot be avoided that economic elements form part of that decision. In a similar way, the procedures regarding the distribution of tasks in the cockpit can be perfectly followed in the simulator, where pilots wish to show a third party that they know the procedure well. On the contrary, in real flight, a situation of unfamiliarity or mutual distrust between crew members would create a radically different scenario. The real situation is not one of examination but, in an extreme case, of survival, and there can be cases where the crew disagrees over the correct action. Practices such as staying in a holding pattern in situations of transitory meteorological risk, for example a storm, attempted landings in strong winds or with visibility somewhat below the minimum, are more frequent than desired. These practices, aimed at avoiding landing at an airport other than the destination, consume time and fuel and, if the situation is prolonged more than expected, the consumption of fuel during the wait can leave the alternative airport out of range of the airplane.

In summary, the emphasis on specific behaviour, disregarding its significance for the operator, can lead to the generation of dexterity that is practised in an exam situation but is not applied in a real situation, which imposes different parameters. These parameters may not be present in the simulation processes, and they modify the decision criteria for the actors.

Ergonomic approach

The ergonomic approach is centred on the design of controls and clear indicators to prevent errors in the interaction process. The density of indicators in a cockpit makes it impossible to maintain attention on all of them, especially in anomalous situations.


The engine indicators of the Boeing 737 were modified when, in an engine fire situation, the lack of clarity of the indicators led the pilots to shut down the correctly functioning engine. Instead of having the indicators for the left engine on the left side and those for the right engine on the right side, early models of the panel had an 'upside-down' configuration, and in an emergency the pilots could not instantly tell which engine was failing and act accordingly.

Two important variations on the ergonomic approach, sponsored respectively by Boeing and Airbus, are as follows:

1. Simplifying perspective: as far as possible, attempts to limit the complexity of the design to avoid errors derived from it.
2. Automating perspective: attempts to automate all those functions that, because of their complexity, could induce error.

An example of the difference in perspectives is offered by the design of the fuel tanks. The distribution of fuel is of great importance in determining the centre of gravity of the airplane. Boeing has opted in its most recent models to reduce the number of tanks, facilitating handling, whilst Airbus has opted to maintain a complex system of tanks managed by automated systems that avoid handling errors or activity overloads. These policies have practical consequences that go beyond the mere legibility of indicators. Simplification means not being able to optimize the position of the centre of gravity to the maximum, and possibly yielding in fuel consumption or manoeuvrability compared with a more aggressive optimization policy. On the other hand, the more aggressive policy – automation and multiple tanks − can bring consequences when the automated systems operate out of context. A fuel leak could lead an automatic system – seeking to place the centre of gravity in the adequate position − to transfer fuel precisely to the part of the system where it is being lost.
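A minimal sketch may make the trade-off concrete. The tank names, arms and transfer rule below are invented purely for illustration and correspond to no real aircraft; the point is only that an automated rule which optimizes the centre of gravity, with no notion of a leak, will keep sending fuel towards the tank that is losing it:

# Illustrative sketch: centre-of-gravity (CG) management with several fuel tanks.
def centre_of_gravity(fuel, arms):
    """CG = sum(mass_i * arm_i) / sum(mass_i), arms measured from a reference datum (m)."""
    total = sum(fuel.values())
    return sum(fuel[t] * arms[t] for t in fuel) / total

arms = {"forward": 20.0, "centre": 30.0, "aft": 40.0, "trim": 55.0}            # metres
fuel = {"forward": 8000.0, "centre": 20000.0, "aft": 6000.0, "trim": 3000.0}   # kg
TARGET_CG = 32.0                                                               # metres

def transfer_towards_target(fuel, arms, step=500.0):
    """Naive optimization rule: if the CG is ahead of the target, pump fuel rearwards
    (forward tank to trim tank), and vice versa. It has no notion of a leak."""
    cg = centre_of_gravity(fuel, arms)
    donor, receiver = ("forward", "trim") if cg < TARGET_CG else ("trim", "forward")
    moved = min(step, fuel[donor])
    fuel[donor] -= moved
    fuel[receiver] += moved
    return centre_of_gravity(fuel, arms)

# If the rearmost (trim) tank is leaking, every correction cycle sends more fuel to the leak.
for _ in range(3):
    fuel["trim"] = max(0.0, fuel["trim"] - 400.0)   # the leak, invisible to the rule above
    print(round(transfer_towards_target(fuel, arms), 2))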


Another important difference between manufacturers can be appreciated in meteorological situations of wind shear. These involve a sudden change in the direction and intensity of the wind and represent an extreme risk for the airplane in those phases of flight where it is close to the ground. Boeing has opted for a system that informs the pilot of the point up to which the nose of the airplane should be raised in order to climb, given that going above that point would lead to a loss of altitude. For its part, Airbus, faced with an identical requirement, uses its 'fly-by-wire' technology, by means of which the pilot does not have direct control of the airplane but rather introduces instructions into a system, so that the airplane simply stops obeying the pilot if he or she insists on raising the nose beyond the point signalled as optimum. Under either of the two options, the controls and indicators of an airplane are currently more easily legible, since there has been an important reduction in the number of both. So that this reduction does not represent a loss of information, the design should satisfy various criteria:

1. At all times, the set of indicators should provide information on the basic parameters of the flight.
2. When the value of a parameter is outside the range considered normal by the manufacturer, an indication or warning should be displayed. So long as the abnormal condition does not occur, the parameter remains invisible, thus avoiding the crowded, multicolour cockpits of older airplanes.
3. At any moment, the operator can request parameter data not considered basic in the design of the system.

The real difference, then, between the ergonomic perspectives of the two great manufacturers is the way in which each tries to reduce the risk of error. From the Boeing perspective, an attempt is made to simplify designs to reduce the probability of inadequate handling. From the Airbus perspective, complexity is maintained, on the understanding that it provides added features, but the management of that complexity is placed in the hands of information systems. Experience, however, demonstrates that these differences persist only as long as there are no meaningful differences in efficiency (Campos, 2001). If such a difference exists, the most efficient perspective will dominate. An example is given by the commercial evolution of both manufacturers. Until recent times, Airbus had a


clear advantage with a family of aircraft that shared the basic design of the Airbus A320. The cockpits of all Airbus airplanes after the A320 are almost impossible to tell apart. This design follows the 'cross-crew qualification' objective, that is, keeping to a minimum the training and adaptation time required to go from one Airbus airplane to another. It was even sought to maintain common type ratings for current airplanes of very different size but with identical cockpits. Both factors represent an important gain in operational efficiency through better usage of crews. Facing this, Boeing, especially after the takeover of McDonnell Douglas, had a diverse fleet in which all airplanes were different from each other, with the exception of the 757 and 767 models. The launch of the Boeing 777 marked the beginning of a move towards the Airbus model: in the Boeing 777, the North American firm applied the 'fly-by-wire' system to one of its commercial airplanes for the first time. Its announced model, the Boeing 787, is a second and possibly definitive step in the same direction. Boeing has announced its intention to manufacture this airplane in versions of different size and range. The Boeing 787, therefore, would become 'its' Airbus A320.

Technological factor

Technology represents the second of the risk factors. As well as being a risk factor, it is the principal depository of organizational learning in commercial aviation. Both facets can be observed by analysing the evolution of the technology of an airplane's principal systems:

1. Navigation systems
2. Propulsion systems
3. Auxiliary systems
4. Control and information systems.

Technological evolution in aviation has occurred in all of these fields, and all of them have undoubted relevance to the level of safety. The navigation systems are responsible for the exactness of the information regarding the position, both horizontal and vertical, of an airplane. This exactness is one of the principal sources of safety and one of the areas where technological advances have been most visible. In


other areas, the development of better materials and the inclusion of redundant parts or complete systems in the mechanical parts of the airplane have made failures, such as that of an engine, very uncommon. In addition, there is a set of systems relevant to safety – but which cannot be included in the previous paragraphs – which will be addressed under the heading of auxiliary systems. Lastly, information and control systems are difficult to address as a separate category: by their construction and objectives, they are superimposed on all the others because of their integrating function.

Navigation systems

Navigation systems, that is, the systems responsible for providing information about the position of the aircraft, have followed a development parallel to that of communication systems since, in large part, the technology used has had the same base. Instruments such as the direction finder (ADF6) or the omnidirectional radio beacon (VOR7) can be simply defined as radio receivers that indicate the direction of the transmitting station they are tuned into. If these receivers are accompanied by a map showing where the transmitters are located and the frequency assigned to each one, and supposing there are enough transmitters, we would have a complete navigation system. Some exceptional cases have occurred that could be considered comical were it not for their consequences. An important accident occurred in the Amazon jungle and was attributed to the use of the ADF to listen to commercial radio stations (Beaty, 1995) – after all, it is a radio receiver − and, additionally, to some confusion whereby a heading of 027 instead of 270 (north-northeast instead of west) was used. The use for other ends of the instrument that would have permitted a cross-check (the ADF) led the crew into a zone where radio aids are scarce; they became lost and eventually ran out of fuel. The precision of navigation systems has, for many years, been sufficient to guarantee that an airplane will not become lost save for error or gross carelessness − this precision, however, is not only invested in the improvement of safety but also in features.
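The way such a receiver is used can be reduced to a single relation (figures invented for the example): the needle gives a relative bearing measured from the airplane's nose, and adding it to the compass heading gives the bearing of the station,

$$\text{Bearing to station} = (\text{Magnetic heading} + \text{Relative bearing}) \bmod 360°,$$

so an airplane heading 090° whose needle points 30° to the right has the station on a bearing of 120°; drawn on the map from the station's known position, the reciprocal of that bearing (300°) gives the line on which the airplane must lie.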

6 Automatic Direction Finder.
7 Very High Frequency Omnidirectional Range.


The greater density of air traffic has been manageable thanks to current navigation systems and some specific alarms – for ground proximity (the ground proximity warning system, GPWS8) and for proximity to other airplanes (TCAS9) – which have been invaluable aids. The density of traffic makes knowledge of the exact position at all times a vital necessity, since the major risk is no longer becoming lost but colliding with another airplane or with the terrain. The expression CFIT10 designates one of the main causes of disasters, and in all cases it is due to failures in navigation, either through failure of the instruments or through their incorrect interpretation. As the expression suggests, CFIT refers to situations where, despite having no problem in maintaining control of the airplane, the crew flies it into the ground because they have erroneous information about its location. A study carried out by the British aviation authority (U.K. Civil Aviation Authority, 2000a) shows that 57 per cent of fatal accidents in airplanes of more than 5,700 kg (including jets and turboprops) are attributable to CFIT. Many of these accidents have occurred in airplanes equipped with the corresponding GPWS. The manufacturers have opted for an improved system (TAWS11), which, unlike the previous one, can also detect terrain ahead of the airplane (for example, the side of a mountain). The different generations of navigation systems are detailed below. Each one was born to resolve problems of the previous generation and so represents a form of acquisition of learning.

Navigation: First generation

The compass has been, next to the sextant, the first instrument of maritime and air navigation, and it introduced the problem of being usable alternatively as a safety tool or as a means of increasing operational abilities. No one doubts that the compass was an advancement for safety; however, from its beginnings – it was criticized by Thomas More − it was also an invitation to adventure, into situations that before its appearance would have been unthinkable because of their risk. Apart from possible deviations, the compass has always had problems that are aggravated by the speed of the vehicle guided by it. The use of the compass as the primary instrument for navigation in an airplane represents a safety issue because of the possible deviations from the planned route. The compass is subject to local differences in the earth's magnetism and points to magnetic rather than geographic north; the difference between the two – known as magnetic variation − is different for different parts of the earth.

8 Ground Proximity Warning System.
9 Traffic Collision Avoidance System.
10 Controlled Flight Into Terrain.
11 Terrain Awareness Warning System.


This is a trivial point on short flights, because the variation is a specific figure to add to or subtract from the heading on the compass to know the heading of the airplane over the surface. When a flight is long and passes through different geographic zones, the magnitude of this variation with respect to geographic north is not constant, and it makes the task of navigating more tiresome. Additionally, the compass indicates where the nose of the airplane is pointing, which, depending on the direction and strength of the wind, might not coincide exactly with the direction in which the airplane is moving.12 These problems were circumvented, while airplanes flew over land, by means of radio aids in the form of terrestrial transmissions and, when visibility permitted, by recognition of terrain features. Both possibilities disappear on transoceanic flights. The lack of surface stations on transoceanic flights was overcome over the years using astronomical navigation and the placement of ships equipped with radio aids in fixed geographic positions. In addition, radio aids were installed on islands along the way. The resulting situation was fragile: clouds could impede astronomical navigation, and strong winds could push the airplane beyond the reach of the scarce radio aids available on the surface. The first radio aids, mentioned at the beginning of the section and still in use today (especially in general aviation), were of the direction finder type. A transmitter situated on land transmitted on a specific frequency, and a receiver in the airplane, tuned to that frequency, indicated the direction in which the transmitter lay with respect to the airplane. Knowing the geographical location of the transmitter, whether a commercial radio station or an ad hoc transmitter, the instrument signals the direction in which it is to be found. This system, however, presents problems for its use as a primary navigation system. In the first place, the instrument gives an indication of direction but not of distance, so the actual position cannot be known, only the direction in which the aid is located with respect to the nose of the airplane. Secondly, it does not guarantee flight in a straight line: owing to the effect of wind, it could happen that, by scrupulously following the indicator, the plane would finally arrive at the destination but having traced a considerable curve – an especially serious problem when fuel is low or when flying without visibility between mountains. Lastly, the instrument has a special predilection for storms. If there is one nearby, the instrument will stop signalling the station in order to signal the storm, adding the risk inherent in the storm itself to the risk of the deviation.

12 This factor is known as ‘drift’ and also applies in maritime navigation. In popular terms, it could be said that the airplane flies slightly ‘askew’, but given the distances covered a minimum difference in degrees can represent an important deviation from the planned route.
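A worked illustration (figures invented for the example; conventions vary slightly between texts) may make the two corrections just described concrete. Using the usual convention in which westerly variation is added to, and easterly variation subtracted from, the true course, and correcting for wind by turning towards the side the wind comes from:

$$\text{Magnetic heading} = \text{True course} + \text{Variation}_{W}, \qquad \text{WCA} \approx \arcsin\!\left(\frac{\text{crosswind component}}{\text{true airspeed}}\right)$$

For a true course of 270°, 6° of westerly variation and a 30 kt crosswind from the right at a true airspeed of 450 kt, the heading to fly is approximately $270° + 6° + \arcsin(30/450) \approx 280°$.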


Navigation: Second generation

The difficulties of navigation by compass, astronomical navigation and direction finders led to the development of very high precision instruments specifically for air navigation. In this second generation, the radio beacon or VOR introduced improvements such as indications completely indifferent to wind – the reference is no longer where the nose points but the position of the whole airplane − and, later, knowledge of distances. Formally, the omnidirectional radio beacon is a VHF radio receiver and has a short range, due to the so-called line of sight, which is limited by the Earth's curvature. The radio waves transmitted on this frequency travel in a straight line, which means that if transmitter and receiver are situated on land or at low height, the shape of the Earth represents an obstacle to communication. This problem is not very important in commercial aviation – at least in flights over land − since the altitude at which it operates permits airplanes to receive transmitters that are relatively far away. As opposed to the direction finder and the compass, it allows an exact course to be traced, although its handling complexity is greater. With respect to the first generation and flights over land, the omnidirectional radio beacon permitted tracing an exact route over the map. The initial models did not provide information about how far the airplane was from the transmitter. Establishing the position therefore required, given the technological development of the time, a procedure known as triangulation. Airplanes were equipped with two identical VOR instruments. By tuning each one to a different frequency, two vectors could be traced, one with respect to each of the stations. The point where both vectors cross represents the exact location of the airplane. Later, a device called DME13 was added to the radio beacon. This gave distance information and enabled, with the use of a map, the determination of position without the need for triangulation. A second development over the base of the omnidirectional radio beacon was the instrument landing system, known by its initials ILS.14 This system was developed specifically to facilitate landings with low visibility and, as well as providing heading information, it also provides information to maintain the correct altitude during the approach phase. With the help of this instrument a new ability was consolidated: flight without visibility had already been achieved, and with the precision of the VOR, together with the indicators of speed and height and instruments like the artificial horizon, directional gyro15 or turn indicator, it started to be a safe practice. The introduction of the ILS allowed the conditions of flight without visibility to be maintained until the airplane is very close to the ground and ready to land.

13 Distance Measuring Equipment.
14 Instrument Landing System.
15 In simple terms, it can be considered an evolution of the compass, not subject to the effects of terrestrial magnetism.
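The triangulation procedure just described can be sketched in a few lines of code (a flat-earth simplification with invented coordinates, for illustration only):

import math

def position_fix(station1, bearing1, station2, bearing2):
    """Each measured bearing places the airplane on a line through the station,
    in the direction opposite to that bearing; the fix is where the two lines cross.
    Coordinates are on a local flat grid in km; bearings in degrees clockwise from north."""
    def back_direction(bearing_deg):
        rad = math.radians(bearing_deg)
        return (-math.sin(rad), -math.cos(rad))   # unit vector from station towards airplane

    d1, d2 = back_direction(bearing1), back_direction(bearing2)
    (x1, y1), (x2, y2) = station1, station2
    # Solve station1 + t1*d1 = station2 + t2*d2 for t1 (2x2 system, Cramer's rule).
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    t1 = ((x2 - x1) * (-d2[1]) - (-d2[0]) * (y2 - y1)) / det
    return (x1 + t1 * d1[0], y1 + t1 * d1[1])

# An airplane that measures 045 degrees to one station and 315 degrees to another:
print(position_fix((100.0, 100.0), 45.0, (0.0, 100.0), 315.0))   # -> (50.0, 50.0)

The same calculation, performed graphically on a chart from two VOR indications, was the everyday way of fixing position before DME and inertial systems made it unnecessary.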


Since then, systems for military applications or designed for especially complex landing approaches have appeared that have served to refine this acquired ability. In these systems, the presence of obstacles that prevent a straight line being followed in the final phases of the approach is not a problem for the instrument's indications. Hong Kong airport, closed in 1998, was especially difficult due to a lack of space, which forced an approach along a curved trajectory. To solve this problem the system known as MLS was applied, which permits approaches with these characteristics. Faced with the same situation on one of the runways of JFK airport in New York, a low-technology solution was chosen, consisting of placing lights on the ground to mark out the – also curved − approach trajectory. One problem persisted, however: the low range of the radio beacon, even allowing for the fact that altitude improves range, left the problem of flight over oceans in its initial terms. Once out of reach of coastal stations, the system became inoperative. There was therefore not sufficient ability to tackle flight over water with safety. It is true that, as the speed and altitude of airplanes increased, the distance at which they could receive transmissions became greater and the impact of meteorological factors diminished. However, not even flying more than 10 km above the ground can coastal stations be received when crossing thousands of kilometres of water. In these conditions, flight was once again confined to the compass and astronomical navigation. The consequence is that the information regarding the position and heading of the airplane may not be completely exact.

Navigation: Third generation

The appearance of inertial navigation systems eliminated the limitations of the second-generation systems in flight over water or over land without transmission stations. Inertial navigation systems are completely autonomous: once an initial position is introduced, they process changes in heading or flight level, accelerations and decelerations to determine the current position. In this way, if the initial input of data is correct, the system can signal with great precision where the airplane is located. Inertial navigation systems are based on the same property that the Foucault pendulum illustrates, that of maintaining its plane of motion autonomously and independently of terrestrial movement; this special feature is what makes them ideal for correctly maintaining direction in the absence of external reference points. As successive technological generations of inertial navigation systems have appeared, their level of precision has increased, although the very concept on which the system is based introduces its own source of risk: an error in the initial loading of information can have grave consequences if this information is not verified by other means.
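A toy dead-reckoning loop makes the dependence on the initial position visible (one dimension and invented figures, purely for illustration; real systems integrate gyro and accelerometer data and apply filtering):

# Toy one-dimensional inertial dead reckoning: integrate acceleration twice to obtain position.
def dead_reckon(start_position, accelerations, dt=1.0):
    position, velocity = start_position, 0.0
    track = []
    for a in accelerations:
        velocity += a * dt          # integrate acceleration into velocity
        position += velocity * dt   # integrate velocity into position
        track.append(position)
    return track

profile = [2.0] * 10 + [0.0] * 50 + [-2.0] * 10   # accelerate, cruise, decelerate (m/s^2)

true_track = dead_reckon(0.0, profile)        # correct initial position
biased_track = dead_reckon(5000.0, profile)   # initial position loaded 5 km off

# The 5 km error never shrinks: it reappears, unchanged, at every later position estimate.
print(biased_track[-1] - true_track[-1])      # -> 5000.0

Whatever error is present at the start is carried, unchanged, into every later estimate – which is why the initial data must be verified by independent means.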


The accident at Mount Erebus in the Antarctic in 1979 took place on a tourist flight that left New Zealand in a DC-10 airplane to show passengers the Antarctic from the air. The accident was caused by an initial error in position and by the deficient training of the crew in recognizing the visual effects of flight over polar zones. Visibility was poor, which did not excessively worry the pilot, who, believing he knew where he was, descended to continue the flight below the level of the clouds. The initial error and the loss of visual discrimination, a common effect in polar latitudes, led the airplane onto the slope of Mount Erebus, ending in the deaths of all on board.

The inertial navigation system has remained the most used on long flights because of its reliability and the growing precision of the devices on which it depends. From this point, subsequent advances in the area of inertial navigation have taken place in systems integration, allowing cross-checks of information from different sources to be carried out and making the task of navigation easier as well. Given that the problem of flight over great expanses of water without transmitters persists, so does the need to check the information received from the system; lacking external references, redundancy is established by installing several inertial navigation systems. In addition, the appearance of satellite navigation, via GPS16 and the future Galileo system, provides such external references. Their practically universal coverage would make an accident like that of Mount Erebus impossible in an airplane fitted with such a system.

Navigation: Current situation

The potential of satellite navigation would justify its treatment as a new generation. However, the levels of precision already present in the third generation are sufficient for further increases in precision to have marginal effects, at least in its current use, with the noted proviso about the necessity of introducing a correct initial position. Increases in the precision of navigation systems have invited the introduction of new functional requirements on the system. Hence air traffic regulations currently permit airplanes to reduce their vertical separation to 1,000 feet in areas where adequate support is available on the ground. To have a better idea of the significance of 1,000 feet in this context, it is sufficient to know that the wingspan of large airplanes reaches 240 feet.

16 Global Positioning System.


Despite this, it is considered that the current precision levels are sufficient to guarantee the absence of risk, save for a grave technical or human failure.17 Six new flight levels have been added since 24 January 2002, limited to airplanes equipped with technology sufficiently precise to permit the smaller separation; airplanes not equipped with this technology are forced to fly below 29,000 feet. This is, by itself, a form of pressure towards technological updating, given that, according to EUROCONTROL, to reach an optimal level of cost-effectiveness, flights should be performed at 35,000 feet or above (New Economist, 27 January 2002). Satellite navigation via the currently functioning system, GPS, represents a technological advance that provides even greater precision, without being dependent on an initial position as inertial systems are. Nevertheless, this system is not being used to its full extent, for political reasons derived from its military origins and its control by the United States. A situation where the superior precision of satellite navigation could justify its consideration as a new generation is its use for approaches without visibility to airports lacking technological equipment. Large airports have at their disposal technological aids that, jointly with the specific training of crews, permit airplanes to land in practically any meteorological condition. The situation changes at airports with less technological equipment: there, airplanes can be forced to perform complex manoeuvres that are not free of risk or, alternatively, to cancel flights. Currently, owing to the lack of complete instrument landing infrastructure at some airports, there are still manoeuvres that involve risk because of low visibility and closeness to the ground, such as the one called 'circling'. This manoeuvre is performed to make up for a lack of landing-aid technology at an airport. If a runway has landing aids at one of its ends, but the wind is strong and requires the use of the opposite end, the 'circling' manoeuvre consists of approaching the equipped end as if the airplane were going to land, then, once below the cloud level, circling the runway without losing sight of it at any time and landing visually from the opposite end.

17 The collision of a DHL cargo airplane and a Bashkirian passenger airplane in 2002 is not attributable to this factor but rather to a grave failure in the traffic control services. Both airplanes were equipped with TCAS and, as such, each was aware of the presence of the other, and it was an erroneous instruction that caused the collision.


Considering that a Boeing 747 has a wingspan of close to 70 m, it is easy to perceive the danger that surrounds the manoeuvre: the cloud ceiling in bad weather can be considerably low, and carrying out 'circling' requires four turns of the airplane close to the ground. A turn with a 30º bank would put the wing tip on the inside of the turn about 18 metres below the longitudinal axis of the airplane. If the slow reaction times of turbine engines are added, there is barely enough margin for recovery if the height is misjudged. It could be thought that the risk might be less if, disregarding the wind, the landing were made directly onto the end of the runway with the aids installed; however, this is not the case. An airplane flies by maintaining its speed relative to the air that surrounds it, and the speed relative to the ground is relevant only around take off and landing, and for the calculation of fuel consumption and flight time. If an airplane has to land at 200 km/h with respect to the air that surrounds it, it will have to display that speed on its indicator. However, if the wind pushes it along at 50 km/h, the airplane will be moving at 250 km/h with respect to the ground (its indicated 200 km/h plus the 50 km/h contributed by the mass of air surrounding the airplane). This means having to slow an airplane that could weigh in excess of 200 tonnes down from a speed of 250 km/h at the moment of contact with the ground. If the situation is the opposite and a wind of the same intensity comes from the front, an indicated speed of 200 km/h corresponds to a speed over the ground of only 150 km/h, and at the moment of landing this is the speed from which it would be necessary to slow the airplane down. This difference, when the wind is strong, justifies assuming the risk of the manoeuvre described in order to escape a greater risk: the heightened probability of not having a long enough runway to stop the airplane. These types of risk could be avoided using current technology; however, the reasons already highlighted prevent the use of the full potential of satellite navigation.
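The two figures quoted above follow from elementary arithmetic. For a half-wingspan of about 35 m and a bank angle of 30°,

$$\text{wing-tip drop} \approx \frac{b}{2}\sin\phi = 35 \times \sin 30° \approx 18\ \text{m},$$

and, since ground speed is airspeed plus or minus the wind component along the runway,

$$200 + 50 = 250\ \text{km/h (tailwind)}, \qquad 200 - 50 = 150\ \text{km/h (headwind)},$$

which means that the kinetic energy to be dissipated on the runway differs by a factor of roughly $(250/150)^2 \approx 2.8$.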


Current sources of risk derived from navigation

As happened with inertial systems, however, the increases in precision bring with them new risks, derived from electronic perturbations and from the very precision that is required of the systems, as well as from a more careless attitude on the part of both crew and passengers – the latter constituting a new source of risk. The increases in precision achieved in navigation were used to reach greater levels of safety; however, a point was reached where additional improvements in precision did not improve safety. In fact, a reduction in positional error from 50 m to 10 m on a flight of several thousand kilometres does not contribute additional safety, unless conditions are generated that make that increase in precision necessary. These conditions have been generated by the search for greater efficiency which, with the exception pointed out regarding the GPS system, has led to the performance of actions that would not have been within the reach of systems less precise than the current ones; the mentioned reduction of separation between airplanes is a clear example. With the objective of guaranteeing this greater precision, now necessary, sophisticated electronic equipment is introduced into the airplane which is sensitive to perturbations coming from very popular devices such as portable computers, mobile telephones, games consoles, personal music players, and so on. This fact has opened up a field of technological risk with a new component: the risk is introduced by the passengers through the execution of an action that, in other environments, is innocuous and integrated into their normal conduct. The passenger, then, is not conscious of committing a grave transgression through habitual acts such as having the mobile telephone turned on. Publications have recently started to appear that detail the type of danger associated with the use of the mobile phone, the most affected instruments, and even the suspicion that some accidents may have been caused by this interference. A report by the Indian aeronautical authority highlights that 'amongst the most typical examples is the spontaneous disconnection of the automatic pilot 400 feet from the ground during an automatic landing or a turn to the right, also spontaneous, whilst the airplane, just airborne, was still over the runway, not obeying the pilot's command in the opposite direction until the disconnection of a passenger's mobile phone'. Similar situations – sharp spontaneous turns − have been reported on Australian airplanes; losses of instrument signals coinciding with the use of mobile phones have been reported on different occasions, and messages from the cockpit


asking passengers to check their mobile phones because interference is being received are beginning to be a habitual experience. A report by the British civil aviation authority signals that the use of a mobile telephone in an airplane generates an intentional transmission with a power level capable of interfering with and affecting multiple airplane systems. The failures due to those interferences, even over short periods, can lead to:

• false danger warnings
• an increase in workload for the crew and the possibility of requiring emergency action
• a reduction in the crew's confidence in protection systems, which might then be ignored in the case of a real alarm
• distraction of the crew from their normal duties
• noise in the crew's headsets, and
• failures not visible in the safety systems, with loss of protection.

The attempt to normalize as much as possible life aboard an airplane, based on the objective of not worrying the passengers, is not compatible with the need to prohibit actions considered normal by the passengers18 in other environments. The air operators, in this case, make the passengers aware of a prohibition, but the absence of an explanation of its importance and the lack of emphasis on adherence to it can easily lead the passenger to consider the matter as something of minor importance. As in other areas, the technological development of the navigation systems has led to some abilities, once necessary, becoming obsolete. Others that previously required the presence of a specialized crew member (Baberg, 2001) have become secondary and have been converted into a supervisory task. Apart from the flights of the pioneers of aviation, the so-called 'Operation Bolero', which meant the first massive transatlantic flight of B-17 airplanes during

18 An action similar to the one produced by electronic devices is the relative prohibition of smoking in toilets. A fire started in a toilet has already resulted in an accident with the complete destruction of one airplane (Macarthur, 1996). Toilets are equipped with smoke alarms but some passengers manage to avoid setting them off, something known to all crews but without any operator taking more radical action to achieve adherence to the rule. Nevertheless, in some countries, the destruction or disconnection of this type of alarm is considered a felony and passengers are informed accordingly.


the Second World War (Tibbets, 1989), was carried out using astronomical navigation, a modality that remained in use until the appearance of the inertial systems that left it in disuse. The same could be said of the practice of triangulation using radio aids in commercial aviation. Technology in commercial aviation, both on board and in ground support, is sufficiently evolved to make this a marginal practice, even though it is still used in general aviation. The increases in precision in navigation have not only improved safety but have also introduced the conduct of passengers as a potential risk. The role of the crew has also changed, and different attitudes are required of them from those necessary in the initial stages of aviation. Organization, precision in the execution of procedures and knowledge of the operations prepared for each situation, all characteristics of technical positions, are highly valued and foreign to the profiles of the pioneers of aviation described by Bach (1985), Vázquez-Figueroa (1999), Saint-Exupéry (2000) or Wolfe (1981). A paragraph from Saint-Exupéry19 can show us the changes in navigation and in the requirements of its operators:

I remember also another of those moments in which the limits of the real world are exceeded: the direction finder positions transmitted by the Saharan stations had been erroneous during all of that night and had seriously disoriented the radiotelegraphist as much as myself. When, having seen the brilliance of the water through a break in the bottom of the mist, I turned sharply toward the coast, we couldn't know for how long we had been going deep into the high seas.

In the past, flight signified the direct execution of tasks with little information, provided by unreliable technology and, as a result, requiring a great capacity for improvisation. Today, it is above all a supervisory task, and attention to contingencies might not be well received by persons trained according to the old necessities. This evolution may underlie some involuntary omissions in the task of supervision. The confidence of the crew in the levels of precision and reliability achieved by navigation systems can give rise to new risks derived from that very confidence in the system (Simmon, 1998). Some authors call this phenomenon self-complacency. Dekker (2005) made an excellent analysis of its causes, concluding that people are not well equipped for surveillance tasks. As a consequence, rather than blame the operator, it would be necessary to review the design of systems in which the distribution of tasks between person and technology may not be adequate. Accidents like that of American Airlines in Cali point to, as their basic cause, an excessive confidence

19 Translation of the Spanish publication.


in the system, derived from routine.20 Minutes before the impact the captain asks for the airplane's position, 'just out of curiosity', and the co-pilot responds, 'I had the flight plan around here somewhere.' There was an error in the selection of a radio aid. This error occurred thanks to a help system that avoids the manual search on a map: the system supplied the radio frequency when the initials of the place being searched for were entered (much like the phone directory in a mobile phone), and there were two places (Rozo and Romeo) that began with the same initial. All this, in addition to a last-minute change in the descent path to Cali, was sufficient to cause the disaster. Operators, in an environment where they trust the technology completely, can invert their habitual way of acting. An alarm intended to prevent something being forgotten can come to be used as a prompt: the required action is carried out only when the alarm is activated. In these conditions, the warning system, initially conceived as a secondary system, comes to be used as a primary one. The type of experience is the same as that of any car driver. Warning lights for oil level or brakes invite the habitual driver not to check these parameters and to do so only when the corresponding warning light comes on; a failure of a warning light can then have grave consequences for the machine. In summary, navigation systems have equipped commercial aviation with new abilities as they have increased their precision, and these abilities can be applied in a growing number of situations. The precision achieved by the most advanced systems no longer seemed to have an impact on safety. However, the search for ways to increase efficiency made the new levels of precision achieved necessary. In this process, new risks have been introduced, derived both from the lower error tolerance of a more efficient system and from the conduct of passengers and crew, who are reduced to a more passive position than in less evolved phases.

Propulsion systems

Propulsion systems – engines − represent another component of technological risk. The importance of this factor is such that, as well as its impact on safety, without its


contribution, commercial aviation would not exist. Although aviation, in times now long gone, began its development without the use of mechanical propulsion, the use of aviation as a mode of transport is inseparable from the existence of engines. The evolution of engines has materialized in three different dimensions, which will be treated under separate headings: reliability, power and economy of use. Additional dimensions such as the reduction of noise or pollution have not been dealt with, as they have only an indirect relation to safety. The reduction in pollution stems from a reduction in fuel consumption, which is a collateral benefit of economy of use. The reduction in noise can mean the imposition of limits on height or power for overflying certain zones, which, save in exceptional situations, does not affect safety in itself but only through restrictions on navigation. An exceptional case in this respect, known among pilots of heavy aircraft, is the take off in Mexico City, where the strict prohibition on overflying the presidential residence, along with the elevation of the airport and heavily loaded transoceanic airplanes, forces flying closer than desirable to a mountainous zone.

Reliability

This represents one of the dimensions where the advances since the first engines have been most visible. Initially, the technology of aviation engines was very similar to that used in the motor industry, and although this is still the case in general aviation, the types of engines used in commercial aviation are radically different. These engines have achieved such levels of reliability that figures such as 0.2 failures per 1,000 hours of operation (Boeing Commercial Airplanes Group, 2002b) are becoming familiar. In fact, in 1994 the FAA (Federal Aviation Administration) had already reported that the reliability of engines, both for ETOPS21 (twin-engine long-range) airplanes and for non-ETOPS airplanes, had improved substantially in the preceding years. The world average for almost all jet airplanes of North American manufacture reaches or exceeds the requirement established for ETOPS airplanes of 0.2 failures per 1,000 hours of flight; this average also applies to airplanes not certified under ETOPS22 regulations and to four-engine airplanes.

21 Extended Time Operations. Despite the acronym referring only to the extended flight time, airplanes subject to these regulations are, invariably, twin-engine aircraft.
22 It should be pointed out that ETOPS regulations are especially demanding, given that they apply to airplanes equipped with only two engines on transoceanic flights.
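To put the quoted figure into perspective, a deliberately simplified calculation can be made. Treating engine failures as independent events at a rate of $\lambda = 0.2$ per 1,000 hours, the probability of losing one engine during a 10-hour flight is about $\lambda t = 2 \times 10^{-3}$, and the probability of losing both, under the independence assumption, is of the order of

$$(\lambda t)^2 = (2 \times 10^{-3})^2 = 4 \times 10^{-6},$$

that is, a few occurrences per million flights. Real ETOPS analyses are considerably more elaborate – the second failure only matters during the diversion that follows the first, and common causes such as fuel contamination or maintenance errors break the independence assumption – which is precisely why the measures described in the next paragraph exist.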


The reliability of engines is a key element of safety. However, the improvement in reliability has also been used here to obtain improvements in efficiency. In this way, the familiar configuration of the first jets, with four engines for transoceanic flights, has been replaced by the now widespread configuration of twin-engine airplanes. The gain in efficiency, drawing on the gain in reliability, has introduced a new risk factor: the consequences of an engine failure in a twin-engine airplane are, logically, more serious than those of an engine failure in a four-engine airplane. This new risk has required the adoption of new safety measures aimed at preventing the eventuality of a double engine failure. So, strict measures are taken as far as maintenance checks are concerned, avoiding both engines being checked in the same operation and by the same team. In addition, the critical elements are redundant, and devices have been introduced to prevent the loss of electrical and hydraulic power in the eventuality of a double failure. Lastly, this practice is sustained by a continuous statistical analysis of the failures that have occurred and – although the subject might seem irrelevant in this context – by the great increase in the power of engines.

Power

The technological change from piston engines to turbines signified an increase not only in reliability but also in power. This increase in power had, in turn, two applications: an increase in the speed of commercial airplanes and the possibility of flying at greater altitude. In their time, these factors were the object of a commercial race between the principal manufacturers. The increase in speed and altitude does not rest only on improvements in power but also on the construction of improved aerodynamic structures. The importance of this factor would be put to the test with the construction of the Comet, already mentioned, and the tragic results caused by the use of structures whose strength had been calculated with propeller-driven airplanes in mind, flying at lower altitude and speed. The explosions in flight of the Comet airplanes gave rise to a long judicial process in which the manufacturer – de Havilland − was not held responsible, given that, with the engineering knowledge of the time, there could not have been any indication of what would occur. Since the appearance of large jets, nevertheless, there have not been any spectacular advances in speed, due to the fact that the sound barrier represents, at this time, more of


a commercial and environmental23 barrier than a technological one, and airplanes have flown at speeds close to Mach 0.85 for more than 30 years.24 The Concorde – the only representative of the supersonic passenger airplane – and its disappearance, together with the fact that 25 years later no other models have been produced to compete in this market, attest to the lack of interest in it.25 In the previous point, relating to reliability, the importance of power was briefly mentioned with regard to safety, and how the increases in power had made possible the appearance of large, long-range twin-engine airplanes. The link between the two concepts is the following: safety regulations require the power of an airplane equipped with two engines to be much greater than the obvious requirement, that is, the power necessary to allow the airplane to fly. Any commercial multi-engine airplane is subject to the requirement that, in case of an engine failure after the take off decision26 speed, there should be enough power available to take off. In fact, the take off decision speed is known as the critical engine failure recognition speed. For an airplane equipped with four engines, this requirement implies that it must be capable of taking off with 75 per cent of its nominal power. An airplane equipped with two engines must do so with 50 per cent of its nominal power. Therefore, under normal conditions, a twin-engine airplane will have more excess power. The point about 'nominal power' is not trivial because, if all of an airplane's engines function correctly – regardless of their number – the plane is propelled in a straight line. If one of them fails and it is not situated on the longitudinal axis of the airplane, the thrust is asymmetric; that is, the airplane will tend to deviate from the straight line towards the side of the failed engine, and consequently part of the available power must

23 Sonic booms and flight in zones of the stratosphere that can affect the ozone layer more than other flights.
24 In fact, before the appearance of the Concorde, the title of 'fastest commercial jet in the world' was held by the Convair Coronado, in extensive use by the Spanish firm Spantax, which flew at Mach 0.91. Despite this, it was withdrawn from service due to its high running costs.
25 There are experimental designs and, during 2002, one of these designs, of Japanese manufacture, was destroyed in an accident; at a very early stage are radically different designs aimed at reaching speeds of Mach 6−8, which could completely alter the described scenario.
26 The decision speed, also known as V1, depends on the length of the runway, the atmospheric pressure and the load and features of the airplane, and is the speed after which, whatever the contingency, including an engine failure, there is not enough runway left to stop the airplane, which therefore must take off. This speed, as can be deduced from the factors that define it, is not constant for an airplane but must be calculated for every flight.


be sacrificed with the objective of maintaining straight flight. To do this, aerodynamic drag is introduced on the side of the airplane that has more thrust, further reducing the power available after the engine failure. This effect is even more prominent in twin-engine airplanes since, despite the engines being situated as close to the longitudinal axis as possible (they can be seen to be very close to the fuselage), the asymmetric effect will normally be greater than in a four-engine airplane in which one engine fails. The practical consequence is that the excess power they need is even greater.

Economy of utilization

The mention of economy of utilization carries with it the idea of efficiency which, as was seen in Chapter 1, is a necessity of commercial aviation – and which in some fields can exert pressures contrary to the increase of safety. Technological changes have been an important source of economy of utilization via, basically, a reduction in fuel consumption, a reduction in the number of engines and an increase in speed with respect to the earlier propeller-driven airplanes. An aspect that should be mentioned, because of its collateral relation to safety in flight, is the fact that these sources of improvement have rested on the possibility of experimenting at low cost. The capacity to create multiple designs by simulating them on computers, instead of physically constructing them to test their behaviour, has been a source of innovation and economy. Thus, one of the benefits achieved with the introduction of new designs is the reduction in fuel consumption. This is not an achievement exclusive to aviation but is common to all types of internal-combustion engine. Nevertheless, in commercial aviation fuel is one of the main operating costs, and because of this it is an important focus of competition for manufacturers and airline companies. The search for more efficient structures and engines, however, has led to improvements in manoeuvrability and economy at the cost of stability. Once again, the requirement for greater efficiency creates a safety issue and demands the introduction of new measures that guarantee safety and which were not necessary before. The stall in an airplane – the moment at which it stops behaving like an aerodynamic structure and begins to drop due to the force of gravity − is a situation practised as part of flight training and which, unless the airplane is near the ground, does not have major relevance other than the corresponding startle if the situation is unexpected.
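How narrow the margin above the stall can become is shown by the standard lift relation: in flight the wing must produce lift equal to the weight multiplied by the load factor $n$ imposed by turbulence or manoeuvring, so the stall speed rises with both weight and load factor,

$$V_{stall} = \sqrt{\frac{2\,n\,W}{\rho\,S\,C_{L,max}}} = V_{stall,1g}\,\sqrt{n}.$$

A gust that momentarily imposes $n = 1.3$ raises the stall speed by about 14 per cent ($\sqrt{1.3} \approx 1.14$), and the thinner air at a higher flight level (smaller $\rho$) raises it further; a heavily loaded airplane that has just accepted a higher level may thus be cruising only a few knots above it.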


Some pilots have found themselves in a stall in an airplane loaded with passengers27 owing to a combination of turbulence, a small margin above the stall speed and an automatic pilot programmed to keep a predetermined flight level. This type of situation, however, almost corrects itself, unless some complication arises from proximity to other airplanes or from an incorrect action: once the airplane has stopped behaving like an aerodynamic structure, its weight distribution will lead to the nose dropping and the airplane begins to fall; upon doing so, it regains lift and control. Current structures, which are more efficient, have a centre of gravity further to the rear, and their behaviour in the stall condition is not as clear-cut as that described. In the worst case, they could fall in a nose-high attitude that would prevent the airplane gaining speed and recovering from the stall. To avoid this eventuality, a set of technological aids is introduced aimed at guaranteeing that it is virtually impossible to place the airplane in a stall condition, even though its structure invites this to happen. Another important factor in what is referred to as economy of utilization, also of great importance for safety, relates to the reduction in the number of engines already discussed. This reduction represents, of itself, a reduction in maintenance costs, since it is obviously less expensive to maintain two engines than four, even considering that the control requirements for long-range twin-engine airplanes are higher. Despite these requirements, learning effects are generated; for example, the requirement that the two engines not be checked at the same time by the same team can be complied with in a way that incurs no additional costs: Boeing Commercial Airplanes Group (2002b) recommends that two teams work in parallel, one on each engine, and that once the work is completed they change over, so that each team checks the other's work.

27 Personal communication from the pilot involved in the incident. In conditions of small safety margin over the stall speed, due to having accepted a higher flight level with an airplane still heavily loaded, the turbulence caused a descent in the airplane and the automatic pilot responded by trying to recover the height lost, producing the stall as a result of the loss of speed.


In this way, they reduce the downtime of the airplane, making it comparable to a four-engine airplane that is not subject to this requirement. A less evident reduction in costs comes from the greater power available in twin-engine airplanes. For the safety reasons already highlighted, the airplane has to be able to take off with 50 per cent of its nominal power if one of the two engines fails after the decision speed is reached. Additionally, if one of the engines fails in cruising flight, the remaining engine must be able to maintain the airplane in flight for up to a maximum of 210 minutes without exceeding the normal operating parameters for temperature, pressures, and so on. These safety-linked requirements represent a notable excess of power in twin-engine airplanes, an excess that in normal operating conditions is used to operate more economically. Even at full load, these airplanes can climb quickly to cruising altitudes where air resistance is lower and consequently reduce fuel consumption. The advantage is partially offset by the oversizing of the engines with respect to their four-engine competitors and by the additional weight of structural reinforcements included in expectation that the airplane might have to fly with asymmetric thrust after an engine failure. Despite this, their situation is, in the majority of cases, more favourable than that of airplanes with four engines and lower requirements for excess power. The latter, in the worst case, are expected to be able to take off with 75 per cent of their power and, on longer flights, are forced to climb in stages as fuel is consumed and weight reduced. Only on very long flights, assuming a comparable28 technological level, does their lower excess power pay off in terms of fuel consumption and associated operating costs. The increase in speed is, like technology and care in utilization, another factor in cost reduction. In flight, speed does not depend only on the power of the engines but also on the structures on which they are mounted. Nevertheless, this point is included in the section on engines because the majority of engine elements are overhauled according to the hours flown. Other factors, such as the cost of the crew and the number of hours of work it takes to move the airplane from one point to another, are sufficiently obvious not to require further explanation. In any case, given the stagnation of speed for more than 30 years, this factor would only be relevant as a comparative element between jets and propeller-driven airplanes or, eventually, with new and faster models that might come onto the market. There has been some attempt to advance in the field of speed, such as Boeing's abandoned 'Sonic Cruiser' project, with the objective of flying at Mach 0.95. On the other hand, supersonic aviation, apart from the current experimental projects, does not


alter the situation quantitatively and, as already mentioned, remains a marginal phenomenon. More ambitious projects, as far as changes in speed are concerned – such as the hypersonic airplane, the X-43, promoted by NASA and capable of flying at between 7 and 10 times the speed of sound (NASA, 2001) – are still in an experimental phase and therefore fall outside this analysis. There is, as well, a good reason why there have not been great efforts to advance in speed: the current congestion of large airports implies that the most effective way to gain speed is to be found not in the air nor in new engine technology, but on the ground. Lastly, one aspect of great importance in engines is their role, aside from propulsion itself, as generators of energy, both hydraulic, for the handling of the airplane, and electric, for the functioning of all the onboard equipment. This problem has habitually been resolved via multiple redundancies, although experience has shown on some occasions that this approach might not be sufficient. Airplanes with two engines, in addition, include devices independent of the engines that provide enough energy in the case of failure of both engines. Two known cases in large airplanes – incidentally, neither was a twin-engine − were the accidents of a United DC-10 and of a JAL Boeing 747. In both cases, an explosion in the tail section of the airplane – in one case the explosion of an engine and in the other the rupture of a section of the pressure cabin − destroyed the hydraulic lines. Despite the multiple redundancies, there are inevitably zones in the airplane where the conduits of the different systems run close to each other. If one of these zones is damaged, the result can be – as it actually was in these cases − a complete loss of hydraulic fluid and of the possibility of controlling the aircraft.

28 As occurs with automobiles, comparisons between airplanes of different technological generations cannot be made, as the differences are due to the generational jump rather than to the number of engines.


Auxiliary systems

Auxiliary systems represent the set of elements present in the construction of an airplane which, without forming part of its basic systems, could represent a source of risk. The principal systems from the risk point of view are the electrical, hydraulic and pressurization systems. The list could be expanded almost indefinitely to include the fuel-feeding system and the whole set of systems that ensure that the conditions inside the airplane permit some degree of comfort. Nevertheless, for the purposes of illustrating the impact of the auxiliary systems, those proposed are sufficient to understand the learning dynamic, and expanding the list would not contribute new arguments.

The character of the three systems as risk factors is visible even at a first approximation. A spark or fire of electrical origin is, on its own, sufficient to cause a disaster. The same can be said of the hydraulic systems, as they guarantee control, and of the pressurization systems, as they maintain survivable conditions in the cabin.

The auxiliary devices have one difference with respect to the sources of technological risk seen until now: in the previous sources of risk, navigation and propulsion, an apparently useless development invited the search for a use and so introduced a new risk factor that had to be managed. Here it is the auxiliary systems themselves that introduce the risk. A failure of an engine or in navigation can be seen from the earliest stages of commercial aviation; however, the events that illustrate the importance of auxiliary systems could not have happened in earlier times, since they refer to devices that were installed at a relatively late stage. Various occasions where the auxiliary systems have played a central role are shown in the box below. The auxiliary system involved in each one appears in italics.

TWA-800

The case (known as TWA-800), mentioned as an illustration of the idea of complexity, gives an idea of the impact of auxiliary systems on safety. In 1996, a Boeing 747 exploded in the air due to two factors, both coming from auxiliary equipment. A particularly hot day and a delay in the flight forced the air conditioning motor, situated near an empty fuel tank, to run for some time. Hydrocarbon vapours are more inflammable than the liquid form of the same hydrocarbons and created a highly explosive atmosphere in the empty fuel tank. A spark from a short circuit in an instrument ignited the vapours and caused the explosion of the airplane.

United-232

The case of United-232, like that of JAL (already mentioned), involved the total loss of the hydraulic systems and, consequently, it was impossible to control


the airplane, due to not having enough strength to handle the controls of a large airplane.

De Havilland Comet

The first commercial jets ever built (the de Havilland Comet) exploded in the air for – at the time – unknown reasons. A sophisticated experimental design showed that the airplane's fuselage, after several cycles of pressurization and depressurization, was not able to resist the pressure differences between the pressurized cabin and the exterior of the airplane.

Saudi-163

The inadequate handling of pressurization, and an automatic device that impedes the opening of the doors while the interior and exterior pressures are not equalized, prevented the evacuation of Saudi Arabian 163 on 19 August 1980. A fire on board forced the airplane to turn back. However, as well as the slowness and lack of preparation for an evacuation, the pressure setting was left at sea level rather than at the elevation of Riyadh airport where they had landed, and this created a difference in pressure between the interior and exterior of the airplane. This pressure difference prevented the opening of the doors of the burning airplane, which was completely destroyed on the ground in the presence of the emergency crews, who could do nothing to rescue the occupants.

Payne Stewart's Learjet

Even in a case where the loss of pressure is not explosive and so does not result in the destruction of the airplane, it does mean that the atmosphere in the airplane becomes unbreathable, which is the motive for the installation of oxygen masks. Nevertheless, there have been cases – the best known being the Learjet that carried golfer Payne Stewart in 1999 − in which the lack of breathable air led to the loss of consciousness of the occupants. And because the automatic pilot maintained the airplane at an altitude where the air was unbreathable, the occupants were asphyxiated and the airplane continued its flight until it ran out of fuel.

Some devices and auxiliary systems are destined never to be used, but in an extreme situation they represent the difference between an emergency and a mortal accident: for example, the engine extinguishers, the onboard reserve oxygen and its corresponding controls, the manual systems for lowering the landing gear, controls that permit the dumping of fuel, redundant systems and all types of indicators imaginable.


This set of systems – not used in a flight without incident but fundamental in a critical situation – has given rise to crowded, multicoloured cockpits where every sound or luminous indicator has to compete with others to get attention. As Beaty (1995) indicates, some types of airplane are equipped with so many audible warnings that their correct operation almost requires musical training.29 As well as failures that, like the one seen, can interact with other systems in the airplane and cause a disaster, the possibility of forgetting something has led to laborious check-lists and to help from the information systems.

Unlike what occurs with the systems previously analysed – navigation and propulsion − there is no appreciable specific course of action; instead, modifications are made in response to specific events. The possible loss of the hydraulic systems has been solved in some models via the installation of an alternative system based on the differential use of the power of the engines. Structures are revised to check that the cycles of pressurization and depressurization do not affect their integrity, procedures for the oxygen reserve are emphasized, and so on. But a defined course of action is not perceived.

Information and control systems

The flight information systems are basically integrating systems and, as such, are difficult to separate from the rest of the systems on which they are superimposed. The difference between the cockpits of two Boeing 747 airplanes – one belonging to a first-generation airplane and the other to the latest generation – can be representative of the advance in information and control systems in favour of the second, especially if one considers that it is basically the same airplane as far as its size and features are concerned.

The fundamental efforts in the first case were made in the ergonomic area, ensuring that the key instruments or indicators were the most visible or audible. The evolution led to multicoloured and crowded cockpits but, in exchange, the pioneers of aviation had indicators that presented information about faults in a direct and immediate form and a few instruments that gave immediate feedback about the adequacy of their decisions in the face of the fault. By comparison, in today's automated cockpits, the design can mean that the crews do not perceive a problem until the moment in which there has already been a failure in its diagnosis or, sometimes, until after the system has automatically executed actions based on a faulty diagnosis.

In modern systems, the integration of information has been put in the hands of information systems. The use of digital screens permits the use of virtual indicators that do not always have to be present and occupy visual space only when they are required by the operator or by the system itself. The use of screens in place of dozens of indicators makes work extraordinarily easier by avoiding the overload of unnecessary data. In a negative sense, given that

29 Despite Beaty's intention, this may seem exaggerated when used as an illustration, but it is pretty close to reality. By way of anecdote, amongst the different sounds heard in the cockpit of the Concorde is Von Suppe's Charge of the Light Brigade.


Figure 3.1 The Boeing 747 – more advanced model (Source: Sam Chui)

Figure 3.2 The Boeing 747 – flight engineer panel (Source: Ariel Shocron)

Images corresponding to two generations of the Boeing 747. Observe the panel crowded with indicators in Figure 3.2 and how this panel – along with the crew member who managed it − has completely disappeared in the more advanced model (Figure 3.1). Despite this, the controls associated with the front seats have been notably simplified. Boeing confirms that the number of lights, indicators and switches has been reduced from 970 in the first model to 365 in the second. In the generation reflected in the second photograph, the crew received information about everything that the airplane had to offer; the crew had to evaluate that information, separating the fundamental from the ancillary and integrating it to form an idea of how the flight was progressing.


the information system performs the job of integrating all these data, a failure in this system can be far more serious than the failure of a single indicator in the previous generation. Several incidents or accidents have been caused by an incorrect interpretation of the information systems or by some failure in the entry of data into them. Currently, there is an abundant case history that permits a classification of the types of error produced by advanced systems, illustrated by the cases in the box: indicators whose design induces error, apparently inconsequential errors in maintenance, activation of automatic systems not well known or understood by the pilot, and systems designed to simplify tasks that may lead to certain types of error.

Air Inter (1992)

Confusion in the use of an instrument that allowed the alternate selection of a rate of descent or an angle of descent was fatal in the case of Air Inter in 1992. In an Airbus A320, the same instrument could define two radically different forms of descent via a small difference in its setting. Under the first, a '4' would order the airplane to descend with an angle of four degrees with respect to the horizontal (it is not unusual to find roads with six or more degrees of inclination, to give an idea of how gentle a descent this represents). Under the second modality, the same '4' would order the airplane to descend at a rate of 4,000 feet per minute, which in the approach phase can represent an inclination of approximately 17 per cent,30 a sufficient error, in mountainous territory and without visibility, to cause a serious accident.

Aeroperú (1996)

An Aeroperu Boeing 757 in 1996, having just taken off, began to display confusing indications and ended its flight by crashing into the ocean. The failure was due to adhesive tape left on the fuselage over an air intake that is critical for indicating air speed.31 Although this problem would have affected an airplane with traditional instrumentation in the same way, a non-integrated system would have meant that the pilot carried out the integration of information and might have been able to detect the origin of the problem.

30 Calculated for a speed of 500 km/h, at which the airplane would travel 6.94 km horizontally in the time it took to descend 1,200 m. 31 The speed of the airplane is calculated via the difference in air pressure between two points. The first receives the direct impact of the air, while the second is situated in a sheltered position and, externally, is a hole or port in the fuselage. It was this second point that was covered with adhesive tape.
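The two footnotes above lend themselves to a brief worked illustration (the figures are indicative only and are not taken from the accident reports). The descent gradient implied by a vertical speed can be approximated as the ratio of vertical speed to ground speed:

$$
\text{gradient} \approx \frac{4{,}000\ \text{ft/min} \times 0.3048\ \text{m/ft}}{500\ \text{km/h} \div 60} \approx \frac{1{,}219\ \text{m/min}}{8{,}333\ \text{m/min}} \approx 15\%,
$$

a figure of the same order as the 17 per cent quoted in the text – and one that grows as the approach speed falls – against an intended descent of roughly 7 per cent at four degrees. As for the airspeed indication, under the usual incompressible-flow assumption it follows the pitot–static relation $v \approx \sqrt{2(p_t - p_s)/\rho}$, where $p_t$ is the pressure at the point receiving the direct impact of the air, $p_s$ the pressure at the sheltered port and $\rho$ the air density; blocking either point therefore corrupts the speed reading.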


It is anecdotal but significant that this type of problem – loss of instrument information – and its solution are an integral part of private pilot courses and even of unpowered flight, and the well-known solution of breaking the glass of the instrument (accompanied in this case by the depressurization of the cabin) would have been valid in the case of Aeroperu.

Airbus A320 demonstration flight

In a low-speed, low-altitude pass during an air exhibition in an Airbus A320, the pilot reduced both parameters excessively,32 descending below the tops of the trees beyond the end of the runway. Trusting in a device33 that would prevent the airplane from stalling, he applied maximum power to clear the obstacle. The low-speed configuration of the airplane, however, prevented a quick response and resulted in the airplane crashing into the forest.

All these cases demonstrate confusion, whether in the interpretation or in the use of information systems. Amongst the cases mentioned, the Aeroperu case is especially significant: the pilots seemed incapable of asserting themselves from the almost spectator-like position required by the most evolved information systems. In addition, when they found themselves incapable of establishing position, speed or altitude, they requested assistance from the controller, who provided them with the information appearing on his radar. However, he did not realize that his data came from the airplane's transponder and so were also erroneous. At the moment of impact with the ocean, the data they had available indicated they were 9,000 feet above the surface.

When a system gives confusing indications, the uncertainty can be greater in an advanced airplane than in a traditional one. Roberts and Bea (2001) explain this phenomenon, highlighting that we give meaning to events once they have occurred,

32 For the sake of spectacle, a protective system that would have impeded descending to such a low altitude was disabled. 33 Automatic protection cannot defy the laws of physics. As a simple example, an automobile could, theoretically, be equipped with a device that would prevent the engine from stalling. However, that device could not avoid the fact that an engine might not have enough spare power to quickly overtake another vehicle.


that is, in the past, and to achieve this it is necessary to pay attention to the flow of the experience. In less evolved airplanes, this type of sudden incapacity does not tend to appear, since the integration of data is being carried out constantly by the pilot. If there are confusing or contradictory data, an analysis of the sequence of their presentation and of their interpretation can give clues, which will be lacking if that function has been performed by an information system. The latter will present those data that the designer considers the pilot must have at all times to act correctly; in addition, it provides data about abnormal parameter values and data that, falling into neither of these categories, might be requested of it.

The risk presented by systems with an elevated level of automation is pointed out by Raeth and Raising (1999), who indicate that placing the pilots out of the decision–action loop represents an abandonment of control and responsibility, at the same time going against the purpose of the system. However, they add, a certain dependence on automated systems is necessary given the complexity of the environments and the cognitive limitations of the human operator. To adequately appreciate this last idea from Raeth and Raising, which could coincide with the dominant mentality among the designers of information systems used in flight, it is necessary to add that their work was carried out on combat airplanes. The authors themselves illustrate their idea with an example of an airplane 'flying between 50 and 500 feet from the ground, at night and in bad meteorological conditions'. It seems obvious that in this type of situation, as with an automatic landing in commercial aviation, automation is essential despite the risks that it might introduce, since the situation exceeds the abilities of any human being.

Automation, in extreme situations, increases the operational ability of the system. The mere physical description by Raeth and Raising, or the description of a landing with minimum visibility, is sufficient to show that these situations are beyond the limits of ability of a system without a high level of automation. The conclusion that could be drawn in terms of safety is that, thanks to automation, current airplanes can fly in situations that, in a non-automated environment, would represent certain disaster. However, to conclude from this that automation increases safety would be deceptive; in the absence of automation, operators would opt for avoiding situations they could not handle, cancelling flights, deviating from the planned route or landing at alternative airports. There is a visible improvement in efficiency via the acquisition of new abilities, but at the same time dangers are tackled that previously would have been avoided. This takes place in the confidence that the new technological abilities guarantee that the level of risk, understood as seriousness multiplied by probability of occurrence, will be equal to or less than that of the previous situation.

Information systems, as has been shown, have caused accidents, whether through automatisms or through the confusion they can induce in their operator, but simultaneously they have made flight in marginal conditions safer. The improvements in efficiency achieved through information systems and the associated automatic systems have come in different ways, including the following:


1. Increase in ability to operate in marginal conditions.
2. Homogeneity between airplanes and optimization in the use of crew.
3. Reduction in the number of cockpit crew.
4. Substitution of mechanical links by flexible-response information systems.

Increase in ability to operate in marginal conditions

The increase in precision of navigation systems and their integration with control systems permit current airplanes to operate in conditions of practically zero visibility, even at airports where manoeuvres are complex due to terrain. There is, then, an improvement in efficiency if an automated system has the ability to carry out an action inaccessible to the pilot due to physical limitations. As technology has advanced, landings in conditions that exceed human ability have become possible: an approach in Category I requires visibility on the runway of no less than 1 km and a decision height of no less than 60 m; in Category II these figures are reduced to a little more than 350 m of visibility on the runway and 30 m of decision height; in Category III there are three grades with successive reductions in visibility and, in the last of these – Category IIIC − there is no requirement for minimum visibility. Similarly, automatic systems aimed at preventing the airplane from leaving the flight envelope34 permit its operators to fly closer to the limits, knowing that the automatic system will prevent those limits being exceeded inadvertently.

Homogeneity between airplanes and optimization in the use of crew

If an information system is interposed between the pilot and the airplane, it becomes easier for different airplanes to appear similar to the pilot and, therefore, easier to transfer a pilot from one airplane to another or to permit, without confusion, flying various types of airplane. This advantage is being particularly exploited by the new generations of Airbus airplanes, where there is multi-rating, which permits one pilot to fly different airplanes that share the same information system and consequently the same types of control. The average training time for the transition from an Airbus A320 to the Airbus A330 or A340, of much larger dimensions, is estimated by the company to be eight days, whilst the transition from the A340 to the A330 is carried out in one day. On the basis of this ease, some operators are opting to maintain crews able to fly multiple models of airplane. Boeing – the principal competitor to Airbus – despite having much more diverse models as far as information and control systems are concerned, maintains this same principle in its Boeing 757 and Boeing 767, of different size but with identical cockpits (Campos, 2001). This situation, however, is due to be corrected in its 787 model, which is going to be manufactured in different sizes and for different

34 The flight envelope is expressed in graphic form and represents the limitations of the airplane’s behaviour, indicating maximum and minimum operating speeds and loads it can be subjected to without entering a stall condition or suffering structural damage.


segments. Acting in this way, Boeing will have a policy of commonality very similar to that of Airbus. The following may demonstrate the importance of this factor for operational efficiency: on a regular long-distance flight, it is common for the crew that took an airplane to its destination to stay there, being substituted for the return flight by another crew, which had brought the airplane there the day before. If, due to mechanical problems or the number of reservations for the flight, the operator opted to use another model of airplane, the crew responsible for the return flight would not be qualified for that type of airplane and would have to travel as passengers, reducing the number of available seats as well as requiring a third crew for the flight. The possibility of cross-crew qualification solves this problem.

Reduction in the number of cockpit crew

As shown in the photographs of the Boeing 747 (Figures 3.1 and 3.2), over the years the number of crew in the cockpit has been reduced. Baberg (2001) recalls how the first long-haul jets were operated by five crew in the cockpit: captain, co-pilot, flight engineer, navigator and radio operator. Today, even the largest airplanes are handled by only two pilots.

Substitution of mechanical links by flexible-response information systems

In the area of control, there is a notable improvement in efficiency. The absence in the most modern airplanes of a mechanical link between the controls and the aerodynamic surfaces they command permits the information system interposed between them to provide a flexible response. These systems respond, supposedly, to the intention of the pilot, who signals what while the information system defines how; therefore, one and the same order can be executed in different ways in different circumstances of the flight. Naturally, the system responds to the interpretation of that intention made by the designer of the system and therefore to a non-contextualized interpretation which, it is supposed, enjoys general validity.

In summary, there are numerous and visible facts that show how information systems have contributed to increasing efficiency. The situation put forward is once again similar to that found in the development of navigation and propulsion systems and, to a lesser extent, in auxiliary systems. Some of the elements on which the improvement in efficiency is based may have introduced their own risks, in the now habitual transaction: as abilities increase, new situations are tackled and this leads to new risks derived from the transaction itself. However, as well as the transactional risk derived from tackling new situations, there is a specific source of risk derived from the alteration of the role of the human operator in an automated environment. Beaty (1995) expresses this risk, pointing out that airline pilots have ever less opportunity to practise their trade. Since the automatic pilots do their job well and, in fact, are certified to carry out approaches and landings in conditions much worse than those permitted to the human version, pilots have been relegated to the role of supervisors, and experience shows that human beings tend to be very bad at this.


Information systems represent an important advance for having added new abilities to the system. However, according to Beaty (1995), this is not a net increase, since at the same time there can be a deskilling of the human element, derived from a design that has not thought through the strong and weak points of that human element. It seems, therefore, that one of the components of the system – the airplane − has benefited from a technological development which has equipped it with greater abilities but, at the same time, another of the components – the person − has been relegated to a more passive position that makes less use of the abilities acquired through training and experience, contributing to their deterioration.

This net transfer of abilities can explain why the improvement in safety levels since 1975 does not reflect the technological improvement. The improvement, at least partially, seems to respond to a transfer within the system and not to an acquisition from the exterior. This corresponds to Figure 3.3. Two human actors appear in the figure. Both increase their abilities according to their training and experience although, for simplification, these factors have been included under the generic heading of training. The abilities acquired by the pilots have a double outlet: on one side, their experience feeds information to the engineers who designed the information system and, on the other, there is the ability actually utilized.35 The engineers, for their part, have their own training and experience and both, jointly with the information received from pilots,36 are poured into the design of the information system.

Figure 3.3 Ability transfer

35 The arrow representing utilized ability has a deliberately less intense colour to reflect the relative loss of importance due to developed information systems. 36 The less intense colour, as in the previous case, reflects the fact that the engineer pours into the design of the system his own interpretation of the information received from the pilot.


The system would then be in a condition to generate a set of abilities; however, these abilities are not yet net, given that there are some losses. There is unused ability of the pilots, coming from less technology-intensive environments. This ability could have become obsolete due to technological evolution, or it could simply have been lost; in the latter case, the design of the system would have lost part of the learning acquired, and this will only manifest itself in the presence of an event that makes the shortcoming evident. As a result, a net ability emerges, which is that effectively used through the action of all the elements of the system. At the same time, resistances appear that correspond to conduct or beliefs that are dysfunctional in the technology-intensive environment and that can lead to inadequate conduct.

Baberg (2001) illustrates this idea, pointing out that for safe operation using the automatic pilot it is necessary that the crew be conscious of which mode the system is working in and understand the abilities of the different modes. This was relatively simple at the beginning of automation, when there were few modes available and they were displayed on a separate electronic or mechanical indicator. Baberg then illustrates this claim with an example taken from a Boeing 747-400, where there are nine modes of angle-of-attack adjustment, 10 modes of turn rate, five forms of automatic power adjustment and six forms of automatic landing, plus combinations between them. In addition, the mode currently in operation is displayed on a multifunction screen, together with many other digital and analogue indications, and changes in mode are signalled by flashing text. As well as the overload of information, its handling carries its own complexity, since some modes must be selected manually whilst others change automatically. In a high-workload situation, this can lead to information overload and confusion between modes, especially if the pilots are experiencing tension, giving rise to the 'tunnel vision' phenomenon. By way of summary, Baberg (2001) finishes his exposition with an anecdote that illustrates the confusion to which he refers: there is a pilot joke according to which the most often heard phrase in the flight cabin is 'What is this bastard doing now?' Of course, the phrase refers to an airplane that apparently flew according to its own plans.

Environment

The inclusion in the environment, as a risk factor, of a numerically excessive set of variables could lead to a complicated and inoperative model. In fact, in the first model for the classification of risks, the environment represented a heterogeneous mix into which everything that did not fit into the other two factors – man and machine, which in principle enjoyed a much clearer definition – was introduced. In reality, commercial aviation moves in an environment that is social, technological, regulatory, and so on. Nevertheless, for the sake of clarity, only external elements that affect an airplane in flight have been considered, since regulation, technology and the rest are treated as integral parts of the system. As a result, this section will deal only with the following factors: risks associated with meteorology and risks associated with air traffic.


Risks associated with weather

One of the causal elements in a large number of accidents has been weather. Among its principal effects is lack of visibility; this has caused accidents due to not knowing the exact location of the airplane, especially in the phases of flight close to the ground. Of a total of 6,655 deaths in commercial aviation accidents between 1990 and 1999, situations where the airplane had no control problem but in which there was no clear awareness of its exact location led to 2,111 deaths. This figure rises to 2,314 if we add deaths in accidents caused by ice or snow, wind shear or turbulence (Boeing Commercial Airplanes Group, 2002a). Any airplane of any size can suffer an accident in the face of extreme meteorological conditions, especially if it finds itself close to the ground without time to recover from a temporary loss of control.

Weather, as well as being a direct cause of numerous accidents, represents an important factor in many others, fundamentally due to lack of visibility. In the study by Boeing Commercial Airplanes Group (2002a), 506 deaths were due to midair collisions and another 45 to unauthorized entry onto the active runway. In cases like the already mentioned Los Rodeos accident in 1977 – even today the largest accident in commercial aviation, with 583 victims, caused by an unauthorized take off – weather played a fundamental role, since minimum visibility was one of the factors without whose concurrence the accident would not have occurred. Weather presents, therefore, various types of danger:

1. Lack of visibility and, as a consequence, lack of knowledge of the exact geographical position or height of the airplane. Normally, accidents caused by this factor are associated with navigation errors. An error in the adjustment of an altimeter, in the introduction of the position or in the selection of the frequency of a radio aid would not have grave consequences if, as well as the error, there had not been a situation of low visibility.
• Avianca Madrid-Barajas: the Avianca accident took place in 1983 in Madrid; an inadequate setting of the altimeter, together with the lack of visibility and other organizational factors, gave rise to the accident.
• Mount Erebus DC-10: the Mount Erebus DC-10 accident had various causes, among them the introduction of an incorrect position into the navigation system, but the determining one was the lack of visibility.
• American Airlines Cali: lastly, the American Airlines accident in Cali was also the result of various causes, among which was a lack of visibility.
2. Extreme turbulence, with its potential effects on the structure of the airplane, especially if close to the ground, with the risk of loss of control without time for recovery. Large airplanes have been victims of wind shear due to the sharp changes in intensity and direction of the wind that this phenomenon produces. In some cases, turbulence occurs in calm air and, unexpectedly, the passengers can be launched against the ceiling in a sharp descent or, alternatively, the violent opening of the luggage compartments can throw


objects over the heads of passengers. Although not frequent, these phenomena have caused injuries and even deaths. The annual cost of damage caused by turbulence is estimated at 100 million dollars (National Center for Atmospheric Research, 1998).
3. Strong winds represent a danger in the critical phases of flight – take off and landing – and when adequate means of navigation are not available. Wind can push an airplane off its route or significantly reduce its range if it blows against the direction of flight and thus reduces its ground speed. If the wind is associated with reduced visibility it can, in addition, force low-altitude manoeuvres at airports that are not technologically equipped.
4. Ice, in its different forms, can represent different dangers. The impact of hail can damage the airplane. Humidity and very low temperatures can form ice on the surface of the airplane, degrading its aerodynamic qualities and loading it with weight that can require more power than is available just to stay in the air.
• Air Florida 90 in the Potomac River: the formation of ice over an external sensor, and the confidence of the pilots that the indicator was displaying correctly, were key factors in the accident of Air Florida 90, in which a Boeing 737 fell into the Potomac River in Washington. The very name of the company could be representative of the lack of experience of the crew in this type of meteorological conditions; in fact, at no time during the flight was the full available power used, because the crew assumed that the indicator was functioning correctly and that they therefore had adequate power.
• American Eagle ATR: two ATR airplanes belonging to American Eagle fell suddenly to the ground for reasons unknown at the time. It was later determined that the accumulation of ice over the control surfaces could cause a sudden roll, with the crew not having enough time to react.
5. Snow – snow is a clear sign of low temperature and, therefore, of the presence of some of the dangers derived from ice. In addition, snow represents an increase in the take off or landing run and the danger associated with a slippery surface. Lastly, snow can contaminate some instruments situated outside the airplane and produce false readings, leading to an accident.

The risks associated with weather are avoided via information coming from various sources. The meteorological forecasts at origin, destination and en route, as well as information on current conditions, allow routes to be modified, timetables changed or flights cancelled according to the available information. More advanced aircraft are equipped with their own meteorological radar, which warns of turbulence zones or storms to avoid, as well as specific alarms for situations of wind shear.

When it is considered that the meteorological perturbation does not present important risks and it is decided to tackle it, the airplane's technological equipment – as well as the aforementioned meteorological radar − allows the elimination of ice


that might accumulate on the various surfaces of the airplane. In addition, airplanes are equipped with systems to dissipate, without risk, discharges such as those created by lightning.

As has been highlighted, one of the most usual forms of meteorological perturbation relevant to safety is low visibility. However, this perturbation is so integrated into the activity of commercial aviation that it is treated fundamentally as a problem of navigation, even in the extreme cases of landings without visibility.

Risks associated with air traffic

Air traffic congestion, especially in the terminal areas of large airports, is another source of risk for which avoidance measures have been taken. As a final barrier, should the rest of the system fail, there are TCAS – Traffic Collision Avoidance Systems – which warn when another airplane is nearby and on a collision course. The use of radar systems both by control entities and by the airplanes themselves, as well as regulations regarding airplane separation, assignment of flight levels, communications and authorization requests, make a collision at cruising levels improbable. Even so, one of the most serious collisions occurred in 2002. The few collisions between airplanes have, in the majority of cases, occurred on the ground and in circumstances of low visibility. Many airports where this phenomenon is frequent are equipped, as a form of avoidance, with ground radar. Sometimes the collisions have occurred as a result of a lack of knowledge of the airplane's position, on the part of the pilot as much as of the controller; in these circumstances, the active runway may have been invaded. In others, the accident is the result of problems of communication derived from the fact that most of the communication is in English, which many operators have not completely mastered. This problem has been partially solved via the use of a very standardized phraseology; nevertheless, this requirement has equipped the environment with a rigidity that has led to accidents because the lack of a single word prevented the recognition of a situation as an emergency.



• Avianca 052 JFK: possibly one of the saddest cases in this respect is that of Avianca 052 in New York (NTSB, 1991). In conditions of heavy air traffic, the control services forced the airplane into two prolonged holds. This led to the airplane running out of fuel and crashing just short of the runway, because its situation was not recognized as an emergency by the control service until the last moment.
• Los Rodeos: in a similar way, one of the multiple elements in the cause of the accident at Los Rodeos was the use of a non-standard phrase by the co-pilot of the KLM; the incorrect phrase, 'We are now at take off,' was interpreted as meaning that the airplane was stopped and ready for take off, when in fact it was performing the take off run.


The system has also learned in relation to this: a regulation was recently published by ICAO requiring that pilots and controllers demonstrate an adequate level of English in order to operate.

Management

Another risk factor is that posed by management. Nevertheless, this heading can group very heterogeneous risks. For the purposes of clarifying their origin, risks coming from management have been grouped into the following classification: social and economic aspects, and cultural aspects.

Social and economic aspects

The set of actors in the system is subjected to a set of pressures towards the improvement of safety. The responses of the system to the events produced over time have determined its current configuration. Above all, commercial aviation is a business activity of great scope. In consequence, it is subject to important pressures towards the reduction of costs, which are transmitted along the production line. Baberg (2001) concludes his paper on man–machine interaction by pointing out that there are economic pressures that inhibit changes that could increase the level of safety when these do not have a direct link to an accident.

Pressure towards efficiency

The passenger demands lower prices and makes decisions based on factors of economy, schedules, connections, and so on. Safety is important, but the passenger considers it guaranteed thanks, among other factors, to the sector's efforts regarding its image. Because of this, it does not tend to be a factor for discriminating between airlines. In this environment, the airlines try to reduce their costs and pressure the manufacturers so that their new airplanes reduce operating costs, provoking a war between the principal manufacturers on this territory. The conduct of the two large manufacturers – Airbus and Boeing − regarding innovations of great importance to safety may serve as evidence (Campos, 2001):

With the appearance of the Airbus A-310, the flight engineer's tasks were automated, meaning the disappearance of the flight engineer from the cockpit and leaving just two crew members. There were timid protests on the part of Boeing and some operators over trade-offs between safety and cost. Nevertheless, the reduction of one crew member represented a clear advantage in costs, especially important when there was strong price competition. Finally, Boeing would, because of


the market's response to Airbus, adopt the two-crew-member cockpit in subsequent designs.

The Airbus A320 would mark the appearance in subsonic commercial flight of the 'fly-by-wire' system. The real significance of this system is that the control stick is not physically connected to the airplane's control surfaces but is instead an electronic control. Like a mouse or a joystick on a computer, this control gives input to an information system. Boeing would insist for a time on keeping direct control,37 but the gain in efficiency of 'fly-by-wire' would finally lead it to use the system in its Boeing 777 model.

When the search for efficiency has come from Boeing, it has been based on the high reliability of current engines to build large aircraft with long range – including transoceanic flights − and only two engines. The Airbus response to the construction by Boeing of transoceanic airplanes for almost 400 passengers with two engines has been as timid as Boeing's to the reduction of crew. It has not argued, as has been done convincingly from other fields, the potential safety problem, but instead functional aspects of minor relevance, arguing that four-engine airplanes are more flexible on long routes. At the same time, Airbus claims that certifications should be carried out under international regulations and not by the powerful North American regulator, to prevent the latter from taking part in the competition between the two big manufacturers in favour of Boeing.

The lukewarm response by Airbus can only be interpreted as an attempt to wait and see how market trends develop. An airplane with two engines has lower maintenance costs than an equivalent four-engine airplane and, if the operators, pressured by price competition, opt for the former, Airbus would have to shift its bet from four-engine airplanes to its own twin-engine long-range airplanes. Boeing, for its part, tries to have the requirements for overhauls and so on made common to all airplanes, independent of the number of engines. The maintenance of an ETOPS engine – prepared for a long-range twin-engine airplane − is more costly than that of an engine for three- or four-engine airplanes, due to the additional checks required for a twin-engine airplane. Identical treatment 'independent of the number of engines' would make the cost of maintenance extraordinarily more expensive for a four-engine airplane – the Airbus bet − favouring Boeing's position regarding twin-engine airplanes.

37 Boeing’s experience in military aviation, where the use of ‘fly-by-wire’ is almost universal, would make one think that there might be good reasons for its initial rejection.


At the same time, passengers and crew would reject an explicit reduction in safety, limiting, as a consequence, one of the possible options for reducing costs. Grave negligence in safety can be costly for a company or a manufacturer and a serious discredit for a regulating body. The disappearance of ValuJet after the accident of a DC-9 in the Everglades, caused by carrying unauthorized inflammable material on board, is an example of what can happen to airlines as a result of neglecting safety – similar to what occurred with the DC-10 model after the events related to the so-called 'door of death'.

Haunschild and Ni (2000) make reference to these situations, pointing out that complex organizations are also probably more politicized than simpler organizations, and politicized organizations investigate accidents in ways that do not necessarily promote correct learning but rather seek to protect the interests of the most powerful. Because of these factors, it seems reasonable to expect that an increase in the complexity of the information about a complex organization will not be as beneficial as it is in a simpler organization. Those responsible for management find themselves, therefore, with the need to keep themselves in the area defined by Reason (1997) as the zone of dynamic equilibrium between safety and cost savings.

Pressure toward safety

Manufacturers and operators are on the same team when it comes to safety. Both are subject to the same pressure towards the reduction of costs and have an identical interest in keeping the confidence of their users. Therefore, if they can convince users and regulators that a new practice will not represent a reduction in safety, they can achieve an increase in their equilibrium zone. The temporal dimension of the two factors that define the equilibrium zone – costs and safety − is different: a difference in costs can be immediately perceived, but a difference in safety usually is not; as a result, there could be cost-saving practices whose impact on safety would only be perceived in the long term.

There is an interesting example in the ageing of the fleets of many airlines and the possibility that problems not known until now may arise. Regulators do not have decisive technical arguments to limit the life of an airplane, but there can be no certainty that such problems will not arise in the future. There have already been serious events related to the ageing of electrical systems (Boeing Commercial Airplanes Group, 2002c). On the other hand, accidents like that of Aloha flight 243 in 1988, which lost part of the roof of an airplane


in flight, can only be explained by prolonged negligence where maintenance is concerned. The same can be said of the supposedly improved procedure that some DC-10 operators began to use to attach and remove engines to and from the wings, which ended in the loss of engines in flight and, in one case, of a complete airplane, due to an unforeseen interaction between the loss of hydraulic energy and the high-lift surfaces.

A case apart is that of the low-cost airlines, which have placed a clear bet on notably lower costs than their competitors. There are structural elements of their business model that do not affect safety, for example the use of secondary airports, the reduction in onboard comforts or the utilization of a single model of airplane. The same cannot be said of the strong pressure that exists towards shortening the time on the ground and other factors aimed at optimizing operational efficiency to the maximum.

Regulators, in this context, find themselves in a difficult situation: their function is the issuing and policing of regulations but, to carry out that function, their principal source of information is the manufacturers and operators, who also have the better-qualified specialists. This forces the regulators to maintain an attitude of ambivalence in their activity of inspection. Regulators tend to become dependent on those they regulate in order to acquire and interpret information. As a result, both sides try to avoid open confrontation and instead prefer negotiation and bartering when there is conflict (Reason, 1997). In addition, given that accidents follow patterns of multiple causes, it is not possible to prevent, by means of regulations, all the situations that can possibly occur.

Kirwan (2001) examines the regulators' situation from the perspective of relationships and points out that there is a dilemma for the authorities: opt for being very active and interventionist or, instead, promote the development of self-regulating systems. The first option may lead to a situation of over-normalization and to conflict between regulators and regulated, with lack of cooperation and hiding of safety problems. The second could lead to a relaxation of safety standards and practices. Degani and Weiner (1998) opt for a functional perspective for their analysis and, although they mention over-normalization, the terms in which they treat this factor differ from those used by Kirwan. For them, the worst problem of over-normalization is not the potential conflict between regulators and regulated; rather, it is that it imposes a scenario that does not use the most valuable asset of the system, which is the operator in situ.


The system designer and the director of operations know that there cannot be a procedure for everything and that there will always be situations in which the operator will not have a written procedure available to tackle them.

Cultural aspects

Management, understood as a source of risk, also responds to a set of cultural aspects of the sector. The role of a dominant culture in the interpretation of an event has been addressed previously. The description of the evolution in the treatment of the different risk factors permits the affirmation that aviation constitutes, as a whole, a culture of a technical type. At the time of writing, engineers and pilots continue to be the predominant collectives. Nevertheless, economic pressures coming from the deregulation of prices have given rise to a growing prominence of cost-based decisions and to the appearance of collectives with strictly business training.

Although neither manufacturers nor operators have monolithic cultures, there are important differences in relation to the type of decisions made. If we take as reference the two current large manufacturers – Boeing and Airbus − the written declarations from both companies with respect to technology reveal some cultural values. Both are willing to use everything that technology can offer them but, beyond this, there are notable differences. Boeing has directed its technological development towards simplification and the reduction of the number of engines, increasing their reliability. Airbus, for its part, has opted to introduce automated systems that make more efficient use of the possibilities of the airplane, the possibility of reducing the number of persons necessary to control it, and systems designed, in extreme cases, to impede any action by the pilot that the design of the airplane considers incorrect. These differences are reflected in the airplanes manufactured by both companies, although the economic pressures derived from a competitive situation tend to reduce them.

So, then, Boeing seems to give more importance to the pilot and tries to use the information systems to inform and suggest, adopting the simplification approach to avoid the risk of error. In Boeing's own history, there are facts that point to the prominence of a culture marked by the pilot collective:

The first large jet for the transport of passengers was built by Boeing and known commercially as the Boeing 707 (although the British de Havilland Comet had entered service earlier, it was of smaller dimensions). The airplane was presented by the Boeing chief pilot in a public exhibition performing a complete roll about its longitudinal axis – a 'tonneau' − which involves placing the airplane in inverted flight in one phase of the manoeuvre. This had never been seen before, nor probably since, with an airplane of those dimensions – and was captured in images. Faced with the request for explanations from the Boeing president, the author of the manoeuvre, who kept his position in the company after the event, limited himself to clarifying that, since he did not alter the load factor, the airplane, including devices such as the fuel pumps, 'didn't know' it was in inverted flight.

More recently, with the appearance of the Boeing 777, the North American manufacturer opted for the inclusion of the 'fly-by-wire' system. Its competitor,


Airbus, when it designed the first airplane that used this system, modified the voluminous control columns known as 'horns' and substituted them with lateral controls similar in appearance to joysticks for computer games. All this gave pilots an unusual image of their workplace. As commented earlier, Boeing, in the same situation, has opted for maintaining elements familiar to pilots, such as the aforementioned control column, even though it has lost part of its functionality, given that it is no longer necessary to exert physical strength or make exaggerated movements in the cockpit.

Airbus uses the automation approach and maintains the levels of complexity – where this contributes some advantage to the abilities of the airplane − trusting its management to automated systems as a way of avoiding human error. In extreme cases, the airplane can go as far as not obeying an order from the pilot if this would place the airplane outside its operational limits (Wells, 2001). Systems like the one aimed at the prevention of wind shear or the systems for the protection of the flight envelope are built under this philosophy and, instead of informing the operator, can directly impede an action that the design of the system considers erroneous.

It is to be expected that a culture in which the values of pilots prevail will try to obtain every possible advantage from the technology but without giving up the pilot's own prominence in flight – Boeing's situation, in the terms that this company expressly communicates. By contrast, a company that understands the human element to be fundamentally fallible will try to limit as far as possible the possibility of making mistakes by creating environments that assign the pilot a secondary role.

These cultural differences could explain facts like the use of 'fly-by-wire' controls, and the consequent loss of direct control, appearing earlier in the commercial models of Airbus than in those of Boeing. This happened despite Boeing's superior experience in military aviation, where 'fly-by-wire' is common. Direct control, and the use of the feeling of pressure on the hand as a basic instrument of pilotage, were too highly valued in the American company to be easily abandoned. Despite this, the pressures towards efficiency impose themselves and the superior abilities of 'fly-by-wire' are becoming universal. To facilitate the transition, the designers used the so-called 'artificial control sensation', via which sensations are transmitted to the pilot's hands that would normally come from the aerodynamic pressure on the controls but in this case are created by the information system.

For a management model that bases its confidence on the pilots, this type of measure represents an inevitable renunciation; on the other hand, for a management model that bases its trust on technology, it is one way of increasing capabilities while reducing the variability coming from human actions. The same principle would be applicable concerning the fulfilment of regulations. If the culture of the specific sector or company places its trust in people, it will equip those people with training and information, and will allow the operators to make decisions based on their knowledge. If, on the other hand, that confidence does not exist, it will try to restrict their action to the maximum via rules that specify the adequate action for each contingency, along with the sanctions associated with not fulfilling the rule.


Beck (2001) affirms that most of the analysis of technological development carried out by social science fails to recognize the difference between 'technology requires legislation' and 'technology is legislation'. Automatisms that prescribe or directly execute an action lead to considering technological and regulatory development as two complementary instruments of a single model of learning.

Hale and Swuste (1998) point out that it is necessary to decide when it is better to let persons decide for themselves what action to take and when it is appropriate and/or acceptable that the freedom to act be limited by another, who decides which action should be carried out. This, of course, is an important decision to be taken in the field of management, and one where the type of dominant culture will orient the answer.

In short, management involves making decisions attending both to elements of pressure coming from outside and to elements of confidence coming from its own internal culture. Faced with a specific requirement from the environment, there are several possible ways of materializing the processes of learning. The choice between these possibilities will depend on the confidence that a specific culture assigns to each of the possible repositories of the resulting knowledge. These are, after all, risk decisions, whether because of the indeterminacy associated with the human element or because of the rigidity and complexity derived from technological and regulatory development.

System complexity

One of the characteristics of the management of air safety is that its development has given rise to a set of practices that constitute a highly complex system. This complexity can, by itself, represent a risk factor. Because of it, the paradox can arise that the introduction of measures to improve safety could, at the same time, carry with it its own quota of increased risk. An example is the accident of El-Al in Amsterdam, already mentioned, in which a Boeing 747 lost two engines; it is representative of this fact since the accident was caused by a safety device. The same is applicable to the system for flight-envelope protection which, conceived as a means of increasing safety, can perform according to its design in contexts for which that design is inadequate.

Mauriño et al. (1997) point out that we have a natural tendency to assume that a disastrous result has been caused by equally monstrous failures. In reality, the magnitude of the disaster is determined more by situational factors than by the magnitude of the errors. Many serious accidents are caused by a concatenation of relatively minor failures in different parts of the system.


Without the concurrence of a set of failures, the accident would not occur. Improving individual risk factors is therefore useful because it breaks causal chains, but the number of such chains and their possible combinations is virtually infinite. Interactions can also occur between systems that are not functionally related but whose physical proximity gives rise to the interaction. Because of this, in serious accidents there is commonly a great disproportion between the magnitude of the causes, often insignificant, and that of the effect. A brief reminder of a few large accidents already presented may illustrate this point:

• Case TWA-800: A set of minor elements, none of which represents an operational failure in safety, converged and caused the destruction of the aircraft. It can be argued that the failure was in the design – an electrical motor whose operation produced heat next to the fuel tank – but it is helpful to remember that there were hundreds of airplanes of this type and the model had been in service for 25 years without this ever happening.
• Case United 232: An engine is not functionally related to the control of the airplane – save for the fact that it provides hydraulic power – however, its explosion can destroy the nearby conduits through which that control is exercised. Before this occurred, the probability of simultaneous loss of the three hydraulic systems had been quantified as less than one in one billion (an illustrative calculation of this kind of figure follows the list).
• Case Los Rodeos: The accident was caused by a set of unrelated elements: the coincidence at one airport of two airplanes, neither of which should have been there; the low visibility; the low recent experience and high status of one captain and the inexperience of the first officer in the type of plane; the hurry to take off; the temporal coincidence of one aircraft being incorrectly positioned so that it impeded another from leaving; and a radio failure caused by simultaneous transmission.
• Case American Airlines 191: An engine has no functional relation to the high-lift surfaces; however, its loss involves the loss of the hydraulic power that it supplies and which keeps these surfaces38 extended. The pilots applied the procedure prescribed for an engine failure without knowing what had happened to the high-lift surfaces and, in doing so, caused the airplane to stall and be destroyed.
• Case Concorde in Paris: A tyre has no relation whatsoever to an engine or to a fuel tank, but the blow-out of a tyre close to an engine caused its failure when it ingested pieces of the tyre; likewise, a loss of fuel does not have to result in a fire, but if the airplane is equipped with afterburners, and there is therefore flame at the engine outlet, the spilt fuel can reach this part of the airplane and catch fire.
• Case Japan Airlines: The pressurization systems have no functional relation with the control of the airplane; however, the explosion of the rear pressure bulkhead of the pressure cabin can break the conduits that permit that control.

38 Hydraulic energy allows the movement of the surfaces. However, hydraulic energy is not required to keep them extended, once in position. A specific technological innovation of the DC-10 resulted in the retraction of the surfaces following the engine failure, but this does not apply to all airplanes, not even to the DC-10 after the mentioned accident.
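The ‘less than one in one billion’ figure quoted for the United 232 case shows how such estimates are typically constructed and why common causes defeat them. The numbers below are purely illustrative and are not the actual certification figures: assuming, for the sake of the example, that each hydraulic system fails independently with a probability of the order of one in a thousand on a given flight, the probability of losing all three at once is obtained by simple multiplication:

\[
P(\text{loss of all three systems}) \approx p_1 \, p_2 \, p_3 = \left(10^{-3}\right)^3 = 10^{-9}
\]

The result is only as good as the independence assumption behind it. A single common cause – here, an exploding engine whose fragments severed lines belonging to all three systems in the same area of the aircraft – removes that independence and, with it, the reassurance the figure appears to offer.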

The possibility of unforeseen interactions in a complex system exceeds the capacity for foresight, making the task of preparing for all of them impossible. In an extreme case, given that the predictive mechanisms are themselves part of the system, the complexity they add can cause new unforeseen events.

Perrow (1986) differentiates between tightly coupled and loosely coupled systems. The former are notably more exposed to systemic risk; expressed another way, in a loosely coupled system the failure of one part does not drag a chain of failures into other parts, whereas in a tightly coupled system this can happen. The search for new efficiency goals that take advantage of improvements in the risk factors draws the system towards the tightly coupled situation, making it sensitive to systemic risk.

Summary of the treatment of risk factors

In this chapter we have seen a pattern repeated in the treatment of the various risk factors: the search for improvements in safety gives rise to innovations which, starting from an already highly evolved situation, either go unused or introduce only small improvements. These innovations are then used to achieve operational improvements by increasing the level of efficiency. At the same time, these operational improvements in efficiency raise the level of risk, and this forces the search for new solutions to reduce it, repeating the cycle. This iterative process, which occurs once a certain level of perfection is reached, leads to a growing level of complexity that ends in the situation defined by Luhmann (1996) as hypercomplexity.

So, from one of the possible perspectives – the reduction of specific risk factors – improvements have been introduced that have affected the global functioning in unforeseen ways. A dominant mentality of a technical type has orientated the search for solutions above all towards technological improvements or towards the development and enforcement of procedures, whatever the risk factor in which the deficiency is identified. The historical perspective adopted in the section on technology makes this improvement model evident. However, risks less linked to technological evolution, such as meteorology or the human factor, are also handled via technological or procedural improvements.

The generation of new information, and a broad system for distributing it, has meant that solutions have been adopted by all in relatively short periods. The system, then, has been capable of learning in the past. The reduction in the rate of learning, its causes and possible alternatives are the subject of the following chapters.


Chapter 4

Explanation of the Reduction in the Rate of Learning in Complex Environments

In this chapter we will try to identify those elements of the system that may have contributed to the reduction in the rate of learning. We will do so from three different perspectives:

1. System objectives and their orientation towards learning or towards the avoidance of events, and the consequences of each type of orientation.
2. The role of various characteristics that, once a certain level of complexity is reached, constitute barriers to learning.
3. A global vision, in terms of organizational paradigms and their suitability to the objective of learning.

Systems that learn from errors and systems that avoid errors

Once a satisfactory level of perfection is reached, the trend changes, described by Reason (1997) as the change of a system from feedback-driven to forward-driven. This change in trend may have created a different system as regards its ability to learn and therefore to improve. In Maturana and Varela’s (1996) words, this change will cause a break-up of ‘operational closure’, a concept defined as ‘the need to maintain, without changes, the relationship between some variables with the end of maintaining the identity of the system’.

Given the potential seriousness of events in commercial aviation, an evolution aimed at avoiding them cannot, in principle, be criticized. There is, however, a situation where this type of evolution is questionable – when prevention strategies, through restrictions on action, incapacitate the response to unforeseen events. Some safety specialists could still present this evolution as valid: if the system, in that configuration, prevents more accidents than it causes – under the supposition that this kind of accounting is feasible – the solution could be acceptable. Nevertheless, in the eventuality of an accident, it would be difficult to accept the merit of a system that, unquestionably and leaving no alternative, caused it, on the argument that it has lowered the probability of others. Luhmann’s idea of the origin of risk is relevant in this context and can determine its acceptability, or otherwise, beyond what statistics might indicate.

This type of evolution represents important changes in the system’s behaviour. In an initial situation, the system tries to learn from the events, requiring individual


initiative and the ability to improvise. Once sufficient experience is gained, the change towards a more predictable and controllable environment occurs. The change has come about, above all, thanks to an important technological evolution, which has contributed new organizational abilities or organizational competencies. The process of constructing organizational competencies, in its advanced form, would appear to require not the initiative of the operator but, instead, the following of rules. From a dominant technical logic, this represents an advantage, although it has consequences, which will be analysed later.

Fukuyama (1998), in a reference to Taylor and the Scientific Organization of Work, points out how Taylor attempted to codify the laws of mass production, recommending a very high level of specialization1 which deliberately eliminated the need for initiative, judgement or ability in the workers on the production line. Under this perspective, to the extent that it is possible to achieve a situation where the operator behaves as a machine, trust in that operator becomes dispensable and there is no need to make allowances for individuality and possible irregularities in performance. Put another way, the concept of the Scientific Organization of Work established a strict division between design and planning, which would correspond to directors and engineers, and execution, which would correspond to the persons who carry out the operations on instructions from the former. Weber expressed a similar idea, in which the strict division of tasks relied on formal rules, written documents and specialized training as its instruments. Huse and Bowditch (1998) group the approach of Taylor and Weber, and their imposition of a strict division between the design and the execution of tasks, under the title of the structural perspective of the organization.

1 The word ‘specialization’ can be confusing. In this case it does not refer to highly qualified professionals but rather to a skill acquired from a single operation or movement.

This perspective, thanks to social advances in literacy and greater access to education, would later be set aside in the majority of environments. However, technological advance appears to have revived the division between design and execution sponsored by the structural perspective. General technological advancement and, in particular, that of information technology, has equipped an organization conceived under the principle of division of tasks with new competencies. In effect, technological ability has increased and allows a growing organizational complexity to be tackled with technological resources. In this way, organizational models that function without resorting to the operator’s ability or opinion, mentioned by Fukuyama, become feasible. If trust in human operators can be dispensed with thanks to technological potential, access by these operators to the meaning of their actions is unnecessary; that is, the operators are paid to execute actions and not to know what these actions mean. Additionally, the complexity of an organization with a high technological density makes that knowledge difficult for any person not specialized in technology.

The learning obtained through event analysis has increased the organization’s ability to attend to foreseen situations. In place of trusting in the unpredictable human being, from a dominant technological logic it seems a better


solution to materialize these competencies in technological form, or to establish strict procedures to attend to foreseeable events. At the same time, however, the strategies for anticipating known events can cause new and unknown ones, a fact warned about by Reason (1997) with the idea that an accident can be caused while trying to prevent another. The ability to manage foreseen events therefore increases but, at the same time, the ability to manage unforeseen situations begins to diminish. Furthermore, some of these situations can be caused by the very complexity introduced into the system. Carroll (2002) indicates that learning is being aimed towards greater control of what is already known instead of exploring the unknown.

The effort made in the reduction of errors can be read in this sense: in the first place through ergonomics and, secondly, through automation. In this way, the ability for execution has been entrusted to technology. There are situations where this is an unavoidable decision or, if preferred, where there is no real possibility of deciding otherwise. Often, however, it is a deliberate option, owing to the regularity attributed to technology. In these cases it is expected that the operator will remain in a role that is basically one of supervision and will intervene only for unforeseen events or at specific phases of the operation.

There are operations that exceed human ability, so that no real capacity for choice exists. A human hand cannot handle an airplane for hours with the same smoothness that an automatic pilot can achieve nor, along the same lines, can a human operator carry out a landing without visibility in the same conditions as the automatic system. A derivative of this phenomenon is the use by technology of advantages provided by the technology itself: the current precision of navigation systems allows scant separation between airplanes, but full use of this advantage may require a precision in handling so high that it has to be carried out by machines.

The evolution of the system’s competence to attend to different events – known and unknown – is reflected in Figure 4.1:

Figure 4.1
Evolution of competence (organizational competence plotted against time, with separate curves for known and unknown events)


Under the first operational model – feedback-driven – learning materialized as improvements in engines, cabins, communications and navigation. All these elements introduce improvements in specific parts of the system but are not integrated with one another. In consequence, the operator is charged with integrating the information, a task of growing difficulty as complexity increases. Under this first model, the events of unknown origin that could appear were many, and operators were expected to provide an adequate response based on their own competencies and knowledge of the system. For this model, human ability should be sufficient to go beyond the foreseen.

Under the second functional model – forward-driven – improvements are aimed at avoiding the events. For this, the operator’s task is simplified by using information systems and automatisms both for integrating information and for taking direct action. Previously identified situations receive a quick response with less probability of error. At the same time, information systems can induce error, and the automatisms can perform inadequately or impede the operator’s response to an unknown situation, or to one not contemplated in the design. A forward-driven system therefore introduces limitations through cutbacks in the operator’s freedom of action. This cutback can impede an error but can also impede an action necessary for the avoidance of an event.

Barriers to organizational learning

The ability to reconstruct an event, whether through the use of technology or through the gathering of information from those involved, is very high. In fact, it reaches a level sufficient to guarantee that the necessary information is present in practically all cases. In addition, the avoidance of public debates on safety makes it possible for both the information and the causal analysis of events to circulate freely in the professional environment. Both elements are sufficient to guarantee the gathering of information and permit the continuity of the learning process.

However, the decisions made as a result of event analysis have had paradoxical effects, reducing the ability to react when faced with the unknown. The resulting evolution has broken the operational closure that the system maintained from the beginning, when it was orientated to respond to events, giving rise to a new system orientated towards prevention. This emphasis on prevention has created various system characteristics, which are examined below:

1. Regulatory hypertrophy.
2. Emphasis on the determination of responsibility.
3. Excessive automation.
4. Learning difficulties for the human operator.
5. Dysfunctional role of the regulatory bodies.


Regulatory hypertrophy

Regulations are the form, par excellence, of broadcasting information. Incidents such as an engine fire or a serious depressurization do not leave much time to think or to look up an instruction manual; it is necessary to have acquired a set of motor skills that are executed automatically once the situation is identified. In extreme situations, regulations act as transmitters of knowledge. The ability to manage an event cannot reside in a rule alone, since a rule requires the involvement of the human operator in order to be executed. However, contrary to what happens with many information systems, rules are transparent, in the sense that the operator can investigate why a particular conduct is prescribed or prohibited.

In short, rules have an important role in the learning process; however, their development over time has led to some negative effects. Rules can be difficult to comply with fully in complex environments if, at the same time, it is desired to maintain adequate levels of functionality. The very existence of phenomena such as the so-called work to rule2 exposes this imbalance: a strict application of the rules seriously damages functionality.

Paradoxically, an excess of rules can end up protecting the internal organization of the system when this is complex. Once a broken rule is identified, the investigation can assign guilt,3 turning the operator into a kind of fuse for the system. In a complex environment, it is improbable that the exceptional circumstances that accompany an event are not associated with some kind of rule breaking. One possible reason is that the event might not be covered by the rule that supposedly regulates it. There are cases in which this takes on special seriousness, given that it is precisely the following of rules in an inadequate situation that can cause an accident.

Pérez López (1991) discusses a productive–distributive system that specifies the actions to be carried out by persons, and a system of incentives that specifies what persons receive in return for these actions. The distinction between these two concepts is highly relevant, despite the two systems frequently being mixed. Thus, a rule forms part of the productive–distributive system if it informs which actions should be carried out. When, on the contrary, the rule includes a coercive element in the form of penalties for non-compliance, this constitutes the incentive that leads the action in a determined direction. It can be appreciated, therefore, that rules have two separate facets in relation to learning. The first is informative and has great importance as a conduit of learning; the second, imperative in character, is focused on the materialization of this learning, whilst introducing elements of prevention.

2 On the occasion of one of these strikes, known also as ‘work to rule’, a pilot made a remark that, formally, is difficult to refute: when a passenger protests because pilots ‘work to rule’, the passenger should be asked which part of the rules the pilots should fail to follow.
3 This situation changed partially after the already mentioned Dryden accident.


Informative facet

Rules are used as an element of information transmission. Depending on the requirements of the situation for which they are designed, this information may be available in the form of manuals, circulars, databases and so on, or as a set of practices derived from the rule, or it can be included in the operators’ training so that it becomes automatic in their behaviour. In any case, as pointed out, rules tend to be transparent: operators with curiosity can use rules to pose questions, and the answers will serve to increase their knowledge.

Many examples could be put forward. The rules relating to altimetry, and the questions that they pose to a novice pilot, are quite significant. Inevitably, the novice pilot asks why the altimeter is set to the atmospheric pressure at sea level when the actual pressure can be quite different. When the pilot learns that the altimeter information is approximate and that the adjustment is made to maintain a conventional altitude which permits vertical separation between aircraft, various things will have occurred:

1. The risk of forgetting or failing to comply with the rule is reduced, because its purpose is understood.
2. The pilot’s trust in the altimeter will be reduced, through understanding that, en route, it gives a conventional altitude that may differ from the actual altitude.
3. The importance of the local atmospheric pressure information provided by airports will be understood, since in this case the altimeter is used as a measure of the distance to the ground, provided maps with the elevations of nearby obstacles are available.

Imperative facet

Failing to follow rules carries serious harm for the operator, whether the failure stems from error, negligence or violation, or from the search for a better solution, since there is no way to prove that following the rule would have produced a worse result. An extreme form of materialization of the imperative facet is automation, since it does not allow the rule to be broken. In an automated environment, if the rule has been materialized in an industrial design, in programming or in a mixture of both, the previously programmed responses are executed.


It can be concluded, therefore, that the creation and application of regulations respond to a delicate equilibrium: although clearly positive in their informative facet, an excessive emphasis on their imperative facet drives the system in a tightly coupled direction, with the consequent risks of unforeseen interaction between parts of the system.

Occasionally, the distinction between the two facets of a rule is not completely clear. A rule can impose that the operator be informed of a determined risk condition – a positive aspect – at the same time as a recording device registers the fact. If the operator’s margin for action is small – for example, communicating that there is ice on the runway in meteorological conditions where all the airports within reach of the airplane share the same situation – the communication has no real usefulness other than its recording, with a view to determining responsibility in a possible event.

In an environment of regulatory hypertrophy, there is strong pressure towards following the rule. Reason’s (1997) observation that a new accident can be generated while trying to prevent the previous one is representative of this, as is the following figure:

Figure 4.2

How necessary additional safety procedures reduce the scope of action required to perform tasks effectively

Source: Reason (1997), with permission


Reason’s (1997) criticism centres explicitly on the restrictions imposed on the operator, which may block necessary actions when the rules turn out to be inadequate. For Reason, there will always be situations that are not covered by rules or in which they cannot be applied; in other words, there will always be situations of ‘inadequate rule’ or of ‘non-existent rule’. Wastell (1996) goes a step further, saying that the structured method encourages a rigid and mechanical approach in which the method is applied in ritual fashion and creative thought is inhibited. Here, beyond the importance of rules as restrictions on behaviour, Wastell notes the difficulty of identifying a situation as one not covered by the rules – a problem of greater depth than organizational discipline, since it affects the very ability to attribute significance to a situation.

Important as this problem is, it is not yet the worst possible scenario. Strictly speaking, rigidity in the imposition of rules inhibits conduct contrary to them; it should perhaps be clarified that this rigidity does not inhibit creative thought, but rather creative action. The rule’s transparency, nevertheless, permits it to be questioned, at least at a purely cognitive level. As will be seen later, the same cannot be said of information systems.

Emphasis on the determination of responsibility

When a high-risk activity reaches a stage of evolution at which it is considered relatively safe, external requirements – including those channelled along the judicial path – force the determination of responsibility in serious events. This generates an increase in pressure from the imperative facet of rules. Commercial aviation has its own vehicles for obtaining learning without the search for guilt, so long as the consequences of an event remain confined to the limits of the system. However, when an event has had serious consequences, the system no longer has the capacity for self-control and the effects arrive along a different track, aimed at the search for responsibility, that is, for guilt.

A model for gathering information without disciplinary effects is the one used by the Aviation Safety Reporting System, known by its initials ASRS, which, in respect of penalties for errors or violations of rules, establishes the following:

1. Section 91.25 of the FAR (Federal Aviation Regulations) prohibits the use of any report submitted to NASA under the ASRS in any disciplinary action, except for information which refers to criminal actions.
2. When a violation of the FAR comes to the knowledge of the FAA (Federal Aviation Administration) from a source other than an ASRS report, appropriate action will be taken.


3. The ASRS system is designed and operated by NASA to ensure the confidentiality and anonymity of the informant as well as of third parties implicated in the reported event.
4. NASA’s procedures for processing ASRS reports guarantee that the first analysis of these reports is aimed at identifying:
• information relating to criminal acts, which will be handed over to the Department of Justice and the FAA;
• information relating to accidents, which will be handed over to the NTSB and the FAA.

Sophisticated information gathering, processing and distribution systems guarantee anonymity and immunity so long as there is no act liable to generate legal responsibility. From the point of view of learning, this implies that precisely those cases where greater transparency of information might be required are the ones where there may be greater interest in concealment.

The means already mentioned to permit the reconstruction of an event were developed as much for the needs of learning as for the requirement to determine responsibility. Nevertheless, a problem that can rarely be resolved by these means is knowing what would have happened in an alternative scenario, once an event has occurred. If the failure to follow a rule ends in an accident, it is not easy to determine whether following it could have generated a more serious one – an aspect that can be illustrated by two cases that are poles apart:

1. Case of Air France Concorde in Paris: the results of the investigation demonstrated that the crew followed all the rules thoroughly. Given the outcome – the complete destruction of the airplane and the death of all its occupants plus some persons on the ground – a question without answer is whether it might have been better to break the rule that imposes taking off once the take-off decision speed is reached. Additionally, it would be necessary to know whether, as a consequence of the more than foreseeable accident, there would have been deaths or serious material damage. Even if there had been survivors amongst the occupants, the result of the alternative actually adopted – the death of all the occupants and the destruction of the airplane – would not have been known, and


the accident would have been attributed to the pilot’s failure to comply with a rule.

2. Case of Spantax DC-10 in Málaga: An extreme vibration began once the take-off decision speed was surpassed and increased following rotation. The captain decided not to take off, conscious that he did not have enough space to stop the airplane. The airplane left the runway, leading to numerous deaths, and an evacuation followed. Once the evacuation was completed, some passengers forced their way back into the airplane to recover their hand luggage and a fire ensued, leading to more deaths. Of the 48 deaths, the majority occurred in the fire after the aborted take-off.4

4 The rest were caused by two emergency exits being blocked by people whose level of obesity meant they could not sit anywhere else on the airplane. Subsequently, it would be expressly prohibited to place next to the emergency exits persons who might make them unusable.

The subsequent analysis showed that the vibration was due to a blown tyre on the nose-wheel of the airplane. Neither its effects nor the correct action figured in the airplane operations manual, and this left the pilot subject to the generic rule that imposes taking off once the decision speed is exceeded. To put the failure to follow the rule in its proper context, it is necessary to highlight the following aspects. In the first place, the captain did not know the cause of the vibration, and its intensity led him to think that the airplane would not be able to fly. Secondly, the airplane already had a history of serious failures; one of them, similar to the case of United 232, had happened to the same pilot as the crashed flight, with the difference that it occurred on the ground and his decision to abort the take-off avoided a probable disaster. Lastly, the subsequent analysis of the origin of the emergency showed that the vibration would have disappeared once the blown wheel – which was spinning rapidly, unbalanced, under the cockpit – had stopped.


Consequently, a retrospective analysis shows that the generic rule would have been the correct solution; however, at the time and place where the event happened, the elements necessary to know this were not available. By way of additional information, the pilot had extensive experience, and an anecdote from his record gives an idea of his profile. Like a good number of Spanish pilots of the time, he had military training and flew the F-86 fighter jet. The instruction given to pilots who flew this airplane was, in case of an engine failure, to abandon the aircraft, as it was considered uncontrollable with a stopped engine. Due to a then unknown phenomenon – the exhaust from one airplane can instantly cause the engine of another airplane to stop if the jet is received directly into its intake – his engine stopped, and the pilot demonstrated in practice that the aircraft could indeed be controlled and landed. This same pilot decided to disobey a rule, mistakenly thinking that taking off would have resulted in a crash. That decision had disastrous consequences for his professional life.

The invisibility of the alternative decision, together with the pressure of the assignment of responsibility, gives rise to a situation where breaking a rule – no matter how justified it may be at the time – represents a great risk for the operator. The conclusion regarding the pressure to determine responsibility can be stated in the following terms:

1. Due to the complexity of the system, rules cannot foresee all possible contingencies (Reason, 1997). However, one of the steps of the investigation process of an event consists precisely in determining whether the rules have been followed (FAA, 2000).
2. If the action taken had serious consequences and the rules were not complied with, it is almost impossible to show that the rule was inadequate for the situation and that complying with it would have led to worse consequences.
3. If, on the contrary, the action had serious consequences despite the rules having been followed, it is considered that the operator did everything possible, and the rigidity of the behaviour is not evaluated negatively.

In conclusion, there is pressure towards following rules of an intensity not justified by the ability that rules have actually demonstrated to manage serious events. Hale and Swuste (1998) conclude that studies of high-reliability organizations have shown that they operate within goals and priorities clearly defined by the organization and instilled through training. However, they leave the decision


regarding safety in emergency and high-pressure situations to those who carry out the primary functions. The Spantax and Concorde cases are representative of situations where, facing pressures derived from the determination of responsibility, organizations act as a transmission belt and develop rules and regulations with a strong imperative component or, alternatively, materialize these in automatic systems that prevent them from being broken.

Excesses in automation

Automation represents an important difference with respect to rules as regards the process by which abilities are materialized. Nevertheless, from a different point of view, the development of automation responds to the same organizational dynamics as regulatory development. Unlike information systems, a system that adds automated processes can physically impede actions considered prohibited in its design. In consequence, automation, from the point of view of organizational dynamics, can be considered a materialization of the imperative component of rules. This fact was denounced by Beck under the statement ‘technology is legislation’. As such, it presents the same problems as the imperative component of rules, but with an added element: rules can be broken – at a potentially high cost – if the operator considers that it must be done; in an automated environment, by contrast, it may be physically impossible to break the rule (a schematic sketch of such a built-in restriction is offered a few paragraphs below).

The introduction of the ‘fly-by-wire’ system by Airbus carried with it the installation of an automated system designed to prevent the airplane from being taken outside its flight envelope; that is, the system would prevent the stall, or the excessive loads that could affect the structural integrity of the airplane, whether through excessive speed or abrupt manoeuvres. On paper, this system represents a great advantage, because the pilot can skirt the performance limits of the airplane without fear of exceeding them through carelessness or incorrect handling. However, this trust has already caused some accidents. Furthermore, this approach invites the question of whether a situation can arise in which the pilot needs to exceed the limits of the flight envelope. The answer is affirmative,5 and the source queried quotes three situations that might require this action:

1. An engine fire not extinguished following the application of the extinguishers. The alternative way to try to extinguish it would be to take the airplane to speeds beyond its theoretical limits.
2. Avoidance of an imminent collision. Preventing the excessive load of an abrupt manoeuvre can, at the same time, prevent the airplane from changing direction quickly enough to avoid a collision.
3. Landing with excess weight. A landing at full load, forced by an extreme emergency after take-off, is a manoeuvre that is outside the operating limits and can damage the structure of the airplane.

5 Personal communication from a Boeing 747 captain.

Furthermore, some questions are hard to answer. For instance, it can be difficult to know how a highly automated airplane will behave if external factors – a storm, clear air turbulence or wake turbulence, for instance – cause it to leave the flight envelope. To know whether the situation can be recovered and how, whether it will be recovered automatically or not, whether the pilot’s actions to recover the normal position will be obeyed or not… all of this is extremely important.

Nevertheless, automation has meant an increase in aircraft performance. In addition, it has contributed to the constant search for reductions in operating costs. More controversially, it has contributed in a decisive way to changing the operator’s relationship with the airplane. Automation is the hoped-for evolution of an information system: when technological ability permits it, the system directly carries out the required action instead of limiting itself to informing the operator. However, in operating this way, the system becomes altered in two respects, giving rise to the loss of operational closure:

1. The role of the pilot in an automated environment is reduced to the supervision of the system. From the costs point of view, this represents an obvious advantage – the cost of training is reduced and it is possible to optimize the use of the acquired training through multi-rating in aircraft which share the same model of automation (Airbus, 2002a).
2. Automation facilitates tasks in simple situations but becomes one more factor to attend to in complex situations, owing to its ‘initiative’, based on models not always well understood by the operator (Reason, 1997). In this way, a pilot can remain in the position of a spectator and, faced with an emergency, be required to act quickly in a situation whose evolution is unknown.

Despite these disadvantages of automation, it is difficult to give a final evaluation of its contribution to safety. The possibility of automation invites the acceptance of new


risks and, in that environment, automation represents an element that affects safety both positively and negatively. The materialization of abilities that exceed human possibilities necessarily requires the assistance of automation. In commercial aviation, automation has increased efficiency and permits operations in situations that could not be managed by other means.

Defining automation’s contribution to safety requires a unit of measure that remains stable over time; if the units of measure change in their terminology and levels of aggregation, it is difficult to establish comparisons between different situations. Automation, in this regard, does not change the risk of a specific operation but rather changes the operation itself, making the measurement of its contribution difficult. By way of illustration, an automatic landing in a situation of zero visibility means, in the first instance, the acceptance of a risk which, not having existed previously, has no valid element of comparison; at the same time, the automation that makes it feasible contributes a notable level of safety. In principle, it could be interpreted that meteorology has been reduced as a risk factor thanks to automation. There is, however, an alternative analysis based on the fact that, in the absence of automation, the meteorological risk would not have been assumed at all. This modification of two variables at the same time is what makes evaluating the real contribution of automation difficult.

Automation, therefore, introduces its own source of risk, as much through the increase in complexity as through the reduced role of the operator in an automated environment. At the same time, it reduces a greater risk of external origin that would probably not have been confronted in the absence of an automated system. In other cases, automation permits the operation of intrinsically unstable designs, which are more difficult to manage without the help of automated systems but more efficient in their performance.

It can be concluded, therefore, that the use of automation is justified by reasons of efficiency unrelated to safety. Even in cases where automation has a decisive role in safety, it remains closely tied to efficiency: automation’s contribution to safety becomes necessary in situations that have themselves been reached in the search for improvements in efficiency. In addition, automation makes unnecessary, at least on paper, the best contributions of the operator, at the same time as it demands others for which the human ‘software’ may not be ideal.
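Returning to the flight-envelope protection discussed above, the imperative character of automation can be sketched in a few lines of code. The fragment below is purely illustrative: the limit values are invented for the example and do not describe any real flight-control law. Its only purpose is to show how a limit written into software stops being advice and becomes something the operator physically cannot exceed – Beck’s ‘technology is legislation’ in its most literal form.

    # Illustrative only: hypothetical limits, not real flight-control values.
    MAX_BANK_DEG = 67.0       # assumed maximum bank angle permitted by the protection
    MAX_PITCH_UP_DEG = 30.0   # assumed maximum nose-up pitch permitted by the protection

    def protected_command(requested_bank: float, requested_pitch: float) -> tuple:
        """Clamp the pilot's request to the designed envelope.

        Whatever the pilot asks for, the output never exceeds the built-in limits."""
        bank = max(-MAX_BANK_DEG, min(MAX_BANK_DEG, requested_bank))
        pitch = min(MAX_PITCH_UP_DEG, requested_pitch)
        return bank, pitch

    # Even a deliberate, emergency demand for 90 degrees of bank is silently reduced:
    print(protected_command(90.0, 10.0))   # -> (67.0, 10.0)

If one of the situations listed earlier arose – a manoeuvre that genuinely requires exceeding the designed limits – the clamp would apply all the same: unlike a written rule, this restriction cannot be broken, however high the operator judges the stakes to be.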


A pilot’s forgetting to change fuel tanks led to a situation where, as fuel was consumed, the centre of gravity of the airplane moved to an incorrect position, one tank having been almost completely emptied while the other remained full. The automatic pilot made the corresponding adjustments without the pilot perceiving them, and so maintained a situation of imbalance that was not visible until it reached a level the automatic pilot could not correct. A similar situation occurred when a pilot,6 who was playing with the screens of a latest-generation airplane during a passenger flight, discovered that the reserve oxygen tanks to be used in case of a depressurization were empty. The system, by design, did not consider this to be important information and so did not generate an alarm; the expectation that the system will warn about everything important had led the pilots to perform a less careful pre-flight check than desirable.

6 Personal communication from the pilot involved in the incident.

Learning difficulties for the human actor

Technological and regulatory evolution has made some of the pilot’s abilities obsolete or, at times, counterproductive. Learning acquired in a different environment can lead to an incorrect diagnosis based on experience with older-generation airplanes. From the point of view of learning about the system, technological and regulatory evolution has contributed new operational abilities; that same evolution, however, limits the operator’s ability to generate alternative solutions when these are necessary. Automated systems are complex enough that they cannot be fully understood by their operators. The substitution of action based on an understanding of the principles of operation by action based on instructions facilitates automatic execution and fewer errors – vital in extreme situations lacking time for deliberation – but, at the same time, impedes the development of cognitive models that go beyond mere skill in execution.

There are interesting mnemonic resources in the teaching of piloting skills which achieve execution that is adequate and without deliberation, but which do not permit investigation in the search for alternatives if they become a mere recipe for that execution. Thus, to teach the pilot not to press the pedal on the side of the failed engine during an engine failure, the expression is ‘Dead engine, dead leg.’ Or, to achieve coordinated flight, one should ‘step on the ball’, referring to the fact that, in coordinated flight, the


corresponding indicator – a ball submerged in fluid – remains in the centre; if it is displaced to one side, one should press the pedal on that side. For the handling of an airplane with a variable-pitch propeller, the pilot is told that ‘the pitch regulator is the bravest because it is the first to advance and the last to retreat.’ In this same setting, instructors tend to encourage the memorization of a set of engine parameter figures corresponding to the different phases of flight, but few explain that, beyond the memorized numbers, the pressure indication has an external reference, which is the actual atmospheric pressure.

In an automated environment, given the multiplicity of sensors and indicators, the problem is multiplied and training is basically operational. Accidents such as that of the National DC-10 (NTSB, 1975), which occurred while its crew experimented, in an airplane with passengers, to determine where the automatic pilot took its data from, can only be explained in the context of operational training. Leaving aside the imprudence of experimenting on a flight with passengers, the case shows that the crew was unaware of the logic underlying the automatic process. The problem represented by a practice aimed at equipping the operator solely with skills appears only when it is necessary to apply a radically new solution, unforeseen in the design of the automatisms or procedures. The case of United 232, already described, represents an example of a creative solution totally removed from the system parameters in use at the time. According to those parameters, the situation of total loss of control of the airplane was impossible and therefore unforeseen. The facts demonstrated that the situation was possible, and it was resolved by generating an alternative means of control based on knowledge of the system’s operating principles.

Dysfunctional role of regulatory bodies

Regulatory bodies have a clear and positive role in the process of learning. Their dissociation from the interested parties confers on them an optimal position both in investigation and in the generation of information after an event. Nevertheless, their development has led regulators to face opposing requirements.

In the first place, the economic capacity for research is generally in the hands of manufacturers and, occasionally, of the air operators. Regulators are generally part of a state department or are constituted by a group of states; consequently, they tend to have scant possibility of remaining at the leading edge of research. Faced with this situation, it is usual for regulators, before having any real possibility of supervision, to have to obtain from manufacturers and operators the basic information about what they have to supervise. Expressed in other terms, they need to obtain from the supervised organizations the criteria that will allow them to supervise those same organizations in the future. Regulators carry out this supervision by issuing and checking regulations, as well as through direct or indirect inspections of compliance. Additionally, they are responsible for the investigation of events to determine responsibility and to detect possible regulatory dysfunctions.


Regulations – the regulators’ basic instrument – are insufficient to guarantee safety, owing to the systemic complications that they introduce once a certain level of development is surpassed. Nevertheless, the sector has transmitted outwardly an image of absolute control. This image reinforces the idea that, if a serious event occurs, someone has violated a rule. As a corollary, if a serious event occurs and no rule has been violated, the responsibility can fall on the regulator itself for not having acted with due diligence.

An exceptional situation might be that of the Comet airplanes, where it was concluded that, with the knowledge available at the time, there was no way to anticipate the accidents that happened. The same does not occur in situations that border the limit. The most representative was what occurred with the DC-10 and the downgrading of the importance of an incident with the cargo hold door. If it had been considered a design failure, the costs would have fallen on the manufacturer; if, on the other hand, it was a mere recommendation, the costs of the change would fall on the operators. Because of this, the decision had important economic consequences, and these motivated an unorthodox relationship between regulator and manufacturer. Turkish Airlines was among the airlines that had ignored the recommendation and, when a second opening of the door occurred, resulting in the destruction of an airplane of this airline, manufacturer and regulator were held equally responsible.7

Given the seriousness associated with events in commercial aviation, a serious situation generates a demand for action so that it is not repeated. Regulators are then obliged to issue new regulations, either modifying the prescribed action or reinforcing the penalties for non-compliance. With this model of performance they secure their position in the system, but they also contribute to regulatory hypertrophy and to the risks associated with it. It can therefore be concluded that the role the system assigns to regulators pressures them, from the perspective of their responsibilities, to contribute to regulatory hypertrophy. This, in addition to failures in functionality, introduces risks due to complexity and results in a situation analogous to the one related to technology denounced by Reason (1997): as technology was used to attempt to solve

7 Considering the questionable practices that were discovered at the time, it must be added that the accusation of both – manufacturer and regulator − was just.


technological problems, regulations are being used to attempt to solve problems caused by the very complexity of the regulations.

Organizational paradigms and their role in learning

Until now, we have analysed the risk factors and how air safety has evolved, learning to manage them better and better. In addition, we have seen a set of elements that can explain a reduction in the rate of learning. Next, possible organizational paradigms will be put forward with the aim of determining which one air safety currently operates under, and which one it should operate under. For this, the classifications of the behaviour of human operators by Pérez López (1993) and Reason (1997) will be used. Both classifications offer complementary explanations and, jointly, can explain the evolution of learning, both in its optimal phase and when learning slows down.

Pérez López presents a model of organizational development where, in the search for improvements, the emphasis shifts between different actors within the organization. For this, he establishes a typology according to which organizations can function under one of three different paradigms: the mechanical or technical system paradigm; the biological or organic system paradigm; and the institutional or anthropological model paradigm.

Mechanical paradigm

These are also known as mechanical or technical system models, and the organization is viewed as a simple coordination of human actions. Its purpose is to produce and distribute a series of products and/or services. Technical systems can achieve great precision in the analysis of questions related to obtaining maximum production from minimum consumption. They are, therefore, organizational designs centred on obtaining the highest degree of efficiency possible. Technical systems’ capacity to integrate activities has grown beyond what could have been imagined before the appearance of information technology and the associated automation.

Technical systems do not, as part of their design, take in the motivations or needs of the persons who operate them. Their emphasis on the execution of prescribed behaviour contemplates this factor only as a source of possible alterations of control, and therefore as subject to organizational measures aimed at avoiding the loss of control or at regaining it. The ideal behaviour of a technical system is described by Pérez López (1993), who points out that ‘the existence of a data archive which contained all the true affirmations of the type If A, then B, where A represents all possible actions and B all the possible reactions would be extremely useful’. This ideal, however, is Utopian because ‘the accumulation of knowledge is itself a problem of action. Therefore, it’s necessary to handle it on the basis of a prior analysis of the function of knowledge in the solution of practical problems’ (Pérez López, 1991). This implies the need for a model that serves as a guide to that accumulation.


In other terms, a technical system cannot function exclusively on operational knowledge but, instead, needs prior analyses that transcend that operational level. These characteristics describe an organizational model with clear parallels to the Scientific Organization of Work and to Weber’s concept of organization. The difference from these is not one of philosophy; it comes from technological evolution and the potential this has introduced for technical systems to accumulate If A, then B situations as part of their design.

Biological paradigm

The biological or organic model paradigm represents an evolutionary step beyond technical systems. Here, the concern is not only with carrying out prescribed tasks but also with satisfying the motivations of the people who make up the organization. Under an organic model, ‘the specific goals of action not only look for an external achievement but simultaneously for acceptance on the part of the people who make the effort to achieve them’ (Pérez López, 1991). The pursuit of two types of objective leads to the need to harmonize them, since ‘the difficulty in maximising everything simultaneously can be appreciated … a realistic approach to the problem is that which sets off from the fact that there is an entire set of product and price combinations that sufficiently satisfy the current motivations, both for consumers and producers’ (Pérez López, 1991).

Air safety illustrates these opposing objectives. Efficiency is a need shared by consumers and producers, even if it manifests itself differently for each of them; there is nevertheless an opposition, in the sense that the former try to obtain it at minimum cost and the latter with maximum profit. The organization can try to reduce the contradiction through external communication actions aimed at showing that the most efficient course of action is, at the same time, one that satisfies the criteria for equilibrium. Senge (1990) suggests, under the title of erosion of goals, that tension and the search for equilibrium can be resolved, alternatively, by seeking improvements towards an established goal or by establishing less ambitious goals. The use of external communication could correspond to this second way of resolving the tension.

Where the requirement for safety is concerned, behaviours are radically different according to whether the perceived situation is considered high or low risk. Thus, an alteration in the equilibrium – as long as the system remains at the low-risk perception level – can be corrected in one of two alternative ways, depending on the factor that started the disequilibrium. Normally, an explicit reduction in the level of safety would not be accepted; however, as new technological developments are introduced, efficiency increases and this has the effect of a reduction in price to users. Once this first level of equilibrium is reached, it becomes necessary to carry out actions aimed at showing that there has not been a reduction in the level of safety attained. The low-cost companies are representative of this situation: their communication specialists repeat that their companies are subject to the same regulations as the rest.


Starting from this affirmation, they expect the user to draw the conclusion that the level of safety is identical. If we took this same reasoning to the automobile sector, we would have to conclude that a small economy car is as safe as a luxury vehicle of renowned brand, since both are subject to the same regulations.

The difficulty of maximizing everything simultaneously, characteristic of organic models, is therefore reflected in the air safety situation, so long as the perceived level remains in the low-risk position. If the transaction between motives – efficiency vs. safety – were explicit, the system as a whole would find itself in an organic model, since there would be a flow between both types of need. However, the absence of information towards users implies that these transactions are not carried out explicitly; instead, the level of safety is considered constant by the users, who evaluate other factors, for example price, comfort, punctuality, convenience of schedules, and so on.

Institutional paradigm

The organization resulting from an institutional or anthropological paradigm constitutes the most evolved stage. The institution sets itself not only the purposes of an organization but also that of giving meaning to all the human action that it coordinates. The explicit consideration of values with which people can identify improves the motivation of their actions and educates them in this sense. People’s search for the meaning of their actions becomes the basic value directing action – as much from a purely conceptual point of view (the value conceded, for example, to safety as determining a way to act) as from an operational point of view (making operational decisions in terms of the perception of their meaning). In consequence, an organizational model that does not take this dimension of meaning into account finds itself – in terms of evolution – below one that does.

Paradigms and modes of action

The three paradigms described represent a continuum along which the value attributed to the human element increases, reaching its maximum in the institutional paradigm. Under the mechanical or technical system paradigm, the organization is seen as a coordination of human actions destined to produce and distribute products and services; the organization is therefore considered a machine of greater or lesser complexity. Under the organic paradigm, the organization is treated like a living organism: in addition to a technical system, it includes informal interactions that should be taken into account to explain the functioning of the organization. As a result, the organic paradigm reaches the level of persons, taking into consideration their real current motivations to cooperate with the organization. Lastly, under the institutional or anthropological paradigm, the organization is treated like a society where it is necessary to find full coherence between the


organizational and individual objectives. The satisfaction of the people who compose it is not seen as a condition to be met but as its principal objective. The organization's operations, under the institutional paradigm, are conditioned by generic values or guides for action. Because of this, actions that oppose those values are rejected regardless of whether the conduct in question could be adequate from a functional point of view.

Reason (1997) contributes two fundamental elements which may clarify and complement this model of organizational development. In the first place, there is an inflection point, that is, a change in trend during development which has broken operational closure, configuring a different system. Applying this directly to high-risk organizations, Reason identifies this point as the moment when they stop trying to learn from mistakes and, instead, try to avoid them by limiting the freedom of action of the operators.

In addition, Reason makes another contribution that can be related to the organizational development model, although here it refers more to the behaviour of the individual than to that of the organization. Through his works, Reason extended and used Rasmussen's model, known as 'SRK'.8 The model defines three different ways to act:



•	Skills: at this level, skills and tasks are carried out automatically, with occasional conscious checking of their progress.
•	Rules: when there is a change in the situation, it is necessary to change the previously programmed behaviour. This level is used to resolve problems that have been found before, have been the object of training, or are covered by procedures. Written or memorized rules of the If A, then B type are used for their execution.
•	Knowledge: this level only comes into play when no solutions have been found under the two previous ones; it is slower and requires more effort than these. In emergency situations, this type of conduct is subject to problems of comprehension or quantity of information that can be processed.
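The escalation between these three levels can be caricatured in a few lines of code. The sketch below is purely illustrative: the situations, the rule table and the names are invented for this example and are not taken from Rasmussen or Reason. It simply shows the intended order of recourse, with automatic skill first, 'If A, then B' rules next, and knowledge-based reasoning only as a last resort.

    # Illustrative sketch only: Rasmussen's SRK levels rendered as a decision path.
    # The situations, rule table and names below are invented for illustration.

    AUTOMATED_SKILLS = {"routine approach", "manual flare"}          # practised, automatic
    RULES = {"engine fire": "apply the engine-fire checklist"}       # 'If A, then B' pairs

    def respond(situation):
        if situation in AUTOMATED_SKILLS:
            # Skill level: executed automatically, with occasional conscious checks.
            return "automatic execution (skills)"
        if situation in RULES:
            # Rule level: a written or memorized 'If A, then B' response applies.
            return RULES[situation] + " (rules)"
        # Knowledge level: nothing prepared applies; reason from an understanding
        # of the system - slower, more effortful and more error-prone.
        return "diagnose from first principles (knowledge)"

    print(respond("engine fire"))            # resolved at the rule level
    print(respond("total loss of power"))    # falls through to knowledge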

An integration of the various classifications can be established in accordance with Figure 4.3. In the figure, black represents the relative weight of behaviour based on skills, dark grey the weight of behaviour based on rules, and light grey the weight of behaviour based on knowledge. Thus, a technical system will try to ensure that individuals behave according to pre-established skills or rules. A biological model will resort more to rules and leave the door open to behaviour based on knowledge. An anthropological model places its trust in individuals and will try to ensure that they can act on their own knowledge with the fewest formal restrictions possible.

Subsequent literature has also touched on organizational models and has reached similar conclusions. It is worth highlighting current organizational

8	Initials corresponding to the words 'Skills', 'Rules' and 'Knowledge'.


Figure 4.3	Integration of organizational development models (Technical System – Organic System – Anthropological Model)

semiotics and its classification by representation type, via the concepts of direct representation, language representation and conceptual representation. Gazendam (2001) establishes in terms of mental representation what Reason established in terms of conduct. This contributes an element that will be used further on: the treatment of meaning. As Stamper (2001) pointed out, computers don't operate on meanings; only people do that.

The field of knowledge management contributes a classification of organizational types based on the variables of hierarchy, market and trust, which bears great similarity to the models described. A hierarchical organization corresponds to a technical system, an organization based on a market to a biological system, and an organization based on trust to the institutional model. Choo and Bontis understand that a person, an interpersonal system or a collective can be generic objects of trust, as opposed to sociologists who reserve this term for interpersonal relations. In this treatment of trust as a variable, there is no continuum whose extremes are trust and control; rather, it rests on the idea that there is always trust in something. It is necessary to determine in what, or in whom, trust is placed when trying – in this case − to reach the objective of improving air safety.

Therefore, taking up again the concept of organizational paradigm and introducing the elements of trust and meaning, we will analyse to what extent each paradigm is able to adjust itself to the needs of learning in air safety.

Adjustment of the different organizational paradigms to the needs of learning in air safety

The hierarchy between paradigms would be given in the order in which they have been stated: the mechanical or technical system paradigm would be the least evolved and the anthropological the most evolved, with the organic remaining at an intermediate level. The needs of learning and of answering unforeseen events in the initial stages of the development of aviation would have required behaviour situated in the anthropological paradigm, where a shared set of values represented the basic guide for action.

The anthropological paradigm can be considered equivalent to the model defined by Reason as feedback-driven, that is, a model where preventive strategies don't seriously limit


individual initiative and, once an event occurs, it is evaluated to determine what the problem has been. Naturally, given the serious consequences that events can have, this type of model can only be accepted when the organizational values are fully shared by all the individuals and when, in the absence of restrictions on action, they can be expected to behave adequately.

Technological evolution and accumulated learning in air safety would open up the possibility of an alternative development. This development would make maximum use of technological ability and would follow an evolution in the inverse sense to that considered normal under the organizational paradigms model. Technology and regulatory development have become repositories of a great volume of knowledge within the system. This has permitted the establishment of a complex regulatory apparatus and, at the same time, the manufacture of devices that prevent actions contrary to those prescribed or, alternatively, that act without giving the operator any option.

Under an anthropological paradigm, the limiting factor in development is trust, in the sense that the operator must share the organizational values. In a technical system, the limiting factor is, at least apparently, the technological-regulatory development and its ability to act as supervisor of an operator in whose action there is no trust and where, apparently, such trust is not necessary.

Two opposing types of evolution are therefore shown. The first type – from the technical system to the institution − is defended by many authors as ideal. However, many others, without denying this idea, show that the improvement in air safety has followed exactly the opposite path, based on a strong technological and regulatory evolution. Pérez López points out that 'it's much more difficult to lead an institution than a technical system'. This is an important incentive to try to maintain a technical system while this is feasible, at least from the managers' point of view. To the difficulty of control, James Reason would add another reason related to cost: to the extent that learning advances and the work becomes routine, prevention controls are introduced and eventually predominate over feedback controls. After all, it is cheaper to get the work done by a team with relatively few qualifications and controlled by procedures than to trust in the judgement of expert individuals with extensive training.

Hale (1998) gathers both possibilities and approaches the problem by pointing out that the matter to be decided is when it is better to let people make decisions based on their own knowledge in order to operate safely, and when it is more appropriate to limit the operators' freedom of action so that someone else decides what is to be done. A humanistic approach would centre on the reduction of external controls and the internalization by the individual of the organizational values that would make those controls dispensable. In this way, the organization would free itself of the rigidity imposed by its control systems, allowing the use of human abilities that would otherwise have been restricted by organizational design. However, the increase in technological ability applied to air safety has been translated in the opposite sense: an increase in the number of foreseen circumstances


for which individual initiative is not required, but merely the application of skill in following rules. For Choo and Bontis (2002), under hierarchical systems – comparable to technical systems – knowledge is treated as a scarce resource. That is why knowledge is not spread all over the organization; instead, the system concentrates it, together with decision-making ability, in specialized functional units in the upper levels of the organization. A large part of the research has shown that an organization structured under this paradigm may be efficient in carrying out fragmented and routine tasks but encounters enormous difficulties in carrying out new tasks that require the generation of new knowledge.

Dennett (1996) contributes an interesting metaphor, comparing the world of the intelligence services with that of commando units, which may serve to illustrate the two types of functioning. Intelligence services are guided by the need to know principle, which establishes, for security reasons, what information each member must have in order to carry out their tasks, without providing more information than is considered necessary. At the other extreme, in commando units the attempt is made to furnish each member with the most complete information possible in case they have to use it. This second principle gives each agent as much knowledge as possible, so that the team can respond adequately when faced with the unforeseen appearance of obstacles.

At the heart of organizational paradigms are posed two questions that can be stated in the following terms:

•	Under what conditions is it preferable that a person has a full picture of the situation and full freedom to act according to that picture?

Alternatively:

•	Under what conditions is the principal value of the person the ability to efficiently carry out an action prescribed by a rule when faced with a situation identified by that same rule?

McIntyre (2002) responds with a new question that represents the approach habitually used in the management of safety. His question brings together the seriousness of the risk and the probability of its occurrence: an event whose consequences are potentially catastrophic, especially if it has a high probability of occurrence, should be able to count on an available ability to respond. The very complexity of the systems, however, may prevent a clear answer to the question (Beck, 2002a). The calculation of probabilities is carried out on independent events but does not take into account unexpected interactions between events that could multiply their seriousness.

The behaviour of a technical system when faced with foreseen situations makes it ideal for them. Excessive development of a technical system, however, may make individuals incapable of attending to unforeseen situations, so that the control systems may


themselves impede adequate action and even comprehension of the situation. If there are situations with potentially serious consequences, it seems necessary to steer towards anthropological models that use all the resources within their reach. There is, however, a condition attached to this principle: the resources used should not become limiting factors for other options.

One of the motives that have made technical systems the option of choice for the technical-dominant mentality is the unpredictability of the human being. Under the anthropological model, the behaviour of the individual is made predictable by replacing external control with socialization processes and training. These are based on the acquisition of abilities grounded in routines and procedures but, at the same time, those processes provide content that goes beyond the operational level. That content should permit the generation of a mental model that allows the individual to reach the goals of the system in situations not foreseen in its design. An organization with these foundations is more evolved, despite the greater difficulty in controlling it. This type of organization provides the ability to respond in situations that have not been previously analysed and for which no conduct has been previously prescribed by the organization.

The facts have shown that, despite the limitations inherent in the operational knowledge of technical systems, these have continued to function, contrary to predictions. Technological evolution, especially that related to information and communication technology, has permitted an increase in the ability to store conditions of the If A, then B type, and this increase directly represents an increase in the possibilities of technical systems. Moore's law is representative of this progress: according to it, the space occupied by the circuits of a computer is reduced by 50 per cent every 18 months, accompanied, in addition, by a reduction in cost. This technological evolution has reduced the need for a change of paradigm. The need only becomes visible in highly complex environments, via reductions in the rate of improvement or via events caused by the complexity itself.

Air safety seems to show just how far a technical system can go, making use of all the technological and regulatory resources within its reach. The possibility of future improvements beyond that limit will have to support itself on assumptions different from those of the technical system.


Chapter 5

Organizational Learning in Air Safety: Lessons for the Future

The previous chapters have described the development of the system in terms of learning through the management of the different risk factors and the limitations introduced by its own evolution. It has also been highlighted how the ability to learn has been reduced in the presence of an important technological revolution which, one supposes, should have accelerated it.

In the previous chapter, we saw how the description of a technical system can be used to define the commercial aviation environment. Technical systems manifest a set of qualities in terms of predictability, an aspect that has been strengthened thanks to technical evolution. For a technical-dominant mentality, predictability makes the technical system the organizational model of choice. Despite this, the technical system also shows deficits in the form of an inability to respond to the unforeseen. This inability can be understood as a barrier to learning and, therefore, to the improvement of levels of air safety. As such, it will be addressed together with those emerging factors that could contribute to the disappearance of this barrier.

Difficulties for the change in organizational paradigm

The learning ability of a technical system has developed thanks to strong technological development. Nevertheless, complexity grows as new abilities are added and, for air safety, the hypercomplexity phase may have been reached, with its associated effects. The alternative for recovering the initial high level of learning lies in a change of organizational paradigm. This paradigm change would imply a reduction of direct control through regulation or through the restrictions on action imposed by technology. From a technical-dominant logic, this reduction would be an important sacrifice that, in addition, means the acceptance of risks. Apart from the motives derived from the dominant logic itself, there is also a rational motive for conservatism: the rate of improvement in air safety has been reduced, but the current level of accidents is sufficiently low to regard with caution any change that might contribute to raising it.

Apart from the position of the dominant logic and its effect as a barrier to learning, another barrier still exists. Competition between the large manufacturers has led aviation to evolve towards an increase in efficiency, which in the final instance should allow reductions in prices.


If safety is considered guaranteed, it does not enter into the user's selection criteria. In consequence, the user will put pressure on prices and not on increases in safety. The operator who, while respecting the minimums imposed by regulations, can invest less in safety will have a cost advantage over the others and will be able to offer better prices.

In summary, the achievement of significant improvements in air safety requires the satisfaction of two conditions:

1. Increasing pressure towards the improvement in the level of safety.
2. Change to an organizational paradigm with greater learning ability.

Pressure towards the increase in safety

The permanent debate over safety between the different actors in the system – manufacturers, operators, regulators and professional groups − has not included the end-user as part of that debate. Users, faced with the good results shown in this area, have turned their interest more towards questions of operational efficiency – prices, destinations, on-board comfort, and so on – and have not applied pressure in the area of safety, placing their trust in the managers of the system. The North American aviation agency has, for this reason, invited the various actors to be more transparent in the information they give to the outside, inviting users into this territory.

Expectation of increases in accidents

Inside the system there is pressure towards the achievement of greater learning flows due to the expectation of increases in accidents. The White House Commission on Aviation Safety and Security (1997) showed an expectation of major increases in air traffic and, therefore, an increase in accidents in absolute terms. This requires a level of improvement superior to the one achieved by the current organizational model. The NTSB (2001) publishes the same conclusions, pointing out that the number of trips in the United States increased by more than 100 per cent in the 16 years following 1983. According to FAA predictions this growth can be expected to continue, reaching 1,000 million trips by the year 2010, a further increase of 53 per cent. The number of accidents has remained approximately the same in the last two decades. If this number continues, however, the expected increase in traffic will be accompanied by a significant increase in aviation accidents. A ratio of accidents that today seems acceptable, in view of the extensive use that is made of air transportation, may not be so in the future. According to Boeing's predictions, with the predicted increases in air traffic there will be one crash a week somewhere in the world by the year 2015.

Nevertheless, if this increase in accidents in absolute figures obeys an increase in traffic and does not represent a relative increase, the argument in favour of a radical change apparently has one important weakness. The weakness is, however, only apparent. In addition to the public impact of an accident with numerous victims, Hogarth (1987) contributes arguments that


would invalidate an attitude of tranquillity based on maintaining relative accident figures. So, Hogarth affirms that people evaluate predictive ability by paying attention to absolute frequency more than to relative frequency. This justifies the reduction in passengers that occurs following a large accident, or the interest shown by The White House Commission on Air Safety in improving safety levels. Even though the relative frequency remained identical, the public would be guided by the predicted absolute frequency – one serious accident per week in the year 2015 − with the resulting effects on the whole aviation market. Because of this, the conclusion of The White House Commission (1997) report explicitly points out the urgent need for improvement even though the relative frequency is maintained.

Inclusion of the end-user in the debate on safety

In previous chapters, the advantages were pointed out of drawing, from a technical mentality, an organizational boundary in debates over safety, excluding from these the end-user and, in general, all those without specific technical knowledge. The information supplied to the user has as its objective the distancing of safety from the field of the user's preoccupations and is so centred on the positive aspects of safety that it borders on disinformation.

The exclusion of the user from the debate on safety is an important element in the configuration of the current learning model. By excluding the user and, therefore, taking safety out of the factors over which there is competition, data relating to event analysis and conclusions can be transmitted freely within the system boundaries so that it learns as a whole. Additionally, the decisions within the system are simpler, so long as the factor subject to decision is not subject to opposing interests. Lastly, the inclusion of the user would have some specific difficulties. If the operators made counter-charges regarding failures in safety, the result could affect the trust placed in technicians and elevate the demand for safety without clear criteria for action.1

1	One example of heightened requirement levels without clear criteria can be observed in the demands made of the nuclear industry. Its political use, together with the absence of truthful information, led to swings derived more from temporary political positions than from objective demands in the field of safety.

Advertising messages tend to be sufficiently expressive of the attempt to prevent the user from questioning the level of safety. Exceptions are very scant and it is necessary to go back to old advertisements such as 'Iberia: Where only the airplane receives more attention than you,' or the self-proclaimed record by Qantas of not having lost a single airplane since its foundation in 1920 – although they did have to repair a Boeing that suffered an accident in Indonesia for a cost similar to acquiring a new one. More recently, Air Europa, with its mention of having the most modern fleet in the world,


leaves the user wondering whether the advantage of a modern fleet refers to performance, comfort or safety.

Beyond these actions in the field of information, the facts that show the exclusion of the user are numerous. In particular, it is demonstrated by code-share flights where, for reasons of fleet optimization or the maintenance of slots,2 a different airline from the one the user purchased the ticket from can carry out the flight. If safety were an object of differentiation and competition between airlines, it would be unthinkable for one operator to place its passengers on another operator's airplane. Passengers would demand, as a basic right, to be carried by the company they bought their ticket from if they thought they incurred a greater risk by flying with another operator. Doing so would impede fleet optimization policies, with the predictable impact on prices.

2	The right to carry out operations on specific days and at specific times. The maintenance of slots is an important barrier to the entry of new competition; if the holder's own operations do not use them, it is accepted that other companies may use them, which avoids their loss through non-usage.

The same argument can be applied to competition between manufacturers. In decisions regarding fleet renewal or expansion, airlines take into account the operational costs of different models and their suitability for the markets they serve. At no time do they consider the greater or lesser popularity of manufacturers or models with their passengers. Naturally, given the current state of information, passengers would have no basis on which to establish such a preference.

In July 2001, in answer to a strike by Iberia pilots, Iberia management responded to the resignation of pilots occupying technical positions in the company by suspending all flights and alleging that safety could not be guaranteed. Both the striking pilots and the Government responded immediately that safety was guaranteed. The speed and unanimity of a response that was not argued over, and that came from habitually opposed sources, can only be interpreted in the sense that a taboo subject had been touched and that safety entering public debate had to be avoided at all costs.

There are powerful motives for not allowing the user to enter a debate with a significant technical component, including the risk of losing the current transparency within the system. The user lacks information on decisions relating to safety and their grounds and therefore cannot use that information as a discriminating element in purchasing decisions. This opens up the possibility of agreements within the system that, in the search for efficiency, could affect safety levels – a possibility hardly admissible for the user.


The contingency of a large accident that might expose an agreement of this type could imply a loss of trust (Beck, 2002a) and the collapse of the sector.

Flights by twin-engine, long-range airplanes in so-called ETOPS conditions, authorized, in case of an engine failure, to fly more than three hours on the remaining engine, can illustrate this type of situation. There is an important open debate on the risk of this type of operation although, given the reliability of the engines and the operational measures adopted, it really is improbable that a simultaneous failure of both engines would occur.3 If, despite its low probability, this type of double failure due to unrelated causes occurred and gave rise to a disaster, the practice of transoceanic flights in twin-engine airplanes would inevitably be brought into public debate. The possibility of users refusing to fly in particular types of airplanes (the precedent of the massive rejection of the DC-10 following various accidents already exists) and attributing a serious accident to a non-transparent agreement between regulators and those regulated would have serious consequences for the sector. In addition, a reaction of this type would occur in a situation – the current one − where the lack of information makes the user, once trust in the technicians is lost, easy prey for manipulation of information.

Alternatively, the inclusion of the user in the debate would limit the freedom of action of the technicians but, once a decision is made, this would be safer from uncontrollable reactions based on scant information when faced with the contingency of an accident. The North American agency (FAA, 1997) has recommended including the user among the receivers of information and, at the same time, rejects the idea that the regulatory bodies carry out this function.

3	The current rates are below 0.2 engine failures per 1,000 hours of flight. In addition, numerous precautions are taken to ensure that an eventual failure of the second engine would be a totally unrelated event. In fact, there have already been multiple engine failures, but their causes (running out of fuel, ingestion of volcanic ash) would have affected airplanes with any number of engines and, therefore, do not represent a discriminating element.
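Taking the failure rate quoted in this footnote at face value, a rough back-of-the-envelope calculation illustrates why an unrelated double failure is treated as so improbable. The sketch below is purely illustrative: the sector length and diversion time are assumed values, not figures from the text, and the statistical independence of the two failures is precisely the assumption that the ETOPS precautions are meant to secure.

    # Rough, illustrative calculation only; assumes the quoted failure rate and
    # full statistical independence. Flight and diversion times are invented.
    rate_per_hour = 0.2 / 1000      # under 0.2 failures per 1,000 flight hours
    flight_hours = 10               # assumed long-range sector
    diversion_hours = 3             # maximum single-engine diversion mentioned

    p_first = 1 - (1 - rate_per_hour) ** flight_hours        # one engine fails en route
    p_second = 1 - (1 - rate_per_hour) ** diversion_hours    # the other fails during the diversion
    print(p_first * p_second)       # roughly 1e-6, i.e. on the order of one in a million flights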


The recommendation, and the simultaneous self-marginalization of the regulatory bodies from this communication function, leaves the form of communication in the hands of the other actors in the system. This is not an easy task, since it implies training people or associations who, without acquiring a technical mentality, are capable of understanding technical arguments relating to safety. In addition, there are important interests that might try to sway decisions in specific directions. By way of illustration, the reduction of the number of crew in the cockpit is of great interest to operators because of the reduction in cost that this implies (Campos, 2001). In the opposite sense, the pilot collectives may have an interest in maintaining a third crew member for professional and union reasons. Either group can use its effect on safety, or the lack of it, to justify its position. The role of the user as referee in this type of dispute is of great interest, but the user's intervention requires the availability of information and the ability to evaluate it.

In summary, the inclusion of the user in decisions that affect safety contributes an element of solidity to the system. The user's presence makes an uncontrolled situation of panic or rejection as a result of a serious event less probable. Therefore, the inclusion of the user, and the resulting pressure towards transparency in the levels of safety, would imply that the resulting decisions include safety among the factors considered.

Lastly, the change that the inclusion of the user would imply requires a process of information channelling. When an event occurs, the available information is that transmitted by the general communication media,4 whose criteria for omitting or highlighting facts may make it differ from the technical information.

4	There are some exceptions, like the creation of a Web page by Swissair where they explained, in real time, the investigation process of the accident suffered by their MD-11 airplane at Halifax in 1998.

The EgyptAir accident (NTSB, 2002) demonstrates this difference in criteria and its effects. The general information media highlighted the fact that, before the initiation of the chain of events that led to the disaster, the co-pilot, alone in the cockpit, pronounced in a loud voice a phrase translated as 'In God I trust'. These media interpreted the fact as proof of a clear intention of suicide.


The Egyptian authorities, for their part, took advantage of this interpretation to discredit the entire report, highlighting that this phrase is frequently pronounced by Muslims. The subsequent actions of the first officer are reflected in a public technical report on the Internet which does not leave room for error: the pilot who caused the disaster forced the situation so as to be alone in the cockpit and repeated the now famous phrase 11 times, the last of these following a struggle for control of the airplane and the cutting off of the flow of fuel to the engines. This report was issued three years after the accident and includes the Egyptian version. In the meantime, the matter was left open to speculation.

The system can generate pressure towards greater safety by including the user in the debate instead of waiting until, due to an increase in accidents, the user demands it. This inclusion means more than allowing public access to information. The request by the North American agency for transparency implies that users, or the persons or associations that represent them, should have adequate training to evaluate the received information.

Change of organizational paradigm

The mechanical paradigm has shown its relevance for the generation of operational efficiency. Accidents are exceptional events, and a situation has arisen that can be analysed from two distinct points of view:

1. In the positive sense, technological advancement and information transparency have led to a build-up of the ability to respond to events that, until they occurred, were considered impossible.
2. In the negative sense, the materialization of learning in the form of technology or regulations has limited the possibility of responding to and, on occasions, recognizing exceptional situations.

Technological development requires operators to have skills different from those that were necessary in an earlier phase of development. When the concepts that explain the cause–effect relationships associated with an operational solution are not mastered, it is probable that barriers are created not only to its application in the resolution of future problems but also to its incorporation into the mental model of the person who engendered it. In summary, the increase in competence in attending to known events has generated, as a secondary effect, incompetence before unknown ones. The search


for a substantial improvement implies breaking through the limit of ability that a technical system imposes.

Technical systems before unknown situations

An alternative model for action should have the ability to attend to unknown situations and, as such, avoid lines of development that might limit that ability. However, under the current organizational paradigm, the potential improvement is limited because each improvement appears to be accompanied by an undesired secondary effect, opposite to the intended one.

Certainly, it is much more difficult to control an organization that functions under an anthropological paradigm than a technical system. Because of this, a change to a model whose control is more difficult could be too costly, especially for a mentality that has made control its basic objective. Bureaucratic models are much more predictable in their actions, but at the same time these models have serious problems in confronting unforeseen consequences. Sorensen (2002) highlights the limitations of systems based on technology and procedures, whether they are called bureaucratic models or technical systems, and argues that good procedures and good practices are not completely adequate if they are performed mechanically. A safety culture should not only be structural, but should also be part of an attitude that affects organizations as much as individuals; this means treating all matters relating to safety with the appropriate perception and action. Although Sorensen reaches this conclusion in the field of energy production in nuclear power plants, his criteria can be shared by any high-risk organization. Let us again remember that risk and complexity are difficult to separate.

High-risk organizations share a fundamental element that differentiates them from other types of organization. This element is the theoretical requirement to respond to 100 per cent of the serious events that might occur. Although the requirement might be Utopian, a high-risk organization cannot be designed in a way that admits contingencies to which it does not respond simply because their probability is understood to be low. Technology as well as regulations – the basic instruments of technical systems – work only in previously planned situations. Therefore, these resources cannot be counted upon to resolve 100 per cent of the situations. Despite the existence of a technical-dominant logic, this fact has not been ignored, but the response to it has tried to integrate it into a model called the high reliability organization, which Sagan (1993) bases on the following principles:

1. The political and organizational elite grants great importance to safety and reliability.
2. There are considerable levels of redundancy allowing compensation for failures.
3. Errors are reduced through the decentralization of authority, strong organizational cultures, and continuous operations and training.
4. Learning takes place through a process of trial and error accompanied by anticipation and simulation.


Although these principles allow an organization with high reliability to be sketched, their application seems to represent an advanced version of a technical system, with the problems associated with it. A review of each of the principles will allow clarification of this point.

In the first place, the importance attributed to safety and reliability by political and organizational elites is manifested in the form of issuing and controlling regulations.

Redundancy, the second of the principles, increases the complexity of the system and, in addition, presupposes that events are predictable, since an unforeseen event can make redundancy useless. There are different cases that demonstrate this. Among them, and quotable as paradigmatic, is the flight of United 232 (already discussed) and the loss of its three hydraulic systems. This is a very well-known case, but it is far from being the only one. There have been total losses of power in airplanes with several engines – a theoretically very improbable event − for reasons such as errors in fuel calculations or failures in fuel systems, as well as volcanic ash or simultaneous maintenance checks on all of the engines in which the same error was committed on each of them.

The decentralization of authority, the third principle, is unavoidable in an environment like commercial aviation since, in an emergency situation, the opportunity to request permission to carry out the corresponding actions is not usually available. Decentralization, however, takes place in a technological-regulatory frame that inhibits individual actions. This inhibition can occur directly, via technological restrictions, or indirectly, via the consequences of not adhering to the rules. When this occurs, the decentralization is more formal than real and also responds to the control perspective of a technical system.

Lastly, the processes of trial and error are basically created in simulator training, although voluntary error-reporting systems have played an important role. Both activities can result in the mechanization of a set of actions that, if necessary, can be carried out by following routines. Training, in a complex environment of technology and rules, can be directed at the correct operation of the system without going so far as to explain the logical model of that system, whether because of its inexistence or because of the inexistence of an integrated logical model.

In conclusion, the idea of the high reliability organization has gathered a few principles from the institutional paradigm, such as socialization, training, decentralization and learning by trial and error. Despite this, the use of these principles from a technical-dominant logic does not alter the organizational paradigm on which it is based or its barriers to learning.

Instrumental supremacy vs. trust

Luhmann (1996) expresses the idea of the insufficiency of technical systems, pointing out that despite all the organizational effort and rational planning, it is impossible for all actions to be guided by trusted predictions of their consequences. There are surplus uncertainties that must be adjusted to, and there should be roles whose special task this is. Roles such as that of a politician or a manager, for example, are typically


evaluated in terms of successful results more than in terms of measurable rules, precisely because the specific action cannot be identified in sufficient detail in advance.

The consequence of 'surplus uncertainties' is the need to entrust to the operator those situations that, being unforeseen, exceed the competence of the system's resources. However, it can occur that the restrictions on action imposed through rules and technology invalidate the operator as an emergency resource. The trust that, according to Luhmann, should be placed in the operator is defined as a social relationship with its own special system of rules and represents an indispensable resource in highly complex environments. One characteristic of trust is its ability to reduce complexity; therefore, in an environment that presents problems due to complexity, its presence seems necessary.

Luhmann (1996) contrasts trust and instrumental supremacy. If instrumental supremacy can be assured, trust can be put aside. Technological evolution has generated in many circles the expectation of achieving that instrumental supremacy, making trust unnecessary. The results, however, show that technological evolution has also meant an increase in complexity, with its associated problems.

Operational knowledge vs. conceptual knowledge

When instrumental supremacy is sought, an added problem arises. Technological complexity implies a separation between the design and the operation of a system and, therefore, the operator acts, in Rasmussen's terms, on the basis of skills or rules. If operators do not understand the system being operated beyond following rules or skills, they cannot act as an alternative when faced with situations unforeseen by the system. The designer knows the significance of the design but wants the operator to know a set of signs that enable an operation to be carried out correctly, avoiding failures in manipulation and even errors of judgement. Despite this, the meaning of operators' actions may be unknown to them. In Dennett's (2002) terms, operators would be acting under the need to know principle of the intelligence services, according to which they are given only that information considered necessary.

So, the organization's design presupposes that the human operator does not need to know the meaning of the action, but only the correct form of the operation. However, the interaction between the operator and a system should go far beyond mere skill, that is, it should reach the level of knowledge. Only that level of comprehension enables operators to have a model of the automation that can be reproduced cognitively, so that they can anticipate future states of the system.

Regulations and technology as supports of knowledge imply, in addition to the potential loss of meaning for the operator, their use as instruments of organizational control. The theoretical ability to respond to 100 per cent of events, required in high-risk organizations, implies causal knowledge of the system by its operators. At the same time, that causal knowledge implies the freedom to break a rule if an unforeseen situation requires it.


So, when rules are insufficient or inadequate for situations for which there is no pre-designed response, it is necessary to use another type of resource beyond merely following rules and executing routines. If we must advance towards a more evolved paradigm, one fundamentally based on the human element of the system, the selection and socialization processes acquire fundamental importance. At the same time, it is necessary to redesign the system so that it is capable of fully using human abilities. This is especially important with reference to the operator's knowledge of the meaning of his or her action.

A definition of a safety rule may explain the highlighted need. Hale and Swuste (1998) define a safety rule as a defined state of a system, or a defined way of behaviour in response to a predicted situation, established before the event and imposed on or accepted by those who operate the system as a way to improve safety or to achieve a required level of safety. Rules are designed to limit the freedom of choice in a given situation through the imposition of defined forms of response. Following this definition, these authors draw the consequence that spontaneous conduct without prior planning – that is, conduct based on the operator's knowledge − should only be accepted when the situation confronted is unexpected. Given that this model of action – behaviour based on knowledge − is considered the least reliable and the least predictable, it is strongly restricted by organizations. This occurs to the point where it can be considered that a safety rule has been violated even if the operators have acted according to their knowledge with the intention of maintaining or improving the level of safety.

In consequence, the recovery of the system's learning ability rests on two variables, which will be analysed in the next chapter:

1. Deeper knowledge of the meaning of the action itself, beyond that implied by a skill or by following a rule.
2. Trust based on a set of guides for action accepted and promoted by the organization.


Chapter 6

Meaning and Trust as Keys to Organizational Learning

The learning process that has enabled the improvement of air safety has led to changes in the organizational model that guaranteed it and, through these changes, in the learning process itself. In the first instance, the key to advancement was people's ability. To the extent that the system learned and materialized that learning in technical resources and regulations, it moved towards a technical system. As a result of this process, there are two variables – meaning and trust – that have experienced great changes and on which a return to a situation of high learning depends.

To a technical-dominant mentality, technological progress is seen as an opportunity to withdraw trust from the human operator. In its place, it has been preferred to trust in the ability of different devices, and consequently the design of activities is separated from their operation. In doing so, an opacity is introduced that prevents the operators from knowing the system in depth and, therefore, the real meaning of their actions.

Meaning and its role in organizational learning

A policy that trusts more in a device than in a person is a consequence of the belief that, given that more errors occur in actions based on conceptual knowledge than in those based on skills or rules, the suppression of conceptual knowledge is a good solution. In this way, to the extent that it is possible, more trust has been placed in devices or detailed procedures than in the knowledge of the human actor.

The process is not new. There are many organizational designs that, since the Scientific Organization of Work, have tried to eliminate knowledge-based action from the repertoire of behaviour of the people that make them up. In this way, the attempt has been made to reduce the errors and uncertainties associated with human actors and, in addition, to use less qualified people to achieve the same results. Technological evolution has allowed this type of organizational design to be supported by a growing ability to store situations whose response is predetermined. This, in itself, is a form of learning, and the reduction in the rate of learning can be interpreted as a limit of the ability to learn of a system based on technology and regulations.

Although the need to achieve a theoretical 100 per cent response in high-risk organizations is maintained, the way to achieve this level of response is uncertain if


the learning strategy is inappropriate. Therefore, organizational complexity would be forcing many organizations to become high reliability organizations as a response to a high-risk situation.

This development model affects the meaning variable, since systems are constructed that are increasingly difficult for the people who operate them to understand. This effect – the difficulty of understanding the functioning of the system being operated − has consequences that can be linked in the following sequence:

1. Technological designs are progressively more powerful and complex, and this makes their internal logic incomprehensible to their operators.
2. Given that the operator cannot access the logical model of the system, the opposite is attempted, that is, that the design of the system adopts the user's logical model.
3. The system's designer has a limited comprehension of the user's logic, since part of it can be difficult to express in words and in an organized way. Nonaka and Takeuchi (1999) call this 'tacit knowledge', which they define as informal and difficult-to-define abilities that include schema, mental models, beliefs and perceptions. Furthermore, they are so deeply rooted in each person that they are almost always ignored and, hence, hard to explain to anyone else or to embody in an information system.
4. The final result is a system that the user learns to operate in terms of inputs and outputs but basically does not know how it operates; that is, the user lacks the ability that Rasmussen (1986) defined as having 'available some sort of mental model of the system and a strategy to use this model to process observed data'. In other words, operators have to be able to cognitively execute the corresponding programme such that its operation can be anticipated.
5. When an unforeseen event appears, the operator confronts a situation in whose development the operator has not participated and lacks clues with which to respond properly to it. The concept of backward thinking could be especially relevant here, in the idea that information systems should allow the operator to trace the evolution of a situation with sufficient speed and reliability to permit the operator to exercise his role as an alternative to the system itself.

This situation is described by Gazendam (2001), who indicates that the system communicates with its operator via its interface of sensors and effectors, reading and generating messages or actions; the components of the system's cognitive architecture are impenetrable to the user, and only its messages and actions can be perceived. Given that the system 'chooses' what information to give the user according to its design, it can prevent the user from reconstructing a situation and searching for solutions when the system itself fails.

This transformation contributes to the reduction in learning despite the effort in technical and regulatory development. Therefore, with a view to future development, the role played by the operator's inaccessibility to the internal functioning of the system should be considered. In Rasmussen's terms, the impact of the operator's inability to cognitively execute the program should be evaluated.
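Gazendam's point about an impenetrable cognitive architecture can be caricatured in a few lines of code. The sketch below is purely illustrative: the class, states and messages are invented for this example and describe no real avionics. It simply shows how a design that exposes only chosen messages leaves the operator with nothing from which to reconstruct the internal state when the automation's own assumptions fail.

    # Purely illustrative sketch: an automation whose internal state is hidden
    # behind an interface that exposes only the messages its designer chose.
    class OpaqueAutomation:
        def __init__(self):
            self.__mode = "CAPTURE"       # internal state, never shown to the operator
            self.__target_speed = 210     # internal logic, equally invisible

        def message(self):
            # The design decides what crosses the interface: an output, not the
            # reasoning that produced it.
            return "SPEED LOW" if self.__target_speed < 220 else "SPEED OK"

    operator_view = OpaqueAutomation().message()
    print(operator_view)   # the operator sees 'SPEED LOW' but cannot inspect
                           # __mode or __target_speed, so there is little to
                           # trace back from when the system itself fails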


Along the same line, Choo (1999) indicates that information is only useful when the user has found meaning in it, and that the same fragment of objective information can be given very different subjective meanings by different individuals. The possibility of equipping information with meaning implies that the cognitive architecture of the human operator is sufficient to grasp it and to operate, in Rasmussen's terms, under the model of conduct based on knowledge.

SRK model and its relationship to meaning

Human beings are not prone to act based on knowledge and only do so when they have repeatedly failed in the search for a known solution that represents less effort. Under this premise, the action of the human operator will follow the path of least effort: it will begin with action based on skills (S); if these are insufficient, it will make use of rules (R); and only if an appropriate model for action is not found in these will it move to action based on knowledge (K).

It is this tendency towards the strategy of least effort that has made it imperative in aviation to follow check-lists since, given the number of items, it would be possible to forget one if the checks were made on the basis of skill alone. The installation of warning devices that would be completely redundant if a check-list were thoroughly followed has the same sense. One example, sketched below, is the audible warning that indicates that the airplane is in a landing configuration but the landing gear is not down, so that a wheels-up landing could occur.

The feasibility of a change to a more complex action model has preconditions. Obviously, the step from skill-based action to rule-based action requires rules applicable to the specific situation. In parallel, the step from rule-based action to knowledge-based action requires that the operator has that knowledge. The possibility of changing to a more complex action model when the previous one has failed acquires its greatest relevance in high-risk organizations and, therefore, in those for which high reliability is sought. This process can take place in the following three steps:

1. The continual carrying out of an activity gives rise to a set of automated routines that are executed without the need for conscious deliberation and, therefore, with a low number of errors. In addition, not requiring conscious deliberation increases the speed of execution and leaves ability available for deliberation on other tasks that might need it.
2. The accumulated experience allows the design of training programmes that produce automated skills and, on the other hand, the improvement of regulations, allowing the human operator to have this resource available when the automated skill is insufficient to handle a situation.
3. If, in addition to skills and rules, the operator has knowledge of the global functioning of the system and has the necessary information, he or she may have the ability to manage an event for which there are no appropriate skills or rules.
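The gear warning mentioned above is a pure 'If A, then B' device. The following minimal sketch is illustrative only: the signal names and the way a landing configuration is inferred are invented for this example and are not taken from any real aircraft system.

    # Minimal, invented sketch of a redundant 'If A, then B' configuration warning.
    def gear_warning(flaps_in_landing_position, throttle_at_idle, gear_down_and_locked):
        # If A: the airplane appears to be configured for landing...
        landing_configuration = flaps_in_landing_position and throttle_at_idle
        # ...then B: sound the horn unless the gear is down and locked.
        return landing_configuration and not gear_down_and_locked

    print(gear_warning(True, True, False))   # True: warning sounds, a wheels-up landing threatens
    print(gear_warning(True, True, True))    # False: nothing to warn about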


The problem appears when this last step is omitted and, in its place, regulatory development is pushed beyond its possibilities. Peter (1969) established as the operating principle of organizations his famous Peter Principle, according to which people rise until they reach their level of incompetence. Paraphrasing Peter, if skills, rules and knowledge are considered as resources, it could be said that each of these resources has risen until it has reached its level of incompetence. It is always tempting, once a miniature system has demonstrated its usefulness within a limited range of convenience, to try to expand that range. Skills have demonstrated their inability to manage a large volume of data, and this has given rise to the need to use rules, the following of which may, incidentally, require the execution of skills. Rules can, for their part, reach a level of complexity that makes them impractical. Information technology and automated systems free the operator from execution, but they do not reduce the complexity nor do they facilitate comprehension of the system.

Therefore, a design centred on technological and regulatory development limits the use of the operator's conceptual knowledge as a resource. The use of this resource can meet with reticence in organizations where a technical mentality dominates. The substantial difference between knowledge and rules or skills is that the first makes the organization less controllable, since the corresponding behaviours depend on specific individuals and not on instructions from the managers of the organization. For a technical mentality this fact means, to a lesser or greater extent, loss of control or, in Luhmann's terms, giving up instrumental supremacy.

From another point of view, however, there could be alternatives to instrumental supremacy. Kelly (1963) points out, as a fundamental part of his theory, that it is possible for two people to be involved in the same events but, given that they construe them cognitively in different ways, to experience them in different ways. Consequently, their forecasts differ and so do their behaviours. From a philosophical point of view, Kelly's position would seem to approach solipsism, and from the point of view of organizational functionality it would seem to consider the human being basically unpredictable. The author, however, rejects both objections, pointing out that there are cultural similarities between individuals that lead them to perceive situations in similar ways.

The use of conceptual knowledge based on cultural similarities in an organization can guide the operator's actions in the absence of external control. This control is substituted by intention, understood as a characteristic of the mind that serves as a base for the representation of objects and of the state of things in the world. For this reason, for an ability to materialize in a human operator, there are certain prerequisites that can have varying levels of presence in the use made of that ability:

1. Action led by purpose: as a counter-position to philosophical schools that doubt the existence of a reality or our ability to grasp it, Searle (1999) points out that there is, before any philosophical consideration, a 'position by default', a characteristic of which is the direct understanding of reality. Included in this understanding are a meaning and an intention with respect to how things should function.


2. Cognitive model capable of attributing meaning to the situation: for Kelly (1955), human thought is not completely fluid but is instead channelled through a network of channels constructed by humans. These channels structure thoughts and limit access to the ideas of others; the occurrence of unexpected events would lead to a revision of the network of channels, but the revision process would itself come from that same network.
3. Specific skills that allow the execution of the chosen action: in addition to the two previous characteristics, which refer to cognitive models and the purpose of the action, it is necessary to consider the instrumental aspect of execution and the need for skills that are susceptible to being automated, whether through the acquisition of skills or through the appropriate execution of procedures.

The case of Los Rodeos, already mentioned several times, is quite illustrative of the interaction between the different levels. The KLM pilot had developed his principal practice during the previous few years as an instructor in a Boeing 747 flight simulator and held a high position in the airline. In all probability, over the length of this type of professional experience he would have mechanized several technical abilities and, given his role as instructor, it is equally probable that he had very high causal knowledge of the procedures to follow. Simulator flight can include situations that rarely appear in real life, such as engine fires; however, other situations do not appear at all, such as the urgency to take off because the crew is reaching maximum activity levels and, consequently, not taking off would mean looking for accommodation for 350 passengers, with the economic impact this would entail. Similarly, take-off authorizations would have marginal importance in simulator practice.

Despite the urgency to take off, the KLM captain thoroughly followed the check-lists prior to take-off and made a mistake in the one thing he was not accustomed to performing, that is, the request for authorization. In the terms presented in this chapter, an explanation of this conduct would be that the KLM captain had lost the specific skills of real flight linked to the management of contingencies, such as the quoted legal maximums for activity.


This loss could cause a greater level of anxiety for him than the same event might cause in another pilot used to real flight. In this situation, the instructor's cognitive model prevailed, leading him to a perfect execution of the technical procedures and, guided by this model, to forget an element that was secondary within his cognitive model of piloting.

In conclusion, there are various levels at which the human operator can act – skills, rules and knowledge – but the accessibility of this last one depends on the operators' perception of the meaning of their actions. If that perception exists, action is directed by the attributed meaning. A paradox arises, however, because it is at the most characteristically human level – knowledge – where most errors occur. Automation arose, above all, as a response to those errors, but it was not perceived that, together with the gain derived from the reduction of errors, a loss was being introduced in the ability to respond to exceptional situations. The automation of activities can be achieved through technology and through regulations, although in the second case we cannot talk of complete automation. Next, we will analyse both options and their impact on the operators' understanding of the meaning of their actions.

Technology and meaning

Organizational competence can reside in a skill or in a rule for execution by operators. They are not required to adopt a problem-solving stance but simply to execute a prescribed sequence of actions. Once this level is reached, the next step is to be expected: eliminate the operator and allow the sequence of actions to be performed by a device.

Apparently, the change is trivial. When the human contribution has been reduced to the levels of skills and rules, that is, when people stop contributing knowledge as a specifically human element, the change does not seem important. In addition, as Edgar Morin pointed out, the machine-machine is superior to the man-machine. If operators are asked to execute a job in a robotic way, there would seem to be advantages in their direct substitution by a machine.

The change, contrary to appearances, is not so trivial. Even in simple tasks, operators frequently achieve an elevated level of learning. The occurrence of different events, and their own curiosity, can lead operators to ask why it is necessary to act in a specific way. If, in addition, the operator is equipped with training, a simple task can be the first step that allows access to the knowledge level. On the contrary, when the chosen options are automation and complex information systems, operators may not reach an understanding of the internal logic of the system being operated, even after lengthy practice. Comprehension of an information system is not acquired through the observation of machines, their programs and data.


A parallel can be drawn with the comprehension of a novel, which cannot be achieved by observing paper, pencil, printing, grammar and words instead of attending to the story being told. An information system is not a neutral representation of an objectively given world; rather, it is built from a conceptual model that may be unknown to the operator of that system. The obstacle an operator faces in understanding a conceptual model that is not directly observable implies serious difficulties in operating at the knowledge level. To avoid this, knowledge is substituted by specific skills and, as a consequence, operators cease to have access to the meaning of their actions.

During the design of an information system, the operational aspects of the designer's conceptual model are materialized to the extent that the operator is able to transmit them and the designer to understand them. Nevertheless, the information system does not have the ability to directly understand the meaning of things. Instead of having the human operator's direct knowledge, the information system detects and responds to syntactic properties and relationships; that is, it reflects the designer's conceptual model. An automatic pilot is an extremely skilled actor, to the point that it is permitted actions prohibited to a human pilot. Nevertheless, despite its ability to steer an airplane, it does not know that it is flying, it does not even know what flying is and, given that it lacks emotions, it will never look for options outside its program, even in an extreme situation.

The lack of autonomous action beyond an information system's design therefore represents a loss in the ability to attend to unforeseen situations. In an ideal division of functions, the operator should be responsible for these situations, but this possibility may be more theoretical than real in an automated environment. If the conceptual model that guides the functioning of an information system is not visible to its operators, they will gain skill in its handling, but the complexity of the design impedes learning the principles of its operation. Because of this, they will not be able to access the meaning from which the design was initially derived.

Some latest-generation airplane accidents have occurred as a result of confusing situations. The pilot is converted into the operator of an information system and, when this begins to give indications different from those expected, is unable to dispense completely with the system. It must be added that, as a safety measure, all latest-generation airplanes have available the so-called basic instruments, which are not integrated into the whole system. Unfortunately, these instruments are useful only on the hypothesis that the entire information system were to stop working completely – because of, for example, an unusual power failure. Their usefulness is marginal if, instead of ceasing to work, the integrated systems furnish confusing information without the operator knowing what it responds to.

Information system designers have tried to solve the problem of the operator's loss of meaning by presenting visual representations similar to those found in a more familiar environment. In addition, they have improved legibility by omitting data that the system manages directly and whose knowledge by the operator is considered irrelevant.


This improvement in legibility is the result of substantial ergonomic studies and creates in the operator the sensation of integrated information. In a negative sense, however, it adds a layer of complexity to the system. While the system works correctly, only the positive aspects of this approach are perceived; but if a failure occurs, the situation can be more serious than with traditional instrumentation. The representation that the operator receives in an advanced system is a product of the system itself and, as such, can also fail. The operator is then in a situation of confusion that is difficult to recover from, owing to the lack of meaning of the information being received.

By way of illustration, users of personal computers know that the Windows operating system presents a more intuitive user interface than the DOS from which it descends. However, that same interface has added a layer of complexity that has weakened its stability, in relation both to DOS and to other operating systems considered less 'friendly'1 to the operator. Under normal conditions the user works with 'folders' and 'documents' on a 'desktop', and can throw a 'document' into a 'recycle bin'. Naturally, the folders, documents, desktop and recycle bin do not exist. They all form a representation that, while it works, facilitates the user's work. However, when something fails, that same user can be confronted by a blue screen, without the least clue as to what has failed.

In addition to the opacity introduced for ergonomic reasons, there is also knowledge that could be labelled pseudo-causal, based on what Dennett (1998) defines as the 'intentional stance', through which intentions are attributed to the corresponding device so that its actions can be anticipated. This type of pseudo-causal explanation remains useful while the device behaves according to its design. When a fault occurs, by contrast, the operator is left without an alternative explanation. The intentional stance, even though it apparently represents causal knowledge, does not represent real knowledge of the functioning of the system; instead it attributes intentions to the system and predicts states from that attribution.

Even in a situation of perfect functioning, information systems and their associated automatisms can give rise to behaviours whose comprehension is not intuitive to their operator. In situations where various parameters are handled in parallel, for example, there are preference criteria that are not always intuitive but are instead the object of detailed learning, especially as regards the system's behaviour in anomalous situations.

1 The term 'friendly' is frequently used in the information technology field and refers to information systems that appear attractive and simple for the user to operate.


Faced with a stall situation, pilots know that their principal objective is to regain speed with the least possible loss of height. These same criteria are programmed into automated models, but, in contrast to what happens with the pilot, the inclusion of exceptions to the rule can be more complicated, and the pilot only misses them when a contingency appears showing that someone forgot to include a relevant exception in the design.

In 2001, an Iberia Airbus A320 landed nose wheel first at Bilbao Airport, producing serious damage to the airplane. Paradoxically, this happened because the airplane's automated system did not contemplate being very close to the ground as an exception. A gust of wind left the airplane in a situation very close to a stall just before landing. Given the proximity to the ground, the pilots decided to continue with the manoeuvre, bringing up the nose of the airplane for the landing. The airplane's automated system ignored the pilots' order and opted for regaining speed, lowering the nose and driving it into the ground. Similar situations could arise with the protection of the flight envelope in the face of a risk of collision. An extreme manoeuvre by the pilot could be 'smoothed out' by the airplane to avoid the risk of damaging the airframe. Naturally, in doing so the system could incur a much greater risk than the one it is trying to avoid.

Transparency to meaning as requisite in technological design

In environments where critical situations can occur, it is necessary to guarantee the ability to respond beyond the limits of the system. Hofstadter (1987) explains, via the concept of the self-referencing circle, the impossibility of a design going beyond the design itself, which points directly to the operator as an alternative resource outside the system. This explanation has been disputed by other authors who offer different alternatives relating to computability and to the difficulty of parallel processing in current information systems. Nevertheless, for practical purposes – be it because of the self-referencing circle or because of the capacity for parallel processing – the fact remains that the alternative resource is the human operator.
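The Bilbao event described above can be restated in these terms as the absence of a single exception in an otherwise reasonable rule. The sketch below is a deliberately toy illustration, not the control law of any real aircraft: the thresholds, variable names and the 'near-ground exception' flag are all invented.

    # Hypothetical, drastically simplified sketch of an envelope-protection
    # rule that lacks a near-ground exception. It is NOT the logic of any
    # real flight-control system; thresholds and names are invented.

    STALL_AOA_DEG = 14.0        # assumed protection threshold
    FLARE_HEIGHT_FT = 50.0      # height below which the pilot expects priority

    def elevator_command(pilot_pitch_up, angle_of_attack_deg,
                         radio_altitude_ft, near_ground_exception):
        """Return which pitch command the automation lets through."""
        protection_active = angle_of_attack_deg >= STALL_AOA_DEG
        if protection_active:
            if near_ground_exception and radio_altitude_ft < FLARE_HEIGHT_FT:
                # With the exception designed in, the flare order prevails.
                return "pitch up (pilot order honoured close to the ground)"
            # Without the exception, the protection rule wins everywhere,
            # including a few metres above the runway.
            return "pitch down (automation regains speed, pilot order ignored)"
        return "pitch up" if pilot_pitch_up else "pitch down"

    # A gust puts the aircraft near the stall threshold just before touchdown.
    print(elevator_command(True, 14.5, 20.0, near_ground_exception=False))
    print(elevator_command(True, 14.5, 20.0, near_ground_exception=True))

Run with the exception disabled, the protection rule wins a few metres above the runway; run with it enabled, the pilot's flare order prevails – precisely the kind of design detail that the operator cannot see from the cockpit.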


Daniel Hillis, explaining advances in the area of artificial intelligence, tells how it is possible to make programs evolve through a type of 'natural selection'. At the end of the process, the resulting program functions better than the original without the programmer knowing how it does so. Hillis points out that a program of this type could be used as an airplane pilot. It is true that we do not know how it works but, according to Hillis, we do not know how a human pilot works either. Although he may apparently be correct, we do know something important about human pilots: other than in exceptional cases, they intend to keep piloting for a long time and later enjoy a happy retirement. The same cannot be said about any information system, and that alone would be reason enough not to assume the risk of being transported by an information system, at least if there is no human alternative ready to act.

The use of operators as an alternative resource requires information systems to be transparent, so that maximum use can be made of the power of the system without leading the operators into error. Rasmussen, in the context of air safety, gives as the principal reason for seeking transparency the need for the pilot to have a model of the automation that can be cognitively run,2 with the objective of anticipating future states of the airplane. The challenge, therefore, lies in designing systems and approaches that allow operators both to understand the meaning of their activity and to give meaning to the activity itself, so that the set of activities has greater meaning and value for all the parties.

Instead of following the lines highlighted by Rasmussen or Walsham, the automation models currently in use, as demonstrated by some events related to air safety, can leave operators not knowing what is occurring and unable to anticipate the system's reaction precisely at the moment when their intervention is required. In an unforeseen situation, if there is no full access to the meaning of one's own actions, neither operational knowledge nor the falsely causal knowledge coming from manufactured graphical user interfaces fulfils the conditions pointed out in terms of comprehension by the operators. As a consequence, the loss of meaning of their own actions favoured by an information system can make the response to a contingency difficult.

2 Run as a computer program might be run. Rasmussen states that automated models should be sufficiently clear to allow a human operator to 'mentally run' them, which would imply a full understanding of the model – and hence of the automation – and the ability to anticipate its operation under different circumstances.
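Rasmussen's requirement that the automation model be 'mentally runnable' can be loosely illustrated by a design in which the same rules that drive the automation are also used to answer the operator's question 'what will it do next?'. The sketch below assumes invented mode names and a trivial rule set; it illustrates the principle only, not any real autoflight system.

    # Loose illustration of a 'runnable' automation model: one rule set both
    # drives the behaviour and explains it to the operator. Mode names,
    # targets and rules are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class AutoflightState:
        mode: str                  # e.g. "ALT_HOLD", "OPEN_DESCENT"
        target_altitude_ft: float
        current_altitude_ft: float

    def next_action(state):
        """Single rule set used both to act and to explain."""
        if state.mode == "ALT_HOLD":
            return "hold current altitude"
        if state.mode == "OPEN_DESCENT":
            if state.current_altitude_ft > state.target_altitude_ft:
                return "pitch down at idle thrust until reaching target altitude"
            return "capture target altitude and revert to ALT_HOLD"
        return "unknown mode: no action defined"

    def explain_to_operator(state):
        # Transparency: the display answers 'what will it do next?' with the
        # very rules that will drive the behaviour, not a separate metaphor.
        return "[%s] next: %s" % (state.mode, next_action(state))

    print(explain_to_operator(AutoflightState("OPEN_DESCENT", 3000.0, 7500.0)))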


The reasons pointed out would be sufficient to require that information systems be transparent in high-risk organizations. However, this is not the only effect derived from the loss of meaning involved in the opacity of these systems. A system that is non-transparent to its operators impedes them from generating new abilities beyond mere operational skill. The system blocks the cognitive progress by which an operator could reach conceptual knowledge, setting off from operational skill in handling the system.

Technological design and learning

Up to this point, it can be seen that when a technological design makes a system opaque to its operator, the competence of that operator to respond to unforeseen events is reduced. An opaque design, therefore, limits or cancels one of the fundamental abilities of the operator. For Fodor (1995), individuals can demonstrate behaviour that exceeds their experience, since part of that behaviour is caused by thoughts that normally go ahead of their experience. A system that is non-transparent to its user ruptures the cycle of generation of learning, since the mental model created by operating the system is insufficient to achieve that anticipation.

The generation of new learning is inhibited if the increase in the operator's knowledge base is reduced to learning actions of the 'If A, then B' kind while there is no access to information about how the system works. The learning cycle is cut, and the increase in operational knowledge leads only to a type of knowledge in which 'things are seen before principles' (Kelly, 1963). Owing to the lack of perception of principles, the transformation that initiates a new cycle in the learning process does not occur in a natural way. Learning would remain confined to design laboratories, which would introduce improvements and 'gadgets' into systems without being able to expect a significant contribution from the operators. To the extent that innovation might be necessary, either to respond to a contingency or to generate new solutions, a technological design that requires mere operational learning is insufficient, at least in high-risk organizations.

Despite the relevance of the opacity of information systems – as much because of the difficulty in responding to unforeseen events that it introduces as because of the limitation in learning ability – it is common for information systems to be opaque, even in high-risk organizations.3 Vakil and Hansman (2002), referring this fact to air safety, pointed out that the avionics manufacturers they contacted were unable to provide a functional model of the logic of the system.

3 As will be detailed later, the use in the field of information technology of the term 'user transparency' can remind us of Orwell's 'newspeak', since it means exactly the opposite of what it says: the system makes a complex layer of its design invisible to the user, with which, it is supposed, the user will never have to work.


The documentation presented to the FAA (the North American government agency) is a detailed specification of the implementation of the automation, not a global model. The operator's lack of knowledge of the logical model of an information system is so accepted by the manufacturers that they have reached the point of establishing redundant systems that make such knowledge virtually impossible. A functional failure in an information system would logically imply that other systems designed under the same parameters would share the same failure. Redundancy, under these conditions, would not be an appropriate response since, regardless of the number of systems, the same fault would appear in the identical situation. Aircraft manufacturers know this. Their answer has been brilliant but, at the same time, it has introduced an interesting secondary effect. They equipped their latest-generation airplanes with three information systems, each of which had to provide the same output for a given input. That alone would be ordinary redundancy. However – and this is the real difference – they required that each of the three information systems be designed by a different manufacturer. In this way, their inputs and outputs are identical but their internal logic is different; put another way, each system must reach the same place by a different path. This made the redundancy useful, but here is the secondary effect: if a single information system is already difficult for an operator to understand, acquiring knowledge of three is possible only if their handling does not exceed the skills and rules levels.
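The principle of dissimilar redundancy described above can be sketched in a few lines: three routines written in deliberately different ways must honour the same input/output contract, and a simple majority vote arbitrates among them. The implementations, the rounding tolerance and the voting rule are invented for illustration; certified flight-control architectures are far more elaborate.

    # Minimal sketch of dissimilar (N-version) redundancy: three
    # independently written routines, identical contract, different internal
    # logic, arbitrated by a majority vote. Real voters compare within
    # tolerances; here a coarse rounding stands in for that.

    import math
    from collections import Counter

    def channel_a(angle_deg):            # direct trigonometry
        return round(math.sin(math.radians(angle_deg)), 3)

    def channel_b(angle_deg):            # truncated Taylor series
        x = math.radians(angle_deg)
        return round(x - x**3 / 6 + x**5 / 120, 3)

    def channel_c(angle_deg):            # coarse lookup with interpolation
        table = {0: 0.0, 15: 0.259, 30: 0.5, 45: 0.707, 60: 0.866, 90: 1.0}
        keys = sorted(table)
        lo = max(k for k in keys if k <= angle_deg)
        hi = min(k for k in keys if k >= angle_deg)
        if lo == hi:
            return table[lo]
        frac = (angle_deg - lo) / (hi - lo)
        return round(table[lo] + frac * (table[hi] - table[lo]), 3)

    def voted_output(angle_deg):
        votes = [channel_a(angle_deg), channel_b(angle_deg), channel_c(angle_deg)]
        value, count = Counter(votes).most_common(1)[0]
        if count >= 2:
            return value
        raise RuntimeError("channels disagree: degrade to manual control")

    print(voted_output(30.0))

The secondary effect noted in the text is visible even here: understanding why the three channels agree requires following three different internal paths, which is exactly the knowledge the operator is not expected to acquire.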

Limits to technological design

We can conclude that transparency of a technological design to its user is a necessity in environments where a response to unforeseen events is required from that user. The requirement that technological designs do not make the system opaque to its operator represents a limitation for designers. The separation between the design and the operation of a system has often led technological designs to respect only the functional limitation – that is, the design's own operational ability – while dispensing with the effect that the development might have on the operator.


This does not signify a rejection of technology but rather the demand for a requisite for its use in high-risk environments. When operators do not have knowledge of the internal logic of a system, they cannot offer an alternative to that system. Knowledge of the internal logic of a system therefore requires two complementary paths of development:

1. It is not possible to rigidly separate the perspectives of the designer and the operator. According to Rasmussen, the latter must have sufficient technological knowledge to cognitively reproduce the functioning of the system.
2. Technological development, in addition to providing new functionality for organizations, needs to work on itself and create systems whose internal logic is more accessible to their users.

Regulations and meaning

Regulatory and technological developments are closely related, even though the first uses the human operator as the necessary element for execution while the second can carry out actions independently. A rule requires an operator to execute it, while a device can be equipped with the ability to execute by itself the actions prescribed in its programming. Because of this, it can be said that 'technology is legislation'. To the extent that regulation and execution are separated and the rule is not followed by immediate action, the organization moves away from the tightly-coupled extreme and is less prone to systemic accidents. Between rule and execution there is still an opportunity to evaluate the convenience of applying a rule to a specific situation. Regulations, therefore, not having the ability for self-execution, represent a more flexible coupling than technology. This theoretical advantage may be lost in situations where the imperative component of the rule is emphasized excessively and its critical analysis by the operator is discouraged.

In addition to the advantage derived from the separation of the design and execution processes, regulations tend to have greater transparency than technology. Even when their creators do not include causal explanations for the rules, the fact that these are written in a familiar language contributes to their being understood more easily than an information system. In this way, the operator can question the opportuneness or adequacy of a rule, but the rule generally does not conceal the functioning of the organization. In summary, a rule represents a limitation on an action but not on the perception of a situation.

Role of trust in organizational learning

The second variable to analyse is trust. This is a familiar variable in the sociological environment, although there it is linked to a relationship between people.


From one point of view, trust is thought of as an alternative to control. The presence of trust between people or groups makes it possible to avoid using the resources associated with the need to control. Nevertheless, there is an alternative view to the sociological one, which we will use here: trust understood as a tendency to use certain resources in preference to others. Under this second meaning, trust is not restricted to people but can also be placed in a device or a rule.

In technical systems, trust is awarded to the techno-structural component of the organization, that is, to the rules about operation and the technology that allows that operation. In consequence, learning in a technical system flows in that direction and improvements are materialized by way of technological and regulatory changes. When the system reaches a high level of complexity, it begins to generate events derived from that complexity, making the generation of new learning difficult. In many organizations, technological evolution has allowed the management of growing levels of complexity and, therefore, technology has continued to earn trust. However, activities with a high level of requirements, such as commercial aviation, can generate a level of complexity that is very difficult to manage with technology and regulations.

It is of great interest that nuclear safety and air safety have followed very similar paths, so much so that some experts believe that this obligatory cross-reference has operated, above all, in the nuclear field. However, there is an important difference between the two types of organization in the level of requirements regarding safety. In the nuclear field the operator is required to act on symptoms and, in modern commercial aviation with its high technological load, something similar is required of the pilot. However, the resources available to a nuclear operator – including the possibility of physical checks by other people or of consulting experts who are on site and have very detailed information – are far greater than those available to an airplane pilot in flight. This difference means that, when faced with an event, a technical system can provide sufficient resources for its management in one case but not in the other. There are many situations where technology hides its logical model from the operator and, even worse, it sometimes seems that no such integrated logical model exists. In this way, each modification has the potential to become a source of new events.


Faced with this situation, the acquisition of new learning requires the reduction of systemic risk via the introduction of flexibility that avoids tightly-coupled links. To provide the necessary flexibility, the human operator requires real knowledge of causes. The use of that flexibility requires, in addition, a transfer of trust to that operator. An action of this type, given that it modifies the relationship between basic variables for learning, would mean a new rupture of operational closure. In this case, the rupture would run in the opposite sense to that created in the evolution of aviation and would give rise to a radically different system.

If the generation of new learning requires the recovery of human potential, it will be necessary to design organizations, as well as their technology and procedures, in a way that makes such recovery and its use possible. However, this also requires important changes in the organization's selection and socialization processes. Technical systems dispense with the individual's motivation and opt for controlling people; biological paradigms try to integrate motivation with the variables to be controlled; and, finally, institutional paradigms take the individual as their foundation. In newly created organizations, selection processes are carried out counting on trust being placed in people. In already consolidated organizations, by contrast, a set of defensive skills has developed that makes that transfer of trust difficult. The importance of this fact lies in external control being substituted – in an organization that seeks to make use of human potential – by guides for action, based on the supposition that the objectives of the individual and of the organization are shared. The technical system has tried to evade this problem via the evaluation of behaviour under observation as a way of exerting control over the operator. The alternative represents the change to an institutional paradigm and requires determining which specific persons are worthy of trust and, on that basis, can act without the need for prior external control.

The requirement to return trust from technology and regulations to persons may not be universal. Nevertheless, it is required in any high-risk activity which, by definition, demands the ability to respond to 100 per cent of events. The greater the seriousness of the consequences and the more difficult these are to prevent, the greater the need for the human operator to represent an alternative to the established design. To practise this role as alternative, however, operators need to have the trust of the organization in which they operate. Naturally, this trust has nothing to do with grandiloquent declarations but with access to causal knowledge.


Chapter 7

The Future of Improvements in Air Safety

In the previous chapter, we saw the value of trust and meaning in the design of an organization or a system and the impact they have on the ability to improve current levels of safety. The ability to improve will differ depending on the position taken with respect to each of these variables. In this chapter, both variables will be joined to give rise to a design that overcomes the limitations to learning and recovers the earlier levels of learning. There is no single optimal design for learning; instead, its adequacy is determined in each case by the pressure towards improvement and the level of complexity reached.

Criteria for the definition of an organizational learning model

The need to generate learning differs from one situation to another. There are situations where the current level of ability is sufficient; here it is enough to use it, and there is no perceived need to improve that ability by acquiring new learning. In the aviation field, the problem is often mainly one of location: the necessary ability exists but may not be available where it is required. Here, it is necessary to search for the distribution resources that guarantee that the necessary abilities are found when and where they are needed. Lastly, there are situations where the level of competence is not perceived to be sufficient, whether through the appearance of contingencies for which the known solutions are inadequate or through market dynamics, technology or other forces that demand constant renewal. This, too, is frequently the case in aviation. The search for more efficient solutions often implies not just redeploying existing abilities but also generating new ones. To achieve this, organizations need to maintain a flow of learning that permits them to constantly improve their ability. Handy (1990) points out that the wheel begins to turn with a question, a problem to resolve, a dilemma or a challenge to achieve. The two extreme cases are represented by situations where the current level of competence is sufficient and no new learning is required, and by situations that require the generation of new knowledge which, once generated, needs to be distributed.


Organizational competencies and their resources

One of the problems with the necessary competencies is of a topographic character. Competencies do not exist in a vacuum but need to be supported by resources that have specific potential. Figure 7.1 shows in synthetic form the level of adequacy of each of the supports for the generation, distribution and use of competencies:

                          Generation       Distribution     Use
    People                High             Low              Low
    Mechanical Systems    Not applicable   Not applicable   Medium
    Regulations           Not applicable   Medium           Not applicable
    Information Systems   Low              High             Not applicable
    Cybernetic Systems    Not applicable   Not applicable   High

Figure 7.1  Level of adequacy

As can be seen in the figure, the abilities of the different resources differ widely. People cannot be substituted in the task of generating new competencies; information systems reveal their main strength in their ability to distribute the information necessary to generate them; and automatons or cybernetic systems appear as the optimum resources for their use. For their part, mechanical systems are also necessary in the use phase of competencies, and regulations in distribution; nevertheless, they do not reach the ability of automatons or information systems.

An activity should lean towards the resource most appropriate for the most important phase of learning at a specific moment. The way in which one learns will thus be determined by the decision about which resources to base the learning process upon and, therefore, the upkeep and improvement of one's abilities. However, this decision is made as much on the basis of the perception of the need for learning as on the trust awarded to the different resources. In air safety, it can be observed that actions are focused on covering the needs of distribution and use. There is an ample set of regulations and information systems for distributing all the content necessary in the use phase. Obviously, there are also mechanical systems, although these have been strengthened by the inclusion of information systems, giving rise to cybernetic systems. Generation, however, requires people as principal actors and, in a situation centred on distribution and use, they have been left partially relegated.


When – because of expectations of growth in activity and, ultimately, in accidents – the need for generation grows, people need to recover the leading role.

Determinant factors of learning ability

The specific needs of commercial aviation lead to the selection of the correct way to use, to maintain and to increase its competencies. The need to increase them, however, arises from an imbalance between the desired situation and the real situation. This imbalance creates pressure towards improvement which, if led appropriately, increases ability. On the other hand, to the extent that complexity increases, unforeseen effects arise, and learning is reduced when a high stage of complexity is reached. We can interpret the distance remaining up to the level of hypercomplexity as the potential for learning. Consequently, the real capacity for learning corresponds to two factors:

1. Pressure towards improvement.
2. Level of complexity.

Both factors define the environment in which decisions over which resources to use are made. Nevertheless, the dominant mentality establishes a preference to deposit trust in some resources rather than others. In Figure 7.2 the possible situations are described in terms of real learning ability as a function of the current stage of complexity and pressure.

                                  PRESSURE TOWARDS IMPROVEMENT
                                  Low                     High
    LEVEL OF         Low          High potential          High potential
    COMPLEXITY                    Scant realization       Realization according to potential
                     High         Low potential           Scant potential
                                  Scant realization       Hard pressures in realization

Figure 7.2  Pressure towards improvement

Each one of the stages shown in the figure imposes specific management models:

1. In the absence of pressure towards improvement, and with low complexity, there is no incentive towards improvement. There is a high learning potential but there is no reason to make use of it. Few activities find themselves in this happy state and, from the learning point of view, they are of little interest since no learning manifests itself in them. They have, however, an interesting feature: an increase in the pressure towards improvement may be sufficient to achieve improvement. When a high level of complexity is reached, however, as happens in air safety, pressure – used as a kind of universal recipe – is no longer effective.


2. Low external pressure, combined with high complexity, represents a situation where there is scant incentive to improve but, even if this incentive existed, the activity has little potential for improvement. This situation could be observed in air safety in the 1970s and early 1980s. Airplanes were quite safe and, if more learning was required, it derived more from the search for efficiency than from improvements in air safety. Apparently, the required level of learning could be reached by increasing pressure, and this is one of the possible management decisions. However, the high complexity places the organization in a situation of low ability to respond to pressure, showing that this might be an inappropriate alternative. The response of these organizations to pressure can consist, as in a technical system, of multiplying the control mechanisms, thereby increasing complexity even further and leading to counterproductive results. Senge (1990) illustrates this type of situation in what he terms the 'archetype of limits to growth', where there is a limiting factor to growth. For Senge, the appropriate conduct is, instead of increasing pressure, to work on the limiting factor, which in this case is the complexity of the organizational design. Evidently, this was not the adopted solution; instead, technical potential was used to increase complexity.
3. High pressure and low internal complexity can achieve a significant increase in the system's abilities without the risk of meeting the barriers to development imposed by excessive complexity. This situation describes the initial stages of commercial aviation: pressure motivated by the need for survival combined with low internal complexity due to small size. Contrary to the organizations described in the previous point, these do not function like technical systems; instead, while the internal complexity remains low, they can improve by following the tracks left by more advanced organizational paradigms. To do so, they make broad use of individual initiative that is little affected by regulations or rigid information systems. These situations present the most favourable scenario for learning since they have both alternative models at their disposal:
   • Remain in the stage of low complexity and use people as the foundation of the learning process. It is assumed that this is a slower process that can limit the rate of growth.
   • Increase complexity via the implementation of controls, procedures, technology and so on. Until the hypercomplexity stage is reached, this model can also allow the generation of learning. If the pressure towards improvement is strong and there is little time available for a model based on individuals, the conversion to a technical system allows a gain in control and immediate response, at the cost of increasing organizational complexity.


In the terms reflected in this second model, we can also interpret Reason's idea of a change from a learning model based on gaining lessons from events to one based on restricting individual action in order to avoid them. The pressure to achieve fast and ready-to-use results – as in air safety – can justify this change and force trust to be placed in the resources most valuable in the field of use. At the same time, and as a consequence, the use of the optimal resource is limited during the generation phase, consequently reducing ability in that phase.
4. Lastly, in the situation where pressure and complexity are both high, an attempt is made to improve, but there is no potential available to achieve improvement. This is, from everything said so far, the situation that describes commercial aviation in its current model. The situation can be aggravated if, in addition, the dominant mentality is of the technical type. The increase in complexity may have occurred because of the learning model of the technical system itself, having implemented control systems that have led to growing complexity. There is a clear incentive towards improvement but, contrary to what occurred in the low-pressure situation, the organization or the system as a whole has been a victim of its own learning process, which has generated, along its evolution, barriers to subsequent learning. Pressure forces commercial aviation to travel an evolutionary path at the utmost speed possible. The course of that path implies growing complexity, and a point is reached where it is impossible to continue forward. It can be concluded that continuing the improvement process in this situation requires a reduction in complexity.

Alternative learning model

High-risk activities such as commercial aviation have, by their very nature, a strong pressure towards improvement. If, because of this pressure and a dominant technical mentality, the learning process has followed the development path of a technical system, a situation of growing complexity will have arisen. This is the situation for which the learning model would have to be defined. A general improvement objective, attending to pressure and complexity, should consider among its aims the correction of the negative effects that have arisen from the current learning model. This model is based on the growth of complexity, and its principal negative effects – or barriers to growth – are the following:

1. Pressure towards following regulations limits, and on occasion prevents, critical evaluation of their adequacy on the part of the human operator. In addition, regulation can contribute to the false identification of events, producing worse results.


2. Technological evolution has made parts of the system opaque to the operator through their inclusion in devices and programs. The concept of transparency to the user1 has itself been the subject of a misinterpretation that assumes the highlighted opacity to be normal.
3. Often, automatisms act by default, without the knowledge of the operator, and when faced with unforeseen events they appear to restrict action rather than to assist it.
4. The integration of systems leads to situations of incapacity in the face of unforeseen events that produce confusing information.

These negative effects, in addition to being used as guides for the construction of an alternative learning model, can also be used as diagnostic tools of the stage of organizational evolution. Their presence is a clear sign for an organization that it is close to hypercomplexity and the consequent reduction in its learning ability.

Change of organizational paradigm as part of the learning process

When the usual tools for generating new learning cease to do so, pressure on these tools does not lead to sufficient results. In air safety it could be said, without fear of exaggeration, that pressure has been exerted for the past 30 years on tools that no longer give sufficient results. To the extent that complexity grows, trust is placed in resources that show their suitability first in the use phase and second in the distribution phase. In this process the human operator loses the lead role, an event that can be interpreted as a displacement of trust. Trust has been placed in growing technological resources that can substitute, at least partially, for the operator. In addition, technology is used to limit the operator's freedom of action as a way of avoiding errors. In this process, a system is generated that is too complex to be fully understood by its operators. A learning model designed with these parameters shows the operator how to act but not the principles on which the action is based.

The generic solution consists in reversing the situation via a new displacement of trust towards the human operator in order to exit the hypercomplexity stage. However, achieving this without falling back to earlier levels of safety is subject to three basic conditions that, in turn, give rise to a set of instrumental conditions.

1 The expression 'user transparency' in the computing field has exactly the opposite meaning to the one we are using at this point. It is used to point out that, underneath an easy and pleasant mode of use, there is another powerful and complex-to-handle program that is not visible to the user during operation. The expression therefore signals the exact opposite of the intuitive interpretation: 'transparency' is used as a synonym for 'invisibility', to indicate that the user will not be affected by the complex and difficult-to-use program inside the system. This apparent fault in logic is explained by the fact that the expression refers to the operational model, not the logical model – this being the difference highlighted here. For some, this kind of situation would be comparable to the so-called 'Potemkin villages', which showed an ideal scene to foreign visitors to the USSR in its early times. Without attributing any intention to mislead to the designers, it is true that systems create a scene of ease of use that has little relation to what is underneath, and this inevitably comes to the surface when something not foreseen in the design is encountered.


The basic conditions are the following:

1. The limit of complexity in an organization, if it wishes to maintain its ability for learning, is the point where the operator is no longer able to understand its internal logic. The logical model should be visible to the operator; that is, the action of a device should be transparent, to permit its monitoring by the operator. When necessary, the operator should be able to make the transition from operational knowledge to reflexive knowledge, which allows a response to a contingency.
2. The operator should have the ability to act in situations unforeseen by the system. For this, the operator should not be restricted in those situations by automatisms nor by the punitive element of rules.
3. These two conditions cannot be respected at the cost of reducing the current levels of air safety.

The first two conditions are based on the concepts of meaning and trust, while the third represents an element of equilibrium. The development of complexity in organizations has reached the point where the first two conditions are not respected, and the result has been incapacity when faced with unforeseen situations. Nevertheless, the recovery of the ability to act when faced with the unforeseen cannot be achieved at the cost of reducing that ability when faced with the foreseeable. Such an action – deteriorating ability before foreseeable events in order to improve ability before unforeseen ones – would represent making the same type of error, but in the opposite sense. In addition, it would contribute to the deterioration of the global level of safety since, in a system that has accumulated experience, there are always more foreseeable situations than unforeseen ones.

The three previous conditions acquire their importance because it is not possible to respond to an event a posteriori. It is necessary to have the ability to respond to all events susceptible of having serious consequences, and this has a practical consequence. A solution that increases the competence of the system to attend to more events is not acceptable if it incapacitates the alternative resources – generally, the human operator – from responding to other situations unforeseen by the system, sometimes generated by the very solution introduced. The organization needs to attend simultaneously to the use and the generation of learning and therefore cannot admit practices in which one activity represents a barrier to another.
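A minimal sketch of the balance these conditions demand might look as follows, assuming invented situation names and messages: foreseeable events keep their procedural response, while for events outside the design the system imposes no default of its own and defers, visibly, to the operator.

    # Hedged sketch of the three conditions: procedures protect the
    # foreseeable, the operator keeps authority over the unforeseen, and the
    # system states plainly what it does not recognize. All names and
    # messages are invented for illustration.

    KNOWN_PROCEDURES = {
        "engine_fire": "run the engine-fire procedure (automated actions + checklist)",
        "cabin_depressurization": "emergency descent per procedure",
    }

    def handle(situation, operator_decision=None):
        # Condition 3: ability before foreseeable events is not given up.
        if situation in KNOWN_PROCEDURES:
            return KNOWN_PROCEDURES[situation]
        # Conditions 1 and 2: outside its design, the system imposes no
        # hidden default; it defers to the operator's decision.
        if operator_decision is not None:
            return "unforeseen situation '%s': executing operator decision: %s" % (
                situation, operator_decision)
        return "unforeseen situation '%s': no automatic action, awaiting operator" % situation

    print(handle("engine_fire"))
    print(handle("volcanic_ash_in_all_engines", "descend clear of ash and attempt relight"))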


Interference between phases of learning

The different possible resources have optimal possibilities in different phases of the learning process. In consequence, the emphasis an organization places on one of the phases of learning leads it to trust some alternatives more than others. The problem appears when excessive emphasis on one phase of learning – and on its associated resources – affects another phase and therefore halts or slows down the learning process itself. The use of one or another alternative, therefore, should consider not only its suitability for a phase of learning but also its global impact. In particular, the emphasis in air safety on the use of acquired competencies, and on the resources used to obtain those competencies, has affected the ability to generate new learning. The ability for distribution may work appropriately, but that does not change the situation if there is not sufficient valid content to channel. The search for an alternative model for improvements in air safety that tries to improve the ability to learn while maintaining what has already been acquired implies making decisions about which supports to use in the whole improvement process.

The people-based organization

Pérez López (1991) showed the problem derived from the technical system and its emphasis on learning through the use of the 'If A, then B' model, and how the institutional paradigm is superior despite the greater difficulty in controlling it. The facts, however, have shown in the majority of organizations an evolution in the opposite sense. Peter Drucker, in his classic manual on company administration (first published in 1954), had already identified the problem of the loss or underuse of resources. According to Drucker, wherever work can be organized on the basis of the concept of one movement, one task, we have evidence that it is possible to mechanize it, with the resulting increase in efficiency and productivity. In other words, Drucker is pointing out that this is not the appropriate field for people's activity.

Technological advancement has served to expand the field of application of Drucker's affirmation. Currently, it is applicable not only in industrial manufacturing but also in organizations of a very different nature. The intense mechanization of activities related to information can reduce human operators to the category of dispensable, equipped only with operational knowledge that is itself susceptible to mechanization. However, there is one characteristic of a person that cannot be replicated: the ability to generate new capacities, setting off from conceptual knowledge. Once the conceptual level of knowledge is reached in a specific field, the ability to generate new learning emerges. To ignore this ability always represents a loss of resources and, in many cases, that loss stems from an opaque technological environment. This point rests on a characteristic of people that is not replicable – at least in the current state of technology – by any information system.


However, this characteristic also introduces unpredictability and errors – motive enough for a technical mentality to opt for a technical system as the organizational paradigm and to limit the initiative or freedom of the operator to the maximum, if not eliminate it entirely. Technical systems, along the lines inaugurated by the Scientific Organization of Work, although now with technological contributions, are still valid and have acquired new vigour thanks to technological innovation. Via a strict separation between planning and execution, they entrust the generation of new competencies to small and highly specialized groups. Of the operators – in aviation, fundamentally although not exclusively pilots – it is only asked that they use their operational skills. In that environment, initiative in operation is not encouraged, and conceptual knowledge is substituted by manufactured metaphors about the functioning of the system. Learning ability is, therefore, not disseminated throughout the system. Under a strict division of functions, the generation of improvements is carried out in a centralized way, although there may exist information-gathering mechanisms in the environment that facilitate that generation. Lastly, the use of people as a vehicle for the transmission of information is an inefficient strategy, especially in comparison with what can be achieved via information systems.

Drucker's words appear to have been fulfilled: wherever an activity is susceptible to being mechanized, people have disappeared. There are now completely familiar situations in aviation in which personal transactions have been substituted by mechanized transactions, like the purchase of tickets and boarding passes on the Internet, the use of self-service machines for checking in at airports and, of course, the elimination of the navigator on all fleets and of the flight engineer on the majority of them, and so on. In these situations, it seems to have been understood that a person could be substituted with advantage. In some of them there is, in addition, a reduced number of persons with the specific function of acting when the corresponding system ceases to do so or when a situation not foreseen in its design appears. The task of a person acting by exception in an automated environment could correspond to one of the following situations:

1. The person understands how the system works and, when something happens, can generate an appropriate response.
2. The possible incidents are of little importance and, although the person does not understand the functioning, he or she has specific instructions for action.
3. The possible incidents are of little importance and the persons cannot resolve them when they occur, leading to an error or lack of response and a posterior resolution.

There could be a fourth option, voluntarily discarded: the possibility of having a full alternative system that the operator can use in case of an emergency. This is the idea behind keeping basic instruments in highly automated planes: if everything fails, there is an option outside the system. However, the failures of an automated system are not simply 'on' or 'off'; instead, the system behaves in a bizarre way from the point of view of the operator.


If we consider the automated system as a kind of 'partner', the situation is usually not defined by the idea of a 'dead partner' but by the idea of a 'crazy partner'. Furthermore, in an emergency the basic instruments are far from being a full system and provide only a few of its features, while the resources available to manage the situation are the same as in full operation – two people on the flight deck. That is why this option is not analysed here as a real alternative.

The situations numbered 2 and 3 would fall fully among those that Hillis (1998) designates as computable, that is, susceptible to being processed by means of information technology. It is therefore only a question of time before these situations are totally or partially attended to by technological processes. In complex activities like those related to safety in commercial aviation, only the first option fits, since it is impossible to predict all the possible contingencies. It is impossible to have specific instructions for every one of them, and equally impossible to ignore them, trusting that they will be resolved a posteriori. A minor contingency can, owing to the complexity of the organization, enter a strengthening spiral and reach an uncontrollable situation. The principal function of a person here will be to break the chain of events that can generate serious situations.

With this approach there is a dilemma that is difficult to confront: of the possible types of action that comprise the SRK2 model, the last is the most susceptible to operator error. However, before an unforeseen situation, it is precisely that type of action – the most susceptible to error – which is the only one that provides some possibility of saving the situation, given that it can generate responses not included in the design of the system. Reason warns against the use of superior abilities when the task can be carried out by others of lesser rank. In a high-risk activity such as aviation, there should be no experimentation with responses to situations already identified and for which there is a predetermined response in the form of skills or rules. In this way, most of the errors associated with knowledge-based behaviour are limited. At the same time, however, the organization should give real freedom to respond to unidentified situations.

When a system is highly complex and the reduction of its learning ability has appeared, there does not seem to be any alternative to the abandonment of the technical system as the preferred paradigm. If this is admitted, then to recover learning ability the person should become the axis of the system. This idea can serve as a guide to establish priorities and to define the appropriate uses of the resources in learning and in the subsequent improvement. The objective, following the line highlighted by the institutional paradigm, would consist of training a person as a member of a community. This person should have sufficient ability to respond to unforeseen events, which implies a profound knowledge of the functional logic of the system being operated. At the same time, the system's values and the rules for action derived from them should be respected. The motive for respecting the rules, however, should not be a sense of discipline or fear of the harm of not complying with them. The appropriate motive would be knowledge of the reasons that inspire these rules and of the harm, beyond any regime of sanctions, that failing to comply with them might cause.

2 Skills, Rules, Knowledge.

The Future of Improvements in Air Safety

155

If, given the previous conditions, an unforeseen event requires it, a person should have the freedom to break a rule. That freedom should not be restricted by automatisms nor by the severity of a regime of sanctions.

Achieving this objective involves significant work in the area of management, affecting not only people but the whole of the organization's resources. Where people are concerned, selection becomes a key process, followed by a socialization that makes clear what is and what is not admissible behaviour. Lastly, training processes cannot be designed merely to supply operational ability at the lowest possible cost; instead, they should enable operators to reach a profound knowledge of the functional logic of the system being operated, be that an airplane, a control panel, an air operator or aviation as a whole.

This last point is perhaps the most difficult to achieve, as much for economic reasons as for reasons of habit. Some philosophers, such as Bertrand Russell or Edgar Morin, recommend that training should avoid memorization from the outset and instil habits of reasoning in its place. It is undoubtedly a desirable principle but, taken to its extreme, an entire lifetime would be needed to complete schooling if, as Russell ironically proposed, subjects such as whether the Earth is round or flat were to be intensely debated. Commercial aviation is a high-risk, complex system. Its operators cannot be equipped only with the operational skills needed to carry out the tasks assigned to them in the design of the system; they also have to be an alternative to that design, and their role as an alternative should be facilitated. On the other hand, aviation is an important economic activity and the objective of operator training is therefore not personal development but increased operational ability. It is necessary, then, to search for a point of equilibrium between the philosopher's humanist Utopia and the engineer's machinist one.

The set of processes necessary for a change of this kind cannot occur in a vacuum, without altering the way the current system works. For them to work, and for the level of learning to be recovered, the rest of the system's components must also be altered, always with a view to obtaining people with the required abilities and behaviours.

Regulations in an organization based on people

As pointed out already, regulations have an informative component and an imperative component. Under an institutional paradigm, both should be modified. The informative component should be complemented by a clear description of the reasons behind a rule whenever the rule itself does not make them explicit. The more complete this knowledge is, assuming that individuals adopt the objective of improving safety as their own, the greater the level of spontaneous compliance will be. In this way, compliance will not have to rest on the disciplinary regime, and the rule will act as the coordinating element in actions.

This informative component, even before reaching the stage of a rule, can reside in an information system. The distribution capability that such systems add has an unquestionable impact on learning, as long as the motivations of individuals, through selection or through socialization, are compatible with its use.

The imperative component requires that organizations involved in air safety do not limit themselves to acting as a transmission belt for external regulations. Instead, the imperative component should be used to convey what is and what is not acceptable. Maturana and Varela (1996), referring to living beings, point out that a disturbance from the medium does not contain within it a specification of its effects on the living being; rather, it is the structure of the living being that determines its own change in response to it. If we apply this principle to an activity and its organizations, we find that viability requires not merely reproducing the surrounding medium internally, but maintaining the organization's own rules of operation.

This principle, applied to the use of regulations in air safety, is habitually broken. The usual solution is the establishment of a chain of responsibilities that, in the last instance, may reach the operator who failed to comply with a rule and thereby produced some damage or harm. To the extent that the aim is to increase the ability to face unforeseen situations, different criteria – not necessarily more benign than judicial ones – should also be applied to the organization's treatment of its operators. Small violations of rules, committed out of convenience, that could lead to a serious event cannot be judged by the same criteria as a violation of rules committed under limited rationality and with the objective of avoiding damage, even if that avoidance is not achieved.

A paradigmatic case is the Spantax accident already mentioned. In a situation of incomplete knowledge, the captain decided to abort a take-off after having exceeded V1 and even having initiated rotation. The rule is absolutely clear in this respect: take off. Despite the rule, and with the information available at the time, the captain had serious reasons to doubt that the airplane was really capable of flying. Faced with this, he decided not to comply with the rule and aborted the take-off.

The opposite situation is represented by failures to comply that, without causing serious accidents, arise from significant carelessness. Minor non-compliance, judged by the events it leads to, can be serious as an indicator of neglect or disinterest. Conversely, major non-compliance can be motivated by the search for a better option under limited knowledge, as in the Spantax case described above. The behaviour of an organization engaged in improving safety should reflect this fact and not limit itself to transmitting external regulations intended for objectives other than improvement.

What is required, therefore, is not a softer sanctioning device but one that is different in its criteria. Admissible and inadmissible behaviour should be reflected in the organization's own rules. To act otherwise means renouncing its own model of socialization and accepting one imposed by an external legal system. Based on its own criteria, any organization has the authority to determine when it passes on, through its sanctioning apparatus, the consequences of a failure and when it accepts them as its own without transmitting them to its members.

Information systems in a people-based organization

The principal task regarding information systems, if they are to function in an organization based on people, is to make them transparent to their operators. This requires work on two fronts: in the training of operators and in the design of the systems themselves.

When the operator – the pilot in this case – has learned in an information-systems environment, acquiring operational knowledge is simpler than in the absence of such systems. However, the system itself makes acquiring in-depth knowledge difficult because of the opacity of its components. Learning progresses in stages, from purely instrumental skills to the acquisition of concepts and values related to the activity, and the information system itself can prevent or seriously limit reaching any level of knowledge above the merely operational. Under these conditions, the operator has no possibility of transcending the system and becomes an appendix of it, unable to exercise the role of alternative resource. To exercise that role, sufficient clues are needed to make an appropriate diagnosis of the situation. In addition, the operational ability to act outside the skills prescribed by the system may also be lacking, supposing, of course, that the system permits such an option.

The operator's training, therefore, should go beyond operational models and indicate what the system's logical model is. The 'black box' approach, placed between a set of inputs and outputs of the information system, cannot be valid for high-risk organizations; the operator needs to know the contents of the 'black box' better. The designs of information systems themselves use languages and programming logic that are difficult to understand for anyone not familiar with the technology. This is another challenge for information systems: to construct languages and logical structures simple enough to be explained to the operator. Both tasks – training and a change of design – should advance in parallel, and can be used to define the limit of development of a system. The real limit should not be what is feasible with an information system but, when complexity and risk are involved, the ability of the operator to understand its logical model.

Cybernetic systems in a people-based organization

Cybernetic systems add a problem of their own to the opacity of information systems: their ability to act independently of the operator.

This inconvenience might seem to invite the elimination of every type of cybernetic system. However, for a given ability to exist, the resources supporting it have to be in the necessary place at the necessary time, and some abilities are inaccessible to a human operator. Nietzsche observes that 'all habit makes our hand more ingenious and our genius less agile'. There are situations where an ingenious hand is needed far more than an agile mind, and vice versa. There are even situations where the most ingenious or strongest of hands is insufficient and, in these cases, the only option is the cybernetic one. That is why cybernetic systems exist: their precision and their inability to tire allow them to manage situations unreachable for a human hand. It is harder, however, to justify their use unless their action is perfectly known to the operators and cannot lead to situations over which the operators have no real control. For this reason, cybernetic systems are subject to the same requirements as information systems, with greater emphasis because of their capacity to perform actions directly.

Finally, it should be pointed out that the functional modifications of resources proposed in the previous sections contribute to a common objective: to prepare operators able to act in the face of foreseen or unforeseen events and, in this way, to increase the system's ability to respond.

Specific problems of the people-based organization

De Geus (1997) emphasizes the person as a member of a community whose objectives are shared. In air safety this seems clear; however, situations can arise where the person's involvement with the organization is scant. The character of a community, necessary if we wish to obtain the full potential of people, can be damaged by organizational actions that send contradictory messages to its members.

The socialization process is intended to prepare a member of a community to act according to values derived from the culture of the organization or the activity. Using people as a mere economic variable makes this socialization difficult and turns the individual's relationship with the organization into a transitory coincidence of interests. The relationship of an organization with its members is economic, but it is an error to think that it is exclusively economic. Although it is not the objective of this work to go deeper into this subject, mentioning it is inevitable given its impact on the process of socialization and on the extent to which members of an organization will consider themselves members and act accordingly. Soros (1999) coined the term 'market fundamentalism' to describe situations in which market dynamics impose themselves even where they should not operate. The relationship of an organization with the people who make it up is often a display of such market fundamentalism, and this affects the whole set of socialization processes.

Chapter 8

Conclusions

Air safety uses a learning model that is showing signs of exhaustion. A learning model centred on the development of technology and regulations is unable to evolve at the same rate as the resources on which it depends, owing to the secondary effects of that model put forward in the previous chapters.

Commercial aviation has fundamentally learned to be safer through technological resources and regulations. In doing so, it has increased its complexity, with the result that large numbers of people who work in aviation do not understand its internal operation. The problems derived from this have been partially avoided by introducing new, more powerful technology that allows a larger number of events to be managed, at the cost of increasing complexity even further.

Limitations of the current learning model

As put forward in previous chapters, it is possible that the reduction in the level of learning is a phenomenon associated with the learning model itself. We have seen how technological advancement has become a necessity in order to prevent new contingencies. We have also seen that system complexity itself cancels the operators' ability to represent an alternative to the system when it fails.

The current state of technological advancement, and the new developments to be expected, may take the discussion about the roles of people and machines in complex systems to a new stage. Nevertheless, neither current nor foreseeable advances seem capable of avoiding, for the time being, the problem identified as the initial cause of the reduction in the flow of learning: human operators do not understand the system they handle and therefore cannot represent a real alternative when faced with a problem unforeseen in its design, given that they do not know how or why the event has been generated. Some impressive technological advances that have been announced do not signify qualitative changes in the learning model, but rather another step along the same line of evolution.

The fundamental limitation of this learning model is that, while it builds the ability to widen the field of foreseeable events, it destroys the ability to attend to the unforeseen. In consequence, the frequency of the unforeseen and the seriousness of its consequences become the determining factors in the acceptability of a model that, even leaving this decisive factor aside, confronts contradictory requirements. On the one hand, the complexity of the activities associated with aviation forces openness to unforeseen situations, often generated by that same complexity. On the other hand, the demands of efficiency encourage the search for 'tightly coupled' operations that introduce complexity and make operations opaque to the operator.

The requirements of air safety cannot accept the limitation of the learning model as just one more characteristic; they demand improvement beyond those limitations. Any activity subject to serious events, such as aviation, has to base itself on learning models that do not disable the alternative resources that allow those events to be attended to.

Through its evolution, aviation has learned at the same time to be safer and more efficient. However, increasing the efficiency of normal operations also increases the efficiency of failed operations and, therefore, the potential seriousness of their consequences. Said another way, efficiency is a two-way street: while it creates the means to speed up normal operations, it also creates the means to speed up the consequences of a failure. The failures of an efficient organization are also efficient.

In consequence, the learning model based on technology and regulations represents a double risk. On the one hand, it pushes towards greater complexity; on the other, the growth of complexity leads to a high-risk state for which the model has no appropriate response. It can be concluded that an organizational model based on technology and regulations is effective until it reaches the threshold of complexity at which the system begins to become opaque to its users. Above this threshold, an alternative model needs to be found.

Changes in the relationships between variables within the system

The improvement of information systems has allowed their designers to store within them the capacity to execute actions that are frequently well beyond those accessible to human operators in terms of precision and other factors. Nevertheless, information systems do not attribute meaning to the actions they execute; rather, it is their designers who allow systems to simulate an understanding of the meaning of their actions. This was illustrated by showing that an automatic pilot is extremely skilled at the task of piloting an airplane yet neither knows it is flying nor what flying is.

Paradoxically, Winograd and Flores (1986) presented this as the great advancement of information systems. Their construction allows the different levels of a design to be made independent, each equipped with its own logic. This separation between levels allows someone to operate a system without knowing its logical structure – that is, to operate in a way very similar to that described for the automatic pilot – or to know the logical structure without knowing the physical structure on which it rests. The advancement is real but, at the same time, it is the origin of the problem. The operational model does not correspond precisely to the functional logic model, and the operators' lack of knowledge of the latter leaves them unarmed when an unforeseen failure makes the discrepancy between the two models obvious.
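
The distinction can be made concrete with a deliberately small sketch. The following Python fragment is not taken from any real avionics system; the class, the mode names and the 'sensor disagreement' event are invented purely for illustration. It shows an operational model (the few calls an operator is trained on) laid over a logical model (an internal mode machine the interface never names); when an internally foreseen but externally unnamed condition arises, the interface can only report a state the operator has no way of explaining.

```python
# A minimal sketch, assuming nothing about real autoflight systems: the class,
# modes and the 'sensor disagreement' event are invented for illustration only.

class AutoflightSketch:
    """Operational model: the few controls the operator is trained on."""

    def __init__(self) -> None:
        # Logical model: internal state the interface never exposes by name.
        self._mode = "ALT_HOLD"
        self._target_ft = 0
        self._envelope_ok = True

    def set_target_altitude(self, feet: int) -> None:
        self._target_ft = feet

    def engage(self) -> str:
        # An internal protection: behaviour the operational model cannot explain.
        if not self._envelope_ok:
            self._mode = "REVERSION"
        return self._mode

    def sensor_disagreement(self) -> None:
        """A condition foreseen by the designers but absent from the operator's model."""
        self._envelope_ok = False


system = AutoflightSketch()
system.set_target_altitude(24_000)
system.sensor_disagreement()   # unforeseen from the operator's point of view
print(system.engage())         # prints 'REVERSION': the 'crazy partner' state
```

An operator trained only on set_target_altitude and engage can report what the system did, but not why; reconstructing the 'why' requires exactly the access to the logical model whose absence is being discussed here.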


Knowledge of the logical model by the operator is something that, in the initial phases of development, can be taken as given. However, as the system grows in complexity it becomes more difficult for the operator to 'cognitively run the system' (Rasmussen, 1986), confronting designers who consider this characteristic important with a decision between two options:

1. Since the organization is more complex, the first possibility is to train operators in the technologies used to design the logical model. These subjects are themselves complex to learn and may be far removed from the operator's training and experience; this cannot, therefore, be considered a viable way for the operator to come to know the logical model.
2. If the first option cannot be achieved, a second possibility is not to introduce developments that are beyond the comprehension of their operators, that is, to give up the advantage that Winograd and Flores (1986) pointed out. In many fields, including air safety, this condition could mean a step backward of several decades with respect to the current situation.

Faced with the difficulty of either of the two possibilities that would give the operator access to the logical model in complex environments, the current learning model has opted to dispense with this condition. In doing so, operational closure has been broken, giving rise to a different system through the progressive separation of its logical and operational models.

Reason made a distinction – discussed in previous chapters – which we could translate as systems that learn and systems that avoid. Once operators have come to master only the operational model, their contribution to the learning process is marginal and the system begins to reinforce itself against all foreseeable events. Through this change, the operator occupies a less relevant position within the organization. Using an analogy from the judicial system, we can say that while operators know the logical model they behave like expert witnesses and are asked for interpretations; when they know only the operational model, they behave like eyewitnesses who are questioned about facts, not interpretations. Once this point is reached, the improvement model expects the operator to execute the operational model correctly rather than to contribute to improvement itself.

This offers some advantages in terms of efficiency. On the one hand, the operator needs less training effort, since the model is simpler to understand; on the other, an operator restricted by technology and regulations makes the activity more predictable. Given that the appropriate conduct is prescribed at origin, greater homogeneity can be achieved in the conduct of different operators. Seen from another point of view, however, the operator's function has been emptied of content in favour of system design. This has been possible thanks to a technological evolution that allows ever more complex designs covering a growing number of possibilities. This evolution has, in addition, led safety managers to learn to trust technology and procedures and to withdraw that same trust from the operator. Maximum regularity in action is expected of the operator, according to what the system design prescribes.


Once trust is withdrawn from the operator, the lack of access to the logical model – and therefore to the meaning of the action itself – seems a small matter, and it reinforces the tendency to empty the operator's work of content while increasing the role of technology and regulations.

The consequence of this evolution has been analysed throughout this work and can be summed up as follows: the organizational learning model has changed, displacing its preferences towards technology and regulations and progressively marginalizing the operator. Were it not for its effects, this change could be described as trivial. Commercial aviation has followed an improvement model based on technology and regulation. With this model it has learned to improve everything that can be achieved through technology and regulations and, in parallel, it has worsened everything that requires the contribution of a human operator.

Future lines of development

It would be difficult today for an organizational evolution to take place that established the operator's comprehension of the logical model of the system being operated as a condition. However, if this condition is proposed as a future requirement, it can force new developments of great importance in both the technological and the regulatory fields. Fulfilling it makes radically different designs necessary, in which ergonomics aims not only at more legible operational models but also at another type of ergonomics that could be called cognitive ergonomics: full use of human potential requires people to know the meaning of their actions and, therefore, the logical model on which those actions are based.

Air safety is a field where the need for new learning is greater than what the current model can provide, and a new, alternative model should be developed. The basic lines of this alternative model, at least, can be found in the emerging field of organizational semiotics, the science of meaning applied to the design and management of organizations.

Future research along these lines may introduce new variables into technological designs. Until now, the meaning variable and its importance to the human operator have been ignored; meaning has not evolved according to a pre-defined purpose but as a dependent variable of low rank. Instead of paying attention to the operator's understanding of meaning, organizational evolution has centred on operational improvements, and this lack of attention to the human actors and their need to assign meaning to their actions has led to a progressive deterioration of this variable. If it is accepted that the generation of new abilities requires human operators, precisely because they can attribute meaning to actions, and if it is accepted that this generation of new abilities is necessary, a new scenario inevitably follows. In this scenario, meaning must be treated in the system's design as a variable with its own value, one in which a good portion of the ability to generate learning, including the ability to attend to unforeseen events, resides.


To the extent that serious events do not occur, the absence of meaning for the operator makes the whole system more predictable and allows more efficient solutions to be sought. To the extent that new and potentially serious situations can appear, human operators will need full access to the meaning of their actions, and any design that hid this would be inadmissible. In consequence, in commercial aviation and in high-risk organizations in general, the operator should have access to the logical model. Establishing this condition would set a developmental ceiling in terms different from the current ones: the limit to the development of the system would be imposed by the ability of its operators to understand it.

Giving this requirement practical effect could require significant technological development, but along different lines from the current one. Technological designs should be transparent enough to offer not merely an operational model but a logical model, allowing the operator to understand how the system really works. Current developments accept that the system's designer establishes its internal functional logic while the operator manages it, the two activities being treated as separate specialities, each with its own logic. This great advantage, stated by Winograd and Flores, is dispensable if one considers, on the one hand, that it is generating the problems described and, on the other, that it can be avoided if part of the available technological power is dedicated to making technology more legible.

Put another way, a good investment of technological ability would be to make technology less efficient. Technology now has sufficient power to dedicate part of itself to constructing sub-optimal systems so that their operators, at the cost of losing a negligible number of features, can understand them. The proposal may sound strange, but it is precisely in the field of information technology that this solution was first applied. Programming in the old assembler languages was extremely complicated and slow. When compilers began to be built for the so-called high-level programming languages, the internal logic of the program became clear to the programmer while, at the same time, the final product (the program itself) was slower and less optimized, but easier to read and understand.

Final conclusions

I would not wish to finish the conclusions without a special mention of the late Professor Pérez López, given that his organizational development model sets out a line of evolution that observable events seem to contradict. Professor Pérez López upheld the view that technical systems have a ceiling on their learning ability, based as it is on the storage of 'if, then' conditions. He predicted a change towards an institutional paradigm, as much from his conviction that this would happen as from humanistic considerations about the person's role in an organization.

The development of information technology in recent years has dramatically increased the ability to store 'if, then' situations. This created the mirage that the technical system was the only feasible organizational development model and, in addition, that its potential for development was virtually infinite, limited only by ever-growing technological power.
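
The ceiling that Pérez López described can be illustrated with a deliberately trivial sketch. The following Python fragment is not drawn from any real system; the events, phases and responses in it are invented for illustration. A technical system of this kind is, at bottom, a store of 'if, then' conditions: growing technological power makes the table of foreseen cases larger, but it never gives the system an answer for a case its designers did not put into the table.

```python
# A minimal sketch, assuming invented events and responses: a technical system
# as a store of 'if, then' conditions. More computing power enlarges the table;
# it does not create answers for situations the designers never foresaw.
from typing import Optional

FORESEEN_RESPONSES = {
    ("engine_fire", "in_flight"): "run the engine-fire checklist",
    ("depressurization", "cruise"): "descend to a safe altitude",
    ("engine_failure", "after_V1"): "continue the take-off",
}

def rule_based_response(event: str, phase: str) -> Optional[str]:
    # Foreseen pair: a canned response. Unforeseen pair: nothing at all.
    return FORESEEN_RESPONSES.get((event, phase))

print(rule_based_response("engine_failure", "after_V1"))  # a stored answer
print(rule_based_response("cargo_door_loss", "climb"))    # None: outside the table
```

However large the table becomes, the unforeseen pair still returns nothing; attending to it requires an operator who understands why the stored answers are what they are, which is precisely the argument of the preceding chapters.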


For many organizations, including high-risk organizations and, among them, commercial aviation, a bet on technology was the equivalent of stepping onto a moving walkway: the walkway would keep increasing its own speed and they would be able to progress ever more rapidly. This model of development, however, has introduced its own barriers in the form of hyper-complexity, producing a situation that can be illustrated by the following metaphor: in the deepest mines in the world, the bottom cannot be reached in a single lift because the weight of the cable breaks the cable and, logically, making the cable thicker does not fix the problem.

It seems, therefore, that the model advocated by Pérez López, which involves a return to people as the primary resource in highly complex situations, may be more relevant today than when it was put forward fifteen years ago. Access to the logical model or, in other terms, to the meaning of one's own action, is a basic element of that model. Using that element, however, has as its condition the return of trust in the operator. Certainly, that return of trust means opening the door to variability of behaviour, but the insistent search for homogeneity is not a valid response in an environment where events not foreseen in the initial design can appear.

Trust in the operators, and in the idea that their variability will be used in the organization's favour, also has an important implication: the operators' actions are driven not by external impositions but by their own conviction that this is the action that should be carried out. That conviction, in turn, comes not only from knowledge of the logical model but also from a socialization model that makes clear which conduct is acceptable, which is not, and why. The Bismarck code of the nineteenth century held that 'all that is not expressly ordered is prohibited'. Technological and regulatory evolution has allowed the field of what 'is expressly ordered' to expand, but it has also allowed the limit of that model to be seen. An alternative model, based on people, has important functional and social implications, implying technological changes as well as changes in the internal functioning of organizations.

The field of artificial intelligence should also not be lost sight of in future development. The first attempts in this field set out from symbol systems with hypothetical parallels to external meanings that did not go beyond 'if, then' situations. Since then, technology has continued to advance, and authors such as Kurzweil or Hawkins propose operating principles for future systems whose description seems more appropriate to a person than to an inanimate object. Paradoxically, if these developments are carried out, they will introduce into information systems levels of uncertainty about their action similar to those of human beings. Hillis (1998) mentioned development models in which, via a mechanism of natural selection, a computer produced programs that were very efficient but unintelligible to the programmer.


It pays to remember, however, that it was permanent uncertainty with respect to the human being that, in good part, justified organizational development along the lines of technical resources. For people, the uncertainty derived from their internal functioning is partially limited by socialization models, which try to instil a set of values that serve as a guide to action. For information systems that limitation does not exist, and it does not seem viable to introduce it now through proposals such as Asimov's laws of robotics.

Pérez López commented that it is much more difficult to lead an institution than a technical system, and it is this that points to the most promising future lines of research. If the aim is to exceed the abilities of a technical system, with people as its principal axis, it is not enough to give them the training needed to generate the corresponding ability; people must also want to use those abilities in the most appropriate way, given that, under the institutional paradigm, the person is not only the medium but also the objective of the organization.

This objective involves deep changes. People cannot be mere resources that, as such, can be managed and used as objects. De Geus (1997) pointed out that viewing an organization as a machine implies that its members are employees or, worse, 'human resources' that remain in reserve until it is necessary to use them, whereas viewing an organization as a living being leads to viewing its members as human communities. It is in the creation and maintenance of human communities, in their principles of operation and support, and in an appropriate use of technology and regulations that lines of improvement can be found in the field of air safety and, in general, in highly complex organizations. If, on the contrary, we continue to bet on adding more technology and regulations, the consequence is expressed by the mine metaphor: the weight of the cable will break it.


References Abeyratne, R. (1998), ‘The Regulatory Management of Safety in Air Transport’, Journal of Air Transport Management, 4, 25−37. Adizes, I. (1994), Ciclos de vida de la organización (Madrid: Díaz de Santos). Adsuar, J. (2001), Navegación aérea. Madrid: Paraninfo. Aghdassi, B. (1999), An Assessment of the Use of Portable Electronic Devices on Board Aircraft and their Implications on Flight Safety (Buckinghamshire: Cranfield University). Air Safety Week (2000). Pilot Group Charges Economics, Not Safety, Behind Twin Engine. http://www.aviationsafetyonline.com. Airbus Industries (2000), ‘New Developments in Airbus Design and Operational Philosophy for Training’, The Link Vol. I. http://www.airbus.com. Airbus Industries (2002a), ‘Cross Crew Qualification’. http://www.airbus.com. Airbus Industries (2002b), ‘Mixed Fleet Training’. http://www.airbus.com. Airbus Industries (2002c), ‘A330-A340 ETOPS’. http://www.airbus.com. Albrecht, K. (2000), El Radar Empresarial (Buenos Aires: Paidós). Alstom Transporte (1999), Documento de Solicitud al Premio Europeo de Calidad 1999 (Madrid: Club Gestión de Calidad). Álvarez Cabañes, C. (1997), Tratado de seguridad en la operación de aeronaves (Badajoz: Diazor). Ambah, F. (1999), Egypt Official Reports on Jet Crash (Associated Publishers), 22 November 1999. Andreu, R., Ricart, J.E. and Valor, J. (1997), La Organización En la Era de la Información: Aprendizaje, Innovación y Cambio (McGraw-Hill, Aravaca). Argyris, C. (1993), Cómo vencer las barreras organizativas (Madrid: Díaz de Santos). Argyris, C. (1996), ‘Unrecognized Defenses of Scholars: Impact on Theory and Research’, Organization Science, 7, 79−87. Argyris, C., Putnam, R. and Smith, D. (1985), Action Science: Concepts, Methods and Skills for Research and Intervention (San Francisco: Jossey-Bass). Aviation Safety Network (2002a), ‘Accident Description Avianca Flight 011, November 27, 1983, Madrid–Barajas’, http://aviation-safety.net/ database/1983/831127-0.htm. Aviation Safety Network (2002b), ‘Accident Description El Al Israel Airlines, 4 Oct. 1992, Amsterdam’, http://www.aviation-safety.net/database/1992/921004-2.htm. Baberg, T. (2001), ‘Man-machine Interface in Modern Transport Systems from an Aviation Safety Perspective’, Aerospatial Science and Technology, 5, 495, 504. Bach, R. (1985), El Don de Volar (Buenos Aires: Vergara). Beaty, D. (1995), The Naked Pilot (Shrewsbury: Airlife Publishing).


Beck, U. (2002), ‘La Sociedad Del riesgo Global’, Siglo XXI, Madrid. Beer, S. (1994), Beyond Dispute: the Invention of Team Syntegrity (Chichester: John Wiley). Beer, S. (1994, original 1966), Decision and Control: the Meaning of Operational Research and Management Cybernetics (Chichester: John Wiley). Beer, S. (1995, original 1981), Brain of the Firm (Chichester: John Wiley). Besseyre des Horts, C.H. (1989), Gestión estratégica de los Recursos Humanos. (Bilbao: Deusto). Bettis, N. and Prahalad, C. (1995), ‘The Dominant Logic: Retrospective and Extension’, Strategic Management Journal 16: 1, 15−14 Boeing Commercial Airplanes Group (1993), Accident Prevention Strategies. (Seattle: Airplane Safety Engineering). Transporte, A. (1999), Documento de Solicitud al Premio Europeo de Calidad 1999 (Madrid: Club Gestión de Calidad). Boeing Commercial Airplanes Group (2002a), ‘Statistical Summary of Commercial Jet Airplane Accidents Worldwide Operations 1959–2001’, http://www.boeing. com/news/techissues. Boeing Commercial Airplanes Group (2002b), ‘ETOPS Maintenance on Non-ETOPS Planes’, http://www.boeing.com/commercial/aeromagazine/aero_07/etops.html. Boeing Commercial Airplanes Group (2002c), ‘Aging Airplane Systems Investigation’. http://www.boeing.com/commercial/aeromagazine/aero_07/agingair.html. Boeing Commercial Airplanes Group (2006), ‘Statistical Summary of Commercial Jet Airplane Accidents 1959–2005’. http://www.boeing.com/news/techissues/ pdf/statsum.pdf. Boy, G. (1999), Human-computer interaction in aeronautics: A cognitive engineering perspective, Air & Space Europe Vol. 1(1). Cameron, K. and Quinn, R. (1999), Diagnosing and Changing Organizational Culture Based on the Competing Values Framework (New York: Addison-Wesley). Campos, L. (2001), ‘On the Competition between Airbus and Boeing’, Air and Space, 3(1−2). Carroll, J., Rudolph, J. and Hatakenaka, S. (2002), Organizational Learning from Experience in High-Hazard Industries: Problem Investigation as Off-Line Reflective Practice MIT Sloan School of Management, Working Paper 4359-02. Checkland, P. and Holwell, S. (1998), Information, Systems and Information Systems (Chichester: Wiley). Choo, C. (1999), La Organización Inteligente (México: Oxford University Press). Choo, C. and Bontis, N. (2002), The Strategic Management of Intellectual Capital and Organizational Knowledge (New York: Oxford University Press). Comisión de Investigación (1977). Informe técnico del accidente ocurrido por la aeronave Boeing 747 matrícula PH-BUF ocurrido en el aeropuerto de Tenerife (Los Rodeos) el día 27 de marzo de 1977. Dirección General de Aviación Civil. Comisión de Investigación (1977). Informe técnico del accidente sufrido por la aeronave Boeing 747 matrícula N736PA ocurrido en el aeropuerto de Tenerife (Los Rodeos) el día 27 de marzo de 1977.


Corker, K. (1999), ‘Future Air Traffic Management: Technology, Operations and Human Factors Implications’, Air and Space Europe Vol, 1(1). Croasdell, D. (2001), IT’s Role in Organizational Memory and Learning. 30th Annual Hawaii International Conference on System Sciences. Curtis, T. (2001), Umpowered Jet Airliner Landings, September, 7, (2001). http:// www.airsafe.com/events/noengine.htm. Davenport, T. (1999), Ecología de la información: Por qué la tecnología no es suficiente para lograr el éxito en la era de la información. (México: Oxford University Press). Davenport, T. and Prusak, L. (1998), Working Knowledge (Boston: Harvard Business School Publishing). De Geus, A. (1997), The Living Company (Boston: Harvard Business Press). Degani, A. and Wiener, E. (1998), ‘Design and Operational Aspects of FlightDeck Procedures’ in Human Factors. Proceedings of International Air Transport Association (ed.). Dekker, S. (2005), Ten Questions about Human Error (New Jersey: Lawrence Erlbaum Associates Publishers). Dekker, S. (2006), The Field Guide to Understanding Human Error (Aldershot: Ashgate). Dennett, D. (1996), Kinds of Minds (New York: Basic Books). Dennett, D. (1998), Brainchildren (Cambridge: The MIT Press). Dennett, D. (2002), The Intentional Stance (Bradford: Cambridge). Dixon, N. (2000a), Common Knowledge (Boston: Harvard Business School Publishing). Dixon, N. (2000b), El Ciclo del Aprendizaje organizativo (Madrid: Aenor). Drucker, P. (1999, original 1954), La Gerencia de Empresas (Buenos Aires: Editorial Sudamericana). Eco, U. (1976), Tratado de semiótica general (Barcelona: Lumen). Eco, U. (1999), Kant y el Ornitorrinco (Barcelona: Lumen). Eco, U. (2000, original 1990), Los Límites de la Interpretación (Barcelona: Lumen). Einhorn, H. and Hogarth, R. (1999), Toma de decisiones: Avanzar Marcha Atrás (Bilbao: Deusto). European Foundation for Quality Management (1999), Modelo EFQM De Excelencia (Madrid: Club Gestión de Calidad). FAA, Federal Aviation Administration (1996), ‘Aviation Safety Reporting Program: Immunity Policy: Advisory Circular 0046-D’, October 1996. http://asrs.arc.nasa. gov/immunity_nf.htm. FAA, Federal Aviation Administration (1997), ‘Safety Reports: Aviation Safety Data Accesibility Study Index’. http://www.faa.gov. FAA, Federal Aviation Administration (2000), ‘Aircraft Accident and Incident Notification, Investigation and Reporting’, Washington, http://www.faa.gov. Fischhoff, B. (1998), ‘Communicate unto Others’, Reliability Engineering and System Safety, 59(1), 63−72. Fischhoff, B., Lichtenstein, S., Slovic, P., Derby, S. and Keeney, R. (1999, original 1981), Acceptable Risk (Cambridge: Cambridge University Press). Fodor, J. (1995), The Elm and the Expert: Mentalese and its Semantics (Bradford: London).


Fodor, J. (2000), The Mind Doesn’t Work that Way (Bradford Books: Cambridge). Forrester, J. (1961), Industrial Dynamics (Cambridge: Productivity Press). Forrester, J. (1990, original 1971), Principles of Systems (Cambridge: Productivity Press). Foucault, M. (1999, original 1966), ‘Las Palabras y las Cosas’, Siglo XXI. Madrid. Fukuyama, F. (1998), La Confianza (Barcelona: Grupo Zeta). Gazendam, H. (2001), ‘Semiotics, Virtual Organizations and Information Systems. II’, International Workshop on Organizational Semiotics, Almelo. Ghoshal, S. and Bartlett, C.A (1997), The Individualized Corporation. A Fundamentally New Approach to Management (London: William Heinemann). Government of India. Office of the Director General of Civil Aviation (2000), Safety Hazards-Use of Mobile/Cellular Telephones inside the aircraft during flight. Civil Aviation Requirements, New Delhi. Gray, W. (2000), ‘The Nature and Processing of Errors in Interactive Behavior,’, Cognitive Science, 24(2), 205−248. Hale, A. and Baram, M. (1998), Safety Management (Oxford: Pergamon). Hale, A. and Swuste, P. (1998), ‘Safety Rules: Procedural Freedom or Actions Constraint?’, Safety Science 29, 163–177. Hale, A., Wilpert, B. and Freitag, M. (1997), After the Event (New York: Pergamon). Hamel, G. and Prahalad, C. (1995), Compitiendo por el futuro (Barcelona: Ariel). Handy, C. (1995), ‘Trust and the Virtual Organization’, Harvard Business Review, 73: 3, 40–50. Handy, C. (1996), Beyond Certainty: the Changing Worlds of Organizations (Boston: Harvard Business School Publishing). Haunschild, P. and Ni, B.(2000). Learning from Complexity: Effects on accident/ incident heterogeneity on airline learning. Academy of Management Proceedings. Hawkins, J. and Blakeslee, S. (2005), On Intelligence, (New York: Times Books). Hicks, M. and de Brito, G. (1999). Civil Aircraft Warning Systems: Who’s Calling the Shots, Air & Space Europe, Vol. 1 Human-computer interaction in aeronautics. Hillis, W.D. (1998), The Pattern on the Stone (New York: Basic Books). Hofstadter, D. (1985), Metamagical Themas (Questioning for the Essence of Mind and Pattern) (New York: Basic Books). Hofstadter, D. (1987), Gödel, Escher, Bach (Barcelona: Metatemas). Hofstadter, D. (1997), Fluid Concepts and Creative Analogies (London: Penguin Books). Hofstadter, D. and Dennet, D. (2000), The Mind’s I (New York: Basic Books). Hogarth, R. (1987), Judgement and Choice (Singapore: Wiley). Huse, E.F. and Bowditch, J.L. (1988), El Comportamiento Humano En la Organización (Bilbao: Deusto). Isaac, A. (1997), ‘The Cave-Creek Incident: A Reasoned Explanation’. The Australasian Journal of Disaster and Trauma Studies, Vol. 1997-3. Janic, M. (2000), ‘An Assessment of Risk and Safety in Civil Aviation,’ Journal of Air Transport Management, 6, 43−50.


Jensen, B. (1999), Simplicity: the New Competitive Advantage in a World of More, Better, Faster (Cambridge: Perseus Books). Johnson, C. (2001), ‘A Case Study in the Integration of Accident Reports and Constructive Design Documents’, Reliability Engineering and System Safety, 71(3), 311−326. Kelly, G. (1955), A Theory of Personality (New York: Norton). Kilroy, C. (1997), Special Report: Saudi Arabian Airlines Flight 163. http://www. airdisaster.com. Kirwan, B. (2001), ‘Coping with Accelerating Socio-Technical Systems’, Safety Science, 37(2–3), 77−107. Kohn, A. (1993), Punished by Rewards (Boston: Houghton Mifflin). Kolczynski, P. (2001), ‘What Accidents Does the NTSB Investigate?,’. http://www. avweb.com. Krause, S. (1996), Aircraft Safety (New York: McGraw-Hill). Kurzweil, R. (1999), La Era de las Máquinas Espirituales (Barcelona: Planeta). Lawrence, P.R. and Lorsch, J.W. (1986), Organization and Environment (Boston: Harvard Business School Classics). Leimann, H., Sager, L., Alonso, M., Insua, I. and Mirabal, J. (1997), Gerenciamento de los recursos humanos en las operaciones aeronáuticas: CRM, una Filosofía Operacional (Buenos Aires: Sociedad Interamericana de Psicología Aeronáutica). Liebenau, J. and Harindranath, G. (2002), ‘Organizational Reconciliation and its Implications for Organizational Decision Support Systems: a Semiotic Approach,’, Decision Support Systems, 955. Luhmann, N. (1993), Risk: A Sociological Theory (New York: Aldine de Gruyter). Luhmann, N. (1995, original 1975), Poder (Barcelona: Anthropos). Luhmann, N. (1996), Confianza (Barcelona: Anthropos). Luhmann, N. (1996), Introducción a la teoría de sistemas (México: Anthropos). Luhmann, N. (1998), Complejidad y modernidad: De la Unidad a la Diferencia (Madrid: Trotta). Luhmann, N. (1998, original 1984), Sistemas sociales: Lineamientos Para una Teoría General (Barcelona: Anthropos). Macarthur, J. (1996), Air Disasters, Vol. 1 (Sydney: Aerospace Publications). MacPherson, M. (1998), The Black Box (New York: Quill). Marina, J. (1993), Teoría de la inteligencia creadora (Barcelona: Anagrama). Marina, J. (2000), El Vuelo de la Inteligencia (Barcelona: Plaza y Janés). Maturana, H. (1996), La Realidad ¿Objetiva o Construida? (México: Anthropos). Maturana, H. and Varela, F. (1996), El Árbol Del Conocimiento (Madrid: Debate). Mauriño, D. (2000), Factories Humanos y Seguridad En Avión ¿La Edad de la Inocencia? (Madrid: Senasa). Mauriño, D., Reason, J., Johnston, N. and Lee, R. (1997), Beyond Aviation Human Factors (Aldershot: Ashgate). McFadden, K. and Towell, E. (1999), ‘Aviation Human Factors: A Framework for the New Millennium’, Journal of Air Transport Management, 5(4), 177−184. McFadden, K. and Hosmane, B. (2001), ‘Operations Safety: an Assessment of a Commercial Aviation Safety Program’, Journal of Operations Management,


19(5), 579−591. [DOI: 10.1016/S0272-6963%2801%2900062-6] McIntyre, G. (2002), ‘The Application of System Safety Engineering and Management Techniques at the US Federal Aviation Administration’, Science, 40, 325−335. Miner, A. and Mezias, S. (1996), ‘Ugly Duckling no More: Pasts and Futures of Organizational Learning Research’, Organization Science, 7, 88−99. Ministère de l’Equipement, des Transports et du Logement, BEA, France (2002), ‘Accident survenu le 25 juillet 2000 a lieu-dit La Patte d’Oie de Gonesse (95) au Concorde immatriculé F-BTSC exploité par Air France’. http://www.bea-fr.org/ docspa/2000/f-sc000725/htm/part1.1-1.12.htm. Ministère de l’Équipement des Transports et du Logement, Inspection Générale de l’aviation Civil et de la météorologie, France (1992), Rapport de la commission d’enquête sur l’accident survenu le 20 janvier 1992 près du mont Saint-Odile (Bas Rhin)a l’Airbus A320 immatriculé F-GGED exploité par la compagnie Air Inter. http://www.bea-fr.org/docs/f-ed920120/htm/textef-ed920120.htm. Morecroft, J. and Sterman, J. (1994), Modeling for Learning Organizations (Portland: Productivity Press). Moreno-Luzón, M., Peris, F. and González, T. (2001), Gestión de la Calidad y Diseño de Organizaciones (Madrid: Prentice-Hall). Morgan, G.(1996). ‘Is There Anything More to be Said about Metaphor?’ In Grant, D. and Oswick, C. (eds.) Metaphor and Organization (London: Sage), 227–240. Morgan, G. (1999), Imaging-I-Zación (Barcelona: Granica). Moro, T. (1516), Utopía. Austral ed. 1999. Madrid. Muñoz-Seca, B. and Riverola, J. (1997), Gestión del conocimiento (Barcelona: Folio). Muñoz-Seca, B. and Riverola, J. (2003), Del buen pensar y mejor hacer (Madrid: McGraw-Hill). NACAA-National Association of Consumer Agency Administrators (2002), Need for the Airline Passenger Fairness Act. http://www.nacaanet.org/airlineact.htm . NASA (2001), ‘X43a Project’. Dryden Flight Research Center. http://www.dfrc. nasa.gov/Projects/hyperx/x43.html, National Center for Atmospheric Research (1998), Aircraft Tests NASA Clear-Air Turbulence Sensor through Colorado Skies. NCAR 1998-5. National Researh Council (1997), Aviation Safety and Pilot Control (Washington: National Academy Press). Netherlands Aviation Safety Board (1992), El Al Flight 1862 Boeing 747-258F-4XAXG Bijlmermeer, Amsterdam October 4, 1992. NTSB Safety Recommendations A-92-117. NewKnow Network S.A. (2002). Tour of the Newknow Solution. http://www.newknow. com. Nietzsche, F. (2000), El Gay Saber o Gaya Ciencia (Madrid: Austral). Nonaka, I. and Takeuchi, H. (1999), La Organización Creadora De Conocimiento (México: Oxford University Press). NTSB, National Transportation Safety Board (1973), ‘American Airlines Inc. McDonnell Douglas DC-10-10, N103AA near Windsor, Canada, June 12,1972,’. Washington: http://www.ntsb.gov.


NTSB, National Transportation Safety Board (1974). Iberia Lineas Aereas de España, (Iberian Airlines) McDonnell Douglas DC-10-30, EC CBN, Logan Int’l Airport, Boston, Massachusetts, December 17, 1973, Washington. http://www. ntsb.gov. NTSB, National Transportation Safety Board (1975), National Airlines Inc., DC-1010, N60NA, near Albuquerque, New México, November 3, 1973. http://www.ntsb. org. Washington. NTSB, National Transportation Safety Board (1986), Aircraft Accident Report: American Airlines, DC-10-10, N110AA Chicago O’Hare International Airport Chicago, Illinois March 25, 1979. Washington: http://www.ntsb.gov. NTSB, National Transportation Safety Board (1990), Aircraft Accident Report: United Airlines Flight 232 McDonnell Douglas DC-10-10. Sioux Gateway Airport Sioux City, Iowa. July 19, 1989. Washington: http://www.ntsb.gov. NTSB, National Transportation Safety Board (1991) Aircraft Accident Report: Avianca the Airline of Columbia. Boeing 707-321 B, HK 2016 Fuel Exhaustion. Cove Neck, New York. January 25, 1990. Washington: http://www.ntsb.gov. NTSB, National Transportation Safety Board (1996), Aircraft Accident Report: InFlight Icing Encounter and Loss of Control Simmons Airlines d.b.a. American Eagle Flight 4184 Avions de Transport Regional (ATR) Model 72-212, N401AM. Roselawn, Indiana. October 31, 1994. Washington: http://www.ntsb.gov. NTSB, National Transportation Safety Board (2000). Aircraft Accident Report: Inflight Break-up over the Atlantic Ocean, Trans World Airlines Flight 800 Boeing 747-131, N93119 Near East Moriches, New York, July 17, 1996. Washington: http://www.ntsb.gov. NTSB, National Transportation Safety Board (2001), Safety Report: Survivability of Accidents Involving Part 121 U.S. Air Carrier Operations, 1983 Through 2000. Washington: http://www.ntsb.gov. NTSB, National Transportation Safety Board (2002), Aircraft Accident Brief: EgyptAir Flight 990 Boeing 767-366ER, SU-GAP, 60 Miles South of Nantucket, Massachusetts, October 31,1999. Washington: http://www.ntsb.gov. Øwre, F. (2001), ‘Role of the Man-Machine Interface in Accident Management Strategies’, Nuclear Engineering and Design, 209(1–3), 201−210. Pasztor, A. and Michaels, D. (2001), ‘Popularity of Ultralong Flights Raises New Safety Concerns’, The Wall Street Journal, 31 August 2001. Pendleton, L. (1999), When Humans Fly High: What Pilots Should Know about High Altitude Physiology, Hypoxia and Rapid Decompression. http://www.avweb.com. Pérez López, J. (1991), Teoría de la acción humana en las organizaciones (Barcelona: Rialp). Pérez López, J. (1993), Fundamentos de la dirección de empresas (Barcelona: Rialp). Perrow, C. (1986), Complex Organizations: A Critical Essay (New York: McGrawHill). Perrow, C. (1999), Normal Accidents (Princeton: Princeton University Press). Peter, L. and Hull, R. (1984, original 1969), El Principio De Peter (Barcelona: Plaza y Janés).


Peters, T. and Waterman, R. (1982), En Busca de la Excelencia (Barcelona: Planeta). Popper, K. (1993), Búsqueda sin término (Madrid: Tecnos). Raeth, P. and Raising, J. (1999), ‘Transitioning Basic Research to Build a Dynamic Model of Pilot Trust and Workload Allocation’, Mathematical and Computer Modelling, 30(5–6), 149−165. [DOI: 10.1016/S0895-7177%2899%2900154-5] Ranter, H. and Lujan, F. (2000), Aviation Safety Network: Accident Coverage: Tenerife Collision (Dutch Report), Gravenhage, 1979, http://aviation-safety.net/ specials/tenerife/dutch.htm. Ranter, H. and Lujan, F. (2001), Aviation Safety Network: the 100 Worst Accidents,. http://www.airsafe.com. Rasmussen, J. (1986), Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering (New York: North-Holland). Reason, J. (1990), Human Error (Cambridge: Cambridge University Press). Reason, J. (1997), Managing the Risks of Organizational Accidents (Vermont: Ashgate). Revilla, E. (1996), Factories Determinantes Del Aprendizaje organizativo: Un modelo de desarrollo de productos (Madrid: Club Gestión de Calidad). Ridder, K. (2002), ‘Boeing Chairman Says Sonic Cruiser Development Remains Possibility,’, Chicago Tribune Business News 20 March 2002. Riverola, J. and Muñoz-Seca, B. (1990), Implementación de proyectos de innovación: Un paradigma y sus implicaciones. Compilado por Escorsa en Ariel-La gestión de la empresa de alta tecnología, Barcelona. Roberts, K. and Bea, R. (2001), ‘When Systems Fail’, Organizational Dynamics, 29(3), 179−191. [DOI: 10.1016/S0090-2616%2801%2900025-0] Rochlin, G.I., La Porte, T.R. and Roberts, K.H. (1998), ‘The Self-Designing HighReliability Organization: Aircraft Carrier Flight Operations at Sea’, Naval War College Review, Summer 1998. http://www.nwc.navy.mil/press/Review/1998/ summer/art7su98.htm. Rognin, L. and Blanquart, J. (2001), ‘Human Communication, Mutual Awareness and System Dependability’. ‘Lessons Learnt from Air-Traffic Control Studies’, Reliability Engineering and System Safety, 71(3), 327−336. [DOI: 10.1016/ S0951-8320%2800%2900083-1] Rosario, A. (1990), Manual del piloto privado. (Madrid: Pilot’s). Sagan, S. (1993), The Limits of Safety (Princeton: Princeton University Press). Saint-Exupery, A. (2000, original 1939), Tierra de hombres (Barcelona: Emecé). Sánchez-Alarcos, J. and Revilla, E. (2002), Air Safety as a Model of Unexpected Systemic Evolution. 6th World Multiconference on Systemics, Cybernetics and Informatics. Sanfourche, J. (2001), ‘New Interactive Cockpit Fundamentally Improves How Pilots Manage and Aircraft’s Systems and its Flight,’, Air and Space Europe, 3, 1−2. Sasou, K. and Reason, J. (1999), ‘Team Errors: Definition and Taxonomy’, Reliability Engineering and System Safety, 65(1), 1−9. Schein, E. (1992), Organizational Culture and Leadership (San Francisco: JosseyBass). Schein, E. (1999), The Corporate Culture Survival Guide (San Francisco: JosseyBass).


Schlenker, B., Britt, T., Pennington, J., Murphy, R. and Doherty, K. (1994), 'The Triangle Model of Responsibility', Psychological Review, 101(4), 632–652.
Schwartz, P. (2000), The Long Boom (New York: Perseus).
Searle, J. (1980), 'The Intentionality of Intention and Action', Cognitive Science, 4(1), 47–70. [DOI: 10.1016/S0364-0213(81)80004-3]
Searle, J. (1999), Mind, Language and Society (London: Phoenix).
Senge, P. (1990), La Quinta Disciplina (Barcelona: Granica).
Silva, J. (1992), El Instante Decisivo (Barcelona: Ronsel).
Simmon, D.A. (1998), 'Boeing 757 CFIT Accident at Cali, Colombia, Becomes Focus of Lessons Learned', Flight Safety Digest, May–June 1998. http://www.flightsafety.org.
Simon, H. (1996), Hidden Champions (Boston: HBS Press).
Singer, G. and Dekker, S. (2001), 'The Ergonomics of Flight Management Systems: Fixing Holes in the Cockpit Certification Net', Applied Ergonomics, 32(3), 247–254.
Smithsonian Institution's National Air and Space Museum (1992), Grandes Épocas de la Aviación (Washington, DC: Discovery Productions).
Sogame, H. (1991), Lauda Air B767 Accident Report (Bielefeld: Bielefeld University).
Sorensen, J. (2002), 'Safety Culture: a Survey of the State-of-the-Art', Reliability Engineering and System Safety, 00.
Spangler, W. and Peters, J. (2001), 'A Model of Distributed Knowledge and Action in Complex Systems', Decision Support Systems, 31, 103–125. [DOI: 10.1016/S0167-9236(00)00122-6]
Stamper, R. (2001), 'Organizational Semiotics: Informatics without the Computer?', II Workshop on Organizational Semiotics, Almelo.
Swissair Training Centre (2001), Enhancing Performance in High Risk Environments, Behavioural Markers Workshop, Zürich.
Taylor, F.W. (1911), The Principles of Scientific Management (New York: Harper & Bros).
Taylor, S.E.T. and Parmar, H.A. (1982), Tecnología del Vuelo: Navegación Aérea (Madrid: Paraninfo).
The Learning Channel (1999), Black Box, The Learning Channel Series.
Tibbetts, P. (1989), Flight of the Enola Gay (Columbus: Tibbetts Books).
Transportation Safety Board of Canada (2001), Accident Summary: Swissair 111, updated 28 August 2001. Quebec: http://www.bst.gc.ca/ENG.
Turner, B.A. (1978), Man-made Disaster (London: Wykeham).
Vakil, S. and Hansman, J. (2002), 'Approaches to Mitigating Complexity-Driven Issues in Commercial Auto Flight Systems', Reliability Engineering and System Safety, 75(2), 133.
Varela, F. (1988), Conocer (Barcelona: Gedisa).
Varela, F., Thompson, E. and Rosch, E. (1997), De Cuerpo Presente: Las Ciencias Cognitivas y la Experiencia Humana (Barcelona: Gedisa).
Vázquez-Figueroa, A. (1999), Ícaro (Barcelona: Planeta).
Vera, D. and Crossan, M. (2000), 'Organizational Learning, Knowledge Management and Intellectual Capital: An Integrative Conceptual Model', Working Paper.
Villaire, N. (1994), Aviation Safety: More than Common Sense (New York: Jeppesen).
Von Krogh, G., Ichijo, K. and Nonaka, I. (2000), Enabling Knowledge Creation (New York: Oxford University Press).
Walsham, G. (2001), 'Knowledge Management: the Benefits and Limitations of Computer Systems', European Management Journal, 19(6), 599.
Walters, J. and Sumwalt, R., III (2000), Aircraft Accident Analysis (New York: McGraw-Hill).
Wastell, D. (1996), 'The Fetish of Technique: Methodology as a Social Defense', Information Systems Journal, 6(1), 25–40.
Weber, M. (1947), The Theory of Social and Economic Organization (New York: Oxford University Press).
Weick, K. (1987), 'Organizational Culture as a Source of High Reliability', California Management Review, Winter, 112–127.
Wells, A. (2001), Commercial Aviation Safety (New York: McGraw-Hill).
Wezel, W. and Jorna, R. (2001), 'Paradoxes in Planning', Engineering Applications of Artificial Intelligence, 14, 269–286.
White House Commission on Aviation Safety and Security (1997), 'Final Report to President Clinton'. Washington: http://www.avweb.com/other/gorerprt.html.
Winograd, T. and Flores, F. (1986), Understanding Computers and Cognition (Indianapolis: Addison-Wesley).
Wolfe, T. (1981), Lo Que Hay Que Tener (Barcelona: Anagrama).
Wright, T.P. (1946), 'The State of Aviation', Air Affairs, I, 2 (December), 139–151.

Index

Air accidents 19
  AeroPeru 77-78
  Air Florida 85
  Air France 105
  Air Inter 77
  Aloha 89
  American Airlines 15, 48, 64, 84, 94
  American Eagle 85
  Avianca 48, 84, 86, 173
  Cali 48, 64-65, 84
  Challenger 32
  Dryden 42, 101
  Egypt Air 25, 128-129
  El-Al 93
  Iberia 23, 143
  Japan Airlines 15, 72-73, 95
  KLM 33-34, 38, 46-47, 49, 86, 139
  LAPA 14
  Learjet 16, 74
  Los Rodeos 32, 38, 46, 49, 84, 86, 94, 139
  Mount Erebus 59, 84
  National 112
  Pan Am 33-34, 47
  Saudi 74, 163
  Spantax 106, 108, 162
  Staines 47
  Swissair 31, 128
  TWA 34, 73, 94
  United 27-28, 72-73, 94, 106, 112, 124, 131
  ValuJet 89
Air safety
  Rates 10, 43
  Safety culture 130
  Safety device 36, 43, 48, 72, 93
  Safety level 16, 19, 31, 82, 125, 126
  Safety Management System 35, 39, 63
  Safety managers 167
  Safety regulations 68
  Trade-off safety-efficiency 12, 13, 87
ASRS 104-105
Automation
  Automated cockpits 75
  Automated environment 81, 102, 108-110, 112, 141, 159
  Automated planes 109, 159
  Automated routines 137
  Automated systems 51, 77, 79-80, 91-92, 99, 108, 110-111, 138, 143, 159-160
  Automatic device 74
  Automatic landing 62, 83, 110
  Automatic pilot 16, 62, 70, 74, 81, 83, 99, 111-112, 141, 166
  Automatic power 83
  Automatic protection 78
  Automation's contribution to safety 110
Automatism 51, 161
Automatons 152
Auxiliary system 53-54, 73-74, 81
Causal chain 24, 25, 32, 35, 42, 94
CFIT 55, 181
Complexity 15-16, 19, 43, 51-52, 73, 79, 83, 89, 92, 95, 98-99, 110, 113, 116, 121, 123, 130, 153-156, 160, 163, 165-167
  Dynamic complexity 16
  Global complexity 72
  Hyper-complexity 16, 35, 123, 153-154, 156, 170
  Level of complexity 97, 138, 148, 151, 153
  Organizational complexity 98, 136, 154, 171
  System complexity 43-44, 99, 107, 120, 131, 142
  Technological complexity 132
Compliance 26, 32, 161
Concorde 15, 21, 68, 75, 94, 105, 108
Confidence 13-15, 31, 41, 63-64, 79, 85, 89, 92-93
DC10 12, 15, 27-28, 59, 72, 84, 89-90, 94, 106, 112-113, 127
Dekker, S. 45, 64
Dominant logic 37-39, 98, 123, 130-131
Efficiency 7-15, 18, 41, 52-53, 62, 65, 67, 69, 72, 79-81, 87-88, 90, 92, 95, 110, 114-116, 123-124, 126, 129, 154, 158, 166-167
Engines 15, 17, 21, 25-26, 30, 36-37, 61, 65-68, 70-73, 75, 88, 90-91, 93-94, 100, 107, 111, 127, 129, 131
  Engine failure 15, 28, 45, 67-69, 71, 94, 107, 111, 127
  Engine fire 15, 48, 51, 101, 108, 139
  Engine power 21, 46, 67
  Engine reliability 66, 88
  ETOPS 88
  Four-engine planes 30, 66-69, 71, 88
  Single engine 28
  Twin-engine airplanes 8, 28, 30, 66-72, 88, 127
Engineer 30, 38, 81, 82, 87, 91, 98, 159, 161
Environment 16, 24, 32, 43, 45, 62-63, 65, 83, 87, 92-93, 110, 123, 131, 141, 143, 146-147, 153, 159, 163, 170
  Automated environment 81, 102, 108-110, 112, 141, 159
  Complex environment 5, 35, 79, 97-99, 101, 103, 105, 107, 109, 111, 113, 115, 117, 119, 121, 131-132, 167
  Controllable environment 98
  Dynamic environment 42
  Physical environment 49
  Professional environment 32, 100
  Real work environment 46
  Regulatory environment 103
  Technology-intensive environment 77, 83, 158
Ergonomics 75, 99, 142, 168
  Cognitive ergonomics 46, 168
  Ergonomic approach 46, 50-52
Event 13, 14-16, 21-30, 32, 35, 38-39, 41, 43-44, 48, 73, 75, 78, 83, 87, 89, 91, 97, 98, 99, 101, 103-105, 107, 112, 113, 119-121, 127-133, 137-140, 144, 148-149, 155, 157, 160, 162, 165-167, 169, 170
  Event analysis 21-23, 25-27, 29, 31, 33, 35, 37, 39, 100, 125
  Event-based learning 21
  Near events 29
  Self-generated events 9
  Unknown/unforeseen events 9, 36, 95, 97, 99, 100, 118, 136, 145-146, 156, 160, 161, 164, 168
Flight simulator 23, 27, 37, 38, 49, 50, 131, 139
Fly-by-wire 10, 52, 53, 88, 91, 92, 108
GPS 60, 62
High reliability organization 107
High-risk organization 7, 11, 117, 130, 132, 135, 137, 145, 163, 169
Human Factor 44-46, 95
ICAO 87
Improvement 10, 11, 16-19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 44, 48, 54, 57, 62, 67, 69, 79-82, 87, 93-95, 100, 110, 114, 115, 119, 121, 123-125, 130, 135, 137, 142, 145, 148, 151-155, 157-163, 166-168, 171
Information
  Commando units 120
  Information services 120
  Information systems 10, 77, 79-82, 88, 92, 109, 136, 141, 144-147, 158, 162-163
  Information technology 17, 98, 114, 138, 142, 160, 169
  Sub-optimal information systems 169
Instrumental supremacy 131, 132, 138
Knowledge 15, 25, 29, 36, 55, 57, 64, 67, 84, 86, 92, 93, 98, 100-102, 104, 112-115, 117-121, 125, 132-133, 135-142, 144-147, 149, 151, 156-163, 166-167, 170, 174-175
  Conceptual knowledge 132, 135, 138, 145, 158-159
  Operational knowledge 115, 121, 132, 144, 145, 157-158, 163
Low-cost 114, 115
Management 14, 38, 43, 44, 52, 87, 91-93, 148, 154, 161, 165, 173
  Crew Resource Management (CRM) 48
  Knowledge Management 118
  Management model 153
  Management's pilots 38, 42, 46
  Risk Management 29, 123
  Safety Management Systems 35
Manufacturers 10, 13, 52, 67, 89, 91, 113, 146
  Aerospatiale ATR 85
  Airbus 10, 51-53, 77-78, 80-81, 87-88, 91-92, 108-109, 143, 173
  Boeing 10, 14-18, 23, 25, 32, 34, 37-38, 42, 45, 51-53, 61, 66, 70, 72, 73, 75-77, 80-81, 83-85, 87-89, 91-93, 108, 125, 139, 174
  De Havilland Comet 22, 67, 74, 91, 113
  Hawker Siddeley Trident 47
  McDonnell Douglas 53
Mauriño, D. 32, 42, 93
Meaning 78, 98, 116, 118, 132, 133, 135, 136, 138-139, 145, 147-149, 151, 156, 157, 166, 168-170, 174
Mental model 38, 39, 41, 121, 129, 136, 145
Navigation 17, 31, 53-66, 72-73, 75, 80-81, 84-86, 99-100
NTSB 9, 23, 25, 65, 86, 105, 112, 124, 128
Operator
  Human Operator 37, 39, 42, 46, 50, 52, 64, 75, 79, 81, 90-92, 98-104, 107-110, 112-113, 119, 132, 135-138, 140-157, 159-164, 166-170
  Operator in the Aviation Market 12-14, 50, 63, 124, 126, 148-149, 155, 161
Operational closure 97, 100, 109, 117, 149, 167
Organizational Learning
  Barriers 123, 157
  Feedback driven 97, 100, 118
  Forward driven 97, 100
  Learning Ability 124, 133, 145, 153, 156, 159-160, 169
  Learning Difficulties 100, 111
  Learning Model 16-18, 24, 41, 125, 146, 151, 155-156, 165-168
  Limitations 35, 41, 123, 151, 165
Organizational Paradigm 118, 123-124, 129-131, 156, 159
  Biological paradigm 114-115, 117-118, 149
  Institutional paradigm 114, 116-118, 131, 149, 158, 160-161, 169, 171
  Mechanical paradigm/Technical system 114-121, 123, 129, 130-131, 135, 148-149, 154-155, 158-160, 169, 171
Pérez López, J. 114-115, 119, 158, 169-171
Perrow, C. 9-10, 12, 15, 35, 43, 95
Pilot (human) 140-141, 143-144, 148, 163
Pressure (to safety improvement) 9-10, 17-18, 19, 26, 41, 50, 60, 87, 89-90, 103-104, 107-108, 124, 128-129, 151, 153-156
Rasmussen, J. 136, 144, 147, 167
Reason, J. 97, 99, 103-104, 107, 109, 113-114, 117-119, 160, 167
Redundancy 27, 59, 72, 130, 131, 146
Regulator 88, 113
Responsibility 25-26, 29, 79, 100, 103-105, 107-108, 112-113
Risk
  Acceptable risk 14-15, 18, 19, 27, 29, 31, 93, 97, 124, 157
  Level of risk 11-12, 14, 79, 95
  Perception of risk 11, 12
  Risk calculation 14
  Risk scale 9
Rules 92, 98, 101-105, 107-108, 117, 120, 131-133, 135, 137-138, 140, 146-148, 157, 160-163
Skill 98, 101, 111, 112, 117, 120, 129, 132, 135, 137-141, 145-146, 149, 159-161, 163, 166, 173
Socialization 46, 121, 131, 133, 149, 161-164, 170-171
SRK model 117, 137, 160
Systemic 15, 16, 35, 44, 95, 113, 147, 149
Systemic model 9, 25, 39
TCAS 55, 60, 86
Technology 132, 135, 140, 147-149, 151, 154, 156, 158, 163, 165, 166-171
  Technological factor 44, 53
  Technological risk 62, 65, 72, 73
Thinking backward 35, 38-39
Thinking forward 35-36, 38-39
Training
  Multi-rating 80, 109
  Operational 112, 115, 121, 132, 144, 145, 157, 159, 161, 163
Trust 149, 151-153, 155-160, 167-168, 170
User 10, 12-15, 29-31, 89, 115, 116, 124-129, 136-137, 142, 144-147, 156, 166
User transparency 145, 156
White House Commission 17-18, 124-125