UNDERSTANDING HUMAN ERROR IN MINE SAFETY
Understanding Human Error in Mine Safety
GEOFF SIMPSON, Human Factors Solutions Ltd, UK
TIM HORBERRY, University of Queensland, Australia
JIM JOY, University of Queensland, Australia
© Geoff Simpson, Tim Horberry and Jim Joy 2009

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise without the prior permission of the publisher. Geoff Simpson, Tim Horberry and Jim Joy have asserted their rights under the Copyright, Designs and Patents Act, 1988, to be identified as the authors of this work.

Published by Ashgate Publishing Limited (Wey Court East, Union Road, Farnham, Surrey, GU9 7PT, England) and Ashgate Publishing Company (Suite 420, 101 Cherry Street, Burlington, VT 05401-4405, USA). www.ashgate.com

British Library Cataloguing in Publication Data
Simpson, Geoff.
Understanding human error in mine safety.
1. Industrial safety--Psychological aspects. 2. Mine accidents. 3. Coal mines and mining--Safety measures. 4. Safety education, Industrial.
I. Title II. Horberry, Tim. III. Joy, Jim.
363.1'19622334-dc22

ISBN: 978-0-7546-7869-4 (hbk)
ISBN: 978-0-7546-9716-9 (ebk)

Library of Congress Cataloging-in-Publication Data
Simpson, Geoff.
Understanding human error in mine safety / by Geoff Simpson, Tim Horberry, and Jim Joy.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-7546-7869-4 (hardback) -- ISBN 978-0-7546-9716-9 (ebook)
1. Mine safety. 2. Human engineering. 3. Mine accidents--Prevention. I. Horberry, Tim. II. Joy, Jim. III. Title.
TN295.S528 2009
622'.8--dc22
2009021825
Contents
List of Figures vii
List of Tables ix
Acknowledgements xi
Preface xiii
1 Introduction 1
2 The Nature of Human Error 7
3 Predisposing Factors: Level 1 – The Person – Machine Interface 15
4 Predisposing Factors: Level 2 – The Workplace Environment 37
5 Predisposing Factors: Level 3 – Codes, Rules and Procedures 45
6 Predisposing Factors: Level 4 – Training and Competence 59
7 Predisposing Factors: Level 5 – Supervision/First-Line Management Roles and Responsibilities 69
8 Predisposing Factors: Level 6 – Safety Management System/Organisation and Safety Culture 81
9 Managing Human Error Potential 95
10 Conclusions 131
Glossary of Mining Terms 139
References 145
Index 157
List of Figures
2.1 Human error likelihood influence framework 13
3.1 Re-designed drill-loader operator 16
3.2 Gear Selection on Rhino and Domino machines 17
3.3 Inappropriate robot system design: entry from leg of T junction 20
3.4 Inappropriate robot system design: entry from arm of T junction 21
3.5 Restricted vision from the driver's position on a drill loader 21
3.6 Two examples of data sheets from the UK Ergonomics Design Principles Reports 29
4.1 Design window without hearing defenders 41
4.2 Design window when wearing high attenuation hearing defenders 42
9.1 Outline procedure to ensure feedback from accident investigation to the risk assessment process 123
9.2 Output 1 from Figure 9.1 124
9.3 Output 2 from Figure 9.1 125
9.4 Output 3 from Figure 9.1 126
9.5 Output 4 from Figure 9.1 127
9.6 Output 5 from Figure 9.1 127
9.7 Output 6 from Figure 9.1 128
List of Tables
1.1 A summary of the human errors contributing to the fatal accident at Bentley Colliery 6
3.1 Position of skid correction controls across ten types of underground locomotive 19
5.1 Examples of alternative, simpler, words 54
7.1 Comparison of cross-deputy reporting on two problems 72
7.2 Summary of risk ranking for one accident-likely scenario 74
9.1 Examination of one element of new machine risk 99
9.2 High level risk assessment of three operational systems for loco manriding 101
9.3 Classification of activities where potential active failures were identified – underground mining operations 112
9.4 Classification of activities where potential active failures were identified – surface mining operations 112
9.5 Summary of the latent failures identified in the SIMRAC transport and tramming study 113–115
10.1 The Bentley accident errors revisited 132–135
Acknowledgements Much of the work referred to in this book was, in part at least, funded by the Ergonomics Action Programmes of the European Coal and Steel Community or the Safety in Mines Research Advisory Committee of the Department of Minerals and Energy of the Republic of South Africa. In addition, funding provided by ACARP (the Australian Coal Association Research Programme) and the United States Bureau of Mines (and, more recently the National Institute of Occupational Safety & Health) has been important in producing valuable research results in mining health and safety, some of which has been reflected in this book. Such funding also has the benefit of helping to maintain the pool of experienced researchers essential to ensuring that progress is made on reducing the impact of human error on mine safety. In addition many of the papers referred to in the chapters of this book were originally presentations to the “Minesafe International” series of conferences hosted primarily by the Department of Minerals and Energy Western Australia (the 1998 conference was hosted by the Department of Minerals and Energy of the Republic of South Africa) without which the pursuit of safety in mines would be much the poorer. Similarly, the Department of Mines and Energy in Queensland (Australia) and similar departments worldwide continue to play a valuable role in promoting a safe mining industry. A final word of thanks must go to our families – it is hoped that the publication of this book goes some way to proving that the long periods away from home did actually achieve something of value.
Preface
This book has been written primarily for the mining industry. Despite all the major improvements in mine safety (both surface and underground), it remains one of the most intrinsically hazardous occupations in the world. Moreover, in mining (as in other industries) human error in some form is almost certainly the most prevalent of accident causal factors. However, although there is considerable literature on mining human factors, the systematic assessment of human error potential, especially as a preventative measure, remains rare (as is equally the case in some other industrial settings). It is difficult to pinpoint why such an important topic has received relatively little systematic attention or why the potential safety improvements from a more detailed consideration of human error have yet to be fully realised; however, it is likely that some or all of the points below are significant:

• There have been few, if any, attempts to collate human factors research and action in mining into a single source.
• The European Coal and Steel Community (ECSC), which funded a major programme of mining ergonomics and human factors research during the period from the 1970s to the end of the 1990s, no longer exists. In addition, the European coal mining industry which co-funded and hosted the ECSC research programme is also now almost non-existent. As a result a large volume of work is now relatively inaccessible.
• Errors and mistakes are such a fundamental part of human nature that there is a tendency to view them as inevitable with little that can be done to avoid or mitigate them.
• "Soft" areas of safety research are not normally a significant element in the education of the mining engineers who, quite correctly, represent the majority of the senior operational management of the industry. Moreover, as it is something which is outside primary education and considerably less tangible than the issues they are trained to address, it is also often outside their comfort zone.
This text has been written in an attempt to overcome the first three of the points above and, hopefully, help increase the mining engineer’s comfort zone with the “softer” human factors elements of safety assurance. Although not primarily written for human factors specialists, there is within the text a considerable body of real examples of human error which has, or is likely to, predispose accidents and a number of techniques and approaches which are probably not widely known outside of the mining human factors fraternity. It
is hoped that these may be of value both to the human factors specialists and (by analogy) to safety specialists in different industrial contexts.
Chapter 1
Introduction
Human error has been a factor in industrial/occupational accidents since the very first days of industrialisation. Indeed, it was probably just as much an issue in the agricultural period which provided most of the gainful employment prior to the industrial revolution. Equally, human error would have undoubtedly also created problems in early military activities (and it continues to do so today). In short, since humans have been interacting with their environment, errors have helped create potentially dangerous circumstances. Probably the only thing to have changed as industrialisation developed is that the potential consequences of human error have grown as the systems within which people interact have become more complex. In simple systems, the historical impact of human error was unlikely to extend much beyond the immediate area around the individual making the error. However, in modern, highly complex, nuclear or chemical plants and similar circumstances, the impact of error can be both widespread and devastating (for example, the disasters at Chernobyl, Bhopal and Seveso). Error is an inevitable consequence of being human; however, potential errors in safety critical circumstances can and need to be controlled, and their effects reduced. To have any opportunity to reach a position where we can systematically reduce the potential for human error to create accidents it is essential to understand the nature of human error and, in particular, what is likely to predispose error. In addition, two myths need to be dispelled immediately.
Myth 1

The first myth is that human error effectively equates to front-line operator error. It is of course true (almost by definition) that the error immediately preceding an accident event will be an operator error (for it is the operator who is in direct contact with the equipment, systems and the immediate environment). However, for the assumption to be true that all errors are front-line operator errors then either:

• managers, supervisors, designers etc. don't make errors;

or:

• managers, supervisors, designers etc. aren't human.
Myth 2

The second myth is that all (or most) human errors likely to create accidents are as a result of the actions of "accident prone" people. The concept of accident proneness emerged from work undertaken by the UK Health of Munitions Workers Committee during the First World War and its peacetime replacement, the Industrial Fatigue Research Board (for example, Greenwood and Woods, 1919; Farmer and Chambers, 1926). This concept emerged as a result of statistical analysis which suggested that particular individuals had a disproportionate number of accidents and that this was likely to be a result of a particular personality trait or traits. This, combined with everyday experience (which seems to suggest that we all know people who do seem to have a particular "habit" of having accidents), firmly established the concept. This is true both in the scientific sense and also in the public imagination. Despite this, re-working of the data using improved statistical analysis methods has shown that there is no basis for believing that there are definable personality traits which predispose certain individuals to suffer more accidents than others (see, for example, Porter, 1988).

No more telling statement on the importance of human error could be made than that of Rimmington (the then Director General of the UK Health and Safety Executive):

Studies by HSE's Accident Prevention Unit and others suggest that human error is a major contributory cause in 90 per cent of accidents of which 70 per cent could have been prevented by management action (Rimmington, 1989).
Rimmington later reinforced this point with a more specific emphasis on the need to address the human factors elements in safety as follows:

We seem to have passed the era where the need was for more engineering safeguards ... What we need now is to capture the human factor. (Rimmington, 1993)
A similar, albeit more general, point was made by Sundstrom-Frisk (1998):

All behaviour critical to the causation of accidents could not be shaped by information and motivation alone. Errors arising from insufficient adaptation of the work task to human capacity have to be approached through technical and ergonomic measures ... Without such understanding, people will still be blamed for things that they couldn't have done otherwise and preventative measures will continue to be misdirected.
1.1 The Importance of Human Error in Mining Accident Causation

The increasing significance of human error relative to more traditional safety concerns within UK coal mining was specifically emphasised by Simpson and Widdas (1992) as follows:

An analysis of accidents over the past few years has not thrown up any uniquely mining aspect on which to concentrate our attention. Accidents involving engineering are now identified as being related to engineering matters which are common to industry as a whole, albeit applied in more hazardous situations. What is evident is that by far the most significant common element in the current accident pattern is that of the human factor.
On a broader basis, but no less specifically in the context of the mining industry, Barnes (1993) stated: The process of humans interfacing with tools, machines and their work environment to accomplish a work task will continue to emerge as a major safety and health issue.
Despite being based on work in UK coal mining, it is likely that the Simpson and Widdas statement would find agreement in the coal mining industries of other countries (particularly those highly developed mining industries characterised by, for example, Australia, South Africa and the USA). The situation is, however, slightly different in underground hard rock mines where “traditional” concerns such as rock bursts remain a major problem. Nonetheless, almost 20 years earlier than the Simpson and Widdas paper quoted above, Lawerence (1974) had made almost exactly the same point specifically in relation to South African gold mining operations. Buchanan (2000) made an interesting distinction in relation to mining health and safety by separating the issue into what he called “catastrophic and insidious problems”. “Catastrophic problems” was used to indicate events which occur with little warning and which can result in considerable loss of life. Examples would include methane explosions in coal mining, and rock bursts in deep hard rock mining. “Insidious problems” are those whose effects are only noticed over a long period of time and would include, for example, the effects of noise, dust and vibration. Clearly human error can influence both of these categories (as can be seen from the examples set out in Chapters 3–8). This is reflected later in the Buchanan paper where he states: Most of the hazards of mining are well known ... A leap forward can only be made by adapting a new paradigm, that of risk assessment including human
error audit ... It is the process of risk assessment, including human error audit, that will lead to further significant improvements in mine health and safety.
A similar sentiment was expressed by Simpson (1998a) as follows: Most of the required science for most mining hazards is now well understood (with one or two notable exceptions, for example, rock bursts in deep hard rock mines). However, even when the science is known and the necessary control measures are fully understood and documented for all to follow, the real control and the everyday reliability of risk management, lies in the hands of the people at the mine. If methane monitoring standards are not maintained, if effective dust control measures are not introduced, if PPE is not worn, if rules and procedures are not used, etc. then all the hard won knowledge which, theoretically, allows us to control risk will have been wasted.
More recently the conclusions of a study of the underlying causes of fatalities and significant injuries in Australian mining (MISHC, 2005) included the statement: As human error is unavoidable in the longer term, improving the tolerance for the presence of human error offers the opportunity to reduce the level of harm to people.
None of the authors quoted above would suggest that human errors did not influence previous accidents, or that the industry had ignored the behavioural factor in safety. Rather, changes have occurred which have made behaviourally-related accidents proportionately more important in the overall accident pattern, while traditional concerns and the traditional approaches used to minimise human error potential (for example, training, campaigns, general safety auditing and, ultimately, disciplinary measures) have, at best, only maintained the status quo. If we are to achieve more in relation to minimising human error/behaviourally-related accidents, it would appear that we need more subtle measures based on a better understanding of the nature of human error and what is likely to predispose it.

Human error is far from a simple problem: it is extremely varied and highly complex. This is shown well by examining the human errors involved in a UK coal mining accident which occurred at Bentley Colliery in Yorkshire in 1978 (see Section 1.2 for details). Although this accident occurred over 30 years ago it is particularly interesting as it arose exclusively from a catalogue of human errors, not only immediately prior to the accident or earlier in the accident shift but also errors which occurred weeks, even months, before the accident but which contributed directly to the causal chain. In addition, the variety of the errors which occurred also gives a clear indication of the level of complexity involved in trying to understand the contribution of human error to accidents.
1.2 The Fatal Accident at Bentley Colliery

In November 1978 several dozen men were waiting at an underground train station in the mine ready to be taken out of the mine at the end of the shift. The train ran out of control on a downgrade shortly after leaving the station and derailed at the bottom of the downgrade killing seven men and injuring more. The primary factors which contributed to the accident are detailed in Table 1.1. The information presented in Table 1.1 is taken from the Official Inquiry (HSE, 1979) with some re-phrasing simply to emphasise the human error elements.

Even a cursory examination of the limited detail presented in Table 1.1 shows clearly that a major accident:

• can occur exclusively as a result of human error;
• can occur from a combination of errors, the consequences of which would have been minimal/easily recovered if each error had occurred in isolation;
• can occur from a combination of errors many of which would be considered, in isolation, as trivial;
• can be influenced by errors which occurred at a time far removed from the accident event.
The remarkable co-incidence of so many errors coming together in a way which, one by one, combined to make a major accident more and more likely does, at first glance, raise the questions:

• How on earth could this have been predicted?

and:

• What on earth can be done to avoid it or similar accidents happening in the future?
It is the purpose of this book to try to answer these questions. In setting out to do so, however, one word of caution is needed. Human behaviour in general and human error in particular is context dependent and as such there are no simple, universally applicable “magic bullets” to eliminate human error potential and, thereby, increase safety. That said, there is a good deal of information already published and a number of investigative approaches which, if used sensitively and tailored to local conditions, will contribute significantly to both reducing human error potential and improving safety.
Table 1.1 A summary of the human errors contributing to the fatal accident at Bentley Colliery

Error 1: At deployment, it was noticed that one of the regular train guards had not turned in for work. On checking for a replacement, the official confused Allott with Aylot. One was a trained guard, the other not – the untrained man was deployed as the guard.

Error 2: Neither the man incorrectly deployed, nor the driver, pointed out the official's slip. In fact at least four individuals had the chance to correct him but none did. The official concerned had a reputation as awkward and someone you did not challenge.

Error 3: When the driver of the first loco to enter the district passed the arrestor he left it defeated, in contravention of the rules, in the mistaken assumption that the headlights he saw behind him were following him into the same district; in the event the following train turned off to a different district. The arrestor is a device which is designed to cause a controlled derailment in the event of a train running out of control. Under normal circumstances a driver stops his train, disengages the arrestor, drives his train beyond it, stops again and puts the arrestor back into position.

Error 4: When the second driver did eventually arrive he drove straight past leaving the arrestor defeated.

Error 5: A degree of shunting was required at the station to enable each loco to take its load of four carriages. There were six carriages at the platform. The driver told the "guard" to sit in the last carriage; not realising that only four were coupled, he sat in the sixth. When the train set off, there was no guard, trained or untrained, in position – he was left at the platform.

Error 6: When the train pulled off for shunting it was fully loaded, in contravention of the Transport Rules as there was a steep gradient immediately after leaving the station and passengers were not supposed to be on board during shunting to reduce the consequences in the event of a runaway.

Error 7: The driver engaged 2nd gear, despite the fact that the rules and his training stipulated 1st gear.

Error 8: There was evidence that the driver had not correctly carried out skid correction; however he had only recently completed his training and the layout of the throttle, service brakes and sanders on this loco was different from that on which he had been trained.

Error 9: The gradients on the road were, in places, steeper than those specified in the Manager's Rules. This had been spotted and reported four months prior to the accident but no action had been taken.

Error 10: A practice had built up during regular loco maintenance to test the brakes with four empty carriages – you cannot, of course, assume that a loco passing this test will also stop with four fully loaded carriages. The "reason" for this was that, as men were not available to load the train during testing, the fitters were expected to fill the train with the same weight of sand-bags.
Chapter 2
The Nature of Human Error
It is easy to assume that human errors are one amorphous mass but in reality this is not the case as there are distinct types of human error. By knowing the type of error involved in a given situation, it is easier to identify its cause(s) and also the best approaches to remove the error potential or to mitigate the effects of the consequence(s). Probably the first classification of industrial accidents to recognise the importance of human error was proposed by Heinrich (1931) in his distinction between unsafe acts and unsafe conditions. Although there were some developments in the pursuit of an understanding of human error in industrial accidents in the decades immediately following Heinrich’s initial proposals, most notably the work of Bird and colleagues which led to the development of the International Safety Rating System (for example, Bird and Loftus, 1976), the topic remained relatively under researched until concern grew about human error potential in the nuclear industry, particularly after the Three Mile Island incident. Two classifications emerged from the nuclear context which have become the most widely accepted and used. One arose from the work led by Jens Rasmussen in Denmark and the second from the work of Jim Reason and colleagues in the UK (see, for example, Rasmussen, 1987; Reason, 1987). These two classifications are outlined in Sections 2.1 and 2.2 below. In addition to these two classifications, two other, much more general, classifications are often used which are outlined at Sections 2.3 and 2.4 below.
2.1 Skill-Based, Rule-Based and Knowledge-Based

This classification was developed by Rasmussen and relates to the "mental context" in which the error occurs.

Skill-based errors occur when you are working on "auto-pilot", doing a task which is over-learnt and one with which you are very familiar. Skill may seem an odd word to apply in this context, but it is being used in a way where it is considered to mean an operation which you can complete almost without conscious thought.

Rule-based errors occur when the operation is defined by a series of known rules (as encompassed in, for example, procedures and safe working practices etc.). A rule-based error occurs when the wrong action is ascribed to a rule or when a rule requires action but no action is taken.
Knowledge-based errors occur when the situation has gone beyond that covered by the person’s training and/or experience. The circumstances are so novel that there is nothing in their understanding which directly relates to identifying the required action. In these situations you have to rely on your wider pool of knowledge and try to work out what to do by analogy and/or revert to basic principles.
2.2 Slips/Lapses, Mistakes and Violations

This classification was proposed by Reason and is based, primarily, on the nature of the error itself.

Slip/lapse errors are characterised by situations where we start with the correct intention but end up taking the wrong action. The classic "everyday" example which is often quoted, and of which we have all been "guilty" at some time, is going to make a cup of coffee and suddenly realising that you have put a tea bag in the cup. Slip/lapse errors usually arise when we are distracted during an action and our mind is drawn away from the task in hand. The distraction can be external (for example, a sudden noise or someone talking to us) or internal (for example, thinking about where we are going in the evening or mulling over a previous argument at home).

Mistake errors are where you choose to do the wrong thing but when you make this decision it is with the belief that it is, in fact, the correct action.

Violations occur when you deliberately choose an action which deviates from that which is required. There are several sub-classifications within the violation category which are important to understanding both their causation and their remedy and these are considered in more detail in Section 2.5. However, the most important aspect to appreciate in relation to violation errors is that while intentional, they are not necessarily malicious or simply a result of laziness. For example, failure to wear Personal Protective Equipment (PPE) may be a function of it being uncomfortable or the correct PPE not being readily available. Alternatively, failure to use the correct tool or replacement part during maintenance may be a function of availability, and failure to complete all required checks during return to service may be a function of supervisory or other pressures to "get the job started again" etc.
2.3 Errors of Commission and Omission

Errors of commission are, fundamentally, where you do something wrong. For example, you press the wrong button, read the wrong information or give the wrong instruction etc.

Errors of omission are where you fail to do something which you should have done. For example, you forget to check a reading, miss a step out when returning an item to service after maintenance or forget to tell the oncoming shift of something to keep an eye on.
2.4 Input, Decision and Output

The simplest description of human information processing consists of three steps – input (that is, receiving the information); decision (that is, deciding what it means and what action, if any, needs taking); and output (that is, taking the action). Although clearly rather simplistic, this classification can be very powerful in helping to identify the causes of an error and where to apply error mitigation techniques. For example:

• Input errors immediately focus attention on the quality, comprehensiveness, availability, timeliness etc. of the information needed;
• Decision errors immediately focus attention on whether the operator understands the implications of the information received, whether they know what to do and how to do whatever is needed in response to the information;
• Output errors occur when you know, from the information received, that you should be doing something, you know what to do but there is a problem in actually carrying out or completing the necessary action. This could occur as a result of the accessibility of the control or lack of adequate feedback information (which informs you that the control action has had the intended effect) etc.
2.5 The Value of Human Error Classification

The general value of human error classifications is that they allow a more structured and focused approach to understanding both the potential causes of the error and the best route to error reduction. For example, the cause of a slip error (which is by definition unintentional) is likely to be very different from those of a violation error (which is by definition intentional). Similarly while refresher or additional training may be of value in reducing mistake errors it will be of no value whatsoever in attempting to reduce slip errors. The particular value of each of the above classifications is as follows:

• The skill, rule and knowledge classification has considerable value but tends to produce many skill- and rule-based errors but very few knowledge-based errors (for by definition, the circumstances which create potential for knowledge-based errors should be very rare). The recognition of the unique circumstances which occur in knowledge-based errors is however extremely important for, although rare, when circumstances take the workforce beyond their experience and capability comfort zone, the consequences can be considerable. The slightly atypical use of skill and knowledge in a way which varies from their everyday use can create difficulties in understanding and utilising this particular classification.
• The slip/lapse–mistake–violation classification can be particularly useful in relation to identifying appropriate mitigation. The introduction of violations as a unique category is also of considerable importance in gaining a comprehensive classification of error type. This classification also has the advantage that the descriptors used are used in a way which is entirely consistent with their everyday meaning.
• While there is value in distinguishing between errors of commission and omission in terms of understanding causality and routes to improvement, and it is widely used, it is a rather crude distinction.
• Although the input, decision, output classification is a very simplistic representation of human information processing, it can be a key tool for identifying the causal factors and the best area of focus for remedial action. In addition it provides a systematic, straightforward tool for high-level initial identification of human error potential.
It has become widely accepted that the most comprehensive and useful human error classification is a combination of the Rasmussen skill, rule and knowledge structure with Reason's slip/lapse, mistake and violations (for example, ACSNI, 1991, 1993). The combination most commonly cited is as follows:

• Skill-based slips/lapses;
• Rule-based slips/lapses;
• Rule-based mistakes;
• Knowledge-based mistakes;
• Violations.
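For readers who want to use the combined classification in practice – for example, to code the errors identified in incident investigations or risk assessments so that they can be counted and compared – the short sketch below shows one possible encoding. It is purely illustrative: the category names, the function and the sample data are our own assumptions rather than part of any published scheme, and Python is used simply as a convenient notation.

from enum import Enum
from collections import Counter

class ErrorCategory(Enum):
    """The combined Rasmussen/Reason categories described above."""
    SKILL_BASED_SLIP_LAPSE = "skill-based slip/lapse"
    RULE_BASED_SLIP_LAPSE = "rule-based slip/lapse"
    RULE_BASED_MISTAKE = "rule-based mistake"
    KNOWLEDGE_BASED_MISTAKE = "knowledge-based mistake"
    VIOLATION = "violation"

def summarise(coded_errors):
    """Count how often each category appears in a set of coded errors."""
    return Counter(error.value for error in coded_errors)

# Hypothetical coding of three errors from a single incident review
coded = [
    ErrorCategory.RULE_BASED_MISTAKE,
    ErrorCategory.VIOLATION,
    ErrorCategory.SKILL_BASED_SLIP_LAPSE,
]
for category, count in summarise(coded).items():
    print(f"{category}: {count}")

Coding errors consistently in this way makes it easier to see, over time, which categories dominate at a particular site and therefore which routes to improvement deserve priority.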
Violations are treated as a unique category for, unlike the others, they are intentional. Although this intention is often perceived as malicious, violating behaviour frequently originates from the genuine belief that opposing the rules, procedures etc. will lead to a more efficient (and/or safer) outcome. It is now recognised that like human error in general, there are distinct categories of violation and one particular classification (HFRG, 1995) has gained considerable support:

• Routine violations demonstrate habitual behaviour that goes against set rules but seems to be the normal and accepted method of conduct within that working environment;
• Situational violations occur when factors within the workspace restrict or limit compliance with a rule and subsequent obedience to the rule is thought to be ineffective or unsafe given the specific circumstance;
• Exceptional violations are uncommon and often arise in unusual situations where an individual is attempting to solve a novel problem and feels violating a procedure is unavoidable. These violations convey high risk as the consequences of the action are not always predictable;
• Optimising violations emerge to make a work situation as interesting as possible, sometimes because of boredom and sometimes inquisitiveness. Exploring the boundaries of a task/operation perceived to be too restrictive is a good example of this.
As with human error in general it is clear to see from the above categories that the causality and best route to improvement will vary from one type of violation to another. There is an additional classification which, although not a classification of human error per se, is of interest in helping to understand human errors and their source, particularly in relation to decision errors. This classification was originally proposed by van der Molen and Botticher (1988) and adapted to a mining context by Joy (2000). The classification separates decisions into three groups: 1. Strategic decisions (formal, planned exercises which go through a systematic step-by-step process of gathering information, considering options and drawing carefully considered conclusions). 2. Tactical decisions (informal decisions where decisions are made on the basis of judgement against a known and established rule, or where no rules exist, based on experience and a local, on-the-spot consideration of pros and cons). 3. Operational decisions (those where there is almost no conscious thought, where the decision has become ingrained into an almost automatic process – this is exemplified well by learning to drive where every action starts as a deliberate conscious decision but, over time, the required actions become ones which are so smooth that there is almost no need for thought). It is easy to see how such a classification adds value to the process of using the more specific human error classifications to identify where remedial action should be targeted. For example, an accident investigation which shows that an error occurred at the tactical decision level because an on-the-spot decision had to be made without any guiding rules could clearly suggest the need for more thought at the strategic level to establish and promulgate new rules to protect against a similar situation arising in future.
2.6 Active and Latent Failures

In addition to the above classifications there is one final distinction which is crucially important to the understanding of human error, its causality and to error reduction and that is Reason's distinction between active and latent failures (for example, Reason, 1987, 1990).
In essence active failures are the operator errors which directly affect the operation of the equipment/systems etc. and which are often thought of as the “immediate cause” of the accident or the “initiating event”. Latent failures are failings in design, management, training, supervision etc. which predispose the active failures. Reason likened latent failures to pathogens in the body which lie undetected but which can, at any time, predispose the active failure. To continue the medical analogy, the active failures are, in effect, the symptoms and latent failures the disease. Identification and understanding of the latent failures which predispose active failures is crucially important for no matter what you do in response to the active failure, if the latent failure(s) are not identified and addressed, the probability of repeat or similar accidents remains. Unfortunately traditional approaches to human error in accident investigation often stop once a human error (active failure) has been identified. Such an approach is bound to fail, or at the most, only partially succeed. Consider this example: A number of damage only accidents had occurred to several of the vehicles in the underground fleet at a hard rock mine. Examination of the damage suggested, without doubt according to the mine’s chief engineer, that overspeeding was a significant factor. The mine had carefully surveyed where speed limits would be required (for example, downgrades, bends, junctions etc.) and established appropriate speed limits. They had signposted these (with reflective signs) and carefully positioned them where they would be easy to see from the driver’s position. In addition they had included the speed limit requirements in both initial and refresher driver training (driver’s were re-trained and tested annually). On the basis of the risk management measures taken they had come to the conclusion that the only possible explanation was that they had a number of “cowboy drivers”. More detailed Human Factors examination based on identification of what could predispose over-speeding identified that not one of the vehicles in the underground fleet was fitted with a speedometer!
In this circumstance, despite all the very sensible risk control measures, the latent failure in the design of the vehicle meant that, unless it was resolved, on-going incidents were inevitable. While the above example focuses on a design failure (in that the machines had what was, in effect, designed-in error potential), as Reason (1990) identified, latent failures which predispose the potential for error can and do occur at all levels within an organisation. A classification of these levels is shown in Figure 2.1.
Although this may seem a fanciful, if not silly, example it is, in fact, based soundly in reality – see Chapter 3.
Figure 2.1 Human error likelihood influence framework
The levels of influence shown in the framework, working outwards from the person–machine interface, are:
• Person–Machine Interface
• Workplace Environment
• Codes, Rules and Procedures
• Training and Competence
• Supervision/First Line Management Roles and Responsibilities
• Safety Management System/Organisation and Safety Culture
Errors can occur at each of these "levels of influence" which can predispose the "critical" active failure (operator error) which creates an accident. Identifying the "level of influence" where the predisposing error has occurred is important for two reasons:

• the nature of the error mitigation methods chosen will change with the level, as will the organisational level where error mitigation action should be directed;
• the range of effect within the organisation increases as the level spreads away from the person–machine interface. So, for example, a design error predisposing an operator error at the workplace will, generally speaking, primarily affect that workplace (and those others which use the same equipment and involve the same person–machine interfaces), whereas a predisposing error associated with training or codes, rules and procedures will affect all workplaces/tasks where that training or the procedures etc. apply. Ultimately, and logically, poor safety culture will pervade the whole organisation.
Identifying the predisposing factors is therefore not only important to understanding human error but it is also crucial in identifying and targeting
appropriate remedial measures and ensuring that the likelihood of repeat, or similar, events is minimised across all aspects of the operation which could be influenced by the latent failure. The importance of understanding what might predispose human error was elegantly summarised by Fox (1991) as follows: The point to be taken is that the designer or manager who considers human error solely in terms of potential violations and failures will essentially only address the question of how an accident might happen; he will not fulfil a necessary condition for safety by considering why it might happen … It is the contribution which allows the designer and manager to play a more effective role in safety. It is the contribution which gives a new depth to safety activities.
More recently, Reason (for example, Reason, 2000) and others have taken this approach further by introducing the concept of system-wide (or “organisational”) errors; this approach focuses on issues such as management decision making, safety management, safety culture and communications. For the analysis of accidents, the fundamental principle here is that “system” error rather than “individual” error should be examined to get a full understanding of the incident. Along similar lines, Dekker (2006) recently provided a “new” view of human error, which combines the above points into three key factors: 1. Human error is not a “cause” of failure. It is the effect, or symptom, of deeper trouble; 2. Human error is not random – it is systematically connected to features of people’s tools, tasks and operating environments; 3. Human error is not the conclusion of an incident investigation – it should be the starting point. Each of the levels of influence identified in Figure 2.1 is considered in detail in Chapters 3–8. Each chapter is divided into two sections: the first provides a series of examples taken from the mining industry to indicate the type of problems and the potential (sometimes actual) errors that can be predisposed by failures at that level. The second section examines a number of potential “routes to improvement” to minimise error at that level. The examples used cover a range of mining activities both underground and on the surface and are associated with mining operations to extract a range of material including coal, gold, platinum, diamond, copper, iron ore and chrome.
Chapter 3
Predisposing Factors: Level 1 – The Person – Machine Interface
The major concern at this level is what Simpson (1996a) described as "designed-in accident potential"; similarly, McDonald (1993) states: "Too often, mining organisations buy their safety problems." In the same vein, Harris and Rendalls (1993) state: "Put simply Eltin [the company Harris worked for] will continue to have injuries that are a function of machine design as long as manufacturers, statutory bodies, and purchasers choose to ignore the issue." In short, the failure of mining equipment companies to utilise ergonomics research, guidance and recommendations within their design process creates the potential for human error and, thereby, the increased likelihood of accidents/incidents.
3.1 The Problem

The extent of this problem can be seen in the examples below.

Example 1: Underground coal mining

During the second half of the 1970s and throughout the 1980s, the Ergonomics Branch of the Institute of Occupational Medicine carried out ergonomics evaluations of a selection of prototype mining equipment as part of a wider engineering and operational assessment in trials at the NCB/BCC surface test site. In one of these ergonomics evaluations (on a drill loading machine for use in development headings), the investigators identified so much wrong with the machine from an ergonomics perspective that they decided it would be easier to re-design the operator than re-design the machine! (The re-designed operator is shown, alongside a normal miner, in Figure 3.1.) The design features of the "genetically modified" operator are as follows:

• he has an extended neck to allow him to see over/round the twin drill booms which are directly in front of him at seated eye level (for a normal operator);
• he has a shortened left arm to enable him to easily operate a bank of 12 controls positioned at shoulder level and very close to the shoulder;
• he has a large right hand in order to operate the track controls (to turn a tracked vehicle to, say the left, you have to simultaneously forward the right track while reversing the left);
• he has bowed legs in order to sit comfortably around the bank of 8 controls positioned between his knees;
• he has a shortened right leg to comfortably operate the "deadman's pedal" which is positioned very close to the seat squab.

While this may seem a rather flippant example, the crucial point is that if the machine "expects" its driver to be shaped like the genetically modified version in Figure 3.1 but in reality the driver is normal, then the likelihood of errors increases. This increase is both in direct errors (for example, the inability to easily reach and operate controls) and in indirect ones (by, for example, increasing musculoskeletal strain, fatigue etc.).
Figure 3.1 Re-designed drill-loader operator. Source: Original drawing by Steve Mason when working at the Ergonomics Branch, Institute of Occupational Medicine – reproduced with permission.
Example 2: Underground coal mining

Two designs of load–haul–dump free-steered vehicles (known locally as the Rhino and the Domino) were used in the mine for underground materials transfer. Unfortunately, the relative position of the forward and reverse on the gear selector was exactly the opposite from one machine to the other, as shown in Figure 3.2. This would not have caused any problems if drivers were trained on one specific machine and subsequently only ever drove that machine. However, initial operator training on the surface used one or other machine (dependent on availability) with successful trainees licensed to drive both. In addition, it was not uncommon for operators to change machines during a shift. In this circumstance errors are, quite simply, inevitable.
Figure 3.2 Gear Selection on Rhino and Domino machines (the FORWARD and REVERSE positions on the selector are transposed between the two machines)

The same vehicles also had a conflicting layout for the bucket controls. In both cases four vertical levers were used to control bucket float, crowd, eject and QAS. The layout on the Rhino, reading from the driver's left to right, was as follows:

Float – Crowd – Eject – QAS

Whereas on the Domino (again reading from the driver's left to right) it was as follows:

CROWD – FLOAT – EJECT – QAS

As with the gear selection, dealing with such conflicting layouts when drivers swap from one vehicle to another, even during the same shift, will make errors inevitable.

Example 3: Surface hard rock mining

Harris and Rendalls (1993) catalogue a whole range of design limitations in surface mining equipment which either had or could contribute to errors/accidents. These included, in particular, concerns about the ease of access and egress on the machines. The access/egress problems included, for example, excessive reach (both for the arms and legs), unstable footing (including climbing on wheels), variability in the distance between steps, excessive distance between floor and first step, insufficient platform width/depth, lack of guard rails etc. Not only do limitations such as this considerably increase the risk of slip/trip/fall accidents but, given the awkward postures involved, they tend to encourage risky behaviour such as jumping off the ladder or using alternative (easier to reach) hand/footholds rather than adopt the awkward postures associated with the "official" route. The authors also point out that the positioning of access/egress ladders etc. often leaves them vulnerable to damage as they are often outside the main envelope of the machine (almost as though they had been added as an
after-thought). Damage to ladders etc. is almost certainly likely to exacerbate the problems created by the original design (even if that was poor).

Example 4: Underground hard rock mining

At one of the mines included in a South African Safety in Mines Research Advisory Committee project (Simpson et al., 1996), the Engineering Department at the mine had noticed an on-going trend of damage-only accidents affecting the vehicles used underground. From examination of the damage, all of these incidents seemed to be related, at least in part, to excessive speed. Why this should be occurring was particularly puzzling as the mine believed that they had taken sufficient steps to ensure compliance with speed limits. These included:
• •
they had surveyed and identified all the areas underground (down grades, bends, junctions etc.) where excessive speed could be a problem; they had sign-posted the speed limit at each point identified in the survey; they had seriously considered the visibility of the signs using reflective material and carefully positioned them at points which would be easily seen from the driver’s position; knowledge of the speed limits was part of the driver training and included in the competence testing before drivers were issued with their licences; speed limits were covered in the annual refresher training for the drivers.
While this does seem to be a comprehensive suite of risk control measures, the real reason for the problem emerged during a potential human error audit and safety management system review which was being undertaken at the mine – not one of the 150+ vehicles in the underground fleet at the mine was fitted with a speedometer.

Example 5: Underground coal mining

As part of a study of underground transport operations (Kingsley et al., 1980) it was noticed that there was little consistency, if any, in the layout of the throttle, service brakes, and sanders (the controls used in skid correction) across the fleet of locos used underground in UK coal mines. The layout of the skid correction controls across 10 locos from four manufacturers is shown in Table 3.1. Interestingly, close examination of Table 3.1 shows that there is not even any consistency in the layouts within a single manufacturer's fleet. As in Example 2 above, this would not be of major importance if each mine used only one type of locomotive and drivers were only licensed to drive one type of locomotive. Unfortunately, this was not the case: individual mines would often have several different types of locomotive and it was not uncommon for drivers to change locos during their shift.
Table 3.1 Position of skid correction controls across ten types of underground locomotive

Throttle   Brake   Sanders   Manufacturer
LH         RH      RH        A&B
LH         RH      LH/RH     C&B
LH         RH      LH        B
LH         LH      RH        A
LH         LH      –         D
RH         RH      RH        A
RH         RH      –         D
RH         LH      LH        A
RH         LH      LH/RF     E
RH         LH      RF        D

Key: RH = positioned for operation by the right hand; LH = positioned for operation by the left hand; RF = positioned for operation by the right foot.
Example 6: Surface hard rock mining

During the Simpson et al. (1996) study (mentioned previously in Example 4) it was noticed that the brake lights on haul trucks used on a surface mine were both relatively small and positioned such that they were very vulnerable to dirt and material thrown up by the rear wheels. To emphasise how small these brake lights were, the researchers calculated the ratio of the diameter of the brake light to the overall height of the vehicle and then applied the same ratio to a typical family car. Applying this ratio to a family car would produce brake lights of approximately the same size as an adult male thumbnail. The size, combined with the fact that the lights were so dirty that their visibility was seriously reduced, may have been one of the factors leading to a serious problem of tailgating which was evident on the mine – you had to be within the safe braking distance in order to see whether the vehicle in front was actually braking!
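The scaling argument in Example 6 is easy to reproduce. The sketch below works through the arithmetic with illustrative figures – the truck and car dimensions used are assumptions chosen purely for the purpose of the calculation, not measurements reported in the study.

# Worked version of the brake-light scaling comparison in Example 6.
# All dimensions are illustrative assumptions, not values from the study.

truck_height_mm = 6000        # assumed overall height of a large haul truck
truck_lamp_diameter_mm = 90   # assumed diameter of its brake-light lens
car_height_mm = 1450          # assumed overall height of a typical family car

# Ratio of lamp diameter to vehicle height, applied to the car
ratio = truck_lamp_diameter_mm / truck_height_mm
equivalent_car_lamp_mm = ratio * car_height_mm

print(f"lamp/height ratio: {ratio:.3f}")
print(f"equivalent car brake light: {equivalent_car_lamp_mm:.0f} mm across")
# With these assumptions the equivalent lamp is roughly 20 mm across –
# about the size of an adult thumbnail, which is the comparison the
# researchers drew.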
Example 7: Underground hard rock mine

In one mine studied as part of the Simpson et al. (1996) study, the robots controlling access to junctions were intended to be operated by vehicle drivers on entry and exit to the junction by use of pull-wires suspended from the roof. At one T-junction the design of the robot system meant that it was actually impossible to follow the rules. The problem is shown diagrammatically in Figures 3.3 and 3.4. If a driver enters the junction from the leg he cannot activate the robots as there is no pull-wire in place, simply a STOP sign (see Figure 3.3). In this case traffic entering either of the arms of the junction will not be controlled (that is, the robots will show green). If the driver enters from one of the arms of the junction (see Figure 3.4) and activates the pull-wire, he cannot activate it on exit via the leg of the junction which means the robot on the other arm will be left permanently on red. As this is clearly nonsense, drivers did not bother activating the pull-wires on entry from the arms of the junction. In effect therefore, the robot system at this junction served no purpose and any risk control envisaged by its introduction simply did not exist – the drivers were "forced" into error and contravention of the traffic rules at the mine.

Example 8: Underground coal mining

During a series of studies undertaken in UK coal mining it became apparent that the field of vision of drivers on a range of underground machines was severely restricted. An example (taken from Simpson, 1998b) of the extent of restricted vision on a development machine is shown in Figure 3.5.
Figure 3.3 Inappropriate robot system design: entry from leg of T junction
Figure 3.4 Inappropriate robot system design: entry from arm of T junction
Figure 3.5 Restricted vision from the driver's position on a drill loader (the plot marks the left and right sides of the roadway, the rip/heading, and the area obscured from the operator's view). Source: Simpson, 1990. Originally published in the journal Ergonomics, details can be found at http://www.informaworld.com. Reproduced with permission of the Taylor and Francis Group.
The extent of the restricted vision often meant that a man from the heading team had to be deployed as a “spotter” for the driver, positioning himself in front of the machine and signalling by arm and cap lamp movements to direct the driver’s positioning of the drill booms. Given that such machines are tracked (and therefore difficult to manoeuvre accurately in limited space) and that there was often less than a metre clearance on each side of the machine, it is clear that any error on the part of the driver could create considerable risk for the “spotter”. Equally, errors in drilling the required firing pattern holes could result in subsequent risks.
Not only does this restriction on vision create potential driver error but by necessitating a “spotter” it creates a task which is by definition intrinsically risky and would not have been necessary at all had the designers ensured even reasonable vision from the driver’s position. Similar problems have been identified on other vehicles in use in underground mines (both coal and hard rock). For example, in 1992 the UK Health and Safety Executive published a Topic Report (HSE, 1992) titled “The Safety of Free Steered Vehicle Operations Below Ground in British Coal Mines” (building on work carried out by both British Coal and the HSE) which includes several plots of dangerously restricted vision from free-steered vehicles (shuttle cars etc.). The same report highlights some of the specific accidents (including fatalities) which had occurred associated with FSV operation during the five-year period from 1986 to 1991. The report states: Eleven men sustained major injuries when they were struck or trapped by FSVs. Five occurred to men standing in the roadside to allow vehicles to pass. ... Many of the over 3 day accidents resulted from similar causes.
It would seem inevitable that restricted driver vision is likely to be a factor in at least some of these accidents (a point subsequently supported by Horberry et al., 2006, in their work examining forklift truck accidents). Similarly the Simpson et al. (1996) transport and tramming study in South Africa identified this as a common problem across most of the range of underground vehicles used in both the coal and hard rock mines studied. Equally, the same issue has been raised in the USA (for example, Sanders and Kelly, 1981). Example 9: Underground hard rock mining and underground coal mining As part of the SIMRAC transport and tramming study referred to earlier (Simpson et al., 1996), it was noticed at one mine that load–haul–dump drivers removed their self-rescuers before entering the cab and placed them on the floor. This action arose from the extremely cramped cab, very restricted access/egress route to the cab and, in particular, from the fact that the seat backrest made no allowance for wearing the self-rescuer. The self-rescuer could not be pulled round to the side of the belt as it restricted hip movement and further exacerbated the postural problems in the cramped cab. In an emergency, where the self-rescuer was required, the design of the cab and, in particular, the seat back rest, would have clearly created a situation which could compromise the safety of the driver for: (a) in emergency circumstances he is likely to leave the cab and forget his self-rescuer; and (b) even if he remembers, he is unlikely to be able to reach back easily into the cab to recover it. Similar problems had been observed in an earlier study of FSVs/LHDs in UK coal mining (Kingsley et al., 1980). In addition driver seating was identified
as a consistent problem in a wide-ranging study of coal mining ergonomics in Queensland and New South Wales (McPhee, 1992).
Example 10: Surface hard rock mining
At one mine in the SIMRAC transport and tramming study (Simpson et al., 1996) it was noticed on several haul trucks of the same type that the fire extinguisher release mechanism was located in different positions:
• in the cab, behind and to the right of the driver position;
• in the cab, behind the left shoulder of the driving position;
• in the cab, close to the floor and near the driver’s left foot position;
• outside the cab on the door wall;
• outside the cab on the front wall.
When the drivers were questioned as to where the release mechanism was on the truck they were driving, it was hardly surprising that, in each case, although they knew where it was likely to be, none could immediately specify where it was on that particular truck.
Example 11: Underground coal mining
The coal clearance system at two closely situated mines in the UK was rationalised so that the output of both mines would come to the surface at one mine only. As part of this process the coal clearance operations in one control room were closed down and a number of the operators transferred to work in the new combined control room. There were, as expected, a number of teething troubles immediately after the change-over. But within these troubles there were a number of “random” belt stoppages which had not been expected and which were initially unexplained, especially as the computer system used by the control room staff at both mines was essentially the same. More detailed examination identified that the computer systems differed in one crucial way – the colour coding used for stopped and running conveyor belts. At one mine green indicated running belts and red indicated stopped belts (in effect following the traffic light convention). At the other mine, red was used to indicate running belts and green indicated stopped belts (this followed safety logic in that running belts were dangerous and therefore red, while stopped belts were safe and therefore green). Although the logic behind both colour coding conventions was sound, it is not difficult to imagine the confusion (and error potential) for those operators who were suddenly presented with a coding system which was exactly the opposite to the one they had used on a day-to-day basis for several years.
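A minimal sketch of the kind of remedy the example implies, assuming a hypothetical control-room display application (the state names, colours and function below are invented for illustration, not taken from the mines concerned): the belt-state colour convention is defined once, in a single shared place, so that every screen the transferred operators use presents the same coding.

```python
from enum import Enum

class BeltState(Enum):
    RUNNING = "running"
    STOPPED = "stopped"
    FAULT = "fault"

# One agreed, site-wide convention (here, the traffic-light logic: green = running).
# Defining the mapping in a single shared module means the coding cannot silently
# differ between control rooms.
STATUS_COLOURS = {
    BeltState.RUNNING: "green",
    BeltState.STOPPED: "red",
    BeltState.FAULT: "amber",
}

def display_colour(state: BeltState) -> str:
    """Colour used to draw a conveyor on the mimic display."""
    return STATUS_COLOURS[state]

print(display_colour(BeltState.RUNNING))  # always "green", at either mine
```

Whichever convention is chosen, the point of the example is that two opposing conventions should never be presented to the same operators without explicit retraining and clear signalling of the change.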
Example 12: Underground coal mining A study funded by ACARP (Burgess-Limerick and Steiner, 2006) examined the causes of injuries associated with the operation of continuous miners, shuttle cars, load–haul–dumps and personnel transport in the New South Wales coalfield over the period 2002–05. One of the major categories of causal factors they identified was “hazards associated with inadvertent operation of controls, operation of incorrect controls, operating controls in an incorrect direction, or while a person is located in a pinch point”. The authors go on to note that the control operation errors which had been identified earlier (for example, Helander et al., 1983; Rushworth et al., 1990) in relation to portable roof bolting equipment “remain to some extent in the design of controls on the integrated bolter miners which are prominently employed in Australian mines”. This clearly indicates that little or no attempt was made to resolve a known problem even when radical design changes were made to bolting equipment. The same study also examined accidents in relation to shuttle cars and again identified design problems likely to induce human error. In relation to shuttle cars, the paper states: Extreme visibility issues also exist with SCs. These are also bidirectional vehicles in that they “shuttle” coal between the CM and the boot end of the conveyor belt without turning. An incompatibility between the steering wheel action and the vehicle response exists in the SCs employed in NSW when driving the SC towards the face. This is an extreme violation of a fundamental human factors principle which has the potential to contribute to high consequence events, especially when combined with restricted visibility.
Each of the above examples relates to operational activities rather than maintenance tasks. The reason for this is, quite simply, that operational tasks have received more human factors/ergonomics attention than maintenance, despite the fact that there is just as much potential for human error in maintenance and that maintenance errors can also predispose accidents.
Example 13: Surface hard rock and underground coal mining operations
Ferguson et al. (1985) carried out what was probably the first detailed study of the ergonomics of maintenance tasks in mining (specifically underground coal mining). A whole catalogue of situations was identified where a lack of design consideration placed the maintainer in an intrinsically unsafe position (for example, on the face side of shearers, involving extremely cramped conditions and exposure to falls of coal or rock from the face line), where the postures required were awkward at best, or where the required (or ideal) tools could not be used. Perhaps the most telling finding from an ergonomics and safety risk perspective was the fact that, on average, 30 per cent of the total task time was required simply to gain safe access to the component needing
attention. In breakdown maintenance in particular, the temptation/pressure to shortcut this time is considerable. Such short-cuts often involved not bothering with some of the actions required to make the working area safe. In addition to the concerns identified for the drivers of surface mining equipment arising from inadequate consideration of ergonomics (see Example 3 above), Harris and Rendalls (1993) also identified similar problems in relation to the maintenance access and poor operating postures forced onto the maintainer as a result of insufficient consideration during the design process of how the required tasks will actually be carried out.
3.2 Potential Routes to Improvement
There are four levels at which action could/should be taken to reduce the likelihood of designed-in error potential in mining equipment:
• Mine health and safety regulators;
• Mining equipment manufacturers/suppliers;
• Mining companies and individual mines;
• Human factors/ergonomics specialists.
Each of the above is considered in more detail below.
3.2.1 Mine health and safety regulators
Section 6 of the UK Health and Safety at Work Act 1974 places an unequivocal responsibility on the manufacturers and suppliers of equipment to ensure, in design and manufacture, that nothing is done (or left undone) which could compromise the end user of the equipment. Since that enactment similar regulations have been adopted by other major mining countries. For example:
• Torlach (1996), in a paper on the (then) new legislative directions in relation to Western Australian mining, notes that: “For offences by third parties (equipment suppliers and manufacturers and consulting engineers), the time limit [for prosecution] is extended to two years”;
• The European Union Machinery Safety Directive (enacted under the Framework Directive on Health and Safety at Work) places similar duties on designers, manufacturers and suppliers as the UK H and SW Act and is in force within the regulatory systems of all Member States;
• Johnstone (2000) notes the development within legislation applying in mining in Europe, North America and Australia of “the imposition of general duties [of care] on ‘upstream duty holders’ (such as designers, suppliers and manufacturers of plant and substances)”;
• The Mine Health and Safety Act 1996 in the Republic of South Africa also places similar duties on manufacturers and suppliers to take measures within their control to ensure the health and safety of the end users.
The fact that the regulators themselves recognise the problems caused by poor equipment design is shown clearly in a quote from Torlach (1998) in a paper reviewing regulatory needs in the mining industry for the twenty-first century: There exists a considerable gap between the standard of safety in mining plant design currently achieved by manufacturers, suppliers and importers, and the requirements of legislation. This is particularly the case in respect of ergonomics considerations for mobile plant operators.
Despite these provisions Simpson (1996a) pointed out that, within UK mining operations, no prosecution had been made against designers, manufacturers or suppliers in the 20-plus years since section 6 of the 1974 Act enabled such action. There can be no doubt that there were, during that period, accidents which were, in part at the very least, a result of designed-in human error (and thereby accident) potential. If prosecutions have been brought against designers, manufacturers or suppliers of mining equipment in any of the countries where legislation allows, there is little doubt that they are few and far between. There is clear evidence from the examples presented above that it is common for the industry to be provided with a wide range of equipment which has serious limitations in relation to the health and safety responsibilities placed on its manufacturers and suppliers. Equally there can be little doubt that such prosecutions would provide a very powerful motivation for mining equipment companies to take the human factors of their equipment much more seriously. There is a clear need therefore for mining industry regulators to hold the industry’s manufacturers and suppliers much more accountable for their failure to deliver the responsibilities placed on them by current legislation. 3.2.2 Mining equipment manufacturers/suppliers There is a clear legal and moral responsibility on the designers, manufacturers and suppliers of mining equipment to ensure that the current lamentably low level of consideration of both the operators and maintainers of their products is significantly improved as quickly as possible. None of the actual/potential human errors described in the examples above are subtle problems, nor do they relate to the technical intricacies of the human visual system, information processing etc. Equally no detailed understanding of human psychology, physiology or anatomy is needed to address what are essentially ergonomics limitations of the crudest type. The fact that such fundamental limitations can and do create serious health and safety risks shows clearly that
manufacturers and suppliers are currently falling lamentably short of their duty of care responsibilities. To redress this situation requires much more attention to be given to the consideration of human factors and ergonomics during their design processes. This is not to suggest necessarily that they need to employ human factors/ergonomics professionals but rather, initially at least, they need to become much more aware of and routinely use the information which is already available in the open literature. Examples of what is already available to aid the incorporation of human factors/ergonomics in mining equipment design are shown below.
Design aids for designers
Projects undertaken by the Institute of Occupational Medicine (funded by the National Coal Board and the European Coal and Steel Community) during the 1970s and early 1980s had shown clearly that almost without exception the underground coal mining machines in use in the UK had serious ergonomics limitations, many with potential safety implications. It also became evident that most of the manufacturers used little, if any, ergonomics guidance or recommendations during their design process. Examination of the ergonomics guidance publicly available at the time suggested that what was available would be unlikely to encourage designers to use it. This problem (that ergonomics guidance was not exactly user-friendly) had, ironically perhaps, been raised several years earlier by Meister and Sullivan (1968) who, after a study reviewing designers’ willingness to use human factors/ergonomics recommendations/guidance, concluded:
Despite the evidence that human engineering can prevent system failure, equipment designers continue to reject the remedies offered by human factors specialists. The specialists share the blame – often the engineer cannot read the prescription.
Typical of the kind of recommendation which concerned them, Meister and Sullivan quoted several examples from the then current version of the US Military Standard on human factors, one of which is reproduced below:
The intensity, duration and location of aural alarms and signals shall be selected so as to be compatible with the acoustical environment of the intended receiver, as well as the requirements of other personnel in the signal area.
While there is no doubt about the correctness of the above, it is hardly helpful to someone designing, say, a reverse warning on a haul truck, against a commercially driven deadline. An attempt to make ergonomics recommendations more readily available and meaningful to mining equipment designers led to the concept of design aids for designers (Simpson and Mason, 1983). The concept was developed into a range
of reports which provided ergonomics recommendations and guidance specifically tailored to individual families of mining equipment. The equipment families covered by the Ergonomics Principles Reports were as follows:
• Free-Steered Vehicles (Mason and Simpson, 1990a)
• Underground Locomotives (Mason and Simpson, 1990b)
• Drill Loaders (Mason and Simpson, 1990c)
• Continuous Miners (Mason and Simpson, 1990d)
• Roadheaders (Mason and Chan, 1991)
• Shearers (Mason and Rushworth, 1991)
• Portable Roof Bolters (Rushworth and Mason, 1991)
• Designing for Maintainability (Mason et al., 1986)
Within each report, guidance was provided on:
• Lateral clearances;
• Lines of sight;
• Seating;
• Access corridors;
• Operator reach envelopes;
• Operator visual envelope;
• Overall operator body clearance;
• Design and layout of controls;
• Design and layout of displays;
• Labelling and instructions;
• Machine lighting;
• Thermal environment;
• Auditory environment;
• Vibratory environment;
• Maintainability.
Examples of the type of information provided are given in Figure 3.6.
Following on from the idea of providing ergonomics recommendations tailored to specific equipment families, the British Coal team realised that mining equipment designers would also benefit from a process which would allow them to assess the ergonomics/human factors of their equipment during the design process. Two additional design aids were developed:
1. An Operability Index (Mason and Simpson, 1992, summarised in Simpson, 1993)
2. A Maintainability Index (Rushworth et al., 1993)
Figure 3.6
Two examples of data sheets from the UK Ergonomics Design Principles Reports (Simpson, 1993)
Note: Originally published in the Proceedings of “Minesafe International 1993”. Dept. Minerals and Energy Western Australia.
A third stream of the design aids for designers philosophy was developed in the creation of reports showing how the ergonomics/human factors of underground locomotives and free-steered vehicles could be improved by retrofit changes within the capability of individual mines (see, for example, Rushworth, 1996). Although the above initiatives provided comprehensive coverage of the ergonomics recommendations appropriate to each equipment family addressed, within the wider mining context the range of families covered is limited (for example, the principle was not applied to any surface mining equipment). In addition, it is likely that there is scope for updating the work done to ensure it is as applicable and comprehensive at the start of the twenty-first century as it was at the end of the twentieth. More recently than the UK work on design aids for designers, McPhee (2007) has advocated the need for the development of generic usability standards for large vehicles. The main categories of ergonomics design advice which she advocates are:
• Ingress/egress from the cabin;
• Operator’s space;
• Seating;
• Controls;
• Instruments and displays;
• Other warning signals;
• The cab environment;
• Visibility inside and from the cab;
• Accessibility of fluid level gauges/sight glasses for operators;
• Accessibility for servicing by operators;
• Accessibility to regularly replaced or serviced components for maintenance personnel;
• Training.
If you replace McPhee’s word “generic” with the Simpson and Mason phrase “machine family” and compare the above list of ergonomic considerations with that from the UK work (given on p. 28), the similarity is considerable. This suggests, from a positive point of view, that there is clearly benefit from such an approach (given that the same conclusions in relation to what was needed were reached independently). However, more negatively, it also emphasises what little has been achieved in 30 years or so.
Earth Moving Equipment Safety Round Table (EMESRT)
In 2006, the Minerals Industry Safety and Health Centre (MISHC) of the University of Queensland helped initiate a programme supported by a number of the major mining houses (MISHC, 2009). The purpose was to increase surface mining equipment manufacturing companies’ awareness of:
• Ergonomics/human factors limitations in their existing designs which have safety or health implications;
• Retrofit ideas used on individual mines (or in individual mining companies) which have overcome or minimised the problems identified;
• The ergonomics/human factors guidance/recommendations necessary to avoid such design limitations in future designs.
The objective was to develop a process designed to accelerate the development and adoption of leading practice designs of earth-moving equipment to minimise health and safety risks. Discussions with representatives of the participating mining houses (largely based on their fatal risk protocols) together with information from the mining human factors literature were used to identify key aspects of the operation and maintenance of earth-moving equipment where there was a foreseeable risk of human error. Fifteen key areas were identified as follows:
1. Equipment access/egress;
2. Working at height;
3. Noise;
4. Whole body vibration;
5. Fire;
6. Dust;
7. Isolation of energy (including parking);
8. Visibility – collision detection/avoidance;
9. Machine stability (including slope indication);
10. Guarding;
11. Displays and controls (including labelling);
12. Tyres and rims;
13. Manual handling;
14. Work postures;
15. Confined spaces.
The programme involves the development of “design philosophies” for each of the above areas. Each “design philosophy” includes the following elements:
• Objective;
• General outline;
• Risks to be mitigated;
• Examples of industry attempts to mitigate risks.
As part of this overall process, MISHC and EMESRT have also developed an Operability and Maintainability Analysis Technique (OMAT) which is of particular value in allowing designers and manufacturers to qualitatively assess the adequacy of the human factors of their design throughout the design process. OMAT is a newly developed risk assessment technique, performed by original equipment manufacturers in conjunction with user input from mine sites. MISHC and EMESRT developed the OMAT technique to help identify, prioritise and eliminate or mitigate any potential safety issues found in new and current earth-moving equipment, specifically through the application of human factors engineering to haul trucks. Both the EMESRT “design philosophies” and the OMAT process are freely available on-line through the MISHC Minerals Industry Risk Management Gateway (MIRMgate) (http://www.mirmgate.com/). MIRMgate is designed as one-stop shop for good practice information about managing safety and health risks in the minerals industry and therefore contains a wide range of other information beyond ergonomics to assist in improving mine safety. An example of this is “TYREgate”: a decision support tool for mobile equipment tyre and rim risk management (http://www.mirmgate.com/tyregate/index.php). Tyres, rims and wheel assemblies are safety critical items which must be maintained and operated correctly. TYREgate is a risk management decision support tool on MIRMgate that allows the analysis of a large and diverse range of tyre and rim related incidents and accidents, in “real
time”. Results of this are presented in a range of intuitive graphical formats and reports. The purpose of TYREgate is to help improve the safety of tyre and rim maintenance and the use of rubber tyred equipment at mine sites (Kizil and Rasche, 2008).
Although this initiative is still in development it is likely to have more success than the design aids for designers approach for, although they have exactly the same objectives and many common elements, EMESRT has two major benefits:
1. It has the collective weight of many of the major mining houses supporting it, thus providing collaborative “pressure” on the mining equipment companies.
2. By being a partly web-based/remote forum it not only allows for easier, effectively instant, access, but it also enables easier update and expansion of, for example, equipment types.
To fully capitalise on the contribution of ergonomics/human factors to reduce the likelihood of designed-in human error potential, however, the EMESRT approach will need to be expanded to apply to underground mining equipment. At the time of writing (2009) this expansion of EMESRT is in progress. There is no doubt that a great deal of information is available to enable the designers in mining equipment companies to minimise the tendency toward designed-in human error potential. Equally, as the EMESRT initiative shows, this information is being made available in much more accessible/user-friendly ways. The only remaining question is whether the mining equipment manufacturers will recognise their duty of care and begin to use this information within their design process.
Computer simulation
Work by Denby and his colleagues at the University of Nottingham mining department began the process of exploiting the potential of computer simulation/virtual reality to consider the safety of mobile plant operations in mining (see, for example, Schofield et al., 1994; Denby et al., 1995; Denby, 1996; Hollands et al., 2000). The systems Denby and colleagues developed allowed risk envelopes to be created around the outline of a vehicle dependent on visibility, speed etc. and then allowed the vehicle to be moved about in a digital simulation of, say, an underground mine, with the risk envelopes changing dynamically with the movement of the vehicle. This approach has also generated development work in other mining countries both into vehicle and other aspects of safety (see, for example, Squelch, 2000 in a South African context and Carter et al., 2000 in an Australian context). More recently, computer simulation/virtual reality work for the minerals industry has been done in both Australia, at the University of New South Wales and the
University of Queensland (for example, Kizil, 2003), and in the USA at NIOSH and Virginia Tech (for example, Lucas and Thabet, 2008). The potential for such techniques to be used as part of ensuring safety within the design process is enormous, especially as simulation/virtual reality techniques are developing and improving quickly. Generally these techniques can be used for training (for new operators, on different equipment types or refresher training for existing operators), systems design (for example, designing safe working procedures) or indeed as an evaluative tool during the design process. Many mining houses now possess fixed simulators for training purposes. Furthermore, mobile simulators are also becoming more widespread and are often valuable for use at smaller or more remote sites (for example, in central Queensland in Australia or in many of the metalliferous mines in Chile) where a permanent, fixed simulator would not be cost effective. Of course, as well as the safety benefits from simulator training, productivity benefits can also be obtained: for example, Parkes (2003) described the fuel savings that could be obtained in the UK with improved truck driver training.
3.2.3 Mining companies and individual mines
There is both proactive and reactive potential for the mining houses and, indeed, individual mines to contribute to the reduction of designed-in error potential in mining equipment. At the proactive level, the most impact would be made by mining houses in particular taking a much more “aggressive” position in relation to their suppliers in terms of the importance of good standards of ergonomics/human factors in design. The inclusion of several major mining houses in the EMESRT “design philosophies” approach is a considerable step forward in this context. However, a bolder and almost certainly more effective step would be to include a formal requirement for the incorporation of risk-based ergonomics/human factors information within their purchasing requirements/design specifications etc. It is possible that the ergonomics/human factors information necessary to support such a development is slightly different from that required in the design aids for designers and EMESRT approaches but once again a model has been set (see Mason et al., 1985, which established minimum sight-line requirements for underground mobile equipment). A more significant step to support the inclusion of ergonomics in design specification was taken in work funded by Worksafe Australia (for example, Teniswood et al., 1993) which extended the design aids for designers approach by providing ergonomics input to the development of a Draft Australian Standard on remote controls for mining equipment. The ergonomics input to the draft standard outlines:
… a general method for the assessment of safety risk dependent upon the machine action, period of operation and exposure of people near the machine. Individual
hazard ratings can be given for each control function and an overall machine hazard can be established. Specific types of remote control are recommended, categorised by machine action types. Fundamental principles are stipulated for manual motion controls and the changeover method between remote and local control. Safeguards for transmitters, receivers and proximity protection are presented.
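As an illustration only of the kind of calculation the quoted method implies, a per-control hazard rating can be built up from the machine action, the exposure of people near the machine and the period of operation, and then rolled into an overall machine hazard. The rating scales, names and combination rule in the sketch below are assumptions made for the example, not the scheme in the draft Australian Standard.

```python
from dataclasses import dataclass

@dataclass
class ControlFunction:
    name: str
    action_severity: int   # 1 (slow, low-energy movement) .. 5 (fast, high-energy motion)
    exposure: int          # 1 (people rarely near the machine) .. 5 (people routinely close)
    operating_period: int  # 1 (used occasionally) .. 5 (used continuously)

    def hazard_rating(self) -> int:
        # Assumed combination rule for illustration: simple product of the three factors.
        return self.action_severity * self.exposure * self.operating_period

def machine_hazard(controls) -> int:
    """Overall machine hazard, taken here as the worst individual control rating."""
    return max(c.hazard_rating() for c in controls)

controls = [
    ControlFunction("tram forward/reverse", action_severity=5, exposure=4, operating_period=3),
    ControlFunction("boom raise/lower", action_severity=4, exposure=5, operating_period=3),
    ControlFunction("lights on/off", action_severity=1, exposure=1, operating_period=5),
]

for c in controls:
    print(f"{c.name}: {c.hazard_rating()}")
print("Overall machine hazard:", machine_hazard(controls))
```

The value of such a scheme lies less in the arithmetic than in forcing each control function to be considered against machine action and human exposure before a type of remote control is selected.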
Despite the considerable step forward which the inclusion of ergonomics information into national standards represents, the authors also point out that: “Hazards still exist with remote controls and safe systems of working are still required for mining machinery.”
O’Sullivan (2007) describes an important and unusual step by an individual mine to ensure consideration of human factors in the equipment they purchase. The mine decided to incorporate a detailed consideration of ergonomics in their equipment specification for a new bolter miner. Specifications were provided on:
• Floor height;
• Monorail storage and handling;
• Mesh handling;
• Cassette storage and handling;
• Roof bolting operations;
• Rib bolting operations access/egress;
• Guarding;
• Visibility and viewing angles.
Among the improvements achieved by the detailed consideration of ergonomics in the procurement specification were:
• An adjustable floor height to accommodate a larger range of users;
• Handrails to minimise the risk of falling off the side;
• A stairway style accessway with good dimensions for easy access/egress and slip resistant covering;
• A mesh tray which swivels around to the right for easier reach;
• Rib mesh holders just outside the guard rail;
• A 450mm forward distance between the platform and roof bolters;
• Push button miner bolter controls;
• Improved space in and around the bolter console.
The second proactive approach would be to examine existing operations to identify the sources of human error in equipment design rather than wait for them to emerge through accidents/incidents. This is discussed extensively in Chapter 9.
Reactive approaches to the reduction of designed-in error potential include the more active and systematic use of the information available on improving ergonomics/human factors through retrofit. The information available on the MIRMgate website is particularly helpful in this context as it allows individual mines to take advantage of improvements made elsewhere in the industry. As Biddle (2000) points out, “good ideas have no boundaries”. Also of benefit in this area is a greater and more systematic consideration of both human error and the factors which predispose human error potential during accident/incident investigation. This is discussed extensively in Chapter 9.
3.2.4 Human factors/ergonomics specialists
The most important contribution from the ergonomics/human factors specialists is the provision of more and wider information which is specifically tailored to mining and which is presented in a designer-friendly format. Both the design aids for designers and MISHC/EMESRT approaches follow this principle but their application is still relatively limited in relation to the size of the designed-in accident problem in mining equipment. In addition, there is an increasing need for more research on the ergonomics/human factors of remote control and automation/semi-automation in the mining context. Research in other industrial sectors, such as defence or rail transport, has found that there is the potential for automated systems to overload, confuse and distract, rather than assist, an operator unless they are ergonomically designed. Therefore the human factors specialist can assist with issues such as:
• standardization of new equipment;
• usability of advanced technologies;
• appropriate training;
• alarm integration;
• operator and manager consultation to ensure new technologies are accepted and trusted but not over-relied on; and
• the design of controls and displays.
Likewise, ergonomists/human factors specialists can provide important information to help operators maintain a proper understanding of the situation they are in, and the state of their work process. This is essential for more complex tasks such as drilling, process control (for example, of grinding, minerals processing) or driving a large haul truck. One aspect of human error that has received a great deal of recent attention in the scientific world is lack of Situation Awareness (SA). SA involves an operator or work team being aware of what is happening around them to understand how information, events and their own actions will impact their task objectives, both now and in the near future. Up to the present day, most work on SA has been largely restricted to the aviation industry, and to some degree to the medical and road transport sectors. In other industries, research has found that the
greater use of automation can be linked to a greater loss of SA (Grech et al., 2008). However, SA is important in any complex and dynamic environment, such as the mining domain.
Chapter 4
Predisposing Factors: Level 2 – The Workplace Environment
The primary issues of concern in terms of the influence of the immediate workplace environment on the potential for human error are:
• Noise;
• Lighting;
• Thermal environment.
As will be seen in the examples below, each of these three has an effect upon individuals’ work performance. In addition, where noise levels, thermal environment or lighting are less than adequate they can all negatively impact upon an operator’s subjective comfort and health (both long-term and short-term). For example, the effects of noise can be threefold:
1. At extreme levels it can be dangerous, causing deafness.
2. At moderate intensities it is more likely to affect performance – often due to interference with hearing.
3. At low levels noise can reduce comfort and increase annoyance (so, potentially, influencing work performance and well-being through, for example, lowering of an operator’s concentration).
Likewise, symptoms of working in environments with poor/inappropriate lighting can include headaches, blurred/double vision, glare and lowering of visual performance, as well as the direct effect of not being able to see clearly what needs to be seen. Finally, the main components of the thermal environment are air temperature, humidity and air movement. Considering temperature, it can, of course, cause problems when either too high or too low. Moderately high temperatures can cause effects such as irritability, loss of concentration, increased errors, loss of performance in heavy work and intense fatigue, whereas moderately low temperatures can cause reduced mobility in the hands/feet, slowing down of manual skills, increased clumsiness and a decreased sense of touch.
Most human factors/ergonomics texts would also include vibration as a factor of concern in relation to the workplace environment. Both whole-body and hand–arm vibration have well established occupational health concerns (see, for example, Griffin, 1993) and both have documented safety implications. However,
the potential safety concerns of vibration (for example, “kick-back” using rotary powered tools, jolts wresting controls from the hands of drivers on poor road/ track conditions etc.) tend to be direct rather than indirect (for example, increasing human error potential which then causes the safety risk). On this basis, while vibration is recognised as a serious health and safety issue, it is not of primary concern in the context of human error. High level noise has a clearly established occupational health risk with, over extended periods of exposure, significant reductions in hearing ability. By far and away the most common risk mitigation measure (but by no means the best) to deal with the occupational health risk from noise is the use of personal hearing protection, in the form of hearing defenders/muffs, ear inserts etc. While, if correctly and appropriately worn, these can significantly reduce hearing damage, they can, if inappropriately selected, create a potential safety risk. In essence, if the attenuation of the hearing protection is unnecessarily high for the circumstances in which it is to be used, then it is likely to mask both verbal and non-verbal (for example, warning signals) communication which could predispose error. Sudden high level noise can also cause distraction which, as mentioned in Chapter 2, is a classic cause of slip/lapse errors. The very low or zero light levels which characterise many parts of underground mines (and surface mines during night time operation) will have obvious effects on the ability to collect visual information and therefore increase the probability of error. Equally, high levels of light can be error inducing, particularly glare (for example, from direct sunlight in surface mining operations or from vehicle headlights in underground mines). There are also rather more subtle effects. For example, it is not uncommon, especially in maintenance operations, for the operator to create his own shadow, making the work area considerably less well illuminated than would be expected from the general ambient light levels in the area. Extremes of both heat and cold can have the effect of increasing error potential both directly and indirectly. For example, cold conditions reduce dexterity thereby directly affecting the ability to carry out fine control or maintenance operations. The same effect can arise indirectly through, for example, the need for protective gloves or other clothing which may restrict ease of movement. Hot conditions (especially if also humid) in mining have been extensively studied since the pioneering work of Wyndham and colleagues in South Africa in the 1960s (see, for example, Wyndham, 1965). However, as well as the expected physiological effects, hot conditions are known to affect mental as well as physical performance and, therefore, the likelihood of error, as Hancock (1981) has pointed out: “Heat stress degrades mental performance well in advance of physical performance.” However, the factors involved in heat stress are many and complex (including, in addition to physical parameters, personal parameters, the nature of work undertaken and the workload, as well as the duration of exposure and lifestyle factors) and are therefore highly context dependent. Moreover, as Keilblock (1987) has pointed out, there are considerable individual variations in heat tolerance/ intolerance. In addition there are reservations about most of the heat stress indices
which have been proposed, including those which have been widely used (see, for example, Graves et al., 1981; Graveling et al., 1988; Strambi, 1999; Brake and Bates, 2000). As such the potential effect of heat on human error is less predictable and much more context dependent than is the case for other predisposing factors.
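By way of illustration, one of the most widely used of these indices is the wet-bulb globe temperature (WBGT), which combines the main components of the thermal environment into a single figure: for outdoor work with a solar load it is conventionally calculated as WBGT = 0.7 Tnwb + 0.2 Tg + 0.1 Tdb, where Tnwb is the natural wet-bulb temperature, Tg the globe temperature and Tdb the dry-bulb air temperature (indoors, or without solar load, the simpler form 0.7 Tnwb + 0.3 Tg is used). The reservations noted above arise largely because a single figure of this kind cannot, by itself, capture the personal, workload and exposure-duration factors that determine how an individual will actually respond.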
4.1 The Problem
The extent of this problem in relation to noise and lighting can be seen in the examples below:
Example 1: Surface workshop operations – coal mining
A man was killed by a reversing fork-lift truck in the fettling bay of a colliery workshop. Both the driver and several other men in the area confirmed that the reverse warning was sounding at the time. The man was trapped between the fork-lift and an internal structure with his back to the reversing truck in an area which was outside the designated walk-ways. As a result the initial investigation concluded that poor personal positioning on the part of the man killed was the primary cause. However, further investigation revealed that the hearing protectors the man was wearing were of a higher attenuation than that required for the ambient noise levels and likely to mask the audibility of the reverse warning. So while it remained the case that the man was in a position he should not have been, there can be no doubt that the failure to consider the potential safety problems of over-attenuation also significantly contributed to the casualty. For further information about communications problems in noisy mining environments see, for example, Coleman et al. (1984) or Simpson and Coleman (1988).
Example 2: Underground coal mining
In a study of the retrofit improvements to the ergonomics of underground locomotive cabs (Rushworth et al., 1993; or Rushworth, 1996 for summary) it was noticed that there was no illumination provided on the instrument panel of several of the then commonly used locomotives. The problem was exacerbated on those fitted with windscreens as the driver could not use his cap-lamp as this created back reflections partially obscuring his view ahead.
Example 3: Surface hard rock mining
To enhance driver awareness of the edge of the travelling road and of bends in the road, the mine had (as is quite common practice) marked the edge of the road with poles with reflective markers on them. Although this is, in principle, a good visual aid, no consideration had been given to cleaning the poles and, as a consequence, there were several areas of the mine where the poles were so covered in caked mud
that the reflectors could not be clearly seen even when the headlights were shining directly onto them. Example 4: Surface hard rock mining Excessive dust can be a common problem when Shovels are loading haul trucks causing considerable restriction to visibility. Although water cars are commonly available to water down and lay the dust, at one mine in the Simpson et al. (1996) study, loading operations carried on for several hours without any watering down. Throughout these operations haul trucks were seen on both sides of the shovel and rubber tyred dozers moved in and out constantly to remove spillage. In addition, there were several other vehicles parked in the vicinity, numerous people moving about and a risk of collision with cable poles. Example 5: Underground hard rock mine The only “headlights” provided on all of the locos operating in one section of the mine were standard cap-lamps which were attached to the leading hopper. The mountings were so loose that the lights swung around and tended to shine either into the side of the travel-way or downwards (only a metre or two in front of the hopper). In addition, the red covers provided to mark the back of the train when travelling out of the mine were rarely removed when travelling in. Not only did this give a false indication of the direction of travel but also further reduced the effectiveness of the minimal illumination provided by the lights. Example 6: Coal preparation plant A UK coal preparation plant had received a series of complaints from nearby residential properties about nuisance noise from the plant’s pre-start warning. After muffling the signal, the neighbours were more than content but their complaints were replaced by complaints from staff working on the plant that the signal was now too quiet to be reliably heard. Example 7: Surface hard rock mining Positioning the haul truck for dumping at one point in the mine caused both the driver and the co-driver to be “blinded” by mobile lighting (used to provide general lighting in the area) which shone directly into the cab at eye level. This problem had clearly been foreseen as a safety instruction on the mine stipulated that mobile lighting at dumps should be positioned at tyre height so as to illuminate the bottom half of the tyres and berm walls and NOT shine into the eyes of haul truck operators.
4.2 Potential Routes to Improvement
In relation to both lighting and noise, considerable care should be taken to examine all aspects and all tasks to be carried out in the area – without care a solution for one task can easily become a problem for another. Using international standards (for example, ISO 6395 for noise), good human factors information or a structured risk-based process (such as the Operability and Maintainability Analysis Technique mentioned in Section 3.2.2) can be of considerable assistance to help ensure that aspects of workplace environment do not increase the likelihood of errors. In relation to auditory signal effectiveness relative to the prevailing ambient noise levels and, in particular, in areas where operators need to wear hearing protection, work carried out by Coleman and his colleagues in UK coal mining (for example, Coleman et al., 1984) is particularly significant. This study developed a technique for predicting the audibility of warning signals based on the interaction between the signal level and frequency spectrum, the ambient noise level and its frequency spectrum, the hearing ability of the mining workforce, the effects of the attenuation of any hearing defenders worn in the area and a minimal allowance for attention gaining to create a “signal design window”. In essence, having created the design window for a particular noise environment, any signal having a combination of frequency and loudness which puts the signal within the design window will be reliably heard by the majority of the workforce. An outline of the technique, together with examples of how the approach had delivered tangible improvements, is given in Simpson and Coleman (1988). An example of the effect of wearing (unnecessarily high attenuation) hearing defenders on the design window in a workshop environment is shown in Figures 4.1 and 4.2.
Figure 4.1
Design window without hearing defenders
Source: G.C. Simpson and G.J. Coleman, 1988. Originally published in the journal The Mine Engineer.
Figure 4.2
Design window when wearing high attenuation hearing defenders
Source: G.C. Simpson and G.J. Coleman, 1988. Originally published in the journal The Mine Engineer.
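The general shape of the calculation behind such a design window can be sketched as follows. This is an illustration only: the margin, the ceiling value and the band data below are assumed for the example and do not reproduce the published Simpson and Coleman method, which works from measured octave-band spectra and audiometric data for the workforce concerned.

```python
ATTENTION_MARGIN_DB = 15   # assumed margin needed above masking noise / hearing threshold
MAX_SIGNAL_DB = 110        # assumed maximum practicable signal level at source

def design_window(ambient_db, hearing_threshold_db, attenuation_db):
    """Per octave band, return the (lower, upper) range of signal levels at source
    that should be reliably heard, or None if no usable range exists.

    The signal must exceed both the masking ambient noise and the listener's hearing
    threshold referred back outside the defender (threshold + attenuation), by an
    attention margin, while staying below a practicable maximum.
    """
    window = {}
    for band, ambient in ambient_db.items():
        attenuation = attenuation_db.get(band, 0.0)
        threshold_at_source = hearing_threshold_db.get(band, 0.0) + attenuation
        lower = max(ambient, threshold_at_source) + ATTENTION_MARGIN_DB
        window[band] = (lower, MAX_SIGNAL_DB) if lower < MAX_SIGNAL_DB else None
    return window

def audible(signal_db, window):
    """True if the signal falls inside the window in at least one band."""
    return any(
        window.get(band) is not None and window[band][0] <= level <= window[band][1]
        for band, level in signal_db.items()
    )

# Illustrative workshop figures (dB per octave band, all values assumed).
ambient = {500: 88, 1000: 85, 2000: 82, 4000: 80}
threshold = {500: 20, 1000: 25, 2000: 35, 4000: 45}   # workforce with some hearing loss
muffs = {500: 22, 1000: 30, 2000: 34, 4000: 38}       # high-attenuation defenders
reverse_warning = {500: 96, 1000: 102, 2000: 100, 4000: 95}

print(audible(reverse_warning, design_window(ambient, threshold, muffs)))
```

Re-run with higher attenuation values, the lower bound rises band by band and the usable window narrows or closes altogether, which is the effect shown in Figure 4.2 and the mechanism behind Example 1 above.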
Considering mobile equipment, the EMESRT design philosophy for noise mentions the following industry attempts that have been applied to mitigate noise risks (MISHC, 2009):
• Enclosed, tightly sealed and pressurized air-conditioned cabins;
• Thicker sound material, and additional insulation to the cab;
• One piece dual pane glass (toughened, laminated, shatterproof) on all sides that significantly reduces the operator’s sound exposure;
• Door seals positioned so that they are not prone to physical damage in normal operation;
• Selection and relocation of air-conditioning systems to reduce noise;
• Sound suppression/absorption materials around outside components (exhaust system, engine compartments, cooling fans);
• Active noise cancelling devices designed to lower noise caused by low frequency sound waves;
• In-cab communication headsets with active listening technology designed to integrate all radio communication directly into the headset and limit noise output.
Although these examples might not always be directly applicable to other mining tasks, they do show that a wide range of measures are available. Ensuring local lighting aids safety without compromising it for others is best achieved by ensuring that all tasks in the work area (including those carried out occasionally such as maintenance) are considered early (for example, within task risk assessment). It is, however, sometimes possible to achieve very effective improvements through retrofit changes. A good example of effective retrofits
relates to Example 2. Changes to the electrical circuitry would have been prohibitively expensive as they would have nullified the existing electrical intrinsic safety requirements/certification. The solution suggested (and proven) was to run a fibre optic cable from the headlight across the instrument panel with holes in the cable above each display. In this way the problem was solved at minimal cost and with no compromise of the intrinsic safety requirements. Although the above retrofit worked extremely well, such solutions should not, ideally, be left for individual mines to “make good” design limitations. As was seen in Section 3.2.2, one of the central objectives of the EMESRT group is to accelerate the development and adoption of leading practice designs of earth-moving equipment to minimise health and safety risks. Also mentioned earlier, the Operability and Maintainability Analysis Technique was developed for designers and manufacturers to qualitatively assess the adequacy of human factors considerations in their designs. So for this example, the impetus would be for manufacturers and designers to develop fit-for-purpose equipment in the first place, rather than relying on mine site retrofits.
Chapter 5
Predisposing Factors: Level 3 – Codes, Rules and Procedures
Codes, rules and procedures (including, for example, method statements, safety instructions, permits to work, safe working practices etc.) are, in effect, the instruction manuals of the Safety Management System. Like instruction manuals in everyday life they range from totally invaluable to completely useless. The whole purpose of safety codes, rules and procedures is to provide the information on which to build routinely safe behaviour. On this basis, failure to implement the requirements of safety codes, rules and procedures is unquestionably a violation and is often seen as the root cause of an accident/incident and an unequivocal indication that the person who breached the rules is responsible for the ensuing accident/incident. However, this conclusion assumes that the code, rule or procedure is appropriate, practical, well written, well communicated (whether by formal training or otherwise) and appropriately supervised/reinforced. Unfortunately, this is often far from the truth as the examples below show quite clearly.
5.1 The Problem
Example 1: Underground coal mining
A mine had introduced a new procedure to ensure that loco drivers had an easy way of informing fitters and electricians of any safety concerns about their locos. The mine introduced a “Loco Drivers Defects Book” where the drivers could note any concerns they had as they came off-shift. The on-going fitters and electricians checked this book before going underground at the start of their shift. The effectiveness of this system was reviewed as part of a wider Human Error Audit study (Simpson et al., 1994). One driver had entered a concern as he came off shift on a Monday morning. The same entry was repeated after the Tuesday morning shift, after the Wednesday morning shift and after the Thursday morning shift. The entry on Friday morning was slightly different and read: “Doesn’t any ****** read this except me!”
The study team checked with the electricians to see why they had apparently ignored the comments. In fact, they had checked the problem on the Monday and decided that while the loco needed attention it could wait until the maintenance shift on Saturday morning. Unfortunately, they neither put this in the book nor told the driver. Clearly, the driver concerned had, by this time, become very irritated and concluded that the book was a complete waste of time. In fact the idea was sound but the failure of the craftsmen to use it correctly had seriously undermined its potential benefit. A good idea was wasted. Example 2: Surface hard rock mining In the Simpson et al. (1996) study of the causes of transport and tramming accidents, vehicles at one mine were seen regularly, throughout the study period, failing to stop at crossings despite clearly positioned STOP signs and particular emphasis on this in rules in place at the mine. Some vehicles crossed at speed while others slowed but continued to drive over the crossing when it “appeared safe”. Also observed at the same mine were numerous occasions when vehicles clearly exceeded the specified and signed speed limits. The drivers of all types of vehicle on the mine were seen to contravene the Standard Procedures in place on the mine at some time during the study period. Also apparent was the frequency with which members of supervisory staff were the “guilty” party. Circumstances where managers/supervisors are seen, even implicitly, to condone non-compliance are classic examples of where rule-erosion will come to pervade the whole operation, as seemed to be the case on these issues at this mine. Example 3: Underground coal mining Below is a summary of a Standard Procedure addressing how the movement of rail transport vehicles should be controlled in the event of a failure of the robot (traffic light) system: 1. At the start of shift, green rings must be displayed at all intersections. 2. The first loco to enter the zone will stop, unlock the box and display a yellow ring (caution). This will indicate to the next loco or railbus that there is already a vehicle in this zone. 3. The second vehicle will stop at the intersection, unlock the box, replace the yellow ring and display a red ring. This will prevent a third vehicle from entering the same zone and he will have to wait until one of the first two locos come back to change the ring. 4. The same procedure will be repeated until the vehicle reaches the last loop on that specific route and will then display a red ring instead of a yellow ring as there are no passing facilities beyond this point.
5. On returning to the shaft, the procedure will be in the reverse, that is, if a yellow ring is found it will be changed to green. It is possible to understand this instruction if you read it several times very carefully. However, given that very few of the mine employees had English as their first language and a good proportion were functionally illiterate in English, it is hardly likely that the majority would have a chance of understanding it. Interestingly there are also a number of factual questions which need to be asked in terms of the utility of this procedure (in what it must be remembered is an intrinsically dangerous circumstance). For example, does point 1 imply that if the failure occurs in mid-shift, you wait until the start of the next shift before implementing this procedure? Example 4: Underground hard rock mine Despite the fact that the Standard Instructions at the mine together with the driver/ guard training clearly stipulate that blocked ore-pass chutes should be cleared by men working from safe platforms, no platforms or access ladders were provided. Guards undertaking this activity were seen to climb up the sides of, and balance on, the narrow lips of the hoppers. The dangers from undertaking the work in this way are considerable – hence the reason why specific conditions were stipulated in the codes, rules and procedures. The failure to provide platforms from which to work is a clear example of a rule which is, quite simply, impossible to comply with. Example 5: Underground coal mining As part of a rationalisation of the duties of first line management in the Continuous Miner sections it was decided to delegate the responsibility of taking the required methane readings to the Continuous Miner driver. The Standard Procedure had been changed to reflect this but otherwise all important points like the frequency and position of methane readings remained the same. During a Safety Management review it was noticed that no methane readings appeared to be taken over several shifts, certainly not in the positions that the Standard Procedure specified. When the drivers were questioned it became clear that they had not understood the importance of where the readings were to be taken as, although they were all logging the methanometer reading at the appropriate times, they were taken from the methanometer which they had all hung over one of the controls on the Continuous Miner. Recording the readings in such a position was clearly meaningless and in one section bordered on the farcical – the methanometer was slung over a control directly in line and downstream of the forced ventilation ducting. The driver was taking methane readings in what was, by definition, the nearest thing to fresh air available underground!
Not only was there a clear failure to adequately train the drivers in relation to their new responsibilities but there was clearly little attention given to the readings being logged, for if there had been it would have been readily apparent that they were spurious. Example 6: Underground coal mining It is not uncommon in UK coal mines for men to be allowed to ride on designated lengths of coal clearance conveyor belting at designated times. Although the lengths of belts designated as man-riding are carefully selected with adequate clearance, with purpose designed boarding and alighting platforms and the times of official man-riding clearly defined, the process can, nonetheless, be a dangerous one (not least because the belts do not stop for boarding or alighting). As the process is potentially dangerous, the correct way to board, ride and exit belts is part of underground training and most man-riding belts have warning notices positioned alongside the boarding platforms. One particular notice had 17 rules, all of which were phrased negatively – “Do not ...” “Never ...” etc. While it is important in terms of safety to know what not to do, when all the information is presented in this way, while you have a wealth of information telling you how to avoid being unsafe, you have no information telling you how to behave safely. Unfortunately it does not always follow that the safe way is the opposite of the unsafe way. Example 7: Underground hard rock mining The Standards at the mine clearly defined which items of protective equipment were required for each job/task. In addition, the Standards were backed up by a process known as the Critical Task Inventory which was designed to ensure that the items specified in the Standards were procured and available to the men undertaking those critical tasks. The Standard relating to checking and topping up battery electrolyte specified that this should only be undertaken when wearing eye protection, rubber gloves and an apron. However, observations showed no such protective equipment was worn when this task was undertaken (during the study period). Further investigation indicated that the task of checking and topping up the electrolyte had never been included on the Critical Task Inventory and, as a result, the required equipment had not been procured. A sensible requirement had been identified but could not be implemented because of a failure in the overall system of the purchase and provision of Personal Protective Equipment. Example 8: Surface hard rock mining The action required in the event of a breakdown of vehicles in the mine was specified in Standard Procedures and included, for example, blocking the vehicle, switching on hazard lights, turning wheels left, positioning red triangles (as forewarning for
other drivers), placing marker drums and conducting traffic around the vehicle. Although several breakdowns were seen during the study, the only action routinely taken was to switch on the hazard lights. Part of the problem leading to the failure to comply was that the actions specified were spread over a number of Standard Procedures rather than being collated into a single required action set within a single Standard, or the full set of actions copied across all relevant Standards.

Example 9: Underground hard rock and coal mining

One aspect of a major study into the reasons why safety standards are not complied with (funded by the SIMRAC programme of the RSA Department of Minerals and Energy) compared management, supervisory and workforce attitudes to safety codes, rules and procedures (Talbot et al., 1996). The study, which covered two gold, one coal and one platinum mine, used questionnaires to assess attitudes to various safety related issues. A total of 326 data sets were obtained (representing a total of over 86,000 responses). The "high level" results dealing with whether the codes, rules and procedures were considered as being relevant and practical were (across the four mines) extremely positive, with over 90 per cent of respondents considering them to be so. However, once the questions moved from abstract principles to questions more directly concerned with day-to-day compliance, the results were nowhere near as convincing. For example:

•  Over 60 per cent of management staff, 38 per cent of supervisors and 28 per cent of the workforce recognised that some of the rules and procedures operating on their mine were out-of-date or inaccurate. Not only is this of direct concern but the fact that the recognition of this problem was lowest among the workforce is significant, for it would appear that over 70 per cent would be using out-of-date and/or inaccurate rules and procedures without being aware that they were.
•  Over half of the managers and supervisors studied considered that some of the rules and procedures on their mine took too much time to implement and that some were not really practical in day-to-day operational circumstances.
•  Over half of the managers and almost half of the supervisors in the sample considered that there were rules and procedures in place on their mines which were difficult to follow.
•  Over 50 per cent of the workforce group considered that the design of equipment and tools made compliance with associated rules and procedures difficult (this was also recognised by almost 40 per cent of both the management and supervisory groups).
•  Almost 60 per cent of managers and 50 per cent of supervisors recognised that adverse working conditions could, at times, increase the difficulty of compliance with safety rules and procedures.
•  The influence of logistical issues (for example, availability of tools and equipment or sufficient time for compliance with safety rules and procedures) generated the most disparate results across the four mines in the study, with concern on the issue as low as 13 per cent in one mine and as high as 70 per cent on the "worst".
The general consistency of these results across four mines which span a wide range of mining operations and conditions suggests clearly that problems with codes, rules and procedures are widespread and widely recognised. However, the results of the study also indicate that while such problems are recognised there does not seem to be any great sense of urgency to correct them.

Example 10: Underground coal mining maintenance operations

The procedures relating to the safe dismantling of bridge conveyors (in UK coal mines) stated that the fitter should not be under the structure when dismantling. A miner was killed when such a structure collapsed on him in a way which made it clear that he was working under the structure when it failed. The investigation identified that the coupling pins could only be released from under the conveyor. As a result of this accident new pins were designed that allowed removal from a safe position and a national instruction was issued that all pins should be replaced with the new design. Several years later another fatality occurred to a man dismantling the conveyor from under the structure. It emerged that although the colliery records included a copy of the national instruction, the new design of pins had not been used.
5.2 Potential Routes to Improvement

Preparing clear and reliable safety codes, rules and procedures has been a problem in just about every circumstance in which they have been used (see, for example, Chapanis, 1965 and 1988). In a survey of safety rules and regulations at an Australian mine site, Laurence (2005) found that detailed prescriptive regulations, safe work procedures and huge safety management plans did not "connect" with most miners. He concluded that achieving more effective rules and regulations is not the simple answer to a safer workplace (Laurence, 2005). Furthermore, the international nature of modern mining, where operations are often conducted in very remote areas with workforces which often have limited literacy generally and in English in particular (English normally being what might be described as the "management language"), makes the use of written procedures particularly difficult and problematic. Despite these particular difficulties mining operations generally rely heavily on written safety codes, rules and procedures.
Instructions/procedures etc. are designed to guide staff behaviour on certain tasks to ensure that specific actions are taken and specified processes followed. They serve no useful purpose unless they are used and complied with. Although codes, rules and procedures are used across all aspects of the business they all have the same fundamental purpose/objectives, which are to ensure:

•  all who need to use them understand clearly what is required of them;
•  they are written in a way that encourages compliance.
Although all instructions/procedures have the same purpose and objectives, their significance varies, and does so in direct proportion to the consequences of failure to comply. For example, a misunderstanding when reading the instructions on expense submissions may lead to inefficiency and delayed payment but, in reality, the consequences are unlikely to have any more impact than to inconvenience a small number of people. However, failure to comply with the instruction/procedure covering a safety critical operation or maintenance task could (as previously shown) result in fatalities. In some circumstances (on health and safety in particular) instructions and procedures which are not used can be positively dangerous, as the rest of the organisation will be working on the assumption that they are being followed. In this situation the risk protection that the instruction/procedure was designed to provide will, simply, not be there.

The greater the potential consequences of failure to comply with an instruction/procedure, the less it should be relied on as the single focus for compliance. In tasks where failure to comply has significant consequences, the instruction/procedure, no matter how well written, should always be supported by, for example, required qualifications, training, competence testing, supervision etc. (for more information on supporting critical instructions/procedures see Section 5.2.2).

The general principles which underpin the preparation of written codes, rules and procedures are described in Section 5.2.1; useful means of supporting and supplementing codes, rules and procedures are described in Section 5.2.2; and some special considerations on the use of procedures in an international mining context are outlined in Section 5.2.3.

5.2.1 General principles in preparing written codes, rules and procedures

There are four fundamentally important principles which must be considered in the preparation of effective instructions/procedures:

1. functional simplicity;
2. tailoring;
3. use of plain, positive English (or any other language);
4. piloting.
Each of these principles is discussed further below.

Functional simplicity

An instruction/procedure should always be:

    As simple as possible in order to achieve its function.
While this may seem obvious, there are many examples of procedures where it has clearly not been applied. For example, it has been reported that the total weight of paper used for instructions/procedures in the design of a major civil aircraft was actually heavier than the aircraft itself.

Fundamental to achieving functional simplicity is the need to define the objective(s) as carefully and as narrowly as possible. One common problem of interacting objectives with regard to safety is the differing responses to an objective which aims at ensuring people act safely on a task and one which aims at ensuring that you have understood and acted on the requirements placed on you in relation to that task (by, for example, the local regulator). Both are legitimate objectives but they can lead to very different contents in the respective instructions/procedures. For example, in a regulator assurance approach it might be considered appropriate to list all the legislation and guidance relevant to the task being considered. In terms of a safe operation approach this information is simply not necessary. Equally, in a regulator assurance approach you may argue it is necessary to include reference to all the issues which are raised in legislation and guidance (in order to show how comprehensive your consideration has been), whereas in a safe operation approach you would only cover those which are specifically necessary to the task under consideration.

Tailoring

It is essential to know the audience for whom the instruction/procedure is being prepared. If the audience is not carefully defined a whole series of assumptions may be made which could significantly reduce the effectiveness of the instruction. For example, if an instruction/procedure is being written for new starters it is essential to make no assumptions at all about their understanding of the structure or organisation of the company. Similarly, while technical jargon will cause problems for those not familiar with it, where an instruction/procedure is written specifically and exclusively for technical staff, jargon can not only be appropriate but can often also reduce the number of words needed.

A good example of jargon in the current context is given in the example shown in Figure 3.2. Not only is there a problem alternating between the different control movements but the configuration on the right is known to ergonomists as "anti-population stereotype". While ergonomists will have absolutely no problem with this phrase when speaking to fellow specialists, it is doubtful whether non-ergonomists would immediately realise that all that it really means is "the wrong way round" (that is, the control movement is
opposite to that which would be expected by the majority of the population). Use of jargon such as this out of its context may have been part of the problem when Meister and Sullivan (1968) raised the difficulties that designers often have when trying to use human factors information. The fluency of the audience in the language in which the procedure is prepared is a particularly important consideration in modern mining and is discussed further in Section 5.2.3.

Use of plain, positive English

The need to use plain, positive language is fundamental whatever the language. However, as the "management" language in most mining operations is English, the rest of this section focuses on writing codes, rules and procedures in English. English is a surprisingly "loose" language in which individual words and phrases can have different interpretations dependent on the context. For example, the phrases "liaise with" or "consult with" can both mean meet with, or discuss with, or work with, or even simply ask. Why, therefore, would you use phrases like "liaise with" or "consult with", which are non-specific, when there are specific and simpler alternatives? While this fluidity makes English an extremely colourful and creative language for the novelist, it can be a difficult language for the instruction/procedure writer. Consider the sign below:

    NO SMOKING REGULATIONS APPLY HERE

It is possible and quite legitimate, both grammatically and linguistically, to interpret this sign in two completely different ways. It can be read as meaning that there are no regulations on smoking here (and therefore you can smoke) or that there are (non-)smoking regulations in place here (and therefore you cannot smoke). This is a classic example of an instruction that obviously seemed reasonable to the person who wrote it but which could lead to two completely contradictory but equally justifiable actions. How much simpler and clearer would it have been to have said no more than:

    NO SMOKING

or:

    DO NOT SMOKE

Neither of these alternatives is open to misinterpretation and both use fewer words than the original. This is a good example of functional simplicity in practice.

(There is some evidence that (UK) mineworkers may be slightly different to the general population in relation to display/control movement stereotypes; see Simpson and Chan, 1988.)
The more words used, and the longer the words and/or sentences are, the less likely it is that the message will be understood quickly and reliably; equally, the likelihood of misunderstanding will increase. Writing instructions/procedures is not the place to impress people with the size of your vocabulary. Table 5.1 provides some good examples of plain, simple alternatives to the more flowery/"impressive" language which is all too frequently used.

In addition to the number and length of the words used, the use of positive words and phrases also aids understanding and compliance. In safety in particular it is very common to focus on avoiding what may be unsafe actions, which tends to lead to an excess of negative instructions. For example, "do not enter without hearing defenders", "do not engage gears before starting" etc. Unfortunately, instructions which state what not to do do not necessarily tell you what to do – the opposite of unsafe is not necessarily safe. By way of example, you cannot learn how to drive a car safely by only knowing what you should not do to avoid being unsafe (no matter how important the "do nots" are). Conversely, if instructions specifically state what to do at a given point in an operation, what not to do is, in effect, unnecessary. For example, if an instruction states "press the green button first", there is no need to say "do not press the blue button before the green one".
Table 5.1   Examples of alternative, simpler, words

    Instead of                          Use
    Application                         use
    Erroneous                           wrong
    Extant                              current
    in conjunction with                 with
    Initiate                            start
    (it is) obligatory/mandatory        (you) must
    Perform                             do
    Request                             ask
    Subsequently                        later
    Terminate                           stop
    Utilise                             use
    with effect from                    from

Note: The examples given are from The A to Z of Alternative Words, published by the Plain English Campaign (2001); more information can be obtained from info@plainenglish.co.uk.
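The substitutions in Table 5.1 also lend themselves to a simple automated check of draft text. The following is a minimal illustrative sketch (not part of the original study material) of how a procedure writer might flag "flowery" words in a draft and suggest plainer alternatives; the word list is taken directly from Table 5.1, while the function name and example draft are invented for the illustration.

```python
import re

# Substitution pairs taken from Table 5.1 (Plain English Campaign examples).
PLAIN_ALTERNATIVES = {
    "application": "use",
    "erroneous": "wrong",
    "extant": "current",
    "in conjunction with": "with",
    "initiate": "start",
    "obligatory": "must",
    "mandatory": "must",
    "perform": "do",
    "request": "ask",
    "subsequently": "later",
    "terminate": "stop",
    "utilise": "use",
    "with effect from": "from",
}

def flag_flowery_words(draft):
    """Return (found phrase, plainer alternative) pairs for a draft procedure."""
    findings = []
    lowered = draft.lower()
    for phrase, plain in PLAIN_ALTERNATIVES.items():
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered):
            findings.append((phrase, plain))
    return findings

# Hypothetical draft sentence used only to demonstrate the check.
draft = ("Personnel must utilise hearing defenders and subsequently "
         "terminate the pump in conjunction with the supervisor.")
for phrase, plain in flag_flowery_words(draft):
    print(f"Consider replacing '{phrase}' with '{plain}'")
```

Such a check is no substitute for piloting with real end-users, but it costs nothing to run on every draft.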
Positive statements (especially action statements) are:

•  easier to understand;
•  shorter (they use fewer words);
•  easier to remember;
•  less likely to be misunderstood/misinterpreted.
If there is an absolute need to instruct people what not to do then this should not be the lead information. Negative instructions should either follow the positive ones (by way of reminder/emphasis) or be collected at the end in a specific section. Nonetheless, the instruction/procedure writer's primary focus should be on defining the correct action(s) rather than on avoiding incorrect action(s).

Piloting

No matter how experienced the instruction/procedure writer is with either the instruction/procedure writing process or with the operations/task addressed, all instructions/procedures should be piloted on a sample of end-users before implementation. Without such a check, it is impossible to be sure that any potential misunderstandings have been identified or even whether the draft instruction/procedure is, in fact, practical in the circumstances in which it has to be applied.

In essence, in setting out to prepare a new (or revised) safety code, rule or procedure you need to address the following questions (a simple checklist sketch follows the list):

•  Who authorises/initiates the action, who takes the action, who needs to know that action has been taken, etc.?
•  What action should be taken, what information, equipment etc. is needed to take the action, what needs to be done once the action is taken, etc.?
•  When is the action taken?
•  Why is the action taken (which should include the consequences of not taking it or taking an incorrect action)?
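By way of illustration only, the four questions above can be captured as a pre-drafting checklist that refuses to let drafting start until every question has an answer. The sketch below is a minimal, hypothetical representation; the field names are assumptions introduced for the example and are not terminology from the text.

```python
from dataclasses import dataclass, fields

@dataclass
class ProcedureBrief:
    """Answers to the who/what/when/why questions for a proposed procedure."""
    who_authorises: str = ""
    who_acts: str = ""
    who_must_be_informed: str = ""
    what_action: str = ""
    what_is_needed: str = ""
    when_taken: str = ""
    why_taken: str = ""  # should include the consequences of not acting

    def unanswered(self):
        """Return the questions still unanswered; drafting should wait until this is empty."""
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]

# Hypothetical, partially completed brief used only to demonstrate the check.
brief = ProcedureBrief(who_acts="loco driver", what_action="test brake line integrity")
missing = brief.unanswered()
if missing:
    print("Do not start drafting yet; still to answer:", ", ".join(missing))
```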
Only when all of these questions have been answered should any thought be given to preparing the actual code, rule or procedure.

5.2.2 Supporting/supplementing written codes, rules and procedures

No matter how well written and presented, instructions/procedures on their own should not be considered sufficient protection against risks with potentially high consequences. In high consequence areas additional support should be considered in the form of:
•  Specific qualifications/experience: Where specific qualifications/experience are necessary in order to undertake a task this should be made clear as the first statement on an instruction/procedure. There is no point in letting the reader reach the end before realising the document is not appropriate for them or, worse, not even noticing that it is inappropriate for them and thinking they can go ahead. Where there are specific qualifications/experience requirements for a task, the instruction/procedure should include some means of verification at deployment to the task (a minimal sketch of such a check is given at the end of this subsection).
•  Formal training: For many instructions/procedures, especially those concerned with genuinely administrative tasks, it will be sufficient to simply issue the document with, if necessary, some form of acknowledgement from individuals that they have, in fact, read it. For others there may be a need for informal training (for example, a tool-box talk). However, where the instruction/procedure relates to operations which are significant in terms of safety it may be appropriate to institute a more formal training programme. Formal training in this context has three particular advantages. Firstly, it ensures that all staff across all shifts receive the same message. Secondly, it provides a specific opportunity for the learners to seek further explanation/clarification on any points of potential misunderstanding. Thirdly, it allows the training to be formally recorded on, for example, training logs, which then provides a basis for ensuring only appropriately trained staff are deployed to the task. On particularly safety critical tasks serious consideration should be given to regular refresher training.
•  Competence assessment: On particularly critical tasks it may be appropriate to end the formal training highlighted above with a test of competence, either in the form of a test of understanding or by a "dry-run", wherein each learner follows the instruction/procedure under observation in safe (perhaps simulated) conditions. On particularly safety critical tasks serious consideration should be given to regular re-testing of competence.
•  Adequate supervision: It is essential to ensure that all staff likely to be supervising the implementation of an instruction/procedure are fully trained in its use and, where possible, aware of any indications that the instructions are not being followed. It is also crucial that supervisory staff understand and recognise the need for zero tolerance of breaches of instructions/procedures, especially on activities with safety and/or plant efficiency impact. This includes, for example, insisting on the use of all required PPE, discouraging the use of improvised tools etc. Where supervisory staff are involved in the planning of a task or the formal sign-off of completed tasks (for example, the use of Permit to Work systems), it is essential that they understand the importance of being available when needed and of having thoroughly examined the operation before signing-off.
•  Regular audit/monitoring: It is well established that compliance with instructions/procedures can and does erode over time if breaches are not identified and corrected. It is essential, therefore, that compliance with instructions/procedures is a significant consideration in all forms of routine monitoring and audit of operational practice. Such audits must, however, seek to identify the reasons for any non-compliance observed, for it may be that there are good reasons and that limitations in the instruction/procedure have been identified when it is "used in anger" which had not been appreciated earlier. If good reason for non-compliance is identified then the instruction/procedure must be changed; to leave it unchanged on the basis that "the task is getting done anyway and there is a lot of effort involved in changing an instruction/procedure" will undermine the credibility of the whole instruction/procedure process and, if it relates to safety, undermine the safety culture.
There are two other issues of considerable importance which need to be considered in addition to the points made above:

•  Has the change management process (including document control requirements) been fully implemented? One of the common problems in relation to compliance with instructions/procedures is where staff are compliant with the old requirements rather than the new ones.
•  Has the implementation taken note of any involvement of contractor staff in the use of the instruction/procedure? A common problem with compliance with instructions/procedures is where contractors are working to their in-house process rather than the site processes.
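The supporting measures described in this subsection (qualification checks at deployment, recorded formal training and regular refresher intervals) can be brought together in a simple deployment gate, as sketched below. The sketch is illustrative only: it assumes a site keeps a training log recording the date each person was last trained or re-assessed on each safety critical procedure, and the names, dates and one-year refresher interval are invented for the example.

```python
from datetime import date, timedelta

# Assumed training log: worker -> {procedure: date of last assessed training}.
TRAINING_LOG = {
    "A. Miner": {"methane readings": date(2009, 1, 15)},
    "B. Fitter": {"conveyor dismantling": date(2006, 6, 1)},
}

REFRESHER_INTERVAL = timedelta(days=365)  # assumed one-year refresher cycle

def may_deploy(worker, procedure, today):
    """Deploy only if the worker has current, recorded training for the procedure."""
    last_trained = TRAINING_LOG.get(worker, {}).get(procedure)
    if last_trained is None:
        return False  # never trained or assessed on this task
    return today - last_trained <= REFRESHER_INTERVAL

print(may_deploy("A. Miner", "methane readings", date(2009, 6, 1)))       # True
print(may_deploy("B. Fitter", "conveyor dismantling", date(2009, 6, 1)))  # False: refresher overdue
```

The point of the sketch is simply that deployment checks, training records and refresher intervals only protect against error if they are connected; each element on its own is easily bypassed.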
5.2.3 Particular problems in international mining operations

It is by no means unusual in modern, multi-national mining operations to have mines where the overall workforce will have several different first (or only) languages. Although English is often what may be described as the "management language", in many circumstances English will not be the first language of all the management team and is very unlikely to be the first language of all supervisory/operator staff.

The very real cultural, ethnic, language and educational difficulties which are evident in many mines, and the difficulties that these create for the effective development and implementation of good quality safety codes, rules and procedures, cannot be overestimated. This was graphically exemplified by a supervisor at one of the mines in the SIMRAC transport and tramming study (Simpson et al., 1996) who stated, in relation to the Standard Instructions at the mine:

    I read them in English, translate them into Northern Sotho and answer questions in Afrikaans.
A mine overseer at the same mine stated that:
    75 per cent of the accidents in my section can be ascribed to a failure to follow SIs.
Perhaps, in light of the supervisor's comment, this is not a great surprise. It is important, however, not to allow these difficulties to become excuses; they are, quite simply, the real circumstances which exist and safety provisions must take them into account.

Although this problem is by no means unique to South Africa (see, for example, Morrison, 1996 in relation to Indonesian mining), the South African position does fully encapsulate it, as was recognised in the Report of the Leon Commission of Inquiry into Safety and Health in the Mining Industry (Leon et al., 1994). At p. 14 of volume 1 of the Commission's report it states:

    The question of communication and transfer of information in a dangerous occupation such as mining is of enormous importance, but there are problems and constraints involved. The great majority of the mining workforce is illiterate and innumerate. They speak a range of different languages, some in addition to the previous official languages of Afrikaans and English. The mining industry has sought to overcome the problem of communication by using the mining lingua franca called Fanagalo as the language of the mines. The Commission considers this to be very unsatisfactory, because the language has a very limited vocabulary and is unable to convey subtle meaning. While it may be satisfactory for giving simple commands it is quite inadequate to convey the nature and extent of the dangers that lurk beneath the surface, the source of such dangers and how best to avoid them.
The "nature and extent of the dangers … the source of such dangers and how best to avoid them" is a very succinct description of the purpose and objectives of safety codes, rules and procedures. Elsewhere in the Report (at p. 73 of volume 1) the Commission states:

    In view of the high degree of illiteracy and lack of a common language, the scheme of training employed in South Africa, especially that for novice miners, must be more imaginative and more creative than most run of the mill industrial training programmes.
Many mining houses and individual mines have introduced literacy and numeracy training (not only in South Africa) and innovations in the use of highly visual material including theatre (see, for example, Standish-White, 2000). The fact remains, however, that most safety codes, rules and procedures will be written, in the first instance, in English and unless they adhere to the principles outlined above any attempt, no matter how innovative, to translate them into local languages or move to more visual means for training, will be likely to fail.
Chapter 6
Predisposing Factors: Level 4 – Training and Competence
Training is without doubt the most commonly used health and safety risk control measure across industry in general. Despite this, it can be one of the weakest. As the extent of complex technology used in mining increases continually and the working out of many traditional mining areas brings into play more and more varied workforces across the globe, training will continue to be a crucial element in safety assurance generally as well as in approaches to minimise human error.
6.1 The Problem

Unfortunately there is considerable evidence that, all too often, the training provided in the mining industry is not always as good as it is believed to be, as the examples below show. These should be considered in conjunction with many of the examples given previously in Section 5.1, where problems are seen not only in the preparation of effective codes, rules and procedures but also in the way staff are trained in their use.

Example 1: Underground coal mining

On 7 August 1994 an underground explosion occurred at the Moura No. 2 mine in Queensland, killing 11 men. Among the many comments and recommendations in the report of the official inquiry (Windridge et al., 1995) were the following:

    It is clear from the evidence that many personnel at the Moura No. 2 mine from the superintendent down were inadequately trained in important aspects of their duties. Some training initiatives had commenced at the mine in recent times, but overall the extent of training seems to have been inadequate to keep people up to date.

    As demonstrated repeatedly in evidence, it should not be taken for granted that a statutory certificate of competency to practise as a mine manager, undermanager or deputy carries an assurance that the person possessing it is maintaining, and where necessary developing, the original knowledge base required for the appointment.
    It is recommended, therefore, that the procedures for granting statutory certificates for underground coal mining and the conditions under which they are awarded, be reviewed. In particular, it is recommended that certificates not be granted for life and that a system needs to be developed and put into effect as soon as is practicable that requires certificate holders to demonstrate their fitness to retain the certificate of competency on a regular basis, at intervals of not less than three and not more than five years.

These quotes, in a report which ascribes much of the causality of a major accident involving 11 fatalities to poor/inadequate training, strongly reinforce two points:

1. Human errors with safety implications can be made at any point in the hierarchy, from the rawest recruit to the most senior management.
2. The mere provision of training cannot be assumed to imbue competence.

Example 2: Underground hard rock mining

Throughout one section of the mine several examples of unsafe vehicle parking were seen (Simpson et al., 1996 study). Few of these examples were temporary expedients as in most instances the vehicles were left in an unsafe position throughout the shift, and some remained in place across shifts. The problems included:

•  Locos and rolling stock left parked on the main haulage without stop blocks or chain sprags etc. This was even the case where tracks were on the downgrade toward the tip.
•  Rolling stock left unspragged in crosscuts which were graded downwards toward the main haulage. The mine standards and training stipulate that aeroplane sprags should be used to prevent runaways from rolling onto the main haulage; however, not one aeroplane sprag was seen in use throughout the study period.
•  Rolling stock left parked for long periods in "fouling areas" in contravention of mine standards.
•  Locos left unbraked, unspragged and with keys in place and power on.
All of the above are in contravention of both mine standards and driver and guard training. It is clear that either the driver/guard training or the level of supervision of safety standards (or both) were inadequate.

Example 3: Surface hard rock mining

During a study at the mine it was noticed that several haul trucks reporting for refuelling and "general inspection" were sent off to the workshops to deal with leaks in the hydraulic braking system which the drivers had "failed to pick up". On examining why this was happening it emerged that there was little compliance with
the pre-shift checks required in the rules and driver training, no brake tests were observed at any point (as required, at least once a day, by the rules and training) and drivers were actively discouraged from undertaking brake tests as such tests "could cause damage to the braking system". In short, it became increasingly unclear whether or how any of the required brake tests were conducted. It also emerged that although the initial driver licensing test included a practical test, once drivers were licensed it was assumed that they were "trained for life". No mechanism existed to identify whether performance against, and compliance with, the training and procedures had "slipped" or to provide refresher or updating training.

Example 4: Underground hard rock and coal mining

When pushing hoppers (with the loco at the rear of the hoppers) drivers were seen periodically to lean out of their cabs to get a better view of the road ahead. They did so because the hoppers obscured their forward vision and communication with the guard (positioned on the leading hopper) was difficult and unreliable. Leaning out of the cab while driving is in contravention of the safety standards at the mine and the dangers were emphasised during driver training. Despite the recognition of the risk in both the mine standards and driver training, this behaviour is inevitable given the restricted vision and communications problems. In the event of an accident it would seem highly likely that a driver would be seen as in breach of both the standard and the training, despite the fact that the behaviour advocated in both is essentially impractical.

A similar problem had been seen previously in a UK coal mine when drivers were seen leaning out of the cabs and looking back down the track. When questioned as to why they were doing something which was clearly very risky, it was explained that a new binding system had recently been introduced for material on flat beds which had failed on several occasions. The drivers were, in effect, left to balance the risk from leaning out against the risk of shedding the load (and possible derailment).

Example 5: Underground coal mining

Throughout the 1970s and 1980s the single largest cause of lost time within UK coal mining was associated with musculo-skeletal injury. Within these periods the largest single cause was manual handling activities. This was despite the fact that all new recruits were given detailed manual handling training and the fact that manual handling was often the focus of both local and national initiatives.

Examination of the manual handling training provided indicated that it was exclusively based on the principles of kinetic lifting, following the primary rule of "lift with your legs" (or "straight back, bent legs"). While there is some biomechanical sense in this approach (in that it encourages use of the large leg muscles rather than the much smaller back muscles), even limited knowledge of underground circumstances exposes its most serious limitation – all too frequently it
is impossible to get into position to use effectively the lifting technique advocated by kinetic lifting training. As a result hundreds of thousands of pounds were spent on what was believed to be a reasonable risk mitigation measure but which was almost certainly shown to be impractical on the first day that anyone spent underground. It is perhaps not so surprising that the manual handling problem has proved so intractable when the core risk management tool was, at best, of minimal value and, at worst, dangerous in that it created a false sense of security (for wider discussion of this issue see, for example, Graveling et al., 1985; Simpson, 2000; and for examples of ergonomics approaches to reducing manual handling risk in mining see, for example, Graveling et al., 1992; Talbot and Simpson, 1995; or, more recently, Gallagher, 2008).

Example 6: Underground hard rock mining

During the Simpson et al., 1996 study drivers were frequently seen tramming too close to trains in front of them. This was particularly hazardous on downgrades (especially those travelling toward the tip, where the track was often wet, and when pushing hoppers, which increased the vulnerability of the guard who was positioned on the leading hopper). In addition, several circumstances were seen where the locos appeared to be travelling too fast, although no speed limit signs were seen to confirm this. Examination of the driver/guard training modules and the mine standards revealed that no safe tramming distance or speed limits were specified in either the training or the standards.

Example 7: Surface hard rock mining

In the section on the use of fire extinguishers in the induction training at the mine, and in the "pocket guide" issued to all staff as a job aid on safety, the information given related to how to use stored-pressure type extinguishers, whereas all the extinguishers observed on the mine were of the cartridge-operated type. The method required to operate a cartridge extinguisher is quite different from that for a stored-pressure type and if the procedure for the latter was used with the former, the extinguisher would not work. This is a clear example of training provision becoming out of step with the actual circumstances on the mine, which renders the training given (no matter how good) effectively useless.

Example 8: Underground coal mining

In a study undertaken of human error potential at a South African coal mine (Simpson and Talbot, 1994) it was noticed that, following the inby journey, the driver/assistant uncoupled the loco from the mansets, after which the driver drove
the loco round the pass-by and positioned it at the outby end of the mansets ready for coupling and connection of the brake lines prior to the outby journey. Although the procedure in place at the mine required the driver to test the integrity of the brake line prior to moving off, no driver was seen doing this. Further examination showed that although the brake test was stipulated in the procedure it was not covered in the training.

Example 9: Surface hard rock mining

At one mine studied in the SIMRAC transport and tramming study (Simpson et al., 1996) it emerged that there was a requirement for all drivers to attend review training every three months on all of the vehicles for which they were licensed. This is clearly a sensible provision; however, it also emerged that there appeared to be no limit on the number of licences that a driver could hold. An example of this was one man holding, at the time of the study, no less than 19 licences! With so many licences (or even with considerably fewer) it would be impractical to attend a training course for each every three months, as he would be attending training more often than working. Moreover, this position almost certainly breached the requirements of regulation 18.1.7 of the Minerals Act and Regulations, which requires a driver to drive any vehicle for which he is licensed at least once every six months.

Example 10: Underground coal mining

In a British Coal/European Coal and Steel Community funded project (Rushworth et al., 1986) on the ergonomics of safe working in bunkers, an initial examination of the previous ten UK fatalities in coal bunker operations identified that, of the ten men killed, nine had not been wearing a safety harness at the time of the accident and the tenth, although wearing his harness, did not have it connected to anything! This apparent tendency to ignore the harness was in complete contradiction to both the safety rules and the training provisions in place at each of the mines at the time of the accidents. It seemed highly unlikely that ten separate accidents at ten different mines, all involving the same violation, would be entirely down to coincidence. When the project team began to ask why there was such a reluctance to wear safety harnesses a number of basic problems emerged:

•  Many of the safety harnesses were extremely uncomfortable to wear and several did not have sufficient adjustment to fit larger or smaller men.
•  A number of the harnesses restricted movement quite significantly, sufficient to hinder the work to be carried out, and with several it was quite difficult to work out exactly how to put them on (one was particularly difficult and one miner described it as looking "like someone had spilt a bowl of spaghetti on the floor").
•  Bunker tops and access points often made the use of harnesses a real nuisance, often with no purpose-designed anchor points, and access positions which meant that the safety line was effectively irrelevant even if it was used.
•  Observation of men working in bunkers exposed some rather strange behaviour including, for example, two men working off the same safety line and one supervisor wearing his safety harness upside down (with his arms through the leg holes and his legs through the arm holes) because "it was more comfortable that way".
These points clearly identified shortcomings in the training – in relation to the first three bullets, the safe behaviour advocated in the training was clearly compromised in the "real world" by limitations in the design of both the harnesses and the bunker tops. In addition, the final bullet point clearly indicated a lack of serious consideration of hazard awareness and risk perception in the existing training provisions.
6.2 Potential Routes to Improvement

All of the information provided in Section 5.2 in relation to improving the training provision for codes, rules and procedures applies equally in relation to the wider aspects of safety training considered in this chapter. Collectively, the examples presented in Section 6.1 emphasise six important points which must be considered if training is to work effectively as a safety risk control measure and to improve the effectiveness of the training given:

1. All too often training is seen as a one-off event capable of equipping the trainee with sufficient information to ensure safe behaviour in perpetuity; this will never be the case.
2. Training is often seen as an end in itself, almost as though the mere fact of attendance is sufficient in itself to ensure the required outcome and required competency levels on the issues addressed.
3. While competency is almost always tested in some way at the end of skills training, it remains a relatively rare inclusion in safety focused training.
4. Training must take account of the actual way that the work is done (and, in particular, the actual conditions under which it is done), rather than an exclusive focus on the way "it should be done". Equally, mine training must not simply assume that training which has proved reasonable in other industrial contexts will transfer, equally effectively, to mining conditions.
5. Training must be focused on information from risk assessment and the training needs analysis must take cognisance of all aspects of the risk assessment.
6. It must never be assumed that a good appreciation of hazard awareness and risk perception will simply emerge from the rest of the information provided. These issues need to be specifically addressed for, without a good appreciation of hazards and risks, there is little justification to take the rest of the information provided seriously.

The Rushworth et al. (1986) bunker study provides a good model of how to develop improved training focused specifically on safety. Following on from the initial problem definition in the bunker study (Example 10 in Section 6.1) it was clearly apparent that there were issues (such as harness design and bunker top access design) which could not be addressed by training, whereas the issues of hazard awareness and risk perception could only be addressed by training. The study assessed a range of fall arrest harnesses and identified the best from an ergonomics perspective and also provided guidance on the design of bunker top access points (covering both potential retrofit improvements and new design principles).

Having addressed the non-training issues, the project turned to training improvements. A questionnaire was developed to assess, more systematically, limitations in hazard awareness and risk perception. This was then used to provide proposals for changes to the training programme and modified to act as a before-and-after assessment of the degree of learning during training. This enabled trainers to identify where their training course was limited (based on issues which the majority of trainees failed to fully understand) as well as identifying individuals who needed "topping-up" on particular issues before being "passed-out".

All of the examples given in Section 6.1 and the discussion above in relation to potential improvements have focused on training for direct employees. However, the training requirements and standards of contractor staff raise their own particular issues. Traditionally mining, especially underground mining, rarely used contractor staff; however, in recent years that position has changed and contractors are much more common in mining than has ever been the case. While it is unlikely that contracting-out will become as significant in mining as has been the case in other industries, the size of the mining contractor workforce is such that the issues of training standards and competency assurance in contracting companies must be considered.

Shortly before the demise of British Coal, contracting-out of some underground mining tasks became increasingly commonplace. The importance of competency became apparent very quickly and a "passport to work" scheme was introduced whereby a central database (accessible to all mines) held the training and experience record of all the staff of all authorised contracting companies. Although this scheme proved considerably successful, it is worth noting that the last methane explosion to occur in a UK mine happened in a heading being driven by contracting staff (see Simpson, 1996b). Although the inquiry identified failings on the part of both British Coal and the contractor's officials, one of the comments in
the witness statement of one of the contractor's officials is worthy of quotation as it clearly emphasises the lax standards which prevailed in the heading:

    Before I left the heading I told the driver not to cut more than 3 metres. The foreman was there but I don't know whether he or anyone else heard me.
Concern over the health and safety management of contractors was such that the Western Australian Chamber of Minerals and Energy produced an excellent, concise summary of what needs to be addressed throughout the contractor lifecycle (Chamber of Minerals and Energy, 1997). The topic areas covered were as follows:

•  The contractor/principal relationship;
•  Contract management model – contractor selection phase;
•  Contract management model – post-award phase;
•  Contract management model – post-contract completion phase.
It also provided a series of aides-memoires for use during the process covering:

•  Pro-forma pre-tender qualification;
•  Model safety and health management system check-list;
•  Minimum requirements for small or sub-contractors check-list.
Below is an outline process, based on the issues raised above, which would enable a systematic review of current training provision for safety critical tasks/jobs.

1. Review human error potential by observation (see Chapter 10 for more detail), and by examination of relevant risk assessments and accident/near miss investigations.
2. Define training needs from 1 above (including information needs on risk perception and hazard awareness).
3. Review the current content of training provision in light of information from 2 above.
4. Create a gap analysis between training needs (as defined in 2) and current content (as defined in 3); a simple illustration is sketched at the end of this chapter.
5. Prepare contents of a course to cover the gap analysis.
6. Examine alternative ways of presentation (giving consideration to language, literacy etc.).
7. Incorporate new material with existing training provision to form a new course.
8. Pilot the new course on a selection of the target audience and obtain "warts and all" feedback.
9. Revise the new course material in light of 8.
10. Identify suitable coverage and format for competence assessment (taking triggers from 2 and 9).
11. Pilot the new course and competence assessment (and revise if necessary).
12. Implement the new course.
13. Review after a suitable period (dependent on throughput but no more than 12 months).
14. Collate, from the start of the review period in 13, any relevant accident information arising during the period.
15. Observe the operation for human error potential (in effect re-creating the observations undertaken in 1 above) and collate with information from 12, 13 and 14.
16. Revise course content/method of presentation in light of information from 12, 13, 14 and 15.
17. Define refresher training intervals from 13 and 14.
18. Adopt the training as standard and define what changes to the task/job would be likely to require review of the training given.

Attempting to work through this detailed approach for management jobs, which encompass a wide range of potential influences on error and a wide range of circumstances in which error could be introduced, would be daunting to say the least. However, the critical points made in the Moura No. 2 Mine inquiry (Windridge et al., 1995) regarding the dangers of assuming that knowledge gained early in a career will (a) last and (b) remain adequate must not be overlooked. There is a clear need to incorporate some form of systematic, on-going continuous professional development programme for managers, regardless of their perceived competency.
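Steps 2 to 4 of the outline process above amount to a gap analysis between the training needs identified from observation, risk assessment and accident investigation, and the topics the current course actually covers. The sketch below is a minimal illustration of that comparison only; the topic names are invented for the example and are not taken from any real syllabus.

```python
# Step 2: training needs identified from observation, risk assessment and
# accident/near-miss investigation (illustrative topics only).
training_needs = {
    "hazard awareness", "risk perception", "pre-shift brake test",
    "methane reading positions", "safe harness use",
}

# Step 3: topics covered by the current training course (illustrative only).
current_content = {
    "pre-shift brake test", "kinetic lifting", "methane reading positions",
}

# Step 4: gap analysis - needs not covered, and content that may no longer be needed.
missing_topics = training_needs - current_content
redundant_topics = current_content - training_needs

print("Add to course:", sorted(missing_topics))
print("Review for removal:", sorted(redundant_topics))
```

However simple, making the comparison explicit in this way guards against the common failure described above, where training content drifts out of step with the risks actually present on the mine.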
Chapter 7
Predisposing Factors: Level 5 – Supervision/ First-Line Management Roles and Responsibilities
Supervisory grades and first-line managers have long been considered crucial to safety, for they are the conduit between those who (traditionally) set the rules and those who use them. In addition, it is supervisors and first-line managers who are best placed to see, on a day-to-day basis, whether the operations are being conducted safely and/or whether there are practical difficulties which militate against safe operation. They are, in theory, best placed to know not only what should be done but also what is done.

Historically, however, probably as a function of the inherent dangers of the work, the mining industry has adopted a quasi-military, command and control, management style. This is implicit in a comment in the Leon Commission Report on health and safety in mines, which has been referred to previously and is worth reiterating here, where, when discussing the importance of communications in a multiple language environment, it stated:

    The mining industry has sought to overcome the problem of communication by using the mining lingua franca called Fanagalo as the language of the mines. The Commission considers this to be very unsatisfactory, because the language has a very limited vocabulary and is unable to convey subtle meaning. While it may be satisfactory for giving simple commands it is quite inadequate to convey the nature and extent of the dangers that lurk beneath the surface, the source of such dangers and how best to avoid them. (Leon et al., 1994)
The same point is made, much more explicitly, in the old UK mining joke:

    That's not the mine manager – that's God, he just thinks he's the mine manager!!
While the quasi-military management style is diminishing in many mining operations, the combination of the inherently hazardous conditions and the continuing moves into countries and regions with no mining history and little or no local workforce with any experience of mining means that the demise of the "sergeant-major" supervisor is likely to be slower in mining than in many other industrial contexts.
Looking briefly at the scientific literature on leadership, Grech et al. (2008) note that it is often possible to operate within four different leadership styles:

1. Autocratic: complete control by the leader (like the quasi-military management style mentioned above);
2. Laissez-faire: the leader remains completely passive and allows other operators total freedom in their decisions;
3. Self-centred: everybody works on their own, using their own plans, with their own focus of attention and with very little communication about what they do;
4. Democratic: the democratic leader will consult others, asking them for their opinion prior to important decisions.

Which leadership style is best depends on the situation. It is always important that the team work together to create synergy, supporting each other through communication and the sharing of information. Under normal circumstances, the democratic leadership style best facilitates the creation of synergy. However, in some circumstances it may be necessary to deviate from this leadership style and move into, for example, the autocratic style (for example, in the case of a mine site emergency). Good leaders often possess the ability to change leadership style according to the situation.

In addition, if the pivotal role of supervisors and first-line managers is to be fully capitalised on in the context of health and safety, they need:

•  to know that they have safety responsibilities and what exactly those responsibilities are;
•  the knowledge and experience to fulfil those responsibilities; and
•  the time to fulfil their safety responsibilities (as well as their other responsibilities for, for example, production).
Unfortunately, not even these minimal requirements for fulfilling their safety roles and responsibilities are always in place as the examples below show.
7.1 The Problem

Example 1: Underground coal mining

Example 5 in Section 5.1 arose initially from the fact that the first-line managers at the mine, who had previously had only one section to deal with, had been given two sections. There was simply not enough time available to ensure that they were in each section when a methane reading was needed. While, on realising this, it may have seemed a reasonable decision to delegate the responsibility for the methane reading to someone who was always in the section, the lack of adequate training given to the
Continuous Miner drivers meant that no-one was reliably taking methane readings. Given the importance of a full understanding of the hazards and risks associated with methane monitoring, it could be argued that the fundamental error was to take that safety critical action away from the knowledgeable and experienced staff, and that this error was then compounded by the failure to provide adequate training to those staff taking on the responsibility.

Example 2: Underground hard rock and coal mining

In the Talbot et al. (1996) study of reasons why accepted safety and work standards are not complied with on mines (funded by the South African SIMRAC programme) a wide range of circumstances were identified which indicated clearly that supervisors and first-line managers knew of many instances of breaches of standards and procedures but seemed to accept this as, if not inevitable, "normal". This study covered four mines (two gold, one coal and one platinum) and was based on a series of five questionnaires presented to a sample of management, supervisory and workforce staff, supplemented by discussion and observation. The approach taken created a data set and conclusions which were based almost entirely on the experience, views and opinions of the front-line staff themselves, rather than theories or conjecture on the part of the researchers. Among the results obtained in this study were the following:

•  Over two-thirds of managers and supervisors did not feel that they could trust their staff to work safely.
•  Over a quarter of the supervisors admitted that they may "give the impression" that they expect workmen to break safety rules and procedures.
•  Over 75 per cent of managers and almost 70 per cent of supervisors admitted that they see other people taking risks they would not be willing to take themselves.
•  Almost 60 per cent of managers and almost 50 per cent of supervisors acknowledged that adverse working conditions make compliance with standard rules and procedures difficult.
•  Although the majority of managers and supervisors at the mines considered that the safety training given was good, almost 40 per cent of them felt that many of the men did not fully understand the training given.
•  Both supervisors and managers accepted that rules and procedures had to be broken as a result of failures in logistic support (insufficient tools, materials or manpower for the work to be done within the rules).
•  Both groups considered that there were rules which were over-prescriptive and others where they had no idea why they had been introduced.
•  Both groups acknowledged inconsistencies in terms of the response to breaches of the rules, ranging from disciplinary measures to (most commonly) no action.
Example 3: Underground coal mining

In a study of the reliability of safety inspections (funded by the ECSC and British Coal), Pratt and Simpson (1994) tracked the defects reported during routine safety inspections carried out by several deputies inspecting the same district over the same time period. One example from the results of this study is given in Table 7.1. The figures given in the body of Table 7.1 indicate the number of times the deputy reported the problem/the number of opportunities he had to report it (that is, the number of inspections he undertook during the sample period). There are clear inconsistencies between the deputies in terms of whether they spotted the problem or whether they considered it severe enough to be reported.

The study went on to consider the training given for safety inspections and, in light of the results obtained, whether there was a need for additional training. The report broke the training needs into three broad decision categories:

1. Concrete judgements: those which have a standard measurement procedure and defined criterion values for action (for example, methane monitoring and ventilation etc.);
2. Factual judgements: "simple" yes/no decisions of the type something is there or not (for example, fire-fighting and first-aid equipment, support rules, transport rules and noise zone signage etc.);
3. Subjective judgements: those where a decision has to be made on experience and where criteria, if they exist, are ill-defined (for example, house-keeping standards, wet conditions etc.).

Both of the problems used in Table 7.1 are examples of subjective judgements – the ones where inconsistency is likely to be greatest and also the ones where, because of a lack of defined (or definable) criteria, the training is likely to be at its weakest. Despite this there is a clear potential for many of the issues requiring subjective judgement to relate to significant hazards/risks. This, given the difficulty of addressing subjective judgements with ill-defined criteria in formal training, suggests that the training given would benefit from an additional on-the-job coaching period under an experienced deputy acting as mentor.
Table 7.1  Comparison of cross-deputy reporting on two problems

Problem                    Deputy:  1    2    3    4    5    6    7    8    9
Flushboarding required             3/3  1/1  1/1  5/5  0/2  2/2  1/1  1/1  0/1
Wood legging required              0/3  0/1  0/1  5/5  0/2  2/2  1/1  1/1  1/1
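A crude way to make the inconsistency in Table 7.1 visible is to convert each entry into a reporting rate (times reported divided by opportunities to report). The short sketch below is purely illustrative; the data structure and the decision to flag anything below a 100 per cent reporting rate are assumptions made here, not part of the original study.

```python
# Illustrative sketch only: reporting rates per deputy, derived from Table 7.1.
# Each entry is (times reported, opportunities to report).

table_7_1 = {
    "Flushboarding required": [(3, 3), (1, 1), (1, 1), (5, 5), (0, 2), (2, 2), (1, 1), (1, 1), (0, 1)],
    "Wood legging required":  [(0, 3), (0, 1), (0, 1), (5, 5), (0, 2), (2, 2), (1, 1), (1, 1), (1, 1)],
}

for problem, counts in table_7_1.items():
    print(problem)
    for deputy, (reported, opportunities) in enumerate(counts, start=1):
        rate = reported / opportunities  # proportion of inspections on which the problem was reported
        flag = "" if rate == 1.0 else "  <- inconsistent reporting"
        print(f"  Deputy {deputy}: {reported}/{opportunities} ({rate:.0%}){flag}")
```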
Example 4: Surface hard rock mining

In the Simpson et al. (1996) study of transport and tramming accidents on South African mines, observations at one surface mine identified numerous occasions where vehicles did not stop at junctions despite clear signage requiring them to do so. Some vehicles crossed at full speed while others slowed but then crossed as "it appeared to be safe". At no time during the study did any supervisor take action on such obvious breaches of the rules. In fact, when the observations were collated toward the end of the study period it was clear that the majority of those failing to stop were, in fact, supervisors. Failure by supervisory staff to uphold safety rules will gradually lead to rule erosion. Rule erosion will occur even more quickly when supervisors/managers breach rules themselves, thereby implicitly condoning such action.

Example 5: Underground coal mining

As part of the Simpson et al. (1994) study of the role of human error in accident aetiology, a review was undertaken of the (then) recent fatal accident reports within British Coal. Extracts from the witness statements taken in two of these accidents are particularly pertinent to this chapter:

• One of the witness statements in relation to a conveyor man-riding fatality stated: "I am aware of men riding the conveyor and have seen them doing so since legal man-riding was discontinued .. but none have been caught by a District Overman or Deputy as far as I am aware." This may seem a reasonable and straightforward statement. However, the whole interpretation changes when you realise that the man making the statement was himself a deputy!
• During the investigation of a fall-of-ground fatality, one of the crucial first pieces of evidence to emerge was that the support rules for the heading were not being adhered to. Among the witness statements were these comments from an official working for the contracting company who were driving the heading: "I have been issued with, and signed for, copies of the Manager's Support Rules .. I have read them and fully understood their content … I was aware that the Support Rules were not being adhered to … however, in my opinion no additional support [other than that being used] was required."
Both of these examples show supervisory staff who were fully aware that safety rules were being ignored in clearly hazardous circumstances, yet who took no action to rectify the situation. Both circumstances resulted in fatalities in which the failure to act on the recognised breaches was undoubtedly a contributory factor.
Example 6: Underground coal mining

During a study of human error potential in the haulage and transport system of one underground coal mine it was noticed that a large stock of supplies had built up in the gate road to the face. This extended so far outby that the loco delivering the supplies had to stop and unload 80 m short of the end of the loco track as there was no room to unload beyond that point. This excess of material in the face-end area created a considerable increase in slip, trip and fall risk and added to the musculoskeletal risk by requiring material to be manhandled into the face/face-end over much greater distances than would have been the case if the loco had been capable of unloading nearer to the end of the loco run. The reason for this situation was that the district deputies were submitting a standard supplies requirement each day regardless of actual need. The reasons given when the deputies were questioned were all essentially the same – "we're not going to get caught with the face stopped due to lack of supplies". While the intention was reasonable, the approach taken to ensure the face wasn't stopped created considerable additional risk – the potential cause for concern in relation to a possible production problem effectively created additional, real, safety risks.

Example 7: Underground hard rock mining

As part of the Talbot et al. (1996) study of compliance with safety rules and procedures, the relative perception of risk of a sample of risky behaviours was compared across a range of job categories. Table 7.2 summarises the results of this relative ranking for one risky scenario (guard jumps onto a moving train).
Table 7.2  Summary of risk ranking for one accident-likely scenario

Job Category            Ranking (1 = most risky, 6 = least risky)
Underground Managers    6
Mine Overseers          1
Shift Bosses            2
Miners                  1
Safety Officers         1
Loco Drivers/Guards     6
Training Officers       2
Team Leaders            6
The fact that the average ranking for the riskiness of this activity ranged from the highest to the lowest across the job categories studied clearly implies that there are likely to be very different perceptions of the importance of taking action if and when such behaviour is seen. The fact that three of the eight job categories ranked it most risky while three of the eight ranked it the least risky of the six scenarios presented is particularly worrying.

Example 8: Surface hard rock mining

During the SIMRAC transport and tramming study (Simpson et al., 1996) it was pointed out that a significant proportion of vehicle repairs (and potential accidents) could have been dealt with earlier if the pre-shift checks were correctly and routinely completed. The process required the drivers to complete a checklist and "post" it in one of several "post-boxes" placed around the mine. The supervisors would then collect them as they travelled the mine and return them to the engineering department for examination and a decision as to whether any action (particularly pre-emptive action) was needed. The general feeling on the mine was that the drivers were (a) not routinely undertaking the pre-shift checks and (b) only partially completing those that they did conduct. Detailed examination of the problem showed that while there was some evidence to support both of these concerns, the main problem was that the pre-shift checklists were not being routinely collected by the supervisors (several "post-boxes" contained completed checklists which were over a week old).

Example 9: Underground coal mining

In the Pratt and Simpson (1994) study, a common issue was that reported problems were not being addressed quickly, which arose, in turn, from a lack of clarity over who was responsible for the action. For example, after tracking through the deputies' reports and noticing regular incidences of issues identified which did not appear to have been addressed, the researchers took a sample of issues and questioned various levels in the organisation to identify who was seen as having the responsibility for action. Dependent on the nature of the problem, the deputies expected action to be taken by the overman, the undermanager or the safety engineer. However, it was equally clear that these three members of staff assumed that, if the problem was not too great, the deputy himself would initiate the required action. This problem was exacerbated by the fact that very few of the deputies' reports seen during the study included any indication of severity. For example, three deputies (one from each shift) had reported the same problem in relation to wet roadway conditions; two had simply stated:

Wet roadway beyond 34's Tailgate
Wet roadway between 34's Tailgate and Maingate
Whereas the third had written:

Wet roadway between 34's Tailgate and Maingate is causing FSVs to slide into air pipes and belt structure
Clearly there is a much better indication of the potential severity of the problem in the latter entry, and one which, even if there had been assumptions that it was already being addressed, would have prompted someone to check that action had been taken.

Example 10: Underground coal mining

In an ECSC/BCC study (Mason et al., 1995) examining supervisory attitudes to safety, two particularly relevant points emerged in relation to deputies' perception of their role as the front-line management staff with responsibility for safety inspections:

• The deputies considered that their role was primarily concerned with the identification and reporting of unsafe conditions and that unsafe behaviour was largely ignored. Indeed, several of those involved in the study specifically stated that their role was restricted to the reporting of unsafe conditions.
• District deputies often had the (at least implied) responsibility to oversee the safety of all staff in their district. This included electricians, despite the fact that few deputies had any electrical training or any basis on which to assess electrical safety beyond day-to-day domestic knowledge.
7.2 Potential Routes to Improvement

There are four crucial considerations if the potential for human error by supervisors and first-line managers is to be reduced:

1. Clarity of roles, responsibilities and authorities.
2. Adequate training.
3. Support.
4. Active monitoring.
Each of the above is expanded in more detail in the following sections.

7.2.1 Clarity of roles, responsibilities and authorities

It is by no means uncommon to find (as the examples in Section 7.1 show) that confusion can arise between the various levels of operational management as to
the extent of their safety roles, responsibilities and, equally importantly, their level of authority to take or instigate remedial action. In some cases this is a genuine problem in so far as there is a real lack of clarity (for example, a failure to carefully define the boundaries of various roles). In other cases, making the assumption that other people are addressing a given issue may well be a convenient "excuse". In either case, however, the end result is the same: action which should have been taken is not taken. Equally, clear and unequivocal definitions of roles, responsibilities and levels of authority will reduce the problem (whether it is real or merely convenient). In particular, the following issues should be considered:

1. Are there any roles and responsibilities specified in local mining regulations? If so, do they cover all of the relevant issues? For example, local mining regulations may state that certain role holders have the responsibility to conduct safety inspections but they may not define in any detail what the inspections should cover, or the issues defined may omit some which the organisation thinks should also be included in routine inspections. In addition, it may be that while local regulations require regular inspection of certain conditions, they may not specify in any detail what action should be taken in the event of any concerns being identified. Such local regulatory requirements can be used as an initial minimum requirements set in a process of mapping out safety roles, responsibilities etc. across the organisation.
2. If there are no local regulatory requirements, has any attempt been made to specify the safety roles, responsibilities and authorities at the various supervisory and management levels within the organisation? All too often what actually happens is the acceptance of an unwritten custom and practice.
3. Using any requirements from 1 above as a minimal set, and existing risk assessment information, together with information from existing safety rules and procedures (for example, Permits to Work, Safety Instructions and Standard Procedures), create a list of critical circumstances, conditions and activities which need active monitoring.
4. From 3, identify in each case which role is best placed to monitor each situation and which role has the required knowledge and experience to authorise action on problems which emerge.
5. From 4, a communication net can be developed linking each potential hazard to the defined monitoring role and then to the defined action role (a minimal illustrative sketch of such a mapping is given after this list).
6. From 5, define the required action by both the monitoring and action roles.
7. From 6, examine whether any of the hazards requiring monitoring can be sensibly and practically clustered into a meaningful suite of responsibilities.
8. Examine the feasibility of individual responsibilities (and/or clusters) against the other responsibilities placed on that role (considering, for example, the problems which emerged in relation to Example 5 in Section 5.1 and Example 10 in Section 7.1).
9. Ensure the boundaries between the responsibilities are well defined and unequivocally clear, especially where more than one role may be involved in aspects of monitoring the same potential hazard.
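The core of steps 3 to 6 above is, in effect, a simple mapping from each monitored hazard to a monitoring role, an action role and a required action. A minimal sketch of how such a mapping could be recorded and checked for gaps is given below; the hazards, role titles and actions shown are invented for illustration only and are not drawn from any particular mine or regulation.

```python
# Illustrative sketch only: a "communication net" linking monitored hazards to roles.
# All hazard names, roles and actions below are hypothetical examples.

monitoring_net = {
    "methane level at the face": {
        "monitoring_role": "deputy",
        "action_role": "undermanager",
        "required_action": "withdraw men and isolate power if the action level is exceeded",
    },
    "roadway support condition in the heading": {
        "monitoring_role": "deputy",
        "action_role": "overman",
        "required_action": "arrange additional support before work continues",
    },
    "pre-shift vehicle checklist returns": {
        "monitoring_role": "shift supervisor",
        "action_role": "engineering foreman",
        "required_action": "",  # deliberately left blank to show how a gap is flagged
    },
}

def check_net(net):
    """Flag any hazard where the monitoring role, action role or required action is undefined."""
    for hazard, entry in net.items():
        missing = [field for field in ("monitoring_role", "action_role", "required_action")
                   if not entry.get(field)]
        if missing:
            print(f"{hazard}: responsibility not fully defined ({', '.join(missing)})")

check_net(monitoring_net)
```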
7.2.2 Adequate training

All too frequently the assumption is made that a good operator will be a "natural" supervisor or that a young engineer will, by dint of his knowledge alone, be adequately equipped to be a first-line manager. In both of these circumstances the assumption is rarely, if ever, true. Training, especially in relation to safety responsibilities, is essential. The process outlined in Section 7.2.1 above provides a sound and systematic basis on which to define the (safety) training needs analysis for all those supervisory and first-line management roles allocated safety responsibilities. The crucial importance of the supervisory and first-line management monitoring and action roles in relation to safety issues is such that the training provided should be formal (in the sense that it is a specific training course), although the best way of delivering such courses will need to take account of local issues (and therefore be considered in the context of the issues raised in Sections 5.2 and 6.2). In particular, for such crucial roles, there must be some form of competency testing and, in light of the conclusions of the Moura No 2 Investigation (Windridge et al., 1995), periodic reviews of competence and, where necessary, re-training.

One of the crucial issues raised about the effectiveness of supervisors and first-line managers is that the role requires a new set of skills which have previously either not been required or not been required to anywhere near the same extent. In particular, a good deal of emphasis has been placed on the importance of inter-personal skills in supervision and first-line management. While there remains some doubt as to whether inter-personal skills can be trained, there is no doubt that an individual's awareness of what is expected of them, and their knowledge of how to deliver those expectations, is crucial to their confidence in dealing with the additional responsibilities. The systematic training needs analysis followed by appropriate training (as advocated above) will undoubtedly help develop the individual's confidence in their ability to deliver the role, especially if those above and below them in the organisation have received training which clearly specifies their roles, responsibilities and authorities in relation to safety-monitoring activities.

Best practice developed in other industries can help here; one example of this is the use of a form of Crew Resource Management (CRM). This technique began in the 1970s in the aviation industry when it was recognised that most
aircraft accidents were due to human error, rather than technical aspects of flying (Helmreich et al., 1999). Many of these errors were failures of human factors issues mentioned at the beginning of this chapter, such as team co-ordination, leadership and/or communication. Although CRM was originally designed for cockpit crews, it was subsequently extended to consider all members of the extended aviation team, including air traffic controllers and flight attendants. Closer to mining, CRM is becoming increasingly used in other domains such as fire services, railway operations, emergency response teams and the offshore drilling industry (Ahern, 2008). CRM training helps operators gain greater influence over the human and situational factors that contribute to critical incidents. This can help optimise the benefits of multi-person crews to improve safety; as such, it should be applicable to many mining tasks. CRM training workshops often involve the following subjects, which can be readily adapted to mining and similar domains (Ahern, 2008):

• Leadership/team work;
• Workload management;
• Communication;
• Problem/decision making;
• Decision making;
• Techniques to improve Situation Awareness.
7.2.3 Support

Where supervisors and first-line managers have defined responsibilities for monitoring the safety of conditions and/or activities (whether in the form of, for example, methane measurement, roof sounding, signing-off Permits to Work etc.), it is essential that their activities in delivering these responsibilities are actively and systematically supported by senior staff on the mine. Without this support, the individuals concerned become discouraged, their standards slip and the workforce becomes used to "getting away with …". This not only reduces the credibility of the supervisor/first-line manager but also of the management as a whole – "they talk a good message but they don't put their money where their mouth is". The net result will be that many safety risk management actions which are assumed to be in place are being routinely violated. Common issues associated with a lack of active support for the safety responsibilities of supervisors/first-line managers include:

• Logistic support – problems of insufficient manpower and/or equipment to enable adverse conditions to be addressed or activities to be undertaken in the prescribed way.
• Failure to act on information fed forward – while there may be good reasons for this (such as differing views on the urgency of a given issue), there must always be a response to enable the supervisor/first-line manager to pass on the reason why action is not being taken.
• Failure to provide supporting information – insufficient information when procedures etc. are changed or updated; insufficient information provided for "tool-box talks", "safety briefings" etc.
7.2.4 Active monitoring

It is essential that supervisor and first-line management responsibilities for safety are actively monitored to ensure that the required standards are not slipping. This can be achieved by formal auditing processes where work is observed, permits checked, etc., or by less formal means such as overseeing by the member of staff to whom the supervisor/first-line manager reports. Whatever format is used to monitor performance on the delivery of safety responsibilities, two issues are crucially important:

• Any lack of compliance should be examined to identify what is behind the problem, as there may be genuine reasons beyond the control of the supervisor/first-line manager and their staff which are actually creating the difficulty.
• The purpose of the exercise should always and only be to improve safety. It should only lead to action taken against the supervisor/first-line manager if they are clearly the root cause of the problem.
One significant form of performance monitoring for supervisors/first-line managers which should be seriously considered (if not already in place) is the provision of mentoring/coaching during the early months in the job, perhaps as part of a formal probationary period. Unless the issues raised in Sections 7.2.1–7.2.4 are addressed in some systematic way appropriate for the local circumstances, the potential for human error arising will be considerable. It is at the supervisory and first-line management level that the safety systems and procedures are brought together. Unless supervisors and first-line managers have (collectively) been given carefully defined roles, appropriate authority, quality training and active support, attempts to improve safety will be like trying to “boil potatoes in a colander”.
Chapter 8
Predisposing Factors: Level 6 – Safety Management System/Organisation and Safety Culture
It could be argued that all the examples provided in Chapters 3–7 represent failures in the Safety Management System, the Safety Management Organisation or the safety culture (or all three) as it is the Safety Management System and the Safety Organisation which are meant to bring together all actions necessary to control risk and ensure safety and which should set the tone for the safety culture. In this sense it is difficult to provide specific examples associated with failure of the Safety Management System or those arising from a poor safety culture without simply repeating variations on a theme of those already provided. Although the Safety Management System (and the Safety Management Organisation in place to deliver it) and safety culture are intrinsically linked, the two are addressed separately below simply for the sake of clarity. The chapter concludes with a short introduction to the notions of “organisational maturity” and “high reliability organisations”. These concepts show how selected companies in other domains have addressed similar issues.
8.1 Safety Management System/Organisation

The elements of effective Safety Management Systems have been extensively documented in international standards (such as AS/NZ 4801, BS8800, ISO 14001, OHSAS 18001, etc.) and concisely summarised in documents such as HS(G)65 (HSE, 1997), so there is little benefit in repeating the high-level requirements here. It is, however, worth re-emphasising four crucially important points:

• Although it has become almost a health and safety cliché, the importance of committed and on-going leadership in relation to health and safety cannot be over-stated. Unless there is a clear and maintained lead "from the top", any benefits arising from new initiatives will be short lived. In addition, it is only from the very highest level of an organisation that the budgetary commitment to safety initiatives can be obtained and, although many improvements can be made at little cost, some expenditure will be inevitable. So, unless the recognition of the importance of safety and a willingness to invest in safety is present and echoed down the organisation by action as well as words, the root causes will remain and the sources of potential error will continue to predispose accidents.

• Safety Management (including the management of potential human error) must be active to be effective. No matter how sound and comprehensive a Safety Management System is on paper, unless it is translated into day-to-day reality it will be of no value at all. The official Inquiry into the Clapham Junction railway accident, which occurred in London in 1988, summed up this problem very succinctly:

The concern for safety was permitted to co-exist with working practices which … were positively dangerous … The best of intentions regarding safe working practices was permitted to go hand in hand with the worst of inaction in ensuring that such practices were put into effect. (Department of Transport 1988a)

• The Safety Management System must include an organisation, an infrastructure, within the operation which provides the mechanism whereby the requirements of the system can be, and are, delivered. A Safety Management System is not simply a collection of codes, rules, procedures, Permits to Work, safe working practices etc. The Safety Management System, like all other business systems within a company, cannot function without clear definition of who holds what responsibilities, what their authorities are and what the reporting lines are. Without this, assumptions will be made and errors will emerge with the potential to create accidents and disasters, as was succinctly pointed out in the Inquiry into another UK rail disaster (the King's Cross Underground fire):

Although I accept that London Underground believed that safety was enshrined in the ethos of railway operation, it became clear that they had a blind spot … I believe this arose because no one person was charged with overall responsibility for safety. Each director believed he was responsible for safety in his division, but that it covered principally the safety of staff. The operations director, who was responsible for the safe operation of the system, did not believe he was responsible for the safety of lifts and escalators which he believed came within the engineering director's department. Specialist safety staff were mainly in junior positions and concerned solely with safety of staff. (Department of Transport 1988b)

• Finally, in this outline of the crucial elements of effective safety management, the Safety Management System must be comprehensive. As well as covering the "obvious" elements such as defined responsibilities, codes, rules and procedures, training etc., it must encompass those aspects
of the general operation which could (even indirectly) affect safety. Such considerations would include, for example, fatigue, shift work and the effects of both on vigilance. While crucial considerations, such issues are highly context dependent (and often context specific) and, as such, it is difficult to generalise in terms of their absolute influence on human error. Nonetheless a comprehensive Safety Management System should at least acknowledge their potential influence. One word of warning is required from studies in this area. Although generally it would seem, for example, that hours of work, time on tasks and time of day are the main risk factors regarding fatigue, such assumptions do not always stand up to scrutiny. For example, recent work by Cliff and Horberry (2008) examined the principal variations in accident and incident risk in relation to roster design within the Australian coal mining industry. The main results of their research did not indicate any strong association between hours of work and the number of incidents or injuries. It is not unusual in studies of the influence of shift work, fatigue etc. on safety to generate results which are counter-intuitive. Context is all important in such considerations.

Not only does the quotation from the King's Cross Inquiry serve to emphasise the point made in the previous bullet, but it also raises another important point in relation to the organisation of safety management – the role of the safety specialist. The role of the safety specialist in industry generally has been changing in recent decades, not least because for many years it was not seen as an independent professional role (see, for example, Nicholas, 1998 on the professional registration of safety practitioners in South Africa). In some senses this change has been (justifiably) slower in mining than in other industrial sectors for, while the emphasis was on unique mining hazards, it made sense that safety "specialists" would "emerge" from traditional mining engineering disciplines/training. As Joy (1996) states:

In the past it was common for the safety professional to be developed from other roles in the mine such as production or maintenance. The person would often have a genuine interest in safety which the mine might develop through short training courses. He would already be familiar with the regulations as part of his previous role so some grounding in OH&S plus his operational expertise were often seen to be the ticket to effective safety management.
It is relatively recently, following the logic in, for example, Buchanan (2000) quoted in Chapter 1, that it has become apparent that the "modern" mine safety professional needs a much broader knowledge and experience base than has been, traditionally, the case. The Joy (1996) paper, quoted above, was based on a workshop conducted at the Minesafe International Conference at which he facilitated a discussion on the
future role of mine safety professionals. He introduced the debate by stating six changes which he believed were both necessary and beginning to emerge. These are summarised below:

• A move from safety manager to safety coach: rather than the "player" role where the safety specialist was in the front line touring the site, monitoring standards and taking direct remedial action, a coaching role involves supporting managers, feeding them information, suggesting solutions, facilitating action, co-ordinating and monitoring implementation.
• A move from responding to prescriptive regulation to supporting enabling regulation: following the changes which have occurred in many national regulatory systems where the emphasis has changed from proving compliance with detailed, specific, requirements to proving that you have systems in place to manage safety across the risk spectrum.
• A move from being familiar with risk management to being an expert in risk management: this reflects a need for a broader perspective which not only takes account of best practice elsewhere in mining but also puts the safety professional in a position to capitalise on best practice from other industrial contexts.
• A move from being expert in mining operations to being familiar with mining operations: this reflects the change in focus evident in, for example, the Buchanan (2000) and the Simpson and Widdas (1992) papers where the point is made that more expertise is needed in the industry on safety issues which are not associated with the traditional uniquely mining hazards.
• A move from internal communicator to external communicator: this reflects the need to break out of what Joy describes as the mining industry's tendency to "inward thinking" and the "not invented here syndrome".
• A move from administrative skills to management skills: this reflects the move away from a high proportion of the safety specialists' time being spent on keeping records, filling forms and generally administrating safety information toward the management skills needed to facilitate and co-ordinate safety initiatives across the operational departments and within the context of the wider business processes.
Collectively these six changes reflect the perception of a need for a change (both from within the industry and from its regulators) in the role and focus of safety professionals within the industry. Excellent examples of the way in which computer-based systems can help free safety specialists from their often heavy involvement in statistics are given in Calder (1998) and Brown et al. (1998).

The concept of a coaching role is particularly useful. Joy uses an analogy to team sports:
Consider the change in role when a player in a team sport moves to being the coach. His or her focus must change from delivering performance on the field to encouraging, supporting and assisting the performance of others.
While this change of role is clearly significant, it does not separate the coach from responsibility for performance on the field. Indeed, in some circumstances (UK professional soccer being a good example), the coach often becomes (or is made) more accountable for team performance (or the lack of it) than are the players. This integration of the coaching role with outcome is crucial and emphasises an important difference from the safety advisor role which has been prevalent in many industries, including mining, over recent years.

While the concept of safety advisors was developed, in large part, to (correctly) reinforce the fact that it is operational management who must own the safety problem rather than the safety manager, the transition from safety manager to safety advisor may well have been a step too far and one which may be seen, in many senses, to have been counter-productive. The crux of this assertion is that functions whose sole responsibility is to advise not only have little direct involvement in actual action but also have little direct accountability. This was summed up very succinctly by a deputy mine manager as follows:

Health and Safety people are a pain – all they do is audit, complain and walk away – if you ask for advice you get pat answers such as comply with such and such regulations or guidance without any real help as to how to comply – they are no more than policemen, no that's unfair to policemen – they are traffic wardens.
That may, in fact, also be unfair to traffic wardens for they, like safety advisors, would undoubtedly reply that they are (a) serving a real purpose and (b) doing what they are required to do. This is, however, the crucial distinction between Joy's concept of the safety coach and the concept of a safety advisor (in at least many of its incarnations) – a coach is an integral part of the team, neither can succeed without the other; an advisor, by contrast, can (and often does) remain independent of and separate from the rest of the team (if they don't take my advice then what can I do?). There is another interesting distinction which is crucial to measuring the effectiveness of the safety performance/function within an organisation. On what basis do you conduct, for example, an annual performance appraisal of a safety advisor? What measures of success (or otherwise) can be put in place which can be used to indicate the value of the contribution of an advisor? In a different but analogous context, civil servants tend, generally, to have much more stable careers than the government ministers they advise. Appraising the contribution and performance of a safety coach is, however, a much less convoluted issue. For example, the safety coach and the operational
manager can be held jointly accountable for delivering, say, continual improvement in safety standards because, unlike the manager/advisor combination, the manager/coach combination is mutually dependent. There can be little doubt, therefore, that the changes advocated by Joy not only reflect external influences and on-going changes in the safety focus within mining but will also generate a better, more meaningful and more integrated safety management function/operation.

Donoghue (2001) proposes an interesting further step in relation to the issue of clarifying accountability for action (or inaction) in relation to safety and health issues. The paper suggests that management could be held accountable for injury (or ill-health) incidents to their staff by the use of what is, in effect, retrospective risk assessment. The principle involved is that, should the retrospective risk assessment show inadequate control, this should be reflected in the manager's annual performance appraisal. What is proposed is that a risk matrix is devised for the operation in which a banding is agreed so that above a particular criterion level management can be considered accountable for inadequate risk control and "penalised", whereas below that level no penalty is enacted. While such an approach has appeal and reinforces the need to consider the risk assessment as part of accident investigation, it would be fraught with organisational difficulties in terms of the equity of the allocation of responsibility (not dissimilar to the legal difficulties encountered with the concept of, for example, corporate manslaughter). In particular, such an approach would further challenge the issue of the relationship between management and safety professionals (for example, a case could easily be made to "exonerate" a safety advisor in this accountability equation).
8.2 Safety Culture

The problems which arise from a poor safety culture are invidious and all-pervasive; they will undermine all provisions in place to promote safety and damage the effectiveness of any new initiatives. It is hardly surprising, therefore, that few, if any, safety specialists would question the importance of a positive and sustainable safety culture. In fact, many would consider it to be the fundamental requirement for effective safety assurance. However, despite this almost universal acceptance of its importance, it is a concept which is, in many ways, rather vague and intangible. Many of the definitions which have been proposed do little to clarify the situation. For example, an often referenced definition was that proposed by the International Nuclear Safety Advisory Group of the International Atomic Energy Agency (International Nuclear Safety Advisory Group, 1988) as follows:
That assembly of characteristics and attitudes in organisations and individuals which establishes that, as an overriding priority, nuclear plant safety issues receive the attention warranted by their significance.
An alternative which is also often quoted is given in a report by the Human Factors Study Group of the UK Health and Safety Commission's Advisory Committee on the Safety of Nuclear Installations (ACSNI, 1993) which defined safety culture as follows:

The safety culture of an organisation is the product of individual and group values, attitudes, perceptions, competencies and patterns of behaviour that determine the commitment to, and the style and proficiency of, an organisation's health and safety management. Organisations with a positive safety culture are characterised by communications founded on mutual trust, by shared perceptions of the importance of safety and by confidence in the efficacy of preventative measures.
At the opposite end of the succinctness spectrum, the Confederation of British Industry (1990) describes safety culture as:

Put simply it is "the way we do things around here".
Although few working in the field of safety culture would argue with the essence of any of the definitions above (or others that have been proposed), it is difficult to avoid thinking that they offer little by way of increased understanding of the concept. Indeed Guldenmund (2000), in a detailed review of the concept of safety culture, concludes that it is still largely ill defined.

The position is further complicated by the use of the term safety climate in parallel with safety culture. Although many use these two terms as effectively interchangeable, other authors see a clear distinction. For example, Gadd and Collins (2002), taking a lead from Flin et al. (2000), describe safety climate as "… the current surface features of safety culture which are discerned from employees' attitudes and perceptions". In this sense safety culture can be seen as the underlying ethos of an organisation and safety climate as the day-to-day manifestation of the culture. If this distinction is accepted then it introduces an additional benefit. Safety climate, by focusing on employees' attitudes and perceptions, is, at least in principle, easily measurable (whereas safety culture is not). Numerous approaches to the measurement of safety climate have been proposed, although not all have been subject to any meaningful attempts at validation and there are often differences in emphasis. Davies et al. (2001) provides an excellent comparison of six proposed measures of safety climate.

The problem is further complicated by the fact that several publications (for example, Gadd and Collins, 2002) point out that it is not uncommon for definable
sub-cultures to exist within an organisation. In this circumstance it is quite possible that a sub-culture can represent a "local" situation which is either more positive or more negative than the overall position. For example, a particular department may be "better" than others as a function of the approach taken by the local managers/supervisors.

Returning to the issue of obtaining a general understanding of what actually constitutes safety culture, several authors have focused purely on listing the observable attributes of a positive safety culture (see, for example, in a mining context, Pitzer, 1993; Davies, 1993; Schutte, 1998; and Calder, 1998). This approach has the benefit of avoiding yet more attempted definitions, which, while impressively worded, in reality impart little. Mulder (1998) provides a list of the attributes of an organisation which are vital in creating and maintaining a good safety culture; it can be seen as representative of most of the other attribute lists given in the literature. The list is as follows:

1. genuine and visible commitment and leadership from the top;
2. acceptance that improving health and safety performance is a long term goal which requires sustained effort and interest;
3. a policy statement of high expectation which conveys a sense of optimism;
4. sound codes of practice and health and safety standards;
5. health and safety should be awarded adequate resources;
6. health and safety must be a line management responsibility;
7. employee ownership, involvement, training and communication;
8. setting of achievable targets and measurement of performance against those targets;
9. all incidents or deviations, irrespective of whether injury or damage occurred, must be thoroughly investigated, documented and disseminated;
10. compliance with standards must be ensured through auditing;
11. good health and safety behaviour must be a condition of employment;
12. all deficiencies must be remedied promptly;
13. managers at all levels must regularly assess performance; and
14. factors that influence the behaviour of managers, supervisors and employees must be properly managed.

One crucial element in the development of a positive safety culture which is not explicitly mentioned in Mulder's list of attributes is the creation of a no-blame culture. There is no doubt that this is a central issue and, equally, no doubt that where human error has been implicated in an accident, the allocation of blame has, all too often, been the primary focus of the investigation. A good example of this was the result of the initial investigation of the accident which formed the basis of Example 1 in Section 4.1, where the initial conclusion was that of poor personal positioning on the part of the man who was killed. While this was true,
it took no account of the fact that the attenuation of the hearing defenders he was wearing masked the fork lift truck warning signal.

Not only is a focus on blame morally questionable, it is counter-productive. For example, if you discover when investigating an accident that Fred "caused" an accident as a result of an error he made and stop the investigation at that point, there are very few options available for remedial measures. You can decide he needs re-training, but unless you have identified why he made the error the chances are that all that will be done is to repeat the training already given which, almost by definition, has failed (for if it had worked, he would have been less likely to be in error). Alternatively, you can choose to discipline him but, dependent on the nature of the error and its root cause, that will not necessarily remove the error potential. Finally, you could decide that the error was so great that you should sack him; while that will stop him repeating the error, it will not remove it for anyone else. It is only by focusing on what actually caused the error that you can maximise the identification of the most appropriate accident prevention action.

Blame focus in relation to human error has been an issue of concern in every industrial context. However, the post-apartheid changes in mining regulation in South Africa were probably the first time that reducing a blame focus has been a specific objective in framing mining regulations (see, for example, Hermanus and van der Bergh, 1996; Marx, 2000). Nonetheless, the adoption of a no-blame culture does not exclude the need to identify where the responsibility lies, for this is part of the root cause and of linking the root cause to the most appropriate remedial measures.

Returning to the wider issue of the total suite of attributes which constitute the basis of a positive safety culture: after setting out his list of attributes, Mulder goes on, very usefully, to list a number of pitfalls associated with culture change programmes. It is, however, a touch ironic that, after detailing a 14-point attribute list, the second in his list of pitfalls is "Setting too many goals". Mulder's point is both real and important, but its proximity to such a long list of things to achieve is emblematic of the difficulties faced by anyone wishing to embark on improving safety culture. This is also evident in a quotation from the Schutte paper mentioned above:
There is no doubt that achieving this would improve safety culture but equally it would be no surprise if the manager at the mine on receiving this advice was left wondering: where on earth do I start!? There are three fundamental difficulties in seeking a widely applicable model for the improvement of safety culture within an organisation:
• Improving safety culture is about changing human behaviour but our behaviour is dynamic and context dependent. We often react differently to some event today than we would if the same event had occurred a week ago. Similarly, we will do and say things to adults which we would never contemplate in front of children, we will do things after "a few drinks" which we would probably never do when sober, we drive differently on long straight quiet country roads than we do in the chaos of the city, and so on. In addition, our behaviour differs depending on who we are dealing with; we will, for example, take criticism from some people which we would take great exception to from others. In this context there can be no universally applicable, generic, approaches to safety culture improvement except at the very high level of general attributes/targets.
• Improving safety culture is also about changing attitudes and that, as decades of applied psychology research has shown, is far from easy. One of the main problems is that people are fundamentally wary of anyone who is trying to change their attitudes. Consider, for example, what would be the replies from a sample of people to the question: which professions do you trust least? It is a pretty fair guess that among the most consistent replies would be, for example, politicians and second-hand car dealers. What do these apparently disparate professions have in common? They are both trying to change our attitudes. Politicians are trying to persuade you to vote for them because they have the best interests of the community closer to their heart than the other candidates, but you know full well that their own best interests are what really matters to all of them. Similarly, the second-hand car dealer is trying to persuade you that this rotting heap in front of you is not only safe, reliable and sound but worth twice as much as you think it is. Fundamentally, people do not trust other people who they perceive as trying to change their attitudes. A frontal attack on changing safety attitudes is therefore likely to fail.
• The elements of a positive safety culture (as the Mulder list above shows) are both numerous and wide ranging, but many are also interdependent and some are consequential (in that it will be easier to achieve one if others have been achieved in advance). This makes a deliberate attempt to change safety culture head-on highly complex and extremely demanding.
Much of the psychological research on attitudes and attitude change has shown that it is much more effective to change the circumstances which predispose the attitudes (and let the attitude change occur as a consequence) rather than to try to change the attitudes directly. This is important in the context of safety culture for what it implies is that if there is a cynical attitude among the workforce that management really don’t care about safety, removing that perception by tangible action will change the attitude. Similarly, a lax attitude to safety compliance generated by no one taking action on breaches of the regulations will not change by exhortation alone; the advocacy of compliance needs to be seen to be important
by clear action to show that non-compliance is not accepted. It is similarly recognised in psychological research that changes in attitude will engender changes in behaviour. These points also have relevance to the problem alluded to in the third bullet above, in that small successes are cumulative; gradually, small specific improvements will result in an improved culture.

Few, if any, mining studies have attempted to change safety culture head-on. However, many would claim to have done so by various approaches which are generally characterised by the introduction of a programme of (often small) changes and initiatives which, collectively, have resulted in an overall change in safety performance which clearly reflects a change in culture (despite the fact that it has rarely been the case that a measurable link between the initiatives and the culture change has been proven). Among those where changes have occurred which seem to encompass a change in safety culture are, for example, Johnston (1993), Neindorf and Fasching (1993), Pitzer (1993), Mulder (1998) and Jordinson et al. (2000).

Each of the above (and many other similar studies) has been, essentially, limited to improvement activities in individual mines or mining houses and is, as a result, at least context dependent. However, there have also been suggestions that indirect approaches to safety culture improvement can be encompassed within more widely applicable, less context dependent, techniques. For example, Purdy (2000) encompasses the indirect approach to safety culture improvement very elegantly by, in effect, stating that you can, to all intents and purposes, ignore safety culture initiatives and yet still improve it:

This paper is not about culture change per se; it is about risk management … Our contention is that Risk Management when properly embedded into an organisation, does change the culture in a positive manner, forever more.
Similarly, the Potential Human Error Audit approach developed by Simpson and his colleagues in the UK (see, for example, Simpson, 1994) also resulted in clear improvements in safety culture without directly addressing the issue (this approach is covered in more detail in Chapter 9).

In summary, there can be no doubt that a positive safety culture is important to the delivery of safety improvements (including minimising the impact of human error) and absolutely central to the sustainability of improvements in safety. However, there is equally no doubt that it remains a rather ephemeral and nebulous concept. By collating all the "evidence" presented above it is possible to outline a series of crucial elements which need to be taken into account in the pursuit of improved safety culture, as follows:

• Improvements must be driven from the top down.
• There must be corporate and senior management commitment to improvement. The corporate commitment must be tangible and apparent (that is, senior management time and budgetary provision must be clearly available to support the improvement process). Without such provisions the initiatives will rapidly fall into disrepute as "another management nine-day wonder".
• There must be positive steps to encourage and maintain active two-way communications. This cannot be just simple passive measures ("come and talk to me if you have problems"). Two-way communication has to be actively promoted with specific provisions for workforce involvement (this could include, for example, involving the workforce in risk assessment, ensuring the workforce have the opportunity to "sanity check" codes, rules and procedures etc. before they are introduced/imposed).
• Each and every step possible needs to be taken to become (and be seen to become) a positive learning organisation where health and safety is concerned. This entails, for example, ensuring that actions required in response to an accident/incident are (a) taken and (b) checked to see whether they are relevant to operations beyond that where the accident/incident occurred. In a similar vein, open reporting of near miss events is a crucial proactive element of becoming a positive learning organisation.
• A clear and obvious move away from blame culture (or even being perceived as such). This is particularly crucial where human error is concerned. The adoption of an approach to human error which encompasses the deliberate and systematic identification of precursors to error is a significant and necessary step towards the development of a no-blame culture.
While it is easy to present the above as separate "objectives", they do, in reality, interact. For example, it will be extremely difficult to encourage active and open reporting of near miss events unless there is good two-way communication combined with a no-blame approach. This complication, which arises from the highly interactive nature of the issues involved, is one good reason why it can be better to develop a programme of small initiatives with defined boundaries and a high likelihood of success. In this way, small successes will reinforce the perception of real change and, over time, aggregate in a way whereby an improved safety culture becomes an inevitable outcome.

Fleming (2001) encompassed the above in the development of a "safety culture maturity model" which tracks the steps in developing an improved safety culture in a simple and elegant way. The five levels in the maturity process he proposed were:

1. Emerging.
2. Managing.
3. Involving.
4. Co-operating.
5. Continually Improving.
However, what is arguably more interesting and important is that he briefly summarises the requirements for moving from one level to the next and, implicitly, emphasises the point that progression is cumulative (that is, each step must be taken before the next can have a chance of success). His points are paraphrased below:

• From Emerging to Managing: develop management commitment.
• From Managing to Involving: realise the importance of front-line staff in delivery and actively promote personal responsibility.
• From Involving to Co-operating: engage with all staff in programmes to promote co-operation to improvement.
• From Co-operating to Continually Improving: develop consistency and avoid complacency.
Despite all the problems of definition, and the difficulties of dealing with a wide range of complex and interacting high-level issues, there is clear evidence that targeted programmes of safety improvement on individual mines can deliver improvements in safety culture and that there are also more generic approaches that appear to have the ability to deliver safety culture improvements “through the back-door” (this aspect is discussed further in Chapter 9).
8.3 Organisational Maturity and High Reliability Organisations

The Fleming (2001) paper referred to above, which specifically considered safety culture maturity, reflects a wider literature in relation to organisational maturity. This issue has been increasingly discussed in many industries, including mining, although no agreement exists on precisely what the different levels of maturity are. Another model identifies five levels of process maturity for an organisation:

1. Initial: the starting point for use of a new safety process, or starting to use any safety process.
2. Repeatable: the process is used repeatedly.
3. Defined: the process is defined/confirmed as a standard process.
4. Managed/quantified: safety management and measurement takes place.
5. Optimising: safety management includes deliberate and continuous process optimisation/improvement.

Whichever model is used, the top maturity level links closely to the idea of a "high reliability organisation". In the mining domain that essentially means an organisation working in a high hazard area that has succeeded in avoiding catastrophes. As Weick and Sutcliffe (2007) and Hayes (2006) point out, high reliability organisations are characterised by the following five qualities, which keep them working well when facing unexpected situations (often termed mindfulness):
1. Preoccupation with failure, and detailed analysis of failure as essential for organisational learning.
2. Reluctance to simplify interpretations, and seeking a diversity of views on organisational issues.
3. Sensitivity to operations, that is, one or more individuals having an understanding of the state of the operational system, and the organisation placing emphasis on the understanding of operations.
4. Commitment to resilience, often by defence in depth, especially to eliminate hazards or prevent incidents. Resilient organisations are robust yet flexible, and have the ability to recover from irregular variations and disruptions of working conditions to prevent control being lost (Hollnagel et al., 2006).
5. Deference to expertise, especially having an organisation flexible enough to allow responsibility for decision making in emergency situations to be passed to experts close to the situation.

Most of the work with high reliability organisations has come from outside mining, with nuclear power and air traffic control being two examples where this sort of approach has been most commonly applied. Although very few examples could be found in the open literature of systematic attempts to incorporate organisational maturity models in mining operations, it is expected that this could be an area of particular benefit for mining.
Chapter 9
Managing Human Error Potential
While Chapters 3–8 indicate an extraordinarily wide range of mining operations and circumstances which can predispose (or have predisposed) human errors with clear safety implications, and some specific suggestions have been put forward to resolve the error potential, far greater value would arise if there were also approaches which could be used to actually manage the human error potential in mining operations. A number of possibilities to aid in the process of managing human error potential, either by incorporating a more detailed consideration of human error into existing processes and procedures, or in terms of specific human error identification and mitigation techniques, are discussed below.
9.1 Proactive Approaches

9.1.1 Risk assessment

Risk assessment first emerged as a central tool in proactive approaches to health and safety in the work of the Robens Committee (Robens 1972) and its subsequent incorporation in the UK's Health and Safety at Work Act 1974. Since then, risk assessment has become a central part of general and mining health and safety regulation across a wide spectrum of countries which include, for example:

• Australia (see, for example, Torlach, 1996)
• Canada (see, for example, Arnold, 1996)
• European Union (see, for example, Hunter, 1993)
• Poland (see, for example, Filipek and Brodzinksi, 1996)
• South Africa (see, for example, Bakker, 1996)
• Sweden (see, for example, Mellblom, 2000)
• USA (see, for example, Green, 1998).

In addition, risk assessment is at the core of Convention 176 of the International Labour Organisation (see, for example, Jennings 1996; Nkurlu 1998). While few, if any, of these regulatory requirements for pre-emptive risk assessment detail a specific risk assessment procedure, they all agree on the essential elements, which are:
• Identification of potential hazards.
• Allocation of some form of risk "score" for each hazard (based on risk being a function of the likelihood of the hazard materialising and the severity of the consequences).
• Specification of the existing risk control measures.
• A decision as to whether the current controls are adequate and, if not, the identification (and introduction) of additional controls sufficient for the risk to be considered adequately controlled.
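The second of these elements, the risk "score", is usually just a likelihood-by-consequence product. The short sketch below is purely illustrative; the five-point scales, example hazards and existing controls are assumptions rather than any prescribed scheme, and show only one way such scores can be generated and used to rank hazards.

# Illustrative qualitative risk scoring: risk = likelihood x consequence.
# The five-point scales, example hazards and controls are assumptions for
# demonstration only, not a prescribed scheme.

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
CONSEQUENCE = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "catastrophic": 5}

def risk_score(likelihood: str, consequence: str) -> int:
    """Combine the two qualitative ratings into a single score."""
    return LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]

# Hypothetical register entries: (hazard, likelihood, consequence, existing controls).
register = [
    ("Vehicle/pedestrian collision at transfer point", "possible", "major", ["segregation", "warning lights"]),
    ("Derailment on incorrectly set points", "unlikely", "moderate", ["points locking"]),
    ("Load shifting during haulage", "likely", "minor", ["load binding procedure"]),
]

# Rank hazards so that the highest scores are considered for additional controls first.
for hazard, lik, con, controls in sorted(register, key=lambda e: risk_score(e[1], e[2]), reverse=True):
    print(f"{risk_score(lik, con):>2}  {hazard}  (existing controls: {', '.join(controls)})")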
In addition, it is common for the risk "scores" to be used to provide a basis for prioritising action across the suite of risks which have to be addressed. No concrete guidance is given in relation to when a risk is adequately controlled; most commonly the phrase used in the European Framework Directive on Health and Safety – "risks should be controlled to a level as low as is reasonably practicable" (or variations thereon) – is the convention adopted.

These very broad requirements are entirely in line with the move away from the "old" position of dozens of small-scope, highly prescriptive regulations to a management framework to support self-regulation, as advocated by the Robens report (Robens, 1972). However, in an industry "brought up" on a history of highly prescriptive regulation, there is no doubt that the slightly intangible nature of what was actually required by way of the processes and procedures for an adequate risk assessment initially caused a good deal of confusion and debate.

The crucially important point about the value of risk assessment is that the process is more important than the procedure. More specifically, the major benefits from risk assessment arise from the detailed, systematic and comprehensive discipline it invokes in order to identify and control hazards, rather than from the adoption of any particular risk assessment method/technique (for more discussion of this point see, for example, Simpson, 1996c). For more information on the application of risk assessment methods, see the very comprehensive risk assessment guidelines in Joy and Griffiths (2007). Similarly, for a review of published risk management and assessment applications see Komljenovic and Kecojevic (2007).

One other crucial point, often left implicit in regulations, is that risk assessment has little value in isolation; it is only when risk assessment is used as the systematic start point of a process to define on-going risk management that it has any value for improving health and safety.

Having shown, therefore, the centrality of risk assessment to modern international mining regulations and emphasised the important elements thereof, the question arises: how can a risk assessment process be used to help manage human error potential? There are two general ways in which risk assessment can be used as a tool for helping manage human error potential:

1. By incorporating detailed consideration of human error in what might be called "conventional" risk assessment.
2. By the increased use of what can be described as Case for Safety risk assessments and, in the process, ensuring specific consideration of human error potential.

Both of the previous points are expanded below.

Human error potential in "conventional" risk assessment

"Conventional" risk assessment in the present context is the process outlined above applied routinely to tasks, operations or workplaces where the purpose is to identify hazards, assess risks and decide whether such risks are adequately controlled. Human error becomes an important consideration in this process at two points:

1. Human error can be the "trigger" for the hazard to materialise.
2. Human error can undermine the risk control measures assumed to be in place and effective.

The examples given in Chapters 3–8 cover both of these issues. For example:

• Selecting the wrong gear position due to directly opposite gear control movements as you switch from one vehicle to another is a clear example of how human error can trigger a significant risk of someone being run over (Example 2, Section 3.1).
• Failure to check brake line integrity on locos after shunting (Example 8, Section 6.1) is another example of an error which triggers a risk.
• Wearing a safety harness upside down (Example 10, Section 6.1) and inconsistent reporting of problems during safety inspections (Example 3, Section 7.1) are both situations where control measures which seemed reasonable and which were assumed to be in place were simply not effective.
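A minimal way of recording these two considerations in an existing register is sketched below. The field names and example entries are hypothetical, drawn loosely from the gear-selection and brake-check examples above; the structure is only one possible layout, not a prescribed format.

from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskEntry:
    """A conventional risk register entry extended with two human error prompts."""
    hazard: str
    controls: List[str]
    # Errors that could make the hazard materialise.
    error_triggers: List[str] = field(default_factory=list)
    # Errors that could undermine the listed controls.
    error_defeats_controls: List[str] = field(default_factory=list)

entry = RiskEntry(
    hazard="Person run over by vehicle at transfer point",
    controls=["pedestrian segregation", "pre-start brake checks"],
    error_triggers=["Wrong gear selected where control movements differ between vehicles"],
    error_defeats_controls=["Brake line integrity not checked after shunting"],
)

# Every item recorded under the two prompts becomes a candidate for additional
# controls and for routine monitoring/auditing.
for err in entry.error_triggers + entry.error_defeats_controls:
    print("Review controls/monitoring for:", err)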
A simple process of considering how human error could trigger a hazard to materialise and, therefore, a risk to exist, will provide a systematic basis on which to identify safety related human error potential and then, as a natural progression of the risk assessment process, allow appropriate controls to be identified and implemented. Similarly, by systematically considering human error potential when assessing the effectiveness of current control measures (or evaluating the utility of new ones), additional control measures can be put in place to ensure that all the measures taken can be reasonably assumed to be effective. This process should also then trigger issues to be checked in routine monitoring (both informal, such as supervisors “keeping an eye on what is done” and formal, for example, routine auditing). Ensuring that the potential for human error compromising safety is an integral part of risk assessment in the way outlined above is not a major step, nor will it
require significant additional resources, but it is a highly effective way of starting the process of pre-emptive human error management.

Incorporating human error in "Case for Safety" risk assessment

For many years there has been a requirement in the nuclear industry for plants to develop a Safety Case and have it approved by regulators in order to obtain a licence to operate. The Safety Case is, essentially, a very comprehensive quantified risk assessment based on the aggregated probability of an event (for example, the escape of radiation) against a defined benchmark probability. Once the licence has been granted, the Safety Case has to be formally reviewed at defined intervals. Such regulatory requirements are known in the UK, for example, as permissioning regimes in the sense that the licence gives the plant permission to operate under the Safety Management System and associated provisions that were proposed in the Safety Case. Although characteristic of the nuclear industry, such approaches are being adopted more widely (for example, in the oil and gas industry post Piper Alpha).

Although there are examples of quantified probabilistic risk assessment in the mining industry (see, for example, Tripathy and Rourkela, 1998), there are good practical reasons why formalised Safety Cases based on quantified probabilistic risk assessment could not be adopted in mining without considerable, concerted effort across the industry as a whole:

• The mining industry does not have detailed historical data of, for example, equipment failure rates on which to base meaningful probability estimates (see Rasche 2002 for more discussion of this point in a mining context).
• No regulatory body dealing with the mining industry has established a benchmark for "acceptable" probability (as is the case in the nuclear industry) against which to assess the derived probabilities; without such a target value the vast amount of effort required to generate acceptable data will be futile, for the probabilities generated have no intrinsic value without a criterion against which to judge them.
• The benefits to be gained from risk assessment are not, in any way, dependent on quantification. All the benefits to be derived can be derived by systematic qualitative assessment; even the need to prioritise across a range of risks is essentially a relative rather than an absolute judgement.
Among the conclusions to a discussion of the potential for the introduction of a Safety Case approach to mining risk management, Rasche (2001) states:

The Safety Case has its origins in industries that are relatively static and are less people intensive compared to the mining industry. The current approach … has worked well for these sectors and is a worthwhile starting point in the reduction of multiple fatalities arising from system, technology or engineering failures. However, the reduction of single fatality events probably requires a
less structured and formal process that enables mine employees to recognise and manage the inherent hazards of our dynamic industry.
However, even if there is little justification for the introduction of formalised Safety Cases based on quantified probabilistic risk assessment, the idea of using (qualitative) risk assessment to provide safety assurance for the introduction of new equipment, systems and operating practices has considerable potential merit. In order to distinguish such an approach from the formalised Safety Case approach, a simplified qualitative risk assessment approach to new equipment etc. could usefully be described as a Case for Safety approach.

Perhaps the earliest systematic example of a Case for Safety approach in mining (although it was not described as such) was the work undertaken by MineRisk and ACRIL in Australia (particularly the work of O'Beirne, Joy and their colleagues, see, for example, O'Beirne, 1992; Turner and Joy, 1996). The O'Beirne paper includes part of a risk review of a new machine which provides a good example of how this approach can be a significant aid to proactive safety improvement in general and to reducing the contribution of human error to safety risk. An extract from the table presented in O'Beirne (1992) is given in Table 9.1 (slight modifications have been made to improve clarity when used out of its original context).
Table 9.1    Examination of one element of new machine risk

Activity: Operate by radio remote
Scenario: Injury from wrong control action
Prob.: H    Consequence: People – H; Plant – L; Prodn – H    Risk: H

Controls                                          In Design   Action
Maintain similarity with other manual controls    No          Manufacturer action
Panel layout                                      No          Manufacturer action
Operator training                                 –           Mine action
Parallel multiple functions inhibited             Yes
Pre-start warning for critical functions          Yes
Dual switches for critical controls               Yes
Maintain design within accepted standard          Yes
Emergency stop                                    Yes
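A review of this kind lends itself to a very simple record from which outstanding design actions can be pulled out automatically. The rough sketch below re-expresses the Table 9.1 controls; it is illustrative only, and the "in design"/"action owner" fields mirror the table rather than any prescribed format.

# Controls from Table 9.1 as (control, already in the design?, action owner if not).
controls = [
    ("Maintain similarity with other manual controls", False, "Manufacturer"),
    ("Panel layout", False, "Manufacturer"),
    # In-design status not stated in the source for operator training;
    # treated here as an outstanding action for the mine.
    ("Operator training", False, "Mine"),
    ("Parallel multiple functions inhibited", True, None),
    ("Pre-start warning for critical functions", True, None),
    ("Dual switches for critical controls", True, None),
    ("Maintain design within accepted standard", True, None),
    ("Emergency stop", True, None),
]

# Pull out the controls still to be actioned, with their owners.
for name, in_design, owner in controls:
    if not in_design:
        print(f"Action required ({owner}): {name}")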
It is easy to see how the approach can be used to identify which controls the risk assessment team consider important and which have or have not been considered in the design. Actions can then be placed to ensure additional measures are taken during the design process. It is also evident from this example how what is essentially a slip/lapse error (that is, accidental activation) has been (correctly) addressed largely by design details which will reduce the likelihood of the error occurring. Equally, it can be readily appreciated how such an approach will reduce the opportunity for designed-in error potential.

Slightly behind the Australian initiatives in this area (but independent thereof), similar work was undertaken in the UK. The first such Case for Safety exercise involved the extension of locomotive manriding into a new district with a FSV/loco materials transhipment point just inby of the passenger boarding/alighting station. The colliery considered that developing a conventional two-rail system into the district would be prohibitively costly and had made a proposal to the inspectorate for an exemption (to the current regulations) to allow a single loco to breast the manriding cars on the inby journey. Breasting, which involves the loco pushing the manriding cars from behind, was considered highly dangerous and was, under the regulations, effectively illegal. The regulator refused to consider the proposal unless it could be shown that the system could be arranged in such a way that no additional risk (over and above those associated with conventional loco operations) would arise. The colliery worked in conjunction with the Ergonomics and Safety Management function of British Coal to undertake what was, in effect, a Case for Safety risk assessment. This work is described in detail in Simpson and Moult (1995) and outlined below.

A risk assessment team was established consisting of a mechanical engineer, the colliery safety engineer, an undermanager, a deputy and a locomotive driver, and facilitated by the Head of Ergonomics and Safety Management. The following three systems were compared:

1. Conventional operation with passbys at each end of the inby and outby journeys (however, as the roadway narrowed after a bend at the inby end and there was insufficient room for a passby, the track for the conventional method had to be stopped short of the bend and, consequently, shorter than the colliery had hoped/intended).
2. The breasting proposal. As this did not require any passbys, the track could extend beyond the inby bend, running as far as was required by the colliery.
3. A two-loco system whereby one loco hauls the inby journey with a second loco following. On completion of the inby journey the leading loco is decoupled and the following loco coupled to haul outby. While this obviates the need for an inby passby (and therefore allows the track to extend beyond
the bend) it does not obviate the need for an outby passby.

The first action of the risk assessment team was to draw up a detailed operational sequence with all the associated activities for each of these options. Once this was done, the team focused on identifying the main operational hazards, which were agreed as follows:

• Primary hazards:
  – Collision with object (including people) or derailment (for example, from unseen object on track) while on "normal" run (this was further subdivided into risks when breasting and risks while hauling).
  – Derailment due to attempts to negotiate incorrectly set points.
  – Machine failure arising from incomplete driver checks (for example, brake continuity) after coupling.
• Secondary hazard:
  – Unstable manriding cars due to failure to park safely.
Each of these hazards was then assessed against each track design option using conventional risk assessment techniques. The results of the assessment are shown in Table 9.2.

Table 9.2    High level risk assessment of three operational systems for loco manriding

Hazard                                        Risk Score
                                              Conventional    Two Loco    Breasting
                                              System          System      System
Collision/derailment when breasting           45              45          125
Collision/derailment when hauling             130             260         65
Collision between locos                       0               260         0
Derailment due to incorrectly set points      60              24          6
Failure to complete checks                    20              20          5
Manrider carriages not safely parked          12              12          3
Totals                                        267             621         204
Note: For full details of the risk scoring system see Simpson and Moult (1995). It should be noted that while all of these options involve some breasting activities, only the breasting method actually requires the loco to be breasted while men are on-board.
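Using the scores in Table 9.2, the comparison between the three options reduces to a simple aggregation, as the rough sketch below shows. It reproduces only the published figures; the scoring system behind them is described in Simpson and Moult (1995), and nothing here should be read as that method itself.

# Hazard scores per operating option, taken from Table 9.2.
scores = {
    "Collision/derailment when breasting": {"conventional": 45, "two-loco": 45, "breasting": 125},
    "Collision/derailment when hauling": {"conventional": 130, "two-loco": 260, "breasting": 65},
    "Collision between locos": {"conventional": 0, "two-loco": 260, "breasting": 0},
    "Derailment due to incorrectly set points": {"conventional": 60, "two-loco": 24, "breasting": 6},
    "Failure to complete checks": {"conventional": 20, "two-loco": 20, "breasting": 5},
    "Manrider carriages not safely parked": {"conventional": 12, "two-loco": 12, "breasting": 3},
}

# Aggregate each option's scores; lowest total first.
totals = {}
for per_option in scores.values():
    for option, score in per_option.items():
        totals[option] = totals.get(option, 0) + score

for option, total in sorted(totals.items(), key=lambda kv: kv[1]):
    print(f"{option}: {total}")  # breasting 204, conventional 267, two-loco 621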
Table 9.2 clearly shows that for the major operational hazards, both the conventional and breasting systems are significantly better than the two-loco system (not surprisingly, as the risks which apply to a single loco, for example, failure to check, are doubled). In addition, the breasting system, based on the operational risks, is less risky than the conventional system.

On completion of the initial high-level assessment, the team turned their attention to the identification of associated/additional risks with each mode of operation. These were identified as follows:

• Two-loco operation – maintenance activities will double over the two single-loco options; the use of two locos would exceed the colliery's garaging and battery charging facilities; and the colliery did not, at the time, have sufficient trained drivers/guards to man two additional locos. While the increased maintenance would introduce on-going additional risks, both the garaging/battery charging and training issues could be overcome. Despite this, the risk assessment team considered that they should be included, as there would be increased risk until the issues had been resolved.
• Conventional operation – the reduction of separation of shunting and boarding areas (due to the inability in this system to extend track beyond the curve); the considerably larger number of points, which also constituted an additional slip, trip, fall hazard as well as the operational implications; and the additional risks associated with moving the FSV/loco transhipment point outby of the bend, thereby increasing pedestrian risk from driving the FSVs around a blind bend.
• Breasting operation – the only additional concern identified related to a general concern about breasting the locos into the FSV transhipment area (although no specific risks were identified, the risk assessment team were all "uncomfortable" on this issue).
When these issues were added into the operational risk assessment, the conclusion was, as Simpson and Moult state, "there can be no doubt therefore on the basis of the above additional risk considerations that the knock-on effects are greater for both the conventional and two locomotive methods …".

The final risk assessment exercise undertaken was to consider how each system would work when dealing with the need to get an injured man on a stretcher out of the area. This was investigated by estimating the time required to complete the journey using each method (with the locos starting at the outby end of the system). The results were as follows:

• Two-loco operation – 51 minutes.
• Conventional operation – 46 minutes.
• Breasting operation – 36 minutes.
All the risk assessments undertaken suggested that not only was the breasting option no worse than the conventional method, a convincing argument could be made that it was potentially less risky. Despite this, the team recognised that additional control measures would be required in order to ensure safe operation, given that a number of assumptions had been implicit in the risk assessments. A detailed suite of control measures was identified covering training, design and track standards/warnings. Those proposed under the design heading are outlined below by way of example:

• A cab area should be created for the guard (in the leading manriding car when breasting) by building a partition to "isolate" him from nuisance from the passengers. This cab should be such that the guard can adopt a reasonably comfortable, forward facing, seated position.
• There should be no facility in this cab to accommodate anyone other than the guard.
• The cab must be designed with a windscreen giving forward visibility at least equal to that which can be obtained from the driver's cab.
• The cab should include a "deadman's pedal" sited to make it easy to use (thereby obviating the temptation to defeat it by spragging).
• Operation of the loco should be impossible unless the deadman's pedals in both the driver and guard cabs are depressed.
• The cab should include an emergency brake which can be operated independently of the braking provision in the driver's cab.
• The guard cab should include a klaxon/horn which can be operated independently of the provision in the driver's cab.
• There should be lighting at the guard cab end of the train which is at least equal to that provided at the driver's end.
• Signal communications should be provided between the driver and guard cabs (and consideration given to the possibility of providing verbal communication between them).
• It is not considered that the guard cab needs any instrumentation as he will have no control that is dependent on the information provided; however, it is considered essential that the instrumentation in the driver's cab should be illuminated.
The results of the risk assessment and the details of the additional controls were submitted to the regulator for further consideration of the possibility of an exemption from the regulations to allow the breasting system to be used. The exemption was granted, the first to have been granted in the UK on the basis of a pre-emptive risk assessment. The granting of an exemption in this way is in line with the Safety Case approach mentioned above in relation to the nuclear industry. Permission was being given
to operate the system based on the safety management provisions detailed in the submission (that is, in the Case for Safety prepared from the risk assessment).

The same approach to the development of a risk assessment based Case for Safety was subsequently applied to a number of other circumstances in which new equipment, systems or operations were envisaged. These included, for example:

• Adopting the travelling operator winding practices used during shaft inspections to the winding of limited numbers of men during weekends.
• The use of a new auto degassing box (equipment which has sensors built in and which will trigger a diversion of air flow once preset gas levels have been detected).
• The use of an underground grinding and welding station for roof support modifications during face to face transfer.
It is evident that the risk assessment Case for Safety approach can deliver pre-emptive safety assurance, both generally and specifically, in relation to human error potential. This is not only for new equipment, systems and operations generally, but even for new equipment, systems and operations which were previously considered to be of potentially very high risk.

9.1.2 Techniques to identify potential human error

Several techniques have been developed or modified to identify human error potential including, for example:

• Hazard and Operability Studies (HAZOPS), see for example, Crawley et al. (1999);
• Management Oversight and Risk Trees (MORT), see for example, Johnson (1975);
• TRIPOD, see for example, Wagenaar et al. (1980).
Although there are examples of some of these techniques being used in mining (for example, Fewell and Davies, 1992; and Fewell, 1993 briefly discuss the use of HAZOPs in what was then the Genmin Group in South Africa) they were primarily developed in and for process industries (for example, nuclear, chemicals, petrochemicals and oil and gas). In addition, while it is widely accepted that such techniques are effective at identifying human error potential (especially within the design process), they can be time consuming and costly (see, for example, Grech et al., 2008 in relation to MORT).

The "MIRM" (Minerals Industry Risk Management) framework referred to in Section 3.2 can be thought of as a cut-down version of MORT. It has two main elements, management system and work process factors, and is designed for an organisation to deliver "safe production" in the minerals industry context. Further details of the MIRM framework are given by MISHC (2005).

While it is true that the range of potential human error will be the same wherever humans are working, the nature of the tasks, environment and organisation is likely to affect, at least, the likelihood of error and the relative range of potential errors. For example, the majority of operational tasks in process industries involve monitoring the "activities" of automated/semi-automated computer controlled systems. While there are similar roles in mining (for example, surface control room staff working with automated coal clearance systems), mining continues to be a much more hands-on operation (even where remote control systems have been introduced the operator remains in control, albeit at a distance). In addition, mining remains, relative to the process industries, a labour intensive operation. Both of these differences suggest that the Rasche (2001) comment on the potential difficulties in transferring the Safety Case concept from the process industries to mining is equally applicable in terms of possible approaches to the identification (and mitigation) of safety-related human error.

One technique for the identification of human error potential, the Potential Human Error Audit, has been developed in (and specifically for) the mining industry. This was developed initially in the UK under British Coal and European Coal and Steel Community funding (see Simpson et al., 1994 and summarised in Simpson, 1994). The technique was also subsequently used in South African mining operations both under SIMRAC funding (see Simpson et al., 1996) and direct consultancy (see Simpson and Talbot, 1994).

Before describing the Potential Human Error Audit and examples of its use, it should be emphasised that the same point that was raised earlier in relation to risk assessment – that the process is more important than the procedure – is equally valid here. The Potential Human Error Audit is but one example of many possible approaches which could be developed. The approach chosen should provide a systematic means of identifying the potential human error in existing operations, the ability to classify the error type, the ability to identify the predisposing factors (and, where appropriate, collate these into latent failures) and the ability to link error type to the best route to solution. If this is so, then the actual means by which these are achieved is less important.

Potential Human Error Audit (PHEA)

The following list outlines the primary elements in undertaking a PHEA study:

1. Agree the boundaries of the study. Clearly, a study encompassing the full range of mine activities would be extraordinarily time consuming, and
theoretically not required. The boundaries should be set to encompass a wide range of activities within a discrete process which involves a degree of interdependence between the main elements/activities in the process (including whether the study is to be restricted to operations, maintenance or both). A good example in, say, an underground coal mine, would be supplies haulage from surface to face-end and return (using, for example, locos). (Assuming the boundaries are set in a way which enables a wide range of representative tasks to be considered, there should be sufficient information on which to derive a clear indication of latent failures for, by definition, they pervade the whole operation – limitations in, for example, training provision are unlikely to be restricted to one single training issue.)
2. List the primary elements in the system/process chosen – for example, in relation to supplies haulage this could include:
   – How are supplies ordered from the underground
   – How are supplies made-up on the surface
   – How are supplies loaded on the surface
   – How are supplies sent down the shaft
   – How are supplies marshalled into route-based loads at the pit bottom
   – What pre-start checks are required of the loco driver (for example, load checks, brake continuity after shunting etc.)
   – Inby journey
   – Unloading at the face end
   – Shunting (and associated checks)
   – Loading of salvage to be brought out (inc. load checks etc.)
   – Outby journey
   – Marshalling of loads for winding to the surface
   – Disposal of scrap on the surface
3. Make a note of any of these activities where insufficient consideration at one point could create problems downstream (for example, loads made up on the surface without any consideration of where they are to be delivered could considerably increase the amount of marshalling and shunting needed at the pit bottom).
4. Examine any recent accident reports related to these activities, together with related safety documentation such as risk assessments and procedures (Standard Instructions, Permits to Work etc.), to identify any areas which could be sensitive to human error.
5. Create a "candidate" list of potential safety critical human errors from 4 above.
6. Undertake an observation/discussion review of each of the primary elements in the listing in 3 above. This study should begin with the input–decision–output analysis outlined in Chapter 2 (at Section 2.4).
7. Create a "candidate" list of potentially safety critical human errors for each major element from 6 above.
8. Sanity check the potentially safety critical human error "candidate" lists (defined in 5 and 7 above) in a workshop with mine staff, including management, supervisors and workmen familiar with the system/process under study – reject/revise any where misunderstandings may have arisen during the initial data collection.
9. Classify each of the potentially safety critical human errors which remain on the "candidate" list using, for example, the slips/lapse–mistake–violation classification (or other classifications, if preferred).
10. Define the best route(s) to solution for each individual candidate error (based on error type).
11. Review each error to create an initial list of possible precursors to each error.
12. Review the initial error precursor list in a workshop with the same mine staff used in 8 above.
13. Collate common precursors to create a list of latent failures.
14. Establish working parties to address each latent failure and, where appropriate, individual errors/active failures. (It is possible, for example, that remedial action on an individual error will be quicker to implement than addressing the latent failure associated with that error and, in this way, provide some improvement in the period prior to the "ultimate solution".)

Although this may seem a considerable effort, the UK studies undertaken using PHEA were led by two "consultants" and took two weeks to reach point 14 above (that is, excluding the mine staff used in the two workshops, ten man-days of effort were required to complete the above programme of activities). The mine working parties ran for however long it took to establish improvements (ranging from one two-hour meeting on one human error active failure, to a morning meeting held once a fortnight for three months to address a safety inspection and reporting latent failure).

Human error active failures

Some of the human error active failures identified during the first UK study are outlined below (see Simpson et al., 1994 for more detailed consideration).

Misreading of loco displays
Some locos are fitted with glass windscreens which cause reflections when cap lamps are used. The driver's displays are not illuminated and cannot be read without a cap lamp. Drivers use the less bright setting on their cap lamp but it is an unsatisfactory compromise and drivers acknowledge that errors are made.
Type of error – slip/lapse
Preferred route to solution – design
Manufacturers should be required to provide illuminated displays on all new machines – possible retrofit improvements should be considered for existing machines.

Fitting incorrect AMOT thermal cut-off valves
There are three different AMOT thermal cut-off valves used in the mine (one for water-cooled brake retarders, one for traction motor resistors and one for compressors), all of which have different cut-out temperatures. All three are identical in size, appearance and method of fitting – the only means of distinguishing between them being a part number and temperature stamped, in very small letters, on an attached plate. This creates two potential errors, one on the surface in selecting the correct device to be sent underground and one at the loco garage in fitting an incorrect valve. Both of these errors had occurred. To worsen the situation, suppliers had been known to supply the valves with no plate attached.
Type of error – slip/lapse
Preferred route to solution – design
Manufacturers should be required to provide a much clearer form of labelling (suitable for use underground) to distinguish between the three, differently rated, valves.

Setting-off in a loco with the parking brake on
The handbrake linkages on the 50 hp and 90 hp locos can be very stiff. They do not release easily and are sometimes left partially on when the locos are driven off. A fire in which a loco was partially burnt out was believed to have been caused, at least in part, by this error.
Type of error – mistake
Preferred route to solution – design
The loco manufacturers should be consulted to see if there is anything which can be done to remove this problem at source; meanwhile, a check should be made with the loco fitters to see whether it is possible to introduce routine lubrication.

Using components beyond their replacement intervals
Delays in the supplies system (both ordering and delivery) and poor service from suppliers create a potential error in that, as parts due for replacement are not available, the locos are left to operate with the old part in place (the alternative is to take the loco out of commission until all necessary work has been completed – there is an obvious reluctance to do this despite the fact that it would be, strictly speaking, correct). Examples of this problem include:
• Klaxon horns for Pony locos ordered in February but not delivered until September.
• Fitters get one week's notice of the annual check on AMOT valves and new valves are ordered one week before. However, there is a four/five-week delivery period.
• AMOTs returned for re-calibration/repair have been sent back marked "calibrated and passed" but subsequently proved to be inoperable.
Type of error – violation
Preferred route to solution – organisation
The relationship between maintenance scheduling and ordering on the mine is clearly in need of serious review, to ensure much closer alignment. In addition, suppliers need to be consulted about improving the quality of service offered and meeting agreed delivery times. The maintenance department/purchasing department linkage needs strengthening to ensure that late delivery is chased early if the need to maintain a large spares inventory is to be avoided.

Unsafe pit bottom track
It is almost impossible to shunt a collection of wagons round the pit bottom loop and into the mine and then organise them into journey sets without using unauthorised methods of working. Examples of the problem include:
• Lengths of loose track spring open under heavy loads and have to be held in place by a person levering on a baulk of timber as the wagons roll past.
• Locos and laden wagons have to be levered over misaligned rail sections.
• Hammers, crow bars, wedges and girder clamps are used routinely to change and set points in positions where the changeover mechanism is broken.
Type of error – violation
Preferred route to solution – organisation and training
It is clear that these problems had been in existence for some time and that, although checks showed that most of them had been raised in safety inspections, the lack of action had resulted in them "falling off the radar". There is a clear need to reinforce the importance of reporting, the importance of quick action on identified (significant) concerns and clarity of the responsibility for action. A complete review of the management chain for actioning reported safety concerns is necessary.

Examples of the improvements made by the active failure working parties included:

• Improved marking for AMOT valves was agreed with the manufacturer/supplier and action was also agreed to reduce the number of valves returned incorrectly calibrated (or even not working).
• Penalty clauses in relation to late delivery were introduced into supplier contracts.
• Trials were undertaken successfully for the use of fibre optics to illuminate loco cab instruments, which were subsequently fitted on all the locos needing them.
• A warning light was incorporated in the loco cabs to indicate whether the parking brake was still on.
• A decision was taken to scrap the complete fleet of one type of rolling stock, which significantly improved the position in relation to errors associated with load binding and load security.
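The pattern running through these examples – classify the error, then let the error type steer the kind of remedy – can be summarised in a simple lookup. The sketch below only restates the classifications and preferred routes given in the examples above; it is illustrative, not a prescription.

# Active failures from the first UK PHEA study, with the classification and
# preferred route to solution given in the text (summary only, not a prescription).
active_failures = [
    ("Misreading of loco displays", "slip/lapse", "design"),
    ("Fitting incorrect AMOT thermal cut-off valves", "slip/lapse", "design"),
    ("Setting-off in a loco with the parking brake on", "mistake", "design"),
    ("Using components beyond their replacement intervals", "violation", "organisation"),
    ("Unsafe pit bottom track", "violation", "organisation and training"),
]

# Group by preferred route so that common remedial themes stand out.
by_route = {}
for failure, error_type, route in active_failures:
    by_route.setdefault(route, []).append((failure, error_type))

for route, items in by_route.items():
    print(f"Route to solution: {route}")
    for failure, error_type in items:
        print(f"  - {failure} ({error_type})")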
Latent failures

Among the latent failures identified in this study, three were particularly important (in that they each influenced a significant number of the human error active failures identified). These are set out below.

Latent failure – locomotive design
The results of the PHEA study emphasised that previous work to improve the human factors/ergonomics of underground loco design (Kingsley et al., 1980) appeared to have had little impact, as even the most modern locos at the mine had numerous ergonomics limitations. As a result of this, the revised version of the Ergonomics Principles on underground locos (Mason and Simpson, 1990b) was used for a series of discussions between British Coal and the main manufacturers of underground locos used in the UK. From these discussions British Coal's Chief Mechanical Engineer issued a listing of "minimum safety requirements for the design of underground locomotives" to be used as a basis for future design and future purchasing specifications. The success of the small number of retrofit improvements which were developed in response to human errors resulted in the establishment of the ECSC/British Coal research project on retrofit improvement to the ergonomics of underground locomotive design (subsequently published as Rushworth et al., 1993).

Latent failure – safety inspection and reporting procedure
Two improvement initiatives were developed. The first was a new training course devised by the PHEA team, mine staff and the Group Training Department. Examples of poor reporting from the PHEA study were used to initiate discussion of what action would be expected. This approach proved particularly useful in emphasising the need for detailed but concise reporting, as it showed very clearly how imprecise reports could be misinterpreted and how important points could be missed. To emphasise the importance placed on this training, the first course had the mine manager and deputy manager among the trainees. The course was considered so valuable that it was subsequently delivered at all other mines in the regional group.

The second initiative related to improving the reliability of follow-up action. After considerable discussion among the Working Group, it was decided to computerise action tracking and monitoring as an addition to a safety computer
program already in use at the mine. The reports were entered into the system, which then generated each day a three-day and a seven-day log of work that was still outstanding; these logs were circulated to the department heads at the mine. The reports could only be removed from the system by being signed off by a member of the senior management whose sphere of operation covered each issue. To ensure accountability, the logs of work still outstanding became an agenda item at the manager's and deputy manager's meetings. As with the training course above, this system was subsequently made available to all other mines in the group.

Latent failure – safety attitudes and mine organisation
In order to gain a clearer focus on the many issues associated with safety attitudes on the mine, the Working Party decided to design and issue a questionnaire to all mine staff to elicit their specific concerns. The idea was then to target the issues raised and, thereby, to be seen to respond directly rather than addressing more general issues. The three major concerns were: the standard of safety training (which was felt to have become routine), the issue of track standards and the level of accident investigation. In addition to the safety inspection training mentioned above, a full review of the then current training provision was undertaken and a priority listing for new training initiatives drawn up and implemented. In relation to the track initiatives mentioned above (in terms of active failures), the mine also created (on the suggestion of the Working Party) a single team dealing with track laying, track inspection and track repair (previously, they had all operated in functionally independent departments). A new accident investigation and prevention team was set up, chaired by the deputy manager, which was charged with investigating accidents in sufficient detail as to identify root causes (including the precursors of human error) and examining where else in the mine similar accidents could occur so that preventative action was taken wherever it was likely to be beneficial.

Prior to the PHEA study, the mine accident rate (over three days) was 35.8 per 100,000 manshifts. In the year following the study and the implementation of the action arising from the Working Groups, the three-day-plus accident rate had reduced to 8.4 per 100,000. This represented an 80 per cent improvement and took the mine from the poorest of the 15 mines in the regional group's accident league to the best, with the fewest accidents not only in their group but in British Coal as a whole. In addition to the improvement in the three-day-plus accident rate, the first aid accident incidence dropped, as did the sickness absence (even when accident-related absence was excluded).

Although the improvements gained at the other five UK coal mines which instigated PHEA studies were not as spectacular as those of the first mine, the three-day-plus accident rates dropped (over a 12-month period) by an average of over 20 per cent. Similar improvements were also recorded in relation to both first aid accidents and sickness absence rates.

The PHEA studies undertaken in South African mines were more associated with research than with intervention, forming part of the SIMRAC research study on the
identification of the causes of transport and tramming accidents (in mines other than coal, gold and platinum). Four mines were studied: an underground chrome mine, a surface copper mine, an underground diamond mine and a surface iron ore mine. A total of 207 individual human errors/active failures were identified across the mines (several of which have been presented earlier as examples in Chapters 3–8). Tables 9.3 and 9.4 summarise the main areas where potential active failures were identified. The latent failures identified were common across the mines studied (albeit to varying degrees) and encompassed all of the influential levels described in Figure 2.1. Table 9.5 summarises the latent failures identified across the mines in the study.

Table 9.3    Classification of activities where potential active failures were identified – underground mining operations

Activity                                                                   No. PAFs
Transfer of ore to tips by locomotive haulage                              31
Transfer of supplies using locomotive haulage                              11
Transfer of ore to tips using LHDs                                         21
General movement of supplies and personnel using a variety of vehicles     32
Table 9.4    Classification of activities where potential active failures were identified – surface mining operations

Activity                                                                                     No. PAFs
Loading activities involving shovels, dozers, loaders, haul trucks etc.                      22
Transfer of ore to tips using haul trucks                                                    44
Transport and tramming operations involving other support/service vehicles                  40
Transfer of mineral/supplies between mine and main line railway using locomotive haulage     8
Table 9.5    Summary of the latent failures identified in the SIMRAC transport and tramming study
Design
Specific indicators included:
• Operational difficulties due to limitations in the design of transport equipment (limitations which are likely to predispose human error).
• Similar limitations in the design of equipment/machinery, furnishings and infrastructure/layout relating to transport and tramming operations.
• Limitations in (or created by) modifications made by the mine to equipment.
• Limitations in the fundamental design of systems introduced on the mine.

Training
Specific indicators included:
• Inadequate basic training.
• Inadequate refresher training.
• Deficiencies in the content of training modules/course material.
• Lack of specific training on hazards/risks.
• Failure to identify training needs.
• Failure to "train the trainers" (particularly the supervisors who play a significant role in providing training on Standard Instructions, Procedures etc.).
• Failure to evaluate the effectiveness of the training provided.

Rules and Procedures
Specific indicators included:
• Failures in the formulation of new rules and procedures.
• Failure to review and update existing rules and procedures.
• Critical situations and activities that are not adequately covered by rules and procedures, or rules which lack vital information (omissions).
• Situations and activities covered by sets of safety rules that are in conflict with one another (conflicts).
• Insufficient information provided to enable the requirements of a rule to be fully understood and reliably complied with (vagueness).
• Situations in which the rules do not accurately reflect and address the hazards and practical difficulties associated with the tasks for which they were formulated (impractical).
• Failures in the reliable communication of new rules and procedures to the workforce.
Attitudes to Rules and Procedures
Specific indicators included:
• Lax attitudes influenced by lack of clarity in the aims and objectives of the rules.
• Lack of appreciation of the actual hazards and risks to which the rules relate.
• Lack of commitment from the workforce to comply with the rules.
• Lack of commitment by management to demonstrate compliance with rules.
• Inadequate monitoring and detection of rule violations.
• Failure in the style, nature and consistency of supervisory attitudes to rules.
• Attitudes influenced by limitations in the design of plant/equipment.
• Attitudes influenced by limitations in physical/environmental conditions.
• Attitudes influenced by limitations in the organisation of work.

Organisation and Working Methods
Specific indicators included:
• Failure to organise and plan tasks and operations in the safest way feasible.
• Failure to provide adequate resources in terms of manpower and/or appropriate equipment.
• Inconsistencies in roles and responsibilities leading to inadequate planning.
• Failure to provide adequate safety equipment.
• Failure to control the influence of poorly maintained plant and equipment on the safety of transport and tramming operations.
• Failure to control the influence of poorly maintained environmental conditions on the safety of transport and tramming operations.

Organising for Safety
Specific indicators included:
• Prevalence of unsafe working practices/conditions due to limited frequency and duration of safety inspections.
• Unsafe working practices/conditions continue through lack of accountability and action to follow-up on inspections.
• Limitations associated with the operation of safety representative initiatives.
• Failure to use near-miss reports to forewarn of potential accidents.
Attitudes to Safety
Specific indicators included:
• Failure by management to take appropriate action in situations where unsafe conditions and methods of working are known to exist.
• Managers and supervisors fail to obey rules and consequently adopt unsafe working practices in front of the workforce.
• Workers appear to be encouraged to break rules and work unsafely.
• Inadequate monitoring and disciplining of unsafe behaviour.
• Poor attitudes to safety by the workforce (arising primarily from poor hazard awareness and risk perception).
• Poor attitudes to compliance with Standards and Procedures.
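Latent failures of this kind emerge when the same predisposing factor keeps recurring behind individually identified active failures. A minimal sketch of that collation step is given below; the tagged entries and the cut-off value are hypothetical and purely for illustration, although the category names follow Table 9.5.

from collections import Counter

# Each identified active failure tagged with its predisposing factor categories
# (hypothetical tags; the category names follow Table 9.5).
predisposing_factors = [
    ("Misreading of loco displays", ["design"]),
    ("Fitting incorrect AMOT valves", ["design", "organisation and working methods"]),
    ("Using components beyond replacement intervals", ["organisation and working methods"]),
    ("Unsafe pit bottom track", ["organising for safety", "attitudes to safety"]),
    ("Incomplete brake checks after shunting", ["training", "attitudes to rules and procedures"]),
]

# A factor that recurs behind several independently identified errors is a
# candidate latent failure.
counts = Counter(factor for _, factors in predisposing_factors for factor in factors)
THRESHOLD = 2  # assumed cut-off, purely for illustration
candidate_latent_failures = [factor for factor, n in counts.items() if n >= THRESHOLD]
print("Candidate latent failures:", candidate_latent_failures)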
There is clear evidence therefore that the Potential Human Error Audit approach, whether used in an interventionist or research role, can:

• identify safety-related potential human errors (that is, potential active failures) from within current operations;
• identify those elements of operation/organisation which predispose the human errors identified;
• identify latent failures in terms of common predisposing factors; and
• based on the UK work, deliver (when working with mine staff on targeted active and latent failures) substantial safety improvements.
From the South African work in particular, it is also interesting to note that it is possible to use the approach without significant human factors experience. The research team which undertook the South African study consisted of two members who were human factors professionals (with considerable experience), two engineers who, although not formally trained in human factors, had been working on human factors issues for some time, and two mining engineers for whom this project was their first involvement in human factors work. The last two, after one week of training (based on the previous UK work), made a considerable contribution to the work at all levels, from identification of potential human errors through to the identification of latent failures and the generic approaches to improvement.

While the specific Potential Human Error Audit approach described above has clearly proven value, it should be emphasised again that in this circumstance the process is more important than the procedure and, as a result, there is little doubt that carefully developed alternative approaches which address the same issues in an equally systematic way would also be likely to generate similar benefits.

There is another set of approaches, the Behaviourally Based Safety (BBS) techniques, which should be mentioned for, although they do not address human error potential
directly, they are focused specifically on promotion of safe behaviour and should therefore have an impact on safety-related human error potential. The BBS approach has been very widely, and often successfully, used in industry in general as well as mining. The approach emerged during the early 1980s and was popularised mainly by the work of Krause and his colleagues (see, for example, Krause et al., 1990). The starting point of the BBS approach is that, as "In the majority of cases – from 80–95 per cent – accidents are caused by unsafe behaviour" (Krause et al., 1990), it follows that addressing unsafe behaviour should be an efficient approach to safety improvement. There are three cardinal principles of the BBS approach:

1. There is a critical mass in terms of the ratio of the percentage of unsafe acts to safe acts – once the latter exceeds the former, overall safety improvements will occur.
2. Both the antecedents of behaviour and its consequences influence behaviour, both positively and negatively (hence both reward and punishment can be used to shape safety behaviour).
3. BBS is an inclusive process and it requires the involvement of all levels in the organisation in a long-term, continual improvement process.

Despite these concise and clear primary principles, Krause (2000), in an extremely forthright paper, suggests that, in some ways, BBS has become a victim of its own success:

BBS now means many different things to different people. This ambiguity or fuzziness of the term is so far advanced that "BBS" has lost its power to describe anything clearly. Even a casual search of the literature turns up contradictory uses of the term. For example, some organisations call what they are doing as BBS even though they admit they have no involvement of shopfloor personnel in the effort, no operational definitions of critical behaviours, and no continuous improvement mechanism. What they do have is the traditional supervisor audit programme focused on disciplinary action.
This would chime well with a number of managers who have faced the kind of audit-based approach that BBS has "slipped" into in many organisations, one of whom, from a construction company using "BBS", stated:

I've just had a highly critical safety audit report clobbering me for only 95% compliance with wearing hard hats – nobody thinks or cares about how much effort it took to get it to that level!
Such misuse of what is, in many senses, a sound approach with many elements in common with the human error approaches advocated in the rest of this book, is undoubtedly colouring the perception of its utility. Although there are several points of agreement between the BBS approach and the human error approach advocated above, there are two important differences.

Firstly, behaviour is entirely context dependent (as Krause and his co-workers would readily admit), yet many of the audit-focused approaches to BBS operate on the basis of what are, essentially, generic behaviour patterns. In this misuse of the original concepts, BBS becomes at best a blunderbuss approach – you fire in vaguely the right direction in the hope that some of the shot will hit something interesting. A human error-based approach has a much more focused attack in which the context dependency is built in (you cannot consider an error in the abstract).

Secondly, BBS is often used in a way which effectively separates behaviour from other critical error-producing elements of the operation such as the equipment design, the working environment etc. A much more direct linkage is used in the human error approach in that poor equipment design, poor working conditions, poor safety procedures etc. are seen as direct precursors to or causes of unsafe behaviour. While this distinction may appear somewhat pedantic, it is both real and meaningful, for it shapes the nature of the action required, especially when the nature of the error is taken into account (for example, slip/lapse errors will only be avoided by design changes).
9.2 Reactive Approaches

9.2.1 Accident/incident investigation

The primary purpose of any accident investigation is to learn sufficient about the circumstances leading to the accident to ensure that the same (or similar) accidents do not occur again. It follows, therefore, bearing in mind Rimmington's comment that "human error is a major contributory cause in 90 per cent of accidents" (Rimmington, 1989), that all accident investigations should be able to identify any contributory human error in the accident aetiology. However, as the previous chapters have shown, this is not, of itself, sufficient. All too frequently accident investigations stop once human error has been identified.

Consider, for example, a situation where an accident happens and the investigation shows that Fred made a mistake which was the immediate cause of the accident. If the investigation stops at this point there are, in reality, very few options available by way of remedial measures. You can either:

• Decide that as Fred was "directly responsible" he should be disciplined (or even sacked). However, while sacking Fred may stop him making the same mistake again, it is unlikely to stop anyone else (in fact, dependent on the error it may have no impact at all, particularly if it was a slip/lapse error).
• You can give Fred the benefit of the doubt and decide he needs retraining. However, in most circumstances, this retraining will involve simply revisiting the training previously given (which the accident has shown to be, at least in part, inadequate). Even if you consider the need for different training, you have not learnt anything on which to base a new training needs analysis (let alone define a new training course).
• You can give him the benefit of the doubt, have a quiet word and also get supervisors/managers to keep a closer eye on similar situations arising. However, both of these actions (that is, sensitising both Fred and his supervisors and managers) have clearly already failed.
While the above example is hypothetical, consider the quote below, which is taken from an actual accident report dealing with a man who broke his arm when he tripped over a Continuous Miner cable which was, at the time, buried in debris:

Benny instructed to increase his awareness of his surroundings. As he is new to the mine his confidence and awareness will increase as his experience increase.
The implication is clearly that Benny was "at fault" and that he will be less likely to make such a mistake in the future. However, how can you meaningfully instruct anyone to increase their awareness and, in particular, how can anyone be expected to "increase his awareness" of something which is buried from view? While there is no doubt that Benny's awareness of his general surroundings will increase with experience, it does presume that he will live long enough to benefit from that experience!

Stopping an accident investigation at the point where human error has been identified will achieve nothing. The question arises, therefore: how can consideration of human error causation be systematically incorporated into accident investigations? The most crucial step is that, in addition to identifying WHAT happened, it is absolutely crucial to identify WHY it happened. Only by doing this, as a matter of routine, will it be possible to go beyond the point where the identification of a human error is the end point of the investigation. (While this may seem obvious, there are many circumstances where such an approach is, albeit unwittingly, imposed. For example, some accident reporting formats effectively constrain the investigation by limiting what is covered to what is required by the report structure.) All accidents are unique; even accidents which have common root causes (latent failures) and which occur in similar circumstances will have different elements. It is impossible, therefore, to create a universal "paint-by-numbers" approach to accident investigation. It is, however, possible to identify the crucial elements/steps in a good accident investigation process. One such approach, with particular reference to incorporating a detailed consideration of the role of human error, is outlined below.
The initial step required is to create a detailed event sequence (or time line) of WHAT happened leading up to the accident. For each WHAT happened event the first WHY question needs to be addressed. This could be classified, for example, as either equipment/system failure or human failure (or both). In effect this WHY statement then becomes a WHAT happened statement to trigger the next WHY question (that is, why did the human failing occur?). For each human failing identified, the WHY answer at this level is likely to indicate an active failure, and the type of error should be categorised (for example, slip/lapse, mistake or violation), as this will be of particular value when returning at the end of the analysis to identify possible improvement measures. The input–decision–output categorisation can also be used at this point as a framework for ensuring systematic coverage of the range of potential errors/failings (as above, this can be particularly useful in providing a focus for remedial measures). The WHAT–WHY process should continue until a point is reached where there is no reason to believe that there were any additional predisposing factors. Common predisposing factors identified across the analysis can reasonably be assumed to represent latent failures and collated as such.

On completion of the analysis, an approach such as that outlined above will generate a considerable understanding of the role of human error, covering the following issues:

• The active failures (specific errors), the type of each error and, from the classifications used to describe the error, the most appropriate route to improvement.
• The factors which predispose each error (which will provide a significant focus for the identification of remedial measures).
• Common predisposing factors (latent failures) which will need addressing at the mine-wide level, together with a clear indication of the level(s) at which the latent failure is having its effect (as described in Figure 2.1) and, therefore, a clear focus for the level at which remedial measures should be addressed.
• Detailed information from both the active and latent failures identified which can be used to consider whether such errors could impact elsewhere in the mine (or indeed the company), to ensure that lessons are learnt wherever they are relevant.
Although the process outlined above has focused on detailed analysis of the human failings identified, similar consideration should be given (where appropriate) to failings initially defined as equipment/systems, as the answer to WHAT failure occurred could involve a human failing (for example, incorrect/inadequate maintenance).
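As a minimal sketch of the WHAT–WHY structure just described, the fragment below records active failures with their error type and deeper WHY answers, and then collates the factors that recur across several failures as candidate latent failures. The specific entries, class and field names are illustrative only; they are not a prescribed investigation format.

```python
# Minimal sketch of the WHAT-WHY structure described above. The events, error
# classifications and predisposing factors are illustrative only; the collation
# step simply treats factors shared by more than one active failure as candidate
# latent failures, as suggested in the text.
from collections import Counter
from dataclasses import dataclass, field

ERROR_TYPES = {"slip/lapse", "mistake", "violation"}

@dataclass
class ActiveFailure:
    what: str                 # WHAT happened (one event in the time line)
    why: str                  # WHY it happened at this level
    error_type: str           # slip/lapse, mistake or violation
    predisposing: list[str] = field(default_factory=list)   # deeper WHY answers

    def __post_init__(self):
        if self.error_type not in ERROR_TYPES:
            raise ValueError(f"unknown error type: {self.error_type}")

def candidate_latent_failures(failures: list[ActiveFailure], min_count: int = 2) -> list[str]:
    """Predisposing factors that recur across several active failures."""
    counts = Counter(factor for fail in failures for factor in fail.predisposing)
    return [factor for factor, n in counts.items() if n >= min_count]

if __name__ == "__main__":
    failures = [
        ActiveFailure("Arrestor left defeated by first driver",
                      "Driver assumed the following train was entering the same district",
                      "violation",
                      ["breaches of arrestor rules go largely unwitnessed",
                       "lax compliance culture around transport rules"]),
        ActiveFailure("Arrestor left defeated by second driver",
                      "Driver drove straight past the defeated arrestor",
                      "violation",
                      ["lax compliance culture around transport rules"]),
        ActiveFailure("Untrained man deployed as guard",
                      "Official confused two similar names at deployment",
                      "slip/lapse",
                      ["no cue on the deployment board distinguishing trained guards"]),
    ]
    for factor in candidate_latent_failures(failures):
        print("Candidate latent failure:", factor)
```

The point of such a structure is simply to force the WHY questions to be recorded alongside the WHAT, so that common predisposing factors cannot be lost in the narrative of an individual report.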
As mentioned above in relation to the pre-emptive approaches to human error management, the additional analysis required is not fundamentally different from that which is often already undertaken. All that is necessary is a framework which ensures that the factors which predispose human error are investigated systematically and comprehensively. Once again, the process is more important than the procedure. Despite the above assurance that the process outlined is not significantly different, in principle, from that already adopted in many organisations, it will undoubtedly require more time and more systematic approaches to be used routinely. This, in itself, may be a barrier to adopting such an approach; however, the fact that it is fundamentally crucial to routinely identify human error predisposing factors (particularly those which are common and therefore likely to represent latent failures) can be seen clearly by revisiting some of the examples of human error potential given in Chapters 3–8. Consider, for example:

• Section 3.1, Example 2 – An accident arising from the incorrect selection of forward and reverse gears could clearly be seen simply as a human error on the part of the driver. However, unless the inconsistency of gear selection action across the two vehicles is recognised and addressed, the potential for repeat (or similar) accidents will remain, regardless of what action is taken "against" the driver seen to be "at fault".
• Section 3.1, Example 4 – Despite the many and various overspeeding risk controls introduced by the mine, the lack of any speedometers on the vehicles is such a fundamental failure in ensuring that the drivers have the correct information on which to drive safely that the accident risk will never be resolved until the speedometer issue is addressed.
• Section 3.1, Example 5 – As long as loco drivers are expected to use a variety of locos with fundamentally differing layouts for skid correction controls, the potential for accidents arising from a failure to correctly control a runaway will remain, even if the extreme action of sacking any driver who was seen to fail to control a runaway was taken.
• Section 3.1, Example 6 – Criticising drivers for tail-gating when they have to be within the safe braking distance in order to see the brake lights of the vehicle in front (given their size and the fact that they are, due to their position, likely to be covered in dirt) will achieve absolutely nothing in terms of reducing the collision risk which is inevitable from tail-gating.
• Section 3.1, Example 8 – The degree of restricted vision from the driver's position on both development machines and FSVs clearly increases the risk to workers close to the machines, particularly in dark environments. This is because the visual information necessary to ensure safe driving is simply not available to the driver in this situation. Retraining or criticising drivers for unsafe driving (or pedestrians for poor personal positioning) will achieve little when the predisposing factor is an integral design failure in the vehicle being used.
• Section 4.1, Example 1 – While it is undoubtedly true that the man killed was in an unsafe position and one designated as such, if the over-attenuation of the hearing defenders being worn had not been identified, the potential for similar accidents involving the failure to hear the fork-lift reversing warning would have remained and other men would have continued to be at risk.
• Section 5.1, Example 3 – The risk of injury when clearing ore chutes while standing in a position with unstable footing is clear and was recognised in the safety procedures, so it would seem "reasonable" to draw a conclusion that any accident which occurred could be ascribed to human error/irresponsible behaviour in terms of contravention of the rules. Or at least it would seem reasonable until you examine further and discover that, despite the risk having been recognised, no platforms for the men to work from had been built at any ore chutes in the mine. On this basis, not only would such a conclusion be unreasonable, the injury risk will continue until such platforms have been built.
• Section 5.1, Example 9 – Safety rules and procedures have only one purpose in life – to promote safe behaviour. Despite this, in this example, a significant proportion of managers and supervisors recognised that some of the rules and procedures were difficult to understand, impractical, too time consuming and often ignored. However, nothing had been done to correct the recognised difficulties. While this remains the case, the protection the rules and procedures are expected to provide will simply be non-existent and the risk of accidents will remain, regardless of what action is taken for individual breaches of the rules and procedures.
• Section 6.1, Example 3 – A driver involved in an accident as a result of leaning out of a cab would appear to be "guilty" of a clear case of irresponsible behaviour. However, when further investigation reveals (as it did in both of the cases discussed) that the behaviour is predisposed by restrictions on vision which necessitate leaning out in order to get any effective view, then unless the restricted vision or the concerns which tempt drivers to lean out are resolved, the risk will remain.
• Section 6.1, Example 9 – The fact that several men had been killed over a number of years while working in bunkers (silos) without wearing fall-arrest harnesses would, once again, appear simply irresponsible. However, problems in the design and fit of the harnesses and the design of the bunker tops, in terms of both the access points and the position of the anchor points for the harnesses, meant, once again, that unless these predisposing factors were addressed the risk of further injury would remain high.
• Section 7.1, Example 5 – Unless there was a significant cultural change in the attitudes to compliance with support rules (which should be considered inviolate), the probability of repeat accidents would remain high.
It is clear, therefore, that any accident investigation which assigns any causal element to "human error" but which does not identify what predisposed the error behaviour will lead to remedial actions which are, at best, limited and, at worst, meaningless. Furthermore, the risk of repeat or similar accidents will usually remain.

9.2.2 Refining the risk assessment process from accident investigations

An accident represents, almost by definition, a failure of the risk assessment process. If the hazards had all been identified, the risks adequately assessed and the controls in place had been effective, comprehensive and routinely used, then the accident would not have occurred. Risk assessment is now central to the regulatory systems in most mining operations and should therefore be an integral part of the health and safety management systems in place on such mines. As such, a feedback loop between accident investigation and risk assessment is essential if repeat or similar accidents are to be avoided. While it is not uncommon to see situations where there is an established link between the outcome of an accident investigation and the specific risk assessments which apply to the tasks directly associated with the accident, it is much less common to see a direct and deliberate link between an accident investigation and the effectiveness of the risk assessment process. Figure 9.1 outlines a process which enables accident investigation information to be used as feedback to both specific risk assessments and the risk assessment process. Each of these elements is set out below.

Figure 9.1 Outline procedure to ensure feedback from accident investigation to the risk assessment process (the flowchart, sections A–D, links the accident investigation to a review of the risk assessment process with its six primary outputs, and on to risk re-evaluation, action planning, monitoring, analogous risks and other sites)

Accident investigation

Hazard identification
The first critical element in an accident investigation process is to identify the potential hazard that has been realised. A hazard is defined as anything with the potential to cause harm. In addition to identifying the basic hazard, it is also important to identify hazard triggers.

Identification of controls in place
Controls are measures designed to reduce the likelihood of a hazardous event occurring or the severity of harm that may arise if it does occur. Any accident can be regarded as a failure to control a hazard or hazards. Failure to effectively control a hazard can be due either to a lack of control measures or to a failure of the controls that were in place. The provision of controls and their effectiveness are therefore essential issues that need to be assessed in the accident investigation process.

Identification of control shortcomings
Having established what controls were in place at the time of the accident, the next step is to determine which of these controls, if any, failed. There is no mechanistic approach that can be followed to determine which controls failed; the answers will have to be obtained by examining the scene of the accident and statements made by witnesses etc. in order to identify the obvious and underlying causes. These are analogous to control shortcomings.

Risk management action
The completed accident investigation phase will have identified the hazard, the hazardous event (description of the accident), the controls in place and their shortcomings. This information can be fed directly into the existing health and safety risk management system. This will enable the risks to be re-evaluated and an action plan created to improve or introduce additional controls, and hence reduce the likelihood of similar accidents recurring.
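To make the hand-over concrete, the sketch below shows one possible shape for the information the investigation phase passes into the risk management system, as just listed. The class, field names and example entries are invented for illustration; they are not taken from any published incident-reporting format.

```python
# Minimal sketch of the information the accident investigation phase passes to
# the risk management system, as described above. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class AccidentInvestigation:
    hazardous_event: str                 # description of the accident
    hazard: str                          # anything with the potential to cause harm
    hazard_triggers: list[str] = field(default_factory=list)
    controls_in_place: list[str] = field(default_factory=list)
    control_shortcomings: list[str] = field(default_factory=list)  # which controls failed, and how

    def feed_to_risk_management(self) -> dict:
        """Package the findings for re-evaluation of the risk and action planning."""
        return {
            "hazard": self.hazard,
            "event": self.hazardous_event,
            "controls_reviewed": self.controls_in_place,
            "shortcomings_to_address": self.control_shortcomings,
        }

if __name__ == "__main__":
    investigation = AccidentInvestigation(
        hazardous_event="Worker tripped over buried trailing cable",
        hazard="Trailing cable concealed by debris on the travelling road",
        hazard_triggers=["debris allowed to accumulate"],
        controls_in_place=["housekeeping standard", "cable hangers"],
        control_shortcomings=["housekeeping standard not enforced in this heading"],
    )
    print(investigation.feed_to_risk_management())
```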
Identification of risk assessment system shortfalls

It has been shown that almost every accident that occurs reflects a failure, in some degree, of the Safety Management System. Where risk assessment/risk management forms the basis of this system, it follows that the accident has resulted from corresponding shortfalls in the risk assessment/risk management process. In the event of an accident it is therefore important that the risk management process is examined to determine the nature of these shortfalls. The outcomes/results of the accident investigation can be used for this purpose. The process used in this examination is shown in section B of Figure 9.1, which has six primary outputs. This process is described below.

The first issue that needs to be addressed is whether the hazardous event had previously been subjected to a risk assessment. If a risk assessment had not been undertaken, the reasons for this omission need to be determined in order to initiate any necessary corrective action. If a risk assessment had been undertaken, the reasons why the assessment failed to prevent or control the hazard must be established in order to identify and correct any failures in the risk assessment process. The six primary outputs are expanded below.

Risk assessment not undertaken (output 1)

There are numerous reasons why a risk assessment may not previously have been undertaken, but basically these reasons are likely to result from an oversight or because the assessment had not been attempted for some particular reason (see Figure 9.2). The term "Oversight" encompasses a range of potential shortfalls. For example, the boundary of the risk assessment may not have been clearly defined, with the result that the operation was excluded from the assessment process. Similarly, while the boundary may have been defined, the assessment team could simply have failed to complete the assessment.
Figure 9.2 Output 1 from Figure 9.1 (shortfall categories: Oversight; Not Attempted – Lack of Resources, Priorities/Planned Actions)
The term "Not Attempted" encompasses shortfalls in the context of lack of resources and priorities/planned actions. "Lack of Resources" indicates a situation where the assessment has not been completed because of, for example, a shortage of manpower and/or time. "Priorities/Planned Actions" indicates a situation where the assessment has not been completed because of a failure to set an appropriate level of priority for the assessment to be carried out. Similarly, while the correct priorities might have been set, the accident could have happened before the planned completion date of the assessment.

Risk assessment undertaken (outputs 2–6)

There could be numerous reasons why the accident occurred even though a risk assessment had been carried out. In order to identify the relevant shortfalls, it is necessary to consider the conditions associated with outputs 2–6 in section B of Figure 9.1 individually.

Hazard not identified (outputs 2–3)

Output 2 examines the situation where the hazard had not been identified in the risk assessment, and the system of work had not changed since the assessment. The implication is that the hazard identification process in the original risk assessment failed in some way. There are two potential shortfalls here, as shown in Figure 9.3. "Superficial" indicates a situation where a hazard may be considered insignificant if its potential consequences have not been given due consideration. For example, there may be an assumption that all slip and fall accidents result in minor injuries. However, in certain circumstances such incidents could result in major injuries. For example, a person could have tripped where the ground was littered with sharp objects. The likelihood of such a hazardous event may have seemed (perhaps unreasonably) trivial, and so it was ignored at the time of the assessment. An "Oversight" indicates a situation where a hazard has inadvertently been overlooked. For example, over-reliance could have been placed on a hazard checklist that was incomplete. Similarly, the assessment team may not have used the hazard checklist but relied on misplaced faith in their own judgement. An oversight may also occur if the hazard identification process was undertaken at an inadequate level of detail. If the risk assessment lacks sufficient detail then some of the hazards will undoubtedly be missed and the assessment will have lost some of its effectiveness.

Figure 9.3 Output 2 from Figure 9.1 (shortfall categories: Superficial; Oversight)
Output 3 examines the situation where the hazard had not been identified because the system had changed in some respect in the period since the assessment. For example, with the introduction of new technologies, changes are frequently made in methods of work (Horberry et al., 2004). To address these changes, pre-emptive risk assessments need to be undertaken. This output addresses the situation where the pre-emptive risk assessment has not been undertaken or has not been suitable and sufficient (see Figure 9.4). A pre-emptive risk assessment may be inadequate for many of the reasons mentioned above (such as the failure to identify hazards, lack of sufficient depth etc.). The assessment may have been omitted because the system failed to detect that new methods of work had been introduced, or because insufficient resources had been allocated for the assessment.

Hazard identified (outputs 4–6)

If the hazard was identified in the risk assessment then the shortfall may be down to some failure with the action plan that was drawn up. Either the plan was inadequate to start with, was not completed adequately, was not completed at all, or there was no need for such an action plan as the risk was considered to be adequately controlled. These potential shortfalls are represented by outputs 4–6.

Output 4 examines the situation where the action plan was considered to be incomplete, for the reasons indicated below (see Figure 9.5). "Incorrect priority" indicates a situation where the action plan following the assessment has not been given an appropriate level of priority and the recommended controls had not been implemented prior to the accident. A situation may arise, however, where the correct priorities might have been set but the action plan may not have required the controls to be implemented by the time the accident occurred; that is, the action plan was "not due for completion". "Failure to meet completion date" indicates the situation where the controls have not been implemented in accordance with the agreed action plan.

Output 5 examines the situation where, despite the accident, the action plan in the risk assessment was, in retrospect, considered to be complete (see Figure 9.6).
Figure 9.4 Output 3 from Figure 9.1 (shortfall categories: Inadequate pre-emptive risk assessment; No pre-emptive risk assessment – Failure to monitor change, Lack of Resources)
Figure 9.5 Output 4 from Figure 9.1 (shortfall categories: Incorrect priority; Not due for completion; Failure to meet completion date)
Figure 9.6 Output 5 from Figure 9.1 (Risk reduced to a level considered ALARP)
For this output, the risk was considered to have been controlled to a level "as low as is reasonably practicable" (ALARP) and as such this output is not a "shortfall". However, in the light of the accident, the risk will have to be re-evaluated, and this is done in the "Risk Management Action" section of the accident investigation.

Output 6 examines the situation where the action plan was considered to be inadequate. This could be for any of the reasons indicated below (see Figure 9.7). In the risk assessment, the assessors could have failed to identify shortcomings associated with the control measures that were in place. In other words, the risks were considered to be controlled to ALARP when, in fact, shortcomings existed and additional controls should have been recommended. Similarly, there could be shortcomings in new or modified control measures that were put in place by the action plan but were not considered in the risk assessment. The risk, in terms of severity, likelihood or both, could have been either over- or under-estimated by the assessment team; this would have affected the decision regarding additional control measures. The control measures identified in the risk assessment and action plan could also be deemed inadequate if new evidence came to light that altered the risk that had previously been estimated. For example, evidence may emerge that a bolting resin that had been used for years contains a chemical found to be carcinogenic, or to be a more potent carcinogen than was previously thought. As this new evidence had not been taken into account in the previous risk assessment, it should also be fed back into the hazard identification process to see whether the nature of the hazard has changed.
Figure 9.7 Output 6 from Figure 9.1 (shortfall categories: Failure to identify control shortcomings; Shortcomings in new control measures; Incorrect estimation of risk; New evidence that changes level of risk)
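Taken together, outputs 1–6 amount to a small decision tree over the history of the risk assessment for the hazardous event. The sketch below is one possible reading of that branching, following the descriptions above; the record fields and function names are invented for illustration and do not represent a published tool.

```python
# One possible reading of the section B branching of Figure 9.1, sketched as a
# small decision function. Field names are invented; the mapping to outputs 1-6
# follows the descriptions in the text, not a published tool.
from dataclasses import dataclass

@dataclass
class RiskAssessmentHistory:
    assessment_done: bool                # was the hazardous event ever risk assessed?
    hazard_identified: bool = False      # did that assessment identify the hazard?
    system_changed_since: bool = False   # has the system of work changed since?
    action_plan_completed: bool = False  # were the planned controls in place in time?
    action_plan_adequate: bool = False   # in retrospect, was the plan adequate (risk ALARP)?

def primary_output(h: RiskAssessmentHistory) -> int:
    """Return which of the six primary outputs applies to the accident."""
    if not h.assessment_done:
        return 1                                   # oversight, or not attempted
    if not h.hazard_identified:
        return 3 if h.system_changed_since else 2  # hazard missed: system changed / unchanged
    if not h.action_plan_completed:
        return 4                                   # plan incomplete
    return 5 if h.action_plan_adequate else 6      # plan complete: risk ALARP / plan inadequate

if __name__ == "__main__":
    history = RiskAssessmentHistory(assessment_done=True, hazard_identified=True,
                                    action_plan_completed=True, action_plan_adequate=False)
    print("Primary output:", primary_output(history))   # prints 6 for this example
```

Whatever form such a classification takes, the output number is only the starting point for the review described next; the real value lies in asking why that shortfall existed.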
Review of risk management system and wider implications

Any shortfall identified indicates a need to review the risk management system since, as has already been indicated, it is a failure within the system that caused the accident. In particular, the shortfalls identified focus attention on the relevant part of the system that needs to be reviewed and at which remedial action should be targeted. Classifying the shortfalls in this way is helpful in focusing attention on the failures within the risk management process. However, this classification on its own is insufficient for the purpose of planning and implementing effective remedial action. For example, it is not sufficient just to know that a risk assessment had not been undertaken due to a lack of resources, or even that this inaction was due specifically to a lack of time. If corrective actions are to be effectively targeted, it is essential to know why there was insufficient time. The question "why did this shortfall exist?" needs to be asked for each shortfall identified as having been influential in the accident.

The importance of reviewing the risk management system in this way cannot be overstated. Any shortfall identified could have much wider safety implications than those specifically related to the accident under investigation. For example, factors that may have led to a failure to carry out a risk assessment may exist in other parts of the operation, with the result that there is a catalogue of other accidents waiting to happen. It is possible that the accident occurred due to a failure in the process at corporate level, in which case the potential for similar accidents may exist at other sites. In the light of the accident, a person should be appointed with responsibility for:

• conducting a review of the risk management system along the lines outlined above, and implementing the appropriate corrective measures;
• developing and implementing an action plan to prevent the risk of a repeat or similar accident occurring;
• considering whether there are analogous risks within the operation/process that require further corrective action; and
• examining the implications of the accident and any shortfalls in the risk management systems at other sites.
Creating a process which ensures that lessons learnt from accidents lead to improvements in the risk assessment process itself, as well as in the specific activities in which the accident occurred, is crucial to ensuring maximum benefit, especially where human error is a critical factor.
Chapter 10
Conclusions
As this text began by examining the extraordinary range of human errors which predisposed the tragic accident at Bentley Colliery, it is not unreasonable to ask, as the first element of the conclusions, whether the research on human error in a mining context has added anything which would help in the understanding of the Bentley accident. This is examined below (in Section 10.1) by re-visiting the contributory factors identified in the original investigation in light of the information presented in Chapters 2–9. Following this, in Section 10.2, a series of broader conclusions is considered in relation to the potential value of a more systematic examination of human error potential in mine safety.

10.1 The Fatal Accident at Bentley Colliery – Revisited

It is highly unlikely that anyone would have predicted that all the human errors which contributed to the Bentley accident (as described previously in Table 1.1) would have come together on the same day, or that they would have been taken seriously if they had done so. In this sense it is probably not unreasonable to say that the chain of events leading to the accident that November day was unforeseeable. However, as the likelihood of a major accident increased as each error compounded those made previously, taking out any of the errors in the chain would have broken it, with, at least, a significant reduction in the severity of the end event. Table 10.1 revisits Table 1.1, adding in the error classification and what could have been done to avoid the errors made. There is little doubt from the re-interpretation of the Bentley errors given in Table 10.1 that pre-emptive consideration of the potential for human error in the system would have identified most of the errors which occurred and could have indicated the route to solution for most.

This compounding of errors raises an interesting issue in relation to the use of quantitative risk assessment. In a standard QRA approach the probability of two contingent low-probability events is lower than that of either single event. While this is entirely logical, indeed inevitable in mathematical terms, in real terms (as in the Bentley accident) the likelihood of a major accident actually increases as each error builds on the previous ones.
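The point in the note above can be made concrete with a small numerical sketch. The probabilities and the erosion factor below are invented for illustration and are not taken from the Bentley investigation: under independence a chain of errors looks vanishingly unlikely, but if each error erodes the defences against the next, the probability of each subsequent failure, given the errors already made, keeps climbing.

```python
# Illustrative numbers only (not taken from the Bentley investigation). A standard
# QRA multiplies independent probabilities, so a chain of errors looks vanishingly
# unlikely; but if each error erodes the defences against the next, the probability
# of the next failure, given the errors already made, keeps climbing.
p_single = 0.01            # assumed probability of any one error in isolation
erosion = 10.0             # assumed factor by which each prior error weakens the next defence

print(f"Three independent errors together: {p_single ** 3:.6f}")

p_next = p_single          # conditional probability that the next defence fails
p_chain = 1.0              # probability of the whole compounding chain
for n in range(1, 4):
    p_chain *= p_next
    print(f"Error {n}: conditional probability of this failure was {p_next:.2f}")
    p_next = min(p_next * erosion, 1.0)
print(f"Three compounding errors together: {p_chain:.6f}")
```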
Table 10.1 The Bentley accident errors revisited

Error 1 (Slip/Lapse)
At deployment it was noticed that one of the regular train guards had not turned in for work. On checking for a replacement, the official confused Allott with Aylot; one was a trained guard, the other not – the untrained man was deployed as the guard.
Risk control measures: Deploying the untrained guard is undoubtedly a slip/lapse error and a very common one (we have all confused names at some time). As slip/lapse errors need some form of "design" solution, the requirement would be for some form of additional cue as to who constituted trained guards (or indeed any form of specialist staff). This could have been done by, for example, separating grades of staff on the deployment board or by colour coding, say, the helmets.

Error 2 (Mistake)
Neither the man incorrectly deployed, nor the driver, pointed out the official's slip. In fact at least four individuals had the chance to correct him but none did. The official concerned had a reputation as awkward and someone you did not challenge.
Risk control measures: It would be hard to designate this as a violation as it is hardly likely that any mine would ever have a rule which stated "always correct someone who has made a mistake"! However, it is a mistake in the sense that the men concerned clearly thought that not saying anything was the "right" thing to do in the particular circumstances, if only to avoid the argument they obviously expected. What seems to have happened is that the men concerned behaved as they did to avoid hassle, in the hope that they would cope with anything which arose by having an untrained man as a guard. While it may seem strange in hindsight to give greater priority to avoiding inter-personal trouble than to ensuring safety, such a position is not uncommon. The perceived urgency of possible consequences is often a driving force in day-to-day decision making. What is, however, very important is that such general reluctance to correct the official clearly represents a poor safety culture. It seems highly unlikely that, if the official concerned was well known among the workforce for being awkward, the management were unaware of this. Assuming this to be reasonable, opportunities must have existed to correct the "fear" culture which had developed.

Error 3 (Violation)
When the driver of the first loco to enter the district passed the arrestor he left it defeated, in contravention of the rules, in the mistaken assumption that the headlights he saw behind him were following him into the same district; in the event the following train turned off to a different district. (The arrestor is a device which is designed to cause a controlled derailment in the event of a train running out of control. Under normal circumstances a driver stops his train, disengages the arrestor, drives his train beyond it, stops again and puts the arrestor back into position.)
Risk control measures: This is a clear violation of the rules, procedures and driver training in place at the mine. As with many violations, however, it was done for what seemed reasonable reasons at the time. It is almost certain that such action seemed reasonable because the second driver was close behind and no risk was seen as likely during the short period of time required before he arrived at the arrestor, combined with the fact that breaches of arrestor rules are likely to go unnoticed by anyone other than drivers, especially on an unloaded inby journey.

Error 4 (Violation)
When the second driver did eventually arrive he drove straight past, leaving the arrestor defeated.
Risk control measures: As with Error 3 above, this is a clear violation of the rules, procedures and training in place at the mine. However, in some respects this is "worse" as the second driver did not have the "excuse" that someone was following close behind. This does suggest that compliance with arrestor rules and procedures had become lax. As mentioned under Error 3 above, this type of breach can be difficult to spot, especially as, on many occasions, no one other than the driver and the guard will be in a position to witness the breach. Nonetheless, it seems highly unlikely that everyone other than drivers and guards had failed to realise this was happening. In addition, the fact that drivers and guards seem to have been prone to ignore such breaches indicates that the training given did not sufficiently address risk perception and hazard awareness. Such emphasis is especially important in situations where the need for compliance addresses an almost certainly rare eventuality.

Error 5 (Mistake)
A degree of shunting was required at the station to enable each loco to take its load of four carriages. There were six carriages at the platform. The driver told the "guard" to sit in the last carriage; not realising that only four were coupled, he sat in the sixth. When the train set off, there was no guard, trained or untrained, in position – he was left at the platform.
Risk control measures: There is little doubt that when the guard sat in the last carriage (rather than the last coupled carriage) he was under the impression that he was doing the right thing. The driver almost certainly assumed that the "guard" would be aware of the limited number of carriages which were coupled, as a trained guard would have been. This error is a direct consequence of Error 1, combined with the fact that the inexperienced driver did not give sufficiently precise instructions. The only real protection against this error was to have avoided Error 1.

Error 6 (Violation)
When the train pulled off for shunting it was fully loaded, in contravention of the Transport Rules, as there was a steep gradient immediately after leaving the station and passengers were not supposed to be on board during shunting, to reduce the consequences in the event of a runaway.
Risk control measures: This is a clear violation, by a large number of men, including officials and, almost certainly, management staff. It is hardly surprising that, at the end of the shift, men will take any opportunity to sit down rather than stand and, on this basis, it should have been apparent that this rule would need careful supervision. The fact that this clearly was not the case presents, once again, a strong indication of a poor safety culture. The fact that supervisors and management staff were clearly aware of the breach, and were almost certainly party to it, illustrates the point made in Chapter 5 that rules and procedures are unreliable risk management techniques unless actively supported by, for example, close supervision.

Errors 7 and 8 (Violation and/or Mistake)
The driver engaged 2nd gear, despite the fact that the rules and his training stipulated 1st gear. There was also evidence that the driver had not correctly carried out skid correction; however, he had only recently completed his training and the layout of the throttle, service brakes and sanders on this loco was different from that on which he had been trained.
Risk control measures: This would be a violation if the driver had deliberately selected second gear for some (unfathomable) reason. However, it seems much more likely that both these errors were mistakes (the second probably compounded by a degree of panic). Problems of inconsistency in the layout of loco controls had already been identified and discussed at corporate level (although whether the information about such concerns had filtered down to individual mines is not clear). Regardless of the ergonomics studies which had been carried out, this is not an obscure problem dependent on esoteric ergonomics knowledge; it is, in reality, little more than common sense. No one would own two cars where the clutch, brake and accelerator pedals went from left to right on one and right to left on the other, yet this was analogous to the problem of skid correction control layout on the loco fleet in UK mines at the time (see Table 3.1). This potential error could have been foreseen.

Error 9 (Violation)
The gradients on the road were, in places, steeper than those specified in the Manager's Rules. This had been spotted and reported four months prior to the accident but no action had been taken.
Risk control measures: This is a violation on the part of the management of the track repair function. There is clear evidence that the problem had been identified (on several occasions) by the deputies responsible for the safety inspections. Action could clearly have been taken.

Error 10 (Violation)
A practice had built up during regular loco maintenance of testing the brakes with four empty carriages – you cannot, of course, assume that a loco passing this test will also stop with four fully loaded carriages. The "reason" for this was that, as men were not available to load the train during testing, the fitters were expected to fill the train with the same weight of sand-bags.
Risk control measures: This is a violation but, once again, a predictable one. It is most probable that if you had filled a set of carriages with sufficient sand-bags to equal the weight of dozens of colliers on one occasion, you would try to avoid having to do so again! This raises serious reservations about the likely practicality of the procedure in place and about the extent to which it was actively supervised.
That said, the preponderance of violation errors (and the nature of those violations) indicates that the most critical latent failure was the poor safety culture that acted as a major predisposing factor for most of the errors which, collectively, made such a serious accident increasingly inevitable on the day.

Every accident is a child of its times and there is some justification in the suggestion that it is "unfair" to analyse an accident in the light of the prevalent thinking 30 years later. However, the reason why it is important to do so is that, while the general importance of both human behaviour and safety culture has been increasingly recognised over the 30 years since Bentley, relatively little has been done to translate that perceived importance into systematic day-to-day consideration and delivery in mining operations. In this sense the tragedy that was Bentley still does have crucially important messages even 30 years on.
10.2 General Conclusions

Considerable evidence has been presented to indicate the all-pervasive influence of human error on mine accident potential across a wide range of mining operations and locations. Equally, the examples quoted indicate clearly that, while there may be occasions where the person who made the error is culpable, in the vast majority of cases the errors have been at least predisposed (and, on some occasions, directly caused) by errors made elsewhere in the organisation (or indeed in other organisations such as manufacturers/suppliers).
Two general conclusions can be drawn from this:

• A better understanding of human error in mine safety will be crucial to future safety improvements in the industry.
• Central to this better understanding is the need to be more sophisticated in terms of a realisation that most errors are predisposed, and that any attempt at human error mitigation which does not address the predisposing factors will fail.
Fortunately it can also be seen from the information provided earlier that there is a good deal of guidance and recommendations already available which can be readily used to reduce human error potential, particularly in relation to the design of mining equipment and systems. While it is true that some of this information may now be difficult to source, recent developments such as the University of Queensland's MIRMgate initiative are already beginning to address this issue by making design ergonomics information (tailored specifically to the mining industry) more readily available. In addition, it is evident that techniques are available to incorporate a more systematic consideration of human error potential at mine level. These include:

• Detailed consideration of human error potential in risk assessment (both in the conventional sense and in the Case for Safety approach for the introduction of new equipment/systems).
• Assessment of human error potential and identification of tailored improvement measures in current operations.
• Detailed consideration of human error in accident investigation, in particular the systematic identification of the factors which predispose human error potential – the crucial element in ensuring that improvements made in the event of an accident/incident are sustained.
These three opportunities provide an overall approach to permanently reducing the influence of human error on safety in a way which is analogous to the battle plan of one of the major figures in the military history of one of the major mining countries – the "horns of the buffalo" – where a central force (the assessment of current operations) is supported by two flanking attacks (risk assessment and accident investigation). (The horns of the buffalo was the description of the battle plan of Cetshwayo, the Zulu leader, when the Zulus inflicted at Isandhlwana what is still the worst ever single day's defeat suffered by British troops.)

The need to systematically address the influence of human error on mine safety is overwhelming; a great deal of useful information and a framework of techniques to achieve this are already available; the question which remains is whether the industry will rise to the challenge. Human error is and always will be inevitable. However, to accept that its consequences are always an inevitability would be both foolish and dangerous.
Glossary of Mining Terms
Below are explanations of a series of mining terms used, but not explicitly defined, in the text.

Berm walls
A berm wall refers to dirt and rock piled alongside a haulage road or along the edge of a dump point. Intended as a safety measure, berm walls are commonly required by government organisations to be at least one-half as tall as the wheel of the largest mining machine on site.
Bolter miners
A machine for cutting coal which has an integrated roof bolting (see below) capability so that the coal can be cut and the roof secured by bolting by a single machine.
Bunker (silo)
A storage area as part of the coal clearance system (see below) which acts as a buffer to enable coaling to continue when, for example, there is a problem winding coal out of the mine.
Cap-lamp
A lamp powered by a battery on the miner’s belt, the bulb of which is attached to their helmet.
Coal clearance system
A series of conveyors running from the end of the coal face to the pit bottom (see below) for winding out of the mine.
Continuous Miners
A machine for cutting coal usually using a cutting mat with picks embedded in it running on a ranging arm. Modern versions of these machines often incorporate a roof bolting capability (see Bolter miners above).
Firing pattern
A particular pattern of holes drilled for shotfiring.
Free-steered vehicle
A rubber-tyred vehicle used (primarily) for the movement of supplies and coal/ore around the mine. Free-steered refers to the fact that it does not run on rails. Several versions of these machines exist depending on their primary use (e.g., some have flat-bed load areas whereas others have buckets). See also Shuttle cars, Load-haul-dumps below.
Deadman’s pedal
A control device which has to be continually activated in order for power to be maintained to the machine. These are usually foot pedals but can also (albeit unusually in mining) be hand operated.
Development heading
This is the end of a new tunnel (roadway) being driven into the mine – it is where the coal/rock is extracted in order to advance the tunnel.
Development machine
This is a generic name for machines which drive new tunnel (roadways) in the development heading. The machines have some mechanism to remove the rock and the ability to load the debris and pass it through the machine to discharge onto a conveyor system. See also drill loader, roadheader below.
Downgrade
A slope running down hill.
Drill loading machine
A particular form of development machine with drills to create shot holes for blasting, combined with the capability to load out the debris from the heading.
Face end
The area at the end of the face (where the coal is cut) where it interacts with the roadway leading to the face (at one end the roadway is used to take material/rock out of the mine while the one at the other end of the face is used to bring supplies to the face).
Flushboarding
Boards (usually wooden) which are jammed between supports on the side of the roadway to stop small rock spillage encroaching on the roadway.
Haul truck
A truck used in surface mining to transfer rock/earth from the blasting area to the main transportation system between the mine and the processing plant.
Heading team
The team of miners who operate in the development heading (see above).
Inby
The direction from any point in the mine which takes you further into the mine.
Load-haul-dump machine
A type of, or alternative name for, a free-steered vehicle (see above).
Ore pass
A vertical or steeply inclined passage between one level and a lower level, eventually reaching the level from which material is hoisted to the surface via the shaft.
Outby
The direction from any point in the mine which takes you back to the pit bottom (and thence out of the mine).
Pass by
A loop of rail on a single track railway which enables the loco/train to change its direction of travel.
Portable roof bolters
Small machines for inserting roof bolts which can be operated by the miners working from the floor and moved about the heading manually (an alternative method of roof bolting which can be used if a bolter miner is not in use).
Roadheader
A specific type of development machine which cuts the rock by the use of a rotary cutting head on the end of a boom. As with all development machines, it also has the capability to gather debris, pass it through the machine and off-load at the rear.
Rock burst
A sudden break-out of rock from the walls of a tunnel caused by the rapid release of accumulated strain in highly stressed rock.
Sanders
This is a control often used in skid correction or the control of runaways on rail mounted locomotives. Operation of the control dumps sand (from a box in the loco) onto the rails to increase friction when the brakes are applied.
Self-rescuer
This is a device used to aid breathing in the event of a fire by converting carbon monoxide to carbon dioxide. When not in use it is worn on the miner’s belt.
Shearers
This is a type of machine used to cut coal at the face. The cutting is done by a rotating drum with a spiral arrangement of picks. Shearers can be single or double ended (i.e. a cutting drum at one or both ends). The machine is mounted on the face conveyor and drops the cut coal onto the conveyor.
Shovel
Any bucket-equipped machine used for digging and loading earth and/or rock.
Shuttle cars
These are a specific type of free-steered vehicle which normally run from the area where material is being cut to the point where it is loaded onto/ into the main materials transfer system. Normally this particular type of free-steered vehicle is bi-directional (which often involves the driver sitting at 90 degrees to the line of travel).
Spragging
This is a mechanical means of locking the wheels on locomotive wagons when not in use. There are a wide range of devices used from simple metal bars to specifically designed devices (such as aeroplane sprags).
Support rules
These are detailed specifications of how the roof and walls are to be supported (e.g. placement and density of roof bolts to be set). These must be considered mandatory and inviolate in all circumstances where they apply if strata stability is to be achieved.
The following are job titles that denote various grades of management and supervisory roles used in the various mining locations covered by the studies encompassed by the text:

Deputy (District Deputy)
Overman
Undermanager
Underground Manager
Mine overseer
Shift Boss
Team Leader
References
Advisory Committee on the Safety of Nuclear Installations (ACSNI) (1991), Study Group on Human Factors Second Report: Human Reliability Assessment – A Critical Overview. Swindon: HSE Books. Advisory Committee on the Safety of Nuclear Installations (ACSNI) (1993), Study Group on Human Factors Third Report: Organising for Safety. Swindon: HSE Books. Ahern, D. (2008), Adopting the Principles of “Crew Resource Management” to the Offshore Drilling Industry. Unpublished Masters Degree thesis, the University of Queensland, Australia. Arnold, I.M.F. (1996), Occupational Health and Safety in the Mining industry in Canada. In: Proceedings of “Minesafe International 1996”. Perth WA: WA Department of Minerals and Energy. Bakker, R. (1996), Mine Health and Safety Act: A New Era in the South African Mining Industry. In: Proceedings of “Minesafe International 1996”. Perth WA: WA Department of Minerals and Energy. Barnes, H.F. (1993), Challenges for Management in Occupational Health and Safety in the 1990s. In: Proceedings of “Minesafe International 1993”. Perth WA: WA Department of Minerals and Energy. Biddle, T.M. (2000), Mine Safety and Health: A Trade Balance. In: Proceedings of “Minesafe International 2000”. Perth WA: WA Department of Minerals and Energy. Bird, F.E. and Loftus, R.G. (1976), Loss Control Management. Loganville, Georgia: Institute Press. Brake, R. and Bates, G. (2000), Assessment of Underground Thermal Environments and the Prevention of Heat Illness. In: Proceedings of “Minesafe International 2000”. Perth WA: WA Department of Minerals and Energy. Brown, M.J., Long, T.L. and Oosterhof, J. (1998), An Integrated Occupational Safety and Health Information Management System for the Western Australian Mining Industry. In: Proceedings of “Minesafe International 1998”. Pretoria RSA: Department of Minerals and Energy Affairs. Buchanan, D.J. (2000), What is the Role of Mining Health and Safety Research in the 21st Century? In: Proceedings of “Minesafe International 2000”. Perth WA: WA Department of Minerals and Energy. Burgess-Limerick, R. and Steiner, L. (2006), “Injuries associated with continuous miners, shuttle cars, load–haul–dump and personnel transport in New South Wales underground coal mines”, Mining Technology, 115, 160–68.
Calder, A. (1998), Make Health and Safety Byte – Using Computers to Zero in on Safety. In: Proceedings of “Minesafe International 1998”. Pretoria RSA: Department of Minerals and Energy Affairs. Carter, G., Robertson, W. and Mallet, C. (2000), Creating Decision Support Tools Through Data Integration and 4D Visualisation for Risk Based Safety Management Systems. In: Proceedings of “Minesafe International 2000”. Perth WA: WA Department of Minerals and Energy. Chamber of Minerals and Energy (WA) (1997), A Guide to Contractor Occupational Health and Safety Management for Western Australian Mines. Perth WA: Chamber of Minerals and Energy. Chapanis, A. (1965), “Words, words, words”, Human Factors, 7, 1–17. Chapanis, A. (1988), “‘Words, words, words’ Revisited.” In Oborne, D.J. (ed.), International Review of Ergonomics, Volume 2. London: Taylor and Francis. Cliff, D. and Horberry, T. (2008), “Hours of Work Risk Factors for Coal Mining”, International Journal of Mining and Mineral Engineering, 1(1), 74–94. Coleman, G.J., Graves, R.J., Simpson, G.C., Sweetland, K.F., Collier, S.G. and Golding, D. (1984), Communications in Noisy Environments. Final Report on European Coal and Steel Community contract no. 7206-00-8/09. Edinburgh: Institute of Occupational Medicine. Confederation of British Industry (1990), Developing a Safety Culture. London: Confederation of British Industry. Crawley, F., Preston, M. and Tyler, B. (1999), HAZOP; Guide to Best Practice. Institution of Chemical Engineers. Davies, E. (1993), Safety – Performance, Principles and Practice. In: Proceedings of “Minesafe International 1993”. Perth WA: WA Department of Minerals and Energy. Davies, F., Spencer, R. and Dooley, K. (2001), Summary Guide to Safety Climate Tools. HSE Offshore Technology Report 1999/063. Sudbury: HSE Books. Dekker, S. (2006), The Field Guide to Understanding Human Error. Aldershot, UK: Ashgate. Denby, B. (1996), Improving Mine Safety Using Virtual Reality Techniques. In: Proceedings of “Minesafe International 1996”. Perth WA: WA Department of Minerals and Energy. Denby, B., Schofield, D. and McClarnon, D. (1995), “The use of virtual reality and computer graphics in mining engineering” Luxembourg: European Coal and Steel Community, Ergonomics Action Bulletin, 32, 1–6. Department of Transport (1988a), Investigation into the Clapham Junction Railway Accident. London: HMSO. Department of Transport (1988b), Investigation into the King’s Cross Underground Fire. London: HMSO. Donoghue, A.M. (2001), “A risk-based system to penalize and reward line management for occupational safety and health performance”, Occupational Medicine, 51, 354–6.
Farmer, E. and Chambers, E.G. (1926), A Psychological Study of Individual Differences in Accident Rates. London: Industrial Fatigue Research Board Report No. 54. Ferguson, C.A., Mason, S., Collier, S.G., Golding, D., Gravelling, R.A., Morris, L.A., Pethick, A.J. and Simpson, G.C. (1985), The Ergonomics of the Maintenance of Mining Machinery. Final Report on European Coal and Steel Community contract no. 7249-12/11. Edinburgh: Institute of Occupational Medicine Report TM/85/12. Fewell, P.T. (1993), A New Dimension in Health and Safety in the South African Mining Industry. In: Proceedings of “Minesafe International 1993”. Perth WA: WA Department of Minerals and Energy. Fewell, P.T. and Davies, A.W. (1992), The Approach to Safety, Hygiene and Health in a Large Mining Company in South Africa. In: Proceedings of “Safety, Hygiene and Health in Mining”. Doncaster: The Institution of Mining Engineers. Filipeck, M. and Brodzinski, S. (1996), Safety in the Polish Mining Industry; New Challenges. In: Proceedings of “Minesafe International 1996”. Perth WA: WA Department of Minerals and Energy. Fisher, R.J. (2000), Application of Safety Best Practice at AngloGold. In: Proceedings of “Minesafe International 2000”. Perth WA: WA Department of Minerals and Energy. Fleming, M. (2001), Safety Culture Maturity Model. SHE Offshore Technology report 2000/049. Sudbury: HSE Books. Flin, R., Mearns, K., O’Connor, P. and Bryden, R. (2000), “Measuring safety climate: identifying the common features”, Safety Science, 34, 177–93. Fox, J.G. (1991), “Mining ergonomics in the European Communities.” In: Carr, T.L. (1991), MinTeach 91. London: Stirling Publications. Gadd, S. and Collins, A.M. (2002), Safety Culture: A Review of the Literature. Health and Safety Laboratories Report HSL/2002/25. Buxton: Health and Safety Laboratories. Gallagher, S. (2008), Reducing Low Back Pain and Disability in Mining. DHHS (NIOSH) Publication No. 2008-135 (Information Circular 95072008). Pittsburgh: Department of Health and Human Services, National Institute for Occupational Safety and Health. Graveling, R.A., Johnstone, J. and Symes, A.M. (1992), Development of a Screening Method for Manual Handling. Edinburgh: Institute of Occupational Medicine Report No. TM/92/08. Graveling, R.A., Morris, L.A. and Graves, R.J. (1988), Working in Hot Conditions in Mining: A Literature Review. Final Report on Health and Safety Executive project no. 2229/R53.55. Edinburgh: Institute of Occupational Medicine Report No. TM/88/13.
Graveling, R.A., Simpson, G.C. and Sims, M.T. (1985), “Lift with Your Legs, Not With Your Back: A Realistic Directive?” In Brown, I.D., Goldsmith, R., Coombes, K. and Sinclair, M. (eds), Proceedings of the IEA Congress: Ergonomics International. London: Taylor and Francis.
Graves, R.J., Leamon, T.B., Morris, L.A., Nicholl, A. McK, Simpson, G.C. and Talbot, C.F. (1981), Thermal Conditions in Mining Operations. Final Report on European Coal and Steel Community contract no. 6245-11/8/049. Edinburgh: Institute of Occupational Medicine Report TM/80/9.
Grech, M., Horberry, T. and Koester, T. (2008), Human Factors in the Maritime Domain. USA: CRC Press.
Green, E. (1998), A Personal Reflection on Thirty Years of Mine Safety and Health Regulation in the USA; and a Look Ahead to the Future. In: Proceedings of “Minesafe International 1998”. Pretoria RSA: Department of Minerals and Energy.
Greenwood, M. and Woods, H.M. (1919), A Report on the Incidence of Industrial Accidents on Individuals, with Special Reference to Multiple Accidents. London: Industrial Fatigue Research Board Report No. 4.
Griffin, M.J. (1993), Vibration Evaluation and Worker Health. In: Proceedings of “Minesafe International 1993”. Perth WA: WA Department of Minerals and Energy.
Guldenmund, F.W. (2000), “The nature of safety culture: a review of theory and research”, Safety Science, 34, 215–57.
Hancock, P.A. (1981), “Heat stress impairment of mental performance: a revision of tolerance limits”, Aviation, Space, and Environmental Medicine, 52, 177–80.
Harris, G. and Rendalls, T. (1993), Man–machine Design – A Contractor’s View. In: Proceedings of “Minesafe International 1993”. Perth WA: WA Department of Minerals and Energy.
Hayes, J. (2006), “Safety Decision Making in High Hazard Organisations at the Production/Maintenance Interface – A Literature Review.” National Research Centre for OHS Regulation. Downloaded 14 February 2009 from: http://ohs.anu.edu.au/publications/index.php.
Health and Safety Executive (HSE) (1979), The Accident at Bentley Colliery, South Yorkshire, 21 November 1978. London: HMSO.
Health and Safety Executive (HSE) (1992), The Safety of Free Steered Vehicle Operations Below Ground in British Coal Mines. London: HMSO.
Health and Safety Executive (HSE) (1997), Successful Health and Safety Management. Norwich: HSE Books.
Heinrich, H.W. (1931), Industrial Accident Prevention: A Scientific Approach. New York: McGraw-Hill.
Helander, M., Krohen, G.S. and Curtin, R. (1983), “Safety of roof bolting operations in underground mines”, Journal of Occupational Accidents, 5, 161–75.
Helmreich, R.L., Merritt, A.C. and Wilhelm, J.A. (1999), “The evolution of crew resource management training in commercial aviation”, ICAO Journal of Flight Safety and Accident Prevention, 6, 32–49.
Hermanus, M. and Van der Bergh, A. (1996), Health Safety and the Environment: Charting a New Course; Strategic Issues and Challenges Facing a Major South African Mining Group. In: Proceedings of “Minesafe International 1996”. Perth WA: WA Department of Minerals and Energy.
Hollands, R., Denby, B., Brooks, G. and Burton, A. (2000), Equipment Operation/Safety Training Using Virtual Reality. In: Proceedings of “Minesafe International 2000”. Perth WA: WA Department of Minerals and Energy.
Hollnagel, E., Woods, D. and Leveson, N. (eds) (2006), Resilience Engineering: Concepts and Precepts. Aldershot, UK: Ashgate.
Horberry, T., Gunatilaka, A. and Regan, M. (2006), “Intelligent systems for industrial mobile equipment”, The Journal of Occupational Health and Safety – Australia and New Zealand, 22(4), 323–34.
Horberry, T., Larsson, T., Johnston, I. and Lambert, J. (2004), “Forklift safety, traffic engineering and intelligent transport systems: a case study”, Applied Ergonomics, 35(6), 575–81.
Human Factors in Reliability Group (HFRG) (1995), Improving Compliance with Safety Procedures: Reducing Industrial Violations. Swindon: HSE Books.
Hunter, W.J. (1993), Occupational Health and Safety in the European Community Post 1992. In: Proceedings of “Minesafe International 1993”. Perth WA: WA Department of Minerals and Energy.
International Nuclear Safety Advisory Group (1988), Basic Safety Principles for Nuclear Power Plants. IAEA Safety Series Report No. 75-INSAG-3. Vienna: International Atomic Energy Agency.
Jennings, N. (1996), Improving Safety and Health in Mines: The Role of the International Labour Convention 176. In: Proceedings of “Minesafe International 1996”. Perth WA: WA Department of Minerals and Energy.
Johnson, W.G. (1975), “Management Oversight Risk Tree (MORT)”, Journal of Safety Research, 7(1), 4–15.
Johnston, A. (1993), Commitment Through Workplace Involvement. In: Proceedings of “Minesafe International 1993”. Perth WA: WA Department of Minerals and Energy.
Johnstone, R. (2000), Enforcement of Occupational Health and Safety Statutes: Issues and Future Directions. In: Proceedings of “Minesafe International 2000”. Perth WA: WA Department of Minerals and Energy.
Jordinson, R., Taylor, P., Hagan, J. and Butler, B. (2000), Can Safety Systems Work Without the Right Attitudes? In: Proceedings of “Minesafe International 2000”. Perth WA: WA Department of Minerals and Energy.
Joy, J. (1996), The Role of the Mining Safety Professional After 2000: From Player to Coach. In: Proceedings of “Minesafe International 1996”. Perth WA: WA Department of Minerals and Energy.
Joy, J. (2000), Risk and Decision Making in the Minerals Industry. In: Proceedings of “Minesafe International 2000”. Perth WA: WA Department of Minerals and Energy.
Joy, J. and Griffiths, D. (2007), National Minerals Industry Safety and Health Risk Assessment Guidelines. Brisbane: University of Queensland Minerals Industry Safety and Health Centre.
Keilblock, A.J. (1987), “Strategies for the prevention of heat disorders with particular reference to the efficacy of body cooling procedures.” In: Shiraki, K. and Yousef, M.K. (eds), Thermal and Work Physiology. Amsterdam: Elsevier.
Kingsley, C.E., Mason, S., Pethick, A.J., Simpson, G.C., Sims, M.T. and Leamon, T.B. (1980), An Investigation of Underground Haulage and Transport Systems. Final Report on European Coal and Steel Community contract no. 7245-11/8/052. Edinburgh: Institute of Occupational Medicine Report TM/80/10.
Kizil, M. (2003), “Virtual reality applications in the Australian minerals industry”, Application of Computers and Operations Research in the Minerals Industries, South African Institute of Mining and Metallurgy. Downloaded on 4 February 2009 from http://www.saimm.co.za/events/0305apcom/downloads/569-574%20Kizil.pdf.
Kizil, G. and Rasche, T. (2008), TYREgate – a Causal Factors Database and Risk Management Decision Making Support Tool for Earthmover Tyres and Rims. Paper presented at the Queensland Mining Industry Health and Safety Conference, August 2008, Townsville, Australia. Downloaded on 24 February 2009 from http://www.qrc.org.au/conference/_dbase_upl/Papers2008_Rasche.pdf.
Komljenovic, D. and Kecojevic, V. (2007), “Risk management programme for occupational safety and health in surface mining operations”, International Journal of Risk Assessment and Management, 7(5), 620–38.
Krause, T.R. (2000), The Role of Behaviour Based Safety in the Workplace. In: Proceedings of “Minesafe International 2000”. Perth WA: WA Department of Minerals and Energy.
Krause, T.R., Hindley, J.H. and Hodgson, S.J. (1990), The Behaviour Based Safety Process. New York: Van Nostrand Reinhold.
Laurence, D. (2005), “Safety rules and regulations on mine sites – the problem and a solution”, Journal of Safety Research, 36(1), 39–50.
Lawrence, A.C. (1974), “Human error as a cause of accidents in gold mining”, Journal of Safety Research, 6, 78–88.
Leon, R.N., Salamon, M.D.G., Davies, A.W. and Davies, J.C.A. (1994), Commission of Inquiry into Safety and Health in the Mining Industry. Pretoria RSA: Department of Mineral and Energy Affairs.
Lucas, J. and Thabet, W. (2008), “Implementation and evaluation of a VR task-based training tool for conveyor belt safety training”, ITcon, 13, 637–59. Downloaded on 5 February 2009 from http://itcon.org/data/works/att/2008_40.content.09404.pdf.
Marx, C. (2000), Developing a Decriminalised Accident Investigation Methodology. In: Proceedings of “Minesafe International 2000”. Perth WA: WA Department of Minerals and Energy.
Mason, S. and Chan, W.L. (1991), Ergonomics Design Handbook for Roadshearers. British Coal Corporation TSRE Report TM 91/01.
Mason, S., Chan, W.L. and Simpson, G.C. (1985), “Development of sightline criteria for mobile machinery.” In Oborne, D.J. (ed.), Contemporary Ergonomics 1985. London: Taylor and Francis.
Mason, S. and Rushworth, A.M. (1991), Ergonomics Design Handbook for Shearers. British Coal Corporation TSRE Report TM 91/03.
Mason, S., Ferguson, C.A. and Pethick, A.J. (1986), Ergonomic Principles in Designing for Maintainability. Luxembourg: ECSC Community Ergonomics Action Report 8, Series 3.
Mason, S. and Simpson, G.C. (1990a), Ergonomics Principles in the Design of Free Steered Vehicles. British Coal Corporation TSRE Report SSL/90/173.
Mason, S. and Simpson, G.C. (1990b), Ergonomics Principles in the Design of Underground Locomotives. British Coal Corporation TSRE Report SSL/90/174.
Mason, S. and Simpson, G.C. (1990c), Ergonomics Principles in the Design of Combined Drilling and Loading Machines. British Coal Corporation TSRE Report SSL/90/165.
Mason, S. and Simpson, G.C. (1990d), Ergonomics Principles in the Design of Continuous Miners. British Coal Corporation TSRE Report SSL/90/166.
Mason, S. and Simpson, G.C. (1992), Ergonomics Aspects in the Design of Face Control and Monitoring Systems. Final Report on European Coal and Steel Community contract CEC 7249-11/055. Eastwood: British Coal Corporation.
Mason, S., Talbot, C.F. and Simpson, G.C. (1995), Assessment of the Supervisory Attitudes to Safety and the Development of Training Material for the Promotion of Human Reliability. Final Report on CEC contract 7250-13/060. Eastwood: British Coal Corporation Operations Directorate.
McDonald, G.L. (1993), The Nature of the Conflict and the Need to Resolve It. In: Proceedings of “Minesafe International 1993”. Perth WA: WA Department of Minerals and Energy.
McPhee, B. (1992), Ergonomics Project. Final Report on NERDDC Project No. 1278. Sydney NSW: National Institute of Occupational Health and Safety.
McPhee, B. (2007), “Ergonomics in large machinery design”, HFESA Journal of Ergonomics Australia, 21, 22–5.
Mellblom, B. (2000), Future Direction of Safety and Health Practices in the Swedish Mining Industry. In: Proceedings of “Minesafe International 2000”. Perth WA: WA Department of Minerals and Energy.
Meister, D. and Sullivan, D.T. (1968), “Human factors: engineering blind spot”, Electro-Technology, August.
MISHC (2005), Causes of Fatalities and Significant Injury in the Australian Mining Industry. Final Report to the Queensland Resources Council. Brisbane: University of Queensland Minerals Industry Safety and Health Centre. Downloaded 15 January 2009 from www.mishc.uq.edu.au/Files_for_download/QRC_report/QRC_Final_report.pdf.
MISHC (2009), Earth Moving Equipment Safety Round Table. Downloaded 15 January 2009 from http://www.mishc.uq.edu.au/index.html?page=58384.
Morrison, D.J. (1996), Establishing and Maintaining Safety Standards in Indonesian Mining Operations. In: Proceedings of “Minesafe International 1996”. Perth WA: WA Department of Minerals and Energy.
Mulder, I. (1998), Design of a Strategy for ISCOR Limited to Enhance Its Performance in Occupational Health and Safety. In: Proceedings of “Minesafe International 1998”. Pretoria RSA: Department of Minerals and Energy Affairs.
Neindorf, L.B. and Fasching, H.H. (1993), Safety Programmes in Practice in the Isa Lead Mine. In: Proceedings of “Minesafe International 1993”. Perth WA: WA Department of Minerals and Energy.
Nicholas, H.P. (1998), Professional Registration of Safety Practitioners in South Africa. In: Proceedings of “Minesafe International 1998”. Pretoria RSA: Department of Minerals and Energy.
Nkurlu, J. (1998), The ILO Strategies in Regard to Occupational Health and Safety. In: Proceedings of “Minesafe International 1998”. Pretoria RSA: Department of Minerals and Energy.
O’Beirne, T. (1992), Risk Analysis and Risk Management in Australian Coal Mines – The First Five Years. In: Proceedings of “Safety, Hygiene and Health in Mining”. Doncaster: The Institution of Mining Engineers.
O’Sullivan, J. (2007), “Ergonomics in the design process”, HFESA Journal of Ergonomics Australia, 21, 13–18.
Parkes, A. (2003), Truck Driver Training Using Simulation in England. Driving Assessment 2003: The Second International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design. Downloaded on 24 February 2009 from: http://www.trucksim.co.uk/Documents/Truck%20driver%20training%20using%20simulation%20in%20England.pdf.
Pitzer, C.J. (1993), Safety Psychology: Managing Safety Attitudes in the Real World. In: Proceedings of “Minesafe International 1993”. Perth WA: WA Department of Minerals and Energy.
Porter, C.A. (1988), “Accident Proneness – a review of the concept.” In Oborne, D. (ed.), International Review of Ergonomics. London: Taylor and Francis.
Pratt, R. and Simpson, G.C. (1994), Improved Human Reliability in Safety Inspections. Final Report on CEC contract no. 7250/13/013. Eastwood: British Coal Corporation Operations Directorate.
Purdy, G. (2000), Embedding Risk Management – Changing the Culture. In: Proceedings of “Minesafe International 2000”. Perth WA: WA Department of Minerals and Energy.
Rasche, T. (2001), Development of a Safety Case Methodology for the Minerals Industry: A Discussion Paper. Brisbane: University of Queensland Minerals Industry Safety and Health Centre. Downloaded on 25 February 2009 from http://www.mishc.uq.edu.au/Publications/Development_of_a_Safety_Case.pdf.
Rasche, T. (2002), Databases for Applications in Quantitative Risk Analysis (QRA): Discussion Paper. Brisbane: University of Queensland Minerals Industry Safety and Health Centre. Downloaded on 25 February 2009 from http://www.mishc.uq.edu.au/Publications/Databases_for_Equipment_Failure011.pdf.
Rasmussen, J. (1987), “Reasons, causes and human error.” In: Rasmussen, J., Duncan, K. and Leplat, J. (eds), New Technology and Human Error. New York: Wiley.
Reason, J.T. (1987), “A framework for classifying errors.” In: Rasmussen, J., Duncan, K. and Leplat, J. (eds), New Technology and Human Error. New York: Wiley.
Reason, J.T. (1990), Human Error. Cambridge: Cambridge University Press.
Reason, J.T. (2000), “Human error: models and management”, British Medical Journal, 320, 768–70.
Rimmington, J. (1989), Annual Report of the Health and Safety Executive. London: HMSO.
Rimmington, J. (1993), “The cost of accidents”, Health and Safety Management, March.
Lord Robens (1972), Health and Safety at Work. London: HMSO.
Rushworth, A.M. (1996), “Reducing accident potential by improving the ergonomics and safety of locomotive and FSV driver’s cabs by retrofit”, Mining Technology, June, 153–9.
Rushworth, A.M., Best, C.F., Coleman, G.J., Graveling, R.A., Mason, S. and Simpson, G.C. (1986), Study of Ergonomic Principles Involved in Accident Prevention for Bunkers. Final Report on European Coal and Steel Community contract 7247/12/049. Edinburgh: Institute of Occupational Medicine Report TM/86/05.
Rushworth, A.M. and Mason, S. (1991), Design Study of Manual Roofbolting Machines. Burton-upon-Trent: British Coal Corporation Technical Services & Research Executive.
Rushworth, A.M., Mason, S., Morton, G., Simpson, G.C. and Talbot, C.F. (1993), Improving the Ergonomics of Locomotive Design by Retrofit. Final Report on European Coal and Steel Community contract no. 7250-13/026. Eastwood: British Coal Operations Dept.
Rushworth, A.M., Mason, S. and Simpson, G.C. (1990), The Ergonomics of Roof Bolting Operations. Final Report on European Coal and Steel Community contract no. 7249-12/036. Eastwood: British Coal Operations Dept.
Rushworth, A.M., Mason, S. and Talbot, C. (1994), Operational Handbook of the Bretby Maintainability Index. Luxembourg: ECSC Community Ergonomics Action Report 8A, Series 3.
Sanders, M.S. and Kelly, G.R. (1981), Visibility Attention Locations for Operating Continuous Miners, Shuttle Cars and Scoops. Washington DC: US Bureau of Mines Report No. BuMines OFR 29(1)-82.
Schofield, D., Denby, B. and McClarnon, D. (1994), “Computer graphics and virtual reality in the mining industry”, Mining Magazine, November, 284–6.
Schutte, P. (1998), The Human Factor in Safety: A Behaviour-based Approach Enhancing Empowered, Valued and Safety Committed Employees. In: Proceedings of “Minesafe International 1998”. Pretoria RSA: Department of Minerals and Energy Affairs.
Simpson, G.C. (1990), “Costs and benefits in occupational ergonomics”, Ergonomics, 33, 261–8.
Simpson, G.C. (1993), “Applying ergonomics in industry: some lessons from the mining industry.” In: Lovesey, E.J. (ed.), Contemporary Ergonomics. London: Taylor & Francis.
Simpson, G.C. (1994), “Promoting safety improvements via potential human error audits”, The Mining Engineer, 154, 38–42.
Simpson, G.C. (1996a), Duty of Care: Are Mining Equipment Companies Failing to Deliver? In: Proceedings of “Minesafe International 1996”. Perth WA: WA Department of Minerals and Energy.
Simpson, G.C. (1996b), “Safety training: the need to start at the top”, Journal of Occupational Health and Safety – Australia and New Zealand, 12, 693–700.
Simpson, G.C. (1996c), “Toward a rational approach to risk assessment”, Mining Technology, 78, 19–23.
Simpson, G.C. (1998a), It’s the People – Stupid! In: Proceedings of “Minesafe International 1998”. Perth WA: WA Department of Minerals and Energy.
Simpson, G.C. (1998b), Machine Design Ergonomics: A Crucial Safety Issue Often Overlooked. In: Proceedings of the Society of Mining Engineers Annual Meeting. Littleton, CO, USA: Society for Mining, Metallurgy and Exploration Inc.
Simpson, G.C. (2000), Reducing Manual Handling Risk: The Holy Grail of Health and Safety. In: Proceedings of “Minesafe International 2000”. Perth WA: WA Department of Minerals and Energy.
Simpson, G.C. and Chan, W.L. (1988), “The derivation of population stereotypes for mining machines and some reservations on the general applicability of published stereotypes”, Ergonomics, 31, 327–35.
Simpson, G.C. and Coleman, G.J. (1988), “The development of a procedure to ensure effective auditory warning signals”, The Mining Engineer, May, 511–14.
Simpson, G.C. and Mason, S. (1983), “Design aids for designers: an effective role for ergonomics”, Applied Ergonomics, 14, 117–83.
Simpson, G.C., Mason, S., Rushworth, A.M. and Talbot, C.F. (1994), The Role of Human Error in Accident Aetiology and the Development of an Operational Human Error Audit System. Final Report on European Coal and Steel Community contract no. 7250-12/025. Eastwood: British Coal Operations Dept.
Simpson, G.C. and Moult, D.J. (1995), “Risk assessment as the basis for the introduction of new systems”, The Mining Engineer, 77, 343–8.
Simpson, G.C., Rushworth, A.M., von Glehn, F.H. and Lomas, R.M. (1996), Causes of Transport and Tramming Accidents on Mines other than Coal, Gold and Platinum. Final Report on Safety in Mines Research Advisory Committee Project OTH 202. Huthwaite: International Mining Consultants Ltd.
Simpson, G.C. and Talbot, C.F. (1994), Human Error Audit and Safety Management Review. Huthwaite: International Mining Consultants Ltd. Report No. IMCL/1963.
Simpson, G.C. and Widdas, M. (1992), “Reducing major accident/incident risk”, The Mining Engineer, March, 259–65.
Squelch, A. (2000), Application of Virtual Reality for Mine Safety Training. In: Proceedings of “Minesafe International 2000”. Perth WA: WA Department of Minerals and Energy.
Standish-White, J. (2000), “SIMUNYE” Safety Culture – The Human Factor in Risk Management. In: Proceedings of “Minesafe International 2000”. Perth WA: WA Department of Minerals and Energy.
Strambi, F. (1999), Contribution of Community Ergonomics Research and Community Ergonomics Action to Improvement of Working Conditions Involving Heat Exposure. In: Proceedings of the Closing Conference of ECSC Social Research. Luxembourg: European Coal and Steel Community.
Sundstrom-Frisk, C. (1998), Understanding Human Behaviour – A Necessity in Improving Safety and Health Performance. In: Proceedings of “Minesafe International 1998”. Pretoria RSA: Department of Minerals and Energy Affairs.
Talbot, C.F., Mason, S., von Glehn, F., Lomas, R.M. and Simpson, G.C. (1996), Improve the Safety of Workers by Investigating the Reasons Why Accepted Safety and Work Standards are not Complied with on Mines. Final Report on Safety in Mines Research Advisory Committee Project GEN 213. Huthwaite: International Mining Consultants Ltd.
Talbot, C.F. and Simpson, G.C. (1995), Development of Low-cost Improvements to Equipment/System Design and Work Organisation to Reduce Manual Handling Risk. Final Report on European Coal and Steel Community Contract No. 7250/12/059. Eastwood: British Coal Corporation.
Teniswood, C.F., Clark, D.G.N. and Todd, D.A. (1993), Design Guidelines for Remote Controls of Mining Equipment. In: Proceedings of “Minesafe International 1993”. Perth WA: WA Department of Minerals and Energy.
Torlach, J. (1996), New Legislative Directions. In: Proceedings of “Minesafe International 1996”. Perth WA: WA Department of Minerals and Energy.
Torlach, J. (1998), Regulating the Mining Industry in the 21st Century. In: Proceedings of “Minesafe International 1998”. Pretoria RSA: Department of Minerals and Energy Affairs.
Tripathy, D.P. and Rourkela, R.E.C. (1998), Quantitative Safety Risk Assessment in Coal Mines. In: Proceedings of “Minesafe International 1998”. Pretoria RSA: Department of Minerals and Energy Affairs.
Turner, J. and Joy, J. (1996), The Use of Team-based Risk Assessment Processes to Reduce the Risk of Rock Related Accidents. In: Proceedings of “The Human Element in Rock Engineering”. Orange Grove, RSA: Sangorm.
Van der Molen, H.H. and Botticher, A.M.T. (1988), “A hierarchical risk model for traffic participants”, Ergonomics, 31, 537–55.
Wagenaar, W.A., Hudson, P.T.W. and Reason, J.T. (1990), “Cognitive failures and accidents”, Applied Cognitive Psychology, 4, 273–94.
Weick, K.E. and Sutcliffe, K.M. (2007), Managing the Unexpected: Resilient Performance in an Age of Uncertainty. San Francisco, USA: Jossey-Bass.
Windridge, F.W., Parkin, R.J., Neilson, P.J., Roxborough, F.F. and Ellicott, C.W. (1995), Report on an Accident at Moura No. 2 Underground Mine on Sunday, 7 August 1994. Brisbane: State of Queensland Department of Minerals and Energy.
Wyndham, C.H. (1965), “A survey of the causal factors in heat stroke and of their prevention in the gold mining industry”, Journal of the South African Institute of Mining and Metallurgy, 66, 125–55.
Index
accident investigations 117–29
accident proneness 2
accidents, classification of 7–11
active failures 12–14, 107–10
air temperature 37, 38–9
Australia 24
autocratic leadership 70
automated systems 35
Behaviourally Based Safety (BBS) 115–17
Bentley Colliery 4–5, 131–5
berm walls 139
blame 88–9
bolter miners 24, 34, 139
breasting 100–104
bunkers 63–4, 65, 139
cap lamps 139
catastrophic problems 3
coaching 84, 85
coal bunkers 63–4, 65, 139
coal clearance systems 23, 139
coal mining
  maintenance operations 50
  safety codes, rules and procedures 45–6, 46–8, 49–50
  supervision 70–73, 75–6
  training 59–60, 61–2, 62–3, 63–4
  underground 15–17, 18, 20–25, 39, 40, 45–50, 59–63, 70–73, 75–6
  workplace environment 39, 40
competence assessment 56
competency 65–6
computer simulation 32–3
continuous miners 24, 47–8, 139
contract staff 65–6
control shortcomings 122–3
controls identification 122
conveyor belts 48
deadman’s pedal 140
decision errors 9, 10
decisions 11
democratic leadership 70
design aids for designers 27–30
development headings 140
development machines 140
downgrades 140
drill loading machines 15, 140
drivers, field of vision 20–22
drivers’ licences 63
dust 40
Earth Moving Equipment Safety Round Table (EMESRT) 30–32, 33, 35, 42, 43
ergonomics 27–30, 33–6, 110
  in safety 2
  specialists 35–6
Ergonomics Principles Reports 28
ergonomists 35–6
errors of commission 8, 10
errors of omission 8, 10
exceptional violations 10–11
face ends 140
failures 11–14
Fanagalo 58, 69
fire extinguishers 23, 62
firing patterns 139
first-line managers see supervision
flushboarding 141
free-steered vehicles 16–17, 22, 29, 140
hard rock mining
  human error 3–4
  safety codes, rules and procedures 46, 48–9, 49–50
  supervision 71, 74–5
  surface 17–18, 19, 23, 24–5, 39–40, 46, 48–9, 60–61, 62, 63, 75
  training 60–61, 62, 63
  underground 18, 20, 22–3, 48, 49–50, 60, 71, 74–5
  workplace environment 39–40
haul trucks 19, 23, 141
hazard identification 122, 125–7
heading teams 141
health and safety regulators 25–6
hearing protection 38, 39, 41
heat stress 38–9
high reliability organisations 93–4
human error 1–2
  accident investigations 117–22
  designed-in 25–36
  key factors 14
  mining accidents 3–4
  nature of 7–14
  potential 96–9, 117
  predisposing factors 13–14
human factors see ergonomics
inbys 141
incident investigations see accident investigations
input errors 9, 10
insidious problems 3
jargon 52–3
knowledge-based errors 8, 9
laissez-faire leadership 70
latent failures 12–14, 110–11
leadership 70
levels of influence 13
lighting 37, 38, 39–41, 42–3
load–haul–dumps 16–17, 22, 24, 141
locomotives 18
manriding 100–104
manual handling 61–2
methane monitoring 47–8, 70–71
mindfulness 93–4
Minerals Industry Safety and Health Centre 30–31, 35
mining accidents 3–4
mining equipment
  computer simulation 32–3
  design aids for designers 27–30
  ergonomics guidance 27–30
  error potential improvements 25–36
  health and safety prosecutions 26
  manufacturers 26–7
  mining companies 33–5
  retrofit changes 29
  specification of 34
  suppliers 26–7
MIRM (Minerals Industry Risk Management) 104
MIRMgate 31, 35
MISHC (Minerals Industry Safety and Health Centre) 30–31, 35
mistake errors 8
New South Wales 23, 24, 32
noise 37, 38, 39–41, 42
Operability and Maintainability Analysis Technique 31, 43
operational decisions 11
operator errors 1
optimising violations 11
ore passes 141
organisational errors 14
organisational maturity 93–4
outbys 141
output errors 9, 10
pass bys 141
permissioning regimes 98
person-machine interface
  coal mining 15–17, 18, 20–22, 22–3, 23–5
  hard rock mining
    surface 17–18, 19, 23, 24–5
    underground 18, 20, 22–3
  improvement 25–36
personal protective equipment 8, 48
PHEA see Potential Human Error Audits
plain English 53–5
portable roof bolters 24, 141
Potential Human Error Audits 91, 105–15
  active failures 107–10
  latent failures 110–11
PPE (Personal Protective Equipment) 8, 48
quantitative risk assessment 131
Queensland 23
retrofit 29, 35, 42–3
risk assessment 95–104
  accident investigations 122–9
  elements of 95–6
  human error potential 96–8
  quantitative 131
  regulatory requirements 95
  retrospective 86
  safety cases 98–104
  system shortfalls 124–9
risk management 123–4, 128–9
roadheaders 142
robots 20
rock bursts 3–4, 142
routine violations 10–11
rule-based errors 7, 9
safety and human factors 2
safety cases 98–104
safety climate 87–8
safety codes, rules and procedures
  auditing 56–7
  coal mining 45–6, 46–8, 49–50
  competence assessment 56
  experience 56
  functional simplicity 52
  hard rock mining
    surface 46, 48–9
    underground 48, 49–50
  improvements 50–59
  in international mining operations 57–8
  jargon 52–3
  monitoring 56–7
  objectives 51
  piloting 55
  plain language 53–5
  preparation of 51–5
  qualifications 56
  supervision 56
  supplementing 55–7
  tailoring 52–3
  training 56
safety culture
  attitude change 90–91
  attributes 88, 89
  blame 88–9
  definition 86–7
  human behaviour 90
  improvement 91–3
  interdependency 90
  maturity model 92–3
safety harnesses 63–4, 65
Safety in Mines Research Advisory Committee (South Africa) 18, 22, 23
Safety Management Systems 81–6
safety specialists 83–6
sanders 142
self-centred leadership 70
self-rescuers 22, 142
shearers 142
shovels 142
shuttle cars 24, 142
signal design window 41
SIMRAC (Safety in Mines Research Advisory Committee (South Africa)) 18, 22, 23
Situation Awareness (SA) 35–6
situational violations 10–11
skill-based errors 7, 9
slip/lapse errors 8, 10
South Africa
  blame focus 89
  gold mining 3
  hard rock mining 18
  Potential Human Error Audits 111–12, 115
  safety codes, rules and procedures 58
  Safety in Mines Research Advisory Committee (SIMRAC) 18, 22, 23
  underground vehicles 22
spragging 143
strategic decisions 11
supervision
  clarity of roles 76–8
  coal mining 70–73, 75–6
  hard rock mining
    surface 75
    underground 71, 74–5
  improvement 76–80
  monitoring 80
  safety 56
  support 79–80
  training 78–9
support rules 143
system-wide errors 14
tactical decisions 11
temperature 37, 38–9
thermal environment 37, 38–9
traffic lights 46–7
training
  contract staff 65
  improvement 64–7
  review process 66–7
  safety 56
  supervision 78–9
  surface hard rock mining 60–61, 62, 63
  underground coal mining 59–60, 61–2, 62–3, 63–4
  underground hard rock mining 60
TYREgate 31–2
underground locomotives 18, 29
vehicle breakdowns 48–9
vehicle parking 60
vibration 37–8
violations 8, 10–11
workplace environment 37–43
  coal mining 39, 40
  coal preparation 40
  hard rock mining 39–40
  improvements 41–3
Worksafe Australia 33–4